\section{Introduction}
An easy calculation shows that the exponential function $f(x) = e^x$ has the
somewhat `paradoxical' property that $cf(x)$ is a translate of $f(x)$ for
every $c>0$. It is also easy to see that every function of the form
$a+be^{kx}$ has this property.
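\br
For instance, an easy computation verifies this for $f(x)=a+be^{kx}$ with $k \neq 0$: for every $c>0$,
\[
cf(x) = ca + cbe^{kx} = a + be^{k\left(x+\frac{\log c}{k}\right)} + (c-1)a
= f\left(x+\tfrac{\log c}{k}\right) + (c-1)a,
\]
so $cf$ is the translate of $f$ by the vector $\left(-\frac{\log c}{k},\,(c-1)a\right)$. If $k=0$ then $f$ is constant and $cf$ is a vertical translate of $f$.
\er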
This connection is also of interest from the point of view of
functional equations. In \cite{CCR} Cain, Clark and Rose introduced the notion
of vertical rigidity as follows.
\bd
A function $f:\RR \to \RR$ is called \emph{vertically rigid} if $graph(cf)$ is
isometric to $graph (f)$ for all $c \in (0,\infty)$. (Clearly, allowing $c \in \RR \sm
\{0\}$ would yield the same notion.)
\ed
Obviously every function of the form $a+bx$ is also vertically
rigid. D.~Jankovi\'c conjectured (see \cite{CCR}) that the
converse is also true for continuous functions.
\bcon
(\textbf{D.~Jankovi\'c})
A continuous function is vertically rigid if and
only if it is of the form $a+bx$ or $a+be^{kx}$ ($a,b,k \in \RR$).
\econ
The main result of the present paper is the proof of this conjecture.
We will need the following technical
generalisation.
\bd
If $C$ is a subset of $(0, \infty)$ and $\iG$ is a set of isometries of the
plane then we say that $f$ is vertically rigid \emph{for a set $C$ via
elements of $\iG$} if for every $c \in C$ there exists a $\varphi \in \iG$
such that $\varphi(graph(cf)) = graph (f)$.
(If we do not mention $C$ or $\iG$ then $C$ is $(0,\infty)$ and $\iG$
is the set of all isometries.)
\ed
The paper is organised as follows. In Section \ref{s:Jan} we prove
Jankovi\'c's conjecture, even if we only assume that $f$ is a
continuous vertically rigid function for an uncountable set $C$. We
show that it is sufficient to assume that $f$ has at least one point
of continuity, provided that it is vertically rigid for $C$
\emph{via translations}. We also show that it is sufficient to
assume that $f$ is bounded on some nondegenerate interval, provided
that it is vertically rigid \emph{via translations} and
$C=(0,\infty)$. In Section \ref{s:Borel} we show that Jankovi\'c's
conjecture fails for Borel measurable functions. Our example also
answers a question from \cite{CCR} that asks whether every
vertically rigid function is of the form $a+bx$ $(a,b \in \RR)$ or
$a+be^{g}$ for some $a,b \in \RR$ and additive function $g$. In
Section \ref{s:meas} we prove that every Lebesgue (Baire) measurable
function that is vertically rigid \emph{via translations} is of the
form $a+be^{kx}$ \emph{almost everywhere (on a comeagre set)}. The
case of general isometries remains open. We also prove that in many
situations the exceptional set can be removed. In Section
\ref{s:sets} we define the notion of a rigid set, discuss how it is
connected to the notion of a rigid function, and prove an ergodic
theory type result. In Section \ref{s:horiz} we define horizontally
rigid functions, and give a simple characterisation of those
functions that are horizontally rigid \emph{via translations}.
Finally, in Section \ref{s:open} we collect the open questions.
\section{Proof of Jankovi\'c's conjecture}
\lab{s:Jan}
\bt
\lab{t:Jan}
(\textbf{Jankovi\'c's conjecture}) A continuous function is vertically rigid
if and only if it is of the form $a+bx$ or $a+be^{kx}$ $(a,b,k \in \RR)$.
\et
\br
In fact, our proof will show that it is sufficient if $f$ is a continuous
function that is vertically rigid for some uncountable set $C$.
\er
It is of course very easy to see that these functions are vertically rigid and
continuous. The proof of the difficult direction goes through three theorems,
which are interesting in their own right. First we reduce the general case to
translations, then the case of translations to horizontal translations, and
finally we describe the continuous functions that are vertically rigid via
horizontal translations.
\bt
\label{t:r->t} Let $f:\RR \rightarrow \RR$ be a continuous
function vertically rigid for an uncountable set $C \subset
(0,\infty)$. Then $f$ is of the form $a+bx$ for some $a,b \in \RR$
or $f$ is vertically rigid for an uncountable set $D\subset
(0,\infty)$ via translations.
\et
\bp
Let $\varphi_c$ be the isometry belonging to $c \in C$.
First we show that we may assume that these isometries are
orientation preserving. If uncountably many of the $\varphi_c$'s are orientation
preserving then we are done by shrinking $C$.
Otherwise let $C'\subset C$ be uncountable so
that $\varphi_{c'}$ is orientation reversing for every $c' \in C'$. Fix $c_0'
\in C'$, then one can easily check that
$c_0'f$ is vertically rigid via orientation preserving
isometries for $C'' = \left\{ \frac{c'}{c_0'} : c'\in C'\right \}$.
Suppose that we have already proved the theorem in case all isometries are
orientation preserving. Then either $c_0'f$ is of the form $a+bx$, and then so
is $f$, or $c_0'f$ is vertically rigid for an uncountable set $D$ via
translations, but then so is $f$ itself (for the same set $D$, but
possibly different translations).
For a function $f$ let $S_f$ be the set of directions between pairs of
points on the graph of $f$, that is,
\[
S_f = \left\{ \frac{p-q}{|p-q|} : p,q \in graph(f),\ p \neq q \right\}.
\]
Clearly $S_f$ is a symmetric (about the origin) subset of the unit circle $S^1
\su \RR^2$. As $f$ is a function, $(0, \pm 1) \notin S_f$.
Since $f$ is continuous, it is easy to see that $S_f$ actually consists of
two (possibly degenerate) nonempty intervals. (Indeed, if $p = (x,f(x))$ and
$q = (y,f(y))$ then $x<y$ and $x>y$ define two connected sets, open
half planes in $\RR^2$, whose continuous images form $S_f$.)
An orientation preserving isometry $\varphi$ of the plane is either a
translation or a rotation. Denote by $ang(\varphi)$ the angle of $\varphi$ in
case it is a rotation, and set $ang(\varphi) = 0$ if $\varphi$ is a
translation.
Now we define two self-maps of $S^1$. Denote by $\varrho_\al$ the rotation about
the origin by angle $\al$. For $c>0$ let $\psi_c$ be the map obtained by
`multiplying by $c$', that is, let
\[
\psi_c((x,y)) = \frac{(x,cy)}{|(x,cy)|}\ \ \ ((x,y) \in S^1).
\]
It is easy to see that the rigidity of $f$ implies that for every $c \in C$
\beq
\lab{e:inv}
S_f = \varrho_{ang(\varphi_c)} (\psi_c(S_f)).
\eeq
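(Indeed, if $p=(x,f(x))$ and $q=(y,f(y))$ are distinct points of $graph(f)$,
then the corresponding points of $graph(cf)$ determine the direction
\[
\frac{(x-y,\, c(f(x)-f(y)))}{|(x-y,\, c(f(x)-f(y)))|} =
\psi_c\left(\frac{p-q}{|p-q|}\right),
\]
so $S_{cf}=\psi_c(S_f)$; moreover, an orientation preserving isometry rotates
every direction by its angle, so $\varphi_c(graph(cf))=graph(f)$ gives
$S_f = \varrho_{ang(\varphi_c)}(S_{cf})$.)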
If $S_f$ consists of two points, then $f$ is clearly of the form $a+bx$ and we
are done.
Let now $S_f = I \cup -I$, where $I$ is a subinterval of $S^1$ in
the right half plane. We claim that the endpoints of $I$ are among
$(0,\pm 1)$ and $(1,0)$. Suppose this fails, and consider
the function $l(c) = arclength \left( \psi_c(I) \right)$ $(c \in
(0,\infty))$. It is easy to see that $l$ is real analytic,
and we show that it is not constant. Let us first assume that $(0,1)$
and $(0,-1)$ are not endpoints of $I$, then $\lim_{c \to 0} l(c) = 0$,
so $l$ cannot be constant (as $l>0$). Let us now suppose that either
$(0,1)$ or $(0,-1)$ is an endpoint of $I$, then $0 < arclength(I) <
\frac\pi2$
or $\frac\pi2 < arclength(I) < \pi$. In both cases $\lim_{c \to 0}
l(c) = \frac\pi2$ but $l(c) \neq \frac\pi2$, so $l$ is not
constant. As $l$ is analytic, it attains each of its values at most
countably many
times, so there exists a $c \in C$ so that $arclength
\left( \psi_c(I) \right) \neq arclength(I)$, which contradicts
(\ref{e:inv}).
(Actually, it can be shown by a somewhat lengthy calculation using the
derivatives that $l$ attains each value at most twice.)
But this easily yields $ang(\varphi_c) = 0$ or $\pi$ for every $c
\in C$. (Note that $(0,\pm 1) \notin \nolinebreak S_f$ and that
$S_f$ is symmetric.) Just as above, we may assume that
$ang(\varphi_c) = 0$ for all $c \in C$. (Indeed, choose $C',c_0'$
analogously.) But then $f$ is vertically rigid for an uncountable
set via translations, so the proof is complete.
\ep
\bt
\label{t:t->ht}
Let $f:\RR \rightarrow \RR$ be an arbitrary
function that is vertically rigid for a set $C \su (0,\infty)$ via
translations. Then there exists $a\in \RR$ such that $f-a$ is
vertically rigid for the same set via horizontal translations.
\et
\bp
We can clearly assume that $1 \notin C$.
By assumption, for every $c \in C$ there exist $u_{c},v_{c} \in \RR$ such that
\beeq
\label{14}
cf(x)=f(x+u_{c})+v_{c} \ (\forall x \in \RR).
\eeq
Applying this first with $c=c_2$ then with $c=c_1$ we obtain
\[c_1c_2 f(x) =
c_1(f(x+u_{c_{2}})+v_{c_{2}}) =
\]
\beeq
\lab{17}
= c_{1}f(x+u_{c_{2}})+c_{1}v_{c_{2}} =
f(x+u_{c_{1}}+u_{c_{2}})+v_{c_{1}}+c_{1}v_{c_{2}}.
\eeq
Interchanging $c_1$ and $c_2$ we get
\beeq
\label{18}
c_{2}c_{1} f(x) = f(x+u_{c_{2}}+u_{c_{1}})+v_{c_{2}}+c_{2}v_{c_{1}}.
\eeq
Comparing (\ref{17}) and (\ref{18}) yields
$
v_{c_{1}}+c_{1}v_{c_{2}}=v_{c_{2}}+c_{2}v_{c_{1}},
$
so
\[
\frac{v_{c_{1}}}{c_{1}-1}=\frac{v_{c_{2}}}{c_2-1}\ \ \
\textrm{ for all } c_{1},c_{2} \in C,
\]
consequently
$
a:=\frac{v_{c}}{c-1}
$
takes the same value for all $c \in C$.
Substituting this back into (\ref{14}) gives
$cf(x) = f(x+u_{c})+a(c-1)$,
so $c(f(x)-a) = f(x+u_{c})-a$ for all $c\in C$,
hence $f-a$ is vertically rigid for $C$ via horizontal translations.
\ep
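\br
As a sanity check (not needed in the sequel), if $f(x)=a+be^{kx}$ $(bk \neq 0)$
then
\[
cf(x) = f\left(x+\tfrac{\log c}{k}\right) + (c-1)a,
\]
so $u_c=\frac{\log c}{k}$ and $v_c=(c-1)a$, and indeed $\frac{v_c}{c-1}=a$ for
every $c \neq 1$: the proof above recovers exactly the additive constant $a$,
and $f-a=be^{kx}$ is vertically rigid via horizontal translations.
\er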
\bt
\label{t:htcont}
Let $f:\RR \rightarrow \RR$ be a continuous
vertically rigid function for an uncountable set $C\subset
(0,\infty)$ via horizontal translations. Then $f$ is of the form
$be^{kx}$ $(b\in \RR, k \in \RR\setminus\{0\})$.
\et
Before proving this theorem we need a definition and a lemma.
\bd
For a function $f:\RR\to\RR$ let $T_{f,C} \su \RR$ be the
additive group generated by the set $T' = \{ t\in\RR : \exists c \in C \
\forall x \in \RR \ f(x+t)=cf(x) \}$. (We will usually simply write $T$
for $T_{f,C}$.)
\ed
\bl
\lab{l:add}
Let $f:\RR \rightarrow \RR$ be a vertically rigid function for an uncountable
set $C\subset (0,\infty)$ via horizontal translations such that $f(0)=1$.
Then $T$ is dense and
\[
f(x+t) = f(x) f(t) \ \ \forall x \in \RR \ \forall t \in T.
\]
Moreover,
$f(t)>0$ for every $t \in T$.
\el
\bp
By assumption, for every $c \in C$ there exists $t_c \in \RR$
such that $cf(x) = f(x+t_c)$ for every $x \in \RR$. Then $t_c\in
T$ for every $c \in C$. Since $f$ is not
identically zero, $t_c \neq t_{c'}$ whenever $c,c' \in C$
are distinct. Hence $\{ t_c : c \in C \}$ is uncountable, so $T$
is uncountable. As every additive subgroup of $\RR$ is either
discrete (hence countable) or dense, $T$ is dense.
Every $t\in T$ can be written as $t=\sum _{i=1} ^{m} n_{i}t_{i}$
($t_i\in T'$, $n_i\in \ZZ$, $i=1,\dots ,m$) where
$f(x+t_{i})=c_{i}f(x)$ $(x \in \RR, \ i=1,\dots ,m)$.
From these we easily get \beeq \lab{2} f(x+t)=c_{t}f(x),
\ \ \textrm{ where } c_t = \prod _{i=1} ^{m} c_{i}^{n_{i}}, \ x \in
\RR, \ t \in T.
\eeq
Note that $c_{t}>0$
(and also that it is not necessarily a member of $C$). It suffices
to show that $c_t = f(t)$ for every $t \in T$, but this follows if we
substitute $x=0$ into (\ref{2}).
\ep
\bp (Thm. \ref{t:htcont}) If $f$ is identically zero then we are
done, so let us assume that this is not the case. The class of
continuous vertically rigid functions for some uncountable set via
horizontal translations, as well as the class of functions of the
form $be^{kx}$ ($b\in \RR, k \in \RR\setminus\{0\}$) are both closed
under horizontal translations and under multiplication by nonzero
constants, so we may assume that $f(0)=1$. Then the previous lemma
yields that $f(t_1+t_2) = f(t_1)f(t_2)$ $(t_1,t_2 \in T)$, and also
that $f|_T>0$. Then $g(t) = \log f(t) $ is defined for every $t \in
T$, and $g$ is clearly additive on $T$. But it is well-known (and an
easy calculation) that an additive function on a dense subgroup is
either of the form $kx$, or unbounded both from above and below on
every nondegenerate interval. The second alternative cannot hold,
since $f$ is continuous, so $f|_T$ is of the form $e^{kx}$, so by
continuity $f$ is of this form everywhere. Since $C$ contains
elements different from $1$ and the constant function $1$ is not
vertically rigid for such a $C$ via horizontal translations, $f$ is
not constant, hence $k\neq 0$. \ep
Putting together the three above theorems completes the proof of Jankovi\'c's
conjecture.
We remark here that we have actually also proved the following,
which applies e.g.~to Baire class 1 functions.
\bt \label{c2} Let $f:\RR \rightarrow \RR$ be a vertically rigid
function for an uncountable set $C\subset (0,\infty)$ via
translations. If $f$ has a point of continuity then it is of the
form $a+be^{kx}$ ($a,b,k \in \RR$). If $f$ is vertically rigid via
translations (i.e.~$C=(0,\infty)$) and bounded on a nondegenerate
interval then it is of the form $a+be^{kx}$ ($a,b,k \in \RR$), too.
\et
\bp
Following the proof of the last theorem we may assume in both cases
that $f(0)=1$, the translations are horizontal, and $f|_T$ is of the
form $e^{kx}$ $(k \in \RR)$.
In the first case, let $x_{0}$ be a point of continuity of $f$, then
clearly $f(x_0)=e^{kx_0}$, since $T$ is dense. Let now $x\in \RR$ be
arbitrary, and $t_{n}\in T$ $(n\in \NN)$ be such that $\lim _{n \to
\infty} t_{n} =x_{0}-x$. Using Lemma \ref{l:add} we obtain
\[
e^{kx_0} = f(x_0) = \lim _{n \to \infty} f(x+t_{n}) = \lim _{n \to
\infty} f(x)f(t_{n}) = f(x) \lim _{n \to \infty} f(t_{n}) =
\]
\[
= f(x) \lim _{n \to \infty} e^{kt_n} = f(x) e^{k(x_0-x)} = f(x) e^{kx_0}
/ e^{kx},
\]
from which $f(x) = e^{kx}$ follows.
In the second case, for every $c>0$ there is a $t_{c}\in
T=T_{f,(0,\infty)}$ such that $cf(x)=f(x+t_{c})=f(x)f(t_{c})$. By
substituting $x=0$ into the equation we get $c=f(t_{c})=e^{kt_{c}}$
for every $c>0$. (In particular, $k\neq 0$.) So $t_{c}=\frac{\log
c}{k}$. If $c$ ranges over $(0,\infty)$ then $t_{c}$ ranges over
$\RR$, so we get $T=\RR$. Hence $f|_T=f$ is of the form $e^{kx}$,
and we are done. \ep
\bex There exists a function $f:\RR \rightarrow \RR$ that is
vertically rigid for an uncountable set $C \subset (0,\infty)$ via
horizontal translations, bounded on every bounded interval, and is
\emph{not} of the form $a+be^{kx}$ ($a,b,k \in \RR$). \eex
\bp
Let $P\su\RR$ be an uncountable set that is linearly independent
over $\QQ$, see e.g. \cite[19.2]{Ke} or \cite{vN}.
Let $\widehat{P}$ be the additive subgroup generated by $P$.
Let
\[
f(x) =
\begin{cases}
e^x & \textrm{ if } x\in \widehat{P}
\\ 0 & \textrm{ if } x\in \RR \setminus \widehat{P},
\end{cases}
\]
then $f$ is clearly bounded on every bounded interval.
It is easy to see that $\frac{p}2 \in \RR \setminus \widehat{P}$ for
every $p \in P$, so $\widehat{P} \neq \RR$, hence $f$ is not
continuous, so it is not of the form $a+be^{kx}$ ($a,b,k \in \RR$).
For every $p\in P$ and $x \in \RR$ we have
$x\in \widehat{P} \iff x+p\in \widehat{P}$, which easily implies
$f(x+p)=e^{p}f(x)$. Hence $f$ is vertically rigid for the
uncountable set $C=\{e^p: \, p\in P\}$ via horizontal translations.
\ep
Jankovi\'c's conjecture has the following curious corollary.
\bcor
There are continuous functions $f$ and $g$ with isometric graphs so that $f$
is vertically rigid but $g$ is not.
\ecor
\bp
If we rotate the graph of $f(x) = e^x$ clockwise by $\frac\pi4$ then we obtain
the graph of a continuous function. By Theorem \ref{t:Jan} it is not
vertically rigid.
\ep
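\br
To see that the rotated set is indeed a graph, note that the slopes of the
chords of $graph(e^x)$ lie in $(0,\infty)$, so after a clockwise rotation by
$\frac\pi4$ every chord has slope in $(-1,1)$. Hence the rotated set is the
graph of a $1$-Lipschitz, in particular continuous, function $g$, defined on
all of $\RR$ since the projection of the rotated graph to the $x$-axis is
connected and unbounded in both directions. The slopes of $g$ are bounded but
$g$ is not affine, so $g$ is neither of the form $a+bx$ nor of the form
$a+be^{kx}$ $(bk \neq 0)$.
\er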
\section{A Borel measurable counterexample}
\lab{s:Borel}
In this section we show that Jankovi\'c's conjecture fails for Borel
measurable functions. Our example also answers Question 1 in \cite{CCR} of
Cain, Clark and Rose, which asks whether every vertically rigid function is of
the form $a+bx$ $(a,b \in \RR)$ or $a+be^{g}$ for some $a,b \in \RR$ and
additive function $g$. By Thm. 2 of \cite{CCR} $a+be^{g}$ is vertically rigid
if and only if $b=0$ or $g$ is surjective.
\bt
\lab{t:count}
There exists a Borel measurable vertically rigid function $f: \RR \to
[0,\infty)$ (via horizontal translations) that is not of the form $a+bx$ $(a,b
\in \RR)$ or $a+be^{g}$ for some $a,b \in \RR$ and additive function $g$.
\et
For definitions and basic results on Baire measurable sets (= sets
with the property of Baire), meagre
(= first category) and comeagre (= residual) sets consult
e.g.~\cite{Ox} or \cite{Ke}. For Polish spaces and Borel
isomorphisms see e.g.~\cite{Ke}.
\bp
Let $P$ be a Cantor set (nonempty nowhere dense compact set
with no isolated points) that is linearly independent over $\QQ$, see
e.g.~\cite[19.2]{Ke}. (One can also derive the existence of such a set
from \cite{vN} using the well-known fact that every
uncountable Borel or analytic set contains a Cantor set.)
It is easy to see that for all $n_1, \dots, n_k \in \ZZ$
the set $ P_{n_1, \dots, n_k} = \{ n_1p_1 + \dots + n_kp_k : p_1,
\dots ,p_k \in P\}$ is compact, hence the group $\widehat{P}$
generated by $P$ (that is, the union of the $P_{n_1, \dots,
n_k}$'s) is a Borel, actually $F_\si$ set. As $P$ is linearly
independent, each element of $\widehat{P}$ can be uniquely written
in the form $n_1p_1 + \dots + n_kp_k$.
Since $P$ and $(0,\infty)$ are uncountable Polish spaces, we can
choose a Borel
isomorphism $g:P\rightarrow (0,\infty)$. Let $f:\RR \rightarrow \RR$
be the following function:
\[
f(x) =
\begin{cases}
0 & \textrm{ if } x\in \RR \setminus \widehat{P} \\
\prod _{i=1}^k g(p_i)^{n_i} & \textrm{ if } x = \sum_{i=1}
^k n_ip_i \in \widehat{P}, \ n_i \in \ZZ, \ p_i \in P,
\ i=1,\dots,k.
\end{cases}
\]
This function is Borel, as it is Borel on the countably many Borel sets
$P_{n_1, \dots, n_k}$, and zero on the rest. However, $f$ is not continuous,
as it is unbounded on the compact set $P$. Therefore $f$ is not of the form
$a+bx$. Suppose now that $f$ is of the form $a+be^{g}$ for some $a,b \in \RR$
and additive function $g$. Clearly $b \neq 0$, since $f$ is not constant,
therefore $\frac{f-a}{b} = e^g$ is Borel measurable, and then so is $g$ by
taking logarithm. But it is well-known that every Borel (or even Lebesgue)
measurable additive function is of the form $kx$ $(k \in \RR)$, hence $f$ is
continuous, a contradiction.
What remains to show is that $f$ is vertically rigid via
horizontal translations. For every $c>0$ there exists a $p \in P$
such that $g(p)=c$. Now we check that $cf(x) = f(x+p)$ for all $x
\in \RR$. Clearly $x \in \widehat{P}$ if and only if $x+p \in
\widehat{P}$. Therefore $cf(x) = f(x+p) = 0$ if $x \notin
\widehat{P}$. Let now $x = n_1p_1 + \dots + n_kp_k \in
\widehat{P}$, and assume without loss of generality that $p=p_1$
($n_1 = 0$ is also allowed). Then $cf(x) = g(p)f(x) = g(p)
g(p)^{n_{1}} g(p_2)^{n_2} \cdots g(p_k)^{n_k} = g(p)^{n_{1}+1}
g(p_2)^{n_2} \cdots g(p_k)^{n_k} = f( (n_{1}+1)p + n_2p_2+\dots +
n_kp_k ) = f(x+p)$, which finishes the proof.
\ep
\section{Lebesgue and Baire measurable functions}
\lab{s:meas}
It is easy to see that the example in the previous section is zero almost
everywhere (on a comeagre set). Indeed, it can be shown that
every $P_{ n_1, \dots, n_k }$ has uncountably many pairwise disjoint
translates.
Therefore it is still possible that the complete analogue of
Jankovi\'c's conjecture holds: every vertically rigid Lebesgue
(Baire) measurable function is of the form $a+bx$ or $a+be^{kx}$
\emph{almost everywhere (on a comeagre set)}. In this section we
prove this in case of translations. The general case remains open,
see Section \nolinebreak \ref{s:open}. We also prove that in many
situations the exceptional set can be removed.
\bt
\label{t:meas}
Let $f:\RR \rightarrow \RR$ be a vertically rigid function
for an uncountable set $C\subset (0,\infty)$ \emph{via
translations}. If $f$ is Lebesgue (Baire) measurable then it is of the form
$a+be^{kx}$ $(a,b,k \in \RR)$ almost everywhere (on a comeagre set).
\et
\bp By Theorem \ref{t:t->ht} we can assume that $f$ is vertically
rigid for $C$ via horizontal translations. As in the proof of
Theorem \ref{t:htcont} we can also assume that $f(0) = 1$. Then
Lemma \ref{l:add} implies that \beq \lab{e:1} f(x+t) = f(x) f(t) \ \
\forall x \in \RR \ \forall t \in T \eeq and $f(t)>0$ for every $t
\in T$.
First we show that the sign of $f$ is constant almost everywhere
(on a comeagre set). It is easy to see from (\ref{e:1}) that the
sets $\{ f>0 \}$, $\{ f=0 \}$, and $\{ f<0 \}$ are all Lebesgue
(Baire) measurable sets periodic modulo every $t \in T$. It is a
well-known and easy consequence of the Lebesgue density theorem
(the fact that every set with the Baire property is open modulo
meagre) that if a measurable set $H$ has a dense set of periods
then either $H$ or $\RR \sm H$ is of measure zero (meagre). But
the above three sets cover $\RR$, hence at least one of them is of
positive measure (nonmeagre), and then that one is of full measure
(comeagre). If $f=0$ almost everywhere (on a comeagre set) then we
are done, otherwise we may assume that $f>0$ almost everywhere (on
a comeagre set). (Indeed, $-f$ is also rigid via horizontal
translations, and then we can apply a horizontal translation and a
positive multiplication to achieve $f(0) = 1$.)
Set $D = \{ f>0 \}$ and define the measurable function $g = \log f$
on $D$. Recall that $D+t = D$ ($\forall t \in T$) and note that $T
\su D$. Clearly
\[
g(x+t) = g(x) + g(t) \ \ \forall x \in D \ \forall t \in T,
\]
so $g|_T$ is additive. Now we show that $g|_T$ is of the form $kx$.
Let us suppose that this is not the case. As we have mentioned
above, if an additive function is not of the form $kx$ then it is
unbounded on every interval from above (and also below). For every
Lebesgue (Baire) measurable function there is a measurable set of
positive measure (nonmeagre) on which the function is bounded. So
let $M \su D$ be a measurable set of positive measure (nonmeagre)
such that $\left| g|_M \right| \le K$ for some $K \in \RR$. By the
Lebesgue density theorem (the fact that every Baire measurable set
is open modulo meagre) there exists $\e > 0$ so that $( M+s ) \cap M
\neq \eset$ for every $s \in (-\e,\e)$. Choose $t_0\in T$ in
$(-\e,\e)$ so that $g(t_0) > 2K$. Fix an arbitrary $m_0 \in M\cap
(M-t_{0})$, then $g(m_0+t_0) = g(m_0) + g(t_0) > g(m_0) + 2K$, which
is absurd, since $m_0+t_0, m_0 \in M$ and $\left| g|_M \right| \le
K$.
Now define $h(x) = g(x) - kx$ ($x \in D$). This is a measurable
function that is periodic modulo every $t \in T$. Indeed,
\[
h(x+t) = g(x+t) - k(x+t) = g(x) -kx + g(t) - kt = h(x) + 0 = h(x).
\]
It is a well-known consequence of the Lebesgue density
theorem (the fact that every Baire measurable set is open modulo meagre)
that if the periods of a measurable function form a dense set then
the function is constant almost everywhere (on a comeagre set). Hence $g(x) =
kx + c$ almost everywhere (on a comeagre set), so $f(x) = e^c e^{kx}$ almost
everywhere (on a comeagre set), so we are done.
\ep
Our next theorem shows that the measure zero (meagre) set can be
removed, unless $f$ is constant almost everywhere (on a comeagre
set). Theorem \nolinebreak \ref{t:count} provides an almost
everywhere (on a comeagre set) constant but nonconstant function
that is vertically rigid via horizontal translations.
\bt \lab{xy} Let $f:\RR \to\RR$ be a vertically rigid function that
is of the form $a+bx$ ($b \neq 0$) or $a+be^{kx}$ ($bk \neq 0$)
almost everywhere (on a comeagre set). Then $f$ is of this form
everywhere. \et
Let us denote the 1-dimensional Hausdorff measure by $\iH^1$. For the
definition and properties see \cite{Fa} or \cite{Ma}. First we
prove the following lemma.
\bl \label{lem} Let $f, g : \RR \to \RR$ be arbitrary functions, and
let $\varphi$ be an isometry such that $\varphi( graph( f )) =
graph( g )$. Let $f', g' : \RR\rightarrow \RR$ be continuous
functions such that $f' = f$ almost everywhere (on a comeagre set)
and $g' = g$ almost everywhere (on a comeagre set). Let us also
assume that $graph(f')$, $\varphi( graph(f'))$, $graph(g')$ and
$\varphi^{-1}( graph(g'))$ are coverable by the graphs of countably
many Lipschitz (continuity suffices for the category case)
functions. Then $\varphi( graph(f')) = graph(g')$. \el
\bp
By symmetry of $f'$ and $g'$ (with $\varphi^{-1}$), it suffices to
show that $graph(g') \su \varphi( graph( f' ) )$. Since the latter set
is closed, it also suffices to show that $\varphi(
graph( f' ))$ covers a dense subset of $graph(g')$. We will actually
show that $\varphi( graph( f' ))$ covers $\iH^1$ a.e.~(relatively
comeagre many) points of $graph(g')$, which will finish the proof.
If an element of $graph(g')$ fails to be covered by $\varphi(graph(
f' ))$ then it is either in $graph(g') \sm graph(g)$ or in $\varphi(
graph(f) \sm graph(f'))\cap graph(g')$. The first set is clearly of
$\iH^1$ measure zero (relatively meagre in $graph(g')$), so it
suffices to show that this is also true for the second.
Equivalently, we need that $graph(f) \sm graph(f')$ only covers a
$\iH^1$ measure zero (relatively meagre) subset of $\varphi^{-1}(
graph(g') )$. Suppose that $\varphi^{-1}( graph(g') ) \su
\bigcup_{n=1}^\infty graph(h_n)$, where the $h_n$'s are Lipschitz
(continuous) functions. As $graph(h_n) \cap (graph(f) \sm
graph(f'))$ is clearly of $\iH^1$ measure zero for every $n$, we are
done in the measure case.
Let us now write $\{x \in \RR: f'(x) \neq f(x) \} =
\bigcup_{m=1}^\infty N_m$, where each $N_m$ is nowhere dense. It is
enough to show that each $graph(f|_{N_m})$ only covers a relatively
nowhere dense subset of $\varphi^{-1}( graph(g'))$. Fix an $m$, and
suppose that $graph(f|_{N_m})$ is dense in an open subarc $U \su
\varphi^{-1}( graph(g'))$. By the Baire Category Theorem there
exists a relatively open subarc $V \su U$ that is covered by one of
the $graph(h_n)$'s. But this is impossible, as the arc $V$ is in
$graph(h_n)$, and the set $N_m \su \RR$ is nowhere dense, so even
$N_m \times \RR$ covers at most a relatively nowhere dense subset of
$V$, hence $graph(f|_{N_m})$ cannot be dense in $V$. \ep
\bp (Thm. \ref{xy}) Using the notation of the above lemma, let first
$f$ be a vertically rigid function such that $f=f'$ almost
everywhere (on a comeagre set), where $f'$ is of the form
$a+be^{kx}$ ($bk \neq 0$).
The above lemma implies that $f'$ is also vertically rigid with
the same isometries $\varphi_c$. By considering the unique asymptote
and the limit at $\pm \infty$ of $f'$ we obtain that every
$\varphi_c$ is a translation. By Theorem \ref{t:t->ht} we may assume
that every $\varphi_c$ is actually horizontal, hence $f'$ is of the form
$be^{kx}$. Hence $cf'(x) = f'\left(x+\frac{\log(c)}{k}\right)$ for
every $x \in \RR$, $c>0$ and the same holds for $f$. Assume now that
there is an $x_0$ so that $f(x_0) \neq f'(x_0)$, then $cf(x_0) \neq
cf'(x_0)$ for every $c>0$, therefore
$f\left(x_0+\frac{\log(c)}{k}\right) \neq
f'\left(x_0+\frac{\log(c)}{k}\right)$ for every $c>0$, which is a
contradiction as $f = f'$ almost everywhere (on a comeagre set).
Assume now that $f'$ is of the form $a+bx$ ($b \neq 0$).
First we show that $f'$
is vertically rigid by the same isometries as $f$. For every $c>0$
set $g = cf$, $g' = cf'$, and let $\varphi_c$ be the isometry
mapping $graph(f)$ onto $graph(g)$. As $graph(f) \cap graph(f')$
contains at least two points and $\varphi_c(graph(f) \cap
graph(f'))$ is the graph of a function we obtain that the line
$\varphi_c(graph(f'))$ is not vertical, and similarly for
$\varphi_c^{-1}( graph(g'))$. Therefore they are coverable by the
graphs of countably many, actually a single, Lipschitz (continuous)
function, hence the previous lemma applies. Hence $f'$ is vertically
rigid by the same isometries as $f$.
Similarly to Theorem \ref {t:r->t} we can assume that $f$ is
vertically rigid via orientation preserving isometries for a set $C$
of positive outer measure (nonmeagre). So $\varphi_c$ is a rotation
or translation for every $c\in C$, and by splitting $C$ into two
parts and keeping one with positive outer measure (nonmeagre), we
can assume that $A=\{ang(\varphi _{c}): c\in C\}$ is a subset of the
left or the right half of the unit circle. We could calculate
$ang(\varphi _{c})$ explicitly, but we only need that it is a
nonconstant real analytic function. From this it is easy to see that
the set $A$ is of positive outer measure (nonmeagre). Assume now
that there is an $x_0$ so that $f(x_0) \neq f'(x_0)$. We prove that
this contradicts the fact that $\varphi_c( graph(f) )$ is the graph
of a function for every $c\in C$. For this it suffices to show that
$S_f$ (see Theorem \ref {t:r->t}) is of full measure (comeagre). But
this clearly follows simply by looking at the pairs $(p_0,q)$ and
$(q,p_0)$ where $p_0 = (x_{0}, f(x_0))$ and $q$ ranges over
$graph(f) \cap graph(f')$. \ep
\section{Rigid sets}
\lab{s:sets}
The starting point is the proof of Theorem \ref{t:r->t}. So far we
are only able to prove this result for continuous functions, and
consequently we can only handle translations in the
Borel/Lebesgue/Baire measurable case. But generalisations of the
ideas concerning the sets $S_f$ could tackle this difficulty. For a
Borel function $f$ the set $S_f$ is analytic (see e.g.~\cite{Ke}),
and every analytic set has the Baire property, so the result of this
section can be considered as the first step towards handling Borel
functions with general isometries.
See Equation (\ref{e:inv})
for the following notations.
\bd
We call a symmetric (about the origin) set $H \subset S^1$
\emph{rigid} for a set $C \su (0,\infty)$ if for every $c\in C$
there is an $\al$ such that
\beq
\lab{e:3}
H = \varrho_\al
(\psi_c(H)).
\eeq
\ed
\bl
Let $U$ be a regular open set (i.e. $int(cl(U))=U$) that is
rigid for an uncountable set $C$. Then $U=\emptyset$, or $U=S^1$, or
every connected component of $U$ is an interval whose endpoints
are among $(0,\pm 1)$ and $(\pm 1,0)$.
\el
\bp
Let $A$ be the set of arclengths of the connected components of $U$,
then $A$ is countable. Let $I$ be a connected
component of $U$ showing that $U$ is not of the desired form, then
$0 < arclength(I) < \pi$ since $U$ is symmetric and regular. As in the
proof of Theorem \ref{t:r->t} let us prove that the real analytic
function $l(c) = arclength( \psi_c(I) )$ $(c \in (0,\infty))$ is not
constant. If $I$ is in the left or right half of $S^1$ then we already
showed this there, so we may assume that $(0,1)$ or $(0,-1)$ is in
$I$. Since $\lim_{c \to \infty} \psi_c(x) \in \{ (0,\pm 1) , (\pm
1,0) \}$ for every $x \in S^1$, we obtain that $\lim_{c \to \infty}
l(c) \in \ZZ \frac\pi2$. Hence we are done using $0 <
arclength(I) < \pi$ unless $arclength(I) = \frac\pi2$. But if
$arclength(I) = \frac\pi2$ then $\lim_{c \to \infty} l(c) = 0$
since $(0,1)$ or $(0,-1)$ is in $I$, and therefore $l$ cannot be constant.
Hence $l$ attains each of its values at most countably many times,
so there is a $c \in C$ such that $arclength( \psi_c(I) ) \notin A$,
contradicting (\ref{e:3}).
\ep
One can also show, using an argument similar to the above one (by
considering the possible
distances of pairs in $H$), that the rigid sets (for $C = (0,\infty)$)
of cardinality
smaller than the continuum are the following: the empty set, the
symmetric sets of two elements and the set $\{ (0,\pm 1) , (\pm
1,0) \}$.
The next statement has a somewhat ergodic theoretic flavour.
\bt
Let $H$ be a Baire measurable set that is
rigid for an uncountable set $C$. Then in each of the four
quarters of $S^1$ determined by $(0,\pm 1)$ and $(\pm 1,0)$
either $H$ or $S^1 \sm H$ is meagre.
\et
\bp
$H$ can be written as $H = U \Delta F$ in a unique way, where $U$ is regular
open, $F$ is meagre and $\Delta$ stands for symmetric difference,
see \cite[4.6]{Ox}. Then it is easy to see by the
uniqueness of $U$ that $U$ is rigid for $C$, so we are done by the
previous lemma.
\ep
\section{Horizontally rigid functions}
\lab{s:horiz}
In this section we characterise the functions that are horizontally rigid via
translations. This answers Question 3 of \cite{CCR} in the case of
translations.
\bd
A function $f:\RR \rightarrow \RR$ is \emph{horizontally rigid} if
$graph(f(cx))$ is isometric to $graph(f(x))$ for all $c \in (0,\infty)$.
\ed
\bt
\lab{t:horiz}
A function $f:\RR \rightarrow \RR$ is
horizontally rigid via translations if and only if there exists $r
\in \RR$ such that $f$ is constant on $(-\infty, r) $ and
$(r,\infty)$.
\et
\bp
These functions are trivially horizontally rigid via translations. As the
proof of the other direction resembles that of
Theorem \ref{t:t->ht}, we only sketch it.
For every $c>0$ there exist $u_{c},v_{c} \in \RR$ such that
$f(cx)=f(x+u_{c})+v_{c} \linebreak \ \ (x \in \RR)$.
We may assume $u_{1}=v_{1}=0$. If $c \in (0,\infty)
\setminus \{1\}$
then there is an $x_{c} \in \RR$ such that $cx_{c}=x_{c}+u_{c}$, and
substituting this back to the above equation we get $v_{c}=0$. Hence
$f(cx)=f(x+u_{c}) \ \ (x \in \RR)$
for every $c \in (0,\infty)$.
First we show that if $f$ has a period $p>0$ then $f$ is constant.
Using the last equation twice we obtain
\[
f(cx)=f(x+u_{c})=f(x+u_{c}+p)
=f\left((x+p)+u_{c}\right) = f\left(c(x+p)\right) = f(cx+cp).
\]
If $x$ ranges over $\RR$ then so does $cx$, hence $cp$ is also a period. If
$c$ ranges over $(0,\infty)$, then so does $cp$, hence every positive
number is a period, so $f$ is constant.
Using $f(cx)=f(x+u_{c})$ again twice we obtain
\[
f(c_{1}(c_{2}x)) = f(c_{2}x+u_{c_{1}}) =
f\left(c_{2}\left(x+\frac{u_{c_{1}}}{c_{2}}\right)\right) =
f\left(x+\frac{u_{c_{1}}}{c_{2}}+u_{c_{2}}\right).
\]
Interchanging $c_1$ and $c_2$ and comparing the two equations we get
\[
f\left(x+\frac{u_{c_{1}}}{c_{2}}+u_{c_{2}}\right) =
f\left(x+\frac{u_{c_{2}}}{c_{1}}+u_{c_{1}}\right),
\]
so
$\pm
\left[
\left(\frac{u_{c_{1}}}{c_{2}}+u_{c_{2}}\right) -
\left(\frac{u_{c_{2}}}{c_{1}}+u_{c_{1}}\right)
\right]$
is a period, and hence it is zero. Therefore
\[
\frac{u_{c_{1}}}{1 - \frac{1}{c_{1}}} = \frac{u_{c_{2}}}{1- \frac{1}{c_{2}}}
\textrm{ for every } c_{1},c_{2} \in (0,\infty) \setminus \{1\}.
\]
By the last display, the quantity $\frac{u_{c}}{1 - \frac{1}{c}}$ does not depend on
$c \in (0,\infty) \setminus \{1\}$; denote this common value by $r$. Then
$u_{c} = r\left(1-\frac{1}{c}\right)$
for every $c \in (0,\infty)$ (for $c=1$ this holds since $u_{1}=0$).
Substituting this back to $f(cx)=f(x+u_{c})$
gives
$f(cx) = f \left( x+r\left( 1-\frac{1}{c} \right) \right)$.
Writing $\frac{x}{c}$ in place of $x$ yields
$f(x) = f \left( \frac{1}{c}(x-r) + r \right)$
for every $c \in (0,\infty)$.
Let $x_{0}<r$ be fixed and let $c$ range over
$(0,\infty)$, then $\frac{1}{c}(x_{0}-r)+r$ ranges over $(-\infty,r)$, so
$f(x)$ is constant for $x<r$. Similarly, $f(x)$ is also constant for $x>r$.
\ep
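As an illustration of Theorem \ref{t:horiz} (not part of the paper), one can check numerically that a step function, together with the translation amounts $u_c=r\left(1-\frac1c\right)$ found in the proof, satisfies $f(cx)=f(x+u_c)$; the values of $r$ and of $f$ below are arbitrary choices.

```python
r = 1.5  # the splitting point from Theorem t:horiz; arbitrary choice

def f(x):
    # a function constant on (-infinity, r) and on (r, infinity)
    return -2.0 if x < r else 3.0

def u(c):
    # translation amount derived in the proof: u_c = r * (1 - 1/c)
    return r * (1.0 - 1.0 / c)

# f(c*x) = f(x + u_c) holds, since c*x < r is equivalent to x + u_c < r
for c in [0.1, 0.5, 1.0, 2.0, 7.3]:
    for k in range(-50, 51):
        x = k / 10.0
        assert f(c * x) == f(x + u(c))
```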
\section{Open questions}
\lab{s:open}
The most important open question is the following. By Theorem \ref{t:meas} the
difficulty is to handle rotations.
\bq
\lab{q:meas}
Is every vertically rigid Lebesgue (Baire)
measurable function of the form $a+bx$ or $a+be^{kx}$ $(a,b,k \in
\RR)$ almost everywhere (on a comeagre set)? Or is this conclusion
true at least for Borel measurable functions, or Baire class~1
functions, or functions with at least one point of continuity?
\eq
\br
It would be more natural to replace vertical rigidity by
\emph{almost} vertical rigidity. However, it is not clear how this
should be defined, as a set can have a measure zero projection on
one line and positive measure projection on another.
\er
\bq Let $f$ be a vertically rigid function and $c>0$ such that there
exists an isometry between $graph(f)$ and $graph(cf)$ that is not a
translation (or also not a reflection). Is then $f$ of the form
$a+bx$? Or is this true for Borel, Lebesgue, or Baire measurable
functions? And if we assume the same for \emph{every} isometry
between $graph(f)$ and $graph(cf)$? \eq
Perhaps the following question can be answered in the negative
by an easy transfinite recursion. A positive answer to the analytic (see
e.g.~\cite{Ke} for the definition of analytic sets) version would answer
Question \ref{q:meas} for Borel functions.
\bq
\lab{q:rig}
Let $I \su S^1$ be the open subarc of arclength $\frac\pi2$
connecting $(0,1)$ and $(1,0)$. For a rigid set $H$ can $H \cap I$
be anything else but $\eset$, a point, $I$ minus a point, or $I$?
How about analytic, Borel, or Lebesgue (Baire) measurable rigid
sets?
\eq
\bq
What is the role of the uncountable set $C \su (0,\infty)$ in the
results of this paper? When is it sufficient to assume that it is infinite,
dense, sufficiently large finite, or contains a $c \neq 1$?
\eq
\br Let $c_0 \neq 1$. It is easy to see that there exists a
continuous $f$ satisfying $c_0f(x) = f(x+1)$ for every $x$. Indeed,
if we define $f$ to be an arbitrary continuous function on $[0,1]$
satisfying $c_0f(0) = f(1)$ then $f$ extends to $\RR$ in a unique
manner. Then $f$ is vertically rigid via horizontal translations for
the set $C = \{c_0^n : n\in\ZZ\}$. Hence it is not sufficient to
assume for Jankovi\'c's conjecture that $C$ is infinite.
There also exists a continuous nonlinear function $f$ whose graph
consists of two half lines starting from the origin so that
$graph(2f)$ is a rotated copy of $graph(f)$. \er
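The extension described in the remark is easy to carry out concretely. A minimal numerical sketch (the value $c_0=3$ and the affine profile $g$ on $[0,1]$ are arbitrary choices, not from the paper):

```python
import math

c0 = 3.0  # any c0 != 1

def g(t):
    # arbitrary continuous profile on [0, 1] with g(1) = c0 * g(0)
    return 1.0 + (c0 - 1.0) * t

def f(x):
    # the unique continuous extension: f(x) = c0**floor(x) * g(x - floor(x))
    n = math.floor(x)
    return (c0 ** n) * g(x - n)

# the functional equation c0 * f(x) = f(x + 1), hence c0**n * f(x) = f(x + n),
# i.e. f is vertically rigid via horizontal translations for C = {c0**n : n in Z}
for k in range(-30, 31):
    x = k / 7.0
    assert math.isclose(c0 * f(x), f(x + 1.0))
    assert math.isclose(c0 ** 2 * f(x), f(x + 2.0))
```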
\bq
Is every horizontally rigid function of the form $a+bx$ or of the form
described in Theorem \ref{t:horiz}? Or is this true if we assume Borel,
Lebesgue, or Baire measurability? Is every continuous horizontally rigid
function of the form $a+bx$?
\eq
\bq
What can we say in higher dimensions?
\eq
\noindent \textbf{Acknowledgement} In the first place, we would like
to thank the anonymous referee for his valuable comments. His
suggestions vastly improved the paper. We are also indebted to
M.~Laczkovich for some helpful remarks.
\begin{document}
\title[\bf Parabolic Anderson equation]{\bf
Solvability of
parabolic Anderson equation
with fractional Gaussian noise
}
\author{\sc Zhen-Qing Chen }
\address{Department of Mathematics, University of Washington, Seattle, WA 98195, USA }
\email{zqchen@uw.edu }
\author{ \sc Yaozhong Hu}
\address{Department of Mathematical and Statistical
Sciences, University of Alberta at Edmonton,
Edmonton, Canada, T6G 2G1}
\email{ yaozhong@ualberta.ca
}
\thanks{The research of ZC is supported in part by a Simons Foundation Grant.
The research of YH is supported in part by an NSERC Discovery grant and a startup fund from University of Alberta at Edmonton.}
\date{}
\subjclass[2010]{Primary: 60H15; 60G60; Secondary: 60G15, 60G22, 35R60 }
\keywords{Stochastic heat equation, fractional Brownian fields,
Wiener chaos expansion, random field solution, necessary condition,
sufficient condition, moment bounds. }
\begin{abstract} This paper provides necessary as well as sufficient
conditions on the Hurst parameters so that the continuous time
parabolic Anderson model $\frac{\partial u}{\partial t}=\frac{1}{2}\frac{\partial^2 u}{\partial x^2}+u\dot{W}$
on $[0, \infty)\times {\mathbb R}^d $ with $d\geq 1$ has a unique random field solution, where $W(t, x)$ is a fractional Brownian sheet
on $[0, \infty)\times {\mathbb R}^d$ and formally $\dot W =\frac{\partial^{d+1}}{\partial t \partial x_1 \cdots \partial x_d} W(t, x)$.
When the noise $W(t, x)$ is white in time, our condition is both necessary and sufficient
when the initial data $u(0, x)$ is bounded between two positive constants. When the noise
is fractional in time with Hurst parameter $H_0>1/2$,
our sufficient condition, which improves the known results in the literature,
is different from the necessary one.
\end{abstract}
\maketitle
\setcounter{equation}{0}\Section{Introduction}
Let $d\geq 1$.
In this paper we are interested in the following stochastic heat equation (Parabolic Anderson model)
for $u=u(t, x)$ with $t\geq 0$ and $x=(x_1, \cdots, x_d) \in{\mathbb{R}}^d$:
\begin{equation}\label{spde}
\frac{\partial u}{\partial t}=\frac{1}{2}\frac{\partial^2 u}{\partial x^2}+u\dot{W}\, \quad \hbox{on }
(0, \infty) \times {\mathbb R}^d,
\end{equation}
where { $W= \left\{ W(t, x); \, t\in [0, \infty), \, x\in {\mathbb R}^d \right\}$ }
is a centered Gaussian process defined on some complete probability space
$(\Omega, \mathcal{F}, \mathbb{P})$ with the covariance given
by
\begin{equation} \label{cov}
\mathbb{E} \left[ W(s,x)W(t,y)\right]= R_{H_0}(s,t) \prod_{i=1}^dR_{H_i}(x_i,y_i) ,
\quad s, t\geq 0, \, x, y \in {\mathbb R}^d,
\end{equation}
where $\frac12\le H_0<1$ and $ 0<H_i <1$ for $ i=1,\cdots, d$, and
where for $\beta >0$,
\begin{equation}
R_{\beta }(a, b) :=
\frac{1}{2 } \left( |a|^{2\beta }+|b|^{2\beta }-|a -b |^{2\beta }\right) \quad \hbox{for all } a, b\in \mathbb{R}\,.
\end{equation}
Here we use $:=$ to denote a definition. That is, $W(t, x)=W(t, x_1, \cdots, x_d)$ is a fractional Brownian motion with Hurst parameter $H_0\ge 1/2$ in the time variable and a
fractional Brownian motion with Hurst parameter $ H=(H_1, \cdots, H_d)$ in the space variables, and formally
$\dot{W}(t, x) =\frac {\partial ^{d+1}W}{\partial t \partial x_1\cdots\partial x_d}$.
Note that $R_{1/2}(s,t)=s\wedge t$ and so $\frac{\partial ^2}{\partial s \partial t}
R_{1/2}(s,t) =\delta_{\{0\}}(s-t)$ in the distributional sense, where $\delta_{\{0\}}$ is the Dirac measure on ${\mathbb R}$ concentrated at the origin $0$.
One-dimensional Brownian motion has covariance function $R_{1/2}$, so it has Hurst parameter $1/2$.
For fractional Brownian motions, the smaller the Hurst parameter is, the rougher the sample paths are.
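The identity $R_{1/2}(s,t)=s\wedge t$ used above is elementary arithmetic; as a quick illustration (not part of the paper), it can be checked numerically:

```python
def R(beta, a, b):
    # the kernel R_beta(a, b) from the introduction
    return 0.5 * (abs(a) ** (2 * beta) + abs(b) ** (2 * beta) - abs(a - b) ** (2 * beta))

# for beta = 1/2 and s, t >= 0:  (s + t - |s - t|) / 2 = min(s, t)
for s in [0.0, 0.3, 1.0, 2.5]:
    for t in [0.0, 0.7, 1.0, 4.2]:
        assert abs(R(0.5, s, t) - min(s, t)) < 1e-12
```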
The precise meaning of the (random field) solution to
\eqref{spde} will be given later in Definition \ref{def-sol-sigma}.
The product $u\dot{W}$ in the above equation is in the sense of Wick, which means that
in Equation \eqref{e.3.1} of the definition of the random field solution
the stochastic integral is understood as the
Ito-Skorohod integral.
The equation \eqref{spde} is one of the simplest stochastic partial differential equations
and describes heat propagation through a random medium. It has a close connection with the KPZ equation
through the Hopf-Cole transform. Equation \eqref{spde}
has been studied by many authors.
We refer the reader to a recent survey \cite{Hu} and references therein.
Throughout the paper, we assume that the Hurst parameter $H_0$ in time is always greater than or equal to $1/2$. But some of the spatial Hurst parameters $H_i$ in \eqref{spde} can be less than $1/2$ while others are
greater than or equal to $1/2$. Let $d_*$ denote the total number of $H_i$ whose value is strictly less than 1/2.
Without loss of generality and to simplify notation, we may assume that $H_k<1/2$ for $1\leq k\leq d_*$
and $H_k\ge 1/2$ for $d_* < k \leq d$, for some $d_* \in \{0, 1, \cdots, d\}$.
Let
$$
H_*=H_1+\cdots+H_{d_*} \quad \hbox{and} \quad { H^* =H_1+\cdots+H_d } .
$$
It is known that when $H_0\ge 1/2$
and $H_i>1/2$ for all $1\leq i\leq d$,
the equation \eqref{spde} has a unique random field solution when $H^*>d-1$
(see \cite[Example 2.6]{HHNT}; see also \cite{Dal} for more general Gaussian noises that are white in time, including
the fractional Brownian noise with
$H_0=1/2$ and all the spatial Hurst parameters greater than $1/2$).
When $H_0=1/2$ and $d_*=d$, it is folklore that $d$ must be 1
for \eqref{spde} to be solvable
with bounded initial value.
It is shown in
\cite{HHLNT1, HHLNT2} that when $d=1$, $H_0=1/2$ and $1/4<H^*<1/2$,
the equation \eqref{spde} has a unique random field solution.
Xia Chen considered fractional Gaussian noise $W$
with $H_0\in [1/2, 1)$ and $H_i\in (0, 1)$ for $1\leq i\leq d$, where some of the spatial Hurst parameters
are greater than or equal to $1/2$ while others are allowed to be less than $1/2$.
He showed \cite[Theorems 1.2 and 1.3]{chen} that when
\begin{equation}\label{e:1.3}
\begin{cases}
2(d-{H^*} )+(d_*-2H_*)<2 &\quad \hbox{when } H_0=1/2, \smallskip \\
{H^*} >d-1 \, \hbox{ and } \
4(1-H_0)+2(d-{H^*} )+(d_*-2H_*)<4 &\quad \hbox{when } H_0>1/2 \,,
\end{cases}
\end{equation}
the SPDE \eqref{spde} has a unique global random field solution in $L^2(\Omega, \mathcal{F}, \mathbb{P})$
for any initial value $u_0(x)$ that is bounded.
He also showed that, for $d\geq 2$, when
\begin{equation}\label{e:1.3b}
H_0>1/2, \quad {H^*} =d-1 \quad \hbox{and} \quad
4(1-H_0)+ (d_*-2H_*)<2 \,,
\end{equation}
the SPDE \eqref{spde} has a unique local random field solution in $L^2(\Omega, \mathcal{F}, \mathbb{P})$
for any initial value $u_0(x)$ that is bounded.
An interesting and challenging
problem is whether the above conditions are also necessary. As we shall see, the
answer is no: when $H_0=1/2$ we give a
necessary and sufficient condition for the existence of a solution to \eqref{spde},
and when $H_0>1/2$ we improve the sufficient condition \eqref{e:1.3} as well as give a necessary condition.
The following are the two main results of this paper, by considering $H_0=1/2$ and $1/2< H_0<1$ separately.
See Definition \ref{def-sol-sigma} for the precise definition of global and local solution to \eqref{spde}.
Our main results show that the condition $H^*>d-1$ is sufficient for the existence of global
random field solution
to \eqref{spde}
when $d\geq 2$ and $H_0\in [1/2, 1)$, and is also necessary for the existence of local random field solution to \eqref{spde}
when $H_0=1/2$.
We further show that when $H_0\in (1/2, 1)$ and $d\geq 2$, there exists a unique local solution in the critical case $H^*=d-1$.
Our necessary condition for the case of $H_0>1/2$
is different from the sufficient condition,
but it involves $H^*$ and $H_0$ only;
we do not need to separate the rougher ones (those with $H_i < 1/2$)
from the smoother ones (those with $H_i \geq 1/2$) in spatial Hurst parameters.
\begin{theorem}\label{t.1.1} Suppose $H_0=1/2$; that is, the noise $ W$ is white in time.
\begin{enumerate}
\item[\rm (i)] Suppose the initial condition satisfies $|u_0(x)|\le C$ for some constant
$C>0$. If
\begin{equation}\label{e:1.4}
\begin{cases}
{H^*} >1/4&\qquad\hbox{when }d= 1, \smallskip \\
{H^*} >d-1&\qquad\hbox{when }d\ge 2,
\end{cases}
\end{equation}
then the equation \eqref{spde} has a unique (global) random field solution in $L^2(\Omega, \mathcal{F}, \mathbb{P})$
with $u(0, x)=u_0 (x)$.
Moreover, in this case, there is a positive constant $C_H>0$ so that
\begin{equation}
\mathbb{E} \left[ u(t,x)^{ p}\right]\le C_H \exp\left( C_H t p^{\frac{{H^*} -d+2}{{H^*} -d+1}}\right)
\quad \hbox{for any } t\ge 0 \hbox{ and } \ p\ge 2\,. \label{e.1.5}
\end{equation}
\item[\rm (ii)] Let the initial condition satisfy
$u_0(x)\ge c$ (or $u_0(x)\le -c$) for some positive constant $c>0$.
If the equation \eqref{spde} has a local solution in $L^2(\Omega, \mathcal{F}, \mathbb{P})$, then \eqref{e:1.4} must be satisfied.
\end{enumerate}
\end{theorem}
\begin{remark}\label{r.1.2}
\begin{enumerate}
\item[(i)] In the case of $H_0=1/2$, the condition \eqref{e:1.3} is equivalent to \eqref{e:1.4} when $d=1$ and
is stronger than \eqref{e:1.4} when $d\ge 2$
since $d_*-2H_* > 0$ when $d_*\geq 1$ and $d_*-2H_* = 0$ when $d_*=0$.
\item[(ii)] Since $0<H_i<1$ for all $1\leq i\leq d$ and $0< H_i<1/2$ for $1\leq i\leq d_*$,
we have $H^*< \frac{d_*}{2} +(d-d_*)$, that is, $H^*< d- (d_*/2)$.
Condition \eqref{e:1.4} thus implies that
$d- (d_*/2)>d-1$, or $d_*<2$. Hence under condition \eqref{e:1.4}, $d_*$ can only be $0$ or $1$.
\end{enumerate}
\end{remark}
\begin{theorem}\label{t.1.2} Suppose $H_0>1/2$.
\begin{enumerate}
\item[\rm (i)]
Let { $d\ge 1$.}
Suppose the initial condition satisfies $|u_0(x)|\le C$ for some constant
$C>0$. If
\begin{equation}\label{e:1.7}
{
\begin{cases}
{H^*} >\frac{3}4 -H_0 &\qquad\hbox{when }d= 1, \smallskip \\
{H^*} >d-1&\qquad\hbox{when }d\ge 2,
\end{cases}
}
\end{equation}
then the equation \eqref{spde} has a unique (global) random field solution in $L^2(\Omega, \mathcal{F}, \mathbb{P})$
with $u(0, x)=u_0 (x)$.
Moreover, in this case, there is a positive constant $C_{H,d} >0$ so that
\begin{equation}
\mathbb{E} \left[ u(t,x)^p\right]\le
C_{H,d}\exp\left[ C_{H,d}\, t^{\frac{{H^*} -d+2H_0}{{H^*} -d+1}} p^{\frac{{H^*} -d+2}{{H^*} -d+1}}
\right]\quad \hbox{for any } t\ge 0 \hbox{ and } p\ge 2\,. \label{e.1.8}
\end{equation}
Moreover, if
\begin{equation}\label{e:1.7b}
d\geq 2 \quad \hbox{and} \quad H^*=d-1,
\end{equation}
the equation \eqref{spde} has a unique local random field solution in $L^2(\Omega, \mathcal{F}, \mathbb{P})$
with $u(0, x)=u_0 (x)$.
\smallskip
\item[\rm (ii)] Let the initial condition satisfy
$u_0(x)\ge c$ for some positive constant $c>0$.
If the equation \eqref{spde} has a local solution in $L^2(\Omega, \mathcal{F}, \mathbb{P})$, then
\begin{equation} \label{e.1.9}
H^*+2H_0 >
\begin{cases}
5/4 &\qquad \hbox{when } d=1, \smallskip \\
(3d+2)/{4} &\qquad \hbox{when }d\ge 2\,. \\
\end{cases}
\end{equation}
\end{enumerate}
\end{theorem}
\begin{remark}
\begin{enumerate}
\item[(i)] When $d=1$ and $H_0>1/2$, condition \eqref{e:1.3}
is equivalent to $H^*>\frac34 -H_0$.
Clearly, when $H_0>1/2$, conditions \eqref{e:1.3} and \eqref{e:1.3b}
are stronger than conditions \eqref{e:1.7} and \eqref{e:1.7b}, respectively, when $d\geq 2$.
\smallskip
\item[(ii)] Condition \eqref{e:1.4} is the same as \eqref{e:1.7} if we take $H_0=1/2$ there.
\smallskip
\item[(iii)] The necessity of the condition \eqref{e:1.4} when $H_0=1/2$
and of the condition \eqref{e.1.9} when $H_0>1/2$
for the existence of solutions to SPDE \eqref{spde}
is new.
There seem to have been no discussions of necessary conditions
for this equation before.
\smallskip
\item[(iv)] Note that the sufficient condition and the necessary condition in Theorems \ref{t.1.1} and \ref{t.1.2} have different
requirements on the initial value $u(0, x)$ of $u$.
It is easy to check directly that, when $1/2<H_0<1$, the sufficient condition \eqref{e:1.7} is strictly stronger than the necessary condition \eqref{e.1.9}.
\smallskip
\item[(v)] Estimate \eqref{e.1.8} coincides with the upper bound part of (6.1) of
\cite{HHNT} when all Hurst parameters are greater than $1/2$ (by setting $\eta_i=2H_i-2$ and $\beta=2H_0-2$), so we expect our bound
\eqref{e.1.8} is sharp.
\smallskip
\item[(vi)] As noted in Remark \ref{r.1.2}(ii), $H^*<d- (d_*/2)$. So condition \eqref{e:1.7b} implies that
$d_*<2$, that is, either $d_*=0$ or $d_*=1$.
\smallskip
\item[(vii)] Again from the fact that $H^*<d- (d_*/2)$, we see that when $d\geq 2$,
condition \eqref{e.1.9} implies $d_*<(d/2) +4H_0 -1< (d+6)/2$. But this
condition is not optimal. In fact, when $W$ is time independent (which corresponds to the case $H_0=1$) and
white in space (which corresponds to the case $H_1=\cdots=H_d=1/2$), it is known
(see \cite{hu02})
that the equation \eqref{spde} has a global solution when $d=1$, has a local solution but no global solution when $d= 2$,
and has no local solution when $d\ge 3$. So our conjecture is that even when $H_0>1/2$ and $H_i\in [1/2, 1)$ for all $1\leq i\leq d$,
to ensure a unique global solution we need $d_*\le 1$, and to ensure a local solution
we need $d_*\le 2$.
\end{enumerate}
\end{remark}
In this paper, we do not discuss the solvability of \eqref{spde} when the time Hurst parameter $H_0\in (0, 1/2)$.
We refer the interested reader to \cite{CHKN,chen2, HLN} and references therein
for recent developments when $H_0<1/2$. Let us also mention that for the additive noise
(namely, replacing $u\dot W$ by $\dot W$ in \eqref{spde}), the necessary
and sufficient condition is known (\cite{HLT}) even for more general Gaussian noise.
\medskip
The rest of the paper is organized as follows.
In Section \ref{sec:preliminaries}, we recall some facts on Gaussian random fields, stochastic integrals with respect to them
and their properties
that will be used in this paper.
The proof of the necessary part of Theorems \ref{t.1.1} and \ref{t.1.2} is presented in Section \ref{S:3},
while the proof of the sufficient part of these two theorems is given in Section \ref{S:4}.
In this paper, for $a, b\in {\mathbb R}$, $a\vee b:= \max \{a, b\}$ and $a\wedge b :=\min \{a, b\}$.
\setcounter{equation}{0}\Section{Preliminaries}\label{sec:preliminaries}
The noise $W$ can be viewed as a Brownian motion with values in an infinite dimensional Hilbert space. One might thus think that the stochastic integration theory with respect to $W$ can be handled by classical theories (e.g. \cite{DPZ}). However, the spatial covariance function of $\dot W$, which is formally $\prod_{i=1}^d H_i(2H_i-1) |x_i-y_i|^{2H_i-2}$, is not locally integrable along
the diagonals as $H_i<1/2$ for $1\leq i\leq d_*$. The
stochastic integral with respect to $W$ then needs to be dealt with
through other means. We recall briefly some
key points needed in this paper and we refer to \cite{HHLNT1,HHLNT2}
for more details.
We start by introducing some basic notation on Fourier transforms. The space of Schwartz functions is
denoted by $\mathcal{S}$. Its dual, the space of tempered distributions, is
denoted by $\mathcal{S}'$.
The Fourier
transform of a function $u \in \mathcal{S} $ is defined by
\[
\widehat u(\xi) :=\int_{\mathbb{R}^d} e^{-i\xi x} u(x) dx\
\]
so that the inverse Fourier transform is given by
$$
\check u (\xi) := ( 2 \pi)^{- d} \int_{\mathbb{R}^d} e^{i\xi x} u(x) dx = ( 2 \pi)^{- d} \widehat u (-\xi) .
$$
Let $C^\infty_c((0,\infty)\times {\mathbb R}^d)$ denote the space of real-valued infinitely differentiable functions with compact support on $(0, \infty) \times {\mathbb R}^d$. The noise $W$ can be described by a mean zero Gaussian family $\{W(\varphi) ,\, \varphi\in
C^\infty_c((0,\infty)\times {\mathbb R}^d)\}$ defined on a complete probability space
$(\Omega,{\mathcal{F}} ,\mathbb{P})$, whose covariance structure
is given by
\begin{equation}\label{eq:cov1a}
\mathbb{E}\left[ W(\varphi) \, W(\psi) \right]
= c_{1,H}\int_{{\mathbb R}_{+}^2\times{\mathbb R}^d}
\widehat \varphi(s,\xi) \, \overline{\widehat \psi} (r,\xi)
\, \prod_{i=1}^d |\xi_i|^{1-2H_i} \, {\gamma}_0(s-r) dsdr d\xi,
\end{equation}
where
\begin{equation}\label{e:2.2}
{\gamma}_0(s-r) =H_0(2H_0-1) |s-r|^{2H_0-2}.
\end{equation}
When $H_0=1/2$, we replace ${\gamma}_0(s-r)$ by the
Dirac delta function ${\gamma}_0(s-r)=\delta_{\{0\}}(s-r)$:
\begin{equation}\label{eq:cov1b}
\mathbb{E}\left[ W(\varphi) \, W(\psi) \right]
= c_{1,H}\int_{{\mathbb R}_{+}\times{\mathbb R}^d}
\widehat \varphi(s,\xi) \, \overline{\widehat \psi} (s,\xi)
\, \prod_{i=1}^d |\xi_i|^{1-2H_i} \, ds d\xi,
\end{equation}
where the Fourier transforms $\widehat \varphi$ and $\widehat \psi$ are understood as Fourier transforms in spatial variables only and
\begin{equation}\label{eq:expr-c1H}
c_{1,H}= \frac 1 {(2\pi)^d } \prod_{i=1}^d \Gamma(2H_i+1)\sin(\pi H_i) \,.
\end{equation}
As above, throughout the remaining part of the paper we will always replace ${\gamma}_0(s-r)$
by $\delta_{\{0\}}(s-r)$
when $H_0=1/2$.
Let
$$
\mathbb H_0=\left\{ \varphi \in \mathcal{S}: \ \| \varphi \|_{\mathbb H}:=
\left( c_{1,H}\int_{\mathbb{R}_+^2\times \mathbb{R}^d}
\widehat \varphi
(s,\xi) \overline{\widehat \varphi}
(r,\xi) \prod_{i=1}^d |\xi_i|^{1-2H_i}\, d\xi\, {\gamma}_0(s-r)\, ds\, dr \right)^{1/2} < \infty \right\}.
$$
Since ${\gamma}_0(s-r)$ is a positive definite kernel it is well-known that
$\|\cdot\|_\mathbb H$ is a Hilbert norm (in fact it is the $L^2$ norm of the
stochastic integral $\int_{\mathbb{R}_+\times \mathbb{R}^d} \varphi
(s,x)W(ds,dx)$).
Let $ \mathbb H$ be the completion of
$\mathbb H_0$ under the above norm $\| \cdot \|_{\mathbb H}$. Using this Hilbert norm
we can define the stochastic integration with respect to $W$.
\begin{definition}\label{def:elementary-process}
For any $t\ge0$, let $\mathcal{F}_{t}$ be the $\sigma$-algebra generated by $W$ up to time $t$. An elementary process $g$ is a process given by
\begin{equation}
g(s,x)
=
\sum_{i=1}^{n} \sum_{j=1}^m X_{i,j} \, \text{\usefont{U}{psy}{m}{n}\symbol{'143}}_{(a_{i},b_{i}]}(s) \, \text{\usefont{U}{psy}{m}{n}\symbol{'143}}_{(h_j,l_{j}]}(x),\label{e.2.3}
\end{equation}
where $n$ and $m$ are finite positive integers, $-\infty<a_{1}<b_{1}<\cdots<a_{n}<b_{n}<\infty$, $h_j=(h_{j1}, \cdots, h_{jd})$ and $l_j=(l_{j1}, \cdots, l_{jd})$ with $h_{jk}<l_{jk}$ for $1\le k\le d$, $\text{\usefont{U}{psy}{m}{n}\symbol{'143}}_{(h_j,l_{j}]}(x)=\prod_{k=1}^d \text{\usefont{U}{psy}{m}{n}\symbol{'143}}_{(h_{jk},l_{jk}]}(x_k)$, and $X_{i,j}$ are ${\mathcal{F}} _{a_{i}}$-measurable random variables for $i=1,\ldots,n$ and $j=1,\ldots,m$. The integral of such a process with respect to $W$ is defined as
\begin{eqnarray}
\int_{\mathbb{R}_+\times \mathbb{R}^d}g(s,x) \, W(ds,dx)
&=&\sum_{i=1}^{n} \sum_{j=1}^m X_{i,j} \, W\left( \text{\usefont{U}{psy}{m}{n}\symbol{'143}}_{(a_{i},b_{i}]} \otimes \text{\usefont{U}{psy}{m}{n}\symbol{'143}}_{(h_j,l_{j}]}\right) \label{eq:riemann-sums-W}\\
&=&\sum_{i=1}^{n} \sum_{j=1}^m X_{i,j} \,\big[W(b_{i},l_{j})-W(a_{i},l_{j}) -W(b_i,h_j)+ W(a_i, h_j)\big]\,. \nonumber
\end{eqnarray}
\end{definition}
The following result can be found in \cite{HHLNT1} when $d=1$.
\begin{proposition}\label{prop:intg-wrt-W}
Let $\Lambda_{{H}}$ be the space of predictable processes $g$ defined on ${\mathbb R}_{+}\times{\mathbb R}^d$ such that
almost surely $g\in \mathbb H$ and $\mathbb{E}[\|g\|_{\mathbb H}^{2}]<\infty$. Then,
we have the following statements.
\noindent
\emph{(i)}
The space of elementary processes of the form in Definition \ref{def:elementary-process} is dense in $\Lambda_{H}$.
\noindent
\emph{(ii)}
For $g\in\Lambda_{H}$, the stochastic integral $\int_{\mathbb{R}_+\times \mathbb{R}^d}g(s,x) \, W(ds,dx)$ is defined as the $L^{2}(\Omega)$-limit of $\int_{\mathbb{R}_+\times \mathbb{R}^d} g_n(s,x)W(ds,dx)$ for any $g_n$ approximating $g$, and we have
\begin{equation}\label{int isometry}
\mathbb{E}\left[ \left( \int_{\mathbb{R}_+\times \mathbb{R}^d}g(s,x) \, W(ds,dx) \right)^{2} \right]
=
\mathbb{E} \left[ \|g\|_{\mathbb H}^{2}\right].
\end{equation}
\end{proposition}
We can also define multiple integrals using the above construction:
\begin{eqnarray}
I_n(f)(t)
&=& \int_{0\le s_1<\cdots<s_n\le t}\int_{\mathbb{R}^{dn}}
f((s_1,x_1), \cdots, (s_n, x_n))W(ds_1, dx_1)\cdots W(ds_n, dx_n) \nonumber\\
&=& \frac1{n!} \int_{([0, t]\times \mathbb{R}^{d })^n}
f((s_1,x_1), \cdots, (s_n, x_n))W(ds_1, dx_1)\cdots W(ds_n, dx_n)
\,,\nonumber\\
\end{eqnarray}
where $f((s_1,x_1), \cdots, (s_n, x_n))\in \mathbb H^{\otimes n}$ is symmetric with respect to its $n$ arguments. We have
\begin{eqnarray}
\mathbb{E}\left[ \left(I_n(f) (t)\right)^2 \right]
&=& \frac{1}{n!} \int_{ [0,t]^{2n} }\int_{\mathbb{R}^{dn}}
\widehat f((s_1,\xi_1), \cdots, (s_n, \xi_n))
\overline{\widehat f} ((r_1,\xi_1), \cdots, (r_n, \xi_n)) \nonumber\\
&&\qquad \prod_{i=1}^n{\gamma}_0(s_i-r_i) \prod_{i=1}^n \prod_{k=1}^d |\xi_{ik}|^{1-2H_k} d\xi_1\cdots d\xi_n ds_1\cdots ds_n dr_1 \cdots dr_n \,, \nonumber\\
&=& \int_{{0\le s_1<\cdots<s_n\le t\atop
0\le r_1,\cdots, r_n\le t}}\int_{\mathbb{R}^{dn}}
\widehat f((s_1,\xi_1), \cdots, (s_n, \xi_n))
\overline{\widehat f} ((r_1,\xi_1), \cdots, (r_n, \xi_n)) \nonumber\\
&&\qquad \prod_{i=1}^n{\gamma}_0(s_i-r_i) \prod_{i=1}^n \prod_{k=1}^d |\xi_{ik}|^{1-2H_k} d\xi_1\cdots d\xi_n dsdr \,, \label{e.2.8}
\end{eqnarray}
where $\widehat f$ is the Fourier transform with respect to $n$ spatial variables $x_1, \cdots, x_n$; $\xi_i=(\xi_{i1}, \cdots, \xi_{id})$; $d\xi_i=d\xi_{i1}\cdots d\xi_{id}$,
$ds=ds_1\cdots ds_n$, $dr=dr_1\cdots dr_n$. Notice that in \eqref{e.2.8} we do not impose an order on $r_1, \cdots, r_n$.
We also need the following lemma which can be found in \cite[Lemma 4.5]{HHNT}.
\begin{lemma}\label{lemsimplex}
Let $\alpha \in (-1+\ensuremath{\varepsilon} , 1)^m$ with $\ensuremath{\varepsilon}>0$. Denote
$|\alpha |= \sum_{i=1}^m
\alpha_i $ and
$T_m(t)=\{(r_1,r_2,\dots,r_m) \in {\mathbb R}^m: 0<r_1 <\cdots < r_m < t\}$.
Then there is a constant $\kappa$, depending only on $\ensuremath{\varepsilon}$, such that
\[
J_m(t, \alpha):=\int_{T_m(t)}\prod_{i=1}^m (r_{{\sigma}(i)}-r_{i-1})^{\alpha_i}
dr \le \frac { \kappa^m t^{|\alpha|+m } }{ \Gamma(|\alpha|+m +1)},
\]
where by convention, $r_0 =0$.
\end{lemma}
\setcounter{equation}{0}\Section{Necessary Condition}\label{S:3}
In this section we shall prove part (ii), the necessary part, of Theorems \ref{t.1.1} and \ref{t.1.2}, namely, the necessity of
\eqref{e:1.4} and \eqref{e.1.9}, respectively.
First, we give the meaning of the
(random field) solution to equation \eref{spde} in the following definition.
\begin{definition}\label{def-sol-sigma}
A real-valued predictable stochastic process $u=\{u(t,x), 0 \leq t <\infty, x \in \mathbb{R}^d
\}$ is said to be a {\it (global) random field solution} of \eref{spde} if
\begin{enumerate}
\item[(i)] for all $t\in[0, \infty)$ and $x\in{\mathbb R}^d$, the process
$ (s, y) \mapsto p_{t-s}(x-y) u(s,y) \text{\usefont{U}{psy}{m}{n}\symbol{'143}}_{[0,t]}(s)$
is an element of $\Lambda_{H}$, where $p_t(x)=(2\pi t)^{-d/2}
\exp\left( -\frac{|x|^2}{2t}\right)$ is the heat kernel on ${\mathbb R}^d$ associated with
$\frac{1}{2}\Delta$.
\item[(ii)] for all $t \in [0,\infty)$ and $x\in \mathbb{R}^d$ we have
\begin{equation}\label{e.3.1}
u(t,x)= p_t*u_0(x) + \int_0^t \int_{\mathbb{R}^d}p_{t-s}(x-y) u(s,y) W(ds,dy) \quad a.s.,
\end{equation}
where the stochastic integral is understood in the sense of Proposition \ref{prop:intg-wrt-W}.
\end{enumerate}
\smallskip
A real-valued stochastic process $ u(t,x)$
is said to be a {\it local (random field) solution } of \eref{spde} if there is some constant $t_0>0$
so that $u(t, x)$ is defined on $[0, t_0) \times {\mathbb R}^d$ and satisfies all the above properties
for $(t, x)\in [0, t_0) \times {\mathbb R}^d$.
\smallskip
We say a random field solution $u(t, x)$ of \eref{spde} is in $L^2(\Omega, \mathcal{F}, \mathbb{P})$
if $\mathbb{E} [ u(t, x)^2] <\infty$ for every $t\geq 0$ and $x\in {\mathbb R}^d$ (for local solution,
replace $t\geq 0$ by $t\in [0, t_0)$).
\end{definition}
Repeatedly using this definition, we see that the solution of \eref{spde} has the following Wiener chaos expansion
\begin{equation}\label{e.3.2}
u(t,x)=\sum_{n=0}^\infty
u_n(t,x),
\end{equation}
where
\begin{equation} \label{e:3.3}
u_n(t,x)=I_n(f_n^{(t,x)}) (t)
\end{equation}
with
\begin{eqnarray}
f_n^{(t,x)}
&:=& f_n^{(t,x)}( s_1,x_1,\dots,s_n,x_n )= f_n^{(t,x)} ((s_1,x_1),\dots, (s_n,x_n) )\nonumber \\
&=& p_{t-s_n}(x-x_n)p_{s_n-s_{n-1}}(x_ n-x_{n-1})\cdots p_{s_2-s_1}(x_ 2-x_1)
p_{s_1}u_0(x_1)\,, \label{e.3.3}
\end{eqnarray}
(see, for instance, formula (4.4) in \cite{HN} or formula (3.3) in \cite{HHNT}).
The remainder in \eqref{e.3.2} goes to zero since we can expand first for finitely
many terms to deduce
$ \mathbb{E} [ u(t, x)^2] \geq \sum_{n=0}^\infty \mathbb{E} \left[ I_n(f_n^{ (t, x)})(t)^2 \right]$.
On the other hand, $u(t, x)$ defined by \eqref{e.3.2} is a solution as long as
the series converges in $L^2$.
By comparison, without loss of generality we may assume $u_0(x)=1$ throughout the remaining part of this paper.
The Fourier transform of $f_n^{(t,x)}$ with respect to the $n$ ($d$-dimensional) spatial variables
is
\begin{equation}
\widehat f_n(t,x, s_1, \xi_1, \cdots, s_n, \xi_n)
=\prod_{i=1}^n e^{-\frac{1}{2} (s_{i+1}-s_i)|\xi_i+\cdots+\xi_1|^2} e^{-{ i} x(\xi_n+\cdots+\xi_1)}\,,
\label{e.3.4}
\end{equation}
for $0<s_1<\cdots<s_n<t$, where we set $s_{n+1}=t$ (see \cite[3.13-3.14]{hule2019} or \cite[p.8]{HHLNT2}). In general, we have
\begin{equation}
\widehat f_n(t,x, s_1, \xi_1, \cdots, s_n, \xi_n)
=\prod_{i=1}^n e^{-\frac{1}{2} (s_{{\sigma}(i+1)}-s_{{\sigma}(i)})
|\xi_{{\sigma}(i)}+\cdots+\xi_{{\sigma}(1)}|^2} e^{- i x(\xi_n+\cdots+\xi_1)}\,,
\label{e.3.4a}
\end{equation}
where $0<s_{{\sigma}(1)} <\cdots <s_{{\sigma}(n)}<s_{{\sigma}(n+1)}=t$.
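For $n=1$ (with $u_0\equiv 1$ and $d=1$), formula \eqref{e.3.4} reduces to the classical identity $\int_{\mathbb R} e^{-i\xi y}\, p_{t-s_1}(x-y)\, dy = e^{-\frac12(t-s_1)\xi^2}e^{-i x\xi}$. As an illustration only (not part of the paper; the quadrature parameters are ad hoc), this can be confirmed numerically:

```python
import math, cmath

def p(a, z):
    # one-dimensional heat kernel p_a(z) = (2*pi*a)^(-1/2) * exp(-z^2 / (2a))
    return math.exp(-z * z / (2.0 * a)) / math.sqrt(2.0 * math.pi * a)

def ft_numeric(a, x, xi, half_width=12.0, n=120000):
    # midpoint rule for \int e^{-i xi y} p_a(x - y) dy over [x - L, x + L];
    # the Gaussian tail beyond |y - x| = L is negligible for moderate a
    h = 2.0 * half_width / n
    s = 0.0 + 0.0j
    for k in range(n):
        y = (x - half_width) + (k + 0.5) * h
        s += cmath.exp(-1j * xi * y) * p(a, x - y)
    return s * h

a, x, xi = 0.7, 0.4, 1.3  # a plays the role of t - s_1; arbitrary values
exact = cmath.exp(-0.5 * a * xi * xi - 1j * x * xi)
assert abs(ft_numeric(a, x, xi) - exact) < 1e-6
```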
\medskip
We will need the following elementary lemma.
\begin{lemma}\label{l.3.2} Let ${\varepsilon}>0$ and let $0<{\alpha}, {\beta}<1$ with ${\alpha}+{\beta}
>1$.
There is a constant $c\geq 1$, depending only on ${\alpha}$ and ${\beta}$ (in particular, independent of ${\varepsilon}$), so that for all $x\in (0, 3{\varepsilon} )$,
\begin{equation}\label{e:3.6}
c^{-1} x^{1-({\alpha}+{\beta})} \leq \int_0^{\varepsilon} u^{-{\alpha}} (u+x)^{-{\beta}} du\leq c x^{1-({\alpha}+{\beta})} .
\end{equation}
\end{lemma}
\begin{proof} By a change of variable $u=xv$,
$$
\int_0^{\varepsilon} u^{-{\alpha}} (u+x)^{-{\beta}} du = x^{1-({\alpha} + {\beta})} \int_0^{{\varepsilon}/x} v^{-{\alpha}} (1+v)^{-{\beta}} dv .
$$
Since $0<x<3{\varepsilon}$, we have ${\varepsilon}/x>1/3$, and hence
$\int_0^{1/3} v^{-{\alpha}} (1+v)^{-{\beta}}\, dv \leq \int_0^{{\varepsilon}/x} v^{-{\alpha}} (1+v)^{-{\beta}}\, dv
\leq \int_0^{\infty} v^{-{\alpha}} (1+v)^{-{\beta}}\, dv<\infty$,
where the last integral is finite because ${\alpha}<1$ and ${\alpha}+{\beta}>1$.
The desired conclusion \eqref{e:3.6} follows.
\end{proof}
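A numerical illustration of Lemma \ref{l.3.2} (not part of the paper; the parameters $\alpha,\beta,\varepsilon$ and the grid of $x$ values are arbitrary choices): after the substitution $v=u^{1-\alpha}$ the integrand is bounded, so a midpoint rule applies, and the ratio of the integral to $x^{1-(\alpha+\beta)}$ should stay within a fixed band as $x$ varies over $(0,3\varepsilon)$.

```python
def I(eps, alpha, beta, x, n=60000):
    # \int_0^eps u^{-alpha} (u + x)^{-beta} du; with v = u^(1-alpha) this equals
    # (1/(1-alpha)) * \int_0^{eps^(1-alpha)} (v^(1/(1-alpha)) + x)^(-beta) dv,
    # whose integrand is bounded, so a plain midpoint rule converges
    top = eps ** (1.0 - alpha)
    h = top / n
    s = 0.0
    for k in range(n):
        v = (k + 0.5) * h
        s += (v ** (1.0 / (1.0 - alpha)) + x) ** (-beta)
    return s * h / (1.0 - alpha)

alpha, beta, eps = 0.6, 0.7, 1.0  # note alpha + beta > 1
ratios = [I(eps, alpha, beta, x) / x ** (1.0 - alpha - beta)
          for x in [0.001, 0.01, 0.1, 1.0, 2.9]]
# Lemma l.3.2: the ratio is bounded above and below by constants independent of x
assert 0.0 < min(ratios) and max(ratios) / min(ratios) < 20.0
```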
\medskip
In the following, we use $C_H$ to denote a positive constant depending on $H=(H_1, \cdots, H_d)$ as well as the dimension
$d\geq 1$, whose exact value is unimportant and may change from line to line.
For two non-negative functions $f$ and $g$,
notation $f\asymp g$ means that there is a constant $c\geq 1$ so that $c^{-1} f \leq g\leq c f $
on a specified common definition domain of $f$ and $g$.
\medskip
\noindent {\it Proof of Part (ii) of Theorems
\ref{t.1.1} and \ref{t.1.2}}.\ \
First, we consider
the one dimensional case $d=1$ with $H_0>1/2$. Denote $H=H_1$.
Let us consider the second chaos in
\eqref{e.3.2}. From now on we denote
$ I_n(f_2^{t,0}) = I_n(f_2^{t,0})(t)$.
\begin{eqnarray}
\mathbb{E} \left[ I_2(f_2^{t,0}) ^2 \right]
&=& \frac12 \int_{{0\le s_1 < s_2\le t\atop
0\le r_1< r_2\le t} }\int_{\mathbb{R}^2}
e^{- \frac12 (t-s_2+t-r_1 )|\xi_2+ \xi_1|^2-\frac12 (s_2-s_1+r_2-r_1)|\xi_1|^2
}\nonumber\\
&&\qquad
|\xi_1|^{1-2H}|\xi_2|^{1-2H}d\xi_1 d\xi_2 {\gamma}_0(s_2-r_2) {\gamma}_0(s_1-r_1) ds_1ds_2
dr_1dr_2\nonumber\\
&&\quad + \frac12 \int_{{0\le s_1 < s_2\le t\atop
0\le r_2< r_1\le t} }\int_{\mathbb{R}^2}
e^{- \frac12 (t-s_2+t-r_2)|\xi_2+ \xi_1|^2- \frac{s_2-s_1 }{2} |\xi_1|^2-\frac{r_1-r_2}{2}|\xi_2|^2}\nonumber\\
&&\qquad
|\xi_1|^{1-2H}|\xi_2|^{1-2H}d\xi_1 d\xi_2 {\gamma}_0(s_2-r_2) {\gamma}_0(s_1-r_1) ds_1ds_2
dr_1dr_2\nonumber\\
&\ge & \frac12 \int_{{0\le s_1\le s_2\le t\atop 0\le r_1<r_2\le t}}g(s_1, s_2, r_1, r_2 ) {\gamma}_0(s_2-r_2) {\gamma}_0(s_1-r_1) dr_1dr_2 ds_1ds_2 \,,\nonumber\\
\label{e.3.6}
\end{eqnarray}
where
\[
g(s_1, s_2, r_1, r_2 )=\int_{\mathbb{R}^2}
e^{- \frac12 (t-s_2+t-r_1 )|\xi_2+ \xi_1|^2-\frac12 (s_2-s_1+r_2-r_1)|\xi_1|^2
}
|\xi_1|^{1-2H}|\xi_2|^{1-2H}d\xi_1 d\xi_2 .
\]
Making the substitution $\eta_1=\xi_1$ and $\eta_2=\xi_1+\xi_2$, we have for $0<s_1<s_2<t$
and $0<r_1<r_2<t$,
\begin{eqnarray}
g(s_1, s_2, r_1, r_2 )
&=&\int_{\mathbb{R}^2}
e^{- \frac12 (t-s_2+t-r_2)|\eta_2|^2-\frac12 (s_2-s_1+r_2-r_1)|\eta_1|^2}
|\eta_1|^{1-2H}|\eta_2-\eta_1|^{1-2H}d\eta_1 d\eta_2\nonumber\\
&=&\frac{C_H}{\sqrt{(t-s_2+t-r_2)(s_2-s_1+r_2-r_1)}} \mathbb{E} \Bigg[
\left|\frac{X_1 }{\sqrt{s_2-s_1+r_2-r_1}}
\right|^{1-2H}\nonumber\\
&&\qquad\quad \times \left|\frac{X_2}{\sqrt{t-s_2+t-r_2}}-\frac{X_1 }{\sqrt{s_2-s_1+r_2-r_1}}\right|^{1-2H}\Bigg]\,,\nonumber\\
&= & C_H (t-s_2+t-r_2)^{H-1 }(s_2-s_1+r_2-r_1)^{2H-3/2} \nonumber\\
&& \quad \times \mathbb{E} \left[
\left| X_1 ( \sqrt{s_2-s_1+r_2-r_1} X_2 -\sqrt{t-s_2+t-r_2} X_1)\right|^{1-2H}\right] \nonumber\\
&=& C_H (t-s_2+t-r_2)^{H-1 }(s_2-s_1+r_2-r_1)^{2H-3/2} (t-s_1+t-r_1)^{1/2-H} \nonumber\\
&&\qquad\quad \times \mathbb{E} \left[
\left| X_1 \left( \frac{\sqrt{s_2-s_1+r_2-r_1}}{\sqrt{t-s_1+t-r_1}} X_2 -\frac{\sqrt{t-s_2
+t-r_2}}{\sqrt{t-s_1+t-r_1}} X_1\right)\right|^{1-2H}\right] , \nonumber \\
\label{e:3.8}
\end{eqnarray}
where $X_1$ and $X_2$ are two independent standard Gaussian
random variables. Denote
\[
f(\lambda):= \mathbb{E} \left[ |X_1 (\lambda X_1-\sqrt{1-\lambda^2} X_2)|^{1-2H}\right]
\,,\quad \lambda \in [0, 1]\,.
\]
We claim that
\begin{equation} \label{e:3.9}
\min_{\lambda \in [0, 1]} f(\lambda )>0 \quad \hbox{for any } 0 < H <1.
\end{equation}
First note that for a standard Gaussian random variable $Z$ and $a\in {\mathbb R}$,
by the Hardy-Littlewood (symmetric rearrangement) inequality,
\begin{eqnarray} \label{e:3.10}
\mathbb{E} \left[ |Z-a|^{1-2H} \right]&=&\frac{1}{\sqrt{2\pi}} \int_{-\infty}^\infty |z-a|^{1-2H} e^{-|z|^2/2} dz \nonumber \\
&\leq& \frac{1}{\sqrt{2\pi}} \int_{-\infty}^\infty |z |^{1-2H} e^{-|z|^2/2} dz = \mathbb{E} \left[ |Z|^{1-2H} \right].
\end{eqnarray}
Taking conditional expectation on $\sigma (X_1)$ and then
using \eqref{e:3.10}, we have for any $0<H<1$,
\begin{eqnarray*}
f(\lambda) &=& \mathbb{E} \left[ |X_1|^{1-2H} \, \mathbb{E} \left( |\lambda X_1-\sqrt{1-\lambda^2} X_2|^{1-2H} \big| \sigma (X_1) \right) \right] \\
&\leq & (1-\lambda^2)^{1/2 -H} \, \mathbb{E} \left[ |X_1|^{1-2H} \right] \, \mathbb{E} \left[ |X_2|^{1-2H} \right] <\infty.
\end{eqnarray*}
We conclude by the dominated convergence theorem that $f(\lambda)$ is a positive continuous function on $[0, 1)$ with
$f(0)= \left( \mathbb{E} \left[ |X_1 |^{1-2H}\right] \right)^2 \in (0, \infty)$.
{We need to consider the behavior of $f (\lambda) $ near $\lambda =1$. }
When $0<H<3/4$, {$f$ is a continuous function on $[0, 1]$ with $f(1)= \mathbb{E} \left[ |X_1|^{2-4H}\right]
\in (0, \infty)$ and so $f$ is bounded between two positive constants on $[0, 1]$. In particular, we have \eqref{e:3.9}.}
When $3/4\le H<1$, by Fatou's lemma,
$$ \liminf_{\lambda \to 1-} f(\lambda) { \geq f(1) }
= \mathbb{E} \left[ |X_1|^{2(1-2H)} \right] =\infty.
$$
This establishes the claim \eqref{e:3.9}
for all $H\in (0, 1)$.
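The positivity claim \eqref{e:3.9} can also be checked by simulation. The following hedged Python sketch estimates $f(\lambda)$ by Monte Carlo for the illustrative value $H=0.6$ (any $H\in (0, 3/4)$ would do, so that $f$ is continuous on $[0,1]$); the sample size and the seed are arbitrary choices.

```python
import math, random

def f_mc(lam, H, n=100_000, seed=7):
    # Monte Carlo estimate of f(lam) = E|X1 (lam*X1 - sqrt(1-lam^2)*X2)|^{1-2H}
    # for independent standard Gaussians X1, X2; seeded for reproducibility.
    rng = random.Random(seed)
    p = 1.0 - 2.0 * H
    c = math.sqrt(max(0.0, 1.0 - lam * lam))
    s = 0.0
    for _ in range(n):
        x1, x2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
        s += abs(x1 * (lam * x1 - c * x2)) ** p
    return s / n

H = 0.6                                   # illustrative; any H in (0, 3/4)
vals = [f_mc(lam, H) for lam in [0.0, 0.25, 0.5, 0.75, 1.0]]
```

The estimates should all be bounded away from zero, in line with \eqref{e:3.9}.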
Consequently, we have by \eqref{e:3.8} that for any $H\in (0, 1)$,
there is a constant $C_H>0$ so that
\begin{equation} \label{e:3.11}
g(s_1, s_2, r_1, r_2 ) \geq C_H (t-s_1+t-r_1)^{1/2-H} (t-s_2+t-r_2)^{H-1 }(s_2-s_1+r_2-r_1)^{2H-3/2} \,.
\end{equation}
In order for $\mathbb{E} \left[ I_2(f_2^{t,0})(t)^2 \right]$ to be finite,
by \eqref{e.3.6} and \eqref{e:3.11}
the following integral must be finite:
\begin{eqnarray} \label{e:3.13}
&&
\Upsilon:=\int_{{0\le s_1<s_2\le t\atop 0\le r_1<r_2\le t}}
(t-s_1+t-r_1)^{1/2-H} (t-s_2+t-r_2)^{H-1 }(s_2-s_1+r_2-r_1)^{2H-3/2}\nonumber\\
&&\qquad\qquad \times |s_2-r_2|^{2H_0-2}|s_1-r_1|^{2H_0-2} dr_1dr_2 ds_1ds_2 \,.
\end{eqnarray}
It is obvious that
{
\begin{eqnarray}
\Upsilon&\ge & \int_{ 0\le r_1<s_1<r_2<s_2\le t }
(t-s_1+t-r_1)^{1/2-H} (t-s_2+t-r_2)^{H-1 }(s_2-s_1+r_2-r_1)^{2H-3/2}\nonumber\\
&&\qquad\qquad \times |s_2-r_2|^{2H_0-2}|s_1-r_1|^{2H_0-2} dr_1dr_2 ds_1ds_2 \nonumber\\
&\ge& \int_{ 0\le r_1<s_1<r_2<s_2\le t }
(t-s_2+t-r_2)^{ -1 /2}(s_2-s_1+r_2-r_1)^{2H-3/2}\nonumber\\
&&\qquad\qquad \times |s_2-r_2|^{2H_0-2}|s_1-r_1|^{2H_0-2} dr_1dr_2 ds_1ds_2 \,.
\label{e.3.8}
\end{eqnarray}
Making the substitution from $r_1, s_1, r_2$ to $u=s_1-r_1$, $v=r_2-s_1$, $w=s_2-r_2$,
we have
\begin{eqnarray}
\Upsilon
&\ge& \int_{ u,v,w>0, u+v+w< s_2<t }
(2t-2s_2+w)^{ -1 /2}(u+2v+w)^{2H-3/2}
w^{2H_0-2}u^{2H_0-2} dudvdwds_2 \nonumber\\
&\ge & \int_{ v,w>0, \, v+w\le s_2 /2, \, s_2<t } \left(\int_0^{s_2/2} (u+2v+w)^{2H-3/2} u^{2H_0-2} du \right)
\nonumber\\ &&\qquad\qquad \times
(2t-2s_2+1)^{ -1 /2} w^{2H_0-2} dw dv ds_2
\nonumber \\
&\ge & c_1\int_{ v,w>0, \, v+w\le s_2/2, \, s_2<t }
(2t-2s_2+1)^{ -1 /2}(2v+w)^{2H_0 +2H-5/2}
w^{2H_0-2} dvdwds_2 \nonumber\\
&\ge & c_1\int_{ 0< v \le s_2/4, \, s_2<t }
(2t-2s_2+1)^{ -1 /2} \left( \int_0^{s_2/4} (2v+w)^{2H_0 +2H-5/2}
w^{2H_0-2} dw \right) dvds_2 \nonumber\\
&\ge & c_1^2 \int_{ 0< v \le s_2/4, \, s_2<t }
(2t-2s_2+1)^{ -1 /2} v ^{4H_0 +2H-7/2} dv ds_2 , \label{e.3.11}
\end{eqnarray}
where in the third and fifth inequality we used Lemma \ref{l.3.2}.
So $\Upsilon <\infty \,$ implies the integral in \eqref{e.3.11} is finite, which happens }
only when $4H_0 +2H-7/2>-1$, that is,
\begin{equation}
4H_0 +2H>5/2\,.
\end{equation}
This proves
Theorem \ref{t.1.2}(ii) when $d=1$.
When $H_0=1/2$, the integral \eqref{e:3.13} becomes
\begin{eqnarray} \label{e:3.17a}
\Upsilon= { 2^{2H-2} }
\int_{ 0 <s_1 <s_2\le t }
(t-s_1)^{1/2-H} (t-s_2 )^{H-1 }(s_2-s_1 )^{2H-3/2} ds_1ds_2 \,.
\end{eqnarray}
This integral is finite only when the exponent $2H-3/2$ of $s_2-s_1$
in \eqref{e:3.17a} is larger than $-1$. This requires $H>1/4$. Notice that by formally letting $H_0=1/2$ in
$4H_0 +2H>5/2$ one also obtains $H>1/4$.
This proves
Theorem \ref{t.1.1}(ii) when $d=1$.
\medskip
Next we consider the case of dimension $d\ge 2$. We still consider the second chaos. As in the one-dimensional case, we have
\begin{eqnarray}
\mathbb{E} \left[ (I_2(f_2))^2 \right]
&\ge & \frac12 \int_{{0\le s_1\le s_2\le t\atop
0\le r_1\le r_2\le t} }\int_{\mathbb{R}^{2d} }
e^{- \frac12(t-s_2+t-r_2)|\xi_2+ \xi_1|^2-\frac12 (s_2-s_1+r_2-r_1 )|\xi_1|^2}\nonumber\\
&&\qquad
\times \prod_{k=1}^d |\xi_{1k}|^{1-2H_k}|\xi_{2k} |^{1-2H_k} {\gamma}_0(s_1-r_1){\gamma}_0(s_2-r_2) d\xi_1 d\xi_2 ds_1ds_2 dr_1dr_2\nonumber\\
&=& \int_{{0\le s_1\le s_2\le t\atop
0\le r_1\le r_2\le t} }\prod_{k=1}^d g_k(s_1, s_2, r_1, r_2) {\gamma}_0(s_1-r_1){\gamma}_0(s_2-r_2) ds_1ds_2 dr_1dr_2\,, \nonumber\\
\label{e.3.13}
\end{eqnarray}
where
\begin{equation}\label{e.3.14}
g_k(s_1, s_2, r_1, r_2) =
\int_{\mathbb{R}^2}
e^{- \frac12(t-s_2+t-r_2)|\eta_2|^2 -\frac12 (s_2-s_1+r_2-r_1)|\eta_1|^2}
|\eta_1|^{1-2H_k}|\eta_2-\eta_1|^{1-2H_k}d\eta_1 d\eta_2 ,
\end{equation}
which can be estimated by using \eqref{e:3.11}. Thus, we have
\begin{eqnarray}
\mathbb{E} \left[ (I_2(f_2))^2 \right]
&\ge & C \int_{{0\le s_1\le s_2\le t\atop
0\le r_1\le r_2\le t} } (t-s_1+t-r_1)^{d/2-{H^*} } (t-s_2+t-r_2)^{{H^*} -d }\nonumber\\
&& \times (s_2-s_1+r_2-r_1)^{2{H^*} -3d/2} |s_1-r_1|^{2H_0-2} |s_2-r_2|^{2H_0-2} ds_1ds_2 dr_1dr_2 \nonumber \\
\label{e:3.16}
\end{eqnarray}
With the same argument as for \eqref{e.3.11}, we see that the above integral is finite only if
\[
2{H^*} -\frac{3d}{2}+4H_0-2>-1\,.
\]
This proves part (ii), the necessary part, of Theorem \ref{t.1.2} for $d\geq 2$.
\smallskip
When $H_0=1/2$, the inequality \eqref{e:3.16}
becomes
\begin{eqnarray}
\mathbb{E} \left[ (I_2(f_2))^2 \right]
&\ge & C \int_{ 0\le s_1\le s_2\le t } (t-s_1 )^{d/2-{H^*} } (t-s_2 )^{{H^*} -d } (s_2-s_1 )^{2{H^*} -3d/2} ds_1ds_2 \nonumber \\
&=& C\int_0^t (t-s_1)^{d/2-{H^*} } \left[\int_{s_1}^{t} (t-s_2)^{{H^*} -d }(s_2-s_1)^{2{H^*} -3d/2} ds_2 \right] ds_1 \nonumber \\
&=& CB({H^*} -d+1,2{H^*} -\frac{3d}{2}+1) \int_0^t (t-s_1)^{-d/2} (t-s_1)^{2{H^*} -\frac{3d}{2}+1} ds_1
\nonumber \\
&=& CB({H^*} -d+1,2{H^*} -\frac{3d}{2}+1) \int_0^t (t-s_1)^{2{H^*} -2d+1} ds_1,
\label{e:3.17}
\end{eqnarray}
where $B$ is the beta function and where we see that
$\mathbb{E} \left[ (I_2(f_2))^2 \right]<\infty$ if and only if $2{H^*} -3d/2>-1$ and ${H^*} -d>-1$.
Note that for $d\ge 2$, ${H^*} -d>-1$ implies that
$2{H^*} -3d/2+1=2({H^*} -d)+(d/2)+1>0$.
This completes the proof of part (ii), the necessary part, of Theorem \ref{t.1.1} for $d\ge 2$.
\setcounter{equation}{0}\Section{Sufficient Condition}\label{S:4}
\noindent In this section, we prove part (i), the sufficient part, of Theorems \ref{t.1.1} and \ref{t.1.2}.
It suffices to consider the case that $d\geq 2$, as when $d=1$ conditions \eqref{e:1.4} and \eqref{e:1.7}
coincide with \eqref{e:1.3} so the result for $d=1$ follows from \cite[Theorems 1.2 and 1.3]{chen}.
Recall that we take $u_0(x)=1$ on ${\mathbb R}^d$. Let $u_n(t, x)$ be defined as in \eqref{e:3.3}.
We compute the $L^2(\Omega, \mathcal{F}, \mathbb{P})$ norm of each $u_n(t, x)$.
For $n\geq 1$, we have by \eqref{e.2.8} that, with ${\sigma}$ denoting the permutation
of $\{1, \cdots, n\}$ for which $0<r_{{\sigma}(1)}<\cdots <r_{{\sigma}(n)}<t$,
\begin{eqnarray}
&& \mathbb{E} \left[ u_n^2(t,x) \right] \nonumber\\
&=& \int_{{0<s_1<\cdots<s_n<t\atop 0< r_1, \cdots, r_n<t}
}\int_{\mathbb{R}^{nd}}
\prod_{i=1}^n e^{-(s_{i+1}-s_i+r_{{\sigma}(i+1)}-r_{{\sigma}(i)})|\xi_i+\cdots+\xi_1|^2}
\prod_{k=1}^d
|\xi_{ik}|^{1-2H_k}d\xi_1\cdots d\xi_n \nonumber\\
&&\qquad
\times \prod_{i=1}^n {\gamma}_0(s_i-r_i) ds_1dr_1\cdots ds_ndr_n\nonumber\\
&=& \int_{[0, t]^{2n}} \prod_{k=1}^d g_k(s_1, \cdots, s_n, r_1, \cdots, r_n)
\prod_{i=1}^n {\gamma}_0(s_i-r_i) ds_1dr_1\cdots ds_ndr_n\,,
\label{e.4.1}
\end{eqnarray}
where $s_{n+1}=r_{n+1}:=t$ and
\begin{eqnarray}
&&g_k(s_1, \cdots, s_n, r_1, \cdots, r_n)\nonumber\\
&=& \int_{\mathbb{R}^{n }}
\prod_{i=1}^n e^{-(s_{i+1}-s_i+r_{{\sigma}(i+1)}-r_{{\sigma}(i)})|\xi_{ik}+\cdots+\xi_{1k}|^2}
|\xi_{ik}|^{1-2H_k}d\xi_{1k}\cdots d\xi_{nk}\nonumber\\
&=& \int_{\mathbb{R}^{n }}
\prod_{i=1}^n e^{-(s_{i+1}-s_i+r_{{\sigma}(i+1)}-r_{{\sigma}(i)})|\eta_i|^2}
|\eta_i-\eta_{i-1}|^{1-2H_k}d\eta_1\cdots d\eta_n\,,
\end{eqnarray}
where $\eta_0:=0$.
Denote $u_i=s_{i+1}-s_i+r_{{\sigma}(i+1)}-r_{{\sigma}(i)}$ for $1\leq i\leq n$. Let $u_0=1$, $X_0=0$,
and let $X_1, \cdots, X_n$ be i.i.d.\ standard Gaussian random variables.
Then we can write for $0<s_1<s_2<\cdots < s_{n}<t$,
\begin{eqnarray}
&&g_k(s_1, \cdots, s_n, r_1, \cdots, r_n)
= c_{H_k}^n \left( \prod_{i=1}^n u_i^{-1/2} \right) \mathbb{E}\left[\prod_{i=1}^n
\left|\frac{X_i}{\sqrt{u_i}}-\frac{X_{i-1}}{\sqrt{u_{i-1}}}\right|^{1-2H_k}\right]\nonumber\\
& &\qquad = c_{H_k}^n \left( \prod_{i=1}^n u_i^{-1/2} \right)
\left( \prod_{i=1}^n (u_iu_{i-1})^{H_k-1/2} \right) \mathbb{E}\left[ \left| X_1
\right| ^{1-2H_k} \prod_{i=2}^n
\left| \sqrt{u_{i-1}} X_i - \sqrt{u_i} X_{i-1} \right|^{1-2H_k} \right]
\nonumber\\
& &\qquad = c_{H_k}^n u_n^{H_k-1}
\left( \prod_{i=1}^{n-1} u_i^{2H_k-3/2} \right)
\left( \prod_{ i=2 }^{n } (u_i+u_{i-1})^{\frac12- H_k} \right) \nonumber\\
& &\qquad \qquad \times \mathbb{E}\left[ \left| X_1 \right| ^{1-2H_k} \prod_{i=2}^n
\left| \sqrt{\frac{u_{i-1}}{u_{i-1}+u_i} }X_i - \sqrt{\frac{u_{i }}{u_{i-1}+u_i} }X_{i-1} \right|^{1-2H_k}
\right] \,. \label{e.4.3}
\end{eqnarray}
Denote ${\lambda}_i=\sqrt{\frac{u_{i-1}}{u_{i-1}+u_i}}$. The expectation
(denoted by $I_{k,n}$) in \eqref{e.4.3} is bounded as follows.
\begin{eqnarray}
I_{k,n}
&:=& \mathbb{E}\left[ \left| X_1
\right| ^{1-2H_k} \prod_{i=2}^{d_*}
\left| {{\lambda}_i }X_i - \sqrt{1-{\lambda}_i^2 }X_{i-1} \right|^{1-2H_k}
\cdot \prod_{i=d_*+1}^{n}
\left| {{\lambda}_i }X_i - \sqrt{1-{\lambda}_i^2 }X_{i-1} \right|^{1-2H_k} \right] \nonumber\\
&\le & C_{d_*, H_k}\mathbb{E}\left[ \left| X_1
\right| ^{1-2H_k} \left( \prod_{i=2}^{d_*} \left(
| X_i | \vee |X_{i-1}| \right)^{1-2H_k} \right)
\left( \prod_{i=d_*+1}^{n}
\left| {{\lambda}_i }X_i - \sqrt{1-{\lambda}_i^2 }X_{i-1} \right|^{1-2H_k} \right) \right] \nonumber\\
&\le & C_{d_*, H_k} \left( \prod_{i=1}^{d_*-1}
\left( \mathbb{E} \left[\left| X_i \right| ^{1-2H_k}\right] \vee \mathbb{E} \left[\left| X_i \right| ^{2-4H_k}\right] \right) \right)
\nonumber \\
&& \quad \times \mathbb{E}\left[ \left| X_{d_*}
\right| ^{1-2H_k} \prod_{i=d_*+1}^{n}
\left| {{\lambda}_i }X_i - \sqrt{1-{\lambda}_i^2 }X_{i-1} \right|^{1-2H_k} \right] \nonumber\\
&=& C_{d_*, H_k} \mathbb{E}\left[ \left| X_{d_*} \right| ^{1-2H_k} \prod_{i=d_*+1}^{n}
\left| {{\lambda}_i }X_i - \sqrt{1-{\lambda}_i^2 }X_{i-1} \right|^{1-2H_k} \right]\, , \label{e:4.4}
\end{eqnarray}
with the convention that $\prod_{i= m}^n a_i :=1$ for $m>n$.
To bound the remaining expectation we use
{the following estimate for a standard normal random variable $X$
from \cite[Lemma A.1]{HNS}: there is a constant $C>0$ so that for any $0<\alpha <1$, $ \lambda >0$ and $b>0$,
\begin{equation}\label{e:4.5}
\mathbb{E} \left[ | \lambda X +b |^{-\alpha} \right] \leq C (\lambda \vee b )^{-\alpha} .
\end{equation}
By taking the conditional expectation with respect to the $\sigma$-field $\sigma ( X_{d_*}, \cdots, X_{n-1})$ and using
\eqref{e:4.5}, }
\begin{eqnarray*}
&& \mathbb{E}\left[ \left| X_{d_*} \right| ^{1-2H_k} \prod_{i=d_*+1}^{n}
\big| {{\lambda}_i }X_i - \sqrt{1-{\lambda}_i^2 } \, X_{i-1} \big|^{1-2H_k} \right]\\
& = &\mathbb{E}\left[ \mathbb{E} \Big[ \left| X_{d_*} \right| ^{1-2H_k} \prod_{i=d_*+1}^{n}
\big| {{\lambda}_i }X_i - \sqrt{1-{\lambda}_i^2 } \, X_{i-1} \big|^{1-2H_k} \Big| \sigma ( X_{d_*}, \cdots X_{n-1}) \Big]
\right]\\
& \leq & C_{H_k} \mathbb{E}\left[ \left| X_{d_*} \right| ^{1-2H_k}
\prod_{i=d_*+1}^{n-1} \big| {{\lambda}_i }X_i - \sqrt{1-{\lambda}_i^2 } \, X_{i-1} \big|^{1-2H_k} \,
\left( \Big (\sqrt{1-{\lambda}_n^2 } \, |X_{n-1}| \Big) \vee {\lambda}_n \right)^{1-2H_k}
\right]\\
& \le & C_{H_k} {\lambda}_n^{1-2H_k} \mathbb{E}\left[ \left| X_{d_*}
\right| ^{1-2H_k} \prod_{i=d_*+1}^{n-1}
\big| {{\lambda}_i }X_i - \sqrt{1-{\lambda}_i^2 }X_{i-1} \big|^{1-2H_k} \right] \\
& \le & \cdots \le \, C_{H_k}^n
\prod_{i=d_*+1}^n {\lambda}_i^{1-2H_k} \, .
\end{eqnarray*}
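The one-step moment estimate \eqref{e:4.5}, which drives the iteration above, can be checked numerically. The Python sketch below estimates $\mathbb{E} \left[ |\lambda X+b|^{-\alpha}\right]$ by Monte Carlo over a small grid of $(\lambda, b)$ for the illustrative value $\alpha=1/2$; the grid, sample size, and the cutoff $5$ used in the check are arbitrary choices, not the constant $C$ of \cite[Lemma A.1]{HNS}.

```python
import math, random

def moment_mc(lam, b, alpha, n=100_000, seed=11):
    # Monte Carlo estimate of E|lam*X + b|^{-alpha} for a standard Gaussian X.
    rng = random.Random(seed)
    s = 0.0
    for _ in range(n):
        s += abs(lam * rng.gauss(0.0, 1.0) + b) ** (-alpha)
    return s / n

alpha = 0.5                               # illustrative value in (0, 1)
worst = 0.0                               # sup of E|lam X + b|^{-alpha} * (lam v b)^alpha
for lam in [0.2, 1.0, 5.0]:
    for b in [0.2, 1.0, 5.0]:
        worst = max(worst, moment_mc(lam, b, alpha) * max(lam, b) ** alpha)
```

The normalized quantity stays bounded over the grid, consistent with \eqref{e:4.5}.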
Thus we have by \eqref{e.4.3}-\eqref{e:4.4} that
\begin{eqnarray}
&&g_k(s_1, \cdots, s_n, r_1, \cdots, r_n)\nonumber\\
&\le & c_{H_k}^n u_n^{H_k-1} \left( \prod_{i=1}^{n-1} u_i^{2H_k-3/2} \right) \left( \prod_{i=2}^n
(u_i+u_{i-1})^{\frac12- H_k} \right) \prod_{i=d_*+1}^n
{\lambda}_i^{1-2H_k} \nonumber\\
&=& c_{H_k}^n u_n^{H_k-1} \left( \prod_{i=1}^{n-1} u_i^{2H_k-3/2} \right)
\left( \prod_{i=2}^{d_*} (u_i+u_{i-1})^{\frac 12-H_k} \right) \prod_{i=d_*+1}^n
u_{i-1}^{\frac12 -H_k} \, . \nonumber \\
\end{eqnarray}
Consequently,
\begin{eqnarray}
{
g (s_1, \cdots, s_n, r_1, \cdots, r_n)} &=& \prod_{k=1}^d { g_k(s_1, \cdots, s_n, r_1, \cdots, r_n) }
\nonumber \\
&\le & c_{H }^n u_n^{{H^*} -d} \left( \prod_{i=1}^{n-1} u_i^{2{H^*} -\frac{3d}{2}} \right)
\left( \prod_{i=2}^{d_*} (u_i+u_{i-1})^{\frac d2- {H^*} } \right)
\prod_{i=d_*+1}^n u_{i-1}^{\frac d2 -{H^*} } \,. \nonumber \\
\label{e.4.6}
\end{eqnarray}
{(i) We first consider the case $1/2<H_0<1$. }
When $d\ge 2$ and ${H^*} >d-1$, we have ${H^*} > d/2$. We bound
$(u_i+u_{i-1})^{\frac d2- {H^*} }$ in
\eqref{e.4.6} by $ u_{i-1} ^{\frac d2- {H^*} }$. Therefore
\begin{eqnarray}
g (s_1, \cdots, s_n, r_1, \cdots, r_n)
&\le & c_{H }^n u_n^{{H^*} -d} \left( \prod_{i=1}^{n-1} u_i^{2{H^*} -\frac{3d}{2}} \right)
\left( \prod_{i=2}^{d_*}
u_{i-1} ^{\frac d2- {H^*} } \right) \prod_{i=d_*+1}^n
u_{i-1}^{\frac d2 -{H^*} }\nonumber\\
& =&
c_{H }^n \prod_{i=1}^{n} u_i^{ {H^*} -d } \,. \label{e.4.7}
\end{eqnarray}
It follows then
\begin{eqnarray}
\mathbb{E} \left[ u_n^2(t,x) \right]
&=&
\int_{{0< s_1<\cdots<s_n< t\atop 0<r_1,
\cdots, r_n<t} } g (s_1, \cdots, s_n, r_1, \cdots, r_n)
\prod_{i=1}^n {\gamma}_0(s_i-r_i) ds_1dr_1\cdots ds_ndr_n\nonumber\\
&\le & C_H^n
\int_{{0< s_1<\cdots<s_n< t\atop 0<r_1,
\cdots, r_n<t} } \prod_{i=1}^n (s_{i+1}-s_i+r_{{\sigma}(i+1)}-r_{{\sigma}(i)})^{ {H^*} -d }
\nonumber\\
&&\qquad \times \, \prod_{i=1}^n {\gamma}_0(s_i-r_i) ds_1dr_1\cdots ds_ndr_n \,.
\label{e.4.8}
\end{eqnarray}
We use $(a+b)^{-{\beta}}\le a^{-{\beta}/2}b^{-{\beta}/2}$ for all $a, b, {\beta}>0$
to get
\begin{eqnarray}
\mathbb{E} \left[ u_n^2(t,x) \right]
&\le & C_H^n
\int_{{0< s_1<\cdots<s_n< t\atop 0<r_1,
\cdots, r_n<t} } \prod_{i=1}^n (s_{i+1}-s_i )^{
\frac{{H^*} -d }{2} } \prod_{i=1}^n ( r_{{\sigma}(i+1)}-r_{{\sigma}(i)})^{
\frac{{H^*} -d }{2} } \nonumber\\
&&\qquad \times \, \prod_{i=1}^n {\gamma}_0(s_i-r_i) ds_1dr_1\cdots ds_ndr_n\nonumber\\
&= & \frac{C_{ H }^n}{ n!} \int_{[0, t]^{2n}} h(s_1, \cdots, s_n)
h(r_1, \cdots, r_n)
\prod_{i=1}^n {\gamma}_0(s_i-r_i) ds_1dr_1\cdots ds_ndr_n \,, \nonumber\\
\end{eqnarray}
where $h(s_1, \cdots, s_n)$ is the symmetric extension
to $[0, t]^n$ of the function $\prod_{i=1}^n (s_{i+1}-s_i )^{
\frac{{H^*} -d }{2} } $ defined on $0<s_1<\cdots<s_n<t$.
Using the multidimensional version of the Hardy-Littlewood
inequality (see e.g. \cite[ (2.4)]{HN})
we have
\begin{eqnarray}
\mathbb{E} \left[ u_n^2(t,x) \right]
&\le & \frac{C_{ H }^n}{ n!} \left[\int_{[0, t]^n} h(s_1, \cdots, s_n)^{1/H_0} ds_1 \cdots ds_n \right]^{2H_0} \, \nonumber\\
&\le & C_{ H }^n ( n! )^{2H_0-1}
\left[\int_{0<s_1<\cdots<s_n<t} \prod_{i=1}^n (s_{i+1}-s_i )^{
\frac{{H^*} -d }{2H_0} } ds_1 \cdots ds_n \right]^{2H_0} \,. \nonumber\\
\end{eqnarray}
When
\begin{equation}
\frac{{H^*} -d}{2H_0}>-1\,,
\end{equation}
we may use Lemma \ref{lemsimplex} to bound the above multiple integral
to obtain
\begin{eqnarray}
\mathbb{E} \left[ u_n^2(t,x) \right]
&\le & C_{ H }^n ( n! )^{2H_0-1}
\left[ \frac{C_{d, H, H_0} ^n} {{\Gamma}\left(\left(\frac{{H^*} -d}{2H_0}+1\right)n+1\right)} t^{ \frac{({H^*} -d)n}{2H_0}+n} \right]^{2H_0} \,.
\end{eqnarray}
For any $p\in [2, \infty)$, by the hypercontractivity inequality $\| u_n (t,x)\|_p \le (p-1)^{n/2} \| u_n (t,x)\|_2$, we have
\begin{eqnarray}
&& \|u_n (t,x)\|_p \nonumber\\
&\le & p^{n/2} \left(\mathbb{E} \left[ u_n^2(t,x) \right] \right)^{1/2}\nonumber\\
&\le& C_{ H }^{n /2} p^{n/2} ( n! )^{(H_0-1/2) }
\left[ \frac{C_{d, H, H_0} ^n} {{\Gamma}\left(\left(\frac{{H^*} -d}{2H_0}+1\right)n+1\right)} t^{ \frac{({H^*} -d)n}{2H_0}+n} \right]^{ H_0 } \,.
\label{e:4.14}
\end{eqnarray}
When $H^*>d-1$, using Stirling's formula for the gamma function that
\begin{equation}\label{e:4.15}
\Gamma (z) = \sqrt{2\pi/z} \, (z/e)^z \left(1+ O(1/z)\right) \quad \hbox{as } z\to \infty,
\end{equation}
we have by \eqref{e:4.14}
$$
\|u_n (t,x)\|_p \leq \frac{C_{d, H, H_0} ^{n } p^{n/2} } {{\Gamma}\left( ({H^*} -d +1)n /2 +1\right)} t^{ ({H^*} -d+2H_0) n /2} \,.
$$
This implies by the asymptotic behaviour of the Mittag-Leffler function
(e.g. \cite[p.41, Formula (1.8.10)]{kilbas}) that for all $t>0$
\begin{eqnarray}
&&\sum_{n=0}^\infty \| u_n (t,x)\|_p
\, \le \, \sum_{n=0}^\infty\frac{C_{ H, H_0, d }^n p^{n/2}
t^{({H^*} -d+2H_0) n/2}}{\Gamma(n({H^*} -d+1)/2+1)}\nonumber\\
& \le& C\exp\left[ C_H t^{\frac{{H^*} -d+2H_0}{{H^*} -d+1}} p^{\frac{1}{{H^*} -d+1}}
\right]<\infty \label{e:4.16} .
\end{eqnarray}
It follows that $u(t, x):= \sum_{n=0}^\infty u_n (t,x)$ converges in $L^p(\Omega, \mathcal{F}, \mathbb{P})$
for every $p\in [2, \infty)$,
and $u(t, x)$ is a global random field solution to \eqref{spde} with $u(0, x)=1$ satisfying \eqref{e.1.8}.
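The Mittag-Leffler-type bound \eqref{e:4.16} can be illustrated numerically: the partial sums of a series of the form $\sum_{n\geq 0} C^n p^{n/2} t^{an}/\Gamma(bn+1)$ converge for any $a, b>0$. In the Python sketch below the values of $C$, $p$, $t$, $a$, $b$ are arbitrary illustrative stand-ins for the constants and exponents in \eqref{e:4.16}, and the terms are computed in log-space via \texttt{lgamma} to avoid overflow of the gamma function.

```python
import math

def ml_partial_sum(C, p, t, a, b, N=400):
    # Partial sum of sum_{n>=0} C^n p^{n/2} t^{a n} / Gamma(b n + 1), computed
    # in log-space via lgamma so that Gamma(b n + 1) never overflows.
    log_ratio = math.log(C) + 0.5 * math.log(p) + a * math.log(t)
    terms = [math.exp(n * log_ratio - math.lgamma(b * n + 1.0)) for n in range(N)]
    return sum(terms), terms

total, terms = ml_partial_sum(C=2.0, p=2.0, t=1.5, a=0.8, b=0.6)
```

The terms decay superexponentially once $n$ is large, so the sum is finite for every $t>0$ and $p\geq 2$.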
\medskip
When $H^*=d-1$, we have $H^*>d-2H_0$ since $H_0>1/2$, and
it follows from \eqref{e:4.14} and Stirling's formula \eqref{e:4.15} that
$$
\|u_n (t,x)\|_2 \leq C_{d, H, H_0} ^{n } 2^{n/2} t^{ ({H^*} -d+2H_0) n/2} \exp \left(
a_0 (H^*-d+2H_0)n \right),
$$
where $a_0:= \frac12 \left( 1+ \log \frac{2H_0}{2H_0-1} \right) >0$. Clearly there is some positive constant $T_0=T_0 (d, H, H_0) $ so that
$\sum_{n=0}^\infty \|u_n (t,x)\|_2 <\infty$ for any $t\in [0, T_0)$.
It follows that $u(t, x):=\sum_{n=0}^\infty u_n(t, x)$ for $(t, x)\in [0, T_0)\times {\mathbb R}^d$ is a
local random field solution to \eqref{spde}.
This completes the proof of part (i), the existence part, of Theorem \ref{t.1.2}.
\medskip
(ii) When $H_0=1/2$ and $H^*>d-1$, we replace ${\gamma}_0(s-r)$ in \eqref{e.4.8} by
$\delta_{\{0\}}(s-r)$. Thus we have
\begin{eqnarray}
\mathbb{E}\left[u_n(t,x)^2\right]
&\le& C_H^n
\int_{0< s_1<\cdots<s_n< t } \prod_{i=1}^n (s_{i+1}-s_i )^{ {H^*} -d }
ds_1\cdots ds_n \nonumber\\
&\le& \frac{C_{H, d} ^n }{{\Gamma}(({H^*} -d+1)n+1)} t^{({H^*} -d+1)n}\,.
\end{eqnarray}
By a similar argument to that leading to \eqref{e:4.16}, we have
\[
\sum_{n=0}^\infty \|u_n (t,x)\|_p
\le \sum_{n=0}^\infty\frac{C_{ H, d }^n p^{n/2} t^{({H^*} -d+1) n/2}}{\Gamma(n({H^*} -d+1)/2+1)}
\le C\exp\left[ C_{H,d} t p^{\frac{1}{{H^*} -d+1} } \right]<\infty \,.
\]
It follows that $u(t, x):= \sum_{n=0}^\infty u_n(t, x)$ converges in $L^p(\Omega, \mathcal{F}, \mathbb{P})$
for every $p\in [2, \infty)$,
and $u(t, x)$ is a global random field solution to \eqref{spde} with $u (0, x)=1$ on ${\mathbb R}^d$ satisfying \eqref{e.1.5}.
This completes the proof of part (i), the existence part, of Theorem \ref{t.1.1}.
\bigskip
\section{Introduction}
\label{sec:intro}
Consider the saddle-point system
\begin{equation}
\label{eq:sp-system}
\begin{bmatrix}
A & B^T \\
B & 0
\end{bmatrix}
\begin{bmatrix}
x \\
y
\end{bmatrix} =
\begin{bmatrix}
f \\
g
\end{bmatrix},
\end{equation}
where $A \in \mathbb{R}^{n \times n}$ is symmetric positive semidefinite and $B \in \mathbb{R}^{m \times n}$ has full row rank, with $m < n$. We denote the coefficient matrix by
\begin{equation}
\label{eq:defK}
\mathcal{K} = \begin{bmatrix}
A & B^T \\
B & 0
\end{bmatrix}.
\end{equation}
We assume throughout that $\mathcal{K}$ is invertible. Our goal in this paper is to derive eigenvalue bounds for $\mathcal{K}$ under the assumption that $A$ is singular.
\paragraph{Contribution of this paper} We derive a nonzero bound on the positive eigenvalues of $\mathcal{K}$ that does not require invertibility of $A$, by considering the principal angles between the ranges/kernels of $A$ and $B$. Ruiz et al. \cite{rst18} also developed a lower positive eigenvalue bound using principal angles, but their analysis assumes a positive definite $A$.
\paragraph{Notation} Our analysis will rely on the eigenvalues and singular values of $A$ and $B$, as well as some other matrices we will introduce later in the text. We will denote the eigenvalues of a matrix $M \in \mathbb{R}^{n \times n}$ by $$ \mu_i(M), \quad i=1,\dots,n,$$ and in terms of ordering we will assume that $$ \mu_1(M) \geq \mu_2(M) \geq \cdots \geq \mu_n(M).$$ We follow the same convention for singular values of a rectangular matrix $N$, but we use $\sigma$ rather than $\mu$: i.e., the singular values of $N \in \mathbb{R}^{m \times n}$ are denoted by $$ \sigma_1(N) \geq \sigma_2(N) \geq \cdots \geq \sigma_m(N) \ge 0.$$ To increase clarity, we will often refer to the maximal eigenvalues/singular values $\mu_1(M)$ and $\sigma_1(N)$ by $\mu_{\max}(M)$ and $\sigma_{\max}(N)$ respectively. Similarly, we will refer to the minimal values $\mu_n(M)$, $\sigma_m(N)$ by $\mu_{\min}(M)$ and $\sigma_{\min}(N)$. The positive eigenvalues of a matrix will be denoted by a ``+'' superscript -- for instance, we denote the smallest nonzero eigenvalue of a semidefinite matrix $M$ by $\mu_{\min}^+(M)$. For simplicity, we will omit the arguments to $\mu$ and $\sigma$ when we refer to the eigenvalues and singular values of $A$ and $B$. That is, we let
\begin{alignat*}{2}
\mu_{\max} &= \mu_{\max}(A); \ \ \sigma_{\max} &= \sigma_{\max}(B);\\
\mu_{\min} &= \mu_{\min}(A); \ \ \sigma_{\min} &= \sigma_{\min}(B);\\
& \ \ \ \ \ \mu_{\min}^+ = \mu_{\min}^+(A). &
\end{alignat*}
\paragraph{Outline}
In Section \ref{sec:bnd_general} we discuss our general approach of augmenting the leading block of a saddle-point matrix to obtain a lower bound on the positive eigenvalues. In Section \ref{sec:bnd_gamma} we provide new bounds, which rely on the angles between the kernel of $A$ and $B$. We then present numerical experiments in Section \ref{sec:numex} and concluding remarks in Section \ref{sec:conclusions}.
\section{Lower positive eigenvalue bounds using leading block augmentation}
\label{sec:bnd_general}
To illustrate the challenge posed by the problem in hand, recall the following result of Rusten and Winther \cite[Lemma 2.1]{rw92}. In their analysis it is assumed that $A$ is positive definite (as opposed to semidefinite); however, the proof of this lemma does not rely on this, so the result still holds when $A$ is semidefinite.
\begin{lemma}
\label{lem:rw}
The eigenvalues of $\mathcal{K}$ are contained in the union of intervals
$$
I^{-} \ \cup \ I^{+},
$$
where
$$
I^{-} = \left[ \frac{1}{2}(\mu_{\min} - \sqrt{\mu_{\min}^2 + 4\sigma_{\max}^2}), \frac{1}{2}(\mu_{\max} - \sqrt{\mu_{\max}^2+4\sigma_{\min}^2})\right]
$$
and
$$
I^{+} = \left[ \mu_{\min}, \frac{1}{2}(\mu_{\max} + \sqrt{\mu_{\max}^2 + 4\sigma_{\max}^2}) \right].
$$
\end{lemma}
When $A$ is singular, the upper bounds on both positive and negative values of $\mathcal{K}$ are unchanged, and the lower negative bound reduces to $-\sigma_{\max}$. The main difficulty is that the lower bound on positive eigenvalues reduces to zero, which is not a useful bound, especially in situations where $\mathcal{K}$ is known to be nonsingular (which is our assumption throughout this paper). When the null spaces of $A$ and $B$ are well separated, the matrix $\mathcal{K}$ may in fact be well-conditioned and its minimal positive eigenvalue bounded away from zero.
As a motivating example that illustrates the range of possibilities, consider the coefficient matrix
\begin{equation}
\label{eq:example}
\mathcal{K} = \begin{bmatrix}
1 & 0 & b_1 \\
0 & 0 & b_2 \\
b_1 & b_2 & 0
\end{bmatrix}
\textrm{ where }
A = \begin{bmatrix}
1 & 0 \\
0 & 0
\end{bmatrix} \textrm{ and } B = \begin{bmatrix} b_1 & b_2 \end{bmatrix},
\end{equation}
with $b_1^2 + b_2^2 = 1$ and $b_1, b_2 > 0$. The eigenvalues of $A$ and singular value of $B$ are the same for all such $b_1$, $b_2$, but the lowest positive eigenvalue of $\mathcal{K}$ varies depending on $b_1$ and $b_2$. The eigenvalues $\lambda$ of $\mathcal{K}$ are the roots of the cubic polynomial $p(\lambda) = \lambda^3 - \lambda^2 - \lambda + b_2^2$. This polynomial has two positive roots and one negative root \cite[Corollary 2.2]{bg21}; the smaller positive root approaches zero as $b_2$ goes to zero (i.e., when $A$ and $B$ have overlapping null spaces), but as $b_2$ goes to 1 (i.e., when $A$ and $B$ have orthogonal null spaces) the smaller positive root approaches 1.
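This behaviour is easy to confirm numerically. The following Python sketch locates the smaller positive root of $p(\lambda) = \lambda^3 - \lambda^2 - \lambda + b_2^2$ by bisection on $(0,1)$, using that $p(0)=b_2^2>0$ and $p(1)=b_2^2-1<0$ for $0<b_2<1$; the two sample values of $b_2$ are illustrative.

```python
def smallest_positive_root(b2):
    # p(lam) = lam^3 - lam^2 - lam + b2^2 satisfies p(0) = b2^2 > 0 and
    # p(1) = b2^2 - 1 < 0 for 0 < b2 < 1, and has exactly one root in (0, 1):
    # the smaller positive eigenvalue of K.  Locate it by bisection.
    p = lambda lam: lam ** 3 - lam ** 2 - lam + b2 ** 2
    lo, hi = 0.0, 1.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if p(mid) > 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

near_aligned = smallest_positive_root(0.10)     # null spaces nearly aligned
near_orthogonal = smallest_positive_root(0.99)  # null spaces nearly orthogonal
```

For $b_2=0.1$ the smaller positive eigenvalue is close to $b_2^2$, while for $b_2=0.99$ it is close to 1.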
We now present a general approach for deriving nonzero bounds for the lower positive eigenvalues of $\mathcal{K}$ when $A$ is singular. We recall the following result \cite{f75,gg03}:
\begin{lemma}
\label{lem:kw}
Let
\begin{equation}
\label{eq:def_kw}
\mathcal{K}(W) = \begin{bmatrix}
A + B^T WB & B^T \\
B & 0
\end{bmatrix},
\end{equation}
where $W \in \mathbb{R}^{m \times m}$. If $\mathcal{K}$ and $\mathcal{K}(W)$ are both nonsingular, then
\begin{equation}
\label{eq:inv_diff}
\mathcal{K}^{-1} = (\mathcal{K}(W))^{-1} + \begin{bmatrix}
0 & 0 \\
0 & W
\end{bmatrix}.
\end{equation}
\end{lemma}
We will assume that $W$ is positive semidefinite and the leading block $A_W := A+B^TWB$ of $\mathcal{K}(W)$ is positive definite. We can use this along with \eqref{eq:inv_diff} to derive a nonzero bound on the lower positive eigenvalues of $\mathcal{K}$, using a free matrix parameter $W$.
\begin{theorem}
\label{thm:Wbound}
Let $W \in \mathbb{R}^{m \times m}$ be a symmetric positive semidefinite matrix and let $A_W = A + B^T W B$. Then the positive eigenvalues of $\mathcal{K}$ are greater than or equal to
$$\min\left\{ \mu_{\min}(A_W), \frac{1}{\mu_{\max}(W)} \right\}.$$
\end{theorem}
\begin{proof}
We derive a lower bound on the positive eigenvalues of $\mathcal{K}$ by considering an upper bound on the eigenvalues of $\mathcal{K}^{-1}$. By combining \cite[Equation (3.4)]{bgl05} and \eqref{eq:inv_diff}, we obtain
\begin{equation}
\label{eqn:sKinv}
\mathcal{K}^{-1} = \begin{bmatrix}
A_W^{-1} - A_W^{-1}B^TS_W^{-1}BA_W^{-1} & A_W^{-1}B^T S_W^{-1} \\
S_W^{-1}BA_W^{-1} & -S_W^{-1} + W
\end{bmatrix},
\end{equation}
where $S_W = BA_W^{-1}B^T$. Notice that we can write
$$
\mathcal{K}^{-1} = \begin{bmatrix}
A_W^{-1} & 0 \\
0 & W
\end{bmatrix} - \begin{bmatrix}
A_W^{-1} B^T \\ -I
\end{bmatrix} S_W^{-1} \begin{bmatrix} BA_W^{-1} & -I \end{bmatrix}.
$$
Because the subtracted term is positive semidefinite, we conclude that the eigenvalues of $\mathcal{K}^{-1}$ are less than or equal to the eigenvalues of
$$
\begin{bmatrix}
A_W^{-1} & 0 \\
0 & W
\end{bmatrix}.
$$
The stated result follows.
\end{proof}
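As a concrete check of Theorem \ref{thm:Wbound}, consider the example \eqref{eq:example} with $b_1=b_2=1/\sqrt{2}$ and the scalar choice $W=\gamma$ (here $m=1$, so $W$ is $1\times 1$). The Python sketch below compares the smallest positive eigenvalue of $\mathcal{K}$, i.e.\ the root of $p(\lambda)=\lambda^3-\lambda^2-\lambda+b_2^2$ in $(0,1)$, with the bound $\min\left\{ \mu_{\min}(A_W), 1/\mu_{\max}(W) \right\}$; the value $\gamma=1$ is an illustrative choice.

```python
import math

# Example (eq:example) with b1 = b2 = 1/sqrt(2) and scalar W = gamma (m = 1);
# the value gamma = 1 is an illustrative choice.
b1 = b2 = 1.0 / math.sqrt(2.0)
gamma = 1.0

# A_W = A + gamma * B^T B = [[1 + g b1^2, g b1 b2], [g b1 b2, g b2^2]].
a11, a12, a22 = 1.0 + gamma * b1 * b1, gamma * b1 * b2, gamma * b2 * b2
tr, det = a11 + a22, a11 * a22 - a12 * a12
mu_min_AW = 0.5 * (tr - math.sqrt(tr * tr - 4.0 * det))

# Smallest positive eigenvalue of K: the root of p in (0, 1), by bisection.
p = lambda lam: lam ** 3 - lam ** 2 - lam + b2 ** 2
lo, hi = 0.0, 1.0
for _ in range(80):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if p(mid) > 0.0 else (lo, mid)
lam_min_plus = 0.5 * (lo + hi)

bound = min(mu_min_AW, 1.0 / gamma)   # lower bound from Theorem thm:Wbound
```

Here the bound evaluates to roughly $0.293$ while the smallest positive eigenvalue is roughly $0.403$, so the bound holds with some slack.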
\section{Augmentation-based bounds when $W = \gamma I$}
\label{sec:bnd_gamma}
As in section \ref{sec:bnd_general}, we consider the augmented matrix $\mathcal{K}(W)$, but in this case we restrict ourselves to the case where
$$W = \gamma I,$$
as is done in \cite{eg16,gg03}. For simplicity we write $$A_{\gamma} = A+\gamma B^T B ; \qquad \mathcal{K}_\gamma = \mathcal{K}(\gamma I).$$ In this case, the lower bound on positive eigenvalues presented in Theorem~\ref{thm:Wbound} reduces to $\min\left\{ \mu_{\min}(A_\gamma), \frac{1}{\gamma} \right\}$.
We first consider the special case where $\rank(A) = n-m$ and $\mathcal{K}$ is nonsingular. We say here that $A$ is \textit{lowest-rank} because if its rank were any lower then $\mathcal{K}$ would necessarily be singular. It was shown in \cite{eg15, eg16} that $A_{\gamma} $ and $\mathcal{K}_\gamma$ have unique properties, which we will use here to refine the bound on lower positive eigenvalues given in Theorem \ref{thm:Wbound}. We return in Section \ref{sec:bnd_gen} to the general case, where $A$ is assumed to be rank-deficient but not lowest rank.
\subsection{Bounds when $\rank(A) = n-m$}
\label{sec:bnd_mrd}
\begin{theorem}
\label{thm:mrd_bound}
When $\rank(A) = n-m$, we have
\begin{equation}
\label{eq:min_Agamma}
\mu_{\min}(A_{\gamma}) \ge \rho \cdot \min\left\{ \mu_{\min}^+(A), \gamma \sigma_{\min}^2(B) \right\},
\end{equation}
where $\rho \le 1$ is a constant that does not depend on $\gamma$.
\end{theorem}
\begin{proof}
We begin by writing a decomposition of $A_{\gamma}$ as was done in \cite{eg16}. Let
$$
A = U \Lambda U^T, \ \ B = Q S V^T
$$
be the reduced (economy-size) singular value decompositions of $A$ and $B$.
The matrices $\Lambda \in \mathbb{R}^{(n-m) \times (n-m)}$ and $U \in \mathbb{R}^{n \times (n-m)}$ comprise the eigenpairs of $A$ that correspond to its nonzero eigenvalues, and the columns of $V \in \mathbb{R}^{n \times m}$ are the set of eigenvectors of $B^T B$ that correspond to its nonzero eigenvalues. We can then write
\begin{equation}
\label{eq:agamma_decomp}
A_{\gamma} = P \Sigma P^T,
\end{equation}
where
$$
P = \begin{bmatrix}
U & V
\end{bmatrix}, \ \ \Sigma = \begin{bmatrix}
\Lambda & 0 \\
0 & \gamma S^2
\end{bmatrix}.
$$
The decomposition in \eqref{eq:agamma_decomp} resembles an eigenvalue decomposition, but is not an eigenvalue decomposition in general because the columns of $V$ will not be orthogonal to those of $U$.
We then derive a lower bound on the eigenvalues of $A_{\gamma}$ by obtaining an upper bound on the eigenvalues of $A_{\gamma}^{-1}$. We can write
\begin{equation}
\mu_{\max}(A_{\gamma}^{-1}) = || A_{\gamma}^{-1} || = || P^{-T} \Sigma^{-1} P^{-1} || \le ||\Sigma^{-1}|| \cdot ||P^{-1}||^2.
\label{eq:pinvnorm}
\end{equation}
The largest eigenvalue of $\Sigma^{-1}$ is equal to $\max\left\{ \frac{1}{\mu_{\min}^+(A)}, \frac{1}{\gamma \sigma_{\min}^2(B)} \right\}$. The stated result follows by setting $\rho = ||P^{-1}||^{-2}$ in~\eqref{eq:pinvnorm}.
We claim that $\rho \le 1$, with equality when $U$ and $V$ are mutually orthogonal (that is, when the range of $A$ is orthogonal to the range of $B^T$). To show this is the case, consider $x \in \ker(A)$. We then have
$$
P^T x = \begin{bmatrix}
U^T x \\
V^T x
\end{bmatrix} = \begin{bmatrix}
0 \\
V^T x
\end{bmatrix}.
$$
Defining $q = P^T x$, so that $x = P^{-T} q$, and using the fact that $V$ has orthonormal columns, we have
$$
\| q \| = \| V^T x \| \le \| x \| = \|P^{-T} q\| \le \|P^{-T}\| \| q\|,
$$
meaning that $||P^{-T}||$ (and therefore $||P^{-1}||$) is greater than or equal to 1. Thus, $\rho \le 1$.
\end{proof}
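As a quick numerical sanity check of Theorem \ref{thm:mrd_bound} (not part of the original development; the random test matrices and NumPy-based setup below are our own construction), one can verify the bound on a small lowest-rank example:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 3

# A lowest-rank example: A is PSD with rank(A) = n - m, B has full row rank.
C = rng.standard_normal((n, n - m))
A = C @ C.T
B = rng.standard_normal((m, n))

# U spans range(A); V spans range(B^T), as in the decomposition A_gamma = P Sigma P^T.
lam, evecs = np.linalg.eigh(A)             # eigenvalues in ascending order
U = evecs[:, m:]                           # eigenvectors of the n - m nonzero eigenvalues
V = np.linalg.svd(B, full_matrices=False)[2].T
P = np.hstack([U, V])

rho = 1.0 / np.linalg.norm(np.linalg.inv(P), 2) ** 2
mu_min_plus = lam[m]                       # smallest positive eigenvalue of A
sigma_min2 = np.linalg.svd(B, compute_uv=False).min() ** 2

for gamma in (0.1, 1.0, 10.0):
    A_gamma = A + gamma * B.T @ B
    mu_min = np.linalg.eigvalsh(A_gamma).min()
    bound = rho * min(mu_min_plus, gamma * sigma_min2)
    assert mu_min >= bound - 1e-10         # the bound of Theorem thm:mrd_bound
```

The check also confirms $\rho \le 1$ directly.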
\begin{comment}
Theorem \ref{thm:mrd_bound} suggests that a good choice for the parameter $\gamma$ is
$$
\tilde{\gamma} = \frac{\sigma_{\min}^2}{\mu_{\min}^+}.
$$
This is the largest value of $\gamma$ that achieves the bound $\mu_{\min}(A_{\gamma}) = c \mu_{\min}^+$. Taking a larger value of $\gamma$ results in a smaller bound on $\mu_{\min}(A_{\gamma})$, while taking a smaller value can potentially result in a smaller overall bound for the smallest positive eigenvalue of $\mathcal{K}_{\gamma}$.
\end{comment}
We now provide a value for $\rho = ||P^{-1}||^{-2}$ in terms of the principal angles between $\range(A)$ and $\range(B^T)$. Let $\theta_i$, $i = 1, \ldots, m$, denote these angles. The cosines $\cos(\theta_i)$ of these angles are given by the singular values of $U^T V$ (or $V^T U$).
\begin{lemma}
\label{lem:rho_val}
Let $\theta_{\min}$ denote the minimum principal angle between $\range(A)$ and $\range(B^T)$. Then
$$
||P^{-1}|| = \frac{1}{\sqrt{1-\cos{(\theta_{\min})}}},
$$
which implies that $\rho$ defined in \eqref{eq:min_Agamma} is given by
$$
\rho = 1-\cos{(\theta_{\min})}.
$$
\end{lemma}
\begin{proof}
We proceed by analyzing the eigenvalues of $P^TP$, using the fact that
$$
||P^{-1}|| = \frac{1}{\sqrt{\mu_{\min}({P^T P})}}.
$$
We write $P^T P$ in block form:
$$
P^T P = \begin{bmatrix}
U^T \\
V^T
\end{bmatrix}
\begin{bmatrix}
U & V
\end{bmatrix} = \begin{bmatrix}
I & U^T V \\
V^T U & I
\end{bmatrix}.
$$
The (1,1)-block of $P^TP$ is size $(n-m) \times (n-m)$ and the (2,2)-block is size $m \times m$. We now assume without loss of generality that $n-m \ge m$. (If $n-m < m$, we can reorder the blocks of $P^T P$ such that the (1,1)-block is larger, and use the same analysis as below.)
Letting $v = \begin{bmatrix} x^T & y^T \end{bmatrix}^T$ be an appropriately partitioned eigenvector, we write the eigenvalue equations for $P^T P$:
\begin{subequations}
\begin{align}
x + U^TV y &= \lambda x; \label{eq:ptp_eig_1} \\
V^TU x + y &= \lambda y.\label{eq:ptp_eig_2}
\end{align}
\end{subequations}
There is an eigenvalue $\lambda = 1$ with multiplicity $n-2m$, which we observe by choosing $x \in \ker(V^T U)$ and $y=0$. For the remaining $2m$ eigenvalues, we assume $\lambda \ne 1$. From \eqref{eq:ptp_eig_1} we have $x = \frac{1}{\lambda-1}U^T V y$, which we substitute into \eqref{eq:ptp_eig_2} to obtain
\begin{equation}
\label{eqn:y_vuuv}
y = \frac{1}{(\lambda-1)^2}V^T U U^T V y.
\end{equation}
The eigenvalues of $V^T U U^T V$ are given by $\cos^2(\theta_i)$, where $\theta_i$ are the principal angles between $\range(A)$ and $\range(B^T)$. Thus, for each $\theta_i$ we can write \eqref{eqn:y_vuuv} as
$$
y = \frac{\cos^2(\theta_i)}{(\lambda_i-1)^2} y,
$$
implying that
$$
\lambda_i = 1 \pm \cos(\theta_i).
$$
Thus each $\theta_i$ yields two distinct eigenvalues. Together with the $n-2m$ eigenvalues with $\lambda = 1$, this accounts for all $n$ eigenvalues of $P^TP$. Therefore, the smallest eigenvalue of $P^TP$ is given by $1- \cos(\theta_{\min})$; the stated result follows.
\end{proof}
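The identity of Lemma \ref{lem:rho_val} is easy to check numerically. The following sketch (our own construction, using random orthonormal bases in place of $U$ and $V$) compares $\|P^{-1}\|$ with $1/\sqrt{1-\cos(\theta_{\min})}$, computing the principal-angle cosines as the singular values of $U^T V$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 8, 3

# Random orthonormal bases standing in for range(A) and range(B^T).
U = np.linalg.qr(rng.standard_normal((n, n - m)))[0]
V = np.linalg.qr(rng.standard_normal((n, m)))[0]

# cos(theta_i) are the singular values of U^T V; the smallest principal
# angle corresponds to the largest cosine.
cosines = np.linalg.svd(U.T @ V, compute_uv=False)
cos_theta_min = cosines.max()

P = np.hstack([U, V])
norm_Pinv = np.linalg.norm(np.linalg.inv(P), 2)
assert np.isclose(norm_Pinv, 1.0 / np.sqrt(1.0 - cos_theta_min))
assert np.isclose(1.0 / norm_Pinv**2, 1.0 - cos_theta_min)  # rho = 1 - cos(theta_min)
```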
We can use the results we have established for matrices with lowest-rank $A$ to derive a lower bound on the positive eigenvalues of $\mathcal{K}$ that does not require us to know the eigenvalues of $A_{\gamma}$. We saw in Theorem \ref{thm:Wbound} that for $W = \gamma I$, the bound is given by $\min\left\{ \mu_{\min}(A_\gamma), \frac{1}{\gamma} \right\}$. As $\gamma$ decreases, the value of $\mu_{\min}(A_\gamma)$ approaches zero (because $A_\gamma$ approaches $A$); thus, we achieve the best possible lower bound when
$$
\frac{1}{\gamma}= \mu_{\min}(A_\gamma).
$$
Since we do not generally know the value of $\mu_{\min}(A_\gamma)$, we can instead choose $\gamma$ so that $\frac{1}{\gamma}$ equals the lower bound on $\mu_{\min}(A_\gamma)$ given by Theorem \ref{thm:mrd_bound} and Lemma \ref{lem:rho_val}. That is, we find a $\gamma$ that satisfies
$$
\frac{1}{\gamma} = \left(1 - \cos(\theta_{\min}) \right) \min\left\{ \mu_{\min}^+, \gamma \sigma_{\min}^2\right\}.
$$
Depending on which of the arguments to the $\min$ function is smaller, we either have
$$\frac{1}{\gamma} = \mu_{\min}^+ (1- \cos(\theta_{\min}))$$
or we have $\frac{1}{\gamma} = (1- \cos(\theta_{\min})) \cdot \gamma \sigma_{\min}^2$, which implies that
$$
\frac{1}{\gamma} = \sigma_{\min}\sqrt{1-\cos(\theta_{\min})}.
$$
Therefore, if we select
$$
\frac{1}{\gamma} = \min\left\{ \mu_{\min}^+ (1- \cos(\theta_{\min})), \sigma_{\min}\sqrt{1-\cos(\theta_{\min})} \right\},
$$
we know that $\mu_{\min}(A_{\gamma})$ will be greater than or equal to this value of $\frac{1}{\gamma}$. This gives the following result:
\begin{theorem}
\label{thm:mrd_K_bnd}
When $\rank(A) = n-m$, the positive eigenvalues of $\mathcal{K}$ are greater than or equal to
$$
\min\left\{ \mu_{\min}^+ (1- \cos(\theta_{\min})), \sigma_{\min}\sqrt{1-\cos(\theta_{\min})} \right\}.
$$
\end{theorem}
In some cases, more may be known about the null spaces of $A$ and $B$ than the ranges of $A$ and $B^T$. For these settings, it is convenient to re-frame the result of Theorem \ref{thm:mrd_K_bnd} to rely on the angle between kernels rather than the angle between ranges. Because $\ker(A)$ and $\ker(B)$ are respectively orthogonal to $\range(A)$ and $\range(B^T)$, the principal angles are the same between both pairs of subspaces. The following result then holds.
\begin{corollary}
\label{cor:mrd_kernels_bnd}
Let $\rank(A) = n-m$ and let $\psi_{\min}$ denote the minimum principal angle between $\ker(A)$ and $\ker(B)$. The positive eigenvalues of $\mathcal{K}$ are greater than or equal to
$$
\min\left\{ \mu_{\min}^+ (1- \cos(\psi_{\min})), \sigma_{\min}\sqrt{1-\cos(\psi_{\min})} \right\}.
$$
\end{corollary}
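Theorem \ref{thm:mrd_K_bnd} can likewise be checked numerically. The sketch below (our own random construction, assuming the generic case in which $\mathcal{K}$ is nonsingular) compares the smallest positive eigenvalue of $\mathcal{K}$ against the stated bound:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 8, 3
C = rng.standard_normal((n, n - m))
A = C @ C.T                                # PSD, rank n - m (lowest rank)
B = rng.standard_normal((m, n))
K = np.block([[A, B.T], [B, np.zeros((m, m))]])

lam, evecs = np.linalg.eigh(A)             # ascending order
mu_min_plus = lam[m]                       # smallest positive eigenvalue of A
sigma_min = np.linalg.svd(B, compute_uv=False).min()

# Smallest principal angle between range(A) and range(B^T).
U = evecs[:, m:]
V = np.linalg.svd(B, full_matrices=False)[2].T
cos_min = np.linalg.svd(U.T @ V, compute_uv=False).max()

bound = min(mu_min_plus * (1 - cos_min), sigma_min * np.sqrt(1 - cos_min))
pos = np.linalg.eigvalsh(K)
pos = pos[pos > 0]
assert pos.min() >= bound - 1e-10          # the bound of Theorem thm:mrd_K_bnd
```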
\subsection{Bounds when $\rank(A) \ge n-m$}
\label{sec:bnd_gen}
We now return to the case in which $A$ is rank-deficient but not lowest rank, and discuss how the results of the previous section can be extended to this case. Let us denote the eigenvalue decomposition of $A$ by:
$$
A = U \Lambda U^T.
$$
Let $\Lambda^{\max}_{n-m}$ be a diagonal matrix of the $n-m$ largest eigenvalues of $\Lambda$ and $\Lambda^{\min}_m$ be a diagonal matrix of the $m$ smallest. Similarly, let $U^{\max}_{n-m}$ denote the eigenvectors corresponding to the $n-m$ largest eigenvalues and $U^{\min}_m$ the eigenvectors corresponding to the $m$ smallest eigenvalues. We then have
\begin{equation}
\label{eq:A_spec_split}
A = \begin{bmatrix}
U^{\max}_{n-m} & U^{\min}_m
\end{bmatrix}
\begin{bmatrix}
\Lambda^{\max}_{n-m} & 0\\
0 & \Lambda^{\min}_m
\end{bmatrix}
\begin{bmatrix}
\left(U^{\max}_{n-m}\right)^T \\
\left(U^{\min}_m\right)^T
\end{bmatrix}.
\end{equation}
As before, if we consider a weight matrix $W = \gamma I$, a lower bound on the positive eigenvalues of $\mathcal{K}$ is given by
$$
\min\left\{ \frac{1}{\gamma}, \mu_{\min}(A_{\gamma}) \right\},
$$
as this bound does not depend on the nullity of $A$. When $A$ is not lowest rank, the bound of Theorem \ref{thm:mrd_bound} is not immediately applicable. However, we note from \eqref{eq:A_spec_split} that
\begin{equation}
\label{eq:a_sum}
A = A^{\max}_{n-m} + A^{\min}_m,
\end{equation}
where $A^{\max}_{n-m} = U^{\max}_{n-m}\Lambda^{\max}_{n-m}\left(U^{\max}_{n-m}\right)^T$ is a positive semidefinite matrix of rank $n-m$ and $A^{\min}_m = U^{\min}_m \Lambda^{\min}_m \left(U^{\min}_m\right)^T$ is a positive semidefinite matrix of rank at most $m$. Thus, the eigenvalues of $A_{\gamma}$ are all greater than or equal to those of
$$
A^{\max}_{n-m} + \gamma B^T B =: A^{\max}_{\gamma}.
$$
The eigenvalue $\mu_{n-m}$ is the smallest eigenvalue in $\Lambda^{\max}_{n-m}$ and therefore the smallest positive eigenvalue of $A^{\max}_{n-m}$. Let $\tilde{\theta}_{\min}$ denote the minimum principal angle between $\range(A^{\max}_{n-m})$ and $\range({B^T})$. By Theorem \ref{thm:mrd_bound} and Lemma \ref{lem:rho_val}, we have
$$
\mu_{\min}(A_{\gamma}) \ge \mu_{\min}(A^{\max}_{\gamma}) \ge \left( 1-\cos(\tilde{\theta}_{\min}) \right) \cdot \min\left\{ \mu_{n-m}, \gamma \sigma_{\min}^2 \right\}.
$$
As we did before, we can select $\frac{1}{\gamma}$ to be equal to the smaller of these two values to obtain a lower bound on the positive eigenvalues of $\mathcal{K}$ that does not require forming an augmented matrix. The proof of the following theorem is similar to that of Theorem \ref{thm:mrd_K_bnd} and is omitted.
\begin{theorem}
\label{thm:rd_K_eig}
Let $A$ be semidefinite with $n-m \le \rank(A) \le n$. The positive eigenvalues of $\mathcal{K}$ are greater than or equal to
$$
\min\left\{ \mu_{n-m} (1- \cos(\tilde{\theta}_{\min})), \sigma_{\min}\sqrt{1-\cos(\tilde{\theta}_{\min})} \right\},
$$
where $\mu_{n-m}$ denotes the $(n-m)$-th largest eigenvalue of $A$ and $\tilde{\theta}_{\min}$ the smallest principal angle between $\range(B^T)$ and the subspace spanned by the eigenvectors corresponding to the $n-m$ largest eigenvalues of $A$. (Or, equivalently, $\tilde{\theta}_{\min}$ is the smallest principal angle between $\ker(B)$ and the subspace spanned by the eigenvectors corresponding to the $m$ smallest eigenvalues of $A$ -- see Corollary \ref{cor:mrd_kernels_bnd}.)
\end{theorem}
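A numerical sanity check of Theorem \ref{thm:rd_K_eig} on a random example with $n-m < \rank(A) < n$ (the construction below is our own and not part of the original development):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 8, 3
C = rng.standard_normal((n, n - 1))
A = C @ C.T                                # PSD, rank n - 1: deficient but not lowest rank
B = rng.standard_normal((m, n))
K = np.block([[A, B.T], [B, np.zeros((m, m))]])

lam, evecs = np.linalg.eigh(A)             # ascending order
mu_nm = lam[m]                             # (n - m)-th largest eigenvalue of A
U_max = evecs[:, m:]                       # eigenvectors of the n - m largest eigenvalues
V = np.linalg.svd(B, full_matrices=False)[2].T
cos_t = np.linalg.svd(U_max.T @ V, compute_uv=False).max()
sigma_min = np.linalg.svd(B, compute_uv=False).min()

bound = min(mu_nm * (1 - cos_t), sigma_min * np.sqrt(1 - cos_t))
pos = np.linalg.eigvalsh(K)
pos = pos[pos > 0]
assert pos.min() >= bound - 1e-10          # the bound of Theorem thm:rd_K_eig
```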
\paragraph{Remark.} Our approach in deriving the previous result was to convert a general rank-deficient $A$ into the lowest-rank matrix $A^{\max}_{n-m}$ by removing the part of the spectrum corresponding to the $m$ smallest eigenvalues. However, removing this part of the spectrum of $A$ is not always a good choice, in that it may lead to an overly pessimistic bound. For example, consider the matrix (with $n=3$ and $m=2$):
$$
\mathcal{K} = \left[\begin{array}{c c c | c c}
1 & 0 & 0 & 0 & 1 \\
0 & \alpha & 0 & 0 & 0\\
0 & 0 & 0 & 1 & 0 \\
\hline
0 & 0 & 1 & 0 & 0 \\
1 & 0 & 0 & 0 & 0
\end{array}\right] =: \begin{bmatrix}
A & B^T \\
B & 0
\end{bmatrix},
$$
where $0 < \alpha < 1$. The positive eigenvalues of $\mathcal{K}$ are $\alpha, 1$, and $\frac{1+\sqrt{5}}{2}$. The ``non-removed'' eigenvector $U^{\max}_{n-m}$, which is in this case the eigenvector corresponding to $\lambda = 1$, is:
$$
U^{\max}_{n-m} = \begin{bmatrix}
1 \\
0 \\
0
\end{bmatrix}.
$$
Because this eigenvector is in the range of $B^T$, the value $\tilde{\theta}_{\min}$ is 0, meaning that Theorem \ref{thm:rd_K_eig} gives a bound of 0. We would obtain a better bound if, instead of keeping the part of the spectrum of $A$ that corresponds to the eigenvalue $\lambda = 1$, we kept the portion of the spectrum corresponding to $\lambda = \alpha$ (this would in fact give a tight bound of $\alpha$). However, the issue of optimizing what subspace of $\range(A)$ to use in order to obtain a bound is beyond the scope of this work.
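The $3+2$ example above is small enough to check directly. The following sketch (with $\alpha = 1/2$, a value we pick for illustration) confirms the stated positive eigenvalues:

```python
import numpy as np

alpha = 0.5
K = np.array([
    [1, 0,     0, 0, 1],
    [0, alpha, 0, 0, 0],
    [0, 0,     0, 1, 0],
    [0, 0,     1, 0, 0],
    [1, 0,     0, 0, 0],
])

eigs = np.linalg.eigvalsh(K)
pos = np.sort(eigs[eigs > 0])
expected = np.sort([alpha, 1.0, (1 + np.sqrt(5)) / 2])
assert np.allclose(pos, expected)

# The kept eigenvector e_1 of A lies in range(B^T) = span{e_1, e_3}, so
# cos(theta_min) = 1 and the bound of Theorem rd_K_eig degenerates to 0.
```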
\section{Numerical experiments}
\label{sec:numex}
We test our eigenvalue bounds on two problems. The first is an electromagnetics model problem described in \cite{gs07}. Consider the time-harmonic Maxwell equations in lossless media with perfectly conducting boundaries and constant coefficients. The problem is to find the vector field $u$ and multiplier $p$ such that
\begin{align*}
\nabla \times \nabla \times u + \nabla p &= f \textrm{ in } \Omega,\\
\nabla \cdot u &= 0 \textrm{ in } \Omega,\\
u \times n &= 0 \textrm{ on } \partial \Omega,\\
p &= 0 \textrm{ on } \partial \Omega.
\end{align*}
Discretizing with N\'{e}d\'{e}l\'{e}c finite elements for $u$ and nodal elements for $p$ \cite{m03} yields a linear system of the form
\begin{equation*}
\begin{bmatrix}
A & B^T \\
B & 0
\end{bmatrix}
\begin{bmatrix}
u \\
p
\end{bmatrix} =
\begin{bmatrix}
g \\
0
\end{bmatrix},
\end{equation*}
where $A$ is a discrete curl-curl operator and $B$ is a discrete divergence operator.
In the above-described problem, $A$ has rank $n-m$, and hence it is lowest rank per the terminology we use in this paper. Figure \ref{fig:maxwell} shows the predicted bound (solid line) and the actual smallest positive eigenvalue (dashed line) for various values of $\gamma$ for a Maxwell matrix with $n = 6,080$ and $m=1,985$.
The second problem describes linear systems arising from an interior point method (IPM) solution to a quadratic program (QP); see \cite{nw06} and the references therein for a detailed description. At each iteration of the IPM, we solve a linear system with a matrix of the form
$$
\mathcal{K} = \begin{bmatrix}
H + X^{-1}Z & J^T \\
J & 0
\end{bmatrix},
$$
where $H$ and $J$ are respectively the Hessian and Jacobian matrices for the QP, and $X$ and $Z$ are diagonal matrices of the current primal and dual iterates, some entries of which go to 0 as the iterations progress. Thus, the leading block becomes progressively more ill-conditioned as the iterations proceed.
In Figure \ref{fig:tomlab} we show the results of our bounds for an IPM linear system arising from TOMLAB\footnote{Test matrices available at https://tomopt.com/tomlab/.} Problem 17, for which the saddle-point matrix $\mathcal{K}$ is numerically singular. This problem has $n=293$ and $m=286$. For the particular matrix shown in the experiment below (which arises in the 12th iteration of the IPM algorithm of \cite{m92}), there are 115 ``numerically zero'' eigenvalues of the leading block (which we define as those less than machine epsilon times the largest eigenvalue of that block).
\begin{figure}[tbh!]
\centering
\includegraphics[width=.8\linewidth]{Agamma_maxwell_i4.pdf}
\caption{Comparison of predicted and actual smallest positive eigenvalue bounds at various values of $\gamma$ for the Maxwell matrix (lowest rank)}
\label{fig:maxwell}
\end{figure}
\begin{figure}[tbh!]
\centering
\includegraphics[width=.8\linewidth]{Agamma_tomlab17_iter12.pdf}
\caption{Comparison of predicted and actual smallest positive eigenvalue bounds at various values of $\gamma$ for the IPM matrix for TOMLAB QP 17}
\label{fig:tomlab}
\end{figure}
In both cases the actual smallest positive eigenvalue $\mu_{\min}(\mathcal{K})$ occurs precisely where $\frac{1}{\gamma} = \mu_{\min}(A_{\gamma})$. The bounds for the Maxwell matrix are rather tight, in the sense that they are of the same order of magnitude as the eigenvalue (we also see this with Maxwell matrices of other sizes): the predicted eigenvalue bound is 0.0453 while the actual smallest positive eigenvalue is 0.0611.
The bound for the TOMLAB problem is looser: the predicted bound is $4.716 \times 10^{-7}$ while the actual smallest positive eigenvalue is $1.817 \times 10^{-4}$. Recall that our approach for deriving the bound for a matrix with $A$ that does not have the lowest rank consisted of two steps: (1) implicitly convert the matrix to one with a lowest-rank leading block by ``dropping'' part of the spectrum of $A$ corresponding to the smallest positive eigenvalues; and (2) estimate the lower bound for the matrix with the lowest-rank leading block using the results of Section \ref{sec:bnd_mrd}, using the fact that this will also be a lower bound for the original matrix. Because our bound in the non-lowest-rank case relies on ``dropping'' part of the spectrum of $A$, as discussed in Section \ref{sec:bnd_gen}, we might in general expect that to lead to some looseness in the bound.
However, the dropping is not the cause of the looseness for the TOMLAB problem: the saddle-point matrix we obtain by simply replacing $A$ with its retained portion $A_{n-m}^{\max}$ (defined in \eqref{eq:a_sum}) has almost the same smallest positive eigenvalue as the original matrix ($1.810 \times 10^{-4}$, compared with $1.817 \times 10^{-4}$). Thus, the looseness in this bound does not come from dropping part of the spectrum of $A$ to create a lowest-rank matrix, but rather from the estimation of the smallest positive eigenvalue of the modified matrix.
\section{Conclusions}
\label{sec:conclusions}
We have described a novel framework for bounding eigenvalues of saddle-point matrices by strategically augmenting some of their blocks. We used this approach to derive (nonzero) bounds on the lower positive eigenvalues of saddle-point matrices with singular leading blocks. By making certain assumptions on the augmentation parameters, we were able to derive an eigenvalue bound that does not require the formation of an augmented matrix.
Future work may include improving the bound in the non-lowest-rank case (for instance, by judiciously selecting the portion of the spectrum of $A$ that is ``dropped'') and using this framework to analyze the convergence of preconditioned iterative solvers.
\bibliographystyle{abbrv}
\chapter*{Introduction}
\label{sec:Einleitung}
\addcontentsline{toc}{chapter}{Introduction}
\pagestyle{fancy}
\setlength\headheight{22.36pt}
\fancyhf{}
\fancyhead[OR]{\footnotesize\uppercase{Introduction}}
\fancyhead[EL]{\footnotesize\uppercase{Introduction}}
\fancyfoot[C]{\thepage}
It requires no great boldness to claim that the development of quantum
mechanics at the beginning of the twentieth century transformed our
physical and philosophical worldview like hardly any other development
in the natural sciences. Even more than a hundred years after Planck's
work on black-body radiation \cite{planck1901ueber,giulini2000dachte}
and Einstein's light-quantum hypothesis \cite{einstein1906plancksche},
quantum theory is not only one of the most active fields of physics, in
which there are countless highly interesting applied problems to solve,
but it also still poses many interpretational and conceptual problems
that give rise to new discussions and reflections again and again
\cite{auletta2001foundations}.
\subsubsection*{The relationship between classical physics and quantum physics}
\label{sec:VerhKlassischerUndQuantenphysik}
An important conceptual question, which is at once of interpretational
and of applied relevance, is that of the relationship of classical
physics to quantum physics, in particular of classical point mechanics
to nonrelativistic quantum mechanics, and the relation of classical
field and gauge theories to their quantum analogues
\cite{landsman:1998a,landsman2007between}. These questions have gained
topicality especially through recent experimental techniques; in
particular, quantum-optical experiments should be mentioned here, see
\cite{landsman:2006a} and \cite{roemer:2004a}. Even though one may well
claim that the majority of physicists regard quantum theory as the more
fundamental theory, from which the laws of classical physics should be
obtainable, at least in principle, by certain approximations, this is an
entirely nontrivial claim that has led to many profound physical and
epistemological discussions, see \cite{falkenburg2007particle},
\cite{landsman:1998a}, \cite{primas1983chemistry},
\cite{de2002foundations}, \cite{ludwig1985axiomatic} and
\cite{bunge1967foundations}. Historically, the relationship between
classical physics and quantum physics seems to have been a driving
force. The correspondence principle, for instance, goes back to Niels
Bohr, who, already in the atomic model named after him, emphasized the
relation to the classical radiation formula for large quantum numbers,
compare \cite{bohr1920uber}, \cite[Sec.~5.4.1]{falkenburg2007particle}.
Heisenberg, compare \cite{heisenberg1958physikalische},
\cite[Sec.~5.4.2]{falkenburg2007particle}, took up this idea in a
substantially generalized form, presumably working with the intuition in
mind that for $\hbar \to 0$ classical physics should emerge from quantum
physics. Of course, $\hbar$ is a physical constant, and the expression
$\hbar \to 0$ is to be understood as saying that $\hbar$ is small
compared with the typical action of the system under consideration. In
Dirac's work \cite{dirac:1964a} one later finds a more formalized
version of the correspondence principle, which presumably shaped the
thinking of generations of physicists. As the Groenewold--van Hove
theorem \cite[Satz 5.2.3]{waldmann:2007a} shows, however, the
formalization proposed by Dirac cannot be carried out in a strictly
consistent manner.
\subsubsection{Deformation quantization}
\label{sec:DeformationsquantisierungEinleitung}
Once the theory of formal associative deformations due to Gerstenhaber
\cite{gerstenhaber:1964a,gerstenhaber:1966a,gerstenhaber:1968a,gerstenhaber:1974a}
became available in the 1970s, it was possible to give a physically and
mathematically viable formalization of the correspondence principle.
With the work of Bayen, Flato, Fr{\o}nsdal, Lichnerowicz and Sternheimer
\cite{bayen.et.al:1977a,bayen.et.al:1978a}, quantum mechanics could now
be regarded as a formal deformation of classical mechanics. In this
situation one naturally speaks of
\neuerBegriff{deformation quantization}. This constitutes the broad
thematic framework into which the present work fits.
Deformation quantization regards the observable algebra and its
quantization as the primary object and attempts to obtain the
quantum-mechanical observable algebra by a \neuerBegriff{deformation} of
the classical, commutative product. The quantum-mechanical observable
algebra is not understood as an entirely new object; rather, the
classical one is equipped with a new noncommutative product, the
\neuerBegriff{star product}, which is meant to correspond to the
operator product. The ansatz for this product is an expansion in powers
of $\hbar$, whose zeroth order is the commutative product of the
functions on the classical phase space. One further demands that the
commutator with respect to the star product agree, at first order in
$\hbar$, with $\I\hbar$ times the classical Poisson bracket. The
essential algebraic condition on the higher orders turns out to be the
requirement of \neuerBegriff{associativity} of the star product. In this
approach, states are regarded as a concept derived from that of the
observables and are described as positive functionals. To do justice to
the superposition principle, one considers, after the construction of
the star-product algebra, representations of it on (pre-)Hilbert spaces.
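In a notation of our own choosing (the star product is written $\star$,
and the bidifferential operators $C_k$ are placeholders for the
higher-order terms, which are not fixed by the discussion above), these
defining conditions can be summarized as
\begin{align*}
f \star g &= \sum_{k=0}^{\infty} \hbar^k\, C_k(f,g), \qquad C_0(f,g) = fg,\\
f \star g - g \star f &= \I\hbar\, \{f,g\} + \mathcal{O}(\hbar^2), \qquad
(f \star g) \star h = f \star (g \star h).
\end{align*}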
Deformation quantization brings several conceptual advantages with it.
The first is surely the fact that the classical limit at the level of
observables is already built into the quantization intrinsically. At the
level of states, however, this point is more complicated. Roughly
speaking, deformation quantization makes it possible, within a sensibly
chosen framework, to generate in a controlled manner, by deformation in
a formal parameter that later takes on the role of Planck's constant
$\hbar$, all possible quantum models that have a given classical model
as their classical limit at the level of observables, and also to
classify them. Which of these then describes nature correctly is of
course decided by experiment. Even if the classical limit then still
remains a very deep problem, deformation quantization provides a good
starting point for understanding it better. Furthermore, it turns out
that deformation quantization also makes the quantization of
geometrically complicated classical systems conceptually significantly
clearer and more manageable. In particular, the explicit use of formal
power series brings with it a clean separation of geometric and
functional-analytic difficulties. These advantages also suggest that
deformation quantization could play an important role in the
understanding of theories of quantum gravity. After what has been
discussed so far, it is clear that the at first purely mathematical
question of the existence and classification of star products on
arbitrary phase spaces, that is, on arbitrary symplectic or, more
generally, Poisson manifolds, is of eminent physical importance.
Mathematically, however, the solution of these questions was anything
but easy, especially in the case of Poisson manifolds. The mathematical
depth of the latter problem, which was solved by Kontsevich in 1997 and
published in 2003 \cite{kontsevich:2003a}, is evidenced not least by the
fact that this result was honored with the Fields Medal. The question of
the existence of star products, nontrivial even in the symplectic case,
had already been answered in the affirmative in full generality by
DeWilde and Lecomte \cite{dewilde.lecomte:1983b}, Fedosov
\cite{fedosov:1985a,fedosov:1986a,fedosov:1989a}, and Omori, Maeda and
Yoshioka \cite{omori.maeda.yoshioka:1991a}. Building on Fedosov's proof,
Nest and Tsygan \cite{nest.tsygan:1995a,nest.tsygan:1995b}, Deligne
\cite{deligne:1995a}, and Bertelson, Cahen and Gutt
\cite{bertelson.cahen.gutt:1997a} also achieved a classification of star
products on symplectic manifolds.
\subsubsection{The development of physics}
\label{sec:EntwicklungDerPhysik}
If one wants to name the driving forces that made possible the rapid
development of modern physics in recent centuries, both in the pure
acquisition of knowledge and in its technical application, one
inevitably encounters, on the one hand, the experiment, which isolates
particular phenomena of nature instead of, as in earlier times,
considering everything as a whole, and, on the other hand, the
mathematization of natural science as well as the concept of symmetry.
The latter plays an enormously important role not only in the solution
of physical problems but also in fundamental theory building and in the
mapping of the regularities of nature into mathematical structures. The
present work is concerned with the latter two driving forces of the
history of physics.
\subsubsection{Mathematical physics}
\label{sec:MathematischePhysik}
The present treatise is to be situated in mathematical physics, a field
of physics in which the mathematization mentioned above serves as a
guiding principle to a special degree. Its task is, on the one hand, to
investigate which mathematical language might be appropriate for
particular physical problems, and to take part in developing that
language; on the other hand, to then apply it rigorously to physical
problems. Precisely because of this latter ambition, mathematical
physics at times appears to occupy itself with physical problems that
have been regarded as solved, at a rather heuristic level, for years.
Nevertheless, mathematical physics plays an exceptionally important role
in the overall development of physics, even if this may at first glance
be difficult to recognize. Several points should be mentioned here.
First, for a genuine understanding of nature it ultimately appears
indispensable that the language of the theory describing a given domain
of reality be chosen clearly and appropriately, that it be conceptually
and logically consistent, and in particular that it exhibit no
contradictions. Attending to such questions, however, takes a long time
and is laborious, often without directly uncovering new physical
effects. Second, for the overall progress of physics it appears
important to know precisely the logical relations between the various
pillars of a theory, in order to be able to see more clearly, upon the
discovery of new phenomena, which parts may possibly collapse, which can
be retained, and where the theory can be extended. Moreover, a
mathematically well-thought-out and logically stringent formulation of a
theory plays an important role in conveying that theory both to future
researchers in physics and to those who want to apply it in technology.
Third, the perpetual testing and further development of the interface
between mathematics and physics is also of purely practical relevance,
namely in order to have physical problems available in a form in which
new developments from applied mathematics, for instance numerics, can be
used as directly as possible. The points above may be illustrated by
events of the history of physics that lie somewhat further back, but
which for precisely that reason permit an assessment of their
significance for the overall development of physics: the formulation of
electrodynamics by Maxwell, the formulation of quantum mechanics by von
Neumann and Mackey, and the development of general relativity by
Einstein. Finally, we also do not want to conceal what great influence
physics can exert on developments in mathematics. One may think, for
example, of the work of Witten, who, with the help of methods from
quantum field theory and gauge theory, made significant mathematical
contributions to the understanding of knot invariants and of Morse
theory.
Since, as described above, deformation quantization is intended in
particular to yield conceptual insights, it is readily understandable
that it should be formulated with mathematical exactness and is thus
also a subject of mathematical physics.
\subsubsection{Symmetry}
\label{sec:Symmetrie}
The concept of symmetry already addressed above will occupy us
particularly in this work. In classical mechanical systems, for
instance, one can simplify concrete problems by exploiting certain
symmetries and gain quantitative, but above all qualitative, insights
into the behavior of a system, compare \cite{cushman.bates:1997a}.
Probably the most prominent example of such a use of symmetries is the
Kepler problem, which is treated in every introductory course. Here,
too, the standard approach consists in exploiting certain symmetries, or
the conserved quantities associated with them, in order to understand
the motions of the planets.
In classical physics, the elimination of degrees of freedom, the
so-called \neuerBegriff{phase-space reduction}, has already been studied
in depth. Within geometric mechanics, too, much progress has been made
in this area since the 1960s, see \cite{abraham.marsden:1985a},
\cite{arnold:1989a}, \cite{marsden.ratiu:1999a} or
\cite{cushman.bates:1997a}. A question that has gained importance in
this context in recent years is, roughly speaking, the problem of
whether it is equivalent to exploit a symmetry directly or to exploit
partial symmetries in several steps. It could be shown that under
certain weak technical assumptions this is always true, see for instance
\cite{marsden.misiolek:2007a} for a thorough discussion.
In gauge theories, symmetries play a more fundamental role. In these
theories, with the gauge degrees of freedom, degrees of freedom appear
that are in principle unphysical. Physically relevant here is only the
reduced phase space. Its geometry, however, is complicated in general,
which makes it natural to think about how one can quantize systems with
geometrically more complex phase spaces. The phase spaces in classical
gauge theories are in general infinite dimensional: one is dealing with
the cotangent bundle of the space of gauge potentials or, mathematically
speaking, of the connection one-forms of a principal fiber bundle, whose
total space physically describes a generalized phase of a particle,
whose space(-time) coordinates are in turn described by points of the
base manifold. The gauge group, which acts on the space of gauge
potentials, is in general also infinite dimensional. Since the
functional-analytic problems stemming from the infinite-dimensionality
and the geometric problems are each difficult enough taken on their own,
it suggests itself to consider these areas of difficulty separately. The
geometric aspect can be studied, as a model, in the language of
phase-space reduction of finite-dimensional manifolds. If one considers
gauge theories on finite lattices, one is faced with a finite-dimensional
situation.
\subsubsection{Reduktion von Sternprodukten}
\label{sec:ReduktionVonSternprodukten}
Natürlich stellt sich sofort die Frage, wie das Ausnutzen von Symmetrien
mathematisch im Rahmen der Deformationsquantisierung beschrieben werden
kann. Nach den allgemeinen Existenzsätzen für Sternprodukte gibt es
immer auch auf dem reduzierten Phasenraum ein solches. Man kann jedoch
genauer fragen, ob es konstruktive Verfahren gibt, aus einem gegebenen
Sternprodukt eines auf dem reduzierten Phasenraum zu gewinnen und ob man
idealerweise analoge Beziehungen vom ursprünglichen zum reduzierten
Sternprodukt vorliegen hat, wie es schon klassisch für die
Poisson"=Klammer der Fall ist. Zunächst müssen dazu geeignete
Invarianzbegriffe für Sternprodukte gefunden werden, insbesondere stellt
sich die Frage, ob das Konzept der Impulsabbildung im Rahmen von
Sternprodukten eine natürliche Verallgemeinerung erfährt. Mit diesem
Thema haben sich Arnal, Cortet, Molin und Pinczon in
\cite{arnal.cortet.molin.pinczon:1983a} auseinandergesetzt und
verschiedene Invarianzbegriffe für Sternprodukte diskutiert. Der
naheliegendste Invarianzbegriff ist, zu fordern, dass die Wirkung eines
jeden Elements der Symmetriegruppe auf den Observablen ein
Algebramorphismus bezüglich des Sternprodukts ist. Daraus folgt
insbesondere, dass sie auch ein Poisson"=Isomorphismus ist. Im
symplektischen Fall genügt für die Existenz invarianter Sternprodukte
die Existenz einer unter der Symmetriegruppe
invarianten, torsionsfreien kovarianten Ableitung, was etwa dann der
Fall ist, wenn die Symmetriegruppe eigentlich operiert, also
insbesondere bei allen kompakten Symmetriegruppen. Bertelson, Bieliavsky
und Gutt gelang es in \cite{bertelson.bieliavsky.gutt:1998a} mit Hilfe
der Fedosov"=Konstruktion die $G$"=invarianten Sternprodukte zu
klassifizieren. Überträgt man die definierende Eigenschaft einer
Impulsabbildung durch Ersetzen der Poisson"=Klammer durch den
Sternproduktkommutator, so gelangt man zum Begriff der
\neuerBegriff{starken Invarianz} eines Sternprodukts und durch Zulassen
von Quantenkorrekturen bei der Impulsabbildung zu dem der
\neuerBegriff{Quantenimpulsabbildung} (vgl.\
\cite{bordemann.brischle.emmrich.waldmann:1996a,xu:1998a}). Xu konnte in
\cite{xu:1998a} hinreichende Bedingungen für die Existenz von
Quantenimpulsabbildungen angeben. Müller"=Bahns und Neumaier gelang es
in \cite{mueller-bahns.neumaier:2004a}, Kriterien für die Existenz von
Quantenimpulsabbildungen von Fedosov"=Sternprodukten zu
formulieren. Ebenso wurde dort untersucht, wann derartige Sternprodukte
stark invariant sind.
Eine Reduktionsmethode für Sternprodukte, für die es eine
Quantenimpulsabbildung gibt, wurde beispielsweise von Kowalzig, Neumaier
und Pflaum in \cite{kowalzig.neumaier.pflaum:2005a} für den wichtigen
Fall von Kotangentialbündeln mit Magnetfeldern vorgeschlagen. Eine
andere Methode besteht in der aus der Quantenfeldtheorie bekannten
BRST"=Quantisierungsmethode, die von Bordemann, Herbig und Waldmann in
\cite{bordemann.herbig.waldmann:2000a} für den Fall endlich vieler
Freiheitsgrade auch im Rahmen der Deformationsquantisierung beschrieben
werden konnte. Waldmann und Gutt haben den davon im regulären Fall
relevanten Teil in \cite{gutt2010involutions} formuliert. Dies werden
wir später \emph{Quanten"=Koszul}"=Schema nennen. Für abstraktere Betrachtungen
zu diesem Thema ist schließlich noch die Arbeit von Bordemann
\cite{bordemann:2005a} zu nennen. Die Frage, ob und wie in den genannten
Methoden eine Reduktion in Stufen funktioniert, wurde in der Literatur
bisher noch nicht untersucht und ist Gegenstand dieser Arbeit.
\section*{Ergebnisse}
\label{sec:Ergebnisse}
In der vorliegenden Arbeit werden zwei Probleme aus dem Themengebiet der
Konstruktion von Sternprodukten auf dem reduzierten Phasenraum gelöst.
Erstens wird die von Kowalzig, Neumaier und Pflaum in \cite[Seite
547]{kowalzig.neumaier.pflaum:2005a} aufgeworfene Frage beantwortet, was
die von diesen Autoren dargelegte Methode zur Reduktion von
Sternprodukten auf Kotangentialbündeln mit BRST"=Quantisierung in der
Formulierung von Bordemann, Herbig und Waldmann
\cite{bordemann.herbig.waldmann:2000a} zu tun hat. Es wird in
Satz~\ref{satz:VergleichMitNikolaisArbeitAllgemein} gezeigt, dass
erstgenannte Methode einen echten Spezialfall der
Quanten"=Koszul"=Reduktionsmethode und damit auch der BRST"=Methode
darstellt. Genauer stellt es sich heraus, dass sich die etwas technisch
erscheinende Methode von Kowalzig, Neumaier und Pflaum aus der
Quanten"=Koszul"=Methode durch naheliegende, an das Kotangentialbündel
angepasste Wahlen der Eingangsdaten ergibt.
Zweitens gelingt es in dieser Arbeit, die aus der klassischen
Hamiltonschen geometrischen Mechanik bekannte Technik der symplektischen
Phasenraumreduktion in Stufen, unter gewissen Einschränkungen an die
betrachteten Symmetriegruppen, die etwa im Falle kompakter Gruppen immer
erfüllt sind, auf die Reduktionsmethode von Sternprodukten nach dem
Quanten"=Koszul"=Schema zu übertragen, siehe
Satz~\ref{thm:UebereinstimmungDerSternprodukte}.
Als Nebenprodukt dieser Ergebnisse aus der theoretisch"=mathematischen
Physik geht noch ein rein mathematisches Resultat hervor, nämlich wie
man im Rahmen eigentlicher Gruppenwirkungen durch bekannte
Integrationstechniken einen äquivarianten Spray konstruieren kann, vergleiche
Satz~\ref{satz:invarianterSpray}, und wie man mit Hilfe dessen den
Existenzsatz für kompatible Tubenumgebungen auf eine äquivariante
Situation verallgemeinern kann, siehe Satz~\ref{satz:GkompatibleTuben}.
\subsubsection{Einordnung der Resultate}
\label{sec:EinordnungDerResultate}
Das erstgenannte Resultat ist aus mehreren Gründen von Interesse. Zum
einen ist es natürlich im Allgemeinen für ein gutes Verständnis eines
Problems immer wichtig, auch die Beziehungen zwischen verschiedenen
Lösungsansätzen zu sehen. Zum anderen verhält es sich im betrachteten
Problem so, dass Kowalzig, Neumaier und Pflaum mit ihrer Methode für den
Fall des Kotangentialbündels etwa Aussagen über die Frage, ob
Quantisierung mit Reduktion vertauscht, treffen konnten. Ihre Methode
scheint allerdings technisch und schwer vom betrachteten Beispiel aus zu
verallgemeinern, die Quanten"=Koszul"=Methode jedoch erweckt einen
konzeptionell wesentlich klareren Eindruck. Somit gewinnen die durch die
erste Methode gewonnenen Erkenntnisse nochmals dadurch an Bedeutung, dass
sie einen Spezialfall einer anderen, konzeptionell klareren Methode
darstellen. Die oben angesprochene Frage, ob Reduktion mit Quantisierung
vertauscht, ist insbesondere im Rahmen von Eichtheorien von Interesse,
wo das übliche Vorgehen wegen der komplizierten geometrischen Struktur
des reduzierten Phasenraums darin besteht, den unreduzierten Phasenraum
zu quantisieren und anschließend die Eichbedingungen zu verarbeiten
statt, wie es physikalisch und begrifflich eher angemessen wäre, den
reduzierten Phasenraum direkt zu quantisieren, da nur dieser von
direkter physikalischer Relevanz ist. Somit können Ergebnisse, die --
wenn auch nur in endlich dimensionalen Modellen -- zeigen, dass unter
bestimmten Bedingungen Reduktion mit Quantisierung vertauscht,
interessante Hinweise sein, die das übliche Vorgehen bei der
Quantisierung von Eichtheorien rechtfertigen und verständlicher machen.
Zum zweiten Resultat ist zu sagen, dass es zum einen in gewisser
intuitiver Weise für eine gute Quanten"=Reduktionsmethode als wichtig
erscheint, dass eine Reduktion in Stufen zu einer Reduktion in einem
Schritt äquivalent sein sollte, zum anderen ist dies insbesondere bei
der Quantisierung von Eichtheorien von Interesse, wo bestimmte
Eichfreiheitsgrade möglicherweise relativ einfach, andere sehr schwierig
in Griff zu bekommen sind. Wenn man nun weiß, dass diese ohne Bedenken
getrennt behandelt werden können, hat man schon eine erste Vereinfachung
des Problems erreicht und kann sich vollständig auf die schwerer zu
beherrschenden Eichsymmetrien konzentrieren. Die gewonnenen Ergebnisse
wurden auf klassischer Seite von einem symplektischen Standpunkt aus
formuliert, da hier die klassische Reduktion in Stufen schon gut
ausgearbeitet in der Literatur zur Verfügung stand und da weiter für
diesen Fall die Klassifikation und Existenz von invarianten
Sternprodukten, Quantenimpulsabbildung und anderes wohl besser verstanden
ist, als im Poisson"=Fall. Wir vermuten jedoch stark, dass alle
Überlegungen entsprechend auch für den Poisson"=Fall richtig sein
werden. Das beschriebene Resultat liefert sofort Fragestellungen für
mögliche weitere Untersuchungen und Verallgemeinerungen. Zuerst
erscheint es interessant, genauer zu sehen, ob man die Voraussetzungen an
die Symmetriegruppen nicht noch weiter abschwächen kann, oder aber
zu zeigen, dass dies wirklich harte Bedingungen sind. Ferner mag es auch
von Interesse sein, die Quanten"=Koszul"=Reduktion in Stufen auch für singuläre
Fälle zu untersuchen. Diese haben wir in der vorliegenden Arbeit nicht
betrachtet, da zum Zeitpunkt ihrer Verfassung noch keine vollständige
Verallgemeinerung der Quanten"=Koszul"=Methode bzw.\ der BRST"=Methode im
Rahmen der Deformationsquantisierung für beliebige
stratifizierte Räume bekannt war. Es gibt eine neuere Arbeit
von Bordemann, Herbig und Pflaum \cite{bordemann2007homological}, in der
die BRST"=Methode für den singulären Fall untersucht wird, allerdings nur
in dem Sinne, dass ein regulärer Phasenraum zu einem singulären
Phasenraum reduziert wird. Der Startpunkt ist also hier eine reguläre
Situation. Für eine Untersuchung von Reduktion in Stufen erscheint es
jedoch geboten, auch in einer singulären Situation starten zu
können. Schließlich stellt sich natürlich die Frage, wie unsere
Untersuchungen auf den unendlich dimensionalen feldtheoretischen Fall
verallgemeinert werden können.
Das dritte, rein mathematische Resultat erwies sich für die Lösung der
oben genannten Probleme als sehr nützlich und wird vermutlich auch in
anderen Bereichen der theoretischen und mathematischen Physik
Anwendungen finden.
\section*{Aufbau der Arbeit}
\label{sec:AufbauDerArbeit}
Die vorliegende Arbeit gliedert sich in vier Kapitel und mehrere
Anhänge.
Wir beginnen in {\bfseries Kapitel~\ref{cha:Deformationsquantisierung}}
mit einer kurzen Einführung in die Grundideen und Konzepte der
Deformationsquantisierung und führen insbesondere den Begriff des
Sternprodukts ein. Weiter gehen wir dort auf Symmetriebegriffe der
klassischen Mechanik ein, speziell wird der Begriff der Impulsabbildung
definiert und der wichtige Satz über die symplektische Reduktion
angegeben. Schließlich behandeln wir in diesem Kapitel auch Symmetriebegriffe
für Sternprodukte, insbesondere wie man das Konzept der
Impulsabbildung für diesen Rahmen verallgemeinern kann.
In {\bfseries
Kapitel~\ref{cha:Koszul-Reduktion}} widmen wir uns den
Grundlagen der Quanten"=Koszul"=Reduktion. Zuerst formulieren wir die
geometrische Reduktionstheorie algebraisch und nehmen dies anschließend
als Ausgangspunkt, um das Reduktions"=Schema der
Quanten"=Koszul"=Reduktion, wie es von Gutt und Waldmann in
\cite{gutt2010involutions} dargestellt wurde, formulieren zu
können.
Diese Methode betrachten wir in {\bfseries
Kapitel~\ref{cha:QuantenKoszulAufKotangentialbuendel}} am Beispiel
eines Kotangentialbündels und können dort zeigen, dass die Methode von
Kowalzig, Neumaier und Pflaum aus \cite{kowalzig.neumaier.pflaum:2005a}
einen Spezialfall der Quanten"=Koszul"=Methode darstellt (vgl.\ Satz
\ref{satz:VergleichMitNikolaisArbeitAllgemein}). Dabei richten wir unser
Augenmerk zuerst auf den Fall verschwindender Impulswerte ohne
Magnetfeld, den wir anschließend durch Verschiebungstricks von
Tubenumgebungen auch auf nichtverschwindende, invariante Impulswerte
und in Anwesenheit eines Magnetfeldes verallgemeinern können. Dies stellt
das erste Hauptresultat der vorliegenden Arbeit dar.
Das zweite Hauptergebnis folgt in {\bfseries
Kapitel~\ref{cha:Koszul-Reduktion_in_Stufen}}, in dem wir zeigen
können, dass unter bestimmten Annahmen an die Symmetriegruppe auch die
Quanten"=Koszul"=Reduktion in Stufen durchführbar ist und in zwei Stufen
das gleiche Ergebnis liefert wie in einem Schritt. Bevor wir zu diesem
Punkt kommen, beleuchten wir noch die klassische symplektische Reduktion
in Stufen, wie sie in \cite{marsden.misiolek:2007a} beschrieben wird. Es
ist ausreichend, dies für verschwindende Impulswerte zu tun, was einige
technische Schwierigkeiten beseitigt und in dieser Variante noch nicht
in der Literatur aufgeschrieben wurde. Anschließend wenden wir uns der
Frage nach der Existenz einer Quantenimpulsabbildung für das im ersten
Reduktionsschritt erhaltene Sternprodukt zu und geben hinreichende
Existenzbedingungen sowie eine explizite Konstruktion an, siehe
Satz~\ref{thm:ReduziertesSternproduktInvariant}. Nachdem wir noch die
stark invariante Situation beleuchtet haben, formulieren wir schließlich
in Satz~\ref{thm:UebereinstimmungDerSternprodukte} das oben
angesprochene Hauptergebnis.
Die Anhänge beschäftigen sich schließlich mit diversen mathematischen
Grundlagen. In {\bfseries Anhang~\ref{cha:elementMathAnmerkungen}}
bringen wir zunächst einige elementare mathematische Bemerkungen. In
{\bfseries Anhang~\ref{cha:Differentialoperatoren}} folgt eine äußerst
knappe Einführung in die Grundbegriffe von Differentialoperatoren auf
Mannigfaltigkeiten. In {\bfseries Anhang~\ref{cha:Hauptfaserbuendel}}
geben wir anschließend eine kurze Zusammenfassung der Theorie von
Hauptfaserbündeln. {\bfseries Anhang~\ref{cha:invariante_Strukturen}}
beschäftigt sich mit $G$"=invarianten Strukturen auf
$G$"=Mannigfaltigkeiten. Hier legen wir zunächst einige topologische
Grundlagen und betrachten dann, wie man bestimmte Strukturen auf
Mannigfaltigkeiten für eigentliche Gruppenwirkungen durch
Integrationstechniken invariant machen kann. Insbesondere betrachten wir
Riemannsche Fasermetriken und Spray"=Vektorfelder. In {\bfseries
Anhang~\ref{cha:Tubensatze}} bringen wir detaillierte Beweise für
Tubensätze, die für den Hauptstrang dieser Arbeit immer wieder benötigt
werden. Insbesondere beweisen wir die angekündigte Verallgemeinerung des
Satzes über kompatible Tubenumgebungen in Anwesenheit eigentlicher
Gruppenwirkungen, siehe Satz~\ref{satz:GkompatibleTuben}.
\cleardoublepage
\pagestyle{fancy}
\setlength\headheight{22.36pt}
\fancyhf{}
\fancyhead[OR]{\footnotesize\uppercase{Konventionen}}
\fancyhead[EL]{\footnotesize\uppercase{Konventionen}}
\fancyfoot[C]{\thepage}
\section*{Konventionen}
\label{cha:Notation}
An dieser Stelle möchten wir kurz einige Konventionen einführen, die wir
in dieser Arbeit immer wieder gebrauchen wollen.
Zunächst sei gesagt, dass wir die \emph{Einsteinsche Summenkonvention}
verwenden. Vektorräume sind immer über dem Körper der reellen Zahlen
$\mathbb{R}$ oder der komplexen Zahlen $\mathbb{C}$ zu verstehen, ebenso
Lie"=Algebren, sofern nicht anders gesagt. Auftretende
Mannigfaltigkeiten werden stets als glatt, reell und zusammenhängend
angenommen, dies gilt insbesondere für Lie"=Gruppen. Falls es uns
wichtig erscheint, werden wir dies stellenweise jedoch auch nochmal im
Haupttext erwähnen. Wirkungen von Lie"=Gruppen auf Mannigfaltigkeiten
sind immer Linkswirkungen und glatt. Weiter verwenden wir die Begriffe
Wirkung und Operation einer Gruppe synonym. $\mathbb{K}$"=lineare
Abbildungen zwischen zwei $\mathbb{K}$"=Vektorräumen ($\mathbb{K} \in
\{\mathbb{R},\mathbb{C}\}$) wollen wir stets
$\mathbb{K}[[\lambda]]$"=linear auf deren formale Potenzreihen im
formalen Parameter $\lambda$ fortsetzen und mit demselben Symbol
bezeichnen, ohne dies jedes Mal explizit zu erwähnen.
Ferner benutzen wir in dieser Arbeit die in der Mathematik
gängige Notation, wollen jedoch noch auf einige ausgewählte
Bezeichnungen aufmerksam machen.
Sei $M$ eine Menge und $\{m_i\}_{i \in I}$ eine Familie von Elementen
von $M$. Falls aus dem Kontext die Indexmenge $I$ klar ist, so begehen
wir den Notationsmissbrauch, einfach $\{m_i\}$ statt $\{m_i\}_{i \in I}$
zu schreiben. Ebenfalls verzichten wir gelegentlich auf das
Kompositionszeichen $\circ$ von Abbildungen, um Formeln nicht zu lang
oder unübersichtlich werden zu lassen, sofern keine Verwechslungen zu
befürchten sind. Betrachten wir eine absteigende Sequenz von Abbildungen
der Form \def\tA[#1]{A_{#1}}
\begin{equation*}
\begin{tikzpicture}[baseline=(current
bounding box.center),description/.style={fill=white,inner sep=2pt}]
\matrix (m) [matrix of math nodes, row sep=3.0em, column
sep=3.5em, text height=1.5ex, text depth=0.25ex]
{ \dots &K^{n-1} & K^{n} & K^{n+1} & \dots \\
}; %
\path[<-] (m-1-1) edge node[auto]{$f$}(m-1-2); %
\path[<-] (m-1-2) edge node[auto]{$f$}(m-1-3); %
\path[<-] (m-1-3) edge
node[auto]{$f$}(m-1-4); %
\path[<-] (m-1-4) edge node[auto]{$f$}(m-1-5);
\end{tikzpicture}\ensuremath{\text{,}}
\end{equation*}
so schreiben wir statt $f \colon K^{n} \to K^{n-1}$ auch
$f^{\scriptscriptstyle(n)}$, um den Definitionsbereich zu
betonen. Analoges gilt für aufsteigende Sequenzen.
Ist $V$ ein Vektorraum, so bezeichnen wir seinen algebraischen
Dualraum mit $V^*$ und die adjungierte Abbildung einer linearen
Abbildung $l \dpA V \to W$ zwischen zwei Vektorräumen $V$ und $W$
notieren wir mit $l^* \dpA W^* \to V^*$. Die duale Paarung bezeichnen
wir mit $\dPaar{\cdot}{\cdot}$. Außerdem verwenden wir die
Bezeichnung $\bigwedge^\bullet V$ für $\bigoplus_k \bigwedge^k V$,
falls wir die Gradierung hervorheben wollen. Erhöht oder verringert
eine Abbildung von $\bigwedge^\bullet V$ in sich den Grad um $k$, so
machen wir dies durch $\bigwedge^\bullet V \to \bigwedge^{\bullet \pm
k}V$ deutlich. Für die Lie"=Algebra einer Lie"=Gruppe $G$ schreiben
wir $\lieAlgebra[g]$ oder auch $\Lie[G]$. Ist $G$ eine Lie"=Gruppe
und $\Phi \dpA G \times M \to M$ eine $G$"=Wirkung, so bezeichnen wir
das fundamentale Vektorfeld bezüglich dieser Wirkung zu einem Vektor
$\xi \in \lieAlgebra[g]$ mit $\xi^{\scriptscriptstyle{G}}_M$, oder,
falls wir die Gruppe nicht betonen wollen, einfach mit $\xi_M$. Ist
\(W\) ein Vektorraum und \(G\) eine Gruppe, so heißt eine Wirkung
\(\rho \colon G \times W \to W\) \neuerBegriff{linear}, falls
\(\rho(g,\cdot) \colon W \to W\) für alle \(g \in G\) linear ist. In
diesem Fall induziert \(\rho\) durch \(G \times W^* \ni
(g,\alpha)\mapsto \rho^{*}(g,\alpha) :=
(\rho(g^{-1},\cdot))^{*}(\alpha) \in W^*\) eine lineare Wirkung
\(\rho^{*}\) von $G$ auf \(W^{*}\). Ist \(M_i\), \(i=1,2\), eine
Menge und \(G\) eine Gruppe, die sowohl auf \(M_1\) als auch auf
\(M_2\) wirke, so liefert dies eine Wirkung von \(G\) auf \(M_1
\times M_2\) via \(g(m_1,m_2) := (gm_1,gm_2)\) für alle $g \in G,
(m_1,m_2) \in M_1 \times M_2$. Sind $W_1$ und $W_2$ Vektorräume, die
eine lineare $G$"=Wirkung tragen, so induziert auch dies eine
$G$"=Wirkung auf $W_1 \otimes W_2$ via $G \times (W_1 \otimes W_2)
\ni (g,w_1 \otimes w_2) \mapsto gw_1 \otimes gw_2 \in W_1 \otimes
W_2$ und linearer Fortsetzung. Analoges gilt auch für mehrfache
Tensorprodukte und äußere Potenzen.
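Die definierende Eigenschaft der dualen Wirkung $\rho^{*}$ lässt sich in endlichen Dimensionen leicht numerisch überprüfen. Die folgende kleine Skizze (reine Illustration mit NumPy; alle Bezeichner sind frei gewählt) modelliert die lineare Wirkung durch invertierbare Matrizen auf \(W = \mathbb{R}^3\) und rechnet nach, dass $\rho^{*}$ wieder eine Linkswirkung ist und mit der dualen Paarung verträglich ist:

```python
import numpy as np

def rho(g, w):
    # Modellwirkung: invertierbare Matrizen wirken linear auf W = R^3
    return g @ w

def rho_dual(g, alpha):
    # induzierte duale Wirkung: rho*(g, alpha) = (rho(g^{-1}, .))^* alpha,
    # in Matrixschreibweise (g^{-1})^T alpha
    return np.linalg.inv(g).T @ alpha

rng = np.random.default_rng(0)
g = rng.normal(size=(3, 3)) + 3.0 * np.eye(3)   # generisch invertierbar
h = rng.normal(size=(3, 3)) + 3.0 * np.eye(3)
alpha, w = rng.normal(size=3), rng.normal(size=3)

# rho* ist wieder eine Linkswirkung ...
linkswirkung = np.allclose(rho_dual(g @ h, alpha),
                           rho_dual(g, rho_dual(h, alpha)))
# ... und vertraeglich mit der dualen Paarung:
# <rho*(g, alpha), w> = <alpha, rho(g^{-1}, w)>
paarung = np.isclose(rho_dual(g, alpha) @ w,
                     alpha @ rho(np.linalg.inv(g), w))
```

Die erste Identität beruht gerade auf $((gh)^{-1})^{T} = (g^{-1})^{T}(h^{-1})^{T}$, weshalb die Inversion in der Definition von $\rho^{*}$ nötig ist.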
Die glatten \emph{komplexwertigen} Funktionen auf einer Mannigfaltigkeit
$M$ bezeichnen wir immer mit $C^\infty(M)$, ist $f \in C^\infty(M)$, so
sei $\overline{f}$ die komplex konjugierte Funktion. Ist $G$ eine
Lie"=Gruppe, die auf $M$ und damit vermöge Zurückziehen auch auf
$C^\infty(M)$ wirkt, dann schreiben wir $C^\infty(M)^G$ für die
$G$"=invarianten, glatten komplexwertigen Funktionen auf $M$. Ist $\phi
\colon M \to M'$ eine glatte Abbildung zwischen Mannigfaltigkeiten, so
notieren wir den pull"=back als $\phi^* \colon C^\infty(M') \to
C^\infty(M)$. Es sollte immer aus dem Kontext klar sein, wann $^*$
den pull"=back und wann die duale Abbildung bezeichnet. Den Träger einer
Funktion $f \colon M \to V$ von einem topologischen Raum $M$ in einen
Vektorraum $V$ bezeichnen wir mit $\supp f = \abschluss{\carr f}$, wobei
$\carr f := \{p \in M \mid f(p) \neq 0\}$. Für die glatten Schnitte auf
einem Vektorbündel $E \to M$ schreiben wir $\Gamma^\infty(E)$.
\cleardoublepage
\pagestyle{fancy}
\setlength\headheight{22.36pt}
\fancyhf{}
\fancyhead[OR]{\footnotesize\rightmark}
\fancyhead[EL]{\footnotesize\leftmark}
\fancyfoot[C]{\thepage}
\renewcommand{\thepage}{\arabic{page}}
\setcounter{page}{1}
\chapter{Deformationsquantisierung}
\label{cha:Deformationsquantisierung}
Eine physikalische Theorie verwendet in der Regel die Sprache der
Mathematik, um Teile der Wirklichkeit zu beschreiben. Es ist nicht
verwunderlich, dass eine Theorie $\mathcal{T}_1$, die einen echt
größeren Wirklichkeitsbereich beschreibt als eine Theorie $\mathcal{T}_2$,
auch eine allgemeinere -- und nicht etwa eine völlig andere --
mathematische Struktur zu dessen Beschreibung verwendet. Ein schönes
Beispiel liefert etwa das Verhältnis der klassischen Punktmechanik zur
speziellen Relativitätstheorie oder die letztere im Verhältnis zur
allgemeinen Relativitätstheorie. Bei der klassischen Mechanik und der
Quantentheorie sieht man dies in verbandstheoretischer oder
algebraischer Formulierung sehr schön \cite{piron1976foundations}.
Die oben genannte echte Erweiterung des Wirklichkeitsbereichs ist oft,
und so auch in den genannten Beispielen, mit einer Änderung der Skala einer
physikalischen Größe verbunden. Es stellt sich heraus, dass sich die
Erweiterung des mathematischen Formalismus gewissermaßen durch eine
Deformation eines Repräsentanten dieser Größe innerhalb der mathematischen
Struktur der Theorie kontrollieren lässt.
Oftmals zeigt sich so a posteriori die allgemeinere Theorie nach
Abstecken des mathematischen Rahmens gewissermaßen als zwingend, da sie
stabil unter Deformationen ist. Es ist als nicht zu unterschätzende
Eigenschaft einer guten physikalischen Theorie anzusehen, dass sich ihre
charakteristische mathematische Struktur nicht durch kleine Variation
von bestimmten Konstanten der Theorie ändert, da deren Werte letztlich
gemessen werden und somit auch unweigerlich mit Messfehlern behaftet
sind.
Die \neuerBegriff{Deformationsquantisierung} beschäftigt sich nun mit
der Anwendung der oben beschriebenen Heuristik, die oft auch
\neuerBegriff{Deformationsphilosophie} genannt wird
(vgl.\ \cite{MR2285047}), auf das Verhältnis von klassischer Mechanik und
Quantenmechanik oder auch auf das von klassischen zu
Quantenfeldtheorien. Wir beschränken uns in dieser Arbeit auf den
ersten Fall, der im Übrigen auch in Termen von Gittereichtheorien
interpretiert werden kann. Im Folgenden wollen wir kurz erläutern, in
welchem Rahmen deformiert wird und führen den grundlegenden Begriff des
Sternprodukts ein. Dazu präsentieren wir zuerst in aller Kürze eine
mathematische Beschreibung der klassischen Mechanik, anschließend
erläutern wir die der Quantenmechanik und schließlich legen wir dar, was
man in diesem Zusammenhang mit Deformationen meint. Es würde jedoch den
Rahmen dieser Arbeit bei weitem sprengen, wenn wir versuchen wollten,
eine detaillierte Einführung in das Quantisierungsproblem und die
Deformationsquantisierung zu geben. Wir verweisen hierfür auf das sehr
schön geschriebene Lehrbuch von Waldmann \cite{waldmann:2007a}.
\section{Sternprodukte}
\label{sec:Sternprodukte}
Die Observablenalgebra in der klassischen Mechanik beschreibt man
typischerweise und idealisiert\footnote{Vgl.\ dazu
\cite{waldmann:2007a}, insbesondere Bemerkung 1.3.7.} durch die
kommutative Poisson"=$^*$"=Algebra der glatten komplexwertigen
Funktionen auf dem Phasenraum $\mathbb{R}^{2n}$, der mit der kanonischen
Poisson"=Klammer versehen wird, oder allgemeiner durch die auf einer
Poisson"=Mannigfaltigkeit $(M,\{\cdot,\cdot\})$. Die Poisson"=Klammer
bestimmt zusammen mit der Hamiltonfunktion die Zeitentwicklung des
Systems. \neuerBegriff{Observabel} im eigentlichen Sinne sind die
reellwertigen Funktionen $f = \overline{f} \in C^\infty(M)$. Die
möglichen Messwerte sind durch die Funktionswerte gegeben. Die reinen
Zustände sind die Punkte auf dem Phasenraum und allgemeine
Zustände werden durch positive Borelmaße mit kompaktem Träger
beschrieben. Mit Hilfe des Rieszschen Darstellungssatzes (vgl.\
\cite{rudin:1987a}) können diese alternativ als positive Funktionale
angesehen werden.
In der Quantenmechanik beschreibt man die Observablen üblicherweise als
selbstadjungierte Operatoren auf einem separablen, komplexen, im
Allgemeinen unendlich dimensionalen Hilbertraum $\mathcal{H}$. Die
reinen Zustände eines Systems werden durch Strahlen in $\mathcal{H}$
repräsentiert, allgemeine Zustände beschreibt man durch Dichtematrizen,
d.\,h.\ durch positive Spurklasse"=Operatoren auf $\mathcal{H}$ mit Spur
Eins, vgl.\ \cite{blank2008hilbert}. Typische Observablen wie der Orts"=
oder Impulsoperator sind unbeschränkt, insbesondere nicht auf ganz
$\mathcal{H}$ definiert. Man kann sich jedoch überlegen, dass es
ausreicht, nur die beschränkten Operatoren auf $\mathcal{H}$ zu
betrachten, siehe etwa \cite[Ch.~IV~2.3]{prugovecki1981quantum}.
Es stellt sich weiter als sinnvoll heraus, die im Allgemeinen
nichtkommutative $^*$"=Algebra $\mathfrak{B}(\mathcal{H})$ aller
beschränkten Operatoren auf $\mathcal{H}$ mit Operatoradjunktion als
$^*$"=Involution als Observablenalgebra anzusehen und die
selbstadjungierten, die hier gleich den Hermiteschen Operatoren sind,
als die \neuerBegriff{observablen} Elemente darin zu
bezeichnen. Zustände können dann als positive, normierte Funktionale auf
$\mathfrak{B}(\mathcal{H})$ angesehen werden. Um den wohlverstandenen
funktionalanalytischen Fragen der Einfachheit halber im Rahmen dieser
Motivation aus dem Weg zu gehen, wollen wir im Folgenden $\mathcal{H}$
nur als Prä"=Hilbertraum auffassen, was auch ausreichend ist, solange
man den Spektralkalkül nicht benötigt.
Ein verbreitetes Vorgehen, ein System der klassischen Mechanik mit
Phasenraum $\mathbb{R}^{2n}$ zu "`quantisieren"' besteht darin, der
Ortsvariablen $q^k$ den Ortsoperator $Q^k$ zuzuordnen und der
Impulsvariablen $p_k$ den Impulsoperator $P_k$; dabei werden die Orts-
und Impulsoperatoren üblicherweise wie folgt definiert:
\begin{align}
\label{eq:Ortssoperator}
Q^k \colon C_0^\infty(\mathbb{R}^n) \ni \psi \mapsto (\mathbb{R}^{n} \ni
q \mapsto (Q^k\psi)(q) = q^k \psi(q)) \in C_0^\infty(\mathbb{R}^n)
\end{align}
und
\begin{align}
\label{eq:Impulsoperator}
P_k \colon C_0^\infty(\mathbb{R}^n) \ni \psi \mapsto (\mathbb{R}^{n} \ni
q \mapsto (P_k\psi)(q) = \frac{\hbar}{\I} \frac{\partial
\psi}{\partial q^k}(q)) \in C_0^\infty(\mathbb{R}^n)\ensuremath{\text{.}}
\end{align}
Dabei bezeichnet $C^\infty_0(\mathbb{R}^n)$ den Prä"=Hilbertraum der
glatten Funktionen mit kompaktem Träger auf $\mathbb{R}^n$.
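Mit diesen Definitionen rechnet man in einer Dimension sofort die kanonische Vertauschungsrelation $[Q,P] = \I\hbar$ nach. Die folgende SymPy"=Skizze (reine Illustration; die Bezeichner sind frei gewählt) führt diese Rechnung symbolisch aus:

```python
import sympy as sp

q, hbar = sp.symbols('q hbar', real=True)
psi = sp.Function('psi')(q)

def Q(f):
    # Ortsoperator: Multiplikation mit q
    return q * f

def P(f):
    # Impulsoperator: (hbar/i) d/dq
    return hbar / sp.I * sp.diff(f, q)

# [Q, P] psi = Q(P psi) - P(Q psi) = i hbar psi
kommutator = sp.simplify(Q(P(psi)) - P(Q(psi)))
```

In $n$ Dimensionen ergibt sich analog $[Q^k, P_l] = \I\hbar\,\delta^k_l$.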
Nun ist schon bei einfachsten klassischen Observablen, etwa der Form
$q^kp_k$, nicht klar, wie diese grobe Regel anzuwenden ist, denn auf der
einen Seite vertauschen $q^k$ und $p_k$, die korrespondierenden
Operatoren $Q^k$ und $P_k$ tun dies jedoch nicht. Man muss also eine
Ordnungsvorschrift wählen. Die einfachste Möglichkeit dies zu tun, ist
die sogenannte \neuerBegriff{Standardordnung}. Hat man eine in $q^k$ und
$p_l$ polynomiale Funktion vorliegen, so schreibt man erst alle Impulse
nach rechts und ersetzt anschließend die Orts- und Impulsvariablen wie
oben beschrieben. Man kann zeigen, dass so eine $\mathbb{C}$"=lineare
Bijektion $\rho_{\mathrm{std}}\colon \mathrm{Pol}(\mathbb{R}^{2n}) \to
\mathrm{DiffOp}_{\mathrm{Pol}}(C^\infty_0(\mathbb{R}^n))$ von den
polynomialen Funktionen $\mathrm{Pol}(\mathbb{R}^{2n})$ auf
$\mathbb{R}^{2n}$ in die Differentialoperatoren auf
$C^\infty_0(\mathbb{R}^n)$ mit polynomialen Koeffizienten definiert
wird, siehe \cite[Prop. 5.2.7]{waldmann:2007a}. Analog liefern viele
andere Ordnungsvorschriften genauso $\mathbb{C}$"=lineare Bijektionen
$\rho \colon \mathrm{Pol}(\mathbb{R}^{2n}) \to
\mathrm{DiffOp}_{\mathrm{Pol}}(C^\infty_0(\mathbb{R}^n))$. Die grobe
Idee besteht nun darin, mit Hilfe eines derartigen $\rho$ die
Komposition von Differentialoperatoren auf den Raum $
\mathrm{Pol}(\mathbb{R}^{2n})$ zurückzuziehen und die so erhaltene
assoziative Multiplikation mit der punktweisen, kommutativen
Multiplikation zu vergleichen. Für $\rho_{\mathrm{std}}$ erhält man so
das sogenannte \neuerBegriff{standardgeordnete Sternprodukt}
$\star_{\mathrm{std}}$.
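Für Funktionen, die in den Impulsen polynomial sind, lässt sich $\rho_{\mathrm{std}}$ in einer Dimension unmittelbar nachbauen: erst alle Impulse nach rechts schreiben, dann $q$ durch $Q$ und $p$ durch $P$ ersetzen. Die folgende SymPy"=Skizze (reine Illustration, nicht die Konstruktion aus \cite{waldmann:2007a}; Bezeichner frei gewählt) setzt genau diese Vorschrift um:

```python
import sympy as sp

q, p, hbar = sp.symbols('q p hbar')
psi = sp.Function('psi')(q)

def rho_std(f, wave):
    # Standardordnung in einer Dimension: f = sum_b a_b(q) p^b wird zu
    # rho_std(f) wave = sum_b a_b(q) (hbar/i)^b d^b wave / dq^b
    op = sp.S.Zero
    for (b,), a_b in sp.Poly(sp.expand(f), p).terms():
        op += a_b * (hbar / sp.I)**b * sp.diff(wave, q, b)
    return sp.expand(op)
```

So ist etwa $\rho_{\mathrm{std}}(qp)\psi = \frac{\hbar}{\I}\, q\, \partial_q\psi$, während die umgekehrte Operatorordnung $\frac{\hbar}{\I}\partial_q(q\psi)$ einen zusätzlichen Term $\frac{\hbar}{\I}\psi$ liefert.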
\begin{proposition}
\label{prop:StandardgeordnetesSternprodukt}
Das standardgeordnete Sternprodukt $\star_{\mathrm{std}}$ ist für
alle $f,f' \in \mathrm{Pol}(\mathbb{R}^{2n})$ durch
\begin{align}
\label{eq:StandardgeordnetesSternprodukt}
f \star_{\mathrm{std}} f' =
\rho_{\mathrm{std}}^{-1}(\rho_{\mathrm{std}}(f)\rho_{\mathrm{std}}(f'))
= \sum_{r = 0}^\infty \frac{1}{r!} \left(\frac{\hbar}{\I} \right)^r
\sum_{i_1,\dots,i_r} \frac{\partial^r f}{\partial p_{i_1}
\dotsm \partial p_{i_r}} \frac{\partial^r f'}{\partial q^{i_1}
\dotsm \partial q^{i_r}}
\end{align}
gegeben. Dabei sind $(q^1,\dots,q^n,p_1,\dots,p_n)$ Koordinaten von
$\mathbb{R}^{2n}$, wobei die $q^i$ als Orts- und die $p_i$ als
Impulskoordinaten interpretiert werden.
\end{proposition}
Einen Beweis dieser Tatsache findet man in \cite[Prop. 5.2.17]{waldmann:2007a}.
\begin{bemerkung}
\label{bem:PolynomialSternprodukt}
Offensichtlich kann man die Definition von $\star_{\mathrm{std}}$ ohne
Weiteres auf Funktionen ausdehnen, die nur in den Impulsvariablen
polynomial sind.
\end{bemerkung}
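Für Polynome bricht die Reihe in \eqref{eq:StandardgeordnetesSternprodukt} ab und lässt sich direkt implementieren. Die folgende SymPy"=Skizze (eine Dimension, reine Illustration mit frei gewählten Bezeichnern) prüft exemplarisch die Assoziativität sowie die Relation $q \star_{\mathrm{std}} p - p \star_{\mathrm{std}} q = \I\hbar$:

```python
import sympy as sp

q, p, hbar = sp.symbols('q p hbar')

def star_std(f, g):
    # f star g = sum_r 1/r! (hbar/i)^r (d^r f / dp^r) (d^r g / dq^r);
    # eine Dimension, fuer in p polynomiales f bricht die Summe ab
    result, r, fr = sp.S.Zero, 0, f
    while fr != 0:
        result += (hbar / sp.I)**r / sp.factorial(r) * fr * sp.diff(g, q, r)
        r += 1
        fr = sp.diff(f, p, r)
    return sp.expand(result)
```

Insbesondere reproduziert der $\star_{\mathrm{std}}$"=Kommutator von $q$ und $p$ die kanonische Vertauschungsrelation der zugehörigen Operatoren.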
\begin{bemerkung}
\label{bem:Hermitesch}
Aus physikalischer Sicht hat die Standardordnung einen wesentlichen
Nachteil, denn sie bildet observable Elemente $f = \overline{f} \in \mathrm{Pol}(\mathbb{R}^{2n})$ im
Allgemeinen nicht wieder auf observable Elemente ab. So ist zum
Beispiel der Operator $\rho_{\mathrm{std}}(q^1p_1)$ kein
Hermitescher Operator bezüglich der üblichen
Prä"=Hilbertraumstruktur von $C^\infty_0(\mathbb{R}^n)$. Für
$\star_{\mathrm{std}}$ spiegelt sich dieser Defekt darin wider, dass
im Allgemeinen die Gleichung $\overline{f \star_{\mathrm{std}} f'} =
\overline{f'} \star_{\mathrm{std}} \overline{f}$ verletzt ist. Wir
werden später, nachdem der allgemeine Begriff des Sternprodukts
eingeführt wurde, Beispiele sehen, wo diese Problematik nicht auftritt.
\end{bemerkung}
Diese Überlegungen münden letztlich in den Begriff des Sternprodukts,
den wir nun definieren wollen. Für weitere Details zur beschriebenen
Motivation müssen wir leider wieder auf \cite{waldmann:2007a}
verweisen. Ebenso verweisen wir für die im Folgenden benötigten
Grundlagen zur Differentialgeometrie, symplektischen und
Poisson"=Geometrie, die wir alle in dieser Arbeit voraussetzen wollen,
auf die zahlreiche Literatur. Es seien exemplarisch \cite{lee:2003a},
\cite{ortega.ratiu:2004} und \cite{waldmann:2007a} genannt.
\begin{definition}[Sternprodukt]
\label{def:Sternprodukt}
Ein \neuerBegriff{formales Sternprodukt} oder kurz
\neuerBegriff{Sternprodukt} $\star$ für eine Poisson"=Mannigfaltigkeit
$(M,\{\cdot,\cdot\})$ ist eine $\mathbb{C}[[\lambda]]$"=bilineare Abbildung
\begin{align}
\label{eq:Sternprodukt}
\star \colon C^\infty(M)[[\lambda]] \times C^\infty(M)[[\lambda]] \to
C^\infty(M)[[\lambda]]
\end{align}
der Form
\begin{align}
\label{eq:FormSternprodukt}
\star = \sum_{r=0}^\infty \lambda^r \star_r
\end{align}
mit $\mathbb{C}$"=bilinearen Abbildungen $\star_r \colon C^\infty(M)
\times C^\infty(M) \to C^\infty(M)$, welche auf die übliche Weise
$\mathbb{C}[[\lambda]]$"=bilinear fortgesetzt werden, so dass für
alle $f,f' \in C^\infty(M)$ folgende Bedingungen erfüllt sind.
\begin{definitionEnum}
\item %
$\star$ ist assoziativ.
\item %
$f\star_0 f' = f\cdot f'$.
\item %
$f \star_1 f' - f' \star_1 f = \I \{f,f'\}$.
\end{definitionEnum}
Falls zusätzlich $\star_r$ für alle $r \in \mathbb{N}$ ein
Bidifferentialoperator ist, heißt $\star$
\neuerBegriff{differentiell}. Gilt $\overline {(f \star f')} =
\overline{f'} \star \overline{f}$ für alle $f,f' \in C^\infty(M)$,
nennt man $\star$ \neuerBegriff{Hermitesch}. Dabei definiert man
$\overline{\lambda} = \lambda$.
\end{definition}
\begin{bemerkung}
\label{bem:Unteralgebra}
Wir wollen im Folgenden auch Produkte, die nur auf formalen
Potenzreihen von Poisson-$^*$-Unteralgebren von
$C^\infty(M)[[\lambda]]$ definiert sind, als Sternprodukte bezeichnen,
sofern sie die sonstigen Eigenschaften aus Definition
\ref{def:Sternprodukt} erfüllen.
\end{bemerkung}
\begin{bemerkung}
\label{bem:KonvergenzProbleme}
Interpretiert man $M$ als Phasenraum eines physikalischen Systems, so
kann man sich überlegen, dass es sinnvoll ist anzunehmen, dass
$\star_r$ für alle $r \in \mathbb{N}$ die Dimension von
$\frac{1}{[\mathrm{Wirkung}]^r}$ hat. Grob gesagt legt das Beispiel
des Kotangentialbündels nahe, die Koordinatenfunktionen auf einem
beliebigen Phasenraum mit $\sqrt{[\mathrm{Wirkung}]}$ als
physikalische Dimension zu interpretieren, da es auf dem
Kotangentialbündel genauso viele Koordinaten gibt, die man als
Ortskoordinaten interpretieren kann, wie solche, die man als
Impulskoordinaten ansehen kann und deren Produkte somit die Einheit
einer Wirkung tragen. Dann überlegt man sich im Wesentlichen durch
Betrachten der Differentiationsordnung in der Fedosovkonstruktion
(vgl.\ \cite[Abschnitt 6.4]{waldmann:2007a}), dass bei einem
Fedosov"=Sternprodukt $\star_r$ die Dimension
$\frac{1}{[\mathrm{Wirkung}]^r}$ hat. Da jedes differentielle
Sternprodukt zu einem Fedosov"=Sternprodukt äquivalent ist (vgl.\
\cite[Satz 6.4.27]{waldmann:2007a}), ist es naheliegend, dies auch
für beliebige differentielle Sternprodukte so zu interpretieren.
Weiter kann man $\lambda$ als $\hbar$ auffassen, welches ja die
Dimension einer Wirkung hat. In dieser Sichtweise wollen wir für
$\star$ das Symbol $\star^{\scriptscriptstyle \hbar}$
verwenden. Somit ist für alle $r \in \mathbb{N}$ die Größe
${\star^{\scriptscriptstyle{\hbar}}_r}\hbar^r$ dimensionslos und man
kann für zwei Funktionen $f,f' \in C^\infty(M)$ sinnvoll nach der
Konvergenz der Reihe $\sum_{r = 0}^\infty \hbar^r f
{\star^{\scriptscriptstyle{\hbar}}_r} f'$ fragen. Es ist bisher im
Allgemeinen nicht klar, ob und wie man eine sinnvolle Topologie
angeben kann, in der diese Konvergenz zu verstehen ist. Außerdem wäre
es wünschenswert, eine Unteralgebra $\mathcal{A}$ von
$C^\infty(M)[[\lambda]]$ auszuzeichnen, bei der für alle $f,f' \in
\mathcal{A}$ die Reihe $f\star^{\scriptscriptstyle{\hbar}}f'$ in
einem bestimmten Sinne konvergiert. Siehe
\cite{beiser.roemer.waldmann:2007a} und \cite{beiser:2005a} für
nichttriviale Beispiele. Für weitere Betrachtungen zum
Konvergenzproblem sei auch auf das Buch von Fedosov
\cite{fedosov:1996a} verwiesen. In den Beispielen
\ref{bsp:WeylSternprodukt} und \ref{bsp:WickSternprodukt} werden wir
noch einfache, aber physikalisch relevante Situationen angeben, in
denen das Konvergenzproblem einfach zu lösen ist.
\end{bemerkung}
\begingroup
\emergencystretch=0.6em
\begin{definition}[Äquivalenz von Sternprodukten]
\label{def:Aequivalenz}
Sei $\star$ bzw.\ $\star'$ ein formales Sternprodukt auf der
Poisson"=Mannigfaltigkeit $(M,\{\cdot,\cdot\})$ bzw.\ $(M',\{\cdot,\cdot\}')$.
\begin{definitionEnum}
\item %
Die formalen Sternprodukte $\star$ und $\star'$ heißen
\neuerBegriff{isomorph}, falls es einen
$\mathbb{C}[[\lambda]]$"=Algebraisomorphismus $S \colon
(C^\infty(M)[[\lambda]],\star) \to
(C^\infty(M')[[\lambda]],\star')$ gibt. Kann $S$ als
$\mathbb{C}[[\lambda]]$"=lineare Fortsetzung eines
Poisson"=Algebraisomorphismus $s \colon
(C^\infty(M),\{\cdot,\cdot\}) \to (C^\infty(M'),\{\cdot,\cdot\}')$
gewählt werden, so wollen wir $\star$ und $\star'$
\neuerBegriff{Poisson"=isomorph} nennen. Sind
$(M,\{\cdot,\cdot\})$ und $(M',\{\cdot,\cdot\}')$ symplektisch, so
sagen wir auch $\star$ und $\star'$ sind
\neuerBegriff{symplektomorph}.
\item %
Gilt $(M,\{\cdot,\cdot\}) = (M',\{\cdot,\cdot\}')$, so heißen
$\star$ und $\star'$ \neuerBegriff{äquivalent}, falls es einen
$\mathbb{C}[[\lambda]]$"=Algebraisomorphismus $S \colon
C^\infty(M)[[\lambda]] \to C^\infty(M')[[\lambda]]$ der Form $S =
\operatorname{\mathsf{id}} + \sum_{r = 1}^\infty \lambda^r S_r$ von
$(C^\infty(M)[[\lambda]],\star)$ nach
$(C^\infty(M')[[\lambda]],\star')$ gibt.
\end{definitionEnum}
\end{definition}
\begin{bemerkung}
\label{bem:MilnorsExercise}
Eine Abbildung $s \colon C^\infty(M) \to C^\infty(M')$ ist genau dann
ein Algebraisomorphismus, wenn es einen Diffeomorphismus $\phi \colon
M' \to M$ mit $s = \phi^*$ gibt (\emph{Milnors Exercise}
\cite{grabowski:2005a}). Insbesondere kann man sich so jeden
Poisson"=Algebraisomorphismus $s \colon (C^\infty(M),\{\cdot,\cdot\})
\to (C^\infty(M'),\{\cdot,\cdot\}')$ als pull-back einer
Poisson"=Abbildung denken.
\end{bemerkung}
\begin{bemerkung}
\label{bem:AequivalenzTransformation}
Ist $\star'$ ein Sternprodukt für $(M',\{\cdot,\cdot\}')$, so liefert
eine Abbildung
\begin{equation}
\label{eq:AequivalenzTransformation}
S = S_0 +
\sum^{\infty}_{r=1} \lambda^r S_r \colon C^\infty(M')[[\lambda]]
\to C^\infty(M)[[\lambda]]
\end{equation}
mit einem Poisson"=Algebraisomorphismus $S_0 \colon C^\infty(M') \to
C^\infty(M)$ und $\mathbb{C}$-linearen Abbildungen $S_r \colon
C^\infty(M') \to C^\infty(M)$ sowie $S_r(1) = 0$ für $r >
0$ durch
\begin{equation}
\label{eq:AequivalenzTransformation2}
f \star \tilde f = S(S^{-1}(f) \star' S^{-1}(\tilde f))
\end{equation}
für alle $f,\tilde f \in C^\infty(M)[[\lambda]]$ ein Sternprodukt $\star$
für $(M,\{\cdot,\cdot\})$, siehe dazu auch
\cite[Prop.~6.1.7]{waldmann:2007a}. Falls $S = \phi^*$ für einen
Poisson"=Diffeomorphismus $\phi \colon M \to M'$ gilt, so schreiben
wir auch $\phi^*\star' := \star$ und sprechen von einem (mit $\phi$)
\neuerBegriff{zurückgezogenen} Sternprodukt.
\end{bemerkung}
\endgroup
Wir kommen nun zu zwei einfachen Beispielen für Sternprodukte, das \neuerBegriff{Weyl-}
und das \neuerBegriff{Wick"=Sternprodukt}.
\begin{beispiel}[Weyl"=Sternprodukt]
\label{bsp:WeylSternprodukt}
Auf $M = \mathbb{R}^{2n}$ mit den Ortskoordinaten $q^k$ und den
Impulskoordinaten $p_k$ definiert
\begin{align}
\label{eq:WeylSternprodukt}
f \star_{\mathrm{\scriptscriptstyle{Weyl}}} f' := \mu \circ \exp\Bigl(-\frac{\I\lambda}{2}
\omega^{rs} \frac{\partial}{\partial x^r} \otimes
\frac{\partial}{\partial x^s} \Bigr) f\otimes f'
\end{align}
für alle $f,f' \in C^\infty(\mathbb{R}^{2n})$ und
$\mathbb{C}[[\lambda]]$"=lineare Fortsetzung ein Sternprodukt auf
$(\mathbb{R}^{2n},\omega)$. Dabei ist $\omega = dq^i \wedge d p_i$
die kanonische symplektische Form auf $\mathbb{R}^{2n}$ mit
Fundamentalmatrix $(\omega_{ij})$ und deren
inverser Matrix $(\omega^{ij})$. Die Abbildung $\mu \colon
C^\infty(\mathbb{R}^{2n}) \otimes C^\infty(\mathbb{R}^{2n}) \to
C^\infty(\mathbb{R}^{2n})$ bezeichnet die punktweise Multiplikation
von Funktionen. Das Sternprodukt
$\star_{\mathrm{\scriptscriptstyle{Weyl}}}$ heißt auch
\neuerBegriff{Weyl"=Moyal"=Sternprodukt} oder kurz
\neuerBegriff{Weyl"=Sternprodukt}. Das Weyl"=Sternprodukt ist
differentiell und Hermitesch. Für die Unteralgebra der in den
Impulskoordinaten polynomialen Funktionen sieht man leicht, dass die
Exponentialreihe abbricht.
\end{beispiel}
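Zur Illustration lässt sich Gleichung \eqref{eq:WeylSternprodukt} für $n = 1$ auf Polynomen direkt symbolisch nachrechnen. Der folgende Sketch mit \texttt{sympy} ist eine hypothetische Minimalimplementierung (die Funktionsnamen sind frei gewählt, keine Bibliotheksfunktionen); für $\omega = dq \wedge dp$ wird der Exponent zu $\frac{\I\lambda}{2}(\partial_q \otimes \partial_p - \partial_p \otimes \partial_q)$, und auf Polynomen bricht die Exponentialreihe ab.

```python
import sympy as sp

lam = sp.Symbol('lambda')            # formaler Parameter als Symbol
q, p = sp.symbols('q p', real=True)

def dmix(f, kq, kp):
    """k_q-fache q-Ableitung und k_p-fache p-Ableitung."""
    for _ in range(kq):
        f = sp.diff(f, q)
    for _ in range(kp):
        f = sp.diff(f, p)
    return f

def weyl_star(f, g, order=6):
    """Weyl-Moyal-Produkt fuer n = 1 mit Exponent
    (i*lam/2)(d_q (x) d_p - d_p (x) d_q), abgebrochen nach 'order';
    fuer Polynome hinreichend kleinen Grades ist das Ergebnis exakt."""
    result = sp.Integer(0)
    for r in range(order + 1):
        term = sp.Integer(0)
        for k in range(r + 1):
            term += sp.binomial(r, k) * (-1)**(r - k) \
                * dmix(f, k, r - k) * dmix(g, r - k, k)
        result += (sp.I * lam / 2)**r / sp.factorial(r) * term
    return sp.expand(result)

# Bedingung (iii) aus der Definition des Sternprodukts, hier fuer q und p:
kommutator = sp.simplify(weyl_star(q, p) - weyl_star(p, q))   # = i*lam
```

Man erhält so $q \star_{\mathrm{\scriptscriptstyle{Weyl}}} p - p \star_{\mathrm{\scriptscriptstyle{Weyl}}} q = \I\lambda = \I\lambda\{q,p\}$ und kann die Assoziativität auf Beispielpolynomen prüfen.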
\begin{beispiel}[Wick"=Sternprodukt]
\label{bsp:WickSternprodukt}
Es sei $M = \mathbb{C}^n = \mathbb{R}^{2n}$ mit der kanonischen
symplektischen Form $\omega$ versehen. Dann definiert
\begin{align}
\label{eq:WickSternprodukt}
f \star_{\mathrm{\scriptscriptstyle{Wick}}} f' := \mu \circ \exp\Bigl(2 \lambda
\frac{\partial}{\partial z^k} \otimes \frac{\partial}{\partial
\overline{z}^k}\Bigr) f \otimes f'
\end{align}
für alle $f,f' \in C^\infty(\mathbb{C}^n)$ und
$\mathbb{C}[[\lambda]]$"=lineare Fortsetzung ein Sternprodukt für
$(\mathbb{R}^{2n},\omega)$. Dabei ist $\mu$ definiert wie in Beispiel
\ref{bsp:WeylSternprodukt} und $z^i$, $\overline{z}^i$ sind die
holomorphen und anti"=holomorphen Standardkoordinaten der reellen
Mannigfaltigkeit $\mathbb{C}^n$. Man nennt
$\star_{\mathrm{\scriptscriptstyle{Wick}}}$
\neuerBegriff{Wick"=Sternprodukt}. Es ist differentiell und
Hermitesch. Wählt man die in $z^i$ und $\overline{z}^i$ polynomialen
Funktionen als Unteralgebra der glatten Funktionen auf
$\mathbb{C}^n$, so hat man auch hier nach Ersetzen von $\lambda$
durch $\hbar$ eine konvergente Situation vorliegen.
\end{beispiel}
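Analog lässt sich Gleichung \eqref{eq:WickSternprodukt} für $n = 1$ symbolisch umsetzen; auch dies ist nur ein hypothetischer Sketch mit frei gewählten Namen. Unter der Konventionsannahme $z = q + \I p$ gilt $\I\{z,\overline z\} = 2$, so dass der Kommutator $z \star_{\mathrm{\scriptscriptstyle{Wick}}} \overline z - \overline z \star_{\mathrm{\scriptscriptstyle{Wick}}} z = 2\lambda$ gerade Bedingung (iii) in erster Ordnung wiedergibt.

```python
import sympy as sp

lam = sp.Symbol('lambda')
z, zb = sp.symbols('z zbar')     # z und zbar als unabhaengige Symbole

def dn(f, k, var):
    """k-fache Ableitung nach var."""
    for _ in range(k):
        f = sp.diff(f, var)
    return f

def wick_star(f, g, order=6):
    """Wick-Produkt fuer n = 1: mu o exp(2*lam * d_z (x) d_zbar);
    auf Polynomen in z, zbar bricht die Reihe ab."""
    result = sp.Integer(0)
    for r in range(order + 1):
        result += (2*lam)**r / sp.factorial(r) * dn(f, r, z) * dn(g, r, zb)
    return sp.expand(result)
```

Zum Beispiel ist $z \star_{\mathrm{\scriptscriptstyle{Wick}}} \overline z = z\overline z + 2\lambda$, während $\overline z \star_{\mathrm{\scriptscriptstyle{Wick}}} z = z\overline z$ gilt.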
Für weitere konkrete Beispiele von Sternprodukten verweisen wir auf
\cite{waldmann:2007a}, \cite{waldmann:1998a} und \cite{gutt:1983a}.
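Die Äquivalenz im Sinne von Definition \ref{def:Aequivalenz} lässt sich für die beiden obigen Beispiele explizit machen: unter der Konventionsannahme $z = q + \I p$ vermittelt der Operator $N = \exp(\lambda\,\partial_z\partial_{\overline z})$ zwischen Weyl"= und Wick"=Sternprodukt, d.\,h.\ es gilt $f \star_{\mathrm{\scriptscriptstyle{Wick}}} f' = N(N^{-1}f \star_{\mathrm{\scriptscriptstyle{Weyl}}} N^{-1}f')$, wobei $N$ die Form $\operatorname{\mathsf{id}} + \sum_{r \geq 1} \lambda^r S_r$ mit $S_r(1) = 0$ hat. Der folgende hypothetische \texttt{sympy}"=Sketch (Namen frei gewählt) prüft dies auf Polynomen; das Weyl"=Produkt ist dabei in die Koordinaten $z$, $\overline z$ umgeschrieben, in denen der Exponent die Form $\lambda(\partial_z \otimes \partial_{\overline z} - \partial_{\overline z} \otimes \partial_z)$ annimmt.

```python
import sympy as sp

lam = sp.Symbol('lambda')
z, zb = sp.symbols('z zbar')

def dz(f, kz, kzb):
    """kz-fache z- und kzb-fache zbar-Ableitung."""
    for _ in range(kz):
        f = sp.diff(f, z)
    for _ in range(kzb):
        f = sp.diff(f, zb)
    return f

def weyl_star_zz(f, g, order=6):
    """Weyl-Produkt in z, zbar: Exponent lam*(d_z (x) d_zbar - d_zbar (x) d_z)."""
    result = sp.Integer(0)
    for r in range(order + 1):
        term = sp.Integer(0)
        for k in range(r + 1):
            term += sp.binomial(r, k) * (-1)**(r - k) \
                * dz(f, k, r - k) * dz(g, r - k, k)
        result += lam**r / sp.factorial(r) * term
    return sp.expand(result)

def wick_star(f, g, order=6):
    """Wick-Produkt: Exponent 2*lam * d_z (x) d_zbar."""
    result = sp.Integer(0)
    for r in range(order + 1):
        result += (2*lam)**r / sp.factorial(r) * dz(f, r, 0) * dz(g, 0, r)
    return sp.expand(result)

def N(f, order=6, inverse=False):
    """Aequivalenztransformation N = exp(lam * d_z d_zbar) bzw. deren Inverse."""
    s = -lam if inverse else lam
    result = sp.Integer(0)
    for k in range(order + 1):
        result += s**k / sp.factorial(k) * dz(f, k, k)
    return sp.expand(result)
```

Auf Polynomen kleinen Grades stimmen $f \star_{\mathrm{\scriptscriptstyle{Wick}}} f'$ und $N(N^{-1}f \star_{\mathrm{\scriptscriptstyle{Weyl}}} N^{-1}f')$ dann exakt überein.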
Nach dem bisher Gesagten ergibt sich freilich die Frage, ob Sternprodukte
immer existieren und ob man diese auf sinnvolle Weise klassifizieren
kann. Dies wurde in voller Allgemeinheit, d.\,h.\ für beliebige
Poisson"=Mannigfaltigkeiten $1997$ von Kontsevich beantwortet, jedoch
erst 2003 publiziert (vgl.\ \cite{kontsevich:2003a}). Wir fassen sein
Resultat in folgendem Satz zusammen.
\begin{satz}[Existenz und Klassifikation von Sternprodukten auf
Poisson"=Mannigfaltigkeiten]
\label{thm:KontsevichExistenzUndKlassifikation}
Sei $(M,\{\cdot,\cdot\})$ eine Poisson"=Mannigfaltigkeit mit
Poisson"=Tensor $\pi$ (vgl.\ \cite[Def. 4.1.7]{waldmann:2007a}), dann
gilt Folgendes.
\begin{satzEnum}
\item %
Es gibt ein differentielles Sternprodukt für
$(M,\{\cdot,\cdot\})$.
\item%
Die Äquivalenzklassen von Sternprodukten auf $(M,\{\cdot,\cdot\})$
stehen zu den Äquivalenzklassen von formalen Deformationen
(vgl.\ \cite[Def. 4.2.35]{waldmann:2007a}) des Poisson"=Tensors
$\pi$ modulo formalen Diffeomorphismen im Sinne von
\cite[Def. 4.2.40]{waldmann:2007a} in Bijektion.
\end{satzEnum}
\end{satz}
Das Problem dieser Aussage besteht darin, dass die formalen
Poisson"=Tensoren modulo formalen Diffeomorphismen im Allgemeinen sehr
unzugänglich sind.
Anders als im Poisson"=Fall ist die Existenz eines Sternprodukts auf
einer symplektischen Mannigfaltigkeit aufgrund des Darboux"=Theorems
(vgl.\ \cite[Satz 3.1.24]{waldmann:2007a}) zumindest lokal klar. Die
Schwierigkeit ist hier, zu zeigen, dass auch tatsächlich ein
globales Sternprodukt existiert. DeWilde und Lecomte gelang der erste
allgemeine Existenzbeweis für Sternprodukte auf symplektischen
Mannigfaltigkeiten schon 1983, siehe \cite{dewilde.lecomte:1983b} sowie
\cite{gutt.rawnsley:1999a} für eine neuere Darstellung. Einen anderen
unabhängigen und viel geometrischeren Beweis konnte Fedosov in einer
zunächst wenig beachteten Arbeit angeben, siehe
\cite{fedosov:1985a,fedosov:1986a,fedosov:1989a}. Ein dritter Beweis
stammt von Omori, Maeda und Yoshioka
\cite{omori.maeda.yoshioka:1991a}. Mit Hilfe der Fedosov"=Konstruktion
gelang es, die Äquivalenzklassen von Sternprodukten auf symplektischen
Mannigfaltigkeiten zu klassifizieren, insbesondere sind hier die
Arbeiten von Nest und Tsygan \cite{nest.tsygan:1995a,nest.tsygan:1995b},
Deligne \cite{deligne:1995a}, Bertelson, Cahen und Gutt
\cite{bertelson.cahen.gutt:1997a} und Weinstein und Xu
\cite{weinstein.xu:1998a} zu nennen.
\begin{satz}[Klassifikation von Sternprodukten auf symplektischen
Mannigfaltigkeiten]
\label{satz:KlassifikationSymplektisch}
Sei $(M,\omega)$ eine symplektische Mannigfaltigkeit. Dann stehen die
Äquivalenzklassen von differentiellen Sternprodukten zu den formalen
Potenzreihen $\mathrm{H}^2_{\mathrm{dR}}(M,\mathbb{C})[[\lambda]]$,
der zweiten de"=Rham"=Kohomologie von $M$ mit komplexen
Koeffizienten, in Bijektion.
\end{satz}
\begin{bemerkung}[Fedosov"=Konstruktion]
\label{bem:Fedosov}
Die Fedosovkonstruktion liefert für jedes geschlossene $\Omega \in
\lambda \Gamma^\infty(\bigwedge\nolimits^2 T^*M)[[\lambda]]$ ein differentielles
Sternprodukt $\star_{\Omega}$, das sogenannte
\neuerBegriff{Fedosov"=Sternprodukt}. Es zeigt sich, dass erstens zwei
Fedosov"=Sternprodukte $\star_{\Omega}$ und $\star_{\Omega'}$ genau
dann äquivalent sind, wenn $\Omega - \Omega'$ exakt ist, d.\,h.\ in
$H^2_{\mathrm{dR}}(M,\mathbb{C})[[\lambda]]$ die Gleichung $[\Omega]
= [\Omega']$ gilt, und zweitens, dass jedes differentielle Sternprodukt
zu einem Fedosov"=Sternprodukt äquivalent ist. Für eine ausführliche
Lehrbuchdarstellung dieses Sachverhaltes verweisen wir auf
\cite[Abschnitt 6.4]{waldmann:2007a}. Die Fedosovkonstruktion hat außer
$\Omega$ noch weitere Eingangsdaten, etwa eine torsionsfreie
kovariante Ableitung $\nabla$. Allerdings ändert eine Variation
dieser zusätzlichen Daten nicht die Äquivalenzklasse des
erhaltenen Sternprodukts, weshalb wir sie hier unterschlagen wollen.
\end{bemerkung}
Satz \ref{satz:KlassifikationSymplektisch} besagt im Speziellen, dass
auf einer Mannigfaltigkeit mit verschwindender zweiter
de"=Rham"=Kohomologie, wie etwa dem $\mathbb{R}^n$, $n \in \mathbb{N}$, alle
Sternprodukte äquivalent sind. Insbesondere sind nach dem
Poincar\'e"=Lemma im symplektischen Fall auch alle Sternprodukte lokal
äquivalent, womit sich eine Nicht"=Äquivalenz zweier Sternprodukte als
globaler Effekt erweist. Ersteres legt auch nahe, die obigen Ergebnisse
als Klassifikation bis auf Wahl einer (verallgemeinerten)
Ordnungsvorschrift zu interpretieren. Innerhalb einer Äquivalenzklasse von
Sternprodukten kann es durchaus verschiedene Sternprodukte geben, die
physikalisch als nicht äquivalent anzusehen sind.
Bisher haben wir nur die Observablenalgebra betrachtet, jedoch noch
nicht darüber gesprochen, wie wir Zustände in der
Deformationsquantisierung beschreiben wollen. Auch wenn dies für die
vorliegende Arbeit nicht direkt von Belang ist, wollen wir auf diesen
Punkt der allgemeinen Übersicht halber in aller Kürze eingehen.
Es stellt sich als sinnvoll heraus, in der Deformationsquantisierung,
wie auch in anderen Zugängen zur Quantenmechanik, Zustände als ein von
den Observablen abgeleitetes Konzept anzusehen, nämlich als positive
Funktionale auf der Observablenalgebra. Mit Hilfe des Begriffs konvexer
Zerlegbarkeit kann man von gemischten und reinen Zuständen sprechen. Um
das für die Quantenmechanik wichtige Superpositionsprinzip auch in die
Deformationsquantisierung implementieren zu können, kann man
Darstellungen von Sternproduktalgebren betrachten. Wir können an dieser
Stelle jedoch nicht weiter darauf eingehen. Siehe \cite{waldmann:2007a}
und die Referenzen dort für weitergehende Betrachtungen. Allgemein zum
Zustandskonzept und Superpositionsprinzip sehr lesenswert ist das Buch
von Araki \cite{araki1999mathematical}.
\section{Symmetrien und Invarianzen}
\label{sec:SymmetrienUndInvarianzen}
Die in dieser Arbeit betrachteten Symmetrien sind mathematisch immer
durch eine Gruppenwirkung einer Lie"=Gruppe $G$ auf einer
Mannigfaltigkeit $M$ gegeben, wobei letztere als Phasenraum eines
klassischen mechanischen Systems interpretiert werden kann. Wenn wir von
Lie"=Gruppe sprechen, meinen wir eine zusammenhängende
Lie"=Gruppe. Sofern nichts anderes gesagt, bezeichnet $\lieAlgebra$
stets die Lie"=Algebra von $G$.
\subsection{Impulsabbildungen und Phasenraumreduktion in der klassischen Mechanik}
\label{sec:ImpulsAbbKlassMechanik}
In diesem Abschnitt wiederholen wir den für diese Arbeit wichtigen
Begriff der Impulsabbildung und stellen die Phasenraum"=Reduktion nach
Marsden"=Weinstein vor. Wir orientieren uns hierbei unter anderem an
\cite{waldmann:2007a}, \cite{ortega.ratiu:2004} und
\cite{abraham.marsden:1985a}.
Es sei zunächst daran erinnert, dass man eine
Wirkung $\Phi \colon G \times M \to M$ einer Lie"=Gruppe $G$ auf einer
Poisson"=Mannigfaltigkeit $(M,\{\cdot,\cdot\})$ \neuerBegriff{kanonisch}
nennt, falls sie durch Poisson"=Abbildungen wirkt, d.\,h.\ falls
$\Phi_g^*\{f,f'\} = \{\Phi_g^*f,\Phi_g^*f'\}$ für alle $g \in G$ und $f,f'
\in C^\infty(M)$ gilt. Ist $M$ symplektisch, bedeutet dies, dass sie
durch Symplektomorphismen wirkt, man spricht in diesem Fall auch von
einer \neuerBegriff{symplektischen} Wirkung.
\begin{definition}[Impulsabbildung]
\label{def:Impulsabbildung}
Sei $(M,\{\cdot,\cdot\})$ eine Poisson"=Mannigfaltigkeit und $G$ eine
Lie"=Gruppe, die kanonisch auf $M$ operiere. Eine glatte
Abbildung
\begin{align}
\label{eq:Impulsabbildung}
J \colon M \to \lieAlgebra^*
\end{align}
heißt \neuerBegriff{Impulsabbildung}, falls für jedes $\xi \in
\lieAlgebra$ die Gleichung
\begin{align}
\label{eq:ImpulsabbildungDefGleichung}
\xi_M = \{\cdot,J(\xi)\}
\end{align}
erfüllt ist, wobei $J(\xi) \in C^\infty(M)$ als punktweise duale
Paarung $J(\xi)(p) := \dPaar{J(p)}{\xi}$ für alle $p \in M$ und $\xi
\in \lieAlgebra$ zu verstehen ist.
\end{definition}
\begin{bemerkung}
\label{bem:ImpulsabbildungSichtweiseRaume}
Äquivalent zu Definition \ref{def:Impulsabbildung} kann man eine
Impulsabbildung $J$ vermöge $J(\xi)(p) = \dPaar{J(p)}{\xi}$ für alle
$p \in M$ und $\xi
\in \lieAlgebra$ auch als lineare Abbildung
\begin{align}
\label{eq:ImpulsabbildungAndereSichtweise}
J \colon \lieAlgebra \to C^\infty(M)
\end{align}
auffassen, die $\overline{J(\xi)} = J(\xi)$ für alle $\xi \in
\lieAlgebra$ und Gleichung \eqref{eq:ImpulsabbildungDefGleichung}
erfüllt. Wir wollen im Folgenden beide Sichtweisen gleichberechtigt
verwenden.
\end{bemerkung}
\begin{bemerkung}
\label{bem:WirkungenundAequivarianteImpulsabb}
$G$ operiert auf $\lieAlgebra$ vermöge der adjungierten, auf
$\lieAlgebra^*$ vermöge der koadjungierten Wirkung und auf
$C^\infty(M)$ durch pull"=backs. Wenn wir von der $G$-Äquivarianz der
Impulsabbildung sprechen, beziehen wir uns immer auf diese
(Links-)Wir\-kungen. Des Weiteren sei bemerkt, dass die
$G$"=Äquivarianz bezüglich der einen Auffassung einer Impulsabbildung
äquivalent zur
$G$"=Äquivarianz in der anderen Auffassung ist.
\end{bemerkung}
\begin{bemerkung}
\label{bem:Eindeutigkeit}
Ist $M$ symplektisch und zusammenhängend, so ist $J$ eindeutig bis
auf Addition von Elementen aus $\lieAlgebra^*$.
\end{bemerkung}
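Ein Standardbeispiel zur Definition \ref{def:Impulsabbildung} ist die Drehwirkung von $G = SO(2)$ auf $(\mathbb{R}^2, \omega = dq \wedge dp)$: bei geeigneter Orientierungs- und Normierungskonvention des Erzeugers $\xi \in \lieAlgebra \cong \mathbb{R}$ ist $\xi_M = p\,\partial_q - q\,\partial_p$ und $J(\xi) = \frac{1}{2}(q^2 + p^2)$. Gleichung \eqref{eq:ImpulsabbildungDefGleichung} lässt sich dann etwa mit folgendem hypothetischen \texttt{sympy}"=Sketch (Namen frei gewählt) nachrechnen.

```python
import sympy as sp

q, p = sp.symbols('q p', real=True)
f = sp.Function('f')(q, p)           # beliebige Testfunktion

def poisson(g, h):
    """Kanonische Poisson-Klammer zu omega = dq ^ dp."""
    return sp.diff(g, q)*sp.diff(h, p) - sp.diff(g, p)*sp.diff(h, q)

J = (q**2 + p**2) / 2                # Impulsabbildung der Drehwirkung
xi_M_f = p*sp.diff(f, q) - q*sp.diff(f, p)   # Fundamentalvektorfeld, angewandt auf f

# Definitionsgleichung xi_M = {., J(xi)}, geprueft auf einer beliebigen Funktion:
rest = sp.simplify(poisson(f, J) - xi_M_f)   # = 0
```

Da $f$ hier eine unspezifizierte Funktion ist, prüft die Rechnung die Gleichung als Identität von Vektorfeldern.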
Eine Impulsabbildung braucht für eine gegebene Wirkung nicht zu
existieren, siehe etwa \cite[4.5.16]{ortega.ratiu:2004} für ein
einfaches Gegenbeispiel im symplektischen Fall. Die folgende Proposition,
vgl.\ \cite[Prop. 4.5.17]{ortega.ratiu:2004}, klärt die Situation für den
symplektischen Fall und liefert zwei einfach zu prüfende hinreichende
Existenzkriterien. Die entsprechende Frage für allgemeine
Poisson"=Mannigfaltigkeiten ist komplizierter.
\begin{proposition}
\label{prop:ExistenzImpulsAbbildung}
Sei $(M,\omega)$ eine symplektische Mannigfaltigkeit, auf der eine
Lie"=Gruppe $G$ symplektisch operiere. Es gibt genau dann eine
Impulsabbildung zu dieser Wirkung, wenn die lineare Abbildung
\begin{align}
\label{eq:AbbildungFuerExistenzVonImpulsabbildungen}
\lieAlgebra/[\lieAlgebra,\lieAlgebra] &\to H^1_{\mathrm{dR}}(M,\mathbb{R}) \\
{[\xi]} &\mapsto [\omega(\xi_M,\cdot)]
\end{align}
identisch Null ist. Dabei bezeichnet $H^1_{\mathrm{dR}}(M,\mathbb{R})$ die erste
de"=Rham"=Kohomologie von $M$ mit Koeffizienten in $\mathbb{R}$.
\end{proposition}
\begin{korollar}
\label{kor:HinreichendeBedFuerExistenzVonImpulsabb}
Seien $(M,\omega)$ und $G$ wie in Proposition
\ref{prop:ExistenzImpulsAbbildung}. Dann sind folgende Bedingungen
hinreichend für die Existenz einer Impulsabbildung.
\begin{korollarEnum}
\item %
\label{item:deRhamExFuerImpulsabb}
$H^1_{\mathrm{dR}}(M,\mathbb{R}) = 0$.
\item %
\label{item:LieAlgebarExFuerImpulsabb}
$[\lieAlgebra,\lieAlgebra] = \lieAlgebra$.
\end{korollarEnum}
\end{korollar}
\begin{bemerkung}
\label{bem:HinreichendeBedFuerExistenzVonImpulsabb}
Die Bedingung~\refitem{item:deRhamExFuerImpulsabb} aus Korollar
\ref{kor:HinreichendeBedFuerExistenzVonImpulsabb} ist insbesondere
für jede einfach zusammenhängende Mannigfaltigkeit $M$ erfüllt.
Lie"=Algebren, die Bedingung~\refitem{item:LieAlgebarExFuerImpulsabb}
erfüllen, heißen auch vollkommen oder perfekt; insbesondere ist jede
halbeinfache Lie"=Algebra perfekt, siehe
\cite[Satz~10.7]{schottenloher:1995a}. Für nichttriviale abelsche
Lie"=Algebren ist~\refitem{item:LieAlgebarExFuerImpulsabb}
offensichtlich nie erfüllt.
\end{bemerkung}
Die folgende Proposition (vgl.\ \cite{waldmann:2007a}) ist insbesondere
deshalb nützlich, da wir Lie"=Gruppen immer als zusammenhängend
voraussetzen.
\begin{proposition}
\label{prop:AequivariantUndInfinitesimalAequivariant}
Sei $(M,\{\cdot,\cdot\})$ eine Poisson"=Mannigfaltigkeit, $G$ eine
zusammenhängende Lie"=Gruppe, die auf $M$ kanonisch operiere. Eine
Impulsabbildung $J\colon M \to \lieAlgebra^*$ ist genau dann
$G$"=äquivariant, wenn die Gleichung $\{J(\xi),J(\eta)\} =
J([\xi,\eta])$ für alle $\xi,\eta \in \lieAlgebra$ erfüllt ist.
\end{proposition}
Eine gegebene Impulsabbildung $J\colon M \to \lieAlgebra^*$ ist im Allgemeinen nicht $G$"=äquivariant. Für kompakte $G$ kann man aus $J$
jedoch immer durch Integration über $G$ eine andere, $G$"=äquivariante
Impulsabbildung bauen, vgl.\ \cite[Prop. 4.5.19]{ortega.ratiu:2004}.
\begin{proposition}
\label{prop:KompaktInvarianteImpulsabbildung}
Sei $G$ eine kompakte Lie"=Gruppe, die auf einer
Poisson"=Mannigfaltigkeit $(M,\{\cdot,\cdot\})$ kanonisch
operiere. Falls eine Impulsabbildung existiert, so gibt es auch eine
$G$"=äquivariante Impulsabbildung.
\end{proposition}
Eine andere Idee besteht darin, eine gegebene Impulsabbildung additiv so
abzuändern, dass sie äquivariant wird. Dies ist Gegenstand der folgenden
Proposition, siehe \cite[Satz 3.3.48]{waldmann:2007a}.
\begin{proposition}
\label{prop:ExistenzAequivarianterImpulsabbildungen}
Sei $G$ eine Lie"=Gruppe, die auf einer zusammenhängenden
symplektischen Mannigfaltigkeit $(M,\omega)$ symplektisch operiere und
$J\colon M \to \lieAlgebra^*$ eine Impulsabbildung. Dann ist für alle
$\xi,\eta \in \lieAlgebra$ die Funktion $c(\xi,\eta) := J([\xi,\eta])
- \{J(\xi),J(\eta)\}$ konstant auf $M$ und die Abbildung $c \colon
\lieAlgebra \times \lieAlgebra \ni (\xi,\eta) \mapsto c(\xi,\eta) \in
\mathbb{R}$ ist ein $2$"=Lie"=Algebra"=Kozyklus. Weiter gibt es genau
dann eine $G$"=äquivariante Impulsabbildung $J'$, wenn das Element
$[c]$ der zweiten Lie"=Algebra"=Kohomologie $H^2(\lieAlgebra)$
trivial ist.
\end{proposition}
\begin{bemerkung}
\label{bem:InvarianteImpulsabbildung}
Falls es eine Impulsabbildung gibt, so ist eine hinreichende
Bedingung für die Existenz einer $G$"=äquivarianten Impulsabbildung,
dass die zweite Lie"=Algebra"=Kohomologie $H^2(\lieAlgebra)$
verschwindet. Dies ist nach dem Whitehead"=Lemma
(vgl.\ \cite[52]{guillemin.sternberg:1984b}) insbesondere für jede
halbeinfache Lie"=Algebra erfüllt.
\end{bemerkung}
Siehe \cite{ortega.ratiu:2004} und die dort
angegebenen Referenzen für weitere Kriterien für die Existenz
äquivarianter Impulsabbildungen.
Im Lichte der obigen Betrachtungen erscheint die folgende Definition
gerechtfertigt.
\begin{definition}
\label{def:Hamiltonsch}
Sei $(M,\{\cdot,\cdot\})$ eine Poisson"=Mannigfaltigkeit und
$\Phi\colon G \times M \to M$ eine kanonische Wirkung einer Lie"=Gruppe $G$.
\begin{definitionEnum}
\item %
$\Phi$ heißt \neuerBegriff{Hamiltonsch}, falls es eine
Impulsabbildung gibt.
\item %
$\Phi$ heißt \neuerBegriff{stark Hamiltonsch}, falls es eine
$G$"=äquivariante Impulsabbildung gibt.
\end{definitionEnum}
\end{definition}
Um die physikalische Interpretation anzudeuten und zur abkürzenden
Schreibweise geben wir noch folgende Definition.
\begin{definition}[Hamiltonsches System]
\label{def:hamiltonschesSystem}
Ist $(M,\{\cdot,\cdot\})$ eine Poisson"=Mannigfaltigkeit und $H \in
C^\infty(M)$, so nennen wir das Tripel $(M,\{\cdot,\cdot\},H)$ ein
\neuerBegriff{Hamiltonsches System}. Die Funktion $H$ heißt dann auch
\neuerBegriff{Hamiltonfunktion}. Den Fluß des Hamiltonschen
Vektorfelds $X_H := \{\cdot,H\}$ nennen wir
\neuerBegriff{Zeitentwicklung} von $H$. Ist $f \in C^\infty(M)$
invariant unter der Zeitentwicklung von $H$, so sagen wir, $f$ ist eine
\neuerBegriff{Erhaltungsgröße} (bezüglich der Zeitentwicklung von $H$).
\end{definition}
Die nächste Proposition, siehe etwa \cite[Satz
11.4.1]{marsden.ratiu:2000a}, verdeutlicht die physikalische Bedeutung
des Begriffs Impulsabbildung.
\begin{proposition}[Noether"=Theorem]
\label{prop:NotherTheorem}
Sei $(M,\{\cdot,\cdot\})$ eine Poisson"=Mannigfaltigkeit, $G$ eine
zusammenhängende Lie"=Gruppe, die auf $M$ Hamiltonsch wirke und $J
\colon M \to \lieAlgebra^*$ eine Impulsabbildung. Dann ist eine
Hamiltonfunktion $H \in C^\infty(M)$ genau dann $G$"=invariant, wenn
$J(\xi) \in C^\infty(M)$ eine Erhaltungsgröße bezüglich der
Zeitentwicklung von $H$ ist.
\end{proposition}
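Für das Drehbeispiel auf $(\mathbb{R}^2, dq \wedge dp)$ mit $J = \frac{1}{2}(q^2+p^2)$ lässt sich die Aussage des Noether"=Theorems unmittelbar symbolisch prüfen: jede rotationsinvariante Hamiltonfunktion der (hier hypothetisch angenommenen) Form $H = h(q^2+p^2)$ erfüllt $\{J,H\} = 0$, d.\,h.\ $J$ ist eine Erhaltungsgröße. Der folgende \texttt{sympy}"=Sketch (Namen frei gewählt) rechnet dies nach.

```python
import sympy as sp

q, p = sp.symbols('q p', real=True)
h = sp.Function('h')                 # beliebige glatte Funktion einer Variablen

def poisson(g, k):
    """Kanonische Poisson-Klammer zu omega = dq ^ dp."""
    return sp.diff(g, q)*sp.diff(k, p) - sp.diff(g, p)*sp.diff(k, q)

J = (q**2 + p**2) / 2                # Impulsabbildung der Drehwirkung
H = h(q**2 + p**2)                   # rotationsinvariante Hamiltonfunktion

# Noether: J ist unter der Zeitentwicklung von H erhalten, d.h. {J, H} = 0
erhaltung = sp.simplify(poisson(J, H))
```

Da $h$ unspezifiziert bleibt, gilt die Rechnung für jede Hamiltonfunktion dieser Bauart.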
Geometrisch grundlegend für das Verständnis der restlichen Arbeit ist
der Satz über die symplektische Reduktion nach Marsden, Weinstein und
Meyer \cite{meyer:1973a,marsden.weinstein:1974a}, vgl.\ auch
\cite[Satz~3.3.55]{waldmann:2007a} für eine ausführliche
Lehrbuchdarstellung.
\begin{satz}[Marsden"=Weinstein"=Reduktion]
\label{satz:MarsdenWeinstein}
Sei $(M,\omega)$ eine symplektische Mannigfaltigkeit, $G$ eine
Lie"=Gruppe mit zugehöriger Lie"=Algebra $\lieAlgebra$ und $\Phi \colon
G \times M \to M$ eine stark Hamiltonsche Wirkung mit
$G$"=äquivarianter Impulsabbildung $J \colon M \to \lieAlgebra^*$.
Sei weiter $\mu \in \lieAlgebra^*$ ein regulärer Wert und die
Impulsniveaufläche $C:= J^{-1}(\{\mu\}) \subseteq M$ nicht
leer. Zudem wirke die Isotropiegruppe $G_\mu \subset G$ von $\mu$
frei und eigentlich auf $C$. Unter diesen Voraussetzungen gibt es
eine eindeutig bestimmte symplektische Form $\omega_{\mathrm{red}}$
auf dem Quotientenraum $M_{\mathrm{red}}:= C/G_{\mu}$, so dass
\begin{align}
\label{eq:ReduzierteSymplektischeForm}
\ensuremath{\imath^*} \omega = \pi^* \omega_{\mathrm{red}}
\end{align}
gilt. Dabei ist $\ensuremath{\imath} \colon C \hookrightarrow M$ die kanonische
Inklusion und $\pi \colon C \to M_{\mathrm{red}}$ die kanonische Projektion auf
den Quotienten $M_{\mathrm{red}}$. Insbesondere ist $\pi$ eine surjektive
Submersion.
\end{satz}
Man beachte, dass die Voraussetzungen des Satzes insbesondere dann
erfüllt sind, wenn $G$ frei und eigentlich auf ganz $M$ wirkt. Falls
$\mu$ $G$"=invariant ist, gilt $G_{\mu} = G$.
\begin{definition}[Reduzierter Phasenraum]
\label{def:RedPhasenraum}
Man nennt die symplektische Mannigfaltigkeit $(\Mred,\omega_{\mathrm{red}})$
\neuerBegriff{reduzierter Phasenraum} zu $(M,\omega,J,\mu)$.
\end{definition}
\begin{bemerkung}[Eichtheorie]
\label{bem:Eichtheorie}
In einer klassischen Eichtheorie betrachtet man den affinen, im
Allgemeinen unendlich dimensionalen Raum $\mathcal{A}$ der
Zusammenhangseinsformen auf einem Hauptfaserbündel $P \to B$ mit
Strukturgruppe $S$. Auf $\mathcal{A}$ operiert die Gruppe
$\mathcal{G}$ der Hauptfaserbündelautomorphismen über der Identität
auf $P \to B$ vermöge pull"=backs, oder allgemeiner eine Untergruppe
davon. In physikalischer Interpretation ist $B$ zum Beispiel die
Raumzeit, nur der physikalische Raum, oder allgemeiner eine
Cauchy"=Hyperfläche $\Sigma$ in einer global hyperbolischen Raumzeit
(vgl.\ \cite[2.2.4]{waldmann2009wave}). $P$ interpretiert man als Raum
verallgemeinerter Phasenfaktoren, $\mathcal{A}$ als Raum der
Eichpotentiale und $S$ als interne Symmetriegruppe. Die Gruppe
$\mathcal{G}$ ist dann die Eichgruppe und zwei Eichfelder entlang
eines $\mathcal{G}$"=Orbits sind physikalisch äquivalent. Wir wollen
annehmen, dass eine global hyperbolische Raumzeit in der Form $\Sigma
\times \mathbb{R}$ vorliegt, d.\,h.\ wir schon eine Zeiteichfixierung
durchgeführt haben und nehmen $B = \Sigma$ an. Dann kann man
versuchen, die zeitliche Dynamik der Eichpotentiale als Hamiltonsches
System zu beschreiben. Man betrachtet dabei das (i.\,Allg.\ unendlich
dimensionale) Kotangentialbündel $T^*\mathcal{A}$, versehen mit der
kanonischen symplektischen Form. Die Eichgruppe $\mathcal{G}$ wirkt
dann auf $\mathcal{M} := T^*\mathcal{A}$ vermöge der gelifteten
$\mathcal{G}$"=Wirkung auf $\mathcal{A}$ und man kann als
Impulsabbildung die kanonische wählen. Nun ist man also zumindest
strukturell in der Situation von Satz
\ref{satz:MarsdenWeinstein}. Allerdings sind $\mathcal{M}$ und
$\mathcal{G}$ unendlich dimensional. Außerdem ist es im Allgemeinen
nicht der Fall, dass $\mathcal{G}$ frei wirkt, so dass im Quotienten
ein symplektisch stratifizierter Raum zu erwarten ist. Betrachtet man
eine Gittereichtheorie, d.\,h.\ ersetzt konkret $\Sigma$ durch ein
endliches Gitter, so werden $\mathcal{M}$ und $\mathcal{G}$ endlich
dimensional, die Problematik, dass im Allgemeinen Singularitäten
auftreten, bleibt jedoch bestehen. Für das genauere Studium von
Gittereichtheorien in oben skizzierter Formulierung verweisen wir auf
Arbeiten von Rudolph
et~al.\ \cite{huebschmann2009gauge,fischer.rudolph.schmidt2007,charzynski.kijowski.rudolph2005},
für eine Lehrbuchdarstellung singulärer (endlich"=dimensionaler)
Marsden"=Weinstein"=Reduktion siehe \cite[Ch. 8]{ortega.ratiu:2004}.
Für eine elementare Einführung in die Hamiltonsche, geometrische
Sichtweise von Eichtheorien sei auf die Arbeit von Belot
\cite{belot2003symmetry} verwiesen. Als weitere Literatur seien
\cite{kijowski1979symplectic},\cite{marathe1992mathematical},
\cite{schottenloher:1995a},
\cite{mitter1981bundle},\cite{kijowski1984canonical},
\cite{trautman1970fibre}, \cite{shabanov2000geometry},
\cite{emmrich.roemer:1990a}, \cite{rudolph2002gauge},
\cite{moncrief1980reduction} und \cite{daniel1980geometrical} genannt.
\end{bemerkung}
Auch wenn der folgende Satz für den weiteren Verlauf dieser Arbeit
nicht benötigt wird, möchten wir ihn dennoch an dieser Stelle
zitieren, vgl.\ \cite[Thm. 6.1.1]{ortega.ratiu:2004} und
\cite[6.1.10]{ortega.ratiu:2004}, da er die physikalische
Bedeutung des vorangegangenen Satzes innerhalb der klassischen
Mechanik deutlich macht.
\begin{satz}[Reduktion und Rekonstruktion der Dynamik]
\label{satz:ReduktionDerDynamik}
Seien die Voraussetzungen wie in Satz~\ref{satz:MarsdenWeinstein}
gegeben. Weiter sei $H \in C^\infty(M)^G$ eine $G$"=invariante
Hamiltonfunktion. Dann sind folgende Aussagen richtig.
\begin{satzEnum}
\item %
Das Hamiltonsche Vektorfeld $X_H$ ist tangential an die
Impulsniveaufläche $C$ und dessen Fluss $\Phi_t^{X_H}$
bildet diese in sich ab.
\item %
Es gibt eine eindeutig bestimmte Hamiltonfunktion
$H_{\mathrm{red}} \in C^\infty(\Mred)$, so dass
\begin{align}
\label{eq:ReduktionDerDynamik1}
\ensuremath{\imath^*} H = \pi^* H_{\mathrm{red}}
\end{align}
und
\begin{align}
\label{eq:ReduktionDerDynamik2}
\pi \circ \Phi_t^{X_H} \circ \ensuremath{\imath} = \Phi_t^{X_{H_\mathrm{red}}} \circ \pi
\end{align}
gilt, wobei $\Phi^{X_{H_\mathrm{red}}}$ den Fluss von $X_{H_{\mathrm{red}}}$
bezeichnet.
\item %
Sei $p_0\in C$, $d(t)$ eine glatte Kurve in $C$ mit $d(0) = p_0$,
$\xi(t)$ eine glatte Kurve in $\lieAlgebra_\mu := \{\xi \in
\lieAlgebra \mid \mathrm{ad}^*_\xi \mu = 0\}$ mit
\begin{align}
\label{eq:RekonstruktionDerDynamik1}
\xi(t)_M(d(t)) = X_H(d(t)) - \dot d(t)
\end{align}
und
$g(t)$ eine glatte Kurve in $G_\mu$ mit
\begin{align}
\label{eq:RekonstruktionDerDynamik2}
\dot g(t) = T_eL_{g(t)}\xi(t) \quad \text{und} \quad g(0) = e \ensuremath{\text{,}}
\end{align}
dann ist $c(t) = g(t) d(t)$ eine Integralkurve von $X_H$ mit $c(0)
= p_0$.
Falls $G$ abelsch ist, gilt
\begin{align}
\label{eq:RekonstruktionDerDynamik3}
g(t) = \exp\left(\int_0^t \xi(s) \, ds\right) \ensuremath{\text{.}}
\end{align}
Ist nun $\gamma \colon TC \to \lieAlgebra_{\mu}$ eine
Zusammenhangseinsform für das $G_\mu$"=Hauptfaserbündel $\pi
\colon C \to \Mred$ und sei $d(t)$ die horizontale Hebung
von $\Phi_t^{X_{H_{\mathrm{red}}}}(\pi(p_0))$ durch $p_0$, dann ist
\begin{align}
\label{eq:RekonstruktionDerDynamik4}
\xi(t) = \gamma(X_H(d(t)))
\end{align}
eine Lösung von Gleichung \eqref{eq:RekonstruktionDerDynamik1}.
\end{satzEnum}
\end{satz}
Gleichung \eqref{eq:RekonstruktionDerDynamik1} ist typischerweise
algebraischer Natur. Falls zum Beispiel $G$ eine Matrix"=Lie"=Gruppe ist,
handelt es sich einfach um eine Matrixgleichung.
Die Bedeutung dieser Reduktionssätze liegt innerhalb der klassischen
Mechanik zum einen in einem Gewinn des qualitativen Verständnisses eines
mechanischen Systems, zum anderen können sie hilfreich sein bei der
Lösung konkreter Bewegungsgleichungen eines solchen.
An dieser Stelle möchten wir noch bemerken, dass eine Impulsabbildung $J
\colon M \to \lieAlgebra^*$ submersiv ist, d.\,h.\ alle Werte auch reguläre
Werte sind, falls $G$ frei auf $M$ wirkt. Wir werden auf diesen Punkt
nun in etwas größerer Allgemeinheit eingehen.
\begin{definition}
\label{def:lokalFreiUndInfiniFrei}
Eine Wirkung einer Lie"=Gruppe $G$ auf einer Mannigfaltigkeit $M$
heißt \neuerBegriff{lokal frei}, falls für alle $p \in M$ die
Stabilisatorgruppe $G_p := \{g \in G \mid gp = p\}$ diskret ist,
d.\,h.\ es für jedes $g \in G_p$ eine offene Umgebung $U$ von $g$ in $G$
gibt, so dass $U \cap G_p = \{g\}$. Sie heißt
\neuerBegriff{infinitesimal frei}, falls für alle $p \in M$ die
\neuerBegriff{Symmetriealgebra} $\lieAlgebra_p := \{\xi \in \lieAlgebra \mid
\xi_M(p) = 0\}$ trivial ist, d.\,h.\ $\lieAlgebra_p = \{0\}$ gilt.
\end{definition}
\begin{bemerkung}
\label{bem:lokalFrei}
\begin{bemerkungEnum}
\item %
Jede freie Wirkung ist offensichtlich auch lokal frei.
\item %
Falls die Wirkung eigentlich ist, ist $G_p$ kompakt für alle $p
\in M$, d.\,h.\ sie ist genau dann lokal frei, wenn $G_p$ endlich ist
für alle $p \in M$.
\end{bemerkungEnum}
\end{bemerkung}
\begin{proposition}
\label{prop:lokalFrei}
Sei $M$ eine Mannigfaltigkeit, die eine Wirkung $\Phi \colon G \times
M \to M$ einer Lie"=Gruppe $G$ trage. Dann sind die folgenden Aussagen äquivalent.
\begin{propositionEnum}
\item %
\label{item:lokalFrei1}
$\Phi$ ist lokal frei.
\item %
\label{item:lokalFrei2}
Für alle $p \in M$ gibt es eine offene Umgebung $V_p$ von $e$ in
$G$ mit $V_p \cap G_p = \{e\}$.
\item %
\label{item:lokalFrei3}
$\Phi$ ist infinitesimal frei.
\end{propositionEnum}
\end{proposition}
\begin{proof}
Die Implikation von~\refitem{item:lokalFrei1} nach~\refitem{item:lokalFrei2} ist trivial. Sei also $p \in M$ und
$V_p$ eine offene Umgebung von $e$ in $G$ mit $V_p \cap G_p = \{e\}$. Sei
weiter $g \in G_p$. Da die Wirkung stetig ist, ist $gV_p$ eine
offene Umgebung von $g$. Offensichtlich gilt für $h \in G_p \cap
gV_p$ auch $g^{-1}h \in G_p \cap V_p$, also $g^{-1}h = e$,
d.\,h.\ $h= g$, womit sich~\refitem{item:lokalFrei1} ergibt.
Sei nun $\Phi$ infinitesimal frei. Sei $\hat \Phi \colon G \times
M \to M \times M$, $(g,p) \mapsto (gp,p)$. Es genügt zu zeigen,
dass es für alle $p \in M$ eine Umgebung $V_p \times W_p$ von $(e,p)$
gibt, so dass $\hat \Phi\at{V_p \times W_p}$ injektiv ist. Denn dann
ist offenbar $V_p$ eine Umgebung wie in~\refitem{item:lokalFrei2}. Um dies zu sehen, genügt es nach dem
Immersionssatz (vgl.\ \cite[Thm. 3.5.7]{marsden:2002}) zu zeigen,
dass die Ableitung von $\hat \Phi$ an der Stelle $(e,p)$ injektiv
ist. Dies ist aber nach Voraussetzung und Proposition
\ref{prop:produkte} klar, denn für $\xi \in \lieAlgebra = T_eG$
und $v_p \in T_pM$ gilt
\begin{align*}
0 = T_{(e,p)}\hat \Phi \TIsoI{(\xi,v_p)} &= \TIsoI{(T_e
\Phi_{\cdot}(p)\xi, 0)} + \TIsoI{(v_p,v_p)} = \TIsoI{(\xi_M(p) +
v_p, v_p)} \\
&\implies v_p = 0 \quad \text{und} \quad \xi_M(p) = 0 \ensuremath{\text{.}}
\end{align*}
Daraus folgt nach Voraussetzung schon $\xi = 0$, also auch
$\TIsoI{(\xi,v_p)} = 0$.
Sei nun angenommen, $\Phi$ sei nicht infinitesimal frei. Dann gibt es
ein $p \in M$ und ein $0 \neq\xi \in \lieAlgebra$ mit
$\xi_M(p) = T_e\Phi_{\cdot}(p)\xi = 0$. Damit ist $p$ ein Fixpunkt des
Flusses von $\xi_M$, d.\,h.\ es gilt $\Phi_{\exp(t\xi)}(p) = p$ für
alle $t \in \mathbb{R}$. Da $\exp$ nahe $0$ injektiv ist, enthält
somit jede Umgebung von $e$ in $G$ von $e$ verschiedene Elemente von
$G_p$, d.\,h.~\refitem{item:lokalFrei2} kann nicht gelten.
\end{proof}
\begin{proposition}
\label{lem:LineareAlgebra}
Seien $V$ und $W$ endlichdimensionale Vektorräume über $\mathbb{K}$ und sei $A \colon V
\to W$ linear. Dann ist $A$ genau dann surjektiv, wenn aus $\dPaar{Av}{w^*}
= 0 \; \forall v \in V$ schon $w^* = 0 \in W^*$ folgt.
\end{proposition}
\begin{proof}
Die Aussage ist äquivalent dazu, dass $A$ genau dann injektiv ist,
wenn die duale Abbildung $A^*$ surjektiv ist. Damit ist die
Behauptung klar.
\end{proof}
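Zur Ergänzung des knappen Beweises kann man die verwendete Dualität explizit ausschreiben: für alle $v \in V$ und $w^* \in W^*$ gilt

```latex
\dPaar{Av}{w^*} = \dPaar{v}{A^*w^*} \ensuremath{\text{,}}
```

so dass die Bedingung der Proposition gerade die Injektivität der dualen Abbildung $A^*$ bedeutet. Ist umgekehrt $A$ nicht surjektiv, so gibt es wegen $\dim W < \infty$ ein $0 \neq w^* \in W^*$, das auf dem echten Untervektorraum $A(V) \subsetneq W$ verschwindet; $A^*$ ist dann nicht injektiv.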
\begin{lemma}
\label{lem:JSurjektiv}
Sei $(M,\omega)$ eine symplektische Mannigfaltigkeit und $\Phi \colon
G \times M \to M$ eine Hamiltonsche Wirkung einer Lie"=Gruppe $G$ mit
Impulsabbildung $J \colon M \to \lieAlgebra^*$. Dann ist $p \in M$
genau dann ein regulärer Punkt von $J$, wenn $\lieAlgebra_p = \{0\}$
gilt. Insbesondere ist $\Phi$ genau dann lokal frei, wenn $J$
submersiv ist, was genau dann der Fall ist, wenn jeder Wert von $J$
regulär ist.
\end{lemma}
\begin{proof}
Für $\mu \in \lieAlgebra^*$ bezeichne in diesem Beweis $I_\mu\colon
\lieAlgebra^* \to T_\mu\lieAlgebra^*$ den kanonischen Isomorphismus
der Mannigfaltigkeit $\lieAlgebra^*$ mit ihrem Tangentialraum
$T_{\mu}\lieAlgebra^*$ an der Stelle $\mu$.
\begin{align*}
p \in M \; \text{regulär} &\iff T_pJ \colon T_pM \to
T_{J(p)}\lieAlgebra^* \; \text{ist surjektiv} \\
&\iff I_{J(p)}^{-1} \circ T_pJ \colon T_pM \to \lieAlgebra^* \;
\text{ist surjektiv} \\
&\iff \dPaar{I_{J(p)}^{-1} \circ T_p J v}{\xi} = 0 \; \forall v \in T_pM \implies \xi =
0 &&\eAnn{nach Proposition \ref{lem:LineareAlgebra}}\\
&\iff \dPaar{d_pJ(\xi)}{v} = 0 \;\forall v \in T_pM \implies \xi = 0 \\
&\iff \omega_p(\xi_M(p),v) = 0 \; \forall v \in T_pM \implies \xi = 0
\\
&\iff \xi_M(p) = 0 \implies \xi = 0 &&\eAnn{da $\omega_p$ nicht ausgeartet} \\
&\iff \lieAlgebra_p = \{0\}
\end{align*}
Die Umformung im vierten Schritt sieht man leicht in Koordinaten. Ist
etwa $(U,x)$ eine Karte von $M$ um $p$, $\{e_i\}$ eine Basis von
$\lieAlgebra$ und $\{e^i\}$ die zugehörige duale Basis, und schreiben
wir $v = v_k \frac{\partial}{\partial x^k}\at{p}$ sowie $\xi = \xi^i
e_i$, dann rechnet man leicht
\begin{align*}
\dPaar{I_{J(p)}^{-1} \circ T_p J v}{\xi} = \dPaar{\frac{\partial
J_i}{\partial x^k}\at{p}v_k e^i}{\xi^j e_j}
&= v_k \frac{\partial J_i}{\partial x^k}\at{p} \xi^i \\
&= v_k \frac{\partial}{\partial x^k}\at{p}(\xi^i J_i)
= v_k
\frac{\partial}{\partial x^k}\at{p}(J(\xi))
= \dPaar{d_pJ(\xi)}{v}
\end{align*}
nach.
\end{proof}
\subsection{Invarianzbegriffe in der Deformationsquantisierung}
\label{sec:InvarianzbegriffeInDerDeformationsquantisierung}
Wir kommen nun zu Invarianzbegriffen von Sternprodukten
(vgl.\ \cite{arnal.cortet.molin.pinczon:1983a},
\cite{xu:1998a,bordemann.brischle.emmrich.waldmann:1996a}).
\begin{definition}[Invarianz von Sternprodukten]
\label{def:InvariantesSternprodukt}
Sei $(M,\{\cdot,\cdot\})$ eine Poisson"=Mannigfaltigkeit, die eine
kanonische Wirkung $\Phi \colon G \times M \to M$ einer Lie"=Gruppe
$G$ trage. Ein Sternprodukt $\star$ für $(M,\{\cdot,\cdot\})$ heißt
\neuerBegriff{$G$"=invariant}, wenn für alle $g \in G$ und $f,f' \in
C^\infty(M)$ die Gleichung
\begin{align}
\label{eq:InvariantesSternprodukt}
\Phi_g^*(f \star f') = \Phi_g^*f \star \Phi_g^* f'
\end{align}
gilt. Ist $J \colon M \to \lieAlgebra^*$ eine $G$"=äquivariante
Impulsabbildung, so heißt $\star$ \neuerBegriff{stark invariant},
falls für alle $\xi \in \lieAlgebra$ und $f \in C^\infty(M)$
\begin{align}
\label{eq:SternproduktKovariant}
\I \lambda \{J(\xi),f\} = [J(\xi),f]_{\star}
\end{align}
gilt, wobei $[\cdot,\cdot]_\star$ der Kommutator bezüglich des
Sternprodukts $\star$ ist.
\end{definition}
\begin{bemerkung}
\label{bem:BeziehungStarkInvariantInvariant}
Die starke Invarianz eines Sternprodukts impliziert dessen Invarianz
im Falle einer zusammenhängenden Lie"=Gruppe. Der Umkehrschluss gilt
im Allgemeinen nicht. Siehe etwa \cite[p. 112,
136]{bordemann.herbig.waldmann:2000a}.
\end{bemerkung}
Als nächstes verallgemeinern wir das Konzept der Impulsabbildung, indem
wir noch Terme höherer $\lambda$"=Ordnungen zulassen.
\begin{definition}[Quantenimpulsabbildung]
\label{def:Quantenimpulsabbildung}
Sei $(M,\{\cdot,\cdot\})$ eine Poisson"=Mannigfaltigkeit, $G$ eine
Lie"=Gruppe, die auf $M$ wirke. Weiter sei $\star$ ein
$G$"=invariantes Sternprodukt auf $M$. Dann heißt eine lineare
Abbildung $\qJ \colon \lieAlgebra \to C^\infty(M)[[\lambda]]$ der
Form
\begin{align}
\qJ = \sum_{r=0}^\infty \lambda^r J_r
\end{align}
mit linearen Abbildungen $J_r \colon \lieAlgebra \to C^\infty(M)$
\neuerBegriff{Quantenhamiltonfunktion} (oder auch
\neuerBegriff{Quanten"=Hamiltonian}), falls $\qJ$ die Deformation
einer Impulsabbildung ist, d.\,h.\ $J := J_0$ eine Impulsabbildung ist
und für alle $f \in C^\infty(M)$ sowie $\xi \in \lieAlgebra$ die
Gleichung
\begin{align}
\label{eq:QuantenHamiltonian}
\I \lambda \{J(\xi),f\} = [\qJ(\xi),f]_\star
\end{align}
gilt. Falls zusätzlich für alle $\xi,\eta \in
\lieAlgebra$ die Gleichung
\begin{align}
\label{eq:Quantenimpulsabbildung}
\I \lambda \qJ([\xi,\eta]) = [\qJ(\xi),\qJ(\eta)]_\star
\end{align}
wahr ist, nennt man $\qJ$
\neuerBegriff{Quantenimpulsabbildung}. Eine $G$"=äquivariante
Quantenhamiltonfunktion heißt auch \neuerBegriff{$G$"=äquivariante
Quantenimpulsabbildung},
vgl.\ \cite[p. 492]{kowalzig.neumaier.pflaum:2005a}.
\end{definition}
\begin{bemerkung}
\label{bem:StarkInvariant}
Das Sternprodukt $\star$ ist offensichtlich genau dann stark
invariant, wenn die klassische Impulsabbildung $J$ auch eine
$G$"=äquivariante Quantenimpulsabbildung ist.
\end{bemerkung}
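Setzt man nämlich $\qJ = J$, d.\,h.\ $J_r = 0$ für alle $r \geq 1$, so geht die Bedingung \eqref{eq:QuantenHamiltonian} an eine Quantenhamiltonfunktion unmittelbar in die definierende Gleichung der starken Invarianz über:

```latex
% Mit \qJ = J wird Gleichung (eq:QuantenHamiltonian) zu
\I \lambda \{J(\xi),f\} = [J(\xi),f]_\star \ensuremath{\text{,}}
```

also genau Gleichung \eqref{eq:SternproduktKovariant}; die $G$"=Äquivarianz von $J$ ist dabei per Definition vorausgesetzt.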
\begin{bemerkung}
\label{bem:QuantenhamiltonfunktionAndereSichtweise}
Wie bei der klassischen Impulsabbildung, vgl.\ Bemerkung~\ref{bem:ImpulsabbildungSichtweiseRaume}, kann man eine
Quantenhamiltonfunktion über die Beziehung $\qJ(\xi)(p) =
\dPaar{\qJ(p)}{\xi}$ für alle $p \in M$ und $\xi \in \lieAlgebra$ als
Abbildung
\begin{align}
\label{eq:QuantenhamiltonfunktionAndereSichtweise}
\qJ \colon M \to \lieAlgebra_{\mathbb{C}}^*[[\lambda]]
\end{align}
der Form $\qJ = \sum_{r=0}^\infty \lambda^r J_r$ mit glatten
Abbildungen $J_r\colon M \to \lieAlgebra_{\mathbb{C}}^*$ auffassen,
so dass $J_0$ eine Impulsabbildung ist und $\qJ$ die Gleichung
\eqref{eq:QuantenHamiltonian} erfüllt. Dabei bezeichnet
$\lieAlgebra_{\mathbb{C}} = \lieAlgebra \otimes_{\mathbb{R}}
\mathbb{C}$ die komplexifizierte Lie"=Algebra. Auch hier wollen wir
beide Sichtweisen austauschbar verwenden.
\end{bemerkung}
\begin{bemerkung}
\label{bem:PhysikalicheBedeutung}
\begin{bemerkungEnum}
\item %
Ist $H = J(\xi)$ für ein $\xi \in \lieAlgebra$, so folgt aus der
starken Invarianz des Sternprodukts $\star$, dass die
Quantenzeitentwicklung jeder Observablen bezüglich $H$ mit der
klassischen übereinstimmt. Dies ist am Beispiel des harmonischen
Oszillators wohlbekannt. Für eine eingehendere Diskussion der
physikalischen Bedeutung stark invarianter Sternprodukte verweisen
wir auf die Pionier"=Arbeit \cite{bayen.et.al:1978a} von Bayen
et al.
\item %
Wenn Symmetrien eines klassischen Systems nicht als Symmetrien
des korrespondierenden Quantensystems implementiert werden
können, spricht man von \neuerBegriff{Anomalien}. Römer und
Paufler bemerken etwa in \cite{romer2000anomalies}, dass für
$\qJ = J$ eine Verletzung von Gleichung
\eqref{eq:SternproduktKovariant} feldtheoretisch Schwinger"=Termen
entsprechen sollte und eine Verletzung von Gleichung
\eqref{eq:QuantenHamiltonian} anomalen Ward"=Identitäten. Sie
vermuten weiter, dass diese und andere Anomalien nicht
auftreten, wenn man auch Quantenimpulsabbildungen zulässt.
\end{bemerkungEnum}
\end{bemerkung}
Die folgende Proposition zeigt, dass in der vorliegenden Situation von
zusammenhängenden Lie"=Gruppen der Begriff der $G$"=äquivarianten
Quantenimpulsabbildung aus Definition \ref{def:Quantenimpulsabbildung}
mit dem einer Quantenimpulsabbildung, die $G$"=äquivariant ist, übereinstimmt.
\begin{proposition}
\label{bem:QuantenImpulsabbHamiltonian}
Die Voraussetzungen seien wie in Definition
\ref{def:Quantenimpulsabbildung} gegeben. Ist $\qJ$ eine
$G$"=äquivariante Quantenhamiltonfunktion, so ist $\qJ$ schon eine
Quantenimpulsabbildung.
Da wir $G$ als zusammenhängend annehmen, gilt auch die Umkehrung.
\end{proposition}
\begin{proof}
Sei $\qJ$ eine $G$"=äquivariante Quantenhamiltonfunktion. Dann gilt für
alle $g \in G$ und $\eta \in \lieAlgebra$
\begin{align*}
g \qJ(\eta) = \qJ(g \eta)\ensuremath{\text{,}}
\end{align*}
also durch Ableiten nach $g$ für alle $\xi \in \lieAlgebra$
\begin{align*}
\xi_M(\qJ(\eta)) = -\qJ([\xi,\eta]) \ensuremath{\text{,}}
\end{align*}
d.\,h.\
\begin{align*}
[\qJ(\xi),\qJ(\eta)]_{\star} = \I \lambda \{J(\xi),\qJ(\eta)\} =
-\I \lambda \xi_M(\qJ(\eta)) = \I \lambda \qJ([\xi,\eta]) \quad
\forall \xi \in \lieAlgebra \ensuremath{\text{.}}
\end{align*}
Sei umgekehrt $\qJ$ eine Quantenimpulsabbildung und seien $\xi,\eta \in
\lieAlgebra$. Dann gilt
\begin{align*}
-\xi_M(\qJ(\eta)) = \{J(\xi),\qJ(\eta)\} =
\frac{1}{\I \lambda}[\qJ(\xi),\qJ(\eta)]_{\star} = \qJ([\xi,\eta]) \ensuremath{\text{.}}
\end{align*}
Da $G$ zusammenhängend ist, folgt die $G$"=Äquivarianz von $\qJ$.
\end{proof}
Über die Eindeutigkeit von Quantenimpulsabbildungen kann man analoge
Aussagen treffen wie bei den klassischen Impulsabbildungen.
\begin{proposition}
\label{prop:EindeutigkeitvonQuanteimpulsabbildungen}
Seien die Voraussetzungen wie in Definition
\ref{def:Quantenimpulsabbildung} gegeben und zusätzlich
$\{\cdot,\cdot\}$ symplektisch und $M$ zusammenhängend.
\begin{propositionEnum}
\item %
Eine Quantenhamiltonfunktion $\qJ$ zu einer gegebenen
Impulsabbildung $J$ ist bis auf Addition von Elementen aus
$\bigwedge_{\mathbb{C}}\lieAlgebra_{\mathbb{C}}^*[[\lambda]]$ eindeutig
bestimmt.
\item %
Eine Quantenimpulsabbildung $\qJ$ zu einer gegebenen
$G$"=äquivarianten Impulsabbildung $J$ ist bis auf Addition von
$G$"=invarianten Elementen aus $\bigwedge_{\mathbb{C}}
\lieAlgebra_{\mathbb{C}}^*[[\lambda]]$ eindeutig bestimmt. Diese
sind genau die formalen Potenzreihen von Lie"=Algebra
$1$"=Kozykeln. Insbesondere sind Quantenimpulsabbildungen eindeutig,
wenn die erste Lie"=Algebra"=Kohomologie verschwindet.
\end{propositionEnum}
\end{proposition}
\begin{proof}
Da das Zentrum von $C^\infty(M)[[\lambda]]$ bezüglich $\star$ nur aus
den konstanten Funktionen besteht
(vgl.\ \cite[Bsp. 6.3.15]{waldmann:2007a}) ist die Aussage klar, siehe
auch \cite{xu:1998a}.
\end{proof}
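Der Kern des Arguments lässt sich kurz ausschreiben: Sind $\qJ$ und $\qJ'$ zwei Quantenhamiltonfunktionen zur selben Impulsabbildung $J$, so gilt für alle $\xi \in \lieAlgebra$ und $f \in C^\infty(M)$

```latex
[\qJ(\xi) - \qJ'(\xi), f]_\star
= \I \lambda \{J(\xi),f\} - \I \lambda \{J(\xi),f\} = 0 \ensuremath{\text{,}}
```

d.\,h.\ $\qJ(\xi) - \qJ'(\xi)$ liegt für jedes $\xi$ im Zentrum von $(C^\infty(M)[[\lambda]],\star)$ und ist somit konstant. Die Differenz $\qJ - \qJ'$ ist also eine Linearform auf $\lieAlgebra$ mit Werten in $\mathbb{C}[[\lambda]]$.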
\begin{bemerkung}
\label{bem:KlassenFeinerMitSymmetrie}
Wir haben im vorangegangenen Abschnitt gesehen, dass die
Klassifikation von Sternprodukten auf symplektischen
Mannigfaltigkeiten mit Hilfe der de"=Rham"=Kohomologie im Allgemeinen
physikalisch zu grob ist. In vielen Situationen wird man jedoch gute
physikalische Gründe haben, bestimmte Invarianzforderungen an das
Sternprodukt zu stellen. Dies verkleinert im Allgemeinen die
Sternprodukt"=Äquivalenzklassen oder zeichnet gar ein Sternprodukt
aus. Für eine weitergehende Diskussion der Klassifikation von
invarianten Sternprodukten verweisen wir auf die Diplomarbeit von
Schaumann \cite{schaumann:2010}.
Dort wird insbesondere als schönes Beispiel gezeigt, dass das
Weyl"=Moyal"=Sternprodukt das einzige $\mathbb{Sp}_{2n} \ltimes
\mathbb{R}^{2n}$"=invariante, differentielle $\mathbb{Sp}_{2n}$"=stark
invariante Sternprodukt für den $\mathbb{R}^{2n}$ mit kanonischer
symplektischer Form ist. Dabei bezeichnet $\mathbb{Sp}_{2n}$ die
symplektische Gruppe und $\ltimes$ das semidirekte Produkt von
Gruppen. Für Details verweisen wir auf \cite{schaumann:2010}.
\end{bemerkung}
Mit Hilfe der Fedosov Konstruktion erhält man das folgende Resultat über
die Existenz von invarianten Sternprodukten auf symplektischen
Mannigfaltigkeiten, vgl.\ \cite[Prop. 6.1]{xu:1998a}.
\begin{satz}[Existenz invarianter Sternprodukte]
\label{satz:ExistenzInvarianterSternprodukte}
Sei $(M,\omega)$ eine symplektische Mannigfaltigkeit und $G$ eine
Lie"=Gruppe, die auf $M$ symplektisch operiere. Es gibt genau dann
ein $G$"=invariantes Sternprodukt $\star$ für $(M,\omega)$, wenn es
eine $G$"=invariante, torsionsfreie kovariante Ableitung $\nabla$ auf
$M$ gibt. Sind $\nabla$ und die formale Potenzreihe geschlossener
Zweiformen $\Omega \in \lambda\Gamma^\infty(\bigwedge\nolimits^2
T^*M)[[\lambda]] $ $G$"=invariant, so ist auch das induzierte
Fedosovsternprodukt $\star_{\Omega}$ $G$"=invariant.
\end{satz}
\begin{korollar}
\label{kor:ExistenzInvarianterSternprodukte}
Die Lie"=Gruppe $G$ wirke symplektisch und eigentlich auf der
symplektischen Mannigfaltigkeit $(M,\omega)$. Dann existiert ein
$G$"=invariantes Sternprodukt $\star$ für $(M,\omega)$.
\end{korollar}
\begin{proof}
Da die $G$"=Wirkung eigentlich ist, existiert nach Korollar
\ref{kor:InvarianterZusammenhang} eine $G$"=invariante, torsionsfreie
kovariante Ableitung auf $M$, so dass die Behauptung mit Satz
\ref{satz:ExistenzInvarianterSternprodukte} klar ist.
\end{proof}
\begin{definition}
\label{def:Gaequivalent}
Zwei $G$"=invariante Sternprodukte $\star$ und $\star'$ für eine
Poisson"=Mannigfaltigkeit $(M,\{\cdot,\cdot\})$, auf der eine
Lie"=Gruppe $G$ operiere, heißen \neuerBegriff{$G$"=äquivalent},
falls es eine $G$"=äquivariante Äquivalenztransformation zwischen
$\star$ und $\star'$ gibt.
\end{definition}
Bertelson, Bieliavsky und Gutt konnten in
\cite{bertelson.bieliavsky.gutt:1998a} mit Hilfe der
Fedosov"=Konstruktion das folgende Klassifikationsresultat für
$G$"=invariante Sternprodukte zeigen.
\begin{satz}
\label{satz:KlassifikationGinvarianterSternprodukte}
Sei $(M,\omega)$ eine symplektische Mannigfaltigkeit auf der eine
Lie"=Gruppe $G$ derart wirke, dass eine $G$"=invariante,
torsionsfreie kovariante Ableitung $\nabla$ auf $M$ existiere. Dann gibt es
eine von $\nabla$ unabhängige Bijektion von der Menge der
Äquivalenzklassen $G$"=invarianter Sternprodukte im Sinne der
$G$"=Äquivalenz in die Menge der formalen Potenzreihen von Elementen
der zweiten $G$"=invarianten de"=Rham"=Kohomologie von $M$.
\end{satz}
Müller"=Bahns und Neumaier \cite{mueller-bahns.neumaier:2004a} haben
Existenzkriterien für Quantenhamiltonfunktionen angegeben. Wir fassen
diese im nächsten Satz zusammen.
\begin{satz}
\label{satz:ExistenzVonQuantenImpulsabb}
Sei $(M,\omega)$ eine symplektische Mannigfaltigkeit auf der eine
Lie"=Gruppe $G$ symplektisch operiere. Weiter sei $\nabla$ eine
$G$"=invariante, torsionsfreie kovariante Ableitung und $\Omega \in \lambda
\Gamma^\infty(\bigwedge\nolimits^2 T^*M)[[\lambda]]$ eine formale Reihe
geschlossener, $G$"=invarianter
Zweiformen. Dann besitzt das zugehörige $G$"=invariante
Fedosov"=Sternprodukt $\star_{\Omega}$ genau dann eine
Quantenhamiltonfunktion, wenn es eine formale Potenzreihe $\qJ =
\sum_{r = 0}^\infty \lambda^r J_r$ mit glatten Abbildungen $J_r
\colon M \to \lieAlgebra^*$ gibt, die für alle $\xi \in \lieAlgebra$
\begin{align}
\label{eq:BedingungQuantenhamilton}
d \qJ(\xi) = (\omega + \Omega)(\xi_M,\cdot)
\end{align}
erfüllt.
\end{satz}
\begin{korollar}
\label{kor:StarkInvariantFedosov}
Seien die Voraussetzungen wie in Satz
\ref{satz:ExistenzVonQuantenImpulsabb} und $J \colon M \to \lieAlgebra^*$
eine klassische Impulsabbildung, so ist $\star_{\Omega}$ genau dann
stark invariant, wenn für alle $\xi \in \lieAlgebra$
\begin{align}
\label{eq:StarkInvariantFedosov}
\Omega(\xi_M,\cdot) = 0
\end{align}
gilt.
\end{korollar}
\begin{bemerkung}
\label{bem:HinreichendeBedinungFuerStarkInvariant}
Wählt man $\Omega = 0$, so ist Gleichung
\eqref{eq:StarkInvariantFedosov} trivial erfüllt. Es ist also eine
hinreichende Bedingung für die Existenz eines stark invarianten
Sternprodukts, dass eine $G$"=invariante, torsionsfreie kovariante Ableitung
existiert, was z.\,B.\ immer dann der Fall ist, wenn die $G$"=Wirkung
eigentlich ist.
\end{bemerkung}
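Im Fall $\Omega = 0$ sieht man dies auch direkt an Satz \ref{satz:ExistenzVonQuantenImpulsabb}: die Bedingung \eqref{eq:BedingungQuantenhamilton} reduziert sich dann auf

```latex
d \qJ(\xi) = \omega(\xi_M,\cdot) \ensuremath{\text{,}}
```

was von $\qJ = J$ gelöst wird, sofern eine klassische Impulsabbildung $J$ existiert.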
Xu konnte in \cite[Prop. 6.3]{xu:1998a} die aus der klassischen
Situation schon bekannte Proposition
\ref{prop:ExistenzAequivarianterImpulsabbildungen} für
Quantenimpulsabbildungen verallgemeinern.
\begin{proposition}
\label{prop:QuantenImpulsabbildungUndLieAlgebraCohomology}
Sei $(M,\omega)$ eine zusammenhängende symplektische Mannigfaltigkeit
und $G$ eine zusammenhängende Lie"=Gruppe, die auf $M$ wirke. Sei
weiter $\star$ ein $G$"=invariantes Sternprodukt für $(M,\omega)$ und
$\qJ$ eine Quantenhamiltonfunktion. Für alle $\xi,\eta \in \lieAlgebra$ ist die
Funktion $c(\xi,\eta) := \qJ([\xi,\eta]) - \frac{1}{\I\lambda}
[\qJ(\xi),\qJ(\eta)]_{\star}$ konstant als Element von
$C^\infty(M)[[\lambda]]$ und die Abbildung $\lieAlgebra \times
\lieAlgebra \ni (\xi,\eta) \mapsto c(\xi,\eta) \in
\mathbb{C}[[\lambda]]$ ist ein Lie"=Algebra"=2"=Kozyklus. Es gibt
genau dann eine Quantenimpulsabbildung, wenn $[c]$ als
Element der zweiten Lie"=Algebra"=Kohomologie
$H^2(\lieAlgebra,\mathbb{C}[[\lambda]]) \simeq
H^2(\lieAlgebra)\otimes \mathbb{C}[[\lambda]]$ verschwindet.
\end{proposition}
\begin{bemerkung}
\label{bem:Quantenimpulsabbildungundliealgebracohomology}
Falls es eine Quantenhamiltonfunktion $\qJ$ gibt, ist nach
Proposition~\ref{prop:QuantenImpulsabbildungUndLieAlgebraCohomology}
das Verschwinden der zweiten Lie"=Algebra"=Kohomologie hinreichend
für die Existenz einer Quantenimpulsabbildung. Dies ist,
wie in Bemerkung \ref{bem:InvarianteImpulsabbildung} schon gesagt,
insbesondere für halbeinfache Lie"=Algebren der Fall.
\end{bemerkung}
Die aus der klassischen Situation bekannte Proposition
\ref{prop:KompaktInvarianteImpulsabbildung} wollen wir nun auf den
Quantenfall verallgemeinern. Dies scheint in der Literatur noch nicht
bekannt zu sein.
\begingroup
\begin{proposition}
\label{prop:IntegrierenVonQuantenimpulsabbildungen}
Sei $G$ eine kompakte, zusammenhängende Lie"=Gruppe, die auf einer
Poisson"=Mannigfaltigkeit $(M,\{\cdot,\cdot\})$ operiere. Gibt es
eine Quantenhamiltonfunktion $\qJ$ zu einer gegebenen
$G$"=äquivarianten Impulsabbildung $J$ und einem $G$"=invarianten,
differentiellen Sternprodukt $\star$, dann existiert auch eine
$G$"=äquivariante Quantenhamiltonfunktion.
\end{proposition}
\begin{proof}
Für jedes $g \in G$ definieren wir $\qJ[g]$ durch $\qJ[g](\xi ) :=
g(\qJ(g^{-1}\xi))$. Dann ist auch $\qJ[g]$ eine
Quantenhamiltonfunktion. Denn in der Tat rechnet man nach
\begin{align*}
[\qJ[g](\xi),f]_{\star} &= [g(\qJ(g^{-1}\xi)),f]_\star =
g[\qJ(g^{-1}\xi),g^{-1} f]_\star \\ &= \I \lambda g
\{J(g^{-1}\xi),g^{-1} f\} = \I \lambda g \{g^{-1}J(\xi),g^{-1} f\}
= \I \lambda \{J(\xi),f\} \ensuremath{\text{.}}
\end{align*}
Dabei wurde im zweiten Schritt die Invarianz von $\star$ verwendet,
im vierten, dass $J$ $G$"=äquivariant ist und im letzten schließlich,
dass $G$ auf $C^\infty(M)$ durch Poisson"=Abbildungen operiert. Sei
nun $dg$ das normierte Haar"=Maß auf $G$, dann definieren wir
$\langle \qJ \rangle$ für jedes $\xi \in \lieAlgebra$ durch $\langle
\qJ \rangle (\xi) := \int_G \qJ[g](\xi) \, dg$. Da $\star$
differentiell ist, folgt mit der Linearität des $\star$"=Kommutators
für alle $f \in C^\infty(M)$
\begin{align*}
[\langle \qJ \rangle (\xi),f]_{\star} =
\int_G[\qJ[g](\xi),f]_{\star} \, dg = \I \lambda \int_G
\{J(\xi),f\} \, dg = \I \lambda \{J(\xi),f\} \ensuremath{\text{.}}
\end{align*}
Somit ist $\langle \qJ \rangle$ eine
Quantenhamiltonfunktion. Dass $\langle \qJ \rangle$ tatsächlich
$G$"=äquivariant ist, ist mit Korollar \ref{kor:integrieren_inv_funkt} klar.
\end{proof}
\endgroup
Wir wollen nun noch sehen, wie sich Quantenimpulsabbildungen unter
$G$"=äquivarianten Sternprodukthomomorphismen verhalten.
\begin{proposition}
\label{prop:QuantenImpulsabbildungUndSternproduktHomos}
Seien $(M,\{\cdot,\cdot\})$ und $(M',\{\cdot,\cdot\}')$
Poisson"=Mannigfaltigkeiten mit Sternprodukten $\star$ und
$\star'$. Weiter wirke eine Lie"=Gruppe $G$ sowohl auf $M$ als auch
auf $M'$, sodass $\star$ und $\star'$ bezüglich dieser $G$"=invariant
sind. Ferner sei $S\colon (C^\infty(M)[[\lambda]],\star) \to
(C^\infty(M')[[\lambda]],\star')$ ein $G$"=äquivarianter
Isomorphismus von Algebren. Ist nun $\qJ$ eine Quantenimpulsabbildung
für $\star$, so ist $S(\qJ)$ eine Quantenimpulsabbildung für
$\star'$, wobei $S(\qJ)$ als $S(\qJ)(\xi) := S(\qJ(\xi))$ für alle
$\xi \in \lieAlgebra$ zu verstehen ist.
\end{proposition}
\begin{proof}
Klar.
\end{proof}
Mit Hilfe von Proposition
\ref{prop:QuantenImpulsabbildungUndSternproduktHomos} gelingt es
Schaumann, eine Klassifikation der Sternprodukte mit
Quantenimpulsabbildungen in Termen der $G$"=äquivarianten
de"=Rham"=Kohomologie anzugeben, siehe \cite{schaumann:2010}. Wir
wollen an dieser Stelle jedoch nicht näher darauf eingehen.
\chapter{Koszul-Reduktion}
\label{cha:Koszul-Reduktion}
Ziel dieses Kapitels ist es, ein Sternprodukt für den reduzierten
Phasenraum zu konstruieren. Das vorgestellte Schema basiert auf der
Arbeit \cite{bordemann.herbig.waldmann:2000a} von Bordemann, Herbig und
Waldmann, welche in der zitierten Arbeit die aus der Quantenfeldtheorie
bekannte Quantisierungsmethode nach Becchi, Rouet, Stora und Tyutin,
kurz BRST"=Methode genannt, in den Rahmen der Deformationsquantisierung
übertragen haben. In unserer Situation ist es jedoch
nicht notwendig, den gesamten BRST"=Komplex zu betrachten. Aus Sicht der
homologischen Algebra genügt es, wie wir sehen werden, die untersten
Grade des klassischen und des Quanten"=Koszul"=Komplexes zu
verstehen. Dieser Weg wurde auch in einer Arbeit von Gutt und Waldmann
\cite{gutt2010involutions} beschritten.
In diesem Kapitel sei $(M,\omega)$ eine symplektische Mannigfaltigkeit
mit induzierter Poisson"=Klammer $\{\cdot,\cdot\}$. Weiter sei $G$ eine
zusammenhängende Lie"=Gruppe, die stark Hamiltonsch sowie frei und
eigentlich auf $M$ operiere und $J\colon M \to \lieAlgebra^*$ eine
$G$"=äquivariante Impulsabbildung. Die Zwangsfläche $C := J^{-1}(0)$ sei
dabei nicht leer. Ferner bezeichnen wir mit $\ensuremath{\imath} \colon C
\hookrightarrow M$ die Inklusion und mit $\pi \colon C \to \Mred$ die
kanonische Projektion auf den reduzierten Phasenraum $\Mred =
C/G$. Schließlich sei $\star$ ein Sternprodukt für $(M,\omega)$, für das
eine $G$"=äquivariante Quantenimpulsabbildung $\qJ \colon \lieAlgebra
\to C^\infty(M)[[\lambda]]$ existiert.
\section{Symplektische Reduktion und Koszul-Komplex}
\label{sec:SymplektischeReduktionUndKoszulKomplex}
\subsection{Algebraische Beschreibung von $C^\infty(\Mred)$}
\label{sec:AlgeraischeBeschreibungDerFunktionenAufDemReduziertenPhasenraum}
In diesem Abschnitt formulieren wir die geometrische
Marsden"=Weinstein"=Reduktion algebraisch, um so ein analoges Vorgehen in
der Quanten"=Situation motivieren zu können. Zuerst möchten wir die
Poisson"=Algebra $(C^\infty(\Mred),\{\cdot,\cdot\}_{\mathrm{red}})$ der
glatten Funktionen auf dem reduzierten Phasenraum so beschreiben, dass
die Beziehung zur Poisson"=Algebra der glatten Funktionen auf dem
ursprünglichen Phasenraum algebraisch klar sichtbar wird und dann als
Grundlage für die spätere Konstruktion des Sternprodukts für den
reduzierten Phasenraum dienen kann.
\begin{proposition}
\label{prop:InvarianteFunktionen}
Die Abbildung
\begin{align}
\label{eq:FunktionenAufRedPhasenraumZwangsflaecheInvariant}
\pi^* \colon \CM[\Mred] \to \CM[C]^G
\end{align}
ist ein
Algebraisomorphismus zwischen den glatten Funktionen
$C^\infty(\Mred)$ auf dem reduzierten Phasenraum und den
$G$"=invarianten Funktionen $C^\infty(C)^G$ auf der Zwangsfläche.
\end{proposition}
\begin{proof}
Zunächst ist klar, dass $\pi^*(C^\infty(\Mred))$ tatsächlich in
$C^\infty(C)^G$ liegt. Ebenfalls unmittelbar einsichtig ist die
Verträglichkeit mit der Algebramultiplikation. Die Injektivität von
$\pi^*$ folgt sofort aus der Surjektivität von $\pi$. Um die
Surjektivität von $\pi^*$ zu zeigen, sei $f \in \CM[C]^G$ beliebig. Da $f$
$G$"=invariant und $\pi$ eine surjektive Submersion ist, gibt es
eine eindeutig bestimmte glatte Funktion $\tilde f \in \CM[\Mred]$
mit $f = \pi^* \tilde f$.
\end{proof}
Sei $\kIdeal := \ker \ensuremath{\imath^*}$ das Verschwindungsideal von $C$ und
$\kbIdeal := \{f \in \CM \mid \{f,h\} \in \kIdeal \;\, \forall h \in
\kIdeal\}$ der Lie"=Idealisator von $\kIdeal$.
Die folgende wohlbekannte Proposition gibt eine algebraische
Charakterisierung des Tangentialraumes einer Untermannigfaltigkeit mit
Hilfe des Verschwindungsideals.
\begin{proposition}
\label{prop:TangentialRaumUntermannigfaltigkeit}
Sei $\ensuremath{\imath} \colon C \hookrightarrow M$ eine Untermannigfaltigkeit einer Mannigfaltigkeit
$M$, dann gilt für alle $c \in C$
\begin{align}
\label{eq:TangentialRaumUntermannigfaltigkeit}
T_c\ensuremath{\imath} T_cC = \{v \in T_cM \mid v(f) = 0 \quad \forall f \in
\kIdeal\} \ensuremath{\text{.}}
\end{align}
\end{proposition}
\begin{proof}
Siehe \cite[Prop. 8.5]{lee:2003a}.
\end{proof}
Als nächstes geben wir ein geometrisches Kriterium an, wann $\kIdeal$
eine Poisson"=Unteralgebra von $C^\infty(M)$ ist. Dazu sei zunächst
daran erinnert, dass man eine Untermannigfaltigkeit $\ensuremath{\imath} \colon C
\hookrightarrow M$ \neuerBegriff{koisotrop} nennt, wenn in jedem Punkt
$c \in C$ das symplektische Komplement $T_cC^\bot := \{w_c \in T_cM
\mid \omega_c(T_c\ensuremath{\imath} v_c,w_c) = 0 \quad \forall v_c \in T_cC\}$ von
$T_cC$ in $T_c\ensuremath{\imath} T_cC$ liegt
(vgl.\ \cite[Ch. 5.3]{abraham.marsden:1985a},
\cite[4.1.5]{ortega.ratiu:2004}).
\begin{proposition}
\label{prop:Koisotrop}
Sei $(M,\omega)$ eine symplektische Mannigfaltigkeit mit
induzierter Poisson"=Klammer $\{\cdot,\cdot\}$ und $C \subset
M$ eine Untermannigfaltigkeit. Dann sind die folgenden Aussagen richtig.
\begin{propositionEnum}
\item %
\label{item:CharaktSymplektKomplement}
Sei $c \in C$, dann gilt für das symplektische Komplement $T_cC^\bot
\subset T_cM$
\begin{align}
\label{eq:SymplektischeKomplement}
T_cC^\bot = \{X_f(c) \mid f \in \kIdeal\} \ensuremath{\text{.}}
\end{align}
\item %
\label{item:PoissonUnteralgebraKoisotrop}
$C$ ist genau dann koisotrop, wenn $\kIdeal$ eine
Poisson"=Unteralgebra von $C^\infty(M)$ ist.
\end{propositionEnum}
\end{proposition}
\begin{proof}
\begin{beweisEnum}
\item %
Zunächst beachte man, dass die rechte Seite von Gleichung
\eqref{eq:SymplektischeKomplement} tatsächlich ein Vektorraum
ist, da $C^\infty(M) \ni f \mapsto X_f \in
\Gamma^\infty(TM)$ offensichtlich linear ist.
Sei $f \in \kIdeal$, dann folgt für alle $c \in C$ und $v
\in T_c\ensuremath{\imath} T_cC$
\begin{align*}
\omega(X_f(c),v) = d f\at{c}(v) = v(f) = 0 \ensuremath{\text{,}}
\end{align*}
also $X_f(c) \in T_cC^\bot$. Für die umgekehrte Inklusion sei
$(U,x)$ eine Untermannigfaltigkeitskarte von $C$ um $c \in C$
und $\chi \in C^\infty(M)$ eine glatte Abschneidefunktion mit
$\supp \chi \subset U$ und $\chi = 1$ auf einer Umgebung von
$c$. Dann gilt für $j \in \{\dim C + 1,\dots,\dim M\}$ und
$f^j := \chi \cdot x^j \in C^\infty(M)$ die Gleichung $\ensuremath{\imath}^*
f^j = 0$ und $\{df^j\at{c}\} = \{dx^j\at{c}\}$ ist linear
unabhängig. Somit ist $\{X_{f^j}(c)\}$ linear
unabhängig. Andererseits gilt
(vgl.\ \cite[Prop. 5.3.2]{abraham.marsden:1985a}) $\dim
T_cC^\bot = \dim T_cM - \dim T_c\ensuremath{\imath} T_cC = \dim M - \dim C$.
\item %
Sei $C \subset M$ koisotrop, d.\,h.\ für alle $c \in C$ gilt
$T_cC^\bot \subset T_c\ensuremath{\imath} T_cC$. Seien dann $f,f' \in
\kIdeal$, so gilt nach Proposition
\ref{prop:TangentialRaumUntermannigfaltigkeit} und
Teil~\refitem{item:CharaktSymplektKomplement}
$\ensuremath{\imath}^*\{f,f'\}(c) = X_{f'}(c)(f) = 0$ für alle $c \in C$.
Sei umgekehrt $\kIdeal$ eine Poisson"=Unteralgebra und $v \in
T_cC^\bot$ beliebig. Dann gibt es nach Teil~\refitem{item:CharaktSymplektKomplement} ein $f \in \kIdeal$
mit $v = X_f(c)$. Es folgt $X_f(c)(f') = \ensuremath{\imath}^*\{f',f\}(c) =
0$ für alle $f' \in \kIdeal$ und $c \in C$, also mit
Prop. \ref{prop:TangentialRaumUntermannigfaltigkeit} $v =
X_f(c) \in T_c\ensuremath{\imath} T_cC$.
\end{beweisEnum}
\end{proof}
Since the action is proper in the situation we consider, the orbit
$G\cdot p$ is an embedded submanifold of $M$ for every $p \in M$, and
the tangent space $T_{p'}(G\cdot p)$ at $p' \in G\cdot p$ can be
written as
\begin{align}
\label{eq:TangentialraumAnBahn}
T_{p'}(G\cdot p) = \{\xi_M(p') \mid \xi \in \lieAlgebra \}
\end{align}
see for instance \cite[Bem. 3.3.26,
Prop. 3.3.24]{waldmann:2007a}.
Clearly, for every $p \in M$, $v \in T_pM$ and $\xi \in \lieAlgebra$,
\begin{align*}
\omega_p(\xi_M(p),v) = \dPaar{d_pJ(\xi)}{v} = \dPaar{I_{J(p)}^{-1}
\circ T_pJv}{\xi} \ensuremath{\text{,}}
\end{align*}
where $I_{J(p)}\colon \lieAlgebra^* \to T_{J(p)}\lieAlgebra^*$ denotes
the canonical identification of the manifold $\lieAlgebra^*$ with its
tangent space $T_{J(p)}\lieAlgebra^*$ at the point $J(p)$. The
elementary computation behind the second equality was carried out in
the proof of Lemma \ref{lem:JSurjektiv}.
Thus $v$ lies in $\ker T_pJ$ if and only if it is an element of
$(T_p(G\cdot p))^\bot$. Since $T_c\ensuremath{\imath} T_cC = \ker T_cJ$
(cf.\ \cite[Lemma 8.15]{lee:2003a}), it follows that for $c \in C$
\begin{align*}
T_c(G\cdot c)^\bot = T_c\ensuremath{\imath} T_cC \ensuremath{\text{.}}
\end{align*}
%
By the chain rule, the equivariance of $J$ and the definition of $C$,
we have for $c \in C$ and $\xi \in \lieAlgebra$
\begin{align*}
T_cJ \xi_M(c) = T_cJ T_e \phi_{\cdot}(c) \xi = T_e(J \circ \phi_{\cdot}(c))\xi = T_e(\phi_{\cdot}(\underbrace{J(c)}_{=0}))\xi
= 0 \ensuremath{\text{,}}
\end{align*}
where $\phi \colon G \times M \to M$ is the given $G$-action on $M$;
in the third step, by a slight abuse of notation, $\phi$ also denotes
the coadjoint action on $\lieAlgebra^*$, which fixes $0$, so the curve
$g \mapsto \phi_g(J(c))$ is constant. Together with
$T_c\ensuremath{\imath} T_cC = \ker T_cJ$ and equation
\eqref{eq:TangentialraumAnBahn} this implies
\begin{align*}
T_c(G\cdot c) \subset T_c\ensuremath{\imath} T_cC\ensuremath{\text{,}}
\end{align*}
and therefore also
\begin{align}
\label{eq:BeiUnsKoisotrop}
T_cC^\bot = T_c(G \cdot c)^{\bot \bot} = T_c(G \cdot c) \subset
T_c\ensuremath{\imath} T_cC \ensuremath{\text{.}}
\end{align}
For a proof of the biduality rule used in the second step we refer to
\cite[Prop. 5.3.2]{abraham.marsden:1985a}.
We can thus record the following proposition.
\begin{proposition}
\label{prop:BeiUnsKoisotrop}
$C = J^{-1}(0)$ is coisotropic in $M$.
\end{proposition}
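As a minimal illustration of this proposition (the data below form a standard toy example, invented for this sketch and not part of the general setup above), one may take $M = T^*\mathbb{R}^2$ with $\omega = dq^1\wedge dp_1 + dq^2\wedge dp_2$, $G = \mathbb{R}$ acting by translations in $q^1$, and $J(q,p) = p_1$:

```latex
% Toy example: C = J^{-1}(0) = \{p_1 = 0\}.  At c \in C one computes
\[
T_cC = \operatorname{span}\{\partial_{q^1},\partial_{q^2},\partial_{p_2}\},
\qquad
T_cC^{\bot} = \operatorname{span}\{\partial_{q^1}\}
= T_c(G\cdot c) \subset T_cC \ensuremath{\text{,}}
\]
% so C is coisotropic, as asserted by Proposition \ref{prop:BeiUnsKoisotrop}.
```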
We are now in a position to state the intended algebraic description
of the observable algebra of the reduced phase space. This is the
content of the next proposition.
{
\emergencystretch=1.6em
\begin{proposition}
\label{prop:FunktionenAufDemReduziertenPhasenraum}
\begin{propositionEnum}
\item %
$\kIdeal$ is a Poisson ideal\footnote{i.e.\ $\kIdeal$ is an ideal both
with respect to the algebra multiplication and with respect to the
Lie bracket given by $\{\cdot,\cdot\}$; thus for all
$f \in C^\infty(M)$ and $f' \in \kIdeal$ one has $f\cdot f' \in \kIdeal$
and $\{f,f'\} \in \kIdeal$,
cf.\ \cite[p. 347]{cushman.bates:1997a}} in $\kbIdeal$, whereby
$\kbIdeal/\kIdeal$ becomes a Poisson algebra with Poisson bracket
$\{\cdot,\cdot\}'$ via $\{[f],[h]\}' := [\{f,h\}]$.
\item %
\label{item:EinschraenkungSurjektiv} %
The map $\ensuremath{\imath}^* \colon \CM[M] \to C^\infty(C)$ is
surjective.
\item %
\label{item:CharakterisierungDesKlassischenIdealisators}
We have
\begin{align}
\label{eq:CharakterisierungDesKlassischcenIdealisators}
\kbIdeal = \{f \in C^\infty(M) \mid \ensuremath{\imath^*} f \in
\pi^*C^\infty(\Mred)\} \ensuremath{\text{.}}
\end{align}
In particular, if $\prol \colon C^\infty(C) \to C^\infty(M)$ is a
right inverse of $\ensuremath{\imath^*}$, then $\prol
\pi^*C^\infty(\Mred) \subset \kbIdeal$.
\item %
The map
\begin{align}
\label{eq:FunktionenAufDemReduziertenPhasenraum}
\mathsf{iso}\colon \kbIdeal/\kIdeal \ni [f] \mapsto \ensuremath{\imath}^* f \in
\pi^*\CM[\Mred] = \CM[C]^G
\end{align}
is an algebra isomorphism. Here the algebra multiplication on
$\kbIdeal/\kIdeal$ is given by $[f][f'] := [ff']$ for $f,f' \in
\kbIdeal$.
If $\prol \colon
C^\infty(C) \to C^\infty(M)$ is a right inverse of $\ensuremath{\imath}^* \colon
C^\infty(M) \to C^\infty(C)$, then for every $f \in {C^\infty(C)}^G$
\begin{align}
\label{eq:FunktionenAufDemReduziertenPhasenraumIsoInverses}
\mathsf{iso}^{-1}(f) = [\prol(f)] \in \kbIdeal/\kIdeal \ensuremath{\text{.}}
\end{align}
\item %
The map
\begin{align}
\label{eq:FunktionenAufDemReduziertenPhasenRaumEndgIso}
\mathsf{iso}^{-1} \circ \pi^* \colon C^\infty(\Mred) \to \kbIdeal/\kIdeal
\end{align}
is an isomorphism of Poisson algebras. If $\prol \colon
C^\infty(C) \to C^\infty(M)$ is a right inverse of $\ensuremath{\imath}^* \colon
C^\infty(M) \to C^\infty(C)$, then in particular for $\varphi,\varphi' \in C^\infty(\Mred)$
\begin{align}
\label{eq:ReduziertePoissonKlammer}
\pi^*\{\varphi,\varphi'\}_{\mathrm{red}} = \ensuremath{\imath}^*\{\prol \pi^* \varphi, \prol \pi^* \varphi'\} \ensuremath{\text{.}}
\end{align}
\end{propositionEnum}
\end{proposition}
}
\begin{proof}
\begin{beweisEnum}
\item %
Since by Proposition
\ref{prop:Koisotrop}~\refitem{item:PoissonUnteralgebraKoisotrop}
and Proposition \ref{prop:BeiUnsKoisotrop} the ideal $\kIdeal$
is a Poisson subalgebra of $C^\infty(M)$, we clearly have
$\kIdeal \subset \kbIdeal$. The rest is then immediate.
\item %
Every $f \in C^\infty(C)$ can be extended smoothly in local
charts, and by means of a partition of unity one obtains a smooth
extension of $f$ to an open neighbourhood of $C$. Multiplying by
a smooth cut-off function finally yields an extension to a
smooth function on all of $M$.
\item %
First we note that for $c \in C$ we have the equality
\begin{align}
\label{eq:FundiHamVektorFelder}
\{\xi_M(c) \mid \xi \in \lieAlgebra\} = T_c(G \cdot c) =
T_cC^\bot = \{X_h(c) \mid h \in \kIdeal\} \ensuremath{\text{.}} \tag{$*$}
\end{align}
Now let $f \in C^\infty(M)$. We consider the following chain of
equivalences:
\begin{align*}
\ensuremath{\imath}^*f \in C^\infty(C)^G &\iff \Phi_g^*(\ensuremath{\imath}^* f) = \ensuremath{\imath}^* f
&&\forall g \in G \\
&\iff f(g c) = f(c) &&\forall g \in G, c\in C \\
&\iff \xi_M(c)(f) = 0 &&\forall \xi \in \lieAlgebra, c \in
C \\
&\iff X_h(c)(f) = 0 &&\forall h \in \kIdeal, c \in C\\
&\iff \{f,h\}(c) = 0 &&\forall h \in \kIdeal, c\in C \\
&\iff f \in \kbIdeal \ensuremath{\text{.}}
\end{align*}
Here the third equivalence uses that $G$ is connected, and the
fourth uses equation
\eqref{eq:FundiHamVektorFelder}.
\item %
By the definition of $\kIdeal = \ker \ensuremath{\imath^*}$ and
equation~\eqref{eq:CharakterisierungDesKlassischcenIdealisators}
it is clear that $\mathsf{iso}$ is well defined and injective.
Compatibility with the algebra and vector space structures is
clear as well. Surjectivity follows from parts~\refitem{item:EinschraenkungSurjektiv} and~\refitem{item:CharakterisierungDesKlassischenIdealisators}. Now
let $\prol$ be a right inverse of $\ensuremath{\imath}^*$. Then, for $f \in
{C^\infty(C)}^G$, the equivalence above yields $\prol f \in
\kbIdeal$, since $\ensuremath{\imath^*} \prol f = f \in C^\infty(C)^G$. Moreover,
the equation
\begin{align*}
\mathsf{iso}([\prol(f)]) = \ensuremath{\imath}^* \prol(f) = f
\end{align*}
obviously holds, and for $f \in \kbIdeal$ we get
\begin{align*}
[\prol(\mathsf{iso}([f]))] = [\prol(\ensuremath{\imath}^* f)] = [f] \ensuremath{\text{,}}
\end{align*}
since $\prol \ensuremath{\imath}^* f - f \in \kIdeal$.
\item %
By what has been shown so far it is clear that $\mathsf{iso}^{-1} \circ
\pi^*$ is an isomorphism of algebras. It thus only remains to
check that the Poisson brackets are respected. To this end, let
$\prol$ be a right inverse of $\ensuremath{\imath}^*$.
First we see with equation
\eqref{eq:TangentialRaumUntermannigfaltigkeit} that the
Hamiltonian vector field $X_f$ of a function $f \in \kbIdeal$
satisfies $X_f(c) \in T_c\ensuremath{\imath} T_cC$ for all $c \in C$: indeed,
for $f' \in \kIdeal$ we have $X_f(c)(f') = \ensuremath{\imath}^*\{f',f\}(c) = 0$. Hence
there is a vector field $\tilde X_f$ on $C$ that is
$\ensuremath{\imath}$-related to $X_f$. Now let $c \in C$ and $v \in T_cC$
be arbitrary. Then, for $\varphi \in C^\infty(\Mred)$,
\begin{align*}
\omega_{\mathrm{red}}\at{\pi(c)}(T_c\pi {\tilde X}_{\prol \pi^*
\varphi}(c), T_c\pi v) &= \omega\at{c}(T_c \ensuremath{\imath}{\tilde X}_{\prol \pi^*
\varphi}(c),T_c\ensuremath{\imath} v) \\
&= \omega\at{c}(X_{\prol \pi^*\varphi}(c),T_c\ensuremath{\imath} v)
\\
&= \dPaar{d_c(\prol \pi^* \varphi)}{T_c \ensuremath{\imath} v} \\
&= \dPaar{d_c(\prol
\pi^* \varphi \circ \ensuremath{\imath})}{v} \\
&= \dPaar{d_c(\ensuremath{\imath}^*\prol (\varphi \circ
\pi))}{v} \\
&= \dPaar{d_c(\varphi \circ \pi)}{v} \\
&= \dPaar{d_{\pi(c)}\varphi}{T_c\pi v} \\
&= \omega_{\mathrm{red}}\at{\pi(c)}(X_\varphi(\pi(c)),T_c\pi v)\ensuremath{\text{.}}
\end{align*}
By the non-degeneracy of $\omega_{\mathrm{red}}$, the vector fields
${\tilde X}_{\prol \pi^* \varphi}$ and $X_\varphi$ are therefore
$\pi$-related. Hence, for $\varphi,{\varphi'} \in C^\infty(\Mred)$,
\begin{align*}
\ensuremath{\imath}^* \{\prol \pi^* \varphi,\prol \pi^* {\varphi'}\} %
&= \ensuremath{\imath}^*(\omega(X_{\prol
\pi^* \varphi}, X_{\prol \pi^* {\varphi'}})) \\
&= \ensuremath{\imath}^*(\omega(T\ensuremath{\imath} {\tilde
X}_{\prol \pi^* \varphi},T \ensuremath{\imath} {\tilde X}_{\prol \pi^* {\varphi'}}))\\
&= (\ensuremath{\imath}^*\omega)({\tilde X}_{\prol \pi^* \varphi},{\tilde X}_{\prol
\pi^* {\varphi'}}) \\
&= (\pi^*\omega_{\mathrm{red}})({\tilde X}_{\prol
\pi^* \varphi},{\tilde X}_{\prol \pi^* {\varphi'}} )\\
&= (\omega_{\mathrm{red}} \circ \pi)(T\pi {\tilde X}_{\prol \pi^* \varphi}, T \pi
{\tilde X}_{\prol \pi^* {\varphi'}}) \\
&= ({\omega_{\mathrm{red}}\circ \pi})(X_\varphi\circ
\pi,X_{\varphi'}\circ \pi) \\
&= \pi^*(\omega_{\mathrm{red}}(X_\varphi,X_{\varphi'})) =
\pi^*\{\varphi,{\varphi'}\}_{\mathrm{red}} \ensuremath{\text{.}}
\end{align*}
This of course also yields
\begin{align*}
(\mathsf{iso}^{-1}\circ \pi^*)(\{\varphi,{\varphi'}\}_{\mathrm{red}}) %
&= [\prol(\ensuremath{\imath}^* \{\prol \pi^* \varphi, \prol \pi^* {\varphi'}\})] \\
&= [\{\prol
\pi^* \varphi, \prol \pi^* {\varphi'} \}]\\
&= \{(\mathsf{iso}^{-1}\circ\pi^*) \varphi,(\mathsf{iso}^{-1} \circ\pi^*)
{\varphi'}\} \ensuremath{\text{,}}
\end{align*}
i.e.\ $\mathsf{iso}^{-1}\circ \pi^*$ is an isomorphism of Poisson algebras.
\end{beweisEnum}
\end{proof}
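To see the content of the proposition in a concrete case (again a toy example with invented data, not part of the general setup): for $M = T^*\mathbb{R}^2$, $G = \mathbb{R}$ acting by translations in $q^1$ and $J = p_1$, so that $C = \{p_1 = 0\}$ and $\Mred \cong T^*\mathbb{R}$, one finds

```latex
% Since \{f, p_1\} = \partial_{q^1} f, in this example
% \kIdeal = p_1\, C^\infty(M)  and
% \kbIdeal = \{f \in C^\infty(M) \mid (\partial_{q^1} f)\at{C} = 0\}, so
\[
\kbIdeal/\kIdeal \;\cong\; C^\infty(\Mred) \cong C^\infty(T^*\mathbb{R})
\ensuremath{\text{,}} \qquad
[f] \longmapsto f\at{p_1 = 0}\ \text{viewed as a function of }(q^2,p_2)
\ensuremath{\text{,}}
\]
% and the reduced bracket is the canonical one in (q^2,p_2), in
% accordance with \eqref{eq:ReduziertePoissonKlammer}.
```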
\subsection{The augmented Koszul complex}
\label{sec:AugementierterKoszulKomplex}
Since a priori no ``quantum restriction'' is available, we want to
give a different characterisation of $\kIdeal$ which admits a
natural analogue in the quantum case. To this end, let
$\langle J \rangle$ be the ideal generated by the momentum map $J$. If
$\{e_\alpha\}$ is a basis of $\lieAlgebra$, then $\langle J \rangle =
\{f^\alpha \dPaar{J}{e_\alpha} \mid f^\alpha \in C^\infty(M)\}$. Note
that this definition is independent of the choice of the basis
$\{e_\alpha\}$ of $\lieAlgebra$. Clearly $\langle J \rangle
\subset \kIdeal$. In the following we want to show that in fact $\langle J
\rangle = \kIdeal$ holds. To this end we have to prove that for
every $f \in \kIdeal$ there are smooth functions $f^\alpha \in C^\infty(M)$
with $f = f^\alpha \dPaar{J}{e_\alpha}$. Defining the map
$\partial \colon C^\infty(M) \otimes \lieAlgebra \to C^\infty(M)$, $f
\otimes \xi \mapsto f \dPaar{J}{\xi}$, by linear extension, this
can be written as $f = \partial (f^\alpha \otimes e_\alpha)$.
\begin{definition}[Augmented Koszul complex]
\label{def:KoszulKomplex}
For $k \in \mathbb{N}$ one defines the map
\begin{align}
\label{eq:KoszulFormel}
\partial \colon C^\infty(M) \otimes \bigwedge\nolimits^k
\lieAlgebra \to C^\infty(M) \otimes \bigwedge\nolimits^{k-1} \lieAlgebra
\end{align}
by linear extension of
\begin{align}
\label{eq:KoszuFormel2}
f\otimes \xi \mapsto f \dPaar{J}{e_\alpha}\otimes
\Ins{e^\alpha}\xi \ensuremath{\text{,}}
\end{align}
where $\Ins{e^\alpha}$ denotes the antisymmetric insertion derivation.
The resulting sequence
\begin{equation}
\begin{tikzpicture}[baseline=(current
bounding box.center),description/.style={fill=white,inner sep=2pt}]
\matrix (m) [matrix of math nodes, row sep=3.0em, column
sep=3.5em, text height=1.5ex, text depth=0.25ex] %
{ C^\infty(C) & C^\infty(M) \otimes \bigwedge\nolimits^0 \lieAlgebra &
C^\infty(M)
\otimes \bigwedge\nolimits^1 \lieAlgebra & \dots \\
}; %
\path[<-] (m-1-1) edge node[auto]{$\ensuremath{\imath}^*$}(m-1-2); %
\path[<-] (m-1-2) edge node[auto]{$\partial$}(m-1-3); %
\path[<-] (m-1-3) edge node[auto]{$\partial$}(m-1-4); %
\end{tikzpicture},
\end{equation}
is called the \neuerBegriff{(augmented) Koszul complex} $K$.
\end{definition}
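The sign conventions in \eqref{eq:KoszuFormel2} can be checked in coordinates. The following sketch (a toy computation with invented momentum components, not taken from the text) implements the Koszul differential for $\dim \lieAlgebra = 2$ on $M = \mathbb{R}^2$ and verifies $\partial \circ \partial = 0$ symbolically:

```python
import sympy as sp

# Toy Koszul differential for dim(g) = 2 on M = R^2 with coordinates x, y.
# Elements of C^inf(M) ⊗ Λ^k g are dicts mapping an increasing index tuple
# (the wedge basis element) to a sympy coefficient.
x, y = sp.symbols('x y')
J = [x**2 + y, sp.sin(x) * y]   # hypothetical components <J, e_1>, <J, e_2>

def boundary(elem):
    """Koszul differential: f ⊗ e_{i_1}∧…∧e_{i_k} ↦ Σ_a (-1)^a J_{i_a} f ⊗ (drop e_{i_a}),
    i.e. contraction with e^α paired against the momentum components."""
    out = {}
    for idx, f in elem.items():
        for a, i in enumerate(idx):
            rest = idx[:a] + idx[a + 1:]
            sign = (-1) ** a            # antisymmetric insertion i(e^α)
            out[rest] = out.get(rest, 0) + sign * J[i] * f
    return out

omega2 = {(0, 1): sp.exp(x * y)}        # f ⊗ e_1 ∧ e_2 in degree 2
omega1 = boundary(omega2)               # degree 1
omega0 = boundary(omega1)               # degree 0: must vanish identically
print(sp.simplify(omega0[()]))          # → 0
```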
\begin{bemerkung}
\label{bem:KoszulKomplex}
The augmented Koszul complex is indeed a complex, i.e.\ we have
$\partial^2 = 0$ and $\ensuremath{\imath^*} \partial = 0$.
\end{bemerkung}
Strictly speaking, only the part without $C^\infty(C)$ is called the
Koszul complex \cite[XXI,§4]{lang:1997a},
\cite[3.4.6]{loday:1998a}; it is augmented by $C^\infty(C)
\stackrel{\phantom{=}\ensuremath{\imath^*}}{\longleftarrow}$. If $U
\subset M$ is an open subset, we denote in the following by
$\partial_U \colon C^\infty(U)\otimes \bigwedge\nolimits^\bullet \lieAlgebra \to
C^\infty(U)\otimes \bigwedge\nolimits^{\bullet - 1} \lieAlgebra$ the uniquely
determined map such that for all $f \otimes \xi \in
C^\infty(M)\otimes \bigwedge\nolimits^\bullet \lieAlgebra$ the equation
$\partial_U( f\at{U}\otimes \xi) = (\partial (f\otimes \xi))\at{U}$
holds. The map $\partial_U$ is evidently given, for all $k \in
\mathbb{N}$ and $f \otimes \xi \in C^\infty(U) \otimes \bigwedge\nolimits^k
\lieAlgebra$, by $\partial_U(f\otimes\xi) = f J_\alpha\at{U} \otimes
\Ins{e^\alpha}\xi$, with $J_\alpha := \dPaar{J}{e_\alpha}$, and linear extension.
The equation $\langle J \rangle = \kIdeal$ is then equivalent to the
statement that the augmented Koszul complex is exact at
$C^\infty(M) \otimes \bigwedge\nolimits^0 \lieAlgebra$. Since it involves no
essential extra effort, and above all since it will prove useful for
the later quantisation, we will consider the whole augmented Koszul
complex and show, by constructing an explicit contracting chain
homotopy, that all homologies vanish. In other words, the augmented
Koszul complex is a resolution of the $C^\infty(M)$-module
$C^\infty(C)$. Before we turn to this somewhat technical task, let us
note the following equivariance property of $\partial$.
\begin{proposition}[Equivariance of $\partial$]
\label{prop:partialInvariant}
Let $H$ be a Lie group acting on $M$, and let moreover a linear
$H$-action on $\lieAlgebra[g]$ be given such that $J$ is
$H$-equivariant. Then $\partial$ is
$H$-equivariant as well.
\end{proposition}
\begin{proof}
For $h \in H$ we have
\begin{align*}
h (\partial (f \otimes \xi)) &= h ( J(e_{\alpha}) f
\otimes \Ins{e^{\alpha}} \xi)\\
&= h (J(e_{\alpha}) f) \otimes h \Ins{e^{\alpha}}\xi \\
&= h J(e_{\alpha}) h f \otimes \Ins{h e^{\alpha}} (h \xi)
\\
&= J(h e_{\alpha}) h f \otimes \Ins{h e^\alpha} (h \xi) \\
&= \partial(h f \otimes h \xi) = \partial (h (f \otimes \xi)) \ensuremath{\text{.}}
\end{align*}
\end{proof}
\subsection{Construction of the homotopy}
\label{sec:KonstruktionDerHomotopie}
Since only contracting chain homotopies occur in this work, we will
simply call them homotopies in the following, for brevity.
If $U$ is an open neighbourhood of $C$ and $r \colon U \to C$ a smooth
retraction, it is natural to use it to construct, in a geometric way,
a right inverse of $\ensuremath{\imath^*}$, as was already employed purely
algebraically in Proposition
\ref{prop:FunktionenAufDemReduziertenPhasenraum}. The next definition
makes this precise. In our situation such a smooth retraction always
exists by the tube theorem \ref{satz:GkompatibleTuben}. If $H$ is a
Lie group acting properly on $M$ and stabilising $C$, then $U$ can be
chosen $H$-invariant and $r$ $H$-equivariant.
\begin{definition}[Geometric prolongation map]
\label{def:geometrischeProlongation}
Let $C$ be a submanifold of a manifold $M$, let $U$ and $O$ be open
neighbourhoods of $C$ in $M$ with $\abschluss{O} \subset U$, and let
$\psi_U \colon M \to \mathbb{R}$ be a smooth function such that
$\{\psi_U,1 - \psi_U\}$ is a smooth partition of unity subordinate
to the open cover $\{U,M\setminus \abschluss{O}\}$. Further, let $r
\colon U \to C$ be a smooth retraction onto $C$. In this situation we
call the map
\begin{align}
\label{eq:geometrischeProlongation}
&\prol_{M,O,r,\psi_U} \colon C^\infty(C) \to C^\infty(M)\\
&\prol_{M,O,r,\psi_U}(f)(p) :=
\begin{cases}
\psi_U(p)f(r(p)) & \text{if $p \in U$} \\
0 & \text{otherwise}
\end{cases}
\end{align}
the \neuerBegriff{geometric prolongation map}, or
\neuerBegriff{geometric prolongation} for short. If it is clear from
the context which data $(M,O,r,\psi_U)$ have been chosen, we simply
write $\prol$ instead of $\prol_{M,O,r,\psi_U}$.
\end{definition}
\begin{bemerkung}
\label{bem:geometrischeProlongation}
We keep the notation of Definition
\ref{def:geometrischeProlongation}, and let $H$ be a Lie group
acting on $M$. If $C$, $U$, $O$ and $\psi_U$ are $H$-invariant
and $r$ is $H$-equivariant, then $\prol$ is $H$-equivariant as
well. By Theorem \ref{satz:GkompatibleTuben} and Theorem
\ref{satz:inv_zerlegung_der_eins} there always exist a smooth
$H$-equivariant retraction and a smooth $H$-invariant function
$\psi_U$ with the properties required in Definition
\ref{def:geometrischeProlongation}.
\end{bemerkung}
\begin{proposition}
\label{prop:EigenschaftenGeometrischeProlongation}
If $\prol$ is a geometric prolongation, then
\begin{align}
\ensuremath{\imath}^* \prol = \operatorname{\mathsf{id}}\label{eq:EigenschaftenGeometrischeProlongation}\ensuremath{\text{,}}
\end{align}
where $\ensuremath{\imath} \colon C \hookrightarrow M$ is the canonical inclusion of
the submanifold $C$ into the manifold $M$.
\end{proposition}
\begin{proof}
Clear.
\end{proof}
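A minimal numerical sketch of Definition \ref{def:geometrischeProlongation} (all data invented for illustration): $C$ the unit circle in $\mathbb{R}^2$, $U = \mathbb{R}^2\setminus\{0\}$, retraction $r(p) = p/|p|$, and a smooth cutoff $\psi_U$ equal to $1$ near $C$; then $\ensuremath{\imath}^*\prol f = f$ as in Proposition \ref{prop:EigenschaftenGeometrischeProlongation}:

```python
import math

# Hypothetical data: C = unit circle in R^2, U = R^2 \ {0}, r(p) = p/|p|,
# O = {1/2 < |p| < 2}, and a C^inf cutoff psi_U equal to 1 on O, supp psi_U ⊂ U.
def smoothstep(t):
    """C^inf step: 0 for t <= 0, 1 for t >= 1 (standard bump construction)."""
    def g(s):
        return math.exp(-1.0 / s) if s > 0 else 0.0
    return g(t) / (g(t) + g(1.0 - t))

def psi_U(p):
    rad = math.hypot(*p)
    return smoothstep((rad - 0.25) / 0.25) * smoothstep((2.5 - rad) / 0.5)

def prol(f):
    """Geometric prolongation: psi_U(p) * f(r(p)) on U, extended by 0."""
    def F(p):
        rad = math.hypot(*p)
        if rad == 0.0:                      # the single point outside U
            return 0.0
        r_p = (p[0] / rad, p[1] / rad)      # retraction onto the circle
        return psi_U(p) * f(r_p)
    return F

f = lambda c: c[0] * c[1]                   # a test function on C
F = prol(f)
c = (math.cos(0.7), math.sin(0.7))          # a point on C
print(abs(F(c) - f(c)) < 1e-12)             # i^* prol f = f on C → True
```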
To keep the geometry under control and to be able to construct the
homotopy, we will work in a tubular neighbourhood which, as the
following theorem shows, can be chosen in a particularly adapted way.
\begin{satz}
\label{satz:CxgA}
Let $M$ be a smooth manifold and \(W\) a finite-dimensional vector
space. Further, let \(G\) be a Lie group acting properly on \(M\)
and linearly on $W$. Moreover, let a $G$-equivariant smooth map
$J \colon M \to W$ be given, and let $0 \in W$ be a regular value of
$J$ with $C := J^{-1}(0) \neq \emptyset$. Finally, if \(r\colon U'
\to C\) is a smooth \(G\)-equivariant retraction onto \(C\) from an
open \(G\)-invariant neighbourhood \(U'\) of \(C\) in \(M\) (which
could, for instance, come from a tubular neighbourhood), then there
exist \(G\)-invariant open neighbourhoods \(U \subset U'\) of \(C\)
in \(M\) and \(V\) of \(C \times \{0\}\) in \(C \times W\) such that
\begin{align}
\label{eq:DefinitionDerGutenTubenAbbildung}
F \colon U \to V, \quad u \mapsto (r(u),J(u))
\end{align}
is a
\(G\)-equivariant diffeomorphism.
%
For all \((c,v) \in V\) we then have
%
\begin{align}
\label{eq:TubeneigenschaftDerSpeziellenTube1}
F^{-1}(c,0) = c \quad \text{and} \quad J \circ F^{-1}(c,v) = v \ensuremath{\text{.}}
\end{align}
\(V\) can be chosen star-shaped in the fibre direction, i.e.\ such
that \((c,v) \in V \) implies \( (c,tv) \in V\) for all \(t \in [0,1]\).
\end{satz}
\begin{proof}
We define $\widetilde{F} \dpA U' \to C \times W$ by
$\widetilde{F}(u) := (r(u),J(u))$.
First we show that $T_c\widetilde{F}$ is injective for all $c \in C$.
Let $v \in T_cM$ with $T_c\widetilde{F}v = 0$. By Proposition
\ref{prop:produkte}~\refitem{item:ProdukteKomma} this is
equivalent to $T_cr\, v = 0$ and $T_cJ\,v = 0$. Since $0$ is a regular
value of $J$, the latter means that $v \in T_c\iota T_cC$,
where $\iota \dpA C \hookrightarrow M$ denotes the inclusion. Since
moreover $r \circ \iota = \operatorname{\mathsf{id}}$, it follows from $T_cr\, v = 0$ that $v =
0$. Hence $T_c\widetilde{F}$ is injective, and for dimension reasons
bijective, for all $c \in C$. By the inverse function theorem,
$\widetilde{F}$ is therefore a local diffeomorphism on an open
neighbourhood of $C$. Furthermore, by the $G$-equivariance of $J$
the set $C \times \{0\}$ is clearly $G$-invariant, and
$s \dpA C \times \{0\} \to U'$ with $s((c,0)) = c$ defines a
continuous $G$-equivariant section of $\widetilde{F}$ on $C \times
\{0\}$. By the $G$-equivariance of $r$ and $J$, the map
$\widetilde{F}$ is obviously $G$-equivariant as well.
Corollary \ref{kor:AufUmgebungVonCHomoe} then yields a
$G$-invariant open neighbourhood $\widetilde{U}$ of $s(C \times
\{0\}) = C$ such that $\widetilde{F}\at{\widetilde U}$ is a
$G$-equivariant homeomorphism onto an open $G$-invariant subset
$\widetilde{V}$ of $C \times W$. By construction,
$J \circ \widetilde{F}^{-1}(c,v) = v$ for $(c,v) \in
\widetilde{V} \subset C \times W$. By Lemma \ref{lem:1Ball} there
is a $G$-invariant Riemannian fibre metric $h$ on $C \times W$
with $V := B_1(C) \subset \widetilde{V}$. Hence $U :=
\widetilde{F}\at{\widetilde{U}}^{-1}(V)$, $V$ and $F :=
\widetilde{F}\at{U} \dpA U \to V$ are as desired.
\end{proof}
In particular, we can record the following special case of the
preceding theorem.
\begin{korollar}
\label{kor:Cxg}
Let $(M,\omega)$ be a symplectic manifold and $G$ a Lie group with
associated Lie algebra $\lieAlgebra[G]$. Let $G$ act on
$\lieAlgebra^*$ via $\Ad^*$ and on $M$ properly and in a strongly
Hamiltonian fashion, and let $J \dpA M \to \lieAlgebra[g]^*$ be the
associated $G$-equivariant momentum map. Finally, let $0 \in
\lieAlgebra[g]^*$ be a regular value of $J$ and $C :=
J^{-1}(\{0\}) \neq \emptyset$.
Then there exist a $G$-invariant neighbourhood $U$ of $C$ in $M$ and
a $G$-equivariant diffeomorphism $F \dpA U \to V \subset C
\times \lieAlgebra[g]^*$ onto an open $G$-invariant neighbourhood $V$
of $C \times \{0\}$, where the $G$-action on $C \times
\lieAlgebra[g]^*$ is given by $g (c,v) := (g c,\Ad^*_{g^{-1}}
\,v)$ for all $g \in G$, $(c,v) \in C \times \lieAlgebra[g]^*$.
One can even arrange that $J \circ F^{-1}(c,v) = v$ for $(c,v) \in
V$, and that for every $(c,v) \in V$ also $(c,tv) \in V$ for every
$t \in [0,1]$.
\end{korollar}
\begin{bemerkung}[Good tubular neighbourhood]
\label{bem:SpezielleTubenumgebung}
The tuple $(C\times W \to C,V,U,F^{-1})$ from Theorem \ref{satz:CxgA}
is in particular a $G$-equivariant tubular neighbourhood of $C$ with
$G$-equivariant retraction $r \dpA U \to C$. In the following we will
call such a tubular neighbourhood the \neuerBegriff{\tn{good}
tubular neighbourhood induced by $r$}, or simply a
\neuerBegriff{\tn{good} tubular neighbourhood}. Analogously we then
speak of a \neuerBegriff{\tn{good} tube map}, and so on. Finally, by
a slight abuse of notation, we will also refer to the inverse tube
map $F$ as such and speak of the \tn{good} tube map $F \dpA U
\to V$.
\end{bemerkung}
Using a good tubular neighbourhood we will now construct the desired
homotopy.
\begin{lemma}
\label{lem:HomotopienInDerTubenumgebung}
Let $\Psi \colon U \to V \subset C \times \lieAlgebra^*$ be a
\tn{good} tube map with associated retraction $r \colon U \to
C$. Further, let the map $\h_U \colon C^\infty(U) \otimes
\bigwedge\nolimits^\bullet \lieAlgebra \to C^\infty(U) \otimes
\bigwedge\nolimits^{\bullet + 1}\lieAlgebra$ be given, for all $k \in \mathbb{N}$,
$f \otimes \eta \in C^\infty(U) \otimes \bigwedge\nolimits^k \lieAlgebra$ and
$(c,\mu) \in V$,
by linear extension of
\begin{align}
\label{eq:HomotopienInDerTubenumgebung}
\h_U(f \otimes \eta) \circ \Psi^{-1}(c,\mu) := \int_{0}^1
t^k \partial^\alpha(f \circ \Psi^{-1})(c,t \mu) \, dt \otimes
e_\alpha \wedge \eta \ensuremath{\text{,}}
\end{align}
where
$\partial^\alpha := \partial_{e^\alpha}$ denotes the derivative in
the direction $e^\alpha$. Then
\begin{align}
\label{eq:HomotopienInDerTubenumgebung2}
\h_U\partial_U x + \partial_U \h_U x = x \quad \text{for $x \in
C^\infty(U) \otimes \bigwedge\nolimits^k \lieAlgebra$ and $k \geq 1$} \ensuremath{\text{,}}
\end{align}
\begin{align}
\label{eq:HomotopienInDerTubenumgebung3}
r^* \ensuremath{\imath}^* f + \partial_U \h_U f = f \quad \text{for $f \in
C^\infty(U)$}
\end{align}
and
\begin{align}
\label{eq:HomotopienInDerTubenumgebung4}
\h_U r^* = 0 \ensuremath{\text{.}}
\end{align}
If $M$ carries an action of a Lie group $H$ under which $C$ is stable,
if $\lieAlgebra$ carries a linear $H$-action, and if $\Psi$ is
$H$-equivariant, then the same holds for $\h_U$.
\end{lemma}
\begin{proof}
The proof is similar to that of the classical Poincar\'e lemma
(cf.\ e.g.\ \cite[Lemma 11.4,
Thm. 11.11]{lee:2003a}). For $\alpha \in \{1,\dots,\dim G\}$ let
$\mathrm{pr}_\alpha \colon C \times \lieAlgebra^* \to \mathbb{R}$
denote the $\alpha$-th coordinate function with respect to the basis
$\{e^\alpha\}$, i.e.\ for $(c,\mu_\alpha e^\alpha) \in C \times
\lieAlgebra^*$ we have $\mathrm{pr}_\alpha(c,\mu_\beta
e^\beta) = \mu_\alpha$. Note that $J_\alpha \circ \Psi^{-1} =
\mathrm{pr}_\alpha$. For $k \in \mathbb{N}$ and $f \otimes \eta \in
C^\infty(U) \otimes \bigwedge\nolimits^k \lieAlgebra$ we finally compute
\begin{align*}
\lefteqn{(\partial_U \h_U (f\otimes \eta))(\Psi^{-1}(c,\mu))}%
\\ &= J_\alpha(\Psi^{-1}(c,\mu))
\Ins{e^\alpha} \h_U(f \otimes \eta)(\Psi^{-1}(c,\mu)) \\
&= \mu_\alpha \left ( \int_0^1 t^k \partial^\beta(f \circ
\Psi^{-1})(c,t\mu)
\, dt \otimes \Ins{e^\alpha}(e_\beta \wedge \eta) \right) \\
&= \mu_\alpha \left ( \int_0^1 t^k \partial^\beta(f \circ
\Psi^{-1})(c,t\mu)
\, dt \otimes \delta^\alpha_\beta \eta \right) \\
&\phantom{=} - (1 - \delta_{0k})\mu_\alpha \left ( \int_0^1
t^k \partial^\beta(f \circ \Psi^{-1})(c,t\mu) \, dt \otimes
(e_\beta \wedge \Ins{e^\alpha} \eta) \right) \\
&= \int_0^1 t^k \frac{d}{dt}(f \circ \Psi^{-1}(c,t\mu)) \, dt \otimes \eta \\
&\phantom{=} - (1 - \delta_{0k})\int_0^1
t^{k-1}\left(\mathrm{pr}_\alpha \cdot \partial^\beta(f \circ
\Psi^{-1}) \right)(c,t\mu) \, dt \otimes
e_\beta \wedge \Ins{e^\alpha} \eta \\
&= \int_0^1 \frac{d}{dt} \left(\left(t^k
f(\Psi^{-1}(c,t\mu))\right)\right)\, dt \otimes \eta -
\int_0^1 \frac{d t^k}{dt} (f \circ \Psi^{-1})(c,t\mu) \, dt
\otimes \eta \\
&\phantom{=} - (1 - \delta_{0k}) \int_0^1 t^{k-1} \partial^\beta
(\mathrm{pr}_\alpha \cdot f \circ \Psi^{-1})(c,t\mu) \, dt \otimes
e_\beta \wedge\Ins{e^\alpha}
\eta \\
&\phantom{=} + (1 - \delta_{0k}) \int_0^1
t^{k-1}(\delta_\alpha^\beta f \circ
\Psi^{-1})(c,t\mu) \, dt \otimes e_\beta \wedge \Ins{e^\alpha} \eta \\
&= ((f\circ \Psi^{-1})(c,\mu) - \delta_{0k}(f \circ
\Psi^{-1})(c,0))\otimes \eta - (1 - \delta_{0k}) k \int_0^1 t^{k-1}
(f\circ\Psi^{-1})(c,t\mu) \, dt \otimes \eta
\\
&\phantom{=} - (1 - \delta_{0k})((\h_U\partial (f\otimes
\eta))\circ \Psi^{-1})(c,\mu) + (1 - \delta_{0k}) k \int_0^1 t^{k-1} (f \circ
\Psi^{-1})(c,t\mu)
\, dt \otimes \eta \\
&= (f \otimes\eta)\circ\Psi^{-1}(c,\mu) - (f \otimes \eta) \circ
\Psi^{-1}(c,0) \delta_{0k} - (1 - \delta_{0k})((\h_U\partial
(f\otimes \eta))\circ \Psi^{-1})(c,\mu) \ensuremath{\text{.}}
\end{align*}
Here the third step uses that $\Ins{e^\alpha}$ is a
super-derivation, the following step applies the chain rule, the
next equality uses the product rule twice, and in the penultimate
step we exploited the identity $e_\alpha \wedge
\Ins{e^\alpha} \xi = k \xi$, which holds for all $k \in
\mathbb{N}$ and all $\xi \in \bigwedge\nolimits^k \lieAlgebra$.
The claim follows immediately from the above computation. If now
$H$ is a Lie group as in the hypotheses, note that for all
$(c,\mu) \in C \times \lieAlgebra^*$ and $h \in H$ the
equation
\begin{align*}
(h(\partial_{e^\alpha}(f \circ \Psi^{-1})))(c,\mu)
&= \partial_{e^\alpha}(f \circ \Psi^{-1})(h^{-1} (c,\mu))\\ &=
\lim_{s \to 0} \frac{f \circ \Psi^{-1}(h^{-1} c,h^{-1}\mu + s e^\alpha) - f
\circ
\Psi^{-1}(h^{-1}c,h^{-1}\mu)}{s} \\
&= \lim_{s \to 0}\frac{(h (f \circ \Psi^{-1}))(c,\mu + s
he^\alpha) - (h (f\circ\Psi^{-1}))(c,\mu)}{s} \\
&= (\partial_{h e^\alpha}(h(f\circ \Psi^{-1})))(c,\mu)
\end{align*}
holds. Furthermore, $\dPaar{h e_\alpha}{h e^\beta} = \dPaar{h
h^{-1} e_\alpha}{e^\beta} = \delta_\alpha^\beta$, i.e.\ $\{h
e_\alpha\}$ and $\{h e^\alpha\}$ are mutually dual bases of
$\lieAlgebra$ and $\lieAlgebra^*$. With these remarks the
asserted invariance properties are clear.
\end{proof}
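In the simplest model case (assumed here for illustration, not taken from the text: $M = U = \mathbb{R}$, $C = \{0\}$, $\dim\lieAlgebra = 1$, $J(x) = x$, $\Psi = \operatorname{id}$) the degree-zero identity \eqref{eq:HomotopienInDerTubenumgebung3} reads $f(0) + x\int_0^1 f'(tx)\,dt = f(x)$, which can be checked symbolically:

```python
import sympy as sp

# Model case: M = U = R, C = {0}, dim g = 1, J(x) = x, Psi = id.
# Then (h_U f)(x) = ∫_0^1 f'(t x) dt  (coefficient of the degree-1 output),
# (∂_U h_U f)(x) = x · ∫_0^1 f'(t x) dt, and r^* i^* f = f(0).
x, t = sp.symbols('x t')
f = x**3 + 2*x + 5                      # a polynomial test function

h_f = sp.integrate(sp.diff(f, x).subs(x, t * x), (t, 0, 1))
lhs = f.subs(x, 0) + x * h_f            # r^* i^* f + ∂_U h_U f
print(sp.simplify(lhs - f))             # → 0
```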
The following technical lemma will prove useful for globalising the
homotopy constructed on a tubular neighbourhood in Lemma
\ref{lem:HomotopienInDerTubenumgebung}.
\begin{lemma}
\label{lem:GlobalisierungsAbbildung}
Let $W \subset M\setminus C$ be open. Then there is a smooth map
$\xi \colon W \to \lieAlgebra$ with
\begin{align}
\label{eq:GlobalisierungsAbbildung}
\dPaar{J\at{W}}{\xi} = 1 \ensuremath{\text{.}}
\end{align}
Let $H$ be a Lie group acting properly on $M$, let $W$ be
$H$-invariant and $J$ $H$-equivariant, and let
$\lieAlgebra$ carry a linear $H$-action. Then
$\xi$ can also be chosen $H$-equivariant.
\end{lemma}
\begin{proof}
We directly treat the case with group action; the other case follows
trivially with $H = \{e\}$. Let $x \in W$ be arbitrary. First we
construct a smooth $H$-equivariant map
$\eta^{[x]} \colon H\cdot x \to \lieAlgebra$ with the property
$\dPaar{J\at{H\cdot x}}{\eta^{[x]}} = 1$, as follows. Since the
$H$-action is proper, the stabiliser group $H_x$ of $x$ is
compact, so there exists an $H_x$-invariant scalar product
$\langle \cdot , \cdot \rangle_x$ on $\lieAlgebra$ with induced norm
$|\cdot|_x$ (cf.\ Lemma \ref{kor:invarianteFasermetrik}). For $\nu
\in \lieAlgebra^*$ let $\nu^{(x)}$ denote the unique element of
$\lieAlgebra$ with $\langle \nu^{(x)}, \cdot \rangle_x =
\nu$. For $h \in H_x$ we then have ${J(x)}^{(x)} = {J(hx)}^{(x)} =
h{J(x)}^{(x)}$, since, as one easily checks, for every
$\rho \in \lieAlgebra$
\begin{align*}
\langle {J(h x)}^{(x)}, \rho \rangle_x = \dPaar{J(hx)}{\rho} &=
\dPaar{h J(x)}{\rho} \\ &=
\dPaar{J(x)}{h^{-1} \rho} = \langle {J(x)}^{(x)}, h^{-1} \rho
\rangle_x = \langle h {J(x)}^{(x)},\rho \rangle_x \ensuremath{\text{.}}
\end{align*}
If $h,h' \in H$ satisfy $hx = h'x$, then clearly $\tilde h := h^{-1}
h' \in H_x$, and it follows that $\tilde h\, {J(x)}^{(x)} = {J(x)}^{(x)}$, i.e.\
$h\, {J(x)}^{(x)} = h'\, {J(x)}^{(x)}$. Hence the smooth map
$H \to \lieAlgebra$, $h \mapsto h
{J(x)}^{(x)}/|{J(x)}^{(x)}|_x^2$ descends to the smooth map $H/H_x \to
\lieAlgebra$, $[h] \mapsto h {J(x)}^{(x)}/|{J(x)}^{(x)}|_x^2$.
Since, as is well known (cf.\ \cite[Prop. 3.3.24]{waldmann:2007a}),
$H/H_x \to H\cdot x$, $[h] \mapsto hx$ is a diffeomorphism,
composition shows that $\eta^{[x]} \colon H\cdot x \to
\lieAlgebra$, $hx \mapsto h\, {J(x)}^{(x)}/|{J(x)}^{(x)}|_x^2$ is a
well-defined smooth map; the square of the norm ensures
$\dPaar{J(hx)}{\eta^{[x]}(hx)} =
\dPaar{J(x)}{{J(x)}^{(x)}}/|{J(x)}^{(x)}|_x^2 = 1$. By construction,
$\eta^{[x]}$ is $H$-equivariant. By Remark
\ref{bem:geometrischeProlongation} we can now choose an
$H$-invariant open neighbourhood $\tilde{U}^{[x]}$ of $H \cdot x$ in
$M$ together with an $H$-equivariant geometric prolongation
$\prol^{[x]} \colon C^\infty(H \cdot x) \to C^\infty(M)$, applied
componentwise to the $\lieAlgebra$-valued map $\eta^{[x]}$. We then
define $\xi^{[x]} := \prol^{[x]} \eta^{[x]}$. By continuity and
invariance there is an open $H$-invariant neighbourhood
$U^{[x]} \subset \tilde{U}^{[x]}$ of $H\cdot x$ with
$\dPaar{J}{\xi^{[x]}}\at{U^{[x]}} > 0$. Hence ${\xi^{[x]}}'
\colon U^{[x]} \to \lieAlgebra$, $p \mapsto
\frac{\xi^{[x]}(p)}{\dPaar{J(p)}{\xi^{[x]}(p)}}$ is a well-defined
smooth $H$-equivariant map with
$\dPaar{J\at{U^{[x]}}}{{\xi^{[x]}}'} = 1$. We then choose a smooth,
$H$-invariant, locally finite partition of unity $\{\chi_x\}_{x \in
W}$ subordinate to the open $H$-invariant cover
$\{U^{[x]}\}_{x\in W}$ (cf.\ Theorem
\ref{satz:inv_zerlegung_der_eins}) and set $\xi := \sum_{x\in
W}\chi_x {\xi^{[x]}}'$. This obviously has the desired property
$\dPaar{J\at{W}}{\xi} = 1$.
\end{proof}
\begin{korollar}
\label{kor:XiMitPsi}
Assume the hypotheses of Lemma
\ref{lem:GlobalisierungsAbbildung}, and let in addition a smooth
function $\psi_W \colon M \to \mathbb{R}$ with $\supp \psi_W \subset W$ be
given. Then there is a smooth map $\xi \colon M \to \lieAlgebra$ with
$\supp \xi \subset W$ and $\dPaar{J}{\xi} = \psi_W$. If $H$ is a
Lie group as in Lemma~\ref{lem:GlobalisierungsAbbildung} and $W$
as well as $\psi_W$ are $H$-invariant, then $\xi$ can be chosen $H$-equivariant.
\end{korollar}
\begin{proof}
By Lemma \ref{lem:GlobalisierungsAbbildung} there is a smooth
$\tilde \xi \colon W \to \lieAlgebra$ with $\dPaar{J\at{W}}{\tilde \xi} =
1$. Then define $\xi := \psi_W \tilde \xi \colon M \to
\lieAlgebra$, extended by $0$ outside $\supp \psi_W$; this is smooth
since $\supp \psi_W \subset W$.
\end{proof}
\begin{proposition}
\label{prop:HomotopieEigenschaftVonXiW}
Let $W \subset M\setminus C$ be open, let $\psi_W \in C^\infty(M)$ with $\supp
\psi_W \subset W$, and let $\xi \colon M \to \lieAlgebra$ be a smooth
map with $\dPaar{J}{ \xi} = \psi_W$ and $\supp \xi \subset
W$. If $\{e_\alpha\}$ is a basis of $\lieAlgebra$, there are uniquely
determined functions $\xi^\alpha \in C^\infty(M)$ with $\xi =
\xi^\alpha e_\alpha$. Let $\h_W \colon C^\infty(M) \otimes
\bigwedge\nolimits^{\bullet} \lieAlgebra \to C^\infty(M) \otimes
\bigwedge\nolimits^{\bullet +1}\lieAlgebra$ be defined, for all $k \in \mathbb{N}$ and
$f\otimes\eta \in C^\infty(M)\otimes \bigwedge\nolimits^k \lieAlgebra$, by
$\h_W(f\otimes \eta) := \xi^\alpha f\otimes e_\alpha \wedge \eta$ and linear
extension. This definition is independent of the choice of the
basis $\{e_\alpha\}$, and
\begin{align}
\label{eq:HomotopieEigenschaftVonXiW1}
\partial \h_W x + (1 - \delta_{0k}) \h_W \partial x = \psi_W x \quad \forall x \in
C^\infty(M) \otimes \bigwedge\nolimits^k \lieAlgebra, k \in \mathbb{N} \ensuremath{\text{.}}
\end{align}
Let $M$ carry the action of a Lie group $H$ that also acts linearly on
$\lieAlgebra$. If $\psi_W$ is $H$-invariant and $\xi$ is
$H$-equivariant, then $\h_W$ is $H$-equivariant as well.
\end{proposition}
\begin{proof}
Let $k \in \mathbb{N}$ and $f \otimes \eta \in C^\infty(M) \otimes \bigwedge\nolimits^k \lieAlgebra$; then
\begin{align*}
\lefteqn{\partial \h_W (f \otimes \eta) + (1 - \delta_{0k})
\h_W \partial
(f \otimes \eta)}\\
&= \partial(\xi^\alpha f \otimes e_\alpha \wedge \eta) + (1 -
\delta_{0k})\h_W(f J_\alpha \otimes \Ins{e^\alpha} \eta) \\
&= J_\beta \xi^\alpha f \otimes \Ins{e^\beta}(e_\alpha \wedge
\eta) + (1 - \delta_{0k}) f J_\alpha \xi^\beta \otimes (e_\beta
\wedge
\Ins{e^\alpha}\eta) \\
&= J_\alpha\xi^\alpha f\otimes \eta -
(1 - \delta_{0k})J_\beta\xi^\alpha f\otimes e_\alpha \wedge\Ins{ e^\beta} \eta +
(1 - \delta_{0k}) f J_\alpha \xi^\beta \otimes (e_\beta
\wedge
\Ins{e^\alpha}\eta) \\
&= \dPaar{J}{\xi}(f \otimes \eta) \\
&= \psi_W (f\otimes \eta) \ensuremath{\text{.}}
\end{align*}
To show the equivariance properties, let $h \in H$, $k \in
\mathbb{N}$ and $f\otimes \eta \in C^\infty(M) \otimes \bigwedge\nolimits^k
\lieAlgebra$; then:
\begin{align*}
\h_W(h(f \otimes \eta)) &= \h_W(hf \otimes h \eta) = \xi^\alpha hf
\otimes e_\alpha \wedge h\eta \\
&= h(h^{-1} \xi^\alpha f \otimes h^{-1} e_\alpha \wedge \eta) = h
\h_W(f \otimes \eta) \ensuremath{\text{.}}
\end{align*}
\end{proof}
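At any fixed point $p$, the homotopy identity \eqref{eq:HomotopieEigenschaftVonXiW1} is a purely linear-algebraic statement about the numbers $J_\alpha(p)$ and $\xi^\alpha(p)$ with $J_\alpha(p)\xi^\alpha(p) = \psi_W(p)$. The following minimal Python sketch (purely illustrative and not part of the construction above; the dimension and the random coefficient values are assumptions) checks the identity on every basis monomial of the exterior algebra over a three-dimensional $\lieAlgebra$:

```python
import itertools
import random

random.seed(0)
n = 3                                               # dim of the toy Lie algebra
J = [random.uniform(-1.0, 1.0) for _ in range(n)]   # values J_alpha(p)
xi = [random.uniform(-1.0, 1.0) for _ in range(n)]  # values xi^alpha(p)
psi_W = sum(J[a] * xi[a] for a in range(n))         # <J(p), xi(p)> = psi_W(p)

# A multivector is a dict {sorted index tuple: coefficient}.
def wedge(a, mono):
    """e_a wedged onto a sorted monomial: (sign, new monomial), or None."""
    if a in mono:
        return None
    pos = sum(1 for i in mono if i < a)
    return (-1) ** pos, tuple(sorted(mono + (a,)))

def contract(b, mono):
    """Insertion i_{e^b} on a sorted monomial: (sign, new monomial), or None."""
    if b not in mono:
        return None
    return (-1) ** mono.index(b), tuple(i for i in mono if i != b)

def boundary(x):
    """Koszul operator: f (x) eta  |->  f J_beta (x) i_{e^beta} eta."""
    out = {}
    for mono, coeff in x.items():
        for b in range(n):
            r = contract(b, mono)
            if r:
                s, m = r
                out[m] = out.get(m, 0.0) + s * coeff * J[b]
    return out

def h_W(x):
    """Homotopy: f (x) eta  |->  xi^alpha f (x) e_alpha ^ eta."""
    out = {}
    for mono, coeff in x.items():
        for a in range(n):
            r = wedge(a, mono)
            if r:
                s, m = r
                out[m] = out.get(m, 0.0) + s * coeff * xi[a]
    return out

# Check: boundary h_W x + (1 - delta_{0k}) h_W boundary x = psi_W x.
for k in range(n + 1):
    for mono in itertools.combinations(range(n), k):
        x = {mono: 1.0}
        lhs = boundary(h_W(x))
        for m, coeff in h_W(boundary(x)).items():   # empty for k = 0
            lhs[m] = lhs.get(m, 0.0) + coeff
        assert abs(lhs.get(mono, 0.0) - psi_W) < 1e-12
        assert all(abs(coeff) < 1e-12 for m, coeff in lhs.items() if m != mono)
print("homotopy identity verified pointwise")
```

Here `boundary` models the Koszul operator $\partial$ and `h_W` the homotopy, with the standard sign conventions for insertion and contraction on sorted wedge monomials; the identity then follows from the anticommutation $\Ins{e^\beta}(e_\alpha \wedge \eta) = \delta_{\alpha\beta}\eta - e_\alpha \wedge \Ins{e^\beta}\eta$.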
We now come to the announced existence theorem for a global homotopy.
\begin{satz}[Homotopy]
\label{satz:globalisierteHomotopie}
Let $\Psi \colon U \to V \subset C \times \lieAlgebra^*$ be a
\tn{good} tube map with associated retraction $r \colon U \to
C$, and let $\h_U \colon C^\infty(U) \otimes \bigwedge\nolimits^\bullet \lieAlgebra
\to C^\infty(U) \otimes \bigwedge\nolimits^{\bullet + 1} \lieAlgebra$ be as in
Lemma \ref{lem:HomotopienInDerTubenumgebung}. Further, let $O$ be an
open neighbourhood of $C$ with $\abschluss{O} \subset U$, set $W :=
M\setminus \abschluss{O}$, and let $\{\psi_U,\psi_W\}$ be a smooth
partition of unity subordinate to the open cover $\{U,W\}$. Then
there is a smooth map $\xi \colon M \to \lieAlgebra$ with
$\supp \xi \subset W$, $\dPaar{J}{\xi} = \psi_W$ and $\xi\at{\supp \psi_W
\cap U} = -\h_U(\psi_U\at{U})\at{\supp \psi_W \cap U}$. Further, let
$\h_W$ be the map induced by it as in Proposition~\ref{prop:HomotopieEigenschaftVonXiW}. Finally, set $\prol :=
\prol_{M,O,r,\psi_U}$. Then the map $\h \colon
C^\infty(M) \otimes \bigwedge\nolimits^\bullet \lieAlgebra \to C^\infty(M)
\otimes \bigwedge\nolimits^{\bullet + 1} \lieAlgebra$, which is defined for $k \in
\mathbb{N}$ and $f \otimes \eta \in C^\infty(M) \otimes \bigwedge\nolimits^k
\lieAlgebra$ by
\begin{align}
\h(f \otimes \eta):= \psi_U \h_U(f\at{U} \otimes \eta) + \h_W(f
\otimes \eta)
\end{align}
and linear extension,
satisfies the following equations.
\begin{align}
\label{eq:globalisiertHomotopie1}
\h\partial x + \partial \h x = x \quad \text{for $x \in
C^\infty(M) \otimes \bigwedge\nolimits^k \lieAlgebra$ and $k \geq 1$}\ensuremath{\text{,}}
\end{align}
\begin{align}
\label{eq:globalisiertHomotopie2}
\prol \ensuremath{\imath}^* f + \partial \h f = f \quad \text{for $f \in
C^\infty(M)$}
\end{align}
and
\begin{align}
\label{eq:globalisierteHomotopie3}
\h \prol f = 0 \quad \text{for $f \in C^\infty(C)$}\ensuremath{\text{.}}
\end{align}
If $M$ carries an action of a Lie group $H$ under which $C$ is stable, and
$\lieAlgebra$ carries a linear $H$-action, then $\psi_U$,
$\psi_W$ can be chosen $H$-invariant and $\xi$ $H$-equivariant. Likewise,
by Lemma \ref{lem:HomotopienInDerTubenumgebung} $\h_U$
can be chosen $H$-equivariant, and by Proposition
\ref{prop:HomotopieEigenschaftVonXiW} so can $\h_W$. Under these
assumptions $\h$ is $H$-equivariant as well.
\end{satz}
\begin{proof}
Let $\hat \xi \colon U \cap W \to \lieAlgebra$ be defined by $\hat
\xi = -\h_U(\psi_U\at{U})\at{U \cap W}$. Then clearly, for
$p \in U \cap W$,
\begin{align*}
\dPaar{J(p)}{\hat \xi(p)} = J_\alpha(p) \hat\xi^\alpha(p) &= -
(\partial \h_U(\psi_U\at{U}))(p) \\
&= -(\psi_U(p) - (r^*\ensuremath{\imath^*} \psi_U\at{U})(p)) = 1 - \psi_U(p) =
\psi_W(p) \ensuremath{\text{.}}
\end{align*}
Now, using Corollary \ref{kor:XiMitPsi}, we choose a smooth $\xi' \colon M \to \lieAlgebra$ with
$\supp \xi' \subset W$ and $\dPaar{J}{\xi'} = \psi_W$. Further, let $\{\chi_1,\chi_2\}$ be a
partition of unity of $W$ with $\supp \chi_1 \subset U \cap W$ and
$\supp \chi_2 \subset W \setminus \supp \psi_U$; then $\xi :=
\chi_1 \hat \xi + \chi_2 \xi'$ satisfies the desired properties.
For the equivariant situation, note that for $h \in H$ and $p \in U$
\begin{align*}
(\h_U(\psi_U))(hp) &= \int_0^1(\partial_{e^\alpha}(\psi_U \circ
\Psi^{-1}))(r(hp),t J(hp)) \, dt \, e_\alpha \\
&= \int_0^1(\partial_{e^\alpha}(\psi_U \circ
\Psi^{-1}))(h(r(p),tJ(p))) \, dt \, e_\alpha \\
&= \int_0^1(h^{-1}(\partial_{e^\alpha}(\psi_U \circ \Psi^{-1})))(r(p),tJ(p))
\, dt \, e_\alpha \\
&= \int_0^1 \partial_{h^{-1} e^\alpha}(h^{-1} (\psi_U \circ
\Psi^{-1}))(r(p),tJ(p)) \, dt \, e_\alpha \\
&= h\left(\int_0^1 \partial_{h^{-1} e^\alpha}(\psi_U \circ \Psi^{-1})(r(p),tJ(p)) \,
dt \, h^{-1} e_\alpha\right) \\
&= h(\h_U(\psi_U)(p)),
\end{align*}
so $\hat \xi$ is $H$-equivariant. In the second-to-last
step the invariance of $\psi_U$ and the equivariance of $\Psi$
were used:
\begin{align*}
h^{-1} (\psi_U \circ \Psi^{-1})(p) = \psi_U \circ \Psi^{-1}(hp) =
\psi_U (h \Psi^{-1}(p)) = \psi_U \circ \Psi^{-1}(p) \quad \forall
p \in V
\end{align*}
For $k >
0$ and $x \in C^\infty(M) \otimes \bigwedge\nolimits^k \lieAlgebra$ we have
\begin{align*}
\h \partial x + \partial \h x = \psi_U \h_U (\partial x)\at{U} +
\h_W \partial x + \psi_U \partial \h_U x\at{U} + \partial \h_W x = \psi_U x + \psi_W
x = x
\end{align*}
and for $f \in C^\infty(M)$
\begin{align*}
\prol \ensuremath{\imath}^* f + \partial \h f = \psi_U r^* \ensuremath{\imath}^* f +
\psi_U \partial \h_U f\at{U} + \partial \h_W f = \psi_U f + \psi_W
f = f \ensuremath{\text{.}}
\end{align*}
For the last part of the statement, note that
$\dPaar{J\at{W}}{\xi} = \psi_W$ already implies $\supp \psi_W \subset \supp
\xi$; i.e.\ for $p \notin \supp \psi_U \cap \supp \xi$ we have $p
\notin \supp \psi_U \cap \supp \psi_W$, and thus obviously
$\psi_U(p)\h_U(\psi_U)(p) = 0$. One then computes that for $p \in W$
\begin{align*}
\h_W(\prol f)(p) &= \xi^\alpha(p) \prol f(p) e_\alpha \\
&=
\begin{cases}
\psi_U(p) f(r(p)) \xi(p) & \text{for } p \in
\supp \psi_U \cap \supp \xi \subset U \\
0 &\text{otherwise}
\end{cases} \\
&=
\begin{cases}
- \psi_U(p)f(r(p)) \h_U(\psi_U)(p) &\text{for } p \in \supp \psi_U
\cap \supp \xi \\
0 &\text{otherwise}
\end{cases}\ensuremath{\text{.}}
\end{align*}
On the other hand,
\begin{align*}
(\h_U\prol f)(p) &= \int_0^1 \partial^\alpha((\psi_U r^*f)\circ
\Psi^{-1})(r(p),tJ(p)) \, dt\, e_\alpha \\
&= \int_0^1 \partial^\alpha((\psi_U \circ \Psi^{-1})f\circ r \circ
\Psi^{-1})(r(p),tJ(p)) \, dt \, e_\alpha \\
&= \int_0^1 f(r(p)) \partial^\alpha(\psi_U \circ
\Psi^{-1})(r(p),tJ(p)) \, dt\, e_\alpha \\
&= f(r(p)) \h_U(\psi_U)(p) \ensuremath{\text{.}}
\end{align*}
It is thus clear that
\begin{align*}
\h \prol f = 0
\end{align*}
holds.
\end{proof}
We have thus constructed an explicit homotopy for the Koszul complex;
in particular, we now know that $\kIdeal = \ker \ensuremath{\imath^*} =
\im \kkoszul[1] = \langle J \rangle$ holds. Since we will later make
explicit use of the geometric input data of this homotopy,
the following definition seems appropriate.
\begin{definition}[Geometric homotopy data]
\label{def:GeometrischeHomotopieDaten}
We also call the tuple $\mathsf{GH} := (\Psi \colon U \to V, O \subset U,
\psi_U,\psi_W,\xi)$ from Theorem \ref{satz:globalisierteHomotopie}
\neuerBegriff{geometric homotopy data} (for $C$). If $H$ is a
Lie group as in Theorem \ref{satz:globalisierteHomotopie}, and
$\Psi$ and $\xi$ are chosen $H$-equivariant while $O$, $U$, $\psi_U$, $\psi_W$
are chosen $H$-invariant, we call $\mathsf{GH}$ $H$-invariant
geometric homotopy data (for $C$).
\end{definition}
\begin{bemerkung}
\label{bem:induzierteDingens}
As we have seen in this section, the geometric homotopy data induce a
contracting chain homotopy $\h$ for the Koszul complex as well as a
geometric prolongation map $\prol$ such that $\h \prol = 0$. If $H$ is
a Lie group as in Theorem \ref{satz:globalisierteHomotopie} and
$J$ is $H$-equivariant, we have seen that it is always possible
to choose $H$-invariant homotopy data. In particular, the
induced maps $\h$ and $\prol$ are then $H$-equivariant.
\end{bemerkung}
\subsection{The Chevalley-Eilenberg complex}
\label{sec:ChevalleyEilenberg}
To conclude this section, we want to show that the Koszul complex
introduced above can be viewed as a special case of the
Chevalley-Eilenberg complex. This point of view will prove helpful
for the construction of a star product on the reduced phase space
presented in the next chapter. First, however, we give a general
definition of the Chevalley-Eilenberg complex and show that it is
indeed a complex. The definition we use can be found, e.g., in the
book by Loday \cite[Def. 10.1.3]{loday:1998a}, with the small
difference that we do not work over a field as Loday does, but
slightly more generally over a commutative ring $R$ with
$\mathbb{Q} \subset R$. Before coming to the definition, let us
briefly recall the notion of a Lie algebra representation of a
Lie algebra $\lieAlgebra$ over $R$ on an $R$-module $V$. By this
we mean an $R$-linear map $\rho \colon \lieAlgebra \to
\operatorname{End}(V)$ such that for all
$\xi,\eta \in \lieAlgebra$ the equation $\rho([\xi,\eta]) = \rho(\xi)
\circ \rho(\eta) - \rho(\eta)\circ \rho(\xi)$ holds.
Thus $\rho$ is nothing but an
$R$-Lie algebra homomorphism from $(\lieAlgebra,[\cdot,\cdot])$ into
the $R$-Lie algebra $\operatorname{End}(V)$ of $R$-linear
endomorphisms of $V$, equipped with the commutator as
Lie bracket.
\begin{definition}[Chevalley-Eilenberg complex]
\label{def:Chevalley-Eilenberg-Komplex}
Let $\lieAlgebra$ be a Lie algebra over $R$, $V$ a module over $R$
and $\rho \colon \lieAlgebra \to \operatorname{End}(V)$ a
Lie algebra representation. Then the
sequence of maps \def\tA[#1]{A_{#1}}
\begin{equation}
\begin{tikzpicture}[baseline=(current
bounding box.center),description/.style={fill=white,inner sep=2pt}]
\matrix (m) [matrix of math nodes, row sep=3.0em, column
sep=3.5em, text height=1.5ex, text depth=0.25ex]
{
V & V \otimes \bigwedge\nolimits^1
\lieAlgebra & V
\otimes \bigwedge\nolimits^2 \lieAlgebra & \dots \\
}; %
\path[<-] (m-1-1) edge node[auto]{$\chevalley$}(m-1-2); %
\path[<-] (m-1-2) edge node[auto]{$\chevalley$}(m-1-3); %
\path[<-] (m-1-3) edge node[auto]{$\chevalley$}(m-1-4); %
\end{tikzpicture},
\end{equation}
in which the map $\chevalley$ is given, for all $n \in \mathbb{N}$ and $v
\otimes \xi_1 \wedge \dots \wedge \xi_n \in V \otimes \bigwedge\nolimits^n \lieAlgebra$, by
\begin{multline}
\label{eq:DefinitionChevalleyEilenbergOperator}
\chevalley(v \otimes \xi_1 \wedge \dots \wedge \xi_n) := \sum_{1 \leq
j \leq n} (-1)^{j+1} \rho(\xi_j)v \otimes \xi_1 \wedge \dots \wedge
\widehat{\xi_j} \wedge \dots \wedge \xi_n \\
+ \sum_{1 \leq i < j \leq n}(-1)^{i + j - 1} v \otimes [\xi_i,\xi_j]
\wedge \xi_1 \wedge \dots \wedge \widehat{\xi_i} \wedge \dots \wedge
\widehat{\xi_j} \wedge \dots \wedge \xi_n
\end{multline}
as well as by linear extension, is called the
\neuerBegriff{Chevalley-Eilenberg complex} of the Lie algebra representation
$\rho$. Here $\widehat{\xi_i}$ denotes omission of the variable
$\xi_i$. The boundary operator $\chevalley$ is called the
\neuerBegriff{Chevalley-Eilenberg operator} (of the Lie algebra
representation $\rho$).
\end{definition}
To forestall possible confusion, let us remark at this point
that the Chevalley-Eilenberg complex considered in the paper by
Bordemann, Herbig and Waldmann
\cite{bordemann.herbig.waldmann:2000a} is the cochain complex dual
to the chain complex above.
Writing $\xi \wirk v
:= \rho(\xi)(v)$ and $\xi \wirk \eta := [\xi,\eta]$ for $\xi,\eta \in \lieAlgebra$ and $v \in V$, as well as
$(\xi_0,\xi_1,\dots,\xi_n) := \xi_0 \otimes \xi_1 \wedge \dots \wedge
\xi_n \in V \otimes \bigwedge\nolimits^n \lieAlgebra$, one can
obviously write $\chevalley$ in the following somewhat more homogeneous form.
\begin{align}
\label{eq:HomogeneFormVonChevalleyEilenberg}
\chevalley(\xi_0,\dots,\xi_n) = \sum_{0 \leq i < j \leq n}
(-1)^{j+1}(\xi_0,\xi_1,\dots,\xi_{i-1},\xi_j \wirk \xi_i,
\xi_{i+1},\dots,\widehat{\xi_j},\dots,\xi_n) \ensuremath{\text{.}}
\end{align}
For $n \in \mathbb{N}$, $(\xi_0,\dots,\xi_n) \in V \otimes \bigwedge\nolimits^n
\lieAlgebra$ and $\xi \in \lieAlgebra$ we set
\begin{align}
\label{eq:WirkungAufDachprodukt}
\xi \wirk (\xi_0,\dots,\xi_n) := \sum_{i=0}^n (\xi_0,\dots,\xi \wirk \xi_i,\dots,\xi_n)\ensuremath{\text{.}}
\end{align}
This obviously defines a Lie algebra action $\wirk$ on $V
\otimes \bigwedge\nolimits^n \lieAlgebra$; i.e.\ in particular, for
$\xi,\xi' \in \lieAlgebra$ and $\gamma \in V \otimes \bigwedge\nolimits^n
\lieAlgebra$, the equation
\begin{align}
\label{eq:LieAlegrbaWirkungseigenschaft}
(\xi \wirk \xi') \wirk \gamma = [\xi,\xi'] \wirk \gamma = \xi \wirk
(\xi' \wirk \gamma) - \xi' \wirk (\xi \wirk \gamma) \ensuremath{\text{.}}
\end{align}
With this notation we can now show without much effort that the
Chevalley-Eilenberg complex is indeed a complex,
i.e.\ that $\chevalley^2 = 0$ holds.
\begin{proposition}
\label{prop:ChevalleyQuatratGleichNull}
\begin{propositionEnum}
\item %
\label{item:ChevalleyReinziehen}
For every $n\in \mathbb{N}$, every $\alpha = (\xi_0,\dots,\xi_n)
\in V \otimes \bigwedge\nolimits^n \lieAlgebra$ and every $\xi \in
\lieAlgebra$ we have
\begin{align}
\label{eq:ChevalleyReinziehen}
\chevalley (\xi \wirk \alpha) = \xi \wirk \chevalley \alpha \ensuremath{\text{,}}
\end{align}
i.e.\ $\chevalley$ is $\lieAlgebra$-equivariant.
\item %
\label{item:ChevalleyQuatratGleichNull}
We have
\begin{align}
\label{eq:ChevalleyQuatratGleichNull}
\chevalley^2 = 0 \ensuremath{\text{.}}
\end{align}
\end{propositionEnum}
\end{proposition}
\begin{proof}
In this proof we closely follow Loday's argument,
cf.\ \cite[Lem. 10.6.3]{loday:1998a}.
\begin{beweisEnum}
\item %
We prove the statement by induction. For $n = 0$
it is trivially satisfied. Assume that it holds for
a fixed $n-1 \geq 0$; we show that it then also holds for
$n$. To this end, let $\xi \in \lieAlgebra$ and $\alpha :=
(\beta,y)$ with $\beta := (\xi_0,\dots,\xi_{n-1}) \in V \otimes
\bigwedge\nolimits^{n-1}\lieAlgebra$ and $y := \xi_n \in \lieAlgebra$
be given.
By the definitions of $\chevalley$ and $\wirk$, the following
compatibility relations obviously hold.
\begin{align}
\label{eq:ChevalleyMitKlammer}
\chevalley(\beta,y) = (\chevalley \beta,y) + (-1)^{n+1} y \wirk
\beta \tag{$*$},
\end{align}
\begin{align}
\label{eq:WirkungMitKlammer}
\xi \wirk (\beta,y) = (\xi \wirk \beta,y) + (\beta,\xi \wirk y)\ensuremath{\text{.}}
\tag{$**$}
\end{align}
With these we see
\begin{align*}
\chevalley(\xi \wirk \alpha)%
&= \chevalley((\xi \wirk \beta,y) + (\beta,\xi \wirk y))
&&\eAnn{by \eqref{eq:WirkungMitKlammer}}\\
&= (\chevalley(\xi \wirk \beta),y) + (-1)^{n+1} y\wirk (\xi
\wirk
\beta) &&\eAnn{by \eqref{eq:ChevalleyMitKlammer}}\\
&\phantom{=}+ (\chevalley \beta,\xi \wirk y) + (-1)^{n+1}(\xi
\wirk y) \wirk \beta
\\
&= (\xi \wirk \chevalley \beta,y) + (\chevalley \beta, \xi
\wirk y) &&\eAnn{induction hypothesis}\\
&\phantom{=}+ (-1)^{n+1}((\xi \wirk y) \wirk \beta + y \wirk
(\xi
\wirk \beta))\\
&= \xi \wirk (\chevalley \beta,y) + (-1)^{n+1}\xi \wirk (y
\wirk
\beta) &&\eAnn{by \eqref{eq:WirkungMitKlammer} and \eqref{eq:LieAlegrbaWirkungseigenschaft}} \\
&= \xi \wirk (\chevalley(\beta,y)) &&\eAnn{by \eqref{eq:ChevalleyMitKlammer}}\\
&= \xi \wirk \chevalley \alpha \ensuremath{\text{.}}
\end{align*}
\item %
We show by induction that $\chevalley^2 \alpha = 0$
for all $\alpha \in V \otimes \bigwedge\nolimits^n \lieAlgebra$. For $n =
0$ the statement is trivial. We now assume that it holds for $n-1$
and show that it then also holds for $n \in
\mathbb{N}$. To this end, let again
$\alpha := (\beta,y)$ with $\beta := (\xi_0,\dots,\xi_{n-1}) \in V
\otimes \bigwedge\nolimits^{n-1}\lieAlgebra$ and $y := \xi_n \in
\lieAlgebra$ be given.
\begin{align*}
\chevalley^2 \alpha %
&= \chevalley\chevalley(\beta,y) \\
&= \chevalley((\chevalley \beta, y) + (-1)^{n+1} y \wirk \beta)
&&\eAnn{by \eqref{eq:ChevalleyMitKlammer}}\\
&= (\chevalley^2 \beta,y) + (-1)^{n} y \wirk \chevalley
\beta &&\eAnn{by
\eqref{eq:ChevalleyMitKlammer} and \eqref{eq:ChevalleyReinziehen}}\\
&\phantom{=}+ (-1)^{n+1} y \wirk \chevalley \beta \\
&= (\chevalley^2\beta,y) = 0
&&\eAnn{induction hypothesis} \ensuremath{\text{.}}
\end{align*}
\end{beweisEnum}
\end{proof}
We now come to a local formula for the
Chevalley-Eilenberg operator. It will later prove helpful for
writing the quantum Koszul operator in the form in which it
appears in the literature.
\begin{proposition}
\label{prop:ChevalleyEilenbergDifferential}
Let the assumptions be as in Definition
\ref{def:Chevalley-Eilenberg-Komplex}, and let the Lie algebra
$\lieAlgebra$ in addition be free and finitely generated with a basis
$\{e_\alpha\}$ and corresponding dual basis $\{e^\alpha\}$. Then for all
$n \in \mathbb{N}$ and $v \otimes \xi \in V \otimes \bigwedge\nolimits^n
\lieAlgebra$ the following equation holds.
\begin{align}
\label{eq:LokaleDarstellungDesChevalleyEilenbergDifferentials}
\chevalley(v \otimes \xi) = \rho(e_\alpha)(v)\otimes \Ins{e^\alpha} \xi -
\frac{1}{2} c_{\alpha \beta}^\gamma v \otimes e_\gamma \wedge
\Ins{e^\alpha} \Ins{e^\beta}\xi \ensuremath{\text{.}}
\end{align}
Here $\{c_{\alpha \beta}^\gamma\}$ are the structure constants of the
Lie algebra $\lieAlgebra$ with respect to the basis $\{e_\alpha\}$, given for
$\alpha,\beta,\gamma \in \{1,\dots,\dim \lieAlgebra\}$ by $c_{\alpha
\beta}^\gamma := \dPaar{e^\gamma}{[e_\alpha,e_\beta]}$.
\end{proposition}
\begin{proof}
The first summand on the right-hand side of equation
\eqref{eq:LokaleDarstellungDesChevalleyEilenbergDifferentials}
obviously coincides with the first summand on the right-hand side of
the defining equation
\eqref{eq:DefinitionChevalleyEilenbergOperator} of $\chevalley$. Now let $\xi
= \xi_1 \wedge \dots \wedge \xi_n$. Then the second summand
obviously equals
\begin{align*}
&\lefteqn{- \frac{1}{2} c_{\alpha \beta}^\gamma v \otimes e_\gamma \wedge
\Ins{e^\alpha} \Ins{e^\beta}\xi} \\
&=-\frac{1}{2}v \otimes
[e_\alpha,e_\beta] \wedge \Ins{e^\alpha}(\sum_{j=1}^n(-1)^{j-1}
\xi_1 \wedge \dots \wedge
\Ins{e^\beta}\xi_j \wedge \dots \xi_n)\\
&=-\frac{1}{2} v \otimes \sum_{j=1}^n(-1)^{j-1} [e_\alpha,\xi_j]
\wedge (\sum_{i = 1}^{j-1} (-1)^{i-1} \xi_1 \wedge \dots \wedge
\Ins{e^\alpha}\xi_i \wedge \dots \wedge
\widehat{\xi_j} \wedge \dots \wedge\xi_n \\
&\phantom{v \otimes \sum_{j=1}^n(-1)^j [e_\alpha,\xi_j] \wedge (}
+ \sum_{i = j+1}^n (-1)^{i} \xi_1 \wedge \dots \wedge
\widehat{\xi_j} \wedge
\dots \wedge \Ins{e^\alpha} \xi_i \wedge \dots \wedge \xi_n) \\
&=-\frac{1}{2} v \otimes \sum_{j=1}^n \sum_{i=1}^{j-1}
(-1)^{j+i} [\xi_i,\xi_j] \wedge \xi_1 \wedge \dots \wedge
\widehat{\xi_i} \wedge \dots
\wedge \widehat{\xi_j} \wedge \dots \wedge \xi_n \\
&\phantom{=}- \frac{1}{2} v\otimes\sum_{j=1}^n \sum_{i=j+1}^n
(-1)^{j+i -1} [\xi_i,\xi_j] \wedge \xi_1 \wedge \dots \wedge
\widehat{\xi_j} \wedge \dots \wedge \widehat{\xi_i} \wedge \dots
\wedge \xi_n\\
&= \sum_{1 \leq i < j \leq n}(-1)^{i + j - 1} v \otimes
[\xi_i,\xi_j] \wedge \xi_1 \wedge \dots \wedge \widehat{\xi_i}
\wedge \dots \wedge \widehat{\xi_j} \wedge \dots \wedge \xi_n
\ensuremath{\text{.}}
\end{align*}
\end{proof}
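The local formula \eqref{eq:LokaleDarstellungDesChevalleyEilenbergDifferentials} also lends itself to a direct numerical sanity check of $\chevalley^2 = 0$. The following Python sketch (illustrative only; the choice $\lieAlgebra = \mathfrak{so}(3)$ with $V = \mathbb{R}^3$ carrying the adjoint representation is an assumption, as are all names) implements the formula on $V \otimes \bigwedge\nolimits^\bullet \lieAlgebra$ and verifies that the operator squares to zero on all basis elements:

```python
import itertools

def c(a, b, g):
    """Structure constants of so(3): [e_a, e_b] = c^g_{ab} e_g (Levi-Civita)."""
    return (b - a) * (g - a) * (g - b) / 2.0

def rho(a, v):
    """Adjoint representation on V = R^3: (rho(e_a)v)^g = c^g_{ab} v^b."""
    return tuple(sum(c(a, b, g) * v[b] for b in range(3)) for g in range(3))

def wedge(a, mono):
    if a in mono:
        return None
    return (-1) ** sum(1 for i in mono if i < a), tuple(sorted(mono + (a,)))

def contract(b, mono):
    if b not in mono:
        return None
    return (-1) ** mono.index(b), tuple(i for i in mono if i != b)

def add(out, mono, vec):
    old = out.setdefault(mono, (0.0, 0.0, 0.0))
    out[mono] = tuple(old[g] + vec[g] for g in range(3))

def CE(x):
    """Chevalley-Eilenberg operator via the local formula."""
    out = {}
    for mono, v in x.items():
        # first summand: rho(e_a) v (x) i_{e^a} xi
        for a in range(3):
            r = contract(a, mono)
            if r:
                s, m = r
                add(out, m, tuple(s * w for w in rho(a, v)))
        # second summand: -1/2 c^g_{ab} v (x) e_g ^ i_{e^a} i_{e^b} xi
        for a in range(3):
            for b in range(3):
                rb = contract(b, mono)
                if not rb:
                    continue
                ra = contract(a, rb[1])
                if not ra:
                    continue
                for g in range(3):
                    w = wedge(g, ra[1])
                    if w:
                        coeff = -0.5 * c(a, b, g) * rb[0] * ra[0] * w[0]
                        add(out, w[1], tuple(coeff * vi for vi in v))
    return out

# CE^2 = 0 on every basis element v (x) e_{i_1} ^ ... ^ e_{i_k}
for k in range(4):
    for mono in itertools.combinations(range(3), k):
        for i in range(3):
            v = tuple(1.0 if g == i else 0.0 for g in range(3))
            y = CE(CE({mono: v}))
            assert all(abs(w) < 1e-12 for vec in y.values() for w in vec)
print("CE^2 = 0 verified for so(3) with the adjoint representation")
```

The helpers model insertion and contraction on sorted wedge monomials with the standard sign conventions, so the check exercises exactly the two summands of the local formula.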
We now come to the already announced interpretation of the
Koszul complex as a Chevalley-Eilenberg complex.
\begin{bemerkung}
\label{bem:KoszulAlsChevalleyEilenbergKlassisch}
If one equips $\lieAlgebra$ with the trivial Lie bracket as its
Lie algebra structure, then $\rho_J \colon \lieAlgebra \to
\operatorname{End}(C^\infty(M))$, $\rho_J(\xi)(f) :=
f\cdot \dPaar{J}{\xi}$ obviously defines a Lie algebra representation. It is
then clear that the Chevalley-Eilenberg complex of $\rho_J$
coincides with the Koszul complex.
\end{bemerkung}
\section{Quantum Koszul reduction}
\label{sec:QuantenKoszul}
In this chapter we want to present the foundations of quantum Koszul
reduction. As already shown, in the classical situation one has
$C^\infty(\Mred) \simeq \kbIdeal/\kIdeal$ as Poisson algebras, where
the Poisson bracket on $\kbIdeal/\kIdeal$ is induced in the obvious
way by the given one on $C^\infty(M)$. The rough idea for
constructing a star product on $C^\infty(\Mred)[[\lambda]]$ is now
the following: for a basis $\{e_\alpha\}$ of $\lieAlgebra$, consider
the $\star$-left ideal $\qIdeal
:= \langle \qJ \rangle_\star := \{f^\alpha \star \dPaar{\qJ}{e_\alpha}
\mid f^\alpha \in C^\infty(M)[[\lambda]]\}$ and the
$\star$-Lie idealizer $\qbIdeal := \{f \in C^\infty(M)[[\lambda]]
\mid [f,f']_\star \in \qIdeal \quad \forall f' \in \qIdeal\}$ of
$\qIdeal$ in $C^\infty(M)[[\lambda]]$, induce an associative product
on $\qbIdeal/\qIdeal$ via $\star$, find a suitable isomorphism from
$\qbIdeal/\qIdeal$ to $C^\infty(\Mred)[[\lambda]]$, and thereby
induce a star product for $(\Mred,\omega_{\mathrm{red}})$. Note
that the definition of $\qIdeal$ is obviously independent of
the choice of the basis $\{e_\alpha\}$. By using geometric
constructions behind these algebraic considerations, one may hope
that this star product is in fact differential. We show
in this chapter that this heuristic can indeed be carried
out.
\subsection{Quantum Koszul complex}
\label{sec:QuantenKoszulKomplex}
\begingroup
\emergencystretch=0.8em
In analogy to the procedure in the classical situation, we look for a
map $\qkoszul \colon C^\infty(M)[[\lambda]] \otimes
\bigwedge\nolimits^{\bullet} \lieAlgebra \to C^\infty(M)[[\lambda]] \otimes
\bigwedge\nolimits^{\bullet - 1} \lieAlgebra $ such that $\qIdeal =
\qkoszul(C^\infty(M)[[\lambda]] \otimes \lieAlgebra)$ holds. For $f \otimes
\xi \in C^\infty(M)[[\lambda]] \otimes \lieAlgebra$ we define
$\qkoszul$ in the obvious way by
\begin{align}
\qkoszul(f \otimes \xi) := f\star \qJ(e_\alpha) \xi^\alpha =
f \star \qJ({e_\alpha}) \Ins{e^\alpha}\xi \ensuremath{\text{.}}
\end{align}
Here and in the following, $\{e_\alpha\}$ is again
a basis of $\lieAlgebra$ with dual basis $\{e^\alpha\}$, and a $\xi \in
\lieAlgebra$ is written as $\xi = \xi^\alpha
e_\alpha$. This definition is independent of the
choice of basis. Next we want to extend the definition of
$\qkoszul$ to all of $C^\infty(M)[[\lambda]] \otimes \bigwedge\nolimits^\bullet \lieAlgebra$,
and in such a way that $\qkoszul^2 = 0$ holds. Knowing
the Chevalley-Eilenberg complex from Definition
\ref{def:Chevalley-Eilenberg-Komplex}, this poses no
difficulty. Namely, consider the Lie algebra
$\lieAlgebra_{\mathbb{C}}[[\lambda]]$ over the commutative ring
$\mathbb{C}[[\lambda]]$ with the Lie algebra structure $-\I \lambda
[\cdot,\cdot]$. Then the map
\begin{align}
\label{eq:QuantenDarstellungMitJ}
\rho_{\qJ} \colon \lieAlgebra_{\mathbb{C}}[[\lambda]] \to
\operatorname{\mathrm{End}}_{\mathbb{C}[[\lambda]]}(C^\infty(M)[[\lambda]]),
\quad \rho_{\qJ}(\xi)(f) := f \star \qJ(\xi) \quad \forall \xi \in
\lieAlgebra, f \in C^\infty(M)[[\lambda]]
\end{align}
is a Lie algebra representation, since $\qJ$ is a quantum momentum
map.
Written out, this means for $\xi,\eta \in \lieAlgebra$ and
$f \in C^\infty(M)[[\lambda]]$
\begin{align}
\label{eq:QuantenDarstellungMitJ1}
\rho_{\qJ}(\xi)(\rho_{\qJ}(\eta)(f)) -
\rho_{\qJ}(\eta)(\rho_{\qJ}(\xi)(f)) &= f \star \qJ(\eta) \star
\qJ(\xi) - f \star \qJ(\xi) \star \qJ(\eta)\notag\\
&= f \star (-\I \lambda
\qJ([\xi,\eta])) = \rho_{\qJ}(-\I \lambda [\xi,\eta])(f) \ensuremath{\text{.}}
\end{align}
\endgroup
Note that the associativity of $\star$ was used here.
If one considers the definition of the Chevalley-Eilenberg differential
$\chevalley$ for the representation $\rho_{\qJ}$, the canonical
embedding $C^\infty(M)[[\lambda]] \otimes \bigwedge\nolimits^k
\lieAlgebra \hookrightarrow C^\infty(M)[[\lambda]] \otimes
\bigwedge\nolimits^k_{\mathbb{C}} \lieAlgebra_{\mathbb{C}}[[\lambda]]$, which, since $\lieAlgebra$
is finite-dimensional, is obviously even an isomorphism,
induces a complex isomorphic to the Chevalley-Eilenberg complex
\def\tA[#1]{A_{#1}}
\begin{equation}
\begin{tikzpicture}[baseline=(current
bounding box.center),description/.style={fill=white,inner sep=2pt}]
\matrix (m) [matrix of math nodes, row sep=3.0em, column
sep=3.5em, text height=1.5ex, text depth=0.25ex]
{
C^\infty(M)[[\lambda]] & C^\infty(M)[[\lambda]] \otimes \bigwedge\nolimits^1
\lieAlgebra & C^\infty(M)[[\lambda]]
\otimes \bigwedge\nolimits^2 \lieAlgebra & \dots \\
}; %
\path[<-] (m-1-1) edge node[auto]{$\qkoszul$}(m-1-2); %
\path[<-] (m-1-2) edge node[auto]{$\qkoszul$}(m-1-3); %
\path[<-] (m-1-3) edge node[auto]{$\qkoszul$}(m-1-4); %
\end{tikzpicture},
\end{equation}
which we want to call the \neuerBegriff{quantum Koszul complex} with
\neuerBegriff{quantum Koszul operator} $\qkoszul$.
As is easily seen, $\qkoszul$ agrees in lowest degree with the
$\qkoszul$ already defined above. Let us remark that in the
following the concrete form of $\qkoszul$ will only be relevant in
lowest degree; beyond that we only need that $\qkoszul^2 = 0$
holds. For the sake of completeness, and for easier comparison with the
literature (cf.\ \cite[Def. 3.3]{gutt2010involutions},
\cite[Def. 16]{bordemann.herbig.waldmann:2000a}),
we want to write down at this point the explicit formula for the
quantum Koszul operator constructed above.
\begin{proposition}
\label{prop:KonkreteGestaltDesQuantenKoszuloperators}
For the quantum Koszul operator $\qkoszul$, the following formula holds
for all $f \in C^\infty(M)[[\lambda]]$ and $\xi \in \bigwedge\nolimits^\bullet \lieAlgebra$.
\begin{align}
\label{eq:KonkreteGestaltDesQuantenKoszuloperators}
\qkoszul(f \otimes \xi) = f \star \qJ(e_\alpha) \otimes \Ins{e^\alpha} \xi +
\frac{\I \lambda}{2} c_{\alpha \beta}^\gamma f \otimes e_\gamma \wedge
\Ins{e^\alpha} \Ins{e^\beta}\xi \ensuremath{\text{.}}
\end{align}
Here $\{c_{\alpha \beta}^\gamma\}$ are the structure constants of the
Lie algebra $\lieAlgebra$ with respect to the basis $\{e_\alpha\}$, given for
$\alpha,\beta,\gamma \in \{1,\dots,\dim \lieAlgebra\}$ by $c_{\alpha
\beta}^\gamma := \dPaar{e^\gamma}{[e_\alpha,e_\beta]}$.
\end{proposition}
In \cite{gutt2010involutions} and \cite{bordemann.herbig.waldmann:2000a},
the definition of $\qkoszul$ contains an additional summand of the
form $\I \lambda \kappa \Ins{\Delta}$, where $\Delta :=
\dPaar{e^\alpha}{[e_\alpha,e_\beta]}e^\beta \in \lieAlgebra^*$ denotes the modular
one-form of $\lieAlgebra$ and $\kappa$ is an element of
$\mathbb{C}[[\lambda]]$. Since star-product commutators vanish on
constant functions, it is clear from the definition of $\qkoszul$
that this term can be absorbed into a redefinition of the given
quantum momentum map. To see that this does not change any possible
equivariance properties of the latter, we state the
following proposition.
\begin{proposition}
\label{prop:ModulareEinsformInvariant}
Let $H$ be a Lie group and $\rho \dpA H \times \lieAlgebra[g] \to
\lieAlgebra[g]$ a linear $H$-action on $\lieAlgebra[g]$ that
respects the Lie algebra structure, i.e.\ $\rho_h([\xi,\xi']) = [\rho_h
(\xi), \rho_h (\xi')]$ for all $h \in H$ and $\xi,\xi' \in
\lieAlgebra[g]$. Then the modular one-form $\Delta \in
\lieAlgebra[g]^*$ is $H$-invariant with respect to the $\rho^*$-action,
and $\Ins{\Delta}$ is $H$-equivariant.
\end{proposition}
\begin{proof}
We write $h \xi := \rho(h,\xi)$ for $h \in H$ and $\xi \in
\lieAlgebra[g]$. Then for all $h \in H$
\begin{align*}
h \Delta \stackrel{\rho_h\ \text{linear}}{=}
\dPaar{e^\alpha}{[e_\alpha,e_\beta]} h e^\beta = \dPaar{h
e^\alpha}{h [e_\alpha,e_\beta]} h e^\beta = \dPaar{h
e^\alpha}{[h e_\alpha,h e_\beta]} h e^\beta = \Delta
\end{align*}
and
\begin{align*}
h\Ins{\Delta}(f \otimes \xi) = h f \otimes h \Ins{\Delta} \xi &= (h
f) \otimes (\Ins{h \Delta})(h \xi) \\ &=(h f) \otimes (\Ins{\Delta} (h
\xi)) = \Ins{\Delta}(h(f\otimes \xi)) \ensuremath{\text{.}}
\end{align*}
\end{proof}
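For concreteness: in terms of structure constants the modular one-form reads $\Delta_\beta = c_{\alpha\beta}^{\alpha}$, which makes it trivial to compute in examples. A small Python sketch (the two example Lie algebras are illustrative choices): for the two-dimensional non-abelian Lie algebra with $[e_0,e_1] = e_1$ one finds $\Delta = -e^0$, whereas $\mathfrak{so}(3)$ is unimodular, so $\Delta = 0$.

```python
def modular_one_form(c, n):
    """Delta_beta = <e^alpha, [e_alpha, e_beta]> = sum_alpha c[alpha][beta][alpha]."""
    return [sum(c[a][b][a] for a in range(n)) for b in range(n)]

# Two-dimensional non-abelian Lie algebra: [e_0, e_1] = e_1, i.e. c^1_{01} = 1.
c2 = [[[0, 0], [0, 1]],
      [[0, -1], [0, 0]]]          # c2[a][b][g] = c^g_{ab}
print(modular_one_form(c2, 2))    # -> [-1, 0], i.e. Delta = -e^0

# so(3) with [e_a, e_b] = eps_{abg} e_g is unimodular.
eps = lambda a, b, g: (b - a) * (g - a) * (g - b) / 2.0
c3 = [[[eps(a, b, g) for g in range(3)] for b in range(3)] for a in range(3)]
print(modular_one_form(c3, 3))    # -> [0.0, 0.0, 0.0], i.e. Delta = 0
```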
\begin{proposition}
\label{prop:QuantenKoszulInvariant}
Let $H$ be a Lie group acting on $M$. Further, let a linear
$H$-action on $\lieAlgebra[g]$ be given that respects the
Lie algebra structure, and let $\bigwedge\nolimits^\bullet \lieAlgebra[g]$ carry
the induced action. If $\qJ$ is $H$-equivariant, then so
is $\qkoszul$.
\end{proposition}
\begin{proof}
Let $h \in H$, $k \in \mathbb{N}$, $\xi \in \bigwedge\nolimits^k
\lieAlgebra[g]$ and $f \in \CM[M]$. Further, let $\{e_\alpha\}$ be a basis
of $\lieAlgebra[g]$ and $\{e^\alpha\}$ the corresponding dual basis. Then
\begin{align*}
h (\qkoszul (f \otimes \xi)) &= h ( f \star \qJ(e_{\alpha})
\otimes \Ins{e^{\alpha}} \xi) + \frac{\I\lambda}{2} h (c_{\alpha \beta}^\gamma f
\otimes
e_\gamma \wedge \Ins{e^\alpha}\Ins{e^\beta} \xi)\\
&= h (f \star \qJ(e_{\alpha})) \otimes h \Ins{e^{\alpha}}\xi +
\frac{\I\lambda}{2} c_{\alpha \beta}^\gamma hf \otimes
he_\gamma \wedge h(\Ins{e^\alpha}\Ins{e^\beta} \xi) \\
&= h f \star h \qJ(e_{\alpha}) \otimes \Ins{h e^{\alpha}} (h \xi)
+ \frac{\I\lambda}{2} c_{\alpha \beta}^\gamma hf \otimes he_\gamma \wedge
\Ins{he^\alpha}\Ins{he^\beta} h\xi
\\
&= h f \star \qJ(h e_{\alpha}) \otimes \Ins{h e^\alpha} (h \xi) +
\frac{\I\lambda}{2} c_{\alpha \beta}^\gamma hf \otimes e_\gamma \wedge
\Ins{e^\alpha}\Ins{e^\beta} h\xi \\
&= \qkoszul(h f \otimes h \xi) = \qkoszul (h (f \otimes \xi))
\ensuremath{\text{.}}
\end{align*}
In the second-to-last step we used that $\{h e_\alpha\}$ and
$\{h e^\alpha\}$ are dual bases of $\lieAlgebra[g]$ and
$\lieAlgebra[g]^*$, and that the structure constants with respect to
these bases coincide with those with respect to $\{e_\alpha\}$, since
\begin{align*}
\dPaar{h e^\gamma}{[he_\alpha,he_\beta]} = \dPaar{h
e^\gamma}{h[e_\alpha,e_\beta]} =
\dPaar{e^\gamma}{[e_\alpha,e_\beta]}\ensuremath{\text{.}}
\end{align*}
\end{proof}
\subsection{Quantum restriction}
\label{sec:Quanteneinschraenkung}
\begingroup
\emergencystretch=0.8em Next, we want to find a suitable
augmentation map $\ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}} \colon C^\infty(M)[[\lambda]] \to
C^\infty(C)[[\lambda]]$ such that $\ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}$ is a deformation of
$\ensuremath{\imath^*}$ and the augmented Koszul complex is exact at
$C^\infty(M)[[\lambda]] \otimes
\bigwedge\nolimits^0 \lieAlgebra$.
\endgroup
As a starting point we take Equation \eqref{eq:globalisiertHomotopie2}
and obtain, for $f \in C^\infty(M)[[\lambda]]$,
\begin{align}
\label{eq:IdeeQuantenEinschr}
\qkoszul[1] \h[0] f + \prol \ensuremath{\imath^*} f = (\qkoszul[1] - \kkoszul[1])\h[0] f + \kkoszul[1] \h[0]
f + \prol \ensuremath{\imath^*} f = (\operatorname{\mathsf{id}} + (\qkoszul[1] - \kkoszul[1])\h[0])f \ensuremath{\text{.}}
\end{align}
The operator $\qkoszul[1] - \kkoszul[1]$ is at least of order
$\lambda$, so $\operatorname{\mathsf{id}} + (\qkoszul[1] - \kkoszul[1])\h[0]$ is invertible.
This immediately yields the equation
\begin{align}
\label{eq:IdeeQuantenEinschr2} \qkoszul[1] \h[0] \frac{\operatorname{\mathsf{id}}}{\operatorname{\mathsf{id}} +
(\qkoszul[1] - \kkoszul[1])\h[0]} + \prol \ensuremath{\imath^*} \frac{\operatorname{\mathsf{id}}}{\operatorname{\mathsf{id}} +
(\qkoszul[1] - \kkoszul[1])\h[0]} = \operatorname{\mathsf{id}} \ensuremath{\text{.}}
\end{align}
This motivates defining the augmentation map and a
deformation of $\h[0]$ as follows. We set
\begin{align}
\label{eq:QuantenEinschrDef}
\ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}} := \ensuremath{\imath^*} \frac{\operatorname{\mathsf{id}}}{\operatorname{\mathsf{id}} + (\qkoszul[1] - \kkoszul[1])\h[0]}
\end{align}
and
\begin{align}
\label{eq:defHomotopie}
\qh[0] := \h[0] \frac{\operatorname{\mathsf{id}}}{\operatorname{\mathsf{id}} + (\qkoszul[1] - \kkoszul[1])\h[0]} \ensuremath{\text{.}}
\end{align}
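For later reference, note that since $\qkoszul[1] - \kkoszul[1]$ is at
least of order $\lambda$, the inverse appearing in these definitions can
be expanded as a $\lambda$-adically convergent geometric series; explicitly,
\begin{align*}
\ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}} = \ensuremath{\imath^*} \sum_{k=0}^\infty \bigl(-(\qkoszul[1] -
\kkoszul[1])\h[0]\bigr)^k = \ensuremath{\imath^*} - \ensuremath{\imath^*} (\qkoszul[1] -
\kkoszul[1])\h[0] + \dots \ensuremath{\text{,}}
\end{align*}
which in particular shows that $\ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}$ coincides with $\ensuremath{\imath^*}$ in order
zero of $\lambda$.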
With these definitions we obtain the following proposition.
\begin{proposition}
\label{prop:QuantenHomotopieUntereOrdnung}
We have
\begin{align}
\operatorname{\mathsf{id}} = \qkoszul[1] \qh[0] + \prol \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}\label{eq:QuantenHomotopieUntereOrdnung}
\end{align}
and
\begin{align}
\label{eq:QuantenHomotopieUntereOrdnung2}
\qh[0] \prol = 0 \ensuremath{\text{.}}
\end{align}
If $H$ is a Lie group acting on $M$ and a linear
$H$-action on $\lieAlgebra[g]$ is given such that $\h$, $\kkoszul$
and $\qkoszul$ are $H$-equivariant, then $\qh[0]$ and
$\ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}$ are $H$-equivariant as well.
\end{proposition}
\begin{proof}
Clear.
\end{proof}
\begin{bemerkung}
\label{bem:Quanteneinschraenkung}
Equation \eqref{eq:QuantenHomotopieUntereOrdnung} yields, in
particular, the important characterization
\begin{align*}
\qIdeal = \ker \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}} \ensuremath{\text{.}}
\end{align*}
\end{bemerkung}
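For completeness, the inclusion $\ker \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}} \subset \qIdeal$ can be
checked directly from \eqref{eq:QuantenHomotopieUntereOrdnung}: if
$\ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}} f = 0$, then
\begin{align*}
f = \qkoszul[1] \qh[0] f + \prol \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}} f = \qkoszul[1] \qh[0] f
\in \im \qkoszul[1] = \qIdeal \ensuremath{\text{,}}
\end{align*}
while the converse inclusion $\qIdeal \subset \ker \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}$ is precisely
Equation \eqref{eq:QuantenAugmentierung2} below.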
For obvious reasons, we will also refer to $\ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}$ as the
\neuerBegriff{quantum restriction} in the following.
\begin{proposition}
\label{prop:QuantenAugmentierung}
The map $\ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}} = \sum_{r=0}^\infty \lambda^r \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}_r\colon
C^\infty(M)[[\lambda]] \to C^\infty(C)[[\lambda]]$ satisfies the
following three conditions.
\begin{align}
\ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}_0 &= \ensuremath{\imath^*} \ensuremath{\text{.}} \label{eq:QuantenAugmentierung1}\\
\ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}} \qkoszul[1] &= 0 \ensuremath{\text{.}} \label{eq:QuantenAugmentierung2}\\
\ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}} \prol &= \operatorname{\mathsf{id}} \label{eq:QuantenAugmentierung3} \ensuremath{\text{.}}
\end{align}
The third condition immediately implies that $\prol \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}$ is a
projection with $\im{(\prol \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}})} =
\prol(C^\infty(M)[[\lambda]])$. Moreover, $\ker{(\prol \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}})} =
\qIdeal$ holds, so the direct sum decomposition
\begin{align}
\label{eq:DirekteSummenzerlegung}
C^\infty(M)[[\lambda]] = \qIdeal \oplus \prol(C^\infty(M)[[\lambda]])
\end{align}
holds.
\end{proposition}
\begin{proof}
Equation \eqref{eq:QuantenAugmentierung3} follows immediately from
Equation~\eqref{eq:globalisierteHomotopie3}, since by definition of
$\ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}$ we have
\begin{align*}
\ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}} \prol = \ensuremath{\imath^*} \frac{\operatorname{\mathsf{id}}}{\operatorname{\mathsf{id}} + (\qkoszul[1]
- \kkoszul[1])\h[0]}\prol = \ensuremath{\imath^*} \prol = \operatorname{\mathsf{id}} \ensuremath{\text{.}}
\end{align*}
To prove Equation \eqref{eq:QuantenAugmentierung2}, we first note
that, by Equation \eqref{eq:QuantenHomotopieUntereOrdnung},
\begin{align*}
\ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}} = \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}(\qkoszul[1] \qh[0] + \prol \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}) = \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}} \qkoszul[1] \qh[0]
+ \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}
\end{align*}
holds, from which
\begin{align*}
\ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}} \qkoszul[1] \qh[0] = 0
\end{align*}
follows, and hence directly also
\begin{align}
\label{eq:qResqKoszulhNull}
\ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}} \qkoszul[1] \h[0] = 0 \ensuremath{\text{.}} \tag{$*$}
\end{align}
On the other hand, since $\qkoszul[1] \qkoszul[2] = 0$, we have
\begin{align*}
\qkoszul[1] \h[0] \kkoszul[1] &= \qkoszul[1](\operatorname{\mathsf{id}} - \kkoszul[2] \h[1])
\\ &= \qkoszul[1](\operatorname{\mathsf{id}} + \qkoszul[2] \h[1] - \kkoszul[2]\h[1]) =
\qkoszul[1](\operatorname{\mathsf{id}} + (\qkoszul[2] - \kkoszul[2])\h[1]) \ensuremath{\text{.}}
\end{align*}
\begingroup
\emergencystretch=0.8em
Here we used the homotopy property of $\h$ in the first step.
Together with \eqref{eq:qResqKoszulhNull}, this yields the equation
\begin{align*}
0 = \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}} \qkoszul[1] \h[0] \kkoszul[1] = \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}} \qkoszul[1](\operatorname{\mathsf{id}} +
(\qkoszul[2] - \kkoszul[2])\h[1]) \ensuremath{\text{,}}
\end{align*}
from which, using the invertibility of $\operatorname{\mathsf{id}} + (\qkoszul[2]
- \kkoszul[2])\h[1]$, Equation~\eqref{eq:QuantenAugmentierung2} follows.
Equation \eqref{eq:QuantenAugmentierung3} implies
$(\prol \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}})^2 = \prol \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}$ and $\im(\prol \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}) =
\prol(C^\infty(M)[[\lambda]])$, i.\,e.\ $\prol \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}$ is a projection
onto $\prol(C^\infty(M)[[\lambda]])$. We now compute the kernel of
$\prol \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}$. To this end, we first observe that $\im \qkoszul[1] =
\im(\qkoszul[1] \qh[0])$. The inclusion $\im(\qkoszul[1]) \supset
\im(\qkoszul[1] \qh[0])$ is trivial; the other one follows immediately
from the relation
\begin{align*}
\qkoszul[1] \qh[0] \qkoszul[1] = (\operatorname{\mathsf{id}} - \prol \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}})\qkoszul[1] =
\qkoszul[1]\ensuremath{\text{,}}
\end{align*}
where Equation \eqref{eq:QuantenHomotopieUntereOrdnung} was used in
the first step and \eqref{eq:QuantenAugmentierung2} in the second.
Together with Equation \eqref{eq:QuantenHomotopieUntereOrdnung}, this
yields $\ker (\prol \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}) = \im(\qkoszul[1] \qh[0]) =
\im(\qkoszul[1]) = \qIdeal$. Since, as just shown, $\prol \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}$ is a
projection, the decomposition $C^\infty(M)[[\lambda]] = \ker(\prol
\ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}) \oplus \im(\prol \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}) = \qIdeal \oplus
\prol(C^\infty(M)[[\lambda]])$ follows immediately.
\endgroup
\end{proof}
\begin{proposition}[Uniqueness of the quantum restriction]
\label{prop:EindeutigkeitDerQuanteneinschr}
Let a $\mathbb{C}[[\lambda]]$-linear
map $\boldsymbol{\mathrm{res}} = \sum_{r=0}^\infty \lambda^r
\boldsymbol{\mathrm{res}}_r \colon C^\infty(M)[[\lambda]] \to
C^\infty(C)[[\lambda]]$ be given which satisfies the following
properties.
\begin{align}
\boldsymbol{\mathrm{res}}_0 &= \ensuremath{\imath^*} \label{eq:EindeutigkeitDerQuanteneinschr1}\ensuremath{\text{.}}\\
\boldsymbol{\mathrm{res}} \circ \qkoszul[1] &= 0 \label{eq:EindeutigkeitDerQuanteneinschr2} \ensuremath{\text{.}}\\
\boldsymbol{\mathrm{res}} \circ \prol &= \operatorname{\mathsf{id}} \label{eq:EindeutigkeitDerQuanteneinschr3}\ensuremath{\text{.}}
\end{align}
Then $\boldsymbol{\mathrm{res}} = \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}$.
\end{proposition}
\begin{proof}
By Equation \eqref{eq:EindeutigkeitDerQuanteneinschr3} it is clear
that $\boldsymbol{p} := \prol \circ \boldsymbol{\mathrm{res}}$ is a
projection, i.\,e.\ $\boldsymbol{p}^2 = \boldsymbol{p}$ holds and
$\im (\boldsymbol{p}) = \prol(C^\infty(M)[[\lambda]])$ is satisfied.
Since, by definition of $\qIdeal$, we have $\qIdeal = \im
\qkoszul[1]$, it follows from
\eqref{eq:EindeutigkeitDerQuanteneinschr2} that $\qIdeal$ lies in the
kernel of $\boldsymbol{p}$. Since $\boldsymbol{p}$ is a projection, we
have on the one hand $C^\infty(M)[[\lambda]] = \ker(\boldsymbol{p}) \oplus
\im(\boldsymbol{p})$. On the other hand, by Proposition
\ref{prop:QuantenAugmentierung}, the decomposition $C^\infty(M)[[\lambda]]
= \qIdeal \oplus \prol(C^\infty(M)[[\lambda]])$ holds, and, as shown above,
$\im (\boldsymbol{p}) = \prol(C^\infty(M)[[\lambda]])$ as well as
$\qIdeal \subset \ker(\boldsymbol{p})$. Hence it is clear that
$\qIdeal = \ker(\boldsymbol{p})$. Since a projection is, as is well
known, uniquely determined by its kernel and its image,
we obtain, together with Proposition \ref{prop:QuantenAugmentierung},
the relation $\prol \circ \boldsymbol{\mathrm{res}} = \boldsymbol{p}
= \prol \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}$. Applying $\ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}$ from the left to this
equation then yields, by Equation \eqref{eq:QuantenAugmentierung3},
$\boldsymbol{\mathrm{res}} = \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}$.
\end{proof}
\begin{bemerkung}
\label{bem:InduzierteQuantenDings}
From the definition of $\ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}$ it is evident that a choice of
geometric homotopy data, together with $J$ and $\qJ$, always induces a
quantum restriction $\ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}$. If $M$ carries an action of a
Lie group $H$ under which $C$ is stable, if $\lieAlgebra$ carries a
linear $H$-action, and if $J$ and $\qJ$ are $H$-equivariant, then
there always exist $H$-invariant geometric homotopy data, see
\ref{bem:induzierteDingens}, and the quantum restriction induced by
them is $H$-equivariant. By Proposition
\ref{prop:EindeutigkeitDerQuanteneinschr}, we may also regard $\ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}$
as induced by $J$, $\qJ$ and $\prol$, where $\prol$ is in turn induced
by geometric homotopy data; for its construction, however, the datum
$\xi$ from Definition \ref{def:GeometrischeHomotopieDaten}, which was
needed to globalize the homotopy, is not required.
\end{bemerkung}
\begin{proposition}[Localizability of $\ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}$]
\label{prop:LokalisierbarkeitDerQuanteneinschraenkung}
\begin{propositionEnum}
\item %
\label{item:lokaleGestaltDerQuanteneinschraenkung}
Let $c \in C$. Then there is an open neighborhood $U' \subset U$
of $c$ such that $\Psi(U') \subset C \times \lieAlgebra^*$ is
star-shaped with respect to the origin in each fiber, and such
that for this and every smaller $U'$ with these properties
the following statement holds. For the linear map
$\ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}_{U'} \colon C^\infty(U')[[\lambda]]\to
C^\infty(C)[[\lambda]]$,
\begin{align}
\label{eq:lokalisierteQuanteneinschraenkung}
\ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}_{U'} := \ensuremath{\imath}^*_{U'} \circ (\operatorname{\mathsf{id}} + (\qkoszul_{U'}
- \partial_{U'}) \h_{U',U})^{-1}
\end{align}
we have, for all $f \in C^\infty(M)$,
\begin{align}
\label{eq:EinschraenkbarkeitDerQuanteneinschr}
\ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}_{U'} f\at{U'} = \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}} f \ensuremath{\text{.}}
\end{align}
Here, $\h_{U',U} \colon C^\infty(U') \otimes
\bigwedge\nolimits^{\bullet}\lieAlgebra \to C^\infty(U') \otimes
\bigwedge\nolimits^{\bullet + 1} \lieAlgebra$ is given, for $k \in \mathbb{N}$,
$f \otimes \xi \in C^\infty(U') \otimes \bigwedge\nolimits^k \lieAlgebra$
and $p\in U'$, by
\begin{align}
\label{eq:lokalisierteHomotopie}
\h_{U',U}(f \otimes \xi)(p) = \int_0^1 t^k \partial^\alpha(f
\circ \Psi\at{U'}^{-1})(r(p),t J(p)) \, dt \otimes e_\alpha \wedge \xi
\end{align}
and $\ensuremath{\imath}_{U'}\colon C \cap U' \hookrightarrow U'$
denotes the inclusion. $U'$ can even be chosen to be a neighborhood
of $C$. If $H$ is a Lie group acting on $M$,
$\lieAlgebra$ carries a linear $H$-action, and all
quantities involved are chosen invariant resp.\ equivariant, then
$U'$ can also be chosen $H$-invariant.
\item %
\label{item:WennInUmgebungNullDannNull}
Let $U' \subset U$ be an open neighborhood of $C$ such that $\Psi(U')
\subset C \times \lieAlgebra^*$ is star-shaped with respect to the
origin in each fiber. If $f \in C^\infty(M)$ satisfies
$f\at{U'} = 0$, then already $\ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}} f = 0$.
\end{propositionEnum}
\end{proposition}
\begin{proof}
\begin{beweisEnum}
\item %
It is clear that there is an open neighborhood $U'$ of $C$
such that $\Psi(U')$ is star-shaped with respect to the origin in each
fiber and both $\psi_U\at{U'} = 1$ and $U' \cap W =
\emptyset$ hold. Then $\h(f)\at{U'} = \h_{U',U}f\at{U'}$,
and hence, by the locality of $\qkoszul - \partial$, also
$((\qkoszul - \partial)\h(f))\at{U'} = (\qkoszul_{U'}
- \partial_{U'})\h_{U',U}(f\at{U'})$. Inductively, one then obtains
\begin{align*}
(((\qkoszul - \partial)\h)^k f)\at{U'} = ((\qkoszul_{U'}
- \partial_{U'})\h_{U',U})^k f\at{U'}
\end{align*}
for all $k \in \mathbb{N}$. The rest is clear.
\item %
This follows immediately from part~\refitem{item:lokaleGestaltDerQuanteneinschraenkung}.
\end{beweisEnum}
\end{proof}
\subsection{Star product for the reduced phase space}
\label{sec:SternproduktFuerDenReduziertenPhasenraum}
We are now in a position to construct a star product for $\Mred$.
First, however, we give another characterization of the
$\star$-Lie idealizer $\qbIdeal$. In the following, let
$(\Psi \colon U \to V, O \subset U, \psi_U,\psi_W,\xi)$ be $G$-invariant
geometric homotopy data. In particular, the prolongation $\prol$
and the quantum restriction $\ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}$ induced by them are
$G$-equivariant. As shown in the previous section, such data can
always be chosen, since $J$ and $\qJ$ are $G$-equivariant.
\begin{lemma}
\label{lem:CharakterisierungQuantenLieIdealisator}
We have
\begin{align}
\label{eq:CharakterisierungQuantenLieIdealisator}
\qbIdeal = \{f \in C^\infty(M)[[\lambda]] \mid \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}} f \in
\pi^*C^\infty(\Mred)[[\lambda]] \} \ensuremath{\text{.}}
\end{align}
In particular, $\prol \pi^*C^\infty(\Mred)[[\lambda]] \subset \qbIdeal$.
\end{lemma}
\begin{proof}
By definition of $\qbIdeal$, we have $f \in \qbIdeal$ if and only if
$[f,f']_{\star} \in \qIdeal$ for all $f' \in \qIdeal$. This is
equivalent to the condition $f' \star f \in \qIdeal$ for all $f' \in
\qIdeal$, because $\qIdeal$ is a left $\star$-ideal. Since, moreover,
$\qIdeal$ is generated by the components of $\qJ$, $f$ is an element
of $\qbIdeal$ if and only if $[f,\qJ(\xi)]_\star \in \qIdeal$ for all
$\xi \in \lieAlgebra$. Because of $\qIdeal =
\ker \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}$ (cf.\ Remark \ref{bem:Quanteneinschraenkung}), this is
in turn equivalent to $\ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}([f,\qJ(\xi)]_{\star}) = 0$ for all $\xi \in
\lieAlgebra$. Now, for every $\xi \in \lieAlgebra$, we have the
equation
\begin{align*}
\ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}([f,\qJ(\xi)]_\star) = \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}(\I \lambda \{f,J(\xi)\}) = \I
\lambda \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}(\xi_{M}f) = \I\lambda \xi_M \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}} f \ensuremath{\text{.}}
\end{align*}
Here we used in the first step that $\qJ$ is a
quantum momentum map, and in the third that $\ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}$ is $G$-equivariant.
Since $\pi^*C^\infty(\Mred) = C^\infty(C)^G$ holds by Proposition
\ref{prop:InvarianteFunktionen}, and since $G$ is connected, the
claim follows.
\end{proof}
We now come to the central theorem of this chapter
(cf.\ \cite[Prop.~3.12]{gutt2010involutions}), in which
the star product on the reduced phase space is given.
\begin{satz}[Star product on the reduced phase space]
\label{satz:SternproduktAufDemReduziertenPhasenraum}
\begin{satzEnum}
\item %
$\qbIdeal$ is a subalgebra of
$(C^\infty(M)[[\lambda]],\star)$ and $\qIdeal$ is an ideal in
$\qbIdeal$, so that $\qbIdeal/\qIdeal$ becomes an associative
algebra via $[f] \bullet [f'] := [f \star f']$ for
$f,f' \in \qbIdeal$.
\item %
The map
\begin{align}
\label{eq:SternproduktAufDemReduziertenPhasenraum}
\boldsymbol{\mathrm{iso}} \colon \qbIdeal/\qIdeal \ni [f]
\mapsto \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}} f \in \pi^*C^\infty(\Mred)[[\lambda]] =
C^\infty(C)^G[[\lambda]]
\end{align}
is an isomorphism, and for every $f \in
C^\infty(C)^G[[\lambda]]$ we have
\begin{align}
\label{eq:SternproduktAufDemReduziertenPhasenraum2}
\boldsymbol{\mathrm{iso}}^{-1}(f) = [\prol(f)] \in
\qbIdeal/\qIdeal \ensuremath{\text{.}}
\end{align}
Furthermore, the isomorphism $\boldsymbol{\mathrm{iso}}^{-1}
\circ \pi^* \colon C^\infty(\Mred)[[\lambda]] \to
\qbIdeal/\qIdeal$ induces a star product $\starred$ on
$C^\infty(\Mred)[[\lambda]]$, which in particular is determined,
for all $\phi,\phi' \in C^\infty(\Mred)[[\lambda]]$, by the
following equation:
\begin{align}
\label{eq:SternproduktAufDemReduziertenPhasenraum3}
\pi^*(\phi \star_{\mathrm{red}} \phi') = \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}(\prol(\pi^*
\phi) \star \prol(\pi^* \phi'))
\end{align}
\item %
If $\star$ is differential, then so is $\starred$.
\end{satzEnum}
\end{satz}
\begin{proof}
\begin{beweisEnum}
\item %
By Lemma \ref{lem:CharakterisierungQuantenLieIdealisator},
$\qIdeal \subset \qbIdeal$ is clear; the remaining statements
are then immediate.
\item %
By Lemma \ref{lem:CharakterisierungQuantenLieIdealisator},
$\ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}} f \in \pi^*C^\infty(\Mred)[[\lambda]]$ for all $f \in
\qbIdeal$. For every $f\in C^\infty(C)^G[[\lambda]]$, the function
$\ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}\prol(f) = f$ is $G$-invariant, hence an element of
$\pi^*C^\infty(\Mred)[[\lambda]] = C^\infty(C)^G[[\lambda]]$. Thus,
again by Lemma
\ref{lem:CharakterisierungQuantenLieIdealisator}, the function
$\prol(f)$ lies in $\qbIdeal$. Hence the maps
$\boldsymbol{\mathrm{iso}}$ and $C^\infty(C)^G[[\lambda]] \ni f
\mapsto [\prol f] \in \qbIdeal/\qIdeal$ are well-defined. We
next verify that they are inverse to each other. To this end,
let first $f \in \qbIdeal$ be given. Then
\begin{align*}
\ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}} (\prol \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}} f - f) = 0 \ensuremath{\text{,}}
\end{align*}
and hence
\begin{align*}
[\prol \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}} f] = [f] \ensuremath{\text{.}}
\end{align*}
Conversely, for $f \in \pi^*C^\infty(\Mred)[[\lambda]]$ we obtain
\begin{align*}
\boldsymbol{\mathrm{iso}}([\prol f]) = \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}} \prol f = f \ensuremath{\text{.}}
\end{align*}
Formula \eqref{eq:SternproduktAufDemReduziertenPhasenraum3} is
immediate, which also makes it clear that the equation
$[\phi,\phi']_{\star_{\mathrm{red}}} = \I \lambda
\{\phi,\phi'\}_{\mathrm{red}}$ holds.
\item %
This follows directly from Lemma \ref{lem:S} below and the
fact that the composition of multidifferential operators is again
a multidifferential operator.
\end{beweisEnum}
\end{proof}
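As a quick consistency check, note that in lowest order of $\lambda$,
Formula \eqref{eq:SternproduktAufDemReduziertenPhasenraum3} reproduces
the pointwise product: by \eqref{eq:QuantenAugmentierung1} the quantum
restriction starts with $\ensuremath{\imath^*}$, and $\star$ starts with the pointwise
product, so
\begin{align*}
\pi^*(\phi \starred \phi') = \ensuremath{\imath^*}\bigl(\prol(\pi^* \phi)\,
\prol(\pi^* \phi')\bigr) + O(\lambda) = (\pi^* \phi)(\pi^* \phi')
+ O(\lambda) \ensuremath{\text{,}}
\end{align*}
where the last step uses that $\ensuremath{\imath^*}$ is a homomorphism for the
pointwise product together with $\ensuremath{\imath^*} \prol = \operatorname{\mathsf{id}}$.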
\begin{lemma}
\label{lem:S}
There is a formal series $S := \operatorname{\mathsf{id}} + \sum_{r=1}^\infty \lambda^r
S_r$ of differential operators $S_r \colon C^\infty(M) \to
C^\infty(M)$ such that
\begin{align*}
\ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}} = \ensuremath{\imath^*} \circ S
\end{align*}
holds. The differential operators $S_r$ can be chosen such that
$S_r$ vanishes on constants for $r \geq 1$.
\end{lemma}
\begin{proof}
Let $U'$ be an open neighborhood of $C$, and let $\ensuremath{\imath^*}_{U'}$ and
$\ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}_{U'}$ be as in Proposition
\ref{prop:LokalisierbarkeitDerQuanteneinschraenkung}. It suffices to
show that there is a formal series $S_{U'} = \operatorname{\mathsf{id}}_{U'} +
\sum_{r=1}^\infty\lambda^r{S_{U'}}_r$ of differential operators ${S_{U'}}_r
\colon C^\infty(U') \to C^\infty(U')$ which vanish on constants,
such that $\ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}_{U'}= \ensuremath{\imath^*}_{U'} \circ S_{U'}$ holds. Indeed,
once such an $S_{U'}$ has been found, define for each $r
\geq 1$ the differential operator $S_r \colon C^\infty(M) \to
C^\infty(M)$ by $S_r := \chi_{U'}\cdot {S_{U'}}_r \circ \ensuremath{\imath}_{U',M}^*$,
where $\chi_{U'} \colon M \to \mathbb{R}$ is a smooth function
with $\supp \chi_{U'} \subset {U'}$ and $\chi_{U'}\at{C} = 1$, and
$\ensuremath{\imath}_{U',M} \colon {U'} \hookrightarrow M$ denotes the
inclusion (cf.\ Prop.~\ref{prop:VortsetzungVonDiffops}).
Then $S := \operatorname{\mathsf{id}} + \sum_{r=1}^\infty \lambda^r S_r$ has the
desired properties, since for $f \in C^\infty(M)$ we have
\begin{align*}
(\ensuremath{\imath^*} \circ S_r)f = \ensuremath{\imath^*} (\chi_{U'} {S_{U'}}_r \circ \ensuremath{\imath^*}_{U',M})(f) =
(\ensuremath{\imath^*} \chi_{U'}) \ensuremath{\imath^*}_{U'} {S_{U'}}_r \ensuremath{\imath^*}_{U',M} f = (\ensuremath{\imath^*}_{U'} \circ
{S_{U'}}_r) f\at{U'} \quad \forall r \geq 1
\end{align*}
and hence
\begin{align*}
(\ensuremath{\imath^*} \circ S)f = \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}_{U'} f\at{U'} = \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}} f \ensuremath{\text{.}}
\end{align*}
To control the local situation, the following
observation is helpful. Let $D \colon C^\infty(U') \otimes \lieAlgebra
\to C^\infty(U')$ be a linear map. Then, as is well known, there are
linear maps $D_i \colon C^\infty(U') \to C^\infty(U')$ such that
$D(f \otimes \xi) = \Ins{e^i}(\xi) D_i(f)$ holds for all $f \otimes
\xi \in C^\infty(U') \otimes \lieAlgebra$. If every $D_i$ is a
differential operator, we will, for simplicity, also call $D$
a differential operator; if every $D_i$ is of
order $k \in \mathbb{N}$, we will say that $D$ is of
order $k$. Given such a $D$ of order $k$,
there is a differential operator $D' \colon C^\infty(U')
\to C^\infty(U')$ of order $k+1$ satisfying
\begin{align*}
\ensuremath{\imath^*}_{U'} \circ D \circ \h_{U',U} = \ensuremath{\imath^*}_{U'} \circ D' \ensuremath{\text{.}}
\end{align*}
Indeed, let $(U_C,x)$ be a chart for $C$.
Then $(U'' := \Psi^{-1}(U_C \times \lieAlgebra^*),(\overline x = x \circ
r,J_1,\dots,J_{\dim G}))$ is a submanifold chart of $C$ in
$M$, and in this chart $D$ can be written in the form
%
\begin{align*}
D (f\at{U''} \otimes \xi) = \sum_{\nu = 0}^k \sum_{r+s = \nu}\Ins{e^j}(\xi)
{D_{U''}^\nu}_{j j_1\dots j_{s}}^{i_1 \dots i_r}
\frac{\partial^{r}}{\partial \overline{x}^{i_1} \dotsm \partial
\overline{x}^{i_r}} \frac{\partial^s }{\partial J_{j_1} \dotsm J_{j_s}}f
\end{align*}
with smooth functions ${D_{U''}^\nu}_{j j_1\dots j_s}^{i_1 \dots
i_r} \in C^\infty(U'')$. Now let $f \in C^\infty(U')$ and $c \in
C$; then we obtain
\begin{align*}
\lefteqn{ (D \h_{U',U}(f))(c)}\\
&= \sum_{\nu = 0}^k \sum_{r +s = \nu} {D_{U''}^\nu}_{j j_1\dots
j_s}^{i_1 \dots i_r} \int_0^1 t^s \delta_{m,j}
\frac{\partial^{r}}{\partial x^{i_1} \dotsm \partial x^{i_r}}
\frac{\partial^s }{\partial \mu_{j_1} \dotsm \partial
\mu_{j_s} \partial \mu_m} (f \circ
\Psi^{-1})(r(c),t \cdot 0) \, dt \\
&= \sum_{\nu = 0}^k\sum_{r +s = \nu} {D_{U''}^\nu}_{j j_1\dots j_s}^{i_1 \dots i_r}
\frac{1}{s+1} \frac{\partial^{r}}{\partial \overline{x}^{i_1}
\dotsm \partial \overline{x}^{i_r}} \frac{\partial^s }{\partial
J_{j_1} \dotsm \partial J_{j_s} \partial J_j} f (c) \\
&=: (D'f)(c) \ensuremath{\text{.}}
\end{align*}
Here $D'$ is defined locally by the last equality; as one easily
checks, $D'$ is indeed a differential operator
(cf.\ \cite[Prop.~A.3.6]{waldmann:2007a}). Applying this
observation order by order, one sees that for every formal
power series $D \colon C^\infty(U')[[\lambda]] \otimes \lieAlgebra \to
C^\infty(U')[[\lambda]]$ of differential operators there exists a formal
power series $D' \colon C^\infty(U')[[\lambda]] \to
C^\infty(U')[[\lambda]]$ of differential operators with
$\ensuremath{\imath^*}_{U'} \circ D \circ \h_{U',U} = \ensuremath{\imath^*}_{U'} \circ D'$.
The map $B_{U'} := \frac{1}{\lambda}(\qkoszul_{U'}
- \partial_{U'})$ is obviously a formal series of
differential operators and vanishes on constants. By the above
observation, there is then a differential operator $D_1'$ with
$\ensuremath{\imath^*}_{U'} \circ \lambda {B_{U'}}\circ \h_{U',U} = \ensuremath{\imath^*}_{U'} \circ
D_1'$. We now set $B_n := (\lambda
{B_{U'}} \circ \h_{U',U})^n$ for $n \in \mathbb{N}$ and claim that for
every $n \in \mathbb{N}$ there is a formal series of differential
operators $D_n'$ such that $\ensuremath{\imath^*}_{U'} \circ D_n' = \ensuremath{\imath^*}_{U'} \circ
B_n$. The proof is by induction. The base case was
already carried out above. Now suppose the claim holds for some
$n \in \mathbb{N}$; we show it for $n+1$. Indeed, we have
\begin{align*}
\ensuremath{\imath^*}_{U'} \circ B_{n+1} &= \ensuremath{\imath^*}_{U'} \circ B_n \circ
\lambda B_{U'} \circ \h_{U',U} \\
&= \ensuremath{\imath^*}_{U'} \circ D_n' \circ \lambda B_{U'} \circ \h_{U',U} \ensuremath{\text{,}}
\end{align*}
where the induction hypothesis entered in the last step. As a
composition of formal power series of differential operators,
$\lambda D'_n \circ B_{U'}$ is again a formal power series of
differential operators, so we can apply our observation above
and find the desired $D_{n+1}'$.
With what has been shown so far, the existence of $S_{U'}$ is clear,
from which, as remarked at the beginning, the claim follows immediately.
\end{proof}
\begin{definition}
\label{def:reduziertesSternprodukt}
We also call the star product $\starred$ from Theorem
\ref{satz:SternproduktAufDemReduziertenPhasenraum} the
\neuerBegriff{reduced star product} (with respect to the quantum
Koszul scheme).
\end{definition}
\begin{bemerkung}
\label{bem:reduziertesSternprodukt}
\begin{bemerkungEnum}
\item %
The star product reduced via the quantum Koszul scheme
is the same as the one obtained by the BRST construction
of \cite{bordemann.herbig.waldmann:2000a}.
\item %
Theorem \ref{satz:SternproduktAufDemReduziertenPhasenraum} provides
a concrete, relatively simple formula for the star product on
the reduced phase space, one which even structurally
resembles the formula for the Poisson bracket on the reduced
phase space,
cf.\ Equation~\eqref{eq:ReduziertePoissonKlammer}.
\item %
Although the quantum restriction $\ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}$ is given quite
concretely by Equation \eqref{eq:QuantenEinschrDef}, it turns
out that the geometric series appearing there is very hard to
compute more explicitly, even in simple examples. The example
considered in \cite{cahen2007symplectic} illustrates
the difficulties to be expected.
\item %
If $\star$ is Hermitian, the question arises whether
$\starred$ is Hermitian as well. If $\star$ is in addition strongly
invariant and one chooses $\qJ = J +
\frac{1}{2}\I \lambda \Delta$ as the quantum momentum map, then the
star product $\starred$ is again Hermitian, as Gutt and Waldmann
showed in \cite[Prop.~4.7]{gutt2010involutions}.
\end{bemerkungEnum}
\end{bemerkung}
\subsection{Non-vanishing momentum values}
\label{sec:nichtVerschwindendeImpulswerte}
The quantum Koszul reduction described so far is only applicable in
the case of vanishing momentum values. For non-vanishing
$G$-invariant momentum values $\mu \in \lieAlgebra^*$, it is natural
to consider the momentum map shifted by $-\mu$ and then reduce at
zero.
More precisely, let $(M,\omega)$ be a symplectic manifold. Moreover,
let a Lie group $G$ act strongly Hamiltonianly, freely and properly on
$M$, let $J\colon M \to \lieAlgebra^*$ be a $G$-equivariant
momentum map, and let $\mu \in \lieAlgebra^*$ be a $G$-invariant
momentum value with $J^{-1}(\mu) \neq \emptyset$. Then $J^\mu := J -
\mu \colon M \to \lieAlgebra^*$ is also a $G$-equivariant momentum map,
and $J^{-1}(\mu) = ({J^\mu})^{-1}(0)$ holds, so the reduced
phase spaces, including their symplectic structures, trivially
coincide. Now let $\star$ be a star product for $M$ and $\qJ$ a
$G$-equivariant quantum momentum map for $J$ and $\star$. Then
$\qJ -\mu$ is also a $G$-equivariant quantum momentum map for
$J^\mu$ and $\star$, so via quantum Koszul reduction we obtain
a star product $\star_{\mathrm{red},J^\mu,0}$ for the phase space
reduced with respect to $J^\mu$ at the momentum value $0$. If $\star$
is strongly invariant with respect to $J$, then it is so with respect
to $J^\mu$ as well, since Poisson brackets and $\star$-commutators
with constant functions vanish. We then define the reduced star
product $\star_{\mathrm{red},\mu}$ as
$\star_{\mathrm{red},\mu} := \star_{\mathrm{red},J^\mu,0}$. For
general, not necessarily invariant momentum values, one can use the
shifting theorem, which we briefly recall first.
\begin{proposition}[Shifting theorem]
\label{prop:ShiftingTrick}
Let $(M,\omega)$ be a symplectic manifold, let $G$ be a
Lie group acting strongly Hamiltonianly, freely and properly on $M$,
and let $J \colon M \to \lieAlgebra^*$ be a $G$-equivariant
momentum map. Moreover, let $\mu$ be a regular value of $J$ with
$J^{-1}(\mu) \neq \emptyset$, let $G\mu$ be the coadjoint orbit, and
let $\omega_{G\mu}^+$ be the Kirillov-Kostant-Souriau symplectic form
on $G\mu$. Then $J^\mu \colon M \times G\mu \to \lieAlgebra^*$,
$(p,\nu) \mapsto J(p) - \nu$ is a $G$-equivariant momentum map
on the symplectic manifold $(M \times G\mu, \omega -
\omega_{G\mu}^+)$ for the canonical diagonal action of $G$ on $M
\times G \mu$, and the reduced phase spaces $(M \times G
\mu)_{\mathrm{red}}$ and $M_{\mathrm{red},\mu}$ are symplectomorphic,
where $M_{\mathrm{red},\mu}$ denotes the phase space reduced with
respect to the momentum value $\mu$.
\end{proposition}
The proof is not particularly difficult, but since it uses orbit
reduction, which we do not want to discuss, and since the theorem plays
no role in the remainder of this work, we refer to
\cite[Thm. 6.5.2]{ortega.ratiu:2004}.
One may now try to choose a star product $\star_{G \mu}$ on $C^\infty(G
\mu)$, consider $\hat{\star} := \star \otimes \star_{G \mu}$, and take
$\hat{\star}_{\mathrm{red}}$ as the reduced star product. For this
procedure to be satisfactory from a constructive point of view,
however, one would still have to investigate which freedom there is in
the choice of $\star_{G\mu}$ and how it affects
$\hat{\star}_{\mathrm{red}}$ and its equivalence class. If $\mu$ is
$G$-invariant, then of course $G\mu = \{\mu\}$, all freedom of choice
disappears, and this application of the shifting theorem reproduces
what was explained above.
\subsection{A homotopy for the full quantum Koszul complex}
\label{sec:HomotopieFuerGesamtenQuatenKoszulKomplex}
The following statements are not needed in the rest of this work; for
the sake of completeness, however, we want to explain how one can
obtain, in a fairly elementary way, a homotopy for the whole augmented
quantum Koszul complex. This becomes interesting when one considers
the full BRST complex; see \cite{bordemann.herbig.waldmann:2000a} and
\cite{herbig2007variations} for the singular case. There it is also
shown how to obtain the homotopy in a more abstract way, by means of
homological perturbation theory.
In the following we write $\qkoszul[0] := \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}$ and consider the
complex
\def\tA[#1]{A_{#1}}
\begin{equation}
\begin{tikzpicture}[baseline=(current
bounding box.center),description/.style={fill=white,inner sep=2pt}]
\matrix (m) [matrix of math nodes, row sep=3.0em, column
sep=3.5em, text height=1.5ex, text depth=0.25ex]
{
C^\infty(C)[[\lambda]] & C^\infty(M)[[\lambda]] \otimes \bigwedge\nolimits^0
\lieAlgebra & C^\infty(M)[[\lambda]]
\otimes \bigwedge\nolimits^1 \lieAlgebra & \dots \\
}; %
\path[<-] (m-1-1) edge node[auto]{$\qkoszul[0]$}(m-1-2); %
\path[<-] (m-1-2) edge node[auto]{$\qkoszul[1]$}(m-1-3); %
\path[<-] (m-1-3) edge node[auto]{$\qkoszul[2]$}(m-1-4); %
\end{tikzpicture}.
\end{equation}
We are now in a position to write down a homotopy for this complex. To
simplify the notation for the following considerations, we write
$\h[-1] := \prol$ and $\kkoszul[0] := \ensuremath{\imath^*}$, and set $\kkoszul[k] := 0$
for $k \leq -1$, $\h[k] := 0$ and $\Id[k] := 0$ for $k \leq -2$, $\Id[-1] :=
\operatorname{\mathsf{id}}_{C^\infty(C)}$ and $\Id[k] := \operatorname{\mathsf{id}}_{C^\infty(M) \otimes \bigwedge\nolimits^k
\lieAlgebra}$ for $k \geq 0$. With these conventions, the
equations $\ensuremath{\imath^*} \prol = \operatorname{\mathsf{id}}$, \eqref{eq:globalisiertHomotopie1} and
\eqref{eq:globalisiertHomotopie2} combine, for all $k \in \mathbb{Z}$, into the single equation
\begin{align}
\label{eq:KlassischeHomotopieAllesInEinem}
\h[k-1] \kkoszul[k] + \kkoszul[k+1] \h[k] = \Id[k] \ensuremath{\text{.}}
\end{align}
Since $\h[0] \prol = 0$, we can rewrite $\qh[0]$ in a form in which,
roughly speaking, the roles of $\qkoszul$ and $\qh$ are distributed
more symmetrically. Indeed, for $n \in \mathbb{N}$ we have
\begin{align*}
\lefteqn{ \h[0](\h[-1](\qkoszul[0] - \kkoszul[0]) + (\qkoszul[1]
- \kkoszul[1]) \h[0])^n} \\
&= \h[0](\prol(\qkoszul[0] - \kkoszul[0]) +
(\qkoszul[1] - \kkoszul[1])\h[0])(\h[-1](\qkoszul[0] - \kkoszul[0]) +
(\qkoszul[1] - \kkoszul[1])\h[0])^{n-1} \\ &= \dots = \h[0]((\qkoszul[1]
- \kkoszul[1])\h[0])^n \ensuremath{\text{,}}
\end{align*}
hence
\begin{align}
\label{eq:QuantenHomotopieGradNullSymmetrieisert}
\qh[0] = \h[0] \frac{\Id[0]}{\Id[0] + \h[-1](\qkoszul[0] - \kkoszul[0]) +
(\qkoszul[1] - \kkoszul[1])\h[0]} \ensuremath{\text{.}}
\end{align}
Using the classical homotopy property, see equation
\eqref{eq:globalisiertHomotopie2}, we can rewrite the denominator
somewhat more compactly.
\begin{align}
\label{eq:QuantenHomotopieGradNullSymmetrieisertKompakter}
\lefteqn{ \Id[0] + \h[-1](\qkoszul[0] - \kkoszul[0]) + (\qkoszul[1]
- \kkoszul[1])\h[0]}\notag\\ &= \h[-1]\kkoszul[0] + \kkoszul[1]\h[0] +
\h[-1](\qkoszul[0] - \kkoszul[0]) + (\qkoszul[1] - \kkoszul[1])\h[0] \notag\\
&= \h[-1]\qkoszul[0] + \qkoszul[1] \h[0] \ensuremath{\text{.}}
\end{align}
Accordingly,
\begin{align}
\label{eq:QuantenHomotopieGradNullSymmetrieisertKompakter2}
\qh[0] = \h[0]\frac{\Id[0]}{\h[-1]\qkoszul[0] + \qkoszul[1] \h[0]}\ensuremath{\text{.}}
\end{align}
This gives us a rough idea of how a homotopy for the quantum Koszul
complex could be defined.
\begin{proposition}
\label{prop:Quantenhomotopie1}
For every $k \in \mathbb{Z}$ the map $\h[k-1]\qkoszul[k] +
\qkoszul[k+1]\h[k]$ is invertible.
\end{proposition}
\begin{proof}
Since $\qkoszul$ is a deformation of $\partial$, the map
\begin{align*}
\lefteqn{\h[k-1]\qkoszul[k] + \qkoszul[k+1]\h[k] - \Id[k]}\\
&= \h[k-1]\kkoszul[k] + \kkoszul[k+1]\h[k] + \h[k-1](\qkoszul[k]
- \kkoszul[k]) + (\qkoszul[k+1] - \kkoszul[k+1])\h[k] - \Id[k]\\
&= \h[k-1](\qkoszul[k] - \kkoszul[k]) + (\qkoszul[k+1] -
\kkoszul[k+1])\h[k]
\end{align*}
starts at least at order $\lambda$, which implies the claim. The
second step of this computation uses equation
\eqref{eq:KlassischeHomotopieAllesInEinem}.
\end{proof}
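Since $A_k := \h[k-1]\qkoszul[k] + \qkoszul[k+1]\h[k] - \Id[k]$ begins at order $\lambda$, the inverse of $\Id[k] + A_k$ is given order by order in $\lambda$ by the geometric (Neumann) series $\sum_{n \geq 0} (-A_k)^n$, which terminates in each fixed order. The following Python sketch illustrates this mechanism for truncated formal series with matrix coefficients standing in for the operators; all helper names are hypothetical.

```python
import numpy as np

def series_mul(A, B, order):
    """Cauchy product of two matrix-coefficient series in a formal
    parameter (lists of d x d arrays), truncated at `order`."""
    d = A[0].shape[0]
    C = [np.zeros((d, d)) for _ in range(order + 1)]
    for k in range(order + 1):
        for i in range(k + 1):
            if i < len(A) and k - i < len(B):
                C[k] += A[i] @ B[k - i]
    return C

def neumann_inverse(A, order):
    """Invert Id + A for a series A with vanishing zeroth order:
    (Id + A)^{-1} = sum_n (-A)^n, exact in each order of the parameter."""
    d = A[0].shape[0]
    Id = [np.eye(d)] + [np.zeros((d, d)) for _ in range(order)]
    minus_A = [-a for a in A]
    inv, power = Id, Id
    for _ in range(order):
        power = series_mul(minus_A, power, order)
        inv = [x + y for x, y in zip(inv, power)]
    return inv

# A starts in first order, as A_k does by the proposition above.
order = 3
A = [np.zeros((2, 2)), np.array([[0.0, 1.0], [0.0, 0.0]]),
     np.eye(2), np.zeros((2, 2))]
Id_plus_A = [c.copy() for c in A]
Id_plus_A[0] += np.eye(2)
inv = neumann_inverse(A, order)
check = series_mul(Id_plus_A, inv, order)  # should be the identity series
```

That only finitely many terms of the series contribute in each order of the formal parameter is exactly the reason why invertibility holds in the formal power series setting.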
For $k \in \mathbb{Z}$ we now define the map
\begin{align}
\label{eq:DefinitionQuantenHomotopie}
\qh[k] := \h[k] \frac{\operatorname{\mathsf{id}}}{\h[k-1] \qkoszul[k] + \qkoszul[k+1]\h[k]}
\end{align}
and show in the next proposition that this yields a homotopy for the
augmented quantum Koszul complex.
\begin{proposition}
\label{prop:QuantenHomotopie}
For all $k \in \mathbb{Z}$ we have the homotopy equation
\begin{align}
\label{eq:QuantenHomotopie}
\qh[k-1]\qkoszul[k] + \qkoszul[k+1]\qh[k] = \Id[k]\ensuremath{\text{.}}
\end{align}
For $k = -1$ we have explicitly
\begin{align}
\label{eq:QuantenHomotopieGradMinusEins}
\qh[-1] = \prol
\end{align}
and for $k = 0$
\begin{align}
\label{eq:QuantenHomotopieGradNull}
\qh[0] = \h[0] \frac{\Id[0]}{\Id[0] + (\qkoszul[1] - \kkoszul[1])\h[0]} \ensuremath{\text{.}}
\end{align}
\end{proposition}
\begin{proof}
Let $k \in \mathbb{Z}$ and set $A_k := \h[k-1]\qkoszul[k] +
\qkoszul[k+1]\h[k] - \Id[k]$. By Proposition
\ref{prop:Quantenhomotopie1}, $\Id[k] + A_k$ is invertible. By
$\qkoszul^2 = 0$ and the definition of $A_k$ we obviously have
\begin{align*}
A_k \qkoszul[k+1] = (\h[k-1]\qkoszul[k] + \qkoszul[k+1]\h[k] -
\Id[k])\qkoszul[k+1] = 0 + \qkoszul[k+1]\h[k]\qkoszul[k+1] - \qkoszul[k+1]
\end{align*}
and
\begin{align*}
\qkoszul[k]A_k = \qkoszul[k](\h[k-1]\qkoszul[k] + \qkoszul[k+1]\h[k]
- \Id[k]) = \qkoszul[k] \h[k-1]\qkoszul[k] + 0 - \qkoszul[k] \ensuremath{\text{.}}
\end{align*}
Thus we obtain the commutation relation
\begin{align*}
A_k \qkoszul[k+1] = \qkoszul[k+1] A_{k+1}\ensuremath{\text{,}}
\end{align*}
from which we immediately obtain the relation
\begin{align*}
\frac{\Id[k]}{\Id[k] + A_k} \qkoszul[k+1] = \qkoszul[k+1]
\frac{\Id[k+1]}{\Id[k+1] + A_{k+1}} \ensuremath{\text{.}}
\end{align*}
Consequently, it is clear that
\begin{align*}
\qh[k-1]\qkoszul[k] + \qkoszul[k+1]\qh[k] &= \h[k-1]
\frac{\Id[k-1]}{\Id[k-1] + A_{k-1}}\qkoszul[k] + \qkoszul[k+1]\h[k]
\frac{\Id[k]}{\Id[k] + A_k} \\
&=
\h[k-1] \qkoszul[k]\frac{\Id[k]}{\Id[k] + A_{k}} + \qkoszul[k+1]\h[k] \frac{\Id[k]}{\Id[k] + A_k}\\
&= ( \h[k-1]\qkoszul[k] + \qkoszul[k+1]\h[k])\frac{\Id[k]}{\Id[k] + A_k} \\
&= \Id[k]
\end{align*}
holds. This establishes the homotopy property.
For $k = -1$ we find
\begin{align*}
\qh[-1] = \h[-1]\frac{\Id[-1]}{\h[-2]\qkoszul[-1] +
\qkoszul[0]\h[-1]} = \prol \frac{\Id[-1]}{0 + \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}\prol} = \prol \ensuremath{\text{.}}
\end{align*}
The rest is clear from equation
\eqref{eq:QuantenHomotopieGradNullSymmetrieisertKompakter2}.
\end{proof}
\chapter{Quantum Koszul reduction on cotangent bundles}
\label{cha:QuantenKoszulAufKotangentialbuendel}
We now want to apply the reduction method for star products presented
in Section \ref{sec:QuantenKoszul} to the special case of cotangent
bundles, and to compare the resulting star product on the reduced phase
space with the one constructed by Kowalzig, Neumaier and Pflaum in
\cite{kowalzig.neumaier.pflaum:2005a}.
\section{Cotangent bundles with magnetic field}
\label{sec:KotangentialBuendelMitMagnetfeld}
We consider the cotangent bundle $\mathfrak{p} \dpA M = T^*Q \to Q$
of a manifold $Q$, equipped with the symplectic form $\omegaB
= \omegaKan + \mathfrak{p}^* B$, where $\omegaKan \in
\Gamma^\infty(\bigwedge\nolimits^2 T^*(T^*Q))$ denotes the canonical symplectic
form and $B \in \Gamma^\infty(\bigwedge\nolimits^2 T^*Q)$ is an arbitrary closed
two-form. Recall that $\omegaKan$ is defined by $\omegaKan := -
d \theta_{\kanSymbol}$. Here $\theta_{\kanSymbol} \in
\Gamma^\infty(T^*(T^*Q))$ is the canonical one-form, which, for $\alpha_q \in
T^*_{q}Q$ and $w_{\alpha_q} \in T_{\alpha_q}(T^*Q)$, is given by
\begin{align}
\label{eq:KanonischeEinsform}
\theta_{\kanSymbol}\at{\alpha_q}(w_{\alpha_q}) = \alpha_q(T_{\alpha_q}\mathfrak{p}(w_{\alpha_q})) \ensuremath{\text{.}}
\end{align}
As is well known (cf.\ \cite[Abschnitt 3.2.1]{waldmann:2007a},
\cite[Prop. 4.4]{lee:2003a}), a chart $(U,(x^1,\dots,x^n))$ of $Q$
induces a chart
$(\mathfrak{p}^{-1}(U),(q^1,\dots,q^n,p_1,\dots,p_n))$ of $T^*Q$ with
\begin{align}
\label{eq:KotangentialbuendelKoordinaten1}
q^i(\alpha_q) = x^i(q)
\end{align}
and
\begin{align}
\label{eq:KotangentialbuendelKoordinaten2}
p_i(\alpha_q) = \alpha_q \left(\frac{\partial}{\partial x^i}\at{q} \right)
\end{align}
for $i \in \{1,\dots,n\}$, $q \in Q$ and $\alpha_q \in T^*_qQ$. One
easily checks (cf.\ \cite[Lem.~3.2.2]{waldmann:2007a}) that
$\theta_{\kanSymbol}$ takes the following form in these coordinates.
\begin{align}
\label{eq:KanonischeEinsformInKoordinaten}
\theta_{\kanSymbol}\at{\mathfrak{p}^{-1}(U)} = p_i d q^i \ensuremath{\text{.}}
\end{align}
Applying the definition of $\omegaKan$, we then immediately obtain
\begin{align}
\label{eq:KanonischeSymplektischeFormInKoordinaten}
\omegaKan\at{\mathfrak{p}^{-1}(U)} = dq^i \wedge
d p_i \ensuremath{\text{.}}
\end{align}
A physical interpretation of the situation described above is a system
of finitely many charged particles in the sense of classical point
mechanics, whose motion is restricted to the configuration space $Q$,
for instance by constraints, and which are moreover placed in a
magnetic field $B$.
We assume that $Q$ carries an action $\phi \colon G \times Q \to Q$
and lift it to an action $\Phi \colon G \times T^*Q \to T^*Q$ with
$\Phi_g = \Phi(g,\cdot) := \mathsf{T}^*(\phi_{g^{-1}})$ for all $g \in
G$. Here, for $\phi \colon Q \to Q$, the map $\mathsf{T}^*\phi \colon
T^*Q \to T^*Q$ denotes the cotangent lift, which, as is well known, is
defined for all $q \in Q$, $\alpha_{\phi(q)} \in T^*_{\phi(q)}Q$ and
$v_q \in T_qQ$ by $(\mathsf{T}^*\phi(\alpha_{\phi(q)}))(v_q) :=
\alpha_{\phi(q)}(T_q\phi(v_q))$. Since for $\phi \colon Q \to Q$ the
cotangent lift $\mathsf{T}^*\phi$ is a vector bundle isomorphism over
$\phi^{-1}$, see e.g.\ \cite[Satz 3.2.11]{waldmann:2007a}, it is clear
that $\Phi$ defines a group action. In fact, $\Phi$ is also smooth, as
is easily seen in bundle charts.
Note that the properties of an action of being free and proper are
inherited by the lifted action, see Proposition
\ref{prop:eigentlichAnsteckend}.
If $B = 0$, there is a canonical $G$-equivariant momentum map $\JKan
\colon T^*Q \to \lieAlgebra^*$, given for $q \in Q$ and $\alpha_q \in
T_q^*Q$ by $\JKan(\alpha_q)(\xi) := \dPaar{\alpha_q}{\xi_Q(q)}$, cf.\
e.g.\ \cite[Satz 3.3.39]{waldmann:2007a}. The question under which
conditions there is a classical momentum map in the situation with $B
\neq 0$, and what it looks like in that case, is settled by the
following well-known lemma, which was also proved in
\cite{kowalzig.neumaier.pflaum:2005a}.
\begin{lemma}
\label{lem:ImpulsAbbildungMitMagnetfeld}
\begin{lemmaEnum}
\item %
$G$ acts symplectically with respect to $\omega_B$ if and only if
$B$ is $G$-invariant.
\item %
Let $B$ be $G$-invariant. Then there is a classical momentum map
with respect to $\omegaB$ if and only if there is a smooth map
$j_0 \colon Q \to \lieAlgebra^*$ such that for all $\xi
\in \lieAlgebra$ the equation
\begin{align}
\label{eq:ImpulsAbbildungMitMagnetfeld}
d \dPaar{j_0}{\xi} = B(\xi_Q,\cdot)
\end{align}
holds. In this case,
$\dPaar{\JB}{\xi} = \dPaar{\JKan}{\xi} +
\mathfrak{p}^*\dPaar{j_0}{\xi}$ for $\xi \in \lieAlgebra$ defines a
classical momentum map $\JB \colon T^*Q \to
\lieAlgebra^*$ with respect to $\omegaB$. Moreover, $\JB$ is $G$-equivariant
if and only if $j_0$ is $G$-equivariant.
\end{lemmaEnum}
\end{lemma}
\begin{proof}
\begin{beweisEnum}
\item %
For $g \in G$ we have
\begin{align*}
\Phi_g^* \omegaB = \Phi_g^* \omegaKan + \Phi_g^* \mathfrak{p}^* B =
\omegaKan + \mathfrak{p}^* \phi_g^* B
\end{align*}
and hence
\begin{align*}
\Phi_g^* \omegaB = \omegaB \iff \mathfrak{p}^* B = \mathfrak{p}^*
\phi_g^* B \iff B = \phi_g^* B \ensuremath{\text{.}}
\end{align*}
The last equivalence uses that $\mathfrak{p}$ is a surjective
submersion.
\item %
For $\xi \in \lieAlgebra$, $q \in Q$, $\alpha_q \in T^*_qQ$ and
$v_{\alpha_q}\in T_{\alpha_q}T^*Q$ we have
\begin{align*}
\omegaB(\alpha_q)(\xi_{T^*Q}(\alpha_q),v_{\alpha_q}) &=
\omegaKan(\alpha_q)(\xi_{T^*Q}(\alpha_q),v_{\alpha_q}) +
B(q)(T_{\alpha_q} \mathfrak{p} \xi_{T^*Q}(\alpha_q),T_{\alpha_q} \mathfrak{p} v_{\alpha_q})\\
&= \dPaar{d_{\alpha_q} \JKan(\xi)}{v_{\alpha_q}} +
B(q)(\xi_Q(q),T_{\alpha_q}
\mathfrak{p} v_{\alpha_q}) \\
&= \dPaar{d_{\alpha_q} \JKan(\xi)}{v_{\alpha_q}} + \dPaar{B(q)(\xi_Q(q),\cdot)}{T_{\alpha_q} \mathfrak{p} v_{\alpha_q}} \\
&= \dPaar{d_{\alpha_q} \JKan(\xi)}{v_{\alpha_q}} +
\dPaar{(\mathfrak{p}^*(B(\xi_Q,\cdot)))(\alpha_q)}{v_{\alpha_q}}
\ensuremath{\text{.}}
\end{align*}
Consequently,
\begin{align*}
\lefteqn{\text{$J \colon T^*Q \to \lieAlgebra^*$ is a
momentum map w.r.t.\ $\omegaB$} } \\
&\iff X_{J(\xi)} = \xi_{T^*Q} \quad \forall \xi \in
\lieAlgebra \quad \text{w.r.t.\ $\omegaB$}
\\
&\iff d J(\xi) = \omega_B(\xi_{T^*Q},\cdot) \quad \forall \xi \in \lieAlgebra\\
&\iff d(J(\xi) - \JKan(\xi)) = \mathfrak{p}^*(B(\xi_Q,\cdot))
\quad \forall \xi \in \lieAlgebra \ensuremath{\text{.}}
\end{align*}
If now $j_0 \colon Q \to \lieAlgebra^*$ is a smooth map with
$d\dPaar{j_0}{\xi} = B(\xi_Q,\cdot)$, it is clear from these
considerations that $\JB := \JKan + \mathfrak{p}^*j_0$ is a
momentum map with respect to $\omegaB$. Conversely, if $J$ is a
momentum map with respect to $\omegaB$, the above yields the
equation $d\dPaar{j_0}{\xi} = B(\xi_Q,\cdot)$ for $j_0 := n^*(J -
\JKan)$, where $n \colon Q \to T^*Q$ denotes the zero
section. Finally, since $\JKan$ is $G$-equivariant, in the case of
the existence of a $j_0$ the $G$-equivariance of $\JB = \JKan +
\mathfrak{p}^*j_0$ is equivalent to the $G$-equivariance of $j_0$.
\end{beweisEnum}
\end{proof}
\begin{bemerkung}
\label{bem:KanonischeImpulsabbildung}
If $B = 0$, one can choose $j_0 = 0$ in Lemma
\ref{lem:ImpulsAbbildungMitMagnetfeld} and obtains $\JB = \JKan$.
\end{bemerkung}
In the following we always assume the existence of a classical
$G$-equivariant momentum map $\JB$ with respect to $\omegaB$ of the
form $\dPaar{\JB}{\xi} = \dPaar{\JKan}{\xi} + \mathfrak{p}^*\dPaar{j_0}{\xi}$
for $\xi \in \lieAlgebra$ with a $G$-equivariant $j_0$ as in
Lemma \ref{lem:ImpulsAbbildungMitMagnetfeld}, and we consider only
such momentum maps when speaking of $\JB$.
Let $VQ$ denote the vertical bundle of the principal fiber bundle $\varpi \dpA Q \to Q/G$,
whose fibers are given by $V_qQ := \{\xi_Q(q) \mid \xi \in \lieAlgebra
\}$ for all $q \in Q$.
Further, let $\gamma \colon TQ \to \lieAlgebra$ be a
connection one-form for $\varpi \dpA Q \to Q/G$
(cf.\ Prop.~\ref{prop:ExistenzHauptfaserbuendelzusammenhang} and Prop.~\ref{prop:HauptfaserBuendelZusammenhang}). This
yields a decomposition of the tangent bundle
\begin{align}
\label{eq:ZerlegungTangentialbuendel}
TQ = HQ \oplus VQ \ensuremath{\text{,}}
\end{align}
where $HQ$ denotes the horizontal bundle induced by $\gamma$, whose
fibers are given by $H_qQ =
\{v_q \in T_qQ \mid \gamma(v_q) = 0 \in \lieAlgebra\}$ for all
$q \in Q$. Setting $H^{*^\gamma}_qQ := \{\alpha \in T_{q}^*Q \mid
\alpha\at{V_qQ} = 0\}$ and $V^{*^\gamma}_qQ := \{\alpha_q \in T_q^*Q
\mid \alpha_q\at{H_qQ} = 0\}$, the sets $H^{*^\gamma}Q := \bigcup_{q \in
Q}H^{*^\gamma}_qQ$ and $V^{*^\gamma}Q := \bigcup_{q \in
Q}V^{*^\gamma}_qQ$ are vector subbundles of $T^*Q$ and $T^*Q =
H^{*^\gamma}Q \oplus V^{*^\gamma}Q$. Note that the decompositions
$TQ = HQ \oplus VQ$ and $T^*Q = H^{*^\gamma}Q \oplus V^{*^\gamma}Q$
are $G$-invariant. Here $G$ acts on $TQ$ via $G \times TQ \ni (g,v)
\mapsto T\phi_gv \in TQ$.
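Pointwise, the splitting $T^*Q = H^{*^\gamma}Q \oplus V^{*^\gamma}Q$ determined by $\gamma$ is plain linear algebra: the fundamental vector fields frame $V_qQ$, and $\gamma(\xi_Q(q)) = \xi$ makes $V\gamma$ the projection onto $V_qQ$ along $H_qQ = \ker\gamma$, so a covector decomposes by precomposition with these projections. A small numerical sketch (all names hypothetical; a generic full-rank frame stands in for the fundamental vector fields, and the pseudoinverse for one possible choice of $\gamma$):

```python
import numpy as np

# Pointwise linear-algebra model of T*_qQ = H*_qQ (+) V*_qQ.
# V: columns are the values of a frame of V_qQ (fundamental vector fields),
# gamma: the connection one-form at q as a matrix, with gamma @ V = Id.
d, m = 4, 2
rng = np.random.default_rng(0)
V = rng.normal(size=(d, m))          # generic, hence full column rank
gamma = np.linalg.pinv(V)            # one choice satisfying gamma @ V = Id

P_vert = V @ gamma                   # projection of T_qQ onto V_qQ along ker(gamma)
alpha = rng.normal(size=d)           # a covector, as a row vector
alpha_v = alpha @ P_vert             # vertical part: vanishes on H_qQ = ker(gamma)
alpha_h = alpha - alpha_v            # horizontal part: vanishes on V_qQ
```

Here `alpha_h @ V = 0` reflects $\alpha_h\at{V_qQ} = 0$, i.e.\ $\alpha_h \in H^{*^\gamma}_qQ$, and `alpha_v` vanishes on every horizontal vector.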
The vector bundle structure of $T^*Q$ allows us to speak of functions
\begin{align*}
\mathcal{P}(Q) := \{f \in C^\infty(T^*Q) \mid f\at{T^*_qQ} \text{ is
polynomial for all $q \in Q$} \} \subset C^\infty(T^*Q)
\end{align*}
that are polynomial in the momentum coordinates, or of polynomial
functions for short. In order to exploit this additional structure, it
seems reasonable to restrict attention to star products $\star$ for which
$\mathcal{P}(Q)[[\lambda]]$ is a $\star$-subalgebra of
$C^\infty(T^*Q)[[\lambda]]$.
Following \cite{kowalzig.neumaier.pflaum:2005a}, in this chapter we
will only consider star products $\star$ for $(M,\omega_B)$ with the
following properties.
\begin{compactitem}
\item $\star$ is $G$-invariant, i.e.\ invariant under the lifted
action $\Phi$ on $T^*Q$.
\item $\qJ[B]$ is a $G$-equivariant quantum momentum map for the
classical momentum map $J_B$ and the star product $\star$.
\item $\mathcal{P}(Q)[[\lambda]]$ is a $\star$-subalgebra of
$C^\infty(T^*Q)[[\lambda]]$.
\item $\qJ[B](\xi) \in \mathcal{P}(Q)[[\lambda]]$ for all $\xi \in \lieAlgebra$.
\end{compactitem}
With considerable effort, Kowalzig, Neumaier and Pflaum were able to
show that the last requirement already follows from the others, see
\cite[Cor.~4.14~ii)]{kowalzig.neumaier.pflaum:2005a}. Note that the
classical momentum map satisfies $J_B(\xi) \in \mathcal{P}(Q)$ for all
$\xi \in \lieAlgebra$ by construction.
\section{The reduced star product for vanishing momentum value and vanishing magnetic field}
\label{sec:SpezialFallImpulsMagnetfeldNull}
In this section we turn to the special case of vanishing momentum
value $\mu = 0$ and vanishing magnetic field. We thus assume $B = 0$
and $J_B = \JKan$ and write $\qJ[\mathrm{kan}] = \qJ[B]$.
Since $\JKan(\xi)(\alpha_q) = \alpha_q(\xi_Q(q))$ for $q \in Q$ and
$\alpha_q \in T_q^*Q$, we obviously have
\begin{align}
\label{eq:ImpulsniveauFlaecheSpezialfall}
C_{\mathrm{kan}} := \JKan^{-1}(0) &= \{\alpha \in T^*Q \mid
\JKan(\alpha)(\xi) = 0 \,
\forall \xi \in \lieAlgebra\} \notag \\
&= \{\alpha \in T^*Q \mid \alpha(\xi_Q(\mathfrak{p}(\alpha))) = 0 \,
\forall \xi \in \lieAlgebra\} = H^{*^\gamma}Q \ensuremath{\text{.}}
\end{align}
For $\mu \in \lieAlgebra^*$,
\begin{align}
\label{eq:Zusammenhangseinsform}
\dPaar{\Gamma_\mu(q)}{v_q}
:= \dPaar{\mu}{\gamma(v_q)} \quad \text{for $q \in Q$, $v_q \in T_qQ$}
\end{align}
defines a smooth one-form
\begin{align}
\Gamma_\mu \colon Q \ni q \mapsto \Gamma_\mu(q) \in T^*Q
\end{align}
and $\Gamma_{\mu}(q) \in V^{*^\gamma}_qQ$ holds for all $q \in Q$.
We next consider the map
\begin{align}
\label{eq:globaleTube}
\Psi' \dpA C_{\mathrm{kan}} \times \lieAlgebra^* \to T^*Q = C_{\mathrm{kan}} \oplus
V^{*^\gamma}Q, \quad (c,\mu) \mapsto c + \Gamma_\mu(\mathfrak{p}(c))
\ensuremath{\text{.}}
\end{align}
\begin{proposition}
\label{prop:globaleTubenabbildung}
$\Psi'$ is a global, smooth, $G$-equivariant, \tn{good}
tubular neighborhood of $C_{\mathrm{kan}}$ in $T^*Q$.
\end{proposition}
\begin{proof}
$\Psi'$ is obviously smooth. Let $(c,\mu), (c',\mu') \in C_{\mathrm{kan}} \times
\lieAlgebra^*$ with $c + \Gamma_\mu(\mathfrak{p}(c)) = c' + \Gamma_{\mu'}(\mathfrak{p}(c'))$.
Then one immediately obtains $c = c'$ and $\dPaar{\mu -
\mu'}{\gamma(v_{\mathfrak{p}(c)})} = 0$ for all $v_{\mathfrak{p}(c)} \in
T_{\mathfrak{p}(c)}Q$. Using $\gamma(\xi_Q(\mathfrak{p}(c))) = \xi$ for $\xi \in \lieAlgebra$,
it follows that $\dPaar{\mu - \mu'}{\xi} = 0$ for all $\xi \in
\lieAlgebra$, hence $\mu = \mu'$. This shows that $\Psi'$ is
injective. Viewing $C_{\mathrm{kan}} \times \lieAlgebra^*$ as a vector bundle over
$Q$, the map $\Psi'$ is a vector bundle morphism over the identity, so
by injectivity and for dimensional reasons $\Psi'$ must already be a
diffeomorphism.
That it is indeed a tube map is clear, since $\Psi'(c,0) = c$ for all
$c \in C_{\mathrm{kan}}$.
To check equivariance, let $g \in G$. Then, for $q \in Q$ and
$v_{gq} \in T_{gq}Q$,
\begin{align*}
\dPaar{\Gamma_{g\mu}(gq)}{v_{gq}} &= \dPaar{g \mu}{\gamma(v_{gq})} =
\dPaar{\mu}{g^{-1} \gamma(v_{gq})}\\ &=
\dPaar{\mu}{\gamma(g^{-1} v_{gq})} =
\dPaar{\Gamma_{\mu}(q)}{g^{-1} v_{gq}} = \dPaar{g
\Gamma_\mu(q)}{v_{gq}} \ensuremath{\text{.}}
\end{align*}
Accordingly, for all $c \in C_{\mathrm{kan}}$, $g \in G$ and $\mu \in \lieAlgebra^*$ we have
\begin{align*}
\Psi'(g(c,\mu)) &= \Psi'((gc,g\mu)) = gc + \Gamma_{g\mu}(\mathfrak{p}(gc)) \\ &= gc
+ \Gamma_{g\mu}(g\mathfrak{p}(c)) = g(c + \Gamma_\mu(\mathfrak{p}(c))) = g\Psi'((c,\mu))
\ensuremath{\text{.}}
\end{align*}
Now let $c \in C_{\mathrm{kan}}$ and $\mu \in \lieAlgebra^*$. Then, for all $\xi \in
\lieAlgebra$,
\begin{align*}
\dPaar{\JKan(\Psi'(c,\mu))}{\xi} &= \dPaar{\Psi'(c,\mu)}{\xi_Q(\mathfrak{p}(c))} \\ &=
\dPaar{c}{\xi_Q(\mathfrak{p}(c))} + \dPaar{\Gamma_\mu(\mathfrak{p}(c))}{\xi_Q(\mathfrak{p}(c))}
= \dPaar{\mu}{\gamma(\xi_Q(\mathfrak{p}(c)))} = \dPaar{\mu}{\xi} \ensuremath{\text{.}}
\end{align*}
Here the third step used $c \in C_{\mathrm{kan}}$, i.e.\ $c \in
H_{\mathfrak{p}(c)}^{*^\gamma}Q$, together with $\xi_Q(\mathfrak{p}(c)) \in
V_{\mathfrak{p}(c)}Q$.
Hence $\Psi'$ is indeed a \tn{good} tube map.
\end{proof}
In the following we write $\Psi := \Psi'^{-1}$.
The fact that in this section we were not dealing with an arbitrary
symplectic manifold with some $G$-action, but with a cotangent bundle
with lifted $G$-action, made it possible, after choosing a connection
one-form, to construct the particularly simple global \tn{good} tube
map $\Psi'$. In particular, the vector bundle structure of $T^*Q$ was
essential. We want to exploit it heavily in the further considerations
as well. On the one hand, as already mentioned, it allows us to speak
of polynomial functions. On the other hand, the chosen connection
one-form yields a decomposition of the tangent bundle of $Q$ and
thereby, as we will explain in more detail shortly, also
decompositions of the tensor bundle constructions induced by it. These
decompositions carry over, as is well known, to the polynomial
functions $\mathcal{P}(Q)$, since the latter are in bijection with the
sections of the symmetric powers of $TQ$. We recall this fact in the
following proposition, cf.\ also \cite[2.2.23]{waldmann:2007a}.
To this end, let
\begin{align}\label{eq:SymmetrischeAlgebraDefGleichung}
\Gamma^\infty(\bigvee\nolimits TQ) := \bigoplus_{k=0}^\infty
\Gamma^\infty({\bigvee\nolimits}^k TQ)
\end{align}
denote the symmetric algebra of the $C^\infty(Q)$-module $\Gamma^\infty(TQ)$.
\begin{proposition}
\label{prop:universelleImpulsabbildung}
The linear map
\begin{align}
\Gamma^\infty( TQ) \to \mathcal{P}(Q), \quad s \mapsto
(T^*Q \ni \alpha \mapsto \dPaar{\alpha}{s(\mathfrak{p}(\alpha))})
\end{align}
extends uniquely to an algebra isomorphism
\begin{align}
\mathsf{P}\dpA \Gamma^\infty(\bigvee\nolimits TQ) \to \mathcal{P}(Q)\ensuremath{\text{.}}
\end{align}
In particular, for $k \in \mathbb{N}\setminus\{0\}$ and $s \in
\Gamma^\infty(\bigvee\nolimits^k TQ)$ one has
\begin{align}
\label{eq:universelleImpulsabbildung}
\mathsf{P}(s)(\alpha_q) = \frac{1}{k!}
s(q)(\alpha_q,\dots,\alpha_q)
\end{align}
for all $q \in Q$ and $\alpha_q \in T_q^*Q$, as well as
\begin{align}
\label{eq:universelleImpulsabbildungGrad0}
\mathsf{P}(\chi) = \mathfrak{p}^* \chi
\end{align}
for $\chi \in \CM[Q]$.
\end{proposition}
\begin{proof}
Clear.
\end{proof}
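For constant-coefficient symmetric tensor fields on $Q = \mathbb{R}^n$, the formula \eqref{eq:universelleImpulsabbildung} and the algebra-morphism property of $\mathsf{P}$ can be checked directly in degree two, using the evaluation convention in which $(u \vee v)(\alpha,\beta)$ sums over all permutations. A minimal Python sketch under these assumptions (all function names hypothetical):

```python
import numpy as np

def vee(u, v):
    """Symmetric product u v of two constant vector fields on R^n,
    realized as the tensor u(x)v + v(x)u (summation over permutations)."""
    return np.outer(u, v) + np.outer(v, u)

def P_deg1(u, alpha):
    """P on a vector field: the fiberwise linear function alpha -> <alpha, u>."""
    return float(u @ alpha)

def P_deg2(S, alpha):
    """P on a degree-2 symmetric tensor s: P(s)(alpha) = (1/2!) s(alpha, alpha)."""
    return 0.5 * float(alpha @ S @ alpha)

u = np.array([1.0, 2.0, 0.0])
v = np.array([0.0, 1.0, 3.0])
alpha = np.array([0.5, -1.0, 2.0])
lhs = P_deg2(vee(u, v), alpha)             # P(u v v)(alpha)
rhs = P_deg1(u, alpha) * P_deg1(v, alpha)  # P(u)(alpha) * P(v)(alpha)
```

With this convention, $\mathsf{P}(u \vee v) = \mathsf{P}(u)\,\mathsf{P}(v)$, which is the morphism property in the lowest nontrivial degree.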
\begin{bemerkung}
\label{bem:GInvarianzDesIsoszwischenPolysUndSyms}
The action of $G$ on $TQ$ via $G \times TQ \ni (g,v) \mapsto
T\phi_gv \in TQ$ induces, for every $k \in \mathbb{N}$, an action
on $\bigvee^kTQ$ by linear extension of $g(v_q^1 \vee
\dots \vee v_q^k) := g v_q^1 \vee \dots \vee g v_q^k$ for all $g \in
G$, $q \in Q$ and $v_q^1,\dots,v_q^k \in T_qQ$; in particular,
$(gv)(\alpha_q^1,\dots,\alpha_q^k) = v(g^{-1} \alpha_q^1,\dots,g^{-1}
\alpha_q^k)$. As is well known, this in turn yields a $G$-action on
the sections $\Gamma^\infty(\bigvee^kTQ)$ by $(gs)(q) :=
g(s(g^{-1}q))$ for all $g \in G$, $q \in Q$ and $s \in
\Gamma^\infty(\bigvee^k TQ)$. With this, one easily sees that $\mathsf{P}$
is $G$-equivariant.
\end{bemerkung}
\begin{proposition}
\label{prop:Zerlegung}
$\mathrm{PV}(\Gamma^\infty(\bigvee\nolimits TQ)) := \bigoplus_{m = 0,n =
1}^\infty \Gamma^\infty(\bigvee\nolimits^m HQ \otimes \bigvee\nolimits^n VQ)$ becomes,
via $(h^m\otimes v^n) \vee (h'^{m'} \otimes v'^{n'}) := (h^m \vee
h'^{m'}) \otimes (v^n \vee v'^{n'})$ for all $m,m',n,n' \in
\mathbb{N}$ and $h^m\in \Gamma^\infty(\bigvee\nolimits^m HQ)$, $h'^{m'} \in
\Gamma^\infty(\bigvee\nolimits^{m'} HQ)$, $v^n \in \Gamma^\infty(\bigvee\nolimits^n
VQ)$ and $v'^{n'}\in \Gamma^\infty(\bigvee\nolimits^{n'} VQ)$, together with
$C^\infty(Q)$-linear extension, an associative,
$\mathbb{N}\times \mathbb{N}^+$-graded algebra, where $x \in
\mathrm{PV}(\Gamma^\infty(\bigvee\nolimits TQ))$ is
\neuerBegriff{homogeneous} of degree $(m,n) \in \mathbb{N} \times
\mathbb{N}^+$ if and only if $x \in \Gamma^\infty(\bigvee\nolimits^m HQ \otimes
\bigvee\nolimits^n VQ)$. Likewise, $\bigoplus_{m,n = 0}^\infty
\Gamma^\infty(\bigvee\nolimits^m HQ \otimes \bigvee\nolimits^n VQ)$ is an associative,
$\mathbb{N}\times \mathbb{N}$-graded algebra.
\end{proposition}
\begin{proof}
Clear.
\end{proof}
\begingroup
\emergencystretch=0.8em
\begin{definition}
\label{def:TeilweiseUndVollstaendigVertikaleSchnitte} %
We call elements of the graded algebra
$\mathrm{H}(\Gamma^\infty(\bigvee\nolimits TQ)) :=
\bigoplus_{k=0}^\infty\Gamma^\infty(\bigvee\nolimits^k HQ)$
\neuerBegriff{totally horizontal sections} and elements of
$\mathrm{PV}(\Gamma^\infty(\bigvee\nolimits TQ))$ \neuerBegriff{partially
vertical sections}. A homogeneous element $T \in \Gamma^\infty(\bigvee\nolimits^m
HQ \otimes \bigvee\nolimits^n VQ)$ of degree $(m,n) \in
\mathbb{N} \times \mathbb{N}^+$ is also called
\neuerBegriff{horizontal of degree $m$} and \neuerBegriff{vertical of
degree $n$}.
\end{definition}
\endgroup
Using the rules of tensor calculus,
cf.\ Proposition \ref{prop:RechenregelnSchnitteUndSummen}, we then see
\begingroup
\allowdisplaybreaks[1]
\begin{align}
\label{eq:ZerlegungSymmetrischeSchnitte}
\Gamma^\infty(\bigvee\nolimits TQ) &= \bigoplus_{k=0}^\infty\Gamma^\infty(\bigvee\nolimits^k TQ) \notag \\
&=\bigoplus_{k=0}^\infty \Gamma^\infty(\bigvee\nolimits^k(HQ \oplus VQ)) \notag \\
&\simeq \bigoplus_{k=0}^\infty
\Gamma^\infty(\bigoplus_{l=0}^k(\bigvee\nolimits^{k-l}HQ \otimes \bigvee\nolimits^l
VQ)) \notag \\
&\simeq \bigoplus_{k=0}^\infty\bigoplus_{l=0}^k
\Gamma^\infty(\bigvee\nolimits^{k-l}HQ \otimes \bigvee\nolimits^l
VQ) \notag \\
&= \bigoplus_{m,n=0}^\infty \Gamma^\infty(\bigvee\nolimits^{m}HQ \otimes
\bigvee\nolimits^n
VQ) \notag \\
&= \bigoplus_{k=0}^\infty \Gamma^\infty(\bigvee\nolimits^k HQ) \oplus
\bigoplus_{m=0,n=1}^\infty \Gamma^\infty(\bigvee\nolimits^{m}HQ \otimes \bigvee\nolimits^n VQ) \notag
\\
&= \mathrm{H}(\Gamma^\infty (\bigvee\nolimits TQ)) \oplus \mathrm{PV}(\Gamma^\infty
(\bigvee\nolimits TQ)) \ensuremath{\text{.}}
\end{align}
\endgroup
In the following we denote the projections associated with this
decomposition by
$\mathrm{H} \dpA \Gamma^\infty (\bigvee\nolimits TQ) \to \mathrm{H}(\Gamma^\infty
(\bigvee\nolimits TQ))$ and $\mathrm{PV} \dpA \Gamma^\infty (\bigvee\nolimits TQ) \to
\mathrm{PV}(\Gamma^\infty (\bigvee\nolimits TQ))$. The decomposition
\eqref{eq:ZerlegungSymmetrischeSchnitte} is obviously $G$-invariant,
so the projections $\mathrm{H}$ and $\mathrm{PV}$ are $G$-equivariant
as well. By means of the isomorphism $\mathsf{P}$ from
Proposition~\ref{prop:universelleImpulsabbildung}, the above
decomposition induces a decomposition of the polynomial functions
$\mathcal{P}(Q)$ into the space of \neuerBegriff{totally horizontal}
polynomial functions $\mathrm{h}(\mathcal{P}(Q))$ and the space of
\neuerBegriff{partially vertical} polynomial functions
$\mathrm{pv}(\mathcal{P}(Q))$.
\begin{align}
\mathcal{P}(Q) = \mathrm{h}(\mathcal{P}(Q)) \oplus \mathrm{pv}(\mathcal{P}(Q))
\end{align}
Die Projektionen bezüglich dieser Zerlegung sind dann durch
\begin{align}
\label{eq:pvundh}
\mathrm{pv} = \mathsf{P} \circ \mathrm{PV} \circ \mathsf{P}^{-1}
\quad \text{und} \quad \mathrm{h} = \mathsf{P} \circ \mathrm{H} \circ
\mathsf{P}^{-1}
\end{align}
gegeben. Mit Bemerkung \ref{bem:GInvarianzDesIsoszwischenPolysUndSyms}
folgt direkt auch die $G$"=Äquivarianz von $\mathrm{h}$ und
$\mathrm{pv}$. Insbesondere gilt $\mathrm{h}(\mathcal{P}(Q)^G) =
\mathrm{h}(\mathcal{P}(Q))^G$. Ist $F \in
\mathsf{P}(\Gamma^\infty(\bigvee\nolimits^m HQ\otimes \bigvee\nolimits^n VQ)) \subset
\mathcal{P}(Q)$ mit $(m,n) \in \mathbb{N}\times
\mathbb{N}$, so nennen wir $F$ \neuerBegriff{horizontal vom Grad $m$}
und \neuerBegriff{vertikal vom Grad $n$}. Ist $\{e_\alpha\}_{1 \leq
\alpha \leq \dim G}$ eine Basis von
$\lieAlgebra[g]$ mit zugehöriger dualer Basis $\{e^\alpha\}_{1 \leq
\alpha \leq
\dim G}$, dann bilden die
zugehörigen fundamentalen Vektorfelder $\{{(e_\alpha)}_Q\}_{1 \leq
\alpha \leq
\dim G}$ bekanntlich einen globalen Rahmen für das Vertikalbündel $VQ$.
Für jedes $T \in \mathrm{PV}(\Gamma^\infty(\bigvee\nolimits TQ))$ gibt es somit
eindeutig bestimmte $\mathrm{R}^i(T) \in \mathrm{PV}(\Gamma^\infty(\bigvee\nolimits TQ))$
mit $T = \sum_{i=1}^{\dim G} \mathrm{R}^i(T)\vee {(e_i)}_Q$. Für $i \in \{1,\dots,\dim
G\}$
setzen wir $\mathrm{R}^i$ auf $\mathrm{H}(\Gamma^\infty(\bigvee\nolimits TQ))$ durch $0$ zu einer
Abbildung $\mathrm{R}^i\dpA \Gamma^\infty(\bigvee\nolimits TQ) \to \Gamma^\infty(\bigvee\nolimits TQ)$
fort. Offenbar kann man dann jedes $T \in \Gamma^\infty(\bigvee\nolimits TQ)$ in
der Form
\begin{align}
\label{eq:Aufspaltung1}
T = \mathrm{H}(T) + \mathrm{R}^i(T)\vee (e_i)_Q
\end{align}
schreiben. Weiter setzen wir
\begin{align}
\label{eq:ri}
\mathrm{r}^i := \mathsf{P} \circ \mathrm{R}^i \circ \mathsf{P}^{-1}
\dpA \mathcal{P}(Q) \to \mathcal{P}(Q)\ensuremath{\text{,}}
\end{align}
womit wir
mit Gleichung~\eqref{eq:Aufspaltung1} sehen, dass man jedes $F \in
\mathcal{P}(Q)$ in der Form
\begin{align}
\label{eq:Aufspaltung2}
F = \mathrm{h}(F) + \mathrm{r}^i(F)\mathsf{P}((e_i)_Q)
\end{align}
schreiben kann.
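Zur Illustration der Zerlegung \eqref{eq:Aufspaltung2} betrachte man
etwa $F = \mathsf{P}(v \vee (e_1)_Q)$ mit $v \in \Gamma^\infty(HQ)$:
Wegen $v \vee (e_1)_Q \in \mathrm{PV}(\Gamma^\infty(\bigvee\nolimits TQ))$
gilt $\mathrm{h}(F) = 0$ und mit der obigen Konvention $\mathrm{r}^1(F)
= \mathsf{P}(v)$ sowie $\mathrm{r}^i(F) = 0$ für $i \neq 1$, also
\begin{align*}
F = \mathrm{h}(F) + \mathrm{r}^i(F)\mathsf{P}((e_i)_Q) =
\mathsf{P}(v)\mathsf{P}((e_1)_Q) \ensuremath{\text{.}}
\end{align*}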
Wir wählen nun $(\Psi,T^*Q,1,\emptyset,\emptyset)$ als geometrische Homotopie"=Daten und können damit die
davon induzierte Homotopie $\h_{\mathrm{kan}}$ aus dem
Quanten"=Koszul"=Schema mit Hilfe der $\mathrm{r}^i$ ausdrücken.
\begin{lemma}
\label{lem:HomotopiePolynomial}
Ist $F \in \mathcal{P}(Q)$, so gilt
\begin{align}
\label{eq:HomotopiePolynomial}
\hKan(F) = \mathrm{r}^i(F) \otimes e_i\ensuremath{\text{.}}
\end{align}
\end{lemma}
\begin{proof}
Sei $v \in \Gamma^\infty(HQ)$ und $F :=
\mathsf{P}(v)$. Dann gilt für $c \in C_{\mathrm{kan}}$
und $\mu \in \lieAlgebra^*$
\begin{align*}
F \circ \Psi^{-1}(c,\mu) &= F(c + \Gamma_\mu(\mathfrak{p}(c))) = \mathsf{P}(v)(c +
\Gamma_\mu(\mathfrak{p}(c))) \\ &= v\at{\mathfrak{p}(c)}(c +
\Gamma_\mu(\mathfrak{p}(c))) = v\at{\mathfrak{p}(c)}(c) +
\underbrace{v\at{\mathfrak{p}(c)}(\Gamma_\mu(\mathfrak{p}(c)))}_{=
\dPaar{\mu}{\gamma(v(\mathfrak{p}(c)))}
= 0} = v\at{\mathfrak{p}(c)}(c) \ensuremath{\text{.}}
\end{align*}
Also haben wir
\begin{align*}
\frac{\partial}{\partial \mu_i} (F \circ \Psi^{-1})(c,\mu) = 0
\end{align*}
für alle $i \in \{1,\dots,\dim G\}$, $c \in C_{\mathrm{kan}}$ und $\mu \in
\lieAlgebra^*$, womit $\hKan(F) = 0 = \mathrm{r}^i(F) \otimes e_i$
folgt. Dieselbe Rechnung zeigt dies auch für beliebige horizontale
Elemente $F = \mathsf{P}(v_1 \vee \dots \vee v_m)$ mit $v_1,\dots,v_m \in
\Gamma^\infty(HQ)$, da keiner der Faktoren $\mathsf{P}(v_j) \circ
\Psi^{-1}$ von $\mu$ abhängt.
Für $F = \mathsf{P}((e_i)_Q)$ gilt
\begin{align*}
F \circ \Psi^{-1}(c,\mu) &= F(c + \Gamma_\mu(\mathfrak{p}(c))) =
\mathsf{P}((e_i)_Q)(c + \Gamma_\mu(\mathfrak{p}(c))) \\ &=
(e_i)_Q(\mathfrak{p}(c))(c+ \Gamma_\mu(\mathfrak{p}(c))) =
\underbrace{(e_i)_Q(\mathfrak{p}(c))(c)}_{=0, \, \text{da $c \in
H^{*^{\scriptscriptstyle{\gamma}}}_{\mathfrak{p}(c)}Q$}} +
(e_i)_Q(\mathfrak{p}(c))(\Gamma_\mu(\mathfrak{p}(c))) \\ &=
\dPaar{\mu}{\gamma((e_i)_Q(\mathfrak{p}(c)))} = \dPaar{\mu}{e_i} =
\mu_i \ensuremath{\text{,}}
\end{align*}
wobei $\mu = \mu_i e^i$. Wir erhalten somit weiter
\begin{align*}
\frac{\partial}{\partial \mu_\alpha}(F \circ \Psi^{-1})(c,\mu) =
\delta_{\alpha i}
\end{align*}
für alle $\alpha \in \{1,\dots,\dim G\}$.
Sei nun $(n,m) \in \mathbb{N} \times \mathbb{N}$ und $T =
R^{k_1,\dots,k_m}_{l_1,\dots,l_n,i} v_{k_1} \vee \dots
\vee v_{k_m} \vee (e_{l_1})_Q \vee
\dots \vee (e_{l_n})_Q \vee (e_i)_Q = \mathrm{R}^i(T) \vee (e_i)_Q$ mit $
R^{k_1,\dots,k_m}_{l_1,\dots,l_n,i} \in C^\infty(Q)$, o.\,E.\ symmetrisch in den
unteren Indizes, und $v_{k_j} \in \Gamma^\infty(HQ)$. Dann gilt für $F = \mathsf{P}(T)$
\begin{align*}
F \circ \Psi^{-1}(c,\mu) &= F(c + \Gamma_\mu(\mathfrak{p}(c)))\\
&= \mathsf{P}(T)(c +
\Gamma_\mu(\mathfrak{p}(c)))\\
&= R^{k_1,\dots,k_m}_{l_1,\dots,l_n,i}(\mathfrak{p}(c)) \mathsf{P}(v_{k_1})(c +
\Gamma_\mu(\mathfrak{p}(c)))\dotsm \mathsf{P}(v_{k_m})(c+
\Gamma_\mu(\mathfrak{p}(c)))\cdot \\
&\phantom{= } \mathsf{P}((e_{l_1})_Q)(c + \Gamma_\mu(\mathfrak{p}(c)))
\dotsm \mathsf{P}((e_{l_n})_Q)(c + \Gamma_\mu(\mathfrak{p}(c)))\cdot \mathsf{P}((e_i)_Q)(c +
\Gamma_\mu(\mathfrak{p}(c))) \\
&=
R^{k_1,\dots,k_m}_{l_1,\dots,l_n,i}(\mathfrak{p}(c))v_{k_1}\at{\mathfrak{p}(c)}(c)\dotsm
v_{k_m}\at{\mathfrak{p}(c)}(c) \cdot \mu_{l_1} \dotsm
\mu_{l_n} \cdot \mu_i \ensuremath{\text{.}}
\end{align*}
Folglich ergibt sich
\begin{align*}
\frac{\partial}{\partial \mu_\alpha} (F \circ \Psi^{-1})(c,\mu)&=
R^{k_1,\dots,k_m}_{l_1,\dots,l_n,i}(\mathfrak{p}(c)) \cdot v_{k_1}\at{\mathfrak{p}(c)}(c)\dotsm
v_{k_m}\at{\mathfrak{p}(c)}(c) \cdot \\
&\phantom{= }(\delta_{\alpha l_1} \mu_{l_2}
\dotsm \mu_{l_n} \cdot \mu_i + \dots + \mu_{l_1}\dotsm \delta_{l_n
\alpha} \mu_i + \mu_{l_1} \dotsm \mu_{l_n} \delta_{i \alpha}) \\
&= (n+1) R^{k_1,\dots,k_m}_{l_1,\dots,l_n,\alpha}(\mathfrak{p}(c)) \cdot v_{k_1}\at{\mathfrak{p}(c)}(c)\dotsm
v_{k_m}\at{\mathfrak{p}(c)}(c) \cdot \mu_{l_1} \dotsm \mu_{l_n} \\
&= (n+1) \mathsf{P}(\mathrm{R}^\alpha(T))(c + \Gamma_\mu(\mathfrak{p}(c))) \ensuremath{\text{.}}
\end{align*}
Des Weiteren gilt offensichtlich
\begin{align*}
\mathsf{P}(\mathrm{R}^\alpha(T))\circ\Psi^{-1}(c,t\mu) = t^n \mathsf{P}(\mathrm{R}^\alpha(T))\circ \Psi^{-1}(c,\mu)
\end{align*}
und damit
\begin{align*}
\int_0^1 \frac{\partial}{\partial \mu_\alpha}(F \circ
\Psi^{-1})(c,t\mu) \, dt
&= (n+1) \int_0^1 t^n \, dt \, \mathsf{P}(\mathrm{R}^\alpha(T))(\Psi^{-1}(c,\mu)) \\
&= \mathsf{P}(\mathrm{R}^\alpha(T))(\Psi^{-1}(c,\mu)) \ensuremath{\text{.}}
\end{align*}
Schließlich sehen wir so
\begin{align*}
\mathrm{r}^i(F) \otimes e_i = \mathrm{r}^i(\mathsf{P}(T)) \otimes e_i
= \mathsf{P}(\mathrm{R}^i(T)) \otimes e_i = \hKan(F) \ensuremath{\text{.}}
\end{align*}
\end{proof}
Wir definieren nun
\begin{align}
\label{eq:DeltaNikolai}
\Delta_\star \dpA \mathcal{P}(Q)[[\lambda]] &\to
\mathcal{P}(Q)[[\lambda]], \notag\\
F &\mapsto \frac{1}{\I \lambda} \sum_{i=1}^{\dim G} \left(\mathrm{r}^i(F)\JKan(e_i)
- \mathrm{r}^i(F) \star
\qJ[\mathrm{kan}](e_i)\right) \ensuremath{\text{.}}
\end{align}
Die Abbildung $\Delta_\star$ ist in der Tat wohldefiniert, denn nach
Voraussetzung sind die Funktionen $\mathrm{r}^i(F)$, $\JKan(e_i)$ sowie
$\qJ[\mathrm{kan}](e_i)$ polynomial und $\mathcal{P}(Q)[[\lambda]]$
bildet eine Unteralgebra von $C^\infty(T^*Q)[[\lambda]]$ bezüglich $\star$.
Mit Lemma \ref{lem:HomotopiePolynomial} sowie den Definitionen von
$\partial_{\mathrm{kan}}$ und $\qkoszul_{\mathrm{kan}}$ sehen wir für $F
\in \mathcal{P}(Q)$ sofort
\begin{align}
\label{eq:DeltaUndPartial}
\Delta_\star F = \frac{1}{\I \lambda} (\partial_{\mathrm{kan}} - \qkoszul_{\mathrm{kan}}) \hKan(F) \ensuremath{\text{.}}
\end{align}
Dabei bezeichnet $\partial_{\mathrm{kan}}$
bzw.\ $\qkoszul_{\mathrm{kan}}$ den von $\JKan$ bzw.\ $\qJ[\mathrm{kan}]$
induzierten Koszul- bzw.\ Quanten"=Koszul"=Operator.
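Für das einfache Beispiel $F = \mathsf{P}((e_1)_Q)$, für das
$\mathrm{r}^i(F) = \delta^i_1$ gilt, liefert Definition
\eqref{eq:DeltaNikolai} etwa
\begin{align*}
\Delta_\star F = \frac{1}{\I \lambda}\left(\JKan(e_1) -
\qJ[\mathrm{kan}](e_1)\right) \ensuremath{\text{,}}
\end{align*}
da $1$ das Einselement bezüglich $\star$ ist.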
Der folgende Satz zeigt, dass $(T^*Q)_{\mathrm{red}}$ wieder die
Struktur eines Kotangentialbündels besitzt.
\begingroup
\emergencystretch=0.8em
\begin{samepage}
\begin{satz}[Kotangentialbündelreduktion für Impulswert $0$]
\label{satz:ReduktionKotangentenbuendel}
\begin{satzEnum}
\item %
Es gibt eine eindeutig bestimmte glatte, $G$"=invariante
Abbildung
\begin{align}
u \dpA \JKan^{-1}(0) \to T^*(Q/G) \quad \text{mit} \quad
\dPaar{u(\alpha_q)}{T_q\varpi v_q} = \dPaar{\alpha_q}{v_q}
\end{align}
für alle $q \in Q$, $\alpha_q \in \JKan^{-1}(0) \cap T_q^*Q$
und $v_q \in T_qQ$. Weiter gibt es eine eindeutig bestimmte
glatte Abbildung $\overline{u} \dpA (T^*Q)_{\mathrm{red}}\to
T^*(Q/G)$, so dass das folgende Diagramm kommutiert.
\def\tA[#1]{A_{#1}}
\begin{equation}
\label{eq:DiagrammU}
\begin{tikzpicture}[baseline=(current
bounding box.center),description/.style={fill=white,inner sep=2pt}]
\matrix (m) [matrix of math nodes, row sep=3.0em, column
sep=3.5em, text height=1.5ex, text depth=0.25ex]
{
\JKan^{-1}(0) & \\
(T^*Q)_{\mathrm{red}} & T^*(Q/G) \\
}; %
\path[->] (m-1-1) edge node[auto] {$\piKan$} (m-2-1); %
\path[->] (m-2-1) edge node[auto]{$\overline u$}(m-2-2); %
\path[->] (m-1-1) edge node[auto]{$u$}(m-2-2); %
\end{tikzpicture} \ensuremath{\text{.}}
\end{equation}
Dabei bezeichnet $\piKan$ die kanonische Projektion.
\item %
Die Abbildung $\overline u$ ist ein Symplektomorphismus.
\end{satzEnum}
\end{satz}
\end{samepage}
\begin{proof}
\begin{beweisEnum}
\item %
Wegen der Submersivität von $\varpi$ ist die Eindeutigkeit
klar.
Wir beachten die Gleichheit $\ker T_q\varpi = \{\xi_Q(q) \mid \xi \in
\lieAlgebra\}$. Dies ist klar, denn offensichtlich ist $\xi_Q(q)
\in \ker T_q\varpi$ für alle $\xi \in \lieAlgebra$ und da
$T_q\varpi$ surjektiv ist, folgt $\dim \ker T_q\varpi = \dim Q -
\dim Q/G = \dim G = \dim \{\xi_Q(q) \mid \xi \in
\lieAlgebra\}$.
Falls also $T_q \varpi v_q = T_q \varpi
\tilde{v}_q $, so folgt nach dieser Beobachtung schon, dass es ein
$\xi \in \lieAlgebra$ gibt mit $v_q - \tilde{v}_q =
\xi_Q(q)$. Dann ist aber für jedes $\alpha_q \in \JKan^{-1}(0) \cap
T_q^*Q$ schon $\dPaar{\alpha_q}{v_q - \tilde{v}_q} =
\dPaar{\alpha_q}{\xi_Q(q)} = 0$, da $\xi_Q(q) \in V_qQ$. Demnach
existiert eine glatte Abbildung $u$ mit den gewünschten
Eigenschaften.
$u$ ist sogar $G$"=invariant, denn
$\dPaar{u(g\alpha_q)}{T_{gq}\varpi v_{gq}} =
\dPaar{g\alpha_q}{v_{gq}} =
\dPaar{\alpha_q}{g^{-1}v_{gq}} =
\dPaar{u(\alpha_q)}{T_{q}\varpi(g^{-1} v_{gq})} =
\dPaar{u(\alpha_q)}{T_{gq}\varpi v_{gq}}$ für alle $\alpha_q \in
\JKan^{-1}(0) \cap T_q^*Q$ und $v_{gq}\in T_{gq}Q$. Damit induziert
$u$ eine eindeutig bestimmte glatte Abbildung, so dass Diagramm
\eqref{eq:DiagrammU} kommutiert.
\item %
Sei $\alpha_{\varpi(q)} \in T^*_{\varpi(q)}(Q/G)$, dann ist
$(T_q\varpi)^* \alpha_{\varpi(q)} \in \JKan^{-1}(0)$, denn für
jedes $\xi \in \lieAlgebra$ gilt
\begin{align*}
\dPaar{\JKan((T_q\varpi)^*\alpha_{\varpi(q)})}{\xi} =
\dPaar{(T_q\varpi)^*\alpha_{\varpi(q)}}{\xi_Q(q)} =
\dPaar{\alpha_{\varpi(q)}}{T_q \varpi \xi_Q(q)} = 0 \ensuremath{\text{.}}
\end{align*}
Weiter gilt für alle $w_q \in T_qQ$
\begin{align*}
\dPaar{u((T_q\varpi)^* \alpha_{\varpi(q)})}{T_q\varpi w_q} =
\dPaar{(T_q\varpi)^* \alpha_{\varpi(q)}}{w_q} =
\dPaar{\alpha_{\varpi(q)}}{T_q \varpi w_q} \ensuremath{\text{,}}
\end{align*}
also wegen der Submersivität von $\varpi$ auch $u((T_q\varpi)^*
\alpha_{\varpi(q)}) = \alpha_{\varpi(q)}$.
Demnach sind $u$ und somit auch $\overline u$ surjektiv. Die
Abbildung $\overline{u}$ ist injektiv, denn ist $u(\alpha_q) =
u(\beta_{q'})$, so folgt nach Definition von $u$ schon $\varpi(q) =
\varpi(q')$, d.\,h.\ es gibt ein $g \in G$ mit $q' = gq$, und
\begin{align*}
\dPaar{\alpha_q}{w_q} &= \dPaar{u(\alpha_q)}{T_q\varpi w_q} =
\dPaar{u(\beta_{q'})}{T_q\varpi w_q} =
\dPaar{u(\beta_{q'})}{T_q(\varpi \circ \phi_g)w_q} \\&=
\dPaar{u(\beta_{q'})}{T_{\phi_g(q)}\varpi T_q \phi_g w_q} =
\dPaar{u(\beta_{q'})}{T_{gq}\varpi g w_q} =
\dPaar{\beta_{q'}}{g w_q}\\ &= \dPaar{g^{-1} \beta_{q'}}{ w_q}
\end{align*}
für alle $w_q \in T_qQ$, also $\piKan(\alpha_q) =
\piKan(\beta_{q'})$. Um zu zeigen, dass $\overline{u}$
symplektisch ist, genügt es, die kanonischen $1$"=Formen zu
betrachten. Sei $\theta_{\kanSymbol}$ die kanonische $1$"=Form auf $T^*Q$
und $\overline\theta_{\kanSymbol}$ diejenige auf $T^*(Q/G)$ sowie $\ensuremath{\imath} \dpA
\JKan^{-1}(0) \hookrightarrow T^*Q$ die Inklusion. Nach
Konstruktion der reduzierten symplektischen Form ist es also
hinreichend, $\piKan^* \overline{u}^*\overline\theta_{\kanSymbol} = \ensuremath{\imath}^*\theta_{\kanSymbol}$
zu zeigen. Sei $q \in Q$, $\alpha_q \in \JKan^{-1}(0) \cap T_q^*Q$ und
$v_{\alpha_q} \in T_{\alpha_q}\JKan^{-1}(0)$, dann gilt
\begin{align*}
\dPaar{\piKan^*\overline{u}^*\overline\theta_{\kanSymbol}\at{\alpha_q}}{v_{\alpha_q}}
&=
\dPaar{\overline\theta_{\kanSymbol}\at{\overline{u}(\piKan(\alpha_q))}}{T_{\alpha_q}(\overline{u}\circ
\piKan) v_{\alpha_q}}\\
&=
\dPaar{\overline{u}(\piKan(\alpha_q))}{T_{\alpha_q}(\overline{\mathfrak{p}}
\circ
\overline{u} \circ \piKan) v_{\alpha_q}}\\
&= \dPaar{\overline{u}(\piKan(\alpha_q))}{T_{\alpha_q}(\varpi
\circ \mathfrak{p} \circ \ensuremath{\imath} ) v_{\alpha_q}}\\
&= \dPaar{\alpha_q}{T_{\alpha_q} (\mathfrak{p} \circ \ensuremath{\imath}) v_{\alpha_q}} =
\dPaar{\ensuremath{\imath}^*\theta_{\kanSymbol}\at{\alpha_q}}{v_{\alpha_q}} \ensuremath{\text{.}}
\end{align*}
Dabei bezeichnet $\overline{\mathfrak{p}} \colon T^*(Q/G) \to Q/G$ die
Fußpunktprojektion. Die Abbildung $\overline{u}$ ist demnach symplektisch und
insbesondere immersiv, woraus aus Dimensionsgründen und dem Satz
über die Umkehrfunktion schon folgt, dass $\overline{u}$ ein
Diffeomorphismus, d.\,h.\ auch ein Symplektomorphismus sein muss.
\end{beweisEnum}
\end{proof}
\endgroup
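Das einfachste Beispiel zu Satz
\ref{satz:ReduktionKotangentenbuendel} ist $Q = G$ mit der
Linkswirkung von $G$ auf sich selbst: Da die fundamentalen
Vektorfelder hier jeden Tangentialraum aufspannen, ist
$\JKan^{-1}(0)$ gerade der Nullschnitt von $T^*G$, der reduzierte
Phasenraum $(T^*G)_{\mathrm{red}} \cong G/G$ also ein Punkt, in
Übereinstimmung mit $T^*(Q/G) = T^*(G/G)$.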
\begingroup
\emergencystretch=0.8em
In der folgenden Diskussion seien nun die
Abbildungen $u$ und $\overline{u}$ wie in Satz
\ref{satz:ReduktionKotangentenbuendel}. Wir setzen $^{\mathsf{h}} \dpA
\Gamma^\infty(T{(Q/G)}) \to \Gamma^\infty(TQ)$ zu einem Homomorphismus
$^{\mathsf{h}}\dpA \Gamma^\infty(\bigvee\nolimits T{(Q/G)}) \to
\Gamma^\infty(\bigvee\nolimits TQ)$ bezüglich $\vee$ fort. Insbesondere sei
$\chi^{\mathsf{h}} = \varpi^* \chi$ für $\chi \in \CM[Q/G]$. Schließlich
sei
\begin{align}
\label{eq:DefintionVonl}
l \dpA \mathcal{P}(Q/G) \to \mathrm{h}(\mathcal{P}(Q)^G)
\end{align}
durch
\begin{align}
\label{eq:DefintionVon2}
l(F) := \mathsf{P}\bigl((\overline{\mathsf{P}}^{-1}(F))^{\mathsf{h}}\bigr)
\end{align}
für $F \in \mathcal{P}(Q/G)$ gegeben. Dabei bezeichnet die Abbildung
$\overline{\mathsf{P}} \dpA \Gamma^\infty(\bigvee\nolimits T(Q/G)) \to
\mathcal{P}(Q/G)$ den Algebraisomorphismus aus Proposition
\ref{prop:universelleImpulsabbildung} für $Q/G$. Man prüft leicht mit
Hilfe von Proposition \ref{prop:AngepassteBasen} nach, dass $l$ bijektiv
ist, siehe auch \cite[Prop. 3.3 iv)]{kowalzig.neumaier.pflaum:2005a}.
\begin{proposition}
\label{prop:EinschraenkungFuerKotangentialbuendel}
Sei $\ensuremath{\imath} \colon C_{\mathrm{kan}} \hookrightarrow T^*Q$ die
kanonische Inklusion, dann gilt
\begin{align}
\ensuremath{\imath}^*\at{\mathcal{P}(Q)^G[[\lambda]]} = u^* \circ l^{-1} \circ
\mathrm{h}\at{\mathcal{P}(Q)^G[[\lambda]]} \ensuremath{\text{.}}
\end{align}
\end{proposition}
\begin{proof}
Für $q \in Q$ und $\alpha_q \in C_{\mathrm{kan}} \cap T_q^*Q =
H^{*^\gamma}_qQ$ ist $\mathsf{P}((e_i)_Q)(\alpha_q) =
\dPaar{\alpha_q}{(e_i)_Q(q)} = 0$, also folgt für alle $F \in
\mathcal{P}(Q)^G$, da $\mathsf{P}$ ein Algebrahomomorphismus ist,
$\ensuremath{\imath}^*(F) = \ensuremath{\imath}^*(\mathrm{h}(F))$. Als nächstes zeigen wir, dass
$u^*\at{\mathcal{P}(Q/G)} = \ensuremath{\imath}^* \circ l$, d.\,h.\ auch $u^* \circ
l^{-1} = \ensuremath{\imath}^*\at{\mathrm{h}(\mathcal{P}(Q)^G)}$ gilt, woraus dann
offensichtlich die Behauptung folgt. Dazu bemerken wir zuerst, dass
sowohl $u^*$ als auch $\ensuremath{\imath}^* \circ l$ Algebrahomomorphismen sind,
womit wir diese Gleichung nur auf Erzeugern prüfen müssen. Sei also
$\chi \in \CM[Q/G]$, dann gilt für alle $q \in Q$ und $\alpha_q \in
C_{\mathrm{kan}} \cap T_q^*Q$
\begin{align*}
u^*(\overline{\mathsf{P}}(\chi))(\alpha_q) &= (u^*\circ{\overline{\mathfrak{p}}}^*
\chi)(\alpha_q) = \chi((\overline{\mathfrak{p}} \circ u)(\alpha_q)) =
\chi(\varpi \circ \mathfrak{p}( \alpha_q))\\ &= (\varpi^*
\chi)(\mathfrak{p}(\alpha_q)) = \mathfrak{p}^*\varpi^* \chi(\alpha_q)
= \mathsf{P}(\chi^{\mathsf{h}})(\alpha_q) =
l(\mathsf{\overline{P}}(\chi))(\alpha_q) \ensuremath{\text{.}}
\end{align*}
Dabei bezeichnet $\overline{\mathfrak{p}} \colon T^*(Q/G) \to Q/G$ die Fußpunktprojektion.
Ist weiter $\overline{X} \in \Gamma^\infty(T{(Q/G)})$, dann gilt für alle
$q \in Q$ und $\alpha_q \in C_{\mathrm{kan}} \cap T_q^*Q$
\begin{align*}
u^*(\overline{\mathsf{P}}(\overline{X}))(\alpha_q) &=
\overline{\mathsf{P}}(\overline{X})(u(\alpha_q)) =
\dPaar{u(\alpha_q)}{\overline{X}(\varpi(q))}\\ &=
\dPaar{u(\alpha_q)}{T_q \varpi \overline{X}^{\mathsf{h}}(q)} =
\dPaar{\alpha_q}{\overline{X}^{\mathsf{h}}(q)} =
\mathsf{P}(\overline{X}^{\mathsf{h}})(\alpha_q) = l(\overline{\mathsf{P}}(\overline{X}))(\alpha_q) \ensuremath{\text{.}}
\end{align*}
Da auch $\overline{\mathsf{P}}$ ein Algebramorphismus ist, folgt die
Behauptung sofort.
\end{proof}
\endgroup
Mit Proposition \ref{prop:EinschraenkungFuerKotangentialbuendel} und
Gleichung \eqref{eq:DeltaUndPartial} erhalten wir eine explizite Formel für die
Quanteneinschränkung im Falle von Kotangentialbündeln und den oben
getroffenen Wahlen für die geometrischen Homotopie"=Daten.
\begin{proposition}
\label{prop:QuantenEinschraenkungFuerKotangentialbauendel}
Für jedes $F \in \mathcal{P}(Q)^G[[\lambda]]$ gilt
\begin{align}
\label{eq:QuantenEinschraenkungFuerKotangentialbauendel}
\ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}(F) = \left (u^* \circ l^{-1} \circ \mathrm{h} \circ \left(\frac{\operatorname{\mathsf{id}}}{\operatorname{\mathsf{id}} - \I
\lambda \Delta_\star}\right) \right)(F) \ensuremath{\text{.}}
\end{align}
\end{proposition}
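Dabei ist der Bruch wie zuvor als formale geometrische Reihe zu lesen,
\begin{align*}
\frac{\operatorname{\mathsf{id}}}{\operatorname{\mathsf{id}} - \I
\lambda \Delta_\star} = \sum_{k=0}^{\infty} (\I\lambda)^k
\Delta_\star^k \ensuremath{\text{,}}
\end{align*}
was $\lambda$"=adisch sinnvoll ist, sofern $\I\lambda\Delta_\star$ die
$\lambda$"=Ordnung erhöht.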
Um das reduzierte Sternprodukt nach Gleichung
\eqref{eq:SternproduktAufDemReduziertenPhasenraum3} genauer bestimmen zu
können, müssen wir auch noch für $\prol \circ \piKan^*$ einen an die
Kotangentialbündelgeometrie angepassteren Ausdruck finden.
\begin{proposition}
\label{prop:ProlongationFuerKotangentialbuendel}
Für alle $f \in \overline{u}^*(\mathcal{P}(Q/G))$
ist die Gleichung
\begin{align}
\label{eq:ProlongationFuerKotangentialbuendel}
(\prol \circ \piKan^*)(f) = (l \circ (\overline{u}^{-1})^*)(f)
\end{align}
erfüllt.
\end{proposition}
\begin{proof}
Wir zeigen $l = \prol \piKan^* \overline{u}^*\at{\mathcal{P}(Q/G)}$.
Sei $\overline{X} \in
\Gamma^\infty(T{(Q/G)})$, dann gilt für $q \in Q$ und $\mu \in \lieAlgebra^*$
\begin{align*}
\mathsf{P}(\overline{X}^{\mathsf{h}})(\Gamma_\mu(q)) =
\dPaar{\overline{X}^{\mathsf{h}}(q)}{\Gamma_{\mu}(q)} =
\dPaar{\mu}{\gamma(\overline{X}^{\mathsf{h}}(q))} = 0 \ensuremath{\text{.}}
\end{align*}
Nun lässt sich jedes $f \in \mathrm{h}(\mathcal{P}(Q)^G)$ als $f =
\mathsf{P}(\overline{T}^{\mathsf{h}})$ mit $\overline{T} \in
\Gamma^\infty(\bigvee T(Q/G))$ schreiben. Da $^{\mathsf{h}}$ und
$\mathsf{P}$ Homomorphismen bezüglich $\vee$ sind, folgt für jedes $c
\in C_{\mathrm{kan}}$, $\mu \in \lieAlgebra^*$ und jedes $f \in
\mathrm{h}(\mathcal{P}(Q)^G)$ die Gleichung
\begin{align*}
f(\Psi^{-1}(c,\mu)) = f(c + \Gamma_{\mu}(\mathfrak{p}(c))) = f(c)
\end{align*}
und somit
\begin{align*}
\prol \ensuremath{\imath}^* f = f
\end{align*}
für alle $f \in \mathrm{h}(\mathcal{P}(Q)^G)$.
Somit ergibt sich mit der im Beweis von Proposition
\ref{prop:EinschraenkungFuerKotangentialbuendel} hergeleiteten
Beziehung $u^*\circ l^{-1} = \ensuremath{\imath}^*\at{\mathrm{h}(\mathcal{P}(Q)^G)}$
\begin{align*}
\prol u^* l^{-1} = \operatorname{\mathsf{id}} \ensuremath{\text{,}}
\end{align*}
was gleichbedeutend ist mit $l = \prol u^*\at{\mathcal{P}(Q/G)} = \prol \piKan^*
\overline{u}^*\at{\mathcal{P}(Q/G)}$.
\end{proof}
Trägt man nun all die in diesem Kapitel gemachten Beobachtungen
zusammen, so erhält man eine explizitere Formel für das reduzierte
Sternprodukt.
\begin{satz}
\label{satz:ReduziertesSternproduktFuerKotangentialBuendel}
Für das nach Koszul"=Schema reduzierte Sternprodukt
$\star_{\mathrm{red}}$ gilt für alle $f,f' \in \overline{u}^*(\mathcal{P}(Q/G))$
\begin{align}
\label{eq:ReduziertesSternproduktFuerKotangentialBuendel}
\piKan^*(f \star_{\mathrm{red}} f') = \left(u^* \circ l^{-1} \circ
\mathrm{h} \circ
\frac{\operatorname{\mathsf{id}}}{\operatorname{\mathsf{id}} - \I \lambda \Delta_\star} \right) \left ((l \circ
{\overline{u}^{-1}}^*)(f) \star (l \circ {\overline{u}^{-1}}^*)(f')
\right ) \ensuremath{\text{.}}
\end{align}
\end{satz}
Satz \ref{satz:ReduziertesSternproduktFuerKotangentialBuendel} zeigt,
dass die Quanten"=Koszul"=Reduktionsmethode mit den oben getroffenen
geometrischen Wahlen im Spezialfall von Kotangentialbündeln mit
kanonischer symplektischer Form, gelifteter Wirkung und verschwindendem
Impulswert sowie ohne Magnetfeld genau das Sternprodukt liefert, das
Kowalzig, Neumaier und Pflaum in \cite{kowalzig.neumaier.pflaum:2005a}
konstruiert haben.
\section{Reduziertes Sternprodukt für nichtverschwindenden Impulswert
und mit Magnetfeld}
\label{sec:nichtVerschwindendeImpulswerteInAnwesenheitVonMagnetfeldern}
In diesem Abschnitt wollen wir die Betrachtungen des vorangegangenen
verallgemeinern und sowohl nichtverschwindende, $G$"=invariante
Impulswerte $\mu$ als auch $G$"=invariante Magnetfelder $B$
zulassen. Ziel ist es, zu zeigen, dass die Konstruktion von Kowalzig,
Neumaier und Pflaum auch für diese Version einen Spezialfall der
Quanten"=Koszul"=Reduktion darstellt. Um dies zu erreichen, müssen wir
spezielle geometrische Daten für die Homotopie wählen, genauer werden
wir solche durch Zurückziehen der geometrischen Homotopie"=Daten aus
Abschnitt \ref{sec:SpezialFallImpulsMagnetfeldNull} erhalten. Dies
betrachten wir zunächst in einer etwas allgemeineren Situation.
\begin{satz}[Zurückziehen geometrischer Homotopie"=Daten]
\label{satz:ZurueckZiehenVonGeomHomotopieDaten}
Seien $(M,\omega)$ und $(M',\omega')$ symplektische
Mannigfaltigkeiten und $G$ eine Lie"=Gruppe, die auf $M$ und $M'$ frei
und eigentlich wirke, sowie $J \colon M \to \lieAlgebra^*$ und $J'
\colon M' \to \lieAlgebra^*$ zugehörige $G$"=äquivariante
Impulsabbildungen mit $C := J^{-1}(0) \neq \emptyset$ und $C' := J'^{-1}(0)$
\neq \emptyset$. Weiter seien $\mathsf{GH'} := (\Psi' \colon U' \to
V', O' \subset U',{\psi'}_{U'},{\psi'}_{W'},\xi')$ $G$"=invariante
geometrische Homotopie"=Daten für $C' \subset M'$. Sei nun $s \colon
M \to M'$ ein $G$"=äquivarianter Diffeomorphismus mit $J =
s^*J'$. Wir definieren $U := s^{-1}(U')$, $O := s^{-1}(O')$, $W
:= M\setminus \abschluss{O}$ und $\psi_U := s^*{\psi'}_{U'}$ sowie $\psi_W
:= s^*{\psi'}_{W'}$. Zudem sei $V := (s\at{C}^{-1} \times \operatorname{\mathsf{id}})(V') \subset
C \times \lieAlgebra^*$ und $\Psi := (s\at{C}^{-1} \times \operatorname{\mathsf{id}}) \circ
\Psi' \circ s\at{U} \colon U \to V$ sowie schließlich $\xi = s^*
{\xi'}$.
Dann sind $(\Psi \colon U \to V, O \subset
U,{\psi}_{U},{\psi}_{W},\xi)$ $G$"=invariante geometrische
Homotopie"=Daten für $C$ und es gilt für die davon induzierte
geometrische Homotopie $\h$ die Beziehung
\begin{align}
\label{eq:ZurueckGezogeneHomotopie}
\h = s^* \circ \h' \circ (s^{-1})^* \ensuremath{\text{,}}
\end{align}
wobei $\h'$ die von $\mathsf{GH'}$ induzierte geometrische Homotopie
bezeichne. Des Weiteren ist für die induzierte geometrische
Prolongation $\prol$ die Gleichung
\begin{align}
\label{eq:ZurueckGezogenesProl}
\prol = s^* \circ \prol' \circ (s\at{C}^{-1})^*
\end{align}
erfüllt, wobei $\prol'$ die von $\mathsf{GH'}$ induzierte
geometrische Prolongation sei.
Ist $\Psi'$ global, $O' = U' = M'$, also $W' = \emptyset$,
${\psi'}_{U'} \equiv 1$ und $\xi' = \emptyset$, so gilt Entsprechendes auch
für die zurückgezogenen geometrischen Homotopie"=Daten.
\end{satz}
\begin{proof}
Es ist klar, dass $O$ und $U$ offen und $G$"=invariant sind. Da $s$
ein Homöomorphismus ist, folgt $\abschluss{O} =
\abschluss{s^{-1}(O')} = s^{-1}(\abschluss{O'}) \subset s^{-1}(U') =
U$. Weiter sieht man leicht die Inklusion $C = s^{-1}(C')
\subset O$ und die Beziehung $W = M\setminus \abschluss{O} =
M\setminus s^{-1}(\abschluss{O'}) = s^{-1}(M'\setminus
\abschluss{O'}) = s^{-1}(W')$. $\{\psi_{U},\psi_W\}$ ist
offensichtlich eine glatte, $G$"=invariante Zerlegung der Eins mit
$\supp \psi_U = s^{-1}(\supp {\psi'}_{U'}) \subset U$ und $\supp
\psi_W = s^{-1}(\supp{\psi'}_{W'}) \subset W$. Ebenfalls bemerkt man
sofort, dass $V$ offen und $G$"=invariant ist sowie dass $\Psi$ ein
$G$"=äquivarianter Diffeomorphismus ist. Nach Konstruktion gilt für $c
\in C$ die Gleichung
\begin{align*}
\Psi(c) = (s\at{C}^{-1} \times \operatorname{\mathsf{id}})(\Psi'(s(c))) =
({s\at{C}}^{-1}(s(c)),0) = (c,0) \ensuremath{\text{.}}
\end{align*}
Des Weiteren
ist $V$ offenbar sternförmig in Faserrichtung und für $(c,\mu) \in V$
gilt
\begin{align*}
J(\Psi^{-1}(c,\mu)) = J' \circ s({s\at{U}}^{-1} \circ
\Psi'^{-1} \circ (s\at{C} \times \operatorname{\mathsf{id}})(c,\mu)) =
J'({\Psi'}^{-1}(s(c),\mu)) = \mu \ensuremath{\text{.}}
\end{align*}
Also ist $\Psi$ eine $G$"=äquivariante \tn{gute} Tubenabbildung. Sei nun
\begin{align*}
\h_U \colon \CM[U] \otimes \bigwedge\nolimits^\bullet \lieAlgebra \to
\CM[U] \otimes \bigwedge\nolimits^{\bullet + 1} \lieAlgebra
\end{align*}
die Homotopie auf $U$, die, wie wir noch einmal in Erinnerung rufen, für $k
\in \mathbb{N}$ und $f \otimes \eta \in C^\infty(U)\otimes
\bigwedge\nolimits^k \lieAlgebra$ folgendermaßen gegeben ist:
\begin{align*}
\h_U(f \otimes \eta) \circ \Psi^{-1}(c,\mu) = \int_0^1
t^k \partial^\alpha(f \circ \Psi^{-1})(c,t \mu) \, dt \otimes
e_\alpha \wedge \eta \ensuremath{\text{.}}
\end{align*}
Dann erhalten wir für $k \in \mathbb{N}$, $f \otimes \eta \in
C^\infty(U)\otimes \bigwedge\nolimits^k \lieAlgebra$ und $(c,\mu) \in V$
\begin{align*}
\lefteqn{((s\at{U})^* \circ \h'_{U'} \circ (s\at{U}^{-1})^*)(f
\otimes
\eta)(\Psi^{-1}(c,\mu))} \\
&= {\h'}_{U'}(f\circ s\at{U}^{-1} \otimes \eta)(s\at{U} \circ
\Psi^{-1}(c,\mu)) \\
&= \h'_{U'}(f\circ s\at{U}^{-1} \otimes \eta)({\Psi'}^{-1}
\circ
{(s\at{C} \times \operatorname{\mathsf{id}})}(c,\mu)) \\
&= \int_0^1 t^k \partial^\alpha (f \circ
s\at{U}^{-1}\circ{\Psi'}^{-1})(s(c),t\mu) \, dt \otimes e_\alpha \wedge \eta \\
&= \int_0^1 t^k \partial^\alpha (f \circ \Psi^{-1} \circ
(s\at{C}^{-1} \times \operatorname{\mathsf{id}}))(s(c),t\mu) \, dt \otimes e_\alpha \wedge \eta \\
&= \int_0^1 t^k \partial^\alpha (f \circ \Psi^{-1}
)((s\at{C}^{-1} \times \operatorname{\mathsf{id}})(s(c),t\mu)) \, dt \otimes e_\alpha \wedge \eta \\
&= \h_U(f \otimes \eta)(\Psi^{-1}(c,\mu)) \ensuremath{\text{.}}
\end{align*}
Zudem
gilt
\begin{align*}
\dPaar{J}{\xi} = \dPaar{s^*J'}{s^*\xi'} = s^*\dPaar{J'}{\xi'} = s^*{\psi'}_{W'} = \psi_W
\end{align*}
und
\begin{align*}
\supp \xi = s^{-1}(\supp \xi') \subset W \ensuremath{\text{.}}
\end{align*}
Sei nun $p \in \supp({\psi}_{U}) \cap W$, dann ist offenbar auch
$s(p) \in \supp({\psi'}_{U'}) \cap W'$ und es folgt
\begin{align*}
\xi(p) = \xi'(s(p))&= -{\h'}_{U'}({\psi'_{U'}\at{U'}})(s(p)) \\
&= -((s\at{U}^{-1})^* \circ \h_{U} \circ
s\at{U}^*({\psi'_{U'}\at{U'}}))(s(p)) = -\h_U(\psi_U\at{U})(p)
\ensuremath{\text{,}}
\end{align*}
d.\,h.\
\begin{align*}
\xi\at{{\supp(\psi_U) \cap W}} =
-\h_U(\psi_U\at{U})\at{{\supp(\psi_U) \cap W}} \ensuremath{\text{.}}
\end{align*}
Weiter gilt für $\phi \in \CM \otimes \bigwedge\nolimits^k \lieAlgebra$
\begin{align*}
s^* \circ \h_{W'}' \circ (s^{-1})^* (\phi) = s^*(\xi' \wedge
(s^{-1})^*(\phi)) = s^*\xi' \wedge s^*(s^{-1})^*\phi = \xi
\wedge \phi = {\h}_{W}(\phi) \ensuremath{\text{,}}
\end{align*}
folglich auch
\begin{align*}
s^* \circ \h' \circ (s^{-1})^*(\phi) &= s^*(\psi'_{U'}
\h_{U'}'(((s^{-1})^*\phi)\at{U'})) +
s^*\h_{W'}' (s^{-1})^* \phi \\
&= \psi_U \h_U(\phi\at{U}) + \h_{W}(\phi) \\
&= \h(\phi)\ensuremath{\text{.}}
\end{align*}
Schließlich sehen wir für alle $f \in C^\infty(C)$ und $(c,\mu) \in V$
\begin{align*}
(s^* \circ {\prol'} \circ ({s\at{C}}^{-1})^*)(f)(\Psi^{-1}(c,\mu) )
&=
({\prol'}((s\at{C}^{-1})^*f))({\Psi'}^{-1}(s(c),\mu)) \\
&=
\psi'_{U'}({\Psi'}^{-1}(s(c),\mu)) \cdot f((s\at{C})^{-1}(s(c)))
\\
&=
\psi'_{U'}(s(\Psi^{-1}(c,\mu))) \cdot f(c) \\
&= \psi_U ({\Psi}^{-1}(c,\mu)) \cdot f(c)\\
&= \prol(f)({\Psi}^{-1}(c,\mu)) \ensuremath{\text{.}}
\end{align*}
\end{proof}
\begin{bemerkung}
\label{bem:Zurueckziehen}
Sind in Satz \ref{satz:ZurueckZiehenVonGeomHomotopieDaten} $\ensuremath{\imath} \colon
J^{-1}(0) \hookrightarrow M$ und $\ensuremath{\imath}' \colon J'^{-1}(0)
\hookrightarrow M'$ die
Inklusionen der Zwangsflächen in den Phasenraum, so gilt offensichtlich
\begin{align}
\label{eq:InklusionenZurueckZiehen}
\ensuremath{\imath}^* = s\at{C}^* \circ \ensuremath{\imath}'^* \circ (s^{-1})^* \ensuremath{\text{.}}
\end{align}
\end{bemerkung}
Wir betrachten nun die in Abschnitt
\ref{sec:KotangentialBuendelMitMagnetfeld} beschriebene Situation in
voller Allgemeinheit. Insbesondere sei $\mu \in \lieAlgebra^*$ ein
$G$"=invarianter Impulswert und $B \in \Gamma^\infty(\bigwedge^2 T^*Q)$
$G$"=invariant sowie geschlossen.
Da wir einen im Allgemeinen nichtverschwindenden $G$"=invarianten
Impulswert $\mu$ vorliegen haben, müssen wir die in Abschnitt
\ref{sec:nichtVerschwindendeImpulswerte} dargelegten Überlegungen
anwenden. Wir definieren also $J^\mu_B := J_B - \mu$ und $\qJ[B]^\mu :=
\qJ[B] - \mu$. Die davon induzierten Koszul-
bzw.\ Quanten"=Koszul"=Operatoren wollen wir mit $\partial^\mu_B$
bzw.\ $\qkoszul^\mu_B$ bezeichnen. Die Inklusion der Zwangsfläche
$C_B^\mu := {J^\mu_B}^{-1}(0)$ notieren wir als $\ensuremath{\imath}_B^\mu \colon
C_B^\mu \hookrightarrow T^*Q$ und die kanonische Projektion auf den reduzierten
Phasenraum als $\pi^\mu_B \colon C^\mu_B \to C^\mu_B/G$. Schließlich
schreiben wir $(\ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}})_B^\mu$ für die Quanteneinschränkung.
Ist $\alpha \in \Gamma^\infty(T^*Q)$, so definieren wir die
\neuerBegriff{Fasertranslation} $\Faser{\alpha}\colon T^*Q \to T^*Q$
durch $\Faser{\alpha}(\zeta_q) := \zeta_q + \alpha(q)$
(vgl.\ \cite[Def. 3.2.14,~ii.)]{waldmann:2007a}). Dies ist offensichtlich
ein Diffeomorphismus von $T^*Q$ mit inverser Abbildung
$(\Faser{\alpha})^{-1} = \Faser{-\alpha}$ und es gilt
$\Faser{\alpha}^*(\mathcal{P}(Q)) \subset \mathcal{P}(Q)$. Sofern
$\alpha$ $G$"=invariant ist, folgt, wie man leicht nachrechnet, die
$G$"=Äquivarianz von $\Faser{\alpha}$ und damit unmittelbar die von
$\Faser{\alpha}^*$. Ist weiter $j \colon Q \to \lieAlgebra^*$ eine
glatte Abbildung, so definieren wir die Einsform $\Gamma_{j} \in
\Gamma^\infty(T^*Q)$ durch $\Gamma_{j} := \dPaar{j}{\gamma}$. Für ein
$G$"=äquivariantes $j$ ist, wie eine einfache Rechnung zeigt,
$\Gamma_{j}$ $G$"=invariant. Da nun $\mu \in \lieAlgebra^*$ ein
$G$"=invarianter Impulswert ist sowie $j_0 \colon Q \to \lieAlgebra^*$
eine $G$"=äquivariante Abbildung, sieht man nach obiger Anmerkung
direkt, dass auch $\FaserG{j_0 - \mu}$ $G$"=äquivariant ist.
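Wir halten noch fest, wie sich die kanonische Einsform
$\theta_{\kanSymbol}$ unter Fasertranslationen verhält: Wegen
$\mathfrak{p} \circ \Faser{\alpha} = \mathfrak{p}$ gilt für alle
$\zeta_q \in T^*_qQ$ und $w \in T_{\zeta_q}T^*Q$
\begin{align*}
(\Faser{\alpha}^* \theta_{\kanSymbol})\at{\zeta_q}(w)
= \dPaar{\zeta_q + \alpha(q)}{T_{\zeta_q}\mathfrak{p}\, w}
= (\theta_{\kanSymbol} + \mathfrak{p}^* \alpha)\at{\zeta_q}(w) \ensuremath{\text{,}}
\end{align*}
also $\Faser{\alpha}^* \theta_{\kanSymbol} = \theta_{\kanSymbol} +
\mathfrak{p}^* \alpha$, vgl.\ \cite{waldmann:2007a}.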
\begin{proposition}
\label{lem:sBeiUns}
Es gilt $\FaserG{j_0 - \mu}^* \JKan = \JB[\mu]$.
\end{proposition}
\begin{proof}
Sei $\xi \in \lieAlgebra$, $q \in Q$ und $\alpha_q \in T^*_qQ$, dann gilt
\begin{align*}
(\FaserG{j_0 - \mu}^*( \JKan(\xi)))(\alpha_q) &=
\JKan(\xi)(\FaserG{j_0 - \mu}(\alpha_q)) \\
&= \JKan(\xi)(\alpha_q + \Gamma_{j_0 - \mu}(q)) \\
&= \dPaar{\alpha_q}{\xi_Q(q)} + \dPaar{\Gamma_{j_0 -
\mu}(q)}{\xi_Q(q)}\\
&= \JKan(\xi)(\alpha_q) + \dPaar{j_0(q) -
\mu}{\gamma(\xi_Q(q))}\\
&= \JKan(\xi)(\alpha_q) + \dPaar{j_0(q) - \mu}{\xi}\\
&= \JB[\mu](\xi)(\alpha_q) \ensuremath{\text{.}}
\end{align*}
\end{proof}
\begin{proposition}
\label{prop:ReduktionKotangentialbuendelAllgemein}
Es gilt $\FaserG{j_0 - \mu}(C_B^\mu) \subset
C_{\mathrm{kan}}$ und $\FaserG{j_0 - \mu}\at{C^\mu_B}
\colon C^\mu_B \to C_{\mathrm{kan}}$ ist ein
$G$"=äquivarianter Diffeomorphismus, der einen eindeutig
bestimmten Diffeomorphismus ${\oFaserG{j_0 - \mu}}
\colon C^\mu_B/G \to C_{\mathrm{kan}}/G$
induziert, so dass das Diagramm
\def\tA[#1]{A_{#1}}
\begin{equation}
\label{eq:ReduktionKotangentialbuendelAllgemein}
\begin{tikzpicture}[baseline=(current
bounding box.center),description/.style={fill=white,inner sep=2pt}]
\matrix (m) [matrix of math nodes, row sep=3.0em, column
sep=3.5em, text height=1.5ex, text depth=0.25ex]
{
C^\mu_B & C_{\mathrm{kan}} \\
C^\mu_B/G & C_{\mathrm{kan}}/G \\
}; %
\path[->] (m-1-1) edge
node[auto]{$\FaserG{j_0 - \mu}\at{C^\mu_B}$}(m-1-2); %
\path[->] (m-1-1) edge node[auto]{$\pi^\mu_B$}(m-2-1); %
\path[->] (m-2-1) edge node[auto]{$\oFaserG{j_0 - \mu}$}(m-2-2); %
\path[->] (m-1-2) edge node[auto]{$\piKan$}(m-2-2); %
\end{tikzpicture}
\end{equation}
kommutiert.
\end{proposition}
\begin{proof}
Die erste Aussage folgt direkt aus Proposition \ref{lem:sBeiUns}, der Rest ist klar.
\end{proof}
\begin{bemerkung}
\label{bem:KotangentialbuendelreduktionSymplekto}
Man kann weiter noch das Folgende zeigen, wodurch die Struktur des
reduzierten Phasenraums geklärt wird. Er hat nämlich wieder die
Struktur eines Kotangentialbündels mit einer symplektischen Form, die
Magnetfeldterme aufweist; genauer gilt:
\begin{bemerkungEnum}
\item %
Es gibt eine eindeutig bestimmte geschlossene Zweiform $b$ auf
$Q/G$ mit $\varpi^* b = B + d\Gamma_{j_0 - \mu}$.
\item %
Der reduzierte Phasenraum $((T^*Q)_{\mathrm{red},\mu} =
C^\mu_B/G,\omega_{\mathrm{red},\mu})$ ist symplektomorph zu
$(T^*(Q/G), ({\omegaKan})_{\mathrm{red}} +
\overline{\mathfrak{p}}^* b)$ vermöge der Abbildung $\overline{u}
\circ {\oFaserG{j_0 - \mu}} \colon C^\mu_B/G \to T^*(Q/G)$, dabei
bezeichnet $\overline{\mathfrak{p}} \colon T^*(Q/G) \to Q/G$ die
Fußpunktprojektion.
\begin{equation} \label{eq:ReduktionKotangentialbuendelAllgemein2}
\begin{tikzpicture}[baseline=(current
bounding box.center),description/.style={fill=white,inner sep=2pt}]
\matrix (m) [matrix of math nodes, row sep=3.0em, column
sep=3.5em, text height=1.5ex, text depth=0.25ex]
{
C^\mu_B & C_{\mathrm{kan}} & \\
C^\mu_B/G & C_{\mathrm{kan}}/G & T^*(Q/G) \\
}; %
\path[->] (m-1-1) edge
node[auto]{$\FaserG{j_0 - \mu}\at{C^\mu_B}$}(m-1-2); %
\path[->] (m-1-1) edge node[auto]{$\pi^\mu_B$}(m-2-1); %
\path[->] (m-2-1) edge node[auto]{${\oFaserG{j_0 - \mu}}$}(m-2-2); %
\path[->] (m-1-2) edge node[auto]{$\piKan$}(m-2-2); %
\path[->] (m-1-2) edge node[auto]{$u$}(m-2-3); %
\path[->] (m-2-2) edge node[auto]{$\overline{u}$}(m-2-3); %
\end{tikzpicture}
\end{equation}
This statement is important for the construction of Kowalzig, Neumaier and Pflaum
\cite{kowalzig.neumaier.pflaum:2005a}; for the comparison with that work
we are aiming at, however, it plays only a
minor role. For a proof of this fact we refer to
\cite[Thm. 3.2]{kowalzig.neumaier.pflaum:2005a} as well as to the
references given there.
\end{bemerkungEnum}
\end{bemerkung}
For $C_{\mathrm{kan}}$ we now choose the global tubular neighbourhood from
Proposition \ref{prop:globaleTubenabbildung}; the geometric
$G$-equivariant homotopy it induces is again denoted by $\hKan$
and the induced $G$-equivariant prolongation by $\prolKan$.
By Proposition \ref{lem:sBeiUns} it is clear that with the choices $M = M' = T^*Q$, $J =
\JB[\mu]$, $J' = \JKan$ and $s = \FaserG{j_0 - \mu} \colon T^*Q \to
T^*Q$ the assumptions of Theorem
\ref{satz:ZurueckZiehenVonGeomHomotopieDaten} are satisfied. We
denote the geometric homotopy induced in this way by $\hB[\mu]$
and the induced geometric prolongation by $\prolB[\mu]$.

To facilitate the comparison with \cite{kowalzig.neumaier.pflaum:2005a},
we first define some symbols. To this end, let
$\{e_i\}$ be a fixed basis of $\lieAlgebra$.
\begin{align}
\label{eq:SymboleVonNikolai}
\mathrm{h}_{j_0 - \mu} &:= \mathrm{h}
\circ \FaserG[-]{j_0 - \mu}^*\at{\mathcal{P}(Q)[[\lambda]]} \colon
\mathcal{P}(Q)[[\lambda]] \to \mathcal{P}(Q)[[\lambda]] \notag \\
\mathrm{r}^i_{j_0 - \mu} &:= \FaserG{j_0 - \mu}^* \circ
\mathrm{r}^i \circ \FaserG[-]{j_0 -
\mu}^*\at{\mathcal{P}(Q)[[\lambda]]} \colon
\mathcal{P}(Q)[[\lambda]] \to \mathcal{P}(Q)[[\lambda]] \quad
\forall i \in \{1,\dots,\dim G\} \notag \\
\Delta_{\mu,\star}(F) &:= \frac{1}{\I \lambda} \sum_{i=1}^{\dim G}
\left(\mathrm{r}^i_{j_0 - \mu}(F)\JB[\mu](e_i) - \mathrm{r}^i_{j_0
- \mu}(F) \star \qJ[B]^\mu(e_i)\right) \in \mathcal{P}(Q)[[\lambda]] \quad \forall F \in
\mathcal{P}(Q)[[\lambda]]
\end{align}
Note that the maps introduced above are well defined because of
$\FaserG{j_0 - \mu}^*(\mathcal{P}(Q)[[\lambda]]) \subset
\mathcal{P}(Q)[[\lambda]]$ and by the remarks after equation
\eqref{eq:DeltaNikolai}.
As in Section \ref{sec:SpezialFallImpulsMagnetfeldNull}, we now want to
use the results obtained there to express the building blocks of the
star product constructed on the reduced phase space by the
quantum Koszul method in terms adapted to the cotangent bundle
situation, in order to then carry out a comparison with
\cite{kowalzig.neumaier.pflaum:2005a}. With the help of
Theorem~\ref{satz:ZurueckZiehenVonGeomHomotopieDaten} we obtain
the following propositions.
\begin{proposition}
\label{prop:HomotopieMitMagnetfeld}
For all $F \in \mathcal{P}(Q)$ we have
\begin{align}
\hB[\mu](F) = \mathrm{r}^i_{j_0 - \mu}(F) \otimes e_i \ensuremath{\text{.}}
\end{align}
\end{proposition}
\begin{proof}
By Theorem \ref{satz:ZurueckZiehenVonGeomHomotopieDaten} and Lemma
\ref{lem:HomotopiePolynomial} it follows immediately for $F \in \mathcal{P}(Q)$ that
\begin{align*}
\hB[\mu](F) = (\FaserG{j_0 - \mu}^* \circ \hKan \circ \FaserG[-]{j_0 -
\mu}^*)(F) = \FaserG{j_0 - \mu}^* \mathrm{r}^i
(\FaserG[-]{j_0 - \mu}^*(F)) \otimes e_i = \mathrm{r}^i_{j_0 -\mu}(F) \otimes
e_i \ensuremath{\text{.}}
\end{align*}
\end{proof}
Hence it is clear that for $F \in \mathcal{P}(Q)$ the equation
\begin{align}
\label{eq:DeltaMitMagnetfeld}
\Delta_{\mu,\star}(F) = \frac{1}{\I \lambda}(\kKoszulB[\mu] -
\qKoszulB[\mu])(\hB[\mu](F))
\end{align}
holds.
\begin{proposition}
\label{prop:klassischeEinschraenkungMitMagnetfeld}
For every $F \in \mathcal{P}(Q)^G$ the equation
\begin{align}
\label{eq:klassischeEinschraenkungMitMagnetfeld}
{\kInB[\mu]}^*(F) = (\piB[\mu]^* \circ (\overline{u} \circ {\oFaserG{j_0
- \mu}})^* \circ l^{-1} \circ \mathrm{h}_{j_0 - \mu})(F)
\end{align}
holds.
\end{proposition}
\begin{proof}
Let $F \in \mathcal{P}(Q)^G$. Then, using equation
\eqref{eq:InklusionenZurueckZiehen}, Proposition
\ref{prop:EinschraenkungFuerKotangentialbuendel} and diagram \eqref{eq:ReduktionKotangentialbuendelAllgemein2}, one obtains
\begin{align*}
{\kInB[\mu]}^*(F) &= ((\FaserG{j_0 - \mu}\at{C_B^\mu})^* \circ
\kInKan^* \circ \FaserG[-]{j_0 -
\mu}^*)(F)\\
&= ((\FaserG{j_0 - \mu}\at{C_B^\mu})^*\circ u^* \circ l^{-1} \circ
\mathrm{h} \circ \FaserG[-]{j_0 -
\mu}^*)(F)\\
&= (\piB[\mu]^* \circ (\overline{u} \circ {\oFaserG{j_0 - \mu}})^*
\circ l^{-1} \circ \mathrm{h}_{j_0 - \mu})(F)\ensuremath{\text{.}}
\end{align*}
\end{proof}
\begin{proposition}
\label{lem:l}
Let $j \colon Q \to \lieAlgebra^*$ be smooth. Then for every $F \in
\mathrm{h}(\mathcal{P}(Q))$ we already have $\FaserG{j}^*(F) = F$,
and hence also $\FaserG{j}^* \circ l = l$.
\end{proposition}
\begin{proof}
Let $Y \in \Gamma^\infty(HQ)$. Then for $q \in Q$ and $\alpha_q
\in T_q^*Q$ we have
\begin{align*}
(\FaserG{j}^*\mathsf{P}(Y))(\alpha_q) &=
\mathsf{P}(Y)(\alpha_q + \Gamma_{j}(q)) = \mathsf{P}(Y)(\alpha_q)
+ \dPaar{Y(q)}{\Gamma_{j}(q)}\\ &= \mathsf{P}(Y)(\alpha_q) +
\dPaar{\gamma(Y(q))}{j(q)} = \mathsf{P}(Y)(\alpha_q) \ensuremath{\text{.}}
\end{align*}
Since also $\FaserG{j}^*(\mathsf{P}(\chi)) = \mathsf{P}(\chi)$ for every
$\chi \in C^\infty(Q)$ and since $\FaserG{j}^*$ is an
algebra homomorphism, the claim follows.
\end{proof}
\begin{proposition}
\label{prop:PropPiMitMagnetfeld}
For all $f \in \oFaserG{j_0 -
\mu}^*(\overline{u}^*(\mathcal{P}(Q/G)))$ we have
\begin{align}
\label{eq:PropPiMitMagnetfeld}
(\prolB[\mu] \circ \piB[\mu]^*)(f) = (l \circ ({\overline{u}^{-1}}^*)
\circ {\oFaserG[-]{j_0 - \mu}^*})(f) \ensuremath{\text{.}}
\end{align}
\end{proposition}
\begin{proof}
Let $f \in \oFaserG{j_0 - \mu}^*(\overline{u}^*(\mathcal{P}(Q/G)))$.
Then
\begin{align*}
(l \circ ({\overline{u}^{-1}}^*) \circ {\oFaserG[-]{j_0 -
\mu}^*})(f) &= (\prolKan \circ \pi_{\mathrm{kan}}^* \circ
{\oFaserG[-]{j_0 -
\mu}^*})(f) \\
&= (\prolKan \circ (\FaserG[-]{j_0 - \mu}\at{C_{\mathrm{kan}}})^*
\circ
\piB[\mu]^*)(f)\\
&= (\FaserG[-]{j_0 - \mu}^* \circ \prolB[\mu] \circ
\piB[\mu]^*)(f) \ensuremath{\text{.}}
\end{align*}
Here, Proposition
\ref{prop:ProlongationFuerKotangentialbuendel} was used in the first step,
diagram \eqref{eq:ReduktionKotangentialbuendelAllgemein} in the second,
and equation \eqref{eq:ZurueckGezogenesProl} in the last step.
Now by Proposition \ref{lem:l} we have $\FaserG{j_0 - \mu}^* \circ l = l$, from which the claim follows.
\end{proof}
By combining the results of this section and taking into account the
injectivity of $\piB[\mu]^*$, one finally obtains, for all
$f,f' \in \oFaserG{j_0 - \mu}^*(\overline{u}^*(\mathcal{P}(Q/G)))$,
the following formula for the reduced star product \eqref{eq:SternproduktAufDemReduziertenPhasenraum3}:
\begin{align}
f \star_{\mathrm{red},\mu} f' = \left ((\oFaserG{j_0 - \mu}^*
\overline{u}^*) l^{-1} \mathrm{h}_{j_0 - \mu} \frac{\operatorname{\mathsf{id}}}{\operatorname{\mathsf{id}} - \I
\lambda\Delta_{\mu,\star}}\right)\left (l(\oFaserG{j_0 - \mu}^*
\overline{u}^*)^{-1} f \star l(\oFaserG{j_0 - \mu}^*
\overline{u}^*)^{-1} f'\right ) \ensuremath{\text{.}}
\end{align}
This leads us directly to the main result of this chapter.
\begin{satz}
\label{satz:VergleichMitNikolaisArbeitAllgemein}
Under the assumptions explained in Section
\ref{sec:KotangentialBuendelMitMagnetfeld} and with the notation
introduced in Sections
\ref{sec:SpezialFallImpulsMagnetfeldNull} and
\ref{sec:nichtVerschwindendeImpulswerteInAnwesenheitVonMagnetfeldern},
the star product pulled back by the symplectomorphism
$(\overline{u} \circ {\oFaserG{j_0 - \mu}})^{-1} \colon
T^*(Q/G) \to C_B^\mu/G$, namely $\tilde
\star_{\mathrm{red},\mu} := ((\overline{u} \circ {\oFaserG{j_0 -
\mu}})^{-1})^*\star_{\mathrm{red},\mu}$, satisfies for all $f,f' \in \mathcal{P}(Q/G)$ the
relation
\begin{align}
\label{eq:VergleichMitNikolaiAllgemein}
f \tilde \star_{\mathrm{red},\mu} f' = \left ( l^{-1} \mathrm{h}_{j_0 - \mu} \frac{\operatorname{\mathsf{id}}}{\operatorname{\mathsf{id}} - \I
\lambda\Delta_{\mu,\star}}\right)\left (l (f) \star l (f')\right ).
\end{align}
\end{satz}
This means that also in the case of magnetic fields and non-zero
$G$-invariant momentum values, the star products constructed in
\cite[Thm. 3.5]{kowalzig.neumaier.pflaum:2005a} are a special case of
the quantum Koszul reduction scheme considered here. In their work,
Kowalzig, Neumaier and Pflaum additionally allow quantum momentum values;
these, however, can be absorbed into the quantum momentum map and thus
do not constitute a further generalisation at this point.
In summary, we were able to show that the method proposed by Kowalzig,
Neumaier and Pflaum in \cite{kowalzig.neumaier.pflaum:2005a} for
constructing a star product on the reduced phase space, in the case of
cotangent bundles with magnetic fields, is a special case of the more
general quantum Koszul reduction scheme. More precisely, as we have
seen, the method of \cite{kowalzig.neumaier.pflaum:2005a}, which at
first appears rather technical, arises from the more natural quantum
Koszul reduction scheme by obvious choices. In particular, it is also a
special case of the BRST reduction method presented in
\cite{bordemann.herbig.waldmann:2000a}. The significance of the method
of Kowalzig, Neumaier and Pflaum certainly lies, among other things, in
the fact that within this framework results on the question of when
quantisation commutes with reduction could be proved. The quantum
Koszul method, on the other hand, stands out through its conceptual
clarity. We therefore venture to claim that we have built an important
bridge between the two methods.
\chapter{Koszul Reduction in Stages}
\label{cha:Koszul-Reduktion_in_Stufen}
In this chapter we address the question of whether, both in the
classical and in the quantum case, the methods of phase space reduction
presented in Chapter \ref{cha:Koszul-Reduktion} can also be carried out
in two stages, and how the observable algebras obtained in this way
relate to those one obtains by eliminating all the symmetries at once.

The classical situation has already been studied thoroughly in the
literature; see for instance \cite[Ch. 6.7]{ortega.ratiu:2004} as well
as \cite{marsden.misiolek:2007a} for a detailed exposition. If one
considers phase space reduction only for vanishing momentum values, it
turns out that the observable algebra, respectively the phase space,
obtained by reduction with respect to the whole symmetry group differs
from the one obtained by reduction in two stages only by an
isomorphism. For non-vanishing momentum values the situation becomes
considerably more complicated, and under some technical assumptions on
the momentum values (cf.\ \cite{marsden.misiolek:2007a}) one can prove
an analogous result. We, however, only need the much simpler theory for
momentum value $0$, which we will present in more detail in Section
\ref{sec:symplektischeReduktionInStufen}.

For quantum Koszul reduction, the question raised above has not yet
been studied in the literature. It is one of the main results of this
thesis that, under certain assumptions on the structure of the symmetry
group, also for quantum Koszul reduction the observable algebra
obtained by reduction in two stages is isomorphic to the one obtained
in a single stage. We will treat this in Section
\ref{sec:QuantenKoszulInStufen}.

The notation introduced in the running text of this chapter remains in
force for the rest of this thesis.
\section{Symplectic Reduction in Stages}
\label{sec:symplektischeReduktionInStufen}
Let $(M,\omega)$ be a symplectic manifold, let $G$ be a connected Lie
group acting on $M$ via $\Phi^{\scriptscriptstyle{G}} \dpA
G \times M \to M$ in a strongly Hamiltonian, free and proper fashion,
and let $\ensuremath{\iota} \dpA G_1 \hookrightarrow G$ be a normal,
connected Lie subgroup of $G$; in particular, $G_1$ is then closed in
$G$ (cf.\ \cite[Prop. 5.4.1]{lee:2003a}). Note that via
$\Phi^{\scriptscriptstyle{G_1}}(g_1,p) :=
\Phi^{\scriptscriptstyle{G}}(g_1,p)$ for all $g_1 \in G_1$, $p \in M$
this induces a symplectic, free and proper action
$\Phi^{\scriptscriptstyle{G_1}} \colon G_1 \times M \to M$ of $G_1$ on
$M$. Let further $J \dpA M \to \lieAlgebra[g]^*$ be a $G$-equivariant momentum map.
The obvious question then arises whether it matters if one performs
Marsden-Weinstein reduction with respect to the big group $G$, or first
with respect to $G_1$ and in a second step with respect to $G/G_1$. It
indeed does not matter, as we will show in this section in Theorem
\ref{satz:redPhasenraeumeSymplektomorph} for vanishing momentum
values. First, however, we set up the detailed framework of the
problem. To this end, we first observe that the adjoint action of $G$
induces a $G$-action on $\lieAlgebra_1^*$. This is needed in order to
see that the momentum map $J_1$ defined afterwards for the first
reduction step is even $G$-equivariant, so that the constraint surface
defined by $J_1$ will also be $G$-invariant.
Since $G_1 \subset G$ is normal, for every $g \in G$ the map
$\conjG_g \dpA G \to G$, $h \mapsto ghg^{-1}$ induces a unique
smooth map $\tconjG_{g}\dpA G_1 \to G_1$ such that the diagram
\begin{center}
\begin{tikzpicture}[description/.style={fill=white,inner sep=2pt}]
\matrix (m) [matrix of math nodes, row sep=3.0em, column
sep=3.5em, text height=1.5ex, text depth=0.25ex] %
{ G & G \\
G_1 & G_1 \\}; %
\path[right hook->] (m-2-1) edge node[auto] {\ensuremath{\iota}} (m-1-1);%
\path[right hook->] (m-2-2) edge node[auto] {\ensuremath{\iota}} (m-1-2);%
\path[->] (m-1-1) edge node[above=-2.5pt] {\small $\conjG_g$}
(m-1-2);%
\path[->] (m-2-1) edge node[above=-2.5pt] {\small $\tconjG_g$}
(m-2-2);%
\end{tikzpicture}
\end{center} commutes.
Because of $\AdG_g = T_e \conjG_g$, with $\tAdG_g := T_e \tconjG_g$
the following two diagrams also commute by the chain rule.
\begin{center}
\begin{tikzpicture}[description/.style={fill=white,inner sep=2pt}]
\hspace*{-3cm} \matrix (m) [matrix of math nodes, row sep=3.0em,
column sep=3.5em, text height=1.5ex, text depth=0.25ex]
{ \lieAlgebra[g] & \lieAlgebra[g] \\
\lieAlgebra[g]_1 & \lieAlgebra[g]_1\\};%
\path[right hook->] (m-2-1) edge node[auto] {$T_e \ensuremath{\iota}$} (m-1-1);%
\path[right hook->] (m-2-2) edge node[right] {$T_e \ensuremath{\iota}$}
(m-1-2);%
\path[->] (m-1-1) edge node[above=-2.5pt] {\small $\AdG_g$}
(m-1-2);%
\path[->] (m-2-1) edge node[above=-2.5pt] {\small $\tAdG_g$}
(m-2-2);%
\hspace{6cm} %
\matrix (n) [matrix of math nodes, row sep=3.0em, column
sep=3.5em, text height=1.5ex, text depth=0.25ex]
{ \lieAlgebra[g]^* & \lieAlgebra[g]^* \\
\lieAlgebra[g]_1^* & \lieAlgebra[g]_1^* \\};%
\path[->] (n-1-1) edge node[left] {$(T_e\ensuremath{\iota})^*$} (n-2-1);%
\path[->] (n-1-2) edge node[right] {$(T_e\ensuremath{\iota})^*$} (n-2-2);%
\path[->] (n-1-1) edge node[above=-2.5pt] {\small$(\AdG_g)^*$}
(n-1-2);%
\path[->] (n-2-1) edge node[above=-2.5pt] {\small$(\tAdG_g)^*$}
(n-2-2);%
\end{tikzpicture}
\end{center}
It is then immediately clear that $\tAdG \colon G \times \lieAlgebra[g]_1 \to
\lieAlgebra[g]_1$, $(g,\xi_1) \mapsto \tAdG_g\xi_1$ and $G \times
\lieAlgebra[g]^*_1 \to \lieAlgebra[g]^*_1$, $(g,\mu_1) \mapsto
(\tAdG_{g^{-1}})^*\mu_1$ are linear $G$-actions on $\lieAlgebra[g]_1$
and $\lieAlgebra[g]^*_1$, respectively. Moreover, for $g_1 \in
G_1$ we obviously have $\tAdG_{g_1} = \AdG[G_1]_{g_1}$ by definition, hence also
$(\tAdG_{g_1})^* = (\AdG[G_1]_{g_1})^*$.

We thus have a natural $G$-action on
$\lieAlgebra[g]^*_1$, with respect to which $(T_e\ensuremath{\iota})^*$
becomes $G$-equivariant. Since on $G_1 \times \lieAlgebra[g]^*_1$ it
coincides with the coadjoint action $(\AdG[G_1]_{\cdot^{-1}})^*$ with respect to $G_1$, the abuse of notation used frequently in the following is justified: for $g \in G$ and $\mu_1 \in \lieAlgebra[g]_1^*$ we again
simply write $g \mu_1$ instead of $(\tAdG_{g^{-1}})^*\mu_1$.
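As a simple sanity check, consider the following toy example (our own illustration, not taken from the cited literature), in which the induced $G$-action on $\lieAlgebra[g]_1^*$ is even trivial. Let $G = H_3$ be the Heisenberg group and $G_1 = Z(G)$ its centre, a normal, connected Lie subgroup. Since $\exp(\Ad_g \xi_1) = g \exp(\xi_1) g^{-1} = \exp(\xi_1)$ for central $\xi_1$, we get

```latex
% Toy example: G = H_3 (Heisenberg group), G_1 = Z(G) its centre.
\begin{align*}
  \tAdG_g \xi_1 = \Ad_g \xi_1 = \xi_1
  \quad \text{for all } g \in G,\
  \xi_1 \in \lieAlgebra[g]_1 = \mathfrak{z}(\lieAlgebra[g])
  \ensuremath{\text{,}}
\end{align*}
```

so the induced $G$-action on $\lieAlgebra[g]_1^*$ reads $g\mu_1 = (\tAdG_{g^{-1}})^*\mu_1 = \mu_1$.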
One now sees immediately that
\begin{align}
J_1 := (T_e \ensuremath{\iota})^* \circ J \dpA M \to
\lieAlgebra[g]_1^*\label{eq:klassischesJ1} \ensuremath{\text{,}}
\end{align}
i.\,e.\ $\dPaar{J_1(p)}{\xi_1} := \dPaar{J(p)}{T_e \ensuremath{\iota}
\xi_1}$ for $p \in M$, $\xi_1 \in \lieAlgebra[g]_1$, defines a momentum map
for the $G_1$-action. Indeed, for $p \in
M$ and $\xi_1 \in \lieAlgebra[g]_1$ one easily checks
\begin{align*}
X_{J_1(\xi_1)}(p) = X_{J(T_e\ensuremath{\iota} \xi_1)}(p) = (T_e\ensuremath{\iota}
\xi_1)^{\scriptscriptstyle{G}}_M(p) =
(\xi_1)^{\scriptscriptstyle{G_1}}_M(p) \textrm{,}
\end{align*}
where the last equality follows immediately from the definition and the
chain rule. By definition, $J_1$ is even $G$-equivariant, and not merely
$G_1$-equivariant, because of the $G$-equivariance of $(T_e
\ensuremath{\iota})^*$ and $J$. The $G$-equivariance implies in particular that $C_1 :=
J_1^{-1}(0)$ is stable under $\Phi^{\scriptscriptstyle{G}}$. The restricted actions
$\Phi^{\scriptscriptstyle{G}}_{\scriptscriptstyle{C_1}} :=
\Phi^{\scriptscriptstyle{G}}\at{G \times C_1} \dpA G \times C_1 \to C_1$
and $\Phi^{\scriptscriptstyle{G_1}}_{\scriptscriptstyle{C_1}} :=
\Phi^{\scriptscriptstyle{G_1}}\at{G_1 \times C_1} \dpA G_1 \times C_1
\to C_1$ are obviously again free and proper.
Since $G_1$ is a normal subgroup of $G$, the quotient $G/G_1$ is, as is
well known, again a Lie group
(cf.\ \cite[Thm. 9.22]{lee:2003a}).
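To make $J_1$ and $C_1$ concrete, here is a minimal toy example (our own illustration, not from the text): translations of the plane, where every subgroup is normal since $G$ is abelian.

```latex
% Toy example: G = R^2 acting on T^*R^2 by translations in the base,
% G_1 = R x {0} the first factor.
\begin{align*}
  &M = T^*\mathbb{R}^2 \ni (q^1,q^2,p_1,p_2), \qquad
  J(q,p) = (p_1,p_2) \in \lieAlgebra[g]^* \cong \mathbb{R}^2,\\
  &J_1(q,p) = ((T_e\ensuremath{\iota})^* \circ J)(q,p) = p_1, \qquad
  C_1 = J_1^{-1}(0) = \{p_1 = 0\} \ensuremath{\text{.}}
\end{align*}
```

Here $C_1$ is visibly invariant under the whole $G$-action, as the $G$-equivariance of $J_1$ predicts.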
The canonical projections $\wp \dpA G \to G/G_1$ and $\pi_1 \dpA C_1 \to C_1/G_1$
are surjective submersions, and hence so is $\wp \times
\pi_1$, cf.\ Proposition \ref{prop:ProduktVonSubmersionen}. Obviously, for all $c_1 \in C_1$ and $g \in G$ we have
\begin{align*}
(\wp\times\pi_1)(g,c_1) = (\wp\times\pi_1)(g',c_1') &\implies \exists g_1,
h_1 \in G_1 \: \text{with}\; g_1g(h_1c_1) = g'c_1'\\ &\implies \pi_1(gc_1) =
\pi_1(g'c_1') \textrm{,}
\end{align*}
where the last implication uses that $G_1$ is normal in
$G$. Hence there is a unique smooth map
$\Phi^{\scriptscriptstyle{G/G_1}} \dpA G/G_1 \times C_1/G_1 \to C_1/G_1$
such that the diagram
\begin{equation}
\label{eq:gedrueckteWirkung}
\begin{tikzpicture}[baseline=(current
bounding box.center),description/.style={fill=white,inner sep=2pt}]
\matrix (m) [matrix of math nodes, row sep=3.0em, column
sep=3.5em, text height=1.5ex, text depth=0.25ex]
{ G \times C_1& C_1 \\
G/G_1 \times C_1/G_1 & C_1/G_1 \\}; %
\path[->] (m-1-1) edge node[auto] {$\wp \times \pi_1$} (m-2-1); %
\path[->] (m-1-2) edge node[auto] {$\pi_1$} (m-2-2);%
\path[->] (m-1-1) edge node[above=-2.5pt] {\small
$\Phi^{\scriptscriptstyle{G}}_{\scriptscriptstyle{C_1}}$}(m-1-2);%
\path[->] (m-2-1) edge node[above=-2.5pt] {\small $
\Phi^{\scriptscriptstyle{G/G_1}}$} (m-2-2); %
\end{tikzpicture}
\end{equation}
commutes. Obviously, $\Phi^{\scriptscriptstyle{G/G_1}}$ is an
action of $G/G_1$ on $C_1/G_1$. Writing equivalence classes in square brackets,
the action takes the following form in our shorthand notation:
\begin{align}
\label{eq:gedrueckteWirkungAbk}
[g][c_1] = \Phi^{\scriptscriptstyle{G/G_1}}([g],[c_1]) = [gc_1], \, g \in
G,c_1 \in C_1 \ensuremath{\text{.}}
\end{align}
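In the toy example $M = T^*\mathbb{R}^2$ with $G = \mathbb{R}^2$ acting by translations in the base and $G_1 = \mathbb{R}\times\{0\}$ (our own illustration, not from the text), the induced action can be written down explicitly:

```latex
% Quotient action in the translation example:
% C_1/G_1 carries the residual coordinates (q^2, p_2).
\begin{align*}
  &C_1 = \{p_1 = 0\} \subset T^*\mathbb{R}^2, \qquad
  C_1/G_1 \cong T^*\mathbb{R} \ni (q^2,p_2),\\
  &\Phi^{\scriptscriptstyle{G/G_1}}([(a,b)],(q^2,p_2)) = (q^2 + b,\, p_2),
  \qquad [(a,b)] \in G/G_1 \cong \mathbb{R} \ensuremath{\text{.}}
\end{align*}
```

The class $[(a,b)]$ depends only on $b$, so the action of $G/G_1 \cong \mathbb{R}$ is again a translation action, now on $T^*\mathbb{R}$.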
\begingroup
\emergencystretch=0.8em
\begin{proposition}
$\Phi^{\scriptscriptstyle{G/G_1}}$ is symplectic.
\end{proposition}
\begin{proof}
Let $\omega_{\text{red}_1}$ be the reduced symplectic form on $\Mred[1] =
J_1^{-1}(0)/G_1$ and $\pi_1 \colon J_1^{-1}(0) \to \Mred[1]$ the
projection.
Then for all $g \in G$ we have
\begin{align*}
\pi_1^* ({\Phi^{\scriptscriptstyle{G/G_1}}_{[g]}})^*\omega_{\text{red}_1} =
({\Phi^{\scriptscriptstyle{G}}_{\scriptscriptstyle{C_1}}})_g^* \pi_1^*
\omega_{\text{red}_1}=
({\Phi^{\scriptscriptstyle{G}}_{\scriptscriptstyle{C_1}}})_g^*
\ensuremath{\imath_1}^* \omega = \ensuremath{\imath_1}^*
({\Phi^{\scriptscriptstyle{G}}_g})^* \omega = \ensuremath{\imath_1}^*
\omega = \pi_1^* \omega_{\text{red}_1}\ensuremath{\text{,}}
\end{align*}
and since $\pi_1$ is a submersion, the claim follows.
\end{proof}
\endgroup
\begin{lemma}[{\cite[Lem. 5.2.1]{marsden.misiolek:2007a}}]
\label{lem:QuotientWirktEigentlich}
$\Phi^{\scriptscriptstyle{G/G_1}}$ is free and proper.
\end{lemma}
\begin{proof}
Suppose $[g][c_1] = [c_1]$; then there is a $g_1 \in G_1$ with $g_1gc_1 = c_1$. Since
$\Phi^{\scriptscriptstyle{G}}_{\scriptscriptstyle{C_1}}$ is free, it
follows that $g_1g = e$, i.\,e.\ $g = g_1^{-1} \in G_1$, and hence $[g]
= [e]$. This shows that
$\Phi^{\scriptscriptstyle{G/G_1}}$ is free.

Now let $([g_n])$ be a sequence in $G/G_1$ and $([{c_1}_n])$ a
convergent sequence in $C_1/G_1$ with limit $[c_1] \in C_1/G_1$ such that
$([g_n][{c_1}_n])$ converges to some $[c_1'] \in C_1/G_1$. Since $C_1 \to
C_1/G_1$ is a principal fibre bundle, we can choose a local section
$\chi \dpA U_{[c_1]} \to C_1$ over a neighbourhood $U_{[c_1]}$ of $[c_1]$
with $\chi([c_1]) = c_1$, and may assume w.\,l.\,o.\,g.\ that $[{c_1}_n] \in
U_{[c_1]}$ for all $n \in \mathbb{N}$. By continuity,
$\chi([{c_1}_n])$ converges to $c_1 = \chi([c_1])$. If for each $n
\in \mathbb{N}$ we choose an element $h_{1,n} \in G_1$ with $\chi([{c_1}_n]) =
h_{1,n} {c_1}_n$, then $h_{1,n} {c_1}_n \to {c_1}$. Analogously we may
assume that there are elements $\{k_{1,n}\}_{n \in \mathbb{N}} \subset
G_1$ such that $k_{1,n} g_n {c_1}_n \to {c_1}'$, hence also
$k_{1,n}g_nh^{-1}_{1,n}(h_{1,n}{c_1}_n) \to {c_1}'$ as $n \to \infty$. Since
$\Phi$ is proper, there is a convergent subsequence of
$(k_{1,n}g_nh^{-1}_{1,n})$, and by continuity also a
convergent subsequence of $([g_n])$ in $G/G_1$.
\end{proof}
In order to carry out the second reduction step, we want to show that
there is a $G/G_1$-equivariant momentum map $J_2 \dpA
\Mred[1] = C_1/G_1 \to \Lie[G/G_1]^* $ which satisfies, for all $c_1 \in
C_1$ and $\xi \in \lieAlgebra[g]$, the equation
\begin{align}
\label{eq:DefZweiteImpulsAbbildung}
\dPaar{J_2([c_1])}{T_e\wp \xi} = \dPaar{J(c_1)}{\xi} \ensuremath{\text{.}}
\end{align}
To achieve this, we need the following two elementary
propositions.
\begin{proposition}
\label{prop:AdProp}
For $g_1 \in G_1$ and $\xi \in \lieAlgebra[g]$ we have $\Ad_{g_1}\xi -
\xi \in T_e\ensuremath{\iota} \lieAlgebra[g]_1$.
\end{proposition}
\begin{proof}
Let $\tilde{I}_{g_1} \dpA G \times G \to G$, $(g,h) \mapsto g_1g
g_1^{-1}h^{-1}$. Since $G_1 \subset G$ is normal, the map $I_{g_1} \dpA G
\to G_1$, $g \mapsto g_1gg_1^{-1}g^{-1}$ is well defined. If
$\Delta \dpA G \to G \times G$ denotes the diagonal map $g \mapsto
(g,g)$, then $\tilde{I}_{g_1} \circ \Delta = \ensuremath{\iota} \circ I_{g_1}
$, and hence, as one easily checks with the help of Proposition \ref{prop:produkte}~\refitem{item:DiagonaleKomponiert},
\begin{align*}
\Ad_{g_1}\xi - \xi = T_e\ensuremath{\text{con}}_{g_1} \xi - T_e(\operatorname{\mathsf{id}}_G) \xi = T_e\ensuremath{\iota}
T_eI_{g_1} \xi \ensuremath{\text{,}}
\end{align*}
which proves the claim.
\end{proof}
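A concrete matrix check of this proposition (our own example, not from the text): let $G$ be the group of invertible upper triangular $2\times 2$ matrices and $G_1$ its normal subgroup of unipotent ones, so that $\lieAlgebra[g]_1$ consists of the strictly upper triangular matrices. A direct computation gives

```latex
% Matrix verification of Ad_{g_1} xi - xi in g_1 for
% G = invertible upper triangular 2x2 matrices, G_1 = unipotent ones.
\begin{align*}
  g_1 = \begin{pmatrix} 1 & b \\ 0 & 1 \end{pmatrix}, \quad
  \xi = \begin{pmatrix} x & y \\ 0 & z \end{pmatrix}
  \;\Longrightarrow\;
  \Ad_{g_1}\xi - \xi = g_1 \xi g_1^{-1} - \xi
  = \begin{pmatrix} 0 & b(z-x) \\ 0 & 0 \end{pmatrix}
  \in \lieAlgebra[g]_1 \ensuremath{\text{,}}
\end{align*}
```

in accordance with Proposition \ref{prop:AdProp}.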
\begin{proposition}
\label{prop:AdVomQuotienten}
For the adjoint action $\AdG[G/G_1]$ on the quotient, we have
\begin{align}
\label{eq:AdVomQuotienten}
\AdG[G/G_1]_{\wp(g)} T_e\wp \xi = T_e\wp \AdG[G]_g \xi
\end{align}
for all $\xi \in \lieAlgebra[g]$ and $g \in G$.
\end{proposition}
\begin{proof}
We see that
\begin{align*}
\conjG[G/G_1]_{\wp(g)}\circ\wp(h) &= \wp(g) \wp(h) \wp(g)^{-1} \\
&= \wp(ghg^{-1}) = \wp \circ \conjG[G]_g(h) \quad \text{for all} \,
g,h \in G \ensuremath{\text{,}}
\end{align*}
and the claim follows by applying the chain rule.
\end{proof}
\begin{lemma}
\label{lem:zweiteImpulsabbildung}
Equation \eqref{eq:DefZweiteImpulsAbbildung} defines a
$G/G_1$-equivariant momentum map $J_2 \dpA \Mred[1] \to
\Lie[G/G_1]^* $.
\end{lemma}
\begin{proof}
First we check that $J_2$ is well defined. By definition
of $J_1$ and $C_1$, for $\xi_1 \in \lieAlgebra[g]_1$ and ${c_1}
\in C_1$ we already have $\dPaar{J({c_1})}{T_e\ensuremath{\iota} \xi_1} =
\dPaar{J_1({c_1})}{\xi_1} = 0$. To see the independence of the
representative of $[{c_1}]$, it suffices to show that the expression
$\dPaar{J(g_1 c_1)}{\xi}$ is independent of $g_1
\in G_1$ for all $c_1 \in C_1$. For arbitrary but fixed $c_1 \in C_1$, $\xi
\in \lieAlgebra[g]$, the condition $\dPaar{J(g_1{c_1})}{\xi} =
\dPaar{J(c_1)}{\xi}$ for all $g_1 \in G_1$ is, by the
$G_1$-equivariance of $J$, equivalent to
$\dPaar{J(c_1)}{\Ad_{g_1^{-1}} \xi -\xi} = 0$ for all $g_1 \in G_1$.
The latter, however, is clear by Proposition \ref{prop:AdProp} and the definition
of $J_1$ and $C_1$.

Next, recall the commutative diagram
\eqref{eq:gedrueckteWirkung}. Applying the chain rule to it, we obtain
the following relation for the fundamental vector fields, which will be
helpful in the next computation:
\begin{align*}
(T_e\wp \xi)_{C_1/G_1}(\pi_1(c_1)) &= T_{[e]}
\Phi^{\scriptscriptstyle{G/G_1}}(\cdot,\pi_1(c_1)) T_e \wp \xi \\
&= T_e(\Phi^{\scriptscriptstyle{G/G_1}}(\wp(\cdot),\pi_1(c_1))) \xi \\
&= T_e(\pi_1 \circ \Phi^{\scriptscriptstyle{G}}(\cdot,c_1)) \xi \\
&= T_{c_1} \pi_1 T_e\Phi^{\scriptscriptstyle{G}}(\cdot,c_1) \xi \\
&= T_{c_1} \pi_1 \xi_{C_1}(c_1) \ensuremath{\text{,}}
\end{align*}
for all $c_1 \in C_1$ and $\xi \in \lieAlgebra[g]$.
\jot=3pt
Let $\omega_{\text{red}_1}$ denote the symplectic form on $\Mred[1]$
uniquely determined by the relation $\ensuremath{\imath_1}^*
\omega = \pi_1^* \omega_{\text{red}_1}$. Next we show that
$J_2$ is indeed a momentum map. To this end, let ${c_1} \in C_1$, $v
\in T_{c_1}C_1$ and $\xi \in \lieAlgebra[g]$; then
\jot=5pt
\begin{align*}
\omega_{\text{red}_1} (\pi_1({c_1})) ((T_e\wp
\xi)_{\Mred[1]}(\pi_1({c_1})), T_{c_1} \pi_1 v) &=
\omega_{\text{red}_1} (\pi_1({c_1})) (T_{c_1}\pi_1\xi_{C_1}({c_1}), T_{c_1}
\pi_1
v) \\
&= (\pi_1^* \omega_{\text{red}_1})({c_1})(\xi_{C_1}({c_1}),v) \\
&= (\ensuremath{\imath_1}^* \omega)({c_1})(\xi_{C_1}({c_1}),v) \\
&= \omega({c_1})(T_{c_1} \ensuremath{\imath_1} \xi_{C_1}({c_1}),T_{c_1}\ensuremath{\imath_1} v) \\
&= \omega({c_1})(\xi_M({c_1}),T_{c_1}\ensuremath{\imath_1} v) \\
&= \dPaar{d_{c_1}J(\xi)}{T_{c_1}\ensuremath{\imath_1} v} \\
&= \dPaar{d_{c_1}(J(\xi) \circ \ensuremath{\imath_1})}{v} \\
&= \dPaar{d_{c_1}(J_2(T_e\wp\xi) \circ \pi_1)}{v} \\
&= \dPaar{d_{[{c_1}]}(J_2(T_e\wp\xi))}{T_{c_1}\pi_1 v} \ensuremath{\text{.}}
\end{align*}
We now turn to equivariance. For ${c_1} \in C_1$, $g \in G$ and
$\xi \in \lieAlgebra[g]$ one easily sees the following:
\begin{align*}
\dPaar{J_2([g][{c_1}])}{T_e\wp \xi} &= \dPaar{J(g{c_1})}{\xi} = \dPaar{gJ({c_1})}{\xi} = \dPaar{J({c_1})}{g^{-1} \xi} \\
&= \dPaar{J_2([{c_1}])}{T_e \wp (g^{-1} \xi)}
\stackrel{\hidewidth\eqref{eq:AdVomQuotienten}\hidewidth}{=}
\dPaar{J_2([{c_1}])}{[g^{-1}] T_e \wp\xi} = \dPaar{[g]^{-1}J_2([{c_1}])}{T_e \wp
\xi} \ensuremath{\text{.}}
\end{align*}
This also establishes the $G/G_1$-equivariance of $J_2$.
\end{proof}
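In the toy example $M = T^*\mathbb{R}^2$ with $G = \mathbb{R}^2$ acting by translations in the base, $J(q,p) = p$ and $G_1 = \mathbb{R}\times\{0\}$ (our own illustration, not from the text), equation \eqref{eq:DefZweiteImpulsAbbildung} determines $J_2$ explicitly:

```latex
% Second-stage momentum map in the translation example, with
% M_red1 = C_1/G_1 isomorphic to T^*R carrying coordinates (q^2, p_2):
\begin{align*}
  \dPaar{J_2([c_1])}{T_e\wp\, \xi} = \dPaar{J(c_1)}{\xi}
  \quad\Longrightarrow\quad
  J_2(q^2,p_2) = p_2 \ensuremath{\text{,}}
\end{align*}
```

which is precisely the standard momentum map for the residual $G/G_1 \cong \mathbb{R}$ translation action on $T^*\mathbb{R}$.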
For the remainder of this thesis, let $C_2 := J_2^{-1}(0)$ and
$\Mred[2] := C_2/(G/G_1)$, equipped with the reduced
symplectic form $\omega_{\text{red}_2}$ uniquely determined by the relation
$ \pi_2^* \omega_{\text{red}_2} = \imath_2^* \omega_{\text{red}_1}$.
Here $\imath_2 \dpA C_2 \hookrightarrow
\Mred[1]$ is the inclusion and $\pi_2 \dpA C_2 \to
\Mred[2]$ the canonical projection onto the quotient. Likewise, let $C :=
J^{-1}(0)$ and $\Mred[] := C/G$, equipped with the reduced
symplectic form $\omega_{\text{red}}$ uniquely determined by the equation
$\pi^* \omega_{\text{red}} = \ensuremath{\imath}^* \omega$,
where $\ensuremath{\imath} \dpA C \hookrightarrow M$ is the inclusion and $\pi \dpA C
\to \Mred[]$ the canonical projection onto the quotient. Finally, let
$\ensuremath{\jmath} \dpA C \hookrightarrow C_1$ be the inclusion of $C$ into $C_1$.
\begin{bemerkung}
\label{bem:PiAufC}
If $c \in C \subset C_1$, then $\pi_1(c) \in C_2$, since for every
$\xi \in \lieAlgebra[g]$ we have $\dPaar{J_2(\pi_1(c))}{T_e \wp \xi} =
\dPaar{J(c)}{\xi} = 0$. Hence there is a smooth map
$\widetilde \varsigma \dpA C \to C_2$ with $\tilde \varsigma(c) =
\pi_1(c)$ for $c \in C$. If $\pi(c) = \pi(c')$ for $c,c' \in C$,
there is a $g \in G$ with $gc = c'$, and therefore, by the
definition of the $G/G_1$-action (cf.\ equation
\eqref{eq:gedrueckteWirkungAbk}), $[g] \pi_1(c) = \pi_1(c')$, hence
$\pi_2(\pi_1(c)) = \pi_2(\pi_1(c'))$. Consequently, there is a smooth
map $\varsigma \dpA \Mred[] \to \Mred[2]$ such that the diagram
\begin{equation}
\begin{tikzpicture}[baseline=(current
bounding box.center),description/.style={fill=white,inner sep=2pt}]
\matrix (m) [matrix of math nodes, row sep=3.0em, column
sep=3.5em, text height=1.5ex, text depth=0.25ex]
{ C & C_2 \\
\Mred[] & \Mred[2] \\}; %
\path[->] (m-1-1) edge node[auto] {$\pi$} (m-2-1); %
\path[->] (m-1-2) edge node[auto] {$\pi_2$} (m-2-2);%
\path[->] (m-1-1) edge node[above=-2.5pt] {$\widetilde
\varsigma$}(m-1-2);%
\path[->] (m-2-1) edge node[above=-2.5pt] {$\varsigma$}
(m-2-2); %
\end{tikzpicture}
\end{equation}
commutes.
\end{bemerkung}
\begin{proposition}
\label{prop:tildeVarsigmaSurjektiv}
The map $\tilde \varsigma \colon C \to C_2$ is surjective.
\end{proposition}
\begin{proof}
Let $c_2 \in C_2$, i.\,e.\ $J_2(c_2) = 0$. There is a $c_1 \in C_1$
with $c_2 = \pi_1(c_1)$, so that $\dPaar{J(c_1)}{\xi} =
\dPaar{J_2(\pi_1(c_1))}{T_e\wp \xi} = 0$ for all $\xi \in
\lieAlgebra$. But then already $J(c_1) = 0$, hence $c_1 \in
C$, and thus $c_2 = \pi_1\at{C}(c_1) = \tilde \varsigma(c_1)$.
\end{proof}
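Both reduction routes can be traced through completely in the toy example $M = T^*\mathbb{R}^2$, $G = \mathbb{R}^2$ acting by translations in the base, $J(q,p) = p$, $G_1 = \mathbb{R}\times\{0\}$ (our own illustration, not from the text):

```latex
% Reduction in one stage versus two stages for the translation example:
\begin{align*}
  &\text{one stage:}  && C = \{p = 0\},
    && \Mred[] = C/G \cong \{\ast\}\ensuremath{\text{,}}\\
  &\text{two stages:} && C_1 = \{p_1 = 0\},\ \Mred[1] \cong T^*\mathbb{R},
    && C_2 = \{p_2 = 0\},\ \Mred[2] \cong \{\ast\} \ensuremath{\text{.}}
\end{align*}
```

Both reduced phase spaces are a single point, and $\varsigma$ is the obvious identification, as predicted by Theorem \ref{satz:redPhasenraeumeSymplektomorph} below.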
Altogether we have the following important commutative diagram:
\begin{equation}
\label{eq:Notation}
\begin{tikzpicture}[baseline=(current
bounding box.center),description/.style={fill=white,inner sep=2pt}]
\matrix (m) [matrix of math nodes, row sep=2.0em, column
sep=2.3em, text height=1.5ex, text depth=0.25ex]
{ J^{-1}(0) & & J_1^{-1}(0) & & M & & \lieAlgebra[g]^* \\
& & & & \lieAlgebra[g]_1^* & & \\
\Mred[] & J_2^{-1}(0) & \Mred[1] & \Lie[G/G_1]^* & & & \\
& & & & & & \\
& \Mred[2] & & & & & \\ };
\path[right hook->] (m-1-1) edge node[auto] {\ensuremath{\jmath}} (m-1-3);
\path[right hook->] (m-1-3) edge node[auto] {\ensuremath{\imath_1}} (m-1-5);
\path[->] (m-1-5) edge node[auto] {$J$} (m-1-7);
\path[right hook->] (m-3-2) edge node[auto] {$\imath_2$} (m-3-3);%
\path[->] (m-3-3) edge node[auto] {$J_2$} (m-3-4);%
\path[->] (m-1-1) edge node[auto] {$\pi$} (m-3-1); %
\path[->] (m-1-1) edge node[auto] {$\tilde\varsigma$} (m-3-2); %
\path[->] (m-1-3) edge node[auto] {$\pi_1$} (m-3-3); %
\path[->] (m-1-5) edge node[auto] {$J_1$} (m-2-5); \path[->]
(m-3-2) edge node[auto] {$\pi_2$} (m-5-2); %
\path[->] (m-3-1) edge node[auto] {$\varsigma$} (m-5-2.north
west); %
\path[right hook->] (m-1-1) edge[bend left=20] node[auto] {\ensuremath{\imath}}
(m-1-5);%
\path[<-] (m-2-5) edge[] node[below,sloped] {$(T_e \ensuremath{\iota})^*$}
(m-1-7);%
\path[->] (m-3-4.east) edge[bend right=40] node[below,sloped]
{$(T_e\wp)^*$} (m-1-7);%
\end{tikzpicture}
\end{equation}
Already for dimensional reasons it is clear that $\widetilde \varsigma$ is in general not immersive. Also for dimensional reasons, however,
$\varsigma$ does have a chance of being a diffeomorphism. This is
indeed the case; the following central theorem shows that $\varsigma$
is even a symplectomorphism.
\begin{satz}
\label{satz:redPhasenraeumeSymplektomorph}
Let $(M,\omega)$ be a symplectic manifold and let a
Lie group $G$ act on $M$ in a strongly Hamiltonian, free and
proper fashion with $G$-equivariant momentum map
$J\colon M \to \lieAlgebra^*$ and $C := J^{-1}(0) \neq
\emptyset$. Let $\ensuremath{\iota} \colon G_1 \hookrightarrow G$ be a normal
Lie subgroup of $G$ and $J_1 := (T_e\ensuremath{\iota})^*\circ J\colon M \to
\lieAlgebra_1^*$ the associated momentum map. Moreover, let $\Mred =
C/G$ be the reduced phase space with respect to the $G$-action and
$\Mred[1] = C_1/G_1$ the one with respect to the $G_1$-action. Then
$G/G_1$ acts freely and properly on $\Mred[1]$, and there is a
$G/G_1$-equivariant momentum map $J_2$ satisfying equation
\eqref{eq:DefZweiteImpulsAbbildung}. Under these
assumptions, $\Mred$ and the phase space $\Mred[2] := J_2^{-1}(0)/(G/G_1)$
reduced with respect to the $G/G_1$-action are symplectomorphic.
Explicitly, the map
\begin{align}
\varsigma \dpA \Mred[] \to \Mred[2]
\end{align}
introduced in Remark \ref{bem:PiAufC} is a symplectomorphism.
\end{satz}
\begin{proof}
  We first show that $\varsigma$ is injective. Suppose
  $\varsigma(\pi(c)) = \varsigma(\pi(c'))$ for $c,c' \in C$; then
  $\pi_2(\pi_1(c)) = \pi_2(\pi_1(c'))$. Hence there is a $g \in G$ with
  $\pi_1(gc) = [g] \pi_1(c) = \pi_1(c')$. Consequently there is a $g_1 \in
  G_1$ with $g_1gc = c'$, so that $\pi(c) = \pi(c')$, which was to be
  shown.
  The surjectivity of $\varsigma$ follows immediately from that of
  $\pi$, $\pi_2$ and $\tilde \varsigma$.
  Using the commutative diagram \eqref{eq:Notation}
  and the definition of the reduced symplectic forms, one easily
  computes:
\begin{align*}
\pi^* \varsigma^* \omega_{\text{red}_2} = {\widetilde \varsigma}^*
\pi_2^* \omega_{\text{red}_2} &= {\widetilde \varsigma}^* \imath_2^*
\omega_{\text{red}_1} = \ensuremath{\jmath}^* \pi_1^* \omega_{\text{red}_1} =
\ensuremath{\jmath}^* \ensuremath{\imath_1}^* \omega = \ensuremath{\imath}^* \omega = \pi^* \omega_{\text{red}}\ensuremath{\text{.}}
\end{align*}
  Hence $\varsigma$ is symplectic. In particular, $\varsigma$ is
  immersive. For dimensional reasons ($\dim \Mred[] = \dim \Mred[2]$)
  and by the inverse function theorem, $\varsigma$ is a
  diffeomorphism.
\end{proof}
\section{Quantum Koszul reduction in stages}
\label{sec:QuantenKoszulInStufen}
The subject of this chapter is to investigate whether and how the
classical reduction in stages presented in Section
\ref{sec:symplektischeReduktionInStufen} carries over to quantum
Koszul reduction. The first question is whether the star product
obtained by reduction in the first stage satisfies the prerequisites
for a further reduction at all. We consider this in the first section
of this chapter. In the second section we then turn to the question of
how the star product reduced in one stage relates to the one reduced
in two stages.
Let $\star$ be a $G$-invariant star product on $(M,\omega)$ and $\qJ
\dpA \lieAlgebra \to C^\infty(M)[[\lambda]]$ a $G$-equivariant
quantum momentum map.
That is, for $\xi \in \lieAlgebra[g]$, $g \in G$, $p \in M$ and $f \in \CM[M]$
the following conditions hold.
\begin{enumerate}[\itshape i.)]
\item $\qJ(\xi)$ is a deformation of $J(\xi)$.
\item
\label{item:quantenInvarianteImpulsabb}
$\I \lambda{} \{J(\xi),f\} =
[\qJ(\xi),f]_{\star}$.
\item $\qJ(gp) = g\qJ(p)$.
\end{enumerate}
Recall further that $\star$ is strongly invariant with respect to $G$
if and only if the classical momentum map $J$ is also a quantum
momentum map.
If $\lieAlgebra[g]_2 \subset \lieAlgebra[g]$ satisfies $\lieAlgebra[g] = T_e
\ensuremath{\iota} \lieAlgebra[g]_1 \oplus \lieAlgebra[g]_2$, this induces a
decomposition $\lieAlgebra^* = (\lieAlgebra^*)_1 \oplus (\lieAlgebra^*)_2$,
where the two summands $(\lieAlgebra^*)_1$ and $(\lieAlgebra^*)_2$
are given by $(\lieAlgebra^*)_1 := \{\mu \in \lieAlgebra^* \mid
\mu\at{\lieAlgebra_2} = 0\}$ and $(\lieAlgebra^*)_2 := \{\mu \in
\lieAlgebra^* \mid \mu\at{T_e\ensuremath{\iota}\lieAlgebra_1} = 0\}$. The
$G$-invariance of $\lieAlgebra_1$ directly implies that of
$(\lieAlgebra^*)_2$. In the following we consider $\lieAlgebra_2$ to be
chosen. Obviously, $(T_e
\ensuremath{\iota})^*\at{(\lieAlgebra^*)_1} \dpA (\lieAlgebra^*)_1 \to
\lieAlgebra[g]_1^*$ is an isomorphism of vector spaces.
\begin{definition}[Reductive homogeneous space]
  \label{def:ReduktiverHomogenerRaum}
  The homogeneous space $G/G_1$ is called \neuerBegriff{reductive} if
  there is a $G_1$-invariant vector space complement $\lieAlgebra_2$
  of $T_e\ensuremath{\iota} \lieAlgebra_1$.
\end{definition}
\begin{bemerkung}
\label{bem:ReduktiverHomogenerRaum}
  Since $G_1$ is connected, the $G_1$-invariance of $\lieAlgebra_2$ is
  equivalent to $[\xi_1,\xi_2] \in \lieAlgebra_2$ for all $\xi_1 \in
  T_e\ensuremath{\iota}\lieAlgebra_1$, $\xi_2 \in \lieAlgebra_2$. Since $G_1$ is
  normal in $G$ and hence $\lieAlgebra_1$ is a Lie ideal in
  $\lieAlgebra$, this is furthermore equivalent to the condition
  $[\xi_1,\xi_2] = 0$ for all $\xi_1 \in T_e\ensuremath{\iota}
  \lieAlgebra_1$, $\xi_2 \in \lieAlgebra_2$. In the same way one sees
  that $\lieAlgebra_2$ is $G$-invariant if and only if $\lieAlgebra_2$
  is a Lie ideal in $\lieAlgebra$.
\end{bemerkung}
\begin{bemerkung}
\label{bem:invariantesKomplement}
  That $\lieAlgebra[g]_2$ is a $G$-invariant complement of $T_e \ensuremath{\iota}
  \lieAlgebra[g]_1$ means, provided that -- as we always assume --
  $G$ and $G_1$ are connected, that there are a discrete group
  $\Gamma \subset G$ and simply connected groups $\tilde G_1$
  and $\tilde G_2$ such that $G \simeq (\tilde G_1 \times \tilde
  G_2)/\Gamma$. This is in particular always satisfied if $G$ is
  compact; see for instance \cite[Thm. 20.21]{lee:2003a}.
\end{bemerkung}
\begin{beispiel}
\label{bsp:heisenbergLieAlgebra}
  Let $\lieAlgebra$ be the Heisenberg Lie algebra, generated over
  $\mathbb{R}$ by the generators $P$, $Q$ and $R$, with Lie bracket
  determined by the relations $[P,Q] = R$ and $[P,R] = [Q,R] = 0$.
  Further, let $\lieAlgebra_1 := \mathbb{R} R$ be the center of
  $\lieAlgebra$. On the group level this corresponds to choosing $G$
  as the Heisenberg group and $G_1$ as the additive group
  $(\mathbb{R},+)$. In this case there is a $G_1$-invariant complement
  of $\lieAlgebra_1$ in $\lieAlgebra$, for instance
  $\lieAlgebra_2 := \mathbb{R}\textrm{-}\mathrm{Span}\{P,Q\}$,
  but no $G$-invariant complement since, as one easily verifies,
  $[\lieAlgebra,\lieAlgebra] \subset \lieAlgebra_1$.
\end{beispiel}
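As a concrete cross-check of this example, the following sketch verifies the bracket relations in the standard realization of the Heisenberg Lie algebra by strictly upper triangular $3\times 3$ matrices. This matrix realization is an assumption of the sketch and is not fixed in the text above.

```python
import numpy as np

# Standard matrix realization of the Heisenberg Lie algebra:
# P, Q, R as strictly upper triangular 3x3 matrices, bracket = commutator.
P = np.array([[0., 1., 0.], [0., 0., 0.], [0., 0., 0.]])
Q = np.array([[0., 0., 0.], [0., 0., 1.], [0., 0., 0.]])
R = np.array([[0., 0., 1.], [0., 0., 0.], [0., 0., 0.]])

def bracket(a, b):
    return a @ b - b @ a

# Defining relations: [P, Q] = R and [P, R] = [Q, R] = 0
assert np.allclose(bracket(P, Q), R)
assert np.allclose(bracket(P, R), 0)
assert np.allclose(bracket(Q, R), 0)

# Every bracket lies in the center g_1 = span{R}; hence the complement
# g_2 = span{P, Q} cannot be an ideal: [P, Q] = R does not lie in g_2.
```

The last comment reflects the point of the example: since all brackets land in $\lieAlgebra_1$, no complement of the center can be a Lie ideal, so no $G$-invariant complement exists.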
\begin{proposition}
\label{prop:p2Invariant}
  If $\lieAlgebra_2$ is $H$-invariant (with $H \in \{G_1,G\}$), then the
  canonical projection $\mathrm{p}_2 \colon \lieAlgebra^* \to
  (\lieAlgebra^*)_2$ is $H$-equivariant.
\end{proposition}
\begin{proof}
  Let $\mu_1 \in (\lieAlgebra^*)_1$ and $\mu_2 \in
  (\lieAlgebra^*)_2$ as well as $h \in H$. Then
  \begin{align*}
    h \mathrm{p}_2(\mu_1 + \mu_2) = h \mu_2 = \mathrm{p}_2(h \mu_2) = \mathrm{p}_2(h \mu_1 + h
    \mu_2) = \mathrm{p}_2(h(\mu_1 + \mu_2))\ensuremath{\text{.}}
  \end{align*}
  Here, the second step uses the $H$-invariance of
  $(\lieAlgebra^*)_2$. The third step uses the hypothesis, since the
  $H$-invariance of $\lieAlgebra_2$ immediately implies that of
  $(\lieAlgebra^*)_1$.
\end{proof}
\subsection{Invariance properties of a star product reduced with
  respect to a subgroup}
\label{sec:InvarianzDesErstenReduziertenSternprodukts}
In this section we see, analogously to the classical situation,
that $\qJ$ induces a $G$-equivariant quantum momentum map $\qJ[1]$ for
the first reduction step as well as a quantum momentum map $\qJ[2]$.
Since the latter is to serve the second reduction step, it must also
be $G/G_1$-equivariant. To ensure this, we will have to impose further
conditions on the groups. Throughout this section we assume $G/G_1$ to
be reductive and consider a $G_1$-invariant complement
$\lieAlgebra_2$ to be chosen.
We define
\begin{align}
  \label{eq:J1Definition}
  \qJ[1] \colon \lieAlgebra_1 \to C^\infty(M)[[\lambda]]
\end{align}
by
\begin{align}
  \qJ[1](\xi_1) := \qJ(T_e \ensuremath{\iota} \xi_1) \quad \text{ for }\xi_1 \in
  \lieAlgebra[g]_1\label{eq:J1Definition2}
\end{align}
and obtain the following proposition.
\begin{proposition}
\label{prop:FettJEImpulsabb}
  $\qJ[1]$ is a $G$-equivariant quantum momentum map.
\end{proposition}
\begin{proof}
  It is clear that $\qJ[1]$ is a deformation of $J_1$. Furthermore,
  for $f \in \CM[M]$ and $\xi_1 \in
  \lieAlgebra[g]_1$ we have
  %
  \begin{align*}
    \I \lambda \{J_1(\xi_1),f\} = \I \lambda \{J(T_e \ensuremath{\iota} \xi_1),f\} =
    [\qJ(T_e \ensuremath{\iota} \xi_1),f]_{\star} = [\qJ[1]( \xi_1),f]_{\star} \ensuremath{\text{.}}
  \end{align*}
  %
  By the $G$-equivariance of $(T_e \ensuremath{\iota})^*$ and that of $\qJ$ it is also
  clear that $\qJ[1] = (T_e\ensuremath{\iota})^* \circ \qJ$ is $G$-equivariant.
\end{proof}
\begin{proposition}
\label{prop:GWirkungaufkleing1RespektiertLieAlgebra}
  The action $\tAdG$ respects the Lie algebra structure on
  $\lieAlgebra[g]_1$, i.e.\ $\tAdG_g$ is a morphism of Lie algebras
  for every $g \in G$.
\end{proposition}
\begin{proof}
  This is immediately clear since for $g \in G$ the map $\tconjG_g
  \dpA G_1 \to G_1$ is a morphism of Lie groups, which, as is well
  known (cf.~\cite[Thm. 4.25]{lee:2003a}), implies that
  $\tAdG_g = T_e \tconjG_g$ is a morphism of Lie algebras.
\end{proof}
In the following, let $\ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}_1\dpA \CM[M][[\lambda]] \to \CM[C_1][[\lambda]]$
be the $G$-equivariant quantum restriction induced by $\qJ[1]$ and a
chosen $G$-equivariant prolongation map
$\prol[1] \dpA \CM[C_1] \to \CM[M]$. The geometric homotopy data
inducing $\prol[1]$ are at this point still arbitrary and
$G$-invariant. We will exploit this freedom of choice later.
Analogously to the classical situation, we now want to show that there
is a quantum momentum map
$\qJ[2] \colon \Mred[1] \to \Lie[G/G_1]_{\mathbb{C}}^*[[\lambda]]$
such that for all $\xi_2 \in \lieAlgebra_2$ and $c_1 \in C_1$
\begin{align}
  \dPaar{\qJ[2](\pi_1(c_1))}{T_e \wp \xi_2} = \ensuremath{\boldsymbol{\imath}_1^{\boldsymbol{*}}} \dPaar{
    \qJ}{\xi_2}(c_1)
\end{align}
holds. In contrast to the classical situation, however, the above
equation cannot be demanded for all $\xi \in \lieAlgebra$ but only for
$\xi_2 \in \lieAlgebra_2$, since $\ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}_1 \qJ$ could in general contain
contributions from $(\lieAlgebra_{\mathbb{C}}^*)_1$. The following
proposition clarifies the situation. To this end, let $\tqJ[1] \colon \lieAlgebra
\to C^\infty(M)[[\lambda]]$ be given by
\begin{align}
  \label{eq:tJ1a}
  \tqJ[1](T_e\ensuremath{\iota} \xi_1) = \qJ[1](\xi_1) = \qJ(T_e \ensuremath{\iota} \xi_1) \quad
  \text{for $\xi_1 \in \lieAlgebra_1$}
\end{align}
and
\begin{align}
  \label{eq:tJ1b}
  \tqJ[1](\xi_2) = 0 \quad \text{ for $\xi_2 \in \lieAlgebra_2$} \ensuremath{\text{.}}
\end{align}
\begin{proposition}
\label{prop:QuantenJ2}
  There is a uniquely determined map
  \begin{align}
    \qJ[2] \colon \Mred[1] \to \Lie[G/G_1]_{\mathbb{C}}^*[[\lambda]]\label{eq:QuantenJ21}
  \end{align}
  with
  \begin{align}
    \label{eq:QuantenJ2}
    \dPaar{\qJ[2](\pi_1(c_1))}{T_e \wp \xi} = \ensuremath{\boldsymbol{\imath}_1^{\boldsymbol{*}}}\dPaar{
      \qJ - \tqJ[1]}{\xi}(c_1) \quad \forall c_1 \in
    C_1, \xi \in \lieAlgebra \ensuremath{\text{.}}
  \end{align}
\end{proposition}
\begin{proof}
	The proof is analogous to the classical case. Uniqueness is
	clear. We define $\qJ[2]$ by equation
	\eqref{eq:QuantenJ2} and have to show that this is indeed well
	defined. First one sees that the right-hand side of equation
	\eqref{eq:QuantenJ2} vanishes for
	$\xi = T_e\ensuremath{\iota} \xi_1 \in T_e\ensuremath{\iota} \lieAlgebra_1$, since
	\begin{align*}
          \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}_1\dPaar{\qJ - \tqJ[1]}{T_e \ensuremath{\iota} \xi_1} &=
          \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}_1\dPaar{\qJ}{T_e\ensuremath{\iota} \xi_1} -
          \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}_1\dPaar{\tqJ[1]}{T_e\ensuremath{\iota} \xi_1} \\ &=
          \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}_1 \dPaar{\qJ[1]}{\xi_1} - \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}_1
          \dPaar{\qJ[1]}{\xi_1} = 0\ensuremath{\text{,}}
	\end{align*}
	where the second-to-last step uses the definitions of $\qJ[1]$ and
	$\tqJ[1]$. Next we show that the right-hand side of equation
	\eqref{eq:QuantenJ2} also remains unchanged when $c_1$ is replaced
	by $g_1c_1$ with $g_1 \in G_1$, i.e.\ that for all $\xi \in \lieAlgebra$
	the expression $\ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}_1\dPaar{\qJ - \tqJ[1]}{\xi}(g_1c_1)$
	is independent of $g_1$. To see this, we first note that
	$\tqJ[1]$ is $G_1$-equivariant, since $G/G_1$ is reductive and
	$\qJ[1]$ is already $G_1$-equivariant. Since moreover $\qJ$ and
	$\ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}_1$ are $G_1$-equivariant as well, we obtain
	\begin{align*}
          \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}_1 \dPaar{\qJ - \tqJ[1]}{\xi} (g_1 c_1) &=
          (g_1^{-1} \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}_1 \dPaar{\qJ - \tqJ[1]}{\xi} )( c_1)\\
          &= \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}_1 (g_1^{-1} \dPaar{\qJ - \tqJ[1]}{\xi} )( c_1)\\
          &= \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}_1 (\dPaar{\qJ - \tqJ[1]}{g_1^{-1} \xi})(c_1) \ensuremath{\text{,}}
	\end{align*}
	from which we see that, analogously to the classical case, it
	suffices to show that $\ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}_1\dPaar{\qJ - \tqJ[1]}{g_1^{-1}\xi - \xi} =
	0$. But this is clear by Proposition \ref{prop:AdProp}
	and the considerations above.
\end{proof}
We note that the map $\chi\colon \mathrm{Lie}(G/G_1)^* \to
(\lieAlgebra^*)_2$, $\alpha \mapsto (T_e\wp)^*\alpha$ is obviously
bijective, write $\mathrm{p}_2 \colon \lieAlgebra^* \to (\lieAlgebra^*)_2 $
for the canonical projection onto the second factor, and come to the
first important theorem of this section.
\begin{samepage}
  \begin{satz}[Invariance properties of the reduced star product]
\label{thm:ReduziertesSternproduktInvariant}
\begin{satzEnum}
    \item %
      The star product $\starred[1]$ is $G/G_1$-invariant.
    \item %
      $\qJ[2]$ is a quantum Hamiltonian function and, if
      $\lieAlgebra_2$ is $G$-invariant, also $G/G_1$-equivariant,
      i.e.\ a quantum momentum map.
    \item %
      \label{item:MitAlpha}
      If $\qJ = J + \alpha$ for some $\alpha \in
      \lambda \lieAlgebra[g]_{\mathbb{C}}^*[[\lambda]]$ and $\ensuremath{\boldsymbol{\imath}_1^{\boldsymbol{*}}} J = \ensuremath{\imath_1^*} J
      $, then $\qJ[2] = J_2 + \chi^{-1}(\mathrm{p}_2(\alpha))$
      and $\starred[1]$ is strongly invariant with respect to the
      $G/G_1$-action.
\end{satzEnum}
\end{satz}
\end{samepage}
\begin{proof}
\begin{beweisEnum}
  \item %
    This follows immediately from the $G$-equivariance of $\ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}_1$ and
    $\prol_1$, the definition of the $G/G_1$-action, equation
    \eqref{eq:SternproduktAufDemReduziertenPhasenraum3} and the
    submersivity of $\pi_1$.
  \item %
    It is obvious that $\qJ[2]$ is a deformation of $J_2$.
    Let $f \in \CM[{\Mred[1]}]$; then for all $\xi_2 \in \lieAlgebra_2$
\begin{align*}
      \lefteqn{\pi_1^*(\I \lambda \{f,J_2(T_e\wp \xi_2)\}_{\text{red}_1})}\\
      &= \I \lambda \ensuremath{\imath_1^*} \{ \prol[1] \pi_1^*
      f,\prol[1] \pi_1^* (J_2(T_e\wp \xi_2))\} && \eAnn{by equation \eqref{eq:ReduziertePoissonKlammer}}\\
      &= \I \lambda \ensuremath{\imath_1^*} \{\prol[1] \pi_1^*
      f,\prol[1] \ensuremath{\imath_1^*} J({\xi_2})\} && \eAnn{by definition of $J_2$}\\
      &= \I \lambda \ensuremath{\imath_1^*} \{\prol[1] \pi_1^* f,J({\xi_2})\}
      && \eAnn{$(\operatorname{\mathsf{id}} - \prol[1] \ensuremath{\imath_1^*})(\CM[M]) \subset \kIdeal[1]$,}[] \\
      &\phantom{=} && \eAnn[]{ $\prol[1]
        \pi_1^*f \in \kbIdeal[1]$ by
        Prop. \ref{prop:FunktionenAufDemReduziertenPhasenraum}~\refitem{item:CharakterisierungDesKlassischenIdealisators}} \\
      &= \I \lambda \ensuremath{\imath_1^*} {\xi_2}_M(\prol[1]\pi_1^* f) \\
      &= \I \lambda \ensuremath{\imath_1^*} \prol[1] {\xi_2}_{C_1}(\pi_1^* f)
      &&\eAnn{$\prol[1]$
        $G$-invariant}\\
      &= \I \lambda \ensuremath{\boldsymbol{\imath}_1^{\boldsymbol{*}}} \prol[1]{\xi_2}_{C_1}(\pi_1^* f)
      &&\eAnn{since $\ensuremath{\imath_1^*} \prol[1] = \operatorname{\mathsf{id}} = \ensuremath{\boldsymbol{\imath}_1^{\boldsymbol{*}}} \prol[1]$}\\
      &= \I \lambda \ensuremath{\boldsymbol{\imath}_1^{\boldsymbol{*}}} {\xi_2}_M(\prol[1] \pi_1^*f)
      &&\eAnn{$\prol[1]$ $G$-invariant}\\
      &= \I \lambda \ensuremath{\boldsymbol{\imath}_1^{\boldsymbol{*}}} \{\prol[1] \pi_1^*f,J({\xi_2})\} \\
      &= \ensuremath{\boldsymbol{\imath}_1^{\boldsymbol{*}}} [\prol[1]\pi_1^* f,\qJ({\xi_2})]_{\star}
      &&\eAnn{since $\qJ$ is a quantum momentum map}\\
      &= \ensuremath{\boldsymbol{\imath}_1^{\boldsymbol{*}}} [\prol[1]\pi_1^* f,\prol[1] \ensuremath{\boldsymbol{\imath}_1^{\boldsymbol{*}}} \qJ({\xi_2})]_{\star} && \eAnn{$(\operatorname{\mathsf{id}} - \prol[1] \ensuremath{\boldsymbol{\imath}_1^{\boldsymbol{*}}})(\CM[M][[\lambda]]) \subset \qIdeal[1]$,}[] \\
      &\phantom{=} && \eAnn[]{ $\prol[1]
        \pi_1^*f \in \qbIdeal[1]$ by Lem. \ref{lem:CharakterisierungQuantenLieIdealisator}} \\
      &= \ensuremath{\boldsymbol{\imath}_1^{\boldsymbol{*}}} [\prol[1]\pi_1^*
      f,\prol[1] \pi_1^* \qJ[2](T_e\wp {\xi_2})]_{\star} && \eAnn{by definition of $\qJ[2]$} \\
      &= \pi_1^*[f,\qJ[2](T_e\wp {\xi_2})]_{\starred[1]}\ensuremath{\text{.}}
\end{align*}
    Hence, by the injectivity of $\pi_1^*$, we immediately obtain
    \begin{align*}
      \I \lambda \{f,J_2(T_e\wp \xi_2)\}_{\text{red}_1} =
      [f,\qJ[2](T_e\wp {\xi_2})]_{\starred[1]} \ensuremath{\text{.}}
    \end{align*}
    The $G/G_1$-equivariance is seen as in the proof of Lemma
    \ref{lem:zweiteImpulsabbildung}.
    %
    Indeed, for $c_1 \in C_1$, $g \in G$ and $\xi_2 \in \lieAlgebra[g]_2$
    we have:
\begin{align*}
\dPaar{\qJ[2]([g][c_1])}{T_e\wp \xi_2} &= \ensuremath{\boldsymbol{\imath}_1^{\boldsymbol{*}}} \dPaar{
\qJ}{\xi_2}(gc_1) = (g^{-1} \ensuremath{\boldsymbol{\imath}_1^{\boldsymbol{*}}} \dPaar{\qJ}{\xi_2})(c_1) = \ensuremath{\boldsymbol{\imath}_1^{\boldsymbol{*}}} g^{-1} \dPaar{\qJ}{\xi_2}(c_1)\\
&= \ensuremath{\boldsymbol{\imath}_1^{\boldsymbol{*}}}\dPaar{\qJ}{g^{-1} \xi_2}(c_1) =
\dPaar{\qJ[2]([c_1])}{T_e
\wp (g^{-1} \xi_2)} \\
&\stackrel{\hidewidth\eqref{eq:AdVomQuotienten}\hidewidth}{=}
\dPaar{\qJ[2]([c_1])}{[g]^{-1} T_e \wp\xi_2}\\
&= \dPaar{[g]\qJ[2]([c_1])}{T_e \wp \xi_2} \ensuremath{\text{.}}
\end{align*}
%
    Here, the fifth step uses that $\lieAlgebra_2$ is
    $G$-invariant.
  \item %
    For $\xi_2 \in \lieAlgebra_2$ and $c_1 \in C_1$ we have
\begin{align*}
\dPaar{\qJ[2](\pi_1(c_1))}{T_e\wp \xi_2}
&= \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}_1 \dPaar{\qJ}{\xi_2}(c_1) \\
&= \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}_1 \dPaar{J}{\xi_2}(c_1) + \dPaar{\alpha}{\xi_2}\\
&= \ensuremath{\imath^*}_1 \dPaar{J}{\xi_2}(c_1) +
\dPaar{\mathrm{p}_2(\alpha)}{\xi_2}\\
&=
\dPaar{J_2(\pi_1(c_1))}{T_e \wp \xi_2} +
\dPaar{\chi^{-1}(\mathrm{p}_2(\alpha))}{T_e \wp \xi_2} \ensuremath{\text{,}}
\end{align*}
    and hence $\qJ[2] = J_2 + \chi^{-1}(\mathrm{p}_2(\alpha))$. It is
    then clear that $\star_{\mathrm{red}_1}$ is strongly invariant,
    since quantum commutators with constant functions vanish. That is,
    along with $\qJ[2]$, also $J_2 = \qJ[2] -
    \chi^{-1}(\mathrm{p}_2(\alpha))$ is a quantum momentum map.
\end{beweisEnum}
\end{proof}
\begin{bemerkung}
\label{bem:Quantenhamiltonfunktion}
\begin{bemerkungEnum}
  \item
    For the $G/G_1$-invariance of $\starred[1]$ one evidently does not
    need the existence of a $G_1$-invariant complement of
    $T_e\ensuremath{\iota}\lieAlgebra_1$.
  \item %
    For the statement that $\qJ[2]$ is a quantum Hamiltonian function,
    the assumption that there is a $G_1$-invariant complement of
    $T_e\ensuremath{\iota}\lieAlgebra_1$ suffices, which is guaranteed whenever
    $G/G_1$ is reductive. In order to reduce further, however, one
    needs a quantum momentum map. We obtain one if we impose the
    strictly stronger condition that there be a $G$-invariant
    complement of $T_e\ensuremath{\iota}\lieAlgebra_1$.
\end{bemerkungEnum}
\end{bemerkung}
If $\star$ is strongly invariant with respect to the $G$-action, one
can choose the geometric homotopy data, and thus $\ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}_1$, in such a
way that indeed $\ensuremath{\imath^*}_1 J = \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}_1 J$ holds. To be able to show
this, however, we still need some preparations.
\begin{lemma}
\label{lem:LinAlg}
  Let $U$, $V$, $V'$ and $W$ be finite-dimensional $\mathbb{R}$-vector
  spaces with $\dim V = \dim V'$ and let $f \dpA U \oplus V \to W \oplus V'$
  be a linear map with $f(U) \subset W$. Then
  \begin{align*}
    f \text{ surjective} \implies f\at{U} \dpA U \to W \text{ surjective}\ensuremath{\text{.}}
  \end{align*}
\end{lemma}
\begin{proof}
  We first note that, by the surjectivity of $f$, for every $x
  \in W\oplus V'$ there are $u \in U$ and $v \in V$ with $x = f(u+v) = f(u) +
  f(v)$. Therefore $W \oplus V' = f(U) + f(V)$. Moreover, obviously
  $\dim(f(U)) \leq \dim W$, since by hypothesis $f(U) \subset W$.
  Consequently one obtains
  \begin{align*}
    \dim W + \dim V \; \,
    &\stackrel{\hidewidth\text{hyp.}\hidewidth}{=}\; \, \dim W + \dim
    V' =
    \dim( W \oplus V' )\stackrel{f \text{ surj.}}{=} \dim(f(U) + f(V)) \\
    &= \; \, \dim(f(U)) + \dim(f(V)) - \dim(f(U) \cap f(V)) \\
    &\leq \; \, \underbrace{\dim(f(U))}_{\leq \dim W} +
    \underbrace{\dim(f(V))}_{\leq \dim V} \leq \dim W + \dim V \ensuremath{\text{.}}
  \end{align*}
  Hence equality actually holds everywhere in the above chain of
  inequalities. In particular, the following relation is valid:
  \begin{align*}
    \dim(f(U)) + \dim(f(V)) = \dim W + \dim V \ensuremath{\text{.}}
  \end{align*}
  But since $\dim (f(U)) \leq \dim W$ and $\dim (f(V)) \leq \dim V$,
  it follows in particular that $\dim (f(U)) = \dim W$, from which the
  claim follows.
\end{proof}
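The dimension argument of the lemma can also be illustrated numerically. The following randomized sketch (the block structure and the concrete dimensions are illustrative choices, not part of the lemma) builds a surjective linear map $f$ with $f(U)\subset W$ and checks that its restriction to $U$ is surjective onto $W$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions: f : U + V -> W + V' with dim V = dim V'.
dim_U, dim_V, dim_W = 5, 3, 4

# Block structure enforcing f(U) subset W: the V'-component on U is zero.
A = rng.standard_normal((dim_W, dim_U))  # component U -> W
B = rng.standard_normal((dim_W, dim_V))  # component V -> W
C = rng.standard_normal((dim_V, dim_V))  # component V -> V'
f = np.block([[A, B], [np.zeros((dim_V, dim_U)), C]])

# A generic random f of this shape is surjective (full row rank) ...
assert np.linalg.matrix_rank(f) == dim_W + dim_V
# ... and then, as the lemma asserts, f|_U : U -> W is surjective too.
assert np.linalg.matrix_rank(A) == dim_W
```

This is only a sanity check on one random instance; the proof above is what establishes the implication in general.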
\begin{proposition}
\label{prop:Jg2}
  We have $J(C_1) \subset (\lieAlgebra[g]^*)_2$.
\end{proposition}
\begin{proof}
  Let $\xi_1 \in \lieAlgebra_1$ and $c_1 \in C_1$. Then obviously
  \begin{align*}
    \dPaar{J(c_1)}{T_e\ensuremath{\iota} \xi_1} = \dPaar{J_1(c_1)}{\xi_1} = 0 \ensuremath{\text{.}}
  \end{align*}
\end{proof}
\begin{lemma}
\label{lem:JCSubmersiv}
  $J\at{C_1} \dpA C_1 \to (\lieAlgebra[g]^*)_2$ is a submersion.
\end{lemma}
\begin{proof}
  Let $c_1 \in C_1$ be arbitrary but fixed. Further, let $K_{c_1}$
  be a complement of $T_{c_1}\ensuremath{\imath_1} T_{c_1}C_1$ in $T_{c_1}M$, i.e.\
\begin{align*}
T_{c_1}\ensuremath{\imath_1} T_{c_1}C_1 \oplus K_{c_1} = T_{c_1}M \ensuremath{\text{.}}
\end{align*}
  We define
\begin{align*}
U &:= T_{c_1}\ensuremath{\imath_1} T_{c_1}C_1, \quad V := K_{c_1} \ensuremath{\text{,}} \\
W &:= T_{J(c_1)}(\lieAlgebra[g]^*)_2, \quad V' :=
T_{0}(\lieAlgebra[g]^*)_1
\end{align*}
  and
\begin{align*}
f := T_{c_1}J \dpA U \oplus V \to W \oplus V'\ensuremath{\text{.}}
\end{align*}
  We have
\begin{align*}
\dim K_{c_1} = \dim M - \dim C_1 = \dim G_1 \ensuremath{\text{.}}
\end{align*}
  Hence, in the notation just introduced, $\dim V = \dim V'$. By
  Lemma \ref{lem:JSurjektiv}, $f$ is surjective. Furthermore, by
  Proposition \ref{prop:Jg2} we know that $J(C_1) \subset (\lieAlgebra[g]^*)_2$
  holds, and thus also
\begin{align*}
f(U) &= T_{c_1}J(T_{c_1}\ensuremath{\imath_1} T_{c_1}C_1) = T_{c_1}(J \circ
\ensuremath{\imath_1})(T_{c_1}C_1) \subset T_{J(c_1)}(\lieAlgebra[g]^*)_2 = W\ensuremath{\text{.}}
\end{align*}
  Thus we may apply Lemma \ref{lem:LinAlg} and obtain
\begin{align*}
\im (T_{c_1}J\at{C_1}) = f(U) = W =
T_{J(c_1)}(\lieAlgebra[g]^*)_2\ensuremath{\text{.}}
\end{align*}
\end{proof}
The following lemma is an interesting generalization of the classical
tube theorem (cf.\ Def. \ref{def:Inv.Tubenumgebung}). If no group
actions are taken into account, the theorem is well known in the
literature, in particular in the context of control data of stratified
spaces. For the proof, one is usually referred to the work of Mather
\cite[Prop. 6.2]{mather:1970}. The proof given there, however, does
not appear particularly convincing. Two other proofs can be found in
the work of Pflaum \cite[3.5.3]{pflaum:2000} and in
\cite[Ch. II. Thm. 1.6]{gibson:1976}. The latter seems to use the
language appropriate to the problem and lends itself well to
formulating and proving the theorem while additionally taking group
actions into account.
\begin{lemma}[compatible $G$-invariant tubular neighborhood]
  \label{lem:kompatibleTuben}
  Let $M$ and $N$ be manifolds and let $G$ be a Lie group acting on
  $M$ and $N$, the action on $M$ being proper. Further, let $C
  \subset M$ be a $G$-invariant submanifold and $f\dpA M \to
  N$ a $G$-equivariant smooth map such that $f\at{C}\dpA C
  \to N$ is a submersion. Then there is a $G$-invariant tubular neighborhood
  $(\pi\dpA E \to C,U_E,U_M,\tau)$ of $C$ in $M$ with $f(r(p)) = f(p)$
  for all $p \in U_M$. Here $r := \pi \circ \tau^{-1} \dpA
  U_M \to C$ denotes the induced $G$-equivariant retraction.
\end{lemma}
The proof of Lemma \ref{lem:kompatibleTuben} can be found in the
appendix; see Theorem \ref{satz:GkompatibleTuben}.
\begin{definition}
\label{def:kompatibleTube}
  A tubular neighborhood as in Lemma \ref{lem:kompatibleTuben} is also
  called a \neuerBegriff{$G$-invariant tubular neighborhood compatible with $f$}.
\end{definition}
\begin{lemma}
\label{lem:NormalformJ}
  There is a $G_1$-invariant \tn{good} tubular neighborhood $\Psi_1\dpA
  U_1 \to V_1 \subset C_1 \times \lieAlgebra[g]_1^*$ of $C_1$ such that
  $\dPaar{J(\Psi^{-1}_1(c_1,\mu_1))}{\xi_2}$ is independent of
  $\mu_1 \in \projFaktor_2(V_1)$ for all $c_1 \in C_1$ and
  $\xi_2 \in \lieAlgebra[g]_2$. Here $\projFaktor_2 \dpA C_1
  \times \lieAlgebra[g]^*_1 \to \lieAlgebra[g]^*_1$ denotes the
  projection onto the second factor. If $\lieAlgebra[g]_2$ is
  $G$-invariant, $\Psi_1$ can even be chosen $G$-equivariantly.
\end{lemma}
\begin{proof}
  Let $\mathrm{p}_2 \dpA \lieAlgebra[g]^* \to (\lieAlgebra[g]^*)_2$ be the
  canonical projection. Since $\lieAlgebra_2$ is $G_1$-invariant,
  $\mathrm{p}_2$ is $G_1$-equivariant by Proposition \ref{prop:p2Invariant},
  and even $G$-equivariant if $\lieAlgebra_2$ is
  $G$-invariant. These equivariance properties carry over accordingly
  to the map $\mathrm{p}_2 \circ J \dpA M \to
  (\lieAlgebra[g]^*)_2$. Obviously, the statement of the lemma is
  equivalent to the existence of a $G_1$- (resp.\ $G$-)invariant good
  tubular neighborhood $\Psi_1 \colon U_1 \to V_1 \subset C_1 \times
  \lieAlgebra_1^* $ of $C_1$ such that $\mathrm{p}_2 \circ J
  (\Psi^{-1}_1(c_1,\mu_1))$ is independent of
  $\mu_1 \in \projFaktor_2(V_1)$ for all $c_1 \in C_1$. If $r_1 \dpA U_1 \to C_1$
  is the retraction induced by $\Psi_1$, this is in turn equivalent
  to $\mathrm{p}_2 \circ J(p) =
  \mathrm{p}_2 \circ J \circ r_1(p)$ for all $p \in U_1$. But since, by Lemma
  \ref{lem:JCSubmersiv}, $\mathrm{p}_2 \circ J \at{C_1} = J\at{C_1} \dpA
  C_1 \to (\lieAlgebra[g]^*)_2$ is a submersion, the claim follows
  immediately from Lemma \ref{lem:kompatibleTuben} and Theorem
  \ref{satz:CxgA}.
\end{proof}
\begin{satz}
\label{satz:klassgleichquantenEinschraufJ}
  Let $\Psi_1\dpA U_1 \to V_1 \subset C_1 \times \lieAlgebra[g]_1^*$
  be chosen $G$-invariantly as in Lemma \ref{lem:NormalformJ}, and let $\ensuremath{\boldsymbol{\imath}_1^{\boldsymbol{*}}}$
  be the quantum restriction induced by $\Psi_1$. Then
  \begin{align}
    \ensuremath{\boldsymbol{\imath}_1^{\boldsymbol{*}}} J = \ensuremath{\imath_1^*} J \ensuremath{\text{.}}
  \end{align}
\end{satz}
\begin{proof}
  Let $\{\tilde{e}_i\}_{i = 1, \dots, \dim G_1}$ be a basis of
  $\lieAlgebra[g]_1$ and $\{e_i\}_{i=\dim G_1 +1, \dots, \dim G}$ a
  basis of $\lieAlgebra_2$. Further, for $i \in
  \{1,\dots,\dim G_1\}$ we define $e_i := T_e\ensuremath{\iota} \tilde{e}_i$, so that
  $\{e_i\}_{i=1, \dots, \dim G}$ is a basis of $\lieAlgebra[g]$.
  Moreover, let $\{\tilde{e}^i\}$ and $\{e^i\}$ denote the dual bases
  of $\{\tilde{e}_i\}$ and $\{e_i\}$, respectively.
  Now let $(c_1,\mu_1) \in V_1$; then
  \begin{align*}
    J(e_i)(\Psi^{-1}_1(c_1,\mu_1)) = J_1(\tilde
    e_i)(\Psi^{-1}_1(c_1,\mu_1)) = \mu_1(\tilde e_i) \quad \text{for $i
      \in \{1,\dots,\dim G_1\}$}
  \end{align*}
  and, by Lemma \ref{lem:NormalformJ}, $J(e_i)(\Psi^{-1}_1(c_1,\mu_1))$
  is independent of $\mu_1 \in \lieAlgebra_1^*$ for $i \in
  \{\dim G_1 + 1,\dots,\dim G\}$.
  Thus one obtains
\begin{align*}
\partial_{\tilde{e}^{\alpha}} (J(e_i) \circ \Psi_1^{-1})
(c_1,\tilde{\mu}_1) =
\begin{cases}
      \delta^{\alpha}_i &\text{for $i \in \{1,\dots,\dim G_1\}$} \\
      0 &\text{otherwise}
\end{cases} \ensuremath{\text{.}}
\end{align*}
  If $\operatorname{\h_{1}}$ is the homotopy induced by $\Psi_1$, this yields
\begin{align*}
\operatorname{\h_{1}} J(\xi) = \sum_{\alpha = 1}^{\dim G_1} \xi^{\alpha}
    \tilde{e}_{\alpha} \quad \text{for $\xi = \xi^i e_i \in
\lieAlgebra$} \ensuremath{\text{.}}
\end{align*}
  This immediately yields, for all $\xi \in \lieAlgebra$,
\begin{align*}
(\operatorname{\boldsymbol{\partial}_1} - \operatorname{\partial_1}) \operatorname{\h_{1}} J(\xi) = 0
\end{align*}
  and hence
\begin{align*}
\ensuremath{\boldsymbol{\imath}_1^{\boldsymbol{*}}} J(\xi) = \frac{\operatorname{\mathsf{id}}}{\operatorname{\mathsf{id}} + (\operatorname{\boldsymbol{\partial}_1} - \operatorname{\partial_1})\operatorname{\h_{1}}}
J(\xi) &= \ensuremath{\imath_1^*} J(\xi) + 0 = \ensuremath{\imath_1^*} J(\xi) \ensuremath{\text{.}}
\end{align*}
\end{proof}
\begin{bemerkung}
  Together with Theorem
  \ref{thm:ReduziertesSternproduktInvariant}~\refitem{item:MitAlpha},
  the statement of Theorem
  \ref{satz:klassgleichquantenEinschraufJ} means that one can make the
  geometric choices in such a way that, in the case of a strongly
  invariant star product, the reduced star product is again strongly
  invariant. See also Remark
  \ref{bem:PhysikalicheBedeutung} in this regard. If $\star$ is in
  addition Hermitian and one chooses $\qJ = J + \frac{1}{2} \I \lambda \Delta $
  with modular one-form $\Delta$, then
  \ref{thm:ReduziertesSternproduktInvariant}~\refitem{item:MitAlpha}
  further shows that $\qJ[2] = J_2 + \frac{1}{2} \I \lambda \Delta_2$,
  where $\Delta_2$ is the modular one-form of $\Lie[G/G_1]$. Indeed,
  since $\lieAlgebra_2$ is $G_1$-invariant, it follows, as one easily
  verifies using Remark \ref{bem:ReduktiverHomogenerRaum}, that
  $\chi^{-1}(\mathrm{p}_2(\Delta)) = \Delta_2$.
\end{bemerkung}
\subsection{Second reduction stage}
\label{sec:VergleichDerSternprodukte}
We now want to reduce the star product $\starred[1]$ further with
respect to the group $G/G_1$ and compare the resulting star product
$\starred[2]$ with $\starred$, the one reduced with respect to the
$G$-action. In the following we assume that $\lieAlgebra_2$ is
$G$-invariant and that $\prol[1]$ is chosen $G$-equivariantly, so that
by Theorem \ref{thm:ReduziertesSternproduktInvariant} $\qJ[2]$ is a
$G/G_1$-equivariant quantum momentum map.
\subsubsection{Construction of compatible geometric prolongations}
\label{sec:KonstruKompGeomProl}
In this section we construct tubular neighborhoods such that the
prolongations they induce satisfy natural compatibility conditions. In
the following we want to denote those arising from specific geometric
homotopy data for a single reduction step as in Chapter
\ref{cha:Koszul-Reduktion}, indicating the respective reduction step
by a corresponding index. We thus write
$\prol$, $\prol_1$, $\prol_2$, and so on.
\begingroup
\emergencystretch=0.1em
\begin{proposition}
\label{prop:KompositionVonRetraktionen}
  Let $\tilde r \dpA \tilde U \to C$ be a $G$-equivariant retraction
  onto $C$ from a $G$-invariant open neighborhood $\tilde U$ of $C$ in
  $C_1$, and let $r_1 \dpA U_1 \to C_1$ be a $G$-equivariant
  retraction onto $C_1$ from a $G$-invariant open neighborhood $U_1$ of $C_1$ in $M$.
  Set $U := r_1^{-1}(\tilde U)$. Then $r := \tilde r
  \circ r_1\at{U}\dpA U \to C$ is a $G$-equivariant retraction onto
  $C$ from the open, $G$-invariant neighborhood $U$ of $C$ in $M$. In
  particular, the following important compatibility condition holds:
  %
  \begin{align}
    r \circ r_1\at{U} = r \ensuremath{\text{.}}
  \end{align}
\end{proposition}
\endgroup
\begin{proof}
The compatibility condition follows immediately from the definition
of $r$; indeed,
%
\begin{align*}
r \circ r_1\at{U} = \tilde r \circ r_1\at{U} \circ r_1\at{U} =
\tilde r \circ r_1\at{U} = r \ensuremath{\text{.}}
\end{align*}
The rest is clear.
\end{proof}
\begin{bemerkung}
The retractions $r_1$ and $\tilde r$ in Proposition
\ref{prop:KompositionVonRetraktionen} may in particular come from two
\tn{good} tube maps $\Psi_1$ and $\tilde \Psi$. Together with Theorem
\ref{satz:CxgA}, the preceding proposition can thus be interpreted as
saying that the choice of the tubes $\Psi_1$ and $\tilde \Psi$
induces a \tn{good} tube map $\Psi$ of $C$ in $C_1$ whose associated
retraction $r$ satisfies the compatibility condition
\begin{align}
r \circ r_1\at{U} = r \ensuremath{\text{.}}
\end{align}
\end{bemerkung}
We now turn to the construction of a special \tn{good} tubular
neighbourhood for $C_2$ in $\Mred[1]$.
\begin{lemma}
\label{lem:RetraktionRunterD}
Let $\tilde r \dpA C_1 \supset \tilde U \to C$ be a $G$-equivariant,
smooth retraction onto $C$. Then there is a smooth,
$G/G_1$-equivariant retraction $r_2 \dpA U_2 \to C_2$ from the
$G/G_1$-invariant, open neighbourhood $U_2 := \pi_1(\tilde U)$ of $C_2$
in $\Mred[1]$ onto $C_2$, such that the following diagram commutes.
\def\tA[#1]{A_{#1}}
\begin{equation}
\begin{tikzpicture}[baseline=(current
bounding box.center),description/.style={fill=white,inner sep=2pt}]
\matrix (m) [matrix of math nodes, row sep=3.0em, column
sep=3.5em, text height=1.5ex, text depth=0.25ex]
{
\tilde U & C \\
U_2 & C_2 \\
}; %
\path[->] (m-1-1) edge node[left] {$\pi_1\at{\tilde U}$} (m-2-1); %
\path[->] (m-1-2) edge node[auto]{$\pi_1\at{C}$}(m-2-2); %
\path[->] (m-1-1) edge node[auto]{$\tilde r$}(m-1-2); %
\path[->] (m-2-1) edge node[auto]{$r_2$}(m-2-2); %
\end{tikzpicture}
\end{equation}
\end{lemma}
\begin{proof}
Clear.
\end{proof}
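Although the proof is immediate, it may help to record the defining
computation behind the diagram (a brief sketch; notation as in the
lemma): on the quotient one sets
\begin{align*}
r_2(\pi_1(x)) := \pi_1(\tilde r(x)) \quad \text{for } x \in \tilde U
\ensuremath{\text{,}}
\end{align*}
which is well defined because $\tilde r$ is in particular
$G_1$-equivariant, so that
\begin{align*}
\pi_1(\tilde r(g_1 x)) = \pi_1(g_1 \tilde r(x)) = \pi_1(\tilde r(x))
\quad \text{for all } g_1 \in G_1 \ensuremath{\text{.}}
\end{align*}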
In the following we denote the \tn{good} tube maps induced by $\tilde r$
and $r_2$ by $\tilde \Psi$ and $\Psi_2$, respectively.
\begin{bemerkung}
\label{bem:RetraktionRunterD}
Since $\tilde U$ in Lemma \ref{lem:RetraktionRunterD} is in
particular $G_1$-invariant, we already have $\pi_1^{-1}(U_2) =
\pi_1^{-1}(\pi_1(\tilde U)) = G_1 \tilde U = \tilde U$.
\end{bemerkung}
We now use the tubular neighbourhoods constructed above to build
prolongations that are compatible in a sense to be made precise
below.
Let $\tilde O$ be a $G$-invariant, open neighbourhood of $C$ in $C_1$ with
$\abschluss{\tilde O} \subset \tilde U$, let $\tilde W := C_1
\setminus \abschluss{\tilde O}$, and let $\{\tilde \psi_{\tilde U}, \tilde
\psi_{\tilde W}\}$ be a smooth, $G$-invariant partition of unity
subordinate to the $G$-invariant, open cover
$\{\tilde U,\tilde W\}$ of $C_1$. Further, let $O_1$ be a $G$-invariant, open
neighbourhood of $C_1$ in $M$ with $\abschluss{O_1} \subset U_1$, let $W_1 :=
M\setminus \abschluss{O_1}$, and let $\{\psi_{U_1},\psi_{W_1}\}$ be a
$G$-invariant, smooth partition of unity subordinate to the
$G$-invariant, open cover $\{U_1,W_1\}$ of $M$.
We then define
\begin{align}
\label{eq:defqrol}
\qrol \dpA \CM[C] &\to \CM[C_1], \quad
\qrol(f)(x) :=
\begin{cases}
\tilde \psi_{\tilde U}(x) f(\tilde r(x)) &\text{for } x \in
\tilde U \\
0 &\text{otherwise}
\end{cases} \quad \forall f \in C^\infty(C)\ensuremath{\text{.}}
\end{align}
By construction,
\begin{align}
\label{eq:qrolEinschr}
\ensuremath{\jmath^*} \qrol = \operatorname{\mathsf{id}} \ensuremath{\text{.}}
\end{align}
The set $O := r_1^{-1}(\tilde O) \cap O_1 = r_1\at{O_1}^{-1}(\tilde O)$ is
obviously open and $G$-invariant, and $\abschluss{O}
\subset U := r_1^{-1}(\tilde U)$, since
\begin{align}
\label{eq:RechungMitAbschlussVonO}
\abschluss{O} &= \abschluss{r_1^{-1}(\tilde O) \cap O_1}
\subset \abschluss{r_1^{-1}(\tilde O)} \cap \abschluss{O_1}
\notag\\
&\subset \abschluss{r_1^{-1}(\tilde O)} \cap U_1 \subset
r_1^{-1}(\abschluss{\tilde O}) \cap U_1 \subset r_1^{-1}(\tilde U)
\cap U_1 = U.
\end{align}
The second-to-last inclusion uses the continuity of $r_1$. Note that
for a subset of $U_1$, the closure in $M$ intersected with $U_1$
coincides with the closure in the subspace topology of $U_1$, and
hence also with the latter intersected with $U_1$. For this simple
topological subtlety we refer to \cite[Prop. 2.1.1]{engelking:1989}.
The map
\begin{align}
\psi_U :=
\begin{cases}
\psi_{U_1}\at{U} \cdot (\tilde \psi_{\tilde U} \circ r_1\at{U}) &
\text{on
$U$} \\
0 &\text{otherwise}
\end{cases}
\end{align}
is smooth and $G$-invariant by construction. %
Moreover, $\supp \psi_U \subset U$: on the one hand, the continuity
of $r_1$ yields
\begin{align*}
\supp \psi_{U} \cap U_1 = \abschluss{{\carr \psi_{U}}} \cap U_1 \subset
\abschluss{r_1^{-1}(\carr \tilde \psi_{\tilde U})} \cap U_1 \subset r_1^{-1}(\supp
\tilde \psi_{\tilde U}) \subset r_1^{-1}(\tilde U) = U
\end{align*}
and on the other hand, clearly $\supp \psi_{U} \subset \supp \psi_{U_1}
\subset U_1$. Let $W := M \setminus \abschluss{O}$ and $\psi_W := 1 -
\psi_U$; then $\supp(\psi_W) \subset W$. This can be seen as follows. The set $\tilde O' :=
C_1\setminus \supp \tilde \psi_{\tilde W}$ is an open neighbourhood of
$C$ in $C_1$ with $\abschluss{\tilde O} \subset \tilde O' \subset \tilde
U$ and $\tilde \psi_{\tilde U}\at{\tilde O'} = 1$. Likewise, $O_1' :=
M\setminus \supp \psi_{W_1}$ is an open neighbourhood of $C_1$ in $M$ with
$\abschluss{O_1} \subset O_1' \subset U_1$ and $\psi_{U_1}\at{O_1'} =
1$. Then $O' := r_1^{-1}(\tilde O') \cap O_1' \subset U$ is an
open neighbourhood of $C$ in $M$ with $\abschluss{O} \subset O' \subset U$
and $\psi_U\at{O'} = 1$, where the first inclusion follows as in
computation \eqref{eq:RechungMitAbschlussVonO}. Hence
\begin{align*}
\supp \psi_{W} = \abschluss{\carr \psi_W} \subset
\abschluss{M\setminus{O'}} = M \setminus{O'} \subset M \setminus
\abschluss{O} = W \ensuremath{\text{.}}
\end{align*}
Thus $\{\psi_U,\psi_W\}$ is a smooth, $G$-invariant partition of unity
subordinate to the open cover $\{U,W\}$ of $M$.
We now consider the prolongation map $\prol$ induced by this
partition of unity and the tubular neighbourhood $\Psi$, as well as
the prolongation map $\prol[1]$ induced by the partition of unity
$\{\psi_{U_1},\psi_{W_1}\}$ and the tube map $\Psi_1$. The following
proposition shows that the geometric prolongation maps constructed in
this way are compatible in a natural sense.
\begin{proposition}
\label{prop:prop1undprolKompatibel}
We have
\begin{align}
\prol = \prol[1] \ensuremath{\imath_1^*} \prol \ensuremath{\text{.}}
\end{align}
\end{proposition}
\begin{proof}
We show $\prol = \prol[1] \qrol$, from which the claim follows
easily, since
\begin{align*}
\prol = \prol[1] \qrol = \prol[1] \ensuremath{\imath_1^*} \prol[1] \qrol = \prol[1]
\ensuremath{\imath_1^*} \prol \ensuremath{\text{.}}
\end{align*}
So let $x \in U = r_1^{-1}(\tilde U) \subset U_1$ and $f \in
C^\infty(C)$. Then
\begin{align*}
\prol[1](\qrol(f))(x) &\stackrel{x \in U_1}{=} \psi_{U_1}(x)
\qrol(f)(r_1(x)) \stackrel{r_1(x) \in \tilde{U}}{=} \psi_{U_1}(x) \cdot
\tilde \psi_{\tilde U}(r_1(x)) \cdot
f(\tilde{r}(r_1(x)))\\
&\stackrel{\phantom{x \in U_1}}{=} \psi_{U}(x) \cdot f(r(x)) =
(\prol f)(x) \ensuremath{\text{.}}
\end{align*}
For $x \notin U$ we obviously have
\begin{align*}
\prol[1](\qrol f)(x) = 0 = (\prol f)(x) \ensuremath{\text{.}}
\end{align*}
\end{proof}
The statement of Proposition \ref{prop:prop1undprolKompatibel} holds
for an arbitrary choice of $\tilde O$ and $\{\tilde \psi_{\tilde U},\tilde
\psi_{\tilde W}\}$. We now exploit this freedom and construct $\tilde
O$ and the partition of unity $\{\tilde \psi_{\tilde U},\tilde
\psi_{\tilde W}\}$ from an arbitrarily chosen $G/G_1$-invariant, open
neighbourhood $O_2$ of $C_2$ in $\Mred[1]$ with $\abschluss{O_2}
\subset U_2$ and an arbitrarily chosen $G/G_1$-invariant, smooth
partition of unity $\{\psi_{U_2},\psi_{W_2}\}$ subordinate to
$\{U_2,W_2\}$, where $W_2 := \Mred[1] \setminus \abschluss{O_2}$. We
then define the open set $\tilde O := \pi_1^{-1}(O_2) \subset C_1$ and
$\tilde W := C_1 \setminus \abschluss{\tilde O}$. By the continuity of
$\pi_1$ we clearly have $\abschluss{\tilde O} = \abschluss{\pi_1^{-1}(O_2)}
\subset \pi_1^{-1}(\abschluss{O_2}) \subset \pi_1^{-1}(U_2) = \tilde U$,
where the last equality follows from Remark
\ref{bem:RetraktionRunterD}. Moreover, $\tilde O$ is $G$-invariant by
definition of the quotient action. Now we define $\tilde \psi_{\tilde
U} := \pi_1^* \psi_{U_2}$ and $\tilde \psi_{\tilde W} := \pi_1^*
\psi_{W_2}$.
\begingroup
\emergencystretch=0.8em
\begin{proposition}
\label{prop:zerlderEinsUntenHochgezogen}
$\{\tilde \psi_{\tilde U},\tilde \psi_{\tilde W}\}$ is a
$G$-invariant, smooth partition of unity subordinate to the open,
$G$-invariant cover $\{\tilde U,\tilde W\}$ of $C_1$.
\end{proposition}
\endgroup
\begin{proof}
Obviously,
\begin{align*}
\tilde \psi_{\tilde U} + \tilde \psi_{\tilde W} = \pi_1^*
\psi_{U_2} +
\pi_1^* \psi_{W_2} = \pi_1^*(\psi_{U_2} + \psi_{W_2}) = 1
\end{align*}
and
\begin{align*}
\tilde \psi_{\tilde U}(g c_1) &= \pi_1^* \psi_{U_2}(gc_1) =
\psi_{U_2}(\pi_1(gc_1)) \\
&= \psi_{U_2}(\wp(g) \pi_1(c_1)) = \psi_{U_2}(\pi_1(c_1)) = \tilde
\psi_{\tilde U}(c_1) \quad \text{for $c_1 \in C_1$ and $g \in G$} \ensuremath{\text{.}}
\end{align*}
The $G$-invariance of $\tilde \psi_{\tilde W}$ follows analogously.
Furthermore, one sees
\begin{align*}
\supp \tilde \psi_{\tilde U} = \supp(\pi_1^* \psi_{U_2}) =
\abschluss{\pi_1^{-1}(\carr \psi_{U_2})} \subset \pi_1^{-1}(\supp
\psi_{U_2}) \subset \pi_1^{-1}(U_2) = \tilde U \ensuremath{\text{.}}
\end{align*}
Finally, we also have
\begin{align*}
\supp \tilde \psi_{\tilde W} \subset \tilde W \ensuremath{\text{.}}
\end{align*}
This can be seen as follows. The set $O_2' := \Mred[1]\setminus
\supp \psi_{W_2}$ is an open neighbourhood of $C_2$ in $\Mred[1]$
with $\abschluss{O_2} \subset O_2' \subset U_2$ and $\supp \psi_{W_2}
= \Mred[1]\setminus O_2'$. Moreover, $\abschluss{\tilde O} \subset
\pi_1^{-1}(\abschluss{O_2}) \subset \pi_1^{-1}(O_2')$.
Then we obtain
\begin{align*}
\supp \tilde \psi_{\tilde W} \subset \pi_1^{-1}(\supp \psi_{W_2}) =
\pi_1^{-1}(\Mred[1]\setminus O_2') = \pi_1^{-1}(\Mred[1])\setminus
\pi_1^{-1}(O_2') \subset C_1\setminus {\abschluss{\tilde O}} =
\tilde W \ensuremath{\text{.}}
\end{align*}
Thus $\{\tilde \psi_{\tilde U},\tilde \psi_{\tilde W}\}$ is a
$G$-invariant, smooth partition of unity subordinate to the open,
$G$-invariant cover $\{\tilde U,\tilde W\}$ of $C_1$, as desired.
\end{proof}
We can now formulate some compatibility conditions between the
prolongation maps.
\begin{proposition}
\label{prop:vertrZerlDerEins}
If the partitions of unity are constructed as described above,
then
\begin{align}
\label{eq:vertrZerlDerEins}
\psi_U\at{\tilde U} = \psi_{U_2} \circ \pi_1\at{\tilde U} \ensuremath{\text{.}}
\end{align}
\end{proposition}
\begin{proof}
For $c_1 \in \tilde U = U \cap C_1$ one sees
\begin{align*}
\psi_U(c_1) = \psi_{U_1}(c_1) \cdot \tilde \psi_{\tilde
U}(\underbrace{r_1(c_1)}_{= c_1}) = \tilde \psi_{\tilde U} (c_1)
= \psi_{U_2}(\pi_1(c_1)) \ensuremath{\text{.}}
\end{align*}
\end{proof}
\begin{satz}
\label{satz:prol2Komp}
Let the partitions of unity and the tube maps be chosen compatibly
as above. Then the induced prolongation maps satisfy the following
relations.
\begin{satzEnum}
\item %
\label{item:prol2Kompa} %
$\pi_1^* \prol[2] = \ensuremath{\imath_1^*} \prol \tilde \varsigma^*$.
\item%
\label{item:prol2Komp}%
$\prol[1]\pi_1^*\prol[2] \pi_2^* = \prol \pi^* \varsigma^*$.
\item %
\label{item:hprolKomp2} %
$\h \prol[1]\pi_1^*\prol[2] = 0$.
\item %
\label{item:fettUndduennEinschProl} %
$\ensuremath{\imath^*} \prol[1] \pi_1^* \prol[2] = \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}} \prol[1] \pi_1^*
\prol[2]$.
\end{satzEnum}
\end{satz}
\begin{proof}
\begin{beweisEnum}
\item %
For $c_1 \in C_1$ and $f \in \CM[C_2]$ we have
\begin{align*}
(\pi_1^* \prol[2] f)(c_1) &= \prol[2] f (\pi_1(c_1))\\
&=
\begin{cases}
\psi_{U_2}(\pi_1(c_1)) f(r_2(\pi_1(c_1))) & \text{for $\pi_1(c_1) \in U_2$}
\\
0 & \text{otherwise}
\end{cases} \\
&=
\begin{cases}
\psi_{U_2}(\pi_1(c_1)) f(\pi_1(r(c_1))) & \text{for $ c_1 \in U$} \\
0 & \text{otherwise}
\end{cases} \\
&=
\begin{cases}
\psi_{U}(c_1) f(\pi_1(r(c_1))) & \text{for $c_1 \in U$} \\
0 & \text{otherwise}
\end{cases} \\
&=
\begin{cases}
\psi_{U}(c_1)(\tilde \varsigma^*f)(r(c_1)) & \text{for $c_1 \in U$} \\
0 & \text{otherwise}
\end{cases} \\
&= (\prol \tilde \varsigma^* f)(c_1) = (\ensuremath{\imath_1^*} \prol \tilde
\varsigma^* f)(c_1) \ensuremath{\text{.}}
\end{align*}
Here the third step used the definitions of $r_2$ and $r$:
\begin{align*}
r_2(\pi_1(c_1)) = \pi_1(\tilde r(c_1)) \stackrel{c_1 \in C_1}{=}
\pi_1(\tilde r(r_1(c_1))) = \pi_1(r(c_1)) \ensuremath{\text{.}}
\end{align*}
\item Using~\refitem{item:prol2Kompa} and Proposition
\ref{prop:prop1undprolKompatibel} we obtain
\begin{align*}
\prol[1] \pi_1^* \prol[2] \pi_2^* = \prol[1] \ensuremath{\imath_1^*} \prol \tilde
\varsigma^* \pi_2^* = \prol \tilde \varsigma^* \pi_2^* = \prol \pi^* \varsigma^*\ensuremath{\text{.}}
\end{align*}
\item As in the proof of~\refitem{item:prol2Komp} one immediately sees
\begin{align*}
\h \prol[1]\pi_1^*\prol[2] = \h \prol[1]\ensuremath{\imath_1^*}\prol \tilde
\varsigma^* = \h \prol \tilde \varsigma^* = 0 \ensuremath{\text{.}}
\end{align*}
The last step used Equation
\eqref{eq:globalisierteHomotopie3}.
\item This follows immediately from~\refitem{item:hprolKomp2} and the
explicit formula for $\ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}$.
\end{beweisEnum}
\end{proof}
We can also formulate compatibility conditions for the induced
quantum restriction maps.
\begin{lemma}
\label{lem:RelationenMitQuantenj}
There is a map $\ensuremath{\boldsymbol{\jmath}^{\boldsymbol{*}}} \dpA \CM[C_1][[\lambda]] \to
\CM[C][[\lambda]]$ with
\begin{lemmaEnum}
\item
\label{item:QuantenJEigenschaft1}
$\ensuremath{\boldsymbol{\jmath}^{\boldsymbol{*}}} \ensuremath{\boldsymbol{\imath}_1^{\boldsymbol{*}}} = \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}$ and
\item\label{item:QuantenJEigenschaft2} $\ensuremath{\jmath^*} \pi_1^* \prol[2]
\ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}_2 = \ensuremath{\boldsymbol{\jmath}^{\boldsymbol{*}}}\pi_1^*$.
\end{lemmaEnum}
The map $\ensuremath{\boldsymbol{\jmath}^{\boldsymbol{*}}}$ is already uniquely determined by
property~\refitem{item:QuantenJEigenschaft1} and is of the form
$\ensuremath{\boldsymbol{\jmath}^{\boldsymbol{*}}} = \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}} \prol[1]$.
\end{lemma}
\begin{proof}
We first show uniqueness and then existence.
\begin{itemize}
\item[Uniqueness] \hfill \\
Suppose there is a $\ensuremath{\boldsymbol{\jmath}^{\boldsymbol{*}}} \dpA \CM[C_1][[\lambda]] \to
\CM[C][[\lambda]]$ with property~\refitem{item:QuantenJEigenschaft1}; then
\begin{align*}
\ensuremath{\boldsymbol{\jmath}^{\boldsymbol{*}}} = \ensuremath{\boldsymbol{\jmath}^{\boldsymbol{*}}} \underbrace{\ensuremath{\boldsymbol{\imath}_1^{\boldsymbol{*}}} \prol[1]}_{= \operatorname{\mathsf{id}}}
\stackrel{~\refitem{item:QuantenJEigenschaft1}}{=} \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}
\prol[1] \ensuremath{\text{.}}
\end{align*} %
\item[Existence] \hfill \\
We define $\ensuremath{\boldsymbol{\jmath}^{\boldsymbol{*}}} = \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}} \prol[1]$ and show that
properties~\refitem{item:QuantenJEigenschaft1}
and~\refitem{item:QuantenJEigenschaft2} hold. Let $\{\tilde
e_\alpha\}_{\alpha = 1, \dots, \dim G_1}$ be a basis of
$\lieAlgebra_1$ with corresponding dual basis
$\{\tilde e^\alpha\}$, and set $e_\alpha := T_e \ensuremath{\iota} \tilde e_\alpha$ for $\alpha \in
\{1,\dots,\dim G_1\}$. Further, we choose a basis
$\{e_\alpha\}_{\alpha = \dim G_1 +1,\dots,\dim G}$ of
$\lieAlgebra_2$. This induces a basis $\{T_e\wp
e_\alpha\}_{\alpha = \dim G_1 + 1,\dots,\dim G}$ of $\Lie[G/G_1]$, whose dual basis we
denote by $\{(T_e\wp e_\alpha)^*\}$. In particular,
$\{e_\alpha\}_{\alpha = 1, \dots, \dim G}$ is a basis of
$\lieAlgebra$ with dual basis $\{e^\alpha\}$.
\begin{beweisEnum}
\item By definition of $\qIdeal$,
\begin{align*}
\qIdeal = \left\{\sum_{\alpha = 1}^{\dim G} f^\alpha \star
\qJ(e_\alpha) \mid f^\alpha \in \CM[M][[\lambda]]\right\} \ensuremath{\text{.}}
\end{align*}
Noting that $\qJ[1](\tilde e_\alpha) = \qJ(e_\alpha)$ for $\alpha
\in \{1,\dots,\dim G_1\}$, we obtain
\begin{align*}
\qIdeal[1] = \left\{\sum_{\alpha = 1}^{\dim G_1} f^\alpha \star
\qJ(e_\alpha) \mid f^\alpha \in \CM[M][[\lambda]]\right\} \ensuremath{\text{.}}
\end{align*}
In particular, $\qIdeal[1] \subset \qIdeal$. Hence, for all $f \in
\CM[M]$,
\begin{align*}
& \prol[1] \ensuremath{\boldsymbol{\imath}_1^{\boldsymbol{*}}} f -f \in \qIdeal[1] \subset \qIdeal \\
&\implies \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}(\prol[1] \ensuremath{\boldsymbol{\imath}_1^{\boldsymbol{*}}} f - f) = 0\\
&\implies \ensuremath{\boldsymbol{\jmath}^{\boldsymbol{*}}} \ensuremath{\boldsymbol{\imath}_1^{\boldsymbol{*}}} f = \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}} \prol[1] \ensuremath{\boldsymbol{\imath}_1^{\boldsymbol{*}}} f = \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}
f \ensuremath{\text{.}}
\end{align*}
It follows that $\ensuremath{\boldsymbol{\jmath}^{\boldsymbol{*}}} \ensuremath{\boldsymbol{\imath}_1^{\boldsymbol{*}}} = \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}$.
\item We first reformulate condition~\refitem{item:QuantenJEigenschaft2} slightly.
%
\begin{align*}
\lefteqn{\ensuremath{\jmath^*} \pi_1^* \prol[2] \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}_2 = \ensuremath{\boldsymbol{\jmath}^{\boldsymbol{*}}} \pi_1^*}\\
&\iff \ensuremath{\imath^*} \prol[1] \pi_1^* \prol[2] \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}_2 = \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}
\prol[1] \pi_1^*\\
&\iff \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}} \prol[1] \pi_1^* \prol[2] \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}_2 = \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}
\prol[1] \pi_1^* \\
&\iff \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}} \prol[1] \pi_1^* (\operatorname{\mathsf{id}} - \prol[2] \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}_2) = 0
\end{align*}
Here the first equivalence used the equations $\ensuremath{\jmath^*} =
\ensuremath{\imath^*} \prol[1]$ and $\ensuremath{\boldsymbol{\jmath}^{\boldsymbol{*}}} = \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}} \prol[1]$, and the second
equivalence used Theorem \ref{satz:prol2Komp}~\refitem{item:fettUndduennEinschProl}.
\begingroup
\emergencystretch=0.8em
Since, by Proposition \ref{prop:QuantenAugmentierung}, the
map $\operatorname{\mathsf{id}} - \prol[2] \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}_2$ projects onto $\qIdeal[2] \subset
\CM[{\Mred[1]}][[\lambda]]$, it suffices to verify the last
equation on $\qIdeal[2]$. With the homotopy
property
\begin{align*}
\operatorname{\mathsf{id}} = \operatorname{\boldsymbol{\partial}_2}\operatorname{\h_{2}} + \operatorname{\h_{2}} \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}_2
\end{align*}
we obtain, for $\tilde{f} \in \qIdeal[2]$, the equation
\begin{align*}
\tilde{f} = \operatorname{\boldsymbol{\partial}_2} \operatorname{\h_{2}} (\tilde{f}) =
\dPaar{\operatorname{\h_{2}}(\tilde{f})}{(T_e \wp e_\alpha)^*} \starred[1] \qJ[2](T_e\wp
e_\alpha)\ensuremath{\text{.}}
\end{align*}
\endgroup
With the abbreviation $\tilde{f}^\alpha :=
\dPaar{\operatorname{\h_{2}}(\tilde{f})}{(T_e \wp e_\alpha)^*} \in \CM[{\Mred[1]}]$ we compute
\begin{align*}
\ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}} \prol[1] \pi^*_1 \tilde{f} &= \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}} \prol[1] \pi^*_1 (
\tilde{f}^\alpha \starred[1]
\qJ[2](T_e\wp e_\alpha)) \\
&= \underbrace{\ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}} \prol[1] \ensuremath{\boldsymbol{\imath}_1^{\boldsymbol{*}}} }_{= \ensuremath{\boldsymbol{\jmath}^{\boldsymbol{*}}} \ensuremath{\boldsymbol{\imath}_1^{\boldsymbol{*}}} =
\ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}}(\underbrace{\prol[1] \pi_1^* \tilde{f}^\alpha}_{=:
f^\alpha} \star \prol[1] \pi_1^* \qJ[2](T_e \wp e_\alpha))
\\
&= \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}} (f^\alpha \star \prol[1] \pi_1^* \qJ[2](T_e \wp
e_\alpha))\\
&= \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}(f^\alpha \star \prol[1]\ensuremath{\boldsymbol{\imath}_1^{\boldsymbol{*}}} \qJ(e_\alpha)) -
\underbrace{\ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}(f^\alpha
\star \qJ(e_\alpha))}_{=0}\\
&= \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}(\underbrace{f^\alpha \star\underbrace{
((\prol[1]\ensuremath{\boldsymbol{\imath}_1^{\boldsymbol{*}}} \qJ(e_\alpha)) -\qJ(e_\alpha))}_{ \in
\qIdeal[1] \subset \qIdeal}}_{\in \qIdeal} ) \\
&= 0 \ensuremath{\text{.}}
\end{align*}
\end{beweisEnum}
\end{itemize}
\end{proof}
\subsubsection{Comparison of the Star Products}
\label{sec:VergleichDerSternproducteSUB}
We now come to the main result of the present work.
\begin{satz}[Quantum Koszul Reduction in Stages]
\label{thm:UebereinstimmungDerSternprodukte}
Let the same assumptions as in Theorem
\ref{satz:redPhasenraeumeSymplektomorph} hold. In addition, let there
be a $G$-invariant complement $\lieAlgebra_2$ of
$T_e\ensuremath{\iota}\lieAlgebra_1$. Further, let $\star$ be a
$G$-invariant star product for $(M,\omega)$ admitting a quantum
momentum map $\qJ$, let $\qJ[1]$ be the quantum momentum map for the
$G_1$-action induced by it via Equation \eqref{eq:J1Definition2}, and
let $\qJ[2]$ be the quantum momentum map for the $G/G_1$-action
induced via Equation \eqref{eq:QuantenJ2}. Then the geometric
homotopy data can be chosen such that
\begin{align}
\varsigma^* \starred[2] = \starred
\end{align}
holds. Here $\starred$ denotes the star product for
$(\Mred,\omega_{\mathrm{red}})$ constructed with respect to the
$G$-action by the quantum Koszul scheme with these geometric homotopy
data and $\qJ$. If $\starred[1]$ is the corresponding star product
for $(\Mred[1],\omega_{{\mathrm{red}}_1})$ with respect to $\qJ[1]$,
then $\starred[2]$ is the star product for
$(\Mred[2],\omega_{\mathrm{red}_2})$ reduced with respect to
$\starred[1]$ and $\qJ[2]$.
\end{satz}
\begin{proof}
For $c \in C = J^{-1}(0)$, Remark \ref{bem:PiAufC} gives
$\pi_1(c) \in C_2$, so for $f \in \CM[C_2]$ we already have
%
\begin{align*}
f(\pi_1(c)) = \prol[2] f (\pi_1(c))\ensuremath{\text{.}}
\end{align*}
%
It follows that
\begin{align*}
\tilde{\varsigma}^* f(c) = f(\pi_1(c)) = \prol[2] f (\pi_1(\ensuremath{\jmath}(c))) =
\ensuremath{\jmath^*} \pi_1^* \prol[2] f(c) \ensuremath{\text{,}}
\end{align*}
and hence for $\phi \in \CM[{\Mred[2]}]$
%
\begin{align*}\label{loc:Stern}
\pi^*\varsigma^* \phi = \tilde{\varsigma}^* \pi_2^* \phi = \ensuremath{\jmath^*}
\pi_1^* \prol[2] \pi_2^* \phi\ensuremath{\text{.}} \tag{$*$}
\end{align*}
%
Let $\phi_a,\phi_b \in \CM[{\Mred[2]}]$; then:
\begin{align*}
\lefteqn{\pi^*(\phi_a (\varsigma^* \starred[2]) \phi_b)} \\
&= \pi^* \varsigma^*({\varsigma^{-1}}^* \phi_a \starred[2]
{\varsigma^{-1}}^*
\phi_b)\\
&= \ensuremath{\jmath^*} \pi_1^* \prol[2] \pi_2^*({\varsigma^{-1}}^* \phi_a
\starred[2] {\varsigma^{-1}}^* \phi_b) && \eAnn{by \ref{loc:Stern}} \\
&= \ensuremath{\jmath^*} \pi_1^* \prol[2] \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}}_2 (\prol[2]
\pi_2^*{\varsigma^{-1}}^* \phi_a \starred[1] \prol[2]
\pi_2^*{\varsigma^{-1}}^*
\phi_b) && \eAnn{by Equation \eqref{eq:SternproduktAufDemReduziertenPhasenraum3}}\\
&= \ensuremath{\boldsymbol{\jmath}^{\boldsymbol{*}}} \pi_1^* (\prol[2] \pi_2^*{\varsigma^{-1}}^* \phi_a
\starred[1] \prol[2] \pi_2^*{\varsigma^{-1}}^* \phi_b) &&
\eAnn{by Lemma
\ref{lem:RelationenMitQuantenj}~\refitem{item:QuantenJEigenschaft2}} \\
&= \ensuremath{\boldsymbol{\jmath}^{\boldsymbol{*}}} \ensuremath{\boldsymbol{\imath}_1^{\boldsymbol{*}}} (\prol[1] \pi_1^* \prol[2]
\pi_2^*{\varsigma^{-1}}^* \phi_a \star \prol[1] \pi_1^* \prol[2]
\pi_2^*{\varsigma^{-1}}^*
\phi_b) && \eAnn{by Equation \eqref{eq:SternproduktAufDemReduziertenPhasenraum3}} \\
&= \ensuremath{\boldsymbol{\imath}^{\boldsymbol{*}}} (\prol \pi^* \phi_a \star \prol \pi^* \phi_b) &&
\eAnn{by Lemma
\ref{lem:RelationenMitQuantenj}~\refitem{item:QuantenJEigenschaft1}}[]
\\
&\phantom{=} &&\eAnn[]{and Theorem
\ref{satz:prol2Komp}~\refitem{item:prol2Komp}} \\
&= \pi^*(\phi_a \starred \phi_b) && \eAnn{by Equation
\eqref{eq:SternproduktAufDemReduziertenPhasenraum3}} \ensuremath{\text{.}}
\end{align*}
\end{proof}
\begin{bemerkung}
\label{bem:Hauptresultat}
We emphasize that Theorem
\ref{thm:UebereinstimmungDerSternprodukte} does not merely state
that $\varsigma^* \starred[2]$ and $\starred$ are equivalent, but
that they actually coincide. This is also of physical interest,
since two equivalent star products can be physically inequivalent,
as already noted in Chapter \ref{cha:Deformationsquantisierung}.
Moreover, the symplectomorphism in Theorem
\ref{thm:UebereinstimmungDerSternprodukte} is exactly the same as
the classical one from Theorem
\ref{satz:redPhasenraeumeSymplektomorph}. Compared with the
classical situation, the restriction in Theorem
\ref{thm:UebereinstimmungDerSternprodukte} consists of the
additional requirements on the group structure, which, however, are
automatically satisfied for compact groups, cf.\ Remark
\ref{bem:invariantesKomplement}. Apart from this there are no
further obstructions, in particular none pointing to quantum
effects.
\end{bemerkung}
\cleardoublepage
\begin{appendices}
\section{Introduction}
Synchronization is one possible form of emergent dynamics
displayed by coupled oscillator systems, and it is seen
in a wide variety of physical phenomena, e.g., synchrony among neural activity
in the brain \cite{glass} and phenomena ranging from the intensities of coupled
lasers to the light pulses emitted by fireflies \cite{kurth1}.
It is now well established that two chaotic systems,
with appropriate coupling, can exhibit synchronized behavior \cite{pc,kurth2}.
Synchronized chaotic dynamics has been reported in systems built from well
known models
of chaos such as the logistic map, the Lorenz system, the Henon map and coupled map lattices,
which together represent a wide collection of discrete and continuous time systems.
The growing interest in chaos and synchronization is
partly due to its potential applications in chaos control, chaos based cryptography,
neural networks and biological systems.
Much of the work on synchronization has concentrated on instantaneous
coupling of the dynamical systems \cite{kurth1}. This implies that we disregard the
finite time it takes for the interaction or the information to travel
from one system to the other. Consider two identical chaotic systems
represented by $x_{n+1}=F(x_n)$ and $y_{n+1}=F(y_n)$, started from different
initial conditions. To synchronize their solutions, $x_n$ and $y_n$, we suitably
couple the two systems. The modified equations are then
$x_{n+1}=F(x_n) + g_1(y_n)$ and $y_{n+1}=F(y_n) + g_2(x_n)$. In this form of
coupling,
we have implicitly assumed that $y_n$ is instantly available to the
$x$-system without any delay and vice versa. This cannot be true in general.
Many physical systems that display synchronization are spatially separated,
and the time taken for the information to travel between them is not negligible.
For example, synchronization of neuronal activity in the brain involves
time delays due to information processing and transfer between different
parts of the brain \cite{bdelay} and is estimated to be about tens of
milliseconds \cite{bdelay1}. In the context of using chaos synchronization for
secure communication, it is usual for the sender and
receiver to be spatially separated and information takes finite time to
travel between them. The question then is: can delayed interactions
lead to synchronization of coupled systems? Work done in the last
few years shows that delayed couplings can lead
to synchronization \cite{masoller,atay} as well as new scenarios such as amplitude death
in limit cycle oscillators and coupled oscillators \cite{reddy}, multistable
synchronization \cite{multis} and symmetry breaking \cite{symmb}. Techniques for
controlling pathological rhythms in neurons based on delayed feedback have also been
reported \cite{rose}.
The experiments on information processing in the brain and neurosciences
are providing evidence for new features of delayed synchronization, namely
that of near zero delay synchronization of signals from spatially
separated regions \cite{eeg-rev}. It has been
reported that spatially separated cortical regions in the brain
of the cat display synchronization {\it without any lag} \cite{cat}.
Neuronal firings from left and right cortex regions recorded on
primates show near zero synchrony maintained over
considerable distances \cite{murthy}. The idea of spatially distributed systems
synchronizing without delay continues to attract research attention since the mechanism
leading to such an effect is not yet clear and continues to be debated \cite{free}.
Recently one mechanism for zero delay synchronization of mutually
coupled oscillators has been demonstrated experimentally in a system
of three semiconductor lasers \cite{fischer}. In this case, a central driving laser L$_2$
is bidirectionally coupled with two other mutually delay coupled lasers,
L$_1$ and L$_3$. Then, the
delay coupled lasers L$_1$ and L$_3$ display zero delay or isochronal synchronization, which
is shown to be reasonably robust. A variant of this scheme has been used
to propose a method for bidirectional secure communication using delay
coupled oscillators \cite{rroy}. Simultaneously, the modeling and analysis of zero
lag synchronization maintained over large distances in neurons is beginning to
take shape \cite{lagmodel}.
In this paper, we show that zero delay synchronization
can be achieved in delay coupled spatially extended systems, namely a
coupled map lattice (CML), if they
are driven by a third such system. This application is motivated by
the fact that the activity in the cerebral cortex of the brain (as measured
by electroencephalograph, for example) is due to the interaction
between millions of neurons that are spatially distributed. The
zero lag synchronization occurs between groups of such spatially
distributed neurons. Hence one is led to
consider a collection of coupled oscillators that are spatially
separated. Another important application is in the area of
secure communication. Coupled map lattices can be applied to
encryption of messages in multi-channel communication \cite{mcc}.
In real-time multichannel communications that require
security, one needs
as many different chaotic signals as there are channels to encode
the messages sent in each of the channels. The signal in each channel
is encoded by the chaotic time series from one of the lattice points of the CML.
The CML, being a high dimensional
chaotic system, provides sufficient security against most attacks. Presence of
chaotic, zero delay synchronization would allow
the receiver to decode the message in all the channels at the same time. For real-time
applications synchronization must be achieved in shortest possible time
and zero delay synchronization is ideally suited for this purpose.
For a review of certain aspects of synchronization in spatially extended systems
and its applications, see Ref. \cite{zan}.
In the next section, we briefly review the coupled map lattice paradigm
and introduce our model. Further, in subsequent sections, we report results
on zero lag synchronization from this model, obtain analytically the parametric
regimes where this occurs and also bounds for synchronization errors due to
parameter mismatches.
\section{Coupled Map Lattice}
We consider the coupled map lattice given by,
\begin{equation}
x_{n+1}^i = (1-\epsilon) f[x_n^i] + \frac{\epsilon}{2}
\left( f[x_n^{i-1}] + f[x_n^{i+1}] \right)
\label{cml}
\end{equation}
where $i=1,2,\ldots,L$ is the index for the lattice site
and $\epsilon$ is the coupling strength parameter.
This was originally introduced \cite{cml,cml-app} as a model for
chaos in spatially extended systems. There have been attempts to
model real life phenomena based on CMLs \cite{cml-app}.
They display a rich variety of dynamical
regimes ranging from frozen random patterns to spatio-temporal
chaos upon variation of the parameters.
We use periodic boundary conditions, so that $x_n^{L+1}=x_n^{1}$ leading
to a ring type lattice.
Here, the local dynamics uses the logistic map, $f[x]=a x(1-x)$, where
$a$ is the chaos parameter. In the section on robustness, we will denote this map
with its explicit parameter dependence as $f[x;a]$.
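As a concrete illustration, the update rule of Eq. (\ref{cml}) with periodic boundaries can be sketched in a few lines of Python (a minimal sketch; the lattice size, random seed and iteration count are arbitrary illustrative choices):

```python
import numpy as np

def logistic(x, a=4.0):
    """Local map f[x] = a x (1 - x)."""
    return a * x * (1.0 - x)

def cml_step(x, eps=0.1, a=4.0):
    """One synchronous update of Eq. (1); np.roll implements the ring x^{L+1} = x^1."""
    fx = logistic(x, a)
    return (1.0 - eps) * fx + 0.5 * eps * (np.roll(fx, 1) + np.roll(fx, -1))

rng = np.random.default_rng(0)
x = rng.random(1000)           # random initial condition on L = 1000 sites
for _ in range(100):
    x = cml_step(x)
```

Since $f$ maps $[0,1]$ into itself for $a=4$ and the update is a convex combination of values of $f$, the lattice state remains in $[0,1]$ for all times.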
\begin{figure}[h]
\includegraphics*[width=8.8cm]{fig1.eps}
\caption{Coupling scheme of CMLs. (a) $x$ and $y$ are mutually
delay coupled CMLs. The $z$-CML is the driver. (b) The driver is
a collection of $N$ unidirectionally coupled CMLs. $k$
denotes the delay time in mutual coupling.}
\label{scheme}
\end{figure}
In this work, we are attempting to synchronize two CMLs labeled $x$ and $y$ at
zero delay when both are mutually delay-coupled and are driven by a third CML
labeled $z$. This is schematically
shown in Fig. \ref{scheme}(a). The second CML labeled
$y$ is obtained by replacing $x$ with $y$ in Eq. (\ref{cml}); CML $z$ can
be obtained in a similar way. We mutually couple
$x$ and $y$ CMLs with a delay.
The $z$-CML is the driver and it is unidirectionally coupled to $x$ and $y$ CMLs.
It is given by,
\begin{equation}
z_{n+1}^i = (1-\epsilon) f[z_n^i] + \frac{\epsilon}{2}
\left( f[z_n^{i-1}] + f[z_n^{i+1}] \right)
\label{cml-z}
\end{equation}
The modified form of $x$-CML is given by,
\begin{eqnarray}
x_{n+1}^i & = & \Gamma \left\{ f[x_n^i] + \beta f[y_{n-k}^i] + \alpha ~f[z_{n}^i] \right\} \nonumber \\
& & + ~~\frac{\epsilon}{2} \left\{ f[x_n^{i-1}] + f[x_n^{i+1}] \right\}
\label{cml-x}
\end{eqnarray}
where $\Gamma = (1-\epsilon)/(1+\alpha+\beta)$. Similarly, the $y$-CML is also modified
and becomes,
\begin{eqnarray}
y_{n+1}^i & = & \Gamma \left\{ f[y_n^i] + \beta f[x_{n-k}^i] + \alpha ~f[z_{n}^i] \right\} \nonumber \\
& & + ~~\frac{\epsilon}{2} \left\{ f[y_n^{i-1}] + f[y_n^{i+1}] \right\}.
\label{cml-y}
\end{eqnarray}
Eqns. (\ref{cml-z}-\ref{cml-y}) represent our coupling scheme, shown in
Fig. \ref{scheme}(a) for zero lag synchronization.
The parameters $\beta \ge 0$ and $\alpha \ge 0$ represent the strength of delayed mutual
coupling and the strength of coupling with the driver CML respectively.
Note that in Eqns. (\ref{cml-x},\ref{cml-y}), delay $k$ is introduced in
the mutual coupling term. If $\alpha=0$ (absence of drive CML), then no isochronal
synchronization takes place between $x_n^i$ and $y_n^i$ in the presence of
mutual delays between them. However, if $\beta=0$
these CMLs synchronize beyond some critical value of $\alpha$. We will
explore the general case when $\alpha , \beta \ge 0$, $k>0$ and show numerical
evidence for zero lag synchronization of $x$- and $y$-CMLs but not with $z$-CML.
The coupling parameter $\epsilon$ and the map parameter $a$ are chosen such that the CML generates chaotic motion.
The coupling scheme shown in Fig. \ref{scheme}(a) is reminiscent of
the generalized synchronization (GS) that has been widely studied in the
last decade \cite{aba}. In the GS scenario, there is one driver and
a driven (response) system and the state of the latter depends on the
former. This is the likely case in the absence of mutual coupling, {\it i.e.}, $\beta=0$.
However, in this scheme we are considering one driver and
two driven systems which themselves are mutually coupled to one another
with a delay. It is also relevant to point out that in this scheme the driver
and the driven system need not necessarily be one unit each but can be
composed of many (sub)systems as shown in Fig. \ref{scheme}(b). In this scheme,
instead of one driver CML, a collection of $N$ unidirectionally coupled
CMLs are used to drive the $x$ and $y$ CMLs. The detailed
results for this scheme will be presented elsewhere.
\begin{figure}
\includegraphics*[width=8.8cm]{fig2.eps}
\caption{(a) $z_n$, (b) $x_n$ and (c) $y_n$ for the 525th lattice site of the CML system
in Eqns. (\ref{cml-z}-\ref{cml-y}). The local map
parameter is $a=4.0$. The local coupling strength in CMLs
is $\epsilon=0.1$ and $\alpha=\beta=1.0$. The $x$ and $y$ CMLs are
mutually coupled with delay $k=26$.}
\label{fig2}
\end{figure}
\begin{figure}
\includegraphics*[width=8.8cm]{fig3.eps}
\caption{(a) $z_n-y_n$, (b) $z_n-x_n$ and (c) $x_n-y_n$. The parameters
are the same as in Fig. \ref{fig2}.}
\label{fig3}
\end{figure}
\begin{figure}
\includegraphics*[width=8.8cm]{3d_color1.eps}
\caption{(Color Online) The quantity $x_n^i-y_n^i$ plotted as function
of $i$ and $n$. Notice that the region beyond $n > 100$ is
flat indicating synchronization without delay in the entire coupled
map lattice. The parameters are the same as in Fig. \ref{fig2}.}
\label{fig4}
\end{figure}
\section{Zero delay synchronization}
The coupled map lattices in Eqns. (\ref{cml-z}-\ref{cml-y}) with
$L=1000$ lattice elements are iterated for 6000
discrete time steps. Each of the CMLs is initialized at $n=0$ with a
different realization of uniformly distributed random numbers.
The logistic map parameter is $a=4.0$ such that the local map
dynamics is chaotic and the local coupling strength is $\epsilon=0.1$. For this combination
of $a$ and $\epsilon$, the CML in Eq. (\ref{cml}) is known to display
spatio-temporal chaos \cite{cml-app}. The $x$ and $y$ CMLs are mutually coupled
with a delay of $k=26$. The $z$ CML drives both the $x$ and $y$ CMLs.
In Fig. \ref{fig2}, we show a typical
time series for $x_n, y_n$ and $z_n$ drawn from the 525th lattice point
from the system of CMLs given by Eqns. (\ref{cml-z}-\ref{cml-y}).
Notice that each of them is chaotic and beyond the 100th time step
$x_n$ and $y_n$ are synchronized without delay. The difference
between the pairs of time series is shown in Fig. \ref{fig3} and
clearly the delay-coupled CMLs, $x$ and $y$, exhibit isochronal synchronization
(Fig. \ref{fig3}(c)) but they do not synchronize with the driver
$z$-CML, as seen in Fig. \ref{fig3}(a,b). Even though results from a typical time
series at one lattice point
are displayed in Figs. \ref{fig2} and \ref{fig3}, we observe
synchronization at all the lattice points of the $x$ and $y$ coupled map
lattices. This is shown in Fig. \ref{fig4} as a space-time plot which has
a flat region coinciding with zero of $z$-axis for $n>100$ at every lattice point.
The isochronal synchronization of CMLs demonstrated in Fig. \ref{fig4}
depends on the strength parameters $\alpha$ and $\beta$. Results presented here
indicate that there is a critical value of $\alpha$ and $\beta$,
other parameters remaining the same, below which synchronization
does not take place. However, the time taken to achieve
synchrony is found to increase with the magnitude of
the delay $k$. The effect of various parameters on synchronization is
discussed in the next section.
In order to understand the correlations that exist between
$x_n$, $y_n$ and $z_n$ at $i$th lattice site, we study the
lagged cross-correlation defined as,
\begin{equation}
C(m) = \frac{\sum_{n=1}^{N} \left( x_n - \overline{x} \right)
\left( y_{n+m} - \overline{y} \right)}{N \sigma_x \sigma_y}
\label{crosscor}
\end{equation}
where $\bar{x}$ and $\bar{y}$ are the sample means, $\sigma_x$ and $\sigma_y$ the
corresponding standard deviations, $m$ is the lag, $N$ is the length of the series,
and the lattice site index is suppressed. In Fig. \ref{fig5}, the solid line shows the
lagged cross correlation between the iterates of
$x$ and $y$ CMLs at the 525th lattice site. At zero lag, $x_n$ and $y_n$ are
almost perfectly correlated with $|C(0)|=0.998$ indicating perfect
synchronization without delay. Along with this, the mutual coupling between
$x$ and $y$ CMLs
with delay $k=26$ leads to partial recurrences in $|C(m)|$ at similar intervals.
In contrast, for $x_n$ and $z_n$ (dashed line in Fig. \ref{fig5}), $|C(0)|=0.701$
indicates the absence of identical synchronization (see also Figs \ref{fig3}(a,b)).
Due to the synchrony between $x_n$ and $y_n$, a similar
result holds for $y_n$ and $z_n$, with peaks in $|C(m)|$ separated by the delay $k=26$.
If we apply instantaneous coupling, {\it i.e.}, $k=0$, then the recurrences would
be absent, as shown by the dotted line in Fig. \ref{fig5}, indicating that $x$ and $y$ CMLs
would not maintain any memory of the dynamics of the other CML. Similar results
hold for all the lattice sites in the CML.
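The lagged cross-correlation of Eq. (\ref{crosscor}) is straightforward to compute. The sketch below normalizes by the overlap length so that $|C(m)| \le 1$ (this normalization is implicit in quoted values such as $|C(0)|=0.998$), and illustrates how the dominant peak recovers a delay of $k=26$ from two series where one is a delayed copy of the other (the orbit seed and series length are arbitrary choices):

```python
import numpy as np

def cross_corr_abs(x, y, max_lag):
    """|C(m)| of Eq. (5) for m = 0 .. max_lag (lattice site index suppressed)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    sx, sy = x.std(), y.std()
    n = len(x)
    return np.array([abs(np.sum(xm[:n - m] * ym[m:]) / ((n - m) * sx * sy))
                     for m in range(max_lag + 1)])

# demo: y is a copy of x delayed by k = 26 steps, drawn from a logistic orbit
a, k, N = 4.0, 26, 4000
s = np.empty(N + k); s[0] = 0.3
for i in range(1, N + k):
    s[i] = a * s[i - 1] * (1.0 - s[i - 1])
x_ser, y_ser = s[k:], s[:N]          # x_n = y_{n+k}
C = cross_corr_abs(x_ser, y_ser, 60)
k_est = int(np.argmax(C))            # lag of the dominant peak recovers k
```

Because the $a=4$ logistic map decorrelates rapidly, $|C(m)|$ is small except at $m=k$, so the peak position is a clean estimator of the delay.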
\begin{figure}
\includegraphics*[width=8cm]{fig5.eps}
\caption{Absolute cross-correlation $|C(m)|$ as a function of lag $m$.
$|C(m)|$ between $x_n$ and $y_n$ (solid line) and between $x_n$ and $z_n$
(dashed line) taken from their 525th lattice point is displayed. The dashed
line is shifted by 10 units along $x$-axis for clarity. The
cross-correlation between $y_n$ and $z_n$ is almost indistinguishable
from the dashed line since $x_n$ and $y_n$ are in synchrony. If mutual
coupling is absent ($k=0$), then recurrences are not seen (dotted line).}
\label{fig5}
\end{figure}
The recurrence pattern of $|C(m)|$ shown in Fig. \ref{fig5} can be used to
detect mutually delayed couplings and estimate the magnitude of delay
in real physical systems. As shown in that figure, the absence of
delayed couplings would not lead to any recurrence. It is essential that
the physical system should be composed of many subsystems which can
display synchronization among themselves. The time interval between
the peaks in $|C(m)|$ would give an estimate of the magnitude of delay.
However, in a complex physical system, the cross-correlations alone will
not be sufficient to identify the driver system. At this point, we also remark
about the possibility of replacing $z$-CML by a noise process. Do we expect
synchronization then? In some restricted range of parameters, the same noise
process driving both $x$- and $y$-CML can lead to isochronal synchronization.
However, the neat recurrence structures and the associated memory features shown in
Fig. \ref{fig5} will be absent in such a case and the magnitude of delays will
not carry any significance.
\section{Sensitivity to parameters}
In this section, we discuss the parametric regimes in which the synchronization
occurs. As pointed out before, the isochronal synchronization depends on the
parameters $k$, $\alpha$ and $\beta$. To understand the role of these
parameters, we apply linear stability analysis.
The synchronized solution of interest is,
\begin{equation}
u^i_{n+1} = x_{n+1}^i - y_{n+1}^i = \bar{u}_{n+1} \;\;\;\;\;\; \mbox{for all}\;\; n.
\label{syncsol}
\end{equation}
In our case, $ \bar{u}_{n+1}=0$.
We will perform a linear stability analysis about this solution
to determine the parameters that will lead to synchronization.
For convenience, we will shift to new variables defined as,
\begin{equation}
u^i_{n} = x_n^i - y_{n}^i, \;\;\;\;\;\;\;\;\;\;\;\; v^i_{n} = x_n^i + y_{n}^i.
\end{equation}
The dynamics of $u^i_{n+1}$ can be written as,
\begin{eqnarray}
u^i_{n+1} & = & \frac{g(u_n^i,v_n^i)}{1+\alpha+\beta} -
\frac{\beta (1-\epsilon)}{1+\alpha+\beta} ~g(u_{n-k}^i,v_{n-k}^i) \nonumber \\
& + & \epsilon \left\{ \sum_{j \thicksim i} g(u_n^j,v_n^j) -
\frac{g(u_n^i,v_n^i)}{1+\alpha+\beta} \right\}.
\label{pert}
\end{eqnarray}
where $g(u_n^i,v_n^i) = f(x_n^i) - f(y_n^i) = a u_n^i (1 - v_n^i)$ and
$j \thicksim i$ represents summation over nearest neighbours.
We will follow the elegant technique discussed in Ref. \cite{joy}. The connection
topology, here being the nearest neighbour, is encoded in the spectra of
the graph Laplacian defined as, $(\Delta w)_i = (1/n_i) \sum_{j \thicksim i}
( w_j - w_i )$. We will use the fact that the eigenmodes $\phi$ of Laplacian
are obtained from the eigenvalue equation $\Delta \phi_m = -\lambda_m \phi_m$ \cite{joy},
where $-\lambda_m$ is the eigenvalue. Then, we will consider perturbations
to synchronized solution $\bar{u}_n$ in Eq. (\ref{syncsol}) by the
$m$th eigenmode $\phi^i_m$ as,
\begin{equation}
u^i_n = \bar{u}_n + \mu ~\delta u_n^m ~ \phi^i_m,
\end{equation}
such that for $\mu \ll 1$, $\delta u_n^m \to 0$ as $n \to \infty$ for
synchronized solutions.
Substituting this in $g(u_n^i,v_n^i)$ and Taylor expanding it about $\mu=0$,
we get,
\begin{equation}
g(u_n^i,v_n^i) = g(\bar{u}_n,v_n^i) + \mu ~\delta u_n^m ~\phi^i_m ~g'(\bar{u}_n),
\end{equation}
and $g'(\bar{u}_n)$ denotes the derivative of $g(u_n^i,v_n^i)$ with respect to $u$,
evaluated at the synchronized solution $\bar{u}_n$.
We substitute this in Eq. (\ref{pert}) to obtain,
\begin{eqnarray}
\delta u^m_{n+1} ~ \phi^i_m & = &
\left\{ \frac{\epsilon}{1+\alpha+\beta} \Delta \phi^i_m +
\frac{1}{1+\alpha+\beta} \phi^i_m \right\} \nonumber \\
& & \delta u^m_n ~ g'(\bar{u}_n) -
\frac{\beta(1-\epsilon)}{1+\alpha+\beta} \nonumber \\
& & \delta u^m_{n-k} ~\phi^i_m ~ g'(\bar{u}_{n-k}).
\end{eqnarray}
Using $\Delta \phi_m = -\lambda_m \phi_m$ and after some simple manipulations, we get,
\begin{eqnarray}
\delta u^m_{n+1} = \left( \frac{1-\lambda_m \epsilon}{1+\alpha+\beta} \right)
\delta u^m_n ~ g'(\bar{u}_n) - \nonumber \\
\frac{\beta(1-\epsilon)}{1+\alpha+\beta} ~ \delta u^m_{n-k} ~ g'(\bar{u}_{n-k})
\label{deleqn}
\end{eqnarray}
This is the relation we need to analyse the stability of synchronized solutions.
The eigenvalue $-\lambda_m$ is dependent only on the connection topology
of the CML. For nearest neighbour coupling, the non-zero eigenvalues of $\Delta$ are
$\lambda_m = 1 - \cos(2 \pi m/L)$, $m=1,2,\ldots,L-1$ \cite{joy}. If $L \gg 1$,
the largest eigenvalue is 2 if $L$ is even and $1+\cos(\pi/L) \sim 2$ if $L$ is odd.
Putting $\lambda_m=2$, Eq. (\ref{deleqn}) can be analysed for the following two cases.
\subsubsection{Absence of mutual coupling, $\beta=0$}
Firstly, we consider the case $\beta=0$, {\it i.e}, absence of mutual coupling.
In this case, we have from Eq. (\ref{deleqn}),
\begin{equation}
\delta u^m_{n+1} = \left( \frac{1-2\epsilon}{1+\alpha} \right)
\delta u^m_n ~ g'(\bar{u}_n)
\label{deleqn1}
\end{equation}
The condition for local stability is,
\begin{equation}
q=\lim_{N\to\infty} \frac{1}{N}\log \frac{|\delta u^m_{N+1}|}{|\delta u^m_0|} < 0.
\label{locstab}
\end{equation}
Since $g'(\bar{u}_n) = f'(x_n)$, we can
iterate Eq. (\ref{deleqn1}) $N$ times to obtain
\begin{equation}
q = \log \left| \frac{1-2\epsilon}{1+\alpha} \right|
+ \langle \log |f'(x_n)| \rangle,
\end{equation}
where $\langle . \rangle$ denotes the time average. If the local map
$f(x_n)$ is ergodic, this average can be replaced by an
ensemble average and we have $\langle \log |f'(x_n)| \rangle = \log 2$,
the Lyapunov exponent of the logistic map. Hence, the condition for synchronization
for $\beta = 0$ turns out to be $|2(1-2\epsilon)/(1+\alpha)| < 1$, which implies
\begin{equation}
\alpha > |2 - 4 \epsilon|-1.
\label{condition1}
\end{equation}
This holds for any value of $k$, as is to be expected.
Secondly, note that if $\alpha=0$, the stability condition in
Eq. (\ref{condition1}) will not be satisfied for any value of $\epsilon$
for which synchronization takes place.
Hence, for $\alpha=0$, {\it i.e}, in the absence of drive CML,
no isochronal synchronization can take place.
This condition is verified by numerical simulations of CMLs in
Eqns. (\ref{cml-z}-\ref{cml-y})
with $\epsilon=0.1$ and $\beta=0$. We define the degree of synchronization
to be,
\begin{equation}
\sigma = \langle (x_n^i - y_n^i)^2 \rangle,
\end{equation}
where the average $\langle . \rangle$
is taken over all the lattice points for 50000 iterations after discarding
the initial 10000 time steps \cite{atay}. If the system synchronizes, then $\sigma \to 0$
as $n \to \infty$. The results in Fig. \ref{b0} show that
for $\alpha > 0.6$, we obtain $\sigma < 10^{-20}$ leading to synchronization
and this confirms
the validity of analytical condition in Eq. (\ref{condition1}).
\begin{figure}[t]
\includegraphics*[width=5cm]{b0sigma.eps}
\caption{Degree of synchronization $\sigma$ as a function of $\alpha$ for $\beta=0$
and $\epsilon=0.1$. The delays are $k=4$ (circles) and $k=24$ (triangles).}
\label{b0}
\end{figure}
\begin{figure}[t]
\includegraphics*[width=8cm]{linsta.eps}
\caption{Degree of synchronization $\sigma$, as a function of $\alpha$ and $\beta$,
obtained by numerically iterating Eq. (\ref{deleqn}).
The results are shown for
two different choices of delay $k$. The black points indicate $\sigma < 10^{-15}$
and white indicates lack of synchronization.}
\label{linsta}
\end{figure}
\subsubsection{Effect of delays, $\beta > 0$}
Next, we consider the case $\beta > 0$. In this case, it
is not straightforward to analytically solve Eq. (\ref{deleqn}) to obtain
local stability criteria. We iterate Eq. (\ref{deleqn}) numerically
to estimate the value of $q$ [Eq. (\ref{locstab})] as a function of
$\alpha$ and $\beta$ for $\epsilon=0.1$.
In Fig. \ref{linsta}, we present the numerical results;
if $q<0$, we denote it by a black point (synchronization)
and if $q>0$ by a white point (no synchronization).
The interesting feature is that the introduction of delays enhances
synchronization to some extent. For instance, at $\beta=0$, synchrony
requires $\alpha > 0.6$
but in the presence of delays approximately $\alpha > 0.4$
is sufficient for synchronization in the range $\sim 0.15 < \beta < 0.5$.
This is reminiscent of the recent results that indicate enhanced synchrony
due to presence of delays in coupled systems \cite{masoller,atay}.
As the strength of mutual coupling $\beta$ increases,
it will require even stronger
drive by $z$-CML to achieve synchronization. This can be qualitatively
seen in Fig. \ref{linsta} where for $\beta > 0.5$, the minimum value of $\alpha$
required for synchronization increases with increase in $\beta$.
Notice again that at $\alpha=0$ there is no synchronization, even in the
presence of delays.
\begin{figure}[t]
\includegraphics*[width=8cm]{abk.eps}
\caption{Degree of synchronization $\sigma$, as a function of $\alpha$ and $\beta$,
obtained by numerically simulating CMLs in Eq. (\ref{cml-z}-\ref{cml-y}).
The results are shown for four different choices of delay $k$.
The black points indicate $\sigma < 10^{-15}$
and white indicates lack of synchronization.}
\label{dosync}
\end{figure}
To confirm the linear stability analysis for $\beta > 0$, in particular the
results displayed in Fig. \ref{linsta}, we simulate the CMLs
in Eqns. (\ref{cml-z}-\ref{cml-y}) as a function of $\alpha$, $\beta$ and $k$
and we display the degree of synchronization $\sigma$ in Fig. \ref{dosync}.
The black points in the figure denote $\sigma < 10^{-15}$ and white points
correspond to lack of synchronization. This is shown for four different
choices of delays $k$. The CML simulations broadly agree with
the numerical estimates of $q$ shown in Fig. \ref{linsta} based on Eq. (\ref{deleqn}).
Features such as the enhancement of synchronization in the presence of
delays and an optimal $\beta$ are clearly seen in these simulations. While a physical
explanation for the enhanced synchrony and the optimal value of $\beta$ is not yet
clear, one plausible reason could be as follows:
for $\alpha=\beta=0$, we have two independent chaotic CMLs.
But in the absence of only the driver CML, {\it i.e.}, with $\alpha=0$, the dynamics of the
CML system, $x_n^i$ and $y_n^i$, settles mostly to a periodic solution
for $0.03 < \beta < 0.14$ and becomes increasingly chaotic for $\beta > 0.14$. Thus, in
parametric space, there
is a window of non-chaotic dynamics flanked on either side by predominantly chaotic dynamics.
The strength of the driver $\alpha$ required to synchronize the non-chaotic
dynamics is less than the one needed for chaotic solution.
This accounts for the dip around $\beta=0.1$--$0.2$ seen in Figs. \ref{linsta} and \ref{dosync}.
Furthermore, it is only to be expected that as $k$ increases, synchrony becomes more
difficult to achieve and hence stronger driving by the $z$-CML is needed to enforce
synchronization. Thus, if $\beta$ is held constant, the minimal $\alpha$
required to bring about synchronization increases as mutual coupling delay $k$
increases. In the limit $k \to \infty$, the $x$- and $y$-CMLs do not
communicate with each other on finite time scales. This scenario corresponds
to setting $\beta=0$, the absence of mutual coupling. Indeed if $k$ is larger
than the simulation times, we obtain similar results as shown in Fig. \ref{b0}.
Even though we use values of $k$ in multiples of 4, we emphasise that
the qualitative results remain unaltered for all even values of $k$. However,
for odd values of $k$, the synchronization region in $(\alpha,\beta)$-space
is smaller than those displayed in Fig. \ref{dosync}.
A better analytical handle
on the solutions of Eq. (\ref{deleqn}) would help in understanding the role of odd $k$.
\section{Robustness}
How robust is this zero delay synchronization against parameter mismatches?
This question is of practical importance since in real-life systems, be it the
EEG signals in the brain or the electronic circuits for encryption in
communications, most often the parameters remain mismatched. For the purposes
of this section, we will explicitly show the parameter dependence in the
CMLs; for instance, $x$-CML in Eq. (\ref{cml-x}) will be denoted by $x_{n+1}^i(a,\epsilon)$
and the local map will be denoted by, $f[x_n^i;a]$. We will consider the
quantity, to be called synchronization error,
\begin{equation}
S_{n+1}^i(a_1,\epsilon_1; a_2,\epsilon_2) = x_{n+1}^i(a_1,\epsilon_1) - y_{n+1}^i(a_2,\epsilon_2).
\label{sync-err}
\end{equation}
The synchronization time $T_{sync}$ is defined such that
$S_{n+1}^i(a_1,\epsilon_1; a_2,\epsilon_2)=0$ for all $n > T_{sync}$.
For most practical purposes, $T_{sync}$ should be
typically much smaller than the experimental times of interest.
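Given a recorded difference signal $S_n^i$ at one site, $T_{sync}$ can be read off as the last time the signal violates a small tolerance (a minimal sketch; the tolerance value is an arbitrary stand-in for "numerically zero"):

```python
import numpy as np

def sync_time(S, tol=1e-12):
    """T_sync such that |S_n| < tol for all n > T_sync; None if not reached."""
    below = np.abs(np.asarray(S, float)) < tol
    if not below[-1]:
        return None                    # not synchronized within the record
    bad = np.nonzero(~below)[0]        # indices still above tolerance
    return int(bad[-1]) if bad.size else 0
```

For an exponentially contracting difference $S_n = S_0 e^{-qn}$ with $q>0$, this gives $T_{sync} \approx \log(S_0/\mathrm{tol})/q$, so stronger contraction (more negative stability exponent) means faster synchronization.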
We will consider the synchronization error in Eq. (\ref{sync-err}) and analytically
estimate the bounds on $S_{n+1}^i$ due to parameter
mismatches. In the case of identical synchronization without delay,
we have shown above that $S_{n+1}^i=0$ for all $i$ and for all $n > T_{sync}$.
Firstly, we note that identical synchronization persists,
{\it i.e.}, $S_{n+1}^i=0$, for reasonably large mismatches in
local map parameter $a$ between the driver CML and the driven CMLs.
In particular, the numerical simulations indicate that
synchronization is mostly independent of the coupling constant $\epsilon$
in the $z$-CML. Hence the important effects arise due to mismatches in the parameters
of the $x$- and $y$-CMLs, which we study below.
In the numerical simulations shown in this section, we have maintained
$\alpha=\beta=1$ and $k=26$.
\subsubsection{Mismatch in local map parameter}
We consider the effect of mismatch in the parameters of $x$ and $y$ CMLs.
First, we consider the case when parameters of $x$- and $z$- CML are
identical but there is a mismatch $\Delta a$ in local map
parameter between $x$- and $y$- CML.
Starting from Eqns. (\ref{cml-x},\ref{cml-y}), after some algebra,
we obtain $S_{n+1}^i$ without any approximation as,
\begin{equation}
S_{n+1}^i(a,\epsilon ; a-\Delta a, \epsilon) = S_{n+1}^i(a,\epsilon ; a, \epsilon) + \phi_1(\Delta a, \epsilon),
\label{mm1}
\end{equation}
where we have,
\begin{eqnarray}
\phi_1(\Delta a, \epsilon) &=& \Gamma \left( f[y_{n}^i;\Delta a] - \beta f[y_{n-k}^i;\Delta a] \right) + \nonumber \\
& & \frac{\epsilon}{2} \left( f[y_{n}^{i-1};\Delta a] + f[y_{n}^{i+1};\Delta a] \right).
\label{adep1}
\end{eqnarray}
For $n > T_{sync}$, we will have $S_{n+1}^i(a,\epsilon ; a, \epsilon)=0$ which is
the condition for zero lag synchronization and it has been numerically demonstrated
in the previous section. Notice that for the logistic map $0 \le f[x_n;a] = a ~g(x_n) \le 1$,
where $g(x) = x(1-x)$. Hence the first term in Eq. (\ref{adep1})
is small compared to the second term and we have the approximate result,
\begin{eqnarray}
S_{n+1}^i(a,\epsilon ; a-\Delta a, \epsilon) &\approx& \frac{\epsilon ~\Delta a}{2}
\left( g[y_{n}^{i-1}] + g[y_{n}^{i+1}] \right) \nonumber \\
& \le & \frac{\epsilon ~\Delta a}{2} g_{max} \approx \frac{\epsilon ~\Delta a}{2},
\label{adep2}
\end{eqnarray}
where $g_{max} = \mbox{max} \left( g[y_{n}^{i-1}] + g[y_{n}^{i+1}] \right) \sim O(1)$.
Thus, in case of mismatch in parameter $a$, the upper bound for synchronization
error is of $O(\epsilon ~\Delta a/2)$. This estimate can be compared with the
average synchronization error after evolving it for a sufficiently
long time as shown in Fig. \ref{fig6}(a,c). The root mean square deviation or the
standard deviation $\sigma_S(\Delta a)$ of $S_{n+1}^i(a,\epsilon ; a-\Delta a, \epsilon)$
is a suitable measure and is
obtained by numerically simulating Eqns. (\ref{cml-z}-\ref{cml-y}).
In Fig. \ref{fig6}(b), we show $\langle\sigma_S(\Delta a)\rangle$, the averaged
standard deviation over all lattice sites and we observe a good agreement with the
analytical result.
\subsubsection{Mismatch in coupling constant}
\begin{figure}[t]
\includegraphics*[width=8cm]{fig9.eps}
\caption{(a) $S_n^i$ for $i=525$th lattice with $a=4.0,\epsilon=0.1$ and
$\Delta a=0.1$. (b) $\langle\sigma_S(\Delta a)\rangle$, the standard deviation of $S_n^i$
averaged over all lattice sites, as a function of $\Delta a$.
(c) $S_n^i$ for $i=525$th lattice with $a=4.0,\epsilon=0.1$ and
$\Delta \epsilon =0.05$ (d) $\langle\sigma_S(\Delta \epsilon)\rangle$
as a function of $\Delta \epsilon$.
Note that the synchronization error due to
parameter mismatch is bounded by estimates in Eqns. (\ref{adep2},\ref{epsdep1})
shown as solid lines in (b) and (d).}
\label{fig6}
\end{figure}
We consider the case when all the three CMLs in Eqns. (\ref{cml-z}-\ref{cml-y})
have the same chaos parameter $a$ but the local coupling strength in
$x$- and $z$-CML is $\epsilon$ and for $y$-CML it is $\epsilon + \Delta \epsilon$.
Typically, $\Delta \epsilon \ll 1$. Once again, we start from Eqns. (\ref{cml-x})
and (\ref{cml-y}) and we obtain,
\begin{equation}
S_{n+1}^i(a,\epsilon ; a, \epsilon - \Delta\epsilon) = S_{n+1}^i(a,\epsilon ; a, \epsilon) + \Delta\epsilon ~~\phi_2
\label{epsdep}
\end{equation}
\begin{eqnarray}
\phi_2 & = & -\frac{1}{3} \left( f[y_{n}^{i};a] + \beta f[x_{n-k}^i;a] + \alpha f[z_{n}^i;a]\right) \nonumber \\
& & + \frac{1}{2} \left( f[y_{n}^{i-1};a] + f[y_{n}^{i+1};a] \right)
\end{eqnarray}
As before, $S_{n+1}^i(a,\epsilon ; a, \epsilon) = 0$ in Eq. (\ref{epsdep}) which
defines the synchronization state.
For the logistic map $0 \le f(x;a) \le 1$ and hence we have, $|\phi_2| \le 1$.
Thus, if the coupling parameters are mismatched the synchronization is still
present though it suffers an error whose bound is estimated to be,
\begin{equation}
S_{n+1}^i(a,\epsilon ; a, \epsilon - \Delta\epsilon) \le \Delta \epsilon.
\label{epsdep1}
\end{equation}
Fig. \ref{fig6}(d) shows that the numerically simulated synchronization error, quantified by
the average standard deviation $\langle\sigma_S(\Delta\epsilon)\rangle$ of
$S_{n+1}^i(a,\epsilon ; a, \epsilon - \Delta\epsilon)$ over all lattice sites,
is always less than the analytical bound, which is linear in $\Delta\epsilon$.
It must be remarked that in both cases of parameter mismatch
studied above, synchronization
suffers an error that can be minimized by tuning $\Delta a$ or $\Delta \epsilon$.
In other words, for large mismatches ($\Delta a, \Delta \epsilon \gg 1$)
synchronization is completely lost as
seen from the trends in numerical results in Fig. \ref{fig6}(b,d).
Obviously, exactly identical synchronization without lag is recovered
if the mismatches $\Delta a$ and $\Delta \epsilon$ are zero. It is possible
that there can be mismatches in both $a$ and $\epsilon$. Proceeding as above,
we can obtain an estimate for the error bounds as,
\begin{eqnarray}
S_{n+1}^i(a,\epsilon ; a-\Delta a, \epsilon-\Delta \epsilon) & = & S_{n+1}^i(a,\epsilon ; a, \epsilon) +
\chi_1 ~\Delta a \nonumber + \\
& & \chi_2 ~\Delta\epsilon + \chi_3 ~\Delta a ~\Delta\epsilon,
\end{eqnarray}
where $\chi_1, \chi_2, \chi_3 \le 1$. Once synchronization is reached,
$S_{n+1}^i(a,\epsilon ; a, \epsilon)=0$.
Thus, depending on the relative magnitude of $\Delta a,\Delta \epsilon$ and
$\Delta a ~\Delta \epsilon$, the dominant synchronization
error bound has linear dependence on one of these factors.
In fact, the error analysis done above holds for any local map
of the form $f[x;a] = a~g(x)$. Experiments in
neuronal studies have reported examples of synchronization with error in
spatially distributed neurons \cite{konig}.
From the form of analytical estimates in Eqns. (\ref{mm1},\ref{adep2}) and
(\ref{epsdep1}), it might appear as though
$z$-CML has no role to play in synchronizing the coupled dynamics. The
contribution of the $z$-CML enters the $x$- and $y$-CMLs through the mutual
coupling terms with the delay $k$.
\section{Conclusions}
We have shown through numerical simulations that two coupled map lattices,
say, $x$-CML and $y$-CML, which are coupled to one another with a delay
can display {\em isochronal} synchronization if they are driven by a
third CML. We have used periodic boundary conditions, {\it i.e.},
a ring type lattice for the CMLs in the results
presented here. The central result remains unaltered even if we use different
boundary conditions, e.g., the one-way coupled map lattice with ring
or open boundary conditions. While the isochronal synchronization
is achieved irrespective of the boundary conditions, the parametric
regimes in which this synchrony occurs depend on the boundary conditions.
Our results also indicate that there is a critical value for the
coupling strength $\epsilon$ above which the synchronization occurs. We have
analytically studied how the strength parameters $\alpha$ and $\beta$
and the delay $k$ affect isochronal synchronization. An interesting
feature is that the presence of delays leads to synchronization
in a larger parametric regime when compared with the case of absent
mutual coupling.
The original motivation for this work was to look for possible mechanisms that
could explain isochronal synchronization occurring in neuronal systems \cite{eeg-rev}.
For simplicity, we considered one dimensional coupled map lattices with
nearest neighbour coupling even though they
are not known to be models for a collection of neurons in the brain. However,
we expect qualitatively similar results for models of neurons too \cite{ira}.
In general, higher
dimensional extensions of CMLs are possible and they display much richer
variety of collective properties like phase synchronized states and cluster
synchronization \cite{2dcml} which could modify the scenario presented in this work.
It would be interesting to study isochronal synchronization in higher dimensional
coupled map systems.
It is known that
two mutually coupled oscillators synchronize with one another. However, the
question of isochronal synchronization in mutually delay-coupled oscillators is
currently being actively pursued in view of its applications in
biological systems. The results discussed here provide one possible mechanism for
isochronal synchronization in delay-coupled spatially extended systems.
We have also obtained analytical estimates for bounds on errors due to
mismatches in parameters between the two CMLs and verified them in simulations.
We have also simulated this scheme with the one-way coupled map lattice \cite{will}
and the results are qualitatively the same as discussed above.
It would be interesting to study isochronal synchronization
in delay coupled physical systems and in their realistic models.
Further, other coupling topologies can also be implemented to study whether
synchronized solutions such as the one discussed here are supported in them.
The results discussed in this work will help understand the effects
of delay coupling in spatially separated, extended physical systems.
\begin{acknowledgments}
One of us (SA) thanks Physical Research Laboratory for the internship during
which time part of this work was begun. We also thank the anonymous referee
for critical comments that helped improve the manuscript.
\end{acknowledgments}
\section{Introduction}
Considerable recent interest in the physics literature has been
devoted to lattices of
the discrete nonlinear Schr\"{o}dinger (dNLS) type with compensated gains
and losses.
For the one-dimensional lattice, a prototypical, mono-parametric
model takes the form
\begin{equation}
\label{dnls}
i \frac{d u_n}{d t} = u_{n+1} - 2 u_n + u_{n-1} + i \gamma (-1)^n u_n + |u_n|^2 u_n,
\end{equation}
where the parameter $\gamma$ stands for the gain and loss coefficient. If the lattice is truncated to
a finite chain, then the site index $n$ ranges from $1$ to $2N$ for a positive integer $N$,
subject to the homogeneous Dirichlet boundary conditions $u_0 = u_{2N+1} = 0$.
The dNLS model (\ref{dnls}) represents one of the simplest discrete systems
which are symmetric with respect to combined parity (P) and time-reversal (T)
transformations. Hence, it is referred to as the PT-dNLS equation.
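For concreteness, the right-hand side of the PT-dNLS equation (\ref{dnls}) with the Dirichlet boundary conditions can be sketched in a few lines of Python; the function name and the sample state are our own choices, and the final check verifies the discrete PT symmetry (site reversal combined with complex conjugation and time reversal) directly:

```python
# Right-hand side of the PT-dNLS equation on a finite chain u_1, ..., u_{2N}
# with Dirichlet boundary conditions u_0 = u_{2N+1} = 0.  The sign pattern
# (-1)^n places loss on odd sites and gain on even sites.
def pt_dnls_rhs(u, gamma):
    padded = [0j] + list(u) + [0j]          # Dirichlet: u_0 = u_{2N+1} = 0
    out = []
    for n in range(1, len(u) + 1):
        coupling = padded[n + 1] - 2 * padded[n] + padded[n - 1]
        gain_loss = 1j * gamma * (-1) ** n * padded[n]
        nonlinear = abs(padded[n]) ** 2 * padded[n]
        # i du_n/dt = coupling + gain_loss + nonlinear  =>  du_n/dt = -i(...)
        out.append(-1j * (coupling + gain_loss + nonlinear))
    return out

# The zero state is an equilibrium.
assert all(abs(z) == 0 for z in pt_dnls_rhs([0j, 0j, 0j, 0j], 0.7))

# Discrete PT symmetry: if u_n(t) is a solution, so is conj(u_{2N+1-n}(-t)).
u = [0.3 + 0.1j, -0.2j, 0.5 + 0j, 0.1 - 0.4j]
w = [z.conjugate() for z in reversed(u)]
left = pt_dnls_rhs(w, 0.7)
right = [-f.conjugate() for f in reversed(pt_dnls_rhs(u, 0.7))]
assert max(abs(x - y) for x, y in zip(left, right)) < 1e-12
```

The last assertion encodes exactly the PT transformation described above: reversing the sites flips the parity of $(-1)^n$, while the conjugation and time reversal flip the sign of the right-hand side.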
Motivated by progress in optics both at the theoretical~\cite{Kuleshov,ziad,Muga} and
experimental~\cite{salamo,dncnat} levels, many researchers have studied
nonlinear stationary states in few-site configurations \cite{Dmitriev,Li,Guenter,Ramezani,suchkov,Sukhorukov,ZK}
as well as in infinite lattices \cite{Pelin1,Sukh,zheng}.
In what follows, we will be particularly interested in dimers and
quadrimers with alternating loss and gain in the adjacent sites.
It should be noted that such configurations are directly amenable
to the experimental considerations in \cite{dncnat}.
In fact, in such finite waveguide arrays, due to the presence of
iron doping, there
unavoidably exists some loss in all the channels (lattice sites).
However, the experimental procedure uses a mask covering
some of the waveguides~\cite{dkip}. Then the ones
that are {\it not} covered from the top are optically
pumped and experience gain. There is considerable freedom
in the selection of the mask so different configurations
(such as loss, loss, gain, gain, or loss, gain, loss, gain)
are feasible. However, the key feature is that because
of the geometric characteristics of the waveguide
(with each channel being about 4 $\mu$m wide,
but 30 mm long), only a homogeneous pump beam can be
practically applied to all the gain channels, as in the
case considered hereafter. Notice, however, that our analysis
will be generalized beyond these configurations
to arbitrary finite chains.
Our previous work \cite{Pelin3} was devoted to the systematic analysis of nonlinear stationary states
in the finite PT-symmetric chains. In particular,
working in the range of the gain and loss coefficient $\gamma$
when the zero equilibrium state is neutrally stable \cite{Bar,Pelin2},
we continued stationary states from the two limits characterized by small and large
amplitudes of the stationary states. For these continuations, we modified the
arguments of the Lyapunov--Schmidt reduction method to establish the
existence of the solutions.
We have also illustrated numerically that the stationary states are stable
for small amplitudes and some of them are stable even for large amplitudes.
The nonlinear dynamics of oscillators with alternating loss and
gain is an interesting problem in its own right.
Aspects of such dynamics that have been previously explored include
the following.
Transmission resonances in a PT-symmetric dimer coupled with a linear dNLS chain were studied
in \cite{Mirosh}. Asymmetric wave propagation through a finite PT-symmetric chain
was observed in \cite{lepri}. Nonlinear dynamics of wave packets near the phase transition
point was addressed with the reduction to the nonlinear Klein--Gordon equation in \cite{nixon1,nixon2}.
Our present paper studies the nonlinear dynamics of oscillators in the simplest finite
PT-symmetric chains such as dimers and quadrimers. We shall prove the following three main results
by using methods of the theory of differential equations.
Although (most of) these results are expected on an intuitive level, their rigorous proof
appears to be a mathematical question of interest in its own right
and can be seen, in some of the cases below, to present
significant challenges. The three main
results are summarized as follows:
\begin{itemize}
\item[R1] We prove that solutions of the PT-dNLS equation do not blow up in a finite time.
This is achieved by a priori energy estimates for squared amplitudes of the nonlinear oscillators.
In the Hamiltonian case $\gamma = 0$, this result is equivalent to the conservation of the total
sum of squared amplitudes, so the calculation proceeds in a fashion parallel
to the corresponding conservation law and bears an extra step associated with Gronwall's lemma.
\item[R2] For parameter values of the gain and loss coefficient $\gamma$
when the zero equilibrium state is neutrally stable (this regime is referred to as the {\it exact PT-phase}), we prove
that the solutions of the finite PT-dNLS equation starting with small initial data remain bounded for all times.
This is expected in Hamiltonian systems with the energy conservation when the zero equilibrium
is stable. Nevertheless, the PT-dNLS equation with $\gamma \neq 0$
is non-Hamiltonian and lacks generally any conserved quantities.
\item[R3] For the same values of $\gamma$, we prove that despite the fact
that the zero equilibrium state is neutrally stable, there
exist solutions of the finite PT-dNLS equation starting with sufficiently large initial data which grow exponentially
fast for larger times. This result is perhaps the least expected among the three, because the values of $\gamma$
are inside the exact PT-phase, yet it can be anticipated since the balance between damped and gained
oscillators is broken at large initial amplitudes, hence the amplitude of the oscillators with gain grows
exponentially according to the linear law, in spite of the contributions of the nonlinear terms and
interactions with other damped oscillators.
\end{itemize}
The article is structured as follows. Section 2 gives the rigorous proof of the three main results
for the dimer. Although the dimer equations are fully integrable because of
the presence of conserved quantities (reviewed in Appendix A), we develop
qualitative methods of analysis, which become suitable for more complicated
finite PT-symmetric chains. Section 3 outlines the proof of the three main results
for the general case of finite PT-symmetric chains. Section 4 illustrates the main results
with numerical computations of dimers and quadrimers. Section 5 contains
a summarizing discussion and some consideration of future challenges.
\vspace{0.5cm}
{\bf Acknowledgments:} The work of P.K. is partially supported by the US
National Science Foundation under grants NSF-DMS-0806762, NSF-CMMI-1000337,
and by the US AFOSR under grant FA9550-12-1-0332.
The work of D.P. and D.T. is supported by the Ministry of Education
and Science of the Russian Federation (Project 14.B37.21.0868).
\section{Nonlinear dynamics of a dimer}
Setting $a := u_1$ and $b := u_2$ for $N = 1$,
we consider the following system of two differential equations describing
a PT-symmetric dimer:
\begin{eqnarray}
\left\{ \begin{array}{l} i \frac{da}{dt} = b - i\gamma a + |a|^2 a, \\
i \frac{d b}{dt} = a + i \gamma b + |b|^2 b. \end{array} \right.
\label{dimer}
\end{eqnarray}
We fix the gain and loss parameter $\gamma$ in the interval $(0,1)$, which guarantees
neutral stability of the zero equilibrium; this is the parametric interval
of the exact PT-phase for this model. Indeed, for any $\gamma \in (0,1)$,
the zero equilibrium point is neutrally stable with the eigenvalue-eigenvector pairs:
\begin{eqnarray}
\label{eigenvalues-dimes}
\lambda = \pm i \sqrt{1 - \gamma^2}, \quad \left( \begin{array}{c} a \\ b \end{array} \right) =
\left( \begin{array}{c} 1 \\ i \gamma \mp \sqrt{1 - \gamma^2} \end{array} \right).
\end{eqnarray}
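These eigenvalue--eigenvector pairs can be verified numerically; the sketch below linearizes (\ref{dimer}) at the zero equilibrium (the value of $\gamma$ is an arbitrary choice of ours):

```python
import math

# Linearization of the dimer at zero:
#   da/dt = -gamma*a - 1j*b,   db/dt = -1j*a + gamma*b,
# written as the 2x2 matrix M acting on (a, b).
gamma = 0.7
M = [[-gamma, -1j], [-1j, gamma]]

s = math.sqrt(1 - gamma ** 2)
# Check M v = lambda v for lambda = +i*s and -i*s with the stated eigenvectors.
for lam, vec in [(1j * s, (1, 1j * gamma - s)), (-1j * s, (1, 1j * gamma + s))]:
    Mv = [M[0][0] * vec[0] + M[0][1] * vec[1],
          M[1][0] * vec[0] + M[1][1] * vec[1]]
    assert max(abs(Mv[k] - lam * vec[k]) for k in range(2)) < 1e-12
```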
The system of dimer equations (\ref{dimer}) is fully integrable with two conserved quantities for
any $\gamma \neq 0$. As a result, the three main results R1--R3 can be proven with the use of conserved
quantities, as is outlined in Appendix A. Because the conserved quantities do not seem to exist
for more complicated PT-symmetric configurations, such as quadrimers, we shall prove the
same results using more general
qualitative methods of the theory of differential equations.
\vspace{0.25cm}
{\bf Proof of R1:} It follows from the system of dimer equations (\ref{dimer}) that
\begin{eqnarray}
\left\{ \begin{array}{l} \frac{d |a|^2}{d t} = - 2\gamma |a|^2 + i( \bar{b} a - b \bar{a}), \\
\frac{d |b|^2}{d t} = 2\gamma |b|^2 - i( \bar{b} a - b \bar{a}). \end{array} \right.
\label{dimer-amplitudes}
\end{eqnarray}
Adding these two equations together, we obtain the balance equations for squared amplitudes
\begin{equation}
\label{balance-dimer}
\frac{d}{d t} ( |a|^2 + |b|^2) = 2\gamma ( |b|^2 - |a|^2).
\end{equation}
By Gronwall's inequality, the balance equation (\ref{balance-dimer}) results
in the a priori bound
\begin{equation}
\label{Gronwall}
|a(t)|^2 + |b(t)|^2 \leq (|a_0|^2 + |b_0|^2) e^{2 \gamma |t|}, \quad t \in \mathbb{R},
\end{equation}
where $a_0 = a(0)$ and $b_0 = b(0)$. A priori bound (\ref{Gronwall}) proves that
the amplitudes $|a(t)|$ and $|b(t)|$ do not blow up in a finite time.
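The a priori bound (\ref{Gronwall}) is straightforward to test numerically; the sketch below integrates (\ref{dimer}) with a standard fourth-order Runge--Kutta scheme (the step size, final time, and initial data are arbitrary choices of ours, and a small tolerance absorbs the discretization error):

```python
import math

# i da/dt = b - i*gamma*a + |a|^2 a,   i db/dt = a + i*gamma*b + |b|^2 b
def dimer_rhs(state, gamma):
    a, b = state
    return (-1j * (b - 1j * gamma * a + abs(a) ** 2 * a),
            -1j * (a + 1j * gamma * b + abs(b) ** 2 * b))

def rk4_step(state, gamma, dt):
    def shift(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))
    k1 = dimer_rhs(state, gamma)
    k2 = dimer_rhs(shift(state, k1, dt / 2), gamma)
    k3 = dimer_rhs(shift(state, k2, dt / 2), gamma)
    k4 = dimer_rhs(shift(state, k3, dt), gamma)
    return tuple(s + dt / 6 * (p + 2 * q + 2 * r + w)
                 for s, p, q, r, w in zip(state, k1, k2, k3, k4))

gamma, dt = 0.7, 0.001
state = (0.4 + 0.2j, 0.3 - 0.1j)
n0 = abs(state[0]) ** 2 + abs(state[1]) ** 2
t = 0.0
for _ in range(5000):                       # integrate up to t = 5
    state = rk4_step(state, gamma, dt)
    t += dt
    norm = abs(state[0]) ** 2 + abs(state[1]) ** 2
    # the a priori bound, up to a small numerical tolerance
    assert norm <= n0 * math.exp(2 * gamma * t) * (1 + 1e-8)
```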
\vspace{0.25cm}
{\bf Proof of R2:} The key point of the proof is to rewrite the system of differential equations
(\ref{dimer-amplitudes}) in the integral form:
\begin{equation}
\label{integral-amplitude-a}
|a(t)|^2 = |a_0|^2 e^{-2 \gamma t} + i \int_{0}^t e^{-2 \gamma (t-\tau)}
\left[ a(\tau) \bar{b}(\tau) - \bar{a}(\tau) b(\tau) \right] d \tau
\end{equation}
and
\begin{equation}
\label{integral-amplitude}
|b(t)|^2 = e^{2 \gamma t} \left( |b_0|^2 - i \int_{0}^t e^{-2 \gamma \tau}
\left[ a(\tau) \bar{b}(\tau) - \bar{a}(\tau) b(\tau) \right] d \tau \right).
\end{equation}
First, we show that the product $ab$ remains bounded as a function of $t$ for all times. Setting
$$
u := \bar{a} b + a \bar{b}, \quad v := i (a \bar{b}-\bar{a} b)
$$
and using the system of dimer equations (\ref{dimer}), we obtain
\begin{eqnarray}
\left\{ \begin{array}{l} \frac{d u}{d t} = (|b|^2-|a|^2) v, \\
\frac{d v}{d t} = (|b|^2-|a|^2) (2 - u). \end{array} \right.
\label{dimer-phases}
\end{eqnarray}
The system (\ref{dimer-phases}) reduces to the harmonic oscillator equation
in the new temporal variable
$$
s(t) := \int_0^t (|b(t')|^2 - |a(t')|^2) dt'.
$$
Therefore, we obtain the exact solution of the system (\ref{dimer-phases}):
\begin{equation}
\label{exact-ab-variable}
\left\{ \begin{array}{l} u(t) = 2 + C_1 \cos\left[\int_0^t (|b|^2 - |a|^2) dt'\right] + C_2
\sin \left[\int_0^t (|b|^2 - |a|^2) dt'\right], \\
v(t) = -C_1 \sin\left[\int_0^t (|b|^2 - |a|^2) dt'\right] + C_2
\cos \left[\int_0^t (|b|^2 - |a|^2) dt'\right], \end{array} \right.
\end{equation}
where $C_1$ and $C_2$ are arbitrary constants, which are uniquely
defined by the initial conditions. From (\ref{exact-ab-variable}), we obtain
\begin{equation}
\label{bound-ab}
|a(t) b(t)| \leq 1 + |C_1| + |C_2|, \quad t \geq 0,
\end{equation}
hence $ab$ is a bounded function of $t$ for all times. Note that
this result also follows from the conservation law (\ref{conserved-1}) in Appendix A.
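The exact solution (\ref{exact-ab-variable}) also shows that $(u-2)^2 + v^2 = C_1^2 + C_2^2$ is conserved along trajectories of (\ref{dimer}), which gives a convenient numerical cross-check of the system (\ref{dimer-phases}) without computing $s(t)$; the integrator below is a minimal sketch with arbitrarily chosen small initial data:

```python
import math

def dimer_rhs(state, gamma):
    a, b = state
    return (-1j * (b - 1j * gamma * a + abs(a) ** 2 * a),
            -1j * (a + 1j * gamma * b + abs(b) ** 2 * b))

def rk4_step(state, gamma, dt):
    def shift(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))
    k1 = dimer_rhs(state, gamma)
    k2 = dimer_rhs(shift(state, k1, dt / 2), gamma)
    k3 = dimer_rhs(shift(state, k2, dt / 2), gamma)
    k4 = dimer_rhs(shift(state, k3, dt), gamma)
    return tuple(s + dt / 6 * (p + 2 * q + 2 * r + w)
                 for s, p, q, r, w in zip(state, k1, k2, k3, k4))

def uv(state):
    # u = conj(a) b + a conj(b) = 2 Re(a conj(b)),
    # v = i (a conj(b) - conj(a) b) = -2 Im(a conj(b))
    a, b = state
    ab = a * b.conjugate()
    return 2 * ab.real, -2 * ab.imag

gamma, dt = 0.7, 0.001
state = (0.2 + 0.1j, -0.1 + 0.15j)
u0, v0 = uv(state)
invariant0 = (u0 - 2) ** 2 + v0 ** 2       # equals C_1^2 + C_2^2
for _ in range(5000):                      # integrate up to t = 5
    state = rk4_step(state, gamma, dt)
u1, v1 = uv(state)
assert abs((u1 - 2) ** 2 + v1 ** 2 - invariant0) < 1e-6
```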
Next, we show that the component $b(t)$ remains bounded for all times if and only if
the solution of the system (\ref{dimer}) satisfies the constraint
\begin{equation}
|b_0|^2 = i \int_0^{\infty} e^{-2\gamma t} \left[ a(t) \bar{b}(t) - \bar{a}(t) b(t) \right] dt.
\label{constraint}
\end{equation}
Indeed, since $|ab|$ is a bounded function of $t$, the integral in (\ref{integral-amplitude})
is bounded for all $t \in \mathbb{R}_+$. Therefore, if the constraint (\ref{constraint}) is not satisfied,
it follows from the integral equation (\ref{integral-amplitude}) that
the solution $|b(t)|^2$ grows like $e^{2 \gamma t}$. On the other hand, if
the constraint (\ref{constraint}) is satisfied, the integral equation (\ref{integral-amplitude})
takes the form
\begin{equation}
\label{integral-amplitude-constraint}
|b(t)|^2 = i e^{2 \gamma t} \int_{t}^{\infty} e^{-2 \gamma \tau} \left[ a(\tau) \bar{b}(\tau) - \bar{a}(\tau) b(\tau) \right] d \tau,
\end{equation}
from which the global bound follows
\begin{equation}
\label{bounds-b}
|b(t)|^2 \leq \gamma^{-1} \sup_{t \in \mathbb{R}_+} |a(t) b(t)|, \quad t \in \mathbb{R}_+
\end{equation}
and implies that $\sup_{t \in \mathbb{R}_+} |b(t)|< \infty$. On the other hand,
it follows from the integral equation (\ref{integral-amplitude-a}) that
\begin{equation}
\label{bounds-a}
|a(t)|^2 \leq |a_0|^2 e^{-2 \gamma t} + \gamma^{-1} (1 - e^{-2 \gamma t})
\sup_{t \in \mathbb{R}_+} |a(t) b(t)|, \quad t \in \mathbb{R}_+,
\end{equation}
hence $\sup_{t \in \mathbb{R}_+} |a(t)| < \infty$. Note that bound (\ref{bounds-b}) implies
$$
\sup_{t \in \mathbb{R}_+} |b(t)| \leq \gamma^{-1} \sup_{t \in \mathbb{R}_+} |a(t)|.
$$
It remains to show that the constraint (\ref{constraint}) is satisfied for all
solutions of the system of dimer equations (\ref{dimer}) starting with small initial
data $(a_0,b_0)$. Let $\delta := \sqrt{|a_0|^2 + |b_0|^2}$ be small. We would like to show
that the amplitudes $|a|$ and $|b|$ do not grow significantly on the time scales of $t = \mathcal{O}(\delta^{-2})$.
This is achieved with the transformation of the system of dimer equations (\ref{dimer})
to the normal coordinates near the zero equilibrium point.
Using the eigenvalues and eigenvectors in (\ref{eigenvalues-dimes}) near the zero equilibrium,
we define the normal coordinates $c$ and $d$ by the transformation
\begin{equation}
\label{transform-1}
\left\{
\begin{array}{l} a = c - d (\sqrt{1 - \gamma^2} + i \gamma), \\
b = c (\sqrt{1 - \gamma^2} + i \gamma) + d. \end{array} \right.
\end{equation}
The system of dimer equations is now rewritten in coordinates $(c,d)$:
\begin{eqnarray}
\left\{ \begin{array}{l} i \dot{c} = \sqrt{1 - \gamma^2} c + (|c|^2 + 2 |d|^2) c + d^2 \bar{c} + 2i \gamma c^2 \bar{d}, \\
i \dot{d} = -\sqrt{1 - \gamma^2} d + (2 |c|^2 + |d|^2) d + c^2 \bar{d} - 2i \gamma d^2 \bar{c}. \end{array} \right.
\label{dimer-normal}
\end{eqnarray}
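One can check directly that the transformation (\ref{transform-1}) diagonalizes the linear part of (\ref{dimer}); a short numerical verification (with an arbitrarily chosen $\gamma$, and matrix operations written out by hand for a $2 \times 2$ system):

```python
import math

gamma = 0.7
s = math.sqrt(1 - gamma ** 2)

# Linearized dimer: d(a,b)/dt = M (a,b); transformation (a,b) = T (c,d).
M = [[-gamma, -1j], [-1j, gamma]]
T = [[1, -(s + 1j * gamma)], [s + 1j * gamma, 1]]

# Inverse of the 2x2 matrix T.
detT = T[0][0] * T[1][1] - T[0][1] * T[1][0]
Tinv = [[T[1][1] / detT, -T[0][1] / detT],
        [-T[1][0] / detT, T[0][0] / detT]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

D = matmul(Tinv, matmul(M, T))
# Expected: D = diag(-i sqrt(1-gamma^2), +i sqrt(1-gamma^2)),
# i.e. c ~ e^{-i sqrt(1-gamma^2) t} and d ~ e^{+i sqrt(1-gamma^2) t}.
assert abs(D[0][0] + 1j * s) < 1e-12 and abs(D[1][1] - 1j * s) < 1e-12
assert abs(D[0][1]) < 1e-12 and abs(D[1][0]) < 1e-12
```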
Removing the linear terms by the phase rotation factors,
\begin{equation}
\label{transform-2}
c(t) = C(t) e^{-i \sqrt{1 - \gamma^2} t}, \quad d(t) = D(t) e^{i \sqrt{1 - \gamma^2} t},
\end{equation}
we obtain the cubic nonlinear system
\begin{eqnarray}
\left\{ \begin{array}{l} \dot{C} = -i (|C|^2 + 2 |D|^2) C - i D^2 \bar{C} e^{4 i \sqrt{1 - \gamma^2} t}
+ 2 \gamma C^2 \bar{D} e^{- 2i \sqrt{1 - \gamma^2} t}, \\
\dot{D} = - i (2 |C|^2 + |D|^2) D - i C^2 \bar{D} e^{-4 i \sqrt{1 - \gamma^2} t}
- 2 \gamma D^2 \bar{C} e^{2i \sqrt{1 - \gamma^2} t}. \end{array} \right.
\label{dimer-cubic}
\end{eqnarray}
Since the solution exists globally, for any $T \in (0,\infty)$, let us define $\epsilon$ by
$$
\epsilon := \sup_{t \in [0,T]} \sqrt{|C(t)|^2 + |D(t)|^2}.
$$
By Gronwall's inequality, we obtain from the system (\ref{dimer-cubic}):
\begin{equation}
\label{bound-Gr}
|C(t)|^2 + |D(t)|^2 \leq (|C_0|^2 + |D_0|^2) e^{(3 + 2 \gamma) \epsilon^2 t}, \quad t \in [0,T].
\end{equation}
Since $|C_0|^2 + |D_0|^2 = \mathcal{O}(\delta^2)$, we obtain $\epsilon = \mathcal{O}(\delta)$
if $T = \mathcal{O}(\epsilon^{-2})$, that is, if $T = \mathcal{O}(\delta^{-2})$.
Finally, if the constraint (\ref{constraint}) is not satisfied, the solution $|b(t)|^2$ grows like $e^{2 \gamma t}$,
and this growth on the time scale $t = \mathcal{O}(\delta^{-2})$ contradicts the bound (\ref{bound-Gr}),
thanks to the transformations (\ref{transform-1}) and (\ref{transform-2}). Hence
the constraint (\ref{constraint}) is satisfied for all solutions of the dimer equations (\ref{dimer})
starting with small initial data $(a_0,b_0)$, consequently these solutions remain
bounded for all positive times.
\vspace{0.25cm}
{\bf Proof of R3:} Let us consider the second equation of the system (\ref{dimer-amplitudes}) rewritten as follows:
\begin{equation}
\label{second-eq-amplitude}
\frac{d |b|^2}{d t} = 2\gamma |b|^2 - i( \bar{b} a - b \bar{a}).
\end{equation}
Let us choose the initial data $(a_0,b_0)$ to be sufficiently large so that
$$
2\gamma |b_0|^2 - i( \bar{b}_0 a_0 - b_0 \bar{a}_0) \geq 2\gamma |b_0|^2 - 2 |a_0| |b_0| \geq 2 \gamma |b_0|^2 - 2 (1 + |C_1| + |C_2|) > 0,
$$
where we have used the bound (\ref{bound-ab}) on $\sup_{t \in \mathbb{R}_+} |a(t) b(t)|$.
For instance, if $a_0 = 0$, then $C_1 = -2$, $C_2 = 0$, and $b_0$ must satisfy
the inequality $|b_0|^2 > 3 \gamma^{-1}$. Then, by the differential equation (\ref{second-eq-amplitude}), $|b(t)|^2$ will grow
and the inequality
$$
2\gamma |b(t)|^2 - i( \bar{b}(t) a(t) - b(t) \bar{a}(t)) \geq 2 \gamma |b(t)|^2 - 2 (1 + |C_1| + |C_2|) > 0,
$$
will be preserved for all positive times. By the comparison principle for differential equations,
$|b(t)|^2$ remains larger than the lower solution that grows exponentially like $e^{2 \gamma t}$.
Then, by the integral equation (\ref{integral-amplitude}), we conclude that the growth of
$|b(t)|^2$ is exactly exponential like $e^{2 \gamma t}$.
Note that even if the component $b(t)$ for the gained oscillator grows exponentially, the component
$a(t)$ for the damped oscillator remains bounded thanks to the bounds (\ref{bound-ab}) and (\ref{bounds-a}):
$$
\sup_{t \in \mathbb{R}_+} |a(t)|^2 \leq |a_0|^2 + \gamma^{-1} \sup_{t \in \mathbb{R}_+} |a(t) b(t)|
\leq |a_0|^2 + \gamma^{-1} (1 + |C_1| + |C_2|),
$$
which is only defined by the initial data $(a_0,b_0)$. Moreover, $|a(t)|^2$ must decay exponentially
as $e^{-2 \gamma t}$ to compensate the growth of $|b(t)|^2$ and to provide the uniform bound
(\ref{bound-ab}) for all positive times. Yet, while the growth of $|b(t)|^2$ is monotonic under these conditions,
according to the above argument, the decay of $|a(t)|^2$ is typically oscillatory
(see equation (\ref{xi-growth}) in Appendix A).
It is also important to note that the above considerations
provide a simple sufficient criterion for the exponential growth, namely
\begin{equation}
\label{condition-b-0}
|b_0|^2 > \gamma^{-1} (1 + |C_1| + |C_2|).
\end{equation}
Given $a_0$ and $b_0$, coefficients $C_1$ and $C_2$ can be directly computed from
the exact solution (\ref{exact-ab-variable}). Then, if the inequality (\ref{condition-b-0}) is satisfied,
it can be immediately inferred that the amplitude $|b(t)|$ will grow indefinitely
according to $e^{2 \gamma t}$ and $|a(t)|$ will correspondingly
decay, so that their product remains bounded.
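Since $C_1 = u(0) - 2$ and $C_2 = v(0)$ by (\ref{exact-ab-variable}), the sufficient criterion (\ref{condition-b-0}) is completely explicit in the initial data. A possible packaging (the function name is ours; recall that the criterion is sufficient but not necessary for growth):

```python
def growth_criterion(a0, b0, gamma):
    """Sufficient (not necessary) condition for indefinite growth of |b(t)|."""
    ab = a0 * b0.conjugate()
    # u = conj(a) b + a conj(b), v = i (a conj(b) - conj(a) b), at t = 0
    u0, v0 = 2 * ab.real, -2 * ab.imag
    c1, c2 = u0 - 2, v0                   # constants in the exact (u, v) solution
    return abs(b0) ** 2 > (1 + abs(c1) + abs(c2)) / gamma

# With a0 = 0: C1 = -2, C2 = 0, so the threshold is |b0|^2 > 3/gamma.
assert growth_criterion(0j, 3j, 0.7)          # |b0|^2 = 9 > 3/0.7
assert not growth_criterion(0j, 1j, 0.7)      # |b0|^2 = 1 < 3/0.7
assert growth_criterion(1 + 0j, 5j, 0.7)      # |b0|^2 = 25 > (1+2+10)/0.7
```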
\section{Nonlinear dynamics of a finite PT-symmetric chain}
We now consider the generalization of our results to a
finite PT-symmetric chain, which is described by the PT-dNLS equation
(\ref{dnls}) for $n \in S_N := \{1,2,...,2N\}$, subject to the Dirichlet boundary conditions
$u_0 = u_{2N+1} = 0$. In the previous works \cite{Bar,Pelin3}, it was proved that
the zero equilibrium is neutrally stable for any $\gamma \in (-\gamma_N,\gamma_N)$, where
$$
\gamma_N := 2 \cos\left( \frac{\pi N}{1 + 2N} \right).
$$
When $N = 1$, this corresponds to $\gamma_1 = 1$. In what follows, we fix $\gamma \in (0,\gamma_N)$.
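For reference, the thresholds $\gamma_N$ decrease monotonically to zero as the chain grows; a quick computation (the range of $N$ is for illustration only):

```python
import math

def gamma_threshold(N):
    # gamma_N = 2 cos(pi N / (1 + 2N)); the exact PT-phase is |gamma| < gamma_N.
    return 2 * math.cos(math.pi * N / (1 + 2 * N))

assert abs(gamma_threshold(1) - 1.0) < 1e-12          # dimer: gamma_1 = 1
thresholds = [gamma_threshold(N) for N in range(1, 6)]
# The thresholds decrease monotonically towards zero as N -> infinity.
assert all(x > y > 0 for x, y in zip(thresholds, thresholds[1:]))
```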
We are now ready to prove the three main results R1--R3 in the general case of finite $N \in \mathbb{N}$.
\vspace{0.25cm}
{\bf Proof of R1:} For any $n \in S_N$, the squared amplitude satisfies the evolution equation
\begin{eqnarray}
\frac{d |u_n|^2}{d t} = 2\gamma (-1)^n |u_n|^2 + g_n - g_{n-1}, \quad g_n := i(u_n \bar{u}_{n+1} - \bar{u}_n u_{n+1}).
\label{dimer-amplitudes-chain}
\end{eqnarray}
Adding up all equations, we obtain the balance equation for squared amplitudes
\begin{equation}
\label{balance-chain}
\frac{d}{d t} \sum_{n \in S_N} |u_n|^2 = 2\gamma \sum_{n \in S_N} (-1)^n |u_n|^2.
\end{equation}
By Gronwall's inequality, the balance equation (\ref{balance-chain}) results in
the a priori bound
\begin{equation}
\label{Gronwall-chain}
\sum_{n \in S_N} |u_n(t)|^2 \leq \left( \sum_{n \in S_N} |u_n(0)|^2 \right) e^{2 \gamma |t|}, \quad t \in \mathbb{R}.
\end{equation}
Bound (\ref{Gronwall-chain}) proves that the set of amplitudes $ \{ |u_n(t)| \}_{n \in S_N}$
does not blow up in a finite time.
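The balance equation (\ref{balance-chain}) can also be verified directly from the right-hand side of (\ref{dnls}), since the coupling and nonlinear terms cancel in the sum; a minimal sketch for a quadrimer ($N = 2$, with an arbitrary sample state of ours):

```python
def pt_dnls_rhs(u, gamma):
    # du_n/dt = -i (u_{n+1} - 2 u_n + u_{n-1} + i gamma (-1)^n u_n + |u_n|^2 u_n),
    # with Dirichlet boundary conditions u_0 = u_{2N+1} = 0.
    padded = [0j] + list(u) + [0j]
    return [-1j * (padded[n + 1] - 2 * padded[n] + padded[n - 1]
                   + 1j * gamma * (-1) ** n * padded[n]
                   + abs(padded[n]) ** 2 * padded[n])
            for n in range(1, len(u) + 1)]

gamma = 0.5
u = [0.3 + 0.2j, -0.4j, 0.1 - 0.1j, 0.25 + 0.05j]     # quadrimer state (N = 2)
dudt = pt_dnls_rhs(u, gamma)

# d/dt sum |u_n|^2 = sum 2 Re(conj(u_n) du_n/dt) should equal
# 2 gamma sum (-1)^n |u_n|^2: the coupling and cubic terms cancel in the sum.
lhs = sum(2 * (z.conjugate() * f).real for z, f in zip(u, dudt))
rhs = 2 * gamma * sum((-1) ** (n + 1) * abs(z) ** 2 for n, z in enumerate(u))
assert abs(lhs - rhs) < 1e-12
```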
\vspace{0.25cm}
{\bf Proof of R2:} We rewrite the differential equations
(\ref{dimer-amplitudes-chain}) in the integral form, separately for
odd $n$
\begin{equation}
\label{integral-amplitude-chain}
|u_n(t)|^2 = |u_n(0)|^2 e^{-2 \gamma t} + \int_{0}^t e^{-2 \gamma (t-\tau)} \left[
g_n(\tau) - g_{n-1}(\tau) \right] d \tau
\end{equation}
and even $n$
\begin{equation}
\label{integral-amplitude-chain-even}
|u_n(t)|^2 = e^{2 \gamma t} \left( |u_n(0)|^2 + \int_{0}^t e^{-2 \gamma \tau} \left[
g_n(\tau) - g_{n-1}(\tau) \right] d \tau \right).
\end{equation}
We shall now prove that the set of components $\{ g_n \}_{n \in S_N}$ remains
bounded for all times. Setting
$$
f_n := u_n \bar{u}_{n+1} + \bar{u}_n u_{n+1}, \quad g_n := i(u_n \bar{u}_{n+1} - \bar{u}_n u_{n+1})
$$
and using the PT-dNLS equation (\ref{dnls}), we obtain
\begin{eqnarray}
\left\{ \begin{array}{l} \frac{d f_n}{d t} = (|u_{n+1}|^2-|u_n|^2) g_n +
i (\bar{u}_{n-1} u_{n+1} - u_{n-1} \bar{u}_{n+1}) + i(u_n \bar{u}_{n+2} - \bar{u}_n u_{n+2}), \\
\frac{d g_n}{d t} = (|u_{n+1}|^2-|u_n|^2) (2 - f_n) +
(\bar{u}_{n-1} u_{n+1} + u_{n-1} \bar{u}_{n+1}) - (u_n \bar{u}_{n+2} + \bar{u}_n u_{n+2}). \end{array} \right.
\label{dimer-phases-chain}
\end{eqnarray}
Using the variation of constants method, we write
\begin{equation}
\label{exact-ab-variable-chain}
\left\{ \begin{array}{l} f_n(t) = 2 + C_n(t) e^{i s_n(t)} + D_n(t) e^{-i s_n(t)}, \\
g_n(t) = i C_n(t)e^{i s_n(t)} - i D_n(t) e^{-i s_n(t)}, \end{array} \right.
\end{equation}
where
$$
s_n(t) := \int_0^t \left( |u_{n+1}(t')|^2-|u_n(t')|^2 \right) dt',
$$
and obtain the equivalent system of differential equations
\begin{eqnarray}
\label{dimer-equivalent-amplitude}
\left\{ \begin{array}{l}
\frac{d C_n}{d t} = i(u_n \bar{u}_{n+2} - u_{n-1} \bar{u}_{n+1}) e^{-i s_n}, \\
\frac{d D_n}{d t} = i (\bar{u}_{n-1} u_{n+1} - u_{n+2} \bar{u}_{n}) e^{i s_n}. \end{array} \right.
\end{eqnarray}
Since $\bar{u}_n u_{n+1} = 1 + D_n e^{-i s_n}$, we integrate the second equation
of the system (\ref{dimer-equivalent-amplitude}) and obtain
\begin{equation}
\label{u-n-u-n}
\bar{u}_n u_{n+1} = 1 + D_n(0) e^{-i s_n(t)} + i e^{-i s_n(t)} \int_0^t
(\bar{u}_{n-1} u_{n+1} - u_{n+2} \bar{u}_{n}) e^{i s_n(\tau)} d \tau.
\end{equation}
If the amplitudes $|u_n|$ or $|u_{n+1}|$ are bounded for all times,
then $\bar{u}_n u_{n+1}$ is bounded for all times from the Cauchy--Schwarz inequality
$|g_n| \leq 2 |u_n| |u_{n+1}|$. Therefore, to conclude that $g_n$ (or equivalently,
$\bar{u}_n u_{n+1}$) remains bounded for all other solutions, it is sufficient
to consider the case when either $|u_n|$ or $|u_{n+1}|$ or both grow as $t \to \infty$.
Assume first that either $|u_n|$ or $|u_{n+1}|$ grows as $t \to \infty$, but not both.
Without loss of generality, we assume that $|u_n|$ grows for even $n$.
Then, for sufficiently large $t_0 > 0$, for which $|u_{n+1}|^2 - |u_n|^2$ is sign-definite,
we can write the integral term as follows:
\begin{equation}
\label{integration-by-parts}
i \int_{t_0}^t
(\bar{u}_{n-1} u_{n+1} - u_{n+2} \bar{u}_{n}) e^{i s_n(\tau)} d \tau =
\int_{t_0}^t
\frac{\bar{u}_{n-1} u_{n+1} - u_{n+2} \bar{u}_{n}}{|u_{n+1}|^2 - |u_n|^2} \frac{d}{d\tau} e^{i s_n(\tau)} d \tau.
\end{equation}
Under the same assumption that $|u_n|$ grows for even $n$,
it follows from the integral equation (\ref{integral-amplitude-chain-even}) that
the amplitude $|u_n|$ grows at the precise exponential rate $e^{\gamma t}$.
Similarly, $|u_{n+2}|$ may grow but is only allowed to grow at the same exponential rate.
As a result, the integrand
in (\ref{integration-by-parts}) before the derivative term
converges to a constant value as $t \to \infty$ exponentially fast. Therefore, integration by parts yields a uniform
constant bound for all times including the limit $t \to \infty$.
In this case, we conclude that there exist positive constants $\{ G_n \}_{n \in S_N}$ such that
\begin{equation}
\label{bound-g-n}
|g_n(t)| \leq G_n, \quad t \geq 0, \quad n \in S_N,
\end{equation}
hence $g_n$ is a bounded function of $t$ for all times.
It remains to exclude the case when both $|u_n|$ and $|u_{n+1}|$ grow
simultaneously at the same rate, so that
$|u_{n+1}|^2 - |u_n|^2 \to 0$ as $t \to \infty$.
Adding two amplitude balance equations (\ref{dimer-amplitudes-chain}) for odd $n$, we obtain
\begin{eqnarray}
\frac{d}{d t} \left( |u_{n+1}|^2 + |u_n|^2 \right) =
2\gamma (|u_{n+1}|^2 - |u_n|^2) + g_{n+1} - g_{n-1}.
\label{technical-eq}
\end{eqnarray}
If both $|u_n|$ and $|u_{n+1}|$ grow but $|u_{n+1}|^2 - |u_n|^2 \to 0$ as $t \to \infty$,
then (\ref{technical-eq}) implies that $|u_{n-1}|$ or $|u_{n+2}|$ grow at the same rate,
in other words, all squared amplitudes $\{ |u_n|^2 \}_{n \in S_N}$ grow at the same rate and
$|u_{n+1}|^2 - |u_n|^2 \to 0$ as $t \to \infty$ for all $n$. However, this clearly contradicts the balance
equation (\ref{balance-chain}). Therefore, this case is impossible.
By bound (\ref{bound-g-n}), functions $g_n$ are bounded for all $t$. It follows from
the integral equation (\ref{integral-amplitude-chain}) that
the squared amplitudes $|u_n(t)|^2$ are bounded for all odd $n$
(they correspond to the damped oscillators):
\begin{equation}
\label{bounds-damped}
|u_n(t)|^2 \leq |u_n(0)|^2 e^{-2 \gamma t} + (2\gamma)^{-1}
(1 - e^{-2 \gamma t}) \sup_{t \in \mathbb{R}_+} (|g_n(t)| + |g_{n-1}(t)|), \quad t \in \mathbb{R}_+,
\end{equation}
hence $\sup_{t \in \mathbb{R}_+} |u_n(t)| < \infty$ for odd $n$.
On the other hand, it follows from
the integral equation (\ref{integral-amplitude-chain-even}) that
$|u_n(t)|^2$ for all even $n$,
corresponding to the gain oscillators, is bounded for all times if and only if
the solution of the PT-dNLS equation (\ref{dnls}) satisfies the constraint
\begin{equation}
|u_n(0)|^2 = \int_0^{\infty} e^{-2\gamma t} \left[ g_{n-1}(t) - g_n(t)
\right] dt, \quad \mbox{\rm for even} \;\; n \in S_N.
\label{constraint-chain}
\end{equation}
If the constraint (\ref{constraint-chain}) is satisfied,
the integral equation (\ref{integral-amplitude-chain-even})
takes the form
\begin{equation}
\label{integral-amplitude-constraint2}
|u_n(t)|^2 = -e^{2 \gamma t} \int_{t}^{\infty} e^{-2 \gamma \tau} \left[
g_n(\tau) - g_{n-1}(\tau) \right] d \tau,
\end{equation}
from which the global bound follows
\begin{equation}
\label{bounds-u-n}
|u_n(t)|^2 \leq (2\gamma)^{-1} \sup_{t \in \mathbb{R}_+} (|g_n(t)| + |g_{n-1}(t)|), \quad t \in \mathbb{R}_+
\end{equation}
and implies that $\sup_{t \in \mathbb{R}_+} |u_n(t)|< \infty$ for even $n$.
Using bound (\ref{bounds-damped}) and the inequality $|g_n| \leq 2 |u_n| |u_{n+1}|$,
we find from bound (\ref{bounds-u-n}) that for all even $n$, we have
$$
\sup_{t \in \mathbb{R}_+} |u_n(t)| \leq \gamma^{-1} \sup_{t \in \mathbb{R}_+} (|u_{n-1}(t)| + |u_{n+1}(t)|).
$$
The proof that the constraints (\ref{constraint-chain}) are satisfied for all
solutions of the PT-dNLS equation (\ref{dnls}) starting with small initial
data is similar to the case of dimers. It is achieved with the transformation
of the PT-dNLS equation to normal coordinates and subsequent control
of the solution for long times that are inversely proportional to the squared
size of the small initial data. This control contradicts the
exponential growth of $|u_n(t)|^2$ like $e^{2 \gamma t}$ for even $n$,
which occurs if the constraint (\ref{constraint-chain}) is not satisfied.
\vspace{0.25cm}
{\bf Proof of R3:} Using the balance equation (\ref{dimer-amplitudes-chain}) for even $n$
and the global bound (\ref{bound-g-n}), we choose the initial data $\{ u_n(0) \}_{n \in S_N}$
to be sufficiently large so that
$$
2\gamma |u_n(0)|^2 + g_n(0) - g_{n-1}(0) \geq 2 \gamma |u_n(0)|^2 - G_n - G_{n-1} > 0.
$$
By the differential equation (\ref{dimer-amplitudes-chain}) for even $n$,
the squared amplitude $|u_n(t)|^2$ will grow and the inequality
$$
2\gamma |u_n(t)|^2 + g_n(t) - g_{n-1}(t) \geq 2 \gamma |u_n(t)|^2 - G_n - G_{n-1} > 0,
$$
will be preserved for all positive times. By the comparison principle for differential equations,
$|u_n(t)|^2$ for even $n$ remains larger than the lower solution that grows exponentially like $e^{2 \gamma t}$.
Then, by the integral equation (\ref{integral-amplitude-chain}), we conclude that the growth of
$|u_n(t)|^2$ for even $n$ is exactly exponential like $e^{2 \gamma t}$.
Again, $|u_n(t)|^2$ for odd $n$ must decay exponentially
as $e^{-2 \gamma t}$ to compensate the growth of $|u_n(t)|^2$ for even $n$ and to provide the uniform bound
(\ref{bound-g-n}) for all positive times. As in the case
of the dimer, the decay of the odd sites may be oscillatory,
however, if the above inequality holds, the indefinite growth
of the even sites is monotonic.
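As a preview of the numerics in the next section, the following sketch integrates a quadrimer ($N = 2$) of the PT-dNLS equation (\ref{dnls}) from large initial data placed on the gain (even) sites; the parameters, step size, and initial data are ad hoc illustrative choices of ours, with $\gamma = 0.5 < \gamma_2$ inside the exact PT-phase:

```python
import math

def pt_dnls_rhs(u, gamma):
    padded = [0j] + list(u) + [0j]          # Dirichlet boundary conditions
    return [-1j * (padded[n + 1] - 2 * padded[n] + padded[n - 1]
                   + 1j * gamma * (-1) ** n * padded[n]
                   + abs(padded[n]) ** 2 * padded[n])
            for n in range(1, len(u) + 1)]

def rk4_step(u, gamma, dt):
    def shift(s, k, h):
        return [si + h * ki for si, ki in zip(s, k)]
    k1 = pt_dnls_rhs(u, gamma)
    k2 = pt_dnls_rhs(shift(u, k1, dt / 2), gamma)
    k3 = pt_dnls_rhs(shift(u, k2, dt / 2), gamma)
    k4 = pt_dnls_rhs(shift(u, k3, dt), gamma)
    return [s + dt / 6 * (a + 2 * b + 2 * c + d)
            for s, a, b, c, d in zip(u, k1, k2, k3, k4)]

gamma, dt = 0.5, 1e-4                       # gamma < gamma_2 ~ 0.618
u = [0j, 4 + 0j, 0j, 4 + 0j]                # large data on the gain (even) sites
total0 = sum(abs(z) ** 2 for z in u)
even0 = abs(u[1]) ** 2 + abs(u[3]) ** 2
t = 0.0
for _ in range(20000):                      # integrate up to t = 2
    u = rk4_step(u, gamma, dt)
    t += dt
even = abs(u[1]) ** 2 + abs(u[3]) ** 2
odd = abs(u[0]) ** 2 + abs(u[2]) ** 2
assert even > even0                         # gain sites grow (statement R3)
assert odd < even                           # damped sites lag behind
# the a priori bound of R1 still holds, up to a small numerical tolerance
assert even + odd <= total0 * math.exp(2 * gamma * t) * (1 + 1e-6)
```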
\section{Numerical illustrations for dimers and quadrimers}
We now turn to a numerical illustration of the analytical results R1--R3.
The case of the dimer is considered in Figures \ref{tfig1} and~\ref{tfig2}.
For demonstration purposes, we choose $\gamma=0.7 < 1$, although
we have verified the validity of the results also for other values
of $\gamma$ in the interval $(0,1)$.
In Fig.~\ref{tfig1}, we explore the fate
of sufficiently small initial data in connection with the statement
R2. In particular, we sample both the real and the imaginary parts
of the initial data from a uniform distribution in the interval $[0,0.1]$.
As a result, for all $1000$ realizations considered
herein the squared initial $l^2$ norm is less than $0.04$, and hence
this case corresponds to the choice $\delta=0.2$.
The middle panel of the figure depicts the evolution
of all $1000$ realizations in time via a contour plot of
$|a(t)|^2 + |b(t)|^2$; this quantity remains bounded (notice also
the relevant colorbar). Moreover, it typically features
oscillatory dynamics, a canonical example of which is shown
in the right panel of the figure. Hence, in accordance with
statement R2, the solutions for all sufficiently small initial
data remain bounded over the monitored times.
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=50mm,height=40mm]{fig_tent1.eps}
\includegraphics[width=50mm,height=40mm]{fig_tent2.eps}
\includegraphics[width=50mm,height=40mm]{fig_tent3.eps}
\end{center}
\caption{The left panel shows the (sorted in increasing
value) squared numerical $l^2$
norm for our $1000$ realizations of uniformly distributed random
initial data whose real and imaginary parts are drawn
within the interval $[0,0.1]$. The evolution of the different initial data
is shown as a colormap of the total
density $|a(t)|^2 + |b(t)|^2$ versus time (in $[0,30]$) and
the realization index in the middle panel. The bounded nature of the
results attests to the absence of indefinite growth and
the existence of oscillations. The right panel shows a typical example
of bounded oscillations for $|a(t)|^2$ (solid blue line)
and $|b(t)|^2$ (green dashed line).}
\label{tfig1}
\end{figure}
In Fig.~\ref{tfig2}, we explore a setting where the initial
data are sampled from an interval ten times as large, that is,
the real and imaginary parts of $a$ and $b$ are drawn
randomly from a uniform distribution in the interval
$[0,1]$. This enables the monitoring of large initial
data settings, as is clearly illustrated in the figure
(based on the corresponding initial norms). This, in turn,
leads the solutions for a large fraction of the initial data to grow
exponentially in time, verifying statement R3;
these solutions are illustrated by a saturated
white color in the contour plot of $\log(|a(t)|^2 + |b(t)|^2)$
in the top right panel of Fig.~\ref{tfig2}. On the other hand,
there are still among these $1000$ realizations ones that preserve
roughly the same (red) colormap throughout their evolution, indicating
that they correspond to bounded solutions.
An example of each of these two possibilities is illustrated
in the bottom panels of Fig.~\ref{tfig2}. On the bottom left
panel, a solution consonant with statement R3 (in that it
is associated with exponential growth) and also with statement
R1 (in that the growth happens with rate $2 \gamma$ shown for
comparison by a black dash-dotted line) is illustrated.
It is also worthwhile to note that for such solutions,
the product $|a(t) b(t)|$ remains bounded,
whereas $|a(t)|$ decreases exponentially with some oscillations.
On the bottom right panel, an oscillatory and
bounded solution is shown.
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=50mm,height=40mm]{fig_tent4.eps}
\includegraphics[width=50mm,height=40mm]{fig_tent5.eps} \\
\includegraphics[width=50mm,height=40mm]{fig_tent6.eps}
\includegraphics[width=50mm,height=40mm]{fig_tent7.eps}
\end{center}
\caption{The same as in Figure \ref{tfig1} but the initial amplitudes of both real
and imaginary parts are taken to be ten times larger (i.e., drawn
randomly from a uniform distribution in $[0,1]$). The top left
panel shows the (sorted) distribution of the initial square $l^2$
norms. The top right panel shows the time evolution of the different
realizations. In this case the contour plot shows
$\log(|a(t)|^2 + |b(t)|^2)$ and the results are saturated for
large norms; i.e., for the realizations resulting in the white
regions, the evolution results in indefinite growth, while for
the ``red threads'', the evolution stays bounded for all $t$.
The bottom panels show a respective example
of the two possible scenarios (in the same form as in Fig.~\ref{tfig1}).
The only difference is that in the left panel for comparison
a dashed black line is used to depict $e^{2 \gamma t}$ in the semilog
plot, clearly indicating that the growth rate of $|b(t)|^2$
asymptotically follows the theoretical prediction.
The bounded product $|a(t) b(t)|$ is shown also in the form of a (red) dash-dotted line.}
\label{tfig2}
\end{figure}
We now turn our attention to the case of quadrimers.
The case of small initial data for the quadrimer is
examined in Fig.~\ref{tfig3}. Once again, we have monitored
the evolution of $1000$ quadrimer realizations, with random
initial data chosen from a uniform distribution with both
real and imaginary parts in $[0,0.1]$. It can be clearly
seen from the colorbar that all relevant runs maintain
a bounded norm and follow what appears to be a nearly
periodic evolution (see, in particular, a typical case example
in the right panel of Fig.~\ref{tfig3}).
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=50mm,height=40mm]{fig_tent_q1.eps}
\includegraphics[width=50mm,height=40mm]{fig_tent_q2.eps}
\end{center}
\caption{Same as Figure \ref{tfig1} but for the quadrimer
with small initial data. For the contour plot evolution
of the squared $l^2$ norm, it is clear that all orbits remain
bounded (left panel). A typical example of the resulting
bounded orbit is shown in the right
panel with the blue solid and red dash-dotted lines denoting
the gain sites, while the green dashed and cyan dotted lines
correspond to the lossy ones.}
\label{tfig3}
\end{figure}
On the other hand, Figure \ref{tfig4} shows the dynamics of
the quadrimer starting with random initial data of size ten times
larger than in Figure \ref{tfig3}. In this case, similarly to Fig.~\ref{tfig2},
we have plotted the squared $l^2$ norm of the chain in a logarithmic
contour plot, with the saturation (i.e., the yellow/faint color) indicating the
indefinite growth of most configurations. On the other hand, in this
case too, a number of solutions (the ones appearing as ``red threads''
in the left panel of Fig.~\ref{tfig4}) remain bounded.
The two additional panels of Fig.~\ref{tfig4}, middle and right,
display the two prototypical scenarios that we have observed
as being realized when indefinite growth (according to the predicted
rate of $e^{2 \gamma t}$ for the squared densities) arises for
the quadrimer configurations. In the middle panel, only one
of the two gain sites ultimately grows, while the other only
results in bounded oscillations. Nevertheless, in a number
of the relevant cases, like the one of the right panel, it is clear
that {\it both} gain sites are ultimately led towards indefinite
growth with their lossy counterparts both decaying in this case.
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=50mm,height=40mm]{fig_tent_q3.eps}
\includegraphics[width=50mm,height=40mm]{fig_tent_q4.eps}
\includegraphics[width=50mm,height=40mm]{fig_tent_q5.eps}
\end{center}
\caption{Same as in Figure \ref{tfig3} for the quadrimer with large initial
data. From
the saturation of the left panel's logarithmic scale, it is clear that
most trajectories lead to indefinite growth. The middle and right panels
illustrate the two principal scenarios through which this happens,
i.e., {\it one} of the gain sites growing exponentially (middle) or {\it both}
of the gain sites growing exponentially (right).}
\label{tfig4}
\end{figure}
\section{Discussion}
In the present work, we have explored the dynamical features of
the finite PT-dNLS chains. We have centered
our exposition around three principal features of the models.
Using an a priori
estimate, based on the generalization of the $l^2$-norm
conservation law (which holds for $\gamma=0$) and the Gronwall lemma, we
precluded finite-time blowup for this system and offered an upper
bound on its potential growth.
Second, we were able to provide
sharp bounds for the maximal possible growth of the amplitude at the sites with gain
and the decay of the amplitudes at the sites with loss, making the interesting observation
that the product of the two stays bounded. This analysis,
in turn, permitted us to explore the fate of small data, leading
to the conclusions that solutions starting with such data always remain bounded, as may be
intuitively anticipated on the basis of the spectral stability of
the zero equilibrium (in the regime of exact PT-symmetry).
Finally, perhaps the most counter-intuitive of our findings concerned
the potential indefinite growth of solutions with large initial data {\it even in the
regime of exact PT phase}. Not only were we able to offer sufficient
conditions for such growth, but also we were able to identify its precise
rate, revealing that the growth rate indicated by the a priori bound
is not merely an upper bound but a precise characterization of
the density increase.
These results were first presented
in systematic detail in the case of the dimer; for the latter, the complete
integrability revealed earlier in \cite{Ramezani} can be used to make
considerations far more precise, as indicated in Appendix A. However,
we intentionally chose to present proofs of our statements that
would generalize to more sites to indicate the generality of our
considerations. All of these
findings were also corroborated by systematic numerical computations,
through a large number of realizations (with random initial data).
All of these remained bounded for small initial data, as per statement
R2, but for large initial data were led to indefinite growth
both in the case of the dimer and in that of the quadrimer with
the precise rate offered by statements R1 and R3.
It will certainly be relevant to extend the present
considerations to a number of different directions.
On the one hand, it would be interesting to complement
considerations such as the ones presented herein
with more precise estimates. In particular, offering
sharper conditions for the ``separatrices'' between
indefinite growth and bounded oscillations is a particularly interesting
problem. On the other hand, the stable coherent structures of
the finite chain would naturally be anticipated to
be the centers around which the bounded motion is
organized; however, aside from the integrable case of the
dimer, this feature is not immediately transparent from
our analysis and would be quite relevant to further
explore. Finally, it is an interesting problem to consider
higher dimensional settings, such as those in \cite{Guenter}.
The latter settings may enable the formation of more complex
phase patterns and their enhanced connectivity may modify
the arguments devised herein.
\section*{Acknowledgements} I would like to thank Daniel Erman, Mois\'es Herrad\'on Cueto, Kit Newton, Solly Parenti, Claudiu Raicu, and Melanie Matchett Wood for their helpful conversations and comments. The computer algebra system \texttt{Macaulay2} \cite{M2} provided valuable assistance throughout this work.
\section*{Notation} Throughout we work over a field $\mathbb{K}$. When clear we generally admit the reference to the field, and so write $\P^{r}$ for $\P^{r}_{\mathbb{K}}\coloneqq \P(\mathbb{K}^{r+1})$. When referring to vectors (i.e. elements of $\mathbb{Z}^{n}$ or $\mathbb{K}^n$) we normally use a bold font for example $\aa,\mathbf b,\mathbf d,\mathbf v,\mathbf w$. Given a vector $\mathbf v=(v_1,v_2,\ldots,v_{n})$ we denote the sum $v_1+v_2+\cdots+v_n$ by $|\mathbf v|$. For the sake of notational hygiene we abuse notation slightly and write $\mathbb{Z}^2_{\geq1}$ for $\left(\mathbb{Z}_{\geq1}\right)^2$, that is tuples $(a,b)\in \mathbb{Z}^2$ such that $a\geq 1$ and $b\geq1$. Likewise for $\mathbb{Z}_{\geq0}^2$. For brevity we write $\mathbf 1$ for $(1,1)\in\mathbb{Z}^2$ and $\mathbf 0$ for $(0,0)\in\mathbb{Z}^2$.
\section{Background and Set Up}\label{sec:set-up}
Fixing $\mathbf n=(n_1,n_2)\in \mathbb{Z}^2_{\geq1}$, we let $S'=\mathbb{K}[x_{0},x_{1},\ldots,x_{n_1}]$ and $S''=\mathbb{K}[y_{0},y_{1},\ldots,y_{n_2}]$ be standard $\mathbb{Z}$-graded polynomial rings, and set $S=S'\otimes_{\mathbb{K}}S''$ with the induced $\mathbb{Z}^{2}$-multigrading. Concretely $S$ is isomorphic to the bi-graded polynomial ring $\mathbb{K}[x_0,x_1,\ldots,x_{n_1},y_{0},y_{1},\ldots,y_{n_2}]$ where $\deg(x_i)=(1,0)\in \mathbb{Z}^2$ and $\deg(y_j)=(0,1)\in \mathbb{Z}^2$ for every $i=0,1,\ldots,n_1$ and $j=0,1,\ldots,n_2$. Moreover, $S$ is isomorphic to the Cox ring of $\P^{\mathbf n}$, which we generally denote $\operatorname{Cox}(\P^{\mathbf n})$ (see \cite{cox95}). Since $S$ is $\mathbb{Z}^2$-graded there is a natural decomposition of $\mathbb{K}$-vector spaces
\[
S\cong \bigoplus_{\aa\in\mathbb{Z}^2} S_{\aa},
\]
where $S_{\aa}$ is the $\mathbb{K}$-vector space spanned by monomials in $S$ of bi-degree $\aa$. The Hilbert function of $S$ is the function $\operatorname{HF}(\aa,S)=\dim_{\mathbb{K}} S_{\aa}$. Similarly given an ideal $J\subset S$ that is homogeneous with respect to this $\mathbb{Z}^2$-grading, we write $J_{\aa}$ for the $\mathbb{K}$-vector space spanned by monomials in $J$ of bi-degree $\aa$, and the Hilbert function of $J$ is the function $\operatorname{HF}(\aa,J)=\dim_{\mathbb{K}} J_{\aa}$.
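For instance, counting monomials directly recovers the familiar product-of-binomials formula $\operatorname{HF}(\aa,S)=\binom{a_1+n_1}{n_1}\binom{a_2+n_2}{n_2}$ for $\aa=(a_1,a_2)\in\mathbb{Z}^2_{\geq0}$. The sketch below is an illustration of this standard fact only, not code used in the paper:

```python
from math import comb

def hf_formula(a, n):
    """HF(a, S) for S = Cox(P^{n1} x P^{n2}): a product of two binomial counts."""
    (a1, a2), (n1, n2) = a, n
    if a1 < 0 or a2 < 0:
        return 0
    return comb(a1 + n1, n1) * comb(a2 + n2, n2)

def hf_count(a, n):
    """Count monomials x^u y^v with |u| = a1 and |v| = a2 by brute force."""
    (a1, a2), (n1, n2) = a, n
    def compositions(total, parts):
        # all ways to write `total` as an ordered sum of `parts` nonnegatives
        if parts == 1:
            yield (total,)
            return
        for head in range(total + 1):
            for tail in compositions(total - head, parts - 1):
                yield (head,) + tail
    xs = sum(1 for _ in compositions(a1, n1 + 1))
    ys = sum(1 for _ in compositions(a2, n2 + 1))
    return xs * ys

print(hf_formula((2, 3), (1, 2)))  # 3 * 10 = 30
```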
\begin{defn}
Given $\mathbf b\in \mathbb{Z}^2$ and $\mathbf d\in \mathbb{Z}^2_{\geq1}$ we define the \textbf{bi-graded Veronese module} of $S$ to be
\[
S(\mathbf b;\mathbf d)\coloneqq\bigoplus_{k\in \mathbb{Z}}S_{k\mathbf d+\mathbf b}\subset S,
\]
which we consider as a $\mathbb{Z}$-graded $R=\Sym S_{\mathbf d}$ module.
\end{defn}
More specifically a generator $\ell$ of $R$ corresponds to a monomial $m\in S_{\mathbf d}$, and then $\ell$ acts on $S(\mathbf b;\mathbf d)$ via multiplication by this monomial $m$. Further, the degree $k$ piece of $S(\mathbf b;\mathbf d)$ is $S_{k\mathbf d+\mathbf b}$, and so the degree one piece is $S_{\mathbf d+\mathbf b}$. Now as an $R$-module $S(\mathbf b;\mathbf d)$ is isomorphic to the $\mathbb{Z}$-graded homogeneous coordinate ring of $\P^{\mathbf n}$ embedded by $\O_{\P^{\mathbf n}}(\mathbf d)$.
Given $p,q\in \mathbb{N}$ we define the \textbf{$(p,q)$-th Koszul cohomology group} of $S(\mathbf b;\mathbf d)$ to be the cohomology of the sequence:
\begin{equation}\label{eq:koszul-R}
\begin{tikzcd}[column sep = 3em]
\cdots \rar{}& \bigwedge\nolimits^{p+1}R_{1}\otimes S_{(q-1)\mathbf d+\mathbf b}\rar{\partial_{p+1,q-1}}&\bigwedge\nolimits^{p}R_{1}\otimes S_{q\mathbf d+\mathbf b}\rar{\partial_{p,q}}&\bigwedge\nolimits^{p-1}R_{1}\otimes S_{(q+1)\mathbf d+\mathbf b}\rar{}&\cdots
\end{tikzcd}
\end{equation}
where the differentials are given
\begin{align*}
\partial_{p+1,q-1}\left(m_0\wedge m_1\wedge\cdots\wedge m_p\otimes f\right)&=\sum_{i=0}^{p}(-1)^i m_0\wedge m_1\wedge\cdots\wedge \hat{m}_i\wedge \cdots\wedge m_p\otimes m_if \\
\partial_{p,q}\left(m_1\wedge m_2\wedge\cdots\wedge m_p\otimes f\right)&=\sum_{i=1}^{p}(-1)^i m_1\wedge m_2\wedge\cdots\wedge \hat{m}_i\wedge \cdots\wedge m_p\otimes m_if.
\end{align*}
As in the introduction, we denote this by $K_{p,q}(\mathbf n,\mathbf b;\mathbf d)$, and note that $K_{p,q}(\mathbf n,\mathbf b;\mathbf d)\cong K_{p,q}\left(\P^{\mathbf n},\O_{\P^{\mathbf n}}(\mathbf b);\O_{\P^{\mathbf n}}(\mathbf d)\right)$.
That said, it will be helpful for us to realize the Koszul complex in \eqref{eq:koszul-R} above in a different way. Towards this, notice that there exist maps:
\begin{center}
\begin{tikzcd}[column sep = 3.5em]
R\rar[two heads]{} & S(\mathbf 0;\mathbf d) \rar[hook] & S
\end{tikzcd}.
\end{center}
Moreover, when restricted to the degree one piece of $R$, and the subsequent images, these maps give natural isomorphisms
\begin{center}
\begin{tikzcd}[column sep = 3.5em]
R_{1}\arrow[r, leftrightarrow, "\sim"] & S(\mathbf 0;\mathbf d)_{1} \arrow[r, leftrightarrow, "\sim"] & S_{\mathbf d}.
\end{tikzcd}
\end{center}
Thus, the $R$-module action of $R_{1}$ on $S$ is the same as the $S$-module action of $S_{\mathbf d}$ on $S$, and so the Koszul complex in \eqref{eq:koszul-R} is naturally isomorphic to the following Koszul complex:
\begin{equation}\label{eq:kozul-S}
\begin{tikzcd}[column sep = 3em]
\cdots \rar{}& \bigwedge\nolimits^{p+1}S_{\mathbf d}\otimes S_{(q-1)\mathbf d+\mathbf b}\rar{\partial_{p+1,q-1}}&\bigwedge\nolimits^{p}S_{\mathbf d}\otimes S_{q\mathbf d+\mathbf b}\rar{\partial_{p,q}}&\bigwedge\nolimits^{p-1}S_{\mathbf d}\otimes S_{(q+1)\mathbf d+\mathbf b}\rar{}&\cdots
\end{tikzcd}
\end{equation}
where the differentials are defined in an analogous way. So the cohomology of \eqref{eq:kozul-S} is isomorphic to $K_{p,q}(\mathbf n,\mathbf b;\mathbf d)$.
We end this section by noting that considerations of Castelnuovo-Mumford regularity show that if $\mathbf d\gg0$, relative to $\mathbf b$, then $K_{p,q}(\mathbf n,\mathbf b;\mathbf d)=0$ for $q>|\mathbf n|$. In particular, if $\mathbf b=\mathbf 0$ then $K_{p,q}(\mathbf n,\mathbf 0;\mathbf d)=0$ for $q>|\mathbf n|$ for all choices of $\mathbf d\in \mathbb{Z}^2_{\geq1}$.
\begin{example}
If $\mathbf b\neq\mathbf 0$ then it is not the case that $K_{p,|\mathbf n|}(\mathbf n,\mathbf b;\mathbf d)=0$ for all choices of $\mathbf d\in \mathbb{Z}^2_{\geq1}$. For example, using arguments similar to those in Proposition~\ref{prop:regularity} one can show that if $\mathbf n=(1,3)$, $\mathbf d=(3,3)$, and $\mathbf b=(-2,-1)$ then there exists $p$ such that $K_{p,|\mathbf n|}(\mathbf n,\mathbf b;\mathbf d)\neq0$.
\end{example}
\begin{prop}\label{prop:regularity}
Fix $\mathbf n=(n_1,n_2)\in \mathbb{Z}^2_{\geq1}$, $\mathbf d=(d_1,d_2)\in \mathbb{Z}^2_{\geq1}$, and $\mathbf b=(b_1,b_2)\in \mathbb{Z}^2$. If the following two conditions are satisfied then $K_{p,q}(\mathbf n,\mathbf b;\mathbf d)=0$ for $q>|\mathbf n|$:
\begin{align}
d_1+b_1n_2>-n_1-1 \quad \quad &\text{or} \quad \quad d_2+b_2n_2<0, \label{eq:reg-1} \\
d_1+b_1n_1<0 \quad \quad &\text{or} \quad \quad d_2+b_2n_1>-n_2-1. \label{eq:reg-2}
\end{align}
\end{prop}
\begin{proof}
By Proposition~2.38 of \cite{aprodu10} it is enough to show that $H^i(\P^{\mathbf n}, \O_{\P^{\mathbf n}}(\mathbf d+\mathbf b(|\mathbf n|-i)))=0$ for all $i>0$. Using the K\"{u}nneth formula \cite[\href{https://stacks.math.columbia.edu/tag/0BEC}{Tag 0BEC}]{stacks-project} to compute $H^i(\P^{\mathbf n}, \O_{\P^{\mathbf n}}(\mathbf d+\mathbf b(|\mathbf n|-i)))$ we see that these cohomology groups are only potentially non-zero when $i=n_1$, $n_2$, or $|\mathbf n|$. In particular, $H^{n_1}(\P^{\mathbf n}, \O_{\P^{\mathbf n}}(\mathbf d+\mathbf b(|\mathbf n|-n_1)))$ is isomorphic to $H^{n_1}(\P^{n_1},\O_{\P^{n_1}}(d_1+b_1n_2))\otimes H^{0}(\P^{n_2}, \O_{\P^{n_2}}(d_2+b_2n_2))$. Thus, the condition that $H^{n_1}(\P^{\mathbf n}, \O_{\P^{\mathbf n}}(\mathbf d+\mathbf b(|\mathbf n|-n_1)))=0$ is equivalent to \eqref{eq:reg-1}. An analogous argument shows that the vanishing of $H^{n_2}(\P^{\mathbf n}, \O_{\P^{\mathbf n}}(\mathbf d+\mathbf b(|\mathbf n|-n_2)))$ is equivalent to \eqref{eq:reg-2}. Finally, in the last case, when $i=|\mathbf n|$, by using the K\"{u}nneth formula we see that $H^{|\mathbf n|}(\P^{\mathbf n}, \O_{\P^{\mathbf n}}(\mathbf d+\mathbf b(|\mathbf n|-|\mathbf n|)))$ is zero if and only if $d_1>- n_1-1$ or $d_2>-n_2-1$. Since $d_1\geq1$ and $d_2\geq1$ these conditions are always satisfied.
\end{proof}
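The hypotheses of Proposition~\ref{prop:regularity} are mechanical to evaluate; the sketch below (an illustration only, not part of the argument) checks the two disjunctions \eqref{eq:reg-1} and \eqref{eq:reg-2}. Note that for $\mathbf n=(1,3)$, $\mathbf d=(3,3)$, $\mathbf b=(-2,-1)$ the hypothesis fails, consistent with the nonvanishing example above, while $\mathbf b=\mathbf 0$ always satisfies it since $d_1,d_2\geq1$.

```python
def regularity_hypothesis(n, d, b):
    """Evaluate the hypotheses of prop:regularity for n, d, b in Z^2."""
    (n1, n2), (d1, d2), (b1, b2) = n, d, b
    cond1 = (d1 + b1 * n2 > -n1 - 1) or (d2 + b2 * n2 < 0)  # eq:reg-1
    cond2 = (d1 + b1 * n1 < 0) or (d2 + b2 * n1 > -n2 - 1)  # eq:reg-2
    return cond1 and cond2

print(regularity_hypothesis((1, 3), (3, 3), (0, 0)))    # True
print(regularity_hypothesis((1, 3), (3, 3), (-2, -1)))  # False
```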
\section{A Regular Sequence on $\P^{n_1}\times \P^{n_2}$}\label{sec:reg-seq}
One useful approach when attempting to construct non-zero syzygies is to quotient by a linear regular sequence as this does not change the Koszul cohomology groups \cite[Theorem~2.20]{aprodu10}. For example, in order to prove non-vanishing results for $\P^{n}$, Ein, Erman, and Lazarsfeld quotient by the regular sequence consisting of powers of the variables \cite{einErmanLazarsfeld16}. Since we are working on a product of projective spaces such a regular sequence is not an option for us. Namely there are no monomial regular sequences of bi-degree $\mathbf d$ of length $|\mathbf n|+1$ on either the Cox ring of $\P^{\mathbf n}$ or the homogeneous coordinate ring of $\P^{\mathbf n}$ embedded by $\O_{\P^{\mathbf n}}(\mathbf d)$.
Instead, we choose to work with a sequence of multigraded forms which form a virtual regular sequence of length $|\mathbf n|+1$ on the Cox ring of $\P^{\mathbf n}$ (i.e. $S$). That is to say, a sequence of elements $g_{0},g_{1},\ldots,g_{|\mathbf n|}$ of bi-degree $\mathbf d$ whose support is contained in the irrelevant ideal $\langle x_0,x_1,\ldots,x_{n_1}\rangle \cap \langle y_0,y_1,\ldots,y_{n_2}\rangle$ of $\P^{\mathbf n}$. Since the ideal $\langle g_0,g_1,\ldots,g_{|\mathbf n|}\rangle$ is supported on the irrelevant ideal, the $g_i$ form a regular sequence on $\P^{\mathbf n}$. By the isomorphism between $S_{\mathbf d}$ and $R_{1}$ discussed in the previous section, such $g_{0},g_{1},\ldots,g_{|\mathbf n|}$ correspond to a sequence of linear forms $\ell_{0},\ell_{1},\ldots,\ell_{|\mathbf n|}$ in $R$, that is a regular sequence on $S(\mathbf b;\mathbf d)$. The $g_{i}$ we use generalize forms first introduced by Eisenbud and Schreyer in \cite{eisenbudSchreyer09}, and later used in \cite{berkesch13} and \cite{oeding17}.
\begin{defn}
Fix $\mathbf n=(n_1,n_2)\in \mathbb{Z}^2_{\geq1}$ and $\mathbf d=(d_1,d_2)\in \mathbb{Z}^2_{\geq1}$. Given $0\leq t \leq |\mathbf n|$ we define
\[
g_t\coloneqq\sum_{\substack{i+j=t\\0\leq i \leq n_1\\ 0 \leq j \leq n_2}}x_{i}^{d_1}y_j^{d_2}.
\]
\end{defn}
\begin{example}
For example if $\mathbf n=(1,1)$ and $\mathbf d=(d_1,d_2)$ then there are three $g_{t}$'s:
\[
g_0=x_0^{d_1}y_0^{d_2}, \quad g_1=x_0^{d_1}y_1^{d_2}+x_1^{d_1}y_0^{d_2}, \quad g_2=x_1^{d_1}y_1^{d_2}.
\]
On the other hand if $\mathbf n=(2,3)$ and $\mathbf d=(d_1,d_2)$ there are six $g_{t}$'s:
\[
g_0=x_0^{d_1}y_0^{d_2}, \quad g_1=x_0^{d_1}y_1^{d_2}+x_1^{d_1}y_0^{d_2}, \quad g_2=x_0^{d_1}y_2^{d_2}+x_1^{d_1}y_1^{d_2}+x_{2}^{d_1}y_{0}^{d_2},
\]
\[
g_{3}=x_{0}^{d_1}y_{3}^{d_2}+x_{1}^{d_1}y_{2}^{d_2}+x_{2}^{d_1}y_{1}^{d_2},
\quad g_{4}=x_{1}^{d_1}y_{3}^{d_2}+x_{2}^{d_1}y_{2}^{d_2}, \quad g_{5}=x_{2}^{d_1}y_{3}^{d_2}.
\]
\end{example}
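The pattern of the example is easy to generate mechanically: $g_t$ is the sum over all valid index pairs $(i,j)$ with $i+j=t$. A small illustrative sketch (the rendered strings are for display only, not \texttt{Macaulay2} input):

```python
def g_terms(t, n):
    """Index pairs (i, j) of the terms x_i^{d1} y_j^{d2} appearing in g_t."""
    n1, n2 = n
    return [(i, t - i) for i in range(n1 + 1) if 0 <= t - i <= n2]

def g_string(t, n, d):
    """Render g_t as a human-readable sum of monomials."""
    d1, d2 = d
    return " + ".join(f"x_{i}^{d1} y_{j}^{d2}" for i, j in g_terms(t, n))

# the quadrimer-like case n = (2, 3) from the example above
for t in range(6):
    print(f"g_{t} =", g_string(t, (2, 3), (5, 7)))
```

The term counts $1,2,3,3,2,1$ for $\mathbf n=(2,3)$ reflect the number of lattice points on each antidiagonal of the $(n_1+1)\times(n_2+1)$ index box.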
\begin{defn}
Throughout the paper we let $\mathbb{R}(\mathbf n,\mathbf d)=\langle g_0,g_1,\ldots,g_{|\mathbf n|}\rangle$. Note that $\mathbb{R}(\mathbf n,\mathbf d)$ depends on both $\mathbf n=(n_1,n_2)$ and $\mathbf d=(d_1,d_2)$, however, for notational hygiene we often suppress this and simply write $\mathbb{R}$ or $\mathbb{R}(\mathbf d)$ for $\mathbb{R}(\mathbf n,\mathbf d)$ when we feel it will not cause confusion. We denote the quotient $S/\mathbb{R}$ by $\overline{S}$.
\end{defn}
An extremely important aspect of these particular forms is that they behave nicely when quotienting by $x_{n_1}$ or $y_{n_2}$. For example, if $\mathbf n=(2,3)$ then the image of $g_{2}=x_0^{d_1}y_2^{d_2}+x_1^{d_1}y_1^{d_2}+x_{2}^{d_1}y_{0}^{d_2}$ in $S/\langle x_{2}\rangle$ is $x_0^{d_1}y_2^{d_2}+x_1^{d_1}y_1^{d_2}$, which is the same as $g_{2}$ when $\mathbf n=(1,3)$. This makes them amenable to inductive arguments on $n_1$ or $n_2$. We make significant use of this fact throughout, and so record it in the following lemma.
\begin{lemma}\label{lem:induction-ideal}
Fix $\mathbf n=(n_1,n_2)\in \mathbb{Z}_{\geq1}^2$ and $\mathbf d=(d_1,d_2)\in\mathbb{Z}^2_{\geq1}$. Let $S'=\mathbb{K}[x_0,x_1,\ldots, x_{n_1-i},y_{0},y_1,\ldots,y_{n_2-j}]$ considered with the bi-grading given by $\deg x_s=(1,0)\in \mathbb{Z}^2$ and $\deg y_t=(0,1)\in \mathbb{Z}^2$. Considering $\mathbb{R}(n_1-i,n_2-j,\mathbf d)\subset S'$ there exists a natural isomorphism
\begin{center}
\begin{tikzcd}[column sep = 5em]
\frac{\displaystyle S'}{\displaystyle\mathbb{R}(n_1-i,n_2-j,\mathbf d)} \arrow[r, leftrightarrow, "\sim"] & \frac{\displaystyle S}{\displaystyle\mathbb{R}(\mathbf n,\mathbf d)+\langle x_{n_1-i+1},x_{n_1-i+2},\ldots,x_{n_1},y_{n_2-j+1},y_{n_2-j+2},\ldots,y_{n_2}\rangle}.
\end{tikzcd}
\end{center}
\end{lemma}
\begin{proof}
By induction it is enough to consider the case when $i=1$ and $j=0$. There is a natural isomorphism
\begin{center}
\begin{tikzcd}[column sep = 5em]
\frac{\displaystyle \frac{S}{\langle x_{n_1}\rangle}}{\displaystyle \frac{\mathbb{R}(\mathbf n,\mathbf d)+\langle x_{n_1}\rangle}{\langle x_{n_1}\rangle}}\arrow[r, leftrightarrow, "\sim"] & \frac{\displaystyle S}{\displaystyle\mathbb{R}(\mathbf n,\mathbf d)+\langle x_{n_1}\rangle},
\end{tikzcd}
\end{center}
and since $S/\langle x_{n_1}\rangle\cong S'$ it is enough to show that $\frac{\mathbb{R}(\mathbf n,\mathbf d)+\langle x_{n_1}\rangle}{\langle x_{n_1}\rangle}$ is isomorphic to $\mathbb{R}(n_1-1,n_2,\mathbf d)$. A straightforward argument shows that $\frac{\mathbb{R}(\mathbf n,\mathbf d)+\langle x_{n_1}\rangle}{\langle x_{n_1}\rangle}$ is isomorphic to the ideal $\langle \overline{g}_{0},\overline{g}_{1},\ldots,\overline{g}_{|\mathbf n|}\rangle$ where $\overline{g}_{t}$ is the image of $g_{t}$ in $S/\langle x_{n_1}\rangle$. However, one sees that
\[
\overline{g}_{t}=\sum_{\substack{a+b=t\\ 0\leq a \leq n_1-1 \\ 0 \leq b \leq n_2}} x_{a}^{d_1}y_{b}^{d_2},
\]
and so, considered as an ideal of $S'$, $\langle \overline{g}_{0},\overline{g}_{1},\ldots,\overline{g}_{|\mathbf n|}\rangle$ is isomorphic to $\mathbb{R}(n_1-1,n_2,\mathbf d)$.
\end{proof}
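On the level of supports, the case $i=1$, $j=0$ of Lemma~\ref{lem:induction-ideal} says that deleting from each $g_t$ the terms divisible by $x_{n_1}$ yields exactly the $g_t$ for $(n_1-1,n_2)$. A quick illustrative check of this combinatorial fact:

```python
def g_support(t, n):
    """Index pairs (i, j) of the monomials x_i^{d1} y_j^{d2} in g_t."""
    n1, n2 = n
    return {(i, t - i) for i in range(n1 + 1) if 0 <= t - i <= n2}

def quotient_by_last_x(t, n):
    """Image of g_t in S / <x_{n1}>: drop the terms involving x_{n1}."""
    n1, _ = n
    return {(i, j) for (i, j) in g_support(t, n) if i != n1}

n = (2, 3)
checks = [quotient_by_last_x(t, n) == g_support(t, (1, 3))
          for t in range(sum(n) + 1)]
print(all(checks))  # True
```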
As noted in the previous section, there is a natural isomorphism between $R_{1}$ and $S_{\mathbf d}$, and we write $\ell_{t}$ for the image of $g_{t}$ in $R_{1}$ under this isomorphism. Notice that while $g_{t}\in S$ has bi-degree $\mathbf d$, the corresponding element $\ell_{t}\in R$ is of degree one. We then let $\L(\mathbf n,\mathbf d)$ be the ideal $\langle \ell_{0},\ell_{1},\ldots,\ell_{|\mathbf n|}\rangle \subset R$. As with $\mathbb{R}(\mathbf n,\mathbf d)$ we will often write $\L$ or $\L(\mathbf d)$ for $\L(\mathbf n,\mathbf d)$ when $\mathbf n$ and $\mathbf d$ are clear from context. We write $\overline{R}$ for the quotient $R/\L$, and $\overline{S}(\mathbf b;\mathbf d)$ for $S(\mathbf b;\mathbf d)/\L S(\mathbf b;\mathbf d)$, which we consider as an $\overline{R}$-module. The natural isomorphisms discussed in the previous section remain isomorphisms after quotienting by $\mathbb{R}$ and $\L$:
\begin{center}
\begin{tikzcd}[column sep = 3.5em]
\overline{R}_{1}\arrow[r, leftrightarrow, "\sim"] & \overline{S}(\mathbf 0;\mathbf d)_{1} \arrow[r, leftrightarrow, "\sim"] & \overline{S}_{\mathbf d}.
\end{tikzcd}
\end{center}
As indicated at the start of the section, these $\ell_{t}$'s form a regular sequence on $S(\mathbf b;\mathbf d)$ as long as $S(\mathbf b;\mathbf d)$ is Cohen-Macaulay as an $R$-module. The case when $\mathbf d=\mathbf 1$ and $\mathbf b=\mathbf 0$ was shown by Eisenbud and Schreyer in their work on Boij-S\"{o}derberg theory \cite[Proposition~5.2]{eisenbudSchreyer09}. The following proposition generalizes their argument to the case of all $\mathbf d\in \mathbb{Z}^{2}_{\geq1}$.
\begin{prop}\label{prop:regular-sequence}
Fix $\mathbf n=(n_1,n_2)\in \mathbb{Z}_{\geq1}^2$, $\mathbf d=(d_1,d_2)\in\mathbb{Z}^2_{\geq1}$, and $\mathbf b=(b_1,b_2)\in\mathbb{Z}^2$. If
\[
\frac{d_1}{d_2}b_2-b_1<n_1+1\quad \quad \text{and} \quad \quad \frac{d_2}{d_1}b_1-b_2<n_2+1
\]
then the forms $\ell_{0},\ell_{1},\ldots,\ell_{|\mathbf n|}$ are a regular sequence on $S(\mathbf b;\mathbf d)$ as an $R$-module.
\end{prop}
A key ingredient of Proposition~\ref{prop:regular-sequence} is the following characterization of when $S(\mathbf b;\mathbf d)$ is Cohen-Macaulay as an $R$-module. In particular, the inequalities appearing in Proposition~\ref{prop:regular-sequence} are needed as they exactly describe when $S(\mathbf b;\mathbf d)$ is Cohen-Macaulay as an $R$-module. This is a major difference between a product of projective spaces and a single projective space, as in the case of a single projective space the equivalent of $S(\mathbf b;\mathbf d)$ is always Cohen-Macaulay \cite{einErmanLazarsfeld16}.
\begin{prop}\label{prop:cohen-macaulay}
Fix $\mathbf n=(n_1,n_2)\in \mathbb{Z}_{\geq1}^2$, $\mathbf d=(d_1,d_2)\in\mathbb{Z}^2_{\geq1}$, and $\mathbf b=(b_1,b_2)\in\mathbb{Z}^2$. Then $S(\mathbf b;\mathbf d)$ is Cohen-Macaulay as an $R$-module if and only if:
\[
\frac{d_1}{d_2}b_2-b_1<n_1+1\quad \quad \text{and} \quad \quad \frac{d_2}{d_1}b_1-b_2<n_2+1.
\]
\end{prop}
Note that $S(\mathbf b;\mathbf d)$ is Cohen-Macaulay for all $\mathbf d$ if $\mathbf b=\mathbf 0$. In particular, since a product of projective spaces is a smooth toric variety, the case when $\mathbf b=\mathbf 0$ follows from a far more general result of Hochster \cite{hochster72} (see also \cite[Theorem 9.2.9]{coxLittleSchenck11}).
\begin{proof}[Proof of Proposition~\ref{prop:cohen-macaulay}]
If we write $H_{R_+}^i(S(\mathbf b;\mathbf d))$ for the $i$-th local cohomology module of $S(\mathbf b;\mathbf d)$, then $S(\mathbf b;\mathbf d)$ is Cohen-Macaulay if and only if $\dim S(\mathbf b;\mathbf d)$ is equal to $\inf\{i\in \mathbb{N} \; : \;H^i_{R_+}(S(\mathbf b;\mathbf d))\neq0\}$ \cite[Theorem~9.1]{twentyFourHours}. Moreover, since $S(\mathbf b;\mathbf d)$ is isomorphic to the section ring of $(\iota_{\mathbf d})_* \O_{\P^{\mathbf n}}(\mathbf b)$ where $\iota_{\mathbf d}:\P^{\mathbf n}\rightarrow{}\P^{r_{\mathbf n,\mathbf d}}$ is the $\mathbf d$'uple Segre-Veronese map induced by $\O_{\P^{\mathbf n}}(\mathbf d)$,
\[
\inf\left\{i\in \mathbb{Z}_{>1} \; \big| \;H^i_{R_+}(S(\mathbf b;\mathbf d))\neq0\right\}=\inf\left\{i\in\mathbb{Z}_{>1} \; \bigg| \; \begin{matrix}
H^{i-1}\left(\P^{r_{\mathbf n,\mathbf d}}, (\iota_{\mathbf d})_* \O_{\P^{\mathbf n}}(\mathbf b)(k)\right)\neq0\\
\text{for some $k\in\mathbb{Z}$}
\end{matrix}
\right\},
\]
and so $S(\mathbf b;\mathbf d)$ is Cohen-Macaulay if and only if $(\iota_{\mathbf d})_* \O_{\P^{\mathbf n}}(\mathbf b)$ has no intermediate cohomology \cite[Theorem~13.21]{twentyFourHours}. Since $H^{i-1}\left(\P^{r_{\mathbf n,\mathbf d}}, (\iota_{\mathbf d})_* \O_{\P^{\mathbf n}}(\mathbf b)(k)\right)$ is isomorphic to $H^{i-1}\left(\P^{\mathbf n}, \O_{\P^{\mathbf n}}(\mathbf b+k\mathbf d)\right)$ by the K\"{u}nneth formula \cite[\href{https://stacks.math.columbia.edu/tag/0BEC}{Tag 0BEC}]{stacks-project} we further reduce to a cohomology computation on $\P^{n_1}$ and $\P^{n_2}$. From this we see that there is no intermediate cohomology if for every $k\in \mathbb{Z}$:
\begin{align}
b_1+kd_1>-(n_1+1) \quad \quad \text{or} \quad \quad b_2+kd_2<0 \label{eq:line1} \\
\intertext{and}
b_2+kd_2>-(n_2+1) \quad \quad \text{or} \quad \quad b_1+kd_1<0. \label{eq:line2}
\end{align}
Now note the first inequality in Equation~\eqref{eq:line1} is true for every $k>-(n_1+1+b_1)/d_1$ while the second is true for every $k<-b_2/d_2$. Thus, Equation~\eqref{eq:line1} is true for all $k$ if and only if $(n_1+1+b_1)/d_1<-b_2/d_2$. Rearranging this inequality gives the first hypothesis in the proposition statement. A similar analysis for Equation~\eqref{eq:line2} produces the second hypothesis.
\end{proof}
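The criterion of Proposition~\ref{prop:cohen-macaulay} is likewise mechanical to evaluate; the following sketch (an illustration only) uses exact rational arithmetic to avoid floating-point issues with the slopes $d_1/d_2$ and $d_2/d_1$:

```python
from fractions import Fraction

def is_cohen_macaulay(n, d, b):
    """The criterion of prop:cohen-macaulay for S(b; d) as an R-module."""
    (n1, n2), (d1, d2), (b1, b2) = n, d, b
    left = Fraction(d1, d2) * b2 - b1 < n1 + 1
    right = Fraction(d2, d1) * b1 - b2 < n2 + 1
    return left and right

print(is_cohen_macaulay((1, 1), (1, 1), (0, 0)))  # True: b = 0 is always CM
print(is_cohen_macaulay((1, 1), (1, 3), (3, 0)))  # False: second inequality fails
```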
\begin{proof}[Proof of Proposition~\ref{prop:regular-sequence}]
By Proposition~\ref{prop:cohen-macaulay} $S(\mathbf b;\mathbf d)$ is Cohen-Macaulay as an $R$-module, and so showing that $\ell_{0},\ell_{1},\ldots,\ell_{|\mathbf n|}$ is a regular sequence on $S(\mathbf b;\mathbf d)$ is equivalent to showing that $\ell_{0},\ell_{1},\ldots,\ell_{|\mathbf n|}$ is part of a system of parameters, equivalently, that $\dim \overline{S}(\mathbf b;\mathbf d) =\dim S(\mathbf b;\mathbf d)-(|\mathbf n|+1)$ \cite[Theorem~2.12]{burnsHerzog93}. Being a system of parameters is a set-theoretic condition on the support of $\overline{S}(\mathbf b;\mathbf d)$, and since
\[
\operatorname{supp}_{\overline{R}}\overline{S}(\mathbf b;\mathbf d)=\operatorname{supp}_{R} S(\mathbf b;\mathbf d)\otimes_R \overline{R}=\operatorname{supp}_{R} S(\mathbf b;\mathbf d)\otimes_{R} \overline{S}(\mathbf 0;\mathbf d)\subset \operatorname{supp}_{R} S(\mathbf b;\mathbf d)\cap \operatorname{supp}_{R} \overline{S}(\mathbf 0;\mathbf d)
\] we may reduce to the case when $\mathbf b=\mathbf 0$. Now let $\mathcal{I}(\mathbf d)$ be the ideal sheaf generated by $\ell_{0},\ell_{1},\ldots,\ell_{|\mathbf n|}$. Considering the map:
\begin{center}
\begin{tikzcd}[row sep = .75 em, column sep = 3em]
\P^{\mathbf n}\rar{\psi}&\P^{\mathbf n} \\
x_{i,j}\rar[mapsto]&x_{i,j}^{d_i}
\end{tikzcd}
\end{center}
one sees that $\psi^{*}\mathcal{I}(\mathbf d)=\mathcal{I}(\mathbf 1)$. Therefore, since $H^0(\mathcal{I}(\mathbf d))=\L(\mathbf n,\mathbf d)$ we see that we may further reduce to the case when $\mathbf d=\mathbf 1$. This case was proven in \cite[Proposition~5.2]{eisenbudSchreyer09}.
\end{proof}
Since $\L$ is generated by a linear regular sequence on $S(\mathbf b;\mathbf d)$, quotienting by $\L$ does not change the cohomology of the Koszul complex of \eqref{eq:kozul-S}.
\begin{notation}
Fix $\mathbf n=(n_1,n_2)\in \mathbb{Z}_{\geq1}^2$, $\mathbf d=(d_1,d_2)\in\mathbb{Z}^2_{\geq1}$, and $\mathbf b=(b_1,b_2)\in\mathbb{Z}^2$. We let $K^{\overline{R}}_{p,q}(\overline{S}(\mathbf b;\mathbf d))$ denote the cohomology of the following chain complex
\begin{equation}
\begin{tikzcd}[column sep = 3em]
\cdots \rar{}& \bigwedge\nolimits^{p+1}\overline{S}_{\mathbf d}\otimes \overline{S}_{(q-1)\mathbf d+\mathbf b}\rar{\overline{\partial}_{p+1}}&\bigwedge\nolimits^{p}\overline{S}_{\mathbf d}\otimes \overline{S}_{q\mathbf d+\mathbf b}\rar{\overline{\partial}_p}&\bigwedge\nolimits^{p-1}\overline{S}_{\mathbf d}\otimes \overline{S}_{(q+1)\mathbf d+\mathbf b}\rar{}&\cdots
\end{tikzcd}.
\end{equation}
\end{notation}
\begin{cor}\label{cor:artinian-reduction}
Fix $\mathbf n=(n_1,n_2)\in \mathbb{Z}_{\geq1}^2$, $\mathbf d=(d_1,d_2)\in\mathbb{Z}^2_{\geq1}$, and $\mathbf b=(b_1,b_2)\in\mathbb{Z}^2$. If
\[
\frac{d_1}{d_2}b_2-b_1<n_1+1\quad \quad \text{and} \quad \quad \frac{d_2}{d_1}b_1-b_2<n_2+1
\]
then for all $p,q\in \mathbb{Z}_{\geq0}$ there exists a natural isomorphism
\begin{equation*}
\begin{tikzcd}[column sep = 5em]
K_{p,q}(\mathbf n,\mathbf b;\mathbf d) \rar[leftrightarrow]{\sim} & K^{\overline{R}}_{p,q}(\overline{S}(\mathbf b;\mathbf d))
\end{tikzcd}
\end{equation*}
\end{cor}
\begin{proof}
Combine Proposition~\ref{prop:regular-sequence} with Theorem~2.20 from \cite{aprodu10}.
\end{proof}
\section{Ideal Membership for $\mathbb{R}$}\label{sec:ideal-membership}
In this section, we turn our attention to describing when certain monomials are contained in the ideal $\mathbb{R}$ introduced in Section~\ref{sec:reg-seq}. This highlights a significant challenge when generalizing the work of Ein, Erman, and Lazarsfeld from the case of a single projective space to a product of projective spaces. Namely, since there are no monomial regular sequences of length $|\mathbf n|+1$ on $\P^{\mathbf n}$, we must work with a regular sequence for which the ideal membership question is more difficult. For example, describing when a given element of $S$ is contained in $\mathbb{R}=\langle g_0,g_1,\ldots,g_{|\mathbf n|}\rangle$ is more complicated than determining when an element is in $\langle x_0^{d},x_{1}^{d},\ldots,x_{n}^{d}\rangle\subset \mathbb{K}[x_0,x_1,\ldots,x_n]$. This section is dedicated to studying the ideal membership question for $\mathbb{R}$.
Our approach to ideal membership for $\mathbb{R}$ is to make use of the fact that $\mathbb{R}$ is homogeneous with respect to a number of interesting gradings. For example, in Section~\ref{sec:mod-deg} we introduce the notion of the modular degree of an element of $S$. This induces a $\left(\mathbb{Z}/\langle d_1\rangle\right)^{\oplus n_1+1}\oplus\left(\mathbb{Z}/\langle d_2\rangle\right)^{\oplus n_2+1}$-grading on $S$ with respect to which $\mathbb{R}$ is homogeneous. Using this grading we show that the ideal membership question for $\mathbb{R}(\mathbf n,\mathbf d)$ can be reduced to the ideal membership question for $\mathbb{R}(\mathbf n,\mathbf 1)$.
Having reduced the question of ideal membership to the case when $\mathbf d=\mathbf 1$, we then introduce the notion of the index weighted degree, which induces a non-standard $\mathbb{Z}$-grading on $S$. The index weighted grading allows us to discuss the $\mathbb{K}$-vector space $S_{\aa,k}$ spanned by monomials of bi-degree $\aa$ and index weighted degree $k$. Using this refinement together with a series of spectral sequence arguments we gain insight into the ideal membership question for $\mathbb{R}(\mathbf n,\mathbf 1)$. For example, we prove the following:
\begin{theorem}\label{thm:tri-deg-vanishing}
Fix $\mathbf n=(n_1,n_2)\in \mathbb{Z}^2_{\geq1}$, $\aa=(a_1,a_2)\in \mathbb{Z}^2_{\geq0}$, and $k\in\mathbb{Z}_{\geq0}$. If $\aa$ and $k$ satisfy one of the following inequalities:
\begin{enumerate}
\item $a_1\geq1$ and $a_2\geq n_1+1$,
\item $a_2\geq1$ and $a_1\geq n_{2}+1$, or
\item $0\leq k\leq a_1a_2-1$,
\end{enumerate}
then $S_{\aa,k}=\mathbb{R}(\mathbf 1)_{\aa,k}$. Moreover, if $k=a_1a_2$ then $\dim S_{\aa,k}=\dim \mathbb{R}(\mathbf 1)_{\aa,k}+1$.
\end{theorem}
Combining these arguments gives us a detailed understanding of what monomials are in $\mathbb{R}(\mathbf d)$. This, in turn, allows us to understand the ideal quotient $(\mathbb{R}:_{S}f)$ for particular polynomials $f\in S$, and this provides the range of non-vanishing appearing in Theorem~\ref{thm:main}.
\subsection{The Modular Degree on $S$}\label{sec:mod-deg}
Throughout this section, given $\mathbf v=(v_1,v_2,\ldots,v_n)\in \mathbb{Z}^{n}$ and $d\in \mathbb{Z}$ we write $\mathbf v\bmod d$ to mean $(v_1\bmod d,v_2\bmod d,\ldots,v_n\bmod d)\in \left(\mathbb{Z}/\langle d\rangle \right)^{n}$. Further, we use the following multi-index notation for monomials in $S$ and $\overline{S}$.
\begin{notation}
Given $\mathbf v=(v_0,v_1,\ldots,v_{n_1})\in \mathbb{Z}^{n_1+1}_{\geq 0}$ and $\mathbf w=(w_0,w_1,\ldots,w_{n_2})\in \mathbb{Z}^{n_2+1}_{\geq 0}$ write $\mathbf x^{\mathbf v}\mathbf y^{\mathbf w}$ for the monomial:
\[
\mathbf x^{\mathbf v}\mathbf y^{\mathbf w}=\prod_{i=0}^{n_1}x_i^{v_i}\prod_{j=0}^{n_2}y_j^{w_j}\in S.
\]
\end{notation}
With this notation in hand, we define the modular degree of a monomial in $S$ as follows.
\begin{defn}
Fix $\mathbf n=(n_1,n_2)\in \mathbb{Z}^2_{\geq1}$ and $\mathbf d=(d_1,d_2)\in \mathbb{Z}^2_{\geq1}$. Given a monomial $\mathbf x^{\mathbf v}\mathbf y^{\mathbf w}\in S$ we define its modular degree to be:
\[
\operatorname{mod.deg}\left(\mathbf x^{\mathbf v}\mathbf y^{\mathbf w}\right)=(\mathbf v\bmod d_1,\mathbf w\bmod d_2)\in \left(\mathbb{Z}/\langle d_1\rangle\right)^{\oplus n_1+1}\oplus\left(\mathbb{Z}/\langle d_2\rangle\right)^{\oplus n_2+1}.
\]
\end{defn}
It is immediate from the definition that the modular degree induces a $\left(\mathbb{Z}/\langle d_1\rangle\right)^{\oplus n_1+1}\oplus\left(\mathbb{Z}/\langle d_2\rangle\right)^{\oplus n_2+1}$-grading on $S$ as follows
\[
S\cong \bigoplus_{\alpha \in \left(\mathbb{Z}/\langle d_1\rangle\right)^{\oplus n_1+1}\oplus\left(\mathbb{Z}/\langle d_2\rangle\right)^{\oplus n_2+1}} S_{\alpha}
\]
where $S_{\alpha}$ is the $\mathbb{K}$-vector space spanned by monomials $m\in S$ such that $\operatorname{mod.deg}(m)=\alpha$. We call this the modular grading; as we now show, $\mathbb{R}$ is homogeneous with respect to it.
\begin{lemma}\label{lem:mod-deg-homg}
Fix $\mathbf n=(n_1,n_2)\in \mathbb{Z}^2_{\geq1}$ and $\mathbf d=(d_1,d_2)\in \mathbb{Z}^2_{\geq1}$. The modular degree gives $S$ a $\left(\mathbb{Z}/\langle d_1\rangle\right)^{\oplus n_1+1}\oplus\left(\mathbb{Z}/\langle d_2\rangle\right)^{\oplus n_2+1}$-grading. Moreover, the ideal $\mathbb{R}$ is homogeneous with respect to this grading.
\end{lemma}
\begin{proof}
To show that the modular degree induces a grading on $S$ it is enough to show that if $m,m'\in S$ are monomials then $\operatorname{mod.deg}(m\cdot m')$ is equal to $\operatorname{mod.deg}(m)+\operatorname{mod.deg}(m')$. This follows from the fact that $\mathbb{Z}/\langle d_i\rangle$ is an abelian group.
Shifting to showing that $\mathbb{R}=\langle g_{0},g_{1},\ldots,g_{|\mathbf n|}\rangle$ is homogeneous with respect to this grading, it is enough to show that each of the generators is homogeneous. Towards this recall that
\[
g_{t}=\sum_{\substack{i+j=t\\0\leq i \leq n_1\\ 0 \leq j \leq n_2}}x_{i}^{d_1}y_j^{d_2},
\]
and so each term in $g_{t}$ has modular degree $\mathbf 0$ meaning $g_{t}$ is homogeneous with respect to this grading.
\end{proof}
The key property of the modular grading on $S$ is that, thinking of $S^{[\mathbf d]}\coloneqq \mathbb{K}[x_0^{d_1},x_1^{d_1},\ldots,x_{n_{1}}^{d_{1}},y_{0}^{d_{2}},y_{1}^{d_{2}},\ldots,y_{n_2}^{d_{2}}]$ as a sub-ring of $S$, the component $S_{\alpha}$ is a free rank one $S^{[\mathbf d]}$-module for every $\alpha \in \left(\mathbb{Z}/\langle d_1\rangle\right)^{\oplus n_1+1}\oplus\left(\mathbb{Z}/\langle d_2\rangle\right)^{\oplus n_2+1}$. Given a monomial $\mathbf x^{\mathbf v}\mathbf y^{\mathbf w}\in S_{\alpha}$, by the division algorithm we may write $v_i = q_{i}d_1+r_{i}$ and $w_i = q'_{i}d_2+r'_{i}$ where $0\leq r_i < d_1$ and $0\leq r'_i < d_2$. This allows us to write $\mathbf x^{\mathbf v}\mathbf y^{\mathbf w}$ as
\[
\mathbf x^{\mathbf v}\mathbf y^{\mathbf w} = \underbrace{\left(\prod_{i=0}^{n_1}x_i^{r_i}\prod_{i=0}^{n_2}y_i^{r'_i}\right)}_{\text{I}} \underbrace{\left(\prod_{i=0}^{n_1}x_i^{q_{i}d_1}\prod_{i=0}^{n_2}y_{i}^{q'_{i}d_2}\right)}_{\text{II}},
\]
where II is a monomial in $S^{[\mathbf d]}$ and I is a monomial determined entirely by the modular degree of $\mathbf x^{\mathbf v}\mathbf y^{\mathbf w}$.
\begin{defn}
Given a monomial $\mathbf x^{\mathbf v}\mathbf y^{\mathbf w}\in S$ we define $\operatorname{remd}(\mathbf x^{\mathbf v}\mathbf y^{\mathbf w})$ to be the monomial
\[
\operatorname{remd}\left(\mathbf x^{\mathbf v}\mathbf y^{\mathbf w}\right)=\left(\prod_{i=0}^{n_1}x_i^{r_i}\prod_{i=0}^{n_2}y_i^{r'_i}\right)\in S,
\]
where by the division algorithm $v_i = q_{i}d_1+r_{i}$ and $w_i = q'_{i}d_2+r'_{i}$ with $0\leq r_i < d_1$ and $0\leq r'_i < d_2$.
\end{defn}
\begin{example}
Let $\mathbf n=(1,2)$ so that $S=\mathbb{K}[x_0,x_1,y_0,y_1,y_2]$ and set $\mathbf d=(3,5)$. If $f=x_{0}^{5}x_1^{3}y_{0}^{6}y_{1}^{11}y_{2}^{8}$ then the modular degree of $f$ is $((2,0),(1,1,3))$ and $\operatorname{remd}(f)=x_0^2y_0y_1y_2^3$. Any element of modular degree $((2,0),(1,1,3))$ can be written as $x_0^2y_0y_1y_2^3$ times an element of $S^{[\mathbf d]}$. For example, $g=x_0^{11}y_0y_1^{6}y_2^{103}$ also has modular degree $((2,0),(1,1,3))$, and we may write it as $\operatorname{remd}(f)\cdot x_0^{3\cdot 3}y_1^{5}y_2^{20\cdot 5}$.
\end{example}
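As a sanity check on such computations, the modular degree and $\operatorname{remd}$ can be computed directly on exponent vectors. The following sketch (our own illustrative Python, not notation from the paper) verifies the claims made for the monomial $g=x_0^{11}y_0y_1^{6}y_2^{103}$ above.

```python
# Sketch of the modular degree and remd computations for n = (1, 2), d = (3, 5).
# Monomials are stored as a pair of exponent vectors (v for the x's, w for the y's);
# the helper names (mod_deg, remd) are ours, not the paper's notation.

d1, d2 = 3, 5

def mod_deg(v, w):
    # (v mod d1, w mod d2): the modular degree of x^v y^w
    return tuple(e % d1 for e in v), tuple(e % d2 for e in w)

def remd(v, w):
    # remd is the monomial whose exponent vector is the canonical
    # representative of the modular degree, so as exponent data it agrees
    # with mod_deg
    return tuple(e % d1 for e in v), tuple(e % d2 for e in w)

# g = x0^11 * y0 * y1^6 * y2^103 from the example
v, w = (11, 0), (1, 6, 103)
assert mod_deg(v, w) == ((2, 0), (1, 1, 3))

# remd(g) = x0^2 * y0 * y1 * y2^3, and g / remd(g) has every x-exponent
# divisible by d1 and every y-exponent divisible by d2, so it lies in S^[d]
rv, rw = remd(v, w)
assert (rv, rw) == ((2, 0), (1, 1, 3))
assert all((a - b) % d1 == 0 for a, b in zip(v, rv))
assert all((a - b) % d2 == 0 for a, b in zip(w, rw))
```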
\begin{lemma}
Fix $\mathbf n=(n_1,n_2)\in \mathbb{Z}^2_{\geq1}$ and $\mathbf d=(d_1,d_2)\in \mathbb{Z}^2_{\geq1}$. For any $\alpha\in \left(\mathbb{Z}/\langle d_1\rangle\right)^{\oplus n_1+1}\oplus\left(\mathbb{Z}/\langle d_2\rangle\right)^{\oplus n_2+1}$ the vector space $S_{\alpha}$ is a free rank one $S^{[\mathbf d]}$-module, which is generated by $\operatorname{remd}(f)$ for any $f\in S_{\alpha}$.
\end{lemma}
\begin{proof}
First let us check that $S_{\alpha}$ has the structure of an $S^{[\mathbf d]}$-module. This amounts to showing that $S_{\alpha}$ is closed under multiplication by elements in $S^{[\mathbf d]}$. Since $S^{[\mathbf d]}$ is generated by the monomials $x_{i}^{d_1}$ and $y_j^{d_2}$, we further reduce to showing $S_{\alpha}$ is closed under multiplication by $x_{i}^{d_1}$ and $y_j^{d_2}$ for $i=0,1,\ldots,n_1$ and $j=0,1,\ldots,n_2$. This follows immediately from the definition of the modular degree.
Turning our attention to showing that $S_{\alpha}$ is free of rank one, fix $\alpha\in \left(\mathbb{Z}/\langle d_1\rangle\right)^{\oplus n_1+1}\oplus\left(\mathbb{Z}/\langle d_2\rangle\right)^{\oplus n_2+1}$ and let $\mathbf x^{\mathbf v}\mathbf y^{\mathbf w}\in S_{\alpha}$ be a monomial. By the division algorithm we may write $v_i = q_{i}d_1+r_{i}$ and $w_i = q'_{i}d_2+r'_{i}$ where $0\leq r_i < d_1$ and $0\leq r'_i < d_2$. One readily checks that
\[
\mathbf x^{\mathbf v}\mathbf y^{\mathbf w} = \left(\prod_{i=0}^{n_1}x_i^{r_i}\prod_{i=0}^{n_2}y_i^{r'_i}\right) \left(\prod_{i=0}^{n_1}x_i^{q_{i}d_1}\prod_{i=0}^{n_2}y_{i}^{q'_{i}d_2}\right)=\operatorname{remd}\left(\mathbf x^{\mathbf v}\mathbf y^{\mathbf w}\right)\left(\prod_{i=0}^{n_1}x_i^{q_{i}d_1}\prod_{i=0}^{n_2}y_{i}^{q'_{i}d_2}\right).
\]
Moreover, since $r_{i}$ and $r'_{i}$ determine the modular degree of $\mathbf x^{\mathbf v}\mathbf y^{\mathbf w}$ we see that every monomial in $S_{\alpha}$ is of the form $m\cdot\operatorname{remd}(\mathbf x^{\mathbf v}\mathbf y^{\mathbf w})$ for a unique $m\in S^{[\mathbf d]}$.
\end{proof}
We now state a few basic properties regarding $\operatorname{remd}(f)$ that follow immediately from the previous lemma.
\begin{lemma}\label{lem:round-down-properties}
Fix $\mathbf n=(n_1,n_2)\in \mathbb{Z}^2_{\geq1}$ and $\mathbf d=(d_1,d_2)\in \mathbb{Z}^2_{\geq1}$. If $f,g\in S$ are homogeneous with respect to the modular grading then
\begin{enumerate}
\item $f$ is divisible by $\operatorname{remd}(f)$,
\item $f/\operatorname{remd}(f) \in S^{[\mathbf d]}$, and
\item $\operatorname{mod.deg} f = \operatorname{mod.deg} g$ if and only if $\operatorname{remd}(f) = \operatorname{remd}(g)$.
\end{enumerate}
\end{lemma}
Finally, the following proposition shows how the modular grading can be used to reduce the ideal membership question for $\mathbb{R}(\mathbf d)$ to the ideal membership question for $\mathbb{R}(\mathbf 1)$. Before stating it, however, we fix the following notation: given a monomial $m\in S^{[\mathbf d]}$, we let $m^{1/\mathbf d}$ be the monomial that is the image of $m$ under the isomorphism:
\[
\begin{tikzcd}
S^{[\mathbf d]}\arrow[rr,"^{1/\mathbf d}"]& &S & & x_i^{d_1}\rar[mapsto] &x_i & y_i^{d_2}\rar[mapsto] & y_i
\end{tikzcd}.
\]
\begin{prop}\label{prop:containment}
Fix $\mathbf n=(n_1,n_2)\in \mathbb{Z}^2_{\geq1}$ and $\mathbf d=(d_1,d_2)\in \mathbb{Z}^2_{\geq1}$. Let $f\in S$ be homogeneous with respect to the modular grading. Then $f\in \mathbb{R}(\mathbf d)$ if and only if $\left( f/\operatorname{remd}(f)\right)^{1/\mathbf d} \in \mathbb{R}(\mathbf 1)$.
\end{prop}
\begin{proof}
By definition $f\in \mathbb{R}(\mathbf d)$ if and only if there exists $h_i\in S$ such that:
\[
f=\sum_{i=0}^{|\mathbf n|} h_ig_i.
\]
Now since $f$ is homogeneous with respect to the modular grading, without loss of generality we may assume that the $h_i$ are also homogeneous with respect to the modular grading. Moreover, by Lemma~\ref{lem:mod-deg-homg} the modular degree of $g_{t}$ is equal to $\mathbf 0$, and so $\operatorname{mod.deg} f=\operatorname{mod.deg} h_i$. In particular, by part (3) of Lemma~\ref{lem:round-down-properties} we know that $\operatorname{remd}(f) = \operatorname{remd}(h_i)$. By part (1) of Lemma~\ref{lem:round-down-properties} we know that $f$ is divisible by $\operatorname{remd}(f)$, and so combining these we have that $f\in \mathbb{R}(\mathbf d)$ if and only if:
\[
\frac{f}{\operatorname{remd}(f)}=\sum_{i=0}^{|\mathbf n|} \frac{h_ig_i}{\operatorname{remd}(f)}=\sum_{i=0}^{|\mathbf n|} \frac{h_i}{\operatorname{remd}(h_i)}g_i.
\]
By part (2) of Lemma~\ref{lem:round-down-properties} the above relation is actually a relation in the subring $S^{[\mathbf d]}$. Since under the isomorphism $-^{1/\mathbf d}$ the image of $\mathbb{R}(\mathbf d)$ is $\mathbb{R}(\mathbf 1)$, we see that $f\in \mathbb{R}(\mathbf d)$ if and only if $\left( f/\operatorname{remd}(f) \right)^{1/\mathbf d} \in \mathbb{R}(\mathbf 1)$. \end{proof}
\subsection{The Index Weighted Degree}
If we look at one of the generators of $\mathbb{R}(\mathbf d)$ we see that the lower indices of each term all sum to the same thing. For example, $g_{1}=x_{0}^{d_1}y_1^{d_2}+x_{1}^{d_1}y_0^{d_2}$ and the lower indices of each term sum to one. We exploit this symmetry by introducing a non-standard $\mathbb{Z}$-grading on $S$, called the index weighted grading, with respect to which $\mathbb{R}$ is homogeneous. Using this grading we will prove Theorem~\ref{thm:tri-deg-vanishing}, and state a conjecture describing exactly when $S_{\aa,k}=\mathbb{R}(\mathbf 1)_{\aa,k}$.
\begin{defn}
Fix $\mathbf n=(n_1,n_2)\in \mathbb{Z}^2_{\geq1}$ and $\mathbf d=(d_1,d_2)\in \mathbb{Z}^2_{\geq1}$. The index weighted grading on $S$ is the non-standard $\mathbb{Z}$-grading given by letting $\operatorname{index.deg} x_i=d_2i$ and $\operatorname{index.deg} y_j=d_1j$ for $i=0,1,\ldots,n_1$ and $j=0,1,\ldots,n_2$.
\end{defn}
The important property of the index weighted grading is that $\mathbb{R}$ is homogeneous with respect to it.
\begin{lemma}\label{lem:homogeneous-wrt-idex}
Fix $\mathbf n=(n_1,n_2)\in \mathbb{Z}^2_{\geq1}$ and $\mathbf d=(d_1,d_2)\in \mathbb{Z}^2_{\geq1}$. The ideal $\mathbb{R}$ is homogeneous with respect to the index weighted grading.
\end{lemma}
\begin{proof}
Recall that $\mathbb{R}=\langle g_0,g_{1},\ldots,g_{|\mathbf n|}\rangle$ where for $0\leq t \leq |\mathbf n|$:
\[
g_{t}=\sum_{\substack{i+j=t\\0\leq i \leq n_1\\ 0 \leq j \leq n_2}}x_{i}^{d_1}y_j^{d_2}.
\]
Suppose $x_{i}^{d_1}y_j^{d_2}$ is a term appearing in $g_{t}$ so that $i+j=t$. Now we have that:
\[
\operatorname{index.deg}\left(x_{i}^{d_1}y_j^{d_2}\right)=d_1d_2i+d_1d_2j=d_1d_2(i+j)=d_1d_2t,
\]
and so each term of $g_{t}$ has the same index weighted degree, meaning that $\mathbb{R}$ is homogeneous.\end{proof}
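As a quick check of the homogeneity just proved, the following sketch (our own illustrative Python, with $d_1,d_2$ kept generic) confirms that every term $x_i^{d_1}y_j^{d_2}$ of $g_t$ has index weighted degree $d_1d_2t$.

```python
# Index weighted degree: index.deg(x_i) = d2 * i and index.deg(y_j) = d1 * j.
# We check that all terms x_i^{d1} y_j^{d2} of g_t share the degree d1*d2*t.
# (Variable and function names are ours; this is a sketch, not code from the paper.)

def index_deg_term(i, j, d1, d2):
    # index weighted degree of the monomial x_i^{d1} * y_j^{d2}
    return (d2 * i) * d1 + (d1 * j) * d2  # = d1*d2*(i + j)

n1, n2, d1, d2 = 2, 3, 4, 7
for t in range(n1 + n2 + 1):
    # degrees of all terms x_i^{d1} y_{t-i}^{d2} with both indices in range
    degs = {index_deg_term(i, t - i, d1, d2)
            for i in range(n1 + 1) if 0 <= t - i <= n2}
    assert degs == {d1 * d2 * t}
```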
\begin{defn}
Given $\aa\in \mathbb{Z}^2$ and $k\in \mathbb{Z}$ we write $S_{\aa,k}$ (respectively $\overline{S}_{\aa,k}$ and $\mathbb{R}_{\aa,k}$) for the $\mathbb{K}$-vector space spanned by monomials in $S$ (respectively $\overline{S}$ and $\mathbb{R}$) of bi-degree $\aa$ and index weighted degree $k$.
\end{defn}
The following conjecture describes exactly the $\aa\in\mathbb{Z}^2$ and $k\in \mathbb{Z}$ for which $\mathbb{R}(\mathbf 1)_{\aa,k}$ is equal to $S_{\aa,k}$. Combined with Proposition~\ref{prop:containment} this provides a partial answer for the ideal membership question for $\mathbb{R}$.
\begin{conj}\label{conj:tri-deg-vanishing}
Fix $\mathbf n=(n_1,n_2)\in \mathbb{Z}^2_{\geq1}$ and let $\mathbf d=\mathbf 1$. Given $\aa=(a_1,a_2)\in \mathbb{Z}^2_{\geq0}$ and $k\in\mathbb{Z}_{\geq0}$ we have that $\dim \overline{S}_{\aa,k}=0$ if and only if $\aa$ and $k$ satisfy one of the following inequalities:
\begin{enumerate}
\item $a_1\geq1$ and $a_2\geq n_1+1$,
\item $a_2\geq1$ and $a_1\geq n_{2}+1$,
\item $0\leq k\leq a_1a_2-1$, or
\item $k \geq a_1n_{1}+(n_{2}-a_1)a_2+1$.
\end{enumerate}
Moreover, if $k=a_1a_2$ or $k=a_1n_{1}+(n_{2}-a_1)a_2$ then $\dim \overline{S}_{\aa,k}=1$.
\end{conj}
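The smallest cases of the conjecture can be checked by brute force. The sketch below (our own code, not from the paper) takes $\mathbf n=(1,1)$ and $\mathbf d=\mathbf 1$, computes $\dim \overline{S}_{\aa,k}$ by Gaussian elimination over $\mathbb{Q}$, and confirms the predicted pattern for $\aa=(1,1)$ and $\aa=(2,1)$.

```python
from fractions import Fraction

n1, n2 = 1, 1  # the case P^1 x P^1 with d = (1, 1); a sketch, not the paper's code

def compositions(total, parts):
    # all ways to write `total` as an ordered sum of `parts` nonnegative integers
    if parts == 1:
        return [(total,)]
    return [(i,) + rest for i in range(total + 1)
            for rest in compositions(total - i, parts - 1)]

def monomials(a1, a2):
    # exponent tuples (v0..v_{n1}, w0..w_{n2}) of bi-degree (a1, a2)
    return [v + w for v in compositions(a1, n1 + 1)
                  for w in compositions(a2, n2 + 1)]

def index_deg(m):
    v, w = m[:n1 + 1], m[n1 + 1:]
    return sum(i * v[i] for i in range(n1 + 1)) + sum(j * w[j] for j in range(n2 + 1))

def g(t):
    # g_t = sum_{i+j=t} x_i y_j, stored as a dict {exponent tuple: coefficient}
    poly = {}
    for i in range(n1 + 1):
        j = t - i
        if 0 <= j <= n2:
            e = [0] * (n1 + n2 + 2)
            e[i] += 1
            e[n1 + 1 + j] += 1
            poly[tuple(e)] = poly.get(tuple(e), 0) + 1
    return poly

def rank(rows, width):
    # row rank by exact Gaussian elimination over Q
    rows = [list(map(Fraction, r)) for r in rows]
    r = 0
    for col in range(width):
        piv = next((i for i in range(r, len(rows)) if rows[i][col] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][col] != 0:
                f = rows[i][col] / rows[r][col]
                rows[i] = [x - f * y for x, y in zip(rows[i], rows[r])]
        r += 1
    return r

def dim_Sbar(a1, a2, k):
    # dim (S/R)_{(a1,a2),k} = #basis monomials - rank of the span of the m * g_t
    basis = [m for m in monomials(a1, a2) if index_deg(m) == k]
    idx = {m: i for i, m in enumerate(basis)}
    rows = []
    for t in range(n1 + n2 + 1):
        for m in monomials(a1 - 1, a2 - 1):
            if index_deg(m) + t != k:  # g_t is homogeneous of index degree t
                continue
            row = [0] * len(basis)
            for e, c in g(t).items():
                prod = tuple(x + y for x, y in zip(m, e))
                row[idx[prod]] += c
            rows.append(row)
    return len(basis) - rank(rows, len(basis))

# a = (1,1): dims are 0, 1, 0 for k = 0, 1, 2 (k = 1 is the k = a1*a2 case)
assert [dim_Sbar(1, 1, k) for k in range(3)] == [0, 1, 0]
# a = (2,1): condition (2) (a2 >= 1 and a1 >= n2 + 1) forces vanishing for all k
assert all(dim_Sbar(2, 1, k) == 0 for k in range(4))
```

The two assertions match the conjecture's predictions: for $\aa=(1,1)$ only $k=a_1a_2=1$ survives, with a one-dimensional quotient, while for $\aa=(2,1)$ condition (2) forces total vanishing.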
While we are unable to prove the full conjecture, we do prove a large portion of it. In particular, the remainder of this section is dedicated to proving Theorem~\ref{thm:tri-deg-vanishing}. This shows that conditions (1), (2), and (3) imply $\dim \overline{S}_{\aa,k}=0$, and proves that $\overline{S}_{\aa,a_{1}a_{2}}$ is one dimensional.
First, using a hypercohomology spectral sequence argument we prove part (1) and (2) of Theorem~\ref{thm:tri-deg-vanishing}. This establishes the sufficiency of conditions (1) and (2) in Conjecture~\ref{conj:tri-deg-vanishing}.
\begin{proof}[Proof of Part (1) and (2) of Theorem~\ref{thm:tri-deg-vanishing}]
Let $\mathbf d=\mathbf 1$ and consider the Koszul complex of $\O_{\P^{\mathbf n}}$-modules defined on $g_{0},g_{1},\ldots,g_{|\mathbf n|}$:
\begin{equation*}
\mathbf F_{\bullet}\coloneqq \left[
\begin{tikzcd}
0 & \lar \O_{\P^{\mathbf n}} & \lar \O_{\P^{\mathbf n}}^{\oplus(|\mathbf n|+1)}(-\mathbf 1) & \lar \cdots & & \cdots & \lar \O_{\P^{\mathbf n}}(-(|\mathbf n|+1)\cdot\mathbf 1) & \lar 0
\end{tikzcd}\right].
\end{equation*}
More precisely $\mathbf F_{\bullet}$ is the Koszul complex of $\O_{\P^{\mathbf n}}$-modules where $\mathbf F_{i}=\O_{\P^{\mathbf n}}(-i\cdot\mathbf 1)^{\oplus\binom{|\mathbf n|+1}{i}}$. Notice that this complex is quasi-isomorphic to zero. Given $\aa=(a_1,a_2)\in\mathbb{Z}^2$ we write $\mathbf F(\aa)_{\bullet}$ for the complex $\mathbf F_{\bullet}\otimes \O_{\P^{\mathbf n}}(\aa)$. Consider the hypercohomology spectral sequence associated to the complex $\mathbf F(\aa)_{\bullet}$, and the global sections functor $\Gamma\left(-,\O_{\P^{\mathbf n}}\right)$, which is defined by:
\[
E^{1}_{p,q}=R^{q}\Gamma\left(\mathbf F(\aa)_{p}\right)=H^{q}\left(\P^{\mathbf n},\mathbf F(\aa)_{p}\right).
\]
This spectral sequence abuts to $\mathbb{H}^{p-q}\left(\mathbf F(\aa)_{\bullet}\right)$, which is zero since $\mathbf F_{\bullet}$ is quasi-isomorphic to zero. The $E^1$ page of this spectral sequence looks like:
\begin{tikzpicture}
\matrix (m) [matrix of math nodes,
nodes in empty cells,
nodes={minimum width=5ex, minimum height=6ex,
text depth=1ex,
inner sep=0pt, outer sep=0pt,
anchor=base},
column sep=4ex, row sep=4ex]%
{
{\scriptstyle |\mathbf n|} & \scriptstyle H^{|\mathbf n|}\left(\P^{\mathbf n},\mathbf F(\aa)_{0}\right) & \scriptstyle H^{|\mathbf n|}\left(\P^{\mathbf n},\mathbf F(\aa)_{1}\right) & & \cdots & & \scriptstyle H^{|\mathbf n|}\left(\P^{\mathbf n},\mathbf F(\aa)_{|\mathbf n|}\right) & \scriptstyle H^{|\mathbf n|}\left(\P^{\mathbf n},\mathbf F(\aa)_{|\mathbf n|+1}\right) & \\
\scriptstyle |\mathbf n|-1 & \scriptstyle H^{|\mathbf n|-1}\left(\P^{\mathbf n},\mathbf F(\aa)_{0}\right) & \scriptstyle H^{|\mathbf n|-1}\left(\P^{\mathbf n},\mathbf F(\aa)_{1}\right) & & \cdots & & \scriptstyle H^{|\mathbf n|-1}\left(\P^{\mathbf n},\mathbf F(\aa)_{|\mathbf n|}\right) & \scriptstyle H^{|\mathbf n|-1}\left(\P^{\mathbf n},\mathbf F(\aa)_{|\mathbf n|+1}\right) &\\
[7ex,between origins]
\vdots & \vdots & \vdots & & &\ddots & \vdots & \cdots &\\
[7ex,between origins]
\scriptstyle 1 & \scriptstyle H^1\left(\P^{\mathbf n},\mathbf F(\aa)_{0}\right) & \scriptstyle H^1\left(\P^{\mathbf n},\mathbf F(\aa)_{1}\right) & & \cdots & & \scriptstyle H^1\left(\P^{\mathbf n},\mathbf F(\aa)_{|\mathbf n|}\right) & \scriptstyle H^{1}\left(\P^{\mathbf n},\mathbf F(\aa)_{|\mathbf n|+1}\right) & \\
[5ex,between origins]
\scriptstyle 0 & \scriptstyle H^0\left(\P^{\mathbf n},\mathbf F(\aa)_{0}\right) & \scriptstyle H^0\left(\P^{\mathbf n},\mathbf F(\aa)_{1}\right) & & & & \scriptstyle H^0\left(\P^{\mathbf n},\mathbf F(\aa)_{|\mathbf n|}\right) & \scriptstyle H^0\left(\P^{\mathbf n},\mathbf F(\aa)_{|\mathbf n|+1}\right) & \\
[3ex,between origins]
& \scriptstyle 0 & \scriptstyle 1 & & \cdots & & \scriptstyle {|\mathbf n|} & \scriptstyle {|\mathbf n|+1} &\\
};
\draw[shorten <= .1cm, shorten >= .1cm,-stealth] (m-1-3) -- (m-1-3 -| m-1-2.east);
\draw[shorten <= .1cm, shorten >= .1cm,-stealth] (m-2-3) -- (m-2-3 -| m-2-2.east);
\draw[shorten <= .1cm, shorten >= .1cm,-stealth] (m-4-3) -- (m-4-3 -| m-4-2.east);
\draw[shorten <= .1cm, shorten >= .1cm,-stealth] (m-5-3) -- (m-5-3 -| m-5-2.east);
\draw[shorten <= .1cm, shorten >= .1cm,-stealth] (m-1-7) -- (m-1-7 -| m-1-6.east);
\draw[shorten <= .1cm, shorten >= .1cm,-stealth] (m-2-7) -- (m-2-7 -| m-2-6.east);
\draw[shorten <= .1cm, shorten >= .1cm,-stealth] (m-4-7) -- (m-4-7 -| m-4-6.east);
\draw[shorten <= .1cm, shorten >= .1cm,-stealth] (m-5-7) -- (m-5-7 -| m-5-6.east);
\draw[shorten <= .1cm, shorten >= .1cm,-stealth] (m-1-8) -- (m-1-8 -| m-1-7.east);
\draw[shorten <= .1cm, shorten >= .1cm,-stealth] (m-2-8) -- (m-2-8 -| m-2-7.east);
\draw[shorten <= .1cm, shorten >= .1cm,-stealth] (m-4-8) -- (m-4-8 -| m-4-7.east);
\draw[shorten <= .1cm, shorten >= .1cm,-stealth] (m-5-8) -- (m-5-8 -| m-5-7.east);
\draw[thick] (m-1-1.north east) -- (m-6-1.east) ;
\draw[thick] (m-6-1.north) -- (m-6-9.north);
\end{tikzpicture}
By the K\"{u}nneth formula \cite[\href{https://stacks.math.columbia.edu/tag/0BEC}{Tag 0BEC}]{stacks-project} the only possible $q$ for which $H^q\left(\P^{\mathbf n},\O_{\P^{\mathbf n}}(a_1,a_2)\right)$ is non-zero is $q=0,n_1,n_2,$ and $|\mathbf n|$. More specifically:
\[
H^q\left(\P^{\mathbf n},\O_{\P^{\mathbf n}}(a_1,a_2)\right)\cong
\begin{cases}
\mathbb{K}^{\binom{n_1+a_1}{a_1}}\otimes\mathbb{K}^{\binom{n_2+a_2}{a_2}} &\mbox{if } q=0 \mbox{ and } a_1\geq0,a_2\geq0\\
\mathbb{K}^{\binom{-a_1-1}{-n_1-a_1-1}}\otimes \mathbb{K}^{\binom{n_2+a_2}{a_2}} &\mbox{if } q=n_1 \mbox{ and } a_1\leq-n_1-1,a_2\geq0\\
\mathbb{K}^{\binom{n_1+a_1}{a_1}}\otimes\mathbb{K}^{\binom{-a_2-1}{-n_2-a_2-1}} &\mbox{if } q=n_2 \mbox{ and } a_1\geq0,a_2\leq -n_2-1 \\
\mathbb{K}^{\binom{-a_1-1}{-n_1-a_1-1}}\otimes\mathbb{K}^{\binom{-a_2-1}{-n_2-a_2-1}} &\mbox{if } q=n_1+n_2 \mbox{ and } a_1\leq -n_1-1,a_2\leq -n_2-1 \\
\end{cases}.
\]
Using this we see that the only non-zero entries on the $E^1$ page occur in rows $q=0,n_1,n_2,$ and $|\mathbf n|$. In fact, the only spots $(p,q)$ on the first page of this spectral sequence that are non-zero are:
\begin{enumerate}
\item $(p,0)$ for $p$ in the range $0\leq p\leq \min\{a_1,a_2,|\mathbf n|+1\}$,
\item $(p,n_1)$ for $p$ in the range $a_1+n_1+1\leq p \leq \min\{a_2,|\mathbf n|+1\}$,
\item $(p,n_2)$ for $p$ in the range $a_2+n_2+1\leq p \leq \min\{a_1,|\mathbf n|+1\}$, and
\item $(p,|\mathbf n|)$ for $p$ in the range $\max\{a_1+n_1+1,a_2+n_2+1\}\leq p\leq |\mathbf n|+1$.
\end{enumerate}
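These ranges can be cross-checked numerically against the K\"{u}nneth computation. The sketch below (our own code, not the paper's) tabulates the non-zero entries of the $E^1$ page for a few sample values of $\mathbf n$ and $\aa$ and compares them against the four ranges just listed.

```python
from math import comb

# Sketch: verify, for sample (n, a), that the non-zero entries (p, q) of the
# E^1 page match the four ranges listed above. Uses the Kunneth dimension
# count for H^q(P^{n1} x P^{n2}, O(b1, b2)); names are ours, not the paper's.

def h(n, b):
    # nonzero cohomology of O(b) on a single P^n, as a dict {q: dim};
    # only H^0 (b >= 0) and H^n (b <= -n-1) can be nonzero
    out = {}
    if b >= 0:
        out[0] = comb(n + b, n)
    if b <= -n - 1:
        out[n] = comb(-b - 1, n)
    return out

def nonzero_spots(n1, n2, a1, a2):
    N = n1 + n2
    spots = set()
    for p in range(N + 2):
        b1, b2 = a1 - p, a2 - p  # F(a)_p is a sum of copies of O(a - p*1)
        for q1 in h(n1, b1):
            for q2 in h(n2, b2):
                spots.add((p, q1 + q2))
    return spots

def predicted_spots(n1, n2, a1, a2):
    # the four ranges (1)-(4) from the text, intersected with p >= 0
    N = n1 + n2
    spots = set()
    for p in range(0, min(a1, a2, N + 1) + 1):
        spots.add((p, 0))
    for p in range(a1 + n1 + 1, min(a2, N + 1) + 1):
        spots.add((p, n1))
    for p in range(a2 + n2 + 1, min(a1, N + 1) + 1):
        spots.add((p, n2))
    for p in range(max(a1 + n1 + 1, a2 + n2 + 1), N + 2):
        spots.add((p, N))
    return spots

for (n1, n2, a1, a2) in [(2, 3, 1, 5), (1, 2, 3, 1), (2, 3, 0, 0), (1, 4, 2, 7)]:
    assert nonzero_spots(n1, n2, a1, a2) == predicted_spots(n1, n2, a1, a2)
```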
Now consider the $E^2$ page of the spectral sequence. Since $\mathbf F_{\bullet}(\aa)$ is a Koszul complex twisted by $\O_{\P^{\mathbf n}}(\aa)$ on this page the 0th row is nothing but the degree $(a_1,a_2)$ strand on the Koszul complex. Moreover, since the cokernel of the Koszul complex is $\overline{S}$ we have that $E^2_{0,0}\cong \overline{S}_{\aa}$.
On the $j$th page of this spectral sequence the map to $E^j_{0,0}$ has source $E^j_{j,j-1}$. By considering the $E^1$ page, this means the only maps to $E^j_{0,0}$ that may be non-trivial occur when $j=1,n_1+1,n_2+1,$ and $|\mathbf n|+1$. We have already described the map on the $E^1$ page, and so consider the remaining cases.
\begin{itemize}
\item \underline{Page $n_1+1$:} On the $E^{n_1+1}$ page the map to $E^{n_1+1}_{0,0}$ has source $E^{n_1+1}_{n_1+1,n_1}$. Thus, for this map to be trivial it suffices for $E^{1}_{n_1+1,n_1}=0$. By our description of the $E^1$ page above this is true if and only if $n_1+1<a_1+n_1+1$. So if $a_1\geq1$ then $E^{1}_{n_1+1,n_1}=0$, and so the map to $E^{n_1+1}_{0,0}$ is zero.
\item \underline{Page $n_2+1$:} On the $E^{n_2+1}$ page the map to $E^{n_2+1}_{0,0}$ has source $E^{n_2+1}_{n_2+1,n_2}$. Thus, for this map to be trivial it suffices for $E^{1}_{n_2+1,n_2}=0$. By our description of the $E^1$ page above this is true if and only if $n_2+1<a_2+n_2+1$. So if $a_2\geq1$ then $E^{1}_{n_2+1,n_2}=0$, and so the map to $E^{n_2+1}_{0,0}$ is zero.
\item \underline{Page $|\mathbf n|+1$:} On the $E^{|\mathbf n|+1}$ page the map to $E^{|\mathbf n|+1}_{0,0}$ has source $E^{|\mathbf n|+1}_{|\mathbf n|+1,|\mathbf n|}$. Thus, for this map to be trivial it suffices for $E^{1}_{|\mathbf n|+1,|\mathbf n|}=0$. By our description of the $E^1$ page above this is true if and only if $|\mathbf n|+1<\min\{a_1+n_1+1,a_2+n_2+1\}$. So if $a_1\geq n_2+1$ or $a_2\geq n_1+1$ then $E^{1}_{|\mathbf n|+1,|\mathbf n|}=0$, and so the map to $E^{|\mathbf n|+1}_{0,0}$ is zero.
\end{itemize}
Thus, if $a_1,a_2\geq1$ and either $a_1\geq n_2+1$ or $a_2\geq n_1+1$, there are no non-zero maps to $E^j_{0,0}$ for $j\geq2$. Since this spectral sequence abuts to zero, we conclude $E^{2}_{0,0}\cong \overline{S}_{\aa}=0$.
\end{proof}
We now shift to showing that condition (3) of Conjecture~\ref{conj:tri-deg-vanishing} implies the stated vanishing. Before proving this in full we first consider the special case when $a_1=1$. This will be useful as a base case for our inductive proof of Proposition~\ref{prop:conjecture-1}.
\begin{lemma}\label{lem:tri-vanishing-special-case}
Fix $\mathbf n=(n_1,n_2)\in \mathbb{Z}^2_{\geq1}$ and let $\mathbf d=\mathbf 1$. Given $\aa=(1,a_2)\in \mathbb{Z}^{2}_{\geq1}$, if $k\leq a_2-1$ then $\dim \overline{S}_{\aa,k}=0$.
\end{lemma}
\begin{proof}
It is enough to show that if $m\in S_{\aa,k}$ is a monomial then $m\in \mathbb{R}(\mathbf n,\mathbf 1)$. Now any monomial $m\in S_{\aa,k}$ is of the form $m=x_{\ell}m'$ where $m'$ is a monomial of bi-degree $(0,a_2)$ and index weighted degree $k-\ell$ supported on $y_0,y_1,\ldots,y_{n_2}$. Since the index weighted degree of $m$ is at most $a_2-1$, the index weighted degree of $m'$ is at most $a_2-(\ell+1)$. Thus, we may write $m'$ as $m'=y_{0}^{\ell+1}m''$ where $m''$ is a monomial supported on $y_0,y_1,\ldots,y_{n_2}$; indeed, if $y_0$ appeared at most $\ell$ times in $m'$ then the remaining $a_2-\ell$ factors of $m'$ would each contribute at least one to its index weighted degree, contradicting the bound above. So it is enough to show that $x_{\ell}y^{\ell+1}_{0}\in \mathbb{R}(\mathbf n,\mathbf 1)$.
Towards this we show that $x_{\ell}y^{\ell+1}_{0}\in \mathbb{R}(\mathbf n,\mathbf 1)$ for $\ell=0,1,\ldots,n_1$ by induction on $\ell$. The base case when $\ell=0$ is clear since $x_{0}y_{0}=g_{0}$. Now suppose that $x_{\ell'}y^{\ell'+1}_{0}\in \mathbb{R}(\mathbf 1)$ for all $0\leq \ell'<\ell$. We may write $x_{\ell}y_{0}^{\ell+1}$ as
\[
x_{\ell}y_{0}^{\ell+1}=x_{\ell}y_{0}y_{0}^{\ell}=\left(g_{\ell}-\sum_{i=1}^{\ell}x_{\ell-i}y_{i}\right)y_{0}^{\ell},
\]
and so it is enough to show that $\left(\sum_{i=1}^{\ell}x_{\ell-i}y_{i}\right)y_{0}^{\ell}$ is in $\mathbb{R}(\mathbf 1)$. However, each term in this sum is of the form $\left(x_{\ell-i}y_{0}^{\ell-i+1}\right)y_iy_{0}^{i-1}$, which by the inductive hypothesis is contained in $\mathbb{R}(\mathbf n,\mathbf 1)$.
\end{proof}
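The identity driving this induction can be verified symbolically in small cases. The sketch below (our own dict-based polynomial arithmetic, with $\mathbf n=(2,2)$ and $\ell=2$) checks that $x_\ell y_0^{\ell+1}=\left(g_\ell-\sum_{i=1}^{\ell}x_{\ell-i}y_i\right)y_0^{\ell}$.

```python
# Sketch (with d = 1 and our own tiny dict-based polynomial arithmetic) of the
# identity used in the inductive step. Polynomials are dicts {exponent tuple:
# coefficient}, exponent tuples are (v0..v_{n1}, w0..w_{n2}); illustrative only.

n1, n2 = 2, 2

def add(p, q):
    out = dict(p)
    for e, c in q.items():
        out[e] = out.get(e, 0) + c
        if out[e] == 0:
            del out[e]
    return out

def scale(p, s):
    return {e: s * c for e, c in p.items()}

def mul(p, q):
    out = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            e = tuple(a + b for a, b in zip(e1, e2))
            out[e] = out.get(e, 0) + c1 * c2
            if out[e] == 0:
                del out[e]
    return out

def mono(xs=(), ys=()):
    # monomial from lists of x-indices and y-indices,
    # e.g. mono((2,), (0, 0, 0)) = x2 * y0^3
    e = [0] * (n1 + n2 + 2)
    for i in xs:
        e[i] += 1
    for j in ys:
        e[n1 + 1 + j] += 1
    return {tuple(e): 1}

def g(t):
    # g_t = sum_{i+j=t} x_i y_j with both indices in range
    out = {}
    for i in range(n1 + 1):
        if 0 <= t - i <= n2:
            out = add(out, mono((i,), (t - i,)))
    return out

l = 2
lhs = mono((l,), (0,) * (l + 1))                        # x_l * y0^(l+1)
correction = {}
for i in range(1, l + 1):
    correction = add(correction, mono((l - i,), (i,)))  # x_{l-i} * y_i
rhs = mul(add(g(l), scale(correction, -1)), mono((), (0,) * l))
assert lhs == rhs
```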
\begin{prop}\label{prop:conjecture-1}
Fix $\mathbf n \in \mathbb{Z}^2_{\geq1}$ and let $\mathbf d=\mathbf 1$. Given $\aa=(a_1,a_2)\in \mathbb{Z}_{\geq1}^2$ if $0\leq k\leq a_1a_2-1$ then $\dim\overline{S}_{\aa,k} =0$.
\end{prop}
\begin{proof}
Fix $\mathbf n=(n_1,n_2)\in \mathbb{Z}^2_{\geq1}$, $\aa=(a_1,a_2)\in\mathbb{Z}^2$, and $k\in \mathbb{Z}$ such that $k\leq a_1a_2-1$. Without loss of generality we assume that $n_2\leq n_1$, and that $a_2\leq n_1$ since if $a_{2}>n_1$ then $\dim \overline{S}_{\aa,k}=0$ by parts (1) and (2) of Theorem~\ref{thm:tri-deg-vanishing}. We now proceed by induction on $n_1+a_1$. Note that the base case when $a_1=n_1=1$ follows immediately from Lemma~\ref{lem:tri-vanishing-special-case}. Our inductive hypothesis can now be stated as follows: Let $S'=\mathbb{K}[x_{0},x_{1},\ldots,x_{n'_1},y_{0},y_{1},\ldots,y_{n_2}]$, which we consider with the natural bi-grading, and write $\overline{S}'$ for $S'/\mathbb{R}(n'_1,n_2,\mathbf 1)$. If $(a'_1,a_2)\in \mathbb{Z}^2_{\geq1}$ and $n'_{1}+a'_{1}<n_1+a_1$ then $\dim \overline{S}'_{(a'_1,a_2),k}=0$ for all $0\leq k\leq a'_{1}a_{2}-1$.
Now since $\mathbb{R}(\mathbf n,\mathbf 1)$ is homogeneous with respect to the bi-grading and the index weighted grading (Lemma~\ref{lem:homogeneous-wrt-idex}), after shifting accordingly we may consider the short exact sequence
\begin{center}
\begin{tikzcd}[column sep = 3.5em]
0 \rar & \overline{S}\rar{\cdot x_{n_1}} & \overline{S}\rar & \overline{S}/\langle x_{n_1}\rangle \rar & 0
\end{tikzcd}
\end{center}
as a short exact sequence of graded modules with respect to either the bi-grading or index weighted grading. This gives the following:
\[
\dim \overline{S}_{\aa,k}=\dim \overline{S}_{(a_1-1,a_2),k-n_1} + \dim \left(\overline{S}/\langle x_{n_1}\rangle\right)_{\aa,k}.
\]
So for the inductive step it is enough for both $\dim \overline{S}_{(a_1-1,a_2),k-n_1}$ and $\dim \left(\overline{S}/\langle x_{n_1}\rangle\right)_{\aa,k}$ to equal zero.
First we show that $\dim \overline{S}_{(a_1-1,a_2),k-n_1}$ is equal to zero. Notice that by applying the inductive hypothesis in the case when $n'_1=n_1$ and $a'_1=a_1-1$, it is enough for $k-n_1\leq (a_1-1)a_2-1=a_1a_2-a_2-1$. This inequality is true since by our initial assumptions $k\leq a_1a_2-1$ and $a_2\leq n_1$.
Now we show that $\dim \left(\overline{S}/\langle x_{n_1}\rangle\right)_{\aa,k}$ is equal to zero. By Lemma~\ref{lem:induction-ideal} there is an isomorphism between $\overline{S}/\langle x_{n_1}\rangle$ and $\overline{S}'$ when $n'_1=n_1-1$. In particular, the dimension of $\left(\overline{S}/\langle x_{n_1}\rangle\right)_{\aa,k}$ is equal to the dimension of $\overline{S}'_{\aa,k}$. Applying the inductive hypothesis when $n'_1=n_1-1$ and $a'_1=a_1$, we conclude that $\dim \left(\overline{S}/\langle x_{n_1}\rangle\right)_{\aa,k}=\dim \overline{S}'_{\aa,k}=0$.
\end{proof}
We end this section by proving the last claim of Theorem~\ref{thm:tri-deg-vanishing}. Before proving the full claim we first consider the special case when $a_{1}=n_2$ and $a_{2}=n_1$.
\begin{prop}\label{prop:tri-deg-special-case}
Fix $\mathbf n=(n_1,n_2)\in \mathbb{Z}^2_{\geq0}$, and let $\mathbf d=\mathbf 1$. The dimension of $\overline{S}_{(n_2,n_1),n_1n_2}$ is one.
\end{prop}
\begin{proof}
We use a hypercohomology spectral sequence argument similar to the one used in the proof of parts (1) and (2) of Theorem~\ref{thm:tri-deg-vanishing}. In particular, let $\mathbf d=\mathbf 1$ and consider the Koszul complex $\mathbf F_{\bullet}$ of $\O_{\P^{\mathbf n}}$-modules defined on $g_{0},g_{1},\ldots,g_{|\mathbf n|}$. Notice this complex is quasi-isomorphic to zero.
Writing $\mathbf F(\aa)_{\bullet}$ for the complex $\mathbf F_{\bullet}\otimes \O_{\P^{\mathbf n}}(\aa)$, we consider the hypercohomology spectral sequence associated to the complex $\mathbf F(\aa)_{\bullet}$ and the global sections functor $\Gamma\left(-,\O_{\P^{\mathbf n}}\right)$, which is defined by:
\[
E^{1}_{p,q}=R^{q}\Gamma\left(\mathbf F(\aa)_{p}\right)=H^{q}\left(\P^{\mathbf n},\mathbf F(\aa)_{p}\right).
\]
This spectral sequence abuts to $\mathbb{H}^{p-q}\left(\mathbf F(\aa)_{\bullet}\right)$, which is zero since $\mathbf F_{\bullet}$ is quasi-isomorphic to zero. Using the fact that $\mathbf F_{i}=\O_{\P^{\mathbf n}}(-i\cdot\mathbf 1)^{\oplus\binom{|\mathbf n|+1}{i}}$, if $a_{1}=n_2$ and $a_2=n_1$ then the $E^1$ page of this spectral sequence looks like:
\begin{tikzpicture}
\matrix (m) [matrix of math nodes,
nodes in empty cells,
nodes={minimum width=5ex, minimum height=6ex,
text depth=1ex,
inner sep=0pt, outer sep=0pt,
anchor=base},
column sep=4ex, row sep=4ex]%
{
{\scriptstyle |\mathbf n|} & \scriptstyle 0 & \scriptstyle 0 & & \cdots & & \scriptstyle 0 & \scriptstyle \mathbb{K}& \\
\scriptstyle |\mathbf n|-1 & \scriptstyle 0 & \scriptstyle 0 & & \cdots & & \scriptstyle 0 & \scriptstyle 0 &\\
[7ex,between origins]
\vdots & \vdots & \vdots & & &\ddots & \vdots & \cdots &\\
[7ex,between origins]
\scriptstyle 1 & \scriptstyle 0 & \scriptstyle 0 & & \cdots & & \scriptstyle 0 & \scriptstyle 0 & \\
[5ex,between origins]
\scriptstyle 0 & \scriptstyle H^0\left(\P^{\mathbf n},\mathbf F(\aa)_{0}\right) & \scriptstyle H^0\left(\P^{\mathbf n},\mathbf F(\aa)_{1}\right) & & & & \scriptstyle H^0\left(\P^{\mathbf n},\mathbf F(\aa)_{|\mathbf n|}\right) & \scriptstyle H^0\left(\P^{\mathbf n},\mathbf F(\aa)_{|\mathbf n|+1}\right) & \\
[3ex,between origins]
& \scriptstyle 0 & \scriptstyle 1 & & \cdots & & \scriptstyle {|\mathbf n|} & \scriptstyle {|\mathbf n|+1} &\\
};
\draw[shorten <= .1cm, shorten >= .1cm,-stealth] (m-1-3) -- (m-1-3 -| m-1-2.east);
\draw[shorten <= .1cm, shorten >= .1cm,-stealth] (m-2-3) -- (m-2-3 -| m-2-2.east);
\draw[shorten <= .1cm, shorten >= .1cm,-stealth] (m-4-3) -- (m-4-3 -| m-4-2.east);
\draw[shorten <= .1cm, shorten >= .1cm,-stealth] (m-5-3) -- (m-5-3 -| m-5-2.east);
\draw[shorten <= .1cm, shorten >= .1cm,-stealth] (m-1-7) -- (m-1-7 -| m-1-6.east);
\draw[shorten <= .1cm, shorten >= .1cm,-stealth] (m-2-7) -- (m-2-7 -| m-2-6.east);
\draw[shorten <= .1cm, shorten >= .1cm,-stealth] (m-4-7) -- (m-4-7 -| m-4-6.east);
\draw[shorten <= .1cm, shorten >= .1cm,-stealth] (m-5-7) -- (m-5-7 -| m-5-6.east);
\draw[shorten <= .1cm, shorten >= .1cm,-stealth] (m-1-8) -- (m-1-8 -| m-1-7.east);
\draw[shorten <= .1cm, shorten >= .1cm,-stealth] (m-2-8) -- (m-2-8 -| m-2-7.east);
\draw[shorten <= .1cm, shorten >= .1cm,-stealth] (m-4-8) -- (m-4-8 -| m-4-7.east);
\draw[shorten <= .1cm, shorten >= .1cm,-stealth] (m-5-8) -- (m-5-8 -| m-5-7.east);
\draw[thick] (m-1-1.north east) -- (m-6-1.east) ;
\draw[thick] (m-6-1.north) -- (m-6-9.north);
\end{tikzpicture}
Since $\mathbf F_{\bullet}(n_2,n_1)$ is the Koszul complex resolving $\O_{Y}$ twisted by $\O_{\P^{\mathbf n}}(n_2,n_1)$, notice that the $0$th row on this page is nothing but the degree $(n_2,n_1)$-strand of the Koszul complex resolving $\mathbb{R}(\mathbf 1)$. In particular, since the cokernel of the Koszul complex resolving $\mathbb{R}(\mathbf 1)$ is $\overline{S}$ we know that $E^2_{0,0}$ is isomorphic to $\overline{S}_{(n_2,n_1)}$.
A cohomology computation, similar to the one done in the proof of parts (1) and (2) of Theorem~\ref{thm:tri-deg-vanishing}, shows that every map to $E_{0,0}^{j}$ has trivial source except when $j=1$ and $j=|\mathbf n|+1$. Thus, $E_{0,0}^{|\mathbf n|+1}\cong E_{0,0}^{2}\cong \overline{S}_{(n_2,n_1)}$. Likewise, since there are no non-trivial maps to $E_{|\mathbf n|+1,|\mathbf n|}^{j}$ we know that $E_{|\mathbf n|+1,|\mathbf n|}^{|\mathbf n|+1}\cong E_{|\mathbf n|+1,|\mathbf n|}^{1}$, which is isomorphic to $H^{|\mathbf n|+1}(\P^{\mathbf n},\O_{\P^{\mathbf n}}(-n_1-1,-n_2-1))$.
The above spectral sequence comes from the bi-complex given by tensoring the \v{C}ech complex on $\P^{\mathbf n}$ with the Koszul complex on $g_{0},g_{1},\ldots,g_{|\mathbf n|}$. Since both of these complexes are homogeneous with respect to the index weighted grading, the resulting bi-complex, and hence the associated spectral sequence, are also homogeneous with respect to the index weighted grading. Thus all vector spaces and differentials appearing in the various pages of the spectral sequence are graded with respect to the index degree. We would like to determine the index degree of the 1-dimensional vector space $H^{|\mathbf n|+1}(\P^{\mathbf n},\O_{\P^{\mathbf n}}(-n_1-1,-n_2-1))$. We compute this as follows: the last term of the Koszul complex is a rank-one module generated in index weighted degree $\sum_{i} \operatorname{index.deg}(g_i)$, and the generator of the 1-dimensional vector space $H^{|\mathbf n|+1}(\P^{\mathbf n},\O_{\P^{\mathbf n}}(-n_1-1,-n_2-1))$ corresponds to the Laurent monomial $\frac{1}{x_0x_1\cdots x_{n_1}y_0y_1\cdots y_{n_2}}$. So this vector space has index weighted degree:
\[
\sum_{i=0}^{|\mathbf n|}\operatorname{index.deg}(g_i)+\operatorname{index.deg} \left(\prod_{i=0}^{n_1}\frac{1}{x_i}\prod_{j=0}^{n_2}\frac{1}{y_j}\right) = \sum_{i=0}^{|\mathbf n|}i-\sum_{j=0}^{n_1}j - \sum_{k=0}^{n_2}k= \binom{|\mathbf n|+1}{2}-\binom{n_1+1}{2}-\binom{n_2+1}{2}=n_1n_2.
\]
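As a quick sanity check, in the smallest case $n_1=n_2=1$ (so $|\mathbf n|=2$) this computation reads
\[
\sum_{i=0}^{2}i-\sum_{j=0}^{1}j-\sum_{k=0}^{1}k=3-1-1=1=n_1n_2,
\]
as expected.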
On the $(|\mathbf n|+1)$-th page there is a non-trivial map from $E_{|\mathbf n|+1,|\mathbf n|}^{|\mathbf n|+1}$ to $E_{0,0}^{|\mathbf n|+1}$. Moreover, since this spectral sequence abuts to zero this map must be an isomorphism. Recall that $E_{|\mathbf n|+1,|\mathbf n|}^{|\mathbf n|+1}\cong H^{|\mathbf n|+1}(\P^{\mathbf n},\O_{\P^{\mathbf n}}(-n_1-1,-n_2-1))$ and $E_{0,0}^{|\mathbf n|+1}\cong \overline{S}_{(n_2,n_1)}$. As $H^{|\mathbf n|+1}(\P^{\mathbf n},\O_{\P^{\mathbf n}}(-n_1-1,-n_2-1))\cong \mathbb{K}$ this shows that $\dim \overline{S}_{(n_2,n_1)}=1$. Since this isomorphism respects the index weighted grading, it follows that the 1-dimensional vector space $\overline{S}_{(n_2,n_1)}$ is supported entirely in index weighted degree $n_1n_2$.
\end{proof}
Finally, we complete the proof of Theorem~\ref{thm:tri-deg-vanishing} by proving the last claim that $\dim \overline{S}_{\aa,a_1a_2}=1$.
\begin{proof}[Proof of Theorem~\ref{thm:tri-deg-vanishing}]
We proceed by induction on $n_1+n_2$. For the base case note that when $n_1=n_2=1$ the claim is clear, as the only case is $a_1=a_2=1$, which follows from Proposition~\ref{prop:tri-deg-special-case}. Now suppose that $a_1\leq n_2$ and $a_2\leq n_1$. Since we have already proved the claim when $a_1=n_2$ and $a_2=n_1$, without loss of generality we may suppose that $a_2<n_1$. Considering the short exact sequence
\begin{center}
\begin{tikzcd}[column sep = 3.5em]
0 \rar & \overline{S}\rar{\cdot x_{n_1}} & \overline{S} \rar & \overline{S}/\langle x_{n_1}\rangle \rar & 0
\end{tikzcd}
\end{center}
we see that:
\[
\dim \overline{S}_{\aa,a_1a_2}=\dim \overline{S}_{(a_1-1,a_2),a_1a_2-n_1}+\dim \left(\overline{S}/\langle x_{n_1}\rangle\right)_{\aa,a_1a_2}.
\]
As $a_2\leq n_1-1$ we see that $a_1a_2-n_1\leq (a_1-1)a_2-1$. So by Proposition~\ref{prop:conjecture-1} $\dim \overline{S}_{(a_1-1,a_2),a_1a_2-n_1}=0$, and thus it is enough to show that $\dim \left(\overline{S}/\langle x_{n_1}\rangle\right)_{\aa,a_1a_2}=1$. Letting $S'=\mathbb{K}[x_0,x_{1},\ldots,x_{n_1-1},y_{0},y_{1},\ldots,y_{n_2}]$ with the natural bi-grading, by Lemma~\ref{lem:induction-ideal} there is an isomorphism between $\overline{S}/\langle x_{n_1}\rangle$ and $S'/\mathbb{R}((n_1-1,n_2),\mathbf d)$. In particular, $\dim \left(\overline{S}/\langle x_{n_1}\rangle\right)_{\aa,a_1a_2}$ is equal to $\dim \left(S'/\mathbb{R}((n_1-1,n_2),\mathbf d)\right)_{\aa,a_1a_2}$, which by the inductive hypothesis is equal to one.
\end{proof}
\section{Non-Vanishing via Generalized Monomial Methods}\label{sec:monomial-techniques}
Our main result shows how to construct non-zero syzygies on $\P^{\mathbf n}$ from monomials in $\overline{S}$. This generalizes Lemma~2.3 and Corollary~2.4 in \cite{einErmanLazarsfeld16} to a product of projective spaces.
A key observation for these generalizations is that the condition of one monomial dividing another monomial used in \cite{einErmanLazarsfeld16} can be weakened to a condition on the index weighted degree. This turns out to be crucial as the notion of when two monomials divide each other in $\overline{S}$ is quite subtle since our regular sequence is not generated by monomials. Before stating and proving this result we first establish a few important definitions and background results.
\begin{defn}
An element $f\in \overline{S}$ is a monomial of degree $\mathbf d$ if and only if there exists a monomial $m\in S_{\mathbf d}$ such that $\overline{m}=f$ where $\overline{m}$ is the image of $m$ in $\overline{S}$.
\end{defn}
\begin{defn}
An element $\zeta\otimes f\in \bigwedge\nolimits^s \overline{S}\otimes \overline{S}$ is a monomial if and only if $\zeta= n_1\wedge\cdots\wedge n_s$ where each $n_i\in \overline{S}$ and $f\in \overline{S}$ are monomials.
\end{defn}
\begin{defn}
Given a finite subset $P\subset \overline{S}$ we write $\det P$ for the wedge product of all elements in $P$, and say $\zeta\in \bigwedge\nolimits^{s}P$ if $\zeta=f_1\wedge \cdots\wedge f_s$ where $f_i\in P$ for all $i$.
\end{defn}
\begin{lemma}\label{lem:stupid}
Let $\phi:V\rightarrow W$ be a map of finite dimensional $\mathbb{K}$-vector spaces, $\{v_1,v_2,\ldots,v_n\}$ be a basis for $V$, and $\{w_1,w_2,\ldots,w_m\}$ be a basis for $W$. If there exists $I\subset \{1,2,\ldots,n\}$ such that
\[
w_1=\sum_{i\in I}\phi(v_i)
\]
then there exists an $i\in I$ such that if we express $\phi(v_i)$ in the given basis as
\[
\phi(v_i)=c_1w_1+c_2w_2+\cdots+c_mw_m
\]
where each $c_j\in \mathbb{K}$, then $c_1\neq0$.
\end{lemma}
\begin{proof}
Towards a contradiction suppose that
\[
\phi(v_i)=c_{i,1}w_1+c_{i,2}w_2+\cdots+c_{i,m}w_m
\]
and $c_{i,1}=0$ for all $i\in I$. This means that
\[
w_1=\sum_{i\in I}\phi(v_i)=\sum_{i\in I}\left(c_{i,2}w_2+\cdots+c_{i,m}w_m\right)
\]
which contradicts the fact that $\{w_1,w_2,\ldots,w_m\}$ is a basis for $W$.
\end{proof}
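To illustrate the lemma in a small case, suppose $V=W=\mathbb{K}^2$ with $\phi(v_1)=w_1+w_2$ and $\phi(v_2)=-w_2$, and take $I=\{1,2\}$, so that $w_1=\phi(v_1)+\phi(v_2)$. Then $i=1$ works: writing $\phi(v_1)=1\cdot w_1+1\cdot w_2$, the coefficient of $w_1$ is $1\neq0$.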
With this lemma and these definitions in hand, we can now state the key proposition of this section.
\begin{prop}\label{prop:nonvanishing-bounds}
Fix $\mathbf n=(n_1,n_2)\in \mathbb{Z}^2_{\geq1}$, $\mathbf d=(d_1,d_2)\in \mathbb{Z}^2_{\geq1}$, $\mathbf b=(b_1,b_2)\in\mathbb{Z}^2$, and $0\leq q \leq |\mathbf n|$. Let $f\in \overline{S}_{q\mathbf d+\mathbf b}$ be a non-zero monomial, and let
\begin{align*}
L(f)&\coloneqq \left\{\begin{matrix} m \\ \text{a monomial of}\\ \text{bi-degree $\mathbf d$}\end{matrix} \; \bigg| \; \begin{matrix} \operatorname{index.deg} m \leq \operatorname{index.deg} f\end{matrix}\right\}\subset \overline{S}_{\mathbf d}\\
Z(f)&\coloneqq \left\{\begin{matrix} m \\ \text{a monomial of}\\ \text{bi-degree $\mathbf d$}\end{matrix} \; \big| \; mf=0\right\}\subset \overline{S}_{\mathbf d}
\end{align*}
be the set of monomials of bi-degree $\mathbf d$ whose index weighted degree is at most that of $f$, and the set of annihilators of $f$ of bi-degree $\mathbf d$, respectively. Consider the Koszul complex:
\begin{center}
\begin{tikzcd}[column sep = 3em]
\cdots \rar{}& \bigwedge\nolimits^{p+1}\overline{S}_{\mathbf d}\otimes \overline{S}_{(q-1)\mathbf d+\mathbf b}\rar{\overline{\partial}_{p+1}}&\bigwedge\nolimits^{p}\overline{S}_{\mathbf d}\otimes \overline{S}_{q\mathbf d+\mathbf b}\rar{\overline{\partial}_p}&\bigwedge\nolimits^{p-1}\overline{S}_{\mathbf d}\otimes \overline{S}_{(q+1)\mathbf d+\mathbf b}\rar{}&\cdots
\end{tikzcd}
\end{center}
\begin{enumerate}
\item Given $\zeta \in \bigwedge\nolimits^pZ(f)$ the element $\zeta\otimes f\in \ker \overline{\partial}_{p}$.
\item Given $\zeta\in \bigwedge^{s} \overline{S}_{\mathbf d}$ such that $(\det L(f) \wedge \zeta) \otimes f\neq0$, we have $(\det L(f) \wedge \zeta) \otimes f \not\in \operatorname{img} \overline{\partial}_{\#L(f)+s+1}$.
\end{enumerate}
\end{prop}
\begin{proof}
First let us focus our attention on part (1). By definition, since $\zeta\in \bigwedge\nolimits^pZ(f)$ we may write it as $\zeta=\zeta_1\wedge\zeta_2\wedge\cdots\wedge \zeta_p$ where $\zeta_i\in Z(f)\subset \overline{S}_{\mathbf d}$. Thus, we see that
\begin{align*}
\overline{\partial}_{p}\left(\zeta\otimes f\right)=\overline{\partial}_{p}\left(\zeta_1\wedge \zeta_2\wedge\cdots\wedge \zeta_p\otimes f\right)&=\sum_{i=1}^p(-1)^i\zeta_1\wedge \zeta_2\wedge\cdots\wedge \hat{\zeta}_i\wedge \cdots \wedge \zeta_p\otimes (\zeta_if)\\
&=\sum_{i=1}^p(-1)^i\zeta_1\wedge \zeta_2\wedge\cdots\wedge \hat{\zeta}_i\wedge \cdots \wedge \zeta_p\otimes 0=0,
\end{align*}
where the penultimate equality follows from the fact that $\zeta_i\in Z(f)$, and so by definition annihilates $f$.
We now shift to proving part (2). Towards a contradiction suppose that $(\det L(f) \wedge \zeta) \otimes f$ is non-zero and in the image of the map:
\begin{center}
\begin{tikzcd}[ampersand replacement=\&,column sep = 3em]
\bigwedge\nolimits^{\#L(f)+s+1}\overline{S}_{\mathbf d}\otimes \overline{S}_{(q-1)\mathbf d+\mathbf b}\rar{\overline{\partial}_{\#L(f)+s+1}}\& \bigwedge\nolimits^{\#L(f)+s}\overline{S}_{\mathbf d}\otimes \overline{S}_{q\mathbf d+\mathbf b}
\end{tikzcd}.
\end{center}
This means there exists $\xi_{j}\in \bigwedge\nolimits^{\#L(f)+s+1}\overline{S}_{\mathbf d}$ and $g_j\in \overline{S}_{(q-1)\mathbf d+\mathbf b}$ such that:
\begin{equation}\label{eq:mon-method}
\overline{\partial}_{\#L(f)+s+1}\left(\sum_{j=1}^t \xi_j\otimes g_j\right)=\sum_{j=1}^t \overline{\partial}_{\#L(f)+s+1}\left(\xi_j\otimes g_j\right)=(\det L(f) \wedge \zeta) \otimes f.
\end{equation}
By the linearity of $\overline{\partial}_{\#L(f)+s+1}$ we may, without loss of generality, assume that each $\xi_j\otimes g_j$ is a monomial.
Now the monomials in $\bigwedge\nolimits^{\#L(f)+s+1}\overline{S}_{\mathbf d}\otimes \overline{S}_{(q-1)\mathbf d+\mathbf b}$ and in $\bigwedge\nolimits^{\#L(f)+s}\overline{S}_{\mathbf d}\otimes \overline{S}_{q\mathbf d+\mathbf b}$ both form spanning sets. Therefore, since $(\det L(f) \wedge \zeta) \otimes f$ is a monomial in $\bigwedge\nolimits^{\#L(f)+s}\overline{S}_{\mathbf d}\otimes \overline{S}_{q\mathbf d+\mathbf b}$ and each $\xi_j\otimes g_j$ is a monomial in $\bigwedge\nolimits^{\#L(f)+s+1}\overline{S}_{\mathbf d}\otimes \overline{S}_{(q-1)\mathbf d+\mathbf b}$, Lemma~\ref{lem:stupid} implies that for Equation~\eqref{eq:mon-method} to hold it must be the case that $(\det L(f) \wedge \zeta) \otimes f$ appears as a term in $\overline{\partial}_{\#L(f)+s+1}\left(\xi_j\otimes g_j\right)$ for some $j$.
Since $\xi_j\otimes g_j$ is a monomial we may write it as $\overline{n}_0\wedge \overline{n}_1\wedge \cdots \wedge \overline{n}_{\#L(f)+s}\otimes \overline{g}$ where $g$ and the $n_i$ are monomials in $S$ and $\overline{g}$ and $\overline{n}_i$ are their images in $\overline{S}$. For $(\det L(f) \wedge \zeta) \otimes f$ to appear as a term in $\overline{\partial}_{\#L(f)+s+1}(\overline{n}_0\wedge \overline{n}_1\wedge \cdots \wedge \overline{n}_{\#L(f)+s}\otimes \overline{g})$ we may, without loss of generality, assume that
\[
\overline{n}_1\wedge \overline{n}_2\wedge\cdots \wedge \overline{n}_{\#L(f)+s}\otimes (\overline{n}_0\overline{g})=(\det L(f) \wedge \zeta) \otimes f.
\]
This implies two equalities:
\begin{enumerate}
\item $\overline{n}_1\wedge \overline{n}_2\wedge\cdots \wedge \overline{n}_{\#L(f)+s}=(\det L(f) \wedge \zeta)$ as elements in $\bigwedge\nolimits^{\#L(f)+s}\overline{S}_{\mathbf d}$, and
\item $\overline{n}_0\overline{g}=f$ as elements in $\overline{S}_{q\mathbf d+\mathbf b}$.
\end{enumerate}
The second of these means that
\begin{align*}
\operatorname{index.deg}(\overline{n}_0)&+\operatorname{index.deg}(\overline{g})=\operatorname{index.deg}(\overline{n}_0\overline{g})=\operatorname{index.deg}(f),
\end{align*}
which implies that $\operatorname{index.deg} \overline{n}_0 \leq \operatorname{index.deg} f$ meaning $\overline{n}_{0}\in L(f)$. However, combining this fact with the first equality we see that
\[
\overline{n}_0\wedge \overline{n}_1\wedge \cdots \wedge \overline{n}_{\#L(f)+s}\otimes \overline{g}=\overline{n}_0\wedge(\det L(f) \wedge \zeta)\otimes \overline{g}=0
\]
contradicting the fact that $(\det L(f) \wedge \zeta) \otimes f$ is non-zero.
\end{proof}
Immediately from Proposition~\ref{prop:nonvanishing-bounds} we are able to deduce a non-vanishing result giving non-trivial syzygies in a range determined by $L(f)$ and $Z(f)$.
\begin{cor}\label{cor:monomial-nonvanishing}
Fix $\mathbf n=(n_1,n_2)\in \mathbb{Z}^2_{\geq1}$, $\mathbf d=(d_1,d_2)\in \mathbb{Z}^2_{\geq1}$, and $\mathbf b=(b_1,b_2)\in\mathbb{Z}^2$. Given $0\leq q \leq |\mathbf n|$, let $f\in \overline{S}_{q\mathbf d+\mathbf b}$ be a non-zero monomial such that $L(f)\subset Z(f)$. Then $K_{p,q}^{\overline{R}}\left(\overline{S}(\mathbf b;\mathbf d)\right)\neq0$ for all $p$ in the following range:
\[
\#L(f)\leq p \leq \#Z(f).
\]
\end{cor}
\begin{proof}
Fix $\#L(f)\leq p \leq \#Z(f)$ and write $p$ as $p=\#L(f)+s$. Since $L(f)\subset Z(f)$ we have $s \leq \#\left(Z(f)\setminus L(f)\right)$, so we may pick $s$ distinct elements $\zeta_1,\zeta_2,\ldots,\zeta_{s}\in Z(f)\setminus L(f)$. Set $\zeta = \zeta_1\wedge \zeta_2\wedge\cdots\wedge \zeta_s$. Note that since the $\zeta_i$ and the elements of $L(f)$ are distinct monomials -- and so form part of a basis for $\overline{S}_{\mathbf d}$ -- the element $(\zeta\wedge \det L(f))\otimes f$ is non-zero. By part (1) of Proposition~\ref{prop:nonvanishing-bounds} $(\zeta\wedge \det L(f))\otimes f$ is in the kernel of $\overline{\partial}_{\#L(f)+s}$, while by part (2) it is not in the image of $\overline{\partial}_{\#L(f)+s+1}$. Hence it represents a non-zero element in $K_{p,q}^{\overline{R}}\left(\overline{S}(\mathbf b;\mathbf d)\right)$.
\end{proof}
\section{Special Monomials}\label{sec:fqk}
In Section~\ref{sec:monomial-techniques} we showed that given a non-zero monomial $f\in \overline{S}_{q\mathbf d+\mathbf b}$ satisfying certain technical conditions described in Proposition~\ref{prop:nonvanishing-bounds}, one can construct a non-zero syzygy $\zeta_{1}\wedge \zeta_{2}\wedge\cdots \wedge \zeta_{p}\otimes f$ in $K_{p,q}^{\overline{R}}(\overline{S}(\mathbf b;\mathbf d))$, where $p$ is controlled in part by the annihilators of $f$. We now turn to describing the monomials $f$ we will use in our proof of Theorem~\ref{thm:main}.
Broadly, the idea is that having fixed $0\leq q\leq |\mathbf n|$ and $\mathbf b\in \mathbb{Z}^2$ we will construct a non-zero monomial $f_{q,k,\mathbf b}\in\overline{S}$ of bi-degree $q\mathbf d+\mathbf b$ for every $0\leq k\leq q$. Each $f_{q,k,\mathbf b}$ will play the role of $f$ in Proposition~\ref{prop:nonvanishing-bounds} and Corollary~\ref{cor:monomial-nonvanishing}, and will produce non-trivial syzygies (assuming a few technical conditions) in the range
\[
\binom{d_1+k}{k}\binom{d_2+(q-k)}{q-k}-(q+2)\leq p \leq r_{\mathbf n,\mathbf d}-\binom{d_1+n_1-k}{n_1-k}\binom{d_2+n_2-(q-k)}{n_2-(q-k)}-(|\mathbf n|+1).
\]
Initially, we will not explicitly define $f_{q,k,\mathbf b}$. Instead we utilize the fact that in certain degrees, described by Theorem~\ref{thm:tri-deg-vanishing}, $(S/\mathbb{R}(\mathbf 1))_{\aa,t}$ is one dimensional. In particular, we define $f_{q,k,\mathbf b}$ in terms of the generator of $(S/\mathbb{R}(\mathbf 1))_{(k,q-k),k(q-k)}$, which we denote by $\tilde{f}_{q,k}$. Thus, the first part of this section focuses on studying the generator $\tilde{f}_{q,k}$ of $(S/\mathbb{R}(\mathbf 1))_{(k,q-k),k(q-k)}$. In particular, we show that these $\tilde{f}_{q,k}$ satisfy a series of recursive relations from which it is possible to explicitly write down $\tilde{f}_{q,k}$.
Following this, in the second part of this section, we define $f_{q,k,\mathbf b}$ and study its properties. Namely, we show that $f_{q,k,\mathbf b}$ is well-defined, and then show that it is in fact non-zero in $\overline{S}$. Moreover, we see that $f_{q,k,\mathbf b}$ is supported on the variables $x_{0},x_{1},\ldots,x_{q-k},y_{0},y_{1},\ldots,y_{k}$.
We then end this section by studying the linear annihilators of $f_{q,k,\mathbf b}$. For example, we show that if $\mathbf b=\mathbf 0$ then $x_{i}f_{q,k,\mathbf b}$ and $y_{j}f_{q,k,\mathbf b}$ equal zero as elements of $\overline{S}$ for $i=0,1,\ldots,(q-k-1)$ and $j=0,1,\ldots,(k-1)$. Understanding these linear annihilators of $f_{q,k,\mathbf b}$ is crucial as it allows us to bound $\#Z(f_{q,k,\mathbf b})$ appearing in Proposition~\ref{prop:nonvanishing-bounds} and Corollary~\ref{cor:monomial-nonvanishing}.
\subsection{Defining \texorpdfstring{$\tilde{f}_{q,k}$}{f(q,k)}}
As described in Theorem~\ref{thm:tri-deg-vanishing}, for certain degrees $(S/\mathbb{R}(\mathbf 1))_{\aa,t}$ is one dimensional. The goal of this section is to study the generator, unique up to scalar multiple, of $(S/\mathbb{R}(\mathbf 1))_{\aa,t}$. This is useful as the monomial $f_{q,k,\mathbf b}$ we will define in the next subsection is built from these generators. In particular, many of our results about $f_{q,k,\mathbf b}$ given later in this section rest on this understanding of the generator of $(S/\mathbb{R}(\mathbf 1))_{\aa,t}$.
\begin{defn}
Fix $\mathbf n=(n_1,n_2)\in \mathbb{Z}^2_{\geq1}$. Given $0< q \leq |\mathbf n|$ and $0\leq k \leq q$ such that $q-k\leq n_1$ and $k\leq n_2$ let $\tilde{f}_{q,k}$ be the unique, up to scalar multiplication, non-zero monomial in $(S/\mathbb{R}(\mathbf 1))_{(k,q-k),k(q-k)}$.
\end{defn}
A crucial property of $\tilde{f}_{q,k}$, which is not immediately obvious from the definition, is that $\tilde{f}_{q,k}$ is supported on $x_{0},x_{1},\ldots,x_{q-k},y_{0},y_{1},\ldots,y_{k}$. In fact, $\tilde{f}_{q,k}$ is independent, up to the isomorphism described in Lemma~\ref{lem:induction-ideal}, of $\mathbf n$ so long as $n_1\geq q-k$ and $n_2\geq k$. This is the content of the following lemmas.
\begin{lemma}\label{lem:lem-fqk-independent}
Fix $\mathbf n=(n_1,n_2)\in \mathbb{Z}^2_{\geq1}$, $0< q \leq |\mathbf n|$ and $0\leq k \leq q$ such that $q-k\leq n_1$ and $k\leq n_2$. Let $S'=\mathbb{K}[x_{0},x_{1},\ldots,x_{n_1-i},y_{0},y_{1},\ldots,y_{n_2-j}]$. If $q-k\leq n_1-i$ and $k\leq n_2-j$ and $\tilde{f}'_{q,k}$ is the unique, up to scalar multiplication, non-zero monomial in $(S'/\mathbb{R}(\mathbf 1))_{(k,q-k),k(q-k)}$ then $\tilde{f}_{q,k}=\tilde{f}'_{q,k}$ under the isomorphism described in Lemma~\ref{lem:induction-ideal}.
\end{lemma}
\begin{proof}
By Lemma~\ref{lem:induction-ideal} there exists an isomorphism of $\mathbb{K}$-vector spaces:
\begin{equation}\label{eq:lem-fqk-independent}
\begin{tikzcd}[column sep = 4em]
\left(\frac{S'}{\mathbb{R}((n_1-i,n_2-j),\mathbf 1)}\right)_{(k,q-k),k(q-k)} \arrow[r, leftrightarrow, "\sim"] &
\left(\frac{S}{\mathbb{R}(\mathbf n,\mathbf 1)+\langle x_{n_1-i+1},x_{n_1-i+2},\ldots,x_{n_1},y_{n_2-j+1},y_{n_2-j+2},\ldots,y_{n_2}\rangle}\right)_{(k,q-k),k(q-k)}.
\end{tikzcd}
\end{equation}
The final remark of Theorem~\ref{thm:tri-deg-vanishing} shows that the left hand side of \eqref{eq:lem-fqk-independent} is one dimensional, and so the right hand side is also one dimensional. Since $\tilde{f}_{q,k}$ is defined to be a representative for the unique, up to scalar multiplication, generator of $\left(S/\mathbb{R}(\mathbf n,\mathbf 1)\right)_{(k,q-k),k(q-k)}$, it follows that $\tilde{f}_{q,k}=\tilde{f}'_{q,k}$ under the isomorphism in \eqref{eq:lem-fqk-independent}.
\end{proof}
\begin{lemma}\label{lem:fqk-relations}
Fix $\mathbf n=(n_1,n_2)\in \mathbb{Z}^2_{\geq1}$, $0< q \leq |\mathbf n|$ and $0\leq k \leq q$ such that $q-k\leq n_1$ and $k\leq n_2$. The following identities hold in $\overline{S}$:
\begin{enumerate}
\item $\tilde{f}_{q,k}=x_{q-k}\tilde{f}_{q-1,k-1}$,
\item $\tilde{f}_{q,k}=y_{k}\tilde{f}_{q-1,k}$,
\item $\tilde{f}_{q,k}=x_{q-k}y_{k-1}\tilde{f}_{q-2,k-1}$, and
\item $\tilde{f}_{q,k}=x_{q-k-1}y_{k}\tilde{f}_{q-2,k-1}$.
\end{enumerate}
\end{lemma}
\begin{proof}
Parts (3) and (4) follow by combining parts (1) and (2): for instance, applying part (1) and then part (2) gives $\tilde{f}_{q,k}=x_{q-k}\tilde{f}_{q-1,k-1}=x_{q-k}y_{k-1}\tilde{f}_{q-2,k-1}$. For part (1) consider a graded component of the map induced by multiplication by $x_{q-k}$:
\begin{equation}\label{eq:fqk-relations}
\begin{tikzcd}[column sep = 4em]
\left(\frac{S}{\mathbb{R}(\mathbf 1)}\right)_{(k-1,q-k),(k-1)(q-k)} \rar{x_{q-k}} & \left(\frac{S}{\mathbb{R}(\mathbf 1)}\right)_{(k,q-k),k(q-k)}
\end{tikzcd}.
\end{equation}
By Theorem~\ref{thm:tri-deg-vanishing} both the source and target of the map in \eqref{eq:fqk-relations} are one dimensional $\mathbb{K}$-vector spaces. Moreover, $\tilde{f}_{q-1,k-1}$ is a generator for the left hand side and $\tilde{f}_{q,k}$ is a generator for the right hand side. Thus, it is enough to show that $x_{q-k}$ divides $\tilde{f}_{q,k}$.
Towards a contradiction suppose that $x_{q-k}$ does not divide $\tilde{f}_{q,k}$. Letting $S'=\mathbb{K}[x_{0},x_{1},\ldots,x_{q-k-1},y_{0},y_{1},\ldots,y_{k}]$, by Lemma~\ref{lem:induction-ideal} there is an isomorphism of $\mathbb{K}$-vector spaces:
\begin{equation}\label{eq:fqk-relations-2}
\begin{tikzcd}[column sep = 4em]
\left(\frac{S'}{\mathbb{R}((q-k-1,k),\mathbf 1)}\right)_{(k,q-k),k(q-k)} \arrow[r, leftrightarrow, "\sim"] &
\left(\frac{S}{\mathbb{R}(\mathbf n,\mathbf 1)+\langle x_{q-k},x_{q-k+1},\ldots,x_{n_1}\rangle}\right)_{(k,q-k),k(q-k)}.
\end{tikzcd}
\end{equation}
Part (1) of Theorem~\ref{thm:tri-deg-vanishing} implies the left hand side of \eqref{eq:fqk-relations-2} has dimension zero. However, by Lemma~\ref{lem:lem-fqk-independent} $\tilde{f}_{q,k}$ is non-zero after quotienting $S$ by $\mathbb{R}(\mathbf n,\mathbf 1)+\langle x_{q-k+1},x_{q-k+2},\ldots,x_{n_1}\rangle$. Thus, since $\tilde{f}_{q,k}$ is not divisible by $x_{q-k}$, it is non-zero in the right hand side of \eqref{eq:fqk-relations-2}. Hence the right hand side of \eqref{eq:fqk-relations-2} has dimension one, which is a contradiction. Part (2) of this lemma follows by a similar argument.
\end{proof}
\begin{remark}
While we will not make use of this, notice that as a consequence of Lemma~\ref{lem:fqk-relations} we are able to write down an explicit representative for $\tilde{f}_{q,k}$. Namely, as $\tilde{f}_{q,0}$ has bi-degree $(0,q)$ and index degree $0$ we know $\tilde{f}_{q,0}=y_{0}^{q}$, and by a similar argument $\tilde{f}_{q,q}=x_{0}^{q}$. So using the inductive structure of Lemma~\ref{lem:fqk-relations} with these base cases we can find explicit representatives for $\tilde{f}_{q,k}$. For example,
\[
\tilde{f}_{5,2}=x_{3}y_{1}\tilde{f}_{3,1}=x_{3}y_{1}\left(x_{2}y_{0}\tilde{f}_{1,0}\right)=x_{2}x_{3}y_{0}^{2}y_1.
\]
\end{remark}
\subsection{Defining $f_{q,k,\mathbf b}$}
We are now ready to define $f_{q,k,\mathbf b}$ in terms of $\tilde{f}_{q,k}$. While in Theorem~\ref{thm:main2} we restrict our attention to the case when $\mathbf b\in \mathbb{Z}^{2}_{\geq0}$, we will define $f_{q,k,\mathbf b}$ for a more general range of $\mathbf b$. In particular, under suitable hypotheses we will allow $\mathbf b$ to have negative coordinates. One might hope that these more general $f_{q,k,\mathbf b}$'s may be used to extend Theorem~\ref{thm:main2} to additional cases of $\mathbf b$, though we do not carry this out here.
\begin{notation}
Given a monomial $\mathbf x^{\mathbf v}\mathbf y^{\mathbf w}$ in $S$ (or $\overline{S}$) we write $(\mathbf x^{\mathbf v}\mathbf y^{\mathbf w})^{\mathbf d}$ for the monomial $\mathbf x^{d_1\mathbf v}\mathbf y^{d_2\mathbf w}$ in $S$ (or $\overline{S}$).
\end{notation}
\begin{defn}\label{defn:fqkb}
Fix $\mathbf n=(n_1,n_2)\in \mathbb{Z}^2_{\geq1}$, $\mathbf d=(d_1,d_2)\in \mathbb{Z}^2_{\geq1}$, and $\mathbf b\in \mathbb{Z}^2$. Let $0< q \leq |\mathbf n|$ and $0\leq k \leq q$ such that $q-k\leq n_1$ and $k\leq n_2$. If $d_1>q-k+b_1$ and $d_2>k+b_2$ then define $f_{q,k,\mathbf b}$ to be:
\[
f_{q,k,\mathbf b}\coloneqq
\begin{cases}
\left(x_0\cdots x_{q-1}\right)^{d_1-1}x_{q}^{q+b_1}y_{0}^{b_2}\tilde{f}_{q,0}^{\mathbf d}& \mbox{if } k=0 \\
\left(x_0\cdots x_{q-k-1}\right)^{d_1-1}x_{q-k}^{q-k+b_1}\left(y_0\cdots y_{k-1}\right)^{d_2-1}y_{k}^{k+b_2}\tilde{f}_{q,k}^{\mathbf d} & \mbox{if } k\neq0
\end{cases}.
\]
\end{defn}
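To see a small concrete instance of Definition~\ref{defn:fqkb}, take $q=2$, $k=1$, $\mathbf b=\mathbf 0$, and $\mathbf d=(2,2)$. Using the representative $\tilde{f}_{2,1}=x_1\tilde{f}_{1,0}=x_1y_0$ computed via Lemma~\ref{lem:fqk-relations}, the definition reads
\[
f_{2,1,\mathbf 0}=x_0^{d_1-1}x_{1}^{1}y_0^{d_2-1}y_{1}^{1}\tilde{f}_{2,1}^{\mathbf d}=x_0x_1y_0y_1\cdot x_1^{2}y_0^{2}=x_0x_1^{3}y_0^{3}y_1,
\]
which has bi-degree $(4,4)=q\mathbf d$, as expected.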
Note that from the definition it is not necessarily clear that $f_{q,k,\mathbf b}$ is in fact an element of $\overline{S}$. In particular, since $b_1$ and $b_2$ may be negative the terms $x_{q-k}^{q-k+b_1}$ and $y_{k}^{k+b_2}$ appearing in the definition of $f_{q,k,\mathbf b}$ need not be monomials. In fact, if \textit{both} $b_1$ and $b_2$ are sufficiently negative $f_{q,k,\mathbf b}$ is not an element of $\overline{S}$.
However, the following lemma shows that as long as at least one of $q-k+b_1$ and $k+b_2$ is non-negative and $\mathbf d\gg\mathbf b$ we need not worry about this except in a few edge cases (i.e.\ when $k=0$ or $k=q$). The key insight is that the relations in Lemma~\ref{lem:fqk-relations} provide the following alternative expressions for $f_{q,k,\mathbf b}$ when exactly one of $q-k+b_1$ and $k+b_2$ is negative.
\begin{lemma}\label{lem-fqkb-well-defined}
Fix $\mathbf n=(n_1,n_2)\in \mathbb{Z}^2_{\geq1}$, $\mathbf d=(d_1,d_2)\in \mathbb{Z}^2_{\geq1}$, and $\mathbf b\in \mathbb{Z}^2$. Let $0< q \leq |\mathbf n|$ and $0\leq k \leq q$ such that $q-k\leq n_1$ and $k\leq n_2$.
\begin{enumerate}
\item Suppose $q-k+b_1<0$ and $k+b_2\geq0$. If $d_1>|q-k+b_1|$ and $k\neq0$ then $f_{q,k,\mathbf b}\in \overline{S}$ and
\[
f_{q,k,\mathbf b}=
\left(x_0\cdots x_{q-k-1}\right)^{d_1-1}x_{q-k}^{q-k+b_1+d_1}\left(y_0\cdots y_{k-1}\right)^{d_2-1}y_{k}^{k+b_2}\tilde{f}_{q-1,k-1}^{\mathbf d}.
\]
\item Suppose $q-k+b_1\geq0$ and $k+b_2<0$. If $d_2>|k+b_2|$ then $f_{q,k,\mathbf b}\in \overline{S}$ and
\[
f_{q,k,\mathbf b}=
\begin{cases}
\left(x_0\cdots x_{q-1}\right)^{d_1-1}x_{q}^{q+b_1}y_{0}^{b_2+d_2}\tilde{f}_{q-1,0}^{\mathbf d}& \mbox{if } k=0 \\
\left(x_0\cdots x_{q-k-1}\right)^{d_1-1}x_{q-k}^{q-k+b_1}\left(y_0\cdots y_{k-1}\right)^{d_2-1}y_{k}^{k+b_2+d_2}\tilde{f}_{q-1,k}^{\mathbf d} & \mbox{if } k\neq0
\end{cases}.
\]
\end{enumerate}
\end{lemma}
\begin{proof}
Apply Lemma~\ref{lem:fqk-relations}.
\end{proof}
\begin{remark}
Since $\tilde{f}_{q,k}$ has bi-degree $(k,q-k)$, the monomial $f_{q,k,\mathbf b}$ has bi-degree $q\mathbf d+\mathbf b$. This is key, as we wish to apply these $f_{q,k,\mathbf b}$ as the $f$ in Proposition~\ref{prop:nonvanishing-bounds}.
\end{remark}
The remainder of this section is dedicated to proving that for certain $\mathbf b$ the monomial $f_{q,k,\mathbf b}$ is non-zero as an element of $\overline{S}$, and to describing a certain subset of $(\mathbb{R}:f_{q,k,\mathbf b})$. An important property, key to proving both of these, is that $(f_{q,k,\mathbf b}/\operatorname{remd}(f_{q,k,\mathbf b}))^{1/\mathbf d}$ is equal to $\tilde{f}_{q',k'}$ for some $q'$ and $k'$.
\begin{lemma}\label{lem:floor-of-fqk}
Fix $\mathbf n=(n_1,n_2)\in \mathbb{Z}^2_{\geq1}$, $\mathbf d=(d_1,d_2)\in \mathbb{Z}^2_{\geq1}$, and $\mathbf b\in \mathbb{Z}^{2}$. Further fix $0< q \leq |\mathbf n|$ and $0\leq k \leq q$ such that $q-k\leq n_1$ and $k\leq n_2$. Suppose that $|(q-k)+b_1|<d_1$ and $|k+b_2|<d_2$.
\begin{enumerate}
\item If $0\leq (q-k)+b_1$ and $0\leq k+b_2$ then $\left(\frac{f_{q,k,\mathbf b}}{\operatorname{remd}(f_{q,k,\mathbf b})}\right)^{1/\mathbf d}$ is equal to $\tilde{f}_{q,k}$.
\item If $q-k+b_1<0$, $k+b_2\geq0$, and $k\neq0$ then $\left(\frac{f_{q,k,\mathbf b}}{\operatorname{remd}(f_{q,k,\mathbf b})}\right)^{1/\mathbf d}$ is equal to $\tilde{f}_{q-1,k-1}$. \\
\item If $q-k+b_1\geq0$, $k+b_2<0$ then $\left(\frac{f_{q,k,\mathbf b}}{\operatorname{remd}(f_{q,k,\mathbf b})}\right)^{1/\mathbf d}$ is equal to $\tilde{f}_{q-1,k}$.
\end{enumerate}
\end{lemma}
\begin{proof}
We only prove part (1) as the remaining parts follow in a similar manner from Lemma~\ref{lem-fqkb-well-defined}. First we handle the case when $k=0$. The key facts are that $\operatorname{remd}(\tilde{f}_{q,0}^{\mathbf d})$ is equal to $\tilde{f}_{q,0}$, and that $\operatorname{remd}(x_q^{q+b_1}) =x_q^{q+b_1}$ and $\operatorname{remd}(y_{0}^{b_2})=y_0^{b_2}$ since $q+b_1<d_{1}$ and $b_2<d_2$ respectively. Computing:
\[
\left(\frac{f_{q,0,\mathbf b}}{\operatorname{remd}(f_{q,0,\mathbf b})}\right)^{1/\mathbf d}=\left(\frac{\left(x_0\cdots x_{q-1}\right)^{d_1-1}x_{q}^{q+b_1}y_{0}^{b_2}\tilde{f}_{q,0}^{\mathbf d}}{\left(x_0\cdots x_{q-1}\right)^{d_1-1}x_{q}^{q+b_1}y_0^{b_2}}\right)^{1/\mathbf d}=\left(\tilde{f}_{q,0}^{\mathbf d}\right)^{1/\mathbf d}=\tilde{f}_{q,0}.
\]
The case when $k\neq0$ is essentially the same. Again the key facts are that $\operatorname{remd}\left(\tilde{f}_{q,k}^{\mathbf d}\right)=\tilde{f}_{q,k}$, and that $\operatorname{remd}\left(x_{q-k}^{q-k+b_1}\right) =x_{q-k}^{q-k+b_1}$ and $\operatorname{remd}\left(y_{k}^{k+b_2}\right)=y_{k}^{k+b_2}$ since $(q-k+b_1)<d_{1}$ and $k+b_2<d_2$. From these then the result follows from the computation:
\[
\left(\frac{f_{q,k,\mathbf b}}{\operatorname{remd}(f_{q,k,\mathbf b})}\right)^{1/\mathbf d}=\left(\frac{\left(x_0\cdots x_{q-k-1}\right)^{d_1-1}x_{q-k}^{q-k+b_1}\left(y_0\cdots y_{k-1}\right)^{d_2-1}y_{k}^{k+b_2}\tilde{f}_{q,k}^{\mathbf d}}{\left(x_0\cdots x_{q-k-1}\right)^{d_1-1}x_{q-k}^{q-k+b_1}\left(y_0\cdots y_{k-1}\right)^{d_2-1}y_{k}^{k+b_2}}\right)^{1/\mathbf d}=\left(\tilde{f}_{q,k}^{\mathbf d}\right)^{1/\mathbf d}=\tilde{f}_{q,k}.
\]
\end{proof}
Note that the conditions $d_1>|q-k+b_1|$ and $d_2>|k+b_2|$ ensure that $q-k+b_1$ and $k+b_2$ remain unchanged modulo $d_1$ and $d_2$ respectively. This, together with Lemma~\ref{lem-fqkb-well-defined}, is the source of the conditions appearing in Theorem~\ref{thm:main}. Using the previous lemma together with Proposition~\ref{prop:containment} we conclude that $f_{q,k,\mathbf b}\neq0$ as an element of $\overline{S}$.
\begin{lemma}\label{lem-fqkb-nonzero}
Fix $\mathbf n=(n_1,n_2)\in \mathbb{Z}^2_{\geq1}$, $\mathbf d=(d_1,d_2)\in \mathbb{Z}^2_{\geq1}$, and $\mathbf b\in \mathbb{Z}^2$. Further fix $0< q \leq |\mathbf n|$ and $0\leq k \leq q$ such that $q-k\leq n_1$ and $k\leq n_2$. Suppose $|q-k+b_1|< d_1$ and $|k+b_2|<d_2$. If one of the following pairs of inequalities holds:
\begin{enumerate}
\item $0\leq (q-k)+b_1$ and $0\leq k+b_2$,
\item $q-k+b_1<0$, $k+b_2\geq0$, or
\item $q-k+b_1\geq0$, $k+b_2<0$
\end{enumerate}
then the monomial $f_{q,k,\mathbf b}$ is non-zero as an element of $\overline{S}$.
\end{lemma}
\begin{proof}
By Proposition~\ref{prop:containment} $f_{q,k,\mathbf b}\not\in \mathbb{R}(\mathbf d)$ if and only if $\left(\frac{f_{q,k,\mathbf b}}{\operatorname{remd}(f_{q,k,\mathbf b})}\right)^{1/\mathbf d}\not\in \mathbb{R}(\mathbf 1)$. By Lemma~\ref{lem:floor-of-fqk} $\left(\frac{f_{q,k,\mathbf b}}{\operatorname{remd}(f_{q,k,\mathbf b})}\right)^{1/\mathbf d}$ is equal to $\tilde{f}_{q',k'}$ for some $q'$ and $k'$. So $f_{q,k,\mathbf b}\not\in \mathbb{R}(\mathbf d)$ if and only if $\tilde{f}_{q',k'}\not\in \mathbb{R}(\mathbf 1)$. However, by construction $\tilde{f}_{q',k'}\not\in \mathbb{R}(\mathbf 1)$.
\end{proof}
\subsection{Linear Annihilators of $f_{q,k,\mathbf b}$}
Finally, we show that while $f_{q,k,\mathbf b}$ is non-zero, it is annihilated by the variables $x_i$ and $y_j$ for a range of indices $i$ and $j$ determined by $q$, $k$, and $\mathbf b$. For example, if $\mathbf b=\mathbf 0$ then $x_{i}f_{q,k,\mathbf b}$ and $y_{j}f_{q,k,\mathbf b}$ are equal to zero for $i=0,1,\ldots,q-k-1$ and $j=0,1,\ldots,k-1$. Understanding these linear annihilators of $f_{q,k,\mathbf b}$ is crucial to the proofs of Theorem~\ref{thm:main} and Theorem~\ref{thm:main2}, as it allows us to bound $\#Z(f_{q,k,\mathbf b})$ from below, which is the key input to Corollary~\ref{cor:monomial-nonvanishing}.
\begin{prop}\label{prop:linear-annhilators}
Fix $\mathbf n=(n_1,n_2)\in \mathbb{Z}^2_{\geq1}$, $\mathbf d=(d_1,d_2)\in \mathbb{Z}^2_{\geq1}$, and $\mathbf b\in \mathbb{Z}^2$. Further fix integers $0<q\leq|\mathbf n|$ and $0\leq k\leq q$ such that $(q-k)\leq n_1$ and $k\leq n_2$. Suppose $|q-k+b_1|< d_1$ and $|k+b_2|<d_2$.
\begin{enumerate}
\item If $0\leq (q-k)+b_1$ and $0\leq k+b_2$ then
\[
\langle x_0,x_1,\ldots,x_{q-k-1},y_{0},y_{1},\ldots,y_{k-1}\rangle \subset \left(0:_{\overline{S}}f_{q,k,\mathbf b}\right).
\]
\item If $q-k+b_1<0$, $k+b_2\geq0$, and $k\neq0$ then
\[
\langle x_0,x_1,\ldots,x_{q-k-1},y_{0},y_{1},\ldots,y_{k-2}\rangle \subset \left(0:_{\overline{S}}f_{q,k,\mathbf b}\right).
\]
\item If $q-k+b_1\geq0$, $k+b_2<0$ then
\[
\langle x_0,x_1,\ldots,x_{q-k-2},y_{0},y_{1},\ldots,y_{k-1}\rangle \subset \left(0:_{\overline{S}}f_{q,k,\mathbf b}\right).
\]
\end{enumerate}
\end{prop}
\begin{proof}
We begin by proving part (1). First we handle the case when $k=0$. Fixing an integer $0\leq i\leq q-1$, we wish to show that $x_{i}f_{q,k,\mathbf b}\in \mathbb{R}(\mathbf d)$. By Proposition~\ref{prop:containment} $x_{i}f_{q,k,\mathbf b}\in \mathbb{R}(\mathbf d)$ if and only if $\left(\frac{x_if_{q,k,\mathbf b}}{\operatorname{remd}( x_if_{q,k,\mathbf b})}\right)^{1/\mathbf d}\in \mathbb{R}(\mathbf 1)$. Using that $0\leq q-k+b_1< d_1$ and $0\leq k+b_2<d_2$ and performing a computation analogous to the one in Lemma~\ref{lem:floor-of-fqk} we find:
\[
\left(\frac{x_if_{q,0,\mathbf b}}{\operatorname{remd}(x_if_{q,0,\mathbf b})}\right)^{1/\mathbf d}=\left(\frac{x_i\left(x_0\cdots x_{q-1}\right)^{d_1-1}x_{q}^{q+b_1}y_0^{b_2}\tilde{f}_{q,0}^{\mathbf d}}{\left(x_0\cdots\widehat{x}_{i}\cdots x_{q-1}\right)^{d_1-1}x_{q}^{q+b_1}y_{0}^{b_2}}\right)^{1/\mathbf d}=\left(x_i^{d_1}\tilde{f}_{q,0}^{\mathbf d}\right)^{1/\mathbf d}=x_i\tilde{f}_{q,0}.
\]
Now $x_i\tilde{f}_{q,0}$ has bi-degree $(1,q)$ and index weighted degree $i$. Since $i\leq q-1$, Theorem~\ref{thm:tri-deg-vanishing} implies that $x_i\tilde{f}_{q,0}\in \mathbb{R}(\mathbf 1)$.
Turning to the case when $k>0$, fix an integer $0\leq i\leq q-k-1$. We wish to show that $x_if_{q,k,\mathbf b}\in \mathbb{R}(\mathbf d)$. By Proposition~\ref{prop:containment} $x_{i}f_{q,k,\mathbf b}\in \mathbb{R}(\mathbf d)$ if and only if $\left(\frac{x_if_{q,k,\mathbf b}}{\operatorname{remd}(x_if_{q,k,\mathbf b})}\right)^{1/\mathbf d}\in \mathbb{R}(\mathbf 1)$. Using that $0\leq q-k+b_1< d_1$ and $0\leq k+b_2<d_2$ and performing a computation analogous to the one in Lemma~\ref{lem:floor-of-fqk} we find:
\begin{align*}
\left(\frac{x_if_{q,k,\mathbf b}}{\operatorname{remd}(x_if_{q,k,\mathbf b})}\right)^{1/\mathbf d}=\left(\frac{x_i\left(x_0\cdots x_{q-k-1}\right)^{d_1-1}x_{q-k}^{q-k+b_1}\left(y_0\cdots y_{k-1}\right)^{d_2-1}y_{k}^{k+b_2}\tilde{f}_{q,k}^{\mathbf d}}{\left(x_0\cdots\widehat{x}_{i}\cdots x_{q-k-1}\right)^{d_1-1}x_{q-k}^{q-k+b_1}\left(y_0\cdots y_{k-1}\right)^{d_2-1}y_{k}^{k+b_2}}\right)^{1/\mathbf d}
=\left(x_i^{d_1}\tilde{f}_{q,k}^{\mathbf d}\right)^{1/\mathbf d}=x_i\tilde{f}_{q,k},
\end{align*}
and so it is enough to show that $x_i\tilde{f}_{q,k}\in \mathbb{R}(\mathbf 1)$. Computing we find that $\deg(x_i\tilde{f}_{q,k})=(k+1,q-k)$ and $\operatorname{index.deg}(x_i\tilde{f}_{q,k})=k(q-k)+i$. Finally, notice that since $0\leq i\leq q-k-1$ we have that:
\[
k(q-k)+i\leq k(q-k)+(q-k-1)=(k+1)(q-k)-1,
\]
and so by Theorem~\ref{thm:tri-deg-vanishing} $\dim \overline{S}_{(k+1,q-k),k(q-k)+i}=0$, which implies that $x_i\tilde{f}_{q,k}\in \mathbb{R}(\mathbf 1)$.
The argument for the $y_j$'s is similar. Fixing an integer $0\leq j\leq k-1$, we wish to show that $y_jf_{q,k,\mathbf b}\in \mathbb{R}(\mathbf d)$. Again using Proposition~\ref{prop:containment} it is enough to show that $\left(\frac{y_jf_{q,k,\mathbf b}}{\operatorname{remd}(y_jf_{q,k,\mathbf b})}\right)^{1/\mathbf d}\in \mathbb{R}(\mathbf 1)$. A computation analogous to the one in the previous cases shows that:
\begin{align*}
\left(\frac{y_jf_{q,k,\mathbf b}}{\operatorname{remd}(y_jf_{q,k,\mathbf b})}\right)^{1/\mathbf d}=\left(\frac{y_j\left(x_0\cdots x_{q-k-1}\right)^{d_1-1}x_{q-k}^{q-k+b_1}\left(y_0\cdots y_{k-1}\right)^{d_2-1}y_{k}^{k+b_2}\tilde{f}_{q,k}^{\mathbf d}}{\left(x_0\cdots x_{q-k-1}\right)^{d_1-1}x_{q-k}^{q-k+b_1}\left(y_0\cdots\widehat{y}_{j}\cdots y_{k-1}\right)^{d_2-1}y_{k}^{k+b_2}}\right)^{1/\mathbf d}
=\left(y_j^{d_2}\tilde{f}_{q,k}^{\mathbf d}\right)^{1/\mathbf d}=y_j\tilde{f}_{q,k},
\end{align*}
and so it is enough to show that $y_{j}\tilde{f}_{q,k}\in \mathbb{R}(\mathbf 1)$. The bi-degree of this element is $(k,q-k+1)$ and its index weighted degree is $k(q-k)+j$. Finally, notice that since $0\leq j\leq k-1$ we have that:
\[
k(q-k)+j\leq k(q-k)+(k-1)=k(q-k+1)-1,
\]
and so by Theorem~\ref{thm:tri-deg-vanishing} $\dim \overline{S}_{(k,q-k+1),k(q-k)+j}=0$, which implies that $y_j\tilde{f}_{q,k}\in \mathbb{R}(\mathbf 1)$. Parts (2) and (3) follow in a similar fashion.
\end{proof}
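As a concrete instance of part (1), take $q=2$, $k=1$, and $\mathbf b=\mathbf 0$ (with $d_1,d_2>1$). The proposition then asserts that
\[
\langle x_0, y_0\rangle \subset \left(0:_{\overline{S}}f_{2,1,\mathbf 0}\right),
\]
that is, $x_0f_{2,1,\mathbf 0}$ and $y_0f_{2,1,\mathbf 0}$ both vanish in $\overline{S}$, even though $f_{2,1,\mathbf 0}$ itself is non-zero by Lemma~\ref{lem-fqkb-nonzero}.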
\section{The Key Case - $K_{p,q}\left(\P^{q-k}\times \P^{k},\mathbf b;\mathbf d\right)$}\label{sec:special-case}
In this section, we prove a special case of Theorem~\ref{thm:main}. Specifically, we fix $0\leq k\leq q$ and consider $K_{p,q}(\P^{q-k}\times \P^{k}, \mathbf b;\mathbf d)$. Following our heuristic that the non-vanishing of $K_{p,q}(\mathbf n,\mathbf b;\mathbf d)$ is controlled by subvarieties of the form $\P^{i}\times\P^{j}$ where $i+j=q$, we see that the case of $\P^{q-k}\times\P^{k}$ may be simpler than the general case, since the only subvariety of this form is $\P^{q-k}\times\P^{k}$ itself. This special case is crucial to our proof of the full theorem. In particular, our proof of the general case uses a series of arguments to reduce Theorem~\ref{thm:main} to the following special case.
\begin{theorem}\label{thm:special}
Fix integers $0\leq k\leq q$, $\mathbf d\in \mathbb{Z}_{>1}^2$, and $\mathbf b\in \mathbb{Z}^2_{\geq0}$. If $0\leq q-k+b_1< d_1$ and $0\leq k+b_2<d_2$ and
\[
\frac{d_1}{d_2}b_2-b_1<q-k+1\quad \quad \text{and} \quad \quad \frac{d_2}{d_1}b_1-b_2<k+1
\]
then $K_{p,q}((q-k,k),\mathbf b;\mathbf d)\neq0$ for $p = r_{(q-k,k),\mathbf d}-(q+1)$.
\end{theorem}
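For example, when $\mathbf b=\mathbf 0$ the two displayed inequalities hold automatically, and the remaining hypotheses reduce to
\[
q-k< d_1 \quad \quad \text{and} \quad \quad k< d_2,
\]
both of which hold for every $0\leq k\leq q$ whenever $d_1>q$ and $d_2>q$.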
Before proving Theorem~\ref{thm:special} we need two lemmas regarding Hilbert functions. The first shows that the Hilbert function of an ideal $J\subset S$ can be bounded below in terms of the number of linearly independent forms of total degree one that it contains.
\begin{lemma}\label{lem:hilbert-function-bound}
If $J\subset S$ is a homogeneous ideal and $K\subset \left\langle J_{(1,0)},J_{(0,1)}\right\rangle$ is an ideal generated by monomials of total degree one then
\[
\operatorname{HF}(\mathbf d,J)\geq r_{\mathbf n,\mathbf d}-\binom{d_1+n_1-\dim K_{(1,0)}}{n_1-\dim K_{(1,0)}}\binom{d_2+n_2-\dim K_{(0,1)}}{n_2-\dim K_{(0,1)}}.
\]
\end{lemma}
\begin{proof}
Since $K$ is generated by monomials of total degree one, the quotient $S/K$ is a bi-graded polynomial ring with $n_1-\dim K_{(1,0)}$ $x$-variables and $n_2-\dim K_{(0,1)}$ $y$-variables. It follows that $\operatorname{HF}(\mathbf d,S/K)$ equals $\binom{d_1+n_1-\dim K_{(1,0)}}{n_1-\dim K_{(1,0)}}\binom{d_2+n_2-\dim K_{(0,1)}}{n_2-\dim K_{(0,1)}}$, and since $\operatorname{HF}(\mathbf d,S)=\operatorname{HF}(\mathbf d,K)+\operatorname{HF}(\mathbf d,S/K)$, we see that $\operatorname{HF}(\mathbf d,K)$ equals the right-hand side of the displayed inequality in the lemma. Now, since $K$ is contained in $J$ we have that $\operatorname{HF}(\mathbf d,K)\leq \operatorname{HF}(\mathbf d,J)$, yielding the desired result.
\end{proof}
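For example, if $K=\langle x_0,x_1,\ldots,x_{q-k-1},y_0,y_1,\ldots,y_{k-1}\rangle$, so that $\dim K_{(1,0)}=q-k$ and $\dim K_{(0,1)}=k$, then for any homogeneous ideal $J$ containing $K$ the lemma gives
\[
\operatorname{HF}(\mathbf d,J)\geq r_{\mathbf n,\mathbf d}-\binom{d_1+n_1-(q-k)}{n_1-(q-k)}\binom{d_2+n_2-k}{n_2-k}.
\]
This is the form in which the lemma is applied in the proofs of Theorem~\ref{thm:special} and Theorem~\ref{thm:main2}.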
The second lemma concerns the Hilbert function of $\mathbb{R}(\mathbf d)$.
\begin{lemma}\label{lem:hilbert-function-I}
Fix integers $0\leq k\leq q$ and $\mathbf d\in \mathbb{Z}_{>1}^2$, and let $n_1=q-k$ and $n_2=k$. Then:
\[
\operatorname{HF}(\mathbf d,\mathbb{R}(\mathbf d))=(q-k)+k+1=q+1.
\]
\end{lemma}
\begin{proof}
Since $\mathbb{R}(\mathbf d)$ is generated in bi-degree $\mathbf d$ by $g_{0},g_{1},\ldots,g_{|\mathbf n|}$ it is enough to show that these $g_{\ell}$'s are linearly independent over $\mathbb{K}$. This follows from the fact that the index weighted degree induces a $\mathbb{Z}$-grading on $S$, and that the index weighted degree of each $g_{\ell}$ is distinct (Lemma~\ref{lem:homogeneous-wrt-idex}).
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:special}]
As we are considering the case of $\P^{q-k}\times\P^{k}$, i.e. when $\mathbf n=(q-k,k)$, throughout this proof we let $S=\mathbb{K}[x_0,x_1,\ldots,x_{q-k},y_0,y_1,\ldots,y_{k}]$. By Proposition~\ref{prop:cohen-macaulay} the inequalities on $\mathbf b$ and $\mathbf d$ in the hypothesis of the theorem ensure that $S(\mathbf b;\mathbf d)$ is Cohen-Macaulay as an $R$-module. Hence by the Artinian reduction explained in Corollary~\ref{cor:artinian-reduction} we know that there exists a natural isomorphism between $K_{p,q}((q-k,k),\mathbf b;\mathbf d)$ and $K_{p,q}^{\overline{R}}\left(\overline{S}(\mathbf b;\mathbf d)\right)$. So it is enough to prove that $K_{p,q}^{\overline{R}}\left(\overline{S}(\mathbf b;\mathbf d)\right)$ is non-zero for $p = r_{(q-k,k),\mathbf d}-(q+1)$.
We do this by applying Corollary~\ref{cor:monomial-nonvanishing} to the monomial $f_{q,k,\mathbf b}$ described in Definition~\ref{defn:fqkb}. However, before we do this we check that $f_{q,k,\mathbf b}$ satisfies the conditions needed for Corollary~\ref{cor:monomial-nonvanishing}. In particular, we check that $f_{q,k,\mathbf b}$ is a well-defined monomial, which is non-zero in $\overline{S}_{q\mathbf d+\mathbf b}$. The fact that $f_{q,k,\mathbf b}$ is a well-defined monomial follows from the fact that both coordinates of $\mathbf b=(b_1,b_2)$ are non-negative (see Lemma~\ref{lem-fqkb-well-defined}). Moreover, since $0\leq q-k+b_1<d_1$ and $0\leq k+b_2<d_2$ we know that $f_{q,k,\mathbf b}$ is non-zero as an element of $\overline{S}$ (see Lemma~\ref{lem-fqkb-nonzero}).
Thus, by the monomial methods from Corollary~\ref{cor:monomial-nonvanishing}, if $L(f_{q,k,\mathbf b})\subset Z(f_{q,k,\mathbf b})$ then $K_{p,q}^{\overline{R}}\left(\overline{S}(\mathbf b;\mathbf d)\right)\neq0$ for all $p$ in the range
\[
\#L(f_{q,k,\mathbf b})\leq p \leq \#Z(f_{q,k,\mathbf b}).
\]
In particular, using the trivial upper bound $\#L(f_{q,k,\mathbf b})\leq \#Z(f_{q,k,\mathbf b})$ gives non-vanishing for $p=\#Z(f_{q,k,\mathbf b})$. Thus, it is enough to (i) show that $L(f_{q,k,\mathbf b})\subset Z(f_{q,k,\mathbf b})$ and (ii) give a lower bound on $\#Z(f_{q,k,\mathbf b})$ that is also an upper bound on $\#L(f_{q,k,\mathbf b})$.
Towards part (i) recall that
\[
Z(f_{q,k,\mathbf b})= \left\{\begin{matrix} m \\ \text{a monomial of}\\ \text{bi-degree $\mathbf d$}\end{matrix} \; \bigg| \; mf_{q,k,\mathbf b}=0\right\}\subset \overline{S}_{\mathbf d},
\]
and so $Z(f_{q,k,\mathbf b})$ is equal to $(0:_{\overline{S}}f_{q,k,\mathbf b})_{\mathbf d}\subset \overline{S}$. By Proposition~\ref{prop:linear-annhilators} the ideal $\langle x_{0},x_{1},\ldots,x_{q-k-1},y_0,y_{1},\ldots,y_{k-1}\rangle$ is contained in $(\mathbb{R}(\mathbf d):_{S}f_{q,k,\mathbf b})$, and so the degree $\mathbf d$ part of $\langle x_{0},x_{1},\ldots,x_{q-k-1},y_0,y_{1},\ldots,y_{k-1}\rangle\overline{S}$ is contained in $Z(f_{q,k,\mathbf b})=(0:_{\overline{S}}f_{q,k,\mathbf b})_{\mathbf d}$. Thus, it is enough to show:
\[
L(f_{q,k,\mathbf b})= \left\{\begin{matrix} m \\ \text{a monomial of}\\ \text{bi-degree $\mathbf d$}\end{matrix} \; \bigg| \; \begin{matrix}\operatorname{index.deg} m\leq \operatorname{index.deg} f_{q,k,\mathbf b}\end{matrix}\right\}\subset \left(\langle {x}_{0},x_{1},\ldots,{x}_{q-k-1},{y}_0,y_{1},\ldots,{y}_{k-1}\rangle\overline{S}\right)_{\mathbf d}.
\]
Since $n_1=q-k$ and $n_2=k$, the only monomial of bi-degree $\mathbf d$ not contained in $\langle {x}_{0},x_{1},\ldots,{x}_{q-k-1},{y}_0,y_{1},\ldots,{y}_{k-1}\rangle\overline{S}$ is ${x}_{q-k}^{d_1}{y}_{k}^{d_2}$. Moreover, ${x}_{q-k}^{d_1}{y}_{k}^{d_2}=g_{|\mathbf n|}$, and so ${x}_{q-k}^{d_1}{y}_{k}^{d_2}=0$ as an element of $\overline{S}$. This gives the following containments:
\[
L(f_{q,k,\mathbf b})\subset \left(\langle {x}_{0},x_{1},\ldots,{x}_{q-k-1},{y}_0,y_{1},\ldots,{y}_{k-1}\rangle\overline{S}\right)_{\mathbf d}\subset Z(f_{q,k,\mathbf b}).
\]
Shifting our focus to step (ii), we now give a lower bound for $\#Z(f_{q,k,\mathbf b})$. Note that:
\[
\#Z(f_{q,k,\mathbf b})=\operatorname{HF}(\mathbf d,(0:_{\overline{S}}f_{q,k,\mathbf b}))=\operatorname{HF}(\mathbf d,(\mathbb{R}:_{S}f_{q,k,\mathbf b}))-\operatorname{HF}(\mathbf d,\mathbb{R}).
\]
Utilizing the fact that $\langle x_{0},x_{1},\ldots,x_{q-k-1},y_0,y_{1},\ldots,y_{k-1}\rangle$ is contained in $(\mathbb{R}:_{S}f_{q,k,\mathbf b})$ (Proposition~\ref{prop:linear-annhilators}), and that $g_{|\mathbf n|}=x_{q-k}^{d_1}y_{k}^{d_2}$ lies in $(\mathbb{R}:_{S}f_{q,k,\mathbf b})_{\mathbf d}$ but not in this linear ideal (accounting for the additional $+1$ below), together with Lemmas~\ref{lem:hilbert-function-bound} and~\ref{lem:hilbert-function-I} we get the desired result:
\begin{align*}
\#Z(f_{q,k,\mathbf b})=\operatorname{HF}(\mathbf d,(\mathbb{R} :_{S}f_{q,k,\mathbf b}))-\operatorname{HF}(\mathbf d,\mathbb{R})\geq r_{(q-k,k),\mathbf d}+1-\binom{d_1}{0}\binom{d_2}{0}-\operatorname{HF}(\mathbf d,\mathbb{R})=r_{(q-k,k),\mathbf d}-(q+1).
\end{align*}
\end{proof}
\section{Proof of Main Theorems}\label{sec:proof-of-theorem}
We are now ready to prove our main results: Theorem~\ref{thm:main}, Corollary~\ref{cor:main}, and Theorem~\ref{thm:main2}. By combining part (1) of Proposition~\ref{prop:nonvanishing-bounds} and Proposition~\ref{prop:linear-annhilators} we can now easily construct Koszul co-cycles of the form $m_1\wedge \cdots \wedge m_p\otimes f_{q,k,\mathbf b}$. However, checking that such a co-cycle is not a co-boundary is relatively difficult. Our key insight is that the issue of showing $m_1\wedge \cdots \wedge m_p\otimes f_{q,k,\mathbf b}$ is not a co-boundary can, in a sense, be reduced to the special case considered in Section~\ref{sec:special-case}.
More precisely, if we fix $0\leq q \leq |\mathbf n|$ and $0\leq k \leq q$ so that $q-k\leq n_1$ and $k\leq n_2$, then the quotient map
\[
\begin{tikzcd}[column sep = 4em]
\overline{S}\rar{\pi}& \frac{\overline{S}}{\langle x_{q-k+1},x_{q-k+2},\ldots,x_{n_1},y_{k+1},y_{k+2},\ldots,y_{n_2}\rangle}=\overline{S}'
\end{tikzcd},
\]
induces a map between Koszul complexes
\begin{center}
\begin{tikzcd}[column sep = 3.5em, row sep = 3.5em]
\cdots \rar{}& \bigwedge\nolimits^{p+1}\overline{S}_{\mathbf d}\otimes \overline{S}_{(q-1)\mathbf d+\mathbf b}\dar[two heads]{\pi}\rar{\overline{\partial}_{p+1}}&\bigwedge\nolimits^{p}\overline{S}_{\mathbf d}\otimes \overline{S}_{q\mathbf d+\mathbf b}\dar[two heads]{\pi}\rar{\overline{\partial}_p}&\bigwedge\nolimits^{p-1}\overline{S}_{\mathbf d}\otimes \overline{S}_{(q+1)\mathbf d+\mathbf b}\dar[two heads]{\pi}\rar{}&\cdots \\
\cdots \rar{}& \bigwedge\nolimits^{p+1}\overline{S}'_{\mathbf d}\otimes \overline{S}'_{(q-1)\mathbf d+\mathbf b}\rar{\overline{\partial}'_{p+1}}&\bigwedge\nolimits^{p}\overline{S}'_{\mathbf d}\otimes \overline{S}'_{q\mathbf d+\mathbf b}\rar{\overline{\partial}'_p}&\bigwedge\nolimits^{p-1}\overline{S}'_{\mathbf d}\otimes \overline{S}'_{(q+1)\mathbf d+\mathbf b}\rar{}&\cdots
\end{tikzcd}.
\end{center}
Checking directly in coordinates, one sees that this induced map is in fact a map of chain complexes:
\begin{align*}
\pi\left(\overline{\partial}_p(m_1\wedge \cdots\wedge m_{p}\otimes f)\right)&=\pi\left(\sum_{i=1}^p (-1)^{i}m_1\wedge \cdots \wedge\hat{m_i}\wedge \cdots\wedge m_{p}\otimes m_if\right)\\
&=\sum_{i=1}^p (-1)^{i}\pi(m_1)\wedge \cdots \wedge\hat{m_i}\wedge \cdots\wedge \pi(m_{p})\otimes \pi(m_if)\\
&=\sum_{i=1}^{p}(-1)^i \pi(m_1)\wedge \cdots \wedge\hat{\pi(m_i)}\wedge \cdots \wedge \pi(m_{p})\otimes \pi(m_i)\pi(f) \\
&= \overline{\partial}'_p \left(\pi(m_1)\wedge \cdots\wedge \pi(m_{p})\otimes \pi(f)\right)= \overline{\partial}'_p\left(\pi(m_1\wedge \cdots\wedge m_{p}\otimes f)\right).
\end{align*}
Chasing this diagram of Koszul complexes shows that the condition of an element $\zeta\in \bigwedge\nolimits^{p}\overline{S}_{\mathbf d}\otimes \overline{S}_{q\mathbf d+\mathbf b}$ not being a co-boundary is implied by $\pi(\zeta)$ not being a co-boundary.
\begin{lemma}\label{lem:technical-1}
Fix $\zeta\in \bigwedge\nolimits^{p}\overline{S}_{\mathbf d}\otimes \overline{S}_{q\mathbf d+\mathbf b}$. If $\pi(\zeta)\not\in \operatorname{img}\left(\overline{\partial}'_{p+1}\right)$ then $\zeta\not\in \operatorname{img}\left(\overline{\partial}_{p+1}\right)$.
\end{lemma}
\begin{proof}
Towards a contradiction suppose there exists $\alpha\in \bigwedge\nolimits^{p+1}\overline{S}_{\mathbf d}\otimes \overline{S}_{(q-1)\mathbf d+\mathbf b}$ such that $\overline{\partial}_{p+1}(\alpha)=\zeta$. Now since $\pi$ induces a chain map of Koszul complexes
\[
\overline{\partial}'_{p+1}\left(\pi(\alpha)\right)=\pi\left(\overline{\partial}_{p+1}(\alpha)\right)=\pi(\zeta)
\]
contradicting the fact that $\pi(\zeta)\not\in \operatorname{img}\left(\overline{\partial}'_{p+1}\right)$.
\end{proof}
\begin{lemma}\label{lem:technical-2}
Fix $\zeta\in \bigwedge\nolimits^{p}\overline{S}_{\mathbf d}\otimes \overline{S}_{q\mathbf d+\mathbf b}$ and $m\in \overline{S}_{\mathbf d}$. If $\zeta \not\in \operatorname{img} \overline{\partial}_{p+1,q}$ then $m\wedge \zeta\not\in \operatorname{img} \overline{\partial}_{p+2,q}$.
\end{lemma}
\begin{proof}
We prove the contrapositive: if $m\wedge \zeta\in \operatorname{img} \overline{\partial}_{p+2,q}$ then $\zeta\in \operatorname{img} \overline{\partial}_{p+1,q}$. Towards this, write $\zeta = \zeta_1\wedge \cdots \wedge \zeta_p\otimes f$, and assume that $\overline{\partial}_{p+2,q}(\alpha)=m\wedge \zeta$ for some $\alpha\in \bigwedge\nolimits^{p+2}\overline{S}_{\mathbf d}\otimes \overline{S}_{(q-1)\mathbf d+\mathbf b}$. Now we may write $\alpha$ as
\[
\alpha=m\wedge \left(\sum_{j} \xi_j\otimes g_j\right) + \sum_{i} \omega_i\otimes h_i
\]
where
\[
\xi_j\otimes g_j\in \bigwedge\nolimits^{p+1}\operatorname{span}_{\mathbb{K}}\left(\overline{S}_{\mathbf d}-\{m\}\right)\otimes \overline{S}_{(q-1)\mathbf d+\mathbf b} \quad \quad \text{and} \quad \quad \omega_i\otimes h_i \in \bigwedge\nolimits^{p+2}\operatorname{span}_{\mathbb{K}}\left(\overline{S}_{\mathbf d}-\{m\}\right)\otimes \overline{S}_{(q-1)\mathbf d+\mathbf b}.
\]
Computing we see that:
\begin{align}\label{eqn:reduction-2}
m\wedge \zeta = \overline{\partial}_{p+2,q}(\alpha) &= \overline{\partial}_{p+2,q}\left(m\wedge \left(\sum_{j} \xi_j\otimes g_j\right) + \sum_{i} \omega_i\otimes h_i\right)=\overline{\partial}_{p+2,q}\left(m\wedge \left(\sum_{j} \xi_j\otimes g_j\right)\right) + \overline{\partial}_{p+2,q}\left(\sum_{i} \omega_i\otimes h_i\right)\nonumber\\
&= \underbrace{-m\wedge \left(\overline{\partial}_{p+1,q}\left(\sum_{j} \xi_j\otimes g_j\right)\right)}_{\text{I}}\;\;+\;\;\underbrace{\sum_{j}-\xi_j \otimes mg_j+\overline{\partial}_{p+2,q}\left(\sum_{i} \omega_i\otimes h_i\right)}_{\text{II}}.
\end{align}
Now Part II of the above equation is entirely contained in the vector subspace $\bigwedge\nolimits^{p+1}\operatorname{span}_{\mathbb{K}}\left(\overline{S}_{\mathbf d}-\{m\}\right)\otimes \overline{S}_{q\mathbf d+\mathbf b}$, whereas $m\wedge\zeta$ and Part I both have $m$ as a wedge factor, and hence Part II must equal zero. So Equation~\eqref{eqn:reduction-2} simplifies to:
\[
m\wedge \zeta = -m\wedge \left(\overline{\partial}_{p+1,q}\left(\sum_{j} \xi_j\otimes g_j\right)\right).
\]
In particular, we see that $\overline{\partial}_{p+1,q}\left(-\sum_{j} \xi_j\otimes g_j\right)=\zeta$, and so as claimed $\zeta \in \operatorname{img} \overline{\partial}_{p+1,q}$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:main2}]
By Proposition~\ref{prop:cohen-macaulay} the inequalities on $\mathbf b$ and $\mathbf d$ in the hypothesis of the theorem ensure that $S(\mathbf b;\mathbf d)$ is Cohen-Macaulay as an $R$-module. In particular, Proposition~\ref{prop:regular-sequence} implies that $\ell_{0},\ell_{1},\ldots,\ell_{|\mathbf n|}$ is a linear regular sequence on $S(\mathbf b;\mathbf d)$, and so the Artinian reduction argument described in Corollary~\ref{cor:artinian-reduction} shows that quotienting by $\langle \ell_0,\ell_1,\ldots,\ell_{|\mathbf n|}\rangle$ induces an isomorphism between $K_{p,q}(\mathbf n,\mathbf b;\mathbf d)$ and $K_{p,q}^{\overline{R}}\left(\overline{S}(\mathbf b;\mathbf d)\right)$.
Thus, it is enough to prove the desired non-vanishing for $K_{p,q}^{\overline{R}}\left(\overline{S}(\mathbf b;\mathbf d)\right)$. We do this by first using the special non-trivial syzygy on $\P^{q-k}\times\P^{k}$ constructed in Theorem~\ref{thm:special} together with the lifting argument in Lemma~\ref{lem:technical-1} to construct a single non-trivial syzygy on $\P^{\mathbf n}$. We then construct other non-zero syzygies from this initial non-zero syzygy by Lemma~\ref{lem:technical-2}.
Set $\delta = r_{(q-k,k),\mathbf d}-(q+1)$, and choose $\delta$ distinct non-zero degree $\mathbf d$ monomials $m_1,m_2,\ldots,m_{\delta}$ contained in the ideal $\langle x_0,x_1,\ldots,x_{q-k-1},y_0,y_1,\ldots,y_{k-1}\rangle\overline{S} \cap \mathbb{K}[x_0,x_1,\ldots,x_{q-k},y_0,y_1,\ldots,y_k]$. We wish to show that $\zeta = m_1\wedge \cdots \wedge m_{\delta} \otimes f_{q,k,\mathbf b}$ represents a non-zero class in $K_{\delta,q}^{\overline{R}}\left(\overline{S}(\mathbf b;\mathbf d)\right)$. That is, $\zeta$ represents a non-zero class in the cohomology of the following chain complex
\begin{equation}\label{eq:pf-chain}
\begin{tikzcd}[column sep = 3.5em, row sep = 3.5em]
\cdots \rar{}& \bigwedge\nolimits^{\delta+1}\overline{S}_{\mathbf d}\otimes \overline{S}_{(q-1)\mathbf d+\mathbf b}\rar{\overline{\partial}_{\delta+1}}&\bigwedge\nolimits^{\delta}\overline{S}_{\mathbf d}\otimes \overline{S}_{q\mathbf d+\mathbf b}\rar{\overline{\partial}_\delta}&\bigwedge\nolimits^{\delta-1}\overline{S}_{\mathbf d}\otimes \overline{S}_{(q+1)\mathbf d+\mathbf b}\rar{}&\cdots.
\end{tikzcd}
\end{equation}
Towards this we first show that $\zeta$ is well-defined and non-zero, which amounts to checking the same for $f_{q,k,\mathbf b}$. Since $b_1\geq0$ and $b_2\geq0$ we know by Lemma~\ref{lem-fqkb-well-defined} that $f_{q,k,\mathbf b}$ is a well-defined monomial in $\overline{S}_{q\mathbf d+\mathbf b}$. Moreover, since $0\leq q-k+b_1<d_1$ and $0\leq k+b_2<d_2$ we know that $f_{q,k,\mathbf b}$ is non-zero as an element of $\overline{S}$ (see Lemma~\ref{lem-fqkb-nonzero}).
Having shown that $\zeta$ is well-defined, we turn to proving that $\zeta$ is not in the image of $\overline{\partial}_{\delta+1}$. We do this by considering $\pi(\zeta)\in\bigwedge\nolimits^{\delta}\overline{S}'_{\mathbf d}\otimes \overline{S}'_{q\mathbf d+\mathbf b}$ where $\pi$ is as defined in the beginning of this section. Using the inductive structure described in Lemma~\ref{lem:induction-ideal} we know that $\overline{S}'$ is exactly $\overline{S}$ in the case when $n_1=q-k$ and $n_2=k$. Thus, the Koszul complex
\begin{equation}\label{eq:reduction}
\begin{tikzcd}[column sep = 3.5em, row sep = 3.5em]
\cdots \rar{}& \bigwedge\nolimits^{\delta+1}\overline{S}'_{\mathbf d}\otimes \overline{S}'_{(q-1)\mathbf d+\mathbf b}\rar{\overline{\partial}'_{\delta+1}}&\bigwedge\nolimits^{\delta}\overline{S}'_{\mathbf d}\otimes \overline{S}'_{q\mathbf d+\mathbf b}\rar{\overline{\partial}'_\delta}&\bigwedge\nolimits^{\delta-1}\overline{S}'_{\mathbf d}\otimes \overline{S}'_{(q+1)\mathbf d+\mathbf b}\rar{}&\cdots
\end{tikzcd}
\end{equation}
actually computes $K_{\delta,q}((q-k,k),\mathbf b;\mathbf d)$. Moreover, by construction one sees that $\pi(\zeta)$ represents one of the non-trivial syzygies constructed in Theorem~\ref{thm:special}. In particular, $\pi(\zeta)$ represents a non-zero element in the cohomology of complex~\eqref{eq:reduction} above. This means that
$\pi(\zeta)$ is not in the image of $\overline{\partial}'_{\delta+1}$.
Now by Lemma~\ref{lem:technical-1} the fact that $\pi(\zeta)$ is not in the image of $\overline{\partial}'_{\delta+1}$ implies that $\zeta\in \bigwedge\nolimits^{\delta}\overline{S}_{\mathbf d}\otimes \overline{S}_{q\mathbf d+\mathbf b}$ is not in the image of $\overline{\partial}_{\delta+1}$. Thus, to show that $\zeta$ is a non-trivial syzygy on $\P^{\mathbf n}$, i.e. a non-zero element of the cohomology of complex \eqref{eq:pf-chain} above, we must show that $\zeta$ is in the kernel of $\overline{\partial}_{\delta}$. Using our description of the annihilators of $f_{q,k,\mathbf b}$ given in Proposition~\ref{prop:linear-annhilators} we know that $m_1,m_2,\ldots,m_{\delta}$ annihilate $f_{q,k,\mathbf b}$. So part (1) of Proposition~\ref{prop:nonvanishing-bounds} implies that $\overline{\partial}_{\delta}(\zeta)=0$. Hence $\zeta$ represents a non-trivial class in $K_{\delta,q}^{\overline{R}}\left(\overline{S}(\mathbf b;\mathbf d)\right)$.
We now use Lemma~\ref{lem:technical-2} to construct other non-trivial syzygies from $\zeta$. In particular, by inductively applying Lemma~\ref{lem:technical-2} we know that if $(n_1\wedge\cdots\wedge n_t)\wedge \zeta$ is non-zero then $(n_1\wedge\cdots\wedge n_t)\wedge \zeta$ is not in the image of $\overline{\partial}_{\delta+t+1,q}$. Thus, as long as $(n_1\wedge\cdots\wedge n_t)\wedge \zeta$ remains non-zero and in the kernel of $\overline{\partial}_{\delta+t,q}$ it will represent a non-trivial class in $K_{\delta+t,q}^{\overline{R}}(\overline{S}(\mathbf b;\mathbf d))$.
Using the description of the annihilators of $f_{q,k,\mathbf b}$ given in Proposition~\ref{prop:linear-annhilators} together with part (1) of Proposition~\ref{prop:nonvanishing-bounds} we know that $(n_1\wedge\cdots\wedge n_t)\wedge \zeta$ will be in the kernel of $\overline{\partial}_{\delta+t,q}$ so long as $n_i\in \langle {x}_{0},{x}_1,\ldots,{x}_{q-k-1},{y}_0,{y}_1,\ldots,{y}_{k-1}\rangle\overline{S}$ for all $i$. Further, $(n_1\wedge\cdots\wedge n_t)\wedge \zeta$ will be non-zero provided that $n_1,n_2,\ldots,n_t,m_1,m_2,\ldots,m_{\delta}$ are pairwise distinct in $\overline{S}$. As these are all monomials of bi-degree $\mathbf d$ contained in $\langle {x}_{0},{x}_1,\ldots,{x}_{q-k-1},{y}_0,{y}_1,\ldots,{y}_{k-1}\rangle\overline{S}$ the number of such elements is controlled by the Hilbert function of this ideal. Using Lemma~\ref{lem:hilbert-function-bound} to compute the Hilbert function of $\langle {x}_{0},{x}_1,\ldots,{x}_{q-k-1},{y}_0,{y}_1,\ldots,{y}_{k-1}\rangle\overline{S}$ we see that we can construct a non-trivial class in $K_{\delta+t,q}^{\overline{R}}(\overline{S}(\mathbf b;\mathbf d))$ whenever
\[
\delta+t\leq \operatorname{HF}\left(\mathbf d, \langle {x}_{0},{x}_1,\ldots,{x}_{q-k-1},{y}_0,{y}_1,\ldots,{y}_{k-1}\rangle\overline{S}\right),
\]
where
\begin{align*}
\operatorname{HF}\left(\mathbf d, \langle {x}_{0},{x}_1,\ldots,{x}_{q-k-1},{y}_0,{y}_1,\ldots,{y}_{k-1}\rangle\overline{S}\right)&\geq \operatorname{HF}\left(\mathbf d,\langle {x}_{0},{x}_1,\ldots,{x}_{q-k-1},{y}_0,{y}_1,\ldots,{y}_{k-1}\rangle S\right)-\operatorname{HF}\left(\mathbf d,\mathbb{R}(\mathbf n,\mathbf d)\right)\\
&=r_{\mathbf n,\mathbf d}-\binom{d_1+n_1-(q-k)}{n_1-(q-k)}\binom{d_2+n_2-k}{n_2-k}-(|\mathbf n|+1).
\end{align*}
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:main}]
This follows immediately from Theorem~\ref{thm:main2} with $\mathbf b=\mathbf 0$.
\end{proof}
\begin{proof}[Proof of Corollary~\ref{cor:main}]
By Theorem~\ref{thm:main} if $d_1>q$ and $d_2>q$ then
\begin{align*}
\rho_q(\mathbf n;\mathbf d)\geq&1-\frac{\min\left\{\displaystyle \binom{d_1+n_1-i}{n_1-i}\binom{d_2+n_2-j}{n_2-j} \;\;\; \bigg| \;\;\; \begin{matrix} i+j=q \\ 0\leq i \leq n_1 \\ 0\leq j \leq n_2 \end{matrix}\right\}}{r_{\mathbf n,\mathbf d}}-\frac{\min\left\{\displaystyle \binom{d_1+i}{i}\binom{d_2+j}{j} \;\;\; \bigg| \;\;\; \begin{matrix}i+j=q \\ 0\leq i \leq n_1 \\ 0\leq j \leq n_2 \end{matrix}\right\}}{r_{\mathbf n,\mathbf d}} - \frac{|\mathbf n|-q-1}{r_{\mathbf n,\mathbf d}}\\
\\
\geq&1-\sum_{\substack{i+j=q \\ 0\leq i \leq n_1 \\ 0\leq j \leq n_2}} \left(\frac{\displaystyle \binom{d_1+n_1-i}{n_1-i}\binom{d_2+n_2-j}{n_2-j}}{r_{\mathbf n,\mathbf d}}+\frac{\displaystyle \binom{d_1+i}{i}\binom{d_2+j}{j}}{r_{\mathbf n,\mathbf d}}\right)- \frac{|\mathbf n|-q-1}{r_{\mathbf n,\mathbf d}}.
\end{align*}
The result follows by noting that $\binom{d+n}{n}=\frac{d^n}{n!}+O(d^{n-1})$ and $r_{\mathbf n,\mathbf d}=O(d_1^{n_1}d_2^{n_2})$.
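To spell out one instance of these asymptotics: for each pair $(i,j)$ appearing in the sum,
\[
\frac{\binom{d_1+n_1-i}{n_1-i}\binom{d_2+n_2-j}{n_2-j}}{r_{\mathbf n,\mathbf d}}=O\left(\frac{1}{d_1^{i}d_2^{j}}\right),
\]
since the numerator is $O\left(d_1^{n_1-i}d_2^{n_2-j}\right)$ while $r_{\mathbf n,\mathbf d}$ grows like $d_1^{n_1}d_2^{n_2}$; as $i+j=q$, for $q\geq1$ each such ratio tends to $0$ as $d_1,d_2\to\infty$.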
\end{proof}
\begin{bibdiv}
\begin{biblist}
\bib{aprodu10}{book}{
author={Aprodu, Marian},
author={Nagel, Jan},
title={Koszul cohomology and algebraic geometry},
series={University Lecture Series},
volume={52},
publisher={American Mathematical Society, Providence, RI},
date={2010},
pages={viii+125},
isbn={978-0-8218-4964-4},
review={\MR{2573635}},
}
\bib{berkesch13}{article}{
author={Berkesch Zamaere, Christine},
author={Erman, Daniel},
author={Kummini, Manoj},
author={Sam, Steven V.},
title={Tensor complexes: multilinear free resolutions constructed from
higher tensors},
journal={J. Eur. Math. Soc. (JEMS)},
volume={15},
date={2013},
number={6},
pages={2257--2295},
issn={1435-9855},
review={\MR{3120743}},
doi={10.4171/JEMS/421},
}
\bib{berkesch17}{article}{
author={Berkesch Zamaere, Christine},
author={Erman, Daniel},
author={Smith, Gregory G.},
title={Virtual resolutions for a product of projective spaces},
date={2017},
note={ArXiv pre-print: \url{https://arxiv.org/abs/1703.07631}}
}
\end{biblist}
\end{bibdiv}
\end{document} |
\section{Introduction}
Longitudinal or clustered data, where observations within a unit (cluster) are more correlated than observations from other units (clusters), are very common in areas such as social science and medical research. Further, the data may contain a large number of correlated features relative to the number of observations (high dimensional data). The goal of this paper is to extend tree-based algorithms to high dimensional longitudinal data with correlated features and to develop a relatively interpretable data mining technique for feature selection and prediction.
Tree based algorithms began to gain momentum with the appearance of the CART (classification and regression trees) algorithm (Breiman et al., 1984) \cite{breiman1984classification}.
They are widely used in statistical machine learning due to their interpretability, relatively high computational efficiency, and their nonparametric and nonlinear nature. Briefly, a binary decision-tree algorithm recursively partitions the feature space into relatively pure nodes using a splitting criterion such as entropy or Gini impurity, searching over every variable and every possible split point until a pre-specified stopping criterion is met, and builds a piece-wise model on each subset of the data. The algorithm is greedy by nature and does not take correlation or longitudinal structure into account.
Segal \cite{segal1992tree} made the first attempt to deal with longitudinal data by using regression trees, proposing a new split function that depends on the covariance structure of multiple responses. However, this method cannot deal with time-varying covariates (only the responses, and not the covariates, vary with time in his setting) and all the observations within a unit end up in one terminal node. Mixed-effects longitudinal trees (MELT) (Cho et al., 2014) \cite{eo2014tree} fully explore the shape of the data with respect to time by fitting low-degree polynomials and splitting on the coefficients. The objective of MELT is to identify different time trends among units. However, MELT only deals with time-invariant covariates, and is not optimized for prediction.
Sela and Simonoff (2012) \cite{sela2012re} proposed the RE-EM tree, which uses a random effects model to deal with the longitudinal structure, with the fixed effect modelled as a standard CART regression tree. The random effects and fixed effects are estimated alternately, in a manner similar to the EM algorithm. Later, a new version of the RE-EM tree was proposed by Simonoff and Fu (2015) \cite{fu2015unbiased}, in which the fixed-effect implementation was replaced by the conditional inference trees of Hothorn et al. (2006) \cite{hothorn2006unbiased} to reduce bias. The RE-EM tree can deal with time-varying covariates, and observations within a unit can end up in different terminal nodes.
The generalized linear mixed-effects model tree (GLMM tree) algorithm (Fokkema et al., 2017) \cite{fokkema2018detecting} adopts a more general approach than the RE-EM tree. The GLMM tree also uses a random-effects approach, but models the fixed effect as a piece-wise generalized linear model, that is, as a regression model tree with a generalized linear model, instead of a constant, at each leaf. The fixed effects and random effects are estimated alternately, one after the other, until convergence. The GLMM tree provides more flexibility in the fixed-effect model and can be used to detect treatment effects (see Section \ref{sec:design of simulations}). The GLMM tree approach is discussed in more detail in Section \ref{sec:GLMM Tree}.
It is known that Random Forest variable selection is biased when there is correlation among the features. Fuzzy Forests (Conn, Ramirez et al., 2015) \cite{conn2015fuzzy} was developed to address correlation within the predictors in the setting where the number of features is much greater than the number of observations ($p \gg n$). The first step in Fuzzy Forests is to explicitly cluster features using weighted correlation networks \cite{zhang2005general} (reviewed in Section \ref{sec: WGCNA}). Then a feature screening step is conducted within each cluster using Recursive Feature Elimination Random Forests (RFE-RFs) \cite{diaz2006gene}. Finally, a feature selection step is carried out among the features that survive the screening step, allowing clusters to interact with each other. The screening and selection steps enable Fuzzy Forests to select features in a relatively unbiased way in the presence of highly correlated features. The Fuzzy Forests methodology has been used in a number of applied research articles, for example \cite{conn_ramirez2016, kim-alvarez-ramirez, ramirez-abrajano-alvarez2019}.
This paper proposes the Fuzzy Random Effect Estimation tree (FREEtree), which takes advantage of the powerful feature selection approach of Fuzzy Forests, as well as the flexible framework of the GLMM tree, to deal with the longitudinal structure of data.
The remainder of the article is organized as follows: Section \ref{sec: review of WGCNA and GLMM} reviews the building blocks of FREEtree before Section \ref{sec: FREEtree method} explains the FREEtree algorithm in detail. Section \ref{sec: Simulation results} presents simulation results for FREEtree on two simulated data sets, one with a time-treatment interaction and one without. Section \ref{Discussion and future work} discusses directions for future research, and the final section concludes the paper.
\section{A review of WGCNA and GLMM tree}
\label{sec: review of WGCNA and GLMM}
\subsection{WGCNA}
\label{sec: WGCNA}
Weighted correlation network analysis (WGCNA) has been used in many applications to examine the network structure of covariates \citep{langerfelder2008wgcna, pei2017wgcna,Gudenas2015wgcna}. It is an unsupervised learning method.
To construct the network, WGCNA proceeds as follows: (1) Choose a similarity function for features $X^{u}$ and $X^{v}$, denoted by $s_{uv}$; a common choice is $Corr(X^{u},X^{v})$, where Corr is the Pearson correlation. Then compute the similarity matrix $S = [s_{uv}]$. (2) Transform the similarity matrix $S$ into the adjacency matrix $A = [a_{uv}]$, where $a_{uv} = s_{uv}^{\beta}$, which yields a soft-thresholding network; $\beta$ is chosen according to the scale-free topology criterion \citep{zhang2005general}. (3) Convert the adjacency matrix $A$ into the topological overlap matrix (TOM) $W$ through Eq.\eqref{eq:TOM}, where $q_{uv}=\sum_{r=1}^{p}a_{ur}{a_{rv}}$ and $c_{u} = \sum_{r=1}^{p} a_{ur}$. (4) Apply a hierarchical clustering algorithm to the TOM to find clusters. Hierarchical clustering uses the TOM rather than the adjacency matrix $A$ because the TOM may lead to more distinct modules \citep{zhang2005general}.
\begin{equation}\label{eq:TOM}
w_{uv} = \frac{q_{uv}+a_{uv}}{\min\{c_{u},c_{v}\}+1-a_{uv}}
\end{equation}
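As a concrete illustration, steps (1)--(3) can be sketched in a few lines of Python. This is an illustrative stand-in for the R \texttt{WGCNA} package, not its implementation; the unsigned similarity $|Corr|$ and the value $\beta = 6$ are assumptions made for the example, and step (4) would then apply hierarchical clustering to the dissimilarity $1 - W$:

```python
import numpy as np

def tom_matrix(X, beta=6):
    """Similarity -> soft-threshold adjacency -> topological overlap matrix.

    X has one row per observation and one column per feature.
    """
    S = np.abs(np.corrcoef(X, rowvar=False))   # (1) similarity s_uv (unsigned choice)
    A = S ** beta                              # (2) adjacency a_uv = s_uv^beta
    np.fill_diagonal(A, 0.0)                   # no self-connections
    Q = A @ A                                  # q_uv = sum_r a_ur * a_rv
    c = A.sum(axis=0)                          # connectivity c_u = sum_r a_ur
    W = (Q + A) / (np.minimum.outer(c, c) + 1.0 - A)   # (3) Eq. (TOM)
    np.fill_diagonal(W, 1.0)
    return W
```

The resulting $W$ is symmetric with entries in $[0,1]$, and correlated feature pairs receive a much larger topological overlap than unconnected ones, which is what the clustering in step (4) exploits.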
WGCNA \citep{zhang2005general} can be used to cluster covariates so that covariates within each module are highly correlated and covariates from different modules are approximately uncorrelated. (We will often use the term genes for features in this paper, to remain consistent with the genetics literature.) Covariates that are not assigned to any cluster are placed in the grey module; each covariate in the grey module is roughly uncorrelated with all other covariates and can be viewed as a cluster on its own. Thus, in the context of machine learning, we can view each feature as a gene, and WGCNA can therefore identify modules of highly correlated features.
\subsection{GLMM tree}
\label{sec:GLMM Tree}
The rationale behind the generalized linear mixed-effects model tree (GLMM tree) \citep{fokkema2018detecting} is that a single global generalized linear mixed-effects model may not fit the data well. However, if additional splitting variables are available, we can fit the data with piece-wise models by partitioning the data on these splitting variables.
For example, suppose that in our dataset the $t$\textsuperscript{th} observation of cluster $i$ consists of covariates $x_{it}$ and response $y_{it}$. Cluster $i$ may stand for the i\textsuperscript{th} patient and $t$, the time of the measurement. Then a global Generalized Linear Mixed-Effects model (GLMM) is given by
\begin{equation}\label{eq:global GLM}
E[y_{it}|x_{it}] = \mu_{it};\quad g(\mu_{it}) = x_{it}^{T}\beta + z_{i}^{T}b_{i};
\end{equation}
where $g$ is the link function and $\beta$ is a vector of fixed-effect regression coefficients (not to be confused with the power $\beta$ used in WGCNA). For a mixed-effects model with only a random intercept, $z_{i}$ is simply the constant 1 and $b_{i}$ is the random intercept associated with cluster $i$. When random slopes are involved, $z_{i}$ is the design vector, a subvector of $x_{it}$, and $b_{i}$ is the random vector whose components correspond to the random deviations of the slopes from the fixed effects. For simplicity, from now on we assume that the link function $g$ is the identity and adopt the mixed-effects model with only a random intercept. That is, we use a linear mixed-effects model with only a random intercept, as follows:
\begin{equation}\label{eq:global LMM}
\mu_{it} = x_{it}^{T}\beta + b_{i}
\end{equation}
In many cases, the linear mixed-effects model (LMM) in Eq.\eqref{eq:global LMM} may not fit the data well because the assumption that the underlying fixed-effect model is a linear function is too restrictive. It often makes more sense to approximate the fixed-effect structure with a piece-wise linear model instead of a global linear model. The GLMM tree uses the model-based recursive partitioning (MOB) algorithm \citep{zeileis2008model}, which partitions the dataset using splitting variables and finds better-fitting local LMM models. MOB iterates the following: fit a parametric model (such as an LMM) to the dataset, then apply parameter stability tests to the splitting variables, computing a p-value for every splitting variable. If the smallest p-value is below the significance level $\alpha$, the dataset is split into two subsets using the splitting variable with the smallest p-value, with the split point for that variable chosen to minimize the instability. Therefore only significant splitting variables are used for splitting at the nodes of a GLMM tree. More details of the parameter stability test are described by Zeileis \cite{zeileis2008model}. The resulting GLMM tree has the following form,
\begin{equation}\label{eq:local LMM}
\mu_{it} = x_{it}^{T}\beta_{j(it)} + b_{i}
\end{equation}
where $j(it)$ is the index of the terminal node that the $t$\textsuperscript{th} observation of cluster $i$ belongs to. Note that the fixed effect is now a piece-wise linear function of the covariates and the random intercept is global in the sense that it depends only on the cluster, not on the terminal node. The GLMM tree is trained by iteratively estimating the fixed effects (the tree and its leaf models) with the random effects assumed known, and then estimating the random effects with the fixed effects assumed known, until convergence.
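The alternating scheme can be illustrated with a minimal, self-contained Python sketch in which a depth-one regression tree (a stump) stands in for the MOB-fitted tree and the random intercepts are updated as unshrunk per-cluster residual means. The actual \pkg{glmertree} implementation uses parameter stability tests and \code{lmer}-style shrinkage, so this is a caricature of the estimation loop, not the algorithm itself:

```python
import numpy as np

def fit_stump(x, y):
    """Best single-split (depth-1) regression tree on one covariate."""
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    best = (np.inf, None)
    for i in range(1, len(xs)):
        if xs[i] == xs[i - 1]:
            continue
        left, right = ys[:i], ys[i:]
        sse = ((left - left.mean())**2).sum() + ((right - right.mean())**2).sum()
        if sse < best[0]:
            best = (sse, ((xs[i - 1] + xs[i]) / 2, left.mean(), right.mean()))
    cut, lmean, rmean = best[1]
    return lambda z: np.where(z <= cut, lmean, rmean)

def lmm_stump(x, y, groups, n_iter=10):
    """Alternate between the fixed-effect tree (here a stump) and
    per-cluster random intercepts, as in the GLMM tree training loop."""
    b = {g: 0.0 for g in np.unique(groups)}
    for _ in range(n_iter):
        offset = np.array([b[g] for g in groups])
        tree = fit_stump(x, y - offset)      # fixed effect, b held fixed
        resid = y - tree(x)
        for g in b:                          # unshrunk intercept update (sketch)
            b[g] = resid[groups == g].mean()
    return tree, b
```

On data generated as a step function in $x$ plus cluster intercepts, a few iterations recover both the step and the intercept differences between clusters.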
The R package \pkg{glmertree} \citep{fokkema2018detecting} implements the GLMM tree. In the following sections we use the LMM tree for simplicity; that is, we assume the link function $g$ is the identity, and use the function \code{lmertree()} from this package.
\section{The FREEtree estimation method}
\label{sec: FREEtree method}
The goal of FREEtree is feature selection, followed by prediction using the selected features. The advantage of having fewer features is parsimony and increased interpretability. At the heart of the algorithm lies a binary decision-tree splitting strategy that is easily interpretable. While CART and many other methods are usually biased towards selecting correlated features while ignoring independent ones, FREEtree reduces this bias by clustering features by their correlation pattern and screening features within each cluster, while still allowing features to interact. The resulting features are used to fit a LMM tree, which includes a linear regression model at each leaf together with a random effect at the patient level. The predictive power mostly comes from the LMM tree, which fits the data with a piece-wise linear function of the covariates plus a random effect, instead of a piece-wise constant function as in CART and the RE-EM tree. However, in order to regress on covariates, feature selection is necessary because linear regression requires the sample size to be sufficiently larger than the number of parameters for identifiability. FREEtree integrates feature selection and prediction in a natural way and is particularly useful when $p$ is larger than $n$.
\subsection{Notation}
The training dataset consists of patients $i = 1,2,\ldots,n$, who are measured at times $t = 1,2,\ldots,T$. To simplify the notation, we assume balanced data here, though this is not required for the FREEtree algorithm. Each patient has three types of features:
\begin{itemize}
\item \texttt{var\char`_select} X: Candidate features ($p$ in number) from which important features will be selected.
\item \texttt{fixed\char`_regress} R: Features that will be used as regression variables in every tree. In longitudinal settings, these could be time or higher-order terms in time.
\item \texttt{fixed\char`_split} S: Features that will be considered as splitting variables in every tree.
\end{itemize}
The values of the features for patient $i$ at time $t$ are denoted by $x_{it}$, $r_{it}$ and $s_{it}$ respectively. Note that \texttt{var\char`_select}, \texttt{fixed\char`_regress} and \texttt{fixed\char`_split} may be empty. The decision of which type each feature belongs to is left to the user. The goal of FREEtree is to select important features from \texttt{var\char`_select} and to use the selected features, together with \texttt{fixed\char`_regress} and \texttt{fixed\char`_split}, to produce the final prediction.
\subsection{The FREEtree algorithm}
\label{sec:Fuzzy Strategy}
The FREEtree algorithm consists of a feature selection step and a prediction step. First assume that \texttt{fixed\char`_regress} is not empty; the case where it is empty is discussed in Section \ref{sec:PC Strategy}.
The feature selection stage consists of three steps: clustering, screening, and selection. In the clustering step, the features in \texttt{var\char`_select} are clustered by WGCNA into modules: a grey module and some number of non-grey modules, where the number is not known a priori. The grey module includes all covariates that have low connectivity and can be viewed as roughly independent. Features within the same non-grey module are highly correlated/connected with each other and have lower correlation or connectivity with features from other modules. Let there be $m$ modules selected by WGCNA. Denote the modules of \texttt{var\char`_select} by $\{P_{1},...,P_{m}\}$ and let $p_{l} = |P_{l}|$, so that $\sum_{l=1}^{m} p_{l} = p$. Without loss of generality, denote the last module $P_{m}$ as the grey module.
In the screening step, features are selected within each module as follows: for module $l$ ($l=1,2,...,m$), use \texttt{fixed\char`_regress} as regression variables and $P_{l}$ together with \texttt{fixed\char`_split} as splitting variables to fit a LMM tree. The screened features from module $l$ are the set of features $P_{l}^{S}$ used in the LMM tree that are not included in \texttt{fixed\char`_split}. The result of the screening step is a set of screened features $\{P_{1}^{S},...,P_{m}^{S}\}$.
The final selection step allows the screened features from different modules to interact with each other. FREEtree uses all of the screened features $\{P_{1}^{S},...,P_{m}^{S}\}$ together with \texttt{fixed\char`_split} as splitting variables, and \texttt{fixed\char`_regress} as regression variables, to fit a LMM tree. The final selected features from \texttt{var\char`_select} are the features used by this LMM tree that are not included in \texttt{fixed\char`_split}, denoted by $X^{S}$.
Finally, in the prediction step, a LMM tree is fitted using \texttt{fixed\char`_split} and $X^{S}$ as splitting variables and \texttt{fixed\char`_regress} and $X^{S}$ as regression variables. The prediction is provided by this final LMM tree. Note that the final selected features $X^{S}$ from \texttt{var\char`_select} are used both as splitting and as regression variables, which fits the data more flexibly than regressing on \texttt{fixed\char`_regress} alone.
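To make the pipeline concrete, here is a deliberately compressed Python sketch of the cluster-screen-select flow. Everything in it is a stand-in: a greedy correlation-threshold clustering replaces WGCNA, a single-split ("stump") sum-of-squared-errors criterion replaces the LMM tree's significance-based splits, and the random effects and regression leaves are omitted entirely; only the shape of the steps mirrors FREEtree:

```python
import numpy as np

def stump_sse(x, y):
    """Residual SSE of the best single split of y on x."""
    ys = y[np.argsort(x)]
    best = np.inf
    for i in range(1, len(ys)):
        l, r = ys[:i], ys[i:]
        best = min(best, ((l - l.mean())**2).sum() + ((r - r.mean())**2).sum())
    return best

def cluster_features(X, thresh=0.7):
    """Greedy WGCNA stand-in: correlated groups; leftovers form the grey module."""
    C = np.abs(np.corrcoef(X, rowvar=False))
    unassigned, nongrey, grey = set(range(X.shape[1])), [], []
    while unassigned:
        j = unassigned.pop()
        mod = {j} | {k for k in unassigned if C[j, k] >= thresh}
        unassigned -= mod
        if len(mod) > 1:
            nongrey.append(sorted(mod))
        else:
            grey.append(j)
    return nongrey, grey

def freetree_sketch(X, y, gain=0.2):
    """Screen within each correlated module, then select across modules."""
    null_sse = ((y - y.mean())**2).sum()
    sse = lambda j: stump_sse(X[:, j], y)
    nongrey, grey = cluster_features(X)
    screened = [min(m, key=sse) for m in nongrey]  # screening within modules
    candidates = screened + grey                   # selection: modules interact
    return sorted(j for j in candidates if sse(j) < (1 - gain) * null_sse)
```

On data with two correlated blocks and a few independent features, the sketch keeps one representative of the informative block and the informative independent feature, while discarding the uninformative block, which is the bias-reduction behaviour the screening step is designed to produce.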
\subsection{Another strategy for feature selection}
\label{sec:Non Fuzzy Strategy}
The screening and selection steps help reduce bias in feature selection by eliminating features in correlated modules and thus protecting independent features from being ignored by the LMM tree. However, if the number of non-grey modules is large and many correlated features remain after the screening step, the independent features are still in danger of being ignored at the selection step. To further protect independent features, an alternative feature selection strategy is proposed, which is particularly helpful when the number of correlated features is large compared with the number of independent features. Users can set \code{Fuzzy=False} to use this strategy; if \code{Fuzzy=True}, the strategy in Section \ref{sec:Fuzzy Strategy} is adopted.
In the screening step, the features within each non-grey module $\{P_{1},...,P_{m-1}\}$ are screened into $\{P_{1}^{S},...,P_{m-1}^{S}\}$. That is, use $P_{l}$ ($l = 1,2,...,m-1$) together with \texttt{fixed\char`_split} as splitting variables and \texttt{fixed\char`_regress} as regression variables to fit a LMM tree, and keep the features $P_{l}^{S}$ used by the tree that are not contained in \texttt{fixed\char`_split}. Note that for now we do not screen within the grey module $P_{m}$. We then select features from the set of screened features $\{P_{1}^{S},...,P_{m-1}^{S}\}$ of the non-grey modules, by using all of the screened features and \texttt{fixed\char`_split} as splitting variables and \texttt{fixed\char`_regress} as regression variables to fit a LMM tree. This selection step allows the non-grey modules to interact with each other, producing $\{Q_{1}^{S},...,Q_{m-1}^{S}\}$ with $Q_{l}^{S} \subset P_{l}^{S}$ for $l=1,2,...,m-1$. Next we fit a LMM tree using \texttt{fixed\char`_split} and the features in the grey module as splitting variables, regressing on \texttt{fixed\char`_regress} as well as $\{Q_{l}^{S}\}_{l=1}^{m-1}$. The selected features from the grey module are the ones used in this LMM tree that are not included in \texttt{fixed\char`_split}, denoted by $Q_{m}^{S}$. The final result of feature selection is $\{Q_{l}^{S}\}_{l=1}^{m}$, denoted by $X^{S}$. A final LMM tree for prediction is fitted using \texttt{fixed\char`_split} and $X^{S}$ as splitting variables and \texttt{fixed\char`_regress} and $X^{S}$ as regression variables.
\subsection{Use principal components in the absence of regressors}
\label{sec:PC Strategy}
Suppose that we do not have a natural choice for \texttt{fixed\char`_regress} and set it to empty. One obvious approach to feature selection and prediction is to use the RE-EM tree \citep{sela2012re}, with an averaged value at each leaf instead of a linear regression model. The disadvantage is that the assumption that the underlying true model is a RE-EM tree (a piece-wise constant function plus a random intercept) can be too restrictive.
It is more flexible to fit the underlying model with a piece-wise linear function in addition to a random intercept. Therefore another method is proposed, which can have more power in feature selection and prediction: use the dominant principal components (PCs) of the non-grey modules as intermediate regressors. The idea is that in linear regression, using the dominant principal components as regressors has predictive power comparable to using all the covariates as regressors, although interpretability is lost. FREEtree, however, remains interpretable even when it uses PCs, because PCs are used only in the feature selection step, and the selected features are determined by the non-terminal nodes of the tree rather than by the PCs or any other regressors. The first PC of each non-grey module is used for simplicity, though more components could be used. Note that we do not use PCs of the grey module, since features within the grey module are roughly independent and thus there may be no dominant PC.
In the screening step, features from the non-grey modules $P_{l}$ ($l=1,2,\ldots,m-1$) are selected by fitting a LMM tree using the first PC of $P_{l}$ as the regression variable and $P_{l}$ together with \texttt{fixed\char`_split} as splitting variables. If \code{Fuzzy=True}, for the grey module $P_{m}$ a RE-EM tree is fitted using $P_{m}$ and \texttt{fixed\char`_split} as splitting variables, and the features used in the nodes of the RE-EM tree are selected. Denote the screened features by $\{P_{l}^{S}\}_{l=1}^{m}$. In the selection step, the final features $X^{S}$ are obtained by selecting from the screened features: fit a RE-EM tree using $\{P_{l}^{S}\}_{l=1}^{m}$ as splitting variables and select the features that appear in the nodes of the RE-EM tree. In the prediction step, a LMM tree is fitted using $X^{S}$ and \texttt{fixed\char`_split} as splitting variables and $X^{S}$ as regression variables.
If \code{Fuzzy=False}, the final non-grey features $\{Q_{l}^{S}\}_{l=1}^{m-1}$ are obtained by selecting from the screened features $\{P_{l}^{S}\}_{l=1}^{m-1}$ of the non-grey modules. That is, use all the $\{P_{l}^{S}\}_{l=1}^{m-1}$ as splitting variables to fit a RE-EM tree and select the features used in the nodes of the RE-EM tree that are not contained in \texttt{fixed\char`_split}. Then the selected grey features $Q_{m}^{S}$ are obtained by fitting a LMM tree using the grey module $P_{m}$ and \texttt{fixed\char`_split} as splitting variables and $\{Q_{l}^{S}\}_{l=1}^{m-1}$ as regression variables. The final set of selected features $X^{S}$ is $\{Q_{l}^{S}\}_{l=1}^{m}$. The prediction is given by a LMM tree using $X^{S}$ and \texttt{fixed\char`_split} as splitting variables and $X^{S}$ as regression variables.
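In both variants, the screening step regresses on the first principal component of each non-grey module. That regressor can be computed directly from the module's column-centered data matrix; a minimal Python version (via the leading right-singular direction) is:

```python
import numpy as np

def first_pc(Xm):
    """First principal component scores of a module's data matrix.

    Xm has one row per observation and one column per feature in the module.
    """
    Xc = Xm - Xm.mean(axis=0)                  # center each feature
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[0]                          # scores along the leading direction
```

For a highly correlated module, these scores track the module's common latent factor closely (up to an arbitrary sign), which is what makes the first PC a reasonable surrogate regressor during screening.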
\section{Simulation}
\label{sec: Simulation results}
\subsection{Design of simulations}
\label{sec:design of simulations}
We provide simulations to examine the utility of FREEtree in terms of feature selection, prediction and estimation of the underlying model structure.
In all simulations, the training dataset has $n$ subjects (we will allow $n$ to vary) and each subject has $p=400$ features $X$ to be selected from, along with \texttt{fixed\char`_split} and \texttt{fixed\char`_regress}. The features $X$ are grouped into 4 modules: $\{X^{(1)},...,X^{(100)}\}$, $\{X^{(101)},...,X^{(200)}\}$, $\{X^{(201)},...,X^{(300)}\}$ and $\{X^{(301)},...,X^{(400)}\}$. Each feature $X^{(i)}$ is generated from a multivariate normal distribution with mean 0 and variance 1. Features from different modules are uncorrelated; features within each of the first three modules have pairwise correlation 0.8, while features within the last module are uncorrelated. Therefore the first three modules are called non-grey modules and the last module is the grey module, following the conventions of WGCNA.
The first simulation includes a time by treatment interaction, where different treatments correspond to different patterns of the response with respect to time. For simplicity, we assume two treatments, $treatment_1$ and $treatment_2$. The true model for subject $i$ at time $t$ is given by $$y_{it}=f(X_{it})+(t-3)^2\mathbbm{1}_{treatment_1}-(t-3)^2\mathbbm{1}_{treatment_2}+b_{i}+\epsilon_{it}$$ where $\mathbbm{1}$ is the indicator function, $\epsilon_{it}$ is an error term drawn from a normal distribution, and $f$ is given by
$$f(X) = 5X^{(1)}+2X^{(2)}+2X^{(3)}+5X^{(2)}X^{(3)}+5X^{(301)}+2X^{(302)}+2X^{(303)}+5X^{(302)}X^{(303)}$$
Here, only 6 variables out of 400 are important; the other variables are noise. We use $treatment$ as \texttt{fixed\char`_split} and use $time$ ($t$) and $time^2$ ($t^{2}$) as \texttt{fixed\char`_regress}.
Here \texttt{var\char`_select} is $X$, with the important features being $X^{(1)}, X^{(2)}, X^{(3)}, X^{(301)}, X^{(302)}$ and $X^{(303)}$. Since we have a natural choice for \texttt{fixed\char`_regress}, namely $time$ and $time^2$, we adopt the methods described in Sections \ref{sec:Fuzzy Strategy} and \ref{sec:Non Fuzzy Strategy}.
In a second simulation, we consider a mixed effects model given by
$$y_{it}=f(X_{it})+b_{i}+\epsilon_{it}$$ where $f$ and $\epsilon_{it}$ are the same as in the first simulation, and $b_i$ is the random intercept for subject $i$, drawn from a normal distribution with mean 0 and variance 3. The random intercepts of different subjects are independent. Since we now have no natural choice for \texttt{fixed\char`_regress}, we adopt the method described in Section \ref{sec:PC Strategy}. That is, during the screening step, we regress on the first principal component of each non-grey module to select features from the non-grey modules.
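For reference, the second simulation's data-generating process can be reproduced with the following Python sketch. Several details are assumptions made for the example: the equicorrelated modules are built from one shared latent factor per module, features are drawn independently across observations, rows are ordered subject-by-subject, the error variance is taken to be 1 (the paper does not fix it), and indices are zero-based, so the paper's $X^{(1)}$ is column 0:

```python
import numpy as np

def simulate(n, T=5, rho=0.8, seed=0):
    """n subjects x T times; 4 modules of 100 features; random intercepts."""
    rng = np.random.default_rng(seed)

    def module(corr):
        # a shared factor gives pairwise correlation `corr` within the module
        z = rng.normal(size=(n * T, 1))
        e = rng.normal(size=(n * T, 100))
        return np.sqrt(corr) * z + np.sqrt(1.0 - corr) * e if corr else e

    X = np.hstack([module(rho), module(rho), module(rho), module(0.0)])
    f = (5*X[:, 0] + 2*X[:, 1] + 2*X[:, 2] + 5*X[:, 1]*X[:, 2]
         + 5*X[:, 300] + 2*X[:, 301] + 2*X[:, 302] + 5*X[:, 301]*X[:, 302])
    b = np.repeat(rng.normal(scale=np.sqrt(3), size=n), T)  # subject intercepts
    y = f + b + rng.normal(size=n * T)                      # unit error variance
    return X, y
```

The factor construction gives each within-module pair covariance $\rho + 0 = \rho$ and each feature variance $\rho + (1-\rho) = 1$, matching the design above.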
In both simulations, a validation set of 100 subjects is used for tuning parameters and a test set of 100 subjects is used for measuring root mean squared error on future observations. The prediction does not include random intercepts because they cannot be estimated from unknown patients. The performances of Random Forests and Fuzzy Forests in the following sections are measured by running the simulation 50 times using different random seeds.
\subsection{Predictive performance}
In this section, we first consider the dataset with the time-treatment interaction detailed in the previous section. We compare the predictive performance of FREEtree, Random Forests, Fuzzy Forests and the LMM tree. For Random Forests and Fuzzy Forests, \texttt{var\char`_select} $\{X^{(v)}\}_{v=1}^{400}$, \texttt{fixed\char`_regress} $time$ and $time^2$, and \texttt{fixed\char`_split} $treatment$ are used as covariates. $Time$, $time^2$ and $treatment$ are manually placed into the ``grey'' module in Fuzzy Forests, because the time variables are uncorrelated with $\{X^{(v)}\}_{v=1}^{400}$ in the generating process and $treatment$ is categorical, which WGCNA cannot handle directly. For the LMM tree, $treatment$ and $\{X^{(v)}\}_{v=1}^{400}$ are specified as splitting variables and $time$, $time^2$ are used as the regression variables.
Note that, unlike FREEtree, we cannot use all of $\{X^{(v)}\}_{v=1}^{400}$ as regression variables, because linear regression requires that the sample size be greater than the number of parameters in the linear regression model.
Fig.\ref{fig:a1} shows the results on this dataset. FREEtree outperforms the other methods when the sample size is relatively large. When the sample size is relatively small, FREEtree does not have a strong advantage, since it has a linear regression model at each leaf and thus many more parameters to estimate, necessitating a larger sample size.
\begin{figure}[htp]
\centering
\includegraphics[width=1\textwidth]{a1.png}
\caption{Predictive performance using the time-treatment interaction dataset}
\label{fig:a1}
\end{figure}
Fig.\ref{fig:a0} gives the performance results on the simulated dataset with only random intercepts, a special case of longitudinal structure. The RMSEs of Random Forests, Fuzzy Forests, the RE-EM tree and FREEtree are shown. Only $\{X^{(v)}\}_{v=1}^{400}$ are used in these algorithms. This analysis shows that FREEtree has better predictive performance than the other algorithms, and performs better as the sample size grows. Note that, unlike in the previous simulation, FREEtree does well even when $n$ is relatively small, because the dataset structure here is much simpler.
\begin{figure}[htp]
\centering
\includegraphics[width=1\textwidth]{a0.png}
\caption{Predictive performance on the dataset with only random intercepts}
\label{fig:a0}
\end{figure}
\subsection{Feature selection performance}
In this section, we compare the feature selection performance of FREEtree and Fuzzy Forests, the latter of which is designed for feature selection. For Fuzzy Forests, we computed the proportion of times each feature was selected as important over 50 simulation runs on the same training set with different seeds and/or tuning parameters. In each run, the top 12 features are selected in the first simulation (the time-treatment interaction dataset) and the top 10 features are chosen in the second simulation (the dataset with only random intercepts). For FREEtree, the final chosen features are presented.
In the first simulation, shown in Fig.\ref{fig:a1Fuzzy}, where the true features are $X^{(1)},X^{(2)},X^{(3)},X^{(301)},X^{(302)}$\\$,X^{(303)}$, $treatment$, $time$ and $time^2$, Fuzzy Forests successfully identified $X^{(1)},X^{(2)},$\\$X^{(3)},X^{(301)},X^{(302)},X^{(303)}$ with probability 1 but missed $time$ and $time^2$ completely (selected 0 times), regardless of the overall sample size. Fuzzy Forests identifies $treatment$ with probability 1 when $n \geq 150$. As for FREEtree, since $treatment$, $time$ and $time^2$ are explicitly specified as splitting and regression variables, respectively, we only need to examine the final features selected from \texttt{var\char`_select} $\{X^{(v)}\}_{v=1}^{400}$. Fig.\ref{fig:a1FREEtree} gives the results for this simulation and shows that on this dataset FREEtree recovers the true important features when $n \geq 150$.
\begin{figure}[htp]
\centering
\includegraphics[width=1\textwidth]{FREEtree_a1.jpg}
\caption{The features selected by FREEtree for different sample sizes $n$ on the dataset with a time-treatment interaction. }
\label{fig:a1FREEtree}
\end{figure}
\begin{figure}[htp]
\centering
\includegraphics[width=1\textwidth]{a1_Fuzzy_Forest.jpg}
\caption{Feature selection performance of Fuzzy Forests on the dataset with a time-treatment interaction}
\label{fig:a1Fuzzy}
\end{figure}
In the second simulation where the true generating process only includes random intercepts, the feature selection performance of Fuzzy Forests and FREEtree were also studied. Fig.\ref{fig:a0Fuzzy} shows the results of Fuzzy Forests, which recovers all the important features correctly. Fig.\ref{fig:a0FREEtree} shows that FREEtree can also recover all the important features for all of the sample sizes tested.
\begin{figure}[htp]
\centering
\includegraphics[width=1\textwidth]{FREEtree_a0.jpg}
\caption{The features selected by FREEtree for different sample sizes $n$ on the dataset with only random intercepts. The first column is the number of patients, or sample size $n$. }
\label{fig:a0FREEtree}
\end{figure}
\begin{figure}[htp]
\centering
\includegraphics[width=1\textwidth]{a0_Fuzzy_Forest.jpg}
\caption{Feature selection performance of Fuzzy Forests on the dataset with only random intercepts}
\label{fig:a0Fuzzy}
\end{figure}
\subsection{Estimation of the underlying pattern}
The advantage of FREEtree lies not only in its higher prediction accuracy, but also in how it fits the underlying structure via the models at its leaves. Recall that in the first simulation, the dataset has a time-treatment interaction: the treatment-time components first drop and then increase for treatment 1, and first increase and then drop for treatment 2. In this section we examine whether FREEtree can recover the true time pattern for each treatment. The underlying true pattern has the following form:
\[ \begin{cases}
(t-3)^2 & \text{treatment}=1 \\
-(t-3)^2 & \text{treatment}=2
\end{cases}
\]
FREEtree was able to successfully detect the time-treatment interaction in this simulation. Table \ref{table: mean of time pattern } shows that FREEtree gives a reasonable estimate of the time pattern function. Note, however, that patterns like this cannot be directly observed using tree-based methods such as RE-EM tree, because the leaves of an RE-EM tree correspond to an averaged value instead of a model.
\begin{table}[htp]
\centering
\begin{tabular}{lSSSSSS}
\toprule
\multirow{2}{*}{Sample Size} &
\multicolumn{2}{c}{treatment1} &
\multicolumn{2}{c}{treatment2} \\
& {time} & {time2} & {time} & {time2}\\
\midrule
100 & -8.88 & 1.36 & 5.23 & -0.89 \\
200 & -5.60 & 0.88 & 5.46 & -0.91 \\
300 & -6.06 & 0.99 & 5.43 & -0.91 \\
400 & -6.40 & 1.07 & 6.16 & -1.01 \\
\bottomrule
\end{tabular}
\caption{The mean of the coefficients of the linear models at the leaves for each treatment. Since $(t-3)^2 = t^2 - 6t + 9$, the coefficients of $time$ and $time^2$ should be $-6$ and $1$ for treatment 1, and $6$ and $-1$ for treatment 2. }
\label{table: mean of time pattern }
\end{table}
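The target coefficients follow directly from expanding the quadratic: $(t-3)^2 = t^2 - 6t + 9$, so a leaf model regressing on $time$ and $time^2$ should recover slopes $-6$ and $1$ under treatment 1, and $6$ and $-1$ under treatment 2. A minimal numerical check, with hypothetical visit times $t = 1,\dots,6$:

```python
import numpy as np

t = np.arange(1.0, 7.0)                      # hypothetical visit times 1..6
X = np.column_stack([np.ones_like(t), t, t**2])

# treatment 1: (t-3)^2 = 9 - 6t + t^2
b1, *_ = np.linalg.lstsq(X, (t - 3.0) ** 2, rcond=None)
# treatment 2: -(t-3)^2 = -9 + 6t - t^2
b2, *_ = np.linalg.lstsq(X, -((t - 3.0) ** 2), rcond=None)
# b1 -> [9, -6, 1], b2 -> [-9, 6, -1]
```

Because the target is exactly a quadratic in $t$, least squares recovers the expansion coefficients exactly, matching the signs of the table entries.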
\section{Application}
We illustrate a real-data application of FREEtree on a wide longitudinal dataset of World Bank, IMF and Penn World Table country-level economic and developmental indicators. Using the adoption of inflation targeting by a nation's central bank as a treatment variable, we wish to predict the percentage change in a country's consumer price index (CPI) as a measure of the inflation rate. Merging together 15 different data sources\footnote{IMF World Economic Outlook (October 2019), IMF Financial Development Index Database, Penn World Table version 9.1, and the following World Bank databases: World Development Indicators, Education Statistics, Doing Business, Health Nutrition and Population Statistics, Gender Statistics, Global Financial Development, Health Equity and Financial Protection Indicators, Worldwide Governance Indicators, Worldwide Bureaucracy Indicators, Statistical Capacity Indicators, Global Jobs Indicators and Environment, Social and Governance Data.}, we obtain a final dataset of 120 countries with 393 features observed over a 12-year period between 2005 and 2016 inclusive. The data series mostly comprise population ratios, per capita metrics, year-over-year rates of change, proportions of national accounts, and scaled indicators, and were normalized to have mean zero and unit standard deviation.\\
Country-level indicators are often highly correlated across time, with many series being closely related to, or subsets of, others. Although tree-based techniques like Random Forests and Fuzzy Forests can process large numbers of series through feature selection, they cannot model mixed effects or produce a single interpretable tree. While \pkg{glmertree} can manage such effects and directly incorporate treatment variables into the analysis using a GLM at each terminal node, it cannot handle the number of features in this dataset, given the dimensionality problems inherent in linear regression. We compare the results obtained by FREEtree to Random Forests and Fuzzy Forests in an example that takes advantage of individual country-level effects as well as the central bank price targeting policy. \textit{country\_id} was declared to be the subgroup \pkg{cluster}, while \pkg{fixed regress} included a linear and a quadratic temporal term. Inflation targeting adoption, a binary variable, was declared for \pkg{consider split}; the rest of the features were included in the screening and selection process. The formula takes the form:
$$ CPI_{i,t} = year_t + year^2_t + treatment_{i,t} + X_{i,t} \;|\; country\_id_i \;|\; X_{i,t} $$
where $X_{i,t}$ includes all the other features to be screened and selected by FREEtree's algorithm. \\
The resulting tree has three nodes from two split variables (investment price index and GDP volatility) and 9 explanatory variables, including GNP per capita, fuel and GDP volatility (Figure \ref{fig:FTresult}). The mixed effects paint a picture of volatile frontier economies in various states of high inflation or deflation, while industrialized nations tend to be closer to the mean (Figure \ref{fig:FTresult intercept}). \\
\begin{figure}[htp]
\centering
\includegraphics[width=1.2\textwidth]{FTresult.jpg}
\caption{FREEtree model tree applied to real sample data}
\label{fig:FTresult}
\end{figure}
\begin{figure}[htp]
\centering
\includegraphics[width=1.2\textwidth]{FTresult_intercept.jpg}
\caption{FREEtree model tree applied to real sample data: individual country effects}
\label{fig:FTresult intercept}
\end{figure}
Using mainly default parameter values, WGCNA yielded four modules with 150, 125, 80 and 38 features, with the grey module being the third largest. We can see that FREEtree performs notably better in larger samples (Figure \ref{fig:comp_xsectional}) and further out of sample temporally (Figure \ref{fig:comp_temporal}).
\begin{figure}[htp]
\centering
\includegraphics[width=1.2\textwidth]{comp2.png}
\caption{FREEtree model tree applied to real sample data: cross sectional performance (20 test countries)}
\label{fig:comp_xsectional}
\end{figure}
\begin{figure}[htp]
\centering
\includegraphics[width=1.2\textwidth]{comp1.png}
\caption{FREEtree model tree applied to real sample data: forward performance by horizon}
\label{fig:comp_temporal}
\end{figure}
\section{Interpretability}
\label{Interpretability}
FREEtree differentiates itself from other model trees in its ability to accept a very large number of features, addressing dimensionality issues when $p \gg n$. Although it shares this capability with Random Forests and Fuzzy Forests, it distinguishes itself from these ensemble methods by producing a single tree the user can readily interpret and understand, while also providing superior predictions. \\
The production of a single decision tree also lets the user specify persistent features that will make it into the regression nodes, inherited from LMM tree. In addition, the user can specify subgroup cluster indicators and which features are guaranteed to make it past the screening process as regressors in the linear model. This flexibility caters well to researchers seeking to understand the impact of their variables of interest among a high number of other features, allowing them to effectively customize the output tree while taking advantage of WGCNA-based feature selection.\\
\begin{figure}[htp]
\centering
\includegraphics[width=1.2\textwidth]{RF_result.jpg}
\caption{Representative tree of Random Forest applied to real sample data}
\label{fig:exRF}
\end{figure}
\section{Discussion and Future Research}
\label{Discussion and future work}
At the feature clustering step, FREEtree uses Pearson correlation as the similarity function, which may not be optimal when the measurements of each feature for a given patient form a time series. That is, for patient $i$ and feature $v$, $X^{(v)}_{i1},X^{(v)}_{i2},..,X^{(v)}_{iT}$ is a time series, so clustering features in this case requires clustering time series. According to our simulations, where auto-regressive and compound-symmetric structures were imposed on each feature $X^{(v)}$, WGCNA still works when the correlations between features are relatively large. However, when the correlations are relatively low, WGCNA may not find strong associations and may assign all the features to the grey group. One way around this is to build the WGCNA similarity matrix not from the correlation of features but from a time series distance measure such as dynamic time warping (DTW), averaged over patients and then transformed into a similarity measure. The adapted WGCNA could then detect module distinctions even when the correlation between features is relatively low. However, it must be pointed out that computing a time series distance measure such as DTW requires substantial computational resources, and in applications where $p$ is very large, replacing correlation with a time series distance measure may not be computationally practical.
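A minimal sketch of the adapted similarity computation described above: the DTW recursion is the standard dynamic program, while the patient-averaging and the distance-to-similarity transform $1/(1+d)$ are illustrative choices, not the only possibilities.

```python
import numpy as np

def dtw(a, b):
    """Classic O(len(a) * len(b)) dynamic-time-warping distance."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # best of: insertion, deletion, match
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def similarity(series):
    """series[v][i] = time series of feature v for patient i.
    Average DTW distance over patients, then map distance -> similarity."""
    p = len(series)
    S = np.eye(p)
    for u in range(p):
        for v in range(u + 1, p):
            d = np.mean([dtw(x, y) for x, y in zip(series[u], series[v])])
            S[u, v] = S[v, u] = 1.0 / (1.0 + d)   # one possible transform
    return S
```

The resulting matrix could then be fed to the module-detection step in place of the correlation-based similarity; the quadratic cost per pair is what makes this impractical for very large $p$.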
\section{Conclusions}
In this paper we have presented the Fuzzy Random Effect Estimation tree (FREEtree) algorithm, which provides a relatively unbiased way to perform feature selection in the presence of correlation between features. It also handles longitudinal data by using a random effect model tree, where the fixed effect is modelled as a piecewise linear model, giving it greater fitting and predictive power than RE-EM tree. We expect that FREEtree can be widely used in applications where the data have longitudinal structure as well as many correlated features.
\bibliographystyle{plain}
\chapter{Acknowledgments}
The few (forty, fifty, sixty, ...) years granted to me are insufficient to thank Professor Diego Restrepo for his guidance, patience and kindness throughout my journey in learning particle physics. Thank you, Diego.\\\\
I also wish to express my gratitude to the other members of the Grupo de Fenomenolog\'ia de las Interacciones Fundamentales (GFIF). \\\\
I thank Professor Jos\'e Valle and his research group at the Universidad de Valencia for their warm hospitality during my stay.\\ \\
I want to thank my teacher Jaime Chica for showing me the selfless passion one must have for science.\\\\
I thank my wife, Luz Amparo, for putting up with my constant silence during these years.\\\\
Finally, I thank the Universidad de Antioquia.
\chapter{Appendix}\label{apen1}
\section{$H$-charges of dimension-5 operators}\label{aped.B}
The horizontal charges for the dimension-5 operators that violate only $B$ are given by
\begin{align}\label{eq:D5BNV1}\nonumber
H\left[(\kappa_3)_{1jk}\widehat{Q}_1\widehat{Q}_j\widehat{Q}_k\widehat{H}_d\right]=&
A_3+(2x+4-n_{\lambda''})\mathbf{1_3},\\\nonumber
H\left[(\kappa_3)_{2jk}\widehat{Q}_2\widehat{Q}_j\widehat{Q}_k\widehat{H}_d\right]=&
A_3+(2x+3-n_{\lambda''})\mathbf{1_3},\\
H\left[(\kappa_3)_{3jk}\widehat{Q}_3\widehat{Q}_j\widehat{Q}_k\widehat{H}_d\right]=&
A_3+(2x+1-n_{\lambda''})\mathbf{1_3},
\end{align}
\begin{align}\label{eq:D5BNV2}\nonumber
H\left[(\kappa_{10})_{ij1}\widehat{Q}_i\widehat{Q}_j\widehat{d}^*_1\right]=&
A_3+(x-n_{\lambda''})\mathbf{1_3},\\\nonumber
H\left[(\kappa_{10})_{ij2}\widehat{Q}_i\widehat{Q}_j\widehat{d}^*_2\right]=&
A_3+(x+1-n_{\lambda''})\mathbf{1_3},\\
H\left[(\kappa_{10})_{ij3}\widehat{Q}_i\widehat{Q}_j\widehat{d}^*_3\right]=&H\left[(\kappa_{10})_{ij2}\widehat{Q}_i\widehat{Q}_j\widehat{d}^*_2\right].
\end{align}
For the lepton and baryon-number violating operators we have that
\begin{align}\label{eq:D5BLNV1}\nonumber
H\left[(\kappa_{1})_{1jkl}\widehat{Q}_1\widehat{Q}_j\widehat{Q}_k\widehat{L}_l\right]=&
A_1+(5+2x+n_l-n_{\lambda''})\mathbf{1_3},\\\nonumber
H\left[(\kappa_{1})_{2jkl}\widehat{Q}_2\widehat{Q}_j\widehat{Q}_k\widehat{L}_l\right]=&
A_1+(4+2x+n_l-n_{\lambda''})\mathbf{1_3},\\
H\left[(\kappa_{1})_{3jkl}\widehat{Q}_3\widehat{Q}_j\widehat{Q}_k\widehat{L}_l\right]=&
A_1+(2+2x+n_l-n_{\lambda''})\mathbf{1_3},
\end{align}
\begin{align}\label{eq:D5BLNV2}\nonumber
H\left[(\kappa_{2})_{ij11}\widehat{u}_i\widehat{u}_j\widehat{d}_1\widehat{e}_1\right]=&
A_2+(6-n_1+n_{\lambda''})\mathbf{1_3},\\\nonumber
H\left[(\kappa_{2})_{ij21}\widehat{u}_i\widehat{u}_j\widehat{d}_2\widehat{e}_1\right]=&
A_2+(5-n_1+n_{\lambda''})\mathbf{1_3},\\\nonumber
H\left[(\kappa_{2})_{ij31}\widehat{u}_i\widehat{u}_j\widehat{d}_3\widehat{e}_1\right]=&
H\left[(\kappa_{2})_{ij21}\widehat{u}_i\widehat{u}_j\widehat{d}_2\widehat{e}_1\right],\\\nonumber
H\left[(\kappa_{2})_{ij12}\widehat{u}_i\widehat{u}_j\widehat{d}_1\widehat{e}_2\right]=&
A_2+(3-n_2+n_{\lambda''})\mathbf{1_3},\\\nonumber
H\left[(\kappa_{2})_{ij22}\widehat{u}_i\widehat{u}_j\widehat{d}_2\widehat{e}_2\right]=&
A_2+(2-n_2+n_{\lambda''})\mathbf{1_3},\\\nonumber
H\left[(\kappa_{2})_{ij32}\widehat{u}_i\widehat{u}_j\widehat{d}_3\widehat{e}_2\right]=&H\left[(\kappa_{2})_{ij22}\widehat{u}_i\widehat{u}_j\widehat{d}_2\widehat{e}_2\right],\\\nonumber
H\left[(\kappa_{2})_{ij13}\widehat{u}_i\widehat{u}_j\widehat{d}_1\widehat{e}_3\right]=&
A_2+(1-n_3+n_{\lambda''})\mathbf{1_3},\\\nonumber
H\left[(\kappa_{2})_{ij23}\widehat{u}_i\widehat{u}_j\widehat{d}_2\widehat{e}_3\right]=&
A_2+(-n_3+n_{\lambda''})\mathbf{1_3},\\
H\left[(\kappa_{2})_{ij33}\widehat{u}_i\widehat{u}_j\widehat{d}_3\widehat{e}_3\right]=&H\left[(\kappa_{2})_{ij23}\widehat{u}_i\widehat{u}_j\widehat{d}_2\widehat{e}_3\right].
\end{align}
Finally, for the lepton-number violating terms we have found
\begin{align}\label{eq:D5LNV}\nonumber
H\left[(\kappa_{4})_{ij1}\widehat{Q}_i\widehat{H}_d\widehat{u}_j\widehat{e}_1\right]=&
A_4+(5-n_1+x)\mathbf{1_3},\\\nonumber
H\left[(\kappa_{4})_{ij2}\widehat{Q}_i\widehat{H}_d\widehat{u}_j\widehat{e}_2\right]=&
A_4+(2-n_2+x)\mathbf{1_3},\\\nonumber
H\left[(\kappa_{4})_{ij3}\widehat{Q}_i\widehat{H}_d\widehat{u}_j\widehat{e}_3\right]=&
A_4+(-n_3+x)\mathbf{1_3},\\\nonumber
H\left[(\kappa_{5})_{ij}\widehat{L}_i\widehat{H}_u\widehat{L}_j\widehat{H}_u\right]=&
\left(
\begin{array}{ccc}
2n_1 & n_1+n_2 & n_1+n_3 \\
n_1+n_2 & 2n_2 & n_2+n_3 \\
n_1+n_3 & n_2+n_3 & 2n_3
\end{array}
\right),\\\nonumber
H\left[(\kappa_{6})_{i}\widehat{L}_i\widehat{H}_u\widehat{H}_d\widehat{H}_u\right]=&-1+n_i,\\\nonumber
H\left[(\kappa_{7})_{ij1}\widehat{u}_i\widehat{d}^*_j\widehat{e}_1\right]=&
A_7+(4-n_1)\mathbf{1_3},\\\nonumber
H\left[(\kappa_{7})_{ij2}\widehat{u}_i\widehat{d}^*_j\widehat{e}_2\right]=&
A_7+(1-n_2)\mathbf{1_3},\\\nonumber
H\left[(\kappa_{7})_{ij3}\widehat{u}_i\widehat{d}^*_j\widehat{e}_3\right]=&
A_7+(-1-n_3)\mathbf{1_3},\\\nonumber
H\left[(\kappa_{8})_{1}\widehat{H}_u^*\widehat{H}_d\widehat{e}_1\right]=&5-n_1+x,\\\nonumber
H\left[(\kappa_{8})_{2}\widehat{H}_u^*\widehat{H}_d\widehat{e}_2\right]=&2-n_2+x,\\\nonumber
H\left[(\kappa_{8})_{3}\widehat{H}_u^*\widehat{H}_d\widehat{e}_3\right]=&-n_3+x,\\
H\left[(\kappa_{9})_{ijk}\widehat{Q}_i\widehat{L}^*_j\widehat{u}_k\right]=&
A_9+(-n_j)\mathbf{1_3}.
\end{align}
In the above expressions we have defined
\begin{align}
A_1&=A_3=\left(
\begin{array}{ccc}
6 & 5 & 3 \\
5 & 4 & 2 \\
3 & 2 & 0
\end{array}
\right), \,
A_2=\left(
\begin{array}{ccc}
10 & 7 & 5 \\
7 & 4 & 2 \\
5 & 2 & 0
\end{array}
\right),\,
A_4=A_9=\left(
\begin{array}{ccc}
8 & 5 & 3 \\
7 & 4 & 2 \\
5 & 2 & 0
\end{array}
\right),\,
A_7=\left(
\begin{array}{ccc}
5 & 6 & 6 \\
2 & 3 & 3 \\
0 & 1 & 1
\end{array}
\right).
\end{align}
\section{Operator contributing to ${Y}_\nu=\sum_i {Y}^{(i)}_\nu$
at $\mathcal{ O}(\epsilon)$}\label{aped.C}
In this Appendix we calculate, as an example, one of the contributions to the effective Lagrangian associated with the representations of $SU(5)$ given in Table \ref{tab:1}.
We write the $SU(5)\times U(1)_F$ breaking vevs as
\begin{equation}
\label{rep1}
\langle\mathbf{\Sigma_{\pm}}\rangle=
\frac{V}{\sqrt{60}}\times\operatorname{diag}(2,2,2,-3,-3)\,,
\end{equation}
where the factor $\alpha=1/\sqrt{60}$ gives the usual normalization of the
$SU(5)$ generators.
\begin{eqnarray}
\label{rep2}
N_S &=& \frac{1}{\sqrt{60}}\;
\operatorname{diag}(2,\,2,\,2,\,-3,\,-3)\cdot \mathbf{N}_{24}\,.
\end{eqnarray}
where the subscript in $\mathbf{N}_{24}$ (the singlet component) refers to the
corresponding $SU(5)$ representation.
We need the following contraction:
\begin{align}
\label{rep3}
i M \left[ \mathbf{24}^a_b\,\mathbf{24}^l_m \right]_{\mathbf{S}}&= \left(\mathcal { S}_{\mathbf{S}}\right)^{a\,l}_{b\,m}= \frac{5}{2}\left[\delta^a_m\,\delta_b^l +
\delta^a_l\, \delta_b^m\right]- \delta^a_b\, \delta_m^l \,,
\end{align}
Using Eqs.~(\ref{rep1}, \ref{rep2}, \ref{rep3}) we calculate the contribution $O(\epsilon;\mathbf{24_S})$ to the light neutrino mass matrix given in Table \ref{tab:1}:
\begin{align}\nonumber
\mathcal{L}=& (-i)\mathbf{\bar{5}}_{a}\mathbf{5}^{\phi_{u_{b}}} \mathbf{24}^{a}_{b}+(-i)\mathbf{24}^{l}_{m}\Sigma^{c}_{l}\mathbf{24}^{m}_{c}\\ \nonumber
\nonumber =& (i)^{2}\mathbf{\bar{5}}_{a}\mathbf{5}^{\phi_{u_{b}}}[\mathbf{24}^{a}_{b}\mathbf{24}^{l}_{m}]_{s}\Sigma^{c}_{l}\mathbf{24}^{m}_{c}\\ \nonumber
\nonumber =& (i)^{2}\mathbf{\bar{5}}_{5}\mathbf{5}^{\phi_{u_{5}}}[\mathbf{24}^{5}_{5}\mathbf{24}^{l}_{m}]_{s}\Sigma^{c}_{l}\mathbf{24}^{m}_{c}\\ \nonumber
\nonumber =& (i)^{2}\mathbf{\bar{5}}_{5}\mathbf{5}^{\phi_{u_{5}}}[\frac{5}{2}(\delta^{5}_{m}\delta^{l}_{5} + \delta^{5}_{l}\delta^{m}_{5}) - \delta^{5}_{5}\delta^{l}_{m}]\Sigma^{c}_{l}\mathbf{24}^{m}_{c}\\ \nonumber
=&(i)^{2}\mathbf{\bar{5}}_{5}\mathbf{5}^{\phi_{u_{5}}}[\frac{5}{2}\Sigma^{c}_{5}\mathbf{24}^{5}_{c} + \frac{5}{2}\Sigma^{c}_{5}\mathbf{24}^{5}_{c} - \Sigma^{c}_{m}\mathbf{24}^{m}_{c}]\\ \nonumber
=&(i)^{2}\mathbf{\bar{5}}_{5}\mathbf{5}^{\phi_{u_{5}}}[5\Sigma^{c}_{5}\mathbf{24}^{5}_{c} - \Sigma^{c}_{m}\mathbf{24}^{m}_{c}]\\ \nonumber
=&(i)^{2}\mathbf{\bar{5}}_{5}\mathbf{5}^{\phi_{u_{5}}}[5\Sigma^{1}_{5}\mathbf{24}^{5}_{1} - \Sigma^{1}_{1}\mathbf{24}^{1}_{1} - \Sigma^{1}_{2}\mathbf{24}^{2}_{1} - \Sigma^{1}_{3}\mathbf{24}^{3}_{1} - \Sigma^{1}_{4}\mathbf{24}^{4}_{1} - \Sigma^{1}_{5}\mathbf{24}^{5}_{1}\\ \nonumber
+&5\Sigma^{2}_{5}\mathbf{24}^{5}_{2} - \Sigma^{2}_{1}\mathbf{24}^{1}_{2} - \Sigma^{2}_{2}\mathbf{24}^{2}_{2} - \Sigma^{2}_{3}\mathbf{24}^{3}_{2} - \Sigma^{2}_{4}\mathbf{24}^{4}_{2} - \Sigma^{2}_{5}\mathbf{24}^{5}_{2}\\ \nonumber
+&5\Sigma^{3}_{5}\mathbf{24}^{5}_{3} - \Sigma^{3}_{1}\mathbf{24}^{1}_{3} - \Sigma^{3}_{2}\mathbf{24}^{2}_{3} - \Sigma^{3}_{3}\mathbf{24}^{3}_{3} - \Sigma^{3}_{4}\mathbf{24}^{4}_{3} - \Sigma^{3}_{5}\mathbf{24}^{5}_{3}\\ \nonumber
+&5\Sigma^{4}_{5}\mathbf{24}^{5}_{4} - \Sigma^{4}_{1}\mathbf{24}^{1}_{4} - \Sigma^{4}_{2}\mathbf{24}^{2}_{4} - \Sigma^{4}_{3}\mathbf{24}^{3}_{4} - \Sigma^{4}_{4}\mathbf{24}^{4}_{4} - \Sigma^{4}_{5}\mathbf{24}^{5}_{4}\\ \nonumber
+&5\Sigma^{5}_{5}\mathbf{24}^{5}_{5} - \Sigma^{5}_{1}\mathbf{24}^{1}_{5} - \Sigma^{5}_{2}\mathbf{24}^{2}_{5} - \Sigma^{5}_{3}\mathbf{24}^{3}_{5} - \Sigma^{5}_{4}\mathbf{24}^{4}_{5} - \Sigma^{5}_{5}\mathbf{24}^{5}_{5}]\\ \nonumber
=&(i)^{2}\mathbf{\bar{5}}_{5}\mathbf{5}^{\phi_{u_{5}}}[-(2)(2)-(2)(2)-(2)(2)-(-3)(-3)+5(-3)(-3)-(-3)(-3)] \mathbf{N}_{24}\\ \nonumber
=&(i)^{2}\mathbf{\bar{5}}_{5}\mathbf{5}^{\phi_{u_{5}}}[-4 -4 -4 -9 +45 -9]\mathbf{N}_{24}\\ \nonumber
=&-15\alpha^{2}\mathbf{\bar{5}}_{5}\mathbf{5}^{\phi_{u_{5}}}\mathbf{N}_{24}.\\ \nonumber
\end{align}
The contribution is $-15$, which appears as one of the entries in Table \ref{tab:1}. The other entries are calculated similarly.
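The group-theoretical factor can be cross-checked with a few lines of arithmetic, using only the diagonal entries of the vev direction and the overall $(i)^2 = -1$ collected in the final step above:

```python
# Diagonal of the adjoint vev direction, before the 1/sqrt(60) normalization:
d = [2, 2, 2, -3, -3]

# alpha^2 * sum(d_a^2) reproduces the normalization: sum = 30, and 30/60 = 1/2.
norm_sq = sum(x * x for x in d)

# Coefficient of bar5_5 5^{phi_u}_5 N_24 from the contraction:
# (i)^2 * [ 5 * d_5 * d_5 - sum_c d_c^2 ] = -(45 - 30) = -15
coef = -(5 * d[4] * d[4] - norm_sq)
```

Here `d[4]` is the fifth diagonal entry ($-3$), the one picked out by the $SU(2)$ doublet index carried by $\mathbf{\bar 5}_5$ and $\mathbf{5}^{\phi_{u_5}}$.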
\chapter{Neutrino masses in $SU(5)\times U(1)_H$ with adjoint flavons}\label{cap3}
We present an $SU(5)\times U(1)_H$ supersymmetric model for neutrino masses and mixings that implements the seesaw mechanism by means of the heavy $SU(2)$ singlet and triplet states contained in three adjoints of $SU(5)$. We discuss how Abelian $U(1)_H$ symmetries can naturally yield non-hierarchical light neutrinos even when the heavy states are strongly hierarchical, and how they can also ensure that $R$--parity arises as an exact accidental symmetry. By assigning two flavons that break $U(1)_H$ to the adjoint representation of $SU(5)$ and assuming universality for all the fundamental couplings, the coefficients of the effective Yukawa and Majorana mass operators become calculable in terms of group theoretical quantities. There is a single free parameter in the model; however, at leading order the structure of the light neutrino mass matrix is determined in a parameter-independent way.
\section{Theoretical framework}
\subsection{Same sign and both signs Abelian charges}
\label{sec:signs}
Sometimes symmetry considerations are sufficient to determine
univocally the structure of the low energy operators, however, other
times a detailed knowledge of the full high energy theory is needed.
Let us consider for example a $U(1)_H$ symmetry and assume that all
the heavy and light states have charges of the same sign, say
positive. Then a single spurion $\epsilon_{-1}$ with a negative unit
charge is involved in the construction of all $U(1)_H$ (formally)
invariant operators. Let us consider the ${\rm dim}=5$ seesaw operator
$ \mathcal{ L}_{D5} \sim - \frac{g_{\alpha\beta} }{2M}\left(\bar\ell_\alpha H\right) \left( H^T\ell^c_\beta\right) $, where
$\ell_\alpha$ are the lepton doublets and $H$ is the Higgs field, that
for simplicity we take neutral under the Abelian symmetry $F(H)=0$.
Since the only spurion useful to construct (formally) invariant
operators is $\epsilon_{-1}$, one can easily convince oneself that the
structure of $g_{\alpha\beta}$, and thus the structure of the light neutrino
mass matrix, is univocally determined by the $F$ charges of the
light leptons as: $g_{\alpha\beta} \sim \epsilon_{-1}^{
F(\ell_\alpha)+ F(\ell_\beta)}$, while the $F$-charges of whatever
heavy states of mass $\sim M$ are inducing the effective operator are
irrelevant.\footnote{It should be remarked that, contrary to what is
sometimes stated, Abelian $U(1)_H$ symmetries make it easy to arrange
for non-hierarchical light neutrinos together with strongly
hierarchical heavy neutrinos (as are often preferred in
leptogenesis) by simply choosing $F(\ell_\alpha)=F(\ell)$ for all
$\alpha$, and $ F(N_1)\gg F(N_2)\gg F(N_3)$.} We can conclude that
in this case one does not need to consider the details of the high
energy theory, since the structure of the low energy effective
operators can be straightforwardly read off from the charges of the
light states.
However, if we allow for $U(1)_H$ charges of both signs, then both
symmetry breaking spurions $\epsilon_{-1}= \epsilon_{+1}= \epsilon$
are relevant. This implies that naive charge counting applied to the
low energy effective operators is unreliable, since basically a factor
$\epsilon^n$, as estimated in the low energy theory, could correspond
instead to $\epsilon^{n+m}_{+1} \cdot \epsilon^m_{-1} \sim
\epsilon^{n+2m}$. Clearly the naive estimate can result in a
completely different (and wrong) structure with respect to the one
effectively generated by the high energy theory. We illustrate this
with a simple example: let us take two lepton doublets with charges
$F(\ell_1)= -F(\ell_2)= +1$ and again $F(H)=0$. The structure of the
light neutrino mass matrix read off from the lepton doublets charges
would be given by the low energy coefficient:
\begin{equation}
\label{eq:mnu1}
g_{\alpha\beta}
\sim \begin{pmatrix}
\epsilon^2 & 1 \\ 1 & \epsilon^2 \end{pmatrix} \,.
\end{equation}
This corresponds to a pair of quasi degenerate (pseudo-Dirac) light
neutrinos.
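Indeed, diagonalizing the texture in Eq.~(\ref{eq:mnu1}) with the unknown $O(1)$ coefficients set to one makes the pseudo-Dirac structure explicit:

```latex
\begin{pmatrix} \epsilon^2 & 1 \\ 1 & \epsilon^2 \end{pmatrix}
\;\Longrightarrow\;
m_\pm \propto \left|\,\epsilon^2 \pm 1\,\right| = 1 \pm \epsilon^2\,,
\qquad \theta = 45^\circ\,,
```

so the two mass eigenvalues are split only at $O(\epsilon^2)$ while the mixing angle is maximal.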
Now, let us assume that the fundamental high energy (seesaw) theory
has two right handed neutrinos with charges $F(N_{1,2})=+1$.
For the heavy mass matrix $M_N$, its inverse, and for the Yukawa coupling
$Y_{\alpha i}\bar\ell_\alpha N_i$ we obtain:
\begin{align}
\label{eq:MN}
M_N &\sim \epsilon^2
\begin{pmatrix}
1&1\\1&1
\end{pmatrix}\,, &
M_N^{-1} &\sim \epsilon^{-2}
\begin{pmatrix}1&1\\ 1&1\end{pmatrix}\,, &
Y &\sim
\begin{pmatrix}1&1\\ \epsilon^2&\epsilon^2\end{pmatrix}\,.
\end{align}
The resulting effective low energy coefficient is:
\begin{equation}
\label{eq:mnu2}
g_{\alpha\beta} \sim
Y M_N^{-1}Y^T \sim
\epsilon^{-2}
\begin{pmatrix}
1 & \epsilon^2 \\
\epsilon^2 & \epsilon^4
\end{pmatrix}
\,,
\end{equation}
which (for $\epsilon\ll 1$) corresponds to very hierarchical and
mildly mixed light neutrinos, that is a completely different result
from the previous one.
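This hierarchy can be illustrated numerically. The $O(1)$ coefficients below are arbitrary, since the symmetry only fixes the powers of $\epsilon$; only the orders of magnitude of the entries of $g$ are meaningful:

```python
import numpy as np

eps = 0.05                                 # U(1)_H breaking parameter

# Yukawa and heavy mass matrices with the eps powers of Eq. (eq:MN),
# dressed with arbitrary O(1) coefficients:
Y = np.array([[1.0,          0.7],
              [0.9 * eps**2, 1.1 * eps**2]])   # F(l_1) = +1, F(l_2) = -1
M_N = eps**2 * np.array([[1.0, 0.5],
                         [0.5, 1.3]])          # F(N_1) = F(N_2) = +1

# Effective low-energy coefficient g ~ Y M_N^{-1} Y^T:
g = Y @ np.linalg.inv(M_N) @ Y.T
# g[0,0] ~ eps^-2, g[0,1] ~ 1, g[1,1] ~ eps^2: hierarchical, mildly mixed
```

The entry ratios $|g_{12}/g_{11}| \sim \epsilon^2$ and $|g_{22}/g_{11}| \sim \epsilon^4$ reproduce the texture of Eq.~(\ref{eq:mnu2}), quite unlike the naive estimate of Eq.~(\ref{eq:mnu1}).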
The model we are going to describe in this paper requires fermions
with charges of both signs, as well as a pair of positively and
negatively charged spurions. Therefore a detailed knowledge of the
high energy theory is mandatory, and accordingly we will explicitly
describe all its relevant aspects.
\subsection{Outline of the $SU(5)\times U(1)_H$ model}
We assume that at the fundamental level all the Yukawa couplings are
universal, and that all the heavy messengers states carrying $U(1)_H$
charges have the same mass, as it would happen if the masses are
generated by the vacuum expectation values (vev) of some singlet
scalar. With these assumptions, the only free parameter of the model
is the ratio between the vacuum expectation value of the flavons and
the mass of the heavy vectorlike FN fields. This parameter is
responsible for the fermion mass hierarchy, and all the remaining
features of the mass spectrum are calculable in terms of group
theoretical coefficients. More precisely, in our model the flavor
symmetry is broken by vevs of scalar fields $\langle
\Sigma_{\pm}\rangle$ in the $\mathbf{24}$--dimensional adjoint
representation of $SU(5)$, where the subscripts refer to the values
$\pm 1$ of the $U(1)_H$ charges that set the normalization for all the
other charges. The vevs
$\langle\Sigma_+\rangle=\langle\Sigma_-\rangle=V_a$ with
$V_a=V\cdot\operatorname{diag}(2,2,2,-3,-3)/\sqrt{60}$ are also
responsible for breaking the GUT symmetry down to the
electroweak--color gauge group. The size of the order parameters
breaking the flavor symmetry is then $\epsilon=V/M$ where $M$ is the
common mass of the heavy FN vectorlike fields. This symmetry breaking
scheme has two important consequences: power suppressions in
$\epsilon$ appear with coefficients related to the different entries
in $V_a$, and the FN fields are not restricted to the $\mathbf{5}$,
$\overline{\mathbf{5}}$, or $\mathbf{10}$, $\overline{\mathbf{10}}$,
multiplets as is the case when the $U(1)_H$ breaking is triggered by
singlet flavons~\cite{Aristizabal:2003zn,Duque:2008ah}.
The model studied in~\cite{Duque:2008ah} adopted this same scheme, and
yields a viable phenomenology, since it produces quark masses and
mixings and charged lepton masses that are in agreement with the data.
The $U(1)_H$ charge assignments of the model yield $U(1)_H$ mixed
anomalies, which are canceled through the Green--Schwarz
mechanism~\cite{Green:1984sg}. The values of the charges are
determined only modulo an overall rescaling, that may be appropriately
chosen in order to forbid baryon and lepton number violating
couplings. However, with the choice of charges adopted
in~\cite{Duque:2008ah}, both $\Delta L=1$ and $\Delta L=2$ violating
operators were forbidden, and thus the seesaw mechanism could not be
embedded in the model. In order to avoid this unpleasant feature, in
this work we explore the possibility of forbidding just the $\Delta
L=1$ operators while allowing the $\Delta L=2$ seesaw operator for
neutrino masses. We will show that by means of a suitable choice of
the $F$ charges, the seesaw mechanism can be implemented, and one can
obtain neutrino masses and mixings in agreement with oscillation data,
while $\Delta L=1$ and $\Delta B\neq 0$ (and thus $R$--parity
violating) operators are forbidden at all orders by virtue of the
$F$-charges. Moreover, the scale of the heavy seesaw neutral fermions
remains fixed, and lies a few order of magnitude below the GUT scale,
and is of the right order to allow the generation of the baryon
asymmetry through leptogenesis.
\subsection{Charge assignments}
The $F$ charges have to satisfy some specific
requirements in order to yield a viable phenomenology. In the
following we denote for simplicity the various $F$ charges with the
same label denoting the corresponding $SU(5)$ multiplet. To
allow a Higgsino $\mu$--term at tree level, we must require
\begin{align}
\label{eq:mu}
\overline{\mathbf{5}}^{\phi_d}+\mathbf{5}_{\phi_u}=0\,,
\end{align}
where $ \overline{\mathbf{5}}^{\phi_d},\,\mathbf{5}_{\phi_u}$ denote
the $F$-charges of the
chiral multiplets containing the $SU(2)$ Higgs doublets
$\phi_d,\,\phi_u$.
It is easy to see that with the constraint~(\ref{eq:mu}) the
overall charge of the Yukawa operators for the charged fermion masses
$ \mathbf{10}_{I} \mathbf{\bar{5}}_{J} \mathbf{\bar{5}}^{{\phi}_{d}}$
and $ \mathbf{10}_{I} \mathbf{10}_{J} \mathbf{{5}}_{{\phi}_{u}}$, that
are even under $R$--parity, are invariant under the charge
redefinitions~\cite{Duque:2008ah}:
\begin{align}
\label{eq:shift}
\mathbf{\bar{5}}_{I}&\to \mathbf{\bar{5}}_{I} + a_n\\ \nonumber
\mathbf{10}_{I} &\to \mathbf{10}_{I} -\frac{a_n}{3}\\ \nonumber
\mathbf{\bar{5}}^{{\phi}_{d}} &\to
\mathbf{\bar{5}}^{{\phi}_{d}} - \frac{2a_n}{3}\\ \nonumber
\mathbf{5}_{{\phi}_{u}}&\to \mathbf{5}_{{\phi}_{u}} +\frac{2a_n}{3}\,,
\end{align}
where $I=1,2,3$ is a generation index, and $a_n$ is an arbitrary
parameter that can be used to redefine the charges. Assuming
$\mathbf{5}_{\phi_u}= 0$,
the anomalous solution chosen
in ref.~\cite{Duque:2008ah} can be written as
\begin{align}
\label{eq:1}
\mathbf{5}_{\phi_u}= \overline{\mathbf{5}}^{\phi_d} = &0\,, &&
\overline{\mathbf{5}}_I=2^I-7\,, &&\mathbf{10}_I=3-I\,.
\end{align}
Starting from a set of integer charges, and redefining
this set by means of the shift
Eq.~(\ref{eq:shift}) with
\begin{align}
\label{eq:2}
a_n=-\frac{3}{2}\left( \frac{2n}{5} +1\right)\,,
\end{align}
where $n$ is an integer, it is easy to see that the $R$--parity
violating operators $\mathbf{10}_{I} \mathbf{\bar{5}}_{J}
\mathbf{\bar{5}}_{K}$ and $\mathbf{\bar{5}}_{I}
\mathbf{{5}}_{{\phi}_{u}}$ have half--odd--integer charges, and hence
are forbidden at all orders by the $U(1)_H$ symmetry.
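These parity properties of the shifted charges can be verified mechanically. The following Python sketch (a cross-check, not part of the construction; exact rational arithmetic via the standard `fractions` module) implements the charge set of Eq.~(\ref{eq:1}) shifted according to Eqs.~(\ref{eq:shift}) and (\ref{eq:2}):

```python
from fractions import Fraction as F

def shifted_charges(n):
    """Charge set of Eq. (eq:1) shifted by a_n, Eqs. (eq:shift)-(eq:2)."""
    a = -F(3, 2) * (F(2 * n, 5) + 1)                 # a_n of Eq. (eq:2)
    fivebar = {I: 2**I - 7 + a for I in (1, 2, 3)}   # \bar{5}_I
    ten     = {I: 3 - I - a / 3 for I in (1, 2, 3)}  # 10_I
    phi_d   = -2 * a / 3                             # \bar{5}^{phi_d}
    phi_u   =  2 * a / 3                             # 5_{phi_u}
    return fivebar, ten, phi_d, phi_u

def is_half_odd_integer(q):
    return q.denominator == 2

for n in range(-10, 11):
    fb, t, pd, pu = shifted_charges(n)
    assert pd + pu == 0                              # mu-term, Eq. (eq:mu)
    for I in (1, 2, 3):
        for J in (1, 2, 3):
            # R-parity-even Yukawas keep integer total charge ...
            assert (t[I] + fb[J] + pd).denominator == 1
            assert (t[I] + t[J] + pu).denominator == 1
            # ... while the Delta L = 1 operators are half-odd-integer,
            # hence forbidden to all orders by U(1)_H
            assert is_half_odd_integer(fb[I] + pu)
            for K in (1, 2, 3):
                assert is_half_odd_integer(t[I] + fb[J] + fb[K])
```

For $n=-6$ the same function reproduces the charges listed in Table~\ref{tab:3}.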
To generate neutrino masses, we now introduce three heavy multiplets
$\mathbf{N}_I$ ($I=1,2,3$) with half--odd--integer $F$--charges, which
we assume to transform in the adjoint representation $\mathbf{24}$. The
adjoint of $SU(5)$ contains two types of $SU(2)$ multiplets that can
induce at low energy the dimension five Weinberg
operator~\cite{Weinberg:1979sa}: one $SU(2)\times U(1)\times SU(3)$
singlet that allows one to implement the usual type I seesaw, and one
$U(1)\times SU(3)$ singlet, $SU(2)$ triplet, giving rise to a type
III seesaw~\cite{Bajc:2006ia,Bajc:2007zf,Biggio:2010me}. Contributions
from these two types of multiplets unavoidably come together, so that
by assigning `right handed neutrinos' to the $\mathbf{24}$ of $SU(5)$
one necessarily ends up with a type I+III seesaw.\footnote{We thank
the referee for bringing this point to our attention.}
This slightly more complicated seesaw structure is not crucial for our
construction, but we keep track of it for consistency.
The half--odd--integer charges of the new states, after the charges of
the other fields have been shifted according to Eqs.~(\ref{eq:shift})
and (\ref{eq:2}), can be parameterized as
\begin{align}
\label{eq:3}
N_I=\frac{2m_I + 1}{2}\,,
\end{align}
where $m_I$ are integers. The {\it effective} superpotential terms
that give rise to the seesaw are
\begin{align}
\label{eq:4}
W_{\text{seesaw}}=Y_\nu^{I J}\,\overline{\mathbf{5}}_{I}\,\mathbf{5}_{\phi_u}\,\mathbf{N}_{J}
+\tfrac{1}{2}M_R^{I J} \mathbf{N}_I \mathbf{N}_J\,.
\end{align}
The coefficient $Y_\nu^{I J}$ of the Dirac operator in
Eq.~\eqref{eq:4} is determined by the following sums of $F$--charges:
\begin{align}
\overline{\mathbf{5}}_{I}+\mathbf{5}_{\phi_u}+\mathbf{N}_{J}
=&2^I-7+a_n+2a_n/3+N_J\nonumber\\
=&2^I-9-n+m_J\,.\nonumber
\end{align}
Explicitly:
\begin{align}
\label{eq:5}
F(\overline{\mathbf{5}}_{I}\,\mathbf{5}_{\phi_u}\,\mathbf{N}_{J})=&
\begin{pmatrix}
-7-n+m_1 & -7-n+m_2 & -7-n+m_3 \\
-5-n+m_1 & -5-n+m_2 & -5-n+m_3 \\
-1-n+m_1 & -1-n+m_2 & -1-n+m_3 \\
\end{pmatrix}.
\end{align}
For the mass operator of the adjoint neutrinos
we have the following (integer) $F$--charges
\begin{align}
\label{eq:6}
\mathbf{N}_{I}+\mathbf{N}_{J}=&1+m_I+m_J\,,\nonumber\\
F(\mathbf{N}_I\,\mathbf{N}_J) =&\begin{pmatrix}
1+ 2m_1 &1+ m_1+m_2 &1+m_1+m_3 \\
1+ m_1+m_2 &1+ 2m_2 &1+m_2+m_3\\
1+ m_1+m_3 &1+ m_2+m_3 &1+2m_3\\
\end{pmatrix}.
\end{align}
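As a cross-check, the matrix entries in Eqs.~(\ref{eq:5}) and (\ref{eq:6}) can be reproduced directly from the shifted charges; a minimal Python sketch (exact rational arithmetic, the values of $n$ and $m_I$ are arbitrary test inputs):

```python
from fractions import Fraction as F

def dirac_charge(I, J, n, m):
    """F-charge of 5bar_I 5_{phi_u} N_J after the shift of Eq. (eq:shift)."""
    a = -F(3, 2) * (F(2 * n, 5) + 1)    # a_n of Eq. (eq:2)
    fivebar_I = 2**I - 7 + a            # shifted \bar{5}_I
    phi_u     = 2 * a / 3               # shifted 5_{phi_u}
    N_J       = F(2 * m[J - 1] + 1, 2)  # Eq. (eq:3)
    return fivebar_I + phi_u + N_J

def majorana_charge(I, J, m):
    """F-charge of N_I N_J: always the integer 1 + m_I + m_J."""
    return F(2 * m[I - 1] + 1, 2) + F(2 * m[J - 1] + 1, 2)

# spot-check Eqs. (eq:5) and (eq:6) for an arbitrary choice of n, m_I
n, m = 4, (3, -2, 0)
row_offset = {1: -7, 2: -5, 3: -1}      # the 2^I - 9 offsets of Eq. (eq:5)
for I in (1, 2, 3):
    for J in (1, 2, 3):
        assert dirac_charge(I, J, n, m) == row_offset[I] - n + m[J - 1]
        assert majorana_charge(I, J, m) == 1 + m[I - 1] + m[J - 1]
```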
The light neutrino
mass matrix is then obtained from the seesaw formula
\begin{align}
\label{eq:7}
M_\nu \approx& - {v^2}\sin^2\beta\,Y_\nu\,M_R^{-1}\,Y^T_{\nu}\,,
\end{align}
where $v=175\,$GeV, and it is left understood that in Eq.~\eqref{eq:7}
the contributions of the $SU(2)$ singlets and triplets are both summed
up.
As is implied by the FN mechanism, the order of magnitude of the
entries in $Y_\nu$ and $M_R$ is determined by the corresponding values
of the sums of $F$ charges Eqs.~(\ref{eq:5}) and
(\ref{eq:6}) as:
\begin{align}
\label{eq:8}
Y_\nu^{I J}&\sim\epsilon^{|\overline{\mathbf{5}}_{I}+\mathbf{5}_{\phi_u}+\mathbf{N}_{J} |}
\nonumber\\
M_R^{I J}&\sim M
\cdot\epsilon^{|\mathbf{N}_{I}+\mathbf{N}_{J}|} =
V \cdot\epsilon^{|\mathbf{N}_{I}+\mathbf{N}_{J}|-1}\,,
\end{align}
where in the second relation $M$ is the mass of the FN messenger
fields, and in the last equality we have used $M=\epsilon^{-1} V$.
Note that since we have two flavon multiplets $\Sigma_{\pm}$ with
opposite charges, the horizontal symmetry allows for operators with
charges of both signs, and hence the exponents of the symmetry
breaking parameter $\epsilon$ in Eq.~\eqref{eq:8} must be given in
terms of the absolute values of the sum of charges. In FN models
only the order of magnitude of the entries in Eq.~\eqref{eq:8} is
determined, and it is generally assumed that non-hierarchical order
one coefficients multiply each entry. However, in our model the
assumption of universality for the fundamental Yukawa couplings has
been made in order to avoid arbitrary $\mathcal{O}(1)$ numbers of
unspecified origin.\footnote{This condition excludes the simple (and
often used) charge assignments in which there are two zero
eigenvalues in the light neutrino mass matrix, as
in~\cite{Dreiner:2003yr,Chen:2008tc}.} The coefficients
multiplying each entry in Eq.~(\ref{eq:8}) can in fact be computed
with the same technique introduced in~\cite{Duque:2008ah} for
computing the down-quark and charged lepton masses. In summary, the
order of magnitude of the various entries in $M_\nu$ is determined by
the appropriate powers of the small factor $\epsilon$ while, as we
will see, the details of the mass spectrum are determined by
non-hierarchical computable group theoretical coefficients, that only
depend on the way the heavy FN states are assigned to $SU(5)$
representations.
\subsection{Coefficients of the Dirac and Majorana effective
operators}
In this section we analyze the contributions of different effective
operators to $Y_\nu$ and to $M_R$, showing that a phenomenologically
acceptable structure, able to reproduce (approximately) the correct
mass ratios and to give reasonable neutrino mixing angles, can be
obtained.
We assume that a large number of vectorlike FN fields exist in various
$SU(5)$ representations. Since we assign the heavy Majorana neutrinos
to the adjoint $\mathbf{N}$, the possible FN field
representations $\mathbf{R}$ can be identified starting from the
following tensor products involving the representations of the
fields in the external lines (see the diagrams in Fig.~\ref{fig:1}):
\begin{align}
\label{eq:9}
\mathbf{ \overline{5} \otimes {5_{\phi_u}} } =& \mathbf{
{1}\oplus{24} } \,, \\
\mathbf{\overline{5}} \>\> \mathbf{\otimes} \>\> \mathbf{\Sigma}\ =& \mathbf{
\overline{5}\oplus\overline{45} \oplus \overline{70}}\,, \\
\mathbf{ {N} \> \otimes {\Sigma} } \>\> =& \mathbf{{1_S}}\oplus
\mathbf{{24_S}} \oplus \mathbf{24_A} \oplus \mathbf{{75_S}}
\oplus \mathbf{{126_A}} \nonumber\\
&\oplus\mathbf{\overline{126_A}} \oplus \mathbf{{200_S}} \,,
\end{align}
where the subscripts $\scriptstyle\mathbf{S,\,A}$ in the last line
denote the symmetric or antisymmetric nature of the corresponding
representations. We assume that all FN fields transform nontrivially
under $SU(5)$, and thus that no singlet exists and, for simplicity, we
restrict ourselves to representations with dimension less than 100,
which results in the following possibilities $\mathbf{R}=\mathbf{24}$,
$\mathbf{5}$, $\mathbf{45}$, $\mathbf{70}$.
{\it Pointlike propagators:}\ Since the mass $M$ of
these fields is assumed to be larger than $\langle \Sigma_\pm\rangle
\sim \Lambda_{\text{GUT}}$, the contributions to the operators in
Eq.~\eqref{eq:4} can be evaluated by means of insertions of effective
pointlike propagators. As in \cite{Duque:2008ah} we denote the
contractions of two vectorlike fields in the representation
$\mathbf{R}$, $\overline{\mathbf{R}}$ as
\begin{align}
\left[\mathbf{R}_{d e\ldots}^{a b c\ldots}\overline{\mathbf{R}}_{l m n\ldots}^{p
q\ldots}\right]
=-\frac{i}{M}\mathcal{S}_{d e \ldots l m n\ldots}^{a b c\ldots p q \ldots}\,,
\end{align}
where all the indices are $SU(5)$ indices, and $\mathcal{S}$ is the appropriate
group index structure. The structures $\mathcal{S}$ for
$\left[\mathbf{5}^a\,\mathbf{\bar{5}}_b\right]$,
$\left[\mathbf{ 45}^{ab}_c\,\mathbf{\bar{45}}^n_{lm}\right]$ and
$\left[\mathbf{ 70}^{ab}_c\,\mathbf{\bar{70}}^n_{lm}\right]$
(and for several other
$SU(5)$ representations) can be found in Appendix A of
\cite{Duque:2008ah}. In addition we need the following
contractions
\begin{align}
\label{eq:10S}
i M \left[ \mathbf{24}^a_b\,\mathbf{24}^l_m \right]_{\mathbf{S}}&=
\left(\mathcal { S}_{\mathbf{S}}\right)^{a\,l}_{b\,m}=
\frac{5}{2}\left[\delta^a_m\, \delta_b^l +
\delta^a_l\, \delta_b^m\right]
- \delta^a_b\, \delta_m^l \,, \\
\label{eq:10A}
i M\left[\mathbf{24}^a_b\,\mathbf{24}^l_m \right]_{\mathbf{A}}&=
\left(\mathcal{ S}_{\mathbf{A}}\right)^{a\,l}_{b\,m}= \frac{5}{2}
\left[\delta^a_m\, \delta_b^l -\delta^a_l\, \delta_b^m\right]\,.
\end{align}
These two expressions are obtained by imposing the tracelessness conditions
for the adjoint $\left(\mathcal{ S}_{\mathbf{S,A}}\right)^{a\,l}_{a\,m}
=\left(\mathcal{ S}_{\mathbf{S,A}}\right)^{a\,l}_{b\,l}=0$ and the
normalization factor is fixed by the requirement that the (subtracted)
singlet piece $\delta^a_b\, \delta_m^l$ in Eq.~\eqref{eq:10S} provides
the proper singlet contraction, that is, by inserting the singlet in
the diagram of Fig.\ref{fig:1}(b) we require that the operator
$\left(\mathbf{\overline{5}}_a \mathbf{5}_{\phi_u}^a\right)
\cdot\left(\mathbf{N}_l^j \mathbf{\Sigma}_j^l\right)$ is obtained with
unit coefficient.
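The tracelessness of these structures can be checked numerically; a short Python sketch (indices $a,b,l,m$ run over the fundamental of $SU(5)$, here 0-based):

```python
def delta(i, j):
    return 1.0 if i == j else 0.0

def S_sym(a, b, l, m):
    # symmetric contraction (S_S)^{a l}_{b m} of Eq. (eq:10S)
    return 2.5 * (delta(a, m) * delta(b, l) + delta(a, l) * delta(b, m)) \
           - delta(a, b) * delta(m, l)

def S_asym(a, b, l, m):
    # antisymmetric contraction (S_A)^{a l}_{b m} of Eq. (eq:10A)
    return 2.5 * (delta(a, m) * delta(b, l) - delta(a, l) * delta(b, m))

N = 5
for S in (S_sym, S_asym):
    for i in range(N):
        for j in range(N):
            # trace over the first adjoint index pair: S^{a l}_{a m} = 0
            assert abs(sum(S(a, a, i, j) for a in range(N))) < 1e-12
            # trace over the second adjoint index pair: S^{a l}_{b l} = 0
            assert abs(sum(S(i, j, l, l) for l in range(N))) < 1e-12
```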
{\it Vertices:}\ All the vertices we need involve $\mathbf{5}_{\phi_u}$ or the
adjoint $\mathbf{\Sigma}$ with the external fermions $\mathbf{\bar{5}}$ and $\mathbf{N}$,
or with the FN representations $\mathbf{R}$ in the internal lines.
The vertices have the general form $-i \lambda \mathcal{ V}$ where
$\lambda$ is universal for all vertices. Including symmetry factors,
the relevant field contractions $ \mathcal{
V}=\mathbf{R}\,\mathbf{5}_{\phi_u}\,\mathbf{R'} $ or $\mathcal
{V}=\mathbf{R}\,\mathbf{\Sigma}\,\mathbf{R'}$, with
$\mathbf{R},\mathbf{R'}=\mathbf{5},\,\mathbf{24},\,\mathbf{45},\,\mathbf{70}$,
are:
\begin{eqnarray}
\label{eq:v5} &&
\mathbf{\bar{5}}_a \mathbf{24}^a_b\mathbf{5}^b \qquad \!\!\!
\mathbf{\bar{5}}_a \mathbf{24}^c_b \mathbf{ 45}^{ba}_c \qquad \!\!\!
\mathbf{\bar{5}}_a \mathbf{24}^c_b \mathbf{ 70}^{ba}_c
\qquad \mathbf{24}^a_c\mathbf{24}^c_b
\left(\mathbf{24}_{\mathbf{S,A}}\right)^b_a.
\\
\label{eq:v45} &&
\mathbf{\bar{45}}_{ab}^c{\mathbf{24}^\uparrow}^b_d\mathbf{ 45}^{da}_c \quad\ \quad
\frac{1}{2}\, \mathbf{\bar{45}}_{ab}^c{\mathbf{24}^\downarrow}_c^d\mathbf{ 45}^{ba}_d
\\ \label{eq:v70} &&
\mathbf{\bar{70}}_{ab}^c{\mathbf{24}^\uparrow}^b_d\mathbf{ 70}^{da}_c \quad\ \quad
\frac{1}{2}\, \mathbf{\bar{70}}_{ab}^c{\mathbf{24}^\downarrow}_c^d\mathbf{ 70}^{ba}_d
\quad\ \quad
\mathbf{\bar{45}}_{ab}^c\mathbf{24}^b_d\mathbf{ 70}^{da}_c \,,
\end{eqnarray}
where the vertices in the first line describe the couplings of the
external states ($\mathbf{\overline 5}$ and $\mathbf{N}$) with heavy
FN fields and flavons, while the last two lines involve only heavy FN
fields and flavons. There are two inequivalent ways of contracting
the indices for the vertices involving the $\mathbf{24}$ with pairs of $\mathbf{ 45}$
and $\mathbf{ 70}$ in the last two lines~\cite{Duque:2008ah}. They are distinguished in
Eqs.~(\ref{eq:v45}) and (\ref{eq:v70}) by an up ($\mathbf{24}^\uparrow$) or
down ($\mathbf{24}^\downarrow$) arrow-label. As explained
in~\cite{Duque:2008ah}, this can be traced back to the fact that these
representations are contained twice in their tensor products with the
adjoint.
{\it Relevant multiplet components:}\ We write the $SU(5)\times
U(1)_H$ breaking vevs as
\begin{equation}
\label{eq:VEV}
\langle\mathbf{\Sigma_{\pm}}\rangle=
\frac{V}{\sqrt{60}}\times\operatorname{diag}(2,2,2,-3,-3)\,,
\end{equation}
where the factor $1/\sqrt{60}$ gives the usual normalization of the
$SU(5)$ generators,
$\operatorname{Tr}(\mathbf{R}^a\overline{\mathbf{R}^b})=(1/2)\delta^{ab}$,
and the coefficients of the left handed neutrino couplings to the
$SU(2)$ singlet $\nu\,\phi^0_u\,N_S$ and $SU(2)$ triplet
$\nu\,\phi^0_u\,N_T$, as well as the Majorana neutrino mass terms
$N_{S,T}\,N_{S,T}$ are obtained by projecting the representations
$\mathbf{5}$, $\mathbf{5}_{\phi_u}$ and $\mathbf{N}$ onto the relevant
field components according to
\begin{eqnarray}
\label{eq:projections1}
\nu &=& - \overline{\mathbf{5}}_5 = - \delta_5^a \; \overline{\mathbf{5}}_a
\\
\label{eq:projections2}
\phi^0_u &=&\mathbf{5}_{\phi_u}^5 = \delta^5_b\; \mathbf{5}_{\phi_u}^b
\\
\label{eq:projections3}
N_S &=& \frac{1}{\sqrt{60}}\;
\operatorname{diag}(2,\,2,\,2,\,-3,\,-3)\cdot \mathbf{N}_{24}\,,
\\
N_T &=& \frac{1}{\sqrt{60}}\;
\operatorname{diag}(0,\,0,\,0,\,\sqrt{15},\,-\sqrt{15})
\cdot \mathbf{N}_{3}\,,
\label{eq:projections4}
\end{eqnarray}
where the subscripts in $\mathbf{N}_{24}$ (singlet) and
$\mathbf{N}_{3}$ (neutral component of the triplet) refer to the
corresponding $SU(5)$ generators. The assumption of a unique heavy
mass parameter $M$ for the FN fields and of universality of the
fundamental scalar-fermion couplings $\lambda$ yields a remarkable
level of predictivity. In particular, for the vertices involving
$\mathbf{\Sigma_\pm}$ we can always reabsorb $\lambda V \to V$. This
leaves just an overall power of $\lambda$ common to all effective
Yukawa operators that involve one insertion of the Higgs multiplet
$\mathbf{5}_{\phi_u}$ (see the diagrams in Fig.~\ref{fig:1}) and no
$\lambda$ at all for the contributions to $M_R$ (see the diagrams in
Fig.~\ref{fig:2}).
The contributions to $Y_\nu$ and $M_R$ at different orders can be
computed using the vertices $\mathcal{V}$ given in
Eqs.~\eqref{eq:v5}-\eqref{eq:v70} and the relevant group structures
$\mathcal{S}$ in Eqs.~\eqref{eq:10S}, \eqref{eq:10A} and in Appendix A
of~\cite{Duque:2008ah}, that account for integrating out the heavy FN
fields. Additionally, the multiplets $\mathbf{\bar 5}$, $\mathbf{N}$,
and $\mathbf{5}_{\phi_u}$ in the external legs of the diagrams must be
projected on the relevant components according to
Eqs.~\eqref{eq:projections1}-\eqref{eq:projections4} and the flavons
$\mathbf{\Sigma_\pm}$ have to be projected onto the vacuum according to
Eq.~\eqref{eq:VEV}.
We have evaluated $Y_\nu$ including the contributions up to
$\mathcal{O}(\epsilon^2)$ that are diagrammatically depicted in
Figs.~\ref{fig:1}: \ref{fig:1}(a) $\mathcal{O}(\epsilon^0)$;
\ref{fig:1}(b)--\ref{fig:1}(c) $\mathcal{O}(\epsilon^1)$;
\ref{fig:1}(d)--\ref{fig:1}(f) $\mathcal{O}(\epsilon^2)$. $M_R$ has
been computed including contributions with three insertions
of the flavons $\Sigma_\pm$
corresponding to the diagrams in Figs.~\ref{fig:2}: \ref{fig:2}(a)
$\mathcal{O}(\epsilon)$; \ref{fig:2}(b) $\mathcal{O}(\epsilon^2)$;
\ref{fig:2}(c) $\mathcal{O}(\epsilon^3)$. At each specific order,
the contributions to specific entries in $Y_\nu$ and $M_R$
can be written as
\begin{eqnarray}
\label{eq:orders}
Y_\nu^{(i)} &=& \lambda\,\alpha^{i+1}\,\epsilon^i \cdot
\left(y^S_i +y^T_i\right)\,,
\\
M_R^{(i)}& =& V\,\alpha^{i+3}\,\epsilon^i
\cdot \left(r^S_{i+1}+r^T_{i+1} \right)=
M\,\alpha^{i+3}\,\epsilon^{i+1}
\cdot \left(r^S_{i+1}+r^T_{i+1} \right)\,,
\end{eqnarray}
where $\alpha= 1/\sqrt{60}$ is the normalization factor for
$\mathbf{\Sigma}$ and for the $N_{S,T}$ in the adjoint, $V=M\epsilon$
with $V$ defined in Eq.~(\ref{eq:VEV}), and $y_i^{S,T}$ and
$r_{i+1}^{S,T}$ are the nontrivial group theoretical coefficients,
that we have computed for $i=0,1,2$ and for the singlet $(S)$ and triplet
$(T)$ contributions to the seesaw Lagrangian.
The corresponding
results for $y_i^{S,T}$ are given in Table~\ref{tab:1} (where we have
followed the notation of~\cite{Duque:2008ah}), while the results for
$r_{i+1}^{S,T}$ are given in Table~\ref{tab:2}.
\begin{figure}[t!]
\centering
\hspace{0.5cm}\includegraphics[scale=0.75]{y0}
\hspace{2.3cm}\includegraphics[scale=0.75]{y1}
\hspace{1.7cm}\includegraphics[scale=0.75]{y1-2} \\ [-5pt]
\leftline{\hspace{2.4cm}(a)\hspace{5.5cm}(b)\hspace{5.5cm}(c)}
\vspace{10pt}
\includegraphics[scale=0.75]{y2}
\includegraphics[scale=0.75]{y2-2}
\includegraphics[scale=0.75]{y2-3} \\ [-5pt]
\leftline{\hspace{2.5cm} (d) \hspace{5.1cm} (e) \hspace{5.0cm} (f)}
\caption{ Diagrams contributing to $Y_\nu$ at different orders. The
lowest order coefficient corresponding to diagram (a) is $y_0=3$.
Diagrams (b)--(c) contribute at $\mathcal{ O}(\epsilon^1)$ and yield the
coefficients $y_1$ in the second column in
Table~\ref{tab:1}. Diagrams (d)--(f) contribute at $\mathcal
{O}(\epsilon^2)$ and give the coefficients $y_2$ in the fourth
column of the table.}
\label{fig:1}
\end{figure}
\begin{figure}[h!!]
\centering
\includegraphics[scale=0.85]{r1} \hspace{1.0cm}
\includegraphics[scale=0.85]{r2} \hspace{1.0cm}
\includegraphics[scale=0.85]{r3} \\ [-5pt]
\leftline{\hspace{1.6cm}(a)\hspace{4.7cm}(b)\hspace{6.3cm}(c)}
\caption{
Diagrams contributing to $M_R$ at different orders. The lowest order coefficient
$r_1$ is obtained from diagram (a), $r_2$ from (b), and $r_3$ from (c).}
\label{fig:2}
\end{figure}
\renewcommand{\arraystretch}{1.28}
\begin{table}[t!]
\centering
\begin{tabular}{c|c|c|c|c|c|c|c|c|c}
$\mathbf{\bar 5}_1$ & $\mathbf{\bar 5}_2$ & $\mathbf{\bar 5}_3$
& $\mathbf{10}_1$ & $\mathbf{10}_2$ & $\mathbf{10}_3$ &
$\mathbf{5}_{\phi_u}=-\overline{\mathbf{5}}^{\phi_d}$&$N_1$&$N_2$ &$N_3$\\\hline
$-\frac{29}{10}$& $-\frac{9}{10}$ & $\frac{31}{10}$&
$\frac{13}{10}$&$\frac{3}{10}$&$-\frac{7}{10}$&$\frac{7}{5}$&
$\frac{5}{2}$&$-\frac{1}{2}$&$-\frac{11}{2}$\\
\end{tabular}
\caption{$F$--charges obtained with $n=-6$ in Eq.~\eqref{eq:2},
and $m_1=2$, $m_2=-1$, and $m_3=-6$ in Eq.~\eqref{eq:3}.}
\label{tab:3}
\end{table}
\renewcommand{\arraystretch}{1}
We have searched for all possible charge assignments with absolute
values of the $F$ charges smaller than 10, and we have examined the
resulting neutrino mass matrices. We have found some promising
possibilities. If we choose, for example, in Eqs.~(\ref{eq:5}) and
(\ref{eq:6}), $n=-6$ and $m_1=2$, $m_2=-1$, $m_3=-6$, we obtain the
$F$--charges shown in Table~\ref{tab:3}, which can be obtained from
the set given in Eq.~\eqref{eq:1} through the redefinitions
Eqs.~\eqref{eq:shift} with $a_{-6}=21/10$.
According to Eq.~\eqref{eq:5}, this set of $F$--charges
gives the following orders of magnitude for $Y_\nu$:
\begin{align}
\label{eq:12}
Y_{\nu}\sim&\lambda
\begin{pmatrix}
\epsilon & \epsilon^2 & \epsilon^7 \\
\epsilon^3 & 1 & \epsilon^5 \\
\epsilon^7 & \epsilon^4 & \epsilon \\
\end{pmatrix}.
\end{align}
Neglecting terms of $\mathcal{O}(\epsilon^4)$ and higher, and
including the coefficients $y_i^{S,T}$ and the
appropriate powers of the normalization factor $\alpha$, this reads:
\begin{align}
Y_\nu^{S,T}\,\approx\,&\lambda\, \alpha\, \begin{pmatrix}
y_1(\alpha\epsilon) & y_2(\alpha\epsilon)^2 & 0 \\
y_3(\alpha\epsilon)^3 & y_0 & 0 \\
0 & 0 & y_1(\alpha\epsilon) \\
\end{pmatrix}^{S,T},
\end{align}
where the superscript $^{S,T}$ outside the matrix is a shorthand for
$y_i^{S,T}$ inside the matrix. Similarly, according to
Eqs.~\eqref{eq:6} and \eqref{eq:8} we have for the entries in $M_R$ the
following orders of magnitude:
\begin{align}
\label{eq:13}
M_R\,\sim\, &
V\,
\begin{pmatrix}
\epsilon^4 & \epsilon^1 & \epsilon^2\\
\epsilon^1 & \epsilon^0 &\epsilon^5 \\
\epsilon^2 & \epsilon^5 &\epsilon^{10} \\
\end{pmatrix}.
\end{align}
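The exponent patterns of Eqs.~(\ref{eq:12}) and (\ref{eq:13}) follow mechanically from the choice $n=-6$, $m_I=(2,-1,-6)$; a minimal Python check:

```python
n, m = -6, (2, -1, -6)   # the charges of Table (tab:3)

# exponents |F| of Eq. (eq:8): Y_nu ~ eps^|F|, M_R ~ V eps^(|F|-1)
Y_exp = [[abs(2**I - 9 - n + m[J]) for J in range(3)] for I in (1, 2, 3)]
MR_exp = [[abs(1 + m[I] + m[J]) - 1 for J in range(3)] for I in range(3)]

assert Y_exp == [[1, 2, 7], [3, 0, 5], [7, 4, 1]]      # Eq. (eq:12)
assert MR_exp == [[4, 1, 2], [1, 0, 5], [2, 5, 10]]    # Eq. (eq:13)
```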
Neglecting terms of $\mathcal{O}(\epsilon^4)$ and higher, and
taking into account the coefficients $r_i^{S,T}$ and $\alpha$, we obtain
\begin{align}
\label{eq:14}
M_R^{S,T} \approx\, & \, V \, \alpha^{3}\,
\begin{pmatrix}
0 & r_2 (\alpha\epsilon) & r_3(\alpha\epsilon)^2\\
r_2(\alpha\epsilon) & r_1 &0 \\
r_3(\alpha\epsilon)^2 & 0 &0 \\
\end{pmatrix}^{S,T} \,.
\end{align}
According to Eq.~\eqref{eq:7}, the resulting light neutrino mass matrix is then
\begin{align}
\label{eq:Mnuappx}
M_\nu\approx& - \frac{v^2\sin^2\beta}{\alpha V}
\,\lambda^{2}\sum_{S,T} \left[ \frac{1}{r_1\,r_3}\, \begin{pmatrix}
0 & 0 & y_1^2r_1 \\
0 & \phantom{\Big|}\!\!y_0^2r_3 & -y_0y_1r_2 \\
y_1^2 r_1 & -y_0y_1r_2 & \frac{1}{r_3}\, y_1^2 r_2^2
\end{pmatrix}\right],
\end{align}
where we have neglected in each entry corrections of $\mathcal
{O}(\alpha\epsilon)^2$ and higher, and we have suppressed the
superscripts $^{S,T}$ to avoid cluttering the expression. It is remarkable
that at leading order the structure of the light neutrino mass matrix
remains determined only in terms of the group theoretical coefficients
$y_i^{S,T}$ and $r_i^{S,T}$, and in particular it does not depend on the
hierarchical parameter $\epsilon$. Let us also note that this matrix
corresponds to the two-zero texture type of neutrino mass matrix
discussed in~\cite{Mohanta:2006xd}. As regards the scale $\alpha V$
appearing in the denominator of Eq.~\eqref{eq:Mnuappx}, it can be
directly related to the unification scale, defined as the mass scale
of the leptoquark gauge fields $M_X=M_Y$~\cite{Bailin:1986wt}:
\begin{align}
\label{eq:Lgut}
\Lambda_{GUT}=M_X
=5\,g_5\,\alpha\, V\,,
\end{align}
where $g_5\approx 0.7$ is the unified gauge coupling at
$\Lambda_{GUT}\simeq10^{16}\,$GeV.
Note that both $Y_\nu$ and $M_R$ are hierarchical,
with the first one having a hierarchy between its eigenvalues of
$\mathcal {O}(\alpha\epsilon)$ and the second one of $\mathcal
{O}(\alpha\epsilon^2)$. The light neutrino mass matrix computed
naively (and erroneously, see Section~\ref{sec:signs}) from the
effective seesaw operator using only the charges of the
$\overline{\mathbf{5}}_I$ multiplets would also be hierarchical.
However, the resulting $M_\nu$ is not hierarchical, and in fact at
leading order it does not depend at all on $\epsilon$ but only on the
group theoretical coefficients $y_i^{S,T}$ and $r_i^{S,T}$. It is precisely the
presence of $F$ charges of both signs for the fields and for the two
flavons that yields the possibility of obtaining non-hierarchical
neutrino masses and large mixing angles, although the whole scenario
is defined at the fundamental level in terms of a small hierarchical
parameter $\epsilon$.
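The leading-order structure quoted in Eq.~(\ref{eq:Mnuappx}) can also be verified numerically, by building $Y_\nu$ and $M_R$ with the textures of the previous equations and inverting $M_R$ exactly. In the Python sketch below the values of $y_2$, $y_3$ are hypothetical placeholders (the leading order must not depend on them), and overall prefactors are dropped:

```python
# y2, y3 are arbitrary placeholders; only the leading structure matters
y0, y1, y2, y3 = 3.0, -24.0, 5.0, 7.0
r1, r2, r3 = -30.0, -150.0, -750.0
u = 1e-4  # u = alpha*epsilon, assumed small

Y = [[y1 * u, y2 * u**2, 0.0],
     [y3 * u**3, y0, 0.0],
     [0.0, 0.0, y1 * u]]
MR = [[0.0, r2 * u, r3 * u**2],
      [r2 * u, r1, 0.0],
      [r3 * u**2, 0.0, 0.0]]

def inv3(A):
    """Cofactor (adjugate) inverse of a 3x3 matrix."""
    det = (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
         - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
         + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))
    return [[(A[(j+1) % 3][(i+1) % 3] * A[(j+2) % 3][(i+2) % 3]
            - A[(j+1) % 3][(i+2) % 3] * A[(j+2) % 3][(i+1) % 3]) / det
            for j in range(3)] for i in range(3)]

MRinv = inv3(MR)
# seesaw combination Y MR^{-1} Y^T, dropping -v^2 sin^2(beta) lambda^2/(alpha V)
M = [[sum(Y[i][k] * MRinv[k][l] * Y[j][l] for k in range(3) for l in range(3))
      for j in range(3)] for i in range(3)]

# leading-order structure of Eq. (eq:Mnuappx)
p = 1.0 / (r1 * r3)
lead = [[0.0, 0.0, p * y1**2 * r1],
        [0.0, p * y0**2 * r3, -p * y0 * y1 * r2],
        [p * y1**2 * r1, -p * y0 * y1 * r2, p * y1**2 * r2**2 / r3]]

for i in range(3):
    for j in range(3):
        assert abs(M[i][j] - lead[i][j]) < 1e-6
```

The agreement is exact up to the $\mathcal{O}(\alpha\epsilon)^2$ corrections mentioned above.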
Let us comment at this point that, as discussed
in~\cite{Duque:2008ah}, corrections from sets of higher order diagrams
to the various entries in $Y_\nu$ and $M_R$ can generically be quite
sizable, although suppressed by higher powers of $\epsilon$. This is
because at higher orders the number of diagrams contributing to the
various operators proliferates, and the individual group theoretical
coefficients also become generically much larger, as can be seen in
Tables~\ref{tab:1} and~\ref{tab:2}. By direct evaluation of higher
order corrections, the related effects were estimated
in~\cite{Duque:2008ah} to be typically of a relative order $\sim 20\%
- 30\%$. To take into account the possible effects of these
corrections, we allow for a $\sim 25\% $ uncertainty in the
final numerical results.
\section{Numerical analysis}
\label{sec:numerical}
\begin{table}[t!]
\centering
\begin{tabular}{|l|r|r||l|r|r|}\hline
\phantom{$\Big|$}$\epsilon^1$ & $y_1^S$& $y_1^T$&\ $\epsilon^2$ &$y_2^S$&$y_2^T$ \ \\
\hline\phantom{$\Big|$}
$\!\![\mathbf{5}^{\phi_{u}}\Sigma]$& & &$[\mathbf{5}^{\phi_{u}}\Sigma\Sigma]$&$ $&$$ \ \\ [3pt]
$O(\epsilon;\mathbf{24_S})$&$-15$&$-15\sqrt{15}$&$O(\epsilon^{2};\mathbf{24_S},\mathbf{24_S})$&$-75$&$-225\sqrt{15}$\\[2pt]
$O(\epsilon;\mathbf{24_A})$&$\ 0$&$0$&$O(\epsilon^{2};\mathbf{24_A},\mathbf{24_S})$& $0$&$0$\\ [2pt]
&&&$O(\epsilon^{2};\mathbf{24_S},\mathbf{24_A})$& $0$&$0$\\ [2pt]
&&&$O(\epsilon^{2};\mathbf{24_A},\mathbf{24_A})$& $0$&$0$\\ [3pt]
\hline \phantom{$\Big|$}
$[\Sigma\mathbf{5}^{\phi_{u}}]$& & &$[\Sigma\mathbf{5}^{\phi_{u}}\Sigma]$&$ $&$$ \\ [3pt]
$O(\epsilon;\mathbf{5})$&$\ -9$&$-3\sqrt{15}$&$O(\epsilon^{2};\mathbf{5},\mathbf{24_S})$&$-45$&$-45\sqrt{15}$\\ [2pt]
&&&$O(\epsilon^{2};\mathbf{5},\mathbf{24_A})$& $\ \ 0$&$0$\\ [2pt]
$O(\epsilon;\mathbf{45})$&$\ 75$&$-15\sqrt{15}$&$O(\epsilon^{2};\mathbf{45},\mathbf{24_S})$&$300$&$-180\sqrt{15}$\\ [2pt]
&&&$O(\epsilon^{2};\mathbf{45},\mathbf{24_A})$&$\ \ 0$&$0$\\ [2pt]
$O(\epsilon;\mathbf{70})$&$-225$&$-15\sqrt{15}$&$O(\epsilon^{2};\mathbf{70},\mathbf{24_S})$&$-900$&$-180\sqrt{15}$\\ [2pt]
&&&$O(\epsilon^{2};\mathbf{70},\mathbf{24_A})$& $\ \ 0$&$0$\\ [2pt]
\cline{4-6}& &\phantom{$\Big|$}
&$[\Sigma\Sigma\mathbf{5}^{\phi_{u}}]$&$ $&$$ \\ [3pt]
&&&$O(\epsilon^{2};\mathbf{5},\mathbf{5})$& $-27$&$-9\sqrt{15}$ \\ [2pt]
&&&$O(\epsilon^{2};\mathbf{5},\mathbf{45})$& $225$ &$-45\sqrt{15}$ \\ [2pt]
&&&$O(\epsilon^{2};\mathbf{5},\mathbf{70})$& $-675$&$-45\sqrt{15}$ \\ [2pt]
&&&$O(\epsilon^{2};\mathbf{45},\mathbf{5})$& $225$&$75\sqrt{15}$ \\ [2pt]
&&&$O^{\uparrow}(\epsilon^{2};\mathbf{45},\mathbf{45})$& $1425$&$-285\sqrt{15}$ \\ [2pt]
&&&$O^{\downarrow}(\epsilon^{2};\mathbf{45},\mathbf{45})$&$525$&$-105\sqrt{15}$\\ [2pt]
&&&$O(\epsilon^{2};\mathbf{45},\mathbf{70})$& $1125$&$75\sqrt{15}$ \\ [2pt]
&&&$O(\epsilon^{2};\mathbf{70},\mathbf{5})$& $-675$&$-225\sqrt{15}$ \\ [2pt]
&&&$O(\epsilon^{2};\mathbf{70},\mathbf{45})$& $-1125$&$225\sqrt{15}$ \\ [2pt]
&&&$O^\uparrow(\epsilon^{2};\mathbf{70},\mathbf{70})$& $-4725$&$-315\sqrt{15}$ \\ [2pt]
&&&$O^\downarrow(\epsilon^{2};\mathbf{70},\mathbf{70})$& $-675$&$-45\sqrt{15}$\\ [3pt]
\hline\phantom{$\Big|$}
$\Sigma_{\mathbf{R}}O(\epsilon; \mathbf{R})$&$-174$&$-48\sqrt{15}$&$\Sigma_{\mathbf{R}} O(\epsilon^{2}; \mathbf{R})$&$-5097$&$-1329\sqrt{15}$\\ [3pt]
\hline
\end{tabular}
\caption{Operators contributing to ${Y}_\nu=\sum_i {Y}^{(i)}_\nu$
at $\mathcal{ O}(\epsilon)$ and $\mathcal{ O}(\epsilon^2)$ and values of
the corresponding coefficients
$y_i =Y^{(i)}_\nu/\left(\lambda\,\alpha^{i+1}\, \epsilon^i\right)$
for the singlet $(S)$ and triplet $(T)$ components.
The values of the $\mathcal{ O}(1)$ coefficients are
$y_0^S=3$ and $y_0^T=\sqrt{15}$. An example of calculating an entry in the table is given in Appendix~\ref{aped.C}.
}
\label{tab:1}
\end{table}
\begin{table}[t!]
\centering
\begin{tabular}{|l|c||l|c||l|c|}\hline
\phantom{$\Big|$}$\epsilon^1$&$(r_1^S,r_1^T)$
& $\epsilon^2$ & $(r_2^S,r_2^T)$
& $\epsilon^3$ & $(r_3^S,r_3^T)$
\\\hline
\phantom{$\Big|$}
$\!\![\Sigma]$&&$[\Sigma\Sigma]$&& $[\Sigma\Sigma\Sigma]$&$$ \\ [6pt]
\phantom{$\Big|$}$O(\epsilon;\mathbf{ 24})$ & $(-30,-90)$
&$O(\epsilon^{2};\mathbf{ 24_S})$& $(-150,-1350)$
&$O(\epsilon^{3};\mathbf{24_S},\mathbf{24_S})$& $(-750,-20250)$ \\ [4pt]
&&$O(\epsilon^{2};\mathbf{ 24_A})$& $(0,0)$
&$O(\epsilon^{3};\mathbf{24_A},\mathbf{24_S})$& $(0,0)$\\ [4pt]
&& &&$O(\epsilon^{3};\mathbf{24_S},\mathbf{24_A})$& $(0,0)$\\ [4pt]
&& &&$O(\epsilon^{3};\mathbf{24_A},\mathbf{24_A})$& $(0,0)$\\ [4pt]
\hline \phantom{$\Big|$}
$\Sigma_{\mathbf{R}} O(\epsilon; \mathbf{R})$&$(-30,-90)$
&$\Sigma_{\mathbf{R}} O(\epsilon^{2}; \mathbf{R})$&$(-150,-1350)$
&$\Sigma_{\mathbf{R}} O(\epsilon^{3}; \mathbf{R})$&$(-750,-20250)$ \\ \hline
\end{tabular}
\caption{
Operators contributing to ${M}_R=\sum_i M^{i}_R$ at
$\mathcal{ O}(\epsilon)$, $\mathcal{O}(\epsilon^2)$
and $\mathcal{ O}(\epsilon^3)$, and values of the corresponding coefficients
$r_i =M_R^{(i-1)}/\left(V\alpha^{i+2}\,\epsilon^{i-1}\right)$ for the singlet $(S)$ and triplet $(T)$ components.
}
\label{tab:2}
\end{table}
Allowing for all the contributions listed in Table~\ref{tab:1}, the
resulting coefficient at $\mathcal{ O}(\epsilon)$ for $Y_\nu^S$ would be
$y_1^S=-174$, which is too large to reproduce the neutrino oscillation
data. We will then assume that only some contributions are present.
This is easily achieved by assuming that no FN fields exist in the
representations $\mathbf{70}_{39/10}$, $\mathbf{70}_{-41/10}$,
$\mathbf{45}_{39/10}$ and $\mathbf{45}_{-41/10}$, and this results in
much smaller coefficients $y_1^S=-24$ and $y_1^T=-18\sqrt{15}$, which
are determined by the $y_1^{S,T}$ entries in the first and third lines
of Table~\ref{tab:1}, and which are the ones we will use henceforth.
(The absence of these representations also implies that several
contributions to the higher order coefficient $y_2^{S,T}$ are absent,
which yields much smaller values $y_2^{S,T}\sim 10^2$ instead of
$\sim 10^3$, see Table~\ref{tab:1}. In any case, since at leading
order $M_\nu$ in Eq.~(\ref{eq:Mnuappx}) does not depend on $y_2^{S,T}$,
this only affects the higher order corrections.) As regards the
contributions to $M_R$, they arise only from insertions of the
$\mathbf{24}$, and thus they are not affected by the absence of
$\mathbf{70}$ and $\mathbf{45}$.
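The restricted coefficients follow from summing the surviving $\mathcal{O}(\epsilon)$ rows of Table~\ref{tab:1}; a short numerical check:

```python
import math

s15 = math.sqrt(15)
# O(epsilon) entries (y1^S, y1^T) of Table (tab:1), per heavy representation
contrib = {
    "24_S": (-15, -15 * s15),
    "5":    (-9,  -3 * s15),
    "45":   (75,  -15 * s15),
    "70":   (-225, -15 * s15),
}

# keeping all messengers reproduces the bottom row of the table
full_S = sum(v[0] for v in contrib.values())
full_T = sum(v[1] for v in contrib.values())
assert full_S == -174 and abs(full_T - (-48 * s15)) < 1e-9

# dropping the 45 and 70 messenger representations
kept = {k: v for k, v in contrib.items() if k in ("24_S", "5")}
y1_S = sum(v[0] for v in kept.values())
y1_T = sum(v[1] for v in kept.values())
assert y1_S == -24 and abs(y1_T - (-18 * s15)) < 1e-9
```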
By using in
Eq.~(\ref{eq:Mnuappx}) $(y_0^S,y_0^T)=(3,\sqrt{15})$,
$(y_1^S,y_1^T)=(-24,-18\sqrt{15})$
and the values of $(r_i^S,r_i^T)$ given in
Table~\ref{tab:2}, we obtain
\begin{align}
\label{eq:15}
M_\nu \;\approx\; & - 5\,
\left(\lambda\,\sin\beta\right)^2
\frac{g_5\, v^2}{\Lambda_{\text{GUT}}}
\,
\begin{pmatrix}
0 & 0 & -1.0 \\
0 & -0.47 & -0.68 \\
-1.0 & -0.68 & -1.0
\end{pmatrix}.
\end{align}
With $v=175\,$GeV and $\Lambda_{\text{GUT}}\approx 10^{16}\,$GeV
the numerical value of the
prefactor is $
\approx 0.008\,(\sin\beta\,\lambda)^2\,$eV. For $\tan\beta\approx10$
($\tan\beta\approx1$) the atmospheric mass scale $\approx 0.05\,$eV
can then be reproduced for acceptable values of the coupling
$\lambda\sim 1.9\;(2.7)\,$.
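The numerical entries of the matrix in Eq.~(\ref{eq:15}) can be reproduced from the group theoretical coefficients of Tables~\ref{tab:1} and~\ref{tab:2}; a short Python check of the singlet-plus-triplet sums entering Eq.~(\ref{eq:Mnuappx}):

```python
import math

s15 = math.sqrt(15)
# group theoretical coefficients, singlet (S) and triplet (T) channels
y0 = {"S": 3.0,    "T": s15}
y1 = {"S": -24.0,  "T": -18 * s15}
r1 = {"S": -30.0,  "T": -90.0}
r2 = {"S": -150.0, "T": -1350.0}
r3 = {"S": -750.0, "T": -20250.0}

def entry(f):
    # sum the S and T contributions, as in Eq. (eq:Mnuappx)
    return sum(f(c) for c in ("S", "T"))

e13 = entry(lambda c: y1[c]**2 / r3[c])
e22 = entry(lambda c: y0[c]**2 / r1[c])
e23 = entry(lambda c: -y0[c] * y1[c] * r2[c] / (r1[c] * r3[c]))
e33 = entry(lambda c: y1[c]**2 * r2[c]**2 / (r1[c] * r3[c]**2))

assert abs(e13 - (-1.008)) < 1e-3
assert abs(e22 - (-0.4667)) < 1e-3
assert abs(e23 - (-0.68)) < 1e-3
assert abs(e33 - (-1.008)) < 1e-3
```

Rounded to two digits these are the $-1.0$, $-0.47$, $-0.68$, $-1.0$ entries of Eq.~(\ref{eq:15}).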
Our model is based on the successful model for the $d$-quark and
lepton masses discussed in Ref.~\cite{Duque:2008ah}, and we have
checked that the absence of the representations that we have forbidden
here does not affect the results of this previous study. In particular,
by using the coefficients calculated in Ref.~\cite{Duque:2008ah} we
have for the matrix of the charged leptons Yukawa couplings
\begin{align}
Y^e\simeq
\begin{pmatrix}
\epsilon^4 & \epsilon^5 & \epsilon^4 \\
-2.9\epsilon^3 & 3.8\epsilon^2
& 10.2\epsilon^3 \\
-7.6\epsilon^3 & 9.2\epsilon^2 & 2.3\epsilon
\end{pmatrix}\,.
\end{align}
To compute the neutrino mixing matrix
$U_{PMNS}=U_\nu\left(V_L^e\right)^\dagger$, besides the matrix $U_\nu$
that diagonalizes $M_\nu$ in Eq.~\eqref{eq:15}, we also need $V_L^e$
that diagonalizes the left-handed product ${Y^e}{Y^e}^\dagger$. We
obtain
\begin{equation}
\label{eq:Ue}
V_L^e \sim
\begin{pmatrix}
1 & 10^{-5} &10^{-5} \\
10^{-5} & -1 & 0.02 \\
10^{-5} & 0.02 & 1
\end{pmatrix},
\end{equation}
that is approximately diagonal, and thus $U_{PMNS}\approx U_\nu$.
Allowing for a $\sim 25\%$ numerical uncertainty in the entries of the
matrix in Eq.~\eqref{eq:15}, we find that it is possible to fit the
neutrino oscillation data, with the exception of
$\sin^2\theta_{12}\sim 0.5$, for which a particularly large correction
is needed. Finally, the masses of the lightest heavy singlet and triplet
neutrino states can be obtained from Eq.~\eqref{eq:14} and are
\begin{align}
& M_1^S\ \approx\ 5\times 10^{11}\, \text{GeV}\,,\\
& M_1^T\approx 1.5\times 10^{13}\, \text{GeV}\,.
\end{align}
In particular the mass of the singlet Majorana neutrino is of the
right order of magnitude to allow for thermal
leptogenesis~\cite{Davidson:2008bu}.
\chapter{Baryonic violation of $R$-parity from anomalous $U(1)_H$}\label{cap2}
\label{intro2}
Supersymmetric scenarios with $R$-parity conservation are becoming very constrained due to the lack of missing-energy signals associated with heavy neutral particles, thus motivating scenarios with $R$-parity violation. In view of this, we consider a supersymmetric model with $R$-parity violation, extended by an anomalous horizontal $U(1)_{H}$ symmetry. A self-consistent framework with baryon-number violation is achieved, along with a proper suppression of lepton-number violating dimension-five operators, so that the proton can be sufficiently stable. With the introduction of right-handed neutrinos, both Dirac and Majorana masses can be accommodated within this model. The implications for collider physics are discussed.
\section{Horizontal model with Baryon-number violation}
\label{sec:BNVmodel}
In the simplest scenario, the $U(1)_H$ symmetry is spontaneously
broken at a scale close to the Planck mass, $M_P$, by the vacuum expectation
value of a SM singlet scalar, the flavon field $S$, with $H$ charge~$-1$,
which allows us to define the expansion parameter $\theta=\langle S
\rangle/M_P\approx0.22$ (see Sec.~\ref{theta}).
The fermion masses and mixings are determined by factors of the type
$\theta^n$, where $n$ is fixed by the horizontal charges of the fields
involved.
In supersymmetric scenarios, the order of magnitude of the $R$-parity
violating couplings can also be fixed by the FN mechanism
\cite{Mira:2000gg, Dreiner:2003hw, Dreiner:2003yr, Choi:1998wc,Choi:1996se, Binetruy:1996xk, Joshipura:2000sn, Ellis:1998rj, BenHamo:1994bq}.
In what follows we will constrain the $H$ charges to satisfy the
condition $|H(f_i)| \lesssim 10$, which leads to a consistent prediction of
the size of the suppression factor $\theta$ in the context of string
theories~\cite{Dreiner:2003hw,Choi:1996se} (see the discussion in Sec.~\ref{theta}).
From Eq.~\eqref{resum} it is straightforward to see the possible
scenarios in the context of an anomalous horizontal Abelian symmetry
with a single flavon, reviewed in the introduction.
The MSSM is obtained when $\mathcal{N}/3$, each individual $n_i$
and $\mathcal{N}-2n_k$ are fractional~\cite{Dreiner:2003yr,Dreiner:2007vp}.
Bilinear $R$-parity violation\footnote{See, for example, Ref.~\cite{Diaz:2003as} and references therein.} is obtained when $\mathcal{N}/3$
is fractional and each $n_i$ is a negative integer~\cite{Dreiner:2006xw, Mira:2000gg}.
Another self-consistent $R$-parity breaking model with $L$ violation can
be obtained if $\mathcal{N}/3$ and each individual $n_i$ are fractional,
but some of the $\mathcal{N}-2n_k$ are integers. In such a case the
decays of the LSP are leptophilic~\cite{Sierra:2009zq} (for these developments see Sec.~\ref{modelt}).
In this thesis we want to explore the last self-consistent possibility,
namely an $R$-parity breaking model with $B$ violation. It is
clear from Eq.~\eqref{resum} that if $\mathcal{N}$ is an integer
multiple of 3, and each $n_i$ is fractional but not half-integer,
then only the nine $\lambda_{ijk}''$ couplings are generated. The specific horizontal charges are
\begin{align}
\label{eq:Hlpp}
H \begin{pmatrix}
\lambda_{112}''&\lambda_{212}''&\lambda_{312}''\\
\lambda_{113}''&\lambda_{213}''&\lambda_{313}''\\
\lambda_{123}''&\lambda_{223}''&\lambda_{323}''\\
\end{pmatrix}=
\begin{pmatrix}
6 & 3 & 1\\
6 & 3 & 1\\
5 & 2 & 0\\
\end{pmatrix} +n_{\lambda''}\mathbf{1_{3}},
\end{align}
where $\mathbf{1_3}$ is a $3\times 3$ matrix filled with ones, and $n_{\lambda''}$ is defined by
\begin{align}\label{nlpp}
n_{\lambda''}=x+ \frac{1}{3}\mathcal{N}\,.
\end{align}
For positive $n_{\lambda''}$ values, the third-generation couplings dominate with fixed ratios between them:
\begin{align}
\label{eq:Hierlpp}
\begin{pmatrix}
\lambda_{112}''&\lambda_{212}''&\lambda_{312}''\\
\lambda_{113}''&\lambda_{213}''&\lambda_{313}''\\
\lambda_{123}''&\lambda_{223}''&\lambda_{323}''\\
\end{pmatrix}\approx&
\theta^{n_{\lambda''}}\begin{pmatrix}
\theta^6 & \theta^3 & \theta\\
\theta^6 & \theta^3 & \theta\\
\theta^5 & \theta^2 & 1\\
\end{pmatrix} & n_{\lambda''}\ge 0\,.
\end{align}
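The texture in Eq.~\eqref{eq:Hierlpp} can be tabulated numerically. The following sketch is ours (the choice $n_{\lambda''}=6$ is only an illustrative value, taken from the solutions discussed later); it shows how each extra power of $\theta$ suppresses a coupling by a factor $\approx 0.22$, with $\lambda''_{323}$ dominant:

```python
# Sketch: evaluate the lambda'' texture of Eq. (eq:Hierlpp) for theta = 0.22.
# The value n_lpp = 6 is an illustrative choice, not a prediction.
theta = 0.22
n_lpp = 6

# Exponent matrix from Eq. (eq:Hlpp): rows/columns ordered as in the text,
# (112, 212, 312) / (113, 213, 313) / (123, 223, 323).
exponents = [
    [6, 3, 1],
    [6, 3, 1],
    [5, 2, 0],
]
lam = [[theta ** (n_lpp + e) for e in row] for row in exponents]

for row in lam:
    print(["%.1e" % v for v in row])
# The (3,3) entry, lambda''_323 ~ theta^n_lpp, is the largest coupling.
```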
For negative values some of the couplings become forbidden in the superpotential by
holomorphy, and for $n_{\lambda''}<-6$ all of them must be generated
from the K\"ahler potential with additional Planck-mass suppression,
so that the LSP may be a decaying dark matter candidate, as in the
case of $L$ violation studied in Ref.~\cite{Sierra:2009zq}.
We will not pursue this possibility in this work because in that case
the phenomenology at colliders should be the same as that in the MSSM.
Below, we will determine the allowed range for $n_{\lambda''}$ and its
consequences at present and future colliders.
\subsection{Constraints from $\Delta B\neq0$ processes}
Several experimental constraints on $B$-violating couplings exist, both on individual couplings and on quadratic products of couplings~\cite{Barbier:2004ez}. For individual couplings, the strongest constraints are on $\lambda''_{11k}$. Because in our model the predicted order of magnitude for the coupling $\lambda_{113}''$ is the same as that for $\lambda_{112}''$, the most restrictive constraint is that obtained for the latter, and it comes from
the dinucleon $NN\to KK$ width, which according to Refs.~\cite{Goity:1994dq,Csaki:2011ge} is
\begin{align}
\Gamma\sim \rho_N\frac{128\pi\alpha_s^2|\lambda''_{112}|^4(\tilde{\Lambda})^{10}}{m_{N}^2m_{\tilde{g}}^2m_{\tilde{q}}^8}\,,
\end{align}
where $\rho_N\approx 0.25\ \text{fm}^{-3}$ is the nucleon density,
$m_N\approx m_p$ is the nucleon mass, and $\alpha_s\approx0.12$ is the
strong coupling.
Note that this kind of matter instability requires only $B$ violation and is suppressed by the tenth power of $\tilde{\Lambda}$, which parametrizes the hadronic and nuclear effects.
For this quantity, an order-of-magnitude variation is expected around the $\Lambda_{\text{QCD}}$ scale of $200\ \text{MeV}$.
However, $\tilde{\Lambda}$ is roughly expected to be smaller than $\Lambda_{\text{QCD}}$ because of the repulsion effects inside the nucleus~\cite{Csaki:2011ge}.
From general experimental searches of matter instability~\cite{Berger:1991fa}, lower bounds similar to the proton lifetime should be used for this specific dinucleon channel~\cite{Goity:1994dq}, and therefore additional suppression of $\lambda''_{112}$ could be required. In fact, the first lower bound on dinucleon decay to kaons has been recently obtained from Super-Kamiokande data~\cite{Litos:2010}:
\begin{align*}
\tau_{NN\to KK} =\frac{1}{\Gamma}>1.7\times10^{32}\ \text{yr}\,.
\end{align*}
From this value, we can obtain a constraint for the $B$ violating coupling:
\begin{align}
|\lambda''_{112}|\lesssim3.2\times 10^{-7}
\left(\frac{1.7\times10^{32}\, \text{yr}}{\tau_{NN\to KK}}\right)^{1/4}
\left(\frac{m_{\tilde{g}}}{300\, \text{GeV}}\right)^{1/2}
\left(\frac{m_{\tilde{q}}}{300\, \text{GeV}}\right)^{2}
\left(\frac{75\ \text{MeV}}{\tilde{\Lambda}}\right)^{5/2}\,,
\end{align}
where a conservative value for $\tilde{\Lambda}$, as in Ref.~\cite{Barbier:2004ez}, has been used. Larger values of $\tilde\Lambda$ give rise to even smaller upper bounds on $|\lambda''_{112}|$. In Fig.~\ref{fig:constraints_contour}, we illustrate the effect of varying the gluino and squark masses. We can see that the constraint remains strong for large values of the relevant supersymmetric masses, especially for light gluinos.
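The quoted prefactor can be cross-checked by inverting the width formula above. The sketch below is ours; it works in natural units (GeV), uses the benchmark values given in the text, and standard conversion factors ($\hbar c$ and seconds per year):

```python
# Sketch: order-of-magnitude check of the |lambda''_112| bound from the
# dinucleon NN -> KK width. All masses/scales in GeV (natural units).
from math import pi

GEV_INV_TO_YR = 6.58e-25 / 3.156e7   # 1 GeV^-1 expressed in years
FM_INV_TO_GEV = 0.1973               # hbar*c: 1 fm^-1 in GeV

rho_N = 0.25 * FM_INV_TO_GEV ** 3    # nucleon density 0.25 fm^-3 in GeV^3
m_N = 0.938                          # nucleon mass ~ proton mass
alpha_s = 0.12
Lambda_tilde = 0.075                 # hadronic scale, 75 MeV
m_gluino = m_squark = 300.0

# Gamma = rho_N * 128*pi*alpha_s^2 * |lambda|^4 * Ltilde^10 / (m_N^2 m_g^2 m_q^8)
prefactor = (rho_N * 128 * pi * alpha_s ** 2 * Lambda_tilde ** 10
             / (m_N ** 2 * m_gluino ** 2 * m_squark ** 8))

tau_bound = 1.7e32 / GEV_INV_TO_YR   # lifetime bound in GeV^-1
lam_max = (1.0 / (tau_bound * prefactor)) ** 0.25
print("|lambda''_112| <~ %.1e" % lam_max)   # ~ 3.2e-7, matching the text
```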
\begin{figure}
\centering
\includegraphics[scale=0.6]{constraints_contour}
\caption{$\lambda_{112}''$ constraint as a function of the squark and gluino masses, for $\tilde{\Lambda}=75\ \text{MeV}$ and $\tau_{NN\to KK}=1.7\times10^{32}\, \text{yr}$.}
\label{fig:constraints_contour}
\end{figure}
For $\tilde{m}=m_{\tilde{g}}=m_{\tilde{q}}$, we can obtain the lower bound
\begin{align}
\label{eq:tildem}
\tilde{m} \gtrsim& (279\ \text{GeV})\theta^{(-8+2n_{\lambda''})/5}
\left(\frac{\tau_{NN\to KK}}{1.7\times10^{32}\, \text{yr}}\right)^{1/10}
\left(\frac{\tilde{\Lambda}}{75\ \text{MeV}}\right),&n_{\lambda''}\ge& -6\,.
\end{align}
The excluded supersymmetric masses as a function of $n_{\lambda''}$ are illustrated by the yellow (light-gray) bands in Fig.~\ref{fig:constraints}. The important restrictions appear for negative powers of $\theta$ in Eq.~\eqref{eq:tildem}, corresponding to $n_{\lambda''}\le 4$. If $\tilde\Lambda$ is increased to $150\ \text{MeV}$, stronger restrictions are obtained, as illustrated by the dashed bands of Fig.~\ref{fig:constraints}. We can see that for the full range of equal gluino and squark masses displayed in Fig.~\ref{fig:constraints}, the constraint is strong enough to forbid all the negative solutions of $n_{\lambda''}$, and also some of the positive solutions depending on the chosen $\tilde{\Lambda}$ value.
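The size of these exclusions can be read off directly from Eq.~\eqref{eq:tildem}. This numerical sketch is ours, evaluated at the reference values $\tau_{NN\to KK}=1.7\times10^{32}$ yr and $\tilde\Lambda=75$ MeV:

```python
# Sketch: lower bound on the common gluino/squark mass from Eq. (eq:tildem),
# at the reference lifetime and Lambda-tilde = 75 MeV.
theta = 0.22

def m_tilde_min(n_lpp):
    """Lower bound on m-tilde in GeV as a function of n_lambda''."""
    return 279.0 * theta ** ((-8 + 2 * n_lpp) / 5.0)

for n in range(-6, 11):
    print("n_lpp = %3d  ->  m_tilde > %.3g GeV" % (n, m_tilde_min(n)))
# Negative n_lpp push the bound far above the TeV scale, while for
# n_lpp >= 6 the bound drops well below current squark/gluino limits.
```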
It is also possible to exclude the negative solutions if we use the available bounds on quadratic products of couplings. For our model the most important constraint is obtained from the penguin decays $B\to \phi\pi$~\cite{Barbier:2004ez, BarShalom:2002sv}. Updating the limit with the latest result from BABAR~\cite{Aubert:2006nn}\footnote{The limit from Belle is $\operatorname{Br}(B^+\to \phi\pi^+)<3.3\times 10^{-7}$~\cite{Kim:2012gt}.}, $\operatorname{Br}(B^+\to \phi\pi^+)<2.4\times10^{-7}$, we obtain from Fig.~3 of Ref.~\cite{BarShalom:2002sv}
\begin{align}
|\lambda_{i23}''\lambda_{i12}^{\prime\prime *}|<2\times 10^{-5}\left(
\frac{m_{\tilde{u}_{i R}}}{100\text{ GeV}}
\right)^2\,.
\end{align}
The excluded right-handed up-squark masses are shown in the green (dark gray) bands
of Fig.~\ref{fig:constraints}, with the specific generation of up squark
labeled inside the band.
The solutions with the additional ``*'' label have the quoted $\lambda_{i23}''$ coupling absent
from the superpotential. However, it is regenerated at order $\theta$
through a K\"ahler rotation~\cite{Choi:1996se} from the dominant
coupling still present in the superpotential.
As a result, the negative solutions are again excluded for the full range of squark
masses displayed in the figure.
Moreover, the first two positive solutions are also excluded.
In the figure, the gray region for $n_{\lambda''}\le-7$ is also shown.
In this case, the holomorphy of the superpotential forbids all the
$\lambda''$ terms and although they will be generated after $U(1)_H$
symmetry breaking via the K\"ahler potential~\cite{Giudice:1988yz},
these terms are suppressed by the additional factor
$m_{3/2}/M_P$~\cite{Sierra:2009zq}.
Therefore the LSP is very long-lived and the phenomenology at
colliders is expected to be the same as that in the MSSM.
\begin{figure}
\centering
\includegraphics[scale=0.7]{constraints}
\caption{The yellow bands (green bands) display the excluded range for
$\tilde{m}$ ($\tilde{u}_{iR}$), as a function of the possible $n_{\lambda''}$
solutions from the constraints in
$|\lambda_{112}''|$ ($|\lambda_{i23}''\lambda_{i12}^{\prime\prime *}|$).
The gray dashed bands show the effect of increasing
$\tilde{\Lambda}$ in the $|\lambda_{112}''|$ constraint.
The affected $\tilde{u}_{iR}$ is indicated for each value of
$n_{\lambda''}$.
For $n_{\lambda''}<-6$, the phenomenology at
colliders is expected to be the same as in the MSSM.}
\label{fig:constraints}
\end{figure}
Therefore, by demanding a $B$-violating model and imposing the
constraints on the $R$-parity breaking couplings, only positive
solutions for $n_{\lambda''}$ remain allowed, giving rise to a clear
hierarchy between the $\lambda''$ couplings, which has a direct impact on
the phenomenology of the LSP.
The dominant coupling turns out to be $\lambda''_{323}$, a feature
shared with Refs.~\cite{Csaki:2011ge,KerenZur:2012fr}.
\subsection{Dimension-five operators and proton decay}
So far the $U(1)_H$ symmetry has been used to forbid the dimension-four
lepton-number violating couplings, in order to keep proton decay within safe limits.
However, proton decay mediated by $\lambda''$ couplings alone can occur in
scenarios with a gravitino lighter than the proton~\cite{Choi:1996nk},
leading to strong bounds on these couplings. Thus, by ensuring gravitino
masses greater than 1 GeV in these scenarios, there is no
contribution to proton decay coming from the gravitino,
which, being the LSP, can also be a dark matter candidate
\cite{Takayama:2000uz,Buchmuller:2007ui,Csaki:2011ge,Lola:2008bk,Lola:2007rw}.
On the other hand, there are also dimension-five lepton- and/or baryon-number
violating couplings which can induce proton decay. Hence, it
is also necessary to check whether these terms are forbidden or
sufficiently suppressed.
The nonrenormalizable dimension-five operators in the superpotential
$W_{D5}$ and in the K\"ahler potential $V_{5D}$ are given by
\cite{Allanach:2003eb, Barbier:2004ez, Ibanez:1991pr, Ibanez:1991hv}
\begin{align}\label{WD5}\nonumber
W_{D5}&=\frac{(\kappa_1)_{ijkl}}{M_P}\widehat{Q}_i\widehat{Q}_j\widehat{Q}_k\widehat{L}_l+\frac{(\kappa_2)_{ijkl}}{M_P}\widehat{u}_i\widehat{u}_j\widehat{d}_k\widehat{e}_l+\frac{(\kappa_3)_{ijk}}{M_P}\widehat{Q}_i\widehat{Q}_j\widehat{Q}_k\widehat{H}_d\\
&+\frac{(\kappa_4)_{ijk}}{M_P}\widehat{Q}_i\widehat{H}_d\widehat{u}_j\widehat{e}_k+\frac{(\kappa_5)_{ij}}{M_P}\widehat{L}_i\widehat{H}_u\widehat{L}_j\widehat{H}_u+\frac{(\kappa_6)_{i}}{M_P}\widehat{L}_i\widehat{H}_u\widehat{H}_d\widehat{H}_d,\\
V_{5D}&=\frac{(\kappa_7)_{ijk}}{M_P}\widehat{u}_i\widehat{d}^{*}_j\widehat{e}_k+\frac{(\kappa_8)_{i}}{M_P}\widehat{H}^{*}_u\widehat{H}_d\widehat{e}_i+\frac{(\kappa_9)_{ijk}}{M_P}\widehat{Q}_i\widehat{L}^{*}_j\widehat{u}_k+\frac{(\kappa_{10})_{ijk}}{M_P}\widehat{Q}_i\widehat{Q}_j\widehat{d}^{*}_k.
\end{align}
A review of the effect of these operators in the destabilization of the
proton is given in Ref.~\cite{Dreiner:2012ae}.
In the present case of $B$ violation, we can guarantee a
sufficiently stable proton if the $B$- and $L$-violating operators with couplings $\kappa_{1,2}$
and the $L$-violating operators with couplings $\kappa_{4,7,8,9}$ are forbidden.\footnote{The constraints on the operator with coupling $\kappa_6$ are mild~\cite{Dreiner:2012ae}.}
The operator with coupling $\kappa_5$, $LH_uLH_u$, is not constrained
by proton decay because it violates lepton number by two units.
The horizontal charges for all the dimension-five operators are given
in Appendix~\ref{aped.B}.
Given the fractional values needed for the $n_i$ in order to get rid of
the dimension-four $L$-violating operators in Eq.~\eqref{po}, it turns
out that all dimension-five $L$-violating operators are also
automatically forbidden by the $U(1)_H$ symmetry; see
Eqs.~(\ref{eq:D5BLNV1}), (\ref{eq:D5BLNV2}) and (\ref{eq:D5LNV}).
At this stage the $U(1)_H$ symmetry plays the same role as that of a
lepton-parity discrete symmetry \cite{Allanach:2003eb,Ibanez:1991pr,Ibanez:1991hv,Dreiner:2005rd}.
\section{Generation of neutrino masses}
\label{sec:RHN}
Although the $LH_uLH_u$ operator is not required to be forbidden
by the $U(1)_H$ symmetry to ensure proton stability,
it is unavoidably forbidden because the
bilinear charges $n_i$ are not half-integers.
Thus, the Majorana mass terms $\nu_L\nu_L$ are automatically forbidden.
The same happens with lepton-parity symmetry, and also within the more general
approach of discrete gauge symmetries~\cite{Ibanez:1991pr,Ibanez:1991hv,Dreiner:2005rd},
for which the solutions that allow the $UDD$ operator
automatically forbid Majorana neutrinos.
The proposed solution in these kinds of frameworks is just to introduce
right-handed neutrinos $N$ with their Majorana mass terms $NN$
forbidden, while keeping the Yukawa operators containing left- and
right-handed neutrinos still allowed, generating in this way Dirac neutrino
mass matrices~\cite{Luhn:2007gq}.
When these ideas are applied to our case of horizontal symmetries, it is also necessary to explain the smallness of the neutrino
Yukawa couplings.
The introduction of three right-handed neutrinos $N_{i}$ $(i=1,2,3)$ allows us to give
Dirac masses to neutrinos by assigning fractional and not half-integer
$H$-charges to $N_{i}$, such that the $NN$ terms remain forbidden.
Let us parametrize the bilinear $H$ charges as $n_2=n_1+\alpha$,
$n_3=n_1+\beta$, and for the right-handed neutrinos $N_2=N_1+\epsilon$
and $N_3=N_1+\rho$. The neutrino Dirac mass matrix then reads
\begin{align}
M_{\nu}\sim
v_u\theta^{\beta+\rho+n_1+N_1}
\left(
\begin{array}{ccc}
\theta^{-\beta-\rho } & \theta^{\epsilon -\beta-\rho} & \theta^{-\beta} \\
\theta^{\alpha -\beta-\rho } & \theta^{\alpha +\epsilon-\beta -\rho} & \theta^{\alpha-\beta} \\
\theta^{-\rho} & \theta^{\epsilon-\rho } & 1
\end{array}
\right),
\end{align}
where $v_{u}$ is the vacuum expectation value developed by the up-type Higgs field.
From Eq.~(\ref{nlpp}) we obtain $n_1=\frac{1}{3}\left(1-\alpha -\beta
+3 n_{\lambda ''}-3 x\right)$. Motivated by the recent measurements of a
large value of $\theta_{13}$
\cite{Abe:2011sj,Abe:2011fz,An:2012eh,Ahn:2012nd}, which support
models based on an anarchical neutrino mass matrix
\cite{Hall:1999sn,Haba:2000be,deGouvea:2003xe,deGouvea:2012ac}, it is
convenient to choose $\alpha=\beta=\epsilon=\rho=0$ and
$\beta+\rho+n_1+N_1=n_{Y_\nu}$, with $n_{Y_\nu}$ an integer and $n_{Y_\nu}\ge 16$,
in order to generate a neutrino Yukawa coupling
$Y_\nu\lesssim10^{-11}$.
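The required suppression can be checked directly (our sketch): with $\theta=0.22$, sixteen powers of $\theta$ bring the coupling to the $10^{-11}$ level.

```python
# Sketch: neutrino Yukawa suppression Y_nu ~ theta^n for theta = 0.22.
theta = 0.22
for n in (15, 16, 17):
    print("theta^%d = %.1e" % (n, theta ** n))
# theta^16 ~ 3e-11, i.e. at the 1e-11 level quoted in the text.
```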
It is worth stressing that since $n_1$ cannot be an integer, the
$\mu\tau$ anarchical texture with $\alpha=\beta=-1$
\cite{Dreiner:2003yr, Altarelli:2002sg, Buchmuller:2011tm, Altarelli:2012ia}
is not allowed.
However, other textures can be accommodated in our model
\cite{Altarelli:2012ia}, such as pseudo-$\mu\tau$ anarchy
($\alpha=\beta=\epsilon=\rho=-2$) and the hierarchical texture
($\alpha=\epsilon=-1,\, \beta=\rho=-2$).
An immediate consequence of the anarchy assumption is that the
bilinear charges are equal and set to $n_i=n_{\lambda
''}-x+\frac{1}{3}$, which are clearly noninteger numbers.
The $H$-charges that allow us to obtain a self-consistent framework with
the requirements mentioned above are shown in Table~\ref{tab:Hc3}.
It is remarkable that when explaining the neutrino Yukawa couplings
$Y_\nu$, a lower bound $n_{\lambda''}\ge 6$ emerges, which has
deep implications for the phenomenology of the model (see the next
section).
\begin{table}[t]
\centering
\begin{tabular}{ccccccccc}\hline
$x$ & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 2 \\\hline
$n_{\lambda''}$ & 6 & 7 & 8 & 9 & 9 & 10 & 10& 10\\%\hline
$n_i$ & 19/3 & 22/3 & $25/3$ & $28/3$ & $25/3$ & $31/3$ & $28/3$&$25/3$ \\%\hline
$N_i$ & $29/3$ & $26/3$ & $29/3$ & $26/3$ & $29/3$ &$23/3$ & $26/3$&29/3 \\\hline
\end{tabular}
\caption{Some sets of $H$-charges allowing a self-consistent framework of
$R$-parity breaking with $B$ violation and Dirac neutrinos.}
\label{tab:Hc3}
\end{table}
\subsection{Majorana neutrinos}
It is worth mentioning that it is also possible to have Majorana
neutrinos if, in addition to the right-handed neutrinos, we include in the
model a second flavon\footnote{For a model with several flavons see
Ref.~\cite{Jack:2003pb}.}, $\psi$, with fractional\footnote{A scenario
with Majorana neutrinos and a nonanomalous $U(1)_H$ symmetry,
spontaneously broken by two flavons with opposite $H$-charges $+1$
and $-1$, was obtained in Ref.~\cite{Eyal:1999gq}.} $H$-charge and with a
vacuum expectation value approximately equal to $\theta$.
The horizontal charges of these superfields are fixed by new invariant
diagrams coming from Dirac and Majorana mass terms.
In this way, the $H$-charge of $\psi$ must be such that it does not
couple to $L$-violating operators: the total $H$-charge of any
$L$-violating operator must be either fractional, and therefore
forbidden, or negative, and therefore sufficiently suppressed.
The introduction of an additional flavon field could spoil the proton
stability since $H$-invariant terms can be obtained by coupling a
large number of $\psi$ flavons to dangerous operators.
Therefore it is mandatory to ensure that the $L$-violating bilinear,
dimension-four and dimension-five operators are generated through the
Giudice-Masiero (GM) mechanism or have a large Froggatt-Nielsen suppression.
The $H$-charges that allow us to obtain Majorana neutrinos with
the requirements mentioned above are shown in Table~\ref{tab:Hc}.
To illustrate this point, let us consider the first solution given in Table~\ref{tab:Hc}.
For that set of $H$-charges, we have found that the minimum suppression that is achieved
for dimension-four and -five operators is
$\widehat{L}_1\widehat{Q}_1\widehat{D}_1: m_{3/2}\theta^{21}/M_P$ and
$\widehat{u}_1\widehat{u}_2\widehat{d}_1\widehat{e}_1: m_{3/2}\theta^2/M_P^2$,
which is enough to satisfy the constraints coming from proton decay.
\begin{table}[t]
\centering
\begin{tabular}{ccccccccc}\hline
$x$ & 1 & 1 & 1 & 2 & 2 & 2 & 3 & 3 \\\hline
$n_{\lambda''}$ & 5 & 6 & 7 & 6 & 7 & 8 & 8 & 9 \\%\hline
$n_i$ & $13/3$ & $16/3$ & $19/3$ & $13/3$ & $16/3$ & $19/3$ & $16/3$ & $19/3$ \\%\hline
$\psi$ & $-47/6$ & $-53/6$ & $-59/6$ &$-47/6$ & $-53/6$ & $-59/6$ &$-53/6$ & $-59/6$ \\\hline
\end{tabular}
\caption{Sets of $H$-charges that allow having Majorana neutrinos with $H(N_i)=7/2$.
For this scenario there is no lower bound on $n_{\lambda''}$.}
\label{tab:Hc}
\end{table}
Henceforth, we will combine the solutions allowed by the
experimental constraints on $R$-parity breaking couplings discussed in
Sec.~\ref{sec:BNVmodel}, with the restrictions to obtain Dirac
neutrinos, and therefore we will only consider solutions with
$n_{\lambda''}\ge 6$.
\section{Implications on collider searches}
\label{sec:implications}
From a collider physics point of view, there are two main differences
between the models with and without $R$-parity conservation.
When $R$-parity conservation is assumed, supersymmetric particles are
produced in pairs, and the LSP is stable, leading
to missing-energy signatures in the detectors.
On the other hand, $R$-parity violation allows for the single
production of supersymmetric particles and for decays of the LSP
involving jets and/or leptons.
The $R$-parity breaking and $B$-violating operators induce LSP decays,
directly or indirectly, into quarks, including the top if the LSP is
sufficiently massive.\footnote{If a supersymmetric partner of some SM
particle is the next-to-lightest supersymmetric particle, with the gravitino as the LSP, our
phenomenological results would not change.}
Given that the LSP is no longer stable due to $R$-parity violation, in
principle the LSP can be any supersymmetric particle
\cite{Dreiner:1997uz, Barbier:2004ez,Dreiner:2008ca}.
For recent phenomenological studies in supersymmetric scenarios with
$R$-parity breaking through $B$ violation, see, {\it e.g.}, Refs.
\cite{Butterworth:2009qa,Allanach:2012vj,Brust:2012uf,Asano:2012gj,Curtin:2012rm,Lola:2008bk,Dreiner:2008ca,Evans:2012bf,Berger:2012mm,Carpenter:2008sy,Dreiner:2012wm,
Franceschini:2012za,Bomark:2011ye,Choudhury:2011ve,Desai:2010sq,Kilic:2011sr,Carpenter:2007zz,Kaplan:2007ap,Berger:2013sir}
and in particular, Refs. \cite{Csaki:2011ge,KerenZur:2012fr}.
The phenomenology of the model at the LHC is basically the same as that
studied in supersymmetric scenarios with minimal flavor violation (MFV)~\cite{Csaki:2011ge} and partial compositeness \cite{KerenZur:2012fr}.
In fact, Ref.~\cite{Csaki:2011ge} also obtains a hierarchy in which the third-generation couplings
dominate, with fixed ratios between them.
Fixing the expansion parameter as $\theta=0.22$, their set of
$R$-parity breaking parameters can be written as
\begin{align}
\label{eq:Grosslpp}
\begin{pmatrix}
\lambda_{112}''&\lambda_{212}''&\lambda_{312}''\\
\lambda_{113}''&\lambda_{213}''&\lambda_{313}''\\
\lambda_{123}''&\lambda_{223}''&\lambda_{323}''\\
\end{pmatrix}\approx&
\tan^2\beta_{\text{MFV}}\begin{pmatrix}
\theta^{24} & \theta^{18} & \theta^{13}\\
\theta^{19} & \theta^{14} & \theta^{12}\\
\theta^{16} & \theta^{13} & \theta^{11}\\
\end{pmatrix}=
\theta^{n_{\text{MFV}}}\begin{pmatrix}
\theta^{13} & \theta^{7} & \theta^2\\
\theta^8 & \theta^3 & \theta\\
\theta^5 & \theta^2 & 1\\
\end{pmatrix}\,,
\end{align}
with $\theta^{n_{\text{MFV}}}=\theta^{11}\tan^2\beta_{\text{MFV}}$.
Comparing with Eq.~\eqref{eq:Hierlpp}, we can see that the set of
predicted couplings up to order $\theta^{n_{\text{MFV}}+3}$ is
basically the same as in our case (with the exception of their
$\lambda_{312}''$, which has an additional suppression factor of
$\theta$).
Therefore, the phenomenology of both theories for $R$-parity violation
should be the same at the LHC.
In fact, the phenomenology of Ref.~\cite{Csaki:2011ge} for the leading
couplings was analyzed in detail at the LHC, with the results
presented as a function of $\tan\beta_{\text{MFV}}$.
The specific values
$\tan\beta_{\text{MFV}}\approx(44.5,20.7,9.7,4.6,2.1)$ in several
plots of Ref.~\cite{Csaki:2011ge} correspond, respectively, to the
discrete set of solutions $n_{\lambda''}=(6,7,8,9,10)$ in our model.
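This correspondence follows from equating $\theta^{n_{\lambda''}}=\theta^{11}\tan^2\beta_{\text{MFV}}$, i.e. $\tan\beta_{\text{MFV}}=\theta^{(n_{\lambda''}-11)/2}$. The short sketch below (ours) reproduces the quoted values up to rounding:

```python
# Sketch: map the n_lambda'' solutions onto tan(beta_MFV) via
# theta^n_lpp = theta^11 * tan^2(beta_MFV).
theta = 0.22

def tan_beta_mfv(n_lpp):
    return theta ** ((n_lpp - 11) / 2.0)

for n in (6, 7, 8, 9, 10):
    print("n_lpp = %d  ->  tan(beta_MFV) ~ %.1f" % (n, tan_beta_mfv(n)))
# roughly 44, 21, 9.7, 4.5, 2.1 -- matching the quoted 44.5, 20.7, 9.7,
# 4.6, 2.1 within the rounding of theta.
```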
In particular, several plots there explore the decay length ($c\tau$)
for LSP masses in the range $100-800\ \text{GeV}$.
When the stop is the LSP, for example, displaced vertices (DV) are
expected for $n_{\lambda''}=10$.
For a sbottom LSP it is possible to have DV for $n_{\lambda''}=9,10$,
while the three-body decays of a neutralino LSP could generate DV for
$n_{\lambda''}=8,9,10$.
In the same vein, because decays of a stau LSP involve four
particles in the final state, DV are expected for $n_{\lambda''}\ge
6$.
Recent phenomenological analyses of $R$-parity breaking
through $UDD$ operators have focused on prompt decays of stops and
sbottoms~\cite{Csaki:2011ge,Franceschini:2012za,Bhattacherjee:2013gr,Csaki:2013we}.
However, the experimental results on DV at the LHC
are, in general, not directly applicable to these kinds of models, because
high-$p_T$ leptons are required to trigger the
events~\cite{Aad:2011zb,Aad:2012zx,Chatrchyan:2012jna} and to be part of the
DV~\cite{Aad:2012zx,Chatrchyan:2012jna}.
We assume in the discussion below that purely hadronic DV are still
compatible with light squarks and gluinos.
Regarding collider searches, pair-produced gluinos with prompt decays to three
jets have been searched for by CDF~\cite{Aaltonen:2011sg}, CMS
\cite{Chatrchyan:2011cj,Chatrchyan:2012uxa} and ATLAS~\cite{ATLAS:2012dp}.%
\footnote{In these analyses all the superpartners except for the
gluinos are decoupled, and some reinterpretation would be needed to
apply the results to a more generic SUSY spectrum.}
CMS results constrain the gluino mass to be in the ranges
$144<m_{\tilde{g}}<200$ GeV or $m_{\tilde{g}}>460$ GeV.
However, ATLAS already excludes gluino masses up to
$m_{\tilde{g}}\lesssim666$ GeV.
In general, these bounds do not apply when the gluino is not the
LSP~\cite{Csaki:2011ge,Bhattacherjee:2013gr}.
On the other hand, CDF~\cite{Aaltonen:2008dn},
ATLAS~\cite{Aad:2011yh,ATLAS:2012ds} and CMS~\cite{Chatrchyan:2013izb}
have also performed searches for pair production of dijet resonances in
four-jet events, without putting appreciable constraints on stops
decaying to dijets.
Therefore, the already analyzed LHC data still allow for light
squarks and gluinos in scenarios with $R$-parity breaking through
$B$-violating couplings~\cite{Evans:2012bf,Bhattacherjee:2013gr}.
We have seen that both this single-flavon horizontal (SFH) model and the MFV
model lead to a realistic and predictive framework, which could be more easily probed
at the LHC than some {\it ad hoc} version of $R$-parity breaking with $B$
violation.
In fact, in Ref.~\cite{Berger:2013sir} the CMS results on searches for
new physics in events with same-sign dileptons and $b$
jets~\cite{Chatrchyan:2012paa} have recently been recast in a simplified
version of the $R$-parity breaking MFV model, assuming a spectrum with
only two light states: a gluino and a stop.
All other SUSY particles are assumed to be either too heavy or too
weakly coupled to be relevant at the LHC.
Furthermore, the stop is assumed to be the LSP, and
$m_{\tilde{g}}>m_{\tilde{t}}+m_t$.\footnote{As a consequence the gluino branching ratio to stop-top is equal to 1.}
Under these conditions they are able to set a lower bound on the gluino
mass of about $800\ \text{GeV}$ at 95\% confidence level.\footnote{The
obtained lower bound only applies if the gluino is a Majorana
particle.}
The same bound could apply to the SFH model with
$R$-parity breaking presented in this work.
In order to fully probe this single-flavon horizontal (or the MFV)
$R$-parity breaking model, the full textures in Eq.~\eqref{eq:Hlpp} or~\eqref{eq:Grosslpp}
should be tested.
However, relations between different branching ratios could be
measured only at $e^+e^-$ colliders.
In a stop LSP scenario, the stop can decay directly into two down-type quarks of
different generations through the $\lambda''_{3jk}$ couplings. In this
case, the hierarchy between the $\lambda''$ couplings allows us to estimate
several ratios of branching fractions, e.g. $\operatorname{Br}(\tilde t\to
\bar{d}\bar{s})/\operatorname{Br}(\tilde t\to
\bar{s}\bar{b})\sim\theta^2$.
A sbottom LSP, with a mass larger than the top mass, may show the
clear hierarchy $\operatorname{Br}(\tilde b\to
\bar{c}\bar{s})/\operatorname{Br}(\tilde b\to
\bar{t}\bar{s})\sim\theta^4$.
For a neutralino LSP with $m_{\tilde\chi^0}>m_t$, the dominant
coupling $\lambda''_{323}$ entails $\operatorname{Br}(\tilde\chi\to t
d b)/\operatorname{Br}(\tilde\chi\to t s
b)\sim\operatorname{Br}(\tilde\chi\to t d
s)/\operatorname{Br}(\tilde\chi\to t s b)\sim\theta^2$ and
$\operatorname{Br}(\tilde\chi\to c s
b)/\operatorname{Br}(\tilde\chi\to t s b)\sim\theta^4$.
For the case $m_{\tilde\chi}<m_t$ the main neutralino decay is then
controlled by $\lambda''_{223}$, and will produce charm quarks with
ratios of branching ratios given by $\operatorname{Br}(\tilde\chi\to c
d b)/\operatorname{Br}(\tilde\chi\to c s
b)\sim\operatorname{Br}(\tilde\chi\to c d
s)/\operatorname{Br}(\tilde\chi\to c s b)\sim\theta^2$.
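Since partial widths scale as $|\lambda''|^2$, these ratios follow directly from the texture powers of Eq.~\eqref{eq:Hierlpp}. A minimal sketch (ours) evaluates the neutralino examples above at $\theta=0.22$:

```python
# Sketch: ratios of branching fractions implied by the lambda'' hierarchy.
# Widths scale as |lambda''|^2, so a texture-power difference dp costs
# a factor theta^(2*dp) in a ratio of branching fractions.
theta = 0.22

def br_ratio(dp):
    """Br(subdominant)/Br(dominant) for a power difference dp."""
    return theta ** (2 * dp)

# lambda''_313 vs lambda''_323 (one power of theta apart):
print("Br(chi -> t d b)/Br(chi -> t s b) ~ %.3f" % br_ratio(1))
# lambda''_223 vs lambda''_323 (two powers apart):
print("Br(chi -> c s b)/Br(chi -> t s b) ~ %.5f" % br_ratio(2))
```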
\chapter{Supersymmetric models with a symmetry $U(1)_H$}\label{cap1}
One of the unsolved problems in the standard model is the hierarchy of
masses in the charged fermion sector.
The central idea proposed by FN~\cite{Froggatt:1978nt,Dreiner:2003hw}
was to introduce an Abelian $U(1)_{H}$ symmetry which assigns $H$ charges to the fermion fields in order to explain this mass hierarchy.
These masses may be expressed in terms of a parameter $\theta$
given by:
\begin{equation}\label{1}
\theta=\frac{\langle S \rangle}{M_{F}},
\end{equation}
where $S$ is a flavon field, and $M_{F}$ is the mass scale of the heavy FN fields.
\section{Froggatt-Nielsen mechanism in the quark sector}
\label{sec:BNVmodel22}
The Yukawa Lagrangian in the standard model includes terms of the
type:
\begin{equation}\label{2}
\mathcal{L}=h^{u}\bar{u}_{R}H^{u}Q + \text{h.c},
\end{equation}
where $Q=(u,d)_{L}$ transforms as the representation $(\mathbf{3},\mathbf{2})_{1/3}$ under $SU(3)_{C}\times SU(2)_{L} \times U(1)_{Y}$;
$\bar{u}_{R}$ and $\bar{d}_{R}$ transform as $(\mathbf{\bar{3}},\mathbf{1})_{-4/3}$ and $(\mathbf{\bar{3}},\mathbf{1})_{2/3}$, respectively; and
$H^{u}$ transforms as $(\mathbf{1}, \mathbf{\bar{2}})_1$. The Lagrangian above the scale
$M_{F}$ is:
\begin{equation}\label{3}
\mathcal{L}= \bar{u}_{R}H^{u}R + \bar{R}ST + \bar{T}SF +
\ldots + \bar{P}SQ + \text{h.c},
\end{equation}
where $S$ is a flavon, which transforms as a singlet under $SU(3)_{C}\times SU(2)_{L} \times U(1)_{Y}$ and acquires a vacuum expectation value $\langle S \rangle$ at a scale below
$M_{F}$, and $R$, $T$, $F$, $P$ are heavy fields charged under the $U(1)_{H}$ symmetry. The diagrammatic representation of Eq.~(\ref{3})
is shown in Figure~\ref{fig:FN}. This Feynman diagram leads to effective contributions to the fermion mass terms when the Abelian $U(1)_{H}$ symmetry gets spontaneously broken, and it must be invariant under the horizontal $U(1)_{H}$-charge assignment.
\begin{figure}
\centering
\includegraphics[scale=0.8]{FN}
\caption{Diagrammatic representation of the Yukawa Lagrangian, with $n$ Froggatt-Nielsen fields inserted so as to ensure invariance under $U(1)_{H}$.}
\label{fig:FN}
\end{figure}
The horizontal charges are denoted, with a slight abuse of notation, by the corresponding field symbols:
\begin{equation}\label{4}
S=H(S),\quad Q=H(Q), \quad H^{u}=H(H^{u}), \quad
u=H(\bar{u}_{R}),\quad d=H(\bar{d}_{R}),
\end{equation}
and
\begin{equation}\label{5}
R=H(R), \quad T=H(T), \quad F=H(F),\quad
\ldots,\quad P=H(P).
\end{equation}
Since the horizontal $U(1)_{H}$ charge must be conserved at each vertex of Figure~\ref{fig:FN}, we have
\begin{align}
u + H^{u} + R&=0,\label{6}\nonumber\\
-R + S + T&=0,\label{7}\nonumber\\
-T + S + F&=0,\label{8}\nonumber\\
&\;\;\,\vdots\nonumber\\
-P + S + Q&=0.\nonumber
\end{align}
By integrating out the heavy Froggatt-Nielsen fields in Eq.~(\ref{3}), the $n$ $U(1)_{H}$ charges assigned to these fields add up, thereby obtaining:
\begin{equation}\label{11}
\cancel{R}-\cancel{R} + S + \cancel{T} - \cancel{T} +
S + \cancel{F} + \ldots - \cancel{P} + S=
n S=n.
\end{equation}
Therefore, after the breaking of the horizontal $U(1)_{H}$--symmetry and the integration out of the heavy fields, the effective Lagrangian arising from Eq.~(\ref{3}) is:
\begin{equation}\label{10}
\mathcal{L}^{\text{Yukawa}}=\bar{u}H^{u}\left(\frac{\langle S
\rangle}{M_{F}}\right)^{n}Q + \text{h.c}.
\end{equation}
For this term to be invariant under the symmetry $U(1)_{H}$, it must satisfy:
\begin{equation}\label{12}
n=u + H^{u} + Q,
\end{equation}
where the Yukawas are:
\begin{align}\label{15}
h^{u}&\sim \theta^{n}\sim \theta^{u + H^{u} +Q}.
\end{align}
The assignment of an appropriate set of horizontal $U(1)_{H}$--charges
for these fields can yield a model that is phenomenologically viable and
consistent with experimental observations.
To summarize, the idea proposed by FN consists of introducing an Abelian horizontal symmetry
$U(1)_{H}$ and a scalar field $S$, called the flavon, together with a large number of heavy FN fields that serve as mediators of new interactions.
These heavy fields are vectorlike. With this set of conditions the suppression of the Yukawa couplings can be explained.
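As an illustrative cross-check (a sketch in Python with arbitrary example charges, not part of the model construction; the helper name is ours), the suppression $h\sim\theta^{\,n}$ of Eq.~(\ref{15}) can be evaluated directly:

```python
# Sketch of the Froggatt-Nielsen suppression h ~ theta^n of Eq. (15),
# with theta = <S>/M_F ~ 0.22.  The example charges are arbitrary.
theta = 0.22

def yukawa_order(u, Hu, Q):
    """Order of magnitude of the effective Yukawa, n = u + Hu + Q."""
    n = u + Hu + Q
    if n < 0 or n != int(n):
        return 0.0  # negative or fractional overall charge: no such term
    return theta ** n

print(yukawa_order(0, 0, 0))  # 1.0: an unsuppressed, top-like coupling
print(yukawa_order(3, 0, 1))  # theta^4: a suppressed light-fermion coupling
```

A charge assignment with a larger overall $n$ thus automatically yields a smaller effective Yukawa coupling, which is the essence of the mechanism.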
\section{General renormalizable superpotential}\label{modteo}
The most general renormalizable superpotential including right--handed neutrinos, is given by~\cite{Dreiner:2003hw}:
\begin{align}\label{po}
\mathcal{W}&=
\varepsilon^{ab}\delta^{xy}h^{u}_{ij}\widehat{Q}^{i}_{xa}\widehat{H}^{u}_{b}\widehat{u}^{j}_{y}\nonumber
+
\varepsilon^{ab}\delta^{xy}h^{d}_{ij}\widehat{Q}^{i}_{xa}\widehat{H}^{d}_{b}\widehat{d}^{j}_{y}\\ \nonumber
&+ \varepsilon^{ab}h^{l}_{ij}\widehat{L}^{i}_{a}\widehat{H}^{d}_{b} \widehat{l}^{j} +
\varepsilon^{ab}Y^{\nu}_{ij}\widehat{L}^{i}_{a}\widehat{H}^{u}_{b}\widehat{N}^{j}\\ \nonumber
&+ \varepsilon^{ab}\mu \widehat{H}^{d}_{a}\widehat{H}^{u}_{b} +
M^{\text{R}}_{ij}\widehat{N}^{i}\widehat{N}^{j}\\ \nonumber
&+ \frac{1}{2}\varepsilon^{ab}\lambda_{ijk}\widehat{L}^{i}_{a}\widehat{L}^{j}_{b} \widehat{l}^{k}\\ \nonumber
&+
\varepsilon^{ab}\delta^{xy}\lambda^{'}_{ijk}\widehat{Q}^{i}_{xa}\widehat{L}^{j}_{b}\widehat{d}^{k}_{y}\nonumber
+ \Xi_{i}\widehat{N}^{i}\\ \nonumber
&+
\frac{1}{2}\varepsilon^{xyz}\lambda_{ijk}^{''}\widehat{u}^{i}_{x}\widehat{d}^{j}_{y}\widehat{d}^{k}_{z}\nonumber
+ \varepsilon^{ab}Y^{\nu}_{i}\widehat{N}^{i}\widehat{H}^{d}_{a}\widehat{H}^{u}_{b}\\ \nonumber
&+ \varepsilon^{ab}\mu^{i}\widehat{L}^{i}_{a}\widehat{H}^{u}_{b} + y^{\text{R}}_{ijk}\widehat{N}^{i}\widehat{N}^{j}\widehat{N}^{k},\nonumber
\end{align}
The upper block in Eq.~(\ref{po}) is $R$--parity conserving, while the lower block violates $R$--parity. In Eq.~(\ref{po}) $\widehat{H}$, $\widehat{Q}$, $\widehat{L}$ represent the left--chiral $SU(2)_{L}$--doublet superfields of the Higgses, quarks and leptons; $\widehat{u}$, $\widehat{d}$, $\widehat{l}$, $\widehat{N}$ represent the right--chiral superfields; $a$, $b$, $c$ and $x$, $y$, $z$ are $SU(2)_{L}$-- and $SU(3)_{C}$--indices, and $i$, $j$, $k$ are generational indices; $\delta^{xy}$ is the Kronecker symbol, and $\varepsilon^{\ldots}$ denotes any tensor that is totally antisymmetric under the exchange of any two indices, with $\varepsilon^{12\ldots}=1$. All other symbols are coupling constants, the $\lambda_{ijk}$ ($\lambda^{''}_{ijk}$) being antisymmetric under the exchange of the first two (last two) indices. The simultaneous presence of terms that violate baryon number ($B$) and lepton number ($L$) gives a very short proton lifetime (for a more detailed explanation see~\cite{Martin:1997ns}).
Lepton number is explicitly broken by the bilinear couplings $\mu_i$
and trilinear couplings $\lambda_{ijk}$ and $\lambda_{ijk}'$, whereas
the couplings $\lambda_{ijk}''$ are responsible for the $B$ violation.
The factor of $1/2$ is due to the antisymmetry of the corresponding
operators~\cite{Barbier:2004ez}.
The $H$ charge for the fields determines whether or not a particular term can be present in the
superpotential. As will be seen in the next section, when extending a supersymmetric model with a
$U(1)_H$ Abelian factor, the size of all the parameters entering in the
superpotential arises as a consequence of $U(1)_H$ breaking. In
particular, the lepton-- or baryon--number violating terms may be absent without the need of
$R$--parity~\cite{Mira:2000gg,Dreiner:2006xw, Dreiner:2003hw,Dreiner:2003yr, Choi:1998wc, Dreiner:2007vp,Joshipura:2000sn}.
Before proceeding we will fix our notation: following Ref.~\cite{Mira:2000gg}, we will denote a field and its $H$ charge with the same symbol, i.e. $H(f_{i})=f_{i}$, $H$--charge differences as $H(f_i-f_j)=f_{ij}$~\cite{Dudas:1995yu}; bilinear $H$ charges as $n_{i}=L_{i}+H_{u}$. In
what follows we will constrain the $H$--charges to satisfy the condition
$|H(f_i)|\lesssim10$, which, as highlighted in Refs.~\cite{Dreiner:2003hw,Choi:1996se}, leads to a completely
consistent supersymmetric flavor model. Trilinear
$H$--charges of the $B$ and $L$ violating operators will be written as $n_{\lambda_{ijk}}$ with the index determined
by the corresponding trilinear coupling, that is to say the index can
be given by $\lambda_{ijk}$, $\lambda_{ijk}'$, or $\lambda_{ijk}''$.
We fix $\theta=\langle S \rangle/M_{\text{P}}\simeq 0.22$~\cite{Dreiner:2003yr,Irges:1998ax} and $S=-1$.
The holomorphy of the superpotential forbids all the
terms for which $n<0$ (where $n$ is an abbreviation for the overall $H$--charge of an operator in the superpotential) and although they will be generated after
$U(1)_H$ symmetry breaking (triggered by the vacuum expectation value of the scalar
component of $S$, $\langle S \rangle$) via the K\"ahler potential
~\cite{Giudice:1988yz} these terms are in general much more suppressed than
those for which $n\ge0$. Terms with fractional $n$ are also forbidden
and in contrast to those with $n<0$ there is no mechanism through
which they can be generated.
As already stressed any coupling in the superpotential is determined
up to order 1 factors by its $H$--charge. Thus, any bilinear or
trilinear couplings $\mu_i$ and $\lambda_T$ (where $\lambda_T$ is an abbreviation for any of the trilinear couplings in Eq.~(\ref{po})) must be given by
~\cite{Dreiner:2003hw,Binetruy:1996xk}
\begin{align}
\label{eq:res}
\mu_\alpha\sim &
\begin{cases}
M_P\theta^{n_\alpha} & n_\alpha\ge 0\\
m_{3/2}\theta^{|n_\alpha|} & n_\alpha<0\\
0 & n_\alpha\ \text{fractional}
\end{cases}
&\lambda_T\sim&
\begin{cases}
\theta^{n_\lambda} & n_\lambda\ge 0\\
(m_{3/2}/M_P)\theta^{|n_\lambda|} & n_\lambda<0\\
0 &n_\lambda\ \text{fractional}
\end{cases}\,.
\end{align}
An operator with fractional charge is forbidden also in models with several flavons of integer charges.
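The selection rules of Eq.~(\ref{eq:res}) can be summarized in a short numeric sketch (the mass scales below are illustrative placeholders, and the function names are ours):

```python
from fractions import Fraction

# Sketch of the selection rules of Eq. (eq:res).  Scales are placeholders.
theta = 0.22
M_P   = 2.4e18   # reduced Planck mass in GeV (illustrative)
m_32  = 1.0e3    # gravitino mass in GeV (illustrative)

def mu_coupling(n):
    """Bilinear coupling mu ~ Eq. (eq:res), n the overall H-charge."""
    n = Fraction(n)
    if n.denominator != 1:
        return 0.0                      # fractional charge: forbidden
    if n >= 0:
        return M_P * theta ** int(n)    # allowed in the superpotential
    return m_32 * theta ** abs(int(n))  # generated via the Kahler potential

def trilinear_coupling(n):
    """Trilinear coupling lambda_T ~ Eq. (eq:res)."""
    n = Fraction(n)
    if n.denominator != 1:
        return 0.0
    if n >= 0:
        return theta ** int(n)
    return (m_32 / M_P) * theta ** abs(int(n))

print(trilinear_coupling(Fraction(-5, 2)))  # 0.0: half-odd charge, forbidden
```

Negative integer charges give couplings suppressed by $m_{3/2}/M_P$, while fractional charges remove the operator altogether, exactly as stated above.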
\section{$R$--parity as a result of a gauge symmetry}\label{gag}
It can be shown that the conservation of $R$--parity in the MSSM
may result as a consequence of a proper choice of the horizontal $U(1)_{H}$--charges.
Now, the overall $U(1)_{H}$--charge for any operator can be written as
follows:
\begin{align}\label{25}
H_{\text{Total}}&=\sum_{i}(n_{N^{i}}N^{i}) +
\sum_{i}(n_{L^{i}}L^{i} + n_{l^{i}}l^{i}) +
n_{H^{d}}H^{d} + n_{H^{u}}H^{u} \\ \nonumber
&+
\sum_{i}(n_{Q^{i}}Q^{i}
+ n_{d^{i}}d^{i}+
n_{u^{i}}u^{i}), \nonumber
\end{align}
where $n_{N^{i}}, n_{L^{i}}, \ldots$ are positive integers counting the number of times a field is repeated. These numbers are independent, owing to the gauge invariance under
$SU(3)_{C}\times SU(2)_{L}\times U(1)_{Y}$. Moreover, in Eq.~(\ref{25}) we define
$n_{Q}=\sum_{i}n_{Q^{i}}$, $n_{L}=\sum_{i}n_{L^{i}}$, $n_{u}=\sum_{i}n_{u^{i}}$, $\ldots$
The following example shows how to find the overall
$U(1)_{H}$--charge of an operator in Eq.~(\ref{po}). For the term
$h^{u}QH^{u}u$ we obtain:
\begin{align*}
n_{Q}&=1,\\
n_{H^{u}}&=1,\\
n_{u}&=1.
\end{align*}
The total charge of this operator is:
\begin{align*}
H_{\text{Total}}= Q+ H^{u} + u.
\end{align*}
The overall $U(1)_{H}$--charges of the other operators in
Eq.~(\ref{po}) are calculated in the same way. Below we list
two central definitions, which form the main structure of the following analysis:
\begin{defin}\label{def1}
For the different fields in the model we define:
\begin{align}\label{26}
B_{p}&=(-1)^{n_{Q} - n_{u} - n_{d}},\\ \nonumber
L_{p}&=(-1)^{n_{L} - n_{N} - n_{l}},\\ \nonumber
R_{p}&= B_{p}\times L_{p}= (-1)^{n_{Q} - n_{u} - n_{d}+ n_{L} - n_{N} - n_{l}} \nonumber
\end{align}
where $B_{p}$, $L_{p}$, $R_{p}$ are the baryonic parity, the leptonic parity and $R$--parity. Every operator
that conserves the $\mathbb{Z}_{2}$--symmetry $R_{p}$ has an overall integer
$H_{\text{Total}}$--charge. Namely, every operator for which
$n_{Q} - n_{u} - n_{d}+n_{L} - n_{N} -
n_{l}$ is even has an overall integer $U(1)_{H}$--charge.
\end{defin}
\begin{defin}\label{def2}
An operator that violates $R_p$ has an overall fractional $U(1)_{H}$--charge. Namely, every operator for which $n_{Q} -
n_{u} - n_{d}+n_{L} - n_{N} -
n_{l}$ is odd has an overall fractional $U(1)_{H}$--charge.
\end{defin}
We can draw several conclusions:
\begin{prop}
If the field $Q^{1}$ has the same quantum numbers as
$Q^{2}$, and $Q^{1}\phi_{1}\phi_{2}\ldots \phi_{n}$ is an $R_{p}$--conserving operator, then $Q^{2}\phi_{1}\phi_{2}\ldots
\phi_{n}$ is $R_{p}$--conserving as well; it follows that the charge difference
$Q^{2} - Q^{1}$ must be an integer.
\end{prop}
\begin{veri}
We start from Eq.~(\ref{25}) and calculate the overall $U(1)_{H}$--charge
of this pair of operators,
\begin{align*}
H^{1}_{\text{Total}}&= n_{Q^{1}}Q^{1} + n_{\phi_{1}}\phi_{1} +
n_{\phi_{2}}\phi_{2} + \ldots +
n_{\phi_{n}}\phi_{n}=Z_{1},\\
H^{2}_{\text{Total}}&= n_{Q^{2}}Q^{2} + n_{\phi_{1}}\phi_{1} +
n_{\phi_{2}}\phi_{2} + \ldots +
n_{\phi_{n}}\phi_{n}=Z_{2}.
\end{align*}
Subtracting the two expressions:
\begin{align}\label{prim}
H^{2}_{\text{Total}} - H^{1}_{\text{Total}}=Q^{2} - Q^{1}= Z_{2} - Z_{1}=
\text{Integer}.
\end{align}
\end{veri}
\begin{prop}
For any $SU(3)_{C}\times
SU(2)_{L}\times U(1)_{Y}$ invariant operator $\phi_{1}\phi_{2}\ldots
\phi_{n}$ which violates $R_{p}$, the operator $\phi_{1}\phi_{2}\ldots
\phi_{n}\phi_{1}\phi_{2}\ldots
\phi_{n}$ conserves $R_{p}$. It follows that all operators which violate $R$--parity
have an overall half-odd-integer $U(1)_{H}$--charge.
\end{prop}
\begin{veri}
Let $\phi_{1}\phi_{2}\ldots
\phi_{n}$ be an operator which violates $R_{p}$. Using
Def.~\ref{def2} we find its $H_{\text{Total}}$--charge:
\begin{align}\label{27}
\phi_{1} + \phi_{2}+ \ldots + \phi_{n}=\frac{p}{q}, \quad \textrm{with $p,q\in\mathbb{Z}$ and $p/q$ in lowest terms.}
\end{align}
Now consider the operator $\phi_{1}\phi_{2}\ldots
\phi_{n}\phi_{1}\phi_{2}\ldots
\phi_{n}$, which conserves $R$--parity.
By Def.~\ref{def1}, its $H_{\text{Total}}$--charge is an integer, of the form:
\begin{align}\label{28}
2\phi_{1} + 2\phi_{2}+ \ldots + 2\phi_{n}=z, \quad \textrm{with $z\in\mathbb{Z}$.}
\end{align}
Substituting Eq.~(\ref{27}) into Eq.~(\ref{28}), we obtain:
\begin{align*}
2\frac{p}{q}=z.
\end{align*}
Since $p/q$ is not an integer but $2p/q=z$ is, it follows that $q=2$ and $p$ is odd.
Hence the $H_{\text{Total}}$--charge of an operator that violates $R$--parity is a half-odd integer
$p/2$.
\end{veri}
It follows immediately from the previous proposition and the last terms of the first block of Eq.~(\ref{po}) that the charge of $\widehat{N}$ is half-odd-integer.
\begin{prop}
Let $H^{d}\phi_{1}\phi_{2}\ldots \phi_{n}$ be $SU(3)_C \times SU(2)_L \times U(1)_Y$ invariant and $R_p$--conserving; since $L^{i}\phi_{1}\phi_{2}\ldots \phi_{n}$ then violates $R_p$, the charge difference $L^{i} - H^{d}$ is half-odd-integer.
\end{prop}
\begin{veri}
Let $H^{d}\phi_{1}\phi_{2}\ldots
\phi_{n}$ be an operator which conserves $R_{p}$.
By Def.~\ref{def1}, it has an integer
$H_{\text{Total}}$--charge of the form:
\begin{align}\label{29}
H^{d}+\phi_{1} + \phi_{2}+ \ldots + \phi_{n}=p, \quad \textrm{with $p\in\mathbb{Z}$.}
\end{align}
Now consider the operator $L^{i}\phi_{1}\phi_{2}\ldots
\phi_{n}$, which violates $R$--parity.
By the previous proposition it has a half-odd-integer
$H_{\text{Total}}$--charge of the form:
\begin{align}\label{30}
L^{i}+\phi_{1} + \phi_{2}+ \ldots + \phi_{n}=k/2, \quad \textrm{with $k$ an odd integer.}
\end{align}
Subtracting Eq.~(\ref{29}) from Eq.~(\ref{30}), we get:
\begin{align}\label{31}
L^{i}-H^{d}=k/2-p=\textrm{half-odd-integer}.
\end{align}
\end{veri}
The $R_p$--conserving MSSM $+\widehat{N}$ (see
~\cite{Dreiner:2003hw}) satisfies conditions of the form:
\begin{align}\label{32}
Q^{1} +
H^{d} +
d^{1}&=n_{1},\\ \nonumber
Q^{1} +
H^{u} +
u^{1}&=n_{2},\\ \nonumber
L^{1} +
H^{d} +
l^{1}&=n_{3},\\ \nonumber
L^{1} +
H^{u} +
N^{i}&=n_{4},\\ \nonumber
N^{i} +
N^{j}&=n_{5},
\end{align}
where the $n_{1}, \ldots, n_{5}$ are integers; and analogously for the other matter superfields.
From Eq.~(\ref{26}) one sees that
\begin{align}\label{33}
n_L-n_{N}-n_{l}+n_Q-n_{u}-n_{d}=2\mathcal{R}-\rho,
\end{align}
where $\mathcal{R}$ is an integer and $\rho$ is $0$ or $1$ according to whether $R_p$ is conserved or broken.
We now plug Eqs.~(\ref{prim}),~(\ref{31}),~(\ref{32}),~(\ref{33}) and the Eq.~(3.3) of Ref.~\cite{Dreiner:2003yr} into Eq.~(\ref{25}) and we obtain:
\begin{align}\label{34}
3Q^{1}+L^{1}= \;\textrm{Integer},
\end{align}
which is the necessary and sufficient condition (apart from Eqs.~(\ref{prim}),~(\ref{31}),~(\ref{32})) on the $H$--charges for conserved $R_p$.
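As a numeric sanity check (our own sketch, using the exact charges quoted later in Table~\ref{mssm-n}), the condition of Eq.~(\ref{34}) is indeed satisfied there:

```python
from fractions import Fraction as F

# H-charges from Table mssm-n: Q_1 = 67/15, L_1 = 3/5.
Q1, L1 = F(67, 15), F(3, 5)

condition = 3 * Q1 + L1   # Eq. (34): must be an integer for conserved R_p
assert condition.denominator == 1
print(condition)          # 14
```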
\section{The Standard Model fields $H$--charges}\label{cams}
The individual $H$ charges of the SM fields are determined through a set of phenomenological and theoretical conditions.
Eight phenomenological constraints arise from the six mass ratios of the quarks and charged leptons plus the two quark mixing angles:
\begin{align}
\label{condi1}
m_u:m_c:m_t &\simeq \theta^{\,8}:\theta^{\,4}:1\,, \nonumber \\
m_d:m_s:m_b &\simeq
\theta^{\,4}:\theta^{\,2}:1\,, \nonumber \\
m_e:m_\mu:m_\tau &\simeq
\theta^{\,5}:\theta^{\,2} :1\,, \nonumber \\
V_{us}\simeq \theta\,, & \quad V_{cb}\simeq \theta^{\,2},\,
\end{align}
where $\theta$, given by Eq.~(\ref{1}), is a small parameter of the order of the Cabibbo angle, $\theta\simeq 0.22$. These eight conditions on the fermion charges can be re-expressed in terms of the set of eight charge differences shown in Table~\ref{tab:chargediff}~\cite{Dudas:1995yu, Choi:1996se, Binetruy:1996xk, Binetruy:1994ru, Leurer:1992wg}.
We will not repeat here the phenomenological analysis leading to these sets of charge differences, since this has been extensively
discussed in the literature~\cite{Dudas:1995yu, Choi:1996se, Binetruy:1996xk, Binetruy:1994ru, Leurer:1992wg}. The negative charge differences shown in Ref.~\cite{Mira:2000gg} reproduce matrix elements $V_{ub}$ and $V_{cb}$ much smaller than the observed ones, and this cannot be improved by the K\"ahler contributions~\cite{Espinosa:2004ya, King:2004tx}. Therefore those charges are not phenomenologically viable.
Two relations are provided by the absolute value of the masses of third generation fermions
\begin{align}\label{condi2}
m_t\simeq\langle H_u \rangle \quad\text{and}\quad m_b\simeq m_\tau.
\end{align}
Two theoretical constraints correspond to the consistency conditions for the coefficients of the mixed linear anomalies (the second constraint fixes $k_1=5/3$)~\cite{Mira:2000gg, Binetruy:1994ru,Dine:2012mf}
\begin{align}\label{condi4}
\mathcal{A}_{CCH}=\mathcal{A}_{LLH}=\frac{\mathcal{A}_{YYH}}{k_1}=\delta_{\text{GS}},
\end{align}
where $\mathcal{A}_{CCH}$, $\mathcal{A}_{LLH}$, $\mathcal{A}_{YYH}$ are the coefficients of the $SU(3)_C-SU(3)_C-U(1)_H$, $SU(2)_L-SU(2)_L-U(1)_H$ and $U(1)_Y-U(1)_Y-U(1)_H$ anomalies. Moreover, $\mathcal{A}_{YHH}$ and $\mathcal{A}_{HHH}$ correspond to the $U(1)_Y-U(1)_H-U(1)_H$ and $U(1)_H-U(1)_H-U(1)_H$ anomalies.
The final constraint comes from the vanishing of the mixed anomaly quadratic in the horizontal charges
\begin{align}\label{condi5}
\mathcal{A}_{YHH}=H^{2}_{u}-H^{2}_{d}+\sum_{i}\left[Q^{2}_{i}-L^{2}_{i}-2u^{2}_{i}+d^{2}_{i}+l^{2}_{i}\right]=0.
\end{align}
Given the above set of conditions, 13 out of the 17 $H$ charges are constrained and can be expressed in terms of the remaining four free parameters, which we choose to be $n_i$ $(i=1,2,3)$ and $x$, where $x=H_d + Q_3 + d_3 = H_d + L_3 + l_3$, consistently with our parameterization $\tan\beta=\theta^{x-3}$, which ranges
from 90 to 1 as $x$ runs from 0 to 3 (see Ref.
~\cite{Mira:2000gg} for more details). The expressions for the $H$ charges of the Standard Model fields are shown in Table~\ref{tab:char}. As can be seen there, the charges $n_i$ and $x$ act as free parameters whose possible values should be fixed by additional experimental constraints.
\begin{table}[t]
\centering
\begin{tabular}{rl}\hline\\
$Q_3=$&$\displaystyle -\frac{-3 x (x+10)+(x+4) n_1+(x+7) n_2+(x+9)
n_3-67}{15 (x+7)}$\\[0.5cm]
$L_3=$&$\displaystyle \frac{2 (x+1) (3 x+22)-(2 x+23) n_1-2 (x+7)
n_2+(13 x+97) n_3}{15 (x+7)}$\\[0.5cm]
$L_2=$&$L_3+n_2-n_3$\\
$L_1=$&$L_3+n_1-n_3$\\
$H_u=$&$n_3-L_3$\\
$H_d=$&$-1-H_u$\\
$u_3=$&$-Q_3-H_u$\\
$d_3=$&$-Q_3-H_d+x$\\
$l_3=$&$-L_3-H_d+x$\\
$Q_1=$&$3+Q_3$\\
$Q_2=$&$2+Q_3$\\
$u_1=$&$5+u_3$\\
$u_2=$&$2+u_3$\\
$d_1=$&$1+d_3$\\
$d_2=$& $d_3$\\
$l_1=$& $5-n_1+n_3+l_3$\\
$l_2=$& $2-n_2+n_3+l_3$\\\\\hline
\end{tabular}
\caption{Standard Model fields $H$--charges in terms of
the bilinear $H$--charges $n_i$ and $x$.}
\label{tab:char}
\end{table}
With all these restrictions, there is only one possible set of charge
differences, displayed in Table~\ref{tab:chargediff}. This
self-consistent solution incorporates the Giudice-Masiero mechanism to
solve the $\mu$ problem because $n_0=-1$, and therefore the $\mu$ term
is absent from the superpotential~\cite{Mira:2000gg}.
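The closed-form charges of Table~\ref{tab:char} can be checked numerically. As a sketch (with exact rational arithmetic, and function names of our own), plugging in the values $n_1=-3/2$, $n_2=n_3=-5/2$, $x=2$ used later in Sec.~\ref{u-mssm} reproduces the corresponding entries of Table~\ref{mssm-n}:

```python
from fractions import Fraction as F

def charges(n1, n2, n3, x):
    """H-charges of Table tab:char as exact fractions."""
    Q3 = -(-3*x*(x + 10) + (x + 4)*n1 + (x + 7)*n2
           + (x + 9)*n3 - 67) / (15*(x + 7))
    L3 = (2*(x + 1)*(3*x + 22) - (2*x + 23)*n1 - 2*(x + 7)*n2
          + (13*x + 97)*n3) / (15*(x + 7))
    Hu = n3 - L3
    Hd = -1 - Hu
    return {"Q3": Q3, "L3": L3, "Hu": Hu, "Hd": Hd,
            "L2": L3 + n2 - n3, "L1": L3 + n1 - n3,
            "u3": -Q3 - Hu, "d3": -Q3 - Hd + x, "l3": -L3 - Hd + x}

c = charges(F(-3, 2), F(-5, 2), F(-5, 2), F(2))
print(c["Q3"], c["L3"], c["Hu"], c["Hd"])  # 22/15 -2/5 -21/10 11/10
```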
\begin{table}[t]
\centering
\begin{tabular}{cccccccc}\hline
$Q_{13}$&$Q_{23}$&$d_{13}$&$d_{23}$&$u_{13}$&$u_{23}$&$\mathcal{L}_{13}$&$\mathcal{L}_{23}$\\\hline
$3$&$2$&$1$&$0$&$5$&$2$&$5$&$2$\\\hline
\end{tabular}
\caption{Standard model fields $H$ charge differences with $n_0=-1$ (from Ref. \cite{Mira:2000gg}). Here $\mathcal{L}_{i3}=L_{i3}+l_{i3}$}
\label{tab:chargediff}
\end{table}
\section{Determination of the expansion parameter $\theta$}\label{theta}
The vacuum expectation value of the flavon $\langle S \rangle$ is determined dynamically thanks to the anomalous nature of $U(1)_H$~\cite{Dreiner:2003hw}. We show explicitly that our $U(1)_H$--charges assignments can successfully lead to an expansion parameter given by $\theta=\langle S \rangle/M_{P}\approx 0.185-0.221$~\cite{Dreiner:2003yr,Irges:1998ax} as desired phenomenologically.
In the string-embedded FN framework the expansion parameter $\theta$ has its origin solely in the Dine-Seiberg-Wen-Witten mechanism, through which the Fayet-Iliopoulos (FI) coefficient is radiatively generated~\cite{Dreiner:2003hw, Cvetic:1998gv,Fayet:1974jb}
\begin{align}\label{c1}
\varepsilon_H=g_s^{2}\frac{\mathcal{A}_{GGH}}{192\pi^{2}}M_{\text{P}}^{2},
\end{align}
where $\mathcal{A}_{GGH}=Grav-Grav-U(1)_H$ is the gravitational anomaly and $g_s$ is the string coupling constant.
The cancellation of the mixed chiral anomalies of $U(1)_H$ with the SM gauge group, with itself and with gravity demands (see Ref.~\cite{Maekawa:2001uk})
\begin{align}\label{c2}
\frac{\mathcal{A}_{CCH}}{k_C}=\frac{\mathcal{A}_{LLH}}{k_L}=\frac{\mathcal{A}_{YYH}}{k_Y}=\frac{\mathcal{A}_{HHH}}{3k_H}=\frac{\mathcal{A}_{GGH}}{24},
\end{align}
where the $k_{\ldots}$ are the affine or Kac-Moody levels of the corresponding symmetries~\cite{Dreiner:2003hw}.
Relying on the Green--Schwarz mechanism~\cite{Green:1984sg}, one also requires
\begin{align}\label{c3}
\mathcal{A}_{YHH}=0.
\end{align}
The factor of 3 in the fourth denominator in Eq.~(\ref{c2}) is of a combinatorial nature: one deals with a pure rather than mixed anomaly.
In this convention one has:
\begin{align}\label{c4}
g_C^{2}k_C=g_L^{2}k_L=g_Y^{2}k_Y=g_H^{2}k_H=2g_s^{2},
\end{align}
where $g_C$ is the $SU(3)_C$ coupling constant, $g_L$ the $SU(2)_L$ coupling constant, and $g_Y$ and $g_H$ the $U(1)_Y$ and $U(1)_H$ coupling constants. For the factor of 2 in Eq.~(\ref{c4}) and a discussion of the mismatch between the conventions of GUT and string amplitudes see Ref.~\cite{Cvetic:1998gv}
($\varepsilon_H^{\text{tree level}}$ is zero in local supersymmetry, see Ref.~\cite{Barbieri:1982ac}). This gives
\begin{align}\label{c5}
\langle S \rangle=\sqrt{-\frac{\varepsilon_H}{S}},
\end{align}
assuming that no other fields break $U(1)_H$. With $S=-1$, we use Eq.~(\ref{c2}) to eliminate $\mathcal{A}_{GGH}$ in favor of $\mathcal{A}_{CCH}$,
\begin{align}\label{c6}
\frac{\mathcal{A}_{CCH}}{k_C}&=\frac{\mathcal{A}_{GGH}}{24} \nonumber \\
\mathcal{A}_{GGH}&=\frac{24}{k_C}\mathcal{A}_{CCH}.
\end{align}
Substituting Eq.~(\ref{c6}) into Eq.~(\ref{c1}),
\begin{align}\label{c7}
\varepsilon_H=g_s^{2}\frac{24}{192\pi^{2}k_C}\mathcal{A}_{CCH}M_{\text{P}}^{2},
\end{align}
where, according to Ref.~\cite{Dreiner:2003hw}, we have
\begin{align}\label{c8}
\mathcal{A}_{CCH}=\frac{1}{2}\left[\sum_i(2Q_{i}+u_{i}+d_{i})\right].
\end{align}
Utilizing the charges shown in Table~\ref{tab:char}, we find that
\begin{align}\label{c9}
\mathcal{A}_{CCH}=\frac{3}{2}(7+x),
\end{align}
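Eq.~(\ref{c9}) can be cross-checked against Eq.~(\ref{c8}) by summing the exact quark-sector charges of Table~\ref{mssm-n}, which correspond to $x=2$ (a sketch with variable names of our own):

```python
from fractions import Fraction as F

# Quark-sector H-charges from Table mssm-n (the x = 2 solution).
Q = [F(67, 15), F(52, 15), F(22, 15)]
u = [F(169, 30), F(79, 30), F(19, 30)]
d = [F(13, 30), F(-17, 30), F(-17, 30)]

# Eq. (c8): A_CCH = (1/2) * sum_i (2 Q_i + u_i + d_i)
A_CCH = F(1, 2) * sum(2*Qi + ui + di for Qi, ui, di in zip(Q, u, d))
print(A_CCH)                        # 27/2
assert A_CCH == F(3, 2) * (7 + 2)   # Eq. (c9) with x = 2
```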
From Eq.~(\ref{c4}) we have
\begin{align}\label{c10}
g_C^{2}k_C&=2g_s^{2} \nonumber \\
g_s^{2}&=\frac{g_C^{2}k_C}{2}.
\end{align}
Replacing the Eq.(\ref{c9}) and the Eq.~(\ref{c10}) into the Eq.~(\ref{c7}),
\begin{align}\label{c11}
\varepsilon_H&=\frac{24(7+x)3g_C^{2}k_C}{768\pi^{2}k_C}M_{\text{P}}^{2} \nonumber\\
&=\frac{3(7+x)g_C^{2}}{32\pi^{2}}M_{\text{P}}^{2}.
\end{align}
Introducing the Eq.~(\ref{c11}) into the Eq.~(\ref{c5}),
\begin{align}\label{c12}
\langle S \rangle=\frac{g_C}{4\pi \sqrt{2}}\sqrt{3(7+x)}M_{\text{P}},
\end{align}
and evaluating $g_C\left(M_{\text{GUT}}=2.2 \times 10^{16}\,\text{GeV}\right)\approx 0.72$ and replacing it in Eq.~(\ref{c12}),
\begin{align}\label{c13}
\langle S \rangle=\frac{0.72}{4\pi \sqrt{2}}\sqrt{3(7+x)}M_{\text{P}},
\end{align}
the parameter $\theta$ of Eq.~(\ref{1}), with $M_F=M_{\text{P}}$ and Eq.~(\ref{c13}), is given by
\begin{align}\label{c14}
\theta=\frac{0.72}{4\pi \sqrt{2}}\sqrt{3(7+x)}.
\end{align}
We show in Table~\ref{tab:epsi} the variation of $\theta$ according to Eq.~(\ref{c14}) for different values of $x$.
\begin{table}[t]
\centering
\begin{tabular}{cccccc}\hline
$\theta$& &$0.185$&$0.198$&$0.210$&$0.221$\\ \hline
$x$& &$0$&$1$&$2$&$3$\\ \hline
\end{tabular}
\caption{Variation of $\theta$ with the different values of $x$.}
\label{tab:epsi}
\end{table}
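Eq.~(\ref{c14}) is easy to evaluate; the short sketch below (helper name ours) reproduces the values of Table~\ref{tab:epsi} to about $10^{-3}$:

```python
import math

def theta_of_x(x, gC=0.72):
    """Expansion parameter of Eq. (c14)."""
    return gC / (4 * math.pi * math.sqrt(2)) * math.sqrt(3 * (7 + x))

# Compare with Table tab:epsi.
for x, quoted in zip(range(4), [0.185, 0.198, 0.210, 0.221]):
    assert abs(theta_of_x(x) - quoted) < 2e-3
    print(x, theta_of_x(x))
```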
Finally, the reason for imposing the condition $|H(f_i)|\lesssim10$ is to avoid an excessive fine--tuning in Eq.~(\ref{c8}).
\section{Relations for the charges of the model}\label{modelt}
By using Table~\ref{tab:char} it is easy to check that
\begin{align}\label{ma}
H(\lambda_{ijk})=
\begin{pmatrix}
n_{\lambda_{121}} & n_{\lambda_{122}} & n_{\lambda_{123}} \\
n_{\lambda_{131}} & n_{\lambda_{132}} & n_{\lambda_{133}} \\
n_{\lambda_{231}} & n_{\lambda_{232}} & n_{\lambda_{233}} \\
\end{pmatrix}=
\begin{pmatrix}
x+n_{2}+6 & x+n_{1}+3 & x+n_{1}+n_{2}-n_{3}+1 \\
x+n_{3}+6 & x+n_{1}-n_{2}+n_{3}+3 & x+n_{1}+1 \\
x-n_{1}+n_{2}+n_{3}+6 & x+n_{3}+3 & x+n_{2}+1 \\
\end{pmatrix}.
\end{align}
\begin{table}
\centering
\begin{tabular}{llll}\hline
$i$ & $1$ & $2$ & $3$\\\hline
$p_i$ & $3$ & $2$ & $2$\\
$p_i'$ & $4$ & $1$ & $0$\\
$p_i''$ & $3$ & $2$ & $2$\\\hline
\end{tabular}
\caption{Integer values required to obtain the horizontal charges of dimension-4 RPV operators.}
\label{tab:pi}
\end{table}
These charges can be parameterized as:
\begin{align}\label{ca1}
H(\lambda_{ijk})=n_{i}+n_{j}-n_{k}+n_{0}+x+i-2k+p_i+p_j+p_k
\end{align}
where the $p_i$ are given in Table~\ref{tab:pi}. There are two possibilities for these charges:
\begin{enumerate}
\item $\lambda_{ijk}$ with $i=k$ or $j=k$.
We can write the Eq.~(\ref{ca1}) as
\begin{align}\label{ca2}
H(\lambda_{ijk})=n_{i(\text{or }j)}+n_{0}+x+i-2k+p_i+p_j+p_k,
\end{align}
where the last five terms in Eq.~(\ref{ca2}) can be viewed as an integer-valued function of the indices $i,j,k$,
\begin{align}\label{ca3}
\mathcal{I}(i,j,k)=&i-2k+p_i+p_j+p_k\,, &(i<&j)
\end{align}
and Eq.~(\ref{ca2}),
\begin{align}\label{ca4}
H(\lambda_{ijk})=n_{i(\text{or }j)}+n_{0}+[x+\mathcal{I}(i,j,k)].
\end{align}
\item $\lambda_{ijk}$ with $i\neq k$ and $j\neq k$:
\begin{center}
\begin{align}\label{ca5}
H(\lambda_{ijk})=n_{i}+n_{j}-n_{k}+n_{0}+[x+\mathcal{I}(i,j,k)],
\end{align}
adding and subtracting $n_k$ in Eq.~(\ref{ca5})
\begin{align}\label{ca6}
H(\lambda_{ijk})=n_{i}+n_{j}+n_{k}-2n_{k}+n_{0}+[x+\mathcal{I}(i,j,k)],
\end{align}
the Eq.~(\ref{ca6}) can be written as
\begin{align}\label{ca7}
H(\lambda_{ijk})=n_{1}+n_{2}+n_{3}+n_{0}-2n_{k}+[x+\mathcal{I}(i,j,k)].
\end{align}
\end{center}
Simplifying
\begin{align}\label{ca8}
H(\lambda_{ijk})=\mathcal{N}-2n_{k}+[x+\mathcal{I}(i,j,k)],
\end{align}
where,
\begin{align}\label{ca9}
\mathcal{N}=\sum_{\alpha=0}^{3}n_{\alpha}=n_1+n_2+n_3+n_{0}.
\end{align}
\end{enumerate}
By using Eq.~(\ref{ca8}) and Eq.~(\ref{ca9}) we can reproduce any entry of the matrix in Eq.~(\ref{ma}), for example
\begin{align}\label{ca10}
H(\lambda_{123})=&n_{1}+n_{2}+n_{3}-1-2n_{3}+[x+1-6+3+4]\\
&=x+n_1+n_2-n_3+1.\nonumber
\end{align}
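The consistency between the parameterization of Eqs.~(\ref{ca1})--(\ref{ca8}) and the matrix of Eq.~(\ref{ma}) can be verified numerically; the sketch below uses arbitrary test values for the free charges:

```python
# Check that the parameterization of Eq. (ca1) reproduces the matrix of
# Eq. (ma).  The charges n_i, x below are arbitrary test values.
p = {1: 3, 2: 2, 3: 2}          # Table tab:pi
n = {0: -1, 1: 2, 2: 5, 3: 11}  # n_0 = -1; n_1..n_3 and x arbitrary
x = 7

def I(i, j, k):
    return i - 2*k + p[i] + p[j] + p[k]   # Eq. (ca3), for i < j

def H_lambda(i, j, k):                    # Eq. (ca1)
    return n[i] + n[j] - n[k] + n[0] + x + I(i, j, k)

# Entry (1,2,3) of Eq. (ma): x + n_1 + n_2 - n_3 + 1
assert H_lambda(1, 2, 3) == x + n[1] + n[2] - n[3] + 1
# Entry (1,2,1): x + n_2 + 6 ; entry (2,3,3): x + n_2 + 1
assert H_lambda(1, 2, 1) == x + n[2] + 6
assert H_lambda(2, 3, 3) == x + n[2] + 1
print("entries consistent")
```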
In the same way the charges of the other couplings of Eq.~(\ref{po}) can be written as
\begin{align}\label{otros}
H(\lambda''_{ijk})=&\tfrac{1}{3}\mathcal{N}+\left[x+\mathcal{I}''(i,j,k)\right]&(j<&k) \\
H(\lambda'_{ijk})=&n_i+\left[x+\mathcal{I}'(i,j,k)\right], \nonumber
\end{align}
where
\begin{align}
\label{eq:Is}
\mathcal{I}''(i,j,k)=&-2i+p_i''+p_j''+p_k''&(j<&k)\nonumber\\
\mathcal{I}'(i,j,k)=&\frac{1}{2}\left(j+k+p_j'+p_k'\right)-2\delta_{j3}.&& \nonumber
\end{align}
To summarize, the $H$ charges of the $R$--parity breaking
couplings can be written as
\begin{align}
\label{resum}
H(\lambda''_{ijk})=&\tfrac{1}{3}\mathcal{N}+\left[x+\mathcal{I}''(i,j,k)\right]&(j<&k)\nonumber\\
H(\lambda'_{ijk})=&n_i+\left[x+\mathcal{I}'(i,j,k)\right] && \nonumber\\
H(\lambda_{ijk})=&n_i+n_j-n_k+n_0+\left[x+\mathcal{I}(i,j,k)\right] &(i<&j)\nonumber\\
=&
\begin{cases}
n_{i(\text{or $j$})}+n_0+\left[x+\mathcal{I}(i,j,k)\right]& \text{if $i= k$ (or $j= k$)}\\
\mathcal{N}-2n_k+\left[x+\mathcal{I}(i,j,k)\right]& \text{if $i\ne k$ and $j\ne k$}\\
\end{cases}.&&
\end{align}
From Eq.~(\ref{resum}) it is straightforward to see the possible
scenarios that can be obtained in the context of an anomalous horizontal Abelian symmetry
with a single flavon, as explained below.
\subsection{Getting the MSSM}\label{u-mssm}
It was shown in Sec.~\ref{gag} that the conservation of $R$--parity in the MSSM $+\widehat{N}$
results as a consequence of a proper choice of the horizontal $U(1)_{H}$--charges,
and on this basis it is possible to obtain a proper mass texture for the neutrinos. To recover the MSSM, we need the bilinear terms ($\mu_{i}$) and the trilinear terms ($\lambda_{ijk}, \lambda'_{ijk}, \lambda''_{ijk}$) which violate $R$--parity to be forbidden. From Eq.~(\ref{31}), the $n_i$ must be half--integer; this forbids the bilinear terms along with $\lambda'_{ijk}$ and the $\lambda_{ijk}$ with repeated indices, and it makes $\mathcal{N}$ in Eq.~(\ref{resum}) fractional, so that $\lambda_{ijk}$ is fully forbidden. Since $\mathcal{N}/3$ is then also fractional, the terms $\lambda''_{ijk}$ are forbidden as well. These conditions are fulfilled if we choose, for example, $n_1=-3/2$, $n_2=-5/2$, $n_3=-5/2$ and $x=2$. With this choice we obtain the charges shown in Table~\ref{mssm-n}, in agreement with Ref.~\cite{Dreiner:2003yr}.
By using these charges and Eq.~(\ref{resum}), the trilinear terms in Eq.~(\ref{po}) are found to be forbidden. For example:
\begin{align}\label{ca11}
H(\lambda_{ijk})=&-3/2-5/2-5/2-1+5+[x+\mathcal{I}(ijk)]\\
=&-5/2+[x+\mathcal{I}(ijk)],\nonumber\\
H(\lambda'_{ijk})=&-5/2+\left[x+\mathcal{I}'(i,j,k)\right],\nonumber\\
H(\lambda''_{ijk})=&\tfrac{1}{3}(-3/2-5/2-5/2-1)+\left[x+\mathcal{I}''(i,j,k)\right]\nonumber\\
=&-15/6+\left[x+\mathcal{I}''(i,j,k)\right].\nonumber
\end{align}
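The half-odd-integer nature of all three $n$-dependent pieces in Eq.~(\ref{ca11}) can be confirmed with exact rational arithmetic (a sketch; variable names are ours):

```python
from fractions import Fraction as F

# MSSM + N charge choice of Sec. u-mssm: n_0 = -1, n_1 = -3/2, n_2 = n_3 = -5/2.
n = {0: F(-1), 1: F(-3, 2), 2: F(-5, 2), 3: F(-5, 2)}
N = n[0] + n[1] + n[2] + n[3]   # N = -15/2

# The n-dependent pieces of the charges in Eq. (ca11):
H_lam   = N - 2*n[3]   # representative lambda_{ijk} piece
H_lamp  = n[3]         # representative lambda'_{ijk} piece
H_lampp = N / 3        # lambda''_{ijk} piece

for H in (H_lam, H_lamp, H_lampp):
    # The bracket [x + I(i,j,k)] is an integer, so a non-integer value
    # here makes the full H-charge fractional: the operator is forbidden.
    assert H.denominator != 1
print(H_lam, H_lamp, H_lampp)  # -5/2 -5/2 -5/2
```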
\vspace{6.0cm}
\begin{table}[t]
\centering
\begin{tabular}{cccccc}\hline
Generation $i$&$Q_{i}$& $d_{i}$ & $u_{i}$ & $L_{i}$ & $l_{i}$ \\\hline
1 & 67/15 & 13/30 & 169/30 & 3/5 & 53/10 \\ \hline
2 & 52/15 & -17/30 & 79/30 & -2/5& 33/10 \\ \hline
3 & 22/15 & -17/30 & 19/30 &-2/5 &13/10 \\ \hline
\end{tabular}
\end{table}
\begin{table}[t]
\centering
\begin{tabular}{cccc}\hline
$H_{u}$&$H_{d}$& $N_{1}$ & $N_{2}$ \\\hline
-21/10 & 11/10 & 5/2 & 5/2 \\ \hline
\end{tabular}
\caption{The complete set of $H$--charges for $x=2$. It gives a texture for the neutrino sector and reproduces the MSSM $+\widehat{N}$.}
\label{mssm-n}
\end{table}
Now, the superpotential which introduces the interaction terms for both Dirac and Majorana neutrinos is
\begin{align}\label{majo}
\mathcal{W}^{\nu}&=M_{\text{P}}\frac{M_{ij}}{2}\theta^{N_{i} +
N_{j}}N^{i}N^{j} + Y^{\nu}_{ij}\theta^{L^{i} + H^{u}
+ N^{j}}L^{i}H^{u}N^{j}\\ \nonumber
&+ \frac{\psi_{ij}}{2M_{\text{P}}}\theta^{L^{i} +
H^{u} + L^{j}+ H^{u}}
L^{i}H^{u}L^{j}H^{u},
\end{align}
The neutrino mass terms in this superpotential are, in order of appearance:
$\mathbf{M_{RR}^{Maj}}$, $\mathbf{M_{LR}^{Dirac}}$ and
$\mathbf{M_{LL}^{Maj}}$.
Note that the $n_i$ are negative. The term $\mathbf{M_{LL}^{Maj}}$, with $H$ charges $(n_i +n_j)$, is then suppressed by a factor of $m_{3/2}/M_{\text{P}}$ and can be disregarded in Eq.~(\ref{majo}); the remaining terms are just those needed to build the seesaw mechanism:
\begin{align}\label{see}
M^{\nu}_{ij}=-\theta^{L^{i} + L^{j} +
2H^{u}}\left(
\sum_{k,l}Y^{\nu}_{ik}M^{-1}_{kl}Y^{\nu}_{jl}\right).
\end{align}
Since the overall charge is negative, the factor $\theta^{L^{i}+L^{j}+2H^{u}}$ is greater than one, so $m_{\nu}$ is enhanced by the parameter $\theta$, improving the consistency with phenomenology. According to this analysis, the set of $H$--charges given in
Table~\ref{mssm-n} gives a proper texture matrix for the neutrinos:
\begin{align}\label{mat}
\mathbf{M}^{\nu}\sim \theta^{-5}\langle H^{u} \rangle^{2}/M_{P}\begin{pmatrix}
\theta^{2} & \theta & \theta \\
\theta & 1 & 1 \\
\theta & 1 & 1 \\
\end{pmatrix}.
\end{align}
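The texture of Eq.~(\ref{mat}) follows directly from the exponents $L^{i}+L^{j}+2H^{u}$ of Eq.~(\ref{see}) evaluated with the charges of Table~\ref{mssm-n}; a sketch with exact fractions:

```python
from fractions import Fraction as F

# Lepton doublet and H_u charges from Table mssm-n.
L  = [F(3, 5), F(-2, 5), F(-2, 5)]
Hu = F(-21, 10)

# Exponent of theta in each entry of the seesaw matrix, Eq. (see).
E = [[Li + Lj + 2*Hu for Lj in L] for Li in L]

# Factoring out the overall theta^-5 leaves the texture of Eq. (mat).
texture = [[e + 5 for e in row] for row in E]
assert texture == [[2, 1, 1], [1, 0, 0], [1, 0, 0]]
print("texture reproduced")
```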
Note that the seesaw scale is obtained from the single scale of the model, $M_{P}$. In this thesis this mechanism will be generalized to the case of $SU(5)$ SUSY~\cite{Chen:2008tc,Jack:2003pb,Babu:2002tx,Ling:2002nj,Bando:2003iz}.
\subsection{Getting bilinear $R$-parity violation}
The bilinear $R$--parity violating models are characterized by two properties~\cite{Hirsch:2000ef,Diaz:2003as}: first, the usual MSSM superpotential is enlarged according to
\begin{align}\label{bili}
W_{\text{BRPV}}=W_{MSSM} + \varepsilon^{ab}\mu^{i}\widehat{L}^{i}_{a}\widehat{H}^{u}_{b},
\end{align}
where there are three new superpotential parameters $\mu^{i}$, one for each fermion generation. The second modification is the addition of an extra soft term
\begin{align}\label{biliso}
V_{\text{soft}}=V_{MSSM} - \varepsilon^{ab}B^{i}\mu^{i}\tilde{L}^{i}_{a}H^{u}_{b},
\end{align}
that depends on three soft mass parameters $B^{i}$. For simplicity, the $R$--conserving soft terms are taken as in minimal supergravity (mSUGRA). Notice that the presence of the new soft interactions prevents the new bilinear terms in Eq.~(\ref{bili}) from being rotated away~\cite{Diaz:1997xc}.
The new bilinear terms explicitly break $R$--parity as well as lepton number. The bilinear $R$--parity violating models predict correlations between observables in accelerators and in neutrino physics~\cite{Porod:2000hv,deCampos:2012pf,DeCampos:2010yu}, and they are searched for at the LHC~\cite{ATLAS:2011ad}.
In our model, bilinear $R$-parity violation is obtained by making $\mathcal{N}/3$ in Eq.~(\ref{resum}) fractional. This is achieved when $\mathcal{N}$ is an integer not divisible by three, in which case $\lambda''_{ijk}$ is forbidden. If in addition each $n_i\lesssim-7$ in Eq.~(\ref{resum}), the terms $\lambda'_{ijk}$ and $\lambda_{ijk}$ remain suppressed by a factor of the order of $m_{3/2}/M_{P}$.
For example, following Eq.~(\ref{ca9}) and using the charges in Tables~\ref{bilin2} and~\ref{bilin}, which are derived from the four free parameters $n_1=n_2=-7$, $n_3=-8$, $x=0$~\cite{Dreiner:2006xw} and $n_1=n_2=n_3=-8$, $x=1$~\cite{Mira:2000gg}, respectively, we obtain
\vspace{2.0cm}
\begin{table}[t]
\centering
\begin{tabular}{cccccc}\hline
Generation $i$&$Q_{i}$& $d_{i}$ & $u_{i}$ & $L_{i}$ & $l_{i}$ \\\hline
1 & 467/105 & -97/35& 722/105 & -386/105 & 667/105 \\ \hline
2 & 467/105 & -167/35 & 302/105 & -386/105& 352/105 \\ \hline
3 & 257/105 & -167/35 & 92/105 &-491/105 &247/105 \\ \hline
\end{tabular}
\qquad
\begin{tabular}{cc}\hline
$H_{u}$&$H_{d}$ \\\hline
-349/105 & 244/105 \\ \hline
\end{tabular}
\caption{Charges that enable bilinear $R$-parity violation with $x=0$ and $n_1=n_2=-7$, $n_3=-8$, according to Ref.~\cite{Dreiner:2006xw}.}
\label{bilin2}
\end{table}
\begin{table}[t]
\centering
\begin{tabular}{cccccc}\hline
Generation $i$&$Q_{i}$& $d_{i}$ & $u_{i}$ & $L_{i}$ & $l_{i}$ \\\hline
1 & 161/30 & -18/5 & 103/15 & -113/30 & 98/15 \\ \hline
2 & 131/30 & -23/5 & 58/15 & -113/30& 53/15 \\ \hline
3 & 71/30 & -23/5 & 28/15 &-113/30 &23/15 \\ \hline
\end{tabular}
\qquad
\begin{tabular}{cc}\hline
$H_{u}$&$H_{d}$ \\\hline
-127/30 & 97/30 \\ \hline
\end{tabular}
\caption{Charges that enable bilinear $R$-parity violation with $x=1$ and $n_1=n_2=n_3=-8$, according to Ref.~\cite{Mira:2000gg}.}
\label{bilin}
\end{table}
\vspace{-2.5cm}
\begin{align}\label{ca12}
\frac{\mathcal{N}}{3}=\frac{-8-8-8-1}{3}=\frac{-25}{3},\\
\frac{\mathcal{N}}{3}=\frac{-7-7-8-1}{3}=\frac{-23}{3}.\nonumber
\end{align}
This condition ensures that, for bilinear charges $n_{i}\lesssim-7$, the $L$--violating trilinear terms in Eq.~(\ref{po}) are very suppressed, while the $B$--violating ones are forbidden.
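The arithmetic in Eq.~(\ref{ca12}) is simple enough to verify mechanically. The following Python sketch (purely illustrative, using exact rational arithmetic) confirms that $\mathcal{N}/3$ is fractional for both charge choices, which is the condition that forbids the $\lambda''_{ijk}$ couplings:

```python
from fractions import Fraction

def N(n1, n2, n3):
    # N = n1 + n2 + n3 - 1, as in Eq. (resum)
    return Fraction(n1) + Fraction(n2) + Fraction(n3) - 1

for charges in [(-8, -8, -8), (-7, -7, -8)]:
    N3 = N(*charges) / 3
    # a fractional N/3 (denominator != 1) forbids the lambda'' couplings
    print(charges, N3, N3.denominator != 1)
```

Running it reproduces the values $-25/3$ and $-23/3$ of Eq.~(\ref{ca12}).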
\subsection{Getting a $R$-parity breaking model with $L$ violation}
Following Ref.~\cite{Sierra:2009zq}, and assuming a decaying neutralino as dark matter candidate, neutralino decays are studied in the context of minimal $R$--parity violating models with only the lepton-number violating couplings $\lambda$. The lifetime of a mostly-gaugino neutralino decaying through a trilinear $R$--parity breaking coupling $\lambda$ is approximately given by (see Ref.~\cite{Baltz:1997ar})
\begin{align}\label{ca13}
\tau_{\chi}=\left(\frac{M_s}{2 \times 10^{4}\text{GeV}}\right)^{4}\left(\frac{10^{-23}}{\lambda}\right)^{2}\left(\frac{2 \times 10^{3}\text{GeV}}{m_{\chi}}\right)^{5}10^{26}\;\text{sec}.
\end{align}
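For quick numerical orientation, Eq.~(\ref{ca13}) can be transcribed directly into code (an illustrative sketch; the chosen inputs are just the reference values appearing in the formula):

```python
def neutralino_lifetime_sec(M_s_GeV, lam, m_chi_GeV):
    # Direct transcription of Eq. (ca13): neutralino lifetime in seconds
    return ((M_s_GeV / 2.0e4) ** 4
            * (1.0e-23 / lam) ** 2
            * (2.0e3 / m_chi_GeV) ** 5
            * 1.0e26)

# at the reference point the lifetime is 1e26 s by construction
print(neutralino_lifetime_sec(2.0e4, 1.0e-23, 2.0e3))
```

Note the quadratic sensitivity to $\lambda$: increasing the coupling by one order of magnitude shortens the lifetime by two.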
According to this expression, for a neutralino mass of a few TeV, the viability of decaying neutralino DM depends on the slepton mass spectrum and on the size of the corresponding $\lambda$ coupling, which is determined by the choice $n_{\lambda}<0$ through the strong suppression induced by the factor $m_{3/2}/M_P$. A coupling as small as $10^{-23}$ is possible if $n_{\lambda}=-10$, and accordingly the constraint $\tau_{\chi}\geq 10^{26}\ \text{sec}$ can be satisfied even with a not-so-heavy slepton. To obtain only the couplings $\lambda$, we need $\mathcal{N}$ in Eq.~(\ref{resum}) to be fractional; this condition prohibits $\lambda''_{ijk}$. If, in addition, the bilinear charges $n_{i}$ are not half-integers, the terms $\lambda'_{ijk}$, as well as the $\lambda_{ijk}$ with $i=k$ (or $j=k$), are also prohibited. However, the terms $\lambda_{ijk}$ with $i\ne k$ and $j\ne k$ may be allowed if $\mathcal{N}-2n_k$ is an integer. In such a case, the
decays of the LSP are leptophilic~\cite{Sierra:2009zq}. A set of bilinear charges that satisfies this condition is shown in Table~\ref{rvl}.
As an example, using Eq.~(\ref{resum}) and Table~\ref{rvl}, the only $R$--parity violating coupling allowed at all scales is
\begin{table}[t]
\centering
\begin{tabular}{lccccc}\hline
& $x$& $n_1$ &$n_2$ &$n_3$ & $|f_i|$\\\hline
$\lambda_{231}$& $1$& $7/3$ & $-19/3$ & $-25/3$& $<7$ \\\hline
$\lambda_{123}$& $1$& $-10/3$ & $-19/3$ & $7/3$& $<6$ \\\hline
$\lambda_{132}$& $1$& $-5/3$ & $17/3$ & $-20/3$& $<7$ \\\hline
\end{tabular}
\caption{Set of bilinear $H$--charges consistent with the trilinear
$H$--charge choice $n_\lambda=-10$.}
\label{rvl}
\end{table}
\begin{align}\label{ca14}
H(\lambda_{231})&=n_2+n_3+n_1-1-2n_{1}+\left[1+i-2k+p_i+p_j+p_k\right]\nonumber \\
&=-19/3-25/3+7/3-1-14/3+\left[1+2-2+2+2+3\right]\nonumber \\
&=-10,
\end{align}
while for example
\begin{align}\label{ca15}
H(\lambda_{123})&=n_2+n_3+n_1-1-2n_{3}+\left[1+i-2k+p_i+p_j+p_k\right]\nonumber \\
&=-19/3-25/3+7/3-1+50/3+\left[1+1-6+3+2+2\right]\nonumber \\
&=19/3,
\end{align}
is a fractional number, as are, for example,
\begin{align}\label{ca16}
H(\lambda_{123}^{'})&=n_1+\left[1+\frac{1}{2}\left(j+k+p_{j}^{'}+p_{k}^{'}\right)\right]\nonumber \\
&=7/3+\left[1+\frac{1}{2}(5+1)\right]\nonumber \\
&=19/3,
\end{align}
and
\begin{align}\label{ca17}
H(\lambda_{123}^{''})&=\frac{1}{3}(7/3-19/3-25/3-1)+\left[1-2+3+2+2\right]\nonumber \\
&=-\frac{40}{9}+6\nonumber \\
&=\frac{14}{9}.
\end{align}
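The charge sums in Eqs.~(\ref{ca14})--(\ref{ca17}) can be cross-checked with exact rational arithmetic. In the sketch below (illustrative only), the charges are taken from the first row of Table~\ref{rvl} and the integer brackets are inserted with the numerical values quoted in the equations above:

```python
from fractions import Fraction

F = Fraction
n1, n2, n3 = F(7, 3), F(-19, 3), F(-25, 3)   # first row of Table rvl

# H(lambda_231), Eq. (ca14): k = 1, integer bracket [1+2-2+2+2+3] = 8
H_231 = n2 + n3 + n1 - 1 - 2 * n1 + 8
# H(lambda_123), Eq. (ca15): k = 3, integer bracket [1+1-6+3+2+2] = 3
H_123 = n2 + n3 + n1 - 1 - 2 * n3 + 3
# H(lambda''_123), Eq. (ca17): N/3 plus integer bracket [1-2+3+2+2] = 6
H_pp123 = (n1 + n2 + n3 - 1) / 3 + 6

print(H_231, H_123, H_pp123)  # -10, 19/3, 14/9
```

Only $H(\lambda_{231})$ comes out as an integer, so only that coupling survives.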
\subsection{Getting Majorana neutrinos with two flavons}
It is also possible to have Majorana neutrinos if, in addition to the right-handed neutrinos, we include in the model a second and a third flavon, $\psi$ and $\phi$, with vacuum expectation values approximately equal to $\theta$. The horizontal charges of these fields are fixed by new invariant diagrams from the Dirac and Majorana mass terms.
In this way, the $H$--charges of $\psi$ and $\phi$ must be such that they do not couple to $L$--violating operators: the overall $H$--charge of any such operator would then be fractional and therefore forbidden.
The introduction of two flavon fields could spoil proton stability, since $H$--invariant terms can be obtained by coupling a large number of $\psi$ and $\phi$ flavons to dangerous operators. In our case, for the charges shown in Table~\ref{mao}, all the dangerous operators coupled to the new fields acquire an overall fractional charge.
Then, by adding a second and a third flavon field with fractional charge, it is possible to build a Majorana neutrino mass matrix. In both cases an anarchical matrix is obtained, see Refs.~\cite{Hall:1999sn,Haba:2000be,deGouvea:2003xe,deGouvea:2012ac}, which is supported by the recent measurement of a large value of $\theta_{13}$.
\begin{align}\label{anarq}
\mathbf{M}^{\nu}\sim \theta^{-5}\langle H^{u} \rangle^{2}/M_{P}\begin{pmatrix}
1 & 1 & 1 \\
1& 1 & 1 \\
1 & 1 & 1 \\
\end{pmatrix}.
\end{align}
\begin{table}[t]
\centering
\begin{tabular}{ccccccccc}\hline
& $n_{\lambda}$& $x$ & $n_1$ & $n_2$ & $n_3$ & $\psi$ & $\phi$ & $N_{i}$ \\\hline
$\lambda_{132}$ & -12 & 1 & -9973/1399 & 2438/1399 & -9973/1399 & -13270/1399 & -859/1399& 10832/1399\\ \hline
$\lambda_{132}$ & -12 & 2 & -9137/1213 & 2347/1213& -9137/1213 & -11972/1213 & -488/1213&9625/1213 \\ \hline
$\lambda_{231}$ & 13 & 3 & 4318/3907 & 9973/3907 &9973/3907 & -32078/3907&-37733/3907 &27760/3907 \\ \hline
$\lambda_{231}$ & 2 & 1 & -411/3907 & -9973/3907 &-9973/3907 & -22620/3907&-13058/3907 &23031/3907 \\\hline
\end{tabular}
\caption{Example sets of $H$--charges that allow for Majorana neutrinos; $N_i$ denote the right-handed neutrinos.}
\label{mao}
\end{table}
\subsection{Model with violation of baryon number}
In Chapter~\ref{cap2} we consider a supersymmetric standard model extended with an anomalous horizontal symmetry $U(1)_{H}$ with a single flavon. A self-consistent framework with baryon-number violation is achieved along with a proper suppression of lepton-number violating dimension-five operators, so that the proton can be sufficiently stable. With the introduction of right-handed neutrinos, both Dirac and Majorana masses can be accommodated within this model.
In order to obtain a model with baryon-number violation we need $\mathcal{N}$ in Eq.~(\ref{resum}) to be a multiple of 3. This condition ensures that the couplings $\lambda^{''}$ are generated. Choosing the bilinear charges $n_i$ fractional but not half--integer, we guarantee that $\lambda^{'}$ and $\lambda$ remain forbidden. For example, by choosing $n_1=n_2=n_3=13/3$ and $x=1$ we can see that only the $\lambda^{''}$ are generated. Using Eq.~(\ref{resum}), we have for example
\begin{align}\label{ejj}
H(\lambda''_{323})=&\tfrac{1}{3}(13/3+13/3+13/3-1)+\left[1+0\right]\\
=&5.\nonumber
\end{align}
In the same way all $\lambda^{''}$ are obtained.
We can check for example that
\begin{align}\label{ejj2}
H(\lambda'_{123})=&13/3+\left[1+3\right]\\
=&25/3, \nonumber
\end{align}
and
\begin{align}\label{ejj3}
H(\lambda_{231})=&(13/3+13/3+13/3-1)-2(13/3)+\left[1+7\right]\\
=&34/3,\nonumber
\end{align}
are fractional.
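As before, the pattern of Eqs.~(\ref{ejj})--(\ref{ejj3}) can be verified with exact arithmetic; the following illustrative check uses $n_1=n_2=n_3=13/3$ and the integer brackets quoted above:

```python
from fractions import Fraction

n = Fraction(13, 3)      # n1 = n2 = n3 = 13/3, x = 1
N = 3 * n - 1            # N = n1 + n2 + n3 - 1 = 12

H_lpp = N / 3 + 1        # Eq. (ejj):  lambda''_323, bracket [1+0]
H_lp  = n + 4            # Eq. (ejj2): lambda'_123,  bracket [1+3]
H_lam = N - 2 * n + 8    # Eq. (ejj3): lambda_231,   bracket [1+7]

# only the B-violating coupling carries an integer H-charge
print(H_lpp, H_lp, H_lam)  # 5, 25/3, 34/3
```

The integer charge of $\lambda''_{323}$ and the fractional charges of $\lambda'_{123}$ and $\lambda_{231}$ reproduce the selection pattern claimed in the text.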
\section{$SU(5)\times U(1)_{H}$ with several flavons}\label{unif}
Differently from the SM case~\cite{Dreiner:2003yr}, in $SU(5)$ GUTs it
is rather difficult to implement the horizontal symmetries,
because there is less freedom in choosing the $H$--charges (see for
example~\cite{Chen:2008tc}). However, if we allow for several flavons that break the
horizontal symmetry and assign them to the adjoint representation of
$SU(5)$~\cite{Aristizabal:2003zn,Duque:2008ah,Wang:2011ub}, charges
that were forbidden in the singlet-flavon case become allowed, under
the assumption that certain representations for the FN~\cite{Froggatt:1978nt} messenger fields do not exist. In
contrast to the non-unified $SU(3)_{C}\times SU(2)_{L}\times U(1)_{Y}\times
U(1)_H$ model, where the singlet nature of the flavons is mandatory,
in $SU(5)\times U(1)_H$ assigning the flavons to the adjoint has the
additional bonus that non-trivial group theoretical coefficients
concur to determine the coefficients of the effective
operators~\cite{Aristizabal:2003zn,Duque:2008ah,Wang:2011ub}. In this
case, under the additional assumption that at the fundamental level
all the Yukawa couplings obey some principle of
universality~\cite{Duque:2008ah}, the order one coefficients that
determine quantitatively the structure of the mass matrices become
calculable. In this thesis we generalize the mechanism of obtaining $R$--parity from a horizontal symmetry described in Sec.~\ref{u-mssm} to the context of $SU(5)$ SUSY $+\,\widehat{N}$.
\chapter{Conclusions}\label{sec:conlusions}
In this thesis we have obtained a supersymmetric $R$-parity breaking model with $B$
violation by considering the most general supersymmetric standard
model allowed by gauge invariance and extending it with a
SFH $U(1)_H$ symmetry.
The generated effective theory at low energy has only the particle
content of the SSM.
After imposing existing constraints in both single and quadratic
$R$-parity violating (RPV) couplings, only one precise
hierarchy remains depending on a global suppression factor
$\theta^{n_{\lambda''}}$ ($n_{\lambda''}>1$) with $\lambda''_{323}$ as
the dominant coupling and very suppressed couplings for the first two
generations.
Additional suppression is required in order to obtain Dirac neutrino
masses in the model, and only solutions with $n_{\lambda''}\ge6$
remain allowed.
In this way, the resulting RPV and $B$-violating model, which also explains
neutrino masses, is able to satisfy all the existing constraints
on RPV.
In particular, the $U(1)_H$ symmetry also ensures that dimension-five
$L$-violating operators are sufficiently suppressed, so that
the proton lifetime is above the experimental limits.
The resulting underlying theory for the RPV operators is quite
similar to that obtained after imposing the MFV hypothesis on a general RPV model (at least until couplings of
order $\theta^{n_{\lambda''}+3}$), and therefore the predictions of
both models are the same at the LHC.
The phenomenology at colliders depends strongly on the nature and decay length of
the LSP.
Specific searches at the LHC for RPV with $B$ violation have reported
restrictions only in the case of prompt decays of the gravitino when
it is the LSP.
Several analyses of CMS and ATLAS involving leptons have been
reanalyzed to constrain the gluino as a function of the stop mass
(see Ref.~\cite{Berger:2013sir} and references therein) within a special spectrum
guaranteeing that $\operatorname{BR}(\tilde{g}\to
\tilde{t}\,\bar{t})=1$ and with prompt decays of the corresponding
LSP stop.
In both cases bounds in the gluino mass around $600\text{ GeV}$ have
been obtained. Therefore, the parameter space of the RPV/SFH scenario
(or the RPV/MFV one) still has plenty of room to accommodate a low-energy
supersymmetric spectrum.
There are a number of open issues that could be more easily studied
within this realistic and predictive framework, for example, the
constraints on the couplings from low-energy observables and indirect
dark matter experiments or the restrictions in the parameter space
from other collider signatures like the displaced vertices searches
already implemented by ATLAS~\cite{Aad:2012zx} and
CMS~\cite{Chatrchyan:2012jna}.
We have also extended the $SU(5)\times U(1)_H$ model for charged fermion
masses studied in Ref.~\cite{Duque:2008ah} to include neutrino masses.
This has been done by means of an appropriate redefinition of the
$U(1)_H$ charges that, while leaving the Yukawa matrices for the
charged fermions unchanged, forbids at all orders $\Delta B\neq 0$
and $\Delta L=1$ operators, while allowing for $\Delta L=2$
Majorana mass terms. Thus, $R$-parity is enforced as an exact
symmetry, but at the same time the seesaw mechanism can be embedded
within the model. Our construction is severely constrained by two
theoretical requirements. First, the $SU(5)$ GUT implies that the $F$
charges of the lepton doublets and $d$-quark singlets, as well as the
$F$ charges of the quark doublets and lepton singlets, are the same,
reducing drastically the freedom one has in the SM. Second, we have
assumed universality of all the fundamental scalar-fermion couplings,
which basically implies that the model has only one free parameter,
that is the ratio between the $U(1)_H$ breaking vevs and the messenger
scale $M$. In spite of these serious restrictions, we have shown that
by assigning the $U(1)_H$ breaking flavons to the adjoint of $SU(5)$,
computable group theoretical coefficients arise that, at leading
order, determine the structure of the neutrino mass matrix in a
parameter independent way. This structure yields a reasonable first
approximation to the measured neutrino parameters. However, higher
order corrections can be large, and should be taken into account for a
more precise quantitative comparison with observations. In our model,
hierarchical heavy Majorana neutrinos naturally coexist with
non-hierarchical light neutrinos, the atmospheric scale is easily
reproduced for natural values of the parameters, and the mass of the
lightest heavy neutral states, which lies about five orders of
magnitude below the GUT scale, is optimal for leptogenesis.
At the quantitative level, the predictivity of the model clearly
relies on the assumption of universality of the Yukawa couplings. We
have not put forth any speculation concerning the fundamental physics
that might underlie such a strong assumption, but have merely adopted
it as a working hypothesis to highlight how a theory of calculable
`order one coefficients' might actually emerge in GUT models relying
just on a generalized FN mechanism. Needless to say, by relaxing the
assumption of universality by a certain quantitative amount, all the
predictions would acquire a corresponding numerical uncertainty,
although the main qualitative features of the model would remain
unchanged.
\chapter{Introduction}
\begin{flushright}
\begin{minipage}[c]{3in}
\tiny\bfseries\emph{``We must not seek, but find; we must not judge, but observe and comprehend; inspire and elaborate the inspired. We have to feel our own essence integrated and ordered as a whole. Only then will we have real relations with nature.''\\ \\
\quad Hermann Hesse}
\end{minipage}
\end{flushright}
\vspace{1.5cm}
The standard model (SM) is a very successful framework for describing
particle physics phenomena. However, it suffers from some serious
phenomenological problems, among which: neutrinos are massless, the
conditions for baryogenesis are not fulfilled, and there is no
candidate for dark matter (DM). The first two problems can be
solved by extending the SM to include the seesaw mechanism for
neutrino masses~\cite{Minkowski:1977sc,Yanagida:1979as,Glashow,GellMann:1980vs,Mohapatra:1980yp}
that also opens the possibility of
baryogenesis via leptogenesis~\cite{Fukugita:1986hr,Davidson:2008bu},
while extending the SM to its supersymmetric version (SSM) can provide
a natural candidate for DM. However, in contrast to the SM, the SSM
does not have accidental lepton ($L$) and baryon-number ($B$)
symmetries, and this can lead to major phenomenological problems, like
fast proton decay. The standard solution to forbid all dangerous
operators is to impose a discrete symmetry, $R$--parity, and only in
the $R$-parity conserving SSM the lightest supersymmetric particle (LSP),
generally the neutralino, is stable, and provides a good DM
candidate.
Similarly to the SM, the SSM does not provide any explanation for
the strong hierarchy in the charged fermion Yukawa couplings. One way to
explain the flavor puzzle and the suppression of the fermion masses
with respect to the electroweak breaking scale is to impose Abelian
flavor symmetries, that we generically denote as $U(1)_H$, that are
broken by SM-singlets commonly denoted as flavons. This process involves horizontal charges for the fields that determine whether a particular term can or cannot be present in the superpotential.
After this problem is solved, some horizontal charges remain free;
these can be used to set the order of magnitude of the $R$-parity violating
couplings. In supersymmetric models extended to include an anomalous horizontal
$U(1)_H$ symmetry {\it \`a la} Froggatt-Nielsen (FN) \cite{Froggatt:1978nt},
the standard model particles and their superpartners do not carry an
$R$-parity quantum number and instead carry a horizontal charge
($H$--charge). For a review see \cite{Dreiner:2003hw}. In addition, these
kinds of models involve new heavy FN fields and, in the simplest
realizations, an electroweak singlet superfield $S$ of $H$--charge $-1$.
$R$-parity conserving as well as $R$-parity violating $SU(3)_{C}\times
SU(2)_{L}\times U(1)_Y\times U(1)_H$--invariant effective terms arise once
below the FN fields scale, $M$, the heavy degrees of freedom are
integrated out. These terms involve factors of the type $(S/M)^n$,
where $n$ is fixed by the horizontal charges of the fields involved
and determines whether a particular term can or cannot be present in the
superpotential. The holomorphy of the superpotential forbids all the
terms for which $n<0$ and although they will be generated after
$U(1)_H$ symmetry breaking (triggered by the vacuum expectation value of the scalar
component of $S$, $\langle S\rangle$) via the K\"ahler potential
\cite{Giudice:1988yz} these terms are in general much more suppressed than
those for which $n\ge0$. Terms with fractional $n$ are also forbidden
and in contrast to those with $n<0$ there is no mechanism through
which they can be generated. Finally, once $U(1)_H$ is broken the
terms with positive $n$ yield Yukawa couplings determined---up to order
one factors---by $\theta^n=(\langle S\rangle/M)^n$. The standard
model fermion Yukawa couplings typically arise from terms of this
kind. Correspondingly, supersymmetric models based on an $U(1)_H$
Abelian factor are completely specified in terms of the $H$--charges.
Then the $R$-parity conservation can be for example enforced by
an extended gauge symmetry together with supersymmetry (that requires
a holomorphic superpotential) as in the model studied
in~\cite{Mira:1999fx}, or solely by the gauge symmetry thanks to a
suitable choice of the $U(1)_H$--charges, as in
Ref.~\cite{Dreiner:2003yr}.
However, for scenarios such as the $R$--parity conserving constrained minimal supersymmetric standard model (CMSSM), the recent results on searches for supersymmetry by
CMS~\cite{Chatrchyan:2012lia} and ATLAS~\cite{Aad:2012fqa}
experiments have raised the bound on scalar and gluino masses, when
they are approximately equal, to the order of 1.4 TeV.
These searches are mainly based on missing transverse momentum carried
by the LSP.
A high mass scale for scalars and gluinos represents a potential chink
in the initial proposal of the SSM as a possible solution to the
hierarchy problem. These mass limits can be avoided in alternative supersymmetric
models such as the $R$-parity violating SSM
\cite{Hall:1983id,Ross:1984yg,Barger:1989rk,Dreiner:1997uz,Allanach:2003eb,Barbier:2004ez},
in which the LSP is usually assumed to be the gravitino that also
provides a good decaying dark matter candidate
~\cite{Takayama:2000uz,Buchmuller:2007ui}.
The next-to-the-lightest supersymmetric particle decays to standard
model particles, and thus the missing transverse momentum may be
considerably reduced
~\cite{deCampos:2007bn,Brust:2011tb,Butterworth:2009qa,Allanach:2012vj,Brust:2012uf,Asano:2012gj,Curtin:2012rm,Graham:2012th}.
In addition, if the involved couplings are small enough, the presence of
displaced vertices may reduce the efficiency of the standard searches
at the LHC~\cite{deCampos:2007bn,Graham:2012th}. The simplest case of an anomalous horizontal symmetry with a single flavon, can also suppress, but do not completely prohibit, $R$--parity violating terms. Along these lines, consistent models have been built in which neutrino oscillation data can be explained\cite{ Mira:2000gg,Dreiner:2006xw, Dreiner:2003hw,Dreiner:2003yr,Choi:1998wc,Dreiner:2007vp}.
Also, by using the reported anomalies in cosmic-ray electron/positron
fluxes, a consistent model with tiny $R$-parity breaking couplings was
built with decaying leptophilic-neutralino dark
matter~\cite{Sierra:2009zq}.
We adopt in this thesis a new approach by assuming a set of $H$-charges that
give rise to a self-consistent model of $R$-parity breaking and
baryon-number violation.
As a consequence of our $H$-charge assignments, it is not
possible to generate a Majorana mass term for left-handed neutrinos.
However, a neutrino Dirac matrix can be built after the introduction of
right-handed neutrinos with proper $H$-charges.
We also show that by adding a second flavon field with fractional
charge, it is possible to build a Majorana neutrino mass matrix.
In both cases an anarchical matrix
~\cite{Hall:1999sn,Haba:2000be,deGouvea:2003xe,deGouvea:2012ac} is
obtained, which is supported by the recent results of a large value
for $\theta_{13}$~\cite{Abe:2011sj,Abe:2011fz,An:2012eh,Ahn:2012nd}.
As a consequence of $H$-charge assignments, the $\lambda''_{323}$
coupling dominates over the other couplings, and the third-generation
quarks are expected to be present at the final states of LSP decays. Moreover,
the horizontal symmetry predicts a precise hierarchy of $B$-violating
couplings, which can be translated into relations between different
branching ratios, that could be measured at $e^+\ e^-$ colliders. The required conditions to obtain one
$R$-parity breaking SSM with $B$ violation are shown, also taking into account
dimension-five operators.
Next, continuing within the framework of the FN mechanism extended with a horizontal symmetry $U(1)_H$, we introduce the $SU(5)$ model proposed by
Georgi and Glashow~\cite{Georgi:1974sy}, which incorporates the standard model group and gives a description in terms of a single coupling constant $g_{5}$; moreover, the quantization of charge comes as a direct consequence of the algebra of $SU(5)$, and the lifetime of the proton is consistent with the current experimental bounds~\cite{Abe:2013lua}. Differently from the SM case~\cite{Dreiner:2003yr}, in $SU(5)$ GUTs it
is rather difficult to implement this kind of horizontal symmetries,
because there is less freedom in choosing the $H$--charges (see for
example~\cite{Chen:2008tc}). However, if the flavons that break the
horizontal symmetry are assigned to the adjoint representation of
$SU(5)$~\cite{Aristizabal:2003zn,Duque:2008ah,Wang:2011ub}, charges
that were forbidden in the singlet flavon case become allowed, under
the assumption that certain representations for the Froggatt-Nielsen
(FN)~\cite{Froggatt:1978nt} messengers fields do not exist. In
contrast to the non-unified $SU(3)_{C}\times SU(2)_{L}\times U(1)_{Y}\times
U(1)_H$ model, where the singlet nature of the flavons is mandatory,
in $SU(5)\times U(1)_H$ assigning the flavons to the adjoint has the
additional bonus that non-trivial group theoretical coefficients
concur to determine the coefficients of the effective
operators~\cite{Aristizabal:2003zn,Duque:2008ah,Wang:2011ub}. In this
case we additionally assume that at the fundamental level
all the Yukawa couplings obey some principle of
universality~\cite{Duque:2008ah}. A virtue of the $U(1)_H$ gauge symmetry
implemented here is that when the $U(1)_H$ charges are chosen
appropriately, $\Delta B\neq 0$ and $\Delta L=1$ operators are
forbidden at all orders. However, $\Delta L=2$ operators
corresponding to Majorana masses for heavy neutral fermions of the
seesaw remain allowed, and thus the seesaw mechanism can be embedded
in the model. In more detail, following~\cite{Dreiner:2003yr} we
choose the $H$-charges in such a way that operators with even
$R$--parity have an overall $H$-charge that is an integer multiple of
the charge of the $U(1)_H$ breaking scalar fields (that, without loss
of generality, we set equal to $\pm 1$). In contrast, all the
$R$--parity breaking operators, that have an overall half-odd-integer
$H$--charge, are forbidden. Then, to allow for $\Delta L=2$ Majorana
masses while forbidding $\Delta L=1$ operators, it is sufficient to
choose the $H$--charges of the heavy seesaw neutral states as
half--odd-integers. The order one coefficients that
determine quantitatively the structure of the mass matrices become
calculable.
The structure of the thesis is as follows:
Chapter~\ref{cap1}, Sec.~\ref{sec:BNVmodel22}, is an introduction to the FN theory~\cite{Froggatt:1978nt}, used to explain the mass spectrum of the fermionic sector.
In Sec.~\ref{modteo} we state the selection rules of our model.
In Sec.~\ref{gag} we show how $R$--parity can be obtained from a gauge symmetry: choosing a suitable set of horizontal charges,
$R$-parity comes as a direct result of this choice~\cite{Dreiner:2003yr}. In Sec.~\ref{cams} we find the $U(1)_{H}$--charges for the fields of the SSM in terms of four free parameters. In Sec.~\ref{theta} we calculate the numerical value of the expansion parameter $\theta$. In Sec.~\ref{modelt} we carry out the calculation of the $U(1)_H$--charges for the $R$--parity breaking terms. In Sec.~\ref{unif} we summarize the problem with several flavons in the $SU(5)$ unified theory.
In Chapter~\ref{cap2}, Sec.~\ref{sec:BNVmodel}, we present the conditions to obtain an
$R$-parity breaking SSM with $B$ violation, also taking into account
dimension-five operators.
The generation of neutrino masses by introducing right-handed
neutrinos is discussed in
Sec.~\ref{sec:RHN}. In Sec.~\ref{sec:implications}
the consequences for collider physics are mentioned.
Chapter~\ref{cap3} will present a $SU(5)\times U(1)_H$ supersymmetric model for neutrino
masses and mixings.
Chapter~\ref{sec:conlusions} presents the discussion and conclusions of this work.
Appendix~\ref{aped.B} presents in detail the horizontal charges of the dimension-five $R$-parity breaking operators, while in Appendix~\ref{aped.C} we explicitly calculate a contribution to ${Y}_\nu=\sum_i {Y}^{(i)}_\nu$
at $\mathcal{O}(\epsilon)$ in $SU(5)$.
\section{Introduction}
The sequential dependence model \cite{metzler2005sdm} is a very robust
retrieval model that has been shown to outperform or to be on par
with many retrieval models \cite{huston2013termdependencies}. Its
robustness comes from an integration of unigram, bigram, and windowed
bigram models through the theoretical framework of Markov random fields.
The SDM Markov random field is associated with a set of parameters
which are learned through the usual parameter estimation techniques
for undirected graphical models with training data. Despite its simplicity,
the SDM model is a versatile method that provides a reasonable input
ranking for further learning-to-rank phases or as a building block
in a larger model \cite{dalton2013kbbridge}. As it is a feature-based
learning-to-rank model, it can be extended with additional features,
such as in the latent concept model \cite{bendersky2011quality,metzler2007lce}.
Like all Markov random field models it can be extended with further
variables, for instance to incorporate external knowledge, such as
entities from an external semantic network. It can also be extended
with additional conditional dependencies, such as further term dependencies
that are expected to be helpful for the retrieval task, such as in
the hypergraph retrieval model \cite{bendersky2012hypergraphs}.
The essential idea of the sequential dependence model (SDM) is to
combine unigram, bigram, and windowed bigram models so that they mutually
compensate each other's shortcomings. The unigram model, which
is also called the bag-of-words model and which is closely related
to the vector-space model, is indifferent to word order. This is an
issue for multi-word expressions which are for instance common for
entity names such as ``Massachusetts Institute of Technology'' or
compound nouns such as ``information retrieval'' which have a different
meaning in combination than individually. This shortcoming is compensated
for in bigram models, which incorporate word order by modeling the probability
of joint occurrence of two subsequent query words $q_{i-1}q_{i}$,
or by conditioning the probability of the $i$th word in the query, $q_{i}$,
on seeing the previous word $q_{i-1}$.
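For concreteness, a minimal maximum-likelihood bigram estimator can be sketched as follows (an illustrative toy on a single token stream, not the smoothed estimators discussed later in the paper):

```python
from collections import Counter

def bigram_mle(tokens):
    # maximum-likelihood estimate of p(w_i | w_{i-1}) from one token stream
    bi = Counter(zip(tokens, tokens[1:]))
    hist = Counter(tokens[:-1])          # times each word occurs as a history
    def p(w, prev):
        return bi[(prev, w)] / hist[prev] if hist[prev] else 0.0
    return p

toks = "heavy spring rain heavy spring snow".split()
p = bigram_mle(toks)
print(p("rain", "spring"))   # 1 of 2 'spring' histories is followed by 'rain'
```

In practice such counts are collected per document and smoothed against collection statistics rather than used raw.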
One additional concern is that users tend to remove non-essential
words from the information need when formulating the query, such as
in the example query ``prevent rain basement'' to represent the
query ``how can I prevent the heavy spring rain from leaking into
my brick house's basement?''. The bigram model which only captures
consecutive words may not be able to address this situation. This
motivates the use of bigram models that allow for length-restricted
gaps. The literature describes different variants of such models under the
names skip-gram models or orthogonal sparse bigrams \cite{siefkes2004osb}.
In this work, we focus on a variant that has been used successfully
in the sequential dependence model, which models the co-occurrence
of two terms within a window of eight\footnote{The window length requires tuning in practice; we follow the choice
of eight for compliance with previous work.} terms, which we refer to as windowed bigrams.
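A simple way to compute such windowed-bigram statistics is sketched below. This is an illustrative implementation only (exact window semantics differ between systems, e.g.\ Galago's unordered-window operator), counting unordered co-occurrences of two terms within a window of eight:

```python
def uw_count(tokens, w1, w2, window=8):
    """Unordered co-occurrence count of w1 and w2 within `window` terms,
    in the spirit of the SDM windowed-bigram feature (illustrative sketch)."""
    count = 0
    for i, t in enumerate(tokens):
        if t != w1:
            continue
        lo, hi = max(0, i - window + 1), min(len(tokens), i + window)
        count += sum(1 for j in range(lo, hi) if j != i and tokens[j] == w2)
    return count

toks = "prevent the heavy spring rain from leaking into my basement".split()
print(uw_count(toks, "prevent", "rain"))      # within 8 terms -> counted
print(uw_count(toks, "prevent", "basement"))  # 9 terms apart -> not counted
```

The example mirrors the ``prevent rain basement'' query above: the gap between ``prevent'' and ``rain'' is bridged by the window, while a larger gap is not.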
The sequential dependence model combines ideas of all three models
in order to compensate respective shortcomings. The retrieval model
scores documents for a query though the theoretical framework of Markov
random field models (MRF). However, there are a set of related models
that address the same task and originate from generative models and
Jelinek-Mercer smoothing. In addition, different variants of bigram
models have been used interchangeably, i.e., based on a bag-of-bigrams
approach and an n-gram model approach which leads to different scoring
algorithms. A decade after the seminal work on the sequential dependence
model was published, we aim to reconsider some of the derivations,
approximations, and study similarities and differences arising from
several choices. Where Huston et al.\ \cite{huston2013termdependencies,huston2014termdepedencies-appendix}
emphasized a strictly empirical study, in this work we reconsider
the SDM model from a theoretical side. The contributions of this paper
are the following.
\begin{itemize}
\item Theoretical analysis of similarities and differences for MRF versus
other modelling frameworks and different bigram paradigms.
\item Empirical study on effects on the retrieval performance and weight
parameters estimated\footnote{Code and runs available: \url{https://bitbucket.org/jfoley/prob-sdm}}.
\item Discussion of approximations made in an available SDM implementation
in the open-source engine Galago.
\end{itemize}
\paragraph{Outline}
After clarifying the notation, we state in Section \ref{sec:Sequential-Dependence-Algo}
the SDM scoring algorithm with Dirichlet smoothing as implemented
in the search engine Galago V3.7. In Section \ref{sec:mrfSDM} we
recap the original derivation of this algorithm as a Markov Random
Field. A generative alternative is discussed in Section \ref{sec:genSDM}
with connections to MRF and Jelinek-Mercer models. Where that model treats
bigrams with the bag-of-bigrams approach, Section \ref{sec:genNgram}
elaborates on an alternative model that follows the n-gram model
approach instead. Section \ref{sec:Experiments} demonstrates the empirical equivalence of the different
models when proper parameter learning methods are used. Related work is discussed
in Section \ref{sec:Related-work} before we conclude.
\section{Notation}
\label{sec:Notation}We refer to a model as $\mathcal{M}$, and the
likelihood of data under the model as $\mathcal{L}_{\mathcal{M}}$,
and a probability distribution of a variable as $p(X)$. We refer
to the numerical ranking score function provided by the model $\mathcal{M}$ for given arguments
as $\mbox{score}_{\mathcal{M}}(...)$. For graphical models, this
score is rank-equivalent to the model likelihood, or equivalently to the log-likelihood.
In correspondence to conditional probabilities $\mathcal{L}(X|Y)$, we
refer to rank-equivalent expressions as conditional scores, i.e.,
$\mbox{score}(X|Y)$.
We refer to counts as $n$ with subscripts. For example, a number
of occurrences of a term $w$ in a document $d$ is denoted $n_{w,d}$.
To avoid clutter for marginal counts, i.e., when summing over all counts
for possible variable settings, we refer to marginal counts as $\star$.
For example, $n_{\star,d}$ refers to all words in the document (also
sometimes denoted as $|d|$), while $n_{w,\star}$ refers to all occurrences
of the word $w$ in any document, which is sometimes denoted as $cf(w)$.
Finally, $n_{\star,\star}=|C|$ denotes the total collection frequency.
The vocabulary over all terms is denoted $V$.
We distinguish between random variables by uppercase notation, e.g.
$Q$, $D$, and concrete configurations that the random variables
can take on, as lower case, e.g., $q$, $d$. Feature functions of
variable settings $x$ and $y$ are denoted as $\mbox{\ensuremath{\mathbf{f}}}(x,y)$.
We denote distribution parameters and weight parameters as greek letters.
Vector-valued variables are indicated through bold symbols, e.g., $\boldsymbol{\lambda}$,
while elements of the vector are indicated with a subscript, e.g.
$\lambda_{u}$.
\section{Sequential Dependence Scoring Implementation}
\label{sec:Sequential-Dependence-Algo}Given a query $\mathbf{q}=q_{1},q_{2},\dots,q_{k}$,
the sequential dependence scoring algorithm assigns a rank-score for
each document $d$. The algorithm further needs to be given as parameters
$\boldsymbol{\lambda}=\lambda_{u},\lambda_{b},\lambda_{w}$ which
are the relative weights trading-off unigram (u), bigram (b), and
windowed-bigram (w) models.
Using shorthand $\mathcal{M}_{u}$ for the unigram language model,
$\mathcal{M}_{b}$ for the bigram language model, and $\mathcal{M}_{w}$
for an unordered-window-8 language model, the SDM score for the document
$d$ is computed as,
\begin{eqnarray}
\mbox{score}_{SDM}(d|\mathbf{q},\boldsymbol{\lambda}) & = & \lambda_{u}\mbox{\ensuremath{\cdot}}\mbox{score}_{\mathcal{M}_{u}}(d|\mathbf{q})+\lambda_{b}\mbox{\ensuremath{\cdot}}\mbox{score}_{\mathcal{M}_{b}}(d|\mathbf{q}) + \lambda_{w}\mbox{\ensuremath{\cdot}}\mbox{score}_{\mathcal{M}_{w}}(d|\mathbf{q})\label{eq:sdm-scoring}
\end{eqnarray}
While the algorithm is indifferent towards the exact language models
used, the common choice is to use language models with smoothing. The
original work on SDM uses Jelinek-Mercer smoothing. Here, we first
focus on Dirichlet smoothing to elaborate on connections to generative
approaches. Dirichlet smoothing requires an additional parameter $\mu$
to control the smoothing trade-off between the document and the collection
statistics.
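The score combination of Equation \ref{eq:sdm-scoring} can be sketched in a few lines; the three component scorers below are hypothetical placeholders for the language-model scores defined in the remainder of this section, not Galago's implementation:

```python
# Sketch of the SDM score combination: a weighted sum of unigram,
# bigram and windowed-bigram component scores. The scorer functions
# are hypothetical placeholders for the models defined below.
def sdm_score(d, q, lambdas, score_u, score_b, score_w):
    lam_u, lam_b, lam_w = lambdas
    return (lam_u * score_u(d, q)
            + lam_b * score_b(d, q)
            + lam_w * score_w(d, q))
```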
\paragraph*{Unigram model}
$\mathcal{M}_{u}$ also refers to the query likelihood model, which is
represented by the inquery \cite{callan1995inquery} operator \texttt{\#combine(}$q_{1}$
$q_{2}$ $\dots$ $q_{k}$\texttt{)}. Using Dirichlet smoothing, this operator implements the following
scoring equation.
\begin{equation}
\mbox{score}_{\mathcal{M}_{u}}(d|\mathbf{q})=\sum_{q_{i}\in\mathbf{q}}\log\frac{n_{q_{i},d}+\mu\frac{n_{q_{i},\star}}{n_{\star,\star}}}{n_{\star,d}+\mu}\label{eq:unigram-model}
\end{equation}
where, $n_{\star,d}$ is the document length, and $n_{\star,\star}$
denotes the number of tokens in the corpus. To underline the origin
of sums, we use the notation for sums over all elements in a vector, e.g.
$\sum_{q_{i}\in\mathbf{q}}\dots$ for all query terms, instead of
the equivalent notation of sums over a range of indices of the vector,
e.g., $\sum_{i=1}^{k}\dots$.
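As a concrete sketch, the Dirichlet-smoothed unigram score of Equation \ref{eq:unigram-model} can be computed from count statistics as follows; the count dictionaries are hypothetical inputs, assumed to be extracted from the index:

```python
import math

# Sketch of the Dirichlet-smoothed unigram score: n_wd maps terms to
# counts in document d, n_w maps terms to collection counts, n_d is
# the document length and n_total the collection size.
def score_unigram(q_terms, n_wd, n_d, n_w, n_total, mu=2000.0):
    score = 0.0
    for qi in q_terms:
        p_corpus = n_w.get(qi, 0) / n_total      # n_{q_i,*} / n_{*,*}
        score += math.log((n_wd.get(qi, 0) + mu * p_corpus) / (n_d + mu))
    return score
```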
\paragraph*{Bigram model}
For $\mathcal{M}_{b}$, a common choice is an ordered bigram model
with Dirichlet smoothing, which is represented by the inquery operator
chain \texttt{\#combine(\#ordered:1($q_{1}$ $q_{2}$) \#ordered:1( $q_{2}$
$q_{3}$) $\ldots$ \#ordered:1($q_{k-1}$ $q_{k}$))}. With Dirichlet
smoothing, this operator-chain implements the scoring function,
\[
\mbox{score}_{\mathcal{M}_{b}}(d|\mathbf{q})=\sum_{\left(q_{i},q_{i+1}\right)\in\mathbf{q}}\log\frac{n_{\left(q_{i},q_{i+1}\right),d}+\mu\frac{n_{\left(q_{i},q_{i+1}\right),\star}}{n_{\left(\star,\star\right),\star}}}{n_{\left(\star,\star\right),d}+\mu}
\]
where, $n_{\left(q_{i},q_{i+1}\right),d}$ denotes the number of bigrams $q_{i} \circ q_{i+1}$
occurring in the document. The number of bigrams in the document,
$n_{\left(\star,\star\right),d}=|d|-1$, equals the document length
minus one.
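The bag-of-bigrams counts used here can be sketched as follows (tokenization is assumed to be given); note that the total count equals the document length minus one, as stated above:

```python
from collections import Counter

# Sketch: bag-of-bigrams counts for a tokenized document. The total
# number of bigrams, n_{(*,*),d}, equals the document length minus one.
def bigram_counts(tokens):
    return Counter(zip(tokens, tokens[1:]))
```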
\paragraph*{Windowed-Bigram model}
For the windowed\nobreakdash-bigram model $\mathcal{M}_{w}$, a common choice
is to use a window of eight terms and to ignore word order. Note
that word order is only relaxed on the document side, but not on the
query side, therefore only consecutive query terms $q_{i}$ and $q_{i+1}$
are considered. This is represented by the inquery operator chain
\texttt{\#combine(\#unordered:8($q_{1}$ $q_{2}$) \#unordered:8($q_{2}$
$q_{3}$) \textcompwordmark{} $\ldots$ \#unordered:8($q_{k-1}$
$q_{k}$))}. With Dirichlet smoothing of empirical distributions over
windowed bigrams, this operator-chain implements the scoring function,
\[
\mbox{score}_{\mathcal{M}_{w}}(d|\mathbf{q})=\sum_{\left(q_{i},q_{i+1}\right)\in\mathbf{q}}\log\frac{n_{\left\{ q_{i},q_{i+1}\right\} _{8},d}+\mu\frac{n_{\left\{ q_{i},q_{i+1}\right\} _{8},\star}}{n_{\left\{ \star,\star\right\} _{8},\star}}}{n_{\left\{ \star,\star\right\} _{8},d}+\mu}
\]
where $n_{\left\{ q_{i},q_{i+1}\right\} _{8},d}$ refers to the number
of times the query terms $q_{i}$ and $q_{i+1}$ occur within eight
terms of each other.
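The count $n_{\left\{ q_{i},q_{i+1}\right\} _{8},d}$ can be sketched with the following illustrative counting routine (not Galago's implementation; it assumes the two terms differ):

```python
# Sketch: count unordered cooccurrences of terms u and v within a
# window of `w` terms, i.e., at most w-1 positions apart.
# Assumes u != v, so each unordered pair is counted exactly once.
def windowed_count(tokens, u, v, w=8):
    count = 0
    for i, t in enumerate(tokens):
        if t == u:
            left = tokens[max(0, i - w + 1):i]
            right = tokens[i + 1:i + w]
            count += left.count(v) + right.count(v)
    return count
```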
\paragraph*{Implementation-specific approximations}
The implementation within Galago makes several approximations on collection counts for bigrams, such as $n_{\left\{ \star,\star\right\} _{8},d} \approx n_{\left(\star,\star\right),d} \approx n_{\star,d}=|d|$. This approximation is reasonable in some cases, as we discuss in the appendix.
\section{Markov Random Field Sequential Dependence Model}
\label{sec:mrfSDM}In this Section we recap the derivation of the
SDM scoring algorithm.
\begin{figure*}
\subfloat[MRF model.\label{fig:MRF-model}]{
\begin{tikzpicture}[x=0.7cm,y=0.5cm]
\node[latent] (D) {$D$} ; %
\node[latent, below=2 of D] (Q2) {$Q_2$} ; %
\node[latent, left=2 of Q2] (Q1) {$Q_1$} ; %
\node[latent, right=2 of Q2] (Q3) {$Q_3$} ; %
\factoredge {D} {Q1} {} ; %
\factoredge {D} {Q2} {} ; %
\factoredge {D} {Q3} {} ; %
\factoredge {Q1} {Q2} {} ; %
\factoredge {Q2} {Q3} {} ; %
\end{tikzpicture}
} \hfill{}\subfloat[Factor graph.\label{fig:Factor-graph}]{\begin{tikzpicture}[x=0.7cm,y=0.5cm]
\node[latent] (D) {$D$} ; %
\node[latent, below=2 of D] (Q2) {$Q_2$} ; %
\node[latent, left=2 of Q2] (Q1) {$Q_1$} ; %
\node[latent, right=2 of Q2] (Q3) {$Q_3$} ; %
\factor[above left=1 of Q2] {q1q2d} {} {} {} ; %
\factor[above right=1 of Q2] {q2q3d} {} {} {} ; %
\factor[above=1 of Q1] {q1d} {} {} {} ; %
\factor[above=1 of Q2] {q2d} {} {} {} ; %
\factor[above=1 of Q3] {q3d} {} {} {} ; %
\draw (D) -- (q1q2d) ;
\draw (Q1) -- (q1q2d) ;
\draw (Q2) -- (q1q2d) ;
\draw (D) -- (q2q3d) ;
\draw (Q2) -- (q2q3d) ;
\draw (Q3) -- (q2q3d) ;
\draw (D) -- (q1d) -- (Q1) ;
\draw (D) -- (q2d) -- (Q2) ;
\draw (D) -- (q3d) -- (Q3) ; %
\end{tikzpicture}
}
\centering{ \subfloat[Generative model.\label{fig:Generative-model}]{
\begin{tikzpicture}[x=0.7cm,y=0.5cm]
\node[latent] (phiu) {$\phi^u_d$} ; %
\node[latent, below=0.15 of phiu] (phib) {$\phi^b_d$} ; %
\node[latent, below=0.15 of phib] (phiw) {$\phi^w_d$} ; %
\node[obs, below right=0.2 and 0.5 of phiw] (Q1) {$Q_1$} ; %
\node[obs, right=2 of Q1] (Q2) {$Q_2$} ; %
\node[obs, right=2.5 of Q2] (Q3) {$Q_3$} ; %
\node[latent, above=3 of Q2] (Z2) {$Z$} ; %
\node[latent, left=1.2 of Z2] (lambda) {$\lambda$} ; %
\node[const, above left=-0.3 and 1 of Q2] (marker) {} ; %
\factor[above=1.8 of marker] {q2u} {} {} {} ; %
\factor[above=1.1 of marker] {q2b} {} {} {} ; %
\factor[above=0.4 of marker] {q2w} {} {} {} ; %
\factoredge {phiu} {q2u} {Q2} ; %
\factoredge {phib} {q2b} {Q2} ; %
\factoredge {phiw} {q2w} {Q2} ; %
\draw (Q1) -- (q2b) ; %
\draw (Q1) -- (q2w) ; %
\node[const, above left=-0.3 and 1 of Q3] (marker2) {} ; %
\factor[above=1.8 of marker2] {q3u} {} {} {} ; %
\factor[above=1.1 of marker2] {q3b} {} {} {} ; %
\factor[above=0.4 of marker2] {q3w} {} {} {} ; %
\factor[right=0.2 of lambda] {lambda2} {} {} {} ;%
\node[const, right=4.5 of phiu] (phiu2) {}; %
\node[const, right=4.5 of phib] (phib2) {}; %
\node[const, right=4.5 of phiw] (phiw2) {}; %
\factoredge {phiu2} {q3u} {Q3} ; %
\factoredge {phib2} {q3b} {Q3} ; %
\factoredge {phiw2} {q3w} {Q3} ; %
\draw (Q2) -- (q3b) ; %
\draw (Q2) -- (q3w) ; %
\factoredge {lambda} {lambda2} {Z2} ; %
\gate {z2gate1} {(q2u) (q3u)} {Z2} ; %
\gate {z2gate2} {(q2b) (q3b)} {} ; %
\gate {z2gate3} {(q2w) (q3w)} {} ; %
\end{tikzpicture}}
}
\caption{Sequential Dependence Model.}
\end{figure*}
Metzler et al.\ derive the scoring algorithm of Section \ref{sec:Sequential-Dependence-Algo}
through a Markov Random Field model for term dependencies, which we recap in this section.
Markov random fields, which are also called undirected graphical models,
provide a probabilistic framework for inference of random variables
and parameter learning. A graphical model is defined to be a Markov
random field if the distribution of a random variable only depends
on the knowledge of the outcome of neighboring variables. We limit
the introduction of MRFs to concepts that are required to follow the
derivation of the Sequential Dependence Model, for a complete introduction
we refer the reader to Chapter 19.3 of the text book of Murphy \cite{murphy2012book}.
To model a query $\mathbf{q}=q_{1}q_{2}\ldots q_{k}$ and a document
$d$, Metzler et al.\ introduce a random variable $Q_{i}$ for each
query term $q_{i}$ as well as the random variable $D$ to denote
a document $d$ from the corpus which is to be scored. For example, $Q_{1}=\mbox{'information'}$, $Q_{2}=\mbox{'retrieval'}$. The sequential
dependence model captures statistical dependence between random variables of consecutive
query terms $Q_{i}$ and $Q_{i+1}$ and the document $D$, cf.\ Figure
\ref{fig:MRF-model}.
However, non-consecutive query terms $Q_{i}$ and $Q_{j}$ (called
non-neighbors) are intended to be conditionally independent, given
the terms in between. By rules of the MRF framework, unconnected random
variables are conditionally independent given values of remaining
random variables. Therefore, the absence of connections between non-neighbors
$Q_{i}$ and $Q_{j}$ in the Graphical model (Figure \ref{fig:MRF-model})
declares this independence.
The framework of Markov Random Fields allows reasoning about observed
variables and latent variables. As a special case of MRFs, all variables of the
sequential dependence model are observed. This means that we know
the configuration of all variables during inference, relieving us from
treating unknowns. The purpose of MRFs for the sequential dependence
scoring algorithm is to use the model likelihood $\mathcal{L}$ as
a ranking score function for a document $d$ given the query terms
$\mathbf{q}$.
\subsection{SDM Model Likelihood}
The likelihood $\mathcal{L}$ of the sequential dependence model for
a given configuration of the random variables $Q_{i}=q_{i}$ and $D=d$
provides the retrieval score for the document $d$ given the query
$\mathbf{q}$.
According to the Hammersley-Clifford theorem \cite{murphy2012book},
the likelihood $\mathcal{L}$ (or joint distribution) of a Markov
Random Field can be fully expressed over a product over maximal cliques
in the model, where each clique of random variables is associated
with a nonnegative potential function $\psi$. For instance in the
sequential dependence model, a potential function $\psi$ for the
random variables $Q_{1},Q_{2},$ and $D$, produces a nonnegative
real-valued number for every configuration of the random variables
such as $Q_{1}=\mbox{'information'}$, $Q_{2}=\mbox{'retrieval'}$,
and $D$ referring to a document in the collection.
The Hammersley-Clifford theorem states that it is possible to express
the likelihood of every MRF through a product over maximal cliques
(not requiring further factors over unconnected variables). However,
the theorem does not provide a constructive recipe to do so. Instead,
it is part of devising the model to choose a factorization of the
likelihood into arbitrary cliques of random variables. Where the MRF
notation only informs on conditional \uline{\emph{in}}dependence, the equivalent
graphical notation of factor graphs additionally specifies the factorization
chosen for the model, cf.\ Figure \ref{fig:Factor-graph}.
In the factor graph formalization, any set of variables that form
a factor in the likelihood are connected to a small box. A consistent
factor graph of the sequential dependence model is given in Figure
\ref{fig:Factor-graph}. The equivalent model likelihood for the sequential
dependence model follows as,
\[
\mathcal{L}(\mathbf{Q},D)=\frac{1}{Z(\boldsymbol{\lambda})}\prod_{q_{i}\in\mathbf{q}}\psi(Q_{i},D|\boldsymbol{\lambda})\cdot\prod_{Q_{i},Q_{i+1}\in\mathbf{Q}}\psi(Q_{i},Q_{i+1},D|\boldsymbol{\lambda})
\]
\noindent Where $Z(\boldsymbol{\lambda})$, the partition
function, is a constant that ensures normalization
of the joint distribution over all possible configurations of $Q_{i}\in V$
and all documents $d$. This means that $\mathcal{L}$, summed over
all possible combinations of query terms in the vocabulary $V$ and
all documents in the corpus, yields 1.
However, as the sequential dependence model is only used to rank documents
for a given query $\mathbf{q}$ by the model likelihood $\mathcal{L}$,
the constant $Z(\boldsymbol{\lambda})$ can be ignored to provide a rank
equivalent scoring criterion $\mbox{score}_{SDM}$.
\subsection{Ranking Scoring Criterion}
With the goal of ranking elements by the SDM likelihood function,
we can alternatively use any other rank-equivalent criterion. For instance,
we equivalently use the log-likelihood $\log\mathcal{L}$ for scoring,
leaving us with the following scoring criterion.
\begin{align}
\mbox{score}_{SDM}(d|\mathbf{q})& \; \stackrel{rank}{=} \; \log\mathcal{L}(\mathbf{q},d)\label{eq:sdm-factorized}\\
& \; \stackrel{rank}{=} \; \sum_{q_{i}\in\mathbf{q}}\log\psi(Q_{i},D|\boldsymbol{\lambda})+\sum_{q_{i},q_{i+1}\in\mathbf{q}}\log\psi(Q_{i},Q_{i+1},D|\boldsymbol{\lambda})\nonumber
\end{align}
\paragraph{Potential functions}
The MRF framework provides us with the freedom to choose the functional
form of potential functions $\psi$. The only hard restriction implied
by MRFs is that potential functions ought to be nonnegative. When
considering potential functions in log-space, this means that the
quantity $\log\psi$ can take on any real value while being defined
on all inputs.
The sequential dependence model follows a common choice by using a
so-called log-linear model as the functional form of the potentials
$\log\psi$. The log-linear model is defined as an inner product of
a feature vector $\mbox{\ensuremath{\mathbf{f}}}(\dots)$ and a parameter
vector $\boldsymbol{\lambda}$ in log-space. The entries of the feature
vector are induced by configurations of random variables in the clique
which should represent a measure of compatibility between different
variable configurations.
For instance in the sequential dependence model, the clique of random
variables $Q_{1}$, $Q_{2},$ and $D$ is represented as a feature
vector of a particular configuration $Q_{1}=q_{1}$, $Q_{2}=q_{2}$,
and $D=d$ which is denoted as $\mbox{\ensuremath{\mathbf{f}}}(q_{1},q_{2},d)$.
The log-potential function is defined as the inner product between
the feature vector and a parameter vector $\boldsymbol{\lambda}$
as
\begin{eqnarray*}
\log\psi(Q_{1},Q_{2},D|\boldsymbol{\lambda}) & = & \sum_{j=1}^{m}\mbox{f}_{j}(q_{1},q_{2},d)\cdot\lambda_{j}
\end{eqnarray*}
where $m$ denotes the length of the feature vector
or the parameter vector respectively. Each entry of the feature vector,
$\mbox{f}_{j}$ should express compatibility of the given variable
configurations, to which the corresponding entry in the parameter
vector $\lambda_{j}$ assigns relative weight. Since we operate in
log-space, both positive and negative weights are acceptable.
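In code, the log-potential is simply an inner product; a minimal sketch on toy feature and weight vectors, with the resulting potential $\psi$ obtained by exponentiation (and hence always positive, as MRFs require):

```python
import math

# Sketch of the log-linear potential: log psi is the inner product
# of a feature vector and the parameter vector.
def log_potential(features, lambdas):
    return sum(f * l for f, l in zip(features, lambdas))

# psi itself is exp of the inner product, hence always positive.
def potential(features, lambdas):
    return math.exp(log_potential(features, lambdas))
```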
\paragraph{Factors and features}
The sequential dependence model makes use of two factor types, one
for the two-cliques of single query terms and the document, and
another for the three-cliques of consecutive query terms and the document.
Both factor types are repeated across all query terms. Each factor
type goes along with its own feature vector functions and corresponding
parameter vector. While not necessarily the case, in this model, the
same parameter vector is shared between all factors of the same factor
type (so-called parameter-tying).
The sequential dependence model associates each two-clique $\log\psi(Q_{i},D|\boldsymbol{\lambda});\forall i$
with a feature vector of length one, consisting only of the unigram
score of $q_{i}$ in the document $d$, denoted by Equation \ref{fu}.
The three-clique $\log\psi(Q_{i-1},Q_{i},D|\boldsymbol{\lambda});$
$\forall i\geq2$ is associated with a feature vector of length two,
consisting of the bigram score of $q_{i-1}$ and $q_{i}$ in the document,
denoted Equation \ref{fb}, as well as the windowed-bigram score Equation
\ref{fw}.
\begin{eqnarray}
\mbox{f}_{u}(q_{i},d) & = & \mbox{score}_{\mathcal{M}_{u}}(d|q_{i})\label{fu}\\
\mbox{f}_{b}(q_{i-1},q_{i},d) & = & \mbox{score}_{\mathcal{M}_{b}}(d|q_{i-1},q_{i})\label{fb}\\
\mbox{f}_{w}(q_{i-1},q_{i},d) & = & \mbox{score}_{\mathcal{M}_{w}}(d|q_{i-1},q_{i})\label{fw}
\end{eqnarray}
In total, the model uses three features and therefore needs a total
of three parameter weights referred to as $\lambda_{u}$, $\lambda_{b}$,
and $\lambda_{w}$.
\subsection{Proof of the SDM Scoring Algorithm}
\begin{theorem} The SDM scoring algorithm as given in Equation \ref{eq:sdm-scoring}
implements the Markov random field as given in the factor graph of
Figure \ref{fig:Factor-graph}, with features defined as in Equations
\ref{fu}–\ref{fw}, and given parameters $\lambda_{u}$, $\lambda_{b}$,
and $\lambda_{w}$.
\end{theorem}
\begin{proof}
Starting with Equation \ref{eq:sdm-factorized} and using the choices
for factors and feature of Equations \ref{fu}–\ref{fw} yields
\[
\mbox{score}_{SDM}(d|\mathbf{q})\; \stackrel{rank}{=} \; \sum_{q_{i}\in\mathbf{q}}\mbox{f}_{u}(q_{i},d)\cdot\lambda_{u}+
\]
\[
\sum_{q_{i-1},q_{i}\in\mathbf{q}}\left(\mbox{f}_{b}(q_{i-1},q_{i},d)\cdot\lambda_{b}+\mbox{f}_{w}(q_{i-1},q_{i},d)\cdot\lambda_{w}\right)
\]
Reordering terms of the sums, and making use of the independence of
$\lambda$ from the particular query terms, yields
\[
\mbox{score}_{SDM}(d|\mathbf{q})\; \stackrel{rank}{=} \; \lambda_{u}\sum_{q_{i}\in\mathbf{q}}\mbox{f}_{u}(q_{i},d)+
\]
\[
\lambda_{b}\sum_{q_{i-1},q_{i}\in\mathbf{q}}\mbox{f}_{b}(q_{i-1},q_{i},d)+\lambda_{w}\sum_{q_{i-1},q_{i}\in\mathbf{q}}\mbox{f}_{w}(q_{i-1},q_{i},d)
\]
\begin{equation}
=\lambda_{u}\mbox{score}_{\mathcal{M}_{u}}(d|\mathbf{q})+\lambda_{b}\mbox{score}_{\mathcal{M}_{b}}(d|\mathbf{q})+\lambda_{w}\mbox{score}_{\mathcal{M}_{w}}(d|\mathbf{q})\label{eq:sdm-final}
\end{equation}
This is the SDM scoring equation given in Equation \ref{eq:sdm-scoring}.
\end{proof}
\subsection{Parameter Learning}
There are two common approaches to optimize settings of parameters
$\lambda$ for given relevance data: grid tuning or learning-to-rank.
Due to its low-dimensional parameter space, all combinations of choices
for $\lambda_{u}$, $\lambda_{b}$, and $\lambda_{w}$ in the interval
$(0,1)$ can be evaluated. For example, a choice of 10 values per dimension leads
to 1000 combinations to evaluate. For rank equivalence, without loss
of generality it is sufficient to only consider nonnegative combinations where
$\lambda_{u}+\lambda_{b}+\lambda_{w}=1$, which reduces the number
of combinations to 100.
An alternative is to use a learning-to-rank algorithm such as
coordinate ascent to directly optimize for a retrieval metric, e.g.
mean average precision (MAP). Coordinate ascent starts with an initial
setting, then continues to update one of the three dimensions in turn
to its best performing setting until convergence is reached.
Since Equation \ref{eq:sdm-final} represents a log-linear model on
the three language models, any learning-to-rank algorithm including
Ranking SVM \cite{joachims2002ranksvm} can be used. However, in order
to prevent a mismatch between training phase and prediction phase
it is important to either use the whole collection to collect negative
training examples or to use the same candidate selection strategy
(e.g., top 1000 documents under the unigram model) in both phases.
In this work, we use the RankLib\footnote{http://lemurproject.org/ranklib.php}
package in addition to grid tuning.
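The simplex grid described above can be enumerated directly; one possible sketch (with step $1/10$; only nonnegative weight triples summing to one are kept, so the exact number of combinations depends on the discretization):

```python
# Sketch: enumerate all (lambda_u, lambda_b, lambda_w) on a grid
# over the simplex, i.e., nonnegative weights summing to one.
def simplex_grid(steps=10):
    grid = []
    for i in range(steps + 1):
        for j in range(steps + 1 - i):
            grid.append((i / steps, j / steps, (steps - i - j) / steps))
    return grid
```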
\section{Generative SDM Model}
\label{sec:genSDM}In this section we derive a generative model which
makes use of the same underlying unigram, bigram and windowed bigram
language models. Generative models are also called directed graphical
models or Bayesian networks. Generative models are often found to
be unintuitive, because the model describes a process that generates
data given variables we want to infer. In order to perform learning,
the inference algorithm 'inverts' the conditional relationships of
the process to reason about which input would most likely lead to the
observed data.
\subsection{Generative Process: genSDM\label{subsec:Generative-process}}
We devise a generative model where the query and the document are
generated from distributions over unigrams $\phi_{d}^{u}$, over bigrams
$\phi_{d}^{b}$ and windowed bigrams $\phi_{d}^{w}$. These three
distributions are weighted according to a multinomial parameter $(\lambda_{u},\lambda_{b},\lambda_{w})$
of nonnegative entries that is normalized to sum to one.
The generative process is visualized in directed factor graph notation
\cite{dietz2010notation} in Figure \ref{fig:Generative-model}. For
a given document $d$ with according distributions, the query $\mathbf{q}=q_{1}q_{2}\dots q_{k}$
is assumed to be generated with the following steps:
\begin{itemize}[leftmargin=3mm]
\item Draw a multinomial distribution $\lambda$ over the set '$u$','$b$','$w$'.
\item Assume distributions to represent the document $d$ are given to model
unigrams $\phi_{d}^{u}$, bigrams $\phi_{d}^{b}$ and windowed bigrams
$\phi_{d}^{w}$.
\item Draw an indicator variable $Z\sim Mult(\boldsymbol{\lambda})$ to
indicate which distribution should be used.
\item If $Z=\mbox{'}u\mbox{'}$ then
\begin{itemize}[leftmargin=3mm]
\item For all positions $1\leq i\leq k$ of observed query terms $q_{i}$
do: \\
Draw unigram $Q_{i}\sim Mult(\phi_{d}^{u})$.
\end{itemize}
\item If $Z=\mbox{'}b\mbox{'}$ then
\begin{itemize}[leftmargin=3mm]
\item For all positions $2\leq i\leq k$ of observed query bigrams $q_{i-1},q_{i}$
do: \\
Draw bigram $(Q_{i-1},Q_{i})\sim Mult(\phi_{d}^{b})$.
\end{itemize}
\item If $Z=\mbox{'}w\mbox{'}$ then
\begin{itemize}[leftmargin=3mm]
\item For all positions $2\leq i\leq k$ of observed query terms $q_{i-1},q_{i}$
do:\\
Draw cooccurrence $\{Q_{i-1},Q_{i}\}\sim Mult(\phi_{d}^{w})$.
\end{itemize}
\end{itemize}
When scoring documents, we assume that parameters $\lambda_{u}$,
$\lambda_{b}$, and $\lambda_{w}$ are given and that the random variables
$Q_{i}$ are bound to the given query terms $q_{i}$. Furthermore,
the document representations $\phi_{d}^{u}$, $\phi_{d}^{b}$, $\phi_{d}^{w}$
are assumed to be fixed – we detail how they are estimated below.
The only remaining random variable is the draw of the
indicator $Z$. The probability of $Z$ given all other variables
can be estimated in closed form, e.g., $p(Z=\mbox{'}u\mbox{'}|\mathbf{q},\mbox{\ensuremath{\lambda}}\dots)\propto\lambda_{u}\prod_{i=1}^{k}\phi_{d}^{u}(q_{i})$
and analogously for 'b' and 'w', with a normalizer that equals the
sum over all three values.
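The closed-form posterior over $Z$ can be sketched as follows; the per-model log-likelihoods are assumed to be given:

```python
import math

# Sketch: posterior p(Z=z | q, lambda, ...), proportional to
# lambda_z * exp(log-likelihood under model z), normalized over
# z in {'u', 'b', 'w'}.
def posterior_z(lambdas, log_liks):
    unnorm = {z: lambdas[z] * math.exp(log_liks[z]) for z in lambdas}
    total = sum(unnorm.values())
    return {z: v / total for z, v in unnorm.items()}
```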
Marginalizing (i.e., summing) over the uncertainty in assignments
of $Z$ yields the following likelihood for all query terms
$\mathbf{q}$ under the generative model.
\begin{equation}
\mathcal{L}(\mathbf{q}|\lambda,\phi_{d}^{u},\phi_{d}^{b},\phi_{d}^{w})=\lambda_{u}\prod_{i=1}^{k}\phi_{d}^{u}(q_{i})+\lambda_{b}\prod_{i=2}^{k}\phi_{d}^{b}(\left(q_{i-1},q_{i}\right))+\lambda_{w}\prod_{i=2}^{k}\phi_{d}^{w}(\left\{ q_{i-1},q_{i}\right\} ) \label{eq:genmodel-likelihood}
\end{equation}
\subsection{Document Representation\label{subsec:Document-Representation-feature}}
In order for the generative process to be complete, we need to define
the generation for unigram, bigram and windowed bigram representations
of a document $d$. There are two common paradigms for bigram models:
the first goes back to n-gram models, generating word $w_{i}$
conditioned on the previous word $w_{i-1}$, while the other paradigm
is to perceive a document as a bag of bigrams which are drawn independently.
As the features of the sequential dependence model implement the latter
option, we focus on the bag-of-bigram approach here, and discuss the
n-gram approach in Section \ref{sec:genNgram}.
Each document $d$ in the corpus with words $w_{1},w_{2},\dots w_{n}$
is represented through three different forms. Each representation
is being used to model one of the multinomial distributions $\phi_{d}^{u}$,
$\phi_{d}^{b}$, $\phi_{d}^{w}$.
\paragraph{Bag of unigrams}
The unigram representation of $d$ follows the intuition of the document
as a bag-of-words $w_{i}$ which are generated independently through
draws from a multinomial distribution with parameter $\phi_{d}^{u}$.
In the model, we further let the distribution $\phi_{d}^{u}$ be governed
by a Dirichlet prior distribution. In correspondence to the SDM model,
we choose the Dirichlet parameter that is proportional to the empirical
distribution in the corpus, i.e., $p(w)=\frac{n_{w,\star}}{n_{\star,\star}}$
with the scale parameter $\mu$. We denote this Dirichlet parameter
as $\tilde{\mu}^{u}=\left\{ \mu\cdot\frac{n_{w,\star}}{n_{\star,\star}}\right\} _{w\in V}$
which is a vector with entries for all words $w$ in the vocabulary
$V$.
The generative process for the unigram representation is:
\begin{enumerate}
\item Draw categorical parameter $\phi_{d}^{u}\sim Dir(\tilde{\mu}^{u})$.
\item For each word $w_{i}\in d$ do: \\
Draw $w_{i}\sim Mult(\phi_{d}^{u})$.
\end{enumerate}
Given a sequence of words in the document $d=w_{1}w_{2}\dots w_{n}$,
the parameter vector $\phi_{d}^{u}$ is estimated in closed form as
follows.
\[
\phi_{d}^{u}=\left\{ \frac{n_{w,d}+\mu\frac{n_{w,\star}}{n_{\star,\star}}}{n_{\star,d}+\mu}\right\} _{w\in V}
\]
The log likelihood of a given set of query terms $\mathbf{q}=q_{1}q_{2}\dots q_{k}$
under this model is given by
\[
\log\mathcal{L}_{u}(\mathbf{q}|\phi_{d}^{u})=\sum_{q_{i}\in\mathbf{q}}\log\frac{n_{q_{i},d}+\mu\frac{n_{q_{i},\star}}{n_{\star,\star}}}{n_{\star,d}+\mu}
\]
Notice, that $\log\mathcal{L}_{u}(\mathbf{q}|\phi_{d}^{u})$ is identical
to $\mbox{score}_{\mathcal{M}_{u}}(d|\mathbf{q})$ of Equation \ref{eq:unigram-model}.
\paragraph{Bag of ordered bigrams}
One way of incorporating bigram dependencies in a model is through
a bag-of-bigrams representation. For a document $d$ with words $w_{1},w_{2},\dots w_{n}$
for every $i,\,2\leq i\leq n$ a bigram $(w_{i-1},w_{i})$ is placed
in the bag. The derivation follows analogously to the unigram case.
The multinomial distribution $\phi_{d}^{b}$ is drawn from a Dirichlet
prior distribution, parameterized by parameter $\tilde{\mu}^{b}$.
The Dirichlet parameter is derived from bigram-statistics from the
corpus, scaled by the smoothing parameter $\mu$.
The generative process for bigrams is as follows:
\begin{enumerate}
\item Draw categorical parameter $\phi_{d}^{b}\sim Dir(\tilde{\mu}^{b})$
\item For each pair of consecutive words $(w_{i-1},w_{i})\in d$: draw $(w_{i},w_{i+1})\sim Mult(\phi_{d}^{b})$
\end{enumerate}
Given an observed sequence of bigrams in the document $d=(w_{1},w_{2})(w_{2},w_{3})\dots(w_{n-1}w_{n})$,
the parameter vector $\phi_{d}^{b}$ can be estimated in closed form
as follows.
\[
\phi_{d}^{b}=\left\{ \frac{n_{(w,u),d}+\mu\frac{n_{(w,u),\star}}{n_{(\star,\star),\star}}}{n_{(\star,\star),d}+\mu}\right\} _{(w,u)\in V\times V}
\]
The log likelihood of a given set of query terms $\mathbf{q}$ with\\
$\mathbf{q}=(q_{1}q_{2}),(q_{2}q_{3})\dots(q_{k-1}q_{k})$
under this model is given by
\[
\log\mathcal{L}_{b}(\mathbf{q}|\phi_{d}^{b})=\sum_{(q_{i-1},q_{i})\in\mathbf{q}}\log\frac{n_{(q_{i-1},q_{i}),d}+\mu\frac{n_{(q_{i-1},q_{i}),\star}}{n_{(\star,\star),\star}}}{n_{(\star,\star),d}+\mu}
\]
Also, $\log\mathcal{L}_{b}(\mathbf{q}|\phi_{d}^{b})$ is
identical to $\mbox{score}_{\mathcal{M}_{b}}(d|\mathbf{q})$ above.
\paragraph{Bag of unordered windowed bigrams}
The windowed-bigram model of document $d$ works with a representation
of eight consecutive words $(w_{i-7}\dots w_{i})$, with a derivation
analogous to the bigram case. However, in order to determine the
probability for two words $u$ and $v$ to occur within an unordered
window of 8 terms, we integrate over all positions and both directions.
The estimation of the windowed bigram parameter follows as
\[
\phi_{d}^{w}=\left\{ \frac{n_{\{u,v\}_{8},d}+\mu\frac{n_{\{u,v\}_{8},\star}}{n_{\{\star,\star\}_{8},\star}}}{n_{\{\star,\star\}_{8},d}+\mu}\right\} _{u\in V,v\in V}
\]
where $n_{\{u,v\}_{8},d}$ refers to the number of cooccurrences of
terms $u$ and $v$ within a window of eight terms. With parameters
$\phi_{d,v}^{w}$ estimated this way, the log-likelihood for query
terms $\mathbf{q}$ is given as
\begin{eqnarray*}
\log\mathcal{L}_{w}(\mathbf{q}|\phi_{d,\star}^{w}) & = & \sum_{{q_{i}\in\mathbf{q}\atop i>1}}\log\frac{n_{\left\{ q_{i-1},q_{i}\right\} _{8},d}+\mu\frac{n_{\left\{ q_{i-1},q_{i}\right\} _{8},\star}}{n_{\left\{ \star,\star\right\} _{8},\star}}}{n_{\left\{ \star,\star\right\} _{8},d}+\mu}
\end{eqnarray*}
The windowed bigram model $\mathcal{M}_{w}$ introduced above produces
the same score, $\mbox{score}_{\mathcal{M}_{w}}(d|\mathbf{q})$,
as $\log\mathcal{L}_{w}(\mathbf{q}|\phi_{d}^{w})$.
\subsection{Generative Scoring Algorithm}
Inserting the expressions of the unigram, bigram and windowed bigram
language model into the likelihood of the generative model (Equation
\ref{eq:genmodel-likelihood}), yields
\begin{eqnarray}
\mathcal{L}_{\mbox{Gen}}(\mathbf{q},d) & \propto & \lambda_{u}\exp\,\mbox{score}_{\mathcal{M}_{u}}(d|\mathbf{q})+\lambda_{b}\exp\,\mbox{score}_{\mathcal{M}_{b}}(d|\mathbf{q})\nonumber \\
 &  & +\,\lambda_{w}\exp\,\mbox{score}_{\mathcal{M}_{w}}(d|\mathbf{q})\label{eq:likelihood-genSDM}
\end{eqnarray}
Here, expressions such as $\prod_{i=1}^{k}\phi_{d}^{u}(q_{i})$
are written as $\exp\,\mbox{score}_{\mathcal{M}_{u}}(d|\mathbf{q})$,
as introduced in Section \ref{sec:Sequential-Dependence-Algo}.
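The combination in Equation \ref{eq:likelihood-genSDM} is a simple convex mixture of per-model probabilities; a minimal sketch (toy scores; names are ours, not from the paper):

```python
import math

def gen_likelihood(score_u, score_b, score_w, lam):
    """Generative likelihood: a lambda-weighted mixture of the per-model
    probabilities exp(score), as in the equation above."""
    lam_u, lam_b, lam_w = lam
    return (lam_u * math.exp(score_u)
            + lam_b * math.exp(score_b)
            + lam_w * math.exp(score_w))

# Toy log-scores whose exponentials are 0.5, 0.25, and 0.1.
L = gen_likelihood(math.log(0.5), math.log(0.25), math.log(0.1),
                   lam=(0.6, 0.3, 0.1))
```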
\subsection{Connection to MRF-SDM model}
We want to point out the similarity of the likelihood of the generative
SDM model (Equation \ref{eq:likelihood-genSDM}) and the log-likelihood
of the SDM Markov random field from Equation \ref{eq:sdm-final},
which (as a reminder) is proportional to
\begin{eqnarray}
\log\mathcal{L}_{\mbox{MRF}}(\mathbf{q},d) & \propto & \lambda_{u}\mbox{score}_{\mathcal{M}_{u}}(d|\mathbf{q})+\lambda_{b}\mbox{score}_{\mathcal{M}_{b}}(d|\mathbf{q})+\lambda_{w}\mbox{score}_{\mathcal{M}_{w}}(d|\mathbf{q})\label{eq:recap-log-mrf-sdm}
\end{eqnarray}
The difference between both likelihood expressions is that for the
MRF, the criterion is optimized in log-space (i.e., $\underline{\log}\mathcal{L}_{\mbox{MRF}}(\mathbf{q},d)$),
whereas for the generative model, the criterion is optimized in the
space of probabilities (i.e., $\mathcal{L}_{\mbox{Gen}}(\mathbf{q},d)$).
Therefore the MRF optimizes a linear combination of log-features
such as $\mbox{score}_{\mathcal{M}_{u}}(d|\mathbf{q})$, whereas
the generative model optimizes a linear combination of probabilities
such as $\underline{\exp}\,\mbox{score}_{\mathcal{M}_{u}}(d|\mathbf{q})$.
Looking at Equation \ref{eq:recap-log-mrf-sdm} in the probability
space, it becomes clear that the weight parameter $\lambda$ acts
on the language models through the exponent (and not as a mixing factor):
\begin{eqnarray*}
\mathcal{L}_{\mbox{MRF}}(\mathbf{q},d) & \propto & \left(\exp\,\mbox{score}_{\mathcal{M}_{u}}(d|\mathbf{q})\right)^{\lambda_{u}} \cdot\left(\exp\,\mbox{score}_{\mathcal{M}_{b}}(d|\mathbf{q})\right)^{\lambda_{b}}\cdot\left(\exp\,\mbox{score}_{\mathcal{M}_{w}}(d|\mathbf{q})\right)^{\lambda_{w}}
\end{eqnarray*}
This difference is the reason why the MRF factor functions are called log-linear
models and why the parameter $\boldsymbol{\lambda}$ is not restricted
to nonnegative entries that sum to one---although this restriction
can be imposed to restrict the parameter search space without loss
of generality.
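The contrast can be checked numerically. In the sketch below (toy probabilities, our own illustration), the generative model mixes the per-model probabilities linearly, while the MRF combines them as a product of powers; by the weighted AM-GM inequality the mixture always dominates the product when the weights are nonnegative and sum to one:

```python
# Toy per-model probabilities exp(score) for one document.
p_u, p_b, p_w = 0.5, 0.25, 0.1
lam_u, lam_b, lam_w = 0.6, 0.3, 0.1   # nonnegative, sum to one

# Generative model: lambda acts as a mixing weight (linear combination).
gen = lam_u * p_u + lam_b * p_b + lam_w * p_w

# MRF log-linear model: lambda acts through the exponent (product of powers).
mrf = (p_u ** lam_u) * (p_b ** lam_b) * (p_w ** lam_w)
```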
\subsection{Connections to Jelinek-Mercer Smoothing}
Jelinek-Mercer smoothing \cite{chen1996smoothing} is an interpolated
language smoothing technique. While discussed as an alternative to
Dirichlet smoothing by Zhai and Lafferty \cite{zhai2001smoothing},
here we analyze it as a paradigm to combine the unigram, bigram, and
windowed bigram models.
The idea of Jelinek-Mercer smoothing is to combine a complex model
which may suffer from data-sparsity issues, such as the bigram language
model, with a simpler back-off model. Both models are combined by
linear interpolation.
We apply Jelinek-Mercer smoothing to our setting through a nested
approach. The bigram model is first smoothed with a windowed bigram
model as a back-off distribution with interpolation parameter $\tilde{\lambda}_{b}$.
Then the resulting model is smoothed additionally with a unigram model
with parameter $\tilde{\lambda}_{u}$. This model results in the following
likelihood for optimization.
\[
\mathcal{L}_{\mbox{JM}}(\mathbf{q},d)\propto(1-\tilde{\lambda}_{u})\Bigl(\tilde{\lambda}_{b}\exp\,\mbox{score}_{\mathcal{M}_{b}}(d|\mathbf{q})+ (1-\tilde{\lambda}_{b})\exp\,\mbox{score}_{\mathcal{M}_{w}}(d|\mathbf{q})\Bigr)+(\tilde{\lambda}_{u})\exp\,\mbox{score}_{\mathcal{M}_{u}}(d|\mathbf{q})
\]
We demonstrate that this function is equivalent to the likelihood
of the generative model (Equation \ref{eq:likelihood-genSDM}), through
the reparametrization of $\lambda_{u}=\tilde{\lambda}_{u}$, $\lambda_{b}=(1-\tilde{\lambda}_{u})\cdot\tilde{\lambda}_{b}$
and $\lambda_{w}=(1-\tilde{\lambda}_{u})\cdot(1-\tilde{\lambda}_{b})$.
Therefore, we conclude that the generative model introduced in this
section is equivalent to a Jelinek-Mercer-smoothed bigram model discussed
here.
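This reparametrization can be verified numerically; the sketch below (toy values, our own illustration) checks that the nested Jelinek-Mercer combination and the flat generative mixture coincide:

```python
# Hypothetical Jelinek-Mercer interpolation weights.
jm_u, jm_b = 0.4, 0.7

# Reparametrization from the text.
lam_u = jm_u
lam_b = (1 - jm_u) * jm_b
lam_w = (1 - jm_u) * (1 - jm_b)

# Toy per-model likelihoods exp(score).
p_u, p_b, p_w = 0.5, 0.25, 0.1

jm = (1 - jm_u) * (jm_b * p_b + (1 - jm_b) * p_w) + jm_u * p_u
gen = lam_u * p_u + lam_b * p_b + lam_w * p_w
```

The reparametrized weights also remain a valid mixture: they are nonnegative and sum to one.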
\begin{figure*}
\hfill{} \subfloat[Generative bag-of-bigrams (Section \ref{sec:genSDM}).\label{fig:Generative-bof-model}]{\begin{tikzpicture}[x=0.7cm,y=0.5cm]
\node[latent] (phiu) {$\phi^u_d$} ; %
\node[latent, below=0.15 of phiu] (phib) {$\phi^b_d$} ; %
\node[latent, below=0.15 of phib] (phiw) {$\phi^w_d$} ; %
\node[obs, below right=0.2 and 0.5 of phiw] (Q1) {$Q_1$} ; %
\node[obs, right=2 of Q1] (Q2) {$Q_2$} ; %
\node[obs, right=2.5 of Q2] (Q3) {$Q_3$} ; %
\node[latent, above=3 of Q2] (Z2) {$Z$} ; %
\node[latent, left=1.2 of Z2] (lambda) {$\lambda$} ; %
\node[const, above left=-0.3 and 1 of Q2] (marker) {} ; %
\factor[above=1.8 of marker] {q2u} {} {} {} ; %
\factor[above=1.1 of marker] {q2b} {} {} {} ; %
\factor[above=0.4 of marker] {q2w} {} {} {} ; %
\factoredge {phiu} {q2u} {Q2} ; %
\factoredge {phib} {q2b} {Q2} ; %
\factoredge {phiw} {q2w} {Q2} ; %
\draw (Q1) -- (q2b) ; %
\draw (Q1) -- (q2w) ; %
\node[const, above left=-0.3 and 1 of Q3] (marker2) {} ; %
\factor[above=1.8 of marker2] {q3u} {} {} {} ; %
\factor[above=1.1 of marker2] {q3b} {} {} {} ; %
\factor[above=0.4 of marker2] {q3w} {} {} {} ; %
\factor[right=0.2 of lambda] {lambda2} {} {} {} ;%
\node[const, right=4.5 of phiu] (phiu2) {}; %
\node[const, right=4.5 of phib] (phib2) {}; %
\node[const, right=4.5 of phiw] (phiw2) {}; %
\factoredge {phiu2} {q3u} {Q3} ; %
\factoredge {phib2} {q3b} {Q3} ; %
\factoredge {phiw2} {q3w} {Q3} ; %
\draw (Q2) -- (q3b) ; %
\draw (Q2) -- (q3w) ; %
\factoredge {lambda} {lambda2} {Z2} ; %
\gate {z2gate1} {(q2u) (q3u)} {Z2} ; %
\gate {z2gate2} {(q2b) (q3b)} {} ; %
\gate {z2gate3} {(q2w) (q3w)} {} ; %
\end{tikzpicture}
} \hfill{}\subfloat[Generative n-gram model (Section \ref{sec:genNgram}).\label{fig:Generative-ngram-model}]{\begin{tikzpicture}[x=0.7cm,y=0.5cm]
\node[latent] (phiu) {$\phi^u_d$} ; %
\node[latent, below=0.15 of phiu] (phib) {$\phi^b_d$} ; %
\node[latent, below=0.15 of phib] (phiw) {$\phi^w_d$} ; %
\node[obs, below right=0.2 and 0.5 of phiw] (Q1) {$Q_1$} ; %
\node[obs, right=2 of Q1] (Q2) {$Q_2$} ; %
\node[obs, right=2 of Q2] (Q3) {$Q_3$} ; %
\node[latent, above left=3 and 0.7 of Q2] (Z2) {$Z_2$} ; %
\node[latent, above left=3 and 0.7 of Q3] (Z3) {$Z_3$} ; %
\node[latent, above right=1 of Z2] (lambda) {$\lambda$} ; %
\node[const, above left=-0.3 and 1 of Q2] (marker) {} ; %
\factor[above=1.4 of marker] {q2u} {} {} {} ; %
\factor[above=0.7 of marker] {q2b} {} {} {} ; %
\factor[above=0 of marker] {q2w} {} {} {} ; %
\factoredge {phiu} {q2u} {Q2} ; %
\factoredge {phib} {q2b} {Q2} ; %
\factoredge {phiw} {q2w} {Q2} ; %
\draw (Q1) -- (q2b) ; %
\draw (Q1) -- (q2w) ; %
\node[const, above left=-0.3 and 1 of Q3] (marker2) {} ; %
\factor[above=1.4 of marker2] {q3u} {} {} {} ; %
\factor[above=0.7 of marker2] {q3b} {} {} {} ; %
\factor[above=0 of marker2] {q3w} {} {} {} ; %
\factor[below left=0.2 of lambda] {lambda2} {} {} {} ;%
\factor[below right=0.2 of lambda] {lambda3} {} {} {} ;%
\node[const, right=4 of phiu] (phiu2) {}; %
\node[const, right=4 of phib] (phib2) {}; %
\node[const, right=4 of phiw] (phiw2) {}; %
\factoredge {phiu2} {q3u} {Q3} ; %
\factoredge {phib2} {q3b} {Q3} ; %
\factoredge {phiw2} {q3w} {Q3} ; %
\draw (Q2) -- (q3b) ; %
\draw (Q2) -- (q3w) ; %
\factoredge {lambda} {lambda2} {Z2} ; %
\factoredge {lambda} {lambda3} {Z3} ; %
\gate {z2gate1} {(q2u)} {Z2} ; %
\gate {z2gate2} {(q2b)} {} ; %
\gate {z2gate3} {(q2w)} {} ; %
\gate {z3gate1} {(q3u)} {Z3} ; %
\gate {z3gate2} {(q3b)} {} ; %
\gate {z3gate3} {(q3w)} {} ; %
\end{tikzpicture}
}\hfill{}
\caption{Generative n-gram mixture models.}
\end{figure*}
\section{Generative N-Gram-based Model}
\label{sec:genNgram}The generative model introduced in Section \ref{sec:genSDM}
is rather untypical in that it considers three bag-of-features representations
of a single document without ensuring consistency among them. Using
it to generate documents might yield representations of different
content. In this section we discuss a more conventional generative
model based on an n-gram process (as opposed to a bag of n-grams).
Consistently with previous sections, this model combines a unigram,
bigram, and windowed bigram model.
While the unigram model is exactly as described in Section \ref{subsec:Document-Representation-feature},
the setup for the bigram and windowed bigram cases changes significantly
when moving from a bag-of-bigrams paradigm to an n-gram paradigm.
\subsection{Generative N-gram-based Bigram Process\label{subsec:Generative-n-gram-style-bigram}}
In the bag-of-bigrams model discussed in Section \ref{subsec:Document-Representation-feature},
both words of a bigram $(w_{i-1},w_{i})$ are drawn together from
one distribution $\phi_{d}$ per document $d$. In contrast, in the
n-gram models we discuss here, $w_{i}$ is drawn from a distribution
that is conditioned on $w_{i-1}$ in addition to $d$, i.e., $\phi_{d,w_{i-1}}$.
The difference is that the bag-of-bigrams model follows $p(w,v|d)=\frac{n_{(v,w),d}}{n_{(\star,\star),d}}$,
whereas the n-gram version follows $p(w|v,d)=\frac{n_{(v,w),d}}{n_{(v,\star),d}}$.
As before, we use language models with Dirichlet smoothing, a smoothing
technique that integrates into the theoretical generative framework
through prior distributions. For all terms $v\in V$, we let each
language model $\phi_{d,v}$ be drawn from a Dirichlet prior with
parameter $\tilde{\mu}_{v}^{b}$, which is based on bigram statistics
from the corpus, scaled by the smoothing parameter $\mu$.
For bigram statistics, we have the same choice between the bag-of-bigrams
and the n-gram paradigm. For consistency we follow the n-gram paradigm,
which yields the Dirichlet parameter $\tilde{\mu}_{v}^{b}=\left\{ \mu\frac{n_{(v,w),\star}}{n_{(v,\star),\star}}\right\} _{w\in V}$.
The generative process for the bigram model is as follows:
\begin{enumerate}
\item For all words $v\in V$ in the vocabulary: draw categorical parameter
$\phi_{d,v}^{b}\sim Dir(\tilde{\mu}_{v}^{b})$.
\item Draw the first word of the document $w_{1}\in d$ from the unigram
distribution, $w_{1}\sim Mult(\phi_{d}^{u})$.
\item For each remaining word $w_{i}\in d;\:i\geq2$: \\
draw $w_{i}\sim Mult(\phi_{d,w_{i-1}}^{b})$.
\end{enumerate}
Given a sequence of words in the document $d=w_{1}w_{2}\dots w_{n}$,
the parameter vectors $\phi_{d,v}^{b}$ ($\forall v\in V$) can be
estimated in closed form as follows.
\[
\phi_{d,v}^{b}=\left\{ \frac{n_{(v,w),d}+\mu\frac{n_{(v,w),\star}}{n_{(v,\star),\star}}}{n_{(v,\star),d}+\mu}\right\} _{w\in V}
\]
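A minimal sketch of this conditional estimate (toy counts; names are ours, not the authors' implementation):

```python
def cond_bigram_prob(v, w, doc_bigrams, corpus_bigrams, mu):
    """Dirichlet-smoothed conditional probability p(w | v, d) in the
    n-gram paradigm: counts are normalized by n_{(v,*)} instead of the
    joint total n_{(*,*)}."""
    n_vw_d = doc_bigrams.get((v, w), 0)
    n_v_d = sum(c for (a, _), c in doc_bigrams.items() if a == v)
    n_vw_c = corpus_bigrams.get((v, w), 0)
    n_v_c = sum(c for (a, _), c in corpus_bigrams.items() if a == v)
    return (n_vw_d + mu * n_vw_c / n_v_c) / (n_v_d + mu)

doc = {("a", "b"): 2, ("a", "c"): 2}
corpus = {("a", "b"): 3, ("a", "c"): 1}
p = cond_bigram_prob("a", "b", doc, corpus, mu=4.0)
```

Unlike the bag-of-bigrams estimate, these probabilities sum to one over the successor $w$ for each fixed $v$.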
The likelihood of a given sequence of query terms $\mathbf{q}=q_{1}q_{2}\dots q_{k}$
is modeled as $p(\mathbf{q})=\left(\prod_{{q_{i}\in\mathbf{q}\atop i>1}}p(q_{i}|q_{i-1})\right)\cdot p(q_{1})$.
With parameters $\phi_{d,v}^{b}$ as estimated above, the log-likelihood
for query terms $\mathbf{q}$ is given as
\begin{eqnarray*}
\log\mathcal{L}_{b}(\mathbf{q}|\phi_{d,\star}^{b}) & = & \sum_{{q_{i}\in\mathbf{q}\atop i>1}}\log\frac{n_{(q_{i-1},q_{i}),d}+\mu\frac{n_{(q_{i-1},q_{i}),\star}}{n_{(q_{i-1},\star),\star}}}{n_{(q_{i-1},\star),d}+\mu} +\log\mathcal{L}_{u}(q_{1}|\phi_{d}^{u})
\end{eqnarray*}
The second term handles the special case of the first query word $q_{1}$ which has
no preceding terms and therefore, when marginalizing over all possible preceding terms, collapses to the unigram distribution.
Even when ignoring the special treatment of the first query term
$q_{1}$, the bigram model $\mathcal{M}_{b}$ referred to above as
$\mbox{score}_{\mathcal{M}_{b}}(d|\mathbf{q})$ produces a different
score than $\log\mathcal{L}_{b}(\mathbf{q}|\phi_{d}^{b})$, due to
the difference between conditional and joint probability.
\subsection{Generative Windowed-Bigram Process\label{subsec:Generative-windowed-bigram-proce}}
The windowed bigram model of document $d$ also represents each word
$w_{i}$ as a categorical distribution. The difference is that the
model conditions on a random word within the 8-word window surrounding
the $i$'th position. This is modeled by a random
draw of a position $j$ to select the word $w_{j}$ on which the draw
of word $w_{i}$ will be conditioned. In the following, we denote
the set of all words surrounding word $w_{i}$ by $\omega_{i}=\{w_{i-7}\dots w_{i-1}w_{i+1}\dots w_{i+7}\}$.
The generative process for the windowed bigram model is as follows:
\begin{enumerate}[leftmargin=4mm]
\item For all words $v\in V$: draw categorical parameter $\phi_{d,v}^{w}\sim Dir(\tilde{\mu}_{v}^{w})$.
\item For each word $w_{i}\in d$:
\begin{enumerate}[leftmargin=4mm]
\item Draw an index $j$ representing word $w_{j}\in\omega_{i}$ uniformly
at random.
\item Draw $w_{i}\sim Mult(\phi_{d,w_{j}}^{w})$.
\end{enumerate}
\end{enumerate}
An observed sequence of windows $\omega_{1}\omega_{2}\dots\omega_{n}$
is derived from a given sequence of words in the document $d=w_{1}w_{2}\dots w_{n}$.
The parameter vectors $\phi_{d,v}^{w}$ ($\forall v\in V$) can be
estimated in closed form by counting all co-occurrences of $w_{i}$
with $v\in\omega_{i}$ in the vocabulary $V$. This quantity was introduced
above as $n_{\left\{ w,v\right\} _{8},d}$. In order to account for
the choice of the position $j$, the co-occurrence counts are weighted
by the probability of the uniform draw, i.e., $\frac{1}{7+7}$.
\begin{eqnarray*}
\phi_{d,v}^{w} & = & \left\{ \frac{\frac{1}{14}n_{\left\{ v,w\right\} _{8},d}+\mu\frac{\frac{1}{14}n_{\left\{ v,w\right\} _{8},\star}}{\frac{1}{14}n_{\left\{ v,\star\right\} _{8},\star}}}{\frac{1}{14}n_{\left\{ v,\star\right\} _{8},d}+\mu}\right\} _{w\in V}
\end{eqnarray*}
As the factors $\frac{1}{14}$ cancel, they effectively rescale the
smoothing parameter $\mu$ to $14\mu$ in the expression below.
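The cancellation can be checked numerically: scaling all window counts by $\frac{1}{14}$ under smoothing parameter $\mu$ gives the same estimate as using raw counts with $14\mu$ (toy counts below, our own illustration):

```python
# Toy window co-occurrence counts.
n_vw_d, n_v_d = 6.0, 20.0     # document counts
n_vw_c, n_v_c = 30.0, 200.0   # corpus counts
mu = 2.0

# Estimate with counts scaled by 1/14, as in the generative derivation.
scaled = ((n_vw_d / 14 + mu * (n_vw_c / 14) / (n_v_c / 14))
          / (n_v_d / 14 + mu))

# Equivalent estimate with raw counts and the rescaled parameter 14*mu.
raw = (n_vw_d + 14 * mu * n_vw_c / n_v_c) / (n_v_d + 14 * mu)
```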
With parameters $\phi_{d,v}^{w}$ as estimated above, the log-likelihood
for query terms $\mathbf{q}$ is given as
\[
\log\mathcal{L}_{w}(\mathbf{q}|\phi_{d,\star}^{w})=\sum_{{q_{i}\in\mathbf{q}\atop i>1}}\log\frac{n_{\left\{ q_{i-1},q_{i}\right\} _{8},d}+{\scriptstyle 14\cdot\mu}\cdot\frac{n_{\left\{ q_{i-1},q_{i}\right\} _{8},\star}}{n_{\left\{ q_{i-1},\star\right\} _{8},\star}}}{n_{\left\{ q_{i-1},\star\right\} _{8},d}+14\mu} +\log\mathcal{L}_{u}(q_{1}|\phi_{d}^{u})
\]
The second term handles the special case of the first query term
$q_{1}$, which has no preceding terms and collapses to the unigram
model. Aside from this special treatment of $q_{1}$, the windowed
bigram model $\mathcal{M}_{w}$ introduced above as $\mbox{score}_{\mathcal{M}_{w}}(d|\mathbf{q})$
produces a different log score than $\log\mathcal{L}_{w}(\mathbf{q}|\phi_{d}^{w})$.
\subsection{A New Generative Process: genNGram}
The n-gram paradigm language models discussed in this section allow
generating a term $q_{i}$ optionally conditioned on the previous
term. This makes it possible to integrate unigram, bigram, and windowed
bigram models with term-dependent choices. For instance, after generating
$q_{1}$ from the unigram model, $q_{2}$ might be generated from
a bigram model (conditioned on $q_{1}$), and $q_{3}$ generated from
the windowed bigram model (conditioned on $q_{2}$). These term-by-term
model choices are reflected in a list of latent indicator variables
$Z_{i}$, one for each query term position $q_{i}$.
The generative process is as follows.
\begin{itemize}[leftmargin=3mm]
\item Draw a multinomial distribution $\boldsymbol{\lambda}$ over the set
'$u$','$b$','$w$'.
\item Assume estimated unigram model $\phi_{d}^{u}$, bigram model $\phi_{d,v}^{b};\forall v\in V$
and windowed bigram model $\phi_{d,v}^{w};\forall v\in V$ that represent
the document $d$ as introduced in this section.
\item For the first query term $q_{1}$ do: \\
Draw $Q_{1}\sim Mult(\phi_{d}^{u})$.
\item For all positions $2\leq i\leq k$ of query terms $q_{i}$, do:
\begin{itemize}[leftmargin=3mm]
\item Draw an indicator variable $Z_{i}\sim Mult(\boldsymbol{\lambda})$
to indicate which distribution should be used.
\item If $Z_{i}=\mbox{'}u\mbox{'}$ then do: \\
Draw $Q_{i}\sim Mult(\phi_{d}^{u})$
from the unigram model (Section \ref{subsec:Document-Representation-feature}).
\item If $Z_{i}=\mbox{'}b\mbox{'}$ then do: \\
Draw $Q_{i}\sim Mult(\phi_{d,Q_{i-1}}^{b})$
from the bigram model (Section \ref{subsec:Generative-n-gram-style-bigram}).
\item If $Z_{i}=\mbox{'}w\mbox{'}$ then do:\footnote{In spirit with SDM, $\phi^{w}$ is estimated from eight-term windows
in the document, but only the previous word is considered when generating
the query.} Draw $Q_{i}\sim Mult(\phi_{d,Q_{i-1}}^{w})$ from the windowed bigram
model (Section \ref{subsec:Generative-windowed-bigram-proce}).
\end{itemize}
\end{itemize}
Assuming that all variables $Q_{i}$ and parameters $\phi$, $\lambda$
are given, only the indicator variables $Z_{i}$ need to be estimated.
Since all $Z_{i}$ are conditionally independent when other variables
are given, their posterior distribution can be estimated in closed-form.
For instance, $p(Z_{i}=\mbox{'}b\mbox{'}|\mathbf{q},\mbox{\ensuremath{\lambda}}\dots)\propto\lambda_{b}\phi_{d,q_{i-1}}^{b}(q_{i})$
and analogously for 'u' and 'w'.
Integrating out the uncertainty in $Z_{i}$ and considering all query
terms $q_{i}$, the model likelihood is estimated as
\begin{eqnarray}
\mathcal{L}(\mathbf{q}|\lambda,\phi_{d}^{u},\phi_{d}^{b},\phi_{d}^{w}) & = & \phi_{d}^{u}(q_{1})\cdot\prod_{i=2}^{k}\biggl(\lambda_{u}\phi_{d}^{u}(q_{i}) +\lambda_{b}\phi_{d,q_{i-1}}^{b}(q_{i})+\lambda_{w}\phi_{d,q_{i-1}}^{w}(q_{i})\biggr)\label{eq:Likelihood-genNGram}
\end{eqnarray}
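Equation \ref{eq:Likelihood-genNGram} can be sketched as follows (toy probabilities; names are ours, not from the paper):

```python
import math

def gen_ngram_loglik(query, uni, bi, wbi, lam):
    """Log of the genNGram likelihood: the first term comes from the
    unigram model; each later term from a lambda-mixture of unigram,
    conditional bigram, and conditional windowed-bigram probabilities."""
    lam_u, lam_b, lam_w = lam
    ll = math.log(uni[query[0]])
    for prev, cur in zip(query, query[1:]):
        ll += math.log(lam_u * uni[cur]
                       + lam_b * bi[(prev, cur)]
                       + lam_w * wbi[(prev, cur)])
    return ll

uni = {"a": 0.5, "b": 0.5}
bi = {("a", "b"): 0.8}
wbi = {("a", "b"): 0.6}
ll = gen_ngram_loglik(["a", "b"], uni, bi, wbi, lam=(0.2, 0.5, 0.3))
```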
\section{Experimental Evaluation}
\label{sec:Experiments}In this section, the theoretical analysis
of the family of dependency models is complemented with an empirical
evaluation. The goal of this evaluation is to understand implications
of different model choices in isolation.
We compare the MRF-based and generative models with both paradigms
for bigram models. In particular, the following methods are compared
(cf. Figure \ref{fig:methods}):
\begin{itemize}[leftmargin=5mm]
\item \textbf{mrfSDM:} The original MRF-based sequential dependence model as introduced
by Metzler et al.\ \cite{metzler2005sdm}, as described in Section \ref{sec:mrfSDM}.
\item \textbf{genSDM:} A generative model with the same features, using the bag-of-bigrams
approach introduced in Section \ref{sec:genSDM}.
\item \textbf{genNGram:} An alternative generative model using conditional
bigram models, closer to traditional n-gram models, discussed in Section
\ref{sec:genNgram}.
\item \textbf{mrfNGram:} A variant of the MRF-based SDM model using features from
conditional bigram models.
\item \textbf{QL:} The query likelihood model with Dirichlet smoothing, which is called the unigram
model in this paper.
\end{itemize}
\begin{figure*}
\noindent \begin{centering}
\subfloat[Different methods and features.\label{fig:methods}]{\noindent \begin{centering}
\begin{tikzpicture}[x=0.5cm,y=0.3cm, every node/.style={scale=0.7}]
\node (bowb) {bag of windowed bigrams}; %
\node[below=1 of bowb] (bob) {bag of bigrams}; %
\node[below=1 of bob] (uni) {unigram model}; %
\node[below=1 of uni] (big) {n-gram bigram model}; %
\node[below=1 of big] (wbig) {n-gram windowed bigram}; %
\node[left=2 of bob, draw] (mrfSDM) {mrfSDM}; %
\node[below=3 of mrfSDM, draw] (mrfNGram) {mrfNGram}; %
\node[right=2.5 of uni, draw] (QL) {QL}; %
\node[right=2 of bob, draw] (genSDM) {genSDM}; %
\node[below=3 of genSDM, draw] (genNGram) {genNGram}; %
\draw[->] (mrfSDM) -> (bob); %
\draw[->] (mrfSDM) -> (bowb); %
\draw[->] (mrfSDM) -> (uni); %
\draw[->] (genSDM) -> (bob); %
\draw[->] (genSDM) -> (bowb); %
\draw[->] (genSDM) -> (uni); %
\draw[->] (QL) -> (uni); %
\draw[->] (mrfNGram) -> (big); %
\draw[->] (mrfNGram) -> (wbig); %
\draw[->] (mrfNGram) -> (uni); %
\draw[->] (genNGram) -> (big); %
\draw[->] (genNGram) -> (wbig); %
\draw[->] (genNGram) -> (uni); %
\node[above=2 of mrfSDM] (MRF) {\textbf{MRF}} ; %
\node[above=2 of genSDM] (GEN) {\textbf{Generative}} ; %
\node[below=1 of wbig] (base) {}; %
\end{tikzpicture}
\par\end{centering}
}\subfloat[Performance with grid tuning.\label{fig:results-grid}]{\includegraphics[width=0.3\columnwidth]{data-final-column}
}\subfloat[Performance with RankLib.\label{fig:results-ranklib}]{\includegraphics[width=0.3\columnwidth]{data-ranklib-final-column}
}
\par\end{centering}
\caption{Experimental evaluation and results\label{fig:experiments}}
\end{figure*}
All underlying language models are smoothed with Dirichlet smoothing,
as a preliminary study with Jelinek-Mercer smoothing yielded worse
results. (This finding is consistent with a study of Smucker and Allan~\cite{smucker2006smoothing}.)
Term probabilities of different language models are on very different
scales: for instance, the average probability of a bag-of-bigrams
entry is much smaller than a probability under the unigram model,
which is in turn much smaller than a term probability under a conditional
bigram model.
As we anticipate that the Dirichlet scale parameter $\mu$ needs to
be adjusted, we introduce separate parameters for the different
language models (rather than using parameter tying).
\subsection{Experimental Setup}
Aiming for a realistic collection with rather complete assessments
and multi-word queries, we study method performance on the Robust04
test set. The test set contains 249 queries.\footnote{We remove query 672, which does not contain positive judgments.}
We perform tokenization on whitespace and stemming with the Krovetz
stemmer, and remove stopwords only for unigram models. While we focus on the measure mean-average
precision (MAP), similar results are obtained for ERR@20, R-Precision,
bpref, MRR, and P@10 (available upon request).
We use five-fold cross validation using folds that are identical to
empirical studies of Huston et al.~\cite{huston2013termdependencies,huston2014termdepedencies-appendix}.
The training fold is used to select both the Dirichlet scale parameters
$\mu$ and weight parameters $\boldsymbol{\lambda}$. Performance
is measured on the test fold only.
Parameters are estimated in two phases. First the Dirichlet scale
parameter $\mu$ is selected to maximize retrieval performance (measured
in MAP) of each language model individually. See Table \ref{tab:tuning-mu}
for the range of the search grid, the estimated Dirichlet parameters,
and training performance.
In the subsequent phase, Dirichlet parameters are held fixed while
the weight parameter $\boldsymbol{\lambda}=\{\lambda_{u},\lambda_{b},\lambda_{w}\}$
is selected. To avoid performance differences due to different machine
learning algorithms, we evaluate two learning approaches for the weight
parameter $\boldsymbol{\lambda}$: grid search and coordinate ascent
from RankLib. Although not strictly necessary, for grid search we only consider nonnegative weights
that sum to one, as suggested in the original SDM paper \cite{metzler2005sdm}. Each weight entry is selected on
a grid $\lambda\in[0.0,0.05,\ldots0.95,1.0]$ while constraint-violating
combinations are discarded. The RankLib experiment does not use a
grid, but performs coordinate-ascent with five restarts.
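The constrained grid over $\boldsymbol{\lambda}$ can equivalently be enumerated directly on the probability simplex instead of discarding violating triples; a sketch (our own illustration) for step size 0.05:

```python
def simplex_grid(step=0.05):
    """Enumerate (lam_u, lam_b, lam_w) on a fixed grid, keeping exactly
    the nonnegative combinations that sum to one."""
    n = round(1 / step)
    grid = []
    for i in range(n + 1):
        for j in range(n + 1 - i):
            k = n - i - j
            grid.append((i * step, j * step, k * step))
    return grid

g = simplex_grid(0.05)
```

For step 0.05 this yields 231 admissible weight combinations, far fewer than the $21^3$ raw grid points.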
For single-term queries, all discussed approaches reduce to the Query
Likelihood model, i.e., unigram model. We therefore hold them out
during the training phase, but include them in the test phase, where
they obtain the same ranking for all approaches.
\begin{table}
\caption{Dirichlet settings with max MAP on the train set.\label{tab:tuning-mu}}
\hfill{}\subfloat[Bag-of-bigram models.]{
\noindent \centering{}{}%
\begin{tabular}{c@{\hskip 3em}ll@{\hskip 3em}ll@{\hskip 3em}ll}
\toprule
{split} & {$\mu_{u}$} & {MAP} & {$\mu_{b}$} & {MAP} & {$\mu_{w}$} & {MAP}\tabularnewline
\midrule
{0} & {1000} & {0.252} & {18750} & {0.131} & {20000} & {0.171}\tabularnewline
{1} & {1000} & {0.253} & {18750} & {0.127} & {2500} & {0.163}\tabularnewline
{2} & {1000} & {0.252} & {18750} & {0.131} & {20000} & {0.165}\tabularnewline
{3} & {1000} & {0.254} & {18750} & {0.135} & {20000} & {0.168}\tabularnewline
{4} & {1000} & {0.259} & {21250} & {0.130} & {2500} & {0.170}\tabularnewline
\bottomrule
\multicolumn{7}{c}{{$\mu\in[10,250,500,\ldots,2500,3000,3500,\ldots,5000,10000]$}}
\end{tabular}{\scriptsize \par}}\hfill{}
\hfill{}\subfloat[N-gram models.\label{tab:genWindowedBigram}]{
\noindent \centering{}{}%
\begin{tabular}{c@{\hskip 3em}ll@{\hskip 3em}ll@{\hskip 3em}ll}
\toprule
{split} & {$\mu_{u}$} & {MAP} & {$\mu_{b}$} & {MAP} & {$\mu_{w}$} & {MAP}\tabularnewline
\midrule
{0} & {1000} & {0.252} & {5} & {0.171} & {1} & {0.213}\tabularnewline
{1} & {1000} & {0.253} & {5} & {0.172} & {1} & {0.209}\tabularnewline
{2} & {1000} & {0.252} & {5} & {0.168} & {1} & {0.206}\tabularnewline
{3} & {1000} & {0.254} & {5} & {0.175} & {1} & {0.210}\tabularnewline
{4} & {1000} & {0.259} & {5} & {0.172} & {1} & {0.213}\tabularnewline
\bottomrule
\multicolumn{7}{c}{{$\mu\in[1,5,10,50,100,150,200,250,500,750,1000]$}}\tabularnewline
\end{tabular}{\scriptsize \par}}\hfill{}
\end{table}
\begin{table}
\caption{Selected weight parameter combinations, which are stable
across folds, with training MAP. Left: grid tuning; right: RankLib
(Figure \ref{fig:results-grid} shows results on test set).\label{tab:tuning-lambda}}
\noindent \centering{}{\scriptsize{}}%
\begin{tabular}{c@{\hskip 1em}llll@{\hskip 3em}llll}
\toprule
{method} & {$\lambda_{u}$} & {$\lambda_{b}$} & {$\lambda_{w}$} & {MAP} & {$\lambda_{u}$} & {$\lambda_{b}$} & {$\lambda_{w}$} & {MAP}\tabularnewline
\midrule
{mrfSDM} & {0.85} & {0.15} & {0.05} & { 0.26} & {0.88} & {0.06} & {0.06} & {0.26}\tabularnewline
{genSDM} & {0.05} & {0.05} & {0.9} & { 0.21} & {0.32} & {0.45} & {0.24} & {0.26}\tabularnewline
{genNGram} & {0.35} & {0} & {0.65} & { 0.23} & {0.10} & {0.01} & {0.89} & {0.26}\tabularnewline
\bottomrule
\multicolumn{5}{r}{{\scriptsize{}$\lambda\in[0.0,0.05,0.10,\dots0.95,1.0]$}} & \multicolumn{4}{c}{coord ascent}\tabularnewline
\end{tabular}{\scriptsize \par}
\end{table}
\subsection{Empirical Results}
The results of the evaluation with standard error bars are presented
in Figure \ref{fig:results-grid} for the grid tuning experiment and
in Figure \ref{fig:results-ranklib} for the RankLib experiment.
In the grid-tuning experiment, the MRF-based SDM model appears clearly
better than any of the other variants, including both generative models
as well as the MRF variant with n-gram features. The second best method
is the query likelihood method. However, once $\boldsymbol{\lambda}$
is learned with coordinate ascent from RankLib, the difference disappears.
This is concerning, because it may lead to the false belief that discriminative
models are superior for this task.
The achieved performance of mrfSDM in both cases is consistent with
the results of the experiment conducted by Huston et al.~\cite{huston2013termdependencies}.
\paragraph{Generative models}
In all cases, weight parameters $\lambda$ and Dirichlet scale parameters
$\mu$ selected on the training folds, cf. Tables \ref{tab:tuning-mu}
and \ref{tab:tuning-lambda}, are stable across folds.
We observe that selected weight parameterization for the genNGram
model puts the highest weight on the windowed bigram model, omitting
the bigram model completely. In fact, among all four bigram language
models, the n-gram windowed bigram model, described in Section \ref{subsec:Generative-windowed-bigram-proce} achieves the highest retrieval
performance by itself (MAP 0.21, column $\mu_w$ in Table \ref{tab:genWindowedBigram}).
For the genSDM model, which is based on bag-of-bigrams, the weight
parameters are rather inconsistent across folds and training methods,
suggesting that the model is unreliable when trained with cross validation.
\paragraph{Markov random fields}
In order to understand whether the success factor of the mrfSDM lies
in the log-linear optimization, or in the bag-of-bigram features,
we also integrate the n-gram based features discussed in Section \ref{sec:genNgram}
as features into the MRF-based SDM algorithm introduced by Metzler
et al.\ (discussed in Section \ref{sec:mrfSDM}). This approach is denoted
as mrfNGram in Figure \ref{fig:results-grid}. While the performance
is diminished when using grid-tuning, identical performance is achieved when parameters are estimated with RankLib (Figure \ref{fig:results-ranklib}).
\paragraph{Discussion}
We conclude that all four term-dependency methods are able to achieve the same
performance, no matter whether
a generative approach or a different bigram paradigm is chosen.
We also do not observe any difference across levels
of difficulty (result omitted). This is not surprising given the similarities between
the models, as elaborated in this paper.
However, a crucial factor in this analysis is the use of a coordinate ascent
algorithm for selection of weight parameters. The coordinate ascent
algorithm was able to find subtle but stable weight combinations that
the grid tuning algorithm did not even inspect.
An important take-away is not to rely on grid tuning when evaluating
a discriminative model in comparison to generative models, as it may
falsely appear that the discriminative model achieves a significant
performance improvement (compare mrfSDM versus genSDM in Figure \ref{fig:results-grid}),
where actually this is only due to the inability of fixed grid searches
to suitably explore the parameter space.
\section{Related Work}
\label{sec:Related-work}This work falls into the context of other
works that study different common axiomatic paradigms \cite{zhai2011axiomatic} used in information retrieval
empirically and theoretically. Chen and Goodman \cite{chen1996smoothing}
studied different smoothing methods for language modeling, while Zhai
and Lafferty \cite{zhai2001smoothing} re-examine this question for
the document retrieval task. Finally, Smucker and Allan \cite{smucker2006smoothing}
identified which characteristic of Dirichlet smoothing leads to its
superiority over Jelinek-Mercer smoothing.
Our focus is on the theoretical understanding of equivalences of different probabilistic models that consider sequential term dependencies, such as \cite{metzler2005sdm}. Our work is motivated to complement the empirical comparison
of Huston and Croft \cite{huston2013termdependencies,huston2014termdepedencies-appendix}.
Huston and Croft studied the performance of the sequential dependence
model and other widely used retrieval models with term dependencies
such as BM25-TP, as well as Terrier's pDFR-BiL2 and pDFR-PL2 with
an elaborate parameter tuning procedure with five fold cross validation.
The authors found that the sequential dependence model outperforms
all other evaluated methods, with the only exception being an extension,
the weighted sequential dependence model \cite{bendersky2010wsdm}.
The weighted sequential dependence model extends the feature space
for unigrams, bigrams, and windowed bigrams with additional features
derived from external sources such as Wikipedia titles, MSN query
logs, and Google n-grams.
\section{Conclusion}
In this work we take a closer look at the theoretical underpinning
of the sequential dependence model. The sequential dependence model
is derived as a Markov random field, where a common choice of potential
functions is the log-linear model. We show that the only difference
between a generative bag-of-bigrams model and the SDM model is that
one operates in log-space, the other in the space of probabilities.
This is where the most important difference between SDM and generative
mixtures of language models lies.
We confirm empirically that all four term-dependency models are capable
of achieving the same good retrieval performance. However, we observe
that grid tuning is not a sufficient algorithm for selecting the weight
parameter, whereas a simple coordinate ascent algorithm, such as the
one available in the RankLib package, finds optimal parameter settings.
A striking result is that, for the purposes of comparing different models,
tuning parameters on an equidistant grid may lead to the false belief
that the MRF model is significantly better, where in fact this is only
due to the use of an insufficient parameter estimation algorithm.
This analysis of strongly related models that follow the SDM model
in spirit, but are based on MRFs, generative mixture models, and Jelinek-Mercer/interpolation
smoothing, might appear overly theoretical. However, as many extensions
exist for the SDM model (e.g., including concepts or adding spam features)
as well as for generative models (e.g., the relevance model RM3, translation
models, or topic models), elaborating on theoretical connections and
pinpointing the crucial factors are important for bringing the two
research branches together. The result of this work is that, when
extending current retrieval models, the generative and Markov
random field frameworks are equally promising.
\section*{Acknowledgements}
\small{This work was supported in part by the Center for Intelligent Information Retrieval. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect those of the sponsor.}
\bibliographystyle{plain}
\section{Introduction}
Whenever we think about the properties of dynamical systems, the concepts of {\textit{stability}} and \textit{robustness} almost inevitably come to mind. Loosely speaking, stability means that maps sufficiently close to each other share the same dynamics, and robustness means that a specific property persists for all maps sufficiently close to a given one. Both are of the utmost importance in the study of any dynamical system.\\
This work in particular is centered on the study of robust transitivity, where \textit{transitive} means the existence of a forward dense orbit of a point. This may seem at first sight an unexciting topic, since a fair amount of results concerning robust transitivity are known. Nonetheless, the class of maps we aim at, singular endomorphisms, about which little to nothing is known, together with the high dimensional setting, makes this an undoubtedly fresh approach to the subject.\\
To set ideas in order, we list the most relevant known results about the topic.\\
We begin by summing up the most studied case: robust transitivity of diffeomorphisms. The picture provided by known results is fairly complete. Concerning surfaces, \cite{m2} shows that robust transitivity implies that the diffeomorphism is Anosov and the manifold is $\T^2$; while for $n$-dimensional manifolds, \cite{bdp} proves that robust transitivity implies a dominated splitting. \\ Going further, there is robust transitivity of regular endomorphisms (not globally but locally invertible). The picture we have about these is somewhat less complete: we know from \cite{lp} that volume expansion is a necessary but not sufficient condition for $C^1$ robust transitivity.\\ Finally, there is the least studied case, robust transitivity of singular maps (nonempty critical set). Until 2013 nothing had been written on the topic. It was in that year that \cite{br} exhibited the first example of a $C^1$ transitive singular map. The second example was given only in 2016 by \cite{ilp}, who show a $C^1$ robustly transitive map with a persistent critical set. Nothing beyond these two examples was known at that time. \\ Even so, there have been further recent advances: in 2019, \cite{lr1} and \cite{lr2} set the \textit{state of the art}, proving that partial hyperbolicity is a necessary condition for robust transitivity of singular surface endomorphisms, that the only surfaces that support them are $\T^2$ and the Klein bottle, and that they belong to the homotopy class of a linear map with an eigenvalue of modulus larger than one. \\
Now, about the present work: the construction carried out in \cite{ilp} allows the existence of an endomorphism of $\T^2$ with persistent critical set which is $C^2$ but not $C^1$ robustly transitive, a result appearing in \cite{ip} which this article generalizes to the higher dimensional torus $\T^n$. It is worth mentioning that the proof is inspired by the preceding ideas, but with a significantly simpler approach and construction.
\subsection{Sketch of the Construction.} Starting from a matrix with integer coefficients of absolute value larger than one, we build an endomorphism of $\T^n$ that presents a persistent critical set and admits a field of unstable cones. We choose a critical point and a neighborhood of it and perform a perturbation there. This yields a nearby map that collapses an open set into an invariant hyperplane, hence is not transitive, so our map cannot be $C^1$ robustly transitive. Next, we prove that the map is $C^2$ robustly transitive by following curves whose velocities lie inside the unstable cones until they escape the critical region, and then applying the classical argument for robust transitivity using open sets, since no open set collapses in the $C^2$ topology. \\ The reader is also provided with a thorough description of the critical set and the critical points.
\section{Preliminaries}
We begin by recalling some basic definitions. We assume the reader to be familiar with the concepts of real manifold and submanifold, atlas, chart, tangent vector and tangent space, differentiable map and differential of a map, etc. For more details about the contents of this section the reader may refer to \cite{gg} or \cite{kh}.
\subsection{Dynamical}
Let $f:M \rightarrow M$ be a differentiable endomorphism. The \textbf{orbit} of $x \in M$ is $\mathcal{O}(x)=\{ f^n(x) , n \in \N \}$ and $f$ is \textbf{transitive} if there exists a point $x \in M$ such that $ \overline{{\mathcal{O}(x)}}=M$.
\begin{prop}\label{equi}
If $f$ is continuous then the following are equivalent:
\begin{enumerate}
\item $f$ is transitive.
\item For all $U, V$ open sets in $M$, there exists $n \in \N$ such that $ f^n(U) \cap V \neq \emptyset$.
\item There exists a residual set $R$ (countable intersection of open and dense sets) such that for all points $ x \in R: \overline{\mathcal{O}(x)}=M$.
\end{enumerate}
\end{prop}
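Criterion 2 can be probed numerically for a concrete transitive map. The sketch below uses the doubling map on the circle (an illustrative example, not taken from the text) and searches for the smallest $n$ with $f^n(U)\cap V\neq\emptyset$ by tracking sample points of $U$:

```python
# the doubling map x -> 2x (mod 1) on the circle is a classical transitive map
def f(x):
    return (2 * x) % 1

def hits(U, V, max_n=20, samples=2000):
    """Smallest n with f^n(U) meeting V, scanning sample points of U."""
    pts = [U[0] + (U[1] - U[0]) * i / samples for i in range(samples)]
    for n in range(1, max_n + 1):
        pts = [f(x) for x in pts]
        if any(V[0] < x < V[1] for x in pts):
            return n
    return None

# the image of U doubles in length at each step, so it eventually meets V
print(hits((0.10, 0.11), (0.70, 0.71)))  # -> 4
```

Since $f^4(U) = [0.6, 0.76) \pmod 1$ already overlaps $V$, the search stops at $n=4$.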
\begin{defi}
$f$ is $C^k$-\textbf{robustly transitive} if there exist $\ep >0$ and a neighborhood $\mathcal{U}_{(f,\ep)}$ of $f$ in the $C^k$ topology such that $g$ is transitive for all $ g \in \mathcal{U}_{(f,\ep)}$.
\end{defi}
\subsection{Geometrical}
For the rest of the preliminaries, $M$ and $N$ will denote compact, connected real manifolds without boundary such that $dim(M) \geq dim(N)$, with $x \in M$, $y \in N$, and $f:M \rightarrow N$ a differentiable map.\\
We say that $x $ is a \textbf{regular point} for $f$ if the differential at $x$, $D_x f$, is surjective; equivalently, if the rank of the Jacobian matrix of $f$ at $x$ satisfies $rk(D_x f) = dim(N)$. We say that $y $ is a \textbf{regular value} of $f$ if every $ x \in f^{-1}(y)$ is a regular point, and we say that $x \in M$ is a \textbf{critical point} or \textbf{singularity} for $f$ if $D_x f$ is not surjective; equivalently, if $rk(D_x f) < dim(N)$. The \textbf{critical set} of $f$ is $ S_f = \{ x \in M / rk(D_x f) < dim(N) \} $ and $y $ is a \textbf{critical value} if it is not a regular value.
\begin{rk}
If $dim(M)=dim(N)$ then the definitions of regular point and critical point are equivalent to the determinant of the Jacobian $det(D_xf) \neq 0$ or $det(D_xf) = 0$ respectively.
\end{rk}
\subsubsection{Singularities}
We continue with a brief overview of singularity theory. Surprisingly, singularities present a nice geometrical behavior; they can be grouped into 'types', and these types can in turn be organized as submanifolds of $M$.
\begin{defi}
Let $f_1:M\rightarrow N$ and $f_2:M\rightarrow N$ be maps such that $x_1 \in S_{f_1}$ and $x_2 \in S_{f_2}$.
We say $x_1$ and $x_2$ are \textbf{singularities of the same type}, and denote it by $(f_1,x_1) \sim (f_2,x_2)$, if there exist neighborhoods $U_i$ of $x_i$ and $V_i$ of $f_i(x_i)$, $i \in \{1,2\}$, and two diffeomorphisms $h_1:U_1 \rightarrow U_2$ and $h_2: V_1 \rightarrow V_2$ such that the following diagram commutes:
\begin{Large}{ $$\begin{tikzcd}
U_1 \arrow{r}{h_1} \arrow[swap]{d}{{f_1}_{|U_1}} & U_2 \arrow{d}{{f_2}_{|U_2}} \\
V_1 \arrow{r}{h_2} & V_2
\end{tikzcd}$$}
\end{Large}
\end{defi}
Clearly, $\sim$ is an equivalence relation. \\
The description of all singularities belonging to the same equivalence class receives the name of \textbf{normal form} of the singularity, and for simplicity they are classified through some model map $g:\R^m \rightarrow \R^n$ with a singularity at $x=0$. Then, we say that $f:M \rightarrow N$ has \emph{that} type of singularity at $x \in M$ if $(f,x) \sim (g,0)$. \\
This description carries the disadvantage that there may be too many equivalence classes, even infinitely many. Considering this, the equivalence classes are grouped once again under a criterion called \textbf{Thom-Boardman}. Every differentiable map can be approximated by one with a finite number of Thom-Boardman singularities, and under this description every group of classes is, as a set, a submanifold of $M$. \\
In 1955, Thom proposed in \cite{t} the aforementioned criterion for the singularities of a map $f \in C^\infty(M,N)$, defining $S_{k_1}(f)=\{ x \in M / dim (ker(D_xf))={k_1}\}$. Assuming this set to be a submanifold of $M$, one can define $S_{k_1,k_2}(f)=S_{k_2} (f_{|S_{k_1}})$ and so on. The non-increasing $r$-tuple $(k_1,k_2,...,k_r)$ is called the \emph{symbol} of the singularity, and it characterizes it.\\ Unfortunately, Thom could not prove the submanifold structure, leaving a gap in the theory until Boardman solved the problem in 1967 in his paper \cite{b}. He took jets of differentiable maps (loosely speaking, Taylor expansions), which allow one to define the $S_k$ as submanifolds of jet spaces, bringing along as well the local expressions of the action of the map in a neighborhood of the singularity (which happen to be the explicit analytical expressions of their normal forms). This sealed the acceptance of the classification (another very important consequence of Boardman's work on singularities is the proof that stable maps are not dense in the set of smooth maps).
To end the preliminaries concerning singularities, we only mention that Mather (1971) and Morin (1972) gave different approaches to the Thom-Boardman classification by means of algebraic geometry and complex analysis, and that, even though extremely useful, the Thom-Boardman classification is not free of pathological behavior, as Porteous (1972) showed by exhibiting a map $f:\R^5 \rightarrow \R^5$ with $S_2$ singularities in the closure of $S_{1,1,1,1}$ with very different qualitative behaviors. The limitations of the Thom-Boardman classification lie in the fact that the closure of the union of the singular submanifolds need not be a manifold itself.
\section{A singular endomorphism $f$ of $\T^n$.}
We proceed now to the construction of the main map of this work which we call $f$. From $f$ we will be able to define a new map $H$ that has the properties announced in the title of the article.
\subsection{Construction of $f$.}
Consider the $n$ dimensional torus $\T^n$ endowed with the standard Riemannian metric and let $\widehat{A} \in \mathcal{M}_n(\Z)$ be the diagonal matrix suggested below.
$$\widehat{A}= \left(\begin{array}{ccccc}
8 & 0 & 0 & \cdots & 0\\
0 & 2 & 0 & \cdots & 0 \\
\vdots & \vdots & 2 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & \cdots & 0 & 0 & 2\\
\end{array}
\right). $$
Notice the construction could be carried out with any pair of integers $\lambda$ and $\mu$ such that $|\lambda|>|\mu|>1$. The choice of $8$ and $2$ is made for the sake of simplicity and for a better understanding of the contents to follow.\\ $\widehat{A}$ defines a regular endomorphism
$ A:\T^n\rightarrow\T^n /A(x_1,...,x_n)= (8x_1,2x_2,...,2x_n) $. Observe that $p=(\frac{1}{4},0,...,0,\frac{1}{4})\in\T^{n}$ satisfies $Ap =\left( 0, 0, ..., 0, \frac{1}{2} \right) $ and $ A^2p=0$. \\ This point $p$ is the center of a ball where a perturbation will be performed in order to obtain the map $f$ we seek. To construct the perturbation we need to fix a series of technical parameters that will define $f$. We fix all of them at the beginning, at the same time, hoping to keep the construction transparent and to make clear how they depend on each other.\\
Start with $r>0$ satisfying the following conditions:
\begin{itemize}\label{p1}
\item $A(B_{(p,r)})\cap \overline{B_{(p,r)}}=\emptyset $,
\item $ A^{-1}(B_{(p,r)})\cap \{ x\in \R ^{n} / x_{n}=0 \}=\emptyset$,
\item $B_{(p,r)}\cap B_{(A.p ,r)}=B_{(A.p ,r)}\cap B_{(0,r)}= \emptyset.$
\end{itemize}
We choose $r$ in this way so that no point of $B_{(p,r)}$ remains inside it under forward iteration by $A$, and no point of $B_{(p,r)}$ comes from the hyperplane $\{ x_{n}=0 \}$ under iteration by $A$.\\
Now that $r$ is set, fix a second parameter $\theta$ such that $0< \theta < \frac{r}{2}$ and define a function $\psi :\R\to\R$ of class $C^{\infty}$ with a unique critical point at $\frac{1}{16}$, with $\psi (\frac{1}{16})=4$ and $\psi ^{'}(\frac{1}{16})=\psi ^{''}(\frac{1}{16})=0$, with $\psi (x)=0$ for all $ x$ in the complement of $(\frac{1}{16}-\theta ,\frac{1}{16}+\theta )$, and with an axis of symmetry along the line $x=\frac{1}{16}$, as shown in Figure \ref{figura11} (a).
\begin{figure}[ht]
\begin{center}
\subfigure[]{\includegraphics[scale=0.33]{graficopsii.eps}}
\subfigure[]{\includegraphics[scale=0.37]{graficophiprima.eps}}
\caption{Graphs of $\psi$ and $\varphi'$}\label{figura11}
\end{center}
\end{figure}
Choose now a third, arbitrary real parameter $a \in (0,\frac{3}{7})$ (which will play a key role shortly, in Section \ref{dinamf}) and finish by setting a last parameter $\delta $, with $0<\delta < 2\theta $, verifying the following condition: since the derivative of $\psi$ is bounded once $\theta$ has been fixed, call $ M := M(\theta) = max_{x \in \R} \{ | \psi'(x) | \} $ and impose on $\delta$ that $2.M.r.\delta.(1+a)<a.$ \\
Now that all the parameters have been fixed, consider a smooth (class $C^{\infty}$) function $\varphi:\R\to\R$ such that:
\begin{itemize}
\item $\varphi' $ is as in Figure \ref{figura11} (b),
\item $\varphi (\frac{1}{4})=0$, $\varphi' (\frac{1}{4})=\frac{1}{2}$, $\varphi^{''}(\frac{1}{4})\neq 0$ , $\varphi' (\frac{1}{4} +\frac{\delta}{8} )=1$,
\item $\varphi (x)=0$ for all
$ x \notin [\frac{1}{4}-\frac{\delta}{4},\frac{1}{4}+\frac{3\delta}{4} ]$.
\item $-\frac{3}{4} \leq \varphi'(x) \leq 1$ for all $x \in [\frac{1}{4}-\frac{\delta}{4},\frac{1}{4}+\frac{3\delta}{4} ]$.
\end{itemize}
\begin{rk}
$max \{ |\varphi (x)|: \ x \in \R \}\leq \delta.$
\end{rk}
We are now in a position to define the perturbation of $A$, depending on $r$, $\theta$, $a$ and $\delta$, which for simplicity we call just $f$: \begin{equation}\label{mapaefe}
f:= f_{r,\theta,a, \delta}:\T^n\to \T^n / f(x_1,...,x_n)= \left( 8x_1,2x_2,..., 2x_n-\varphi(x_n).\psi \left( \sum_{k=1} ^{n-1} x_k^2 \right) \right).
\end{equation}
To make the reading easier we will denote it by $ f(x_1,...,x_n)= \left( 8x_1,2x_2,..., 2x_n-\varphi.\psi \right)$, omitting the evaluation points appearing in the definition.
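A numerical sketch of the construction may help fix ideas. The Python fragment below implements $A$ and $f$ with schematic stand-ins for $\psi$ and $\varphi$ (the parameter values are hypothetical, the stand-in $\varphi$ is not compactly supported and has $\varphi'(\frac{1}{4})=1$ rather than the normalisation $\frac{1}{2}$ used in the text) and checks that $f(p)=Ap=(0,...,0,\frac{1}{2})$ and that $f$ coincides with $A$ away from the support of the bumps:

```python
import math

n = 3                          # illustrative dimension
theta, delta = 0.02, 0.005     # hypothetical bump parameters

def psi(s):                    # C^infty bump: psi(1/16) = 4, support (1/16-theta, 1/16+theta)
    u = (s - 1/16) / theta
    return 4*math.exp(1 - 1/(1 - u*u)) if abs(u) < 1 else 0.0

def phi(t):                    # schematic stand-in: phi(1/4) = 0, concentrated near 1/4
    u = (t - 0.25) / delta
    return delta * u * math.exp(-u*u)

def A(x):                      # the linear endomorphism diag(8, 2, ..., 2) on T^n
    return [(8*x[0]) % 1] + [(2*xk) % 1 for xk in x[1:]]

def f(x):                      # the perturbed map f
    s = sum(xk*xk for xk in x[:-1])
    y = A(x)
    y[-1] = (2*x[-1] - phi(x[-1])*psi(s)) % 1
    return y

p = [0.25] + [0.0]*(n-2) + [0.25]
print(f(p) == A(p) == [0.0]*(n-1) + [0.5])   # f(p) = Ap = (0, ..., 0, 1/2)
print(f([0.1]*n) == A([0.1]*n))              # away from the bump supports: f = A
```

Both checks print `True`; the perturbation only acts where $\varphi$ and $\psi$ are simultaneously active.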
\begin{rk}\label{obs1} It is straightforward that:\\
1- $f \in C^{\infty}$.\\
2- $f_{|B^c_{(p,r)}}=A$. \\
3- $ f(p)= Ap =\left( 0, 0, ..., 0, \frac{1}{2} \right)$ and $f^2(p)= A^2p=0$.\\
4- The differential $D_x f$ at $x=(x_1,...,x_n)$ is
\begin{equation}\label{ecuacion1}
D_xf =
\left( \begin{array}{ccccc}
8 & 0 & \cdots & 0 & 0 \\
0 & 2 & \cdots & 0 & 0 \\
\vdots & \vdots & & \vdots & \vdots \\
0 & 0 & \cdots & 2 & 0 \\
-2.x_1.\varphi.\psi' & -2.x_2.\varphi.\psi'& \cdots & -2.x_{n-1}.\varphi.\psi' & 2-\varphi'.\psi \\
\end{array}
\right).
\end{equation}\\
\end{rk}
\subsection{Dynamics of $f$.}\label{dinamf}
In this subsection we prove that the dynamical behavior of $f$ is given by the existence of strong unstable cones at every point in the direction of the canonical first coordinate. We recall some definitions from \cite{kh} first.
\begin{defi}\label{cono}
Let $x \in M$, we call \textbf{cone} of parameter $a$ and vertex $x$ to $$C^{u}_{a} (x)=\left\{ (v_{1},...,v_{n}) \in T_x M / \frac{\Vert (v_{k+1},...,v_{n}) \Vert}{\Vert (v_{1}, v_2,...,v_k) \Vert}<a \right\}$$ for each $k \in \N \cap [1,n-1]$.
\end{defi} In this case we say the unstable cone is of \textbf{index} $n-k$.
\begin{defi}
We say a map $f$ admits an \textbf{unstable cone} of parameter $a$ and vertex $x \in M$ if there exists $C^u_a(x) \subset T_xM$ such that $\overline{D_x f(C^u_a(x)) }\setminus \{0\} \subset C^u_a(f(x))$.\\
We say $f$ admits unstable cones of parameter $a$ if it admits an unstable cone of parameter $a$ and vertex $x$ at every point $x \in M$.
\end{defi}
\begin{prop}
{\textsl{Existence of unstable cones is $C^1$ robust}}:\\
If $f$ admits unstable cones of parameter $a$ then there exists a neighborhood $\mathcal{U}_f \in C^1$ of $f$ such that $\forall g \in \mathcal{U}_f: g$ admits unstable cones of parameter $b$ with $b \leq a$.
\end{prop}
In the context of diffeomorphisms, the existence of unstable cones is equivalent to a weak form of hyperbolicity called dominated splitting. The subspace generated by the vectors in the denominator of the cone is regarded as the unstable space. Though this condition is not an equivalence for endomorphisms, the dynamical behavior of a small parameter cone is virtually the same as that of a strong unstable direction.\\ As stated above, the most relevant dynamical feature of our map $f$ is the existence of strong unstable cones at every point. We give the proof of this assertion below. Observe that the choice of the technical third parameter $a$ before fixing $\delta$ in the construction of $f$ is exactly what we need for the cones to exist. Recall that we imposed on $\delta$ the condition $2.M.r.\delta.(1+a)<a$ for the parameter $a$ and the norm $M$ of the derivative of $\psi$. Observe also that every $\delta_0$ with $0 < \delta_0 < \delta$ satisfies the claims stated below for $\delta$.\\
We will make use of the following notations in the proof of the lemma ahead and also in the following sections:
\begin{itemize}
\item If $v=(v_{1},v_{2},...,v_{n})$ and $h<n$, then $\tilde{v}_{h}=(v_{1},v_{2},...,v_{h})$ and $\tilde{v}_{n-1} = \tilde{v}$.
\item If $v=(v_{1},v_{2},...,v_{n})$ and $w=(w_{1},w_{2},...,w_{n})$ then $ \langle v,w\rangle_{h} = \sum_{k=1}^{h} v_{k}.w_{k} $.
\end{itemize}
\begin{lema} \label{3}
\textbf{Existence of unstable cones for $f$.} Given $a \in (0, \frac{3}{7}) $:
\begin{enumerate}
\item $ C^{u}_{a} (x)=\lbrace (v_{1},...,v_{n})/ \frac{\Vert (v_{2},...,v_{n}) \Vert}{\vert v_{1}\vert}<a \rbrace$ satisfies $\overline{D_{x}f(C^{u}_{a}(x))} \setminus \{0\} \subseteq C^{u}_{a}(f(x))$.
\item For all $ v \in C^{u}_{a} (x)$ it holds that $ \Vert D_{x}f (v) \Vert > 7 \Vert v \Vert $.
\end{enumerate}
\end{lema}
\pf
\begin{enumerate}
\item By Equation \ref{ecuacion1} we have $\forall v=(v_{1},v_{2},...,v_{n}) \in C^{u}_{a} (x):$
$$u=(u_1,...,u_n):=D_{x}f (v) = (8.v_{1},2.v_{2},...,-2.\langle x,v\rangle_{n-1}.\varphi.\psi' + v_{n}.(2-\varphi'.\psi)). $$
Performing calculations we have $$ \frac{\Vert (u_{2},...,u_{n}) \Vert}{\vert u_{1}\vert} = \frac{\Vert (2.v_{2},...,2.v_{n-1}, -2.\langle x,v\rangle_{n-1}.\varphi .\psi'+ v_{n}.(2-\varphi' .\psi)) \Vert}{\vert 8.v_{1} \vert} \leq$$ $$ \leq \frac{\Vert (2.v_{2},...,2.v_{n-1}) \Vert}{\vert 8.v_{1}\vert} + \frac{2.\Vert \tilde{x} \Vert.\Vert \tilde{v} \Vert.\vert \varphi \vert.\vert \psi' \vert}{\vert 8.v_{1} \vert} + \frac{\vert 2-\varphi' .\psi \vert. \vert v_{n}\vert}{\vert 8.v_{1} \vert} < \frac{a}{4} + 2.r.(\frac{1+a}{8}).\delta.M +\frac{5.a}{8} < a; $$ \\ where in the first inequality we use the triangle inequality and Cauchy-Schwarz, and in the second one we use:
\begin{itemize}
\item $ v \in C^{u}_{a} (x)$,
\item $\Vert \tilde{x} \Vert \leq \Vert x \Vert < r$,
\item $ \frac{\Vert \tilde{v} \Vert}{|8.v_1|}\leq \frac{|v_1|+\Vert v_2,...,v_{n-1} \Vert}{|8.v_1|}\leq \frac{1+a}{8}$,
\item $ \vert \varphi \vert < \delta $,
\item $ \vert \psi' \vert < M $,
\item $ \vert 2-\varphi' .\psi \vert \leq 5 $ since $ \frac{-3}{4} \leq \varphi' \leq 1 $ and $ 0 \leq \psi \leq 4$.
\end{itemize}
And in the third one we use the condition imposed on $\delta$.$\Box$ \\
\item $\forall v=(v_{1},v_{2},...,v_{n}) \in C^{u}_{a} (x)$: $$\left( \frac{\Vert D_{x}f (v) \Vert}{7.\Vert v \Vert}\right) ^{2} = \frac{64.v_{1}^{2}+4.\sum_{j=2}^{n-1}.v_{j}^{2}+(-2.\langle x,v\rangle_{n-1}.\varphi .\psi'+v_{n}.(2-\varphi' .\psi))^{2}}{49.\sum_{j=1}^{n}v_{j}^{2}} \geq$$
$$ \geq \frac{64.v_{1}^{2}}{49.\sum_{j=1}^{n}v_{j}^{2}} > \frac{64 }{49(1+a^{2})}> \frac{58 }{49(1+a^{2})} > 1, \forall a \in \left(0,\frac{3}{7}\right).\Box $$
\end{enumerate}
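Both claims can be probed numerically. The following Python sketch builds the Jacobian of Equation (\ref{ecuacion1}) with schematic stand-ins for the bump functions (all concrete formulas and parameter values are hypothetical, chosen only to satisfy the bounds used in the proof; $\delta$ is derived from a strengthened form of the smallness condition), samples vectors inside the cone near $p$, and verifies cone invariance and the expansion factor $7$:

```python
import math, random

n, r = 4, 0.05                 # dimension and ball radius (illustrative)
theta, a = 0.02, 0.3           # bump width, cone parameter a in (0, 3/7)

def psi(s):                    # bump with psi(1/16) = 4, support (1/16-theta, 1/16+theta)
    u = (s - 1/16) / theta
    return 4*math.exp(1 - 1/(1 - u*u)) if abs(u) < 1 else 0.0

def dpsi(s):                   # analytic derivative of psi
    u = (s - 1/16) / theta
    return psi(s) * (-2*u / (1 - u*u)**2) / theta if abs(u) < 1 else 0.0

M = max(abs(dpsi(1/16 + t*theta/1000)) for t in range(-999, 1000))
delta = 0.5 * a / (2*M*(1 + a))    # enforce 2.M.delta.(1+a) < a (stronger than in the text)

def phi(t):                    # schematic phi: phi(1/4) = 0, |phi| <= delta, phi' in [-3/4, 1]
    u = (t - 0.25) / delta
    return delta * u * math.exp(-u*u)

def dphi(t):
    u = (t - 0.25) / delta
    return (1 - 2*u*u) * math.exp(-u*u)

def Df(x):                     # the Jacobian of the map f
    s = sum(xk*xk for xk in x[:-1])
    J = [[0.0]*n for _ in range(n)]
    J[0][0] = 8.0
    for i in range(1, n-1):
        J[i][i] = 2.0
    for j in range(n-1):
        J[n-1][j] = -2*x[j]*phi(x[-1])*dpsi(s)
    J[n-1][n-1] = 2 - dphi(x[-1])*psi(s)
    return J

def in_cone(v):                # tail norm smaller than a times |v_1|
    return math.hypot(*v[1:]) < a*abs(v[0])

random.seed(1)
ok = True
for _ in range(500):           # sample near p, where both bumps are active
    x = [0.25 + random.uniform(-r, r)/2] + \
        [random.uniform(-r, r)/4 for _ in range(n-2)] + \
        [0.25 + random.uniform(-2, 2)*delta]
    v = [1.0] + [random.uniform(-1, 1)*a/math.sqrt(n) for _ in range(n-1)]
    u = [sum(Jrow[j]*v[j] for j in range(n)) for Jrow in Df(x)]
    expand = math.hypot(*u) > 7*math.hypot(*v)
    ok = ok and in_cone(v) and in_cone(u) and expand
print(ok)                      # D_x f preserves the cone and expands by more than 7
```

The check passes comfortably: the first coordinate is stretched by $8$, while the perturbation terms in the last row stay small thanks to the smallness condition on $\delta$.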
\begin{clly}\label{coroconos}
There exists a neighborhood $\mathcal{U}_f \in C^1$ of $f$ such that for all $g \in \mathcal{U}_f:g$ admits unstable cones satisfying for all $ v \in C^{u} (x)$ : $ \Vert D_{x}g (v) \Vert \geq 7 \Vert v \Vert $.
\end{clly}
\begin{rk}
$\vec{e_1}$ can be regarded as a strong unstable direction for $f$.
\end{rk}
\subsection{The critical set of $f$}
Recall that $S_f=\{x \in \T^{n} /det ( D_xf )=0\}$ is the critical set of $f$. Equation \ref{ecuacion1} provides $det(D_xf)=8 \cdot (2)^{n-2} \cdot (2-\varphi'.\psi) $ which translates into $$S_f=\{x \in \T^{n} / 2-\varphi'.\psi =0\rbrace.$$
\begin{rk}\label{rkcritset}
1- The point $p \in S_{f} $, so $S_f\neq \emptyset$.\\
2- The last column in the Jacobian (\ref{ecuacion1}) is null at every point belonging to $S_f $. Hence, for every point $ x \in S_f$ the kernel $ker(D_x f)$ is spanned by the last canonical vector $\vec{e_n} $.\\
3- \textbf{The critical set $S_f$ is persistent} in the sense that for all $k \geq 1$ there exists a $C^k$ neighborhood of $f$ all of whose maps have nonempty critical set. Take the points $q_1=(\frac{1}{4},0,...,0, \frac{1}{4} + \frac{\delta}{4})$ and $q_2=(\frac{1}{4},0,...,0, \frac{1}{4} + \frac{\delta}{8})$, both in $B_{(p,r)}$. Evaluating determinants we obtain \textit{det}$(D_{q_1}f)= 5\cdot 8\cdot 2^{n-2}$ and \textit{det}$(D_{q_2}f)= -2\cdot 8 \cdot 2^{n-2}$. Since these determinants have opposite signs, the same holds for any $g$ close enough to $f$ in the $C^1$ topology, so by continuity $det(D_xg)$ must vanish at some point. Therefore there exists $\mathcal{U}_f \in C^{1}$ such that $\forall g \in \mathcal{U}_f$: $S_g\neq\emptyset$.
\end{rk}
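Because the Jacobian (\ref{ecuacion1}) is lower triangular, its determinant is the product of the diagonal entries, which is the formula used above. The sketch below (with hypothetical bump stand-ins; here $\varphi'(\frac{1}{4})=1$ rather than the normalisation in the text) evaluates $det(D_xf)$ at two points of a vertical segment through $p$ and detects the sign change that forces a nearby critical point:

```python
import math

n, delta, theta = 4, 0.005, 0.02    # hypothetical parameters

def psi(s):                          # bump with psi(1/16) = 4
    u = (s - 1/16) / theta
    return 4*math.exp(1 - 1/(1 - u*u)) if abs(u) < 1 else 0.0

def dphi(t):                         # schematic phi' with dphi(1/4) = 1
    u = (t - 0.25) / delta
    return (1 - 2*u*u) * math.exp(-u*u)

def det_Df(x):                       # lower-triangular Jacobian: product of the diagonal
    s = sum(xk*xk for xk in x[:-1])
    return 8 * 2**(n-2) * (2 - dphi(x[-1])*psi(s))

ray = lambda t: [0.25] + [0.0]*(n-2) + [t]
d_near = det_Df(ray(0.25))           # phi'*psi = 4 here, so the last factor is -2
d_far  = det_Df(ray(0.4))            # phi' vanishes: det = 8*2^(n-1) = det(A)
print(d_near < 0 < d_far)            # sign change forces a critical point in between
```

Any $C^1$-small perturbation of the map keeps both signs, which is exactly the persistence mechanism of the remark.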
\subsubsection{Classification of $S_f$}\label{Sfessubvariedad}
The objective of this subsection is to prove that the critical set $S_f$ is a submanifold of $\T^n$ of codimension 1. We begin by recalling some basic definitions and theorems from differential geometry that will be necessary to carry on with our proof. Again, for more insight on these concepts the reader may refer to \cite{gg}. Remember that $M$ and $N$ denote differentiable manifolds with $dim(M) \geq dim(N)$, $x \in M$, $y \in N$, $W \subset N$ a submanifold, and $f:M \rightarrow N$ a differentiable map.
\begin{defi}\label{transve}
We say that $f$ is \textbf{transversal} to $W$ at $f(x) \in N$ , and we denote it by $f \transv W$ at $f(x)$, if any of the two following conditions holds: $ f(x) \notin W$ or $T_{f(x)}W + D_xf(T_xM)=T_{f(x)} N $.
\end{defi}
\begin{defi}
We say that $f \transv W$ if for all $x \in f^{-1}(W)$ holds $f \transv W$ at $f(x)$.
\end{defi}
\begin{prop}
Transversality is a \textbf{$C^1$-open} property ($C^1$-stable), i.e., if $f \transv W$ then there exists a neighborhood $\mathcal{U}_f \in C^1$ of $f$ such that $\forall g \in \mathcal{U}_f: g \transv W$.
\end{prop}
\begin{thm}\label{subvartransversal}
{\bf{Preimage of transversal submanifold}}:
Let $f:M\rightarrow N$ be a differentiable map. $W$ a submanifold of $N$ and $f \transv W$.
Then $f^{-1}(W)$ is a submanifold of $M$ of the same codimension as $W$ in $N$.
\end{thm}
\begin{thm}\label{valorregular}
{\bf{Preimage of a regular value}}:
Let $f:M\rightarrow N$ be differentiable and $y \in N$ a regular value.
Then $f^{-1}(y)$ is a submanifold of $M$ with codimension equal to the dimension of $N$. Also, $\forall x \in f^{-1}(y)$ holds $T_x\left( f^{-1}(y)\right) =ker(D_xf)$.
\end{thm}
\vspace{0.2cm}
We start developing now the ideas to prove that $S_f$ is a submanifold of $\T^n$.
Let $ \mathcal{M}_{n}(\R)$ be the set of matrices with real coefficients and size $n \times n$, and let $\mathcal{R}_{k}$ be the subset of matrices of rank $k$. Then $\mathcal{R}_{k}$ is a submanifold of $\mathcal{M}_{n}(\R)$ (\cite{gg}, \textit{chapter 2, proposition 5.3}). In particular, if $k=n-1$ then $\mathcal{R}_{n-1}$ has dimension $n^2-1$. Define now a map $h$ depending on $f$ by
\begin{equation}\label{mapah}
h:= h_{f}:\T^n \rightarrow \mathcal{M}_{n }(\R) / h\left( x\right) = D_{x}f
\end{equation}
which assigns to every point $x$ of $\T^{n}$ the Jacobian matrix of $f$ at $x$. Since $f$ is smooth so is $h$. Observe that $h(S_f)\subset \mathcal{R}_{n-1}$ since for all $ x\in S_f$ the rank $rk(D_{x}f)=n-1 $, but also since $h^{-1}\left(\mathcal{R}_{n-1}\right)\subset S_f$ then $S_f=h^{-1} \left(\mathcal{R}_{n-1}\right)$.\\
Proving that $ h \transv \mathcal{R}_{n-1}$ would imply our claim via Theorem \ref{subvartransversal}. \\Observe that if $x \notin S_f$, then $h(x) \notin \mathcal{R}_{n-1}$ hence $ h \transv \mathcal{R}_{n-1}$. \\
It only remains to see that the transversality condition holds when $x \in S_f$.\\ By definition $h \transv \mathcal{R}_{n-1}$ at $x \Leftrightarrow T_{h(x)}\left(\mathcal{M}_{n}(\R)\right)=T_{h(x)}\left(\mathcal{R}_{n-1}\right) + \left( D_{x}h \right)(T_{x}(\T^n))$. Or equivalently, $$h \transv \mathcal{R}_{n-1} \mbox{ at } x \Leftrightarrow T_{h(x)}\left(\R^{n^2}\right)=T_{h(x)}\left(\mathcal{R}_{n-1}\right) + \left( D_{x}h\right)(\R^n).$$
We determine $ T_{h(x)}\left(\mathcal{R}_{n-1}\right)$ and $Im \left( D_{x}h\right)$ when $x \in S_f$ to check that the transversality condition holds.
\begin{lema} \label{lema 1} For every critical point $x$ it holds that $$T_{h(x)}\left(\mathcal{R}_{n-1}\right)=\lbrace (x_1,...,x_{n^2})\in \R^{n^2} /x_{n^2}=0\rbrace. $$ \end{lema} \pf
Consider the determinant map $\xi:\mathcal{R}_{n-1}\rightarrow\R / \xi(M)= det(M)$. It follows immediately that $Im( \xi )=\{0\}$. \\ For all $ M \in \mathcal{R}_{n-1}$ the differential of $\xi$ is $D_{M} \xi=(A_{11} , A_{12} ,..., A_{nn} )$, where $A_{ij}$ is the cofactor of the element in the $i$-th row and $j$-th column of $M$. Since $M \in \mathcal{R}_{n-1}$, at least one of these cofactors is not null. This implies that $D_{M} \xi$ has rank at least one for all $ M \in \mathcal{R}_{n-1}$, which means that $D_{M} \xi$ is surjective for all $ M \in \xi^{-1} (0)$. Consequently, $0$ is a regular value for $\xi$, so according to Theorem \ref{valorregular} the tangent space to $\mathcal{R}_{n-1}$ at $h(x)$ is $T_{h(x)} (\xi^{-1} \lbrace0\rbrace) = ker (D_{h(x)}\xi)$. Now, observe that for every critical point $x$ we have $D_{h(x)}\xi=(0,...,0, 8.(2^{n-2}))$, since the restriction $h_{|S_{f}}$ has its last column null. This implies that for every critical point $ x \in S_f$: $ker (D_{h(x)}\xi) = \lbrace (x_1,x_2,...,x_{n^{2} -1}, 0)\rbrace \subset \R ^{n^2} $. Combining both equalities above we obtain $T_{h(x)}\left(\mathcal{R}_{n-1}\right)=T_{h(x)} (\xi^{-1} \lbrace0\rbrace) = ker (D_{h(x)}\xi)$, which is the thesis. $\Box$
\begin{lema} \label{lema 2} For every critical point $x$ the vector $(0,0,...,0,1)$ belongs to the image: $$ (0,0,...,0,1) \in Im \left( D_{x}h\right) . $$ \end{lema}
\pf
Since $h:\T^n \rightarrow \mathcal{M}_{n}(\R)$ and $f \in C^2$, we have $h \in C^1$; therefore the differential $D_x h \in \mathcal{M}_{n^2 \times n}(\R)$ exists and is continuous. The statement above is equivalent to saying that the Jacobian $D_x h$ has a non-zero element in its last row. \\ An explicit calculation provides:
\[
(D_xh)_{ij} =
\begin{cases}
\text{$0$} &\quad\text{if $i<n^2-n+1$}\\
\text{$-2.\varphi.\psi'-4.x_{j}^{2}.\varphi.\psi''$} &\quad\text{if $n^2-n<i<n^2$ and $j=i-n^2+n$} \\
\text{$-4.x_{i-n^2+n}.x_j.\varphi.\psi''$} &\quad\text{if $n^2-n<i<n^2$ and $j\not\in \lbrace i-n^2+n, n\rbrace$}\\
\text{$-2.x_{i-n^2+n}.\varphi'.\psi'$} &\quad\text{if $n^2-n<i<n^2$ and $j=n$}\\
\text{$-2x_j.\varphi'.\psi'$} &\quad\text{if $i=n^2$ and $j\neq n$}\\
\text{$-\varphi''.\psi$} &\quad\text{if $i=n^2$ and $j=n$}\\
\end{cases}
\]
\\
Consider the last row, $i=n^{2}$:\\ If $\varphi''\neq 0$, since $\psi >0$ at all points in $S_f$, we find a non-zero entry at $j=n$. \\
If $\varphi''= 0$, looking at the graph of $\varphi'$ (Figure \ref{figura11}) there are only three points where this condition holds. Recall that $\varphi'.\psi=2$ for all $x \in S_f$. Discard the two points which have negative image under $\varphi'$, since they do not belong to the critical set; the only remaining case is $x_n= \frac{1}{4}+ \frac{\delta}{8}$. \\ It only remains to see that if $x_n= \frac{1}{4}+ \frac{\delta}{8}$ there exists $j$ such that $-2.x_j.\varphi'.\psi' \neq 0$, or equivalently $x_j.\psi' \neq 0$, since at this value $\varphi'(\frac{1}{4}+ \frac{\delta}{8})=1$. \\ We now discuss two cases, considering Equation (\ref{mapaefe}) and Figure \ref{figura11}:\\ If $\psi'=0$, then $\sum_{k=1}^{n-1} x_k^2=\frac{1}{16}$. Since $ \psi(\frac{1}{16})=4$, then $\varphi'.\psi = 4$, which implies $x \notin S_f$.\\ If $\psi' \neq 0$ and $x_j = 0$ for all $j$, then since $ \psi(0)=0$ we get $ \varphi'.\psi = 0$, which again implies $x \notin S_f $. \\ Therefore, for every critical point $ x \in S_f$ with $ x_n= \frac{1}{4}+ \frac{\delta}{8}$ there exists $j<n$ such that $ x_j.\psi' \neq 0$. $\Box$
\begin{thm}\label{teorema 1}
The critical set $S_f$ is a submanifold of $\T^n$ of codimension 1.
\end{thm}
\pf By Lemmas \ref{lema 1} and \ref{lema 2}, for every critical point $x$ the tangent space $T_{h(x)}\left(\mathcal{R}_{n-1}\right)$ together with the image of $D_{x}h$ spans $\R^{n^2}$, so the transversality condition of Definition \ref{transve} holds and $h \transv \mathcal{R}_{n-1}$. Theorem \ref{subvartransversal} then gives the result. $\Box$
\subsubsection{Geometry of $S_f$ }\label{representacion}
Having established that $S_f$ is a submanifold of $\T^n$, we turn our attention to its geometry. The objective of this section is to prove that the critical set is diffeomorphic to the product of two spheres $$S_f \simeq S^{n-2} \times S^1.$$ As an introduction, we analyze the problem in low dimensions first and then generalize. Remember that for vectors $v=(v_{1},v_{2},...,v_{n}) \in \R^n$ and $h<n$ we denote $\tilde{v}_{h}=(v_{1},v_{2},...,v_{h})$ and $\tilde{v}_{n-1} = \tilde{v}$.\\
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=0.13]{nigual2.eps}
\caption{$S_f$ for $n=2$.}\label{nigual2}
\end{center}
\end{figure}
\underline{Case $n=2$}:\\
This case is fully studied in \cite{ip}. We only observe that the map $f$ takes the form $f:\T^2\to \T^2 / f(x,y)= \left( 8x,2y-\varphi(y).\psi ( x^2 ) \right) $ and give a graph (Figure \ref{nigual2}) showing explicitly that $S_f \simeq S^{0} \times S^1$. \\
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=0.13]{nigual3.eps}
\caption{$S_f$ for $n=3$.}\label{nigual3}
\end{center}
\end{figure}
\underline{Case $n=3$}:\\
Here $f$ is $f:\T^3\to \T^3 / f(x,y,z)= \left( 8x,2y,2z-\varphi(z).\psi ( x^2 + y^2 ) \right)$ and consequently the critical set is $S_f=\{(x,y,z) \in \T^{3} / 2-\varphi'(z).\psi(x^2+y^2) =0\rbrace$. Since $\psi \leq 4$, if $\varphi' < \frac{1}{2}$ then $\psi \cdot \varphi' <2$, which implies $\varphi'(z) \geq \frac{1}{2}$ for all $(x,y,z) \in S_f$. In turn $\varphi'(z) \in [\frac{1}{2},1]$ and $\psi(x^2+y^2) \in [2,4]$.\\ Turn now to the graphs of $\varphi'$ and $\psi$ (Figure \ref{figura11});
let $[\frac{1}{4},z_1]$ and $ [d_0,d_1]$ be the intervals such that $ \varphi '(z) \in [\frac{1}{2},1] \Leftrightarrow z \in [\frac{1}{4},z_1] $ and $ \psi(x^2+y^2) \in [2,4] \Leftrightarrow x^2+y^2 \in [d_0,d_1] $.\\
For each $z \in [\frac{1}{4},z_1]$ there exists a real value $ d=d(z) \in [d_0,d_1]$ with $\psi(d(z))=\frac{2}{\varphi'(z)}$; it holds that $\{x^2+y^2=d(z)\} \times {\{z\}} \subset S_f$. Moreover, there is a unique such value, $d=\frac{1}{16}$, in the cases $z=\frac{1}{4}$ and $z=z_1$, and there exist \emph{two} distinct values $d(z)<D(z)$ for each $z \in (\frac{1}{4},z_1)$ that verify this property (symmetric with respect to $\frac{1}{16}$). This situation can be described as follows: while $z$ 'runs through' the interval $[\frac{1}{4},z_1]$, the critical set gets foliated as the union of two circles. These circles are obtained by intersecting the boundaries of two balls centered at $(0,0,z)$, with radii $\sqrt{d(z)}$ and $\sqrt{D(z)}$, with the 'horizontal' planes defined by constant values of $z$, as Figure \ref{nigual3} shows (observe that these circles are spheres of codimension 2 in $\T^3$).\\
We have shown then that $S_f=\bigcup_{z \in [\frac{1}{4},z_1]}(\{x^2+y^2=d(z)\} \cup \{x^2+y^2=D(z)\}) \times \{z\} $ or, equivalently, $S_f=\bigcup_{z \in [\frac{1}{4},z_1]}(\partial B_{(\tilde{0},\sqrt{d(z)})} \cup \partial B_{(\tilde{0},\sqrt{D(z)})} ) \times \{z\} $. Finally, since $z=\frac{1}{4}$ and $z=z_1$ share the same value $d(z)=D(z)=\frac{1}{16}$, it is possible to identify the boundary manifolds $\{x^2+y^2=\frac{1}{16}\}\times\{\frac{1}{4}\}$ and $\{x^2+y^2=\frac{1}{16}\}\times\{z_1\}$ to obtain \textbf{$S_f \simeq S^{1} \times S^1$}. Observe that the extremal values for the radii of the balls are attained when $z=\frac{1}{4}+\frac{\delta}{8}$ (they appear in Figure \ref{nigual3} as $B_0(x_\delta)$ and $B_1(x_\delta)$ respectively). \\
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=0.25]{Sf.eps}
\caption{Frontal view of $S_f$ for $n \geq 3$, $x_0=\frac{1}{4}$, $x_1=c$. }\label{Sf}
\end{center}
\end{figure}
Analyzing the \underline{general case} as in the previous ones, we deduce:\\
$S_f=\{x \in \T^{n} / 2-\varphi'.\psi =0\}$ and $\psi \leq 4$, so it has to be $\varphi' \geq \frac{1}{2}$. In consequence, for all critical points $x=(x_1,...,x_n) \in S_f$ it holds that $ \varphi'(x_n) \geq \frac{1}{2}$ and $\psi \in [2,4]$. Turn once more to the graphs of $\varphi'$ and $\psi$ (Figure \ref{figura11}).\\
Let $[\frac{1}{4},c]$ be the interval such that $ \varphi '(x_n) \in [\frac{1}{2},1] \Leftrightarrow x_n \in [\frac{1}{4},c] $.\\
Let $ [d_0,d_1]$ be the interval such that $ \psi(d) \in [2,4] \Leftrightarrow d \in [d_0,d_1] $.\\
For each $x_n \in [\frac{1}{4},c]$ there exists a real value $ d=d(x_n) \in [d_0,d_1]$ such that $\partial B_{(\tilde{0},\sqrt{d(x_n)})} \times {\{x_n\}} \subset S_f$. And just like before, for every $ x_n \in (\frac{1}{4},c) $ there exist \emph{two} distinct values $d(x_n)<D(x_n)$ that satisfy this condition, except for a unique value $d(x_n)=D(x_n)=\frac{1}{16}$ in both cases $x_n=\frac{1}{4}$ and $x_n=c$. What happens is that while $x_n$ 'runs through' the interval $[\frac{1}{4},c]$, the critical set is foliated by the boundaries of two balls of codimension 1, which are spheres of codimension 2 in $\T^n$. Once again, this allows writing the critical set as $S_f=\bigcup_{x_n \in [\frac{1}{4},c]}(\partial B_{(\tilde{0},\sqrt{d(x_n)})} \cup \partial B_{(\tilde{0},\sqrt{D(x_n)})}) \times \{x_n\} $. Moreover, since the balls at $x_n=\frac{1}{4}$ and $x_n=c$ are equal (appearing in Figure \ref{Sf} as $B(x_0)$ and $B(x_1)$ respectively) we can identify them to obtain \textbf{$$S_f \simeq S^{n-2} \times S^1$$} which is our claim. Again, the extremal values for the radii of these balls are attained at the \textit{equator}, when $x_n=\frac{1}{4}+\frac{\delta}{8}$ (they appear in Figure \ref{Sf} as $B_0(x_\delta)$ and $B_1(x_\delta)$ respectively). $\Box$
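To make the two-valued foliation concrete, the following numerical sketch solves $\psi(d)=\frac{2}{\varphi'(x_n)}$ for \emph{model} choices of $\varphi'$ and $\psi$. These profiles are hypothetical (the actual functions are only fixed qualitatively by Figure \ref{figura11}); they merely satisfy $\varphi' \in [\frac{1}{2},1]$ on the interval, with value $\frac{1}{2}$ at the endpoints and maximum $1$ at the equator, and $\psi \leq 4$ with maximum at $d=\frac{1}{16}$. The sketch recovers the coincidence of the two squared radii at the endpoints and their symmetry about $\frac{1}{16}$ in the interior.

```python
import math

# Hypothetical stand-ins for phi' and psi: phi' in [1/2, 1] on [x0, x1]
# (1/2 at the endpoints, 1 at the equator), psi <= 4 with maximum at 1/16.
x0, x1 = 0.25, 0.375          # the interval [1/4, c]; c is illustrative
k = 512.0                      # curvature of psi, chosen so psi(0) = 2

def phi_prime(z):
    return 0.5 + 0.5 * math.sin(math.pi * (z - x0) / (x1 - x0))

def psi(d):
    return 4.0 - k * (d - 1.0 / 16.0) ** 2

def critical_radii(z):
    # Solve psi(d) = 2 / phi'(z): squared radii of the critical spheres
    # at height x_n = z.
    rhs = (4.0 - 2.0 / phi_prime(z)) / k
    s = math.sqrt(max(rhs, 0.0))
    return 1.0 / 16.0 - s, 1.0 / 16.0 + s   # d(z) <= D(z)

# At the endpoints the two spheres coincide (d = D = 1/16) ...
d_lo, d_hi = critical_radii(x0)
print(d_lo, d_hi)   # both 0.0625

# ... while at interior heights two distinct values, symmetric about
# 1/16, appear.
d, D = critical_radii((x0 + x1) / 2)
print(d, D)
```

With these model profiles the equator produces the extremal radii, exactly as in Figure \ref{Sf}.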
\subsubsection{Classification of the critical points}\label{folds}
We now pay attention to a special type of singularity called \textit{fold points}, which are the most frequent singularities appearing on any map, corresponding to the symbol $(1,0)$ in the Thom-Boardman classification. We pay special attention to these since they are the only type of singularity that will appear explicitly along this work and, most importantly, because near fold points open sets collapse (as Lemma \ref{abiertofold} shows), which can prevent robust transitivity, as we will show happens for the map $f$ studied in the previous sections.\\ In the classical development of singularity theory, fold points (as well as all the others) are defined by their normal forms, and then their properties are established. We will not do so, but follow the path presented in \cite{gg} and give the following definition (we will not discuss their normal form until we need it later in the article):
\begin{defi}\label{defifold}
We say that $x \in S_f$ is a \textbf{fold singularity} if $ ker(D_xf) \transv T_xS_f$.
\end{defi}
We denote by $F_f$ the set of fold points of $f$; i.e. $F_f=\{ x \in S_f / x \mbox{ is a fold point} \}$.
\begin{thm}\label{clasificaciondelospuntoscriticos}
Let $f$ be the map defined by Equation (\ref{mapaefe}). Then, every critical point in $S_f$ except for those that satisfy $x_n=\frac{1}{4}+\frac{\delta}{8}$ is a fold point. Or equivalently, $S_f \setminus F_f \subset \{ x_n= \frac{1}{4}+\frac{\delta}{8} \}$.
\end{thm}
\pf
Recall observation 2 in Remark \ref{rkcritset}. It provides $ker(D_x f)=\langle\{\vec{e_n}\}\rangle$ for all $x \in S_f $. Since $dim(T_x S_f)=n-1$, it suffices that $\vec{e_n}=(0,0,...,0,1) \notin T_x S_f$ for $x$ to be a fold point. According to Lemma \ref{lema 2} we know that the tangent space is given by
$T_x S_f = T_x h^{-1}(\mathcal{R}_{n-1})=ker(D_x h)$. In the proof of that Lemma it was shown that if $\varphi^{''} \neq 0 $ then $(0,0,...,1) \in Im(D_x h)$, hence $ (0,0,...,1) \notin ker(D_x h)$, which implies $x \in F_f$. Therefore, whenever a critical point $x$ is not a fold point, then $\varphi^{''} = 0$, which means $ x_n=\frac{1}{4}+\frac{\delta}{8}$. $\Box$
\begin{clly}
$F_f$ is open and has total Lebesgue measure in $S_f$.$\Box$
\end{clly}
\begin{rk}
What Theorem \ref{clasificaciondelospuntoscriticos} shows is that all points in $S_f$ except for those at the 'equators' are fold points. Geometrically it is easy to see why: since $ker(D_xf)$ is generated by $\vec{e_n}$ for all $x \in S_f$, the equators are the only lines where the transversality condition fails to hold.
\end{rk}
\section{A singular endomorphism $H$ of $\T^n$}
Now that we have thoroughly studied the critical set and dynamical behavior of the map $f$, we turn to the construction of the main map of this work, that is, a map $H$ that is $C^2$ robustly transitive but not $C^1$ robustly transitive, which we obtain through a perturbation of $f$. The idea is to see that, given $\ep >0$, around $p$ there exists an open set of $\T^n$ whose image under some $g$ in $\mathcal{U}_{(H,\varepsilon)} \in C^1$ is meager and invariant, which prevents robust transitivity. Finally, we will prove that $H$ is $C^2$ robustly transitive making use of the unstable cones it inherits from $f$.
\subsection{Construction of $H$.}\label{H}
Recall $p=(\frac{1}{4},0,..,0,\frac{1}{4}) \in S_{f} $. We start by showing that around $p$ all critical points have their last coordinate as an implicit function of the others. We show as well that all first and second order derivatives of this implicit function vanish at $p$.
\begin{thm}\label{implicita}
There exist an open neighborhood $U \subset \R^{n-1}$ of $\tilde{p}=(\frac{1}{4},0,...,0)$, an open neighborhood $ V \subset \R$ of $\frac{1}{4}$ and a smooth function $ \phi:U \rightarrow V$ with $\phi(\tilde{p})=\frac{1}{4}$ such that for all $ x=(x_1,...,x_n) \in S_f \cap U\times V $ we have $ x_n=\phi(x_1,...,x_{n-1})$.
\end{thm}
\pf
Recall $S_{f}=\lbrace x \in \T^n /2-\varphi'.\psi =0\rbrace $. Differentiate over $S_f$ along $x_n$ to obtain $ \frac{\partial}{\partial x_{n}}( 2-\varphi'.\psi )=-\varphi'' .\psi$. The evaluation at $p$ is not zero since $\varphi''(\frac{1}{4})\neq 0$.
Apply the Implicit Function Theorem to obtain an open neighborhood $U$ of $\tilde{p}=(\frac{1}{4},0,...,0)$, an open neighborhood $ V$ of $ \frac{1}{4}$ and a smooth function $ \phi:U \rightarrow V $ satisfying $ \phi(\tilde{p})=\frac{1}{4}$ and such that for all $ x \in S_f \cap U \times V$, $x=(\tilde{x},\phi(\tilde{x}))$.$\Box$ \\
We now prove that all first and second order derivatives of $\phi$ vanish at $\tilde{p}$.
\begin{lema}\label{valecero} $\forall k, k' \leq n-1: \frac{\partial}{\partial x_{k}}\phi(\tilde{p})=\frac{{\partial}^2}{\partial x_{k}.x_{k'}}\phi(\tilde{p})=0$.
\end{lema}
\pf
For all $ x \in S_f \cap U \times V$, it holds $ 2-\varphi '(\phi).\psi=0 $. Let $ k\leq n-1$ and differentiate along $x_k$; it holds $ \frac{\partial}{\partial x_k}[2-\varphi'(\phi).\psi]=0 $, then $$ \forall k\leq n-1: -\varphi^{''}(\phi).\frac{\partial}{\partial x_{k}}\phi.\psi-2.\varphi'(\phi).x_{k}.\psi '=0 \therefore \frac{\partial}{\partial x_{k}}\phi=-\frac{2x_{k}.\varphi'(\phi).\psi'}{\varphi^{''}(\phi).\psi}. $$ \\ Let $ k'\leq n-1$ and differentiate now along $x_{k'}$: $$ \frac{{\partial}^2}{\partial x_{k}.x_{k'}}\phi= \left[ \frac{\partial}{\partial x_{k'}}\left(\frac{-2x_k.\varphi'(\phi)}{\varphi^{''}(\phi).\psi}\right)\right].\psi'+ \left(\frac{-2x_k.\varphi'(\phi)}{\varphi^{''}(\phi).\psi}\right) .\left( 2x_{k'} \psi^{''}\right).$$\\ It is clear that both of them are well defined in $U$ and, since $\psi^{'}( \frac{1}{16})=\psi^{''}( \frac{1}{16})=0$, they vanish at $\tilde{p}$.$\Box$
\begin{clly}
$\forall\varepsilon' >0, \exists U' \subset U$ open neighborhood of $\tilde{p}$ in $\R^{n-1} $ such that \\$\forall x \in U'; \forall k, k' \leq n-1; $ $$ max\left\{\left|\phi(x)-\frac{1}{4}\right|,\left|\frac{\partial}{\partial x_{k}}\phi(x)\right|,\left|\frac{{\partial}^2}{\partial x_{k}.x_{k'}}\phi(x)\right|\right\}<\varepsilon'. $$
\end{clly}
\pf Apply Lemma \ref{valecero} and the definition of continuity.$\Box$
\begin{clly}
$\forall\varepsilon' >0, \exists U' \subset U$ open neighborhood of $\tilde{p}$ in $\R^{n-1}$ such that \\$ \forall x \in U'; \forall k \leq n-1;$ $$ \left| \phi(x) -\frac{1}{4} \right| < \varepsilon'\left\|x-\tilde{p}\right\|^2 \mbox{ and } \left| \frac{\partial}{\partial x_k} \phi(x) \right| < \varepsilon'\left\|x-\tilde{p}\right\|. $$
\end{clly}
\pf Apply Taylor's Theorem to $\phi$ at $\tilde{p}$ together with Lemma \ref{valecero}.$\Box$
\\
We turn our attention now to the image $f(S_f)$ of the critical set under $f$. Here again, near $f(p)$ all points in $f(S_f)$ have their last coordinate as a function $\Phi$ of the previous ones, and all first and second order derivatives of this function $\Phi$ vanish at $\widetilde{f(p)}$. Observe that, since $A(p)=f(p)=(\tilde{0},\frac{1}{2})$, we have $ \widetilde{A(p)}=\widetilde{f(p)}=\tilde{0}$. \\
To make the reading easier, throughout the rest of the section we write $\tilde{\phi}$ for $\phi(\tilde{x})$, for all $\tilde{x} \in U$. Theorem \ref{implicita} guarantees that for all points $x$ in $S_f \cap U \times V$ it holds that $$f_{|S_f \cap U\times V}(x_1,...,x_n)=(8x_1,2x_2,...,2x_{n-1},2\tilde{\phi}-\varphi(\tilde{\phi}).\psi).$$
Observe now that, since the matrix $\widehat{A}$ at the start of Section 3 is invertible, we can define a map $ A^{-1}:\R^n \rightarrow \R^n /A^{-1}(x_1,...,x_n)= (\frac{x_1}{8},\frac{x_2}{2},...,\frac{x_n}{2}) $. This map, though not defined on $\T^n$, is useful for finding the analytic expression of the function $\Phi$ we seek, since a change of variables $z:=\widetilde{A(x)}$ enables writing $$f_{|{S_f \cap U\times V}}(x)=(z,2\tilde{\phi}(A^{-1}((z,0)))-\varphi(\tilde{\phi}(A^{-1}((z,0)))).\psi(A^{-1}((z,0)))).$$ Define $$\Phi:\R^{n-1}\rightarrow \R / \Phi(z)=[2.(\tilde{\phi}\circ A^{-1})-(\varphi \circ \tilde{\phi}\circ A^{-1}).(\psi \circ A^{-1})](z,0)$$ to have an explicit analytic expression for the last coordinate as a function of the others for all points near $f(p)$ in the set $f(S_f \cap U \times V)$.
\begin{rk}\label{ultimacoordenada}
For all $ x \in f(S_f \cap U \times V)$ the equality $x_n=\Phi(\tilde{x})$ holds.
\end{rk}
\begin{lema}
$\forall k, k' \leq n-1: \frac{\partial}{\partial x_k}\Phi(\tilde{0})=\frac{\partial^2}{\partial x_k.x_{k'}}\Phi(\tilde{0})=0$.
\end{lema}
\pf It suffices to notice that $A_{|\{x_n=0\}}^{-1}$ is linear. Apply the chain rule when differentiating $\Phi$, together with Lemma \ref{valecero}, to obtain the claim. $\Box$\\
Moreover, considering that $A_{|\{x_n=0\}}^{-1}$ contracts and that $|\varphi'|\leq1$, the following corollaries are in order:
\begin{clly}\label{A1}
$\forall\varepsilon' >0, \exists W $ open neighborhood of $\tilde{0}$ in $\R^{n-1}$ such that \\ $\forall x \in W; \forall k, k' \leq n-1; $ $$ max\left\{\left|\Phi(x)-\frac{1}{2}\right|,\left|\frac{\partial}{\partial x_{k}}\Phi(x)\right|,\left|\frac{{\partial}^2}{\partial x_{k}.x_{k'}}\Phi(x)\right|\right\}<\varepsilon'.\Box$$
\end{clly}
\begin{clly}\label{A2}
$\forall\varepsilon' >0, \exists W $ open neighborhood of $\tilde{0}$ in $\R^{n-1}$ such that \\ $\forall x \in W; \forall k \leq n-1;$ $$ \left| \Phi(x) -\frac{1}{2} \right| < \varepsilon'\|x\|^2 \mbox{ and } \left| \frac{\partial}{\partial x_k} \Phi(x) \right| < \varepsilon'\|x\|.\Box$$
\end{clly}
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=0.35]{graficoomega.eps}
\caption{Graph of $\omega$}\label{figura22}
\end{center}
\end{figure}
From now on we proceed to build a diffeomorphism $F$, $C^2$ close to the identity map, that we will compose with $f$ to obtain $H$. Essentially, $F$ 'flattens' $f(S_f)$ near $f(p)$ and is the identity map away from it. \\ To begin with, given $\ep >0$, choose $W$ where Corollaries \ref{A1} and \ref{A2} hold for an $\varepsilon' <\frac{\varepsilon}{50}$. Next, fix a parameter $b \in \R $ with $0 < b <\frac{1}{4}$ such that the ball centered at $\tilde{0}$ with radius $2b$ has closure contained in $W$. Then, given $l \in \R$ such that $0<l < b$, define an auxiliary smooth function $\omega: \R \rightarrow\R$ (as Figure \ref{figura22} shows) satisfying:
\begin{itemize}
\item $\omega_{| B_{(0,l)}}=1$, $\omega_{| B_{(0,2l)}^{c}}=0$ and $\|\omega \|=1$,
\item $\|\omega'\|<\frac{2}{l}$ and $\|\omega''\|<\frac{8}{l^2}$.
\end{itemize}
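The bounds above are attainable with standard constructions. As an illustration, the following sketch builds a $C^2$ surrogate for $\omega$ from the quintic smoothstep $s(t)=6t^5-15t^4+10t^3$ (the paper's $\omega$ is smooth, and a genuinely $C^\infty$ bump can be obtained from standard mollifiers; the quintic model already meets the stated derivative bounds) and checks them numerically for an illustrative value of $l$.

```python
import numpy as np

# C^2 bump: 1 on [-l, l], 0 outside [-2l, 2l], glued with the quintic
# smoothstep on the transition annulus.
l = 0.05  # illustrative value of the parameter l

def omega(x):
    t = (2 * l - np.abs(x)) / l          # rescale [l, 2l] onto [0, 1]
    t = np.clip(t, 0.0, 1.0)
    return 6 * t**5 - 15 * t**4 + 10 * t**3

xs = np.linspace(-3 * l, 3 * l, 200001)
ys = omega(xs)
h = xs[1] - xs[0]
d1 = np.gradient(ys, h)                  # numerical omega'
d2 = np.gradient(d1, h)                  # numerical omega''

print(float(omega(0.0)), float(omega(2.5 * l)))   # plateau values 1 and 0
print(np.max(np.abs(d1)) * l)    # about 1.875 < 2, so ||omega'||  < 2/l
print(np.max(np.abs(d2)) * l**2) # about 5.8   < 8, so ||omega''|| < 8/l^2
```

The scaled suprema are independent of $l$, which is why the bounds $\frac{2}{l}$ and $\frac{8}{l^2}$ can be prescribed uniformly in $l$.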
\vspace {0.2cm}
Finally, define $u : \R^{n-1}\rightarrow \R$ $/u(\tilde{x})=\omega(\|\tilde{x}\|).(\Phi(\tilde{x})-\frac{1}{2})+\frac{1}{2}$; \\observe that $u_{| B_{(\tilde{0},l)}}=\Phi$ and $u_{| B_{(\tilde{0},2l)}^{c}}=\frac{1}{2}$.
\begin{lema}\label{phichato}
$$ \forall \tilde{x} \in \R^{n-1}, \forall k, k' < n: max\left\{ \left|u(\tilde{x})-\frac{1}{2}\right|,\left|\frac{\partial}{\partial x_k}u(\tilde{x})\right|,\left|\frac{\partial^2}{\partial x_k \partial x_{k'}}u(\tilde{x})\right| \right\} <\varepsilon. $$
\end{lema}
\pf
It suffices to estimate the extreme values in $\overline{B_{(\tilde{0},2l)}\setminus B_{(\tilde{0},l)}}$, since on the bounded component of the complement each quantity is at most $\varepsilon'$ (controlled by $\Phi$) and on the unbounded component it vanishes. Moving on to the calculations for $\tilde{x}$ inside the annulus, we write $\omega(\|\tilde{x}\|)$ as $\omega$ for the sake of simplicity. Compute derivatives and apply Corollary \ref{A2}:\\
1) $$ \left|u(\tilde{x})-\frac{1}{2}\right| = \left|\omega.\left(\Phi(\tilde{x})-\frac{1}{2}\right) \right| \leq \left|\Phi(\tilde{x})-\frac{1}{2}\right| \leq \varepsilon' <\varepsilon . $$\\
2)$$\left|\frac{\partial}{\partial x_k}u(\tilde{x})\right|=\left| \omega'.\frac{(\tilde{x})_k}{\|\tilde{x}\|}.\left(\Phi(\tilde{x})-\frac{1}{2}\right)+\omega.\frac{\partial}{\partial x_k} \Phi(\tilde{x}) \right| \leq \frac{2}{l}4l^2\varepsilon' +2l\varepsilon' = 10l\varepsilon'<\varepsilon. $$\\
3) $$\left|\frac{\partial^2}{\partial x_k \partial x_{k'}}u(\tilde{x})\right|= |\omega^{''}.\frac{(\tilde{x})_{k'}}{\|\tilde{x}\|}.\frac{(\tilde{x})_k}{\|\tilde{x}\|}.\left(\Phi(\tilde{x})-\frac{1}{2}\right)+\omega^{'}.\frac{-(\tilde{x})_{k}.(\tilde{x})_{k'}}{\|\tilde{x}\|^3}.\left(\Phi(\tilde{x})-\frac{1}{2}\right)+$$ $$+ \omega'.\frac{(\tilde{x})_k}{\|\tilde{x}\|}.\frac{\partial}{\partial x_{k'}} \Phi(\tilde{x})+ \omega'.\frac{(\tilde{x})_{k'}}{\|\tilde{x}\|}.\frac{\partial}{\partial x_k}\Phi(\tilde{x})+\omega.\frac{\partial^2}{\partial x_k \partial x_{k'} } \Phi(\tilde{x})| \leq $$ $$ \leq \frac{8}{l^2}4l^2\varepsilon'+\frac{2}{l}\frac{1}{l}4l^2\varepsilon'+2(\frac{2}{l}2l\varepsilon')+\varepsilon'<50\varepsilon'<\varepsilon. \Box $$
\begin{clly}\label{coroFachata}
Given $\ep >0$, there exist a diffeomorphism $F$ of $\T^n$ with $d_{C^2}(F,Id) < \varepsilon$ and an $l>0 $ such that $\forall x \in f(S_f) \cap B_{(f(p),l)}: F(x)=(\tilde{x},\frac{1}{2}). $
\end{clly}
\pf
Define explicitly $F: \T^n \rightarrow \T^n / F(x)=(\tilde{x},x_n-u(\tilde{x})+\frac{1}{2})$. \\
Then,
\begin{itemize}
\item $F$ is a diffeomorphism of $\T^n$ since $ det(D_xF)=1, \forall x \in \T^n$.
\item $ \forall x \in \T^n, \|(F-Id)(x) \|=|u(\tilde{x})-\frac{1}{2}| $. By Lemma \ref{phichato}, $d_{C^2}(F,Id)<\varepsilon$ .
\item $F_{|B_{(\tilde{0},2l)}^c \times \R}(x)=Id$ therefore $F_{|B_{(f(p),2l)}^c}(x)=Id$.
\item $F_{|B_{(\tilde{0},l)} \times \R}(x)=(\tilde{x},x_n + \frac{1}{2}-\Phi(\tilde{x}))$.
\item By Remark \ref{ultimacoordenada}, $ x_n=\Phi(\tilde{x})$ for all points $x \in f(S_f \cap U \times V)$. Then, $F(B_{(f(p),l)}\cap f(S_f \cap U \times V)) \subset\{ x_n= \frac{1}{2}\}$.$\Box$
\end{itemize}
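The first item can also be seen by inspection: the differential of the shear $F(x)=(\tilde{x},x_n-u(\tilde{x})+\frac{1}{2})$ is lower triangular with ones on the diagonal, so its determinant is $1$ for \emph{any} smooth $u$. The following numerical sketch checks this with finite differences for a hypothetical choice of $u$ (any smooth function would do).

```python
import numpy as np

# Finite-difference Jacobian of F(x) = (x_tilde, x_n - u(x_tilde) + 1/2).
n = 4

def u(xt):
    # hypothetical smooth function standing in for the u of the text
    return 0.5 + 0.1 * np.sin(xt).prod()

def F(x):
    y = x.copy()
    y[-1] = x[-1] - u(x[:-1]) + 0.5
    return y

rng = np.random.default_rng(0)
x = rng.uniform(size=n)
h = 1e-6
J = np.empty((n, n))
for j in range(n):                          # central finite differences
    e = np.zeros(n); e[j] = h
    J[:, j] = (F(x + e) - F(x - e)) / (2 * h)

det = np.linalg.det(J)
print(det)                                  # ~ 1.0
```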
\vspace{0.4cm}
We are now in a position to define a map $H$ that has the properties claimed in the title of the article:
$$H:\T^n \rightarrow \T^n / H(x)=(F \circ f)(x).$$
\begin{rk}\label{remarkH}
It is straightforward to see that the following hold:
\begin{itemize}
\item $d_{C^2} (H,f)< \varepsilon$.
\item $H(p)=f(p)$.
\item $S_H=S_f$ hence $p \in S_H$.
\item $H(B_{(p,r)})\cap B_{(p,r)} = \emptyset$ because of how $f$ and $r$ were defined at the beginning.
\item $H(x)=A(x)$, $ \forall x \in \T^n \setminus (B_{(p,r)} \cup A^{-1}(B_{(A(p),r)}))$ because of how $H$ is defined.
\item There exists $ a' \leq a $ such that $\overline{D_{x}H(C^u_{a^{'}}(x))} \setminus \{ \vec{0} \} \subset C^u_{a^{'}}(H(x)) , \forall x \in \T^n$, by Corollary \ref{coroconos}. Observe that the parameter $\varepsilon'$ of Corollaries \ref{A1} and \ref{A2}, as well as the parameter $l$ from Lemma \ref{phichato}, can all be shrunk if it were desired to push $H$ closer to $f$.\item $H(S_H)= F[f(S_f)]$, and $F[f(S_f) \cap B_{(f(p),l)}] \subset \{x_n=\frac{1}{2}\}$ by Corollary \ref{coroFachata}.
\end{itemize}
\end{rk}
We proceed in the following sections to prove that $H$ is an endomorphism of $\T^n$ that is $C^2$ robustly transitive but not $C^1$ robustly transitive.
\subsection{$H$ is not $C^1$ robustly transitive}
We begin the section by showing that every map with a fold singularity is $C^1$ close to a map whose critical set has nonempty interior.\\
Remember that $x \in S_f$ is a fold singularity if $ ker(D_xf) \transv T_xS_f$ and that $F_f=\{ x \in S_f / x \mbox{ is a fold point} \}$. In \cite{gg}, Chapter 3, Theorem 4.5, it is shown that for every $x \in F_f$ there exists a change of coordinates in $\R^n$ such that $f$ takes its \textbf{normal form}. Explicitly, if $f:M \rightarrow M / f(x)=y$ and $x \in F_f$, then there exist open sets $ U, V$ containing $x$ and $y$ respectively, an open neighborhood $N$ of $0 \in \R^n$ and diffeomorphisms $g:N \rightarrow U$ and $h:V \rightarrow N$ such that for all $ (x_1,...,x_n) \in N$, $( h \circ f_{|U} \circ g)(x_1, ..., x_n)=(x_1,...,x_{n-1},x_n^2)$.
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=0.35]{graficohfold.eps}
\caption{Graph of $\mu$}\label{uno}
\end{center}
\end{figure}
\begin{lema}\label{abiertofold}
If $f \in C^1(M,M)$ and $x \in F_f$ then for every $ \varepsilon >0$ and every open neighborhood $W$ of $x$ in $M$ there exist a map $g$ in a neighborhood $ \mathcal{U}_{(f,\varepsilon)} \in C^1$ and an open neighborhood $V$ of $x$ in $M$ such that $V \subset S_g$, $g(V)=f(S_f \cap V)$ and $g(y)=f(y)$ for all $ y \notin W$.
\end{lema}
\pf
Without loss of generality we take $x=0$ and $f$ in \emph{normal form}. \\ Let $\varepsilon >0$ and an open neighborhood $W$ of $x$ be given, and choose $r>0$ such that $B_{(0,r)} \subset W$, $ 7r<\varepsilon$ and $r^2<\varepsilon$.\\ Take a bump function $\mu:\R \rightarrow \R$ of class $C^1$ as shown in Figure \ref{uno}, with $|\mu'|<\frac{4}{r}$, and define a map $g:\R^n\rightarrow \R^n/ g(x_1,..,x_n)=(x_1,...,x_{n-1},x_n^2.\mu(x_n))$. A straightforward calculation shows that $(f-g)(x_1,..,x_n)=(0,...,0,x_n^2.(1-\mu(x_n)))$ in $B_{(0,r)}$ and zero in $B_{(0,r)}^c$. Another straightforward calculation provides:
\begin{itemize}
\item $|f-g|\leq |x_n^2|.|1-\mu(x_n)|<r^2<\varepsilon$.
\item $|\frac{\partial}{\partial x_n}(f-g)|\leq |2x_n(1-\mu(x_n))-x_n^2.\mu'(x_n)|<2r+r^2.\frac{4}{r}<7r<\varepsilon$.
\end{itemize}
Define $V=B_{(0,\frac{r}{2})}$, so that $ g_{|V}(x_1,..,x_n)=(x_1,..,x_{n-1},0)$ and hence $g(V)=f(S_f \cap V)$. The remaining assertion follows from the fact that $W^c \subset B_{(0,r)}^c$.$\Box$
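The two estimates of the proof can be checked numerically for a concrete (hypothetical) choice of $\mu$: a $C^1$ cubic-smoothstep bump that is $0$ on $[-\frac{r}{2},\frac{r}{2}]$ and $1$ outside $[-r,r]$, which satisfies $|\mu'| \leq \frac{3}{r} < \frac{4}{r}$. The sketch below bounds the last coordinate of $f-g$ and its $x_n$-derivative.

```python
import numpy as np

# C^1 bump mu built from the cubic smoothstep s(t) = 3t^2 - 2t^3.
r = 0.1

def mu(x):
    t = np.clip((np.abs(x) - r / 2) / (r / 2), 0.0, 1.0)
    return 3 * t**2 - 2 * t**3

xs = np.linspace(-2 * r, 2 * r, 400001)
diff = xs**2 * (1 - mu(xs))                 # last coordinate of f - g
d_diff = np.gradient(diff, xs[1] - xs[0])   # its x_n-derivative

print(np.max(np.abs(diff)) / r**2)          # <= 1, matching |f-g| <= r^2
print(np.max(np.abs(d_diff)) / r)           # < 7, matching the 7r bound
```

Note that $f-g$ is supported in $B_{(0,r)}$, so both suprema are attained inside the ball, as in the proof.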
\begin{rk}
Lemma \ref{abiertofold} does not hold in a $C^2$ neighborhood $\mathcal{U}_{(f,\varepsilon)}$ of $f$. It is sufficient to notice that the Hessian matrix of $f$ has the entry $2$ in its last position. We provide a more general proof of this remark in Theorem \ref{nopasac2}, where we do not assume the singularity to be in normal form nor perform computations of derivatives.
\end{rk}
Now that the key feature about fold points has been stated, we close the subsection with the proof of the claim in its title.
\begin{lema}
$p$ is a fold point for $H$.
\end{lema}
\pf
Let $h$ be the map defined by Equation (\ref{mapah}); then Remark \ref{remarkH} provides
$T_p S_H = T_p S_f = T_p h^{-1} (\mathcal{R}_{n-1}) = ker (D_p h)$ and $ ker (D_p H)= ker(D_p f)$.
Apply Lemma \ref{lema 2} and remark 2 in Equation (\ref{ecuacion1}) to obtain $T_p S_H \oplus ker (D_p H) = \R^n $, which is the condition for $ p \in F_H$. $\Box$
\begin{thm}
$H$ is not $C^1$ robustly transitive.
\end{thm} \pf (see Figure \ref{Haplasta})\\
Fix any $\varepsilon >0$. Since $p \in F_H$, Lemma \ref{abiertofold} provides a function $g $ in $\mathcal{U}_{(H, \ep)} \in C^1$ and an open set $W$ in $\T^n$ containing $p$ such that $W \subset S_g$, $g(W)=H(S_H \cap W)$ and $g=H$ in $B_{(p,l)}^c$.
Slightly abusing the notation of Corollary \ref{coroFachata}, rename $W \cap B_{(p, l)}$ as $W$; then $g(W)=H(S_H \cap W) \subset H(S_H) \subset \{x_n=\frac{1}{2}\}$. Since $H=A$ in $B_{(p,r )}^c$, $f(p)\neq p$ and $|g(p)-f(p)|<\varepsilon$, then, reducing $W$ further if needed, it holds that $g^2(W)=g(g(W))=H(g(W))=A(g(W)) \subset \{x_n=0\}$. On the other hand, $\{x_n=0\}$ is invariant under $g$ (since 'far' from $p$, $g$ is equal to $A$), so $ g^m(W) \subset \{ x \in \T^n / x_n=0 \} ,\forall m \geq 2$, which implies $g$ is not transitive. Since $g$ is not transitive, $H$ is not $C^1$ robustly transitive. $\Box$
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=0.16]{Haplasta.eps}
\caption{$H$ 'flattens' the image of $S_f$ around $f(p)$}\label{Haplasta}
\end{center}
\end{figure}
\subsection{$H$ is $C^2$ robustly transitive}
We start the final subsection with a theorem showing that in the $C^2$ topology it is possible to find a neighborhood of $f$ in which no map collapses open sets, as opposed to what happens in the $C^1$ topology (Lemma \ref{abiertofold}):
\begin{thm}\label{nopasac2}
Let $f$ be the map defined by Equation (\ref{mapaefe}). There exists a $C^2$ neighborhood $\mathcal{U}_f$ of $f$ such that for all $ g \in \mathcal{U}_f$ and all $V$ open in $\T^n$, the interior $int(g(V))$ is not empty.
\end{thm}
\pf
Observe that Lemmas \ref{lema 1} and \ref{lema 2} hold simultaneously for some open set $\mathcal{U}_f \in C^2$ containing $f$; therefore for all $ g \in \mathcal{U}_f$, $S_g$ is a submanifold of $\T^n$ of codimension 1. Given an arbitrary $g$ in $\mathcal{U}_f$ and any open set $ V$ in $\T^n$, there exists an open set $U \subset V$ such that $U \cap S_g = \emptyset$. Since $g$ is a local diffeomorphism in the interior of $V \setminus S_g$, it carries $U$ to an open set. Hence, the interior of $g(V)$ is nonempty. $\Box$\\
We point out now the key properties that $f$ satisfies which are needed to finish the proof:
\begin{itemize}
\item 1) $f$ admits unstable cones and this is a $C^1$-stable property according to Corollary \ref{coroconos} (it is worth mentioning that the parameter $a$ can be chosen as small as desired in order to obtain a prefixed bound on the difference between diameter and length, say 'less than $\ep$'; hence diameter and length can be identified without risk).
\item 2) $f_{| \T^n \setminus B_{(p,r)}}$ expands due to Observation \ref{obs1}.
\end{itemize}
Define then a $C^1$ neighborhood of $f$ such that all of its elements admit unstable cones (item 1 holds). Then, reduce it until all maps $ \hat{f}$ in it are such that $\hat{f}_{| \T^n \setminus B_{(p,\frac{3}{2}r)}}$ expands (item 2 holds), and reduce it again to a neighborhood $\mathcal{{U}}_f \in C^2$ of $f$ where Theorem \ref{nopasac2} also holds for all $ \hat{f} \in \mathcal{U}_f$. Also, since $H=F \circ f$ and $d_{C^2}(F,Id)<\varepsilon$, we can take $H \in \mathcal{U}_f$ and $\mathcal{U}_H \in C^2$ such that $\mathcal{U}_H \subset \mathcal{U}_f$. \\
In what comes next we choose an arbitrary $g \in \mathcal{U}_H$. If we prove that $g$ is transitive, then we will have proved that $H$ is $C^2$ robustly transitive. The strategy relies on noticing that every open set $V$ contains curves travelling inside $g$'s unstable cones which escape the perturbation region without coming back inside in the future. Being in a $C^2$ neighborhood where Theorem \ref{nopasac2} holds, an open subset of $V$ also escapes the perturbation region. Finally, since $g$ is \textit{$\ep$-close} to $A$ far away from $p$, it will be transitive. \\ We prove a series of lemmas that lead to the formal proof of these claims.
\begin{lema}\label{sucesiondecurvas}
Let $V \subset \T^n$ open. There exist a point $ y \in V$ and $ n_0 \in \N$ such that $g^k(y) \in int(g^k(V))$ and $g^k(y) \in \T^n \setminus B_{(p,2r)}, \forall k \geq n_0$.
\end{lema} \pf
Let $\beta \subset int(V)$ be a curve parametrized over an interval $I$ such that $\forall t \in I, \beta'(t) \in C^u_a(\beta(t))$. Condition \ref{3} on the differential of $f$ and the invariance of the unstable cones grant $diam(g^n(\beta))> 6^n.diam(\beta)$. Therefore there exists $n_0$ such that $diam(g^{n_0}(\beta)) > 9r$. Take a curve $\alpha \subset g^{n_0}(\beta) \setminus B_{(p,2r)}$ with $diam(\alpha) \geq 2r$.
Build now a sequence of curves $\{\alpha_k\}$ in the following fashion: take $\alpha_1=\alpha$; since $\alpha'(t) \in C^u_a(\alpha(t))$, then $diam(g(\alpha_1))> 12r$.
Since $diam(B_{(p,2r)})=4r$, there exists $ \alpha_2 \subset g(\alpha_1)$ such that $ \alpha_2 \subset \T^n \setminus B_{(p,2r)}$ and $diam (\alpha_2) \geq 2r$. Proceed inductively according to this algorithm to find a family of curves $\{ \alpha_k \}$ satisfying for all $ k \in \N $:
\begin{itemize}
\item $g(\alpha_{k}) \supset \alpha_{k+1}$,
\item $\alpha_k \subset \T^n \setminus B_{(p,2r)} $,
\item $\alpha'_k(t) \in C^u_a(\alpha_k(t))$.
\end{itemize}
Define afterwards a family of curves $\gamma_k$ such that $\forall k: \gamma_k \subset \alpha_1$ and $g^k(\gamma_k)=\alpha_k$.
We have $g^{k+1}(\gamma_k)=g(g^k(\gamma_k))=g(\alpha_k) \supset \alpha_{k+1}$, then $g^{k+1}(\gamma_k) \supset g^{k+1}(\gamma_{k+1})$, which implies $\gamma_k \supset \gamma_{k+1}$. Apply Cantor's Intersection Theorem to obtain a point $ x$ such that $\bigcap_{k \in \N} \overline{\gamma_k} =\{x\}$. It holds for all $k \in \N$ that $g^k(x) \in \T^n \setminus B_{(p,2r)} $. Choose any $y \in g^{-n_0}(x)$ to have a point satisfying the thesis.$\Box$
\begin{lema}\label{come}
For every open set $ V $ in $\T^n$, there exist $y \in V$ and $n_V \in \N$ such that for all $ k \geq n_V: g^k(V) \supset B_{(g^k(y),r)}$.
\end{lema} \pf
For every $V$, Lemma \ref{sucesiondecurvas} gives a point $ y \in V$ and $ n_0 \in \N$ such that $g^k(y)$ is interior to ${g^k(V)}$ and $g^k(y) \in \T^n \setminus B_{(p,2r)}$ for all $k \geq n_0 $. Since $g_{|\T^n \setminus B_{(p,2r)}}$ expands and Theorem \ref{nopasac2} holds for every iterate of $g$, there exists $ n_V$ such that $ g^k(V) \supset B_{(g^k(y),r)}$ for all $k \geq n_V $. $\Box$
\begin{lema}\label{cubre}
Given $\varepsilon > 0$ , there exist $ m \in \N$ and a finite family of open balls $\{B_j\}_{j \in \{1,..,d\}}$ in $\T^n$ with $diam(B_j) < \varepsilon $ for all $j$ such that: \begin{itemize}
\item $\bigcup_{j=1}^d (B_j)= \T^n$,
\item $ A^m(B_j)= \T^n$ for all $j \in \{1,..,d\}$,
\item There exists a $C^0$ neighborhood $\mathcal{U_A} $ of $A$ such that for all $ g \in \mathcal{U_A}$ and all $ j \in \{1,..,d\}$ it holds $ g^m(B_j)=\T^n$.
\end{itemize}
\end{lema}
\pf
The first claim comes from the compactness of $\T^n$ and the second from the fact that $A$ expands. For the third, define a $C^0$ neighborhood of $A$ explicitly by the formula $\mathcal{U}_A = \{g:\T^n \rightarrow \T^n / \forall p \leq m, \forall j \leq d : |g^p(B_j)-A^p(B_j)| < \varepsilon \}$. Endow now every ball in the family with its standard CW complex structure. Since $A$ is linear and $g$ is homotopic to $A$, apply the Cellular Approximation Theorem to show that $g$ transforms balls into sets that have the homotopy type of balls. Finally, since the boundaries of $g^p(B_j)$ and the boundaries of $A^p(B_j)$ are at distance at most $\ep$ for all $j$, we conclude that $\mathcal{U}_A$ is the desired neighborhood, which completes the proof.$\Box$
\begin{thm}
$H$ is $C^2$ robustly transitive.
\end{thm}
\pf
Given $\varepsilon = \frac{r}{4}$, choose the neighborhood $\mathcal{U_A}$ in $C^0$ given by Lemma \ref{cubre} and $\mathcal{U}_H \in C^2$ the neighborhood described after Theorem \ref{nopasac2}. Reduce $\mathcal{U}_H$ until it is contained in $\mathcal{U_A}$. Choose an arbitrary $ g \in \mathcal{U}_H$; we prove next that for every open set $ V$ in $\T^n$ there exists $\hat{m} \in \N$ such that $ g^{\hat{m}}(V)= \T^n$.\\ By Lemma \ref{come}, there exist $ y \in V$ and $n_V \in \N$ such that $ g^n(V) \supset B_{(g^n(y),r)}$ for all $n \geq n_V$. Consider now the family of open sets $\{B_j\}_{j \in \{1,..,d\}}$ and $m \in \N$, both given by Lemma \ref{cubre}. Since $diam(B_j)<\frac{r}{4}$, then $g^{n_V}(V) \supset B_j$ for some $j$. The third claim in Lemma \ref{cubre} gives $g^m(B_j)=\T^n$, which implies $g^{m+n_V}(V)=\T^n$. This proves that $g$ is transitive. Since $g$ was chosen arbitrarily, the proof holds for all $ g \in \mathcal{U}_H$. Therefore, $H$ is $C^2$ robustly transitive. $\Box$
\section*{Acknowledgements} The author would like to give thanks in the first place to \textrm{Dr. Jorge Iglesias}, without whose help this article would not have been possible, and in the second place to \textrm{Dr. Jorge Groisman} and \textrm{Dr. Roberto Markarian}, whose thorough reading and insightful comments helped improve the quality of this work.
We thank David Garcia for giving access to the extended Twitter data.
We also thank Roman Kern and the anonymous reviewers for their valuable feedback on the manuscript.
Tiago Santos was a recipient of a DOC Fellowship of the Austrian Academy of Sciences at the Institute of Interactive Systems and Data Science of the Graz University of Technology.
\balance
\bibliographystyle{IEEEtran}
\section{Discussion}
\label{sec:relwork}
In this section, we first reflect on practical aspects of our Bayesian approach vs. a frequentist alternative.
Then, we relate our work to literature on Hawkes process applications and theory.
\noindent \textbf{Comparison With Bootstrap.}
Bayesian and frequentist inference are two intensively discussed alternative statistical schools for estimating parameters. In recent years, Bayesian approaches have received increasing appreciation from the web and machine learning communities (e.g.~\cite{barber2012bayesian,yang2012bayesian,singer2015hyptrails}). Beyond that body of work and theoretical considerations (cf. also Sec.~\ref{sec:approach}), we outline practical differences between our Bayesian approach and a bootstrap-based frequentist alternative.
Quantifying the uncertainty of decay estimates with bootstrap amounts to deriving confidence (rather than credible) intervals for the decay from the empirical bootstrap distribution.
Diagnosing mis-estimates and misaligned hypotheses with the bootstrapped decay distribution is not an integral part of the frequentist inference procedure, in contrast to our Bayesian approach, which integrates the hypothesis into the inference procedure. However, explicitly formulating one- or two-sided statistical hypotheses to be tested against the bootstrap distribution is just as viable.
In the case of addressing breaks of stationarity, the bootstrap-based approach features remarkably higher complexity than our proposed Bayesian approach. While we infer all three parameters capturing the stationarity break simultaneously, the straightforward bootstrap-based alternative would require alternatively computing the distribution of one parameter while fixing the other two (as the inferred distribution is intractable). Hence, we argue that our Bayesian approach may also be the more natural choice for intractable inference.
Nevertheless, we stress that the frequentist alternative we outlined is a viable alternative to our Bayesian approach.
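The bootstrap-based frequentist alternative discussed above can be sketched as a percentile confidence interval over per-realization decay fits. This is a minimal illustration; the resampling count, interval level, and the synthetic decay fits are assumptions of the sketch, not values from this paper:

```python
import numpy as np

def bootstrap_decay_ci(decay_estimates, n_boot=10_000, level=0.95, seed=0):
    """Percentile bootstrap confidence interval for the mean fitted decay.

    decay_estimates: one fitted decay value per observed realization.
    Returns the (lower, upper) bounds of the `level` confidence interval.
    """
    rng = np.random.default_rng(seed)
    estimates = np.asarray(decay_estimates, dtype=float)
    # Resample the fitted decays with replacement and record each resample mean.
    means = np.array([
        rng.choice(estimates, size=estimates.size, replace=True).mean()
        for _ in range(n_boot)
    ])
    alpha = (1.0 - level) / 2.0
    return np.quantile(means, alpha), np.quantile(means, 1.0 - alpha)

# Example: 100 noisy decay fits scattered around a hypothetical true value 1.2.
fits = np.random.default_rng(1).normal(1.2, 0.3, size=100)
low, high = bootstrap_decay_ci(fits)
```

Unlike the Bayesian posterior, this interval carries no prior hypothesis; a one- or two-sided test against a hypothesized decay value would be formulated separately against the bootstrap distribution.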
\noindent \textbf{Related Work.}
One of the first fields to leverage the seminal work by Hawkes~\cite{hawkes1971spectra} includes seismology~\cite{ogata1982application,daley2003introduction}.
Since then, Hawkes process theory and practice emerged in the realm of finance~\cite{bacry2015hawkes,bacry2016mean,trouleau2019learning}, as well as, more recently, in modeling user activity online~\cite{zhou2013learning,zhou2013learningt,upadhyay2017uncovering,tabibian2017distilling,rizoiu2017expecting,kurashima2018modeling,tabibian2019enhancing,santos2019self,junuthula2019block,hatt2020early}.
More specifically, the latter body of work extended Hawkes processes to predict diffusion and popularity dynamics of online media~\cite{zhou2013learningt,zhou2013learning,rizoiu2017expecting}, model online learning~\cite{upadhyay2017uncovering,tabibian2019enhancing},
capture the spread of misinformation~\cite{tabibian2017distilling}, and understand user behavior in online communities~\cite{santos2019self,junuthula2019block}, in online markets~\cite{hatt2020early} and in the context of the offline world~\cite{kurashima2018modeling}. As all of those previous references interpreted the parameter values of Hawkes processes (and variations thereof), they may benefit from our study of the decay parameter, especially as we uncover its properties and assess and mitigate estimation issues with Bayesian inference.
Perhaps closest to our work is Bacry et al.'s~\cite{bacry2016mean} study of mean field inference of Hawkes process values. In particular, those authors inspected the effect of varying the decay parameter across a range of values: With increasing decay, fitted self- and cross-excitations decrease while baseline intensity increases.
We go beyond their study by deepening our understanding of the (noisy) properties of the Hawkes log-likelihood as a function of the decay.
Methodologically, our Bayesian approach relates to Hosseini et al.'s~\cite{hosseini2016hnp3}. Those authors infer the decay parameter by assuming a Gamma prior and computing the mean of samples from the posterior (as part of a larger inference problem). In our work, we instead focus on the Bayesian approach as a means to quantify estimation uncertainty.
Further, as our Bayesian changepoint model captures breaks in stationarity, we simplify previous work~\cite{rizoiu2017expecting,santos2019self} which relies on additional assumptions, such as estimating stationarity via the time series of event counts.
Finally, our work complements recent efforts~\cite{trouleau2019learning} to learn Hawkes processes from small data.
\section{Results}\label{sec:empirical_analysis}
\begin{figure*}[!t]
\begin{center}
\subfloat[Far View]{
\includegraphics[width=0.31\textwidth]{results/loglik/updated_loglik_plot_Far_mean_ci}
\label{fig:loglik_far}
}
\subfloat[Close View]{
\includegraphics[width=0.31\textwidth]{results/loglik/updated_loglik_plot_Broad_mean_ci}
\label{fig:loglik_broad}
}
\subfloat[Closest View]{
\includegraphics[width=0.31\textwidth]{results/loglik/updated_loglik_plot_Close_mean}
\label{fig:loglik_close}
}
\caption{\label{fig:loglik}
\textbf{Convex-Like Noisy Log-Likelihood of Hawkes Processes as a Function of the Decay Parameter Beta ($\beta$).}
We simulate realizations from a two-dimensional Hawkes process with a given beta value for both dimensions ($\beta=1.2$), and we evaluate its log-likelihood for a large (cf. Fig.~\ref{fig:loglik_far}), medium (cf. Fig.~\ref{fig:loglik_broad}) and small (cf. Fig.~\ref{fig:loglik_close}) range of decay parameter values around the given one. Error bars throughout this paper indicate bootstrapped $95\%$ confidence intervals, which may be too small to be visible\textsuperscript{a}. While the log-likelihood appears convex at a high level, zooming in around the given value reveals a wide and noisy basin, which explains difficulties in optimizing the log-likelihood for the decay parameter. This finding is robust across Hawkes process parameter configurations which satisfy the stationarity constraint.
}
\end{center}
\small\textsuperscript{a} Single exception: We omit the error bars in Fig.~\ref{fig:loglik_close} for visualization purposes (as the error bars would otherwise overlap and render the Figure unintelligible).
\end{figure*}
\begin{figure*}[!t]
\begin{center}
\subfloat[Log-Likelihood (Full)]{
\includegraphics[width=0.23\textwidth]{results/cvx_others/Log Likelihood}
\label{fig:cvx_others_loglik_full}
}
\subfloat[K-S Distance (Full)]{
\includegraphics[width=0.23\textwidth]{results/cvx_others/K-S Distance}
\label{fig:cvx_others_ks_full}
}
\subfloat[RMSE (Full)]{
\includegraphics[width=0.23\textwidth]{results/cvx_others/RMSE}
\label{fig:cvx_others_rmse_full}
}
\subfloat[Time (Full)]{
\includegraphics[width=0.23\textwidth]{results/cvx_others/Time (s)}
\label{fig:cvx_others_time_full}
}\\
\subfloat[Log-Likelihood (Excerpt)]{
\includegraphics[width=0.23\textwidth]{results/cvx_others/Log Likelihood_noWrong+}
\label{fig:cvx_others_loglik_excerpt}
}
\subfloat[K-S Distance (Excerpt)]{
\includegraphics[width=0.23\textwidth]{results/cvx_others/K-S Distance_noWrong+}
\label{fig:cvx_others_ks_excerpt}
}
\subfloat[RMSE (Excerpt)]{
\includegraphics[width=0.23\textwidth]{results/cvx_others/RMSE_noWrong}
\label{fig:cvx_others_rmse_excerpt}
}
\subfloat[Time (Excerpt)]{
\includegraphics[width=0.23\textwidth]{results/cvx_others/Time (s)_noFit}
\label{fig:cvx_others_time_excerpt}
}
\caption{\label{fig:cvx_others}
\textbf{All Approaches for Fitting Beta Perform Comparably Well.}
We simulate $100$ stationary two-dimensional Hawkes processes with random parameter values, and we evaluate the beta estimation approaches ``nonlinear optimization'' (``Fitted Beta'' in the Figures), ``expectation maximization'' (``Exp. Max.''), ``grid search'' (``Grid Beta'') and ``hyperparameter optimization'' (``Hyperopt''). As baselines for those four approaches, we also compute the fit quality with a fixed beta that is correct (``Given Beta''), a wrong beta in the same order of magnitude (``Wrong Beta''), as well as one (``Wrong Beta+'') and two (``Wrong Beta++'') orders of magnitude greater than the correct one. The upper row of Figures contains all approaches and the lower row selected ones for visualization purposes. We measure performance along a series of metrics, namely Hawkes process log-likelihood (``Log Likelihood''; Figs.~\ref{fig:cvx_others_loglik_full},~\ref{fig:cvx_others_loglik_excerpt}), Kolmogorov-Smirnov distances between simulated and real event streams (``K-S Distance''; Figs.~\ref{fig:cvx_others_ks_full},~\ref{fig:cvx_others_ks_excerpt}), root mean squared error of fitted and real decay parameter values (``RMSE''; Figs.~\ref{fig:cvx_others_rmse_full},~\ref{fig:cvx_others_rmse_excerpt}), and run-time of the fitting procedure using commodity hardware (``Time (s)''; Figs.~\ref{fig:cvx_others_time_full},~\ref{fig:cvx_others_time_excerpt}). For all metrics, lower values are better. Except when guessed values for beta are remarkably wrong, all approaches perform comparably well, with nonlinear optimization attaining a slightly better fit at the cost of runtime.
}
\end{center}
\end{figure*}
\begin{figure*}[!t]
\begin{center}
\subfloat[One Beta]{
\includegraphics[width=0.45\columnwidth]{results/causal/causality_acc_2dim_1beta}
\label{fig:causal_1beta}
}
\subfloat[Four Betas]{
\includegraphics[width=0.45\columnwidth]{results/causal/causality_acc_2dim_4beta}
\label{fig:causal_4beta}
}
\caption{\label{fig:causal}
\textbf{Significantly Wrong Beta Estimations Impact Directionality of Influence.}
Using the same simulation setup and notation as in Fig.~\ref{fig:cvx_others}, we measure and depict the accuracy of inferred influence directionality between Hawkes process dimensions. We define this accuracy as the relative frequency with which the ranking of the estimated cross-excitation values corresponds to the real one. Higher values are better. We repeat this simulation for two-dimensional processes with one beta only (cf. Fig.~\ref{fig:causal_1beta}) as well as four different betas (cf. Fig.~\ref{fig:causal_4beta}), and we find that remarkably wrong betas may lead to errors in the estimation of influence directionality. Again, higher dimensionality and a larger number of betas lead to decreasing performance.
}
\end{center}
\end{figure*}
\begin{figure*}[!t]
\begin{center}
\includegraphics[width=0.95\columnwidth]{results/real/real_world_emotions_beta_20-25_8d_subsample}
\caption{\label{fig:emotions}
\textbf{Tweet Timing Reflects Collective Emotions.}
We depict the result of fitting an MCMC-based changepoint detection model to users' Tweet timings in the two weeks surrounding the November 2015 Paris attacks (while again controlling for total tweet count per user): The estimated density for the Hawkes process intensity decay assigns more probability mass to higher decay regions before the change (``Beta 1'') than after it (``Beta 2''). These findings are, again, robust to the choice of user activity level. This suggests collective effervescence manifests in Tweet timings as well. One limitation of this changepoint detection approach lies in the estimation of when the change happens, which is about $20\%$ later than it should be.
}
\end{center}
\end{figure*}
\section{Introduction}
As a method for modeling and predicting temporal event sequences (henceforth \textit{event streams}), Hawkes processes have seen broad application, ranging from estimating social dynamics in online communities~\cite{farajtabar2014shaping}, through measuring financial market movements~\cite{ait2015modeling} to modeling earthquakes~\cite{ogata1982application}.
Researchers and practitioners derive utility from Hawkes processes due to their flexibility in capturing history-dependent event streams.
Hawkes processes model event streams via the conditional intensity function, the infinitesimal event rate given the event history. Events cause \textit{jumps} in the conditional intensity function, which \textit{decays} to a \textit{baseline} level following a pre-defined functional form, the so-called \textit{kernel}.
This kernel is often chosen as an exponential function. The reasons for this choice are manifold, as Hawkes processes with an exponential kernel are (i) efficient to simulate and estimate~\cite{liniger2009multivariate,farajtabar2015coevolve}, (ii) parsimonious,
and (iii) realistic in practical applications~\cite{upadhyay2017uncovering,tabibian2019enhancing}.
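Concretely, with an exponential kernel the conditional intensity takes the form $\lambda(t) = \mu + \sum_{t_i < t} \alpha\, e^{-\beta (t - t_i)}$, where $\mu$ is the baseline, $\alpha$ the jump, and $\beta$ the decay (one common convention; some libraries scale the kernel by $\beta$). A minimal sketch of this function:

```python
import math

def intensity(t, events, mu, alpha, beta):
    """Conditional intensity of a univariate exponential-kernel Hawkes process.

    mu: baseline rate; alpha: jump size; beta: decay rate.
    Each past event at t_i < t contributes alpha * exp(-beta * (t - t_i)),
    so the intensity jumps at each event and decays back towards mu.
    """
    return mu + sum(alpha * math.exp(-beta * (t - t_i))
                    for t_i in events if t_i < t)

# With no event history, the intensity equals the baseline:
print(intensity(1.0, [], mu=0.5, alpha=0.8, beta=1.2))  # -> 0.5
```

The decay $\beta$ thus controls how quickly the excitation of past events fades, which is exactly the parameter whose estimation uncertainty this paper studies.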
\begin{figure}[!t]
\begin{center}
\includegraphics[width=0.7\columnwidth]{results/syn/beta_uncertainty_illustration}
\caption{\label{fig:hawkes_decay_uncertainty_illustration}
\textit{Remarkable Uncertainty in Fitted Decay Parameter Values.}
We illustrate the normalized distribution of decay values estimated with L-BFGS-B across $100$ realizations of the same Hawkes process (black line).
The discrepancy between that distribution and the standard Gaussian distribution (gray line) suggests a unique uncertainty property in Hawkes process decay estimations.
Fitting the decay parameter in Hawkes processes with breaks in stationarity results in yet other kinds of uncertainty properties (cf. yellow line).
}
\end{center}
\end{figure}
\noindent \textbf{Problem.}
While baseline and jump parameters of Hawkes processes are typically derived via convex optimization, the estimation of decay parameters in exponential kernels remains an open issue.
Previous work simply assumed the decay parameters to be constants~\cite{farajtabar2015coevolve,2017arXiv170703003B,choudhari2018discovering},
cross-validated decay parameter values~\cite{farajtabar2014shaping,choi2015constructing,salehi2019learning}, or estimated them with a range of different optimization approaches~\cite{ozaki1979maximum,da2014hawkes,bacry2016mean,upadhyay2017uncovering,kurashima2018modeling,figueiredo2018fast,santos2019self}. Such estimation approaches result in point estimates that can be considered as sufficient for simulating and predicting event streams.
However, researchers frequently directly interpret Hawkes process parameters~\cite{ogata1982application,bacry2015hawkes,upadhyay2017uncovering,tabibian2017distilling}, e.g., to infer the directions of temporal dependency~\cite{junuthula2019block,trouleau2019learning,hatt2020early}.
Even though such applications rely on estimates of decay parameter values, the current state of the art mostly neglects that, in practice, we derive these values only within a degree of certainty, in particular in the common case of a small number of realizations of a given process. This can lead to qualitatively inaccurate conclusions if researchers are unaware of, or unable to quantify, the uncertainty of decay parameter estimates.
Moreover, to date there are only initial studies~\cite{rizoiu2017expecting,santos2019self} on how exponential growth, exogenous shocks to a system, or changes in the underlying process mechanics compromise key stationarity assumptions and aggravate estimation errors.
In Figure~\ref{fig:hawkes_decay_uncertainty_illustration}, we illustrate this problem in relation to a commonly used nonlinear optimization approach for fitting the decay, L-BFGS-B~\cite{byrd1995limited}. We estimate the decay value in two cases: (i) a small number of realizations ($100$ synthetically generated realizations from the same Hawkes process), and (ii) change of process parameters due to a stationarity break ($50$ synthetically generated realizations from one Hawkes process and $50$ from a process with a different decay parameter). The black line in the Figure represents the normalized distribution of the fitted decay parameter for case (i). We observe that line deviates remarkably from the standard Gaussian depicted in gray, suggesting the need to account for the uncertainty surrounding Hawkes process decay parameter estimations. Repeating the same fitting procedure for case (ii), we observe, in the yellow line, yet other distributional properties of fitted decay values. This suggests a specific need to separately account for uncertainty in the decay parameter estimation in the presence of breaks in stationarity.
\noindent \textbf{This Work.}
The contribution of this paper is two-fold: First, we focus on the potential uncertainty in the decay parameter of Hawkes processes and explore its sources and consequences.
In that regard, we uncover that the non-convex and noisy shape of the Hawkes process log-likelihood as a function of the decay is one cause of the variability in decay parameter estimates.
Second, we propose to integrate the estimation of the decay parameter in a Bayesian approach. This allows for (i) quantifying uncertainty in the decay parameter, (ii) diagnosing estimation errors, and (iii) addressing breaks of the crucial stationarity assumption.
In our approach, we formulate and evaluate closed-form and intractable hypotheses on the value of the decay parameter. Specifically, we encode hypotheses for the decay as parameters of a prior distribution. Then, we consider estimations of the decay for single Hawkes process realizations as samples from a likelihood. These likelihood samples form the data that we combine with the prior to perform Bayesian inference of posterior decay values.
We show with synthetic data as well as in a broad range of real-world domains that our Bayesian inference procedure for fitting the decay parameter allows for quantifying uncertainty, diagnosing estimation errors and analyzing breaks in stationarity.
In our first application, a study of earthquakes in Japanese regions~\cite{ogata1982application}, we uncover low uncertainty in certain geographical relationships.
Second, we validate Settles and Meeder's~\cite{settles2016trainable} hypothesis that vocabulary learning effort on the Duolingo mobile app correlates to the estimated difficulty of the learned words~\cite{council2001common}.
Finally, leveraging a dataset of Tweets before and after the Paris terror attacks of November $2015$, we measure the relation between a stationarity-breaking exogenous shock and collective effervescence~\cite{garcia2019collective}.
Overall, our work sheds light on fitting a widely used class of Hawkes processes, i.e., Hawkes processes with exponential kernels.
Better understanding these models and explicitly surfacing uncertainty in their fitted values facilitates their use by practitioners and researchers.
We expect the impact of our study to be broad\footnote{We make our code available at \url{https://github.com/tfts/hawkes_exp_bayes}.}, as
our results influence the application of a key analysis approach for studying time-dependent phenomena across most diverse domains such as social media, user behavior, or online communities.
\section{Experiments}
\label{sec:experiments}
We illustrate experimentally how our approach allows for quantifying the uncertainty in the decay and other Hawkes process parameter values, diagnosing mis-estimation and addressing breaks in stationarity.
For each of those three goals, we illustrate how our approach achieves the goal (i) with a synthetic dataset and (ii) within a real-world application. Hence, we demonstrate that, besides achieving the previously mentioned goals, our approach is broadly applicable across practical scenarios.
\subsection{Quantifying the Uncertainty}
We begin by addressing the problem of quantifying the uncertainty of fitted decay values $\hat{\beta}$, as well as potential consequences of mis-estimation on other Hawkes process parameters.
One prominent application of multi-dimensional Hawkes processes consists in the estimation of directions of temporal dependency, e.g., when studying influence in online communities~\cite{santos2019self,junuthula2019block}, or in approximating complex geographical~\cite{ogata1982application} or cortical~\cite{trouleau2019learning} relationships.
Recall that inferring such relations between a pair of Hawkes process dimensions may be framed as a problem of estimating which cross-excitation between the two dimensions is higher.
We demonstrate how our Bayesian approach helps in quantifying the uncertainty in such inferred relationships and how to estimate the impact of potential errors with synthetic and real-world data.
\noindent\textbf{Synthetic Data.}
We consider a two dimensional Hawkes process with parameters $\bm{\mu} = \left(\begin{smallmatrix}0.1\\0.5\end{smallmatrix}\right)$,
$\bm{\alpha} = \left(\begin{smallmatrix}0.1&\alpha_{12}\\\alpha_{21}&0.2\end{smallmatrix}\right)$, and
$\beta_{pq}=\beta \text{ } \forall_{1 \leq p,q \leq 2}$. We assume $\beta=1.2$.
For the cross-excitation parameters, we set $\alpha_{21} = 0.7$ and, successively, $\alpha_{12} = \alpha_{21} * c$ for a range of $10$ linearly spaced values of $c\in[0.75,1.25]$. This implies that each configuration encodes a different direction and strength of influence, where dimension $2$ more strongly influences dimension $1$ for $c < 1$, and vice-versa for $c > 1$.
For each such configuration, we simulate $K=100$ realizations with a stopping time of $T=1000$.
As the decay estimation approach, we apply the non-linear optimization routine L-BFGS-B~\cite{byrd1995limited} after the arrival of each realization to obtain $\{\hat{\beta}\}_{k=1}^{100}$ per configuration.
Using our closed-form Bayesian inference approach, we hypothesize that $\beta$ equals $1.5$ by setting $b_0=1.5$ in the previously described Gamma prior.
We then perform the aforementioned Bayesian inference on each set of $\{\hat{\beta}\}_{k=1}^{100}$ to derive $\beta'_{0.025}$ and $\beta'_{0.975}$, the lower and upper bounds of the $95\%$ credible interval of the posterior predictive density.
For $100$ linearly spaced values of the decay in $[\beta'_{0.025}, \beta'_{0.975}]$, we fit the remaining Hawkes process parameters (i.e. baseline and excitations) and check the accuracy of the inferred influence direction between dimensions.
The accuracy captures how many of the $100$ decay values lead to correct recovery of the relation between $\alpha_{12}$ and $\alpha_{21}$. We bootstrap $95\%$ confidence intervals\footnote{We report bootstrap statistics here to refrain from unnecessarily extending our Bayesian approach to the other Hawkes process parameters, which, as previously mentioned, should be estimated with convex optimization routines.} for the accuracy directly from that distribution of $100$ decay values.
This procedure provides
\begin{inparaenum}[(i)]
\item an estimate of the uncertainty of the fitted decay value via the $95\%$ credible interval,
\item an estimation of the robustness of the temporal dependency between dimensions $1$ and $2$, and
\item empirical evidence for the consequences of fitting Hawkes processes with misaligned $\beta$.
\end{inparaenum}
Beyond the settings presented here, we also experimented with slightly different parametrizations of the Hawkes process and alternative decay fitting approaches such as the Bayesian hyperparameter optimization Hyperopt~\cite{bergstra2011algorithms}. In all such settings, we obtained qualitatively the same results.
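The closed-form inference step on the set of fitted decays $\{\hat{\beta}_k\}$ can be illustrated with a simple conjugate model. The following sketch assumes, purely for illustration, an Exponential likelihood for the fitted decays with a Gamma prior on its rate, which yields a Lomax posterior predictive; the shape parameter $a_0$ and the mapping of the hypothesis onto the prior parameter $b_0$ are assumptions of this sketch, not the authors' exact construction (which is specified in the approach section of the paper):

```python
import numpy as np
from scipy import stats

def conjugate_posterior_predictive(beta_hats, a0=2.0, b0=1.5):
    """Illustrative conjugate update: beta_hat ~ Exponential(rate), rate ~ Gamma(a0, b0).

    The posterior over the rate is Gamma(a0 + K, b0 + sum(beta_hats)), and the
    posterior predictive over a new fitted decay is a Lomax (Pareto type II)
    distribution. Returns the predictive mean and a 95% credible interval.
    """
    beta_hats = np.asarray(beta_hats, dtype=float)
    a_n = a0 + beta_hats.size
    b_n = b0 + beta_hats.sum()
    predictive = stats.lomax(c=a_n, scale=b_n)
    mean = b_n / (a_n - 1.0)  # Lomax mean, finite for a_n > 1
    lower, upper = predictive.ppf([0.025, 0.975])
    return mean, (lower, upper)

# Example: 100 hypothetical fitted decays scattered around 1.2.
fits = np.random.default_rng(0).gamma(shape=16.0, scale=1.2 / 16.0, size=100)
mean, (lo, hi) = conjugate_posterior_predictive(fits)
```

The interval $[\beta'_{0.025}, \beta'_{0.975}]$ returned here plays the role of the posterior predictive credible interval from which the decay values for the accuracy evaluation are drawn.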
Figure~\ref{fig:syn_uncertainty} summarizes the outcome of this experiment.
As expected, we observe lower accuracies in inferring the direction of influence between dimension $1$ and dimension $2$ for $\alpha_{12}$ close to $\alpha_{21}$.
This implies that many decay values in the $95\%$ credible interval of such configurations lead to mis-estimations of the direction of influence, and the large error bars reflect this as well.
Overall, we believe practitioners may leverage this approach to quantify the uncertainty of the decay estimation and consequently of the inferred directions of influence as well.
\begin{figure}[!t]
\begin{center}
\includegraphics[width=0.7\columnwidth]{results/causal/causality_unc_acc}
\caption{\label{fig:syn_uncertainty}
\textit{Quantifying Uncertainty When Inferring Directions of Influence.}
We first fit $\beta$ on realizations from two-dimensional Hawkes processes with cross-excitation $\alpha_{12}$ varying from $75\%$ to $125\%$ of $\alpha_{21}$.
We apply closed-form Bayesian inference to estimate the uncertainty in fitted decay as the posterior predictive $95\%$ credible interval.
For a set of decay values in that interval, we estimate the other parameters, and measure the accuracy in recovering the encoded influence direction between dimensions $1$ and $2$.
The accuracy is low and features larger error bars when $\alpha_{12}$ is close to $\alpha_{21}$, where many decay values in the $95\%$ credible intervals lead to wrong estimations of the influence direction.
}
\end{center}
\end{figure}
\noindent\textbf{Earthquakes and Aftershocks.}
We illustrate the outlined uncertainty quantification procedure with a dataset of earthquakes in the Japanese regions of Hida and Kwanto, as originally studied by Ogata~\cite{ogata1982application}.
We consider the data listed in Table $6$ of that manuscript, i.e., a dataset of $77$ earthquakes from $1924$ to $1974$.
We employ the decay value listed in Table $5$ of that manuscript as the prior's parameter in our closed-form Bayesian approach\footnote{We choose that prior for demonstration purposes, as that decay value was estimated with a different Hawkes process from the one we study in this work.}. We assume a two-dimensional Hawkes process with a single $\beta$ value, where the dimensions represent earthquakes in the Japanese regions of Hida and Kwanto.
As pre-processing, we split earthquake occurrences into $K=4$ equally sized segments which we treat as process realizations, and we convert the event timescale to decades.
We replicate the seismological relationship that earthquakes in the Japanese Hida region precede those in Kwanto,
as we obtain $\alpha_{\text{Kwanto}\,\text{Hida}} > \alpha_{\text{Hida}\,\text{Kwanto}} = 0$.
Our Bayesian estimation procedure yields a posterior predictive density mean of $\beta' \approx 31.71$, which corresponds to an intensity half-life of about $\log(2) / 31.71 \approx 0.02$ decades.
The inferred relationship between Hida and Kwanto is present for all but one value of the $95\%$ credible interval for $\beta'$, the lower extremity $\beta'_{0.025}$.
These results underscore the low uncertainty of the inferred direction of influence.
\subsection{Mis-Estimation \& Misaligned Hypotheses}
In this section, we demonstrate that our Bayesian approach facilitates diagnosing (inevitable) estimation errors and misaligned hypotheses as over- or under-estimates, as well as determining the magnitude of the error.
Hence, we address a need, which previous work~\cite{ogata1982application,bacry2016mean,upadhyay2017uncovering} implies, to encode, validate and diagnose estimations and hypotheses on the decay parameter value.
Again, we illustrate how our approach meets that need with synthetic and real-world data.
\noindent\textbf{Synthetic Data.}
\label{subsubsec:closed_bayes}
We consider a univariate Hawkes process with parameters $\mu=1.2, \alpha=0.6$ and $\beta=0.8$ (comparable choices of parameters lead to the same qualitative results).
Using this parametrization, we generate $K=100$ realizations with $100$ events each and we estimate decay after the arrival of each individual realization to obtain $\{\hat{\beta}\}_{k=1}^{100}$ per approach. To illustrate that our Bayesian approach is agnostic to the choice of decay estimation method, we apply the decay estimation approaches L-BFGS-B~\cite{byrd1995limited}, Grid Search (across $10$ evenly distributed values on a log scale in $[-1, 2]$, similarly to Salehi et al.~\cite{salehi2019learning}) and Hyperopt~\cite{bergstra2011algorithms}.
Using our closed-form Bayesian inference approach, we leverage a Gamma prior with $b_0=1$, an estimate slightly larger than the true $\beta$, to illustrate our approach.
We then perform Bayesian inference on each set of $\{\hat{\beta}\}_{k=1}^{100}$ and compare the RMSE (root mean squared error) between the resulting $\beta'$ estimates and $\beta$.
We repeat this whole process $100$ times to derive uncertainty per fitting method via the Bayesian bootstrap distribution.
We observe that L-BFGS-B and Hyperopt return the decay estimates with lowest RMSE, while Grid Search performs remarkably worse.
Aiming to reduce RMSE across all approaches, we inspect the estimates more closely.
Looking into $\hat{\beta}_k$ reveals that they are consistently below our hypothesized $b_0$. We suspect this discrepancy arises due to our prior parameter value $b_0=1$, which is an over-estimate.
Therefore, we compute the difference between that $b_0$ and the posterior predictive density mean $\beta'$ per fitting approach.
Except for Grid Search ($b_0 - \beta' = -0.002 \pm 0.013$), the approaches boast positive differences (Hyperopt and L-BFGS-B both yield $b_0 - \beta' = 0.183 \pm 0.006$), which imply the prior parameter is larger than the posterior predictive density mean.
The direction and magnitude of these shifts of the posterior away from the prior suggest the use of our Bayesian approach as a diagnosis tool, which correctly signals that our hypothesis likely over-estimates the true $\beta$.
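The per-realization decay fit underlying this experiment can be sketched with the standard univariate exponential-kernel Hawkes log-likelihood, computed via the usual linear-time recursion and maximized over $\beta$ with L-BFGS-B. This is a minimal illustration: the kernel convention $\alpha e^{-\beta t}$, the fixed baseline and excitation, and the optimization bounds are assumptions of the sketch, whereas the paper's experiments estimate all parameters:

```python
import numpy as np
from scipy.optimize import minimize

def hawkes_loglik(events, T, mu, alpha, beta):
    """Log-likelihood of a univariate Hawkes process with kernel alpha*exp(-beta*t).

    Uses the standard O(n) recursion A_i = exp(-beta * dt_i) * (1 + A_{i-1})
    to accumulate the excitation from all past events.
    """
    t = np.asarray(events, dtype=float)
    A = 0.0
    loglik = 0.0
    for i in range(t.size):
        if i > 0:
            A = np.exp(-beta * (t[i] - t[i - 1])) * (1.0 + A)
        loglik += np.log(mu + alpha * A)
    # Compensator: integral of the intensity over the observation window [0, T].
    loglik -= mu * T + (alpha / beta) * np.sum(1.0 - np.exp(-beta * (T - t)))
    return loglik

def fit_beta(events, T, mu, alpha, beta_init=1.0):
    """Maximize the log-likelihood over beta alone with L-BFGS-B."""
    res = minimize(lambda b: -hawkes_loglik(events, T, mu, alpha, b[0]),
                   x0=[beta_init], method="L-BFGS-B", bounds=[(1e-3, 50.0)])
    return res.x[0]
```

Repeating `fit_beta` once per realization yields the set $\{\hat{\beta}_k\}$ on which the Bayesian update is then performed; the wide and noisy likelihood basin discussed earlier is what makes these per-realization fits scatter.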
\begin{figure}[!t]
\begin{center}
\includegraphics[width=0.62\columnwidth]{results/real/real_world_vocabulary_ratio}
\caption{\label{fig:duolingo}
\textit{In the Duolingo App, Users Learning C-level Words Have Longer Learning Bursts Than Those Learning A-level Words.}
We fit two Hawkes processes to users with $10$ word learning events on Duolingo: one for users studying ``hard'' (C-level) words, and the other for users learning ``easy'' (A-level) words.
We posit that the decay value of the C-level process is half as large as the A-level one, and we depict the ratio of the former to the latter.
The ratio of fitted values (``Fitted Decay Ratio'') is lower than that of posterior predictive density means (``Bayesian Decay Ratio''), due to a conservative prior parametrization.
Lower decay values for the same number of events imply longer event bursts, suggesting that C-level words require extended learning effort.
}
\end{center}
\end{figure}
\noindent\textbf{Vocabulary Learning Intensity.}
We investigate a scenario proposed as future work by Settles and Meeder's~\cite{settles2016trainable} study of user behavior on the Duolingo language learning app: The authors speculate that vocabulary learning intensity in Duolingo correlates with word difficulty as defined by the CEFR language learning framework~\cite{council2001common}.
We complement the Duolingo data with a dataset of English-language vocabulary and its corresponding CEFR level\footnote{\url{http://www.englishprofile.org/american-english}}, and we build two groups of words: those from the easiest CEFR levels, A1 and A2 (A-level group), and those from the hardest ones, C1 and C2 (C-level group).
We observe that there are $28$ users with $10$ vocabulary learning events in the C-level. To control for total learning events per user, we randomly sample a set of $28$ users with $10$ learning events in the A-level.
Increasing the number of events to $11$ or $12$ leads to qualitatively similar results, but with decreased statistical power due to the smaller sample size.
We repeat this random sampling a total of $100$ times to derive Bayesian bootstrap $95\%$ credible intervals.
We posit that each Duolingo user learning the A-level set of words represents a realization of a univariate Hawkes process, and users learning the C-level words represent realizations of another univariate Hawkes process.
Mapping the six CEFR levels (A1, A2, B1, B2, C1 and C2) to a scale from $1$ to $6$, we naively assume, for illustration purposes, that the C-levels may be more than twice as hard as A-levels.
If the data reflects that hypothesis, then we expect that a short learning burst suffices for grasping A-level words, in contrast to the C-level words, which may require perhaps more than two times as much effort over time.
We encode this hypothesis in our closed-form Bayesian approach (with L-BFGS-B) as $b_{\text{C-level}}=1$ and $b_{\text{A-level}}=2$, since, after controlling for the total event count, we interpret the former as corresponding to longer periods of higher intensity, when compared with the latter.
In Figure~\ref{fig:duolingo}, we depict the ratio of posterior predictive density means $\beta'_{\text{C-level}} / \beta'_{\text{A-level}}$ (``Bayesian Decay Ratio''), as well as the analogous ratio of the means of the actual L-BFGS-B estimates for both levels (``Fitted Decay Ratio''), i.e., $\{\hat{\beta}_{\text{C-level}} / \hat{\beta}_{\text{A-level}}\}_{k=1}^{28}$.
Overall, we confirm Settles and Meeder's hypothesis that the difficulty of words correlates positively with the effort required to learn them: The posterior (and fitted) decay values of the C-level words are lower than those of the A-level words, resulting in more prolonged learning bursts for the former vs. the latter.
Moreover, we underscore that this practical example illustrates the usefulness of our approach as a diagnosis tool: We observe a moderate shift away from the hypothesized $1:2$ difficulty ratio and towards a posterior predictive density mean value slightly below $3:10$.
This shift is even more severe according to the ``Fitted Decay Ratio'', which is not influenced by a prior.
This finding indicates that the CEFR language levels may not directly translate to numerical scales for quantifying learning progress.
Further, the relatively small size of the user sample highlights the importance of using Bayesian inference to not only diagnose a-priori estimates, but also to explicitly surface estimation uncertainty due to a small dataset.
Therefore, although we suggest caution in extrapolating our results, we believe that our observations may contribute to ongoing research on the challenges of quantifying language learning progress~\cite{hulstijn2007shaky}.
\subsection{Addressing Breaks in Stationarity}
We turn our attention to the assumption of stationarity and its effect on fitting the decay parameter of Hawkes processes.
Recall that stationarity implies that the intensity of a Hawkes process is translation-invariant.
In practical applications such as the study of virality of online content~\cite{rizoiu2017expecting} or the growth of online communities~\cite{santos2019self}, exogenous shocks or exponential growth break the stationarity assumption.
With synthetic data and a real-world example, we show how our Bayesian approach allows for assessing and capturing breaks in stationarity caused by exogenous shocks.
\noindent\textbf{Synthetic Data.}
\label{subsubsec:intractable_bayes}
We start with the same experimental setup as in section~\ref{subsubsec:closed_bayes}, but we introduce two key differences.
First, we assume that there was an underlying change in $\beta$ at some point during the $K=100$ realizations. We set the index of that change to $k^*=50$, but our conclusions also hold for other choices of $k^*$ (such as $k^* \in [30, 70]$). For $k<k^*$, we simulate the Hawkes process as previously, but, for $k\geq k^*$, we increment $\beta$ by $1$ while keeping the other parameters unchanged, and we simulate from that updated process instead. With $\beta_1$ we denote $\beta$ before the change, and with $\beta_2 = \beta_1 + 1$ the $\beta$ afterwards.
Second, we build a Bayesian inference setup to reflect the hypothesis that the $\beta$ value changed at some point in the set of realizations. Such models are termed \textit{changepoint models}. We propose the following intractable setup: A prior with $b_0 = \left\{ \begin{array}{ll} b_1,&k < \kappa\\ b_2,&k \geq \kappa \\ \end{array} \right.$, where we set $b_1\sim Exponential(1)$, $b_2\sim Exponential(0.7)$ and $\kappa \sim U\{1, 100\}$, and an exponentially distributed likelihood. Note that the hypothesis $b_1 > b_2$ contradicts the true second part of simulated realizations.
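A minimal simulation of the first difference above, i.e., a decay increment at realization $k^*$, can be sketched with Ogata's thinning algorithm (the function names are ours; the parameter values follow the text, and we assume an arbitrary observation window $T$):

```python
import numpy as np

def intensity(t, events, mu, alpha, beta):
    """Exponential-kernel Hawkes intensity at time t given past events."""
    ev = np.asarray(events, dtype=float)
    return mu + alpha * np.exp(-beta * (t - ev)).sum()

def simulate_hawkes(mu, alpha, beta, T, rng):
    """Ogata's thinning on [0, T]; the current intensity is a valid
    upper bound because the exponential kernel decays between events."""
    t, events = 0.0, []
    while True:
        lam_bar = intensity(t, events, mu, alpha, beta)
        t += rng.exponential(1.0 / lam_bar)
        if t > T:
            break
        if rng.uniform() * lam_bar <= intensity(t, events, mu, alpha, beta):
            events.append(t)
    return np.asarray(events)

rng = np.random.default_rng(0)
beta1, k_star, K = 1.2, 50, 100
realizations = [
    simulate_hawkes(mu=0.1, alpha=0.5,
                    beta=beta1 if k < k_star else beta1 + 1.0,
                    T=200.0, rng=rng)
    for k in range(K)
]
```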
As the metrics for this experiment, we first measure the RMSE between the mean (respectively median) of samples from the posterior, $\bar{\beta}_1$ and $\bar{\beta}_2$ (resp. $\kappa / K$), and both true $\beta$ (resp. $k^* / K$). Beyond RMSE, we also assess the estimation accuracy as the relative frequency of a correctly inferred ordering $\bar{\beta}_1 < \bar{\beta}_2$.
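These two metrics can be sketched as follows (the helper names and the illustrative posterior means are ours):

```python
import numpy as np

def rmse(estimates, truth):
    """Root mean squared error of a set of estimates against a true value."""
    e = np.asarray(estimates, dtype=float)
    return float(np.sqrt(np.mean((e - truth) ** 2)))

def ordering_accuracy(beta1_means, beta2_means):
    """Relative frequency of runs recovering the ordering beta_1 < beta_2."""
    return float(np.mean(np.asarray(beta1_means) < np.asarray(beta2_means)))

# hypothetical posterior means from five repeated runs
b1_bar = [1.18, 1.25, 1.21, 1.30, 1.15]
b2_bar = [2.15, 2.28, 2.05, 1.22, 2.31]
acc = ordering_accuracy(b1_bar, b2_bar)   # 4 of 5 orderings correct -> 0.8
```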
As before, we demonstrate that our Bayesian approach is agnostic to the choice of decay fitting methods by employing L-BFGS-B, Grid Search and Hyperopt.
As a side result, we note that L-BFGS-B and Hyperopt again outperform Grid Search with respect to RMSE.
Further, we report mean accuracy values close to $1$ for Hyperopt and L-BFGS-B (both $0.98 \pm 0.014$) and remarkably lower values for Grid Search ($0.65 \pm 0.047$).
Although we encoded prior parameter values which contradict the data, we still almost always recover the correct relationship $\bar{\beta}_1 < \bar{\beta}_2$ with the L-BFGS-B and Hyperopt methods.
Qualitatively, this suggests identifying the direction of distributional changes in the decay is feasible.
Quantitatively, we expect this procedure to yield conservative estimates of the magnitude and timing of the change:
Bayesian inference features an ``inertia'' of a few realizations when updating the posterior after the shock.
\begin{figure}[!t]
\begin{center}
\includegraphics[width=0.7\columnwidth]{results/real/real_world_emotions_beta_20-25_8d_subsample}
\caption{\label{fig:emotions}
\textit{Tweet Timing Reflects Collective Emotions.}
We depict the result of fitting an MCMC-based changepoint detection model to users' Tweet timings in the two weeks surrounding the November 2015 Paris attacks:
The estimated posterior density for the Hawkes process intensity decay assigns more probability mass to higher decay regions before the shock in comparison to afterwards. This suggests that collective effervescence manifests on Tweet timings, as lower decay values after the shock reflect more sustained bursts of activity.
}
\end{center}
\end{figure}
\noindent\textbf{Strength of Collective Effervescence.}
Our third real-world scenario concerns the manifestation of collective effervescence on Twitter in response to the November 13, 2015 terrorist attacks in Paris, as studied by Garcia and Rim{\'e}~\cite{garcia2019collective}.
They proposed future work could analyze how tweet timings reflect collective emotions surrounding the attacks.
We address this suggestion by fitting the changepoint Bayesian model with the L-BFGS-B method.
Specifically, we begin by extracting the timestamps of tweets by users in a two week period centered on the day of the attacks.
We model each user's behavior per week as a realization of a univariate Hawkes process, and we also control for tweeting activity per user: We extract all $205$ users who tweeted between $20$ and $25$ times in the week before and in the week after\footnote{Note that this extraction process results in a total of $410$ realizations, i.e., $205$ realizations before the shock and another $205$ afterwards.}.
Lowering the activity bounds yields more users each with less events to fit, while increasing those bounds has the opposite effect. However, by setting the activity bounds to different ranges of $5$ tweets (specifically, $10$ to $15$, $15$ to $20$, \ldots, or $45$ to $50$ tweets), we qualitatively observe the same outcomes.
We hypothesize that Twitter users who partake in the collective emotion in reaction to the shock exhibit a sustained burst of activity. We expect such a burst of activity to translate into a decrease of the decay value after the shock. To numerically capture this hypothesis, we simply set $b_1 = 1.5, b_2=1$.
However, repeating this experiment with an opposing hypothesis (e.g., $b_1 = 1, b_2=1.5$), again leads to the same results.
Figure~\ref{fig:emotions} depicts the density of the distribution of the inferred decay posterior before and after the shock.
As expected, we confirm that the decay value decreases in the week after the attacks, suggesting more sustained bursts of tweeting activity in response to the attacks.
This, in turn, supports the hypothesis that Garcia and Rim{\'e} advanced: Reaction timings, in the form of longer bursts of tweets afforded by a $15\%$ lower mean posterior decay after the shock, reflect this collective emotion.
We note that this is a conservative estimate of the decrease in the parameter, since activity levels quickly revert back to a baseline within the week after the attacks themselves, as Garcia and Rim{\'e} report.
Further, this changepoint detection approach also overestimates the time of the change, placing it at realization number $235$, i.e., $7.1\%$ later than the first Hawkes process realization after the shock, which corresponds to realization number $206$.
\subsection{Decay Estimation}
\label{sec:fitting}
To learn from streams of events, practical applications start by fitting Hawkes processes, i.e., optimizing the log-likelihood given in Equation~\ref{eq:hawkes_loglik} to a set of event timestamps.
Practitioners then inspect fitted parameters to understand inherent temporal dependencies, and perform downstream tasks such as prediction via simulation of the fitted processes~\cite{kurashima2018modeling,santos2019self}.
As inferring and interpreting (all) fitted Hawkes process parameters is crucial in many real-world applications~\cite{ogata1982application,bacry2015hawkes,tabibian2017distilling,upadhyay2017uncovering,santos2019self,junuthula2019block,hatt2020early}, we turn our attention to the challenges in fitting Hawkes process parameters, and especially, in estimating the decay parameter in the exponential kernel.
Previous research has shown that the baseline $\mu$ and excitation jump $\alpha$ can be efficiently computed since the log-likelihood is amenable for convex optimization of these parameters \cite{bacry2015hawkes, farajtabar2015coevolve}.
However, that does not hold for the decay $\beta$, in either the univariate or the multivariate case.
Previous work suggested a wide range of methods to address the decay estimation problem with approaches that provide point estimates.
These approaches include setting $\beta$ to a given constant value~\cite{farajtabar2015coevolve,2017arXiv170703003B,choudhari2018discovering}, cross-validation over a range of values~\cite{farajtabar2014shaping,choi2015constructing,salehi2019learning}, or the application of a general optimization method. Those methods comprise non-linear optimization~\cite{ozaki1979maximum,da2014hawkes},
Bayesian hyperparameter optimization~\cite{figueiredo2018fast,santos2019self}, expectation-maximization~\cite{kurashima2018modeling,turkmen2019hawkeslib} or visual inspection of the log-likelihood function~\cite{bacry2016mean,upadhyay2017uncovering}.
While these methods suffice for obtaining a point estimate with comparable errors, none of them explicitly addresses uncertainty quantification.
We argue that aforementioned problems in estimating decay parameters (cf. Figure~\ref{fig:hawkes_decay_uncertainty_illustration}) are caused by a noisy, non-convex log-likelihood in $\beta$.
We illustrate these properties of the log-likelihood with the following exemplary experiment. We consider a univariate Hawkes process with $\mu=0.1, \alpha=0.5$ and $\beta^*=1.2$ (the same as in Fig.~\ref{fig:hawkes_decay_uncertainty_illustration}), and we then compute the negative log-likelihood for different values of $\beta$.
We generate three sets of $100$ realizations from that Hawkes process.
In Figure~\ref{fig:loglik}, we evaluate the negative log-likelihood with one set of realizations per each of three ranges of $\beta$ around $\beta^*$, namely a large (cf. Fig.~\ref{fig:loglik_far}), a medium (cf. Fig.~\ref{fig:loglik_broad}) and a small (cf. Fig.~\ref{fig:loglik_close}) range.
In the large range, it appears there is a convex basin around $\beta^*$ (which we annotate with a pink dashed line), but this function's shape shifts to a concave curve with increasing decay values. The function then converges on the right, as $\lim_{\beta \to +\infty} \log L = -\mu t_n + n\log(\mu)$.
Inspecting the seemingly ``convex'' region more closely uncovers a wide and noisy basin around $\beta^*$, where $\beta^*$ does not always feature minimal negative log-likelihood (cf. Fig.~\ref{fig:loglik_broad}).
This explains difficulties in obtaining precise estimations regardless of the optimization strategy.
We repeated this experiment with a wide range of parameter values corresponding to stationary Hawkes processes. We confirm that these observations are robust to all such alternative configurations.
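A minimal sketch of this grid evaluation, using Ozaki's recursive formulation of the log-likelihood and hypothetical event times (the function name and the example data are ours), reads:

```python
import numpy as np

def hawkes_neg_loglik(beta, times, mu, alpha):
    """Negative log-likelihood of a univariate exponential-kernel Hawkes
    process observed on [0, t_n], with A(i) computed recursively."""
    t = np.asarray(times, dtype=float)
    tn = t[-1]
    # compensator term of the exponential kernel
    comp = (alpha / beta) * np.sum(1.0 - np.exp(-beta * (tn - t)))
    A, log_sum = 0.0, 0.0
    for i in range(len(t)):
        if i > 0:
            A = np.exp(-beta * (t[i] - t[i - 1])) * (1.0 + A)
        log_sum += np.log(mu + alpha * A)
    return -(-mu * tn - comp + log_sum)

# hypothetical event times; scan a grid of decay values around beta* = 1.2
times = np.array([0.5, 1.1, 1.3, 2.4, 3.0, 3.1, 4.8, 6.0])
grid = np.linspace(0.2, 5.0, 25)
nll = np.array([hawkes_neg_loglik(b, times, mu=0.1, alpha=0.5) for b in grid])
best = float(grid[np.argmin(nll)])
```

Plotting `nll` against `grid` for many simulated realizations reproduces the basin-and-plateau shape discussed above.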
\section{Bayesian Decay Estimation}\label{sec:approach}
In this section, we present a novel Bayesian approach for the decay parameter estimation in Hawkes process. We begin with the overall goals and the intuition behind our approach before formally introducing the approach and its operationalization. We also provide a short reflection on a frequentist alternative.
\noindent \textbf{Goals.}
The aim of our approach is threefold. Firstly, we try to explicitly quantify $\beta$ estimation uncertainty and the magnitude of potential estimation errors across $\beta$ fitting approaches (i.e., point estimates).
Secondly, considering previous use cases~\cite{ogata1982application} that involve encoding and validating hypotheses on the decay value, we aim to systematically diagnose such hypotheses.
Thirdly, current methods return only decay values which fulfill the stationarity constraint $\rho < 1$. However, previous work~\cite{rizoiu2017expecting,santos2019self} studied applications with non-stationary changes such as exponential growth and exogenous shocks. Hence, we see an opportunity for an extension of current decay fitting methods to address potential violations of the stationarity assumption.
\noindent \textbf{Intuition.}
To that end, we propose a parsimonious Bayesian inference procedure for encoding and validating hypotheses on likely values for $\beta$.
In our Bayesian approach, we sequentially consider a series of univariate Hawkes process realizations. With each subsequent realization, we fit $\beta$ with a given optimization method and obtain an ever-increasing set of estimated decay values that we denote as $\{\hat{\beta}\}_{k=1}^K$\footnote{Note that the set $\{\hat{\beta}\}$ does not contain independent observations (as e.g. the realizations used to obtain $\{\hat{\beta}\}_{k=1}^2$ are also in $\{\hat{\beta}\}_{k=1}^3$).
Anticipating that practitioners use as much data as available, we consider $\{\hat{\beta}\}$ as previously defined. However, repeating all our experiments using only iid $\hat{\beta}_k$ (i.e., each fitted only on a single realization), we obtain similar though noisier results.}.
After this collection of $\hat{\beta}$, we apply Bayes' theorem to make inferences about the true $\beta$.
A key difference between our approach and typical applications of Bayesian inference can intuitively be described as follows:
The classical Bayesian inference setup typically places a prior distribution for an unknown parameter of interest in a probability distribution which captures the likelihood of given data. Then, applying Bayes' theorem allows for inferring likely values of the unknown parameter given the data.
In our approach, we assume the unknown parameter, i.e., the decay parameter, is also the data, which consists of the aforementioned sequence of decay parameter estimations. This setup enables the freedom to choose between computing posterior (i.e., in parameter space) or posterior predictive (i.e., in data space) distributions to learn about the decay. We find that this flexibility is useful and can improve performance in applications.
\noindent\textbf{Formalization.}
Given data $\{\hat{\beta}\}_{k=1}^K$ and a model $\mathcal{M}_H$ parametrized by hypothesis $H$ for the parameter of interest $\beta$, we propose computing the Bayesian posterior
\begin{equation}
P(\beta|\{\hat{\beta}\}_{k=1}^K, \mathcal{M}_H) \propto P(\{\hat{\beta}\}_{k=1}^K|\beta, \mathcal{M}_H) P(\beta|\mathcal{M}_H),
\label{eq:bayes}
\end{equation}
where $P(\{\hat{\beta}\}_{k=1}^K|\beta, \mathcal{M}_H)$ is the likelihood and $P(\beta|\mathcal{M}_H)$ the prior encoding a hypothesis. Again, note the accordance between the model parameter and the data in our Bayesian approach: $\beta$ is the model parameter and, at the same time, our data $\{\hat{\beta}\}_{k=1}^K$ contains the estimates of that same parameter. Having derived or estimated a posterior distribution density, we have multiple inference possibilities for $\beta$. In particular, we can, as introduced above, (i) obtain a point estimate for $\beta$ directly from the posterior (e.g., mean, median, or maximum), or (ii) compute a new estimate $\tilde{\beta}$ or a statistic (e.g., mean) from the posterior predictive distribution, which is given by (omitting for simplicity $\mathcal{M}_H$):
\begin{equation}
P(\tilde{\beta}|\{\hat{\beta}\}_{k=1}^K) = \int P(\tilde{\beta}|\beta, \{\hat{\beta}\}_{k=1}^K)P(\beta|\{\hat{\beta}\}_{k=1}^K)d\beta.
\label{eq:bayes_pred}
\end{equation}
We can also derive credible intervals ---roughly Bayesian counterparts of the frequentist confidence interval--- for such statistics.
\noindent\textbf{Operationalization.}
Thus, this setup leaves us with the task of deciding on appropriate distributions for the prior and the likelihood, as well as mapping hypotheses to specific parametrizations of the prior.
While we are, in general, free to choose from the broad spectrum of available techniques for Bayesian inference, we propose to assume the likelihood $P(\{\hat{\beta}\}_{k=1}^K|\beta) \sim Exponential(\lambda)$ and a conjugate prior, namely $P(\beta) \sim Gamma(a_0, b_0)$.
These assumptions lead to the following practical advantages: (i) there is a closed form solution to the posterior predictive density mean,
$\beta' = E[\tilde{\beta}|\{\hat{\beta}\}_{k=1}^K] = (b_0 + \sum_{k=1}^K{\hat{\beta}_k}) / (a_0-1)$, and
(ii) users of this approach only have to encode hypotheses on $\beta$ as the prior's $b_0$ parameter, as they can set $a_0$ simply to $K$.
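As an illustration, the closed-form posterior predictive mean can be computed as follows (the function name and the example decay estimates are ours; following the text, $a_0$ defaults to $K$):

```python
import numpy as np

def posterior_predictive_mean(beta_hats, b0, a0=None):
    """Closed-form posterior predictive density mean under the
    Exponential-likelihood / Gamma(a0, b0)-prior pairing; a0 defaults
    to the number of collected estimates K, as in the text."""
    beta_hats = np.asarray(beta_hats, dtype=float)
    a0 = len(beta_hats) if a0 is None else a0
    return (b0 + beta_hats.sum()) / (a0 - 1.0)

# hypothetical fitted decays from K = 5 realizations, prior hypothesis b0 = 1.0
beta_prime = posterior_predictive_mean([1.1, 1.4, 0.9, 1.2, 1.3], b0=1.0)  # 6.9 / 4
```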
We note that assuming other likelihood--conjugate-prior pairs from the exponential family, such as the Pareto, is also a valid choice, for which we experimentally obtain equivalent results.
Note that our choice of a Bayesian setup also allows for complex inference setups, which require the application of e.g. Markov chain Monte Carlo (MCMC).
Also, in our approach, we restrict Bayesian inference to the decay and estimate confidence intervals for other parameters via (frequentist) bootstrap.
This enables the estimation of other parameters via more efficient convex optimization routines.
We also remark that our approach is agnostic to the choice of decay estimation method, and, in our experiments, we illustrate its application with several of the methods we mentioned in Section~\ref{sec:fitting}.
So far, we focused the presentation of our approach on univariate Hawkes processes. To generalize to the multivariate case, we set $\beta_{pq}=\beta \text{ } \forall_{p,q}$, a common simplification of the decay estimation problem~\cite{farajtabar2014shaping,tabibian2017distilling,santos2019self,salehi2019learning}, and we then proceed as previously.
Alternatively, optimization of $\beta_{pq}$ is also possible, but with a quadratic computational complexity in the number of dimensions.
\noindent\textbf{A Frequentist Alternative.}
Finally, we reflect on a bootstrap-based alternative formulation of our Bayesian approach. Repeatedly resampling $\{\hat{\beta}\}_{k=1}^K$ with replacement yields the bootstrap distribution of decay estimates. Testing hypotheses on the decay value then amounts to comparing a statistic of the bootstrap distribution (such as a mean or a percentile)\footnote{We recommend using empirical bootstrap statistics to correct for sample bias.} with the hypothesized value. While it is also possible to extend this bootstrap-based procedure to more complex, intractable inferential setups, the computational cost may surpass that of Bayesian inference (as we discuss in the context of an experiment). In our setting of sequential collection of realizations, Bayesian inference becomes a natural choice to update beliefs about true decay values (cf. Chapter 3.4 of Efron and Hastie~\cite{efron2016computer}), especially in the case of a small number of realizations. Further, as the bootstrap may be interpreted as a nonparametric, noninformative posterior approximation (cf. Chapter 8.4 of Hastie et al.~\cite{hastie2009elements}), Bayesian inference with informative priors may, again, lead to lower computational costs in the presence of few observations.
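A minimal sketch of this bootstrap procedure follows (the helper name and the synthetic estimates are ours; for brevity we use the simple percentile interval rather than the empirical bootstrap correction recommended in the footnote):

```python
import numpy as np

def bootstrap_decay_interval(beta_hats, n_resamples=10000, rng=None):
    """Percentile bootstrap 95% interval for the mean fitted decay."""
    rng = np.random.default_rng(rng)
    beta_hats = np.asarray(beta_hats, dtype=float)
    # resample with replacement and compute the mean of each resample
    idx = rng.integers(0, len(beta_hats), size=(n_resamples, len(beta_hats)))
    boot_means = beta_hats[idx].mean(axis=1)
    return np.quantile(boot_means, [0.025, 0.975])

# hypothetical decay estimates; compare a hypothesized value to the interval
beta_hats = np.random.default_rng(1).gamma(3.0, 0.4, size=50)
lo, hi = bootstrap_decay_interval(beta_hats)
compatible = bool(lo <= 1.2 <= hi)
```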
\section{Conclusion}
In this work, we studied the problem of fitting the decay parameter of Hawkes processes with exponential kernels.
The inherent difficulties we found in accurately estimating the decay value, regardless of the fitting method, relate to the noisy, non-convex shape of the Hawkes log-likelihood as a function of the decay.
Further, we identified problems in quantifying uncertainty and diagnosing fitted decay values, as well as in addressing breaks of the stationarity assumption. As a solution, we proposed a parsimonious Bayesian approach.
We believe our extensive evaluation of that approach across a range of synthetic and real-world examples demonstrates its broad practical use.
Optimization techniques such as constructing convex envelopes or disciplined convex-concave programming may, in the future, help to optimize the Hawkes process likelihood as a function of the decay.
We also believe exploring the potential of the vast Bayesian statistics toolbox for learning more from fitted (decay) parameter values is promising future work.
\section{Decay estimation in Hawkes Processes}
\label{sec:background}
In this section, we first summarize fundamentals of Hawkes process modelling before describing the problem of decay estimation.
\subsection{Hawkes Processes}
We briefly discuss temporal point processes,
a set of mathematical models for discrete events randomly arriving over time.
Temporal point processes capture the time of an upcoming event given the times of all previous events via the so-called \textit{conditional intensity function} (or simply \textit{intensity}) $\lambda^*(t)$. Mathematically,
\begin{equation}
\lambda^*(t)dt=\text{P(event in }[t, t+dt)|\mathcal{H}_t),
\label{eq:general_intensity}
\end{equation}
where $\mathcal{H}_t$ represents the event history up to (but excluding) time $t$.
Dividing Equation~\ref{eq:general_intensity} by $dt$ (in the Leibniz notational sense), we see $\lambda^*(t)$ equals the conditional probability of an event in an interval of (infinitesimal) length $dt$ per such interval $dt$.
Hence, we interpret $\lambda^*(t)$ as a history-dependent \textit{event rate}.
Such temporal point processes are often termed doubly-stochastic, as events occur randomly over time, and the event model $\lambda^*(t)$ is a random process too.
We also note $\lambda^*(t)$ characterizes temporal point processes as counting processes $N(t)$ for the number of events up to time $t$.
Hawkes processes~\cite{hawkes1971spectra} assume $\lambda^*(t)$ follows a certain functional form. Specifically, given a Hawkes process \textit{realization}, i.e., a set of $n$ events occurring at times $t_i \in \mathbb{R}^+$, the conditional intensity of a Hawkes process is
\begin{equation}
\lambda^*(t) = \mu + \alpha\sum_{t_i \in \mathcal{H}(t)} \kappa_{\beta}(t - t_i),
\label{eq:hawkes_intensity}
\end{equation}
where $\mu \in \mathbb{R}^+$ is the \textit{baseline} intensity
and $\alpha \in \mathbb{R}^+$ the \textit{self-excitation}, i.e., magnitude of an increase in $\lambda^*(t)$ at each event time $t_i$. Immediately after each $t_i$, the intensity decreases according to the kernel $\kappa_{\beta}$. A common choice~\cite{zhou2013learning,farajtabar2014shaping,farajtabar2015coevolve,upadhyay2017uncovering,santos2019self,trouleau2019learning} for the kernel is an exponential function parametrized by the \textit{decay} rate $\beta$, i.e., $\kappa_{\beta}(t) = e^{-\beta t}, \beta \in \mathbb{R}^+$. Plugging this kernel in Equation~\ref{eq:hawkes_intensity} we obtain
\begin{equation}
\lambda^*(t) = \mu + \alpha \sum_{t_i \in \mathcal{H}(t)} e^{-\beta (t-t_i)},
\label{eq:hawkes_exp_kernel}
\end{equation}
which we illustrate in Figure~\ref{fig:hawkes_intensity_illustration}.
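For illustration, the intensity of Equation~\ref{eq:hawkes_exp_kernel} can be evaluated directly (the function name and the example events are ours):

```python
import numpy as np

def exp_hawkes_intensity(t, event_times, mu, alpha, beta):
    """Conditional intensity of the exponential-kernel Hawkes process
    at time t, given the past events strictly before t."""
    past = np.asarray([ti for ti in event_times if ti < t], dtype=float)
    return mu + alpha * np.exp(-beta * (t - past)).sum()

# each event adds a jump of size alpha that then decays at rate beta
events = [1.0, 2.5, 2.6]
lam = exp_hawkes_intensity(3.0, events, mu=0.1, alpha=0.5, beta=1.2)
```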
Multivariate Hawkes processes with an exponential kernel generalize univariate ones by introducing parameters for self-excitation and for the decay per dimension. Beyond self-excitation, they also capture \textit{cross-excitation}, the intensity jump an event in one dimension causes in another. Formally, the intensity of dimension $p$ of an $M$-variate Hawkes process is
\begin{eqnarray}
\lambda^{*p}(t)&=&\mu_p+\sum_{q=1}^M\sum_{t_i^q<t} \alpha_{pq} e^{-\beta_{pq} (t-t_i^q)}.
\label{eq:hawkes_intensity_m}
\end{eqnarray}
Notice that we index each dimension's intensity function with $\lambda^{*p}$, and its baseline with $\mu_p$. This generalization also includes an excitation matrix with self-excitation and cross-excitation entries $\alpha_{pp}$ and respectively $\alpha_{pq}$, as well as, analogously, a matrix of decay rates $\beta_{pq}$. Note that $\alpha_{pq}$ captures the increase in intensity in dimension $p$ following an event in dimension $q$.
In matrix notation, we write $\bm{\mu} \in \mathbb{R}^M$, $\bm{\alpha} \in \mathbb{R}^{M\times M}$ and $\bm{\beta} \in \mathbb{R}^{M\times M}$.
We point to the work by Liniger~\cite{liniger2009multivariate} for more on multivariate Hawkes process theory.
We now introduce the notions of \textit{stationarity} and \textit{causality} in the multi-dimensional Hawkes process context.
Stationarity implies translation-invariance in the Hawkes process distribution, and, in particular, it also implies that the intensity does not increase indefinitely over time and therefore stays within bounds. More formally, a Hawkes process with an exponential kernel is stationary if the spectral radius $\rho$, i.e., the largest eigenvalue of the $L^1$-norm of $\bm{\alpha} / \bm{\beta}$, satisfies $\rho < 1$.
Note that assessing stationarity of a one-dimensional Hawkes process with an exponential kernel reduces to evaluating $\alpha / \beta < 1$.
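Both checks can be sketched jointly by computing the spectral radius of the elementwise ratio $\bm{\alpha} / \bm{\beta}$ (a minimal illustration; the function name is ours, and scalars cover the univariate case):

```python
import numpy as np

def is_stationary(alpha, beta):
    """Stationarity check for an exponential-kernel Hawkes process:
    spectral radius of the elementwise ratio alpha / beta below one.
    For scalars this reduces to the univariate check alpha / beta < 1."""
    ratio = np.atleast_2d(np.asarray(alpha, dtype=float)
                          / np.asarray(beta, dtype=float))
    rho = np.max(np.abs(np.linalg.eigvals(ratio)))
    return rho < 1.0

# univariate case reduces to alpha / beta < 1
stat_uni = is_stationary(0.5, 1.2)
# bivariate example with an elementwise ratio matrix
stat_multi = is_stationary(np.array([[0.2, 0.1], [0.1, 0.2]]), np.ones((2, 2)))
```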
Recent work~\cite{etesami2016learning} established a connection between Granger causality and the excitation matrix: In the particular case of our exponential kernel, dimension $q$ Granger-causes dimension $p$ if and only if $\alpha_{pq}>0$. Beyond this result, we interpret the magnitude of excitations $\alpha_{pq}$ as the strength and direction of temporal dependency between dimensions $p$ and $q$: For example, we say dimension $q$ influences dimension $p$ more strongly if and only if $\alpha_{pq} > \alpha_{qp}$.
Finally, we define the Hawkes process likelihood function. We work with the log-likelihood function due to its mathematical manipulability and to avoid computational underflows. The equation for the one-dimensional log-likelihood of the Hawkes process with an exponential kernel is as follows~\cite{ozaki1979maximum}:
\begin{eqnarray}
\log L(\{t_i\}_{i=1}^n) = -\mu t_n - \frac{\alpha}{\beta} \sum_{i=1}^n{(1-e^{-\beta (t_n-t_i)})} \nonumber\\+ \sum_{i=1}^n{\log(\mu+\alpha A(i))},
\label{eq:hawkes_loglik}
\end{eqnarray}
\[\text{with } A(1) = 0 \text{ and, for } i>1, \; A(i) = \sum_{t_j < t_i}{e^{-\beta (t_i-t_j)}}.\]
Ozaki~\cite{ozaki1979maximum} also proposes a (computationally less intensive) recursive formulation for Eq.~\ref{eq:hawkes_loglik}. We refer to Daley and Vere-Jones~\cite{daley2003introduction} for a general formulation of the log-likelihood of multivariate temporal point processes.
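Concretely, the recursion avoids the quadratic cost of evaluating each $A(i)$ by a direct sum: factoring out the most recent inter-event gap yields
\begin{equation*}
A(i) = e^{-\beta (t_i - t_{i-1})}\left(1 + A(i-1)\right) \quad \text{for } i > 1, \qquad A(1) = 0,
\end{equation*}
so the full log-likelihood can be evaluated in linear time in the number of events.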
Henceforth, as we focus on Hawkes processes with exponential kernels, we refer to them simply as Hawkes processes.
\section{Introduction}
With the rapid development of mobile devices and the Internet, a massive amount of video content is being uploaded to the Internet every second.
The volume of video information has far exceeded the processing capacity of the conventional manual system, thus video content analysis has attracted the extensive interest of academic and industrial communities.
One of the most active research topics in video understanding is temporal action detection, which focuses on both classifying the action instances present in an untrimmed video and localizing them with temporal boundaries.
The temporal action detection task, like object detection, is divided into two parts: temporal action proposal generation (TAPG) and action recognition.
Deep learning has recently been shown to significantly improve action recognition performance \cite{two-stream,tsm,wu2020MVFNet}.
However, the performance of the two-stage temporal action detectors in mainstream benchmarks~\cite{thumos, activitynet} still has much room for improvement, which is mostly influenced by the quality of proposals from temporal action proposal generation.
Hence, great efforts have been devoted to TAPG task~\cite{lin2018bsn, lin2019bmn}.
To develop high-quality temporal action proposals with dependable confidence scores, these works generally evaluate the confidence of a proposal with the temporal Intersection-over-Union (tIoU) between the proposal and the ground-truth instance, called the action score.
However, background information is also significant but has previously been overlooked.
For instance, as illustrated in Figure~\ref{fig:teaser}, we can easily restrict false-positive proposals by detecting the background ``Move a chair".
Furthermore, we can evaluate the inconsistency between the background ``Move a chair'' and the action ``Hitting a pinata'', leading to better action and background scores for the proposal.
Motivated by the above observations,
we propose a general auxiliary \emph{Background Constraint} idea to reduce localization errors. Specifically, we introduce a background score for the proposal's confidence, and its supervision signal is defined by the temporal Intersection-over-Anchor (tIoA) between the proposal and the background. This concept can be flexibly integrated into existing TAPG methods (\emph{e.g.}, BMN~\cite{lin2019bmn}, GTAD~\cite{xu2020g}) to improve the performance in a plug-and-play fashion.
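One plausible formalization of these two scores for one-dimensional temporal segments, assuming tIoA normalizes the overlap by the anchor (proposal) length (the function names are ours, for illustration), is:

```python
def t_iou(proposal, instance):
    """Temporal IoU between two segments given as (start, end) pairs."""
    (ps, pe), (gs, ge) = proposal, instance
    inter = max(0.0, min(pe, ge) - max(ps, gs))
    union = (pe - ps) + (ge - gs) - inter
    return inter / union if union > 0 else 0.0

def t_ioa(proposal, background):
    """Temporal Intersection-over-Anchor: overlap with the background
    segment, normalized by the anchor (proposal) length."""
    (ps, pe), (bs, be) = proposal, background
    inter = max(0.0, min(pe, be) - max(ps, bs))
    anchor = pe - ps
    return inter / anchor if anchor > 0 else 0.0

# a proposal spanning [2, 6] against an instance [4, 8] and a background [0, 3]
iou = t_iou((2.0, 6.0), (4.0, 8.0))   # 2 / 6
ioa = t_ioa((2.0, 6.0), (0.0, 3.0))   # 1 / 4
```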
To further mine the rich information of action and background, in this paper, we propose the \emph{\textbf{B}ackground \textbf{C}onstraint \textbf{Net}work} (BCNet) to generate high-quality temporal action proposals. An essential component of BCNet is the \textit{Action-Background Interaction} (ABI) module, which performs both frame-level and clip-level action-background interaction to obtain reliable confidence scores of proposals.
To do so, we first generate action features and background features for each frame using self-attention and difference-attention. Sliding windows are then used to generate multi-scale anchors from action and background features. The clip-level interaction then discovers the complex relationships between action-anchors and background-anchors, and outputs the action and background scores for these anchors.
We also propose a \emph{Boundary Prediction} (BP) module for precisely locating action boundaries. To capture the complex long-term temporal relationships while avoiding the influence of global noise, we aggregate the original feature sequence using self-attention and cross-attention mechanisms.
The output representation is then used as the global representation for the boundary prediction task.
Finally, we feed the boundary probabilities, action scores and background scores into the post processing module to get the final proposal set.
Experimental results show the superiority of our system on two popular datasets, \emph{i.e.,} ActivityNet \cite{caba2015activitynet} and THUMOS14 \cite{thumos}. Our BCNet achieves significant performance and outperforms existing state-of-the-art methods on both datasets.
Our contributions are summarized as follows:
\begin{itemize}
\item We introduce a Background Constraint concept, which can be integrated easily with existing TAPG methods (\emph{e.g.}, BMN, GTAD) and improve performance significantly.
\item We propose a Background Constraint Network, which consists of multiple attention units, \emph{i.e.}, self-attention, cross-attention and difference-attention, and generates high-quality proposals by exploiting inconsistency between action and background.
\item Extensive experiments demonstrate that our method outperforms the existing state-of-the-art methods on THUMOS14 and achieves comparable performance on ActivityNet-1.3, in both temporal action proposal generation task and temporal action detection task.
\end{itemize}
\section{Related Work}
\subsection{Video Action Recognition}
Action recognition is a fundamental task in the video understanding area.
Currently, there are two types of end-to-end action recognition methods: 3D CNN-based methods and 2D CNN-based methods.
3D CNNs are natural extensions of their 2D counterparts and are intuitive spatiotemporal networks that directly tackle 3D volumetric video data~\cite{c3d,i3d} but have a high computational cost.
Alternative efficient architectures, such as TSM~\cite{tsm}, TEI~\cite{teinet}, MVFNet~\cite{wu2020MVFNet}, DSANet~\cite{dsanet}, \emph{etc.}, have been developed to capture temporal information with reasonable training resources.
These methods design lightweight temporal modules to perform efficient temporal modeling.
There is also ongoing research into dynamic inference~\cite{wu2020dynamic}, adaptive frame sampling techniques~\cite{wu2019multi,korbar2019scsampler}, which we believe can complement the end-to-end video recognition approaches.
\subsection{Temporal Action Proposal Generation}
Temporal action proposal generation aims to detect action instances with temporal boundaries and confidence in untrimmed videos.
Existing methods can be mainly divided into Top-down and Bottom-up methods.
The Top-down methods~\cite{oneata2014lear,gao2017turn,gao2018ctap,gao2020accurate,chen2019relation} generate proposals using sliding windows or pre-defined anchors.
The Bottom-up methods mainly focus on evaluating ``actionness'', which indicates the probability of a potential action, for each frame or clip in a video.
These works~\cite{shou2016temporal, zhao2017temporal} use snippet-wise probability to generate candidate proposals.
BSN \cite{lin2018bsn} first predicts the start, end and actionness probabilities of each frame; proposals are then generated by pairing start and end points with high probabilities, and low-confidence ones are further discarded by an evaluation module.
\cite{lin2019bmn,su2020bsn++,xu2020g,lin2020fast} generate all possible combinations of temporal locations to evaluate confidence of proposals.
\cite{liu2019multi} generates coarse segment proposals by perceiving the whole video sequence and predicts the frame actionness by densely evaluating each video frame.
These methods evaluate action scores of proposals with rich clip-level context.
However, these methods fail to take full advantage of background by focusing only on the action score.
In our work, we predict the extra background score for the confidence of proposals to reduce low-quality proposals.
\begin{figure*}[t]
\centering
\includegraphics[width=1\textwidth]{figures/bcnet.pdf}
\caption{Illustration of the proposed Background Constraint Network. First we apply the feature extractor to encode video frames. Boundary Prediction (BP) module takes the feature sequence as input, and outputs boundary probability sequence. Action-Background Interaction (ABI) module takes the feature sequence as input, and outputs features of action and background at the frame level. Then, we set anchors on features of action and background, and feed them into clip-level interaction to generate action and background scores of anchors. Finally, we construct proposals based on boundary probabilities sequence and refine them using the corresponding anchor.}
\vspace{-0.3cm}
\label{fig:framework}
\end{figure*}
\subsection{Transformer and self-attention mechanism}
Transformers \cite{vaswani2017attention} have achieved great success in natural language processing.
Transformer architectures are based on a self-attention mechanism that summarizes content from the source sequence and is capable of modeling complex and arbitrary dependencies within a limited number of layers.
Recently, many works~\cite{dosovitskiy2020image,carion2020end,liu2021swin,tan2021relaxed,wang2021temporal} have revealed the great potential of Transformers in computer vision tasks.
Inspired by the successful application of Transformers in various fields, we intuitively take advantage of Transformers in modeling long-range contextual information. In this paper, we utilize the Transformer-alike structure to devise three attention units.
\subsection{Background Modeling on Temporal Action Localization}
Background modeling in temporal action localization has received some attention.
Several previous works~\cite{shou2016temporal,yuan2016temporal} generate proposals by sliding window and classify them into $C+1$ classes for $C$ action classes plus background class.
Several studies also attempt explicit background modeling for weakly-supervised temporal action localization.
Some works \cite{nguyen2019weakly,lee2020background} try to classify background frames as a separate class.
\cite{lee2020weakly} formulates background frames as out-of-distribution samples.
Essentially, all the above works aim to perform classification for these proposals.
Unlike them, in our work, we propose a Background Constraint concept to predict an additional background score for proposal confidence evaluation. To supervise the background score, we use temporal Intersection-over-Anchor (tIoA) between the proposal and the background. Our work concentrates on utilizing the background prediction score to restrict the confidence of proposals.
\section{Background Constraint Network}
As shown in Figure~\ref{fig:framework}, we propose a Background Constraint Network (BCNet) to generate high-quality proposals, which consists of two main modules: the Action-Background Interaction Module and the Boundary Prediction Module. Firstly, the Action-Background Interaction (ABI) module is adopted to perform both frame-level and clip-level action-background interaction to obtain reliable confidence scores of proposals. The Boundary Prediction (BP) module is then utilized to locate the boundaries of the proposals by exploiting complex long-term temporal relationships for boundary regression.
\subsection{Problem Definition}
\label{sec:definition}
An untrimmed video $U$ can be denoted as a frame sequence $U=\{u_t\}_{t=1}^{l_v}$ with $l_v$ frames, where $u_t$ denotes the $t$-th RGB frame of video $U$.
The temporal annotation set of $U$ is made up of a set of temporal action instances as $\Psi_g=\{\varphi_n^g\}_{n=1}^{N_g}$ and $\varphi_n^g = (t_{s_n}, t_{e_n})$, where $N_g$ is the number of ground-truth action instances, $t_{s_n}$ and $t_{e_n}$ are the starting and ending time of the action instance $\varphi_n^g$, respectively.
During the training phase, $\Psi_g$ is provided.
In the testing phase, the predicted proposal set $\Psi_p$ should cover $\Psi_g$ with high recall and high temporal overlap.
\subsection{Background Constraint}
To evaluate the confidence of a proposal, existing methods primarily use the temporal Intersection-over-Union (tIoU) between the proposal and the ground-truth instance, called the \emph{action score}.
The label of the action score is defined via tIoU and can be computed by:
\begin{equation}
A_{label} = \max_{1 \le i \le n} \frac{|G_i \cap P|}{|G_i \cup P|},
\end{equation}
where $G_i$ is the $i$-th ground truth, $P$ is the proposal, and $n$ is the number of ground-truth instances.
In this paper, we propose a novel Background Constraint concept to suppress low-quality (false positive) proposals.
Specifically, we predict an extra \emph{background score} for evaluating the confidence of the proposal besides the action score.
The label of the background score is generated using temporal Intersection-over-Anchor (tIoA), which can be computed by:
\begin{equation}
B_{label} = 1 - \sum_{i=1}^{n} \frac{|G_i \cap P|}{|P|},
\end{equation}
where $G_i$ is the $i$-th ground truth, $P$ is the proposal, and $n$ is the number of ground-truth instances.
It is worth noting that $A_{label}$ plus $B_{label}$ does not equal 1 in most circumstances.
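As a concrete illustration, both labels can be computed for a single proposal as sketched below. The (start, end)-tuple interface and the assumption that ground-truth instances do not overlap (so that the summed intersections never exceed $|P|$) are ours, not the paper's:

```python
def action_background_labels(proposal, gts):
    """Compute the action label (max tIoU) and background label (1 - tIoA)
    for one proposal against a list of ground-truth intervals.
    Intervals are (start, end) tuples; an illustrative interface."""
    ps, pe = proposal
    a_label = 0.0   # max tIoU over ground truths
    overlap = 0.0   # total time the proposal spends inside ground truths
    for gs, ge in gts:
        inter = max(0.0, min(pe, ge) - max(ps, gs))
        union = max(pe, ge) - min(ps, gs)
        a_label = max(a_label, inter / union)
        overlap += inter
    b_label = 1.0 - overlap / (pe - ps)   # tIoA-based background label
    return a_label, b_label
```

For a proposal $(3, 6)$ against a single ground truth $(2, 5)$, this yields $A_{label} = 0.5$ and $B_{label} = 1/3$, illustrating that the two labels are not complementary.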
\begin{figure}[t]
\centering
\subfigure[Self-attention]{
\includegraphics[width=0.135\textwidth]{figures/figure3_a.pdf}
\label{fig:self}
}
\subfigure[Cross-attention]{
\includegraphics[width=0.135\textwidth]{figures/figure3_b.pdf}
\label{fig:cross}
}
\subfigure[Difference-attention]{
\includegraphics[width=0.162\textwidth]{figures/figure3_c.pdf}
\label{fig:diff}
}
\caption{Illustration of three different attention units.}
\label{fig:units}
\vspace{-0.4cm}
\end{figure}
\subsection{Action-Background Interaction Module}
In this section, we describe the Action-Background Interaction (ABI) module in our BCNet.
The ABI module conducts action-background interaction on two temporal granularities: frame-level and clip-level.
Inspired by the Transformer~\cite{vaswani2017attention}, ABI module models the interactions with two attention units, \emph{i.e.}, self-attention unit and difference-attention unit.
The details are described next.
\textbf{Frame-level Interaction.} We obtain the frame-wise feature sequence
$F_o \in \mathbb{R}^{T\times C}$ via the feature encoder, where $T$ is the length of the feature sequence and $C$ is the feature dimension.
First, we use the \emph{Self-attention Unit} to learn the relationships between frames and enhance the feature representations.
The structure of the \emph{Self-attention Unit} is shown in Figure~\ref{fig:self}.
We utilize the Transformer-alike structure to devise the self-attention unit, which consists of two sub-layers: self-attention layer and feed-forward network (FFN).
Specifically, the input sequence $F_o$ is projected onto three learnable linear transformations to get queries $Q_{F_o}$, keys $K_{F_{o}}$, and values $V_{F_o}$.
Then the self-attention map can be calculated as follows:
\begin{equation}
A(Q_{F_o},K_{F_o}) = Softmax \left( \frac {Q_{F_{o}}{K_{F_{o}}}^T}{\sqrt{C}} \right).
\end{equation}
We update each component of the sequence by aggregating global information from the complete frame sequence:
\begin{equation}
F_{att} = \varphi^g \left( A(Q_{F_o},K_{F_o}) V_{F_{o}} \right),
\end{equation}
where $\varphi^g$ is a linear projection function.
A residual connection around each of the two sub-layers, followed by layer normalization, is adopted to generate the enhanced feature $F$, which can be written as:
\begin{equation}
\begin{split}
F^{'} = LayerNorm \left( F_{att} + F_o \right), \\
F = LayerNorm \left( F^{'} + FFN(F^{'}) \right).
\end{split}
\end{equation}
Next, the enhanced features $F$ are fed into the \emph{Difference-attention Unit} to aggregate features based on the inconsistency between the action feature $F_a$ and the background feature $F_b$ under action and background supervision.
The structure of the \emph{Difference-attention Unit} is shown in Figure~\ref{fig:diff}.
The action feature $F_a$ is projected onto two learnable linear transformations to get queries $Q_{F_a}$ and values $V_{F_a}$.
Also, we transform the background feature $F_b$ to keys $K_{F_b}$ and values $V_{F_b}$.
Then, we compute the difference map $A(Q_{F_{a}}, K_{F_{b}})$ as:
\begin{equation}
A(Q_{F_{a}}, K_{F_{b}}) = Softmax \left( \frac {Q_{F_{a}}{K_{F_{b}}}^T}{\sqrt{C}} \right).
\end{equation}
In this way, $A_{i, j}$ represents the difference between the action frame $i$ and the background frame $j$. The smaller the value of $A_{i, j}$, the larger the difference between the background features and action features.
Then we use the difference map to reweight $V_{F_a}$, $V_{F_b}$, and obtain enhanced feature $F_{a}^{'}$, $F_{b}^{'}$ respectively.
Finally, we append a prediction head that encodes $F_{a}^{'}$ and $F_{b}^{'}$ with a multi-layer perceptron (MLP) followed by a \emph{Sigmoid} layer to generate the action and background probability sequences.
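The text does not fully specify how the difference map reweights the value projections, so the following NumPy sketch shows one plausible reading, in which each stream is updated with the reweighted values of the other stream plus a residual connection; the residual form is our assumption:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

def difference_attention_unit(F_a, F_b, Wqa, Wkb, Wva, Wvb):
    """One plausible reading of the Difference-attention Unit: the softmax
    difference map A (small A[i, j] means action frame i and background
    frame j differ strongly) reweights the value projections of both
    streams, with residual connections keeping the inputs."""
    C = F_a.shape[-1]
    A = softmax((F_a @ Wqa) @ (F_b @ Wkb).T / np.sqrt(C))
    F_a_out = F_a + A @ (F_b @ Wvb)    # action stream attends to background values
    F_b_out = F_b + A.T @ (F_a @ Wva)  # and vice versa
    return F_a_out, F_b_out, A
```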
\textbf{Clip-level Interaction.} We first use a group of sliding windows to generate action anchors and background anchors at different scales.
Following BMN~\cite{lin2019bmn}, we construct a weight term $w_{i,j} \in \mathbb{R}^{N \times T}$ by uniformly sampling $N$ points within the temporal region of each anchor.
First, we compute the dot product along the temporal dimension between $w_{i,j}$ and ${F_{a}^{'}}$ to generate an action anchor of shape ${C \times N}$.
We thus obtain an action anchor sequence $F_{a}^{c} \in \mathbb{R}^{L \times S}$, where $L$ is the number of clips and $S = C \times N$.
Similarly, we generate the background anchor sequence $F_{b}^{c}$ in the same way.
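The BMN-style sampling step can be sketched as follows; the linear-interpolation weights and the $T \times C$ feature layout are assumptions for illustration:

```python
import numpy as np

def sample_anchor_feature(F, t_start, t_end, N=32):
    """BMN-style anchor sampling: uniformly place N points in
    [t_start, t_end]; each point corresponds to a length-T weight vector
    (linear interpolation between the two nearest frames), and a dot
    product along time reads the feature out. Returns an N x C matrix,
    flattened to S = C * N downstream."""
    T, C = F.shape
    out = np.zeros((N, C))
    for i, p in enumerate(np.linspace(t_start, t_end, N)):
        lo = int(np.floor(p))
        frac = p - lo
        if 0 <= lo < T:
            out[i] += (1.0 - frac) * F[lo]
        if 0 <= lo + 1 < T:
            out[i] += frac * F[lo + 1]
    return out
```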
Next, the anchor sequences $F_{a}^{c}$ and $F_{b}^{c}$ are fed into the clip-level action-background interaction to generate action score and background score.
Specifically, we first utilize two independent \emph{Self-attention Units} to capture the relationships among action/background anchors, respectively.
The two self-attention units output updated anchor sequence ${F_{a}^{c}}^{'}$ and ${F_{b}^{c}}^{'}$.
Similar to the frame-level interaction, ${F_{a}^{c}}^{'}$ and ${F_{b}^{c}}^{'}$ are then fed into the \emph{Difference-attention Unit} to obtain difference map and reweighted anchor sequence $\widetilde{{F_{a}^{c}}^{'}}$ and $\widetilde{{F_{b}^{c}}^{'}}$.
Note that the difference map $A_{i, j}$ represents the feature difference between $i$-th action anchor and $j$-th background anchor.
Finally, we add a clip-level predictor which encodes $\widetilde{{F_{a}^{c}}^{'}}$ and $\widetilde{{F_{b}^{c}}^{'}}$ with a multi-layer perceptron (MLP) and a \emph{Sigmoid} layer to predict action scores and background scores.
\subsection{Boundary Prediction Module}
Long-term temporal modeling is a critical factor in proposal boundary prediction.
It is natural to use the self-attention mechanism to model long-range dependencies.
However, global modeling easily introduces global noise, which leads to over-smoothing.
To this end, we propose a \emph{Boundary Prediction} (BP) module which introduces original features to alleviate this phenomenon.
This module is built with a Transformer-alike structure consisting of multiple layers. Each layer contains a \emph{Self-attention Unit}, a \emph{Cross-attention Unit} and a feed-forward network.
Specifically, we first obtain the feature $F_i$ (the input to layer $i$; for $i = 1$, $F_i = F_o$) and the original feature $F_o$.
Then, we feed them to the \emph{Self-attention Unit} and generate augmented global features $F_{i}^{g}$ and $F_{o}^{g}$.
As shown in Fig.~\ref{fig:cross}, we use \emph{Cross-attention Unit} to generate the attention map $A(F_{i}^{g},F_{o}^{g})$ which represents the similarity between the aggregated feature $F_{i}^{g}$ and the original aggregated feature $F_{o}^{g}$, called the originality score.
To get $F_{i+1}$, we aggregate features which have high originality scores and discard features which have low originality scores.
The final output representation is then used as the global representation for the boundary prediction task. Specifically, we utilize a boundary predictor that encodes the output representation with a multi-layer perceptron (MLP) followed by a \emph{Sigmoid} layer to generate the boundary probability sequence.
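As a hedged sketch, the originality-based selection could look like the following, where we read "aggregate high / discard low originality" as median-threshold gating on the diagonal of the cross-attention map; this gating rule is our assumption, not the paper's exact mechanism:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

def originality_gate(F_i_g, F_o_g, Wq, Wk):
    """Hedged sketch of the Cross-attention Unit: score each position by
    its attention weight onto its own original (pre-smoothing) feature and
    zero out positions below the median originality."""
    C = F_i_g.shape[-1]
    A = softmax((F_i_g @ Wq) @ (F_o_g @ Wk).T / np.sqrt(C))
    score = np.diag(A)                      # originality score per position
    keep = score >= np.median(score)
    return F_i_g * keep[:, None], score
```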
\subsection{Training}
\label{sec:train}
The overall objective of our framework is defined as:
\begin{equation}
\mathcal{L} = \mathcal{L}_{1} + \mathcal{L}_{2},
\end{equation}
where $\mathcal{L}_1$ and $\mathcal{L}_2$ are the objective functions of the ABI module and the BP module respectively.
\textbf{Objective of BP module.}
The BP module generates the starting and ending probability sequences $P_s$ and $P_e$.
Thus, the loss function consists of a starting loss and an ending loss:
\begin{equation}
\mathcal{L}_{1} = \mathcal{L}_{bl}(P_s, G_s) + \mathcal{L}_{bl}(P_e, G_e),
\end{equation}
where $G_s$ and $G_e$ are the ground truth labels of boundary sequence, and $\mathcal{L}_{bl}$ is the binary logistic regression loss.
\textbf{Objective of ABI module.}
The ABI module generates frame-level and clip-level scores: $P_{a}^{f}$, $P_{b}^{f}$, $\widehat{P_{a}^{c}}$, $\widetilde{P_{a}^{c}}$ and $P_{b}^{c}$.
$P_{a}^{f}$ and $P_{b}^{f}$ are the frame-level action and background classification scores.
$P_{b}^{c}$ is the clip-level background classification score.
Following BMN, $\widehat{P_{a}^{c}}$ is the clip-level action classification score and $\widetilde{P_{a}^{c}}$ is the action regression score.
The loss function $\mathcal{L}_{2}$ consists of frame-level loss and clip-level loss:
\begin{equation}
\mathcal{L}_{2} = \mathcal{L}_{frame} + \mathcal{L}_{clip}.
\end{equation}
The frame-level loss is
\begin{equation}
\mathcal{L}_{frame} = \mathcal{L}_{c}(P_{a}^{f},G_a^f) + \mathcal{L}_{c}(P_{b}^{f},G_b^f),
\end{equation}
where $G_a^f$ and $G_b^f$ are the ground truth labels of action and background probability at frame-level.
The clip-level loss is formulated as follows:
\begin{equation}
\mathcal{L}_{clip} = \mathcal{L}_c(\widehat{P_{a}^{c}},G_a^c) + \mathcal{L}_r(\widetilde{P_{a}^{c}},G_a^c) +
\mathcal{L}_c(P_{b}^{c},G_b^c),
\end{equation}
where $G_a^c$ and $G_b^c$ are the ground truth labels of action and background scores at clip-level.
$\mathcal{L}_{c}$ denotes the binary logistic regression loss function and $\mathcal{L}_r$ is a smooth $L_1$ loss.
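The binary logistic regression loss is reused from BSN/BMN without being restated; a class-balanced version in their style might look like this (the exact weighting scheme is an assumption):

```python
import numpy as np

def balanced_binary_logistic_loss(p, g, thresh=0.5, eps=1e-8):
    """Class-balanced binary logistic regression loss in the style of
    BSN/BMN: labels above thresh count as positive, and both classes are
    reweighted by inverse frequency so sparse positives are not drowned out."""
    b = (g > thresh).astype(float)
    ratio = len(g) / max(b.sum(), 1.0)            # total / positives
    w_pos = 0.5 * ratio
    w_neg = 0.5 * ratio / max(ratio - 1.0, eps)
    loss = -(w_pos * b * np.log(p + eps)
             + w_neg * (1.0 - b) * np.log(1.0 - p + eps))
    return loss.mean()
```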
\subsection{Inference}
\label{sec:infer}
As mentioned above, the BP module generates boundary probability and the ABI module generates the action and background scores.
Then we feed the boundary probabilities, action scores and background scores into the post-processing module.
Firstly, we construct a proposal set $\psi^c_p$ based on the boundary probabilities.
Secondly, each proposal is refined by a corresponding pre-set anchor.
Taking the proposal $\varphi = [t_s',t_e'] \in {\psi}^c_p$ as an example, we compute the temporal Intersection-over-Union (tIoU) between $\varphi$ and the anchors, then select a matching anchor $p_m = [t^m_s, t^m_e]$ to refine the proposal.
We refine the proposal as:
\begin{equation}
[t_s,t_e]=
\begin{cases}
[\frac{t_s' + t^m_s}{2}, \frac{t_e' + t^m_e}{2}], & \text{if } \widehat{p_{m}^{a}} > {\alpha}_1 \text{ and } \widetilde{p_{m}^{a}} > {\alpha}_2, \\
[t_s', t_e'], & \text{otherwise}
\end{cases},
\end{equation}
where $\widehat{p_{m}^a}$ is the anchor action classification score, $\widetilde{p_{m}^a}$ is the action regression score, ${\alpha}_1$ and ${\alpha}_2$ are the adjustment thresholds.
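The refinement rule can be written directly as a small function; the argument names are ours:

```python
def refine_proposal(t_s_p, t_e_p, anchor, a_cls, a_reg, alpha1=0.9, alpha2=0.8):
    """Average the boundary-based endpoints with the matched anchor's
    endpoints only when both of the anchor's action scores pass the
    adjustment thresholds; otherwise keep the original proposal."""
    t_s_m, t_e_m = anchor
    if a_cls > alpha1 and a_reg > alpha2:
        return (t_s_p + t_s_m) / 2.0, (t_e_p + t_e_m) / 2.0
    return t_s_p, t_e_p
```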
Finally, we get a proposal set $\psi_p = \{{\varphi}_n = (t_s,t_e,p_{t_s'}^{s},p_{t_e'}^e,\widehat{p_{m}^a},\widetilde{p_{m}^a},{p_{m}^b})\}_{n=1}^N$, where $p_{t_s'}^{s}$ and $p_{t_e'}^{e}$ are the starting and ending probabilities and ${p_{m}^b}$ is the anchor background score.
Following the previous practices, we also perform score fusion and redundant proposal suppression to further obtain final results.
Specifically, in order to make full use of the various predicted scores for each proposal $\varphi_n$, we fuse its boundary probabilities and the action and background scores of the matching anchor by multiplication.
The confidence score $p^f$ is defined as:
\begin{equation}
p^f = p_{t_s'}^s \cdot p_{t_e'}^e \cdot \widetilde{p_{m}^a} \cdot \widehat{p_{m}^a} \cdot (1 - {p_{m}^b}).
\end{equation}
Hence, the final proposal set is
\begin{equation}
\psi = {\{\varphi_n = (t_s,t_e,p^f)\}}_{n=1}^N.
\end{equation}
Moreover, we also use the Soft-NMS algorithm for post-processing to suppress proposals that highly overlap with each other.
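A minimal Gaussian Soft-NMS over $(t_s, t_e, \text{score})$ triples might look like the following; the decay parameter $\sigma$ and the Gaussian form are standard Soft-NMS choices, assumed here rather than taken from the paper:

```python
import numpy as np

def soft_nms(proposals, sigma=0.4):
    """Gaussian Soft-NMS: repeatedly keep the current top-scoring proposal
    and decay the scores of proposals that overlap with it, instead of
    discarding them outright."""
    props = [list(p) for p in proposals]
    keep = []
    while props:
        props.sort(key=lambda x: -x[2])
        best = props.pop(0)
        keep.append(tuple(best))
        for p in props:
            inter = max(0.0, min(best[1], p[1]) - max(best[0], p[0]))
            union = max(best[1], p[1]) - min(best[0], p[0])
            iou = inter / union
            p[2] *= np.exp(-iou * iou / sigma)  # Gaussian score decay
    return keep
```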
\section{Experiments}
\subsection{Datasets and Evaluation Metrics}
\textbf{ActivityNet-v1.3} ~\cite{activitynet} is a large-scale video dataset for action recognition and temporal action detection tasks.
It contains 10k training, 5k validation, and 5k testing videos with 200 action categories, and the ratio of the training, validation and testing sets is 2:1:1.
\textbf{THUMOS14}~\cite{thumos} contains 200 untrimmed validation videos and 213 untrimmed test videos, covering 20 action categories.
This dataset is challenging due to the large variations in the frequency and duration of action instances across videos.
\noindent\textbf{Evaluation Metrics. }
Temporal action proposal generation aims to produce high-quality proposals that have both high tIoU with ground truths and a high recall rate.
To evaluate the quality of proposals, we use Average Recall (\textbf{AR}), the average recall rate under specified tIoU thresholds.
Following the standard protocol, we use the threshold set [0.5:0.05:0.95] on ActivityNet-v1.3 and [0.5:0.05:1.0] on THUMOS14.
To evaluate the performance of temporal action detection task, mean Average Precision (\textbf{mAP}) under multiple tIoU is the widely-used evaluation metric.
On ActivityNet-v1.3, the tIoU thresholds are set to \{0.5, 0.75, 0.95\}, and we also report the average mAP over tIoU thresholds from 0.5 to 0.95 with a step of 0.05.
On THUMOS14, these tIoU thresholds are set to \{0.3, 0.4, 0.5, 0.6, 0.7 \}.
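A minimal AR computation at a fixed proposal budget can be sketched as follows; the interval representation is an assumption for illustration:

```python
def average_recall(proposals, gts, thresholds):
    """AR: for each tIoU threshold, the fraction of ground truths covered
    by at least one proposal; averaged over the thresholds
    (e.g. [0.5:0.05:0.95] on ActivityNet-v1.3)."""
    def tiou(a, b):
        inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
        return inter / (max(a[1], b[1]) - min(a[0], b[0]))
    recalls = []
    for th in thresholds:
        hit = sum(any(tiou(g, p) >= th for p in proposals) for g in gts)
        recalls.append(hit / len(gts))
    return sum(recalls) / len(recalls)
```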
\input{table/thumos_ar}
\input{table/thumos_map}
\input{table/anet_ar}
\input{table/anet_map}
\subsection{Implementation Details}
\textbf{Feature Encoding.} Following previous works~\cite{lin2019bmn,xu2020g}, we adopt the TSN~\cite{tsn} and I3D \cite{two-stream} for feature encoding.
For THUMOS14, the interval $\sigma$ is set to 8 and 5 for I3D and TSN respectively. We crop each video feature sequence with overlapped windows of size $T = 256$ and stride 128.
As for ActivityNet-1.3, the sampling frame stride is 16, and each video feature sequence is rescaled to $T = 100$ snippets using linear interpolation.
\\
\textbf{Training and Inference.}
The number of layers in the Boundary Prediction module is 12.
Due to limited computational resources, we apply a 1D convolution for dimension reduction, and then feed the features into the Boundary Prediction module and the Action-Background Interaction module.
For each anchor, we use sampling points $N = 32$.
For the post-processing module, we set the adjustment thresholds to ${\alpha}_1 = 0.9$ and ${\alpha}_2 = 0.8$.
We train our model from scratch using the Adam optimizer; the learning rate is set to $10^{-4}$ and decayed by a factor of 0.1 after every 10 epochs.
\subsection{Comparison with State-of-the-arts}
Here we compare our BCNet with the existing state-of-the-art methods on ActivityNet-v1.3 and THUMOS14.
For fair comparisons, we adopt the same two-stream features used by previous methods in our experiments.
\textbf{Results on THUMOS14.}
BCNet is compared with state-of-the-art methods in Table~\ref{tab:thumos_ar_an} and Table~\ref{tab:thumos_map}, where our method improves the performance significantly for both temporal action proposal generation and action detection.
For the temporal action proposal generation task, results are shown in Table~\ref{tab:thumos_ar_an}, which demonstrate that BCNet outperforms state-of-the-art methods in terms of AR@AN with AN varying from 50 to 1000.
\input{table/ablation_background}
For the temporal action detection task, the proposed BCNet also achieves superior results, as shown in Table~\ref{tab:thumos_map}.
The performance of our method exceeds state-of-the-art proposal generation methods by a large margin at different tIoU thresholds.
Specifically, BCNet based on TSN features reaches an mAP of 51.6$\%$ at tIoU 0.5.
Besides, the performance of BCNet can be further boosted when
it is combined with proposal post-processing methods: P-GCN \cite{zeng2019graph} and MUSES~\cite{liu2021multi}.
BCNet then reaches \textbf{60.0\%} at tIoU 0.5, outperforming all the other methods.
This signifies the advantage of BCNet proposals regardless of post-processing.
\textbf{Results on ActivityNet-v1.3.}
In Table~\ref{tab:anet_ar_an} and Table~\ref{tab:anet_map}, we compare the proposed BCNet with other methods on ActivityNet-v1.3.
For the temporal action proposal generation task, as shown in Table~\ref{tab:anet_ar_an}, BCNet again outperforms state-of-the-art proposal generation methods in terms of AR@AN with AN varying from 1 to 100, as well as AUC.
Especially when AN equals 1, we achieve 35.2\% regarding the AR metric, which indicates that the top-1 proposal has high quality.
For the temporal action detection task, as summarized in Table~\ref{tab:anet_map},
our method achieves notable improvements on mAP over other proposal generation methods such as BMN~\cite{lin2019bmn} and G-TAD~\cite{xu2020g} at all tIoU thresholds.
When tIoU is 0.95, we obtain an mAP of 10.6\%, indicating that the confidence of the generated proposals is more reliable.
\subsection{Ablation Study}
In this section, we conduct ablation studies on THUMOS14 to verify the effectiveness of each component in BCNet.
\textbf{Multi-level ABI module.}
We perform ablation studies to verify the effectiveness of multi-level interaction in ABI module.
Frame-level interaction is designed to generate features of action and background.
The ablation experiment in Table \ref{tab:ablation_multi} demonstrates the necessity of frame-level interaction.
Compared with a single-level ABI module that only has clip-level interaction, the multi-level ABI module improves performance by 3.6\% at tIoU 0.3.
\input{table/ablation_module}
\textbf{The effectiveness of Background Constraint.}
We perform ablation studies to verify the effectiveness of the background constraint idea.
To validate the generalizability of our proposed background constraint idea, we integrate it into BMN and GTAD.
The experimental results are shown in Table~\ref{tab:ablation_background}, which reveals that the background constraint can also significantly improve the performance of existing methods.
\textbf{Architecture of ABI module.}
We perform ablation studies to verify the effectiveness of the architecture of the ABI module.
To generate reliable proposal confidence, the ABI module is designed to exploit the rich information of action and background.
Our proposed ABI module consists of two key units: self-attention unit and difference-attention unit.
Results are shown in Table \ref{tab:ablation_ABI}.
The self-attention unit improves the performance by a large margin (almost 4.5$\%$) at tIoU 0.5.
The difference-attention unit also brings a significant improvement at tIoU 0.3, as it captures the inconsistency between action and background.
\input{table/ablation_ABI}
\subsection{Runtime Analysis}
To verify the efficiency of our BCNet, we report the latency of our method on THUMOS14.
For fair comparisons with other models, we measure the latency under the same environment (a single NVIDIA 2080Ti GPU).
We use a batch size of 1 to measure the latency on the full testing set and report the average time.
As shown in Table~\ref{ablation_infer}, our BCNet achieves the best mAP with the smallest latency (141ms \emph{v.s.} 298ms, 330ms).
The main reason is that our model generates fewer proposals than these methods, which helps it run faster.
\input{table/ablation_flops}
\section{Conclusion}
In this paper, we introduce a Background Constraint concept, which can be integrated easily with existing TAPG methods.
Based on this concept, we propose a Background Constraint Network, which consists of multiple attention units, \emph{i.e.}, the self-attention unit, the cross-attention unit, and the difference-attention unit, and generates high-quality proposals by exploiting the inconsistency between action and background.
Extensive experiments show that our model achieves new state-of-the-art performance in temporal action proposal generation and action detection on the THUMOS14 and ActivityNet-v1.3 datasets.
\section*{Acknowledgments}
The work was funded by National Natural Science Foundation of China (U1711265) and Heilongjiang Province Science Foundation (2020ZX14A02).
\bibliographystyle{aaai22}
\section{Supplemental I. Dispersion relations for the Chern insulator}
\begin{figure*}[t]
\includegraphics[trim=96 240 98 255,clip, width=0.33\linewidth]{Ekya.pdf}\hfill
\includegraphics[trim=96 240 98 255,clip, width=0.33\linewidth]{Ekyb.pdf}\hfill
\includegraphics[trim=96 240 98 255,clip, width=0.33\linewidth]{Ekyc.pdf}
\caption{Dispersion relations for the Chern insulator model in Eq.~(1) in the main text with $L_x=6$, $t=1$, $\Delta=0.22$, $\epsilon=0$, $\kappa_{n,m}=0$, and (op,pe) boundary conditions (blue) for different values of $M$. The plot for $M=1.57$ also shows the single-particle spectrum for (pe,pe) boundary conditions (black).
For $M=1.2$ the system is inside a trivial bubble, and for $M=2$ the system is inside a topological bubble. The plot at $M=1.57$ shows the gap closing in the (op,pe) spectrum that happens at the phase transition between the trivial and the topological bubble. This gap closing happens while the gap in the (pe,pe) spectrum remains open. Note also that the highest state below zero energy is separated from all the other states below zero energy. We can therefore compute the Wannier charge center for this state alone. The result is $0$ for $M=1.2$ and $\pi$ for $M=2$. This shows that the topological information is carried by this state.}
\label{fig:many1}
\end{figure*}
In Fig.~\ref{fig:many1}, we show dispersion relations for the Chern insulator model with $L_x=6$. For open boundary conditions in the $x$ direction, the chiral modes of the $M>0$ Chern insulator cross at the center of the slab Brillouin zone, as shown in Fig.~\ref{fig:many1}(b). Chiral boundary modes of the Chern insulator phase for $M<0$ instead cross at the edge of the slab Brillouin zone. The figure also shows that the highest energy state below zero energy is separated from the other states below zero energy, which allows us to compute the Wannier charge center for this state alone.
\section{Supplemental II. Finite-size topology in the Chern insulator with $L_x=5$}
\begin{figure*}
\includegraphics[trim=96 240 98 255,clip, width=0.33\linewidth]{spectrum5.pdf}\hfill
\includegraphics[trim=96 240 98 255,clip, width=0.33\linewidth]{WL5.pdf}\hfill
\includegraphics[trim=96 240 98 255,clip, width=0.33\linewidth]{density5.pdf}
\caption{(a) Single-particle spectrum of the Chern insulator model in Eq.~(1) in the main text with $L_x=5$, $L_y=600$, $t=1$, $\Delta=0.22$, $\epsilon=0$, and $\kappa_{n,m}=0$. The spectrum for (pe,pe) boundary conditions (black) is plotted on top of the spectrum for (op,pe) boundary conditions (blue), which in turn is plotted on top of the spectrum for (op,op) boundary conditions (red). The purple background highlights the topological regions determined from the Wannier charge centers. (b) Wannier charge centers $\vartheta^{(j)}$ for $L_y\to\infty$. The topological regions (purple) are those for which at least one of the Wannier charge centers are at $\pm \pi$. (c) Density $d_{n,m}=\sum_\sigma\langle c_{\sigma,n,m}^\dag c_{\sigma,n,m} \rangle$ of one of the two states close to zero energy at $M=1.5$. Only one end of the system is shown as more than $99.8\%$ of the total density is located within the sites shown. The other state has the density located at the other end.}
\label{fig:Lx5}
\end{figure*}
The Chern insulator with $L_x=5$ and $L_y=600$ is investigated in Fig.~\ref{fig:Lx5}, where we show the single-particle spectrum, the Wannier charge centers, and the localization of the boundary modes. The results are similar to the case with $L_x=6$, except that the number of bubbles is different. In general, we observe that the (op,pe) spectrum has $2L_x$ gap closings as a function of $M$ when $L_x\ll L_y$, and hence there are $2L_x-1$ bubbles. When counting the bubbles in Fig.~\ref{fig:Lx5}(a), note that the (op,pe) spectrum does not have a gap closing at $M=0$ for $L_x=5$, even though the (pe,pe) spectrum does.
\section{Supplemental III. Disorder}\label{Sec:disorder}
The disorder averaging of the Chern insulator spectrum in Fig.~4(b) in the main text is done as follows. We label the energies $E_n$ in the single-particle spectrum with the index $n\in\{1,2,\ldots,2L_xL_y\}$ in increasing order such that $E_{n+1}\geq E_n$. We observe numerically that the states within the bubble gaps form pairs of almost degenerate states with the densities located at opposite ends of the considered quasi-(2-1)D lattice with $6\times 600$ sites. Specifically, we select states that fulfil both of the following criteria and observe that these states coincide with the states in the bubble gaps: Each of the states in a pair with energies $E_n$ and $E_{n+1}$ should have at least $97.5\%$ of the density located within the $2.5\%$ outermost sites at one end of the lattice in more than half of the considered disorder realizations, and for all disorder realizations $\sum_{(n,m)\in\textrm{left}}d_{n,m}-\sum_{(n,m)\in\textrm{right}}d_{n,m}$ should have opposite signs for the two states, where $d_{n,m}=\sum_\sigma \langle c^\dag_{\sigma,n,m} c_{\sigma,n,m}\rangle$ is the density and left/right refers to the left/right half of the lattice. Depending on the specific disorder realization, one state or the other in a pair may be higher in energy, since the disorder is not the same at the two ends. When we average over different disorder realizations, we want to average a given state with the corresponding states in all the other spectra. Hence, for the states in the bubble gaps, we average the energies of the states that have the densities located at the same end. For all other states, Fig.~1(d) in the main text shows the disorder average of $E_n$.
We further investigate the gap between the nearly-degenerate, topological, quasi-(2-2)D boundary modes in the Chern insulator model with disorder in Fig.~\ref{fig:gapdis}. Figure \ref{fig:gapdis}(a) shows the gap for the model with particle-hole symmetry. Even in the presence of disorder, the gap decays exponentially as a function of $L_y$, and after averaging over $200$ disorder realizations, the gaps closely follow the gaps for the model without disorder. The gap for the case without particle-hole symmetry is investigated in Fig.~\ref{fig:gapdis}(b,c). It is seen that the gap approaches zero when the number of disorder realizations increases.
\begin{figure*}
\includegraphics[trim=98 240 122 240,clip, width=0.33\textwidth]{figS2a.pdf}\hfill
\includegraphics[trim=98 240 122 240,clip, width=0.33\textwidth]{figS2b.pdf}\hfill
\includegraphics[trim=98 240 122 240,clip, width=0.33\textwidth]{figS2c.pdf}
\caption{(a) Energy gap between the nearly-degenerate, topological boundary modes (state number $L_xL_y$ and $L_xL_y+1$ in the spectrum) as a function of $L_y$ for the particle-hole symmetric case (i.e.\ $\epsilon=0$) and either $L_x=5$ and $M=1.5$ or $L_x=6$ and $M=2$. The gap for the model with disorder strength $\kappa=0.02$ averaged over $200$ disorder realizations shows the same exponential decay with $L_y$ as the gap for the model without disorder (i.e.\ $\kappa=0$). (b) Gap between the nearly-degenerate, topological boundary modes (state number $L_xL_y$ and $L_xL_y+1$ in the spectrum) for the disordered model without particle-hole symmetry and parameters $L_x=5$, $M=1.5$, $\epsilon=0.02$, and $\kappa=0.02$. The disorder averaging is done as explained in Sec.~\ref{Sec:disorder}, and the gap is shown as the difference $E_R-E_L$ between the energy $E_R$ of the state with the density mainly at the right end and the energy $E_L$ of the state with the density mainly at the left end. The number of disorder realizations is given in the legend, and the error bars show the standard deviation of the mean value. The gap is seen to approach zero as the number of disorder realizations increases. (c) shows the same as (b), but for $L_x=6$ and $M=2$.}
\label{fig:gapdis}
\end{figure*}
\section{Supplemental IV. Quasi-(2-2)D modes in a Chern insulator with Chern number 2}
\begin{figure*}
\includegraphics[trim=98 240 108 255,clip, width=0.33\textwidth]{spectrumC2.pdf}\hfill
\includegraphics[trim=98 240 108 255,clip, width=0.33\textwidth]{WLC2.pdf}\hfill
\includegraphics[trim=98 240 108 255,clip, width=0.33\textwidth]{densityC2.pdf}
\caption{(a) The single-particle spectrum of the model in Eq.~\eqref{Chern2Ham} shows 4-fold degenerate, topological, zero energy modes for $L_x=6$, $L_y=600$, and open boundary conditions in both directions (red). The spectrum for periodic boundary conditions in the $y$ direction and open boundary conditions in the $x$ direction is shown in blue. The purple background highlights the topological region determined from the Wannier charge centers in the limit $L_y\to\infty$. (b) Wannier charge centers $\vartheta^{(j)}$. The topological region (purple) is the region for which at least one of the Wannier charge centers is at $\pm\pi$. Here, we observe that for $t_2>t_1$ and $L_y\to\infty$, two of the Wannier charge centers are at $\pm\pi$. (c) Density $d_{n,m}=\sum_\sigma\langle c^\dag_{\sigma,n,m} c_{\sigma,n,m}\rangle$ near the end of the slab for one of the four zero energy states for $t_2=3t_1$. The total density within the plotted region is $0.988$.}
\label{fig:C2}
\end{figure*}
In the main text, we considered a Chern insulator model with Chern number $1$ characterized by the Hamiltonian in Eq.~(1). Here, we provide results for a Chern insulator model \cite{Sticlet_2012} on an $L_x\times L_y$ square lattice with two orbitals labeled by $\sigma=\pm1$ on each site and Chern number $2$. The Hamiltonian
\begin{multline}\label{Chern2Ham}
H=t_1 \sum_{\sigma,n,m} (c^\dag_{\sigma,n,m} c_{-\sigma,n+1,m}+
c^\dag_{-\sigma,n+1,m} c_{\sigma,n,m})\\
- t_1 \sum_{\sigma,n,m} \sigma (ic^\dag_{\sigma,n,m} c_{-\sigma,n,m+1}
-ic^\dag_{-\sigma,n,m+1}c_{\sigma,n,m})\\
+ t_2 \sum_{\sigma,n,m} \sigma (c^\dag_{\sigma,n,m} c_{\sigma,n+1,m+1}
+c^\dag_{\sigma,n+1,m+1} c_{\sigma,n,m})
\end{multline}
consists of nearest neighbor hopping between different orbitals with strength $t_1$ and next-nearest neighbor hopping between the same type of orbitals with strength $t_2$. We consider $t_1>0$ and describe the model in terms of the parameter $t_2/t_1$.
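As a consistency check on the Chern number, note that with periodic boundary conditions in both directions the Hamiltonian above Fourier transforms (in the standard convention) to the two-band Bloch Hamiltonian $h(\mathbf{k})=2t_1\cos k_x\,\sigma_x+2t_1\cos k_y\,\sigma_y+2t_2\cos(k_x+k_y)\,\sigma_z$. The sketch below, which is not part of the paper's numerics and uses illustrative parameter values, evaluates the lower-band Chern number with the Fukui-Hatsugai-Suzuki lattice formula; its magnitude should come out as $2$.

```python
import numpy as np

# Illustrative parameters (t2 > t1, the regime found topological above).
t1, t2 = 1.0, 3.0

def bloch(kx, ky):
    # Two-band Bloch Hamiltonian h(k) = d(k) . sigma obtained by Fourier
    # transforming the real-space Hamiltonian above.
    dx = 2 * t1 * np.cos(kx)
    dy = 2 * t1 * np.cos(ky)
    dz = 2 * t2 * np.cos(kx + ky)
    return np.array([[dz, dx - 1j * dy],
                     [dx + 1j * dy, -dz]])

N = 40                              # k-mesh; the lattice formula is integer
ks = 2 * np.pi * np.arange(N) / N
u = np.empty((N, N, 2), dtype=complex)
for i, kx in enumerate(ks):
    for j, ky in enumerate(ks):
        _, vecs = np.linalg.eigh(bloch(kx, ky))
        u[i, j] = vecs[:, 0]        # lower-band eigenvector

def link(a, b):
    # U(1) link variable between neighboring k-points; the product around
    # a plaquette is gauge invariant.
    z = np.vdot(a, b)
    return z / abs(z)

flux = 0.0
for i in range(N):
    for j in range(N):
        ip, jp = (i + 1) % N, (j + 1) % N
        plaq = (link(u[i, j], u[ip, j]) * link(u[ip, j], u[ip, jp])
                * link(u[ip, jp], u[i, jp]) * link(u[i, jp], u[i, j]))
        flux += np.angle(plaq)      # Berry flux through one plaquette
chern = round(flux / (2 * np.pi))
```

The overall sign of the result depends on orbital and Fourier conventions; only its magnitude is convention independent.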
We observe different properties of the model with $L_x\ll L_y$ depending on $L_x$. For odd $L_x$, we do not observe interesting behaviors near zero energy. When $L_x$ is divisible by $2$, modes form within a gap in the quasi-(2-1)D system. If $L_x$ is not divisible by $4$, the modes are at zero energy and are 4-fold degenerate. If $L_x$ is divisible by $4$, the modes are at nonzero energy, and there are two sets of states that are each 4-fold degenerate. The degeneracy hence doubles when the Chern number changes from $1$ to $2$. In Fig.\ \ref{fig:C2}, we show results for $L_x=6$. The topological, 4-fold degenerate states at zero energy are seen in the spectrum. The density distributions of these four states reveal the quasi-(2-2)D nature of the states with the densities being mainly at the ends, and the Wannier charge centers show that the model is topological for $t_2>t_1$ in the limit $L_y\to\infty$.
These results suggest that Chern insulators with Chern number $C$ possess $2|C|$ topological, quasi-(2-2)D boundary modes in the topological bubbles. We find that this corresponds to $|C|$ of the Wannier charge centers taking the value $\pm\pi$, indicating an integer topological classification of the finite-size system.
\section{Introduction}
It was a widely held intuition in the early days of proper forcing
that a typical application of the Proper Forcing Axiom makes use of a
poset of the form $\sigma\mbox{-closed}\,*\,{\rm ccc}$. The usual
argument was to collapse the size of all objects to $\aleph_1$, then
use a ccc-poset to force the desired property to these objects and
finally pull everything back into the ground model with the help of
the forcing axiom. Later it was realized that forcing axioms have a
lot more applications than that. These new developments headed in
two different directions: on the one hand, there was the
development of semiproper forcing in \cite{shelah82:_proper_forcin},
which led to the Semiproper Forcing Axiom and later to Martin's
Maximum in \cite{foreman88}. On the other hand, even for ${\rm PFA}$
it was soon realized that there is a large variety of proper forcings
that are not of the form $\sigma\mbox{-closed}\,*\,{\rm ccc}$.
Interesting examples here are posets that are not $\omega$-proper; it
was demonstrated first in \cite{shelah82:_proper_forcin} and later
in \cite{moore05:_set} that these can be put to good use. The
point we are trying to make is slightly different and includes both
proper and semiproper forcing notions. We give examples to show that
certain applications of forcing axioms can require adding reals or
countable sequences even before we collapse the size of the relevant
objects to $\aleph_1$. The intuition here is that if we collapse
without adding countable sequences then our object will have an
enumeration of order-type $\omega_1$ whose initial segments are in the
ground model. But certain applications exclude such a possibility,
most prominently the negation of approachability properties. We just
sketched the general direction of this article. Section
\ref{sec:closure-prop} introduces a wide range of forcing properties
that will become important later for the indestructibility theorems.
In Section \ref{sec:indestruct} we will give various indestructibility
results for forcing axioms, but also indestructibility results for
axioms true in the Levy-Collapse of a large cardinal, most notably the
axiom of Game Reflection from \cite{koenig04:_gener}. Section
\ref{sec:Kurepa-tree} is a way of applying this technique to Jonsson
cardinals and related model-theoretic transfer properties. Among other
things, it is shown there that ${\rm MM}$ is consistent with
$\aleph_\omega$ not being Jonsson, which shoots down a lingering
conjecture.
The reader is assumed to have a strong background in set theory. As a
general reference we recommend \cite{Kunen-intro} and as a reference
regarding proper and semiproper forcing we suggest
\cite{baumgartner84:_applic_proper_forcin_axiom} and
\cite{shelah98:_proper_improper}. The remaining paragraphs of the
introduction will be used to give a short summary of the most
important Lemmas and Definitions used in this paper.
\begin{definition}
If $\Gamma$ is a class of posets then ${\rm MA}(\Gamma)$ denotes the
statement that whenever $\mathbb{P}\in\Gamma$ and
$D_\xi\;(\xi<\omega_1)$ is a collection of dense subsets of
$\mathbb{P}$ then there exists a filter $G\subseteq\mathbb{P}$ such that
$D_\xi$ intersects $G$ for all $\xi<\omega_1$. In particular, {\rm
PFA} is ${\rm MA}(\mbox{proper})$ and ${\rm MM}$ is ${\rm
MA}$(preserving stationary subsets of $\omega_1$). The interested
reader is referred to
\cite{baumgartner84:_applic_proper_forcin_axiom} and
\cite{foreman88} for more history on these {\em forcing axioms}.
\end{definition}
\begin{definition}\label{def-BMA}
We would like to remind the reader of the notion of a {\em bounded
forcing axiom}. Assume that $\lambda$ is a cardinal and $\Gamma$ a
class of posets, then the following are equivalent (see
\cite{bagaria00:_bound} and \cite{todorcevic02:_local_pfa}):
\begin{enumerate}
\item For every $\mathbb{P}\in\Gamma$ and every collection
$D_\xi\;(\xi<\omega_1)$ of dense subsets of $\mathbb{P}$ of size
$\leq\lambda$ there is a filter $G\subseteq\mathbb{P}$ such that $D_\xi$
intersects $G$ for all $\xi<\omega_1$.
\item For every $A\subseteq\lambda$ and every $\Sigma_1$-formula
$\varphi(x)$, if there is some $\mathbb{P}\in\Gamma$ such that
$\Vdash_\mathbb{P}\varphi(A)$ then there are stationarily many
$M\prec\Hk{\lambda^+}$ of size $\aleph_1$ containing $A$ such that
$\Hk{\lambda^+}\models\varphi(\pi_M(A))$, where $\pi_M$ is the
transitive collapse of $M$.
\end{enumerate}
So let us denote the equivalent statements (1) and (2) by ${\rm
MA}(\Gamma,\lambda)$ and for simplicity we write ${\rm
PFA}(\lambda)$ for ${\rm MA}(\mbox{proper},\lambda)$ and ${\rm
MM}(\lambda)$ for ${\rm MA}$(preserving stationary subsets of
$\omega_1,\lambda$). The axioms ${\rm PFA}(\omega_1)$ and ${\rm
MM}(\omega_1)$ are often denoted by ${\rm BPFA}$ and ${\rm BMM}$
respectively. If $\Gamma$ is any class of posets, we also write
${\rm PFA}(\Gamma)$ for ${\rm MA}(\mbox{proper and in $\Gamma$})$
and similarly with ${\rm MM}(\Gamma)$.
\end{definition}
\begin{definition}
The {\em approachability property for $\kappa$} (${\rm
AP}_{\kappa}$) is the statement that there is a sequence
$(C_\alpha:\alpha<\kappa^+)$ such that for any $\alpha<\kappa^+$:
\begin{enumerate}
\item[(a)] $C_\alpha\subseteq\kappa^+$, ${\rm otp}\;C_\alpha\leq\kappa$,
\end{enumerate}
and there is a club $C\subseteq\lim(\kappa^+)$ such that for every
$\gamma\in C$:
\begin{enumerate}
\item[(b)] $C_\gamma\subseteq\gamma$ is club,
\item[(c)] the initial segments of $C_\gamma$ are enumerated before
$\gamma$,\\ i.e. $\forall\alpha<\gamma\;\,\exists\beta<\gamma\;\,
C_\gamma\cap\alpha=C_\beta$.
\end{enumerate}
A straightforward argument shows that ${\rm AP}_\kappa$ follows from
either $\square_\kappa$ or from the cardinal arithmetic
$\kappa^{<\kappa}=\kappa$.
\end{definition}
We use an abbreviation in the context of elementary embeddings:
$j:M\longrightarrow N$ means that $j$ is a non-trivial elementary embedding from
$M$ into $N$ such that $M$ and $N$ are transitive. The {\em critical
point} of such an embedding, i.e. the first ordinal moved by $j$, is
denoted by ${\rm cp}(j)$. We write $jx$ for $j(x)$ in a context where
too many parentheses might be confusing. Let us remind ourselves of
the well-known extension Lemma for elementary embeddings first noticed
by Silver:
\begin{lemma}[Extension Lemma]\label{silverlem}
Let $j:M\longrightarrow N$ and assume that $G\subseteq\mathbb{P}$ is generic over
$M$ and $K\subseteq j(\mathbb{P})$ generic over $N$. If $j''G\subseteq K$ then
there is a unique extension $j^*:M[G]\longrightarrow N[K]$ of $j$ such that
$j^*(G)=K$.
\end{lemma}
\begin{proof}
For each $\mathbb{P}$-name $\dot{\tau}$ simply let
$j^*(\dot{\tau}[G])=j(\dot{\tau})[K]$.
\end{proof}
\begin{definition}
Recall that the model-theoretic transfer property
$$(\lambda_1,\lambda_0)\twoheadrightarrow(\kappa_1,\kappa_0)$$
means that every
structure $(\lambda_1,\lambda_0,f_i)_{i<\omega}$ has an elementary
substructure of the form $(A_1,A_0,f_i)_{i<\omega}$, where
$|A_1|=\kappa_1$ and $|A_0|=\kappa_0$. The relation
$(\omega_2,\omega_1)\twoheadrightarrow(\omega_1,\omega)$ is usually called {\em
Chang's conjecture}. A cardinal $\mu$ is called {\em
$\kappa$-Rowbottom} if for all $\lambda<\mu$ we have
$(\mu,\lambda)\twoheadrightarrow(\mu,<\kappa)$. A cardinal $\mu$ is called {\em
Jonsson} if every algebra of size $\mu$ has a proper subalgebra of
size $\mu$.
\end{definition}
We also need the following well-known Lemmas:
\begin{lemma}\label{coll-iso}
Let $\lambda$ be regular and assume that $\mathcal{P}$ is a
$\sigma$-closed poset of size $\lambda^{\aleph_0}$ that collapses
$\lambda^{\aleph_0}$ to $\aleph_1$. Then $\mathcal{P}$ is
forcing-isomorphic to ${\rm Col}(\aleph_1,\lambda)$.\hfill\qed
\end{lemma}
\begin{lemma}\label{str-embeds}
Assume that $\mathcal{P}$ is strategically $\sigma$-closed and
$\lambda\geq|\mathcal{P}|$. Then $\mathcal{P}$ completely embeds into
${\rm Col}(\aleph_1,\lambda)$.
\end{lemma}
\begin{proof}
Clearly, $\mathcal{P}$ completely embeds into $\mathcal{P}\times{\rm
Col}(\aleph_1,\lambda)$. Results in
\cite{foreman83:_games_boolean} imply that $\mathcal{P}\times{\rm
Col}(\aleph_1,\lambda)$ is $\sigma$-closed. Finally, Lemma
\ref{coll-iso} concludes that $\mathcal{P}\times{\rm
Col}(\aleph_1,\lambda)\cong{\rm Col}(\aleph_1,\lambda)$ and we are
done.
\end{proof}
\section{$\omega_2$-closure properties}
\label{sec:closure-prop}
In this section we introduce five different properties of forcings
which all entail that no new $\omega_1$-sequences be added. We give a
small overview before defining them one by one, the following list is
increasing in logical strength:
\begin{enumerate}
\item $\omega_2$-distributive
\item weakly $(\omega_1+1)$-game-closed
\item strongly $(\omega_1+1)$-game-closed
\item $\omega_2$-closed
\item $\omega_2$-directed-closed
\end{enumerate}
So let us start with distributivity. A poset $\mathbb{P}$ is called {\em
$\kappa$-distributive} if the intersection of less than $\kappa$
many dense open subsets is again dense open. Note that this is
equivalent to saying that $\mathbb{P}$ adds no new sequences of length
less than $\kappa$. It will become clear later why we are mostly
interested in the case $\kappa=\omega_2$. We have the following
proposition:
\begin{lemma}\label{pres-bounded}
Let $\lambda\geq\aleph_1$. The bounded forcing axiom ${\rm
MM}(\lambda)$ is preserved by $\lambda^+$-distri\-butive forcings.
Moreover, ${\rm PFA}(\lambda)$ is preserved by proper
$\lambda^+$-distributive forcings.
\end{lemma}
\begin{proof}
This follows simply from the fact that $\lambda^+$-distributive
forcings add no new elements to $\Hk{\lambda^+}$, so check that (2)
of Definition \ref{def-BMA} holds in any (proper)
$\lambda^+$-distributive extension. For the ${\rm
MM}(\lambda)$-argument, note that $\lambda^+$-distributive
forcings preserve stationary subsets of $\omega_1$.
\end{proof}
Now recall longer versions of the Banach-Mazur game on a partial
ordering $\mathbb{P}$: \medskip
\begin{tabular}{c|cccccc}
Empty & $p_0$ & $p_2$ & $\ldots$ & $p_\xi$ & $\ldots$\\\hline
Nonempty & $\qquad p_1$ & $\qquad p_3$ & $\qquad\ldots$ & $\qquad
p_{\xi+1}$ & $\qquad\ldots$
\end{tabular}
\medskip\noindent where $p_\xi\;(\xi<\alpha)$ is descending in
$\mathbb{P}$ and Nonempty wins the game of length $\alpha$ if he can play
$\alpha$ times.
\begin{definition}\label{def-game-closed}
A poset $\mathbb{P}$ is called {\em weakly $\alpha$-game-closed} if
Player Non\-empty has a winning strategy in the Banach-Mazur game of
length $\alpha$, where Nonempty is allowed to play at limit stages.
$\mathbb{P}$ is called {\em strongly $\alpha$-game-closed} if Player
Nonempty has a winning strategy in the same game except where Empty
is allowed to play at limit stages.
\end{definition}
It is clear that strongly $(\kappa+1)$-game-closed posets are also
weakly $(\kappa+1)$-game-closed. Remember that the standard forcing to
add a $\square_\kappa$-sequence with initial segments is weakly
$(\kappa+1)$-game-closed. An ${\rm AP}_\kappa$-sequence can be added
with a considerably milder forcing. The following crucial fact is
originally from \cite{yoshinubo03:_approac}:
\begin{lemma}\label{force-AP}
For all cardinals $\kappa$ there is a strongly
$(\kappa+1)$-game-closed forcing $\mathbb{A}_\kappa$ that adds an ${\rm
AP}_\kappa$-sequence.\hfill\qed
\end{lemma}
\begin{definition}\label{plays-def}
Assume for the following that $\mathbb{P}$ is strongly
$(\omega_1+1)$-game-closed. Let us fix a winning strategy $\sigma$
for Nonempty in the Banach-Mazur game on $\mathbb{P}$. Instead of
forcing with $\mathbb{P}$, we could also add a play of the game
generically. Then this play induces a generic filter for $\mathbb{P}$.
Define $$\mathbb{R}=\{\langle
p_\xi:\xi\leq\gamma\rangle:\gamma<\omega_1 \mbox{ and }
p_\xi\;(\xi\leq\gamma)\mbox{ is a partial $\sigma$-play}\}.$$
If
$s=\langle p_\xi:\xi\leq\gamma\rangle\in\mathbb{R}$ is such a partial
play, we also denote the maximal condition $p_\gamma$ by $p_s$. The
ordering on $\mathbb{R}$ is usual extension. Note that $\mathbb{R}$ is
$\sigma$-closed and contains $\mathbb{P}$ as a complete subalgebra by
the projection mapping $i(s)=p_s$. Yet, it is a much stronger
forcing: $\mathbb{R}$ will typically collapse the cardinality of
$\mathbb{P}$ to $\aleph_1$.
\end{definition}
\begin{lemma}\label{finsegsigma}
Using the notation of Definition \ref{plays-def}, if $G\subseteq\mathbb{P}$
is generic then $\mathbb{R}/\mathbb{P}=\{s\in\mathbb{R}:p_s\in G\}$ is
$\sigma$-closed.
\end{lemma}
\begin{proof}
Suppose $s_n\;(n<\omega)$ is a descending sequence in
$\mathbb{R}/\mathbb{P}$ and $\gamma$ the length of the union
$\bigcup_{n<\omega}s_n$. Then $q=\inf_{n<\omega}p_{s_n}$ is in $G$
and $$s=\bigcup_{n<\omega} s_n\cup\{(\gamma,q)\}$$
is still a
partial play according to $\sigma$.
\end{proof}
\begin{lemma}\label{quot-sigma}
Assume $\mathbb{P}$ is strongly $(\omega_1+1)$-game-closed and
$\lambda\geq|\mathbb{P}|^{\aleph_0}$. Then ${\rm
Col}(\omega_1,\lambda)/\mathbb{P}$ is $\sigma$-closed.
\end{lemma}
\begin{proof}
Let $\mathbb{R}$ be as before in Definition \ref{plays-def}. The Lemma
follows from the following calculation:
\begin{eqnarray*}
{\rm Col}(\aleph_1,\lambda)/\mathbb{P} & \cong & (\mathbb{R}\times{\rm
Col}(\aleph_1,\lambda))/\mathbb{P}\;\mbox{ (by Lemma
\ref{coll-iso})}\\
& \cong & (\mathbb{R}/\mathbb{P})\times{\rm Col}(\aleph_1,\lambda)
\end{eqnarray*}
and this last product is $\sigma$-closed by Lemma \ref{finsegsigma}.
\end{proof}
Lemma \ref{quot-sigma} points out the crucial difference between
strongly and weakly game-closed forcings: the quotient ${\rm
Col}(\aleph_1,\lambda)/\mathbb{P}$ will generally not be
$\sigma$-closed if $\mathbb{P}$ is only weakly
$(\omega_1+1)$-game-closed.
Finally, we introduce the two remaining notions listed at the
beginning of the section. For an infinite cardinal $\kappa$, $\mathbb{P}$
is called {\em $\kappa$-closed} if any $\mathbb{P}$-descending chain of
length less than $\kappa$ has a lower bound in $\mathbb{P}$. The poset
$\mathbb{P}$ is called {\em $\kappa$-directed-closed} if it is closed
under directed subsets of size less than $\kappa$.
\cite{koenig04:_fragm_maxim} proves that {\rm PFA} is preserved by
$\omega_2$-closed forcings and \cite{larson00:_separ} that {\rm MM} is
preserved by $\omega_2$-directed-closed forcings.
\section{Indestructibility of set-theoretic
axioms}\label{sec:indestruct}
\subsection{Forcing axioms}\label{subsec:forcingax}
This section should be seen as a continuation of work started in
\cite{koenig04:_fragm_maxim}. There it was shown that ${\rm PFA}$
implies failure of the approachability property at $\aleph_1$. While
we do not repeat the full proof here, it is interesting to mention
that ${\rm PFA}$ is applied to a poset $\mathbb{Q}_0*\mathbb{Q}_1*\mathbb{Q}_2$
in this argument, where $\mathbb{Q}_0$ adds a Cohen real, $\mathbb{Q}_1$ is
a collapse with countable conditions, and $\mathbb{Q}_2$ specializes
a tree of size $\aleph_1$. The curious fact about the proof is that it
seems necessary for technical reasons to add the Cohen real right at
the start. In the following, we want to argue that the Cohen real is
absolutely necessary. We introduce the notion of a {\em
$\Sigma$-collapsing} poset $\mathbb{Q}$ which means that $\mathbb{Q}$ can
be split up into two parts, where the first collapses everything in
sight without adding countable sequences, while the second is
arbitrary. Almost all known applications of ${\rm PFA}$ or ${\rm MM}$
use $\Sigma$-collapsing posets. This section points out
the few arguments where the forcing axiom for $\Sigma$-collapsing
posets is not enough even though the full forcing axiom suffices. In
other words, in the presented examples it is absolutely necessary to
add countable sequences before collapsing everything to $\aleph_1$.
\begin{definition}
A poset $\mathbb{Q}$ is called {\em $\Sigma$-collapsing} if it is of
the form $\mathbb{Q}=\mathbb{Q}_0*\mathbb{Q}_1$, where
\begin{enumerate}
\item $\mathbb{Q}_0$ is $\aleph_1$-distributive and
\item $\Vdash_{\mathbb{Q}_0}|\mathbb{Q}_1|\leq\aleph_1$.
\end{enumerate}
The class of $\Sigma$-collapsing posets that preserve stationary
subsets of $\omega_1$ is denoted by $\Gamma_\Sigma$.
\end{definition}
The point of this definition is that a $\Sigma$-collapsing poset will
typically collapse its own cardinality to $\aleph_1$ without adding
countable sequences. The final segment $\mathbb{Q}_1$ is allowed to be
anything of size at most $\aleph_1$ though. Notice that, in the
context of forcing axioms, the class $\Gamma_\Sigma$ in particular contains all
posets that are
\begin{itemize}
\item $\aleph_1$-distributive (take $\mathbb{Q}_1$ to be trivial).
\item proper not adding reals (this implies
$\aleph_1$-distributivity).
\item $\sigma\mbox{-closed}\,*\,{\rm ccc}$ (since we can assume the
${\rm ccc}$-poset to be of size at most $\aleph_1$
\cite[p.62]{Kunen-intro}).
\end{itemize}
For example, ${\rm PFA}(\Gamma_\Sigma)$ implies the axiom ${\rm MRP}$
from \cite{moore05:_set} and ${\rm MM}(\Gamma_\Sigma)$ implies ${\rm
SRP}$ (see e.g. \cite{MR1713438}). As mentioned in the introduction,
many classical applications of ${\rm PFA}$ are actually consequences
of the forcing axiom for posets of the form
$\sigma\mbox{-closed}\,*\,{\rm ccc}$ and all these are also included
in ${\rm PFA}(\Gamma_\Sigma)$. Before we prove the main theorem of
this section, we reproduce the following Lemma. The proof is actually
a nice exercise but can also be looked up in \cite{kunen78:_satur}.
\begin{lemma}\label{kunen-lem}
Let $\lambda$ be regular uncountable. Assume that $\mathbb{R}$ is
$\lambda$-closed, $A$ a relation on $\lambda$ and $\varphi$ a
$\Sigma^1_1$-sentence. If there is a condition $r\in\mathbb{R}$ such
that $r\Vdash_\mathbb{R}"(\lambda,A)\models\varphi"$ then
$(\lambda,A)\models\varphi$. \hfill\qed
\end{lemma}
Let us prove the main preservation result. We show that strongly
$(\omega_1+1)$-game-closed forcings preserve the fragment of ${\rm
MM}$ that contains all the $\Sigma$-collapsing posets. This will
later be used for interesting new independence results.
\begin{thm}\label{pres-semiproper}
Assume that $V\models {\rm MM}(\Gamma_\Sigma)$ and $\mathbb{P}$ is strongly
$(\omega_1+1)$-game-closed. Then $V^\mathbb{P}\models{\rm MM}(\Gamma_\Sigma)$.
\end{thm}
\begin{proof}
Assume that
\begin{equation}
\label{eq:pres-semiproper0}
\Vdash_\mathbb{P}"\mathbb{Q}\mbox{ is
$\Sigma$-collapsing and preserves stationary subsets of $\omega_1$,"}
\end{equation}
where $\mathbb{Q}_0*\mathbb{Q}_1$ witnesses that $\mathbb{Q}$ is
$\Sigma$-collapsing and $\mathbb{Q}$ is a $\mathbb{P}$-name for a partial
ordering. Let $\dot{\tau}_\xi\;(\xi<\omega_1)$ be a sequence of
$\mathbb{P}$-names for dense subsets of $\mathbb{Q}$ and define the dense
subsets of $\mathbb{P}*\mathbb{Q}$:
\begin{equation}
\label{eq:pres-semiproper1}
D_\xi=\{(p,q):p\Vdash_\mathbb{P}q\in\dot{\tau}_\xi\}.
\end{equation}
Now remember from the definitions that
\begin{equation}
\label{eq:pres-semiproper2}
\Vdash_{\mathbb{P}*\mathbb{Q}_0}|\mathbb{Q}_1|\leq\aleph_1
\end{equation}
and recall the poset $\mathbb{R}$ from Definition \ref{plays-def} which
is induced by $\mathbb{P}$. It was shown in Lemma \ref{finsegsigma}
that $\mathbb{R}/\mathbb{P}$ is $\sigma$-closed.
\begin{claim}\label{stat-pres-claim}
$\mathbb{Q}_1$, as a forcing notion in $V^{\mathbb{P}*\mathbb{Q}_0}$,
preserves all stationary subsets of $\omega_1$ in $V^\mathbb{P}$.
\end{claim}
\begin{proof}
This simply follows from the fact that
$\mathbb{Q}=\mathbb{Q}_0*\mathbb{Q}_1$ preserves all stationary subsets of
$\omega_1$ in $V^\mathbb{P}$ by (\ref{eq:pres-semiproper0}).
\end{proof}
\begin{claim}\label{four-step}
The iteration $\mathbb{P}*\mathbb{Q}_0*\mathbb{R}/\mathbb{P}*\mathbb{Q}_1$
preserves stationary subsets of $\omega_1$.
\end{claim}
\begin{proof}
Let $E\subseteq\omega_1$ be stationary. Clearly, the stationarity of
$E$ is preserved in the three-step iteration
$\mathbb{P}*\mathbb{Q}_0*\mathbb{R}/\mathbb{P}$. Now assume towards a
contradiction that $\mathbb{Q}_1$, as a forcing notion in
$V^{\mathbb{P}*\mathbb{Q}_0*\mathbb{R}/\mathbb{P}}$, would destroy the
stationarity of $E$. Then in $V^{\mathbb{P}*\mathbb{Q}_0}$
\begin{equation}
\label{eq:pres-semiproper3}
\Vdash_{\mathbb{R}/\mathbb{P}}\mbox{ "there is a $\mathbb{Q}_1$-name
$\dot{C}$ for a club disjoint from $E$."}
\end{equation}
Using density arguments, it is straightforward to check that the
quoted statement in (\ref{eq:pres-semiproper3}) is $\Sigma^1_1$
over the structure
$(\omega_1\cup\mathbb{Q}_1,\in,E,\leq_{\mathbb{Q}_1})$. So we can apply
Lemma \ref{kunen-lem} and conclude that in $V^{\mathbb{P}*\mathbb{Q}_0}$
\begin{equation}
\label{eq:pres-semiproper4}
\mbox{there is a $\mathbb{Q}_1$-name $\dot{C}$ for a club
disjoint from $E$.}
\end{equation}
But (\ref{eq:pres-semiproper4}) says that $\mathbb{Q}_1$, as a
forcing notion in $V^{\mathbb{P}*\mathbb{Q}_0}$, destroys the
stationarity of $E$. This contradicts Claim \ref{stat-pres-claim}
so we finished the proof of Claim \ref{four-step}.
\end{proof}
Now use the forcing axiom in the ground model to get a filter
\begin{equation}
\label{eq:pres-semiproper5}
G*H_0*K*H_1\subseteq\mathbb{P}*\mathbb{Q}_0*\mathbb{R}/\mathbb{P}*\mathbb{Q}_1
\end{equation}
that is sufficiently generic, in particular for all dense sets
$D_\xi\;(\xi<\omega_1)$.
\begin{claim}\label{G-ext-claim}
The filter $G$ extends to a condition $q$ in $\mathbb{P}$.
\end{claim}
\begin{proof}
This is using the fact that $K$ yields a play of the Banach-Mazur
game of length $\omega_1+1$ that exhausts $G$ in the sense that
all elements of $G$ are refined during that play. But this play,
given by $K$, follows Nonempty's winning strategy so there is a
condition $q$ stronger than every condition in $G$.
\end{proof}
Claim \ref{G-ext-claim} finishes the proof since
\begin{equation}
\label{eq:pres-semiproper6}
q\Vdash_\mathbb{P}"H_0*H_1\subseteq\mathbb{Q}_0*\mathbb{Q}_1
\mbox{ is generic for }\dot{\tau}_\xi\;(\xi<\omega_1)."
\end{equation}
\end{proof}
To illustrate the significance of Theorem \ref{pres-semiproper}, we
turn to the following theorem of Magidor (see
\cite{cummings01:_squar}).
\begin{thm}\label{Magidor}
${\rm MM}$ implies the failure of ${\rm AP}_{\aleph_\omega}$.
\hfill\qed
\end{thm}
In his proof, Magidor applies a forcing that shoots a new
$\omega$-sequence through $\aleph_\omega$. A Corollary of Theorem
\ref{pres-semiproper} explains why it is necessary in his argument to
add a new countable sequence high up in the set-theoretic hierarchy:
\begin{corollary}
The following theory is consistent: $${\rm MM}(\aleph_\omega)+{\rm
MM}(\Gamma_\Sigma)+{\rm AP}_{\aleph_\omega}.$$
\end{corollary}
\begin{proof}
We add an ${\rm AP}_{\aleph_\omega}$-sequence to a model of ${\rm
MM}$. The corollary now follows from Lemmas
\ref{pres-bounded}, \ref{force-AP}, and Theorem \ref{pres-semiproper}.
\end{proof}
It has already been proved that ${\rm PFA} + {\rm AP}_{\aleph_\omega}$
is consistent. See \cite{cummings01:_squar} for more history on that.
Next we investigate the $\Sigma$-collapsing fragment of ${\rm PFA}$.
\begin{corollary}
${\rm Con}({\rm BPFA}+{\rm PFA}(\Gamma_\Sigma)+{\rm AP}_{\aleph_1})$.
\end{corollary}
\begin{proof}
By Lemmas \ref{pres-bounded}, \ref{force-AP}, and Theorem
\ref{pres-semiproper}, since $\mathbb{A}_{\aleph_1}$ is strategically
$\sigma$-closed and therefore proper.
\end{proof}
But PFA implies the failure of ${\rm AP}_{\aleph_1}$
\cite{koenig04:_fragm_maxim}, so we get:
\begin{corollary}\label{coro-reals}
${\rm BPFA}$ and $ {\rm PFA}(\Gamma_\Sigma)$ together do not imply full ${\rm
PFA}$. \hfill\qed
\end{corollary}
For transparency, we include a small chart that sums up the results in
this area. The class $\Gamma_{\rm cov}$ is the collection of all
posets that preserve stationary subsets of $\omega_1$ and have the
{\em covering property}, i.e. every countable set of ordinals in the
extension can be covered by a countable set in the ground model. Note
that $\Gamma_\Sigma$ is a proper subset of $\Gamma_{\rm cov}$.
\medskip
\begin{tabular}{l|l}
If $V\models{\rm MM}$ and $\mathbb{P}$ is ... & then ... is true in
$V^\mathbb{P}$.\\
\hline\\
(1) $\omega_2$-distributive & ${\rm BMM}$\\
(2) weakly $(\omega_1+1)$-game-closed & ${\rm BMM}$ + saturation of
${\rm NS}_{\omega_1}$\\
(3) strongly $(\omega_1+1)$-game-closed & ${\rm BMM}$ + ${\rm
MM}(\Gamma_\Sigma)$\\
(4) $\omega_2$-closed & ${\rm BMM}$ + ${\rm MM}(\Gamma_{\rm cov})$\\
(5) $\omega_2$-directed-closed & ${\rm MM}$\\
\end{tabular}
\medskip In the table above, (1) is Lemma \ref{pres-bounded}, (2) is
in \cite{velickovic92:_forcin}, (3) is Theorem \ref{pres-semiproper},
(4) is in \cite{koenig04:_fragm_maxim}, and (5) is folklore but can be
looked up in \cite{larson00:_separ}. The papers
\cite{koenig04:_fragm_maxim} and \cite{koenig05:_kurep_namba} give
counterexamples which show that the results in this chart are
basically optimal. For example, it is shown in
\cite{koenig04:_fragm_maxim} that adding an ${\rm
AP}_{\aleph_1}$-sequence with a strongly $(\omega_1+1)$-game-closed
forcing makes ${\rm PFA}$ (and therefore ${\rm MM}(\Gamma_{\rm cov})$)
fail in the extension. In \cite{koenig05:_kurep_namba}, an
$\omega_2$-closed forcing is constructed which makes ${\rm MM}$ fail
in the extension. More details on this last fact can actually be found
in Section \ref{sec:Kurepa-tree}\ref{subsec:reg-Ktrees} of this paper.
\subsection{Levy-Collapse}
There are similar preservation results for statements true in the
Levy-Collapse of a large cardinal. In \cite{koenig04:_gener}, an
attempt was made to axiomatize the theory of the Levy-Collapse with
the help of a reflection principle that is in the style of the
well-investigated reflection principles for stationary sets and is
actually a strengthening of these. This axiom, the {\em Game
Reflection Principle} or ${\rm GRP}$ for short, proves all the
typical statements known to hold in the Levy-Collapse and is
formulated in terms of games. We need the following notions from
\cite{koenig04:_gener}:
\begin{definition}\label{gameondef}
Let $\theta$ be a regular cardinal.\hfill
\begin{enumerate}
\item (The Games) If $\mathcal{A}\subseteq{^{<\omega_1}\theta}$ then the game
$\mathbb{G}(\mathcal{A})$\label{mathbbG(mathcalA)} has length
$\omega_1$ and is played as follows: \medskip
\begin{tabular}{c|cccccc}
I & $\alpha_0$ & $\alpha_1$ & $\ldots$ & $\alpha_\xi$ &
$\alpha_{\xi+1}$ &
$\ldots$\\\hline
II & $\qquad \beta_0$ & $\qquad \beta_1$ & $\qquad\ldots$ &
$\qquad \beta_\xi$ & $\qquad \beta_{\xi+1}$ & $\qquad\ldots$
\end{tabular}
\medskip\newline both players I and II play ordinals below $\theta$
and $$\mbox{II wins iff }\langle
\alpha_\xi,\beta_\xi:\xi<\omega_1\rangle\in[\mathcal{A}],$$
where
$[\mathcal{A}]=\{f\in{^{\omega_1}\theta}:f\restriction
\xi\in\mathcal{A}\mbox{ for all }\xi<\omega_1\}$.
\item (Restricted Games) For $B\subseteq\Hk{\lambda}$, define the game
$\mathbb{G}^B(\mathcal{A})$ by letting the winning conditions be the
same as in $\mathbb{G}(\mathcal{A})$ but imposing the restriction on
both players to play ordinals in $B\cap\theta$.
\item ($\epsilon$-Approachability) A substructure $M\prec\Hk{\lambda}$
of size $\aleph_1$ is called {\em $\epsilon$-approach\-able} if it
is the limit of an $\epsilon$-chain of countable elementary
substructures, i.e. there is an $\epsilon$-chain $\langle
M_\xi:\xi<\omega_1\rangle$ with $M=\bigcup_{\xi<\omega_1}M_\xi$. We
denote the set of all $\epsilon$-approachable substructures of
$\Hk{\lambda}$ of size $\aleph_1$ by ${\rm EA}_\lambda$ and we drop
the subscript if it is clear from the context.
\end{enumerate}
\end{definition}
\begin{definition}
The {\em Game Reflection Principle} or ${\rm GRP}$ is the following
statement:
\medskip\hspace{0.7cm}\parbox{10cm}{Let $\theta$ be regular,
$\mathcal{A}\subseteq{^{<\omega_1}\theta}$ and $\lambda$ much larger
than $\theta$. If II has a winning strategy in the game
$\mathbb{G}^M(\mathcal{A})$ for every $M\in{\rm EA}_\lambda$, then
II has a winning strategy in $\mathbb{G}(\mathcal{A})$.}
\end{definition}
We have the following two theorems from \cite{koenig04:_gener}:
\begin{thm}\label{GRP-gsc}
${\rm GRP}$ is equivalent to saying that $\omega_2$ is generically
supercompact by $\sigma$-closed forcing, i.e. for every regular
$\lambda$ there is $\mathbb{P}\in\Gamma$ such that $V^{\mathbb{P}}$
supports $j:V\longrightarrow M$ with ${\rm cp}(j)=\omega_2$,
$j(\omega_2)>\lambda$, and $j''\lambda\in M$.\hfill\qed
\end{thm}
\begin{thm}\label{Coll-GRP}
Assume that $\kappa$ is supercompact. Then $$V^{{\rm
Coll}(\omega_1,<\kappa)}\models{\rm GRP}.$$
\hfill\qed
\end{thm}
Theorem \ref{Coll-GRP} is really contained in the stronger Theorem
\ref{pres-GRP}, so we postpone the proof. The following is the
preservation argument:
\begin{thm}\label{pres-GRP}
Assume that $\kappa$ is supercompact and $\dot{\mathbb{P}}$ is a ${\rm
Coll}(\omega_1,<\kappa)$-name for a strongly
$(\omega_1+1)$-game-closed partial ordering. Then $$V^{{\rm
Coll}(\omega_1,<\kappa)*\dot{\mathbb{P}}}\models{\rm GRP}.$$
\end{thm}
\begin{proof}
By Theorem \ref{GRP-gsc} it suffices to show that $\omega_2$ is
generically supercompact by $\sigma$-closed forcing. For any regular
$\lambda$ fix $j:V\longrightarrow M$ such that ${\rm cp}(j)=\kappa$,
$j(\kappa)>\lambda$, and $j''\lambda\in M$. Without restriction,
$\lambda>|\dot{\mathbb{P}}|$.
\begin{claim}\label{claim-sigma}
$\Vdash_{{\rm Coll}(\omega_1,<\kappa)}{\rm
Coll}(\omega_1,[\kappa,j\kappa))/\dot{\mathbb{P}}$ is
$\sigma$-closed.
\end{claim}
\begin{proof}
Note first that this makes sense because
$j\kappa>\lambda>|\dot{\mathbb{P}}|$ and therefore it is forced by
${\rm Coll}(\omega_1,<\kappa)$ that $\dot{\mathbb{P}}$ is a complete
subalgebra of ${\rm Coll}(\omega_1,[\kappa,j\kappa))$. This last
statement is basically Lemma \ref{str-embeds}. The claim now
follows from Lemma \ref{quot-sigma}.
\end{proof}
Now set $\mathcal{Q}={\rm Coll}(\omega_1,<\kappa)*\dot{\mathbb{P}}$.
Standard arguments yield
\begin{eqnarray*}
j\mathcal{Q} & = & j({\rm Coll}(\omega_1,<\kappa)*\dot{\mathbb{P}})\\
& = & {\rm Coll}(\omega_1,<j\kappa)*j\dot{\mathbb{P}}\\
& = & {\rm Coll}(\omega_1,<\kappa)*{\rm
Coll}(\omega_1,[\kappa,j\kappa))*j\dot{\mathbb{P}}\\
& = & {\rm Coll}(\omega_1,<\kappa)*\dot{\mathbb{P}}*{\rm
Coll}(\omega_1,[\kappa,j\kappa))/\dot{\mathbb{P}}*j\dot{\mathbb{P}}\\
& = & \mathcal{Q}*{\rm
Coll}(\omega_1,[\kappa,j\kappa))/\dot{\mathbb{P}}*j\dot{\mathbb{P}}\\
& = & \mathcal{Q}*j\mathcal{Q}/\mathcal{Q}.
\end{eqnarray*}
By the Extension Lemma \ref{silverlem} we can, in $V^{j\mathcal{Q}}$,
extend $j$ to $$j^*:V^\mathcal{Q}\longrightarrow M^{j\mathcal{Q}}.$$
Notice first
that $j''\lambda\in M^{j\mathcal{Q}}$. Finally, it holds in the model
$V^\mathcal{Q}$ that $\omega_2$ is generically $\lambda$-supercompact
by the forcing $j\mathcal{Q}/\mathcal{Q}$. But remember that
$$j\mathcal{Q}/\mathcal{Q}={\rm
Coll}(\omega_1,[\kappa,j\kappa))/\dot{\mathbb{P}}*j\dot{\mathbb{P}}$$
is
$\sigma$-closed by Claim \ref{claim-sigma}.
\end{proof}
Note that ${\rm GRP}$ implies ${\rm CH}$ \cite{koenig04:_gener} and
therefore ${\rm AP}_{\aleph_1}$. So Theorem \ref{pres-GRP} tells us
nothing if we take $\dot{\mathbb{P}}$ to be the usual forcing that adds
an ${\rm AP}_{\aleph_1}$-sequence. But we can get the following
interesting corollary:
\begin{corollary}\label{APomegaGRP}
${\rm Con}({\rm GRP} + {\rm AP}_{\aleph_\omega})$.
\end{corollary}
\begin{proof}
By Lemma \ref{force-AP} and Theorem \ref{pres-GRP}.
\end{proof}
Corollary \ref{APomegaGRP} is interesting because Shelah
\cite{shelah79} has shown that if $\kappa$ is supercompact then ${\rm
AP}_{\kappa^{+\omega}}$ fails. This contrasts with the above result:
generic supercompactness of $\omega_2$ by $\sigma$-closed forcing does
not imply that ${\rm AP}_{\aleph_\omega}$
fails. To end the section, let us remark that weakly
$(\omega_1+1)$-game-closed forcings can introduce square-sequences and
this would violate ${\rm GRP}$ in a very strong fashion
\cite{koenig04:_gener}. So we cannot hope to have a preservation
theorem for weakly $(\omega_1+1)$-game-closed forcings.
\section{Kurepa-trees}\label{sec:Kurepa-tree}
\subsection{Jonsson cardinals}\label{subsec:Jonsson}
Let us go back to Theorem \ref{Magidor} for a while. Magidor's
argument that ${\rm MM}$ implies failure of the approachability
property at $\aleph_\omega$ has raised hopes that ${\rm MM}$ can
provide a good picture of the combinatorics of the cardinal
$\aleph_\omega$. Even earlier \cite{foreman88}, it was shown that
${\rm MM}$ implies ${\rm SCH}$ which puts severe restrictions on the
cardinal arithmetic at $\aleph_\omega$. In the light of all this, it
seemed possible that ${\rm MM}$ implies $\aleph_\omega$ is Jonsson
which would solve an old question. We give an argument to refute this
last implication, i.e. we show that ${\rm MM}$ is consistent with
${\aleph_\omega}$ not being Jonsson. This answers a question raised
during the Oberwolfach set theory meeting in 2005. The following
theorem is well-known, see for example \cite{higherinf}.
\begin{thm}\label{least-Jonsson}
Assume that $\mu$ is the least Jonsson cardinal. Then $\mu$ is
$\lambda$-Rowbottom for some $\lambda<\mu$.\hfill\qed
\end{thm}
Kurepa-trees are natural counterexamples to model-theoretic transfer
properties. The next lemma is probably standard, but we give the proof
for convenience.
\begin{lemma}\label{notcc-lem}
Let $\kappa<\lambda<\mu$ be cardinals where $\lambda$ is regular and
assume that there is a $\lambda$-Kurepa-tree $T$ with at least
$\mu$-many cofinal branches. Then
$(\mu,\lambda)\twoheadrightarrow\!\!\!\!\!\!\!/\,\,\,\,\,\,\,(\lambda,\kappa)$.
\end{lemma}
\begin{proof}
Let $\mathcal{B}$ be a collection of $\mu$-many cofinal branches of
$T$. The structure $(\mathcal{B},T)$ is of type $(\mu,\lambda)$, so
suppose towards a contradiction that there exists
$$(\mathcal{A},S)\prec(\mathcal{B},T),$$
where $\mathcal{A}\subseteq\mathcal{B}$ is
of size $\lambda$ and $S\subseteq T$ is of size $\kappa$. Find
$\delta<\lambda$ such that $S\subseteq T_{<\delta}$. Then by
elementarity, any two branches in $\mathcal{A}$ split within the
structure $(\mathcal{A},S)$, which implies that $T_\delta$ would have
size at least $|\mathcal{A}|=\lambda$. This contradicts the fact that
levels of $T$ have size less than $\lambda$.
\end{proof}
Let us point out again (cf. Section
\ref{sec:indestruct}\ref{subsec:forcingax}) that ${\rm MM}$ is
preserved by $\omega_2$-directed-closed forcings. This is used
crucially in the proof of the next theorem.
\begin{thm}\label{MM-Jonsson}
${\rm MM}$ does not imply that $\aleph_\omega$ is Jonsson.
\end{thm}
\begin{proof}
We construct a model of ${\rm MM}$ in which $\aleph_\omega$ is not
Jonsson. First note that ${\rm MM}$ is consistent with
$2^\lambda=\lambda^+$ for all $\lambda\geq\aleph_1$ since this
instance of the ${\rm GCH}$ can be forced with an
$\omega_2$-directed-closed forcing. So we start with a model
$$V\models{\rm MM}+2^\lambda=\lambda^+\mbox{ for all
}\lambda\geq\aleph_1$$
and define a full support Easton product
$$\mathbb{K}_\omega=\prod_{1<n<\omega}\mathbb{K}_n,$$
where $\mathbb{K}_n$ is
the usual forcing to add an $\aleph_n$-Kurepa-tree with
$\aleph_\omega$-many branches (see e.g. \cite{Kunen-intro}). Note
that $\mathbb{K}_n$ is $\aleph_n$-directed-closed and has the
$\aleph_{n+1}$-chain condition as we assumed the arithmetic
$2^{<\aleph_n}=\aleph_n$. The usual arguments for the Easton product
yield that $\mathbb{K}_\omega$ preserves all cardinals
$\aleph_n\;(n<\omega)$ and therefore preserves $\aleph_\omega$. It
is also easy to see that $\mathbb{K}_\omega$ is an
$\omega_2$-directed-closed forcing and hence preserves ${\rm MM}$.
By Lemma \ref{notcc-lem} we have
\begin{equation}
\label{eq:MM-Jonsson1}
(\aleph_\omega,\aleph_m)\twoheadrightarrow\!\!\!\!\!\!\!/\,\,\,\,\,\,\,(\aleph_m,\aleph_n)\mbox{ for
all }n<m<\omega
\end{equation}
in the generic extension $V^{\mathbb{K}_\omega}$. Now assume that
$\aleph_\omega$ is Jonsson in $V^{\mathbb{K}_\omega}$ and use Theorem
\ref{least-Jonsson}. We get that
$(\aleph_\omega,\aleph_m)\twoheadrightarrow(\aleph_\omega,\aleph_n)$ holds for some
$n<m<\omega$. This contradicts (\ref{eq:MM-Jonsson1}).
\end{proof}
The proof of Theorem \ref{MM-Jonsson} actually shows that ${\rm
MM}^{++}$ does not imply that $\aleph_\omega$ is Jonsson, where
${\rm MM}^{++}$ means that $\omega_1$-many names for stationary
subsets of $\omega_1$ can be pulled back into the ground model. We
have generally avoided these `plus-versions' of forcing axioms; the
interested reader is referred to
\cite{baumgartner84:_applic_proper_forcin_axiom} or \cite{MR1713438}.
\subsection{Regressive Kurepa-trees}
\label{subsec:reg-Ktrees}
The notion of a regressive Kurepa-tree was introduced in
\cite{koenig05:_kurep_namba} in order to answer the question if ${\rm
MM}$ is sensitive to $\omega_2$-closed forcings. Surprisingly, ${\rm
MM}$ turned out to be sensitive to $\omega_2$-closed forcings but
only the Namba-fragment of ${\rm MM}$ can be violated. The key notion
was that of an $\omega_1$-regressive $\omega_2$-Kurepa-tree which can
be added by an $\omega_2$-closed forcing and it was shown that ${\rm
MM}$ is false once such a tree is added. We want to point out in
this section that regressive Kurepa-trees have strong impact on higher
versions of Chang's Conjecture even though they can be added with a
very mild forcing.
\begin{definition}
For any tree $T$ say that the level $T_\alpha$ is {\em
non-stationary} if there is a function $f_\alpha:T_\alpha\longrightarrow
T_{<\alpha}$ which is {\em regressive} in the sense that
$f_\alpha(x)<_Tx$ for all $x\in T_\alpha$ and if $x,y\in T_\alpha$
are distinct then $f_\alpha(x)$ or $f_\alpha(y)$ is strictly above
the meet of $x$ and $y$.
\end{definition}
\begin{definition}
Let $X$ be a set of ordinals. A $\lambda$-Kurepa-tree $T$ will be
called {\em $X$-regressive} if for all limit ordinals
$\alpha<\lambda$ with ${\rm cf}(\alpha)\in X$ the level $T_\alpha$
is non-stationary.
\end{definition}
The following two theorems appear in \cite{koenig05:_kurep_namba}.
\begin{thm}\label{add-regKtree1}
For all uncountable regular $\lambda$ there is a $\lambda$-closed
forcing that adds a $\lambda$-regressive $\lambda$-Kurepa-tree.
\hfill\qed
\end{thm}
\begin{thm}\label{MM-no-reg-Ktree}
Under {\rm MM}, there are no $\omega_1$-regressive
$\lambda$-Kurepa-trees for any uncountable regular $\lambda$.
\hfill\qed
\end{thm}
A close examination of the proof of Theorem \ref{add-regKtree1}
actually gives:
\begin{thm}\label{addregKtree2}
Assume $2^{<\lambda}=\lambda$ and $\kappa<\lambda<\mu$, where
$\kappa$ and $\lambda$ are regular. There is a
$\kappa^+$-directed-closed, $\lambda$-closed,
$\lambda^+$-cc forcing that adds a
$[\kappa,\lambda)$-regressive $\lambda$-Kurepa-tree with at least
$\mu$-many branches. \hfill\qed
\end{thm}
Regressive Kurepa-trees are even stronger counterexamples to
model-theoretic transfer properties than the regular Kurepa-trees
considered in Section \ref{sec:Kurepa-tree}\ref{subsec:Jonsson}. This
can be seen from the following Lemma.
\begin{lemma}\label{regKtree-cc}
Let $\kappa<\lambda$ be regular. Assume there is a
$\{\kappa\}$-regressive $\lambda$-Kurepa-tree $T$ with at least
$\mu$-many branches and suppose $\kappa^{<\kappa}=\kappa$. Then
$(\mu,\lambda)\twoheadrightarrow\!\!\!\!\!\!\!/\,\,\,\,\,\,\,(\kappa^+,\kappa)$.
\end{lemma}
\begin{proof}
Let $\mathcal{B}$ be the set of cofinal branches of $T$ and consider
the structure $(\mathcal{B},T)$ which is of type $(\mu,\lambda)$. Now
assume towards a contradiction that
$(\mu,\lambda)\twoheadrightarrow(\kappa^+,\kappa)$ would hold, so we find a
substructure $$(\mathcal{A},S)\prec(\mathcal{B},T),$$
where $\mathcal{A}$ has
size $\kappa^+$ and $S$ has size $\kappa$. Define
$\delta=\sup({\rm ht}''S)$; we have two cases:
{\bf Case 1:} if ${\rm cf}(\delta)=\kappa$ then $T_\delta$ is a
non-stationary level of the tree $T$. A straightforward argument
using the fact that there is a regressive 1-1 function defined on
$T_\delta$ shows that $\mathcal{A}$ has size at most $|S|=\kappa$. This
is a contradiction.
{\bf Case 2:} if ${\rm cf}(\delta)<\kappa$ then $S$ has a cofinal
subtree $S_0$ of height $\kappa_0<\kappa$. Since $|S|=\kappa$, the
number of branches through $S_0$ cannot be larger than
$\kappa^{\kappa_0}\leq\kappa^{<\kappa}=\kappa$. Again,
contradiction.
\end{proof}
\begin{corollary}\label{MM-cc}
MM is consistent with
$$(\aleph_{m+1},\aleph_m)\twoheadrightarrow\!\!\!\!\!\!\!/\,\,\,\,\,\,\,(\aleph_{n+1},\aleph_n)$$
for all $1<n<m$ simultaneously.
\end{corollary}
\begin{proof}
Using an Easton product similar to the proof of Theorem
\ref{MM-Jonsson}: we start with a model of ${\rm
MM}+2^\lambda=\lambda^+$ for all $\lambda\geq\aleph_1$. Then for
all $m>2$ we add an $[\aleph_2,\aleph_m)$-regressive
$\aleph_m$-Kurepa-tree with $\aleph_{m+1}$-many branches. This
product is $\omega_2$-directed-closed by Theorem \ref{addregKtree2}.
Notice that in the extension we have $2^{<\aleph_n}=\aleph_n$ for
all $1<n<\omega$. The statement of the corollary then follows from
Lemma \ref{regKtree-cc}.
\end{proof}
\section{Introduction}
Knowledge of the initial mass function (IMF) of the first stars is critical to our understanding of the earliest stages of galaxy formation,
one of the key frontiers of modern cosmology (see e.g. Barkana \& Loeb 2001; Ciardi \& Ferrara 2005; Bromm et al. 2009; Benson 2010).
While the Population~(Pop)~III IMF is widely believed to have been top-heavy due to the limited cooling properties of primordial gas
(see e.g. Bromm \& Larson 2004), various scenarios for the formation of low-mass primordial stars have been discussed for some time
(e.g. Palla et al. 1983; Nakamura \& Umemura 2001; Omukai \& Yoshii 2003; Salvaterra et al. 2004; Machida et al. 2005; Silk \& Langer 2006).
Most recently, high-resolution simulations of Pop~III star formation have challenged the view that the first stars were very massive single objects
formed in isolation (e.g. Abel et al. 2002; Bromm et al. 2002), instead suggesting that many formed as binaries (Turk et al. 2009;
Stacy et al. 2010) or in clusters of lower mass stars (Clark et al. 2010; Greif et al. 2010a).
These results also raise a challenge to the postulated existence of a critical metallicity,
below which stars can only form with a top-heavy IMF
(e.g. Bromm \& Loeb 2003; Santoro \& Shull 2006; Schneider et al. 2006; Frebel et al. 2007; but see also Jappsen et al. 2009).
Further motivation to consider the formation and fate of low-mass Pop~III stars comes from the fact that no
low-metallicity star has yet been found to exhibit the chemical signature of pair-instability supernovae (SNe), one of
the primary predicted end states of very massive Pop~III stars (see e.g. Fryer et al. 2001; Heger et al. 2003). While this observation in itself
does not imply that low-mass Pop~III stars formed, it is consistent with a Pop~III IMF that is not dominated
by very massive stars (see Karlsson et al. 2008); instead, the observed chemical abundances of low-metallicity stars
(see Beers \& Christlieb 2005; Frebel 2010) suggest that they formed from material enriched by the explosions of
15-40 M$_{\odot}$ Pop~III stars (e.g. Umeda \& Nomoto 2003; Iwamoto et al. 2005; Joggerst et al. 2010).
Altogether, there remains the distinct possibility that some
fraction of Pop~III stars had masses low enough ($\la$ 0.8 M$_{\odot}$) that they may have survived to the present day.
In turn, this raises once again the long-standing question (e.g. Bond 1981): Where are such low-mass Pop~III stars today?
One explanation for why Pop~III stars may not be identified in the Galaxy today is that they do not appear as such, their
surfaces having been enriched due to the accretion of heavy elements from the interstellar medium (ISM) during the course of their long lives.
Enrichment by accretion has been studied by numerous authors, in the context of the present-day Milky Way
(e.g. Mestel 1954; Talbot \& Newman 1977; Alcock \& Illarionov 1980; Yoshii 1981; Iben 1983; Frebel et al. 2009). Using recently obtained
kinematic information on a large number of metal-poor stars, Frebel et al. (2009) modelled the enrichment of these stars individually
over the past $\sim$ 10 Gyr, finding that in almost all cases the amount of metals accreted onto the stars was far below what would be
needed to explain their observed surface metallicities. Based on this result, it was concluded that the accretion of enriched material during
passage through the disk of the Galaxy has not significantly altered the surface composition of observed low-metallicity stars.
However, as the first stars formed during the earliest stages of the assembly of the Milky Way, it is important to
also consider the degree to which the accretion of enriched material may have altered their surface compositions during this entire assembly process.
Recently, this has been studied by Komiya et al. (2009, 2010) using the extended Press-Schechter formalism to model the hierarchical growth of the
Milky Way. Their findings suggest that significant accretion of enriched material occurs soon after the
formation of a Pop~III star, before its host minihalo undergoes a merger.
Here we present a complementary study in which we evaluate the likelihood that low-mass Pop~III stars may avoid enrichment via accretion from the ISM,
using a simple model for their dynamical evolution during the hierarchical assembly of the Galaxy. In particular, we investigate the impact that a weak,
Solar-like stellar wind may have in limiting accretion, as it is perhaps only by such a wind that accretion onto a low-mass star
can be completely prevented.
In the next two Sections, we review the criteria for the suppression of accretion by a stellar wind and
we describe our model for the dynamical evolution of low-mass Pop~III stars.
In Section 4, we present our findings on the conditions for, and the likelihood of, enrichment by accretion from the ISM in the assembly of the Galaxy.
Finally, we briefly discuss the implications of our findings and give our conclusions in Section 5.
\section{Suppression of accretion by a Solar-like stellar wind}
Just as in the case of the Sun, the outer envelope of a low-mass Pop~III star is expected to be convective (e.g. Yoshii 1981; Fujimoto et al. 1995).
Therefore, as in the Sun, this convection may drive a dynamo which produces a magnetic field emanating from the surface of the star. In turn,
as the corona of the Sun is likely generated by processes associated with its magnetic field (e.g. Hassler et al. 1999),
such a star may also have a hot corona from which a weak, Solar-like wind is launched.
To evaluate the effect that such a stellar wind would have in suppressing the accretion of material from the ISM, we follow Talbot \& Newman (1977) and
define a minimum density $n_{\rm min}$ of the ISM required for accretion to take place when a stellar wind is launched with the same
density and speed as the Solar wind:
\begin{equation}
n_{\rm min} = 0.8 {\rm cm}^{-3} \left(v^{2}_{\rm *} + c^{2}_{\rm s}\right) \mbox{\ ,}
\end{equation}
where $v_{\rm *}$ is the velocity of the star with respect to the ISM and $c_{\rm s}$ is the sound speed of the gas, both in units of km s$^{-1}$.
Wherever in its orbit the velocity of the star is large enough that $n_{\rm min}$ exceeds the
density $n_{\rm gas}$ of the ISM through which it passes, the accretion of gas from the ISM is prevented.
For simplicity, in the following we shall adopt the formula for $n_{\rm min}$ given by Talbot \& Newman (1977) directly, although it is
derived for the specific case of the Sun. However, the radius of a Pop~III star with mass $\sim$ 0.8 M$_{\odot}$, which may still
be on the main-sequence today, is expected to be comparable to the Solar radius
(e.g. Weiss et al. 2000; Suda et al. 2007). Thus, we expect that the velocity of a wind from such stars,
roughly the escape velocity, will also be similar to the Solar value. In turn, if the density of the wind is also comparable to
that of the Solar wind, then the above formula is appropriate. Lacking detailed knowledge of such winds from Pop~III stars,
we limit ourselves to considering the single case of a Solar-like wind.
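As a quick numerical illustration (not part of the original analysis), the scaling in equation (1) can be evaluated directly; the velocities used below are hypothetical fiducial values, not taken from the paper.

```python
def n_min(v_star_kms, c_s_kms):
    """Minimum ISM density (cm^-3) for accretion against a Solar-like
    wind, following the scaling of equation (1); velocities in km/s."""
    return 0.8 * (v_star_kms**2 + c_s_kms**2)

# e.g. a star moving at 20 km/s through gas with a 10 km/s sound speed:
print(n_min(20.0, 10.0))  # 400.0, i.e. accretion requires n_gas >= 400 cm^-3
```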
\section{Modelling Stellar dynamics in the assembly of the Halo}
Here we consider the nature of stellar orbits within the progenitors of the dark matter (DM) halo of the Galaxy,
in order to estimate the typical velocities of low-mass Pop~III stars with respect to the ISM.
As the detailed properties of the high-redshift progenitor haloes of the Milky Way are unknown,
we begin by adopting the following simple form for the gravitational potential, consistent with
a $\rho$ $\propto$ $r^{-2}$ matter density profile (White 1985), as well as with the observed
velocity curve of the Galaxy (Xue et al. 2008) and with cosmological simulations of the formation of the first protogalaxies
(e.g. Johnson et al. 2007; Wise \& Abel 2008):
\begin{equation}
\Psi(r) = v_{\rm max}^2 {\rm ln}\left(\frac{r}{r_{\rm 0}}\right) \mbox{\ .}
\end{equation}
Here $r_{\rm 0}$ is arbitrary and $v_{\rm max}$ is the maximum circular velocity of a halo of mass $M_{\rm h}$, given by
(e.g. Barkana \& Loeb 2001)
\begin{equation}
v_{\rm max} = \left(\frac{{\rm G} M_{\rm h}}{r_{\rm vir}} \right)^{\frac{1}{2}} \sim 20 {\rm km} {\rm s^{-1}} \left(\frac{M_{\rm h}}{10^8 {\rm M_{\odot}}}\right)^{\frac{1}{3}} \left(\frac{1+z}{10}\right)^{\frac{1}{2}} \mbox{\ .}
\end{equation}
A stellar orbit can then be defined by an apogalactic distance $r_{\rm ap}$ and an eccentricity $e$, given by
\begin{equation}
e = \frac{r_{\rm ap} - r_{\rm pr}}{r_{\rm ap} + r_{\rm pr}} \mbox{\ ,}
\end{equation}
where $r_{\rm pr}$ is the perigalactic distance of the orbit (e.g. Chiba \& Beers 2000).
With these parameters, the specific orbital angular momentum $l$ of the star is given by
\begin{equation}
l^2 = r^2_{\rm ap} v^2_{\rm max} \frac{(1-e)^2}{2e}{\rm ln}\left(\frac{1+e}{1-e}\right) \mbox{\ ,}
\end{equation}
and the velocity of the star at radius $r$ is
\begin{equation}
v^2_{\rm *}(r) = v_{\rm max}^2 \left({\rm ln}\left(\frac{r_{\rm ap}}{r}\right) + \frac{(1-e)^2}{2e}{\rm ln}\left(\frac{1+e}{1-e}\right) \right) \mbox{\ .}
\end{equation}
We note that even for a very eccentric orbit with $e$ = 0.99, the squared velocity $v^2_{\rm *}$ of the star changes by only a factor
ln($r_{\rm ap}$/$r$) $\le$ ln((1+$e$)/(1-$e$)) $\sim$ 5 throughout its orbit; hence, to within a factor of order unity, $v_{\rm *}$ $\sim$ $v_{\rm max}$.
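To illustrate equations (5)-(6), the speed along an orbit can be evaluated numerically; the sketch below uses a hypothetical moderate eccentricity $e$ = 0.5 (with perigalactic distance $r_{\rm pr}$ = $r_{\rm ap}$/3), chosen only for illustration.

```python
import math

def v_star(r, r_ap, e, v_max):
    """Stellar speed at radius r on an orbit with apogalactic distance
    r_ap and eccentricity e in the logarithmic potential, equation (6)."""
    term = math.log(r_ap / r) + (1 - e)**2 / (2 * e) * math.log((1 + e) / (1 - e))
    return v_max * math.sqrt(term)

# For e = 0.5 the speed stays within a factor ~2 of v_max over the orbit:
print(v_star(1.0, 1.0, 0.5, 1.0))        # ~0.52 at apogee (r = r_ap)
print(v_star(1.0 / 3.0, 1.0, 0.5, 1.0))  # ~1.17 at perigee (r = r_ap/3)
```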
Strictly speaking, the velocity $v_{\rm *}$ we derive here is the velocity of the star relative to the center of its host dark matter halo.
However, for simplicity, we shall treat this as the velocity of the star relative to the ISM.
While this assumption is clearly not valid for the case of stars formed, for example, in the disk of the
present-day Milky Way (see e.g. Talbot \& Newman 1977; Yoshii 1981), it is more tenable for Pop~III stars
formed at high redshift, as the orbits of such stars would be altered numerous times in the frequent mergers that their host haloes undergo in the
assembly of the Galaxy. Thus, we expect that any extant low-mass Pop~III stars are likely to have orbits that are in general not oriented
in a particular direction with respect to the motion of the ISM (see e.g. Carollo et al. 2007), in which case a {\it typical} such star is indeed likely to have
a velocity relative to the ISM that is comparable to $v_{\rm *}$ $\sim$ $v_{\rm max}$.
Furthermore, in the calculations that follow, we shall neglect the possibility that intracloud motions are of the order of $v_{\rm max}$ and thereby impact the
value of $n_{\rm min}$ within a cloud, as cosmological simulations of the formation of high redshift dwarf
galaxies (Wise \& Abel 2007a; Greif et al. 2008) show that the velocity
dispersions within clouds with $n_{\rm gas}$ $\ga$ $n_{\rm min}$ are well below $v_{\rm max}$.
We note, however, that cloud collisions may lead to temporarily higher intracloud velocity dispersions.
\section{Probability of accretion onto a Pop~III star with a Solar-like wind}
Here we consider the two primary situations in which accretion onto a single low-mass star with a Solar-like wind may take place:
from the dense gas at the center of the DM halo in which it forms and during passage through a dense gas cloud at a later time in the
hierarchical assembly of the Milky Way. For the latter case, we consider separately the most massive progenitor halo
of the Milky Way, in which the earliest Pop~III stars likely formed (e.g. Brook et al. 2007), and the full population of progenitor
haloes which may or may not have hosted Pop~III stars, due to chemical and radiative feedback (e.g. Gao et al. 2010).
\subsection{Accretion of isothermal gas in the host halo}
For the density $n_{\rm gas}$ of the ISM in a DM halo hosting the formation of a Pop~III star at high redshift,
here we consider the case of an isothermal density profile with the following scaling, roughly consistent with the properties of the gas in Pop~III star-forming haloes, as found in numerical simulations (e.g. O'Shea \& Norman 2007; Wise \& Abel 2007b):
\begin{equation}
n_{\rm gas} = n_{\rm gas, vir} \left(\frac{r}{r_{\rm vir}}\right)^{-2} \mbox{\ ,}
\end{equation}
where the number density of baryons at the virial radius $r_{\rm vir}$ is given by
\begin{equation}
n_{\rm gas,vir} \sim 178 \Omega_{\rm b} (1+z)^3 \frac{\rho_{\rm crit}}{m_{\rm H}} \mbox{\ .}
\end{equation}
Here $\Omega_{\rm b}$ is the ratio of the cosmic baryon density to the critical density $\rho_{\rm crit}$ of the Universe at redshift $z$ = 0, and $m_{\rm H}$ is the mass of the hydrogen atom.

To determine where in a halo of mass $M_{\rm h}$ a star with a Solar-like wind may accrete gas, we compare the gas density, given by equation (7), to the minimum density required for accretion, given by equation (1). Taking, as discussed in Section 3, $v_{\rm *}$ = $v_{\rm max}$, and assuming a sound speed $c_{\rm s}$ $\la$ $v_{\rm max}$, we find that the condition for accretion (i.e. $n_{\rm gas}$ $\ge$ $n_{\rm min}$) is only satisfied if
\begin{equation}
\left(\frac{M_{\rm h}}{10^8 {\rm M_{\odot}}}\right)^{-\frac{2}{3}} \left(\frac{1+z}{10} \right)^2 \left(\frac{r}{r_{\rm vir}}\right)^{-2}
\ga 10^4 \mbox{\ .}
\end{equation}
Simplifying this, we find that the dependences on the halo mass and redshift cancel out, leaving us with the following condition for accretion:
\begin{equation}
r \la 10 {\rm pc}.
\end{equation}
That is, for the case of an isothermal gas at the virial temperature of its host halo, a star will only accrete if
it is on an orbit that brings it within a physical distance of $\sim$ 10 pc of the center of the halo.
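The cancellation of the halo-mass and redshift dependences behind equation (10) can be checked numerically. The sketch below is illustrative only: it assumes $r_{\rm vir} = {\rm G}M_{\rm h}/v^2_{\rm max}$ with $v_{\rm max}$ from equation (3), and a value of G in units convenient for pc, km s$^{-1}$ and M$_{\odot}$.

```python
G = 4.301e-3  # gravitational constant in pc (km/s)^2 / M_sun

def r_accrete(m_h, z):
    """Radius (pc) inside which the accretion condition, equation (9),
    is satisfied, taking v_max from equation (3) and
    r_vir = G * m_h / v_max^2."""
    v_max = 20.0 * (m_h / 1e8)**(1.0 / 3.0) * ((1.0 + z) / 10.0)**0.5
    r_vir = G * m_h / v_max**2
    prefactor = (m_h / 1e8)**(-2.0 / 3.0) * ((1.0 + z) / 10.0)**2
    # equation (9): accretion requires (r/r_vir)^2 <= prefactor / 1e4
    return r_vir * (prefactor / 1e4)**0.5

# The halo-mass and redshift dependences cancel, leaving r ~ 10 pc:
print(r_accrete(1e8, 9.0))   # ~10.8 pc
print(r_accrete(1e9, 14.0))  # ~10.8 pc
```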
\begin{figure*}
\includegraphics[width=4.5in]{Figure1_revised.ps}
\caption{The probability that a star with a Solar-like wind passes through a cloud with sufficiently high density that accretion
occurs, within the main progenitor of the halo of the Milky Way. The dashed line shows the probability of accretion per unit redshift $z$,
while the solid line shows the probability for accretion taking place after redshift $z$.
Due to the lower stellar velocities within the halo at earlier times, and to the larger volume-filling factor of the gas that can thus be accreted,
there is a much higher likelihood of accretion at higher redshift (see also Komiya et al. 2010 for the case of no stellar wind). Note that the kink in the probability
per unit redshift at $z$ $\sim$ 16 is due to the power-law break in the function adopted for $f_{\rm cloud}$ (see equation 14). The
low probability of accretion ($p_{\rm acc}$ $\le$ 0.2) overall suggests that low-mass Pop~III stars with Solar-like winds may typically avoid accretion altogether.
}
\end{figure*}
Given this condition, if dynamical interactions result in the ejection of
a low-mass Pop~III star from the central $\sim$ 10 pc of the halo in which it forms, then the star will be prevented from accreting material.
If the Pop~III IMF is top-heavy, albeit with some low-mass stars forming, then dynamical interactions during the formation of stellar clusters are likely to eject
the low-mass stars at higher velocities than in the case of present-day star formation (e.g. Greif et al. 2010a), which can be up to
a few km s$^{-1}$ (e.g. Bate et al. 2003). As such ejection velocities can be comparable to the virial velocity of the minihalo hosting Pop~III star formation,
this could result in such low-mass stars being kicked out of the centers of their host haloes at early times. Indeed, it may be the case that low-mass stars form
{\it because} they are kicked out of the center of the halo, the accretion of dense gas being halted before the star can grow to be massive
(e.g. Bate et al. 2003; Clark et al. 2010). Note that even for an ejection velocity of just 3-5 km s$^{-1}$, a low-mass Pop~III star would travel
10 pc from the center of its host minihalo within the 2-3 Myr lifetime of the most massive stars that explode as SNe and enrich the ISM. Therefore, low-mass
Pop~III stars may avoid accretion of metals from within their host haloes as a natural consequence of the process of their formation.
We also note that mergers of star-forming haloes, which for the minihalo progenitors of the first galaxies occur on timescales of 10$^{7}$ - 10$^8$ yr
(e.g. Greif et al. 2008; Gao et al. 2010), are likely to result in the violent relaxation of the stars in the halo, and this may also serve to
kick low-mass stars to larger radii in the merged host halo.
A further impediment to the accretion of metals is that this will not occur until the SNe ejecta from massive Pop~III stars
re-collapses to high densities within the host halo, which generally occurs on a timescale of $\sim$ 10$^8$ yr
(see Wise \& Abel 2008; Greif et al. 2010b; but see also Whalen et al. 2008a). Prior to this re-collapse, the ejecta will be at low density and may also
be moving with high velocities, making accretion very unlikely.
If dynamical interactions within the initial stellar cluster, or violent relaxation as a result of a merger, take place within this time, then a low-mass
star is likely to avoid accretion in its host minihalo altogether.
\subsection{Accretion from dense clouds}
While the condition we have just found dramatically limits the range of circumstances in which a low-mass Pop~III star can accrete metal-enriched gas within the halo
in which it forms, it is still possible for accretion to take place if the star passes through a sufficiently dense molecular cloud in a more massive halo
at a later time. Taking it that the star has a velocity relative to a dense cloud of $v_{\rm *}$ $\sim$ $v_{\rm max}$, and using equation (7)
with the redshift dependence of $r_{\rm vir}$, we find the minimum density $n_{\rm min}$ of a cloud required for accretion to be
\begin{equation}
n_{\rm min} \sim 500 {\rm cm}^{-3} \left(\frac{M_{\rm h}}{10^8 {\rm M_{\odot}}}\right)^{\frac{2}{3}} \left(\frac{1+z}{10}\right) \mbox{\ .}
\end{equation}
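As an illustrative numerical transcription of equation (11) (not code from the original analysis), the threshold density can be evaluated as:

```python
def n_min_cm3(M_h, z):
    """Minimum cloud density for accretion, equation (11), in cm^-3.
    M_h is the halo mass in solar masses."""
    return 500.0 * (M_h / 1e8) ** (2.0 / 3.0) * ((1.0 + z) / 10.0)

# A 1e8 Msun halo at z = 9 requires n >= 500 cm^-3; for a Milky Way-like
# halo (1e12 Msun) at z = 0 the threshold rises to ~2e4 cm^-3.
```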
For a molecular cloud with this density and with a characteristic size $r_{\rm cloud}$ $\sim$ 10 pc (e.g. Spitzer 1978), assuming the standard Bondi (1952) accretion rate, we find
that this would result in the accretion of a total mass of $\sim$ 10$^{-7}$ M$_{\odot}$, leading to a stellar surface metallicity of
\begin{eqnarray}
{Z}_{\rm surf} & \sim & 10^{-7} {\rm Z_{\odot}} \left(\frac{f_{\rm conv}}{10^{-3}} \right)^{-1} \left(\frac{M_{\rm *}}{0.8 {\rm M_{\odot}}}\right)^2 \left(\frac{r_{\rm cloud}}{10 {\rm pc}}\right) \nonumber \\
& & \left(\frac{Z_{\rm met}}{{\rm 10^{-3} Z_{\odot}}}\right) \left(\frac{M_{\rm h}}{10^8 {\rm M_{\odot}}}\right)^{-\frac{2}{3}} \left(\frac{1+z}{10}\right)^{-1} \mbox{\ ,}
\end{eqnarray}
where $Z_{\rm met}$ is the metallicity of the gas in the cloud, $M_{\rm *}$ $\la$ 0.8 M$_{\odot}$ is the mass of the star, and $f_{\rm conv}$ is the fraction of the stellar mass contained in the outer convective layer (e.g. Yoshii 1981).
We have normalized this formula to the results of recent cosmological simulations of the earliest episodes of metal enrichment
via powerful Pop~III SNe (Wise \& Abel 2008; Greif et al. 2010b), from which it has been found that the metallicity of the gas that re-collapses into haloes of mass $M_{\rm h}$ $\sim$ 10$^8$ M$_{\odot}$ at $z$ $\ga$ 10 is of the order of $Z_{\rm met}$ $\sim$ 10$^{-3}$ $Z_{\odot}$. This metallicity is likely to be roughly an order of magnitude lower if the Pop~III IMF is not top-heavy (see e.g. Tumlinson 2010; Komiya et al. 2010).
For the case of the Milky Way at $z$ = 0, with a typical cloud size $r_{\rm cloud}$ $\sim$ 10 pc, $M_{\rm h}$ = 10$^{12}$ M$_{\odot}$, and $Z_{\rm met}$ = Z$_{\odot}$, we find $Z_{\rm surf}$ $\sim$ 10$^{-6}$ Z$_{\odot}$, in basic agreement with the estimate of the surface enrichment for the similar (but with $r_{\rm cloud}$ = 100 pc) case presented by Frebel et al. (2009).
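Equation (12) can be transcribed directly; the sketch below (illustrative only, with defaults matching the normalizations above) reproduces both the fiducial value and the Milky Way estimate just quoted:

```python
def z_surf(f_conv=1e-3, M_star=0.8, r_cloud=10.0, Z_met=1e-3, M_h=1e8, z=9.0):
    """Stellar surface metallicity after one cloud passage, equation (12),
    in units of Z_sun. Defaults match the normalizations in the text."""
    return (1e-7 * (f_conv / 1e-3) ** -1.0 * (M_star / 0.8) ** 2
            * (r_cloud / 10.0) * (Z_met / 1e-3)
            * (M_h / 1e8) ** (-2.0 / 3.0) * ((1.0 + z) / 10.0) ** -1.0)

# Fiducial case: 1e-7 Z_sun.  Milky Way today (M_h = 1e12 Msun, z = 0,
# Z_met = Z_sun): ~2e-6 Z_sun, of the order quoted in the text.
```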
In the following, we consider the probability of a star encountering a cloud of sufficient density that such enrichment via accretion may take place.
\subsubsection{Accretion within the main progenitor of the Galaxy}
An important specific case to consider is the probability of accretion onto low-mass Pop~III stars formed in the main progenitor halo of the Galaxy, as
this is the most likely formation site of the earliest Pop~III stars, which formed before radiative or chemical feedback could begin to inhibit
primordial star formation (e.g. Brook et al. 2007; Gao et al. 2010).
For this, we make use of the results from recent, high-resolution cosmological simulations of the formation of Milky Way-like haloes (Boylan-Kolchin et al. 2010).
These authors provide a best-fit formula for the mass assembly histories of their simulated haloes, as a function of redshift:
\begin{equation}
M_{\rm h}(z) = M_{\rm 0} (1+z)^{2.23} {\rm exp}\left[-4.9\left((1+z)^{\frac{1}{2}} - 1\right)\right] \mbox{\ ,}
\end{equation}
where we take the present-day mass of the halo to be $M_{\rm 0}$ = 10$^{12}$ M$_{\odot}$.
Comparing this formula with the results of Zhao et al. (2009) on the early growth of Milky Way-like haloes, we find good agreement up to at
least $z$ $\sim$ 20 (see also Gao et al. 2010), at which point Pop~III star formation was likely ongoing (e.g. Bromm \& Larson 2004).
Therefore, we shall use this formula for the growth history of the halo from $z$ = 20 to the present-day in our calculations.
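The assembly history of equation (13) is straightforward to evaluate numerically; for instance (a direct, illustrative transcription with $M_{\rm 0}$ = 10$^{12}$ M$_{\odot}$):

```python
import math

def M_halo(z, M0=1e12):
    """Best-fit mass assembly history of equation (13), in solar masses."""
    return M0 * (1.0 + z) ** 2.23 * math.exp(-4.9 * ((1.0 + z) ** 0.5 - 1.0))

# The main progenitor reaches ~2e7 Msun at z = 20 and ~2.5e9 Msun at z = 10.
```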
Next, we shall assume the same density dependence of the volume-filling fraction $f_{\rm cloud}$($n_{\rm gas}$) of
gas clouds with gas density $\ge$ $n_{\rm gas}$ as found by Talbot \& Newman (1977). In particular, we shall take it that
\begin{equation}
f_{\rm cloud} = 5.6 \times 10^{-3} \left(\frac{n_{\rm gas}}{10^3 {\rm cm^{-3}}} \right)^{-\beta} \mbox{\ ,}
\end{equation}
where $\beta$ = 0.9 for $n_{\rm gas}$ $\le$ 10$^{3}$ cm$^{-3}$ and $\beta$ = 3 for $n_{\rm gas}$ $>$ 10$^{3}$ cm$^{-3}$.
We note that this value of $\beta$ for $n_{\rm gas}$ $>$ 10$^3$ cm$^{-3}$ is also broadly consistent with the results
of the recent {\it Bolocam Galactic Plane Survey} of dense molecular clouds (Rosolowsky et al. 2010).
As the gas content of the Galaxy today is lower than in the past due to star formation and feedback, in equation (14) we have
chosen a normalization for $f_{\rm cloud}$ that is a factor of 10 larger than that derived by Talbot \& Newman for the case
of the present-day Milky Way, in order to conservatively account for the higher gas fraction in the high redshift progenitors of the Galaxy.
Assuming that the star formation rate (SFR) within a halo is proportional to the mass of dense gas
(e.g. $n_{\rm gas}$ $\ga$ 30 cm$^{-3}$; Wolfire et al. 2003; Krumholz et al. 2009) in the halo, that the
mass of dense gas is proportional to the total halo mass, and that the SFR in the Galaxy today
is of the order of 1 M$_{\odot}$ yr$^{-1}$, this normalization yields a SFR of order 10$^{-2}$ M$_{\odot}$ yr$^{-1}$ for dwarf galaxies in
$\sim$ 10$^{9}$ M$_{\odot}$ haloes at $z$ $\la$ 10, in broad agreement with the SFRs found in recent high-resolution cosmological
simulations (Wise \& Cen 2009; Razoumov \& Sommer-Larsen 2010).
As we will show, the probability for accretion in the
halo is much higher at high redshift than in the present-day Milky Way, and thus our choice of an elevated volume-filling fraction of
dense gas compared to the present-day Galaxy does not appreciably affect our results for the overall probability of accretion, even though it is clearly an overestimate at low redshift.
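The broken power law of equation (14) can be sketched as follows (an illustrative transcription, not code from the analysis):

```python
def f_cloud(n_gas):
    """Volume-filling fraction of gas at density >= n_gas [cm^-3],
    equation (14), with the broken power-law slope beta."""
    beta = 0.9 if n_gas <= 1e3 else 3.0
    return 5.6e-3 * (n_gas / 1e3) ** -beta

# The steep slope above 1e3 cm^-3 makes dense clouds rare: f_cloud drops
# from 5.6e-3 at 1e3 cm^-3 to 5.6e-6 at 1e4 cm^-3.
```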
Using the mass accretion history of the main progenitor halo given by equation (13) and the volume-filling fraction $f_{\rm cloud}$ given by equation (14),
we estimate the number of cold ($c_{\rm s}$ $<$ $v_{\rm max}$), dense clouds $N_{\rm acc}$($z$) with $n_{\rm gas}$ $\ga$ $n_{\rm min}$ (given by equation 11) through
which a star has passed between redshift $z$ and the present-day as
\begin{equation}
N_{\rm acc} \sim \int^{z}_{0} \frac{f_{\rm cloud}} {t_{\rm orb}} \left|\frac{dt}{dz'}\right| dz' \mbox{\ ,}
\end{equation}
where $t_{\rm orb}$ $\sim$ $r_{\rm ap}$ / $v_{\rm max}$ is the orbital period of the star, and $t$($z$) is the age of the Universe at redshift $z$.
While it is largely uncertain what we should choose for the apogalactic distance of the stellar orbit, for our current purposes
we choose $r_{\rm ap}$ = 0.1 $r_{\rm vir}$, to be consistent with simulations which suggest that Pop~III stars formed in the high-redshift progenitors of the Galaxy
end up in the central $\sim$ 0.1 $r_{\rm vir}$ of the halo at $z$ = 0
(e.g. Diemand et al. 2005; Scannapieco et al. 2006; Brook et al. 2007; De Lucia \& Helmi 2008; Madau et al. 2008; Salvadori et al. 2010; Gao et al. 2010).
We shall relax this assumption in the next Section, in which we consider the broader question of accretion onto Pop~III stars formed within the full population of
Milky Way progenitor haloes.
We emphasize that the formula for $f_{\rm cloud}$ that we use here (equation 14) was derived by Talbot \& Newman (1977) as the
volume-filling fraction of the disk of the Milky Way. Therefore, our estimate of the number $N_{\rm acc}$ of dense clouds encountered
implicitly assumes that at every orbit a star passes through the central, star-forming
regions of its host halo corresponding to the disk of the Galaxy today. As stars may instead orbit completely outside the central regions of the halo
(i.e. at radii $\ga$ 0.1 $r_{\rm vir}$), in this sense equation (15) is an upper limit for $N_{\rm acc}$.
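Putting equations (11), (13), (14) and (15) together, the integral can be evaluated numerically. The sketch below adopts Einstein-de Sitter timescales and a standard virial-radius scaling ($r_{\rm vir}$ $\sim$ 1 kpc for a 10$^8$ M$_{\odot}$ halo at $z$ = 9) as simplifying assumptions; these normalizations are illustrative, not the exact inputs of the calculation, but they reproduce the order of magnitude of the resulting probabilities.

```python
import math

OMEGA_M, H0_INV_GYR = 0.3, 14.0   # assumed cosmology: Omega_m and 1/H0 in Gyr

def M_halo(z, M0=1e12):           # equation (13)
    return M0 * (1.0 + z) ** 2.23 * math.exp(-4.9 * ((1.0 + z) ** 0.5 - 1.0))

def r_vir_kpc(M, z):              # assumed standard virial-radius scaling
    return 1.0 * (M / 1e8) ** (1.0 / 3.0) * ((1.0 + z) / 10.0) ** -1.0

def v_max_kms(M, z):              # circular velocity at r_vir
    G_KPC = 4.30e-6               # G in kpc (km/s)^2 / Msun
    return math.sqrt(G_KPC * M / r_vir_kpc(M, z))

def n_min_cm3(M, z):              # equation (11)
    return 500.0 * (M / 1e8) ** (2.0 / 3.0) * ((1.0 + z) / 10.0)

def f_cloud(n):                   # equation (14)
    beta = 0.9 if n <= 1e3 else 3.0
    return 5.6e-3 * (n / 1e3) ** -beta

def dt_dz_gyr(z):                 # |dt/dz| for a matter-dominated universe
    return H0_INV_GYR * OMEGA_M ** -0.5 * (1.0 + z) ** -2.5

def N_acc(z_form, r_frac=0.1, steps=400):
    """Midpoint-rule evaluation of equation (15): number of clouds with
    n >= n_min crossed between z_form and z = 0, for r_ap = r_frac * r_vir."""
    total, dz = 0.0, z_form / steps
    for i in range(steps):
        z = (i + 0.5) * dz
        M = M_halo(z)
        t_orb_gyr = 0.978 * r_frac * r_vir_kpc(M, z) / v_max_kms(M, z)  # kpc/(km/s) -> Gyr
        total += f_cloud(n_min_cm3(M, z)) / t_orb_gyr * dt_dz_gyr(z) * dz
    return total

# Most of the integral accrues at z >~ 15, where the halo is still small:
p_acc = 1.0 - math.exp(-N_acc(20.0))
```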
Evaluating the integral in equation (15) yields a probability of accretion after redshift $z$ of $p_{\rm acc}$($< z$) $\sim$ 1 - $e^{-N_{\rm acc}}$,
which is shown in Figure 1. One trend that is evident is that the probability for accretion is much higher at high redshift.
There are two reasons for this. Firstly, the maximum circular velocity $v_{\rm max}$ of the halo is lower at higher redshift, and hence the
volume-filling factor of gas at densities above $n_{\rm min}$ is higher. Secondly, the orbital period $t_{\rm orb}$ of stars that we have
chosen is also lower at high redshift, leading to more frequent passages through the ISM during which accretion may take place. We also note that there is a probability of only
$p_{\rm acc}$ $\la$ 0.2 of accretion onto stars formed at $z$ $\la$ 20, and a probability of only $p_{\rm acc}$ $\sim$ 10$^{-5}$ of accretion onto such stars during passages through
the disk of the Galaxy over the past $\sim$ 10 Gyr, as treated in Frebel et al. (2009). From this we conclude that, if they have Solar-like winds,
the majority of any extant low-mass Pop~III stars formed in the main progenitor halo of the Milky Way are likely to exhibit pristine surface compositions today,
having been unaffected by the accretion of metal-enriched gas.
\begin{figure*}
\includegraphics[width=6.5in]{Figure2_1e8halos_tdyn_randomtorb.ps}
\caption{The probability of accretion $p_{\rm acc}$ onto low-mass Pop~III stars hosted by the 10$^8$ M$_{\odot}$ progenitor haloes of the DM halo of the Milky Way, as a function
of the redshift of formation of the haloes ({\it left panel}). The distribution of these probabilities is shown in the right panel, for all 10$^8$ M$_{\odot}$
haloes ({\it black histogram}) and for the subset of 10$^8$ M$_{\odot}$ haloes formed at $z$ $>$ 10 ({\it red histogram}).
While accretion is expected to occur in a small subset of haloes hosting Pop~III stars at $z$ $\le$ 10, in general the probability
of accretion is only $\le$ 0.1. The probability is almost an order of magnitude lower for Pop~III stars formed at $z$ $>$ 10, at which redshifts the bulk of
Pop~III stars are likely to have formed prior to widespread chemical enrichment and the onset of reionization.
}
\end{figure*}
\subsubsection{Accretion within all progenitor haloes}
While the earliest Pop~III stars are most likely to form in the main progenitor halo of the Galaxy, the majority may form
in other, smaller progenitor haloes. Here we evaluate the probability of accretion onto low-mass Pop~III stars in these haloes.
For this purpose, we employ the extended Press-Schechter (e.g. Lacey \& Cole 1993) formalism to construct
a merger tree of the progenitor haloes of a Milky Way-like halo, with a mass of 10$^{12}$ M$_{\odot}$ at $z$ = 0. In particular, we employ the
Monte-Carlo method suggested by Somerville \& Kolatt (1999).
As we have shown, in haloes of lower mass, accretion is more likely to occur, due principally to the fact that a star moving at lower velocity
$v_{\rm *}$ $\sim$ $v_{\rm max}$ can accrete gas at a lower density, which in turn has a larger volume-filling factor $f_{\rm cloud}$. Therefore, the minimum
halo mass for which accretion of metals can take place is a critical parameter in our calculation. As discussed in Section 4.1, a minimum halo
mass of 10$^{8}$ M$_{\odot}$ is likely a good estimate (e.g. Greif et al. 2010b), as this is the typical mass of a halo that is able to re-capture the metal-enriched ejecta of the first
Pop~III SNe, which typically occur in the 10$^6$ - 10$^7$ M$_{\odot}$ progenitors of these haloes (e.g. Bromm \& Larson 2004; Trenti \& Stiavelli 2009).
Similar to the case of the main progenitor halo described in Section 4.2.1, we estimate the probability of accretion onto a low-mass Pop~III star
by integrating equation (15) from $z$ = 0 to the redshift at which a halo with the minimum mass forms. However, to account for the numerous halo mergers that
alter the orbital parameters of a star, and in particular the apogalactic distance of its orbit, instead of assuming $r_{\rm ap}$ = 0.1 $r_{\rm vir}$
as before, we now use $r_{\rm ap}$ = $N_{\rm rand}$ $r_{\rm vir}$, where $N_{\rm rand}$ is a random number between zero and unity that is updated after each merger.
This yields the orbital period of a star in a given halo
$t_{\rm orb}$ = $N_{\rm rand}$ $r_{\rm vir}$ / $v_{\rm max}$. Also, as in Komiya et al. (2010), we account for the delay in a star being incorporated into its
new host halo after a merger, which is taken to be the timescale for dynamical friction given by Springel et al. (2001).
Similar to the case of accretion onto Pop~III stars with no winds considered by Komiya et al., we find that accounting for this delay results in
somewhat higher accretion probabilities, especially for stars formed in haloes at relatively low $z$ that undergo mergers with much more massive haloes (see discussion below).
Figure 2 shows the probability that low-mass Pop~III stars within haloes with mass $\ge$ 10$^8$ M$_{\odot}$ accrete metal-enriched gas during the
assembly of the Galaxy. The left panel shows the probability of accretion onto stars hosted by all of the 10$^8$ M$_{\odot}$ progenitor haloes, as a function
of the redshift at which they form. The right panel shows the distribution of these probabilities; assuming that each 10$^8$ M$_{\odot}$ halo hosts the same number
of low-mass Pop~III stars, this can be translated as the distribution of probabilities of accretion onto all Pop~III stars in the halo of the Galaxy today.
As shown by the black histogram in the right panel, the probability of accretion is generally low, with the stars in most haloes having a probability of $\la$ 0.1
to accrete metal-enriched gas. We note that, although we have simply assumed that the stars have random orbits in their host haloes,
this conclusion holds for different random realizations of these orbits (i.e. different $N_{\rm rand}$).
Interestingly, as shown in the left panel of Fig. 2, the haloes for which accretion is most likely are those which form
at relatively low redshift, $z$ $\la$ 10. This is a result of the fact that such late-forming haloes also grow slowly, and
so there is more time available for accretion of relatively low-density gas in these haloes compared to those that grow more rapidly at higher redshift.
Another important effect is that a portion of the late-forming 10$^8$ M$_{\odot}$ haloes also undergo mergers with much more massive haloes than at higher $z$.
As the timescale for dynamical friction in these mergers can be of the order of Gyr, the stars can orbit within
the haloes of lower mass (and hence lower $v_{\rm max}$) for much longer times than they would at higher $z$. In turn, this results
in considerably higher accretion probabilities for some stars formed at relatively low $z$; however, the effect of dynamical
friction on the overall probability distribution, shown in the right panel of Fig. 2, is not large, as also found by Komiya et al. (2010)
in a similar calculation.
However, it is important to note that the fraction of such haloes at low redshift that actually host Pop~III stars may be relatively low. This is
due to feedback effects from stars formed in neighboring haloes, such as metal enrichment (e.g. Maio et al. 2010; Greif et al. 2010b) and photoevaporation
(e.g. Shapiro et al. 2004; Whalen et al. 2008b), which will act to diminish the formation
rate of Pop~III stars (see also Tsujimoto et al. 1999; Hernandez \& Ferrara 2001). Even neglecting the suppression of Pop~III star formation due to metal enrichment
and photoionization by massive stars, the suppression due to the photodissociation of molecules (e.g. Haiman et al. 1997) is likely to be significant on its own.
Assuming a physical number density of atomic-cooling haloes of $\sim$ 0.03 kpc$^{-3}$ at $z$ $\la$ 20, as found from simulations of the formation of
Milky Way-like haloes (Gao et al. 2010), either massive stars (e.g. Yoshida et al. 2003) or rapidly
accreting black holes (e.g. Greif et al. 2008; Johnson et al. 2010) within these galaxies would produce a background molecule-dissociating flux
of the order of $J_{\rm 21}$ $\sim$ 10$^{-2}$ (i.e. 10$^{-2}$ $\times$ 10$^{-21}$ erg s$^{-1}$ cm$^{-2}$ Hz$^{-1}$ sr$^{-1}$), high enough to suppress the
rate of formation of Pop III stars in minihaloes (e.g. O'Shea \& Norman 2008).
Furthermore, the probabilities of accretion onto Pop~III stars forming during and after reionization (e.g. at $z$ $\la$ 10; Johnson 2010)
are upper limits, since the 10$^8$ M$_{\odot}$ haloes in which they form will have had a substantial fraction of their gas photoevaporated out of the halo and
hence unavailable for accretion (see e.g. Thoul \& Weinberg 1996; Gnedin 2000; Dijkstra et al. 2004).
To evaluate the impact that external chemical and radiative feedback may have on the probability of accretion onto extant
low-mass Pop~III stars, the red histogram in the right panel of Fig. 2 shows the distribution of accretion probabilities in
the subset of haloes formed at $z$ $>$ 10.
The distribution of probabilities for this subset of haloes peaks at somewhat lower values than that of the overall population, with a typical probability
for accretion onto a star within these haloes of the order of 10$^{-2}$. Therefore, we conclude that if chemical enrichment is rapid or radiative feedback is strong
during the assembly of the Milky Way, then the accretion of metal-enriched gas onto a given low-mass Pop~III star is quite unlikely.
To account for the possibility that the metal-enriched SNe ejecta of the first stars can be contained in somewhat less massive haloes (see e.g. Kitayama \& Yoshida 2005;
Whalen et al. 2008a), we have also carried out the same exercise with a minimum halo mass for accretion of 10$^7$ M$_{\odot}$.
In this case, we find that the probability distribution for all such haloes peaks at $\sim$ 0.2, and that the distribution for the subset forming at $z$ $>$ 10 peaks at $\sim$ 0.1.
While the probability for accretion in this case is considerably higher, we can conclude that even in this case the majority of low-mass Pop~III stars are not likely
to have accreted metal-enriched gas.
\section{Implications and Conclusions}
We have carried out an analysis of the likelihood that low-mass Pop~III stars with primordial surface compositions
could be found in the present-day Milky Way, under the assumption that such stars have Solar-like stellar winds
that prevent the accretion of metal-enriched material from the ISM.
We have found that if low-mass Pop~III stars are kicked out of the central $\sim$ 10 pc of the haloes in which they form,
either via dynamical interactions with more massive stars or from violent relaxation during an early merger,
then the accretion of gas onto the stars can be prevented, at least
until the star is incorporated into a larger halo in which dense molecular clouds are formed outside of the central $\sim$ 10 pc (e.g. Pawlik et al. 2010).
To evaluate the likelihood that a star accretes gas during the passage through dense molecular clouds at later stages
in the hierarchical assembly of the Galaxy, we have modeled the dynamics of Pop~III stars in their host dark matter haloes along with the density
distribution of gas within these haloes. Overall, we find that the probability of enrichment via accretion is $\la$ 0.1,
and is even lower for Pop~III stars formed at $z$ $\ga$ 10, at which times the majority of primordial star formation may have occurred (e.g. Maio et al. 2010).
The primary uncertainty affecting our calculations is in the volume-filling fraction $f_{\rm cloud}$ of dense clouds in the progenitor
haloes of the Milky Way. As discussed in Section 4.2.1, we have made the simple, conservative choice of a constant volume-filling fraction. While this
is likely an overestimate at low redshift, it does yield broad agreement with cosmological simulations of star formation in dwarf galaxies
in haloes of mass $\ga$ 10$^8$ M$_{\odot}$ at high redshift (i.e. $z$ $>$ 2), which we have shown are the most likely sites for accretion onto low-mass Pop~III stars.
We note, however, that existing observations of relatively massive
galaxies (i.e. within haloes of mass $M_{\rm h}$ $\ga$ 10$^{11}$ M$_{\odot}$) at $z$ $\la$ 2 suggest a steep increase with redshift of the
star formation rate, which is likely to be proportional to the amount of dense gas (e.g. Bouch{\' e} et al. 2010). It is possible that
such a redshift dependence, which we have not sought to model, extends to the less massive progenitors of the Milky Way at higher redshift.
More precise calculations of the probability of accretion onto low-mass stars will have to await observations, for instance by the
{\it Atacama Large Millimeter Array} (e.g. Combes 2010), which will provide stronger constraints on the properties of the dense gas in high redshift galaxies.
We emphasize that the low probability we find for accretion onto low-mass Pop~III stars is limited to the case that such stars have weak, Solar-like stellar winds.
As previous authors have shown, the amount of metal enrichment via accretion can be much higher for stars with no winds (Frebel et al. 2009; Komiya et al. 2010).
Importantly, unless low-mass Pop~III stars do have winds, such enrichment via accretion is likely inevitable, and the likelihood of
finding truly pristine Pop~III stars in the Galaxy today is practically zero.
Our results thus suggest that it may be possible to test, unequivocally, theoretical predictions of whether or not low-mass Pop~III stars formed
in the early Universe, as we have found that any such stars are likely to still exhibit primordial surface compositions in the
Galaxy today. The assessment of these theories may ultimately require that such pristine stars be found (see e.g. Weiss et al. 2000) in large surveys
of metal-poor stars, such as the {\it Sloan Extension for Galactic Understanding and Exploration} (e.g. Yanny et al. 2009),
the {\it Radial Velocity Experiment} (e.g. Fulbright et al. 2010), the {\it Apache Point Observatory Galactic Evolution Experiment}
(e.g. Allende Prieto et al. 2008), the {\it Large Sky Area Multi-Object Fiber Spectroscopic Telescope} project (e.g. Zhao et al. 2006),
and the HERMES\footnote{http://www.aao.gov.au/AAO/HERMES/} project.
If, however, they are not discovered, then it may be for one of two reasons: either low-mass Pop~III stars do not have sufficiently
strong winds to prevent enrichment via accretion, or long-lived Pop~III stars were not formed in sufficiently high numbers in the progenitor
haloes of the Milky Way (see e.g. Madau et al. 2008) for them to be found in observational campaigns.
\section*{Acknowledgements}
JLJ is grateful to Thomas Greif for sharing his results on the kinematics of
low-mass Pop~III stars in minihaloes ahead of their publication, as well as for comments on an
early draft of this work.
The authors would like to thank Fabrice Durier, Volker Gaibler, and Claudio Dalla Vecchia
for valuable feedback and discussion, as well as the anonymous reviewer for a helpful report.
\section{Introduction}
The understanding of the
high energy behaviour of QCD in the perturbative Regge limit remains one of the most important and longstanding theoretical questions in particle physics. In the linear regime where gluonic saturation effects are not expected to be essential, QCD dynamics are described using the BFKL formalism~\cite{Fadin:1975cb,Kuraev:1976ge,Kuraev:1977fs,Balitsky:1978ic}, in the $k_t$-factorization~\cite{Cheng:1970ef, FL, GFL, Catani:1990xk, Catani:1990eg, Collins:1991ty, Levin:1991ry} framework. In order to reveal these resummation effects, first with
leading logarithmic (LL) precision, and more recently at
next-to-leading logarithmic (NLL) accuracy,
many processes have been proposed. One of the most promising ones is the inclusive dijet production with a large rapidity
separation, as proposed by Mueller and Navelet~\cite{Mueller:1986ey}. This idea led to many studies, now at the level of NLL precision.
Recent $k_t$-factorization studies of Mueller-Navelet jets~\cite{Colferai:2010wu,Ducloue:2013hia,Ducloue:2013bva,Ducloue:2014koa,Caporale:2012ih,Caporale:2013uva,Caporale:2014gpa,Celiberto:2015yba} were successful in describing such events at the LHC~\cite{Khachatryan:2016udy}, exhibiting the very first sign of BFKL resummation effects at the LHC.
To test the universality of such effects,
we propose to apply a similar formalism to study the production of a forward $J/\psi$ meson and a very backward jet with a rapidity interval that is large enough to probe the BFKL dynamics but small enough for
both the $J/\psi$ and the jet to be in the detector acceptance at LHC experiments such as ATLAS or CMS.\footnote{For example, at CMS the CASTOR calorimeter allows one to tag a jet down to $Y_2=-6.55$ in rapidity while the $J/\psi$ could be reconstructed up to $Y_1=2.4$, thus with a maximum interval in rapidity of almost 9, more than sufficient to see BFKL resummation effects.}
Although $J/\psi$ mesons were first observed more than 40 years ago, the theoretical mechanism for their production is still to be fully understood and the validity of some models remains a subject of discussions (for recent reviews see for example refs.~\cite{Brambilla:2010cs,Bodwin:2013nua}).
In addition, most predictions for charmonium production rely on collinear factorization, in which one considers the interaction of two on-shell partons emitted by the incoming hadrons, to produce a charmonium accompanied by a fixed number of partons. By contrast, in this work the $J/\psi$ meson and the tagged jet are produced by the interaction of two collinear partons, but with the resummation of any number of accompanying unobserved partons, as usual in the $k_t$-factorization approach.
Here we will compare two different approaches for the description of charmonium production. First we will use the NRQCD formalism~\cite{Bodwin:1994jh}, in which the charmonium wavefunction is expanded as a series in powers of the relative velocity of its constituents. Next we will apply the Color Evaporation Model (CEM), which relies on the local-duality hypothesis~\cite{Fritzsch:1977ay,Halzen:1977rs}.
Finally we will show numerical estimates of the cross sections and of the azimuthal correlations between the $J/\psi$ and the jet obtained in both approaches.
We will rely on the
Brodsky-Lepage-Mackenzie (BLM)
procedure~\cite{Brodsky:1982gc} to fix the renormalization scale,
as it was adapted
to the resummed perturbation theory \`a la BFKL in refs.~\cite{Brodsky:1998kn,Brodsky:2002ka}, which some of us applied to Mueller-Navelet jets in ref.~\cite{Ducloue:2013bva}. Below, we will only discuss in detail the new elements related to the various $J/\psi$ production mechanisms. All details related to the BFKL evolution at NLL can
be found in refs.~\cite{Colferai:2010wu,Ducloue:2013hia}, while the details related to the application of the BLM scale fixing in our study are presented in ref.~\cite{Ducloue:2013bva}.
\section{Determination of the $J/\psi$ meson vertex}
We start with the determination of a general meson $M$ production vertex (the fact that we will restrict ourselves to $J/\psi$ in the rest of this paper plays no role at this stage). For the moment, we do not consider any specific model for its production. We generically denote with an index $M$ the kinematical variables attached to the system made of the meson and the possible accompanying unobserved particles, and use an index $V$ for the kinematical variables attached to the $J/\psi$ meson itself.
The inclusive high-energy hadroproduction process of such a meson $M$, via two gluon fusion, with a remnant $X$ and a jet
with a remnant $Y$ separated by a large rapidity difference between the jet and the meson, in scattering of a hadron $H(p_1)$ with a hadron $H(p_2)$, is illustrated in figure~\ref{Fig:Process}, where as a matter of illustration, we consider the parton coming out of the hadron $H(p_1)$ to be a gluon and the parton coming out of the hadron $H(p_2)$ to be a quark. For the sake of illustration, we suppose that the meson is produced in the fragmentation region of the hadron $H(p_1)$, named as forward, while the jet is produced in the fragmentation region of the hadron $H(p_2)$, named as backward.
On one hand, the longitudinal momentum fractions of the
jet and of the meson are assumed to be large enough so that the usual collinear factorization applies (the hard scales are provided by the heavy meson mass and by the transverse momentum of the jet), and we can neglect any transverse momentum, denoting the momentum of the upper (resp. lower) parton as $x \, p_1$ (resp. $x' \, p_2$), their distribution being given by usual parton distribution functions (PDFs). On the other hand, the $t-$channel exchanged momenta (e.g. $k$ in the lhs of figure~\ref{Fig:Process}, or the various ones involved in the rhs of figure~\ref{Fig:Process}) between the meson and the jet cannot be neglected due to their large relative rapidity, and we rely on $k_t-$factorization.
According to this picture,\footnote{We use the same notations as in refs.\cite{Colferai:2010wu,Ducloue:2013hia}.}
the differential cross section can be written as
\begin{equation}
\frac{d\sigma}{dy_V d|p_{V\bot}|d\phi_V dy_{J} d|p_{J\bot}|d\phi_{J}}
= \sum_{{a, b}} \int_0^1 \!dx \int_0^1 \!dx' f_{a}(x) f_{b}(x') \frac{ d \hat{\sigma}}{dy_V d|p_{V\bot}|d\phi_V dy_{J} d|p_{J\bot}|d\phi_{J}} \, ,
\end{equation}
where $f_{a, b}$ are the standard parton distribution functions of a parton $a$ ($b$) in the corresponding hadron.
In $k_t$-factorization, the partonic cross section reads
\begin{equation}
\frac{ d \hat{\sigma}}{dy_V d|p_{V\bot}|d\phi_V dy_{J} d|p_{J\bot}|d\phi_{J}}
= \int d^2 k_\perp \, d^2 k_\perp' V_{V,a}(k_\perp,x) \, G(-k_\perp,-k_\perp',{\hat s})\,V_{J, b}(-k_\perp',x') \, ,
\label{eq:bfklpartonic}
\end{equation}
where $G$ is the BFKL Green's function depending on ${\hat s}=x x' s$,
denoting as $\sqrt{s}$ the center-of-mass energy of the two colliding hadrons.
At leading order (LO), the jet vertex reads~\cite{Bartels:2001ge,Bartels:2002yj}:
\begin{align}
V_{J, a}^{(0)}(k_\perp,x) = & h_{a}^{(0)}(k_\perp)\mathcal{S}_J^{(2)}(k_\perp;x) \, , \label{def:V0} \\ h_{a}^{(0)}(k_\perp) = & \frac{\alpha_s}{\sqrt{2}}\frac{C_{A/F}}{k_\perp^2} \, , \quad \mathcal{S}_J^{(2)}(k_\perp;x) = \delta\left(1-\frac{x_J}{x}\right)|p_{J \perp}|\delta^{(2)}(k_\perp-p_{J \perp}) \, . \label{def:S}
\end{align}
In the definition of $h_{\rm a}^{(0)}$, $C_A=N_c=3$ is to be used for an initial gluon and $C_F=(N_c^2-1)/(2N_c)=4/3$ for an initial quark. Following the notations of refs.~\cite{Bartels:2001ge,Bartels:2002yj}, the dependence of $V$ on the jet variables is implicit.
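As a trivial numerical illustration of these color factors (a sketch, not code from the analysis):

```python
# Casimir factors entering h_a^(0): C_A for a gluon-initiated jet vertex,
# C_F for a quark-initiated one, both fixed by the number of colors N_c.
N_C = 3
C_A = N_C                         # adjoint Casimir
C_F = (N_C ** 2 - 1) / (2 * N_C)  # fundamental Casimir, = 4/3

# Gluon-initiated vertices are enhanced by the ratio C_A / C_F = 9/4.
```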
At next-to-leading order (NLO), the jet can be made of either a single or two partons. The explicit form of these jet vertices can be found in ref.\cite{Colferai:2010wu} as extracted from refs.~\cite{Bartels:2001ge,Bartels:2002yj} after correcting a few misprints of ref.~\cite{Bartels:2001ge}.
The explicit form of the BFKL Green's function $G$, as obtained at LL~\cite{Fadin:1975cb,Kuraev:1976ge,Kuraev:1977fs,Balitsky:1978ic} and at NLL~\cite{Fadin:1998py,Ciafaloni:1998gs} accuracy, can be found in ref.\cite{Colferai:2010wu}, and will not be reproduced here.
In the rest of the present paper, we will only focus on the case where the meson vertex is treated at lowest order, while the Green's function and the jet vertex will
be treated at NLL. The computation of the NLO $J/\psi$ vertex, which is a quite involved task, is left for further studies.
To fix the normalization properly, let us focus for a moment on the Born approximation, see the left panel of figure~\ref{Fig:Process}. Then, each building block in the factorized formula (\ref{eq:bfklpartonic}) is treated at lowest order.
In this limit,
our normalizations are such that the Born Green's function is
\begin{equation}
G^{\rm Born}(k_\perp,k_\perp',{\hat s})=\delta^2(k_\perp - k_\perp')\,,
\label{G:Born}
\end{equation}
while the jet vertices are given by eqs.~(\ref{def:V0}, \ref{def:S}).
As explained above, the relevant components of the involved momenta read
\begin{equation}
k = \beta p_2 +k_\bot, \quad p_J = x' p_2 +p_{J\bot}, \quad p_M= x \, p_1 + p_{M\bot} ,
\label{Sudakov_k}
\end{equation}
where $k$ is the
$t-$channel
exchanged momentum.
\begin{figure}[t]
\center
\psfrag{p1}{$\hspace{-.4cm} H(p_1)$}
\psfrag{p2}{$\hspace{-.4cm} H(p_2)$}
\psfrag{q1}{$x \, p_1$}
\psfrag{q2}{$\hspace{-.2cm} x' \, p_2$}
\psfrag{k}{$k$}
\psfrag{d}{$k'$}
\psfrag{q}{$p_J$}
\psfrag{a}{$a$}
\psfrag{b}{$b$}
\psfrag{M}{$p_M$}
\psfrag{X}{$X$}
\psfrag{Y}{$Y$}
\raisebox{1cm}{\includegraphics[scale=1]{process-Born-jet-Born.eps}} \hspace{2cm} \raisebox{0cm}{\includegraphics[scale=1]{process-BFKL.eps}}
\caption{The high-energy hadroproduction of a meson $M$ and a jet (here originating from a quark) with a large rapidity between them. Left: Born approximation. Right: inclusion of BFKL-like resummation effects due to multiple emissions of gluons and of higher order jet vertex corrections.}
\label{Fig:Process}
\end{figure}
In the high-energy limit, the ${\cal T}$-matrix element ${\cal T}_{Mq}$ reads
\begin{equation}
{\cal T}_{Mq} = \frac{1}{i} \, \frac{2}{s} \frac{(-i)}{p_{J\bot}^2} \langle X| A^a_\mu(0)|H(p_1)\rangle g^{\mu \nu}_\bot {\cal A}_\nu^{ab}\,\bar u(p_J)(-ig \hat p_1 t^b) \langle Y| q(0)|H(p_2)\rangle\;,
\label{Mprocess}
\end{equation}
where $a$ is the color index of a collinear gluon from the hadron $H(p_1)$ and $b$ is the color index of the exchanged $t-$channel gluon. Here ${\cal A}_\nu^{ab}$ denotes the $S$-matrix element describing the $g g \to M$ transition. Its computation will be discussed in detail in the following subsections.
After factorization, illustrated symbolically by figure~\ref{Fig:Born-Process-factorized},
\begin{figure}[t]
\center
\psfrag{p1}{$\hspace{-.6cm} H(p_1)$}
\psfrag{p2}{$\hspace{-.6cm} H(p_2)$}
\psfrag{q1}{$\hspace{-.2cm} x \, p_1$}
\psfrag{q2}{$\hspace{-.2cm} x' \, p_2$}
\psfrag{k}{$k$}
\psfrag{q}{$p_J$}
\psfrag{a}{$a$}
\psfrag{b}{$b$}
\psfrag{M}{$\!p_M$}
\psfrag{X}{$X$}
\psfrag{Y}{$Y$}
\psfrag{g}{$g_\perp$}
\psfrag{p1s}{$\slashchar{p}_1$}
\psfrag{p2s}{$\slashchar{p}_2$}
\hspace{.2cm}\raisebox{2cm}{\includegraphics[scale=.75]{process-Born-square-no-arrows.eps}} \quad \psfrag{M}{\raisebox{-.05cm}{$\!M$}}
\psfrag{J}{$\!\!J$}
\includegraphics[scale=.85]{process-Born-factorisation.eps}
\caption{Left: square of the amplitude of the Born process. Right: symbolic factorized form of this squared amplitude involving, from top to bottom, the gluonic PDF, the impact factor describing the $g\, g \to M$ transition, the $t-$channel exchange of two off-shell gluons (in bold), the vertex describing the $g q \to {\rm jet}$ transition, and the quark PDF. The crosses symbolically denote the appropriate Fierz structure in Lorentz space: from top to bottom, $g_\perp$ tensors for gluons, due to the collinear factorization of the gluon out of the upper PDF, $\slashchar{p}_1$ and $\slashchar{p}_2$ arising from the {\it non-sense} polarizations of the $t-$channel gluons in $k_t$-factorization, and finally $\slashchar{p}_2$ and $\slashchar{p}_1$ due to the collinear factorization of the quark out of the lower PDF. The black fermion and curved gluon lines symbolize the trace over color and Lorentz indices after the use of the Fierz identity in these two spaces, while the blue (grey in a black-and-white printout) gluon lines are traces over color after the use of the Fierz identity in color space.}
\label{Fig:Born-Process-factorized}
\end{figure}
we get
\begin{align}
\sum_{Mq} {\cal T}_{Mq}{\cal T}_{Mq}^* =& \frac{4}{s(p^2_{J\bot})^2} \frac{1}{4N(N^2-1)}\langle H(p_1)|A^{a'}_{\mu' \bot}(0)|X\rangle g_\bot^{\mu' \nu'}
\langle X| A^{a' }_{\nu' \bot}(0)|H(p_1) \rangle \nonumber \\
& \times \sum_M {\cal A}_{\mu \bot}^{ab} g^{\mu\nu}_\bot ( {\cal A}^{ab}_{\nu\bot})^*
g^2 \beta_{J} \langle H(p_2)| \bar q^c(0)|Y\rangle \langle Y| \hat p_1 q^c(0)|H(p_2) \rangle \,. \label{Tsquare}
\end{align}
The phase space measure reads\footnote{This should be understood in an extended way, in particular because $M$ might consist of several particles, as is the case for the color singlet NRQCD contribution.}
\begin{align}
d\Phi =& (2\pi )^4 \delta(p_1 + p_2 - [p_X] - [p_Y] - [p_M] - p_{J} ) \nonumber \\
&\times \left[ \frac{d^3p_X}{(2\pi )^32E_X} \right] \left[ \frac{d^3p_Y}{(2\pi )^32E_Y} \right] \left[ \frac{d^3p_M}{(2\pi )^32E_M} \right] \frac{d^3p_{J}}{(2\pi)^32E_{J}} \, . \label{PhSp}
\end{align}
It can be written in a factorized form in terms of the rapidity $y_{J}$ of the quark jet and its transverse momentum $p_{J\bot }$:
\begin{align}
&d\Phi = \nonumber \\
& \frac{2\pi }{s} \int d^2k_\bot \delta^2(-[p_{M\bot}] + k_\bot) \,dx\, \delta(x- [\alpha_M]) \,\delta(1-x -[\alpha_X]) \left[ \frac{d^3p_X}{(2\pi )^32E_X} \right] \left[ \frac{d^3p_M}{(2\pi )^32E_M} \right]
\nonumber \\
& \hspace{0.8cm} \times \delta^2(k_\bot + p_{J\bot}) \, dx' \, \delta(x' - \beta_{J})\,\delta(1-x' - [\beta_Y]) \,\left[ \frac{d^3p_Y}{(2\pi )^32E_Y} \right] \, dy_{J}\, d^2p_{J\bot}\,. \label{PhSpFact}
\end{align}
This $k_t$-factorization formula involves an integration over the transverse momentum $k_\bot$ of the four-momentum transfer $k$ in the $t-$channel between the two vertices.
Using the expressions of the unpolarized quark PDF
\begin{equation}
H^q(x')= \frac{1}{s} \int\, \left[ \frac{d^3p_Y}{(2\pi )^32E_Y} \right] \delta(1-x' - [\beta_Y]) \langle H(p_2)| \bar q(0)|Y\rangle \langle Y| \hat p_1 q(0)|H(p_2)\rangle \, ,
\label{Hq}
\end{equation}
and of the unpolarized gluon PDF,
\begin{equation}
\frac{g(x)}{x} = - \int \! \left[ \frac{d^3p_X}{(2\pi )^32E_X} \right] \delta(1-x -[\alpha_X]) \langle H(p_1)|A^{a'}_{\mu' \bot}(0)|X\rangle g_\bot^{\mu' \nu'}
\langle X| A^{a' }_{\nu' \bot}(0)|H(p_1) \rangle \, ,
\label{g}
\end{equation}
we obtain an expression for the differential cross section
\begin{align}
\frac{d \sigma}{dy_{J}d|p_{J\bot}|d\phi_{J}}=\int & \, dx \, g(x)\, dx' \,H^q(x')\,d^2k_\bot\,\delta(x-[\alpha_M])\,\delta^2(k_\bot - [p_{M\bot}]) \left[ \frac{d^3p_M}{(2\pi )^32E_M} \right] \nonumber \\
& \times \frac{8\sqrt{2}\pi^2}{s^2(N^2-1)^2 \,x\,k^2_\bot} \, \sum_{[M]} {\cal A}_{\mu \bot}^{ab} g^{\mu \nu}_\bot ({\cal A}^{ab}_{\nu\bot})^* \,\,V_{J,q}^{(0)}(-k_\bot,x')\,, \label{CSecM}
\end{align}
in which we factorized out the vertex for quark jet production in the Born approximation,
\begin{equation}
V_{J,q}^{(0)}(k_\bot,x') = \frac{g^2}{4\pi \sqrt{2}} \frac{C_F}{|k_\bot|} \,\delta\left(1-\frac{x_J}{x'}\right) \,\delta^2(k_\bot - p_{J\bot})\,,
\label{Vq}
\end{equation}
in accordance with eqs.~(\ref{def:V0}, \ref{def:S}).
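This agreement can be made explicit by a short consistency check (not part of the original derivation): on the support of the delta functions in eq.~(\ref{def:S}) one has $|p_{J\bot}|=|k_\bot|$, so that, using $g^2=4\pi\alpha_s$,

```latex
V_{J,q}^{(0)}(k_\bot,x') = \frac{g^2}{4\pi \sqrt{2}} \frac{C_F}{|k_\bot|}\,
\delta\!\left(1-\frac{x_J}{x'}\right)\delta^2(k_\bot - p_{J\bot})
= \frac{\alpha_s}{\sqrt{2}}\frac{C_F}{k_\bot^2}\,
\delta\!\left(1-\frac{x_J}{x'}\right) |p_{J\bot}|\,\delta^2(k_\bot - p_{J\bot})
= h_{q}^{(0)}(k_\bot)\,\mathcal{S}_J^{(2)}(k_\bot;x')\,.
```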
\subsection{Color-singlet NRQCD contribution}
In the color-singlet contribution
the system $[M]$ is made of the produced $J/\psi$ charmonium and of an unobserved gluon, emitted together with the charmonium in gluon-gluon fusion because of the negative charge parity of the $J/\psi$.
We parametrize the momentum $p_V$ of the $J/\psi$ and the momentum $l$
of the unobserved gluon in terms of Sudakov variables, as
\begin{equation}
p_V=\alpha_V p_1 + \frac{M^2_{J/\psi}-p^2_{V\bot}}{\alpha_V s}p_2 +p_{V\bot}\;,\;\;\;\;\;l=\alpha_l \, p_1 - \frac{l_\bot^2}{\alpha_l s} p_2+l_\bot \,.
\label{CSSudakov}
\end{equation}
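As a quick numerical sanity check of this parametrization (a pure-Python sketch; all kinematic values below are illustrative assumptions, and transverse squares are Minkowskian, $p_{V\bot}^2=-|p_{V\bot}|^2$), one can verify that eq.~(\ref{CSSudakov}) puts both particles on their mass shells:

```python
import math

# Illustrative kinematics (assumptions, not values from the text)
s = 100.0**2                         # squared hadronic CM energy
m = 1.5                              # charm mass, M_{J/psi} = 2m
aV, al = 0.3, 0.1                    # Sudakov fractions alpha_V, alpha_l
pVt, lt = (2.0, 1.0), (0.7, -0.4)    # Euclidean transverse 2-vectors

def dot(a, b):                       # Minkowski product of 4-vectors
    return a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3]

rs = math.sqrt(s)
p1 = (rs/2, 0.0, 0.0,  rs/2)         # light-like basis, p1.p2 = s/2
p2 = (rs/2, 0.0, 0.0, -rs/2)

def sudakov(alpha, beta, t):         # alpha*p1 + beta*p2 + t_perp
    return tuple(alpha*u + beta*v + w
                 for u, v, w in zip(p1, p2, (0.0, t[0], t[1], 0.0)))

# eq. (CSSudakov): with Minkowski transverse squares the numerators
# M^2 - p_{V perp}^2 and -l_perp^2 become M^2 + |p_{V perp}|^2 and +|l_perp|^2
M2 = (2*m)**2
pV = sudakov(aV, (M2 + pVt[0]**2 + pVt[1]**2)/(aV*s), pVt)
l  = sudakov(al, (lt[0]**2 + lt[1]**2)/(al*s), lt)
# expected: pV.pV = M2 (on-shell J/psi) and l.l = 0 (on-shell gluon)
```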
Thus the expression of
\begin{align}
&\delta(x-[\alpha_M])\,\delta^2(k_\bot - [p_{M\bot}]) \left[ \frac{d^3p_M}{(2\pi )^32E_M} \right] \nonumber \\
& = \delta(x - \alpha_l -\alpha_V)\, \delta^2(k_\bot - l_\bot - p_{V\bot})
\frac{d^3l}{(2\pi )^32E_l}\frac{d^3p_V}{(2\pi )^32E_V}
\nonumber \\
& = \frac{1}{4(2\pi )^6}\delta(x-\alpha_l - \alpha_V)\delta^2(k_\bot - l_\bot - p_{V\bot}) \frac{d\alpha_l\,\theta(\alpha_l)}{\alpha_l}\,d^2l_\bot\,dy_V d^2p_{V\bot} \label{CSM}
\end{align}
allows us, with the use of eq.~(\ref{CSecM}), to write the differential cross section in the form
\begin{align}
\frac{d\sigma}{dy_V d|p_{V\bot}|d\phi_V dy_{J} d|p_{J\bot}|d\phi_{J}} = \int & dx \, g(x) \,dy \, H^q(y) \, d^2k_\bot \frac{|p_{V\bot}|\sqrt{2} }{2^5 \pi^4 s^2 (N^2-1)^2 k_\bot^2\, x} \frac{\theta(x-\alpha_V)}{x-\alpha_V} \nonumber \\
& \times \sum_{\lambda_V, \lambda_l } {\cal A}_{\mu \bot}^{ab} g_\bot^{\mu \nu}({\cal A}_{\nu \bot}^{ab})^* \,\,V_q^{(0)}(-k_\bot,y) \, , \label{CSCrossS}
\end{align}
from which we read off the $J/\psi$ production vertex of the color singlet NRQCD contribution as
\begin{equation}
V_{J/\psi}^{(1)} =\frac{|p_{V\bot}|\sqrt{2} }{2^5 \pi^4 s^2 (N^2-1)^2 k_\bot^2\, x} \frac{\theta(x-\alpha_V)}{x-\alpha_V} \sum_{\lambda_V, \lambda_l } {\cal A}_{\mu \bot}^{ab} g_\bot^{\mu \nu}({\cal A}_{\nu \bot}^{ab})^* \,.
\label{CSVertex}
\end{equation}
One should note that the above expressions include an integration over the phase space of the unobserved gluon with momentum $l\,.$
The vertex which allows one to pass from open $q \bar{q}$ production to $J/\psi$ production in color singlet NRQCD reads~\cite{Guberina:1980dc,Baier:1983va}
\begin{equation}
[v(q)\bar u(q)]^{ij}_{\alpha \beta} \rightarrow \frac{\delta^{ij}}{4N} \left( \frac{\langle {\cal O}_1 \rangle_V}{m} \right)^{1/2} \left[ \hat \epsilon^*_V \left( 2\hat q +2m \right)\right]_{\alpha \beta} ,
\label{CSvertex}
\end{equation}
with the momentum $q = \frac{1}{2}p_V$, where $m$ is the mass of the charm quark and $M_{J/\psi}=2m$. In the following we will use the non-perturbative coefficient $C_1$ defined as
\begin{equation}
C_1 \equiv \left( \frac{\langle {\cal O}_1 \rangle_V}{m} \right)^{1/2}.
\label{C}
\end{equation}
The matrix element $\langle {\cal O}_1 \rangle_V$ in NRQCD is related to the
leptonic meson decay rate by~\cite{Bodwin:1994jh}
\begin{equation}
\Gamma[V\to l^+l^-]=\frac{2e_c^2\pi\alpha^2}{3}
\frac{\langle {\cal O}_1\rangle_V }{m^2}
\left( 1-\frac{16\alpha_s}{3\pi}\right) .
\label{decay}
\end{equation}
Here $\alpha$ is the fine-structure constant and $e_c=2/3$ is the electric charge of the charm quark. Equation (\ref{decay}) includes the one-loop
QCD correction \cite{Barbieri:1975ki,Billoire:1977mp,Celmaster:1978yz} and $\alpha_s$ is the strong coupling
constant. One can use the value of this decay rate to fix $\langle {\cal O}_1\rangle_V$ through this relation.
Namely, using the values $\Gamma_{e^+ e^-}= 5.55 \times 10^{-6}$ GeV~\cite{Olive:2016xmw}, $m=1.5$ GeV and a three-loop running coupling with $\Lambda_4=0.305$ GeV, we obtain $\langle{\cal O}_1 \rangle_{J/\psi}=0.444$ GeV$^3$. As quoted in ref.~\cite{Bain:2017wvk}, recent phenomenological analyses~\cite{Butenschoen:2011yh,Chao:2012iv,Bodwin:2014gia} have used slightly smaller values of either 0.387 or 0.440 GeV$^3$, as obtained in refs.~\cite{Eichten:1995ch} and~\cite{Bodwin:2007fz} respectively. In order not to underestimate the uncertainty, in the following we will vary $\langle{\cal O}_1 \rangle_{J/\psi}$ between 0.387 and 0.444 GeV$^3$.
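The inversion of eq.~(\ref{decay}) can be sketched as follows (a minimal pure-Python illustration; the value $\alpha_s=0.26$ used below is only an assumption for demonstration, not the three-loop running coupling with $\Lambda_4=0.305$ GeV used in the text, so the resulting number is not meant to reproduce $0.444$ GeV$^3$):

```python
import math

ALPHA_EM = 1.0/137.036   # fine-structure constant
EC = 2.0/3.0             # electric charge of the charm quark

def width(O1, m, alpha_s):
    """Gamma[V -> l+ l-] from <O_1>_V, following eq. (decay)."""
    return (2*EC**2*math.pi*ALPHA_EM**2/3) * O1/m**2 * (1 - 16*alpha_s/(3*math.pi))

def O1_from_width(gamma, m, alpha_s):
    """Solve eq. (decay) for <O_1>_V, given the measured leptonic width."""
    return 3*gamma*m**2 / (2*EC**2*math.pi*ALPHA_EM**2*(1 - 16*alpha_s/(3*math.pi)))

# Gamma_{e+e-} = 5.55e-6 GeV and m = 1.5 GeV as in the text;
# alpha_s = 0.26 is an illustrative assumption
O1 = O1_from_width(5.55e-6, 1.5, 0.26)   # <O_1>_V in GeV^3
```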
The momentum transfer $k$ in the $t-$channel entering the charmonium vertex has the approximate form given by eq.~(\ref{Sudakov_k}).
Momentum conservation in the charmonium vertex, $xp_1 +k =p_V+l$, leads to the following relations between the Sudakov variables of the momenta:
\begin{equation}
x=\alpha_V + \alpha_l\;,\;\;\;\;\;\;\;\; k_\bot=p_{V\bot} + l_\bot \;,\;\;\;\;\; \beta = \frac{4m^2 -p_{V\bot}^2}{\alpha_Vs} - \frac{l_\bot^2}{\alpha_l s} \;.
\label{CSmomcons}
\end{equation}
\def1.3{1.3}
\def1cm{.67cm}
\begin{figure}[t]
\center
\psfrag{a}{}
\psfrag{b}{}
\psfrag{u}{}
\psfrag{d}{}
\psfrag{d}{}
\psfrag{l}{$\ell$}
\psfrag{qu}{$q$}
\psfrag{qd}{$-q$}
\psfrag{p1}[R]{$xp_1$}
\psfrag{p2}[R]{$\beta p_2 + k_\perp \hspace{.6cm}$}
\hspace*{0.05cm}
\begin{tabular}{ccc}
\psfrag{i}{}
\hspace{1cm}\includegraphics[scale=1.3]{CE-D1-fierz.eps}
&
\psfrag{qu}{$\hspace{-.4cm}q$}
\psfrag{qd}{$\hspace{-.7cm}-q$}
\psfrag{i}{}
\hspace{1cm}\raisebox{0cm}{\includegraphics[scale=1.3]{CE-D2-fierz.eps}}
&
\psfrag{i}{}
\psfrag{qu}{$\hspace{-.4cm}q$}
\psfrag{qd}{$\hspace{-.2cm}-q$}
\hspace{1cm}\raisebox{0cm}{\includegraphics[scale=1.3]{CE-D3-fierz.eps}}
\\
$D_1$ & $D_2$ & $D_3$
\\
\\
\psfrag{qu}{$\hspace{-.5cm}-q$}
\psfrag{qd}{$q$}
\psfrag{i}{}
\hspace{1cm}\includegraphics[scale=1.3]{CE-D4-fierz.eps}
&
\psfrag{qu}{$\hspace{-.5cm}-q$}
\psfrag{qd}{$\hspace{-.2cm}q$}
\psfrag{i}{}
\hspace{1cm}\raisebox{0cm}{\includegraphics[scale=1.3]{CE-D5-fierz.eps}}
&
\psfrag{qu}{$\hspace{-.5cm}-q$}
\psfrag{qd}{$\hspace{-.2cm}q$}
\psfrag{i}{}
\hspace{1cm}\raisebox{0cm}{\includegraphics[scale=1.3]{CE-D6-fierz.eps}}\\
$D_4$ & $D_5$ & $D_6$
\end{tabular}
\caption{The 6 diagrams contributing to the amplitude in color singlet NRQCD. The blobs with a cross symbolize the Fierz structure of eq.~(\ref{CSvertex}).}
\label{Fig:Diagrams-CE}
\end{figure}
The contribution to the hard part is given by the 6 diagrams shown in figure~\ref{Fig:Diagrams-CE}, which leads to the expressions
\begin{equation}
D_1 = \frac{(-ig)^3i^2C_1}{4N} {\rm tr}_c(t^l t^a t^b) \hbox{Tr} \! \left[ \hat \epsilon^*_V (2\hat q \!+\! 2m) \hat \epsilon^*(l) \frac{\hat q \!+\! \hat l \!+\! m}{(q \!+\! l)^2 \!-\! m^2} \gamma_{\bot}^\mu \frac{\beta \hat p_2 \!+\! \hat k_\bot \!-\! \hat q \!+\! m}{(\beta p_2 \!+\! k_\bot \!-\! q)^2 \!-\! m^2} \hat p_2\right], \hspace{0.45cm}
\label{D1}
\end{equation}
\begin{equation}
D_2 = \frac{(-ig)^3i^2C_1}{4N} {\rm tr}_c(t^a t^l t^b) \hbox{Tr} \! \left[ \hat \epsilon^*_V (2\hat q \!+\! 2m) \gamma_\bot^\mu \frac{\hat q \!-\! x\hat p_1 \!+\! m}{(q \!-\! xp_1)^2 \!-\! m^2} \hat \epsilon^*(l) \frac{\beta \hat p_2 \!+\! \hat k_\bot \!-\! \hat q \!+\! m}{(\beta p_2 \!+\! k_\bot \!-\! q)^2 \!-\! m^2} \hat p_2\right],
\label{D2}
\end{equation}
\begin{equation}
D_3 = \frac{(-ig)^3i^2C_1}{4N} {\rm tr}_c(t^a t^b t^l) \hbox{Tr} \! \left[ \hat \epsilon^*_V (2\hat q \!+\! 2m) \gamma_\bot^\mu \frac{\hat q \!-\! x\hat p_1 \!+\! m}{(q \!-\! xp_1)^2 \!-\! m^2} \hat p_2 \frac{- \hat q \!-\! \hat l \!+\! m}{( -q \!-\! l )^2 \!-\! m^2} \hat \epsilon^*(l)\right], \hspace{0.95cm}
\label{D3}
\end{equation}
\begin{equation}
\hspace{0.14cm} D_4 = \frac{(-ig)^3i^2C_1}{4N} {\rm tr}_c(t^l t^b t^a) \hbox{Tr} \! \left[ \hat \epsilon^*_V (2\hat q \!+\! 2m) \hat \epsilon^*(l) \frac{\hat q \!+\! \hat l \!+\! m}{(q \!+\! l)^2 \!-\! m^2} \hat p_2 \frac{x \hat p_1 \!-\! \hat q \!+\! m}{(x p_1 \!-\! q)^2 \!-\! m^2} \gamma_\bot^\mu\right],
\label{D4}
\end{equation}
\begin{equation}
D_5 = \frac{(-ig)^3i^2C_1}{4N} {\rm tr}_c(t^b t^l t^a) \hbox{Tr} \! \left[ \hat \epsilon^*_V (2\hat q \!+\! 2m) \hat p_2 \frac{\hat q \!-\! \beta \hat p_2 \!-\!\hat k_\bot \!+\! m}{(q \!-\! \beta p_2 \!-\! k_\bot)^2 \!-\! m^2} \hat \epsilon^*(l) \frac{ x \hat p_1 \!-\! \hat q \!+\! m}{(x p_1 \!-\! q)^2 \!-\! m^2} \gamma_\bot^\mu\right],
\label{D5}
\end{equation}
\begin{equation}
D_6 = \frac{(-ig)^3i^2C_1}{4N} {\rm tr}_c(t^b t^a t^l) \hbox{Tr} \! \left[ \hat \epsilon^*_V (2\hat q \!+\! 2m) \hat p_2 \frac{\hat q \!-\! \beta \hat p_2 \!-\! \hat k_\bot \!+\! m}{(q \!-\! \beta p_2 \!-\! k_\bot)^2 \!-\! m^2} \gamma_\bot^\mu \frac{- \hat q \!-\! \hat l \!+\! m}{( - q \!-\! l)^2 \!-\! m^2} \hat \epsilon^*(l)\right], \hspace{0.15cm}
\label{D6}
\end{equation}
where ${\rm tr}_c$ and $\hbox{Tr}$ denote respectively the color and the Dirac traces.
Let us observe the following relations between the Dirac traces ${\hbox{Tr}}_{D(i)}$ of diagrams $D(i)$ due to charge conjugation invariance:
\begin{equation}
\hbox{Tr}_{D(1)} = \hbox{Tr}_{D(6)}\,,\;\;\;\; \hbox{Tr}_{D(2)} = \hbox{Tr}_{D(5)}\,,\;\;\;\; \hbox{Tr}_{D(3)} = \hbox{Tr}_{D(4)} \, .
\label{Drelations}
\end{equation}
Let us now consider the color factors: the symmetry property (\ref{Drelations}) implies that only the symmetric structure constants $d^{abl}$ of the $SU(N)$ color group appear in the sum of all diagrams. Thus, we obtain
\begin{align}
{\cal A}^{ab}_{\mu\bot}=&\sum\limits_{i=1}^6 D(i) = \frac{(-ig)^3i^2C_1}{4N} \frac{d^{abl}}{4} \left\{
\frac{2\,\hbox{Tr}_{D(1)\mu\bot}}{[ (q+l)^2 -m^2][ (\beta p_2 +k_\bot -q)^2 - m^2]} \right. \nonumber \\
& \hspace{0.6cm} \left. + \frac{2\,\hbox{Tr}_{D(2)\mu\bot}}{[ (q- x p_1)^2 -m^2 ][ (\beta p_2 +k_\bot -q)^2 - m^2 ]} +
\frac{2\hbox{Tr}_{D(3)\mu\bot}}{[ (q- x p_1)^2 -m^2 ][ (l +q)^2 - m^2 ]} \right\} \nonumber \\
\equiv & \; \frac{(-ig)^3i^2C_1}{4N} \frac{d^{abl}}{4} {\cal D}_{\mu\bot}^{\nu \rho} \epsilon^*_{V\rho }(2q)\epsilon^*_\nu (l) \, , \label{TrD}
\end{align}
where we introduced the shorthand notation ${\cal D}_{\mu\bot}^{\nu \rho} \epsilon^*_{V\rho }(2q)\epsilon^*_\nu (l)$ for the sum of all six diagrams contributing to $J/\psi$ production within the color singlet mechanism. One can check that this sum vanishes in the limit
$k_\perp \to 0$, as should be the case for an impact factor in $k_t$-factorization, due to gauge invariance.
For the gluon $g(l)$, we choose the gauge\footnote{Note that the sum of diagrams in this color singlet mechanism is gauge invariant, although the $t-$channel gluon is off-shell: indeed, due to the single color structure $d^{abl}$, which factorizes, they are QED-like.}
\begin{equation}
p_2\cdot \epsilon^*(l)=0\,,
\label{gaugel}
\end{equation}
which is a natural choice for a meson emitted in the fragmentation region of the hadron of momentum $p_1.$
The three different traces then read
\begin{align}
\hbox{Tr}_{D(1)}^{\,\mu\bot} =& 2m\,\hbox{Tr}\left[ m^2 \hat \epsilon^*_V \hat \epsilon^*(l) \gamma_\bot^\mu \hat p_2
+ \hat \epsilon^*_V \hat \epsilon^*(l) (\hat q + \hat l)\gamma_\bot^\mu ( \hat k_\bot - \hat q )\hat p_2 \right.
\nonumber \\
& \hspace{1.2cm} \left.
+ \hat \epsilon^*_V \hat q \hat \epsilon^*(l) (\hat q + \hat l)\gamma_\bot^\mu \hat p_2
+ \hat \epsilon^*_V \hat q \hat \epsilon^*(l) \gamma_\bot^\mu ( \hat k_\bot -\hat q ) \hat p_2
\right] \nonumber \\
=&8m\left[ k_\bot^\mu \left( 2 \epsilon^*(l)\cdot q \,\epsilon^*_V\cdot p_2 + \epsilon^*_V\cdot \epsilon^*(l)\, p_2\cdot l \right)
- \epsilon^{*\mu}_{V\bot}\left( k_\bot\cdot \epsilon^*(l)\,p_2\cdot l +4p_2\cdot q\,q\cdot \epsilon^*(l) \right) \right.
\nonumber \\
& \hspace{0.7cm} \left. +l_\bot^\mu \left( p_2\cdot \epsilon^*_V\, k_\bot\cdot \epsilon^*(l) -2p_2\cdot q\, \epsilon^*_V\cdot \epsilon^*(l) \right)\right. \nonumber \\
& \hspace{0.7cm} \left.+\epsilon^{*\mu}_\bot(l) \left( -k_\bot\cdot l_\bot\, p_2\cdot \epsilon^*_V + k_\bot\cdot \epsilon^*_V\, p_2\cdot l +2p_2\cdot q \, \epsilon^*_V \cdot l \right)
\right], \label{TrD1}
\end{align}
\begin{align}
&\hbox{Tr}_{D(2)}^{\,\mu\bot} = 2m\,\hbox{Tr}\left[ m^2 \gamma_\bot^\mu \hat \epsilon^*(l) \hat p_2 \hat \epsilon^*_V
+ \hat \epsilon^*_V \hat q \gamma_\bot^\mu \hat \epsilon^*(l) ( \hat k_\bot - \hat q ) \hat p_2 \right.
\nonumber \\
& \hspace{2.7cm} \left.
+ \hat \epsilon^*_V \hat q \gamma_\bot^\mu (\hat q - x \hat p_1) \hat \epsilon^*(l) \hat p_2
+ \hat \epsilon^*_V \gamma_\bot^\mu (\hat q - x \hat p_1) \hat \epsilon^*(l) ( \hat k_\bot -\hat q ) \hat p_2
\right] \nonumber \\
& = 2m \!\left\{
8q_\bot^\mu \left( -2p_2\cdot q\, \epsilon^*_V\cdot \epsilon^*(l) + \epsilon^*(l)\cdot k_\bot\,p_2\cdot \epsilon^*_V \right)
+8xp_2\cdot q\,\left( \epsilon^{*\mu}_{V\bot}\,p_1\cdot \epsilon^*(l) - \epsilon^{*\mu}_\bot(l)\,p_1\cdot \epsilon^*_V \right) \right.
\nonumber \\
& \hspace{1.1cm} \left.
+x\left[ -2s \epsilon^{*\mu}_{V\bot}\, k_\bot\cdot \epsilon^*(l) -2s \epsilon^{*\mu}_\bot(l)\,k_\bot\cdot \epsilon^*_V + k_\bot^\mu \left( 2s \epsilon^*_V\cdot \epsilon^*(l) -4 p_2\cdot \epsilon^*_V \, p_1\cdot \epsilon^*(l) \right)\right]
\right\}, \label{TrD2}
\end{align}
and
\begin{align}
\hbox{Tr}_{D(3)}^{\,\mu\bot} =& 2m\,\hbox{Tr}\left[ m^2 \hat \epsilon^*_V \gamma_\bot^\mu \hat p_2 \hat \epsilon^*(l)
- \hat \epsilon^*_V \gamma_\bot^\mu (\hat q - x \hat p_1) \hat p_2 (\hat q +\hat l) \hat \epsilon^*(l) \right.
\nonumber \\
& \hspace{1.2cm} \left.
- \hat \epsilon^*_V \hat q \gamma_\bot^\mu \hat p_2 (\hat q + \hat l) \hat \epsilon^*(l)
+ \hat \epsilon^*_V \hat q \gamma_\bot^\mu (\hat q - x \hat p_1) \hat p_2 \hat \epsilon^*(l)
\right] \nonumber \\
=&2m\left\{
8q^\mu_\bot \left( -2 q\cdot \epsilon^*(l)\, p_2\cdot \epsilon^*_V +\epsilon^*_V\cdot l\, p_2\cdot \epsilon^*(l) -p_2\cdot l\, \epsilon^*_V\cdot \epsilon^*(l) \right) \right. \nonumber \\
& \hspace{0.8cm} + x \left[
4s \epsilon^{*\mu}_{V\bot} \, q\cdot \epsilon^*(l) \left. + 2s \left( - \epsilon^{*\mu}_\bot(l) \, l\cdot \epsilon^*_V +l^\mu_\bot\, \epsilon^*(l)\cdot \epsilon^*_V \right) -4l_\bot^\mu \, p_1\cdot \epsilon^*(l)\,p_2\cdot \epsilon^*_V
\right. \right.
\nonumber \\
& \hspace{1.5cm} \left. \left. -4\epsilon^{*\mu}_\bot(l)\left( p_1\cdot \epsilon^*_V\, p_2\cdot l - p_1\cdot l \, p_2\cdot \epsilon^*_V \right) +4\epsilon^{*\mu}_{V\bot} \,p_2\cdot l\, p_1\cdot \epsilon^*(l)
\right]
\right\}. \label{TrD3}
\end{align}
The denominators appearing in the expression for ${\cal A}^{ab}_{\mu\bot}$ are equal to
\begin{align}
&\!\!\!(q+l)^2-m^2 = \frac{1}{2}\left[ k_\bot^2 +4m^2\left( \frac{x}{\alpha_V} -1 \right) -\frac{x}{\alpha_V} p_{V\bot}^2 -\frac{x}{\alpha_l} l_\bot^2 \right]\;, \nonumber \\
&\!\! \!(q-xp_1)^2 -m^2= -\frac{x}{2\alpha_V}(4m^2 - p_{V\bot}^2)\;, \;\;\; (\beta p_2 +k_\bot -q)^2 -m^2= \frac{1}{2}\!\left( k_\bot^2 -4m^2 +\frac{x}{\alpha_l} l_\bot^2 \right) . \label{CSdenom}
\end{align}
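These denominators can be checked numerically against the explicit Sudakov parametrization (a pure-Python sketch with illustrative kinematic values, which are assumptions; transverse squares are Minkowskian, e.g. $k_\bot^2=-|k_\bot|^2$):

```python
import math

# Illustrative kinematics (assumptions)
s, m = 100.0**2, 1.5
aV, al = 0.3, 0.1
x = aV + al                                    # eq. (CSmomcons)
pVt, lt = (2.0, 1.0), (0.7, -0.4)
kt = (pVt[0] + lt[0], pVt[1] + lt[1])          # k_perp = p_{V perp} + l_perp

def dot(a, b):
    return a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3]

def sq(a):
    return dot(a, a)

rs = math.sqrt(s)
p1 = (rs/2, 0.0, 0.0,  rs/2)
p2 = (rs/2, 0.0, 0.0, -rs/2)

def lc(alpha, beta, t):                        # alpha*p1 + beta*p2 + t_perp
    return tuple(alpha*u + beta*v + w
                 for u, v, w in zip(p1, p2, (0.0, t[0], t[1], 0.0)))

pV2, l2, k2 = (-(t[0]**2 + t[1]**2) for t in (pVt, lt, kt))   # Minkowski squares
pV = lc(aV, (4*m*m - pV2)/(aV*s), pVt)         # eq. (CSSudakov)
l  = lc(al, -l2/(al*s), lt)
q  = tuple(v/2 for v in pV)                    # q = p_V/2
beta = (4*m*m - pV2)/(aV*s) - l2/(al*s)        # eq. (CSmomcons)
xp1 = tuple(x*v for v in p1)
kv  = lc(0.0, beta, kt)                        # beta*p2 + k_perp

# left-hand sides from explicit 4-vectors vs right-hand sides of (CSdenom)
lhs1 = sq(tuple(a + b for a, b in zip(q, l))) - m*m
rhs1 = 0.5*(k2 + 4*m*m*(x/aV - 1) - (x/aV)*pV2 - (x/al)*l2)
lhs2 = sq(tuple(a - b for a, b in zip(q, xp1))) - m*m
rhs2 = -(x/(2*aV))*(4*m*m - pV2)
lhs3 = sq(tuple(a - b for a, b in zip(kv, q))) - m*m
rhs3 = 0.5*(k2 - 4*m*m + (x/al)*l2)
```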
The cross section is obtained by squaring the sum of the diagrams $D(i)$, i.e.\ by contracting this sum with its complex conjugate through the polarization tensors of the $J/\psi$ and of the gluon $g(l)$ and through the projection operator related to the factorization of the gluonic PDF, namely
\begin{equation}
{\cal D}^{(1)}(J/\psi) \equiv
{\cal D}^{\mu \nu \rho} g_{\bot \mu \mu'}\left( -g_{\rho \rho'} + \frac{q_\rho q_{\rho'}}{m^2} \right) \left( -g_{\nu \nu'} +\frac{p_{2\nu}l_{\nu'} +p_{2\nu'}l_\nu }{p_2\cdot l} \right) {\cal D}^{ *\mu' \nu' \rho'}\,.
\end{equation}
Thus we obtain
\begin{equation}
\sum_{\lambda_V\, \lambda_l} {\cal A}_{\mu\bot}^{ab}\, g_\bot^{\mu \nu}\, ({\cal A}_{\nu\bot}^{ab})^* =\,\frac{g^6 C_1^2}{(4N)^2}\frac{d^{abl} d^{abl}}{4^2} \,{\cal D}^{(1)}(J/\psi) \, ,
\label{CSAA*}
\end{equation}
which by taking into account eq.~(\ref{CSVertex}) gives the $J/\psi$ production vertex in the form
\begin{equation}
V_{J/\psi}^{(1)}
= \frac{|p_{V\bot}|\sqrt{2 } g^6 C_1^2}{s^2 \pi^4 2^{13} k_\bot^2} \frac{d^{abl} d^{abl}}{N^2(N^2-1)^2} \frac{\theta(x-\alpha_V)}{x(x-\alpha_V)}{\cal D}^{(1)}(J/\psi) \, ,
\label{vertexsin}
\end{equation}
with $\alpha_V=\frac{\sqrt{4m^2-p_{V\bot}^2}}{\sqrt{s}} e^{y_V}$ and $d^{abl} d^{abl} = \frac{(N^2-4)(N^2-1)}{N}$.
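The color factor $d^{abl} d^{abl} = (N^2-4)(N^2-1)/N$ can be verified directly for $N=3$ with explicit Gell-Mann matrices (a dependency-free pure-Python sketch, added here only as a cross-check):

```python
# d^{abc} = 2 Tr({T^a, T^b} T^c), with T^a = lambda_a/2, Tr(T^a T^b) = delta^{ab}/2
i = 1j
r3 = 3**-0.5
lam = [
    [[0, 1, 0], [1, 0, 0], [0, 0, 0]],
    [[0, -i, 0], [i, 0, 0], [0, 0, 0]],
    [[1, 0, 0], [0, -1, 0], [0, 0, 0]],
    [[0, 0, 1], [0, 0, 0], [1, 0, 0]],
    [[0, 0, -i], [0, 0, 0], [i, 0, 0]],
    [[0, 0, 0], [0, 0, 1], [0, 1, 0]],
    [[0, 0, 0], [0, 0, -i], [0, i, 0]],
    [[r3, 0, 0], [0, r3, 0], [0, 0, -2*r3]],
]
T = [[[lam[a][r][c]/2 for c in range(3)] for r in range(3)] for a in range(8)]

def mul(A, B):
    return [[sum(A[r][k]*B[k][c] for k in range(3)) for c in range(3)]
            for r in range(3)]

def tr(A):
    return sum(A[k][k] for k in range(3))

def d(a, b, c):
    ab, ba = mul(T[a], T[b]), mul(T[b], T[a])
    anti = [[ab[r][k] + ba[r][k] for k in range(3)] for r in range(3)]
    return (2*tr(mul(anti, T[c]))).real

# sum over all color indices; should equal (N^2-4)(N^2-1)/N = 40/3 for N = 3
dd = sum(d(a, b, c)**2 for a in range(8) for b in range(8) for c in range(8))
```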
The final expression for
${\cal D}^{(1)}(J/\psi)$ reads
\begin{align}
&{\cal D}^{(1)}(J/\psi)=
\frac{2^9}{\left(m^2-q_\perp^2\right)^2 \left(4 x k_\perp\cdot
q_\perp+k_\perp^2 \left(\alpha_V-2 x\right)+4 m^2
\left(x-\alpha_V\right)-4 x q_\perp^2\right)^2}\nonumber \\
& \times
\frac{s^2 \alpha_V^2 \left(\alpha_V-x\right)^2}{
\left(4 x \left(x q_\perp^2-\alpha_V k_\perp\cdot
q_\perp\right)+k_\perp^2 \alpha_V^2-4 m^2 \left(\alpha_V-x\right)^2\right)^2}
\left\{32 m^4 \alpha_V^2 \left(\alpha_V-x\right)^2 (k_\perp\cdot q_\perp)^2
\right. \nonumber \\
& \left.
+(k_\perp^2)^3 \alpha_V^2 \left[m^2 \left(\alpha_V^2-2 x \alpha_V+2 x^2\right)-q_\perp^2 \left(\alpha_V-x\right)^2\right]
\right. \nonumber \\
& \left.
+8 m^2 k_\perp^2 \left[-2 \alpha_V k_\perp\cdot q_\perp \left(m^2 \left(\alpha_V-x\right)^3+q_\perp^2 \left(2
\alpha_V^3+2 x^2 \alpha_V-3 x \alpha_V^2+x^3\right)\right)
\right.\right. \nonumber \\
& \left.\left.
+\alpha_V^2 \left(\alpha_V^2-2 x \alpha_V+3 x^2\right) (k_\perp\cdot q_\perp)^2 +2 \left(m^2 \left(\alpha_V-x\right)^2-q_\perp^2 \left(\alpha_V^2-x \alpha_V+x^2\right)\right)^2\right]
\right. \nonumber \\
& \left.
-4 (k_\perp^2)^2 \left[\alpha_V
k_\perp\cdot q_\perp \left(m^2 \left(\alpha_V^3+x^2 \alpha_V-x \alpha_V^2+x^3\right)-x q_\perp^2 \left(\alpha_V-x\right)^2\right)+m^4 \left(\alpha_V-x\right)^4
\right.\right. \nonumber \\
&
\left.\left.
+m^2
q_\perp^2 \left(-5 \alpha_V^4+6 x^3 \alpha_V-13 x^2 \alpha_V^2+12 x \alpha_V^3-2 x^4\right)+x^2 (q_\perp^2)^2 \left(\alpha_V-x\right)^2\right]\right\} \,.
\end{align}
\subsection{Color-octet NRQCD contribution}
In the color-octet contribution, $[M]$ denotes a single meson state, thus
\begin{equation}
\delta(x-[\alpha_M])\delta^2(k_\bot - [p_{M\bot}]) \left[ \frac{d^3p_M}{(2\pi )^32E_M} \right] = \frac{\delta(x-\alpha_V) \delta^2(k_\bot - p_{V\bot})}{2(2\pi )^3} dy_V d^2p_{V\bot} \, ,
\label{OctetM}
\end{equation}
which leads to the differential cross section
\begin{align}
\frac{d\sigma}{dy_V\, d|p_{V\bot }|d\phi_V dy_{J} d|p_{J\bot}|d\phi_{J}} = \int & \,dx \, g(x\,) \,dy \, H^q(y) \, d^2k_\bot
\frac{|p_{V\bot}|\delta(x-\alpha_V) \delta^2(k_\bot - p_{V\bot}) }{\sqrt{2} \pi s^2 (N^2-1)^2 \, k_\bot^2\, x} \nonumber \\
& \times \sum_{\lambda_V} {\cal A}_{\mu \bot}^{ab} g_\bot^{\mu \nu}({\cal A}_{\nu \bot}^{ab})^* \,\,V_q^{(0)}(-k_\bot,y) \, ,
\label{CrSecOctet}
\end{align}
from which we read off the $J/\psi$ production vertex of the color octet NRQCD contribution:
\begin{equation}
V_{J/\psi}^{(8)}(k_\bot,x) = \frac{|p_{V\bot}|\delta(x-\alpha_V) \delta^2(k_\bot - p_{V\bot}) }{\sqrt{2} \pi s^2 (N^2-1)^2 \, k_\bot^2\, x}
\sum_{\lambda_{V}} {\cal A}_{\mu \bot}^{ab} g_\bot^{\mu \nu}({\cal A}_{\nu \bot}^{ab})^* \, .
\label{VertexJPsiOctet }
\end{equation}
The vertex which allows one to pass from open $q \bar{q}$ production to $J/\psi$ production in color octet NRQCD is defined as
\begin{equation}
[v(q)\bar u(q)]^{ij\rightarrow d}_{\alpha \beta} \rightarrow\,t^d_{ij} d_8\,
\left( \frac{\langle{\cal O}_8 \rangle_V}{m} \right)^{1/2}\left[ \hat \epsilon^*_V \left( 2\hat q +2m \right)\right]_{\alpha \beta} \, ,
\label{COvertex}
\end{equation}
where the value of the coefficient $d_8$ is determined by comparison with the result of Cho and Leibovich~\cite{Cho:1995vh,Cho:1995ce}, namely eq.~(A.1b) of ref.~\cite{Cho:1995ce}, for the total
squared amplitude for creating a specific quarkonium state $^3S_1^{(8)}$. Note that here we only consider the case where the quark-antiquark pair has the same spin and orbital momentum as the $J/\psi$ meson. At large transverse momentum, which is the case we will consider in the following, this contribution is found to be dominant, see e.g. ref.~\cite{Kramer:2001hh}. For $N=3$, the coefficient is $d_8=\frac{1}{4\sqrt{3}}$.
An early analysis~\cite{Hagler:2000eu} gave for the non-perturbative coefficient
\begin{equation}
C_8 \equiv \left( \frac{\langle{\cal O}_8 \rangle_V}{m} \right)^{1/2}
\end{equation}
values between $3.2\times 10^{-4}$ and $5 \times 10^{-4}$ GeV$^3$. More recent analyses~\cite{Butenschoen:2011yh,Chao:2012iv,Bodwin:2014gia}, as quoted in ref.~\cite{Bain:2017wvk}, obtained significantly larger values, which we will use here: namely, we will vary $\langle{\cal O}_8 \rangle_{J/\psi}$ between $0.224 \times 10^{-2}$ and $1.1 \times 10^{-2}$ GeV$^3$.
\def1.3{1.3}
\def1cm{1cm}
\begin{figure}[t]
\center
\psfrag{a}{}
\psfrag{b}{}
\psfrag{c}{}
\psfrag{d}{}
\psfrag{l}{$\ell$}
\psfrag{qu}{$q$}
\psfrag{qd}{$-q$}
\psfrag{p1}[R]{$xp_1$}
\psfrag{p2}[R]{$\beta p_2 + k_\perp \hspace{.6cm}$}
\begin{tabular}{ccc}
\includegraphics[scale=1.1]{CO1-fierz.eps}
&
\hspace{1cm}\raisebox{0cm}{\includegraphics[scale=1.1]{CO2-fierz.eps}}
&
\hspace{1cm}\raisebox{0cm}{\includegraphics[scale=1.1]{CO3-fierz.eps}}\\
A & B & C
\end{tabular}
\caption{The 3 diagrams contributing to the amplitude in color octet NRQCD.
The blobs with a cross symbolize the Fierz structure of eq.~(\ref{COvertex}).}
\label{Fig:Diagrams-CO}
\end{figure}
The hard part corresponds to the sum of the three diagrams of figure~\ref{Fig:Diagrams-CO}, namely
\begin{align}
\label{Doctet}
{\cal A}_{ \bot}^{ab \mu}(A+B+C) = (-ig)^2 i \,\bar u(q) & \left[ t^at^b \, \gamma_\bot^\mu \frac{\hat q - x \hat p_1 +m}{(q-x p_1)^2-m^2} \hat p_2 + t^bt^a \hat p_2 \frac{x \hat p_1 - \hat q +m}{(x p_1-q)^2-m^2} \gamma^\mu_\bot \right. \nonumber \\
& \hspace{0.2cm} \left. -if^{abc} \left( -2k_\bot^\mu p_2^\rho +4 p_2\cdot q g_\bot^{\mu \rho} \right) \frac{t^c \gamma_\rho}{4q^2}
\right] v(q) \,.
\end{align}
After taking into account the projection (\ref{COvertex}) we obtain
\begin{align}
\label{DOctetJ}
{\cal A}_{ \bot}^{ab \mu}(A+B+C \rightarrow J/\psi)_8 = (-ig)^2i^2 f^{abd} \,\frac{1}{2} & \left\{ \frac{8m}{(q-xp_1)^2-m^2} \left[ -2q\cdot p_2
\epsilon^{*\mu}_{V\bot} + k_\bot^\mu p_2\cdot \epsilon^*_V \right] \right.
\nonumber \\
& \hspace{0.2cm} \left. -\frac{16m}{4q^2} \left[ - k_\bot^\mu p_2\cdot \epsilon^*_V + 2p_2\cdot q \, \epsilon^{*\mu}_{V\bot} \right]
\right\}d_8\,C_8\, ,
\end{align}
in which the propagator denominator is $(q-x p_1)^2-m^2=-\frac{1}{2}(4m^2- k_\bot^2)$.
One can easily check that this sum vanishes in the limit $k_\perp \to 0$, as should be the case for an impact factor in $k_t$-factorization. This is also true at the level of open quark production, see eq.~(\ref{Doctet}).
The result (\ref{DOctetJ}) together with eq.~(\ref{VertexJPsiOctet }) leads to the $J/\psi$ production vertex for $N=3$:
\begin{equation}
V_{J/\psi}^{(8)} =
-\delta (x-\alpha_V) \delta^2(k_\bot - p_{V \bot}) \frac{|p_{V \bot}|\sqrt{2} g^4 k_\bot^2 x }{128\pi m^3(4m^2-k_\bot^2)^2} \langle{\cal O}_8 \rangle_V \,.
\label{Voctet}
\end{equation}
\subsection{Color evaporation model}
In the color evaporation model, $[M]$ denotes an open quark-antiquark state with invariant mass $M$. Moreover, the differential cross section in this model involves an integration over the invariant mass $M^2$ in the interval $[4m^2,4M_D^2]$: it is assumed that in this interval, below the $D-$meson mass threshold, a fixed fraction of the $c \bar{c}$ pairs (produced either in a singlet or in an octet color state) form $J/\psi$ bound states. This fraction is parametrized by the constant $F_{J/\psi}$, whose universality is one of the main assumptions of the color evaporation model; we will vary it
between 0.02 and 0.04, based on a recent analysis~\cite{Nelson:2012bc}.
The $J/\psi$ momentum in this model is the sum $k_1+k_2$. We parametrize the momentum $k_1$ of the produced quark and the momentum $k_2$ of the produced antiquark as follows:
\begin{align}
k_1 &= \alpha_1 p_1 + \beta_1p_2 +k_{1\bot} \equiv x \alpha p_1 + \frac{m^2- (\alpha k_\bot +l_\bot )^2}{x \alpha s} p_2 + \alpha k_\bot +l_\bot\,, \;\;\;k_1^2=m^2, \label{CEkin1} \\
k_2 &= \alpha_2 p_1 + \beta_2p_2 +k_{2\bot} \equiv x \bar \alpha p_1 + \frac{m^2- (\bar \alpha k_\bot - l_\bot)^2}{x\bar \alpha s}p_2 + \bar \alpha k_\bot - l_\bot\,, \;\;\;k_2^2=m^2, \label{CEkin2} \\
M^2 &= (k_1 + k_2)^2 \equiv \frac{m^2-l_\bot^2}{\alpha \bar \alpha}\;,
\label{CEkinM}
\end{align}
with $\bar\alpha=1-\alpha\,.$
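These kinematic relations can be checked numerically (a pure-Python sketch; all numerical values are illustrative assumptions, and transverse squares are Minkowskian):

```python
import math

# Illustrative values (assumptions)
s, m = 100.0**2, 1.5
x, alpha = 0.35, 0.4
abar = 1 - alpha
kt, lt = (1.3, -0.6), (0.5, 0.9)

def dot(a, b):
    return a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3]

def sq(a):
    return dot(a, a)

rs = math.sqrt(s)
p1 = (rs/2, 0.0, 0.0,  rs/2)
p2 = (rs/2, 0.0, 0.0, -rs/2)

def lc(a, b, t):                              # a*p1 + b*p2 + t_perp
    return tuple(a*u + b*v + w
                 for u, v, w in zip(p1, p2, (0.0, t[0], t[1], 0.0)))

msq = lambda t: -(t[0]**2 + t[1]**2)          # Minkowski transverse square
k1t = (alpha*kt[0] + lt[0], alpha*kt[1] + lt[1])
k2t = (abar*kt[0] - lt[0], abar*kt[1] - lt[1])
k1 = lc(x*alpha, (m*m - msq(k1t))/(x*alpha*s), k1t)   # eq. (CEkin1)
k2 = lc(x*abar,  (m*m - msq(k2t))/(x*abar*s), k2t)    # eq. (CEkin2)
pair = tuple(a + b for a, b in zip(k1, k2))
M2 = (m*m - msq(lt))/(alpha*abar)                     # eq. (CEkinM)
# expected: k1, k2 on shell and (k1+k2)^2 = M2
```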
Thus,
\begin{align}
\delta(x\!-\![\alpha_M])\delta^2(k_\bot\!-\![p_{M\bot}])
\!\left[\!\frac{d^3p_M}{(2\pi )^32E_M}\!\right] \!\!&=\!
\delta(x\!-\!\alpha_1\!-\!\alpha_2)\delta^2(k_\bot\!-\!k_{1\bot}\!-\!k_{2\bot}) \frac{d^3k_1}{(2\pi )^32E_1} \frac{d^3k_2}{(2\pi )^32E_2} \nonumber \\
&=\delta(x-\alpha_V) \delta^2(k_\bot - k_{V\bot}) \frac{1}{4(2\pi )^6}\, \frac{d\alpha\, d^2l_\bot}{\alpha \bar \alpha} \,dy_V d^2k_{V\bot}\,,
\label{CEPhSp}
\end{align}
which, by taking into account eq.~(\ref{CSecM}), leads to the differential cross section in the color evaporation model in the form
\begin{align}
&\frac{d\sigma}{dy_V d|p_{V\bot}|d\phi_V dy_{J}d|p_{J\bot}|d\phi_{J}}= F_{J/\psi} \!\int \! dx \, g(x) \, dy \, H^q(y) \, d^2k_\bot \! \! \int_{4m^2}^{4M^2_{D}} \!\! dM^2\,
\delta \left(\! M^2 - \frac{m^2 - l^2_\bot }{\alpha \bar \alpha}\right)\!
\nonumber \\
& \hspace{0.6cm}\times \frac{ |p_{V \perp}|\sqrt{2} \,\delta(x-\alpha_V) \delta^2(k_\bot - p_{V\bot})}{2^5 \pi^4 s^2 (N^2-1)^2\, k^2_\bot\, x}\frac{d\alpha\, d^2l_\bot}{\alpha \bar \alpha} \sum_{\lambda_{k_1}\lambda_{k_2}} {\cal A}_{i \bot}^{ab} g_\bot^{i j}({\cal A}_{j \bot}^{ab})^* \,\,V_q^{(0)}(k_\bot,y)\,,
\label{CECrSec}
\end{align}
with $y_V= \ln \left( \alpha_V / \sqrt{\frac{M^2-p^2_{V\bot}}{s}} \right)$, from which we read off the $J/\psi$ production vertex in the color evaporation model:
\begin{align}
V^{(\rm CEM)}_{J/\psi}(k_\bot,x) = F_{J/\psi}\,\int\limits_{4m^2}^{4M_{D}^2} & dM^2 \, \delta\left(M^2- \frac{m^2 -l_\bot^2}{\alpha \bar \alpha}\right)\frac{d\alpha\, d^2l_\bot}{\alpha \bar \alpha} \sum_{\lambda_{k_1}\lambda_{k_2}} {\cal A}_{i \bot}^{ab} g_\bot^{i j}({\cal A}_{j \bot}^{ab})^* \nonumber \\
& \times \frac{|p_{V\bot}|\sqrt{2} \,\delta\left(x-\alpha_V\right) \delta^2\left(k_\bot - p_{V\bot}\right)}{2^5 \pi^4 s^2 (N^2-1)^2\, k^2_\bot\, x} \,. \label{CEVertex}
\end{align}
The contribution to the hard part of the vertex in the Born approximation is given by three diagrams analogous to the ones of the color octet NRQCD contribution, except for the absence of any Fierz projection, since we simply deal with open quark-antiquark production.
These diagrams are shown in figure~\ref{Fig:Diagrams-Cevaporation}.
The hard part then reads
\begin{figure}[t]
\center
\psfrag{a}{}
\psfrag{b}{}
\psfrag{c}{}
\psfrag{d}{}
\psfrag{l}{$\ell$}
\psfrag{qu}{$q$}
\psfrag{qd}{$-q$}
\psfrag{p1}[R]{$xp_1$}
\psfrag{p2}[R]{$\beta p_2 + k_\perp \hspace{.6cm}$}
\begin{tabular}{ccc}
\includegraphics[scale=1.1]{CO1.eps}
&
\hspace{1cm}\raisebox{0cm}{\includegraphics[scale=1.1]{CO2.eps}}
&
\hspace{1cm}\raisebox{0cm}{\includegraphics[scale=1.1]{CO3.eps}}
\end{tabular}
\caption{The three diagrams contributing to the amplitude in the color evaporation model.}
\label{Fig:Diagrams-Cevaporation}
\end{figure}
\begin{align}
{\cal A}_{i\bot}^{ab} \!=\! \bar u(k_1) & \! \left[ \! (-ig\gamma_{i\bot} t^a) \frac{i(-x\hat p_1 \! + \! \hat k_1 \!+ \!m)}{(-xp_1\!+\!k_1)^2\! -\! m^2} (-ig t^b \hat p_2) + (-ig t^b \hat p_2) \frac{i(x \hat p_1 \! - \! \hat k_2 \! + \! m)}{(xp_1 \! - \! k_2)^2 \! - \! m^2}(-ig \gamma_{i\bot} t^a) \right. \nonumber \\
& \hspace{0.2cm} \left. +g f^{abc} \left( -2 p_2^\nu k_{i\bot} + g^{ \nu}_{i\bot} xs \right) \frac{(-i)}{M^2} (-ig\gamma_\nu t^c)
\right] v(k_2)\;. \label{Aiab}
\end{align}
Thus its contribution to the $J/\psi$ production vertex has the form
\begin{equation}
\sum_{\lambda_{k_1}\lambda_{k_2}} ({\cal A}^{ab}_{i\bot})^* g_{\bot }^{ij} {\cal A}^{ab}_{j\bot} = \frac{g^4}{4} \left( c_a \hbox{Tr}_a + c_b \hbox{Tr}_b \right) \, ,
\label{CEhard}
\end{equation}
where the two color structures are given by
\begin{equation}
c_a=\frac{f^{abc}f^{abc}}{2}=\frac{N(N^2-1)}{2}\,,\, c_b= \frac{\delta^{ab}\delta^{ab}}{N^2}+\frac{d^{abc}d^{abc}}{2}=\frac{N^2-1}{N^2}\left( 1 + \frac{N(N^2-4)}{2}\right),\,\,\,\,\,\,\,
\label{ColorStructure-evap}
\end{equation}
and the two corresponding coefficients read
\begin{align}
\hbox{Tr}_a=&-4s \left[
\alpha_2 \beta_1 \left(\! -\frac{1}{\beta_1} +\frac{2xs}{M^2} \right)^2
\!\!+
\alpha_1 \beta_2 \left(\! -\frac{1}{\beta_2} +\frac{2xs}{M^2} \right)^2
\right] -8m^2 \left( \frac{2xs}{M^2} - \frac{1}{\beta_2} \right) \! \left( \frac{2xs}{M^2} - \frac{1}{\beta_1} \right) \nonumber \\
&+ \frac{8}{x} \left[\frac{k_{1\bot}}{\beta_1}+\frac{k_{2\bot}}{\beta_2}
- \frac{2 x s}{M^2} k_{\bot} \right] \cdot
\left[ \alpha_2 \left(\left(\frac{\alpha_1}x -1 \right)\frac{1}\beta_1
+ \frac{2 x s}{M^2} \right) k_{1\bot} \right. \nonumber \\
& \hspace{5cm} \left. + \alpha_1 \left(\left(\frac{\alpha_2}x -1 \right)\frac{1}\beta_2
+ \frac{2 x s}{M^2} \right) k_{2\bot}
- \frac{ 2 \alpha_1 \alpha_2s}{M^2} k_{\bot}\right]
\nonumber \\
=&-4s x\left[
\bar{\alpha} \beta_1 \left(\! -\frac{1}{\beta_1} +\frac{2xs}{M^2} \right)^2
\!\! +
\alpha \beta_2 \left(\! -\frac{1}{\beta_2} +\frac{2xs}{M^2} \right)^2
\right]
-8m^2 \left( \frac{2xs}{M^2} - \frac{1}{\beta_2} \right) \left( \frac{2xs}{M^2} - \frac{1}{\beta_1} \right) \nonumber \\
&+\!8\!\left[\frac{k_{1\bot}}{\beta_1}+\frac{k_{2\bot}}{\beta_2}
- \frac{2 x s}{M^2} k_{\bot} \right] \! \cdot \!
\left[ \bar{\alpha} \! \left(\!-\frac{\bar{\alpha}}\beta_1
\!+\! \frac{2 x s}{M^2} \right) k_{1\bot}
\!+\! \alpha \! \left(\!-\frac{\alpha}\beta_2
+ \frac{2 x s}{M^2} \right) k_{2\bot}
- \alpha \bar{\alpha} \frac{2 x s}{M^2} k_{\bot}\right] \! , \label{Tra}
\end{align}
and
\begin{align}
\hbox{Tr}_b \!=&\!
-4 s \left(\frac{\alpha_2}{\beta_1}+\frac{\alpha_1}{\beta_2}\right)
+ 8 \left[ \frac{\alpha_2}x \left(\frac{\alpha_1}x -1 \right) \frac{k_{1\bot}}{\beta_1}
- \frac{\alpha_1}x \left(\frac{\alpha_2}x -1 \right) \frac{k_{2\bot}}{\beta_2} \right] \! \cdot \! \left[\frac{k_{1\bot}}{\beta_1} - \frac{k_{2\bot}}{\beta_2} \right] + \frac{8m^2}{\beta_1 \beta_2} \,\,\,
\nonumber \\
\!=&\!
-4 s x\left(\frac{\bar{\alpha}}{\beta_1}+\frac{\alpha}{\beta_2}\right)
- 8 \left[ \bar{\alpha}^2 \frac{k_{1\bot}}{\beta_1}
-
\alpha^2
\frac{k_{2\bot}}{\beta_2} \right] \! \cdot \! \left[\frac{k_{1\bot}}{\beta_1} - \frac{k_{2\bot}}{\beta_2} \right] + \frac{8m^2}{\beta_1 \beta_2} \, . \label{Trb}
\end{align}
Using the fact that
\begin{equation}
k_{1\perp} \underset{k_\perp \to 0}{\sim} l_\perp \, ,
\quad k_{2\perp} \underset{k_\perp \to 0}{\sim} -l_\perp \, ,
\quad
\beta_1 \underset{k_\perp \to 0}{\sim} \frac{m^2 -l_\perp^2}{x \alpha s} \, ,
\quad
\beta_2 \underset{k_\perp \to 0}{\sim} \frac{m^2 -l_\perp^2}{x \bar{\alpha} s} \, ,
\end{equation}
as well as the kinematical relation (\ref{CEkinM}), one can easily check that, as expected, both $\hbox{Tr}_a$ and $\hbox{Tr}_b$ vanish in the limit $k_\perp \to 0\,.$
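The vanishing of $\hbox{Tr}_b$ as $k_\perp\to 0$ can also be checked numerically from the second form of (\ref{Trb}), using the exact kinematics (\ref{CEkin1})--(\ref{CEkin2}). The sketch below (toy parameter values, not from the text) assumes that transverse dot products carry the mostly-minus metric, i.e.\ $a_\bot\cdot b_\bot=-\vec a\cdot\vec b$ for the embedded Euclidean 2-vectors:

```python
import numpy as np

def trb(x, alpha, m, s, kvec, lvec):
    """Tr_b from eq. (Trb); transverse dot products use the mostly-minus
    metric, a_perp . b_perp = -(a . b) for Euclidean 2-vectors."""
    abar = 1 - alpha
    k1 = alpha*kvec + lvec                       # k_{1 perp}
    k2 = abar*kvec - lvec                        # k_{2 perp}
    beta1 = (m**2 + k1 @ k1)/(x*alpha*s)
    beta2 = (m**2 + k2 @ k2)/(x*abar*s)
    dot = lambda a, b: -(a @ b)                  # 4-vector transverse metric
    t1 = -4*s*x*(abar/beta1 + alpha/beta2)
    t2 = -8*dot(abar**2*k1/beta1 - alpha**2*k2/beta2, k1/beta1 - k2/beta2)
    t3 = 8*m**2/(beta1*beta2)
    return t1 + t2 + t3

x, alpha, m, s = 0.3, 0.4, 1.5, 1e4
l = np.array([0.7, 0.3])
val0 = trb(x, alpha, m, s, np.array([0.0, 0.0]), l)   # k_perp = 0
assert abs(val0) < 1e-3   # individual terms are O(10^6); the sum cancels
```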
\section{Results}
In this section we compare the cross sections and azimuthal correlations between the $J/\psi$ meson and the jet obtained with the color singlet, color octet and color evaporation hadronization mechanisms, for two different values of the center-of-mass energy: $\sqrt{s}=8$ TeV and $\sqrt{s}=13$ TeV. We consider equal values of the transverse momenta of the $J/\psi$ and the jet, $|p_{V \bot}|=|p_{J \bot}|=p_\bot$, and four different kinematical configurations:
\begin{itemize}
\item $0<y_V<2.5, \; -6.5<y_J<-5, \; p_\bot=10$ GeV,
\item $0<y_V<2.5, \; -4.5<y_J<0, \; p_\bot=10$ GeV,
\item $0<y_V<2.5, \; -4.5<y_J<0, \; p_\bot=20$ GeV,
\item $0<y_V<2.5, \; -4.5<y_J<0, \; p_\bot=30$ GeV.
\end{itemize}
The very backward jet in the first configuration could be measured for example with the CASTOR detector at CMS. An experimental study combining the CASTOR detector to tag the jet and the CMS tracking system to measure the $J/\psi$ meson would therefore make it possible to probe rapidity separations $Y \equiv y_V-y_J$ up to values as large as 9. For the other three configurations we restrict the rapidity of the jet to $y_J>-4.5$, which corresponds to the typical acceptance of the main detectors at ATLAS and CMS. In this case the maximum rapidity separation is $Y=7$. Since a BFKL calculation is valid only for a large rapidity separation, we will only show results for $Y>4$. We use the BLM renormalization scale fixing procedure, see ref.~\cite{Ducloue:2013bva}, which modifies the ``natural'' initial scale
$\mu_{R, {\rm init}}=\sqrt{|p_{V \bot}|\cdot|p_{J \bot}|}$
by
\begin{equation}
\mu^2_{R,{\rm BLM}}= |p_{V \bot}|\cdot|p_{J \bot}| \exp \left[ \frac{1}{2}
\chi_0(n,\gamma)-\frac{5}{3}+2\left(\!1+\frac{2}{3}I\!\right)\! \right] \, ,
\end{equation}
where
\begin{equation}
\chi_0(n,\gamma) = 2\psi(1)-\psi\left(\gamma+\frac{n}
{2}\right)-\psi\left(1-\gamma+\frac{n}{2}\right)\,,
\end{equation}
is the LL BFKL eigenvalue and
$I=-2\int_0^1 dx \ln(x)/[x^2-x+1] \simeq 2.3439$. The uncertainty band is computed in the same way as in ref.~\cite{Ducloue:2013bva} with the addition of the variation of the non-perturbative constants related to $J/\psi$ hadronization in the ranges specified in the previous sections. We fix the charm quark mass to $m=1.5$ GeV.
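The two constants entering the BLM scale can be reproduced in a few lines (an illustrative stdlib-only sketch; the finite-difference digamma is a numerical shortcut, not the method of the text, and the series for $I$ follows from expanding $1/(x^2-x+1)=(1+x)/(1+x^3)$ in powers of $x^3$ and integrating term by term):

```python
import math

def digamma(x, h=1e-5):
    # central-difference derivative of log Gamma (adequate for this check)
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2*h)

def chi0(n, gamma):
    """LL BFKL eigenvalue chi_0(n, gamma), for real gamma."""
    return 2*digamma(1) - digamma(gamma + n/2) - digamma(1 - gamma + n/2)

# I = -2 int_0^1 dx ln(x)/(x^2 - x + 1) as an alternating series
I = 2*sum((-1)**k * (1/(3*k+1)**2 + 1/(3*k+2)**2) for k in range(200000))
assert abs(I - 2.3439) < 1e-3
# at the symmetric point n = 0, gamma = 1/2 one has chi_0 = 4 ln 2
assert abs(chi0(0, 0.5) - 4*math.log(2)) < 1e-6
```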
\psfrag{sigma}{\raisebox{1.5mm}{\scalebox{0.8}{$\displaystyle \frac{d\sigma}{d|p_{V \bot}|\, d|p_{J \bot}| \, dY} [{\rm nb.GeV}^{-2}]$}}}
\psfrag{cos}{\scalebox{0.9}{$\langle \cos \varphi \rangle$}}
\psfrag{Y}{\scalebox{0.9}{$Y$}}
\psfrag{2jets}{\scalebox{0.8}{2 jets}}
\psfrag{CS}{\scalebox{0.8}{Color singlet}}
\psfrag{CO}{\scalebox{0.8}{Color octet}}
\psfrag{CEM}{\scalebox{0.8}{Color evaporation}}
\begin{figure}[t]
\hspace{-0.3cm}\includegraphics[scale=0.75]{sigma_10_10_8TeV_castor.eps}
\includegraphics[scale=0.75]{sigma_10_10_8TeV.eps}
\hspace*{0.1cm} {\small $0<y_V<2.5, \; -6.5<y_J<-5, \; p_\bot=10$ GeV} \hspace{0.8cm} {\small $0<y_V<2.5, \; -4.5<y_J<0, \; p_\bot=10$ GeV}
\vspace{0.4cm}
\hspace{-0.3cm}\includegraphics[scale=0.75]{sigma_20_20_8TeV.eps}
\includegraphics[scale=0.75]{sigma_30_30_8TeV.eps}
\hspace*{0.1cm} {\small $0<y_V<2.5, \; -4.5<y_J<0, \; p_\bot=20$ GeV} \hspace{0.8cm} {\small $0<y_V<2.5, \; -4.5<y_J<0, \; p_\bot=30$ GeV}
\caption{Cross section at $\sqrt{s}=8$ TeV as a function of the relative rapidity $Y$ between the $J/\psi$ and the jet, in four different kinematical configurations.}
\label{Fig:sigma-8}
\end{figure}
\begin{figure}[t]
\hspace{-0.3cm}\includegraphics[scale=0.75]{sigma_10_10_13TeV_castor.eps}
\includegraphics[scale=0.75]{sigma_10_10_13TeV.eps}
\hspace*{0.1cm} {\small $0<y_V<2.5, \; -6.5<y_J<-5, \; p_\bot=10$ GeV} \hspace{0.8cm} {\small $0<y_V<2.5, \; -4.5<y_J<0, \; p_\bot=10$ GeV}
\vspace{0.4cm}
\hspace{-0.3cm}\includegraphics[scale=0.75]{sigma_20_20_13TeV.eps}
\includegraphics[scale=0.75]{sigma_30_30_13TeV.eps}
\hspace*{0.1cm} {\small $0<y_V<2.5, \; -4.5<y_J<0, \; p_\bot=20$ GeV} \hspace{0.8cm} {\small $0<y_V<2.5, \; -4.5<y_J<0, \; p_\bot=30$ GeV}
\caption{Cross section at $\sqrt{s}=13$ TeV as a function of the relative rapidity $Y$ between the $J/\psi$ and the jet, in four different kinematical configurations.}
\label{Fig:sigma-13}
\end{figure}
In figures~\ref{Fig:sigma-8} and \ref{Fig:sigma-13}
we show the differential cross section $\frac{d\sigma}{d|p_{V \bot}|\, d|p_{J \bot}| \, dY}$ as a function of the rapidity separation $Y$ for the four kinematical cuts described above, for $\sqrt{s}=8$ TeV and $\sqrt{s}=13$ TeV respectively.
We observe that in NRQCD the color octet contribution dominates over the color singlet one, especially at high $p_\bot$. The color evaporation model leads to results similar to those of the color octet NRQCD contribution. Note, however, that the absolute normalization of the cross section in the color evaporation model is not very well determined.
As expected, the cross sections slightly increase when passing from
$\sqrt{s}=8$ TeV to $\sqrt{s}=13$ TeV, although this increase is much smaller than the uncertainties.
In figures~\ref{Fig:cos-8} and \ref{Fig:cos-13} we show, in the same kinematics, the variation of $\langle \cos \varphi \rangle$ as a function of $Y$, where $\varphi$ is defined as $\varphi=|\phi_V-\phi_J-\pi|$, for $\sqrt{s}=8$ TeV and $\sqrt{s}=13$ TeV respectively. A value of $\varphi=0$ therefore corresponds to a back-to-back configuration for the $J/\psi$ and the jet, and values of $\langle \cos \varphi \rangle$ close to unity correspond to a strong correlation. One can see from these figures that the values of $\langle \cos \varphi \rangle$ obtained with the three production mechanisms are compatible with each other, as well as with the results obtained when the $J/\psi$ vertex is replaced by the leading order jet vertex, shown for comparison.
We note that passing from $\sqrt{s}=8$ TeV to $\sqrt{s}=13$ TeV increases very slightly the decorrelation effects.
One should note that these results could be significantly altered when taking into account the NLO corrections to the $J/\psi$ production vertex, as is the case when passing from the LO to the NLO jet vertex, see refs.~\cite{Colferai:2010wu,Ducloue:2013hia}. The derivation of the NLO $J/\psi$ production vertex goes well beyond the scope of this work and is left for further studies.
\begin{figure}[t]
\hspace{-0.3cm}\includegraphics[scale=0.75]{cos_10_10_8TeV_castor.eps}
\includegraphics[scale=0.75]{cos_10_10_8TeV.eps}
\hspace*{0.1cm} {\small $0<y_V<2.5, \; -6.5<y_J<-5, \; p_\bot=10$ GeV} \hspace{0.8cm} {\small $0<y_V<2.5, \; -4.5<y_J<0, \; p_\bot=10$ GeV}
\vspace{0.1cm}
\hspace{-0.3cm}\includegraphics[scale=0.75]{cos_20_20_8TeV.eps}
\includegraphics[scale=0.75]{cos_30_30_8TeV.eps}
\hspace*{0.1cm} {\small $0<y_V<2.5, \; -4.5<y_J<0, \; p_\bot=20$ GeV} \hspace{0.8cm} {\small $0<y_V<2.5, \; -4.5<y_J<0, \; p_\bot=30$ GeV}
\caption{Variation of $\langle \cos \varphi \rangle$ at $\sqrt{s}=8$ TeV as a function of the relative rapidity $Y$ between the $J/\psi$ and the jet, for the four kinematical cuts described in the text. The grey band corresponds to the results obtained when the $J/\psi$ production vertex is replaced by the leading order jet production vertex.}
\label{Fig:cos-8}
\end{figure}
\begin{figure}[t]
\hspace{-0.3cm}\includegraphics[scale=0.75]{cos_10_10_13TeV_castor.eps}
\includegraphics[scale=0.75]{cos_10_10_13TeV.eps}
\hspace*{0.1cm} {\small $0<y_V<2.5, \; -6.5<y_J<-5, \; p_\bot=10$ GeV} \hspace{0.8cm} {\small $0<y_V<2.5, \; -4.5<y_J<0, \; p_\bot=10$ GeV}
\vspace{0.1cm}
\hspace{-0.3cm}\includegraphics[scale=0.75]{cos_20_20_13TeV.eps}
\includegraphics[scale=0.75]{cos_30_30_13TeV.eps}
\hspace*{0.1cm} {\small $0<y_V<2.5, \; -4.5<y_J<0, \; p_\bot=20$ GeV} \hspace{0.8cm} {\small $0<y_V<2.5, \; -4.5<y_J<0, \; p_\bot=30$ GeV}
\caption{Variation of $\langle \cos \varphi \rangle$ at $\sqrt{s}=13$ TeV as a function of the relative rapidity $Y$ between the $J/\psi$ and the jet, for the four kinematical cuts described in the text. The grey band corresponds to the results obtained when the $J/\psi$ production vertex is replaced by the leading order jet production vertex.}
\label{Fig:cos-13}
\end{figure}
\section{Conclusions}
In the present article, we have shown that the study of
the inclusive production of a forward $J/\psi$ and a very backward jet at the LHC leads to very promising cross sections, to be studied at either the ATLAS or CMS experiment. The possibility of tagging a high rapidity jet on one side, and a $J/\psi$ charmonium on the other side (although with a smaller absolute rapidity), can give access to BFKL resummation effects, since relative rapidities up to roughly 7 (and even 9 with CASTOR) lie in the kinematical range appropriate for such effects. We have computed the required matrix elements, in the NRQCD color singlet and color octet approaches, as well as in the color evaporation model. Our numerical results show that in the NRQCD approach, the color octet contribution dominates over the color singlet one, and the color evaporation model gives a prediction similar to the color octet NRQCD contribution. The study of the azimuthal correlations gives results which are very similar to the ones obtained in the Mueller-Navelet
case (using for consistency one of the two jet vertices at LO, since the $J/\psi$ vertex is itself treated at LO).
The next stage, in order to get full NLL BFKL predictions for this process, would require using the NLO expression for the charmonium production vertex, which has not yet been computed. This is left for future studies.
Finally, we did not include any double parton scattering contribution, which through two decorrelated BFKL ladders could lead to the same final state. In the case of Mueller-Navelet jets, some of us have shown that this contribution is rather small with respect to the single BFKL ladder contribution~\cite{Ducloue:2015jba}, except potentially for large $s$ and small jet transverse momenta. For the present process, it would thus be interesting to study this contribution in the CASTOR kinematics. This is left for future studies.
\acknowledgments
We thank Evgenii Baldin, Andrey Grabovsky, Jean-Philippe Lansberg and Hua-Sheng Shao for discussions.
The research by R.B. and L.Sz. was supported by the National Science Center, Poland, grant No. 2015/17/B/ST2/01838.
The research by B.D. was supported by the Academy of Finland, project 273464, and by the European Research Council, grant ERC-2015-CoG-681707.
This work is partially supported by
the French grant ANR PARTONS (Grant No. ANR-12-MONU-0008-01), by the COPIN-IN2P3 agreement,
by the Labex P2IO and by the Polish-French collaboration agreement
Polonium.
This work used computing resources from CSC -- IT Center for Science in Espoo, Finland.
\section{Introduction \vspace{.15cm}}
This paper concerns the semilinear elliptic equation
\begin{equation}\label{eq:PDE}
-\Delta u=f(u) \qquad \text{in }\Omega\subset \mathbb{R}^n,
\end{equation}
where $\Omega$ is a bounded domain, $f\in C^1(\mathbb{R})$, and $u:\overline\Omega \subset \mathbb{R}^n\to \mathbb{R}$.
Notice that \eqref{eq:PDE} is the first variation of the functional
\begin{equation}\label{energy}
\mathcal E[u]:=\int_{\Omega}\Bigl(\frac{|\nabla u|^2}2-F(u)\Bigr)\,dx,
\end{equation}
where $F$ is a primitive of $f$.
Consider the second variation of $\mathcal E$ at $u$, given by
$$
\frac{d^2}{d\varepsilon^2}\Big|_{\varepsilon=0}\mathcal E[u+\varepsilon\xi]
=\int_{\Omega}\Bigl(|\nabla \xi |^2-f'(u)\xi^2\Bigr)\,dx.
$$
We say that $u$ is a {\it stable} solution of equation \eqref{eq:PDE} in $\Omega$ if the second variation at $u$ is nonnegative, namely, if
\begin{equation}\label{stabilityLip}
\int_{\Omega} f'(u) \xi^2\,dx\leq \int_{\Omega} |\nabla \xi|^2\,dx \quad \mbox{ for all }
\xi\in C^1(\overline\Omega) \text{ with } \xi_{|_{\partial\Omega}}\equiv 0.
\end{equation}
Note that the stability of $u$ is defined within the class of functions $u+\varepsilon\xi$ agreeing with~$u$ on the boundary $\partial\Omega$. Stability is equivalent to assuming the nonnegativeness of the first
Dirichlet eigenvalue in~$\Omega$ for the linearized operator of \eqref{eq:PDE} at~$u$, namely $-\Delta-f'(u)$.
If a {\it local minimizer} of $\mathcal E$ exists (that is, a minimizer for small perturbations having same boundary values), then it will be a stable solution as well.
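As a toy numerical illustration of the stability condition \eqref{stabilityLip} (not taken from the text), one can solve the one-dimensional Gelfand problem $-u''=\lambda e^u$ on $(-1,1)$, $u(\pm 1)=0$, for $\lambda$ below the one-dimensional extremal value $\lambda^\star\approx 0.88$, and check that the smallest Dirichlet eigenvalue of the linearized operator $-d^2/dx^2-\lambda e^u$ is positive along the minimal branch:

```python
import numpy as np

# Toy model: -u'' = lam * e^u on (-1, 1) with u(-1) = u(1) = 0.
# Compute the minimal solution by Newton's method, then verify stability:
# the smallest Dirichlet eigenvalue of -d^2/dx^2 - lam*e^u must be positive.
n, lam = 400, 0.5                      # lam below the 1-d extremal value
h = 2.0 / (n + 1)
D2 = (np.diag(np.full(n, -2.0)) + np.diag(np.ones(n - 1), 1)
      + np.diag(np.ones(n - 1), -1)) / h**2    # Dirichlet Laplacian
u = np.zeros(n)
for _ in range(50):                    # Newton iteration for -D2 u = lam e^u
    F = -D2 @ u - lam * np.exp(u)
    J = -D2 - lam * np.diag(np.exp(u))
    u -= np.linalg.solve(J, F)
res = np.abs(-D2 @ u - lam * np.exp(u)).max()
eigmin = np.linalg.eigvalsh(-D2 - lam * np.diag(np.exp(u))).min()
assert res < 1e-8                      # the discrete equation is solved
assert eigmin > 0                      # the minimal branch is stable
```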
This paper concerns the regularity of stable solutions to \eqref{eq:PDE}, a question that has been investigated since the seventies. It was motivated by the Gelfand-type problems \eqref{eq:lambda} described below, for which $f(u)=e^u$ is a model case. Notice that for this nonlinearity, and more generally when $f$ satisfies \eqref{nonl-G} below, the energy functional \eqref{energy} is unbounded below ---even among functions with the same boundary values as $u$ on $\partial\Omega$.\footnote{\label{Foot1}To see this, consider $\mathcal E[u+tv]$ for any $v \in C^1_c(\Omega)$ with $v\geq0$ and $v\not\equiv 0$, and let $t\to +\infty$.} Thus, the functional admits no absolute minimizer. However, we will see that it often admits stable solutions. This class of solutions is the subject of our work.
Two other remarks may be helpful to settle the general framework and goals. First, note that once a solution is proven to be bounded, further regularity follows from classical linear elliptic theory, since the right-hand side $f(u)$ becomes bounded.\footnote{Hence, $f(u)\in L^p$ for all $p<\infty$ and one can first apply Calder\'on--Zygmund theory and then Schauder theory.} Second, recall that within the class of all solutions (not being necessarily stable), $L^\infty$ bounds are available, generally speaking, only for nonlinearities $f$ growing at most as the critical Sobolev power. The goal is to reach a larger class of nonlinearities when we restrict ourselves to stable solutions.
This problem is a PDE analogue of another fundamental question: the regularity of stable minimal hypersurfaces in $\mathbb{R}^n$. It is known that these surfaces may be singular for $n\geq 8$ (the Simons cone being an explicit example). On the other hand, their regularity has been established for $n=3$ in the late seventies (independently by Fischer-Colbrie and Schoen and by Do Carmo and Peng) and for $n=4$ in the very recent paper~\cite{CL} by Chodosh and Li. The question remains open for $5\le n\le 7$.
In our setting, as in minimal surfaces, there is an explicit singular stable solution (with finite energy) in large dimensions. Indeed, for $n\geq 3$,
\begin{equation*}
u=\log\frac{1}{|x|^2} \quad \text{solves \eqref{eq:PDE} weakly, with } f(u)=2(n-2)e^u \text{ and } \Omega=B_1.
\end{equation*}
Using Hardy's inequality, we see that the linearized operator $-\Delta-2(n-2)e^u=-\Delta-2(n-2)|x|^{-2}$ is nonnegative when $2(n-2)\leq (n-2)^2/4$, i.e., when $n\ge 10$. Thus, there exist singular $W^{1,2}(B_1)$ stable solutions to equations of the form \eqref{eq:PDE} whenever $n\geq 10$. From our results we will see that this does not occur for $n\leq 9$.
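Both claims about this singular solution can be checked symbolically, using the radial form of the Laplacian, $\Delta u = u''+\frac{n-1}{r}u'$ (an illustrative sketch):

```python
import sympy as sp

r, n = sp.symbols('r n', positive=True)
u = sp.log(1/r**2)                       # u = -2 log|x|, radial profile
# radial Laplacian: u'' + (n-1)/r * u'
lap = sp.diff(u, r, 2) + (n - 1)/r*sp.diff(u, r)
assert sp.simplify(-lap - 2*(n - 2)*sp.exp(u)) == 0   # -Δu = 2(n-2) e^u
# Hardy threshold: 2(n-2) <= (n-2)^2/4  holds exactly for n >= 10
assert min(m for m in range(3, 20) if 2*(m - 2) <= (m - 2)**2/4) == 10
```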
Stable solutions exist and play an important role in the so called Gelfand-type problems. The case $f(u)=e^u$ arose, in the sixties, from an ignition problem in combustion theory~\cite{Gelf} and motivated the subsequent literature on stable solutions.
Assume that $f:[0,+\infty)\to\mathbb{R}$ is $C^1$ and that
\begin{equation}\label{nonl-G}
f\text{ is nondecreasing, convex, and satisfies } f(0)>0 \text{ and }
\lim_{t\to+\infty}\frac{f(t)}{t}=+\infty.
\end{equation}
Given a constant $\lambda > 0$ consider the problem\footnote{Problem \eqref{eq:lambda} can be written equivalently in the form \eqref{BVP} used later. Indeed, assuming \eqref{nonl-G}, extend $f$ to $(-\infty,0)$ in any manner such that $f\in C^1(\mathbb{R})$ and $f\geq 0$ in $\mathbb{R}$. Then, by the maximum principle, solutions of the problem $-\Delta u=\lambda f(u)$ in $\Omega$, $u=0$ on $\partial\Omega$, coincide (for $\lambda >0$) with those of \eqref{eq:lambda}.}
\begin{equation}
\label{eq:lambda}
\left\{
\begin{array}{cl}
-\Delta u=\lambda f(u) & \text{in }\Omega\\
u>0 & \text{in }\Omega\\
u=0 & \text{on }\partial\Omega.\end{array}
\right.
\end{equation}
Note first that $-\Delta u=\lambda f(u)$ admits no trivial solution ($u\equiv 0$ is not a solution since $\lambda f(0)>0$). At the same time, the functional $\mathcal E$ is unbounded from below (see Footnote~\ref{Foot1}) and thus admits no absolute minimizer. However, \eqref{eq:lambda} admits a unique {\it stable} classical solution $u_\lambda$ for every $\lambda \in (0,\lambda^\star)$, where $\lambda^\star\in (0,+\infty)$ is a certain parameter; see e.g.\ the book \cite{Dup} by Dupaigne.\footnote{See \cite{C22} for a related situation concerning catenoids in minimal surfaces theory.} On the other hand, no weak solution exists for $\lambda>\lambda^\star$ (under various definitions of weak solution). The functions $u_\lambda$, which are bounded in $\overline\Omega$ for $\lambda<\lambda^\star$, increase as $\lambda\uparrow\lambda^\star$ towards an $L^1(\Omega)$ distributional {\it stable} solution $u^\star$ of \eqref{eq:lambda} for $\lambda=\lambda^\star$. It is called {\it the extremal solution} of the problem and, depending on the nonlinearity~$f$ and the domain~$\Omega$, it may be either bounded or singular.
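For orientation (this classical example is not discussed in the text): for $f(u)=e^u$ in the unit ball of $\mathbb{R}^2$, problem \eqref{eq:lambda} admits the explicit radial family $u_\delta=2\log\frac{1+\delta}{1+\delta r^2}$ with $\lambda(\delta)=8\delta/(1+\delta)^2$, whose maximum over $\delta>0$ gives $\lambda^\star=2$. A symbolic verification:

```python
import sympy as sp

r, d = sp.symbols('r delta', positive=True)
u = 2*sp.log((1 + d)/(1 + d*r**2))        # radial profile, u(1) = 0
lam = 8*d/(1 + d)**2
lap = sp.diff(u, r, 2) + sp.diff(u, r)/r  # 2-d radial Laplacian
assert sp.simplify(-lap - lam*sp.exp(u)) == 0   # solves -Δu = λ e^u
# λ(δ) is maximized at δ = 1, giving λ* = 2
assert sp.solve(sp.diff(lam, d), d) == [1]
assert lam.subs(d, 1) == 2
```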
In the seventies, the seminal paper of Crandall and Rabinowitz~\cite{CR} established the boundedness of $u^\star$ when $n \leq 9$ and $f$ is either the exponential $f(u)=e^u$ or a power $f(u)=(1+u)^p$ with $p>1$. In the mid-nineties Brezis asked for a similar result in the larger class of nonlinearities \eqref{nonl-G} (see, e.g., \cite{Brezis}), a question that gave rise to the following works, among others, cited in chronological order.
Nedev~\cite{Ned00} proved the regularity of $u^\star$ for $n\leq 3$. In the radial case,
Cabr\'e and Capella~\cite{CC} established the boundedness
of every $W^{1,2}_0(B_1)$ stable radial weak solution to \eqref{eq:PDE} in the unit ball
whenever $n\leq 9$, for every nonlinearity $f$.
Back to the general nonradial case, an interior $L^\infty$ bound for stable solutions holding for all nonlinearities was established for $n\leq 4$ by Cabr\'e~\cite{C10} in 2010. The author gave a different proof of this result more recently, in~\cite{C19}. Finally, the optimal dimension $n\leq 9$ has been reached in 2020, by Cabr\'e, Figalli, Ros-Oton, and Serra~\cite{CFRS}, assuming $f\ge 0$ for interior regularity, and $f\ge 0$, $f'\ge 0$, and $f''\ge 0$ for boundary regularity.
The monograph~\cite{Dup} contains detailed presentations of these developments, except for those of \cite{CFRS}, which are more recent. The survey \cite{C17}, which is a shorter presentation of the topic, contains also most of the proofs prior to \cite{CFRS}.
Some results of \cite{CFRS} were established through contradiction-compactness \linebreak proofs. The purpose of this article is to give quantitative proofs of all of them.
\medskip
\subsection{New results}\label{subsec-results}
\
\smallskip
The following is the interior regularity result of \cite{CFRS}. It provides a universal interior H\"older a priori bound for stable solutions when $n\leq 9$. It also establishes, in every dimension, an interior $W^{1,2}$ estimate (indeed a better $W^{1,2+\gamma}$ bound). All quantities are controlled in terms of only the $L^1$ norm of the solution. Regarding the nonlinearity, all what we need is $f\geq 0$.
Here, and throughout the paper, by ``dimensional constant'' we mean a constant that depends only on $n$.
\begin{zero}
\label{thm:0}\hspace{-1mm}(\cite{CFRS}) {\it
Let $u\in C^\infty(\overline B_1)$ be a stable solution of $-\Delta u=f(u)$ in $B_1\subset \mathbb{R}^n$, for some nonnegative function $f\in C^{1}(\mathbb{R})$.
Then,
\begin{equation}
\label{eq:W12 L1 int}
\|\nabla u\|_{L^{2+\gamma}({B}_{1/2})} \le C\|u\|_{L^1(B_1)}
\end{equation}
for some dimensional constants $\gamma>0$ and $C$.
In addition,
\begin{equation}
\label{eq:Ca L1 int}
\|u\| _{C^\alpha(\overline{B}_{1/2})}\leq C\|u\|_{L^1(B_1)} \quad \text{ if } n \leq 9,
\end{equation}
where $\alpha>0$ and $C$ are dimensional constants.
}\end{zero}
The theorem is stated as an a priori bound for smooth stable solutions. From it one may deduce, by approximation results,\footnote{Here one needs to approximate $f$ by smooth nonlinearities, since we assume $u\in C^\infty$. This is not necessary in \cite{CFRS}, which only assumes $u\in C^2$.} interior regularity for $W^{1,2}$ stable weak solutions to \eqref{eq:PDE} when $f$ is, in addition, nondecreasing and convex; see Proposition~4.2 and Corollary~4.3 in \cite{CFRS}. Together with our boundary estimates, the theorem will also lead to regularity results up to the boundary for the extremal solution~$u^\star$ to~\eqref{eq:lambda}, as commented below.
For $n\geq 10$ (and still $f\geq 0$), \cite{CFRS} proved that stable solutions belong, in the interior of the domain, to certain Morrey spaces. This was an almost optimal result that has been improved by Peng, Zhang, and Zhou \cite{PZZ1}, reaching optimality. For instance, for $n=10$, \cite{PZZ1} shows that stable solutions are BMO in the interior, a result also found, independently, by Figalli and Mayboroda (personal communication).
Also after \cite{CFRS}, another interesting result by the same authors Peng, Zhang, and Zhou~\cite{PZZ2} proves that, for $n=5$, the hypothesis $f\geq 0$ is not needed to conclude interior H\"older regularity. The H\"older control, however, is given by the $W^{1,2}$ norm of the solution instead of the $L^1$ norm. This establishes in dimension 5 previous analogue results of the author \cite{C10,C19} for $n\leq 4$. More details on this are given below in Subsection~\ref{subsec-open-related}, Open problem 1.3.1.
\begin{remark}\label{fbddbelow}
Our quantitative proof of Theorem~\ref{thm:0} allows for an easy extension of the result to the case $f\geq -K$ for some positive constant $K$. The resulting estimates in this case are those of Theorem~\ref{thm:0} with the right-hand sides replaced by $C(\|u\|_{L^1(B_1)}+K)$, where $C$ is still a dimensional constant. The necessary changes in the proofs will be explained at the end of Section~\ref{sect:higher}.
We point out that it was not clear how to obtain such a result for $f\geq -K$ by extending the proof in \cite{CFRS}, due to the compactness argument used in that paper. This has been accomplished recently by Fa Peng \cite{P}.
\end{remark}
The proof of Theorem \ref{thm:0} is divided into three steps, as described in next subsection. The last step consists of controlling some norm of $u$ by its radial derivative $u_r$ in an annulus. Here and
throughout the paper we will use the notation
\begin{equation}\label{notation r}
r=|x| \qquad\text{and}\qquad u_r=\frac{x}{|x|}\cdot \nabla u.
\end{equation}
In \cite{CFRS} this step of the proof controlled $|\nabla u|$ in $L^2$ by $u_r$ in $L^2$, under a {\it doubling assumption} on $|\nabla u|^2$. More precisely, \cite[Lemma 3.1]{CFRS} proved that
\begin{equation}\label{step3CFRS}
\Vert \nabla u\Vert^2_{L^{2}({B_{3/2})}}\leq C_\delta \Vert u_{r}\Vert^2_{L^{2}({B_{3/2}\setminus B_{1})}} \quad\text{ whenever }
\Vert \nabla u\Vert^2_{L^{2}({B_{1})}}\geq \delta \Vert \nabla u\Vert^2_{L^{2}({B_{2})}}
\end{equation}
and $u$ is a stable solution in $B_2$. This was accomplished through a contradiction-compactness argument. Compactness in the $W^{1,2}$ norm followed from the higher integrability bound \eqref{eq:W12 L1 int} (obtained in $B_{3/2}$ instead of $B_{1/2}$). The contradiction came from analyzing the validity of \eqref{step3CFRS} in the limiting case $u_r\equiv 0$. This is simple, since if $u_r\equiv 0$, then $u$ is $0$-homogeneous and, thus, $u$ is a superharmonic function for the Laplace-Beltrami operator on the unit sphere (recall that $f\geq 0$). This forces $u$ to be constant, and hence $|\nabla u|\equiv 0$.
In this paper we give a quantitative proof of this third step ---which is the only one that used a contradiction-compactness argument in \cite{CFRS}. The new proof does not rely on a compactness argument and, as a consequence, it allows one to quantify the H\"older exponent $\alpha$ of Theorem~\ref{thm:0}. At the same time, having two different proofs of the regularity result is of interest for possible future extensions to other frameworks. For instance, our quantitative proof can be easily adapted to the case when $f$ is bounded below (instead of being nonnegative); see Remark~\ref{fbddbelow} above. It can also be extended to operators with variable coefficients, as done by Erneta in \cite{Ern1}. However, the main applications of our quantitative approach, commented below, concern the boundary regularity treatment and its extensions to other equations.
Our interior quantitative proof is based on the following new estimate. We replace \eqref{step3CFRS} by the new bound \eqref{new-int-ur} in annuli, which still suffices to complete the proof of Theorem~\ref{thm:0}. Instead, we will not use estimate \eqref{new-int-ur-ball} in balls, which follows easily from \eqref{new-int-ur} and is stated here only for completeness.
\begin{three}\label{thm:3}{\it
Let $u\in C^\infty (\overline B_1)$ be superharmonic in $B_1\subset \mathbb{R}^n$.
Then, there exists a constant $t$, which depends on $u$, such that
\begin{equation}\label{new-int-ur}
\Vert u-t\Vert_{L^{1}({B_{1}\setminus B_{1/2})}}\leq C\Vert u_{r}\Vert_{L^{1}({B_{1}\setminus B_{1/2})}}
\end{equation}
and
\begin{equation}\label{new-int-ur-ball}
\Vert u-t\Vert_{L^{1}({B_{1})}}\leq C\Vert u_{r}\Vert_{L^{1}({B_{1})}}
\end{equation}
for some dimensional constant $C$.
}\end{three}
With respect to \eqref{step3CFRS}, we have replaced $L^2$ norms by $L^1$ norms, as well as $|\nabla u|$ by $u-t$ for some constant $t$.
Notice also that the new estimate \eqref{new-int-ur} does not require a doubling assumption, unlike \eqref{step3CFRS}. In addition, it holds for all superharmonic functions.\footnote{Still, we could not find it in previous literature.} This is in contrast with the much more delicate case of boundary regularity, treated below, where the analogue estimate will crucially require $u$ to be a stable solution of a semilinear equation for a certain class of nonlinearities.
At least for $n\geq3$, Theorem \ref{thm:3} also holds when replacing \eqref{new-int-ur}, or \eqref{new-int-ur-ball}, by the stronger bound
$$
\| u-t\|_{L^1(B_{1})} \le C \|u_r\|_{L^1({B_{1}\setminus B_{1/2})}} \quad\text{ (if $n\ge 3$)}.
$$
This can be shown proceeding as in Remark~\ref{rk:bdry-ngeq3}, where the details are done for the boundary case.
The proof of Theorem \ref{thm:3} uses a harmonic replacement together with the maximum principle. This allows us to reduce the problem to the case when $u$ is harmonic ---here it will be crucial that we work with $L^1$ instead of $L^2$ norms. But when $\Delta u=0$, we confront a Neumann problem for the homogeneous Laplace equation (since $u_r$, restricted to any sphere, may be thought of as the Neumann data), for which the estimate of the theorem is not surprising and, in fact, easy to prove.
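To get a feeling for why such an estimate is natural in the harmonic case, consider the following heuristic $L^2$ computation, given here only for illustration (the actual proof, in Section~\ref{sect:L1byRad}, works in $L^1$ and for superharmonic functions). Expand a harmonic function $u$ in $B_1$ as $u=\sum_{k\geq 0} c_k r^k Y_k(\theta)$, where the $Y_k$ are spherical harmonics of degree $k$, orthonormal in $L^2(\mathbb{S}^{n-1})$ (degrees may repeat in the sum). Then $ru_r=\sum_{k\geq 0} k\,c_k r^k Y_k$ and, choosing $t:=c_0Y_0$ (a constant), orthogonality on every sphere yields
$$
\| u-t\|^2_{L^2(B_1\setminus B_{1/2})}=\sum_{k\geq 1} c_k^2\int_{1/2}^1 r^{2k+n-1}\,dr
\leq \sum_{k\geq 1} k^2 c_k^2\int_{1/2}^1 r^{2k+n-1}\,dr
=\| ru_r\|^2_{L^2(B_1\setminus B_{1/2})},
$$
since $k\geq 1$ in all remaining modes; and $\| ru_r\|_{L^2}\leq \| u_r\|_{L^2}$ because $r\leq 1$.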
Let us now turn to boundary regularity.
It is well known that the interior bound of Theorem~\ref{thm:0} and the moving planes method yield an $L^\infty$ bound for $u$ up to the boundary when the domain~$\Omega$ is convex; see \cite{C10} and Corollary~1.4 of \cite{CFRS}. Since this procedure does not work for nonconvex domains, \cite{CFRS} undertook the study of boundary regularity in general domains and proved the following result.
Let $\Omega\subset \mathbb{R}^n$ be a bounded domain of class $C^3$ and $u\in C^0(\overline\Omega) \cap C^2(\Omega)$ be a stable solution of
\begin{equation}\label{BVP}
\left\{
\begin{array}{cl}
-\Delta u=f(u) & \text{in }\Omega\\
u=0 & \text{on }\partial\Omega.
\end{array}
\right.
\end{equation}
Assuming that $f: \mathbb{R} \to \mathbb{R}$ is nonnegative, nondecreasing, and convex, \cite{CFRS} established that
\begin{equation*}
\|\nabla u\|_{L^{2+\gamma}(\Omega)}\le C_\Omega \,\|u\|_{L^1(\Omega)}
\end{equation*}
and that
\begin{equation*}
\text{ if $n\leq 9$, then } \|u\| _{C^\alpha(\overline\Omega)} \leq C_\Omega\, \|u\|_{L^1(\Omega)},
\end{equation*}
where $\gamma>0$ and $\alpha>0$ are dimensional constants, while $C_\Omega$ depends only on $\Omega$.
From these two a priori estimates, one deduces the corresponding regularity results for the extremal solution $u^\star$ to \eqref{eq:lambda}. This is accomplished by using the above estimates for the family of {\em bounded} stable solutions $u_\lambda$, where $\lambda<\lambda^\star$.
To prove the estimates, one first covers $\partial\Omega$ by small enough balls so that $\partial\Omega$ is almost flat (in the sense of \cite{CFRS}) inside each of the balls. Thus, it is sufficient to prove a local-boundary result\footnote{See the statement of Theorem \ref{thm:0bdry} to understand the expression ``local-boundary result''. It concerns a bound in a half-ball smaller than the half-ball where the equation and part of the boundary data are assumed.\label{foot5}} for almost flat boundaries.
To simplify the exposition, in the current paper we place ourselves in the simpler setting of a flat boundary. A forthcoming work by Erneta~\cite{Ern2} will extend our quantitative proof to almost flat boundaries. This will make it possible, in~\cite{Ern2}, to avoid the proof of Theorem 6.1 in \cite{CFRS}, which is a delicate blow-up and compactness argument used together with a Liouville theorem in the half-space. It was needed there since the contradiction-compactness argument (Step~2 of the proof of Proposition~6.3 in \cite{CFRS}) required the boundary to be flat.
We introduce the notation
$$
\mathbb{R}^n_+=\{x\in\mathbb{R}^n\, :\, x_n>0\}, \quad B_\rho^+=\mathbb{R}^n_+\cap B_\rho, \quad\text{and}\quad \partial^0 B^+_\rho = \{x_n=0\}\cap \partial B^+_\rho,
$$
and state the result of \cite{CFRS} in the case of flat boundaries.
\begin{zerobdry}
\label{thm:0bdry} \hspace{-1mm}(\cite{CFRS}) {\it
Let $u\in C^\infty(\overline{B^+_1})$ be a nonnegative stable solution of
\linebreak
$-\Delta u=f(u)$ in $B^+_1\subset\mathbb{R}^n$, with $u=0$ on $\partial^0 B^+_1$.
Assume that $f\in C^1(\mathbb{R})$ is nonnegative, nondecreasing, and convex.
Then,
\begin{equation}
\label{higher-bdry}
\|\nabla u\|_{L^{2+\gamma}(B^+_{1/2})} \le C \|u\|_{L^{1}(B^+_{1})}
\end{equation}
for some dimensional constants $\gamma>0$ and $C$.
In addition,
\begin{equation}
\label{holder-bdry}
\|u\| _{C^\alpha (\overline{B^+_{1/2}})}\leq C\|u\|_{L^1(B^+_1)} \quad \text{ if } n \leq 9,
\end{equation}
where $\alpha>0$ and $C$ are dimensional constants.
}\end{zerobdry}
It is important here to assume that the solution $u$ is nonnegative ---an assumption that will hold when applying the theorem to the global problem \eqref{BVP}, since $f\geq0$. Note also that the result requires more assumptions on the nonlinearity than the interior theorem.
As in the interior case, the proof of Theorem \ref{thm:0bdry} consists of three steps. The last one, on the control of the solution $u$ by its radial derivative $u_r$, is where \cite{CFRS} used a contradiction-compactness argument (Step~2 of the proof of Proposition 6.3 in that paper). Carrying it out required establishing a rather delicate, but powerful, closedness result for stable solutions to semilinear equations with different (varying) nonlinearities. This is Theorem~4.1 of \cite{CFRS}, further commented on below, which will not be needed in our quantitative proof. As a consequence, it will be simpler to implement our proof in frameworks other than that of \cite{CFRS}. This is the case of stable solutions for operators with variable coefficients, as treated in~\cite{Ern2}.
Our quantitative proof of boundary regularity will use estimate \eqref{introbdryradial} below, which is the main result of this paper.
\begin{six}\label{thm:3bdry}{\it
Let $u\in C^\infty(\overline{B_{1}^+})$ be a nonnegative stable solution of $-\Delta u=f(u)$ in $B_{1}^+\subset\mathbb{R}^n$, with $u=0$ on $\partial^0 B_{1}^+$. Assume that $f\in C^1(\mathbb{R})$ is nonnegative, nondecreasing, and convex.
Then,
\begin{equation}\label{introbdryradial}
\| u\|_{L^1(B_{1}^+\setminus B_{1/2}^+)} \le C \|u_r\|_{L^1(B_{1}^+\setminus B_{1/2}^+)}
\end{equation}
and
\begin{equation}\label{introbdryradial-ball}
\| u\|_{L^1(B_{1}^+)} \le C \|u_r\|_{L^1(B_{1}^+)}
\end{equation}
for some dimensional constant $C$.
}
\end{six}
Several comments are in order.
We will show in Remark~\ref{rk:not-superh} that, unlike the interior case of Theorem~\ref{thm:3}, estimate \eqref{introbdryradial} does not hold in the larger class of nonnegative superharmonic functions in~$B_1^+$ which are smooth up to the boundary of $B_1^+$ and vanish on $\partial^0 B^+_1$.
In Remark~\ref{rk:bdry-ngeq3} we will prove that, at least for $n\geq3$, Theorem \ref{thm:3bdry} also holds when replacing \eqref{introbdryradial}, or \eqref{introbdryradial-ball}, by the stronger bound
$$
\| u\|_{L^1(B_{1}^+)} \le C \|u_r\|_{L^1(B_{1}^+\setminus B_{1/2}^+)} \quad\text{ (if $n\ge 3$)}.
$$
The proof of Theorem~\ref{thm:3bdry} is rather delicate and required new ideas, described below. It uses not only the semilinear equation for $u$, but also the stability condition. Furthermore, it requires the nonlinearity to satisfy $f\ge 0$, $f'\ge 0$, and $f''\ge 0$. Note that these are exactly the same assumptions needed in the contradiction-compactness proof in \cite{CFRS}. This fact came partly as a surprise, since our quantitative proof and the one of \cite{CFRS} are based on very different ingredients, as described next.
As we did in the interior case, it is interesting to first look at the extreme case of Theorem~\ref{thm:3bdry} in which $u_r\equiv 0$ ---this can be seen as a first validity test for \eqref{introbdryradial}. In fact, this case also had to be analyzed in the contradiction-compactness proof of \cite{CFRS} ---which does not prove \eqref{introbdryradial} but a boundary version of \eqref{step3CFRS}. In contrast with the interior case, ``bad news'' arises here. Indeed, there exist nonzero nonnegative superharmonic functions $u$ in $B_1^+$ which vanish on $\partial^0 B_{1}^+$ and are
\linebreak
0-homogeneous (i.e., for which $u_r\equiv 0$) ---obviously, they are not smooth at the origin as in our statement of Theorem~\ref{thm:3bdry}, but belong to $W^{1,2}(B_{1}^+)$ when $n\ge 3$. An example is given by the function
$$
u(x)= \frac{x_n}{|x|}.
$$
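One can verify the stated properties directly. This function is clearly 0-homogeneous, nonnegative in $B_1^+$, and vanishes on $\{x_n=0\}\setminus\{0\}$, while, using $\Delta (r^{-1})=(3-n)r^{-3}$ and $\partial_n (r^{-1})=-x_n r^{-3}$,
$$
\Delta u = x_n\,\Delta (r^{-1}) + 2\,\partial_n (r^{-1}) = (1-n)\,x_n r^{-3}\leq 0 \quad\text{in } B_1^+ .
$$
Moreover, $|\nabla u|\leq C/r$, and hence $\int_{B_1^+}|\nabla u|^2\,dx\leq C\int_0^1 r^{-2}r^{n-1}\,dr<\infty$ precisely when $n\geq 3$.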
This is the key reason behind the fact that Theorem~\ref{thm:3bdry} does not hold in the class of superharmonic functions which are smooth up to the flat boundary, as we will show in Remark~\ref{rk:not-superh}.
As a consequence, the contradiction in \cite{CFRS} could not come from the limiting function being superharmonic ---as in the interior case---, but from a stronger property. Namely, Theorem~4.1 of \cite{CFRS} established that the class of $W^{1,2}$ functions~$u$ for which there exists a nonnegative, nondecreasing, and convex nonlinearity~$f$ (with $f$ blowing-up, perhaps, at the value $\sup u$)
such that $u$ is a stable solution of $-\Delta u=f(u)$, is closed under $L^1$-convergence on compact sets of the functions~$u$. Note that the class does not correspond to stable solutions of only one equation, but of all equations with such nonlinearities. This is a very strong result, of interest by itself, which will not be needed in the current paper. \cite{CFRS} used it to find that, in the extreme case discussed above, the limiting function $u$ would be not only superharmonic, but also a solution of an equation $-\Delta u=g(u)$ for some nonlinearity $g$ with the properties listed above. This gave a contradiction, since then $g(u)$ would be $0$-homogeneous ($u_r$ being identically 0), while $-\Delta u$ would be $-2$-homogeneous.
Instead, the proof of Theorem \ref{thm:3bdry} originates from the identity
\begin{equation}\label{origin:bdry}
-2 \Delta u + \Delta (x\cdot\nabla u) = -f'(u) \, x\cdot\nabla u,
\end{equation}
which is easily checked by computing $\Delta (x\cdot\nabla u)$.\footnote{This is the same identity as \eqref{basic lin} in the interior proof. It is the crucial expression leading to the requirement $n\leq 9$ for interior regularity.}
After multiplying \eqref{origin:bdry} by an appropriate test function and integrating by parts in a half-annulus, it is easily seen that the left-hand side bounds the $L^1$ norm of $u$ from below, modulo an error which is admissible for \eqref{introbdryradial}: the $L^1$ norm of $x\cdot \nabla u=ru_r$. The main difficulty is how to control the right-hand side of \eqref{origin:bdry} from above, in terms of $u_r$ only. This is delicate, due to the presence of $f'(u)$. We will explain the details in Section~\ref{sect:boundary-L1radial}. The proof involves introducing the functions $u_\lambda(x):=u(\lambda x)$, interpreting the radial derivative $x\cdot \nabla u (\lambda x)$ as $\frac{d}{d\lambda} u_\lambda (x)$, considering a ``$u_\lambda$-version'' of \eqref{origin:bdry}, and averaging such a version in $\lambda$ to reduce the number of derivatives falling on the solution $u$ throughout the proof.
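To fix ideas on the role of the functions $u_\lambda$, note that
$$
\frac{d}{d\lambda}\, u_\lambda(x) = x\cdot\nabla u(\lambda x) = \frac{1}{\lambda}\,(r u_r)(\lambda x),
$$
so that, for instance,
$$
u(x)-u(x/2)=\int_{1/2}^{1}\frac{(r u_r)(\lambda x)}{\lambda}\,d\lambda .
$$
In this way, averages in $\lambda$ convert information on the radial derivative $u_r$ into information on differences of $u$ itself, with no extra derivatives falling on $u$.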
\medskip
\subsection{Structure of the proofs of Theorems \ref{thm:0} and \ref{thm:0bdry}}\label{subsec-proofs}
\
\smallskip
We first note that, by approximation, the stability inequality \eqref{stabilityLip},
$$
\int_{\Omega} f'(u) \xi^2\,dx\leq \int_{\Omega} |\nabla \xi|^2\,dx,
$$
holds not only for $C^1(\overline\Omega)$ functions $\xi$ vanishing on $\partial\Omega$, but also for all Lipschitz functions $\xi$ in $\overline\Omega$ which vanish on $\partial\Omega$. Hence, we can consider a test function of the form $\xi=\mathbf{c}\eta$,
where $\mathbf{c}\in W^{2,\infty}(\Omega)$, $\eta$ is a Lipschitz function in $\overline\Omega$, and $\mathbf{c}\eta$ vanishes on~$\partial\Omega$.
Then, a simple integration by parts argument gives that
\begin{equation}\label{eq:07}
\int_{\Omega} \bigl( \Delta \mathbf{c}+f'(u)\mathbf{c}\bigr) \mathbf{c}\,\eta^{2}\, dx \leq
\int_{\Omega} \mathbf{c}^{2}|\nabla \eta|^{2} \, dx.
\end{equation}
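For the reader's convenience, here is the computation leading to \eqref{eq:07}. Using $\xi=\mathbf{c}\eta$ in \eqref{stabilityLip} and expanding $|\nabla(\mathbf{c}\eta)|^2$, we get
$$
\int_\Omega f'(u)\,\mathbf{c}^2\eta^2\,dx\leq \int_{\Omega} \big( \eta^{2}|\nabla \mathbf{c}|^{2}+\mathbf{c}\,\nabla \mathbf{c}\cdot\nabla (\eta^{2})+\mathbf{c}^{2}|\nabla \eta|^{2}\big)\, dx .
$$
Since $\eta^{2}|\nabla \mathbf{c}|^{2}+\mathbf{c}\,\nabla \mathbf{c}\cdot\nabla(\eta^{2})=\nabla \mathbf{c}\cdot\nabla (\mathbf{c}\eta^{2})$ and $\mathbf{c}\eta^{2}=(\mathbf{c}\eta)\eta$ vanishes on $\partial\Omega$, an integration by parts turns the first two terms on the right-hand side into $-\int_{\Omega}\Delta \mathbf{c}\; \mathbf{c}\,\eta^{2}\,dx$, and \eqref{eq:07} follows by rearranging.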
This expression brings the linearized operator $\Delta+f'(u)$, acting on $\mathbf{c}$, into play. The rest of the paper will make three different crucial choices for the function $\mathbf{c}$, all of them displayed later in this subsection.
The proof of H\"older regularity is based on putting together three estimates. In the interior case, the first one, and where the restriction $n\leq 9$ appears, is the following. Here $f\geq 0$ is not needed.
\begin{one}\label{thm:1} \hspace{-1mm}(\cite{CFRS}) {\it
Let $u\in C^\infty (\overline B_1)$ be a stable solution of $-\Delta u=f(u)$ in $B_1\subset \mathbb{R}^n$, for some function $f\in C^{1}(\mathbb{R})$.
If $3 \leq n\leq 9$, then
\begin{equation}\label{ineq1}
\int_{B_\rho}r^{2-n} u_{r}^2\,dx \leq C\rho^{2-n}\int_{B_{3\rho/2}\setminus B_\rho}|\nabla u|^2\,dx
\end{equation}
for all $\rho< 2/3$, where $C$ is a dimensional constant.
}\end{one}
This result will be proven using \eqref{eq:07} with
\begin{equation}\label{test1}
\mathbf{c}= x\cdot\nabla u \quad\text{ and } \quad \eta=r^{(2-n)/2}\zeta,
\end{equation}
where $\zeta$ is a cut-off function. Recall that $r=|x|$.
Note that the theorem requires $n\geq 3$. However, by adding superfluous independent variables to the solution, as in \cite{CFRS}, one can use it to establish Theorem~\ref{thm:0} also in dimensions one and two.
Notice that the left-hand side of estimate \eqref{ineq1} is scale invariant. That is, it is a dimensionless quantity, as is the $L^\infty$ norm of $u$ in $B_\rho$. As we will see, if the right-hand side of estimate \eqref{ineq1} had $u_r^2$ as integrand, instead of $|\nabla u|^2$, then H\"older regularity of $u$ would be a simple consequence of it. Instead, in Appendix~\ref{app:example} we will show that in its actual form \eqref{ineq1}, Theorem~\ref{thm:1} (even when used with radial derivatives taken with respect to any point, and not only the origin) is not enough to deduce an interior $L^\infty$ bound for~$u$. This is true even assuming that $u$ is an (unstable)\footnote{Instead, the $L^\infty$ bound holds among stable solutions, by Theorem~\ref{thm:0}.} solution to \eqref{eq:PDE} for some nonnegative nonlinearity $f$. We emphasize this fact since we (and some other experts) initially thought that the estimate of Theorem~\ref{thm:1}, used with respect to any base point, could suffice. Therefore, even with Theorem~\ref{thm:1} at hand, to get H\"older regularity one needs to exploit again the stability of the solution~$u$.
The second and third ingredients will allow us to replace $|\nabla u|^2$ by $u_r^2$ as integrand in the right-hand side of \eqref{ineq1}. The second one, already contained in \cite{CFRS}, uses the choice
\begin{equation}\label{test2}
\mathbf{c} = |\nabla u|
\end{equation}
in \eqref{eq:07}. It was first introduced by Sternberg and Zumbrun \cite{SZ} while studying the Allen-Cahn equation. It will lead, in every dimension, to certain $L^2$ and $L^1$ second derivative estimates. From them, we will deduce the $W^{1,2}$ estimate of the following theorem, which holds in all dimensions.
Before stating it, let us mention that the two key test functions \eqref{test1} and \eqref{test2} used in the stability inequality correspond, respectively, to the following perturbations of the solution $u$ (one in the radial direction, the other in the normal direction to the level sets):
$$
u\left( x+ \varepsilon r^{(2-n)/2}\zeta (x) x \right)
\quad\text{ and }\quad
u\Big( x+ \varepsilon \eta (x)\frac{\nabla u (x)}{|\nabla u(x)|} \Big),
$$
where $\zeta$ and $\eta$ are cut-off functions. Indeed, differentiating these expressions with respect to $\varepsilon$ at $\varepsilon =0$, we obtain the above test functions $\xi=\mathbf{c} \eta$.
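Explicitly, for the first perturbation, writing $\psi(x):=r^{(2-n)/2}\zeta(x)$, the chain rule gives
$$
\frac{d}{d\varepsilon}\Big|_{\varepsilon=0} u\big( x+ \varepsilon \psi(x)\, x \big)=\psi(x)\, x\cdot\nabla u(x)=(x\cdot\nabla u)\, r^{(2-n)/2}\zeta,
$$
while for the second one
$$
\frac{d}{d\varepsilon}\Big|_{\varepsilon=0} u\Big( x+ \varepsilon \eta(x)\frac{\nabla u(x)}{|\nabla u(x)|} \Big)=\eta(x)\,\frac{\nabla u(x)\cdot\nabla u(x)}{|\nabla u(x)|}=|\nabla u|\,\eta .
$$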
\begin{two}\label{thm:2} \hspace{-1mm}(\cite{CFRS}) {\it
Let $u\in C^\infty (\overline B_1)$ be a stable solution of $-\Delta u=f(u)$ in $B_1\subset \mathbb{R}^n$, for some nonnegative function $f\in C^{1}(\mathbb{R})$.
Then,
\begin{equation}\label{ineq2}
\|\nabla u\|_{L^{2}(B_{1/2})} \le C \|u\|_{L^{1}(B_{1})}
\end{equation}
for some dimensional constant $C$.
}\end{two}
This $W^{1,2}$ regularity result solves~\cite[Open problem 1]{BV}, a question posed by Brezis and V\'azquez. Previously, it had been proven, under the stronger hypotheses \eqref{nonl-G} on the nonlinearity, for $n\leq 5$ by Nedev~\cite{Ned00} and for $n=6$ by Villegas~\cite{Vil13}.
Theorem \ref{thm:2} provides the second step towards the H\"older regularity proof. Indeed, one starts from estimate \eqref{ineq1} of Theorem~\ref{thm:1} with, for instance, $\rho=5/8$. Now, one covers the annulus on the right-hand side of \eqref{ineq1} by a universal number of balls with an appropriate universally small radius. Next, in each of these balls one uses \eqref{ineq2}, properly rescaled, with $u$ replaced by the stable solution $u-t$ ---a solution of $-\Delta v= f(v+t)$---, where $t$ is a constant to be chosen later. Adding the right-hand sides of \eqref{ineq2} in all the balls, one concludes that the integral of $r^{2-n} u_{r}^2$ in $B_{1/2}$ can be controlled by the $L^1$ norm of $u-t$ in the annulus $B_1\setminus B_{1/2}$. Finally, one uses the third ingredient, estimate \eqref{new-int-ur} of Theorem~\ref{thm:3}.
In this way, one concludes that the integral of $r^{2-n} u_{r}^2$ in $B_{1/2}$ is controlled by its integral on the annulus $B_1\setminus B_{1/2}$. This estimate, used at all scales and iterated ---this is sometimes called ``the hole filling technique'', referring to the hole of the annulus--- will easily lead to the H\"older regularity of $u$.
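Schematically, the hole filling step works as follows (this is only a sketch of the scheme; the precise argument is carried out when proving Theorem~\ref{thm:0}). Setting $D(\rho):=\int_{B_\rho}r^{2-n}u_r^2\,dx$, which is a scale invariant quantity, the previous step yields, at every scale, a bound of the form
$$
D(\rho/2)\leq C\,\big(D(\rho)-D(\rho/2)\big).
$$
Adding $C\,D(\rho/2)$ to both sides ``fills the hole'' and gives $D(\rho/2)\leq \theta\, D(\rho)$ with $\theta:=C/(C+1)<1$. Iterating along dyadic scales, $D(2^{-k})\leq \theta^{k}\,D(1)$, that is, $D(\rho)\leq C'\rho^{2\alpha}D(1)$ for some $\alpha>0$ depending only on $\theta$; a power decay of this type, available at all base points, is what leads to the $C^\alpha$ bound.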
Let us now switch to estimates up to $\partial\mathbb{R}^n_+$, that is, global regularity in the case of flat boundaries. Theorem~\ref{thm:0bdry} will follow from three results, which are the analogues of the interior ones described above. The last one is our main result, Theorem~\ref{thm:3bdry}, from the previous subsection. The other two ingredients are the following.
\begin{four}\label{thm:1bdry} \hspace{-1mm}(\cite{CFRS}) {\it
Let $u\in C^\infty(\overline{B^+_1})$ be a stable solution of $-\Delta u=f(u)$ in $B^+_1\subset\mathbb{R}^n$, with $u=0$ on $\partial^0 B^+_1$, for some nonlinearity $f\in C^1(\mathbb{R})$.
If $3 \leq n\leq 9$, then
\begin{equation*}
\int_{B^+_\rho}r^{2-n} u_{r}^2\,dx \leq C_\lambda\, \rho^{2-n}\int_{B^+_{\lambda\rho}\setminus B^+_{\rho}}|\nabla u|^2\,dx
\end{equation*}
for all $\lambda>1$ and $\rho< 1/\lambda$, where $C_\lambda$ is a constant depending only on $n$ and $\lambda$.
}\end{four}
For the proof of this result, as well as of the following one, the stability condition is used with certain test functions $\xi=\mathbf{c}\eta$ for which it is important that $\eta$ does not vanish on $\partial^0 B^+_1$. Thus, we will need to take $\mathbf{c}$ vanishing on the flat boundary. For the previous theorem, one makes the same choice $\mathbf{c}=x\cdot\nabla u$ as in the interior case, which indeed does vanish on the flat boundary.
Instead, for the following result one cannot use the choice $\mathbf{c}=|\nabla u|$ made in the interior case, since this function does not vanish on $\partial^0 B^+_1$. In \cite{CFRS} we found that
$$
\mathbf{c}=|\nabla u|- u_{x_n},
$$
which vanishes on $\partial^0 B^+_1$, is a suitable replacement. This allowed us to establish the following $W^{1,2+\gamma}$ estimate. Note that while the previous theorem required no condition on $f$, we now require $f$ to be nonnegative and nondecreasing ---but not necessarily convex.
\begin{five}\label{thm:2bdry} \hspace{-1mm}(\cite{CFRS}) {\it
Let $u\in C^\infty(\overline{B^+_1})$ be a nonnegative stable solution of $-\Delta u=f(u)$ in $B^+_1\subset\mathbb{R}^n$, with $u=0$ on $\partial^0 B^+_1$.
Assume that $f\in C^1(\mathbb{R})$ is nonnegative and nondecreasing.
Then,
\begin{equation*}
\|\nabla u\|_{L^{2+\gamma}(B^+_{1/4})} \le C \| u\|_{L^{1}(B^+_{1})},
\end{equation*}
for some dimensional constants $\gamma>0$ and $C$.
}\end{five}
To use this result as the second ingredient towards the H\"older estimate, it would suffice for the exponent $2+\gamma$ to be 2 (as was the case in the interior regularity). However, as a novelty with respect to the interior theory, Theorem~\ref{thm:2bdry} will also be used to establish the control of $u$ by $u_r$ (that is, Theorem~\ref{thm:3bdry}, our main result). For this, having the integrability exponent $2+\gamma$ strictly larger than 2 will be crucial.
\medskip
\subsection{Open problems and related equations}\label{subsec-open-related}
\
\smallskip
1.3.1. The assumption $f\geq 0$ is not needed in the interior $L^\infty$ estimates of \cite{C10,C19} for $n\leq 4$, nor in the interior H\"older bound of \cite{PZZ2} for $n\leq 5$.
Nor is it needed in the bounds for $n\leq 9$ of the radial case, \cite{CC}. In the general nonradial case, it is not known whether an interior $L^\infty$ estimate could hold for $6\leq n\leq 9$ without the hypothesis $f\geq 0$.
For future developments, it is of interest to keep in mind the test functions used in the stability inequality \eqref{stabilityLip}. Up to trivial cut-off functions $\zeta$, \cite{C10} used $\xi= |\nabla u| \varphi (u)\zeta$ for some appropriate function $\varphi$. Both \cite{CFRS} and the current paper use, for interior regularity, the two test functions $\xi=\mathbf{c}\eta$ given by \eqref{test1} and \eqref{test2} (this last one with $\eta$ being a cut-off). Instead, \cite{C19} and the recent \cite{PZZ2} use
$$
\xi= |\nabla u| r r^{-\beta} \zeta
$$
for some positive exponents $\beta$ ---\cite{PZZ2} takes the optimal exponent $\beta=(n-2)/2$.
\smallskip
1.3.2. Our boundary regularity results require $f\geq 0$, $f'\geq 0$, and $f''\geq 0$. Both the proof in \cite{CFRS} and the one of this paper, which are completely different, turn out to need these three conditions. It is not known if these hypotheses could be relaxed.
Theorem 1.5 in \cite{CFRS} establishes H\"older regularity up to the boundary in domains $\Omega$ of class $C^3$. Could the regularity assumption on the domain be weakened?
\smallskip
1.3.3. It is an open problem to extend the techniques of \cite{CFRS} to stable solutions of MEMS problems (see, e.g.\ \cite{EGG, Lin-Yang}) and reach a regularity result in the optimal range $n\leq 6$. Here one has the equation $-\Delta u =\lambda f(u)$ for positive nonlinearities $f:[0,1)\to\mathbb{R}$ which blow up at $u=1$ in such a way that $\int_0^1 f=+\infty$. In this new setting, a solution is said to be regular when $\Vert u\Vert_{L^\infty(\Omega)}< 1$.\footnote{Note that the results of \cite{CFRS} apply to this class of blow-up nonlinearities ---in fact, they appeared naturally while proving the results of Section~4 in that paper. Thus, Corollary 4.3 of \cite{CFRS} gives H\"older regularity of stable solutions to MEMS problems when $n\leq 9$. However, this does not prevent the solution from being singular in the new meaning, that is, equal to $1$ at some points.}
\smallskip
1.3.4. The results of \cite{CFRS} have been extended by Figalli and Zhang~\cite{Fig-Zhang} to finite Morse index solutions (a larger class than stable solutions). Here the nonlinearity must be required to be supercritical, in an appropriate sense. It would be interesting to make further progress on finite Morse index solutions, both in bounded domains and in the entire space. See also the presentation and open problems in~\cite{Wang}.
\smallskip
1.3.5. Dupaigne and Farina~\cite{Dup-Far} have used the results of \cite{CFRS} to establish Liouville theorems for entire stable solutions. One of them, for dimensions $n\leq 10$ and nonlinearities $f\geq 0$, applies to all stable solutions which are bounded from below. Still, \cite{Dup-Far} and \cite{Wang} raise some open questions in this topic.
\smallskip
1.3.6. The interior result of \cite{CFRS} has been extended to the $p$-Laplacian by Cabr\'e, Miraglio, and Sanch\'on~\cite{CMS}. This work reaches the optimal dimension for interior regularity of stable solutions when $p>2$. It remains an open problem to reach the expected optimal dimension for $1<p<2$. On the other hand, to our knowledge, boundary regularity \`a la \cite{CFRS} has not yet been treated (for any~$p$).
\smallskip
1.3.7. For the equation $(-\Delta)^su=f(u)$ involving the fractional Laplacian, the optimal dimension for regularity of stable solutions has only been reached when $f(u)=\lambda e^u$. This was accomplished by Ros-Oton~\cite{R-O}. The dimension found in that paper, which depends on~$s\in (0,1)$, is expected to also give the correct range of dimensions for other nonlinearities. This is, however, an open problem, even in the radial case. The most recent progress (in the general nonradial case) is the paper~\cite{CS} by Cabr\'e and Sanz-Perela, where the techniques of \cite{CFRS} are extended to the half-Laplacian ($s=1/2$) to prove interior regularity in dimensions $n\leq 4$ for all nonnegative convex nonlinearities ---here, with $s=1/2$, one expects $n\leq 8$ to be the optimal result.
\medskip
\subsection{Reading the article}\label{subsec-plan}
\
\smallskip
The monograph~\cite{Dup} and the shorter survey \cite{C17} present the general framework and previous regularity results on stable solutions, except for those from \cite{CFRS}. Besides regularity issues, \cite{Dup} treats other questions on stable solutions.
For expository purposes, the presentation in the current paper is self-contained, in the sense that all results are fully proven. In addition, a reader who is not an expert in elliptic PDEs can follow all proofs, since we only use basic Analysis tools (for instance, in the interior regularity proofs, we do not even use the Sobolev inequality).
The article is divided into two parts, dealing with interior and boundary regularity, respectively.
The new results, Theorems~\ref{thm:3} and \ref{thm:3bdry}, are proven in the third sections of Part~I and Part~II, respectively (Sections~\ref{sect:L1byRad} and \ref{sect:boundary-L1radial}). The appendices also contain new results that are needed in our proofs. The lemmas in the remaining sections, and most of their proofs, are taken from \cite{CFRS}. We will point out those passages which simplify some arguments of that paper.
\bigskip\medskip
\centerline{\large{\sc Part I: Interior regularity}}
\addtocontents{toc}{ \vspace{.2cm}\textsc{\hspace{.2cm} Part I: Interior regularity \vspace{.1cm}}}
\medskip
This first part of the article concerns interior regularity. Thanks to the new result of Theorem~\ref{thm:3}, which is proven in Section \ref{sect:L1byRad}, we avoid the contradiction-compactness argument of \cite{CFRS}.
In addition, our proof of the $W^{1,2}$ estimate of Theorem \ref{thm:2} (given in Section~\ref{sect:W12}) is more elementary than that of \cite{CFRS}, which needed the higher integrability result \eqref{eq:W12 L1 int} for $|\nabla u|$. While this last result was obtained through the Sobolev inequality and some estimates on each level set of the solution, we will instead use a new and simple interpolation inequality.
\section{The weighted $L^{2}$ estimate for radial derivatives}
\label{sect:interior-weighted}
The following is the key estimate towards the interior regularity result. In particular, it is here (and only here) that the condition $n\leq 9$ arises. It is proven by using the test function $\xi=(x\cdot \nabla u) \eta$ in the stability condition and by later taking $\eta=r^{(2-n)/2}\zeta$ for some cut-off function $\zeta$. Recall the notation \eqref{notation r} for $r$ and $u_r$.
Throughout the paper, some subscripts denote partial derivatives, that is, $u_i=\partial_iu,$ $u_{ij}=\partial_{ij}u$, etc.
\begin{lemma}[\cite{CFRS}]\label{conseqestab2}
Let $u\in C^\infty (\overline B_1)$ be a stable solution of $-\Delta u=f(u)$ in $B_1\subset \mathbb{R}^n$, for some function $f\in C^{1}(\mathbb{R})$.
Then,
\begin{equation}\label{eq:firstest}
\int_{B_1} \Big( \big\{(n-2)\eta +2 x\cdot \nabla \eta \big\}\eta\, |\nabla u|^2
- 2(x\cdot \nabla u) \nabla u\cdot \nabla(\eta^2)
- |x\cdot \nabla u|^2 |\nabla \eta|^2 \Big)\,dx \le 0
\end{equation}
for all Lipschitz functions $\eta$ vanishing on $\partial B_1$.
As a consequence,
\begin{equation}\label{firstest-n-2}
\begin{split}
\frac{(n-2)(10-n)}{4}\int_{B_1} & r^{2-n} u_r^2 \zeta^2\, dx \\
&\hspace{-2cm} \leq \int_{B_1} (-2) r^{3-n} |\nabla u|^2\zeta \zeta_r\, dx
+\int_{B_1} 4 r^{3-n} u_r \zeta\, \nabla u\cdot\nabla\zeta \, dx \\
& \hspace{-1cm}
+\int_{B_1} (2-n) r^{3-n} u_r^2\zeta \zeta_r \, dx
+\int_{B_1} r^{4-n} u_r^2 |\nabla\zeta|^2 \, dx
\end{split}
\end{equation}
for all Lipschitz functions $\zeta$ vanishing on $\partial B_1$.
\end{lemma}
\begin{proof}
We first establish \eqref{eq:firstest}.
We choose $\mathbf{c}(x):=x\cdot \nabla u(x)$. By a direct computation
we have that
\begin{equation}\label{basic lin}
\Delta \mathbf{c}=x\cdot \nabla \Delta u +2\sum_{i=1}^n u_{ii}=-f'(u)\mathbf{c}+2\Delta u\qquad\text{in } B_1.
\end{equation}
Substituting this identity in \eqref{eq:07} we get\footnote{The same computation arises in the proof of the classical Pohozaev identity.}
\begin{align*}
\int_{B_1} |x\cdot \nabla u|^{2}&\left|\nabla \eta\right|^{2} \,dx \geq
\int_{B_1}\bigl( \Delta \mathbf{c}+f'(u)\mathbf{c}\bigr) \mathbf{c}\,\eta^{2}\, dx=2\int_{B_1} (x\cdot \nabla u)\Delta u\,\eta^{2}\, dx\\
&\hspace{-.6cm} =\int_{B_1}\Big({\rm div}\bigl(2(x\cdot \nabla u)\nabla u - |\nabla u|^2x\bigr)+(n-2)|\nabla u|^2\Bigr)\eta^2\,dx\\
&\hspace{-.6cm} =\int_{B_1}\Big(-2(x\cdot \nabla u) \nabla u\cdot \nabla (\eta^2) +|
\nabla u|^2x\cdot \nabla (\eta^2)+(n-2)|\nabla u|^2\eta^2\Bigr)\,dx,
\end{align*}
which establishes \eqref{eq:firstest}.
We can now deduce \eqref{firstest-n-2}.
Given $a\in (0,n)$,
we would like to take $\eta:=r^{-a/2}\zeta$, with $\zeta$ Lipschitz and vanishing on $\partial B_1$, as test function in \eqref{eq:firstest}.
Since in general $\eta$ is not Lipschitz, we approximate it by the Lipschitz function
\[\eta_\varepsilon:= \min\{r^{-a/2},\varepsilon^{-a/2}\}\zeta\]
for $\varepsilon\in(0,1)$, which agrees with $\eta$ in $B_1\setminus B_\varepsilon$.
We have that $\eta_\varepsilon\to\eta$ and $\nabla \eta_\varepsilon\to \nabla \eta$ a.e.\ in $B_1$ as $\varepsilon\downarrow0$.
At the same time, when $\eta$ is chosen to be $\eta_\varepsilon$, every term in \eqref{eq:firstest} is bounded in absolute value by $Cr^{-a}|\nabla u|^2\leq \tilde Cr^{-a}\in L^1(B_1)$, since $u\in C^\infty (\overline B_1)$.
Hence, the dominated convergence theorem gives that \eqref{eq:firstest} also holds with $\eta:=r^{-a/2}\zeta$.
Now, noticing that
\begin{equation*}
x\cdot \nabla \eta=-\frac{a}2r^{-a/2}\zeta+r^{-a/2}x\cdot \nabla \zeta,\quad \nabla (\eta^2)=-ar^{-a-2}\zeta^2x+2r^{-a}\zeta \nabla \zeta,
\end{equation*}
and
\begin{equation*}
|\nabla \eta|^2=\Big|-\frac{a}2r^{-a/2-2}\zeta x+r^{-a/2}\nabla \zeta \Big|^2=\frac{a^2}4r^{-a-2}\zeta^2 + r^{-a}|\nabla \zeta|^2-ar^{-a-2}\zeta (x\cdot \nabla \zeta),
\end{equation*}
\eqref{firstest-n-2} follows from \eqref{eq:firstest} by choosing $a=n-2$.
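Let us record why $a=n-2$ is the right choice and where the constant comes from. Collecting the terms of \eqref{eq:firstest} which carry $\zeta^2$ but no derivative of $\zeta$, and using $x\cdot\nabla u=ru_r$: the first term contributes $(n-2-a)\,r^{-a}|\nabla u|^2\zeta^2$, which vanishes precisely for $a=n-2$ (this is what removes the full gradient from the left-hand side); the second contributes $2a\,r^{-a}u_r^2\zeta^2$; and the third contributes $-\frac{a^2}{4}\,r^{-a}u_r^2\zeta^2$. Their sum is
$$
\Big(2a-\frac{a^2}{4}\Big) r^{-a}u_r^2\zeta^2=\frac{a(8-a)}{4}\,r^{-a}u_r^2\zeta^2,
$$
which for $a=n-2$ produces the coefficient $\frac{(n-2)(10-n)}{4}$ on the left-hand side of \eqref{firstest-n-2}, while the terms containing $\nabla \zeta$ give its right-hand side.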
\end{proof}
From the lemma, we easily deduce our key estimate in dimensions~$3\leq n\leq9$.
\begin{proof}[Proof of Theorem \ref{thm:1}]
Given $\rho \in (0,2/3)$, consider a Lipschitz function $\zeta$ satisfying $0\leq \zeta\leq 1$, $\zeta_{|B_{\rho}}=1$, $\zeta_{|\mathbb{R}^n\setminus B_{3\rho/2}}=0$,
and $|\nabla \zeta| \leq C/\rho$. Using it in \eqref{firstest-n-2}, noticing that $r=|x|$ is comparable to $\rho$ inside ${\rm supp}(\nabla \zeta)\subset \overline B_{3\rho/2}\setminus B_\rho$, and recalling that the constant $(n-2)(10-n)/4$ is positive precisely when $3\leq n\leq 9$, the result follows.
\end{proof}
\section{Hessian and $W^{1,2}$ estimates}
\label{sect:W12}
To establish the $W^{1,2}$ estimate of Theorem \ref{thm:2} we use again the stability of~$u$, but choosing now another test function: $\xi=|\nabla u|\eta$ with $\eta$ a cut-off.
This choice was introduced by Sternberg and Zumbrun~\cite{SZ} in 1998 while studying the Allen-Cahn equation. It leads to the following result.
Define
\begin{equation}\label{defAAA}
\mathcal A :=
\begin{cases} \left( \sum_{ij} u_{ij}^2 - \sum_{i} \left(\sum_{j} u_{ij} \frac{u_j}{|\nabla u|} \right)^2 \right)^{1/2} \quad \quad &\mbox{if } \nabla u \neq 0
\\
0 &\mbox{if } \nabla u=0.
\end{cases}
\end{equation}
\begin{lemma}[\cite{SZ}]\label{conseqestab}
Let $u\in C^\infty (\overline B_1)$ be a stable solution of $-\Delta u=f(u)$ in $B_1\subset \mathbb{R}^n$, for some function $f\in C^{1}(\mathbb{R})$.
Then,
\begin{equation}\label{stab-real-c}
\int_{B_1} \mathcal A^2 \eta ^2\, dx \le \int_{B_1} |\nabla u|^2 |\nabla \eta|^2\, dx
\end{equation}
for all Lipschitz functions $\eta$ vanishing on $\partial B_1$.
\end{lemma}
\begin{proof}
We would like to take $\mathbf{c}:=|\nabla u|$ in the alternative form \eqref{eq:07} of stability.
A simple computation shows that
\begin{equation}\label{real-c}
|\nabla u|\bigl(\Delta|\nabla u|+f'(u)|\nabla u|\bigr)=\mathcal A^2\quad \text{ in } \{\nabla u\neq0\}.
\end{equation}
This, together with \eqref{eq:07}, suggests that \eqref{stab-real-c} should hold. However, a complete argument is needed: we used an integration by parts to derive \eqref{eq:07}, $\mathbf{c}=|\nabla u|$ may not be differentiable where the gradient vanishes, and \eqref{real-c} holds, in principle, only in $\{\nabla u\neq0\}$.
The following is the most elementary proof of the lemma.\footnote{See \cite{CFRS} for an alternative proof
which does not require computing~$\Delta |\nabla u|$ and hence applies to~$C^{2}$ solutions $u$ ---it uses, however, Stampacchia's theorem to ensure that $D^2u=0$ a.e.\ in $\{\nabla u=0\}$. A third proof, similar to the one we give here and which does not rely on Stampacchia's result, can be given following an argument for the $p$-Laplacian from \cite{CMS}. It consists of using \eqref{eq:07} with $\mathbf{c}= |\nabla u| \phi (\abs{\nabla u}/\varepsilon)$,
where $\phi\in C^\infty(\mathbb{R})$ is nondecreasing, $\phi(t)=0$ for $t\leq1$, and $\phi(t)=1$ for $t\geq2$, and letting $\varepsilon\downarrow 0$ at the end.}
As in \cite{C10}, for $\varepsilon >0$ we consider
$$
\mathbf{c}_\varepsilon:=\left( |\nabla u|^2+\varepsilon^{2}\right)^{1/2}
$$
and introduce
\begin{equation*}
\mathcal A_\varepsilon := \left( \sum\nolimits_{ij} u_{ij}^2 - \sum\nolimits_{i} \left(\sum\nolimits_{j} u_{ij} \frac{u_j}{\mathbf{c}_\varepsilon} \right)^2 \right)^{1/2}.
\end{equation*}
Note that we are taking the square root of nonnegative numbers and that now $\mathbf{c}_\varepsilon$ is a~$C^\infty$ function in all of $B_1$. Since $\Delta u_j+f'(u)u_j=0$ and $\mathcal A=0$ in $\{\nabla u=0\}$, it is simple to verify that
\begin{equation*}
\left(\Delta \mathbf{c}_\varepsilon +f'(u) \mathbf{c}_\varepsilon\right)\mathbf{c}_\varepsilon = \varepsilon^2 f'(u) + \mathcal A_\varepsilon^2 \geq \varepsilon^2 f'(u) + \mathcal A^2 \quad\text{ in } B_1.
\end{equation*}
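For the reader's convenience, here is a sketch of this verification. Differentiating $\mathbf{c}_\varepsilon^2=|\nabla u|^2+\varepsilon^{2}$ and using $\Delta u_j=-f'(u)u_j$, we obtain
\begin{equation*}
\mathbf{c}_\varepsilon\Delta \mathbf{c}_\varepsilon+|\nabla \mathbf{c}_\varepsilon|^2=\frac12\Delta (\mathbf{c}_\varepsilon^2)=\sum\nolimits_{ij} u_{ij}^2+\sum\nolimits_{j} u_j\Delta u_j=\sum\nolimits_{ij} u_{ij}^2-f'(u)|\nabla u|^2.
\end{equation*}
Since $\partial_i \mathbf{c}_\varepsilon=\sum_j u_{ij}u_j/\mathbf{c}_\varepsilon$ and $|\nabla u|^2=\mathbf{c}_\varepsilon^2-\varepsilon^2$, rearranging the last identity gives the claimed equality $\left(\Delta \mathbf{c}_\varepsilon +f'(u) \mathbf{c}_\varepsilon\right)\mathbf{c}_\varepsilon = \varepsilon^2 f'(u) + \mathcal A_\varepsilon^2$, while the inequality follows from $\mathbf{c}_\varepsilon\geq |\nabla u|$, which yields $\mathcal A_\varepsilon\geq \mathcal A$.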
Now, from \eqref{eq:07} used with $\mathbf{c}$ replaced by $\mathbf{c}_\varepsilon$ we deduce that
\begin{equation*}
\int_{B_1}\left( \varepsilon^2 f'(u) + \mathcal A^2 \right) \eta^2\, dx \le
\int_{B_1}\left(\left|\nabla u\right|^2
+\varepsilon^{2}\right)
\left|\nabla \eta\right|^2\,dx.
\end{equation*}
Letting $\varepsilon\downarrow0$, we conclude the lemma.
\end{proof}
We can now deduce two $L^1$ estimates for second derivatives. One of them, \eqref{ahgiohwiob1bis}, is weighted by the gradient and will lead to our $W^{1,2}$ a priori bound. We now assume $f\geq 0$ and crucially use the superharmonicity of the stable solution.
\begin{lemma}[\cite{CFRS}]\label{lem:hessian}
Let $u\in C^\infty (\overline B_1)$ be a stable solution of $-\Delta u=f(u)$ in $B_1\subset \mathbb{R}^n$, for some nonnegative function $f\in C^{1}(\mathbb{R})$.
Then,
\begin{equation}\label{estdivbis}
\Vert D^2 u \Vert_{L^1(B_{3/4})} \leq C \Vert \nabla u \Vert_{L^{2}(B_{1})}
\end{equation}
and
\begin{equation}\label{ahgiohwiob1bis}
\Vert\, |\nabla u| \, D^2 u \, \Vert_{L^1(B_{3/4})} \leq C \Vert \nabla u \Vert^2_{L^{2}(B_{1})}
\end{equation}
for some dimensional constant $C$.
\end{lemma}
\begin{proof}
Define $\nu=\nabla u/|\nabla u|$ in the set $\{\nabla u\neq 0\}$ and $\nu=0$ in $\{\nabla u=0\}$. We claim that
\begin{equation}
\label{hwuighwiu2}
\big| D^2 u - (D^2u[\nu, \nu]) \nu\otimes\nu\big| \leq C\mathcal A \quad\text{ a.e.\ in } B_1
\end{equation}
and, as a consequence,
\begin{equation}
\label{HessbyLapl}
|D^2u|\leq |\Delta u|+C\mathcal A \quad\text{ a.e.\ in } B_1,
\end{equation}
for some dimensional constants $C$.
Indeed, at points $x$ for which $\nabla u(x)\neq 0$, \eqref{hwuighwiu2} follows from the fact that $\mathcal A^2$, as defined in \eqref{defAAA}, is greater than or equal to half the squared Hilbert-Schmidt norm of the matrix $D^2 u - (D^2u[\nu, \nu])\,\nu\otimes\nu$. This is easily seen by writing the symmetric matrix $D^2 u(x)$ in any orthonormal basis having $\nu(x)$ as the last (say) basis vector. We conclude the validity, almost everywhere, of \eqref{hwuighwiu2} using Stampacchia's result (see for instance~\cite[Theorem~6.19]{LiebLoss}), which ensures that $D^2u=0$ a.e.\ in $\{\nabla u =0\}$.
Now, to prove \eqref{HessbyLapl}, notice that by \eqref{hwuighwiu2} it suffices to control $|D^2u[\nu, \nu]|$. To accomplish this, we simply rely on
$$
D^2u[\nu, \nu] = \Delta u - {\rm tr}\big( D^2 u - (D^2u[\nu, \nu]) \nu\otimes\nu\big)
$$
and use again \eqref{hwuighwiu2}.
Now, choose a nonnegative function $\zeta\in C^\infty_c(B_1)$ with $\zeta\equiv1$ in $B_{3/4}$.
Then, since $-\Delta u \geq 0$, we have
\begin{equation}\label{LapL1}
\int_{B_{3/4}}|\Delta u|\,dx
\leq -\int_{B_1} \Delta u \,\zeta\,dx=\int_{B_1} \nabla u \cdot \nabla \zeta\,dx \leq C\|\nabla u\|_{L^2(B_1)}.
\end{equation}
This bound, together with \eqref{HessbyLapl} and Lemma~\ref{conseqestab}, gives \eqref{estdivbis}.
To establish \eqref{ahgiohwiob1bis}, we begin from the pointwise identity
$$
{\rm div}(|\nabla u|\, \nabla u) = |\nabla u| \biggl( \sum\nolimits_{ij} \frac{u_{ij}u_i u_j}{|\nabla u|^2} + \Delta u\biggr) \quad\text{in } \{\nabla u\neq0\},
$$
from which we deduce
\begin{equation} \label{hwuighwiu}
{\rm div}(|\nabla u|\, \nabla u) = -|\nabla u|\,{\rm tr}\big( D^2 u - (D^2u[\nu, \nu]) \nu\otimes\nu\big) + 2|\nabla u|\Delta u \quad\text{ a.e.\ in } B_1,
\end{equation}
where tr denotes the trace. Here we used again Stampacchia's result (\cite[Theorem~6.19]{LiebLoss}), now also applied to each of the functions $|\nabla u| u_j$. For this, one needs to verify that $|\nabla u| u_j$ has weak derivatives locally in $L^1$.\footnote{This can be done replacing $|\nabla u|$ by $(|\nabla u|^2+\varepsilon^2)^{1/2}$ in the definition of weak derivative, and letting $\varepsilon\downarrow 0$ at the end.}
Stampacchia's theorem ensures then that $\partial_k (|\nabla u|u_j) = 0$ a.e.\ in $\{u_j=0\}$ for all $k$. In particular, $ {\rm div}(|\nabla u|\, \nabla u) = 0$ a.e.\ in $\{\nabla u=0\}$.
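Starting from the pointwise identity above, the deduction of \eqref{hwuighwiu} in $\{\nabla u\neq0\}$ amounts to the algebraic rewriting
\begin{equation*}
\sum\nolimits_{ij} \frac{u_{ij}u_i u_j}{|\nabla u|^2} = D^2u[\nu, \nu] = \Delta u - {\rm tr}\big( D^2 u - (D^2u[\nu, \nu]) \nu\otimes\nu\big),
\end{equation*}
which, once substituted, gives exactly \eqref{hwuighwiu} there; the identity then extends a.e.\ to $B_1$ by the considerations just made on the set $\{\nabla u=0\}$.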
Next, let $\eta\in C^\infty_c(B_1)$ be a cut-off function with $\eta\equiv 1$ in $B_{3/4}$. We use \eqref{hwuighwiu}, \eqref {hwuighwiu2}, and Lemma \ref{conseqestab} to obtain
\begin{eqnarray}\label{pass}
2 \int_{B_{3/4}} |\nabla u|\,|\Delta u|\, dx &\le &
-\int_{B_{1}} 2|\nabla u|\,\Delta u \, \eta^2\,dx \nonumber
\\ &&\hspace{-3cm}=-\int_{B_1} |\nabla u|\,{\rm tr}\big( D^2 u - (D^2u[\nu, \nu]) \nu\otimes\nu\big)\, \eta^2\,dx - \int_{B_1} {\rm div}(|\nabla u|\, \nabla u)\,\eta^2\,dx
\nonumber\\&&\hspace{-3cm}
\le C\left(\int_{B_1} |\nabla u|^2\eta^2\,dx\right)^{1/2} \left(\int_{B_1} \mathcal A^2 \eta^2\,dx\right)^{1/2} + \int_{B_1} \,|\nabla u|\, \nabla u \cdot \nabla (\eta^2)\,dx
\nonumber\\&&\hspace{-3cm}
\le C \|\nabla u\|_{L^{2}(B_{1})}^2.
\end{eqnarray}
Using this bound and \eqref{HessbyLapl} (together with a similar argument as the previous one, to control the integral of $|\nabla u|\mathcal A$ in $B_{3/4}$), we conclude \eqref{ahgiohwiob1bis}.
\end{proof}
We now prove that the Dirichlet energy of a stable solution in a ball can be controlled by its $L^1$ norm in a larger ball. To get this result, we combine the previous weighted Hessian estimate with two new (but easy) interpolation lemmas proven in Appendix~\ref{app:interp}. This proof is more elementary than the one of \cite{CFRS}, which was based on the $W^{1,2+\gamma}$ bound of Section~\ref{sect:higher} below and on the Sobolev inequality.
\begin{proof}[Proof of Theorem \ref{thm:2}]
We cover $B_{1/2}$ (except for a set of measure zero) with a family of disjoint open cubes $Q_j$ of side-length small enough (depending only on $n$) such that $Q_j\subset B_{3/4}$.
We now combine the interpolation inequalities of Propositions \ref{prop5.2} and \ref{Nash} in each cube $Q_j$ (we rescale them to pass from the unit cube to the cubes $Q_j$), used with $p=2$ and $\tilde\varepsilon= \varepsilon^{3/2}$ for a given $\varepsilon\in(0,1)$. We obtain that
\begin{equation*}
\int_{Q_j}\abs{\nabla u}^{2}dx \leq C\varepsilon \int_{Q_j}\abs{\nabla u}\lvert D^2u\rvert\,dx+ C\varepsilon \int_{Q_j}\abs{\nabla u}^2dx+C\varepsilon^{-2-\frac{3n}{2}}\left( \int_{Q_j}\abs{u}\,dx\right)^2
\end{equation*}
and thus, using $Q_j\subset B_{3/4}$ and estimate \eqref{ahgiohwiob1bis}, that
\begin{equation*}
\int_{Q_j}\abs{\nabla u}^{2}dx \leq C\varepsilon \int_{B_1}\abs{\nabla u}^2dx+C\varepsilon^{-2-\frac{3n}{2}}\left( \int_{B_1}\abs{u}\,dx\right)^2.
\end{equation*}
Adding up all these inequalities (note that the number of cubes $Q_j$ depends only on~$n$), we get
\[
\|\nabla u\|_{L^{2}(B_{1/2})}^2 \le C\varepsilon \|\nabla u\|_{L^{2}(B_{1})}^2 + C\varepsilon^{-2-\frac{3n}{2}} \|u\|_{L^{1}(B_{1})}^2.
\]
Applying this estimate to the functions $u_{\rho,y}(\bar x):=u(y+\rho \bar x)$, where $\bar x\in B_1$ and $B_\rho(y)\subset B_1$ (note that $u_{\rho,y}$ is a stable solution to the semilinear equation $-\Delta u_{\rho,y}=f_{\rho}(u_{\rho,y})$ in~$B_1$ with $f_\rho(t)=\rho^2f(t)$, and thus all the previous results apply to $u_{\rho,y}$ as well), we conclude that
\begin{align*}
\rho^{n+2}\int_{B_{\rho/2}(y)}|\nabla u|^2\,dx&\leq
C\varepsilon \rho^{n+2}\int_{B_{\rho}(y)}|\nabla u|^2\,dx
+ C\varepsilon^{-2-\frac{3n}{2}}\left(\int_{B_{\rho}(y)}|u|\,dx\right)^2\\
& \leq C\varepsilon \rho^{n+2}\int_{B_{\rho}(y)}|\nabla u|^2\,dx
+C\varepsilon^{-2-\frac{3n}{2}}\left(\int_{B_{1}}|u|\,dx \right)^2
\end{align*}
for every $\varepsilon\in(0,1)$.
By Lemma \ref{lem_abstract}, applied with
$\sigma(B):=\|\nabla u\|_{L^2(B)}^2$ and $\varepsilon$ sufficiently small, the result follows.
\end{proof}
\section{The radial derivative controls the function in $L^{1}$}
\label{sect:L1byRad}
In the previous section we have controlled the $W^{1,2}$ norm of a stable solution by its $L^1$ norm. We now prove, as claimed in Theorem~\ref{thm:3}, that the $L^1$ norm of the solution (in fact of any superharmonic function) in an annulus (respectively, in a ball) can be controlled, up to an additive constant, by the $L^1$ norm of its radial derivative, taken also in the annulus (respectively, in the ball). For this, we first need a similar result for harmonic functions.
\begin{lemma}\label{lem:RadcontrolsHarm}
Let $v\in C^\infty(\overline B_1)$ solve $\Delta v=0$ in $B_1\subset \mathbb{R}^n$. Then,
\begin{equation}\label{Linfty}
\Vert v-v(0)\Vert_{L^{\infty}(\partial B_{1})}\leq 2n^{3/2}\Vert v_{r}\Vert_{L^{\infty}(\partial B_{1})}
\end{equation}
and
\begin{equation}\label{L1}
\Vert v-t\Vert_{L^{1}(\partial B_{1})}\leq 2n^{3/2}\Vert v_{r}\Vert_{L^{1}(\partial B_{1})},
\end{equation}
where $t:=\inf \left\{ \overline{t} : |\{v>\overline{t}\}\cap\partial B_{1}|\leq |\partial B_{1}|/2\right\}$ is the median of $v$.
\end{lemma}
The previous estimates should not be surprising. Indeed, we are controlling the solution to a boundary value problem for a homogeneous equation by its Neumann data ${v_r}_{|_{\partial B_1}}$. Alternatively, the flux ${v_r}_{|_{\partial B_1}}$ can be considered as a first-order elliptic integro-differential operator acting on the function $v_{|_{\partial B_1}}$. This operator is a kind of ``half-Laplacian'', since we extend the data $v_{|_{\partial B_1}}$ harmonically in the ball ---the half-Laplacian corresponds to the harmonic extension in a half-space. In any case, these comments will not be needed in the following elementary proof.
\begin{proof}[Proof of Lemma \ref{lem:RadcontrolsHarm}]
We start by proving \eqref{Linfty}. From it, we will easily deduce \eqref{L1} by duality.
Let
$$
w(x):= x\cdot \nabla v(x) = r\, v_r \qquad\text{ for } x\in \overline B_1.
$$
We claim that
\begin{equation}\label{pointw}
|w(x)|\leq 2n^{3/2} \Vert v_{r}\Vert_{L^{\infty}(\partial B_{1})} \, r \qquad\text{ for } x\in \overline B_1.
\end{equation}
From this, \eqref{Linfty} will clearly follow since, for all $\sigma\in \partial B_1$ we have
$$
v(\sigma)-v(0)=\int_0^1 v_r(r\sigma) \,dr= \int_0^1 r^{-1} w(r\sigma)\,dr.
$$
Now, to prove \eqref{pointw}, note that $w$ is harmonic in $B_1$ and agrees with $v_r$ on $\partial B_1$. Consequently,
\begin{equation}\label{Linftyw}
\Vert w\Vert_{L^{\infty}(B_{1})} \leq \Vert v_r\Vert_{L^{\infty}(\partial B_{1})}.
\end{equation}
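The harmonicity of $w$ used here can be checked directly: since $v$ is harmonic,
\begin{equation*}
\Delta w=\Delta( x\cdot \nabla v)=2\Delta v+x\cdot \nabla (\Delta v)=0 \quad\text{ in } B_1.
\end{equation*}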
It follows that \eqref{pointw} holds whenever $|x|=r\geq 1/2$.
Assume now that $x\in B_{1/2}$. Note that
\begin{equation}\label{wdiff}
|w(x)|= |w(x)-w(0)|\leq \Vert \nabla w \Vert_{L^{\infty}(B_{1/2})} \, r.
\end{equation}
Using that the partial derivatives $w_i$ of $w$ are harmonic and also \eqref{Linftyw}, we see that
\begin{align*}
|w_i(x)|&=\frac{1}{|B_{1/2}|}\left| \int_{B_{1/2}(x)} w_i \, dz \right|=
\frac{1}{|B_{1/2}|}\left| \int_{\partial B_{1/2}(x)} w \nu^i \, d\mathcal{H}^{n-1} \right|
\\
&\leq \frac{|\partial B_{1/2}|}{|B_{1/2}|} \Vert v_r\Vert_{L^{\infty}(\partial B_{1})}
= 2n \Vert v_r\Vert_{L^{\infty}(\partial B_{1})}.
\end{align*}
Hence $|\nabla w(x)|\leq 2n^{3/2} \Vert v_r\Vert_{L^{\infty}(\partial B_{1})}$, which combined with \eqref{wdiff}, gives \eqref{pointw} when $r< 1/2$.
Next, we establish \eqref{L1} by a duality argument. First note that the value $t$ is finite and well-defined. Replacing $v$ by $v-t$ we may assume that $t=0$, and therefore that $0=\inf \left\{ \overline{t} : |\{v>\overline{t}\}\cap\partial B_{1}|\leq |\partial B_{1}|/2\right\}$. It follows that $|\{v>0\}\cap\partial B_{1}|\leq |\partial B_{1}|/2$ and that $|\{v>\overline{t}\}\cap\partial B_{1}|> |\partial B_{1}|/2$ for all $\overline{t}<0$. As a consequence, $|\{v<\overline{t}\}\cap\partial B_{1}|\leq |\partial B_{1}|/2$ for all $\overline{t}<0$, and thus $|\{v<0\}\cap\partial B_{1}|\leq |\partial B_{1}|/2$.
Let now $\text{sgn} (v):=v/|v|$ where $v\neq 0$. Since $|\{v>0\}\cap\partial B_{1}|\leq |\partial B_{1}|/2$ and $|\{v<0\}\cap\partial B_{1}|\leq |\partial B_{1}|/2$, it is possible to extend $ \text{sgn} (v)$ to $\{v=0\}\cap\partial B_{1}$, still taking values $\pm 1$ and in such a way that we have $\int_{\partial B_1} \text{sgn} (v)\, d\mathcal{H}^{n-1} =0$.\footnote{This follows from the fact that, given a measurable set $A\subset \partial B_1$ (which we take here to be $\{v=0\}\cap\partial B_{1}$) and $\theta\in (0,1)$, there exists a measurable subset $B\subset A$ with $|B|=\theta |A|$. This follows from the continuity with respect to $\rho$ of the quantity $|A\cap B_\rho(1,0,\ldots,0)|$.} In addition, it will also hold that $|v|=v\, \text{sgn} (v)$ on $\partial B_1$.
We now define the functions
$$
g_k (x) := \int_{\partial B_1} \text{sgn} (v)(y) \, \rho_k (|x-y|)\, d\mathcal{H}^{n-1} (y),
$$
where $\{\rho_k=\rho_k(|\cdot | )\}$ is a sequence of smooth mollifiers on $\partial B_1$. We have that $g_k\in C^\infty (\partial B_1)$, $|g_k|\leq 1$, and $\int_{\partial B_1} g_k\, d\mathcal{H}^{n-1} =0$ since $\text{sgn} (v)$ has zero average on~$\partial B_1$. In addition, since $|v|=v\, \text{sgn} (v)$ a.e.\ on $\partial B_1$, it holds that
\begin{equation}\label{limgk}
\int_{\partial B_1} |v|\, d\mathcal{H}^{n-1} = \lim_k \int_{\partial B_1} v g_k\, d\mathcal{H}^{n-1}.
\end{equation}
Now, since $g_k$ is smooth and has zero average on $\partial B_1$, we can uniquely solve the problem
$$
\left\{
\begin{array}{cl}
\Delta \varphi_k=0 & \text{in }B_1\\
\partial_r \varphi_k=g_k & \text{on }\partial B_1
\end{array}
\right.
$$
by imposing, additionally, $\varphi_k(0)=0$. By \eqref{Linfty}, we have $\Vert \varphi_k\Vert_{L^{\infty}(\partial B_{1})}\leq 2n^{3/2}$. We conclude that
\begin{align*}
\left| \int_{\partial B_{1}} v g_k \, d\mathcal{H}^{n-1} \right| & =
\left| \int_{\partial B_{1}} v \, \partial_r \varphi_k \, d\mathcal{H}^{n-1} \right| =\left| \int_{B_{1}} \nabla v \cdot \nabla\varphi_k \, dx \right|
=\left| \int_{\partial B_{1}} v_r \, \varphi_k \, d\mathcal{H}^{n-1} \right|
\\
&
\leq 2n^{3/2} \int_{\partial B_{1}} |v_r|\, d\mathcal{H}^{n-1}.
\end{align*}
This, together with \eqref{limgk}, concludes the proof of \eqref{L1}.
\end{proof}
We can now give the
\begin{proof}[Proof of Theorem \ref{thm:3}]
Since
$$
\Vert u_{r}\Vert_{L^{1}({B_{1}\setminus B_{1/2})}}= \frac{1}{1/2} \int_{1/2}^1 \left( \int_{\partial B_r} \frac{1}{2} |u_{r}|\, d\mathcal{H}^{n-1} \right) dr,
$$
there exists $\rho\in [1/2,1]$ such that $ \Vert u_{r}\Vert_{L^{1}({B_{1}\setminus B_{1/2})}}= \frac{1}{2}\int_{\partial B_\rho} |u_{r}|\, d\mathcal{H}^{n-1}$.
Let $v$ be the harmonic function in $B_\rho$ which agrees with $u$ on $\partial B_\rho$. We have $v\in C^\infty (\overline B_\rho)$. Decompose the radial derivatives $u_r=u_r^+-u_r^-$ and $v_r=v_r^+-v_r^-$ into their positive and negative parts. Since $u$ is superharmonic, the maximum principle gives $v\leq u$ in $B_\rho$; as the functions agree on the boundary, it follows that $v_r\geq u_r$ on $\partial B_\rho$. As a consequence, $v_r^-\leq u_r^-$ on $\partial B_\rho$. Since, in addition
$
0=\int_{B_\rho} \Delta v\, dx = \int_{\partial B_\rho} v_{r}\, d\mathcal{H}^{n-1} = \int_{\partial B_\rho} v_{r}^+\, d\mathcal{H}^{n-1}-\int_{\partial B_\rho} v_{r}^-\, d\mathcal{H}^{n-1},
$
it follows that
\begin{align*}
2\Vert u_{r}\Vert_{L^{1}({B_{1}\setminus B_{1/2})}} &= \int_{\partial B_\rho} |u_{r}|\, d\mathcal{H}^{n-1}
\geq \int_{\partial B_\rho} u_{r}^-\, d\mathcal{H}^{n-1} \\
&\geq \int_{\partial B_\rho} v_{r}^-\, d\mathcal{H}^{n-1}=\int_{\partial B_\rho} v_{r}^+\, d\mathcal{H}^{n-1}= \frac{1}{2}\int_{\partial B_\rho} |v_{r}|\, d\mathcal{H}^{n-1}.
\end{align*}
Now, since $\rho\in [1/2,1]$, we can rescale the estimate \eqref{L1} in Lemma~\ref{lem:RadcontrolsHarm} and apply it to the harmonic function $v$ to deduce
$$
C\int_{\partial B_\rho} |v_{r}|\, d\mathcal{H}^{n-1} \geq \int_{\partial B_\rho} |v-t|\, d\mathcal{H}^{n-1}
= \int_{\partial B_\rho} |u-t|\, d\mathcal{H}^{n-1}
$$
for some value $t$ and a dimensional constant $C$, where in the last equality we used that $v=u$ on $\partial B_\rho$. Therefore, by the previous two estimates, the theorem will be proven once we establish that
\begin{equation}\label{conclsuper}
\Vert u-t\Vert_{L^{1}({B_{1}\setminus B_{1/2})}} \leq C\left(
\int_{\partial B_\rho} |u-t|\, d\mathcal{H}^{n-1}+
\Vert u_r\Vert_{L^{1}({B_{1}\setminus B_{1/2})}}
\right)
\end{equation}
and
\begin{equation}\label{conclsuper-ball}
\Vert u-t\Vert_{L^{1}({B_{1/2})}} \leq C\left(
\int_{\partial B_\rho} |u-t|\, d\mathcal{H}^{n-1}+
\Vert u_r\Vert_{L^{1}({B_{1})}}
\right).
\end{equation}
This is simple. For both estimates, we will use that
$$
(u-t)(s\sigma)=(u-t)(\rho\sigma)-\int_s^\rho u_r (r\sigma) \,dr
$$
for every $s\in (0,1)$ and $\sigma\in S^{n-1}$.
Now, to check \eqref{conclsuper} we take $s\in (1/2,1)$ and note that
$$
s^{n-1}|(u-t)(s\sigma)|\leq 2^{n-1}\rho^{n-1} |(u-t)(\rho\sigma)|+2^{n-1}\int_{1/2}^1 r^{n-1} |u_r (r\sigma)| \,dr.
$$
Integrating in $\sigma\in S^{n-1}$, and then in $s\in (1/2,1)$, we conclude \eqref{conclsuper}.
Finally, to prove \eqref{conclsuper-ball} we take $s\in (0,1/2)$ and note that (since $\rho\geq 1/2>s$)
\begin{align*}
s^{n-1}|(u-t)(s\sigma)| &\leq \rho^{n-1} |(u-t)(\rho\sigma)|+\int_{s}^\rho r^{n-1} |u_r (r\sigma)| \,dr \\
& \leq \rho^{n-1} |(u-t)(\rho\sigma)|+\int_{0}^1 r^{n-1} |u_r (r\sigma)| \,dr.
\end{align*}
Integrating in $\sigma\in S^{n-1}$, and then in $s\in (0,1/2)$, we conclude \eqref{conclsuper-ball} by using also \eqref{conclsuper}.
\end{proof}
\section{$C^{\alpha}$ estimate}
\label{sect:Holder}
Here we prove the interior H\"older estimate for $n\leq 9$. It yields, in particular, an interior $L^\infty$ bound for stable solutions.
\begin{proof}[Proof of the H\"older estimate \eqref{eq:Ca L1 int} in Theorem \ref{thm:0}]
We may assume $3 \leq n \leq 9$. Indeed, in case $n\leq2$ one can easily reduce the problem to the case $n=3$ by adding artificial variables. This simply means looking at the function of two variables (or, respectively, of one variable) as a function of three variables which does not depend on the third Euclidean variable (or, respectively, on the last two variables). Note that both the equation and the stability condition are preserved under this procedure.
Using Theorem~\ref{thm:1} with $\rho=5/8$, we have
\begin{equation}\label{eq:11proof}
\int_{B_{1/2}}r^{2-n} u_{r}^2\,dx \leq \int_{B_{5/8}}r^{2-n} u_{r}^2\,dx
\leq C\int_{B_{15/16}\setminus B_{5/8}}|\nabla u|^2\,dx.
\end{equation}
Now, since $1/2<5/8<15/16<1$, we can cover the annulus $B_{15/16}\setminus B_{5/8}$ by a finite number of balls $B_j$, all of them with the same radius, that we choose small enough for the balls $2B_j$ (with same center as $B_j$ and twice its radius) to satisfy $2B_j\subset B_{1}\setminus B_{1/2}$. Now, let $t\in\mathbb{R}$. In each ball $2B_j$ we can apply Theorem~\ref{thm:2}, rescaled, with $u$ replaced by $u-t$, since $u-t$ is also a stable solution of the same equation. We deduce that
$$
\int_{B_j}|\nabla u|^2\,dx\leq C\Vert u-t\Vert_{L^1(2B_j)}^2 \leq C\Vert u-t\Vert_{L^1(B_{1}\setminus B_{1/2})}^2 .
$$
Adding up in $j$, this yields
\begin{equation}\label{eq:12proof}
\int_{B_{15/16}\setminus B_{5/8}}|\nabla u|^2\,dx\leq C\Vert u-t\Vert_{L^1(B_{1}\setminus B_{1/2})}^2
\end{equation}
for all $t\in\mathbb{R}$.
We now use estimate \eqref{new-int-ur} from Theorem \ref{thm:3} to see that
\begin{equation*}
\Vert u-t\Vert_{L^1(B_{1}\setminus B_{1/2})}^2\le C\Vert u_r\Vert_{L^1(B_{1}\setminus B_{1/2})}^2 \le
C\Vert u_r\Vert_{L^2(B_{1}\setminus B_{1/2})}^2\le
C\int_{B_{1}\setminus B_{1/2}} r^{2-n} u_r^2\,dx
\end{equation*}
for some $t\in\mathbb{R}$ (which depends on $u$) ---it is here that we make the choice of the constant $t$.
This, together with \eqref{eq:11proof} and \eqref{eq:12proof}, leads to
$$
\int_{B_{1/2}}r^{2-n} u_{r}^2\,dx \leq C\int_{B_{1}\setminus B_{1/2}} r^{2-n} u_r^2\,dx.
$$
Since the right-hand side equals $C\int_{B_{1}} r^{2-n} u_r^2\,dx-C\int_{B_{1/2}} r^{2-n} u_r^2\,dx$, we can write this inequality, in an equivalent way, as
$$
\int_{B_{1/2}}r^{2-n} u_{r}^2\,dx \leq \theta \int_{B_{1}} r^{2-n} u_r^2\,dx
$$
for the dimensional constant $\theta= \frac{C}{1+C}\in (0,1)$.
For every $\rho \in (0,1)$, this estimate applied to the stable solution $u(\rho\, \cdot)$ yields
\begin{equation}\label{recurrence}
\int_{B_{\rho/2}}r^{2-n} u_{r}^2\,dx \leq \theta\int_{B_{\rho}} r^{2-n} u_r^2\,dx
\end{equation}
(note that the integrals are scale invariant).
This inequality can be iterated, in the balls of radius $2^{-j}$ centered at $0$, for $j\ge 1$ an integer, to obtain
\begin{equation*}
\int_{B_{2^{-j}}}r^{2-n} u_{r}^2\,dx \leq \theta^{j-1} \int_{B_{1/2}}r^{2-n} u_{r}^2\,dx.
\end{equation*}
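This geometric decay self-improves to a power decay by a standard dyadic argument. Indeed, defining $\alpha\in (0,1)$ by $2^{-2\alpha}=\theta$ and choosing, for a given $\rho\leq 1/2$, the integer $j\geq 1$ with $2^{-j-1}<\rho\leq 2^{-j}$, we have
\begin{equation*}
\int_{B_{\rho}}r^{2-n} u_{r}^2\,dx \leq \int_{B_{2^{-j}}}r^{2-n} u_{r}^2\,dx \leq \theta^{j-1} \int_{B_{1/2}}r^{2-n} u_{r}^2\,dx\leq C\rho^{2\alpha} \int_{B_{1/2}}r^{2-n} u_{r}^2\,dx,
\end{equation*}
since $\theta^{j-1}=2^{-2\alpha (j-1)}=2^{4\alpha}\,\big(2^{-j-1}\big)^{2\alpha}\leq 2^{4\alpha}\rho^{2\alpha}$.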
Since $\theta\in (0,1)$, it follows that, for some dimensional $\alpha\in (0,1)$,
\begin{equation}\label{finaltoholder}
\int_{B_\rho} r^{2-n}u_r^2\,dx\leq C\rho^{2\alpha}\int_{B_{1/3}} r^{2-n} u_r^2\,dx \le C\rho^{2\alpha}\Vert\nabla u\Vert_{L^2(B_{1/2})}^2 \quad\text{ for all } \rho \leq 1/3,
\end{equation}
where we have used Theorem~\ref{thm:1} with $\rho=1/3$ in the last inequality. In particular, since $\rho^{2-n}\leq r^{2-n}$ in $B_\rho$, we conclude that
\begin{equation*}
\int_{B_\rho} u_r^2\,dx \le C\rho^{n-2+2\alpha}\Vert\nabla u\Vert_{L^2(B_{1/2})}^2
\le C\rho^{n-2+2\alpha}\Vert u\Vert_{L^1(B_{1})}^2 \quad\text{ for all } \rho \leq 1/3,
\end{equation*}
where we have used Theorem \ref{thm:2} in the last inequality.
Next, given $y\in \bar B_{1/12}$, we can apply the last estimate to the function $u_y(x):=u(y+\frac{x}{2})$, defined for $x\in B_{1}$, since $y+\frac{1}{2}B_{1}\subset B_{1}$. Denoting by
$$
u_{r_y}(x):= \frac{x-y}{|x-y|}\cdot \nabla u (x)
$$
the radial derivative based at the point $y$, we get
\begin{equation}\label{radial2}
\int_{B_\rho(y)} u_{r_y}^2\,dx\leq C\rho^{n-2+2\alpha}\Vert u\Vert_{L^1(B_1)}^2\quad \mbox{ for all }\,y \in \bar B_{1/12} \text{ and } \rho \leq 1/6.
\end{equation}
From \eqref{radial2} and the Cauchy-Schwarz inequality, we immediately get
\begin{equation}\label{radial1}
\int_{B_\rho(y)}|u_{r_y}|\,dx\leq C\rho^{n-1+\alpha}\Vert u\Vert_{L^1(B_1)}\quad \mbox{ for all }\,y \in \bar B_{1/12}\text{ and } \rho \leq 1/6.
\end{equation}
At this point we use Morrey's estimate, as stated and proven in Appendix \ref{app:morrey}. By Theorem~\ref{thmC2} applied to $u(\frac{\cdot}{3})$ with $\bar C= C\Vert u\Vert_{L^1(B_1)}$, \eqref{radial1} yields\footnote{In \cite{CFRS} we employed an alternative way to conclude the H\"older estimate. It consists of averaging \eqref{radial2} with respect to $y$ to get the analogous estimate with the radial derivative in the integrand replaced by the full gradient. From this, by classical estimates on Morrey spaces (see for instance \cite[Theorem 7.19]{GT}) one deduces the desired bound. Instead, in this paper we see that there is no need to average in $y$ since, as we show in Appendix \ref{app:morrey}, Morrey's proof involves, in fact, radial derivatives ---and not the full gradient.}
$$
\|u\|_{C^\alpha(\overline B_{1/12})}\leq C\|u\|_{L^1(B_{1})}.
$$
Finally, from this estimate, \eqref{eq:Ca L1 int} follows by a standard covering and scaling argument.
\end{proof}
\section{$W^{1,2+\gamma}$ estimate\vspace{.15cm}}
\label{sect:higher}
Here we establish a higher $L^{2+\gamma}$ integrability result for the gradient of the solution, as stated in Theorem~\ref{thm:0}. In \cite{CFRS} this result was used crucially to get compactness in $W^{1,2}$, locally in the interior, for any sequence of stable solutions that are uniformly bounded in $L^1$ ---here the solutions may correspond to different nonnegative nonlinearities. This was an essential tool in the non-quantitative proof of H\"older continuity in \cite{CFRS}, when controlling the full gradient by the radial derivative. Instead, in the current paper we do not use the higher integrability result to establish interior H\"older continuity.
The proof of the $W^{1,2+\gamma}$ estimate will be based on the following bound on every level set of the solution.\footnote{Since level sets appear here, it is worth noticing (even though we will not use this in the paper) that the quantity $\mathcal A$ in \eqref{defAAA} controls the second fundamental form of every level set of $u$. This was crucially used in \cite{C10}, in combination with the Sobolev-type inequality of Michael-Simon and Allard (applied on every level set of $u$), to prove regularity of stable solutions up to dimension $n \leq 4$ for all nonlinearities $f$ (including sign-changing nonlinearities).}
\begin{lemma}[\cite{CFRS}]\label{levelsets}
Let $u\in C^\infty (\overline B_1)$ be a stable solution of $-\Delta u=f(u)$ in $B_1\subset \mathbb{R}^n$, for some nonnegative function $f\in C^{1}(\mathbb{R})$.
Then, for almost every $t\in\mathbb{R}$, we have
\begin{equation}\label{ahgiohwiob1}
\int_{\{u=t\}\cap B_{1/2}} |\nabla u|^2 \, d\mathcal{H}^{n-1} \le C\Vert \nabla u\Vert^{2}_{L^{2}(B_{1})}
\end{equation}
for some dimensional constant $C$.
\end{lemma}
\begin{proof}
Let $\eta \in C^\infty_c(B_{3/4})$ with $\eta\equiv1$ in $B_{1/2}$.
Combining \eqref{hwuighwiu} with \eqref{hwuighwiu2} and \eqref{pass},
and using again Lemma \ref{conseqestab}, we get
\begin{equation}\label{absdiv}
\begin{split}
\int_{B_{1}} \big| {\rm div}(|\nabla u|\, \nabla u) \big| \eta^2 \,dx &\le C \int_{B_{1}} |\nabla u| \mathcal A\, \eta^2\,dx + \int_{B_{1}} 2|\nabla u|\,|\Delta u|\, \eta^2\,dx \\
&\hspace{-3cm} \leq C\left(\int_{B_1} |\nabla u|^2\eta^2\,dx\right)^{1/2} \left(\int_{B_1} \mathcal A^2 \eta^2\,dx\right)^{1/2} + C \|\nabla u\|_{L^{2}(B_{1})}^2\\
&\hspace{-3cm}\le C \|\nabla u\|_{L^{2}(B_{1})}^2.
\end{split}
\end{equation}
Now, by Sard's theorem, $\{u=t\}$ is a smooth hypersurface for almost every $t\in\mathbb{R}$. For such values of $t$, we can apply the divergence theorem and get
\begin{align*}
\int_{\{u=t\}\cap B_{1/2}} |\nabla u|^2 \, d\mathcal{H}^{n-1} & \leq \int_{\{u=t\}\cap B_{1}} |\nabla u|^2\eta^2 \,d\mathcal{H}^{n-1}
\\
& =- \int_{\{u>t\} \cap B_1 } {\rm div}\big(|\nabla u|\, \nabla u \,\eta^2\big) \,dx.
\end{align*}
This bound, combined with \eqref{absdiv}, gives \eqref{ahgiohwiob1}.
\end{proof}
\begin{proof}[Proof of the $W^{1,2+\gamma}$ estimate \eqref{eq:W12 L1 int} in Theorem \ref{thm:0}]
Since $u/ \|\nabla u\|_{L^{2}(B_{1})}$ is a stable solution of a new equation, we may assume $\|\nabla u\|_{L^{2}(B_{1})}=1$. This normalization will simplify the following proof.
First note that, by the Sobolev inequality with average for $W^{1,2}$ functions, we have
\begin{equation*}
\left(\int_{B_{1}} |u-\bar u|^p \, dx\right)^{\frac 1 p } \le C\left( \int_{B_1} |\nabla u|^2\,dx\right)^{\frac 1 2 }=C
\end{equation*}
for some dimensional exponent $p>2$ and constant $C$, where $\bar u := \average\int_{B_1} u$.
Using this and the coarea formula, we obtain
\begin{equation}\label{ahgiohwiob2}
\int_{\mathbb{R}} dt \int_{\{u=t\}\cap B_{1}\cap \{|\nabla u|\neq0\}} d\mathcal{H}^{n-1} |t-\bar u|^p \,|\nabla u|^{-1} = \int_{B_1} |u-\bar u|^p\, 1_{\{|\nabla u| \neq 0\}} \, dx \le C.
\end{equation}
Also, since $p>2$, we may choose dimensional constants $q>1$ and $\theta\in (0,1/3)$ such that $p/q = (1-\theta)/\theta$. Thus,
defining
\[
h(t) : = \max\big\{1, |t-\bar u|\big\}
\]
and using the coarea formula and H\"older's inequality (note that $p\theta -q(1-\theta)=0$), we obtain
\[
\begin{split}
\int_{B_{1/2}} |\nabla u|^{3-3\theta} \,dx&=\int_{\mathbb{R}} dt\int_{\{u=t\}\cap B_{1/2}\cap \{|\nabla u|\neq0\}} d\mathcal{H}^{n-1} h(t)^{p\theta -q(1-\theta)} |\nabla u|^{-\theta + 2(1-\theta)}
\\
&\le \left(\int_{\mathbb{R}} dt\int_{\{u=t\}\cap B_1\cap \{|\nabla u|\neq0\}} d\mathcal{H}^{n-1} h(t)^{p} |\nabla u|^{-1} \right)^\theta \cdot \\
& \hspace{3cm} \cdot \bigg(\int_{\mathbb{R}} dt\int_{\{u=t\}\cap B_{1/2}} d\mathcal{H}^{n-1}\, h(t)^{-q} |\nabla u|^2 \bigg)^{1-\theta}.
\end{split}
\]
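The exponents in this application of H\"older's inequality match up as follows: by the coarea formula, the integrand produced by $|\nabla u|^{3-3\theta}$ on each level set is $|\nabla u|^{(3-3\theta)-1}=|\nabla u|^{-\theta+2(1-\theta)}$, and since $h(t)^{p\theta-q(1-\theta)}=1$ we may factor
\begin{equation*}
h(t)^{p\theta -q(1-\theta)}\, |\nabla u|^{-\theta + 2(1-\theta)}=\big( h(t)^{p}\, |\nabla u|^{-1}\big)^{\theta}\,\big( h(t)^{-q}\, |\nabla u|^{2}\big)^{1-\theta},
\end{equation*}
so that H\"older's inequality with exponents $1/\theta$ and $1/(1-\theta)$ gives the two factors above.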
Observe now that, thanks to \eqref{ahgiohwiob2} and the definition of $h(t)$, we have
\begin{align*}
\int_{\mathbb{R}} dt & \int_{\{u=t\}\cap B_1\cap \{|\nabla u|\neq0\}} d\mathcal{H}^{n-1} h(t)^{p} |\nabla u|^{-1} \\
& \leq \int_{\bar u-1}^{\bar u+1} dt\int_{\{u=t\}\cap B_1\cap \{|\nabla u|\neq0\}} d\mathcal{H}^{n-1} |\nabla u|^{-1} +C\\
&\leq |B_1|+C\leq C.
\end{align*}
Also, since $q>1$ it follows that $\int_{\mathbb{R}} h(t)^{-q}dt$ is finite, and thus \eqref{ahgiohwiob1} leads to
\[
\int_{\mathbb{R}} dt \, h(t)^{-q} \int_{\{u=t\}\cap B_{1/2}} d\mathcal{H}^{n-1} |\nabla u|^2\le C\int_{\mathbb{R}} h(t)^{-q}\,dt \le C.
\]
Therefore, we have proven that
$\int_{B_{1/2}} |\nabla u|^{3-3\theta} \,dx \le C$
for some dimensional constants $\theta\in(0,1/3)$ and $C$; this is the desired estimate with $\gamma=1-3\theta>0$.
\end{proof}
We finish this section by pointing out the necessary changes in the previous proofs to treat the case $f\geq -K$, as claimed in Remark~\ref{fbddbelow}. First, the estimates of Lemma~\ref{lem:hessian} and Theorem~\ref{thm:2} still hold after adding $K$, in their right-hand sides, to the $L^2$ norm of $\nabla u$ and to the $L^1$ norm of $u$. These changes come from the argument in \eqref{LapL1}, which is now done for the superharmonic function $v:= u-K |x|^2/(2n)$ instead of $u$. Second, Theorem~\ref{thm:3}, applied to $v$ instead of $u$, gives its validity also for $f\geq -K$ after adding $K$, in its right-hand sides, to the $L^1$ norm of $u_r$.
Next, one proceeds to the proof of the H\"older estimate of Theorem~\ref{thm:0}, as given in Section~\ref{sect:Holder}. Here one must replace inequality~\eqref{recurrence}, to be iterated, by
$$
\int_{B_{\rho/2}}r^{2-n} u_{r}^2\,dx \leq \theta \left( \int_{B_{\rho}} r^{2-n} u_r^2\,dx + \rho^4 K^2\right).
$$
Now, using Lemma~8.23 of \cite{GT} (taking $\mu=1/2$, for instance), we see that \eqref{finaltoholder} still holds when adding $K$ to the $L^2$ norm of $\nabla u$. Finally, with this same change, all arguments in the current section remain the same and establish the $W^{1,2+\gamma}$ estimate when $f\geq -K$.
\bigskip\medskip
\centerline{\large{\sc Part II: Boundary regularity}}
\addtocontents{toc}{\textsc{\hspace{.2cm} Part II: Boundary regularity \vspace{.1cm}}}
\medskip
In this second part of the article, we establish the boundary regularity results. Sections \ref{sect:boundary-L1radial} and \ref{sect:boundary-conclusion} are the main novelties. They significantly simplify the proofs in \cite{CFRS}, which used delicate compactness, blow-up, and Liouville-type arguments.
With $\mathbb{R}^n_+=\{x\in\mathbb{R}^n\, :\, x_n>0\}$ and $r=|x|$, we use the notation
$$
B_\rho^+=\mathbb{R}^n_+\cap B_\rho, \quad A_{\rho, \overline\rho} := \{\rho< r< \overline\rho\},\quad A^+_{\rho, \overline\rho} := \{x_n>0, \rho< r< \overline\rho\},
$$
and
$$
\partial^0 \Omega= \{x_n=0\}\cap \partial \Omega \quad\text{and}\quad \partial^+ \Omega=\mathbb{R}^n_+\cap \partial \Omega
$$
for an open set $\Omega\subset \mathbb{R}^n_+$.
\section{The boundary weighted $L^{2}$ estimate for radial derivatives}
\label{sect:bdry-weighted}
The following is the boundary analogue of the key lemma that yields an interior weighted $L^2$ bound for the radial derivative. The proof is essentially the same as in the interior case. Note that, in the following statement, the test function $\eta$ need not vanish on $\partial^0 B_1^+$.
\begin{lemma}[\cite{CFRS}]
\label{lem:x Du def}
Let $u\in C^\infty(\overline{B_1^+})$ be a stable solution of $-\Delta u=f(u)$ in~$B_1^+\subset\mathbb{R}^n$, with $u=0$ on $\partial^0 B_1^+$, for some nonlinearity $f\in C^1(\mathbb{R})$.
Then,
\begin{equation}\label{basic}
\int_{B_1^+} \Big(\big\{(n-2)\eta + 2 x\cdot\nabla \eta \big\}\,\eta\, |\nabla u|^2 - 2(x\cdot \nabla u) \nabla u\cdot \nabla(\eta^2) - |x\cdot \nabla u|^2 |\nabla \eta|^2 \Big)\,dx \le 0
\end{equation}
for all Lipschitz functions $\eta$ vanishing on $\partial B_1$. As a consequence, given $\lambda >1$ and $\rho \in (0,1/\lambda)$, we have
\begin{equation}\label{bdry-notweighted}
\int_{B_\rho^+} |\nabla u|^2 \,dx \le C_\lambda \int_{A^+_{\rho,\lambda\rho}} |\nabla u|^2\,dx \qquad \text{if } n\ge 3,
\end{equation}
and
\begin{equation*}
\int_{B_\rho^+} r^{2-n} u_r^2 \,dx \le C_\lambda \, \rho^{2-n}\int_{A^+_{\rho,\lambda\rho}} |\nabla u|^2\,dx \qquad \text{if } 3\le n\le 9,
\end{equation*}
where $C_\lambda$ is a constant depending only on $n$ and $\lambda$.
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lem:x Du def} and of Theorem \ref{thm:1bdry}]
As in the interior case, we use $\xi=\mathbf{c}\eta =(x\cdot \nabla u)\,\eta$ as a test function in the stability inequality in $B_1^+$. Since $\mathbf{c}=x\cdot \nabla u$ vanishes on $\partial^0B_1^+$, we do not need to require $\eta$ to vanish on this set. Now, the integration by parts argument in the beginning of Subsection~\ref{subsec-proofs} leads to
\begin{equation*}
\int_{B_1^+} \bigl( \Delta \mathbf{c}+f'(u)\mathbf{c}\bigr) \mathbf{c}\,\eta^{2}\, dx \leq
\int_{B_1^+} \mathbf{c}^{2}\left|\nabla \eta\right|^{2} \, dx.
\end{equation*}
From this, repeating the same computations as in the interior case (in which two boundary integrals on $\partial^0B_1^+$ now arise, but they vanish), we deduce \eqref{basic}.
Next, let $\lambda>1$ and $\psi\in C^\infty_c (B_\lambda)$ be a nonnegative radially nonincreasing function with $\psi\equiv 1$ in $B_{1}$. For $\rho\in (0,1/\lambda)$, set $\psi_\rho(x) := \psi(x/\rho)$. Note that $|\nabla \psi_\rho|\leq C_\lambda/\rho$ (where $C_\lambda$ depends only on $\lambda$) and that $\nabla \psi_\rho$ vanishes outside of the annulus $A_{\rho,\lambda\rho}$.
Choosing $\eta=\psi_\rho$ in \eqref{basic}, we immediately deduce
\eqref{bdry-notweighted}.
To derive the last estimate of the lemma, for $a<n$ and $\varepsilon\in(0,\rho)$ we use the Lipschitz function $\eta_\varepsilon(x):=\min\{r^{-a/2},\varepsilon^{-a/2}\}\psi_\rho(x)$ as a test function in \eqref{basic}.
Throwing away the term $\int_{B^+_\varepsilon}(n-2)\,\eta_\varepsilon^2|\nabla u|^2\,dx$, we obtain
\begin{multline*}
\int_{B_{\lambda\rho}^+\setminus B_\varepsilon^+} \Big\{(n-2-a) |\nabla u|^2 + \Big( 2a -\frac{a^2}{4} \Big) u_r^2 \Big\} r^{-a} \psi_\rho^2\,dx \\
\le C_{a,\lambda}\, \rho^{-a}\int_{B_{\lambda\rho}^+\setminus B_\rho^+} |\nabla u|^2\,dx
\end{multline*}
for some constant $C_{a,\lambda}$ depending only on $n$, $a$, and $\lambda$.
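Let us sketch, for the reader's convenience, where the two coefficients come from (disregarding the cutoff $\psi_\rho$, whose gradient produces the error term on the right-hand side, and the truncation at level $\varepsilon$): for $\eta=r^{-a/2}$ one has $x\cdot\nabla\eta=-\frac a2\,r^{-a/2}$ and $|\nabla\eta|^2=\frac{a^2}{4}\,r^{-a-2}$, so that, using $x\cdot\nabla u=ru_r$, the three terms in \eqref{basic} become
\begin{align*}
\big\{(n-2)\eta+2\,x\cdot\nabla\eta\big\}\,\eta\,|\nabla u|^2&=(n-2-a)\,r^{-a}|\nabla u|^2,\\
-2(x\cdot\nabla u)\,\nabla u\cdot\nabla(\eta^2)&=2a\,r^{-a}u_r^2,\\
-|x\cdot\nabla u|^2\,|\nabla\eta|^2&=-\frac{a^2}{4}\,r^{-a}u_r^2.
\end{align*}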
Choosing $a : = n-2$, since $ 2a -\frac{a^2}{4} = (n-2)\bigl(2-\frac{n-2}{4}\bigr)=\frac14(n-2)(10-n) >0$ for $3\le n\le 9$, we deduce
\[
\int_{B_{\lambda\rho}^+\setminus B_\varepsilon^+} r^{2-n} u_r^2 \, \psi_\rho^2\,dx \le C_\lambda \,\rho^{2-n}\int_{B_{\lambda\rho}^+\setminus B_\rho^+} |\nabla u|^2\,dx.
\]
Recalling that $\psi_\rho\equiv 1$ in $B_\rho$ and
letting $\varepsilon\downarrow0$, we conclude the proof.
\end{proof}
\section{Boundary Hessian and $W^{1,2+\gamma}$ estimates}
\label{sect:boundary-higher}
The following lemma, which is based on a Pohozaev identity, will be crucial and used several times in what follows. From now on, we will assume the solution to be nonnegative and the nonlinearity $f$ to be nonnegative and nondecreasing.
\begin{lemma}[\cite{CFRS}]\label{lem:auxbdry2}
Let $u\in C^\infty(\overline{B^+_1})$ be a nonnegative stable solution of $-\Delta u=f(u)$ in~$B^+_1\subset\mathbb{R}^n$, with $u=0$ on $\partial^0 B^+_1$.
Assume that $f\in C^1(\mathbb{R})$ is nonnegative and nondecreasing.
Then,
\begin{equation*}
\| u_\nu\|_{L^2(\partial^0 B^+_{7/8})} \leq C\|\nabla u\|_{L^2(B^+_1)},
\end{equation*}
where $u_\nu=-u_{n}$ is the exterior normal derivative of $u$ on $\partial^0 B_1^+$ and $C$ is a dimensional constant.
\end{lemma}
\begin{proof}
Take a cut-off function $\eta\in C^\infty_c(B_1)$ such that $\eta=1$ in $B_{7/8}$,
and consider the vector field $\mathbf{X}(x):= x+\boldsymbol e_n$. Multiplying the identity
\[
{\rm div}\big( | \nabla u|^2\mathbf{X}- 2(\mathbf{X}\cdot \nabla u) \nabla u \big) = (n-2) |\nabla u|^2 - 2(\mathbf{X}\cdot \nabla u) \Delta u
\]
by $\eta^2$, integrating in $B_1^+$,
and taking into account that $u_\nu^2=|\nabla u|^2$ on $\partial^0 B^+_1$ since $u=0$ on this set, we obtain
\[\begin{split}
\int_{\partial^0 B^+_{1}} u_\nu^2 \, \eta^2\,d\mathcal H^{n-1} - &\int_{B^+_{1}} \big( | \nabla u|^2\mathbf{X}- 2(\mathbf{X}\cdot \nabla u) \nabla u \big)\cdot \nabla\eta^2 \,dx
\\
&\qquad=\int_{B^+_{1}} \big((n-2) |\nabla u|^2 - 2(\mathbf{X}\cdot \nabla u) \Delta u\big) \eta^2 \,dx.
\end{split}\]
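Here, the boundary integral over $\partial^0 B^+_1$ arising from the divergence theorem can be computed explicitly: on this set the outward unit normal is $\nu=-\boldsymbol e_n$, the condition $u=0$ forces $\nabla u=u_n\boldsymbol e_n$ (so that $\mathbf{X}\cdot\nabla u=u_n$), and $\mathbf{X}\cdot\nu=-(x_n+1)=-1$; hence
\[
\big(|\nabla u|^2\mathbf{X}-2(\mathbf{X}\cdot\nabla u)\nabla u\big)\cdot\nu
=-u_n^2-2u_n(\nabla u\cdot\nu)=-u_n^2+2u_n^2=u_\nu^2,
\]
which is precisely the integrand in the first term above.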
Since, for $F(t) : = \int_0^t f(s)ds$ we have $\mathbf{X}\cdot \nabla (F(u))= f(u)\mathbf{X}\cdot \nabla u = - \Delta u\, \mathbf{X}\cdot \nabla u$, we deduce
\[\begin{split}
\int_{\partial^0B^+_{1}} u_\nu^2\, \eta^2\, d\mathcal H^{n-1} &
\le C\int_{B^+_{1}} |\nabla u|^2\,dx + 2\int_{B^+_{1}} \mathbf{X}\cdot \nabla (F(u)) \eta^2\,dx
\\
&= C\int_{B^+_{1}} |\nabla u|^2\,dx - 2 \int_{B^+_{1}} F(u) \,{\rm div}(\eta^2\mathbf{X})\,dx.
\end{split}
\]
Observe now that, since $f$ is nondecreasing, $0\leq F(t) \leq tf(t)$ for all $t \geq 0$.
Hence, noticing that the function $g:=|{\rm div}(\eta^2\mathbf{X})|$ is Lipschitz and that $u$ and $f$ are nonnegative,
we have
\begin{align*}
- \int_{B^+_{1}} F(u) \,{\rm div}(\eta^2\mathbf{X})\,dx &\le \int_{B^+_{1}} u\,f(u)\,g\,dx
= -\int_{B^+_{1}} u\,\Delta u\,g\,dx\\
& = \int_{B^+_{1}}\big(|\nabla u|^2g + u\,\nabla u\cdot \nabla g\big)\,dx \\
&\leq C\int_{B^+_{1}} \big(u^2+|\nabla u|^2\big)\,dx.
\end{align*}
We conclude using Poincar\'e's inequality for functions $u$ vanishing on $\partial^0 B^+_{1}$.
\end{proof}
The following is the boundary analogue of the Sternberg and Zumbrun~\cite{SZ} inequality. It bounds the quantity $\mathcal A$ defined in \eqref{defAAA}.
Its proof, taken from \cite{CFRS}, required a new key idea, since the test function $|\nabla u|$ used in the interior case does not vanish on $\partial^0 B_1^+$ (and hence cannot be used in the stability inequality).
\begin{lemma}[\cite{CFRS}]\label{lem:bdry-CalA}
Let $u\in C^\infty(\overline{B^+_{1}})$ be a nonnegative stable solution of $-\Delta u=f(u)$ in~$B^+_1\subset\mathbb{R}^n$, with $u=0$ on $\partial^0 B^+_1$.
Assume that $f\in C^1(\mathbb{R})$ is nonnegative and nondecreasing.
Then,
\begin{equation}\label{bdry-CalA}
\|\mathcal A\|_{L^{2}(B^+_{7/8})} \le C \| \nabla u\|_{L^{2}(B^+_{1})}
\end{equation}
for some dimensional constant $C$.
\end{lemma}
\begin{proof}
The key idea is to use $\bigl(|\nabla u|-u_{n}\bigr)\eta$, properly regularized,
as test function in the stability inequality. Note that, by taking $\eta$ with compact support in $B_1$, this function vanishes on $\partial B^+_{1}$, since $|\nabla u|-u_{n}=0$ on $\partial^0 B_1^+$.
\vspace{2mm}\noindent
{\it Step 1: We prove that
\begin{equation}\label{hwioghwoih}
\int_{B^+_{1/2}} \mathcal A^2\,dx \le C\int_{B^+_{3/4}} |\nabla u| \, |D^2 u|\, dx + C \int_{B^+_{1}} |\nabla u|^2\,dx,
\end{equation}
where $\mathcal A$ is defined by \eqref{defAAA}.}
For $\delta >0$, we set
\begin{equation*}
\phi_\delta(z) := |z|1_{\{|z|>\delta\}} + \Big(\frac{\delta}{2} +\frac{|z|^2}{2\delta}\Big) 1_{\{|z|<\delta\}},
\end{equation*}
a convex $C^{1,1}$ regularization of the absolute value. Notice that $\phi_\delta(\nabla u)\in W^{2,\infty}(B^+_{1})$, since the first derivatives of $\phi_\delta$ match on $\{|z|=\delta\}$.
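Indeed, a direct computation gives
\[
\nabla\phi_\delta(z)=\frac{z}{|z|}\,1_{\{|z|>\delta\}}+\frac{z}{\delta}\,1_{\{|z|<\delta\}},
\]
and both expressions agree (and equal $z/\delta$) on $\{|z|=\delta\}$, while $D^2\phi_\delta$ is bounded by $C/\delta$.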
Moreover, since $u$ is nonnegative and superharmonic,
the Hopf lemma yields
$|\nabla u|\ge c>0$ on $\partial^0 B^+_{3/4}$ for some constant~$c$ ---unless $u\equiv 0$, in which case there is nothing to prove.
Hence, for $\delta>0$ small enough we have
\begin{equation}\label{inaneighborhood}
\phi_\delta(\nabla u) = |\nabla u| \quad \mbox{in a neighborhood of }\partial^0 B^+_{3/4}\mbox{ inside }\overline{B^+_{3/4}}.
\end{equation}
Choosing $\delta>0$ small enough such that \eqref{inaneighborhood} holds, we set
\[
\mathbf{c_\delta} := \phi_\delta(\nabla u) -u_{n}
\]
and we take $\eta\in C^\infty_c (B_{3/4})$ satisfying $\eta\equiv1$ in $B_{1/2}$. Now, since $\mathbf{c_\delta}$ vanishes on $\partial^0 B^+_{3/4}$, we are allowed to take $\xi = \mathbf{c_\delta} \eta$ as a test function in the stability inequality \eqref{eq:07} with $\Omega = B^+_1$ ---recall that, as pointed out in the beginning of Subsection \ref{subsec-proofs}, $\mathbf{c_\delta} \in W^{2,\infty}(B^+_{1})$ suffices for the integration by parts argument that allows one to pass from \eqref{stabilityLip} to \eqref{eq:07}. We obtain that
\begin{equation}\label{ajgpen1}
\int_{B^+_{1}} \big(\Delta \mathbf{c_\delta} +f'(u)\mathbf{c_\delta} \big)\,\mathbf{c_\delta} \, \eta^2 \,dx\le \int_{B^+_{1}} \mathbf{c}_\delta^2 |\nabla \eta|^2 \,dx.
\end{equation}
Note now that, since $\Delta u_{n} + f'(u)u_{n}=0$,
\begin{equation}\label{ajgpen2}
\begin{split}
\bigl(\Delta \mathbf{c_\delta} + f'(u)\mathbf{c_\delta}\bigr) \,\mathbf{c_\delta} &= \bigl({\Delta[\phi_\delta(\nabla u)]} +f'(u)\phi_\delta(\nabla u)\bigr) \phi_\delta(\nabla u)
\\
& \hspace{25mm} - \big( {\Delta[\phi_\delta(\nabla u)]} +f'(u) \phi_\delta(\nabla u)\big) u_{n}.
\end{split}
\end{equation}
Next, since $\Delta \nabla u = -f'(u) \nabla u$, we also have
\begin{align}
\bigl({\Delta[\phi_\delta(\nabla u)]} & +f'(u)\phi_\delta(\nabla u)\bigr) \phi_\delta(\nabla u)
\nonumber\\
& = f'(u)\phi_\delta(\nabla u)\Bigl(\phi_\delta(\nabla u)-\sum\nolimits_j (\partial_j\phi_\delta)(\nabla u)u_j\Bigr)
\label{may19}\\
& \hspace{2cm} +\,\phi_\delta(\nabla u)\sum\nolimits_{i,j,k} (\partial_{jk}^2\phi_\delta)(\nabla u)u_{ij}u_{ik}.\label{may19b}
\end{align}
Notice that, inside the set $\{|\nabla u|\leq \delta\}$, the term \eqref{may19b} is nonnegative since $\phi_\delta$ is convex, while the term \eqref{may19} is equal to
\[f'(u)\phi_\delta(\nabla u)\Bigl(\frac{\delta}{2}-\frac{|\nabla u|^2}{2\delta}\Bigr)\]
and, therefore, it is also nonnegative (all three factors are nonnegative).
On the other hand, inside the set $\{|\nabla u|>\delta\}$, the term \eqref{may19} vanishes, while the term \eqref{may19b} equals~$\mathcal A^2$ ---recall that $\mathcal A$ is defined by \eqref{defAAA}.
Therefore, we conclude that
\begin{equation}\label{ajgpen3}
\bigl({\Delta[\phi_\delta(\nabla u)]} +f'(u)\phi_\delta(\nabla u)\bigr) \phi_\delta(\nabla u) \ge \mathcal A^2 \, 1_{\{|\nabla u|>\delta\}}.
\end{equation}
Coming back to \eqref{ajgpen2}, since $\eta\in C^\infty_c (B_{3/4})$, integrating by parts and recalling \eqref{inaneighborhood} we have
\begin{equation}\label{ahoighwioh}
\begin{split}
\hspace{-2.1mm} \int_{B^+_{1}} {\Delta[\phi_\delta(\nabla u)]}\, u_{n} \,\eta^2\,dx &
\\
&\hspace{-40.5mm} = \int_{B^+_{1}} \phi_\delta(\nabla u) \, \Delta u_{n} \,\eta^2\,dx
+\int_{B^+_{1}} \left( 2 \phi_\delta(\nabla u) \, \nabla u_{n}\cdot \nabla (\eta^2) + \phi_\delta(\nabla u) \,u_{n} \,\Delta (\eta^2)\right)dx
\\
&\hspace{-20mm} -\int_{\partial^0 B^+_{1}} \left( \partial_{n} (|\nabla u|) \,u_{n}\, \eta^2 - |\nabla u| \partial_{n} (u_{n}\, \eta^2) \right) d\mathcal H^{n-1}.
\end{split}
\end{equation}
Now, on $\partial^0 B^+_{1}$ we have
$\partial_n (|\nabla u|)\, u_{n} = u_{nn}u_{n}=|\nabla u| u_{nn}$ ---since the tangential derivatives of $u$ vanish on this set and $u_n=|\nabla u|\geq 0$ there--- and therefore, thanks to Lemma \ref{lem:auxbdry2},
\begin{equation}\label{may19c}\begin{split}
& \bigg| \int_{\partial^0 B^+_{1}} \left( \partial_{n} (|\nabla u|) \,u_{n}\, \eta^2 - |\nabla u| \partial_{n} (u_{n}\, \eta^2) \right) d\mathcal H^{n-1} \bigg| \\
& \hspace{3cm} \le C\int_{\partial^0 B^+_{3/4}} |u_\nu|^2\,d\mathcal H^{n-1} \le C\int_{B^+_{1}}|\nabla u|^2\, dx.
\end{split}\end{equation}
Thus, by \eqref{ahoighwioh} and \eqref{may19c}, we conclude that
\begin{align*}
& \left| \int_{B^+_{1}} \big( {\Delta[\phi_\delta(\nabla u)]} +f'(u) \phi_\delta(\nabla u)\big)u_{n}\, \eta^2\,dx\right|\\ & \hspace{2cm}\le C\int_{B^+_{3/4}} (|\nabla u|+\delta)\left( |D^2u| + |\nabla u|\right)\,dx +C\int_{B^+_{1}}|\nabla u|^2\, dx.
\end{align*}
Combining this last bound with \eqref{ajgpen1}, \eqref{ajgpen2}, and \eqref{ajgpen3}, we finally obtain
\begin{align*}
&\int_{B^+_{1}} \mathcal A^2\eta^2 1_{\{|\nabla u|>\delta\}}\,dx \\ & \qquad
\le C\int_{B^+_{3/4}} \left\{ (|\nabla u|+\delta)^2+\big( |\nabla u|+\delta \big) \left( |D^2u| + |\nabla u|\right)\right\} \,dx+C\int_{B^+_{1}}|\nabla u|^2\, dx.
\end{align*}
Since $\eta\equiv1$ in $B_{1/2}$, by letting $\delta\downarrow 0$ we conclude \eqref{hwioghwoih},
as desired.
\vspace{2mm}\noindent
{\it Step 2: We show that
\[
\int_{B^+_{{1/2}}} \mathcal A^2\,dx \le \varepsilon \int_{B^+_{1}} \mathcal A^2\,dx + \frac{C}{\varepsilon}\|\nabla u\|_{L^2(B^+_{1})}^2
\]
for all $\varepsilon\in (0,1)$.}
Let $\eta \in C^\infty_c(B_{7/8})$ satisfy $\eta\equiv1$ in $B_{{3/4}}$.
From \eqref{hwuighwiu} and
\eqref{hwuighwiu2} ---note that these are pointwise relations that also hold for solutions in half-balls instead of full balls---, we have that
\begin{equation}\label{ahigohwiowb}
-\int_{B^+_{1}} {\rm div}\big( |\nabla u| \nabla u\big) \eta^2\,dx \ge \int_{B^+_{1}} \big( -2 \Delta u - C\mathcal A\big)|\nabla u|\,\eta^2\,dx.
\end{equation}
Hence, since $|D^2u|\leq |\Delta u|+C\mathcal A$ a.e.\ ---recall \eqref{HessbyLapl}--- and since $\Delta u \leq 0$, \eqref{ahigohwiowb} yields
\begin{equation}\label{ahigohwiowb2}
\int_{B^+_{1}}|\nabla u|\, |D^2u|\,\eta^2\,dx \leq
\biggl|\frac12\int_{B^+_{1}} {\rm div}\big( |\nabla u| \nabla u\big) \eta^2\,dx \biggr|+C \int_{B^+_{1}} \mathcal A\,|\nabla u|\,\eta^2\,dx.
\end{equation}
On the other hand,
using Lemma \ref{lem:auxbdry2} we obtain
\begin{equation}\label{ahigohwiowb3}
\begin{split}
\hspace{-1mm}\biggl|\int_{B^+_{1}} {\rm div}\big( |\nabla u| \nabla u\big) \eta^2\,dx\biggr| &=
\biggl|-\int_{\partial B^+_{1}} (u_\nu)^2 \eta^2 \,d\mathcal H^{n-1} - \int_{B_1^+} |\nabla u| \nabla u \cdot \nabla (\eta^2)\,dx\biggr|\\
&\leq C \int_{B^+_{1}} |\nabla u|^2\,dx.
\end{split}
\end{equation}
Thus, by \eqref{ahigohwiowb2}
and \eqref{ahigohwiowb3},
we see that
\begin{equation}
\label{eq:Du D2u}
\int_{B^+_{1}} |\nabla u|\, |D^2u| \,\eta^2\,dx \le C\int_{B^+_{1}} \mathcal A \,|\nabla u|\,\eta^2\,dx + C\int_{B^+_{1}} |\nabla u|^2\,dx.
\end{equation}
Recalling that $\eta\equiv1$ in $B_{{3/4}}$,
\eqref{hwioghwoih} and \eqref{eq:Du D2u} yield, for every $\varepsilon\in(0,1)$,
\[\begin{split}
\int_{B^+_{{1/2}}} \mathcal A^2\,dx
&\le C \int_{B^+_{{3/4}}} |\nabla u|\, |D^2u| \,dx+C\|\nabla u\|_{L^2(B^+_{1})}^2
\\
&\le C \int_{B^+_{1}} \mathcal A\, |\nabla u|\,dx + C\|\nabla u\|_{L^2(B^+_{1})}^2\\
&\le \varepsilon \int_{B^+_{1}} \mathcal A^2\,dx + \frac{C}{\varepsilon}\|\nabla u\|_{L^2(B^+_{1})}^2.
\end{split}\]
\vspace{2mm}\noindent
{\it Step 3: Conclusion.}
We first claim that
\begin{equation}\label{angoiwnown}
\rho^2\| \mathcal A \|^2_{L^2(\mathbb{R}^n_+\cap B_{\rho/2}(y))} \le C\varepsilon \rho^2\| \mathcal A \|^2_{L^2(\mathbb{R}^n_+\cap B_{\rho}(y))} +
\frac{C}{\varepsilon} \|\nabla u\|_{L^2(B^+_1)}^2
\end{equation}
for every ball $B_{\rho}(y)\subset B_{1}$ and $\varepsilon\in(0,1)$.
Note that $B_{\rho}(y)$ is not necessarily centered at a point on $\partial \mathbb{R}^n_+$. The proof of the lemma will be finished once this is shown. Indeed, \eqref{angoiwnown} and Lemma \ref{lem_abstract}, applied to the subadditive quantity
$$
\sigma(B):=\| \mathcal A \|^2_{L^2(\mathbb{R}^n_+ \cap B)} \quad\text{for } B\subset B_1
$$
(note here that the $L^2$ norm is not taken in $B$, but in its intersection with the upper half-space) lead to \eqref{bdry-CalA} with $B^+_{{7/8}}$ replaced by $B^+_{1/2}$. Finally, a standard
covering and scaling argument gives the same estimate with $B^+_{{7/8}}$ in its left-hand side.
It remains to show \eqref{angoiwnown}. To do this, it is easy to check that there is a dimensional number of balls $\{B_{\rho/16} (y_i)\}_i$ and $\{B_{3\rho/16} (z_j)\}_j$ which cover $\mathbb{R}^n_+\cap B_{\rho/2}(y)$ and have the following properties. Each ball $B_{\rho/16} (y_i)$ is interior, in the sense that the ball with twice its radius satisfies $B_{\rho/8} (y_i)\subset \mathbb{R}^n_+\cap B_{\rho}(y)\subset B_1^+$. The part of $\mathbb{R}^n_+\cap B_{\rho/2}(y)$ not covered by any of the previous balls is covered by the union of the half-balls $B^+_{3\rho/16} (z_j)$, where $z_j\in \partial\mathbb{R}^n_+$. In addition, the half-ball with twice its radius satisfies $B^+_{3\rho/8} (z_j)\subset \mathbb{R}^n_+\cap B_\rho (y)\subset B^+_1$.\footnote{This can be easily seen as follows.
Take a dimensional number of points $y_k\in B_{\rho/2} (y)$ such that $B_{\rho/2} (y) \subset\cup_k B_{\rho/16} (y_k)$. Keep those balls which are interior, in the sense that $B_{\rho/8} (y_i)\subset \mathbb{R}^n_+$. Note that we will also have $B_{\rho/8} (y_i)\subset B_{\rho}(y)$. To cover all of $\mathbb{R}^n_+\cap B_{\rho/2}(y)$, it remains to consider those balls $B_{\rho/16} (y_j)$ which intersect $\mathbb{R}^n_+$ and with $B_{\rho/8} (y_j)\not\subset\mathbb{R}^n_+ $. It follows that there exists a point $z_j\in \partial\mathbb{R}^n_+ \cap B_{\rho/8} (y_j)$. Now, the claimed covering property holds, since $B_{\rho/16} (y_j) \subset B_{3\rho/16} (z_j)$. In addition, $B_{3\rho/8} (z_j)\subset B_\rho (y)$.}
Now, in each interior ball $B_{\rho/16} (y_i)$ we use Lemma~\ref{stab-real-c}, rescaled, to deduce
$$
\rho^2\| \mathcal A \|^2_{L^2(B_{\rho/16}(y_i))} \le C \|\nabla u\|_{L^2(B_{\rho/8}(y_i))}^2 \le \frac{C}{\varepsilon} \|\nabla u\|_{L^2(B_1^+)}^2,
$$
since $\varepsilon <1$. Instead, for the balls $\{B_{3\rho/16} (z_j)\}_j$ we use the statement of Step~2 above (after rescaling and a translation to make $z_j\in\partial \mathbb{R}^n_+$ to be the origin) to get
\[\begin{split}
\rho^2\| \mathcal A \|^2_{L^2(B^+_{3\rho/16}(z_j))} & \le \varepsilon \rho^2\| \mathcal A \|^2_{L^2(B^+_{3\rho/8}(z_j))} +
\frac{C}{\varepsilon} \|\nabla u\|_{L^2(B^+_{3\rho/8}(z_j))}^2 \\
& \le \varepsilon \rho^2\| \mathcal A \|^2_{L^2(\mathbb{R}^n_+\cap B_{\rho}(y))} +
\frac{C}{\varepsilon} \|\nabla u\|_{L^2(B^+_1)}^2.
\end{split}\]
Adding all these inequalities we obtain \eqref{angoiwnown} ---notice that $\varepsilon$ is multiplied by a dimensional constant $C$ in \eqref{angoiwnown}.
\end{proof}
The following $L^1$ estimates for the full Hessian will be useful on several occasions: not only in this section, to control the gradient in $L^2$ by the function in $L^1$, but also in the next section, for the control of the $L^1$ norm of $u$ by its radial derivative in $L^1$.
\begin{lemma}[\cite{CFRS}]\label{lem:Hessbdry}
Let $u\in C^\infty(\overline{B^+_1})$ be a nonnegative stable solution of $-\Delta u=f(u)$ in~$B^+_1\subset\mathbb{R}^n$, with $u=0$ on $\partial^0 B^+_1$.
Assume that $f\in C^1(\mathbb{R})$ is nonnegative and nondecreasing.
Then,
\begin{equation}\label{bdry-product}
\Vert\, |\nabla u| \, D^2 u \, \Vert_{L^1(B^+_{3/4})} \leq C \Vert \nabla u \Vert^2_{L^{2}(B^+_{1})}
\end{equation}
and
\begin{equation}\label{massHessbdry}
\|D^2u\|_{L^1({B_{3/4}^+})} \leq C \|\nabla u\|_{L^2(B^+_1)}
\end{equation}
for some dimensional constant $C$.
\end{lemma}
\begin{proof}
We first show \eqref{bdry-product}. By taking $\eta \in C^\infty_c(B_{7/8})$ with $\eta\equiv1$ in $B_{3/4}$, the estimate follows from \eqref{eq:Du D2u}, the Cauchy-Schwarz inequality, and Lemma~\ref{lem:bdry-CalA}.
To prove \eqref{massHessbdry}, we choose a nonnegative function $\chi\in C_c^\infty(B_{7/8})$ with $\chi\equiv 1$ in $B_{3/4}$. Since $-\Delta u \geq 0$, we see that
\begin{equation}\label{massLaplbdry}
\begin{split}
\|\Delta u\|_{L^1({B_{3/4}^+})} & \leq \int_{B_{7/8}^+} (-\Delta u) \,\chi\, dx=-\int_{\partial^0 B_{7/8}^+} u_\nu \chi \, d\mathcal H^{n-1}+\int_{B_{7/8}^+} \nabla u\cdot \nabla \chi\,dx \\ & \leq C \|\nabla u\|_{L^2(B^+_1)},
\end{split}
\end{equation}
where we have used Lemma \ref{lem:auxbdry2}.
Recall now that $|D^2u|\leq |\Delta u| + C {\mathcal A}$ a.e.\ in $B_{3/4}^+$ ---note that the pointwise inequality in \eqref{HessbyLapl} also holds for solutions in half-balls instead of full balls.
Now, estimate \eqref{massHessbdry} follows from this inequality, \eqref{massLaplbdry}, and Lemma~\ref{lem:bdry-CalA}.
\end{proof}
With the weighted Hessian estimate \eqref{bdry-product} at hand, we can now control the gradient in~$L^2$ by the solution in $L^1$.
\begin{lemma}[\cite{CFRS}]\label{lem:bdry-nabla2byL1}
Let $u\in C^\infty(\overline{B^+_{1}})$ be a nonnegative stable solution of $-\Delta u=f(u)$ in~$B^+_1\subset\mathbb{R}^n$, with $u=0$ on $\partial^0 B^+_1$.
Assume that $f\in C^1(\mathbb{R})$ is nonnegative and nondecreasing.
Then,
\begin{equation*}
\|\nabla u\|_{L^{2}(B^+_{1/2})} \le C \| u\|_{L^{1}(B^+_{1})},
\end{equation*}
for some dimensional constant $C$.
\end{lemma}
\begin{proof}
We cover $B^+_{1/2}$ (except for a set of measure zero) with a family of disjoint open cubes $Q_j\subset\mathbb{R}^n_+$ whose side length depends only on $n$ and is small enough that $Q_j\subset B^+_{3/4}$.
We now combine the interpolation inequalities of Propositions~\ref{prop5.2} and \ref{Nash} in each cube $Q_j$ (we rescale them to pass from the unit cube to the cubes~$Q_j$), used with $p=2$ and $\tilde\varepsilon= \varepsilon^{3/2}$ for any given $\varepsilon\in(0,1)$. We obtain that
\begin{equation*}
\int_{Q_j}\abs{\nabla u}^{2}dx \leq C\varepsilon \int_{Q_j}\abs{\nabla u}\lvert D^2u\rvert\,dx+ C\varepsilon \int_{Q_j}\abs{\nabla u}^2dx+C\varepsilon^{-2-\frac{3n}{2}}\left( \int_{Q_j}\abs{u}\,dx\right)^2.
\end{equation*}
Now, using $Q_j\subset B^+_{3/4}$ and estimate
\eqref{bdry-product} from Lemma~\ref{lem:Hessbdry},
we deduce
\begin{equation*}
\int_{Q_j}\abs{\nabla u}^{2}dx \leq C\varepsilon \int_{B^+_1}\abs{\nabla u}^2dx+C\varepsilon^{-2-\frac{3n}{2}}\left( \int_{B^+_1}\abs{u}\,dx\right)^2.
\end{equation*}
Adding up all these inequalities (note that the number of cubes $Q_j$ depends only on~$n$), we get
\begin{equation*}
\|\nabla u\|_{L^{2}(B^+_{1/2})}^2 \le C\varepsilon \|\nabla u\|_{L^{2}(B^+_{1})}^2 + C\varepsilon^{-2-\frac{3n}{2}} \|u\|_{L^{1}(B^+_{1})}^2.
\end{equation*}
This estimate, applied to rescaled solutions, yields
\begin{equation}\label{bdry-grad-eps}
\rho^{n+2}\int_{B^+_{\rho/2}}|\nabla u|^2\,dx \leq
C\varepsilon \rho^{n+2}\int_{B^+_{\rho}}|\nabla u|^2\,dx
+ C\varepsilon^{-2-\frac{3n}{2}}\left(\int_{B^+_{\rho}}|u|\,dx\right)^2
\end{equation}
for all $\rho <1$ and $\varepsilon\in(0,1)$.
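Indeed, one applies the previous estimate to the rescaled function $u_\rho(x):=u(\rho x)$, which is a nonnegative stable solution of $-\Delta u_\rho=\rho^2 f(u_\rho)$ in $B_1^+$ (note that $\rho^2 f$ is still nonnegative and nondecreasing), and uses the identities
\[
\int_{B^+_{1/2}}|\nabla u_\rho|^2\,dx=\rho^{2-n}\int_{B^+_{\rho/2}}|\nabla u|^2\,dx
\qquad\text{and}\qquad
\int_{B^+_{1}}u_\rho\,dx=\rho^{-n}\int_{B^+_{\rho}}u\,dx.
\]
Multiplying the resulting inequality by $\rho^{2n}$ gives \eqref{bdry-grad-eps}.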
We now claim that, for all balls $B_\rho(y)\subset B_1$ (not necessarily contained in $B_1^+$) and every $\varepsilon\in(0,1)$, we have
\begin{equation}\label{all-balls-grad-eps}
\rho^{n+2}\int_{\mathbb{R}^n_+\cap B_{\rho/2}(y)}|\nabla u|^2\,dx \leq C\varepsilon \rho^{n+2}\int_{\mathbb{R}^n_+\cap B_{\rho}(y)}|\nabla u|^2\,dx
+C\varepsilon^{-2-\frac{3n}{2}}\Vert u \Vert^2_{L^1(B_1^+)}.
\end{equation}
To show this, we proceed exactly as in Step~3 of the proof of Lemma~\ref{lem:bdry-CalA}, where we proved \eqref{angoiwnown}. That is, we use a covering by balls which are either centered at $\partial\mathbb{R}^n_+$ or interior to $\mathbb{R}^n_+$. For the first ones we use \eqref{bdry-grad-eps} (with $\rho$ multiplied by a small dimensional factor), while for the interior ones we employ Theorem~\ref{thm:2} (recalling that $\varepsilon <1$). We conclude \eqref{all-balls-grad-eps}.
Finally, by Lemma \ref{lem_abstract} applied to the subadditive quantity
$\sigma(B):=\|\nabla u\|_{L^2(\mathbb{R}^n_+\cap B)}^2$, the result follows.
\end{proof}
The following higher integrability result is of interest in itself, but it will also be needed in the next section to control the $L^1$ norm of $u$ in a half-annulus by its radial derivative in~$L^1$.
\begin{lemma}[\cite{CFRS}]\label{lem:bdry-gamma-2}
Let $u\in C^\infty(\overline{B^+_{1}})$ be a nonnegative stable solution of $-\Delta u=f(u)$ in~$B^+_1\subset\mathbb{R}^n$, with $u=0$ on $\partial^0 B^+_1$.
Assume that $f\in C^1(\mathbb{R})$ is nonnegative and nondecreasing.
Then,
\begin{equation*}
\|\nabla u\|_{L^{2+\gamma}(B^+_{3/4})} \le C \| \nabla u\|_{L^{2}(B^+_{1})}
\end{equation*}
for some dimensional constants $\gamma>0$ and $C$.
\end{lemma}
\begin{proof}
\vspace{2mm}\noindent
{\it Step 1: We show that, for almost every $t>0$,}
\begin{equation}\label{bdry-level}
\int_{\{u=t\}\cap B_{3/4}^+} |\nabla u|^2 \, d\mathcal{H}^{n-1} \leq C\int_{B^+_{1}} |\nabla u|^2\,dx.
\end{equation}
Take $\eta \in C^\infty_c(B_{7/8})$ with $\eta\equiv1$ in $B_{{4/5}}$. We combine \eqref{hwuighwiu} and \eqref{hwuighwiu2} ---note that these are pointwise relations that also hold for solutions in half-balls instead of full balls--- with Lemma~\ref{lem:bdry-CalA} and \eqref{eq:Du D2u}, to get
\begin{align*}
\int_{B^+_{{4/5}}} \big| {\rm div}(|\nabla u|\, \nabla u) \big| \,dx &\le C \int_{B^+_{{4/5}}} |\nabla u| \,\mathcal A\,dx +\int_{B^+_{{4/5}}} -2|\nabla u|\,\Delta u\,dx \\
& \hspace{-3cm}\leq C\biggl(\int_{B^+_{{4/5}}} |\nabla u|^2\,dx\biggr)^{1/2} \biggl(\int_{B^+_{{4/5}}} \mathcal A^2\,dx\biggr)^{1/2}+ C\int_{B^+_{{7/8}}} \mathcal A^2\,dx+C\int_{B^+_{1}} |\nabla u|^2\,dx \\
&\hspace{-3cm}\le C\int_{B^+_{1}} |\nabla u|^2\,dx.
\end{align*}
Now, let $\zeta \in C^\infty_c(B_{4/5})$ with $\zeta\equiv1$ in $B_{3/4}$. By Sard's theorem, $\{u=t\}$ is a smooth hypersurface for almost every $t>0$. For such values of $t$, from the previous bound we deduce
\begin{equation*}
\begin{split}
\int_{\{u=t\}\cap B_{3/4}^+} |\nabla u|^2 \, d\mathcal{H}^{n-1} & \\ & \hspace{-2cm} \leq \int_{\{u=t\}\cap B_{4/5}^+} |\nabla u|^2\zeta^2 \,d\mathcal{H}^{n-1}
=- \int_{\{u>t\} \cap B_{4/5}^+} {\rm div}\big(|\nabla u|\, \nabla u \,\zeta^2\big) \,dx \\
& \hspace{-2cm} \le C\int_{B^+_{1}} |\nabla u|^2\,dx,
\end{split}
\end{equation*}
as claimed.
\vspace{2mm}\noindent
{\it Step 2: Conclusion.}
Here it is convenient to assume $\|\nabla u\|_{L^2(B^+_{1})}=1$, which can be achieved after multiplying $u$ by a constant.
Setting $h(t)=\max\{1,t\}$, by the Sobolev embedding for functions vanishing on $\partial^0 B^+_{1}$, we see that
\begin{equation}\label{ashgowobb}
\begin{split}
\int_{\mathbb{R}^+} dt & \int_{\{u=t\}\cap B^+_{1}\cap \{|\nabla u|\neq0\}} d\mathcal{H}^{n-1} h(t)^p \,|\nabla u|^{-1} \\
& \qquad\qquad \leq |B^+_{1}\cap \{u<1\}|+ \int_{B^+_{1}} u^p \, dx\leq C
\end{split}
\end{equation}
for some $p>2$.
Hence, choosing dimensional constants $q>1$ and $\theta\in (0,1/3)$ such that $p/q = (1-\theta)/\theta$, we see that
\begin{equation*}
\begin{split}
\int_{B^+_{3/4}} |\nabla u|^{3-3\theta}\,dx &=\int_{\mathbb{R}^+} dt\int_{\{u=t\}\cap B^+_{3/4}\cap \{|\nabla u|\neq0\}} d\mathcal{H}^{n-1} h(t)^{p\theta -q(1-\theta)} |\nabla u|^{-\theta + 2(1-\theta)}
\\
&\le \left(\int_{\mathbb{R}^+} dt\int_{\{u=t\}\cap B^+_1\cap \{|\nabla u|\neq0\}}d\mathcal{H}^{n-1} h(t)^{p} |\nabla u|^{-1} \right)^\theta
\\
&\qquad\qquad \cdot \bigg(\int_{\mathbb{R}^+} h(t)^{-q} dt\int_{\{u=t\}\cap B^+_{3/4}}d\mathcal{H}^{n-1} |\nabla u|^2 \bigg)^{1-\theta}.
\end{split}
\end{equation*}
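The choice of exponents here can be verified directly: since $p>2$, one may pick $q\in(1,p/2)$ and set $\theta:=q/(p+q)$, which belongs to $(0,1/3)$ and satisfies $p/q=(1-\theta)/\theta$. Then
\[
h(t)^{p\theta-q(1-\theta)}=1,
\qquad
\big(|\nabla u|^{-1}\big)^{\theta}\big(|\nabla u|^{2}\big)^{1-\theta}=|\nabla u|^{2-3\theta},
\]
so the first identity above is the coarea formula, the inequality that follows is H\"older's inequality with exponents $1/\theta$ and $1/(1-\theta)$, and $q>1$ guarantees $\int_{\mathbb{R}^+}h(t)^{-q}\,dt<\infty$. Since $3-3\theta>2$, this proves the lemma with $\gamma=1-3\theta$.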
By \eqref{ashgowobb} and \eqref{bdry-level}, this yields
\[
\int_{B^+_{3/4}} |\nabla u|^{3-3\theta} \,dx \le C,
\]
which concludes the proof.
\end{proof}
We can now give the
\begin{proof}[Proof of Theorem \ref{thm:2bdry}]
By rescaling Lemma \ref{lem:bdry-gamma-2}, we deduce that
$$
\|\nabla u\|_{L^{2+\gamma}(B^+_{1/4})} \le \|\nabla u\|_{L^{2+\gamma}(B^+_{3/8})} \le C \| \nabla u\|_{L^{2}(B^+_{1/2})}.
$$
This and Lemma~\ref{lem:bdry-nabla2byL1} conclude the proof.
\end{proof}
For future use, note also that putting together Lemma \ref{lem:bdry-nabla2byL1} with \eqref{massHessbdry} (rescaled), we obtain that
\begin{equation}\label{last-gamma}
\|D^2 u\|_{L^{1}(B^+_{1/4})} \le \|D^2 u\|_{L^{1}(B^+_{3/8})} \le C\|\nabla u\|_{L^{2}(B^+_{1/2})} \le C \| u\|_{L^{1}(B^+_{1})}.
\end{equation}
When proving Theorem \ref{thm:3bdry} in the next section, we will need the following bounds in half-annuli. They follow from a simple covering and scaling argument combined with the boundary estimates that we have just proven and their interior analogues (i.e., those in balls whose double is contained in $\mathbb{R}^n_+$).
\begin{corollary}\label{corol:Deltabdry}
Let $u\in C^\infty(\overline{B^+_1})$ be a nonnegative stable solution of $-\Delta u=f(u)$ in~$B^+_1\subset\mathbb{R}^n$, with $u=0$ on $\partial^0 B^+_1$.
Assume that $f\in C^1(\mathbb{R})$ is nonnegative and nondecreasing. Let $0<\rho_1<\rho_2<\rho_3<\rho_4\leq 1$.
Then,
\begin{equation} \label{prop5.2CFRS-ann}
\|\nabla u\|_{L^{2+\gamma}(A^+_{\rho_2,\rho_3})}
\le C_{\rho_i} \, \|u\|_{L^{1}(A^+_{\rho_1,\rho_4})}
\end{equation}
and
\begin{equation} \label{hess-ann}
\|D^2 u\|_{L^{1}(A^+_{\rho_2,\rho_3})}
\le C_{\rho_i} \, \|u\|_{L^{1}(A^+_{\rho_1,\rho_4})}
\end{equation}
for some dimensional constant $\gamma>0$ and some constant $C_{\rho_i}$ depending only on $n$, $\rho_1$, $\rho_2$, $\rho_3$, and $\rho_4$.
\end{corollary}
\begin{proof}
We first cover $\partial^0 A^+_{\rho_2,\rho_3}$ by a finite number of balls $B_j$ centered at $\{x_n=0\}$ and with a sufficiently small radius~$\delta_0$ such that the balls $4B_j$ (with the same centers as the previous ones, and with radius $4\delta_0$) are contained in $A_{\rho_1,\rho_4}$. We now cover $\overline{A^+_{\rho_2,\rho_3}}\setminus \cup_j B_j$ by balls $B_k$ of a smaller radius $\delta_1$ in such a way that $4 B_k\subset A^+_{\rho_1,\rho_4}$. In this way we obtain a covering of $A^+_{\rho_2,\rho_3}$ made of half-balls $B_j^+$ (centered at $\{x_n=0\}$) and interior balls $B_k$, with $4B_j^+$ and $4B_k$ all contained in $A^+_{\rho_1,\rho_4}$.
Now, to deduce our two estimates, in each half-ball $B_j^+$ we use Theorem~\ref{thm:2bdry} and \eqref{last-gamma} (both rescaled). Instead, in the interior balls $B_k$ we apply \eqref{eq:W12 L1 int} as well as \eqref{estdivbis} (both rescaled). In this way we control the norms of the quantities on the left-hand sides of \eqref{prop5.2CFRS-ann} and of \eqref{hess-ann} but now integrated in any of the previous balls, by $C_{\rho_i} \, \|u\|_{L^{1}(A^+_{\rho_1,\rho_4})}$ (each of them). Adding all the inequalities completes the proof.
\end{proof}
\section{The radial derivative controls the function in $L^{1}$ up to the boundary}
\label{sect:boundary-L1radial}
In this section we establish our main result, Theorem \ref{thm:3bdry}. This will be much more delicate than the interior case of Theorem~\ref{thm:3}, which holds assuming only that the function~$u$ is superharmonic. Instead, in the boundary setting, superharmonicity is not enough for the theorem to hold, as the following remark shows. Hence, within the proof we will need to use the semilinear equation satisfied by $u$ and, in fact, also the stability of $u$.
\begin{remark}\label{rk:not-superh}
For $n\ge 2$, the estimate \eqref{introbdryradial} of Theorem \ref{thm:3bdry}, controlling the function in~$L^1$ in a half-annulus by its radial derivative in $L^1$ (or even in $L^\infty$), also in a half-annulus, cannot hold within the class of nonnegative superharmonic functions which are smooth in $\overline{\mathbb{R}^n_+}$ and vanish on $\partial \mathbb{R}^n_+$.
Indeed, for $\delta \in (0,1)$, consider
$$
u^\delta (x):= \frac{x_n}{|(x',x_n+\delta)|} \qquad\text{ for } x=(x',x_n)\in\mathbb{R}^{n-1}\times\mathbb{R}, \, x_n\geq 0.
$$
The function $u^\delta$ is nonnegative and smooth in $\overline{\mathbb{R}^n_+}$, vanishes on $\{x_n=0\}$, and, as a simple computation shows,\footnote{One may start from $\Delta u^\delta = x_n \Delta \varphi + 2 \varphi_{x_n}$, where $\varphi=(|x|^2+ 2\delta x_n+\delta^2)^{-1/2}$.} is superharmonic in~$\mathbb{R}^n_+$ for $n\geq 2$. At the same time, in the half-annulus it satisfies
$$
|\partial_r u^\delta| = \frac{x_n}{r\, |(x',x_n+\delta)|^3}\, \delta (x_n+\delta) \le C\delta \quad\text{ in } A^+_{1/2,1}
$$
for some constant $C$ independent of $\delta$. By taking $\delta$~small enough, this shows that the estimate \eqref{introbdryradial} of Theorem \ref{thm:3bdry} cannot hold within this class of functions.
Note also that in the limiting case $\delta=0$, we are exhibiting a nonnegative superharmonic function $u^0=x_n/r$ which belongs to $W^{1,2}(B_1^+)$ for $n\geq 3$, vanishes a.e.\ on $\{x_n=0\}$, and is zero homogeneous (i.e., $\partial_r u^0\equiv 0$).
\end{remark}
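For the reader's convenience, here is a sketch of the two computations behind the remark. Writing $y:=(x',x_n+\delta)$ and $\varphi:=|y|^{-1}$, so that $u^\delta=x_n\varphi$, one has $\Delta\varphi=(3-n)|y|^{-3}$ and $\varphi_{x_n}=-(x_n+\delta)|y|^{-3}$, and hence
$$
\Delta u^\delta = x_n\,\Delta\varphi + 2\varphi_{x_n} = |y|^{-3}\big( (1-n)\,x_n - 2\delta \big)\le 0 \quad\text{ in } \mathbb{R}^n_+ \text{ for } n\ge 2.
$$
Similarly, since $\nabla u^\delta = \varphi\, e_n - x_n |y|^{-3}\, y$ and $x\cdot y = |y|^2-\delta(x_n+\delta)$, one finds
$$
r\,\partial_r u^\delta = x\cdot\nabla u^\delta = x_n\varphi - x_n |y|^{-3}\, x\cdot y = \frac{x_n\,\delta\,(x_n+\delta)}{|y|^3},
$$
which is the expression for $\partial_r u^\delta$ used in the remark.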
To control the $L^1$ norm of a stable solution in a half-annulus by its radial derivative in $L^1$, the starting idea is to use the equation
\begin{equation}\label{continuousbdry}
-2 \Delta u + \Delta (x\cdot\nabla u) = -f'(u) \, x\cdot\nabla u,
\end{equation}
after multiplying it by a cut-off function and integrating it in the half-annulus. We will see that this easily yields a lower bound for the integral of the left-hand side which is appropriate for our purposes. The difficulty is how to control the integral of the right-hand side from above in terms of only $x\cdot\nabla u= ru_r$. As we will explain later in Footnote~\ref{foot-xgrad}, we wish to use the stability condition to deal with the factor $f'(u)$ in \eqref{continuousbdry}. However, the simplest such approach would force us to control the $L^2$ norm of $\nabla (x\cdot\nabla u)$ ---which is not at hand, since we do not have $L^2$ control on the full Hessian of $u$.
To proceed in a similar manner but reducing the number of derivatives falling on~$u$, for $\lambda >0$ we consider the functions
$$
u_\lambda(x):=u(\lambda x)
$$
and note that $\frac{d}{d\lambda} u_\lambda (x)= x \cdot \nabla u (\lambda x)= \lambda^{-1} x \cdot \nabla u_\lambda (x)$. Using this and noticing that $\lambda^{-2} \Delta u_\lambda = -f(u_\lambda)$, we have
\begin{equation}\label{expression}
\begin{split}
-2\lambda^{-3}\Delta u_{\lambda} + \lambda^{-2} \Delta ( \lambda^{-1} x\cdot \nabla u_{\lambda}) &
= \frac{d}{d\lambda} \left( \lambda^{-2} \Delta u_\lambda\right) \\
& = - \frac{d}{d\lambda} \, f( u_\lambda )= -f'(u_\lambda)\ \lambda^{-1} x\cdot\nabla u_\lambda.
\end{split}
\end{equation}
Evaluating the first and last expressions at $\lambda=1$, we recover \eqref{continuousbdry}. However, it will be crucial (see the later Footnote~\ref{foot-xgrad}) to use instead the equality between the first and third expressions, after integrating them not only in $x$, but also in $\lambda$. Integrating in $\lambda$ will be essential to reduce the number of derivatives falling on~$u$. In addition, the monotonicity and convexity of $f$ will allow us to use the stability condition appropriately.
All of this will become clear in the following proof, which is the main novelty of this paper.
\begin{proof}[Proof of Theorem \ref{thm:3bdry}]
By rescaling, we may suppose that we have a stable solution in $B_6^+$ instead of $B_1^+$.
We choose a nonnegative smooth function $\zeta$ with compact support in the full annulus $A_{4,5}$ and such that $\zeta\equiv 1$ in $A_{4.1,4.9}$. Then, the function $\xi:= x_n \zeta$ satisfies
\begin{equation*}
\begin{split}
&\xi\ge 0 \text{ in } A_{4,5}^+, \quad \xi=0 \text{ on }\partial^0 A_{4,5}^+, \\
& \xi=\xi_{\,\nu} = 0 \text{ on } \partial^+ A_{4,5}^+, \quad \text{and}\quad \xi=x_n \text{ in } A_{4.1,4.9}^+.
\end{split}
\end{equation*}
The proof starts from the identity
\begin{equation}\label{derla0-1}
\begin{split}
2\lambda^{-3}\int_{A_{4,5}^+} (-\Delta u_{\lambda}) \, \xi\, dx + \lambda ^{-2}\int_{A_{4,5}^+} \Delta ( \lambda^{-1} x\cdot \nabla u_{\lambda}) \, \xi\, dx &\\
&\hspace{-3cm}= - \frac{d}{d\lambda} \int_{A_{4,5}^+} f(u_\lambda) \, \xi\, dx,
\end{split}
\end{equation}
which follows from the equality between the first and third expressions in \eqref{expression}.
In a first step, we will bound the left-hand side of \eqref{derla0-1} from below. But the subtle part of the proof, where we use the stability of the solution, will be the second step. It will bound the right-hand side of \eqref{derla0-1} from above, but only after averaging it in~$\lambda$. For this, we will use that
\begin{equation}\label{derla0-2}
\int_1^{1.1} \big( - \frac{d}{d\lambda} \int_{A_{4,5}^+} f( u_\lambda)\, \xi\, dx \big) d\lambda
= \int_{A_{4,5}^+} \big( f(u)-f(u_{1.1}) \big) \xi\, dx ,
\end{equation}
where
$$
u_{1.1}:= u_{11/10}=u\big( (11/10) \cdot\big).
$$
\vspace{2mm}\noindent
{\it Step 1: We prove that}
\begin{equation}\label{bdrystep1}
\begin{split}
2\lambda^{-3}\int_{A_{4,5}^+} (-\Delta u_{\lambda}) \, \xi\, dx + \lambda ^{-2}\int_{A_{4,5}^+} \Delta ( \lambda^{-1} x\cdot \nabla u_{\lambda}) \, \xi\, dx &\\
& \hspace{-3cm}\ge c \| u\|_{L^1(A_{4.7,4.8}^+)} - C \|u_r\|_{L^1({A_{3,6}^+})}
\end{split}
\end{equation}
{\it for every $\lambda\in [1,1.1]$, where $c$ and $C$ are positive dimensional constants.}
Bounding the second integral in \eqref{bdrystep1} from below is easy. Since $\xi$ and $\xi_{\,\nu}$ vanish on $\partial^+A^+_{4,5}$, and both $\xi$ and $x\cdot \nabla u_{\lambda}$ vanish on $\partial^0 A^+_{4,5}$, we deduce that
\begin{equation}\label{2ndint}
\int_{A_{4,5}^+} \Delta ( x\cdot \nabla u_{\lambda}) \, \xi\, dx =\int_{A_{4,5}^+} x\cdot \nabla u_{\lambda}\, \Delta \xi\, dx \ge -C \|u_r\|_{L^1({A_{4,5.5}^+})}
\end{equation}
since $x\cdot \nabla u_{\lambda}(x)= \lambda x\cdot \nabla u(\lambda x)= \lambda r \, u_r(\lambda x)$.
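Let us detail \eqref{2ndint}. By Green's identity, $\int_{A^+_{4,5}} \big( \Delta v\, \xi - v\,\Delta\xi \big)\, dx = \int_{\partial A^+_{4,5}} \big( v_\nu\,\xi - v\,\xi_{\,\nu} \big)\, d\mathcal{H}^{n-1}$ with $v:= x\cdot\nabla u_\lambda$; the boundary integral vanishes since $\xi=\xi_{\,\nu}=0$ on $\partial^+ A^+_{4,5}$, while $\xi = v = 0$ on $\partial^0 A^+_{4,5}$. Then, after the change of variables $y=\lambda x$,
$$
\Big| \int_{A^+_{4,5}} v\,\Delta\xi\, dx \Big| \le C \int_{A^+_{4,5}} r\,|u_r(\lambda x)|\, dx \le C\, \|u_r\|_{L^1(A^+_{4,5.5})},
$$
where we used $\lambda\le 1.1$ and hence $1.1\cdot 5\le 5.5$.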
Next, to control the first integral in \eqref{bdrystep1}, given any $\rho_1\in (4.1,4.2)$ and $\rho_2\in (4.8,4.9)$ we consider the solution $\varphi$ of
\begin{equation}\label{torsion}
\begin{cases} -\Delta \varphi=1 \quad \quad &\mbox{in } A_{\rho_1,\rho_2}^+
\\
\varphi=0 &\mbox{on } \partial^0 A_{\rho_1,\rho_2}^+
\\
\varphi_\nu=0 &\mbox{on } \partial^+ A_{\rho_1,\rho_2}^+.
\end{cases}
\end{equation}
Note that $\varphi\ge 0$ by the maximum principle. In addition, as shown in Appendix~\ref{app:neumann}, we have $|\nabla \varphi|\le C$ in $ A_{\rho_1,\rho_2}^+$ for some dimensional constant $C$. This bound yields $c\varphi\le x_n=\xi$ in $A_{\rho_1,\rho_2}^+$ for some small dimensional constant $c>0$. Since, in addition, $-\Delta u_{\lambda}$, $\xi$, and $\varphi$ are all nonnegative, it follows that (for positive constants $c$ and $C$ that may differ from line to line)
\begin{eqnarray*}
\int_{A_{4,5}^+} (-\Delta u_{\lambda}) \, \xi\, dx &\geq& c\int_{A_{\rho_1,\rho_2}^+} (-\Delta u_{\lambda})\, \varphi \, dx
\\ & &\hspace{-2cm} = -c \int_{\partial^+ A_{\rho_1,\rho_2}^+} (u_{\lambda})_\nu \,\varphi \, d\mathcal{H}^{n-1}
+c\int_{A_{\rho_1,\rho_2}^+} u_{\lambda}\, dx
\\ & &\hspace{-2cm} \geq - C \int_{\partial^+ B_{\rho_1}^+} |(u_{\lambda})_r| \, d\mathcal{H}^{n-1} -C \int_{\partial^+ B_{\rho_2}^+} |(u_{\lambda})_r| \, d\mathcal{H}^{n-1}
+c\| u\|_{L^1(A_{\lambda\rho_1,\lambda\rho_2}^+)}
\\ & &\hspace{-2cm} \geq - C \int_{\partial^+ B_{\rho_1}^+} |(u_{\lambda})_r| \, d\mathcal{H}^{n-1} -C \int_{\partial^+ B_{\rho_2}^+} |(u_{\lambda})_r| \, d\mathcal{H}^{n-1}
+c\| u\|_{L^1(A_{4.7,4.8}^+)}
\end{eqnarray*}
since $\lambda\rho_1\le 1.1\cdot 4.2\le 4.7$ and $4.8\le \lambda\rho_2$. Finally, integrating first in $\rho_1\in (4.1,4.2)$ and then in $\rho_2\in (4.8,4.9)$, we arrive at
\begin{equation*}
\int_{A_{4,5}^+} (-\Delta u_{\lambda}) \, \xi\, dx
\ge -C \|u_r\|_{L^1({A_{4.1,6}^+})}
+c\| u\|_{L^1(A_{4.7,4.8}^+)},
\end{equation*}
since $1.1\cdot 4.9\le 6$.
This and \eqref{2ndint} establish the claim of Step 1.
\vspace{2mm}\noindent
{\it Step 2: We prove that, for every $\varepsilon\in (0,1)$,}
\begin{equation}\label{bdrystep2}
\int_{A_{4,5}^+} \big( f(u)-f( u_{1.1}) \big) \xi\, dx\le C\big( \varepsilon \|u\|_{L^{1}(A^+_{3,6})}+ \varepsilon^{-1-2\frac{2+\gamma}{\gamma}} \|u_r\|_{L^1({A_{3,6}^+})}\big)
\end{equation}
{\it for some dimensional constants $\gamma>0$ and $C$ ---with $\gamma$ being the exponent in Corollary~\ref{corol:Deltabdry}. }
By convexity of $f$ we have $f(u)-f( u_{1.1}) \leq f'(u)(u- u_{1.1})$. We now use that $\xi=0$ on $\partial A_{4,5}^+$ and that $u- u_{1.1}=0$ on $\partial^0 A_{3.9,5.1}^+$ in order to take advantage, twice, of the stability of $u$. Taking a function $\phi\in C^\infty_c(A_{3.9,5.1})$ with $\phi=1$ in $A_{4,5}$, and since $\xi$ and $f'(u)$ are nonnegative, we deduce
\begin{align}
\hspace{0cm} \int_{A_{4,5}^+} \big( f(u)-f( u_{1.1}) \big) \xi\, dx & \leq \int_{A_{4,5}^+} f'(u)(u-u_{1.1}) \xi\, dx \label{int-fsxi1}
\\ & \hspace{-3cm}
\leq \left( \int_{A_{4,5}^+} f'(u)\xi^2\, dx \right)^{1/2}\left( \int_{A_{4,5}^+} f'(u) (u- u_{1.1})^2\, dx \right)^{1/2} \nonumber
\\ & \hspace{-3cm}
\leq \left( \int_{A_{4,5}^+} |\nabla \xi|^2\, dx \right)^{1/2} \left( \int_{A_{3.9,5.1}^+} f'(u) \big((u-
u_{1.1})\phi\big)^2\, dx \right)^{1/2}
\nonumber\\ & \hspace{-3cm}
\leq C\left( \int_{A_{3.9,5.1}^+} \left|\nabla \big( (u- u_{1.1})\phi\big)\right|^2\, dx \right)^{1/2}
\nonumber\\ & \hspace{-3cm}
\leq C\Vert \nabla(u- u_{1.1}) \Vert_{L^2(A_{3.9,5.1}^+)}\label{int-fsxi3},
\end{align}
where in the last bound we have used Poincar\'e's inequality in $A_{3.9,5.1}^+$ for functions vanishing on $\partial^0 A_{3.9,5.1}^+$. The previous chain of inequalities, which is a crucial part of the proof, uses the stability of the solution twice.\footnote{\label{foot-xgrad}Note first that a simple Cauchy-Schwarz argument to control the right-hand side of \eqref{int-fsxi1} would not work, since we do not have control on the integral of $f'(u)^2$. On the other hand, we would be in trouble if we had performed the above chain of inequalities starting from \eqref{continuousbdry} instead of \eqref{expression}. Indeed, in that case, $f'(u)(u- u_{1.1}) \xi $ in \eqref{int-fsxi1} would be replaced by $f'(u)\,x\cdot\nabla u\,\xi$. Hence, proceeding as we have done above, the final quantity appearing in \eqref{int-fsxi3} would be the $L^2$ norm of $\nabla (x\cdot\nabla u)$. But recall that we do not have control on the $L^2$ norm of the full Hessian of $u$.}
Next, by the $W^{1,2+\gamma}$ estimate \eqref{prop5.2CFRS-ann} (rescaled to hold in $B_6^+$ instead of $B_1^+$) and taking $q:=\frac{2(1+\gamma)}{2+\gamma}$, we have
\begin{align} \label{int-fsxibis}
\|\nabla (u- u_{1.1})\|_{L^{2}(A^+_{3.9,5.1})}& \le \|\nabla (u- u_{1.1})\|_{L^{2+\gamma}(A^+_{3.9,5.1})}^{\frac{1}{q}}\|\nabla (u- u_{1.1})\|_{L^{1}(A^+_{3.9,5.1})}^{\frac{1}{q'}}\nonumber\\
&\hspace{-1.5cm} \le C \|\nabla u\|_{L^{2+\gamma}(A^+_{3.9,5.1\cdot 1.1})}^{\frac{1}{q}}\|\nabla(u- u_{1.1})\|_{L^{1}(A^+_{3.9,5.1})}^{\frac{1}{q'}} \nonumber \\
&\hspace{-1.5cm} \le C \|u\|_{L^{1}(A^+_{3,6})}^{\frac{1}{q}} \|\nabla(u- u_{1.1})\|_{L^{1}(A^+_{3.9,5.1})}^{\frac{1}{q'}}\nonumber\\
&\hspace{-1.5cm} \le \varepsilon \|u\|_{L^{1}(A^+_{3,6})}
+ C\varepsilon^{-\frac{q'}{q}} \|\nabla(u- u_{1.1})\|_{L^{1}(A^+_{3.9,5.1})}
\end{align}
for all $\varepsilon\in (0,1)$.
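Let us justify the choice of $q$. The first inequality in \eqref{int-fsxibis} is the interpolation (H\"older) inequality $\|v\|_{L^2}\le \|v\|_{L^{2+\gamma}}^{\theta}\,\|v\|_{L^1}^{1-\theta}$, where the exponent $\theta$ is determined by
$$
\frac12 = \frac{\theta}{2+\gamma} + \frac{1-\theta}{1}, \qquad\text{that is,}\qquad \theta = \frac{2+\gamma}{2(1+\gamma)} = \frac1q .
$$
The last inequality in \eqref{int-fsxibis} is Young's inequality $a^{1/q}\, b^{1/q'} \le \varepsilon a + C\varepsilon^{-q'/q}\, b$. Note that $q'/q = 1/(q-1) = (2+\gamma)/\gamma$, which explains the exponent $-1-2\frac{2+\gamma}{\gamma}$ appearing in \eqref{bdrystep2}.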
Now, by the interpolation inequality in cubes of Proposition \ref{prop5.2} applied with $p=1$, we claim that
\begin{align}\label{frominterp}
\|{\nabla (u- u_{1.1})}\|_{L^1({A^+_{3.9,5.1}})}&\nonumber\\
&\hspace{-2.5cm} \leq C\varepsilon^{1+\frac{q'}{q}} \|{D^2(u- u_{1.1})}\|_{L^1({A^+_{3.8,5.2}})}+ C \varepsilon^{-1-\frac{q'}{q}}\|u- u_{1.1}\|_{L^1({A^+_{3.8,5.2}})}\nonumber\\
&\hspace{-2.5cm} \leq C\varepsilon^{1+\frac{q'}{q}} \|{D^2u}\|_{L^1({A^+_{3.8,5.8}})}+ C \varepsilon^{-1-\frac{q'}{q}}\|u- u_{1.1}\|_{L^1({A^+_{3.8,5.2}})}.
\end{align}
To see the first inequality, one covers the half-annulus $A^+_{3.9,5.1}$ (except for a set of measure zero) by cubes of sufficiently small side-length to be contained in $A^+_{3.8,5.2}$. One then applies Proposition \ref{prop5.2} with $p=1$, after rescaling it and renaming $\varepsilon$, in each of these cubes, and finally one adds up all the inequalities ---exactly as we did in the beginning of the proof of Theorem \ref{thm:2} in Section~\ref{sect:W12}.
From \eqref{frominterp} and the Hessian bound \eqref{hess-ann} (rescaled to hold in $B_6^+$ instead of $B_1^+$) we conclude
\begin{equation*}
\|{\nabla (u- u_{1.1})}\|_{L^1({A^+_{3.9,5.1}})}\leq \hspace{-.8mm}C\varepsilon^{1+\frac{q'}{q}} \|u\|_{L^{1}(A^+_{3,6})}+ C\varepsilon^{-1-\frac{q'}{q}} \|u- u_{1.1}\|_{L^1({A^+_{3.8,5.2}})}.
\end{equation*}
Putting together this last bound with \eqref{int-fsxi3} and \eqref{int-fsxibis}, we arrive at
\begin{equation*}
\int_{A_{4,5}^+} \big( f(u)-f( u_{1.1}) \big) \xi\, dx\le C\varepsilon \|u\|_{L^{1}(A^+_{3,6})}+ C\varepsilon^{-1-2\frac{2+\gamma}{\gamma}}\|u- u_{1.1}\|_{L^1({A^+_{3.8,5.2}})}.
\end{equation*}
At the same time, since $\frac{d}{d\lambda} u_\lambda (x)=x\cdot\nabla u (\lambda x)= r\, u_r(\lambda x)$, we have
\begin{eqnarray*}
\|u- u_{1.1}\|_{L^1({A_{3.8,5.2}^+})} &=& \int_{A_{3.8,5.2}^+} dx \left| \int_1^{1.1} d\lambda \ r\, u_r(\lambda x)\right| \\
& \le & C\int_1^{1.1} d\lambda \int_{A_{3.8,5.2\cdot 1.1}^+} dy \ |u_r(y)| \le C\|u_r\|_{L^1({A_{3,6}^+})}.
\end{eqnarray*}
The last two bounds establish Step 2.
\vspace{2mm}\noindent
{\it Step 3: Conclusion.}
We integrate \eqref{derla0-1} in $\lambda\in (1,1.1)$ and use \eqref{derla0-2}, as well as \eqref{bdrystep1} and \eqref{bdrystep2} in the statements of Steps 1 and 2.
We obtain the bound
\begin{equation}\label{finalbdry}
\| u\|_{L^1(A_{4.7,4.8}^+)} \le C\big( \varepsilon \|u\|_{L^{1}(A^+_{3,6})}+ \varepsilon^{-1-2\frac{2+\gamma}{\gamma}}\|u_r\|_{L^1({A_{3,6}^+})} \big)
\end{equation}
for all $\varepsilon\in (0,1)$. It is now simple to conclude the desired estimate
\begin{equation}\label{finalB6}
\| u\|_{L^1(A_{3,6}^+)} \le C \|u_r\|_{L^1({A_{3,6}^+})}.
\end{equation}
Indeed, we first show that
\begin{equation}\label{annulusB3}
\| u\|_{L^1(A_{3,6}^+)} \le C \big( \|u\|_{L^{1}(A^+_{4.7,4.8})}+ \|u_r\|_{L^1({A_{3,6}^+})} \big).
\end{equation}
To prove this, use that
\begin{equation}\label{alongr}
u(s\sigma)=u(t\sigma)-\int_s^t u_r (r\sigma) \,dr
\end{equation}
for $s\in (3,6)$, $t\in (4.7,4.8)$, and $\sigma\in S^{n-1}$. We deduce
$$
(s/6)^{n-1}|u(s\sigma)|\leq (t/4)^{n-1} |u(t\sigma)|+\int_{3}^6 (r/3)^{n-1} |u_r (r\sigma)| \,dr.
$$
Integrating in $\sigma\in S^{n-1}$, and then in $s\in (3,6)$ and in $t\in (4.7,4.8)$, we conclude \eqref{annulusB3}.
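In more detail, denoting $S^{n-1}_+:= S^{n-1}\cap\{\sigma_n>0\}$, the left-hand side integrates to
$$
\int_{4.7}^{4.8}\!\! dt \int_3^6\!\! ds \int_{S^{n-1}_+}\!\! d\sigma\; (s/6)^{n-1}\,|u(s\sigma)| = \frac{1}{10}\, 6^{1-n}\, \|u\|_{L^1(A^+_{3,6})},
$$
while the two terms on the right-hand side are controlled, in the same way, by $C\|u\|_{L^{1}(A^+_{4.7,4.8})}$ and $C\|u_r\|_{L^1(A^+_{3,6})}$, respectively, since $(t/4)^{n-1}\, t^{1-n}$ and $(r/3)^{n-1}\, r^{1-n}$ are bounded by dimensional constants. This yields \eqref{annulusB3}.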
Now, we use \eqref{annulusB3} to bound the right-hand side of \eqref{finalbdry}. In the resulting inequality we choose $\varepsilon\in (0,1)$ small enough so that the constant $C\varepsilon$ multiplying $ \|u\|_{L^{1}(A^+_{4.7,4.8})}$ satisfies $C\varepsilon\le1/2$. We deduce
$$
\| u\|_{L^1(A_{4.7,4.8}^+)} \le C \|u_r\|_{L^1({A_{3,6}^+})},
$$
that, together with \eqref{annulusB3}, yields \eqref{finalB6}.
Next, to show $\| u\|_{L^1(B_{6}^+)} \le C \|u_r\|_{L^1({B_{6}^+})}$, by \eqref{finalB6} it suffices to show $\| u\|_{L^1(B_{3}^+)} \le C \|u_r\|_{L^1({B_{6}^+})}$. Using again \eqref{finalB6}, it is indeed enough to show
$$
\| u\|_{L^1(B_{3}^+)} \le C \left(\| u\|_{L^1(A_{3,6}^+)} + \|u_r\|_{L^1({B_{6}^+})}\right).
$$
This is easily shown using \eqref{alongr}, with a similar argument as above or as in the interior case of Theorem~\ref{thm:3}.
The proven bounds hold for a solution in $B_6^+$. By rescaling we conclude the estimates of the theorem for solutions in $B_1^+$.
\end{proof}
For $n\geq 3$, the estimate that we have just proven can be improved as follows.
\begin{remark}\label{rk:bdry-ngeq3}
When $n\geq3$, Theorem \ref{thm:3bdry} also holds when replacing $\| u\|_{L^1(A_{1/2,1}^+)}$, in the left-hand side of~\eqref{introbdryradial}, by $\| u\|_{L^1(B_{1}^+)}$.
To show this, and given the theorem, it suffices to prove
$$
\| u\|_{L^1(B_{1/2}^+)} \leq C \| u\|_{L^1(A_{1/2,1}^+)}.
$$
But notice that, by the Poincar\'e inequality for functions vanishing on $\partial^0 B^+_{1/2}$, we have that $\| u\|_{L^1(B_{1/2}^+)}\le C\|\nabla u\|_{L^{2}(B^+_{1/2})}\le C\|\nabla u\|_{L^{2}(B^+_{4/6})}$.
Now, using \eqref{bdry-notweighted} (here we use $n\geq 3$) we can bound this last quantity by $C\|\nabla u\|_{L^{2}(A^+_{4/6,5/6})}$. Finally, by \eqref{prop5.2CFRS-ann} we know that
$$
\|\nabla u\|_{L^{2}(A^+_{4/6,5/6})}
\le C \|u\|_{L^{1}(A^+_{1/2,1})}.
$$
\end{remark}
\section{Boundary $C^\alpha$ estimate \vspace{.15cm}}
\label{sect:boundary-conclusion}
We have now all the ingredients to give the
\begin{proof}[Proof of Theorem \ref{thm:0bdry}]
The proof of \eqref{higher-bdry} follows from Theorem~\ref{thm:2bdry} and a standard covering and scaling argument.
To prove the H\"older estimate \eqref{holder-bdry},
we may assume (as in the interior case) that $3\le n\le 9$ by adding superfluous variables. Note here that the ``half-ball geometry'' in the statement of the theorem allows, when adding variables, to deduce the result for $n=1$ and $n=2$ from the one for $n=3$.
Using Theorem~\ref{thm:1bdry} with $\rho=3/5$ and $\lambda=4/3$, we obtain
\begin{equation}\label{eq:11proofbdry}
\int_{B^+_{1/2}}r^{2-n} u_{r}^2\,dx \leq \int_{B^+_{3/5}}r^{2-n} u_{r}^2\,dx
\leq C\int_{A^+_{3/5,4/5}}|\nabla u|^2\,dx.
\end{equation}
Now, since $1/2<3/5<4/5<1$, \eqref{prop5.2CFRS-ann} in Corollary~\ref{corol:Deltabdry} yields
\begin{equation}\label{eq:12proofbdry}
\int_{A^+_{3/5,4/5}}|\nabla u|^2\,dx\leq C\Vert u\Vert_{L^1(A^+_{1/2,1})}^2.
\end{equation}
At the same time, by the estimate \eqref{introbdryradial} from Theorem \ref{thm:3bdry} we have
\begin{equation*}
\Vert u\Vert_{L^1(A^+_{1/2,1})}^2\le C\Vert u_r\Vert_{L^1(A^+_{1/2,1})}^2 \le
C\Vert u_r\Vert_{L^2(A^+_{1/2,1})}^2\le
C\int_{A^+_{1/2,1}} r^{2-n} u_r^2\,dx.
\end{equation*}
This, together with \eqref{eq:11proofbdry} and \eqref{eq:12proofbdry}, leads to
$$
\int_{B^+_{1/2}}r^{2-n} u_{r}^2\,dx \leq C\int_{A^+_{1/2,1}} r^{2-n} u_r^2\,dx.
$$
We can write this inequality, in an equivalent way, as
$$
\int_{B^+_{1/2}}r^{2-n} u_{r}^2\,dx \leq \theta \int_{B^+_{1}} r^{2-n} u_r^2\,dx
$$
for the dimensional constant $\theta= \frac{C}{1+C}\in (0,1)$.
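Indeed, since $\int_{A^+_{1/2,1}} r^{2-n} u_r^2\,dx = \int_{B^+_1} r^{2-n} u_r^2\,dx - \int_{B^+_{1/2}} r^{2-n} u_r^2\,dx$, the previous inequality reads
$$
\int_{B^+_{1/2}} r^{2-n} u_{r}^2\,dx \le C\Big( \int_{B^+_1} r^{2-n} u_r^2\,dx - \int_{B^+_{1/2}} r^{2-n} u_r^2\,dx \Big),
$$
and it suffices to add $C\int_{B^+_{1/2}} r^{2-n} u_r^2\,dx$ to both sides and divide by $1+C$.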
This estimate (which is scale invariant), applied to the rescaled stable solutions $u(\rho\, \cdot)$, yields
$$
\int_{B^+_{\rho/2}}r^{2-n} u_{r}^2\,dx \leq \theta\int_{B^+_{\rho}} r^{2-n} u_r^2\,dx
$$
for all $\rho \in (0,1)$.
This inequality can be iterated, in balls of radius $2^{-j}$, with $j\ge 1$ an integer, to obtain
$\int_{B^+_{2^{-j}}}r^{2-n} u_{r}^2\,dx \leq \theta^{j-1} \int_{B^+_{1/2}}r^{2-n} u_{r}^2\,dx$.
Since $0<\theta<1$, it follows that, for some dimensional $\alpha\in (0,1)$,
$$
\int_{B^+_\rho} r^{2-n}u_r^2\,dx\leq C\rho^{2\alpha}\int_{B^+_{1/2}} r^{2-n} u_r^2\,dx \le C\rho^{2\alpha}\Vert u\Vert_{L^1(B^+_{1})}^2 \qquad\text{ for all } \rho \leq 1/2,
$$
where we have used \eqref{eq:11proofbdry} and \eqref{eq:12proofbdry}
in the last inequality. In particular, since $\rho^{2-n}\leq r^{2-n}$ in $B^+_\rho$, we conclude that
$$
\int_{B_\rho^+} |u_r|\,dx \le C\rho^{n-1+\alpha}\Vert u\Vert_{L^1(B^+_{1})}
\qquad\text{ for all } \rho \leq 1/2.
$$
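In more detail, the last display follows from the Cauchy-Schwarz inequality: using $\rho^{2-n}\le r^{2-n}$ in $B^+_\rho$,
$$
\int_{B^+_\rho} |u_r|\,dx \le |B^+_\rho|^{1/2} \Big( \rho^{n-2} \int_{B^+_\rho} r^{2-n} u_r^2\,dx \Big)^{1/2} \le C \rho^{\frac n2}\,\rho^{\frac{n-2}{2}+\alpha}\, \Vert u\Vert_{L^1(B^+_{1})} = C\rho^{n-1+\alpha}\, \Vert u\Vert_{L^1(B^+_{1})}.
$$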
Next, given $y'\in \partial^0 B^+_{1/2}$, we can apply the last estimate to the function $u_{y'}(x):=u(y'+\frac{x}{2})$, defined for $x\in B_{1}^+$, since $y'+\frac{1}{2}B_{1}^+\subset B_{1}^+$. Using the notation
$u_{r_{y'}}(x):= |x-y'|^{-1}(x-y')\cdot \nabla u (x)$, we get
\begin{equation*}
\int_{B^+_\rho(y')}|u_{r_{y'}}|\,dx\leq C\Vert u\Vert_{L^1(B_1^+)}\,\rho^{n-1+\alpha}\quad \mbox{ for all }\,y' \in \partial^0 B^+_{1/2}\text{ and } \rho \leq 1/4.
\end{equation*}
At this point we can reflect $u$ oddly to get a $C^1$ function in all $B_1$, which we still call $u$ and which satisfies the previous estimate but now with the integral of $|u_{r_{y'}}|$ extended to the whole ball $B_\rho(y')$. Thus, given $0<y_n\leq 1/8$, we can apply Lemma~\ref{lemmaC1} in $B_{2y_n}(y')$ (that is, with $y$ replaced by $y'$ and $d=2y_n$) and with $S:=B^+_{2y_n}(y')$ to get (recall that $u\geq 0$)
\begin{equation}\label{bdrycontrolaver}
\frac{1}{(y_n)^n}\Vert u\Vert_{L^1({B^+_{2y_n}(y')})}= C|u_S|=C|u(y')-u_S|\leq C \Vert u\Vert_{L^1(B_1^+)}\,(y_n)^{\alpha}.
\end{equation}
We are now ready to control the H\"older norm of $u$ up to the boundary. Given $y=(y',y_n)\in B^+_{1/8}$, by the interior H\"older estimate \eqref{eq:Ca L1 int} of Theorem~\ref{thm:0} (applied in the interior ball $B_{y_n}(y)$ and properly rescaled) and by \eqref{bdrycontrolaver}, we have
\begin{equation}\label{bdry-interior}
\begin{split}
\Vert u\Vert_{L^\infty(B_{y_n/2}(y))}+ (y_n)^\alpha [ u]_{C^\alpha(\overline{B}_{y_n/2}(y))}
&\leq \frac{C}{(y_n)^n} \Vert u\Vert_{L^1(B_{y_n}(y))}\\
& \hspace{-1cm} \leq \frac{C}{(y_n)^n} \Vert u\Vert_{L^1(B_{2 y_n}^+(y'))}\leq C \Vert u\Vert_{L^1(B_1^+)}(y_n)^{\alpha}.
\end{split}
\end{equation}
In particular, $|u(y)|\leq C \Vert u\Vert_{L^1(B_1^+)}$ and thus we have controlled the $L^\infty$ norm of $u$ in~$B^+_{1/8}$.
Finally, to bound $|u(y)-u(z)|$ for another given point $z\in B^+_{1/8}$, note that, by symmetry, we may assume $z_n\leq y_n$. Now, if $|z-y|\leq y_n/2$ then from \eqref{bdry-interior} we get $|u(y)-u(z)|/|y-z|^\alpha \leq C \Vert u\Vert_{L^1(B_1^+)}$. Instead, if $|z-y|>y_n/2$ we apply the $L^\infty$ estimate in \eqref{bdry-interior} (also with the point $y$ replaced by $z$) to deduce
\begin{equation*}
\begin{split}
|u(y)-u(z)|
&\leq u(y)+u(z) \leq C \Vert u\Vert_{L^1(B_1^+)} \left( (y_n)^{\alpha} + (z_n)^{\alpha}\right) \\
&\leq C \Vert u\Vert_{L^1(B_1^+)} (y_n)^{\alpha} \leq C \Vert u\Vert_{L^1(B_1^+)} |y-z|^{\alpha}.
\end{split}
\end{equation*}
We have thus controlled the $C^\alpha$-seminorm of $u$ in $B^+_{1/8}$ for every stable solution in $B_1^+$. From this, the H\"older estimate \eqref{holder-bdry} of Theorem \ref{thm:0bdry} follows by a standard covering and scaling argument.
\end{proof}
\label{sec:dbl}
The properties of strongly correlated electrons confined to a ladder
(or double chain) and described by $t$-$J$ or Hubbard models have
been the subject of intensive investigation recently.\cite{%
dagotto,noack,tsunetsugu94,rice,sigrist,khveshchenko,tsunetsugu95}
The reason lies in the unusual spin liquid nature of the undoped
parent system.\cite{%
dagotto,hirsch88,strong,gopalan,barnes93,barnes94,white,troyer}
Another reason for special interest is weakly coupled ladder compounds such as ${\rm SrCu}_2{\rm O}_3$ and
$({\rm VO})_2{\rm P}_2{\rm O}_7$.\cite{takano,johnston}
Recent measurements of the magnetic susceptibility and the nuclear
spin relaxation rate in these materials show the existence of a
finite spin gap.
The key question in the current study is the evolution of the finite gap in the spin excitation spectrum upon doping. In other spin liquid systems the spin gap persists upon doping and is a sign of strong superconducting fluctuations.\cite{ogata_A,imada}
A recent analysis of the $t$-$J$ ladder using a mean-field theory with Gutzwiller renormalization of the matrix elements to account for the strong correlations gave a continuous evolution of the spin gap with doping.\cite{sigrist} The short range resonance valence
bond (RVB) state evolves into a superconductor with modified
$d$-wave symmetry within this mean-field approximation.
A tendency towards modified $d$-wave superconductivity was also
found in a bosonization approach\cite{khveshchenko} and in a recent
numerical study of the Hubbard ladder.\cite{noack}
We have investigated $t$-$J$ ladders up to a size of $10 \times 2$
sites using a Lanczos diagonalization method. First results have been
published in Ref.~\onlinecite{tsunetsugu94}. Here we report our results for larger lattices in more detail, including an investigation of the excitation spectrum, a discussion of phase separation, and the calculation of the superconducting order parameter and of the form factor of the Cooper pairs.
We find clear evidence of hole pairing and a modified $d$-wave
RVB state in lightly doped systems in agreement with the mean-field
theory. An interesting difference, however, is the discontinuous evolution of the excitation spectrum upon doping.
New ``quasiparticle'' excitations appear carrying both charge and
spin. These excitations are in addition to a band of magnons which
evolve continuously away from the undoped spin liquid.
This separation of the excitation spectrum into bound holon-spinon quasiparticles and collective magnon excitations contrasts with the full spin-charge separation found in a Luttinger liquid.
The $t$-$J$ ladder Hamiltonian is
\begin{eqnarray}
\label{eq:ham}
{\cal H}&=&
-t\sum_{j,\sigma,a}
{\cal P}
\left(
c_{j,a,\sigma}^\dagger c_{j+1,a,\sigma}
+ {\rm H. c. }
\right)
{\cal P}
\nonumber \\ &&
-t'\sum_{j,\sigma}
{\cal P}
\left(
c_{j,1,\sigma}^\dagger c_{j,2,\sigma}
+ {\rm H. c. }
\right)
{\cal P} \\
&&+J \sum_{j,a}
\left( {\bf S}_{j,a}\cdot {\bf S}_{j+1,a}
- \textstyle{1 \over 4} n_{j,a} n_{j+1,a}
\right)
\nonumber \\ &&
+J' \sum_{j}
\left( {\bf S}_{j,1} \cdot {\bf S}_{j,2}
- \textstyle{1 \over 4} n_{j,1} n_{j,2}
\right), \nonumber
\end{eqnarray}
where $j$ runs over $L$ rungs, and $\sigma$ $(=\uparrow,\downarrow)$
and $a$ $(=1,2)$ are spin and leg indices. The $t$-$J$ ladder is
sketched in Fig.~\ref{fig:geometry}.
The first two terms are the kinetic energies and the $J$ $(J')$ are
exchange couplings along the ladder (rungs).
Unless noted otherwise we set $t'=t$.
The projection operator
${\cal P} \equiv \prod_{i,a}(1-n_{i,a,\uparrow}n_{i,a,\downarrow})$
prohibits double occupancy of a site. Periodic or antiperiodic
boundary conditions (PBC, APBC) are used along the ladder.
The wave vector ${\bf k}=(k_x,k_y)$ is consequently well defined,
$k_x$ and $k_y$ being the momenta along the ladder and rungs.
The transverse momentum $k_y$ takes only the values $0$ and $\pi$,
corresponding to bonding and antibonding states.
At half filling the $t$-$J$ ladder is equivalent to the Heisenberg
ladder, which was investigated in earlier publications.\cite{%
dagotto,sigrist,gopalan,barnes93,barnes94,white,troyer}
The ground state of the Heisenberg ladder is a short range RVB
state with a spin gap of $\Delta\approx J/2$\cite{%
dagotto,barnes93,white,troyer} at isotropic coupling, $J'=J$.
The strong coupling limit $J'/J\rightarrow\infty$ is a good starting point to describe the system, since there a simple description of the spectrum is available.\cite{reigrotzki} In that limit,
each eigenfunction of the total system can be written as a direct
product of one-rung states, which are either spin singlets or one
of the triplets, and the ground state is that with all singlets.
The first excited multiplet consists of the states with one triplet
rung. A small but finite value of $J$ lifts the degeneracy of
these states. The one-magnon excitations then form a three-fold
spin degenerate band with dispersion
$\epsilon_k=J'+J \cos k_x + {1\over 4} (J^2/J')(3-\cos 2k_x)$
up to second order in $J$. It has a minimum gap
$\Delta = J'-J+{1\over 2} {J^2\over J'}$
at $k_x=\pi$.\cite{reigrotzki}
The momentum perpendicular to the chains is $k_y=\pi$. The higher excited states form a continuum, whose minimum is at ${\bf k}=(0,0)$ with energies slightly larger than twice the gap, $2\Delta$. With increasing $J$ the collective excitation branch crosses into the continuum, but the qualitative description is still valid.
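For the reader's convenience, here is a sketch of how the leading terms of this dispersion arise. With the couplings written as in \eqref{eq:ham}, a singlet rung has energy $-J'$ and a triplet rung has energy $0$, so that creating one triplet costs $J'$ at zeroth order. At first order in $J$, the diagonal matrix elements of the leg exchange shift all states by the same constant, while its off-diagonal elements move the triplet to a neighboring rung with amplitude $J/2$. For Bloch states $|k_x\rangle = L^{-1/2}\sum_j e^{ik_x j}\,|j\rangle$, where $|j\rangle$ carries the triplet on rung $j$, this gives
$$
\epsilon_{k_x} = J' + 2\cdot{\textstyle\frac{J}{2}}\cos k_x + O(J^2/J') = J' + J\cos k_x + O(J^2/J'),
$$
consistent with the expansion of Ref.~\onlinecite{reigrotzki} quoted above.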
In this paper we study the effects of doping holes into such a
ladder. Although the isotropic case, $J'/J=1$, is of most interest,
we also study the limit $J' \gg J, t$, which can be easily
understood. In this limit the problem reduces to a system of
weakly coupled rungs. The properties can be continuously followed
down to the isotropic point $J=J'$.
This paper is organized as follows. In Sec.~\ref{sec:ferro} we
briefly discuss the occurrence of ferromagnetism in the ladder
doped with one or two holes at $J=0$ and discuss the relationship
with the occurrence of ferromagnetism in two dimensions.
Next in Sec.~\ref{sec:pair} we discuss the pairing of holes doped
into the ladder and the occurrence of phase separation.
To understand the excitation spectra we start from the
single hole case in Sec. \ref{sec:onehole} and go on to the two-hole
case in Sec.~\ref{sec:twohole}. Section~\ref{sec:paircorr} discusses
long range correlations, in particular the interesting question of
the symmetry of the pairs and the mapping to a Luther-Emery liquid.
The single-particle excitations are discussed in
Sec.~\ref{sec:chargexc}. Over all we find a remarkable similarity
between the ladder and 2D clusters.
\section{Ferromagnetism for $J=0$}
\label{sec:ferro}
The $t$-model ($t$-$J$ model with $J=0$) is equivalent to the
infinite-$U$ Hubbard model. In single chains the ground state of the
$t$-model is degenerate in the spin degrees of freedom. In two
dimensions on the other hand the ground state of the $t$-model doped
with one hole is ferromagnetic.\cite{nagaoka} This is called the
Nagaoka effect.
The extension of the proof by Nagaoka to finite hole doping in the
thermodynamic limit proved to be difficult. Actually the ground state
of the two-dimensional (2D) square-lattice $t$-model doped with
{\it two} holes is {\it not} ferromagnetic.\cite{doucot} For finite
densities in the thermodynamic limit there are contradicting
results. Variational estimates for the $U=\infty$ Hubbard model
indicate that the fully polarized ferromagnetic state is stable until
a critical doping $\delta_{cr}=0.29$.\cite{linden} High
temperature series expansions by Putikka {\it et al.} on the other
hand show evidence that the fully polarized ferromagnetic ground
state does not survive at any finite doping. Instead they find
evidence for a partially polarized ferromagnetic state at low
hole doping. A fully polarized ferromagnetic state at finite
doping was found only for $J<0$.\cite{putikka}
In this context it is of interest to study the occurrence of
ferromagnetism in the ladder models. While the proof by Nagaoka
\cite{nagaoka} cannot be applied to the one-dimensional chain, it is valid for the ladder. The proof relies on the existence of
closed loops on the lattice. Such loops exist in 2D
lattices and on ladders, but cannot be formed on single chains. The
ground state of the ladder doped with one hole is thus
ferromagnetic.
We have numerically studied the $t$-ladder with
$L=2,3,\ldots,10$ rungs, doped with two holes. In
Fig.~\ref{fig:FM_engy} we show the ground state energies of the
ladders for both PBC's and APBC's.
We find that the ground state is always ferromagnetic for
APBC and an even number of rungs and for PBC and an odd number of
rungs. For the other boundary conditions the ground state is a spin
singlet.
An important point is that the ferromagnetic state always has the
lower energy for a ladder with at least four rungs. The singlet state
is very close in energy and deserves a more detailed investigation.
In Fig.~\ref{fig:FM_corr} we plot the real-space spin correlations
$\langle S^z(0)S^z(r)\rangle$ of the lowest singlet state of the
$L=10$ ladder. These spin correlations show that the singlet state
actually consists of two ferromagnetic domains with opposite
magnetization.
The results show clear evidence for a ferromagnetic ground state of
the $t$-ladder ($L \ge 4$) doped with two holes. In the thermodynamic
limit, however, two holes do not constitute a finite density. Extrapolations of
our small-cluster results at finite doping to the thermodynamic limit
$L \rightarrow \infty$ are hard to obtain. But one may speculate that the existence of a ferromagnetic ground state of the $t$-ladder with two holes and $L\ge4$ could indicate a ferromagnetic state for dopings $\delta < \delta_{\rm cr} \approx 0.25$. Similar results
were obtained by Hirsch and M\"uller-Hartmann.\cite{hirsch_phd}
\section{Hole Pairing and Phase Separation}
\label{sec:pair}
\subsection{Hole Pairing}
In this section we will discuss the pairing of holes doped into the
$t$-$J$ ladder and the occurrence of phase separation at large values
of $J/t$. We will start from the simple limit $J'\gg J,t$. In this
limit the undoped ladder consists of weakly coupled rungs, as is
sketched in Fig.~\ref{fig:strong_limit}(a).
In this limit two holes doped into the ladder will go onto the same
rung in order to minimize the number of broken singlet bonds. This
state is graphically shown in Fig.~\ref{fig:strong_limit}(c).
In order to study the occurrence of hole pairing at smaller values
of $J'$ and down to the isotropic point $J=J'$ we calculate the
binding energy and the hole-hole correlation function. We find
that even at isotropic coupling the holes still form a bound pair,
although the pair is more spread out there.
The binding energy $E_B$ is defined as
\begin{equation}
E_B \equiv 2E_{\rm G.S.}(2L-1) - E_{\rm G.S.}(2L)
- E_{\rm G.S.}(2L-2),
\end{equation}
where $E_{\rm G.S.}(N)$ is the ground state energy for $N$ electrons;
the boundary conditions are chosen between PBC and APBC to give the
lowest energy.
In the large $J'$ region the binding energy can easily be estimated.
A single hole doped into a Heisenberg ladder breaks one bond
with energy loss $J'$, but can gain kinetic energy $-t$ along the
ladder (see the next section for details) and $-t'$ along the rung.
It follows that
$E_{\rm G.S.}(2L-1)\approx E_{\rm G.S.}(2L)+J'-t-t'$.
Two holes on the same rung also break one bond, but the
kinetic energy of such a bound pair is much smaller, of order
$-4t^2/J'$, as will be calculated later. Thus we estimate
$E_{\rm G.S.}(2L-2)\approx E_{\rm G.S.}(2L) + J'$,
and thus a binding energy:
\begin{equation}
E_B \approx J'-2t-2t'\qquad \mbox{for}\;J'\gg J,t,t'\;.
\end{equation}
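This estimate can be checked with a few lines of arithmetic. The sketch below plugs the large-$J'$ estimates for the one- and two-hole ground state energies into the definition of $E_B$ above; the undoped energy is a placeholder value, since only energy differences enter:

```python
def binding_energy(e_one_hole, e_undoped, e_two_holes):
    # E_B = 2 E_GS(2L-1) - E_GS(2L) - E_GS(2L-2), as defined above.
    return 2.0 * e_one_hole - e_undoped - e_two_holes

t, tp, Jp = 1.0, 1.0, 10.0   # J' >> J, t, t' regime
e0 = -25.0                   # placeholder for E_GS(2L); it cancels out

e_1h = e0 + Jp - t - tp      # one hole: breaks a bond, gains -t - t'
e_2h = e0 + Jp               # rung-bound pair: breaks one bond, ~no kinetic gain

print(binding_energy(e_1h, e0, e_2h))  # J' - 2t - 2t' = 6.0
```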
Figure \ref{fig:gaps} shows $E_B$ as a
function of $J'$. It remains positive and thus shows binding down to
the isotropic value, $J/t=J'/t=0.3$. The same holds for a larger
$J/t=0.5$.
Additional evidence for pairing is provided by the hole-hole
correlation functions
\begin{equation}
\bigl\langle n_{\rm h}(0) n_{\rm h}(r) \bigr\rangle \equiv
\bigl\langle (1-n_{i,a}) (1-n_{i+r,a'})
\bigr\rangle,
\end{equation}
measured on the same leg $a=a'$ and on different legs $a\ne a'$
in the ground state. They are plotted in Fig.~\ref{fig:hole_corr}
for $J/t=0.3$ and $J'/J=1$ and $10$. For $J'/J \gg 1$ the two
holes are predominantly on the same rung and the correlation
function shows a clear exponential decay. At the isotropic point
the pair is more extended. The maximum of the correlation function
is now on different legs, at a distance of one lattice spacing along
the leg direction, but the correlations again decay at large
distances. We can calculate the
size of the hole pair by fitting the inter-chain correlations
to an exponential form
$\langle n_{\rm h}(0) n_{\rm h}(r) \rangle
\sim e^{-r/\xi} + e^{-(L-r)/\xi}$
for the two largest distances, $L/2$ and $L/2-1$.
The inset of Fig.~\ref{fig:hole_corr} shows the size $\xi$
of the hole pair as a function of the inter-chain coupling $J'/t$.
The pair is very tightly bound for $J'\gg J$. At the isotropic
point the pair is still bound, with a diameter of about two lattice
spacings. Note the oscillation of the radius with respect to $L$.
The size seems to converge to a value in between the $L=8$ and the
$L=10$ result at the isotropic point.
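Since the fit uses only the two largest distances, the amplitude drops out of the ratio of the two correlation values and the fit can be inverted in closed form: for $\langle n_{\rm h}(0)n_{\rm h}(r)\rangle \sim e^{-r/\xi}+e^{-(L-r)/\xi}$ one has $C(L/2-1)/C(L/2)=\cosh(1/\xi)$. A minimal sketch, with synthetic correlation data standing in for the measured ones:

```python
import math

def pair_size(c_half_minus_one, c_half):
    # For C(r) = A [exp(-r/xi) + exp(-(L-r)/xi)] the amplitude cancels:
    # C(L/2-1) / C(L/2) = cosh(1/xi).
    return 1.0 / math.acosh(c_half_minus_one / c_half)

# Synthetic data generated from a known pair size, to check the inversion.
L, xi_true, amp = 10, 2.0, 0.3
def corr(r):
    return amp * (math.exp(-r / xi_true) + math.exp(-(L - r) / xi_true))

xi = pair_size(corr(L // 2 - 1), corr(L // 2))
print(xi)  # recovers xi_true = 2.0
```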
\subsection{Effective boson model for the large $J'$ limit}
We may say that the system belongs to the Luther-Emery universality
class of 1D correlated systems,\cite{luther_A} in the sense that
the spin excitations acquire a finite gap while the charge
excitations remain gapless. In the limit of large $J'$, however,
the picture that tightly bound hole pairs are moving in a
background of singlet rungs is more appropriate than weak coupling
approaches like $g$-ology. Considering these hole pairs as hard
core bosons, we can determine the long-range correlations by a
mapping to an effective boson model.
The pair hopping matrix element to second order in
perturbation theory is
\begin{eqnarray}
t^* = { 2t^2 \over J'-{4t'^2 / J'}}.
\end{eqnarray}
There is a weak attraction $V^*$ between two hole pairs on
neighboring rungs, which again to second order takes the form
\begin{eqnarray}
V^* \equiv -{ J \over 2 } - {3J^2 \over 8J'} +
{4t^2 \over J'-{4t'^2 / J'}},
\end{eqnarray}
where the first, attractive, term comes from the charge part of the
$J$-term in the Hamiltonian. As $t^* , V^* \ll J'$ we can map the
low-energy part of the $t$-$J$ ladder onto an effective hard-core
boson model on a chain with nearest neighbor interaction:
\begin{equation}
H^{*} =
-t^* \sum_i \bigl( B_i^{\dag} B_{i+1} + \mbox{H. c.} \bigr)
+V^* \sum_i N_i N_{i+1},
\end{equation}
where the hard-core boson creation operator $B_i^{\dag}$ creates a
hole pair at the rung $i$ and $N_i \equiv B_i^{\dag}B_i$ is its
number operator. There is a hard-core repulsion since only one
hole pair can be created on any given rung.
Our effective boson model is equivalent to the XXZ-model in a
magnetic field, which has been solved exactly by a bosonization
approach and conformal
field theory.\cite{haldane} For $V^* < -2|t^*|$ the system is phase
separated. This is the case for $J' > J'_{\rm PS}$, where
\begin{equation}
J'_{\rm PS} = { 16 \, t^2 \over J } - {J \over 2} +
{\rm O} \left( {J^3 \over t^2} \right),
\end{equation}
again to second order in perturbation theory. For physically reasonable
values of $J/t$ phase separation occurs only at very large values of
$J'$: $J'_{\rm PS}/t=53.2$ for $J/t=0.3$ and $J'_{\rm PS}/t=31.8$ for
$J/t=0.5$. Note that the dominant attractive part of the interaction
comes from the charge part $-{1\over4} J n_{j,a} n_{j+1,a}$ of the
$J$-term.
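The quoted values of $J'_{\rm PS}$ can be reproduced by solving the condition $V^* = -2t^*$ numerically with the second-order expressions for $t^*$ and $V^*$ given above (taking $t'=t=1$); a simple bisection suffices. This is a sketch of the consistency check, not part of the original calculation:

```python
def t_star(jp, t=1.0, tp=1.0):
    # Pair hopping matrix element, second order in perturbation theory.
    return 2 * t**2 / (jp - 4 * tp**2 / jp)

def v_star(jp, j, t=1.0, tp=1.0):
    # Interaction between hole pairs on neighboring rungs, second order.
    return -j / 2 - 3 * j**2 / (8 * jp) + 4 * t**2 / (jp - 4 * tp**2 / jp)

def jp_phase_separation(j, lo=5.0, hi=500.0, tol=1e-10):
    # Bisection for V*(J') + 2 t*(J') = 0, the boundary V* = -2|t*|.
    f = lambda jp: v_star(jp, j) + 2 * t_star(jp)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

print(jp_phase_separation(0.3))  # close to 53.2
print(jp_phase_separation(0.5))  # close to 31.8
```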
Next we will discuss the region where the system is not yet
phase separated but $J'$ is still large ($J,t \ll J' < J'_{\rm PS}$).
There we can determine the dominant correlations from the
effective boson model. The correlation exponents have been
calculated indirectly by Bethe ansatz.\cite{haldane} Both the
charge density wave correlations and the superconducting
correlations show a power-law decay at large distances:
\begin{mathletters}
\begin{eqnarray}
\langle N_r N_0\rangle &&\sim
{\rm const.} \times r^{-2} +
{\rm const.} \times \cos \bigl( 2k_F r \bigr) \, r^{-K_\rho},\\
\langle B_r^{\dag} B_0\rangle &&\sim r^{-1/K_\rho}.
\end{eqnarray}
\end{mathletters}
The superconducting correlations $\langle B_r^{\dag}B_0\rangle$ are
dominant if $K_\rho>1$. This is the case for most of the phase
diagram, except for the phase separation regime at $V^*<-2t^*$. At
quarter filling $\rho=1/2$ and for $V^*>2t^*$ the system is in the
Ising-limit and shows a long range charge density wave ground
state. At fillings close to that line and for large $V^*>2t^*$ there
is a small region where $K_\rho<1$.\cite{haldane}
In our effective model we have $V^*<0$ and there
$K_\rho>2$.\cite{haldane} We are thus always in the region of
dominant superconducting correlations. Even neglecting the
attractive charge part of the $J$-term we are still in the
superconducting regime where $K_\rho>1$.
In the limit of large $J'$ the equivalence of the $t$-$J$ ladder with
a Luther-Emery liquid can clearly be seen. Going to isotropic
coupling the spin gap remains finite and the only low-lying
excitation is the collective charge mode, as we will show in the
following sections. Thus also at isotropic coupling the $t$-$J$
ladder is still a Luther-Emery liquid. In Sec. \ref{sec:map},
we will develop another approach which relates the long-range
correlations to thermodynamic quantities for more general $J$'s,
based on a bosonization of density fluctuations.
\subsection{Phase separation}
Finally we study the occurrence of phase separation at isotropic
coupling $J=J'$. We estimate the onset of phase separation by
determining the coupling $J$ at which the compressibility $\kappa$
diverges. The compressibility per site can be calculated as usual
\begin{equation}
\kappa^{-1}=\rho^2 \
{ \partial^2 \epsilon (\rho)
\over
\partial \rho^2 }\;,
\label{eq:kappa}
\end{equation}
where $\epsilon(\rho)$ is the energy density per site of the ladder
with a particle density per site $\rho = N/(2L)$.
In a finite system, the discrete version
\begin{equation}
\kappa^{-1}={N^2\over 2L}
\left[
{ E(N+2;L)+E(N-2;L)-2E(N;L) \over 4}
\right]
\label{eq:kappaf}
\end{equation}
is usually used, where $E(N;L)$ is the ground state energy of the
finite system with $N$ particles on the ladder with $L$ rungs
(volume $2L$).
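Equation (\ref{eq:kappaf}) is just a central second difference of the ground state energy in steps of $\Delta N = 2$. As an illustration (with a hypothetical smooth energy function standing in for the true ladder energies), the discrete formula converges to $\kappa^{-1}=\rho^2\,\partial^2\epsilon/\partial\rho^2$ as $L \rightarrow \infty$:

```python
import math

def kappa_inv_discrete(E, N, L):
    # kappa^{-1} = N^2/(2L) * [E(N+2;L) + E(N-2;L) - 2 E(N;L)] / 4
    return N**2 / (2 * L) * (E(N + 2, L) + E(N - 2, L) - 2 * E(N, L)) / 4

# Hypothetical smooth energy density eps(rho) = -(2/pi) sin(pi rho),
# i.e. E(N;L) = 2L * eps(N / (2L)); NOT the actual ladder energies.
def E_model(N, L):
    rho = N / (2 * L)
    return 2 * L * (-(2 / math.pi) * math.sin(math.pi * rho))

L, N = 100, 120                # density rho = 0.6
rho = N / (2 * L)
analytic = rho**2 * 2 * math.pi * math.sin(math.pi * rho)  # rho^2 eps''(rho)
discrete = kappa_inv_discrete(E_model, N, L)
print(discrete, analytic)      # agree up to O(1/L^2) corrections
```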
At small hole doping, however, this procedure may not be reliable
due to finite size effects caused by frustration on small lattices.
To see this let us consider the $L=8$ ladder doped with zero,
two or four holes. In the undoped case there are 8 spins on each
leg of the ladder. Two holes doped into the ladder will
predominantly go onto different legs, leaving seven spins
per leg, so the antiferromagnetic configuration on the legs will
be frustrated. For four holes there will be six spins on each leg
and the system is again not frustrated. Conversely, on an $L=9$
ladder the undoped case and the case with four holes are
frustrated, while the ladder doped with two holes is
non-frustrated.
We have thus used a different formula to calculate the
compressibility at small hole doping. We calculate the ground state
energies for an $L=8$ ladder doped with $N_{\rm h}=0$ and $4$ holes
and for an $L=9$ ladder doped with $2$ and $6$ holes. In all these
cases the ladder
is not frustrated. Then we estimate the compressibility from these
energies using finite differences similar to the above
Eq. (\ref{eq:kappaf}). In the thermodynamic limit
$L \rightarrow \infty$ both formulas give the same result,
as the frustration appears only on small lattices.
While the finite size effects are quite small at low electron
densities, they are much larger at small hole dopings due to the
frustration mentioned above. The estimated errors on the phase
separation line may thus be much larger there, about $\pm 0.2t$.
A comparison with the results obtained with open boundary
conditions (OBC) confirms our results. Only at small doping are
the OBC results unreliable, since there the holes are trapped
at the ends of the ladder.
Figure \ref{fig:phasesep} shows the phase separation line for the
$t$-$J$ ladder for $J=J'$, in the $J$-$\rho$ plane. Note that, in
contrast to the single-chain case,\cite{ogata_B} the onset of phase
separation at small hole doping is at lower values of $J/t$ than at
small electron concentrations. This resembles the behavior in two
dimensions,\cite{putikka} although the precise position of the
phase separation line in two dimensions has not yet been established.
\section{Properties of a single hole}
\label{sec:onehole}
In the previous section we have discussed the ground state of
the ladder doped with two holes. In order to understand the low
energy excitations of the ladder it is useful to study the one-hole
problem first.
As mentioned above the limit $J'\gg J,t$ is a good starting point to
explore the $t$-$J$ ladder. There are nine different states,
depicted in Fig.~\ref{fig:nine_basis}. A single electron goes
either into the bonding or antibonding orbital
\begin{equation}
b_{i,\sigma}^{\dag} = {1\over\sqrt{2}}
\left(
c_{i,1,\sigma}^{\dag} + c_{i,2,\sigma}^{\dag}
\right),
a_{i,\sigma}^{\dag} = {1\over\sqrt{2}}
\left(
c_{i,1,\sigma}^{\dag} - c_{i,2,\sigma}^{\dag}
\right),
\end{equation}
with energy $ \mp t'$, respectively. Two electrons on the rung are
either in the singlet state with energy $-J'$ or in one of the three
triplet states with energy~$0$.
The singlet state expressed in bonding and antibonding orbitals is
\begin{equation}
{1 \over \sqrt{2}}
\left(
c^{\dag}_{i,1,\uparrow} c^{\dag}_{i,2,\downarrow}
- c^{\dag}_{i,1,\downarrow}c^{\dag}_{i,2,\uparrow}
\right) =
{1 \over \sqrt{2}}
\left(
b_{i,\uparrow}^{\dag}b_{i,\downarrow}^{\dag}
- a_{i,\uparrow}^{\dag}a_{i,\downarrow}^{\dag}
\right).
\end{equation}
Similarly the three triplets can be expressed as combinations of one
bonding and one antibonding electron:
\begin{equation}
a^{\dag}_\uparrow b^{\dag}_{\uparrow},\;
{1\over\sqrt{2}}
\left(
a^{\dag}_\uparrow b^{\dag}_{\downarrow}
+a^{\dag}_\downarrow b^{\dag}_{\uparrow}
\right),\;
a^{\dag}_\downarrow b^{\dag}_{\downarrow}.
\end{equation}
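These rung energies are easily verified by diagonalizing a single rung, i.e. the two-site $t$-$J$ problem with hopping $t'$ and exchange $J'({\bf S}_1\cdot{\bf S}_2 - n_1 n_2/4)$ in the nine-state basis of Fig.~\ref{fig:nine_basis}. A small numpy sketch (the basis ordering is our own choice):

```python
import numpy as np

tp, Jp = 1.0, 10.0

# Nine rung states: each site is empty (0), spin up (1) or spin down (2).
states = [(s1, s2) for s1 in range(3) for s2 in range(3)]
index = {s: i for i, s in enumerate(states)}
H = np.zeros((9, 9))

for (s1, s2) in states:
    i = index[(s1, s2)]
    # Hopping -t' of a single electron between the two sites of the rung.
    if s1 != 0 and s2 == 0:
        H[index[(0, s1)], i] += -tp
    if s2 != 0 and s1 == 0:
        H[index[(s2, 0)], i] += -tp
    # Exchange J'(S1.S2 - n1 n2 / 4) when both sites are occupied.
    if s1 != 0 and s2 != 0 and s1 != s2:
        H[i, i] += -Jp / 2               # J'(Sz Sz - 1/4) in the up-down sector
        H[index[(s2, s1)], i] += Jp / 2  # transverse spin-flip term
    # Parallel spins (triplet components) have energy 0.

evals = np.sort(np.linalg.eigvalsh(H))
print(evals)
# Expected: singlet at -J'; bonding orbital at -t' (two spin states);
# empty rung and the three triplets at 0; antibonding at +t'.
```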
Figures \ref{fig:disp_1h}(a) and (b) show the one-hole spectra for
$L=8$ for large inter-chain coupling $J'/J=10$, calculated by
exact diagonalization for $J/t=0.3$, $J'/t=3$ and $J/t=0.5$, $J'/t=5$
respectively.
A hole on a single rung can be either in the bonding or the
antibonding orbital. One hole doped into the half filled ladder
will thus be either in a bonding or antibonding state, depending on
the parity symmetry of the total ladder
[see Fig.~\ref{fig:strong_limit}(b)]. This hole can
propagate along the ladder with a hopping matrix element
$\tilde{t}=+t/2$ in first order perturbation theory. Thus the low
energy states are two bands of holes in the bonding and antibonding
orbitals. They are split by the energy difference $2t'$ of the
bonding and antibonding states. These two bands can clearly be seen
in the spectra [Figs.~\ref{fig:disp_1h}(a) and (b)]. The minimum of
the bands is at $k_x=\pi$, since the hopping matrix element for
holes is positive, $\tilde{t}>0$. The bandwidth of both bands is $4\tilde{t}=2t$
in the limit $J'\gg J,t$. At finite $J$ the bandwidth is reduced
due to hybridization with the higher excited states.
Decreasing $J'$ to the isotropic point $J=J'$ changes the dispersion
of these bands [see Figs.~\ref{fig:disp_1h}(c) and (d)]. At low
energies we can still see the bands of holes in the bonding and
antibonding orbitals. These bands evolve continuously from the large
$J'$ limit. The minima of the energy bands are not at $k_x=0$ or
$k_x=\pi$, but at large momenta,
${\bf k}_{F}^{B}\approx\left(\pm{3\over5}\pi,0\right)$ for the
bonding and
${\bf k}_{F}^{A}\approx\left(\pm{2\over5}\pi,\pi\right)$ for the
antibonding band. We can fit the low-lying hole bands to
a dispersion of the form
\begin{eqnarray}
E(k_x) = &&E_0+\Delta E+\alpha_1\cos k_x \nonumber\\
&&\ \ \ + \alpha_2 \cos 2k_x + \alpha_3 \cos 3k_x,
\label{eq:fitspec}
\end{eqnarray}
corresponding to nearest neighbor ($\alpha_1$), next-nearest neighbor
($\alpha_2$) and third-nearest neighbor ($\alpha_3$) hopping. $E_0$
is the ground state energy of the undoped ladder and $\Delta E$ the
shift in energy of the center of the band upon doping. In
Fig.~\ref{fig:disp_fit} we show the bands and the excellent fit.
The parameters are shown in Table \ref{tab:fitspec}.
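The fit of Eq. (\ref{eq:fitspec}) is linear in the coefficients, so it reduces to ordinary least squares with the design matrix $[1,\cos k_x,\cos 2k_x,\cos 3k_x]$. A sketch with synthetic band energies (placeholders, not the measured dispersion or the parameters of Table~\ref{tab:fitspec}):

```python
import numpy as np

# Allowed k_x values 0 ... pi of an L=8 ladder with PBC.
L = 8
k = 2 * np.pi * np.arange(L // 2 + 1) / L

# Synthetic band generated from known coefficients; the constant
# lumps together E_0 + Delta E.
c_true = np.array([-10.0, 0.20, -0.08, 0.03])
E = c_true[0] + sum(c_true[n] * np.cos(n * k) for n in range(1, 4))

# Least-squares fit E(k) = c + a1 cos k + a2 cos 2k + a3 cos 3k.
A = np.column_stack([np.cos(n * k) for n in range(4)])  # n=0 column is all ones
coef, *_ = np.linalg.lstsq(A, E, rcond=None)
print(coef)  # recovers c_true
```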
The changes in the hole dispersion with decreasing $J'$ are
summarized as follows:
(i) The center of the bands shifts downwards by $\Delta E < 0 $,
compared to the undoped ladder. The energy gain for one hole in the
case of $J=0$ would be just the kinetic energy $-t'$. When $J > 0$
we lose magnetic energy by introducing the hole. The energy gain
is therefore smaller at larger $J/t$, as we can also see from the
fit parameters.
(ii) The hole bands are narrowed compared to the large $J'$ limit.
In that limit the bandwidth of the hole bands was $2t$.
This bandwidth is renormalized by the stronger polarization effects
at isotropic coupling, and it is now of the same order as the
magnetic energy $J$, instead of the kinetic energy $2t$.
(iii) The dispersion changes as longer range hopping processes
($\alpha_2,\alpha_3$) are introduced with decreasing $J'$, and the
minima move away from $k_x = \pi$. The minima of both bands are
very close in energy, again in contrast to the strong coupling
region where they are split by $2t'$. In Sec. \ref{sec:chargexc}
we will identify the minima $k_{F}^{B}$ and $k_{F}^{A}$ with the
Fermi points of the bonding and antibonding quasiparticle bands.
Another interesting question is the behavior of the free spin that is
left over after one hole has been doped into the ladder.
In the $t$-$J$ chain the spin and charge excitations are carried by
different soliton excitations which are far apart in space from each
other. This is a typical feature of spin-charge separation and such
a system is called a Luttinger liquid. In a Fermi liquid on the
other hand they are bound and the excitations are described by
quasiparticles carrying both charge and spin.
We have calculated the hole-spin correlations to determine
whether spin-charge separation occurs in the ladder. The real space
correlations
\begin{equation}
\langle n_{h,a} (j+r) S^{z}_{a'} (j) \rangle ,
\end{equation}
are shown in Fig.~\ref{fig:hole-spin}. This correlation function is
nonzero for the ground state in the subspace of
$S^z_{\rm tot} ={1 \over 2}$, since one spin remains unpaired.
The result shows that the hole is tightly bound to the remaining
free spin. At strong inter-chain coupling $J'\gg J,t$ it is again
predominantly on the same rung. At isotropic coupling the spin-hole
bound state is more extended. These spin-hole bound states thus
carry both charge and spin. In this sense they are similar to the
quasiparticles in a Fermi liquid. This is in contrast to the
spin-charge separation in the single chain. We will therefore call
the single holes bound to the free spin ``quasiparticles'',
although the system has a spin gap.
\section{Excitation spectra of the ladder with two holes}
\label{sec:twohole}
\subsection{Excitation Spectra}
The ground state of the ladder doped with two holes is, as discussed
above, a bound state of the two holes. This bound pair coherently
propagates along the ladder, giving rise to the lowest-lying band.
When $J'\gg J$, this band of spin-singlet charge excitations is
clearly seen in the numerical results
as shown in Fig.~\ref{fig:disp_2h_strong}.
The higher energy excitations are again understood simply
in the large $J'$ limit. An essential difference from the
lowest-lying singlet band
is that two holes are now separate rather than forming a bound
pair. Being separate, they can gain a larger kinetic energy,
but only in return for an even larger cost of exchange energy
$\sim J'$ as one more singlet rung is
broken. Thus there are continua of scattering states of the two holes
(``quasiparticles'') at higher energies. Since the residual
interactions between the two quasiparticles are weak,
the energy is almost degenerate between the $S=0$ and
$S=1$ spin subspaces.
On the finite lattice we naturally do not
see a continuum of scattering states, but only several discrete
bands. These bands, and the fact that the energies of the triplet and
singlet are nearly degenerate (up to boundary effects) can be seen in
the spectra.
There are various combinations of the two quasiparticle bands
in the two quasiparticle continuum of states.
The lowest are scattering states of two bonding
quasiparticles, with $k_y=0$. Higher states are scattering states of
one bonding and one antibonding quasiparticle. Having the same
transverse momentum, $k_y=\pi$,
these states hybridize with the one-magnon excitations in the
spin background, resulting in a ``bound state'' below the
two-particle continuum. This is nearly dispersionless and clearly
seen in the spectrum. The continuum of two antibonding
quasiparticles is much higher in energy and is not included in the
figure.
At higher energies there are spin excitations in the spin background.
They are not described by the quasiparticles and
will be discussed in more detail in the next section.
The results in the isotropic case, $J'/J=1$, are shown in
Fig.~\ref{fig:disp_2h_iso}.
The energy spectrum is more complicated but the above description
still holds qualitatively.
The ground state is still the bound hole pair. It moves coherently
along the ladder, yielding a gapless band of singlet charge
excitations. The band has a linear dispersion around
${\bf k}=(0,0)$, compared with the quadratic dispersion in the large
$J'$ case.
An important point is that despite its complicated dispersion the
low energy part is well separated from the other excitations.
Therefore also at isotropic coupling the only low-energy excitations
are the collective charge excitations, and we may identify the
isotropic $t$-$J$ ladder as a Luther-Emery liquid. We will discuss
the essential role of these charge fluctuations concerning
superconductivity in Sec.~\ref{sec:map}.
In addition to the gapless band of charge fluctuations, there are
various local minima at higher energies. However, they can be
explained by taking account of the nonmonotonic dispersion of the
one-hole spectra shown in Fig.~\ref{fig:disp_1h}, and our
quasiparticle picture still holds. More specifically, there are
four local minima in the single-hole spectra at
${\bf k} \approx ( \pm {3 \over 5} \pi,0)$ and
${\bf k} \approx ( \pm {2 \over 5} \pi,\pi)$, which have nearly the
same energy. Thus to construct low-energy two-quasiparticle
excitations there are many possible combinations of different
minima of one-particle states, as discussed below. This is the
origin of the many local minima in the two-particle spectra, and it
is confirmed by the dependence on the boundary conditions.
The ground state of the two-hole spectrum can be constructed from
holes near $\pm k_F$ in the single-hole spectra. The other minima
can be explained similarly. The minimum in the PBC spectrum at
$k_x ={4\over5}\pi$ can be identified with the $2k_F$ excitation,
where a particle moves from one Fermi point to the opposite one.
($2 \times { \pm 3 \over 5} \pi \equiv {\pm 4 \over 5} \pi$).
When using APBC we do not have the $k$-value of $\pm{3\over5}\pi$
in the single-hole spectrum on an $L=10$ ladder. The closest
$k_x$-points are $\pm{5\over10}\pi$ and $\pm {7 \over 10} \pi$,
leading to minima at $\pm \pi$ and $\pm {3 \over 5} \pi$ in the
two-hole spectra. Another feature that can be explained from the
single hole spectra is the minimum at ${\bf k}=(\pi,\pi)$
(odd parity, $k_x=\pi$). Using PBC this state is obtained with one
hole with ${\bf k}=({3\over5}\pi,0)$ and one with
${\bf k}=({2\over5}\pi,\pi)$, leading to the minimum at $(\pi,\pi)$.
Using APBC's we can combine one hole at
${\bf k}=({3 \over 10} \pi,\pi)$ with one at either
${\bf k}=({5 \over 10} \pi,0)$ or
${\bf k}=({7 \over 10} \pi,0)$. As mentioned above these two
states are higher in energy than the minimum at
$\pm {3 \over 5} \pi$ and very similar in energy. Therefore we
expect two states at
${\bf k}=({4 \over 5} \pi,\pi)$ and ${\bf k}=(\pi,\pi)$ which
are similar in energy but at a higher energy than the corresponding
states with PBC. This is exactly what we observe.
The odd parity states near $k_x =0$ can be constructed similarly.
The qualitative features of the three-hole excitation spectrum can
be explained in the same way.
\subsection{Spin Excitations}
\label{sec:spinexc}
One of the most interesting properties of the $t$-$J$ ladder is that
there are two distinct types of spin excitation. Although it is
most easily seen in the large $J'$ limit, the qualitative
distinction remains down to the isotropic point.
The first type is the collective magnon excitations inherited from
the undoped spin ladder. One of the electron-filled rungs is now
excited to a spin triplet. This local excitation is what we call
``magnon'' and it propagates coherently along the ladder, leading to
an energy dispersion with respect to $k_x$. For a detailed
investigation, we have examined the two-hole spectra in more detail
and have calculated the spin-spin and spin-hole correlations of the
low-lying triplet states. We find that the magnon excitations of
the Heisenberg ladder evolve continuously upon doping. However
there discontinuously appears a new kind of spin-triplet excitation
at lower energies, which is not present in the undoped ladder.
The lowest excitation is a different type for which quasiparticles
play an essential role. Therefore, the spin gap, defined as the
excitation energy to the lowest triplet, is a discontinuous
function of the hole doping at $\delta =0$. This new type of
excitations consists of breaking a pair of holes into two separate
quasiparticles, each carrying charge $+|e|$ and spin
$1/2$.\cite{K_takano} When the two quasiparticles are both in the
bonding orbital, their lowest energy, at ${\bf k}=(0,0)$, is lower
than the lowest magnon excitation. The additional energy gain is
easily understood, since the two separate holes have a larger
kinetic energy of the order of $t$, while the magnon kinetic
energy gain is of the order of $t^2 /J'$. As was shown in
Fig.~\ref{fig:gaps}, down to the isotropic point the lowest
two-quasiparticle excitation is lower in energy than the lowest
magnon excitation.
The above picture is confirmed by comparing the correlation
functions of the two different states. The dynamical spin structure
factor gives another confirmation.
Figure \ref{fig:corrs} shows the equal-time correlations of the
two holes,
$\langle n_{\rm h} ({\bf r}) n_{\rm h} ({\bf 0}) \rangle$,
and of spin and hole,
$\langle S^z ({\bf r}) n_{\rm h} ({\bf 0}) \rangle$,
calculated for these two types of spin-triplet excitations.
The latter quantity is nonzero since the states with $S^z =1$
are used in the calculation. When $J'/J=10$, the two holes
are separate in space in the lowest state, while tightly
bound in the other state. The position of the magnetic excitation,
on the other hand, is close to the hole position in the lowest state,
while the two are far apart in the other state.
Both behaviors are as predicted by the picture explained above,
and despite small modifications in detail they remain
qualitatively the same even at the isotropic point.
Our numerical results confirm that the lowest triplet state is
the quasiparticle excitation where the bound hole pair breaks up.
The two holes repel each other and the hole-hole correlations have
the maximum at the largest distance $L/2$. Each of the holes is
bound to a spin-$1/2$, as can be seen from the hole-spin correlation
function. This state is sketched in Fig.~\ref{fig:strong_limit}(d).
As a typical magnon excitation, we show the correlations for the
state at ${\bf k}=(\pi,\pi)$ that has the main spectral weight in
the dynamical spin structure factor, which will be discussed below.
The hole-hole correlations are similar to the ground state.
The hole-spin correlations show that the triplet carrying spin
current is far away from the hole pair. This state is shown in
Fig.~\ref{fig:strong_limit}(e). Mean field theory\cite{sigrist}
predicts only this magnon excitation, which evolves continuously
from the Heisenberg ladder.
In neutron scattering experiments the relevant quantity is the
dynamical spin structure factor
\begin{eqnarray}
{\cal S}({\bf q},\omega_0) \equiv \sum_{n}
\Bigl| \langle n|S_{\bf q}^{z}|\mbox{G.S.}\rangle
\Bigr|^2
\, \delta(E_n-E_{\rm G.S.}-\omega_0),
\end{eqnarray}
where $\{|n\rangle\}$ is a complete set of eigenstates with energies
$E_n$, $|\mbox{G.S.}\rangle$ is the ground state with energy
$E_{\rm G.S.}$, and
\begin{equation}
S_{\bf q}^{z} \equiv {1\over\sqrt{2L}}\sum_{\bf r}
e^{i {\bf q} \cdot {\bf r}} S_{\bf r}^{z} .
\end{equation}
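The Lanczos/continued-fraction scheme behind these spectra is compact enough to sketch. The toy code below applies it to an $L=8$ Heisenberg ring rather than the ladder (a smaller stand-in; the machinery is identical): build $H$, apply $S^z_q$ to the ground state, tridiagonalize the Krylov space with Lanczos, and evaluate ${\cal S}(q,\omega)$ from the continued fraction of the resolvent:

```python
import numpy as np

L, J = 8, 1.0                       # small Heisenberg ring as a stand-in
dim = 2 ** L

def bit(s, i):
    return (s >> i) & 1

# Dense H = J sum_i S_i . S_{i+1} with periodic boundary conditions.
H = np.zeros((dim, dim))
for s in range(dim):
    for i in range(L):
        j = (i + 1) % L
        if bit(s, i) == bit(s, j):
            H[s, s] += J / 4
        else:
            H[s, s] -= J / 4
            H[s ^ (1 << i) ^ (1 << j), s] += J / 2

evals, evecs = np.linalg.eigh(H)
E0, gs = evals[0], evecs[:, 0]

# Starting vector |v> = S^z_q |G.S.> for q = pi.
q = np.pi
phases = np.exp(1j * q * np.arange(L)) / np.sqrt(L)
v = np.array([gs[s] * sum(phases[i] * (bit(s, i) - 0.5) for i in range(L))
              for s in range(dim)])
norm2 = np.vdot(v, v).real          # total spectral weight at this momentum

def lanczos(H, v0, m=60):
    # Standard Lanczos recursion; returns diagonal and off-diagonal elements.
    a, b = [], []
    v0 = v0 / np.linalg.norm(v0)
    vprev, beta = np.zeros_like(v0), 0.0
    for _ in range(m):
        w = H @ v0 - beta * vprev
        alpha = np.vdot(v0, w).real
        a.append(alpha)
        w = w - alpha * v0
        beta = np.linalg.norm(w)
        if beta < 1e-10:
            break
        b.append(beta)
        vprev, v0 = v0, w / beta
    return a, b

a, b = lanczos(H, v)

def s_qw(w, eta=0.05):
    # S(q,w) = -Im <v| (w + E0 + i eta - H)^{-1} |v> / pi, evaluated
    # from the bottom of the continued fraction upwards.
    z = w + E0 + 1j * eta
    g = 0.0
    for n in range(len(a) - 1, 0, -1):
        g = b[n - 1] ** 2 / (z - a[n] - g)
    return (-(norm2 / (z - a[0] - g)).imag) / np.pi

print(E0, s_qw(0.5))
```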
Figure \ref{fig:spin_0h} shows ${\cal S}({\bf q},\omega)$ calculated
for the Heisenberg ladder using the Lanczos diagonalization combined
with the continued fraction method.\cite{heine} It can clearly be
seen that the dominant contributions arise from the collective
excitations near ${\bf q}=(\pi,\pi)$ and there is very little weight
in the continuum of spin excitations at higher energy.
In the doped case the two types of spin excitations have different
contributions to the dynamical spin structure factor. It can be seen
in Fig.~\ref{fig:spin_2h} that the continuum of spin excitations
moves towards lower energies. Most of the weight is in the magnon
excitations, consistent with the mean-field theory,\cite{sigrist}
and there is very little weight in the lowest triplet excitation
consisting of the two separate quasiparticles.
As in the large $J'$ limit the lowest triplet excitation with
$q_y=\pi$ is a bound state of a spin triplet and the hole pair.
At ${\bf q}=(\pi,\pi)$ this state has no spectral weight,
while most of the weight is in the second excited state,
which has the triplet separated from the bound pair.
This is a finite size effect of the two-hole system.
The reason is that at $q_y=0$ or $\pi$ we have an additional
symmetry, reflection invariance in the ladder direction.
The parity under these reflections is different for the ground
state and the triplet-hole pair ground state, leading to the
vanishing weight.
A significant difference between the two types of spin excitations
is that the maximum number of ``quasiparticle'' excitations is
limited by the number of holes. We need at least two holes to
create such an excitation. The number of possible excitations
is thus proportional to the hole doping $\delta$. On the other
hand, the magnon excitations can be excited at any rung where
there are no holes. The number of these excitations is thus
proportional to $1-\delta$ instead, much larger for a small doping
$\delta$. Therefore with decreasing temperature the susceptibility
will show a large exponential drop at temperatures of the order
of the gap of the undoped system ($T\sim0.5J$), followed by a
small drop at temperatures around the spin gap of the doped system.
To summarize, we can describe the spin excitations of the $t$-$J$
ladders by a simple picture: quasiparticles moving in a spin
liquid background. In the ground state the quasiparticles are
paired. In the excitation spectrum two types of excitations
can be distinguished. The first corresponds to the breaking
of a pair of quasiparticles. This excitation has the lowest
energy, but its number is limited by the number of holes. Of more
importance for measurements of the susceptibility or inelastic
neutron scattering experiments is the second type, which are magnon
excitations in the spin liquid background. They evolve continuously
from the undoped Heisenberg ladder.
Although the gap for this type of excitation is larger, it is more
important since more of these excitations can be created, and their
weight in the dynamical spin structure factor is also larger.
\subsection{Charge Excitations}
Similarly to the calculation of the dynamical spin structure factor
we calculate the dynamical charge structure factor defined by
\begin{equation}
{\cal N}({\bf q},\omega_0)\equiv \sum_{n}
\Bigl| \langle n|\rho_{\bf q} |\mbox{G.S.} \rangle
\Bigr| ^2
\delta( E_n - E_{\rm G.S.} - \omega_0),
\end{equation}
where $\rho_{\bf q}$ is the Fourier transform of the local density
fluctuation around the average density $\rho$.
The result of our calculations is shown in
Fig.~\ref{fig:charge_2h}. The main contribution arises from the
coherent motion of the hole pairs. This leads to large peaks
at low energies near ${\bf q}=(0,0)$. The rest of the weight is
distributed incoherently over a wide region
at rather high energies of order $4t$. It arises from interactions
of a single hole with the surrounding spin background. This is
similar to results obtained for two dimensions.\cite{eder}
Recently Tohyama et al. have calculated ${\cal S}({\bf q},\omega)$
and ${\cal N}({\bf q},\omega)$ for single chains and 2D
clusters.\cite{tohyama} In the single chain they find that, as
expected from Luttinger liquid theory, the charge and spin
excitations are decoupled in the low energy region. The dynamical
charge structure factor is very similar to that of spinless
fermions, consisting of large peaks at the energies expected
from the cosine band of the spinless fermions.
In two dimensions they find different behavior. In the spin
structure factor nearly all of the weight is in a few sharp peaks
at low energies. In the charge structure factor, on the other hand,
the main weight is at relatively large energies, of the order of
several $t$, and it is strongly broadened. This is indicative of
strong spin-charge interactions. Only at some special
$\bf q$-points are there peaks at low energies. The dynamical
charge structure factor, ${\cal N}({\bf q},\omega)$ for
the ladder shown in Fig.~\ref{fig:charge_2h} differs considerably
from the single chain but resembles the 2D cluster. At large $q_x$,
we see peaks at large energies $\sim 2t \mbox{-} 4t$.
At small $q_x$ in the $q_y=0$ sector, ${\cal N}({\bf q},\omega )$
for the ladder is dominated by the collective mode of the hole
pairs, and its behavior differs substantially from spinless
fermions. Another similarity to the 2D system is found in the
dynamical spin structure ${\cal S}({\bf q},\omega)$ where the
main weight is in peaks with energies $\sim J$
(see Fig.~\ref{fig:spin_2h}). We conclude from this comparison
that the 2D clusters and the ladders are closely related --
a fact which points towards a ``d-wave'' paired state for the
2D clusters. A more difficult question is to what extent
this behavior of the finite 2D cluster is a consequence of the
strong tendency of finite clusters to favor singlet ground states
and to what extent it is representative of an infinite plane.
\section{Long Range Correlations}
\label{sec:paircorr}
\subsection{Superconducting Correlations}
A highly intriguing and much debated subject of the high-$T_c$
superconductors is the internal symmetry of the order parameter.
In this section we study the internal structure of the pairs in the
doped ladders. For the $t$-$J$ model, usually only nearest-neighbor
pairs have been considered, except in a few cases. It is reasonable
to assume that these are the dominant pair correlations.
To be more quantitative, however, we have determined the optimal
form for the pairs.
Using the Lanczos algorithm we calculate the pairing correlation
functions for different pairings. Let us introduce the operator
creating a singlet pair of electrons on sites ${\bf r}$ and
${\bf r+d}$,
\begin{equation}
P_{\bf r,d}^{\dag}={1 \over \sqrt{2}}
\left(
c_{{\bf r},\uparrow}^{\dag} c_{{\bf r+d},\downarrow}^{\dag}
-c_{{\bf r},\downarrow}^{\dag} c_{{\bf r+d},\uparrow}^{\dag}
\right).
\end{equation}
Using this definition of the pair operator we can calculate the
superconducting order parameter
\begin{equation}
\chi_{\bf d} =
\Bigl\langle
\mbox{G.S., $N_{\rm h}$$-$2 holes}
\Big|
{1\over 2L} \sum_{\bf r} P_{\bf r,d}^{\dag}
\Big|
\mbox{G.S., $N_{\rm h}$ holes}
\Bigr\rangle
\end{equation}
and its Fourier transform
\begin{eqnarray}
\chi_{\bf k}&&=
\sum_{\bf d} \chi_{\bf d} \, e^{i {\bf k} \cdot {\bf d} }
\nonumber \\
&&=
\bigl\langle
\mbox{G.S., $N_{\rm h}$$-$2 holes}
\big|
P_{{\bf k},-{\bf k}}^{\dag}
\big|
\mbox{G.S., $N_{\rm h}$ holes}
\bigr\rangle
\end{eqnarray}
where $P_{{\bf k},-{\bf k}}$ is the Fourier transform of the real
space pair operator with zero total momentum.
Figure \ref{fig:pair_k} shows the superconducting order parameter
for the $L=8$ ladder with PBC's for several values of $J/t$.
The most obvious properties are that the sign is opposite for pairs
with $k_y=0$ and $k_y=\pi$ and that the absolute values are very
small near $(0,0)$ and $(\pi,\pi)$. This is similar to $d$-wave
pairing in a fully 2D system. The absolute value is largest near
wave vectors which we have identified with the Fermi points of the
quasiparticles
${\bf k}=(3 \pi /4,0) \approx {\bf k}_{F}^{B}$ and
${\bf k}=(\pi /2,\pi)\approx {\bf k}_{F}^{A}$. At large $J/t$
the order parameter is very similar to the simple
$\cos k_x-\cos k_y$ structure of nearest neighbor $d$-wave pairs.
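The qualitative features noted above (opposite signs in the $k_y=0$ and $k_y=\pi$ sectors, and near-vanishing amplitude at $(0,0)$ and $(\pi,\pi)$) are precisely those of the reference shape $\cos k_x-\cos k_y$, as a quick check shows (this is only the reference form factor, not the computed $\chi_{\bf k}$):

```python
import math

def d_wave(kx, ky):
    """Reference nearest-neighbor d-wave form factor cos(kx) - cos(ky)."""
    return math.cos(kx) - math.cos(ky)

# Opposite signs in the two k_y sectors at the same k_x:
kx = 3 * math.pi / 4
bonding_sector = d_wave(kx, 0.0)          # k_y = 0
antibonding_sector = d_wave(kx, math.pi)  # k_y = pi

# Vanishing amplitude at the zone center and corner:
zone_center = d_wave(0.0, 0.0)
zone_corner = d_wave(math.pi, math.pi)
```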
Consider now pairs with a form factor $F$:
\begin{equation}
P_{\bf r}^\dagger
= \sum_{\bf d} F_{\bf d}^* \, P_{{\bf r},{\bf d}}^\dagger,
\end{equation}
where $F$ is normalized
\begin{equation}
\sum_{\bf d} |F_{\bf d}|^2 \equiv 1.
\end{equation}
A simple calculation shows that the superconducting
``order parameter''
\begin{equation}
\chi_F = \sum_{\bf d} F_{\bf d}^* \, \chi_{\bf d}
\end{equation}
is maximized by the pair with the form factor
\begin{equation}
F_{\bf d} =
{ \chi_{\bf d}
\over
\Bigl[
{\displaystyle \sum_{\bf r}|\chi_{\bf r}|^2 }
\Bigr] ^{1/2}
},
\label{eq:formfactor}
\end{equation}
which is proportional to the order parameter $\chi_{\bf d}$.
This optimal order parameter then is
\begin{equation}
\chi = \Bigl[ \sum_{\bf d}|\chi_{\bf d}|^2 \Bigr] ^{1/2}.
\label{eq:optimal}
\end{equation}
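That the choice (\ref{eq:formfactor}) maximizes $\chi_F$ is simply the Cauchy-Schwarz inequality, $|\sum_{\bf d} F_{\bf d}^* \chi_{\bf d}| \le (\sum_{\bf d}|F_{\bf d}|^2)^{1/2} (\sum_{\bf d}|\chi_{\bf d}|^2)^{1/2}$, with equality when $F_{\bf d}\propto\chi_{\bf d}$. A numerical sketch with random pair amplitudes (illustrative values, not ladder data):

```python
import numpy as np

rng = np.random.default_rng(1)
chi_d = rng.normal(size=8) + 1j * rng.normal(size=8)  # stand-in for chi_d

# Optimal normalized form factor, Eq. (formfactor): F_d proportional to chi_d.
F_opt = chi_d / np.sqrt(np.sum(np.abs(chi_d) ** 2))
chi_opt = np.sum(F_opt.conj() * chi_d)

# The optimal value, Eq. (optimal): [sum_d |chi_d|^2]^(1/2).
bound = np.sqrt(np.sum(np.abs(chi_d) ** 2))

# No random normalized form factor should do better.
best_random = 0.0
for _ in range(2000):
    F = rng.normal(size=8) + 1j * rng.normal(size=8)
    F /= np.sqrt(np.sum(np.abs(F) ** 2))
    best_random = max(best_random, abs(np.sum(F.conj() * chi_d)))
```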
In Fig.~\ref{fig:OP} we show this maximal order parameter for a
$L=8$ ladder as a function of $J/t$. A clear increase as a function
of $J/t$ can be seen, similar to single chains and 2D planes where
the superconducting correlations are also enhanced at larger $J$
values. We note that the expectation value of the optimal order
parameter is larger for four doped holes than for two holes.
This is a consequence of the strong correlation effect, as any
pairing order parameter or fluctuations should vanish as
$\delta \rightarrow 0$. The mean-field theory including the strong
correlation effects predicts a $\delta$-linear dependence of the
order parameter as $\delta \rightarrow 0$:\cite{sigrist}
i.e., the superfluid density is proportional to the hole doping.
Our results show that the value for four doped holes is less than
twice the value for two holes. This may be because in the
four-hole case the doping, $\delta=0.25$, is already large and
outside the region of $\delta$-linear dependence. Further study
using larger lattices is needed to clarify this point more
quantitatively.
\subsection{Mapping to a Luther-Emery Liquid}
\label{sec:map}
In Secs.~\ref{sec:pair}-\ref{sec:twohole}, our numerical results
have shown that all the spin excitations cost a finite energy and
the only gapless excitations are charge fluctuations,
i.e., coherent propagation of bound hole pairs along the ladder
direction. We may therefore say that the system is a Luther-Emery
liquid\cite{luther_A} from the large-$J'$ limit down to the isotropic
coupling. This observation indicates that the low-energy and
long-wavelength properties of lightly doped $t$-$J$ ladders would
be entirely described by an effective continuum Hamiltonian
in terms of charge degrees of freedom, and that this model correctly
predicts the long-range asymptotic behavior of correlation functions.
The effective model is actually the bosonic Gaussian model proposed
by Efetov and Larkin,\cite{efetov}
\begin{eqnarray}
\label{Efetov_Larkin}
{\cal H}_{\rm EL}
&&= {\pi \over 2} \int dx
\left\{
{ [ n_B (x) - \overline{n}_B ]^2 \over \pi K_{\rm B} }
+ \pi K_{\rm B} v_s^2
\Bigl[
{ \nabla \theta_B (x) \over \pi }
\Bigr] ^2
\right\},
\end{eqnarray}
where $n_B (x)$ represents the density of bound hole pairs
at the rung $x$ with the average value
$\overline{n}_B \equiv N_B / L = \delta$, and
$\theta_B (x)$ is its conjugate phase obeying the canonical
commutation relation,
$[n_B (x) , \theta_B (x')] = i \delta (x-x')$.
Since the $q=0$ mode of the first term
in Eq.\ (\ref{Efetov_Larkin}) gives the change in the ground
state energy associated with the number of bound hole pairs,
$ \Delta E \propto (1/2K_B) (\Delta N_B)^2 $,
the parameter $K_{\rm B}$ is given by the compressibility of the
hole pairs, as will be shown below. The second term,
on the other hand, describes the energy change associated with a
current and is related to the sound velocity $v_s$. Instead of a
direct calculation of the dispersion relation, the sound velocity
can also be obtained numerically by applying an Aharonov-Bohm flux
penetrating the center of the ladder with PBC's, since the flux
induces a finite current along the chain direction. Once the two
parameters, $v_s$ and $K_{\rm B}$, are determined in this way,
it is straightforward to calculate the power-law exponents of
correlation functions as follows.
The Efetov-Larkin Hamiltonian is actually identical to the
single-component (i.e., spinless) Luttinger model diagonalized by
bosonization, which was studied by Mattis and Lieb,\cite{mattis}
Luther and Peschel,\cite{luther_B} and Haldane\cite{haldane}
in detail.
The Hamiltonian (\ref{Efetov_Larkin}) is immediately solved by
rewriting each Fourier component with a boson operator,
\begin{eqnarray}
\label{LE_bosons}
n_{B,k} \equiv
{\displaystyle \sqrt{{ |k| \over 2 \pi}} }
\bigl( b_{k} + b_{-k}^\dagger \bigr) , \ \
\theta_{B,k} \equiv i
{\displaystyle \sqrt{\pi \over 2 |k|} }
\bigl( b_{k}^\dagger - b_{-k} \bigr) ,
\end{eqnarray}
with the result
\begin{eqnarray}
\label{Ham_diag}
{\cal H}_{\rm EL} = \sum_{|k|}
v_s |k| b_{k}^\dagger b_{k}
+{\pi \over 2L}
\Bigl[ v_N (N_{\rm B}-N_{\rm B}^0)^2
+v_J J_{\rm B}^2
\Bigr] .
\end{eqnarray}
Here $N_B$ is the total number of bound hole pairs,
$v_N= (\pi K_{\rm B})^{-1}$ is the charge velocity, and
$v_J = \pi K_{\rm B} v_s^2$ is the current velocity associated with
the number of $2k_F^{\rm SF}$ excitations, $J_{\rm B}$; they obey
the universal relation $v_N v_J = v_s^2$.
Here $k_F^{\rm SF} = \pi \delta = \pi (1-\rho)$ is the Fermi wave
number of spinless fermions transmuted from the bosonic hole pair
operators by the Jordan-Wigner transformation. The last two terms
in Eq.\ (\ref{Ham_diag}) describe non-bosonic excitations
accompanied by a change in the quantum numbers, $N_{\rm B}$
and $J_{\rm B}$. Their importance was first pointed out by Haldane
in his Luttinger-liquid concept.\cite{haldane}
The propagator of a bound hole pair, i.e., the pairing correlation,
is then obtained as in the calculation of the Debye-Waller factor,
\begin{eqnarray}
\label{pair_corr}
G(r) &&=
\overline{n}_B \, \langle e^{i [ \theta_B (r) - \theta_B (0) ]}
\rangle
= \overline{n}_B \, e^{ - \langle [ \theta_B (r) - \theta_B (0) ]^2
\rangle /2 } \nonumber\\
&&\propto r^{-1/K_{\rho}},
\end{eqnarray}
where the exponent is
\begin{equation}
\label{exponent}
K_{\rho} = 2 \pi K_{\rm B} v_s =
{\pi} {\rho}^2 \, v_s \, \kappa,
\end{equation}
where $\rho$ is the electron density per site and $\kappa$ is the
electron compressibility. The relation for the ladder,
$K_B = \kappa \rho^2 /2$, is used to obtain the second equality.
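The second equality in Eq.~(\ref{pair_corr}) uses the Gaussian (Debye-Waller) identity $\langle e^{iX}\rangle = e^{-\langle X^2 \rangle /2}$ for a zero-mean Gaussian variable $X$, which is easily checked by sampling (purely illustrative numbers):

```python
import numpy as np

rng = np.random.default_rng(2)
sigma = 0.7                    # <X^2>^(1/2) for the Gaussian variable X
x = rng.normal(0.0, sigma, size=400_000)

lhs = np.mean(np.exp(1j * x))  # Monte-Carlo estimate of <exp(iX)>
rhs = np.exp(-sigma ** 2 / 2)  # Debye-Waller form exp(-<X^2>/2)
```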
This result differs from the one for the chain, which has the
numerical factor $\pi /2$ instead. It is important to emphasize
that this result coincides with the Luttinger-liquid parameter,
$K_\rho = 2 (v_J / v_N)^{1/2}$, of single-component boson systems.
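The various expressions for $K_\rho$ collected here are algebraically equivalent: with $v_N = (\pi K_{\rm B})^{-1}$, $v_J = \pi K_{\rm B} v_s^2$ and $K_B = \kappa\rho^2/2$, one indeed recovers $v_N v_J = v_s^2$ and $2(v_J/v_N)^{1/2} = 2\pi K_{\rm B} v_s = \pi\rho^2 v_s \kappa$, i.e.\ Eq.~(\ref{exponent}). A one-screen check with arbitrary positive parameters (sample values only, not ladder data):

```python
import math

# Arbitrary positive sample values, not ladder data.
kappa, rho, v_s = 0.8, 6 / 7, 1.3

K_B = kappa * rho ** 2 / 2       # ladder relation K_B = kappa rho^2 / 2
v_N = 1 / (math.pi * K_B)        # charge velocity
v_J = math.pi * K_B * v_s ** 2   # current velocity

K_rho_boson = 2 * math.sqrt(v_J / v_N)           # single-component boson form
K_rho_first = 2 * math.pi * K_B * v_s            # Eq. (exponent), 1st equality
K_rho_second = math.pi * rho ** 2 * v_s * kappa  # Eq. (exponent), 2nd equality
```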
Density-density correlations are characterized by a power-law decay
at the wave number $2k_F^{\rm SF}$. Since $k_F^{\rm SF}$ is twice
the average of the Fermi wave number of the original electron
bonding and antibonding bands, $k_F = {\pi \over 2} \rho$, the wave
number $2k_F^{\rm SF}$ is actually $4k_F$ in the original picture.
The exponent of the $2k_F^{\rm SF}$=$4k_F$ CDW fluctuations is
calculated using a density-phase duality as was done in the
original paper.\cite{efetov}
The density operator has a short-range $2k_F^{\rm SF}$ CDW order,
but since the value of $2k_F^{\rm SF}$ is locally determined by the
density at each position, it fluctuates around the average value
$2\pi \delta$ in association with the density fluctuations.
We may therefore write
\begin{mathletters}
\label{density}
\begin{eqnarray}
n_B (x) &&= \overline{n}_B \cos
\bigl[ 2k_F^{\rm SF} x + 2 \theta_J (x) \bigr], \\
{\partial \theta_J \over \partial x}
&&= \pi (n_B - \overline{n}_B ) .
\end{eqnarray}
\end{mathletters}
Rewriting the Hamiltonian (\ref{Efetov_Larkin}) with $\theta_J$ and
its conjugate operator $n_J \equiv {1 \over \pi} \nabla \theta_B$,
we again obtain a Gaussian model,
\begin{eqnarray}
\label{conjugate}
{\cal H}_{\rm EL}
&&= {\pi \over 2} \int dx
\left\{
v_J \bigl[ n_J (x) \bigr]^2
+ v_N
\Bigl[
{ \nabla \theta_J (x) \over \pi }
\Bigr] ^2
\right\},
\end{eqnarray}
where $[n_J (x),\theta_J (x')]= i \delta (x-x')$.
This duality of the charge and current operators was also
emphasized in Haldane's paper.\cite{haldane} Since the
density-density correlation is written as an exponential of
$\theta_J$ in this representation, its calculation proceeds
as before,
\begin{eqnarray}
N_B(r) &&\equiv \langle n_B (r) n_B (0) \rangle
\approx {\overline{n}_B}^2 \cos (2k_F^{\rm SF} r )
\langle e^{2i [ \theta_J (r) - \theta_J (0) ]} \rangle \nonumber \\
&&\propto \cos (2k_F^{\rm SF} r ) \, r^{-K_{\rho}}.
\label{density_corr}
\end{eqnarray}
Instead of calculating the sound velocity, the value of $K_\rho$ is
more accurately obtained through the Drude weight defined by
\begin{equation}
D \equiv {L \over 2} \,
{\partial^2 E_{\rm G.S.}(\phi )
\over
\partial \phi ^2 }
\Bigg|_{\phi=0} \ ,
\label{eq:Drude}
\end{equation}
where $E_{\rm G.S.}(\phi )$ is the total energy of the ground state
when the flux $\phi$ is applied. Assuming the holes propagate
in pairs, the relation between $D$ and $v_s$ for the ladder is
obtained and the result agrees with the one for the
chain\cite{schulz} aside from its numerical factor,
\begin{equation}
D = {4 \over \pi} \, v_J = {2 \over \pi} \, K_\rho v_s \ .
\label{eq:Drude_2}
\end{equation}
Combining this with Eq.\ (\ref{exponent}), we finally obtain the
formula of the correlation exponent in terms of the compressibility
and the Drude weight:
\begin{equation}
K_\rho = \pi \, \rho \ \sqrt{\kappa D \over 2} .
\label{eq:exponent_2}
\end{equation}
It is noted that this expression is identical to the one for the
chain in terms of $\kappa$ and $D$, while they differ by a factor
of 2 in terms of $\kappa$ and $v_s$.
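In practice the curvature in Eq.~(\ref{eq:Drude}) is evaluated from the ground-state energies at a few small fluxes by a finite difference, and the result is inserted into Eq.~(\ref{eq:exponent_2}). A sketch of this step with a mock flux dependence $E(\phi)=E_0+(D/L)\phi^2$ standing in for Lanczos data (the quadratic form and all numbers are illustrative):

```python
import math

L_sites = 8
D_true = 0.45   # Drude weight built into the mock data

def E_gs(phi):
    """Mock ground-state energy versus flux; replaces Lanczos output."""
    return -10.0 + (D_true / L_sites) * phi ** 2

# Eq. (eq:Drude): D = (L/2) d^2 E / d phi^2 at phi = 0 (symmetric difference).
h = 1e-3
second_deriv = (E_gs(h) - 2 * E_gs(0.0) + E_gs(-h)) / h ** 2
D = (L_sites / 2) * second_deriv

# Eq. (eq:exponent_2): K_rho = pi * rho * sqrt(kappa * D / 2).
kappa, rho = 0.8, 6 / 7   # sample compressibility and density
K_rho = math.pi * rho * math.sqrt(kappa * D / 2)
```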
In this way, when the energy scale concerned is smaller than the
spin gap, we can predict the long-range asymptotic behavior of the
correlation functions based on the Efetov-Larkin effective model,
and calculate correlation exponents once the two parameters,
$v_s$ and $K_{\rm B}$ (or equivalently $\kappa$ and $D$),
are numerically determined.
Figure \ref{fig:K_rho} shows the correlation exponent
calculated in this way for the isotropic ladder ($J'/J=1$)
with $L=7$ and two holes as a function of $J$. This corresponds
to the electron density, $\rho ={6 \over 7}=0.857$. The necessary
quantities are carefully calculated by using the Lanczos
diagonalization. The compressibility, $\kappa$, is determined
{}from the ground state energy of unfrustrated systems in the sense
explained before. The Drude weight, $D$, is then calculated by
Eq.\ (\ref{eq:Drude}). Combining these two, $K_\rho$ is obtained
via Eq.\ (\ref{eq:exponent_2}).
Near the phase separation boundary, $K_\rho$ grows rapidly.
This is due to the divergence of the compressibility at this
boundary. In other words, the collective charge excitations soften
and the superconducting fluctuations are enhanced
correspondingly.
Recently Hayward et al.\cite{hayward} directly calculated various
correlation functions for the $t$-$J$ ladders by using the
density-matrix renormalization-group method. At $J/t = J'/t =1$ and
$\rho =0.8$, they found a power-law decay of the pairing
correlations with the exponent close to unity. This exponent
corresponds to $K_\rho \sim 1$, which is larger than our estimate
at the same $J$ and $J'$, $K_\rho \approx 0.7$.\cite{tsunetsugu95}
The discrepancy may be due to the larger electron density in our
calculation, $\rho = {6 \over 7}=0.857$, but it is not clear whether
this suffices to account for the difference until a new calculation
is carried out at the same density. Hayward et al. also report
data on the density-density correlation function\cite{hayward} but
the exponent of the expected power-law decay of the oscillatory
term cannot be easily extracted.
\section{One-Particle Excitations}
\label{sec:chargexc}
Finally we discuss the one-particle Green's function where we can
see the quasiparticle excitations directly. The electron and hole
parts of its spectral function are defined as
\begin{eqnarray}
A_{\rm e,\sigma} ({\bf k},\omega)
&& \equiv \sum_n
\Bigl| \langle n, \mbox{2$L$$-$1}
| c_{{\bf k}\sigma}^\dagger
|{\rm G.S.},\mbox{2$L$$-$2}
\rangle
\Bigr|^2
\nonumber\\
&& \times
\delta (
\omega - E_n (\mbox{2$L$$-$1})
+ E_{\rm G.S.} (\mbox{2$L$$-$2}) + \mu
),
\nonumber\\
A_{\rm h,\sigma} ({\bf k},\omega)
&& \equiv \sum_n
\Bigl| \langle n, \mbox{2$L$$-$3}
| c_{{\bf k}\sigma}
|{\rm G.S.},\mbox{2$L$$-$2}
\rangle
\Bigr|^2
\nonumber\\
&& \times
\delta (
\omega + E_n (\mbox{2$L$$-$3})
- E_{\rm G.S.} (\mbox{2$L$$-$2}) + \mu
),
\label{eq:spe}
\end{eqnarray}
where $|n , N \rangle$ is an eigenstate for $N$ electrons with the
energy $E_n (N)$ and $|{\rm G.S.},N\rangle$ denotes the ground state
with $N$ electrons. Positive (negative) energies correspond to the
electron (hole) part. The chemical potential is defined by
\begin{equation}
\mu \equiv {\textstyle {1 \over 2}}
\bigl[
E_{\rm G.S.}(\mbox{2$L$$-$1})-E_{\rm G.S.}(\mbox{2$L$$-$3})
\bigr].
\end{equation}
The results for $L=10$ and the isotropic coupling, $J'/J=1$, are
shown in Fig.~\ref{fig:GF} for $J/t$=0.3 and 0.5. The wave
vectors along the ladder $k_x = {2 \pi \over L} n$ ($n$: integer)
are for PBC's and $k_x = {2 \pi \over L} (n + {1 \over 2})$ for
APBC's. The ground state energy $E_{\rm GS}(2L-2)$ and the
chemical potential $\mu$ in Eq.\ (\ref{eq:spe}) are the average
over both boundary conditions. The results for $L=10$ are very
similar to our $L=8$ results published before.\cite{tsunetsugu94}
The spectral function has large weights for the bonding ({\sl B})
($k_y =0$) and antibonding ({\sl A}) ($k_y = \pi$) orbitals only
near the Fermi energy $\omega = 0$, and they seem to constitute
quasiparticle bands. Away from the Fermi energy, the individual
quasiparticle peaks are much less prominent and there is an
incoherent part with an energy of the order of $t$.
The quasiparticle part of the spectrum is consistent with the
mean-field theory based on the $d$-wave RVB state.\cite{sigrist}
The undoped ladder consists of local singlets on the rungs.
Such a singlet is the superposition of two electrons in the bonding
orbital and two electrons in the antibonding orbital,
$ ( b_{\uparrow}^{\dag} b_{\downarrow}^{\dag}
-a_{\uparrow}^{\dag} a_{\downarrow}^{\dag} ) |0\rangle$.
Holes doped into the half-filled ladder will go predominantly into
the anti-bonding orbitals to gain a larger kinetic energy along the
rung direction. The bonding band is occupied by more electrons,
while the antibonding band is occupied by fewer electrons.
The quasiparticle with energy closest to $\omega =0$ has a wave
vector nearest to the original Fermi wave number, $k_F$:
($k_x = {3\pi \over 5}$ for bonding and $k_x = {2\pi \over 5}$
for antibonding). Because of the band splitting,
$k_F^{\rm B} > k_F^{\rm A}$, but the Luttinger sum rule is
satisfied, $k_F^{\rm B} + k_F^{\rm A} = (1-\delta) \pi $.
This means the Fermi volume is large, proportional to the electron
number rather than the hole number, and this is consistent with
photo-emission experiments on cuprate superconductors.\cite{dessau}
It is important to notice that the quasiparticle peaks near the
Fermi energy have their counterparts on the opposite side of the
Fermi energy. An electronic quasiparticle peak at energy
$\omega >0$ has a shadow hole peak at energy around $-\omega <0$,
and {\it vice versa}. These peaks indicate that the quasiparticle
excitations are those of the Bogoliubov quasiparticles as in BCS
theory, {\it i.e.}, mixture of an electron and a hole
($ \alpha_{\bf k}^\dagger = u_{\bf k} c_{{\bf k}
\uparrow}^\dagger + v_{\bf k} c_{-{\bf k} \downarrow}$).
The weights in the electron and hole parts are proportional to
$|u_{\bf k}|^2$ and $|v_{\bf k}|^2$. They are hole-like around
$k_x =0$ and electron-like around $k_x =\pi$ for both the bonding
and antibonding bands.
There exists a finite energy gap in the quasiparticle spectra.
The electron and hole branches both come close to the Fermi energy
at $k_x \sim {\pi \over 2}$, but instead of passing through they
move away from it. The energy gap for $J/t=0.3$ ($0.5$) is
$0.13t$ ($0.29t$) at ${\bf k}=({3\pi \over 5},0)$ and
$0.22t$ ($0.39t$) at ${\bf k}=({2\pi \over 5},\pi)$.
This corresponds to a quasiparticle gap
$2\Delta_{\rm QP} \simeq 0.13t \simeq {J \over 2}$ for
$J/t=0.3$ and $2\Delta_{\rm QP} \simeq 0.29t
\simeq {3 J\over 5}$ for $J/t=0.5$.
It is interesting to note that the calculations of
$A({\bf k},\omega)$ in 2D clusters\cite{stephan,ohta} show similar
behavior for ${\bf k}$-points not along $(1,1)$ but no shadow peaks
for ${\bf k} \parallel (1,1)$, indicating $d_{x^2-y^2}$-pairing also.
Figure \ref{fig:DOS} shows the spectral function of the one-particle
Green's function of two holes in an $L=10$ ladder summed over all
wave vectors:
$A_{{\rm e},\sigma}(\omega)=
\sum_{\bf k} A_{{\rm e},\sigma}({\bf k},\omega)$, and
$A_{{\rm h},\sigma}(\omega)=
\sum_{\bf k} A_{{\rm h},\sigma}({\bf k},\omega)$.
This quantity is the local density of states to add and remove
electrons. In a strongly correlated system, the sum rules on the
weight (i.e., the integrated values) of $A_{\rm e}(\omega)$ and
$A_{\rm h}(\omega)$ are very different since the former is given by
the number of empty sites and the latter by the number of filled
sites (or equivalently the number of holes and electrons,
respectively). In a Fermi liquid the values of $A_{\rm e}(\omega)$
and $A_{\rm h}(\omega)$ for small values of $|\omega -\mu |$ are
determined by the quasiparticle weight at the Fermi energy and are
continuous. It is therefore interesting to note that
Fig.~\ref{fig:DOS} shows approximately equal values for
$A_{\rm e}(\omega -\mu)$ and $A_{\rm h}(\mu-\omega)$
around $\omega \sim \mu$, but the sum rule on the total weight is
satisfied through the large weight in incoherent excitations in
$A_{\rm h}(\omega)$ at higher energies, $|\omega - \mu | > J.$
The strong correlation condition is reflected in the much smaller
total weight in $A_{\rm e} (\omega)$ which comes about through an
effective cut-off in energy on $A_{\rm e} (\omega)$.
In this respect the system in energy space is similar to a lightly
hole doped band insulator although as we discussed earlier the
location in ${\bf k}$-space of the coherent quasiparticle peaks
corresponds to a large Fermi surface to add and remove electrons.
The momentum distribution for electrons,
$n^{\rm e}_\sigma({\bf k}) \equiv
\langle c_{{\bf k},\sigma}^{\dag}c_{{\bf k},\sigma} \rangle$,
and for holes,
$n^{\rm h}_\sigma({\bf k}) \equiv
\langle c_{{\bf k},\sigma}c_{{\bf k},\sigma}^{\dag} \rangle$,
is shown in Fig.~\ref{fig:n_k}. Note that because of the strong
correlation condition, these do not add up to one; instead their
sum is given by
$ n^{\rm e}_\sigma({\bf k}) +n^{\rm h}_\sigma({\bf k})
= {1 \over 2}(1+\delta)$.
The strong correlation condition is also evident in the reduced
magnitude of the variation of $n^{\rm e}_\sigma({\bf k})$ as a
function of ${\bf k}$. Nonetheless the presence of an apparent
``Fermi surface'' in the center of the Brillouin zone is clear,
consistent with the dispersion relations of the coherent
quasiparticles. The difference between the bonding ($k_y=0$) and
antibonding ($k_y=\pi$) bands and the reduced occupation of the
antibonding band arise from the energy gain in placing the doped
holes preferentially in the antibonding band.
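The projected sum rule $n^{\rm e}_\sigma({\bf k}) + n^{\rm h}_\sigma({\bf k}) = {1 \over 2}(1+\delta)$ can be made plausible already on a single site: with double occupancy excluded, the local basis is $\{|0\rangle, |\!\uparrow\rangle, |\!\downarrow\rangle\}$, and for hole density $\delta$ one finds $\langle c^{\dag}_\sigma c_\sigma \rangle = (1-\delta)/2$ and $\langle c_\sigma c^{\dag}_\sigma \rangle = \delta$ per spin. A minimal check (single site only; the ${\bf k}$-resolved statement requires the full lattice):

```python
import numpy as np

delta = 0.25   # hole density (sample value)

# Single-site density matrix in the basis {|0>, |up>, |down>},
# with double occupancy projected out.
rho_site = np.diag([delta, (1 - delta) / 2, (1 - delta) / 2])

# Projected annihilation operator for an up-spin electron: |up> -> |0>.
c_up = np.zeros((3, 3))
c_up[0, 1] = 1.0

n_e = np.trace(rho_site @ c_up.T @ c_up)  # <c^dag c>, up spin
n_h = np.trace(rho_site @ c_up @ c_up.T)  # <c c^dag>, up spin
```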
\section{Conclusions}
The results of our Lanczos diagonalizations confirm earlier studies
which concluded that lightly doped two-leg ladders belong to a
different universality class from single chains. The latter are
Tomonaga-Luttinger liquids with gapless and separated spin and
charge excitations. The ladder in contrast has a finite gap in the
spin excitation spectrum and gapless excitations only in the charge
sector. The low energy excitations evolve continuously from the
limit of strong inter-chain exchange coupling ($J' \gg J,t$) and
the simplicity of that limit allows a clear interpretation of
our results.
At large $J'$ the dispersion relation of a single doped hole
consists of two cosine bands corresponding to bonding and
antibonding states on a rung. Lowering $J'$ to the isotropic limit
($J'/J=1$) and setting both $J,J'<t$ changes the dispersion
relation substantially. The coherent parts of both bands are
centered at energies $\sim -1.5t$, but the width is only $\sim J$.
The spin and charge components are still bound but more loosely and
the large magnetic polarizability of the spin background introduces
longer-range hoppings. Remarkably the form of the bands resembles
the noninteracting band structure so that a photoemission
experiment which removes electrons would measure in effect a large
``Fermi surface'' with bonding and antibonding pieces. In this
regime the quasiparticle propagation is strongly influenced by
the coupling to magnetic excitations.
When two holes are added they bind together on a single rung at
large $J'$ and remain bound although the size of the bound hole
pair increases as $J'$ approaches $J \sim t/2$. Moreover the
qualitative features of the density-density
${\cal N}({\bf q},\omega)$ and spin-spin structure factor
${\cal S}({\bf q},\omega)$, which are easy to understand at large
$J'$, remain similar as $J'$ approaches $J \sim t/2$.
${\cal N}({\bf q},\omega)$ near ${\bf q}=(0,0)$ is dominated by
the low energy mode associated with the motion of hole pairs.
At large $q_x$, ${\cal N}({\bf q},\omega)$ has a broad peak at high
energies ($\sim 4t$) similar to that found by Ohta, Eder and
Maekawa for 2D clusters.\cite{ohta} They interpreted this as local
excitations of single holes in the magnetic cloud or spin
bag.\cite{eder} The dynamical spin structure factor,
${\cal S}({\bf q},\omega)$, also resembles 2D clusters and not 1D
chains when we compare to the results of Tohyama, Horsch and
Maekawa.\cite{tohyama} The major weight is at energies $\sim J$.
The spin gap evolves discontinuously upon doping through the new
quasiparticle excitations that can be made by breaking a hole pair
into two separate single holes. However the major weight of the
spin excitations remains in the collective magnon mode whose
dispersion evolves continuously from the $\delta=0$ limit,
although it is influenced by the continuum of quasiparticle
excitations.
We have also investigated the one-particle spectral functions to
add and remove electrons from the two-hole ground state.
These show clearly the unusual nature of this ``Fermi liquid''.
When electrons are removed (or holes added), the spectral weight is
spread over a large energy region ($\sim 6t$), but the coherent
part is limited only to energies $\sim J$ below the Fermi energy
$\mu$. The energy dispersion relations show a large apparent
Fermi surface for the coherent quasiparticles, which matches
onto a similar one for adding electrons at energies greater than
$\mu$. These ${\bf k}$-space features resemble a metal with a
large Fermi surface. The property that resembles a lightly hole
doped insulator is the energy dependence of the spectral weight to
add an electron. This shows a low energy cut-off
($\sim \delta \cdot 6t$) similar to a lightly hole doped band
insulator. The result is an intriguing duality between
metallic-like features in ${\bf k}$-space and lightly hole doped
insulating features in energy space.
The overall properties of the lightly doped ladder place it in the
Luther-Emery class rather than the Tomonaga-Luttinger class of 1D
systems. The low energy properties of Luther-Emery liquids are
described by interacting hard-core bosons as shown by Efetov and
Larkin. In the present case the Efetov-Larkin bosons are bound
hole pairs. Two features distinguish the $t$-$J$ ladder from the
usual Luther-Emery liquids arising from attractive interactions.
One is the $d$-wave character of the pairing and the second is the
presence of magnon excitations and limited quasiparticle
excitations. Note the magnon excitations cannot be viewed as the
collective mode of quasiparticles since the latter vanish as
$\delta\rightarrow 0$. The system is not a standard Fermi liquid,
but rather is a new and interesting mixture of a dilute attractive
Fermi gas in which the hole binding energy remains finite as
$\delta\rightarrow 0$, and a dense Fermi liquid with an apparent
large Fermi surface in $\bf k$-space.
Comparing the ladder with the results by Tohyama et al., we see that
the ladder is very different from the single chain but similar to 2D
clusters in many respects. Both in ladders and in 2D clusters
$d$-wave pairs are found down to small $J/t$. The dynamical charge
and spin structure factors look remarkably similar and the
single-particle spectral functions indicate the existence of
Bogoliubov quasiparticles with a finite superconducting gap.
Thus we are led to the conjecture that the $t$-$J$ model on 2D
clusters is a doped RVB spin liquid showing $d$-wave pairing,
similar to the ladder.
\acknowledgements
We wish to thank M. Sigrist, F.C. Zhang, H. Monien, R. Noack,
D. Poilblanc, P. Prelovsek, D.J. Scalapino, S.R. White, and
D. W\"urtz for helpful discussions.
This work has been supported by the Swiss National Fund under grant
number NFP-304030-032833, by an internal grant of ETHZ and by the
Centro Svizzero di Calcolo Scientifico CSCS Manno. The calculations
have been performed on the Cray Y-MP/464 of ETH Z\"urich and
on the NEC SX-3/24R of CSCS Manno.
\section{Introduction}\label{sec_Intro}
The $A_\infty$ T-system \cite{DFK13}, also called the octahedron recurrence, is a discrete dynamical system of formal variables $T_{i,j,k}$ for $i,j,k\in\mathbb{Z}$ satisfying
\[
T_{i,j,k-1}T_{i,j,k+1} = T_{i-1,j,k}T_{i+1,j,k} + T_{i,j-1,k}T_{i,j+1,k}.
\]
This recurrence relation preserves the parity of $i+j+k$ and therefore there are two independent systems depending on the parity of $i+j+k.$
We will assume throughout the paper that $i+j+k\equiv 1 \bmod 2$.
Several combinatorial solutions have been considered, including solutions in terms of alternating sign matrices \cite{RR86}, domino tilings \cite{RR86,EKLP}, perfect matchings \cite{Speyer} and networks \cite{DFK13}.
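For concreteness, the recurrence can be iterated directly from all-ones initial data placed on the fundamental stepped surface $\mathbf{fund}:(i,j)\mapsto(i+j \bmod 2)-1$ that appears below; every division is then exact, as guaranteed by the Laurent phenomenon together with positivity. A small sketch (the all-ones initial data and the memoized recursion are illustrative choices only):

```python
from functools import lru_cache

def fund(i, j):
    """Fundamental stepped surface (i, j) -> (i + j mod 2) - 1."""
    return (i + j) % 2 - 1

@lru_cache(maxsize=None)
def T(i, j, k):
    """T_{i,j,k} with T = 1 on and below the fundamental stepped surface."""
    assert (i + j + k) % 2 == 1          # only one parity class is involved
    if k <= fund(i, j):
        return 1
    # Octahedron recurrence solved for the top vertex T_{i,j,k}.
    num = (T(i - 1, j, k - 1) * T(i + 1, j, k - 1)
           + T(i, j - 1, k - 1) * T(i, j + 1, k - 1))
    q, r = divmod(num, T(i, j, k - 2))
    assert r == 0                        # the division is always exact
    return q
```

With this initial data one finds, for instance, $T_{0,0,1}=2$, $T_{1,0,2}=8$ and $T_{0,0,3}=64$: always positive integers, in accordance with the Laurent phenomenon.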
The $A_\infty$~T-system can also be interpreted \cite{DFK09} as mutation in an infinite-rank cluster algebra \cite{FZ} of geometric type without coefficients.
Its corresponding quiver is called the octahedron quiver \cite{DFK13}.
One can also consider cluster algebras with coefficients \cite{FZ4}.
For cluster algebras of geometric type, this is equivalent to adding frozen vertices to the quiver.
A quite general type of coefficients is principal coefficients.
This corresponds to adding one frozen vertex for each quiver vertex, with an arrow pointing from it to the quiver vertex.
In some literature, this new quiver is called the coframed quiver associated with the octahedron quiver.
The importance of principal coefficients stems from the separation formula \cite[Theorem 3.7]{FZ4}, which states that a cluster algebra with any coefficients can be written in terms of one with principal coefficients.
Some generalizations of T-systems with coefficients have been suggested by Speyer in his work on the octahedron recurrence \cite{Speyer} and by Di Francesco in his work on the generalized lambda determinant \cite{DF13}.
In this paper, we consider $A_\infty$ T-systems with principal coefficients using the cluster-algebra definition. We then give combinatorial solutions in terms of perfect matchings, non-intersecting paths and networks.
The paper is organized as follows.
In Section \ref{sec_Cluster}, we review some basic definitions and results in cluster algebras from \cite{FZ,FZ4}.
In Section \ref{sec_T-system}, we define the T-system with principal coefficients, whose initial condition is in the form of initial data on a stepped surface.
The goal is to find, for an arbitrary point $(i_0,j_0,k_0)$ and a stepped surface $\k$, an expression of $T_{i_0,j_0,k_0}$ in terms of initial data on $\mathbf{k}$.
The Laurent phenomenon for cluster variables \cite[Theorem 3.1]{FZ} guarantees that the expression is indeed a Laurent polynomial in the initial data and coefficients.
In this paper, we give explicit combinatorial expressions of $T_{i_0,j_0,k_0}$ in terms of initial data when the point $(i_0,j_0,k_0)$ is above $\mathbf{k}$ and $\mathbf{k}$ is above the fundamental stepped surface $\mathbf{fund}:(i,j)\mapsto(i+j \bmod 2)-1.$
Some other cases will be discussed in Section \ref{sec_General}.
Section \ref{sec_Dimer} is devoted to a perfect matching solution.
Following the construction in \cite{Speyer}, we first construct a finite bipartite graph $G$ with open faces, depending on both $(i_0,j_0,k_0)$ and $\mathbf{k}$, and then construct a face-weight $w_f$ and a pairing-weight $w_p$ on perfect matchings of $G.$
This leads to the perfect-matching solution (Theorem \ref{thm:main}):
\[ T_{i_0,j_0,k_0} = \sum_{M} w_p(M)w_f(M) \]
where the sum runs over all perfect matchings of $G$.
The weight $w_p(M)$ is a monomial in the cluster coefficients, while $w_f(M)$ is a Laurent monomial in the initial data (cluster variables).
In Section \ref{sec_edge-weight}, we define the closure $\overline{G}$ of the graph $G$ and transform our previous two weights to the edge-weight $w_e.$
This gives another form of the perfect-matching solution (Theorem \ref{thm:edgesol}):
\[
T_{i_0,j_0,k_0} = \sum_{\overline{M}} w_e(\overline{M}) \Big/ w_e(\overline{M}_0)\big|_{c_{i,j}=1}.
\]
The sum runs over all perfect matchings of $\overline{G}$ with a certain boundary condition, see Definition \ref{defn:Mbar0}.
In Section \ref{sec_path}, we orient all the edges of $G$ and $\overline{G}$ and give an explicit bijection between perfect matchings of $G$ and non-intersecting paths on $G$ with certain sources and sinks.
This bijection can also be extended to $\overline{G}.$
Using the modified edge-weight $w'_e$ obtained from $w_e$ together with the bijection, the perfect-matching solution for $\overline{G}$ gives the nonintersecting-path solution (Theorem \ref{thm_pathsol}):
\[
T_{i_0,j_0,k_0} = \sum_{\overline{P}} w_e'(\overline{P}) \Big/ \prod_{\substack{\circ\text{---}\bullet\\b}\in \overline{M}_0}\bar{p}_b
\]
where the sum runs over all non-intersecting paths on $\overline{G}$ with certain sources and sinks, see Theorem \ref{thm_pathsol}.
In Section \ref{sec_network}, we first consider the network $N$ studied in \cite{DF10,DFK13}, associated with $(i_0,j_0,k_0)$ and $\mathbf{k}$.
It is obtained from the shadow of the point $(i_0,j_0,k_0)$ on the lozenge covering of $\mathbf{k}$.
We point out that it can also be obtained from the graph $\overline{G}$ by tilting all the diagonal edges of $\overline{G}$ so that they become horizontal.
This allows us to pass the modified edge-weight $w'_e$ on $\overline{G}$ to a weight on the network $N$.
Paths on $\overline{G}$ then become paths on $N$.
From Theorem \ref{thm_pathsol}, we get the network solution (Theorem \ref{thm:network}) as a partition function of weighted non-intersecting paths on the network $N$, which can also be written as a certain minor of the network matrices (Theorem \ref{thm:networkmatrix}).
In Section \ref{sec_app}, we discuss other types of coefficients of the T-system related to Speyer's octahedron recurrence \cite{Speyer}, generalized lambda-determinants \cite{DF13} and (higher) pentagram maps \cite{Schwartz,OST,Glick,GSTV14}.
\subsection*{Acknowledgements}
The author would like to thank his advisors R. Kedem and P. Di Francesco for their helpful advice and comments. The author also thanks G. Musiker and M. Glick for discussions. This work was supported in part by a gift to the Mathematics Department at the University of Illinois from Gene H. Golub, the NSF grant DMS-1100929, NSF grant DMS-1301636 and the Morris and Gertrude Fine endowment.
\section{Cluster algebras}\label{sec_Cluster}
In this section, we quote some basic definitions and important results in the theory of cluster algebras mainly from \cite{FZ,FZ4}.
\subsection{Finite rank cluster algebras}
Let $(\P,\oplus,\cdot)$ be a semifield, i.e., $(\P,\cdot)$ is an abelian group, and $\oplus$ is an auxiliary addition: commutative, associative and distributive with respect to the multiplication.
The following are two important examples of semifields.
\begin{defn}[Universal semifield]
For a set of labels $J,$ the \defemph{universal semifield} on the set of variables $\{y_j\mid j\in J\}$ denoted by $\mathbb{Q}_{sf}(y_j :j\in J) := (\mathbb{Q}_{sf}(y_j :j\in J),+,\cdot)$ is the set of all subtraction-free expressions of rational functions in independent variables $\{y_j \mid j\in J \}$ over $\mathbb{Q}$ with the usual addition and multiplication.
\end{defn}
\begin{defn}[Tropical semifield]
For a set of labels $J,$ the \defemph{tropical semifield} on the set of variables $\{y_j\mid j\in J\}$ denoted by $\operatorname{Trop}(y_j : j\in J)$ is the free multiplicative abelian group generated by $\{y_j \mid j\in J\}$ with the auxiliary addition defined by:
\[
\prod_j y_j^{a_j}\oplus\prod_j y_j^{b_j} = \prod_j y_j^{\min(a_j,b_j)}.
\]
\end{defn}
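Since a tropical monomial is determined by its exponent vector, the operations of $\operatorname{Trop}(y_j : j\in J)$ can be sketched directly. The following is a minimal illustration in plain Python; the dictionary encoding and helper names are our own, not part of any standard library.

```python
# A Laurent monomial prod_j y_j^{a_j} in Trop(y_j : j in J) is stored as its
# exponent dictionary {j: a_j}; multiplication adds exponents, and the
# auxiliary addition (oplus) takes the coordinate-wise minimum.
def times(a, b):
    return {j: a.get(j, 0) + b.get(j, 0) for j in set(a) | set(b)}

def oplus(a, b):
    return {j: min(a.get(j, 0), b.get(j, 0)) for j in set(a) | set(b)}

m1 = {'y1': 2, 'y2': -1}        # y1^2 * y2^(-1)
m2 = {'y1': 1}                  # y1
assert times(m1, m2) == {'y1': 3, 'y2': -1}
assert oplus(m1, m2) == {'y1': 1, 'y2': -1}
# oplus with 1 (the empty monomial) clips every exponent at zero:
assert oplus(m1, {}) == {'y1': 0, 'y2': -1}
```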
Here is some notation that will be used throughout the paper:
\[
[x]_+=\max(0,x),~ [1,n]=\{1,2,\dots,n\},~ \operatorname{sgn}(x) =
\begin{cases}
-1, &x<0\\
0, &x=0\\
1, &x>0
\end{cases}.
\]
Let $n\in\mathbb{N}$. We now state the main definitions of cluster algebras of rank $n.$
Let~$\P$ be a semifield, let $\mathcal{F}=\mathbb{Q}\P(x_1,\dots,x_n)$ be an ambient field, and let $\mathbb{T}_n$ be the $n$-regular tree whose $n$ edges incident to each vertex have different labels from $1$ to $n.$
\begin{defn}[Cluster patterns and Y-patterns]\label{def:mutation}
A \defemph{cluster pattern} (resp. \defemph{Y-pattern}) is an assignment $t\mapsto (\mathbf{x}_t,\mathbf{y}_t,B_t)$ (resp. $t\mapsto(\mathbf{y}_t,B_t)$) of each vertex $t\in\mathbb{T}_n$ to a \defemph{labeled seed} (resp. \defemph{labeled Y-seed}) such that:
\begin{itemize}
\item The \defemph{cluster} tuple $\mathbf{x}_t=(x_{1;t},\dots,x_{n;t})$ is an $n$-tuple of elements of $\mathcal{F}$ forming a free generating set.
\item The \defemph{coefficient} tuple $\mathbf{y}_t=(y_{1;t},\dots,y_{n;t})$ is an $n$-tuple of elements of $\P.$
\item The \defemph{exchange matrix} $B_t=(b^{(t)}_{ij})\in M_{n\times n}(\mathbb{Z})$ is a skew-symmetrizable matrix.
\item $ t \stackrel{k}{\edge} t'$ in $\mathbb{T}_n$ if and only if $(\mathbf{x}_t,\mathbf{y}_t,B_t) \stackrel{\mu_k}{\longleftrightarrow} (\mathbf{x}_{t'},\mathbf{y}_{t'},B_{t'}),$ where $\mu_k$ is the \defemph{seed mutation} in direction $k$ defined by:
\begin{itemize}
\item $B_{t'}=(b^{(t')}_{ij})$ where
\[
b^{(t')}_{ij}=
\begin{cases}
-b^{(t)}_{ij}, &i=k\text{ or }j=k,\\
b^{(t)}_{ij}+\operatorname{sgn}(b^{(t)}_{ik}) [b^{(t)}_{ik} b^{(t)}_{kj}]_+, &\text{otherwise.}
\end{cases}
\]
\item $\mathbf{y}_{t'}=(y_{1;t'},\dots,y_{n;t'})$ where
\[
y_{j;t'} =
\begin{cases}
y_{k;t}^{-1}, & j=k,\\
y_{j;t} (y_{k;t}^+)^{[b^{(t)}_{kj}]_+}(y_{k;t}^-)^{-[-b^{(t)}_{kj}]_+}, & j\neq k.
\end{cases}
\]
\item $\mathbf{x}_{t'}=(x_{1;t'},\dots,x_{n;t'})$ where
\[
x_{j;t'} =
\begin{cases}
x_{k;t}^{-1} \left( y_{k;t}^+ \prod_{i=1}^n x_{i;t}^{[b^{(t)}_{ik}]_+} + y_{k;t}^- \prod_{i=1}^n x_{i;t}^{[-b^{(t)}_{ik}]_+} \right), & j=k,\\
x_{j;t}, & j\neq k,
\end{cases}
\]
where $y_{k;t}^+ = \dfrac{y_{k;t}}{(y_{k;t}\oplus 1)}$ and $y_{k;t}^- = \dfrac{1}{(y_{k;t}\oplus 1)}.$
\end{itemize}
\end{itemize}
\end{defn}
\begin{defn}[Cluster algebra]
The \defemph{cluster algebra} $\mathcal{A}$ associated with a cluster pattern $t\mapsto (\mathbf{x}_t,\mathbf{y}_t,B_t)$ for $t\in\mathbb{T}_n$ is defined to be the $\mathbb{ZP}$-algebra generated by all cluster variables, i.e.
$$ \mathcal{A} = \mathbb{Z}\mathbb{P}[x_{i;t} : t\in\mathbb{T}_n,i\in[1,n]].$$
\end{defn}
In this paper, we will consider only cluster algebras with skew-symmetric exchange matrix.
In this case, we can think of the cluster mutation as a combinatorial rule performed on a quiver.
\begin{defn}[The quiver associated with a skew-symmetric $B$]
When the exchange matrix $B=(b_{ij})_{i,j\in [1,n]}$ is skew-symmetric, we define $\mathcal{Q}_B$, the \defemph{quiver} associated with $B$, to be a directed graph with vertices $1,\dots,n$.
There are $b_{ij}$ arrows from $i$ to $j$ whenever $b_{ij}>0.$
\end{defn}
All the information of a skew-symmetric exchange matrix $B$ is encoded in the quiver $\mathcal{Q}_B.$
The mutation $\mu_k$ will then act on $\mathcal{Q}_B$ by the following process:
\begin{enumerate}
\item For every pair of arrows $i\rightarrow k$ and $k\rightarrow j$, add an arrow $i\rightarrow j$.
\item Reverse all the arrows incident to $k$.
\item Remove all oriented 2-cycles.
\end{enumerate}
One can easily check that $\mu_k(\mathcal{Q}_B) = \mathcal{Q}_{\mu_k(B)}.$
The following is an example of the quiver associated with an exchange matrix and the quiver mutation.
\[
\begin{psmallmatrix}
0&1&0&-1 \\
-1&0&1&0 \\
0&-1&0&1 \\
1&0&-1&0
\end{psmallmatrix}
\enspace\stackrel{\mu_1}{\longleftrightarrow}\enspace
\begin{psmallmatrix}
0&-1&0&1 \\
1&0&1&-1 \\
0&-1&0&1 \\
-1&1&-1&0
\end{psmallmatrix}
\enspace\stackrel{\mu_3}{\longleftrightarrow}\enspace
\begin{psmallmatrix}
0&-1&0&1 \\
1&0&-1&0 \\
0&1&0&-1 \\
-1&0&1&0
\end{psmallmatrix}
\]
\begin{center}
\includegraphics[scale=0.8]{fig_quivmut}
\end{center}
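The two mutations displayed above can be verified mechanically from the matrix mutation rule of Definition \ref{def:mutation}. The following is a minimal sketch in plain Python; the helper \texttt{mutate} is our own, with the direction $k$ taken 0-indexed.

```python
def sgn(x):
    return (x > 0) - (x < 0)

def mutate(B, k):
    """Matrix mutation mu_k (k is 0-indexed) for a square exchange matrix B."""
    n = len(B)
    return [[-B[i][j] if k in (i, j)
             else B[i][j] + sgn(B[i][k]) * max(0, B[i][k] * B[k][j])
             for j in range(n)] for i in range(n)]

B = [[0, 1, 0, -1], [-1, 0, 1, 0], [0, -1, 0, 1], [1, 0, -1, 0]]
B1 = mutate(B, 0)    # mu_1
assert B1 == [[0, -1, 0, 1], [1, 0, 1, -1], [0, -1, 0, 1], [-1, 1, -1, 0]]
assert mutate(B1, 2) == [[0, -1, 0, 1], [1, 0, -1, 0],
                         [0, 1, 0, -1], [-1, 0, 1, 0]]   # then mu_3
assert mutate(B1, 0) == B    # seed mutation is an involution
```

The final assertion checks the standard fact that $\mu_k$ is an involution.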
Fixing $t_0\in\mathbb{T}_n,$ we can express variables $x_{1;t},\dots,x_{n;t},y_{1,t},\dots,y_{n;t}$ of the labeled seed at arbitrary $t\in\mathbb{T}_n$ in terms of variables $x_{1;t_0},\dots,x_{n;t_0},y_{1,t_0},\dots,y_{n;t_0}$ at $t_0.$
We will then call the labeled seed (resp. labeled Y-seed) at $t_0$ an \defemph{initial labeled seed} (resp. \defemph{initial labeled $Y$-seed}) and use the simpler notation:
\[
\mathbf{x}=\mathbf{x}_{t_0},~\mathbf{y}=\mathbf{y}_{t_0},~B=B_{t_0}
\]
where
\[
x_i = x_{i;t_0},~ y_i=y_{i;t_0},~b_{ij} = b_{ij}^{(t_0)} \quad(i,j\in[1,n]).
\]
Unless stated otherwise, the initial labeled seed $(\mathbf{x},\mathbf{y},B)$ is always at $t_0$, and we denote by $\mathcal{A}(\mathbf{x},\mathbf{y},B)$ the cluster algebra with initial labeled seed $(\mathbf{x},\mathbf{y},B).$
Clearly, a cluster pattern (resp. Y-pattern) is completely determined by its initial labeled seed (resp. initial labeled Y-seed).
In addition, a cluster variable is a Laurent polynomial in the initial variables, as stated in the following theorem.
\begin{thm}[Laurent phenomenon {{\cite[Theorem 3.1]{FZ}}}]\label{thm:Laurent}
The algebra $\mathcal{A}(\mathbf{x},\mathbf{y},B)$ is contained in the Laurent polynomial ring $\mathbb{Z}\P[\mathbf{x}^{\pm 1}]$, i.e. every cluster variable is a Laurent polynomial over $\mathbb{Z}\P$ in the initial cluster variables $x_1,\dots,x_n$.
\end{thm}
If $y_i=1$ for all $i\in[1,n]$, we have $y_{i,t}=1$ for all $i\in[1,n],t\in\mathbb{T}_n.$
We call it a \defemph{coefficient-free} cluster algebra, and write just $(\mathbf{x}_t,B_t)$ for its labeled seeds.
\begin{defn}[Frozen variables]
For a cluster algebra (resp. cluster pattern) of rank $m$ with initial seed $(\mathbf{x},\mathbf{y},B)$, we consider the subpattern $t\in \mathbb{T}_n\subseteq \mathbb{T}_m \mapsto (\mathbf{x}_t,\mathbf{y}_t,B_t)$ obtained by applying only the mutations $\mu_1,\dots,\mu_n$.
That is, the directions $n+1,\dots,m$ are never mutated.
We call it a cluster algebra (resp. cluster pattern) of rank $n$ with \defemph{frozen variables} $x_{n+1},\dots,x_m.$
\end{defn}
\begin{remark}
In the cluster algebra of rank $n$ with frozen variables $x_{n+1},\dots,x_m,$ the cluster seeds are not mutated in directions $n+1,\dots,m$.
So the only information in the $m\times m$ matrix $B$ needed for mutations is in columns $1$ to $n.$
Hence we often use a \defemph{reduced exchange matrix} $\tilde{B}$ instead of the full exchange matrix $B$, where $\tilde{B}$ is the $m\times n$ submatrix of $B$ obtained by deleting columns $n+1$ to $m.$
\end{remark}
\begin{defn}[Geometric type]
A cluster algebra (or cluster pattern, or Y-pattern) is of \defemph{geometric type} if $\P$ is a tropical semifield.
\end{defn}
\begin{remark}[Geometric type and Frozen variables]\label{rem:GeometricFrozen}
For a cluster algebra or cluster pattern of geometric type, the notions of coefficients and frozen variables are interchangeable.
Let $t\in\mathbb{T}_n\mapsto(\mathbf{x}_t,\mathbf{y}_t,B_t)$ be a cluster pattern of geometric type of rank $n$ where $\P = \operatorname{Trop}(x_{n+1},\dots,x_{m}).$ Since $x_{n+1},\dots,x_{m}$ generate $\P,$ we can choose the initial seed coefficients to be $y_j = \prod_{i=n+1}^m x_i^{{b}_{ij}}$ for all $j\in [1,n].$
Then the pattern is equivalent to a coefficient-free cluster pattern: $t\in\mathbb{T}_m\mapsto ((x_{1;t},\dots,x_{m;t}),\tilde{B}_t)$ with frozen variables $x_{n+1},\dots,x_{m}$, where $\tilde{B} = (b_{ij})_{m\times n}.$
\end{remark}
\begin{ex}\label{ex:frozen}
Consider a semifield $\P=\operatorname{Trop}(x_5,x_6)$ and a rank 4 cluster algebra of geometric type with an initial seed $(\mathbf{x},\mathbf{y},B)$ where
\[
\mathbf{x}=(x_1,x_2,x_3,x_4),\quad\mathbf{y}=\left(\frac{x_6}{x_5},\frac{1}{x_6},1,1\right),\quad B=
\begin{psmallmatrix}
0&1&0&-1 \\
-1&0&1&0 \\
0&-1&0&1 \\
1&0&-1&0
\end{psmallmatrix}.
\]
We have $y_1^+=x_6$ and $y_1^-=x_5$.
After the seed mutation $\mu_1$ (Definition \ref{def:mutation}), we have
\[
\mathbf{x}'=\left( \dfrac{x_6x_4+x_5x_2}{x_1} , x_2,x_3,\dots,x_6 \right), \quad \mathbf{y}' = \left( \dfrac{x_5}{x_6}, 1, 1, \dfrac{1}{x_5} \right).
\]
On the other hand, by Remark \ref{rem:GeometricFrozen}, we can think of $x_5,x_6$ as frozen variables and transform our cluster algebra with coefficients to the coefficient-free cluster algebra of rank 6 with the following cluster variables and reduced exchange matrix.
\[
\mathbf{x}=(x_1,x_2,\dots,x_6),\quad \tilde{B}=
\begin{psmallmatrix}
0&1&0&-1 \\
-1&0&1&0 \\
0&-1&0&1 \\
1&0&-1&0 \\
-1&0&0&0 \\
1&-1&0&0
\end{psmallmatrix}.
\]
The mutation $\mu_1$ gives the new cluster tuple $\mathbf{x}'=\left( \dfrac{x_4x_6+x_2x_5}{x_1} , x_2,x_3,\dots,x_6 \right)$ and the following quiver mutation.
\begin{center}
\includegraphics[scale=0.8]{fig_quivmutcoef}
\end{center}
We see that the mutated quiver encodes the information of $\mathbf{y}'= \left( \dfrac{x_5}{x_6}, 1, 1, \dfrac{1}{x_5} \right)$.
Also notice that we can omit arrows between frozen variables because they will not affect any mutations at non-frozen variables.
\end{ex}
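Since every element of $\operatorname{Trop}(x_5,x_6)$ is a Laurent monomial, the coefficient mutation in this example can be replayed on exponent vectors. The sketch below (plain Python; the dictionary encoding and helper names are our own) reproduces the mutated coefficient tuple $(x_5/x_6,\,1,\,1,\,1/x_5)$.

```python
def times(a, b):
    return {v: a.get(v, 0) + b.get(v, 0) for v in set(a) | set(b)}

def power(a, n):
    return {v: n * e for v, e in a.items()}

def oplus(a, b):
    return {v: min(a.get(v, 0), b.get(v, 0)) for v in set(a) | set(b)}

def clean(a):                 # drop zero exponents so dicts compare as monomials
    return {v: e for v, e in a.items() if e}

def mutate_y(y, B, k):
    ym = power(oplus(y[k], {}), -1)       # y_k^- = 1/(y_k (+) 1)
    yp = times(y[k], ym)                  # y_k^+ = y_k/(y_k (+) 1)
    return [clean(power(y[k], -1)) if j == k
            else clean(times(y[j], times(power(yp, max(0, B[k][j])),
                                         power(ym, -max(0, -B[k][j])))))
            for j in range(len(y))]

B = [[0, 1, 0, -1], [-1, 0, 1, 0], [0, -1, 0, 1], [1, 0, -1, 0]]
y = [{'x5': -1, 'x6': 1}, {'x6': -1}, {}, {}]        # (x6/x5, 1/x6, 1, 1)
y_new = mutate_y(y, B, 0)                            # mutation mu_1
assert y_new == [{'x5': 1, 'x6': -1}, {}, {}, {'x5': -1}]  # (x5/x6, 1, 1, 1/x5)
```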
\begin{defn}[Principal coefficient]\label{defn:principalcoef}
A cluster algebra (or cluster pattern, or Y-pattern) has \defemph{principal coefficients} at $t_0\in\mathbb{T}_n$ if $\P=\operatorname{Trop}(y_1,\dots,y_n)$, where the initial coefficient tuple is $\mathbf{y}_{t_0}=(y_1,\dots,y_n)$.
We write $\mathcal{A}_\bullet(B)$ for the cluster algebra with principal coefficients.
\end{defn}
\begin{remark}[Principal coefficients and Frozen variables]\label{rem:PrincCoefFrozen}
From Remark \ref{rem:GeometricFrozen}, a cluster algebra with principal coefficients of rank $n$ with initial seed $(\mathbf{x},\mathbf{y},B)$ where $\mathbf{x}=(x_1,\dots,x_n)$ and $\mathbf{y}=(y_1,\dots,y_n)$ can be identified with a coefficient-free cluster algebra of rank $2n$ with an initial seed
$$((x_1,\dots,x_n,y_1,\dots,y_n),\tilde{B})\quad\text{where}\enspace \tilde{B}=\begin{pmatrix} B \\ I_n \end{pmatrix}.$$
The quiver $\mathcal{Q}_{\tilde{B}}$ is obtained from $\mathcal{Q}_B$ by adding one vertex $i'$ and an arrow $i'\rightarrow i$ for each vertex $i$ of the quiver $\mathcal{Q}_{B}$.
The new $\mathcal{Q}_{\tilde{B}}$ is called the \defemph{coframed quiver} associated with $\mathcal{Q}_B.$
\end{remark}
\begin{defn}[The functions $X_{l;t}^{(B)}$ and $F_{l;t}^{(B)}$]\label{def:XF}
Given an exchange matrix $B,$ we consider the unique (up to isomorphism) cluster pattern $t\mapsto({\mathbf{X}}_t,{\mathbf{Y}}_t,B_t)$ with principal coefficients at $t_0$ and an initial seed $(\mathbf{X},\mathbf{Y},B).$
For $l\in[1,n]$ and $t\in\mathbb{T}_n,$ we let
$$X_{l;t}^{(B)}\in\mathbb{Q}_{sf}(X_1,\dots,X_n;Y_1,\dots,Y_n)$$
be the $l$-th component of the cluster tuple at $t$, and
\begin{align*}
F_{l;t}^{(B)} &:= X_{l;t}^{(B)}(1,\dots,1;Y_1,\dots,Y_n)\in\mathbb{Z}[Y_1,\dots,Y_n].
\end{align*}
In short, $X_{l;t}^{(B)}$ is a cluster variable in the cluster algebra with principal coefficients.
For a fixed $B$, we often view it as a function on the initial variables $X_i$ and $Y_i$ for $i\in[1,n]$.
The function $F_{l;t}^{(B)}$ is a specialization of $X_{l;t}^{(B)}$ when $X_i=1$ for $i\in[1,n]$.
\end{defn}
The next theorem states that cluster variables of any cluster pattern can be written in terms of the functions $X_{l;t}^{(B)}$ and $F_{l;t}^{(B)}$ with some restriction.
\begin{thm}[Separation formula {\cite[Theorem 3.7]{FZ4}}]\label{thm:sep}
Let $t\mapsto (\mathbf{x}_t,\mathbf{y}_t,B_t)$ be a cluster pattern over a semifield $\P$ with an initial seed $(\mathbf{x},\mathbf{y},B).$
Then
\[
x_{l;t} = \frac{X_{l;t}^{(B)}(x_1,\dots,x_n;y_1,\dots,y_n)}{F_{l;t}^{(B)} |_\P(y_1,\dots,y_n)}.
\]
\end{thm}
The notation $F_{l;t}^{(B)} |_\P(y_1,\dots,y_n)$ means that we compute $F_{l;t}^{(B)}(y_1,\dots,y_n)$ in $\P$ by changing $+$ to $\oplus$.
\begin{ex}\label{ex:sep}
Consider the cluster algebra with principal coefficients with the same exchange matrix as in Example \ref{ex:frozen}.
Let $\P=\operatorname{Trop}(Y_1,Y_2,Y_3,Y_4)$, we can write an initial seed as $(\mathbf{X},\mathbf{Y},B)$ where
\[
\mathbf{X}=(X_1,X_2,X_3,X_4),\quad \mathbf{Y}=(Y_1,Y_2,Y_3,Y_4).
\]
By Remark \ref{rem:PrincCoefFrozen}, we think of $Y_i$'s as frozen variables and get a coefficient-free cluster algebra of rank 8 with the following quiver and exchange matrix, where $Y_i$ is the cluster variable on the vertex $i+4$.
\[
\raisebox{-.5\height}{\includegraphics[scale=0.8]{fig_princquiv}}
\hspace{50pt}
\scalebox{0.8}
{$
\begin{pmatrix}
0&1&0&-1 \\
-1&0&1&0 \\
0&-1&0&1 \\
1&0&-1&0 \\
1&0&0&0 \\
0&1&0&0 \\
0&0&1&0 \\
0&0&0&1
\end{pmatrix}
$}
\]
Then the mutation $\mu_1$ gives
\[
X_1' = \dfrac{Y_1 X_4 + X_2}{X_1}.
\]
Let us try to compute $x'_1$ in Example \ref{ex:frozen} using the separation formula.
From the formula, we think of $X'_1$ as a function $(X_1,\dots,X_4;Y_1,\dots,Y_4)\mapsto\dfrac{Y_1 X_4 + X_2}{X_1}$.
Then
\begin{align*}
x'_1 &= \dfrac{ X'_1 \left(x_1,x_2,x_3,x_4;\frac{x_6}{x_5},\frac{1}{x_6},1,1 \right) }{ X'_1 \left(1,1,1,1;\frac{x_6}{x_5},\frac{1}{x_6},1,1 \right) \big|_{\operatorname{Trop}(x_5,x_6)}}\\
&= \dfrac {( \frac{x_6}{x_5} x_4 + x_2 )/x_1 }{ ( \frac{x_6}{x_5} \oplus 1 )/1 } = \dfrac {( \frac{x_6}{x_5} x_4 + x_2 )/x_1 }{{1}/{x_5} } = \dfrac{x_4x_6+x_2x_5}{x_1}.
\end{align*}
\end{ex}
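This computation is easy to cross-check with exact rational arithmetic at sample values of $x_1,\dots,x_6$. A minimal sketch in plain Python (the sample values are arbitrary positive rationals; the tropical denominator $1/x_5$ is taken from the example above):

```python
from fractions import Fraction as F

x1, x2, x3, x4, x5, x6 = (F(v) for v in (2, 3, 5, 7, 11, 13))  # sample values
y1 = x6 / x5                              # initial coefficient y_1 = x6/x5

X1_prime = lambda X1, X2, X4, Y1: (Y1 * X4 + X2) / X1   # X'_1 from mu_1
numerator = X1_prime(x1, x2, x4, y1)
denominator = F(1) / x5                   # tropical evaluation y_1 (+) 1 = 1/x5
# the separation formula agrees with the direct result (x4 x6 + x2 x5)/x1:
assert numerator / denominator == (x4 * x6 + x2 * x5) / x1
```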
\begin{defn}[The functions $Y_{l;t}^{(B)}$]
Given an exchange matrix $B,$ we consider the unique (up to isomorphism) Y-pattern $t\mapsto(\mathbf{Y}_t,B_t)$ having an initial seed $(\mathbf{Y},B)$ in the semifield $\mathbb{Q}_{sf}(Y_1,\dots,Y_n).$
Let $Y_{l;t}^{(B)}\in\mathbb{Q}_{sf}(Y_1,\dots,Y_n)$ be the $l$-th component in the coefficient tuple at $t$.
\end{defn}
Again, we think of $Y_{l;t}^{(B)}$ as a function on $Y_1,\dots,Y_n$.
The next theorem gives an expression of the function $Y_{j;t}^{(B)}$ in terms of the polynomials $F_{i;t}^{(B)}.$
\begin{thm}[{\cite[Proposition 3.13]{FZ4}}]\label{thm:YasF}
Given an exchange matrix $B$ and a semifield $\P=\operatorname{Trop}(y_1,\dots,y_n)$, we get
\[
Y_{j;t}^{(B)}(y_1,\dots,y_n) = Y_{j;t}^{(B)}\big|_\P(y_1,\dots,y_n) \prod_{i=1}^n \left(F_{i;t}^{(B)}(y_1,\dots,y_n)\right)^{b_{ij}^{(t)}},
\]
where $Y_{j;t}^{(B)}|_\P(y_1,\dots,y_n)$ can be interpreted as a cluster coefficient in the Y-pattern with principal coefficients with an initial coefficient tuple $(y_1,\dots,y_n)$.
\end{thm}
\begin{ex}
Consider the Y-pattern with the following quiver and the initial coefficient tuple $(y_1,y_2,y_3,y_4)$ in $\P= \mathbb{Q}_{sf}(y_1,y_2,y_3,y_4)$.
\begin{center}
\includegraphics[scale=0.8]{fig_Ypattern}
\end{center}
\begin{align}
\begin{aligned}
\mathbf{y}= (y_1,\,y_2,\,y_3,\,y_4)\quad &\overset{\mu_1}{\line(1,0){20}}\quad
\mathbf{y}'= \left( \frac{1}{y_1},\, y_2\frac{y_1}{1+y_1},\, y_3,\, y_4(1+y_1) \right)\\
&\overset{\mu_3}{\line(1,0){20}} \quad
\mathbf{y}''= \left( \frac{1}{y_1},\, y_2y_1\frac{1+y_3}{1+y_1},\, \frac{1}{y_3},\, y_4y_3\frac{1+y_1}{1+y_3} \right)
\end{aligned}
\label{eq:ypattern}
\end{align}
Consider a different Y-pattern with the same quiver but with principal coefficients.
That is, with the same initial coefficient tuple, we set $\P=\operatorname{Trop}(Y_1,Y_2,Y_3,Y_4)$.
Using Remark \ref{rem:GeometricFrozen}, we can realize the coefficients as frozen variables.
\begin{center}
\includegraphics[scale=0.8]{fig_Ypatterncoef}
\end{center}
\begin{align*}
\mathbf{Y}= (Y_1,\,Y_2,\,Y_3,\,Y_4) \quad &\overset{\mu_1}{\line(1,0){20}}\quad
\mathbf{Y}'= \left( \frac{1}{Y_1},\, Y_2Y_1,\, Y_3,\, Y_4(Y_1\oplus 1) \right) = \left( \frac{1}{Y_1},\, Y_2Y_1,\, Y_3,\, Y_4 \right)\\
&\overset{\mu_3}{\line(1,0){20}}\quad
\mathbf{Y}''= \left( \frac{1}{Y_1},\, Y_2 Y_1,\, \frac{1}{Y_3},\, Y_4Y_3 \right).
\end{align*}
In order to apply Theorem \ref{thm:YasF}, we also need to compute the cluster variables of the cluster algebra with principal coefficients on the same quiver.
Let $(X_1,X_2,X_3,X_4)$ be the initial cluster tuple; we then get the following.
\begin{align*}
\mathbf{X} = (X_1,\,X_2,\,X_3,\,X_4) \quad &\overset{\mu_1}{\line(1,0){20}}\quad
\mathbf{X}' = \left( \frac{Y_1X_4+X_2}{X_1} ,\,X_2,\,X_3,\,X_4 \right)\\
&\overset{\mu_3}{\line(1,0){20}} \quad
\mathbf{X}'' = \left( \frac{Y_1X_4+X_2}{X_1} ,\,X_2,\, \frac{Y_3X_2+X_4}{X_3},\,X_4 \right)
\end{align*}
We consider $\mathbf{Y}''_2$, $\mathbf{X}''_1$ and $\mathbf{X}''_3$ as functions of the $X_i$'s and $Y_i$'s.
By Theorem \ref{thm:YasF} we have
\[
y''_2 = \mathbf{Y}''_2(y_1,y_2,y_3,y_4) \frac{\mathbf{X}''_3(1,1,1,1;y_1,y_2,y_3,y_4)}{\mathbf{X}''_1(1,1,1,1;y_1,y_2,y_3,y_4)} = y_2y_1\frac{y_3+1}{y_1+1}.
\]
This is the same result as we computed directly in \eqref{eq:ypattern}.
\end{ex}
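The two-step computation \eqref{eq:ypattern} can also be cross-checked numerically with exact rationals: over $\mathbb{Q}_{sf}$ the auxiliary addition is ordinary addition, so the coefficient mutation of Definition \ref{def:mutation} can be evaluated at sample positive values. A minimal sketch in plain Python (the helper names are our own, with mutation directions 0-indexed):

```python
from fractions import Fraction as Fr

def mutate_B(B, k):
    s = lambda x: (x > 0) - (x < 0)
    n = len(B)
    return [[-B[i][j] if k in (i, j)
             else B[i][j] + s(B[i][k]) * max(0, B[i][k] * B[k][j])
             for j in range(n)] for i in range(n)]

def mutate_y(y, B, k):
    # coefficient mutation over Q_sf, where (+) is ordinary addition
    yplus, yminus = y[k] / (y[k] + 1), Fr(1) / (y[k] + 1)
    return [Fr(1) / y[k] if j == k
            else y[j] * yplus ** max(0, B[k][j]) * yminus ** (-max(0, -B[k][j]))
            for j in range(len(y))]

B = [[0, 1, 0, -1], [-1, 0, 1, 0], [0, -1, 0, 1], [1, 0, -1, 0]]
y1, y2, y3, y4 = Fr(2), Fr(3), Fr(5), Fr(7)   # arbitrary positive sample values
yk = mutate_y(mutate_y([y1, y2, y3, y4], B, 0), mutate_B(B, 0), 2)  # mu_3 mu_1
assert yk[1] == y2 * y1 * (1 + y3) / (1 + y1)   # y''_2, as displayed above
assert yk[3] == y4 * y3 * (1 + y1) / (1 + y3)   # y''_4
```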
\subsection{Infinite rank cluster algebras}
We define infinite rank cluster algebras in a similar way. The cluster tuple, the coefficient tuple and the exchange matrix are now infinite.
For the mutation to make sense, we only need the condition that for each $j,$ we have $b_{ij}=0$ for all but finitely many~$i.$
If~$B$ is also skew-symmetric, this condition is equivalent to saying that the infinite quiver $\mathcal{Q}_B$ has only finitely many arrows incident to each of its non-frozen vertices.
For the cluster pattern, although we think of it as an assignment from the infinite tree $\mathbb{T}$, we usually restrict the study to only those seeds obtainable from the initial seed by finitely many mutations.
In the next section, we will review the fact that the T-system can be realized as an infinite-rank coefficient-free cluster pattern, and then define the T-system with principal coefficients in the same way as we have already discussed cluster algebras with principal coefficients in this section.
We pick a specific cluster seed to put principal coefficients.
This choice generates a new recurrence relation which we will call the octahedron recurrence with principal coefficients.
\section{T-systems}\label{sec_T-system}
In this paper, we consider the $A_\infty$ T-system \cite{DFK13}, which is also known as the octahedron recurrence.
It is the infinite-rank version of the $A_r$ T-system.
We will refer to the $A_\infty$ T-system simply as the T-system when there is no ambiguity.
The T-system can be realized as mutation in an infinite-rank coefficient-free cluster algebra of geometric type \cite{DFK09,DFK13}.
Its exchange matrix is skew-symmetric, so we can express it as a quiver, the octahedron quiver.
In this section, we review this connection and define the T-system with principal coefficients by attaching principal coefficients to the initial quiver, as in Definition \ref{defn:principalcoef}.
We will also show that it is equivalent to the recurrence relation \eqref{eq:tsyscoef}.
\subsection{T-systems without coefficients}\label{subsec:Tsys}
Let $\mathbb{Z}^3_{\rm odd} = \{ (i,j,k)\in\mathbb{Z}^3 \mid i+j+k\equiv 1\bmod 2\}.$ The \defemph{T-system}, also called the \defemph{octahedron recurrence}, is a recurrence relation on the set of formal variables $\mathcal{T}=\{ T_{i,j,k} \mid (i,j,k)\in\mathbb{Z}^3_{\rm odd}\}$ defined by
\begin{align}
T_{i,j,k-1}T_{i,j,k+1} = T_{i-1,j,k}T_{i+1,j,k} + T_{i,j-1,k}T_{i,j+1,k}.
\label{eq:Tsys}
\end{align}
A \defemph{stepped surface} is a subset $\{(i,j,\k(i,j)) \mid i,j\in\mathbb{Z}\}\subset\mathbb{Z}^3_{\rm odd}$ defined by a function $\k:\mathbb{Z}\times\mathbb{Z}\rightarrow\mathbb{Z}$ satisfying:
\[
|\k(i,j)-\k(i',j')|=1 \quad \text{when} \quad |i-i'|+|j-j'|=1.
\]
We will also denote this surface by the function $\k$.
When $|i-i'|+|j-j'|=1$, we say that $(i,j)$ and $(i',j')$ are \defemph{lattice-adjacent}, and $\k(i,j)$ is called the \defemph{height} of $(i,j)$ with respect to $\k$.
There are three important stepped surfaces which we will use throughout the paper.
We define
\begin{align}
\begin{aligned}
\mathbf{fund} &:(i,j)\mapsto (i+j \bmod 2)-1,\\
\mathbf{proj}_{(i',j',k')} &:(i,j)\mapsto k' - | i- i'| -| j- j'|,\\
\k_{p} &:(i,j)\mapsto\min \left( \k(i,j), \mathbf{proj}_p(i,j) \right),
\end{aligned}
\label{eq:fun&topsurface}
\end{align}
and call them the \defemph{fundamental stepped surface}, the \defemph{stepped surface projected from a point} $(i',j',k')$ and the \defemph{adjusted stepped surface} associated with a surface $\k$ and a point $p$, respectively.
See Figure \ref{fig_surface} for examples.
To each $\mathbf{k},$ we can attach an \defemph{initial condition} $X_\mathbf{k}(\mathbf{t}):\{ T_{i,j,\k(i,j)}=t_{i,j} \mid i,j\in\mathbb{Z}\}$ for some formal variables $\mathbf{t}=\{ t_{i,j} \mid i,j\in\mathbb{Z}\}$, which we refer to as \defemph{initial data/values} along the stepped surface $\mathbf{k}$.
It is worth pointing out that for a point $(i_0,j_0,k_0)\in\mathbb{Z}^3_{\rm odd}$, not every choice of initial data gives a finite solution for $T_{i_0,j_0,k_0}.$
In other words, an expression of $T_{i_0,j_0,k_0}$ in terms of $t_{i,j}$'s may not be finite.
We call initial data on $\mathbf{k}$ that give a finite expression for $T_{i_0,j_0,k_0}$ \defemph{admissible initial data} with respect to $(i_0,j_0,k_0)$.
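A concrete way to explore such expressions is to solve \eqref{eq:Tsys} upward from initial data on the fundamental stepped surface $(i,j)\mapsto(i+j \bmod 2)-1$. The sketch below (plain Python; the memoized helper names are our own) uses all-ones initial data; every division is exact, as the Laurent phenomenon predicts, and the resulting values are powers of $2$.

```python
from fractions import Fraction
from functools import lru_cache

def fund(i, j):
    return (i + j) % 2 - 1

@lru_cache(maxsize=None)
def T(i, j, k):
    """Solve the octahedron recurrence upward from all-ones data on fund."""
    assert (i + j + k) % 2 == 1          # stay inside Z^3_odd
    if k <= fund(i, j):
        return Fraction(1)               # initial data t_{i,j} = 1
    val = (T(i - 1, j, k - 1) * T(i + 1, j, k - 1)
           + T(i, j - 1, k - 1) * T(i, j + 1, k - 1)) / T(i, j, k - 2)
    assert val.denominator == 1          # integrality (Laurent phenomenon)
    return val

assert T(0, 0, 1) == 2
assert T(1, 0, 2) == 8
assert T(0, 0, 3) == 64                  # at (0,0,m): 2^(m(m+1)/2)
```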
\begin{ex}
The fundamental stepped surface is always admissible with respect to any point in $\mathbb{Z}^3_{\rm odd}.$
The stepped surface $\mathbf{proj}_{(0,0,m)}$ is not admissible with respect to a point $(0,0,n)$ when $n>m$.
\end{ex}
\begin{figure}
\begin{center}\includegraphics[scale=0.9]{fig_surface}\end{center}
\caption{
The surfaces $\mathbf{fund}$, ${\mathbf{proj}_q}$ and $\k_p$ associated with a surface $\k$ in the section $i=i_0$ of the 3-dimensional lattice.}
\label{fig_surface}
\end{figure}
The T-system can also be interpreted as an infinite-rank coefficient-free cluster algebra \cite{DFK09}.
Using $\mathbb{Z}^2$ as the index set, the initial seed is $(\mathbf{x},B) = \left( (x_{i,j})_{(i,j)\in\mathbb{Z}^2} , (b_{(i',j'),(i,j)}) \right)$ where
\[
x_{i,j}=T_{i,j,\mathbf{fund}(i,j)} \quad \text{and} \quad b_{(i',j'),(i,j)} = (-1)^{i+j}\left(\delta_{i',i\pm 1}\delta_{j',j}-\delta_{i',i}\delta_{j',j\pm 1}\right).
\]
The quiver $\mathcal{Q}_B$ associated with $B$ is shown in Figure \ref{fig:octquiver}.
We call it the \defemph{octahedron quiver}.
We embed the vertices of the quiver into the 3-dimensional lattice $\mathbb{Z}^3_{\rm odd}$ so that they lie on the fundamental stepped surface, i.e. the vertex $(i,j)$ of the octahedron quiver lies at the point $(i,j,\mathbf{fund}(i,j))\in\mathbb{Z}^3_{\rm odd}$.
This choice lets the vertex $(i,j)$ carry the initial cluster variable $T_{i,j,\mathbf{fund}(i,j)}$ for each $(i,j)\in\mathbb{Z}^2$.
\begin{figure}
\begin{center}\includegraphics{fig_tsysquiver}\end{center}
\caption{
The octahedron quiver.
The red dots correspond to indices $(i,j)$ where $i+j$ is even, and the blue to the odd $i+j.$
The quiver is infinite in both $i$ and $j$ directions.
}
\label{fig:octquiver}
\end{figure}
We then allow mutations only on vertices $(i,j,k)$ having the property that there are exactly two incoming and two outgoing arrows incident to $(i,j,k).$
This property is equivalent to saying that all four neighbors of $(i,j,k)$ have the same third coordinate in $\mathbb{Z}^3_{\rm odd}$, i.e., the four neighbors are all either $(i\pm 1,j\pm 1,k-1)$ or $(i\pm 1,j\pm 1,k+1).$
If the neighbors are $(i\pm 1,j\pm 1,k-1)$, after the mutation at $(i,j,k)$, we move the vertex that used to be at $(i,j,k)$ to $(i,j,k-2)$ and call the new cluster variable obtained by the mutation $T_{i,j,k-2}.$
We call this mutation a \defemph{downward mutation}.
On the other hand, when the neighbors are $(i\pm 1,j\pm 1,k+1)$, $(i,j,k)$ is moved to $(i,j,k+2)$ and the new cluster variable is called $T_{i,j,k+2}$.
We call it an \defemph{upward mutation}.
The set of vertices of a quiver $\mathcal{Q}$ obtained from the octahedron quiver by allowed mutations forms a stepped surface, denoted by $\mathbf{k}_\mathcal{Q}$.
On the other hand, we can create a quiver from a stepped surface by reading the arrangement of the quiver arrows from Table \ref{tab:quiver&k}, and call this quiver $\mathcal{Q}_\mathbf{k}.$
We notice that the quiver mutation at $(i,j)$ corresponds to moving $(i,j,\k(i,j))$ to $(i,j,\k(i,j)\pm 2)$, depending on the height of its neighbors as discussed above.
We say that $\k'$ is obtained from $\k$ by a \defemph{mutation} at $(i,j)$ if $\mathcal{Q}_{\k'}=\mu_{(i,j)}(\mathcal{Q}_{\k})$.
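The allowed mutations translate into a simple test on heights: $(i,j)$ is mutable on $\k$ exactly when its four lattice-adjacent points share a common height $\k(i,j)\pm 1$, and the mutation reflects $\k(i,j)$ through that height. A minimal sketch in plain Python (the helper names are our own):

```python
def fund(i, j):
    return (i + j) % 2 - 1

def mutable(k, i, j):
    """(i,j) is mutable on k iff its four lattice neighbors share a height."""
    return len({k(i - 1, j), k(i + 1, j), k(i, j - 1), k(i, j + 1)}) == 1

def mutate_surface(k, i, j):
    """Reflect k(i,j) through the common neighbor height: k(i,j) -> k(i,j)+-2."""
    assert mutable(k, i, j)
    new_h = 2 * k(i + 1, j) - k(i, j)
    return lambda a, b: new_h if (a, b) == (i, j) else k(a, b)

assert mutable(fund, 0, 0)            # every vertex of fund is mutable
k1 = mutate_surface(fund, 0, 0)       # upward mutation: height -1 -> +1
assert k1(0, 0) == 1
assert not mutable(k1, 1, 0)          # its neighbors now sit at heights 1 and -1
assert mutable(k1, 1, 1)              # while (1,1) stays mutable
```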
\subsection{T-systems with principal coefficients}
We define the \defemph{T-systems with principal coefficients} from the cluster algebra setting.
Instead of the coefficient-free cluster algebra with the octahedron quiver, we consider the cluster algebra with principal coefficients (Definition \ref{defn:principalcoef}) on the same quiver, where the initial coefficient at $(i,j)$ is $c_{i,j}.$
Due to Remark \ref{rem:PrincCoefFrozen}, it is the same as the coefficient-free cluster algebra on the coframed octahedron quiver, where the variables $c_{i,j}$ on the added vertices are frozen, see Figure \ref{fig_framedocta}.
We show that the cluster variables satisfy the recurrence relation \eqref{eq:tsyscoef} on $\{ T_{i,j,k} \mid (i,j,k)\in\mathbb{Z}^3_{\rm odd} \}$ with an extra set of coefficients $\{c_{i,j} \mid (i,j)\in\mathbb{Z}^2\}.$
We will use this recursion as an alternative definition of the T-system with principal coefficients.
\begin{figure}
\begin{center}\includegraphics[scale=0.8]{fig_framedocta}\end{center}
\caption{A portion of the infinite framed octahedron quiver and its cluster variables including frozen variables $c_{i,j}$.}
\label{fig_framedocta}
\end{figure}
\begin{thm}\label{thm:Tsyscoef}
Let $\{ T_{i,j,k} \mid (i,j,k)\in\mathbb{Z}^3_{\rm odd} \}$ be the set of cluster variables obtained from the T-system with principal coefficients.
Then
\begin{align}
T_{i,j,k-1}T_{i,j,k+1} = J_{i,j,k} T_{i-1,j,k}T_{i+1,j,k} + I_{i,j,k}T_{i,j-1,k}T_{i,j+1,k}
\label{eq:tsyscoef}
\end{align}
where
\begin{align}
I_{i,j,k} =
\begin{cases}
\prod_{a=k+1}^{-(k+1)} c_{i+a,j}, & k<0,\\
1, & k\geq 0,
\end{cases} \quad \text{and}\quad
J_{i,j,k} =
\begin{cases}
1, &k<0,\\
\prod_{a=-k}^k c_{i,j+a}, &k\geq 0.
\end{cases}
\label{eq:IJ}
\end{align}
\end{thm}
We call the relation \eqref{eq:tsyscoef} the \defemph{octahedron recurrence with principal coefficients}.
The pictorial representation of $I$ and $J$ are shown in Figure \ref{fig_IJ}.
In order to prove Theorem \ref{thm:Tsyscoef}, we first compute the coefficients at the vertices of any quiver obtained from the octahedron quiver.
We note that unlike cluster variables, a coefficient at $(i,j)$ on a stepped surface $\k$ depends on the height of $(i,j)$ and its neighbors $(i\pm 1,j\pm 1)$.
\begin{prop}\label{prop:coefatvertex}
Consider the T-system with principal coefficients.
Let $\k$ be a stepped surface obtained from $\mathbf{fund}$ by a finite number of allowed mutations.
Let $y_{(i,j),\k}$ be the coefficient at the vertex $(i,j)$ in $\mathcal{Q}_\k$.
Then
\begin{align}
y_{(i,j),\k} = \frac{ I_{i,j,k-1} J_{i,j-1,k}^{[\epsilon_1]_+} J_{i,j+1,k}^{[\epsilon_2]_+} }{ J_{i,j,k-1} I_{i-1,j,k}^{[\epsilon_3]_+} I_{i+1,j,k}^{[\epsilon_4]_+}} = \frac{ J_{i,j,k+1}I_{i-1,j,k}^{[-\epsilon_3]_+} I_{i+1,j,k}^{[-\epsilon_4]_+}}{ I_{i,j,k+1}J_{i,j-1,k}^{[-\epsilon_1]_+} J_{i,j+1,k}^{[-\epsilon_2]_+}},
\label{eq:coefatvertex}
\end{align}
when $\k(i,j)=k$, $\k(i,j-1)=k+\epsilon_1$, $\k(i,j+1)=k+\epsilon_2$, $\k(i-1,j)=k+\epsilon_3$ and $\k(i+1,j)=k+\epsilon_4$, where $\epsilon_\ell\in\{-1,1\}$, as depicted below:
\[
\begin{matrix}
& (i,j+1,k+\epsilon_2) &\\\\
(i-1,j,k+\epsilon_3) & (i,j,k) & (i+1,j,k+\epsilon_4)\\\\
& (i,j-1,k+\epsilon_1) &
\end{matrix}
\]
\end{prop}
\begin{ex}
Consider a stepped surface $\k$ with the following heights.
\[
\begin{matrix}
& (i,j+1,k+1) & \\\\
(i-1,j,k+1) & (i,j,k) & (i+1,j,k-1) \\\\
& (i,j-1,k-1) &
\end{matrix}
\]
The coefficient at the vertex $(i,j)$, $y_{(i,j),\k}$, computed by Proposition \ref{prop:coefatvertex} is
\[
y_{(i,j),\k}=\frac{ I_{i,j,k-1} J_{i,j+1,k} }{ J_{i,j,k-1} I_{i-1,j,k}} = \frac{ J_{i,j,k+1}I_{i+1,j,k}}{ I_{i,j,k+1}J_{i,j-1,k}}.
\]
\end{ex}
\begin{proof}[Proof of Proposition \ref{prop:coefatvertex}]
We first show the second equality in \eqref{eq:coefatvertex}.
Notice that $[\epsilon_\ell]_+ +[-\epsilon_\ell]_+ = 1$ for all $\ell$.
So we only need to show
\[
\frac{ I_{i,j,k-1} J_{i,j-1,k} J_{i,j+1,k} }{ J_{i,j,k-1} I_{i-1,j,k} I_{i+1,j,k}} = \frac{ J_{i,j,k+1}}{ I_{i,j,k+1}},
\]
which can be easily derived from the definition of $I$ and $J$ in \eqref{eq:IJ}.
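This identity can also be spot-checked numerically. The following sketch (all names are illustrative) evaluates both sides on random positive rational values of the $c_{i,j}$, for a range of heights $k$:

```python
from fractions import Fraction
import random

def prod(factors):
    out = Fraction(1)
    for f in factors:
        out *= f
    return out

def I(c, i, j, k):
    # k < 0: product of c_{i+a,j} for a = k+1, ..., -(k+1); otherwise 1
    return prod(c[i + a, j] for a in range(k + 1, -k)) if k < 0 else Fraction(1)

def J(c, i, j, k):
    # k >= 0: product of c_{i,j+a} for a = -k, ..., k; otherwise 1
    return prod(c[i, j + a] for a in range(-k, k + 1)) if k >= 0 else Fraction(1)

random.seed(1)
c = {(i, j): Fraction(random.randint(2, 9))
     for i in range(-15, 16) for j in range(-15, 16)}
i = j = 0
for k in range(-6, 7):
    lhs = (I(c, i, j, k - 1) * J(c, i, j - 1, k) * J(c, i, j + 1, k)
           / (J(c, i, j, k - 1) * I(c, i - 1, j, k) * I(c, i + 1, j, k)))
    rhs = J(c, i, j, k + 1) / I(c, i, j, k + 1)
    assert lhs == rhs
print("identity checked for k = -6, ..., 6")
```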
We will then prove the proposition by induction on the number of mutations from the fundamental stepped surface.
On $\mathbf{fund}$, the vertices are of the form $(i,j,-1)$ or $(i,j,0)$ depending on the parity of $i+j$, and $y_{(i,j),\mathbf{fund}}=c_{i,j}$ for all $(i,j)$.
When $i+j\equiv 0\bmod 2$, $\mathbf{fund}(i,j)=-1$ and the neighbors of $(i,j,-1)$ are $(i\pm 1,j,0)$ and $(i,j\pm 1,0)$.
So $\epsilon_\ell=1$ for all $\ell$ at $(i,j)$.
We also have
\[
y_{(i,j),\mathbf{fund}} = c_{i,j} = \frac{ J_{i,j,0}}{ I_{i,j,0}}.
\]
When $i+j\equiv 1\bmod 2$, $\mathbf{fund}(i,j)=0$ and the neighbors of $(i,j,0)$ are $(i\pm 1,j,-1)$ and $(i,j\pm 1,-1)$.
So $\epsilon_\ell= -1$ for all $\ell$.
We have
\[
y_{(i,j),\mathbf{fund}} = c_{i,j} = \frac{ I_{i,j,-1}}{ J_{i,j,-1}}.
\]
Hence the proposition holds for the fundamental stepped surface.
Next we assume that the proposition holds for a stepped surface $\k$.
Consider a stepped surface $\k'$ obtained from $\k$ by a mutation at $(i,j)$.
Then $\k'$ agrees with $\k$ at every point except $(i,j)$.
Moreover $y_{(a,b),\k}=y_{(a,b),\k'}$ for all $(a,b)$ except at most five points: $(i,j)$, $(i\pm 1, j)$ and $(i, j\pm 1)$.
So we only need to consider the coefficients at these five points.
Let us assume that $\k(i,j)=k$.
Since $\k$ is mutable at $(i,j)$, there are two cases: $\k(i\pm 1,j)=\k(i,j\pm 1)=k-1$ or $\k(i\pm 1,j)=\k(i,j\pm 1) =k+ 1$, as shown in the following pictures.
\[
\scalebox{0.7}{
$\xymatrix@=1em{
& (i,j+1,k-1)\ar[d] & \\
(i-1,j,k-1) & \ar[l](i,j,k)\ar[r] & (i+1,j,k-1) \\
& (i,j-1,k-1)\ar[u] &.
}
\hspace{2em}
\xymatrix@=1em{
& (i,j+1,k+1) & \\
(i-1,j,k+1)\ar[r] & (i,j,k)\ar[u]\ar[d] & \ar[l](i+1,j,k+1)\\
& (i,j-1,k+1) &.
}$
}
\]
\begin{description}
\item[Case 1]
We know that $y_{(i,j),\k} = I_{i,j,k-1}/J_{i,j,k-1}$ by the induction hypothesis.
After the mutation at $(i,j)$, the point $(i,j,k)$ becomes $(i,j,k-2)$.
So on $\k'$, $\epsilon_\ell = 1$ for all $\ell$.
We also get
\[
y_{(i,j),\k'} = \left(\frac{ I_{i,j,k-1}}{ J_{i,j,k-1}}\right)^{-1}=\frac{ J_{i,j,k'+1}}{ I_{i,j,k'+1}} = \frac{ J_{i,j,k'+1}I_{i-1,j,k'}^{[-1]_+} I_{i+1,j,k'}^{[-1]_+}}{ I_{i,j,k'+1}J_{i,j-1,k'}^{[-1]_+} J_{i,j+1,k'}^{[-1]_+}}
\]
where $k'=k-2$.
Hence the expression of $y_{(i,j),\k'}$ agrees with the proposition.
At $(i,j+1,k-1)$, the induction hypothesis gives
\begin{align*}
y_{(i,j+1),\k} &= \frac{ I_{i,j+1,k-2} J_{i,j,k-1}^{[1]_+} J_{i,j+2,k-1}^{[\epsilon_2]_+} }{ J_{i,j+1,k-2} I_{i-1,j+1,k-1}^{[\epsilon_3]_+} I_{i+1,j+1,k-1}^{[\epsilon_4]_+}}\\
&= \frac{ I_{i,j+1,k-2} J_{i,j,k-1} J_{i,j+2,k-1}^{[\epsilon_2]_+} }{ J_{i,j+1,k-2} I_{i-1,j+1,k-1}^{[\epsilon_3]_+} I_{i+1,j+1,k-1}^{[\epsilon_4]_+}}.
\end{align*}
We know that $\epsilon_1=1$ since $\k(i,j)= k =\k(i,j+1)+1$.
Then the mutation at $(i,j)$ gives
\begin{align*}
y_{(i,j+1),\k'} &= y_{(i,j+1),\k} (1\oplus y_{(i,j),\k})\\
&= y_{(i,j+1),\k}\frac{1}{J_{i,j,k-1}}\\
&= \frac{ I_{i,j+1,k-2} J_{i,j,k-1}^{[-1]_+} J_{i,j+2,k-1}^{[\epsilon_2]_+} }{ J_{i,j+1,k-2} I_{i-1,j+1,k-1}^{[\epsilon_3]_+} I_{i+1,j+1,k-1}^{[\epsilon_4]_+}},
\end{align*}
which agrees with the proposition.
By a similar argument, all four coefficients $y_{(i\pm 1,j),\k'}$ and $y_{(i,j\pm 1),\k'}$ agree with the proposition.
\item[Case 2]
We know that $y_{(i,j),\k} = J_{i,j,k+1}/I_{i,j,k+1}$ by the induction hypothesis.
After the mutation at $(i,j)$, the point $(i,j,k)$ becomes $(i,j,k+2)$.
So on $\k'$, $\epsilon_\ell = -1$ for all $\ell$.
We also get
\[
y_{(i,j),\k'} = \left(\frac{ J_{i,j,k+1}}{ I_{i,j,k+1}}\right)^{-1}=\frac{ I_{i,j,k'-1}}{ J_{i,j,k'-1}} = \frac{ I_{i,j,k'-1}J_{i,j-1,k'}^{[-1]_+} J_{i,j+1,k'}^{[-1]_+}}{ J_{i,j,k'-1}I_{i-1,j,k'}^{[-1]_+} I_{i+1,j,k'}^{[-1]_+}}
\]
where $k'=k+2$.
Hence the expression of $y_{(i,j),\k'}$ agrees with the proposition.
At $(i,j+1,k+1)$, the induction hypothesis gives
\begin{align*}
y_{(i,j+1),\k} &= \frac{ I_{i,j+1,k} J_{i,j,k+1}^{[-1]_+} J_{i,j+2,k+1}^{[\epsilon_2]_+} }{ J_{i,j+1,k} I_{i-1,j+1,k+1}^{[\epsilon_3]_+} I_{i+1,j+1,k+1}^{[\epsilon_4]_+}}\\
&= \frac{ I_{i,j+1,k} J_{i,j+2,k+1}^{[\epsilon_2]_+} }{ J_{i,j+1,k} I_{i-1,j+1,k+1}^{[\epsilon_3]_+} I_{i+1,j+1,k+1}^{[\epsilon_4]_+}}.
\end{align*}
We know that $\epsilon_1=-1$ since $\k(i,j)= k =\k(i,j+1)-1$.
Then the mutation at $(i,j)$ gives
\begin{align*}
y_{(i,j+1),\k'} &= y_{(i,j+1),\k} \frac{y_{(i,j),\k}}{1\oplus y_{(i,j),\k}}\\
&= y_{(i,j+1),\k}J_{i,j,k+1}\\
&= \frac{ I_{i,j+1,k} J_{i,j,k+1}^{[1]_+} J_{i,j+2,k+1}^{[\epsilon_2]_+} }{ J_{i,j+1,k} I_{i-1,j+1,k+1}^{[\epsilon_3]_+} I_{i+1,j+1,k+1}^{[\epsilon_4]_+}},
\end{align*}
which agrees with the proposition.
By a similar argument, all four coefficients $y_{(i\pm 1,j),\k'}$ and $y_{(i,j\pm 1),\k'}$ agree with the proposition.
\end{description}
In both cases the formula is preserved, which completes the induction and proves the proposition.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:Tsyscoef}]
To show \eqref{eq:tsyscoef}, it is enough to show that it is the exchange relation for the mutation at $(i,j,k-1)$ on a stepped surface $\k$ such that $\k(i\pm 1,j)=\k(i,j\pm 1)=k$ and $\k(i,j)=k-1$.
The quiver at $(i,j)$ will look like the following:
\[
\scalebox{0.8}{$
\xymatrix@=1em{
& (i,j+1,k) & \\
(i-1,j,k)\ar[r] & (i,j,k-1)\ar[u]\ar[d] & \ar[l](i+1,j,k) \\
& (i,j-1,k) &
}
$}
\]
So it is equivalent to show that $y_{(i,j),\k} = J_{i,j,k}/I_{i,j,k}$, which follows from Proposition \ref{prop:coefatvertex}.
\end{proof}
\begin{figure}
\begin{center}\includegraphics{fig_IJ}\end{center}
\caption{
$J_{0,0,5} = c_{0,-5}c_{0,-4}\dots c_{0,5}$ can be realized as a shadow cast downward from $(0,0,5)$, depicted by the red dots in the section $i=0$.
$I_{0,0,-5} = c_{-4,0}c_{-3,0}\dots c_{4,0}$ can be realized as a shadow cast upward from $(0,0,-5)$, depicted by the blue dots in the section $j=0$.
}
\label{fig_IJ}
\end{figure}
Due to Theorem \ref{thm:Tsyscoef}, we view the T-system with principal coefficients as a recurrence relation on $T_{i,j,k}$, $(i,j,k)\in\mathbb{Z}^3_{\rm odd}$ with extra coefficient variables $c_{i,j}$, $(i,j)\in\mathbb{Z}^2.$
Fixing a point $p=(i_0,j_0,k_0)\in\mathbb{Z}^3_{\rm odd}$ and an admissible initial data on a stepped surface $\mathbf{k}$, Theorem \ref{thm:Laurent} guarantees that the expression of $T_{i_0,j_0,k_0}$ is a Laurent polynomial in the initial data $\{ t_{i,j}=T_{i,j,\k(i,j)}\mid (i,j)\in\mathbb{Z}^2 \}$ and coefficients $\{c_{i,j}\mid (i,j)\in\mathbb{Z}^2 \}$.
The goal is to give combinatorial interpretation for this expression.
In this paper, we study the case when $p$ is above $\k$ and $\k$ is above $\mathbf{fund}$, i.e.
\begin{align}\label{eq:surfacecond}
k_0 \geq \k(i_0,j_0)\quad\text{and}\quad\k(i,j) \geq \mathbf{fund}(i,j) = ((i+j)\bmod 2)-1 \quad\text{for all }(i,j)\in\mathbb{Z}^2.
\end{align}
In this case, we have explicit combinatorial solutions in terms of perfect matchings in Sections \ref{sec_Dimer} and \ref{sec_edge-weight}, non-intersecting paths in Section \ref{sec_path} and networks in Section \ref{sec_network}.
\section{Perfect-matching solution}\label{sec_Dimer}
The goal of this section is to give an expression of $T_{i_0,j_0,k_0}$ in terms of a partition function of weighted perfect matchings of a certain graph.
There is previous work \cite{Speyer,MS10,JMZ13} on expressing cluster variables using perfect matchings of certain weighted graphs.
When restricted to cluster variables, the weight studied in \cite{Speyer} coincides with the ``face-weight'' in Definition \ref{defn:faceweight}, while the weight in \cite{MS10,JMZ13} coincides with the ``edge-weight'' in Definition \ref{defn:edgeweight}.
\subsection{Graphs from stepped surfaces}\label{subsec_graph}
We fix a point $p=(i_0,j_0,k_0)\in\mathbb{Z}^3_{\rm odd}$, an admissible stepped surface $\k$ and an initial data $X_\mathbf{k}(\mathbf{t}):\{ T_{i,j,\k(i,j)}=t_{i,j} \mid i,j\in\mathbb{Z}\}$ on $\mathbf{k}$.
Also assume that $k_0 \geq \k(i_0,j_0)$ and $\k \geq \mathbf{fund}$, i.e. $\k(i,j)\geq\mathbf{fund}(i,j)$ for all $(i,j)\in\mathbb{Z}^2$.
From the stepped surface $\mathbf{k}$, we follow the construction in \cite{Speyer} and define, using Table \ref{tab:quiver&k}, an infinite bipartite graph $G_\k$ associated with $\k$.
This graph can also be realized as the dual graph of the quiver $\mathcal{Q}_\k$ associated with $\k$, together with a bi-coloring of its vertices; see the end of Section \ref{subsec:Tsys}.
Faces of $\mathcal{Q}_{\k}$ become vertices of $G_\k.$
Since all faces of $\mathcal{Q}_\k$ are oriented, we color a vertex of the graph white if the arrows around its corresponding face of the quiver are oriented counter-clockwise, and black if they are oriented clockwise.
Vertices of $\mathcal{Q}_\k$ become faces of $G_\mathbf{k}$.
Since the vertices of the quiver are indexed by $\mathbb{Z}^2$, we will use $(i,j)\in\mathbb{Z}^2$ to represent a face of the graph.
Arrows of $\mathcal{Q}_\k$ give edges of $G_\k$.
There are three types of edges in the graph: horizontal, vertical and diagonal, which come from vertical, horizontal and diagonal arrows of the quiver, respectively.
See Figure \ref{fig:3edges} for an example.
\begin{table}
\begin{center}{\input{table}}\end{center}\bigskip
\caption{All six local pictures of $\mathcal{Q}_\k$ and $G_\k$ for four points $A=(i,j),$ $B=(i+1,j),$ $C=(i+1,j+1),$ and $D=(i,j+1)$ on a stepped surface $\k$.}
\label{tab:quiver&k}
\end{table}
\begin{figure}
\centering
\begin{subfigure}[b]{0.3\textwidth}
\centering
$\begin{matrix} 0 && 1 && 2 &&3 \\\\ 1 && 2 && 1 && 2 \\\\ 2& & 1 && 0 && 1 \\\\ 1 && 2 && 1 && 2 \end{matrix}$
\caption{A portion of $\mathbf{k}$}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\centering$\xymatrix@=1.45em{
\bullet\ar[d] & \bullet\ar[l]\ar[d] & \bullet\ar[l]\ar[rd] & \bullet\ar[l] \\
\bullet\ar[ur]\ar[d] &\bullet\ar[l]\ar[r] & \bullet\ar[u]\ar[ld]\ar[rd] & \bullet\ar[l]\ar[u] \\
\bullet\ar[r] & \bullet\ar[u]\ar[r]\ar[d] & \bullet\ar[u]\ar[d] & \bullet\ar[u]\ar[l]\ar[d] \\
\bullet\ar[u] & \bullet\ar[l]\ar[r] & \bullet\ar[ul]\ar[ur] & \bullet\ar[l]
}$
\caption{A portion of $\mathcal{Q}_\k$}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\centering\includegraphics[scale=1.2]{fig_surface_quiver_graph}
\caption{A portion of $G_\mathbf{k}$}
\end{subfigure}
\caption{An example of $\mathbf{k}$ and its corresponding $\mathcal{Q}_\k$ and $G_\mathbf{k}.$}
\label{fig:3edges}
\end{figure}
If $\k'$ is obtained from $\k$ by a mutation at $(i,j)$, then we can see from Table \ref{tab:quiver&k} that the face $(i,j)$ in $G_\k$ must be a square.
In addition, $G_{\k'}$ can be obtained \cite{Ciucu,Speyer} from $G_\k$ by the following steps.
\begin{enumerate}
\item Apply \defemph{urban renewal} at the face $(i,j)$, see Figure \ref{fig:urban}.
\item \defemph{Collapse} any degree-2 vertices created by the previous step, see Figure \ref{fig:deg2vertex}.
\end{enumerate}
\begin{figure}
\begin{center}\includegraphics{fig_singlemutation}\end{center}
\caption{The urban renewal at the face $(i,j)$.}
\label{fig:urban}
\end{figure}
\begin{figure}
\begin{center}\includegraphics[scale=1.3]{fig_degree2vertex}\end{center}
\caption{A degree-2 vertex and its two adjacent vertices collapse into one vertex.}
\label{fig:deg2vertex}
\end{figure}
We use the notations $F(G)$, $V(G)$ and $E(G)$ for the set of faces, vertices and edges of a graph $G$, respectively.
We then define two subsets $\mathring{F}=\mathring{F}(p,G_\k)$ and $\partial F = \partial F(p,G_\k)$ of $F(G_\mathbf{k})=\mathbb{Z}^2$ depending on $p$ and $\k$ as follows.
\begin{align}
\begin{aligned}
\mathring{F}&=\left\{ (i,j)\in \mathbb{Z}^2 ~\big|~ |i-i_0|+|j-j_0|<k_0-\k(i,j) \right\},\\
\partial F &= \left\{ (i',j')\in\mathbb{Z}^2 \setminus \mathring{F} ~\Big|~ |i'-i|+|j'-j|=1 \text{ for some }(i,j)\in \mathring{F} \right\}.
\end{aligned}\label{eq:opencloseface}
\end{align}
By convention, we set $\partial F=\{(i_0,j_0)\}$ when $k_0=\k(i_0,j_0)$.
The set $\mathring{F}$ can be visualized as the set of faces strictly inside the shadow cast from $p$ onto $\k$, while $\partial F$ is the boundary of this shadow.
The following example shows elements of $\mathring{F}$ in blue and elements of $\partial F$ in red when $p=(0,0,3)$ and $\k:(i,j)\mapsto |i+j|-1$.
\begin{center}
\includegraphics[scale=0.65]{fig_opencloseface}
\end{center}
The picture on the left shows the faces on $(i,j)$-plane, discarding the $k$-direction.
The picture on the right shows the projection in the section $j=0$ of the whole 3-dimensional space.
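For this example, the sets in \eqref{eq:opencloseface} can be enumerated by a direct scan; this is a sketch, and the finite search window is an assumption chosen large enough to contain the shadow:

```python
# Closed faces F_ring and open faces dF of eq. (opencloseface)
# for p = (0,0,3) and the stepped surface k(i,j) = |i+j| - 1.
i0, j0, k0 = 0, 0, 3
k = lambda i, j: abs(i + j) - 1

window = range(-6, 7)  # assumed large enough to contain the shadow
F_ring = {(i, j) for i in window for j in window
          if abs(i - i0) + abs(j - j0) < k0 - k(i, j)}
dF = {(i + di, j + dj)
      for (i, j) in F_ring
      for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))} - F_ring

print(sorted(F_ring))
# [(-1, 0), (-1, 1), (0, -1), (0, 0), (0, 1), (1, -1), (1, 0)]
```

The seven faces printed are exactly the blue faces of the picture on the left.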
We will see later from the solution to the T-system (Theorem \ref{thm:main}) that the expression of $T_{i_0,j_0,k_0}$ depends only on $t_{i,j}$'s where $(i,j)\in \mathring{F}\cup\partial F$.
For this reason, we will work with a finite subgraph $G_{p,\mathbf{k}}$ of $G_\mathbf{k}$ generated by the faces in $\mathring{F},$ while considering the faces in $\partial F$ as ``open faces'' as in the following definition.
\begin{defn}[Graph with open faces {\cite[Section 2.2]{Speyer}}]
The \defemph{graph with open faces} associated with $p$ and $\k$ is defined to be a pair $\left(G,\partial F(G)\right)$ where $G:=G_{p,\mathbf{k}}$ is a finite subgraph of $G_\mathbf{k}$ generated by the faces in $\mathring{F}$, and $\partial F(G):= \partial F$ is the set of \defemph{open faces}.
\end{defn}
Since we can always determine $\partial F(G)$ from $F(G),$ we will omit $\partial F(G)$ by writing just $G$ instead of $(G,\partial F(G)).$
The faces in $F(G)=\mathring{F}$ are called \defemph{closed faces}, while the faces in $\partial F(G)=\partial F$ are called \defemph{open faces}.
Later in the paper, some other solutions to the T-systems with principal coefficients will look nicer if written in terms of the ``closure'' of $G$ instead of $G$.
This will be a graph with no open faces.
\begin{defn}[The closure $\overline{G}$ of $G$]\label{defn:Gclosure}
For a point $p$ and a surface $\k$, let $\k_p$ be the adjusted stepped surface associated with $\k$ and $p$ defined in \eqref{eq:fun&topsurface} and $G_\infty:=G_{\k_p}$ be the graph associated to $\k_p$.
We define the closure $\overline{G}$ of $G$ to be the finite subgraph of $G_\infty$ generated by $\mathring{F}\cup\partial F$, and we think of it as a graph with no open face.
\end{defn}
We note that $\k(i,j)=\k_p(i,j)$ for all $(i,j)\in F(G)\cup\partial F(G).$
So the graphs with open faces $G_{p,\k}$ and $G_{p,\k_p}$ are exactly the same except for the shape of the open faces.
Due to the following proposition, we can obtain $\overline{G}$ directly from $G$ by closing all the open faces of $G$ in a certain way.
\begin{prop}\label{prop:modifiedopenface}
All 16 types of the faces of $\overline{G}$ in $F(\overline{G})\setminus F(G) = \partial F$ are shown in Figure \ref{fig_modifiedopenface} where dotted lines indicate edges in $E(\overline{G})\setminus E(G).$
\end{prop}
\begin{proof}
At each open face of $G$, we consider the heights of its neighboring faces.
The shapes of the faces are obtained from Table \ref{tab:quiver&k}.
The proposition then follows easily.
\end{proof}
\begin{ex}\label{ex:gbar}
Let $\k(i,j)=|i+j|-1$ and $p=(0,0,3)$.
Then the infinite graphs $G_\mathbf{k}$, $G_\infty=G_{\k_p}$ and the finite graphs $G=G_{p,\k}$, $\overline{G}$ are shown in Figure \ref{fig_Gbar}.
\end{ex}
\begin{figure}
\begin{center}\includegraphics[scale=0.8]{fig_modifiedopenface}\end{center}
\caption{Faces in $F(\overline{G})\setminus F(G) = \partial F$ of $\overline{G}.$}
\label{fig_modifiedopenface}
\end{figure}
\begin{figure}
\begin{center}\includegraphics[scale=0.5]{fig_Gbar}\end{center}
\caption{
$G_\mathbf{k}$, $G_\infty$, $G$ and $\overline{G}$ when $\k(i,j)=|i+j|-1$ and $p=(0,0,3)$.
The shaded faces are the faces in $\mathring{F}$.
}
\label{fig_Gbar}
\end{figure}
\subsection{Face-weight and pairing-weight}\label{subsec:weight}
From this point onward, we let $G:=G_{p,\k}$, regarded as a graph with open faces.
Let $\mathcal{M}$ be the set of all perfect matchings, a.k.a. dimer configurations, of $G.$
We recall that a perfect matching of $G$ is a subset $M\subseteq E(G)$ such that each $v\in V(G)$ is incident to exactly one edge in $M.$
We define the face-weight $w_f$ and the pairing-weight $w_p$ on $G$, which contribute cluster variables/initial data $t_{i,j}$'s and coefficients $c_{i,j}$'s, respectively, to the expression of $T_{i_0,j_0,k_0}$.
\begin{defn}\label{defn:faceweight}
Let $M$ be a perfect matching of $G$. The \defemph{face-weight} of $M$ is defined as:
\[
w_f(M) := \prod_{x\in \mathring{F}\cup\partial F}w_f(x),
\]
where the contribution of a face to the product is defined as:
\[
w_f(i,j) :=
\begin{cases}
t_{i,j}^{\left\lceil {\frac{b-a}2}\right\rceil-1}, & (i,j)\in \mathring{F},\\
t_{i,j}^{\left\lceil {\frac{b-a}2}\right\rceil}, & (i,j)\in \partial F,
\end{cases}
\]
where $a$ is the number of sides of $(i,j)$ in the matching $M$ and $b$ is the number of sides of $(i,j)$ in $E(G)\setminus M.$
\end{defn}
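The exponent in the face-weight depends only on the counts $a$ and $b$; a small sketch (the function name is illustrative):

```python
import math

def face_exponent(a, b, closed=True):
    """Exponent of t_{i,j} in the face-weight: ceil((b-a)/2) - 1 for a
    closed face and ceil((b-a)/2) for an open face, where a is the number
    of sides of the face in M and b the number of sides not in M."""
    e = math.ceil((b - a) / 2)
    return e - 1 if closed else e

# A closed square face: with no side matched (a=0, b=4) it contributes
# t^{1}; with exactly one side matched (a=1, b=3) it contributes t^{0}.
print(face_exponent(0, 4), face_exponent(1, 3))  # 1 0
```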
The pairing-weight will be defined on pairs of horizontal edges in $M.$
We first note that there are exactly two types of horizontal edges in $G$ as follows.
\begin{center}
\includegraphics{fig_blackwhite}
\end{center}
\begin{itemize}
\item a \defemph{white-black} horizontal edge, an edge joining a white vertex on the left and a black vertex on the right. We will call it $N(i,j)$, indexed by the face $(i,j)$ below it (it is the north side of the face $(i,j)$).
\item a \defemph{black-white} horizontal edge, an edge joining a black vertex on the left and a white vertex on the right. We will call it $S(i,j)$, indexed by the face $(i,j)$ above it (it is the south side of the face $(i,j)$).
\end{itemize}
An \defemph{allowed pair} is a pair of edges $S(i,j_1)$ and $N(i,j_2)$ with $j_1\leq j_2$ in the same column of the graph.
In other words, an allowed pair consists of a white-black horizontal edge above a black-white horizontal edge in the same column.
We write $\binom{N(i,j_2)}{S(i,j_1)}$ for an allowed pair.
Since $F(G)\subset\mathbb{Z}^2$, for each $i$ we can consider the subgraph of $G$ generated by the faces in $F(G)\cap (\{i\}\times\mathbb{Z}).$
In this column subgraph, we read from the bottom to the top and get a sequence of horizontal edges in the matching $M$.
We then pair these edges into allowed pairs by the following steps.
\begin{enumerate}
\item If $S(i,j_1)$ and $N(i,j_2)$ where $j_1\leq j_2$ are consecutive in the sequence, we pair the two.
\item Remove both $S(i,j_1)$ and $N(i,j_2)$ from the sequence, and repeat the first step until the sequence is empty.
\end{enumerate}
We do this to all of the columns of $G$.
The set $P$ of all allowed pairs obtained by this process is called the \defemph{perfect pairing} of $M$.
Proposition \ref{prop:main} guarantees that this process terminates and that the perfect pairing always exists.
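The pair-and-remove procedure is the familiar bracket-matching scan: reading a column from bottom to top, each $S$ edge acts as an opening bracket and each $N$ edge as a closing one. A sketch under that reading (the data layout is an illustrative choice):

```python
def perfect_pairing_column(edges):
    """Pair the horizontal edges of one column, listed bottom to top as
    ('S', j) / ('N', j) entries.  Each 'N' closes the nearest unmatched
    'S' below it, reproducing the repeated pair-and-remove steps."""
    stack, pairs = [], []
    for kind, j in edges:
        if kind == 'S':
            stack.append(j)
        else:
            j1 = stack.pop()          # nonempty by the dominance property (3)
            pairs.append((('N', j), ('S', j1)))
    assert not stack                  # equal counts by property (2)
    return pairs

# A column containing, bottom to top, S(i,-1), S(i,0), N(i,0), N(i,1):
print(perfect_pairing_column([('S', -1), ('S', 0), ('N', 0), ('N', 1)]))
# [(('N', 0), ('S', 0)), (('N', 1), ('S', -1))]
```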
Now the pairing-weight is defined in the following definition.
\begin{defn}\label{defn:pairweight}
Let $P$ be the perfect pairing of a perfect matching $M$ of $G$.
The \defemph{pairing-weight} on $M$ is defined to be:
\[
w_p(M):=\prod_{x\in P}w_p(x),
\]
where a contribution of an allowed pair in the product is defined as:
\begin{align*}
w_p\binom{ N(i,j_2)}{S(i,j_1)} := J_{i,j',k'}= \prod_{a=j_1-\k(i,j_1)-1}^{j_2+\k(i,j_2)+1}c_{i,a},
\end{align*}
and
\begin{align*}
j' &= j_1-\k(i,j_1)+k'-1 = j_2+\k(i,j_2)-k'+1,\\
k' &= \dfrac{\k(i,j_1)+\k(i,j_2)-j_1+j_2}{2}+1.
\end{align*}
A contribution of an allowed pair in the perfect pairing to the pairing-weight can be illustrated by Figure \ref{fig_pairing}.
\end{defn}
\begin{figure}
\begin{center}\includegraphics{fig_pairing}\end{center}
\caption{
If $\k$ is the surface depicted in blue, then $w_p\binom{ N(0,10)}{S(0,4)}= c_{0,0}c_{0,1}\dots c_{0,15} = J_{0,8,8}$ is shown in red.
The picture is drawn in the section $i=0$ of the 3-dimensional lattice.
}
\label{fig_pairing}
\end{figure}
\begin{ex}
Consider the following perfect matching $M$ of the graph $G$ from Example \ref{ex:gbar}.
\begin{center}
\includegraphics[scale=0.9]{fig_pairingex}
\end{center}
The perfect pairing is
\[
P=\left\{ \binom{N(-1,1)}{S(-1,0)}, \binom{N(0,0)}{S(0,0)}, \binom{N(0,1)}{S(0,-1)}, \binom{N(1,0)}{S(1,-1)} \right\}.
\]
We then have
\[
w_p(M) = (c_{-1,-1}c_{-1,0}c_{-1,1})(c_{0,0})(c_{0,-2}c_{0,-1}\dots c_{0,2})(c_{1,-1}c_{1,0}c_{1,1}).
\]
Also, the face-weight of $M$ is $w_f(M) = t_{-2,0}t_{0,0}^{-1}t_{2,0}.$
\end{ex}
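The contributions in this example can be reproduced directly from the product formula in Definition \ref{defn:pairweight}; this is a sketch with symbolic coefficient labels, using the stepped surface $\k(i,j)=|i+j|-1$ of the example (the helper name is illustrative):

```python
k = lambda i, j: abs(i + j) - 1   # the stepped surface of this example

def pair_weight(i, j1, j2):
    # w_p of the pair (N(i,j2) over S(i,j1)): product of c_{i,a}
    # for a = j1 - k(i,j1) - 1, ..., j2 + k(i,j2) + 1
    return [f"c[{i},{a}]" for a in range(j1 - k(i, j1) - 1,
                                         j2 + k(i, j2) + 2)]

print(pair_weight(0, 0, 0))    # pair (N(0,0) / S(0,0)): just c_{0,0}
print(pair_weight(0, -1, 1))   # pair (N(0,1) / S(0,-1)): c_{0,-2} ... c_{0,2}
print(pair_weight(-1, 0, 1))   # pair (N(-1,1) / S(-1,0)): c_{-1,-1} c_{-1,0} c_{-1,1}
```

The three printed products match the corresponding factors of $w_p(M)$ in the example above.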
\begin{prop}\label{prop:main}
Let $M$ be a perfect matching of $G$.
Then the following holds.
\begin{enumerate}
\item
If all four adjacent faces of a face $(i,j)\in\mathring{F}$ have the same height, then the face $(i,j)$ is a square.
Also, the coloring around the face depends on the height of $(i,j)$ and its neighbors as shown in Figure \ref{fig:mutatable}.
\item
For each $i\in\mathbb{Z},$ we get $|\{ S(i,j)\in M \mid j\in\mathbb{Z}\}| = |\{ N(i,j)\in M \mid j\in\mathbb{Z}\}|.$
That means the number of black-white horizontal edges and the number of white-black horizontal edges of $M$ in the same column are equal.
\item
For each $i,j\in\mathbb{Z},$ $|\{ S(i,b)\in M \mid b\leq j\}| \geq |\{ N(i,b)\in M \mid b\leq j\}|.$
That means in any column of $G$, counting from bottom to top, the number of black-white horizontal edges in $M$ dominates the number of white-black edges in $M$.
\end{enumerate}
\end{prop}
\begin{proof}
(1) follows directly from Table \ref{tab:quiver&k}.
For (2) and (3), we first notice that if the point $p=(i_0,j_0,k_0)$ lies on the stepped surface $\k$ then the graph with open faces associated to $p$ and $\k$ is $(G,\partial F)$ where $G$ is empty and $\partial F = \{ (i_0,j_0) \}.$
So (2) and (3) hold automatically.
If $k_0 > \k(i_0,j_0)$, then $G:=G_{p,\k} = G_{p,\k_p}$, see \eqref{eq:fun&topsurface}.
Without loss of generality, we can then assume that $\k = \k_p$.
Then $\k$ is obtainable from the stepped surface $\mathbf{proj}_p$ by a finite number of downward mutations.
We will show (2) and (3) using induction on the number of downward mutations from $\mathbf{proj}_p.$
When $\k$ differs from $\mathbf{proj}_p$ by only one downward mutation, $G$ is a square of type (S1) in Figure \ref{fig:mutatable}.
There are only two perfect matchings of the graph, which both satisfy (2) and (3).
Next, we assume that the claims hold for any surface which differs from $\mathbf{proj}_p$ by fewer than $n$ mutations.
Let $\k$ be a surface obtained from $\mathbf{proj}_p$ by $n$ downward mutations.
There must be an intermediate surface $\k'$ such that $\k'$ is obtained from $\mathbf{proj}_p$ by $n-1$ downward mutations and $\k$ is obtained from $\k'$ by one downward mutation, say at $(i,j)$.
We have two cases:
\begin{center}
\includegraphics{fig_2case}
\end{center}
\textbf{Case 1.}
If $(i,j)$ is a closed face of $G_{p,\k'}$, then $(i,j)$ is also a closed face of $G_{p,\k}$.
$G_{p,\k}$ is obtained from $G_{p,\k'}$ by applying the urban renewal action at the face $(i,j)$ then collapsing all degree-2 vertices created by the urban renewal.
For any perfect matching $M$ of $G_{p,\k}$, there exists a perfect matching $M'$ of $G_{p,\k'}$ differing from $M$ only at the face $(i,j)$, see Figure \ref{fig:matching-mutation}.
Since $M'$ satisfies (2) and (3) by the induction hypothesis, we see from Figure \ref{fig:matching-mutation} that $M$ also satisfies (2) and (3).
So they hold for any matching of $G_{p,\k}$.
\textbf{Case 2.}
If $(i,j)$ is an open face of $G_{p,\k'}$, then $(i,j)$ becomes a closed face of $G_{p,\k}$.
We first consider the case when $i>i_0$ and $j>j_0$.
$G_{p,\k}$ is obtained from $G_{p,\k'}$ by applying the urban renewal action at the face $(i,j)$ and collapsing all degree-2 vertices created by the urban renewal.
This yields the correspondence of the matchings of $G_{p,\k'}$ and $G_{p,\k}$ via Figure \ref{fig_urbanopen}.
With the same argument as for a closed face, (2) and (3) hold for any perfect matchings of $G_{p,\k}$.
Similarly, if $i>i_0$ and $j=j_0$, the correspondence is shown in Figure \ref{fig_urbaneast}, which implies (2) and (3) for any perfect matching of $G_{p,\k}$.
The other cases can be treated similarly.
In both cases, the statements (2) and (3) hold for every perfect matching of $G_{p,\k}$, completing the induction.
\end{proof}
\begin{figure}
\begin{center}\includegraphics{fig_mutatable}\end{center}
\caption{
The only two possible types of square faces.
The values on the faces indicate their height.
}
\label{fig:mutatable}
\end{figure}
\begin{table}
\begin{center}\includegraphics[scale=0.9]{fig_matching_mutation}\end{center}
\caption{The list of all correspondences between matchings before and after a single downward mutation at a closed face.}
\label{fig:matching-mutation}
\end{table}
\begin{table}%
\begin{center}\includegraphics[scale=0.9]{fig_urbanopen}\end{center}
\caption{The list of all correspondences between matchings before and after a single downward mutation at an open face $(i,j)$ where $i>i_0$ and $j>j_0$.}%
\label{fig_urbanopen}%
\end{table}
\begin{table}%
\begin{center}\includegraphics[scale=0.9]{fig_urbaneast}\end{center}
\caption{The list of all correspondences between matchings before and after a single downward mutation at an open face $(i,j)$ where $i>i_0$ and $j=j_0$.}%
\label{fig_urbaneast}%
\end{table}
We have defined face-weight and pairing-weight for perfect matchings of $G$.
The previous proposition ensures that the pairing-weight is well-defined.
We are now ready to state the main theorem.
\subsection{Perfect-matching solution}
\begin{thm}[Perfect-matching solution]\label{thm:main}
Let $p=(i_0,j_0,k_0)$ and let $\mathbf{k}$ be an admissible initial data stepped surface with respect to $p$, where $k_0\geq \k(i_0,j_0)$ and $\k\geq \mathbf{fund}$.
Then
\begin{align}
T_{i_0,j_0,k_0} = \sum_{M\in\mathcal{M}} w_p(M)w_f(M)
\label{eq:dimersol}
\end{align}
where $\mathcal{M}$ is the set of all the perfect matchings of $G=G_{p,\k}.$
\end{thm}
This solution specializes to the solution in \cite{Speyer} for the coefficient-free T-system \cite[The Aztec Diamonds theorem]{Speyer} when $c_{i,j}=1$ for all $(i,j)\in\mathbb{Z}^2$.
The proof of the theorem follows the proof in \cite{Speyer}, using the ``infinite completion'' of $G=G_{p,\k}$, which coincides with $G_\infty = G_{\k_p}$ in our setup.
To do so, we need to make sense of perfect matchings of $G_\infty$ and weight on them.
\begin{defn}[Acceptable perfect matching of $G_\infty$]
We call a perfect matching $M_\infty$ of $G_\infty$ \defemph{acceptable} if $M_\infty \setminus E(G)$ is exactly the set of all the diagonal edges in $E(G_\infty)\setminus E(G)$.
\end{defn}
We then extend the definitions of the face-weight and the pairing-weight to acceptable perfect matchings of $G_\infty$.
Notice that $G_\infty$ has no open faces.
Moreover, the weights of $M$ and $M_\infty$ are equal, i.e.
\begin{align}
w_p(M_\infty)w_f(M_\infty) = w_p(M)w_f(M).
\label{eq:weightpreserve}
\end{align}
The following proposition gives a bijection between the perfect matchings of $G$ and the acceptable perfect matchings of $G_\infty$.
\begin{prop}[{{\cite[Proposition 6]{Speyer}}}]\label{prop:infcomp}
There exists a bijection between the set of all perfect matchings of $G$ and the set of all acceptable perfect matchings of $G_\infty$, which maps a perfect matching $M$ of $G$ to an acceptable perfect matching $M_\infty$ of $G_\infty$ where
\[
M_\infty = M \cup \{ \text{diagonal edges in }E(G_\infty)\setminus E(G) \}.
\]
\end{prop}
\begin{ex}
Figure \ref{fig_exinfcomp} shows an example of a perfect matching $M$ of $G$ and its corresponding acceptable perfect matching $M_\infty$ of $G_\infty$ from the bijection in Proposition \ref{prop:infcomp}.
An edge in $M_\infty$ is either an edge in $M$ (described in red) or a diagonal edge in $E(G_\infty)\setminus E(G)$ (described in blue).
\end{ex}
\begin{figure}
\begin{center}\includegraphics[scale=0.6]{fig_exinfcomp}\end{center}
\caption{A perfect matching $M$ of $G$ and its corresponding acceptable perfect matching $M_\infty$ of $G_\infty$. }
\label{fig_exinfcomp}
\end{figure}
From \eqref{eq:weightpreserve} and Proposition \ref{prop:infcomp}, we can see that Theorem \ref{thm:main} is equivalent to the following theorem.
\begin{thm}[Perfect matching solution for $G_\infty$]\label{thm:maininf}
Let $p$ and $\k$ be as in the assumption of Theorem \ref{thm:main} and $\k_p$ be defined as in \eqref{eq:fun&topsurface}.
We have
\[
T_{i_0,j_0,k_0} = \sum_{M} w_p(M)w_f(M)
\]
where the sum runs over all the acceptable perfect matchings of $G_\infty=G_{\k_p}.$
\end{thm}
\begin{proof}
Since $T_{i_0,j_0,k_0}$ depends only on $G_{p,\k}=G_{p,\k_p}$, we can assume without loss of generality that $\k = \k_p$.
We will prove the theorem by using induction on the number of downward mutations from the top-most stepped surface $\mathbf{proj}_p$ to $\k$.
The base case is when $\k = \mathbf{proj}_p$.
The graph $G_{\mathbf{proj}_p}$ is shown in Figure \ref{fig:p}.
There is only one acceptable perfect matching, and its weight is $t_{i_0,j_0}=T_{i_0,j_0,k_0}$.
So the theorem holds for the base case.
Assuming that the theorem holds for any stepped surface differing from $\mathbf{proj}_p$ by fewer than $n$ downward mutations, we let $\k$ be a surface obtained from $\mathbf{proj}_p$ by $n$ downward mutations.
Then we can find an intermediate surface $\k'$ such that it is obtained from $\mathbf{proj}_p$ by $n-1$ downward mutations and $\k$ is obtained from $\k'$ by one downward mutation at $(i,j)$.
We also assume that $\k(i,j)=k-1$ and $\k'(i,j)=k+1$ for some $k\in\mathbb{Z}$.
By the induction hypothesis we have
\[
T_{i_0,j_0,k_0} = \sum_{\text{acceptable }M\text{ of }G_{\k'}} w_p(M)w_f(M).
\]
Let $M$ be any acceptable perfect matching of $G_{\k'}$.
By Proposition \ref{prop:main}, the face $(i,j)$ of $G_{\k'}$ is a square of type (S2) in Figure \ref{fig:mutatable}.
Then the matching $M$ at the face $(i,j)$ must be one of the 7 cases in the first column of Figure \ref{fig:matching-mutation}.
If $M$ is of type $(A)$ at $(i,j)$, there are two matchings $M_{A1'}$ and $M_{A2'}$ of $G_{\k}$, of types $(A1')$ and $(A2')$ respectively, which agree with $M$ except locally at the face $(i,j)$.
We then have
\begin{align*}
w_p(M_{A1'})w_f(M_{A1'}) &= \frac{T_{i,j-1,k}T_{i,j+1,k}}{T_{i,j,k-1}T_{i,j,k+1}}w_p(M)w_f(M),\\
w_p(M_{A2'})w_f(M_{A2'}) &= \frac{J_{i,j,k}T_{i-1,j,k}T_{i+1,j,k}}{T_{i,j,k-1}T_{i,j,k+1}}w_p(M)w_f(M).
\end{align*}
The term $J_{i,j,k}$ in the second equation came from an extra pair $\binom{ N(i,j)}{ S(i,j)}$ in $M_{A2'}$ which gives an extra term $J_{i,j,k}$ to the pairing-weight.
By \eqref{eq:tsyscoef}, together with the fact that $I_{i,j,k}=1$ (indeed $\k\geq\mathbf{fund}$ forces $k-1\geq -1$, hence $k\geq 0$), we have
\begin{align}
w_p(M_{A1'})w_f(M_{A1'})+w_p(M_{A2'})w_f(M_{A2'}) = w_p(M)w_f(M).
\label{eq:a}
\end{align}
If $M$ is of type $(B)$, there exists a unique corresponding matching $M_{B'}$ of $G_\k$ of type $(B')$.
We see that the weights of $M$ and $M_{B'}$ are equal.
That is
\begin{align}
w_p(M_{B'})w_f(M_{B'}) = w_p(M)w_f(M).
\label{eq:b}
\end{align}
If $M$ is of type $(C1)$ (resp. $(C2)$), let $M'$ be another matching of $G_{\k'}$ of type $(C2)$ (resp. $(C1)$) which is the same as $M$ except for the two edges at the face $(i,j)$.
Without loss of generality, we assume that $M$ is of type $(C1)$ and $M'$ is of type $(C2)$.
Then there exists a corresponding perfect matching $M_{C'}$ of $G_\k$ of type $(C')$.
We then have
\[
w_p(M)w_f(M) = \frac{T_{i,j-1,k}T_{i,j+1,k}}{T_{i,j,k-1}T_{i,j,k+1}}w_p(M_{C'})w_f(M_{C'}).
\]
To write $w_p(M')$ in terms of $w_p(M_{C'})$, we first notice that there must be two other edges $S(i,j_1),N(i,j_2)\in M'$ with $j_1<j<j_2$ such that both pairs $\binom{N(i,j-1)}{ S(i,j_1)}$ and $\binom{N(i,j_2)}{S(i,j+1)}$ are in the perfect pairing of $M'$, while the perfect pairing of the corresponding $M_{C'}$ contains only $\binom{N(i,j_2)}{S(i,j_1)}$.
Writing $k_1=\k(i,j_1)$ and $k_2=\k(i,j_2)$, we thus have
\[
w_p(M') = \frac{\prod_{b=j_1-k_1-1}^{j-1+k+1} c_{i,b} \prod_{b=j+1-k-1}^{j_2+k_2+1} c_{i,b}}{\prod_{b=j_1-k_1-1}^{j_2+k_2+1} c_{i,b}}w_p(M_{C'})= J_{i,j,k}w_p(M_{C'}),
\]
and so
\[
w_p(M')w_f(M') = \frac{J_{i,j,k}T_{i-1,j,k}T_{i+1,j,k}}{T_{i,j,k-1}T_{i,j,k+1}}w_p(M_{C'})w_f(M_{C'}).
\]
Hence,
\begin{align}
w_p(M)w_f(M)+w_p(M')w_f(M') = w_p(M_{C'})w_f(M_{C'}).
\label{eq:c}
\end{align}
By \eqref{eq:a}, \eqref{eq:b}, \eqref{eq:c} and the induction hypothesis, we can conclude that
\[
\sum_{\text{acceptable }M\text{ of }G_{\k}} w_p(M)w_f(M) = \sum_{\text{acceptable }M\text{ of }G_{\k'}} w_p(M)w_f(M) = T_{i_0,j_0,k_0}.
\]
So the statement holds for $\k$. This completes the induction and proves the theorem.
\end{proof}
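As an arithmetic sanity check on the case analysis above, the following Python sketch uses hypothetical numeric values; the only structural input is the coefficient relation $T_{i,j,k-1}T_{i,j,k+1} = T_{i,j-1,k}T_{i,j+1,k} + J_{i,j,k}T_{i-1,j,k}T_{i+1,j,k}$, which is our reading of \eqref{eq:tsyscoef}. It confirms that the two prefactors in the displayed weights of $M_{A1'}$ and $M_{A2'}$ sum to give \eqref{eq:a}.

```python
from fractions import Fraction

# Hypothetical values for the neighbouring T-variables and the coefficient
# J_{i,j,k}; T_kp1 is chosen so that the coefficient relation
#   T_km1 * T_kp1 = T_jm1 * T_jp1 + J * T_im1 * T_ip1
# holds (our reading of the relation referred to as eq:tsyscoef).
T_jm1, T_jp1 = Fraction(3), Fraction(5)    # T_{i,j-1,k}, T_{i,j+1,k}
T_im1, T_ip1 = Fraction(2), Fraction(7)    # T_{i-1,j,k}, T_{i+1,j,k}
J, T_km1 = Fraction(1, 2), Fraction(4)     # J_{i,j,k} and T_{i,j,k-1}
T_kp1 = (T_jm1 * T_jp1 + J * T_im1 * T_ip1) / T_km1

w = Fraction(11)                           # stands for w_p(M) * w_f(M)
w_A1 = (T_jm1 * T_jp1) / (T_km1 * T_kp1) * w       # weight of M_{A1'}
w_A2 = (J * T_im1 * T_ip1) / (T_km1 * T_kp1) * w   # weight of M_{A2'}

assert w_A1 + w_A2 == w                    # reproduces equation (eq:a)
```

Any positive values satisfying the coefficient relation give the same conclusion, since the two prefactors sum to $1$ exactly when the relation holds.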
\begin{figure}
\begin{center}\includegraphics{fig_p}\end{center}
\caption{
The unique acceptable perfect matching of $G_{\mathbf{proj}_p}$.
The center face of the graph is $(i_0,j_0)$.
}
\label{fig:p}
\end{figure}
Now we have proved Theorem \ref{thm:main} and Theorem \ref{thm:maininf}.
In the proof of Theorem \ref{thm:maininf}, we notice that for any acceptable perfect matching $M_\infty$ of $G_\infty$, every face outside $\mathring{F}\cup\partial F$ contributes $1$ as its face-weight.
Also, the edges in $M_\infty\setminus E(G)$ are all diagonal, so they do not contribute any weight to the partition function.
We then have the following perfect-matching solution for the closure $\overline{G}$ of $G$.
\begin{thm}[Perfect matching solution for $\overline{G}$]\label{thm:mainclosure}
Let $p$ and $\k$ be as in the assumption of Theorem \ref{thm:main} and $\overline{G}$ be the closure of $G=G_{p,\k}$ defined as in Definition \ref{defn:Gclosure}.
We have
\[
T_{i_0,j_0,k_0} = \sum_{\overline{M}\in\overline{\mathcal{M}}} w_p(\overline{M})w_f(\overline{M})
\]
where $\overline{\mathcal{M}} := \left\{ \overline{M}= M\cup \operatorname{Diag}(E(\overline{G})\setminus E(G)) \mid M\in\mathcal{M} \right\}$,
$\operatorname{Diag}(A)$ is the set of all diagonal edges in $A$, and $\mathcal{M}$ is the set of all perfect matchings of $G$.
\end{thm}
We now have a combinatorial expression of $T_{i_0,j_0,k_0}$ as a partition function of face-weight and pairing-weight over all perfect matchings of a graph.
In the next section, we will combine the two weights together and construct an edge-weight.
This will be the first step toward our next aim of constructing a solution in terms of networks, analogous to \cite{DF14} for the coefficient-free T-system.
\section{Perfect-matching solution via edge-weight} \label{sec_edge-weight}
Now that we have multiple versions of the perfect matching solution to the T-system with principal coefficients in Theorem \ref{thm:main}, Theorem \ref{thm:maininf} and Theorem \ref{thm:mainclosure}, our next goal is to find a network solution analogous to the network solution for coefficient-free T-systems studied in \cite{DFK13,DF14}.
One big advantage of the network solution is that the solution can be computed explicitly via the network matrices.
In the perfect matching solution, we need to enumerate all the perfect matchings of the graph $G$ in order to compute the solution.
For the network solution, we associate an initial data stepped surface with a product of network matrices.
Then the solution is just a certain minor of the product.
In order to get the network solution, we first transform the face-weight and the pairing-weight studied in the last section to the edge-weight $w_e$ (Definition \ref{defn:edgeweight}) on edges of the closure $\overline{G}$ of $G$, which also gives us a new perfect-matching solution but with the edge-weight $w_e$ (Theorem \ref{thm:edgesol}).
This solution will be used to construct a nonintersecting-path solution in the next section.
We also note that our edge-weight coincides with the weight studied in \cite{MS10,JMZ13} in the case when all $c_{i,j}=1$.
\begin{defn}[Edge-weight $w_e$]\label{defn:edgeweight}
Let $\k$ be an admissible initial stepped surface with respect to $p$, $\overline{G}$ be the closure of the graph $G=G_{p,\k}$.
For each edge of $\overline{G}$ we assign the \defemph{edge-weight} $w_e$ as follows:
\begin{center}
\includegraphics[trim=1cm 0cm 1cm 0cm, scale=0.9]{fig_edgeweight1}
\end{center}
where $p_a$ and $\overline{p}_a$ are the following formal products
\[
p_a = \prod_{\alpha=j-k-1}^\infty c_{i,\alpha} \quad\text{and} \quad \overline{p}_a = \prod_{\alpha=j+k+2}^\infty c_{i,\alpha}
\]
when $a=(i,j)$ and $k=\k(i,j).$ We also assume that $t_a=1$ when $a\notin F(\overline{G})$ and $p_a=\overline{p}_a = 1$ when $a\notin F(G)$.
\end{defn}
\begin{defn}\label{defn:Mbar0}
Let $\mathcal{M}$ be the set of all perfect matchings of $G.$ For a matching $M\in\mathcal{M}$, we let
\[
\overline{M} := M\cup\operatorname{Diag}(E(\overline{G})\setminus E(G))
\]
be its corresponding (not necessarily perfect) matching of $\overline{G}$, where $\operatorname{Diag}(A)$ is the set of all diagonal edges in $A$ for $A\subseteq E(\overline{G}).$
Also let
\begin{align*}
\overline{\mathcal{M}} &:= \left\{ \overline{M} \mid M\in\mathcal{M}\right\},\\
\overline{M}_0 &:= \left\{\text{all white-black horizontal and diagonal edges of }\overline{G}\right\}.
\end{align*}
Then the \defemph{edge-weight} of a matching $\overline{M}\in\overline{\mathcal{M}}$ is
\begin{align}
w_e(\overline{M}) := \prod_{x\in \overline{M}} w_e(x).
\label{eq:edgeweight}
\end{align}
\end{defn}
We note that $\overline{M}$ and $\overline{M}_0$ are not necessarily perfect matchings of $\overline{G}.$
Also for $j_1\leq j_2$,
\[
p_{(i,j_1)} (\overline{p}_{(i,j_2)})^{-1} = \prod_{\alpha = j_1-\k(i,j_1)-1}^{j_2+\k(i,j_2)+1} c_{i,\alpha} = w_p\binom{N(i,j_2)}{S(i,j_1)}.
\]
By Proposition \ref{prop:main}, the product in \eqref{eq:edgeweight} is indeed a finite product of pairing-weights, hence a finite product of $c_{i,j}$'s.
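The telescoping behind the displayed identity can be checked mechanically. The following Python sketch truncates the formal infinite products at a hypothetical cutoff \texttt{CUT}; the values of $c_{i,\alpha}$ and the height function are made up for illustration. It confirms that the ratio $p_{(i,j_1)}(\overline{p}_{(i,j_2)})^{-1}$ equals the finite product $\prod_{\alpha=j_1-\k(i,j_1)-1}^{j_2+\k(i,j_2)+1} c_{i,\alpha}$, independently of the cutoff.

```python
from fractions import Fraction
from math import prod

# Hypothetical data on one fixed row i: coefficients c_{i,alpha} and a
# height function kappa(j) standing for \k(i,j).  CUT truncates the
# formal infinite products; any CUT beyond the indices used works.
c = {alpha: Fraction(alpha + 10, 7) for alpha in range(-20, 40)}
kappa = lambda j: (j % 3) - 1
CUT = 35

def p(j):        # truncated p_{(i,j)} = prod_{alpha >= j - k - 1} c_{i,alpha}
    return prod(c[a] for a in range(j - kappa(j) - 1, CUT))

def p_bar(j):    # truncated \overline{p}_{(i,j)}, starting at j + k + 2
    return prod(c[a] for a in range(j + kappa(j) + 2, CUT))

def w_pair(j1, j2):   # pairing-weight of the pair (N(i,j2), S(i,j1))
    return prod(c[a] for a in range(j1 - kappa(j1) - 1, j2 + kappa(j2) + 2))

j1, j2 = -3, 8
assert p(j1) / p_bar(j2) == w_pair(j1, j2)   # the truncation CUT cancels
```

The cancellation of all factors $c_{i,\alpha}$ with $\alpha \geq j_2+\k(i,j_2)+2$ is exactly why the product in \eqref{eq:edgeweight} is a finite product of $c_{i,j}$'s.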
The following lemma interprets the face-weight of a matching $\overline{M}$ as a function of $\overline{M}$ and our special matching $\overline{M}_0$.
\begin{lemma}\label{lem:faceweightedgeweight}
For $x\in F(\overline{G})$ and $\overline{M}\in\overline{\mathcal{M}}$, we have
\[
w_f(x) = t_x^{N_x-D_x}
\]
where $N_x=\left| \{ e\in \overline{M}_0 \mid e\text{ is a side of }x \} \right|$ and $D_x = \left| \{ e\in \overline{M} \mid e\text{ is a side of }x \} \right|$.
\end{lemma}
\begin{proof}
We can easily check that for all $x\in F(\overline{G})=\mathring{F}\cup\partial F,$ we get $w_f(x) = t_x^{\lceil \frac{N-D}{2}\rceil-1}$ where $N$ and $D=D_x$ are the numbers of sides of $x$ which are not in $\overline{M}$ and are in $\overline{M}$, respectively.
Let $S\in\{4,6,8\}$ be the number of sides of $x$. Then $N=S-D.$
So
\[
w_f(x) = t_x^{\lceil \frac{S-2D}{2}\rceil-1}=t_x^{S/2-D-1}.
\]
Since $x$ must be one of the cases in Figure \ref{fig_468}, we have $S/2-1 = N_x.$
Hence $w_f(x) = t_x^{N_x-D_x}$.
\end{proof}
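The exponent rewriting $\lceil \frac{S-2D}{2}\rceil-1 = S/2-D-1$ used in the proof relies only on $S$ being even. A minimal Python check over all relevant values of $S$ and $D$:

```python
from math import ceil

# S = number of sides of a face of the closure (always even),
# D = number of its sides lying in the matching.
for S in (4, 6, 8):
    for D in range(S + 1):
        assert ceil((S - 2 * D) / 2) - 1 == S // 2 - D - 1
```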
\begin{figure}
\begin{center}\includegraphics[scale=0.8]{fig_468}\end{center}
\caption{All possible faces of $\overline{G}$ up to $\frac{\pi}{2}$ rotation and $i$-axis reflection.}
\label{fig_468}
\end{figure}
\begin{thm}[Perfect-matching solution for $\overline{G}$ with the edge-weight $w_e$]\label{thm:edgesol}
Let $\k$ be an admissible initial data stepped surface with respect to $p=(i_0,j_0,k_0)$, and let $\overline{\mathcal{M}}$ and $\overline{M}_0$ be defined as in Definition \ref{defn:Mbar0}.
Then
\[
T_{i_0,j_0,k_0} = \sum_{\overline{M}\in\overline{\mathcal{M}}} w_e(\overline{M}) \Big/ w_e(\overline{M}_0)\big|_{c_{i,j}=1}
\]
where $w_e(\overline{M}_0)\big|_{c_{i,j}=1}$ denotes the substitution $c_{i,j}=1$ for any $(i,j)\in\mathbb{Z}^2.$
\end{thm}
\begin{proof}
Let $\overline{M}\in\overline{\mathcal{M}}$.
By Lemma \ref{lem:faceweightedgeweight}, we get
\begin{align*}
w_f(\overline{M})= \prod_{x\in F(\overline{G})}t_x^{N_x-D_x}= \prod_{x\in F(\overline{G})}t_x^{N_x}\prod_{x\in F(\overline{G})}t_x^{-D_x}.
\end{align*}
By Definition \ref{defn:edgeweight}, this equals
\begin{align*}
w_f(\overline{M}) = {\Big(w_e(\overline{M}_0)\big|_{c=1}\Big)}^{-1} \prod_{x\in F(\overline{G})}t_x^{-D_x}.
\end{align*}
Thus we have
\begin{align}
\prod_{x\in F(\overline{G})}t_x^{-D_x} = w_f(\overline{M}) w_e(\overline{M}_0)\big|_{c=1}.
\label{eq:1}
\end{align}
By the definition of $w_e$, we consider
\begin{align*}
w_e(\overline{M}) = \prod_{y\in \overline{M}}w_e(y) = \prod_{\bullet\overset{a}{\text{---}}\circ\in\overline{M}}p_a \prod_{\circ\underset{b}{\text{---}}\bullet\in\overline{M}}\overline{p}_b \prod_{x\in F(\overline{G})}t_x^{-D_x}.
\end{align*}
From \eqref{eq:1} and the fact that $\overline{M}\setminus M$ contains only diagonal edges, we then get
\begin{align*}
w_e(\overline{M})=\prod_{\bullet\overset{a}{\text{---}}\circ\in M}p_a \prod_{\circ\underset{b}{\text{---}}\bullet\in M}\overline{p}_b ~ w_f(\overline{M}) w_e(\overline{M}_0)\big|_{c=1}.
\end{align*}
Since
\[
\prod_{\bullet\overset{a}{\text{---}}\circ\in M}p_a \prod_{\circ\underset{b}{\text{---}}\bullet\in M}\overline{p}_b = w_p(\overline{M}),
\]
we conclude that $w_f(\overline{M})w_p(\overline{M}) = w_e(\overline{M})\big/w_e(\overline{M}_0)\big|_{c=1}$ for any $\overline{M}\in\overline{\mathcal{M}}.$
By Theorem \ref{thm:mainclosure}, we have $T_{i_0,j_0,k_0} = \sum_{\overline{M}\in\overline{\mathcal{M}}} w_e(\overline{M}) \big/ w_e(\overline{M}_0)|_{c=1}.$
\end{proof}
Now we have a combinatorial expression of $T_{i_0,j_0,k_0}$ in terms of a partition function of edge-weight over all matchings $\overline{M}\in\overline{\mathcal{M}}$ of $\overline{G}$.
In the next section, we give an explicit bijection between perfect matchings and non-intersecting paths (with certain sources and sinks) in both $G$ and $\overline{G}$.
Using this bijection, we are able to transform the perfect-matching solutions to a solution in terms of non-intersecting paths.
\section{Non-intersecting path solution}\label{sec_path}
In this section, we provide an explicit bijection (Proposition \ref{prop:bijection}) between the perfect matchings of $G$ and the non-intersecting paths in the oriented graph $G$ with certain sources and sinks.
It can be extended to a bijection between the matchings in $\overline{\mathcal{M}}$ of $\overline{G}$ and the non-intersecting paths in the oriented graph $\overline{G}$ with certain sources and sinks (Proposition \ref{prop:bijectionbar}).
Using this bijection and a new weight $w_e'$ modified from the edge-weight $w_e$, we can write the solution to the T-system in terms of non-intersecting paths in $\overline{G}$ (Theorem \ref{thm_pathsol}).
\subsection{Some setup}
We first show some properties of the graph $G$ and $\overline{G}$.
\begin{prop}
$G$ and $\overline{G}$ are bipartite and connected.
\end{prop}
\begin{proof}
It was proved in \cite[Section 3.5]{Speyer} that $G$ is bipartite and connected. The extension to $\overline{G}$ follows from Proposition \ref{prop:modifiedopenface}.
\end{proof}
\begin{defn}\label{defn:leftright}
For two vertices $v,v'$ of a graph, we say that $v$ (resp. $v'$) is on the \defemph{left} (resp. \defemph{right}) of $v'$ (resp. $v$) if there is a sequence of vertices of the graph $v=v_0,v_1,v_2,\dots,v_n=v'$ such that any two consecutive vertices are connected by one of the following edges.
\begin{center}
\includegraphics{fig_leftright}
\end{center}
\end{defn}
We center the graph at the face $(i_0,j_0)$.
Then the notions of the North-West, North-East, South-West and South-East of the center are well-defined.
\begin{defn}\label{defn:direction}
We denote $V_{\rm NW}(G)$, $V_{\rm NE}(G)$, $V_{\rm SW}(G)$ and $V_{\rm SE}(G)$ to be the set of all the vertices of a graph $G$ in the North-West, North-East, South-West and South-East of the center face $(i_0,j_0)$, respectively.
Also let $V_{\rm left NW}(G)$, $V_{\rm right NE}(G)$, $V_{\rm left SW}(G)$ and $V_{\rm right SE}(G)$ be the sets of all left-most NW, right-most NE, left-most SW and right-most SE vertices of $G$, respectively.
See Figure \ref{fig_ex} for an example.
\end{defn}
\begin{figure}
\begin{center}\includegraphics[scale=0.8]{fig_source_sink}\end{center}
\caption{With $(0,0)$ as the center face, we have $|V_{\rm SW}(G)|=4,$ $|V_{\rm SE}(G)|=6,$ $V_{\rm left SW}(G)=\{a,b\}$ and $V_{\rm right SE}(G)=\{c,d\}$.}
\label{fig_ex}
\end{figure}
\begin{prop}\label{prop_g}
$G$ has the following properties.
\begin{enumerate}
\item Vertices in $V_{\rm left SW}(G)\cup V_{\rm right NE}(G)$ are black.
\item Vertices in $V_{\rm right SE}(G)\cup V_{\rm left NW}(G)$ are white.
\item $|V_{\rm left SW}(G)|=|V_{\rm right SE}(G)|$.
\item $|V_{\rm left NW}(G)|=|V_{\rm right NE}(G)|$.
\end{enumerate}
\end{prop}
\begin{proof}
To show (1), we use the same analysis as in the proof of Proposition 5 in \cite{Speyer}.
It is clear that a left-most vertex must be on the boundary of $G.$
We then consider an open face $(i,j)\in \partial F$ on the South-West of $(i_0,j_0)$ with height $k\in\mathbb{Z}.$
All the eight possibilities are shown in Figure \ref{fig_SW}, where the face of height $k$ in the circle is $(i,j)$.
We see that all the left-most South-West vertices must be black.
A similar argument proves the remaining case of (1) and both cases of (2).
In order to show (3), we consider a maximal sequence $v_0,v_1,\dots,v_n$ of vertices of $G$ such that $v_i$ is on the left of $v_{i+1}$ for all $i$.
We have $v_0\in V_{\rm left SW}(G)$ if and only if $v_n\in V_{\rm right SE}(G)$.
This gives a bijection between $V_{\rm left SW}(G)$ and $V_{\rm right SE}(G)$.
This proves (3).
A similar argument shows (4).
\end{proof}
\begin{figure}
\begin{center}
\includegraphics[scale=0.7]{fig_SW}
\end{center}
\caption{
All the eight possibilities of the boundary near an open face in the South-West of the center.
The face is of height $k$ and is marked by a circle.
}
\label{fig_SW}
\end{figure}
\subsection{Non-intersecting paths and perfect matchings}
We will give an explicit bijection between the perfect matchings of $G$ and the non-intersecting paths from $V_{\rm left SW}(G)$ to $V_{\rm right SE}(G)$ in $G$ (Proposition \ref{prop:bijection}).
This bijection can be extended to $\overline{G}$ in Proposition \ref{prop:bijectionbar}.
In order to define a path in $G$ and $\overline{G}$, we need an orientation of the graphs.
\begin{defn}[Edge orientation of $G$ and $\overline{G}$]\label{defn:edgeoreient}
Let the orientation of the edges of $G$ and $\overline{G}$ be such that it goes from left to right for diagonal and horizontal edges, and from a black vertex to a white vertex for vertical edges, as follows.
\medskip
\begin{center}
\includegraphics[scale=0.8]{fig_edgeorientation}
\end{center}
\end{defn}
\begin{defn}\label{defn:sourcesink}
Let $G=G_{p,\k}$ and $\overline{G}$ be as in Definition \ref{defn:Gclosure}. With the notations from Definition \ref{defn:direction}, we define
\begin{align*}
\begin{aligned}
M_0 &:= \{\text{white-black horizontal and diagonal edges of }G\},\\
\overline{M}_0 &:= \{\text{white-black horizontal and diagonal edges of }\overline{G}\},\\
\mathcal{M} &:= \{\text{perfect matchings of }G\},\\
\overline{\mathcal{M}} &:= \{\overline{M}= M\cup\operatorname{Diag}(E(\overline{G})\setminus E(G)) \mid M\in\mathcal{M}\},\\
\mathcal{P} &:= \{\text{non-intersecting paths from }V_{\rm left SW}(G)\text{ to }V_{\rm right SE}(G)\text{ on }G\},\\
\overline{\mathcal{P}} &:= \{\text{non-intersecting paths from }V_{\rm left SW}(\overline{G})\text{ to }V_{\rm right SE}(\overline{G})\text{ on }\overline{G}\}.
\end{aligned}
\end{align*}
\end{defn}
The map $\Phi$ (resp. $\overline{\Phi}$) in the following definition is indeed the bijection between $\mathcal{M}$ and $\mathcal{P}$ (resp. between $\overline{\mathcal{M}}$ and $\overline{\mathcal{P}}$).
They will be key ingredients in constructing our nonintersecting-path solution.
\begin{defn}\label{defn:barphi}
We define a map $\Phi:\mathcal{M}\rightarrow\mathcal{P}$ by
\begin{align*}
\Phi(M) :=M\triangle M_0,\quad\text{for }M\in\mathcal{M}.
\end{align*}
This map can be extended to a map $\overline{\Phi}:\overline{\mathcal{M}}\rightarrow\overline{\mathcal{P}}$ defined by
\begin{align*}
\overline{\Phi}(\overline{M}) := \overline{M}\triangle\overline{M}_0,\quad\text{for }\overline{M}\in\overline{\mathcal{M}},
\end{align*}
where $A\triangle B := (A\cup B) \setminus (A\cap B) = (A\setminus B)\sqcup(B\setminus A)$ is the symmetric difference of $A$ and $B$.
The notation $\sqcup$ represents the disjoint union.
\end{defn}
\begin{remark} It is worth mentioning that if we consider $M\in\mathcal{M}$ and $P\in\mathcal{P}$ as sets of dimers on edges of $G,$ then the action of $\Phi$ can be interpreted as superposing with $M_0$ and counting the number of dimers on each edge modulo 2.
Similarly, $\overline{\Phi}$ is the superposition with $\overline{M}_0$, with the number of dimers counted modulo 2.
\end{remark}
\begin{ex}
This example shows an interpretation of the map $\Phi$ on $\mathcal{M}$ as the superposition with $M_0$ modulo 2.
\begin{center}
\includegraphics[scale=0.7]{fig_ex_superposition}
\end{center}
We notice that the image of the map is indeed an element in $\mathcal{P}$.
\end{ex}
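Viewing matchings and path families simply as sets of edges, the mod-2 superposition of the remark above is the symmetric difference of sets. The following minimal Python sketch, with hypothetical edge labels not coming from an actual graph $G$, illustrates that superposing with $M_0$ twice recovers the original matching:

```python
# Hypothetical edge labels; "^" is symmetric difference for Python sets.
M0 = {"e1", "e3", "e5"}          # white-black horizontal/diagonal edges
M = {"e1", "e2", "e6"}           # a perfect matching, as a set of edges

Phi = lambda X: X ^ M0           # superpose with M0, count dimers mod 2
Psi = Phi                        # the inverse map is given by the same rule

P = Phi(M)
assert P == {"e2", "e3", "e5", "e6"}
assert Psi(P) == M                                 # Psi o Phi = id
assert all((e in M) == (e not in P) for e in M0)   # edge criterion on M0
```

The last assertion is the set-theoretic content of the edge criterion proved later in Proposition \ref{prop:iff}.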
Using the following lemmas, Proposition \ref{prop:bijection} shows that $\Phi$ is a well-defined bijection from $\mathcal{M}$ to $\mathcal{P}$.
\begin{lemma}\label{lem1}
Let $M\in\mathcal{M}$. Then $\Phi(M)$ has the following properties.
\begin{enumerate}
\item For any vertex in $V_{\rm left SW}(G)\cup V_{\rm right SE}(G)$, exactly one of its incident edges is in $\Phi(M)$.
\item For the other vertices, either none or two of their incident edges are in $\Phi(M).$
\end{enumerate}
\end{lemma}
\begin{proof}
To show (1), we first consider a vertex in $V_{\rm left SW}(G)$.
By the fact that $M$ is a perfect matching, exactly one of its incident edges is in $M$.
By Proposition \ref{prop_g}, the vertex is black.
Since it is the left-most, none of its incident edges is in $M_0$.
This is because there cannot be a white vertex on its left.
So, exactly one of its incident edges is in $\Phi(M)$.
Similarly, exactly one incident edge of a vertex in $V_{\rm right SE}(G)$ is in $M$.
None of its incident edges is in $M_0$ because the vertex is white and right-most. Hence (1) holds.
For (2), if a vertex is not in $V_{\rm left SW}(G)\cup V_{\rm right SE}(G)$, it must be either in $V_{\rm left NW}(G)\cup V_{\rm right NE}(G)$ or it has both left and right adjacent vertices.
For a vertex in $V_{\rm left NW}(G)\cup V_{\rm right NE}(G)$, there is one of its incident edges in $M,$ and also one in $M_0.$
They can be the same or different.
So, either none or two of its incident edges are in $\Phi(M)$.
For a vertex having left and right adjacent vertices, one of its incident edges must be in $M_0$.
This is because the vertex is either black and is on the right of a white vertex, or white and on the left of a black vertex.
Also, one of its incident edges must be in $M$ because $M$ is a perfect matching.
So either none or two of its incident edges are in $\Phi(M)$.
Hence (2) holds.
\end{proof}
\begin{lemma}\label{lem2}
For a black vertex of $G$ having two incident edges in $\Phi(M),$ the two edges must be of the form A, B or C in Figure \ref{fig_pathatblack} where the horizontal edges in the figure represent horizontal or diagonal edges of $G$.
\end{lemma}
\begin{proof}
It is easy to see that the six cases in Figure \ref{fig_pathatblack} are all configurations of two incident edges of a vertex of $G$.
Since $M$ is a perfect matching, one of the two edges must come from $M_0$.
Since the cases (D), (E) and (F) contain no edge from $M_0$, they cannot happen.
\end{proof}
\begin{lemma}\label{lem3}
For a white vertex of $G$ having two incident edges in $\Phi(M),$ the two edges must be of the form A, B or C in Figure \ref{fig_pathatwhite} where the horizontal edges in the figure represent horizontal or diagonal edges of $G$.
\end{lemma}
\begin{proof}
Similar to Lemma \ref{lem2}.
\end{proof}
\begin{figure}
\begin{center}
\includegraphics[scale=0.8]{fig_pathatblack}
\end{center}
\caption{
All configurations of two incident edges of a black vertex of $G$.
The horizontal edges in the picture represent horizontal/diagonal edges of $G$.
}
\label{fig_pathatblack}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=0.8]{fig_pathatwhite}
\end{center}
\caption{
All configurations of two incident edges of a white vertex of $G$.
The horizontal edges in the picture represent horizontal/diagonal edges of $G$.
}
\label{fig_pathatwhite}
\end{figure}
\begin{prop}\label{prop:bijection}
$\Phi:\mathcal{M}\rightarrow\mathcal{P}$ is a bijection.
\end{prop}
\begin{proof}
To show that $\Phi(M)$ is well-defined, we need to show that $\Phi(M)$ is a collection of non-intersecting paths from $V_{\rm left SW}(G)$ to $V_{\rm right SE}(G)$ in $G$, where $G$ is oriented as in Definition \ref{defn:edgeoreient}.
From Lemma \ref{lem2} and Lemma \ref{lem3}, only the case (A), (B) or (C) can happen at any vertex.
From this, every vertex incident to two edges in $\Phi(M)$ is neither a source nor a sink.
So $\Phi(M)$ is a set of non-intersecting paths in the oriented graph $G$, where a source or a sink is incident to exactly one edge in $\Phi(M)$.
From Lemma \ref{lem1}, the sources and the sinks must be in $V_{\rm left SW}(G) \cup V_{\rm right SE}(G)$.
But (1) and (2) of Proposition \ref{prop_g} guarantee that the vertices of $V_{\rm left SW}(G)$ must be the sources and the vertices of $V_{\rm right SE}(G)$ must be the sinks.
It remains to show that there is no loop in $\Phi(M)$.
Since every vertex in a loop is neither a source nor a sink, a loop must contain a horizontal/diagonal edge oriented from right to left.
This contradicts the orientation of $G$.
So $\Phi(M)$ contains no loop.
To show that $\Phi$ is a bijection, we construct another map $\Psi:\mathcal{P}\rightarrow\mathcal{M}$ by letting $\Psi(P)$ be the superposition of the path $P$ with $M_0$ and counting dimers on each edge modulo 2.
In other words $\Psi(P)=(P\cup M_0)\setminus (P\cap M_0).$
It is obvious that $\Psi\circ\Phi = \text{id}_\mathcal{M}$ and $\Phi\circ\Psi = \text{id}_\mathcal{P}.$ So, $\Psi = \Phi^{-1}$ provided that $\Psi$ is well-defined.
To show that $\Psi(P)$ is a perfect matching of $G,$ it suffices to show that any vertex has exactly one incident edge in $\Psi(P)$.
We first consider a black vertex.
\begin{itemize}
\item
If it is in $V_{\rm left SW}(G)$, it has one incident edge in $P$ because it is a source.
Also, it has no edge in $M_0$ since it is the left-most.
So, it still has one incident edge in $\Psi(P)$ after the superposition.
\item
If it is not in $V_{\rm left SW}(G)$, it has either none or two edges in $P.$
In both cases, since the vertex is not the left-most, there must be a white vertex on its left.
So it has an incident edge in $M_0.$
\begin{itemize}
\item If it has no edge in $P,$ it will receive an edge from $M_0$ after mapping by $\Psi$.
\item
If it has two edges in $P,$ it is either of the case A, B or C in Figure \ref{fig_pathatblack}.
So exactly one of the edges gets removed after superposing with $M_0.$
\end{itemize}
From both cases, the vertex is incident to exactly one edge in $\Psi(P)$.
\end{itemize}
A similar argument holds for a white vertex.
So we conclude that $\Psi(P)\in \mathcal{M}$.
\end{proof}
\begin{prop}\label{prop:iff}
Let $e\in M_0.$
Then $e\in M$ if and only if $e \notin \Phi(M)$.
\end{prop}
\begin{proof}
Let $e\in M_0.$ If $e\in M$, then $e\in M\cap M_0$.
Hence $e\notin (M\cup M_0)\setminus(M\cap M_0)=\Phi(M)$.
On the other hand, if $e\notin M$, then $e\notin M\cap M_0$.
Hence $e\in (M\cup M_0)\setminus(M\cap M_0)=\Phi(M).$
\end{proof}
Now we have analogs to Proposition \ref{prop:bijection} and Proposition \ref{prop:iff} for $\overline{G}.$
\begin{prop}\label{prop:bijectionbar}
Let $\overline{\Phi}$ be defined as in Definition \ref{defn:barphi}.
Then we have the following:
\begin{enumerate}
\item $\overline{\Phi}:\overline{\mathcal{M}}\rightarrow\overline{\mathcal{P}}$ is a bijection,
\item For $e\in \overline{M}_0,$ $e\in \overline{M}$ if and only if $e \notin \overline{\Phi}(\overline{M}).$
\end{enumerate}
\end{prop}
\begin{proof}
We recall that the symmetric difference $\triangle$ is commutative and associative.
Let $\overline{D} = \operatorname{Diag}(E(\overline{G})\setminus E(G)).$
Since $M\cap \overline{D}=\emptyset$ for $M\in\mathcal{M}$, $\mathcal{M}$ and $\overline{\mathcal{M}}$ are in bijection via the map:
\[
f:M\mapsto M\cup\overline{D} = M\triangle \overline{D},
\]
with the inverse map:
\[
f^{-1}:\overline{M}\mapsto \overline{M}\triangle \overline{D}.
\]
Consider $\overline{P}\in\overline{\mathcal{P}}$. We see from Proposition \ref{prop:modifiedopenface} that the paths from $V_{\rm left SW}(\overline{G})$ must go to the right via horizontal or diagonal edges in $E(\overline{G})\setminus E(G).$
Similarly, the paths arrive at $V_{\rm right SE}(\overline{G})$ via horizontal or diagonal edges in $E(\overline{G})\setminus E(G).$
Since there is a unique choice of these edges, we have $ \overline{P} = P\cup \overline{P}_0 $ where $\overline{P}_0\subset E(\overline{G})\setminus E(G)$ is the set of all horizontal and diagonal edges in the South-West, South or South-East.
Note that $\overline{P}_0$ is also equal to the set of white-black horizontal and black-white diagonal edges in $E(\overline{G})\setminus E(G).$
Since $P\cap \overline{P}_0=\emptyset$ for $P\in\mathcal{P}$, $\mathcal{P}$ and $\overline{\mathcal{P}}$ are in bijection via the map:
\[
g:P\mapsto P\cup \overline{P}_0 =P\triangle \overline{P}_0
\]
with the inverse map:
\[
g^{-1}:\overline{P}\mapsto \overline{P}\triangle \overline{P}_0.
\]
If we can show that $\overline{\Phi}=g\circ\Phi\circ f^{-1}$, then (1) follows automatically.
To do so, we first show that $\overline{M}_0 = \overline{D}\triangle M_0 \triangle \overline{P}_0$.
Consider
\[
\overline{D}\triangle M_0 \triangle \overline{P}_0
= \left( \overline{D} \triangle \overline{P}_0 \right) \triangle M_0
= (\overline{M}_0\setminus M_0) \triangle M_0
= (\overline{M}_0\setminus M_0) \cup M_0
= \overline{M}_0.
\]
From $\overline{M}_0 = \overline{D}\triangle M_0 \triangle \overline{P}_0$, we then have
\[
\overline{\Phi}(\overline{M}) = \overline{M} \triangle \overline{M}_0 = \overline{M} \triangle (\overline{D}\triangle M_0 \triangle \overline{P}_0) = ((\overline{M}\triangle \overline{D}) \triangle M_0)\triangle \overline{P}_0 = (g\circ\Phi\circ f^{-1})(\overline{M})
\]
for all $\overline{M}\in\overline{\mathcal{M}}$.
So, $\overline{\Phi}=g\circ\Phi\circ f^{-1}:\overline{\mathcal{M}}\rightarrow\overline{\mathcal{P}}$ is a bijection.
For~(2), since $\overline{\Phi}:\overline{M}\mapsto\overline{M}\triangle \overline{M}_0,$ we have that for any $e\in\overline{M}_0,$
\[
e\in \overline{M}\iff e\in\overline{M}\cap\overline{M}_0\iff e \notin (\overline{M}\cup\overline{M}_0)\setminus(\overline{M}\cap\overline{M}_0)=\overline{\Phi}(\overline{M}).
\]
Hence we proved (2).
\end{proof}
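The set-theoretic bookkeeping in this proof can be replayed on toy data. In the Python sketch below the edge labels are hypothetical; we only use that $\overline{D}$ and $\overline{P}_0$ are disjoint from $M$ and $P$ respectively, and take $\overline{M}_0 = \overline{D}\triangle M_0\triangle\overline{P}_0$ as derived above, to confirm the factorisation $\overline{\Phi}=g\circ\Phi\circ f^{-1}$ on an example:

```python
# Hypothetical edge labels.  Dbar and Pbar0 are disjoint from M and P
# respectively, and Mbar0 = Dbar △ M0 △ Pbar0 as in the proof.
M0 = {"h1", "h2"}               # white-black horiz./diag. edges of G
Dbar = {"d1", "d2"}             # Diag(E(Gbar) \ E(G))
Pbar0 = {"d2"}                  # forced boundary edges of paths in Gbar
M0bar = Dbar ^ M0 ^ Pbar0       # = {"d1", "h1", "h2"}

M = {"h1", "m1"}                # a matching of G (disjoint from Dbar)
Mbar = M | Dbar                 # f(M) = M ∪ Dbar = M △ Dbar

f_inv = lambda X: X ^ Dbar      # all four maps are symmetric differences
Phi = lambda X: X ^ M0
g = lambda X: X ^ Pbar0
Phi_bar = lambda X: X ^ M0bar

assert Phi_bar(Mbar) == g(Phi(f_inv(Mbar)))   # Phi-bar = g o Phi o f^{-1}
```

The assertion holds for any such toy data because the symmetric difference is commutative and associative, which is precisely the argument of the proof.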
\subsection{Modified edge-weight and nonintersecting-path solution}
We recall the edge-weight $w_e$ in Definition \ref{defn:edgeweight}.
It is compatible with the perfect-matching solution (Theorem \ref{thm:edgesol}).
In order to construct a nonintersecting-path solution from the bijection $\overline{\Phi}$, the edge-weight $w_e$ requires some modification.
Due to Proposition \ref{prop:bijectionbar}, we only need to invert the weight of all edges in $\overline{M}_0$.
\begin{defn}[Modified edge-weight $w_e'$]\label{defn:modifiededgeweight}
For $x\in E(\overline{G})$ we define the \defemph{modified edge-weight} as follows:
\[
w_e'(x):=
\begin{cases}
w_e(x)^{-1}, & x\in\overline{M}_0,\\
w_e(x), & \text{otherwise,}
\end{cases}
\]
where $w_e$ is the edge-weight defined in Definition \ref{defn:edgeweight}.
\end{defn}
\begin{defn}[Modified edge-weight for paths in $\overline{G}$]
For a path $p=x_1 x_2 \dots x_n$ in $\overline{G}$, its \defemph{modified edge-weight} is defined to be the following product
\[
w_e'(p) := \prod_{i=1}^n w_e'(x_i) .
\]
Then the weight of a collection of non-intersecting paths is defined by
\[
w_e'(\overline{P}) := \prod_{p\in\overline{P}} w_e'(p)\quad\text{for}\enspace\overline{P}\in\overline{\mathcal{P}}.
\]
\end{defn}
Now we are ready for a nonintersecting-path solution for $\overline{G}$.
Note that we can also consider the solution for $G$, but it turns out to be more complicated due to the presence of open faces of $G$, which need to be treated separately.
\begin{thm}[Nonintersecting-path solution for $\overline{G}$]\label{thm_pathsol}
Let $\k$ be an admissible initial data stepped surface with respect to $p=(i_0,j_0,k_0)$, and let $\overline{M}_0$ be defined as in Definition \ref{defn:sourcesink}.
Then
\begin{align*}
T_{i_0,j_0,k_0} =\sum_{\overline{P}\in\mathcal{\overline{P}}} w_e'(\overline{P}) \Big/ \prod_{\circ\underset{b}{\text{---}}\bullet\in \overline{M}_0}\overline{p}_b,
\end{align*}
where $\overline{\mathcal{P}}$ is the set of all non-intersecting paths in $\overline{G}$ from $V_{\rm left SW}(\overline{G})$ to $V_{\rm right SE}(\overline{G})$.
\end{thm}
\begin{proof}
Let $\overline{M}\in\overline{\mathcal{M}}$.
From Definition \ref{defn:modifiededgeweight}, we have
\[
w_e'(\overline{M}\setminus\overline{M}_0)=w_e(\overline{M}\setminus\overline{M}_0) \text{ and } w_e'(\overline{M}_0\setminus\overline{M})=w_e(\overline{M}_0\setminus\overline{M})^{-1}.
\]
Since we can write the symmetric difference as the disjoint union $A\triangle B = (A\setminus B) \sqcup (B\setminus A)$, we have
\begin{align*}
w_e(\overline{M})
&=w_e(\overline{M}\setminus\overline{M}_0)w_e(\overline{M}\cap\overline{M}_0) \\
&=w_e(\overline{M}\setminus\overline{M}_0)w_e(\overline{M}_0\setminus\overline{M})^{-1}w_e(\overline{M}_0\setminus\overline{M})w_e(\overline{M}\cap\overline{M}_0) \\
&=w_e'(\overline{M}\setminus\overline{M}_0)w_e'(\overline{M}_0\setminus\overline{M})w_e(\overline{M}_0) \\
&=w_e'(\overline{M}\triangle \overline{M}_0)w_e(\overline{M}_0) \\
&=w_e'(\overline{\Phi}(\overline{M}))w_e(\overline{M}_0).
\end{align*}
From Theorem \ref{thm:edgesol} and the previous equality, we have
\begin{align*}
T_{i_0,j_0,k_0}
&= \sum_{\overline{M}\in\overline{\mathcal{M}}} w_e(\overline{M}) \Big/ w_e(\overline{M}_0)\big|_{c_{i,j}=1} \\
&= \sum_{\overline{M}\in\overline{\mathcal{M}}} w_e'\left(\overline{\Phi}(\overline{M})\right) \frac{w_e(\overline{M}_0)}{ w_e(\overline{M}_0)\big|_{c_{i,j}=1}} \\
&= \sum_{\overline{P}\in\overline{\mathcal{P}}} w_e'(\overline{P}) \Big/ \prod_{\circ\underset{b}{\text{---}}\bullet\in \overline{M}_0}\overline{p}_b.
\end{align*}
This proves the theorem.
\end{proof}
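The single identity driving this proof, $w_e(\overline{M}) = w_e'(\overline{\Phi}(\overline{M}))\,w_e(\overline{M}_0)$, holds for any weight function once $w_e'$ inverts $w_e$ on $\overline{M}_0$. A minimal Python check with hypothetical edge weights:

```python
from fractions import Fraction
from math import prod

# Hypothetical edge weights; M0bar and Mbar are arbitrary edge subsets.
w_e = {"a": Fraction(2), "b": Fraction(3, 5), "c": Fraction(7),
       "d": Fraction(1, 4), "e": Fraction(5)}
M0bar = {"a", "c", "e"}
Mbar = {"a", "b", "d"}

w = lambda S: prod(w_e[x] for x in S)          # weight w_e of an edge set
w_prime = lambda S: prod(                      # modified weight w_e'
    w_e[x] ** (-1 if x in M0bar else 1) for x in S)

# Phi-bar(Mbar) = Mbar △ M0bar, so the identity in the proof reads:
assert w(Mbar) == w_prime(Mbar ^ M0bar) * w(M0bar)
```

The edges of $\overline{M}\cap\overline{M}_0$ cancel on both sides, which is exactly the chain of equalities displayed in the proof.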
The nonintersecting-path solution obtained in this section hints that it is possible to have a network solution analogous to \cite{DFK13,DF14}.
In the next section, we will transform the oriented graph $\overline{G}$ to a network.
The modified edge-weight will be used on the network as well.
This leads to a network solution and a network-matrix solution.
\section{Network solution}\label{sec_network}
In this section, we will construct a weighted directed network $N$ associated with the oriented graph $\overline{G}$ and the modified edge-weight $w_e'$.
We then decompose $N$ into network chips and their associated elementary network matrices (adjacency matrices).
The product of all the elementary matrices associated with the network chips, according to an order of the chips, is then called the network matrix associated to $N$.
The nonintersecting-path solution for $\overline{G}$ (Theorem \ref{thm_pathsol}) can then be interpreted as a path solution on this network, and can also be computed from a certain minor of the network matrix.
We also show that our network and elementary network matrices coincide with the objects studied in \cite{DFK13} in the case of coefficient-free T-systems ($c_{i,j}=1$ for all $(i,j)\in\mathbb{Z}^2$).
\subsection{Network associated with a graph}
We construct the \defemph{directed network} $N$ associated with the oriented graph $\overline{G}$ by tilting all the diagonal edges so that they become horizontal, and tilting all the vertical edges so that the vertex on the left is black as shown in Figure \ref{fig_flatG}.
Also, the network is directed from left to right.
So a path in the oriented graph $\overline{G}$ (Definition \ref{defn:edgeoreient}) corresponds to a path in $N$.
We then introduce the notion of ``row'' for vertices of $N$.
Two vertices are said to be in the same row if they are joined by a connected path of horizontal or diagonal edges.
In other words, they are on the left and right of each other in $\overline{G}$, see Definition \ref{defn:leftright}.
We number the rows so that they increase by one from bottom to top and the center face $(i_0,j_0)$ of $N$ lies between rows $-1$ and $0$; see Figure \ref{fig_flatG}.
The precise definition is as follows.
\begin{defn}\label{defn:row}
The vertex $v\in V(\overline{G})$ is in \defemph{row} $r$ if two of its incident faces are $(i,j_0+r)$ and $(i,j_0+r+1)$ for some $i\in\mathbb{Z}$.
\end{defn}
\begin{figure}%
\begin{center}\includegraphics[scale=0.5]{fig_flatG}\end{center}
\caption{An example of $\overline{G}$ and its associated network $N$.}
\label{fig_flatG}%
\end{figure}
Let $r_{\rm min}$ and $r_{\rm max}$ denote the smallest and the largest row numbers of $N$.
We then put weights on the network locally around each black vertex as shown in Figure \ref{fig_tablenetwork}.
The weight comes directly from the modified edge-weight $w_e'$ on $\overline{G}$ (Definition \ref{defn:modifiededgeweight}), so we will carry the notation $w_e'$ for the weight on $N$.
\begin{remark}
When a black vertex in Figure \ref{fig_tablenetwork} is on the boundary, we will assume the following.
\begin{align*}
t_a =1, \text{ when }a\notin F(\overline{G}),\\
p_b =\overline{p}_b =1, \text{ when }b\notin F(G).
\end{align*}
This assumption comes directly from Definition \ref{defn:edgeweight}.
\end{remark}
\begin{table}%
\begin{center}\input{table2}\end{center}
\caption{
A table comparing a part of $\overline{G}$ around a black vertex, its corresponding elementary network chip in $N$, and its corresponding elementary network matrix defined in Definition \ref{defn:elementarymatrix}.
}
\label{fig_tablenetwork}%
\end{table}
\subsection{Nonintersecting-path solution for the network}
We have already defined the weighted directed network $N$ associated with $\overline{G}$, where the weight on $N$ comes directly from the weight on $\overline{G}$.
The nonintersecting-path solution for $\overline{G}$ (Theorem \ref{thm_pathsol}) can then be interpreted in terms of nonintersecting paths in $N$.
The left-most SW vertices of $\overline{G}$ correspond to the left-most vertices in the rows $[r_{\rm min},-1]:=\{r_{\rm min},\dots,-2,-1 \}$ of $N$, while the right-most SE vertices of $\overline{G}$ correspond to the right-most vertices in the rows $[r_{\rm min},-1]$ of $N$.
We then get the following theorem.
\begin{thm}[Nonintersecting-path solution for $N$]\label{thm:network}
Let $N$ be the weighted directed network associated with $\overline{G}$.
Then
\[
T_{i_0,j_0,k_0} = Q^{-1} \sum_{P} w_e'(P),
\]
where the sum runs over all families of non-intersecting paths on $N$ from the rows $[r_{\rm min},-1]$ on the left to the rows $[r_{\rm min},-1]$ on the right, and $Q :=\displaystyle\prod_{\substack{\circ\text{---}\bullet\\b}\in \overline{M}_0}\overline{p}_b$.
\end{thm}
\subsection{Network matrix}\label{subsec:networkmatrix}
We will now decompose $N$ into \defemph{elementary network chips}, which are small pieces of $N$ around black vertices illustrated in Figure \ref{fig_tablenetwork}.
This can be done by breaking the network at all of its white vertices.
See Figure \ref{fig_networkdecomp} for an example.
Note that there is a choice to be made when two vertical edges are incident to a white vertex.
\begin{center}
\includegraphics[scale=0.8]{fig_localnetworkdecomp}
\end{center}
Since the weight of a path on $N$ is independent of this choice, the nonintersecting-path solution is independent of this ambiguity.
\begin{figure}%
\begin{center}\includegraphics[scale=0.9]{fig_networkdecomp}\end{center}
\caption{A network is decomposed into elementary network chips}%
\label{fig_networkdecomp}%
\end{figure}
A decomposition gives a partial order on the network chips; we then pick a ``finer'' total order that does not contradict the partial order.
This total order can be thought of as the order in which chips are pulled out of the network from the left.
\begin{ex}
From the example in Figure \ref{fig_flatG}, we can pick a network decomposition as in Figure \ref{fig_networkdecomp}, and then pick a total order as in Figure \ref{fig_sixorder}.
\end{ex}
\begin{figure}%
\begin{center}\includegraphics[scale=0.7]{fig_totalorder}\end{center}
\caption{A total order obtained from the decomposition in Figure \ref{fig_networkdecomp}}%
\label{fig_sixorder}%
\end{figure}
The next step is to associate each chip with an elementary network matrix shown in Figure \ref{fig_tablenetwork}.
The matrices $W_r$, $V_r$, $V'_r$, $U_r$ and $U'_r$ are defined in the following definition.
\begin{defn}\label{defn:elementarymatrix}
We define an \defemph{elementary network matrix} associated with a network $N$, depending on its configuration around a black vertex in Figure \ref{fig_tablenetwork}, to be a square matrix of size $r_{\rm max}-r_{\rm min}+1$ with entries:
\begin{align*}
(U_r(x,y,z))_{\alpha,\beta}&:=
\begin{cases}
(U(x,y,z))_{\alpha-r+2,\beta-r+2}, &\text{if }\alpha,\beta\in\{r-1,r\},\\
\delta_{\alpha,\beta}, &\text{otherwise,}
\end{cases}\\
(V_r(x,y,z))_{\alpha,\beta}&:=
\begin{cases}
(V(x,y,z))_{\alpha-r+1,\beta-r+1}, &\text{if }\alpha,\beta\in\{r,r+1\},\\
\delta_{\alpha,\beta}, &\text{otherwise,}
\end{cases}\\
(W_r(w,x,y,z))_{\alpha,\beta}&:=
\begin{cases}
(W(w,x,y,z))_{\alpha-r+2,\beta-r+2}, &\text{if }\alpha,\beta\in\{r-1,r,r+1\},\\
\delta_{\alpha,\beta}, &\text{otherwise,}
\end{cases}
\end{align*}
and $U'_r$ (resp. $V'_r$) is defined in the same way as $U_r$ (resp. $V_r$) where
\begin{align*}
U(t_a,t_b,t_c)&:=
\begin{pmatrix}
1&0 \\
\overline{p}_a\frac{t_c}{t_b} & \overline{p}_a \frac{t_a}{t_b}
\end{pmatrix},\qquad
U'(t_a,t_b,t_d):=
\begin{pmatrix}
1&0 \\
\frac{t_d}{t_b} & p_d \frac{t_a}{t_b}
\end{pmatrix},\\
V(t_a,t_c,t_d)&:=
\begin{pmatrix}
\overline{p}_a\frac{t_c}{t_d} & \overline{p}_a \frac{t_a}{t_d}\\
0&1
\end{pmatrix},\qquad
V'(t_b,t_c,t_d):=
\begin{pmatrix}
p_d\frac{t_c}{t_d} & \frac{t_b}{t_d} \\
0&1
\end{pmatrix},\\
W(t_a,t_b,t_c,t_d)&:=
\begin{pmatrix}
1&0&0\\
\overline{p}_a \frac{t_c}{t_b} & \overline{p}_ap_d\frac{t_at_c}{t_bt_d} & \overline{p}_a\frac{t_a}{t_d} \\
0&0&1
\end{pmatrix}.
\end{align*}
The $(i,j)$ entry of a network matrix is just the weight of the edge from row $i$ to row $j$ in the network chip, see Figure \ref{fig_tablenetwork}.
\end{defn}
If two network chips are incomparable in the partial order coming from the decomposition, then their corresponding elementary network matrices commute.
Given a total order on the network chips, we form the product of all the elementary network matrices in that order.
This product is independent of the choice of total order and decomposition.
We call it the \defemph{network matrix} associated with $N$.
\begin{remark}\label{rem:collapsingvscommuting}
The elementary network matrix $W_r$ defined in Definition \ref{defn:elementarymatrix} can be factored as follows:
\[
V_r(t_a,t_c,t_d)U'_r(t_a,t_b,t_d)=W_r(t_a,t_b,t_c,t_d)=U_r(t_a,t_b,t_c)V'_r(t_b,t_c,t_d).
\]
This corresponds to the following picture when collapsing a degree-2 vertex.
\begin{center}
\includegraphics[scale=0.8]{fig_twocollapse}
\end{center}
\end{remark}
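The factorization in the remark above can be checked numerically. The following Python sketch (our own, not taken from the references; the variables \texttt{ta}, \texttt{tb}, \texttt{tc}, \texttt{td}, \texttt{pa}, \texttt{pd} stand for $t_a,t_b,t_c,t_d$, $\overline{p}_a$, $p_d$, with arbitrary nonzero test values) builds the $3\times 3$ embeddings for $r=1$ on rows $\{0,1,2\}$ and verifies $V_rU'_r = W_r = U_rV'_r$:

```python
from fractions import Fraction as F

ta, tb, tc, td = F(2), F(3), F(5), F(7)   # arbitrary values for t_a, t_b, t_c, t_d
pa, pd = F(11), F(13)                     # arbitrary values for \overline{p}_a and p_d

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def embed(M2, rows, n=3):
    """Embed a 2x2 matrix into the given rows/cols of an n x n identity."""
    E = [[F(int(i == j)) for j in range(n)] for i in range(n)]
    for a, i in enumerate(rows):
        for b, j in enumerate(rows):
            E[i][j] = M2[a][b]
    return E

# The 2x2 blocks of Definition "elementarymatrix"
U  = [[F(1), F(0)], [pa * tc / tb, pa * ta / tb]]       # U(t_a,t_b,t_c)
Up = [[F(1), F(0)], [td / tb,      pd * ta / tb]]       # U'(t_a,t_b,t_d)
V  = [[pa * tc / td, pa * ta / td], [F(0), F(1)]]       # V(t_a,t_c,t_d)
Vp = [[pd * tc / td, tb / td],      [F(0), F(1)]]       # V'(t_b,t_c,t_d)
W  = [[F(1), F(0), F(0)],
      [pa * tc / tb, pa * pd * ta * tc / (tb * td), pa * ta / td],
      [F(0), F(0), F(1)]]                               # W(t_a,t_b,t_c,t_d)

# With r = 1 and rows indexed 0,1,2: U_r, U'_r live on rows {r-1, r},
# V_r, V'_r on rows {r, r+1}, and W_r fills all three rows.
U1, Up1 = embed(U, (0, 1)), embed(Up, (0, 1))
V1, Vp1 = embed(V, (1, 2)), embed(Vp, (1, 2))

assert matmul(V1, Up1) == W          # V_r U'_r = W_r
assert matmul(U1, Vp1) == W          # U_r V'_r = W_r
```

With exact rationals the comparison is an exact equality of matrices, not a floating-point approximation.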
\begin{remark}\label{rem:UVfactor}
$U_r,U'_r,V_r$ and $V'_r$ defined in Definition \ref{defn:elementarymatrix} can also be factored.
This factorization separates the coefficients $\overline{p}_a$'s and $p_d$'s from the variables $t_\alpha$'s.
\begin{align*}
U(t_a,t_b,t_c) &=
\begin{pmatrix}
1&0 \\
0&\overline{p}_a
\end{pmatrix}
\begin{pmatrix}
1&0 \\
\frac{t_c}{t_b}&\frac{t_a}{t_b}
\end{pmatrix}, \quad
U'(t_a,t_b,t_d) =
\begin{pmatrix}
1&0 \\
\frac{t_d}{t_b}&\frac{t_a}{t_b}
\end{pmatrix}
\begin{pmatrix}
1&0 \\
0&p_d
\end{pmatrix},\\
V(t_a,t_c,t_d) &=
\begin{pmatrix}
\overline{p}_a&0 \\
0&1
\end{pmatrix}
\begin{pmatrix}
\frac{t_c}{t_d}&\frac{t_a}{t_d} \\
0&1
\end{pmatrix},\quad
V'(t_b,t_c,t_d) =
\begin{pmatrix}
\frac{t_c}{t_d}&\frac{t_b}{t_d} \\
0&1
\end{pmatrix}
\begin{pmatrix}
p_d&0 \\
0&1
\end{pmatrix}.
\end{align*}
\end{remark}
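The four factorizations in this remark can be checked with a similar sketch (again with arbitrary test values; the matrices are written out explicitly from the displays above, and the names are ours):

```python
from fractions import Fraction as F

ta, tb, tc, td = F(2), F(3), F(5), F(7)   # arbitrary values for t_a, t_b, t_c, t_d
pa, pd = F(11), F(13)                     # arbitrary values for \overline{p}_a and p_d

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# U = diag(1, pa) * (coefficient-free part)
assert matmul([[F(1), F(0)], [F(0), pa]], [[F(1), F(0)], [tc/tb, ta/tb]]) \
    == [[F(1), F(0)], [pa*tc/tb, pa*ta/tb]]
# U' = (coefficient-free part) * diag(1, pd)
assert matmul([[F(1), F(0)], [td/tb, ta/tb]], [[F(1), F(0)], [F(0), pd]]) \
    == [[F(1), F(0)], [td/tb, pd*ta/tb]]
# V = diag(pa, 1) * (coefficient-free part)
assert matmul([[pa, F(0)], [F(0), F(1)]], [[tc/td, ta/td], [F(0), F(1)]]) \
    == [[pa*tc/td, pa*ta/td], [F(0), F(1)]]
# V' = (coefficient-free part) * diag(pd, 1)
assert matmul([[tc/td, tb/td], [F(0), F(1)]], [[pd, F(0)], [F(0), F(1)]]) \
    == [[pd*tc/td, tb/td], [F(0), F(1)]]
```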
Since the entry $(i,j)$ of the network matrix is the partition function of weighted paths from row $i$ to row $j$, by the Lindstr\"{o}m-Gessel-Viennot theorem \cite{Lindstrom,GV} the partition function of weighted non-intersecting paths from the rows $[r_{\rm min},-1]$ on the left to the rows $[r_{\rm min},-1]$ on the right is the principal minor of the network matrix corresponding to the rows/columns $[r_{\rm min},-1]$.
Theorem \ref{thm:network} then gives the following.
\begin{thm}[Network-matrix solution]\label{thm:networkmatrix}
Let $M$ be the network matrix associated with the network $N$ of the graph $\overline{G}$. Then
\[
T_{i_0,j_0,k_0} = Q^{-1} |M|_{r_{\rm min},\dots,-2,-1}^{r_{\rm min},\dots,-2,-1},
\]
where $|M|_{r_{\rm min},\dots,-2,-1}^{r_{\rm min},\dots,-2,-1}$ denotes the principal minor of $M$ corresponding to the rows $[r_{\rm min},-1]$ and columns $[r_{\rm min},-1]$, and $Q :=\displaystyle\prod_{\substack{\circ\text{---}\bullet\\b}\in \overline{M}_0}\overline{p}_b$.
\end{thm}
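The linear-algebra step in Theorem \ref{thm:networkmatrix} (multiply the elementary matrices in chip order, then extract a principal minor) can be sketched generically in Python. The helper names \texttt{network\_matrix} and \texttt{principal\_minor} are ours, and the chip matrices themselves would come from the decomposition of a concrete graph; here we only run a toy check:

```python
from fractions import Fraction as F
from functools import reduce

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def det(M):
    """Determinant by Gaussian elimination with exact rationals."""
    M = [row[:] for row in M]
    n, d = len(M), F(1)
    for c in range(n):
        piv = next((r for r in range(c, n) if M[r][c] != 0), None)
        if piv is None:
            return F(0)
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            d = -d
        d *= M[c][c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for j in range(c, n):
                M[r][j] -= f * M[c][j]
    return d

def network_matrix(chips):
    """Product of the elementary network matrices, in chip order."""
    return reduce(matmul, chips)

def principal_minor(M, rows):
    return det([[M[i][j] for j in rows] for i in rows])

# Toy check on 3x3 matrices (not from an actual graph decomposition)
A = [[F(1), F(0), F(0)], [F(2), F(3), F(0)], [F(0), F(0), F(1)]]
I3 = [[F(int(i == j)) for j in range(3)] for i in range(3)]
assert principal_minor(network_matrix([A, I3]), [0, 1]) == 3
```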
We may absorb the factor $Q^{-1}$ into the elementary network matrices by defining the \defemph{modified elementary network matrices} as follows.
\begin{align}
\begin{aligned}
\overline{U}_r(t_a,t_b,t_c) &:= (\overline{p}_a)^{-1} U_r(t_a,t_b,t_c),\quad &
\overline{U}'_r(t_a,t_b,t_d) &:= U'_r(t_a,t_b,t_d),\\
\overline{V}_r(t_a,t_c,t_d) &:= (\overline{p}_a)^{-1} V_r(t_a,t_c,t_d),\quad &
\overline{V}'_r(t_b,t_c,t_d) &:= V'_r(t_b,t_c,t_d),\\
\overline{W}_r(t_a,t_b,t_c,t_d) &:= (\overline{p}_a )^{-1}W_r(t_a,t_b,t_c,t_d).
\end{aligned}
\end{align}
Then the \defemph{modified network matrix} associated with $N$ is defined as the product of all the modified elementary network matrices according to a total order of the network chips, as before.
Theorem \ref{thm:networkmatrix} becomes the following.
\begin{thm}[Modified-network-matrix solution]\label{thm:modifiednetworkmatrix}
Let $\overline{M}$ be the modified network matrix associated with the network $N$ of the graph $\overline{G}$.
Then
\[
T_{i_0,j_0,k_0} = |\overline{M}|_{r_{\rm min},\dots,-2,-1}^{r_{\rm min},\dots,-2,-1},
\]
where $|\overline{M}|_{r_{\rm min},\dots,-2,-1}^{r_{\rm min},\dots,-2,-1}$ is the minor of $\overline{M}$ corresponding to the rows $[r_{\rm min},-1]$ and columns $[r_{\rm min},-1]$.
\end{thm}
\begin{remark}[Flatness condition]
It is worth pointing out that the condition
\begin{align}
\overline{U}'_{k+1}(t_\ell,t_c,t_u)\overline{V}_{k}(t_d,t_c,t_r) = \overline{V}'_k(t_d,t_\ell,t_c')\overline{U}_{k+1}(t_c',t_r,t_u)
\label{eq:flatcond}
\end{align}
is equivalent to
\[
t'_c t_{c} = J_{i,j,k} t_\ell t_r + t_u t_d
\]
where $c=(i,j)$, $\ell=(i-1,j)$, $r=(i+1,j)$, $u=(i,j+1)$, $d=(i,j-1)$ and $k=\k(c)-1=\k(\ell)=\k(r)=\k(u)=\k(d)$.
It corresponds to the following action on networks.
\begin{center}
\includegraphics[scale=0.8]{fig_urbannetwork}
\end{center}
In the graph $\overline{G}$, this is the urban renewal (Figure \ref{fig:urban}) at the face $c$.
\begin{center}
\includegraphics[scale=0.8]{fig_urban2}
\end{center}
On the stepped surface $\k$, this is a downward-mutation at $c$.
Elementary network matrices therefore encode the octahedron recurrence \eqref{eq:tsyscoef} as the flatness condition \eqref{eq:flatcond}.
\end{remark}
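In the coefficient-free specialization $\overline{p}=p=1$ (so the recurrence reads $t'_ct_{c} = t_\ell t_r + t_u t_d$), the flatness condition can be verified numerically. Both sides of \eqref{eq:flatcond} act on the same two rows, so only the $2\times 2$ blocks matter; the sketch below (ours, with arbitrary test values) solves for $t'_c$ from the recurrence and checks the matrix identity:

```python
from fractions import Fraction as F

# Arbitrary values for t_l, t_r, t_u, t_d; t_c chosen freely, t'_c forced
tl, tr, tu, td = F(2), F(3), F(5), F(7)
tc = F(11)
tcp = (tl * tr + tu * td) / tc   # octahedron recurrence with coefficients set to 1

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Coefficient-free (pbar = p = 1) elementary 2x2 blocks
def Up(a, b, d):  # U'(t_a, t_b, t_d)
    return [[F(1), F(0)], [d / b, a / b]]
def V(a, c, d):   # V(t_a, t_c, t_d)
    return [[c / d, a / d], [F(0), F(1)]]
def Vp(b, c, d):  # V'(t_b, t_c, t_d)
    return [[c / d, b / d], [F(0), F(1)]]
def U(a, b, c):   # U(t_a, t_b, t_c)
    return [[F(1), F(0)], [c / b, a / b]]

lhs = matmul(Up(tl, tc, tu), V(td, tc, tr))
rhs = matmul(Vp(td, tl, tcp), U(tcp, tr, tu))
assert lhs == rhs   # flatness holds exactly when t'_c t_c = t_l t_r + t_u t_d
```

Replacing \texttt{tcp} by any other value breaks the assertion, in line with the stated equivalence.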
\subsection{Lozenge covering}
In \cite{DFK13} the authors construct a lozenge covering from a stepped surface and use it to construct a weighted network.
Then the solution to the coefficient-free T-system is a partition function of non-intersecting paths in the network.
We recall the procedure in \cite{DFK13} with some modification so that it fits in our setting.
Starting from a stepped surface $\k$, a lozenge covering is constructed depending on the heights of the four corners of a square: $(i,j)$, $(i+1,j)$, $(i,j+1)$ and $(i+1,j+1)$.
The rule can be summarized in the following table.
\begin{center}
\includegraphics[scale=0.75]{fig_lozengecover}
\end{center}
We note that the choice of triangulation of the squares in the first two cases means that a lozenge covering is not unique.
Next, we restrict the covering to the points $(i,j)$ in $\mathring{F}\cup\partial F$.
This gives a triangulation of a finite region.
The next step is to group two triangles sharing a horizontal side together.
As a result, we obtain ``elementary chips'' as follows.
\begin{center}
\includegraphics[scale=0.75]{fig_elementarychips}
\end{center}
They are classified into two types depending on whether the shaded triangle is on the top or the bottom of the lozenge.
An ``elementary matrix'' is then associated with each type depending on the corners of the shaded triangle of a lozenge as follows.
\begin{center}
\includegraphics[scale=0.75]{fig_uv}
\end{center}
We recall the notion of row defined in Definition \ref{defn:row}.
The matrices $U_r(x,y,z)$ and $V_r(x,y,z)$ are square matrices of size $(r_{\rm max}-r_{\rm min}+1)\times (r_{\rm max}-r_{\rm min}+1)$ with entries
\begin{align*}
(U_r(x,y,z))_{\alpha,\beta}=
\begin{cases}
(U(x,y,z))_{\alpha-r+2,\beta-r+2}, &\text{if }\alpha,\beta\in\{r-1,r\},\\
\delta_{\alpha,\beta}, &\text{otherwise},
\end{cases}\\
(V_r(x,y,z))_{\alpha,\beta}=
\begin{cases}
(V(x,y,z))_{\alpha-r+1,\beta-r+1}, &\text{if }\alpha,\beta\in\{r,r+1\},\\
\delta_{\alpha,\beta}, &\text{otherwise},
\end{cases}
\end{align*}
The $2\times 2$ matrices $U(x,y,z)$ and $V(x,y,z)$ are defined as
\[
U(x,y,z)=\begin{pmatrix}1&0\\\frac{z}{y}&\frac{x}{y}\end{pmatrix},\quad
V(x,y,z)=\begin{pmatrix}\frac{y}{z}&\frac{x}{z}\\0&1\end{pmatrix}.
\]
We can immediately see that this is a specialization of the elementary network matrices in Definition \ref{defn:elementarymatrix} to the case when $c_{i,j}=1$ for all $(i,j)\in\mathbb{Z}^2$.
Following the same idea as the network-chip decomposition in Section \ref{subsec:networkmatrix}, the lozenge covering is decomposed.
This gives an ordering of the elementary network matrices.
We then construct a ``network matrix'', which is the product of all the elementary network matrices according to this order.
The cluster variable $T_{i_0,j_0,k_0}$ is then expressed as a certain minor of the network matrix, similar to Theorem \ref{thm:networkmatrix} and Theorem \ref{thm:modifiednetworkmatrix}.
\begin{remark}
In the original definition of $U_r$ and $V_r$ in \cite{DFK13}, the index of $U$ is shifted by 1, i.e. $\mathcal{U}_r = U_{r+1}$ where $\mathcal{U}_r$ is the network matrix defined in \cite{DFK13}.
\end{remark}
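To make the comparison concrete, the following sketch (function names are ours) checks that the $2\times 2$ blocks of Definition \ref{defn:elementarymatrix} specialize to the lozenge matrices above when $\overline{p}_a = p_d = 1$:

```python
from fractions import Fraction as F

ta, tb, tc, td = F(2), F(3), F(5), F(7)   # arbitrary values for t_a, t_b, t_c, t_d

def U_lozenge(x, y, z):  # U(x,y,z) as recalled from [DFK13]
    return [[F(1), F(0)], [z / y, x / y]]

def V_lozenge(x, y, z):  # V(x,y,z) as recalled from [DFK13]
    return [[y / z, x / z], [F(0), F(1)]]

# Our U(t_a,t_b,t_c) and V(t_a,t_c,t_d) with all coefficients set to 1
# (pbar_a = p_d = 1) in Definition "elementarymatrix":
U_coef = [[F(1), F(0)], [tc / tb, ta / tb]]
V_coef = [[tc / td, ta / td], [F(0), F(1)]]

assert U_coef == U_lozenge(ta, tb, tc)
assert V_coef == V_lozenge(ta, tc, td)
```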
In our construction, the existence of coefficients forces us to make a finer classification of the elementary network matrices as follows:
\begin{center}
\includegraphics[scale=0.75]{fig_uuvv}
\end{center}
From Remark \ref{rem:UVfactor}, we can think of the coefficients as living on the vertical sides of the shaded triangles. From Remark \ref{rem:collapsingvscommuting}, the matrix $W_r$ can be realized as a combination of two lozenges with the following choice of decomposition.
\begin{center}
\includegraphics[scale=0.75]{fig_W}
\end{center}
The main reason the lozenge covering is so closely connected to our construction is that it is dual to our bipartite graph.
This can be locally described as follows:
\begin{center}
\includegraphics[scale=0.75]{fig_graphlozenge}
\end{center}
A choice of triangulation of a white square corresponds to a choice of decomposition at a white vertex, and a choice of triangulation of a black square corresponds to a choice of collapsing a degree-2 white vertex as in Remark \ref{rem:collapsingvscommuting}.
These can be illustrated by the following pictures:
\begin{center}
\includegraphics[scale=0.75]{fig_choicelozenge}
\end{center}
We have provided various solutions to the T-system with principal coefficients.
These solutions give combinatorial expressions for $T_{i_0,j_0,k_0}$ in terms of the coefficients $c_{i,j}$ and the initial data $t_{i,j}$ on $\k$ under the conditions:
\[ k_0\geq \k(i_0,j_0) ,\quad\k(i,j)\geq \mathbf{fund}(i,j)\enspace\text{for }(i,j)\in\mathbb{Z}^2.\]
We will discuss some other cases in Section \ref{sec_General}.
\section{Other coefficients}\label{sec_app}
In this section, we discuss a few examples of other choices of coefficients on T-systems: Speyer's octahedron recurrence \cite{Speyer}, generalized lambda-determinants \cite{DF13} and (higher) pentagram maps \cite{Glick,GSTV14}.
Almost all of them have their own explicit combinatorial solutions and are treated with different techniques.
Applying Theorem \ref{thm:sep} and Theorem \ref{thm:YasF} to our solutions, we get a partial solution to each of them when the initial data stepped surface is $\mathbf{fund}$.
In \cite{Speyer}, the author provides a perfect matching solution to Speyer's octahedron recurrence, which is a partition function of perfect matchings of $G$.
In fact, our perfect matching solution is developed from the method used in that paper.
The author uses a face-weight (the same as Definition \ref{defn:faceweight}), which gives the cluster variables, and an edge-weight (instead of our pairing-weight), which gives the cluster coefficients.
So the main difference is that the edge-weight is specific to a choice of coefficients.
For the generalized lambda-determinant in \cite{DF13}, the author provides a network solution which is a partition function of non-intersecting paths in a weighted directed network.
This network is the same as the network discussed in Section \ref{sec_network}.
However, the weight used in \cite{DF13} is different from ours due to the choice of coefficients.
The (higher) pentagram maps \cite{Schwartz,OST,Glick,GSTV14} can be realized as a Y-pattern, i.e.\ a dynamics on cluster coefficients rather than cluster variables.
So it is not directly a T-system, but is instead a Y-pattern on the octahedron quiver.
In \cite{Glick}, the author gives a combinatorial solution to the pentagram map using alternating sign matrices.
\subsection{Speyer's octahedron recurrence}
\defemph{Speyer's octahedron recurrence} with coefficients is a recurrence relation \cite{Speyer} on the set of formal variables $\mathcal{T}^{(s)}=\{T^{(s)}_{i,j,k} \mid (i,j,k)\in\mathbb{Z}^3_{\rm odd}\}$ together with a set of extra variables, called coefficients, $\{A_{i,j},B_{i,j},C_{i,j},D_{i,j} \mid (i,j)\in\mathbb{Z}^2_{\rm even}\}$, satisfying
\begin{align}
T^{(s)}_{i,j,k-1}T^{(s)}_{i,j,k+1} =B_{i,j+k}D_{i,j-k} T^{(s)}_{i-1,j,k}T^{(s)}_{i+1,j,k} + A_{i+k,j}C_{i-k,j} T^{(s)}_{i,j-1,k}T^{(s)}_{i,j+1,k}
\label{eq:speyer}
\end{align}
We note that in \cite{Speyer}, the condition on the index $(i,j,k)$ of $T^{(s)}_{i,j,k}$ is $i+j+k\equiv 0 \bmod 2$, and the coefficients are defined on $\mathbb{Z}^2_{\rm odd}$.
With a shift in the indices, we make it consistent with our construction.
Speyer's octahedron recurrence can also be interpreted \cite{Speyer} as a cluster algebra with coefficients.
Its initial quiver is the octahedron quiver with initial cluster variables $T^{(s)}_{i,j,\mathbf{fund}(i,j)}$ similar to what we discussed in Section \ref{subsec:Tsys}.
The only difference is the initial coefficients:
\[
y_{i,j}=
\begin{cases}
{B_{i,j}D_{i,j}}/{A_{i,j}C_{i,j}}, &i+j\equiv 0\bmod 2, \\
{A_{i-1,j}C_{i+1,j}}/{B_{i,j-1}D_{i,j+1}}, &i+j\equiv 1\bmod 2,
\end{cases}
\]
in the semifield $\P=\operatorname{Trop}(A_{i,j},B_{i,j},C_{i,j},D_{i,j} : (i,j)\in\mathbb{Z}^2_{\rm even})$.
By Remark \ref{rem:GeometricFrozen}, it can be interpreted as a coefficient-free cluster algebra with frozen variables $\{A_{i,j},B_{i,j},C_{i,j},D_{i,j} \mid (i,j)\in\mathbb{Z}^2_{\rm even}\}$ with the quiver illustrated in Figure \ref{fig_Speyerquiver}.
\begin{figure}
\begin{center}\includegraphics[scale=0.8]{fig_Speyerquiver}\end{center}
\caption{
A portion of the infinite quiver of Speyer's octahedron recurrence.
The frozen variables and their incident arrows are in blue.
}
\label{fig_Speyerquiver}
\end{figure}
Since we will only consider the initial stepped surface $\mathbf{fund}$, we let $T^{(s)}_{i_0,j_0,k_0}$ denote its expression in terms of the initial data $t_{i,j}:=T^{(s)}_{i,j,\mathbf{fund}(i,j)}$ and the coefficients $A_{i,j},B_{i,j},C_{i,j},D_{i,j}$ for $(i,j)\in\mathbb{Z}^2$.
For the T-system with principal coefficients, $T_{i_0,j_0,k_0}$ denotes its expression in terms of $t_{i,j}:=T_{i,j,\mathbf{fund}(i,j)}$ and $c_{i,j}$ for $(i,j)\in\mathbb{Z}^2$.
In order to get $T^{(s)}_{i_0,j_0,k_0}$, we will have to specialize values of $t_{i,j}$ and $c_{i,j}$ in $T_{i_0,j_0,k_0}$ according to Theorem \ref{thm:sep}.
Let $T_{i_0,j_0,k_0}(c_{i,j}=y_{i,j})$ denote the expression of $T_{i_0,j_0,k_0}$ in which $t_{i,j}$ is left untouched but $c_{i,j}$ is substituted by $y_{i,j}$.
Similarly, $T_{i_0,j_0,k_0}|_{\P}(t_{i,j}=1;c_{i,j}=y_{i,j})$ denotes the expression of $T_{i_0,j_0,k_0}$ in which $t_{i,j}$ is set to $1$ and $c_{i,j}$ is set to $y_{i,j}$, with the whole expression then computed in $\P$.
By the separation formula (Theorem \ref{thm:sep}), we get a solution to Speyer's octahedron recurrence from the solution to the T-system with principal coefficients:
\begin{align}
T^{(s)}_{i_0,j_0,k_0} = \frac{T_{i_0,j_0,k_0}(c_{i,j}=y_{i,j})}{T_{i_0,j_0,k_0}|_{\P}(t_{i,j}=1;c_{i,j}=y_{i,j})}
\label{eq:speyerfromprinc}
\end{align}
where $\P=\operatorname{Trop}(A_{i,j},B_{i,j},C_{i,j},D_{i,j} : (i,j)\in\mathbb{Z}^2_{\rm even})$ and
\begin{align}
y_{i,j}=
\begin{cases}
{B_{i,j}D_{i,j}}/{A_{i,j}C_{i,j}}, &i+j\equiv 0\bmod 2, \\
{A_{i-1,j}C_{i+1,j}}/{B_{i,j-1}D_{i,j+1}}, &i+j\equiv 1\bmod 2.
\end{cases}
\label{eq:speyercoef}
\end{align}
We now compare our result to the solution in \cite{Speyer}.
From our perfect matching solution (Theorem \ref{thm:main}), we then have
\begin{align}
T^{(s)}_{i_0,j_0,k_0} = \frac{\sum_{M\in\mathcal{M}} w_p(M)w_f(M)}{\bigoplus_{M\in\mathcal{M}} w_p(M)}
\label{eq:mathchingsolforspeyer}
\end{align}
where $\mathcal{M}$ is the set of perfect matchings of the graph $G_{p,\mathbf{fund}}$.
The denominator is a sum in $\P=\operatorname{Trop}(A_{i,j},B_{i,j},C_{i,j},D_{i,j} : (i,j)\in\mathbb{Z}^2_{\rm even})$, hence a monomial in $\{A_{i,j},B_{i,j},C_{i,j},D_{i,j}\mid (i,j)\in\mathbb{Z}^2_{\rm even}\}$.
\begin{thm}[\cite{Speyer}]\label{thm:Speyer}
For a point $p=(i_0,j_0,k_0)$ and an admissible initial stepped surface $\k$, we have
\[
T^{(s)}_{i_0,j_0,k_0} = \sum_{M\in\mathcal{M}} w_s(M)w_f(M)
\]
where the sum runs over all the perfect matchings of $G=G_{p,\k}$. The weight $w_s$ is defined by $w_s(M):=\prod_{x\in M} w_s(x)$ and the weight $w_s(x)$ is defined for $x\in E(G)$ as follows:
\begin{center}
\includegraphics[scale=0.75]{fig_Speyer}
\end{center}
\end{thm}
Comparing \eqref{eq:mathchingsolforspeyer} to Theorem \ref{thm:Speyer}, we can write $w_s$ of a perfect matching of $G_{p,\mathbf{fund}}$ in terms of the pairing weight as follows:
\begin{align}
w_s(M) = \frac{ w_p(M)}{ \bigoplus_{M'\in\mathcal{M}} w_p(M')}
\end{align}
for $M\in\mathcal{M}$, where the sum in the denominator is computed in the semifield $\P = \operatorname{Trop}(A_{i,j},B_{i,j},C_{i,j},D_{i,j} : (i,j)\in\mathbb{Z}^2_{\rm even})$.
\begin{ex}
Let $p=(0,0,1)$. Then the graph $G$ and its two matchings are as follows:
\begin{center}
\includegraphics[scale=0.6]{fig_ex_Speyer}
\end{center}
Let $T_{i,j,\mathbf{fund}(i,j)} = t_{i,j}$ be the initial data. From Theorem \ref{thm:Speyer}, we have
\[
T^{(s)}_{0,0,1}= B_{0,0}D_{0,0}\,t_{-1,0}t_{0,0}^{-1}t_{1,0}+A_{0,0}C_{0,0}\,t_{0,-1}t_{0,0}^{-1}t_{0,1}.
\]
On the other hand, to apply the separation formula, we set $c_{0,0}=\frac{B_{0,0}D_{0,0}}{A_{0,0}C_{0,0}}$ as in \eqref{eq:speyercoef}.
Equation \eqref{eq:speyerfromprinc} then gives
\begin{align*}
T^{(s)}_{0,0,1} &= \frac{c_{0,0}\,t_{-1,0}t_{0,0}^{-1}t_{1,0}+t_{0,-1}t_{0,0}^{-1}t_{0,1}}{c_{0,0}\oplus 1}\\
&= B_{0,0}D_{0,0}\,t_{-1,0}t_{0,0}^{-1}t_{1,0}+A_{0,0}C_{0,0}\,t_{0,-1}t_{0,0}^{-1}t_{0,1}.
\end{align*}
\end{ex}
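The tropical computation in this example can be replayed mechanically. In the sketch below (our own encoding, not from \cite{Speyer}), a Laurent monomial in $A_{0,0},B_{0,0},C_{0,0},D_{0,0}$ is stored as its exponent vector, so that multiplication adds exponents and the tropical sum $\oplus$ takes entrywise minima:

```python
# Monomials in (A_{0,0}, B_{0,0}, C_{0,0}, D_{0,0}) as exponent 4-tuples;
# in Trop(A,B,C,D) multiplication adds exponents and a (+) b takes
# entrywise minima of exponents.
def trop_mul(a, b):
    return tuple(x + y for x, y in zip(a, b))

def trop_add(a, b):
    return tuple(min(x, y) for x, y in zip(a, b))

def inv(a):
    return tuple(-x for x in a)

one = (0, 0, 0, 0)
A, B, C, D = (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)

# c_{0,0} = B_{0,0} D_{0,0} / (A_{0,0} C_{0,0}) as in the example
c00 = trop_mul(trop_mul(B, D), inv(trop_mul(A, C)))

# The denominator of the separation formula: c_{0,0} (+) 1 = 1/(A_{0,0} C_{0,0})
assert trop_add(c00, one) == inv(trop_mul(A, C))
```

Dividing by the monomial $c_{0,0}\oplus 1 = (A_{0,0}C_{0,0})^{-1}$ multiplies both terms of the numerator by $A_{0,0}C_{0,0}$, recovering the displayed expression for $T^{(s)}_{0,0,1}$.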
\subsection{Lambda determinants}
The generalized lambda-determinant \cite{DF13} can be considered as a recurrence relation on $\mathcal{T}^{(\lambda)}=\{T^{(\lambda)}_{i,j,k} \mid (i,j,k)\in\mathbb{Z}^3_{\rm odd}\}$ together with a set of coefficients $\{\lambda_i,\mu_i \mid i\in\mathbb{Z}\}$ satisfying
\begin{align}
T^{(\lambda)}_{i,j,k-1}T^{(\lambda)}_{i,j,k+1} =\lambda_i T^{(\lambda)}_{i-1,j,k}T^{(\lambda)}_{i+1,j,k} + \mu_j T^{(\lambda)}_{i,j-1,k}T^{(\lambda)}_{i,j+1,k}
\label{eq:lambda}
\end{align}
for all $(i,j,k)\in\mathbb{Z}^3_{\rm odd}.$
It can also be realized \cite{DF13} as a cluster algebra with coefficients.
The quiver is the octahedron quiver, the initial cluster variables are $T^{(\lambda)}_{i,j,\mathbf{fund}(i,j)}$ and the initial coefficients are
\begin{align}
y_{i,j}=
\begin{cases}
\lambda_i/\mu_j, &i+j\equiv 0\bmod 2,\\
\mu_j/\lambda_i, &i+j\equiv 1\bmod 2,
\end{cases}
\label{eq:lambdadetcoef}
\end{align}
in $\P = \operatorname{Trop}( \lambda_i,\mu_i : i\in\mathbb{Z} )$.
By Remark \ref{rem:GeometricFrozen}, we can also interpret it as a coefficient-free cluster algebra with frozen variables $\{\lambda_i,\mu_i \mid i\in\mathbb{Z}\}$ with the quiver illustrated in Figure \ref{fig_LambdaDet}.
\begin{figure}%
\begin{center}\includegraphics[scale=0.8]{fig_LambdaDet}\end{center}
\caption{A portion of the infinite quiver of the lambda-determinant.}%
\label{fig_LambdaDet}%
\end{figure}
With the initial data stepped surface $\mathbf{fund}$, we let $T^{(\lambda)}_{i_0,j_0,k_0}$ be the expression in terms of the initial data $t_{i,j}:=T^{(\lambda)}_{i,j,\mathbf{fund}(i,j)}$ and the coefficients $\lambda_i,\mu_j$.
By the separation formula (Theorem \ref{thm:sep}), we have
\begin{align}
T^{(\lambda)}_{i_0,j_0,k_0} = \frac{T_{i_0,j_0,k_0}(c_{i,j}=y_{i,j})}{T_{i_0,j_0,k_0}|_{\P}(t_{i,j}=1;c_{i,j}=y_{i,j})}
\end{align}
where $\P=\operatorname{Trop}(\lambda_i,\mu_i : i\in\mathbb{Z})$ and
\begin{align}
y_{i,j}=
\begin{cases}
\lambda_i/\mu_j, &i+j\equiv 0\bmod 2,\\
\mu_j/\lambda_i, &i+j\equiv 1\bmod 2.
\end{cases}
\label{eq:lambdacoef}
\end{align}
\subsection{Pentagram maps}
The pentagram map \cite{Schwartz,OST} is a discrete evolution on points in $\mathbb{R}\P^2$.
It maps a twisted polygon with $n$ vertices to another twisted $n$-gon whose vertices are the intersections of the shortest diagonals of the original polygon.
In \cite{Glick} the pentagram map evolution is interpreted as the mutation in a Y-pattern (cluster mutation on cluster coefficients).
Using this interpretation, the authors of \cite{GSTV14} give a generalization of the pentagram map called \defemph{higher pentagram maps}.
For a given integer $3\leq\kappa\leq n-1$, the higher pentagram map on a twisted $n$-gon produces a new polygon using the $(\kappa-1)$-st diagonals (connecting vertex $i$ to vertex $i+\kappa-1$) instead of the shortest diagonals (connecting vertex $i$ to vertex $i+2$) used in the case $\kappa=3$.
The following is an example of one evolution of the higher pentagram map with $\kappa=4$ on a closed 9-gon.
\begin{center}
\includegraphics[scale=0.8]{fig_pentmap}
\end{center}
For a twisted $n$-gon, one can define $2n$ variables $p_1,\dots,p_n,q_1,\dots,q_n \in \mathbb{R}$.
The evolution \cite{Glick,GSTV14} of the higher pentagram map on these variables is as follows:
\begin{align*}
q_i' = p_i^{-1}, \quad p_i' = q_i \dfrac{(1+p_{i-r})(1+p_{i+r'})}{(1+p_{i-r-1}^{-1})(1+p_{i+r'+1}^{-1})},
\end{align*}
where $r=\lfloor\frac{\kappa-2}{2}\rfloor$ and $r'=\lceil\frac{\kappa-2}{2}\rceil$ and $p'_1,\dots,p'_n,q'_1,\dots,q'_n$ are the new variables associated with the new polygon produced by the higher pentagram map.
We note that the variables originally defined in \cite{GSTV14} differ from the ones considered here by a change of variables and a shift in indices.
See \cite{GSTV14} and \cite{KV14} for more details.
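One evolution step on the variables $p_i,q_i$ is easy to implement. The sketch below (ours; the function name and the test data are arbitrary) follows the formula above with $r=\lfloor\frac{\kappa-2}{2}\rfloor$ and $r'=\lceil\frac{\kappa-2}{2}\rceil$, and checks that $q'_i=p_i^{-1}$ and that, as one can verify directly from the formula by cancelling the cyclic products, $\prod_i p_iq_i$ is preserved:

```python
from fractions import Fraction as F
from math import prod

def pentagram_step(p, q, kappa):
    """One iteration of the higher pentagram map on the variables
    (p_1, ..., p_n, q_1, ..., q_n); indices are read modulo n."""
    n = len(p)
    r, rp = (kappa - 2) // 2, (kappa - 1) // 2   # r = floor((k-2)/2), r' = ceil((k-2)/2)
    q_new = [1 / p[i] for i in range(n)]
    p_new = [q[i]
             * (1 + p[(i - r) % n]) * (1 + p[(i + rp) % n])
             / ((1 + 1 / p[(i - r - 1) % n]) * (1 + 1 / p[(i + rp + 1) % n]))
             for i in range(n)]
    return p_new, q_new

# An arbitrary initial twisted 9-gon with kappa = 4
n, kappa = 9, 4
p = [F(k + 2, 3) for k in range(n)]
q = [F(2 * k + 1, 5) for k in range(n)]
p1, q1 = pentagram_step(p, q, kappa)

assert all(q1[i] == 1 / p[i] for i in range(n))   # q'_i = p_i^{-1}
assert prod(p1) * prod(q1) == prod(p) * prod(q)   # prod of p_i q_i is preserved
```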
The evolution of the variables $p_i,q_i$ can be also realized \cite{Glick,GSTV14} as the Y-pattern of a cluster algebra of rank $2n$ in the universal semifield $$\P=\mathbb{Q}_{sf}(p_1,\dots,p_n,q_1,\dots,q_n)$$ with the initial coefficient tuple $(p_1,\dots,p_n,q_1,\dots,q_n)$.
The exchange matrix is $B=\begin{psmallmatrix}\mathbf{0}&C\\-C^T&\mathbf{0}\end{psmallmatrix}$ where $C=(c_{ij})$ is an $n\times n$ matrix defined by $c_{ij} = \delta_{i,j-r-1}-\delta_{i,j-r}-\delta_{i,j+r'}+\delta_{i,j+r'+1}$ (the indices are read modulo $n$); for $\kappa=3$ this reads $c_{ij} = \delta_{i,j-1}-\delta_{i,j}-\delta_{i,j+1}+\delta_{i,j+2}$.
The quiver corresponding to $B$, the \defemph{generalized Glick's quiver}, is a bipartite graph with $2n$ vertices labeled by $p_1,\dots,p_n,q_1,\dots,q_n$, with four arrows adjacent to each $q_i$ as follows.
\begin{center}
\includegraphics[scale=0.7]{fig_genglickquiver}
\end{center}
Then the variables $p'_1,\dots,p'_n,q'_1,\dots,q'_n$ of the new polygon produced by the higher pentagram map are obtained by a composition of mutations at all $p_i$ for $i\in[1,n]$.
Wrapping this finite quiver around a torus, we can then interpret it as the octahedron quiver with a certain identification of vertices along the two periods of the torus: $(i,j)\equiv(i+\kappa,j+2-\kappa)$ and $(i,j)\equiv(i+n,j-n)$ \cite{DFK13}.
Hence we can realize the higher pentagram map as a Y-pattern on the octahedron quiver with periodic initial data $p_1,\dots,p_n,q_1,\dots,q_n$ in $\mathbb{Q}_{sf}(p_1,\dots,p_n,q_1,\dots,q_n)$, where only mutations at the vertex with exactly two incoming and two outgoing arrows are allowed.
Let $\pi:\mathbb{Z}^2\rightarrow\{p_1,\dots,p_n,q_1,\dots,q_n\}$ be the map identifying the vertex of the octahedron quiver with the vertex of the generalized Glick's quiver.
We pick $\pi$ by starting with $\pi(0,0)=p_n$ and then continuing by the following patterns:
\[
\begin{matrix}
&q_{i-r'-1}&\\
q_{i+r}&p_i&q_{i-r'}\\
&q_{i+r+1}
\end{matrix}
\qquad\text{and}\qquad
\begin{matrix}
&p_{i-r-1}&\\
p_{i+r'}&q_i&p_{i-r}\\
&p_{i+r'+1}
\end{matrix}
\]
where the indices are read modulo $n$. This choice of $\pi$ agrees with the map $\phi$ used for unfolding periodic two dimensional quivers in \cite[Section 12]{GP15} in the case when $S=\{(0,0),(\kappa-1,0),(r,1),(r+1,1)\}$ is a Y-pin for the higher pentagram map. The following is an example of the assignment by the map $\pi$ where the indices are read modulo $n$ and the center is $(0,0)$.
\[
\begin{matrix}
&&&\vdots&&&\\
&p_{n-2}&q_{n-r'-2}&p_{n-\kappa}&q_{n-\kappa-r'}&p_{n-2\kappa+2}&\\
&q_{r-1}&p_{n-1}&q_{n-r'-1}&p_{n-\kappa+1}&q_{n-\kappa-r'+1}&\\
\cdots&p_{\kappa-2}&q_{r}&p_n&q_{n-r'}&p_{n-\kappa+2}&\cdots\\
&q_{\kappa+r-1}&p_{\kappa-1}&q_{r+1}&p_{1}&q_{n-r'+1}\\
&p_{2\kappa-2}&q_{\kappa+r}&p_{\kappa}&q_{r+2}&p_{2}\\
&&&\vdots&&&
\end{matrix}
\]
Let $p^{(k)}_\ell$ and $q^{(k)}_\ell$ ($\ell\in[1,n]$) be the pentagram variables after the $k$-th iterate of the higher pentagram map.
Then
$$p^{(k)}_\ell = 1/q^{(k+1)}_\ell \text{ for all }\ell\in[1,n],$$
and $q^{(k)}_\ell$ is the coefficient at a vertex $(i,j,k)$ on $\k$ such that
$$\pi(i,j)=q_\ell \quad\text{and}\quad \k(i,j)=k=\k(i\pm 1,j\pm 1)+1.$$
At the vertex $(i,j,k)$, the quiver associated with $\k$ is as follows.
\[
\scalebox{0.8}{$
\xymatrix@=1em{
& (i,j+1,k-1)\ar[d] & \\
(i-1,j,k-1) & (i,j,k)\ar[l]\ar[r] & (i+1,j,k-1) \\
& (i,j-1,k-1)\ar[u] &
}
$}
\]
By Theorem \ref{thm:YasF} and Proposition \ref{prop:coefatvertex}, we then have
\begin{align}\begin{aligned}
q^{(k)}_\ell
&= y_{(i,j),\k} \frac{ T_{i,j-1,k-1}T_{i,j+1,k-1}}{ T_{i-1,j,k-1}T_{i+1,j,k-1}}\bigg|_{\substack{T_{i,j,\mathbf{fund}(i,j)}=1\\c_{i,j}=\pi(i,j)}}\\
&= \frac{I_{i,j,k-1}}{J_{i,j,k-1}} \frac{ T_{i,j-1,k-1}T_{i,j+1,k-1}}{ T_{i-1,j,k-1}T_{i+1,j,k-1}}\bigg|_{\substack{T_{i,j,\mathbf{fund}(i,j)}=1\\c_{i,j}=\pi(i,j)}}
\label{eq:yijk}
\end{aligned}\end{align}
where $y_{(i,j),\k}$ is defined as in Proposition \ref{prop:coefatvertex}.
This gives an expression of all the pentagram variables in terms of the solution to the T-system with principal coefficients.
\section{Conclusion and Discussion}\label{sec_General}
In this paper, we have defined the T-system with principal coefficients from cluster algebra aspect.
We obtain the octahedron recurrence with principal coefficients, which is a recurrence relation governing the T-system with principal coefficients.
Various explicit combinatorial solutions, and the connections between them, have been established.
These results cover the special case in which the point $p$ and the initial data stepped surface $\k$ satisfy the following conditions:
\begin{align}
k_0 &\geq \k(i_0,j_0),\label{cond1}\\
\k(i,j) &\geq \mathbf{fund}(i,j)\quad\text{for }(i,j)\in\mathbb{Z}^2.\label{cond2}
\end{align}
These solutions to the T-system with principal coefficients allow us to solve the T-system with any other choice of coefficients, as we have seen in the previous section.
In particular, we are able to give a solution to the higher pentagram maps as a product of T-system variables and coefficients, see \eqref{eq:yijk}.
We notice a symmetry $\{i\leftrightarrow j,k\leftrightarrow -k-1\}$ of the T-system with principal coefficients \eqref{eq:tsyscoef}.
This symmetry basically switches the roles between $i$ and $j$ and reflects the system upside down.
So if we have a point $p$ and an initial data stepped surface $\k$ such that $k_0\leq\k(i_0,j_0)$ and $\k(i,j)\leq\mathbf{fund}(i,j)$ for $(i,j)\in\mathbb{Z}^2$, after applying the symmetry the system will satisfy the conditions \eqref{cond1} and \eqref{cond2}.
Furthermore, the condition \eqref{cond2} can be relaxed a little more.
Since the expression of $T_{i_0,j_0,k_0}$ depends only on the values of $\k(i,j)$ when $(i,j)\in\mathring{F}\cup\partial F$ (see \eqref{eq:opencloseface}), Condition \eqref{cond2} can be relaxed to
\begin{align*}
\k(i,j) &\geq \mathbf{fund}(i,j)\quad\text{for }(i,j)\in\mathring{F}\cup\partial F. \label{newcond2}
\end{align*}
Nevertheless, an explicit combinatorial solution for arbitrary $p$ and $\k$ is still unknown.
Our general solution for the T-systems with principal coefficients may be applied to various problems related to the octahedron recurrence.
For instance, there are known connections between the T-systems and Bessenrodt-Stanley polynomials discussed in \cite{DF15}.
We expect the solutions to the T-systems with principal coefficients to provide generalizations of this family of polynomials.
It would also be interesting to apply our solutions to study the arctic curves of the octahedron equation with principal coefficients in the same spirit as \cite{DFS14}.
We expect the coefficients to act as additional probability for dimer configurations, which may give other shapes to the arctic curves.
Lastly, it would be interesting to investigate the quantum version of the T-systems with principal coefficients analogous to \cite{DF11,DFK12}.
\section{Introduction}
Accelerating the convergence of Krylov iterative solvers \cite{SAAD:2000:IMS} is an old issue which has returned to the spotlight because of the increasing number of applications for which these are preferred to direct solvers today. Traditional approaches aim at improving the condition number by using frameworks in which efficient preconditioners exist (e.g. domain decomposition methods \cite{KLAWONN:2001:FNN,GOSSELET.2007.1}), or for which good initialization vectors \cite{GOSSELET:2003:IEI}, relevant augmentation subspaces \cite{DOSTAL:1988:CGM,SAAD:1997:AAK,CHAPMAN:1996:DAK} or suitable block strategies (see \cite{Aliaga00alanczos-type} for a very general block-Lanczos algorithm) are available. For instance:
\begin{itemize}
\item For 3D elasticity problems, domain decomposition methods come with ``physical'' augmentation associated with the global equilibrium of floating substructures (rigid body motions), which makes the methods scalable \cite{FARHAT:1994:ADV,MANDEL:1996:COEF}; for plate and shell problems, additional augmentation through ``corner modes'' \cite{FARHAT:1996:COR1,FARHAT:1996:COR2,LETALLEC:1997:SHELL} is required.
\item For structures with repeated patterns, block strategies are possible \cite{GOSSELET.2009.1}.
\item For restarted algorithms, one can use deflation or augmentation \cite{MORGAN:1995:Eigen,ERHEL:1995:RGM}, or block techniques \cite{baker:1608}.
\item For problems with multiple right-hand sides, deflation \cite{FARHAT.1994.ESBI,SAAD:2000:DCG,ERHEL:2000:ACG} is a rather classical approach.
\end{itemize}
The problem with these techniques is that they require some \emph{a priori} information which is seldom available, except in specific cases.
Many recent works present theoretical and practical comparisons of the numerous algorithms which have been developed in connection with these ideas \cite{TANG:2007:TNC,SIMONCINI:2007:RCD}.
Multiresolution approaches form a general framework in which numerical information is available to accelerate the convergence of Krylov solvers. Multiresolution refers to situations in which the solution of a mechanical problem cannot be achieved through the resolution of a single linear system. For example, calculating the solution of a nonlinear or time-dependent problem or exploring a design of experiments during an optimization procedure requires the resolution of sequences of linear systems. Multiresolution is more general than using multiple right-hand sides because the matrices themselves are likely to change from one system to another (multiple right-hand and left-hand sides). Thus, the problem consists in solving a $k$-indexed family of large, sparse, linear $n\times n$ systems of the form:
\begin{equation}
\ensuremath{\mathbf{A}}^{(k)} x^{(k)} = b^{(k)}
\end{equation}
Although different, the systems are assumed to be similar to one another. This similarity can be defined in several ways: in terms of rank, by the fact that $\operatorname{rank}(\ensuremath{\mathbf{A}}^{(k)}-\ensuremath{\mathbf{A}}^{(k-1)})\ll \operatorname{rank}(\ensuremath{\mathbf{A}}^{(k)})$; or in a spectral sense by the fact that the eigenspaces remain stable from one system to another; or in terms of the Krylov subspaces generated \cite{CARPRAUX:1994:SKB}. The first case can be dealt with easily, even with direct solvers, by using the Sherman-Morrison formula; the second case requires augmentation strategies in order to eliminate the most penalizing part of the spectrum and improve the active condition number\footnote{``active'' referring to the part of the spectrum of the matrix which is solicited by the right-hand side.} \cite{Rey:1998:RRP,GOSSELET:2002:SRK,PARKS:2006:RKS,WANG:2007:LST}; and the last case calls for preconditioning techniques \cite{REY:1996:TAR,Risler:2000:IAA}.
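As a concrete reading of the rank-based case, the following pure-Python sketch applies the Sherman-Morrison formula to a rank-one update; the $2\times 2$ matrix and vectors are toy data chosen for illustration, not taken from the paper.

```python
def matvec(M, x):
    # dense matrix-vector product on plain Python lists
    return [sum(row[j] * x[j] for j in range(len(x))) for row in M]

def sherman_morrison_solve(Ainv, u, v, b):
    """Solve (A + u v^T) x = b, given the inverse of A (Sherman-Morrison)."""
    Ainv_b = matvec(Ainv, b)
    Ainv_u = matvec(Ainv, u)
    denom = 1.0 + sum(vi * t for vi, t in zip(v, Ainv_u))
    coeff = sum(vi * t for vi, t in zip(v, Ainv_b)) / denom
    return [p - coeff * q for p, q in zip(Ainv_b, Ainv_u)]

# A = diag(2, 4), u = v = (1, 1)  =>  A + u v^T = [[3, 1], [1, 5]]
Ainv = [[0.5, 0.0], [0.0, 0.25]]
u = v = [1.0, 1.0]
b = [4.0, 6.0]
x = sherman_morrison_solve(Ainv, u, v, b)   # exact solution is (1, 1)
```

In practice one would keep a factorization of $\ensuremath{\mathbf{A}}^{(k-1)}$ rather than its explicit inverse; the formula is the same.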
While most of the studies of Krylov methods for multiresolution (often referred to as the recycling of Krylov subspaces) are set in the framework of GMRes/MinRes \cite{PARKS:2006:RKS,WANG:2007:LST}, we chose to work on the specific case of the resolution of symmetric, positive definite systems using conjugate gradients (CGs), in which the convergence is under control and related to easily calculated spectral properties \cite{VANDERSLUIS:1986:RCCG}.
In earlier works, the authors developed efficient preconditioners based on previous Krylov subspaces \cite{REY:1996:TAR,Risler:2000:IAA} which took advantage of the conjugation properties of CGs,
but did not extract the most interesting part of the information available in the Krylov subspaces, and they proposed augmentation techniques using Ritz vectors \cite{Rey:1998:RRP}.
Typically, these works were aimed at nonlinear mechanical systems solved by Newton-Raphson linearization and FETI or BDD domain decomposition \cite{GOSSELET:2002:DDM,lene:2001}.
The recycling of Krylov subspaces can also be analyzed from the model reduction point of view. Since Krylov solvers satisfy Petrov-Galerkin conditions, they share many common points with strategies based on Karhunen-Loeve expansion \cite{meyer.2003.1,Ryckelynck.2006.1,Nouy.2007.1}. These similarities are well-known \cite{Freund2000395,Freund.2003.1,heres.2007.1}. But our objective is not to develop reduced models of mechanical systems in order to perform fast but coarse analyzes; it is to define, improve and reuse reduced models in order to carry out calculations both rapidly and accurately.
In this paper, we undertake a more in-depth investigation of augmentation using a selection of post-processed Ritz vectors. In Section 2, we begin with a detailed presentation of the theoretical framework of the augmented preconditioned conjugate gradient method; then, in Section 3, we propose a first reuse algorithm in a multiresolution framework (TRKS); in Section 4, we improve this algorithm by proposing a procedure for selecting the ``best'' Ritz vectors (SRKS and ``cluster''); finally, in Section 5, we propose an evaluation of the method in the case of nonlinear mechanics and parametric problems, using domain decomposition methods \cite{GOSSELET.2007.1} to define efficient preconditioners.
\section{The augmented preconditioned conjugate gradient method}\label{section:APCG}
\subsection{Algorithm and properties}
Let us consider the linear problem
\begin{equation}\label{pb1}
\ensuremath{\mathbf{A}} x = b,
\end{equation}
where $\ensuremath{\mathbf{A}}$ is an $n \times n$ symmetric positive definite matrix, and let us study the resolution of this system using the augmented preconditioned conjugate gradient algorithm. With $\ensuremath{\mathbf{M}}$ being the $n \times n$ symmetric positive definite matrix of the preconditioner, we introduce the following notations:
\begin{equation}
\begin{aligned}
i=0\ldots m &\qquad & \text{the iteration number}\\
x_i & & \text{the } i^{th} \text{ approximation}\\
r_i=b-\ensuremath{\mathbf{A}} x_i = \ensuremath{\mathbf{A}}(x-x_i) & & \text{the } i^{th} \text{ residual}
\end{aligned}
\end{equation}
With no loss of generality, the presentation can be limited to the case of a zero initial guess $x_{00}=0$. (Otherwise, one can set $b \leftarrow b-\ensuremath{\mathbf{A}} x_{00}$.)
Let $\ensuremath{\mathcal{C}}$ be a subspace of $\mathbb{R}^n$ of dimension $n_c$, and let Matrix $\ensuremath{\mathbf{C}}=\left[c_1, \ldots, c_{n_c}\right]$ be a basis of $\ensuremath{\mathcal{C}}$. The search principle of the augmented left-preconditioned conjugate gradient is:
\begin{equation}\label{eq:krysol_principle}
\left\{\begin{array}{ll}
\textrm{find } &x_i \in \ensuremath{\mathcal{K}}_i(\ensuremath{\mathbf{M}}^{-1}\ensuremath{\mathbf{A}},\ensuremath{\mathcal{C}},\ensuremath{\mathbf{M}}^{-1}r_0)\\
\textrm{such that }&r_i \perp \ensuremath{\mathcal{K}}_i(\ensuremath{\mathbf{M}}^{-1}\ensuremath{\mathbf{A}},\ensuremath{\mathcal{C}},\ensuremath{\mathbf{M}}^{-1}r_0)
\end{array}\right.
\end{equation}
where $\ensuremath{\mathcal{K}}_i(\ensuremath{\mathbf{M}}^{-1}\ensuremath{\mathbf{A}},\ensuremath{\mathcal{C}},\ensuremath{\mathbf{M}}^{-1}r_0)$ is the augmented Krylov subspace associated with preconditioned operator $\ensuremath{\mathbf{M}}^{-1}\ensuremath{\mathbf{A}}$ and augmentation subspace $\ensuremath{\mathcal{C}}$:
\begin{equation}
\ensuremath{\mathcal{K}}_i(\ensuremath{\mathbf{M}}^{-1}\ensuremath{\mathbf{A}},\ensuremath{\mathcal{C}},\ensuremath{\mathbf{M}}^{-1}r_0) =
\operatorname{span}\left(\ensuremath{\mathbf{M}}^{-1}r_0,\ldots,(\ensuremath{\mathbf{M}}^{-1}\ensuremath{\mathbf{A}})^{(i-1)}\ensuremath{\mathbf{M}}^{-1}r_0\right)\oplus\ensuremath{\mathcal{C}}
\end{equation}
A classical implementation relies on the definition of a convenient initialization and projector pair $(x_0$,$\ensuremath{\mathbf{P}})$:
\begin{equation}
x=x_0 + \ensuremath{\mathbf{P}} y \quad
\left\{ \begin{array}{lll}
\ensuremath{\mathbf{C}}^Tr_0=0 &\Leftarrow &x_0=\ensuremath{\mathbf{C}}(\ensuremath{\mathbf{C}}^T\ensuremath{\mathbf{A}} \ensuremath{\mathbf{C}})^{-1}\ensuremath{\mathbf{C}}^Tb \\
\ensuremath{\mathbf{C}}^T\ensuremath{\mathbf{A}} \ensuremath{\mathbf{P}}=0 &\Leftarrow& \ensuremath{\mathbf{P}}=\ensuremath{\mathbf{I}}-\ensuremath{\mathbf{C}}(\ensuremath{\mathbf{C}}^T\ensuremath{\mathbf{A}} \ensuremath{\mathbf{C}})^{-1}\ensuremath{\mathbf{C}}^T\ensuremath{\mathbf{A}}
\end{array}\right.
\end{equation}
One should note that since $\ensuremath{\mathbf{A}} \ensuremath{\mathbf{P}}=\ensuremath{\mathbf{P}}^T\ensuremath{\mathbf{A}} \ensuremath{\mathbf{P}}=\ensuremath{\mathbf{P}}^T\ensuremath{\mathbf{A}}$ augmentation preserves symmetry. One should also note that $\ensuremath{\mathbf{P}}\ensuremath{\mathbf{C}}=0$. The system to be solved is:
\begin{equation}\label{sys1}
\ensuremath{\mathbf{A}} \ensuremath{\mathbf{P}} y= (\ensuremath{\mathbf{P}}^{T} \ensuremath{\mathbf{A}} \ensuremath{\mathbf{P}}) y=r_0=\ensuremath{\mathbf{P}}^{T}b
\end{equation}
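The defining properties of the pair $(x_0,\ensuremath{\mathbf{P}})$ can be checked numerically. The sketch below, assuming a small SPD matrix and a one-column augmentation basis (toy data), verifies that $\ensuremath{\mathbf{C}}^T r_0=0$ and $\ensuremath{\mathbf{C}}^T\ensuremath{\mathbf{A}}\ensuremath{\mathbf{P}}=0$.

```python
def matvec(M, x):
    return [sum(row[j] * x[j] for j in range(len(x))) for row in M]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]   # SPD toy matrix
c = [1.0, 0.0, 0.0]            # C has a single column here
b = [1.0, 2.0, 3.0]

Ac = matvec(A, c)              # A is symmetric, so c^T A = (A c)^T
cAc = dot(c, Ac)               # the 1x1 coarse matrix C^T A C

x0 = [ci * dot(c, b) / cAc for ci in c]                  # coarse initialization
r0 = [bi - t for bi, t in zip(b, matvec(A, x0))]         # initial residual

# P = I - C (C^T A C)^{-1} C^T A, assembled explicitly for the check
P = [[(1.0 if i == j else 0.0) - c[i] * Ac[j] / cAc for j in range(3)]
     for i in range(3)]
CtAP = [sum(Ac[i] * P[i][j] for i in range(3)) for j in range(3)]   # C^T A P
```

Both `dot(c, r0)` and every entry of `CtAP` vanish to machine precision, as required by the definitions above.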
The $\ensuremath{\mathbf{C}}$-augmented, $\ensuremath{\mathbf{M}}$-preconditioned conjugate gradient technique (APCG) implemented by projection is presented in Algorithm \ref{APCG}. (For the sake of simplicity, the methods will be described assuming exact arithmetic, even though they are compatible with more realistic full reorthogonalization \cite{LINGEN:2000:EGS}.)
\begin{algorithm2e}[th]\caption{APCG($\ensuremath{\mathbf{A}}, \ensuremath{\mathbf{M}},\ensuremath{\mathbf{C}}$, $b$)}\label{APCG}
Calculate $\ensuremath{\mathbf{A}} \ensuremath{\mathbf{C}}$, $(\ensuremath{\mathbf{C}}^T \ensuremath{\mathbf{A}} \ensuremath{\mathbf{C}})^{-1}$ \quad; \quad($\ensuremath{\mathbf{P}}=\iden - \ensuremath{\mathbf{C}}\left(\ensuremath{\mathbf{C}}^T \ensuremath{\mathbf{A}} \ensuremath{\mathbf{C}}\right)^{-1}\ensuremath{\mathbf{C}}^T \ensuremath{\mathbf{A}}$)\; %
$x_0 = \ensuremath{\mathbf{C}}(\ensuremath{\mathbf{C}}^T \ensuremath{\mathbf{A}} \ensuremath{\mathbf{C}})^{-1}\ensuremath{\mathbf{C}}^T b$\;%
$r_0=b-\ensuremath{\mathbf{A}} x_0=\ensuremath{\mathbf{P}}^T b$\;%
$z_0 = \ensuremath{\mathbf{P}} \ensuremath{\mathbf{M}^{-1}} r_0$, $w_0=z_0$\;%
\For{$j=1,\ldots,m$}{%
$\alpha_{j-1}=(r_{j-1},w_{j-1})/(\ensuremath{\mathbf{A}} w_{j-1},w_{j-1})$\\
$x_{j}=x_{j-1}+\alpha_{j-1} w_{j-1}$\\
$r_{j}=r_{j-1}-\alpha_{j-1} \ensuremath{\mathbf{A}} w_{j-1}$\\
$z_{j} = \ensuremath{\mathbf{P}} \ensuremath{\mathbf{M}^{-1}} r_{j}$\\
$\beta_j=(\ensuremath{\mathbf{A}} w_{j-1},z_{j})/(w_{j-1},\ensuremath{\mathbf{A}} w_{j-1})$\\
$w_{j}=z_{j} - \beta_j w_{j-1}$%
}%
\end{algorithm2e}
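A pure-Python sketch of the APCG iteration on a toy SPD system is given below, with the identity preconditioner and a single augmentation vector; the matrix and right-hand side are illustrative only.

```python
def matvec(M, x):
    return [sum(row[j] * x[j] for j in range(len(x))) for row in M]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def apcg(A, c, b, tol=1e-12, maxit=100):
    """Augmented CG: identity preconditioner, one augmentation vector c."""
    Ac = matvec(A, c)
    cAc = dot(c, Ac)

    def proj(v):                       # P v = v - c (c^T A c)^{-1} (c^T A v)
        s = dot(Ac, v) / cAc           # uses symmetry of A: c^T A v = (A c)^T v
        return [vi - s * ci for vi, ci in zip(v, c)]

    x = [ci * dot(c, b) / cAc for ci in c]             # coarse initialization
    r = [bi - t for bi, t in zip(b, matvec(A, x))]
    z = proj(r)
    w = z[:]
    for _ in range(maxit):
        if dot(r, r) ** 0.5 < tol:
            break
        Aw = matvec(A, w)
        alpha = dot(r, w) / dot(Aw, w)
        x = [xi + alpha * wi for xi, wi in zip(x, w)]
        r = [ri - alpha * t for ri, t in zip(r, Aw)]
        z = proj(r)
        beta = dot(Aw, z) / dot(Aw, w)
        w = [zi - beta * wi for zi, wi in zip(z, w)]
    return x

A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
x = apcg(A, [1.0, 0.0, 0.0], b)
```

The residual stays $\ensuremath{\mathbf{C}}$-orthogonal throughout, and the iterates converge to the solution of the full system ($x=(2/9,\,1/9,\,13/9)$ for this toy data).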
The following basic relations hold:
\begin{equation}\label{relation:acg}
\begin{aligned}
(r_i,z_j)&=0, \quad i \neq j\\
(w_i,\ensuremath{\mathbf{A}} w_j)&=0, \quad i \neq j
\end{aligned}
\end{equation}
With $\ensuremath{\mathbf{W}}_i=\left[w_0,\ldots,w_{i-1}\right]$ and $\ensuremath{\mathbf{Z}}_i=\left[z_0,\ldots,z_{i-1}\right]$ being two bases of $\ensuremath{\mathcal{K}}_i(\ensuremath{\mathbf{P}}\ensuremath{\mathbf{M}}^{-1}\ensuremath{\mathbf{A}},z_0)$, the projector enables the spaces to be divided orthogonally:
\begin{equation}\ensuremath{\mathcal{K}}_i(\ensuremath{\mathbf{M}}^{-1}\ensuremath{\mathbf{A}},\ensuremath{\mathcal{C}},\ensuremath{\mathbf{M}}^{-1}r_0) = \ensuremath{\mathcal{K}}_i(\ensuremath{\mathbf{P}}\ensuremath{\mathbf{M}}^{-1}\ensuremath{\mathbf{A}},z_0) \overset{\perp_A}{\oplus} \ensuremath{\mathcal{C}}\end{equation}
Of course, in the absence of optional constraints ($\ensuremath{\mathbf{C}}=0$, $\ensuremath{\mathbf{P}}=\iden$), APCG reduces to standard preconditioned conjugate gradients PCG($\ensuremath{\mathbf{A}}$,$\ensuremath{\mathbf{M}}$, $b$); if, in addition, $\ensuremath{\mathbf{M}^{-1}}=\iden$, it becomes a standard conjugate gradient algorithm CG($\ensuremath{\mathbf{A}}$, $b$).
Let us recall a first result which was proven in \cite{DOSTAL:1988:CGM} for the case of non-preconditioned augmented conjugate gradients.
\begin{prop}\label{prop:acg} Let $\mathcal{V}=\ensuremath{\operatorname{Range}}(\ensuremath{\mathbf{P}})$.
\begin{itemize}
\item APCG($\ensuremath{\mathbf{A}}, \iden,\ensuremath{\mathbf{C}}$, $b$) is equivalent to CG(${{\ensuremath{\mathbf{P}}}}^{T} {\ensuremath{\mathbf{A}}} {\ensuremath{\mathbf{P}}}_{|\ensuremath{\mathbf{A}}\mathcal{V}}$, $\ensuremath{\mathbf{P}}^T b$) in the sense that both generate the same residuals. $x_i$, the $i^{th}$ APCG approximation, is connected to $y_i$, the $i^{th}$ CG approximation, by $x_i=x_0+\ensuremath{\mathbf{P}} y_i$.
\item APCG($\ensuremath{\mathbf{A}}, \iden,\ensuremath{\mathbf{C}}$, $b$) does not break down; it converges, and its asymptotic convergence rate is governed by the condition number $\kappa\left( {{\ensuremath{\mathbf{P}}}}^{T} {\ensuremath{\mathbf{A}}} {\ensuremath{\mathbf{P}}}_{|{\ensuremath{\mathbf{A}}} {\mathcal{V}}} \right) \leqslant \kappa\left({\ensuremath{\mathbf{A}}} \right)$.
\end{itemize}
\end{prop}
Consequently, augmentation strategies never decrease the asymptotic convergence rate. The following corollary is straightforward:
\begin{cor}\label{prop:acg_e}
Let $\mathbf{D}=\left[ d_1, \ldots, d_{m_d}\right]$ be a set of $m_d$ linearly independent vectors such that $\ensuremath{\mathbf{E}}=\left[\ensuremath{\mathbf{C}}, \mathbf{D} \right]$ is a full column rank matrix. Let $\ensuremath{\mathbf{P}}_{\ensuremath{\mathbf{E}}}=\ensuremath{\mathbf{I}}-\ensuremath{\mathbf{E}}(\ensuremath{\mathbf{E}}^T\ensuremath{\mathbf{A}} \ensuremath{\mathbf{E}})^{-1}\ensuremath{\mathbf{E}}^T\ensuremath{\mathbf{A}}$, and let $\mathcal{V}_{\ensuremath{\mathbf{E}}}$ be the range of $\ensuremath{\mathbf{P}}_{\ensuremath{\mathbf{E}}}$.
\begin{itemize}
\item APCG($\ensuremath{\mathbf{A}}, \iden, \ensuremath{\mathbf{E}}$, $b$) is equivalent to APCG($\ensuremath{\mathbf{P}}^T \ensuremath{\mathbf{A}} \ensuremath{\mathbf{P}}$, $\iden$, $\mathbf{D}$, $\ensuremath{\mathbf{P}}^T b$) in the sense that both generate the same residuals. $x^{\ensuremath{\mathbf{E}}}_i$, the $i^{th}$ approximation of APCG($\ensuremath{\mathbf{A}}, \iden, \ensuremath{\mathbf{E}}$, $b$), is connected to $x^\ensuremath{\mathbf{D}}_i$, the $i^{th}$ approximation of APCG($\ensuremath{\mathbf{P}}^T \ensuremath{\mathbf{A}} \ensuremath{\mathbf{P}}$, $\iden$, $\mathbf{D}$, $\ensuremath{\mathbf{P}}^T b$), by $x^\ensuremath{\mathbf{E}}_i=x_0+\ensuremath{\mathbf{P}} x^\ensuremath{\mathbf{D}}_i$.
\item The asymptotic convergence rate is governed by $\kappa\left( {{\ensuremath{\mathbf{P}}}_{\ensuremath{\mathbf{E}}}}^{T} {\ensuremath{\mathbf{A}}} {\ensuremath{\mathbf{P}}_{\ensuremath{\mathbf{E}}}}_{|{\ensuremath{\mathbf{A}}} {\mathcal{V}_{\ensuremath{\mathbf{E}}}}} \right) \leqslant \kappa\left({{\ensuremath{\mathbf{P}}}}^{T} {\ensuremath{\mathbf{A}}} {\ensuremath{\mathbf{P}}}_{|{\ensuremath{\mathbf{A}}} {\mathcal{V}}} \right)\leqslant \kappa\left({\ensuremath{\mathbf{A}}}\right)$.
\end{itemize}
\end{cor}
In conclusion, an increase in the size of the augmentation can only improve the asymptotic rate of convergence. (In the worst case, it leaves it unchanged.)
Now let us focus on the effect of preconditioning. Since $\ensuremath{\mathbf{M}}$ is a symmetric positive definite matrix, it can be factorized in Cholesky form $\ensuremath{\mathbf{M}}=\ensuremath{\mathbf{L}} \ensuremath{\mathbf{L}}^{T}$ (where $\ensuremath{\mathbf{L}}$ denotes a lower triangular matrix with positive diagonal coefficients). Let us introduce the notation:
\begin{equation}
\begin{array}{c}
\hat{\ensuremath{\mathbf{A}}}=\ensuremath{\mathbf{L}}^{-1}\ensuremath{\mathbf{A}} \ensuremath{\mathbf{L}}^{-T} \quad; \quad
\hat{b}=\ensuremath{\mathbf{L}}^{-1} b \quad; \quad \hat{x}=\ensuremath{\mathbf{L}}^T x\\
\hat{\ensuremath{\mathbf{C}}}=\ensuremath{\mathbf{L}}^{T}\ensuremath{\mathbf{C}} \quad; \quad \hat{\ensuremath{\mathbf{P}}}=\iden-\hat{\ensuremath{\mathbf{C}}}(\hat{\ensuremath{\mathbf{C}}}^T\hat{\ensuremath{\mathbf{A}}} \hat{\ensuremath{\mathbf{C}}})^{-1}\hat{\ensuremath{\mathbf{C}}}^T\hat{\ensuremath{\mathbf{A}}}\\
\end{array}
\end{equation}
Then, the following equivalence between preconditioned and non-preconditioned augmented conjugate gradients holds:
\begin{prop}\label{prop:pcg}
APCG($\ensuremath{\mathbf{A}}, \ensuremath{\mathbf{M}},\ensuremath{\mathbf{C}}$, $b$) is equivalent to APCG($\hat{\ensuremath{\mathbf{A}}}$, $\iden$, $\hat{\ensuremath{\mathbf{C}}}$, $\hat{b}$) with $\hat{r}=\ensuremath{\mathbf{L}}^{-1}r=\hat{z}=\ensuremath{\mathbf{L}}^{T}z$, $\hat{w}=\ensuremath{\mathbf{L}}^{T}w$, $\hat{\alpha}=\alpha$ and $\hat{\beta}=\beta$. Its asymptotic convergence rate is governed by
$\kappa\left( {\hat{\ensuremath{\mathbf{P}}}}^{T} \hat{\ensuremath{\mathbf{A}}} \hat{\ensuremath{\mathbf{P}}}_{|\hat{\ensuremath{\mathbf{A}}}
\hat{\mathcal{V}}} \right) \leqslant \kappa\left(\hat{\ensuremath{\mathbf{A}}} \right)$.
\end{prop}%
\begin{proof} Since $\hat{\ensuremath{\mathbf{P}}}=\ensuremath{\mathbf{L}}^{T} \ensuremath{\mathbf{P}} \ensuremath{\mathbf{L}}^{-T}$, we obtain directly $\hat{x_{0}}=\hat{\ensuremath{\mathbf{C}}}(\hat{\ensuremath{\mathbf{C}}}^T \hat{\ensuremath{\mathbf{A}}} \hat{\ensuremath{\mathbf{C}}})^{-1}\hat{\ensuremath{\mathbf{C}}}^T \hat{b}=\ensuremath{\mathbf{L}}^T x_0$, $\hat{r_{0}}=\hat{b}-\hat{\ensuremath{\mathbf{A}}}\hat{x_{0}}=\ensuremath{\mathbf{L}}^{-1} r_0=\hat{z_{0}}=\ensuremath{\mathbf{L}}^T z_0$ and $\hat{w_{0}}=\ensuremath{\mathbf{L}}^T w_0$. By induction, it follows that $\hat{\alpha}_{j-1}=(\hat{r}_{j-1},\hat{z}_{j-1})/(\hat{\ensuremath{\mathbf{A}}} \hat{w}_{j-1},\hat{w}_{j-1})={\alpha}_{j-1}$, $\hat{r_{j}}=\ensuremath{\mathbf{L}}^{-1} r_{j}$, $\hat{\beta}_{j}=(\hat{\ensuremath{\mathbf{A}}}\hat{w}_{j-1},\hat{z}_{j})/(\hat{\ensuremath{\mathbf{A}}} \hat{w}_{j-1},\hat{w}_{j-1})={\beta}_{j}$ and $\hat{w_{j}}=\ensuremath{\mathbf{L}}^T w_j$.
Proposition \ref{prop:acg} provides the inequality concerning the asymptotic convergence rate.
\end{proof}
Putting these propositions together, APCG($\ensuremath{\mathbf{A}}, \ensuremath{\mathbf{M}},\ensuremath{\mathbf{C}}$, $b$) is equivalent to CG($\ensuremath{\mathbf{L}}^{-1}\ensuremath{\mathbf{P}}^T \ensuremath{\mathbf{A}} \ensuremath{\mathbf{P}} \ensuremath{\mathbf{L}}^{-T}$, $\ensuremath{\mathbf{L}}^{-1}\ensuremath{\mathbf{P}}^Tb$).
All these results lead us to propose an efficient augmentation by analogy with an equivalent, simpler system solved by classical conjugate gradients.
\subsection{Interpretation and choice of the augmentation}
From a ``constraint'' point of view, the projection guarantees the $\ensuremath{\mathbf{C}}$-orthogonality of the residual throughout the iterations ($\ensuremath{\mathbf{C}}^T r_j=0$). For example, in the FETI domain decomposition method, the residual is the displacement jump between the subdomains; in the case of shell and plate problems, matrix $\ensuremath{\mathbf{C}}$ is introduced to enforce the continuity of the displacement at the corner points \cite{FARHAT:2000:FETINL}. In the BDD domain decomposition method, matrix $\ensuremath{\mathbf{C}}$ is associated with the rigid body motions of floating substructures and, therefore, local Neumann problems in the preconditioner are always well-posed \cite{MANDEL:1993:BAL}; for shell and plate problems, the matrix is enriched by corner mode corrections \cite{LETALLEC:1997:SHELL}. In both cases, matrix $(\ensuremath{\mathbf{C}}^T\ensuremath{\mathbf{A}} \ensuremath{\mathbf{C}})^{-1}$, called a coarse grid matrix, plays a crucial role in the scalability of these methods.
From a ``spectral'' point of view, augmentation can be used to decrease the active condition number (``active'' referring to eigenelements solicited by the right-hand side) and, thus, improve the asymptotic convergence rate. This is called a deflation strategy \cite{ERHEL:2000:ACG,CHAPMAN:1996:DAK}, which boils down to building matrix $\ensuremath{\mathbf{C}}$ by using (approximate) eigenvectors associated with the lowest eigenvalues. Obviously, when $\ensuremath{\mathbf{C}}$ consists of the $n_c$ eigenvectors associated with the lowest eigenvalues $\left(\lambda_1\leqslant\ldots\leqslant\lambda_{n_c}\leqslant\ldots\leqslant\lambda_n\right)$, the condition number decreases strictly: $\kappa\left(\ensuremath{\mathbf{P}}^T\ensuremath{\mathbf{A}} \ensuremath{\mathbf{P}}_{\ensuremath{\mathbf{A}} \mathcal{V}}\right)=\frac{\lambda_n}{\lambda_{n_c}}<\frac{\lambda_n}{\lambda_{1}}$.
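The deflation effect can be seen on a toy diagonal operator, for which the eigenvectors are the coordinate axes (the spectrum below is made up): removing the lowest mode shrinks the condition number from $\lambda_n/\lambda_1$ to $\lambda_n/\lambda_2$.

```python
lams = [1.0, 10.0, 100.0]                # assumed spectrum of A = diag(lams)
n = len(lams)
A = [[lams[i] if i == j else 0.0 for j in range(n)] for i in range(n)]
c = [1.0, 0.0, 0.0]                      # eigenvector of the lowest eigenvalue

cAc = lams[0]                            # c^T A c
# P = I - c (c^T A c)^{-1} c^T A ; here c^T A = lams[0] * c^T
P = [[(1.0 if i == j else 0.0) - c[i] * lams[0] * c[j] / cAc
      for j in range(n)] for i in range(n)]

# P^T A P is again diagonal: its nonzero entries are the surviving eigenvalues
PtAP = [[sum(P[k][i] * A[k][l] * P[l][j] for k in range(n) for l in range(n))
         for j in range(n)] for i in range(n)]
active = sorted(d for d in (PtAP[i][i] for i in range(n)) if d > 1e-12)

kappa_full = max(lams) / min(lams)       # 100.0
kappa_deflated = active[-1] / active[0]  # 10.0 on the active subspace
```

For a general SPD matrix the same computation applies in the eigenbasis, which is exactly the deflation argument above.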
From a ``model reduction'' point of view, subspace $\ensuremath{\mathcal{C}}$ represents a ``macro'' (or coarse) space in which the macro part of the solution is calculated directly during the initialization while the ``micro'' part of the solution, when required, is obtained during the iterations.
\subsection{Estimation of computation costs}
With regard to the numerical cost of augmentation, the main operations for the construction of the projector are: (i) the block product $\ensuremath{\mathbf{A}} \ensuremath{\mathbf{C}}$ (and assembly with neighbors for domain decomposition methods), (ii) the block dot-product $(\ensuremath{\mathbf{C}}^T \ensuremath{\mathbf{A}} \ensuremath{\mathbf{C}})$ (plus an all-to-all sum for domain decomposition methods), and (iii) the factorization of the fully-populated coarse matrix $(\ensuremath{\mathbf{C}}^T \ensuremath{\mathbf{A}} \ensuremath{\mathbf{C}})$. Then, the application of the projector consists simply of (i) one block dot-product $((\ensuremath{\mathbf{A}}\ensuremath{\mathbf{C}})^Tx)$ (plus an all-to-all exchange), (ii) the resolution of the coarse problem, and (iii) the matrix-vector product $(\ensuremath{\mathbf{C}}\alpha)$.
Thus, provided that the number of columns of matrix $\ensuremath{\mathbf{C}}$ is small, the main cost is related to the calculation of $\ensuremath{\mathbf{A}} \ensuremath{\mathbf{C}}$. One must bear in mind that block operations (on ``multivectors'') are comparatively much faster than single vector operations, especially when the matrices are sparse (because data fetching is factorized). In a domain decomposition context, product~$\ensuremath{\mathbf{A}} \ensuremath{\mathbf{C}}$ corresponds to the resolution of Dirichlet or Neumann problems in substructures, which makes the simultaneous treatment of many columns very efficient (and minimizes the number of exchanges). One must also remember that a conjugate gradient iteration involves a preconditioning step which may be expensive. (The cost is comparable to that of an operator product in optimal domain decomposition methods.) Thus, the additional cost of augmentation relative to the cost of one iteration depends on many parameters (the size of the problem, the number of augmentation vectors, the number of subdomains, the preconditioner chosen...). Typically, in the examples presented in this paper, we found that, using an optimal preconditioner, handling between 4 and 7 augmentation vectors (depending on the hardware configuration) cost no more than one CG iteration.
A question which is not addressed in this paper is the verification of the full-rank property of matrix $\ensuremath{\mathbf{C}}$, which affects the quality of the factorization of matrix $(\ensuremath{\mathbf{C}}^T \ensuremath{\mathbf{A}} \ensuremath{\mathbf{C}})$. Strategies to correct a dependence among the columns of matrix $\ensuremath{\mathbf{C}}$ due to inexact arithmetic can be found in \cite{Aliaga00alanczos-type}.
\section{Total reuse of Krylov subspaces}
In this section, we show how it is possible to define efficient augmentation strategies in a multiresolution context. Let us consider the sequence of linear systems:
\begin{equation}
\ensuremath{\mathbf{A}^{(k)}} x^{(k)}=b^{(k)} \quad, \quad k=1, \ldots p
\end{equation}
where $\ensuremath{\mathbf{A}}^{(k)}$ is an $n \times n$ symmetric positive definite matrix and $b^{(k)}$ is the right-hand side. Each linear system is solved using an augmented preconditioned conjugate gradient algorithm APCG($\ensuremath{\mathbf{A}^{(k)}}$, $\ensuremath{{\mathbf{M}^{(k)}}}, \ensuremath{\mathbf{C}}^{(k)}$, $b^{(k)}$). Let $m^{(k)}$ be the number of iterations needed to reach convergence, and let
\begin{equation}
\ensuremath{\mathbf{W}}^{(k)}_m=\left[w_0^{(k)}, \ldots, w_{m^{(k)}-1}^{(k)} \right]
\end{equation}
be a basis of the associated Krylov subspace.
As explained in the previous section, augmentation never increases the condition number which governs the asymptotic convergence rate. More precisely, the presence of active eigenvectors of the current preconditioned problem in $\ensuremath{\mathbf{C}}^{(k)}$ may increase the efficiency of the iterative solver significantly. Classical strategies can be used in the case of invariant preconditioned operators ($\ensuremath{\mathbf{A}}^{(k)}=\ensuremath{\mathbf{A}}$, $\ensuremath{\mathbf{M}}^{(k)}=\ensuremath{\mathbf{M}}$) and multiple right-hand sides.
It is more difficult to define efficient strategies in the general case of varying operators with no information available on their evolution. A simple and natural idea is to reuse previous Krylov subspaces. A first algorithm which reuses all the previous Krylov subspaces is Total Reuse of Krylov Subspaces (TRKS) (Algorithm \ref{TRKS}), which needs only a few comments:
\begin{itemize}
\item Since (according to \eqref{relation:acg}) ${\ensuremath{\mathbf{C}}^{(k)}}^{T}\ensuremath{\mathbf{A}}^{(k)}\ensuremath{\mathbf{W}}^{(k)}_m=0$, the vectors of the concatenated matrix $\ensuremath{\mathbf{C}}^{(k+1)}$ are linearly independent. Therefore, APCG($\ensuremath{\mathbf{A}}^{(k+1)}$, $\ensuremath{\mathbf{M}}^{(k+1)},
\ensuremath{\mathbf{C}}^{(k+1)}$, $b^{(k+1)}$) does not break down and converges.
\item The previous Krylov subspaces are fully reused through concatenation without post-processing; the only downside is that the memory requirements increase due to the need to save the Krylov subspaces.
\item If the number of columns of matrix $\ensuremath{{\mathbf{C}^{(k)}}}$ becomes too large, the method may become computationally inefficient, even though the number of iterations decreases considerably. Nevertheless, TRKS probably leads to the best reduction in the number of iterations achievable by reusing Krylov subspaces. Therefore, it can be used as a reference in terms of the reduction of the number of iterations for any other algorithm based on a reuse of Krylov subspaces.
\item One possible way to reduce the cost of TRKS without reducing the size of $\ensuremath{\mathbf{C}}$ consists in using approximate solvers, as in the IRKS strategy \cite{Risler:2000:IAA}.
\end{itemize}
\begin{algorithm2e}[ht]\caption{TRKS-APCG}\label{TRKS}
Initialize $\ensuremath{\mathbf{C}}^{(0)}=\ensuremath{\mathbf{C}}_0$ (an $n\times m_0$ full-rank matrix)\;%
\For{$k=0,\ldots,p-1$}{%
Solve $\ensuremath{\mathbf{A}}^{(k)}x^{(k)} = b^{(k)}$ \\ $\;\;\;$ with APCG($\ensuremath{\mathbf{A}^{(k)}}$, $\ensuremath{{\mathbf{M}^{(k)}}}, \ensuremath{{\mathbf{C}^{(k)}}}$, $b^{(k)}$)\;%
Define $\ensuremath{\mathbf{W}}^{(k)}_m=\left[\hdots, w_j^{(k)} ,\hdots\right]_{0\leqslant j< m^{(k)}}$\;%
Concatenate: $\ensuremath{\mathbf{C}}^{(k+1)}=\left[\ensuremath{{\mathbf{C}^{(k)}}},\ensuremath{\mathbf{W}}^{(k)}_m \right]$}%
\end{algorithm2e}
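The TRKS loop can be sketched end-to-end in pure Python. The version below, assuming toy SPD matrices and a multi-column augmentation basis, solves two similar systems and reuses the search directions of the first as the coarse space of the second; all names and data are illustrative.

```python
def matvec(M, x):
    return [sum(row[j] * x[j] for j in range(len(x))) for row in M]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def solve_dense(M, v):
    """Gaussian elimination with partial pivoting for the small coarse system."""
    n = len(v)
    M = [row[:] + [v[i]] for i, row in enumerate(M)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][j] * x[j] for j in range(k + 1, n))) / M[k][k]
    return x

def apcg(A, C, b, tol=1e-10, maxit=200):
    """APCG with identity preconditioner; C is a (possibly empty) list of columns.
    Returns the solution, the search directions W, and the iteration count."""
    n = len(b)
    AC = [matvec(A, c) for c in C]
    if C:
        G = [[dot(ci, Acj) for Acj in AC] for ci in C]          # C^T A C
        y0 = solve_dense(G, [dot(c, b) for c in C])
        x = [sum(C[j][i] * y0[j] for j in range(len(C))) for i in range(n)]
    else:
        x = [0.0] * n

    def proj(v):
        if not C:
            return v[:]
        y = solve_dense(G, [dot(Ac, v) for Ac in AC])
        return [v[i] - sum(C[j][i] * y[j] for j in range(len(C)))
                for i in range(n)]

    r = [b[i] - t for i, t in enumerate(matvec(A, x))]
    z = proj(r)
    w = z[:]
    W, it = [], 0
    while dot(r, r) ** 0.5 > tol and it < maxit:
        Aw = matvec(A, w)
        alpha = dot(r, w) / dot(Aw, w)
        x = [x[i] + alpha * w[i] for i in range(n)]
        r = [r[i] - alpha * Aw[i] for i in range(n)]
        z = proj(r)
        beta = dot(Aw, z) / dot(Aw, w)
        W.append(w)
        w = [z[i] - beta * w[i] for i in range(n)]
        it += 1
    return x, W, it

# two similar SPD systems; the second reuses the Krylov basis of the first
A1 = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
A2 = [[4.1, 1.0, 0.0], [1.0, 3.1, 1.0], [0.0, 1.0, 2.1]]
b = [1.0, 2.0, 3.0]
C = []
x1, W1, it1 = apcg(A1, C, b)
C = C + W1                               # TRKS concatenation step
x2, W2, it2 = apcg(A2, C, b)
```

On this $3\times 3$ toy case the concatenated basis spans the whole space, so the second solve converges at initialization; in realistic cases the iteration count only decreases, at the price of a growing coarse problem, which motivates the selective strategies below.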
In order to reduce the cost associated with the total reuse of Krylov subspaces, we propose to work on extracted sub-subspaces, an operation often referred to as the recycling of Krylov subspaces. The objective is to retain the smallest number of independent vectors which achieve the greatest decrease in the number of iterations. Clearly, the most effective approach would be to calculate approximate eigenvectors from the previous Krylov subspaces for the current operator. However, because of the variability of the operators, the extraction of such information would be extremely time consuming and would affect the global efficiency. Conversely, approximate eigenvectors of previous problems can be calculated from the associated Krylov subspaces at nearly no cost. In the following section, we describe an efficient algorithm for the extraction of such approximation vectors along with a simple selection procedure to recycle only a few of these vectors. Of course, the performance of our method depends on the stability of the eigenspaces from one system to another. This topic, especially concerning the lower part of the spectrum, is discussed in \cite{kilmer2006}.
\section{Selective recycling of Krylov subspaces}
The standard convergence bound for conjugate gradients describes an asymptotic convergence rate. Using this bound to predict the number of iterations $n_{_{\epsilon}}$ required to reach an accuracy level $\epsilon_{_{cg}}$ leads to a huge overestimation. Indeed, one has:
\begin{equation}\label{eq:res-conv-classique}\begin{aligned}
&\frac{\Vert x_{_{i}}-x \Vert_{_{\ensuremath{\mathbf{A}}}}}{\Vert x_{_{0}}-x \Vert_{_{\ensuremath{\mathbf{A}}}}} \leqslant 2 (\sigma_{_{1,n}})^i \leqslant \epsilon_{_{cg}} \quad \Rightarrow \quad
i \geqslant n_{_{\epsilon}} = \frac{ \ln( \epsilon_{_{cg}}/2)}{\ln(\sigma_{_{1,n}})}\\
&\mbox{ with } \sigma_{_{r,s}} = \frac{\sqrt{\kappa_{_{r,s}}} - 1}{\sqrt{\kappa_{_{r,s}}} + 1} \mbox{ and } \kappa_{_{r,s}}= \frac{\lambda_{_{s}}}{\lambda_{_{r}}} \end{aligned}
\end{equation}
This result alone cannot explain the improvement in the convergence rate observed during the iteration process. This superconvergence phenomenon can be explained by a study of the convergence of Ritz values \cite{MORGAN:1995:Eigen} which enables one to define an instantaneous convergence rate \cite{VANDERSLUIS:1986:RCCG}. This explanation can be improved by a study of the influence of the distribution of the eigenvalues \cite{NOTAY:1993:CRCGRE,AXELSSON:1986:RCPCG}.
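For illustration, the bound above is easy to evaluate; the sketch below (whose arguments are illustrative, not data from this study) shows the size of the overestimation for a moderately ill-conditioned system:

```python
import math

def cg_iteration_bound(lam_max, lam_min, eps_cg=1e-6):
    """Classical CG bound: smallest i such that 2*sigma**i <= eps_cg,
    with sigma = (sqrt(kappa)-1)/(sqrt(kappa)+1) and kappa = lam_max/lam_min."""
    kappa = lam_max / lam_min
    sigma = (math.sqrt(kappa) - 1.0) / (math.sqrt(kappa) + 1.0)
    return math.ceil(math.log(eps_cg / 2.0) / math.log(sigma))
```

For $\kappa=10^4$ and $\epsilon_{_{cg}}=10^{-6}$ this predicts about 726 iterations, far more than what is observed in practice once superconvergence sets in.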
The objective of recycling Krylov subspaces is to find the best augmentation space in order to trigger superconvergence quickly. This section is organized as follows: we start with a review of Ritz eigenelement analysis and continue with a brief presentation of the improved convergence results; these results lead to a number of selection strategies, which will be assessed in Section \ref{sec:assess}.
\subsection{Ritz analysis: theory and practical calculation}\label{subs:ritz}
For $0\leqslant i<m$, Ritz vectors $(\hat{y}_m^i)$ and values $(\theta_m^i)$ are approximations of the eigenvectors and eigenvalues of the symmetric positive definite matrix $\hat{\ensuremath{\mathbf{A}}}$; their definition is similar to that of the iterates in the conjugate gradient algorithm \eqref{eq:krysol_principle}
\begin{equation}\label{eq:kryeig_principle}
\left\{\begin{array}{ll}
\text{find }& (\hat{y}_m^i,\theta_m^i) \in \ensuremath{\mathcal{K}}_m(\hat{\ensuremath{\mathbf{A}}},\hat{v}_0)\times\ensuremath{\mathdj{R}}\\
\text{such that }& \hat{\ensuremath{\mathbf{A}}} \hat{y}_m^i - \theta_m^i \hat{y}_m^i \perp \ensuremath{\mathcal{K}}_m(\hat{\ensuremath{\mathbf{A}}},\hat{v}_0)
\end{array}\right.
\end{equation}
The symmetric Lanczos algorithm \cite{SAAD:1992:NMLEP} enables one to build a particular orthonormal basis of $\ensuremath{\mathcal{K}}_m(\hat{\ensuremath{\mathbf{A}}},\hat{v}_0)$, denoted $\ensuremath{\mathbf{V}}_m$. Then, the search principle becomes:
\begin{equation}
\hat{y}_m^i=\hat{\ensuremath{\mathbf{V}}}_m q_m^i,\quad \hat{\ensuremath{\mathbf{V}}}_m^T\hat{\ensuremath{\mathbf{A}}}\hat{\ensuremath{\mathbf{V}}}_m q_m^i = \theta_m^i q_m^i
\end{equation}
The Lanczos basis $\hat{\ensuremath{\mathbf{V}}}_m$ makes the Hessenberg matrix $\hat{\ensuremath{\mathbf{H}}}_m=\hat{\ensuremath{\mathbf{V}}}_m^T\hat{\ensuremath{\mathbf{A}}}\hat{\ensuremath{\mathbf{V}}}_m$ symmetrical and tridiagonal. $\hat{\ensuremath{\mathbf{V}}}_m$ and $\hat{\ensuremath{\mathbf{H}}}_m$ can be recovered directly from the conjugate gradient coefficients \cite{SAAD:2000:IMS}:
\begin{equation}\begin{aligned}
\left\{\begin{array}{l}
\hat{\ensuremath{\mathbf{V}}}_m = \left[\hdots,(-1)^j\frac{\hat{r}_j}{\|\hat{r}_j\|} ,\hdots\right]_{0\leqslant j< m} \\
\hat{\ensuremath{\mathbf{H}}}_m=\operatorname{tridiag}(\eta_{j-1},\delta_j,\eta_j)_{0\leqslant
j< m}
\end{array}\right.\\
\text{with }\delta_0 = \frac{1}{\alpha_0},\ \delta_j =
\frac{1}{\alpha_j}+\frac{\beta_{j-1}}{\alpha_{j-1}},\
\eta_j=\frac{\sqrt{\beta_{j}}}{\alpha_j}
\end{aligned}
\end{equation}
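As a sketch, the recovery of the tridiagonal matrix from the conjugate gradient coefficients costs only a few floating-point operations per iteration (the function name and list arguments below are ours, for illustration):

```python
import math

def tridiag_from_cg(alpha, beta):
    """Diagonal and off-diagonal of the Lanczos matrix H_m recovered from
    the CG coefficients alpha_j (step lengths) and beta_j, with m = len(alpha):
    delta_0 = 1/alpha_0, delta_j = 1/alpha_j + beta_{j-1}/alpha_{j-1},
    eta_j = sqrt(beta_j)/alpha_j."""
    m = len(alpha)
    diag = [1.0 / alpha[0]]
    diag += [1.0 / alpha[j] + beta[j - 1] / alpha[j - 1] for j in range(1, m)]
    off = [math.sqrt(beta[j]) / alpha[j] for j in range(m - 1)]
    return diag, off
```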
Since matrix $\hat{\ensuremath{\mathbf{H}}}_m$ is symmetrical and tridiagonal, its eigenelements $(\theta_m^j,q_m^j)_{1\leqslant j\leqslant m}$ can be calculated easily, for example using a LAPACK procedure. Let us define $\ensuremath{\boldsymbol{\Theta}}_m=\operatorname{diag}(\theta_m^1 \geqslant \ldots\geqslant \theta_m^m)$ and $\ensuremath{\mathbf{Q}}_m=\left[q_m^1,\ldots, q_m^m\right]$ such that $\hat{\ensuremath{\mathbf{H}}}_m=\ensuremath{\mathbf{Q}}_m \ensuremath{\boldsymbol{\Theta}}_m {\ensuremath{\mathbf{Q}}_m}^T$. $\ensuremath{\boldsymbol{\Theta}}_m$ and $\hat{\ensuremath{\mathbf{Y}}}_m= \hat{\ensuremath{\mathbf{V}}}_m \ensuremath{\mathbf{Q}}_m$ are the Ritz values and associated Ritz vectors, which are approximations of the eigenelements of operator $\hat{\ensuremath{\mathbf{A}}}$ and satisfy:
\begin{equation*} \hat{\ensuremath{\mathbf{Y}}}_m^T \hat{\ensuremath{\mathbf{A}}}\hat{\ensuremath{\mathbf{Y}}}_m = \ensuremath{\boldsymbol{\Theta}}_m \qquad\text{and}\qquad \hat{\ensuremath{\mathbf{Y}}}_m^T\hat{\ensuremath{\mathbf{Y}}}_m = \iden_m\end{equation*}
We presented Ritz analysis for the equivalent symmetric system described previously because symmetry simplifies the calculation of eigenelements, but the analysis can be transferred back to the left-preconditioned system using the following transformation rules:
\begin{equation}\label{eq:vritz:2}
\begin{aligned}
\ensuremath{\mathbf{V}}_m &= \ensuremath{\mathbf{L}}^{-T}\hat{\ensuremath{\mathbf{V}}}_m = \left[\ldots, (-1)^{j}\frac{{z}_j}{({r}_j,z_j)^{1/2}}, \ldots\right]\\
\ensuremath{\mathbf{H}}_m &= \hat{\ensuremath{\mathbf{H}}}_m = \ensuremath{\mathbf{V}}_m^T \ensuremath{\mathbf{A}} \ensuremath{\mathbf{V}}_m\\
\ensuremath{\mathbf{Y}}_m &= \ensuremath{\mathbf{L}}^{-T}\hat{\ensuremath{\mathbf{Y}}}_m=\ensuremath{\mathbf{V}}_m\ensuremath{\mathbf{Q}}_m
\end{aligned}
\end{equation}
The Ritz vectors are the solution of a generalized eigenproblem and satisfy the following orthogonality properties:
\begin{equation}
\ensuremath{\mathbf{Y}}_m^T \ensuremath{\mathbf{A}} \ensuremath{\mathbf{Y}}_m = \ensuremath{\boldsymbol{\Theta}}_m \qquad\text{and}\qquad \ensuremath{\mathbf{Y}}_m^T \ensuremath{\mathbf{M}} \ensuremath{\mathbf{Y}}_m = \iden_m
\end{equation}
One can show that when $m$ increases the Ritz values converge toward the eigenvalues of $\hat{\ensuremath{\mathbf{A}}}$, and that the convergence is either from above or from below depending on their rank \cite{VANDERSLUIS:1986:RCCG,JIA:2004:CRV}:
\begin{equation}
\theta^{1}_{m} \geqslant \theta^{1}_{m-1} \geqslant \theta^{2}_{m} \geqslant \ldots \geqslant \theta^{m-1}_{m} \geqslant \theta^{m-1}_{m-1} \geqslant \theta^{m}_{m}
\end{equation}
In addition, in the case of clearly distinct eigenvalues, the convergence of a Ritz value results in the convergence of the associated Ritz vector.
\subsection{Relation between the convergence of conjugate gradients and the convergence of the Ritz values}\label{subs:ba}
In \cite{VANDERSLUIS:1986:RCCG}, the superconvergence phenomenon is explained by the convergence of the Ritz values through the definition, at each iteration, of an instantaneous convergence rate associated with the part of the spectrum that is not yet approximated correctly by the Ritz values: at a given conjugate gradient iteration, one can find a deflated system (with some of its extreme eigenvalues removed) with similar behavior. Let $\left[\lambda_{_{l}} , .., \lambda_{_{r}} \right]$ be the spectrum of the deflated operator. The equivalent convergence rate is:
\begin{equation}\label{eq:cvcgritz} \Vert x-x_{_{i+1}} \Vert_{_{\ensuremath{\mathdj{A}}}} \leqslant \; F_{_{i,l,r}} \; 2 \; \sigma_{_{l,r}}
\Vert x-x_{_{i}} \Vert_{_{\ensuremath{\mathdj{A}}}} \end{equation}
where $F_{_{i,l,r}}$ quantifies the convergence of the $l$ smallest and $r$ largest Ritz values to the extreme eigenvalues:
\begin{equation*} \begin{aligned} F_{_{i,l,r}} &= \max_{l'>l } J^{(i)}_{_{l,l'}} \; \max_{r' \geqslant r} L^{(i)}_{_{r,r'}}\\J^{(i)}_{_{l,l'}} &= \prod_{_{j=1}}^{l}
\left| 1 - \frac{\lambda_{_{l'}}}{ \lambda_{_{j}}}\right| \;
\left| 1 - \frac{\lambda_{_{l'}}}{\theta_{j}^{i}}\right|^{-1} \\
L^{(i)}_{_{r,r'}} &= \prod_{_{j=1}}^{r}
\left| 1 - \frac{\lambda_{_{n-r^{'}}}}{\lambda_{_{n+1-j}}} \right| \;
\left| 1 - \frac{\lambda_{_{n-r^{'}}}}{\theta_{i+1-j}^{i}} \right|^{-1}\end{aligned}\end{equation*}
Since this result holds for every pair $(l,r)$, the effective convergence rate at Iteration $i$ corresponds to the pair $(l,r)$ which minimizes $\sigma_{_{i,l,r}} = F_{_{i,l,r}} \sigma_{_{l,r}}$.
Then, after some iterations, the superconvergent conjugate gradient algorithm behaves very much like a conjugate gradient algorithm augmented by the extreme eigenvectors which are associated with the converged Ritz values.
In a multiresolution context, provided the linear systems have similar spectral properties, the Ritz vectors associated with the converged Ritz values obtained for one system should define a viable augmentation space for the subsequent resolutions.
\subsection{Effect of the distribution of the eigenvalues}\label{subs:evdba}
The effect of the distribution of the eigenvalues on the convergence of conjugate gradients was studied in \cite{NOTAY:1993:CRCGRE,AXELSSON:1986:RCPCG}. The results take into account the fact that preconditioning often leads to clustered eigenvalues as opposed to uniformly distributed eigenvalues, as can be seen in Figure~\ref{fig:eigen-distrib}.
Among other results, the authors showed that if a spectrum consists of $p$ isolated eigenvalues in the high part of the spectrum, $p$ isolated eigenvalues in the low part and $n-2p$ uniformly distributed central eigenvalues, then the conjugate gradient convergence takes the form:
\begin{equation}
n_{_{\epsilon}} \geqslant \tilde{n}_{_{\epsilon}} = 2p +
\mathrm{int} \left( \frac{ \ln (\epsilon_{_{cg}} /2 )}{ \ln \sigma_{_{p+1,n-p}} }
- \frac{ \sum_{i=1}^{p} \ln \left(\frac{\lambda_{_{n-p+i}}}{4\lambda_{_{i}}}\left(1 - \frac{\lambda_{_{i}}}{\lambda_{_{n-p+i}}} \right) \right)}{\ln \sigma_{_{p+1,n-p}} } \right)
\label{eq:conv-cg-petitesgrandesvp}
\end{equation}
The convergence rate is approximately equal to the classical convergence rate for the central part, plus one iteration per higher eigenvalue and a little more than one iteration per lower eigenvalue. These results can be combined with the work by Jia \cite{JIA:2004:CRV,JIA:2004:CHRV} on the convergence of Ritz values. In general, since the method is related to the power iteration method, a correct approximation by the Ritz values is obtained first for the highest eigenvalues, then for the lowest part of the spectrum, resulting in superconvergence (which is governed by the asymptotic convergence rate of the reduced spectrum).
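To make the estimate concrete, the sketch below evaluates \eqref{eq:conv-cg-petitesgrandesvp} for an illustrative spectrum; the eigenvalues, the ascending ordering and the reading of $\kappa_{_{p+1,n-p}}$ as the condition number of the central part are our assumptions, not data from this study:

```python
import math

def cg_bound_clustered(eigs, p, eps_cg=1e-6):
    """Evaluate the clustered-spectrum bound: p isolated eigenvalues at each
    end of the (ascending) spectrum plus a central part of n-2p eigenvalues."""
    lam = sorted(eigs)
    n = len(lam)
    kappa = lam[n - p - 1] / lam[p]      # condition number of the central part
    sigma = (math.sqrt(kappa) - 1.0) / (math.sqrt(kappa) + 1.0)
    # correction term for the p isolated pairs (small lam[i], large lam[n-p+i])
    corr = sum(math.log(lam[n - p + i] / (4.0 * lam[i])
                        * (1.0 - lam[i] / lam[n - p + i]))
               for i in range(p))
    return 2 * p + int(math.log(eps_cg / 2.0) / math.log(sigma)
                       - corr / math.log(sigma))
```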
\subsection{Selection procedures}\label{subs:sel-proc}
The results of Section~\ref{subs:ba} lead to a first proposal of a selection procedure for converged Ritz vectors: convergence is identified by the stagnation of the Ritz values; if the conjugate gradient algorithm converges at iteration $m$, the Ritz values are calculated for the previous two states $\ensuremath{\boldsymbol{\Theta}}_m$ and $\ensuremath{\boldsymbol{\Theta}}_{m-1}$.
Once ranked, the $m$ most recent Ritz values $\ensuremath{\boldsymbol{\Theta}}_m$ are compared to the $m-1$ previous values according to the following criteria:
\begin{equation}
\left\{\begin{aligned}
&\theta_{m}^j \text{ has converged if } \frac{\vert \theta_{m}^j-\theta_{m-1}^j \vert}{\vert \theta_{m}^j \vert} \leqslant \varepsilon,\qquad 1 \leqslant j\leqslant m\!-\!1 \\
&\theta_{m}^{m-j} \text{ has converged if } \frac{\vert \theta_{m}^{m-j}-\theta_{m-1}^{m-1-j} \vert}{\vert \theta_{m}^{m-j} \vert} \leqslant \varepsilon,\qquad 0 \leqslant j\leqslant m-2
\end{aligned}\right.
\end{equation}
where $\varepsilon$ is a user parameter which is easy to adjust since the criterion is generally either very large (before the convergence of the Ritz value) or very small (after convergence). Figure \ref{fig:eigen-distrib} illustrates this property with the simple example of the operator associated with the decomposition of a linear elastic cube into ten subdomains; in this case, the upper half of the spectrum has converged.
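The stagnation test translates directly into code. The sketch below uses 0-based indices and Ritz values sorted in descending order, as in Algorithm~\ref{SRKS}; it is an illustration, not the actual implementation:

```python
def converged_ritz(theta_m, theta_m1, eps=1e-14):
    """Indices (0-based) of Ritz values of the m-step process that have
    stagnated with respect to the (m-1)-step values.  Both lists are sorted
    in the same (descending) order; the upper part of the spectrum is
    compared head-to-head, the lower part tail-to-tail."""
    m = len(theta_m)
    conv = set()
    for j in range(m - 1):
        # theta_m^{j+1} vs theta_{m-1}^{j+1}    (upper part)
        if abs(theta_m[j] - theta_m1[j]) <= eps * abs(theta_m[j]):
            conv.add(j)
        # theta_m^{m-j} vs theta_{m-1}^{m-1-j}  (lower part)
        if abs(theta_m[m - 1 - j] - theta_m1[m - 2 - j]) <= eps * abs(theta_m[m - 1 - j]):
            conv.add(m - 1 - j)
    return sorted(conv)
```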
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.6\linewidth]{fig/spectretypiq1.pdf}
\end{center}
\caption{Ritz spectrum and convergence of the Ritz values}
\label{fig:eigen-distrib}
\end{figure}
The principle of the selective recycling of Krylov subspaces (SRKS-APCG) is described in Algorithm~\ref{SRKS}. Basically, in addition to the memory required by APCG, the SRKS-APCG algorithm requires storage for the $m$ vectors $(z_j)_{0\leqslant j<m}$ of size $n$. One should note that the selected vectors are normalized by the square root of the associated Ritz value in order to improve the condition number of the coarse matrix. (If operator $\ensuremath{\mathbf{A}}$ remained constant, matrix $(\ensuremath{\mathbf{C}}^T \ensuremath{\mathbf{A}}\ensuremath{\mathbf{C}})$ would be the identity matrix.)
\begin{algorithm2e}[h!t]\caption{SRKS-APCG}\label{SRKS}
Initialize $\ensuremath{\mathbf{C}}^{(0)}=\ensuremath{\mathbf{C}}_0$ (full column rank matrix)\
\For{$k=0,\ldots,p-1$}{%
$\bullet$ Solve $\ensuremath{\mathbf{A}}^{(k)}x^{(k)} = b^{(k)}$ with APCG($\ensuremath{\mathbf{A}^{(k)}}, \ensuremath{{\mathbf{M}^{(k)}}}, \ensuremath{{\mathbf{C}^{(k)}}}, b^{(k)}$)\
$\bullet$ Define $ {\ensuremath{\mathbf{V}}}_m=\left[\hdots, (-1)^{j}\frac{{z}_j}{({r}_j,z_j)^{1/2}} ,\hdots\right]_{0\leqslant j< m}$\
$\bullet$ Define $ {\ensuremath{\mathbf{H}}}_m=\operatorname{tridiag}(\eta_{j-1},\delta_j,\eta_j)_{0\leqslant j< m}$ \,
$ \delta_0 = \frac{1}{\alpha_0} \; , \; \delta_j = \frac{1}{\alpha_j} + \frac{\beta_{j-1}}{\alpha_{j-1}} \; , \; \eta_j=\frac{\sqrt{\beta_{j}}}{\alpha_j}$\
$\bullet$ Compute eigenelements $ (\ensuremath{\mathbf{Q}}_m,\ensuremath{\boldsymbol{\Theta}}_m)$ of ${\ensuremath{\mathbf{H}}}_m$ $(\theta_m^1\geqslant \ldots \geqslant \theta^m_m)$\
$\bullet$ Compute $\ensuremath{\mathbf{Y}}_m = \ensuremath{\mathbf{V}}_m\ensuremath{\mathbf{Q}}_m=\left[y_m^1, \ldots , y_m^m \right]$\
$\bullet$ Extract ${\ensuremath{\mathbf{H}}}_{m-1}=\operatorname{tridiag}(\eta_{j-1},\delta_j,\eta_j)_{0\leqslant j< m-1}$ \
$\bullet$ Compute eigenvalues $(\theta_{m-1}^j)$ of ${\ensuremath{\mathbf{H}}}_{m-1}$ \
\For{$j=1,\ldots,m-1$}{%
$\ensuremath{\mathbf{C}}=\left[\ensuremath{\mathbf{C}}, \frac{y_m^j}{\sqrt{|\theta_{m}^j|}} \right]$ if $|\theta_{m}^j-\theta_{m-1}^j| \leqslant \varepsilon |\theta_{m}^j| $\;
$\ensuremath{\mathbf{C}}=\left[\ensuremath{\mathbf{C}}, \frac{y_m^{j+1}}{\sqrt{|\theta_{m}^{j+1}|}} \right]$ if $|\theta_{m}^{j+1}-\theta_{m-1}^{j}| \leqslant \varepsilon |\theta_{m}^{j+1}|$
} %
$\bullet$ Concatenate $ {\ensuremath{\mathbf{C}}}^{(k+1)}=[\ensuremath{\mathbf{C}}^{(k)};{\ensuremath{\mathbf{C}}}]\;\; , \;\; \ensuremath{\mathbf{C}}=\left[0\right] $\
$\bullet$ If dim(${\ensuremath{\mathbf{C}}}^{(k+1)}) \geqslant n_{c_{lim}}$, then ${\ensuremath{\mathbf{C}}}^{(k+1)}=\ensuremath{\mathbf{C}}^{(0)}$
}
\end{algorithm2e}
For better computational efficiency, a restart parameter can be introduced in order to limit the size of the augmentation space associated with parameter $n_{c_{lim}}$ in Algorithm~\ref{SRKS}. This limit size can be set after a complexity analysis under the assumption that all non-augmented systems would be solved in the same number of iterations. However, we did not use such a restart procedure in our experiments.
In order to be even more selective, we propose a reselection strategy based on a prediction of the efficiency of the retained vectors. Indeed, the results of Section~\ref{subs:evdba} in terms of the effect of the distribution of the eigenvalues lead us to retain only the converged Ritz vectors which belong to the external part of the spectrum:
\begin{itemize}
\item this is known to be the first part of the spectrum whose approximation by Ritz values is good;
\item since the convergence of Ritz vectors is identified by the stagnation of the associated Ritz values, the fact that the external Ritz values are distinct ensures that the Ritz vectors approximate the eigenvectors correctly \cite{JIA:2004:CRV};
\item while choosing vectors in the dense central zone does not modify the shape of the spectrum and does not improve convergence, selecting the external part of the spectrum triggers superconvergence instantly.
\end{itemize}
In order to select only the external part of the spectrum, we implemented the cluster identification algorithm proposed in \cite{MOLINARI:2001:MTCD}. This algorithm seeks the piecewise constant distribution which is nearest (in a least squares sense) to the distribution of the distances among the sorted eigenvalues. The only parameter required is the minimum size of the cluster, which we set at one-fifth the number of preselected vectors. As will be shown in the next section, the performance achieved with this reselection algorithm is not outstanding, but some results in terms of gain per augmentation vector are worth considering.
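For illustration only, the sketch below is a much cruder stand-in for the cluster identification algorithm of \cite{MOLINARI:2001:MTCD}: instead of a least-squares piecewise-constant fit of the gap distribution, it simply cuts the sorted spectrum at its two largest logarithmic gaps and keeps the two outer groups (the function and the sample spectrum are our own illustrative choices):

```python
import math

def external_part(values):
    """Crude stand-in for cluster identification: cut the sorted spectrum at
    its two largest gaps in log scale and keep the two outer groups.
    Returns the 0-based indices (into the sorted values) of the external part."""
    lam = sorted(values)
    gaps = [math.log(lam[i + 1] / lam[i]) for i in range(len(lam) - 1)]
    # the two largest gaps bound the central cluster
    two_largest = sorted(range(len(gaps)), key=gaps.__getitem__)[-2:]
    lo, hi = min(two_largest), max(two_largest)
    return list(range(lo + 1)) + list(range(hi + 1, len(lam)))
```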
\section{Numerical assessments}\label{sec:assess}
We present three numerical experiments. Two concern the evaluation of a structure made of random materials, as is the case in a Monte-Carlo simulation. In the first case, the materials are elastic; in the second case, which is a nonlinear problem, they are elastic-plastic. The last case is a large displacement problem, which raises specific difficulties.
The methods were implemented in the ZEBULON code \cite{ZEBUUSER:2001} and parallelism was introduced using MPI. The calculations were performed on the LMT-Cachan cluster, which consists of dual quadcore and dual hexacore processors connected by a gigabit network. The calculations were always carried out on homogeneous sets of processors which were entirely dedicated to one task which fit entirely in memory, so swapping was not necessary. In each case, we indicate the CPU time which measures the amount of work performed for one subdomain. The Wall Clock Time (WCT), a global measure which is more sensitive to external perturbations induced by the operating system and the presence of other users, was considered to be unreliable in many cases; so we mention it only for the first set of experiments. One should note that the gains calculated with WCT were always greater.
The CPU plots show the total time as well as the time dedicated to augmentation (preparation of the coarse operator, initialization and projections); the difference represents the iterations of the solver.
All the calculations used a dual formulation of the interface problem through domain decomposition (FETI). The convergence was evaluated using the norm of the residual (which corresponds to the displacement gaps at the interfaces) normalized by the condensed right-hand side. Classically for such structural problems, total reorthogonalization was used to enforce the $\ensuremath{\mathbf{A}}$-conjugation of the search directions. (The case without reorthogonalization is discussed briefly in the first example.)
\subsection{The case of a sequence of linear systems}
We considered a cube (of side $50$ mm) with $4\times4\times4=64$ small cubic inclusions (of side $5.5$ mm). A slice through this structure is shown in Figure~\ref{fig:cube}. The cube was clamped over one side, and the opposite side plus another side were subjected to uniform pressure. The mesh consisted of $125,000$ linear hexahedral elements for a total of $400,000$ degrees of freedom. Three automatic decompositions (into 12, 48 and 96 subdomains) were performed using the Metis algorithm \cite{METIS:1998} (see Figure~\ref{fig:cubedec}). The resulting interface system contained $54,000$ unknowns for the 12-subdomain decomposition, $96,000$ unknowns for the 48-subdomain decomposition and $133,000$ unknowns for the 96-subdomain decomposition. All the materials were isotropic, linear and elastic, and were characterized by their Young's modulus and Poisson's coefficient. The material properties of each inclusion and of the matrix were chosen randomly following a normal law with a relative standard deviation equal to $10\%$, leading to a $\pm 23\%$ variation range about the nominal value. The average Young's modulus was $200$ MPa for the matrix and $20,000$ MPa for each inclusion, and the average Poisson's coefficient was $0.27$ for the matrix and $0.35$ for each inclusion. The objective was to perform the calculations for $40$ draws of the $130$ coefficients.
\begin{figure}[ht]\centering
\begin{minipage}{.49\textwidth}\centering
\includegraphics[width=0.9\textwidth]{fig/slice2.png}
\caption{A slice through the heterogeneous cube (shear stress)} \label{fig:cube}
\end{minipage}\hfill
\begin{minipage}{.49\textwidth}\centering
\includegraphics[width=0.8\textwidth]{fig/cube48.png}
\caption{The decomposition into 48 subdomains} \label{fig:cubedec}
\end{minipage}
\end{figure}
We used a dual formulation (FETI) with both a Dirichlet (optimal) and a lumped preconditioner, and two target APCG accuracies, $10^{-3}$ and $10^{-6}$.
We considered the following algorithms: conjugate gradients (cg), total reuse of Krylov subspaces (trks) and selective reuse of Krylov subspaces with two values of the criterion, $\varepsilon=10^{-6}$ (srks6) and $\varepsilon=10^{-14}$ (srks14). In addition, in the last case ($\varepsilon=10^{-14}$), we also attempted to further refine the selection by not selecting the converged Ritz values contained in the central cluster (identified by the algorithm proposed by \cite{MOLINARI:2001:MTCD}); this method is labeled (clust14).
\subsubsection{Comparison of the strategies}\
The results for the 12-subdomain decomposition are summarized in Table~\ref{tab:cubinc}, which gives the average number of APCG iterations to convergence, the average size of the augmentation space, the final size of the augmentation space, and the average CPU and wall clock times per system, from which we also deduced the augmentation time (operator preparation and projection). Computations were conducted on one dual hexacore node (one subdomain per core). When the average and final sizes of the augmentation space are close, this means that most of the augmentation space was identified with the first systems. For a given configuration (accuracy and preconditioner), the figures in bold in the three columns `average number of iterations', `average CPU time' and `average wall clock time' indicate the best strategy in terms of gain per unit augmentation vector compared to CG.
For the 12-subdomain decomposition, Figures~(\ref{fig:cub3i}, \ref{fig:cub3a}, \ref{fig:cub3t}) (for an objective of $10^{-3}$ accuracy) and Figures~(\ref{fig:cub6i}, \ref{fig:cub6a}, \ref{fig:cub6t}) (for an objective of $10^{-6}$ accuracy) give the evolutions of the number of APCG iterations to convergence for each linear system, the dimension of the augmentation space $n_c$ and the CPU time for the resolution of each system, with both lumped and Dirichlet preconditioners. \medskip
\begin{table}\centering
\begin{tabular}{|c|c|c||p{.7cm}|p{.7cm}|p{.7cm}|p{.7cm}|p{.7cm}|p{.7cm}|p{.7cm}|}\hline
\multicolumn{3}{|c||}{} & \multicolumn{7}{|c|}{12 subdomains} \\\hline
accur. & precond. & & avg. \# it & avg. $n_c$ & max $n_c$ & avg. total CPU & avg. CPU aug. & avg. total WCT& avg. WCT aug. \\\hline
\multirow{12}*{$10^{-3}$} & \multirow{5}*{\rotatebox{90}{Dirichlet}}
& cg (no reo.) & 70 & ---& ---& 23.5 & 0 & 31.8 & 0\\
& & cg & 44.3 & ---& ---& 16.0 &0& 24.5 &0 \\
& & trks & 2.4 & 77.8 & 96 & 7.1 & 6.3 & 9.1 & 7.6\\
& & srks6 & 20.5 & 41.3 & 50 & 11.0 & 3.8 & 15.6 & 4.7\\
& & srks14 & 25.6 & 24.6 & 27 & \textbf{11.7} & 2.7 &\textbf{17.1} & 3.5\\
& & clust14 & \textbf{30.6} & 16.8 & 24 & 13.8 & 2.2 & 25.3 & 3.2 \\\cline{2-10}
& \multirow{5}*{\rotatebox{90}{Lumped}}
& cg (no reo.) & 145 & ---& ---& 27.7 & 0 & 40.3&0 \\
& & cg & 68.1 & ---& ---& 13.6 & 0 & 24.0&0\\
& & trks & \textbf{0.4} & 71.8 & 74 & \textbf{5.7 } &5.6& \textbf{6.8}&6.7\\
& & srks6 & 27.4 & 81 & 108& 12.3 &6.5& 18.0&8.0 \\
& & srks14 & 32.0 & 59.6 & 71 & 11.8 & 5.1 & 18.1 &6.4\\
& & clust14 & 51.1 & 20 & 39 & 12.8 & 2.3& 21.7&3.2 \\\hline
\multirow{12}*{$10^{-6}$} & \multirow{6}*{\rotatebox{90}{Dirichlet}}
& cg (no reo.) & 174 & ---& ---& 58.3 &0& 85.8&0 \\
& & cg & 84.4 & ---& ---& 30.0 &0& 49.1& 0\\
& & trks & 13.2 & 382.9& 551& 46.5 &41.3& 60.4&52.8\\
& & srks6 & 36.7 & 104.3& 142& 22.3 &8.6& 35.7&11.0 \\
& & srks14 & 42.8 & 72.6 & 87 & 22.7 &6.3& 36.2& 8.1\\
& & clust14 & \textbf{60.8} & 35.2 & 77 & \textbf{25.7} & 3.6&\textbf{39.8 }& 4.9\\\cline{2-10}
& \multirow{5}*{\rotatebox{90}{Lumped}}
& cg (no reo.) & {\tiny $>$}400& ---& ---& {\tiny $>$}78&0&{\tiny $>$}110 & 0 \\
& & cg & 147.7 & ---& ---& 31.7 &0& 61.0&0 \\
& & trks & 16.3 & 516.7& 735&70.2 &65.7& 93.3&85.9\\
& & srks6 & 54.2 & 225.7& 311& 33.4 &20.5& 49.4&25.6\\
& & srks14 & 60.6 & 170.2& 216& 29.2 &15.0 &44.2&18.3\\
& & clust14 & \textbf{129.4} & 20 & 39 & \textbf{30.4} &2.6& \textbf{55.6}&4.0 \\\hline
\end{tabular}
\caption{Performance summary for the cube with inclusions}\label{tab:cubinc}
\end{table}
\begin{figure}[ht]
\begin{minipage}{.5\textwidth}\centering
\includegraphics[width=0.99\textwidth]{fig/cubinc3l_i.png}
\caption{Cube 12 subdomains, lumped, $10^{-3}$ accuracy, number of iterations per linear system} \label{fig:cub3i}
\end{minipage}
\begin{minipage}{.5\textwidth}\centering
\includegraphics[width=0.99\textwidth]{fig/cubinc6d_i.png}
\caption{Cube 12 subdomains, Dirichlet, $10^{-6}$ accuracy, number of iterations per linear system} \label{fig:cub6i}
\end{minipage}
\end{figure}
\begin{figure}[ht]
\begin{minipage}{.5\textwidth}\centering
\includegraphics[width=0.99\textwidth]{fig/cubinc3l_a.png}
\caption{Cube 12 subdomains, lumped, $10^{-3}$ accuracy, dimension of the augmentation space} \label{fig:cub3a}
\end{minipage}
\begin{minipage}{.5\textwidth}\centering
\includegraphics[width=0.99\textwidth]{fig/cubinc6d_a.png}
\caption{Cube 12 subdomains, Dirichlet, $10^{-6}$ accuracy, dimension of the augmentation space} \label{fig:cub6a}
\end{minipage}
\end{figure}
\begin{figure}[ht]
\begin{minipage}{.5\textwidth}\centering
\includegraphics[width=0.99\textwidth]{fig/cubinc3l_t.png}
\caption{Cube 12 subdomains, lumped, $10^{-3}$ accuracy, CPU time per linear system} \label{fig:cub3t}
\end{minipage}
\begin{minipage}{.5\textwidth}\centering
\includegraphics[width=0.99\textwidth]{fig/cubinc6d_t.png}
\caption{Cube 12 subdomains, Dirichlet, $10^{-6}$ accuracy, CPU time per linear system} \label{fig:cub6t}
\end{minipage}
\end{figure}
Without full reorthogonalization, the performance was very poor and led to about twice the number of iterations of the recommended fully reorthogonalized conjugate gradients. This was expected because systems resulting from domain decomposition formulations are known to often require full reorthogonalization \cite{FARHAT:1994:ADV}. Furthermore, one should note that the additional iterations carried out in the non-reorthogonalized case led to vector sets which made the Ritz analysis more complex due to the appearance of nonphysical, multiple eigenvalues. The non-reorthogonalized approach was no longer considered in the other examples.
With the TRKS approach, two types of behavior were observed. In the low-accuracy case ($10^{-3}$), for both preconditioners (but especially for the lumped preconditioner), the size of the augmentation space reached a plateau, which means that the augmentation space contained almost all the required information; the gains in terms of both the number of iterations ($>90\%$) and the CPU time ($>55\%$) were excellent. In the high-accuracy case ($10^{-6}$), the size of the augmentation space never stabilized; therefore, even though the number of iterations decreased drastically, the CPU time increased. Table \ref{tab:trksit} gives extended performance results for TRKS which confirm this analysis. The gains are given relative to conjugate gradients. The efficiency of augmentation is defined by the average decrease in the number of iterations per augmentation vector; the higher the required accuracy, the less efficient the TRKS approach. These results justify our decision to select the subspaces so that the dimension of the augmentation space would remain under control.
The SRKS14 approach succeeded in limiting the size of the augmentation space and led to a satisfactory decrease in the number of iterations. As can be seen in the figures, SRKS6 did not stabilize the augmentation space as efficiently and behaved halfway between TRKS and SRKS14; therefore, we will choose SRKS14 as our reference algorithm from now on.
The cluster strategy as it stands today gave unsatisfactory results: even though it often led to the best gain per augmentation vector, it seemed to impair the selection of useful vectors and allow much less reduction in the number of iterations than SRKS. After the resolution of many systems, it tended to lead to the same augmentation space as SRKS.
To confirm that hypothesis, we compared the spaces $C_{SRKS}$ and $C_{cluster}$ after the 40 resolutions for the low-accuracy Dirichlet case. We used the following procedure:
first, the vectors were orthonormalized using SVD: $C=U\Sigma V^T$; then SVD was applied to the concatenated matrix $[U_{SRKS},U_{cluster}]$. A plot of the singular values is shown in Figure~\ref{fig:svdclustsrks}. Independent spaces would lead to a constant value equal to 1, while for nested spaces the common space would lead to $\{\sqrt{2},0\}$ pairs of singular values. One can observe that the spaces are not exactly nested, but come quite close.
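The interpretation of the singular values can be checked on a single direction: for two unit vectors $u$ and $v$, the singular values of $[u,v]$ are $\sqrt{1\pm\vert u^Tv\vert}$, which interpolates between the independent case $(1,1)$ and the nested case $(\sqrt{2},0)$. A minimal sketch (the function is ours, for illustration):

```python
import math

def pair_singular_values(u, v):
    """Singular values of the n-by-2 matrix [u, v] for two unit vectors:
    sqrt(1 + |u.v|) and sqrt(1 - |u.v|).  (1, 1) means independent
    directions; (sqrt(2), 0) means the same direction."""
    c = abs(sum(a * b for a, b in zip(u, v)))
    return math.sqrt(1.0 + c), math.sqrt(1.0 - c)
```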
In conclusion, the cluster strategy is not mature yet, but it is promising. It was not considered for the following experiments because, due to the larger number of systems involved, it would behave quite similarly to SRKS.
\begin{figure}[ht]\centering
\begin{minipage}{.49\textwidth}
\includegraphics[width=0.9\textwidth]{fig/svdclustsrks.png}
\caption{Singular values of $[C_{SRKS},C_{cluster}]$} \label{fig:svdclustsrks}
\end{minipage}
\end{figure}
\begin{table}
\centering
\begin{tabular}{|c|c|c|c|c|p{2cm}|}\hline
accur. & precond. & \# subdomains & iteration gain & CPU gain & efficiency of augmentation\\\hline
\multirow{4}*{$10^{-3}$} &\multirow{2}*{lumped} &12 &99.4\% &58.2\% &0.94 \\
& &48 &97.7\% &60.1\% &0.97 \\ \cline{2-6}
&\multirow{2}*{Dirichlet} &12 &94.7\% &55.5\% &0.54 \\
& &48 &96.2\% &62.6\% &0.69 \\ \hline
\multirow{4}*{$10^{-6}$}&\multirow{2}*{lumped} &12 &89\% &-121.2\% &0.25 \\
& &48 &90.2\% &-162.3\% &0.37 \\ \cline{2-6}
&\multirow{2}*{Dirichlet} &12 &84.4\% &-55.1\% &0.19 \\
& &48 &83.3\% &-90.7\% &0.21 \\ \hline
\end{tabular}\caption{Relative performance of TRKS}\label{tab:trksit}
\end{table}
\subsubsection{Study of SRKS14 in various configurations}\
Table~\ref{tab:srks14it} shows the relative performance of SRKS14 as a function of the number of subdomains, of the preconditioners and of the accuracy. The efficiency of the augmentation is defined as the decrease in the average number of iterations per augmentation vector. One can observe that the efficiency ranged between $0.5$ and $0.85$ and was best for the lower accuracy and the improved preconditioner. For these spectra in which there exist no small isolated eigenvalues (which could lead to efficiencies greater than 1), such results are consistent with the theory (see Section~\ref{subs:evdba}). In the next section, we will see that this moderate efficiency does not preclude significant CPU improvements.
The gains in terms of the number of iterations were relatively stable, typically between $50\%$ and $60\%$ in the high-accuracy case.
\begin{table}[ht]
\centering
\begin{tabular}{|c|c|c|p{1.7cm}|c|c|p{2cm}|}\hline
precond. & accur. & \# subdomains & CG avg. \#~iterations & avg. $n_c$& iteration gain & efficiency of augmentation\\\hline
\multirow{4}*{lumped} &\multirow{2}*{$10^{-3}$} &12&68.1 &59.6 &52.9\% &0.6\\
& &48& 43.7 &28.2 &54.2\% &0.84 \\
\cline{2-7}
&\multirow{2}*{$10^{-6}$} &12& 147.7 &170.2 &59\% &0.51\\
& &48& 162.7 &186.3 &62.5\% &0.55 \\
\hline
\multirow{6}*{Dirichlet}&\multirow{3}*{$10^{-3}$} &12& 43.3 &24.6 &42.2\% & 0.76\\
& &48&50.7 &31 &48.6\% &0.79 \\
& &96&67.4 &51.1 & 51.7\% & 0.68 \\\cline{2-7}
&\multirow{3}*{$10^{-6}$} &12& 84.4 &72.6 &49.3\% &0.57\\
& &48&116. &111.5 &57.2\% &0.6 \\
& &96&140.7 &141.9 & 60.3\% & 0.6\\\hline
\end{tabular}\caption{Iteration gains for SRKS14}\label{tab:srks14it}
\end{table}
\subsubsection{Influence of the hardware configuration on the CPU gains}\
Now, let us study the performance of SRKS14 in terms of CPU time for the same decomposition into 48 subdomains, but using different hardware configurations:
\begin{enumerate}
\item Configuration A corresponds to 4 dual hexacore nodes with 1 subdomain per core;
\item Configuration B corresponds to 6 dual quadcore nodes with 1 subdomain per core;
\item Configuration C corresponds to 3 dual quadcore nodes with 2 subdomains per core;
\item Configuration D corresponds to 2 dual quadcore nodes with 3 subdomains per core.
\end{enumerate}
One can note that the processors in Configuration A were different from those used in the other cases.
In all the cases, the memory was sufficient to avoid swapping. The results are given in Table~\ref{tab:srks14cpudiri} for the Dirichlet preconditioner and in Table~\ref{tab:srks14cpulumped} for the lumped preconditioner. One can see that Configurations B, C and D had similar performances and were slower than Configuration A due to the different memory technology.
One interesting factor is the ratio of the average CPU cost of an iteration to the average CPU cost of an augmentation vector (the last columns of Tables~\ref{tab:srks14cpudiri} and~\ref{tab:srks14cpulumped}). One can see that in Configuration A, 4 augmentation vectors cost no more than one iteration; in the other configurations, 7 augmentation vectors cost no more than one iteration. Since we saw that one needs about $1/0.6\simeq 1.7$ augmentation vectors to save one iteration, the advantage of augmentation is clear. Indeed, we observed a 32\% CPU improvement in Configuration A and a $40\%$ to $50\%$ improvement in the other configurations.
Note that when the lumped preconditioner is used, the equivalent cost of an iteration is only 2.8 augmentation vectors in Configuration A and 4.5 augmentation vectors in Configuration D (see Table~\ref{tab:srks14cpulumped}). Because the efficiency of the augmentation vectors is lower when this inexpensive preconditioner is used (in the high-accuracy case), the CPU improvement is smaller as well.
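The break-even reasoning above can be stated compactly: with efficiency $e$ (iterations saved per augmentation vector), saving one iteration costs $1/e$ augmentation vectors, so augmentation pays off whenever the cost ratio $r$ = CPU per iteration / CPU per augmentation vector exceeds $1/e$. A minimal sketch with the cost ratios measured in the tables:

```python
# Break-even check for augmentation: saving one iteration costs 1/e
# augmentation vectors, so augmentation pays off whenever the cost ratio
# r = CPU_per_iteration / CPU_per_aug_vector exceeds 1/e.
def pays_off(r, efficiency):
    return r > 1.0 / efficiency

# Cost ratios r measured for SRKS14 (from the CPU performance tables).
configs = {
    "A (Dirichlet)": 4.0,
    "D (Dirichlet)": 7.8,
    "A (lumped)":    2.8,
    "D (lumped)":    4.5,
}
efficiency = 0.6  # typical high-accuracy value observed for SRKS14
for name, r in configs.items():
    print(name, "advantage" if pays_off(r, efficiency) else "no advantage")
```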
\begin{table}[ht]
\centering
\begin{tabular}{|c|c|c|p{3.5cm}|}\hline
Configuration & CG avg. CPU & CPU gain & CPU~per~iteration / CPU~per~augm.~vector \\\hline
A & 9.5 & 32.6\%& 4 \\\hline
B & 25.7& 41.5\%& 6.7\\\hline
C & 29.7& 48.9\%& 7.7\\\hline
D & 29.9& 47.6\%& 7.8\\\hline
\end{tabular}\caption{CPU performance of SRKS14 for $10^{-6}$ accuracy with the Dirichlet preconditioner}\label{tab:srks14cpudiri}
\end{table}
\begin{table}[ht]
\centering
\begin{tabular}{|c|c|c|p{3.5cm}|}\hline
Configuration & CG avg. CPU & CPU gain & CPU~per~iteration / CPU~per~augm.~vector \\\hline
A & 10.7& 22.4\%& 2.8 \\\hline
D & 27.4 & 33.3\%& 4.5\\\hline
\end{tabular}\caption{CPU performance of SRKS14 for $10^{-6}$ accuracy with the lumped preconditioner}\label{tab:srks14cpulumped}
\end{table}
The ratio of the CPU time per iteration to the CPU time per augmentation vector for SRKS (Column 4 of the previous tables) turned out to be relatively stable for a given machine with a given preconditioner. This is due to the stability of the size of the augmentation space which prevented the cost from soaring (as would happen with TRKS). Thus, the CPU performance can be deduced from the iteration gains and the augmentation efficiency (see Table~\ref{tab:srks14it}). For instance, the CPU gain for SRKS with the 96-subdomain decomposition was slightly greater than 50\%.
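This deduction can be made explicit with a rough cost model: if one iteration costs $c_{it}$ and one augmentation vector costs $c_{it}/r$, and setup costs shared by both solvers are neglected, then the CPU gain is approximately the iteration gain minus $(n_c/\text{it}_{CG})/r$. The sketch below applies this model to the Dirichlet, $10^{-6}$, 48-subdomain case; it reproduces the measured gains of Table~\ref{tab:srks14cpudiri} to within a few percentage points.

```python
# Minimal cost model for the CPU gain of augmented CG:
#   baseline cost  ~ it_cg * c_it
#   augmented cost ~ (1 - iter_gain) * it_cg * c_it + n_c * (c_it / r)
# so  CPU gain ~ iter_gain - (n_c / it_cg) / r
def predicted_cpu_gain(iter_gain, n_c, it_cg, r):
    return iter_gain - (n_c / it_cg) / r

# Dirichlet preconditioner, 1e-6 accuracy, 48 subdomains.
iter_gain, n_c, it_cg = 0.572, 111.5, 116.0
for config, r in [("A", 4.0), ("B", 6.7), ("C", 7.7), ("D", 7.8)]:
    print(f"config {config}: predicted CPU gain ~ "
          f"{100 * predicted_cpu_gain(iter_gain, n_c, it_cg, r):.0f}%")
```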
\subsection{The case of a sequence of nonlinear problems}
Now let us consider a hexahedral holed plate ($10\times 10\times 0.2$ mm with a center hole of radius $1$ mm, see Figure~\ref{fig:plaque}) subjected to unidirectional tension (a prescribed normal displacement). The plate was discretized into $61,000$ linear hexahedral elements for a total of $41,000$ degrees of freedom. The structure was divided into 8 subdomains using the Metis algorithm, which resulted in an interface system with $3,000$ unknowns. The problem was solved using one 8-core processor (one subdomain per core). Elastic-plastic behavior with nonlinear isotropic hardening and a von Mises-type plasticity criterion was assumed. Denoting by $\sigma$ the Cauchy stress tensor, by $\epsilon(u)$ the symmetric gradient of the displacement field $u$, and by $\ensuremath{\mathcal{K}}$ the Hooke tensor, the material law can be written as:
\begin{equation}
\left\{\begin{array}{l}
\epsilon(u)={\epsilon}^{{e}}+{\epsilon}^{{p}}, \qquad {\sigma} = \ensuremath{\mathcal{K}} : {\epsilon}^{{e}} \\
\text{ if } f(\sigma) =0 \; \text{ then } \; \dot{\epsilon}^{{p}} = \ensuremath{\lambda} f_{_{,\sigma}} \\
\text{ if } f(\sigma) < 0 \; \text{ then } \; \dot{\epsilon}^{{p}} = 0 \\
f(\sigma ) = \sqrt{ \frac{3}{2} \; \sigma :\sigma } - \left(R_{_{0}} + Q \left( 1 -e^{- b \ensuremath{\lambda}}\right)\right)
\end{array}\right.
\end{equation}
The coefficients were drawn from a normal distribution with a $10\%$ relative standard deviation, which implied variations of up to $\pm 23\%$ in the coefficients. The mean values of the material parameters were: $E=200,000$ MPa, $\nu=0.3$, $R_0=300$ MPa, $b=22$ and $Q=170$ MPa. The loading was applied in two steps: first, a single increment to reach the elastic limit; then, $16$ equal increments in order to multiply the prescribed displacement by $4$. The objective of the study was to analyze $21$ configurations.
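The hardening term of the material law above, $R_0 + Q\,(1 - e^{-b\lambda})$, can be evaluated directly. A minimal sketch with the mean parameter values (in MPa) quoted in the text:

```python
import math

# Isotropic hardening curve R(lam) = R0 + Q*(1 - exp(-b*lam)) with the mean
# material parameters from the text (MPa).
R0, Q, b = 300.0, 170.0, 22.0

def yield_radius(lam):
    """Current radius of the von Mises yield surface at cumulated plasticity lam."""
    return R0 + Q * (1.0 - math.exp(-b * lam))

print(yield_radius(0.0))   # initial yield stress: 300 MPa
print(yield_radius(0.5))   # essentially saturated near R0 + Q = 470 MPa
```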
Again, the linear solver used was FETI with a Dirichlet or lumped preconditioner. The accuracy objective for the linear systems was set at $10^{-6}$. (The accuracy must be high for the nonlinear process to run well). Because of the approximations, not all the methods converged in the same number of Newton iterations; on average, one nonlinear analysis required the resolution of 95 tangent systems. Table~\ref{tab:plaque} summarizes the performances of the various methods; Figure~\ref{fig:plate_i} shows the evolution of the average number of APCG iterations with the lumped preconditioner during the sequence of linear systems; Figure~\ref{fig:plate_a} shows the evolution of the size of the augmentation space; Figures~\ref{fig:plate_cpu} and~\ref{fig:plate_wall} show the evolutions of the average CPU time and wall clock time for the resolution of one linear system along with the evolution of the average augmentation time (operator creation and projection).
\begin{minipage}{.3\textwidth}\centering
\includegraphics[width=0.99\textwidth]{fig/hole.png}
\captionof{figure}{The holed plate example (plastic strain)} \label{fig:plaque}
\end{minipage}
\begin{minipage}{.7\textwidth}\centering
\begin{tabular}{|c|c|p{.65cm}|p{.65cm}|p{.65cm}|p{.65cm}|p{.65cm}|}\hline
precond. & method & avg. \# it & avg. $n_c$ & max $n_c$ & avg. CPU & avg. WCT\\\hline
Dirichlet & cg & 25.6 & -- & -- & 1.21 & 3.03 \\
Dirichlet & trks$^*$ & 1.4 & 358 & 492 & 2.66 & 9.83 \\
Dirichlet & srks14 & \textbf{16.1} & 17 & 19 & \textbf{0.98} & \textbf{2.35}\\\hline
lumped & cg & 41.4 & -- & -- & 1.03 & 3.24\\
lumped & trks$^*$ & 1.2 & 520 & 695 & 4.72 &6.98\\
lumped & srks14 & \textbf{19.1} & 43 & 45 & \textbf{0.87}&\textbf{2.08}\\\hline
\end{tabular}
{\footnotesize $^*$ calculation too slow, was stopped before all the systems were solved
}
\captionof{table}{Holed plate, performance summary} \label{tab:plaque}
\end{minipage}
\begin{figure}[ht]
\begin{minipage}{.5\textwidth}\centering
\includegraphics[width=0.99\textwidth]{fig/holelit.png}
\caption{Plate, lumped -- avg. \# it. / linear system} \label{fig:plate_i}
\end{minipage}
\begin{minipage}{.5\textwidth}\centering
\includegraphics[width=0.99\textwidth]{fig/holelaug.png}
\caption{Plate, lumped -- dimension of aug. space} \label{fig:plate_a}
\end{minipage}
\end{figure}
\begin{figure}[ht]
\begin{minipage}{.5\textwidth}\centering
\includegraphics[width=0.99\textwidth]{fig/holelcpu.png}
\caption{Plate, lumped -- avg. CPU time / system} \label{fig:plate_cpu}
\end{minipage}
\begin{minipage}{.5\textwidth}\centering
\includegraphics[width=0.99\textwidth]{fig/holelwall.png}
\caption{Plate, lumped -- avg. wall clock time / system} \label{fig:plate_wall}
\end{minipage}
\end{figure}
This test leads to conclusions similar to the previous ones. More specifically, one can observe that the SRKS augmentation space selected after the first nonlinear configuration remained stable. Conversely, since the TRKS augmentation space never reached a plateau, the solutions of the linear systems did not belong to a common space. SRKS was the most efficient method, leading to a 20\% CPU gain and a 36\% wall clock time improvement.
\subsection{The case of a large displacement problem}
Finally, let us consider the problem of the buckling of a straight heterogeneous beam with a circular cross section (length/diameter ratio equal to $30$), clamped at one end and subjected to an axial pressure at the other, with no radial displacement. The heterogeneities consisted of five straight fibers whose stiffness was $1,000$ times that of the matrix. The problem was formulated in the updated Lagrangian framework, assuming linear elastic behavior (characterized by the Young's modulus and Poisson's ratio) in the current configuration. The beam was discretized into $90,000$ linear hexahedral finite elements for a total of $300,000$ degrees of freedom. It was divided into $10$ subdomains using the Metis algorithm, leading to an interface system with $16,000$ unknowns. A single 12-core processor was used (1 subdomain per core, leaving 2 inactive cores). The pressure was applied incrementally up to the configuration shown in Figure~\ref{fig:pout_flamb}, in which the maximum axial displacement was about $3\%$ of the total length. 12 increments were used, leading to the resolution of about 30 tangent linear systems.
We used a FETI solver with a Dirichlet preconditioner and an ``identity'' projector. The FETI convergence criterion was set to $10^{-6}$. Figure~\ref{fig:pout_flamb_iter} shows the evolution of the number of conjugate gradient iterations required for the resolution of each linear system. Figure~\ref{fig:pout_flamb_aug} shows the evolution of the size of the augmentation space.
\begin{figure}[ht]
\begin{minipage}{.5\textwidth}\centering
\includegraphics[width=0.99\textwidth]{fig/poutkiki_i.png}
\caption{Buckling of the heterogeneous beam: number of iterations per linear system} \label{fig:pout_flamb_iter}
\end{minipage}
\begin{minipage}{.5\textwidth}\centering
\includegraphics[width=0.99\textwidth]{fig/poutkiki_a.png}
\caption{Buckling of the heterogeneous beam: dimension of the augmentation space for each linear system} \label{fig:pout_flamb_aug}
\end{minipage}
\end{figure}
Three algorithms were tested: classical conjugate gradients, total reuse of subspaces, and selective reuse of subspaces ($\varepsilon=10^{-14}$). Table~\ref{tab:pout} summarizes the main results.
\begin{figure}[ht]\centering
\begin{minipage}[b]{.45\textwidth}\centering
\includegraphics[width=0.95\textwidth]{fig/pout_flamb.png}
\captionof{figure}{The beam in the reference and deformed configurations, with a view of the cross section} \label{fig:pout_flamb}
\end{minipage}\hfill
\end{figure}
\begin{table}[ht]\centering
\begin{minipage}[b]{.5\textwidth}\centering
\begin{tabular}{|c|c|c|c|c|}\hline
method & avg. \# it. & avg. $n_c$ & CPU & WCT\\\hline
CG & 97.2 & -- & 392 & 504 \\
TRKS & 7.8 & 131.7 & 203 & 217 \\
SRKS 14 & \textbf{33.8} & 75.1 & \textbf{278} & \textbf{298}\\\hline
\end{tabular}
\captionof{table}{Recycling performance for the buckling problem}\label{tab:pout}
\end{minipage}
\end{table}
The following observations can be made:
\begin{itemize}
\item The performance of TRKS was striking: the systems converged in one-tenth as many iterations as with CG, but the size of the associated augmentation space was large (up to 208 vectors) and never stopped growing;
\item SRKS appeared to be efficient: the number of iterations was divided by 3 with a space whose size increased slowly, then reached a plateau;
\item With the hardware configuration used, the best results in terms of computation time (a 48\% CPU improvement) were achieved with TRKS, but the gain normalized by the number of augmentation vectors was better with SRKS (a 29\% CPU improvement). SRKS was truly successful in controlling the dimension of the augmentation space.
\end{itemize}
In this example, the augmentation proved to be very efficient in terms of the iteration gain per augmentation vector, especially for SRKS (0.85). This was probably due to a specific feature of the spectrum of the preconditioned operator arising from the use of domain decomposition in large displacements. Indeed, the tangent matrix of a floating subdomain with prescribed Neumann conditions may become non-positive, contrary to the Dirichlet operator which remains positive definite. It is known that a slight lack of positivity of the operator does not prevent reorthogonalized conjugate gradients from converging \cite{PAIGE:1995:ASE}, but convergence is slower than when all the eigenvalues are positive. Because the preconditioner remains positive, the selection procedure is still applicable. Moreover, the negative eigenvalues are systematically selected by the procedure, so using augmentation causes the solver to iterate in the subspace in which the operator is positive, leading to a much better convergence rate.
\section{Conclusion}
This paper dealt with the resolution of sequences of large linear systems with varying matrices and right-hand sides using conjugate gradients. We proposed several algorithms based on the augmentation of the current Krylov subspace by a selection of previously generated subspaces. The advantage of these methods is that some of the iterations are replaced by the preprocessing of a coarse problem associated with optimized operations.
When low accuracy is sufficient, total reuse of the previous subspaces (the TRKS algorithm) appears to lead to satisfactory results. When high accuracy is required, the subspaces are too unstable, which causes the dimension of the TRKS augmentation space to soar. Therefore, we proposed to retain only the part of the subspace generated by the Ritz vectors associated with converged Ritz values of the preconditioned operator (the SRKS algorithm). These vectors can be built very inexpensively. Such an augmentation was found to remain stable throughout the linear systems and to lead to a reduction in the number of iterations which is consistent with the theory. In terms of computation time, the proposed method leads to a variable, but always positive, gain compared to non-augmented systems. We observed CPU time improvements of 20\% to 50\%, and wall clock time improvements of 40\% to 70\%.
So far, our attempts to improve the selection algorithm by eliminating the converged values in the central part of the spectrum have not led to impressive results. This probably means that the Ritz vectors associated with converged Ritz values contain meaningful information which cannot be removed from the analysis of the current system. A continuation of this work could consist of a better analysis of the accumulation of the augmentation vectors. This can be done by studying the coarse matrix $\ensuremath{\mathbf{C}}^T\ensuremath{\mathbf{A}}\ensuremath{\mathbf{C}}$, whose distance to the identity matrix characterizes the variation of the Krylov subspaces. Another objective would be to port some of the ideas presented in this paper to nonsymmetric solvers.
\bibliographystyle{plain}
\section{Introduction}
Samarium hexaboride (SmB$_6$) has attracted a great deal of interest after the theoretical prediction that it is a topological Kondo insulator. For half a century, SmB$_6$ has been known to be a Kondo insulator, whose insulating gap is formed by the hybridization of the 4$f$ and the 5$d$ bands. In 2010, theoretical studies predicted that some Kondo insulators, including SmB$_6$, are strong 3D topological insulators (3D TI), and are therefore called topological Kondo insulators (TKI)\cite{DzeroTKI, TakimotoTKI, Dzero_2012,XiDaiPRL2013,MYe2013TKI}. If SmB$_6$ is a 3D TI, it must harbor a topologically protected surface state: a two-dimensional metallic layer in which the carriers follow a gapless Dirac-like dispersion and have a non-degenerate spin, with the spin direction determined by the crystal momentum direction. So far, the hybridization gap and the conducting surface have been studied and verified by numerous experimental studies, including non-local transport\cite{Wolgast1}, magneto-transport\cite{Wolgast2, chineseMR}, thermal transport, angle-resolved photoemission spectroscopy (ARPES)\cite{Jiang2ARPES2013,NXuARPES2013,NeupaneARPES2013,Denlinger2013TempARPES,Denlinger2016consistency,Xu2014direct,Frantzeskakis_ARPES}, de Haas-van Alphen (dHvA) quantum oscillations\cite{GLiDhVA, Xiang_dHvA}, point contact spectroscopy\cite{LosAlamosThermal}, scanning tunneling spectroscopy\cite{HarvardSTM, SWirthSTM}, ac conductivity\cite{NLaurita}, and spin-resolved transport\cite{song2016spin,kim2018electrical}. Compared to the weakly correlated 3D TIs, one promising aspect of SmB$_6$ from the electrical transport perspective is that the bulk is truly insulating even in the presence of mild disorder created by off-stoichiometry, and therefore its surface states can be reliably accessed without interference from conduction through the bulk states\cite{eo2019transport}.
Although there has been an explosion of experimental studies on SmB$_6$ that verify the existence of conducting surface states, as mentioned above, only a few researchers have made use of this exciting material for further studies,\cite{lee2016observation,lee2018observation} perhaps because some disagreements still remain. Some experimental reports suggest that the surface states have a trivial origin instead of emerging from the topologically non-trivial bulk\cite{hlawenka2018samarium}. An even more serious problem is the disagreement on the very existence of the 2D metallic surface state. Arguably, the disagreement between the dHvA quantum oscillation studies is at the center of this problem. The report by Tan $et$ $al.$\cite{TanDhVa} claims that their observed quantum oscillations come from exotic 3D bulk states and that they include low-frequency ranges ($\sim$ 400 T) similar to those in the previous 2D quantum oscillation report by G. Li $et$ $al$\cite{GLiDhVA}. The two groups later followed up with further studies, and the disagreement continued\cite{Xiang_dHvA, hartstein2018fermi}. A more recent study by a third group (S. Thomas $et$ $al$. \cite{thomas2019quantum}) reports that aluminum inclusions embedded in the SmB$_6$ bulk are responsible for the quantum oscillations and show a similar frequency range and 2D-like angle dependence. Next, a careful look within the reports that claim verification of the 2D conducting surface states reveals that there are also disagreements, which may be too detailed for researchers outside the SmB$_6$ community to follow. For example, the size of the Fermi pockets estimated from the 2D dHvA report by G. Li $et$ $al.$\cite{GLiDhVA} does not strictly agree with the size estimated from the ARPES reports. 
Also, from the historical perspective of 2D electron gas studies, dHvA quantum oscillations have been more challenging to observe than Shubnikov-de Haas (SdH) oscillations, because the total magnetic moments are very small in a typical 2D electron layer \cite{JPEisenstein}. Although not a disagreement in the strict sense, it is peculiar that SdH quantum oscillations in dc electrical transport have not been observed even up to $\sim$80 T,\cite{WolgastBulk} although dHvA has been reported with an onset lower than 10 T. In electrical transport, weak anti-localization (WAL), which is supporting evidence of the helical spin structure on the TI surface, has been reported\cite{SThomasWAL,nakajima2016one,IndiaWAL}; however, there is also a report that WAL may not be a universal feature because of magnetic impurities existing on the surface\cite{Wolgast2}. As we will show in this study, there are a number of transport experiments, such as Hall effect and thermal transport measurements, but these electrical transport experiments report conductivity ($\sigma_{2D}$), carrier density ($n_{2D}$), and mobility ($\mu_{2D}$) values that differ by orders of magnitude.
The first possibility for such disagreements is that the quality varies from sample to sample. Of course, one can naturally expect that different crystal growth methods introduce different types of defects and disorder. This aspect must be resolved especially from the quantum oscillation perspective. The dHvA oscillations reported by G. Li $et$ $al$.\cite{GLiDhVA} were measured on a sample grown by the aluminum flux method, whereas the report by Tan $et$ $al$.\cite{TanDhVa} was based on a sample grown by the optical floating zone method. Another aspect is the role of magnetic impurities. A common belief is that if SmB$_{6}$ is a true 3D TI, protected by time-reversal symmetry, magnetic impurities and the existence of the surface state should be antagonistic to each other. However, it is not clear whether magnetic impurities in the crystal are the dominant factor influencing the surface properties. In addition, even among samples grown by the same technique, there is a widespread perception that the qualities of the bulk and surface may differ from one sample to another. Another possibility is that the data was interpreted differently. So far, when the surface carrier density and mobility are estimated by transport experiments on SmB$_6$, they rely on the classical (Drude) or semi-classical (Boltzmann) transport models. For example, when using these classical or semi-classical models to interpret transport experiments,
the geometric conversion factor ($g$) from measured resistance to conductivity ($\sigma_{2D}$) or resistivity ($\rho_{2D}$) must be known accurately. SmB$_6$ is difficult in the sense that many transport measurements were performed on thick (tens of microns up to millimeters) single crystal samples. Thus, unlike the weakly correlated 3D TIs that are grown as thin films, the side surfaces, edges, and corners of the sample also contribute as current paths, and the interpretation of $g$ can be very difficult. The problem becomes even more difficult when studying magnetotransport on the conducting surface, where the angle between the current and the magnetic field direction is different for each surface. Also, unlike other 2D systems, electron and hole pockets can simultaneously exist in the surface Brillouin zone of a 3D TI as long as the number of Fermi pockets is odd. If electrons and holes coexist, the total carrier density can be dramatically overestimated by the Hall coefficient. The strategy to avoid this complication is to use a Corbino disk, which confines the surface current path and is insensitive to the carrier species. There are two reports which measured surface transport using Corbino disks, estimating $n_{2D}$ and $\mu_{2D}$ by gating\cite{PSyersCorbino} and magnetotransport\cite{Wolgast2}, respectively.
Many early experimental studies must be re-investigated because these aspects were not taken into account at the time. In the end, the parameters ($\sigma_{2D}$, $n_{2D}$, and $\mu_{2D}$) from transport must agree with the other experimental measurements (dHvA, ARPES, and optical ac conductivity). In this paper, we first present an alarming study that suggests that subsurface cracks conduct in parallel with the exposed surfaces at low temperatures of SmB$_6$. This implies that rough polishing, which is a typical intermediate procedure for obtaining a smooth surface on a single crystal, can create cracks just below the surface, thus changing $g$. Similarly, we suggest that it is even possible that small grain boundaries on a natural as-grown crystalline surface can be unexpected conducting paths and can complicate the surface geometric factor even more. Furthermore, keeping these new aspects in mind, we perform Corbino disk magnetotransport studies on multiple samples, including samples grown by both aluminum flux and floating zone techniques, both polished and unpolished. Our results show that the sample-to-sample variation, including both flux and floating zone growth samples, is smaller than the perceived widespread disagreement. In our studies, the estimated carrier density and mobility only vary within an order of magnitude when only considering the samples that are prepared properly by our own standards.
Many groups have previously tried to characterize the surface conduction of SmB$_6$ using high-field magnetoresistance measurements with the hope of observing SdH oscillations. On the (001) surface, as we reported previously, we do not observe SdH oscillations up to 80 T, as shown in Fig.~\ref{Fig:NoSdH93T}. The absence of SdH oscillations in such measurements provides an upper bound for the mobility of the surface electrons. Our goal in this paper is to construct an allowed parameter space, $n_{2D}$ vs. $\mu_{2D}$, for the (001) surface and estimate a range in which each pocket should exist. Furthermore, we compare this transport analysis with other experimental reports.
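The mobility bound implied by the absence of oscillations can be made concrete with the standard onset criterion $\mu B \approx 1$ (equivalently $\omega_c \tau \approx 1$). This is an order-of-magnitude estimate, not a rigorous bound:

```python
# Order-of-magnitude mobility bound from the absence of SdH oscillations,
# using the standard onset criterion mu * B ~ 1 (i.e. omega_c * tau ~ 1).
B_max = 80.0                 # highest field with no observed oscillations (T)
mu_bound = 1.0 / B_max       # m^2 / (V s)
print(f"mu < {mu_bound * 1e4:.0f} cm^2/Vs")   # ~125 cm^2/Vs

# Conversely, a surface mobility of 10,000 cm^2/Vs (1 m^2/Vs) would put the
# SdH onset near 1 T, which is not observed.
mu_high = 1.0                # m^2 / (V s)
print(f"expected onset ~ {1.0 / mu_high:.0f} T")
```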
\begin{figure}[t]
\begin{center}
\includegraphics[scale=1]{Fig1_AbsenceSdH.pdf}
\caption[Absence of Shubnikov de-Haas oscillations in SmB$_{6}$ up to 80 T.]{Magnetoresistance trace measured on a Corbino disk on the (001) surface using a pulsed magnetic field up to 80 T at 1.4 K. The negative magnetoresistance above 80 T, indicated with the arrow, is due to a surface-to-bulk crossover behavior, where the bulk channel starts to dominate the transport, consistent with the reduction of activation energy at high magnetic fields. \cite{WolgastBulk} The sample does not exhibit Shubnikov de-Haas oscillations up to 80 T. The corresponding sheet resistance (below 80 T) is shown in the right scale (in red).
}
\label{Fig:NoSdH93T}
\end{center}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=1.2]{Fig2_CorbinoTheoryMR.pdf}
\caption{A Corbino disk. (a) Setup of a Corbino disk on a sample. (b) Simulated resistance ratio vs. magnetic field angle at several fixed magnetic field values, based on Eq.~(\ref{Eq:CorbinoFormulaBField2}). }
\label{fig:Corbinosimulation}
\end{center}
\end{figure}
\section{\label{Sec:cracks}Influence of Subsurface Cracks on Surface Transport}
Within the SmB$_{6}$ community, there is a widespread perception that the surface sheet resistance estimated from the resistance plateau below 3\textendash 4 K varies dramatically from sample to sample. Of course, depending on the method of crystal growth, the leading order of defects can be different from one sample to another, and some samples can even be non-stoichiometric or contain unintentional impurities. However, we have long been suspicious of whether the measured resistance is indeed the true sheet resistance of the SmB$_{6}$ surface, especially in situations where the reported sheet resistances are very low (a few Ohms), for two reasons. First, such a low sheet resistance would imply a high electron mobility, in excess of 10,000 (cm$^2$/V$\cdot$s), which cannot be achieved unless the surface electrons are formed in an environment that is relatively free of disorder. We note that unlike semiconductor-based two-dimensional electron gas (2DEG) systems, where the 2D electrons live at a clean, epitaxially grown heterostructure interface, the surface states of SmB$_6$ are exposed in ambient conditions. Furthermore, although the surface of a 3D TI prohibits back scattering, this does not guarantee a high mobility. Absence of back scattering only results in a mild correction to the scattering time compared to the case where back scattering is allowed\cite{ozturk2017influence}. In the case of exposed surfaces, the carriers are expected to scatter strongly from disorder arising from the non-epitaxial native oxide that must be present on all SmB$_6$ transport samples. Second, transport samples with such electron mobilities (in excess of 10,000 (cm$^2$/V$\cdot$s)) are expected to exhibit SdH oscillations starting in magnetic fields as low as 1 T, which has not been seen in any SmB$_6$ studies. We also note that this low resistance cannot be a bulk conducting path, unlike in weakly correlated 3D TIs. 
First, the surface of SmB$_{6}$ can be tuned by gating\cite{PSyersCorbino}. Also, from inverted resistance measurements, the bulk resistivity continues to rise exponentially below 3\textendash 4 K, and therefore we can safely conclude that the carriers flowing in the bulk are negligible\cite{eo2019transport}. Instead, we hypothesize that invisible (or hidden) conductive surfaces can exist in addition to the surfaces that are exposed. If these hidden conduction paths are not accounted for in transport experiments, this may explain why some of the experiments report low sheet resistances.
We rely heavily on the Corbino disk structures, as shown in Fig~\ref{fig:Corbinosimulation}~(a), which allow us to study the transport properties of different crystalline surfaces individually. The resistance of the Corbino sample is expected to be inversely proportional to the sheet conductivity:
\begin{equation}
R_{\mathrm{Corbino}}=\frac{\ln(r_{\mathrm{out}}/r_{\mathrm{in}})}{2\pi}\frac{1}{\sigma_{s}(0)},
\label{Eq:CorbinoFormulaBField}
\end{equation}
where $\sigma_{s}(0)$ (in 1/$\Omega$) is the surface conductivity at zero magnetic field, and $r_{\mathrm{in}}$ and $r_{\mathrm{out}}$ are the inner and outer radii of the Corbino annulus, respectively. In an ideal 2D layer, where the carrier density is independent of magnetic field, the Corbino resistance would be magnetic field dependent as a result of mobility reduction, following the Lorentz force factor:
\begin{equation}
R_{\mathrm{Corbino}}(B)=\frac{\ln(r_{\mathrm{out}}/r_{\mathrm{in}})}{2\pi}\frac{(1+(\mu B \mathrm{cos} \theta)^2)}{ne\mu},
\label{Eq:CorbinoFormulaBField2}
\end{equation}
where $B$ is the applied magnetic field, $\theta$ is the angle between the field and the surface normal direction, $n$ is the surface carrier density, $e$ is the electron charge, and $\mu$ is the surface mobility. We will rely on these two equations to analyze our magnetotransport data. For instructive purposes, we illustrate in Fig.~\ref{fig:Corbinosimulation}~(b) how the resistance changes with angle at a fixed magnetic field, based on Eq.~(\ref{Eq:CorbinoFormulaBField2}).
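The angle dependence of Eq.~(\ref{Eq:CorbinoFormulaBField2}) can be sketched directly; in the snippet below, the disk radii, carrier density, and mobility are illustrative placeholders, not fitted values for SmB$_6$:

```python
import math

# Corbino resistance in a tilted field: R scales as 1 + (mu*B*cos(theta))^2.
# Geometry and surface parameters are illustrative placeholders (SI units).
def corbino_resistance(B, theta, n, mu, r_out=400e-6, r_in=150e-6):
    """Resistance of a Corbino disk in a tilted magnetic field."""
    e = 1.602176634e-19                            # elementary charge (C)
    geom = math.log(r_out / r_in) / (2.0 * math.pi)  # geometric factor
    return geom * (1.0 + (mu * B * math.cos(theta)) ** 2) / (n * e * mu)

R0 = corbino_resistance(0.0, 0.0, n=1e17, mu=0.01)
# With the field in the surface plane (theta = 90 deg), the Lorentz factor
# vanishes and the resistance returns to its zero-field value.
R_inplane = corbino_resistance(10.0, math.pi / 2, n=1e17, mu=0.01)
print(R0, R_inplane)
```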
In this section, we study the role of surface preparation. First, we examine the effects of polishing and of surface treatment by oxygen-plasma-induced oxidation. Among these studies, the most surprising finding is that sample surfaces prepared with rougher polishing tend to exhibit lower resistance plateau values than those that were more finely polished. In typical materials, we would expect the opposite: a rough surface contributes extra scattering, which lowers the mobility, so the resistance is expected to be higher.
We hypothesized in this study that subsurface cracks that are created by rough polishing can serve as additional surface conduction paths. If SmB$_{6}$ is truly a 3D TI, then since subsurface cracks are a termination of the bulk, they must also be topologically protected surfaces. This hypothesis is consistent with the resistance plateau value trend that we observed for different polishing qualities because, in semiconductors, it is well known that rougher polishing creates subsurface cracks with larger length scales\cite{PeiCrack,mchedlidze1995subsurface}.
To test the hypothesis that subsurface cracks contribute to surface conduction, we prepared an SmB$_{6}$ sample with two Corbino disks fabricated on a finely polished surface. The resistance vs$.$ temperature was measured below $\sim$4.5 K, both before and after the active region (annular ring) of one of the Corbino disks was scratched with a scriber, as shown in Fig.~(\ref{Fig:Crack})~(a) $-$ (c). As shown in Fig.~(\ref{Fig:Crack})~(g), while the plateau resistance of the unscratched Corbino disk remained almost identical in both measurements, to within a few percent, the plateau resistance of the scratched Corbino disk dropped from 140 $\mathrm{\Omega}$ to 110 $\mathrm{\Omega}$. After the second scratch, the resistance dropped further to 60 $\mathrm{\Omega}$. Indeed, the surface roughness increases after scratching, but the resistance does not become higher. Instead, it becomes lower, consistent with our hypothesis that the rougher surface conducts.
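Under the simple assumption that each scratch adds a conducting path in parallel with the exposed surface, the implied resistance of the added path follows from parallel resistor addition. A sketch with the measured plateau values:

```python
# If a scratch adds a conducting path in parallel with the exposed surface,
# then 1/R_after = 1/R_before + 1/R_added, so the added path's resistance is:
def added_path_resistance(R_before, R_after):
    return 1.0 / (1.0 / R_after - 1.0 / R_before)

print(added_path_resistance(140.0, 110.0))  # first scratch: ~513 ohms added
print(added_path_resistance(140.0, 60.0))   # both scratches: ~105 ohms total
```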
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=0.9]{Fig3_ScratchExperiment.pdf}
\caption[Scratched Corbino measurements.]{(Color) Scratched Corbino measurements. (a) -- (c) Optical images of the two Corbino disks before scratching and after each scratch. The scratches are indicated with magenta arrows. White scale bars are 500 $\mu$m. (d) -- (f) SEM images of the second scratch at increasing magnification. Yellow scale bars are 100 $\mu$m, 100 $\mu$m, and 10 $\mu$m, respectively. (g) Resistance vs. temperature curves of both disks before scratching and after each scratch. Thick solid lines indicate the scratched disk, while thin dotted lines indicate the unscratched (control) Corbino disk. (Sample: Fisk011)}
\label{Fig:Crack}
\end{center}
\end{figure}
We further investigated whether the scratched surface indeed harbors subsurface cracks. Ion-beam milling was performed on the scratched surface, and SEM images of the ion-milled wall profile are shown in Fig.~(\ref{fig:Cracks})~(a) $-$ (f). We indeed observed subsurface cracks that are several microns long and up to 100 nm wide. Normally, we would expect the polishing grit particles to introduce stress to the surface and thereby create subsurface cracks. Indeed, subsurface cracks were also observed on a separate SmB$_{6}$ crystal prepared by rough polishing (P1200 grit, which produces micron-level surface roughness), as shown in Fig.~(\ref{fig:Cracks})~(g) $-$ (h). The cracks visible in Fig.~(\ref{fig:Cracks})~(h) are up to 1 $\mu$m long in the transverse or vertical direction, though they are much narrower than those seen in Fig.~(\ref{fig:Cracks})~(b) and (d), approaching the focal resolution limit of our SEM. Although cracks produced at finer polishing levels are below the resolution of our SEM images, it is not unreasonable to hypothesize their existence and contribution to the total surface conduction. Such cracks are expected to form whenever the maximum contact stress, which occurs a few $\mu$m below the surface,\cite{hertz1882beruhrung} exceeds the tensile strength of the material.
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=0.9]{Fig4_IonMillCracks.pdf}
\caption{(Color) Ion-milled cross-sections. (a) Ion-milled cross-section across a scratch. (b) Subsurface cracks visible below the scratch in (a). (c) Ion-milled cross-section across another scratch. (d) -- (f) Subsurface cracks visible below the scratch in (c). (g) Ion-milled cross-section on a rough-polished surface. (h) Subsurface cracks visible below the rough surface in (g). In all panels, small yellow arrows indicate cracks. (Sample: Fisk011)}
\label{fig:Cracks}
\end{center}
\end{figure}
The length scales of these subsurface cracks should scale with the size of the grit particles. Therefore, the sample must be polished with the finest possible grit size, and each finer polishing step must remove enough material to eliminate the subsurface cracks created by the rougher grit of the previous step. We eventually chose 0.3-$\mu$m Al$_{2}$O$_{3}$ grit for the finest polishing step.
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=0.9]{Fig5_AsGrownSurfaces.pdf}
\caption[Comparison of resistance vs$.$ temperature of a Corbino disk on unpolished single crystal vs$.$ polycrystal.]{Comparison of resistance vs$.$ temperature of a Corbino disk on unpolished single crystal vs$.$ polycrystal. (a) Corbino disk on a clean single crystal surface in the (001) direction. (b) Corbino disk on a clean polycrystalline surface with many domain boundaries. (c) Resistance vs$.$ temperature of the two samples. The blue curve is measured from sample (a), and the red curve is measured from sample (b). (Sample: (a) GISTasGrown2 (b) GISTasGrown1)} \label{Fig:AsGrown}
\end{center}
\end{figure}
Next, we consider as-grown surfaces. Since thinning and polishing can potentially create cracks, a single crystal with a large enough as-grown surface is desirable. We were fortunate to obtain a single crystal with a large enough area to fabricate a Corbino disk, as shown in Fig.~(\ref{Fig:AsGrown})~(a). For comparison, we also fabricated a Corbino disk on a low-quality sample where multiple crystal facets can be seen with boundary lines between them, as shown in Fig.~(\ref{Fig:AsGrown})~(b). These lines are most likely naturally grown cracks or domain boundaries. Neither sample underwent any thinning or polishing treatment; the samples were only cleaned in dilute HCl before and after the lithography process. We measured the resistance vs$.$ temperature on both samples using the standard Corbino disk geometry. The results are shown in Fig.~(\ref{Fig:AsGrown})~(c): the resistance curve of the single crystalline surface is shown in blue, and that of the polycrystalline surface in red. The single crystal shows a resistance plateau of $\sim$260 $\mathrm{\Omega}$, with a corresponding sheet resistance of 2.3 k$\mathrm{\Omega}$. This sheet resistance is consistent with the results from samples finished with the 0.3-$\mu$m Al$_{2}$O$_{3}$ polishing step, which gives us confidence that the final polishing step was sufficient for our studies. On the other hand, the resistance plateau value of the polycrystalline sample is only 3.2 $\mathrm{\Omega}$. We believe that the domain boundaries serve as conduction paths, similar to conduction through subsurface cracks.
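The plateau-to-sheet-resistance conversion used here follows the standard Corbino geometry factor, $R = (R_{\square}/2\pi)\ln(r_{\mathrm{out}}/r_{\mathrm{in}})$. A minimal sketch of the conversion; the device radii below are illustrative assumptions, not the actual mask dimensions:

```python
import math

def corbino_sheet_resistance(r_measured, r_inner, r_outer):
    """Convert a measured Corbino-disk resistance to a sheet resistance.

    For a Corbino disk, R = (R_sheet / (2*pi)) * ln(r_outer / r_inner),
    so R_sheet = 2*pi * R / ln(r_outer / r_inner).
    """
    return 2 * math.pi * r_measured / math.log(r_outer / r_inner)

# Illustrative radii (mm); a 2:1 radius ratio with the measured 260-Ohm
# plateau gives a sheet resistance close to the 2.3-kOhm value quoted.
r_sheet = corbino_sheet_resistance(260.0, 0.25, 0.50)
print(f"R_sheet = {r_sheet:.0f} Ohm")
```

Only the ratio of the radii enters, so the absolute device size does not matter for the conversion.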
Another aspect to consider is the role of disorder on the surface. Previous studies report that a native oxide, most likely Sm$_2$O$_3$, forms on the surface of SmB$_6$\cite{PRL_Tjeng}. This oxide is suspected to be a source of surface magnetism, as reported previously\cite{nakajima2016one,Wolgast2}. As an instructive exercise, we treated our Corbino disk sample with a series of plasma oxidization steps, which introduce a more disordered native oxide, and acid etches, which remove the native oxide layer.
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=1.2]{Fig6_Oxide.pdf}
\caption[Surface oxidization experiment]{Surface oxidization experiment on a Corbino disk on SmB$_6$. (Sample: Paglione79B)}
\label{Fig:Oxide}
\end{center}
\end{figure}
To study this effect, we prepared a Corbino disk patterned on a polished SmB$_6$ sample; the results are summarized in Fig.~(\ref{Fig:Oxide}). First, after etching the sample in HCl, we measured resistance vs. temperature within an hour of the etch [curve (a) in Fig.~(\ref{Fig:Oxide})]. The resistance rises to a maximum value of 79 $\Omega$ at base temperature. After exposing the sample to air for two months, the measurement was repeated [curve (b)], and the maximum resistance changed to 75 $\Omega$. We then plasma oxidized the sample at 100 W under 0.5 torr of oxygen for 10 minutes and measured the resistance again [curve (c)]; it rises to a maximum value of 85 $\Omega$. Finally, we etched this sample for 4 minutes in HCl diluted to 20$\%$, loaded it into the cryostat within 40 minutes, and remeasured resistance vs. temperature [curve (d)]; the maximum resistance this time was 81 $\Omega$. The sample does show small changes with plasma treatment and acid etching. However, these effects are much smaller than those of the scratch experiments.
We next consider how surface preparation and transport geometry influence magnetotransport studies of the SmB$_6$ surface. First, we discuss samples with a Hall bar geometry. In our earlier reports, we attempted to find the surface carrier density using a bulk Hall bar geometry\cite{Wolgast1}. Surprisingly, estimating the surface carrier density with the naive Hall voltage formula ($V_{H}=BI/n_{2D}e$) results in an extremely large value of $n_{2D} \sim 10^{18}~\mathrm{cm}^{-2}$. We will later show in Sec.~\ref{Sec:ParameterSpace} that this is an unphysical value. As an instructive demonstration, we prepared a sample more carefully, polishing four surfaces as identically as possible with the goal of eliminating the subsurface cracks. This sample is shown in Fig.~\ref{Fig:AlexaHall}, and the measurement resulted in a more reasonable carrier density of $n_{2D} = 1.7\times 10^{14}~\mathrm{cm}^{-2}$. However, many errors can still be introduced in interpreting the data: we rely on the assumption that the sample is a perfect Hall bar with four identical surface qualities, and we ignore the current flowing through the side surfaces and the edges, as well as the contribution of the contact size.
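The naive Hall estimate can be sketched as follows. The current and Hall-voltage readings in the example are hypothetical values, chosen only to reproduce the order of magnitude of the quoted $n_{2D}$:

```python
E = 1.602176634e-19  # elementary charge (C)

def naive_hall_density_cm2(B_tesla, I_amp, V_hall_volt):
    """Naive 2D carrier density from V_H = B*I/(n_2D*e),
    i.e. n_2D = B*I/(e*V_H).  Returns n_2D in cm^-2."""
    n_m2 = B_tesla * I_amp / (E * V_hall_volt)
    return n_m2 * 1e-4  # convert m^-2 -> cm^-2

# Hypothetical readings: B = 14 T, I = 1 mA.  A Hall voltage of ~51 mV
# would correspond to the n_2D ~ 1.7e14 cm^-2 quoted in the text.
print(f"{naive_hall_density_cm2(14.0, 1e-3, 51.4e-3):.2e} cm^-2")
```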
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=0.7]{Fig7_AlexaHall.pdf}
\caption[Hall Measurement]{Hall bar measurement of SmB$_6$ at 14 T. (a) Hall bar sample. (b) Hall resistance at 14 T vs. temperature. (c) Lateral resistance vs. temperature. (Sample: GISTHall1)}
\label{Fig:AlexaHall}
\end{center}
\end{figure}
To correctly find the geometric factor ($g$) of a transport geometry, we must know where and how much current flows through each surface. Choosing a Corbino disk geometry confines the current path to a single surface, and therefore we can ignore the surfaces where current does not flow. In our previous study\cite{Wolgast2}, we found that a Corbino disk on a SmB$_6$ surface does not follow $R \propto (1+\mu^2B^2\cos^2\theta)$ as in Eq.~\ref{Eq:CorbinoFormulaBField} because the sheet resistance without the Lorentz force contribution has an overall negative magnetoresistance, independent of tilt angle, i.e., $dR_\square (|B|)/dB <0$. Still, the resistance divided by the in-plane ($\theta = 90^{\circ}$) magnetoresistance, $R_{||}$, shows good agreement with $R/R_{||} \propto (1+\mu^2B^2\cos^2\theta)$.
When the sample is finely polished, we find that the $R/R_{||} \propto (1+\mu^2B^2\cos^2\theta)$ effect becomes larger. We study how our previously reported Corbino disk SmB$_6$ sample\cite{Wolgast2} changes its magnetotransport characteristics after careful polishing. In our previous work\cite{Wolgast2}, when we were not yet aware of subsurface crack conduction, the Corbino disk sample was prepared by final polishing with a SiC polishing pad with an average grit size of 2.5 $\mathrm{\mu}$m\cite{Wolgast1}. In our present work, the Corbino disk sample was prepared by final polishing with Al$_{2}$O$_{3}$ particles 0.3 $\mathrm{\mu}$m in size; it is important that this finer polishing step removes the subsurface cracks created by the previous rough polishing step. Our Corbino disk magnetotransport measurements on the (001) surface at 0.35 K before and after the finer polishing are shown in Fig.~\ref{Fig:MRBeforeAfter}. We first find that the estimated sheet resistance is much larger after fine polishing. Also, while the negative magnetoresistance, estimated from the in-plane magnetic field sweep, remains about $-10\%$ up to 35 T, the angle dependence is much more dramatic after fine polishing.
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=0.45]{Fig8_MRBeforeAfter.pdf}
\caption[Comparing magnetotransport of SmB$_{6}$ with previous work]{Comparing magnetotransport of Corbino disk on a (001) surface of SmB$_{6}$ from our previous report\cite{Wolgast2} and our current result. The temperature is at 0.35 K, and the magnetic field is swept while the angle of the magnetic field is kept fixed. (Sample: FiskGate1)}
\label{Fig:MRBeforeAfter}
\end{center}
\end{figure}
Next, we show that this enhanced angle-dependent magnetotransport after surface re-preparation changes the estimation of the surface mobility and the carrier density. As shown in Fig.~(\ref{Fig:MR001Results})~(a), the trace for the out-of-plane direction (0$^{\circ}$ angle, shown as a black line) deviates from a quadratic behavior. Also, the in-plane direction (90$^{\circ}$ angle, shown as a green line) shows a negative magnetoresistance, in contrast to Eq.~\ref{Eq:CorbinoFormulaBField}, which predicts no magnetoresistance at that angle ($\theta$ = 90$^{\circ}$). In our previous report, we speculated that the carrier density increases as the bulk band gap narrows\cite{Wolgast2} and that the mobility changes through the Kondo effect and surface roughness scattering\cite{Wolgast2}. As mentioned above, in spite of the presence of the negative magnetoresistance, the classical two-dimensional magnetotransport behavior is still valid. When dividing the resistances by the in-plane magnetic field resistance ($R(\theta)/R_{||}$), the ratio shows good agreement with the quadratic dependence, as shown in Fig.~(\ref{Fig:MR001Results})~(b). When rotating the angle of the sample while the magnetic field is fixed, the data follow the cosine squared behavior, as shown in Fig.~(\ref{Fig:MR001Results})~(c). Figs.~(\ref{Fig:MR001Results})~(b) and (c) also show that the surface carriers still experience the Lorentz force on a two-dimensional layer. Together with Fig.~(\ref{Fig:MR001Results})~(a), this implies that the surface is an unusual 2D system in which the carrier density and mobility change as the magnetic field increases, also consistent with our previous report\cite{Wolgast2}. A quadratic fit to Fig.~(\ref{Fig:MR001Results})~(b) results in a mobility of 104.5 cm$^{2}$/(V$\cdot$s) and a carrier density of 2.71$\times$10$^{13}$ cm$^{-2}$.
From the cosine squared fits of Fig.~(\ref{Fig:MR001Results})~(c), the carrier density changes by about 10$\%$ and the mobility by about 3$\%$ over the range of magnetic fields, consistent with our previous studies \cite{Wolgast2}.
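The carrier density follows from the fitted mobility and the measured sheet resistance through $n_{2D} = 1/(eR_{\square}\mu)$. A quick numerical check, assuming the $R_{\square} \approx 2.2$ k$\mathrm{\Omega}$ quoted in Table~(\ref{Tab:TransportEvolution}):

```python
E = 1.602176634e-19  # elementary charge (C)

def carrier_density_from_sheet(mu_cm2, r_square_ohm):
    """n_2D = 1/(e * R_square * mu), returned in cm^-2.

    mu_cm2 : mobility in cm^2/(V s); converted to m^2/(V s) internally.
    """
    mu_m2 = mu_cm2 * 1e-4
    n_m2 = 1.0 / (E * r_square_ohm * mu_m2)
    return n_m2 * 1e-4  # m^-2 -> cm^-2

# mu = 104.5 cm^2/(V s) from the quadratic fit, R_square ~ 2.2 kOhm:
print(f"{carrier_density_from_sheet(104.5, 2200.0):.2e} cm^-2")
```

This reproduces the quoted $n_{2D} \approx 2.7\times10^{13}$ cm$^{-2}$.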
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=0.8]{FigX_CorbinoResults.pdf}
\caption[Corbino Magnetotransport of SmB$_{6}$]{Magnetotransport results on a Corbino disk on the (001) surface. The temperature is at 0.35 K. (a) Resistance vs. magnetic field at fixed tilt angles, from the top curves in Fig.~\ref{Fig:MRBeforeAfter}. (b) The resistance in (a) divided by the resistance at 90$^{\circ}$ (in-plane). (c) Resistance vs. tilt angle of the magnetic field at different magnetic field values. (Sample: FiskGate1)}
\label{Fig:MR001Results}
\end{center}
\end{figure}
Lastly, we summarize in Table~(\ref{Tab:TransportEvolution}) how our transport parameter estimations have evolved from the Hall bar geometry to the Corbino disk geometry, and then further through careful surface preparation. Compared with our previous report on this sample in Ref.~\cite{Wolgast2}, as shown in Fig.~(\ref{Fig:MRBeforeAfter}), the estimated surface mobility increased by 41$\%$ and the carrier density decreased by a factor of 7.4. Although our most recent mobility estimation is higher than what we had originally estimated, this value is still small compared to high-quality 2DEG systems. We believe this is why SdH oscillations have not yet been observed. In the following section, we discuss our most recent magnetotransport results. Relying only on classical magnetotransport analysis, we did not observe clear signatures of the existence of multiple channels in magnetotransport. This is in contrast to other experimental reports, such as ARPES and dHvA, that report multiple pockets, so we will also discuss why our results do not show signatures of multiple channels based on the low mobility of the surface carriers.
\begin{table}[ht]
\centering
\begin{tabular}{c c c c c}
\hline\hline
Year& Geometry& $R_{\square}$ ($\mathrm{\Omega}$)& $n_{2D}$ (cm$^{-2}$)& $\mu_{2D}$ (cm$^{2}$/V$\cdot$sec) \\ [1ex]
\hline
2012 & Hall bar\cite{Wolgast1}&9.1 & 1.0$\times$10$^{18}$ & 0.69\\
2015 & Corbino Disk\cite{Wolgast2} &570 & 2.0$\times$10$^{14}$ & 61\\
Current & Hall bar &1400 &1.7$\times$10$^{14}$ & 26\\
Current & Corbino Disk &2200 & 2.71$\times$10$^{13}$ & 105\\[1ex]
\hline
\end{tabular}
\caption[Evolution of the surface transport parameter estimation from measurements at different years.]{Surface transport parameter estimation from measurements after understanding the effects of geometry and subsurface crack conduction.}
\label{Tab:TransportEvolution}
\end{table}
\section{\label{Sec:ParameterSpace}Constructing the Transport Parameter Space of Each Channel}
\begin{figure}[t]
\begin{center}
\includegraphics{Fig9_TwoChannelParameterSpace.pdf}
\caption[(001) surface of SmB$_{6}$.]{(001) surface of SmB$_{6}$. (a) The three Fermi pockets of the BZ on the (001) surface. The solid red lines indicate the $X$-pockets, and the solid black line indicates the $\Gamma$-pocket. The dotted lines indicate how the Fermi surfaces deform if the electric field is applied in the $k_x$-direction. (b) The surface transport parameter space of SmB$_{6}$ in the (001) direction. The gray areas (regions I and II) are the forbidden regions where the parameters cannot exist. The dotted line indicates the mobility of 1/(80 T). Region III is shown in a gradient color, with the darker area indicating that the transport parameters would be less likely to exist at higher mobilities.}
\label{Fig:FermiPocketAndFundamentalLimit}
\end{center}
\end{figure}
The topological Kondo insulator theory predicts that three surface Fermi pockets exist, each surrounding a high symmetry point in the Brillouin zone (BZ)\cite{XiDaiPRL2013,MYe2013TKI}. In particular, on the (001) crystal surface, theory predicts that one of the Fermi pockets should be centered at the $\Gamma$-point (called the $\Gamma$-pocket in short), and the other two should be centered at the $X$-point (or $X$-pocket in short), as shown in solid red in Fig.~(\ref{Fig:FermiPocketAndFundamentalLimit})~(a). If SdH oscillations had been observed, the verification of each pocket would have been possible. As mentioned previously, SdH oscillations were not observed up to 80 T, and therefore our analysis is based on classical magnetotransport. The quadratic fit of $R(B)/R_{||}$ at a fixed angle and a cosine squared fit of $R(\theta)/R_{||}$ at a fixed magnetic field relying on Eq.~(\ref{Eq:CorbinoFormulaBField}) assumes that only a single channel (or Fermi pocket) exists in the BZ.
In our magnetotransport results shown in Fig.~(\ref{Fig:MR001Results}), the absence of distinct signatures of multiple channels can be explained if the mobility of each channel is small. If an electric field is applied in the $+k_x$ direction in the BZ, the Fermi surfaces will change as shown by the dotted lines in Fig.~(\ref{Fig:FermiPocketAndFundamentalLimit})~(a). To simplify, we treat the top and bottom $X$-pockets as contributing equally with the left and right $X$-pockets by assuming that the associated scattering time does not depend on the direction of the crystal momentum. Also, we assume that intrapocket scattering is dominant compared to interpocket scattering. Then, the conductivities of the two channels, $\sigma_{\Gamma}$ and $\sigma_{2X}$, add together to give the total conductivity ($\sigma_{t}$):
\begin{equation}
\sigma_{t} = \sigma_{\Gamma} + \sigma_{2X}.
\label{Eq:totalcondCh3}
\end{equation}
Furthermore, when a magnetic field is applied, we assume the carrier density and mobility that we measure are effective parameters ($n_{\mathrm{eff}}$ and $\mu_{\mathrm{eff}}$) based on both pockets, i.e., $\sigma_{t}=n_{\mathrm{eff}}e\mu_{\mathrm{eff}}$. The magnetoconductivity can also be expressed in terms of the transport parameters of the two individual pockets:
\begin{equation}
\frac{n_{\mathrm{eff}}e\mu_{\mathrm{eff}}}{1+\mu_{\mathrm{eff}}^2B^2\cos^2\theta}
=\frac{n_{\Gamma}e\mu_{\Gamma}}{1+\mu_{\Gamma}^2B^2\cos^2\theta}
+\frac{n_{2X}e\mu_{2X}}{1+\mu_{2X}^2B^2\cos^2\theta},
\label{Eq:totalmagncondCh3}
\end{equation}
where $n_{\Gamma}$ ($n_{2X}$) is the surface carrier density of the $\Gamma$-pocket (two $X$-pockets), and $\mu_{\Gamma}$ ($\mu_{2X}$) is the mobility of the $\Gamma$-pocket (two $X$-pockets). $R(B)/R_{||}$ at a fixed angle or $R(\theta)/R_{||}$ at a fixed magnetic field would be inversely proportional to Eq.~(\ref{Eq:totalmagncondCh3}). The inverse of Eq.~(\ref{Eq:totalmagncondCh3}) can be expanded in a series, and dividing by $R_{||}$ ($=1/\sigma_t$) we have
\begin{equation}
\begin{split}
\frac{R(B,\theta)}{R_{||}}
&= 1 + \mu_{\mathrm{eff}}^2B^2\cos^2\theta \\
&- \left(\frac{\sigma_{\Gamma}}{\sigma_{t}}-\left(\frac{\sigma_{\Gamma}}{\sigma_{t}}\right)^2\right)(\mu_{\Gamma}^2-\mu_{2X}^2)^2B^4\cos^4\theta +\dotsb,
\label{Eq:EffectCondSeries}
\end{split}
\end{equation}
where
\begin{equation}
\mu_{\mathrm{eff}}=\sqrt{(\mu_{\Gamma}^2-\mu_{2X}^2)\left(\frac{\sigma_{\Gamma}}{\sigma_{t}}\right)+\mu_{2X}^2}.
\label{Eq:EffectMobility}
\end{equation}
From Eq.~(\ref{Eq:EffectCondSeries}), we can see that as long as both channel mobilities ($\mu_{\Gamma}$ and $\mu_{2X}$) are much smaller than $1/B$, the third- and higher-order terms in Eq.~(\ref{Eq:EffectCondSeries}) will be overwhelmed by the second-order term, and Eq.~(\ref{Eq:EffectCondSeries}) will be indistinguishable from the single-channel magnetotransport equation, Eq.~(\ref{Eq:CorbinoFormulaBField}). Therefore, magnetotransport alone cannot tell us whether there are two channels. Furthermore, the effective carrier density is:
\begin{equation}
n_{\mathrm{eff}}=\frac{\sigma_t}{e\mu_{\mathrm{eff}}}=\frac{(n_{\Gamma}\mu_{\Gamma} + n_{2X}\mu_{2X})^{3/2}}{(n_{\Gamma}\mu_{\Gamma}^3 + n_{2X}\mu_{2X}^3)^{1/2}}.
\label{Eq:EffectCarrierDensity}
\end{equation}
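As a numerical sanity check of Eqs.~(\ref{Eq:EffectMobility}) and (\ref{Eq:EffectCarrierDensity}), the following sketch (with illustrative pocket parameters, not fitted values) verifies that the two expressions for $n_{\mathrm{eff}}$ agree and that the two-channel magnetoconductivity is numerically indistinguishable from a single effective Drude channel when $\mu B \ll 1$:

```python
import math

E = 1.602176634e-19  # elementary charge (C)

def drude_sigma(n, mu, B, cos_t):
    """Single-channel 2D magnetoconductivity; n in m^-2, mu in m^2/(V s)."""
    return n * E * mu / (1.0 + (mu * B * cos_t) ** 2)

# Illustrative pocket parameters only (not fitted values).
n_g, mu_g = 1.0e17, 0.020   # "Gamma" pocket
n_x, mu_x = 5.0e17, 0.005   # two "X" pockets combined

sigma_t = n_g * E * mu_g + n_x * E * mu_x
p = n_g * E * mu_g / sigma_t                       # sigma_Gamma / sigma_t
mu_eff = math.sqrt(p * (mu_g**2 - mu_x**2) + mu_x**2)   # Eq. (EffectMobility)
n_eff = sigma_t / (E * mu_eff)

# Same n_eff from the closed form of Eq. (EffectCarrierDensity):
n_eff2 = (n_g*mu_g + n_x*mu_x)**1.5 / (n_g*mu_g**3 + n_x*mu_x**3)**0.5
assert abs(n_eff - n_eff2) / n_eff < 1e-12

# For mu*B << 1, the two-channel conductivity is indistinguishable from a
# single effective Drude channel (the B^4 term is negligibly small):
B, cos_t = 0.5, 1.0
two_ch = drude_sigma(n_g, mu_g, B, cos_t) + drude_sigma(n_x, mu_x, B, cos_t)
one_ch = drude_sigma(n_eff, mu_eff, B, cos_t)
print(abs(two_ch - one_ch) / one_ch)  # far below experimental resolution
```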
We have so far explained why our Corbino disk magnetotransport fails to resolve signatures of multiple pockets. In the following subsections, we will show that the transport parameters of each pocket can only exist in a constrained space. Furthermore, together with our (effective) surface carrier density and mobility results, we will further constrain the range (or area) of the parameter space where each pocket should exist. These results will be compared with other experimental reports.
\subsection{Forbidden Parameter Space}
In 2D transport, there are two fundamental limits that constrain the parameter space. First, the Fermi pocket cannot be larger than the surface Brillouin zone of the material. When a periodic (Born-von Karman) boundary condition is chosen to describe the system of interest in quantum mechanics, there is a relation between the occupied area (or volume) in $k$-space and the associated density of carriers. In a two-dimensional system, this relation is:
\begin{equation}
n_{2D}=\frac{s}{(2\pi)^2}A_{\mathrm{FS}},
\label{Eq:FermiSurfaceCarrierDensity}
\end{equation}
where $A_{\mathrm{FS}}$ is the area of the Fermi surface, and $s$ is the spin degeneracy ($s$ = 2 for typical materials, $s$ = 1 when the spin degeneracy is lifted). The maximum allowed surface carrier density is then, of course, set by the total area of the BZ. The maximum carrier density allowed on the (001) surface of SmB$_6$ is $n_{\mathrm{max}}$ = 5.86$\times$10$^{14}$ cm$^{-2}$ when there is no spin degeneracy (the true 3D TI case). Therefore, any carrier density larger than this value is forbidden. In the surface transport parameter space of Fig.~(\ref{Fig:FermiPocketAndFundamentalLimit})~(b), where we plot the mobility against the surface carrier density on a log-log graph, this forbidden region is indicated as region I, shaded in gray.
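The $n_{\mathrm{max}}$ above can be reproduced from the lattice constant alone, since for a square surface BZ, $s/(2\pi)^2 \times (2\pi/a)^2$ reduces to $s/a^2$. A quick check, taking $a \approx 4.13$ \AA\ (the literature lattice constant of SmB$_6$):

```python
import math

A_LATTICE_CM = 4.13e-8  # SmB6 lattice constant, ~4.13 Angstrom (literature)

def n_max_001(spin_degeneracy=1):
    """Maximum 2D carrier density for the square (001) surface BZ.

    n_2D = s/(2*pi)^2 * A_BZ with A_BZ = (2*pi/a)^2, which reduces to s/a^2.
    """
    a_bz = (2 * math.pi / A_LATTICE_CM) ** 2
    return spin_degeneracy / (2 * math.pi) ** 2 * a_bz  # = s / a^2

print(f"{n_max_001():.2e} cm^-2")  # ~5.86e14 cm^-2
```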
Next, in 2D transport, the system undergoes a metal-to-insulator transition when $R_{\square}$ becomes larger than the quantum of resistance, $h/e^2$ (the Ioffe-Regel criterion in 2D). Therefore, $R_{\square}$ larger than the quantum of resistance must be forbidden. The diagonal line in Fig.~(\ref{Fig:FermiPocketAndFundamentalLimit})~(b) represents $R_{\square} = h/e^2$ (or the quantum of conductivity, $\sigma_s = e^2/h$), and region II, the gray shaded region below the diagonal line, is forbidden.
The absence of SdH oscillations up to 80 T, as shown in Fig.~(\ref{Fig:NoSdH93T}), also provides a constraint on the pockets in the parameter space. We first revisit the Lifshitz-Kosevich (LK) formalism briefly. The conductivity oscillates with a damping factor related to the scattering of the carriers, the Dingle damping factor ($D_{D}$), which is related to the mobility:
\begin{equation}
D_{D}=\exp(-\frac{\pi}{\mu_Q B}),
\label{Eq:DingleDamp}
\end{equation}
where $\mu_Q=e\tau_Q/m^*$ is the quantum mobility. In the semi-classical approach, $\tau$ is estimated by Fermi's golden rule with the scattering potential weighted by an extra factor, ($1-\cos\theta$), so that forward scattering does not contribute\cite{GrapheneScattTime,GrapheneScatteringTheory}. This factor is not included in the quantum scattering time $\tau_Q$ governing quantum oscillations, so $\tau_Q$ can differ from the $\tau$ found in our classical (or semiclassical) transport approach. We will revisit the LK formalism when comparing our transport to the dHvA results\cite{GLiDhVA}.
From the Dingle factor ($D_{D}$), we see that a low mobility means that the onset of quantum oscillations occurs at a high magnetic field. Of course, the exact onset field would depend on the sensitivity of the measurement, on how we model the broadening of the Landau levels due to disorder, on the prefactor of the quantum scattering time, and so on. A crude estimate of the onset of quantum oscillations is $\mu B\approx 1$, or $B\approx 1/\mu$. Therefore, the absence of SdH oscillations up to 80 T, the highest magnetic field ever applied to the SmB$_6$ (001) surface, tells us that the onset field for SdH oscillations must be greater than 80 T, and therefore the mobility should be in the vicinity of 1/(80 T) or less. In Fig.~(\ref{Fig:FermiPocketAndFundamentalLimit})~(b), the mobility of 1/(80 T) is shown as a horizontal dotted line, and the area above it is indicated as region III. The transport parameters are less likely to exist at higher mobility, although this region is not strictly forbidden. To show this graphically, region III is shaded with a gradient that becomes darker at higher mobilities.
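The onset criterion and the Dingle factor can be evaluated directly; $1/(80~\mathrm{T})$ corresponds to a mobility of 125 cm$^2$/(V$\cdot$s), and even at that mobility the Dingle factor at 80 T is already small:

```python
import math

def dingle_factor(mu_quantum_m2, B):
    """Dingle damping factor D_D = exp(-pi/(mu_Q * B)); mu_Q in m^2/(V s)."""
    return math.exp(-math.pi / (mu_quantum_m2 * B))

# Crude SdH onset criterion mu*B ~ 1 at B = 80 T:
mu_onset = 1.0 / 80.0                 # m^2/(V s)
print(mu_onset * 1e4)                 # 125 cm^2/(V s)

# At the onset mobility, mu*B = 1 at 80 T, so D_D = exp(-pi) ~ 0.043:
print(dingle_factor(mu_onset, 80.0))
```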
Excluding regions I and II, and since mobilities far above 1/(80 T) are less likely, the Fermi pockets must lie in or near the white triangular region in Fig.~(\ref{Fig:FermiPocketAndFundamentalLimit})~(b). Next, using the effective mobility and carrier density found from our Corbino disk magnetotransport, we constrain the parameter space of carrier density and mobility allowed for each pocket.
\subsection{Constraining the Transport Parameter Space Region for Each Pocket from Corbino Magnetotransport}
\begin{figure}[t]
\begin{center}
\includegraphics[scale=1.1]{Fig10_CorneredParameterSpace.pdf}
\caption[The surface transport parameter space of SmB$_6$ on the (001) surface.]{The surface transport parameter space of SmB$_6$ on the (001) surface.}
\label{Fig:CorneredParameterSpace}
\end{center}
\end{figure}
In this subsection, we construct a parameter space region where each pocket ($\Gamma$ and $2X$) on the (001) surface can be located based on our magnetotransport results. In the transport parameter space, the carrier density and mobility of each pocket is represented as a point. Our result for effective surface carrier density and mobility ($n_{\mathrm{eff}}$ = 2.71$\times$10$^{13}$ cm$^{-2}$ and $\mu_{\mathrm{eff}}$ = 104.5 cm$^2$/(V$\cdot$s)) from the Corbino disk is shown as a black triangle in Fig.~(\ref{Fig:CorneredParameterSpace}). If we assume that our magnetotransport is a result of two channels, we can use Eq.~(\ref{Eq:totalcondCh3}).
The corresponding conductivities of each pair of points must add up to the conductivity corresponding to the triangular point (which denotes the effective parameters). The two points that represent the transport of each channel lie somewhere in the allowed region shown in Fig.~(\ref{Fig:FermiPocketAndFundamentalLimit})~(b). Also, to be consistent with Eqs.~(\ref{Eq:totalcondCh3})--(\ref{Eq:EffectCarrierDensity}), one of the mobilities must be larger than $\mu_{\mathrm{eff}}$ while the other must be smaller (a detailed justification is in Appendix A). Then, the allowed parameter spaces of the channel with the small carrier density ($\Gamma$-pocket) and of the channel with the large carrier density ($2X$-pocket) are the regions indicated in Fig.~(\ref{Fig:CorneredParameterSpace}). Although we cannot specify the exact carrier density and mobility values, the small carrier density channel must be in region $\Gamma$, and the large carrier density channel must be in region $2X$. In the following section, we compare our constructed parameter space to other experimental reports such as Hall, dHvA, and ARPES.
\section{Comparison with Other Experiments}
In this section, we compare our constructed transport parameter space from the previous section to other experimental reports on the SmB$_{6}$ (001) surface. Before we proceed, we will briefly review how to estimate the carrier density and mobility from non-transport experiments.
\subsection{Quantum Oscillation (dHvA) Interpretation }
When the area of the Fermi surface in the BZ has a size of $A_{\mathrm{FS}}$, the associated frequency of quantum oscillations, $F$, follows the Onsager relation:
\begin{equation}
F(T)=\frac{\hbar}{2\pi e}A_{\mathrm{FS}}.
\label{Eq:OnsagerCh3}
\end{equation}
Also, since the carrier density ($n$) and $A_{\mathrm{FS}}$ are related (Eq.~(\ref{Eq:FermiSurfaceCarrierDensity})), the relation between $n$ and $F$ can be found as:
\begin{equation}
n_{2D}=s\frac{e}{h}F(T).
\label{Eq:CarrierDensityQuantumOscill}
\end{equation}
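The Onsager conversion from an oscillation frequency to a carrier density is a one-liner. A sketch, where the 330-T frequency is purely illustrative (chosen for scale, not taken from a specific report):

```python
E = 1.602176634e-19  # elementary charge (C)
H = 6.62607015e-34   # Planck constant (J s)

def density_from_frequency_cm2(F_tesla, s=1):
    """n_2D = s*(e/h)*F from the Onsager relation, returned in cm^-2."""
    return s * (E / H) * F_tesla * 1e-4  # m^-2 -> cm^-2

# Illustrative: a 330-T frequency with lifted spin degeneracy (s = 1)
print(f"{density_from_frequency_cm2(330.0):.1e} cm^-2")  # ~8e12 cm^-2
```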
According to the LK formula, the amplitude of the oscillations is damped by two factors, the temperature damping factor ($D_{T}$) and the Dingle damping factor ($D_{D}$), which was introduced previously (Eq.~(\ref{Eq:DingleDamp})). The two damping factors are given by:
\begin{equation}
D_{T}=\frac{2\pi^2 (k_{B}T/\hbar \omega_{c})}{\sinh(2\pi^2 (k_{B}T/\hbar \omega_{c}))},
\label{Eq:TempDampFactor}
\end{equation}
\begin{equation}
D_{D}=\exp(-\frac{\pi}{\mu_Q B}),
\end{equation}
where $\omega_{c}$ (=$eB/m^*$) is the cyclotron frequency. From the temperature dependence of the amplitude of quantum oscillations, the effective mass, $m^*$, can be found. From the magnetic field dependence of the amplitude at a fixed temperature, the Dingle damping factor ($D_{D}$) can be used to find $\mu_{Q}$.
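The temperature damping factor can be evaluated directly; the effective masses below are illustrative assumptions, not measured values:

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant (J s)
KB = 1.380649e-23       # Boltzmann constant (J/K)
E = 1.602176634e-19     # elementary charge (C)
ME = 9.1093837015e-31   # electron mass (kg)

def temperature_damping(T, B, m_eff_me):
    """LK temperature damping D_T = x/sinh(x), x = 2*pi^2*k_B*T/(hbar*omega_c),
    with omega_c = e*B/m* and m* given in units of the electron mass."""
    omega_c = E * B / (m_eff_me * ME)
    x = 2 * math.pi**2 * KB * T / (HBAR * omega_c)
    return x / math.sinh(x)

# Illustrative: at T = 0.35 K and B = 35 T, a light m* = 1 m_e is barely
# damped, while a heavier (assumed) m* = 10 m_e is damped noticeably.
print(temperature_damping(0.35, 35.0, 1.0))
print(temperature_damping(0.35, 35.0, 10.0))
```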
\subsection{ARPES and STM Quasiparticle Interference Interpretation}
In ARPES and STM quasiparticle interference (QPI), the momentum- and energy-resolved intensity data ($I(k,\omega)$) are measured and are understood through the spectral density function, $S(k,\omega)$, associated with the removal of electrons at energy $\omega$ by photons\cite{ARPESCuprates,ColemanBook}:
\begin{equation}
I(k,\omega)=f(E)S(k,\omega),
\label{Eq:ARPESIntensity}
\end{equation}
where $f(E)$ is the Fermi-Dirac distribution and $S(k,\omega)$ is given by:
\begin{equation}
S(k,\omega)=-\frac{1}{\pi}\frac{\mathrm{Im}(\Sigma)}{[\omega - E(k) - \mathrm{Re}(\Sigma)]^2 + \mathrm{Im}(\Sigma)^2},
\label{Eq:ARPESPSD}
\end{equation}
where $\Sigma$ is the self-energy. Notice from Eq.~(\ref{Eq:ARPESIntensity}) that $I(k,\omega)$ becomes weak above the Fermi energy because of $f(E)$. Also notice that when the energy $\omega$ approaches $E(k)- \mathrm{Re}(\Sigma)$, the magnitude of $I(k,\omega)$ is enhanced. Therefore, ARPES can measure the momentum-resolved energy dispersion below the Fermi energy, and slightly above it at finite temperatures. By resolving different momentum directions at the Fermi energy, $E_{F}(k)$, the sizes and shapes of the Fermi pockets can be found. The size of the Fermi pockets can be converted to a carrier density using Eq.~(\ref{Eq:FermiSurfaceCarrierDensity}). From the slope or curvature of the dispersion $E(k)$ below the Fermi energy, the effective mass ($m^*$) can be found. Furthermore, the spectral broadening is related to the momentum relaxation, and the associated scattering time, $\tau_p$, is:
\begin{equation}
\frac{\hbar}{\tau_{p}} = - \mathrm{Im}(\Sigma).
\label{Eq:ARPEStau}
\end{equation}
Therefore, the mobility can be found by $\mu = e\tau_{p}/m^*$.
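As a rough numerical sketch of Eqs.~(\ref{Eq:ARPESPSD}) and (\ref{Eq:ARPEStau}): the spectral function is a Lorentzian peaked at $\omega = E(k)+\mathrm{Re}(\Sigma)$ with half-width $|\mathrm{Im}(\Sigma)|$, and the mobility follows from $\mu = e\tau_{p}/m^*$. The effective mass and self-energy values below are placeholders, not measured values.

```python
import numpy as np

HBAR = 1.054571817e-34   # J*s
E_CH = 1.602176634e-19   # C (also J per eV)
M_E  = 9.1093837e-31     # electron mass, kg

def spectral_function(omega, Ek, re_sigma, im_sigma):
    """Lorentzian S(k, omega); all energies in the same units, Im(Sigma) < 0."""
    return -(1.0 / np.pi) * im_sigma / ((omega - Ek - re_sigma)**2 + im_sigma**2)

def mobility_from_im_sigma(im_sigma_eV, m_eff):
    """mu = e*tau_p/m* with hbar/tau_p = -Im(Sigma); returns m^2/Vs."""
    tau_p = HBAR / (-im_sigma_eV * E_CH)
    return E_CH * tau_p / m_eff

# placeholder numbers: Im(Sigma) = -10 meV, m* = 10 m_e
mu = mobility_from_im_sigma(-0.010, 10 * M_E)        # ~1.2e-3 m^2/Vs (~12 cm^2/Vs)
s_peak = spectral_function(0.12, 0.10, 0.02, -0.01)  # peak at omega = Ek + Re(Sigma)
```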
\subsection{Estimation of Transport Parameters from Previous Reports}
\begin{figure}[t]
\begin{center}
\includegraphics[scale=1]{Fig11_Altogether.pdf}
\caption[Comparison of surface transport parameters of SmB$_{6}$ on the (001) surface.]{Comparison of surface transport parameters of SmB$_{6}$ on the (001) surface. (a) Comparison over a large transport range. (b) Comparison near the constrained parameter space. The triangular points are the transport reports. The square points are from the dHvA quantum oscillations. The circular points are from the ARPES $X$-pockets. The arrow is the carrier density estimation of the $\Gamma$-pocket from ARPES. The hexagonal points are from STM QPI reported pockets.}
\label{Fig:ComparisonParameterSpace}
\end{center}
\end{figure}
With the conversion relations that we have reviewed in the previous subsections, in Fig.~(\ref{Fig:ComparisonParameterSpace}) we plot the estimated mobility and carrier density of the reported transport, ARPES, dHvA, and STM QPI studies on the (001) surface of SmB$_{6}$. Because the figure is too crowded to include every transport report, we also provide a table of the transport reports, as shown in Table~\ref{Tab:TransportTable}, and a table of the ARPES and STM QPI reports, as shown in Table~\ref{Tab:ARPESTable}, which also include the data points that are omitted from the graph.
\begin{table*}[t]
\centering
\begin{tabular}{c c c c c}
\hline\hline
Report & Geometry& R/square ($\Omega$)&$n_{2D}$ (cm$^{-2}$)& $\mu_{2D}$(cm$^{2}$/V$\cdot$sec)\\ [1ex]
\hline
Y.S. Eo $et$ $al$. (This work) & Corbino & 2.2$\times$10$^{3}$ & 2.71$\times$10$^{13}$ & 104.5\\
S. Wolgast $et$ $al$. (2013)\cite{Wolgast2} & Corbino & 5.1$\times$10$^{2}$& 2.0$\times$10$^{14}$ & 61\\
P. Syers $et$ $al$.(2015)\cite{PSyersCorbino} & Corbino & 2.4$\times$10$^{2}$ &2.0$\times$10$^{14}$ & 133\\
S. Thomas $et$ $al$. (2013)\cite{SThomasWAL} & Hall* & 9.2$\times$10$^{-4}$&1.0$\times$10$^{18}$ & 6.8$\times$10$^3$\\
S.H. Lee $et$ $al$. (2013)\cite{SHLeeThinHall} & Thin Film Hall& 87 &1.86$\times$10$^{16}$ & 4.27\\
S. Adhikari $et$ $al$. (2015)\cite{adhikariHall} & Hall & 0.35& 2.0$\times$10$^{18}$ & 9\\
J.W. Allen $et$ $al$.(1979)\cite{JWAllenHall1979} & Hall* &5.3$\times$10$^{2}$& 1.34$\times$10$^{15}$ & 8.82\\
N. J. Laurita $et$ $al$.(2016)\cite{NLaurita} & ac Optical & $\geq$10$^3$& N/A & N/A\\[1ex]
\hline
\end{tabular}
\caption[Summary of the transport parameters from transport experiments.]{Summary of the transport parameters from transport experiments. *The carrier density and mobility are estimated naively from the Hall coefficient, thickness, and resistivity, without the exact geometry information.}
\label{Tab:TransportTable}
\end{table*}
\begin{table*}[t]
\centering
\begin{tabular}{c c c c }
\hline\hline
Report & Pocket& $n_{2D}$ (cm$^{-2}$)& $\mu_{2D}$(cm$^{2}$/V$\cdot$sec)\\ [1ex]
\hline
J. Denlinger $et$ $al$. (2013)\cite{Denlinger2013TempARPES,Denlinger2016consistency} & $2X$ & 1.67$\times$10$^{14}$ & 13\\
N. Xu $et$ $al$. (2013)\cite{NXuARPES2013} & $2X$ & 1.74$\times$10$^{14}$ & 15.2\\
N. Xu $et$ $al$.(2013)\cite{NXuARPES2013} & $\Gamma$ & 8.61$\times$10$^{12}$ & N/A\\
J. Jiang $et$ $al$. (2013)\cite{Jiang2ARPES2013}& $2X$ & 1.92$\times$10$^{14}$ & N/A\\
J. Jiang $et$ $al$. (2013)\cite{Jiang2ARPES2013} & $\Gamma$ &6.45$\times$10$^{12}$ & N/A\\
M. Neupane $et$ $al$. (2013)\cite{NeupaneARPES2013} & $2X$ & 3.01$\times$10$^{14}$ & N/A\\
M. Neupane $et$ $al$.(2013)\cite{NeupaneARPES2013} & $\Gamma$ & 9.39$\times$10$^{12}$ & N/A\\
P. Thunstrom $et$ $al$.(2019)\cite{thunstrom2019topology} & $\Gamma$ & 4.8$\times$10$^{13}$ & N/A\\
H. Pirie $et$ $al$.(2019)\cite{HarvardSTM, HarvardSTM_private} & $2X$ & 2.69$\times$10$^{14}$ & 0.94\\[1ex]
H. Pirie $et$ $al$.(2019)\cite{HarvardSTM, HarvardSTM_private} & $\Gamma$ & 8.94$\times$10$^{12}$ & 64.7\\[1ex]
\hline
\end{tabular}
\caption[The transport parameters estimated from ARPES.]{The transport parameters estimated from ARPES and STM QPI.}
\label{Tab:ARPESTable}
\end{table*}
We explain some of the highlights of Fig.~(\ref{Fig:ComparisonParameterSpace})~(a), Table~\ref{Tab:TransportTable}, and Table~\ref{Tab:ARPESTable}. The transport reports, including our Corbino magnetotransport, are shown as triangles in Fig.~(\ref{Fig:ComparisonParameterSpace}). The early Hall effect data show a very high surface carrier density, even exceeding the maximum carrier density associated with the BZ area. The tendency of these data points toward very large carrier density is consistent with our earlier estimation from the Hall geometry measurement. The ac optical conductivity study did not observe the existence of the surface states, but provides a bound on the sheet conductivity based on the noise level\cite{NLaurita}. The dHvA quantum oscillations by G. Li $et$ $al$.\cite{GLiDhVA} report two Fermi pockets that originate from the (001) surface, plotted as hollow squares in Fig.~(\ref{Fig:ComparisonParameterSpace}). The dHvA results show the highest mobility and lowest carrier density in the transport parameter space. Next, the ARPES reports of the $X$-pockets are plotted as circles in Fig.~(\ref{Fig:ComparisonParameterSpace}). All of the ARPES studies that agree on the existence of the surface states see a very large $X$-pocket. Among these, J. Denlinger $et$ $al$.\cite{Denlinger2013TempARPES} and N. Xu $et$ $al$.\cite{NXuARPES2013} report Im($\Sigma$), and therefore the mobility of the $X$-pocket can be estimated. Early ARPES studies report a small $\Gamma$-pocket\cite{NXuARPES2013, NeupaneARPES2013, Jiang2ARPES2013}. Recently, it has been suggested that those states are trivial Rashba states\cite{hlawenka2018samarium}. Instead, a larger $\Gamma$-pocket, previously interpreted as an Umklapp state formed by folding of an $X$-pocket with respect to the (2$\times$1) reconstruction surface Brillouin zone boundaries, has been proposed\cite{ohtsubo2019non,thunstrom2019topology}.
Unfortunately, both reports lack an estimation of Im($\Sigma$). Since we can only estimate the surface carrier density from the area of the pocket, we show one of the conversions, from N. Xu $et$ $al.$\cite{NXuARPES2013}, as an (orange) arrow pointing to the corresponding carrier density. Lastly, the STM QPI study reports both the $X$- and $\Gamma$-pockets\cite{HarvardSTM}. Together with the scattering time that was estimated by the author\cite{HarvardSTM_private}, we report the estimated carrier density and mobility, shown as hexagons in Fig.~(\ref{Fig:ComparisonParameterSpace}).
\subsection{Estimation of Mobility of ARPES Fermi Pockets Using Transport}
In this subsection, by comparing our effective transport parameters from the Corbino disk magnetotransport with the ARPES studies that report a small $\Gamma$-pocket\cite{NXuARPES2013, NeupaneARPES2013, Jiang2ARPES2013}, we further estimate what the mobility of each pocket should be. Here we review two equations, Eq.~(\ref{Eq:totalcondCh3}) and Eq.~(\ref{Eq:totalmagncondCh3}).
\begin{equation}
\sigma_{t} = \sigma_{\Gamma} + \sigma_{2X}.
\end{equation}
Furthermore, when a magnetic field is applied, we assume the carrier density and mobility we find from experiment is an effective parameter ($n_{\mathrm{eff}}$ and $\mu_{\mathrm{eff}}$) from the two pockets, i.e., $\sigma_{t}=n_{\mathrm{eff}}e\mu_{\mathrm{eff}}$. The magnetoconductivity is then:
\begin{equation}
\frac{n_{\mathrm{eff}}e\mu_{\mathrm{eff}}}{1+\mu_{\mathrm{eff}}^2B^2\cos^2\theta}
=\frac{n_{\Gamma}e\mu_{\Gamma}}{1+\mu_{\Gamma}^2B^2\cos^2\theta}
+\frac{n_{2X}e\mu_{2X}}{1+\mu_{2X}^2B^2\cos^2\theta}.
\end{equation}
Here, the known parameters are $n_{\mathrm{eff}}$ and $\mu_{\mathrm{eff}}$, taken from our magnetotransport results, as well as $n_{\Gamma}$ and $n_{2X}$ from the ARPES reports in Table~(\ref{Tab:ARPESTable}). Using the two equations above, we can solve for $\mu_{\Gamma}$ and $\mu_{2X}$. We report the results in Table~(\ref{Tab:ARPESEstimate}). Our results predict that $\mu_{\Gamma}$ is about 150 cm$^2$/V$\cdot$s. A smaller $\Gamma$-pocket area requires a larger $\mu_{\Gamma}$ to be consistent with our magnetotransport report.
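The two-channel inversion described here can be carried out numerically. The sketch below (SI units; pocket densities of the order of those in Table~\ref{Tab:ARPESTable} but otherwise illustrative) enforces the zero-field sum rule exactly and root-finds the in-field equation; it is a schematic of the procedure, not the exact code used in our analysis.

```python
import numpy as np
from scipy.optimize import brentq

def pocket_mobilities(n_eff, mu_eff, n_g, n_x, B, cos_t=1.0):
    """Invert the two-channel equations for (mu_Gamma, mu_2X).
    SI units: densities in m^-2, mobilities in m^2/Vs, B in tesla.
    Assumes the effective parameters are consistent with a two-pocket
    picture, so the bracketing interval contains a sign change."""
    b = B * cos_t
    s0 = n_eff * mu_eff                  # zero-field n*mu sum
    sB = s0 / (1.0 + (mu_eff * b)**2)    # in-field value
    def g(mg):
        mx = (s0 - n_g * mg) / n_x       # enforce the zero-field equation
        return (n_g * mg / (1.0 + (mg * b)**2)
                + n_x * mx / (1.0 + (mx * b)**2) - sB)
    mg = brentq(g, 1e-12, s0 / n_g - 1e-12)
    return mg, (s0 - n_g * mg) / n_x

# round-trip check with illustrative pocket values
n_g, n_x, mg0, mx0, B = 8.6e16, 1.7e18, 1.5e-2, 9.0e-4, 45.0
s0 = n_g * mg0 + n_x * mx0
sB = n_g * mg0 / (1 + (mg0 * B)**2) + n_x * mx0 / (1 + (mx0 * B)**2)
mu_eff = np.sqrt(s0 / sB - 1.0) / B
n_eff = s0 / mu_eff
mg, mx = pocket_mobilities(n_eff, mu_eff, n_g, n_x, B)   # recovers (mg0, mx0)
```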
\begin{table}[h]
\centering
\begin{tabular}{c c c c }
\hline\hline
Report & $\mu_{\Gamma}$ (cm$^{2}$/V$\cdot$sec)& $\mu_{2X}$(cm$^{2}$/V$\cdot$sec)\\ [1ex]
\hline
N. Xu $et$ $al$. (2014)\cite{NXuARPES2013} & 168.36 & 9.11\\
J. Jiang $et$ $al$. (2013)\cite{Jiang2ARPES2013} & 148.73 & 4.78\\
M. Neupane $et$ $al$.(2013)\cite{NeupaneARPES2013} & 152.96 & 8.71\\[1ex]
\hline
\end{tabular}
\caption[Estimation of the mobility of the ARPES Fermi pockets after comparison with the Corbino magnetotransport.]{Estimation of the mobility of the ARPES Fermi pockets after comparison with the Corbino magnetotransport.}
\label{Tab:ARPESEstimate}
\end{table}
\subsection{Consistency with Corbino Magnetotransport and Discussion}
In this subsection, we discuss how consistent the other previous studies are with our Corbino magnetotransport study. We can say that they are in full agreement when the transport parameters ($n_{2D}$ and $\mu_{2D}$) of each pocket ($\Gamma$ and $2X$) from other studies lie within the parameter space that we constructed in Fig.~(\ref{Fig:CorneredParameterSpace}). For convenience, we plot Fig.~(\ref{Fig:CorneredParameterSpace}) and Fig.~(\ref{Fig:ComparisonParameterSpace})~(a) together in Fig.~(\ref{Fig:ComparisonParameterSpace})~(b).
None of the previous studies is in full agreement with our transport study, either because at least one of the pockets lies outside the parameter space or because full information is lacking (e.g., no mobility report). Although there is no full agreement, the ARPES report by N. Xu $et$ $al.$\cite{NXuARPES2013} is in closest agreement among the ARPES studies. The 2$X$ pocket is within our constructed parameter space and the carrier density of $\Gamma$ is consistent with our expected range. J. Denlinger $et$ $al.$\cite{Denlinger2013TempARPES} is in good agreement with our 2$X$ parameter space, but a report of the $\Gamma$ pocket is missing. The recent large-$\Gamma$-pocket suggestion\cite{thunstrom2019topology, ohtsubo2019non} does not agree with our parameter space construction. In this case, the carrier density is close to the effective carrier density ($n_{\mathrm{eff}}$) from the bare magnetotransport fitting, which would require an absence of the 2$X$ pockets.
The carrier density ($n_{2D}$) is a more important transport parameter to reconcile than the mobility, since it is a measure of an intrinsic property (the electronic structure), whereas the mobility can change with the level of disorder of the sample and the surface quality. Note that in ARPES and STM experiments the samples are cleaved and measured under vacuum, whereas in transport the samples are exposed to ambient conditions. We therefore believe that a mobility difference of a few factors, rather than orders of magnitude, can still represent potential agreement. In that sense, the STM QPI study is in closest agreement, except for its consistently lower mobilities. The tendency seen in STM QPI that the $\Gamma$ pocket has a higher mobility than the $2X$ pocket is consistent with our transport analysis. One might attribute the consistently lower mobilities to differences in sample and surface quality. Interestingly, the authors from the same group provide a scenario that may be an alternative explanation for this disagreement: the difference between ARPES and STM QPI, most notably the difference in effective mass, can be explained by band bending driven by polar reconstruction of the surfaces\cite{matt2018consistency}. The authors note that a notable difference between STM QPI and ARPES is the length scales they probe. The largest ordered domains measured by STM are on the order of tens of nanometers, whereas ARPES measures a spatial average over tens of microns consisting of many domains formed by reconstruction surfaces that may be either polar or non-polar\cite{matt2018consistency}. The dominant effect on polar surfaces is to shift the Dirac point of the surface toward the bulk valence band, resulting in an enhanced Fermi velocity. The study shows that a smaller effective mass can be seen in a spatial average that includes these polar reconstruction surfaces\cite{matt2018consistency}.
Full consistency with our transport, including agreement with the mobility of the $\Gamma$ pocket, might be possible after considering further spatial averaging\cite{HarvardSTM_private}.
The pockets reported by dHvA are more difficult to reconcile with our constructed parameter space. All of the reported pocket sizes are small; a large pocket consistent with the $X$-pocket parameter space seems to be lacking. From the perspective of our transport parameter space, an extremely large magnetic field ($>$1000 T) would be needed for this observation. Instead, the two pockets in dHvA are both reported closer to the $\Gamma$-pocket parameter space. Also, the two pockets have higher mobilities than the effective mobility we find from magnetotransport (see Appendix A for the reasoning). We note that the mobility estimated from the dHvA oscillations is the quantum mobility ($\mu_Q$) and not the transport mobility ($\mu$). Still, this cannot explain the disagreement since, typically, $\mu_Q$ and $\mu$ differ only slightly. Finally, we briefly mention the Cambridge dHvA results, although our study does not aim to re-interpret the claimed bulk origin as a surface one. There has been a study\cite{Denlinger2016consistency} that re-considers the quantum oscillations from the Cambridge group by Tan $et$ $al.$\cite{TanDhVa} as having a surface origin instead of a bulk origin. The Fermi surface area corresponding to the high frequencies is consistent with the $X$-pocket observed by ARPES. When comparing the Cambridge quantum oscillations with our transport results, the corresponding carrier density is also consistent with the 2$X$ parameter space that we constructed. However, the mobility associated with the onset of those high-frequency quantum oscillations is far too high to be consistent with our transport parameter space and the ARPES reports.
\section{Conclusion}
In conclusion, we have studied the surface transport of SmB$_{6}$ using high-field magnetotransport at low temperatures. We have not been able to observe SdH quantum oscillations in our studies, and therefore we were not able to see signatures of multiple Fermi pockets. To properly characterize the surface without SdH oscillations, we have discussed the importance of employing the proper transport geometry and surface preparation in surface transport studies. From our estimated carrier density and mobility, we were able to constrain the possible carrier density and mobility values of each Fermi pocket (each channel).
\begin{acknowledgments}
We thank H. Pirie for providing us the scattering time estimations from recent STM QPI work and for helpful discussions. We also thank K. Sun, L. Li, and J. W. Allen for their insightful comments. A portion of this work was performed at the National High Magnetic Field Laboratory (NHMFL), which is supported by the National Science Foundation Cooperative Agreement Nos. DMR-1157490 and DMR-1644779 and the State of Florida, and by the DOE. We thank Jan Jaroszynski for assistance during the high magnetic field experiments. Device fabrication was performed in part at the Lurie Nanofabrication Facility. The authors acknowledge the University of Michigan College of Engineering for financial support and the Michigan Center for Materials Characterization for use of the instruments and staff assistance.
\end{acknowledgments}
\section{Introduction}
At very short time and length scales the diffusive motion of a Brownian particle breaks down into a series of individual ballistic flights. The functional form of this transition is controlled by the microscopic structure and behavior of the fluid. The microscopic time and length scales for the ballistic motion are so small that direct measurements have only recently become possible \cite{lukic_direct_2005, huang_direct_2011, franosch_resonances_2011, kheifets_observation_2014}. These experiments have used optical traps to confine a test particle within a harmonic well, allowing a high-precision measurement of the short-time motion but at the cost of a loss of information about the crossover to the longer-time behavior that is indicative of the microscopic structure of the fluid. Furthermore, laser traps by their nature create a harmonic potential energy well for the motion of the particle and thus function similarly to an elastic term in a viscoelastic fluid. As such, it can be difficult to deconvolve the effect of the trap from the effects of the elastic component of the fluid. Indeed, all studies of viscoelastic fluids known to the authors do not address the ballistic regime. Here we avoid the limitations and contamination caused by the use of a laser trap and present direct measurements of the full transition away from ballistic motion for a freely moving colloid suspended in simple Newtonian and viscoelastic Maxwell fluids. These measurements are achieved in an interaction-free manner using a high-speed camera, intense illumination, and an accurate tracking algorithm \cite{parthasarathy_rapid_2012}.
These measurements allow us to unambiguously distinguish between microscopic models for thermal motion in dense fluids \cite{lemons_paul_1997, uhlenbeck_theory_1930, zwanzig_hydrodynamic_1970,widom_velocity_1971, hinch_application_1975, clercx_brownian_1992,van_zanten_brownian_2000, wilhelm_rotational_2003, atakhorrami_short-time_2008, raikher_brownian_2013} and provide a hitherto impossible glimpse into the fundamental behavior of thermal fluids. In a simple Newtonian fluid, our measurement is in close correspondence with analytic predictions. By fitting our data we can directly measure the constants of motion as well as obtain a first-principles measurement of the temperature of the fluid. Having proven the validity of this method, we experimentally examine the motion of a single particle in a Maxwell fluid as it transitions from ballistic to elastically trapped to diffusive motion, the first observation of this kind. We compare these results to existing microscopic models for Maxwell fluids \cite{mason_optical_1995, van_zanten_brownian_2000, raikher_brownian_2013, grimm_brownian_2011}, and find significant discrepancies between the model predictions and the observed behavior.
An early effort to model the ballistic-to-diffusive transition was made with the ideal gas approximation \cite{lemons_paul_1997, uhlenbeck_theory_1930}. A more accurate model for dense fluids, motivated by early computer simulations \cite{rahman_liquid_1966,alder_decay_1970}, was achieved by adding an effective mass term and a memory term to the ideal gas model \cite{zwanzig_hydrodynamic_1970, hinch_application_1975}. The effective mass term models the frictionally bound fluid that is attached to the particle and the memory term models the inertial interaction of the particle with nearby moving fluid \cite{zwanzig_compressibility_1975}. At sufficiently short timescales, where compressibility effects at the speed of sound in the fluid become relevant, this model breaks down and is replaced with the simple ideal gas model. The memory term in the dense fluid model comes from the entrained fluid in a dense system, which slows the particle's change of direction.
These modifications to the Langevin equation were analytically solved \cite{clercx_brownian_1992} under the assumptions that the fluid is viscous and incompressible, the Reynolds number is low, and the test particle is a hard sphere \cite{widom_velocity_1971, l._d._landau_&_e.m._lifshitz_fluid_1959}. The predicted MSD is included in the supplementary information and plotted in figure \ref{trla7Var}. The dense fluid model and the ideal gas model share similar asymptotic forms. At short times, known as the ballistic regime, the MSD asymptotes to $(2k_{B}T/M)t^2$. At long times, the diffusive regime, the MSD scales as $(4k_{B}T/\gamma)t$. Here $\gamma$ is the Stokes drag coefficient ($6 \pi \eta r$), $k_B$ the Boltzmann constant, $T$ the temperature, $\eta$ the viscosity, $r$ the tracer radius, and $M$ the effective mass $m+ \frac{1}{2}m_f$, where $m_f = \frac{4}{3} \pi r^3 \rho$ is the mass of the fluid displaced by the colloid and $\rho$ is the density of the fluid. The dense fluid MSD differs from the ideal gas MSD in two salient ways: 1) It gives rise to a slower ballistic velocity, caused by the increased effective mass of the particle. 2) It has a much gentler crossover between ballistic and diffusive motion, caused by the inertial memory of the liquid.
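For scale, the constants entering these asymptotes can be evaluated for a bead like ours (radius 21.8 $\mu$m in density-matched salt water); the sketch below assumes a nominal viscosity of $1.0\times10^{-3}$ Pa$\cdot$s and $T=298$ K. It gives a crossover time $M/\gamma$ of order $10^{-4}$ s and a thermal speed of order 10 $\mu$m/s, consistent with the regimes seen in the data.

```python
import numpy as np

kB = 1.380649e-23                                  # J/K
r, rho, eta, T = 21.8e-6, 1.06e3, 1.0e-3, 298.0    # bead radius, fluid density,
                                                   # nominal viscosity, temperature
m_f = (4.0 / 3.0) * np.pi * r**3 * rho             # displaced-fluid mass
M = 1.5 * m_f                                      # m + m_f/2 for a density-matched bead
gamma = 6.0 * np.pi * eta * r                      # Stokes drag coefficient
tau_c = M / gamma                                  # ballistic-to-diffusive crossover time
v_th = np.sqrt(2.0 * kB * T / M)                   # 2D thermal (ballistic) speed
```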
Non-Newtonian fluids, however, have much more complicated Langevin equations \cite{van_zanten_brownian_2000, raikher_brownian_2013} which have not been explored as intensely due to the lack of experimental data at the shortest length and time scales. One of the simplest non-Newtonian fluids is a Maxwell fluid, characterized by a single terminal relaxation time between spring-like and viscous-like behavior. This requires the addition of a decaying spring term to the Langevin equation. This additional term results in an intermediate plateau regime in the MSD corresponding to the behavior of a thermal spring. While analytical solutions for the Maxwell fluid are lacking, predictions have been made about the asymptotic behavior in the three regimes: ballistic motion, elastically trapped motion, and finally diffusive motion. The short-time ballistic behavior is predicted to asymptote to $(4k_{B}T/m)t^2$, the elastic trap should have a constant MSD of $2 k_{B}T/\pi r G_0$, and the long-time diffusive motion is predicted to have an asymptote of $(4k_{B}T/\gamma)t$ \cite{van_zanten_brownian_2000}. Here $G_0$ is the plateau modulus, a commonly measured rheological quantity which sets the amplitude of the storage and loss moduli \cite{aramaki_composition-insensitive_2010}. We use our technique to test the rheological predictions when applied to a single unconstrained particle in such a fluid moving between the ballistic and diffusive regimes.
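A quick check of the predicted plateau: for a plateau modulus of 5.72 Pa (the rheometer value measured later in this work) and our bead radius, the expected MSD plateau $2k_BT/\pi r G_0$ is about $2\times10^{-17}$ m$^2$, comfortably above the localization noise floors quoted below. The snippet assumes $T = 298$ K.

```python
import numpy as np

kB = 1.380649e-23  # J/K

def plateau_msd(T, r, G0):
    """Predicted MSD plateau 2*kB*T/(pi*r*G0) for a bead of radius r
    in a Maxwell fluid with plateau modulus G0 (SI units)."""
    return 2.0 * kB * T / (np.pi * r * G0)

msd_p = plateau_msd(298.0, 21.8e-6, 5.72)   # ~2.1e-17 m^2
```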
\begin{figure}[t]
\centerline{\includegraphics[scale=0.6]{trla7ImTraceDbImg.pdf}}
\caption{A single colloid suspended in water viewed through the microscope. The path the particle travels for the next 2 seconds is shown in blue.}
\label{particleTrace}
\end{figure}
In this experiment, we used polystyrene hard spheres with a radius of 21.8 $\mu$m as our tracer particles. Polystyrene was chosen because it is easily density matched to water using NaCl with only minimal, and known, changes to the viscosity. The size of the particle, larger than those in optical trap experiments, was chosen because the tracking precision as well as the ideal gas transition time and length increase with increasing radius. We chose to use water as our experimental liquid because of its ubiquity in experiments and its relatively low viscosity. For our setup, the Reynolds number is $2.4\times10^{-9}$. This system fulfills all of the underlying assumptions required by the dense fluid equation. When selecting a Maxwell fluid, we chose to use a solution of cetyltrimethylammonium chloride (CTAC) mixed with water. This mixture has been found to exhibit Maxwell fluid behavior caused by worm-like micelles \cite{oda_effect_1998, wilhelm_rotational_2003} and is a commonly studied Maxwell fluid. The micelles form a network within the fluid. Acting together, this network changes the properties of the supporting fluid. Because the test particle we use is larger than the micelles, it probes the properties of the complex fluid rather than solely those of the intervening fluid.
Test particles were placed into a deionized water mixture at $5\times10^{-3}$\% w/v of colloid. The colloids are slightly denser than water, so NaCl was added to density match the system at a measured value of $1.06 \times 10^{3}$ kg/m$^3$. After sonicating and degassing the colloid-water solution, it was placed in a Fastwell silicone spacer cavity between a slide and a cover slip. The chamber was sealed with vacuum grease to ensure that air bubbles did not form. The silicone spacer had a width of 2.4 mm and a circular void with a radius of 5 mm, allowing the colloids to be imaged far from wall effects. The slides were cleaned with piranha solution and dried with nitrogen gas, which removed any coatings on the slides.
We created a Maxwell fluid using a solution of CTAC at 1\% by weight in water, with the addition of 0.12 M NaSal to facilitate the formation of micelles. We directly measured the plateau modulus of this solution to be 5.72 Pa using an angular frequency sweep from 62 to 0.062 rad/s at a constant displacement of 0.18 mrad (TA Instruments AR-2000ex rheometer with a 60 mm, 1.025$^{\circ}$ cone plate). Our fluid had a density of $1.055 \times 10^{3}$ kg/m$^3$, slightly lower than the average density of the beads. However, the density is close enough that beads did not fall out of suspension until well after all the measurements were complete. The same test particles were added at a concentration of $2.5\times10^{-3}$\% w/v. When preparing samples, we used a process almost identical to the one for water. The major difference was that the sample was not sonicated before being added to the Fastwell, because the fluid solidified when exposed to high-frequency agitation. Instead, the sample was slowly mixed using a low-frequency mixer.
All data were collected on a Nikon TE2000s microscope on a floating-stage optical table in a climate-controlled room. Illumination was provided by a 500 mW red LED (Thorlabs LED635L) shining through the microscope condenser. Between the LED and the sample, a neutral density filter on a swivel mount was added to allow initial setup to be done without excessive local heating of the sample. The sample was encased in a small cardboard box for isolation from acoustic vibrations. Images were gathered through a 50x objective (Nikon LU Plan ELWD 50x/0.55 B $\infty$/0 WD 10.1) using a Phantom M310 high-speed camera. Videos were taken at 40,000 fps (T = 25 $\mu$s) with an image size of 192x192 pixels and a magnification of 0.4 $\mu$m/pix. While filming a particle, all motorized elements on the microscope and camera were turned off to eliminate small vibrations. Once a particle was found, filming lasted 2.84 s (113,600 frames), after which the LED was immediately shut off and the ND filter replaced.
We used a radial center tracking algorithm \cite{parthasarathy_rapid_2012} to find the center of the colloid in successive frames of the video. Using a combination of simulations and tracking of test particles which were stuck to the slide, we found that the algorithm does not exhibit a preferred direction. We found that the mean position error was about $1.5$ nm in each frame. A representative trace overlaid on the first video frame is shown in figure \ref{particleTrace}.
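For illustration, localization on a synthetic bead image can be sketched with an intensity-weighted centroid. This is a much-simplified stand-in for the radial-symmetry center algorithm of the cited work, which achieves the nanometer-level precision quoted above on real images.

```python
import numpy as np

def centroid(img):
    """Intensity-weighted centroid (x, y); a simplified stand-in for the
    radial-symmetry center algorithm used for the real data."""
    img = img - img.min()            # remove the background offset
    yy, xx = np.indices(img.shape)
    w = img.sum()
    return (xx * img).sum() / w, (yy * img).sum() / w

# synthetic bead image: Gaussian spot centered at (12.3, 8.7)
yy, xx = np.indices((24, 24))
img = np.exp(-((xx - 12.3)**2 + (yy - 8.7)**2) / (2.0 * 3.0**2))
cx, cy = centroid(img)               # close to (12.3, 8.7)
```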
We calculate the mean squared displacement (MSD) from our measured data as $\text{MSD}(\tau) = \langle \| \vec x(t+\tau)-\vec x(t) \| ^2 \rangle $ where $\vec x(t)$ is the measured position of the particle at time $t$, $\tau$ is the lag-time between position measurements, and angle brackets denote a time average. The MSD for a representative particle is plotted as green squares in figure \ref{trla7Var}. This MSD exhibits a small drift at long times, past about 0.1 s, and a noise floor at very short times. The drift is likely the result of convective flows within our sample chamber, driven, perhaps, by local heating of the sample. However, this drift can be easily removed by calculating the variance of particle position as a function of lag time as $\text{Var}(\tau) = \text{MSD}(\tau)-\| \langle \vec x(t+\tau)- \vec x(t) \rangle \|^2$. The noise floor is caused by photon shot noise in our camera contributing to uncertainty in the localization of a particle. We can directly measure this noise by tracking a particle fixed to a slide and find it to be independent and identically distributed Gaussian noise with a variance of approximately $2\times10^{-18}$ m$^2$ (inset to figure \ref{trla7Var}). The precise value of this noise variance changes from run to run due to variations in particle size and particle focus (due to changes in z-position). Because this noise is independent and identically distributed for a given measurement we can simply subtract the noise floor from our measurement to find the true variance of our particle, plotted as black circles in figure \ref{trla7Var}.
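The estimators above can be written compactly. The sketch below computes the MSD and the drift-corrected variance from a position track, and checks on a pure-drift track that the drift cancels from the variance exactly.

```python
import numpy as np

def msd_and_variance(pos, lags):
    """pos: (N, 2) array of positions. Returns the raw MSD(tau) and the
    drift-corrected variance Var(tau) = MSD(tau) - ||<x(t+tau) - x(t)>||^2."""
    msd, var = [], []
    for lag in lags:
        d = pos[lag:] - pos[:-lag]                 # all displacements at this lag
        msd.append((d**2).sum(axis=1).mean())
        var.append(msd[-1] - (d.mean(axis=0)**2).sum())
    return np.array(msd), np.array(var)

# pure drift: the MSD grows as (v*tau)^2, but the variance cancels to zero
t = np.arange(1000)[:, None]
pos = t * np.array([[1e-3, 2e-3]])                 # constant-velocity track
msd, var = msd_and_variance(pos, [1, 10, 100])
```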
\begin{figure}[t]
\centerline{\includegraphics[scale=1.1]{trla7MSDdataLgd.pdf}}
\caption{The variance of the colloid's trace shown in figure~\ref{particleTrace}. The green squares show the mean squared displacement. The black dots with error bars show the variance with the noise floor subtracted, in this case $2.34\times10^{-18}$ m$^2$. In red is the fitted Clercx-Schram theoretical prediction for the MSD. The best-fit value for the temperature is 302 K and for the radius is $21$ $\mu$m. We propagate the localization error of the position measurement as well as sampling error through our calculation to obtain error bars for the plot, as described in the supplementary material. Inset: A histogram of measured positions for a stranded particle.}
\label{trla7Var}
\end{figure}
The plotted variance clearly shows a ballistic regime below roughly $10^{-4}$ to $10^{-3}$ s, a crossover regime up to about $10^{-2}$ s, and a diffusive regime at longer times. The measured variance fits the dense fluid model exceedingly well over the entire range of measured lag-times, as shown in figure \ref{trla7Var}. The model depends on four physical parameters: 1) temperature, 2) particle radius, 3) fluid density, and 4) fluid viscosity. Of these, we independently measure the fluid density prior to observation. The fluid viscosity of salt water is a known function of density and temperature \cite{sharqawy_thermophysical_2010}. Therefore, we have only two independent fitting parameters: temperature and particle radius. To these, we add a third fitting parameter to describe the magnitude of the noise floor.
We independently fit 18 measurements using 18 different particles, shown in figure \ref{residpPlot}. On average, the particle radius was found to be $20.5 \pm 0.8$ $\mu$m, within the tolerance of the manufacturer's quoted radius. The average temperature from our fitting was found to be slightly higher ($297 \pm 4.5$ K) than the measured room temperature ($293 \pm 2$ K), likely the result of local heating from the intense illumination. The noise floors for the measurements were found to range from $1.2\times10^{-18}$ m$^2$ to $2.4\times10^{-18}$ m$^2$. Thus fit, the dense fluid functional form is indistinguishable from the data over much of our measured range. To characterize the agreement, we plot the residual percentages and find them to be unbiased, with error less than 5\% over at least two decades of lag time, as shown in figure \ref{residpPlot}. At longer times, where drift and sampling errors increase, the percent error increases as well.
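A sketch of the fitting procedure is given below, using the simple Langevin (Ornstein-Uhlenbeck) form of the variance plus a constant noise floor in place of the full Clercx-Schram expression; the fluid density and viscosity are fixed, leaving temperature, radius, and noise floor as the free parameters, as in our analysis. The relative weighting (sigma=data) keeps the fit well-conditioned across the large dynamic range.

```python
import numpy as np
from scipy.optimize import curve_fit

kB, rho, eta = 1.380649e-23, 1.06e3, 1.0e-3   # fixed fluid properties (SI units)

def model_var(tau, T, r, floor):
    # 2D Ornstein-Uhlenbeck variance plus a constant localization-noise floor;
    # a stand-in for the full Clercx-Schram form used for the real data
    m_f = (4.0 / 3.0) * np.pi * r**3 * rho
    M = 1.5 * m_f                              # density-matched bead: m + m_f/2
    gamma = 6.0 * np.pi * eta * r
    tp = M / gamma
    return (4.0 * kB * T / gamma) * (tau - tp * (1.0 - np.exp(-tau / tp))) + floor

tau = np.logspace(-5, -1, 50)
data = model_var(tau, 298.0, 21.8e-6, 2.0e-18)        # synthetic "measurement"
popt, _ = curve_fit(model_var, tau, data,
                    p0=(290.0, 20.0e-6, 1.0e-18), sigma=data)
```

This OU form reproduces the correct ballistic ($2k_BT t^2/M$ per the two dimensions) and diffusive ($4k_BT t/\gamma$) limits, though not the gentler dense-fluid crossover.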
\begin{figure}[t]
\centerline{\includegraphics[scale=0.70]{Allfig.pdf}}
\caption{Top: The noise-floor-adjusted variance for 18 different videos. Inset: The fitted temperature, test particle radius, and noise floor for the different trials. The black line shows the mean values with a gray standard deviation. Bottom: Residual percentages showing by what percent the measurement deviates from the individual fits for each of the corresponding variances shown above.}
\label{residpPlot}
\end{figure}
We perform similar experiments in a Maxwell fluid created with a solution of CTAC and water as described above. As in the case of water, we see a minimum noise floor at short times and a long-time drift in the MSD. The drift in the measurement is removed by using the variance as described above, and the noise floor is estimated and subtracted as shown in figure \ref{ctacTrl5msd} for a representative trial. In total, 30 independent measurements were made with this Maxwell fluid.
\begin{figure}[t]
\centerline{\includegraphics[scale=1.05]{CTACmsdTrl5pasymWord.pdf}}
\caption{The MSD and adjusted variance for a particle moving in a Maxwell fluid. The black dots show the noise floor subtracted variance. In this figure the noise floor used is $2.2883\times10^{-18}$ m$^2$. The red lines show the predicted theoretical asymptotes \cite{van_zanten_brownian_2000} for the ballistic (solid) and plateau (dashed) regimes. The blue lines show the observed asymptotes for the ballistic (solid) and plateau (dashed) regimes. Inset: The Cole-Cole plot for CTAC as measured with a conventional rheometer. The red curve is a fit to the Maxwell function demonstrating a measured plateau modulus of 5.72 Pa.}
\label{ctacTrl5msd}
\end{figure}
The plotted variance for a Maxwell fluid has two notable features. 1) At short times the motion is clearly ballistic. The best fit prefactor for the asymptote is, however, considerably lower than the one predicted by either the ideal gas or dense fluid models, corresponding to an effective mass six times larger than the particle's mass, or an entrained region with a radius of 39.6 $\mu$m, compared to the reported radius of 21.8 $\mu$m. This increased effective size of the particle can perhaps be understood as a result of the fact that the surrounding fluid contains a network of worm-like micelles. The test particle impinges upon the network of intertwined micelles and pulls some of them along, thus increasing the particle's effective mass. Alternatively, the surface of the particle may actually attract the micelles, which would increase the effective mass as well. However, due to the presence of salt in this solution, any interaction between the particle and the micelles must necessarily be small. 2) The variance shows a clear secondary plateau which is independent of the noise floor. This plateau is characteristic of thermally damped motion, consistent with a Maxwell fluid's predicted behavior at high frequencies. Examining the best fit asymptotes to the plateau regime, we find an average plateau modulus over all measurements of 16 $\pm$ 2 Pa. This is almost a factor of 3 larger than the rheometer-measured value of 5.7 Pa.
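The two radii quoted here are mutually consistent: for comparable particle and fluid densities, a six-fold effective mass corresponds to an entrained sphere with $6^{1/3}$ times the particle radius. A quick check:

```python
# Consistency check of the numbers quoted above: a six-fold effective
# mass corresponds (assuming comparable particle and fluid densities) to
# an entrained sphere whose radius is 6^(1/3) times the particle radius.
a = 21.8e-6                  # reported particle radius, m
factor = 6.0                 # effective-mass enhancement from the fit
a_entrained = a * factor ** (1.0 / 3.0)
print(f"entrained radius ~ {a_entrained * 1e6:.1f} um")  # ~39.6 um, as quoted
```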
The Cole-Cole plot (inset to Figure \ref{ctacTrl5msd}) for this fluid shows it to be a perfect Maxwell fluid when measured on a conventional rheometer; however, this deviates from the observed microscopic behavior. These results demonstrate that at the short time and length scales that our technique probes, the physics governing this fluid is in fact significantly more complicated than that of a simple Maxwell fluid model. The displacement scales probed with our technique are just under 4 orders of magnitude smaller than those accessible to a rheometer, and the time scales are two orders of magnitude smaller.
In this experiment, we resolve the functional form of the ballistic crossover, revealing the fundamental length and time scales between individual and collective interactions in both Newtonian and Maxwell fluids. In so doing we have created a microscale first-principles thermometer based on the kinetic theory definition of temperature. We have demonstrated the validity of this approach by the extremely precise agreement between our results and theoretical models for motion in dense Newtonian fluids. We have experimentally tested the accuracy of Maxwell fluid Langevin equation solutions and found them to be wanting in accurately describing real materials. Asymptotically, we see a clear need for the addition of an effective mass term. More troublingly, the plateau values as measured with this method are markedly different from those found with a conventional rheometer. This difference may be a sign of a shift in behavior between the microscale addressed by our measurement and the macroscale measurement performed with a rheometer, suggesting that materials which appear Maxwell at large length scales may be more complicated at small lengthscales. Alternatively, this result could be indicative that the assumptions used in deriving the asymptotic behavior of the model need to be further modified. Our technique provides an independent method for testing models for the microscopic structure of fluids and the accompanying macroscopic fluid constants. In the future, this method promises to be useful in measuring multiple transitions between motion regimes in viscoelastic materials, an area where laser traps have difficulty because of the effects of confinement \cite{grimm_brownian_2011}. This method will also enable detailed studies of the influence of long range interactions, such as wall effects, in an interaction free manner \cite{felderhof_effect_2005}. 
As such, high speed single particle tracking promises to become an important tool in the study of the fundamental behavior of liquids.
We thank Raghu Parthasarathy, Tristan Ursell, and Mike Taormina for helpful discussions, Travis Walker and Britany Swann for help with the rheometer, and the University of Oregon machine and electrical shop staff. This work was supported by National Science Foundation (NSF) Career Award DMR-1255370.
\section{Introduction}
\seclab{intro}
The lunar Askaryan technique began with Askaryan's predictions for coherent radio--microwave radiation arising from a build-up of a negative charge excess in particle cascades, and his identification of the lunar regolith as a suitable target material due to its relative radio transparency \cite{askaryan1962}. Its modern experimental incarnation was first proposed by \cite{dagkesamanskii1989}, who noted that by using ground-based radio-telescopes, the total effective area of the visible lunar surface --- approximately $20$ million km$^2$ --- could be utilised as a particle detector. The large observation distance (mean of $3.8\times10^{5}$\,km) however imposes a high detection threshold, meaning the technique is ideally suited to targeting the rarest ultra-high-energy (UHE; $\gtrsim 10^{18}$\,eV) cosmic rays (CR) and neutrinos. Observations of UHE CR are motivated by the as-yet unknown origin of these extreme particles, while UHE neutrinos are predicted from specific theories of topological defects and super-heavy dark matter \cite{GU,KachRev,SHDM1,SHDM2}. It should be noted that efforts to observe the lunar regolith with a lunar orbiter are ongoing \cite{LORD} --- however, both the bulk of experimental efforts, and these proceedings, concentrate on the ground-based method.
Since the first search for lunar Askaryan pulses at Parkes in 1995 \cite{Parkes95}, several experiments have been performed, at Goldstone (`GLUE'; \cite{GLUE}); Kalyazin (`RAMHAND'; \cite{beresnyak2005}); the VLA (`RESUN'; \cite{jaeger2010}); Lovell (`LaLUNA'; \cite{spencer2010}); Westerbork (`NuMoon'; \cite{buitink2010}); and with ATCA and Parkes (`LUNASKA'; \cite{LUNASKA_atca,LUNASKA_parkes,LUNAKSA_Parkes_limit}).
While these experiments have produced a number of limits on high-energy particle fluxes, only the diffuse flux limits from NuMoon \cite{buitink2010}, and LUNASKA limits on Centaurus A \cite{LUNAKSA_Parkes_limit,LUNASKA_cena}, are competitive against other experiments, and then only at extremely high energies ($\gtrsim 10^{22}$~eV). This is due to two factors: no observation has yet exceeded $200$~hr, whereas most competing particle-detection experiments operate continuously; and because detection thresholds have mostly been near $10^{21}$\,eV.
The current experimental emphasis is therefore on performing long-duration observations to limit the UHE neutrino flux, and on increasing sensitivity to detect the known flux of UHE CR. Unlike neutrinos, most of which will penetrate too deeply into the Moon to be detected, and deposit only $\sim 20$\% of their energy in detectable hadronic cascades, cosmic rays will interact immediately at the surface with $100$\% energy deposition. While it was long thought that formation zone effects would suppress the Askaryan emission from near-surface cascades and make cosmic rays undetectable with the technique, the rapid rise and fall of the excess charge will still result in radiation regardless of the presence of a dielectric \cite{endpoints, numoon_cr}. Furthermore, future experiments --- in particular, with the Square Kilometre Array --- promise the order-of-magnitude increase in experimental sensitivity required to detect the known flux of cosmic rays, potentially allowing for arrival-direction studies.
In this overview, the main emphasis is on the planned observation strategy and simulated sensitivity of the Square Kilometre Array to UHE particles. The specific plans of the SKA's High Energy Cosmic Particles Focus Group (HECP FG\footnote{http://astronomers.skatelescope.org/home/focus-groups/high-energy-cosmic-particles/}) for observations with the low-frequency component of SKA Phase 1 will be described --- prospects for future observations with both SKA Phase 2 and other instruments will also be discussed.
\section{The Square Kilometre Array}
\seclab{sec:ska}
The Square Kilometre Array (SKA) is a radio telescope project consisting of three separate arrays covering different frequency ranges.\footnote{www.skatelescope.org} SKA-low ($50$--$350$\,MHz) will be built in Australia, while SKA-mid dishes ($\gtrsim 1$\,GHz) and an aperture array (to cover the intervening frequencies) will be built in South Africa and nearby countries. Construction of Phase $1$ of the SKA will begin shortly, with a target completion date of 2023; the full instrument in Phase 2 is targeted for completion in 2030. Phase 1 of SKA-low (SKA1-low) will consist of $131,072$ log-periodic dipole antennas arranged in $512$ stations of $256$ antennas each \cite{SKA_baseline_design_v2}, with a large central concentration of detectors over a $\sim$1\,km$^2$ area.
The science goals of the SKA are diverse, and are broadly categorised into Science Working Groups. The SKA is designed primarily as an imaging telescope, but the ability to observe in other modes --- pulsar searches and timing, and transient studies --- is also part of standard design features. In order to enable the SKA to perform lunar Askaryan observations, members of most prior experimental collaborations have formed the SKA's HECP Focus Group, together with groups interested in performing observations of extensive air showers \cite{ska_eas}. This group has studied the sensitivity of the SKA when using the lunar technique, with the main science case being published in the latest edition of the SKA Science book \cite{ska_lunar}. As a result of these studies, which are described further in \secref{sec:simulations}, the group has decided to focus on SKA-low. More importantly, the group has determined the required engineering changes in order to allow Phase 1 of SKA low (`SKA1-low') to perform lunar Askaryan observations, and the target observation mode is presented in \secref{sec:phase1}.
\section{Simulating the sensitivity of the SKA}
\seclab{sec:simulations}
The first detailed estimates of the sensitivity of the SKA to UHE particles were made by \cite{JamesProtheroe09a}, and have undergone continuous updates as the design of the instrument became more concrete. The current expectation is a sensitivity of $500$ m$^2$K$^{-1}$ for SKA1-low and perhaps $4000$ m$^2$K$^{-1}$ for Phase 2. This sensitivity is applicable to the detection of Askaryan pulses only when all telescope elements are correctly phased (added coherently in the voltage domain) in the direction of the pulse, and the signal power is contained to only one polarisation channel.\footnote{Equivalently, the frequency-dependence of the polarisation can be predicted, such that a signal search competes against the effective noise of one polarisation channel only.}
This `raw' sensitivity however tends to be limited by the sensitivity that can be brought to bear for online triggering, the details of which depend on the exact experimental setup. Such effects include self-vetoing due to strong signals exceeding noise-veto criteria, imperfect coverage of the lunar disk by the trigger beams, approximate dedispersion, artefacts of (analogue or digital) signal-processing, and non-optimal search algorithms --- see \cite{singh2012, LUNASKA_atca, Bray_revised_2016, Bray_these} for further details. Only the self-veto and lunar coverage effects are reproduced here.
In the case of SKA1-low, $16$ coherent beams will be available for real-time searches (see \secref{sec:phase1}). The best trade-off between beam width and lunar coverage is expected by beamforming with all core stations (approximately $50$\% of the total), here modelled using a 2D Gaussian distribution of antenna density with 1/e fall-off radius $240$\,m.
Since the sky temperature rises rapidly at low frequencies for all pointing positions, observations in a $250$\,MHz band above $100$\,MHz are considered, although SKA1-low will be capable of observing over all $50$--$350$\,MHz simultaneously.
A preliminary optimisation of the beam pointing positions for the Moon at zenith, and assuming a frequency-independent sky plus system noise core sensitivity of $250$\,m$^2$\,K$^{-1}$, indicates that beams placed $0.9^{\prime}$ inside the limb would be optimal (see \figref{fig:aeff}, left). The event rate is evaluated assuming independently polarised triggers on each beam, for a total of $32$ independent triggers.
\begin{figure}[h]
\centering
\includegraphics[width=0.49\textwidth]{pointing_radius_constE.pdf} \includegraphics[width=0.49\textwidth]{sixteen_beams.pdf}
\caption{(left) Effective area for a single SKA1-low beam (see text) as a function of fractional lunar pointing radius, for three different cosmic ray energies $E_{\rm CR}$. (right) Effective aperture $A_{\rm eff}$ of all beams of SKA2-low and SKA1-low to cosmic rays and neutrinos as a function of their energy. \cite{ska_lunar}.
} \figlab{fig:aeff}
\end{figure}
In the case of Phase 2, $100$\% lunar coverage at the sensitivity of the full array ($4000$\,m$^2$K$^{-1}$) is assumed. The chosen threshold of $10 \sigma$ corresponds to a final false positive rate of less than once per century.
\section{Results for cosmic rays and neutrinos}
Using these values, the resulting estimates of SKA1-low and SKA2-low sensitivity to UHE CR and neutrinos are shown in \figref{fig:aeff} (right). While SKA1-low will be sensitive to cosmic rays below $10^{20}$\,eV, the effective aperture in this range is significantly less than what is obtained by the Pierre Auger Observatory \cite{Abraham2010}. Using an approximate parameterisation of the cosmic ray spectrum, and a lunar visibility of $29\%$ above an elevation angle of $30^{\circ}$ from the SKA-low site, it is estimated that SKA1-low could detect of order $1$ CR yr$^{-1}$ when observing whenever the Moon is visible ($\sim$2500\,hr\,yr$^{-1}$) \cite{james_icrc_2015}, which may allow for a first detection. The estimated rate for Phase 2, however, will be much higher: of order $100$\,yr$^{-1}$, of which half will be above the 56\,EeV threshold found by the Pierre Auger Observatory to be most significant for anisotropy studies \cite{PAO07C1,Auger_cena_2015}. The sensitivity of SKA-low in either phase to UHE $\nu$, however, will not be sufficient to detect the estimated flux of cosmogenic neutrinos resulting from cosmic ray interactions with background photon fields.
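The observing-time figure used in this estimate is simple duty-cycle arithmetic, sketched below; the event rates themselves are quoted from the simulations, not re-derived here:

```python
# Duty-cycle arithmetic behind the observing time quoted in the text:
# a 29% lunar visibility (above 30 degrees elevation) over a full year
# gives roughly the ~2500 h of Moon-up time per year assumed for the
# rate estimates. The event rates themselves are taken from the text.
hours_per_year = 365.25 * 24.0
visibility = 0.29
moon_up = visibility * hours_per_year
print(f"Moon visible above 30 deg: ~{moon_up:.0f} h per year")
```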
The clear goal therefore of lunar Askaryan observations with SKA Phase 2 will be to perform directional studies of cosmic rays at the highest energies. Preliminary estimates indicate an angular resolution of order $5^{\circ}$ will be achievable \cite{james_icrc_2015}, which is sufficient to study objects such as Centaurus A. No resolution on primary composition however is anticipated, and energy resolution may be poor, so SKA observations should not be seen as a replacement for surface arrays such as the Pierre Auger Observatory or Telescope Array.
Observations with SKA1-low will, however, be able to probe beyond-the-standard-model physics at energies above $10^{20}$\,eV. These observations could also be performed through very long ($\gg 1,000$\,hr) observations with less-sensitive (and hence less-subscribed) instruments such as AuScope at $2.3$\,GHz \cite{Bray_revised_2016}, or LOFAR observations in UHEP mode \cite{Winchen}. A first detection of cosmic rays may also be possible with instruments such as FAST \cite{FAST} or Parkes \cite{Bray_revised_2016}.
\section{Implementation with Phase 1 of SKA-low}
\seclab{sec:phase1}
\begin{figure}[h]
\centering
\includegraphics[width=0.6\textwidth]{signal_path_sideways.pdf}
\caption{Sketch of the proposed signal path for SKA1-low lunar Askaryan observations, colour-coded by observation mode. Red: part of the standard imaging pipeline; blue: added functionality for pulsar and transient observations; green: specialised lunar pulse detection hardware; orange: required interfaces.}
\figlab{fig:signal_path}
\end{figure}
A sketch of the proposed SKA-lunar signal path for SKA1-low is shown in \figref{fig:signal_path}, based on the updated SKA1-low baseline design \cite{SKA_baseline_design_v2}. In standard operations, dual linear polarisations from all 256 elements in each station will be digitised, passed through a polyphase filter (PPF) to split them into $300$ $1$\,MHz channels, and formed into a `station beam'. These station beams can then be passed to the correlator for imaging (not shown), the pulsar timing array beamformer (PBF) to form `array' beams, and/or buffered to memory for transient detection. The PBF will be able to produce 16 dual polarisation array beams at $300$\,MHz bandwidth. In order to search for lunar Askaryan pulses, these beams will be directed at the Moon, and their output passed to dedicated hardware.
To correct for ionospheric delays, each sub-band will undergo a bulk de-dispersion according to the best real-time STEC prediction, and then be passed to an inverse PPF. This will operate on each station beam to transform the signal back to the time-domain. Note that the bulk dedispersion, and also any noise-whitening, can be performed for free by modifying the initial inverse PPF coefficients.
In order to search for a lunar Askaryan signal, each beam will be copied to many separate streams, with each applying a different fine dedispersion to account for errors in the initial bulk dedispersion. Each of the subsequent signals will then be searched for possible lunar pulses.
The optimal search method for a coherent signal against white noise uses a matched filter. However, even if the pulse shape can be perfectly predicted, the signal phase will vary with respect to the sampling phase. Therefore, a search over multiple signal shapes must be performed. The trigger rate from pure noise events is expected to be in the range 0.1--1\,Hz.
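As an illustration of this search step, the sketch below correlates an assumed pulse template against a noisy time-domain stream and flags samples whose filter output exceeds a multiple of its RMS. The template, sampling rate, injected amplitude, and threshold are illustrative assumptions, not SKA design values:

```python
import numpy as np

# Minimal matched-filter trigger sketch (illustrative assumptions only):
# a known pulse shape is correlated against the dedispersed time-domain
# stream, and samples where the filter output exceeds a multiple of its
# RMS raise candidate triggers.
rng = np.random.default_rng(0)

fs = 5e8                                       # assumed sampling rate, Hz
n = 2 ** 16                                    # stream length in samples
template = np.array([0.2, 0.7, 1.0, 0.6, 0.1])  # assumed pulse shape
template /= np.sqrt(np.sum(template ** 2))      # unit-energy template

stream = rng.standard_normal(n)                # pure receiver noise
stream[1000:1005] += 10.0 * template           # inject one strong test pulse

out = np.correlate(stream, template, mode="valid")
triggers = np.flatnonzero(np.abs(out) > 6.0 * np.std(out))
print("candidate triggers at t =", triggers / fs, "s")  # near the injected pulse
```

In a real system the same stream would be filtered once per dedispersion trial and per template, and the threshold tuned to meet the target noise-trigger rate.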
The final stage will be an anti-coincidence test over all $16$ beams to veto RFI events. Once a trigger is generated that passes the veto, both the array beams, and the buffered station beams, will be stored offline for further analysis. Further details of the proposed implementation of these components are given in \cite{lunar_technical}.
The doubled effective area available in offline processing through buffered data from non-core stations will be sufficient to discriminate these false positives from true lunar events --- which necessarily implies that the final sensitivity will be trigger-limited. Returning $10$\,$\mu$s per trigger is expected to be sufficient to capture a fully dispersed pulse, account for arrival-time uncertainties on long baselines, and provide sufficient off-signal data to estimate the noise level. The total data volume, assuming station-beam data at 8-bit resolution, will be $8$\,MB per trigger.
\section{Conclusion}
Using the lunar Askaryan technique, the low-frequency component of the Square Kilometre Array in Phase 2 will have the ability to detect an unprecedented number of ultra-high-energy cosmic rays, in order to perform directional studies and potentially resolve the cosmic ray origin. Almost all the necessary technical development has been performed by the member groups of the High Energy Cosmic Particles Focus Group within the SKA Organisation. Observations with LOFAR's UHEP mode, and with Phase 1 of SKA-low, will provide both critical technical development, and important limits on beyond-the-standard-model physics. There are good prospects for a first detection with SKA1-low or other instruments, and for very-long-duration observations specifically to test exotic models of extremely high-energy particles.
\section{Introduction}
When an unknown model parameter is estimated from experimental data,
the estimation always yields a value, be the sample size large or small.
We would, however, expect a more accurate value from a larger sample.
A {\em confidence interval} measures this ``accuracy'' in some way.
As ``accuracy''
can be defined in different ways, there are different approaches to the
construction of confidence intervals.
The most common approach is the {\em frequentist approach}, which is
based on the {\em coverage probability} and is taught in introductory
texts on statistics \cite{fahrmeir04}. It assumes the unknown parameter
to be known and then chooses an interval around the estimator that includes
the parameter with a given probability (typically $95\%$). The
{\em evidence based approach} utilizes the {\em likelihood ratio} and
chooses an interval wherein the likelihood function is greater than
a given threshold (typically $1/8$ of its maximum value) \cite{blume02}.
The {\em Bayesian approach} treats the unknown parameter as a random
variable and estimates its distribution from the observation. This leads
to the {\em highest posterior density} interval \cite{turkkan93}.
Both for binomial proportions and for mean values, simple formulas or algorithms
to compute confidence intervals can be given. A possible evaluation criterion
for the obtained intervals is the coverage probability. One might think
that this criterion favors the frequentist approach, but even for
this approach, the coverage probability may vary considerably, depending
on the true parameter value. For non-symmetric intervals, another evaluation
criterion is the interval length because, from two intervals with the same
coverage probability, the shorter one is preferable.
Beyond the binomial proportion and the mean value, there is no standard
formula for computing a confidence interval. For maximum likelihood
estimators, it is however known that they are asymptotically normal,
provided the likelihood function is sufficiently smooth \cite{greene00}.
In these cases, the confidence interval for the mean value can be used.
This requires an estimation of the estimator variance, which can be done
in two ways: the diagonal elements of the inverted {\em Hessian matrix}
of the log-likelihood function, or the {\em Jackknife} variance.
For non-smooth likelihood functions or for arbitrary estimators, only
the {\em bootstrap} method is universally applicable. This method
generates new data from the observations by random sampling with
replacement and estimates the confidence interval from the sampled data.
In principle, the bootstrap method is always applicable, even in cases
when the other methods work, but in the experiments described in this
report, the bootstrap method had a poorer coverage probability than the
classic confidence interval, and it should therefore only be used when
other methods cannot be applied.
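As a minimal illustration of the method, the sketch below computes a percentile-bootstrap interval for the sample median, an estimator for which no simple closed-form interval exists; the data set and replicate count are arbitrary illustrative choices:

```python
import numpy as np

# Minimal percentile-bootstrap confidence interval for the sample median:
# resample the data with replacement, re-apply the estimator, and read
# the interval off the empirical quantiles of the resampled estimates.
rng = np.random.default_rng(1)

data = rng.exponential(scale=2.0, size=50)   # some observed sample
B = 2000                                     # number of bootstrap replicates

boot = np.array([np.median(rng.choice(data, size=data.size, replace=True))
                 for _ in range(B)])
lo, hi = np.percentile(boot, [2.5, 97.5])    # 95% percentile interval
print(f"median = {np.median(data):.2f}, 95% bootstrap CI = [{lo:.2f}, {hi:.2f}]")
```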
This report is organized as follows: section
\ref{sec:grundbegriffe} defines the basic terms estimator,
coverage probability, likelihood ratio, and posterior density.
In sections \ref{sec:p} and \ref{sec:mu}, the different approaches
are applied to the binomial proportion and to the mean value.
Sections \ref{sec:ml} and \ref{sec:bootstrap} describe construction
methods for confidence intervals for maximum likelihood estimators
and for arbitrary estimators.
Section \ref{sec:vergleich} presents Monte Carlo experiments that
evaluate the coverage probability of the different confidence intervals.
The final section makes recommendations which confidence interval should
be used in which case.
\section{Basic terms}
\label{sec:grundbegriffe}
Let the probability distribution of a random variable $X$ be known except for
the value of some parameter $\theta$. In other words: the shape of
the probability density $f_\theta(x)$ is known, but not the value of
the parameter $\theta$. In the most general case, $\theta$ is a vector and
represents several parameter values. If $X$ is normally distributed, for
instance, then $\theta$ represents two parameters: $\theta=(\mu,\sigma^2)$.
An {\em estimator} is a function to estimate the unknown parameter from
independent observations $x_1,\ldots,x_n$ of the random variable $X$.
The particular estimated value is denoted with $\hat{\theta}$:
\begin{equation}
\label{eq:theta}
\hat{\theta} = \hat{\theta}(x_1,\ldots,x_n)
\end{equation}
Simple examples are the relative frequency as an estimator for a binomial
proportion, or the statistical average as an estimator for the parameter $\mu$
of the normal distribution.
\subsection{Maximum likelihood (ML)}
\label{sec:grundbegriffe:ml}
The {\em maximum likelihood principle} is a general method to obtain
estimators \cite{greene00}. It chooses the parameter $\theta$ in such a
way that the {\em likelihood function} $L$ or\footnote{Note that $L(\theta)$ and $\log L(\theta)$ have their maximum at the same argument, because the logarithm is a monotonic function.}
the {\em log-likelihood function} $\ell$ is maximized:
\begin{subequations}
\label{eq:L}
\begin{eqnarray}
\label{eq:L:a}
L(\theta) & = & \prod_{i=1}^n f_\theta(x_i) \\
\label{eq:L:b}
\ell(\theta) & = & \log L(\theta) = \sum_{i=1}^n \log f_\theta(x_i)
\end{eqnarray}
\end{subequations}
Loosely speaking, $L(\theta)$ is a measure for the probability of the
observation $x_1,\ldots,x_n$ under the assumption that the true parameter
value is $\theta$. If $\theta=(\theta_1,\ldots,\theta_t)$ and
$\ell(\theta)$ is differentiable, the maximum likelihood principle yields
$t$ equations for the determination of the $t$ parameters
$\theta_1,\ldots,\theta_t$:
\begin{equation}
\label{eq:ML}
\frac{\partial}{\partial\theta_i} \ell(\theta) = 0 \quad\mbox{for }i=1\ldots,t
\end{equation}
Maximum likelihood estimators have a number of attractive properties like
asymptotic normality under quite general conditions. This will play a role
in section \ref{sec:ml}. In many cases, the equations (\ref{eq:ML}) cannot
be solved in closed form, thereby making a numerical maximization
of the log-likelihood function necessary. If this is not possible, one might
try other methods that possibly yield estimators in a simpler way, like
the {\em method of moments} or its generalization \cite{zsohar10}.
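As an illustration of such a numerical maximization, the following sketch recovers the rate parameter of an exponential distribution by a golden-section search on the log-likelihood; the exponential case is chosen because the closed-form solution $\hat{\lambda} = 1/\bar{x}$ is available as a check:

```python
import numpy as np

# Numerical maximization of a log-likelihood, sketched for the exponential
# rate parameter (which has the closed-form ML solution lambda = 1/mean,
# so the numerical answer can be verified).

def loglik(lam, x):
    # log-likelihood of i.i.d. exponential observations
    return x.size * np.log(lam) - lam * np.sum(x)

def golden_max(f, a, b, tol=1e-10):
    # golden-section search for the maximum of a unimodal function on [a, b]
    invphi = (np.sqrt(5.0) - 1.0) / 2.0
    while b - a > tol:
        c, d = b - invphi * (b - a), a + invphi * (b - a)
        if f(c) > f(d):
            b = d          # maximum lies in [a, d]
        else:
            a = c          # maximum lies in [c, b]
    return (a + b) / 2.0

rng = np.random.default_rng(2)
x = rng.exponential(scale=0.5, size=1000)        # true lambda = 2
lam_hat = golden_max(lambda l: loglik(l, x), 1e-6, 100.0)
print(f"numerical ML: {lam_hat:.4f}, closed form 1/mean: {1.0 / np.mean(x):.4f}")
```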
\subsection{Coverage probability}
\label{sec:grundbegriffe:pcov}
An estimation function (\ref{eq:theta}) yields only a single value
and is therefore called a {\em point estimator}.
A {\em confidence interval}, on the contrary, gives a region
$[\theta_l, \theta_u]$ wherein the parameter falls with high probability.
The boundaries $\theta_{l,u}$ of the interval depend on the observed data
$x_1,\ldots,x_n$ and are thus random variables. The {\em frequentist approach}
is based on the following consideration: if $\theta$ is the true parameter
value, then it ideally should fall into the confidence interval with
a pre-defined {\em coverage probability} $(1-\alpha)$:
\begin{equation}
\label{eq:Pcov}
P_{cov}(\theta) = P(\theta\in[\theta_l,\theta_u]) = 1-\alpha
\end{equation}
Unfortunately, Eq.~(\ref{eq:Pcov}) cannot be used to determine $\theta_l$ and
$\theta_u$, because the unknown $\theta$ is part of the equation. This dilemma
can be resolved when the problem is re-interpreted as a hypothesis testing
problem: under the hypothesis $\theta\notin[\theta_l,\theta_u]$,
the probability that the estimator deviates from $\theta$ more than the
observed value $\hat{\theta}$ is less than $\alpha$. Or, in hypothesis
testing lingo: if $\theta$ were one of the interval boundaries, then
everything beyond $\hat{\theta}$ would fall into the rejection region.
When the probability $\alpha$ is distributed evenly among small and large
deviations, the formal definition of the {\em frequentist confidence
interval} becomes\footnote{This definition reads slightly different from the definition given by DiCiccio \& Efron \cite{diciccio96}: Eq.~(\ref{eq:CIfreq:b}) is identical, but in Eq.~(\ref{eq:CIfreq:a}) they write ``$>$'' instead of ``$\geq$''. This makes no difference for continuous random variables, but it would treat the two boundaries differently for discrete random variables.}:
\begin{subequations}
\label{eq:CIfreq}
\begin{eqnarray}
\label{eq:CIfreq:a}
P_{\theta=\theta_l}(\hat{\theta}\geq\theta_0) & = & \alpha/2 \quad\mbox{and}\\
\label{eq:CIfreq:b}
P_{\theta=\theta_u}(\hat{\theta}\leq\theta_0) & = & \alpha/2
\end{eqnarray}
\end{subequations}
where $\theta_0$ is the observed value for the estimator
and $P_{\theta=\theta_{l,u}}$ is the probability under the assumption
that the true parameter value is the lower or upper boundary, respectively.
\begin{figure}[t]
\includegraphics[width=\columnwidth]{pcov-binom-clopper}
\caption{\label{fig:pcov-binom-exact}Coverage probability $P_{cov}$ of the ``exact'' confidence interval for a binomial proportion after Eq.~(\ref{eq:CIfreq}) as a function of the true parameter $p$ for $n=100$ and $\alpha=0.05$.}
\end{figure}
Although the confidence interval obtained by solving Eq.~(\ref{eq:CIfreq})
for $\theta_l$ and $\theta_u$ is guaranteed to have at least
$1-\alpha$ coverage probability independent from $\theta$, there are
two hitches: the example in Fig.~\ref{fig:pcov-binom-exact} shows that
even an ``exact'' confidence interval directly computed with
Eq.~(\ref{eq:CIfreq}) can have a coverage probability that is too large
for most values of $\theta$, which means that the interval is too wide.
Moreover, the probability is often known only approximately, or
Eq.~(\ref{eq:CIfreq}) can only be solved asymptotically, which leads
to an approximate confidence interval, which can have $P_{cov}(\theta)$
less than $1-\alpha$.
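For the binomial proportion, Eq.~(\ref{eq:CIfreq}) can be solved directly by bisection on the binomial tail probabilities, which yields the ``exact'' interval, and the coverage probability of Fig.~\ref{fig:pcov-binom-exact} can then be evaluated exactly. A sketch:

```python
from math import comb

# Solving the frequentist defining equations for a binomial proportion by
# bisection on the binomial tail probabilities gives the exact interval;
# the exact coverage probability at a true p then follows by summing the
# binomial probabilities of all outcomes k whose interval contains p.

def binom_pmf(j, n, p):
    return comb(n, j) * p**j * (1 - p)**(n - j)

def exact_interval(k, n, alpha=0.05, iters=60):
    def solve(f, target):
        # bisection for f(p) = target, f monotonically increasing in p
        lo, hi = 0.0, 1.0
        for _ in range(iters):
            mid = (lo + hi) / 2.0
            if f(mid) < target:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2.0
    tail_ge = lambda p: sum(binom_pmf(j, n, p) for j in range(k, n + 1))
    tail_le = lambda p: sum(binom_pmf(j, n, p) for j in range(0, k + 1))
    p_l = 0.0 if k == 0 else solve(tail_ge, alpha / 2.0)
    p_u = 1.0 if k == n else solve(lambda p: -tail_le(p), -alpha / 2.0)
    return p_l, p_u

def coverage(p, n, alpha=0.05):
    # exact coverage probability at the true parameter value p
    cov = 0.0
    for k in range(n + 1):
        p_l, p_u = exact_interval(k, n, alpha)
        if p_l <= p <= p_u:
            cov += binom_pmf(k, n, p)
    return cov

p_l, p_u = exact_interval(17, 100)
print(f"17/100 successes, 95% interval: [{p_l:.4f}, {p_u:.4f}]")
print(f"coverage probability at p = 0.3: {coverage(0.3, 100):.4f}")
```

Evaluating `coverage` over a grid of $p$ values reproduces the oscillating, always-at-least-$0.95$ curve of Fig.~\ref{fig:pcov-binom-exact}.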
\subsection{Likelihood ratio}
\label{sec:grundbegriffe:lr}
A different approach to obtain a confidence interval is based on the
likelihood function (\ref{eq:L:a}). The ML estimator $\hat{\theta}$
chooses $\theta$ such that it maximizes the probability of the observed data.
However, other values of $\theta$ lead to a high probability of the observation,
too. It is thus natural to define an interval wherein the ratio
$L(\hat{\theta})/L(\theta)$ is greater than some threshold. To distinguish
this interval from the frequentist confidence interval, it is called the
{\em likelihood ratio support interval} $[\theta_l,\theta_u]$:
\begin{equation}
\label{eq:LR}
\frac{L(\theta)}{L(\hat{\theta})} \geq \frac{1}{K} \quad\mbox{for all }
\theta\in [\theta_l,\theta_u]
\end{equation}
where $\hat{\theta}$ is the ML estimator for $\theta$. A common choice for
$K$ is $K=8$ because, in the case of mean values, it leads to intervals
very close to the frequentist interval for $\alpha=0.05$
(see section \ref{sec:mu:lr}).
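For a binomial proportion, the boundaries of the support interval (\ref{eq:LR}) can be found by bisection on each side of the maximum likelihood estimate $\hat{p} = k/n$. A sketch:

```python
import math

# Numerical likelihood-ratio support interval for a binomial proportion:
# L(p)/L(p_hat) = 1/K is solved by bisection on each side of the maximum
# likelihood estimate p_hat = k/n (0 < k < n assumed).

def log_lr(p, k, n):
    # log of L(p)/L(p_hat) for the binomial likelihood
    p_hat = k / n
    ll = lambda q: k * math.log(q) + (n - k) * math.log(1.0 - q)
    return ll(p) - ll(p_hat)

def support_interval(k, n, K=8.0, tol=1e-10):
    target = -math.log(K)
    p_hat = k / n
    # left boundary: log_lr rises from -inf to 0 on (0, p_hat]
    lo, hi = 1e-12, p_hat
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        lo, hi = (mid, hi) if log_lr(mid, k, n) < target else (lo, mid)
    p_l = (lo + hi) / 2.0
    # right boundary: log_lr falls from 0 to -inf on [p_hat, 1)
    lo, hi = p_hat, 1.0 - 1e-12
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        lo, hi = (lo, mid) if log_lr(mid, k, n) < target else (mid, hi)
    p_u = (lo + hi) / 2.0
    return p_l, p_u

p_l, p_u = support_interval(17, 100)
print(f"1/8 support interval for 17/100: [{p_l:.4f}, {p_u:.4f}]")
```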
\subsection{Posterior density}
\label{sec:grundbegriffe:hpd}
A third approach to confidence interval construction tries to estimate
a probability density for $\theta$ on the basis of the observation $\hat{\theta}$.
The true parameter $\theta$ is here considered as a random variable, and
$p_\theta(\hat{\theta})$ is interpreted as a conditional probability density\footnote{Note that $\theta$ and $\hat{\theta}$ are continuous variables, so that their probability distribution is described by a density, here denoted with the lower case letter $p$.}
$p(\hat{\theta}|\theta)$, so that the posterior density $p(\theta|\hat{\theta})$ can be computed with Bayes' formula:
\begin{equation}
\label{eq:bayes}
p(\theta|\hat{\theta}) = \frac{p(\hat{\theta}|\theta)\cdot p(\theta)}{\int_{\mathbb{R}}p(\hat{\theta}|\tau)\cdot p(\tau)\,d\tau}
\end{equation}
Based on this density, the {\em highest posterior density (HPD) interval}
is defined as the region $[\theta_l,\theta_u]$ with highest probability
density values and a total probability of $(1-\alpha)$. Formally, this definition
leads to the coupled equations (see Fig.~\ref{fig:hpd})
\begin{subequations}
\label{eq:hpd}
\begin{eqnarray}
1-\alpha &=& \int_{\theta_l}^{\theta_u}\!\! p(\theta|\hat{\theta})\,d\theta
\quad\quad\mbox{and}\\
p(\theta_l|\hat{\theta}) &=& p(\theta_u|\hat{\theta})
\end{eqnarray}
\end{subequations}
\begin{figure}[t]
\centering\includegraphics[width=0.9\columnwidth]{hpdinterval}
\caption{\label{fig:hpd}Determination of the highest posterior density interval $[\theta_l,\theta_u]$ according to Eq.~(\ref{eq:hpd}).}
\end{figure}
Apart from the nuisance that this system of equations can only be solved
numerically, the HPD interval has a fundamental deficiency: to compute
$p(\theta|\hat{\theta})$ with Eq.~(\ref{eq:bayes}), it is necessary
to make an assumption about the ``a priori distribution'' $p(\theta)$
of the unknown parameter $\theta$, and this assumption is arbitrary.
Typically, $p(\theta)$ is chosen to be constant which implies that
nothing is known about the approximate location of $\theta$. Although
this assumption is rarely realistic in practical situations, this does
not necessarily mean that the HPD interval is bad. As we will see in the
next section, it can even have a good coverage probability.
\section{Relative frequencies}
\label{sec:p}
The relative frequency $\hat{p}$ is a ubiquitous estimator for a probability,
or a binomial proportion $p$. The probability distribution of $\hat{p}$
is exactly given by the binomial distribution. When an event has probability
$p$, the probability that it occurs $k$ times in $n$ independent trials is
\begin{equation}
\label{eq:binom}
P_p(k) = {n \choose k} p^k (1-p)^{n-k}
\end{equation}
The relative frequency $\hat{p}=k/n$ then has the probability
\begin{equation}
\label{eq:binom:phat}
P_p(\hat{p}=p_0) = {n \choose np_0} p^{np_0} (1-p)^{n(1-p_0)}
\end{equation}
Eq.~(\ref{eq:binom}) is the starting point for all confidence intervals of
the relative frequency.
\subsection{Frequentist interval for $\hat{p}$}
\label{sec:p-mu:frequentistisch}
\lstset{language=R,
basicstyle=\small \ttfamily,
literate={.help}{.help}5,
keywordstyle=\ttfamily,
frame=bottomline,
floatplacement=!t,
aboveskip=0pt,
belowskip=0pt,
captionpos=b
}
\begin{lstlisting}[float, caption=R implementation of the exact Clopper-Pearson confidence interval for the relative frequency after Eqs.~(\ref{eq:freq:pexakt}) \& (\ref{eq:freq:pexakt:0n})., label=lst:freq:pexakt]
ci.binom <- function(n, k, alpha) {
if (k == 0) {
p1 <- 0.0
p2 <- 1 - (alpha/2)**(1/n)
}
else if (k == n) {
p1 <- (alpha/2)**(1/n)
p2 <- 1.0
}
else {
helper <- function(p, k, n, val) {
return (pbinom(k, n, p) - val)
}
r <- uniroot(helper, k=(k-1),
n=n, val=1-alpha/2,
interval=c(0,1))
p1 <- r$root
r <- uniroot(helper, k=k,
n=n, val=alpha/2,
interval=c(0,1))
p2 <- r$root
}
return (data.frame(p1=p1, p2=p2))
}
\end{lstlisting}
Insertion of (\ref{eq:binom}) into Eq.~(\ref{eq:CIfreq}) yields the
following equations to determine boundaries $p_l$ and $p_u$:
\begin{subequations}
\label{eq:freq:pexakt}
\begin{eqnarray}
& 1-\mbox{pbinom}\left(k-1,n,p_l\right) = \alpha/2 & \\
\mbox{and} &
\mbox{pbinom}\left(k,n,p_u\right) = \alpha/2 &
\end{eqnarray}
\end{subequations}
where $k/n=\hat{p}$ is the observed relative frequency, and {\em pbinom}
is the R function for the cumulative distribution function (CDF) of
the binomial distribution. In the special cases $k=0$ or $k=n$, one of
the equations (\ref{eq:freq:pexakt}) does not have a solution because
$p_l$ and $p_u$ are restricted to the interval $[0,1]$. In these cases,
let $p_l=0$ ($k=0$) or $p_u=1$ ($k=n$), respectively. The other boundary
can be found analytically in these cases as
\begin{subequations}
\label{eq:freq:pexakt:0n}
\begin{eqnarray}
k=0 & \Rightarrow & [p_l,p_u] = [0, 1-\sqrt[n]{\alpha/2}] \\
k=n & \Rightarrow & [p_l,p_u] = [\sqrt[n]{\alpha/2}, 1]
\end{eqnarray}
\end{subequations}
In all other cases, Eq.~(\ref{eq:freq:pexakt}) must be solved numerically,
e.g., with the R function {\em uniroot}\footnote{It would also be possible to use the inverse of the incomplete beta function, because 1-{\em pbinom} can be expressed through this function (see \cite{abramowitz84} Eq.~26.5.7). The inverse of the incomplete beta function, however, must also be computed numerically.}.
The corresponding R code is given in listing \ref{lst:freq:pexakt}.
This interval is known as the {\em Clopper-Pearson} interval \cite{clopper34},
which is also implemented in the R function {\em binom.confint} from the
package {\em binom}, with the option {\em method='exact'}.
An approximate confidence interval is obtained when the binomial distribution
is replaced by the normal distribution, which is justified by the central limit
theorem. For large $n$, $\hat{p}$ is approximately normally distributed with
$\mu=p$ and $\sigma^2=p(1-p)/n$. With this approximation,
Eq.~(\ref{eq:CIfreq:a}) becomes
\begin{align}
& 1-\mbox{pnorm}\left(\hat{p},p_l,\sqrt{p_l(1-p_l)/n}\right) = \alpha/2 \nonumber \\
\Leftrightarrow\quad & \mbox{pnorm}\left(\frac{\hat{p}-p_l}{\sqrt{p_l(1-p_l)/n}},0,1\right) = 1-\alpha/2 \nonumber \\
\Leftrightarrow\quad & \frac{\hat{p}-p_l}{\sqrt{p_l(1-p_l)/n}} = z_{1-\alpha/2}
\label{eq:wilson:ansatz}
\end{align}
where {\em pnorm} is the R function for the CDF of the normal distribution,
and $z_{1-\alpha/2}=\mbox{qnorm}(1-\alpha/2)$ is the $(1-\alpha/2)$ quantile
of the standard normal distribution. The quadratic equation
(\ref{eq:wilson:ansatz}) and its analogous version for $p_u$ can be solved
analytically, thereby yielding the {\em Wilson interval}:
\begin{equation}
\label{eq:freq:pwilson}
\frac{1}{1+z^2/n}\left[\hat{p} + \frac{z^2}{2n} \pm z \sqrt{\frac{\hat{p}(1-\hat{p})}{n}+\frac{z^2}{4n^2}}\right]
\end{equation}
where $z=z_{1-\alpha/2}$, for the sake of brevity. In the comparative study
\cite{brown01}, Brown et al.~recommended the Wilson interval due to its
coverage probability. For large $n$, Eq.~(\ref{eq:freq:pwilson}) asymptotically
transforms into the classical {\em Wald interval} that is taught in
introductory text books:
\begin{equation}
\label{eq:freq:pwald}
\hat{p} \pm z_{1-\alpha/2} \sqrt{\hat{p}(1-\hat{p})/n}
\end{equation}
\begin{figure}[t]
\centering\includegraphics[width=0.9\columnwidth]{lr-plot}
\caption{\label{fig:lr-binom}Likelihood ratio $L(p)/L(\hat{p})$ of the binomial distribution for $n=40$ and $k=10$.}
\end{figure}
\subsection{Likelihood ratio for $\hat{p}$}
\label{sec:p:lr}
When the event of interest occurs $k$ times in $n$ trials, the likelihood
function is
\begin{equation}
L(p) = p^k(1-p)^{n-k}
\end{equation}
The relative frequency $\hat{p}=k/n$ is the ML estimator for $p$. The
likelihood ratio support interval therefore encompasses all $p$ with
\begin{equation}
\label{eq:lr:binom}
\frac{L(p)}{L(\hat{p})} = \frac{p^k(1-p)^{n-k}}{\hat{p}^k(1-\hat{p})^{n-k}}
\geq \frac{1}{K}
\end{equation}
A plot of the function on the left hand side is shown in
Fig.~\ref{fig:lr-binom}. Eq.~(\ref{eq:lr:binom}) must be solved numerically,
e.g., with the R function {\em uniroot}. A possible implementation is given
in listing \ref{lst:lr-binom}.
\lstset{language=R,
basicstyle=\small \ttfamily,
literate={.help}{.help}5,
keywordstyle=\ttfamily,
frame=bottomline,
floatplacement=!t,
aboveskip=0pt,
belowskip=0pt,
captionpos=b
}
\begin{lstlisting}[float, caption=R code that computes the likelihood ratio support interval for the relative frequency according to Eq.~(\ref{eq:lr:binom})., label=lst:lr-binom]
lr.binom <- function(n, k, K) {
helper <- function(p, n, k, K) {
return (p**k * (1-p)**(n-k) /
((k/n)**k * (1-k/n)**(n-k))
- 1/K)
}
if (k==0) {
p1 <- 0
} else {
r <- uniroot(helper,n=n,k=k,K=K,
interval=c(0,k/n))
p1 <- r$root
}
if (k==n) {
p2 <- 1
} else {
r <- uniroot(helper,n=n,k=k,K=K,
interval=c(k/n,1))
p2 <- r$root
}
return (data.frame(p1=p1, p2=p2))
}
\end{lstlisting}
\subsection{Highest posterior density for $\hat{p}$}
\label{sec:p:hpd}
The R package {\em HDInterval} provides the function {\em hdi} for the
computation of HPD intervals. {\em hdi} expects as one of its arguments
the inverse of the posterior CDF
$\int_{-\infty}^{\theta} p(\tau|\hat{\theta})\,d\tau$, i.e., the quantile
function of the posterior. This means that {\em hdi} is only applicable
in cases where this inverse can be readily computed. The binomial
distribution is such a case.
Insertion of the binomial distribution (\ref{eq:binom}) into
Eq.~(\ref{eq:bayes}) yields with the assumption of a constant
``a priori'' density $p(\theta)=\mbox{\em const.}$:
\begin{eqnarray}
p(p|k) & = &
\frac{ {n \choose k} p^k (1-p)^{n-k} }{\int_{0}^{1} {n \choose k} q^k (1-q)^{n-k}\,dq} \nonumber \\[1ex]
& = & \frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)}\,p^{a-1}(1-p)^{b-1} \nonumber \\[1ex]
& = & \mbox{dbeta}(p, a, b)
\end{eqnarray}
where $a=k+1$ and $b=n-k+1$, and {\em dbeta} is the R function for the
probability density of the beta distribution. The inverse CDF of the
beta distribution is provided by R as the function {\em qbeta}, so that
the HPD interval can be computed with the code in listing \ref{lst:hpd-binom}.
\lstset{language=R,
basicstyle=\small \ttfamily,
keywordstyle=\ttfamily,
frame=bottomline,
floatplacement=!t,
aboveskip=0pt,
belowskip=0pt,
captionpos=b
}
\begin{lstlisting}[float, caption=R code that computes the $(1-\alpha)$ HPD interval for the relative frequency., label=lst:hpd-binom]
library(HDInterval)
ci <- hdi(qbeta, 1-alpha,
shape1=(k+1),
shape2=(n-k+1))
p1 <- ci[1]; p2 <- ci[2]
\end{lstlisting}
\section{Mean values}
\label{sec:mu}
Another ubiquitous estimator is the statistical average $\overline{x}$
as an estimator for the expectation value $\mu=E(X)$. For the statistical
average $\overline{x}=\frac{1}{n}\sum_{i=1}^nx_i$, it is possible to construct
a quantity that only depends on the unknown $\mu$ and has a known distribution,
albeit only in the special case that the variable $X$ is normally distributed.
In this case, the random variable
\begin{equation}
\label{eq:xquer-standard}
Z=\frac{\overline{x}-\mu}{\sqrt{s^2/n}}\quad\mbox{with }
s^2=\frac{1}{n-1}\sum_{i=1}^n (x_i-\overline{x})^2
\end{equation}
is $t$ distributed with $(n-1)$ degrees of freedom\footnote{The esoteric-sounding term ``degrees of freedom'' is just the parameter of the $t$ distribution.}.
If $X$ is not normally distributed, it is at least known from the central limit
theorem that the quantity (\ref{eq:xquer-standard}) is approximately
standard normally distributed\footnote{The ``standard'' normal distribution is the normal distribution with parameters $\mu=0$ and $\sigma^2=1$.}
\cite{fahrmeir04}. In general, it is not known whether $X$ is normally
distributed, which means that confidence intervals for the mean value can
alternatively be based on the $t$ distribution or the normal distribution.
\subsection{Frequentist interval for $\mu$}
\label{sec:mu:frequentistisch}
Let $\mu_0$ be the observed value for $\overline{x}$. Then
Eq.~(\ref{eq:CIfreq:a}) specifying $\mu_l$ reads, with utilization of the
$t$ distribution:
\begin{align}
\label{eq:freq:mu-ansatz}
&P_{\mu=\mu_l}(\overline{x}\geq \mu_0) = \alpha/2 \\ \nonumber
\Leftrightarrow\quad &P\left(Z\geq (\mu_0-\mu_l)/\sqrt{s^2/n}\right) = \alpha/2 \\ \nonumber
\Leftrightarrow\quad & 1- \mbox{pt}\left((\mu_0-\mu_l)/\sqrt{s^2/n}, n-1\right) = \alpha/2 \\ \nonumber
\Leftrightarrow\quad & (\mu_0-\mu_l)/\sqrt{s^2/n} = \mbox{qt}(1-\alpha/2, n-1) \\ \nonumber
\Leftrightarrow\quad & \mu_l = \mu_0-\mbox{qt}(1-\alpha/2, n-1)\cdot \sqrt{s^2/n}
\end{align}
where {\em pt} is the CDF of the $t$ distribution, and {\em qt} its inverse.
In the same way, Eq.~(\ref{eq:CIfreq:b}) can be solved for $\mu_u$.
Using the symmetry property $\mbox{qt}(q, n-1) = -\mbox{qt}(1-q, n-1)$,
the confidence interval based upon the $t$ distribution becomes:
\begin{equation}
\label{eq:freq:mu-t}
\overline{x}\pm t_{1-\alpha/2}(n-1)\cdot\sqrt{s^2/n}
\end{equation}
where $t_{1-\alpha/2}(n-1)$ denotes the $(1-\alpha/2)$ quantile of the $t$
distribution, which can be computed with the R function {\em qt}.
Based on the normal distribution, the same calculation method yields the
confidence interval
\begin{equation}
\label{eq:freq:mu-z}
\overline{x}\pm z_{1-\alpha/2}\cdot\sqrt{s^2/n}
\end{equation}
where $z_{1-\alpha/2}$ denotes the $(1-\alpha/2)$ quantile of the
standard normal distribution, which can be computed with the R function
{\em qnorm}.
It seems paradoxical that we obtain the different confidence intervals
(\ref{eq:freq:mu-t}) or (\ref{eq:freq:mu-z}), depending on a condition
(the underlying distribution) that we do not know about. This is
no contradiction, however. Although
\begin{equation}
t_{1-\alpha/2}(n-1) > z_{1-\alpha/2} \quad\mbox{for all }n
\end{equation}
and the interval (\ref{eq:freq:mu-t}) is therefore always slightly larger,
for large $n$ both intervals become asymptotically similar because of
\begin{equation}
\lim_{n\to\infty} t_{1-\alpha/2}(n-1) = z_{1-\alpha/2}
\end{equation}
For $\alpha=0.05$, both values are close to two, which leads for both of the
above confidence intervals to the rule of thumb ``two times sigma'', with
$\sigma=\sqrt{s^2/n}$.
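Both intervals take only a few lines of R; the following sketch (the helper name {\em ci.mean} is mine) implements Eqs.~(\ref{eq:freq:mu-t}) and (\ref{eq:freq:mu-z}):

```r
# confidence interval for the mean, based on the t distribution
# (use.t=TRUE) or the normal distribution (use.t=FALSE)
ci.mean <- function(x, alpha, use.t=TRUE) {
  n <- length(x)
  se <- sqrt(var(x)/n)   # var() uses the 1/(n-1) definition of s^2
  if (use.t) {
    q <- qt(1-alpha/2, n-1)
  } else {
    q <- qnorm(1-alpha/2)
  }
  return (c(mean(x) - q*se, mean(x) + q*se))
}
```

For $\alpha=0.05$ and $n=1000$, the two quantiles {\em qt(0.975, 999)} $\approx 1.962$ and {\em qnorm(0.975)} $\approx 1.960$ are already nearly indistinguishable.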
\subsection{Likelihood ratio for $\mu$}
\label{sec:mu:lr}
On the basis of the $t$ distribution, the specifying equation (\ref{eq:LR})
for the likelihood ratio support interval reads
\begin{equation}
\frac{L(\mu)}{L(\hat{\mu})} = \left(1+\frac{n(\overline{x}-\mu)^2}{s^2(n-1)}\right)^{-n/2} \geq \frac{1}{K}
\end{equation}
This equation can readily be solved for $\mu$, which yields the support interval
\begin{equation}
\label{eq:mu:lr-t}
\overline{x}\pm\sqrt{(K^{2/n}-1) s^2\frac{n-1}{n}}
\end{equation}
On the basis of the normal distribution, the specifying equation reads
\begin{equation}
\frac{L(\mu)}{L(\hat{\mu})} = \exp\left(-\frac{n(\overline{x}-\mu)^2}{2s^2}\right) \geq \frac{1}{K}
\end{equation}
This, too, can be solved for $\mu$ in closed form, which yields the support
interval
\begin{equation}
\label{eq:mu:lr-z}
\overline{x}\pm\sqrt{\frac{2s^2}{n}\ln K}
\end{equation}
It seems as though (\ref{eq:mu:lr-t}) and (\ref{eq:mu:lr-z}) were completely
different intervals, but in fact they are very similar: for large $n$, both
intervals are asymptotically equal because of\footnote{This limiting value follows from inversion of \cite{abramowitz84} Eq.~4.2.21.}
\begin{equation}
\ln x = \lim_{n\to\infty} n(x^{1/n} - 1)
\end{equation}
The numerical evaluation of the right hand side of Eq.~(\ref{eq:mu:lr-t})
becomes inaccurate for large $n$, however, because leading digits cancel
when the two nearly equal floating point numbers are subtracted. Therefore,
Eq.~(\ref{eq:mu:lr-z}) is preferable for large $n$ even in the case of the
$t$ distribution.
When we compare the support interval (\ref{eq:mu:lr-z}) with the
confidence interval (\ref{eq:freq:mu-t}), we see the reason for the choice
$K=8$: we have $\sqrt{2\ln 8}\approx 2.0393$, which means that the frequentist
interval for $\alpha=0.05$ and the LR support interval roughly coincide.
For $K=7$, the approximation $\sqrt{2\ln K}\approx z_{1-\alpha/2}$ holds with
even better accuracy, but, as we will see in section \ref{sec:vergleich},
the frequentist interval based on $z_{1-\alpha/2}$ is generally too small,
so that $K=8$ is a safer choice.
\subsection{Highest posterior density for $\mu$}
\label{sec:mu:hpd}
On the basis of the $t$ distribution, Eq.~(\ref{eq:bayes}) becomes, with the
assumption of a constant ``a priori'' distribution $p(\mu)=\mbox{\em const.}$:
\begin{align}
p(\mu|\overline{x}) & = \frac{\sqrt{n}\Gamma(\frac{n}{2})}{s\sqrt{\pi(n-1)}\Gamma(\frac{n-1}{2})}\left(1+\frac{(\overline{x}-\mu)^2n}{s^2(n-1)}\right)^{\!\!-\frac{n}{2}} \nonumber \\[1ex]
& = \sqrt{\frac{n}{s^2}}\cdot\mbox{dt}\!\left(\frac{(\overline{x}-\mu)\sqrt{n}}{s}, n-1\right)
\end{align}
where {\em dt} is the R function for the probability density of the $t$
distribution. On the basis of the normal distribution, we obtain under the
analogous assumption $p(\mu)=\mbox{\em const.}$:
\begin{align}
p(\mu|\overline{x}) &=
\sqrt{\frac{n}{2\pi s^2}}\cdot\exp\left(-\frac{(\overline{x}-\mu)^2n}{2s^2}\right) \nonumber \\[1ex]
&= \mbox{dnorm}(\mu, \overline{x}, \sqrt{s^2\!/n})
\end{align}
where {\em dnorm} is the R function for the probability density of the
normal distribution. The resulting densities are thus identical to
the symmetric densities used for the frequentist interval, which has the
effect that the specifying equation (\ref{eq:freq:mu-ansatz}) for the HPD
interval has the same solution as the specifying equation for the
frequentist interval. The HPD interval for the mean value is therefore
exactly identical to the frequentist interval (\ref{eq:freq:mu-t})
or (\ref{eq:freq:mu-z}), respectively.
This is no coincidence, but a consequence of the fact that $\mu$ is a
``location parameter'', i.e., that $p(\overline{x}|\mu)=f(\overline{x}-\mu)$.
When this functional relationship holds, frequentist interval and HPD interval
are always identical \cite{karlen02}.
\section{Maximum likelihood estimators}
\label{sec:ml}
To obtain a confidence interval for different estimators, it is necessary
to know the probability distribution of the estimated value $\hat{\theta}$.
Unfortunately, this is almost impossible in other cases than the aforementioned
two examples. There is however a large category of estimators for which
the asymptotic distribution is known: maximum likelihood (ML) estimators
are asymptotically normally distributed around the true value $\theta$
for ``regular'' log-likelihood functions\footnote{\label{fn:ml-normal}The precise requirements are as follows: the log-likelihood function $\ell(\theta)$ must be three times continuously differentiable, the expectation values of all first and second derivatives must exist, and the third derivatives must be bounded by a function with finite expectation value \cite{greene00}.}
$\ell(\theta)$ (see Eq.~(\ref{eq:L:b})) for large $n$. In other words,
the asymptotic probability density of $\hat{\theta}$ is given by
\begin{equation}
\label{eq:normalverteilung}
p(\hat{\theta}) = \frac{\exp\left(-\frac{1}{2}\langle\hat{\theta}-\theta, \Sigma^{-1} (\hat{\theta}-\theta)\rangle\right)}{\sqrt{(2\pi)^t\det(\Sigma)}}
\end{equation}
where $t$ is the number of parameters $\theta=(\theta_1,\ldots,\theta_t)$,
$\Sigma$ is the covariance matrix, and the exponent ``$-1$'' denotes matrix
inversion.
If it is thus possible to determine the covariance matrix
$(\sigma_{ij})=\Sigma$, then its diagonal elements
$\sigma_{ii}=\mbox{\em Var}(\theta_i)$ can be used to construct confidence
intervals based on the normal distribution as in section \ref{sec:mu}:
\begin{equation}
\label{eq:ml-klassisch}
\hat{\theta} \pm z_{1-\alpha/2}\sqrt{\sigma_{ii}}
\end{equation}
Alternatively, it would also be sufficient to have a direct estimator
for the variances $\sigma_{ii}$ of the parameters. This leads to two
possible approaches for an estimation of the variance of maximum likelihood
estimators:
\begin{itemize}
\item estimation of the covariance matrix via inversion of the Hessian matrix
of the log-likelihood function
\item jackknife estimator for the variance
\end{itemize}
The first method has the advantage that it can yield closed formulas for
the variance in cases that allow for an analytic calculation of the Hessian
matrix. The second method has the advantage that it requires no analytic or
numeric calculation of derivatives at all and instead provides an elementary
and fast algorithm for computing the variance.
When the requirements listed in footnote \ref{fn:ml-normal} do not hold,
the Hessian matrix cannot be computed, and the first method is ruled out.
Although the jackknife variance can nevertheless be computed even in this case,
it is of little use, because neither is guaranteed that the estimator is
normally distributed, nor that the jackknife variance is a good estimator
for the true variance (see \cite{miller64} for a counterexample).
In such a situation, it is therefore necessary to resort to the
bootstrap method which is described in section
\ref{sec:bootstrap}.
\subsection{Hessian matrix}
\label{sec:ml:hesse}
When the preconditions mentioned in footnote \ref{fn:ml-normal} hold,
the covariance matrix in Eq.~(\ref{eq:normalverteilung}) can be estimated
through
\cite{greene00}
\begin{equation}
\label{eq:ml-hesse}
\left(\sigma_{ij}\right) = \left(-\left.\frac{\partial^2 \ell}{\partial\theta_i \partial\theta_j}\right|_{\theta=\hat{\theta}}\right)^{-1}
\end{equation}
where $\ell(\theta)$ is the log-likelihood function from Eq.~(\ref{eq:L:b}),
and the exponent ``$-1$'' denotes matrix inversion.
\lstset{language=R,
basicstyle=\small \ttfamily,
literate={.help}{.help}5,
keywordstyle=\ttfamily,
frame=bottomline,
floatplacement=!t,
aboveskip=0pt,
belowskip=0pt,
captionpos=b
}
\begin{lstlisting}[float, caption={R code for the numerical calculation of an
ML estimator for $\theta=(\theta_1,\ldots,\theta_t)$ in combination with a
variance estimation for the estimated values. The log-likelihood function
must be defined negatively, because {\em optim} seeks the minimum instead
of the maximum.}, label=lst:ml]
lnL <- function(theta1, theta2, ...) {
# definition of the negative (!)
# log-likelihood function
...
}
# starting values for the optimization
theta0 <- c(start1, start2, ...)
# optimization
p <- optim(theta0, lnL, hessian=TRUE)
if (p$convergence == 0) {
theta <- p$par
covmat <- solve(p$hessian)
sigma <- sqrt(diag(covmat))
}
\end{lstlisting}
In many cases, the equation (\ref{eq:ML}) specifying the ML estimator
$\hat{\theta}$ can neither be solved in closed form, nor can the inverse
of the Hessian matrix (\ref{eq:ml-hesse}) be computed analytically.
This does not mean, however, that this method must be ruled out in this case,
because a numerical solution is often viable. The R function {\em optim}
even offers an argument {\em hessian=TRUE} which asks for an additional
estimation of the Hessian matrix during optimization. An example implementation
utilizing this function is given in listing \ref{lst:ml}.
\subsection{Jackknife}
\label{sec:ml:jackknife}
The jackknife method is based on the idea to compute the estimator
$\hat{\theta}(x_1,\ldots,x_n)$ many times, but each time with the omission of
one value $x_i$. The variance of $\hat{\theta}$ is then estimated from the
distribution of these ``delete-one'' estimators. Let $\theta_{(i)}$ be the
estimator computed without the $i$-th data point $x_i$. Then the jackknife
estimator for the variance of $\hat{\theta}$ is:
\begin{align}
\label{eq:sigma-jk}
\sigma_{\mbox{\scriptsize\it JK}}(\hat{\theta}) &= \sqrt{\frac{n-1}{n}\sum_{i=1}^n (\theta_{(i)}-\theta_{(.)})^2}\\
\mbox{ with }&\quad \theta_{(.)}=\frac{1}{n}\sum_{i=1}^n\theta_{(i)} \nonumber
\end{align}
When $\theta$ is a vector with several components, it is also possible to
estimate the entire covariance matrix $\Sigma$ with the jackknife. This is
of little use however, because the confidence intervals (\ref{eq:ml-klassisch})
only need the diagonal elements $\sigma_{ii}$ of $\Sigma$. Hence it is
sufficient to apply Eq.~(\ref{eq:sigma-jk}) to each component of
$\theta$. For asymptotically normally distributed ML estimators,
$\sigma_{\mbox{\scriptsize\it JK}}$ is an asymptotically unbiased and consistent
estimator for their variance \cite{reeds78}. An implementation of
formula (\ref{eq:sigma-jk}) is given in listing \ref{lst:jackknife}.
\lstset{language=R,
basicstyle=\small \ttfamily,
literate={ü}{{\"u}}1{ä}{{\"a}}1,
keywordstyle=\ttfamily,
frame=bottomline,
floatplacement=!t,
aboveskip=0pt,
belowskip=0pt,
captionpos=b
}
\begin{lstlisting}[float, caption={Calculation of the jackknife variance of an estimator $\hat{\theta}(x_1,\ldots,x_n)$ in R.}, label=lst:jackknife]
theta.hat <- function(x) {
# implementation of the estimator
...
}
theta.jk <- rep(0, n)
for (i in 1:n) {
theta.jk[i] <- theta.hat(x[-i])
}
theta.dot <- mean(theta.jk)
sigma.jk <- sqrt((n-1) *
mean((theta.jk-theta.dot)^2))
\end{lstlisting}
\section{Bootstrap}
\label{sec:bootstrap}
Similar to the jackknife method, the bootstrap method is based on the
generation of new data sets from the original data $x_1,\ldots,x_n$. In the
bootstrap, this is however not done deterministically via cyclic omission,
but in a random way. This can either be done by $n$-fold drawing
with replacement ({\em non-parametric bootstrap}), or by $n$-fold generation
of random numbers distributed according to the density estimated with the
estimator ({\em parametric bootstrap}). The non-parametric bootstrap thus
considers all observed data, while the parametric bootstrap only considers
the point estimator $\hat{\theta}$ computed from the data.
When we repeat the drawing of new data sets $R$ times, we obtain a Monte-Carlo
simulation of the distribution of the estimator $\hat{\theta}$. From this
distribution, confidence intervals can be estimated.\footnote{It is also possible to estimate the variance from this Monte Carlo simulation \cite{efron83}, but a confidence interval based on this variance would again make the assumption of a normally distributed $\hat{\theta}$.}\footnote{One might think that instead of the bootstrap random samples, one could alternatively estimate the confidence interval from the distribution of the $n$ jackknife ``delete one'' estimators $\theta_{(i)}$. This does not work, however, because even in the case of regular ML estimators, the distribution of the $\theta_{(i)}$ is not normal and therefore not representative for the distribution of $\hat{\theta}$ \cite{wu86}.}
There is a bewildering variety of methods for estimating a confidence interval
from the simulated distribution, which are summarized together with their
asymptotic coverage probability in \cite{carpenter00}. Their theoretical
background is explained in \cite{diciccio96}. The most important methods are:
\begin{description}
\item[Percentile \& Basic.] The {\em Percentile bootstrap} was the original
method proposed by Efron. It simply takes the percentiles of the simulated
distribution $\hat{\theta}_1,\ldots,\hat{\theta}_R$ of
$\hat{\theta}$. The {\em Basic bootstrap interval} flips the percentile
bootstrap at $\hat{\theta}$. Venables \& Ripley recommend the Basic
bootstrap over the Percentile bootstrap \cite{venables02}, but the experiments
in section \ref{sec:vergleich} lead to the opposite conclusion.
\item[Bias corrected accelerated ($BC_a$).] This method tries to estimate
transformation parameters that make the distribution symmetric.
This is the method recommended by Efron.
\end{description}
It can be shown that the $BC_a$ interval has a coverage probability that
converges asymptotically for large $n$ to the nominal value $1-\alpha$ with
a rate $o(n^{-1})$ \cite{diciccio96}. This is a faster convergence than
for the classical $z_{1-\alpha/2}\sigma$ interval, which has a convergence
rate of $o(n^{-1/2})$. DiCiccio \& Efron concluded from this observation that
the bootstrap method is generally preferable (comments to \cite{diciccio96}, p.~228):
\begin{quote}
``If the standard intervals were invented today, they might not be
publishable.''
\end{quote}
\lstset{language=R,
basicstyle=\small \ttfamily,
literate={ü}{{\"u}}1{ä}{{\"a}}1,
keywordstyle=\ttfamily,
frame=bottomline,
floatplacement=!t,
aboveskip=0pt,
belowskip=0pt,
captionpos=b
}
\begin{lstlisting}[float, caption={Calculation of bootstrap confidence intervals with the R library {\em boot}.}, label=lst:bootstrap]
# estimator function; indices are
# for boot() to select data points
schaetzer <- function(x, indices) {
x.auswahl <- x[indices]
... # compute estimator from x.auswahl
return(theta.hat)
}
# bootstrap confidence intervals
boot.out <- boot(data=x,
statistic=schaetzer, R=1000)
ci <- boot.ci(boot.out,
conf=0.95, type="all")
# percentile interval:
theta1 <- ci$perc[4]
theta2 <- ci$perc[5]
# basic interval:
theta1 <- ci$basic[4]
theta2 <- ci$basic[5]
# BCa interval:
theta1 <- ci$bca[4]
theta2 <- ci$bca[5]
\end{lstlisting}
This is somewhat misleading, however, because the difference between the
different confidence intervals is marginal for large $n$ anyway, and
the convergence rate for large $n$ is therefore of merely theoretical
interest. Of practical relevance is instead the behavior for small $n$,
where the bootstrap intervals perform poorer than the classic intervals
(see section \ref{sec:vergleich}). In defense of the comparatively poor
performance of the bootstrap for small $n$ in a specific study, its inventor,
Bradley Efron, wrote \cite{efron87}:
\begin{quote}
``Bootstrap methods are intended to supplement rather than replace
parametric analysis, particularly when parametric methods can't be used
because of modeling uncertainties or theoretical intractability.''
\end{quote}
The function {\em boot.ci} from the R library {\em boot} can compute a number
of bootstrap confidence intervals, including the three aforementioned.
According to Efron \& Tibshirani \cite{efron86}, the number $R$ of bootstrap
replications should be at least $R=1000$. Usage of the
R function {\em boot.ci} is shown in listing \ref{lst:bootstrap}.
Apart from the confusion about the most appropriate bootstrap interval in a
particular situation, the bootstrap method has another drawback:
as it is based on Monte Carlo simulations, its results are not deterministic
and not reproducible. This means that two different researchers will end
up with different confidence intervals for the same data. Leon Jay Gleser
sees therein a violation of a rule that he calls the ``first law of
applied statistics'' (comments to \cite{diciccio96}, p.~220):
\begin{quote}
``Two individuals using the same statistical
method on the same data should arrive at the
same conclusion.''
\end{quote}
It should be noted that the differences are small, though. Nevertheless
they are noticeable and the bootstrap method might therefore leave some
users with a slightly uneasy feeling.
\begin{figure*}[!t]
\begin{center}
\subfigure[Wald interval]{
\includegraphics[width=0.45\textwidth]{pcov-binom-wald}
}
\subfigure[Wilson interval]{
\includegraphics[width=0.45\textwidth]{pcov-binom-wilson}
}
\subfigure[LR support interval ($K=8$)]{
\includegraphics[width=0.45\textwidth]{pcov-binom-lr8}
}
\subfigure[HPD interval]{
\includegraphics[width=0.45\textwidth]{pcov-binom-hpd}
}
\end{center}
\caption{\label{fig:pcov-binom}Coverage probability $P_{cov}(p)$ of the confidence intervals for a binomial proportion as a function of the true parameter value $p$ for $n=100$ and $1-\alpha=0.95$.}
\end{figure*}
\section{Performance in examples}
\label{sec:vergleich}
This section provides a comparative evaluation of the different confidence
intervals with respect to examples for all three of the aforementioned cases.
Apart from the coverage probability $P_{cov}$, the relative size of the
confidence intervals is of interest, too.
For fixed $n$, the relative frequency can only be one of $n+1$ discrete
values, so that $P_{cov}(p)$ can be computed exactly. As an example for
the mean value, I have chosen an asymmetric distribution with density
$f(x)=3x^2$, such that Monte Carlo simulations might show whether the
bootstrap provides any advantages over the classic intervals that assume
symmetry. As an example for an ML estimator, I have chosen the parameter
$\lambda$ of the exponential distribution. In this example, even the inverse
of the Hessian matrix can be calculated analytically in closed form, which allows
for a comparison of all methods by means of a Monte Carlo simulation. Of
the bootstrap methods, I have only tested the non-parametric bootstrap because
the parametric method is not universally applicable, but must be tailored to
each particular use case, which might be too much of an effort for an end user only interested in confidence intervals\footnote{Apart from an understanding of probability theory, it also requires knowledge about the generation of random numbers (transformation method, rejection method \cite{press89}).}.
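For the exponential distribution with density $\lambda e^{-\lambda x}$, the log-likelihood is $\ell(\lambda)=n\ln\lambda-\lambda\sum_i x_i$, so that $\hat{\lambda}=1/\overline{x}$ and the inverse Hessian (\ref{eq:ml-hesse}) is $\hat{\lambda}^2/n$. The interval (\ref{eq:ml-klassisch}) then becomes $\hat{\lambda}(1\pm z_{1-\alpha/2}/\sqrt{n})$, sketched below (the function name is mine):

```r
# analytic Hessian-based confidence interval for the rate
# parameter lambda of the exponential distribution
ci.exp.lambda <- function(x, alpha) {
  n <- length(x)
  lambda.hat <- 1/mean(x)
  # sqrt of the inverse Hessian: -l''(lambda) = n/lambda^2
  sigma <- lambda.hat/sqrt(n)
  z <- qnorm(1-alpha/2)
  return (c(lambda.hat - z*sigma, lambda.hat + z*sigma))
}
```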
\subsection{$P_{cov}$ for the relative frequency}
\label{sec:vergleich:p}
The coverage probability of different confidence intervals for a binomial
proportion was already investigated by Brown et al.~\cite{brown01}.
Based on their results, they recommended the Wilson interval. As they did not
include the LR support interval or the HPD interval in their study, the
corresponding behavior of $P_{cov}(p)$ as a function of $p$ is shown
in Fig.~\ref{fig:pcov-binom}. The corresponding behavior of the ``exact''
(Clopper-Pearson) interval is shown in Fig.~\ref{fig:pcov-binom-exact}.
The curves have been computed as follows:
\begin{itemize}
\item for every $0\leq k\leq n$, the confidence interval was calculated
\item for every sampled value $p\in[0,1]$, the probabilities of all $k$
were added for which $p$ fell into the confidence interval
\end{itemize}
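The two-step procedure above translates directly into code. The following sketch (in Python rather than the R used for the report's listings; the function names are illustrative) computes the exact coverage of the Wald interval, and the same loop works unchanged for any other interval rule:

```python
from math import comb, sqrt

Z = 1.959963984540054  # z quantile for 1 - alpha = 0.95

def wald_interval(k, n, z=Z):
    """Wald interval: p_hat +/- z * sqrt(p_hat * (1 - p_hat) / n)."""
    p_hat = k / n
    half = z * sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - half, p_hat + half

def coverage(p, n, interval=wald_interval):
    """Exact P_cov(p): sum the binomial probabilities of all k
    whose confidence interval contains the true p."""
    total = 0.0
    for k in range(n + 1):
        lo, hi = interval(k, n)
        if lo <= p <= hi:
            total += comb(n, k) * p**k * (1 - p)**(n - k)
    return total
```

For example, `coverage(0.01, 10)` is far below the nominal 0.95, reproducing the Wald deficiency near the boundary discussed below.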
As already noted by Brown et al., the classical Wald interval taught in
statistics textbooks has a coverage probability far below the nominal value
over almost the entire range of $p$; $P_{cov}$ even approaches zero as $p$
nears zero or one. The Wilson interval, in contrast, fluctuates around the
nominal value, albeit with greater deviations towards the boundaries of the
$p$-range. Interestingly, the HPD interval has an even better coverage
probability than the Wilson interval: it mostly shows a similar behavior,
but does not fall to overly small values at the boundaries. The behavior of
the LR support interval for $K=8$ is similar to that of the exact
Clopper-Pearson interval, but there are instances where $P_{cov}$ falls
considerably below the nominal value.
\begin{figure}[!b]
\centering\includegraphics[width=0.95\columnwidth]{maxlength-binom}
\caption{\label{fig:maxlength-binom}Maximum length of the confidence intervals for the relative frequency as a function of $n$ for $1-\alpha=0.95$. The maximum length of the HPD and Wilson interval are nearly identical.}
\end{figure}
\begin{figure}[!t]
\centering\includegraphics[width=0.95\columnwidth]{length-binom}
\caption{\label{fig:length-binom}Confidence interval length for the relative frequency as a function of $\hat{p}$ for $1-\alpha=0.95$ and $n=100$.}
\end{figure}
Another evaluation criterion is the interval length, which should be minimal
for comparable coverage probability. The maximum length of all intervals occurs
for $p=1/2$ and is plotted as a function of $n$ in
Fig.~\ref{fig:maxlength-binom}. The widest interval is the exact interval,
which is the inevitable price for the guarantee of $P_{cov}(p)\geq 1-\alpha$,
with $P_{cov}$ exceeding the nominal value more often than not. Curiously enough, the maximum length
of the Wald interval is greater than that of the Wilson or HPD interval,
although its coverage probability is smaller. This apparent contradiction
is resolved when the interval lengths for varying $\hat{p}$ with fixed $n$
are compared (see Fig.~\ref{fig:length-binom}). It can be seen that the
classical Wald interval is unnecessarily wide for $\hat{p}\approx 0.5$, but
too short for $\hat{p}\approx 0$ or $\hat{p}\approx 1$.
It is interesting to note that the HPD interval for $\hat{p}\approx 0$ or
$\hat{p}\approx 1$ is shorter than the Wilson interval, even though it has
a considerably higher coverage probability in this region. With respect to
the criteria coverage probability and length, the HPD interval has the
best properties. It has the drawback, though, that it can be computed
only numerically (see listing \ref{lst:hpd-binom}). If a closed formula is
required, the Wilson interval (see Eq.~(\ref{eq:freq:pwilson})) can be
used as an alternative, provided $\hat{p}$ is not too close to zero or one.
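For reference, the Wilson interval in its standard form can be sketched as follows (a hedged Python illustration; the report's Eq.~(\ref{eq:freq:pwilson}) is the authoritative statement of the formula):

```python
from math import sqrt

def wilson_interval(k, n, z=1.959963984540054):
    """Wilson score interval for a binomial proportion:
    center +/- half, with the score-test center and half-width."""
    p_hat = k / n
    denom = 1 + z**2 / n
    center = (p_hat + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return center - half, center + half
```

Unlike the Wald interval, its bounds always lie within $[0,1]$; for $\hat{p}=0$ the lower bound is exactly zero.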
\subsection{$P_{cov}$ for the mean value}
\label{sec:vergleich:mu}
To compare the classical confidence intervals for the mean value with
the bootstrap intervals, I have chosen a random variable with the
probability density
\begin{equation}
\label{eq:dichtex2}
f(x) = \left\{ \begin{array}{ll} 3x^2 & \mbox{ for }0\leq x\leq 1 \\
0 & \mbox{ otherwise}\end{array} \right.
\end{equation}
The expectation value of this distribution is $3/4$, and random numbers
drawn from this distribution can be generated by means of the transformation
method \cite{press89} with
\begin{quote}
\verb$runif(N, min=0, max=1)**(1/3)$
\end{quote}
The number of simulated mean value measurements was set to $N=10^6$, which
means that the coverage probability can be estimated with an accuracy
$\pm 0.0004$ for $\alpha=0.05$.
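The same sampling setup can be sketched in Python (the report uses R's `runif`); since the CDF is $F(x)=x^3$ on $[0,1]$, the inverse-CDF transform is $X = U^{1/3}$:

```python
import random

def sample_x2(n, rng):
    """Draw n values from f(x) = 3x^2 on [0,1] via the inverse CDF:
    F(x) = x^3, so X = U**(1/3) for U ~ Uniform(0,1)."""
    return [rng.random() ** (1 / 3) for _ in range(n)]

rng = random.Random(42)
xs = sample_x2(100_000, rng)
mean = sum(xs) / len(xs)  # should be close to E[X] = 3/4
```

A quick sanity check: the sample mean converges to $3/4$ and the second moment to $E[X^2]=3/5$.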
\begin{figure}[!t]
\begin{center}
\subfigure[Coverage probability]{
\includegraphics[width=0.95\columnwidth]{mean-pcov}
}
\subfigure[Average length]{
\includegraphics[width=0.95\columnwidth]{mean-length}
}
\end{center}
\caption{\label{fig:mean-pcov}Coverage probability and average length of the different confidence intervals for the mean value of $n$ random variables distributed according to Eq.~(\ref{eq:dichtex2}).}
\end{figure}
The behavior of $P_{cov}$ and the length of the different confidence intervals
as a function of the number $n$ of observed data points is shown in
Fig.~\ref{fig:mean-pcov}. Surprisingly, the classic confidence interval
based on the $t$ distribution has the best coverage probability throughout,
although the distribution of $\overline{x}$ is asymmetric for small $n$.
The weaknesses of the bootstrap method for small $n$ are thus not compensated
in this case by its ability to take asymmetries into consideration.
The best bootstrap interval in this case is the $BC_a$ interval. It has
a length similar to the classic $z_{1-\alpha/2}$ interval, but
with a greater $P_{cov}$. Venables \& Ripley's recommendation of
the basic over the percentile bootstrap cannot be confirmed; on the
contrary, the basic bootstrap interval has a clearly too low $P_{cov}$
in this case, whilst the percentile interval has a coverage probability
comparable to the classic $z_{1-\alpha/2}$ interval.
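For illustration, a minimal percentile bootstrap can be sketched as follows (Python; this is a generic sketch with illustrative names, not the implementation used for the simulations above):

```python
import random

def percentile_bootstrap(data, stat, B=2000, alpha=0.05, rng=None):
    """Percentile bootstrap CI: resample with replacement B times and
    take the empirical alpha/2 and 1-alpha/2 quantiles of the statistic."""
    rng = rng or random.Random(0)
    n = len(data)
    reps = sorted(stat([rng.choice(data) for _ in range(n)])
                  for _ in range(B))
    lo = reps[int((alpha / 2) * B)]
    hi = reps[int((1 - alpha / 2) * B) - 1]
    return lo, hi
```

The basic bootstrap differs only in the last step: it reflects the quantiles around the observed statistic, $(2\hat{\theta}-hi,\, 2\hat{\theta}-lo)$.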
\subsection{$P_{cov}$ for ML estimators}
\label{sec:vergleich:lambda}
Let us consider the exponential distribution as a test case for comparing
the different confidence intervals for maximum likelihood estimators.
The exponential distribution has the single parameter $\lambda$ and
the probability density
\begin{equation}
\label{eq:dichteexp}
f(x) = \left\{ \begin{array}{ll} \lambda e^{-\lambda x} & \mbox{ for }x\geq 0 \\
0 & \mbox{ otherwise}\end{array} \right.
\end{equation}
The log-likelihood function obtained from this density is
\begin{equation}
\label{eq:ellexp}
\ell(\lambda) = n\log(\lambda) - \lambda\sum_{i=1}^n x_i
\end{equation}
The ML estimator for $\lambda$ is obtained by solving the equation
(\ref{eq:ML}) for $\lambda$:
\begin{equation}
\label{eq:lambda-ML}
\hat{\lambda} = \frac{n}{\sum_{i=1}^n x_i} = \frac{1}{\overline{x}}
\end{equation}
As the exponential distribution only has a single parameter, the Hessian matrix
is of dimension $1\times 1$, ergo a scalar. It can be readily computed as
\begin{equation}
\label{eq:exp-HM}
H(\lambda) = \left(\frac{\partial^2}{\partial\lambda^2}\ell\right)
= \left(-\frac{n}{\lambda^2}\right)
\end{equation}
When the standard error of $\hat{\lambda}$ is estimated from the Hessian with
the method of section \ref{sec:ml:hesse}, it reads
\begin{equation}
\label{eq:exp-sigma-HM}
\hat{\sigma}_{\mbox{\scriptsize\em HM}} = \sqrt{\Big(-H(\hat{\lambda})\Big)^{-1}} = \frac{\hat{\lambda}}{\sqrt{n}}
\end{equation}
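Eqs.~(\ref{eq:lambda-ML}) and (\ref{eq:exp-sigma-HM}) translate into a few lines of code. A sketch (Python, illustrative names; the $z$ quantile for $1-\alpha=0.95$ is hard-coded):

```python
from math import sqrt

Z = 1.959963984540054  # z quantile for 1 - alpha = 0.95

def exp_ml_interval(xs, z=Z):
    """ML estimate of the exponential rate lambda and the Hessian-based
    confidence interval lambda_hat +/- z * lambda_hat / sqrt(n)."""
    n = len(xs)
    lam = n / sum(xs)   # lambda_hat = 1 / xbar
    se = lam / sqrt(n)  # sigma_hat from the (scalar) Hessian
    return lam, (lam - z * se, lam + z * se)
```

For example, for the sample `[0.5, 0.5, 0.5, 0.5]` the estimate is $\hat{\lambda}=2$ with $\hat{\sigma}_{HM}=1$.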
\begin{figure}[!t]
\begin{center}
\subfigure[Coverage probability]{
\includegraphics[width=0.95\columnwidth]{lambda-pcov}
}
\subfigure[Average length]{
\includegraphics[width=0.95\columnwidth]{lambda-length}
}
\end{center}
\caption{\label{fig:lambda-pcov}Coverage probability and average length of the different confidence intervals for the ML estimator of the parameter $\lambda$ of the exponential distribution.}
\end{figure}
Again, I have generated $N=10^6$ times $n$ exponentially distributed random
numbers with $\lambda=2$ in order to simulate the distribution of
$\hat{\lambda}$ and to compare $P_{cov}$ and average length of the different
confidence intervals. The results are shown in Fig.~\ref{fig:lambda-pcov}.
The classical interval with $\hat{\sigma}_{\mbox{\scriptsize\em HM}}$ has the best
coverage probability, followed by the classical interval with
$\hat{\sigma}_{\mbox{\scriptsize\em JK}}$. Among the bootstrap intervals,
the $BC_a$ interval has the highest coverage probability, and again the
percentile bootstrap performs better than the basic bootstrap.
Venables' \& Ripley's recommendation in favor of the basic bootstrap must
therefore be rejected. Overall, the bootstrap intervals show a
coverage probability that is clearly below the nominal value $1-\alpha$.
It is surprising that the confidence interval based on the jackknife variance
is wider, but has a smaller coverage probability than the interval based
on the Hessian. A closer look at the simulated distribution of
$\hat{\lambda}$ reveals that in this case the ML estimator is biased and is on average too large\footnote{ML estimators are only guaranteed to be {\em asymptotically} unbiased for large $n$.}.
As $\hat{\sigma}_{\mbox{\scriptsize\em HM}}$ is proportional to $\hat{\lambda}$
according to Eq.~(\ref{eq:exp-sigma-HM}), the confidence interval is wider
when the estimated value is too large, which compensates the bias of the ML
estimator in this case. This led to a correlation between
$|\hat{\lambda}-\lambda|$ and
$\hat{\sigma}_{\mbox{\scriptsize\em HM}}$ of about
$0.60$ in the Monte Carlo simulations for $n=20$, but of only about $0.40$
between $|\hat{\lambda}-\lambda|$ and
$\hat{\sigma}_{\mbox{\scriptsize\em JK}}$. This explains why
$P_{cov}$ can be smaller for the wider interval.
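The jackknife variance estimate used above follows the standard leave-one-out formula. A minimal Python sketch of the same idea (not the report's R listing):

```python
from math import sqrt

def jackknife_se(data, stat):
    """Jackknife standard error: recompute the statistic on each
    leave-one-out sample, then
    se^2 = (n - 1) / n * sum_i (theta_i - theta_bar)^2."""
    n = len(data)
    loo = [stat(data[:i] + data[i + 1:]) for i in range(n)]
    bar = sum(loo) / n
    return sqrt((n - 1) / n * sum((t - bar) ** 2 for t in loo))
```

As a sanity check, for the sample mean the jackknife reproduces the classical standard error $s/\sqrt{n}$ exactly.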
\section{Conclusions}
\label{sec:fazit}
For the practitioner, the comparative evaluation of the different confidence
intervals leads to the following recommendations:
\begin{enumerate}
\item For a relative frequency, the HPD interval (listing \ref{lst:hpd-binom})
or the Wilson interval (Eq.~(\ref{eq:freq:pwilson})) should be used.
The Wilson interval has the advantage of a closed formula, but it has a
smaller coverage probability than the HPD interval for $p$ values
close to zero or one.
\item For mean values, the classical confidence interval based on the $t$
distribution should be used (Eq.~(\ref{eq:freq:mu-t})).
\item For ML estimators with a smooth log-likelihood function,
the confidence interval $z_{1-\alpha/2}\cdot\hat{\sigma}$ should be used.
The standard error $\hat{\sigma}$ can be estimated either from the Hessian matrix
or, in a simpler way, by means of the jackknife (listing \ref{lst:jackknife}).
\item In the remaining cases, the $BC_a$ bootstrap interval should be used.
\end{enumerate}
The results of this technical report thus confirm the already cited remark
by Efron \cite{efron87}:
\begin{quote}
``Bootstrap methods are intended to supplement rather than replace
parametric analysis, particularly when parametric methods can't be used
because of modeling uncertainties or theoretical intractability.''
\end{quote}
\bibliographystyle{ieeetr}
\section{Appendix}
\label{appendix}
Proofs in the appendix use definitions from \autoref{sec:results}.
\subsection{Proof of \autoref{lem:finitely-many-nonexp-dirs}}
The lemma also follows from existence of generating sets introduced by Cyr and Kra \cite{CyrKra15}. Here we show a proof using polynomials:
\env{proof}{
By Lemma~5 of \cite{KariSzabados2016Arxiv}, there exists a non-trivial annihilator of the configuration. Let $F$ denote the convex hull of its support; it has finitely many edges. We claim that only the directions of the edges can be one-sided non-expansive for $X_c$.
Let ${\bm{u}}$ be a direction such that $F$ has a vertex ${\bm{v}}$ in direction ${\bm{u}}$. Let $\ell$ be the line in direction ${\bm{u}}$ which is closest to $H_{\bm{u}}$ but lies outside of $H_{\bm{u}}$. Then $F$ can be translated so that $F \setminus \{{\bm{v}}\}$ lies in $H_{\bm{u}}$ and ${\bm{v}} \in \ell$. The linear combination given by the annihilator then determines the value $c_{\bm{v}}$ from the values in $H_{\bm{u}}$, and, by translation, on the whole line $\ell$. Moving line by line in direction ${\bm{u}}$, all the values of $c$ are determined. We have proved that ${\bm{u}}$ is a one-sided expansive direction for $X_c$.
\qed}
\subsection{Proof of \autoref{lem:periodic-stripe-extends}}
The proof is by reduction to Lemma 39 of \cite{KariSzabados2016Arxiv}:
\env{lemma*}{[Lemma~39]
Let $c$ be a counterexample candidate and ${\bm{v}} \in \mathbb{Z}^2$ a non-zero vector. Let $S$ be an infinite stripe in the direction of ${\bm{v}}$ of maximal width such that $\phi$ does not fit in it. Then $c$ restricted to the stripe $S$ is non-periodic in the direction of ${\bm{v}}$.
}
We assume the reader is comfortable with the notions used in its statement; let us however briefly describe some of them. A two-dimensional configuration is a \emph{counterexample candidate} if it is a normalized, non-periodic, finitary integral configuration which has a non-trivial annihilator. Without going into further details, \emph{normalized} configurations have the property from \autoref{lem:normalized}, and any configuration can be made normalized by changing the symbols in $\mathcal{A}$. The polynomial $\phi$ is the largest polynomial (w.r.t. polynomial division) which divides every annihilator; it is the product of the polynomials $\phi_i$ from the statement of \autoref{lem:explicit-decomposition}.
\env{proof}{[of \autoref{lem:periodic-stripe-extends}]
Without loss of generality assume that $c$ is normalized, and, for a contradiction, assume that it is non-periodic; then $c$ is a counterexample candidate. By \autoref{lem:normalized} there is an annihilator which fits in $-D$, and therefore also in $S$. Then $\phi$ fits in $S$ as well. Let $T \subset S^\circ$ be a stripe in direction ${\bm{u}}$ of maximal width such that $\phi$ does not fit in it. Since $c \restr T$ is periodic in direction ${\bm{u}}$, by Lemma~39 so is $c$.
\qed}
\section{Introduction}
Let $\mathcal{A}$ be a finite set of symbols and $d$ a positive integer, the dimension. A \emph{$d$-dimensional symbolic configuration\/} $c$ is an element of $\mathcal{A}^{\mathbb{Z}^d}$, that is, a map assigning a symbol to every vertex of the lattice $\mathbb{Z}^d$. The symbol at position ${\bm{v}} \in \mathbb{Z}^d$ is denoted $c_{\bm{v}}$.
For a non-empty finite domain $D \subset \mathbb{Z}^d$, the elements of $\mathcal{A}^D$ are \emph{$D$-patterns}. We can observe patterns in a given configuration, the $D$-pattern occurring in $c$ at position ${\bm{v}} \in \mathbb{Z}^d$ is the map
\env{align*}{
p \colon D &\rightarrow \mathcal{A} \\
{\bm{u}} &\mapsto c_{{\bm{v}}+{\bm{u}}}.
}
The number of distinct $D$-patterns occurring in $c$, denoted $P_c(D)$, is the \emph{$D$-pattern complexity} of $c$. We say that $c$ has \emph{low complexity} if $P_c(D) \leq \abs D$ holds for some $D$.
We study what conditions on complexity imply that a configuration is periodic, that is, when there exists a non-zero vector ${\bm{u}}$ such that $c_{\bm{v}} = c_{{\bm{v}}+{\bm{u}}}$ for all ${\bm{v}} \in \mathbb{Z}^d$. The situation in one dimension was described by Morse and Hedlund \cite{MorseHedlund38}, let us denote $\llbracket n \rrbracket = \{0,\dots,n-1\}$:
\env{theorem*}{[Morse--Hedlund]
Let $c$ be a one-dimensional symbolic configuration. Then $c$ is periodic if and only if there exists $n \in \mathbb{N}$ such that $P_c(\llbracket n \rrbracket) \leq n$.
}
As a corollary, non-periodic one-dimensional configurations satisfy $P_c(\llbracket n \rrbracket) \geq n+1$. Those for which equality holds for every $n$ are \emph{Sturmian words}, they are a central topic of combinatorics on words and have connections to discrete geometry, finite automata and mathematical physics \cite{Lothaire2002algebraic, Allouche2003automatic, Damanik1999}. Note that Sander and Tijdeman \cite{SanderTijdeman00} extended the Morse--Hedlund theorem for patterns of other shapes than $\llbracket n \rrbracket$, they showed that in fact any low complexity one-dimensional symbolic configuration is periodic.
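The equality $P_c(\llbracket n \rrbracket) = n+1$ for Sturmian words can be checked numerically on the Fibonacci word, a standard Sturmian example. A short sketch (Python; the word is generated by the substitution $0 \mapsto 01$, $1 \mapsto 0$, and a long finite prefix contains all short factors):

```python
def fibonacci_word(length):
    """Prefix of the Fibonacci word via the substitution 0 -> 01, 1 -> 0."""
    w = "0"
    while len(w) < length:
        w = "".join("01" if ch == "0" else "0" for ch in w)
    return w[:length]

def factor_complexity(w, n):
    """Number of distinct length-n factors (the 1D analogue of P_c)."""
    return len({w[i:i + n] for i in range(len(w) - n + 1)})
```

Counting factors of the length-1000 prefix gives $n+1$ for each small $n$, matching the Sturmian bound.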
Nivat's conjecture \cite{Nivat97} is a natural extension of the theorem to two-dimensions. To simplify notation we write $P_c(m,n) = P_c(\llbracket m \rrbracket \times \llbracket n \rrbracket)$.
\env{conjecture*}{[Nivat]
If a two-dimensional symbolic configuration $c$ satisfies $P_c(m,n) \leq mn$ for some $m,n \in \mathbb{N}$, then it is periodic.
}
Nivat's conjecture is tight in the sense that there exist non-periodic configurations satisfying $P_c(m,n) = mn+1$ for all $m,n \in \mathbb{N}$, all such configurations were classified by Cassaigne \cite{Cassaigne99}. Note that the conjecture is not an equivalence, the opposite implication is easily seen to be false.
There have been a number of partial results towards the conjecture. Cyr and Kra \cite{CyrKra16} proved that having $P_c(3,n) \leq 3n$ for some $n \in \mathbb{N}$ implies periodicity, which was an improvement on a previous result with constant 2 \cite{SanderTijdeman02}. In another direction, there are results showing that having $P_c(m,n) \leq \alpha m n$ for some $m,n \in \mathbb{N}$ implies periodicity for a suitable real $\alpha$. The best result to date is also by Cyr and Kra \cite{CyrKra15} with $\alpha = 1/2$, which improved on previous constants $\alpha = 1/16$ \cite{QuasZamboni04} and $\alpha = 1/144$ \cite{EpifanioKoskasMignosi03}. Recently, Kari and the author \cite{KariSzabados2015ICALP} proved an asymptotic version of the conjecture: If $P_c(m,n) \leq mn$ for infinitely many pairs $(m,n) \in \mathbb{N}^2$, the configuration is periodic.
The Morse--Hedlund theorem does not analogously generalize to higher dimensions. There exists a three-dimensional configuration with low block complexity which is not periodic \cite{SanderTijdeman00}.
\subsection*{Our contributions}
In \cite{KariSzabados2015ICALP}, Kari and the author introduced an algebraic view on symbolic configurations. Following their definition, let a \emph{configuration} be any formal power series in $d$ variables $x_1, \dots, x_d$ with complex coefficients, that is, an element of
\env{align*}{
\mathbb{C}[[X^{\pm1}]] = \makesetbig{\sum_{{\bm{v}} \in \mathbb{Z}^d} c_{\bm{v}} X^{\bm{v}} }{c_{\bm{v}} \in \mathbb{C}}
}
where $X^{\bm{v}}$ is a shorthand for $x_1^{v_1} \cdots x_d^{v_d}$.\footnote{For the most of this paper, however, it is enough to consider configurations to be elements of $\mathbb{C}^{\mathbb{Z}^d}$.} If the configuration has only integer coefficients it is called \emph{integral}, if they come from a finite set the configuration is \emph{finitary}. A symbolic configuration can be identified with a finitary integral configuration if the symbols from $\mathcal{A}$ are chosen to be integers. Kari and the author in \cite{KariSzabados2015ICALP} proved:
\env{theorem*}{[Decomposition theorem]
Let $c$ be a low complexity $d$-dimensional finitary integral configuration. Then there exists $k \in \mathbb{N}$ and periodic $d$-dimensional configurations $c_1, \dots, c_k$ such that $c = c_1 + \dots + c_k$.
}
Note that the summands do not have to be finitary configurations. The minimal possible number of components $k$ in the decomposition plays an important role.
In this paper we prove:
\env{theorem}{
\label{thm:main-thm}
Let $c$ be a two-dimensional configuration satisfying $P_c(m,n) \leq mn$ for some $m,n \in \mathbb{N}$. If $c$ is a sum of two periodic configurations then it is periodic.
}
In the proof of the asymptotic version of Nivat's conjecture given in \cite{KariSzabados2016Arxiv}, configurations which are a sum of a horizontally and a vertically periodic configuration had to be handled separately using a rather technical combinatorial approach. \autoref{thm:main-thm} is of particular interest since it covers this case.
In this paper we revisit the method of Van Cyr and Bryna Kra \cite{CyrKra15, CyrKra16}. They approach Nivat's conjecture from the point of view of symbolic dynamics. They use a refined version
of the classical notion of expansiveness of a subshift, a so called
\emph{one-sided non-expansiveness}. A key definition of theirs is that
of a \emph{balanced set} -- it is a shape $D \subset \mathbb{Z}^2$ which satisfies
a particular condition on the complexity $P_c(D)$.
(Note that this notion is different from balancedness usual in combinatorics on words.)
The crucial tool
they developed is a combinatorial lemma which links one-sided non-expansiveness
and balanced sets to periodicity of a configuration. However, in order to obtain
the main result of the paper from the lemma it still takes a rather lengthy
technical analysis.
We combine the algebraic method with ideas of Cyr and Kra. We start the exposition with a very basic introduction to the topic of symbolic dynamics. In \autoref{sec:symbolic-dynamics} we define a subshift, in \autoref{sec:geometry} we fix some geometric terminology, and in \autoref{sec:nonexpansiveness} we give definitions of non-expansiveness and one-sided non-expansiveness of a subshift.
In \autoref{sec:balanced-sets} we introduce a simplified version of a balanced set and prove \autoref{lem:cyr-kra} which connects balanced sets with periodicity using the ideas of Cyr and Kra. We use the lemma together with decomposition theorem to prove \autoref{thm:main-thm} in \autoref{sec:results}. As a corollary, we obtain an alternative proof of Theorem 1.2 of \cite{CyrKra15}, the main result of their paper:
\envparam[Cyr, Kra]{theorem*}{
Let $c$ be a configuration satisfying $P_c(m,n) \leq mn/2$ for some
$m,n \in \mathbb{N}$. Then $c$ is periodic.
}
\section{Symbolic dynamics and subshifts}
\label{sec:symbolic-dynamics}
Let us recall basic facts from symbolic dynamics, for a comprehensive reference and proofs see \cite{Kurka}.
Symbolic dynamics studies $\mathcal{A}^{\mathbb{Z}^d}$ as a topological space. Let us first make $\mathcal{A}$ a topological space by endowing it with the discrete topology. Then $\mathcal{A}^{\mathbb{Z}^d}$ is considered to be a topological space with the product topology.
Open sets in this topology are for example sets of the following form. Let $D \subset \mathbb{Z}^d$ be finite and $p \colon D \rightarrow \mathcal{A}$ arbitrary. Then
\env{align*}{
Cyl(p) := \makesetbig{c \in \mathcal{A}^{\mathbb{Z}^d}}{\forall {\bm{v}} \in D \colon c_{\bm{v}} = p_{\bm{v}}}
}
is an open set, also called a \emph{cylinder}. In fact, the collection of cylinders $Cyl(p)$ for all possible $p$ forms a subbase of the topology on $\mathcal{A}^{\mathbb{Z}^d}$.
For a vector ${\bm{u}} \in \mathbb{Z}^d$, the \emph{shift} operator $\tau_{\bm{u}} \colon \mathcal{A}^{\mathbb{Z}^d} \rightarrow \mathcal{A}^{\mathbb{Z}^d}$ is defined by $(\tau_{\bm{u}}(c))_{\bm{v}} = c_{{\bm{v}} - {\bm{u}}}$. Informally, $\tau_{\bm{u}}$ shifts a configuration in the direction of vector ${\bm{u}}$.
The set $\mathcal{A}^{\mathbb{Z}^d}$ is called the \emph{full shift}. A subset $X \subset \mathcal{A}^{Z^d}$ is called a \emph{subshift} if it is a topologically closed set which is invariant under all shifts $\tau_{\bm{u}}$:
\env{align*}{
\forall {\bm{u}} \in \mathbb{Z}^d \colon c \in X \Rightarrow \tau_{\bm{u}}(c) \in X.
}
Subshifts are the central objects of study in symbolic dynamics.
Let $c$ be a symbolic configuration. We denote by $X_c$ the \emph{orbit closure} of $c$, that is, the smallest subshift which contains $c$. It can be shown that $X_c$ contains exactly those configurations $c'$ whose finite patterns are among the finite patterns of $c$. In particular, for any $c' \in X_c$ and a finite domain $D$ we have $P_{c'}(D) \leq P_c(D)$.
\env{example}{
\label{ex:cross_orbit_closure}
Let us give an example of taking the orbit closure. Let $c \in \{0,1\}^{\mathbb{Z}^2}$ be such that $c_{ij} = 1$ if $i = 0$ or $j = 0$, and $c_{ij} = 0$ otherwise. When pictured, the configuration $c$ consists of a large cross with its center at $(0,0)$. The orbit closure $X_c$ then consists of four types of configurations: a cross, a horizontal line, a vertical line and the all-zero configuration, in all possible translations, see \autoref{fig:cross_orbit_closure}. It is easy to see that any pattern which occurs in them also occurs in $c$, and it is not difficult to prove that these are all such configurations.
\qed}
\env{figure}{
\centering
\includegraphics[scale=0.7]{./img/cross_orbit_closure}
\caption{Four types of configurations in the orbit closure $X_c$ from \autoref{ex:cross_orbit_closure}. The gray color corresponds to value $1$, white is $0$.}
\label{fig:cross_orbit_closure}
}
\section{Geometric notation and terminology}
\label{sec:geometry}
In the sequel we will be concerned with the geometry of $\mathbb{Z}^2$. Let us establish some notation and terminology.
We view $\mathbb{Z}^2$ as a subset of the vector space $\mathbb{Q}^2$. A \emph{direction} is an equivalence class of $\mathbb{Q}^2 \setminus \{(0,0)\}$ modulo the equivalence relation $u \sim v$ iff $u=\lambda v$ for some $\lambda>0$. By a slight abuse of notation, we identify a non-zero vector ${\bm{u}} \in \mathbb{Z}^2$ with the direction ${\bm{u}} \mathbb{Q}^+$.
Let ${\bm{u}} \in \mathbb{Z}^2$ be non-zero. An (undirected) \emph{line} in $\mathbb{Z}^2$ is a set of the form
$$\makeset{{\bm{v}} + q{\bm{u}}}{q \in \mathbb{Q}} \cap \mathbb{Z}^2$$
for some ${\bm{v}} \in \mathbb{Z}^2$. We call both ${\bm{u}}$ and $-{\bm{u}}$ a \emph{direction} of the line. We define a \emph{directed line} to be a line augmented with one of the two possible directions.
Let $\ell$ be a directed line in direction ${\bm{u}}$ going through ${\bm{v}} \in \mathbb{Z}^2$. The \emph{half-plane} determined by $\ell$ is defined by
\env{align*}{
H_\ell = \makesetbig{{\bm{v}} + {\bm{w}}}{{\bm{w}} \in \mathbb{Z}^2, w_1 u_2 - u_1 w_2 \geq 0}.
}
With the usual choice of coordinates it is the half-plane ``on the right'' from the line.
Let $H_{\bm{u}}$ denote the half-plane determined by the directed line in direction ${\bm{u}}$ going through the origin.
We say that a non-empty $D \subset \mathbb{Z}^2$ is \emph{convex} if $D$ can be written as an intersection of half-planes. \emph{Convex hull} of $D$, denoted $Conv(D)$, is the smallest convex set containing $D$. Assume $\ell$ is a directed line in direction ${\bm{u}}$ such that $D \subset H_\ell$ and $\ell \cap D$ is non-empty. If $\abs{\ell \cap D} > 1$ we call it the \emph{edge} of $D$ in direction ${\bm{u}}$, otherwise we call it the \emph{vertex} of $D$ in direction ${\bm{u}}$. Note that a vertex is a vertex for many directions, but an edge has a unique direction (as long as $D$ is not contained in a line). See \autoref{fig:convex_set} for an example.
\env{figure}{
\centering
\includegraphics[scale=0.6]{./img/convex_set}
\caption{A convex set. The point ${\bm{v}}$ is a vertex of the set for both directions $\uu1$ and $\uu2$. The set of three marked points $e$ is the edge in direction $\uu3$.}
\label{fig:convex_set}
}
Let ${\bm{u}}$ be a direction and $\ell, \ell'$ two directed lines in direction ${\bm{u}}$. If
$$S = H_\ell \setminus H_{\ell'}$$
is non-empty, then $S$ is called a \emph{stripe} in direction ${\bm{u}}$. We call $\ell, \ell'$ the \emph{inner} and \emph{outer} boundary of $S$ respectively. Let $S^\circ = S \setminus \ell$ be the \emph{interior} of $S$.
For $A, B \subset \mathbb{Z}^2$, we say that $A$ \emph{fits in} $B$ if there exists a translation of $A$ which is a subset of $B$.
\section{Non-expansiveness and one-sided non-expansiveness}
\label{sec:nonexpansiveness}
It can be verified that the topology on $\mathcal{A}^{\mathbb{Z}^d}$ is compact and also metrizable. Note that shift operators $\tau_{\bm{u}}$ are continuous maps on $\mathcal{A}^{\mathbb{Z}^d}$. Expansiveness can be defined in general for a continuous action on a compact metric space, the definition is however too general for our purposes. We give a definition specific to the case of $\mathcal{A}^{\mathbb{Z}^2}$.
Let $X \subset \mathcal{A}^{\mathbb{Z}^2}$ be a subshift and ${\bm{u}}$ a direction. Then ${\bm{u}}$ is an \emph{expansive direction} for $X$ if there exists a stripe $S$ in direction ${\bm{u}}$ such that
\env{align*}{
\forall c, e \in X \colon \ \ c \restr S = e \restr S \ \ \Rightarrow \ \ c = e.
}
Informally speaking, ${\bm{u}}$ is an expansive direction for $X$ if a configuration in $X$ is uniquely determined by its coefficients in a wide enough stripe in direction ${\bm{u}}$.
A two-dimensional configuration is \emph{doubly periodic} if it has two linearly independent period vectors. The following classical theorem links double periodicity of a configuration with expansiveness. It is a corollary of a theorem by Boyle and Lind \cite{BoyleLind1997}.
\env{theorem}{
Let $c$ be a symbolic configuration. Then $c$ is doubly periodic iff all directions are expansive for $X_c$.
\qed}
Let $X \subset \mathcal{A}^{\mathbb{Z}^2}$ be a subshift and ${\bm{u}}$ a direction. Then ${\bm{u}}$ is a \emph{one-sided expansive direction} for $X$ if
$$\forall c, e \in X \colon \ \ c \restr{H_{\bm{u}}} = e \restr{H_{\bm{u}}} \ \ \Rightarrow \ \ c = e.$$
Equivalently, ${\bm{u}}$ is a one-sided expansive direction for $X$ if there exists a wide enough stripe $S$ in direction ${\bm{u}}$ such that $\forall c, e \in X \colon c \restr S = e \restr S \Rightarrow c \restr{H_{-{\bm{u}}}} = e \restr{H_{-{\bm{u}}}}$. See \autoref{fig:expansiveness} for a comparison of the notion of expansiveness and one-sided expansiveness.
\env{figure}{
\centering
\includegraphics[scale=0.35]{./img/expansiveness}
\caption{The figure on the left illustrates expansiveness -- values of the configuration inside the stripe determine the whole configuration. On the right we see one-sided expansiveness in direction $(1,2)$ -- values in the half-plane $H_\ell$, or equivalently in a wide enough stripe, determine the values in the half-plane $\mathbb{Z}^2 \setminus H_\ell$.}
\label{fig:expansiveness}
}
\env{example}{[Ledrappier's subshift]
It is possible for a subshift to be one-sided expansive but non-expansive in the same direction. Consider the subshift $X \subset \{0,1\}^{\mathbb{Z}^2}$ consisting of configurations $c$ which satisfy $c_{ij} \equiv c_{i,j+1} + c_{i+1,j+1} \pmod2$. The upper half-plane of a configuration determines the whole configuration, since any single row determines the one below it. Therefore $(-1,0)$ is a one-sided expansive direction for $X$. However, no stripe in direction $(-1,0)$ determines a configuration from the subshift: for any row, there are always two possibilities for the row above it (they are complements of each other), so any horizontal stripe can be extended to the upper half-plane in infinitely many ways.
\qed}
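The asymmetry in the example can be checked mechanically on finite windows: the rule computes each row from the row above it (losing one cell per step), while a row constrains the row above only up to a global complement. A sketch (Python; finite windows only, names illustrative):

```python
def row_below(row):
    """Ledrappier rule: below[i] = row[i] + row[i+1] (mod 2); a length-m
    row determines a length-(m-1) row below it."""
    return [(row[i] + row[i + 1]) % 2 for i in range(len(row) - 1)]

def rows_above(row):
    """A row determines the row above only up to complement: fixing the
    first cell to 0 or 1 forces the rest via above[i+1] = row[i] - above[i]."""
    out = []
    for first in (0, 1):
        above = [first]
        for v in row:
            above.append((v - above[-1]) % 2)
        out.append(above)
    return out
```

Both candidates returned by `rows_above` are cellwise complements of each other, and each maps back to the given row under `row_below`.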
We are primarily interested in non-expansive directions. In our setup it is known that there are only finitely many of them; we omit the proof for space reasons. \onlinever{(See Appendix.)}
\env{lemma}{
\label{lem:finitely-many-nonexp-dirs}
Let $c$ be a low complexity two-dimensional configuration. Then there are at most finitely many one-sided non-expansive directions for $X_c$.
\qed}
For later use it will be practical to define non-expansiveness explicitly. Let $X \subset \mathcal{A}^{\mathbb{Z}^2}$ be a subshift and $S$ a stripe in direction ${\bm{u}}$. We say that $S$ is an \emph{ambiguous stripe in direction ${\bm{u}}$} if there exist $c, e \in X$ such that
\env{align}{
c \restr{S^\circ} = e \restr{S^\circ}, \ \textrm{ but }\ c \restr S \ne e \restr S. \label{eq:ambig_stripe}
}
We say that $c \in X$ \emph{contains} an ambiguous stripe $S$ if there exists $e \in X$ satisfying (\ref{eq:ambig_stripe}). Informally, a stripe is ambiguous if its interior does not determine the inner boundary.
\env{definition*}{
Let ${\bm{u}}$ be a direction and $X \subset \mathcal{A}^{\mathbb{Z}^2}$ a subshift. Then ${\bm{u}}$ is \emph{one-sided non-expansive direction} if there exists an ambiguous stripe in direction ${\bm{u}}$ of arbitrary width.
}
We leave the proof that this is the converse of the earlier definition of one-sided expansiveness to the reader.
\section{Balanced sets}
\label{sec:balanced-sets}
Let $c$ be a fixed symbolic configuration.
\env{definition}{
\label{def:balanced-set}
Let $B \subset \mathbb{Z}^2$ be a finite and convex set, ${\bm{u}}$ a direction and $E$ an edge or a vertex of $B$ in direction ${\bm{u}}$. Then $B$ is ${\bm{u}}$-balanced if:
\envparam[(i)]{enumerate}{
\item $P_c(B) \leq \abs B$
\item $P_c(B) < P_c(B \setminus E) + \abs E$ \label{few_extensions}
\item The intersection of $B$ with any line in direction ${\bm{u}}$ is either empty or of size at least $\abs E - 1$. \label{long_cuts}
}
}
The three conditions of the definition can be interpreted as follows. The first one simply states that $B$ is a low complexity shape. The second condition limits the number of $(B \setminus E)$-patterns which do not extend uniquely to a $B$-pattern: there are strictly fewer than $\abs E$ of them. The third condition is implied whenever the length of the edge in direction ${\bm{u}}$ is less than or equal to the length of the edge in the opposite direction, as can be seen in the next proof.
\env{lemma}{
\label{lem:balanced-set-existence}
Let $c$ be such that $P_c(m,n) \leq mn$ holds for some $m,n \in \mathbb{N}$ and ${\bm{u}}$ be a direction. Then there exists a ${\bm{u}}$-balanced or $(-{\bm{u}})$-balanced set. Moreover, if ${\bm{u}}$ is horizontal or vertical, then there exists a ${\bm{u}}$-balanced set.
}
\env{proof}{
Let $D$ be an $m \times n$ rectangle, so that $P_c(D) \leq \abs D$. Let us define a sequence of convex shapes $D = D_0 \supset D_1 \supset \dots \supset D_k = \emptyset$ such that $D_{i} \setminus D_{i+1}$ is the edge of $D_i$ in direction $(-1)^i{\bm{u}}$. Informally, the sequence represents shaving off an edge (or a vertex) of the shape alternately in directions ${\bm{u}}$ and $-{\bm{u}}$. See \autoref{fig:balanced_set_existence} for an illustration.
Consider the expression $P_c(D_i) - \abs{D_i}$ as a function of $i$. For $i = 0$ its value is non-positive and for $i = k$ its value is $1$. Let $i \in [0,k-1]$ be the smallest index such that $0 < P_c(D_{i+1}) - \abs{D_{i+1}}$; then we have
$$P_c(D_i) - \abs{D_i} \leq 0 < P_c(D_{i+1}) - \abs{D_{i+1}}.$$
Denote $E = D_i \setminus D_{i+1}$; it is an edge or a vertex of $D_i$ in direction ${\bm{u}}$ or $-{\bm{u}}$. Adding $\abs{D_i}$ throughout and rewriting gives $P_c(D_i) \leq \abs{D_i} < P_c(D_i \setminus E) + \abs E$.
We show that $B = D_i$ is a balanced set by verifying that \textit{(\ref{long_cuts})} of \autoref{def:balanced-set} holds. Without loss of generality let the direction of $E$ be ${\bm{u}}$. Then, by construction, the length of $E$ is less than or equal to the length of the edge in direction $-{\bm{u}}$. In fact, if we consider the convex hull of $B$ in $\mathbb{Q}^2$, any line in direction ${\bm{u}}$ intersects it in a line segment of length at least $d$, the length of the edge. Any line segment of length at least $d$ in direction ${\bm{u}}$ contains either no integer points or at least $\abs E - 1$ of them, and we are done.
If ${\bm{u}}$ is either horizontal or vertical, then instead of alternating the direction of the shaved-off edges, we can always shave off the edge in direction ${\bm{u}}$. It is then always the shortest edge in direction ${\bm{u}}$, so the verification of part \textit{(\ref{long_cuts})} goes through.
\qed}
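The shaving procedure from the proof can be sketched algorithmically. In the snippet below (ours, purely illustrative) we take the edge of a finite convex lattice set in direction ${\bm{u}}$ to be the set of its points on the supporting line whose normal is ${\bm{u}}$ rotated by $90$ degrees; this is one concrete reading of the construction, not code from the paper.

```python
def edge(B, u):
    """Points of B on the supporting line in direction u (assumed convention:
    maximize the linear form whose normal is u rotated by 90 degrees)."""
    w = (-u[1], u[0])
    m = max(w[0] * x + w[1] * y for (x, y) in B)
    return {p for p in B if w[0] * p[0] + w[1] * p[1] == m}

def shaving_sequence(B, u):
    """D_0 = B ⊃ D_1 ⊃ ... ⊃ ∅, removing the edge in direction (-1)^i u."""
    seq = [set(B)]
    i = 0
    while seq[-1]:
        d = u if i % 2 == 0 else (-u[0], -u[1])
        seq.append(seq[-1] - edge(seq[-1], d))
        i += 1
    return seq

B = {(x, y) for x in range(5) for y in range(5)}   # the 5 x 5 rectangle
seq = shaving_sequence(B, (2, 1))
assert seq[-1] == set()
assert all(s2 < s1 for s1, s2 in zip(seq, seq[1:]))   # strictly nested
```

Since every nonempty convex set has a nonempty edge, the sequence is strictly decreasing and terminates at $\emptyset$, as used in the proof.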
\env{figure}{
\centering
\includegraphics[scale=1.0]{./img/balanced_set_existence}
\caption{Shaving off edges or vertices of a $5 \times 5$ rectangle alternately in directions $(2, 1)$ and $(-2, -1)$. Small numbers indicate the order in which the edges or vertices were removed.}
\label{fig:balanced_set_existence}
}
Next we present \autoref{lem:cyr-kra} which connects non-expansiveness and balanced sets with periodicity, based on the method of Cyr and Kra. Periodicity in the proof first arises in a stripe from the use of the Morse--Hedlund theorem. This part of the proof follows Lemma 2.24 from \cite{CyrKra15}. The periodicity is then extended to the whole configuration by the following lemma, which is a corollary of Lemma 39 from \cite{KariSzabados2016Arxiv}. We omit the proof for space reasons. \onlinever{(See Appendix.)}
\env{lemma}{
\label{lem:periodic-stripe-extends}
Let $c$ be a two-dimensional configuration and $D$ a non-empty finite subset of $\mathbb{Z}^2$ such that $P_c(D) \leq \abs D$. Let $S$ be a stripe in direction ${\bm{u}}$ such that $D$ fits in $S$. If $c \restr{S^\circ}$ is periodic with a period in direction ${\bm{u}}$, then $c$ is also periodic with a period in direction ${\bm{u}}$.
\qed}
\env{lemma}{
\label{lem:cyr-kra}
Let $c$ be a configuration and $B$ a ${\bm{u}}$-balanced set. Assume that $c$ contains an ambiguous stripe for $X_c$ in direction ${\bm{u}}$ such that $B$ fits in the stripe. Then $c$ is periodic in direction ${\bm{u}}$.
}
\env{proof}{
Let $E$ be the edge or vertex of $B$ in direction ${\bm{u}}$, denote the stripe by $S$, and let $\ell$ be the inner boundary of $S$ in direction ${\bm{u}}$. Without loss of generality assume $B \subset S$, $E \subset \ell$, and that ${\bm{u}}$ is not an integer multiple of a smaller vector. Let $e \in X_c$ be such that (\ref{eq:ambig_stripe}) holds.
Denote the points of $E$ consecutively by $e_1, \dots, e_n$ (see \autoref{fig:lemma_cyr_kra}). Define a sequence $B = D_n \supset \dots \supset D_1 \supset D_0 = B \setminus E$ by setting $D_{i-1} = D_i \setminus \{e_i\}$. Consider the values $P_c(D_i) - \abs{D_i}$. Since $B$ is a balanced set, by \textit{(\ref{few_extensions})} we have $P_c(D_n) - \abs{D_n} < P_c(D_0) - \abs{D_0}$; let $k \in [0,n-1]$ be such that
$$P_c(D_{k+1}) - \abs{D_{k+1}} < P_c(D_k) - \abs{D_k}.$$
Adding $\abs{D_{k+1}}$ to both sides yields $P_c(D_{k+1}) < P_c(D_k) + 1$. On the other hand, $P_c(D_k) \leq P_c(D_{k+1})$ since $D_k \subset D_{k+1}$, and therefore we have $P_c(D_k) = P_c(D_{k+1})$. In other words, a $D_k$-pattern uniquely determines the value at position $e_{k+1}$.
We will show that $c \restr{D_k+i{\bm{u}}}\, \ne e \restr{D_k+i{\bm{u}}}$ for all $i$. Suppose to the contrary that there is $j$ such that $c \restr{D_k+j{\bm{u}}} = e \restr{D_k+j{\bm{u}}}$. Using the property of $D_k$, we have $c \restr{e_{k+1}+j{\bm{u}}} = e \restr{e_{k+1}+j{\bm{u}}}$. Therefore $c \restr{D_k + (j+1) {\bm{u}}} = e \restr{D_k + (j+1) {\bm{u}}}$, and we can proceed by induction to show $c \restr{D_k + j' {\bm{u}}} = e \restr{D_k + j' {\bm{u}}}$ for all $j' > j$. Analogously, by constructing the sets $D_i$ by removing edge points from the other end, it can be shown that $c \restr{D_k + j' {\bm{u}}} = e \restr{D_k + j' {\bm{u}}}$ also for all $j' < j$. We have proved $c \restr S = e \restr S$, which contradicts the ambiguity of $S$.
Thus every $(B \setminus E)$-pattern $c \restr{(B \setminus E) + i {\bm{u}}}$ has at least two possible extensions into a $B$-pattern. Part \textit{(\ref{few_extensions})} of \autoref{def:balanced-set} implies that there are at most $\abs E - 1$ such patterns. Let $T$ be a thinner stripe in direction ${\bm{u}}$ defined by $T = \bigcup_{i \in \mathbb{Z}} (B \setminus E) + i {\bm{u}}$. Using part \textit{(\ref{long_cuts})} of \autoref{def:balanced-set}, the values of $c$ on every line $\lambda \subset T$ in direction ${\bm{u}}$ contain at most $\abs E - 1$ distinct subsegments of length at least $\abs E - 1$. By the Morse--Hedlund theorem, the values on the line repeat periodically. Therefore $c \restr{T}$ is periodic in direction ${\bm{u}}$.
$B$ fits in the stripe $T \cup \ell$ and its interior $T$ is periodic in direction ${\bm{u}}$. By \autoref{lem:periodic-stripe-extends} also $c$ is periodic in direction ${\bm{u}}$.
\qed}
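The one-dimensional Morse--Hedlund theorem used above states that a bi-infinite word with at most $n$ distinct factors of length $n$ is periodic. The easy converse direction can be illustrated on a finite window (the snippet is ours, purely for illustration):

```python
def factors(word, n):
    """Distinct length-n factors of a finite word."""
    return {word[i:i + n] for i in range(len(word) - n + 1)}

w = "abc" * 20                    # periodic with period 3
assert len(factors(w, 5)) == 3    # a period-3 word has 3 factors per length
assert len(factors(w, 10)) == 3
```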
\begin{figure}
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[scale=0.4]{./img/lemma_cyr_kra_1}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[scale=0.4]{./img/lemma_cyr_kra_2}
\end{subfigure}
\caption{Illustration of the proof of \autoref{lem:cyr-kra}.}
\label{fig:lemma_cyr_kra}
\end{figure}
\section{Main result}
\label{sec:results}
\env{theorem*}{[\autoref{thm:main-thm}]
Let $c$ be a two-dimensional configuration satisfying $P_c(m,n) \leq mn$ for some $m,n \in \mathbb{N}$. If $c$ is a sum of two periodic configurations then it is periodic.
}
\env{proof}{
For contradiction, assume $c$ is non-periodic and let $c_1, c_2$ be periodic configurations such that $c = c_1 + c_2$. Let $\uu1, \uu2$ be their respective vectors of periodicity. If they are linearly dependent, $c$ is periodic and we are done. Otherwise, define a parallelogram
\env{align*}{
D = \makesetbig{a \uu1 + b \uu2}{a, b \in [0, 1)} \cap \mathbb{Z}^2.
}
We can choose $\uu1, \uu2$ large enough so that an $m \times n$ rectangle fits in $D$. We can also assume that $\uu2 \in H_{\uu1}$. Denote $D_j = D + j \uu2$ and define a sequence of stripes $S_j = \bigcup_{i \in \mathbb{Z}} D_j + i\uu1$. The setup is illustrated in \autoref{fig:nivat_ord2_1}.
Assume that there are $j \ne j'$ such that $c \restr{D_j} = c \restr{D_{j'}}$. We claim that then $c \restr{S_j} = c \restr{S_{j'}}$. Note that since $c = c_1 + c_2$, for ${\bm{v}} \in \mathbb{Z}^2$ we have
\env{align*}{
(c_{({\bm{v}}+\uu1)+j\uu2} - c_{({\bm{v}}+\uu1)+j'\uu2}) - (c_{{\bm{v}}+j\uu2} - c_{{\bm{v}}+j'\uu2}) = 0.
}
In particular, if $c_{{\bm{v}}+j\uu2} = c_{{\bm{v}}+j'\uu2}$, then also $c_{({\bm{v}}+\uu1)+j\uu2} = c_{({\bm{v}}+\uu1)+j'\uu2}$. Since $c_{{\bm{v}}+j\uu2} = c_{{\bm{v}}+j'\uu2}$ holds for ${\bm{v}} \in D$, it also holds for ${\bm{v}} \in D + \uu1$, and by induction $c \restr{S_j} = c \restr{S_{j'}}$.
Since $c$ is finitary, there are only finitely many possible $D$-patterns; let $N$ be an upper bound on their number. There are also only finitely many stripe patterns $c \restr{S_j}$, since the pattern in $S_j$ is determined by the pattern in $D_{j}$. Because $c$ is not periodic, there exists $k \in \mathbb{Z}$ such that $c \restr{S_k} \ne c \restr {S_{k - N!}}$.
By \autoref{lem:balanced-set-existence}, there is either a $\uu1$-balanced or a $(-\uu1)$-balanced set $B$; without loss of generality assume the former. Since $c$ is non-periodic, by \autoref{lem:cyr-kra} there is no ambiguous stripe in $c$ in direction $\uu1$ in which $B$ fits. $B$ fits in every stripe $S_j$, therefore the values in any stripe $S_j$ determine the values in the whole half-plane on the side of the inner boundary of $S_j$.
By the pigeonhole principle, there are $j < j' \in [0, N]$ such that $c \restr {S_{k+j}} = c \restr {S_{k+j'}}$. The two stripes extend uniquely to the half-planes on the side of their inner boundary. Therefore the half-plane $H = \bigcup_{i \leq j'} S_i$ has period $(j'-j)\uu2$. Since $j'-j$ divides $N!$ and $S_k, S_{k-N!} \subset H$, we have a contradiction with $c \restr{S_k} \ne c \restr {S_{k - N!}}$.
\qed}
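The difference identity displayed in the proof can be verified numerically. The sketch below (ours, with arbitrarily chosen periodic building blocks) constructs $c = c_1 + c_2$ with periods $\uu1, \uu2$ and checks that differences along $\uu2$-shifts are invariant under a $\uu1$-shift.

```python
def periodic(u):
    """A configuration invariant under translation by u: any function of the
    linear form u[1]*x - u[0]*y works (an arbitrary choice for illustration)."""
    return lambda x, y: ((u[1] * x - u[0] * y) ** 2) % 7

u1, u2 = (3, 1), (1, 2)
c1, c2 = periodic(u1), periodic(u2)
c = lambda x, y: c1(x, y) + c2(x, y)

# identity from the proof: (c_{v+u1+j*u2} - c_{v+u1+j'*u2}) equals
# (c_{v+j*u2} - c_{v+j'*u2}) for every lattice point v
j, jp = 2, 5
for x in range(-5, 6):
    for y in range(-5, 6):
        lhs = c(x + u1[0] + j * u2[0], y + u1[1] + j * u2[1]) \
            - c(x + u1[0] + jp * u2[0], y + u1[1] + jp * u2[1])
        rhs = c(x + j * u2[0], y + j * u2[1]) \
            - c(x + jp * u2[0], y + jp * u2[1])
        assert lhs == rhs
```

The $c_1$ terms cancel because $c_1$ is $\uu1$-periodic, and the $c_2$ terms agree because $c_2$ is $\uu2$-periodic, which is exactly the cancellation used in the proof.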
\env{figure}{
\centering
\includegraphics[scale=0.35]{./img/nivat_ord2_1}
\caption{Proof of \autoref{thm:main-thm}.}
\label{fig:nivat_ord2_1}
}
\env{corollary}{
If a non-periodic configuration $c$ is a sum of two periodic ones, then $P_c(m,n) \geq mn+1$ for all $m,n \in \mathbb{N}$.
\qed}
We finish the exposition by reproving the result of Cyr and Kra from \cite{CyrKra15}. To do that, we need additional theory from \cite{KariSzabados2016Arxiv}. Multiplication of a two-dimensional configuration $c$ by a polynomial $f \in \mathbb{C}[x_1, x_2]$ is well defined. If $fc = 0$, we call $f$ an \emph{annihilator} of $c$.
We state the following two lemmas without proof; they are direct corollaries of Corollary 24 and Lemma 32 of \cite{KariSzabados2016Arxiv}, respectively.
\env{lemma}{
\label{lem:explicit-decomposition}
Let $c$ be a low complexity two-dimensional integral configuration. Then there exist $k \in \mathbb{N}$ and polynomials $\phi_1, \dots, \phi_k \in \mathbb{C}[x_1,x_2]$ with the following properties:
Every annihilator of $c$ is divisible by $\phi_1 \cdots \phi_k$. Furthermore, $c$ can be written as a sum of $k$, but no fewer, periodic configurations. If $g$ is a product of $0 \leq \ell < k$ of the polynomials $\phi_i$, then $gc$ can be written as a sum of $k-\ell$, but no fewer, periodic configurations.
\qed}
Any polynomial in $\mathbb{C}[x_1, x_2]$ can be written as $f = \sum_{{\bm{v}} \in \mathbb{Z}^2} a_{\bm{v}} X^{\bm{v}}$. The \emph{support} of $f$, denoted $\supp(f)$, is defined as the finite set of vectors ${\bm{v}} \in \mathbb{Z}^2$ such that $a_{\bm{v}} \ne 0$. We say that $f$ \emph{fits in} a subset $D \subset \mathbb{Z}^2$ if its support fits in $D$.
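The support and the action of a polynomial on a configuration can be sketched as follows. We adopt the convention $(fc)_{\bm{v}} = \sum_{\bm{w}} a_{\bm{w}}\, c_{{\bm{v}}-{\bm{w}}}$ from the power series formalism of \cite{KariSzabados2016Arxiv}; the helper names are ours.

```python
def support(f):
    """f is a dict mapping exponent vectors (i, j) to coefficients."""
    return {v for v, a in f.items() if a != 0}

def apply_poly(f, c, v):
    """(fc)_v under the assumed convention (fc)_v = sum_w f_w * c_{v-w}."""
    return sum(a * c(v[0] - w[0], v[1] - w[1]) for w, a in f.items())

# A configuration constant in the horizontal direction is annihilated
# by f = x1 - 1, whose support is {(1, 0), (0, 0)}:
c = lambda x, y: (y * y) % 5
f = {(1, 0): 1, (0, 0): -1}
assert support(f) == {(1, 0), (0, 0)}
assert all(apply_poly(f, c, (x, y)) == 0
           for x in range(-3, 4) for y in range(-3, 4))
```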
\env{lemma}{
\label{lem:normalized}
Let $c$ be a finitary configuration. Then the symbols of $\mathcal{A}$ can be changed to suitable integers such that if $P_c(D) \leq \abs D$ for some $D \subset \mathbb{Z}^2$, then there exists an annihilator $f$ of $c$ which fits in $-D$.
\qed}
\env{theorem}{
\label{thm:nivat-mn-over-2}
Let $c$ be a configuration such that $P_c(m,n) \leq mn/2$ for some $m,n \in \mathbb{N}$. Then $c$ is periodic.
}
\env{proof}{
Assume that the symbols of $\mathcal{A}$ have been renamed as in \autoref{lem:normalized}; then there exists an annihilator $f$ of $c$ which fits in an $m \times n$ rectangle. By \autoref{lem:explicit-decomposition}, we can write $f = \phi_1 \cdots \phi_k h$. If $k \leq 2$ then $c$ is periodic by \autoref{thm:main-thm}. Assume $k \geq 3$; we will show that this leads to a contradiction.
Let $g = \phi_3 \cdots \phi_k$, $c' = gc$, and let $m_g, n_g \in \mathbb{N}$ be the smallest integers such that $g$ fits in an $(m_g+1)\times(n_g+1)$ rectangle; see \autoref{fig:mn_over_2}. Note that an $(m - m_g)\times(n - n_g)$ block in $c'$ is determined, via multiplication by $g$, by an $m \times n$ block in $c$. Therefore $P_c(m,n) \geq P_{c'}(m-m_g, n-n_g)$.
By \autoref{lem:explicit-decomposition}, $c'$ is a sum of two but no fewer periodic configurations. Thus it is not periodic, and by \autoref{thm:main-thm},
$$P_c(m,n) \geq P_{c'}(m-m_g, n-n_g) > (m-m_g)(n-n_g).$$
Let ${\bm{v}}$ be an arbitrary vertex of the convex hull of $-\supp(g)$. Consider all translations of $-\supp(g)$ which are subsets of the rectangle $\llbracket m \rrbracket \times \llbracket n \rrbracket$, and denote by $R$ the locus of ${\bm{v}}$ under these translations. There are $(m-m_g)(n-n_g)$ such translations, and therefore $R$ has the same number of elements.
Now let us define a shape $U = \llbracket m \rrbracket \times \llbracket n \rrbracket \setminus R$. It is a shape such that no polynomial multiple of $g$ fits in $-U$. In particular no annihilator of $c$ fits in $-U$, and thus by \autoref{lem:normalized},
\env{align*}{
P_c(m,n) \geq P_c(U) > \abs U.
}
Since $\abs R + \abs U = mn$, either $(m-m_g)(n-n_g) = \abs R \geq mn/2$ or $\abs U \geq mn/2$; in both cases $P_c(m,n) > mn/2$, a contradiction.
\qed}
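The counting step, $\abs R = (m-m_g)(n-n_g)$, can be checked on a toy shape. The snippet below (ours; the shape is arbitrary) enumerates all translations of a shape inside an $m \times n$ box and records the locus of a fixed vertex.

```python
def locus(shape, v, m, n):
    """Positions taken by vertex v of `shape` over all translations of the
    shape that stay inside the box [0, m) x [0, n)."""
    xs = [p[0] for p in shape]
    ys = [p[1] for p in shape]
    R = set()
    for tx in range(-min(xs), m - max(xs)):
        for ty in range(-min(ys), n - max(ys)):
            R.add((v[0] + tx, v[1] + ty))
    return R

shape = {(0, 0), (2, 1), (1, 3)}      # fits in a 3 x 4 box: m_g = 2, n_g = 3
m, n = 7, 9
R = locus(shape, (0, 0), m, n)
assert len(R) == (m - 2) * (n - 3)    # (m - m_g)(n - n_g) translations
```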
\env{figure}{
\centering
\includegraphics[scale=0.8]{./img/mn_over_2_new}
\caption{Proof of \autoref{thm:nivat-mn-over-2}. The quadrilateral depicts the convex hull of $-\supp(g)$ for a polynomial $g$, positioned in the bottom left corner of an $m \times n$ block. The white points form the set $R$ and the shaded points form the set $U$. We have $\abs U \geq mn/2$ or $\abs R \geq mn/2$.}
\label{fig:mn_over_2}
}
\newpage
\bibliographystyle{alpha}
\section{Introduction}
Despite their importance to galaxy evolution theory, the mechanisms driving the co-evolution of supermassive black holes (SMBHs) and their host galaxies, and indeed the processes that form SMBHs in the first place, are highly uncertain.
SMBHs are ubiquitous in galaxies ranging from massive ellipticals and bulge-dominated galaxies \citep[e.g.][]{Gehren84,kormendy95,kormendy2013} to smaller, bulge-less disk galaxies and dwarfs \citep{shields08,Filippenko03,reines11,reines12,reines13,moran14}. Empirical scaling relationships between the mass of SMBHs and that of their host galaxies are indicative of coeval growth \citep{haring04,gultekin09, schramm2013, kormendy2013,volonteribellovary12}.
Future observations of gravitational waves emitted from binary and merging SMBHs via pulsar timing arrays \citep{sesana13} and the planned LISA mission \citep{klein16} will provide unique information on the SMBH population and its dynamical evolution. Pulsar timing arrays probe relatively low-redshift ($z \lesssim 2$) BHs towards the high mass end ($> 10^8$ M$_{\odot}$), while LISA can detect mergers of SMBHs with mass $\sim10^4-10^7$ M$_{\odot}$ out to the highest redshift. LISA therefore has the capability to provide unique constraints on the SMBH mass function across cosmic time as well as critical insight into their possible formation mechanisms \citep{sesana07,volonteriMSIGMA2009,klein16} and their growth and spin evolution \citep{Berti08,Barausse12}. Further, ongoing observations, as well as large-scale cosmological simulations, of active SMBHs that are offset from the centre of their host galaxies, possibly in galaxies with multiple luminous SMBHs, can potentially help constrain the extent to which galaxy mergers drive SMBH growth \citep{comerford15, steinborn15DualAGN,barrows17}.
The formation of a SMBH binary and subsequent merger of two SMBHs can be described in a number of stages. First, a dark matter halo falls into a halo of larger mass. It then sinks to the centre via dynamical friction, and the two central galaxies begin to strongly interact and merge. Following the merger of two galaxies hosting SMBHs, dynamical friction acting on the SMBHs causes them to sink to the galactic centre and form a close pair with sub-kiloparsec (kpc) separation. The close pair, through dynamical interactions with gas and stars, then forms a bound SMBH binary ($D<10$ pc), which then itself hardens to the point where gravitational wave emission causes rapid orbital decay and the two SMBHs merge ($D<0.001$ pc).
While the orbital evolution of close SMBH pairs and the bound binary systems that follow is extensively studied using numerical and analytic techniques \citep[e.g.][]{armitage02,Yu02,sesana15,dosopoulou17}, it is also critical to understand the evolution of SMBH pairs on larger scales, as these timescales can be quite long \citep[e.g.][]{callegariBH09,callegari11} and present a critical bottleneck to SMBH binary formation. However, studies of SMBH orbital evolution prior to the formation of close pairs have so far been severely limited. Semi-analytic models account for this timescale using simple models for dynamical friction \citep[e.g.][]{dosopoulou17,dvorkin17}. Detailed simulations of isolated mergers have indicated that SMBH sinking timescales following major mergers depend on the central stellar density of both galaxies \citep{G94} and can be quite short \citep{mayer07}, while SMBH sinking timescales following minor mergers can be much longer and depend sensitively on the orientation of the merging galaxies \citep{callegariBH09,callegari11}. However, these idealized simulations do not produce the realistic merger and gas accretion histories that real galaxies experience in a full cosmological context.
Cosmological simulations potentially provide a more self-consistent view of SMBH orbital decay timescales and are the logical next step from isolated galaxy merger simulations to better understand the timescales of close pair formation. With these simulations, the effects of different morphology and merger dynamics are naturally accounted for without \textit{a priori} assumptions, as each galaxy in the simulation has a cosmologically realistic accretion and merger history. However, past simulations generally had poor resolution, which required simplified assumptions such as `advection', where SMBHs quickly sink into
the deepest nearby potential well, resulting in unrealistic, nearly instantaneous SMBH orbital decay. This approximation contrasts with the above numerical results as it assumes that the orbital sinking timescale on kpc scales is effectively zero. In previous works we have shown that this technique often results in inaccurate SMBH dynamics within galaxies and a drastic underestimate of sinking timescales \citep{tremmel15}. While current simulations are beginning to employ more detailed approaches to SMBH dynamics \citep[e.g.][]{Hirschmann14,dubois16,steinborn15DualAGN}, accurate orbital evolution down to sub-kpc scales remains a challenge.
In this Paper, using the {\sc Romulus25} cosmological simulation \citep{tremmel17} which is uniquely able to predict the orbital evolution of SMBHs down to sub-kpc scales \citep{tremmel15}, \textit{we present the first robust estimate of SMBH sinking and subsequent close SMBH pair formation timescales over a range of cosmic epochs and galaxy properties.}
\section{The Romulus Simulations}
The {\sc Romulus} Simulations are a set of large-scale, high resolution cosmological simulations with emphasis on implementing a novel approach to SMBH formation, dynamics, and accretion. For this work, we focus on {\sc Romulus25}, our flagship 25 Mpc per side volume simulation, as it provides a uniform sample of galaxies within a wide range of halo masses ($3\times10^{9}$ to $2\times10^{13}$ M$_{\odot}$). The simulation is run assuming a $\Lambda$CDM cosmology following the most recent results from Planck \citep[$\Omega_0=0.3086$, $\Lambda=0.6914$, h$=0.67$, $\sigma_8=0.77$;][]{planck16}, a Plummer equivalent force softening of $250$ pc (a $350$ pc spline force softening is used), and mass resolution for dark matter and gas of $3.39 \times 10^5$ and $2.12 \times 10^5$ M$_{\odot}$ respectively. The simulation was run using the new Tree + SPH code, {\sc ChaNGa} \citep{changa15}, which includes an updated SPH implementation that accurately simulates shearing flows with Kelvin-Helmholtz instabilities. The Simulations also include the standard physics modules previously used in { \sc GASOLINE}, such as a cosmic UV background, star formation, `blastwave' SN feedback, low temperature metal cooling \citep{wadsley04,wadsley08,Stinson06,shen10}, as well as a novel implementation of SMBH formation, growth, and dynamics \citep{tremmel15,tremmel17}.
As described in more detail in \citet{tremmel17}, the free parameters within our sub-grid models for star formation and SMBH physics (see \S2.1) are optimized and held constant. This was achieved using a large set of `zoomed-in' simulations of galaxies within dark matter halos with masses $10^{10.5}$, $10^{11.5}$, and $10^{12}$ M$_{\odot}$. Each set of galaxies was 1) run using a different set of parameters and 2) graded against different $z = 0$ scaling relations related to star formation efficiency, gas fraction, angular momentum, and black hole growth. This resulted in {\it fully specified } sub-grid models governing star formation, stellar feedback, and SMBH accretion and feedback that are optimized to provide realistic $z = 0$ galaxies while maintaining predictive power at higher redshifts and high mass (M$_{h} > 10^{12}$ M$_{\odot}$). {\sc Romulus25} has been shown to reproduce the z$=$0 stellar mass halo mass and SMBH mass stellar mass relations across the entire range of resolved halos. It also {\it predicts} cosmic star formation and SMBH accretion histories at high redshift that are consistent with observations \citep{tremmel17}.
\subsection{SMBH Accretion and Feedback}
Accretion of gas onto SMBHs is governed by a modified Bondi-Hoyle prescription. Using the same energy balance argument as in the derivation of Bondi-Hoyle, we re-derive the SMBH accretion radius to include the effect of angular momentum support based on the resolved dynamics of gas in the simulation. We also apply a density dependent boost factor to account for the unresolved multiphase nature of the ISM near a SMBH \citep{BoothBH2009}, giving us the final equation
\begin{equation}
\dot{M} = \left ( \frac{n}{n_{th,*}} \right )^\beta\begin{cases}
\frac{\pi(GM)^2 \rho}{(v_{\mathrm{bulk}}^2+c_s^2)^{3/2}} & \text{ if } v_{\mathrm{bulk}}>v_{\theta} \\ \\
\frac{\pi(GM)^2 \rho c_s}{(v_{\theta}^2+c_s^2)^{2}} & \text{ if } v_{\mathrm{bulk}}<v_{\theta}.
\end{cases}
\end{equation}
\noindent The tangential velocity, $v_{\theta}$, is estimated at the smallest resolved scales in the simulation and compared to $v_{\mathrm{bulk}}$, the overall bulk motion of the gas that already enters into the Bondi-Hoyle model. When the bulk motion dominates over the nearby rotational motion, or the energetics are dominated by the internal energy of the gas, the accretion reverts to the normal Bondi-Hoyle prescription. The threshold for star formation, $n_{th,*}$, also determines the threshold beyond which we assume gas becomes multiphase and poorly resolved, requiring a boost to the approximated accretion rate. For lower densities, we assume that the gas is not sufficiently multiphase to require such a boost, as in \citet{BoothBH2009}. How much this boost increases with density is governed by $\beta$, constrained by our parameter search to be 2.
An accreting SMBH converts a fraction of that mass, $\epsilon_r$, into energy. A fraction of this energy, $\epsilon_f$, is thermally coupled to the 32 nearest gas particles according to the smoothing kernel. We assume the common value of 10\% for $\epsilon_r$ and take $\epsilon_f$ as a free parameter, again set by our parameter search technique to be 0.02. For more details on SMBH accretion and feedback in {\sc Romulus}, we refer the reader to \citet{tremmel17}. We note that while there still exist issues with the Bondi-Hoyle formalism even in the regime of non-rotating gas \citep[e.g.][]{hobbs12}, for the spatial and time resolution of these simulations, it still represents the best way of approximating long term accretion onto SMBHs based on large-scale gas properties without requiring additional assumptions.
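A minimal sketch of the accretion prescription of Eq. (1), assuming cgs units and a density boost that switches off below the star formation threshold; this is an illustrative reading of the formula, not the {\sc ChaNGa} source code.

```python
import math

G_CGS = 6.674e-8  # gravitational constant in cgs units

def accretion_rate(M, rho, cs, v_bulk, v_theta, n, n_star, beta=2.0):
    """Modified Bondi-Hoyle rate of Eq. (1), in cgs. The boost (n/n_*)^beta
    is applied only above the star formation density threshold, as in the
    text; the bulk-motion branch is used when v_bulk > v_theta."""
    boost = (n / n_star) ** beta if n > n_star else 1.0
    if v_bulk > v_theta:
        rate = math.pi * (G_CGS * M) ** 2 * rho / (v_bulk ** 2 + cs ** 2) ** 1.5
    else:
        rate = math.pi * (G_CGS * M) ** 2 * rho * cs / (v_theta ** 2 + cs ** 2) ** 2
    return boost * rate

# Below the density threshold and with bulk-dominated motion this reduces
# to the classic Bondi-Hoyle rate:
M, rho, cs = 1e6 * 1.989e33, 1e-24, 1e6
r = accretion_rate(M, rho, cs, v_bulk=2e6, v_theta=1e6, n=0.1, n_star=0.2)
classic = math.pi * (G_CGS * M) ** 2 * rho / ((2e6) ** 2 + cs ** 2) ** 1.5
assert abs(r - classic) < 1e-12 * classic
```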
\subsection{SMBH Seeding}
SMBHs are seeded in the simulation based on gas properties, forming in rapidly collapsing, low metallicity regions in the early Universe. We isolate pristine gas particles ($Z<3\times10^{-4}$) that have reached densities 15 times higher than what is required by our star formation prescription without forming a star or cooling beyond $9.5 \times 10^3$ K (just below the temperature threshold used for star formation, $10^{4}$ K). These regions are collapsing on timescales much shorter than the cooling and star formation timescales and are meant to approximate the regions that would form large SMBHs, regardless of the details of their formation mechanism. \citet{tremmel17} show how this method forms most SMBHs within the first Gyr of the simulation, compared with the later seeding times inherent to common approaches that seed based on halo mass thresholds \citep[e.g.][]{diMatteo2008,genel14,eagle15}.
The seed SMBH mass is set to $10^6$ M$_{\odot}$ and is justified by our choice of formation criteria, which would produce SMBHs that are able to attain higher masses quickly, as there is a lot of dense, collapsing gas nearby that is unlikely to form stars. Critically for our analysis presented here, this initial mass guarantees that SMBHs always have a mass significantly larger than DM and gas particles, allowing us to correctly resolve their dynamics without resorting to ad hoc simplifications \citep{tremmel15}. This approach results in an evolving occupation fraction. At early times, small halos (M$_{vir} \sim 10^{9-10}$ M$_{\odot}$) host massive, newly formed SMBHs. The occupation fraction evolves due to hierarchical merging and the fact that halos in less dense regions are less likely to host such dense collapsing regions at early times. Less than 10\% of halos with $3\times 10^9 <$ M$_{vir} < 10^{10}$ M$_{\odot}$ host a SMBH of mass at least $10^6$ M$_{\odot}$ at $z = 0$. Beyond the scope of this study is the examination of less massive black holes more common in smaller halos \citep[e.g.][]{ReinesVolonteri15,baldassare16}. Their lower masses will make them less likely to sink to the centre of their new host halo following a galaxy merger.
\subsection{SMBH Dynamics}
Following the merger of two galaxies hosting SMBHs, the accreted SMBHs sink toward the centre of the descendant galaxy through dynamical friction, the force exerted by the gravitational wake caused by a massive body moving through an extended medium \citep{DF43,stelios05,BinneyTremaine}. However, the limited mass and gravitational force resolution of cosmological simulations leaves this process largely unresolved. The Romulus simulations uniquely include the sub-grid correction accounting for this unresolved dynamical friction described in \citet{tremmel15} that has been shown to produce realistically sinking SMBHs (see appendix A for tests of this prescription at the specific resolution of {\sc Romulus25}).
As described in detail in \citet{tremmel15}, the dynamical friction acting on a SMBH of mass M from surrounding star and dark matter (DM) particles is approximated using Chandrasekhar's dynamical friction formula \citep{DF43} integrated out to our softening limit, $\epsilon_{g}$, assuming a locally isotropic velocity distribution and constant density out to $\epsilon_{g}$.
\begin{equation}
\mathbf{a}_{DF} = -4\pi\mathrm{G}^2\mathrm{M}\rho(<\mathrm{v}_{BH})\mathrm{ln}\Lambda\frac{\mathbf{v}_{BH}}{\mathrm{v}_{BH}^3}.
\end{equation}
\noindent The velocity of the SMBH, $\mathrm{v}_{BH}$, is taken relative to the local centre of mass (COM) velocity of stars and DM. We have also assumed that the contribution from objects moving faster than the SMBH is negligible, where $\rho(<\mathrm{v}_{BH})$ is the density of particles moving slower than the SMBH relative to the local COM. This is a good assumption to make for dynamical evolution on scales much larger than 1 pc \citep{antonini12}. The Coulomb logarithm, $\mathrm{ln}\Lambda$, is taken to be $\mathrm{ln}(\frac{\mathrm{b}_{max}}{\mathrm{b}_{min}})$, where b$_{max}$ = $\epsilon_{g}$ to avoid double counting the resolved dynamical friction that is already occurring on larger scales and b$_{min}$ is the $90^{\circ}$ deflection radius, with a lower limit set to the Schwarzschild radius, $\mathrm{R}_{Sch}$. The calculation is done based on the 64 nearest star and DM particles to each SMBH. {\sc Romulus25} achieves mass resolution such that the ambient dark matter, gas, and star particles are significantly less massive than the smallest SMBHs, allowing it to avoid the numerical effects that persist at low resolution even with this dynamical friction prescription \citep{tremmel15}.
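Eq. (2) can be sketched in code as follows, assuming cgs units and including the Schwarzschild-radius floor on $\mathrm{b}_{min}$ described above; the constant and function names are ours, and the density of slow-moving particles is taken as an input rather than estimated from neighbours.

```python
import math

G_CGS = 6.674e-8      # gravitational constant, cgs
C_LIGHT = 2.998e10    # speed of light, cm/s

def df_acceleration(M, v_bh, rho_slow, eps_g):
    """Unresolved dynamical friction of Eq. (2): v_bh is the SMBH velocity
    vector relative to the local COM, rho_slow the density of particles
    moving slower than the SMBH, b_max = eps_g and b_min the 90-degree
    deflection radius floored at the Schwarzschild radius."""
    v = math.sqrt(sum(x * x for x in v_bh))
    r_sch = 2 * G_CGS * M / C_LIGHT ** 2
    b_min = max(G_CGS * M / v ** 2, r_sch)
    ln_lambda = math.log(eps_g / b_min)
    pref = -4 * math.pi * G_CGS ** 2 * M * rho_slow * ln_lambda / v ** 3
    return tuple(pref * x for x in v_bh)

# The acceleration opposes the motion:
a = df_acceleration(1e6 * 1.989e33, (2e7, 0.0, 0.0), 1e-24, 250 * 3.086e18)
assert a[0] < 0 and a[1] == 0 and a[2] == 0
```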
\begin{figure}
\centering
\includegraphics[trim=10mm 5mm -10mm 0mm, clip, width=90mm]{plots/figure1_BHMerger_Fraction_withnumbers.pdf}
\caption{{\sc Likelihood of Close SMBH Pair Formation}. The fraction of all galaxy mergers that result in a close SMBH pair as a function of the stellar mass of the primary galaxy and the stellar mass ratio at the time of first satellite in-fall. In addition to the colours, the fraction and, in parenthesis, associated uncertainty ($n_{pairs,i}^{0.5}/n_{i}$, where $n_i$ is the total number of galaxy mergers in each bin and $n_{pairs,i}$ the number of close SMBH pairs resulting from mergers in each bin) are labeled. Considered are galaxy mergers resulting from initial satellite in-fall at $z<5$. The formation of a close SMBH pair is not a common result of galaxy mergers. The likelihood of a close SMBH pair forming is sensitive to both stellar mass and mass ratio, and most likely to occur in massive, major mergers.}
\label{binfrac}
\end{figure}
This is a critical improvement over standard approaches to correcting SMBH dynamics that involve repositioning or pushing SMBHs toward their local potential minima \citep[e.g.][]{dimatteoBH05,diMatteo2008,genel14,eagle15}. Such methods force an un-physically fast sinking timescale for accreted SMBHs, leading to a nearly immediate formation of a close SMBH pair that does not sample the properties or kinematics of the merging galaxies \citep{tremmel15}. With this technique, the dynamics and morphology of the merging galaxies are self-consistently accounted for in the SMBH sinking timescales and the subsequent formation (or not) of a close SMBH pair.
\subsection{Formation of Close SMBH Pairs}
SMBHs are assumed to form a close pair when they come closer than two softening lengths ($\approx700$ pc in our simulations) with relative velocities small enough that they can be considered bound, i.e. $\frac{1}{2}|\Delta \textbf{v}|^2 < \Delta \textbf{a} \cdot \Delta \textbf{r}$, where $\Delta \textbf{v}$, $\Delta \textbf{a}$, and $\Delta \textbf{r}$ are the relative velocity, acceleration, and position vectors of the two SMBH particles. Below this distance limit, the simulation fails to resolve the relevant stellar and gas dynamical processes involved in SMBH pair evolution and such calculations are not attempted.
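The boundness check can be sketched as follows, reading the kinetic term of the criterion as $\frac{1}{2}|\Delta \textbf{v}|^2$ and assuming mutually consistent units and sign conventions for the relative vectors; the function and threshold names are ours.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def forms_close_pair(dr, dv, da, two_softenings=0.7):
    """Close-pair criterion as read from the text: separation below two
    softening lengths (~0.7 kpc here) and kinetic term below the work term,
    (1/2)|dv|^2 < da . dr."""
    separation = math.sqrt(dot(dr, dr))
    return separation < two_softenings and 0.5 * dot(dv, dv) < dot(da, dr)

# A slow, close, mutually accelerating pair qualifies; a fast one does not:
assert forms_close_pair((0.3, 0.0, 0.0), (0.1, 0.0, 0.0), (0.2, 0.0, 0.0))
assert not forms_close_pair((0.3, 0.0, 0.0), (5.0, 0.0, 0.0), (0.2, 0.0, 0.0))
```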
In the simulation, once two SMBHs form a close pair, they are taken to act as a single SMBH with the sum of their masses. While there are still many theoretical uncertainties in the timescales to form and merge a binary SMBH system, binary hardening timescales can be relatively short, on the order of $10^7-10^8$ yrs, if even a small amount of gas is present \citep{armitage02,haiman09,colpi14}, and in some cases even for gas poor systems \citep{holleybockelmann15}. Even if the binary hardening timescales are significant compared to the relevant timescales of the simulation, the smallest resolved scales are much larger than the typical binary separation, so taking the pair to act as a single object with respect to accretion and feedback remains a reasonable approximation for those processes. The timescales that we predict in the following sections are therefore a lower limit to the timescales to form a SMBH binary and subsequent merger.
We predict that the formation of close SMBH pairs is a relatively rare occurrence, with an average formation rate per co-moving volume of $0.013$ cMpc$^{-3}$ Gyr$^{-1}$. Figure~\ref{binfrac} shows the likelihood that the merger of two galaxies will result in the formation of a close SMBH pair within a Hubble time. With our formation scheme (\S2.2), lower mass galaxies are naturally less likely to host SMBHs and so their mergers often do not result in any close pairs, as one or more of the galaxies do not host any SMBHs to begin with. In addition, as we explore in the next section, galaxies in lower mass ratio mergers are more likely to become tidally disrupted and deposit their SMBHs on very wide orbits with larger sinking timescales. While we will focus in the following sections on close SMBH pairs that do form in the simulation, it is important to note that only a fraction of galaxy mergers result in a close SMBH pair forming within a Hubble time.
\section{Close SMBH Pair Formation Timescales}
While several different timescales are important for understanding the formation and evolution of SMBH pairs, the evolution of SMBH orbits on kpc scales is often simplified, relying on analytic approximations that do not self-consistently account for the kinematics and internal properties of the merging galaxies \citep[e.g.][]{dvorkin17}, which previous studies have shown can have an important role in determining how the SMBHs will evolve following a galaxy merger \citep[e.g.][]{G94,callegariBH09,callegari11}. With the realistic model of SMBH dynamics included in {\sc Romulus25}, the simulation is uniquely capable of estimating this timescale for a realistic population of galaxy mergers taking place within a fully cosmological environment.
For our analysis we measure the time that each eventual close pair of SMBHs spends at `galaxy-scale' ($\sim1-10$ kpc) separations. Position information for each SMBH is recorded every 1.6 Myr and simulation snapshots are recorded every $10-100$ Myr, with higher time resolution at earlier epochs. In our analysis we only include close SMBH pairs formed within resolved DM halos, with at least 10,000 DM particles, resulting in a lower mass limit of $\sim3\times10^9$ M$_{\odot}$. We also only include close pairs that form at least 100 Myr after each SMBH has been seeded, in order to avoid counting pairings that occur as a result of multiple SMBHs forming from the same cloud of gas, a rare but possible outcome of our formation scheme that should be considered degenerate with a single SMBH growing quickly from a particularly large, dense cloud of gas. We confirm that our results are insensitive to the specific choice of this time threshold.
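The bookkeeping behind this measurement can be sketched simply: with SMBH separations sampled at a fixed cadence (1.6 Myr in the text), the time spent at galaxy-scale separations is the count of samples below the threshold times the sampling interval. The function and variable names below are our own, for illustration only.

```python
import numpy as np

def time_at_galaxy_scale(sep_kpc, dt_myr=1.6, threshold_kpc=10.0):
    """Total time (in Myr) a SMBH pair spends below `threshold_kpc`,
    given a separation series sampled every `dt_myr` (the 1.6 Myr
    position-logging cadence quoted in the text). The series is
    assumed to end at close-pair formation."""
    sep = np.asarray(sep_kpc, dtype=float)
    return float(np.count_nonzero(sep < threshold_kpc) * dt_myr)
```

The same routine, run with thresholds of 5, 10, and 20 kpc, yields the three distributions compared in the bottom panel of Figure~\ref{tdelay_all}.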
\begin{figure}
\centering
\includegraphics[trim=0mm 0mm 10mm 0mm, clip, width=85mm]{plots/figure2_cum_tdelay_all_new.pdf}
\caption{{\sc The Timescale to Form Close SMBH Pairs.} \textit{Top:} The cumulative distribution of time that SMBH pairs spend separated by less than 10 kpc prior to close pair formation for all close SMBH pairs formed in {\sc Romulus25} (dark/black, solid). While about half of the close pairs form relatively quickly ($<0.5$ Gyr), there is a significant fraction that spend several Gyr at galaxy-scale separations. Close pairs that form at low redshift (light/blue, solid) are mostly very far removed from their progenitor galaxy merger event. Also shown is the subset of close SMBH pairs resulting from satellites in-falling after $z=5$ (dashed), used in much of our analysis and which, as shown here, have timescales representative of the whole population of close SMBH pairs. \textit{Bottom:} The cumulative distribution of timescales that SMBH pairs spent at 5 (red), 10 (green), and 20 (orange) kpc separations before forming a close pair with sub-kpc separation. As expected, closer proximity implies faster sinking timescales, as the dynamical time of the galaxy decreases at smaller radii. Overall, the distributions are quite similar, implying that our results are insensitive to the specific choice of separation scales explored. Vertical dashed lines show the 75th percentiles.}
\label{tdelay_all}
\end{figure}
The top panel of Figure~\ref{tdelay_all} shows the cumulative distribution of time that SMBH pairs spend within 10 kpc of one another before forming a close pair. The distance is small enough that the two target SMBHs must be within the same galaxy or interacting pair of galaxies. For the overall population (black line) most of the close pairs form with less than 1 Gyr spent at these intermediate separations, consistent with many studies of isolated galaxy mergers \citep[e.g.][]{mayer07}. However, there is a significant population of pairs that remain at galactic-scale separations for several Gyr. Taking only the population of close pairs that form at low redshift ($z < 2$; blue line) we see that the majority of these close pairs form several Gyr after their original galaxy merger event. We therefore predict that a significant fraction of low redshift SMBH pairs (and therefore subsequent SMBH binaries and SMBH merger events) are formed from a population of long-lived, `wandering' SMBHs \citep{schneider02,volonteriOffCenBH05,bellovaryBH10} born out of early galaxy mergers.
This result can have critical implications for gravitational wave analysis in the future, affecting how such signals are interpreted in terms of connecting SMBH mergers to galaxy evolution. It can also be important for interpreting dual and offset AGN observations, as it becomes unclear how connected they may be to actual galaxy mergers. Though beyond the scope of this paper, we will explore in more detail the implications of these results for gravitational wave predictions as well as the population of offset and dual AGN in future work.
The bottom panel of Figure~\ref{tdelay_all} shows the cumulative distribution of timescales that SMBH pairs spend at 5, 10, and 20 kpc separations. As expected, the evolution of SMBH pairs occurs on slightly shorter timescales for smaller separations. The sinking timescale due to dynamical friction depends on the local dynamical time, which decreases toward galactic centre. Still, we find SMBH pairs that spend several Gyr separated by 5 kpc or less. This shows that our results are insensitive to our specific choice of separation threshold. In the following sections, we choose 10 kpc as our galaxy-scale separation threshold, as it corresponds to the size of the Galactic disk and is a good representation of the inner region of a dark matter halo that is dominated by baryonic processes. Additionally, we have confirmed that our other conclusions are also insensitive to this chosen scale.
\begin{figure*}
\centering
\includegraphics[trim=00mm 0mm 0mm 0mm, clip, width=180mm]{plots/figure3_cum_ttdyn_t_rho_mstar_new2.pdf}
\caption{{\sc Close Pair Formation Timescales and Merging Galaxy Properties.} The cumulative distribution of the number of dynamical times (left) and total time (right) that SMBH pairs spend within 10 kpc of one another before forming a close pair. The data is taken from 196 unique galaxy mergers taking place at $z<5$, resulting in 330 close SMBH pairs. Shown here are only those close pairs where the accreted SMBH is initially within the central 1 kpc of its host satellite galaxy (159 total pairs). The distributions are split up based on the 50th percentiles in central stellar density of the accreted galaxy and the stellar mass ratio ($3.4 \times 10^6$ M$_{\odot}$ kpc$^{-3}$ and 0.43 respectively) calculated at the in-fall time of the satellite halo. Accreted galaxies that have both high central stellar densities and high stellar mass ratios compared to the main galaxy are significantly more likely to result in a quick formation of a close SMBH pair.}
\label{tdelay_gal}
\end{figure*}
The distribution of timescales presented in Figure~\ref{tdelay_all} is likely due to several variables, including the kinematics of the merging galaxies, the morphology of the galaxy merger remnant, the mass of the SMBHs, and where within that galaxy the SMBHs are deposited. \citet{callegari11} find that the behaviour of in-falling satellite galaxies and their host SMBHs depend strongly on the angle of the interaction. How SMBHs are deposited within a galactic disk can also affect the efficiency of dynamical friction. If the host galaxy has a cored density profile, delay timescales can also be made longer \citep{reed06,dicintio17}. Similarly, a large stellar core with high velocity dispersion could also make dynamical friction less effective, as there would be more stars moving too fast to contribute. All of these merger and galaxy properties are a natural consequence of the simulation volume and are folded into the timescale distributions we predict.
Because these timescales are the result of many different variables interacting with one another, we find little overall dependence on single parameters like SMBH mass or halo mass. However, we do find a strong dependence on the morphology of the accreted galaxy and its stellar mass relative to the primary galaxy, which we explore in the following section.
\subsection{Galaxy Disruption and Close SMBH Pair Formation Timescales}
\begin{figure*}
\centering
\includegraphics[trim=30mm 40mm 30mm 30mm, clip, width=150mm]{plots/figure_4_short_example_text.pdf}
\includegraphics[trim=30mm 40mm 30mm 30mm, clip, width=150mm]{plots/figure_4_long_example_text.pdf}
\caption{{\sc An Illustrative Example.} Two examples of galaxy mergers taking place around the same time and with galaxies of similar mass. Each set of plots shows the spatial distribution and colour of stars at five different times leading up to and following the merger of the two galaxies. Colours are based on the contribution of different bands within each pixel using U (blue), V (green), J (red) assuming a Kroupa IMF, so young stars look blue and older stars look yellow. The stellar emission is calculated using tables generated from population synthesis models \citep[http://stev.oapd.inaf.it/cgi-bin/cmd;][]{marigo08,girardi10}. Red and black crosses mark the positions of the SMBHs and the green cross in the final frame of the top example represents a close pair of SMBHs. The initial stellar masses of the accreted galaxies in the top and bottom cases are $1.3\times10^{10}$ and $1.02 \times 10^{10}$ M$_{\odot}$ respectively, while the main galaxies have stellar masses of $2.9\times10^{10}$ and $4.6\times10^{10}$ M$_{\odot}$ respectively. The accreted galaxy in the top case originally has a stellar core nearly five times denser than that of the bottom galaxy. This, combined with the higher stellar mass ratio, allows the core of the galaxy to avoid disruption, quickly resulting in a close SMBH pair. In the bottom case, the core of the original galaxy is tidally heated, becomes more diffuse, and is quickly assimilated into the main galaxy, leaving the SMBH to sink on its own. Despite the close passage shown in the last frame, the SMBHs will not form a close pair until $t = 7.34$ Gyr, after $1.7$ Gyr at galaxy-scale separations compared with only $0.3$ Gyr in the top example.}
\label{examples}
\end{figure*}
In this section, we examine how the close SMBH pair formation timescale depends on the properties of the interacting galaxies. We take a sub-set of our close SMBH pair population that result from galaxy mergers initiated by in-falling satellites at $z < 5$, where both halos are resolved ($M_{vir}>3\times10^{9}$ M$_{\odot}$) at the time of satellite in-fall. This time of satellite in-fall is taken as the time the secondary galaxy's host dark matter halo crosses the virial radius of the main halo. For halos that cross the virial radius multiple times, the final crossing time is used. The initial properties of each galaxy prior to the merger are taken at this final in-fall time. We do not include mergers at higher redshift, as often the details of these interactions are not fully captured by our snapshots, with halos attaining a mass that passes our strict definition of what is resolved and falling into the main halo in between snapshots. This sub-sample consists of 330 close SMBH pairs resulting from 196 unique galaxy mergers. Note that, because individual galaxies can host multiple SMBHs, it is common for single galaxy mergers to result in multiple close SMBH pairs. The dashed black line in Figure~\ref{tdelay_all} shows the distribution of delay timescales for this subset of close SMBH pairs, showing that this population is indeed representative of the whole.
In Figure~\ref{tdelay_gal} we plot the cumulative distribution of time that eventual close SMBH pairs spend within 10 kpc of one another. We group these pairs based on the central stellar density of the in-falling galaxy and the stellar mass ratio of the two merging galaxies. The stellar density is calculated within the central kpc of each in-falling satellite galaxy. Figure~\ref{tdelay_gal} shows the results in units of both Gyr (right) and number of dynamical times (left), where the dynamical time is calculated at a radius of 10 kpc of the main galaxy at the approximate time the two SMBHs come within 10 kpc of one another. The median values for the central stellar density and stellar mass ratio are $3.4\times10^6$ M$_{\odot}$ kpc$^{-3}$ and 0.43 respectively. Only systems where the accreted SMBH is within the central 1 kpc of its host galaxy at in-fall time are considered. Initially offset SMBHs are considered in the next section.
It is clear from this figure that accreted galaxies with high central densities and higher stellar mass ratios result in significantly shorter delay times. Galaxies with either low central densities or low stellar mass ratios experience longer times spent at galaxy-scale separations, implying that tidal disruption of the host galaxy is important for determining the timescale for close SMBH pair formation. During a galaxy interaction, ram pressure stripping can disrupt gas within galactic disks at larger radii and tidal heating can disrupt the inner core of the galaxies. Dense stellar cores within high mass ratio mergers are more likely to avoid disruption through both ram pressure stripping and tidal heating \citep{gnedin99,callegariBH09,vanWassenhove14}, so the central SMBHs remain embedded in a dense stellar core that aids in their orbital decay. In galaxies lacking a dense stellar core, or those involved in more minor mergers, tidal heating is more efficient at disrupting the inner parts of the galaxy, resulting in SMBHs deposited at large radii without any stellar core to assist in their orbital decay. This is consistent with analytical experiments showing how the orbital evolution of SMBHs is highly dependent on whether they are embedded in a stellar core or `naked' within their new host galaxy \citep{Yu02, dosopoulou17}.
Figure~\ref{examples} shows a series of snapshots from two example galaxy mergers taking place with both primary and secondary galaxies initially within a factor of 2 of one another in stellar mass. However, the stellar mass ratio of the top and bottom examples is 0.45 and 0.22 respectively. This, combined with the fact that the secondary galaxy in the top example has an initial central stellar density nearly 5 times higher than that in the bottom case, results in very different SMBH orbital evolution. In the bottom case, the secondary galaxy's core becomes tidally heated and eventually disrupted by the main galaxy, no longer maintaining its structure. In the top case, the denser core is able to avoid disruption and maintains its integrity up until the two cores merge, bringing the SMBHs along with them. The bottom example of a disrupted galaxy forms a close SMBH pair only after the SMBHs spend $1.7$ Gyr within 10 kpc of one another, while the top case results in a close pair after the SMBHs spend only 0.3 Gyr at galaxy-scale separations.
\subsection{Initially Offset SMBHs}
In the previous section, we focused on central SMBHs, those that are at the centre of their host galaxy at the time of satellite in-fall. However, approximately half of the close SMBH pairs in our sub-sample from {\sc Romulus25} form from accreted SMBHs initially offset from the centre of their host galaxy. As we have seen, the orbital decay of SMBHs can often take several Gyr and galaxy mergers often never result in a close SMBH pair. Massive galaxies in the {\sc Romulus25} simulation therefore often have several SMBHs that are offset from galactic centre, gathered throughout the host galaxy's merger history. In some rare cases, galaxies only have offset SMBHs.
Figure~\ref{tdelay_off} is similar to the left panel of Figure~\ref{tdelay_gal}, with SMBH binaries binned based on whether the accreted host galaxy is more likely to avoid complete disruption due to a dense stellar core and high mass ratio (orange/solid), less likely to avoid disruption (blue/dashed), or whether the target SMBH is offset from the centre of their host satellite galaxy by more than 1 kpc as it crosses the main halo's virial radius (green/dotted).
The close pair formation timescale distribution for initially offset SMBHs is similar to that for more easily disrupted satellite galaxies. When the SMBH is not central, it is likely not embedded within a dense stellar core, even if its host galaxy has one. It will therefore become accreted onto the main galaxy without a stellar core to aid in dynamical friction, just like SMBHs in galaxies whose cores become tidally disrupted.
\begin{figure}
\centering
\includegraphics[trim=0mm 0mm 0mm 0mm, clip, width=85mm]{plots/figure5_cum_tdyn_disrupt_offset_new2.pdf}
\caption{{\sc Close Pair Formation Timescales for Initially Offset SMBHs.} The cumulative distribution of the number of dynamical times SMBH pairs spend within 10 kpc of one another before forming a close pair. The solid orange line represents SMBHs from galaxies that are less susceptible to disruption (same as in Figure~\ref{tdelay_gal}) and the blue dashed line represents SMBHs from galaxies that are more likely to become tidally disrupted due to a lower stellar mass ratio and/or low central density (the union of the other three lines shown in Figure~\ref{tdelay_gal}). The green dotted line represents SMBHs that were initially offset from the centres of their host satellite galaxies by more than 1 kpc at the time of in-fall. The green and blue distributions are very similar, which is to be expected. In both cases, the SMBHs lack the extra support of a stellar core when making their way to the centre of their new galaxy.}
\label{tdelay_off}
\end{figure}
\subsection{The Importance of Galaxy-Scale Orbital Evolution}
The previous sections have shown that SMBH pairs can spend significant time at kpc-scale distances before forming a close pair. Their evolution on kpc scales is a phase that is very difficult to fully capture analytically, as it straddles the separations where galaxies are still merging and those where the sinking concerns the SMBHs themselves, naked or surrounded by the core of their satellite (see a discussion in \citet{mcwilliams14}). When estimating the time of binary formation (for which the time of pair formation studied here is a lower limit) semi-analytical models normally use satellite merging timescales that should account for the full `amalgamation' of the satellite (see a discussion in \citet{bk08}), typically estimated from large suites of dark matter-only simulations. \citet{bk08} argue that the inclusion of baryons (specifically, bulges, that are denser than dark matter and thus more resistant to disruption) would shorten the timescales compared to estimates for dark matter haloes alone. The results of the previous sections, however, show a more complex picture when dealing with SMBHs, rather than halos and galaxies only.
In order to test the approach of semi-analytic models, we estimate the close pair formation times that would be predicted from more simplistic models. We approximate the halo sinking timescale via the analytic fit derived by \citet{bk08}, given the halo masses of the primary and satellite halos and the virial radius of the primary halo taken from the simulation at the time of satellite in-fall. Following a procedure similar to modern semi-analytic models \citep[e.g.][]{Barausse12}, we give each halo pair a circularity, $\epsilon = j / j_{circ}$, sampled from a normal distribution centred at $\bar{\epsilon} = 0.5$ and with $\sigma = 0.23$ \citep{Khochfar06}. The circular radius is calculated from the periastron radius, approximated by $r_{peri} = R_{vir} \epsilon^{2.17}$ \citep{Khochfar06}. In order to remain in the regime where the fit from \citet{bk08} is accurate, we only allow $\epsilon$ to vary between 0.2 and 1.0. Below $\epsilon=0.2$, baryonic effects dominate due to the satellite galaxy's very radial orbit, making the approximation less accurate. This simple approach allows us to compare the sinking times predicted from {\sc Romulus25} to the average halo sinking timescales that would be included in most semi-analytic models.
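A minimal sketch of this comparison model, under our reading of the text: the fitting form and coefficients ($A=0.216$, $b=1.3$, $c=1.9$, $d=1.0$) are those quoted by \citet{bk08}, and the circularity sampling follows the normal distribution described above. How $r_c/R_{vir}$ is derived from the sampled circularity is left as an input, and clipping $\epsilon$ to the allowed range is one simple choice; the exact procedure is an assumption of this sketch.

```python
import numpy as np

def bk08_sinking_time(m_host, m_sat, t_dyn, eps, rc_over_rvir):
    """Halo sinking time (same units as t_dyn) from the dynamical
    friction fit of Boylan-Kolchin et al. (2008):
      tau / t_dyn = A * q^b / ln(1 + q) * exp(c * eps) * (r_c/R_vir)^d
    with q = M_host / M_sat and A, b, c, d = 0.216, 1.3, 1.9, 1.0."""
    A, b, c, d = 0.216, 1.3, 1.9, 1.0
    q = m_host / m_sat
    return t_dyn * A * q**b / np.log(1.0 + q) * np.exp(c * eps) * rc_over_rvir**d

def sample_circularities(rng, n):
    """Orbital circularities drawn from N(0.5, 0.23), restricted to
    the [0.2, 1.0] range within which the fit is applied in the text."""
    return np.clip(rng.normal(0.5, 0.23, size=n), 0.2, 1.0)
```

Adding this sinking time to the satellite in-fall time gives the analytic close pair formation estimate (blue points in Figure~\ref{tdelay_comp}) that we compare against the simulation.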
We find that galaxy-scale orbital evolution is an important bottleneck to close SMBH pair formation for high redshift galaxy mergers. In Figure~\ref{tdelay_comp} we plot the close pair formation times directly from the {\sc Romulus25} cosmological simulation against the in-fall redshift of the parent satellite galaxy for the secondary SMBH (orange points). We compare this time to that which would be predicted solely using the analytic halo sinking timescale described above (blue points). In other words, these points represent the time for close pair formation if galaxy-scale orbital evolution and other baryonic effects were ignored, as they often are in both semi-analytic models and other cosmological simulations. We find that the orbital evolution of SMBHs from 10 kpc to sub-kpc scales is an important bottleneck to close pair formation (and the subsequent binary formation and merger) for high redshift galaxy interactions, where the dynamical timescale for satellite halos is comparatively small. At redshifts less than $\sim2$ we find that there is less of a clear difference between the two types of points, indicating that halo sinking timescales become comparable to, and sometimes dominant over, galaxy-scale SMBH orbital evolution. Semi-analytic models of SMBH binary evolution find similar results, with binary evolution timescales acting as a dominant bottleneck at high redshift and becoming increasingly less important compared to satellite sinking timescales at low redshift \citep{volonteri16IAUb}.
Examining the halo in-fall times and the predicted close SMBH pair formation times, it is clear that close pairs that form at later times are often the consequence of high redshift mergers, an effect also seen in Figure~\ref{tdelay_all}. These results show that SMBH orbital evolution on galaxy scales is a very important bottleneck for the formation of close SMBH pairs and, therefore, SMBH binaries and mergers, and must be accounted for when predicting the population of binary SMBHs and gravitational wave events across cosmic time.
\section{Discussion and Conclusions}
Using the {\sc Romulus25} cosmological simulation, which is uniquely capable of tracking the dynamics of SMBHs within galaxies down to sub-kpc scales, we examine the timescale for SMBH pairs to evolve from galaxy-scale separations ($1-10$ kpc) to form close pairs with separations less than a kpc, the precursor phase to a bound SMBH binary and (possible) future SMBH merger. The formation of close SMBH pairs is a relatively rare occurrence, becoming more common in major mergers of more massive galaxies. We find that galaxy mergers across cosmic time result in close SMBH pairs that often form several Gyr after the original galaxy merger event. SMBHs often accrete onto a new host galaxy via galaxy merger at high redshift, but only form a close SMBH pair at much lower redshift, resulting in a long lived population of `wandering' SMBHs \citep{schneider02,volonteriOffCenBH05,bellovaryBH10}. This can affect how we predict and interpret future observations of gravitational waves and dual/offset AGN, as well as the observational signatures of gravitational recoil events \citep{blecha16}.
\begin{figure}
\centering
\includegraphics[trim=10mm 0mm 20mm 20mm, clip, width=85mm]{plots/figure6_tmerge_zinfall_BK.pdf}
\caption{{\sc SMBH vs. Halo Sinking Timescales.} The formation time of close SMBH pairs as a function of satellite in-fall redshift. The black line denotes the time as a function of redshift. The orange points plot the time of close SMBH pair formation predicted directly from the {\sc Romulus25} simulation. The blue points estimate what the close pair formation time would be only accounting for halo sinking timescales approximated by the analytic fit from \citet{bk08}, as described in the text. The in-fall redshifts are shifted slightly between the two in order to make the distinction more clear. For high redshift halo mergers, the timescale for SMBH orbits to decay from galaxy-scale separations is a critical bottleneck to close pair formation, resulting in formation times that are often much later than those predicted solely based on halo sinking timescales. At lower redshift ($z<2$) the halo sinking timescales represent an increasingly important bottleneck to the formation of close SMBH pairs, resulting in less difference between the two types of points.}
\label{tdelay_comp}
\end{figure}
Using a set of 330 SMBH close pairs resulting from 196 unique galaxy mergers within {\sc Romulus25}, we show that the timescale for the formation of a close SMBH pair is dependent on galaxy morphology and stellar mass ratio. Galaxy mergers with similar masses and dense stellar cores result in faster close pair formation, as the secondary galaxy is less likely to become tidally disrupted. SMBHs that are embedded in stellar cores that are able to avoid disruption will be aided in sinking to galactic centre \citep{Yu02, dosopoulou17}. Satellite galaxies that are more susceptible to tidal disruption result in longer SMBH sinking timescales and close SMBH pairs that form long after the galaxy merger event (if they form at all). A similar situation is true for SMBHs that are initially offset from the centre of satellite galaxies. These SMBHs are not likely to be within the central stellar core, if one exists, of their host galaxy and so are deposited on their own at relatively large radii during the galaxy interaction.
The resolution limit of the {\sc Romulus25} simulation affects the scale at which tidal heating and disruption can be captured. Tidal processes become important when the impact parameter is similar to the effective radius of the disrupting object. With a Plummer equivalent gravitational force resolution of 250 pc, the effective radii of galaxies are well resolved for a wide range of masses and redshifts \citep{graham08,vanderwel14}, and so disruption occurring on large scales is captured, but the internal structure on scales very close to the SMBHs remains unresolved. Dense cusps of stars can form in galaxies, particularly during gas rich mergers. These dense regions would persist for longer, as they require closer interactions to tidally heat. These unresolved stellar remnants can have an important effect on SMBH dynamics on scales much lower than 700 pc \citep{vanWassenhove14}, the limit below which we do not attempt to follow them in this work.
SMBHs deposited on larger scales may still have a dense stellar core or nuclear star cluster \citep{wehner06,ferrarese06b} around them that {\sc Romulus25} is unable to resolve, effectively increasing their dynamical mass. However, for the sample of close SMBH pairs formed from galaxy mergers where disruption likely takes place, we find that the sinking time does not show a clear dependence on SMBH mass. This indicates that the existence of an unresolved, dense stellar component around these SMBHs will only have a secondary effect on their orbital evolution. Rather, the sinking times depend more on the details of the galaxy merger, i.e. where and with what orbital energy the SMBHs are deposited.
We show that the orbital evolution of SMBHs within galaxies on scales of 1-10 kpc is a major bottleneck for forming close SMBH pairs, particularly for high redshift galaxy interactions. In agreement with the arguments by \citet{volonteri16IAUb}, at lower redshifts ($z<2$) the sinking timescale of satellite halos becomes a more dominant factor and the specific effect of galaxy-scale orbital decay is less important, though still not trivial. How much of an effect this timescale plays in the overall prediction for SMBH merger rates will also depend on the hardening timescales after formation of the binary. While there is evidence that such hardening times can be relatively short, on the order of $10^7-10^8$ yr \citep{armitage02,haiman09,colpi14,holleybockelmann15}, other recent work suggests that these hardening timescales may be very long in some cases \citep{vasiliev15,kelley17,tamburello17}. Further, it is important to note that we do not include the effects of gravitational recoil, nor three-body SMBH encounters, both of which can further affect the formation of SMBH binaries.
It is clear that this stage of SMBH pair evolution plays a crucial role in determining when and where close SMBH pairs occur, and therefore the SMBH binaries and mergers that may result from such pairs. It is also important for understanding the temporal connection between AGN activity and galaxy interaction induced star formation, as the SMBH sinking timescale may be much longer than the typical observed starburst timescale, found to be on the order of 0.1 Gyr \citep{marcillac06,pereira-santaella15}. As illustrated in Figure~\ref{examples}, close SMBH pairs often form in relaxed galaxies that show no morphological disturbances indicative of a recent merger.
In future work, we will examine in more detail how this additional timescale can affect SMBH merger predictions from state-of-the-art SAMs, exploring in particular how the close pair formation timescale explored in this work compares with other effects such as three-body interactions and binary hardening rates in determining the predicted signals for future gravitational wave observatories. We will also explore the occurrence of dual and offset AGN \citep[e.g.][]{comerford14,comerford15,barrows17}, to examine in more detail the phase of galaxy and SMBH evolution traced by these events.
\section*{Acknowledgments}
The authors thank the anonymous referee for a thorough reading of the manuscript and their helpful comments. FG, TQ and MT were partially supported by NSF award AST-1514868. AP was supported by the Royal Society. This research is part of the Blue Waters sustained-petascale computing project, which is supported by the National Science Foundation (awards OCI-0725070 and ACI-1238993) and the state of Illinois. Blue Waters is a joint effort of the University of Illinois at Urbana-Champaign and its National Center for Supercomputing Applications. This work is also part of a PRAC allocation supported by the National Science Foundation (award number OCI-1144357). MV acknowledges funding from the European Research Council under the European Community's Seventh Framework Programme (FP7/2007-2013 Grant Agreement no. 614199, project `BLACK'). Much of the analysis done in this work was done using the software packages Pynbody \citep{pynbody} and TANGOS (Pontzen et al., in prep). The authors thank Priyamvada Natarajan, Angelo Ricarte, Enrico Barausse, Laura Blecha, Julie Comerford, and Lisa Steinborn for stimulating discussions and a careful reading of the manuscript.
\section{Introduction}
In recent years there has been steady experimental progress on measurements of the charmless $B_{u,d,s}\to PP$ decays.
In 2015, Belle reported a $3.0\sigma$ measurement of the $\overline B{}^0\to\eta\pi^0$ decay rate,
${\cal B}(B^0\to\eta\pi^0)=(4.1^{+1.7+0.5}_{-1.5-0.7})\times10^{-7}$~\cite{Pal:2015ewa},
and a $5.1\sigma$ measurement of
${\cal B}(B_s^0\to K^0\overline K{}^0)=(19.6^{+5.8}_{-5.1}\pm 1.0\pm 2.0)\times 10^{-6}$~\cite{Pal:2015ghq},
while LHCb observed the $B_s\to\eta'\eta'$ decay at $(3.31\pm 0.64\pm 0.28\pm0.12)\times 10^{-5}$ with $6.4~\sigma$ significance~\cite{Aaij:2015qga}.
In 2016, LHCb reported the observation of the annihilation modes with ${\cal B}(B^0\to K^+K^-)=(7.80\pm1.27\pm0.81\pm0.21)\times 10^{-8}$ and ${\cal B}(B^0_s\to\pi^+\pi^-)=(6.91\pm 0.54\pm0.63\pm0.19\pm0.40)\times 10^{-7}$~\cite{Aaij:2016elb}.
Last year Belle reported the rate of $\overline B{}^0\to\pi^0\pi^0$ of ${\cal B}(\overline B{}^0\to\pi^0\pi^0)=(1.31\pm0.19\pm0.18)\times 10^{-6}$~\cite{Julius:2017jso}.
Some of these modes are highly suppressed and are among the rarest $B$ decays.
There have been constant updates on other measurements, such as the rates and
direct CP asymmetries of $B_{(s)}\to K\pi, KK, \pi\pi$ modes~\cite{HFAG, PDG, Gershon}.
It is well known that direct CP asymmetries and rates of suppressed modes are sensitive to final state interaction (FSI)~\cite{Cheng:2004ru, Gronau}.
In a study on the effects of FSI on $B_{u,d,s}\to PP$ modes~\cite{Chua:2007cm},
the so-called (too large) ${\cal B}(\pi^0\pi^0)/{\cal B}(\pi^+\pi^-)$ ratio and (non-vanishing) $\Delta{\cal A}\equiv{\cal A}(K^-\pi^+)-{\cal A}(K^-\pi^0)$ direct CP asymmetry puzzles in $\overline B_{u,d}$ decays can both be resolved by considering rescattering among $PP$ states.~\footnote{One is referred to \cite{Cheng:2014rfa,Kpi} for some recent analyses on these puzzles.}
Several rates and CP asymmetries were predicted.
The newly observed $B_s^0\to K^0\overline K{}^0$ rate is consistent with the prediction.
However, some results are in tension with the recent measurements.
In particular, the predicted $B_s\to\eta'\eta'$ rate is too high compared to the data:
its central value is off by a factor of 3.
As new measurements are reported, more data are being collected at LHCb, and Belle II will be turned on in the very near future,
it is interesting and timely to revisit the subject.
It is useful to first give the physical picture.
From the time-reversal properties of the Wilson operators in the weak Hamiltonian, one finds that the decay amplitude satisfies \cite{Suzuki:1999uc}~\footnote{See Appendix A for a derivation.}
\begin{eqnarray}
A_i=\sum_{k=1}^N{\cal S}^{1/2}_{ik} A^0_k,
\label{eq:master}
\end{eqnarray}
where $A_i$
is a $\overline B_q\to PP$ decay amplitude with weak as well as strong phases,
$A^0_k$ is an amplitude containing weak phases only, $i=1,\dots,n$, denotes all charmless $PP$ states and
$k=1,\dots,n,n+1,\dots,N,$ denotes {\it all} possible states that
can rescatter into the charmless $PP$ states
through the strong interacting $S$-matrix, ${\cal S}$.
Strong phases are encoded in the rescattering matrix.
This is known as the Watson theorem \cite{Watson:1952ji}.
There are two points that need to be emphasized.
First, the above result is exact. Every $\overline B{}_q\to PP$ decay amplitude should satisfy it.
Second, for a typical $\overline B{}_q$
decay, since the $B$ mass is large there is a large number of kinematically allowed states involved in the above equation, i.e. $N$ in the above equation is large.
Consequently, the equation is hard to solve.
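As a minimal numerical sketch of Eq.~(\ref{eq:master}) (an illustration with arbitrary toy parameters, not part of the derivation), one can take a two-channel strong $S$-matrix that is symmetric and unitary, and check that ${\cal S}^{1/2}$ dresses real, weak-phase-only amplitudes $A^0$ with strong phases while preserving the total rate within the group:

```python
import numpy as np

# Two-channel strong S-matrix parametrized as R(t) diag(e^{2i d1}, e^{2i d2}) R(-t):
# symmetric and unitary by construction (illustrative parameters, not fitted values).
def smatrix(t, d1, d2, m=1.0):
    R = np.array([[np.cos(t), np.sin(t)], [-np.sin(t), np.cos(t)]])
    D = np.diag([np.exp(2j * m * d1), np.exp(2j * m * d2)])
    return R @ D @ R.T

S = smatrix(0.4, 0.6, -0.3)              # S = 1 + iT
S_half = smatrix(0.4, 0.6, -0.3, m=0.5)  # S^{1/2}: same rotation, half phases

assert np.allclose(S_half @ S_half, S)                   # (S^{1/2})^2 = S
assert np.allclose(S_half @ S_half.conj().T, np.eye(2))  # unitarity
assert np.allclose(S_half, S_half.T)                     # time-reversal symmetry

A0 = np.array([1.0, 0.2])   # "weak-phase-only" amplitudes, taken real here
A = S_half @ A0             # physical amplitudes, as in Eq. (eq:master)

# Strong phases are generated, but the total rate within the group is preserved.
assert np.any(np.abs(A.imag) > 1e-6)
assert np.isclose(np.sum(np.abs(A)**2), np.sum(np.abs(A0)**2))
```

The unitarity of ${\cal S}^{1/2}$ is what allows rescattering to redistribute rates among channels without changing their sum.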
Although the largeness of the $B$ mass
makes it difficult to solve the above equation,
it is, on the contrary, precisely the largeness of $m_B$ that makes the problem somewhat tractable.
According to the duality argument,
when the contributions from all hadronic
states at a large enough energy scale are summed over, one should
be able to understand the physics in terms of the quark and gluon
degrees of freedom.
Indeed, several quantum chromodynamics (QCD)-based factorization approaches, such
as pQCD~\cite{pQCD}, QCD factorization
(QCDF)~\cite{Beneke:2001ev,Beneke:2003zv} and soft collinear
effective theory (SCET)~\cite{SCET}
make use of the large $B$ mass
and give predictions for
the factorization amplitudes, $A^{\rm fac}$.
In other words, exploiting the largeness of $m_B$ compared to $\Lambda_{QCD}$,
the factorization approaches provide solutions to Eq.~(\ref{eq:master}),
i.e. $ A^{\rm fac}_i=\sum_{k=1}^N{\cal S}^{1/2}_{ik} A^0_k$.
In the infinite $m_B$ limit, the above program may work perfectly.
However, in the physical $m_B$ case, power corrections can be important and may not be neglected.
In fact,
the effects of power corrections are strongly hinted at by unexpected enhancements in the rates of several color-suppressed modes, such as the $\overline B{}^0\to\pi^0\pi^0$ decay~\cite{PDG,HFAG}, and by unexpected signs of direct $CP$ asymmetries, as in the difference of the direct CP asymmetries of $\overline B{}^0\to K^-\pi^+$ and $B^-\to K^-\pi^0$ decays~\cite{Lin:2008zzaa}.
These anomalies lead to the above mentioned $\pi\pi$ and $K\pi$ puzzles.
It is fair to say that the factorization approaches can reasonably reproduce the rates of color-allowed modes,
but they encounter some difficulties with the rates of color-suppressed modes and with CP asymmetries.
It is plausible to assume that the factorization approaches do not give the full solution to Eq.~(\ref{eq:master});
some residual rescattering, or residual final state interaction, is still allowed and needed in $\overline B{}_q\to PP$ decays.
Note that the group of charmless $PP$ states is unique
in $\overline B{}_q\to PP$ decays, as the $P$'s belong to the same SU(3) multiplet
and the $PP$ states are well separated from all
other states; the duality argument cannot be applied to
this limited number of states~\cite{quasielastic,quasielastic0}.
Note that residual rescattering among $PP$ modes only slightly affects the rates of color-allowed modes, but it can easily change the direct CP violation of most modes and the rates of color-suppressed modes at the same time.
It is a ``one stone, two birds'' scenario:
it can potentially solve two problems at once without affecting the successful results of the factorization approach for color-allowed rates.
In fact, this approach is more modest than the factorization approach,
as it leaves some room for our ignorance of strong dynamics.
In the following text, unless indicated otherwise we use rescattering among $PP$ states or rescattering for short to denote this particular type of rescattering, while we assume that FSI contributions from all other states are contained in the factorization amplitudes.
The quark-diagram or so-called topological
approach has been extensively used in mesonic modes~\cite{Zeppenfeld:1980ex,Chau:tk,Chau:1990ay,Gronau:1994rj,Gronau:1995hn,Cheng:2014rfa, Gronau}.
It will be useful and interesting to study the FSI effects on topological amplitudes.
For some early works in a different approach, one is referred to Ref.~\cite{Gronau}.
The relation between topological amplitudes and rescattering will be explored,
which helps to provide a better understanding of the effects of residual rescattering.
The layout of the present paper is as follows: In Sec. II we
give the formalism. Results and discussions are presented in
Sec. III. Sec. IV contains our conclusions. Some useful formulas and derivations are collected in
Appendices A and B.
\section{Formalism}
In this section we will give the rescattering (res.) formulas, topological amplitudes (TA) of $\overline B{}_q\to PP$ decays and the relations between res. and TA.
\subsection{Rescattering Formulas}
Most of the following formulas are from~\cite{Chua:2007cm}, but some are new.
As noted in the Introduction, with rescattering we have (see Appendix A)
\begin{eqnarray}
A_i=\sum_{j=1}^n({\cal S}_{res}^{1/2})_{ij} A^{fac}_j,
\label{eq:master1}
\end{eqnarray}
where
$i,j=1,\dots,n$ denote all charmless
$PP$ states. To apply the above formula, we need to specify the factorization
amplitudes. In this work, we use the factorization amplitudes
obtained in the QCD factorization approach~\cite{Beneke:2003zv}.
According to the quantum numbers of the final states, which can mix under FSI, $\overline B_q\to PP$ decays can be collected into four groups.
Explicit formulas are collected in Appendix~A.
Here we give an example for illustration.
The $\overline B^0_d\to K^-\pi^+$ decay can rescatter with three other states, namely $\overline B^0_d\to \overline K{}^0\pi^0$, $\overline K{}^0\eta_8$ and $\overline K{}^0\eta_1$, via charge exchange, singlet exchange and annihilation rescatterings as denoted in Fig.~\ref{fig:r} (a)-(c). These states are the group-1 modes.
The relevant rescattering formula is given by
\begin{eqnarray}
\left(
\begin{array}{l}
A_{\overline B {}^0_{d}\to K^-\pi^+}\\
A_{\overline B {}^0_{d}\to \overline K {}^0 \pi^0}\\
A_{\overline B {}^0_{d}\to \overline K {}^0 \eta_8}\\
A_{\overline B {}^0_{d}\to \overline K {}^0 \eta_1}
\end{array}
\right)
&=& {\cal S}_{res,1}^{1/2}
\left(
\begin{array}{l}
A^{fac}_{\overline B {}^0_{d}\to K^-\pi^+}\\
A^{fac}_{\overline B {}^0_{d}\to \overline K {}^0 \pi^0}\\
A^{fac}_{\overline B {}^0_{d}\to \overline K {}^0 \eta_8}\\
A^{fac}_{\overline B {}^0_{d}\to \overline K {}^0 \eta_1}
\end{array}
\right),
\label{eq:FSIB0Kpi}
\end{eqnarray}
with
${\cal S}^{1/2}_{res,1}=(1+i{\cal T}_1)^{1/2}$ and
\begin{eqnarray}
{\cal T}_1 &=& \left(
\begin{array}{cccc}
r_0+r_a
&\frac{-r_a+r_e}{\sqrt2}
&\frac{-r_a+r_e}{\sqrt6}
&\frac{2\bar r_a+\bar r_e}{\sqrt3}
\\
\frac{-r_a+r_e}{\sqrt2}
&r_0+\frac{r_a+r_e}{2}
&\frac{r_a-r_e}{2\sqrt3}
&-\frac{2\bar r_a+\bar r_e}{\sqrt6}
\\
\frac{-r_a+r_e}{\sqrt6}
&\frac{r_a-r_e}{2\sqrt3}
&r_0+\frac{r_a+5r_e}{6}
&-\frac{2\bar r_a+\bar r_e}{3\sqrt2}
\\
\frac{2\bar r_a+\bar r_e}{\sqrt3}
&-\frac{2\bar r_a+\bar r_e}{\sqrt6}
&-\frac{2\bar r_a+\bar r_e}{3\sqrt2}
&\tilde r_0+\frac{4\tilde r_a+2\tilde r_e}{3}
\end{array}
\right).
\end{eqnarray}
The rescattering parameters $r_{0,a,e,t}$, $\bar r_{0,a,e,t}$,
$\tilde r_{0,a,e,t}$, $\hat r_{0,a,e,t}$ and $\check r_{0,a,e,t}$
denote~\footnote{Note that $\hat r$ and $\check r$ do not appear in ${\cal T}_1$, but they will contribute to some other $PP$ modes.}
rescattering in $\Pi({\bf 8})\,\Pi({\bf 8})\to\Pi({\bf
8})\,\Pi({\bf 8})$, $\Pi({\bf 8})\,\Pi({\bf 8})\to\Pi({\bf
8})\,\eta_1$, $\Pi({\bf 8})\eta_1\to\Pi({\bf 8})\eta_1$ and
$\eta_1\eta_1\to\eta_1\eta_1$, respectively, with $\Pi(\bf 8)$ the SU(3) octet and
$\eta_1$ the singlet, and the subscripts
$0,a,e,t$ represent flavor singlet, annihilation, exchange and
total-annihilation rescatterings, respectively (see
Fig.~\ref{fig:r}).
\begin{figure}[t]
\centering
\subfigure[]{
\includegraphics[width=6cm]{re.pdf}
}
\subfigure[]{
\includegraphics[width=6cm]{r0.pdf}
}
\subfigure[]{
\includegraphics[width=6cm]{ra.pdf}
}
\subfigure[]{
\includegraphics[width=6cm]{rt.pdf}
}
\caption{Pictorial representation of
(a) charge exchange $r_e$, (b) singlet exchange $r_0$,
(c) annihilation $r_a$ and (d) total-annihilation $r_t$
for $PP$ (re)scattering.}
\label{fig:r}
\end{figure}
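Time-reversal invariance of the strong interaction requires the rescattering $T$-matrix to be symmetric; in particular, the $(\overline K{}^0\pi^0,\overline K{}^0\eta_1)$ and $(\overline K{}^0\eta_1,\overline K{}^0\pi^0)$ entries of ${\cal T}_1$ must both equal $-(2\bar r_a+\bar r_e)/\sqrt6$. The sketch below (with randomly chosen illustrative parameters, not fitted values) encodes ${\cal T}_1$ and checks this symmetry:

```python
import numpy as np

rng = np.random.default_rng(1)
# Random illustrative rescattering parameters; d44 stands for the
# tilde-r combination on the eta_1-eta_1 diagonal.
r0, ra, re, rba, rbe, d44 = rng.normal(size=6)
s2, s3, s6 = np.sqrt(2), np.sqrt(3), np.sqrt(6)
C = 2 * rba + rbe  # the combination (2 rbar_a + rbar_e) entering the eta_1 column

# T_1 in the basis (K- pi+, K0bar pi0, K0bar eta8, K0bar eta1)
T1 = np.array([
    [r0 + ra,         (-ra + re) / s2,      (-ra + re) / s6,         C / s3],
    [(-ra + re) / s2,  r0 + (ra + re) / 2,  (ra - re) / (2 * s3),   -C / s6],
    [(-ra + re) / s6,  (ra - re) / (2 * s3), r0 + (ra + 5 * re) / 6, -C / (3 * s2)],
    [C / s3,          -C / s6,              -C / (3 * s2),           d44],
])

# Time-reversal invariance of the strong S-matrix demands a symmetric T-matrix.
assert np.allclose(T1, T1.T)
```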
Flavor symmetry requires that $({\cal S}_{res})^m$ for an arbitrary
power $m$ should also have the same form as ${\cal S}_{res}$. More
explicitly, from SU(3) symmetry, we should have
\begin{eqnarray}
({\cal S}_{res})^m&=&(1+i{\cal T})^m\equiv 1+i{\cal T}^{(m)},
\label{eq:SSU3a}
\end{eqnarray}
where ${\cal T}^{(m)}$ is defined through the above equation and its
form is given by
\begin{eqnarray}
{\cal T}^{(m)}&=&{\cal T}\,\,\, {\rm with\,\,}(r_j,\bar r_j,\tilde r_j,\check r_j)
\to (r^{(m)}_j,\bar r^{(m)}_j,\tilde r^{(m)}_j,\check r^{(m)}_j),
\label{eq:SSU3b}
\end{eqnarray}
for $j=0,a,e,t$.
It is useful to note that we have $\bf 8\otimes 8$, $\bf 8\otimes 1$,
$\bf 1\otimes 8$ and $\bf 1\otimes 1$ SU(3) products for the $P_1P_2$
final states, which have to be symmetric under the exchange of $P_1$ and
$P_2$ in the $\overline B\to PP$ decay, as the meson pair is in an $s$-wave configuration
and has to satisfy Bose-Einstein statistics.
The allowed ones are the $\bf 27$, $\bf 8$ and the $\bf 1$
from $\bf 8\otimes 8$, the $\bf 8'$ from the symmetrized $\bf
8\otimes 1$+$\bf 1\otimes 8$, and $\bf 1'$ from $\bf 1\otimes1$
(see, for example \cite{TDLee}, for the decomposition).
Hence, from SU(3) symmetry and
the Bose-Einstein statistics, we should have
\begin{eqnarray}
({\cal S}_{res})^m
=\sum_{a=1}^{27}|{\bf 27};a\rangle e^{2m\,i\delta_{27}}\langle {\bf 27};a|
+\sum_{b=1}^{8}\sum_{p,q=8,8'}|p;b\rangle {\cal U}^{m}_{pq} \langle q;b|
+\sum_{p,q=1,1'}|p;1\rangle {\cal V}^{m}_{pq} \langle q;1|,
\label{eq:SU3decomposition}
\end{eqnarray}
where $a$ and $b$ are labels of states within multiplets,
and matrices ${\cal U}^{m}$ and ${\cal V}^{m}$ are given by
\begin{eqnarray}
{\cal U}^{m}(\tau,\delta_8,\delta'_{8})
&\equiv&\left(
\begin{array}{cc}
\cos\tau
&\sin\tau
\\
-\sin\tau
&\cos\tau
\end{array}
\right) \left(
\begin{array}{cc}
e^{2m\,i\delta_8}
&0
\\
0
&e^{2m\,i\delta_8'}
\end{array}
\right) \left(
\begin{array}{cc}
\cos\tau
&-\sin\tau
\\
\sin\tau
&\cos\tau
\end{array}
\right),
\nonumber\\
{\cal V}^{m}(\nu,\delta_1,\delta'_{1})
&\equiv&\left(
\begin{array}{cc}
\cos\nu
&\sin\nu
\\
-\sin\nu
&\cos\nu
\end{array}
\right) \left(
\begin{array}{cc}
e^{2m\,i\delta_1}
&0
\\
0
&e^{2m\,i\delta_1'}
\end{array}
\right) \left(
\begin{array}{cc}
\cos\nu
&-\sin\nu
\\
\sin\nu
&\cos\nu
\end{array}
\right),
\label{eq:UandV}
\end{eqnarray}
respectively.
The rescattering parameters $r_i$, as the solutions to Eqs.~(\ref{eq:SSU3a}) and
(\ref{eq:SSU3b}), can be expressed in terms of these angles and phases:
\begin{eqnarray}
1+i(r^{(m)}_0+r^{(m)}_a)
&=&\frac{2 e^{2m\,i\delta_{27}}+3{\cal U}^{m}_{11}}{5},
\nonumber\\
i(r^{(m)}_e-r^{(m)}_a)
&=&\frac{3 e^{2m\,i\delta_{27}}-3{\cal U}^{m}_{11}}{5},
\nonumber\\
i (r^{(m)}_a+r^{(m)}_t)
&=&\frac{-e^{2m\,i\delta_{27}}- 4 {\cal U}^{m}_{11} + 5 {\cal V}^{m}_{11}}{20},
\nonumber\\
i(2\bar r^{(m)}_a+\bar r^{(m)}_e)
&=&\frac{3}{\sqrt5} {\cal U}^{m}_{12},
\nonumber\\
1+i(\tilde r^{(m)}_0+\frac{4\tilde r^{(m)}_a+2\tilde r_e^{(m)}}{3})
&=&{\cal U}_{22}^{m},
\nonumber\\
i(\hat r^{(m)}_t+\frac{4\hat r^{(m)}_a+2\hat r^{(m)}_e}{3})
&=&\frac{1}{\sqrt2} {\cal V}^{m}_{12},
\nonumber\\
1+i(\check r^{(m)}_0+\frac{4\check r^{(m)}_a+2\check r_e^{(m)}+3\check r^{(m)}_t}{6})
&=&{\cal V}_{22}^{m},
\label{eq:solution}
\end{eqnarray}
with ${\cal U}^{m}_{ij}$ and ${\cal V}^{m}_{ij}$ given in Eq. (\ref{eq:UandV}).
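The content of Eqs.~(\ref{eq:SU3decomposition})--(\ref{eq:solution}) can be checked numerically for the group-1 block. The sketch below (with arbitrary illustrative angles and phases) builds $({\cal S}_{res,1})^m$ from the $\bf 27$ eigenvalue and the $\bf 8$--$\bf 8'$ block ${\cal U}^m$; here the normalized octet combination $|{\bf 8}\rangle\propto K^-\pi^+ - \overline K{}^0\pi^0/\sqrt2 - \overline K{}^0\eta_8/\sqrt6$ is derived from the symmetric octet in $\Pi\,\Pi$ (our reconstruction, not quoted in the text). It verifies the power law $({\cal S})^{m_1}({\cal S})^{m_2}=({\cal S})^{m_1+m_2}$ and the entry pattern of ${\cal T}_1$:

```python
import numpy as np

def U(m, tau, d8, d8p):
    # The 2x2 octet/octet-prime block of Eq. (eq:UandV)
    R = np.array([[np.cos(tau), np.sin(tau)], [-np.sin(tau), np.cos(tau)]])
    return R @ np.diag([np.exp(2j*m*d8), np.exp(2j*m*d8p)]) @ R.T

# Basis: (K- pi+, K0bar pi0, K0bar eta8, K0bar eta1).
# |8> is the symmetric-octet PP combination (derived from Pi.Pi); |8'> = K0bar eta1.
e8  = np.sqrt(3/5) * np.array([1, -1/np.sqrt(2), -1/np.sqrt(6), 0])
e8p = np.array([0.0, 0.0, 0.0, 1.0])
P27 = np.eye(4) - np.outer(e8, e8) - np.outer(e8p, e8p)   # 27-plet projector

def S_res(m, d27, tau, d8, d8p):
    V = np.column_stack([e8, e8p])   # the 8 and 8' directions
    return np.exp(2j*m*d27) * P27 + V @ U(m, tau, d8, d8p) @ V.T

d27, tau, d8, d8p = 0.2, 0.5, 0.7, -0.4       # illustrative values
S1 = S_res(1, d27, tau, d8, d8p)
S2 = S_res(2, d27, tau, d8, d8p)

assert np.allclose(S1 @ S1, S2)                   # (S)^1 (S)^1 = (S)^2, Eq. (eq:SSU3a)
assert np.allclose(S1, S1.T)                      # symmetric (time reversal)
assert np.allclose(S1 @ S1.conj().T, np.eye(4))   # unitary

# Entry pattern of T_1: (K-pi+,K0bar pi0)/(K-pi+,K0bar eta8) = sqrt(3),
# and (K0bar pi0,eta1)/(K0bar eta8,eta1) = sqrt(3).
assert np.isclose(S1[0, 1] / S1[0, 2], np.sqrt(3))
assert np.isclose(S1[1, 3] / S1[2, 3], np.sqrt(3))
```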
It is interesting to see how the rescattering behaves in the U(3)
symmetric case. It is known that the U$_A(1)$ breaking is
responsible for the mass difference between $\eta$ and $\eta'$, and U(3)
symmetry is not a good symmetry for the low-lying pseudoscalars.
However, U(3) symmetry may still be a reasonable one for
a system that rescatters at energies of order $m_B$.
The mass difference between $\eta$
and $\eta'$, as an indicator of U(3) symmetry breaking effect,
does not lead to sizable energy difference of these particles in
charmless $B$ decays.
Note that in the literature, some authors also make
use of U(3) symmetry in charmless $B$ decays (see, for
example~\cite{Pham:2007nt}).
We note that in the U(3) case, we have
\begin{eqnarray}
r_i=\bar r_i=\tilde r_i=\hat r_i=\check r_i.
\label{eq: U3r}
\end{eqnarray}
Consequently, by requiring
\begin{eqnarray}
{\cal T}^{(m)}&=&{\cal T}\,\,\, {\rm with\,\,}(r_j,\bar r_j,\tilde r_j,\check r_j)
\to (r^{(m)}_j,r^{(m)}_j,r^{(m)}_j,r^{(m)}_j),
\end{eqnarray}
as required by Eq. (\ref{eq: U3r}), one must have~\cite{Chua:2007cm}
\begin{eqnarray}
r^{(m)}_a r^{(m)}_e=0.
\label{eq: rare=0}
\end{eqnarray}
There are two solutions, either $r^{(m)}_e=0$ or $r^{(m)}_a=0$~[see Eqs.~(\ref{eq:solutionreU3ra}) and (\ref{eq:solutionreU3re})].
Note that in both solutions, we have
\begin{eqnarray}
\delta_{27}=\delta_8'=\delta'_1.
\end{eqnarray}
To reduce the number of rescattering parameters, the above relation will be used in this work as a working assumption,
although we do not impose the full U(3) symmetry on the FSI.
After imposing the above relation and factoring out an overall phase factor, say $\delta_{27}$,
we are left with two mixing angles and two phase
differences:
\begin{eqnarray}
\tau,\quad
&&\nu,
\quad
\delta\equiv\delta_8-\delta_{27},
\quad
\sigma\equiv\delta_1-\delta_{27},%
\label{eq:FSIparameters}
\end{eqnarray}
in the scattering matrices.
The rescattering formula Eq.~(\ref{eq:master1}) now becomes
\begin{eqnarray}
A={\cal S}_{res}^{1/2}(\tau,\nu;\delta,\sigma)\cdot
A^{\rm fac},
\label{eq:master2}
\end{eqnarray}
with the overall phase removed.
In summary, four additional parameters from the residual rescattering are introduced into the decay amplitudes.
We find that it is useful to incorporate SU(3) breaking effects in the scattering matrix.
The idea is to remove the SU(3) breaking effect in $A^{\rm fac}$ before rescattering and to put it back after the rescattering.
The underlying reason is as follows.
In the core of the FSI, the rescattering processes occur at the $m_B$ energy scale,
where the SU(3) breaking effect cannot be very important.
Hence the amplitudes to be rescattered are taken in the SU(3) limit,
but after the rescattering, as the hadronization process takes place, SU(3) breaking cannot be neglected and its effects need to be included.
In practice we use ratios of decay constants to model the SU(3) breaking effect.
For example, the $B^-\to K^0K^-$ factorization amplitude is multiplied by $(f_\pi/f_K)^2$ before rescattering with other states and is multiplied by $(f_K/f_\pi)^2$ after rescattering.
For convenience these two factors are absorbed into ${\cal S}_{res}^{1/2}$.
These features are new compared to Ref.~\cite{Chua:2007cm}.
The rescattering matrices needed in this work are collected in Appendix A.
As we will see in the next section, including these four rescattering parameters notably improves the agreement between theory and data.
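To illustrate why residual rescattering leaves color-allowed rates nearly intact while reshuffling suppressed ones, the toy sketch below (illustrative numbers only, not the fitted parameters of this work) applies a two-channel ${\cal S}_{res}^{1/2}$ with modest phases and mixing to a hierarchical pair of factorization amplitudes:

```python
import numpy as np

def S_half(tau, d1, d2):
    # Two-channel S^{1/2}: rotation * half-phases * inverse rotation (illustrative).
    R = np.array([[np.cos(tau), np.sin(tau)], [-np.sin(tau), np.cos(tau)]])
    return R @ np.diag([np.exp(1j*d1), np.exp(1j*d2)]) @ R.T

A_fac = np.array([1.0, 0.05])        # color-allowed vs color-suppressed (toy values)
A = S_half(0.3, 0.4, -0.2) @ A_fac   # as in Eq. (eq:master2), overall phase removed

rates_in, rates_out = np.abs(A_fac)**2, np.abs(A)**2

# Unitarity: the total rate within the group is preserved ...
assert np.isclose(rates_out.sum(), rates_in.sum())
# ... the large (color-allowed) rate moves by only a few percent ...
assert abs(rates_out[0] / rates_in[0] - 1) < 0.1
# ... while the suppressed rate can change by an O(1) factor.
assert rates_out[1] / rates_in[1] > 2
```

This is the "one stone, two birds" mechanism in miniature: the same unitary rotation that barely perturbs the dominant amplitude feeds a sizable absolute contribution into the suppressed one.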
\subsection{Rescattering and Topological Amplitudes in the SU(3) limit}
The topological amplitude approach, or flavor flow approach, is based on SU(3) symmetry.
The amplitudes can contain weak and strong phases.
FSI will generate additional strong phases and can potentially mix different topological amplitudes.
It is therefore interesting to investigate the relation between FSI and topological amplitudes.
We will take a closer look at this issue in the presence of rescattering among $PP$ states.
We will consider the topological amplitudes in the SU(3) limit,
the rescattering of topological amplitudes in the SU(3) limit and, finally, topological amplitudes and rescattering in the U(3) limit.
The discussion will be useful for providing a better understanding of the effects of FSI in $\overline B{}_q\to PP$ decays.
These discussions are all new compared to Ref.~\cite{Chua:2007cm}.
\subsubsection{Topological Amplitudes in the SU(3) limit}
It is well known
that the fields annihilating $B^-,\,\overline B^0_{d,s}$ and creating $\pi,\,K,\,\eta_8$ transform
respectively as $\overline {\bf 3}$ and ${\bf 8}$
under SU(3) (see, for example~\cite{TDLee}),
\begin{eqnarray}
{\overline B}&=& \left(
\begin{array}{ccc}
B^- &\overline B {}^0 &\overline B {}^0_s
\end{array}
\right),
\nonumber\\
\Pi&=& \left(
\begin{array}{ccc}
{{\pi^0}\over\sqrt2}+{{\eta_8}\over\sqrt6}&{\pi^+} &{K^+}\\
{\pi^-}&-{{\pi^0}\over\sqrt2}+{{\eta_8}\over\sqrt6}&{K^0}\\
{K^-} &{\overline K {}^0}&-\sqrt{2\over3}{\eta_8}
\end{array}
\right).
\end{eqnarray}
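As a small sanity check of the octet matrix above (a sympy sketch), $\Pi$ is traceless, as required for an SU(3) octet; the $\eta_8$ coefficients on the diagonal cancel:

```python
import sympy as sp

# Meson fields as commuting symbols (flavor bookkeeping only)
pi0, pip, pim, K0, K0b, Kp, Km, eta8 = sp.symbols(
    'pi0 pip pim K0 K0b Kp Km eta8')

Pi = sp.Matrix([
    [pi0/sp.sqrt(2) + eta8/sp.sqrt(6), pip, Kp],
    [pim, -pi0/sp.sqrt(2) + eta8/sp.sqrt(6), K0],
    [Km, K0b, -sp.sqrt(sp.Rational(2, 3))*eta8],
])

# The octet lives in the traceless part: 2/sqrt(6) - sqrt(2/3) = 0 for eta8.
assert sp.simplify(Pi.trace()) == 0
```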
For the $b\to u\bar u d$ and $b\to q\bar q d$
processes, the tree (${\cal O}_T$) and penguin (${\cal O}_P$)
operators respectively have the following flavor structure,
\begin{eqnarray}
&&{\cal O}_T\sim (\bar u b )(\bar d u )
=H^{ik}_j (\bar q_i b) (\bar q_k q^j),
\quad
{\cal O}_P\sim(\bar d b) (\bar q_i q^i )
=H^k (\bar q_k b) (\bar q_i q^i) ,
{\nonumber}\\
&&{\cal O}_{EWP}\sim Q_j(\bar d b) ( \bar q_j q^j)
=H_{EW}{}^{ik}_j (\bar q_i b) (\bar q_k q^j),
\end{eqnarray}
where we define
$H^k=\delta^k_2$, $H^{ik}_j=\delta^i_1\delta^1_j \delta^k_2$ and $(H_{EW})^{ik}_j=\delta^i_2 Q_j \delta^k_j$
(no sum in indices).
Note that it is easy to check that we have
$H^{ik}_i=H^k$, $H^{ik}_k=0$, $(H_{EW})^{ik}_k=0$, $(H_{EW})^{ik}_i=Q_2\delta^k_2=Q_2 H^k$.
The flavor
structures of the $|\Delta S|=1$ tree and penguin operators can be
obtained by replacing $d$ with $s$:
$H^k=\delta^k_3$, $H^{ik}_j=\delta^i_1\delta^1_j \delta^k_3$ and $(H_{EW})^{ik}_j=\delta^i_3 Q_j \delta^k_j$.
The effective Hamiltonian, in terms of the meson degrees of freedom,
for the $\overline B\to PP$ decay should have the same SU(3)
transformation property as $H_{\rm W}$. Consequently, we
have
\begin{eqnarray}
H_{\rm eff}&=&T\, \overline B_m H^{ik}_j (\Pi^{\rm out})^j_k (\Pi^{\rm out})^m_i
+C\,\overline B_m H^{ik}_j (\Pi^{\rm out})^j_i (\Pi^{\rm out})^m_k
\nonumber\\
&&+E\,\overline B_k H^{ik}_j (\Pi^{\rm out})^j_l (\Pi^{\rm out})^l_i
+A\,\overline B_i H^{ik}_j (\Pi^{\rm out})^j_l (\Pi^{\rm out})^l_k
\nonumber\\
&&+P\,\overline B_m H^k (\Pi^{\rm out})^m_i (\Pi^{\rm out})^i_k
+\frac{1}{2}PA\,\overline B_k H^k (\Pi^{\rm out})^l_m(\Pi^{\rm out})^m_l
{\nonumber}\\
&&
+P_{EW} \overline B_m H_{EW}{}^{ik}_j(\Pi^{\rm out})^m_i(\Pi^{\rm out})^j_k
+P^C_{EW} \overline B_m H_{EW}{}^{ik}_j (\Pi^{\rm out})^m_k (\Pi^{\rm out})^j_i
{\nonumber}\\
&&+P^E_{EW}\,\overline B_k H_{EW}{}^{ik}_j (\Pi^{\rm out})^j_l (\Pi^{\rm out})^l_i
+P^A_{EW}\,\overline B_i H_{EW}{}^{ik}_j (\Pi^{\rm out})^j_l (\Pi^{\rm out})^l_k
{\nonumber}\\
&& +(H_{\rm eff})_{\rm singlet},
\label{eq:B2PP Heff}
\end{eqnarray}
where the $A$, $P$, $PA$ and $P_{EW}$ terms correspond to annihilation,
penguin, penguin annihilation and electroweak penguin amplitudes, respectively,
and $(H_{\rm eff})_{\rm singlet}$ is the Hamiltonian involving $\eta_1$, given by
\begin{eqnarray}
(H_{\rm eff})_{\rm singlet}&=&\bar T\, \overline B_m H^{ik}_j (\Pi^{\rm out})^j_k (\tilde\Pi^{\rm out})^m_i
+\bar C_1\,\overline B_m H^{ik}_j (\Pi^{\rm out})^j_i (\tilde \Pi^{\rm out})^m_k
\nonumber\\
&&+\bar C_2\,\overline B_m H^{ik}_j (\tilde \Pi^{\rm out})^j_i (\Pi^{\rm out})^m_k
+\tilde C\,\overline B_m H^{ik}_j (\tilde\Pi^{\rm out})^j_i (\tilde\Pi^{\rm out})^m_k
\nonumber\\
&&+\bar E_1\,\overline B_k H^{ik}_j (\tilde\Pi^{\rm out})^j_l (\Pi^{\rm out})^l_i
+\bar E_2\,\overline B_k H^{ik}_j (\Pi^{\rm out})^j_l (\tilde\Pi^{\rm out})^l_i
\nonumber\\
&&+\tilde E\,\overline B_k H^{ik}_j (\tilde\Pi^{\rm out})^j_l (\tilde\Pi^{\rm out})^l_i
+\bar A_1\,\overline B_i H^{ik}_j (\tilde \Pi^{\rm out})^j_l (\Pi^{\rm out})^l_k
\nonumber\\
&&+\bar A_2\,\overline B_i H^{ik}_j (\Pi^{\rm out})^j_l (\tilde \Pi^{\rm out})^l_k
+\bar P_1\,\overline B_m H^k (\tilde\Pi^{\rm out})^m_i (\Pi^{\rm out})^i_k
{\nonumber}\\
&&
+\bar P_2\,\overline B_m H^k (\Pi^{\rm out})^m_i (\tilde\Pi^{\rm out})^i_k
+\tilde P\,\overline B_m H^k (\tilde\Pi^{\rm out})^m_i (\tilde\Pi^{\rm out})^i_k
{\nonumber}\\
&&
+\frac{1}{2}\widetilde {PA}\,\overline B_k H^k (\tilde\Pi^{\rm out})^l_m(\tilde\Pi^{\rm out})^m_l
+\bar P_{EW} \overline B_m H_{EW}{}^{ik}_j(\tilde\Pi^{\rm out})^m_i(\Pi^{\rm out})^j_k
{\nonumber}\\
&&
+\bar P^C_{EW,1} \overline B_m H_{EW}{}^{ik}_j (\tilde\Pi^{\rm out})^m_k (\Pi^{\rm out})^j_i
+\bar P^C_{EW,2} \overline B_m H_{EW}{}^{ik}_j (\Pi^{\rm out})^m_k (\tilde\Pi^{\rm out})^j_i
{\nonumber}\\
&&
+\tilde P^C_{EW} \overline B_m H_{EW}{}^{ik}_j (\tilde\Pi^{\rm out})^m_k (\tilde\Pi^{\rm out})^j_i
+\bar P^E_{EW,1}\,\overline B_k H_{EW}{}^{ik}_j (\tilde\Pi^{\rm out})^j_l (\Pi^{\rm out})^l_i
{\nonumber}\\
&&
+\bar P^E_{EW,2}\,\overline B_k H_{EW}{}^{ik}_j (\Pi^{\rm out})^j_l (\tilde\Pi^{\rm out})^l_i
+\tilde P^E_{EW}\,\overline B_k H_{EW}{}^{ik}_j (\tilde\Pi^{\rm out})^j_l (\tilde\Pi^{\rm out})^l_i
{\nonumber}\\
&&
+\bar P^A_{EW,1}\,\overline B_i H_{EW}{}^{ik}_j (\tilde\Pi^{\rm out})^j_l (\Pi^{\rm out})^l_k
+\bar P^A_{EW,2}\,\overline B_i H_{EW}{}^{ik}_j (\Pi^{\rm out})^j_l (\tilde\Pi^{\rm out})^l_k
{\nonumber}\\
&&
+\tilde P^A_{EW}\,\overline B_i H_{EW}{}^{ik}_j (\tilde\Pi^{\rm out})^j_l (\tilde\Pi^{\rm out})^l_k,
\label{eq:B2PP Heff sing}
\end{eqnarray}
with $(\tilde \Pi^{\rm out})^i_j=\eta_1^{\rm out}\delta^i_j/\sqrt3$.
Note that we introduce $P^E_{EW}$ and $P^A_{EW}$, namely the electroweak penguin exchange and electroweak penguin annihilation terms, for completeness. The above $H_{\rm eff}$ contains all possible SU(3) invariant combinations at first order in $H^{ik}_j$, $H^k$ and $H_{EW}{}^{ik}_j$.
It should be emphasized that the effective Hamiltonian in Eq. (\ref{eq:B2PP Heff}) is obtained using flavor SU(3) symmetry arguments only.
The TAs can contain all possible FSI contributions, while the expressions of the decay amplitudes in terms of these TAs remain the same.
With the redefinitions
\begin{eqnarray}
2\bar A\equiv \bar A_1+\bar A_2,
\quad
2\bar E\equiv\bar E_1+\bar E_2,
\quad
2\bar P\equiv\bar P_1+\bar P_2,
{\nonumber}\\
2\bar P^A_{EW}\equiv\bar P^A_{EW,1}+\bar P^A_{EW,2},
\quad
2\bar P^E_{EW}\equiv \bar P^E_{EW,1}+\bar P^E_{EW,2},
\end{eqnarray}
$(H_{\rm eff})_{\rm singlet}$ can be expressed in a more compact form,
\begin{eqnarray}
(H_{\rm eff})_{\rm singlet}&=&(\bar T+2\bar A)\, \overline B_i H^{ik}_j (\Pi^{\rm out})^j_k\eta_1^{\rm out}/\sqrt3
{\nonumber}\\
&&+(\bar C_1+2\bar E)\,\overline B_k H^{ik}_j (\Pi^{\rm out})^j_i\eta_1^{\rm out}/\sqrt3
{\nonumber}\\
&&+(\bar C_2+2\bar P-\frac{1}{3}\bar P^C_{EW,2}) \overline B_m H^k (\Pi^{\rm out})^m_k \eta_1^{\rm out}/\sqrt3
{\nonumber}\\
&&+(\bar P_{EW} +2\bar P^A_{EW})\,\overline B_i H_{EW}{}^{ik}_j (\Pi^{\rm out})^j_k\eta_1^{\rm out}/\sqrt3
{\nonumber}\\
&& +(\bar P^C_{EW,1}+2\bar P^E_{EW})\,\overline B_k H_{EW}{}^{ik}_j (\Pi^{\rm out})^j_i\eta_1^{\rm out}/\sqrt3
{\nonumber}\\
&&+(\tilde C+\tilde E +\tilde P+\frac{3}{2}\tilde {PA}-\frac{1}{3}\tilde P^C_{EW}-\frac{1}{3}\tilde P^E_{EW})\,\overline B_k H^k \eta_1^{\rm out}\eta_1^{\rm out}/3.
\label{eq:B2PP Heff0}
\end{eqnarray}
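The collapse of pairs of singlet structures into the redefined combinations can be verified symbolically. The sketch below (a sympy bookkeeping exercise, using the $b\to u\bar u d$ coupling $H^{ik}_j=\delta^i_1\delta^1_j\delta^k_2$ from the text) checks that, with $(\tilde\Pi^{\rm out})^i_j=\eta_1^{\rm out}\delta^i_j/\sqrt3$, the $\bar A_1$ and $\bar A_2$ tensor structures are identical, so only the sum $2\bar A=\bar A_1+\bar A_2$ appears:

```python
import sympy as sp

eta1 = sp.Symbol('eta1')
B = sp.symbols('B1:4')   # B_i, i = 1..3
P = sp.Matrix(3, 3, lambda i, j: sp.Symbol(f'P_{i+1}{j+1}'))  # generic (Pi^out)^i_j

def H(i, k, j):
    # H^{ik}_j = delta^i_1 delta^1_j delta^k_2 for b -> u ubar d (0-indexed)
    return sp.Integer(i == 0 and j == 0 and k == 1)

tP = lambda i, j: eta1 * sp.Integer(i == j) / sp.sqrt(3)  # (tilde Pi^out)^i_j

# A1bar structure: B_i H^{ik}_j (tilde Pi)^j_l (Pi)^l_k
A1 = sum(B[i]*H(i, k, j)*tP(j, l)*P[l, k]
         for i in range(3) for k in range(3) for j in range(3) for l in range(3))
# A2bar structure: B_i H^{ik}_j (Pi)^j_l (tilde Pi)^l_k
A2 = sum(B[i]*H(i, k, j)*P[j, l]*tP(l, k)
         for i in range(3) for k in range(3) for j in range(3) for l in range(3))

# Both collapse to B_i H^{ik}_j (Pi)^j_k eta1/sqrt(3): only A1bar + A2bar survives.
assert sp.simplify(A1 - A2) == 0
```

The same mechanism underlies the other paired redefinitions ($\bar E$, $\bar P$, $\bar P^{A,E}_{EW}$).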
Using the above approach we can
reproduce familiar expressions of decay amplitudes in terms of TA~\cite{Gronau:1994rj,Gronau:1995hn}.~\footnote{See also \cite{He:2018php} for a recent discussion.}
Explicitly, we have the following amplitudes:
\begin{eqnarray}
A_{\overline B {}^0_{d}\to K^-\pi^+}&=&T'+P'+\frac{1}{3}(2 P_{EW}^{\prime\,C}-P_{EW}^{\prime\,E}),
{\nonumber}\\
A_{\overline B {}^0_{d}\to \overline K {}^0 \pi^0}&=&\frac{1}{3\sqrt2}(3 C'-3P'+3P'_{EW}+P_{EW}^{\prime\,C}+P_{EW}^{\prime\,E}),
{\nonumber}\\
A_{\overline B {}^0_{d}\to \overline K {}^0 \eta_8}&=&\frac{1}{3\sqrt6}(3 C'-3P'+3P'_{EW}+P_{EW}^{\prime\,C}+P_{EW}^{\prime\,E}),
{\nonumber}\\
A_{\overline B {}^0_{d}\to \overline K {}^0 \eta_1}&=&\frac{1}{3\sqrt3}(3 \bar C'_2+6\bar P'-\bar P_{EW,1}^{\prime\,C}-\bar P_{EW,2}^{\prime\,C}-2\bar P_{EW}^{\prime\,E}),
\label{eq: TAgroup1}
\end{eqnarray}
for group-1 modes,
\begin{eqnarray}
A_{B^-\to \overline K^0\pi^-}&=&A'+P'+\frac{1}{3}(-P_{EW}^{\prime\,C}+2P_{EW}^{\prime\, E}),
{\nonumber}\\
A_{B^-\to K^- \pi^0}&=&\frac{1}{3\sqrt2}(3T'+3C'+3A'+3P'+3 P'_{EW}+2 P_{EW}^{\prime\,C}+2 P_{EW}^{\prime\, E}),
{\nonumber}\\
A_{B^-\to K^- \eta_8}&=&\frac{1}{3\sqrt6}(3T'+3C'-3A'-3P'+3 P'_{EW}+4 P_{EW}^{\prime\,C}-2 P_{EW}^{\prime\, E}),
{\nonumber}\\
A_{B^-\to K^- \eta_1}&=&\frac{1}{3\sqrt3}(3\bar T'+3 \bar C'_2+6 \bar A'+6\bar P'+2\bar P_{EW,1}^{\prime\,C}-\bar P_{EW,2}^{\prime\,C}+4\bar P_{EW}^{\prime\,E}),
\label{eq: TAgroup2}
\end{eqnarray}
for group-2 modes,
\begin{eqnarray}
A_{B^-\to \pi^-\pi^0}&=&\frac{1}{\sqrt2}(T+C+P_{EW}+P_{EW}^C),
{\nonumber}\\
A_{B^-\to K^0 K^-}&=&A+P+\frac{1}{3}(-P^C_{EW}+2 P_{EW}^E),
{\nonumber}\\
A_{B^-\to \pi^- \eta_8}&=&\frac{1}{3\sqrt6}(3 T+3 C+6 A+6P+3 P_{EW}+P_{EW}^C+4 P_{EW}^E),
{\nonumber}\\
A_{B^-\to \pi^- \eta_1}&=&\frac{1}{3\sqrt3}(3 \bar T+3\bar C_2+6 \bar A+6\bar P+2 \bar P_{EW,1}^C-\bar P_{EW,2}^C+4\bar P_{EW}^E),
\label{eq: TAgroup3}
\end{eqnarray}
for group-3 modes,
\begin{eqnarray}
A_{\overline B {}^0_{d}\to \pi^+\pi^-}&=&T+E+P+PA+\frac{1}{3}(2P_{EW}^C+P_{EW}^A-P_{EW}^E),
{\nonumber}\\
A_{\overline B {}^0_{d}\to \pi^0 \pi^0}&=&\frac{1}{\sqrt2} (-C+E+P+ PA-P_{EW}-\frac{1}{3}P_{EW}^C+\frac{1}{3}P_{EW}^A-\frac{1}{3}P_{EW}^E),
{\nonumber}\\
A_{\overline B {}^0_{d}\to \eta_8 \eta_8}&=&\frac{1}{9\sqrt2} (3 C+3 E+3 P+ 9PA+3P_{EW}-P_{EW}^C-3P_{EW}^A-P_{EW}^E),
{\nonumber}\\
A_{\overline B {}^0_{d}\to \eta_8 \eta_1}&=&\frac{1}{9\sqrt2}(3 \bar C_1+3 \bar C_2+6 \bar E+6\bar P+3\bar P_{EW}+6 \bar P_{EW}^A-\bar P_{EW,1}^C-\bar P_{EW,2}^C-2 \bar P_{EW}^E),
{\nonumber}\\
A_{\overline B {}^0_{d}\to \eta_1 \eta_1}&=&\frac{1}{9\sqrt2}(6\tilde C+6 \tilde E+6\tilde P+9\tilde{PA}-2\tilde P^C_{EW}-2 \tilde P_{EW}^E),
{\nonumber}\\
A_{\overline B {}^0_{d}\to K^+ K^-}&=&E+PA+\frac{1}{3} P_{EW}^A,
{\nonumber}\\
A_{\overline B {}^0_{d}\to K^0 \overline K {}^0}&=&P+PA-\frac{1}{3}(P_{EW}^C+2 P_{EW}^A+P_{EW}^E),
{\nonumber}\\
A_{\overline B {}^0_{d}\to \pi^0 \eta_8}&=&\frac{1}{3\sqrt3}(3 E-3 P+P_{EW}^C+3 P_{EW}^A+P_{EW}^E),
{\nonumber}\\
A_{\overline B {}^0_{d}\to \pi^0 \eta_1}&=&\frac{1}{3\sqrt6}(3 \bar C_1-3 \bar C_2+6 \bar E-6\bar P+3\bar P_{EW}+6 \bar P_{EW}^A+\bar P_{EW,1}^C+\bar P_{EW,2}^C+2 \bar P_{EW}^E),
\label{eq: TAgroup4}
{\nonumber}\\
\end{eqnarray}
for group-4 modes,
and the following amplitudes:
\begin{eqnarray}
A_{\overline B {}^0_{s}\to K^+\pi^-}&=&T+P+\frac{1}{3}(2 P_{EW}^C-P_{EW}^E),
{\nonumber}\\
A_{\overline B {}^0_{s}\to K {}^0 \pi^0}&=&\frac{1}{3\sqrt2}(3 C-3P+3 P_{EW}+P_{EW}^C+P_{EW}^E),
{\nonumber}\\
A_{\overline B {}^0_{s}\to K {}^0 \eta_8}&=&\frac{1}{3\sqrt6}(3 C-3P+3 P_{EW}+P_{EW}^C+P_{EW}^E),
{\nonumber}\\
A_{\overline B {}^0_{s}\to K {}^0 \eta_1}&=&\frac{1}{3\sqrt3}(3 \bar C_2+6\bar P-\bar P_{EW,1}^C-\bar P_{EW,2}^C-2 \bar P_{EW}^E),
\label{eq: TABsgroup1}
\end{eqnarray}
and
\begin{eqnarray}
A_{\overline B {}^0_{s}\to \pi^+\pi^-}&=&E'+PA'+\frac{1}{3}P_{EW}^{\prime \,A},
{\nonumber}\\
A_{\overline B {}^0_{s}\to \pi^0 \pi^0}&=&\frac{1}{\sqrt2} (E'+ PA'+\frac{1}{3}P_{EW}^{\prime\,A}),
{\nonumber}\\
A_{\overline B {}^0_{s}\to \eta_8 \eta_8}&=&\frac{1}{9\sqrt2} (-6 C'+3 E'+12 P'+ 9PA'-6P'_{EW}-4 P_{EW}^{\prime\,C}-3P_{EW}^{\prime\,A}-4P_{EW}^{\prime\,E}),
{\nonumber}\\
A_{\overline B {}^0_{s}\to \eta_8 \eta_1}&=&\frac{1}{9\sqrt2}(3 \bar C'_1-6 \bar C'_2+6 \bar E'-12\bar P'+3\bar P'_{EW}+6 \bar P_{EW}^{\prime\,A}+2\bar P_{EW,1}^{\prime\,C}+2\bar P_{EW,2}^{\prime\,C}+4 \bar P_{EW}^{\prime\,E}),
{\nonumber}\\
A_{\overline B {}^0_{s}\to \eta_1 \eta_1}&=&\frac{1}{9\sqrt2}(6\tilde C'+6 \tilde E'+6\tilde P'+9\tilde{PA}'-2\tilde P^{\prime\,C}_{EW}-2 \tilde P_{EW}^{\prime\,E}),
{\nonumber}\\
A_{\overline B {}^0_{s}\to K^+ K^-}&=&T'+E'+P'+PA'+\frac{1}{3} (P_{EW}^{\prime\,A}+2P_{EW}^{\prime\,C}-P_{EW}^{\prime\,E}),
{\nonumber}\\
A_{\overline B {}^0_{s}\to K^0 \overline K {}^0}&=&P'+PA'-\frac{1}{3}(P_{EW}^{\prime\,C}+2 P_{EW}^{\prime\,A}+P_{EW}^{\prime\,E}),
{\nonumber}\\
A_{\overline B {}^0_{s}\to \pi^0 \eta_8}&=&\frac{1}{\sqrt3}(-C'+E'-P'_{EW}+P_{EW}^{\prime\,A}),
{\nonumber}\\
A_{\overline B {}^0_{s}\to \pi^0 \eta_1}&=&\frac{1}{\sqrt6}(\bar C'_1+2 \bar E'+\bar P'_{EW}+2 \bar P_{EW}^{\prime\,A}),
\label{eq: TABsgroup4}
\end{eqnarray}
for $\overline B_s\to PP$ decays,
where the $T$, $C$, $A$, $P$, $PA$ and $P_{EW}$ terms correspond to color-allowed tree, color-suppressed tree, annihilation,
penguin, penguin annihilation and electroweak penguin amplitudes, respectively.
Note that $P^E_{EW}$ and $P^A_{EW}$, namely the electroweak penguin exchange and electroweak penguin annihilation terms, are introduced for completeness. See Appendix B for details.
Those with (without) prime are for $\Delta S=-1$(0) transition.
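As a consistency check of the decomposition above (a sympy sketch; the relation itself is the standard $K\pi$ isospin quadrilateral), the four $\Delta S=-1$ $K\pi$ amplitudes of Eqs.~(\ref{eq: TAgroup1}) and (\ref{eq: TAgroup2}) obey $\sqrt2\,A_{K^-\pi^0}=A_{K^-\pi^+}+\sqrt2\,A_{\overline K{}^0\pi^0}+A_{\overline K{}^0\pi^-}$:

```python
import sympy as sp

# Primed topological amplitudes as free symbols
Tp, Cp, Ap, Pp, PEW, PEWC, PEWE = sp.symbols("T' C' A' P' P_EW P_EW^C P_EW^E")
s2 = sp.sqrt(2)

# Amplitudes transcribed from Eqs. (TAgroup1) and (TAgroup2)
A_Kmpip  = Tp + Pp + (2*PEWC - PEWE)/3
A_K0bpi0 = (3*Cp - 3*Pp + 3*PEW + PEWC + PEWE)/(3*s2)
A_K0bpim = Ap + Pp + (-PEWC + 2*PEWE)/3
A_Kmpi0  = (3*Tp + 3*Cp + 3*Ap + 3*Pp + 3*PEW + 2*PEWC + 2*PEWE)/(3*s2)

# K pi isospin quadrilateral, as encoded in the topological decomposition:
lhs = s2 * A_Kmpi0
rhs = A_Kmpip + s2 * A_K0bpi0 + A_K0bpim
assert sp.simplify(lhs - rhs) == 0
```

Both sides reduce to $T'+C'+A'+P'+P'_{EW}+\tfrac{2}{3}(P^{\prime C}_{EW}+P^{\prime E}_{EW})$, so the sum rule holds identically in the SU(3) parametrization.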
The one-to-one correspondence of the SU(3) parameters and the
topological amplitudes is not a coincidence. It can be understood
by using a flavor flow analysis. We take the first term of $H_{\rm eff}$ for illustration. In $H_{\rm W}$ the decays are governed by
the operator
${\cal O}_T\sim (\bar u b )(\bar d u )
=H^{ik}_j (\bar q_i b) (\bar q_k q^j)$,
i.e. a $b\to q_i\, \bar q^j\, q_k$ transition with the corresponding
$H^{ik}_j$ coupling. The first term of $H_{\rm eff}$ in
Eq.~(\ref{eq:B2PP Heff}) is $T\, \overline B_m H^{ik}_j (\Pi^{\rm out})^j_k (\Pi^{\rm out})^m_i$.
Note that we use subscripts and superscripts according to the field convention.
For example, we assign a subscript (superscript) to the initial
(final) state anti-quark $\bar q_m$~($\bar q^m$).
The $\overline B_m (\Pi^{\rm out})^m_i$ part in $T\, \overline B_m H^{ik}_j (\Pi^{\rm out})^j_k (\Pi^{\rm out})^m_i$ can be
interpreted as a $\overline B_m$ to $(\Pi^{\rm out})^m_i$ transition with the same
spectator anti-quark $\bar q_m$ from $\overline B_m$ becoming the final state spectator anti-quark $\bar q^m$, which ends up in $(\Pi^{\rm out})^m_i$.
The quark $q_i$ from $b\to q_i$ transition also ends up in $(\Pi^{\rm out})^m_i$, while
the $(\Pi^{\rm out})^j_k$ part is responsible for the creation of the meson
in which the $W$-emitted $\bar q^j q_k$ pair ends up.
This picture clearly corresponds to the external $W$-emission topology.
The identification of the other topological amplitudes can be understood in the same manner.
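The flavor-flow bookkeeping above can be illustrated with a short symbolic contraction. This is only a sketch with assumed conventions: $\Pi^i_j\sim q_i\bar q^j$, $\overline B_m=(B^-,\overline B{}^0,\overline B{}^0_s)$, the tree coupling for $(\bar u b)(\bar d u)$ is taken as the single nonzero component $H^{12}_1=1$, and the diagonal entries of the octet matrix are abbreviated.

```python
# Hedged flavor-flow illustration (assumed conventions: Pi^i_j ~ q_i qbar^j,
# Bbar_m = (B-, B0bar, Bsbar)): keeping only the tree coupling H^{12}_1 = 1
# for (ubar b)(dbar u), the contraction T Bbar_m H^{ik}_j Pi^j_k Pi^m_i
# produces, among other terms, T * B0bar * pi+ * pi-.
import sympy as sp

pip, pim, pi0, Km, K0b, eta = sp.symbols("pi+ pi- pi0 K- K0bar eta8")
# schematic octet matrix (only the entries used below matter; the diagonal
# entries are abbreviated)
Pi = sp.Matrix([[pi0, pip, sp.Symbol("K+")],
                [pim, -pi0, sp.Symbol("K0")],
                [Km, K0b, eta]])
Bbar = [sp.Symbol("B-"), sp.Symbol("B0bar"), sp.Symbol("Bsbar")]
T = sp.Symbol("T")

# H^{ik}_j nonzero only for (i,k,j) = (1,2,1)  [0-indexed: (0,1,0)],
# so the contraction reduces to sum_m Bbar_m Pi^1_2 Pi^m_1
amp = sum(T * Bbar[m] * Pi[0, 1] * Pi[m, 0] for m in range(3))
print(sp.expand(amp))
```

The $\overline B{}^0$ piece of the contraction is $T\,\pi^+\pi^-$, i.e. the external $W$-emission contribution to $\overline B{}^0\to\pi^+\pi^-$, with the spectator anti-quark flowing from $\overline B_m$ into $(\Pi^{\rm out})^m_i$ exactly as described in the text.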
One can check that all of the above amplitudes can be expressed in terms of the following combinations:
\begin{eqnarray}
T^{(\prime)}+C^{(\prime)},
\quad
C^{(\prime)}-E^{(\prime)},
\quad
A^{(\prime)}+C^{(\prime)},
{\nonumber}\\
P^{(\prime)}-C^{(\prime)}+\frac{1}{3}P_{EW}^{(\prime)C},
\quad
PA^{(\prime)}-\frac{4}{9} C^{(\prime)}+\frac{13}{9} E^{(\prime)}-\frac{1}{3} P_{EW}^{(\prime)C},
{\nonumber}\\
P^{(\prime)}_{EW}+P_{EW}^{(\prime)C},
\quad
P_{EW}^{(\prime)C}-P_{EW}^{(\prime)E},
\quad
P_{EW}^{(\prime)A}+P_{EW}^{(\prime)C},
{\nonumber}\\
\bar T^{(\prime)}+2\bar A^{(\prime)},
\quad
\bar C^{(\prime)}_1+2\bar E^{(\prime)},
\quad
\bar C^{(\prime)}_2+2\bar P^{(\prime)}-\frac{1}{3}\bar P^{(\prime)C}_{EW,2},
\quad
\bar P^{(\prime)}_{EW} +2\bar P^{(\prime)A}_{EW},
{\nonumber}\\
\bar P^{(\prime)C}_{EW,1}+2\bar P^{(\prime)E}_{EW},
\quad
\tilde C^{(\prime)}+\tilde E^{(\prime)} +\tilde P^{(\prime)}+\frac{3}{2}\tilde {PA}^{(\prime)}-\frac{1}{3}\tilde P^{(\prime)C}_{EW}-\frac{1}{3}\tilde P^{(\prime)E}_{EW}.
\label{eq: combinations}
\end{eqnarray}
For example, we can express the decay amplitude of $\overline B^0\to K^-\pi^+$ in the following combinations:
\begin{eqnarray}
A(\overline B^0\to K^-\pi^+)=(T'+C') + (P'-C'+\frac{1}{3}P_{EW}^{C\prime}) +\frac{1}{3}(P^{C\prime}_{EW}-P^{E\prime}_{EW}).
\end{eqnarray}
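Collecting terms, the combinations quoted above indeed sum to $T'+P'+\frac{2}{3}P^{C\prime}_{EW}-\frac{1}{3}P^{E\prime}_{EW}$; a quick symbolic check (generic placeholder symbols):

```python
# Hedged check: expanding the combination form of A(B0bar -> K- pi+) quoted
# above and collecting terms gives T' + P' + (2/3) P'^C_EW - (1/3) P'^E_EW.
import sympy as sp

T, C, P, PewC, PewE = sp.symbols("T C P PewC PewE")

amp = (T + C) + (P - C + sp.Rational(1, 3) * PewC) \
      + sp.Rational(1, 3) * (PewC - PewE)

expected = T + P + sp.Rational(2, 3) * PewC - sp.Rational(1, 3) * PewE
assert sp.simplify(amp - expected) == 0
print(sp.expand(amp))
```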
It is interesting to compare the amplitudes expressed in terms of the topological amplitudes with those in the QCDF calculation.
Using the formulas in \cite{Beneke:2003zv}, but taking the SU(3) limit, we obtain the following relations:
\begin{eqnarray}
T^{(\prime)0}&=&A_{PP}\lambda^{(\prime)}_p\delta_{pu} \alpha_1,
{\nonumber}\\
C^{(\prime)0}&=&A_{PP}\lambda^{(\prime)}_p\delta_{pu} \alpha_2,
{\nonumber}\\
E^{(\prime)0}&=&A_{PP}\lambda^{(\prime)}_p\delta_{pu}\beta_1,
{\nonumber}\\
A^{(\prime)0}&=&A_{PP}\lambda^{(\prime)}_p\delta_{pu}\beta_2,
{\nonumber}\\
P^{(\prime)0}&=&A_{PP}\lambda^{(\prime)}_p(\alpha_4^p+\beta_3^p),
{\nonumber}\\
PA^{(\prime)0}&=&2 A_{PP}\lambda^{(\prime)}_p\beta^p_4,
{\nonumber}\\
P^{(\prime)0}_{EW}&=& \frac{3}{2} A_{PP}\lambda^{(\prime)}_p\alpha^p_{3,EW},
{\nonumber}\\
P^{(\prime)C0}_{EW}&=& \frac{3}{2} A_{PP}\lambda^{(\prime)}_p\alpha^p_{4,EW},
{\nonumber}\\
P^{(\prime)E0}_{EW}&=& \frac{3}{2} A_{PP}\lambda^{(\prime)}_p\beta^p_{3,EW},
{\nonumber}\\
P^{(\prime)A0}_{EW}&=& \frac{3}{2} A_{PP}\lambda^{(\prime)}_p\beta^p_{4,EW},
\label{eq: TAQCDF1}
\end{eqnarray}
where we use $\lambda^{(\prime)}_p\equiv V_{pb}V^*_{pd(s)}$, $p=u,c$, with $V_{pb, pd(s)}$ the Cabibbo-Kobayashi-Maskawa (CKM) matrix elements, and summation over $p$ is implied. One can find detailed definitions of $A_{PP}$, $\alpha$ and $\beta$ in \cite{Beneke:2003zv}.
Note that $A_{PP}$ involves a $\overline B_q\to P$ transition and a $P$ decay constant:
\begin{eqnarray}
A_{PP}=\frac{G_F}{\sqrt2}F^{BP}_0(m_P^2) f_P m_B^2.
\end{eqnarray}
It should be noted that we have removed an overall $i$ in the definition of $A_{PP}$.
The superscript $0$ on the TA denotes that rescattering among the $PP$ states has not yet taken place.
In the SU(3) limit, we will use $F^{BP}_0(m_P^2)=F^{B\pi}_0(0)$ and $f_P=f_\pi$ in later discussion.
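As a rough orientation for the scale involved, $A_{PP}$ can be evaluated numerically with illustrative central values ($G_F$ and $m_B$ taken from the PDG; $F^{B\pi}_0(0)=0.25$ is the fit central value quoted later in the text):

```python
# Rough numerical estimate of A_PP = (G_F/sqrt(2)) F0^{B pi}(0) f_pi m_B^2
# in the SU(3) limit, using illustrative central values.
from math import sqrt

GF = 1.1664e-5     # Fermi constant, GeV^-2 (PDG)
F0 = 0.25          # B -> pi form factor at q^2 = 0 (fit central value)
f_pi = 0.1302      # pion decay constant, GeV
m_B = 5.2795       # B meson mass, GeV (PDG)

A_PP = GF / sqrt(2) * F0 * f_pi * m_B**2
print(f"A_PP ~ {A_PP:.2e} GeV")   # ~ 7.5e-6 GeV
```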
For $B$ decays to a final state with $\eta_1$, things are more complicated.
For example, $A_{P\eta_1}$ is in principle different from $A_{\eta_1 P}$.
In the SU(3) limit we have~\cite{Beneke:2003zv}:
\begin{eqnarray}
\bar T^{(\prime)0}&=&A_{\eta_1P}\lambda^{(\prime)}_p\delta_{pu} \alpha_1,
{\nonumber}\\
\bar C^{(\prime)0}_1&=&A_{\eta_1P}\lambda^{(\prime)}_p\delta_{pu} \alpha_2,
{\nonumber}\\
\bar C^{(\prime)0}_2&=&A_{P\eta_1}\lambda^{(\prime)}_p\delta_{pu} \alpha_2,
{\nonumber}\\
2\bar E^{(\prime)0}&\equiv&\bar E^{(\prime)0}_1+\bar E^{(\prime)0}_2
=A_{\eta_1P}\lambda^{(\prime)}_p\delta_{pu}\beta_1+A_{P\eta_1}\lambda^{(\prime)}_p\delta_{pu}(\beta_1+3\beta_{S1}),
{\nonumber}\\
2\bar A^{(\prime)0}&\equiv&\bar A^{(\prime)0}_1+\bar A^{(\prime)0}_2
=A_{\eta_1P}\lambda^{(\prime)}_p\delta_{pu}\beta_2+A_{P\eta_1}\lambda^{(\prime)}_p\delta_{pu}(\beta_2+3\beta_{S2}),
{\nonumber}\\
2\bar P^{(\prime)0}&\equiv&\bar P^{(\prime)0}_1+\bar P^{(\prime)0}_2
=A_{\eta_1P}\lambda^{(\prime)}_p(\alpha_4^p+\beta_3^p)+A_{P\eta_1}\lambda^{(\prime)}_p(\alpha_4^p+3\alpha^p_3+\beta_3^p+3\beta^p_{S3}),
{\nonumber}\\
\bar P^{(\prime)0}_{EW}&=& \frac{3}{2} A_{\eta_1P}\lambda^{(\prime)}_p\alpha^p_{3,EW},
{\nonumber}\\
\bar P^{(\prime)C0}_{EW,1}&=& \frac{3}{2} A_{\eta_1P}\lambda^{(\prime)}_p\alpha^p_{4,EW},
{\nonumber}\\
\bar P^{(\prime)C0}_{EW,2}&=& \frac{3}{2} A_{P\eta_1}\lambda^{(\prime)}_p\alpha^p_{4,EW},
{\nonumber}\\
2\bar P^{(\prime)E0}_{EW}&\equiv&\bar P^{(\prime)E0}_{EW,1}+\bar P^{(\prime)E0}_{EW,2}
= \frac{3}{2} [A_{\eta_1P}\lambda^{(\prime)}_p\beta^p_{3,EW}+A_{P\eta_1}\lambda^{(\prime)}_p(\beta^p_{3,EW}+3\beta^p_{S3,EW})],
{\nonumber}\\
2\bar P^{(\prime)A0}_{EW}&\equiv&\bar P^{(\prime)A0}_{EW,1}+\bar P^{(\prime)A0}_{EW,2}
= \frac{3}{2} [A_{\eta_1P}\lambda^{(\prime)}_p\beta^p_{4,EW}+A_{P\eta_1}\lambda^{(\prime)}_p(\beta^p_{4,EW}+3\beta^p_{4S,EW})].
\end{eqnarray}
Note that $A_{P\eta_1}$ involves a $\overline B\to P$ transition, while $A_{\eta_1P}$ involves a $\overline B\to \eta_1$ transition:
\begin{eqnarray}
A_{P\eta_1}=\frac{G_F}{\sqrt2}F^{\bar B\to P}_0 f_{\eta_1} m_B^2,
\quad
A_{\eta_1P}=\frac{G_F}{\sqrt2}F^{\bar B\to \eta_1}_0 f_P m_B^2\simeq\frac{G_F}{\sqrt2}F^{\bar B\to P}_0\frac{f_{\eta_1}}{f_P} f_P m_B^2,
\label{eq: TAQCDF2}
\end{eqnarray}
where in the second equation we have made use of the approximation from Ref.~\cite{Beneke:2003zv}.
In fact we have $A_{P\eta_1}\simeq A_{\eta_1 P}\simeq A_{PP}(f_{\eta_1}/f_P)$.
Finally comparing our expressions and those in Ref.~\cite{Beneke:2003zv}, we have
\begin{eqnarray}
\tilde C^{(\prime)0}&=&A_{\eta_1\eta_1}\lambda^{(\prime)}_p\delta_{pu} \alpha_2,
{\nonumber}\\
\tilde E^{(\prime)0}&=&A_{\eta_1\eta_1}\lambda^{(\prime)}_p\delta_{pu}(\beta_1+3\beta_{S1}),
{\nonumber}\\
\tilde P^{(\prime)0}&=&A_{\eta_1\eta_1}\lambda^{(\prime)}_p(\alpha_4^p+3\alpha^p_3+\beta_3^p+3\beta^p_{S3}),
{\nonumber}\\
\widetilde {PA}^{(\prime)}&=&2 A_{\eta_1\eta_1}\lambda^{(\prime)}_p(\beta^p_4+3\beta^p_{S4}),
{\nonumber}\\
\tilde P^{(\prime)C0}_{EW}&=& \frac{3}{2} A_{\eta_1\eta_1}\lambda^{(\prime)}_p\alpha^p_{4,EW},
{\nonumber}\\
\tilde P^{(\prime)E0}_{EW}&=& \frac{3}{2} A_{\eta_1\eta_1}\lambda^{(\prime)}_p(\beta^p_{3,EW}+3\beta^p_{S3,EW}),
\label{eq: TAQCDF3}
\end{eqnarray}
with
\begin{eqnarray}
A_{\eta_1\eta_1}=\frac{G_F}{\sqrt2}F^{\bar B\to \eta_1}_0 f_{\eta_1} m_B^2
\simeq \frac{G_F}{\sqrt2}F^{\bar B\to P}_0 \frac{f_{\eta_1}}{f_P}f_{\eta_1} m_B^2=A_{PP}\bigg(\frac{f_{\eta_1}}{f_P}\bigg)^2.
\end{eqnarray}
In the later discussion, we take $f_{\eta_1}=f_P=f_\pi$.
\subsubsection{Rescattering of Topological Amplitudes in the SU(3) limit}
We now turn to the rescattering part.
The matrices ${\cal T}_{1,2,3,4}$ can be obtained through a diagrammatic
method by matching the Clebsch-Gordan coefficients of the scattering
mesons (see Fig.~\ref{fig:r}) or by using an operator method. We
have
\begin{eqnarray}
O_e&=&Tr(\Pi^{\rm in}\Pi^{\rm out}\Pi^{\rm in}\Pi^{\rm out})/2,
\quad
O_a=Tr(\Pi^{\rm in}\Pi^{\rm in}\Pi^{\rm out}\Pi^{\rm out}),
{\nonumber}\\
O_0&=&Tr(\Pi^{\rm in}\Pi^{\rm out})Tr(\Pi^{\rm in}\Pi^{\rm out})/2,
\quad
O_t=Tr(\Pi^{\rm in}\Pi^{\rm in})Tr(\Pi^{\rm out}\Pi^{\rm out})/4,
\label{eq:Oi}
\end{eqnarray}
corresponding to $r_e$, $r_a$, $r_0$ and $r_t$ contributions, in the combination of
\begin{eqnarray}
{\cal T}^{(m)}=r^{(m)}_0 O_0+r^{(m)}_e O_e+r^{(m)}_a O_a+r^{(m)}_t O_t+\cdots,
\end{eqnarray}
where the remaining terms will be specified below.
The
above terms exhaust all possible combinations for $\Pi({\bf
8})\,\Pi({\bf 8})\to\Pi({\bf 8})\,\Pi({\bf 8})$ scatterings.
To obtain operators involving $\eta_1$, we simply replace $\Pi$ in the above operators by
$\Pi+\eta_1 1_{3\times 3}/\sqrt3$ and collect terms with different numbers of $\eta_1$ fields as
\begin{eqnarray}
\sqrt3 \bar O_e&=&\frac{\sqrt3}{2} \bar O_a
=Tr(\Pi^{\rm in}\Pi^{\rm out}\Pi^{\rm in})\eta^{\rm out}_1+Tr(\Pi^{\rm out}\Pi^{\rm in}\Pi^{\rm out})\eta^{\rm in}_1,
{\nonumber}\\
\tilde O_0&=&\frac{3}{4}\tilde O_a=\frac{3}{2}\tilde O_e=Tr(\Pi^{\rm in}\Pi^{\rm out})\eta_1^{\rm in}\eta_1^{\rm out},
{\nonumber}\\
4\hat O_t&=&3\hat O_a=6\hat O_e=\eta^{\rm in}_1\eta^{\rm in}_1Tr(\Pi^{\rm out}\Pi^{\rm out})
+\eta^{\rm out}_1\eta^{\rm out}_1Tr(\Pi^{\rm in}\Pi^{\rm in}),
{\nonumber}\\
2\check O_0&=&4\check O_t=3\check O_a=6\check O_e=\eta^{\rm in}_1\eta^{\rm out}_1\eta^{\rm in}_1\eta^{\rm out}_1.
\label{eq:Oieta1}
\end{eqnarray}
Note that it is impossible to obtain a term containing three $\eta_1$ fields, as such a term is prohibited by $SU(3)$ symmetry.
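The single- and double-$\eta_1$ relations in Eq.~(\ref{eq:Oieta1}), e.g. $\bar O_a=2\bar O_e$ and $\tilde O_0:\tilde O_a:\tilde O_e=1:\frac{4}{3}:\frac{2}{3}$, can be cross-checked with a short symbolic computation; the sketch below substitutes $\Pi\to\Pi+\eta_1 1_{3\times3}/\sqrt3$ into the operators of Eq.~(\ref{eq:Oi}) using generic traceless $3\times3$ matrices for the octet fields.

```python
# Hedged cross-check of the eta_1 operator relations: collect the single-
# and double-eta_1 pieces generated by Pi -> Pi + eta_1 * 1/sqrt(3).
import sympy as sp

def traceless(name):
    m = sp.Matrix(3, 3, lambda i, j: sp.Symbol(f"{name}{i}{j}"))
    m[2, 2] = -m[0, 0] - m[1, 1]      # octet field is traceless
    return m

Pin, Pout = traceless("a"), traceless("b")
ein, eout = sp.symbols("ein eout")    # eta_1^in, eta_1^out
I3 = sp.eye(3)

Uin = Pin + ein / sp.sqrt(3) * I3
Uout = Pout + eout / sp.sqrt(3) * I3

O_e = sp.expand((Uin * Uout * Uin * Uout).trace() / 2)
O_a = sp.expand((Uin * Uin * Uout * Uout).trace())
O_0 = sp.expand(((Uin * Uout).trace())**2 / 2)

# single-eta_1 pieces (linear in eta_1^out, no eta_1^in): Obar_a = 2 Obar_e
lin_e = O_e.coeff(eout, 1).coeff(ein, 0)
lin_a = O_a.coeff(eout, 1).coeff(ein, 0)
assert sp.simplify(lin_a - 2 * lin_e) == 0

# double-eta_1 pieces (one ein, one eout): O_0 : O_a : O_e = 1 : 4/3 : 2/3
bil_0 = O_0.coeff(ein, 1).coeff(eout, 1)
bil_a = O_a.coeff(ein, 1).coeff(eout, 1)
bil_e = O_e.coeff(ein, 1).coeff(eout, 1)
assert sp.simplify(bil_a - sp.Rational(4, 3) * bil_0) == 0
assert sp.simplify(bil_e - sp.Rational(2, 3) * bil_0) == 0
print("eta_1 operator relations verified")
```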
We now have
\begin{eqnarray}
{\cal T}^{(m)}&=&
r^{(m)}_0 O_0+r^{(m)}_e O_e+r^{(m)}_a O_a+r^{(m)}_t O_t
+\bar r^{(m)}_e \bar O_e+\bar r^{(m)}_a \bar O_a
{\nonumber}\\
&&+\tilde r^{(m)}_0 \tilde O_0+\tilde r^{(m)}_a \tilde O_a+\tilde r^{(m)}_e \tilde O_e
+\hat r^{(m)}_t \hat O_t+\hat r^{(m)}_a \hat O_a+\hat r^{(m)}_e \hat O_e
{\nonumber}\\
&&+\check r^{(m)}_0 \check O_0+\check r^{(m)}_t \check O_t+\check r^{(m)}_a \check O_a+\check r^{(m)}_e \check O_e.
\end{eqnarray}
Using Eq. (\ref{eq:Oieta1}), the above equation can be simplified into
\begin{eqnarray}
{\cal T}^{(m)}&=&
r^{(m)}_0 O_0+r^{(m)}_e O_e+r^{(m)}_a O_a+r^{(m)}_t O_t
+(\bar r^{(m)}_e+2\bar r^{(m)}_a) \bar O_e
{\nonumber}\\
&&+\left(\tilde r^{(m)}_0 +\frac{4\tilde r^{(m)}_a +2\tilde r^{(m)}_e}{3} \right)\tilde O_0
+\left(\hat r^{(m)}_t +\frac{4\hat r^{(m)}_a +2\hat r^{(m)}_e}{3} \right)\hat O_t
{\nonumber}\\
&&+\left(\check r^{(m)}_0+\frac{4\check r^{(m)}_a+2\check r^{(m)}_e+3\check r^{(m)}_t}{6}\right) \check O_0.
\label{eq:Tm}
\end{eqnarray}
Note that various $\bar r^{(m)}_i$, $\tilde r^{(m)}_i$, $\hat r^{(m)}_i$ and $\check r^{(m)}_i$ occur in ${\cal T}^{(m)}$ only through some very specific combinations.
We still preserve the subscripts ($i=0,t,a,e$), since these $\bar r^{(m)}_i$, $\tilde r^{(m)}_i$, $\hat r^{(m)}_i$ and $\check r^{(m)}_i$ for different $i$ correspond to different flavor flow patterns in rescattering diagrams (see Fig.~\ref{fig:r}) and they will, in fact, reduce to $r^{(m)}_i$ in the $U$(3) limit.
It is straightforward to obtain the rescattering effects on topological amplitudes.
In analogy to Eq. (\ref{eq:master2}):
\begin{eqnarray}
A={\cal S}_{res}^{1/2}\cdot
A^{\rm fac}=(1+i{\cal T}^{1/2})\cdot A^{\rm fac},
\label{eq:master3}
\end{eqnarray}
we have
\begin{eqnarray}
H_{\rm eff}=(1+i{\cal T}^{1/2})\cdot H^0_{\rm eff}=H^0_{\rm eff}+i{\cal T}^{1/2}\cdot H^0_{\rm eff},
\label{eq:H=(1+iT')H0}
\end{eqnarray}
where $H_{\rm eff}$ is given in Eq. (\ref{eq:B2PP Heff}) and ${\cal T}^{1/2}$ in Eq. (\ref{eq:Tm}) with $m=1/2$. Here $H^0_{\rm eff}$ is the un-scattered effective Hamiltonian, with all TA in $H_{\rm eff}$ replaced by $TA^0$, and the dot in the above equation implies all possible pairings of the $P^{\rm out}P^{\rm out}$ fields in $H_{\rm eff}^0$ with the $P^{\rm in}P^{\rm in}$ fields in ${\cal T}^{1/2}$ (the $P^{\rm out}P^{\rm out}$ fields in ${\cal T}^{1/2}$ remain unpaired).
As noted previously, since the effective Hamiltonian in Eq. (\ref{eq:B2PP Heff}) is obtained using flavor SU(3) symmetry arguments only,
its flavor structure will not be changed in the presence of rescattering, i.e. Eq. (\ref{eq:H=(1+iT')H0}) will not modify the flavor structure of $H_{\rm eff}$.
This feature is indeed verified in the explicit computation.
Therefore the expressions of the decay amplitudes in terms of the TA remain the same, but these TA now contain rescattering contributions.
The effect of rescattering on the TA can be obtained using the above equation.
The computation is straightforward, but tedious.
Here we only give the final results; some illustrative derivations using the above equation can be found in Appendix B.
We find that, in the presence of rescattering, the TA receive corrections in the following ways:
\begin{eqnarray}
\delta T^{(\prime)}&=&ir'_0 T^{(\prime)0}+i r'_e C^{(\prime)0},
{\nonumber}\\
\delta C^{(\prime)}&=&ir'_0C^{(\prime)0}+i r'_e T^{(\prime)0},
{\nonumber}\\
\delta E^{(\prime)}&=&ir'_0 E^{(\prime)0}+ ir'_a T^{(\prime)0}-\frac{1}{3} i (r'_e+2r'_a) C^{(\prime)0}+\frac{1}{3}i(-2 r'_e+5 r'_a) E^{(\prime)0}
{\nonumber}\\
&& +\frac{1}{3}i(\bar r'_e+2\bar r'_a)(\bar C^{(\prime)0}_1+2\bar E^{(\prime)0}),
{\nonumber}\\
\delta A^{(\prime)}&=&\frac{1}{3}i(3 r'_0-2 r'_e +5 r'_a )A^{(\prime)0}
-\frac{1}{3}i(r'_e+2r'_a) T^{(\prime)0}+ir'_a C^{(\prime)0}
{\nonumber}\\
&&+\frac{1}{3}i(\bar r'_e+2\bar r'_a)(\bar T^{(\prime)0}+2\bar A^{(\prime)0}),
{\nonumber}\\
\delta P^{(\prime)}&=&ir'_0 P^{(\prime)0}
+i r'_a T^{(\prime)0}
-\frac{1}{3}i( r'_e +2 r'_a) C^{(\prime)0}
+\frac{1}{3}i(-2 r'_e +5 r'_a) P^{(\prime)0}
{\nonumber}\\
&&-\frac{1}{3} i r'_a P^{(\prime)0}_{EW}
+\frac{1}{9}i(r'_e +2 r'_a) P_{EW}^{(\prime)C0}
+ \frac{1}{3}i(\bar r'_e+2\bar r'_a)(\bar C^{(\prime)0}_2+2\bar P^{(\prime)0}-\frac{1}{3}\bar P^{(\prime)C0}_{EW,2}),
{\nonumber}\\
\delta PA^{(\prime)}&=&
\frac{1}{3}i(3r'_0
- r'_e +16 r'_a +12 r'_t) PA^{(\prime)0}
+ i r'_t T^{(\prime)0}
+\frac{1}{9}(2 i r'_e +4 i r'_a -3 i r'_t) C^{(\prime)0}
{\nonumber}\\
&&
+\frac{2}{9}(i r'_e+11 ir'_a+12 ir'_t) E^{(\prime)0}
+\frac{2}{9}( ir'_e+11 ir'_a +12 ir'_t ) P^{(\prime)0}
-\frac{1}{3} i r'_t P^{(\prime)0}_{EW}
{\nonumber}\\
&&
+\frac{1}{27}i(-2 r'_e -4 r'_a +3 r'_t) P^{(\prime)C0}_{EW}
-\frac{2}{27} i (r'_e +11 r'_a +12 r'_t )P_{EW}^{(\prime)E0}
{\nonumber}\\
&&-\frac{2}{9}i(\bar r'_e+2\bar r'_a)
\bigg(\bar C^{(\prime)0}_1+2\bar E^{(\prime)0}+\bar C^{(\prime)0}_2+2\bar P^{(\prime)0}-\frac{1}{3}\bar P^{(\prime)C0}_{EW,2}
-\frac{1}{3}\bar P^{(\prime)C0}_{EW,1} -\frac{2}{3}\bar P^{(\prime)E0}_{EW}\bigg)
{\nonumber}\\
&&+\frac{1}{3}i(\hat r'_t +\frac{4\hat r'_a +2\hat r'_e}{3})
\bigg(\tilde C^{(\prime)0}+\tilde E^{(\prime)0}+\tilde P^{(\prime)0} +\frac{3}{2}\widetilde {PA}^{(\prime)0}-\frac{1}{3}\tilde P^{(\prime)C0}_{EW} -\frac{1}{3}\tilde P^{(\prime)E0}_{EW}\bigg),
{\nonumber}\\
\delta P^{(\prime)}_{EW}&=&ir'_0 P^{(\prime)0}_{EW}+i r'_e P_{EW}^{(\prime)C0},
{\nonumber}\\
\delta P_{EW}^{(\prime)C}&=&ir'_0 P_{EW}^{(\prime)C0}+i r'_e P_{EW}^{(\prime)0},
{\nonumber}\\
\delta P_{EW}^{(\prime)E}&=&ir'_0 P_{EW}^{(\prime)E0}+ ir'_a P^{(\prime)0}_{EW}
-\frac{1}{3} i (r'_e+2 r'_a) P_{EW}^{(\prime)C0}
+\frac{1}{3}i(-2r'_e +5 r'_a) P_{EW}^{(\prime)E0}
{\nonumber}\\
&&+\frac{1}{3}i(\bar r'_e+2\bar r'_a)(\bar P^{(\prime)C0}_{EW,1} +2\bar P^{(\prime)E0}_{EW}),
{\nonumber}\\
\delta P^{(\prime)A}_{EW}&=&
\frac{1}{3} i (3r'_0-2 r'_e +5r'_a) P_{EW}^{(\prime)A0}
-\frac{1}{3}i(r'_e +2r'_a) P^{(\prime)0}_{EW}
+ir'_a P^{(\prime)C0}_{EW}
{\nonumber}\\
&&+\frac{1}{3}i(\bar r'_e+2\bar r'_a)(\bar P^{(\prime)0}_{EW} +2\bar P^{(\prime)A0}_{EW}),
\label{eq: delta TA1}
\end{eqnarray}
\begin{eqnarray}
\delta (\bar T^{(\prime)}+2\bar A^{(\prime)})
&=&i(\bar r'_e+2\bar r'_a) (-\frac{2}{3}T^{(\prime)0}+C^{(\prime)0}+\frac{5}{3} A^{(\prime)0})
{\nonumber}\\
&&
+i(\tilde r'_0 +\frac{4\tilde r'_a +2\tilde r'_e}{3})(\bar T^{(\prime)0}+2\bar A^{(\prime)0}),
{\nonumber}\\
\delta (\bar C^{(\prime)}_1+2\bar E^{(\prime)})
&=&i(\bar r'_e+2\bar r'_a) (T^{(\prime)0} -\frac{2}{3} C^{(\prime)0}+\frac{5}{3} E^{(\prime)0})
{\nonumber}\\
&&
+i(\tilde r'_0 +\frac{4\tilde r'_a +2\tilde r'_e}{3})(\bar C^{(\prime)0}_1+2\bar E^{(\prime)0}),
{\nonumber}\\
\delta (\bar C^{(\prime)}_2+2\bar P^{(\prime)}-\frac{1}{3}\bar P^{(\prime)C}_{EW,2})
&=&i(\bar r'_e+2\bar r'_a) (T^{(\prime)0}-\frac{2}{3} C^{(\prime)0}+\frac{5}{3} P^{(\prime)0}-\frac{1}{3} P^{(\prime)0}_{EW}+\frac{2}{9} P^{(\prime)C0}_{EW})
{\nonumber}\\
&&+i(\tilde r'_0 +\frac{4\tilde r'_a +2\tilde r'_e}{3})
(\bar C^{(\prime)0}_2+2\bar P^{(\prime)0}-\frac{1}{3}\bar P^{(\prime)C0}_{EW,2}),
{\nonumber}\\
\delta (\bar P^{(\prime)}_{EW} +2\bar P^{(\prime)A}_{EW})&=&
i(\bar r'_e+2\bar r'_a)(-\frac{2}{3} P^{(\prime)0}_{EW}+ P^{(\prime)C0}_{EW}+\frac{5}{3} P^{(\prime)A0}_{EW})
{\nonumber}\\
&&+i(\tilde r'_0 +\frac{4\tilde r'_a +2\tilde r'_e}{3})
(\bar P^{(\prime)0}_{EW} +2\bar P^{(\prime)A0}_{EW}),
{\nonumber}\\
\delta (\bar P^{(\prime)C}_{EW,1}+2\bar P^{(\prime)E}_{EW})
&=&i(\bar r'_e+2\bar r'_a) (P^{(\prime)0}_{EW}-\frac{2}{3} P^{(\prime)C0}_{EW}+\frac{5}{3} P^{(\prime)E0}_{EW})
{\nonumber}\\
&&+i(\tilde r'_0 +\frac{4\tilde r'_a +2\tilde r'_e}{3})
(\bar P^{(\prime)C0}_{EW,1} +2\bar P^{(\prime)E0}_{EW}),
\label{eq: delta TA2}
\end{eqnarray}
and
\begin{eqnarray}
&&\delta(\tilde C^{(\prime)}+\tilde E^{(\prime)}+\tilde P^{(\prime)}+\frac{3}{2}\widetilde {PA}^{(\prime)}-\frac{1}{3}\tilde P^{(\prime)C}_{EW} -\frac{1}{3}\tilde P^{(\prime)E}_{EW})
{\nonumber}\\
&&\quad
=i\left(\hat r'_t +\frac{4\hat r'_a +2\hat r'_e}{3}\right)
\bigg(\frac{3}{2}T^{(\prime)0}-\frac{1}{2} C^{(\prime)0}+4 E^{(\prime)0}+4 P^{(\prime)0}+6 PA^{(\prime)0}
{\nonumber}\\
&&\qquad
-\frac{1}{2} P^{(\prime)0}_{EW}
+\frac{1}{6} P_{EW}^{(\prime)C0}
-\frac{4}{3} P_{EW}^{(\prime)E0}\bigg)
{\nonumber}\\
&&\qquad
+i\left(\check r'_0+\frac{4\check r'_a+2\check r'_e+3\check r'_t}{6}\right)
\bigg(\tilde C^{(\prime)0}+\tilde E^{(\prime)0}+\tilde P^{(\prime)0}
+\frac{3}{2}\widetilde {PA}^{(\prime)0}
-\frac{1}{3}\tilde P^{(\prime)C0}_{EW} -\frac{1}{3}\tilde P^{(\prime)E0}_{EW}\bigg),
{\nonumber}\\
\label{eq: delta TA3}
\end{eqnarray}
where the superscript $0$ denotes un-scattered amplitudes
and we define
$r'_i\equiv r^{(1/2)}_i$,
$\bar r'_i\equiv \bar r^{(1/2)}_i$,
$\hat r'_i\equiv \hat r^{(1/2)}_i$,
$\check r'_i\equiv \check r^{(1/2)}_i$,
$\tilde r'_i\equiv \tilde r^{(1/2)}_i$.
The full topological amplitudes contain both the un-scattered parts and the contributions from rescattering.
For example, for the tree amplitude the full amplitude is $T^{(\prime)}$, while the un-scattered amplitude is $T^{(\prime)0}$.
After rescattering we have
\begin{eqnarray}
T^{(\prime)}=T^{(\prime)0}+\delta T^{(\prime)}=T^{(\prime)0}+ir'_0 T^{(\prime)0}+i r'_e C^{(\prime)0}.
\end{eqnarray}
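Numerically, even a modest rescattering generates a visible strong phase in the full tree amplitude. The sketch below uses purely illustrative inputs; the $r'_i$ values and bare amplitudes are placeholders, not fitted values:

```python
# Illustrative numerics (not fitted values): with assumed rescattering
# parameters r0' = 0.10 and r_e' = 0.05, the full tree amplitude
# T = (1 + i r0') T0 + i r_e' C0 acquires both a strong phase and a
# small magnitude shift.  T0, C0 are arbitrary placeholder bare amplitudes.
import cmath

r0p, rep = 0.10, 0.05         # assumed real rescattering parameters
T0, C0 = 1.0, 0.25            # placeholder bare amplitudes

T_full = (1 + 1j * r0p) * T0 + 1j * rep * C0
print(abs(T_full), cmath.phase(T_full))   # |T| ~ 1.006, arg T ~ 0.112 rad
```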
One can check that the above equations are consistent with the topological amplitude expressions
Eqs.~(\ref{eq: TAgroup1}), (\ref{eq: TAgroup2}), (\ref{eq: TAgroup3}), (\ref{eq: TAgroup4}), (\ref{eq: TABsgroup1}) and (\ref{eq: TABsgroup4}), and the rescattering formulas, Eqs.~(\ref{eq:FSIB0Kpi}), (\ref{eq:FSIBKpi0}), (\ref{eq:FSIBpipi0}), (\ref{eq:FSIB0pipi}) and those of $B_s$ decays.
It should be pointed out that this is a non-trivial check, as one can see that Eqs.~(\ref{eq: TAgroup1}), (\ref{eq: TAgroup2}), (\ref{eq: TAgroup3}), (\ref{eq: TAgroup4}), (\ref{eq: TABsgroup1}) and (\ref{eq: TABsgroup4}) are rather complicated and a single error in them would easily spoil the consistency check.
Note that decay amplitudes can be expressed in terms of several combinations of topological amplitudes, such as $T+C$, $C-E$ and so on,
and FSI affects these combinations only through
\begin{eqnarray}
1+i(r'_0+r'_a),
\quad
i(r'_e-r'_a),
\quad
i (r'_a+r'_t),
\quad
i(2\bar r'_a+\bar r'_e),
\nonumber\\
1+i(\tilde r'_0+\frac{4\tilde r'_a+2\tilde r'_e}{3}),
\quad
i(\hat r'_t+\frac{4\hat r'_a+2\hat r'_e}{3}).
\label{eq: combinations of r}
\end{eqnarray}
We have
\begin{eqnarray}
&&T^{(\prime)}+C^{(\prime)}
=[(1+ir'_0+ir'_a)+i(r'_e-r'_a)](T^{(\prime)0}+C^{(\prime)0})=e^{i\delta_{27}}(T^{(\prime)0}+C^{(\prime)0}),
{\nonumber}\\
&&C^{(\prime)}-E^{(\prime)}
=(1+ir'_0+ir'_a)(C^{(\prime)0}-E^{(\prime)0})+\frac{1}{3}i(r'_e-r'_a)[3(T^{(\prime)0}+C^{(\prime)0})-2(C^{(\prime)0}-E^{(\prime)0})]
{\nonumber}\\
&&\qquad\qquad\qquad
-\frac{1}{3}i(\bar r'_e+2\bar r'_a)(\bar C_1^{(\prime)0}+2\bar E^{(\prime)0}),
{\nonumber}\\
&&A^{(\prime)}+C^{(\prime)}
=(1+ir'_0+ir'_a)(A^{(\prime)0}+C^{(\prime)0})+\frac{2}{3}i(r'_e-r'_a)[(T^{(\prime)0}+C^{(\prime)0})-(A^{(\prime)0}+C^{(\prime)0})]
{\nonumber}\\
&&\qquad\qquad\qquad
+\frac{1}{3}i(\bar r'_e+2\bar r'_a)(\bar T^{(\prime)0}+2\bar A^{(\prime)0}),
{\nonumber}\\
&&P^{(\prime)}-C^{(\prime)}+\frac{1}{3}P^{(\prime)C}_{EW}
=\bigg[(1+ir'_0+ir'_a)-\frac{2}{3}i(r'_e-r'_a)\bigg] \bigg(P^{(\prime)0}-C^{(\prime)0}+\frac{1}{3}P^{(\prime)C0}_{EW}\bigg)
{\nonumber}\\
&&\qquad\qquad\qquad\qquad\qquad
+\frac{1}{3}i(r'_e-r'_a)
[-3 (T^{(\prime)0} + C^{(\prime)0})+ (P_{EW}^{(\prime)0} + P_{EW}^{(\prime)C0})]
{\nonumber}\\
&&\qquad\qquad\qquad\qquad\qquad
+\frac{1}{3}i(\bar r'_e+2\bar r'_a)(\bar C^{(\prime)0}_2+2\bar P^{(\prime)0}-\frac{1}{3}\bar P^{(\prime)C0}_{EW,2}),
{\nonumber}\\
&&PA^{(\prime)}-\frac{4}{9} C^{(\prime)}+\frac{13}{9} E^{(\prime)}-\frac{1}{3} P_{EW}^{(\prime)C}
{\nonumber}\\
&&\quad
=(1+ir'_0+ir'_a-\frac{1}{3} i(r'_e-r'_a)+4 i(r'_a+r'_t) ) \bigg(PA^{(\prime)0}-\frac{4}{9} C^{(\prime)0}+\frac{13}{9} E^{(\prime)0}-\frac{1}{3} P_{EW}^{(\prime)C0}\bigg)
{\nonumber}\\
&&\qquad
+\bigg[-\frac{4}{9} i(r'_e-r'_a)+i(r'_a+r'_t)\bigg] (T^{(\prime)0} + C^{(\prime)0})
{\nonumber}\\
&&\qquad
+\frac{7}{9}\bigg[\frac{1}{3} i(r'_e-r'_a)+ 4 i(r'_a+r'_t) \bigg](C^{(\prime)0} - E^{(\prime)0})
{\nonumber}\\
&&\qquad
+\bigg[\frac{2}{9} i(r'_e-r'_a)+\frac{8}{3} i(r'_a+r'_t) \bigg]
\bigg(P^{(\prime)0} - C^{(\prime)0} + \frac{1}{3} P_{EW}^{(\prime)C0}\bigg)
{\nonumber}\\
&&\qquad
-\frac{1}{3} [i(r'_e-r'_a)+i(r'_a+r'_t)]
(P_{EW}^{(\prime)0} + P_{EW}^{(\prime)C0})
{\nonumber}\\
&&\qquad
+\bigg[\frac{2}{27} i(r'_e-r'_a)+\frac{8}{9} i(r'_a+r'_t) \bigg]
(P_{EW}^{(\prime)C0} - P_{EW}^{(\prime)E0})
{\nonumber}\\
&&\qquad
+\frac{1}{27} i(\bar r'_e+2\bar r'_a) \bigg[7 (\bar C_1^{(\prime)0} + 2 \bar E^{(\prime)0}) -
6 \bigg( \bar C_2^{(\prime)0} + 2\bar P^{(\prime)0} - \frac{1}{3} \bar P_{EW,2}^{(\prime)C0}\bigg)
+ 2 (\bar P_{EW,1}^{(\prime)C0} + 2 \bar P_{EW}^{(\prime)E0})\bigg]
{\nonumber}\\
&&\qquad
+\frac{1}{3}i(\hat r'_t +\frac{4\hat r'_a +2\hat r'_e}{3})
(\tilde C^{(\prime)0}+\tilde E^{(\prime)0}+\tilde P^{(\prime)0} +\frac{3}{2}\widetilde {PA}^{(\prime)0}-\frac{1}{3}\tilde P^{(\prime)C0}_{EW} -\frac{1}{3}\tilde P^{(\prime)E0}_{EW}),
{\nonumber}\\
&&P^{(\prime)}_{EW}+P^{(\prime)C}_{EW}
=[(1+ir'_0+ir'_a)+i(r'_e-r'_a)](P^{(\prime)0}_{EW}+P^{(\prime)C0}_{EW})=e^{i\delta_{27}}(P^{(\prime)0}_{EW}+P^{(\prime)C0}_{EW}),
{\nonumber}\\
&&P_{EW}^{(\prime)C}-P_{EW}^{(\prime)E}
=(1+ir'_0+ir'_a)(P_{EW}^{(\prime)C0}-P_{EW}^{(\prime)E0})
{\nonumber}\\
&&\qquad\qquad\qquad\quad
+\frac{1}{3}i(r'_e-r'_a)[3(P_{EW}^{(\prime)0}+P_{EW}^{(\prime)C0})
-2(P_{EW}^{(\prime)C0}-P_{EW}^{(\prime)E0})]
{\nonumber}\\
&&\qquad\qquad\qquad\quad
-\frac{1}{3}i(\bar r'_e+2\bar r'_a)(\bar P_{EW,1}^{(\prime)C0}+2\bar P_{EW}^{(\prime)E0}),
{\nonumber}\\
&&P_{EW}^{(\prime)A}+P_{EW}^{(\prime)C}
=(1+ir'_0+ir'_a)(P_{EW}^{(\prime)A0}+P_{EW}^{(\prime)C0})
{\nonumber}\\
&&\qquad\qquad\qquad\quad
+\frac{2}{3}i(r'_e-r'_a)[(P_{EW}^{(\prime)0}+P_{EW}^{(\prime)C0})-(P_{EW}^{(\prime)A0}+P_{EW}^{(\prime)C0})]
{\nonumber}\\
&&\qquad\qquad\qquad\quad
+\frac{1}{3}i(\bar r'_e+2\bar r'_a)(\bar P_{EW}^{(\prime)0}+2\bar P_{EW}^{(\prime)A0}),
\label{eq: rcombinations1}
\end{eqnarray}
\begin{eqnarray}
&&(\bar T^{(\prime)}+2\bar A^{(\prime)})
=
\bigg[1+i\bigg(\tilde r'_0 +\frac{4\tilde r'_a +2\tilde r'_e}{3}\bigg)\bigg](\bar T^{(\prime)0}+2\bar A^{(\prime)0})
{\nonumber}\\
&&\qquad\qquad\qquad\quad
+i(\bar r'_e+2\bar r'_a) \bigg[-\frac{2}{3}(T^{(\prime)0}+C^{(\prime)0})+\frac{5}{3} (A^{(\prime)0}+C^{(\prime)0})\bigg],
{\nonumber}\\
&&(\bar C^{(\prime)}_1+2\bar E^{(\prime)})
=\bigg[1+i\bigg(\tilde r'_0 +\frac{4\tilde r'_a +2\tilde r'_e}{3}\bigg)\bigg](\bar C^{(\prime)0}_1+2\bar E^{(\prime)0})
{\nonumber}\\
&&\qquad\qquad\qquad\quad
+i(\bar r'_e+2\bar r'_a) \bigg[(T^{(\prime)0}+C^{(\prime)0})-\frac{5}{3}(C^{(\prime)0}- E^{(\prime)0})\bigg],
{\nonumber}\\
&&
\bar C^{(\prime)}_2+2\bar P^{(\prime)}-\frac{1}{3}\bar P^{(\prime)C}_{EW,2}
=\bigg[1+i\bigg(\tilde r'_0 +\frac{4\tilde r'_a +2\tilde r'_e}{3}\bigg)\bigg]
(\bar C^{(\prime)0}_2+2\bar P^{(\prime)0}-\frac{1}{3}\bar P^{(\prime)C0}_{EW,2})
{\nonumber}\\
&&\qquad\qquad\qquad\qquad\qquad\quad
+i(\bar r'_e+2\bar r'_a) \bigg[(T^{(\prime)0}+C^{(\prime)0})
+\frac{5}{3} \bigg(P^{(\prime)0}-C^{(\prime)0}+\frac{1}{3}P^{(\prime)C0}_{EW}\bigg)
{\nonumber}\\
&&\qquad\qquad\qquad\qquad\qquad\quad
-\frac{1}{3} (P^{(\prime)0}_{EW}+P^{(\prime)C0}_{EW})\bigg],
{\nonumber}\\
&&(\bar P^{(\prime)}_{EW} +2\bar P^{(\prime)A}_{EW})
=
\bigg[1+i\bigg(\tilde r'_0 +\frac{4\tilde r'_a +2\tilde r'_e}{3}\bigg)\bigg]
(\bar P^{(\prime)0}_{EW} +2\bar P^{(\prime)A0}_{EW})
{\nonumber}\\
&&\qquad\qquad\qquad\qquad
+i(\bar r'_e+2\bar r'_a)\bigg[-\frac{2}{3} (P^{(\prime)0}_{EW}+P^{(\prime)C0}_{EW})+\frac{5}{3} (P^{(\prime)A0}_{EW}+P^{(\prime)C0}_{EW})\bigg],
{\nonumber}\\
&&(\bar P^{(\prime)C}_{EW,1}+2\bar P^{(\prime)E}_{EW})
=
\bigg[1+i\bigg(\tilde r'_0 +\frac{4\tilde r'_a +2\tilde r'_e}{3}\bigg)\bigg]
(\bar P^{(\prime)C0}_{EW,1} +2\bar P^{(\prime)E0}_{EW})
{\nonumber}\\
&&\qquad\qquad\qquad\qquad
+
i(\bar r'_e+2\bar r'_a) \bigg[(P^{(\prime)0}_{EW}+P^{(\prime)C0}_{EW})-\frac{5}{3} (P^{(\prime)C0}_{EW}-P^{(\prime)E0}_{EW})\bigg],
\label{eq: rcombinations2}
\end{eqnarray}
and
\begin{eqnarray}
&&(\tilde C^{(\prime)}+\tilde E^{(\prime)}+\tilde P^{(\prime)}+\frac{3}{2}\widetilde {PA}^{(\prime)}-\frac{1}{3}\tilde P^{(\prime)C}_{EW} -\frac{1}{3}\tilde P^{(\prime)E}_{EW})
{\nonumber}\\
&&\quad
=i\left(\hat r'_t +\frac{4\hat r'_a +2\hat r'_e}{3}\right)
\bigg[\frac{3}{2}(T^{(\prime)0}+C^{(\prime)0})+\frac{14}{3}(C^{(\prime)0}-E^{(\prime)0})+4 \bigg(P^{(\prime)0}-C^{(\prime)0}+\frac{1}{3}P_{EW}^{(\prime)C0}\bigg)
{\nonumber}\\
&&\qquad
+6 \bigg(PA^{(\prime)0}-\frac{4}{9} C^{(\prime)0}+\frac{13}{9} E^{(\prime)0}-\frac{1}{3} P_{EW}^{(\prime)C0}\bigg)
-\frac{1}{2} (P^{(\prime)0}_{EW}+P_{EW}^{(\prime)C0})
+\frac{4}{3}(P^{(\prime)C0}_{EW}-P_{EW}^{(\prime)E0})\bigg]
{\nonumber}\\
&&\qquad
+\left[1+i\left(\check r'_0+\frac{4\check r'_a+2\check r'_e+3\check r'_t}{6}\right)\right]
\bigg(\tilde C^{(\prime)0}+\tilde E^{(\prime)0}+\tilde P^{(\prime)0}
+\frac{3}{2}\widetilde {PA}^{(\prime)0}
-\frac{1}{3}\tilde P^{(\prime)C0}_{EW} -\frac{1}{3}\tilde P^{(\prime)E0}_{EW}\bigg).
{\nonumber}\\
\label{eq: rcombinations3}
\end{eqnarray}
With the help of Eq. (\ref{eq:solution}) (with $m=1/2$)
we will be able to study the effect of rescattering on the above combinations
and obtain a clearer picture.
Note that the above transformation formulas for the combined topological amplitudes, Eqs. (\ref{eq: rcombinations1}), (\ref{eq: rcombinations2}) and (\ref{eq: rcombinations3}), are not as powerful as the transformation formulas for the individual topological amplitudes, Eqs.~(\ref{eq: delta TA1}), (\ref{eq: delta TA2}) and (\ref{eq: delta TA3}).
They are, however, the ones that can be expressed in terms of the combinations of $r'_i$ [Eq. (\ref{eq: combinations of r})], and hence in terms of the rescattering angles and phases $\tau$, $\nu$, $\sigma$ and $\delta$ [see Eqs. (\ref{eq:solution}) and (\ref{eq:FSIparameters})], without introducing additional assumptions.
\subsubsection{Topological Amplitudes and rescattering in the U(3) limit}
It is interesting to investigate the above relations in the U(3) limit, where we take Eq.~(\ref{eq: U3r}) and
\begin{eqnarray}
\bar T=\tilde T=T,
\quad
\bar C_{1}=\bar C_{2}=\tilde C=C,
\quad
\bar E=\tilde E=E,
\quad
\bar A=\tilde A=A,
{\nonumber}\\
\bar P=\tilde P=P,
\quad
\bar P_{EW}=\tilde P_{EW}=P_{EW},
\quad
\bar P^C_{EW,1}=\bar P^C_{EW,2}=\tilde P^C_{EW}=P^C_{EW},
{\nonumber}\\
\bar P^E_{EW}=\tilde P^E_{EW}=P^E_{EW},
\quad
\bar P^A_{EW}=\tilde P^A_{EW}=P^A_{EW},
\quad
\quad
\bar {PA}=\widetilde {PA}=PA.
\label{eq: U3TA}
\end{eqnarray}
Using Eq. (\ref{eq: U3r}) and Eqs. (\ref{eq: delta TA1}), (\ref{eq: delta TA2}) and (\ref{eq: delta TA3}), we find that
\begin{eqnarray}
\delta (\bar T+2\bar A)-\delta (T+2A)
&=&3i r'_e A^0,
{\nonumber}\\
\delta (\bar C_1+2\bar E)-\delta (C+2E)
&=&3i r'_e E^0,
{\nonumber}\\
\delta (\bar C_2+2\bar P-\frac{1}{3}\bar P^C_{EW,2})
-\delta (C+2P-\frac{1}{3}P^C_{EW})
&=&3i r'_e P^0,
{\nonumber}\\
\delta (\bar P_{EW} +2\bar P^A_{EW})
-\delta (P_{EW} +2P^A_{EW})
&=& 3i r'_e P^{A0}_{EW},
{\nonumber}\\
\delta (\bar P^C_{EW,1}+2\bar P^E_{EW})
-\delta (P^C_{EW}+2P^E_{EW})
&=& 3i r'_e P^{E0}_{EW},
\label{eq: U3 cal1}
\end{eqnarray}
and
\begin{eqnarray}
&&\delta(\tilde C+\tilde E+\tilde P +\frac{3}{2}\widetilde {PA}-\frac{1}{3}\tilde P^C_{EW} -\frac{1}{3}\tilde P^E_{EW})
-\delta(C+E+P +\frac{3}{2}{PA}-\frac{1}{3}P^C_{EW}-\frac{1}{3}P^E_{EW})
{\nonumber}\\
&&\quad
=\frac{1}{2} i r'_e (6 E^0 + 6 P^0 + 9 PA^0 - 2 P_{EW}^{E0}).
\label{eq: U3 cal2}
\end{eqnarray}
The above relations can be consistent with the relations in the U(3) limit, Eq.~(\ref{eq: U3TA}), only if we take
\begin{eqnarray}
r'_e=0.
\label{eq: re=0}
\end{eqnarray}
It is useful to recall that requiring U(3) symmetry of the rescattering matrix ${\cal T}$ [Eq. (\ref{eq: U3r})] only leads to $r'_e r'_a=0$ [see Eq. (\ref{eq: rare=0})], i.e. either $r'_e=0$ or $r'_a=0$. Now we can single out the $r'_e=0$ solution.
The reason we can be more specific here is that we now apply U(3) symmetry both to the rescattering matrix ${\cal T}$ [Eq. (\ref{eq: U3r})] and to the topological amplitudes [Eq. (\ref{eq: U3TA})];
hence it leads to a more specific relation.
\section{Numerical Results}
In this section, we will present our numerical results.
First, we will give an overview of the results of the fits.
We will then discuss the rescattering effects on topological amplitudes.
Finally, numerical results for decay rates and CP asymmetries will be shown.
\subsection{Overview of the Results of the Fits}
Before presenting our numerical results, we specify the inputs used
in the following numerical study.
Masses of all particles and total widths of $B_{u,d,s}$ mesons are taken from the
review of the Particle Data Group (PDG)~\cite{PDG}
and the branching ratios of $B$ to
charmless meson decays are taken
from the latest averages in~\cite{HFAG}.
For theoretical inputs,
we use $f_\pi=$ 130.2 MeV, $f_{K}=$ 155.6 MeV and
$f_{B_{(s)}}=$ 187.1 (227.2) MeV for decay constants
and $m_s(2{\rm GeV})=93.5$ MeV for the strange quark mass, which is taken from the central value of the PDG averaged value, $93.5\pm 2$ MeV~\cite{PDG}.~\footnote{Note that in the previous study~\cite{Chua:2007cm} $m_s$ was taken as a fit parameter in the range of $100\pm30$ MeV, but now that the value is more precisely known it is better to use the present central value as an input parameter.~\label{footnote4}}
The values of CKM matrix elements, except $\gamma/\phi_3$, are also taken from the central
values of the latest PDG's results~\cite{PDG}.
We use the QCD factorization calculated amplitudes
\cite{Beneke:2003zv} for the factorization amplitudes in the
right-hand-side of Eq.~(\ref{eq:master1}). We take the
renormalization scale as $\mu=4.2$~GeV and the power correction
parameters $X_{A,H}=\ln(m_B/\Lambda_h)(1+\rho_{A,H}
e^{i\phi_{A,H}} )$.
For meson wave functions, we use the following Gegenbauer moments: $\alpha_1^{\bar K}=-\alpha_1^K=0.2$, $\alpha_2^{\bar K}=\alpha_2^K=0.1$,
$\alpha_1^\pi=0$, $\alpha_2^\pi=0.2$ and $\alpha_{1,2}^{\eta,\eta'}=0$~\cite{Beneke:2003zv}.
Several hadronic parameters in the factorization
amplitudes, in addition to the CKM phase $\gamma/\phi_3$ and the power correction parameters $\rho_{A,H}$ and $\phi_{A,H}$, are fit parameters and
are allowed to vary in the following ranges:
\begin{eqnarray}
&&F^{B\pi}_0(0)=0.25\pm0.05,
\quad
F^{BK}_0(0)=0.35\pm0.08,
\quad
F^{B_sK}_0(0)=0.31\pm0.08,
\nonumber\\
&&\gamma/\phi_3=(73.2\pm 10)^\circ,
\quad
\lambda_B=0.35\pm 0.25\, {\rm GeV},
\quad
\lambda_{B_s}=0.35\pm 0.25\, {\rm GeV}.
\label{eq:QCDFHparameters}
\end{eqnarray}
These
estimates agree with those in \cite{Beneke:2003zv,LF,MS,Ball:2004ye,Duplancic:2008ix,Khodjamirian:2011ub,Bharucha:2012wy}, while
the ranges of form factors and $\gamma/\phi_3$ are slightly enlarged.
For example, the above $F^{B\pi}_0(0)$ can be compared to the following reported values for the quantity:
$0.28\pm0.05$ \cite{Beneke:2003zv},
$0.25$ \cite{LF},
$0.29$ \cite{MS},
$0.258\pm0.031$ \cite{Ball:2004ye},
$0.26^{+0.04}_{-0.03}$ \cite{Duplancic:2008ix},
$0.281^{+0.027}_{-0.029}$ \cite{Khodjamirian:2011ub}
and
$0.261^{+0.020}_{-0.023}$ \cite{Bharucha:2012wy}.~\footnote{It is preferable to use the form factors as inputs instead of variables in the fit, but in the present situation no definite values for these form factors can be found (see for example the collected $F^{B\pi}_0(0)$ values from \cite{Beneke:2003zv,LF,MS,Ball:2004ye,Duplancic:2008ix,Khodjamirian:2011ub,Bharucha:2012wy}) and we therefore treat them as fitting variables to avoid bias in this work. Hopefully the situation can be improved in future. See also Footnote~\ref{footnote4}.}
It is known that semileptonic $B\to\pi l\nu$ decays are related to the $B\to \pi$ form factor and the determination of $|V_{ub}|$~\cite{PDG}.
Using data from BaBar~\cite{delAmoSanchez:2010af,Lees:2012vv} and Belle \cite{Ha:2010rf,Sibidanov:2013rkk}, HFAG obtained the following average in 2014~\cite{HFAG}:
\begin{eqnarray}
F^{B\pi}_0(0)|V_{ub}|=(9.23\pm0.24)\times 10^{-4}.
\label{eq: SL}
\end{eqnarray}
We will use this in our $\chi^2$ analysis.
In summary, 9 hadronic parameters,
$\rho_{A,H}$, $\phi_{A,H}$, $F^{B\pi}_0(0)$, $F^{BK}_0(0)$, $F^{B_sK}_0(0)$, $ \lambda_B$, $ \lambda_{B_s}$,
and one CKM phase, $\gamma/\phi_3$, involved in the QCDF amplitudes will be fitted from data.
The residual rescattering part adds 4 more parameters,
$ \tau$, $\nu$, $\delta$ and $\sigma$, giving 14 parameters in total.
Note that the majority of the fitted parameters are from the factorization part.
In this analysis there are in total 93 measurable quantities, including 34 rates, 34 direct CP asymmetries, 24 mixing induced CP asymmetries and one measurement from semileptonic $B$ decay [Eq. (\ref{eq: SL})].
Among them we fit to all available data, including 26 rates, 16 direct CP asymmetries, 5 mixing induced CP asymmetries and 1 semileptonic measurement,
giving 48 in total,
and make predictions for 8 rates, 18 direct CP asymmetries and 19 mixing induced CP asymmetries.
The explicit list of these 48 items will be shown later.
The total numbers of data used in the fit and in the predictions are roughly the same.
The summary of these numbers is shown in Table~\ref{tab:overview}.
\begin{table}[t!]
\caption{
Numbers of rates ${\cal B}$, direct CP asymmetries ${\cal A}$ and mixing induced CP asymmetries $S$ of $\overline B_q\to PP$ decays involved in this study.
\label{tab:overview}
}
\begin{ruledtabular}
{\footnotesize
\begin{tabular}{cccccc}
&number of ${\cal B}$
&number of ${\cal A}$
&number of $S$
& number of $SL$
& Total number
\\
\hline
All
& 34
& 34
& 24
& 1
&93
\\
Fitted
& 26
& 16
& 5
& 1
& 48
\\
Predicted
& 8
& 18
& 19
& 0
& 45
\end{tabular}
}
\end{ruledtabular}
\end{table}
We perform a $\chi^2$ analysis with all available data on
CP-averaged rates and CP asymmetries in $\overline B{}_{u,d,s}\to PP$
decays.
In the following study we use two different scenarios: Fac and Res.
For the former we use only factorization amplitudes (i.e. $A_i=A_i^{\rm fac}$),
while for the latter we add the residual FSI effects as well (i.e. $A_i=\sum_{j=1}^n({\cal S}_{res}^{1/2})_{ij} A^{\rm fac}_j$).
Both are fitted to data.
The confidence levels and $\chi^2$ values for the best-fit cases in both scenarios are shown
in Table~\ref{tab:chisquare}.
Contributions to $\chi^2_{\rm min.}$
from various sub-sets of data are also given.
Modes that are related through Res are grouped together [see
Eq.~(\ref{eq:FSIB0Kpi}), and see Eqs.
(\ref{eq:FSIBKpi0})--(\ref{eq:FSIB0pipi}) for other groups].
Of course only those with data can contribute to $\chi^2$.
Numbers of data used are shown in parentheses.
Explicitly,
$\chi^2_{\{{\cal B}(\overline B{}^0\to K\pi),\dots\}}$ and $\chi^2_{\{{\cal A}(\overline B{}^0\to K\pi),\dots\}}$ in the table denote
the $\chi^2$ contributions obtained from 4 CP-averaged rates and 3 direct CP asymmetries, respectively, of the group-1 modes consisting of $\overline
B{}^0\to K^-\pi^+,\, \overline K {}^0\pi^0,\,\overline K{}^0\eta,\,\overline
K{}^0\eta'$ decays [except ${\cal A}(\overline B{}^0\to \overline K{}^0\eta)$];
$\chi^2_{\{{\cal B}(B^-\to K\pi),\dots\}}$ and $\chi^2_{\{{\cal A}(B^-\to K\pi),\dots\}}$
are contributed from the group-2 modes: $B^-\to \overline K{}^0\pi^-,\, K^-\pi^0,\,K^-\eta,\,K^-\eta'$ decays;
$\chi^2_{\{{\cal B}(B^-\to \pi\pi),\dots\}}$ and $\chi^2_{\{{\cal A}(B^-\to \pi\pi),\dots\}}$ are contributed from the group-3 modes:
$B^-\to \pi^-\pi^0,\, K^-K^0,\,\pi^-\eta,\,\pi^-\eta'$ decays;
$\chi^2_{\{{\cal B}(\overline B{}^0\to \pi\pi),\dots\}}$ is contributed from the group-4 modes:
$\overline B{}^0\to \pi^+\pi^-,\,\pi^0\pi^0,\, \eta\eta,\,\eta\eta',\,\eta'\eta',K^+K^-,\,\overline K{}^0 K^0,\,\pi^0\eta,\,\pi^0\eta'$
decays, while $\chi^2_{\{{\cal A}(\overline B{}^0\to \pi\pi),\dots\}}$ only contributed from 3 of the above modes, $\overline B{}^0\to \pi^+\pi^-,\,\pi^0\pi^0,\, \overline K{}^0 K^0$ decays;
$\chi^2_{\{{\cal B}(\overline B_s),{\cal A}(\overline B_s)\}}$ is contributed from 5 CP-averaged rates in
$\overline B{}^0_s\to K^+\pi^-,\,\pi^+\pi^-,\,\eta'\eta',\,K^+K^-,\, K^0\overline K{}^0$ decays and
from 2 direct CP asymmetries in
$\overline B{}^0_s\to K^+\pi^-,\,K^+K^-$ decays;
$\chi^2_{\{S(\overline B {}^0),\,S(\overline B {}^0_s)\}}$ is contributed from mixing induced CP asymmetries
of $\overline B{}^0\to \overline K{}^0\pi^0, \,\overline K{}^0\eta',\,\pi^+\pi^-,\,K_S\overline K_S$ and $\overline B{}_s^0\to K^+K^-$ decays.
The semileptonic datum, Eq.~(\ref{eq: SL}), is also included in the fit.
The above lists comprise the 26 rates, 16 direct CP asymmetries, 5 mixing induced asymmetries and 1 semileptonic measurement [Eq. (\ref{eq: SL})], 48 in total, that go into the fit.
\begin{table}[t!]
\caption{ Confidence level (C.L.), $\chi^2_{\rm min}/{\rm d.o.f.}$
and various contributions to $\chi^2_{\rm min}$ for the best
fitted solutions. The $p$-value of the rescattering (Res) case is 5.5\%.
Numbers of data used are shown in parentheses.
\label{tab:chisquare}
}
\begin{ruledtabular}
{\footnotesize
\begin{tabular}{cccccc}
&$\chi^2_{\rm min.}/{\rm d.o.f.}$
&$\chi^2_{\{{\cal B}(\overline B{}^0\to K\pi),\dots\}}$
&$\chi^2_{\{{\cal A}(\overline B{}^0 \to K\pi),\dots\}}$
&$\chi^2_{\{{\cal B}(B^-\to K\pi),\dots\}}$
&$\chi^2_{\{{\cal A}(B^- \to K\pi),\dots\}}$
\\
\hline
Fac
& 213.4/38 (48)
& 10.1 (4)
& 1.8 (3)
& 24.7 (4)
& 5.2 (4)
\\
\hline
Res
& 48.1/34 (48)
& 7.2 (4)
& 1.1 (3)
& 6.3 (4)
& 0.6 (4)
\\
\hline
& $\chi^2_{\{{\cal B}(B^-\to \pi\pi),\dots\}}$
&$\chi^2_{\{{\cal A}(B^- \to \pi\pi),\dots\}}$
&$\chi^2_{\{B(\overline B{}^0\to \pi\pi),\dots\}}$
&$\chi^2_{\{{\cal A}(\overline B{}^0 \to \pi\pi),\dots\}}$
&$\chi^2_{\{{\cal B}(\overline B_s),{\cal A}(\overline B_s)\}}$
\\
\hline
Fac
& 10.6 (4)
& 6.5 (4)
& 55.3 (9)
& 15.7 (3)
& 64.0 (7)
\\
\hline
Res
& 6.4 (4)
& 7.5 (4)
& 7.8 (9)
& 4.7 (3)
& 0.6 (7)
\\
\hline
&$\chi^2_{\{S(\overline B {}^0),\,S(\overline B {}^0_s)\}}$
&$\chi^2_{SL}$
\\
\hline
Fac
& 12.9 (5)
& 8.0 (1)
\\
\hline
Res
& 5.2 (5)
& 0.7 (1)
\end{tabular}
}
\end{ruledtabular}
\end{table}
Table~\ref{tab:chisquare} shows the overall performances of the fits.
We discuss the factorization case first.
The $\chi^2$ per degree of freedom of Fac is $213.4/(48-10)$.
One can compare the $\chi^2$ values and the numbers of data used in
the corresponding groups. When the ratio of $\chi^2$ to the number of data is smaller than one,
the fit in that group is reasonably good.
By inspecting the table, we see that Fac gives a good fit in the direct CP asymmetries of group-1 modes ($\overline B{}^0\to K^-\pi^+,\cdots$), and produces reasonable fits in the direct CP asymmetries of group-2 modes ($B^-\to \overline K{}^0\pi^-,\cdots$) and of group-3 modes ($B^-\to\pi^-\pi^0,\cdots$),
but the fits in rates and mixing induced CP asymmetries of all modes (including $B_s$ decay modes) and direct CP asymmetries of group-4 modes are poor.
In particular, the ratios of $\chi^2$ per number of data used in rates of the group-2 modes ($B^-\to \overline K{}^0\pi^-,\cdots$), group-4 modes ($\overline B{}^0\to\pi^+\pi^-,\cdots$), in the rates and direct CP asymmetries of $B_s$ modes and in the semileptonic quantity
are as large as $24.7/4$, $55.3/9$, $64.0/7$ and $8.0/1$, respectively, indicating poor fits in these sectors.
The fit is significantly improved when rescattering is added.
In the best-fit case, the $\chi^2$ per degree of freedom is $48.1/(48-14)$, giving a $p$-value of $5.5\%$.
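As a cross-check, the quoted $p$-value can be reproduced from $\chi^2_{\rm min}=48.1$ with $48-14=34$ degrees of freedom using only standard-library Python; the sketch below uses the Wilson--Hilferty approximation to the $\chi^2$ tail probability purely for illustration (the helper name is ours, not from any analysis code):

```python
import math

def chi2_pvalue(chi2, dof):
    """Upper-tail p-value of a chi-square statistic via the
    Wilson-Hilferty cube-root normal approximation."""
    mu = 1.0 - 2.0 / (9.0 * dof)
    sigma = math.sqrt(2.0 / (9.0 * dof))
    z = ((chi2 / dof) ** (1.0 / 3.0) - mu) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# Res fit: chi^2_min = 48.1 with 48 data points and 14 parameters
p_res = chi2_pvalue(48.1, 48 - 14)   # ~0.055, i.e. the quoted 5.5%
```

The Fac fit, with $\chi^2_{\rm min}=213.4$ for 38 degrees of freedom, gives a vanishingly small $p$-value by the same estimate.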
It should be noted that with 4 additional parameters the quality of the fit is improved significantly.
All $\chi^2$, except the direct CP of group-3 modes ($B^-\to\pi^-\pi^0,\cdots$), which is slightly enhanced, are reduced.
In particular, the $\chi^2$ per number of data of rates of the group-2 modes ($B^-\to \overline K{}^0\pi^-,\cdots$), group-4 modes ($\overline B{}^0\to\pi^+\pi^-,\cdots$),
the rates and direct CP asymmetries of $B_s$ modes and in the semileptonic quantity are $6.3/4$, $7.8/9$, $0.6/7$ and $0.7/1$, respectively.
The performance of the fit in these sectors is improved significantly.
Detailed results will be shown later.
\begin{table}[t!]
\caption{ Fitted hadronic and FSI parameters for the Fac and Res fits. The upper half of the table contains
parameters of the factorization amplitudes, while the lower half contains the remaining parameters,
including those of the rescattering (Res) case.
\label{tab:parameters1}
}
\begin{ruledtabular}
\begin{tabular}{ccccccccc}
& $\rho_{A}$
&$\rho_{H}$
&$\phi_A({}^\circ)$
&$\phi_H({}^\circ)$
&$F_0^{B\pi}(0)$
&$F_0^{BK}(0)$
&$F_0^{B_sK}(0)$
\\
\hline
Fac
& $0.97_{-0.02}^{+0.01}$
& $2.82_{-0.61}^{+0.20}$
& $-28.4_{-0.1}^{+0.3}$
& $-111.5_{-13.6}^{+4.4}$
& $0.239\pm0.002$
& $0.27_{-0.00}^{+0.00}$
& $0.23_{-0.00}^{+0.00}$
\\
\hline
Res
& $2.87_{-0.03}^{+0.02}$
& $2.33\pm0.63$
& $165.1\pm0.9$
& $-111.7\pm20.6$
& $0.253\pm0.002$
& $0.28\pm0.01$
& $0.24\pm0.01$
\\
\hline
&$\lambda_B$(GeV)
&$\lambda_{B_s}$(GeV)
&$\gamma/\phi_3({}^\circ)$
&$\tau({}^\circ)$
&$\nu({}^\circ)$
&$\delta({}^\circ)$
&$\sigma({}^\circ)$
\\
\hline
Fac
& $0.19_{-0.05}^{+0.02}$
& $0.60_{-0.04}^{+0.00}$
& $75.4_{-1.6}^{+1.7}$
& --
& --
& --
& --
\\
\hline
Res
& $0.22\pm0.06$
& $0.45_{-0.34}^{+0.15}$
& $68.9\pm1.8$
& $22.2\pm2.2$
& $78.1\pm2.9$
& $23.3\pm4.0$
& $120.7\pm22.3$
\end{tabular}
\end{ruledtabular}
\end{table}
The fitted parameters are shown in Table~\ref{tab:parameters1}.
Uncertainties are obtained by scanning the parameter space with
$\chi^2\leq\chi^2_{\rm min}+1$. The parameters consist of those in the
factorization amplitudes and those of Res.
The Fac fit gives $F^{B\pi}_0(0)=0.239\pm0.002$, while the Res fit gives $F^{B\pi}_0(0)=0.253\pm0.002$.
These correspond to
$F^{B\pi}_0(0)|V_{ub}|=(8.55_{-0.05}^{+0.08})\times 10^{-4}$ and
$F^{B\pi}_0(0)|V_{ub}|=(9.03\pm0.09)\times 10^{-4}$, respectively, for the value $|V_{ub}|= 35.76\times 10^{-4}$ employed in the numerical study,
and they can be compared with
the HFAG average,
$F^{B\pi}_0(0)|V_{ub}|=(9.23\pm0.24)\times 10^{-4}$.
The Res result agrees better with the data.
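The $\chi^2_{SL}$ entries in Table~\ref{tab:chisquare} follow directly from these numbers; a minimal arithmetic sketch (values copied from the text, in units of $10^{-4}$; small differences with the table come from rounding of the fitted central values):

```python
# Semileptonic constraint (HFAG): F0^{B pi}(0)|Vub| = 9.23 +- 0.24 (x 10^-4)
hfag, sigma = 9.23, 0.24

# fitted F0^{B pi}(0) times the |Vub| = 35.76 x 10^-4 used in the study
fac = 0.239 * 35.76   # ~8.55
res = 0.253 * 35.76   # ~9.05 (the text quotes 9.03 from the unrounded fit)

chi2_fac = ((fac - hfag) / sigma) ** 2   # ~8, cf. chi^2_SL = 8.0 for Fac
chi2_res = ((res - hfag) / sigma) ** 2   # ~0.6, cf. chi^2_SL = 0.7 for Res
```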
Both fits prefer large $\lambda_{B_s}$.
Except for $\rho_A$ and $\phi_A$, most of the common parameters in Fac and Res have similar fitted values.
A closer look reveals that Fac prefers $\gamma/\phi_3$ close to its central value [see Eq.~(\ref{eq:QCDFHparameters})],
while Res prefers a lower $\gamma/\phi_3$.
Comparing the fitted phases to those in the U(3) exchange-type solution~[see Eq.~(\ref{eq:solutionreU3re})]
$\tau=24.1^\circ$, $\nu=35.3^\circ$ and $\sigma-\delta=0$ and in
the U(3) annihilation-type solution~[see Eq.~(\ref{eq:solutionreU3ra})]
$\tau=-41.8^\circ$, $\nu=-19.5^\circ$ and $\sigma-\delta\neq 0$,
we see that the fitted $\tau\simeq 22^\circ$ and $\nu\simeq 78^\circ$ seem to prefer the exchange-type solution,
while the fitted $\sigma-\delta\simeq 97.4^\circ$ supports the annihilation-type solution.
\subsection{Rescattering effects on Topological Amplitudes}\label{subsec: FSITA}
In this part, we show the rescattering effects on topological amplitudes in certain combinations and on some individual topological amplitudes of interest.
Note that the discussion in the first part is generic, while further assumptions need to be imposed in the second part.
\subsubsection{Rescattering effects on the Combinations of Topological Amplitudes}
It is useful to show the fitted results on residual rescattering parameters $r'_i$ (or $r^{(1/2)}_i$):
\begin{eqnarray}
1+i(r'_0+r'_a)
&=&
(0.979_{-0.008}^{+0.007})\exp[i(11.98_{-1.81}^{+1.66})^\circ+i\delta_{27}],
\nonumber\\
i(r'_e-r'_a)
&=&
(0.208_{-0.031}^{+0.028})\exp[i(-78.36\pm2.02)^\circ+i\delta_{27}],
\nonumber\\
i (r'_a+r'_t)
&=&
(0.059\pm 0.009)\exp[i(-92.06_{-13.21}^{+9.09})^\circ+i\delta_{27}],
\nonumber\\
i(2\bar r'_a+\bar r'_e)
&=&
(0.189_{-0.044}^{+0.048})\exp[i(-78.36\pm2.02)^\circ+i\delta_{27}],
\nonumber\\
1+i(\tilde r'_0+\frac{4\tilde r'_a+2\tilde r'_e}{3})
&=&
(0.990_{-0.006}^{+0.004})\exp[i(3.27_{-1.01}^{+1.24})^\circ+i\delta_{27}],
\nonumber\\
i \hat r'_t+i\frac{4\hat r'_a+2\hat r'_e}{3}
&=&
(0.248_{-0.068}^{+0.067})\exp[i(-29.66_{-11.13}^{+11.13})^\circ+i\delta_{27}],
\nonumber\\
1+i(\check r'_0+\frac{4\check r'_a+2\check r'_e+3\check r'_t}{6})
&=&
(0.936_{-0.041}^{+0.031})\exp[i(118.43_{-21.73}^{+22.24})^\circ+i\delta_{27}].
\label{eq:rfit}
\end{eqnarray}
From the above equation, we see that most of these parameters have large phases (with respect to $\delta_{27}$).
Note that $i \hat r'_t+i({4\hat r'_a+2\hat r'_e})/3$, $i(r'_e-r'_a)$ and $i(2\bar r'_a+\bar r'_e)$ are the three most sizable combinations, and they are close to
$\lambda$, $-i\lambda$ and $-i\lambda$ (taking the overall phase $\delta_{27}=0$), respectively, where $\lambda$ is the Wolfenstein parameter.
\begin{table}[t!]
\caption{Combinations of topological amplitudes of $\Delta S=0$, $\bar B_q\to PP$ and $B_q\to PP$ decays before rescattering ($A^0$) and after rescattering ($A_{FSI}$) in units of $10^{-8}$ GeV.
These results are obtained using the best-fit solution and Eqs.~(\ref{eq: TAQCDF1}), (\ref{eq: TAQCDF2}), (\ref{eq: TAQCDF3}), (\ref{eq: rcombinations1}), (\ref{eq: rcombinations2}) and (\ref{eq: rcombinations3}).
Without loss of generality the overall phase ($\delta_{27}$) for $A_{FSI}$ is set to 0 for simplicity.
\label{tab:TADS0}
}
{\footnotesize
\begin{ruledtabular}
\begin{tabular}{ccccccccc}
& $A^0(\overline B)$
& $A_{FSI}(\overline B)$
& $A_{FSI}/A^0(\overline B)$
& $A^0(B)$
& $A_{FSI}(B)$
& $A_{FSI}/A^0(B)$
\\
\hline
$T+C$
&$ 3.23 e^{-i 79.8^\circ}$
&$ 3.23 e^{-i 79.8^\circ}$
& $1$
&$ 3.23 e^{i 57.9^\circ}$
&$ 3.23 e^{i 57.9^\circ}$
& $1$
\\
\hline
$C-E$
& $1.13 e^{-i119.5^\circ}$
& $1.58 e^{-i118.8^\circ}$
& $1.40 e^{i0.7^\circ}$
& $1.18 e^{i18.2^\circ}$
& $1.58 e^{i19.0^\circ}$
& $1.40 e^{i0.7^\circ}$
\\
\hline
$A+C$
& $1.07 e^{-i122.4^\circ}$
& $1.52 e^{-i120.7^\circ}$
& $1.42 e^{i1.7^\circ}$
& $1.07 e^{i15.3^\circ}$
& $1.52 e^{i17.0^\circ}$
& $1.42 e^{i1.7^\circ}$
\\
\hline
{\footnotesize
$P-C+\frac{1}{3}P_{EW}^{C}$
}
& $1.77 e^{i34.1^\circ}$
& $2.23 e^{i44.1^\circ}$
& $1.26 e^{i10.0^\circ}$
& $0.80 e^{-i102.1^\circ}$
& $0.94 e^{-i128.3^\circ}$
& $1.17 e^{-i26.2^\circ}$
\\
\hline
$PA-\frac{4}{9} C$
& $0.56 e^{i75.4^\circ}$
& $0.45 e^{i69.7^\circ}$
& $0.81 e^{-i5.7^\circ}$
& $0.64 e^{-i160.6^\circ}$
& $0.80 e^{-i141.5^\circ}$
& $1.25 e^{i19.1^\circ}$
\\
$+\frac{13}{9} E-\frac{1}{3} P_{EW}^{C}$
&
\\
\hline
$P_{EW}+P_{EW}^{C}$
& $0.10 e^{i11.9^\circ}$
& $0.10 e^{i11.9^\circ}$
& $1$
& $0.10 e^{-i32.1^\circ}$
& $0.10 e^{-i32.1^\circ}$
& $1$
\\
\hline
$P_{EW}^{C}-P_{EW}^{E}$
& $0.04 e^{-i36.7^\circ}$
& $0.05 e^{-i31.0^\circ}$
& $1.40 e^{i5.7^\circ}$
& $0.04 e^{-i79.5^\circ}$
& $0.05 e^{-i74.9^\circ}$
& $1.37 e^{i4.6^\circ}$
\\
\hline
$P_{EW}^{A}+P_{EW}^{C}$
& $0.03 e^{-i44.2^\circ}$
& $0.05 e^{-i38.5^\circ}$
& $1.53e^{i5.7^\circ}$
& $0.03 e^{-i89.8^\circ}$
& $0.05 e^{-i83.6^\circ}$
& $1.55 e^{i6.1^\circ}$
\\
\hline
$\bar T+2\bar A$
& $2.66 e^{-i63.9^\circ}$
& $2.43 e^{-i56.2^\circ}$
& $0.92 e^{i7.7^\circ}$
& $2.66 e^{i73.8^\circ}$
& $2.43 e^{i81.5^\circ}$
& $0.92 e^{i7.7^\circ}$
\\
\hline
$\bar C_1+2\bar E$
& $0.90 e^{-i134.9^\circ}$
& $1.29 e^{-i129.3^\circ}$
& $1.44 e^{i5.6^\circ}$
& $0.96 e^{i2.8^\circ}$
& $1.29 e^{i8.4^\circ}$
& $1.44 e^{i5.6^\circ}$
\\
\hline
$\bar C_2+2\bar P-\frac{1}{3}\bar P^{C}_{EW,2}$
& $1.97 e^{-i36.6^\circ}$
& $2.22 e^{-i49.1^\circ}$
& $1.13e^{-i12.5^\circ}$
& $2.99 e^{-i29.7^\circ}$
& $3.34 e^{-i27.1^\circ}$
& $1.12 e^{i2.6^\circ}$
\\
\hline
$\bar P_{EW} +2\bar P^{A}_{EW}$
& $0.09 e^{i 28.4^\circ}$
& $0.08 e^{i 37.4^\circ}$
& $0.92 e^{i8.9^\circ}$
& $0.09 e^{-i 15.8^\circ}$
& $0.08 e^{-i 6.7^\circ}$
& $0.92 e^{i9.2^\circ}$
\\
\hline
$\bar P^{C}_{EW,1}+2\bar P^{E}_{EW}$
& $0.04 e^{-i58.8^\circ}$
& $0.05 e^{-i46.9^\circ}$
& $1.34 e^{i11.9^\circ}$
& $0.03 e^{-i72.1^\circ}$
& $0.04 e^{-i68.8^\circ}$
& $1.53 e^{i3.3^\circ}$
\\
\hline
$\tilde C+\tilde E +\tilde P$
& $1.34 e^{-i65.0^\circ}$
& $1.56 e^{-i2.6^\circ}$
& $1.16 e^{i62.4^\circ}$
& $1.89 e^{-i32.2^\circ}$
& $1.92 e^{i59.1^\circ}$
& $1.02 e^{i91.3^\circ}$
\\
$+\frac{3}{2}\widetilde {PA}-\frac{1}{3}\tilde P^{C}_{EW}-\frac{1}{3}\tilde P^{E}_{EW}$
\end{tabular}
\end{ruledtabular}
}
\end{table}
In Tables \ref{tab:TADS0} and \ref{tab:TADS-1} we show the topological amplitudes of $\overline B_q\to PP$ and $B_q\to PP$ decays before rescattering ($A^0$) and after rescattering ($A_{FSI}$) in units of $10^{-8}$ GeV.~\footnote{The $A^0$ are obtained by using the fitted parameters shown in Table~\ref{tab:parameters1}, but with $\tau$, $\nu$, $\delta$ and $\sigma$ set to zero. Do not confuse $A^0$ with the annihilation amplitude, with which it may occasionally share the same notation.}
These amplitudes are expressed in certain combinations as noted in Eq. (\ref{eq: combinations}).
Note that without loss of generality the overall phase ($\delta_{27}$) is set to 0 from now on for simplicity.
The ratios $A_{FSI}/A^0$ are also shown.
These results are obtained using the best fitted solution and Eqs.~(\ref{eq: TAQCDF1}), (\ref{eq: TAQCDF2}), (\ref{eq: TAQCDF3}), (\ref{eq: rcombinations1}), (\ref{eq: rcombinations2}) and (\ref{eq: rcombinations3}). Both $\Delta S=0$ and $\Delta S=-1$ amplitudes are shown.
Note that we do not use them directly in the fitting;
in fact, they can be obtained only after the best-fit result is available.
Nevertheless they provide useful information.
From Table~\ref{tab:TADS0}, we see that before rescattering, we have the following order for $\overline B_q\to PP$ amplitudes:
\begin{eqnarray}
&&|T^0+C^0|
> |\bar T^0+2\bar A^0|
> |\bar C^0_2+2\bar P^0-\frac{1}{3}\bar P^{C0}_{EW,2}|
{\nonumber}\\
&&
> |P^0-C^0+\frac{1}{3}P_{EW}^{C0}|
>|\tilde C^0+\tilde E^0 +\tilde P^0+\frac{3}{2}\tilde {PA}^0-\frac{1}{3}\tilde P^{C0}_{EW}-\frac{1}{3}\tilde P^{E0}_{EW}|
> |C^0-E^0|
{\nonumber}\\
&&
\gtrsim |A^0+C^0|
\gtrsim |\bar C^0_1+2\bar E^0|
>|PA^0-\frac{4}{9} C^0+\frac{13}{9} E^0-\frac{1}{3} P_{EW}^{C0}|,
{\nonumber}
\end{eqnarray}
while the rest are rather small.
After rescattering, we have:
\begin{eqnarray}
&&|T+C|
> |\bar T+2\bar A|
> |P-C+\frac{1}{3}P_{EW}^{C}|
{\nonumber}\\
&&
\gtrsim |\bar C_2+2\bar P-\frac{1}{3}\bar P^{C}_{EW,2}|
> |\tilde C+\tilde E +\tilde P+\frac{3}{2}\tilde {PA}-\frac{1}{3}\tilde P^{C}_{EW}-\frac{1}{3}\tilde P^{E}_{EW}|
\gtrsim |C-E|
{\nonumber}\\
&&
\gtrsim |A+C|
>|\bar C_1+2\bar E|
>|PA-\frac{4}{9} C+\frac{13}{9} E-\frac{1}{3} P_{EW}^{C}|,
{\nonumber}
\end{eqnarray}
where $ |C-E|$, $|A+C|$ and $|\bar C_1+2\bar E|$ are enhanced by $40\sim44\%$,
$|P-C+\frac{1}{3}P_{EW}^{C}|$ by 26\%
and
$|\tilde C+\tilde E +\tilde P+\frac{3}{2}\tilde {PA}-\frac{1}{3}\tilde P^{C}_{EW}-\frac{1}{3}\tilde P^{E}_{EW}|$ by 16\%.
Note that the orders of
$|\bar C_2+2\bar P-\frac{1}{3}\bar P^{C}_{EW,2}|$
and
$ |P-C+\frac{1}{3}P_{EW}^{C}| $
are switched after turning on Res.
Sub-leading tree amplitudes and penguin amplitudes are enhanced.
We will return to this shortly.
Note that, except in $\tilde C+\tilde E +\tilde P+\frac{3}{2}\tilde {PA}-\frac{1}{3}\tilde P^{C}_{EW}-\frac{1}{3}\tilde P^{E}_{EW}$,
Res does not introduce sizable phases to these topological amplitude combinations.
Similarly, from Table~\ref{tab:TADS0}, we see that before rescattering, we have the following order for the conjugated $B_q\to PP$ decay amplitudes:
\begin{eqnarray}
&&|T^0+C^0|
> |\bar C^0_2+2\bar P^0-\frac{1}{3}\bar P^{C0}_{EW,2}|
> |\bar T^0+2\bar A^0|
{\nonumber}\\
&&
>|\tilde C^0+\tilde E^0 +\tilde P^0+\frac{3}{2}\tilde {PA}^0-\frac{1}{3}\tilde P^{C0}_{EW}-\frac{1}{3}\tilde P^{E0}_{EW}|
> |C^0-E^0|
\gtrsim |A^0+C^0|
{\nonumber}\\
&&
\gtrsim |\bar C^0_1+2\bar E^0|
> |P^0-C^0+\frac{1}{3}P_{EW}^{C0}|
>|PA^0-\frac{4}{9} C^0+\frac{13}{9} E^0-\frac{1}{3} P_{EW}^{C0}|,
{\nonumber}
\end{eqnarray}
while the rest are rather small.
Note that the above order is different from the one in $\overline B_q\to PP$ decays.
After rescattering, only the first two terms switch order;
$ |\bar C_2+2\bar P-\frac{1}{3}\bar P^{C}_{EW,2}|$ is enhanced by 12\%,
$|P-C+\frac{1}{3}P_{EW}^{C}|$ by 17\%
and
$ |C-E|$,
$|A+C|$
and
$|\bar C_1+2\bar E|$ by $40\sim44\%$.
Note that Res introduces sizable phases to some of these topological amplitude combinations
and $ |\bar C_2+2\bar P-\frac{1}{3}\bar P^{C}_{EW,2}|$,
$|\tilde C+\tilde E +\tilde P+\frac{3}{2}\tilde {PA}-\frac{1}{3}\tilde P^{C}_{EW}-\frac{1}{3}\tilde P^{E}_{EW}|$ and
$|PA-\frac{4}{9} C+\frac{13}{9} E-\frac{1}{3} P_{EW}^{C}|$
are quite different from those in $\overline B_q\to PP$ decays.
Some comments will be useful.
(i) A large number of combinations of topological amplitudes are sizable.
(ii) After rescattering one sees that the phases introduced to $\bar B\to PP$ and $B\to PP$ amplitudes are quite different.
(iii) The above facts imply that the effects of Res on direct CP violation can be complicated and rich.
(iv) The enhancement of rescattering on some of the $\Delta S=0$ topological amplitudes can be up to $55\%$,
such as on $P^A_{EW}+P^C_{EW}$,
but their sizes are still small even after the enhancement.
Nevertheless this may have impact on some suppressed modes.
It is useful to examine the above enhancements in detail.
It is clear from Eq.~(\ref{eq: rcombinations1}) that the effects of Res on $T+C$ and $P_{EW}+P^C_{EW}$
are just adding the common phase $\delta_{27}$ to them.
The effects on other combinations of topological amplitudes are more interesting.
In $\overline B_q\to PP$ decays, considering only the dominant contributions in Eq. (\ref{eq: rcombinations1}), we have
\begin{eqnarray}
C-E
&\simeq&
(1+ir'_0+ir'_a)(C^{0}-E^{0})
+i(r'_e-r'_a)(T^{0}+C^{0}),
{\nonumber}\\
A+C
&\simeq&
(1+ir'_0+ir'_a)(A^{0}+C^{0})
+\frac{2}{3}i(r'_e-r'_a)(T^{0}+C^{0})
+\frac{1}{3}i(\bar r'_e+2\bar r'_a)(\bar T^{0}+2\bar A^{0}).
{\nonumber}\\
\end{eqnarray}
We can estimate the above values by taking the central values of $(1+ir'_0+ir'_a)$, $i(r'_e-r'_a)$ and $i(\bar r'_e+2\bar r'_a)$ from Eq. (\ref{eq:rfit}) and the central values of $C^{0}-E^{0}$, $A^{0}+C^{0}$, $T^{0}+C^{0}$ and $\bar T^{0}+2\bar A^{0}$ from Table~\ref{tab:TADS0}, obtaining
\begin{eqnarray}
\frac{C-E}{C^{0}-E^{0}}\simeq 1.4\,e^{-i 7^\circ}, %
\quad
\frac{A+C}{A^0+C^0}\simeq 1.4 \,e^{-i 4^\circ},
\end{eqnarray}
which are close to the values of $1.40\,e^{i0.7^\circ}$
and $1.42\,e^{i1.7^\circ}$
shown in Table~\ref{tab:TADS0}.
Even using a crude estimate by taking $(1+ir'_0+ir'_a)\simeq 1$ and $i(r'_e-r'_a)\simeq i(\bar r'_e+2\bar r'_a)\simeq -i \lambda$, one still gets
$1.5\, e^{-i 19^\circ}$ and $1.5\, e^{-i 16^\circ}$, which are not too far off.
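The crude estimate above is straightforward complex arithmetic; as a sketch (moduli and phases of the $A^0$ combinations taken from Table~\ref{tab:TADS0}, $\lambda\simeq0.225$; the helper \texttt{amp} is ours):

```python
import cmath, math

def amp(modulus, phase_deg):
    """Complex amplitude from modulus and phase in degrees."""
    return cmath.rect(modulus, math.radians(phase_deg))

lam = 0.225                # Wolfenstein parameter
TC   = amp(3.23,  -79.8)   # T0 + C0
CE   = amp(1.13, -119.5)   # C0 - E0
AC   = amp(1.07, -122.4)   # A0 + C0
TbAb = amp(2.66,  -63.9)   # Tbar0 + 2*Abar0

# crude limit: (1 + i r'_0 + i r'_a) ~ 1,
# i(r'_e - r'_a) ~ i(rbar'_e + 2 rbar'_a) ~ -i*lambda
ratio_CE = 1 + (-1j * lam) * TC / CE
ratio_AC = 1 + (2 / 3) * (-1j * lam) * TC / AC \
             + (1 / 3) * (-1j * lam) * TbAb / AC

# |ratio_CE| ~ 1.5 at arg ~ -19 deg; |ratio_AC| ~ 1.5 at arg ~ -16 deg
```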
It is clear that the effect of Res on $C-E$ mainly comes from the exchange and annihilation rescatterings fed from the $T^0+C^0$ amplitude,
while that on $A+C$ comes from the exchange and annihilation rescatterings fed from both the $T^0+C^0$ and $\bar T^0+2\bar A^0$ amplitudes.
Similarly
from Eq. (\ref{eq: rcombinations3}), we have
\begin{eqnarray}
&&(\tilde C+\tilde E+\tilde P+\frac{3}{2}\widetilde {PA}-\frac{1}{3}\tilde P^{C}_{EW} -\frac{1}{3}\tilde P^{E}_{EW})
{\nonumber}\\
&&\simeq\left[1+i\left(\check r'_0+\frac{4\check r'_a+2\check r'_e+3\check r'_t}{6}\right)\right]
\bigg(\tilde C^{0}+\tilde E^{0}+\tilde P^{0}
+\frac{3}{2}\widetilde {PA}^{0}
-\frac{1}{3}\tilde P^{C0}_{EW} -\frac{1}{3}\tilde P^{E0}_{EW}\bigg)
{\nonumber}\\
&&\quad
+i\left(\hat r'_t +\frac{4\hat r'_a +2\hat r'_e}{3}\right)
\bigg[\frac{3}{2}(T^{0}+C^{0})+\frac{14}{3}(C^{0}-E^{0})+4 \bigg(P^{0}-C^{0}+\frac{1}{3}P_{EW}^{C0}\bigg)
{\nonumber}\\
&&\qquad
+6 \bigg(PA^{0}-\frac{4}{9} C^{0}+\frac{13}{9} E^{0}-\frac{1}{3} P_{EW}^{C0}\bigg)\bigg],
\end{eqnarray}
and we find that
the $T^{0}+C^{0}$ and $C^{0}-E^{0}$ terms give (sizable) destructive contributions, while
$P^{0}-C^{0}+\frac{1}{3}P_{EW}^{C0}$ and $PA^{0}-\frac{4}{9} C^{0}+\frac{13}{9} E^{0}-\frac{1}{3} P_{EW}^{C0}$
terms give (sizable) constructive contributions via the same Res parameter $i\hat r'_t +i(4\hat r'_a +2\hat r'_e)/3$.
The resulting $16\%$ enhancement in
$|\tilde C+\tilde E+\tilde P+\frac{3}{2}\widetilde {PA}-\frac{1}{3}\tilde P^{C}_{EW} -\frac{1}{3}\tilde P^{E}_{EW}|$
comes from the complicated interplay of these contributions.
We now turn to the Res effect on the penguin amplitudes.
From [see Eq. (\ref{eq: rcombinations1})]
\begin{eqnarray}
P-C+\frac{1}{3}P^{C}_{EW}
&\simeq &\bigg[(1+ir'_0+ir'_a)-\frac{2}{3}i(r'_e-r'_a)\bigg] \bigg(P^{0}-C^{0}+\frac{1}{3}P^{C0}_{EW}\bigg)
{\nonumber}\\
&&\qquad
-i(r'_e-r'_a)
(T^{0} + C^{0})
+\frac{1}{3}i(\bar r'_e+2\bar r'_a)(\bar C^{0}_2+2\bar P^{0}-\frac{1}{3}\bar P^{C0}_{EW,2}),
\end{eqnarray}
we obtain for $\overline B\to PP$ decay:
\begin{eqnarray}
\frac{P-C+\frac{1}{3}P^{C}_{EW}}{P^{0}-C^{0}+\frac{1}{3}P^{C0}_{EW}}\simeq 1.3\, e^{i 10^\circ},
\end{eqnarray}
which is close to the value $1.26\, e^{i 10.0^\circ}$ shown in Table~\ref{tab:TADS0}.
The main contribution is from the $r'_e-r'_a$ rescattering term fed from $T^{0} + C^{0}$.
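The same ratio can be assembled numerically from the fitted residual-rescattering combinations in Eq.~(\ref{eq:rfit}) and the $A^0$ entries of Table~\ref{tab:TADS0}; a sketch of the arithmetic (central values only, $\delta_{27}=0$; the helper \texttt{amp} is ours):

```python
import cmath, math

def amp(modulus, phase_deg):
    """Complex amplitude from modulus and phase in degrees."""
    return cmath.rect(modulus, math.radians(phase_deg))

# fitted combinations from Eq. (eq:rfit), overall phase delta_27 set to 0
one_r0a = amp(0.979,  11.98)   # 1 + i(r'_0 + r'_a)
r_ea    = amp(0.208, -78.36)   # i(r'_e - r'_a)
r_ae    = amp(0.189, -78.36)   # i(2 rbar'_a + rbar'_e)

# A0 combinations from Table tab:TADS0 (Delta S = 0, Bbar decays)
PC  = amp(1.77,  34.1)         # P0 - C0 + (1/3) P_EW^{C0}
TC  = amp(3.23, -79.8)         # T0 + C0
CP2 = amp(1.97, -36.6)         # Cbar0_2 + 2*Pbar0 - (1/3) Pbar^{C0}_{EW,2}

ratio = (one_r0a - (2 / 3) * r_ea) - r_ea * TC / PC \
        + (1 / 3) * r_ae * CP2 / PC
# |ratio| ~ 1.26 at arg ~ +10 deg, matching the Table tab:TADS0 entry
```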
\begin{table}[t!]
\caption{Same as Table~\ref{tab:TADS0}, but for $\Delta S=-1$ transition decay amplitudes.
\label{tab:TADS-1}
}
{\footnotesize
\begin{ruledtabular}
\begin{tabular}{ccccccccc}
& $A^0(\overline B)$
& $A_{FSI}(\overline B)$
& $A_{FSI}/A^0(\overline B)$
& $A^0(B)$
& $A_{FSI}(B)$
& $A_{FSI}/A^0(B)$
\\
\hline
$T^{\prime}+C^{\prime}$
&$ 0.75e^{-i 79.8^\circ}$
&$ 0.75e^{-i 79.8^\circ}$
& $1$
&$ 0.75e^{i 57.9^\circ}$
&$ 0.75e^{i 57.9^\circ}$
& $1$
\\
\hline
$C^{\prime}-E^{\prime}$
& $0.26e^{-i119.5^\circ}$
& $0.36e^{-i118.8^\circ}$
& $1.40e^{i0.7^\circ}$
& $0.26e^{i18.2^\circ}$
& $0.36e^{i19.0^\circ}$
& $1.40 e^{i0.7^\circ}$
\\
\hline
$A^{\prime}+C^{\prime}$
& $0.25e^{-i122.4^\circ}$
& $0.35e^{-i120.7^\circ}$
& $1.42e^{i1.7^\circ}$
& $0.25e^{i15.3^\circ}$
& $0.35e^{i17.0^\circ}$
& $1.42e^{i1.7^\circ}$
\\
\hline
$P^{\prime}-C^{\prime}+\frac{1}{3}P_{EW}^{\prime C}$
& $4.36 e^{i164.2^\circ}$
& $4.00 e^{i174.0^\circ}$
& $0.92 e^{i9.8^\circ}$
& $4.64 e^{i170.3^\circ}$
& $4.48 e^{-i178.6^\circ}$
& $0.97 e^{i11.1^\circ}$
\\
\hline
$PA^{\prime}-\frac{4}{9} C^{\prime}$
& $0.42 e^{i3.5^\circ}$
& $0.99 e^{i73.2^\circ}$
& $2.37 e^{i69.7^\circ}$
& $0.29 e^{-i31.2^\circ}$
& $0.74 e^{i81.8^\circ}$
& $2.58 e^{i113.0^\circ}$
\\
$+\frac{13}{9} E^{\prime}-\frac{1}{3} P_{EW}^{\prime C}$
&
\\
\hline
$P^{\prime}_{EW}+P_{EW}^{\prime C}$
& $0.46e^{i168.9^\circ}$
& $0.46e^{i168.9^\circ}$
& $1$
& $0.46e^{i171.0^\circ}$
& $0.46e^{i171.0^\circ}$
& $1$
\\
\hline
$P_{EW}^{\prime C}-P_{EW}^{\prime E}$
& $0.18e^{i120.8^\circ}$
& $0.26e^{i124.8^\circ}$
& $1.40e^{i3.9^\circ}$
& $0.18e^{i122.9^\circ}$
& $0.26e^{i126.8^\circ}$
& $1.40e^{i3.9^\circ}$
\\
\hline
$P_{EW}^{\prime A}+P_{EW}^{\prime C}$
& $0.14 e^{i113.6^\circ}$
& $0.22 e^{i118.9^\circ}$
& $1.53 e^{i5.3^\circ}$
& $0.14 e^{i115.6^\circ}$
& $0.22 e^{i121.0^\circ}$
& $1.53 e^{i5.4^\circ}$
\\
\hline
$\bar T^{\prime}+2\bar A^{\prime}$
& $0.61 e^{-i63.9^\circ}$
& $0.56 e^{-i56.2^\circ}$
& $0.92e^{i7.7^\circ}$
& $0.61 e^{i73.8^\circ}$
& $0.56 e^{i81.5^\circ}$
& $0.92e^{i7.7^\circ}$
\\
\hline
$\bar C^{\prime}_1+2\bar E^{\prime}$
& $0.21 e^{-i134.9^\circ}$
& $0.30 e^{-i129.3^\circ}$
& $1.44 e^{i5.6^\circ}$
& $0.21 e^{i2.8^\circ}$
& $0.30 e^{i8.4^\circ}$
& $1.44 e^{i5.6^\circ}$
\\
\hline
$\bar C^{\prime}_2+2\bar P^{\prime}-\frac{1}{3}\bar P^{\prime C}_{EW,2}$
& $10.62 e^{i152.6^\circ}$
& $11.13 e^{i149.9^\circ}$
& $1.05 e^{-i2.7^\circ}$
& $10.38 e^{i152.4^\circ}$
& $10.85 e^{i148.8^\circ}$
& $1.05 e^{-i3.6^\circ}$
\\
\hline
$\bar P^{\prime}_{EW} +2\bar P^{\prime A}_{EW}$
& $0.42 e^{-i 174.8^\circ}$
& $0.38 e^{-i 165.9^\circ}$
& $0.92 e^{i8.8^\circ}$
& $0.42 e^{-i 172.6^\circ}$
& $0.38 e^{-i 163.8^\circ}$
& $0.92 e^{i8.9^\circ}$
\\
\hline
$\bar P^{\prime C}_{EW,1}+2\bar P^{\prime E}_{EW}$
& $0.08 e^{i61.0^\circ}$
& $0.12 e^{i93.9^\circ}$
& $1.48 e^{i33.0^\circ}$
& $0.08 e^{i63.0^\circ}$
& $0.12 e^{i93.0^\circ}$
& $1.48 e^{i33.0^\circ}$
\\
\hline
$\tilde C^{\prime}+\tilde E^{\prime} +\tilde P^{\prime}$
& $6.04 e^{i140.0^\circ}$
& $5.01 e^{-i141.5^\circ}$
& $0.83 e^{i78.5^\circ}$
& $5.89 e^{i138.1^\circ}$
& $4.88 e^{-i146.1^\circ}$
& $0.83e^{i75.8^\circ}$
\\
$+\frac{3}{2}\widetilde {PA}^{\prime}-\frac{1}{3}\tilde P^{\prime C}_{EW}-\frac{1}{3}\tilde P^{\prime E}_{EW}$
\end{tabular}
\end{ruledtabular}
}
\end{table}
We now turn to $\Delta S=-1$ processes. The results are shown in Table~\ref{tab:TADS-1}.
We see from the table that before rescattering,
we have the following order for $\overline B_q\to PP$ amplitudes:
\begin{eqnarray}
&&|\bar C^{\prime 0}_2+2\bar P^{\prime 0}-\frac{1}{3}\bar P^{\prime C0}_{EW,2}|
>|\tilde C^{\prime 0}+\tilde E^{\prime 0} +\tilde P^{\prime 0}+\frac{3}{2}\tilde {PA}^{\prime 0}-\frac{1}{3}\tilde P^{\prime C0}_{EW}-\frac{1}{3}\tilde P^{\prime E0}_{EW}|
{\nonumber}\\
&&
> |P^{\prime 0}-C^{\prime 0}+\frac{1}{3}P_{EW}^{\prime C0}|
\gg |T^{\prime 0}+C^{\prime 0}|
> |\bar T^{\prime 0}+2\bar A^{\prime 0}|
> |P^{\prime 0}_{EW}+P^{\prime C0}_{EW}|
{\nonumber}\\
&&\gtrsim |PA^{\prime 0}-\frac{4}{9} C^{\prime 0}+\frac{13}{9} E^{\prime 0}-\frac{1}{3} P_{EW}^{\prime C0}|
\gtrsim |\bar P^{\prime 0}_{EW}+2\bar P^{\prime A0}_{EW}|
> |C^{\prime 0}-E^{\prime 0}|
\gtrsim |A^{\prime 0}+C^{\prime 0}|,
{\nonumber}
\end{eqnarray}
while the rest are rather small.
Note that, as expected, penguin amplitudes dominate over trees.
In fact, even the electroweak penguin amplitudes, which were neglected in the $\Delta S=0$ case, cannot be neglected now.
After rescattering, the above orders are rearranged into:
\begin{eqnarray}
&&
|\bar C^\prime_2+2\bar P^\prime-\frac{1}{3}\bar P^{\prime C}_{EW,2}|
>|\tilde C^\prime+\tilde E^\prime +\tilde P^\prime+\frac{3}{2}\tilde {PA}^\prime-\frac{1}{3}\tilde P^{\prime C}_{EW}-\frac{1}{3}\tilde P^{\prime E}_{EW}|
> |P^\prime-C^\prime+\frac{1}{3}P_{EW}^{\prime C}|
{\nonumber}\\
&&
\gg |PA^\prime-\frac{4}{9} C^\prime+\frac{13}{9} E^\prime-\frac{1}{3} P_{EW}^{\prime C}|
>|T^\prime+C^\prime|
> |\bar T^\prime+2\bar A^\prime|
{\nonumber}\\
&&
> |P^{\prime}_{EW}+P^{\prime C}_{EW}|
> |\bar P^{\prime }_{EW}+2\bar P^{\prime A}_{EW}|
\gtrsim |C^\prime-E^\prime|
\gtrsim |A^\prime+C^\prime|.
{\nonumber}
\end{eqnarray}
We see that the combinations with sub-leading tree amplitudes, $C'-E'$ and $A'+C'$, are enhanced,
while the one with the penguin term, $P'-C'+P^{\prime C}_{EW}/3$, is slightly reduced.
Note that $|PA^\prime-\frac{4}{9} C^\prime+\frac{13}{9} E^\prime-\frac{1}{3} P_{EW}^{\prime C}|$ is enhanced by a factor of 2,
but $|\tilde C^\prime+\tilde E^\prime +\tilde P^\prime+\frac{3}{2}\tilde {PA}^\prime-\frac{1}{3}\tilde P^{\prime C}_{EW}-\frac{1}{3}\tilde P^{\prime E}_{EW}|$ is reduced by about 20\%.
A similar pattern occurs in the conjugate $B_q\to PP$ decays.
The effect of rescattering on $A'+C'$ is similar to the one in $A+C$.
It is enhanced from the exchange and annihilation rescatterings fed from both $T^{\prime 0}+C^{\prime 0}$
and $\bar T^{\prime 0}+2\bar A^{\prime 0}$ amplitudes.
We also note that the effect of rescattering on $P^{\prime C}_{EW}-P^{\prime E}_{EW}$ is similar to the one in $C^{\prime}-E^{\prime}$,
but with tree amplitudes replaced by electroweak penguins.
Hence $P^{\prime C}_{EW}-P^{\prime E}_{EW}$ is affected most by $P^{\prime 0}_{EW}+P^{\prime C0}_{EW}$ and the effect is an enhancement in size.
It is useful to see the enhancement and reduction in $|PA^\prime-\frac{4}{9} C^\prime+\frac{13}{9} E^\prime-\frac{1}{3} P_{EW}^{\prime C}|$
and $|\tilde C^\prime+\tilde E^\prime +\tilde P^\prime+\frac{3}{2}\tilde {PA}^\prime-\frac{1}{3}\tilde P^{\prime C}_{EW}-\frac{1}{3}\tilde P^{\prime E}_{EW}|$, respectively, in more detail.
In $\overline B_q\to PP$ decays,
keeping only the $(PA^{\prime 0}-\frac{4}{9} C^{\prime 0}+\frac{13}{9} E^{\prime 0}-\frac{1}{3} P_{EW}^{\prime C0})$ and
the $(P^{\prime 0} - C^{\prime 0} + \frac{1}{3} P_{EW}^{\prime C0})$ terms in the corresponding formula shown in Eq. (\ref{eq: rcombinations1}),
we obtain
\begin{eqnarray}
\frac{PA^{\prime}-\frac{4}{9} C^{\prime}+\frac{13}{9} E^{\prime}-\frac{1}{3} P_{EW}^{\prime C}}
{PA^{\prime 0}-\frac{4}{9} C^{\prime 0}+\frac{13}{9} E^{\prime 0}-\frac{1}{3} P_{EW}^{\prime C0}}
\simeq 2.6\, e^{i 52^\circ},
\end{eqnarray}
which is close to the value $2.37\,e^{i69.7^\circ}$ shown in Table~\ref{tab:TADS-1}.
Similarly, using the corresponding formula in Eq. (\ref{eq: rcombinations1}) and keeping only the $(\tilde C^{\prime 0}+\tilde E^{\prime 0}+\tilde P^{\prime 0}
+\frac{3}{2}\widetilde {PA}^{\prime 0}
-\frac{1}{3}\tilde P^{\prime C0}_{EW} -\frac{1}{3}\tilde P^{\prime E0}_{EW})$ and
the $(P^{\prime 0} - C^{\prime 0} + \frac{1}{3} P_{EW}^{\prime C0})$ terms
we obtain
\begin{eqnarray}
\frac
{\tilde C^{\prime}+\tilde E^{\prime}+\tilde P^{\prime}+\frac{3}{2}\widetilde {PA}^{\prime}-\frac{1}{3}\tilde P^{\prime C}_{EW} -\frac{1}{3}\tilde P^{\prime E}_{EW}}
{\tilde C^{\prime 0}+\tilde E^{\prime 0}+\tilde P^{\prime 0}+\frac{3}{2}\widetilde {PA}^{\prime 0}
-\frac{1}{3}\tilde P^{\prime C0}_{EW} -\frac{1}{3}\tilde P^{\prime E0}_{EW}}
\simeq 0.8\,e^{i 76^\circ},
\end{eqnarray}
which is close to the value $0.83\,e^{i 78.5^\circ}$ shown in Table~\ref{tab:TADS-1}.
In both cases the most important contributions are from the $(P^{\prime 0} - C^{\prime 0} + \frac{1}{3} P_{EW}^{\prime C0})$ term.
\begin{table}[t!]
\caption{Some topological amplitudes of $\Delta S=0,-1$, $\bar B_q\to PP$ and $B_q\to PP$ decays before rescattering ($A^0$) and after rescattering ($A_{FSI}$) in units of $10^{-9}$ GeV.
These results are obtained using the best fitted solution and Eqs.~(\ref{eq: TAQCDF1}), (\ref{eq: TAQCDF2}), (\ref{eq: TAQCDF3}) and
(\ref{eq: delta TA1}).
We use an additional assumption, $r'_e=0$, as suggested by U(3) symmetry on TA [see Eq. (\ref{eq: re=0})].
Without loss of generality the overall phase ($\delta_{27}$) for $A_{FSI}$ is set to 0.
Results in combinations of $\bar A$ and $\tilde A$ can be found in Tables \ref{tab:TADS0} and \ref{tab:TADS-1}.
\label{tab:TADS0-1}
}
{\footnotesize
\begin{ruledtabular}
\begin{tabular}{ccccccccc}
& $A^0(\overline B)$
& $A_{FSI}(\overline B)$
& $A_{FSI}/A^0(\overline B)$
& $A^0(B)$
& $A_{FSI}(B)$
& $A_{FSI}/A^0(B)$
\\
\hline
$T$
&$ 25.84e^{-i 63.5^\circ}$
&$ 25.84e^{-i 63.5^\circ}$
& $1$
&$ 25.84e^{i 74.2^\circ}$
&$ 25.84e^{i 74.2^\circ}$
& $1$
\\
\hline
$C$
& $10.45 e^{-i123.9^\circ}$
& $10.45 e^{-i123.9^\circ}$
& $1$
& $10.45e^{i13.8^\circ}$
& $10.45e^{i13.8^\circ}$
& $1$
\\
\hline
$E$
& $1.19e^{i102.6^\circ}$
& $5.46e^{i71.0^\circ}$
& $4.61e^{-i31.5^\circ}$
& $1.19e^{-i119.7^\circ}$
& $5.46e^{-i151.2^\circ}$
& $4.61e^{-i31.5^\circ}$
\\
\hline
$A$
& $0.38e^{-i77.4^\circ}$
& $4.78e^{-i113.8^\circ}$
& $12.67e^{-i36.4^\circ}$
& $0.38e^{i60.3^\circ}$
& $4.78e^{i23.9^\circ}$
& $12.67e^{-i36.4^\circ}$
\\
\hline
$P$
& $8.89 e^{i8.6^\circ}$
& $12.26e^{i34.3^\circ}$
& $1.38e^{i25.7^\circ}$
& $9.94 e^{-i31.6^\circ}$
& $6.43e^{-i47.6^\circ}$
& $0.65e^{-i16.0^\circ}$
\\
\hline
$PA$
& $0.76 e^{-i166.4^\circ}$
& $7.95 e^{-i116.2^\circ}$
& $10.50e^{i50.2^\circ}$
& $0.76 e^{i149.3^\circ}$
& $5.19 e^{-i1.6^\circ}$
& $6.86e^{-i150.9^\circ}$
\\
\hline
$P_{EW}$
& $0.86e^{i29.0^\circ}$
& $0.86e^{i29.0^\circ}$
& $1$
& $0.86e^{-i15.3^\circ}$
& $0.86e^{-i15.3^\circ}$
& $1$
\\
\hline
$P_{EW}^{C}$
& $0.29e^{-i46.8^\circ}$
& $0.29e^{-i46.8^\circ}$
& $1$
& $0.29e^{-i89.1^\circ}$
& $0.29e^{-i89.1^\circ}$
& $1$
\\
\hline
$P_{EW}^{E}$
& $0.11 e^{i170.5^\circ}$
& $0.27 e^{i166.0^\circ}$
& $2.42e^{-i4.4^\circ}$
& $0.11 e^{i126.2^\circ}$
& $0.26 e^{i121.2^\circ}$
& $2.31e^{-i5.0^\circ}$
\\
\hline
$P_{EW}^{A}$
& $0.02 e^{i13.6^\circ}$
& $0.18 e^{-i24.7^\circ}$
& $11.21e^{-i38.3^\circ}$
& $0.02 e^{-i30.7^\circ}$
& $0.18 e^{-i68.8^\circ}$
& $11.08e^{-i38.1^\circ}$
\\
\hline
$T^{\prime}$
&$ 5.98e^{-i 63.5^\circ}$
&$ 5.98e^{-i 63.5^\circ}$
& $1$
&$ 5.98e^{i 74.2^\circ}$
&$5.98e^{i 74.2^\circ}$
& $1$
\\
\hline
$C^{\prime}$
& $2.41e^{-i123.9^\circ}$
& $2.41e^{-i123.9^\circ}$
& $1$
& $2.41e^{i13.8^\circ}$
& $2.41e^{i13.8^\circ}$
& $1$
\\
\hline
$E^{\prime}$
& $0.27e^{i102.6^\circ}$
& $1.26e^{i71.0^\circ}$
& $4.61e^{-i31.5^\circ}$
& $0.27e^{-i119.7^\circ}$
& $1.26e^{-i151.2^\circ}$
& $4.61e^{-i31.5^\circ}$
\\
\hline
$A^{\prime}$
& $0.09e^{-i77.4^\circ}$
& $1.10e^{-i113.8^\circ}$
& $12.67e^{-i36.4^\circ}$
& $0.09e^{i60.3^\circ}$
& $1.10e^{i23.9^\circ}$
& $12.67e^{-i36.4^\circ}$
\\
\hline
$P^{\prime}$
& $44.12 e^{i167.6^\circ}$
& $40.99e^{i177.5^\circ}$
& $0.93e^{i9.9^\circ}$
& $43.90 e^{i169.6^\circ}$
& $42.29e^{-i178.7^\circ}$
& $0.96e^{i11.7^\circ}$
\\
\hline
$PA^{\prime}$
& $3.54 e^{-i9.6^\circ}$
& $7.43 e^{i78.3^\circ}$
& $2.10e^{i87.9^\circ}$
& $3.54 e^{-i7.5^\circ}$
& $9.56 e^{i68.4^\circ}$
& $2.70e^{i75.9^\circ}$
\\
\hline
$P^{\prime}_{EW}$
& $4.01e^{-i174.2^\circ}$
& $4.01e^{-i174.2^\circ}$
& $1$
& $4.01e^{-i172.1^\circ}$
& $4.01e^{-i172.1^\circ}$
& $1$
\\
\hline
$P_{EW}^{\prime C}$
& $1.38e^{i111.0^\circ}$
& $1.38e^{i111.0^\circ}$
& $1$
& $1.38e^{i113.0^\circ}$
& $1.38e^{i113.0^\circ}$
& $1$
\\
\hline
$P_{EW}^{\prime E}$
& $0.53 e^{-i32.7^\circ}$
& $1.27 e^{-i40.3^\circ}$
& $2.40e^{-i7.6^\circ}$
& $0.53 e^{-i30.6^\circ}$
& $1.27 e^{-i38.2^\circ}$
& $2.41e^{-i7.6^\circ}$
\\
\hline
$P_{EW}^{\prime A}$
& $0.07 e^{i170.4^\circ}$
& $0.83 e^{i132.2^\circ}$
& $11.14 e^{-i38.2^\circ}$
& $0.07 e^{i172.5^\circ}$
& $0.83 e^{i134.3^\circ}$
& $11.15 e^{-i38.2^\circ}$
\end{tabular}
\end{ruledtabular}
}
\end{table}
\subsubsection{Rescattering Effects on Some Individual Topological Amplitudes}
The results in Tables~\ref{tab:TADS0} and \ref{tab:TADS-1} are all we can obtain if no further assumption is made.
It is, however, desirable to reveal the effect of Res on some individual topological amplitudes instead of their combinations.
To explore the effect one needs the information of various $r'_i$ instead of their combinations shown in Eq.~(\ref{eq:rfit}).
For example, the Res effect on the exchange amplitude is given by [see Eq.~(\ref{eq: delta TA1})]
\begin{eqnarray}
\delta E^{(\prime)}
&=&ir'_0 E^{(\prime)0}+ ir'_a T^{(\prime)0}-\frac{1}{3} i (r'_e+2r'_a) C^{(\prime)0}+\frac{1}{3}i(-2 r'_e+5 r'_a) E^{(\prime)0}
{\nonumber}\\
&& +\frac{1}{3}i(\bar r'_e+2\bar r'_a)(\bar C^{(\prime)0}_1+2\bar E^{(\prime)0}).
\end{eqnarray}
It is clear that we need the information of $r'_0$, $r'_a$, $r'_e$ and so on to obtain $\delta E^{(\prime)}$.
From the fit we only have information on some combinations of these rescattering parameters,
such as $1+i (r'_0+ r'_a)$, $i(r'_e-r'_a)$ and so on [see Eq. (\ref{eq:rfit})], but not on individual ones.
To study the effect of Res on individual topological amplitudes, we make an additional assumption:
\begin{eqnarray}
r'_e=0,
\end{eqnarray}
which is suggested by the U(3) symmetry on TA~[see Eq.~(\ref{eq: re=0})].
Note that we only assume $r'_e=0$ and do not impose any condition on $\bar r'_e$, $\hat r'_e$ and $\check r'_e$.
Hence we are not using the full U(3) symmetry, but rather consider the case of a suppressed $r'_e$.
Using the above assumption and the results in Eq. (\ref{eq:rfit}) one can now extract the effect of Res on some individual TAs of interest.
The results are shown in Table~\ref{tab:TADS0-1}. One should keep in mind the assumption made.
Note that the above assumption will affect our interpretation of the effect of Res on individual topological amplitudes,
but not on the interpretation of the effect of Res on the combinations of topological amplitudes as discussed previously.
In other words, the above assumption will affect the results stated in Table~\ref{tab:TADS0-1}, but not those in Tables~\ref{tab:TADS0} and \ref{tab:TADS-1}.
From Table~\ref{tab:TADS0-1} we see that, before Res, for $\overline B_q\to PP$ and $B_q\to PP$ decays, we have
\begin{eqnarray}
&&|T^0|
>|C^0|
>|P^0|
\gg|E^0|
>|P_{EW}^0|
>|PA^0|
>|A^0|
\gtrsim |P_{EW}^{C0}|
>|P_{EW}^{E0}|
\gg|P_{EW}^{A0}|,
{\nonumber}\\
&&|P^{\prime 0}|
\gg |T^{\prime 0}|
>|P_{EW}^{\prime 0}|
>|PA^{\prime 0}|
\gtrsim |C^{\prime 0}|
>|P_{EW}^{\prime C0}|
>|P_{EW}^{\prime E0}|
>|E^{\prime 0}|
\gg |A^{\prime 0}|
\gtrsim |P_{EW}^{\prime A0}|,
\end{eqnarray}
while after Res, we have
\begin{eqnarray}
&&|T|
>|P|
>|C|
>|PA|
>|E|
>|A|
\gg|P_{EW}|
> |P_{EW}^{C}|
>|P_{EW}^{E}|
>|P_{EW}^{A}|,
{\nonumber}\\
&&|P^{\prime }|
\gg |PA^{\prime }|
>|T^{\prime }|
>|P_{EW}^{\prime }|
> |C^{\prime }|
>|P_{EW}^{\prime C}|
> |P_{EW}^{\prime E}|
\gtrsim|E^{\prime }|
>|A^{\prime }|
> |P_{EW}^{\prime A}|,
\end{eqnarray}
for $\overline B_q\to PP$ decays, and
\begin{eqnarray}
&&|T|
>|C|
>|P|
>|E|
>|PA|
>|A|
\gg|P_{EW}|
> |P_{EW}^{C}|
>|P_{EW}^{E}|
>|P_{EW}^{A}|,
{\nonumber}\\
&&|P^{\prime}|
\gg |PA^{\prime}|
>|T^{\prime}|
> |P^{\prime}_{EW}|
> |C^{\prime}|
> |P_{EW}^{\prime C}|
> |P_{EW}^{\prime E}|
\gtrsim |E^{\prime}|
> |A^{\prime}|
> |P_{EW}^{\prime A}|,
\end{eqnarray}
for $B_q\to PP$ decays.
Note that the positions of $|P|$ and $|PA|$ in the above orders are different in $\overline B_q\to PP$ and $B_q\to PP$ decays. We will come to that later.
We see from Table~\ref{tab:TADS0-1} that $|E|$, $|E'|$, $|A|$, $|A'|$, $|PA|$, $|PA'|$, $|P^{A,E}_{EW}|$ and $|P^{\prime A,E}_{EW}|$ are enhanced significantly, with factors ranging from $2$ to $11$,
while $|P|$ is enhanced by $35\%$ in $\overline B_q\to PP$ decays
but suppressed by $35\%$ in $B_q\to PP$ decays, and $|P'|$ is suppressed by $6\%$ and $3\%$ in $\overline B_q\to PP$ and $B_q\to PP$ decays, respectively.
Note that, in particular, $|A|$ and $|A'|$ are enhanced by a factor of about $12.7$. It is useful to look into this enhancement.
From Eq.~(\ref{eq: delta TA1}), we have
\begin{eqnarray}
A^{(\prime)}&=&(1+i r'_0-\frac{2}{3} i r'_e +\frac{5}{3} ir'_a )A^{(\prime)0}
-\frac{1}{3}i(r'_e+2r'_a) T^{(\prime)0}+ir'_a C^{(\prime)0}
{\nonumber}\\
&&+\frac{1}{3}i(\bar r'_e+2\bar r'_a)(\bar T^{(\prime)0}+2\bar A^{(\prime)0}).
\end{eqnarray}
Now, making use of $r'_e=0$ and Eq. (\ref{eq:rfit}), we obtain
\begin{eqnarray}
\frac{A^{(\prime)}}{A^{(\prime)0}}=0.99 e^{i 20.0^\circ}+9.48 e^{-i64.4^\circ}+5.75 e^{i 55.2^\circ}+4.45 e^{-i 64.8^\circ}
=12.67 e^{-i 36.4^\circ},
\end{eqnarray}
where the terms on the right-hand side of the first equality are from the $A^{(\prime)0}$, $T^{(\prime)0}$, $C^{(\prime)0}$ and $\bar T^{(\prime)0}+2\bar A^{(\prime)0}$ contributions, respectively.
We see that the $T^{(\prime)0}$, $C^{(\prime)0}$, $\bar T^{(\prime)0}+2\bar A^{(\prime)0}$ terms give sizable contributions to $A^{(\prime)}$,
via $r'_a$, $r'_a$ and $\bar r'_e+2 \bar r'_a$ rescatterings, respectively,
and enhance its size significantly.
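As an illustrative numerical cross-check (not part of the original analysis; the term magnitudes and phases are simply read off the equation above), the phasor sum can be reproduced with a few lines of Python:

```python
import cmath
import math

def phasor(r, deg):
    """Return the complex number r * exp(i*deg), with the phase in degrees."""
    return cmath.rect(r, math.radians(deg))

# A/A0: contributions from A0, T0, C0 and (Tbar0 + 2*Abar0), in that order
ratio_A = (phasor(0.99, 20.0) + phasor(9.48, -64.4)
           + phasor(5.75, 55.2) + phasor(4.45, -64.8))

print(abs(ratio_A), math.degrees(cmath.phase(ratio_A)))  # close to 12.67 and -36.4
```

The sum indeed comes out at about $12.67\,e^{-i36.4^\circ}$, confirming the large enhancement of $|A^{(\prime)}|$.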
Similarly we have
\begin{eqnarray}
\frac{E^{(\prime)}}{E^{(\prime)0}}=0.99 e^{i 20.0^\circ}+4.53 e^{-i64.4^\circ}+1.22 e^{i 55.2^\circ}+0.48 e^{i 44.2^\circ}=4.61 e^{-i 31.5^\circ},
\label{eq: E/E0}
\end{eqnarray}
where the terms on the right-hand side of the first equality are from the $E^{(\prime)0}$, $T^{(\prime)0}$, $C^{(\prime)0}$ and $\bar C^{(\prime)0}_1+2\bar E^{(\prime)0}$ contributions, respectively. The dominant contribution is from $T^{(\prime)0}$ via the annihilation rescattering $r'_a$.
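The sum in Eq. (\ref{eq: E/E0}) can be cross-checked in the same way; a minimal sketch with the term values read off the equation:

```python
import cmath
import math

def phasor(r, deg):
    """Return r * exp(i*deg) with the phase given in degrees."""
    return cmath.rect(r, math.radians(deg))

# E/E0: E0, T0, C0 and (Cbar0_1 + 2*Ebar0) contributions, in that order
ratio_E = (phasor(0.99, 20.0) + phasor(4.53, -64.4)
           + phasor(1.22, 55.2) + phasor(0.48, 44.2))

print(abs(ratio_E), math.degrees(cmath.phase(ratio_E)))  # close to 4.61 and -31.5
```

The $T^{(\prime)0}$ term alone has magnitude $4.53$, visibly dominating the total of about $4.61\,e^{-i31.5^\circ}$.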
\begin{table}[t!]
\caption{Same as Table~\ref{tab:TADS0-1}, but for $u$-penguins and $c$-penguins.
\label{tab:ucpenguinsDS0-1}
}
{\footnotesize
\begin{ruledtabular}
\begin{tabular}{ccccccccc}
& $A^0(\overline B)$
& $A_{FSI}(\overline B)$
& $A_{FSI}/A^0(\overline B)$
& $A^0(B)$
& $A_{FSI}(B)$
& $A_{FSI}/A^0(B)$
\\
\hline
$P^u$
& $3.51 e^{i108.3^\circ}$
& $6.98 e^{i84.7^\circ}$
& $1.99e^{-i23.6^\circ}$
& $3.51 e^{-i113.9^\circ}$
& $6.98 e^{-i137.6^\circ}$
& $1.99e^{-i23.6^\circ}$
\\
\hline
$P^c$
& $10.09 e^{-i11.4^\circ}$
& $9.49 e^{-i0.2^\circ}$
& $0.94e^{i11.2^\circ}$
& $10.09 e^{-i11.4^\circ}$
& $9.49 e^{-i0.2^\circ}$
& $0.94e^{i11.2^\circ}$
\\
\hline
$PA^u$
& $0.31 e^{-i77.4^\circ}$
& $5.98 e^{-i120.1^\circ}$
& $19.55e^{-i42.7^\circ}$
& $0.31 e^{i60.3^\circ}$
& $5.98 e^{i17.6^\circ}$
& $19.55e^{-i42.7^\circ}$
\\
\hline
$PA^c$
& $0.81 e^{i171.4^\circ}$
& $2.02 e^{-i104.6^\circ}$
& $2.49 e^{i84.0^\circ}$
& $0.81 e^{i171.4^\circ}$
& $2.02 e^{-i104.6^\circ}$
& $2.49 e^{i84.0^\circ}$
\\
\hline
$P^{u\prime}$
& $0.81 e^{i108.3^\circ}$
& $1.61 e^{i84.7^\circ}$
& $1.99 e^{-i23.6^\circ}$
& $0.81 e^{-i113.9^\circ}$
& $1.61 e^{-i137.6^\circ}$
& $1.99 e^{-i23.6^\circ}$
\\
\hline
$P^{c\prime}$
& $43.71 e^{i168.6^\circ}$
& $41.10e^{i179.8^\circ}$
& $0.94e^{i11.2^\circ}$
& $43.71 e^{i168.6^\circ}$
& $41.10e^{i179.8^\circ}$
& $0.94e^{i11.2^\circ}$
\\
\hline
$PA^{u\prime}$
& $0.07e^{-i77.4^\circ}$
& $1.38 e^{-i120.1^\circ}$
& $19.55e^{-i42.7^\circ}$
& $0.07e^{i60.3^\circ}$
& $1.38 e^{i17.6^\circ}$
& $19.55e^{-i42.7^\circ}$
\\
\hline
$PA^{c\prime}$
& $3.51 e^{-i8.6^\circ}$
& $8.75 e^{i75.4^\circ}$
& $2.49 e^{i84.0^\circ}$
& $3.51 e^{-i8.6^\circ}$
& $8.75 e^{i75.4^\circ}$
& $2.49 e^{i84.0^\circ}$
\end{tabular}
\end{ruledtabular}
}
\end{table}
As noted previously $P^{(\prime)}$ and $PA^{(\prime)}$ receive different Res contributions in $\overline B_q\to PP$ and $B_q\to PP$ decays.
It is interesting to investigate the effects of Res on these penguin amplitudes in detail.
First, we decompose $P^{(\prime)}$ into the so-called $u$-penguin ($P^{(\prime)u}$) and $c$-penguin ($P^{(\prime)c}$)
as $P^{(\prime)}=P^{(\prime)u}+P^{(\prime)c}$ according to the different CKM factors.
Now from Eq.~(\ref{eq: delta TA1}), we have
\begin{eqnarray}
P^{(\prime)u}&=&
\left[1+ir'_0 +\frac{1}{3}i(-2 r'_e +5 r'_a)\right] P^{(\prime)u0}
+i r'_a T^{(\prime)0}
-\frac{1}{3}i( r'_e +2 r'_a) C^{(\prime)0}
{\nonumber}\\
&&-\frac{1}{3} i r'_a P^{(\prime)u0}_{EW}
+\frac{1}{9}i(r'_e +2 r'_a) P_{EW}^{(\prime)uC0}
+ \frac{1}{3}i(\bar r'_e+2\bar r'_a)(\bar C^{(\prime)0}_2+2\bar P^{(\prime)u0}-\frac{1}{3}\bar P^{(\prime)uC0}_{EW,2}),
{\nonumber}\\
P^{(\prime)c}&=&\left[1+ir'_0+\frac{1}{3}i(- 2r'_e +5 r'_a)\right] P^{(\prime)c0}
{\nonumber}\\
&&-\frac{1}{3} i r'_a P^{(\prime)c0}_{EW}
+\frac{1}{9}i(r'_e +2 r'_a) P_{EW}^{(\prime)cC0}
+ \frac{1}{3}i(\bar r'_e+2\bar r'_a)(2\bar P^{(\prime)c0}-\frac{1}{3}\bar P^{(\prime)cC0}_{EW,2}).
\end{eqnarray}
Using these formulas and the best fit parameters, we obtain
\begin{eqnarray}
\frac{P^{(\prime)u}}{P^{(\prime)u0}}
=0.99 e^{i 20.0^\circ}+1.53 e^{-i70.2^\circ}+0.41 e^{i 49.4^\circ}+0.01 e^{-i 56.1^\circ}+0.10 e^{-i 2.3^\circ}
=1.99 e^{-i 23.6^\circ},
\label{eq: PuPu0}
\end{eqnarray}
where the terms on the right-hand side of the first equality are from the $P^{(\prime)u0}$, $T^{(\prime)0}$, $C^{(\prime)0}$, $P^{(\prime)u0, (\prime)uC0}_{EW}$
and
$\bar C^{(\prime)0}_2+2\bar P^{(\prime)u0}-\frac{1}{3}\bar P^{(\prime)uC0}_{EW,2}$ contributions, respectively,
and
\begin{eqnarray}
\frac{P^{(\prime)c}}{P^{(\prime)c0}}=0.99 e^{i 20.0^\circ}+0.01 e^{-i 46.8^\circ}+0.15 e^{-i 94.6^\circ}
=0.94 e^{i 11.2^\circ},
\end{eqnarray}
where the terms on the right-hand side of the first equality are from the $P^{(\prime)c0}$, $P^{(\prime)c0, (\prime)cC0}_{EW}$
and
$2\bar P^{(\prime)c0}-\frac{1}{3}\bar P^{(\prime)cC0}_{EW,2}$ contributions, respectively.
It is clear that $T^{(\prime)0}$ and $C^{(\prime)0}$ only contribute to $P^{(\prime)u}$ (via the annihilation rescattering $r'_a$)
and $|P^{(\prime)u}|$ is enhanced by about a factor of 2.
On the other hand $P^{(\prime)c}$ is only slightly affected by rescattering and is still close to the original $P^{(\prime)c0}$.
The results are shown in Table~\ref{tab:ucpenguinsDS0-1}.
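Both phasor sums above can be cross-checked numerically; a minimal Python sketch (not part of the original analysis) with the term values read off Eq. (\ref{eq: PuPu0}) and the $c$-penguin equation:

```python
import cmath
import math

def phasor(r, deg):
    """Return r * exp(i*deg) with the phase given in degrees."""
    return cmath.rect(r, math.radians(deg))

# P^u/P^u0: P^u0, T0, C0, electroweak-penguin and bar-combination terms, in order
ratio_Pu = (phasor(0.99, 20.0) + phasor(1.53, -70.2) + phasor(0.41, 49.4)
            + phasor(0.01, -56.1) + phasor(0.10, -2.3))

# P^c/P^c0: P^c0, electroweak-penguin and bar-combination terms, in order
ratio_Pc = phasor(0.99, 20.0) + phasor(0.01, -46.8) + phasor(0.15, -94.6)

print(abs(ratio_Pu), math.degrees(cmath.phase(ratio_Pu)))  # close to 1.99 and -23.6
print(abs(ratio_Pc), math.degrees(cmath.phase(ratio_Pc)))  # close to 0.94 and 11.2
```

The check makes the asymmetry explicit: the $T^{(\prime)0}$ term (magnitude $1.53$) roughly doubles $|P^{(\prime)u}|$, while no comparable term feeds $P^{(\prime)c}$.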
It is useful to note that the ratio of the $u$-penguin and $c$-penguin in the $\Delta S=0$ process before rescattering is expected to be proportional to the CKM factors, giving
\begin{eqnarray}
\bigg|\frac{P^u}{P^c}\bigg|\simeq \bigg|\frac{V_{ub} V^*_{ud}}{V_{cb} V^*_{cd}}\bigg|\simeq 0.38.
\end{eqnarray}
This estimate is close to the ratio $|P^{u0}/P^{c0}|=3.51/10.09\simeq0.35$ obtained using the $P^{u0}$ and $P^{c0}$ shown in Table~\ref{tab:ucpenguinsDS0-1}.
The CKM ratio implies that the $u$-penguin and the $c$-penguin are not as hierarchical as in the $\Delta S=-1$ case.
Furthermore, when rescattering is turned on, the $u$-penguin and $c$-penguin receive different contributions, as only $P^u$ receives a contribution fed from $T^0$,
see Eq.~(\ref{eq: PuPu0}),
and, consequently, the above ratio is enhanced to $6.98/9.49\simeq 0.74$ (see Table~\ref{tab:ucpenguinsDS0-1}).
These will affect the CP asymmetries of $\Delta S=0$ modes to be discussed later.
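The numbers quoted in this paragraph follow from simple ratios; the sketch below reproduces them, assuming illustrative PDG-like CKM magnitudes (the specific values are not given in the text and are used here only as an approximation):

```python
# Illustrative, PDG-like CKM magnitudes (assumed values, rough check only)
Vub, Vud, Vcb, Vcd = 0.0037, 0.974, 0.0418, 0.224
ckm_ratio = (Vub * Vud) / (Vcb * Vcd)

# |P^u/P^c| before and after rescattering, from the table values quoted in the text
before = 3.51 / 10.09
after = 6.98 / 9.49

print(ckm_ratio, before, after)  # close to 0.38, 0.35 and 0.74
```

The pre-rescattering ratio tracks the CKM expectation, while rescattering roughly doubles it.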
We now turn to $PA^{(\prime)}$.
Similarly we decompose $PA^{(\prime)}$ into $PA^{(\prime)u}+PA^{(\prime)c}$ and from Eq.~(\ref{eq: delta TA1}) we have
\begin{eqnarray}
PA^{(\prime)u}&=&
\frac{1}{3}(3+3ir'_0
- ir'_e +16 ir'_a +12 ir'_t) PA^{(\prime)u0}
+ i r'_t T^{(\prime)0}
+\frac{1}{9}(2 i r'_e +4 i r'_a -3 i r'_t) C^{(\prime)0}
{\nonumber}\\
&&
+\frac{2}{9}(i r'_e+11 ir'_a+12 ir'_t) E^{(\prime)0}
+\frac{2}{9}( ir'_e+11 ir'_a +12 ir'_t ) P^{(\prime)u0}
{\nonumber}\\
&&
+\bigg(-\frac{1}{3} i r'_t P^{(\prime)u0}_{EW}
+\frac{1}{27}i(-2 r'_e -4 r'_a +3 r'_t) P^{(\prime)uC0}_{EW}
-\frac{2}{27} i (r'_e +11 r'_a +12 r'_t )P_{EW}^{(\prime)uE0}\bigg)
{\nonumber}\\
&&-\frac{2}{9}i(\bar r'_e+2\bar r'_a)
\bigg(\bar C^{(\prime)0}_1+2\bar E^{(\prime)0}+\bar C^{(\prime)0}_2+2\bar P^{(\prime)u0}-\frac{1}{3}\bar P^{(\prime)uC0}_{EW,2}
-\frac{1}{3}\bar P^{(\prime)uC0}_{EW,1} -\frac{2}{3}\bar P^{(\prime)uE0}_{EW}\bigg)
{\nonumber}\\
&&+\frac{1}{3}i(\hat r'_t +\frac{4\hat r'_a +2\hat r'_e}{3})
\bigg(\tilde C^{(\prime)0}+\tilde E^{(\prime)0}+\tilde P^{(\prime)u0} +\frac{3}{2}\widetilde {PA}^{(\prime)u0}-\frac{1}{3}\tilde P^{(\prime)uC0}_{EW} -\frac{1}{3}\tilde P^{(\prime)uE0}_{EW}\bigg),
{\nonumber}\\
PA^{(\prime)c}&=&
\frac{1}{3}(3+3ir'_0
- ir'_e +16 ir'_a +12 ir'_t) PA^{(\prime)c0}
+\frac{2}{9}( ir'_e+11 ir'_a +12 ir'_t ) P^{(\prime)c0}
{\nonumber}\\
&&+\bigg(-\frac{1}{3} i r'_t P^{(\prime)c0}_{EW}
+\frac{1}{27}i(-2 r'_e -4 r'_a +3 r'_t) P^{(\prime)cC0}_{EW}
-\frac{2}{27} i (r'_e +11 r'_a +12 r'_t )P_{EW}^{(\prime)cE0}
\bigg)
{\nonumber}\\
&&-\frac{2}{9}i(\bar r'_e+2\bar r'_a)
\bigg(2\bar P^{(\prime)c0}-\frac{1}{3}\bar P^{(\prime)cC0}_{EW,2}
-\frac{1}{3}\bar P^{(\prime)cC0}_{EW,1} -\frac{2}{3}\bar P^{(\prime)cE0}_{EW}\bigg)
{\nonumber}\\
&&+\frac{1}{3}i(\hat r'_t +\frac{4\hat r'_a +2\hat r'_e}{3})
\bigg(\tilde P^{(\prime)c0} +\frac{3}{2}\widetilde {PA}^{(\prime)c0}-\frac{1}{3}\tilde P^{(\prime)cC0}_{EW}
-\frac{1}{3}\tilde P^{(\prime)cE0}_{EW}\bigg).
\end{eqnarray}
Using these formulas and the best fit parameters, we obtain
\begin{eqnarray}
\frac{PA^{(\prime)u}}{PA^{(\prime)u0}}
&=&0.94 e^{i 2.2^\circ}
+22.39 e^{-i67.4^\circ}
+6.17 e^{i 53.7^\circ}
+0.78 e^{i 91.1^\circ}
{\nonumber}\\
&&
+2.31 e^{i 96.8^\circ}
+0.09 e^{-i 47.6^\circ}
+1.91 e^{i 28.5^\circ}
+1.52 e^{-i 99.7^\circ}
{\nonumber}\\
&=&19.55 e^{-i 42.7^\circ},
\label{eq: PAu/PAu0}
\end{eqnarray}
where the terms on the right-hand side of the first equality are from the
$PA^{(\prime)u0}$,
$T^{(\prime)0}$,
$C^{(\prime)0}$,
$E^{(\prime)0}$,
$P^{(\prime)u0}$,
$P^{(\prime)u0, (\prime)uC0,(\prime)uE0}_{EW}$,
$\bar C^{(\prime)0}_1+2\bar E^{(\prime)0}+\bar C^{(\prime)0}_2+2\bar P^{(\prime)u0}-\frac{1}{3}\bar P^{(\prime)uC0}_{EW,2}
-\frac{1}{3}\bar P^{(\prime)uC0}_{EW,1} -\frac{2}{3}\bar P^{(\prime)uE0}_{EW}$
and
$\tilde C^{(\prime)0}+\tilde E^{(\prime)0}+\tilde P^{(\prime)u0} +\frac{3}{2}\widetilde {PA}^{(\prime)u0}-\frac{1}{3}\tilde P^{(\prime)uC0}_{EW} -\frac{1}{3}\tilde P^{(\prime)uE0}_{EW}$
contributions, respectively.
Note that $|PA^{(\prime)u}|$ is enhanced by a factor of nearly 20,
and the main contributions are from $T^{(\prime)0}$, $C^{(\prime)0}$ and $P^{(\prime)u0}$ terms via the total annihilation rescattering $r'_t$, the annihilation $r'_a$ and total annihilation $r'_t$ rescatterings, respectively.
In particular, the enhancement from $T^{(\prime)0}$ via $r'_t$ is the most prominent one.
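The eight-term sum in Eq. (\ref{eq: PAu/PAu0}) can be verified numerically as well; a minimal sketch with the term values read off the equation:

```python
import cmath
import math

def phasor(r, deg):
    """Return r * exp(i*deg) with the phase given in degrees."""
    return cmath.rect(r, math.radians(deg))

# PA^u/PA^u0: the eight contributions listed in the text, in order
terms = [(0.94, 2.2), (22.39, -67.4), (6.17, 53.7), (0.78, 91.1),
         (2.31, 96.8), (0.09, -47.6), (1.91, 28.5), (1.52, -99.7)]
ratio_PAu = sum(phasor(r, d) for r, d in terms)

print(abs(ratio_PAu), math.degrees(cmath.phase(ratio_PAu)))  # close to 19.55 and -42.7
```

The $T^{(\prime)0}$ term, with magnitude $22.39$, is clearly the dominant source of the enhancement.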
Similarly we have
\begin{eqnarray}
\frac{PA^{(\prime)c}}{PA^{(\prime)c0}}
=0.94 e^{i 2.2^\circ}
+2.51 e^{i 88.2^\circ}
+0.09 e^{-i 47.0^\circ}
+1.26 e^{-i 97.3^\circ}
+1.41 e^{i 117.7^\circ}
=2.49 e^{i 84.0^\circ},
\label{eq: PAc/PAc0}
\end{eqnarray}
where the terms on the right-hand side of the first equality are from the $PA^{(\prime)c0}$,
$P^{(\prime)c0}$,
$P^{(\prime)c0, (\prime)cC0,(\prime)cE0}_{EW}$,
$2\bar P^{(\prime)c0}-\frac{1}{3}\bar P^{(\prime)cC0}_{EW,2}
-\frac{1}{3}\bar P^{(\prime)cC0}_{EW,1} -\frac{2}{3}\bar P^{(\prime)cE0}_{EW}$
and
$\tilde P^{(\prime)c0} +\frac{3}{2}\widetilde {PA}^{(\prime)c0}-\frac{1}{3}\tilde P^{(\prime)cC0}_{EW} -\frac{1}{3}\tilde P^{(\prime)cE0}_{EW}$
contributions, respectively. Note that $|PA^{(\prime)c}|$ is enhanced by a factor of $2.5$, while the main contribution is from the $P^{(\prime)c0}$ term
via the annihilation $r'_a$ and total annihilation $r'_t$ rescatterings. The effect of rescattering in $PA^{(\prime)c}$ is not as prominent as in the $PA^{(\prime)u}$ case.
We see that in the presence of rescattering, the resulting $|PA^u|$ is even greater than $|PA^c|$, while $PA^{(\prime)u}$ can no longer be neglected (see Table~\ref{tab:ucpenguinsDS0-1}).
The above observations can shed light on the results in the following discussions.
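As with the $u$-penguin case, the five-term sum in Eq. (\ref{eq: PAc/PAc0}) can be cross-checked numerically (an illustrative sketch, with the term values read off the equation):

```python
import cmath
import math

def phasor(r, deg):
    """Return r * exp(i*deg) with the phase given in degrees."""
    return cmath.rect(r, math.radians(deg))

# PA^c/PA^c0: the five contributions listed in the text, in order
terms = [(0.94, 2.2), (2.51, 88.2), (0.09, -47.0), (1.26, -97.3), (1.41, 117.7)]
ratio_PAc = sum(phasor(r, d) for r, d in terms)

print(abs(ratio_PAc), math.degrees(cmath.phase(ratio_PAc)))  # close to 2.49 and 84.0
```

Here the largest term is the $P^{(\prime)c0}$ contribution of magnitude $2.51$, consistent with the milder enhancement of $|PA^{(\prime)c}|$.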
\subsection{Numerical results for decay rates and CP asymmetries}
In this part we will present the numerical results on
rates in $\overline B {}^0$ and $B^-$ decays,
direct CP violation in $\overline B {}^0$ and $B^-$ decays,
rates and direct CP asymmetries in $\overline B{}^0_s$ decays,
and
time-dependent CP violations in $\overline B{}^0$ and $\overline B{}^0_s$ decays.
\subsubsection{Rates in $\overline B {}^0$ and $B^-$ Decays}
In Table~\ref{tab:table-br}, we show the CP-averaged rates of $\overline B{}^0,B^-\to PP$ decays. In the table, Fac and Res denote
the factorization (without rescattering) and the rescattering results, respectively.
To see the effect of rescattering, we also show, in parentheses, the results obtained from the rescattering solution but with all rescattering phases turned off, i.e. with rescattering turned off.
In the table the contributions from various modes to $\chi^2_{\rm min}$ in the best fitted solutions
are also shown.
\begin{table}[t!]
\caption{ \label{tab:table-br}
\small Branching ratios of various $\overline B{}_{u,d}\to PP$ modes in units of $10^{-6}$.
Fac and Res denote factorization and rescattering
results, respectively.
Experimental results are taken from \cite{HFAG, PDG}.
Contributions to $\chi^2_{\rm min}$ from the best fitted solutions
are also shown. The values in parentheses are the results from the rescattering solution, but with all rescattering phases turned off.}
\begin{ruledtabular}
\begin{tabular}{lccccc}
Mode
&Exp
&Fac
&Res
& $\chi^{2\,\rm (Fac)}_{\rm min}$
& $\chi^{2\,\rm (Res)}_{\rm min}$
\\
\hline
$\overline B{}^0\to K^-\pi^+$
& $19.57^{+0.53}_{-0.52}$
& $19.3_{-0.3}^{+0.3}$
& $20.7_{-0.3}^{+0.3}$ (23.1)
& 0.2
& 4.7 (44.7)
\\
$\overline B {}^0\to \overline K {}^0\pi^0$
& $9.93\pm0.49$
& $8.5\pm0.1$
& $9.6_{-0.1}^{+0.2}$ (10.7)
& 8.2
& 0.4 (2.4)
\\
$\overline B {}^0\to \overline K {}^0\eta$
& $1.23_{-0.24}^{+0.27}$
& $1.3\pm0.1$
& $1.6\pm0.1$ (1.6)
& 0.0
& 1.6 (1.8)
\\
$\overline B {}^0\to \overline K {}^0\eta'$
& $66.1\pm3.1$
& $70.0\pm1.2$
& $68.3_{-1.4}^{+2.6}$ (64.6)
& 1.6
& 0.5 (0.2)
\\
\hline
$B^-\to \overline K{}^0\pi^-$
& $23.79\pm0.75$
& $21.1\pm0.3$
& $22.5\pm0.3$ (25.4)
& 11.8
& 3.1 (4.5)
\\
$B^-\to K^-\pi^0$
& $ 12.94_{-0.51}^{+0.52} $
& $12.1\pm0.1$
& $12.3\pm0.2$ (13.8)
& 2.7
& 1.7 (2.7)
\\
$B^-\to K^-\eta$
& $2.36_{-0.21}^{+0.22}$
& $1.7\pm0.1$
& $2.1_{-0.2}^{+0.1}$ (2.1)
& 8.2
& 1.5 (1.1)
\\
$B^-\to K^-\eta'$
& $71.1\pm 2.6$
& $74.7\pm1.3$
& $71.4_{-1.5}^{+2.9}$ (66.5)
& 1.9
& 0.0 (3.1)
\\
\hline
$B^-\to \pi^-\pi^0$
& $5.48^{+0.35}_{-0.34}$
& $4.7\pm0.1$
& $4.9_{-0.1}^{+0.2}$ (4.9)
& 5.7
& 2.9 (2.9)
\\
$B^-\to K^0 K^-$
& $1.32\pm0.14$
& $1.43\pm 0.03$
& $1.31\pm0.03$ (1.5)
& 0.6
& 0.0 (1.8)
\\
$B^-\to \pi^-\eta$
& $4.02\pm0.27$
& $3.4\pm0.1$
& $4.2\pm0.1$ (4.3)
& 4.1
& 0.3 (0.8)
\\
$B^-\to \pi^-\eta'$
& $2.7^{+0.5}_{-0.4}$
& $2.9\pm0.1$
& $3.5\pm0.1$ (3.3)
& 0.1
& 3.3 (1.5)
\\
\hline
$\overline B {}^0\to \pi^+\pi^-$
& $5.10\pm 0.19$
& $6.2\pm0.1$
& $5.3\pm0.1$ (6.0)
& 36.1
& 0.7 (23.7)
\\
$\overline B {}^0\to \pi^0 \pi^0$
& $1.59\pm0.26$\footnotemark[1]
& $0.98_{-0.03}^{+0.05}$
& $1.09_{-0.05}^{+0.06}$ (0.82)
& 5.6
& 3.7 (9.7)
\\
$\overline B {}^0\to \eta\eta$
& $0.76\pm0.29$
& $0.28\pm0.01$
& $0.41_{-0.06}^{+0.04}$ (0.11)
& 2.8
& 1.5 (5.1)
\\
$\overline B {}^0\to \eta \eta'$
& $0.5\pm0.4(<1.2)$
& $0.32_{-0.01}^{+0.02}$
& $0.30_{-0.04}^{+0.05}$ (0.26)
& 0.2
& 0.2 (0.4)
\\
$\overline B {}^0\to \eta'\eta'$
& $0.6\pm0.6(<1.7)$
& $0.24\pm0.01$
& $0.40_{-0.12}^{+0.15}$ (0.08)
& 0.4
& 0.1 (0.7)
\\
$\overline B {}^0\to K^+ K^-$
& $0.084\pm0.024$
& $0.065\pm0.002$
& $0.100_{-0.007}^{+0.012}$ (0.03)
& 0.6
& 0.5 (4.3)
\\
$\overline B {}^0\to K^0\overline K^0$
& $1.21\pm0.16$
& $1.67\pm0.03$
& $1.19\pm0.03$ (1.21)
& 8.4
& 0.0 (0.0)
\\
$\overline B {}^0\to \pi^0 \eta$
& $0.41\pm0.17$
& $0.37\pm0.01$
& $0.36_{-0.00}^{+0.02}$ (0.41)
& 0.1
& 0.1 (0.0)
\\
$\overline B {}^0\to \pi^0\eta'$
& $1.2\pm0.6$\footnotemark[2]
& $0.52\pm0.02$
& $0.60\pm0.02$ (0.47)
& 1.3
& 1.0 (1.5)
\\
\end{tabular}
\footnotetext[1]{An $S$ factor of 1.4 is included in the uncertainty.}
\footnotetext[2]{Taken from PDG with an $S$ factor of 1.7 included in the uncertainty.}
\end{ruledtabular}
\end{table}
From the table, we see that, except for rates in $\overline B{}^0\to K^-\pi^+$, $\overline K{}^0\eta$ and $B^-\to
\pi^-\eta^{\prime}$
decays, the $\chi^2$ values in Res for the other modes are lower than the Fac ones.
In particular, the $\chi^2$ in the $\overline B{}^0\to \overline K{}^0\pi^0$, $\pi^+\pi^-$, $K^0\overline K{}^0$ and $B^-\to\overline K{}^0\pi^-$,
$K^-\eta$, $\pi^-\pi^0$, $\pi^-\eta$ rates are improved significantly, as Fac has difficulty fitting some of these rates well.
In fact, in Fac the $\chi^2$ in $\overline B{}^0\to\pi^+\pi^-$ is as large as $36.1$, while it is reduced to $0.7$ in Res.
We see that in each group the $\chi^2$ is improved in the presence of Res.
The total $\chi^2$ from these 21 ($=4+4+4+9$) modes is reduced from
$100.7(=10.1 + 24.7 + 10.6 + 55.3)$ to
$27.7(=7.2 + 6.3 + 6.4 + 7.8)$ (the breakdown can be found in Table~\ref{tab:chisquare} as well).
Overall, rescattering significantly improves the fit in this sector, especially in the last group, and reproduces all the measured $\overline B_{u,d}\to PP$ rates reasonably well.
Note that both Fac and Res can successfully reproduce the newly measured $\overline B{}^0\to \pi^0\eta$ and $ K^+K^-$ rates~\cite{Pal:2015ewa,Aaij:2016elb}.
On the other hand,
both Fac and Res results on the $\overline B{}^0\to\pi^0\pi^0$ rate have tension with the data,
while Res does somewhat better, as its $\chi^2$ ($=3.7$) is smaller than that of Fac ($=5.6$).
It should be noted that the uncertainty in the present data is still large
and it will be interesting to see the updated measurement.
Both the Fac and Res fits to the $B^-\to \pi^-\pi^0$ rate are smaller than the experimental result.
The $\chi^2$ from Fac for this mode is $5.7$, while the Res fit improves it to $2.9$ with a slightly larger rate,
but both results are in tension with the data.
We will investigate how rescattering improves the fit in the $\overline B {}^0_{d}\to \pi^+\pi^-, \pi^0\pi^0, K^+K^-$ and $B^-\to K^-\pi^0$ rates.
For simplicity we will concentrate on the dominant contributions to the decay amplitudes in the following discussion.
By neglecting the electroweak penguin contributions, the $\overline B{}^0\to\pi^+\pi^-$ amplitude in Eq.~(\ref{eq: TAgroup4}) can be expressed as
\begin{eqnarray}
A_{\overline B {}^0_{d}\to \pi^+\pi^-}
&\simeq&T+P+E+PA.
\label{eq: pipi TA}
\end{eqnarray}
Using the results in Sec.~\ref{subsec: FSITA}, we see that before and after rescattering we have (in units of $10^{-8}$ GeV)
\begin{eqnarray}
(A_{\overline B {}^0_{d}\to \pi^+\pi^-})^0
&\simeq& 2.58e^{-i 63.5^\circ}+0.89 e^{i8.6^\circ}+0.12 e^{i102.6^\circ}+0.08 e^{-i166.4^\circ}
{\nonumber}\\
&\simeq& 2.98 e^{-i 47.0^\circ}+0.14 e^{i 135.4^\circ}\simeq 2.84 e^{-i 47.1^\circ},
{\nonumber}\\
(A_{B {}^0_{d}\to \pi^+\pi^-})^0
&\simeq& 2.58e^{i 74.2^\circ}+0.99 e^{-i31.6^\circ}+0.12e^{-i119.7^\circ}+0.08 e^{i149.3^\circ}
{\nonumber}\\
&\simeq& 2.50 e^{i 51.8^\circ}+0.14 e^{-i 152.6^\circ}\simeq 2.38 e^{i 53.2^\circ},
{\nonumber}\\
(A_{\overline B {}^0_{d}\to \pi^+\pi^-})_{FSI}
&\simeq& 2.58e^{-i 63.5^\circ}+1.23e^{i34.3^\circ}+0.55e^{i71.0^\circ}+0.79 e^{-i116.2^\circ}
{\nonumber}\\
&\simeq& 2.71 e^{-i 36.8^\circ}+0.26 e^{-i 131.4^\circ}\simeq 2.70 e^{-i 42.4^\circ},
{\nonumber}\\
(A_{B {}^0_{d}\to \pi^+\pi^-})_{FSI}
&\simeq& 2.58e^{i 74.2^\circ}+0.64 e^{-i47.6^\circ}+0.55e^{-i152.2^\circ}+0.52 e^{-i1.6^\circ}
{\nonumber}\\
&\simeq& 2.31e^{i 60.6^\circ}+0.28e^{-i81.7^\circ}\simeq 2.10 e^{i 55.9^\circ},
\label{eq: pipiTA}
\end{eqnarray}
respectively,
where the four-term expressions are given in the order $T$, $P$, $E$ and $PA$, and the two-term expressions have
the first two terms ($T+P$) and the last two terms ($E+PA$) summed separately.
Before we proceed, we may compare the above estimates to our full numerical results,
where we have $(A_{\overline B {}^0_{d}\to \pi^+\pi^-})^0$, $(A_{B {}^0_{d}\to \pi^+\pi^-})^0$, $(A_{\overline B {}^0_{d}\to \pi^+\pi^-})_{FSI}$ and $(A_{B {}^0_{d}\to \pi^+\pi^-})_{FSI}$ given by
$2.86 e^{-i 47.1^\circ}$,
$2.36 e^{i 52.8^\circ}$,
$2.71 e^{-i40.8^\circ}$ and
$2.16 e^{i57.2^\circ}$ (in unit of $10^{-8}$ GeV), respectively, which are close to the above estimation.
Note that $T+P$ gives the dominant contribution, while $E+PA$ is sub-leading, and the two groups interfere destructively.
In the presence of rescattering, the sizes of the dominant parts, $T+P$, are reduced,
while the sizes of the destructive and sub-leading parts, $E+PA$, are enhanced,
resulting in a more effective destructive interference.
From this estimate we see that
the $\overline B {}^0_{d}\to \pi^+\pi^-$ rate is reduced by about $15\%$, bringing ${\cal B}(\overline B{}^0\to\pi^+\pi^-)\simeq 6\times 10^{-6}$ down to $\sim 5\times 10^{-6}$,
which agrees well with the data [$(5.1\pm 0.19)\times 10^{-6}$] shown in Table~\ref{tab:table-br}
and, consequently, the quality of the fit is improved significantly.
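The quoted $\sim 15\%$ reduction can be checked directly from the estimated amplitude magnitudes, assuming the CP-averaged rate is proportional to $(|A|^2+|\bar A|^2)/2$:

```python
# Amplitude magnitudes (in 10^-8 GeV) from the estimates in the text:
# 2.84/2.38 before rescattering, 2.70/2.10 after rescattering
before = (2.84**2 + 2.38**2) / 2   # CP-averaged |amplitude|^2, no rescattering
after = (2.70**2 + 2.10**2) / 2    # with rescattering

reduction = 1 - after / before
print(reduction)  # about 0.15, i.e. a ~15% reduction of the rate
```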
Similarly for $\overline B{}^0\to\pi^0\pi^0$ decays, we have
\begin{eqnarray}
\sqrt 2 A_{\overline B {}^0_{d}\to \pi^0 \pi^0}
&\simeq&-C+P+E+ PA,
\end{eqnarray}
which is close to the above $\overline B{}^0\to\pi^+\pi^-$ amplitudes, but with $T$ replaced by $-C$.
Before and after rescattering, we have (in units of $10^{-8}$ GeV)
\begin{eqnarray}
\sqrt2 (A_{\overline B {}^0_{d}\to \pi^0\pi^0})^0
&\simeq& 1.05e^{i 56.1^\circ}+0.89 e^{i8.6^\circ}+0.12 e^{i102.6^\circ}+0.08 e^{-i166.4^\circ}
{\nonumber}\\
&\simeq& 1.82e^{i37.9^\circ}+0.08 e^{-i166.4^\circ}\simeq 1.75 e^{i 38.9^\circ},
{\nonumber}\\
\sqrt 2 (A_{B {}^0_{d}\to \pi^0 \pi^0})^0
&\simeq& 1.05e^{-i 166.2^\circ}+0.99 e^{-i31.6^\circ}+0.12e^{-i119.7^\circ}+0.08 e^{i149.3^\circ}
{\nonumber}\\
&\simeq&
0.90e^{-i104.5^\circ}+0.08 e^{i149.3^\circ}\simeq 0.88 e^{-i 109.2^\circ},
{\nonumber}\\
\sqrt 2 (A_{\overline B {}^0_{d}\to \pi^0\pi^0})_{FSI}
&\simeq&1.05e^{i 56.1^\circ}+1.23e^{i34.3^\circ}+0.55e^{i71.0^\circ}+0.79 e^{-i116.2^\circ}
{\nonumber}\\
&\simeq& 2.73e^{i49.5^\circ}+0.79 e^{-i116.2^\circ}\simeq 1.97 e^{i 43.8^\circ},
{\nonumber}\\
\sqrt 2 (A_{B {}^0_{d}\to \pi^0\pi^0})_{FSI}
&\simeq&1.05e^{-i 166.2^\circ}+0.64 e^{-i47.6^\circ}+0.55e^{-i152.2^\circ}+0.52 e^{-i1.6^\circ}
{\nonumber}\\
&\simeq&1.45 e^{-i137.0^\circ}+0.52 e^{-i1.6^\circ}\simeq 1.14 e^{-i 118.3^\circ},
\end{eqnarray}
respectively,
where terms are given in the order of $-C$, $P$, $E$ and $PA$
and the expressions with the first three terms ($-C+P+E$) combined are also shown.
The above estimate is close to the values in the full numerical results, with $\sqrt2 (A_{\overline B {}^0_{d}\to \pi^0\pi^0})^0$, $\sqrt 2 (A_{B {}^0_{d}\to \pi^0 \pi^0})^0$, $\sqrt 2 (A_{\overline B {}^0_{d}\to \pi^0\pi^0})_{FSI}$ and $\sqrt 2 (A_{B {}^0_{d}\to \pi^0\pi^0})_{FSI}$
given by
$1.67 e^{i39.6^\circ}$,
$0.89 e^{-i 114.8^\circ}$,
$1.96 e^{i45.0^\circ}$ and
$1.08 e^{-i 126.0^\circ}$ in units of $10^{-8}$ GeV,
respectively.
In the above estimate the first three terms and the last term interfere destructively.
With Res, $P$ and $E$ are enhanced, giving a larger $-C+P+E$,
while the enhanced $PA$ can no longer be neglected,
producing a slightly larger decay amplitude and resulting in a 35\% enhancement of the rate,
which brings the rate up from ${\cal B}(\overline B{}^0\to\pi^0\pi^0)\simeq 0.8\times 10^{-6}$ to $\sim 1.1\times 10^{-6}$ as shown in Table~\ref{tab:table-br}.
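As a quick cross-check, the quoted 35\% figure follows directly from the CP-averaged squared moduli in the estimate above; a minimal sketch, with the numbers simply read off from the preceding equation:

```python
# CP-averaged |sqrt(2) A(pi0 pi0)|^2 before and after rescattering,
# using the Bbar and B moduli from the estimate above (units of 1e-8 GeV).
before = (1.75**2 + 0.88**2) / 2
after = (1.97**2 + 1.14**2) / 2
enhancement = after / before  # ~ 1.35, i.e. the quoted ~35% enhancement
```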
As noted previously, the rate is still smaller than the central value of the data, which, however, carries a large uncertainty.
For the newly observed $\overline B {}^0_{d}\to K^+ K^-$ mode, we note that, as shown in Table~\ref{tab:table-br}, rescattering enhances the rate by a factor of $0.100/0.03=3.33$.
It will be useful to examine this enhancement in detail.
From Tables~\ref{tab:TADS0} and \ref{tab:ucpenguinsDS0-1} and Eq.~(\ref{eq: TAgroup4}),
\begin{eqnarray}
A_{\overline B {}^0_{d}\to K^+ K^-}&=&E+PA^u+PA^c+\frac{1}{3} P_{EW}^A,
\end{eqnarray}
we have (in units of $10^{-8}$ GeV)
\begin{eqnarray}
(A_{\overline B {}^0_{d}\to K^+ K^-})^0
&\simeq& 0.119 e^{i102.6^\circ}+0.031 e^{-i 77.4^\circ}+0.081 e^{i171.4^\circ}+0.001 e^{i 13.6^\circ}
{\nonumber}\\
&\simeq& 0.139 e^{i 135.2^\circ},
{\nonumber}\\
(A_{B {}^0_{d}\to K^+ K^-})^0
&\simeq& 0.119 e^{-i119.7^\circ}+0.031 e^{i 60.3^\circ}+0.081 e^{i171.4^\circ}+0.001 e^{-i 30.7^\circ}
{\nonumber}\\
&\simeq& 0.139 e^{-i 152.4^\circ},
{\nonumber}\\
(A_{\overline B {}^0_{d}\to K^+ K^-} )_{FSI}
&\simeq& 0.546 e^{i 71.0^\circ}+0.598 e^{-i120.1^\circ}+0.202 e^{-i 104.6^\circ}+0.006 e^{-i24.7^\circ}
{\nonumber}\\
&\simeq& 0.261 e^{-i 130.1^\circ},
{\nonumber}\\
(A_{B {}^0_{d}\to K^+ K^-} )_{FSI}
&\simeq&0.546 e^{-i151.2^\circ}+0.552 e^{i17.1^\circ}+0.202 e^{-i 105.4^\circ}+0.006 e^{-i68.8^\circ}
{\nonumber}\\
&\simeq& 0.286 e^{-i 81.4^\circ},
\label{eq: B0KK}
\end{eqnarray}
for the decay amplitudes before and after rescattering,
where terms are given in the order of $E$, $PA^u$, $PA^c$ and $P_{EW}^A/3$.
Comparing the above estimate to the values in our full numerical result, we have
$0.200 e^{i 135.0^\circ}$,
$0.200 e^{-i 152.6^\circ}$,
$0.332 e^{-i 139.8^\circ}$, and
$0.350 e^{-i 87.2^\circ}$ for
$(A_{\overline B {}^0_{d}\to K^+ K^-})^0$, $(A_{B {}^0_{d}\to K^+ K^-})^0 $, $(A_{\overline B {}^0_{d}\to K^+ K^-} )_{FSI}$
and $(A_{B {}^0_{d}\to K^+ K^-} )_{FSI}$ in units of $10^{-8}$ GeV, respectively.
The discrepancy is mainly from SU(3) breaking effects, which are not included in the above equation.
In fact, by scaling the numbers in Eq. (\ref{eq: B0KK}) by $(f_K/f_\pi)^2$, the sizes become
0.199, 0.199, 0.373 and 0.409,
which now agree better with the above values.
From the above equation, we see that $E$, $PA^u$, $PA^c$ and $P^A_{EW}$ are all enhanced.
Note that $E$ interferes destructively with $PA^u$ and $PA^c$ in $A_{\overline B {}^0_{d}\to K^+ K^-}$,
while $PA^u$ interferes destructively with $E$ and $PA^c$ in $A_{B {}^0_{d}\to K^+ K^-}$.
The result is an enhancement factor of $3.8$ in the CP-averaged rate,
which is close to our numerical result ($0.100/0.03=3.33$) as shown in Table~\ref{tab:table-br}.
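The quoted enhancement factor can be verified directly from the moduli in Eq.~(\ref{eq: B0KK}); a minimal numerical sketch, with the values read off from that equation:

```python
# CP-averaged |A(Bbar0 -> K+ K-)|^2 before and after rescattering,
# using the moduli from Eq. (B0KK) (units of 1e-8 GeV).
before = (0.139**2 + 0.139**2) / 2
after = (0.261**2 + 0.286**2) / 2
enhancement = after / before  # ~ 3.9, close to the quoted factor of 3.8
```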
We will return to this mode again in the discussion of direct CP asymmetry.
Finally we turn to the $B^-\to\overline K^0\pi^-$ decay. From Eq.~(\ref{eq: TAgroup2}) we have
\begin{eqnarray}
A_{B^-\to \overline K^0\pi^-}
&=&A'+P^{\prime u}+P^{\prime c}+\frac{1}{3}(-P_{EW}^{\prime\,C}+2P_{EW}^{\prime\, E}),
\end{eqnarray}
which gives, before and after rescattering (in units of $10^{-8}$ GeV),
\begin{eqnarray}
(A_{B^-\to \overline K^0\pi^-})^0
&\simeq&
0.01 e^{-i77.4^\circ}
+0.08 e^{i 108.3^\circ}
+4.37 e^{i 168.6^\circ}
+0.08 e^{-i53.3^\circ}
\simeq
4.35 e^{i168.4^\circ},
{\nonumber}\\
(A_{B^+\to K^0\pi^+})^0
&\simeq&
0.01 e^{i60.3^\circ}
+0.08 e^{-i 113.9^\circ}
+4.37 e^{i 168.6^\circ}
+0.08 e^{-i51.2^\circ}
\simeq4.33 e^{i170.2^\circ},
{\nonumber}\\
(A_{B^-\to \overline K^0\pi^-})_{FSI}
&\simeq&
0.11 e^{-i113.8^\circ}
+0.16 e^{i 84.7^\circ}
+4.11 e^{i 179.8^\circ}
+0.13 e^{-i50.3^\circ}
\simeq 4.06 e^{-i 179.7^\circ},
{\nonumber}\\
(A_{B^+\to K^0\pi^+})_{FSI}
&\simeq&
0.11 e^{i23.9^\circ}
+0.16 e^{-i 137.6^\circ}
+4.11 e^{i 179.8^\circ}
+0.13 e^{-i48.2^\circ}
\simeq4.05 e^{-i 178.0^\circ},
\label{eq: K0pi estimation}
\end{eqnarray}
respectively,
where terms are given in the order of $A'$, $P^{\prime u}$, $P^{\prime c}$ and $(-P_{EW}^{\prime\,C}+2P_{EW}^{\prime\, E})/3$.
Note that in our numerical result, we have
$5.17 e^{i 167.2^\circ}$,
$5.29 e^{i171.2^\circ}$,
$4.86 e^{i 179.6^\circ}$ and
$4.98 e^{i 175.7^\circ}$, for
$(A_{B^-\to \overline K^0\pi^-})^0$, $(A_{B^+\to K^0\pi^+})^0$, $(A_{B^-\to \overline K^0\pi^-})_{FSI}$ and $(A_{B^+\to K^0\pi^+})_{FSI}$ in unit of $10^{-8}$ GeV, respectively.
By scaling the values in Eq.~(\ref{eq: K0pi estimation}) by $f_K/f_\pi$, the sizes become
$5.20$, $5.17$, $4.85$ and $4.83$, respectively,
which are close to the numerical results.
In the full numerical result, either in the presence of rescattering or without it, the size of $A_{B^+\to K^0\pi^+}$ is slightly greater than that of $A_{B^-\to \overline K^0\pi^-}$,
while it is the other way around in the estimate.
In fact, in the numerical result, we have
$P^{\prime u}=0.10 e^{i 107.8^\circ}$ and $P^{\prime c}=5.19 e^{i167.5^\circ}$ in $(A_{B^-\to \overline K^0\pi^-})^0$ and
$P^{\prime u}=0.10 e^{-i 114.4^\circ}$ and $P^{\prime c}=5.33 e^{i169.6^\circ}$ in $(A_{B^+\to K^0\pi^+})^0$.
The latter $|P^{\prime c}|$ in $(A_{B^+\to K^0\pi^+})^0$ is greater than the one in $(A_{B^-\to \overline K^0\pi^-})^0$. The difference can be traced to the non-vanishing first Gegenbauer moment of the kaon wave function ($\alpha_1^{\bar K}=-\alpha_1^K=0.2$), which changes sign in going from $\overline K$ to $K$.
This will affect the direct CP asymmetry and such a feature is absent in the above estimation.
From Eq.~(\ref{eq: K0pi estimation}) we see that $A'+\frac{1}{3}(-P_{EW}^{\prime\,C}+2P_{EW}^{\prime\, E})$ interferes destructively with
the dominant $P^{\prime c}$ term.
Since the sizes of $A'$ and $\frac{1}{3}(-P_{EW}^{\prime\,C}+2P_{EW}^{\prime\, E})$ are enhanced,
while the size of $P^{\prime c}$ is slightly reduced,
the size of the total amplitude is reduced under rescattering, resulting in a $13\%$ reduction of the averaged rate,
which brings the rate down from ${\cal B}(B^-\to\overline K{}^0\pi^-)\simeq 25\times 10^{-6}$ to
$\sim 22\times 10^{-6}$, closer to the data [$(23.79\pm0.75)\times 10^{-6}$] as shown in Table~\ref{tab:table-br}.
\subsubsection{Direct CP Violations in $\overline B {}^0$
and $B^-$ Decays}
\begin{table}[t!]
\caption{ \label{tab:table-acp} Same as Table~\ref{tab:table-br},
except for the direct CP asymmetries ${\cal A}$ (in units of percent)
in various $\overline B{}_{u,d}\to PP$ modes. }
\begin{ruledtabular}
\begin{tabular}{lccccc}
Mode
&Exp
&Fac
&Res
&$\chi^{2\,\rm(Fac)}_{\rm min}$
&$\chi^{2\,\rm(Res)}_{\rm min}$
\\
\hline
$\overline B{}^0\to K^-\pi^+$
& $-8.2\pm0.6$
& $-8.0\pm0.1$
& $-8.2\pm0.3$ $(-9.5)$
& 0.1
& 0.0 (4.8)
\\
$\overline B {}^0\to \overline K {}^0\pi^0$
& $-1\pm13$\footnotemark[1]
& $-15.2\pm0.6$
& $-14.3\pm1.0$ $(-8.5)$
& 1.2
& 1.0 (0.3)
\\
$\overline B {}^0\to \overline K {}^0\eta$
& --
& $-29.3_{-1.9}^{+1.3}$
& $-27.7_{-2.2}^{+1.4}$ $(-17.5)$
& --
& --
\\
$\overline B {}^0\to \overline K {}^0\eta'$
& $5\pm4$
& $7.8\pm0.2$
& $6.1_{-0.9}^{+1.3}$ $(6.3)$
& 0.5
& 0.1 (0.1)
\\
\hline
$B^-\to \overline K{}^0\pi^-$
& $-1.7\pm1.6$
& $-3.5\pm0.1$
& $-2.4_{-0.4}^{+0.6}$ $(-2.3)$
& 1.2
& 0.2 (0.1)
\\
$B^-\to K^-\pi^0$
& $4.0\pm 2.1 $
& $4.0\pm0.4$
& $4.9_{-1.1}^{+0.8}$ $(-1.9)$
& 0.0
& 0.2 (7.8)
\\
$B^-\to K^-\eta$
& $-37\pm 8$
& $-42.0_{-3.7}^{+2.5}$
& $-33.9\pm2.6$ $(-10.8)$
& 0.4
& 0.1 (10.7)
\\
$B^-\to K^-\eta'$
& $1.3\pm 1.7 $
& $4.5_{-0.1}^{+0.2}$
& $1.8_{-0.7}^{+1.6}$ $(2.7)$
& 3.6
& 0.1 (0.7)
\\
\hline
$B^-\to \pi^-\pi^0$
& $2.6\pm3.9$
& $-0.11\pm0.00$
& $-0.09\pm0.01$ $(-0.09)$
& 0.5
& 0.5 (0.5)
\\
$B^-\to K^0 K^-$
& $-8.7\pm10$
& $-5.7\pm0.1$
& $-4.8_{-5.3}^{+3.8}$ $(-8.8)$
& 0.1
& 0.2 (0.0)
\\
$B^-\to \pi^-\eta$
& $-14\pm5$
& $-11.9_{-0.7}^{+0.8}$
& $-10.3_{-1.6}^{+1.7}$ $(0.8)$
& 0.2
& 0.5 (8.7)
\\
$B^-\to \pi^-\eta'$
& $6\pm15$
& $37.8_{-1.3}^{+0.8}$
& $43.6_{-2.4}^{+2.0}$ $(34.6)$
& 4.5
& 6.3 (3.6)
\\
\hline
$\overline B {}^0\to \pi^+\pi^-$
& $31\pm5$
& $14.0\pm0.4$
& $22.5_{-1.0}^{+0.9}$ $(19.1)$
& 11.5
& 2.9 (5.7)
\\
$\overline B {}^0\to \pi^0 \pi^0$
& $34\pm22$
& $79.1_{-1.5}^{+1.2}$
& $53.7_{-7.1}^{+3.3}$ $(55.9)$
& 4.2
& 0.8 (1.0)
\\
$\overline B {}^0\to \eta\eta$
& --
& $-64.5_{-1.4}^{+1.5}$
& $-31.1_{-5.5}^{+7.2}$ $(-73.5)$
& --
& --
\\
$\overline B {}^0\to \eta \eta'$
& --
& $-35.6\pm1.1$
& $-29.8_{-8.0}^{+9.4}$ $(-52.1)$
& --
& --
\\
$\overline B {}^0\to \eta'\eta'$
& --
& $-20.0\pm0.4$
& $-7.6_{-19.8}^{+19.2}$ $(-12.9)$
& --
& --
\\
$\overline B {}^0\to K^+ K^-$
& --
& $0$
& $-5.2_{-5.0}^{+5.2}$ (0)
& --
& --
\\
$\overline B {}^0\to K^0\overline K^0$
& $-6\pm36$\footnotemark[2]
& $-8.4\pm 0.1$
& $-41.8_{-3.9}^{+2.6}$ $(-10.0)$
& 0.0
& 1.0 (0.0)
\\
$\overline B {}^0\to \pi^0 \eta$
& --
& $-45.6_{-1.7}^{+1.8}$
& $-40.9_{-3.6}^{+4.6}$ $(-36.3)$
& --
& --
\\
$\overline B {}^0\to \pi^0\eta'$
& --
& $-30.4_{-0.5}^{+0.9}$
& $-8.8\pm1.4$ $(-8.8)$
& --
& --
\\
\end{tabular}
\footnotetext[1]{An $S$ factor of 1.4 is included in the uncertainty.}
\footnotetext[2]{An $S$ factor of 1.4 is included in the uncertainty.}
\end{ruledtabular}
\end{table}
Results for direct CP asymmetries (${\cal A}$) in $\overline B{}_{u,d}\to PP$
decays are summarized in Table~\ref{tab:table-acp}.
The Fac and Res fits give similar results in the first group of data, namely the direct CP asymmetries in
$\overline B{}^0\to K^-\pi^+$, $\overline K{}^0\pi^0$ and $\overline K{}^0\eta'$ decays.
Both can explain the so-called $K\pi$ CP puzzle by producing a positive ${\cal A}(B^-\to K^-\pi^0)$ and a negative ${\cal A}(\overline B{}^0\to K^-\pi^+)$,
but Res gives a slightly larger ${\cal A}(B^-\to K^-\pi^0)$.
Fac fits better than Res in the $B^-\to \pi^-\eta'$ and $\overline B{}^0\to K^0\overline K{}^0$ modes,
while Res fits better than Fac in the $B^-\to \overline K{}^0\pi^-$, $K^-\eta'$, $\overline B{}^0\to\pi^+\pi^-$ and $\pi^0\pi^0$ modes.
In particular, the $\chi^2$ in ${\cal A}$ of $\overline B{}^0\to\pi^+\pi^-$ is reduced significantly from 11.5 (Fac) to 2.9 (Res).
Overall, the Res fit in this sector (see also Table II) is better than the Fac one, as the corresponding $\chi^2$ values are
13.9 ($=1.1+0.6+7.5+4.7$) and 29.2 ($=1.8+5.2+6.5+15.7$), respectively.
It is interesting to see in detail how rescattering solves the so-called $K\pi$ CP puzzle, where
experimental data give $\Delta{\cal A}\equiv{\cal A}(K^-\pi^0)-{\cal A}(K^-\pi^+)=(12.2\pm2.2)\%$.
The $\overline B{}^0\to K^-\pi^+$ and $B^-\to K^- \pi^0$ decay amplitudes can be expressed as
\begin{eqnarray}
A_{\overline B {}^0_{d}\to K^-\pi^+}
&=&T'+P'+\frac{1}{3}(2 P_{EW}^{\prime\,C}-P_{EW}^{\prime\,E}),
{\nonumber}\\
\sqrt 2A_{B^-\to K^- \pi^0}
&=&T'+C'+A'+P'+ P'_{EW}+\frac{2}{3} P_{EW}^{\prime\,C}+\frac{2}{3} P_{EW}^{\prime\, E}.
\end{eqnarray}
It is useful to note that these two amplitudes are related by the following relation:
\begin{eqnarray}
\sqrt 2A_{B^-\to K^- \pi^0}
&=&A_{\overline B {}^0_{d}\to K^-\pi^+}
+C'+A'
+P'_{EW}
+P^{\prime E}_{EW}.
\end{eqnarray}
Using the values in Table~\ref{tab:TADS-1} and the above equation, we have, before and after Res (in units of $10^{-8}$ GeV, with terms in the order of the above equation),
\begin{eqnarray}
\sqrt 2(A_{B^-\to K^- \pi^0})^0
&\simeq&
4.12e^{i173.0^\circ}
+0.24 e^{-i 123.9^\circ}
+0.01 e^{-i77.4^\circ}
+0.40 e^{-i 174.2^\circ}
+0.05 e^{-i32.7^\circ}
{\nonumber}\\
&\simeq& 4.58 e^{i177.2^\circ},
{\nonumber}\\
\sqrt 2(A_{B^+\to K^+ \pi^0})^0
&\simeq&
4.45e^{i160.8^\circ}
+0.24 e^{i 13.8^\circ}
+0.01 e^{i60.3^\circ}
+0.40 e^{-i 172.1^\circ}
+0.05 e^{-i30.6^\circ}
{\nonumber}\\
&\simeq& 4.55 e^{i161.5^\circ},
{\nonumber}\\
\sqrt 2(A_{B^-\to K^- \pi^0})_{FSI}
&\simeq&
3.90e^{-i 176.4^\circ}
+0.24 e^{-i 123.9^\circ}
+0.11 e^{-i 113.8^\circ}
+0.40 e^{-i 174.2^\circ}
+0.13 e^{-i 40.3^\circ}
{\nonumber}\\
&\simeq& 4.43 e^{-i171.3^\circ},
{\nonumber}\\
\sqrt 2(A_{B^+\to K^+ \pi^0})_{FSI}
&\simeq&
4.18e^{i171.9^\circ}
+0.24 e^{i 13.8^\circ}
+0.11 e^{i 23.9^\circ}
+0.40 e^{-i 172.1^\circ}
+0.13 e^{-i38.2^\circ}
{\nonumber}\\
&\simeq& 4.14 e^{i172.2^\circ},
\label{eq: Kpi0ACP}
\end{eqnarray}
respectively.
In our full numerical results, for $\overline B{}^0\to K^-\pi^+$ decay,
we have
$4.91 e^{i 172.0^\circ}$,
$5.40 e^{i 161.8^\circ}$,
$4.68 e^{-i 176.8^\circ}$ and
$5.08 e^{i 174.0^\circ}$ for
$(A_{\overline B {}^0_{d}\to K^-\pi^+})^0$, $(A_{B {}^0_{d}\to K^+\pi^-})^0$, $(A_{\overline B {}^0_{d}\to K^-\pi^+})_{FSI}$ and
$(A_{ B {}^0_{d}\to K^+\pi^-})_{FSI}$ in units of $10^{-8}$ GeV, respectively, which are close to the scaled (by $f_K/f_\pi$) estimates,
$4.93 e^{i 177.2^\circ}$,
$5.32 e^{i 161.5^\circ}$,
$4.66 e^{-i 171.3^\circ}$ and
$4.99 e^{i 172.2^\circ}$, from Eq.~(\ref{eq: Kpi0ACP}).
For $B^-\to K^-\pi^0$ decays, we have
$5.40 e^{i 176.4^\circ}$,
$5.50 e^{i 162.7^\circ}$,
$5.26 e^{-i 171.6^\circ}$ and
$5.01 e^{i 174.8^\circ}$ for
$\sqrt2(A_{B^-\to K^-\pi^0})^0$, $\sqrt2(A_{B^+\to K^+\pi^0})^0$, $\sqrt2(A_{B^-\to K^-\pi^0})_{FSI}$ and
$\sqrt2(A_{B^+\to K^+\pi^0})_{FSI}$ in units of $10^{-8}$ GeV, respectively, which are close to the scaled (by $f_K/f_\pi$) estimates,
$5.48 e^{i 177.2^\circ}$,
$5.44 e^{i 161.5^\circ}$,
$5.29 e^{-i 171.3^\circ}$ and
$4.94 e^{i 172.2^\circ}$, from Eq.~(\ref{eq: Kpi0ACP}).
From Eq.~(\ref{eq: Kpi0ACP}) we see that the asymmetries are
${\cal A}(\overline B {}^0_{d}\to K^-\pi^+)\simeq -7.7\%$, ${\cal A}(B^-\to K^- \pi^0)\simeq 0.6\%$ and $\Delta {\cal A}\simeq 8.3\%$ before Res,
which are not too far from the values $-9.5\%$, $-1.9\%$ and $7.6\%$ shown in Table~\ref{tab:table-acp}, and
${\cal A}(\overline B {}^0_{d}\to K^-\pi^+)\simeq -6.8\%$, ${\cal A}(B^-\to K^- \pi^0)\simeq 6.8\%$ and $\Delta{\cal A}\simeq 13.6\%$ after Res,
which are close to the values $-8.2\%$, $4.9\%$ and $13.0\%$ shown in Table~\ref{tab:table-acp}.
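The quoted asymmetries follow from the moduli in Eq.~(\ref{eq: Kpi0ACP}) through ${\cal A}=(|\bar A|^2-|A|^2)/(|\bar A|^2+|A|^2)$; a minimal sketch, with the numbers read off from that equation:

```python
def acp(a_bar, a):
    # Direct CP asymmetry (in percent) from the moduli of the
    # Bbar and B decay amplitudes.
    return 100 * (a_bar**2 - a**2) / (a_bar**2 + a**2)

# Before Res: the first terms of Eq. (Kpi0ACP) are A(Bbar0 -> K- pi+);
# the full sums give sqrt(2) A(B- -> K- pi0).
a_kpi_0 = acp(4.12, 4.45)     # ~ -7.7%
a_kpi0_0 = acp(4.58, 4.55)    # ~ +0.6%
delta_0 = a_kpi0_0 - a_kpi_0  # ~ 8.3%

# After Res
a_kpi_fsi = acp(3.90, 4.18)         # ~ -6.9%
a_kpi0_fsi = acp(4.43, 4.14)        # ~ +6.8%
delta_fsi = a_kpi0_fsi - a_kpi_fsi  # ~ 13.7%, close to the quoted 13.6%
```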
As noted in the discussion of the $B^-\to \overline K{}^0\pi^-$ rate in the last sub-section,
the first Gegenbauer moment of the kaon wave function is the main source of the discrepancies between the estimations and the full numerical results.
As shown in Eq.~(\ref{eq: Kpi0ACP}), it is interesting that before rescattering the $C'$ and $P'_{EW}$ terms are the sources of the deviation of
${\cal A}(B^-\to K^- \pi^0)$ from ${\cal A}(\overline B {}^0_{d}\to K^-\pi^+)$,
while in the presence of Res the sizes of $A'$ and $P^{\prime E}_{EW}$ are enhanced,
which further enlarges the deviation between ${\cal A}(\overline B {}^0_{d}\to K^-\pi^+)$ and ${\cal A}(B^-\to K^- \pi^0)$, producing a larger $\Delta {\cal A}$.
Note that, comparing with the discussion of the $\overline B{}^0\to\pi^+\pi^-$ and $\pi^0\pi^0$ decay rates [see discussion after Eq.~(\ref{eq: pipi TA})],
we see that the effects of Res on these two sectors are not strongly correlated.
Indeed, in the $\pi^0\pi^0$ mode the TAs most affected by rescattering are $P$, $E$ and $PA$,
while here $A'$ and $P^{\prime E}_{EW}$ are the most affected and relevant ones.
We now turn to ${\cal A}(\overline B{}^0\to\pi^+\pi^-)$.
From the previous discussion [see Eq. (\ref{eq: pipiTA})],
we find that before Res
$A(\overline B{}^0\to\pi^+\pi^-)\simeq2.84 e^{-i 47.1^\circ}\times 10^{-8}$ GeV and
$A(B{}^0\to\pi^+\pi^-)\simeq2.38 e^{i 53.2^\circ}\times 10^{-8}$ GeV,
giving ${\cal A}\simeq 18\%$,
while in the presence of Res, the sizes of the dominant parts, $T+P$, are reduced,
but the sizes of the destructive and sub-leading parts, $E+PA$, are enhanced,
resulting in richer interference, giving
$A(\overline B{}^0\to\pi^+\pi^-)\simeq2.70 e^{-i 42.4^\circ}\times 10^{-8}$ GeV and
$A(B{}^0\to\pi^+\pi^-)\simeq2.10 e^{i 55.9^\circ}\times 10^{-8}$ GeV,
and, consequently,
producing an enhanced ${\cal A}\simeq 24.7\%$, which is closer to the data, $(31\pm 5)\%$.
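These numbers can be checked directly from the amplitude moduli via ${\cal A}=(|\bar A|^2-|A|^2)/(|\bar A|^2+|A|^2)$; a minimal sketch:

```python
def acp(a_bar, a):
    # Direct CP asymmetry (in percent) from amplitude moduli.
    return 100 * (a_bar**2 - a**2) / (a_bar**2 + a**2)

acp_before = acp(2.84, 2.38)  # ~ 17.5%, i.e. the quoted ~18%
acp_after = acp(2.70, 2.10)   # ~ 24.6%, i.e. the quoted ~24.7%
```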
Note that the results of Fac and Res in ${\cal A}(\overline B{}^0\to K^0\overline K{}^0)$ are different,
while the present data, ${\cal A}(\overline B{}^0\to K^0\overline K{}^0)=(-6\pm36)\%$, with its large uncertainty, allows both.
Note that the uncertainty in the data is enlarged by an $S$ factor of 1.4, as Belle and BaBar give very different results in ${\cal A}(\overline B{}^0\to K_s K_s)$, namely,
Belle gives ${\cal A}(\overline B{}^0\to K_s K_s)=-0.38\pm0.38\pm0.5$~\cite{Nakahama:2007dg}, while BaBar gives $0.40\pm0.41\pm0.06$~\cite{Aubert:2006gm}.
The result of Res,
${\cal A}(\overline B {}^0\to K^0\overline K^0)=-0.418_{-0.039}^{+0.026}$,
prefers the Belle result.
One should be reminded that Res can reproduce the $\overline B{}^0\to K^0\overline K{}^0$ CP-averaged rate much better than Fac (see Table~\ref{tab:table-br}).
We need more data to clarify the situation and to verify these predictions.
It will be useful to see the effect of Res on the $\overline B {}^0_{d}\to K^0 \overline K {}^0$ direct CP asymmetry.
From Eq. (\ref{eq: TAgroup4}), we can approximate the $\overline B {}^0_{d}\to K^0 \overline K {}^0$ amplitude as
\begin{eqnarray}
A_{\overline B {}^0_{d}\to K^0 \overline K {}^0}
&\simeq&P+PA\simeq P^u+PA^u+P^c+PA^c.
\end{eqnarray}
From Table~\ref{tab:ucpenguinsDS0-1}, before and after Res, we have (in units of $10^{-8}$ GeV)
\begin{eqnarray}
(A_{\overline B {}^0_{d}\to K^0 \overline K {}^0})^0&\simeq&
0.35 e^{i 108.3^\circ}+0.03 e^{-i77.4^\circ}+1.01 e^{-i11.4^\circ}+0.08 e^{i 171.4^\circ}
\simeq 0.81 e^{i 8.1^\circ},
{\nonumber}\\
(A_{B {}^0_{d}\to K^0 \overline K {}^0})^0
&\simeq& 0.35 e^{-i 113.9^\circ}+0.03 e^{i60.3^\circ}+1.01 e^{-i11.4^\circ}+0.08 e^{i 171.4^\circ}
\simeq 0.92 e^{-i 31.6^\circ},
{\nonumber}\\
(A_{\overline B {}^0_{d}\to K^0 \overline K {}^0})_{FSI}
&\simeq& 0.70 e^{i 84.7^\circ}+0.60 e^{-i120.1^\circ}+0.95 e^{-i0.2^\circ}+0.20 e^{-i104.6^\circ}
\simeq 0.66 e^{-i 1.9^\circ},
{\nonumber}\\
(A_{B {}^0_{d}\to K^0 \overline K {}^0})_{FSI}
&\simeq& 0.70 e^{-i 137.6^\circ}+0.60 e^{i17.6^\circ}+0.95 e^{-i0.2^\circ}+0.20 e^{-i104.6^\circ}
\simeq1.07 e^{-i 27.2^\circ},
\label{eq: AK0K0 Res}
\end{eqnarray}
respectively,
where the values of $P^u$, $PA^u$, $P^c$ and $PA^c$ are shown in the corresponding order.
In our full numerical result, we have
$1.12 e^{i 8.6^\circ}$,
$1.24 e^{-i 33.3^\circ}$,
$0.90 e^{i 1.6^\circ}$ and
$1.40 e^{-i 27.8^\circ}$ for
$(A_{\overline B {}^0_{d}\to K^0 \overline K {}^0})^0$, $(A_{B {}^0_{d}\to K^0 \overline K {}^0})^0$,
$(A_{\overline B {}^0_{d}\to K^0 \overline K {}^0})_{FSI}$ and $(A_{B {}^0_{d}\to K^0 \overline K {}^0})_{FSI}$ in units of $10^{-8}$ GeV,
respectively, which are close to the scaled [by $(f_K/f_\pi)^2$] estimates,
$1.16 e^{i 8.1^\circ}$,
$1.31 e^{-i 31.6^\circ}$,
$ 0.95 e^{-i 1.9^\circ}$ and
$1.53 e^{-i 27.2^\circ}$,
from Eq.~(\ref{eq: AK0K0 Res}).
In Eq. (\ref{eq: AK0K0 Res}), we see that both the $P^u$ and $PA^u$ terms are enhanced under Res (mainly through rescattering from $T^0$)
and produce a richer interference pattern, contributing to the direct CP asymmetry.
The $\overline B {}^0_{d}\to K^0 \overline K {}^0$ amplitude is reduced,
while the amplitude of the conjugate decay mode, $B {}^0_{d}\to K^0 \overline K {}^0$, is enhanced under Res,
producing an enlarged direct CP asymmetry,
which changes from $-12\%$ to $-45\%$ and hence comes close to the Belle result.
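The quoted change of the asymmetry follows from the moduli in Eq.~(\ref{eq: AK0K0 Res}); a minimal sketch, with the values read off from that equation:

```python
def acp(a_bar, a):
    # Direct CP asymmetry (in percent) from amplitude moduli.
    return 100 * (a_bar**2 - a**2) / (a_bar**2 + a**2)

acp_before = acp(0.81, 0.92)  # ~ -12.7%, i.e. the quoted ~ -12%
acp_after = acp(0.66, 1.07)   # ~ -44.9%, i.e. the quoted ~ -45%
```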
As shown in Table~\ref{tab:table-acp}, before Res the direct CP asymmetry of $\overline B{}^0\to K^+K^-$ vanishes.
Indeed, as one can infer from Eq. (\ref{eq: B0KK}), the rates of $\overline B{}^0\to K^+K^-$ and $B{}^0\to K^+K^-$ are the same before Res.
This can be understood as follows.
In QCDF, $E$, $PA$ and $P_{EW}^A$ can be expressed in terms of the so-called $A^i_1$ and $A^i_2$ terms,
and these $A^i_1$ and $A^i_2$ terms are identical
when the asymptotic distribution amplitudes are used (as in the present case)~\cite{Beneke:2003zv}.
Since $A_{\overline B{}^0\to K^+K^-}=E+PA+P_{EW}^A/3$
and these three topological amplitudes all share a common strong phase, the direct CP asymmetry vanishes.
Note that in the presence of Res, $E$ and $PA^u$ are enhanced mostly from $T^0$ [see Eqs. (\ref{eq: E/E0}) and (\ref{eq: PAu/PAu0})],
while $PA^c$ is enhanced mostly from $P^c$ [see Eq. (\ref{eq: PAc/PAc0})]; consequently,
the strong phases of these terms are no longer degenerate.
In fact, from Eq. (\ref{eq: B0KK}) one can infer that the direct CP asymmetry is estimated to be $-18\%$,
which can be compared to the value of $(-7.7_{-6.2}^{+6.0})\%$ obtained in the full numerical result as shown in Table~\ref{tab:table-acp}.
Regarding predictions, we see that, except for $\overline B {}^0\to K^+ K^-$, the predicted direct CP asymmetries from Res are smaller in size than those from Fac.
\subsubsection{Rates and Direct CP asymmetries in $\overline B{}^0_s$
Decays}
\begin{table}
\caption{ \label{tab:table-Bs} Same as Table~\ref{tab:table-br},
except for the branching ratios (upper table) in units of
$10^{-6}$ and direct CP asymmetries (lower table) in units of
percent for various $\overline B_s\to PP$ modes.}
\begin{ruledtabular}
\begin{tabular}{lccccc}
Mode
&Exp
&Fac
&Res
&$\chi^{2\,\rm(Fac)}_{\rm min}$
&$\chi^{2\,\rm(Res)}_{\rm min}$
\\
\hline
${\cal B}(\overline B{}_s^0\to K^+\pi^-)$
& $5.5\pm0.5$
& $5.5\pm0.1$
& $5.5_{-1.8}^{+0.4}$ (6.3)
& 0.0
& 0.0 (2.6)
\\
${\cal B}(\overline B {}_s^0\to K^0\pi^0)$
& --
& $0.59\pm0.01$
& $1.02_{-0.13}^{+3.64}$ (0.68)
& --
& --
\\
${\cal B}(\overline B {}_s^0\to K^0\eta)$
& --
& $0.18_{-0.00}^{+0.01}$
& $0.48_{-0.06}^{+1.87}$ (0.22)
& --
& --
\\
${\cal B}(\overline B {}_s^0\to K^0\eta')$
& --
& $1.76\pm0.03$
& $2.02_{-0.19}^{+4.30}$ (1.75)
& --
& --
\\
\hline
${\cal B}(\overline B {}_s^0\to \pi^+\pi^-)$
& $0.671\pm 0.083$
& $0.30\pm0.01$
& $0.67_{-0.06}^{+0.49}$ (0.14)
& 20.2
& 0.0 (41.1)
\\
${\cal B}(\overline B {}_s^0\to \pi^0 \pi^0)$
& --
& $0.15\pm0.00$
& $0.33_{-0.03}^{+0.25}$ (0.07)
& --
& --
\\
${\cal B}(\overline B {}_s^0\to \eta\eta)$
& --
& $24.7_{-0.4}^{+0.3}$
& $19.6_{-6.5}^{+0.6}$ (20.4)
& --
& --
\\
${\cal B}(\overline B {}_s^0\to \eta \eta')$
& --
& $67.2_{-1.4}^{+0.9}$
& $75.1_{-3.5}^{+67.4}$ (68.7)
& --
& --
\\
${\cal B}(\overline B{}_s^0\to \eta'\eta')$
& $33.1\pm7.1$
& $60.5_{-1.1}^{+0.8}$
& $34.9_{-4.7}^{+16.0}$ (46.6)
& 16.0
& 0.0 (3.6)
\\
${\cal B}(\overline B{}_s^0\to K^+ K^-)$
& $24.8\pm1.7$
& $32.7_{-0.6}^{+0.5}$
& $24.6_{-0.6}^{+2.7}$ (24.5)
& 21.3
& 0.0 (0.0)
\\
${\cal B}(\overline B{}_s^0\to K^0\overline K^0)$
& $19.6\pm9.5$
& $34.3_{-0.6}^{+0.5}$
& $24.6_{-1.0}^{+0.7}$ (25.6)
& 2.4
& 0.3 (0.4)
\\
${\cal B}(\overline B{}_s^0\to \pi^0 \eta)$
& --
& $0.07\pm0.00$
& $0.07_{-0.00}^{+0.09}$ (0.06)
& --
& --
\\
${\cal B}(\overline B{}_s^0\to \pi^0\eta')$
& --
& $0.09_{-0.00}^{+0.00}$
& $0.11_{-0.01}^{+0.10}$ (0.10)
& --
& --
\\
\hline
\hline
${\cal A}(\overline B{}_s^0\to K^+\pi^-)$
& $26\pm4$
& $17.4_{-0.5}^{+0.4}$
& $24.8_{-1.0}^{+22.1}$ (28.2)
& 4.6
& 0.1 (0.3)
\\
${\cal A}(\overline B{}_s^0\to K^0\pi^0)$
& --
& $66.8_{-1.6}^{+1.5}$
& $74.9_{-50.8}^{+4.8}$ (53.7)
& --
& --
\\
${\cal A}(\overline B{}_s^0\to K^0\eta)$
& --
& $88.1_{-1.2}^{+0.9}$
& $81.2_{-54.8}^{+6.9}$ (78.2)
& --
& --
\\
${\cal A}(\overline B{}_s^0\to K^0\eta')$
& --
& $-38.7_{-0.5}^{+0.9}$
& $-38.6_{-2.2}^{+13.0}$ $(-34.4)$
& --
& --
\\
\hline
${\cal A}(\overline B{}_s^0\to \pi^+\pi^-)$
& --
& $0$
& $1.7_{-2.5}^{+0.5}$ (0)
& --
& --
\\
${\cal A}(\overline B{}_s^0\to \pi^0 \pi^0)$
& --
& $0$
& $1.7_{-2.5}^{+0.5}$ (0)
& --
& --
\\
${\cal A}(\overline B{}_s^0\to \eta\eta)$
& --
& $-2.4\pm0.1$
& $-3.7_{-8.2}^{+0.6}$ $(-2.8)$
& --
& --
\\
${\cal A}(\overline B{}_s^0\to \eta \eta')$
& --
& $-0.01\pm0.01$
& $0.95_{-0.19}^{+0.39}$ $(-0.01)$
& --
& --
\\
${\cal A}(\overline B{}_s^0\to \eta'\eta')$
& --
& $2.0\pm0.0$
& $-1.2_{-4.7}^{+1.0}$ $(1.9)$
& --
& --
\\
${\cal A}(\overline B{}_s^0\to K^+ K^-)$
& $-14\pm 11$
& $-5.8\pm0.0$
& $-10.5_{-0.4}^{+1.1}$ $(-9.9)$
& 0.6
& 0.1 (0.1)
\\
${\cal A}(\overline B{}_s^0\to K^0\overline K^0)$
& --
& $-0.9\pm0.0$
& $0.9_{-0.3}^{+2.2}$ $(-0.6)$
& --
& --
\\
${\cal A}(\overline B{}_s^0\to \pi^0 \eta)$
& --
& $46.0_{-1.2}^{+1.5}$
& $92.9_{-15.4}^{+2.9}$ $(69.9)$
& --
& --
\\
${\cal A}(\overline B{}_s^0\to \pi^0\eta')$
& --
& $64.3_{-1.1}^{+1.4}$
& $77.7_{-6.9}^{+8.5}$ (54.0)
& --
& --
\\
\end{tabular}
\end{ruledtabular}
\end{table}
We show the CP-averaged rates and direct CP violations of $\overline B{}^0_s\to PP$ decays in Table~\ref{tab:table-Bs}.
There are five measured $\overline B{}_s$ decay rates,
namely $K^+\pi^-$, $\pi^+\pi^-$, $\eta'\eta'$, $K^+K^-$ and $K^0\overline K{}^0$ decay rates.
Among them $\overline B{}_s\to\pi^+\pi^-$ and $\eta'\eta'$ decays
are newly observed by LHCb~\cite{Aaij:2015qga,Aaij:2016elb}.
From the table we see that both Fac and Res can fit the $\overline B{}_s\to K^+\pi^-$ rate well,
but Fac has difficulty fitting the other four modes: in particular, the $\chi^2$ of the $\pi^+\pi^-$, $\eta'\eta'$ and $K^+K^-$ modes are as large as
20.2, 16.0 and 21.3, respectively, while Res fits all $B_s$ decay modes very well and efficiently brings these $\chi^2$ down to $0.0$, $0.0$ and $0.0$, respectively.
Note that the rates of the two newly measured modes ($\pi^+\pi^-$ and $\eta'\eta'$) can be easily reproduced in the Res fit, but not in the Fac fit.
For other modes, we see from the table that Res predicts larger rates in $\overline B{}^0_s\to K^0\pi^0$, $K^0\eta$, $\pi^0\pi^0$ decays,
but gives similar predictions on $K^0\eta'$, $\eta\eta$, $\eta\eta'$, $\pi^0\eta$ and $\pi^0\eta'$ rates.
The $\overline B{}^0_s\to\pi^+\pi^-$ rate in the factorization calculation is too small compared to data.
As shown in Table~\ref{tab:table-Bs}, through Res the rate can be enhanced significantly.
It is useful to examine the enhancement of the $\pi^+\pi^-$ rate more closely.
From Eq.~(\ref{eq: TABsgroup4}),
\begin{eqnarray}
A_{\overline B {}^0_{s}\to \pi^+\pi^-}
&=&E'+PA^{\prime u}+PA^{\prime c}+\frac{1}{3}P_{EW}^{\prime \,A},
\end{eqnarray}
and the values in Tables~\ref{tab:TADS-1} and \ref{tab:ucpenguinsDS0-1}, before and after Res, we have (in units of $10^{-9}$ GeV)
\begin{eqnarray}
(A_{\overline B {}^0_{s}\to \pi^+\pi^-})^0
&\simeq& 0.27 e^{i102.6^\circ}+0.07 e^{-i77.4^\circ}+3.51 e^{-i 8.6^\circ}+0.02 e^{i 170.4^\circ}\simeq 3.42 e^{-i5.4^\circ},
{\nonumber}\\
(A_{B {}^0_{s}\to \pi^+\pi^-})^0
&\simeq& 0.27 e^{-i120.1^\circ}+0.07 e^{i60.3^\circ}+3.51 e^{-i 8.6^\circ}+0.02 e^{i 172.5^\circ}\simeq 3.42 e^{-i11.7^\circ},
{\nonumber}\\
(A_{\overline B {}^0_{s}\to \pi^+\pi^-})_{FSI}
&\simeq& 1.26 e^{i71.0^\circ}+1.38 e^{-i120.1^\circ}+8.75 e^{i 75.4^\circ}+0.28 e^{i132.2^\circ}\simeq 8.88 e^{i78.7^\circ},
{\nonumber}\\
(A_{B {}^0_{s}\to \pi^+\pi^-})_{FSI}
&\simeq& 1.26 e^{-i151.2^\circ}+1.38 e^{i17.6^\circ}+8.75 e^{i 75.4^\circ}+0.28 e^{i134.3^\circ}\simeq 8.76 e^{i75.3^\circ},
\label{eq: Bspipi}
\end{eqnarray}
respectively, where terms are given in the order of $E'$, $PA^{\prime u}$, $PA^{\prime c}$ and $P_{EW}^{\prime \,A}/3$.
In our full numerical result, we have
$4.17 e^{-i 5.3^\circ}$,
$4.17 e^{-i11.7^\circ}$,
$9.19 e^{i 66.7^\circ}$ and
$9.04 e^{i 64.8^\circ}$ for
$(A_{\overline B {}^0_{s}\to \pi^+\pi^-})^0$, $(A_{B {}^0_{s}\to \pi^+\pi^-})^0$, $(A_{\overline B {}^0_{s}\to \pi^+\pi^-})_{FSI}$ and $(A_{B {}^0_{s}\to \pi^+\pi^-})_{FSI}$ in units of $10^{-9}$ GeV,
respectively, which are close to the scaled (by $f_{B_s}/f_B\simeq f_K/f_\pi$) estimates,
$4.15 e^{-i5.4^\circ}$, $4.15 e^{-i11.8^\circ}$, $10.74 e^{i78.7^\circ}$ and $10.64 e^{i75.3^\circ}$, from Eq. (\ref{eq: Bspipi}).
From Eq. (\ref{eq: Bspipi}), we see that
the sizes of the amplitudes of the $\overline B{}^0_s$ and the conjugate $B^0_s$ decays
are enhanced by factors of $2.58$ and $2.56$, respectively,
where the enhancements come mainly from the enhancement of $PA^{\prime c}$.
Consequently, the CP-averaged rate is enhanced by a factor of $6.6$,
while ${\cal A}$ changes from $0$ to $0.9\%$, as $E'$ and $PA^{\prime u}$ are also enhanced.
Note that the above estimate of the rate enhancement is somewhat larger than the one in our full numerical result ($0.67/0.14=4.79$) in Table~\ref{tab:table-Bs},
but the direct CP asymmetry is close to the value ($1.9\%$) shown in the table.
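The enhancement factor of the CP-averaged rate quoted above can be checked from the moduli in Eq.~(\ref{eq: Bspipi}); a minimal sketch:

```python
# CP-averaged |A(Bsbar -> pi+ pi-)|^2 before and after rescattering,
# using the moduli from Eq. (Bspipi) (units of 1e-9 GeV).
before = (3.42**2 + 3.42**2) / 2
after = (8.88**2 + 8.76**2) / 2
enhancement = after / before  # ~ 6.65, i.e. the quoted factor of 6.6
```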
The reason for the vanishing ${\cal A}$ before Res is similar to that in the $\overline B{}^0\to K^+K^-$ decay discussed previously.
In the presence of Res, $E'$ and $PA^{\prime u}$ are enhanced mostly from $T^{\prime 0}$ [see Eqs. (\ref{eq: E/E0}) and (\ref{eq: PAu/PAu0})],
while $PA^{\prime c}$ is enhanced mostly from $P^{\prime c}$ [see Eq. (\ref{eq: PAc/PAc0})], which helps to enhance the $\overline B{}^0_s\to \pi^+\pi^-$ rate and brings in a non-vanishing direct CP asymmetry.
We now compare our results to the data in direct CP asymmetries.
There are two reported measurements in direct CP asymmetries of $\overline B{}_s$ modes:
${\cal A}(\overline B{}_s^0\to K^+\pi^-)$ and ${\cal A}(\overline B_s\to K^+K^-)$.
A better measurement, with a much reduced uncertainty, is reported for the $K^+\pi^-$ mode.
From the table we see that Res gives a better fit to this data than Fac with $\chi^{2(\rm Fac)}=4.6$ and $\chi^{2(\rm Res)}=0.1$.
On the other hand, both Fac and Res can fit ${\cal A}(\overline B_s\to K^+K^-)$ well,
as the uncertainty in the data is still large enough to accommodate both results, but Res has a smaller $\chi^2$.
For predictions on direct CP asymmetries,
we note that the signs of ${\cal A}(\overline B{}_s\to\eta'\eta')$ and ${\cal A}(\overline B{}_s\to K^0\overline K{}^0)$ are opposite in Fac and Res;
Res predicts non-vanishing ${\cal A}(\overline B{}_s\to\pi^+\pi^-,\pi^0\pi^0)$ and larger ${\cal A}(\overline B{}_s\to\pi^0\eta)$,
while the predictions of Fac and Res on other modes are similar. These predictions can be checked in the near future.
\subsubsection{Time-dependent CP violations in $\overline B{}^0$ and $\overline
B{}^0_s$ Decays}
\begin{table}
\caption{ \label{tab:table-S} Results on the time-dependent CP
asymmetry $\sin2\beta_{\rm eff}$ (for the first three modes) and $S$ of various $\overline B_{d,s}\to PP$ modes. }
\begin{ruledtabular}
\begin{tabular}{lcrrcc}
Mode
&Exp
&Fac~$\qquad$
&Res~$\qquad$
&$\chi^{2\,\rm (Fac)}_{\rm min}$
&$\chi^{2\,\rm(Res)}_{\rm min}$
\\
\hline
$\overline B {}^0\to \overline K{}^0\pi^0$
& $0.57\pm0.17$
& $0.798\pm0.002$
& $0.806^{+0.010}_{-0.003}$ (0.793)
& 1.8
& 1.9 (1.7)
\\
$\overline B{}^0\to \overline K{}^0\eta$
& --
& $0.672_{-0.015}^{+0.009}$
& $0.728_{-0.018}^{+0.030}$ (0.757)
&
&
\\
$\overline B{}^0\to \overline K{}^0\eta'$
& $0.63\pm0.06$
& $0.689^{+0.001}_{-0.002}$
& $0.683_{-0.008}^{+0.006}$ (0.693)
& 1.0
& 0.8 (1.1)
\\
\hline
$\overline B{}^0\to \pi^+\pi^-$
& $-0.66\pm 0.06$
& $-0.477_{-0.041}^{+0.039}$
& $-0.598\pm0.040$ $(-0.578)$
& 9.3
& 1.1 (1.9)
\\
$\overline B{}^0\to \pi^0 \pi^0$
& --
& $0.602\pm0.023$
& $0.675_{-0.049}^{+0.055}$ (0.778)
&
&
\\
$\overline B{}^0\to \eta\eta$
& --
& $-0.741_{-0.015}^{+0.014}$
& $-0.663_{-0.033}^{+0.031}$ $(-0.669)$
&
&
\\
$\overline B{}^0\to \eta \eta'$
& --
& $-0.847_{-0.014}^{+0.013}$
& $-0.953_{-0.021}^{+0.028}$ $(-0.795)$
\\
$\overline B{}^0\to \eta'\eta'$
& --
& $-0.922_{-0.004}^{+0.003}$
& $-0.753_{-0.089}^{+0.067}$ $(-0.962)$
\\
$\overline B{}^0\to K^+ K^-$
& --
& $-0.835_{-0.017}^{+0.016}$
& $-0.992_{-0.007}^{+0.017}$ $(-0.895)$
\\
$\overline B{}^0\to K_S\overline K_S$
& $-0.38^{+0.69}_{-0.77}\pm0.09$
& $-0.016\pm0.002$
& $-0.231_{-0.042}^{+0.048}$ $(-0.037)$
& 0.2
& 0.0 (0.2)
\\
& $-1.28^{+0.80}_{-0.73}{}^{+0.11}_{-0.16}$
\\
$\overline B{}^0\to \pi^0 \eta$
& --
& $0.215_{-0.006}^{+0.005}$
& $-0.473_{-0.068}^{+0.043}$ $(-0.494)$
\\
$\overline B{}^0\to \pi^0\eta'$
& --
& $-0.002_{-0.012}^{+0.010}$
& $-0.414_{-0.025}^{+0.035}$ $(-0.440)$
&
&
\\
\hline
$\overline B {}_s^0\to \pi^+\pi^-$
& --
& $0.152\pm0.001$
& $0.071_{-0.009}^{+0.011}$ (0.149)
&
&
\\
$\overline B {}_s^0\to \pi^0 \pi^0$
& --
& $0.152\pm0.001$
& $0.071_{-0.009}^{+0.011}$ (0.149)
\\
$\overline B{}_s^0\to \eta\eta$
& --
& $-0.005\pm0.000$
& $-0.035_{-0.067}^{+0.004}$ $(-0.027)$
\\
$\overline B{}_s^0\to \eta \eta'$
& --
& $-0.004\pm0.000$
& $0.005_{-0.001}^{+0.007}$ $(0.006)$
\\
$\overline B{}_s^0\to \eta'\eta'$
& --
& $0.021\pm0.000$
& $0.046_{-0.003}^{+0.006}$ (0.025)
\\
$\overline B{}_s^0\to K^+ K^-$
& $0.30\pm0.13$
& $0.200\pm0.002$
& $0.149_{-0.066}^{+0.005}$ (0.176)
& 0.6
& 1.4 (1.0)
\\
$\overline B{}_s^0\to K^0\overline K^0$
& --
& $-0.022_{-0.000}^{+0.001}$
& $-0.019_{-0.017}^{+0.004}$ $(-0.027)$
\\
$\overline B{}_s^0\to \pi^0 \eta$
& --
& $-0.059_{-0.004}^{+0.009}$
& $0.100_{-0.475}^{+0.050}$ (0.308)
\\
$\overline B {}_s^0\to \pi^0\eta'$
& --
& $0.232_{-0.008}^{+0.013}$
& $-0.016_{-0.319}^{+0.065}$ $(0.053)$
\\
\hline
$\overline B{}_s^0\to K_S\pi^0$
& --
& $-0.738_{-0.020}^{+0.017}$
& $-0.311_{-0.092}^{+0.541}$ $(-0.784)$
\\
$\overline B{}_s^0\to K_S\eta$
& --
& $-0.296_{-0.037}^{+0.041}$
& $0.274_{-0.076}^{+0.369}$ $(-0.273)$
\\
$\overline B{}_s^0\to K_S\eta'$
& --
& $-0.395_{-0.004}^{+0.011}$
& $-0.049_{-0.052}^{+0.367}$ $(-0.276)$
\\
\end{tabular}
\end{ruledtabular}
\end{table}
Results on time-dependent CP-asymmetries $S$ are given in
Table~\ref{tab:table-S}.
We fit to data on mixing induced CP asymmetries.
Experimental results on mixing induced CP asymmetries have been reported in the following five modes:
$\overline B {}^0\to \overline K{}^0\pi^0$, $\overline B{}^0\to \overline K{}^0\eta'$, $\overline B{}^0\to \pi^+\pi^-$,
$\overline B{}^0\to K_S\overline K_S$ and $\overline B{}_s^0\to K^+ K^-$.
Since the measurements are subtle, the experimental progress in this sector is slower than that for rates and direct CP asymmetries.
Currently, the $\overline B{}^0\to \overline K{}^0\pi^0$ mode was last updated in 2010;
the $\overline B{}^0\to \overline K{}^0\eta'$ mode in 2014;
the $\overline B{}^0\to \pi^+\pi^-$ mode in 2013;
the $\overline B{}^0\to K_S\overline K{}_S$ mode in 2007;
and the $\overline B{}_s\to K^+K^-$ mode was added to these measurements in 2013~\cite{updates,Aaij:2013tna,Aubert:2006gm,Nakahama:2007dg}.
New data are eagerly awaited.
Note that for the
$\overline B{}^0\to K^0\overline K{}^0$ mode, the mixing induced CP asymmetries
obtained by Belle
($-0.38^{+0.69}_{-0.77}\pm0.09$~\cite{Nakahama:2007dg})
and BaBar
($-1.28^{+0.80+0.11}_{-0.73-0.16}$~\cite{Aubert:2006gm}) are different.
As the central value of the latter exceeds the physical range,
we only include the former one in our fit.
From Table~\ref{tab:table-S} we see that the fit in Res for the $\overline B{}^0\to\pi^+\pi^-$ mode is much better than the one in Fac: the
$\chi^2$ values are 1.1 and 9.3, respectively.
Conversely, the fit in Fac is better than the one in Res for the $\overline B{}_s\to K^+K^-$ mode, where the $\chi^2$ values are 0.6 and 1.4 for the former and the latter, respectively.
Note that the uncertainty in the data of the $\overline B{}_s\to K^+K^-$ mode is much larger than the one in the $\overline B{}^0\to\pi^+\pi^-$ mode. It will be interesting to see the updated data on the $\overline B{}_s\to K^+K^-$ mode.
Overall, the quality of the fit to mixing induced CP asymmetries is improved ($\chi^2$ reduced from $12.9$ to $5.2$, see also Table II) in the presence of Res.
It is useful to look into the mixing induced asymmetry in the $\overline B {}^0_{d}\to K^0 \overline K {}^0$ mode.
Recall in Eq.~(\ref{eq: AK0K0 Res}) that, before and after Res, we have (in units of $10^{-8}$ GeV, without SU(3) breaking correction)
\begin{eqnarray}
(A_{\overline B {}^0_{d}\to K^0 \overline K {}^0})^0&\simeq& 0.81 e^{i 8.1^\circ},
\quad
(A_{B {}^0_{d}\to K^0 \overline K {}^0})^0\simeq 0.92 e^{-i 31.6^\circ},
{\nonumber}\\
(A_{\overline B {}^0_{d}\to K^0 \overline K {}^0})_{FSI}&\simeq&0.66 e^{-i 1.9^\circ},
\quad
(A_{B {}^0_{d}\to K^0 \overline K {}^0})_{FSI}\simeq1.07 e^{-i 27.2^\circ},
\end{eqnarray}
respectively.
Using the well-known formula
\begin{eqnarray}
S=\frac{2 {\rm Im} \lambda_A}{1+|\lambda_A|^2}
\end{eqnarray}
with
\begin{eqnarray}
\lambda_A\equiv \frac{q}{p} \frac{A_{\overline B {}^0_{d}\to K^0 \overline K {}^0}}{A_{B {}^0_{d}\to K^0 \overline K {}^0}}
=e^{-i2\beta}\frac{A_{\overline B {}^0_{d}\to K^0 \overline K {}^0}}{A_{B {}^0_{d}\to K^0 \overline K {}^0}},
\end{eqnarray}
we obtain
$S\simeq -0.08$ and $-0.29$ without and with Res, respectively, which are close to the values reported in Table~\ref{tab:table-S}.
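The arithmetic above can be reproduced in a few lines. In the sketch below, the value of $\beta\approx 22.15^\circ$ is our assumption (chosen to match the quoted numbers; it is not stated in the text):

```python
import cmath
import math

beta = math.radians(22.15)  # CKM angle beta: assumed input, not quoted in the text

def S_mix(A_bar, A):
    """Mixing induced asymmetry S = 2 Im(lam)/(1+|lam|^2), lam = e^{-2i beta} A_bar/A."""
    lam = cmath.exp(-2j * beta) * A_bar / A
    return 2 * lam.imag / (1 + abs(lam) ** 2)

def amp(r, deg):
    """Amplitude with magnitude r and phase in degrees."""
    return r * cmath.exp(1j * math.radians(deg))

# Decay amplitudes (in units of 1e-8 GeV) before and after rescattering, from the text
S_fac = S_mix(amp(0.81, 8.1), amp(0.92, -31.6))
S_res = S_mix(amp(0.66, -1.9), amp(1.07, -27.2))
print(round(S_fac, 2), round(S_res, 2))  # -0.08 -0.29
```

The outputs reproduce $S\simeq -0.08$ (Fac) and $S\simeq -0.29$ (Res) quoted above.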
As explained previously, although $\overline B {}^0_{d}\to K^0 \overline K {}^0$ is a pure penguin mode,
its $S$ is not necessarily close to $-\sin2\beta$,
as the $u$-penguin contribution is not negligible ($|P^{0 u}/P^{0 c}|\simeq 0.35$, see Table~\ref{tab:ucpenguinsDS0-1}).
When Res is turned on, the $u$-penguin and $c$-penguin receive different contributions; since trees can only contribute to the former, we obtain $|P^u/P^c|\simeq 0.74$ (see Table~\ref{tab:ucpenguinsDS0-1}),
and, consequently, the value of $S$ can be changed drastically.
We now compare the predictions of Fac and Res on mixing induced CP asymmetries. We note that they have different predictions on the mixing induced CP asymmetries of $\overline B{}^0\to\eta\eta$, $\eta\eta'$, $\pi^0\eta$, $\pi^0\eta'$, $\overline B{}_s\to \pi^0\eta$, $\pi^0\eta'$, $K_s\pi^0$, $K_s\eta$ and $K_S\eta'$ modes.
In particular, the signs of central values of the asymmetries of $\overline B{}^0\to\pi^0\eta$, $\pi^0\eta'$, $\overline B{}_s\to \pi^0\eta$, $\pi^0\eta'$, and $K_s\eta$ are opposite.
\section{Conclusion}
Various new measurements in charmless $B_{u,d,s}\to PP$ modes
are reported by Belle and LHCb.
These include the rates of
$B^0\to\pi^0\pi^0$, $\eta\pi^0$, $B_s\to\eta'\eta'$, $B^0\to K^+K^-$ and $B^0_s\to\pi^+\pi^-$ decays.
Some of these modes are highly suppressed and are among the rarest $B$ decays.
Direct CP asymmetries on various modes
are constantly updated.
It is well known that direct CP asymmetries and rates of suppressed modes are sensitive to final state interaction.
As new measurements are reported and more data are being collected,
it is interesting and timely to study rescattering in $B_{u,d,s}\to PP$ decays.
We perform a $\chi^2$ analysis with all available data on
CP-averaged rates and CP asymmetries in $\overline B{}_{u,d,s}\to PP$
decays.
Our numerical results are compared to data and those from factorization approach.
The quality of the fit is improved significantly in the presence of Res,
especially in the decay rates in the $\overline B{}^0$ $\Delta S=0$ sector and in rates and direct CP asymmetries in the $\overline B{}^0_s$ decay modes.
Indeed, the $\chi^2$ in the $\overline B{}^0\to \overline K{}^0\pi^0$, $\pi^+\pi^-$, $K^0\overline K{}^0$, $B^-\to\overline K{}^0\pi^-$,
$K^-\eta$, $\pi^-\pi^0$, $\pi^-\eta$ and $\overline B{}_s^0\to \pi^+\pi^-$, $\eta'\eta'$ and $K^+K^-$ rates,
and in $\overline B{}^0\to\pi^+\pi^-$ and $\overline B{}_s^0\to K^+\pi^-$ direct CP asymmetries are improved significantly.
Res also fits better the semileptonic data on $|V_{ub}| F^{B\pi}(0)$ [see Eq.~(\ref{eq: SL})].
The relations between topological amplitudes and rescattering are explored,
and they help to provide a better understanding of the effects of rescattering.
As suggested by U(3) symmetry on topological amplitudes and FSI, a vanishing exchange rescattering scenario is considered.
The exchange, annihilation, $u$-penguin, $u$-penguin annihilation and some electroweak penguin amplitudes are enhanced significantly
via annihilation and total annihilation rescatterings.
In particular, the $u$-penguin annihilation amplitude is sizably enhanced by the tree amplitude via total annihilation rescattering.
These enhancements affect rates and CP asymmetries.
For example,
the enhanced $PA^u$ changes the $\overline B {}^0_{d}\to K^0 \overline K {}^0$ direct CP asymmetry significantly;
the enhanced $P$, $E$ and $PA$ produce (through complicated interference) a slightly larger $\overline B{}^0\to\pi^0\pi^0$ decay amplitude, resulting in a 35\% enhancement of the rate;
$A'$ and $P^{\prime E}_{EW}$ are enhanced
and enlarge the deviation between ${\cal A}(\overline B {}^0_{d}\to K^-\pi^+)$ and ${\cal A}(B^-\to K^- \pi^0)$, producing a larger $\Delta {\cal A}$;
the $\overline B{}^0_s\to\pi^+\pi^-$ rate is sizably enhanced through the enhancement in $PA^{\prime c}$;
the $|P^u/P^c|$ ratio is enhanced from $0.35$ to $0.74$ and can change mixing induced CP asymmetries drastically.
For the comparison of the predictions of Fac and Res, we observe the following points.
(i) Belle and BaBar give very different results for the ${\cal A}(\overline B{}^0\to K_s K_s)$ mode, namely
Belle gives ${\cal A}(\overline B{}^0\to K_s K_s)=-0.38\pm0.38\pm0.5$~\cite{Nakahama:2007dg}, while BaBar gives $0.40\pm0.41\pm0.06$~\cite{Aubert:2006gm}.
The result of Res prefers the Belle result, while Fac prefers a negative but less sizable direct CP asymmetry.
(ii) Except for $\overline B {}^0\to K^+ K^-$,
the sizes of the predicted direct CP asymmetries of $B^-, \overline B{}^0\to PP$ modes from Res are smaller than those in Fac.
(iii) For $B_s$ decay rates, Res predicts larger rates in $\overline B{}^0_s\to K^0\pi^0$, $K^0\eta$, $\pi^0\pi^0$ decays,
but gives similar predictions on $K^0\eta'$, $\eta\eta$, $\eta\eta'$, $\pi^0\eta$ and $\pi^0\eta'$ rates.
(iv) For predictions on direct CP asymmetries,
we note that the signs of ${\cal A}(\overline B{}_s\to\eta'\eta')$ and ${\cal A}(\overline B{}_s\to K^0\overline K{}^0)$ are opposite in Fac and Res;
Res predicts non-vanishing ${\cal A}(\overline B{}_s\to\pi^+\pi^-,\pi^0\pi^0)$ and larger ${\cal A}(\overline B{}_s\to\pi^0\eta)$,
while predictions of Fac and Res on other modes are similar.
(v) Finally, Fac and Res have different predictions on the mixing induced CP asymmetries of $\overline B{}^0\to\eta\eta$, $\eta\eta'$, $\pi^0\eta$, $\pi^0\eta'$, $\overline B{}_s\to \pi^0\eta$, $\pi^0\eta'$, $K_s\pi^0$, $K_s\eta$ and $K_S\eta'$ modes.
In particular, the signs of central values of the asymmetries of $\overline B{}^0\to\pi^0\eta$, $\pi^0\eta'$, $\overline B{}_s\to \pi^0\eta$, $\pi^0\eta'$, and $K_s\eta$ modes are opposite.
These predictions can be checked in the future.
\begin{acknowledgments}
This work is
supported in part by the Ministry of Science and Technology of R.O.C. under Grant Nos. 103-2112-M-033-002-MY3 and 106-2112-M-033-004-MY3.
\end{acknowledgments}
\section{Introduction}
Fix $d \geq 3$, which we write as $p+1$, and consider $d$-regular graphs $X$ on $n$ vertices with $n \rightarrow \infty$. During the last decade, there has been a lot of interest in studying the simple random walk (SRW) and the non-backtracking random walk (NBRW) on such graphs. The focus has been to understand mixing times and related cutoff phenomena \cite{CP, Hermon, LP, LLP, ON, Sardari}. The non-backtracking random walk was introduced by Hashimoto \cite{Has}; it mixes faster, has sharper transitions and has been very useful in multiple settings \cite{ABLS, BLM, BS, FH, GK, Hermon2, HSS, OW}. We focus exclusively on the NBRW on $X$, which is defined as follows:
\[K_t(x,y)=\# \bigg \lbrace (x=x_0, x_1,\ldots, x_t=y) \bigg \vert \substack{ x_i \in X \\ x_i\sim x_{i+1}\\ x_{i-1}\neq x_{i+1}} \bigg \rbrace ,\]
where $x_i \sim x_{i+1}$ indicates that $(x_i, x_{i+1})$ is an edge.
Let $N(t):= \sum_y K_t(x,y) =(p+1)p^{t-1}$ and let $P^t(x,y)= \frac{1}{N(t)} K_t(x,y)$ be the transition matrix of the non-backtracking random walk on $X$. The total variation distance of $P^t_x= P^t(x, \cdot)$ from the uniform measure is defined as
$d_x(t):= \frac{1}{2} \sum_{y \in X} \bigg \vert P^t(x,y) - \frac{1}{n} \bigg \vert $.
We will also consider the total variation distance when starting at the worst possible starting point
\[ d(t):=\max_{x \in X} \{ d_x(t) \} .\]
For $0<\eta <1$, the total variation mixing time is defined as
\[t_{mix}(\eta)= \min \{t \geq 0 : d(t) \leq \eta\}.\]
The main focus of this paper is studying the cutoff phenomenon. We say that the NBRW on $X$ exhibits cutoff at $t_n$ with window $w_n$ if
\begin{equation}\label{cutoff}
\lim_{c \rightarrow \infty} \lim_{n \rightarrow \infty} d \left(t_n - c w_n \right)= 1 \mbox{ and } \lim_{c \rightarrow \infty} \lim_{n \rightarrow \infty} d \left(t_n + cw_n \right)= 0.
\end{equation}
If $N(t) \leq n$, one checks that
$d_x(t)\geq \frac{U_x(t)}{n},$
where $U_x(t)$ is the number of vertices that are not reached by the walk at time $t$, when starting at $x$. Since at most $N(t)$ vertices are reached, $U_x(t) \geq n - N(t)$, and hence
\begin{equation*}
d(t) \geq 1- \frac{N(t)}{n},
\end{equation*}
if $N(t) \leq n$, which implies that
\begin{equation}\label{lowercut}
t_{mix}(1- \eta) \geq \log_p n - \log_p \eta^{-1}.
\end{equation}
This gives an absolute lower limit in \eqref{cutoff}: the cutoff time can be no smaller than $t_n=\log_p n$ with bounded window $w_n$, and we are interested in graphs $X$ for which this $t_n$ is indeed the cutoff time for the NBRW.
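The counting bound behind \eqref{lowercut} can be evaluated concretely. In the sketch below, the degree and graph size are arbitrary choices of ours:

```python
import math

def tv_lower_bound(p, n, t):
    """Counting lower bound d(t) >= 1 - N(t)/n, where N(t) = (p+1)p^(t-1)
    is the number of non-backtracking walks of length t from a fixed vertex."""
    N = (p + 1) * p ** (t - 1)
    return max(0.0, 1.0 - N / n)

# Example: 3-regular graphs (p = 2) on n = 2**20 vertices, so log_p n = 20
p, n = 2, 2 ** 20
for t in (15, 17, 19):
    # below the cutoff time log_p n the distance is still macroscopically large
    print(t, tv_lower_bound(p, n, t))
```

For instance, at $t = \log_p n - 3$ the bound is still $1 - 3/16 = 0.8125$, illustrating why $d(t)$ cannot drop before $\log_p n$.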
We will search for such $X$ among different types of expanders. For $\lambda<d$, an $(n, d , \lambda)$ graph $X$ is a $d$-regular graph on $n$ vertices for which the eigenvalues $\{\lambda_{j} \}_{j=0}^{n-1}$ of the adjacency matrix of $X$ satisfy
\[
\begin{cases}
\lambda_0= d:=p+1 &\quad\text{if} \quad j = 0 \\
\vert \lambda_j \vert \leq \lambda &\quad\text{if} \quad j \neq 0 .
\end{cases}
\]
If $\lambda= 2 \sqrt{p}$ then $X$ is called a Ramanujan graph.
The key results in this direction are due to Lubetzky and Sly \cite{LS} and Lubetzky and Peres \cite{LP}. In the first, it is shown that for the random $d$-regular graph
\begin{equation}\label{upcut}
t_{mix}(\eta) \leq \log_p n + 3 \log \eta^{-1} +4.
\end{equation}
Equations \eqref{lowercut} and \eqref{upcut} show that for such graphs the NBRW exhibits total variation cutoff at $\log_p n$ with a bounded window. In the second one, $X$ is assumed to be Ramanujan, and they show that the NBRW on any Ramanujan graph exhibits cutoff at $\log_p n$, but whether it occurs with a bounded window is not resolved. More precisely, they show that
\begin{equation*}
d(t) \leq \frac{1}{\log n},
\end{equation*}
for every $t > \log_p n + 3\log_p \log n$. As a corollary they also prove a purely combinatorial fact about the almost diameter. For Ramanujan graphs, for any $x$ we have that
\begin{equation}\label{diameter}
\# \{ y \in X : \vert \mbox{dist}(x,y) - \log_p n \vert > 3 \log_p \log n\}= o(n).
\end{equation}
The same result was independently proven by Sardari in \cite{Sardari}.
Our first result shows that the NBRW on a Ramanujan graph with large girth $g$ exhibits cutoff with a bounded window.
\begin{theorem}\label{Ram2}
Fix $\delta>0$. The NBRW on a Ramanujan graph with $g \geq \delta \log_p n$ satisfies
\[ t_{mix}(\varepsilon) \leq \log_p n +2 \log_p \varepsilon^{-1} + 2 \log_p (2 +20 \delta^{-1}), \]
for every $\varepsilon>0$.
\end{theorem}
\begin{remark}
The girth condition of Theorem \ref{Ram2} is satisfied for Ramanujan graphs \cite{LPS} with $\delta= 2/3 $. This shows that the NBRW on these graphs exhibits cutoff with a bounded window, which was one of our goals in this note.
\end{remark}
It is important to note that most of the known examples where the cutoff window is bounded (other than the result of Lubetzky and Sly \cite{LS}) are non-local Markov chains, such as riffle shuffles \cite{BD} and random transvections \cite{Hild}.
Next, we discuss what can be said about cutoff if we drop the Ramanujan condition. Writing the eigenvalues in the form
\[\lambda_j = 2 \sqrt{p} \cos \theta_j ,\]
for $\vert \lambda_j \vert \leq 2 \sqrt{p}$ we have a unique $\theta_j \in [0, \pi]$, while for the ``exceptional'' eigenvalues we choose $\theta_j$ uniquely in the form
\[
\begin{cases}
\theta_j= i \phi_j \log p&\quad\text{if} \quad \lambda_j > 2 \sqrt{p} \\
\theta_j = \pi + i \psi_j \log p &\quad\text{if} \quad \lambda_j < - 2 \sqrt{p},
\end{cases}
\]
with $\phi_j, \psi_j \in (0, \frac{1}{2})$.
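This parametrization is straightforward to implement. The sketch below (the function name and the numerical check are ours) recovers $\theta_j$, $\phi_j$ or $\psi_j$ from an eigenvalue and confirms that the trivial eigenvalue $\lambda_0 = p+1$ corresponds to $\theta_0 = i \log \sqrt{p}$, i.e.\ $\phi_0 = 1/2$, the endpoint of the allowed range:

```python
import math

def theta_param(lam, p):
    """Classify an eigenvalue lam of a (p+1)-regular graph as in the text:
    bulk eigenvalues give theta in [0, pi]; exceptional ones give phi or psi."""
    x = lam / (2 * math.sqrt(p))
    if abs(x) <= 1:
        return 'theta', math.acos(x)               # lam = 2 sqrt(p) cos(theta)
    if x > 1:
        return 'phi', math.acosh(x) / math.log(p)  # theta = i phi log p
    return 'psi', math.acosh(-x) / math.log(p)     # theta = pi + i psi log p

# Trivial eigenvalue lam_0 = p + 1: since cos(i y) = cosh(y), it sits at phi_0 = 1/2
p = 2
kind, phi0 = theta_param(p + 1, p)
print(kind, round(phi0, 12))  # phi 0.5
```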
\begin{definition}\label{S-X}
A sequence of graphs $X$ is said to satisfy the density hypothesis if for every $0\leq \alpha < 1/2$ and $\varepsilon >0$, the number of exceptional eigenvalues $M$ satisfies
\[M(\alpha,X):= \# \{ j: \phi_j \geq \alpha \} + \# \{ j: \psi_j \geq \alpha \} \ll_{\varepsilon} n^{1-2 \alpha + \varepsilon}. \]
\end{definition}
For a discussion of this density hypothesis see \cite{Sa} and \cite{GoKa}. The point is that this density can often be established in cases where the Ramanujan bound is not known or even fails.
In \cite{BoLa} and \cite{GoKa} it is shown that the density hypothesis, together with the assumption that $X$ is an expander, suffices to show that the SRW on $X$ exhibits cutoff at $\frac{p+1}{p-1} \log_p n$. We show that the shortest possible cutoff time applies to the NBRW.
\begin{theorem}\label{SX}
Let $X$ be a homogeneous sequence (that is, the automorphism group acts transitively on the vertices) of $(n,d, \lambda)$ expander graphs which satisfy the density hypothesis. Then the NBRW on $X$ exhibits cutoff at $\log_p n$. That is,
\[ d((1+ \eta) \log_p n) \rightarrow 0,\]
for every $\eta>0$.
\end{theorem}
The next results focus on the diameter of $(n, d, \lambda)$ graphs and strengthen \eqref{diameter}.
Let $\mathcal{N}_x(\ell)$ be the number of vertices $y\in X$ such that $d(x,y)> \ell$.
\begin{theorem}\label{expa}
Let $X$ be an $(n, d , \lambda)$ graph; then for $\xi>0$ we have that
\[\max_{x\in X} \bigg \{ \frac{1}{n}\mathcal{N}_x \left( \frac{1}{2}\log_b n + \xi \right) \bigg\}\leq \frac{4}{b^{2 \xi}} ,\]
where $b= \frac{d}{ \lambda} + \sqrt{ \left(\frac{d}{ \lambda} \right)^2 -1 } $.
\end{theorem}
We note that if we choose $\xi$ (bounded) so that $4 b^{-2 \xi}<1/2$, then given $x,y \in X$ we can find a common $z$ with $d(x,z) <\frac{1}{2}\log_b n + \xi $ and $d(y,z) <\frac{1}{2}\log_b n + \xi $. Therefore, $d(x,y) <\log_b n +2 \xi $. This shows that the diameter is at most $\log_b n +2 \xi $. This matches the bounds for the diameter that were derived in \cite{LPS} for Ramanujan graphs and in \cite{CFM} for $(n, d, \lambda)$ graphs. As in these papers, a crucial element in the analysis is the use of Chebychev polynomials of the first kind.
Let $p=d-1$. For the case where $X$ is Ramanujan, we have that $\lambda=2 \sqrt{p}$ and $b=\sqrt{p}$. Theorem \ref{expa} gives the following.
\begin{corollary}\label{Ram}
Let $X$ be a Ramanujan graph on $n$ vertices, then for $\xi>0$ we have that
\[ \max_{x\in X} \bigg \{ \frac{1}{n}\mathcal{N}_x(\log_p n + \xi ) \bigg \}\leq \frac{4}{p^{ \xi}} .\]
\end{corollary}
\begin{remark}
Corollary \ref{Ram} gives a bounded window strengthening \eqref{diameter} and if it is not optimal, it is very close to being so. In particular, it allows one to replace the $3 \log_p \log n$ term in \eqref{diameter} by any function $f(n)$ which goes to infinity with $n$.
\end{remark}
In the context of $d$-regular graphs, the almost diameter bound of Corollary \ref{Ram} is essentially the smallest it could be among all such graphs. On the other hand, the bound $2 \log_p n +4 $ for the diameter of a Ramanujan graph is probably not optimal. The random $d$-regular graph has diameter $(1+o(1)) \log_p n$ (see \cite{BF}), however the \cite{LPS} Ramanujan graphs can have diameter at least $\frac{4}{3} \log_p n$, as was shown in \cite{Sardari}. We expect that this $\frac{4}{3} \log_p n$ is an upper bound for the diameter of a Ramanujan graph.
As is standard in cutoff analysis, our proofs involve the $\ell^2$ distance
$\Vert P^t_x -U\Vert_2^2= \sum_{y \in X} \bigg \vert P^t(x,y) - \frac{1}{n} \bigg \vert^2 $
and its average over $x$
\[d_{2}(t):= \frac{1}{n} \sum_x \Vert P^t_x -U\Vert_2^2.\]
Note that if $X$ is homogeneous, then $d_2(t)=\Vert P^t_x -U\Vert_2^2$ for all $x$, as is the case for all of the quantities defined in terms of the starting point $x$.
For the case of reversible Markov chains, such as the SRW on $X$, one can express $d_2(t)$ in terms of the eigenvalues and eigenfunctions of the transition matrix (see chapter $12$ of \cite{LPb}). Studying the spectrum of the transition matrix has been a powerful tool for proving cutoff for many well known Markov chains \cite{DS, Hild, BN}. We make judicious use of Chebychev polynomials and the eigenvalues and eigenfunctions of the adjacency matrix of $X$ to prove our results, and avoid using the NBRW on the edges of the graph.
Our analysis leads to the following basic conjecture.
\begin{conjecture}\label{conj}
If $X$ is a sequence of Ramanujan graphs and $t < 2 \log_p n$, then
\begin{equation}\label{13}
d_{2}(t) \sim \frac{1}{N(t)}
\end{equation}
as $n \rightarrow \infty$.
\end{conjecture}
This is consistent with the model that in this window the $N(t)$ end points of walks of length $t$
place themselves at random among the $n$ vertices.
Our proofs involve approximations to \eqref{13}; the source of the gain is that the Kesten measure on $[-2 \sqrt{p}, 2 \sqrt{p}]$ vanishes to second order at $-2 \sqrt{p}$ and $2 \sqrt{p}$ (see \eqref{new}). In \cite{AGV} it is proven that the probability measure supported on $[-2 \sqrt{p}, 2 \sqrt{p}]$ corresponding to the eigenvalues of a Ramanujan graph converges to the Kesten measure as $n \rightarrow \infty$. Conjecture \ref{conj} requires that this convergence holds with polynomials of degree as large as $\log_p n$. In a forthcoming paper \cite{SZ} this convergence, and in particular Conjecture \ref{conj}, is established for various arithmetic Ramanujan graphs. Our Conjecture \ref{conj} implies that the NBRW on these Ramanujan graphs exhibits cutoff with an explicit and tight bounded window, namely
\[ t_x(\varepsilon) \leq \log_p n + 2 \log_p \varepsilon^{-1},\]
for every starting point $x$.
\section{Preliminaries}
Let $X$ be a connected $d$-regular graph on $n$ vertices, where $d$ is fixed. Let $A$ denote the adjacency matrix of $X$; $A$ is a symmetric matrix with eigenvalues
\[-d \leq \lambda_{n-1} \leq \ldots \leq \lambda_1<\lambda_0=d .\]
Denote the corresponding orthonormal basis of eigenfunctions by $f_{n-1}, \ldots, f_0$, with $f_0(x)=\frac{1}{\sqrt{n}}$ for every $x\in X$. The fact that the $\{f_j\}$ are orthonormal means that
\begin{equation} \label{orthog}
\sum_{x \in X} f_i(x) f_{j}(x)=\delta_{i,j}.
\end{equation}
The fact that $\{f_j\}$ is an orthonormal basis gives that
\begin{equation*}
\delta_x(y)= \sum_{j=0}^{n-1} \langle f_j, \delta_x \rangle f_j(y),
\end{equation*}
which translates to
\begin{equation}\label{sum}
\delta_x(y)= \sum_{j=0}^{n-1}f_j(x) f_j(y).
\end{equation}
When considering the $t$-th power of $A$, we have that the $(x,y)$ entry $A^t(x,y)$ is equal to the number of walks of length $t$ starting at $x$ and ending at $y$.
Let $P$ be a polynomial of the form
\[P(x)= a_0 + a_1 x+ \ldots + a_{\ell}x^{\ell}.\]
We have that the matrix $P(A)$ can be expressed as
\begin{equation*}
P(A)(x,y)= \sum_{j=0}^{n-1} P(\lambda_j)f_j(x) f_j(y).
\end{equation*}
The key quantity that we estimate is the variance $W$ with respect to $P,$ defined as
\begin{equation}\label{ee}
W(P,x):= \sum_{y } \left( P(A)(x,y) - \frac{P(\lambda_0)}{n} \right)^2,
\end{equation}
which by \eqref{orthog} is equal to the spectral sum
\begin{equation}\label{var}
\sum_{j \neq 0} \vert P(\lambda_j) \vert^2 f_j^2(x).
\end{equation}
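The identity between \eqref{ee} and \eqref{var} can be sanity-checked on a small example. The sketch below uses the complete graph $K_4$ (our choice, not taken from the text), whose spectrum $\{3, -1, -1, -1\}$ makes the spectral side computable by hand:

```python
def matmul(A, B):
    """Product of two square matrices given as lists of lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

# Complete graph K_4: 3-regular, eigenvalues 3 (once) and -1 (three times)
n = 4
A = [[0 if i == j else 1 for j in range(n)] for i in range(n)]

# Take P(x) = x^2, so P(A) = A^2 and P(lambda_0) = 9
PA = matmul(A, A)
x = 0
W = sum((PA[x][y] - 9 / n) ** 2 for y in range(n))

# Spectral side: every nontrivial eigenvalue is -1 and sum_j f_j(x)^2 = 1,
# so the sum over j != 0 collapses to P(-1)^2 (1 - 1/n)
spectral = (-1.0) ** 2 * (1 - 1 / n)
print(W, spectral)  # 0.75 0.75
```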
\section{The almost diameter}
To estimate the almost diameter of $X$, we use the following key lemma.
\begin{lemma}\label{poly}
Let $\ell(P)$ be the degree of $P$, then for any $x$
\begin{equation*}
\left( \frac{P(\lambda_0)}{n}\right)^2 \mathcal{N}_x(\ell(P)) \leq \max_{\lambda\neq \lambda_0 } \{ \vert P(\lambda ) \vert^2 \},
\end{equation*}
where $\mathcal{N}_x(\ell(P))$ is the number of vertices $y\in X$ such that $d(x,y)> \ell(P)$.
\end{lemma}
\begin{proof}
First of all, we note that since $A^t(x,y)$ is equal to the number of walks of length $t$ starting at $x$ and ending at $y$, we have that
\begin{equation}
\mbox{for every $x, y \in X$, if }d(x,y) > \ell(P) \mbox{ then } P(A)(x,y)=0.
\end{equation}
Combining this with \eqref{ee} and \eqref{var} we have that
\begin{equation}\label{ineq}
\sum_{y : d(x,y)> \ell(P)} \left( \frac{P(\lambda_0)}{n}\right)^2 \leq W(P,x) \leq \max_{\lambda \neq \lambda_0 } \{ \vert P(\lambda ) \vert^2 \} \sum_{j \neq 0} \vert f_j(x) \vert^2 .
\end{equation}
Equation \eqref{sum} gives that $\sum_{j \neq 0} \vert f_j(x) \vert^2 \leq 1 $, which finishes the proof.
\end{proof}
\subsection{Chebychev polynomials of the first kind}
Let $T_{\ell}$ be the Chebychev polynomial of the first kind of degree $\ell$, that is, $T_{\ell}(x)= \cos (\ell \arccos x)$, and therefore $T_{\ell}(x) \in [-1,1]$ for every $x\in [-1,1]$.
\begin{lemma}\label{Che}
For $0< \lambda \leq \lambda_0$, the Chebychev polynomials of the first kind satisfy
$$T_{\ell} \left(\frac{\lambda_0}{\lambda} \right) \geq \frac{b^{\ell}}{2},$$
where $b=\left( \frac{\lambda_0}{ \lambda} + \sqrt{ \left(\frac{\lambda_0}{ \lambda} \right)^2 -1 } \right)$.
\end{lemma}
\begin{proof}
Using the fact that $\cos \theta= \frac{e^{i \theta} +e^{-i \theta}}{2}$, we can write $\lambda_0=\lambda \cos \theta_0$, where $\theta_0= i \log \left( \frac{\lambda_0}{ \lambda} + \sqrt{ \left(\frac{\lambda_0}{ \lambda} \right)^2 -1 } \right)$. This gives that
$$T_{\ell} \left(\frac{\lambda_0}{\lambda} \right) =T_{\ell} \left( \cos \theta_0 \right)= \frac{1}{2} (b^{\ell} + b^{-\ell}) \geq \frac{b^{\ell}}{2}.$$
\end{proof}
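The closed form used in the proof, $T_{\ell}(\cos \theta_0) = \frac{1}{2}(b^{\ell} + b^{-\ell})$, is easy to confirm numerically against the three-term recurrence for $T_{\ell}$. In this sketch the choice $d=3$ (so $b=\sqrt{2}$) and the degree range are arbitrary:

```python
import math

def cheb_T(ell, x):
    """T_ell via the recurrence T_{l+1}(x) = 2x T_l(x) - T_{l-1}(x)."""
    t0, t1 = 1.0, x
    if ell == 0:
        return t0
    for _ in range(ell - 1):
        t0, t1 = t1, 2 * x * t1 - t0
    return t1

lam0, lam = 3.0, 2 * math.sqrt(2)        # Ramanujan values for d = 3, so b = sqrt(2)
x = lam0 / lam
b = x + math.sqrt(x * x - 1)
for ell in range(1, 15):
    closed = (b ** ell + b ** (-ell)) / 2  # T_ell(cosh u) = cosh(ell u), with b = e^u
    assert abs(cheb_T(ell, x) - closed) < 1e-8 * closed
    assert cheb_T(ell, x) >= b ** ell / 2  # the lower bound of the lemma
print(round(cheb_T(10, x), 6))  # 16.015625, i.e. (2**5 + 2**-5)/2
```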
\subsection{The almost diameter for expanders}
In this section, we present the proof of Theorem \ref{expa} concerning the almost diameter of $(n, d, \lambda)$ graphs.
\begin{proof}[Proof of Theorem \ref{expa}]
Let $T_{\ell}$ be the Chebychev polynomial of the first kind of degree $\ell$. We apply Lemma \ref{poly} to the polynomial
\[P(x)= T_{\ell}\left(\frac{x}{\lambda} \right),\]
where $\ell$ will be determined later.
The right hand side of the equation in Lemma \ref{poly} satisfies that
\begin{equation}\label{one}
\max_{\lambda_i \neq \lambda_0 } \{ \vert P(\lambda_i ) \vert^2 \}\leq 1,
\end{equation}
since all $\lambda_i \neq \lambda_0$ satisfy that $\vert \lambda_i \vert \leq \lambda$ and $T_{\ell}(x)= \cos (\ell \arccos x)$
for $x \in [-1,1]$. At the same time, Lemma \ref{Che} gives that
\begin{equation}\label{two}
\left( P(\lambda_0)\right)^2= \left( T_{\ell}\left(\frac{\lambda_0}{\lambda} \right)\right)^2\geq \frac{b^{2\ell}}{4}.
\end{equation}
Lemma \ref{poly} and equations \eqref{one} and \eqref{two} give that
\begin{equation}\label{final}
\frac{1}{n}\mathcal{N}_x(\ell) \leq \frac{4n}{b^{2\ell}}.
\end{equation}
Let $\xi>0$ be as in Theorem \ref{expa} and set $\ell=\frac{1}{2} \log_{b}n + \xi$. Then equation \eqref{final} gives the desired result.
\end{proof}
\section{The mixing time for the non-backtracking random walk}
In this section, we present our results concerning the mixing time of the NBRW on $X$.
\subsection{Chebychev polynomials of the second kind}
The NBRW can be expressed in terms of the Chebychev polynomials of the second kind. In this section, we explain this connection and we prove some useful properties for the Chebychev polynomials of the second kind.
Let $U_{\ell}$ be the Chebychev polynomial of the second kind of degree $\ell$, defined as
\[U_{\ell}(\cos \theta) =\frac{\sin \left((\ell+1) \theta \right)}{\sin \theta}. \]
The Chebychev polynomials of the second kind satisfy the following recurrence relation:
\[\begin{cases}
U_0(x)&=1\cr
U_1(x)&=2x\cr
U_{\ell+1}(x) &= 2x U_{\ell}(x) - U_{\ell-1}(x).
\end{cases}\]
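The recurrence can be used directly to evaluate $U_{\ell}$ and to confirm the defining identity; a minimal sketch (the value of $\theta$ is arbitrary):

```python
import math

def cheb_U(ell, x):
    """U_ell via the recurrence U_{l+1}(x) = 2x U_l(x) - U_{l-1}(x)."""
    u0, u1 = 1.0, 2.0 * x
    if ell == 0:
        return u0
    for _ in range(ell - 1):
        u0, u1 = u1, 2 * x * u1 - u0
    return u1

# Check the defining identity U_ell(cos theta) = sin((ell+1) theta) / sin(theta)
theta = 0.7
for ell in range(8):
    lhs = cheb_U(ell, math.cos(theta))
    rhs = math.sin((ell + 1) * theta) / math.sin(theta)
    assert abs(lhs - rhs) < 1e-10
print("identity verified")
```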
Set
\begin{equation} \label{defini}
P_{\ell}(x)= p^{\ell/2} U_{\ell}\left( \frac{x}{2 \sqrt{p}} \right).
\end{equation}
\begin{lemma}\label{stype}
Let $A$ be the adjacency matrix of a $(p+1)$-regular graph. We have that
\[P_{\ell}(A)(x,y)= \sum_{0 \leq j \leq \ell/2} K_{\ell -2j} (x,y),\]
where $K_t(x,y)$ is the number of non-backtracking random walks of length $t$ from $x$ to $y$.
\end{lemma}
\begin{proof}
The two sides have the same generating function
\[\sum_{\ell=0}^{\infty} P_{\ell}(A)\, t^{\ell} = \frac{1}{1-At +p t^2}, \]
and therefore they are equal. For more details, we refer to Lemma 1.4.3 of \cite{DSV}.
\end{proof}
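Since the lemma is an algebraic identity, it can be checked on any regular graph. The sketch below uses the complete graph $K_4$ (an arbitrary choice of ours) together with the standard non-backtracking recursion $K_{t+1} = A K_t - p K_{t-1}$ (with $K_2 = A^2 - dI$), which the text does not spell out:

```python
def matmul(A, B):
    """Product of two square integer matrices given as lists of lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def madd(A, B, c):
    """Entrywise A + c*B."""
    return [[A[i][j] + c * B[i][j] for j in range(len(A))] for i in range(len(A))]

n, d = 4, 3                      # complete graph K_4; p = d - 1 = 2
p = d - 1
I = [[int(i == j) for j in range(n)] for i in range(n)]
A = [[int(i != j) for j in range(n)] for i in range(n)]

# Non-backtracking walk counts via the standard recursion:
# K_0 = I, K_1 = A, K_2 = A^2 - dI, and K_{t+1} = A K_t - p K_{t-1} for t >= 2
K = [I, A, madd(matmul(A, A), I, -d)]
for t in range(2, 8):
    K.append(madd(matmul(A, K[t]), K[t - 1], -p))

# P_ell(A) obeys the same three-term recursion with P_0 = I, P_1 = A
P = [I, A]
for ell in range(1, 8):
    P.append(madd(matmul(A, P[ell]), P[ell - 1], -p))

# The lemma: P_ell(A) = sum over 0 <= j <= ell/2 of K_{ell-2j}
for ell in range(8):
    S = [[sum(K[ell - 2 * j][a][b] for j in range(ell // 2 + 1)) for b in range(n)]
         for a in range(n)]
    assert S == P[ell]
print("lemma verified for ell = 0..7 on K_4")
```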
We start with the following lemma. Set $ \lambda_j=2 \sqrt{p}\cos \theta_j$. Notice that $\theta_0= i \log \sqrt{p}$ and therefore
\begin{equation}\label{l0}
P_{\ell}(\lambda_0)= \frac{p^{\ell+1} -1}{p-1}.
\end{equation}
\begin{lemma}\label{s2}
Let $g$ be the girth of $X$ and let $\ell \leq g/5$. For $n$ large enough, we have that
\[\sum_{j=1}^{n-1} \left( U_{\ell}(\cos \theta_j) \right)^2 f_j^2(x) \leq 2,\]
for every $x \in X$.
\end{lemma}
\begin{proof}
Since $\ell < g$, the first $\ell$ steps of the NBRW on $X$ are the same as the first $\ell$ steps on a $d$-regular tree. Therefore,
\[ \sum_{0 \leq j \leq \ell/2} K_{\ell -2j} (x,y) = \begin{cases}
1 & d(x,y)\leq \ell \mbox{ and } d(x,y) \equiv \ell \mod 2,\\
0 & \mbox{otherwise.}
\end{cases}
\]
Combined with \eqref{ee}, \eqref{l0} and Lemma \ref{stype}, this gives that
\begin{align*}
W(P_{\ell},x)&= \sum_{\substack{ d(x,y)> \ell \mbox{ or} \\ d(x,y) \equiv \ell +1 \mod 2}} \left( \frac{p^{\ell+1} -1}{n(p-1)} \right)^2 + \sum_{\substack{ d(x,y)\leq \ell \\ d(x,y) \equiv \ell \mod 2}} \left( 1- \frac{p^{\ell+1} -1}{n(p-1)} \right)^2 \cr
& \leq \frac{1}{n} \left( \frac{p^{\ell+1} -1}{p-1} \right)^2 + \sum_{\substack{ d(x,y)\leq \ell \\ d(x,y) \equiv \ell \mod 2}} 1\cr
&\leq \frac{1}{n} \left( \frac{p^{\ell+1}-1}{p-1}\right)^2+ \left( \frac{p^{\ell+1}-1}{p-1}\right) .
\end{align*}
For $n $ large, we use the fact that $2 \ell+2 \leq \frac{2}{5}g +2 \leq \frac{4}{5} \log_p n+2$ to get that
\begin{align}
\label{c}W(P_{\ell},x)&\leq \frac{p^{\ell+1}}{p-1} .
\end{align}
Equations \eqref{var} and \eqref{defini} give that
\[\sum_{j=1}^{n-1} \left( U_{\ell}(\cos \theta_j) \right)^2 f_j^2(x) \leq \frac{p}{p-1} \leq 2,\]
as desired.
\end{proof}
\subsection{The non-backtracking random walk}
The next lemma gives $K_t$ as an explicit polynomial in $A$ (see also \cite{CA} and \cite{ABLS}).
\begin{lemma}\label{polynomial}
Set $Q_t(x)= p^{t/2} \left( \frac{p-1}{p} U_t \left(\frac{x}{2 \sqrt{p}}\right) + \frac{2}{p} T_t \left(\frac{x}{2 \sqrt{p}} \right) \right)$. We have that
\[Q_t(A)(x,y)=K_t(x,y),\]
for every $x,y \in X$.
\end{lemma}
\begin{proof}
Using Lemma \ref{stype}, we can write that
\begin{align}
\label{k}K_t(x,y)= P_t(A)(x,y)-P_{t-2}(A)(x,y).
\end{align}
Using the following relationship between Chebychev polynomials of the two types
\[U_{t}=U_{t-2} + 2 T_t\]
and \eqref{defini}, we can rewrite \eqref{k} as
\begin{align}
\label{k2}K_t(x,y)= p^{t/2} \left( \frac{p-1}{p} U_t \left(\frac{A}{2 \sqrt{p}}\right) + \frac{2}{p} T_t \left(\frac{A}{2 \sqrt{p}} \right) \right)(x,y).
\end{align}
In other words,
\begin{align}
Q_t(A)(x,y)=K_t(x,y).
\end{align}
\end{proof}
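The scalar identity $Q_t = P_t - P_{t-2}$ underlying the proof can be spot-checked numerically. In this sketch the value $p=2$ and the sample points are arbitrary choices:

```python
import math

def cheb(ell, x, kind):
    """T_ell ('T') or U_ell ('U') via the shared recurrence f_{l+1} = 2x f_l - f_{l-1}."""
    f0, f1 = 1.0, (x if kind == 'T' else 2.0 * x)
    if ell == 0:
        return f0
    for _ in range(ell - 1):
        f0, f1 = f1, 2 * x * f1 - f0
    return f1

p = 2.0

def P_poly(ell, x):
    """P_ell(x) = p^(ell/2) U_ell(x / (2 sqrt(p)))."""
    return p ** (ell / 2) * cheb(ell, x / (2 * math.sqrt(p)), 'U')

def Q_poly(t, x):
    """Q_t(x) as defined in the lemma."""
    y = x / (2 * math.sqrt(p))
    return p ** (t / 2) * ((p - 1) / p * cheb(t, y, 'U') + 2 / p * cheb(t, y, 'T'))

# The identity Q_t = P_t - P_{t-2} that drives the proof, checked at sample points
for t in range(2, 10):
    for x in (-2.5, -1.0, 0.3, 1.7, 3.0):
        assert abs(Q_poly(t, x) - (P_poly(t, x) - P_poly(t - 2, x))) < 1e-6
print("Q_t = P_t - P_{t-2} verified")
```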
We now use Lemma \ref{polynomial} to write the following expression for the variance.
\begin{align}
W(Q_t(A),x)&=p^t \sum_{j \neq 0} \left(\frac{p-1}{p} \frac{\sin ((t+1) \theta_j)}{\sin \theta_j} + \frac{2}{p} \cos (t \theta_j) \right)^2f_j^2(x)\cr
&\label{last5} \leq p^t \left( t+1 \right)^2,
\end{align}
which is the bound given in Lubetzky and Peres \cite{LP}. As they note in Remark 3.7 of \cite{LP}, in order to get rid of the factor $(t+1)^2$ in \eqref{last5}, one needs some control on the distribution of the $\theta_j$. To do so, we assume a lower bound on the girth $g$ of $X$.
\begin{lemma} \label{tv}
Fix $\delta>0$, and assume that $X$ is Ramanujan and has girth $g \geq \delta \log_p n$. Then
\[W(Q_t(A),x) \leq 12 \left( \frac{10}{\delta}+1 \right)^2p^t, \]
for $\log_p n \leq t \leq 2 \log_p n$.
\end{lemma}
\begin{proof}
Set $k= \lfloor \frac{10}{\delta}\rfloor +1$. For $t \in [\log_p n, 2 \log_p n]$, write $t+1$ as $m k + r$ with $0 \leq r <k$. Notice that then
$m \leq \frac{2}{k} \log_p n \leq g/5$ and so we can apply Lemma \ref{s2} with this $m$.
According to Lemma \ref{polynomial} with $\lambda_j= 2 \sqrt{p} \cos \theta_j,$ we have that
\begin{align}
W(Q_t(A),x)&=p^t \sum_{j \neq 0} \left(\frac{p-1}{p} \frac{\sin ((t+1) \theta_j)}{\sin \theta_j} + \frac{2}{p} \cos (t \theta_j) \right)^2f_j^2(x)\cr
& \label{cut} \leq 2 p^t \sum_{j \neq 0} \left( \left( \frac{\sin ((mk+r) \theta_j)}{\sin \theta_j} \right)^2+ \frac{4}{p^2} \cos^2 (t \theta_j) \right)f_j^2(x).
\end{align}
Using standard trigonometric identities we have that
\begin{align}
\eqref{cut}& \leq 2 p^t \sum_{j \neq 0}\left( \frac{\sin (mk \theta_j) \cos (r \theta_j)+ \cos (mk \theta_j) \sin (r \theta_j)}{\sin \theta_j} \right)^2f_j^2(x) + 8p^{t-2}\cr
&\leq 4 p^t \sum_{j \neq 0} \left( \left( \frac{\sin (mk \theta_j) }{\sin \theta_j} \right)^2 + \left(\frac{\sin (r \theta_j) }{\sin \theta_j} \right)^2 \right)f_j^2(x) + 8p^{t-2}\cr
&\leq 4 p^t \sum_{j \neq 0} \left( \left( \frac{\sin (mk \theta_j) }{\sin \theta_j} \right)^2 + r^2 \right)f_j^2(x) + 8p^{t-2}\cr
&\label{last1} \leq 4 p^t \sum_{j \neq 0} \left( \frac{\sin (mk \theta_j)}{\sin \theta_j} \right)^2 f_j^2(x) + 4r^2p^t+ 8p^{t-2}.
\end{align}
We use the fact that
\[\bigg \vert \frac{\sin (mk \theta_j)}{\sin \theta_j} \bigg \vert= \bigg \vert \frac{\sin (mk \theta_j)}{\sin(m \theta_j)} \frac{\sin(m \theta_j)}{\sin \theta_j} \bigg \vert \leq k \bigg \vert \frac{\sin(m \theta_j)}{\sin \theta_j} \bigg \vert \]
and Lemma \ref{s2} to get
\[4 p^t \sum_{j \neq 0} \left( \frac{\sin (mk \theta_j)}{\sin \theta_j} \right)^2 f_j^2(x) \leq 8k^2 p^t . \]
Equation \eqref{last1} gives that
\[W(Q_t(A),x) \leq 12 k^2 p^t \leq 12\left(\frac{10}{\delta} +1\right)^2 p^t,\]
since $0\leq r <k$. This completes the proof of Lemma \ref{tv}.
\end{proof}
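The elementary inequality $\vert \sin (mk \theta_j) \vert \leq k \vert \sin(m \theta_j) \vert$ used in the proof (it is $\vert \sin(ky) \vert \leq k \vert \sin y \vert$ with $y=m\theta_j$, provable by induction on $k$) is easy to spot-check numerically; a minimal Python sketch:

```python
import math
import random

# |sin(k*y)| <= k*|sin(y)|, by induction from
# sin((k+1)y) = sin(ky)cos(y) + cos(ky)sin(y).
random.seed(0)
for _ in range(10_000):
    k = random.randint(1, 30)
    y = random.uniform(0.0, math.pi)
    assert abs(math.sin(k * y)) <= k * abs(math.sin(y)) + 1e-12
```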
\subsection{The bounded window}\label{bd}
In this section, we present the proof of Theorem \ref{Ram2}. Let $P $ be the transition matrix of the non-backtracking random walk on $X$. For $t \geq 0$, we have that $P^t(x,y)=\frac{1}{(p+1)p^{t-1}}K_t(x,y)$.
Therefore, applying the Cauchy--Schwarz inequality we get that
\begin{align*}
4 d^2_x(t) & \leq n \sum_y \bigg \vert \frac{1}{(p+1)p^{t-1}}K_t(x,y)- \frac{1}{n} \bigg \vert^2.
\end{align*}
Since $Q_t(d)= (p+1)p^{t-1}$, equation \eqref{ee} gives that
\begin{align*}
W(Q_t(A),x)&= \sum_y \left( K_t(x,y) - \frac{(p+1) p^{t-1}}{n} \right)^2.
\end{align*}
Therefore,
\begin{equation}
4 d^2_x(t) \leq
\label{ff}\frac{n}{(p+1)^2p^{2t-2}} W(Q_t(A),x).
\end{equation}
Using Lemma \ref{tv}, we get that for $\log_p n \leq t \leq 2 \log_p n,$
\begin{align}
d_x(t) & \leq\frac{n^{1/2}}{2(p+1)p^{t-1}}( W(Q_t(A),x))^{1/2}\cr
& \leq 2 \left( 1+ \frac{10}{\delta} \right) \left(\frac{n}{p^t}\right)^{1/2}.
\end{align}
By taking $t= \log_p n + 2 \log_p \varepsilon^{-1} +2 \log_p \left( 2(1+\frac{10}{\delta}) \right),$ we get that
\[ d_x(t) \leq \varepsilon,\]
and this holds uniformly for $x \in X$.
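As a small illustration of this $\ell^2$ argument (not a substitute for the asymptotic statement), one can compute $d_x(t)$ exactly on a fixed Ramanujan graph. The Petersen graph ($p=2$, girth $5$) is far too small to exhibit cutoff, but the total-variation distance already decays at the $p^{-t/2}$ rate; a self-contained Python sketch:

```python
import math

# Petersen graph adjacency lists: 3-regular (p = 2), girth 5, Ramanujan.
adj = {0: [1, 4, 5], 1: [0, 2, 6], 2: [1, 3, 7], 3: [2, 4, 8], 4: [3, 0, 9],
       5: [0, 7, 8], 6: [1, 8, 9], 7: [2, 5, 9], 8: [3, 5, 6], 9: [4, 6, 7]}
n, p = 10, 2

def tv_distance(x, t):
    """d_x(t) = (1/2) sum_y |P^t(x,y) - 1/n|, where
    P^t(x,y) = K_t(x,y)/((p+1)p^(t-1)); exact non-backtracking path counts, t >= 1."""
    # counts[(u, v)]: number of NB paths from x of the current length ending with step u -> v
    counts = {(x, v): 1 for v in adj[x]}
    for _ in range(t - 1):
        new = {}
        for (u, v), c in counts.items():
            for w in adj[v]:
                if w != u:                       # forbid backtracking
                    new[(v, w)] = new.get((v, w), 0) + c
        counts = new
    K = [0] * n
    for (_, v), c in counts.items():
        K[v] += c
    total = (p + 1) * p ** (t - 1)
    return 0.5 * sum(abs(c / total - 1 / n) for c in K)

assert tv_distance(0, 20) < 0.05                 # already close to uniform
assert tv_distance(0, 20) < tv_distance(0, 3)    # the distance has decayed
```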
\section{The density hypothesis}
Let $X$ be an $(n,d,\lambda)$ graph satisfying the density property, as defined in Definition \ref{S-X}. The goal of this section is to prove that the mixing time of the non-backtracking random walk on $X$ is at most $(1+ \eta) \log_p n$ for every $\eta>0$.
The following lemma is the key to proving Theorem \ref{SX}.
\begin{lemma}\label{eps}
Let $X$ be an $(n,d,\lambda)$ expander sequence satisfying the density hypothesis. Set $I_n=\sum_{j =1}^{n-1} p^{- [\frac{1}{2}- \phi_j]2t} +\sum_{j =1}^{n-1} p^{- [\frac{1}{2}- \psi_j]2t}$. Then
$$\lim_{n \rightarrow \infty}I_n=0,$$
if $t\geq(1+\eta) \log_p n $.
\end{lemma}
\begin{proof}
Since $X$ is an expander, there is $\delta_1>0$ such that $0 \leq \phi_j \leq \frac{1}{2} - \delta_1$. We express the sums in $I_n$ in terms of the function $M$ of Definition \ref{S-X}:
\[ \sum_{j =1}^{n-1} p^{- [\frac{1}{2}- \phi_j]2t} =-\int^{\frac{1}{2}- \delta_1}_0 p^{- [\frac{1}{2}- \alpha]2t} dM(\alpha). \]
Integrating by parts, we get that
\[ \sum_{j =1}^{n-1} p^{- [\frac{1}{2}- \phi_j]2t} = M(0)p^{-t} - M\left(\frac{1}{2}- \delta_1\right) p^{-2 t \delta_1} + 2 t \log p \int^{\frac{1}{2}- \delta_1}_0 p^{- [\frac{1}{2}- \alpha]2t} M(\alpha)d \alpha. \]
The density hypothesis asserts that $M(\alpha) \ll_{\varepsilon} n^{1-2 \alpha + \varepsilon}$ for $0\leq \alpha \leq 1/2$. Therefore,
\begin{align*}
\sum_{j =1}^{n-1} p^{- [\frac{1}{2}- \phi_j]2t} &\ll_{\varepsilon}n^{1 + \varepsilon}p^{-t} + 2 t \int^{\frac{1}{2}- \delta_1}_0 p^{- [\frac{1}{2}- \alpha]2t} n^{1-2 \alpha + \varepsilon} d \alpha\\
&= n^{1 + \varepsilon}p^{-t} \left( 1 + 2t \int^{\frac{1}{2}- \delta_1}_0\left( \frac{ p^{2t}}{ n^{2 }} \right)^{\alpha} d \alpha \right).
\end{align*}
Since $t > \log_p n$, we have that
\begin{align*}
\sum_{j =1}^{n-1} p^{- [\frac{1}{2}- \phi_j]2t} &\ll_{\varepsilon} n^{1 + \varepsilon}p^{-t} \left( 1 + 2t \left(\frac{p^t}{n} \right)^{1- 2 \delta_1} \right)\\
& \ll_{\varepsilon} n^{1 + \varepsilon}p^{-t} + 2t n^{\varepsilon} \left( \frac{n}{p^t} \right)^{2 \delta_1}.
\end{align*}
We can get a similar bound for $\sum_{j =1}^{n-1} p^{- [\frac{1}{2}- \psi_j]2t}$.
Since $\varepsilon>0$ is arbitrarily small and $\delta_1>0$ is fixed, it follows that $\lim_{n \rightarrow \infty}I_n=0$ if $t\geq(1+\eta) \log_p n $.
\end{proof}
We are now ready to prove Theorem \ref{SX}.
\begin{proof}[Proof of Theorem \ref{SX}]
We recall that when $\lambda_j >2 \sqrt{p}$, we have that $\theta_j= i \phi_j \log p$ for $\phi_j \in (0, 1/2]$. Then,
\begin{equation}\label{c}
\vert \cos (t \theta_j) \vert = \bigg \vert \frac{1}{2} \left( p^{t \phi_j} + p^{-t \phi_j} \right) \bigg \vert \leq p^{t \phi_j}
\end{equation}
and
\begin{equation}\label{s}
\bigg \vert U_t \left( \frac{\lambda_j}{2 \sqrt{p}} \right) \bigg \vert = \bigg \vert \frac{p^{(t+1)\phi_j} - p^{-(t+1)\phi_j}}{p^{\phi_j} - p^{-\phi_j}} \bigg \vert \leq p^{(t+2)\phi_j} .
\end{equation}
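The hyperbolic evaluations behind \eqref{c} and \eqref{s} are easy to confirm numerically with complex arithmetic; a standalone Python check (the choice $p=2$ is arbitrary):

```python
import cmath
import math

p = 2  # arbitrary choice
for phi in (0.05, 0.2, 0.5):
    theta = 1j * phi * math.log(p)
    for t in range(1, 15):
        # cos(t*theta) = (p^{t phi} + p^{-t phi})/2 <= p^{t phi}
        c = cmath.cos(t * theta)
        assert abs(c - (p ** (t * phi) + p ** (-t * phi)) / 2) < 1e-9 * p ** (t * phi)
        assert abs(c) <= p ** (t * phi)
        # U_t at lambda_j/(2 sqrt p) equals sin((t+1)theta)/sin(theta)
        u = cmath.sin((t + 1) * theta) / cmath.sin(theta)
        expected = (p ** ((t + 1) * phi) - p ** (-(t + 1) * phi)) / (p ** phi - p ** (-phi))
        assert abs(u - expected) < 1e-9 * abs(expected)
```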
We can get similar bounds in terms of the $\psi_j$ for the case $\lambda_j < - 2 \sqrt{p}$. Recall that
\begin{align*}
W(Q_t(A),x)&=p^t \sum_{j \neq 0} \left(\frac{p-1}{p} \frac{\sin ((t+1) \theta_j)}{\sin \theta_j} + \frac{2}{p} \cos (t \theta_j) \right)^2f_j^2(x).
\end{align*}
Summing over $x$ and using the fact that $X$ is homogeneous, we have that
\begin{align}
\label{v2}W(Q_t(A),x) & \leq \frac{p^t}{n} \sum_{j \neq 0} \left(\frac{p-1}{p} \frac{\sin ((t+1) \theta_j)}{\sin \theta_j} + \frac{2}{p} \cos (t \theta_j) \right)^2.
\end{align}
We set
\[ \phi_j'=
\begin{cases}
\phi_j &\quad\text{if} \quad \lambda_j > 2 \sqrt{p}, \\
\psi_j \ &\quad\text{if} \quad \lambda_j < - 2 \sqrt{p}.
\end{cases}
\]
Bounding the terms with $\vert \lambda_j \vert \leq 2 \sqrt{p}$ as in \eqref{last5}, and the remaining terms using equations \eqref{c} and \eqref{s}, we have that
\begin{align}
\label{v3}W(Q_t(A),x) & \leq p^t \left( t+1 \right)^2+ \frac{p^t}{n} \sum_{j =1}^{n-1} \left(\frac{p-1}{p} p^{(t+2)\phi_j'} + \frac{2}{p}p^{t \phi_j'} \right)^2\\
&\leq p^t \left( t+1 \right)^2+\frac{p^t}{n} \sum_{j =1}^{n-1} \left(\frac{p+1}{p} p^{(t+2)\phi_j'}\right)^2\\
&\leq p^t \left( t+1 \right)^2+3p^2\frac{p^{t}}{n} \sum_{j =1}^{n-1} p^{2t \phi_j'}.
\end{align}
Plugging this into the $\ell^2$ bound, we have that
\begin{align}
d_x(t) & \leq\frac{n^{1/2}}{2(p+1)p^{t-1}}( W(Q_t(A),x))^{1/2}\cr
& \leq \frac{1}{2} \left(np^{-t} \left( t+1 \right)^2+3 p^2\sum_{j =1}^{n-1} p^{- [\frac{1}{2}- \phi_j']2t} \right)^{1/2},
\end{align}
for every $x \in X$. The first term tends to zero since $np^{-t}\left( t+1 \right)^2 \leq n^{-\eta}(t+1)^2$ for $t \geq (1+\eta)\log_p n$, and Lemma \ref{eps} handles the second term, finishing the proof of Theorem \ref{SX}.
\end{proof}
\begin{remark}
In Theorem \ref{SX}, if $X$ is not homogeneous, then, since we averaged over all $x \in X$ in the proof, the result remains true for almost all $x$ in place of all $x$.
\end{remark}
\section{Conjecture \ref{conj}}
We end with some comments about our conjectured asymptotics of the variance for Ramanujan graphs. The spectral expansion \eqref{var} gives that
\begin{align}
W_2(t) &:= \frac{1}{n} \sum_{x \in X} W(Q_t,x )\cr
&=\frac{1}{n} \sum_{j \neq 0} Q_t^2(2 \sqrt{p} \cos \theta_j)\cr
&\label{61}= \frac{p^t}{n} \sum_{j \neq 0} R_t^2( \theta_j),
\end{align}
where $R_t(\theta)= \frac{p-1}{p} U_t(\cos \theta)+ \frac{2}{p} T_t(\cos \theta)$. We write \eqref{61} as
\begin{equation}\label{63}
W_2(t)= p^t \mu_X(R_t^2),
\end{equation}
where $\mu_X$ is the empirical measure of the eigenvalue angles $\theta_j$ on $[0, \pi]$:
\[\mu_X= \frac{1}{n} \sum_{j \neq 0 } \delta_{\theta_j}.\]
For any sequence of Ramanujan graphs $X$, $\mu_X$ is known to converge to the Plancherel measure $\nu_p$ as $n \rightarrow \infty$ \cite{AGV}. That is, for a fixed polynomial $R$,
\[ \mu_X(R) \rightarrow \int_{0}^{\pi} R(\theta) d \nu_{p}(\theta)\]
as $n \rightarrow \infty$. Here the Plancherel, or Kesten, measure $\nu_p$ is:
\begin{equation}\label{new}
d \nu_p= \frac{2(p+1) \sin^2 \theta}{\pi[(p^{1/2} + p^{-1/2})^2 - 4 \cos ^2\theta]} d \theta.
\end{equation}
For $X$'s whose girth is at least $\delta \log_p n$, the calculation in Section \ref{bd}, which was used to establish the bounded window for these, yields that for $t<g/5,$
\[W_2(t) \sim (p+1)p^{t-1},\]
as $n\rightarrow \infty$.
Hence for these $X$'s and in this range of $t$'s
\begin{equation}
\label{meas}\mu_{X}(R^2_t) \sim \frac{p+1}{p} ,\end{equation}
as $n \rightarrow \infty$.
One can check that the $R_t$'s are orthogonal polynomials for the measure $\nu_p$ on $[0,\pi] $ (see \cite{CA} for example) and that for $t \geq 1$
\begin{equation}\label{normal}
\int^{\pi}_0 R^2_t(\theta) d \nu_p(\theta)= \frac{p+1}{p}.
\end{equation}
Thus \eqref{meas} reads that, for the large girth $X$'s and in the range $t< g/5$,
\begin{equation} \label{69}\mu_X(R^2_t) \sim \nu_p(R^2_t)= \frac{p+1}{p},\end{equation}
as $n \rightarrow \infty$. Our conjecture is that \eqref{69} holds in general for any sequence of Ramanujan graphs and in the larger range $t< 2 \log_p n$. From \eqref{63} the conjecture is equivalent to
\[W_2(t) \sim N(t),\]
for $t< 2 \log_p n$ as $n \rightarrow \infty$.
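The normalization \eqref{normal} can be confirmed by direct quadrature against the density \eqref{new}; a self-contained Python sketch using composite Simpson integration (the function names are ours):

```python
import math

def R(t, theta, p):
    # R_t(theta) = ((p-1)/p) U_t(cos theta) + (2/p) T_t(cos theta)
    return ((p - 1) / p * math.sin((t + 1) * theta) / math.sin(theta)
            + 2 / p * math.cos(t * theta))

def kesten_density(theta, p):
    # the Plancherel (Kesten) density of eq. (new)
    c = p ** 0.5 + p ** -0.5
    return 2 * (p + 1) * math.sin(theta) ** 2 / (math.pi * (c * c - 4 * math.cos(theta) ** 2))

def integrate(f, a, b, m=20000):
    # composite Simpson rule (m even)
    h = (b - a) / m
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, m))
    return s * h / 3

for p in (2, 3):
    for t in (1, 2, 5, 8):
        val = integrate(lambda th: R(t, th, p) ** 2 * kesten_density(th, p),
                        1e-9, math.pi - 1e-9)
        assert abs(val - (p + 1) / p) < 1e-5   # eq. (normal)
```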
In the forthcoming paper \cite{SZ}, Conjecture \ref{conj} is proven for various families of arithmetical Ramanujan graphs, such as the ones discussed in \cite{CFLMP}.
\section{Acknowledgements} We would like to thank Eyal Lubetzky for his comments and insights concerning cutoff for the NBRW.
\bibliographystyle{plain}
\section{Introduction}
The quest for discovery in particle physics has always required experiments at the highest possible energies. Novel techniques have been developed to accelerate beams to very high energy, to bring these beams into collision and to detect the particles that these collisions create. The successful discovery of the Higgs boson and the search for new particles and forces at the LHC rests on this tradition. Further advances in technique will be required to create a facility capable of continuing the search for phenomena beyond the Standard Model into an extended energy domain. Such a facility will be required to deliver collisions at centre-of-mass energies in excess of a few TeV and be capable of measuring the collision products with precision, in order to explore new physics both directly and indirectly. These features would also be needed to characterise possible LHC or HL-LHC discoveries. The fundamental nature of leptons makes lepton-antilepton colliders the ideal candidate to serve at the energy frontier. Muon colliders offer great potential for discovery in the multi-TeV energy range. The large muon mass, $207$ times that of the electron, suppresses synchrotron radiation by a factor of $10^9$ compared with electron beams of the same energy. Rings can therefore be used to accelerate muon beams efficiently and to bring them into collision repeatedly. Also, more than one experiment can be served simultaneously.
The physics reach of the muon collider extends beyond that of a proton-proton collider of the same energy, since all of the beam energy is available for the hard collision, compared to the fraction of the proton-beam energy carried by the colliding partons. A $14$~TeV muon collider provides an effective energy reach similar to that of the 100 TeV FCC, and the c.o.m.\ energy of colliding muons can potentially go well beyond this. Furthermore, a dedicated muon collider is the only lepton-antilepton collider option able to scan the Higgs resonance and to precisely measure its mass and width. A muon collider is therefore ideal for searching for and studying new physics and for resolving narrow resonances, both as a precision facility and as an exploratory machine, as developed in section~\ref{sec:physics}.
A muon collider has the potential to provide excellent performance at reasonable power consumption. A figure of merit for energy-frontier facilities may be defined as the luminosity per unit power. This figure of merit appears to be substantially larger for the muon collider than for any other lepton collider technology in the multi-TeV range \cite{Boscolo:2018ytm}.
The main challenges of a muon collider design are those arising from the short muon lifetime, which is $2.2~\mu{\rm{s}}$ at rest, and the difficulty of producing large numbers of muons in bunches with small emittance. This requires the development of demanding technologies and innovative concepts, as presented in section~\ref{sec:machine}. The beam background from the decay of the muons impacts both machine and detector design (see section~\ref{sec:background}).
In addition, muon decays offer a window on another important area of physics by generating electron and muon neutrinos with a well-known flux and energy spectrum. These neutrinos can serve as the source for a Neutrino Factory (NF \cite{Choubey:2011zzq}), which would constitute the ideal complement \cite{Huber:2014nga} to Long Base Line Facilities.
\section{Physics}\label{sec:physics}
Circular muon colliders have the potential to reach center-of-mass energies of tens of TeV. Since such high collision energies are currently achievable only with proton-proton machines, it is natural to start an assessment of the muon collider physics potential from a comparison with the hadronic option. The obvious advantage in colliding muons rather than protons is that the muon collider center of mass energy, $\sqrt{s}_\mu$, is entirely available to produce short-distance reactions. At a proton collider, instead, the relevant interactions occur between the proton constituents, which carry a small fraction of the collider energy, $\sqrt{s}_p$. This can be illustrated by considering the pair production of heavy particles with mass $M$ approximately equal to half the muon collider energy.\footnote{The estimate that follows generically applies to any $2\rightarrow2$ reaction with a high threshold in energy.} The cross-section scales as $1/s$ at both colliders, but at a proton collider the cross-section must be multiplied by an $M/\sqrt{s}_p$-dependent suppression factor due to the steeply-falling parton luminosities. Equal muon and proton collider cross-sections are thus obtained for $\sqrt{s}_\mu\ll \sqrt{s}_p$, as shown on the left panel of Figure~\ref{fig:muvsp}.
Naively, one would expect that the lower background level could be another advantage of the muon collider relative to hadronic machines. However it is unclear to what extent this is the case because of the large beam background from the decay of the muons, discussed in section~\ref{sec:background}.
Figure~\ref{fig:muvsp} suggests that a $14$~TeV muon collider with sufficient luminosity might be very effective as a {\bf{direct exploration}} machine, with a physics motivation and potential similar to that of a $100$~TeV proton-proton collider \cite{Arkani-Hamed:2015vfh}. Although detailed analyses are not yet available, it is expected that a future energy frontier muon collider could make decisive progress on several beyond-the-SM questions, and be conclusive on some of these questions. By exploiting the very large vector-boson fusion (VBF) cross-section, a muon collider could search extensively for new particles coupled with the Higgs boson, possibly related to electroweak baryogenesis \cite{Buttazzo:2018qqp}. It might also discover Higgsinos or other heavy WIMP dark matter scenarios \cite{DiLuzio:2018jwd}. In this context, it is important to remark that motivated ``minimal'' WIMP dark matter candidates might have a mass of up to $16$~TeV. Generic electroweak-charged particles with easily identifiable decay products can be searched for up to masses of several TeV. Relevant benchmarks are the (coloured) top partners related to naturalness, which should be present at this high mass even in elusive ``neutral naturalness'' scenarios.
\begin{figure}[t]
\includegraphics[width=.45\linewidth]{figures/muvsp.pdf}\hspace{30pt}
\raisebox{.006\linewidth}{\includegraphics[width=.42\linewidth]{figures/XS.pdf}}
\caption{Left panel: the energy at which the proton collider cross-section equals that of a muon collider. The dashed line assumes comparable Feynman amplitudes for the muon and the proton production processes. A factor of ten enhancement of the proton production amplitude squared, possibly due to QCD production, is considered in the continuous line. Right panel: Higgs and top-quark production cross-sections at high energy lepton colliders.}
\label{fig:muvsp}
\end{figure}
The ability to perform measurements, which {\bf{probe New Physics indirectly}} \footnote{Precision would also allow the characterization of newly discovered particles.}, is another important goal of future collider projects. The high energy of a muon collider could also be beneficial from this viewpoint, in two ways. First, indirect New Physics effects are enhanced at high energy, so that they can show up even in relatively inaccurate measurements. This is the mechanism by which the $3$~TeV CLIC might be able to probe the Higgs compositeness scale above $10$~TeV (or a weakly-coupled $Z^\prime$ up to $30$~TeV) with di-fermion and di-boson measurements at the $1\%$ level \cite{deBlas:2018mhx}, while an exquisite precision of $10^{-4}/10^{-5}$ would be needed to achieve the same goal with low-energy (e.g., $Z$-pole) observables. At a $30$~TeV muon collider, with suitably scaled luminosity, the reach would increase by a factor of $10$. The second important aspect is that some of the key processes for Higgs physics, namely those initiated by the vector boson fusion (see the right panel of Figure~\ref{fig:muvsp}), have very large cross-sections. For instance with an integrated luminosity of $10~\rm{ab}^{-1}$, a $10$~TeV muon collider would produce $8$ million Higgs bosons, with \mbox{$30$'$000$} of them by the pair production mechanism that is sensitive to the trilinear Higgs coupling. While further study is required, especially in view of the significant level of machine background that is expected at a muon collider, these numbers might allow a satisfactory program of Higgs couplings determination.
A detailed assessment of the muon collider luminosity requirements will result from a comprehensive investigation of the physics potential, which is not yet available. However a simple and robust estimate of the minimal useful luminosity can be readily obtained as follows. Reactions induced by electroweak interactions, with cross-section of order
\begin{equation}\label{eq:XS}
\sigma=\left(
\frac{10\,{\rm{TeV}}}{\sqrt{s}_\mu}\right)^2\cdot1\,\rm{fb}\,,
\end{equation}
are the benchmark processes at a muon collider. This is the cross-section estimate for both the production of new particles with a mass of the order of (or much below) the collider energy, or large-$P_T$ $2\rightarrow2$ SM processes. The number of events corresponding to Eq.~(\ref{eq:XS}), normalized to a $5$~year run and assuming \mbox{$10^7\,{\rm{s}}/$year} operation, is
\begin{equation}
N=
\frac{\rm{time}}{5\,{\rm{yrs}}}
\left(
\frac{10\,{\rm{TeV}}}{\sqrt{s}_\mu}\right)^2\frac{\rm{luminosity}}{10^{34}{\rm{cm}}^{-2}\rm{s}^{-1}}
\cdot
500
\,.
\end{equation}
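The event count in this formula is pure unit bookkeeping; a quick check in Python (the only assumptions are those stated in the text, namely $10^7~{\rm{s}}$ of operation per year and the conversion $1~{\rm{fb}}=10^{-39}~{\rm{cm}}^2$):

```python
# Unit bookkeeping for the estimate above; the function name is ours.
def n_events(sqrt_s_tev, lumi_cm2_s=1e34, years=5):
    sigma_fb = (10.0 / sqrt_s_tev) ** 2                  # benchmark cross-section, Eq. (1)
    integrated_invfb = lumi_cm2_s * years * 1e7 * 1e-39  # cm^-2 -> fb^-1
    return sigma_fb * integrated_invfb

assert abs(n_events(10) - 500) < 1e-9        # the factor 500 above
assert abs(n_events(10, 2e33) - 100) < 1e-9  # 100 events at 2e33 cm^-2 s^-1
```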
Collecting $100$ events might be sufficient to discover new particles with easily identifiable decay products, such as Stops and Top Partners related with Naturalness. An instantaneous luminosity of $2\cdot10^{33}{\rm{cm}}^{-2}\rm{s}^{-1}$, at $10$~TeV, would be sufficient to probe these particles up to the collider reach. Ten thousand events would instead be needed to aim at percent-level measurements of electroweak SM processes at high invariant mass, allowing one to probe hundreds of TeV New Physics scales indirectly as previously mentioned. In this case the luminosity requirement becomes:
\begin{equation}
L\gtrsim\frac{5\,{\rm{years}}}{\rm{time}}
\left(
\frac{\sqrt{s}_\mu}{10\,{\rm{TeV}}}
\right)^2
2\cdot10^{35}{\rm{cm}}^{-2}{\rm{s}}^{-1}\,.
\end{equation}
If extrapolated down to $3$~TeV, this formula corresponds to a luminosity of around $2\cdot10^{34}{\rm{cm}}^{-2}{\rm{s}}^{-1}$, which is a factor of 3 less than that of the 3 TeV CLIC~\cite{Aicheler:2012bya}. However thanks to the absence of beamstrahlung this luminosity is entirely available for high-energy reactions, and furthermore two interaction points are foreseen at the muon collider. At $10$~TeV, the formula corresponds to $10\,\rm{ab}^{-1}$ integrated luminosity in $5$ years, which is the one assumed above in the estimate of the total number of produced Higgs bosons.
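The numbers quoted in this paragraph follow from the same bookkeeping; a quick Python check of the luminosity requirement (the function name and unit conventions, $1~{\rm{fb}}=10^{-39}~{\rm{cm}}^2$ and $10^7~{\rm{s}}$/year, are ours):

```python
# Eq. (3): luminosity delivering 10^4 benchmark events in `years` years.
def required_lumi(sqrt_s_tev, years=5):
    return (5.0 / years) * (sqrt_s_tev / 10.0) ** 2 * 2e35  # cm^-2 s^-1

# At 10 TeV, 5 years at this luminosity give 10^4 fb^-1 = 10 ab^-1,
# i.e. 10^4 events at the 1 fb benchmark cross-section of Eq. (1):
events = required_lumi(10) * 5 * 1e7 * 1e-39
assert abs(events - 1e4) < 1e-3
# Extrapolated down to 3 TeV: ~2e34 cm^-2 s^-1, as quoted in the text.
assert abs(required_lumi(3) - 1.8e34) < 1e28
```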
\subsubsection*{Physics potential along the way}
A vigorous and ambitious R\&D program is needed to assess the feasibility of a tens-of-TeV muon collider. Therefore it is important to investigate the physics potential of smaller-scale machines that might be built along the way as technology demonstrators. Starting from medium energy, the first option to be considered is a muon collider operating around the top production threshold ($\sim400$~GeV). This could have the same potential as the CLIC Stage~1 \cite{CLIC:2016zwp} in terms of top \cite{Abramowicz:2018rjq} and Higgs \cite{Abramowicz:2016zbo} physics, provided a comparable luminosity (of the order of $10^{34}{\rm{cm}}^{-2}{\rm{s}}^{-1}$) is obtained. The physics case becomes less clear at lower energies, where the muon collider luminosity is not expected to be comparable with that of circular $e^+e^-$ machines. A remarkable exception is of course a muon collider operating at the Higgs pole, which might study the Higgs boson line-shape. See Refs.~\cite{Janot} and \cite{Blondel} for a recent study, which assumes $4~\rm{fb}^{-1}$ integrated luminosity and $3\cdot 10^{-5}$ relative energy spread of the muon beams. Comparisons with updated projections from HL-LHC show that the impact on the Higgs couplings will be limited, but significant progress is possible in the measurement of the Higgs mass and in the model-independent determination of the Higgs width. In particular it is worth emphasizing that the Higgs mass is a fundamental input parameter of the SM, which would be measured at one part per million. On the other hand, the practical impact of such an astonishingly precise measurement is not fully clear yet.
\section{Machine Design}\label{sec:machine}
\subsection{Introduction}
Because of their great potential and critical challenges, muon-based facility concepts have been developed for more than three decades. The present status of the various options considered and the necessary R\&D to address their feasibility is summarized in Ref.~\cite{Boscolo:2018ytm}. Their basic layouts are shown in Figure~\ref{fig:scheme}, emphasizing synergies. The idea of muon colliders was first introduced in the early 1980's \cite{Skrinsky:1981ht, Neuffer:1983xya} and further developed by a series of world-wide collaborations \cite{Palmer,Palmer:2014nza} culminating in the creation of the US Muon Accelerator Program (MAP) \cite{MAP} in 2011. MAP developed the concepts of a proton driver scheme and addressed the feasibility of the novel technologies required for Muon Colliders and Neutrino Factories \cite{JINST}. In the scheme (see section~\ref{sec:prds}), the muons are generated as tertiary particles in the decays of the pions created by an intense proton beam interacting with a heavy material target. In order to achieve high luminosity in the collider, the resulting initial low-energy muon beam, which has a short lifetime and large transverse and longitudinal emittances, has to be cooled by five orders of magnitude in the six-dimensional phase-space and rapidly accelerated to minimize the decrease of the intensity due to muon decays.
\begin{figure}[t]
\centering
\includegraphics[width=.65\linewidth]{figures/scheme.pdf}
\caption{Schematic layouts of Muon Collider complexes based on the proton driver scheme and on the low emittance positron driver scheme emphasizing synergies.}
\label{fig:scheme}
\end{figure}
A novel approach, the Low Emittance Muon Accelerator (LEMMA), based on muon pair production with a positron beam impinging on electrons at rest in a target \cite{Antonelli:2013mmk}, was recently proposed and is now under conceptual study \cite{Antonelli:2015nla}. The corresponding positron driver scheme is described in section~\ref{sec:pods}. The muons produced in the $e^+e^-$ interactions close to threshold are constrained into a small phase-space region, effectively producing a muon beam with very small transverse emittances \cite{Boscolo:2017xmn}, comparable to those typically obtained in electron beams, without necessitating any cooling. These muon pairs are produced with an average energy of $22$~GeV corresponding to an average laboratory lifetime of $\sim500\,\mu{\rm{s}}$, which mitigates the intensity losses by muon decay and eases the acceleration scheme. Potentially high luminosity could be reached with relatively small muon fluxes, reducing background and activation problems due to high energy muon decays, and thus mitigating the on-site neutrino radiation issue. Consequently, the LEMMA scheme, although not appropriate for a Higgs Factory due to too large a beam energy spread, is very attractive for a collider in the multi-TeV range, extending the energy reach of muon colliders which can be limited by neutrino radiation.
\subsection{Proton driver scheme}\label{sec:prds}
\subsubsection{Design status}
In the proton driver scheme \cite{Palmer:2014nza, MAP} muons are produced as tertiary particles from the decay of pions created by a high-power proton beam impinging on a high-Z material target. The majority of the produced pions have momenta of a few hundred \mbox{MeV$/c$}, with a large momentum spread and large transverse momentum components. Hence, the daughter muons are produced at low energy within a large longitudinal and transverse phase-space. This initial muon population must be confined transversely, captured longitudinally, and have its phase-space manipulated to fit within the acceptance of an accelerator. These beam manipulations must be done quickly, before the muons decay.
A schematic layout of a proton driven muon collider facility is sketched in Figure~\ref{fig:scheme}. The main parameters of the enabled facilities are summarized in Table~\ref{tab:pars}.
\begin{table}[t]\caption{Main parameters of the proton driver muon facilities}\label{tab:pars}
\begin{tabular}{lccccc}
\hline
{\bf{Parameter}} & {\bf{Units}} & {\bf{Higgs}} & \multicolumn{3}{c}{{\bf{Multi-TeV}}} \\
\hline
CoM Energy & TeV & $0.126$ & $1.5$ & $3.0$ & $6.0$\\
Avg. Luminosity & $10^{34}\rm{cm}^{-2}{\rm{s}}^{-1}$ & $0.008$ & $1.25$ & $4.4$ & $12$\\
Beam Energy Spread & $\%$ & $0.004$ & $0.1$ & $0.1$ & $0.1$\\
Higgs Production$/10^7$ sec & \ & $13$'$500$ & $37$'$500$ & $200$'$000$ & $820$'$000$\\
Circumference & $\rm{km}$ & $0.3$ & $2.5$ & $4.5$ & $6$ \\
No. of IP's & \ & $1$ & $2$ & $2$ &$2$ \\
Repetition Rate & $\rm{Hz}$ & $15$ & $15$ & $12$ & $6$ \\
$\beta^*_{x,y}$ & $\rm{cm}$ & $1.7$ & $1$ & $0.5$ & $0.25$ \\
No. muons/bunch & $10^{12}$ & $4$ & $2$ & $2$ & $2$ \\
Norm. Trans. Emittance, $\varepsilon_{\rm{TN}}$ & $\mu{\rm{m}}$-\rm{rad} & $200$ & $25$ & $25$ & $25$ \\
Norm. Long. Emittance, $\varepsilon_{\rm{LN}}$ & $\mu{\rm{m}}$-\rm{rad} & $1.5$ & $70$ & $70$ & $70$ \\
Bunch Length, $\sigma_{\rm{S}}$ & $\rm{cm}$ & $6.3$ & $1$ & $0.5$ & $0.2$ \\
Proton Driver Power & $\rm{MW}$ & $4$ & $4$ & $4$ & $1.6$ \\
Wall Plug Power & $\rm{MW}$ & $200$ & $216$ & $230$ & $270$ \\
\hline
\end{tabular}
\end{table}
The functional elements of the muon beam generation and acceleration systems are:
\begin{itemize}
\item a proton driver producing a high-power multi-GeV, multi-$\rm{MW}$ bunched $H^-$ beam,
\item a buncher made of an accumulator and a compressor that forms intense and short proton bunches,
\item a pion production target in a heavily shielded enclosure able to withstand the high proton beam power, which is inserted in a high field solenoid to capture the pions and guide them into a decay channel,
\item a front-end made of a solenoid decay channel equipped with RF cavities that captures the muons longitudinally into a bunch train, and then applies a time-dependent acceleration that increases the energy of the slower (low-energy) bunches and decreases the energy of the faster (high-energy) bunches,
\item an ``initial'' cooling channel that uses a moderate amount of ionization cooling to reduce the 6D phase space occupied by the beam by a factor of $50$ ($5$ in each transverse plane and $2$ in the longitudinal plane), so that it fits within the acceptance of the first acceleration stage. For high luminosity collider applications, further ionization cooling stages are necessary to reduce the 6D phase space occupied by the beam by up to five orders of magnitude,
\item the beam is then accelerated by a series of fast acceleration stages, such as Recirculating Linac Accelerators (RLA), Fixed Field Alternating Gradient (FFAG) accelerators and Rapid Cycling Synchrotrons (RCS), to bring the muon beams to the relevant energy before injection into the muon collider ring.
\end{itemize}
\subsubsection{R\&D}
The MAP R\&D program (2011-2018) addressed many issues toward the technical and design feasibility of a muon-based neutrino factory or collider \cite{JINST}. Significant R\&D progress, also summarized in \cite{Boscolo:2018ytm}, was achieved.
\noindent\underline{\it{Operation of RF Cavities in High Magnetic Fields}}
Accelerating gradients in excess of $50~\rm{MV}/\rm{m}$ in a $3$~T magnetic field have been demonstrated in the FNAL MuCool Test Area (MTA).
\noindent\underline{\it{Initial and 6D Ionization Cooling Designs and pioneering demonstration}}
\noindent Concepts were developed for Initial Cooling and for 6D Cooling with RF cavities operating in vacuum (VCC), including a variant on this design where the cavities were filled with gas used as discrete absorber (hybrid scheme), and a Helical Cooling Channel (HCC) design operating as a gas-filled channel. The recent results concerning the operation of RF cavities in high magnetic fields exceed the requirements for these lattices, thus opening up the possibility of further improvements. Additionally, lower emittances can potentially be achieved by adding a final section utilizing High Temperature Superconductor (HTS) technology. The International Muon Ionization Cooling Experiment (MICE) \cite{MICE} collaboration, hosted at STFC Rutherford-Appleton Laboratory (RAL), undertook the task of characterizing the energy-loss and multiple scattering characteristics of muons in the momentum regime relevant for the construction of an ionization cooling channel ($\sim200$~MeV$/c$). It also demonstrated the technical feasibility of the key required components (magnets, RF, absorbers, etc.). The MICE pioneering observation of the ionization-cooling of muon beams -- an emittance reduction of 6\% -- is well described by theory and simulations \cite{Mohayai:2018rxn,Blackmore:2018mfr}. However, the experiment has been carried out with a transverse emittance two orders of magnitude above the one required by a collider.\\
A test facility is required to demonstrate significant cooling in the transverse and longitudinal planes and to levels relevant for a collider scenario. Using a significant bunch charge will be essential to perform more complete measurements. This effort will help to select a cooling scheme among the various possible options and to gain confidence that it will achieve the targeted performance. In parallel, a feasibility study of the promising Parametric-resonance Ionization Cooling (PIC) \cite{Derbenev:2012dh} should be launched for its potential of improved cooling performance.
\noindent\underline{\it{Very High Field Solenoid Magnets}} are required for the final cooling to achieve low transverse beam emittances. A recent demonstration of a $32$~T superconducting solenoid with a $34~\rm{mm}$ cold bore has been carried out. This is in excess of the previously assumed magnetic field of $30$~T with a $25~\rm{mm}$ aperture used in the final cooling scenario. This result, together with the rapid improvements in HTS-based solenoids, opens up the possibility of further optimization of the Final Cooling system design.
\noindent\underline{\it{Fast Acceleration to collider energies}}
\noindent Since synchrotron radiation is not a limiting factor in accelerating muons to the TeV scale, efficient multi-pass acceleration can be used for cost-effective collider facilities. At low energy, acceleration in Recirculating Linacs (RLA) and, from the $\sim100$~GeV scale, a hybrid Rapid Cycling Synchrotron (RCS) concept, where fixed-field superconducting dipoles are interleaved with fast normal-conducting ramping magnets, have been identified as the most cost-effective solutions with efficient power consumption. Such a ring can be used to accelerate the muons to the final energy before they are injected into the high-field collider ring; hence its circumference can be much larger than that of the collider ring -- in inverse proportion to the average field level achieved. Initial studies indicated that magnets meeting the requirements of $2$~T peak-to-peak operation at a minimum frequency of $400~\rm{Hz}$ are achievable, although further work is required to prepare a full-scale working prototype. A collider ring that also serves as the RCS has been suggested \cite{Neuffer:2018yof} and is an option that requires further exploration.
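The inverse scaling between ring circumference and average bending field follows from the magnetic-rigidity relation $\rho[\mathrm{m}] \approx p[\mathrm{GeV}/c]/(0.3\,B[\mathrm{T}])$. The following is only an illustrative sketch of that scaling; the beam energy and field values below are assumptions for the example, not design parameters of any proposal.

```python
from math import pi

def bending_radius_m(p_gev: float, b_tesla: float) -> float:
    """Radius of curvature of a singly charged particle: rho = p / (0.3 B)."""
    return p_gev / (0.299792458 * b_tesla)

def bending_circumference_km(p_gev: float, b_tesla: float) -> float:
    """Circumference of the bending sections alone (straight sections ignored)."""
    return 2 * pi * bending_radius_m(p_gev, b_tesla) / 1000.0

# Assumed example: a 7 TeV muon beam in 16 T collider-ring dipoles ...
collider = bending_circumference_km(7000.0, 16.0)

# ... versus an accelerator ring whose *average* field (fixed SC dipoles
# interleaved with ramping NC dipoles) is only 4 T.
rcs = bending_circumference_km(7000.0, 4.0)

# The circumference grows in inverse proportion to the average field.
assert abs(rcs / collider - 16.0 / 4.0) < 1e-9
```

The assertion simply checks the inverse proportionality stated in the text: halving the average field doubles the bending circumference.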
\noindent\underline{\it{High-field and fast-ramping dipoles}}
\noindent High-field dipoles are essential in the collider ring to reach high energy and luminosity. The magnets have to be robust against the decay products of the muon beams that could quench them. High-field dipoles and fast-ramping dipoles with a large field variation are essential to limit the size and cost of the RCS.
\subsection{Positron driver scheme}\label{sec:pods}
The proton driver muon source generates beams with relatively large emittance. This is due to the limited capacity to cool the initially large-emittance muon beams by ionization in matter, where multiple scattering also occurs. Therefore, relatively large muon beam currents in the collider are mandatory to reach a high luminosity. When the muons decay while circulating in the collider ring, they generate a large background in the detector and also lead to significant radiation at the surface. This ultimately limits the energy and luminosity that can be reached in a muon collider.
The LEMMA scheme aims at overcoming this limitation by generating very small emittance muon beams, without any need for muon cooling.
This would allow for high luminosities with much smaller beam currents and consequently reduced detector background and surface radiation. This scheme strongly relaxes the limitations in energy and luminosity arising from the muon decay.
\subsubsection{Design status}
The Low Emittance Muon Accelerator (LEMMA) concept \cite{Antonelli:2015nla} is based on muon production from a $45$~GeV positron beam annihilating with the electrons of a target close to the threshold for $\mu^{+}\mu^{-}$ pair creation, thus generating, without any cooling, muon beams with low enough transverse emittance for a high-luminosity collider. The low muon conversion efficiency ($\sim 9\cdot10^{-8}$ muons per positron using a $3~\rm{mm}$ Beryllium target) motivated an extremely ambitious production scheme designed to reach a high collider luminosity, similar to that of the MAP proposal.
The initial design foresees a positron storage ring with an internal target, in order to allow multiple interactions of the positrons with the electrons at rest in the target.
However, this first layout encountered several limiting difficulties.
One of them is the required intensity of the positron beam source. With a mean positron lifetime of 100 turns, a positron source providing $10^{16}~e^+/$sec allows for a flux of $10^{18}~e^+/$sec on target. This source is already two orders of magnitude more intense than the International Linear Collider one.
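A back-of-envelope check ties these numbers together with the conversion efficiency quoted earlier. All inputs except the ILC-class source rate are taken from the text; the ILC figure is an assumption used only to illustrate the "two orders of magnitude" comparison.

```python
# Positron economy of the LEMMA internal-target scheme (sanity check).
source_rate = 1e16            # e+/s delivered by the source (from the text)
mean_lifetime_turns = 100     # mean positron lifetime in turns (from the text)

# Each stored positron traverses the target ~100 times before it is lost,
# so the flux on target is the source rate times the lifetime in turns.
flux_on_target = source_rate * mean_lifetime_turns
assert flux_on_target == 1e18  # e+/s on target, as quoted

# Muon yield at the quoted conversion efficiency (3 mm Be target):
muons_per_positron = 9e-8
muon_rate = flux_on_target * muons_per_positron   # ~9e10 muons per second

# Assumed ILC-class source rate, for the "two orders of magnitude" comparison:
ilc_source_rate = 1e14
assert source_rate / ilc_source_rate == 100.0
```

This makes explicit why the scheme is demanding: even with $10^{18}$ positrons per second on target, the muon production rate is only of order $10^{11}$ per second.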
Other issues are related to the instantaneous and the accumulated Peak Energy Deposition Density (PEDD) in the target for muon production; the impact of multiple passages through the target on the muon and positron beams; the recombination of multiple muon bunches before and after acceleration.
Presently an alternative accelerator complex design is under study, identifying the challenges within reach of the existing technology, and those requiring further innovation. The final goal of this process is to come up with a revised start-to-end machine design and a well-defined R\&D roadmap.
\subsubsection{R\&D}
\noindent\underline{\it{Overall facility concept}}
\noindent A redesign of this concept is needed to address the issues found in the initial scheme and to evaluate the achievable collider instantaneous luminosity, as limited by the muon production efficiency. The new scheme under study is still based on a $45$~GeV positron beam, but the positron bunches will be extracted to impinge on multiple targets in long straight sections with multiple IPs. This scheme could reduce the impact of the average power on the targets and also reduce the number of positrons needed from the source. It would also improve the luminosity by increasing the muon production yield for a given muon beam emittance.
\noindent\underline{\it{Bunch stacking in the collider ring}}
\noindent A major shortcoming of the initial scheme is that no realistic concept has been presented for combining the bunches injected into the collider ring into a single bunch. A proposed scheme to overcome this in the longitudinal plane requires important R\&D on the related hardware in order to establish feasibility. Alternatively, one should explore the option of producing higher muon beam intensities already at the level of the source.
\noindent\underline{\it{Positron source and muon production target}}
\noindent A novel design of the positron source could take advantage of synergies with other future collider studies. A more elaborate machine complex, exploiting both ring and linac assets should be envisaged, to produce and extract a high flux and low emittance positron beam, accelerated to 45 GeV, to impinge onto a system of multiple targets. The positron storage ring should be reoptimized toward longer positron lifetime with stronger focusing on the production target(s).
Systematic research on positron and muon production targets is required. For muon production, an optimized choice of the target material is needed: beryllium, carbon composites, or other solutions like liquid Lithium and liquid Hydrogen are under consideration. R\&D on specific target shapes and configurations to reduce the emittance of the produced muon beam is highly desirable to avoid a luminosity reduction. R\&D on the positron source could explore the use of Tungsten crystals, relying on the intense channeling radiation in axially oriented crystals. Moreover, R\&D on positron capture efficiency and on a multi-target scheme for increasing the positron production could be envisaged. The muon accumulation rings need to be designed with optimized matching onto the production target.
\noindent\underline{\it{Acceleration}}
\noindent Depending on the source and collider ring design, two options can be explored.\\
If a novel scheme can produce intense muon bunches at $15~\rm{Hz}$, the acceleration complex can be similar to that of the RCS-based proton driver scheme, the main difference being the lower bunch charge with smaller emittances. If the muon bunches are combined in the collider ring, a higher repetition rate is required. In this case, the acceleration scheme will likely differ from the proton-driver case, relying on FFAG-RLA scenarios rather than an RCS.
\section{Backgrounds and Detector Studies}\label{sec:background}
The primary source of backgrounds in a muon collider detector is the $\mu\rightarrow e\nu\overline\nu$ decay of the muon beam. Detailed studies have been performed by the MAP group \cite{JINST} for a muon collider with a center-of-mass energy of $1.5$~TeV. The electrons coming from the muon decays interact with the beamline components and generate electromagnetic showers. As a result, a large number of low-energy photons and soft neutrons may reach the accelerator elements and the detectors. The large heat deposition from the showers induced by decay electrons requires using large-aperture magnets to accommodate the thick high-Z absorbers needed to protect the superconducting coils. In particular, intensive simulations are needed to design sophisticated radiation protection systems to shield the magnet apertures and interconnections. The lower-intensity LEMMA source would enable thinner absorbers and smaller apertures.
Part of the background is generated by beam interactions many meters upstream of the interaction point. There is also a halo of decay electrons accompanying the beam to the interaction point. Without specifically designed shielding, this halo would produce an unacceptable level of electromagnetic background in the detectors. Any detector at a muon collider will therefore have to be designed with the backgrounds from muon decays taken into account. Detector studies performed in the context of the MAP effort \cite{JINST} show that the current approach to handling high detector backgrounds appears adequate to preserve the required physics capabilities. However, these studies need to be extended, in particular to higher center-of-mass energies, to explore the full range of physics processes in detail and characterize the overall physics reach of any muon collider design.
Another important aspect should be taken into account: the decay neutrinos will produce secondary radiation, with hadrons, muons and electrons traversing the earth, that may constitute a radiological hazard. Even without detailed studies, \cite{JINST} suggests that the ultimate energy of a muon collider might be limited by neutrino radiation at ground level. It was evaluated that, following US federal limits, the ultimate center-of-mass colliding-beam energy could be limited to about $10$~TeV at a depth of $500~\rm{m}$. Some ideas to mitigate the problem (such as conveying neutrinos generated in the straight sections of the accelerator through a modest vacuum beam pipe) were proposed in \cite{JRS}, and these require further development. In the case of the LEMMA scheme, the fact that muon beams from $e^+e^-$ collisions have a low emittance may allow high luminosity with a smaller number of muons per bunch. Therefore, a lower level of background is expected, enabling a higher center-of-mass energy before producing a radiological hazard. The radiation hazard is also an important concern for the use of existing tunnels that have straight insertions: they can increase the radiation at the locations where the straight extension of the tunnel would intersect the surface, if no countermeasures are taken.
\section{Implementations}
Muon-based facilities offer unique potential to provide a next generation of capabilities and world-leading experimental support, spanning physics at both the Intensity and Energy Frontiers. A complete picture has been identified within the framework of the Muon Accelerator Staging Study (MASS) \cite{Delahaye:2015yxa}.
A preliminary study of a $14$~TeV center-of-mass muon collider in the CERN LHC tunnel has recently been carried out \cite{Neuffer:2018yof}. It leverages the existing CERN facilities, including the $26.7~\rm{km}$ circumference LHC tunnel and its injectors. Collisions of fundamental leptons at $14$~TeV center-of-mass energy could provide physics reach comparable to that of the interactions of the proton constituents in a $100$~TeV FCC-hh, but within the present CERN footprint.
The performance based on a proton driver scheme in a collider equipped with $16$~T magnets, as developed for the FCC project and adapted for muon decay shielding, would be especially attractive, with a luminosity up to $10^{35}~\rm{cm}^{-2}{\rm{s}}^{-1}$ at the limit of on-site neutrino radiation. A positron driver scheme with a potentially lower neutrino radiation could possibly extend this energy range further. The realistic performance and feasibility of such a scheme would have to be confirmed by a detailed feasibility study identifying the required R\&D to address its specific issues, especially the limitations from neutrino radiation on site and the compatibility of the existing CERN facilities with the muon decays. Synergies with the FCC developments in high-field magnets and/or with the CLIC development of high accelerating gradients should be further explored.
\section{Conclusions and recommendations}
Muon-based technology represents a unique opportunity for the future of high energy physics research: the multi-TeV energy domain exploration.
R\&D performed by international collaborations on the challenging technologies for frontier muon accelerators has shown enormous progress in addressing the feasibility of the major technical issues. In Europe, the reuse of existing facilities and infrastructure for a muon collider is of interest. In particular, the implementation of a muon collider in the LHC tunnel appears promising, but detailed studies are required to establish the feasibility, performance and cost of such a project. The set of recommendations listed below would allow the muon technology to mature enough to be favorably considered as a candidate for future high-energy facilities.\\[4pt]
\noindent{\bf{Set up an international collaboration}} to promote muon colliders, organize the effort on the development of both accelerators and detectors, and define the road-map towards a CDR by the next Strategy update. As past experience demonstrates, the resources needed are not negligible in terms of cost and manpower, and this calls for a well-organized international effort. \\
For example, the MAP program required a yearly average of about \mbox{$10$M\$} and \mbox{$20$ FTE} staff/faculty in the $3$-year period 2012-2014.\\[3pt]
\noindent{\bf{Develop a muon collider concept based on the proton driver and considering the existing infrastructure.}} This includes the definition of the required R\&D program, based on previously achieved results, and covering the major issues such as cooling, acceleration, fast ramping magnets, detectors, \ldots.\\[3pt]
\noindent{\bf{Consolidate the positron driver scheme}} addressing specifically the target system, bunch combination scheme, beam emittance preservation, acceleration and collider ring issues.\\[3pt]
\noindent{\bf{Carry out the R\&D program toward the muon collider.}} Based on the progress of the proton-driver and positron-based approaches, develop hardware and research facilities as well as perform beam tests.\\[4pt]
Preparing and launching a conclusive R\&D program towards a multi-TeV muon collider is mandatory to explore this unique opportunity for high energy physics.
A well focused international effort is required in order to exploit existing key competences and to draw the roadmap of this challenging project.\\
The development of new technologies should happen in synergy with other accelerator projects. \\
Moreover, it could also enable novel mid-term experiments.
\section*{Acknowledgements}
We are pleased to acknowledge the MAP, LEMMA and MICE collaborations for all the excellent developments already achieved on the promising but challenging muon technology, and for all the discussions and useful comments while preparing this document. We particularly thank M.~A.~Palmer.
\newpage
\section{Guessing and covering}
For the remainder of the article,
$N$ will usually denote an elementary substructure of $H(\theta)$ for some
regular cardinal $\theta \ge \omega_2$, although we will not strictly require this for many of the definitions.
For a set or class $M$, a set $x \subseteq M$ is said to be \emph{bounded in $M$} if
there exists $Y \in M$ such that $x \subseteq Y$.
\begin{definition}
A set $N$ is said to be \emph{guessing}
if the following holds for every set $x \subseteq N$ which is bounded in $N$:
if $x \cap a \in N$ for all $a \in N \cap [N]^\omega$,
then there exists $E \in N$ such that $x = N \cap E$.
\end{definition}
\begin{definition}
For any regular cardinal $\kappa \ge \omega_2$,
let $\textsf{ISP}(\kappa)$
be the statement that for any regular cardinal
$\theta \ge \kappa$, the collection of
guessing sets is stationary in $P_{\kappa}(H(\theta))$.
Let $\textsf{ISP}$ be the statement $\textsf{ISP}(\omega_2)$.
\end{definition}
Being \emph{stationary} in $P_{\kappa}(H(\theta)) =
\{ a \subseteq H(\theta) : |a| < \kappa \}$ means meeting every club,
where a club is any cofinal subset of
$P_\kappa(H(\theta))$ closed under unions of $\subseteq$-increasing
sequences of length less than $\kappa$.
The collection of all sets
$N$ such that $N \cap \kappa \in \kappa$ is club in
$P_{\kappa}(H(\theta))$, so $\textsf{ISP}(\kappa)$ implies that there are stationarily
many guessing models $N$ such that $N \cap \kappa \in \kappa$.
It is easy to prove from the definition that if $N$ is an elementary substructure which is guessing,
then for any regular uncountable cardinal $\kappa \in N$, $\sup(N \cap \kappa)$ has uncountable cofinality.
\begin{definition}
A set $N$ is said to be \emph{internally unbounded} if
for any countable set $x \subseteq N$ which is bounded
in $N$, there exists $y \in N \cap [N]^\omega$
such that $x \subseteq y$.
\end{definition}
Recall that $N$ has \emph{countable covering} if
any countable subset of $N$ is covered by a countable
set in $N$.
Obviously, if $\sup(N \cap On)$ has cofinality
$\omega$, then $N$ does not have this property, but under some typical assumptions, if $\sup(N \cap On)$ has
uncountable cofinality then countable covering is
equivalent to being internally unbounded.
Viale (\cite[Remark 4.3]{viale}) asked whether it is consistent to have a guessing model which is
not internally unbounded.
In \cite[Section 4]{JK30}
we showed that \textsf{PFA} implies the existence of
stationarily many
elementary substructures $N$ of $H(\omega_2)$
of size $\omega_1$ such that $N$ is guessing but
$\sup(N \cap \omega_2)$ has cofinality $\omega$.
Such models do not have countable covering, but they are
internally unbounded according to Definition 1.3.
This result solved an easy special case of Viale's
question, but the next theorem provides the complete
solution.
\begin{thm}
Let $\theta \ge \omega_2$ be a regular cardinal, and suppose that
$N$ is an elementary substructure of $H(\theta)$ such that $\omega_1 \subseteq N$.
If $N$ is guessing, then $N$ is internally unbounded.
\end{thm}
\begin{proof}
Let $x \subseteq N$ be countably infinite and bounded in $N$.
Fix a set $Y \in N$ such that $x \subseteq Y$.
Our goal is to find a countable set $y$ in $N$ such that $x \subseteq y$.
Observe that by elementarity, the set $[Y]^{<\omega}$ is a member of $N$.
Fix a bijection $g : \omega \to x$, and for each $n$ let $x_n := g[n]$.
Then $x_m \subseteq x_n$ for all $m < n$, $\bigcup_n x_n = x$, and
$\{ x_n: n < \omega \} \subseteq [Y]^{<\omega}$.
We consider two possibilities.
The first is that there exists $\mathcal X \in N \cap [N]^\omega$ such that
$$
| \mathcal X \cap \{ x_n : n < \omega \} | = \omega.
$$
By intersecting $\mathcal X$ with $[Y]^{<\omega}$ if necessary, we may assume without loss of
generality that $\mathcal X \subseteq [Y]^{<\omega}$.
Since $\mathcal X$ is countable and its elements are finite, $y := \bigcup \mathcal X$ is a
countable subset of $Y$.
Also, $y \in N$ by elementarity.
We claim that $x \subseteq y$, which completes the proof in this case.
Consider $a \in x$.
Fix $m$ such that $a \in x_m$.
Since $\mathcal X \cap \{ x_n : n < \omega \}$ is infinite, we can fix $n > m$ such that
$x_n \in \mathcal X$.
Then $a \in x_m \subseteq x_n \subseteq y$, so $a \in y$.
The second possibility is that for all $\mathcal X \in N \cap [N]^\omega$,
$\mathcal X \cap \{ x_n : n < \omega \}$ is finite.
Since $N$ is closed under finite subsets, for all such $\mathcal X$,
$\mathcal X \cap \{ x_n : n < \omega \}$ is a member of $N$.
In this case we will show that $x$ itself is a
member of $N$, which completes the proof.
Since $N$ is guessing, we can fix $E \in N$ such that $\{ x_n : n < \omega \} = N \cap E$.
Observe that $E$ is countable.
Otherwise there would exist an injection of $\omega_1$ into $E$ in $N$ by elementarity.
Since $\omega_1 \subseteq N$, it would follow that $N \cap E$ is uncountable.
This is impossible since $N \cap E = \{ x_n : n < \omega \}$, which is countable.
As $E$ is countable, $E \subseteq N$ by elementarity.
So $\{ x_n : n < \omega \} = N \cap E = E$.
Therefore, the set $\{ x_n : n < \omega \}$ is a member of $N$.
Thus, $x = \bigcup \{ x_n : n < \omega \}$ is a member of $N$.
\end{proof}
\begin{corollary}
Let $\kappa \ge \omega_2$ be a regular cardinal.
Then $\textsf{ISP}(\kappa)$ implies that for all regular cardinals $\theta \ge \kappa$,
there are stationarily many $N \in P_{\kappa}(H(\theta))$ such that $N$ is
guessing and internally unbounded.
\end{corollary}
\begin{proof}
We already know that $\textsf{ISP}(\kappa)$ implies the existence of
stationarily many $N \in P_{\kappa}(H(\theta))$
such that $N$ is guessing and $N \cap \kappa \in \kappa$.
By definability, $\omega_1 \in N \cap \kappa$, and since $N \cap \kappa$ is an ordinal, it follows that $\omega_1 \subseteq N$.
By Theorem 1.4, $N$ is internally unbounded.
\end{proof}
Viale \cite[Section 7.2]{viale} proved that the existence of stationarily many internally unbounded
guessing models implies \textsf{SCH}, but it was unknown whether guessing models alone imply \textsf{SCH}.
This problem also appears in \cite[Section 1]{sinapova}.
By Corollary 1.5 together with Viale's result,
\textsf{ISP} does indeed imply \textsf{SCH}.\footnote{After announcing the results of this paper, we learned that
S.\ Hachtman had recently and independently
proven that \textsf{ISP}
implies \textsf{SCH} using essentially the same
argument as presented in this section.}
\begin{corollary}
\textsf{ISP} implies \textsf{SCH}.
\end{corollary}
\section{\textsf{ISP} and \textsf{SCH}}
In the previous section we showed that guessing models
are internally unbounded, which combined with
Viale's argument \cite[Section 7.2]{viale} proves that
$\textsf{ISP}$ implies $\textsf{SCH}$.
S. Hachtman and D. Sinapova \cite{sinapova} asked a more general question, which is
whether for a regular cardinal $\kappa \ge \omega_2$, $\textsf{ISP}(\kappa)$ implies
$\textsf{SCH}$ above $\kappa$.
In this section we solve this problem in the affirmative.
We note that our proof avoids the idea of internally unbounded models entirely.
We will in fact prove something a bit stronger.
\begin{thm}
Let $\kappa \ge \omega_2$ be regular and
assume that $\textsf{ISP}(\kappa)$ holds.
Then either $\kappa$ is supercompact, or
$\textsf{SCH}$ holds.
\end{thm}
\begin{proposition}
Let $\kappa \ge \omega_2$ be regular and assume
that $\textsf{ISP}(\kappa)$ holds.
If $2^\omega < \kappa$, then $\kappa$ is supercompact.
Hence, \textsf{SCH} holds above $\kappa$.
\end{proposition}
\begin{proof}
If $\kappa$ is strongly inaccessible and
$\textsf{ISP}(\kappa)$ holds, then $\kappa$ is
supercompact by \cite[Theorem 2.10]{weiss}.
And if $\kappa$ is supercompact, then
$\textsf{SCH}$ holds above $\kappa$
by a well-known result of Solovay
(\cite[Theorem 20.8]{jech}).
So it suffices to show that $\kappa$ is strongly
inaccessible.
Let $\mu < \kappa$ be a cardinal; we will show that $|P(\mu)| < \kappa$.
Using $\textsf{ISP}(\kappa)$, we can fix an elementary substructure $N$ of
$H(\kappa)$ of size less than $\kappa$ such that $N \cap \kappa \in \kappa$,
$N \cap \kappa$ is larger than $2^\omega$ and $\mu$, and $N$ is guessing.
It suffices to show that $P(\mu) \subseteq N$.
Let $x \subseteq \mu$.
Then $x$ is a subset of $N$ which is bounded in $N$.
Consider $a \in N \cap [N]^\omega$.
Since $2^\omega < N \cap \kappa$, $P(a) \subseteq N$.
In particular, $a \cap x \in N$.
As $N$ is guessing,
it follows that there exists $E \in N$
such that $x = N \cap E$.
By intersecting $E$ with $\mu$ if necessary, we may assume without loss of
generality that $E \subseteq \mu$.
Since $\mu$ is a subset of $N$, so is $E$, and hence
$x = N \cap E = E$.
Thus, $x \in N$, as desired.
\end{proof}
See \cite[Theorem 2.1]{sinapova} for a similar argument.
Fix a regular cardinal $\kappa \ge \omega_2$ for the remainder of the section, and assume that
$\textsf{ISP}(\kappa)$ holds.
If $2^\omega < \kappa$, then $\kappa$ is supercompact, and we are done.
Assume that $2^\omega \ge \kappa$.
We will show that \textsf{SCH} holds.
By a well-known theorem of Silver, the first cardinal
for which \textsf{SCH} fails, if it exists,
has cofinality $\omega$ (\cite[Theorem 8.13]{jech}).
Let $\lambda$ be a singular cardinal of
cofinality $\omega$, and assume that
\textsf{SCH} holds below $\lambda$.
If \textsf{SCH} fails at $\lambda$, that means that $2^\omega < \lambda$
and $\lambda^\omega > \lambda^+$.
Now $2^\omega \ge \kappa$, so $\lambda > \kappa$.
Since \textsf{SCH} holds below $\lambda$,
an easy inductive argument shows that
for all cardinals $\mu < \lambda$, $\mu^\omega < \lambda$
(\cite[Theorem 5.20]{jech}).
Putting it all together, assuming $\textsf{ISP}(\kappa)$
and $2^\omega \ge \kappa$, \textsf{SCH} follows from
the statement: for all cardinals $\lambda > \kappa$ of
cofinality $\omega$, if $\mu^\omega < \lambda$
for all $\mu < \lambda$, then $\lambda^\omega = \lambda^+$.
Our proof of this statement follows along the lines of
Viale's proof \cite[Section 7.2]{viale}, but avoids
consideration of internal unboundedness.
\begin{lemma}[{\cite[Lemma 6]{vialecovering}}]
Let $\lambda > 2^\omega$ be a cardinal with cofinality $\omega$.
Then there exists a matrix
$$
\langle K(n,\beta) : n < \omega, \ \beta < \lambda^+ \rangle
$$
of sets of size less than $\lambda$ satisfying:
\begin{enumerate}
\item for all $\beta < \lambda^+$, $\beta = \bigcup \{ K(n,\beta) : n < \omega \}$;
\item for all $\beta < \lambda^+$ and $m < n < \omega$,
$K(m,\beta) \subseteq K(n,\beta)$;
\item for all $\gamma < \beta < \lambda^+$ there exists $m < \omega$ such that
for all $m \le n < \omega$, $K(n,\gamma) \subseteq K(n,\beta)$;
\item for all $x \in [\lambda^+]^\omega$ there exists $\gamma < \lambda^+$
such that for all $\gamma < \beta < \lambda^+$,
there exists $m < \omega$ such that for all $m \le n < \omega$,
$K(n,\beta) \cap x = K(n,\gamma) \cap x$.
\end{enumerate}
\end{lemma}
\begin{proof}
Fix an increasing sequence of uncountable
cardinals $\langle \lambda_n : n < \omega \rangle$ cofinal in
$\lambda$.
By a straightforward argument, it is possible to
fix, for each $\beta < \lambda^+$,
a surjection $g_\beta : \lambda \to \beta$ satisfying
that for all $\gamma < \beta$ there exists $m$
such that for all $n \ge m$,
$g_\gamma[\lambda_n] \subseteq
g_\beta[\lambda_n]$.
Define $K(n,\emptyset) := \emptyset$ for all $n < \omega$.
Now fix $\beta < \lambda^+$ and assume that $K(n,\gamma)$ is defined for all $n < \omega$
and $\gamma < \beta$.
Define for each $n < \omega$
$$
K(n,\beta) := g_\beta[\lambda_n] \cup \bigcup \{ K(n,\gamma) : \gamma \in g_\beta[\lambda_n] \}.
$$
This completes the definition.
It is easy to prove by induction that (1), (2), and
(3) hold, and each $K(n,\beta)$ has
size at most $\lambda_n$.
For (4), fix $x \in [\lambda^+]^\omega$.
For each $\beta < \lambda^+$, define a function $f_\beta : \omega \to P(x)$ by
$f_\beta(n) := K(n,\beta) \cap x$.
Observe that there are $2^\omega$ many possibilities for such a function $f_\beta$.
Since $2^\omega < \lambda$, we can fix a set $S \subseteq \lambda^+$ of size $\lambda^+$
and a function $f$ such that for all $\beta \in S$, $f_\beta = f$.
Let $\gamma := \min(S)$.
To verify that (4) holds for $x$,
consider $\beta > \gamma$.
Let $\xi := \min(S \setminus \beta)$.
Using (3), fix $m$ such that for all $n \ge m$,
$$
K(n,\gamma) \subseteq K(n,\beta) \subseteq K(n,\xi).
$$
In particular,
$K(n,\gamma) \cap x \subseteq K(n,\beta) \cap x$.
For the reverse inclusion,
$$
K(n,\beta) \cap x \subseteq K(n,\xi) \cap x =
f_\xi(n) = f(n) = f_{\gamma}(n) =
K(n,\gamma) \cap x.
$$
\end{proof}
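The recursion defining the matrix can be illustrated in a finite toy model. This is only a sketch: integers below a finite bound stand in for ordinals below $\lambda^+$, finite thresholds stand in for the cardinals $\lambda_n$, and the coherent surjections $g_\beta$ are an artificially simple choice, so only finite analogues of properties (1)-(3) are visible (property (4) is genuinely infinitary).

```python
# Finite toy model of the covering-matrix recursion
#   K(n, beta) = g_beta[lambda_n]  union  U{ K(n, gamma) : gamma in g_beta[lambda_n] }.
from functools import lru_cache

LAMBDAS = [2, 4, 8, 16]   # stand-ins for the cofinal sequence lambda_0 < lambda_1 < ...
LAMBDA_PLUS = 16          # stand-in for lambda^+

def g(beta: int, i: int) -> int:
    # A surjection of range(LAMBDAS[-1]) onto beta (beta >= 1), chosen so that
    # g_gamma[lambda_n] is a subset of g_beta[lambda_n] whenever gamma <= beta,
    # mimicking the coherence property used in the lemma.
    return min(i, beta - 1)

@lru_cache(maxsize=None)
def K(n: int, beta: int) -> frozenset:
    if beta == 0:
        return frozenset()
    image = frozenset(g(beta, i) for i in range(LAMBDAS[n]))
    out = set(image)
    for gamma in image:
        out |= K(n, gamma)      # fold in the columns of earlier "ordinals"
    return frozenset(out)

# (1) beta is the union of its column over n:
for beta in range(LAMBDA_PLUS + 1):
    col = frozenset().union(*(K(n, beta) for n in range(len(LAMBDAS))))
    assert col == frozenset(range(beta))

# (2) each column is increasing in n:
for beta in range(LAMBDA_PLUS + 1):
    for m in range(len(LAMBDAS) - 1):
        assert K(m, beta) <= K(m + 1, beta)

# (3) coherence; with this simple choice of g it holds for every n,
#     not merely for all n past some m as in the lemma:
for gamma in range(LAMBDA_PLUS + 1):
    for beta in range(gamma, LAMBDA_PLUS + 1):
        for n in range(len(LAMBDAS)):
            assert K(n, gamma) <= K(n, beta)
```

The memoization via `lru_cache` mirrors the fact that the recursion is by induction on $\beta$: each $K(n,\beta)$ is computed once from already-constructed earlier columns.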
\begin{lemma}[{\cite[Fact 9]{vialecovering}}]
Let $\lambda > 2^\omega$ be a singular cardinal
with cofinality $\omega$
such that
for all cardinals $\mu < \lambda$, $\mu^\omega < \lambda$.
Fix $\langle K(n,\beta) : n < \omega, \ \beta < \lambda^+ \rangle$ as described in Lemma 2.3.
Assume that there exists a set $S \subseteq \lambda^+$ of size $\lambda^+$
such that for all $x \in [S]^\omega$, there exists $n < \omega$ and $\beta < \lambda^+$
such that $x \subseteq K(n,\beta)$.
Then $\lambda^\omega = \lambda^+$.
\end{lemma}
\begin{proof}
Since $S$ has size $\lambda^+$, the cardinality of $[S]^\omega$ is equal to $(\lambda^+)^\omega$,
which in turn equals $\lambda^\omega$.
So it suffices to show that $[S]^\omega$ has cardinality $\lambda^+$.
By assumption, every member of $[S]^\omega$ is a subset of $K(n,\beta)$ for some $n < \omega$
and $\beta < \lambda^+$.
Thus,
$$
[S]^\omega \subseteq \bigcup \{ [K(n,\beta)]^\omega : n < \omega, \ \beta < \lambda^+ \}.
$$
Now each $K(n,\beta)$ has cardinality less than $\lambda$, so by our assumptions,
$[K(n,\beta)]^\omega$ has cardinality less than $\lambda$.
Thus, the union in the above inclusion has cardinality at most $\lambda^+$. Since $\lambda^\omega > \lambda$, it follows that $[S]^\omega$ has cardinality exactly $\lambda^+$.
\end{proof}
Assume that $\textsf{ISP}(\kappa)$ holds, and let $\lambda > \kappa$ be a singular cardinal
of cofinality $\omega$ such that for all $\mu < \lambda$, $\mu^\omega < \lambda$.
We will prove that $\lambda^\omega = \lambda^+$.
Fix $\mathcal K = \langle K(n,\beta) : n < \omega, \ \beta < \lambda^+ \rangle$ as described in Lemma 2.3.
In order to show that $\lambda^\omega = \lambda^+$, by Lemma 2.4 it suffices to show that
there exists a set $S \subseteq \lambda^+$ of size $\lambda^+$
such that for all $x \in [S]^\omega$,
there exists $n < \omega$ and $\beta < \lambda^+$
such that $x \subseteq K(n,\beta)$.
Using $\textsf{ISP}(\kappa)$,
fix an elementary substructure $N$ of $H(\lambda^{++})$
of size less than $\kappa$ such that $N \cap \kappa \in \kappa$,
$\mathcal K \in N$, and $N$ is guessing.
For each $x \in [\lambda^+]^\omega$, let $\gamma_x < \lambda^+$ be the minimal ordinal
satisfying that for all
$\gamma_x < \beta < \lambda^+$, there exists $n$ such that for all $m \ge n$,
$K(m,\beta) \cap x = K(m,\gamma_x) \cap x$.
Observe that $\langle \gamma_x : x \in [\lambda^+]^\omega \rangle$ is a member of
$N$ by elementarity.
Consider $x \in N \cap [\lambda^+]^\omega$.
Then $\gamma_x \in N \cap \lambda^+$.
So there exists $n$ such that for all $m \ge n$,
$$
K(m,\sup(N \cap \lambda^+)) \cap x = K(m,\gamma_x) \cap x.
$$
Since $x$, $\gamma_x$, and $\mathcal K$ are in $N$,
$K(m,\gamma_x) \cap x$ is a member of $N$.
Therefore,
$$
K(m,\sup(N \cap \lambda^+)) \cap x \in N.
$$
Now for each $x \in N \cap [\lambda^+]^\omega$, fix the smallest integer
$k_x$ satisfying that for all $m \ge k_x$,
$K(m,\sup(N \cap \lambda^+)) \cap x$ is in $N$.
We claim that if $x$ and $y$ are in $N \cap [\lambda^+]^\omega$
and $x \subseteq y$, then $k_x \le k_y$.
By the minimality of $k_x$, it suffices to show that
for all $m \ge k_y$, $K(m,\sup(N \cap \lambda^+)) \cap x \in N$.
Let $m \ge k_y$.
Then $K(m,\sup(N \cap \lambda^+)) \cap y \in N$.
Since $x$ is in $N$ and $x \subseteq y$, we have that
$K(m,\sup(N \cap \lambda^+)) \cap x = (K(m,\sup(N \cap \lambda^+)) \cap y) \cap x$
is in $N$.
Next, we claim that the collection of integers
$$
A := \{ k_x : x \in N \cap [\lambda^+]^\omega \}
$$
is finite.
Suppose for a contradiction that $A$ is infinite.
For each $n \in A$, fix $x_n \in N \cap [\lambda^+]^\omega$ such that $n = k_{x_n}$.
Now define, for each $n \in A$, $y_n := \bigcup \{ x_k : k \in A \cap (n+1) \}$,
which is in $N \cap [\lambda^+]^\omega$.
Observe that if $m < n$ are in $A$, then $y_m \subseteq y_n$.
Also, for each $n \in A$, $x_n \subseteq y_n$, and therefore by the previous paragraph,
$n = k_{x_n} \le k_{y_n}$.
By thinning out the sequence $\langle y_n : n \in A \rangle$ if necessary, it is easy to find a
sequence $\langle z_n : n < \omega \rangle$ of distinct sets in $N \cap [\lambda^+]^\omega$ satisfying that
for all $m < n$, $z_m \subseteq z_n$ and
$k_{z_m} < k_{z_n}$.
We now consider two possibilities, both of which will lead to a contradiction.
First, assume that there exists a countable set $\mathcal X \in N$ such that
$$
|\mathcal X \cap \{ z_n : n < \omega \}| = \omega.
$$
By intersecting $\mathcal X$ with $[\lambda^+]^\omega$ if necessary, we may assume without loss of generality
that $\mathcal X \subseteq [\lambda^+]^\omega$.
Since $\mathcal X$ is countable and consists of countable sets, $x^* := \bigcup \mathcal X$ is in
$N \cap [\lambda^+]^\omega$.
We claim that for all $m < \omega$, $z_m \subseteq x^*$.
Indeed, given $m$, we can find $n \ge m$ such that $z_n \in \mathcal X$.
Then $z_m \subseteq z_n \subseteq \bigcup \mathcal X = x^*$.
Now for all $n < \omega$, $z_n \subseteq x^*$ implies that $k_{z_n} \le k_{x^*}$.
This is impossible, since $\{ k_{z_n} : n < \omega \}$ is unbounded in $\omega$,
whereas $k_{x^*} < \omega$.
Secondly, assume that for all countable sets $\mathcal X \in N$,
$\mathcal X \cap \{ z_n : n < \omega \}$ is finite.
Then in particular, for all countable sets $\mathcal X \in N$,
$\mathcal X \cap \{ z_n : n < \omega \}$ is a member of $N$.
Also note that this assumption implies that
$\{ z_n : n < \omega \}$ is not in $N$,
for otherwise we could let $\mathcal X$
be equal to it and get a contradiction.
Since $N$ is guessing, it follows that there exists $E \in N$ such that
$\{ z_n : n < \omega \} = N \cap E$.
In particular, $N \cap E$ is countable.
Since $\omega_1 \subseteq N$, this implies that $E$ is countable, for otherwise by elementarity
$N \cap E$ would be uncountable.
Therefore, $E \subseteq N$.
So $\{ z_n : n < \omega \} = N \cap E = E$, and hence $\{ z_n : n < \omega \}$ is a member of $N$,
which is a contradiction.
This concludes the proof that the set
$A = \{ k_x : x \in N \cap [\lambda^+]^\omega \}$ is finite.
Let $n^*$ be the largest member of $A$.
Then for all $x \in N \cap [\lambda^+]^\omega$ we have $k_x \le n^*$, and hence for all $m \ge n^*$,
$K(m,\sup(N \cap \lambda^+)) \cap x \in N$.
It easily follows that for all $m \ge n^*$, for any countable set $Y \in N$,
$K(m,\sup(N \cap \lambda^+)) \cap Y \in N$.
Since $N$ is guessing, for all $m \ge n^*$ there exists a set $E_m \in N$
such that $N \cap K(m,\sup(N \cap \lambda^+)) = N \cap E_m$.
By intersecting $E_m$ with $\lambda^+$ if necessary, we may assume without loss
of generality that $E_m \subseteq \lambda^+$.
Since $\sup(N \cap \lambda^+)$ is equal to
$\bigcup \{ K(m,\sup(N \cap \lambda^+)) : m < \omega \}$, we have that
$$
N \cap \sup(N \cap \lambda^+) = \bigcup \{ N \cap K(m,\sup(N \cap \lambda^+)) : m < \omega \}.
$$
As $\mathrm{cf}(\sup(N \cap \lambda^+))$ is uncountable, there exists $m \ge n^*$ such that
$N \cap K(m,\sup(N \cap \lambda^+)) = N \cap E_m$ is unbounded in $\sup(N \cap \lambda^+)$.
By elementarity, it easily follows that the set $S := E_m$ is unbounded in $\lambda^+$.
To complete the proof, it suffices to show that for all
$x \in [S]^\omega$, there exists $n < \omega$ and $\beta < \lambda^+$
such that $x \subseteq K(n,\beta)$.
Since $S \in N$, by elementarity it suffices to show that for all $x \in N \cap [S]^\omega$,
there exists $n < \omega$ and $\beta < \lambda^+$ such that $x \subseteq K(n,\beta)$.
Fix $x \in N \cap [S]^\omega$.
Then $x \subseteq N \cap S = N \cap E_m = N \cap K(m,\sup(N \cap \lambda^+))$.
By elementarity, there exists $\beta \in N \cap \lambda^+$ such that $x \subseteq K(m,\beta)$.
\section{Approximation and covering}
In Section 1 we saw that guessing implies internal unboundedness
for elementary substructures.
In this section we provide analogous results showing that the
approximation property implies the covering property, for models and for forcing posets.
\begin{definition}
Let $\kappa$ be a regular uncountable cardinal.
Let $W_1 \subseteq W_2$ be transitive (sets or classes) with $\kappa \in W_1$.
\begin{enumerate}
\item The pair $(W_1,W_2)$ is said to have the \emph{$\kappa$-approximation property}
provided that whenever $X \in W_2$ is a bounded subset of $W_1$,
if $X \cap y \in W_1$ for every set $y \in W_1$ such that $W_1 \models |y| < \kappa$,
then $X \in W_1$;
\item The pair $(W_1,W_2)$ is said to have the \emph{$\kappa$-covering property} if whenever
$X \in W_2$ is a bounded subset of $W_1$,
if $W_2 \models |X| < \kappa$, then there exists $Y \in W_1$ such that
$W_1 \models |Y| < \kappa$ and $X \subseteq Y$.
\end{enumerate}
\end{definition}
\begin{definition}
Let $\kappa$ be a regular uncountable cardinal and $\mathbb{P}$ a forcing poset.
We say that $\mathbb{P}$ has the \emph{$\kappa$-approximation property} if $\mathbb{P}$ forces that
$(V,V^\mathbb{P})$ has the $\kappa$-approximation property, and has the \emph{$\kappa$-covering property}
if $\mathbb{P}$ forces that $(V,V^\mathbb{P})$ has the $\kappa$-covering property.
\end{definition}
\begin{thm}
Let $\kappa$ be a regular uncountable cardinal and $W_1 \subseteq W_2$ be
transitive models of \textsf{ZFC} minus power set such that $\kappa \in W_1$.
Assume that for all $W_2$-cardinals $\mu < \kappa$, any subset of $W_1$ which is a member of
$W_2$ and has $W_2$-cardinality less than $\mu$ is a member of $W_1$.
If $(W_1,W_2)$ has the $\kappa$-approximation property, then it has the $\kappa$-covering property.
\end{thm}
\begin{proof}
Let $x \in W_2$ satisfy that $W_2 \models |x| < \kappa$ and $x \subseteq Y$
for some $Y \in W_1$.
We will prove that $x$ is covered by some set in $W_1$ which has $W_1$-cardinality less than $\kappa$.
Define $\mu := |x|^{W_2}$.
Since $x$ has cardinality $\mu$ in $W_2$, fix a bijection $g : \mu \to x$ in $W_2$, and for
each $i < \mu$ define $x_i := g[i]$.
Then the sequence $\langle x_i : i < \mu \rangle$ is in $W_2$, is $\subseteq$-increasing, and has
union equal to $x$.
Moreover, each $x_i$ has size less than $\mu$ in $W_2$, hence is in $W_1$ by our assumptions,
and has $W_1$-cardinality less than $\mu$.
We consider two possibilities.
First, assume that there exists a set $\mathcal X \in W_1$ of $W_1$-cardinality less than $\kappa$ such that
$$
W_2 \models | \mathcal X \cap \{ x_i : i < \mu \} | = \mu.
$$
By intersecting $\mathcal X$ with $([Y]^{<\mu})^{W_1}$ if necessary, we may assume without loss of generality that
$\mathcal X \subseteq ([Y]^{<\mu})^{W_1}$.
Since $\mu < \kappa$, $z := \bigcup \mathcal X$ is a subset of $Y$ of $W_1$-cardinality less than $\kappa$.
For all $i < \mu$, there exists $j > i$ in $\mu$ such that $x_j \in \mathcal X$,
so $x_i \subseteq x_j \subseteq z$.
Hence, $z$ is a member of $W_1$ of $W_1$-cardinality less than $\kappa$ such that
$x = \bigcup \{ x_i : i < \mu \}$ is a subset of $z$, as required.
Secondly, assume that for all $\mathcal X \in W_1$ of $W_1$-cardinality less than $\kappa$,
$$
W_2 \models | \mathcal X \cap \{ x_i : i < \mu \} | < \mu.
$$
Consider any such $\mathcal X$. Since each $x_i$ is a member of $W_1$, and by the case assumption
$\mathcal X \cap \{ x_i : i < \mu \}$ has $W_2$-cardinality less than $\mu$,
it follows from our assumptions that
$\mathcal X \cap \{ x_i : i < \mu \}$ is a member of $W_1$.
Also, the set $\{ x_i : i < \mu \}$ is a subset of a member of $W_1$, namely
the set $([Y]^{<\mu})^{W_1}$.
As the pair $(W_1,W_2)$ has the $\kappa$-approximation property, it follows that
$\{ x_i : i < \mu \}$ is a member of $W_1$.
This is impossible, since letting $\mathcal X$ be equal
to $\{ x_i : i < \mu \}$,
we get a contradiction
to the assumption of this case.
\end{proof}
\begin{corollary}
Let $\lambda$ be a regular cardinal and $\mathbb{P}$
a forcing poset.
Assume that $\mathbb{P}$ is $<\!\lambda$-distributive.
If $\mathbb{P}$ has the $\lambda^+$-approximation property,
then $\mathbb{P}$ has the $\lambda^+$-covering property.
\end{corollary}
\begin{proof}
By Theorem 3.3, it suffices to show that $\mathbb{P}$
preserves $\lambda^+$.
If not, then there exists a cofinal set
$x \subseteq (\lambda^+)^V$ in $V^{\mathbb{P}}$ of order
type at most $\lambda$.
If $a \in V$ has $V$-cardinality less than
$(\lambda^+)^V$,
then $a \cap x$ is bounded in $(\lambda^+)^V$,
and hence has order type less than $\lambda$.
As $\mathbb{P}$ is $<\!\lambda$-distributive,
$a \cap x \in V$.
Since $\mathbb{P}$ has the $\lambda^+$-approximation property,
it follows that $x \in V$, which is impossible.
\end{proof}
Observe that if $\kappa$ is weakly inaccessible or
the successor of a singular cardinal, then a forcing poset
$\mathbb{P}$ being $<\!\mu$-distributive for all cardinals
$\mu < \kappa$ implies that $\mathbb{P}$ is $<\!\kappa$-distributive,
and hence has the $\kappa$-covering property.
That is why we restricted the statement of the corollary
to successors of regulars.
\begin{corollary}
If $\mathbb{P}$ is a forcing poset which has the $\omega_1$-approximation property, then $\mathbb{P}$ has the
$\omega_1$-covering property.
\end{corollary}
This follows from the fact that $\mathbb{P}$ forces
that $V^{<\omega} \cap V^\mathbb{P} \subseteq V$.
\bibliographystyle{plain}
\section{Introduction}
\hspace{0.5cm}
The Large Magellanic Cloud (LMC) and Small Magellanic Cloud (SMC) are
nearby irregular galaxies at
distances of about 50 kpc and 60 kpc respectively. Both galaxies are nearly face on.
The Magellanic Clouds (MCs) are believed to have interacted with our Galaxy
as well as with each other
(Westerlund \cite{westerlund}). It is also believed that the tidal forces due to these interactions have
caused structural changes in these galaxies. The recent proper motion estimates by Kallivayalil et al
(\cite{K06a}, \cite{K06b}) \& Besla et al (\cite{Besla07}) indicate that these Clouds may be approaching
our Galaxy for the first time. These results also claim that the MCs might not
have always been a binary system. Therefore, it is not clear whether the structure of the
MCs has been modified by their mutual interactions, by interactions with our Galaxy, or by something else, such as minor mergers.\\
The N-body simulations by Weinberg (\cite{Weinberg}) predicted that the LMC's evolution is
significantly affected by its interactions with the Milky Way, and the tidal forces will thicken
and warp the LMC disk. Alves and Nelson (2000) studied the carbon star kinematics and found that
the scale height, h, increased from 0.3 to 1.6 kpc over the range of radial distance, R,
0.5 to 5.6 kpc and hence concluded that the LMC disk is flared. Using an expanded sample
of carbon stars, van der Marel et al. (\cite{van02}) also found that the thickness of the LMC disk increases
with the radius. The depth of the clouds could vary as a function of radial distance
from the center due to tidal forces. Variation in depth across the LMC, if present,
is likely to give valuable clues to the interactions it has experienced in the past.
There has not been any direct estimate of the thickness or the line of sight
depth of the bar and disk of the LMC so far.\\
Mathewson, Ford and Visvanathan (\cite{MFV86a} \& \cite{MFV86b}) found that SMC cepheids extend from
43 to 75 kpc with most cepheids found in the neighbourhood of 59 kpc. Later, the line of sight depth of SMC was
estimated (Welch \cite{Welch87}) by investigating the line of sight distribution and
period - luminosity relation of cepheids. They accounted for various factors
which could contribute to the larger depth estimated by Mathewson et al. (\cite{MFV86a} \& \cite{MFV86b}),
and found the line of sight depth of the SMC to be $\sim$ 3.3 kpc. Hatzidimitriou et al. (\cite{H89}),
estimated the line of sight depth in the outer regions of the SMC to be around 10-20 kpc.\\
Red Clump (RC) stars are core helium burning stars, which are the metal rich and slightly
more massive counterparts of the
horizontal branch stars. They have tightly defined colour and magnitude, and appear as an
easily identifiable component in colour magnitude diagrams (CMDs). RC stars were used as standard
candles for distance determination by Stanek et al. (\cite{PS98}). They used the intrinsic
luminosity to determine the distance to MCs as well as to the bulge
of our Galaxy. Olsen and Salyk (\cite{OS02})
used their constant I band magnitude to show that the southern LMC disk is warped.
Subramaniam (\cite{S03}) used the constant magnitude of RC stars to show that the LMC has some
structures and warps in the bar region.
Their characteristic colour was used by Subramaniam (\cite{S05}) to
estimate the reddening map towards the central region of the LMC. \\
In this paper, we used the dispersions in the
colour and magnitude distribution of RC stars for depth estimation. The dispersion in
colour is due to a combination of observational error, internal reddening (reddening within the disk
of the LMC/SMC) and population effects. The dispersion in magnitude is due to internal
disk extinction, depth of the distribution, population effects and photometric errors
associated with the observations. By deconvolving other effects from the dispersion of
magnitude, we estimated the dispersion only due to the depth of the disk.
The advantage of choosing RC stars
as a proxy is that there are large numbers of these stars available to determine the
dispersions in their distributions with good statistics, throughout the L\&SMC disks.
The depth estimated here would correspond to the depth of the intermediate age L\&SMC disks.
The depth of the intermediate age disk of these galaxies may give clues to their formation and evolution, and thus
in turn to their mutual interactions. This could also place some
constraints on their interaction with our Galaxy.
Measurements of the line of sight depth in the central regions of the MCs,
especially the LMC, are of strong interest for understanding
the observed microlensing towards these galaxies.\\
The next section deals with the contribution of population effects to the observed
dispersion of RC stars. Data sources are explained in section 3 and details of the analysis are
described in section 4. Internal reddening in the MCs is explained in section 5. The LMC and SMC
results are presented in sections 6 and 7 respectively and their implications are discussed in
section 8. These results for the disks are compared with the depth estimates of the halo, as
defined by RR Lyrae stars, in sections 9 and 10. Conclusions are given in section 11.
\section{Effect of a heterogeneous population of RC stars}
The RC stars in the L\&SMC disks are a heterogeneous population and hence, they would
have a range
in mass, age and metallicity. The density of stars in various locations will also vary
with the local star formation rate as a function of time. These factors result in
a range of magnitude and colour of the net population of RC stars in any given location
and would
contribute to the observed dispersion in magnitude and colour distributions.
Girardi \& Salaris (\cite{GS01}) simulated the RC stars in the LMC using the star formation rate
estimated by Holtzman et al. (\cite{HGC99}) and the age metallicity relation from
Pagel and Tautvaisiene (\cite{PT98}). They also simulated the RC stars in SMC using
star formation results and the age metallicity relation from Pagel and Tautvaisiene (\cite{PT98}).
The synthetic CMDs of the two systems were obtained and the distribution of RC stars
is fitted using numerical analysis to obtain the mean and dispersion of the magnitude and
colour distributions. The estimated intrinsic dispersions in
magnitude and colour distributions for LMC are 0.1 and 0.025 mag respectively.
In the case of the SMC, the values are 0.076 and 0.03 mag respectively. The values of colour dispersions
are measured from the CMDs given in the above reference.
We used these estimates of the intrinsic dispersion, to account
for the population effects in our analysis.\\
The above reference used two models of star formation history, one for the bar and the other
for the disk of the LMC. The width of the RC distribution for the above two populations is not very
different. They also simulated the RC distribution found by Dolphin (\cite{D00}) in the northern LMC.
The RC stars in this region were found to be very different due to a significantly different star formation
history and metallicity. The effect on the width of the RC was found to be large. The intrinsic dispersion in this region
was found to be 0.218 mag, double that of the bar population. Therefore, if the population of a region
is different, its effect on the width of the RC distribution will be to increase it. All the regions
in the bar and the disk were corrected for population effects using the
bar and the disk model estimates of width. There may be an uncorrected component due to
population effects in the estimated depth, due to variations in the RC population between regions.
The final estimated variation of RC width will be a combination of variation in depth and population
across the LMC. In the SMC also, there can be an effect of different RC populations in the
dispersion corresponding to depth. The contribution of these components in various locations of the MCs
will be discussed later.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{cmd.ps}}
\caption{Colour magnitude diagram of an LMC region. The box used to identify the RC
population is shown.}
\end{figure}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{smccmd.ps}}
\caption{Colour magnitude diagram of an SMC region. The box used to identify the RC
population is shown.}
\end{figure}
\section{Data}
\vspace{0.25cm}
\subsection{LMC data}
The OGLE II survey (Udalski et al. \cite{Uda00}) scanned the central region of the LMC to detect microlensing
events. One of the outcomes of this survey is a catalogue of stars in the central/bar region of the LMC,
consisting of photometric data of 7 million stars in the B,V and I pass bands. This catalogue presents data
of 26 fields which cover the central 5.7 square degrees of the LMC in the sky. Out of the 26 regions/strips,
21 regions are within 2.5 degrees of the optical center of the LMC representing the bar region,
and the other 5 regions are in the north western part of the LMC disk. The average photometric error
of red clump stars in the I and V bands is $\sim$ 0.05 mag. Photometric data with errors less
than 0.15 mag are considered for the analysis.
Each strip was divided into 4$\times$16 smaller regions, each having an area of 3.56$\times$3.56 square arcmin.
Thus 26 strips of the LMC were divided into 1664 regions. (V$-$I) vs I CMDs were plotted for each region and a sample CMD of one such
region is shown in figure 1. For all the regions, red clump stars were well within a box in the CMD, with widths
0.65 - 1.35 mag in (V$-$I) colour and 17.5 - 19.5 mag in I magnitude. Thus, the red clump stars
were identified in each region. The OGLE II data suffers from incompleteness due to crowding effects and the incompleteness
in the RC distribution is corrected using the values given in Udalski et al. (\cite{Uda00}).\\
The Magellanic Cloud Photometric Survey (MCPS, Zaritsky et al. \cite{ZHT04}) of the central 64 square
degrees of the LMC contains photometric data of around 24 million stars in the U,B,V and I pass
bands. Data with errors less than 0.15 mag are taken for the analysis. The regions away from
the bar are less dense compared to the bar region. The total observed regions are divided into
1512 sub-regions each having an area of approximately 10.53$\times$15 square arcmin. Out of 1512
regions only 1374 regions have a reasonable number of RC stars to do the analysis. (V-I) vs I CMDs
for each region were plotted and red clump stars were identified as described above.\\
\subsection{SMC data}
The OGLE II survey (Udalski et al \cite{Uda98}) of the central region of the SMC contains photometric data
of 2 million stars in the B,V and I pass bands. The catalogue of SMC presents data of
11 fields which cover the central 2.5 square degrees of the SMC in the sky.
The observed regions
of the SMC are divided into 176 regions. Each strip was divided into 2$\times$8 regions,
each having an area of 7.12$\times$7.12 square arcmin to obtain enough stars
in each region.
Data selection and analysis is similar to that for the LMC, including the box
used to identify the RC stars in the CMD.
A sample CMD for one location is shown in Figure 2.
We used the incompleteness corrections given in Udalski et al. (\cite{Uda98}).\\
The Magellanic Cloud Photometric Survey (MCPS, Zaritsky et al. \cite{ZHT02}) of the central 18 square
degrees of the SMC contains photometric data of around 5 million stars in the U,B,V and I pass
bands. Data with errors less than 0.15 mag are taken for the analysis. The regions away from
the bar are less dense compared to the bar region. The total observed regions are divided into
876 sub-regions each having an area of approximately 8.9$\times$ 10 square arcmin. Out of 876
regions, 755 regions with a reasonable number of RC stars
were considered for analysis.\\
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{lmccolour.ps}}
\caption{A typical colour distribution of red clump stars in the LMC. The best fit to the distribution is also shown.
The reduced $\chi^2$ value of this fit is 1.33.}
\end{figure}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{lmcmagnitude.ps}}
\caption{A typical magnitude distribution of red clump stars in the LMC. The best fit to the distribution is also shown.
The reduced $\chi^2$ value of this fit is 0.99.}
\end{figure}
\section{Analysis}
\hspace{0.5cm}
A spread in magnitude and colour of red clump stars is observed in the CMDs of both
the LMC and SMC. Their number distribution profiles roughly
resemble a Gaussian. The width of the Gaussian in the distribution of colour is
due to the internal reddening, apart from observational error and population effects.
The width in the distribution of magnitude is due to population effects, observational error, internal
extinction and depth. By deconvolving the effects of observational error, extinction and population
effects from the distribution of magnitude, an estimate of depth can be obtained.\\
To obtain the number distribution of the red clump stars in each region,
the data are binned with a bin size of 0.01 and 0.025 mag in
colour and magnitude respectively. The obtained distributions in colour and magnitude are
fitted with a function, a Gaussian + quadratic polynomial. The Gaussian represents the red
clump stars and the other terms represent the red giants in the region. A non linear least
square method is used for fitting and the parameters are obtained.
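The fitting function and quality cut described above can be sketched as follows (an illustrative sketch only: the function and variable names are ours, and Poisson errors for the binned star counts are our assumption; the actual fits use a non-linear least squares routine):

```python
import math

def rc_profile(x, a, mu, sigma, c0, c1, c2):
    """Gaussian (red clump) on top of a quadratic polynomial
    (red giant background): the model fitted to each binned
    colour or magnitude distribution."""
    gauss = a * math.exp(-0.5 * ((x - mu) / sigma) ** 2)
    return gauss + c0 + c1 * x + c2 * x ** 2

def reduced_chi2(observed, model, n_params):
    """Reduced chi^2 of a fit; regions with values above 2.6 are
    discarded in the analysis.  Poisson errors (variance = model
    counts) are our assumption for the binned counts."""
    chi2 = sum((o - m) ** 2 / max(m, 1.0) for o, m in zip(observed, model))
    return chi2 / (len(observed) - n_params)
```

At the Gaussian peak ($x = \mu$) the model reduces to $a + c_0 + c_1\mu + c_2\mu^2$; the fitted Gaussian $\sigma$ is the observed dispersion that enters the depth analysis below.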
In figures 3 and 4, the distribution as well as the fitted curve are shown for both colour and
magnitude distribution of an LMC region (OGLE II data). Similarly for SMC (OGLE II data), the
distribution as well as the fitted curve are shown for both colour and magnitude in figures 5 and 6.
The parameters obtained
are the coefficients of each term in the function used to fit the profile, the error in the
estimation of each parameter and the goodness of the fit, which is the same as the reduced $\chi^2$ value.
Regions with reduced $\chi^2$ values greater than 2.6 are omitted from the analysis. As the
important parameter for our calculations is the width associated with the two distributions,
we also omitted regions with fit errors of width greater than 0.1 mag from our analysis.
After these omissions,
the number of regions useful for analysis in LMC (OGLE II data) and LMC (MCPS data) is reduced from 1664 to 1528
and from 1374 to 1301 respectively. Similarly for the SMC, after omitting regions with larger
reduced $\chi^2$ values and fit error values, the number of regions useful for analysis
in OGLE II data and MCPS data is reduced
from 176 to 150 and from 755 to 600 respectively.
Thus, the total observed dispersion in
(V$-$I) colour and I magnitude were estimated for RC stars in all these regions.
The number of RC stars in each region studied in the MCs depends
on the RC density. The number is large in the central regions, whereas it decreases
in the disk. In the LMC, the RC stars range between 500 - 2000 in the bar region, whereas
the range is 200 - 1500 in the disk. In the central regions of the SMC,
the RC stars range between 1000 - 3000. The disk is found to have a range of 200 - 1500.\\
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{newsmccolour1.ps}}
\caption{A typical colour distribution of red clump stars in the SMC. The best fit to the distribution is also shown.
The reduced $\chi^2$ value of this fit is 1.18.}
\end{figure}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{smcmagnitude1.ps}}
\caption{A typical magnitude distribution of red clump stars in the SMC. The best fit to the distribution is also shown.
The reduced $\chi^2$ value of this fit is 1.36.}
\end{figure}
The following relations are used to estimate the resultant dispersion due to depth from the above
estimated dispersions.
$$
\sigma^2_{\mathrm{col}} = \sigma^2_{\mathrm{internal\ reddening}} + \sigma^2_{\mathrm{intrinsic}} + \sigma^2_{\mathrm{error}}
$$
$$
\sigma^2_{\mathrm{mag}} = \sigma^2_{\mathrm{depth}} + \sigma^2_{\mathrm{internal\ extinction}} + \sigma^2_{\mathrm{intrinsic}} + \sigma^2_{\mathrm{error}}
$$
The average photometric errors in I and V band magnitudes were calculated for each region
and the error in I magnitude and (V$-$I) colour were estimated. These were subtracted from the observed
width of magnitude and colour distribution respectively, thus accounting for the photometric errors
(last term in the above equations). The contribution from the heterogeneous population of RC
stars were discussed in section 2, and the dispersion in colour and magnitude due to this
effect ($\sigma_{intrinsic}$) were also subtracted from the observed dispersions.
After correcting for the population effects and the observational error in colour, the remaining
spread in colour distribution (first equation) is taken as due to the internal reddening, E(V$-$I).
This is converted into extinction in I band using the relation A(I) = 0.934 E(V$-$I), where E(V$-$I) is the
internal reddening estimated for each location. This was used to deconvolve the effect of
internal extinction from the spread in magnitude. The above relation is derived from the relations
E(V$-$I) = 1.6 E(B$-$V) and A(I) = 0.482 A(V) (Rieke and Lebofsky \cite{RF85}). The interstellar extinction law of our Galaxy is
adopted for the calculations of Magellanic Clouds based on the results of the studies by Nandy
and Morgan (\cite{NM78}), Lequeux et al (\cite{L1982}) and Misslet, Clayton and Gordon (\cite{MCG99}), which showed that both
LMC and SMC have extinction curves qualitatively similar to those found in Milky Way.\\
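For reference, the coefficient 0.934 follows from the quoted relations together with the standard Galactic ratio $A(V) = 3.1\,E(B-V)$, which the text does not state explicitly and is our assumption:

```latex
$$
A(I) = 0.482\,A(V) = 0.482 \times 3.1\,E(B-V)
     = \frac{0.482 \times 3.1}{1.6}\,E(V-I) \approx 0.934\,E(V-I).
$$
```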
Thus, the net dispersion in magnitude due to depth alone was estimated for the considered
regions in the LMC and SMC. The resultant width in magnitude is converted into depth in kpc using the
distance modulus formula and taking a distance of 50 kpc to the LMC and a distance of 60 kpc to the SMC.\\
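The deconvolution and unit conversion described above can be sketched as follows (an illustrative sketch: the function names and the $\pm 1\sigma$ front/back convention are our assumptions, since the text does not state the exact conversion convention):

```python
import math

def depth_dispersion(sigma_mag, sigma_error, sigma_intrinsic, sigma_extinction):
    """Quadrature deconvolution of the observed I-magnitude dispersion
    (the second equation above): what remains after removing photometric
    error, population (intrinsic) spread and internal extinction is
    attributed to line-of-sight depth."""
    s2 = (sigma_mag ** 2 - sigma_error ** 2
          - sigma_intrinsic ** 2 - sigma_extinction ** 2)
    return math.sqrt(s2) if s2 > 0.0 else 0.0

def width_to_depth_kpc(sigma_depth, distance_kpc):
    """Convert a magnitude dispersion into a linear depth via the
    distance modulus m - M = 5 log10(d / 10 pc): stars one sigma nearer
    and farther than the mean distance D sit at D * 10**(+/- sigma/5),
    so their separation gives the front-to-back depth.  The one-sigma
    convention here is our assumption."""
    return distance_kpc * (10 ** (sigma_depth / 5.0)
                           - 10 ** (-sigma_depth / 5.0))
```

Under this convention, a residual dispersion of 0.1 mag at the adopted LMC distance of 50 kpc corresponds to a front-to-back depth of about 4.6 kpc.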
The error in the estimation of the dispersion corresponding to depth is obtained from the errors
involved in the estimation of width of colour and magnitude distribution. The random
error associated with the width corresponding to depth is given by
$\Delta \mathrm{depth}^2 = \Delta I^2_{\mathrm{width}} + \Delta (V-I)^2_{\mathrm{width}}$. Thus the associated error
in the estimation of depth is calculated for all the locations. This error also translates
into the minimum depth that can be estimated. The minimal depth that can be
estimated is $\sim$ 360 pc in the central regions of the LMC and $\sim$
650 pc in the outer regions of the LMC. In the SMC, the minimal thickness that can be estimated
is $\sim$ 350 pc in the central regions of the SMC and $\sim$ 670 pc in the
outer regions of the SMC.\\
\section{Internal reddening in the MCs}
\begin{figure*}
\resizebox{\hsize}{!}{\includegraphics{lmcinternalreddening.ps}}
\caption{Two dimensional plot of the internal reddening in the LMC. The colour code is given in the figure. The magenta dot
represents the optical center of the LMC. The upper plot is derived from the MCPS data, whereas the lower plot is
derived from the OGLE II data.}
\end{figure*}
\begin{figure*}
\resizebox{\hsize}{!}{\includegraphics{smcinternalreddening.ps}}
\caption{Two dimensional plot of the internal reddening in the SMC. The colour code is given in the figure. The magenta dot
represents the optical center of the SMC. The upper plot is derived from the MCPS data, whereas the lower plot is derived
from the OGLE II data.}
\end{figure*}
One of the by-products of this study is the estimation of the internal reddening in the MCs.
The shift of the peak of the (V$-$I) colour distribution from the expected value was used by Subramaniam (\cite{S05}) to
estimate the line of sight reddening map to the OGLE II region of the LMC. The above study
estimated the reddening between the observer and the observed region in the LMC. In this study,
we used the width of the (V$-$I) colour distribution to estimate the internal reddening map across the MCs.
This estimates the front to back reddening of a given region in the MCs, which we call the internal
reddening (in E(V$-$I)), and does not estimate the reddening between the front end of the region
and the observer. Thus, this estimate traces the reddening within the bar/disk of the MCs and hence the location of the dust.
The estimates and figures given below thus give the internal reddening within the MCs.\\
The colour coded figures of the internal reddening in the LMC and SMC are presented in figure 7 and 8 respectively.
It can be seen that the internal reddening is high only in some specific regions in both the MCs. Most of the
regions have negligible internal reddening, suggesting that most of the regions in the MCs are optically thin.
The regions of high internal reddening in the LMC are located near the eastern end of the bar and the 30 Dor star forming
region. The highest internal reddening estimated is E(V$-$I) = 0.13 mag in the OGLE II region of the LMC,
which is the bar region and 0.16 mag in the MCPS region, close to the 30 Dor location. It is noteworthy
that these are not very high values.
The OGLE II internal reddening map shows that, apart from the eastern end of the bar, some regions near the
center also have internal reddening. The MCPS data shows that the
internal reddening across the LMC disk as seen by the RC stars is very small. The error associated with the
width of colour distribution is translated as the minimum internal reddening that can be estimated. The minimum internal reddening
that can be estimated in the central regions as well as in the disk of LMC is 0.003 mag.
In the case of the SMC, a region of high internal reddening is found to the west of the
optical center. Also, the bar region is found to have some internal reddening, whereas the disk has very little
internal reddening (within the area studied). The highest reddening estimated is E(V$-$I) = 0.08 mag
in the OGLE II regions and 0.12 mag in the MCPS region. These regions are located close to the
optical center. The rest of the bar as well as the disk have very little internal reddening. Thus,
our results indicate small extinction across the SMC, as seen by the RC stars. The minimum internal reddening that
can be estimated is 0.002 mag in the central regions of SMC and 0.005 mag in the disk of SMC.\\
It is interesting to see low internal reddening across the MCs, as seen by the RC stars. This contradicts
the large extinction expected near the star forming regions, especially near 30 Dor. The reddening is estimated here using the same type of stars that are used to estimate the depth; thus, the same tracer is used consistently for both properties. Higher foreground reddening estimates were
obtained by Harris et al. (\cite{Harris97}) using the MCPS data for OB stars, whereas lower values were obtained for the same regions using RC stars
by Subramaniam (\cite{S05}) with OGLE II data. Thus, the estimated reddening values vary with the tracer used.
Once again, the present results confirm that the RC disk has much less internal extinction. This variation
of reddening as a function of the population is suggestive of population segregation across the LMC. The above
results suggest that the star forming regions in the LMC are likely to be behind the RC disk.
\begin{figure*}
\resizebox{\hsize}{!}{\includegraphics{modlmcthickness.ps}}
\caption{Two dimensional plot of depth (t) in the LMC. The colour code
is given in the figure. Upper panel: MCPS data; lower panel: OGLE II data. The magenta dot represents the optical center of the LMC. The empty squares represent
the omitted regions with poor fit.}
\end{figure*}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{lmcogleraplot.ps}}
\caption{Width corresponding to depth with error bars plotted against RA for both the central
bar region and north western disk region of LMC.}
\end{figure}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{lmcmcpsraplot.ps}}
\caption{Width corresponding to depth with error bars plotted against RA and Dec for the LMC
MCPS data.}
\end{figure}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{lmcogleraprofile.ps}}
\caption{Width corresponding to depth averaged along the declination plotted against RA for
both the central bar region (closed circles) and north western disk region (open circles) of the LMC.}
\end{figure}
\section{Results : The LMC }
The line of sight depth in the LMC has been derived using two data sets. OGLE II data cover the central
region and this is basically the bar region of the LMC. OGLE II data also cover a small detached disk
region in the north-western direction. On the other hand, MCPS data cover a
significant area of the disk, besides the bar. Thus the OGLE II data is suitable to derive the
depth of the bar region, whereas the MCPS data is suitable to derive the depth of the bar as well
as the disk regions. The two data sets can be used for consistency checks and the results
derived will be compared.\\
The depths of 1528 regions of the LMC (OGLE II data) were calculated. Out of these 1528 regions,
1214 are in the central bar region and the remaining 314 are in the north western disk region;
the latter includes regions with minimal thickness. The depths of 1301 regions of the LMC (MCPS data) were
also calculated. Regions with minimal thickness are seen in the disk of the LMC.\\
A two dimensional plot of the depth for the 1528 OGLE II regions is shown in the lower panel of figure 9.
This plot is colour coded as explained.
The optical center of the LMC is taken to be RA = 05$^h$ 19$^m$ 38$^s$,
Dec = -69$^0$ 27' 5".2 (J2000.0, de Vaucouleurs and Freeman, \cite{dVF73}).
The OGLE II data show a range of dispersion values from 0.03 to 0.46 mag (a depth
of 690 pc to 10.5 kpc; avg: 3.95 $\pm$ 1.42 kpc) for the LMC central bar region. For the N-W disk region,
the estimated dispersion ranges from 0.023 mag to 0.33 mag (a depth of 500 pc to 7.7 kpc;
avg: 3.56 $\pm$ 1.04 kpc). The minimum value in the N-W disk region is limited by errors; the fraction
of such regions is 4.06\%.
Regions in the bar between
RA 80 - 84 degrees show a reduced depth (0.5 - 4 kpc, as indicated by yellow and black points).
The regions to the east and west of the above region are found to have larger depth (2.0 - 8.0 kpc,
black, red and green points). Thus, the depth of the bar at its ends is greater than that near
its center. Since the center of the bar does not coincide with the optical center, the thinner
central bar region is located to the east of the optical center. The average values of these three
regions of the bar are 3.21$\pm$ 1.03 kpc for the central minimum, 4.13$\pm$1.35 kpc for the western region
and 4.95$\pm$1.49 kpc for the eastern region.
This is better brought out in figure 10, where the depth is shown as a function of RA (the Dec range
in OGLE II data is less when compared to the RA range). The lower panel shows the bar region and the
upper panel shows the N-W disk region. An indication of flaring is seen here. In this figure,
the depths of all the regions are shown and the error bar on each point denotes the error in the estimation
of depth at each location. The depth averaged along the Dec of each RA is shown in figure 12. This plot
clearly suggests that the bar is flared at its ends. The open circles indicate
the N-W disk points. Thus the N-W region has a depth similar to the central region of the bar.
The plot also suggests that the eastern end of the bar is more flared than the western end. The errors
shown are the standard deviation of the average, thus a large error indicates a large range in the
depth values.\\
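The dispersion-to-depth conversion quoted above (e.g. 0.03 mag corresponding to about 690 pc) can be sketched as below. The mean LMC distance of 50 kpc used here is an illustrative assumption of this sketch, not a value stated in the text.

```python
import math

def depth_from_dispersion(sigma_mag, distance_pc):
    """Convert a 1-sigma dispersion in magnitude (i.e. in distance
    modulus) into a 1-sigma line-of-sight depth in parsecs: a small
    spread dm in distance modulus corresponds to a fractional
    distance spread of ln(10)/5 * dm, so depth = D * ln(10)/5 * sigma.
    """
    return distance_pc * math.log(10.0) / 5.0 * sigma_mag

# Illustrative check against the LMC bar values, assuming D = 50 kpc:
print(depth_from_dispersion(0.03, 50_000))  # ~691 pc
print(depth_from_dispersion(0.46, 50_000))  # ~10592 pc (~10.6 kpc)
```

With this relation the quoted dispersion extremes reproduce depths close to the 690 pc and 10.5 kpc given in the text, the small offsets reflecting the assumed mean distance.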
We also used the MCPS data to estimate the depth in the bar region. This data set could not be corrected
for data incompleteness and thus might underestimate the RC stars, especially in the crowded bar region.
This is also reflected in the number counts, since a larger area is required to obtain a similar number
of red clump stars. We used larger area bins for the MCPS data and estimated the depth.
The colour-coded figure for the LMC (bar and the disk) is shown in the upper panel of figure 9.
A prominent feature of the plot is the lop-sided distribution of the red dots (greater depth)
when compared to the black dots (lesser depth). Thus, this plot reveals that
there is a variation in depth across the disk of the LMC.
The average depth for the bar estimated based on 320 regions is 4.3 $\pm$ 1.0 kpc, which is very similar
to the value estimated from the OGLE II data.
The large dispersion is not due to errors in the estimation of the depth of
individual regions, but is due to the presence of regions with varying depths in the bar region.\\
We used the MCPS data to estimate the depth of the disk. We considered regions outside the location
of the bar as disk regions. The estimated depth ranges from 650 pc to 7.01 kpc. The minimum value of the depth is limited by errors; the fraction of such regions is 0.44\%.
The average of the disk alone is estimated to be 3.44$\pm$1.16 kpc.
The average values of the depth for different disk regions were estimated and are tabulated in
table 1, along with the values for the bar for comparison. It can be seen that the
average depth of the northern disk is greater than the southern disk by more than 2 $\sigma$ (of the
southern disk). The depth of the northern disk is
similar to the depth estimated for the bar. Thus the bar and the northern disk of the LMC have
greater line of sight depth, whereas the east, west and the southern disk have reduced depth.
The variation of depth as a function of RA (bottom panel) and Dec (top panel) is shown
in figure 11. The depth variation as a function of RA indicates that the east and the west ends have
a depth less than the bar, whereas the depth variation as a function of Dec indicates that the
depth reduces from north to the south disk, with increased depth in the bar regions. The fact that
the northern disk of the LMC has a greater depth compared to the other regions seems to be a surprising
result. It will be interesting to study the line of sight depth of regions located further north, to
find out how far this trend continues in the disk. Figure 9 (upper panel) also
gives a mild suggestion that
the depth of the disk gradually reduces towards the south, especially on the south-western side.
This is indicated by the increase of yellow points and the appearance of cyan points.
The plot also mildly suggests that the maximum gradient in the depth is seen from the
north-east to the south west of the LMC disk. This is similar to the position angle of the
minor axis of the LMC (major axis $\sim$ 120$^o$, van der Marel et al. \cite{van02}).
Within the disk radii studied here, there is no evidence for flaring of the disk. An estimate
of the depth of the outer disk at large radii is essential to confirm the trends indicated above.\\
\section{Results : The SMC}
In the case of the SMC also, we used OGLE II and MCPS data sets. Similar to the LMC, the area
covered by OGLE II is mainly the bar region, whereas the bar and disk are covered by the
MCPS data.
The depth of 150 regions (OGLE II data) and 600 regions (MCPS data) of the SMC were calculated.\\
Colour coded, two dimensional plots of depth for these two data sets are shown in figure 13 (OGLE II data in the lower panel
and MCPS data in the upper panel). The optical center of the SMC is
taken to be RA = 00$^h$ 52$^m$ 12.5$^s$, Dec = -72$^0$ 49' 43" (J2000, de Vaucouleurs and Freeman,
\cite{dVF73}). There is no indication of a variation of depth across the disk as indicated by the
uniform distribution of the red and black dots. The prominent feature in both the plots is the
presence of blue and green points indicating increased depth, for regions located
near the SMC optical center. The OGLE II data cover only the bar region and it can be
seen that this data is not adequate to identify the extension of the central feature, whereas
the MCPS data clearly delineates this feature.\\
The net dispersions range from 0.10 to 0.35 mag (a depth of 2.8 kpc to 9.6 kpc) in
the OGLE II data set and from 0.025 mag to 0.34
mag (a depth of 670 pc to 9.47 kpc) in the MCPS data set. The minimum depth estimated in the MCPS
data is limited by errors; the fraction of such regions is 2.83\%. The average value of the SMC
thickness estimated using the OGLE II data set in the central bar region is 4.9$\pm$ 1.2 kpc and
the average thickness estimated using MCPS data set, which covers a larger area than
OGLE II data, is 4.42 $\pm$ 1.46 kpc. The average depth obtained for the bar region alone
is 4.97 $\pm$1.28 kpc, which is very similar to the value obtained from OGLE II data. The depth
estimated for the disk alone is 4.23$\pm$1.47 kpc. Thus the disk and the bar of the SMC do
not show any significant difference in the depth. The marginal difference between the bar and
the disk depths is due to the presence of higher depth regions near the center. Thus, except
for the central feature, the depth across the SMC appears uniform.
Our estimate is in good agreement with the depth estimate of the SMC using eclipsing
binary stars by North et al. (\cite{N08}). They estimated a 2-sigma depth of 10.6 kpc,
which corresponds to a 1-sigma depth of 5.3 kpc.\\
In order to study the variation of depth of the SMC (OGLE II data) along the RA,
dispersion corresponding to the depth is plotted against RA in figure 14.
The lower panel shows all the regions along with the error in depth estimation for each location.
The upper panel shows the depth averaged along Dec and the error indicates the standard deviation
of the average. Both the panels clearly show the increased depth near the SMC center. There is no
significant variation of depth along the bar.\\
For MCPS data, the dispersion corresponding to depth is plotted against RA as well as DEC in figure
15. There is an indication of increased depth near the center, as seen before.
The plot also indicates that there is no significant variation in depth between the bar
and the disk, and there is no indication of variation of depth across the disk. In figure 16, the
depth averaged over RA and Dec are shown in the upper and lower panel respectively. These are
plotted for a small range of Dec ($-$72.0 - $-$73.8 degrees) and RA (10 - 15 degrees),
in order to identify the increased depth in the central region. The increased depth near the
center is clearly indicated: the depth there is about 9.6 kpc, which is twice
the average depth of the bar region (4.9 kpc). Thus, the SMC has a more or less uniform depth
of 4.9 $\pm$ 1.2 kpc over the bar as well as the disk region, with double the depth near the center.\\
\begin{figure*}
\resizebox{\hsize}{!}{\includegraphics{modsmcthickness.ps}}
\caption{Two dimensional plot of depth (t) in the SMC. Upper panel is for the MCPS data and lower panel
is for OGLE II data. The colour code is the same for both panels. The magenta dot represents the optical center of the SMC.
The empty squares represent the omitted regions with poor fit.}
\end{figure*}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{ogle.ps}}
\caption{Lower panel: Width corresponding to depth against RA for bar region of
the SMC (OGLE II data). Upper panel: Average of depth along the declination against
RA in the bar region of the SMC (OGLE II data).}
\end{figure}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{smcmcpsraplot.ps}}
\caption{Width corresponding to depth against RA in the lower panel and against Dec in the upper panel
for the SMC (MCPS data).}
\end{figure}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{smcmcpsavg1.ps}}
\caption{Lower panel: Width corresponding to depth averaged over Dec and plotted against RA for a small range of Dec in the central region of the SMC (MCPS data). Upper panel: Width corresponding to depth averaged over RA and plotted against Dec for a small range of
RA in the central region of the SMC (MCPS data).}
\end{figure}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{distribution.ps}}
\caption{Two dimensional plot of density distribution estimated from MCPS data. The small open circles
in the central region indicate the high density regions. The ellipse shows the boundary of regions
with large depth, the large hexagons indicate the stellar peaks found by Cioni et al. (\cite{C00}), the
large triangle indicates the HI peak (Stanimirovic et al. \cite{SS04}) and the large square denotes
the optical center.}
\end{figure}
\begin{table*}
\centering
\caption{Depths of different regions in the LMC \&\ SMC. These are line of sight depths and
need to be corrected for inclination, to estimate the actual depth.}
\label{Table:1}
\vspace{0.25cm}
\begin{tabular}{lrrr}
\hline \\
Region & Range of depth (kpc) & Avg. depth (kpc) & Std. deviation (kpc)\\ \\
\hline
\hline \\
LMC eastern bar (RA $>$ 84$^o$.0) & 0.69-9.10 & 4.95 & 1.49 \\
LMC western bar (RA $<$ 80$^o$.0) & 1.26-10.44 & 4.14 & 1.35 \\
LMC central bar (80$^o$.0 $<$ RA $<$ 84$^o$.0)& 0.69-8.50 & 3.21 & 1.03 \\ \\
LMC bar average & 0.69-10.44 & 3.95 & 1.42\\ \\
\hline
\hline \\
LMC eastern disk (RA $>$ 88$^o$.0, $-$68$^o$.0 $>$ Dec $>$ $-$71$^o$.0) & 0.65-5.89 & 2.80 & 0.92\\
LMC western disk (RA $<$ 74$^o$.0, $-$68$^o$.0 $>$ Dec $>$ $-$71$^o$.0) & 0.65-5.58 & 3.08 & 0.99\\
LMC northern disk (Dec $>$ $-$68$^o$.0) & 1.00-7.01 & 4.17 & 0.97\\
LMC southern disk (Dec $<$ $-$71$^o$.0)& 0.65-4.58 & 2.63 & 0.79\\ \\
LMC disk average & 0.65-7.01 & 3.44 & 1.16\\ \\
\hline
\hline \\
SMC bar & 3.07-9.53 & 4.90 & 1.23 \\
SMC disk & 0.67-9.16 & 4.23 & 1.48 \\\\
\hline
\end{tabular}
\end{table*}
\section{Discussion}
The line of sight depths of the MCs are estimated using red clump stars as tracers. These are intermediate age
stars with ages greater than 1 Gyr; hence, the depth estimates correspond to the intermediate age disk.
The depth of the intermediate age disk may give clues to its evolution, as a thin disk is indicative
of an undisturbed disk, whereas a thick disk would indicate a heated up and hence disturbed disk.
Thus the depth of the disks as well as the bar regions of these galaxies are clues
to the evolution of these galaxies. The analysis presented here
estimates the dispersion
in the RC magnitude distribution due to depth, after correcting for dispersion due to other effects.
The corrections due to the internal reddening and observational error were estimated for each region
and were corrected accordingly.
The correction for the presence of an age and metallicity range in the RC population, along with
star formation history is done using the dispersion estimated by Girardi \& Salaris (\cite{GS01}).
Thus the values estimated here depend on the assumptions made in correcting
for the above effect.\\
The variation in the estimated dispersion (after all the corrections) is
assumed to be due to variations in depth across the galaxy. If this is actually due to
the differences in RC population, as a result of variation in age, metallicity and star
formation history, then the results indicate that the RC population and their properties are
significantly different in the bar, north and south of the disk, for the LMC. Recent studies show that
the above parameters are more or less similar across the LMC (Subramaniam \& Anupama (\cite{SA02}),
Olsen \& Salyk (\cite{OS02}) and van der Marel \& Cioni (\cite{vc01})).
Variation of star formation history and metallicity across the LMC has been
studied by Cioni et al. (\cite{C06LMC}) and Carrera et al. (\cite{Car08}). They found
small variations in the inner regions, but large variations in the outer
regions. Cioni et al. (\cite{C06LMC})
mapped the variation of star formation history as well as metallicity using the AGB stars. They
found that the south and south-western regions could be metal poor whereas
the north and north-eastern regions could be metal rich. Regions near the
Shapley constellation III were found to have a younger population. This region is
close to the northern limit studied here. Piatti et al. (\cite{Piatti99}) and Dolphin (\cite{D00}) found that the RC
population in the far northern regions is structured. All the above regions with varying RC population
are outside the regions studied here.
The results presented in this study suggest that the northern regions and the
bar have a large dispersion, probably due to depth, when compared to the east, west and the southern regions.
If this is not due to increased depth in these regions, then it would mean
that the stellar populations in these regions are significantly different. In
either case, the northern disk and the bar seem to be different from the rest
of the LMC disk.\\
As the incompleteness correction is done in one data set (OGLE II) and not in the other (MCPS),
we compared the depth estimates before and after adopting the completeness correction. We
found that the change is within the bin sizes adopted here.
Thus, incorporating the incompleteness correction has not changed the
results presented here. The incompleteness correction is large in the central regions of the LMC bar,
where the correction is between 30-40 percent. The outer regions of the
bar (6-15\%) as well as the NW region (5-9\%) have less correction. The outer regions
are thus less affected by the incompleteness. Thus the incompleteness problem
is unlikely to affect the MCPS RC distribution in the LMC disk, whereas it may be
unreliable in the central regions of the LMC bar. In the SMC, the incompleteness correction in the central regions is about 12\% and that in the outer regions is about 5\%. Thus, the
incompleteness in the MCPS data does not
affect the results presented here.\\
We have removed regions in the MCs with poor fit as explained in section 4.
These regions are likely to have different RC structures suggesting a large variation
in metallicity and/or population. The fraction of such regions is about 8\% in the LMC and 5.3\% in the SMC.
Such regions are indicated in figures 9 \& 13.
Thus to a certain extent, the above procedure has eliminated the regions whose
metallicity and star formation history differ greatly from those of
most of the regions. Apart from the above, the remaining regions studied here might have some variation
in the RC population contributing to the estimated depth.
The results presented in this study will include some contribution from the population
effect.\\
The estimated depth for various regions in the LMC is given in table 1. These values
correspond to the 1-sigma depth. In the case of the
LMC, the line of sight depth estimated here is for the inclined disk. To estimate the
actual depth of the disk, one needs to correct
for the inclination. Assuming the inclination to be 35 degrees (van der Marel \& Cioni \cite{vc01}),
the actual depth of the bar is 4.0 $\times \cos(i)$ = 3.3 $\pm$ 1.0 kpc.
Similarly, the southern disk has an actual depth of about 2.2 $\pm$ 1.0 kpc, whereas
the northern disk is similar to the bar. Note that, after correcting for inclination, the
depths in the northern and the southern regions agree within the errors. This is because the difference
in depth is reduced by the inclination correction, but the error is not.
These values should be used when one compares the depth
of the LMC with that of other galaxies. Thus, the LMC bar and the disk are thicker than the thin disk
of our Galaxy ($\sim$ 100 pc).
The scale height of the bar could be taken as half of its depth, assuming that the
bar is optically thin.
The z-extent of the bar, which is the scale height (1.65 kpc), is found to be
similar to its scale length (1.3 - 1.5 kpc, van der Marel \cite{van01}).
Hence the bar has a depth similar to its
width. Thus the bar continues to be an unexplained structure/component of the LMC.\\
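The inclination correction and scale-height arithmetic above can be sketched as below; the depth and inclination values are those quoted in the text, and the computation itself is elementary.

```python
import math

def inclination_corrected_depth(los_depth_kpc, inclination_deg):
    """Actual vertical depth of an inclined disk: the observed
    line-of-sight depth multiplied by cos(i)."""
    return los_depth_kpc * math.cos(math.radians(inclination_deg))

# LMC bar: 4.0 kpc line-of-sight depth, inclination i = 35 degrees
bar_depth = inclination_corrected_depth(4.0, 35.0)  # ~3.28 kpc, quoted as 3.3 kpc
# Scale height taken as half the corrected depth (optically thin bar)
scale_height = bar_depth / 2.0  # ~1.64 kpc, quoted as 1.65 kpc after rounding
```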
The LMC bar is found to be fairly thick (line of sight depth = 4.0$\pm$1.4 kpc).
We also find evidence for flaring and a disturbed structure at the ends of the bar region.
The thick and flared bar of the LMC indicates that this region of the LMC is perturbed.
The structure of the LMC bar as delineated by the RC stars showed warps (Subramaniam \cite{S03}), which is also a
clear indication of disturbance.
The depth estimates also suggest that the LMC disk is fairly thick (2.4 - 4.0 kpc), with a decrease
in depth/thickness and/or varying stellar population from the north to the south.
The tidal effects due to LMC-Galaxy interactions (if they were interacting) are unlikely
to cause this, as
the tidal effects are stronger near the outer regions and weaker towards the inner regions. Flaring
of the disk is expected if tidal interactions are present. On the other hand,
except for the thicker northern region, flaring of the disk is not seen, at least up to the radii
studied here. Hence we do not see any
evidence for tidal interactions, at least in the inner disk. The recent results on
the proper motions of the LMC and SMC (Kallivayalil et al. \cite{K06a}) suggested that the Clouds are approaching our
Galaxy for the first time. This would suggest that the LMC has not interacted with our Galaxy
before. Our results are in good agreement with this scenario.\\
In general, thicker and heated up disks are considered as signatures of
minor mergers (Quinn \& Goodman \cite{QG86}; Velazquez \& White \cite{VW99}).
Thus, the LMC is likely to have experienced minor mergers in its history.
The presence of warps in the bar (Subramaniam \cite{S03})
and evidence of counter rotation in the central regions (Subramaniam \& Prabhu \cite{SP05}) also support
the minor merger scenario. Thus, it is possible that the LMC has experienced minor mergers
during its evolution. These mergers have affected the northern disk and the bar. The variation
in depth observed across the LMC disk could constrain the way in which these mergers could
have happened.\\
The SMC is found to have a greater depth than the LMC. The disk and the bar do not show
much difference in depth.
A striking result is the increased depth in the central region of the SMC. The profile
of the depth near the
center (figure 16) looks very similar to a typical luminosity profile of a bulge. This could
suggest the presence
of a bulge near the optical center of the SMC. If a bulge is present, then a density/luminosity
enhancement in this region is also expected. We plotted the observed stellar density in each region
from the MCPS data to see whether there is any such central enhancement. This is shown in
figure 17. The regions with high density are shown as open circles, located close to the optical center.
The regions with large depth are found to be within the ellipse
shown in the figure. It can be seen that regions with highest stellar density lie more or less
within this ellipse.
Cioni et al. (\cite{C00}) studied the morphology of the SMC using the DENIS catalogue. They
found that the distribution of AGB and RGB stars show two central
concentrations, near the optical center,
which match the carbon star distribution of Hardy et al. (\cite{Hardy89}). They also found
that the western concentration is dominated by old stars. The approximate locations
of these two concentrations found by Cioni et al. (\cite{C00}) are shown as hexagons in figure 17.
Also, the strongest HI concentration in the SMC map by Stanimirovic et al. (\cite{SS99}) falls between these
two concentrations. The maximum HI column density, 1.43 $\times$ 10$^{22}$ atoms cm$^{-2}$
is located at RA = 00$^h$ 47$^m$ 33$^s$, Dec = -73$^0$ 05' 26" (J2000.0)
(Stanimirovic et al. \cite{SS04}). This location is shown as a large triangle in figure 17. The
optical center of the SMC is shown as a large square. All these peaks as well as the
optical center are located on or within the boundary of the ellipse.
Thus, the peaks of stellar as well as the HI density are found within the central region with large depth.
This supports the idea that a bulge may be present in the central SMC.
This bulge is not very luminous, but clearly shows enhanced density.
It is also the central region of the triangular shaped bar.\\
The increased dispersion near the SMC center, which is interpreted as due to large depth, could
be partially due to the presence of a different RC population.
Cioni et al. (\cite{C06SMC}) did not find any different
population or metallicity gradient near the central regions.
Tosi et al. (\cite{T08}) obtained deep CMDs of 6 SMC regions to study the
star formation history. Three of their regions are located close to the bar and three are outside the bar.
They found an apparent
homogeneity of the old stellar population populating the subgiant
branch and the clump. This suggested that there are no large differences
in age and metallicity among the old stars in these locations.
Their SF1 region is located close to the region of large
depth identified here. The RC population in this region is found to be
very rich and the spread in magnitude is greater than those found in the
other CMDs. This spread is also suggestive of increased depth near
this location. It will be worthwhile to study the star formation history of regions near the
SMC center to understand how different the stellar population is in this
suggested bulge.\\
It may be worthwhile to see whether this bar is actually an extended/deformed bulge.
It is interesting that the so-called triangular
shaped bar of the SMC is also an unexplained component, which does not show the
signatures of a typical bar. A bulge could naturally
explain the formation of this odd shaped bar. Thus, we propose that the central SMC has
a bulge.
The elongation and the rather non-spherical appearance of the
bulge could
be due to tidal effects or minor mergers (Bekki \& Chiba \cite{BC08}).\\
\section {Disk and halo of the LMC}
Subramaniam (\cite{S06}) studied the distribution of RR Lyrae stars in the bar region of
the LMC. She found that
the RR Lyrae stars in the bar region have a disk-like distribution, but halo-like location.
The RR Lyrae stars are in the same evolutionary state as the RC stars, except that the RR Lyrae
stars belong to an older and metal poor population. Therefore they are good tracers of the halo.
Thus, it will be interesting to compare the depth of the halo as defined by the RR Lyrae stars
and the depth of the disk as defined by the RC stars.\\
Subramaniam (\cite{S06}) derived the dispersion in the extinction corrected average I magnitude
of RR Lyrae stars in
the bar region. After correcting for contribution to the dispersion due to other factors, the
dispersion due to depth was estimated. The total depth estimated for RR Lyrae stars in that paper
was compared with the RC depth.
The upper panel of figure 18 shows the dispersion
as estimated from RR Lyrae stars as open circles and that estimated from RC stars as dots.
The figure shows that the RR Lyrae depth ranged between 4.0 and 8.0 kpc (corresponding to a
scale height of 2.0 -- 4.0 kpc, as reported in the above paper).
Since the RR Lyrae stars were studied in the bar region, this comparison is valid only
for the bar region. It is assumed that the bar is part of the disk, which allows a comparison of
the disk with the halo. It can be seen that the depth as indicated by the RC stars is approximately
the lower limit set by the RR Lyrae stars. That is, the RR Lyrae stars span a greater depth
than the RC stars. Thus, at least in the central region of the LMC, the halo, as delineated by
the RR Lyrae stars, has a much greater
depth than the disk, as delineated by the RC stars. This supports the idea that there is an
inner halo for the LMC. It is interesting that, in an outside-in collapse scenario, the
disk starts to form at the end of the halo formation. This transition is more or less
indicated in the figure as the transition where the RR Lyrae stars stop forming in the halo
and the RC stars take over and the disk forms. In order to make this statement conclusive,
we need to make such a comparison
for the entire disk region of the LMC, not just the bar region. The present study, though
indicative, suggests that the halo to disk transition of the LMC follows the outside-in
formation, at least in the inner regions.\\
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{rrl_rc_fig.eps}}
\caption{Lower panel: Width corresponding to depth estimated from RR-Lyrae (open circles)
and from red clump stars (closed circles) is plotted against RA for the SMC. The upper panel shows a similar plot for the LMC.}
\end{figure}
\section {Disk and halo of the SMC}
A similar comparison can be made between the halo and disk/bar of the SMC. The RR Lyrae
stars from the OGLE II data were analysed following the procedure adopted by
Subramaniam (\cite{S06}).
The depth along the line of sight is estimated from the observed
dispersion in the extinction-corrected mean I magnitude of 458 ab type
RR Lyrae stars. The dispersion due to depth alone was estimated after correcting for the
metallicity and evolutionary effects. These were compared
with the dispersion estimated from RC stars in the lower panel of figure 18.
It can be seen that, contrary to what is
seen in the case of the LMC, both populations show a very similar dispersion in the SMC.
The figure not only suggests that the RR Lyrae stars and the RC stars occupy a similar depth, but
also indicates that they show a similar depth profile across the bar. The increased depth near the
optical center is also closely matched.
This suggests that the RR Lyrae stars and RC stars are born in the same location
and occupy a similar volume in the galaxy. This is a puzzling combination since, in general,
RR Lyrae stars and RC stars
do not co-exist, as they belong to the halo and disk populations respectively.\\
On the other hand, there is one location in the galaxy where this can take place, which is the bulge.
That is, we do see the very metal-poor low mass stars (RR Lyrae stars) and
their higher mass, metal rich counterparts (RC stars) co-existing in the bulge. Thus, the co-existence
and the similar depth of RR Lyrae stars and the RC stars in the central region of the SMC
can be easily explained, if it is the bulge. Thus the bar region of the SMC could in fact
be a bulge. This is in good agreement with the result obtained earlier, where a bulge-like
depth, and enhanced stellar and HI density were found near the optical center.
If this is true, then most of the RR Lyrae stars in the central region belong to this
bulge and not the halo. The depths of both these populations in the outer regions need to
be compared to make this picture complete.\\
Thus, the formation and evolution of the two clouds do not seem to be similar. The LMC is more or less
an irregular galaxy with a disk. On the other hand, the SMC could be a spheroid. In this study
we propose that the SMC has a bulge and the so-called bar is possibly this
deformed/extended bulge. The LMC does not
have a bulge, but the SMC has managed to form one. The LMC seems to have
undergone minor mergers, whereas the SMC seems to have experienced tidal forces and/or minor mergers.
The structure of the inner LMC agrees well with the outside-in formation of the LMC.
We do not find any evidence for such an inner halo in the SMC.
Thus, even though the clouds are located close to each other now, the early formation and
evolution of these two galaxies appear different.\\
\section {Conclusions}
1. The LMC and the SMC are found to have large line of sight depths (1-sigma) for the bar
(4.0$\pm$1.4 and 4.9$\pm$1.2 kpc) and disk (3.44$\pm$ 1.16 kpc and 4.23$\pm$1.48 kpc).\\
2. The LMC bar(4.0$\pm$1.4 kpc) and the northern disk (4.17$\pm$0.97 kpc) have similar, but large depth. The eastern (2.8$\pm$0.92 kpc), western (3.08$\pm$0.99 kpc) and the southern (2.63$\pm$0.79 kpc) disk have similar, but reduced depth. This may also be interpreted as due to different stellar populations.\\
3. The depth profile indicates flaring of the LMC bar.\\
4. The LMC halo is found to have greater depth than the disk/bar, which
supports the presence of the inner halo for the LMC.
The structure of the inner LMC agrees well with the outside-in formation of the LMC.\\
5. The SMC bar and the disk have similar depth, with no significant depth variation across the disk. \\
6. Increased depth is found near the optical center of the SMC.\\
7. The co-existence of RR Lyrae stars and RC stars in the central volume, along with the
increased depth and stellar \& HI density near the center, suggest that the SMC possibly
has a bulge. The central bar may be this deformed/extended bulge.\\
8. The large depths of the L\&SMC suggest that they have experienced heating,
probably due to minor mergers.\\
Acknowledgements: We thank the anonymous referee for useful comments which improved the presentation of the paper.
\section{Introduction}
Quantum spin liquids (SL) are ``quantum disordered'' ground states of spin systems,
in which zero-point fluctuations are so strong that they prevent conventional magnetic long-range order.
The main avenues towards realizing SL phases in magnetic systems are frustration and quantum phase
transitions. \cite{Savary17}
A particularly interesting example of SL is realized by tuning a frustrated magnetic
system close to a Lifshitz point (LP) that separates collinear and spiral states.
In the vicinity of the Lifshitz transition the quantum fluctuations are strongly enhanced, resulting in a plethora of novel intermediate quantum phases \cite{Balents16}.
A general argument in favour of a universal gapped SL phase near LP in two-dimensional
frustrated Heisenberg antiferromagnets (AF) was first proposed by Ioffe and Larkin \cite{Ioffe88}.
They showed that in the proximity of the LP quantum fluctuations destroy long-range spin correlations
and create a region in the phase diagram with a finite magnetic correlation length.
Subsequent studies found evidence for SL phases in various two-dimensional systems near the LP,
including Heisenberg models on square and honeycomb lattices with second and third nearest
neighbor antiferromagnetic couplings \cite{Ferrer93, Capriotti04, Reuther11a,Reuther11b,Zhu13,Zhang13,Bishop15,Oitmaa16,Merino18}.
However, the universality of the SL
phase near LP, its ubiquitous properties, and the relation of the general argument to
specific Heisenberg models have not previously been addressed.
In the present paper we revisit the Ioffe-Larkin scenario and consider a
field theory for a quantum Lifshitz transition between collinear and spiral phases in $D=2+1$.
Disregarding microscopic properties of specific lattice models
we focus on the generic infrared physics
at the LP.
We develop a field-theoretic description of the $O(3)$ Lifshitz point based on the extended nonlinear
sigma model.
The nonlinear sigma model provides a unifying theoretical framework that allows us to analyze the phase diagram, calculate positions of critical points,
excitation spectra, and static spin-spin correlation functions.
We demonstrate universal scaling of observables (gaps, positions of critical points, etc.) in terms of
the dimensionless SL gap at the LP, $\delta_0$, and show that
the correlation length in the SL phase scales as $\xi\sim 1/\sqrt{\delta_0}$.
We also argue that the LP spin liquid has a similarity to the gapped Haldane phase\cite{Haldane83} in integer-spin 1D chains.
However, for the 2D SL there is no significant difference between the integer and half-integer spin cases.
A particular example of a system that has a N\'eel-spiral LP, and hence manifests the spin liquid behavior, is the frustrated antiferromagnetic $J_1-J_2-J_3$ Heisenberg model on the square lattice with second and third nearest neighbour couplings, as well as its simplified version, the $J_1-J_3$ model.
We perform numerical series expansion calculations for the $J_1-J_3$ model and compare results of the calculations
with predictions of the developed field theory.
The structure of the paper is as follows. In Sec. \ref{sec:field_theory} we introduce
the effective field theory describing the N\'eel to Spin Spiral Lifshitz point.
Section \ref{sec:QLP} addresses the quantum LP, quantum fluctuations, and the criterion for quantum `melting'.
Next, in Sec. \ref{sec:crit_rho_delta} we calculate the spin-wave gap and positions of critical points.
Section \ref{sec:corr} addresses the static spin-spin correlator in the spin liquid phase.
In Sec. \ref{sec:J1-J3} we describe our numerical series calculations for the $J_1-J_3$ model with spin $S=1/2$ and $S=1$
and compare results of these calculations with predictions of the field theory.
Finally our conclusions are presented in Sec. \ref{sec:concl}.
\section{Effective field theory}\label{sec:field_theory}
We start with the following $O(3)$ symmetric Lagrangian describing a transition from the N\'eel to a
spiral phase in two dimensional antiferromagnets:
\begin{equation}
\mathcal{L} = \frac{\chi_\perp}{2}(\partial_t n_\mu)^2 - \frac{1}{2}n_\mu K(\partial_i) n_\mu, \quad (n_\mu)^2=1.\label{eq:L}
\end{equation}
Here $\chi_{\perp}$ is the transverse magnetic susceptibility,
$n_{\mu}$ is a unit length vector with $N=3$ components corresponding to the staggered magnetization, $\partial_i$ are the spatial gradients.
The general form of the ``elastic energy'' operator $K(\partial_i)$ in inversion symmetric systems reads
\begin{equation}\label{eq:K(q)}
K(\partial_i) = -\rho(\partial_i)^2 + \frac{b_1}{2}(\partial_x^4+\partial_y^4)+b_2\partial_x^2 \partial_y^2 + \mathcal{O}(\partial_i^6),
\end{equation}
where we assume that the $n$-field is sufficiently smooth.
The spin stiffness $\rho$ is the tuning parameter that drives the system across the Lifshitz
transition.
The spin stiffness is positive in the N\'eel phase, negative in the spiral phase and vanishes at
the Lifshitz point. The $b$-terms containing higher order spatial derivatives are necessary for stabilization of spiral order at negative $\rho$, and we will assume that $b_{1,2}>0$.
While the kinematic form of the Lagrangian (\ref{eq:L}) is dictated by global symmetries of the system,
a formal derivation
starting from a frustrated Heisenberg model can be found e.g. in Ref. \cite{Ioffe88}.
Note that in Lagrangian (\ref{eq:L}) we do not take into account topological terms. We will
discuss their possible role later in the text.
The Lagrangian (\ref{eq:L}) is relevant to a number of models and systems mentioned in the
Introduction.
Here we would like to mention another example motivated by rare-earth manganite materials
(Tb,La,Dy)MnO$_3$ (see Ref. \cite{Milstein15}).
These materials have a layered structure with the individual ferromagnetic layers coupled antiferromagnetically. Due to the antiferromagnetic interlayer coupling, the dynamics of the system is described by a second-order time derivative, as in usual antiferromagnets, in agreement with Eq. (\ref{eq:L}).
Within each plane there are ferromagnetic nearest neighbour and antiferromagnetic second nearest neighbour Heisenberg interactions, leading to an in-plane frustration.
These compounds could be tuned to the N\'eel-Spin-Spiral LP by performing chemical substitution.
Of course real materials are three-dimensional and contain many planes, however thin films can manifest some physics considered here.
\begin{figure}
\includegraphics[scale=0.3]{./rho_phase_diagr.eps}
\caption{ Schematic phase diagram in the vicinity of the Lifshitz transition between collinear antiferromagnetic and spiral states: a) classical Lifshitz transition, b) quantum phase diagram; strong quantum fluctuations in the vicinity of the Lifshitz point result in the intermediate spin liquid phase. c) Excitation energy $\omega_q$ in the spin liquid phase below and above LP.}\label{fig:phase_diagr}
\end{figure}
In the AF phase of (\ref{eq:L}), $\rho > 0$,
the rotational symmetry is spontaneously broken and the N\'eel vector has a nonzero expectation value,
e.g. is directed along the $z$ axis $\langle \bm n \rangle=\bm e_z$.
In the spin spiral phase, with $\rho<0$, there is an incommensurate ordering
\begin{equation}\label{eq:n}
\bm n(\bm r) = \bm {e_1} \cos(\bm{Qr}) + \bm {e_2} \sin(\bm{Qr}),
\end{equation}
where $\bm{e_{1,2}}$ are orthogonal unit vectors and $\bm Q$ is the pitch of
the spiral.
For $b_1\leq b_2$ the spiral wave vector is directed along $x$ or $y$:
$\bm Q= (\pm Q,0), (0,\pm Q)$, where $Q^2=|\rho|/b_1$ (on the axes only the $b_1$ part of the quartic term survives). In the opposite case $b_1>b_2$ the wave vector is directed along the main diagonals: $\bm Q= \frac{1}{\sqrt{2}}(\pm Q,\pm Q), \frac{1}{\sqrt{2}}(\pm Q,\mp Q)$, where $Q^2 = 2|\rho|/(b_1+b_2)$.
The relation between the coefficients $b_1$ and $b_2$ depends on the specific choice of the lattice model.
In the ``isotropic'' case, $b_1=b_2$, the system has additional rotational degeneracy in the momentum space due to the arbitrary orientation of wave vector $\bm Q$.
The additional degeneracy can destabilize spiral states and result in quantum spin liquid states that have been predicted for 3D antiferromagnets.\cite{Bergman07}
In the present paper we will stay away from this special critical point.
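The location of the spiral wave vector is easy to confirm numerically. The sketch below (our illustration, not part of the original derivation) minimizes $K(\bm q)$ on a grid for the two cases; the values $\rho=-0.1$, $b_1=1$, $b_2\in\{2,0\}$ are arbitrary illustrative choices. Since the $b_2$ term vanishes on the axes, the axis-aligned minimum sits at $Q^2=|\rho|/b_1$.

```python
import numpy as np

def K(qx, qy, rho, b1, b2):
    # elastic energy K(q) of the extended sigma model in momentum space
    return (rho * (qx**2 + qy**2)
            + 0.5 * b1 * (qx**4 + qy**4)
            + b2 * qx**2 * qy**2)

def minimum_of_K(rho, b1, b2, n=801):
    # brute-force grid search for the minimum of K over |q_i| <= 1
    qs = np.linspace(-1.0, 1.0, n)
    QX, QY = np.meshgrid(qs, qs)
    i, j = np.unravel_index(np.argmin(K(QX, QY, rho, b1, b2)), (n, n))
    return QX[i, j], QY[i, j]

# b1 < b2: minimum on an axis, Q^2 = |rho|/b1
qx_a, qy_a = minimum_of_K(rho=-0.1, b1=1.0, b2=2.0)
# b1 > b2: minimum on a diagonal, Q^2 = 2|rho|/(b1 + b2)
qx_d, qy_d = minimum_of_K(rho=-0.1, b1=1.0, b2=0.0)
```

A direct comparison of the two candidate directions at fixed $|q|$ shows why the switch happens at $b_1=b_2$: the quartic energy is $b_1q^4/2$ on an axis and $(b_1+b_2)q^4/4$ on a diagonal.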
The classical phase diagram is shown schematically in Fig.\ref{fig:phase_diagr}a.
We would like to make a comment regarding Lagrangian (\ref{eq:L}).
The parameters of any field theory depend on the momentum and energy scales, as described by the renormalization group procedure.
We assume that parameters in (\ref{eq:L}),(\ref{eq:K(q)})
are fixed at the ultraviolet cutoff
$\Lambda \approx 1$, where unity corresponds to the inverse lattice spacing.
Quantum fluctuations at scales larger than $\Lambda$ but smaller
than the boundary of magnetic Brillouin zone lead to a renormalization of the parameters $\rho\rightarrow\rho^{ren}$, $ b_{1,2}\rightarrow b_{1,2}^{ren}$, $\ldots$. Therefore, the values of the parameters in
(\ref{eq:L}),(\ref{eq:K(q)})
can be different from those naively derived using spin wave theory.
As was pointed out by Ioffe and Larkin \cite{Ioffe88} this renormalization is especially relevant for the
spin stiffness. The correction to the spin stiffness arises due to the $b$-terms in (\ref{eq:K(q)}).
The easiest way to understand the correction\cite{Polyakov75} is to consider the N\'eel phase
and decompose the order parameter into two transverse components and a longitudinal component
\begin{equation}
\bm n = (\bm \pi,n_z), \quad n_z=\sqrt{1-\bm \pi^2}\approx 1- \bm \pi^2/2 \ .
\end{equation}
Hence the following contribution from the $b$-term arises
\begin{eqnarray}
\label{nz}
\partial^2 n_z \partial^2 n_z
\sim b \ (\partial^2 \bm{\pi}^2 ) ( \partial^2 \bm\pi^2 )\ .
\end{eqnarray}
The field $\bm\pi$ has fluctuations with momenta smaller than $\Lambda$, ${\bm \pi}_<$,
and fluctuations with momenta larger than $\Lambda$, ${\bm \pi}_>$, ${\bm \pi}={\bm \pi}_<+{\bm \pi}_>$.
Substitution in (\ref{nz}) and averaging over high energy fluctuations gives
\begin{eqnarray}
\label{nz1}
b \ (\partial^2 \bm{\pi}^2 )( \partial^2 \bm\pi^2 )
\to
b \ (\partial \bm{\pi}_<)^2 \langle(\partial \bm\pi_>)^2\rangle
=\delta\rho_{\Lambda}(\partial \bm{\pi}_<)^2\ .
\end{eqnarray}
Note that when averaging $(\partial^2 \bm{\pi}^2 )\times (\partial^2 \bm\pi^2)$ each multiplier must
contain both the high ($\bm\pi_>$) and the low ($\bm\pi_<$) energy components. The terms in which one multiplier contains
only the high energy and the other only the low energy components give rise to total derivative contributions to the Lagrangian and can be neglected.
Equation (\ref{nz1}) demonstrates a positive correction to the spin stiffness.
Therefore quantum fluctuations always extend the N\'eel phase
compared to the prediction of spin-wave theory, as indicated in panel b of Fig. \ref{fig:phase_diagr}.
The Lifshitz point in the quantum case is shifted to the left compared to the Lifshitz
point in the classical case.
In the quantum case the Lifshitz point is ``buried'' in the spin liquid phase.
Nevertheless, it is unambiguously defined as we discuss in the following Sections.
\section{Quantum Lifshitz point: the phase diagram and the spin liquid gap}\label{sec:QLP}
Quantum fluctuations destroy the classical N\'eel to spin-spiral Lifshitz transition \cite{Ioffe88}.
Let us calculate local staggered magnetization $n_z$ when approaching the LP from
the N\'eel phase.
Representing the staggered magnetization as
$\langle n_z \rangle \approx 1 - \frac{1}{2}\langle \bm \pi^2 \rangle$, we obtain
\begin{eqnarray}\label{eq:pi2}
\langle \bm \pi^2 \rangle \approx (N-1) \sum_q \int \frac{id\omega }{(2\pi)} \frac{1}{\chi_\perp \omega^2 - K(\bm q) + i0} \nonumber \\ =
(N-1) \int \frac{d^2q}{(2\pi)^2} \frac{1/\chi_\perp}{2\omega_q},
\end{eqnarray}
where $\omega_q = \chi_\perp^{-1/2}\sqrt{\rho q^2 + b_1/2 (q_x^4+q_y^4) + b_2q_x^2q_y^2}$.
In the vicinity of the LP, $\rho\rightarrow 0$, the integral (\ref{eq:pi2}) is logarithmically divergent, $\langle \bm \pi^2\rangle \propto \ln \left(\frac{\Lambda}{\sqrt\rho}\right)$, where $\Lambda$ is the ultraviolet momentum cutoff.
Hence at some critical value of the spin stiffness $\rho=\rho_{cN}$ the staggered
magnetization $\langle n_z \rangle$ vanishes, indicating a transition to the spin liquid phase.
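The logarithmic growth of $\langle \bm\pi^2 \rangle$ as $\rho\to 0$ is straightforward to verify numerically. The sketch below (our check; $b_2=0$ for simplicity, with $\chi_\perp$ and $b_1$ absorbed into an overall prefactor) evaluates the momentum integral in polar coordinates and confirms that each decade of $\rho$ adds a nearly constant amount $\approx(\zeta/2)\ln 10\approx 1.36$ to the integral.

```python
import numpy as np

def I(rho, n_q=4000, n_phi=720):
    # (1/2pi) * int_{|q|<1} d^2q / sqrt(rho q^2 + qx^4 + qy^4),  b2 = 0
    q = (np.arange(n_q) + 0.5) / n_q                      # midpoint rule on (0, 1)
    phi = (np.arange(n_phi) + 0.5) * 2 * np.pi / n_phi
    c = np.cos(phi)**4 + np.sin(phi)**4                   # angular part of the quartic term
    Q, C = np.meshgrid(q, c, indexing='ij')
    # polar measure q dq dphi; one power of q cancels inside the square root
    val = (1.0 / np.sqrt(rho + C * Q**2)).sum() * (1.0 / n_q) * (2 * np.pi / n_phi)
    return val / (2 * np.pi)

decade_a = I(1e-2) - I(1e-1)   # one decade of rho
decade_b = I(1e-3) - I(1e-2)   # the next decade
```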
In the spin liquid phase, $\rho<\rho_{cN}$, a gap $\Delta$ must open to regularize
the integral in Eq. (\ref{eq:pi2})
\begin{eqnarray}\label{eq:omega_q}
&&\omega_q\rightarrow \sqrt{\omega_q^2+\Delta^2} \nonumber\\
&&=\sqrt{\Delta^2 + \chi_\perp^{-1}[\rho q^2 + b_1/2 (q_x^4+q_y^4) + b_2q_x^2q_y^2]}.
\end{eqnarray}
The opening of the gap indicates the existence of a spin liquid phase in which the long range AF order is lost and the order parameter correlations decay exponentially.
Importantly, this is a generic gapped spin liquid originating from long range fluctuations, and it is unrelated to spin-dimer ordering.
The SL gap is zero, $\Delta=0$, at the critical point $\rho_{cN}$ and the gap increases when we proceed deeper into the spin liquid phase.
The SL phase stretches across a finite window $[\rho_{cS},\rho_{cN}]$ in the vicinity of the LP, as depicted in Fig. \ref{fig:phase_diagr}b.
The elementary spin excitations in the AF phase are two gapless Goldstone modes - transverse spin-waves and a
massive longitudinal ('Higgs') mode. Due to the unit length constraint ($\bm n^2=1$) the Higgs mode has a very
large energy and can be disregarded.
In the spiral phase there are three Goldstone modes: a sliding mode and two out of plane excitations. These three
modes correspond to the three Euler angles defining the orientation of the $(\bm e_1, \bm e_2, \bm e_3 )$ triad,
where $\bm e_3 = [\bm e_1 \times \bm e_2]$ \cite{Azaria90, Milstein15}.
The excitation modes (\ref{eq:omega_q}) in the SL phase are three-fold degenerate due to $O(3)$ rotational invariance of the model. Above the LP ($\rho>0$) the minimum of dispersion is located at $q=0$, whereas below the LP ($\rho<0$) the dispersion has four degenerate minima at the 'spiral' wave vectors $q=\bm Q$.
The evolution of the dispersion across the LP is schematically shown in Fig. \ref{fig:phase_diagr}c.
The change of the shape of the dispersion indicates the Lifshitz point.
The location of this critical point $\rho_{cN}$ can be found by imposing the condition $\langle n_z\rangle \rightarrow 0$,
which naively provides the following criterion for the transverse spin fluctuations $\langle \bm\pi^2 \rangle_c \approx 2 $.
This critical value for $\langle \bm\pi^2 \rangle$ is largely overestimated and it is not consistent with the
unit length constraint.
One can find a more accurate value of $\langle \bm\pi^2 \rangle_c$ by accounting for the next order terms in the Taylor
series expansion of $n_z = \sqrt{1-\bm\pi^2}$ (see Appendix \ref{sec:nz_Taylor}), or alternatively by using the $1/N$ expansion for $O(N)$ theory.
The $1/N$ expansion has been extensively applied to describe quantum antiferromagnets. For the most relevant examples see Refs. \cite{Read90, Chubukov94, Affleck89}.
In the $1/N$ expansion approach we lift the hard constraint $\bm n^2=1$ by introducing a Lagrange multiplier
\begin{equation}\label{eq:L+lmbda}
\mathcal{L} \rightarrow \mathcal{L} - \lambda (\bm n^2 - 1).
\end{equation}
After integrating out the $\bm n$ field in the new Lagrangian (\ref{eq:L+lmbda}), we obtain an effective Lagrangian depending only on the auxiliary field $\lambda$:
\begin{equation}\label{eq:L_eff}
\mathcal{L}_\lambda = N\, \mathrm{tr} \ln (-\chi_\perp\partial_{tt} - K(\bm q) - \lambda) + \lambda.
\end{equation}
We can find the saddle point in the Lagrangian $\mathcal L_\lambda$ by calculating the variational derivative in (\ref{eq:L_eff}) with respect to $\lambda$ and regarding $\lambda$ as a constant, $\lambda=\chi_\perp\Delta^2$:
\begin{equation}\label{eq:N_gap_cond}
N \sum_q \int \frac{id\omega}{(2\pi)}\frac{1}{\chi_\perp(\omega^2 - \Delta^2) - K(\bm q) } = 1.
\end{equation}
The Lagrange multiplier in Eq. (\ref{eq:N_gap_cond}) has the meaning of the spin gap. Equation (\ref{eq:N_gap_cond}) determines the evolution of the gap $\Delta(\rho)$ with the spin stiffness in the SL phase.
Comparing Eq. (\ref{eq:N_gap_cond}) with Eq. (\ref{eq:pi2}) we conclude that at the boundary between SL and AF phases
$ \langle \bm\pi^2\rangle_c = (N-1)/N=2/3$.
This criterion is quite natural for the $O(3)$ symmetric quantum critical point separating N\'eel and SL states.
Nevertheless, this criterion underestimates $\langle \bm\pi^2\rangle_c$.
One can see this from the example of the $S=1/2$ 2D Heisenberg model on the square lattice.
The textbook expression for the staggered magnetization is
\begin{eqnarray}
\label{hm}
\langle n_z\rangle =2\langle S_z\rangle= 1-2\int_{MBZ}\frac{d^2q}{(2\pi)^2}\left(\frac{1}{\sqrt{1-\gamma_q^2}}-1\right) ,
\end{eqnarray}
where $\gamma_q=\frac{1}{2}(\cos q_x+\cos q_y)$, and integration is performed over the magnetic Brillouin zone.
In the limit $q\ll 1$ Eq. (\ref{hm}) is consistent with (\ref{eq:pi2}), since in this case $\chi_{\perp}=1/8J$ and
$\omega_q/J \approx \sqrt{2}q$, where $J$ is the Heisenberg AF coupling. Integration over $q$ in (\ref{hm}) gives the well known result $\langle n_z\rangle \approx 2\times0.305$
which corresponds to $\langle \bm\pi^2\rangle \approx 0.78$ in the equation
$\langle n_z \rangle \approx 1 - \frac{1}{2}\langle \bm \pi^2 \rangle$.
The integration in the corresponding long-wavelength approximation (\ref{eq:pi2}) with $N=3$,
$\chi_{\perp}=1/8J$, $\omega_q\approx \sqrt{2}J q$ and the ultraviolet cutoff $\Lambda=1$
gives a close value $\langle \bm\pi^2\rangle \approx 0.89$.
Both values are above 2/3 and we know that the long range AF order in the unfrustrated Heisenberg model
still persists.
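The numbers quoted above are easy to reproduce. The sketch below (our check) evaluates the lattice integral in (\ref{hm}) by a midpoint rule; since $\gamma_{q+(\pi,\pi)}=-\gamma_q$ leaves the integrand invariant, the magnetic-Brillouin-zone integral is half of the full-zone integral.

```python
import numpy as np

n = 800
h = 2 * np.pi / n
# midpoint grid avoids the integrable singularities at gamma = +/-1
q = -np.pi + (np.arange(n) + 0.5) * h
QX, QY = np.meshgrid(q, q)
gamma = 0.5 * (np.cos(QX) + np.cos(QY))
integrand = 1.0 / np.sqrt(1.0 - gamma**2) - 1.0
# MBZ integral = half of the full-BZ integral
I_mbz = 0.5 * integrand.sum() * h**2 / (2 * np.pi)**2
nz = 1.0 - 2.0 * I_mbz          # staggered magnetization <n_z> = 2 <S_z>
```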
Based on this analysis we estimate the critical value of fluctuation as
\begin{eqnarray}
\label{pc}
\langle \bm\pi^2\rangle_c \approx 1.
\end{eqnarray}
Equation (\ref{pc}) is an analogue of the Lindemann criterion for quantum melting of long range magnetic order
in 2D quantum magnets.
Our approach implicitly violates rotational invariance, but it allows us to calculate approximately
the positions of critical points and the value of the spin liquid gap.
The spin liquid gap $\Delta$ is determined by Eqs. (\ref{eq:pi2}) and (\ref{eq:omega_q})
from the condition $\langle \bm\pi^2\rangle = \langle \bm\pi^2\rangle_c \approx 1$.
At $\rho> 0$ (the N\'eel side of LP) $\Delta$ coincides with the physical gap.
On the spiral side of LP, $\rho<0$, the physical gap corresponds to the excitation energy at the ``spiral''
wave vector $\bm Q$: $\Delta_{ph} = \min \omega_q = \sqrt{\Delta^2+\frac{1}{\chi_\perp}K(\bm Q)}$,
see Fig. \ref{fig:phase_diagr}c.
This gap is closed at the spin-spiral-SL critical point. Therefore, the position of this critical point $\rho_{cS}$ is determined from the following
two equations
\begin{equation}
\begin{cases}
2 \sum_{q< \Lambda} \int \frac{id\omega }{(2\pi)} \frac{1}{\chi_\perp(\omega^2 - \Delta^2) - K(\bm q) + i0}=1,\\
\Delta^2_{ph}=\Delta^2 + \frac{1}{\chi_\perp}K(Q) = 0.
\end{cases}
\end{equation}
At $\rho<\rho_{cS}$, the magnon Green's function acquires a pole at imaginary frequency
$\omega=\pm i \sqrt{|\Delta^2 + K(Q)/\chi_\perp|}$. This is the indication of an instability of the SL
phase towards condensation of a static spiral with the wave vector $\bm Q$.
It is instructive to draw an analogy between the SL physics at 2D Lifshitz point and the one-dimensional Haldane spin chain.
A condition similar to (\ref{eq:N_gap_cond}) determines the value of the Haldane gap. \cite{Affleck89}
Indeed, the integer spin $S$ Heisenberg model in the continuous limit can be mapped to the $O(3)$
relativistic nonlinear sigma model in $D=1+1$.\cite{Haldane83}
The model parameters are the speed of the magnon, $c=\sqrt{\rho/\chi_\perp}=2J S$, and the
transverse magnetic susceptibility, $\chi_\perp = 1/4J$ ($J$ is the Heisenberg coupling constant).
Proceeding by analogy with (\ref{eq:pi2}) we find the fluctuations of the spin in the Haldane model
\begin{equation}
\label{eq:NLSM_Haldane}
\langle \bm\pi^2\rangle_c = 2\int_0^\Lambda \frac{dq}{2\pi} \frac{1}{2\chi_\perp\sqrt{c^2q^2
+ \Delta^2}} \approx \frac{1}{2\pi c\chi_{\perp}} \ln \frac{c\Lambda}{\Delta}.
\end{equation}
As we already discussed, the ultraviolet cutoff is $\Lambda\approx 1$.
The logarithmically divergent $\langle \bm \pi^2 \rangle$ in the Haldane model is analogous to the
log-divergence in (\ref{eq:pi2}) at the LP.
Numerical values of the Haldane gaps for $S=1$ and $S=2$ are known from DMRG calculations: see e.g.
Ref. \cite{Renard03}, $\Delta_{S=1}/J \approx 0.41$, $\Delta_{S=2}/J \approx 0.08$.
Substituting these values of the gap into Eq. (\ref{eq:NLSM_Haldane}) we obtain the following critical values of
fluctuations, $\langle \bm\pi^2 \rangle_c \approx 0.5$ (for $S=1$) and $\langle \bm\pi^2 \rangle_c \approx 0.6$
(for $S=2$), which
are smaller than (\ref{pc}). We believe that the difference is due to different dimensionality.
While DMRG is more reliable it is interesting to note that the renormalization group analysis \cite{Affleck89}
for the Haldane chain gives $\langle \pi^2\rangle_c = 1$.
The difference in the values of $\langle \pi^2\rangle_c$ is not crucial when making comparisons between 1D and 2D systems.
However, it is well known that properties of the spin chains with half-integer and integer spins are very different.
The gapped SL phase in 1D appears only in the integer spin chains, while in contrast the excitations of
half-integer spin chains are gapless spinons in agreement with the Lieb-Shultz-Mattis theorem. \cite{Lieb61}
We believe that the 2D spin liquid in the vicinity of LP point is generic and
independent of the spin value.
The Lieb-Shultz-Mattis theorem states that in systems with half-integer spin per unit lattice cell and
full rotational $SU(2)$ symmetry the excitations are gapless or otherwise the ground state of the system
is degenerate. The theorem was initially formulated for $D=1+1$ systems and later generalized for higher spatial
dimensions \cite{Hastings04}.
Technically in $D=1+1$ the dramatic difference between integer and half integer spin is due to the topological Berry
phase term which is not included in the Lagrangian (\ref{eq:L}).\cite{Haldane83}
Topological effects in $D=2+1$ correspond to skyrmions or merons.\cite{Takayoshi16}
In principle topological configurations become more important when approaching the Lifshitz point. \cite{Kharkov17}
However such topological solutions are unstable within the model (\ref{eq:L}). Using scaling arguments one can see that due to the fourth spatial derivative term in the Lagrangian (\ref{eq:L}) the energy of localized skyrmions at LP behaves as $\sim b_{1,2}/R^2$, where $R$ is the skyrmion radius.
Therefore any localized skyrmions energetically prefer to have large size $R\rightarrow\infty$ and only contribute to the boundary terms.
Although the topological solutions might play a role to reconcile with the Lieb-Shultz-Mattis theorem, these configurations are statistically irrelevant in the bulk.
\section{Positions of N\'eel-Spin liquid and Spin-Spiral-Spin liquid critical points}\label{sec:crit_rho_delta}
In order to make our calculations more specific and having in mind comparison with the $J_1-J_3$ model, in this Section we set $b_2=0$.
It is convenient to introduce dimensionless spin stiffness and dimensionless gap parameters
\begin{eqnarray}
\label{rd}
&&\bar\rho = \frac{2\rho}{b_1},\quad \delta = \sqrt{\frac{2\chi_\perp}{b_1}}\Delta\ .
\end{eqnarray}
At negative $\rho$ the spiral wave vector is directed along the main diagonals
$\bm Q = \frac{1}{\sqrt{2}}(Q,\pm Q)$,
\begin{eqnarray}
\label{qdd}
Q^2=|\bar\rho| \ .
\end{eqnarray}
As we already discussed in Section \ref{sec:QLP} the condition of criticality reads
\begin{eqnarray}\label{eq:pi2_b2=0}
\langle{\bm\pi^2}\rangle_c\approx 1\approx
\frac{\sqrt{2}}{(4\pi^2)\sqrt{\chi_{\perp}b_1}}
\int\frac{d^2q}{\sqrt{{\bar \rho}q^2+q_x^4+q_y^4+\delta^2}} \ .
\end{eqnarray}
First, we determine the gap exactly at the LP, $\delta_0=\delta(\rho= 0)$.
For $\delta_0 \ll 1$ the solution of (\ref{eq:pi2_b2=0}) is
\begin{eqnarray}
\label{eq:delta_0}
\delta_0=1.7 \Lambda^2 e^{- \frac{2\sqrt{2}\pi}{\zeta} \sqrt{\chi_{\perp}b_1} }.
\end{eqnarray}
The constant $\zeta$ in the exponent is given by the angular part of the $q$-integral
$\zeta = \frac{2}{\pi}K\left(\frac{1}{2}\left[1-\frac{b_2}{b_1}\right]\right)$,
where $K(m) = \int_0^{\pi/2} d\phi\frac{1}{\sqrt{ 1 - m \sin^2{\phi} }}$ is the complete elliptic integral.
In the specific case under consideration, $b_2 = 0$, $\zeta = \frac{2}{\pi}K(1/2)\approx 1.18$.
The numerical prefactor $A=1.7$ in (\ref{eq:delta_0}) is found by performing a least-squares
fitting of the integral in Eq. (\ref{eq:pi2_b2=0}).
While Eq. (\ref{eq:delta_0}) is derived for $\delta_0 \ll 1$, direct numerical integration in (\ref{eq:pi2_b2=0}) shows that (\ref{eq:delta_0}) works in practice up to
$\delta_0 \leq 0.6 - 0.7$.
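Both the constant $\zeta$ and the asymptotic formula (\ref{eq:delta_0}) can be checked directly. The sketch below (our check; the value $s=\sqrt{\chi_\perp b_1}=0.3$ is an arbitrary illustrative choice) computes $K(1/2)$ through the arithmetic-geometric mean, $K(m)=\pi/[2\,\mathrm{AGM}(1,\sqrt{1-m})]$, and solves the criticality condition (\ref{eq:pi2_b2=0}) at $\bar\rho=0$ by bisection in $\delta$.

```python
import math
import numpy as np

def agm(a, b):
    # arithmetic-geometric mean; here zeta = (2/pi) K(1/2) = 1 / AGM(1, sqrt(1/2))
    for _ in range(40):
        a, b = 0.5 * (a + b), math.sqrt(a * b)
    return a

zeta = 1.0 / agm(1.0, math.sqrt(0.5))     # ~ 1.18

def lhs(delta, s, n_q=2000, n_phi=360):
    # sqrt(2)/(4 pi^2 s) * int_{|q|<1} d^2q / sqrt(qx^4 + qy^4 + delta^2)
    q = (np.arange(n_q) + 0.5) / n_q
    phi = (np.arange(n_phi) + 0.5) * 2 * np.pi / n_phi
    c = np.cos(phi)**4 + np.sin(phi)**4
    Q, C = np.meshgrid(q, c, indexing='ij')
    val = (Q / np.sqrt(C * Q**4 + delta**2)).sum() * (1.0 / n_q) * (2 * np.pi / n_phi)
    return math.sqrt(2) / (4 * math.pi**2 * s) * val

def delta0(s):
    # bisection (on a log scale) for the gap at the Lifshitz point, lhs(delta0) = 1
    lo, hi = 1e-6, 2.0
    for _ in range(50):
        mid = math.sqrt(lo * hi)
        if lhs(mid, s) > 1.0:   # fluctuations too strong -> gap must grow
            lo = mid
        else:
            hi = mid
    return mid

s = 0.3                                    # s = sqrt(chi_perp b1), illustrative
d_num = delta0(s)
d_asym = 1.7 * math.exp(-2.0 * math.sqrt(2) * math.pi * s / zeta)
```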
In order to determine the position of the N\'eel critical point $\rho_{cN}$ we evaluate
the integral in (\ref{eq:pi2_b2=0}) at $\delta \ll {\bar{\rho}}\ll 1$,
\begin{eqnarray}
\label{qf_main}
\frac{1}{2\pi}
\int\frac{d^2q}{\sqrt{{\bar {\rho}}q^2+q_x^4+q_y^4+\delta^2}}
\approx \frac{\zeta}{2} \ln\left(\frac{2.9 \Lambda^2}{\bar{\rho}}\right)- \frac{\delta}{\bar{\rho}}
\end{eqnarray}
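The expansion (\ref{qf_main}) can be tested against direct numerical integration. In the sketch below (our check), $\bar\rho=0.02$ and $\delta=0.002$ are arbitrary small values in the regime $\delta\ll\bar\rho\ll 1$ where the asymptotic form applies.

```python
import numpy as np

def I(rho_bar, delta, n_q=4000, n_phi=360):
    # (1/2pi) * int_{|q|<1} d^2q / sqrt(rho q^2 + qx^4 + qy^4 + delta^2)
    q = (np.arange(n_q) + 0.5) / n_q
    phi = (np.arange(n_phi) + 0.5) * 2 * np.pi / n_phi
    c = np.cos(phi)**4 + np.sin(phi)**4
    Q, C = np.meshgrid(q, c, indexing='ij')
    integrand = Q / np.sqrt(rho_bar * Q**2 + C * Q**4 + delta**2)
    return integrand.sum() * (1.0 / n_q) * (2 * np.pi / n_phi) / (2 * np.pi)

zeta = 1.18
rho_bar, delta = 0.02, 0.002
lhs_num = I(rho_bar, delta)
rhs_asym = 0.5 * zeta * np.log(2.9 / rho_bar) - delta / rho_bar
```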
The condition $\delta=0$ gives the position of the N\'eel-SL critical point $\bar\rho_{cN}$:
\begin{eqnarray}
\label{fi1_main}
{\bar {\rho}_{cN}}\approx 2.9 \Lambda^2 e^{- \frac{2\sqrt{2}\pi}{\zeta} \sqrt{\chi_{\perp}b_1} }
\approx 1.65\delta_0 \ .
\end{eqnarray}
According to (\ref{qf_main}) in the vicinity of the N\'eel-SL critical point, ${\bar {\rho}} < {\bar {\rho}_{cN}}$,
the gap grows linearly as $\delta \approx 0.64({\bar {\rho}_{cN}}-{\bar {\rho}})$, which corresponds to a mean-field prediction.
The spin stiffness $\rho_{cN}$ at the transition point from the N\'eel phase to the spin liquid phase is small but still finite.
Therefore, we believe that the transition belongs to the standard $O(3)$ universality class, the same as that in the bilayer
quantum antiferromagnet, see e.g. Ref. \cite{Shevchenko2000}.
The correct critical index for $O(3)$ transition is $\nu\approx0.7$, which implies
$\delta \propto ({\bar {\rho}_{cN}}-{\bar {\rho}})^{\nu}$.
On the side of negative spin stiffness, $\bar\rho_{cS}<\bar\rho<0$, the dimensionless physical gap reads
\begin{equation}\label{eq:delta_ph}
\alpha = \sqrt{\frac{2\chi_\perp}{b_1}}\Delta_{ph} = \sqrt{\delta^2 - \bar\rho^2/2}.
\end{equation}
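The relation between $\Delta_{ph}$ and $\Delta$ can be confirmed by brute force. The sketch below (ours) minimizes the dimensionless dispersion squared, $(2\chi_\perp/b_1)\,\omega_q^2=\delta^2+\bar\rho q^2+q_x^4+q_y^4$, on a grid at an illustrative spiral-side point $\bar\rho=-0.3$, $\delta=0.5$, and also confirms that the minimum sits on the spiral ring $q^2=|\bar\rho|$ of Eq. (\ref{qdd}).

```python
import numpy as np

rho_bar, delta = -0.3, 0.5          # spiral side, with delta^2 > rho_bar^2 / 2
qs = np.linspace(-1.0, 1.0, 1201)
QX, QY = np.meshgrid(qs, qs)
# dimensionless dispersion squared, (2 chi_perp / b1) * omega_q^2, at b2 = 0
omega2 = delta**2 + rho_bar * (QX**2 + QY**2) + QX**4 + QY**4
i, j = np.unravel_index(np.argmin(omega2), omega2.shape)
alpha2 = omega2[i, j]               # squared dimensionless physical gap
Q2 = QX[i, j]**2 + QY[i, j]**2      # squared spiral wave vector
```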
The condition $\alpha=0$ determines the position of the spin-spiral to SL critical point $\rho_{cS}$.
Calculating the integral in (\ref{eq:pi2_b2=0})
at $\alpha \ll Q^2 \ll 1$ we find
\begin{eqnarray}
\label{qf7_main}
&&\frac{1}{2\pi}
\int\frac{d^2q}{\sqrt{Q^4/2-Q^2q^2+q_x^4+q_y^4+\alpha^2}}\nonumber\\
&&\approx \zeta \ln\left(\frac{5.4 \Lambda}{Q}\right) -2 \frac{\alpha}{Q^2} \ .
\end{eqnarray}
The condition $\alpha=0$ gives the position of the critical point $\bar\rho_{cS}$:
\begin{equation}
\label{eq:rho_cS}
\bar\rho_{cS} = - Q^2 \approx - 15\delta_0 \ .
\end{equation}
The gap in the vicinity of this critical point is $\alpha = 0.27 (\bar\rho-\bar\rho_{cS})$.
This is a mean-field result and we believe that the transition at $\rho_{cS}$ does not belong
to a standard universality class.
The dimensionless gap found by numerical solution of Eq. (\ref{eq:pi2_b2=0}) for different values of $\delta_0$
in the entire SL
region $\rho_{cS}<\rho<\rho_{cN}$ is presented in Fig. \ref{fig:gaps_spins}.
\begin{figure}
\includegraphics[scale=0.3]{./gaps_spins1.eps}
\caption{ Dimensionless spin liquid gap versus spin stiffness
for different values of $\delta_0$.
}
\label{fig:gaps_spins}
\end{figure}
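A curve like those in Fig. \ref{fig:gaps_spins} can be generated by solving the criticality condition at each $\bar\rho$. The sketch below (our illustration; $s=\sqrt{\chi_\perp b_1}=0.3$ is an arbitrary choice giving $\delta_0\approx 0.18$) uses bisection in $\delta$ and checks that the gap grows monotonically as $\bar\rho$ decreases through the Lifshitz point, reflecting the stronger fluctuations on the spiral side.

```python
import math
import numpy as np

def pi2(rho_bar, delta, s, n_q=2000, n_phi=360):
    # left-hand side of the criticality condition (\ref{eq:pi2_b2=0})
    q = (np.arange(n_q) + 0.5) / n_q
    phi = (np.arange(n_phi) + 0.5) * 2 * np.pi / n_phi
    c = np.cos(phi)**4 + np.sin(phi)**4
    Q, C = np.meshgrid(q, c, indexing='ij')
    # clamp: a negative argument means delta lies below the spiral instability,
    # so the huge clamped value correctly steers the bisection towards larger delta
    arg = np.maximum(rho_bar * Q**2 + C * Q**4 + delta**2, 1e-30)
    val = (Q / np.sqrt(arg)).sum() * (1.0 / n_q) * (2 * np.pi / n_phi)
    return math.sqrt(2) / (4 * math.pi**2 * s) * val

def gap(rho_bar, s):
    lo, hi = 1e-8, 3.0
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        if pi2(rho_bar, mid, s) > 1.0:   # fluctuations too strong -> larger gap
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

s = 0.3
d_neel, d_lp, d_spiral = gap(0.1, s), gap(0.0, s), gap(-0.1, s)
```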
From this figure we conclude that asymptotic solutions given by Eqs. (\ref{fi1_main})
and (\ref{eq:rho_cS}) become valid only at sufficiently small values of $\delta_0$
(i.e large values of S): Eq. (\ref{fi1_main}) is valid at $\delta_0 \lesssim 0.2$ and
Eq. (\ref{eq:rho_cS}) is valid only for very small gaps, $\delta_0 \lesssim 0.02$.
The asymmetry between $\rho_{cS}$ and $\rho_{cN}$ evident from Fig. \ref{fig:gaps_spins}
is due to stronger quantum fluctuations in the spiral ($\rho<0$) region compared to
the $\rho>0$ domain.
An alternative method to determine $\bar{\rho}_{cS}$ is to approach the spiral-SL critical point from the spiral phase and find the condition when quantum fluctuations melt the spiral.
The fluctuations of the spiral consist of the out-of-plane mode $h({\bm r},t)$ and the in-plane mode $\phi(\bm r,t)$, and can be parametrized in the form
\begin{eqnarray}
\label{wh0}
{\vec n}=(\sqrt{1-h^2}\cos({\bm Q}\cdot{\bm r}+\phi),\sqrt{1-h^2}\sin({\bm Q}\cdot{\bm r} + \phi),h) \ .
\end{eqnarray}
The total quantum fluctuation orthogonal to the spin alignment in the spiral state reads
\begin{eqnarray}
\label{qf}
&&\langle \bm\pi^2 \rangle = \langle \phi^2 \rangle+\langle h^2 \rangle, \\
&&\langle \phi^2 \rangle=
\frac{1}{(4\pi^2)\sqrt{2\chi_{\perp}b_1}}
\int\frac{d^2q}{\sqrt{2Q^2q^2+q_x^4+q_y^4}},\nonumber\\
&&\langle h^2 \rangle=
\frac{1}{(4\pi^2)\sqrt{2\chi_{\perp}b_1}}
\int\frac{d^2q}{\sqrt{Q^4/2-Q^2q^2+q_x^4+q_y^4}}.\nonumber
\end{eqnarray}
The denominators in the integrals for $\langle\phi^2\rangle$ and $\langle h^2\rangle$ in (\ref{qf}) represent the dispersions for the Nambu-Goldstone excitations: the sliding mode and the out of plane mode, see details in Appendix \ref{sec:append_spiral}.
Evaluating the integrals with logarithmic accuracy, we obtain
\begin{eqnarray}
\label{qf2}
&&\langle \bm\pi^2 \rangle \approx \frac{1}{(2\pi)\sqrt{2}\sqrt{\chi_{\perp}b_1}}
\zeta\ln\left(\frac{6.5 \Lambda^2}{Q^2}\right).
\end{eqnarray}
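The logarithmic $Q$-dependence in Eq. (\ref{qf2}) can be checked numerically. The sketch below is illustrative only (overall prefactors and the $h$-mode contribution are dropped, and the values of $\Lambda$ and $Q$ are arbitrary): in polar coordinates the radial integral for $\langle\phi^2\rangle$ in Eq. (\ref{qf}) is elementary, and one verifies that each halving of $Q$ adds the same constant $\propto\ln 2$.

```python
import math

def I_phi(Q, Lam=10.0, ntheta=4000):
    # <phi^2> integral of Eq. (qf) with prefactors dropped:
    #   I(Q) = int d^2q / sqrt(2 Q^2 q^2 + q_x^4 + q_y^4).
    # In polar coordinates the radial integral is analytic (asinh);
    # only the angle is discretized (midpoint rule).
    dth = 2.0 * math.pi / ntheta
    total = 0.0
    for i in range(ntheta):
        th = (i + 0.5) * dth
        g4 = math.cos(th) ** 4 + math.sin(th) ** 4   # q_x^4 + q_y^4 = q^4 g4
        b = math.sqrt(g4)
        total += math.asinh(b * Lam / (math.sqrt(2.0) * Q)) / b * dth
    return total

def ang_const(ntheta=4000):
    # angular factor: integral of dtheta / sqrt(cos^4 th + sin^4 th)
    dth = 2.0 * math.pi / ntheta
    return sum(dth / math.sqrt(math.cos((i + 0.5) * dth) ** 4
                               + math.sin((i + 0.5) * dth) ** 4)
               for i in range(ntheta))

# I_phi(Q) ~ C ln(Lambda/Q) + const, so each halving of Q adds C ln 2
d1 = I_phi(0.05) - I_phi(0.10)
d2 = I_phi(0.10) - I_phi(0.20)
C_ln2 = ang_const() * math.log(2.0)
print(d1, d2, C_ln2)
```

The angular factor here is an analogue of the constant $\zeta$ in Eq. (\ref{qf2}), up to the dropped prefactors.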
Now, applying the same criterion for the critical point, $\langle \bm\pi^2 \rangle_c \approx 1$,
we find the critical $\bar\rho_{cS}$
\begin{eqnarray}
\label{qc}
\bar\rho_{cS} \approx -6.5 \Lambda^2 e^{- \frac{2\sqrt{2}\pi}{\zeta} \sqrt{\chi_{\perp}b_1} } \approx -4\delta_0.
\end{eqnarray}
The prefactor in (\ref{qc}) is significantly smaller than the prefactor in Eq. (\ref{eq:rho_cS}).
This emphasizes the fact that our calculation is only approximate.
Pragmatically, this uncertainty is not very significant.
We already pointed out that Eq. (\ref{eq:rho_cS}) is valid only for extremely small gaps,
$\delta_0 \lesssim 0.02$. At larger values of $\delta_0$ the position of the critical
point $\rho_{cS}$ is different from (\ref{eq:rho_cS}), see Fig. \ref{fig:gaps_spins}.
Numerical evaluation of (\ref{qf}) combined with the criticality condition (\ref{pc})
gives the following locations of the critical points $\rho_{cS}$:
${\bar \rho}_{cS}/\delta_0=-3.7$ at $\delta_0=0.06$;
${\bar \rho}_{cS}/\delta_0=-3.6$ at $\delta_0=0.2$;
${\bar \rho}_{cS}/\delta_0=-2.2$ at $\delta_0=0.7$.
Comparing these values with the positions of the critical point that
follow from Fig.~\ref{fig:gaps_spins}, we conclude that, for the practically interesting case
$\delta_0 \gtrsim 0.15$, both methods give close positions of the critical point.
As was mentioned in Sec. \ref{sec:field_theory}, in the presence of in-plane rotational symmetry, $b_1=b_2$ (e.g. the frustrated Heisenberg model on the hexagonal lattice), quantum fluctuations become especially strong.
In fact, when approaching the critical point $\rho_{cS}$, the integral $\int_q \frac{1}{\sqrt{\Delta^2 + K(q)}}\propto \int_q \frac{1}{\sqrt{\alpha^2 + (q^2-Q^2)^2}}$ is logarithmically divergent at $q=Q$ for $\alpha\rightarrow0$. This implies that one has to keep higher order terms $\mathcal{O}(q_i^6)$ in the expansion (\ref{eq:K(q)})
\begin{equation}
K(\bm q) = \rho q^2 + \frac{b}{2}q^4 + c (q_x^6 + q_y^6) + d(q_x^4 q_y^2 + q_x^2 q_y^4)\label{eq:K6}
\end{equation}
which break the symmetry with respect to spatial rotations in the $\{xy\}$ plane and remove the degeneracy with respect to the choice of the direction of $\bm Q$. After accounting for the higher order anisotropic terms $\propto \mathcal O (q_i^6)$ the integral for $\langle \bm\pi^2 \rangle$ becomes convergent at $|\bm q|=Q$ and the value $\rho_{cS}$ is well defined.
\section{Spin-spin correlation function}\label{sec:corr}
Spin-spin correlations are a standard tool to analyze quantum critical properties of a magnetic system.
In the SL phase the correlator provides essential information about the properties of the ground state.
The equal time two-point spin-spin correlation function reads
\begin{eqnarray}\label{eq:C(r)}
C(r) = \langle n^\alpha(r) n^\alpha (0)\rangle = 1 + 2[R(r) - R(0)]
+\ldots,
\end{eqnarray}
where $\langle \pi^\alpha(r) \pi^\beta(0) \rangle = \delta^{\alpha\beta} R(r)$ and indices $\alpha,\beta$ refer only to the $x$ and $y$ spin components.
The two-point correlator is normalized such that $C(0)=\langle n_\alpha^2\rangle = 1$. In the
SL phase the correlation function should vanish at large distances
$C(r\rightarrow\infty)\rightarrow 0$ and $R(r\rightarrow\infty)\rightarrow 0$. These conditions are consistent with the ``melting criterion'' in Eq.(\ref{pc}) if we truncate the asymptotic expansion
in Eq.(\ref{eq:C(r)}) keeping only the terms explicitly presented there.
The $\langle \bm \pi (r) \bm \pi (0) \rangle$ correlation function in the SL phase reads
\begin{equation}\label{eq:R(r)}
R(r) = \int \frac{id\omega d^2q}{(2\pi)^3} \frac{e^{i \bm{qr}}}{\chi_\perp(\omega^2-\Delta^2) - K(\bm q) + i0}.
\end{equation}
Calculating (\ref{eq:R(r)}) and substituting the result in Eq. (\ref{eq:C(r)}), we obtain the
two-point spin-spin correlation function $C(r)$; the numerical results are plotted in Fig.~\ref{fig:Corr}.
\begin{figure}
\includegraphics[scale=0.27]{Corr.eps}
\includegraphics[scale=0.28]{Corr_diag.eps}
\begin{picture}(0,0)
\put(-125,225){ \text{a)}}
\put(-125,95){\text{b)}}
\end{picture}
\caption{Static spin-spin correlation function $C(r)$ in the spin liquid phase for positive and negative spin stiffness ($b_2=0$, $\delta_0\approx0.04$). The radius vector $\bm r$ is directed along a) the principal lattice axes ($x$ or $y$), b) $\bm r$ is along the diagonal direction.}
\label{fig:Corr}
\end{figure}
As in the previous Section, these plots correspond to the case $b_2=0$;
therefore, the correlator is somewhat anisotropic.
There are two points to note, one physical and the other technical.
(i) The correlation length scales as one over the square root of the gap, $\xi \propto 1/\sqrt{\delta_0}$,
instead of the standard relation, $\xi \propto 1/\delta_0$.
(ii) When integrating in Eq. (\ref{eq:R(r)}) we use a soft ultraviolet cutoff, multiplying
the integrand by $e^{-q^2/(2\Lambda^2)}$. The soft cutoff allows us to avoid the nonphysical oscillations
in $R(r)$ caused by the Gibbs phenomenon, i.e., the spurious oscillations, well known in
Fourier analysis, that always accompany a sharp cutoff.
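The procedure can be sketched schematically as follows (an illustrative toy script rather than the production code: the frequency integral in Eq. (\ref{eq:R(r)}) has been carried out, the isotropic quartic dispersion at $\rho=0$ is taken, prefactors are dropped, and the parameter values are arbitrary, with $g$ playing the role of $\chi_\perp\Delta^2$ in these toy units).

```python
import math

def R_soft(r, g=0.04, Lam=3.0, qmax=10.0, nq=400, nth=1600):
    # Toy evaluation of Eq. (eq:R(r)) after the frequency integration,
    # isotropic quartic dispersion at rho = 0, prefactors dropped:
    #   R(r) ~ int d^2q  e^{i q.r}  e^{-q^2/(2 Lam^2)} / sqrt(g + q^4)
    # The Gaussian factor is the soft ultraviolet cutoff of the text.
    dq = qmax / nq
    dth = 2.0 * math.pi / nth
    cth = [math.cos((j + 0.5) * dth) for j in range(nth)]  # cos(theta) table
    total = 0.0
    for i in range(nq):
        q = (i + 0.5) * dq
        w = q * dq * dth * math.exp(-q * q / (2.0 * Lam * Lam)) \
            / math.sqrt(g + q ** 4)
        total += w * sum(math.cos(q * r * c) for c in cth)
    return total

R0, R2, R10 = R_soft(0.0), R_soft(2.0), R_soft(10.0)
print(R0, R2, R10)
```

With the Gaussian factor in place, the computed $R(r)$ decays smoothly with $r$, free of sharp-cutoff artifacts.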
The asymptotic behaviour of the correlation function $R(r\rightarrow\infty)$ in the spin liquid phase at $\rho=0$ can be analytically obtained in the simplified isotropic approximation ($b_1=b_2$):
\begin{eqnarray}\label{eq:R_LP}
R(r) \sim \frac{e^{- r \sqrt{\frac{\delta_0}{2}}}}{r} \cos\left(r \sqrt{\frac{\delta_0}{2}}-\frac{\pi}{4}\right).
\end{eqnarray}
Using Eq. (\ref{eq:R_LP}) we deduce the spin-spin correlation length $\xi=\sqrt{\frac{2}{\delta_0}}$. In the case of negative spin stiffness ($\rho_{cS}<\rho<0$) the correlation function $R(r)$ becomes oscillatory, see Fig.~\ref{fig:Corr}.
In the vicinity of the critical point $\rho_{cN}$ the correlations decay as
\begin{equation}
R(r)=\frac{1}{2\pi \sqrt{2 \chi_\perp b_1}} I_0\left(r\frac{\sqrt{\bar\rho_{cN}}}{2}\right)K_0\left(r\frac{\sqrt{\bar\rho_{cN}}}{2}\right) \underset{r\rightarrow\infty}{\sim} \frac{1}{r}.\label{eq:R_cN}
\end{equation}
Formula (\ref{eq:R_cN}) is consistent with the well known $\propto 1/r$ decay of correlations of transverse spin components in the N\'eel phase (see e.g. Ref. \cite{Takahashi89}).
We stress that the ``isotropic approximation'', $b_1=b_2$, provides a qualitative and quantitative description of the correlation function $C(r)$ only away from the critical point $\rho_{cS}$. In the vicinity of the point $\rho_{cS}$ the isotropic model (\ref{eq:L}) becomes unstable, see comments to Eq. (\ref{eq:K6}).
Now we would like to make a comparison between the $O(3)$ and $O(2)$ quantum Lifshitz transitions. The $O(2)$ version of the Lagrangian (\ref{eq:L}) describes the XY frustrated Heisenberg antiferromagnet in the continuum limit.
The physics in the $O(2)$ model is quite different from the $O(3)$ model and the Ioffe-Larkin argument is inapplicable in this case.
The $O(2)$ Lagrangian can be mapped to the scalar Lifshitz model described by a polar angle $\theta$: $n_x+in_y=e^{i\theta}$. This model has an exact solution for the correlation function $C(r)$ at the LP: $C(r)$ decays algebraically\cite{Ardonne04} at the LP, in contrast to the non-vanishing correlations at $r\rightarrow\infty$ in the long-range ordered N\'eel or spin-spiral phase. Therefore we conclude that there exists a finite region in the vicinity of the LP with algebraically decaying correlations. The region with algebraic spin correlations is to some extent analogous to the SL phase in the $O(3)$ model addressed in the present paper.
\section{$J_1-J_3$ model on the square lattice}\label{sec:J1-J3}
In the present Section we compare the field theory predictions with results of numerical calculations
for the antiferromagnetic $J_1-J_3$ Heisenberg model on the square lattice.
Frustrated $J_1-J_2$ and $J_1-J_2-J_3$ models have been discussed in numerous studies (see e.g. Refs. \cite{Ferrer93, Sindzingre10, Reuther11a}): some references are
also presented in the Introduction.
In the classical limit both models exhibit the spin spiral state at a sufficiently large frustration.
Quantum versions of the models show a magnetically disordered state at a sufficiently large frustration.
Classically, the $J_1-J_2$ model at $J_2/J_1=1/2$ has three degenerate ground states:
the N\'eel, the spin-spiral, and the spin-stripe states. This tricritical point is somewhat special; the proximity of the columnar spin-stripe phase enhances spin-dimer correlations and makes the physics of the $J_1-J_2$ model
different from that considered in the present work. On the other hand, if we set $J_2=0$
and consider only the $J_3$ frustration then classically there is a Lifshitz
point with a transition to the spin-spiral at $J_3=J_1/4$, and the spin-stripe state
has much higher energy than the spin-spiral and the N\'eel states.
Therefore the $J_1-J_3$ model is a good testing ground
for the generic theory of a ``soft'' Lifshitz transition developed in the present work.
The Hamiltonian of the $J_1-J_3$ model reads
\begin{equation}
H = J_1 \sum_{<ij>} \bm S_i \bm S_j + J_3 \sum_{\langle\langle\langle ij \rangle\rangle\rangle} \bm S_i \bm S_j,
\end{equation}
where $<ij>$ and $\langle\langle\langle ij \rangle\rangle\rangle$ denote summation over first and third nearest neighbours, respectively.
The classical spin-spiral to N\'eel LP is located at $J_3/J_1=1/4$.
As we already pointed out in Section \ref{sec:field_theory}
quantum fluctuations must shift the LP towards larger values $J_3/J_1>1/4$.
In the long-wavelength approximation we can map the Heisenberg model to the Lagrangian (\ref{eq:L}).
The magnetic susceptibility is well known,
\begin{equation}
\label{cperp}
\chi_{\perp}=\frac{1}{8J_1}.
\end{equation}
The elasticity parameters of the Lagrangian can be found in two ways.
(i) The first way is a straightforward expansion of the classical elastic energy
at small wave number $q$, that gives
\begin{eqnarray}
\label{rb}
&&\rho=S^2(J_1-4J_3),\nonumber\\
&&b_1=S^2\frac{(16J_3-J_1)}{12},\nonumber\\
&&b_2=0.
\end{eqnarray}
(ii) An alternative way is to calculate the magnon dispersion in the N\'eel phase
using the standard spin-wave theory. The dispersion reads~\cite{Ferrer93}:
\begin{eqnarray}\label{eq:spin-wave}
&&\omega_q = 4S J_1 \sqrt{\left(1-\frac{J_3}{J_1}(1-\gamma_{2q})\right)^2-\gamma_q^2},\\
&&\gamma_{q} = \frac{1}{2}(\cos{q_x}+\cos{q_y}),\nonumber\\
&&\gamma_{2q} = \frac{1}{2}(\cos{2q_x}+\cos{2q_y}).
\end{eqnarray}
Expanding $\omega_q$ at small $q$ and comparing the results with Eq.(\ref{eq:omega_q}) (at $\Delta=0$)
we find
\begin{eqnarray}
\label{rb1}
&&\rho=S^2(J_1-4J_3),\nonumber\\
&&b_1=4 J_1 S^2\left[-\frac{5}{48}+\frac{2}{3}\left(\frac{J_3}{J_1}\right)+\left(\frac{J_3}{J_1}\right)^2\right],\nonumber\\
&&b_2=4 J_1 S^2\left[-\frac{1}{8}+2\left(\frac{J_3}{J_1}\right)^2\right].
\end{eqnarray}
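As a quick numerical cross-check (an illustrative snippet with arbitrarily chosen $J_3/J_1=0.1$ and $S=1/2$, assuming the standard relation $\omega\approx\sqrt{\rho/\chi_\perp}\,q$ at $\Delta=0$ and small $q$), the small-$q$ slope of the dispersion (\ref{eq:spin-wave}) can be compared with the value of $\rho$ common to Eqs. (\ref{rb}) and (\ref{rb1}), together with $\chi_\perp$ from Eq. (\ref{cperp}).

```python
import math

def omega(qx, qy, J1, J3, S):
    # linear spin-wave dispersion of Eq. (eq:spin-wave)
    g1 = 0.5 * (math.cos(qx) + math.cos(qy))
    g2 = 0.5 * (math.cos(2 * qx) + math.cos(2 * qy))
    return 4 * S * J1 * math.sqrt((1 - (J3 / J1) * (1 - g2)) ** 2 - g1 ** 2)

J1, J3, S = 1.0, 0.1, 0.5             # illustrative values, J3 < J1/4
rho = S ** 2 * (J1 - 4 * J3)          # Eqs. (rb), (rb1)
chi_perp = 1.0 / (8 * J1)             # Eq. (cperp)
c_ft = math.sqrt(rho / chi_perp)      # field-theory magnon velocity
eps = 1e-4
c_x = omega(eps, 0.0, J1, J3, S) / eps                      # along the axis
c_d = omega(eps / math.sqrt(2), eps / math.sqrt(2),
            J1, J3, S) / eps                                # along the diagonal
print(c_ft, c_x, c_d)
```

The slope agrees along both directions, as it must, since the $\rho q^2$ term is isotropic.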
Expressions for $b_1$ and $b_2$ in Eqs. (\ref{rb}) and (\ref{rb1}) do not coincide.
At the LP, $J_3=J_1/4$, both equations give $b_2=0$; however, the values of $b_1$ differ:
Eq. (\ref{rb}) gives $b_1=0.25S^2J_1$, while Eq. (\ref{rb1}) gives $b_1=0.5S^2J_1$.
Of course, the spin-wave value is more reliable.
We have performed extensive series calculations both in the N\'eel phase and the spin-spiral phase.
Unfortunately, the series expansion method does not allow us to assess properties of the
spin liquid phase directly.
However, it allows us to estimate the range of parameters where the spin liquid exists, which can be
compared with the predictions of the field theory.
In the N\'eel phase the series starts from the simple Ising antiferromagnetic state.
In the spiral phase the calculation is more tricky. We first impose a classical diagonal spiral with
some wave vector $Q$ and find the total energy $E(Q)$ of this state. This includes the classical
energy and the quantum corrections calculated by means of series expansions. We perform these calculations for many
values of $Q$ and then find the minimum of $E(Q)$ numerically. Such a procedure gives us the
ground state energy $E_{gs}$ and the physical wave vector $Q$.
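The structure of this procedure can be illustrated on the classical part of $E(Q)$ alone (a toy sketch without the quantum corrections supplied by the series expansion; parameter values are arbitrary). For a diagonal spiral ${\bm q}=(Q,Q)$ the classical energy per site is $E(Q)=2S^2\left(J_1\cos Q+J_3\cos 2Q\right)$, and minimizing it over $Q$ already exhibits the classical LP at $J_3=J_1/4$.

```python
import math

def E_cl(Q, J3, J1=1.0, S=0.5):
    # classical energy per site of a diagonal spiral q = (Q, Q)
    return 2 * S ** 2 * (J1 * math.cos(Q) + J3 * math.cos(2 * Q))

def Q_min(J3, J1=1.0, n=20000):
    # brute-force minimization of E(Q) on a grid, Q in [0, pi]
    return min((math.pi * i / n for i in range(n + 1)),
               key=lambda q: E_cl(q, J3, J1))

Q_neel = Q_min(0.2)   # J3 < J1/4: the minimum stays at the Neel point Q = pi
Q_spir = Q_min(0.3)   # J3 > J1/4: spiral minimum with cos Q = -J1/(4 J3)
print(Q_neel, Q_spir, math.acos(-1.0 / (4 * 0.3)))
```

Quantum corrections shift the actual LP to the larger values of $J_3$ quoted in Eq. (\ref{LP}).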
The ground state energy $E_{gs}$ is plotted in Fig. \ref{gse} versus $J_3$.
\begin{figure}
\includegraphics[scale=0.2]{Energy_half.eps}
\includegraphics[scale=0.2]{Energy_one.eps}
\begin{picture}(0,0)
\put(-211,67){ \text{a)}}
\put(-93,67){\text{b)}}
\end{picture}
\caption{ $J_1-J_3$ model ground state energy in the N\'eel and in the Spin Spiral states for a) $S=1/2$ and b) $S=1$ calculated by numerical series expansion method.
}
\label{gse}
\end{figure}
The plot of the wave vector squared, $Q^2$, versus $J_3$ is presented in Fig. \ref{fig:Q_rho}.
\begin{figure}
\includegraphics[scale=0.3]{./Q_rho.eps}
\caption{ Spiral wave vector (squared) $Q^2$ versus $J_3$. Dots show results of numerical series
expansion. Blue (red) dots correspond to $S=1/2$ ($S=1$).
Dashed lines show fits of data by cubic polynomials, $Q^2=a_1(J_3-J_3^{LP})+a_2(J_3-J_3^{LP})^2 + a_3(J_3-J_3^{LP})^3$.
}
\label{fig:Q_rho}
\end{figure}
From the field theory description we expect that near the LP the wave vector behaves as
\begin{eqnarray}
\label{QQa}
Q^2=\frac{2|\rho|}{b_1}=\frac{8S^2}{b_1} ({J_3}-J_3^{LP}).
\end{eqnarray}
Therefore, from Fig. \ref{fig:Q_rho} we determine the positions of the Lifshitz points and, using Eq. (\ref{QQa}), we find the values of the elastic constant $b_1$
at the LP:
\begin{eqnarray}
\label{LP}
&& S=1/2: \ \ \ J_3^{LP} \approx 0.45 J_1, \ \ \ b_1/S^2\approx 0.60 J_1, \nonumber\\
&& S=1: \ \ \ \ \ \ J_3^{LP} \approx 0.3 J_1, \ \ \ b_1/S^2\approx 0.74J_1.
\end{eqnarray}
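The extraction based on Eq. (\ref{QQa}) can be made transparent by a schematic two-point version (the actual fits in Fig. \ref{fig:Q_rho} use cubic polynomials and the series data; the data below are synthetic, generated from the $S=1/2$ numbers of Eq. (\ref{LP}) merely to check the inversion).

```python
def lp_from_points(p1, p2, S):
    # Linearized Eq. (QQa): Q^2 = (8 S^2 / b1) (J3 - J3_LP).
    # Two (J3, Q^2) points near the LP fix the slope and the intercept.
    (x1, y1), (x2, y2) = p1, p2
    slope = (y2 - y1) / (x2 - x1)      # = 8 S^2 / b1
    j3_lp = x1 - y1 / slope            # zero of Q^2(J3)
    b1 = 8 * S ** 2 / slope
    return j3_lp, b1

# synthetic consistency check, built from the S = 1/2 numbers of Eq. (LP)
S, J1 = 0.5, 1.0
b1_true, lp_true = 0.60 * S ** 2 * J1, 0.45 * J1
data = [(j3, 8 * S ** 2 / b1_true * (j3 - lp_true)) for j3 in (0.47, 0.50)]
lp_est, b1_est = lp_from_points(data[0], data[1], S)
print(lp_est, b1_est)
```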
As expected (see the very end of Section \ref{sec:field_theory}),
quantum fluctuations extend the N\'eel phase relative to the classical LP,
$J_3^{LP}=0.25J_1$. The values of the elastic constant $b_1$ are larger than that given
by Eq. (\ref{rb}) and smaller than that given by Eq. (\ref{rb1}).
We have also calculated the magnon dispersion in the N\'eel phase.
The series expansion becomes erratic at $J_3 > 0.2J_1$, and the error bars in the calculations of $\omega_q$ grow very quickly.
The dispersion at $J_3 = 0.2J_1$ is shown in Fig. \ref{fig:omega_q}.
\begin{figure}
\includegraphics[scale=0.33]{./omega_q.eps}
\caption{ Magnon dispersion $\omega_q$ for $J_1-J_3$ model on the square lattice in
the N\'eel phase at $J_3/J_1=0.2$. Red circles correspond to the series expansion results,
black line is the linear spin-wave dispersion in Eq. (\ref{eq:spin-wave}). }
\label{fig:omega_q}
\end{figure}
We see that the shape of the dispersion is somewhat different from the prediction of the
spin-wave theory (\ref{eq:spin-wave}). On the other hand, the total bandwidth
is consistent with the spin-wave theory. The situation is different in the case of the simple
Heisenberg model ($J_3=0$), where the shape of the magnon dispersion is consistent with the spin-wave theory
but the total bandwidth is about 20\% larger than the spin-wave value.
We also compute the static on-site magnetization in the N\'eel and spiral phases.
The magnetization vanishes at $J_3^{cN}$ and $J_3^{cS}$ critical points.
We already pointed out that the N\'eel-SL transition at $J_3^{cN}$ belongs to the $O(3)$ universality class.
Therefore, we expect the scaling
$\langle S_z\rangle \propto |J_3-J_3^{cN}|^\beta$ when approaching the critical point from the N\'eel phase, where
$\beta = (D-2+\eta)\nu/2 \approx \nu/2 \approx 0.35$.
For this reason, in Fig. \ref{fig:M3} we show the series expansion results for the
static on-site magnetization cubed.
\begin{figure}
\includegraphics[scale=0.39]{./M3.eps}
\caption{ Average onsite magnetization cubed. Blue squares (red circles) show
series expansion results for $S=1/2$ ($S=1$),
solid lines are guides for the eye. }
\label{fig:M3}
\end{figure}
From here we locate the critical points.
\begin{eqnarray}
\label{cp1}
&& S=1/2: \ \ \ J_3^{cN} \approx 0.35 J_1, \ \ \ J_3^{cS}\approx 0.55J_1, \nonumber\\
&& S=1: \ \ \ \ \ \ J_3^{cN} \approx J_3^{cS} \approx 0.35J_1.
\end{eqnarray}
Our result for the SL range $\Delta J_3$ in the case $S=1/2$ is different from that of the recent
work \cite{Reuther11a}, which suggests the SL phase at
$0.4 \leq J_3/J_1 \leq 0.8$. However, our predictions are reasonably close to the exact diagonalization results \cite{Sindzingre10}, which suggest a gapped SL phase for $0.45 \leq J_3/J_1 \leq 0.65$.
Note also that the critical index for the $J_3^{cS}$ critical point
is smaller than the $O(3)$ value: $M \propto (J_3-J_3^{cS})^{\beta}$ with $\beta \sim 0.2$.
Now we can compare the results of the series calculations with the predictions
of the field theory. Eqs. (\ref{cperp}) and (\ref{LP}) give the values of
$\chi_{\perp}$ and $b_1$. Hence, according to Eqs. (\ref{eq:delta_0}) and (\ref{rd}),
the values of the gap at the LP are
\begin{eqnarray}
\label{rb2}
&& S=1/2: \ \ \ \ \delta_0 \approx 0.66, \quad \Delta_0 \approx 0.53 J_1,\nonumber\\
&&S=1: \ \ \ \ \ \ \ \delta_0 \approx 0.17, \quad \Delta_0 \approx 0.29 J_1 \ .
\end{eqnarray}
Formally, the field-theoretical prediction (\ref{rd}) is derived with logarithmic accuracy and is valid at $\delta_0 \ll 1$, while these values, especially that
at $S=1/2$, are not small. Nevertheless, we believe that Eq. (\ref{rb2}) gives a reasonable
estimate of the gaps.
Knowing the dimensionless gaps and using Fig. \ref{fig:gaps_spins} we can deduce the window
$\delta{\bar \rho}$ occupied by the spin liquid phase. Combining this with Eq.(\ref{QQa}) we find
the spin liquid window $\Delta J_3=|J_3^{cS}-J_3^{cN}|$ that follows from the field theory,
\begin{eqnarray}
&&\Delta J_{3}/J_1 \approx 0.3, \quad (S=1/2),\nonumber \\
&&\Delta J_{3}/J_1\approx 0.1, \quad (S=1) \ .
\label{eq:deltaJ3}
\end{eqnarray}
These values, while slightly larger, are in reasonable agreement with the SL phase windows following from the series expansion data in
Fig. \ref{fig:M3}.
To conclude this Section, we would like to comment on the anisotropic $J_1-J_3$ model
on the square lattice \cite{Oitmaa16}.
In this model $J_3$ frustrates $J_1$ only in one direction, say $J_3$ connects only the
third nearest neighbours in the $y$-direction.
This results in an anisotropic LP: the spin stiffness $\rho_y$ vanishes
at some value of $J_3$ while $\rho_x$ remains finite and positive.
The wave vector of the spin spiral is always directed along the $y$-axis.
In this case quantum fluctuations at the LP are described as
$\langle \bm\pi^2\rangle \propto \int \frac{d^2 q}{ \sqrt{q_x^4 + q_y^4 + \rho_x q_x^2} }$.
The integral is infrared convergent unlike that in the isotropic LP.
Therefore, generically one cannot expect a spin liquid in this case.
The fluctuations are still enhanced, however, and there must be a suppression
of the on-site magnetization at the LP. This is exactly what series expansions for
the anisotropic $J_1-J_3$ model with $S=1/2$ indicate \cite{Oitmaa16}.
It is likely that a similar scenario holds for thin films of frustrated manganites
(Tb,La,Dy)MnO$_3$ tuned close to the LP.
\section{Conclusion}\label{sec:concl}
In this work, using field theory techniques, we have studied properties of the universal spin
liquid phase in the vicinity of an isotropic Lifshitz point in a system of localized frustrated spins.
Our general analysis includes the phase diagram, the positions of critical points, excitation spectra,
and spin-spin correlation functions.
In the semiclassical regime of large spin $S$ the spin liquid phase forms an exponentially narrow region in the vicinity of the Lifshitz point.
The derivation of these results is accompanied by a thorough discussion of the criterion for quantum melting of long-range
magnetic order in two dimensions, an analogue of the Lindemann criterion.
We argue that the 2D Lifshitz-point spin liquid is similar to the gapped Haldane phase in integer-spin
1D chains.
In order to check our general field theory results, and in particular to check the quantum melting criterion,
we have performed numerical series expansion calculations for the $J_1-J_3$ model on the square lattice.
We demonstrate that the results of these two different approaches are in good agreement.
\section{Acknowledgments}
We would like to thank G. Khaliullin for insightful comments and suggestions.
The work has been supported by the Australian Research Council, Grant No. DP160103630.
The theory $ACF_0$ of algebraically closed fields of characteristic zero is model theoretically very well behaved. In particular, it is strongly minimal, hence uncountably categorical.
Moreover, the model ${\mathfrak C}=({\mathbb C},+,-,\cdot,0,1)$ has many interesting expansions formed by adding a unary predicate $U$. If one interprets $U$ as the algebraics $\tilde{{\mathbb Q}}$,
then we get a theory of pairs $({\mathfrak C},\tilde{{\mathbb Q}})$, whose models are also well behaved. Interpreting $U$ as the reals $\R$ gives an interesting structure $({\mathfrak C},\R)$, whose theory is unstable, but is otherwise rather tame. Things are worse if one interprets $U$ as the integers ${\mathbb Z}$, as the theory $({\mathfrak C},{\mathbb Z})$ admits rather unpleasant G\"odel phenomena. But artificial expansions can be even worse. Let $I$ be any countably infinite, linearly independent subset and let $U_S$ be interpreted as $I\cup S$, where $S$ is a carefully chosen subset of the sum set $I+I$. By choosing $S$ appropriately, one can definably encode any given countable graph into an expansion $({\mathfrak C},U_S)$ of this form. We codify this behavior in the following definition.
\begin{definition}
A theory $T$ {\em monadically codes graphs} if for every graph $G$, there is some $M \models T$ and an expansion $M^*$ of $M$ by unary predicates such that $G$ definably embeds into $M^*$, in the sense of Definition \ref{def:de}.
\end{definition}
Thus, whether or not a theory $T$ monadically codes graphs depends on the {\em worst case} complexity of a monadic expansion of a model of $T$.
In the example above, even though $ACF_0$ has some tame expansions, $ACF_0$ monadically codes graphs.
To ease notation, in what follows we assume that $T$ is a complete theory in a countable language (with an infinite model), and we consider models of $T$ with universe $\omega$. Now, given such a $T$ and a subset $Y\subseteq\omega^k$ for some $k\ge 1$, we ask:
\begin{quotation}
\noindent
What is the worst-case complexity of $Th(M,Y)$ among all models $M$ of $T$ with universe $\omega$?
\end{quotation}
To measure this, for $P$ any property of theories, we say {\em $(T,Y)$ is always $P$} if $Th(M,Y)$ has $P$ for all models $M$ of $T$ with universe $\omega$.
It turns out that among all complete, countable theories $T$ and all $Y\subseteq\omega^k$, there are very few combinations we need to consider.
\begin{definition} A complete theory $T$ is {\em purely monadic} if, for every model $M\models T$ with universe $\omega$, every definable (with parameters)
$Y\subseteq\omega^k$ is definable in a monadic structure $(\omega,U_1,\dots,U_n)$.
A complete theory $T$ is {\em monadically NFCP} if every monadic expansion $(M,U_1,\dots,U_n)$ of any model of $T$ has NFCP, with
$T$ being {\em monadically stable} and {\em monadically NIP} being defined analogously. (The definitions of NFCP, stability, and NIP are recalled in the next section.)
\end{definition}
We remark here that $T$ is monadically NFCP if and only if $T$ is mutually algebraic (Definition \ref{MA}).
It is well known that $NFCP \ \Rightarrow \ \hbox{stable} \ \Rightarrow\ NIP$, hence we have the implications
$$T \ \hbox{purely monadic}\quad \Rightarrow \quad \hbox{mon.\ NFCP}\quad\Rightarrow \quad\hbox{mon.\ stable}\quad\Rightarrow\quad\hbox{mon.\ NIP}$$
\begin{definition} A subset $Y\subseteq \omega^k$ is {\em monadically definable} if it is definable in some monadic structure $(N,U_1,\dots,U_n)$.
$Y\subseteq\omega^k$ is {\em monadically NFCP definable} if it is definable in some monadically NFCP structure $N$.
Analogously, $Y$ is {\em monadically stable/monadically NIP definable} if it is definable in some monadically stable/monadically NIP structure $N$.
Equivalently, a subset $Y\subseteq\omega^k$ is monadically definable (respectively, any of monadically NFCP, stable, NIP definable) if and only if
the structure $N=(\omega,Y)$, in a language with a single $k$-ary predicate symbol, is purely monadic (respectively, monadically NFCP, stable, NIP).
\end{definition}
Thus, we have the implications
$$Y \ \hbox{mon.\ definable}\ \Rightarrow \ \hbox{mon.\ NFCP-def}\ \Rightarrow \ \hbox{mon.\ stable-def}\ \Rightarrow\ \hbox{mon.\ NIP-def}$$
Our main theorem, Theorem~\ref{big}, characterizes the worst-case complexity of $(T,Y)$. To make the statement more readable, note that two special cases are immediate,
simply by unpacking the definitions.
\begin{fact} Let $T$ be a complete theory and $Y\subseteq \omega^k$.
\begin{enumerate}
\item If $T$ is purely monadic and $Y$ is mon.\,def/mon.\,NFCP def/mon.\,stable def/mon.\,NIP def,
then $(T,Y)$ is always purely monadic/mon.\,NFCP/\break mon.\,stable/mon.\,NIP.
\item If $T$ is any of purely monadic/mon.\,NFCP/mon.\,stable/mon.\,NIP and $Y\subseteq \omega^k$ is monadically definable, then
$(T,Y)$ is always purely monadic/mon.\,NFCP/mon.\,stable/mon.\,NIP, respectively.
\end{enumerate}
\end{fact}
It is also helpful to record the following equivalents of a theory monadically coding graphs that are proved by Baldwin and Shelah \cite{BS}.
\begin{fact} \label{fact:8.1.10} The following are equivalent for a complete theory $T$.
\begin{enumerate}
\item $T$ does not monadically code graphs;
\item $T$ does not monadically admit coding (see Definition~\ref{coding}); and
\item $T$ is monadically NIP.
\end{enumerate}
\end{fact}
With this in hand, we can state our main result.
Note that $(T,Y)$ is not always monadically NIP if and only if there is some model $M$ of $T$ with universe $\omega$ such that $(M,Y)$ monadically codes graphs.
\begin{theorem} \label{big} Suppose a complete theory $T$ is not purely monadic and $Y\subseteq\omega^k$ is not monadically definable.
Then $(T,Y)$ is always monadically NIP if and only if both $T$ is monadically NFCP and $Y$ is monadically NFCP definable.
Furthermore, in that case, $(T,Y)$ is always monadically NFCP.
\end{theorem}
A consequence of Theorem~\ref{big} is that monadically stable and monadically NIP theories are aptly named. That is, every monadic expansion of any model of such a theory has the same complexity, but for any non-monadically definable $Y$, some expansion $(M,Y)$ of a model of $T$ monadically codes graphs.
The new material begins in
Section~3, where we discuss complete theories that are not monadically NFCP.
As noted above, some are monadically stable, others are monadically NIP but not monadically stable,
and some are not monadically NIP. We exhibit paradigm structures witnessing these differences, and also prove that each paradigm definably embeds into some
monadic expansion of some model of any theory of the corresponding class.
In Section~4, we characterize the sets $Y$ that are monadically NFCP-definable, but not monadically definable, and in Section 5 we put these results together and give the proof
of Theorem~\ref{big}.
We remark that in the discussion above, we considered theories in a countable language and sets $Y\subseteq\omega^k$, but this was not necessary. In what follows, we consider complete theories in languages $L$ of arbitrary size, and sets $Y\subseteq\lambda^k$ for any cardinal $\lambda\ge ||L||$.
\section{Preliminaries}
We recall the following well known conditions on a partitioned formula $\varphi(\overline{x},\overline{y})$, when we are working in a sufficiently saturated model ${\mathfrak C}$ of a complete theory $T$:
$\varphi(\overline{x},\overline{y})$ has the {\em finite cover property (FCP)} if, for arbitrarily large $n$, there are $\<\overline{a}_i:i<n\>$ in ${\mathfrak C}$ such that,
$${\mathfrak C} \models \neg\exists \overline{x} ( \bigwedge_{i<n} \varphi(\overline{x},\overline{a}_i)) \wedge \bigwedge _{\ell <n} \exists \overline{x} ( \bigwedge_{i < n, i\neq \ell} \varphi(\overline{x},\overline{a}_i))$$
$\varphi(\overline{x},\overline{y})$ has the {\em order property} if, for each $n$, there are $\<\overline{a}_i:i<n\>$ in ${\mathfrak C}$ such that,
$${\mathfrak C} \models \bigwedge_{k<n}\left[\exists \overline{x} (\bigwedge_{i<k} \varphi(\overline{x},\overline{a}_i)\wedge\bigwedge_{k\le i<n} \neg\varphi(\overline{x},\overline{a}_i))\right]$$
$\varphi(\overline{x},\overline{y})$ has the {\em independence property} if, for each $n$, there are $\<\overline{a}_i:i<n\>$ in ${\mathfrak C}$ such that,
$${\mathfrak C} \models \bigwedge_{s\subseteq [n]}\left[\exists \overline{x} (\bigwedge_{i \in s} \varphi(\overline{x},\overline{a}_i)\wedge\bigwedge_{i \in n \backslash s} \neg\varphi(\overline{x},\overline{a}_i))\right]$$
A complete theory $T$ is {\em NFCP} if no partitioned formula $\varphi(\overline{x},\overline{y})$ has the FCP, $T$ is {\em stable} if no partitioned formula $\varphi(\overline{x},\overline{y})$ has the order property, and
$T$ is NIP if no partitioned formula $\varphi(\overline{x},\overline{y})$ has the independence property.
There are many equivalents to monadic NFCP (see e.g., \cites{MCLarch,JSL,BLcell}) and monadic NIP (see \cites{BS,ShHanf,BLmonNIP}). What we use is encapsulated in the following facts.
\begin{fact}[\cite{JSL}*{Theorem~3.3}] \label{MAchar} The following are equivalent for a complete theory $T$.
\begin{enumerate}
\item $T$ is monadically NFCP;
\item $T$ is mutually algebraic (see Definition~\ref{MA} below); and
\item $T$ is weakly minimal and trivial, i.e., for any pair $M\preceq N$ of models, every non-algebraic 1-type $p\in S_1(M)$ has a unique non-algebraic
extension $q\in S_1(N)$ and, for every model $M$, ${\rm acl}(A)=\bigcup_{a\in A} {\rm acl}(a)$ for every subset $A\subseteq M$.
\end{enumerate}
\end{fact}
Finally, we will also make use of the following sufficient condition from \cite{BS} for monadically coding graphs, or equivalently by Fact \ref{fact:8.1.10}, for the failure of monadic NIP. The proof that this implies monadically coding graphs will be the content of the first part of Theorem \ref{paradigm}.
\begin{definition} \label{coding}
A structure $M$ {\em admits coding} if there are infinite subsets $A,B,C\subseteq M^1$ and a formula $\varphi(x,y,z)$ whose restriction to $A\times B\times C$
is the graph of a bijection $f:A\times B\rightarrow C$. A theory $T$ {\em monadically admits coding} if some monadic expansion $M^*$ of some model $M$ of $T$
admits coding.
\end{definition}
\section{Finding paradigms of non-monadically NFCP theories}
In this section, we show the following classical structures will always witness the failure of monadic NIP/stability/NFCP in a suitable monadic expansion.
\begin{itemize}
\item The random graph, sometimes called the Rado graph, ${\mathcal R}=(A,E)$ is the standard example of a structure whose theory has the independence property.
In particular, its theory is not monadically NIP.
\item Dense linear order (DLO), the theory of $({\mathbb Q},\le)$, is one of the simplest non-stable theories as $\le$ visibly witnesses the order property. Thus, DLO is not monadically stable,
but it is monadically NIP, as can be seen by the classification of colored linear orders.
\item Let ${\mathcal E}=(X,E)$, where $X=\omega\times\omega$ (so each element of $X$ can be uniquely written as $(a,b)\in\omega^2$) and $E((a_1,b_1),(a_2,b_2))$ holds if and only if
$a_1=a_2$. Thus, ${\mathcal E}$ is the (unique countable) model of the $\omega$-categorical theory of an equivalence relation with infinitely many classes, each of which is infinite. The theory $Th({\mathcal E})$ is monadically stable, but it is not monadically NFCP. To see the latter, one can add a single unary predicate whose interpretation contains exactly $n$ elements from the $n^{{\rm th}}$ $E$-class. This expanded structure is a paradigm of a stable structure with the finite cover property.
\end{itemize}
We next show that each of these paradigms {\em definably embeds} into a monadic expansion of any model of the corresponding class of theories. In the definitions below, it is crucial that the embedding be into the universe $M$, as opposed to a cartesian power.
\begin{definition} \label{def:de}
Let ${\mathcal A}=(A,R)$ be any structure in a language with a binary relation, and let $M$ be an $L$-structure in some arbitrary language.
We say {\em ${\mathcal A}$ definably embeds into $M$} if there are $L$-definable $X\subseteq M^1$ and $R'\subseteq X^2$ and a bijection $f:A\rightarrow X$ such that
for all $a,b\in A$, ${\mathcal A}\models R(a,b)$ iff $M\models R'(f(a),f(b))$. [Informally, $(X,R')$ is an `isomorphic copy of ${\mathcal A}$'.]
A definable embedding $f:(A,R)\rightarrow (X,R')$ is {\em type-respecting} if, in addition, for any tuples $\overline{a},\overline{a}'\in A^n$, if $\mathrm{qftp}_{{\mathcal A}}(\overline{a})=\mathrm{qftp}_{{\mathcal A}}(\overline{a}')$, then
${\rm tp}_M(f(\overline{a}))={\rm tp}_M(f(\overline{a}'))$.
\end{definition}
\begin{theorem} \label{paradigm} Let $T$ be a complete $L$-theory that is not monadically NFCP.
\begin{enumerate}
\item If $T$ is not monadically NIP, then the random graph ${\mathcal R}$ definably embeds into some monadic expansion $M^*$ of a model $M$ of $T$.
\item If $T$ is monadically NIP but not monadically stable, then there is a definable, type-respecting embedding of $({\mathbb Q},\le)$
into some monadic expansion $M^*$ of a model $M$ of $T$.
\item If $T$ is monadically stable, then there is a definable, type-respecting embedding of ${\mathcal E}$ into some monadic expansion $M^*$ of a model $M$ of $T$.
\item For any (non-monadically NFCP) theory $T$, ${\mathcal E}$ definably embeds into some monadic expansion $M^*$ of a model $M$ of $T$.
\end{enumerate}
\end{theorem}
\begin{proof}
(1) Assume $T$ is not monadically NIP. By either \cite{BS} or \cite{BLmonNIP}, there is an expansion $M^*$ of a model of $T$ that admits coding, i.e., there are
infinite sets $A,B,C$ and a 3-ary $L^*$-formula $\varphi(x,y,z)$ coding the graph of a bijection from $A\times B$ to $C$. By adding more unary predicates, we may assume each of
$A,B,C$ is definable in $M^*$ and countably infinite, and by replacing $\varphi$ by $\varphi(x,y,z)\wedge A(x)\wedge B(y)\wedge C(z)$, the solution set of $\varphi$ is precisely the graph of the bijection.
Now add a unary predicate $D\subseteq C$ so that for every $b\in B$ there are exactly two $a\in A$ such that $\exists d\in D\,\varphi(a,b,d)$, and so that every (unordered) pair from $A$ arises this way for some $b\in B$.
Thus, in this expansion, one can think of $B$ as coding (symmetric) edges of $A$ via this formula. For the whole of $D$, we get a complete graph on $A$, but for any predetermined
graph ${\mathcal G}$ with universe $A$, one can add a single unary predicate $E\subseteq D$ so that for any $a_1,a_2\in A$,
$$\exists y\,\exists z\,\exists z' (E(z)\wedge E(z')\wedge \varphi(a_1,y,z)\wedge\varphi(a_2,y,z'))\ \Longleftrightarrow \ \hbox{$a_1,a_2$ are edge-related in ${\mathcal G}$}$$
In particular, we get a definable embedding of ${\mathcal R}$ into this expansion of $M^*$.
(2) By passing to a monadic expansion, we may assume $T$ itself is unstable. [In fact, any monadically NIP, non-monadically stable theory must itself be unstable,
but we don't need this.] By \cite{Pierre}, after adding parameters, there is a formula
$\varphi(x,y)$ with the order property, where $x$ and $y$ are both singletons. Thus, by adding an additional unary predicate for each of the parameters $c$ (with interpretation $\{c\}$)
there is an expansion $M^*$ of a model of $T$ with a 0-definable $L^*$-formula $\psi(x,y)$ with the order property.
By Ramsey's theorem and compactness, and by passing to an $L^*$-elementary extension, we may assume there are order-indiscernible subsets
$A=\{a_i:i\in{\mathbb Q}\}$ and $B=\{b_j:j\in{\mathbb Q}\}$ of $M^*$ such that
$M^*\models \psi(a_i,b_j)$ if and only if $i\le j$.
By replacing $M^*$ by a monadic expansion of itself, we may additionally assume there are predicates for $A$ and $B$.
But now, the ordering $a_i \le' a_j$ is definable on $A$ via the 0-definable $L^*$-formula
$(\forall b\in B)[\psi(a_j,b)\rightarrow\psi(a_i,b)]$. Then $(A,\le')$ witnesses that there is a type-respecting, definable embedding of $({\mathbb Q},\le)$ into $M^*$.
(3) By \cite{BS}*{Lemma 4.2.6}, if $T$ is monadically stable, then $T$ has trivial forking. Thus, as $T$ is not monadically NFCP, it follows from the characterization
in \cite{JSL} that $T$ is not weakly minimal.
\begin{fact} \label{wm} If $T$ is stable but not weakly minimal, then, working in a large, saturated model ${\mathfrak C}$ of $T$,
there is a model $M\preceq {\mathfrak C}$ and singletons $a,b$ such that ${\rm tp}(a/Mb)$ is not algebraic, but forks over $M$.
\end{fact}
\begin{proof} As $T$ is not weakly minimal, there are $M_0\preceq N$ and $p\in S_1(M_0)$ that has two non-algebraic extensions to $S_1(N)$. As $p$ is stationary,
this implies there is a non-algebraic $q\in S_1(N)$ that forks over $M_0$. Let $a$ be any realization of $q$, and choose $Y$ to be maximal such that $M_0\subseteq Y\subseteq N$
and $\fg a {M_0} Y$. As ${\rm tp}(a/N)$ forks over $M_0$, $Y\neq N$, so choose any singleton $b\in N\setminus Y$. By the maximality of $Y$, $\nfg a Y b$.
To complete the proof, choose a model $M\supseteq Y$ with $\fg M Y {ab}$. It follows by symmetry and transitivity of non-forking that $\nfg a M b$.
Also, since ${\rm tp}(a/N)$ is non-algebraic, so is ${\rm tp}(a/Yb)$. But, as ${\rm tp}(a/Mb)$ does not fork over $Yb$, ${\rm tp}(a/Mb)$ is non-algebraic as well.
\end{proof}
Fix $a,b,M$ as in Fact~\ref{wm} and choose a formula $\varphi(x,y)\in {\rm tp}(ab/M)$ (with parameters from $M$) that witnesses the forking over $M$.
Let $r={\rm tp}(b/M)$ and choose a Morley sequence $B=\{b_n:n\in\omega\}$ in $r$. Let $q={\rm stp}(a/Mb)$ and, for each $n$, let $q_{b_n}$ be the strong type over $Mb_n$ conjugate to $q$.
Recursively construct sets $\{I_n:n\in\omega\}$ where each $I_n=\{a_{n,m}:m\in\omega\}$ is a Morley sequence of realizations of the non-forking extension $q^*_{b_n}$ of $q_{b_n}$
to $M\cup B\cup \bigcup\{I_k:k<n\}$.
It follows by symmetry and transitivity of non-forking that each $I_n$ is independent and fully indiscernible over $MB\cup\bigcup\{I_k:k\neq n\}$.
Let $A=\{a_{n,m}:n,m\in\omega\}$.
Now,
any permutation $\sigma\in Sym(B)$ is $L_M$-elementary, and in fact, induces an $L_M$-elementary permutation $\sigma^*\in Sym(AB)$.
Let $L^*=L\cup\{A,B,C_1,\dots,C_n\}$ and let ${\mathfrak C}^*$ be the natural monadic expansion of ${\mathfrak C}$ formed by interpreting
$A$ and $B$ as above, and interpreting each $C_i$ as $\{c_i\}$, where $\{c_1,\dots,c_n\}$ are the parameters occurring in $\varphi$. [We silently replace $\varphi(x,y)$
by the natural 0-definable $L^*$-formula formed by replacing each $c_i$ by $C_i$.]
Finally, define an $L^*$-definable binary relation $E$ on $A$ by:
$$E(a,a')\quad\Longleftrightarrow\quad (\exists b\in B)[\varphi(a,b)\wedge\varphi(a',b)]$$
It is easily checked that $E$ is an equivalence relation, whose classes are precisely $\{I_n:n\in\omega\}$.
Thus, $(A,E)$ is the image of a type-respecting, definable embedding of ${\mathcal E}$ into ${\mathfrak C}^*$.
(4) We prove this by cases. If $T$ is not monadically NIP, then as some monadic expansion codes graphs, any structure will be definably embeddable in some further monadic expansion, so assume $T$ is monadically NIP. If $T$ is also monadically stable, we are done by (3), so assume $T$ is not monadically stable. Then, by (2),
there is a type-respecting, definable embedding of $({\mathbb Q},\le)$ into some monadic expansion $M^*$ of a model of $T$. Thus, it suffices to prove that ${\mathcal E}$ definably embeds into some monadic expansion of
$({\mathbb Q},\le)$. But this is easy.
Let $A = {\mathbb Q} \setminus {\mathbb Z}$. Then $A$ is 0-definable in the monadic expansion $({\mathbb Q},\le, A)$, as is the relation $E\subseteq A^2$ given by
$$E(a,a')\quad\Longleftrightarrow\quad \forall x([a<x<a' \vee a'<x<a]\rightarrow A(x))$$
It is easily checked that $(A,E)$ is isomorphic to ${\mathcal E}$.
\end{proof}
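To spell out the last step: with $A={\mathbb Q}\setminus{\mathbb Z}$, two elements $a,a'\in A$ are $E$-related exactly when no integer lies between them, so the $E$-classes are the sets
$$[a]_E=(n,n+1)\cap{\mathbb Q},\qquad n=\lfloor a\rfloor,$$
which form infinitely many pairwise disjoint infinite classes, as required for $(A,E)\cong{\mathcal E}$.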
We close this section by stating one `improvement' of Theorem~\ref{paradigm}(4) that will be used in Section~\ref{expand}.
Whereas Theorem~\ref{paradigm} speaks about a definable embedding of ${\mathcal E}$ into some monadic expansion of some model of $T$, we isolate
the following corollary, which describes a weaker configuration that can be found in arbitrary models of $T$ in the original language.
\begin{corollary} \label{config} Suppose $T$ is a complete $L$-theory that is monadically NIP, but not monadically NFCP.
Then there is an $L$-formula $\varphi(x,y,\overline{z})$ such that, for every model $N$ of $T$ and every $n\ge 1$, there are a tuple $\overline{d}_n$ and disjoint sets $B_n=\{b_i^n:i<n\}$,
$A_n=\{a^n_{i,j}:i,j<n\}$, enumerated without repetition, such that
\begin{enumerate}
\item The sets $\{A_n,B_m:n,m\in\omega\}$ are pairwise disjoint;
\item For all $n$, all $i,j,k<n$, one of the following holds.
\begin{enumerate}
\item $T$ is stable and
$N\models\varphi(b^n_k,a^n_{i,j},\overline{d}_n)$ if and only if $k=i$;
\item $T$ is unstable and $N\models\varphi(b^n_k,a^n_{i,j},\overline{d}_n)$ if and only if $k\le i$.
\end{enumerate}
\end{enumerate}
Moreover, we may additionally assume that the set $X=N\setminus \bigcup_{n\ge 1}(A_n\cup B_n)$ is infinite.
\end{corollary}
\begin{proof} As in the proof of Theorem~\ref{paradigm}(2),(3), we split into cases depending on whether or not $T$ is stable.
If $T$ is unstable, as in the proof of Theorem~\ref{paradigm}(2), choose an $L$-formula $\varphi(x,y,\overline{z})$ witnessing the order property in large, sufficiently saturated models of $T$.
Now, choose any $N\models T$. As there is some sufficiently saturated $N'\succeq N$ in which $\varphi(x,y,\overline{d})$ witnesses the order property for some tuple $\overline{d}$ from $N'$, it follows from elementarity that,
for any fixed $n$, there are $\overline{d}_n\in N^{\lg(\overline{z})}$ and disjoint sets $\{b_i:i<n\}$ and $\{a_{i,j}:i,j<n\}$ such that for all $k,i,j<n$, $N\models \varphi(b_k,a_{i,j},\overline{d}_n)$
if and only if $k\le i$.
To get the pairwise disjointness, note that if $\{b_i:i<n\}$, $\{a_{i,j}:i,j<n\}$ work for $n$, then for any subset $s\subseteq n$, the subsets $\{b_i:i\in s\}$, $\{a_{i,j}:i,j\in s\}$ work for
$n'=|s|$.
Thus, given any fixed finite set $F$ to avoid, given any $n$, by choosing $m\ge n$ large enough and choosing an appropriate $s\subseteq m$, we can find
disjoint sets $\{b_i:i<n\}$ and $\{a_{i,j}:i,j<n\}$, each of which are disjoint from $F$.
Using this, we can recursively define sequences $\overline{d}_n$ and
pairwise disjoint families $B_n=\{b_i^n:i<n\}$ and $A_n=\{a^n_{i,j}:i,j<n\}$ such that for all $k,i,j<n$, $N\models \varphi(b^n_k,a^n_{i,j},\overline{d}_n)$
if and only if $k\le i$. By passing to an infinite subsequence, using the remarks above, and reindexing,
we can shrink any such family $\{B_n,A_n:n\in\omega\}$ to one satisfying the Moreover clause.
If $T$ is stable, then as in the proof of Theorem~\ref{paradigm}(3), $T$ monadically NIP but not monadically NFCP implies $T$ is not weakly minimal.
Thus, as in the proof of Theorem~\ref{paradigm}(3), there is a sufficiently saturated elementary extension $N'\succeq N$ and a formula $\varphi(x,y,\overline{z})$ that witnesses forking.
That is, in $N'$ there are $\{b_i:i\in {\mathbb Z}\}$, $\{a_{i,j}:i,j\in{\mathbb Z}\}$, and $\overline{d}$ such that for all $i,j,k\in {\mathbb Z}$, $N'\models\varphi(b_k,a_{i,j},\overline{d})$ if and only if $k=i$.
Now, using this configuration, the methods used in the unstable case apply here as well.
\end{proof}
\section{Sets definable in purely monadic and monadically NFCP structures}
Fact~\ref{MAchar} asserts that a theory is monadically NFCP if and only if it is mutually algebraic, so we recall what is known about sets definable in a mutually algebraic structure.
Throughout this section, fix an infinite cardinal $\lambda$ and think of the set $\lambda=\{\alpha:\alpha\in\lambda\}$ as being a universe.
The goal of this section is Lemma~\ref{monthree}, which gives a configuration that is present in any structure $M$ whose theory is monadically NFCP but not purely monadic.
\begin{definition} \label{MA} Fix any infinite cardinal $\lambda$ and any integer $k\ge 1$.
\begin{itemize}
\item A subset $Y\subseteq \lambda^k$ is {\em mutually algebraic} if there is some integer $m$ so that for every $a\in \lambda$,
$\{\overline{a}\in Y:a\in\overline{a}\}$ has size at most $m$.
\item A subset $Y^*\subseteq \lambda^{k+\ell}$ is {\em padded mutually algebraic} if, for some permutation $\sigma\in Sym(k+\ell)$ of the coordinates,
there is a mutually algebraic $Y\subseteq \lambda^k$ and $Y^*=\sigma(Y\times \lambda^\ell)$.
\item A model $M$ with universe $\lambda$ is {\em mutually algebraic} if, for every $n$, every definable (with parameters) $D\subseteq \lambda^n$ is a boolean combination of definable (with parameters) padded mutually algebraic sets.
\item A complete theory $T$ is {\em mutually algebraic} if some (equivalently, all) models of $T$ are mutually algebraic.
\end{itemize}
\end{definition}
Trivially, every unary subset $Y\subseteq \lambda^1$ is mutually algebraic.
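For a concrete example (ours, for illustration), take $\lambda=\omega$ and $k=2$: the graph of the successor function
$$Y=\{(a,a+1):a\in\omega\}$$
is mutually algebraic with $m=2$, since each $a\in\omega$ occurs only in the tuples $(a-1,a)$ and $(a,a+1)$; by contrast, the order $\{(a,b)\in\omega^2:a<b\}$ is not mutually algebraic, as $0$ occurs in infinitely many pairs $(0,b)$.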
\begin{fact}[\cite{LTspeeds}*{Theorem 2.1}] \label{MAcrit} An $L$-structure $M$ is mutually algebraic if and only if every
atomic $L$-formula $\alpha(x_1,\dots,x_n)$ is equivalent to a boolean combination of quantifier-free definable (with parameters) padded mutually algebraic sets.
\end{fact}
\noindent It follows immediately that any purely monadic structure $M=(\lambda,U_1,\dots,U_n)$ is mutually algebraic.
The configuration of Lemma \ref{monthree} will be used in the proof of Theorem \ref{big}, when a non-monadically definable $Y$ induces a mutually algebraic structure.
We begin by characterizing which mutually algebraic sets $Y\subseteq \lambda^k$ are monadically definable.
Obviously, every $Y\subseteq\lambda^1$ is monadically definable, so we concentrate on $k\ge 2$.
As notation, let $\Delta_k=\{(a,a,\dots,a)\in\lambda^k:a\in\lambda\}$ denote the set of constant $k$-tuples.
\begin{lemma} \label{monone} Fix any infinite cardinal $\lambda$ and any integer $k\ge 2$. A mutually algebraic subset $Y\subseteq \lambda^k$ is
monadically definable if and only if $Y\setminus \Delta_k$ is finite.
\end{lemma}
\begin{proof} First, suppose $Y\setminus\Delta_k$ is finite. Let $F=\bigcup(Y\setminus\Delta_k)=\{a_1,\dots,a_n\}\subseteq\lambda$ and let $Z=\{a\in\lambda:(a,a,\dots,a)\in Y\}$.
Let $N=(\lambda,U_1,\dots,U_n,U_{n+1})$ be the structure in which $U_i$ is interpreted as $\{a_i\}$ for each $i\le n$ and $U_{n+1}$ is interpreted as $Z$.
Then $Y$ is definable in $N$, so $Y$ is monadically definable.
Conversely, suppose $Y$ is mutually algebraic and definable in some monadic $N=(\lambda,U_1,\dots,U_n)$. It is easily seen that $N$ admits elimination of quantifiers in a very nice way.
Collectively, the unary predicates $U_i$ color each element $a\in\lambda$ into one of $2^n$ colors. Some of these $2^n$ colors will have infinitely many elements of $\lambda$, while others have only finitely many. Let $F=\{a\in\lambda:$ there are only finitely many $b\in\lambda$ such that $N\models\bigwedge_{i=1}^n U_i(a)\leftrightarrow U_i(b)\}$.
The set $F$ is clearly finite. Now, the elements of $\lambda\setminus F$ are partitioned into finitely many infinite chunks, each of which is fully indiscernible over its complement.
Thus, it follows that $F={\rm acl}_N(\emptyset)$ and for any $a\in \lambda$, ${\rm acl}_N(a)=F\cup\{a\}$. To show that $Y\setminus \Delta_k$ is finite, it suffices to prove the following.
\begin{claim*} $Y\subseteq F^k\cup \Delta_k$.
\end{claim*}
\begin{claimproof} Choose any $\overline{a}\in\lambda^k\setminus (F^k\cup\Delta_k)$. Since $\overline{a}\not\in F^k$, choose a coordinate $a^*\in\overline{a}$ with $a^*\not\in F$.
Since the $k$-tuple $\overline{a}$ is not constant, choose $b\in\overline{a}$ with $b\neq a^*$. Now, by way of contradiction, suppose $\overline{a}\in Y$. As $Y$ is mutually algebraic,
$a^*\in{\rm acl}_N(b)=F\cup\{b\}$, contradicting the choice of $a^*$ and $b$.
\end{claimproof}
\end{proof}
\begin{lemma} \label{montwo} Suppose $M$ is a mutually algebraic structure with universe $\lambda$ such that $Th(M)$ is not purely monadic.
Then, for some $k\ge 2$ there is some $L_M$-definable, mutually algebraic $Y\subseteq\lambda^k$ with $Y\setminus \Delta_k$
infinite.
\end{lemma}
\begin{proof}
Fix such an $M$ and assume, towards a contradiction, that no such $L_M$-definable, mutually algebraic set exists. By Lemma~\ref{monone}, for every $k$,
every $L_M$-definable,
mutually algebraic subset of $\lambda^k$ is monadically definable. From this, it follows easily that every $L_M$-definable, padded mutually algebraic set is
monadically definable, as is every boolean combination of these. But then, as $M$ is mutually algebraic, Fact~\ref{MAcrit} gives that the solution set of every
$L$-atomic formula is monadically definable. From this, it follows that every $L_M$-definable set is monadically definable, contradicting the assumption that $Th(M)$ is not purely monadic.
\end{proof}
We now obtain our desired configuration.
\begin{lemma} \label{monthree} Suppose $M$ is a mutually algebraic structure with universe $\lambda$ whose theory is not purely monadic.
Then there is some $k\ge 2$, some $L_M$-definable $Y\subseteq\lambda^k$ and an infinite set ${\mathcal F}=\{\overline{a}_n:n\in\omega\}\subseteq Y\setminus\Delta_k$ such that
\begin{enumerate}
\item For each $n\in\omega$, $(\overline{a}_n)_1\neq(\overline{a}_n)_2$ (the first two coordinates differ); and
\item $\overline{a}_n\cap\overline{a}_m=\emptyset$ for distinct $n,m\in\omega$.
\end{enumerate}
In particular, if $F=\bigcup{\mathcal F}$, then for every $a\in F$ there is exactly one $\overline{a}\in Y$ with $a\in\overline{a}\subseteq F$ (and hence $(\overline{a})_1\neq(\overline{a})_2$).
\end{lemma}
\begin{proof} By Lemma~\ref{montwo}, choose $k\ge 2$ and an $L_M$-definable, mutually algebraic $Y\subseteq\lambda^k$ such that $X:=Y\setminus\Delta_k$ is infinite.
By mutual algebraicity, choose an integer $K$ such that for every $a\in\lambda$, there are at most $K$ $k$-tuples $\overline{a}\in Y$ with $a\in\overline{a}$.
As each element of $X$ is a non-constant $k$-tuple, by the pigeonhole principle we can find an infinite $X'\subseteq X$ and $i\neq j\in[k]$ such that $(\overline{a})_i\neq(\overline{a})_j$ for each $\overline{a}\in X'$. By applying a permutation $\sigma\in Sym([k])$ to $Y$, we may assume $i=1$ and $j=2$, so after this transformation (1) holds for any $\overline{a}\in X'$.
But now, as $X'\subseteq Y$ is infinite, while every element $a\in\lambda$ occurs in only finitely many $\overline{a}\in X'$, it is easy to recursively construct a pairwise disjoint family ${\mathcal F}=\{\overline{a}_n:n\in\omega\}\subseteq X'$ as needed.
\end{proof}
\section{Monadically stable and monadically NIP are aptly named} \label{expand}
In this section, we prove Theorem~\ref{big}. We actually prove this without assuming the language $L$ of $T$ is countable, and where the set $Y$ is a subset of $\lambda^k$ for an arbitrary cardinal $\lambda\ge ||L||$. The positive part, that $(T,Y)$ is always monadically NFCP whenever $T$ is monadically NFCP and $Y\subseteq\lambda^k$ is monadically NFCP-definable,
is immediate from the following.
\begin{lemma} \label{positive} Suppose $N_1$ and $N_2$ are structures, both with universe $\lambda$, in disjoint languages $L_1$ and $L_2$. If both $N_1$ and $N_2$ are monadically NFCP (=mutually algebraic) then the expansion $N^*=(N_1,N_2)$ is monadically NFCP as well.
\end{lemma}
\begin{proof} By replacing each function and constant symbol by its graph, we may assume both $L_1$ and $L_2$ only have relation symbols.
As the languages are disjoint, this implies that every $L_1\cup L_2$-atomic formula is either $L_1$-atomic or $L_2$-atomic.
Thus, by
Fact~\ref{MAcrit}, every atomic formula in $N^*$ is equivalent to a boolean combination of either $L_1$-definable or $L_2$-definable padded mutually algebraic sets.
As the notion of a set $Y\subseteq\lambda^k$ being padded mutually algebraic is independent of any structure, the result follows by a second application of Fact~\ref{MAcrit}.
\end{proof}
The negative directions are more involved. To efficiently handle the various cases, we first prove two propositions, from which all of the negative results follow in
Theorem~\ref{negative}.
For the following proposition, first note that a structure with two cross-cutting equivalence relations admits coding. We will essentially encode this configuration, but since we don't want to assume that either $N_1$ or $N_2$ is saturated for our eventual application, we must work with the finitary approximations to an equivalence relation with infinitely many infinite classes provided by Corollary \ref{config}.
\begin{proposition} \label{prop1} Suppose $L_1$ and $L_2$ are disjoint languages, $\lambda\ge ||L_1\cup L_2||$ a cardinal,
$N_1$ is an $L_1$-structure with universe $\lambda$, and $N_2$ is an $L_2$-structure with universe $\lambda$.
If both $Th(N_1)$ and $Th(N_2)$ are monadically NIP but not monadically NFCP, then there is a permutation $\sigma\in Sym(\lambda)$ such that the
$L_1\cup L_2$-structure $(N_1,\sigma(N_2))$ has a theory that is not monadically NIP.
\end{proposition}
\begin{proof} Apply Corollary~\ref{config} to both $N_1$ and $N_2$. This gives an $L_1$-formula $\varphi(x,y,\overline{z})$ and, for each $n$, pairwise disjoint sets $A_n=\{\alpha^n_{i,j}:i,j<n\}$,
$B_n=\{\beta^n_{i}:i<n\}$ and $\overline{r}_n$ as there, with exceptional set $X=\lambda\setminus\bigcup_{n\ge 1} (A_n\cup B_n)$. Note that as each $A_n,B_n$ is finite, $|X|=\lambda$.
On the $L_2$-side, choose an $L_2$-formula $\psi(x,y,\overline{w})$ such that, for all $n\ge 1$, there is $\overline{s}_n\in \lambda^{\lg(\overline{w})}$ and pairwise disjoint sets
$C_n=\{\gamma^n_{i,j}:i,j<n\}$ and $D_n=\{\delta^n_i:i<n\}$ as there.
Now choose $\sigma\in Sym(\lambda)$ to be any permutation satisfying: For all $n\ge 1$,
\begin{enumerate}
\item $\sigma(D_n)\subseteq X$; and
\item $\sigma$ maps $C_n$ bijectively onto $A_n$ via $\sigma(\gamma^n_{i,j})=\alpha_{j,i}^n$.
\end{enumerate}
Note that there are many permutations $\sigma$ satisfying these constraints. Choose one, and let
$\sigma(N_2)$ be the unique $L_2$-structure with universe $\lambda$ so that $\sigma$ is an $L_2$-isomorphism.
\medskip\noindent{\bf Claim.} The $L_1\cup L_2$-theory $Th(N_1,\sigma(N_2))$ is not monadically NIP.
\begin{claimproof} We will produce $M^*$, a monadic expansion of an $L_1\cup L_2$-elementary extension $\overline{M}\succeq (N_1,\sigma(N_2))$ that admits coding, which suffices.
To do this, we first argue that by compactness, there is an $L_1\cup L_2$-elementary extension $\overline{M}\succeq (N_1,\sigma(N_2))$ that contains disjoint sets
$A=\{a_{i,j}:i,j\in {\mathbb Z}\}$, $B=\{b_i:i\in {\mathbb Z}\}$, $D=\{d_j:j\in {\mathbb Z}\}$, and tuples $\overline{r},\overline{s}$ such that, for all $k,i,j\in{\mathbb Z}$,
either (if $Th(N_1)$ is unstable) $\overline{M}\models \varphi(b_k,a_{i,j},\overline{r})$ if and only if $k\le i$, or (if $Th(N_1)$ is stable) $\overline{M}\models \varphi(b_k,a_{i,j},\overline{r})$ if and only if $k=i$;
and dually,
either (if $Th(N_2)$ is unstable) $\overline{M}\models \psi(d_k,a_{i,j},\overline{s})$ if and only if $k\le j$, or (if $Th(N_2)$ is stable) $\overline{M}\models \psi(d_k,a_{i,j},\overline{s})$ if and only if $k=j$.
To see that such an $\overline{M}$ exists, consider an expansion of $L_1\cup L_2$, adding constants for all $a_{i,j},b_i,d_j,\overline{r},\overline{s}$, and considering a theory that contains the
$L_1\cup L_2$-elementary diagram of $(N_1,\sigma(N_2))$ and the conditions on the constants described above.
By compactness, it suffices to show $(N_1,\sigma(N_2))\models T_0$ for any finite subset $T_0$ of this theory. But, for any such $T_0$, there is an $n$
and some $\{\beta^n_i:i<n\}$, $\{\alpha^n_{i,j}:i,j<n\}$, and $\{\sigma(\delta^n_j):j<n\}$ from $\lambda$ that realize the requisite sentences in $(N_1,\sigma(N_2))$ because of the identification $\sigma(\gamma^n_{i,j})=\alpha_{j,i}^n$.
Now, given $\overline{M}$, let $L^*=L_1\cup L_2\cup\{A,B,D\}$ and let $M^*$ be the natural monadic expansion of $\overline{M}$ described by $A,B,D$ above. To show that $M^*$ admits coding, we need to rectify the ambiguity between the stable and unstable cases.
Specifically, we claim that there is an $L^*$-formula $\varphi^*(x,y,\overline{z})$ such that for all $b_i\in B$, the solution set $\varphi^*(b_i,M^*, \overline{r})$ is $\{a_{i,j}:j\in {\mathbb Z}\}$.
If $Th(N_1)$ is stable, this is easy: just take $\varphi^*(x,y,\overline{z}):=A(y)\wedge\varphi(x,y,\overline{z})$.
However, when $Th(N_1)$ is unstable, we need some more $L^*$-definability in $M^*$. Specifically, note that in this case, the natural ordering on $B$ is $L^*$-definable via
$$b_i\le b_j \ \hbox{if and only if} \ \forall y [(A(y)\wedge\varphi(b_j,y,\overline{r}))\rightarrow\varphi(b_i,y,\overline{r})]$$
As the ordering on $B$ is discrete, every element $b\in B$ has a unique successor, $S(b)$, and this operation is $L^*$-definable since $\le$ is.
Using this, the $L^*$-formula
$$\varphi^*(x,y,\overline{z}):=B(x)\wedge A(y)\wedge\varphi(x,y,\overline{z})\wedge\neg\varphi(S(x),y,\overline{z})$$
is as desired.
Arguing similarly, there is an $L^*$-formula $\psi^*(x,y,\overline{w})$ such that for all $d_j\in D$, the solution set $\psi^*(d_j,M^*,\overline{s})$ is $\{a_{i,j}\in A:i\in{\mathbb Z}\}$.
Putting these together, let $\theta(u,v,y,\overline{z},\overline{w})$ be the $L^*$-formula
$$B(u)\wedge D(v)\wedge A(y)\wedge \varphi^*(u,y,\overline{z})\wedge \psi^*(v,y,\overline{w})$$
Then the solution set of $\theta(u,v,y,\overline{r},\overline{s})$ is precisely the graph of a bijection from $B\times D$ onto $A$.
Thus, $M^*$ admits coding, which suffices.
\end{claimproof}
\end{proof}
The proof of the next proposition is in many ways similar. Here our ideal infinitary configuration consists of an equivalence relation with infinitely many infinite classes, with each tuple from the configuration in Lemma \ref{monthree} pairing two classes by intersecting them. But again, instead of our ideal equivalence relation, we must restrict ourselves to the finitary approximations from Corollary \ref{config}.
\begin{proposition} \label{prop2} Suppose $L_1$ and $L_2$ are disjoint languages, $\lambda\ge ||L_1\cup L_2||$ a cardinal,
$N_1$ is an $L_1$-structure with universe $\lambda$, and $N_2$ is an $L_2$-structure with universe $\lambda$.
If $Th(N_1)$ is monadically NIP but not monadically NFCP, and if $Th(N_2)$ is monadically NFCP but not purely monadic,
then there is a permutation $\sigma\in Sym(\lambda)$ such that the
$L_1\cup L_2$-structure $(N_1,\sigma(N_2))$ has a theory that is not monadically NIP.
\end{proposition}
\begin{proof} Apply Corollary~\ref{config} to $N_1$, obtaining an $L_1$-formula $\varphi(x,y,\overline{z})$ and, for each $n$, pairwise disjoint sets $A_n=\{\alpha^n_{i,j}:i,j<n\}$,
$B_n=\{\beta^n_{i}:i<n\}$ and $\overline{r}_n$ as there, with exceptional set $X=\lambda\setminus\bigcup_{n\ge 1} (A_n\cup B_n)$. Note that as each $A_n,B_n$ is finite, $|X|=\lambda$.
For the $N_2$ side, apply Lemma~\ref{monthree}, getting an $N_2$-definable $Y\subseteq \lambda^k$ and a distinguished set ${\mathcal F}=\{\overline{e}_\ell:\ell\in\omega\}\subseteq Y$
as there. Say $Y$ is defined using parameters $\{c_1,\dots,c_n\}$.
Let $L_2^V=L_2\cup \{V,C_1,\dots,C_n\}$ and let $N_2^V$ be the monadic expansion of $N_2$, interpreting $V$ as $F=\bigcup{\mathcal F}$ and each
$C_i$ as $\{c_i\}$. Note that in $N_2^V$, the subsets
$F_1=\{(\overline{e})_1:\overline{e}\in{\mathcal F}\}$, $F_2=\{(\overline{e})_2:\overline{e}\in{\mathcal F}\}$ of $F$ are $L_2^V$-definable (without parameters), along with the bijection $f:F_1\rightarrow F_2$ given by: $f(x)=(\overline{e})_2$, where $\overline{e}$ is the unique element of ${\mathcal F}$ containing $x$. Fix an enumeration $\{\gamma_\ell:\ell\in\omega\}$ of $F_1\subseteq\lambda$.
We now choose a permutation $\sigma\in Sym(\lambda)$ that satisfies:
\begin{itemize}
\item For all $n\ge 1$ and all $i<j<n$, there is some (in fact, unique) $\ell\in\omega$ such that $\sigma(\gamma_\ell)=\alpha^n_{i,j}$ and $\sigma(f(\gamma_\ell))=\alpha^n_{j,i}$.
\end{itemize}
Let $\sigma(N_2^V)$ be the $L_2^V$-structure with universe $\lambda$ so that $\sigma$ is an $L_2^V$-isomorphism and let $M_0^V=(N_1,\sigma(N_2^V))$ be the expansion of
$N_1$ to an $L_1\cup L_2^V$-structure. So $M_0^V$ has universe $\lambda$ and satisfies:
\begin{itemize}
\item For all $n\ge 1$ and $i<j<n$, $f(\alpha^n_{i,j})=\alpha^n_{j,i}$; and
\item The relationships given by $N_1$.
\end{itemize}
Let $M_0$ be the $L_1\cup L_2$-reduct of $M_0^V$.
\medskip
\noindent{\bf Claim.} The $L_1\cup L_2$-theory of $M_0$ is not monadically NIP.
\begin{claimproof} We show that the $L_1\cup L_2^V$-theory of $M_0^V$ is not monadically NIP, which suffices. For this,
the strategy is similar to the proof of Proposition~\ref{prop1}. We will find an $L_1\cup L_2^V$-elementary extension $\overline{M}$ of $M_0^V$ and then
find a monadic expansion $M^*$ of $\overline{M}$ that admits coding.
Specifically, choose an $L_1\cup L_2^V$-elementary extension $\overline{M}$ for which there are sets
$B=\{b_i:i\in{\mathbb Z}\}$, $A=\{a_{i,j}:i \neq j\in{\mathbb Z}\}$ such that
\begin{enumerate}
\item For all $i<j$ from ${\mathbb Z}$, $f(a_{i,j})=a_{j,i}$.
\item One of the following holds.
\begin{enumerate}
\item $Th(N_1)$ is unstable, and $\overline{M}\models \varphi(b_k,a_{i,j},\overline{r})$ if and only if $k\le i$.
\item $Th(N_1)$ is stable, and $\overline{M}\models \varphi(b_k,a_{i,j},\overline{r})$ if and only if $k=i$.
\end{enumerate}
\end{enumerate}
Given such an $\overline{M}$, let $L^*=L_1\cup L_2^V\cup\{A,B\}$, and let $M^*$ be the expansion of $\overline{M}$ interpreting $A$ and $B$ as themselves.
Exactly as in the proof of Proposition~\ref{prop1}, find an $L^*$-formula $\varphi^*(x,y,\overline{z})$ such that for all $b_i\in B$, the solution set $\varphi^*(b_i,M^*,\overline{r})$ is $\{a_{i,j}:j\in {\mathbb Z}, j \neq i\}$.
Now let $\theta(u,v,y,\overline{z})$ be the $L^*$-formula
$$B(u)\wedge B(v)\wedge A(y)\wedge \varphi^*(u,y,\overline{z})\wedge \varphi^*(v,f(y),\overline{z})$$
Then the solution set of $\theta(u,v,y,\overline{r})\vee \theta(v,u,y,\overline{r})$ is the graph of a bijection from $(B\times B) \setminus \{(b, b) : b \in B\}$ onto $A$, which suffices.
\end{claimproof}
\end{proof}
Using Propositions~\ref{prop1} and \ref{prop2} we are now able to prove the negative portions of Theorem~\ref{big}.
As the positive portion was proved in Lemma~\ref{positive}, this suffices.
\begin{theorem} \label{negative} Suppose $T$ is a complete $L$-theory and $Y\subseteq \lambda^k$ with $\lambda\ge ||L||$. Then:
\begin{enumerate}
\item If $T$ is not monadically NFCP and $Y$ is not monadically definable, then $(T,Y)$ is not always monadically NIP; and
\item If $T$ is not purely monadic and $Y$ is not monadically NFCP definable, then $(T,Y)$ is not always monadically NIP.
\end{enumerate}
\end{theorem}
\begin{proof} (1) Choose $N_1\models T$ with universe $\lambda$, and let $N_2=(\lambda,Y)$ be the structure in the language $L_2=\{Y\}$ with the obvious interpretation.
If $T$ is not monadically NIP, then the expansion $(N_1,Y)$ suffices, so assume $Th(N_1)$ is monadically NIP. Similarly, if $Th(N_2)$ is not monadically NIP, then
again $(N_1,Y)$ suffices, so assume $Th(N_2)$ is monadically NIP as well. Now, depending on whether or not $Th(N_2)$ is monadically NFCP, apply either Proposition~\ref{prop1} or Proposition~\ref{prop2} to get a permutation $\sigma\in Sym(\lambda)$ such that $Th(N_1,\sigma(N_2))$ is not monadically NIP.
Of course, $Y$ need not be preserved here, so apply $\sigma^{-1}$. That is, let $(\sigma^{-1}(N_1),Y)$ be the $L\cup\{Y\}$-structure so that $\sigma^{-1}$ is an $L\cup\{Y\}$ isomorphism. As $\sigma^{-1}(N_1)\models T$, this structure witnesses that $(T,Y)$ is not always monadically NIP.
(2) Let $N_1=(\lambda,Y)$ and let $N_2$ be any model of $T$ with universe $\lambda$. As in (1), if either $Th(N_1)$ or $Th(N_2)$ is not monadically NIP we are done, so assume both are. Again, by either Proposition~\ref{prop1} or Proposition~\ref{prop2} (depending on $Th(N_2)$), we get a permutation $\sigma\in Sym(\lambda)$
such that $(N_1,\sigma(N_2))$ has a non-monadically NIP theory. But this structure is precisely $(\sigma(N_2),Y)$ and $\sigma(N_2)\models T$, so again $(T,Y)$ is not always
monadically NIP.
\end{proof}
\bibliographystyle{amsalpha}
\section{Introduction}
The experimental study of nuclei around the doubly-magic $^{132}$Sn nucleus is of special interest. Such new
data allow us to test ingredients of shell-model (SM) calculations far away from
the valley of stability.
For instance, a realistic effective interaction to be used in the $50-82$ valence shell was
derived from the CD-Bonn
nucleon-nucleon potential and has been tested in several nuclei having few
proton particles and neutron holes away from the doubly-magic core:
$^{134}_{52}$Te$_{82}$, $^{130}_{50}$Sn$_{80}$, $^{132}_{51}$Sb$_{81}$ and
$^{132}_{52}$Te$_{80}$~\cite{br05}. The experimental energy levels of these
four nuclei are well reproduced, showing that such an approach for determining the two-body
matrix elements can provide an accurate description of nuclear-structure properties.
More recently, the experimental high-spin states of five $N=82$ isotones~\cite{as12b} were
described using this interaction~\cite{sr13}, named SN100PN. Nevertheless, because of
their neutron magic number, only the proton part of the interaction comes into
play in such SM calculations.
In order to assess more precisely the quality of this realistic effective
interaction, the results of SM calculations have to be compared to experimental
data in other nuclei. For instance, all parts of the effective interaction can be tested
using nuclei having neutron holes in the presence of a few proton particles, such as the $_{52}$Te
isotopes with $N < 80$.
Yrast excitations in several heavy-mass Te isotopes have already been obtained
experimentally using deep-inelastic
reactions, the states of $^{127-131}$Te being identified up to spin (23/2) and those of
$^{126,128}$Te up to spin (12) or (14)~\cite{zh98}, i.e., up to the breaking of
the first $\nu h_{11/2}$ pair. More recently, a new level scheme of $^{130}$Te has
been established, showing an isomeric state at 4.4~MeV excitation energy~\cite{br04}.
The use of binary fission induced by heavy ions allows us to extend the yrast line to
higher-spin states, particularly those coming from the breaking of several pairs.
Thus, in this work, the $^{124-131}$Te isotopes have been
produced as fragments of two fusion-fission reactions. Their level schemes have been built
from the $\gamma$ high-fold events and $\gamma- \gamma$ angular
correlations have been analyzed in order to assign spin and parity values to most
of the states.
In addition, the half-lives of isomeric states have been measured from the
delayed coincidences between a fission fragment detector and the gamma array, or from the
timing of the Ge detectors.
Several Te isotopes with heavier masses were also observed in the C+U reaction. The identification
of their complementary fragments by means of the cross coincidences of their $\gamma$-rays
indicates that they are not produced from the same reaction channel as $^{124-131}$Te, but from
fissions following transfer or incomplete fusion.
Many other nuclei belonging to the $A \sim 140-144$ region also have such
singular partners, implying that they are produced by the same mechanism.
In the last part of this paper, all the yrast states of $^{124-131}$Te are first
discussed in comparison with the general features known in this mass region. The predictions
of SM calculations using the SN100PN effective interaction~\cite{br05} are then
presented, starting from the description of the high-spin states of several Sn isotopes in which
the breakings of several neutron pairs occupying the $\nu h_{11/2}$
orbit were experimentally identified~\cite{as12,pi11,lo08}. With regard to the Te isotopes,
the SM calculations are done in the full valence space in four cases, $^{128-131}$Te, and the theoretical predictions are compared with the experimental results. In addition, the detailed analysis of the wave functions of the high-spin
states shows the effects of the proton-pair breaking along the yrast lines of the heavy Te
isotopes.
\section{Experimental details}
\subsection{Reactions, $\gamma$-ray detection and analysis\label{exp}}
The $_{52}$Te isotopes of interest were obtained as fission fragments in
two experiments. First, the $^{12}$C + $^{238}$U reaction was studied at 90 MeV incident
energy, with a beam provided by the Legnaro XTU Tandem accelerator. Second, the
$^{18}$O + $^{208}$Pb reaction was studied with an 85 MeV incident
energy beam provided by the Vivitron accelerator of IReS (Strasbourg).
The $\gamma$ rays were detected with the Euroball array~\cite{si97}.
The spectrometer contained 15
cluster germanium detectors placed in the backward hemisphere with
respect to the beam, 26 clover germanium detectors located
around 90$^\circ$ and 30 tapered single-crystal germanium detectors
located at forward angles. Each cluster detector consists of seven
closely packed large-volume Ge crystals~\cite{eb96} and each
clover detector consists of four smaller Ge crystals~\cite{du99}.
In order to avoid Doppler broadening, both experiments
have been performed with thick targets stopping the recoiling nuclei
(47 mg/cm$^{2}$ for the $^{238}$U target and 100 mg/cm$^{2}$ for the $^{208}$Pb target, respectively).
The data of the C+U experiment were recorded in an event-by-event mode with the
requirement that a minimum of five unsuppressed Ge
detectors fired in prompt coincidence. A set of 1.9$\times
10^{9}$ three- and higher-fold events was available
for a subsequent analysis. For the O+Pb experiment, a lower trigger condition
(three unsuppressed Ge) allowed us to register 4$\times 10^{9}$ events with a
$\gamma$-fold greater than or equal to 3. The offline analysis consisted
of both multigated spectra and three-dimensional 'cubes' built
and analyzed with the Radware package~\cite{ra95}.
More than one hundred nuclei are produced at high spin in
such experiments, and this gives several thousands of $\gamma$
transitions which have to be sorted out. Single-gated
spectra are useless in most cases. The selection of one
particular nucleus requires at least two energy conditions, implying
that at least two transitions have to be known.
It is worth noting that prompt $\gamma$ rays emitted by couples of complementary
fragments are detected in coincidence~\cite{ho91,po96}.
Because the isotopes of interest are produced from two different
fissioning compound nuclei in this work, the complementary fragments are
different in the two reactions. This gives a fully unambiguous assignment
of transitions seen in both experiments.
The relative intensities of the lowest transitions of the even-$A$ Te isotopes
have been measured in spectra in double coincidences with two transitions
emitted by one partner. Then, we have used the spectra in double coincidences
with one low-lying transition of the Te isotope of interest and one transition
of a partner. Finally, for determining the intensity of the weak
transitions, we have analyzed spectra in double coincidences with two
transitions of the Te level schemes and normalized the obtained results by using
relative intensities extracted from the spectra mentioned above.
\subsection{Isomer identification\label{isomer}}
As reported in previous papers~\cite{lu02,po05,as12,as12b}, another experiment
was performed using the SAPhIR\footnote{SAPhIR, Saclay Aquitaine Photovoltaic cells
for Isomer Research.} heavy-ion detector~\cite{Saphir}, here composed
of 32 photovoltaic cells, in order to identify new isomeric states in the fission
fragments. Placed in the target chamber of Euroball, SAPhIR was used to
detect the escaping fission-fragments of the $^{12}$C (90~MeV) + $^{238}$U
reaction from a thin 0.14 mg/cm$^{2}$ uranium target.
The detection of the two fragments in coincidence, providing a clean signature of fission
events, was used as the trigger for Euroball. The Euroball
time window was [50~ns--1~$\mu$s], allowing detection of delayed $\gamma$-rays emitted during the
de-excitation of isomeric states. Time spectra between
fragments and $\gamma$-rays were analyzed in order to measure the half-life of
isomeric levels, in a range of several tens to several hundreds of nanoseconds.
\subsection{$\gamma$-$\gamma$ angular correlations \label{correl}}
It is well known that the $\gamma$ rays emitted by fusion-fission fragments do not show
any anisotropy in their angular distributions with respect to the incident beam.
However, angular correlations of two successive transitions are meaningful.
In order to determine the spin values of excited states, the coincidence rates
of two successive $\gamma$ rays are
analyzed as a function of $\theta$, the average relative angle between the
two fired detectors.
The Euroball spectrometer had $C^{2}_{239}=28441$ combinations of two crystals,
corresponding to about 2000 distinct values of the relative angle
(within 2$^\circ$). Therefore, in order
to keep reasonable numbers of counts, all these angles have been
grouped around three average relative angles: 22$^\circ$, 46$^\circ$,
and 75$^\circ$. The coincidence rate increases between 0$^\circ$ and
90$^\circ$ for the dipole-quadrupole cascades, whereas it decreases for
the quadrupole-quadrupole or dipole-dipole ones.
The theoretical values of several coincidence rates for the Euroball geometry have been already given
in previous papers~\cite{as06,po11,as13}. The method has been checked
by correctly reproducing the expected angular correlations of $\gamma$-transitions
having well-known multipole orders and belonging to various
fission fragments.
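The expected size of this effect can be sketched numerically. The snippet below is a minimal illustration (not the analysis code used here): it evaluates the point-detector correlation function $W(\theta)=1+A_{22}P_2(\cos\theta)+A_{44}P_4(\cos\theta)$ for a stretched quadrupole-quadrupole cascade with the standard coefficients $A_{22}=0.102$, $A_{44}=0.0091$, neglecting the attenuation due to the finite crystal size.

```python
import math

def legendre(n, x):
    # Legendre polynomials P2 and P4, the only ones needed for gamma-gamma
    # correlations of low-multipolarity cascades
    if n == 2:
        return (3 * x**2 - 1) / 2
    if n == 4:
        return (35 * x**4 - 30 * x**2 + 3) / 8
    raise ValueError(n)

def w(theta_deg, a22, a44):
    """Point-detector correlation W(theta) = 1 + A22*P2 + A44*P4."""
    x = math.cos(math.radians(theta_deg))
    return 1 + a22 * legendre(2, x) + a44 * legendre(4, x)

# Standard coefficients for a stretched 4(2)2(2)0 quadrupole-quadrupole cascade
A22_QQ, A44_QQ = 0.1020, 0.0091

for angle in (22, 46, 75):
    ratio = w(angle, A22_QQ, A44_QQ) / w(75, A22_QQ, A44_QQ)
    print(f"R({angle} deg) = {ratio:.3f}")
```

The resulting ratios, about 1.13 at 22$^\circ$ and 1.06 at 46$^\circ$ relative to 75$^\circ$, have the same sense and magnitude as the quadrupole-quadrupole entries of Tables~\ref{correl_126Te} and \ref{correl_128Te}.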
When the statistics of our data are too low to perform
such a measurement, the spin assignments are based upon
(i) the already known spins of some states, (ii) the assumption
that in yrast decays, spin values increase with the excitation energy,
(iii) the possible existence of cross-over transitions, and
(iv) the analogy with the level structures of the other isotopes.
\section{Experimental results}\label{results}
The $\gamma$-rays emitted by the low-lying states of $^{122-132}$Te isotopes
have been observed in both fusion-fission reactions used in the present work.
Regarding $^{122}$Te and $^{132}$Te, their yields are so low that only the decays of
their low-lying yrast states are observed and the transitions of their partners
could not be identified in gated spectra.
On the other hand, we have measured many new $\gamma$-rays
emitted by the high-spin states of $^{124-131}$Te. The results are
presented in the two following sections. In the third one, we discuss the
particular cases of $^{133-136}$Te, which have been observed only in the
$^{12}$C + $^{238}$U reaction.
\subsection{Study of the even-$A$ $^{124-130}$Te isotopes}
\subsubsection{Level scheme of $^{124}$Te\label{te124}}
Previous information on the medium-spin excited states of $^{124}$Te comes
from results of the ($\alpha$,2n$\gamma$) reaction~\cite{wa98}. The
positive-parity yrast band was identified up to the $I=10^+$ state at 3152~keV
and the negative-parity one up to the $I=11^-$ state at 3987~keV. In addition, an
$I=11$ level was proposed at 3850~keV. We confirm the decay schemes
of the 10$^+$ and 11$^-$ states and we have
added a few new states at higher energy (see the colored states in Fig.~\ref{schema124}).
We have gathered in Table~\ref{gammas_te124} the properties of all the transitions
assigned to $^{124}$Te from this work.
\begin{figure}[h]
\begin{center}
\includegraphics*[width=7.7cm]{f01_124Te_color.ps}
\caption[]{(Color online) Level scheme of $^{124}$Te deduced in the present work.
The colored levels are new. The width of
the arrows is proportional to the relative intensity of the $\gamma$ rays.
}
\label{schema124}
\end{center}
\end{figure}
\begin{table}[!ht]
\begin{center}
\caption{Properties of the transitions assigned to $^{124}$Te observed in this
work.}\label{gammas_te124}
\begin{tabular}{rrccc}
\hline
E$_\gamma^{(a)}$(keV)& I$_\gamma^{(a),(b)}$& J$_i^\pi \rightarrow$J$_f^\pi$ &E$_i$&E$_f$ \\
\hline
297.4(4)& 4.1(16)& (12$^-$) $\rightarrow$ 11$^-$ &4284.5 &3987.1 \\
330.2(4)& 2(1)& (16$^+$) $\rightarrow$ (15$^+$)&5476.8 &5146.6 \\
489.5(3)& 32(6)& 10$^+$ $\rightarrow$ 8$^+$ &3152.4 &2662.9 \\
498.1(3)& 77(12)& 6$^+$ $\rightarrow$ 4$^+$ &1746.0 &1247.9 \\
553.1(4)& 4.8(19)& (15$^+$) $\rightarrow$ (14$^+$) &5146.6 &4593.5 \\
602.4(3)& -& 2$^+$$\rightarrow$ 0$^+$ &602.4 &0.0 \\
616.5(4)& 11(3)& 9$^-$ $\rightarrow$ 7$^-$ &3288.7 &2672.2 \\
637.3(5)& 4.4(18)& 11$^-$ $\rightarrow$ 9$^-$ &3987.1 &3349.8 \\
645.5(2)& 100 & 4$^+$ $\rightarrow$ 2$^+$ &1247.9 &602.4 \\
677.6(5)& 5.7(23)& 9$^-$ $\rightarrow$ 7$^-$ &3349.8 &2672.2 \\
697.3(3)& 25(5)& (12$^+$) $\rightarrow$ 10$^+$ &3849.7 &3152.4 \\
698.5(4)& 5.9(24)& 11$^-$ $\rightarrow$ 9$^-$ &3987.1 &3288.7 \\
743.8(4)& 15(4)& (14$^+$) $\rightarrow$ (12$^+$) &4593.5 &3849.7 \\
906.1(5)& 4.7(19)& (14$^+$) $\rightarrow$ (12$^+$) &4755.8 &3849.7 \\
916.9(3)& 44(9)& 8$^+$ $\rightarrow$ 6$^+$ &2662.9 &1746.0 \\
926.2(3)& 23(5)& 7$^-$ $\rightarrow$ 6$^+$ &2672.2 &1746.0 \\
\hline
\end{tabular}
\end{center}
$^{(a)}$ The number in parentheses is the error in the least significant digit shown.\\
$^{(b)}$ The relative intensities are normalized to $I_\gamma(645) = 100$.\\
\end{table}
The statistics of our $^{124}$Te
data are too low to perform $\gamma-\gamma$ angular correlation analyses. Therefore, the
spin assignments of all the new states shown in Fig.~\ref{schema124} are based on close
similarity with the results obtained in the other isotopes, which are presented in the
following sections.
It is worth noting that the angular distributions of the two transitions at 697 and
698 keV were measured in the previous work dealing with the ($\alpha$,2n$\gamma$) reaction~\cite{wa98}. While the latter exhibits the
standard coefficients of a quadrupole transition, the extremely large
value of the $a_2$ coefficient [$a_2=0.82(3)$] of the former led the authors to assign a
dipole character to the 697-keV transition (implying a $I=11$ value for the decaying
state at 3850 keV, at variance with the $I=12$ value proposed in the present work).
Nevertheless it has to be noticed that these two transitions are also
close in energy to the broad $\gamma$ line at 697 keV emitted by the 2$^+$ state of
$^{74}$Ge, casting doubt on these angular-distribution results, which were
obtained from a singles spectrum.
\subsubsection{Level scheme of $^{126}$Te\label{te126}}
The yrast excitations of $^{126}$Te had already been studied up to $I^\pi = 10^+$
and 7$^-$ with the $^{124}$Sn($\alpha$,2n) reaction~\cite{ke71}. Two new
transitions were added in the positive-parity branch, leading to the (14$^+$) state, by
means of deep inelastic $^{130}$Te + $^{64}$Ni reactions~\cite{zh98}. These two sets
of states are confirmed in the present work. Moreover by using all the mutual
$\gamma-\gamma-\gamma$ coincidences, we have extended the level scheme up to 6-MeV
excitation energy, as shown in Fig.~\ref{schema126}.
\begin{figure}[!ht]
\begin{center}
\includegraphics*[width=8.5cm]{f02_126Te_color.ps}
\caption{(Color online) Level scheme of $^{126}$Te deduced in the present work.
The colored levels are new. The width of
the arrows is proportional to the relative intensity of the $\gamma$ rays. The
half-life of the 2972-keV level is from Ref.~\cite{nndc}.
}
\label{schema126}
\end{center}
\end{figure}
Several new transitions which have been located above the 7$^-$ state form doublets
with the low-lying $\gamma$ rays. The 696.2-keV transition is close in energy to the
$4^+ \rightarrow 2^+$ transition at 694.6 keV, and the 412.1- and 410.0-keV ones are
close in energy to the $6^+ \rightarrow 4^+$ transition at 414.4 keV.
The spectra given in Fig.~\ref{spectre126} reveal these doublets.
They show that 412- and 410-keV lines are detected in
coincidence with the 414-keV one and that the 696-keV line is detected in coincidence
with the 695-keV one.
\begin{figure}[!ht]
\begin{center}
\includegraphics*[width=7.5cm]{f03_spectres_126Te.eps}
\caption{Examples of coincidence spectra double-gated on two transitions of
$^{126}$Te, built from the $^{18}$O + $^{208}$Pb data set. The $\gamma$ rays emitted by
the Sr complementary fragments are labeled.}
\label{spectre126}
\end{center}
\end{figure}
Angular correlations of successive $\gamma$ rays have been extracted for the most
intense transitions of $^{126}$Te. The experimental results are given in
Table~\ref{correl_126Te}.
The coincidence rates between the low-lying $\gamma$ rays are in agreement with the results
of the previous experiment~\cite{ke71} where the angular distributions were measured
following the $^{124}$Sn ($\alpha$, 2n) reaction. Moreover, the results given in
Table~\ref{correl_126Te} indicate that the 713-, 850-, 696-, and 571-keV transitions are
quadrupole. Therefore the spin values of the 3686-, 4536-, 3191- and 3763-keV levels are
12$^+$, 14$^+$, 9$^-$, and 11$^-$, respectively.
For the higher-spin part of the level scheme, we have assumed that the spin values
increase with excitation energy and that the low-energy transitions have an $M1$
character.
\begin{table}[!ht]
\begin{center}
\caption{Coincidence rates between the low-lying $\gamma$ rays of $^{126}$Te
as a function of their relative angle of detection, normalized to
the ones obtained around 75$^\circ$.}\label{correl_126Te}
\begin{tabular}{cccc}
\hline
E$_\gamma$-E$_\gamma$&R(22$^\circ)^{(a)}$&R(46$^\circ)^{(a)}$ &R(75$^\circ$)\\
\hline
414~-~571 & 1.11(8) & 1.04(5) &1.00 \\
414~-~666 & 1.12(8) & 1.06(5) &1.00 \\
414~-~695/696 & 1.08(7) & 1.05(5) &1.00 \\
414~-~713 & 1.10(9) & 1.03(5) &1.00 \\
414~-~720 & 0.90(7) & 0.95(5) &1.00 \\
414~-~989 & 1.15(9) & 1.08(6) &1.00 \\
& & & \\
720~-~695/696 & 0.94(6) & 0.98(5) &1.00 \\
720~-~571 & 0.93(6) & 0.98(5) &1.00 \\
& & & \\
208~-~713 & 1.09(7) & 1.03(6) &1.00 \\
208~-~850 & 1.10(7) & 1.03(6) &1.00 \\
\hline
\end{tabular}
\end{center}
$^{(a)}$ The number in parentheses is the error in the least significant digit shown.\\
\end{table}
We have gathered in Table~\ref{gammas_te126} the properties of all the transitions
assigned to $^{126}$Te from this work.
\begin{table}[!ht]
\begin{center}
\caption{Properties of the transitions assigned to $^{126}$Te observed in this
work.}\label{gammas_te126}
\begin{tabular}{rrccc}
\hline
E$_\gamma^{(a)}$(keV)& I$_\gamma^{(a),(b)}$& J$_i^\pi \rightarrow$J$_f^\pi$ &E$_i$&E$_f$ \\
\hline
208.1(3)& 35(7)& 10$^+$ $\rightarrow$ 8$^+$ &2972.4 &2764.3 \\
356.5(5)& 1.0(5)& $\rightarrow$(6$^-$) &3194.6 &2838.1 \\
410.0(5)& 2(1)& (13$^-$) $\rightarrow$ (12$^-$) &4584.9 & 4174.9 \\
412.1(4)& 5(2)& (12$^-$) $\rightarrow$11$^-$ &4174.9 &3762.8 \\
414.4(2)& 92(14)& 6$^+$ $\rightarrow$4$^+$ &1775.0 &1360.6 \\
442.4(5)& 2.3(11)& (16$^+$) $\rightarrow$ (15$^+$) &5535.5 &5093.1 \\
451.8(5)& 1.8(9)& $\rightarrow$ 12$^+$ &4137.4&3685.6 \\
461.0(5)& 1.2(6)& (15$^+$) $\rightarrow$(14$^+$) &5093.1 &4631.9 \\
526.4(4)& 4.9(15)& (15$^-$) $\rightarrow$(13$^-$) &5111.3 &4584.9 \\
549(1)& 2(1)& (13$^-$) $\rightarrow$(12$^-$) &4724 &4174.9 \\
557.4(5)& 2.8(14)& (15$^+$) $\rightarrow$14$^+$ &5093.1 &4535.7 \\
571.4(3)& 23(5)& 11$^-$ $\rightarrow$9$^-$ &3762.8 &3191.4 \\
575.7(5)& 1.5(7)& (15$^-$) $\rightarrow$14$^+$ &5111.3 &4535.7 \\
593(1)& 5.0(2)& $\rightarrow$ 5$^-$ &2811 &2218 \\
666.0(2)& 100 & 2$^+$ $\rightarrow$0$^+$ &666.0 & 0 \\
694.6(3)& 96(14)& 4$^+$ $\rightarrow$ 2$^+$ &1360.6 &666.0 \\
696.2(3)& 28(6)& 9$^-$ $\rightarrow$ 7$^-$ &3191.4 &2495.2 \\
713.2(3)& 30(6)& 12$^+$ $\rightarrow$ 10$^+$ &3685.6 &2972.4 \\
720.2(3)& 34(7)& 7$^-$ $\rightarrow$ 6$^+$ &2495.2 &1775.0 \\
764.7(4)& 4.2(17)& (13$^+$) $\rightarrow$ 12$^+$ &4450.3 & 3685.6 \\
822.1(4)& 7(2) & (13$^-$) $\rightarrow$ 11$^-$ &4584.9 &3762.8 \\
850.1(4)& 13(3)& 14$^+$ $\rightarrow$12$^+$ &4535.7 &3685.6 \\
857(1)& 10(3)& 5$^-$ $\rightarrow$4$^+$ &2218 &1360.6 \\
945.8(5)& 1.5(7)& (17$^-$) $\rightarrow$ (15$^-$) &6057.1 &5111.3 \\
946.3(4)& 5.5(16)& (14$^+$) $\rightarrow$12$^+$ &4631.9 &3685.6 \\
962(1)& 1.6(8)& (13$^-$) $\rightarrow$ 11$^-$ &4724 &3762.8 \\
989.3(3)& 45(9)& 8$^+$ $\rightarrow$ 6$^+$ &2764.3 &1775.0 \\
1061.2(5)& 1.3(6)& (16$^+$) $\rightarrow$(14$^+$) &5693.1 &4631.9 \\
1063.1(5)& 5.1(15)& (6$^-$)$\rightarrow$ 6$^+$ &2838.1 &1775.0 \\
\hline
\end{tabular}
\end{center}
$^{(a)}$ The number in parentheses is the error in the least significant digit shown.\\
$^{(b)}$ The relative intensities are normalized to $I_\gamma(666) = 100$.\\
\end{table}
A two-quasiparticle $K^\pi=8^-$ isomeric state has been identified in many $N=74$
isotones ($Z=54-64$). Its excitation energy depends on the nuclear deformation, the
minimum value (2233 keV) being observed for the most deformed isotone,
$^{138}$Gd~\cite{br97}.
Such a state has been predicted at 2980-keV excitation energy in $^{126}$Te~\cite{xu99},
which would exhibit shape coexistence as its ground state is quasi-spherical while the
two quasineutrons leading to the $K^\pi=8^-$ state drive the nucleus to a prolate shape
($\beta_2=0.12$).
We have looked for isomeric states in $^{126}$Te by using the data registered with the
SAPhIR detector.
Only one $\gamma$-ray cascade has been found to be delayed, the one deexciting the known
10$^+$ isomeric state (see Fig.~\ref{schema126}).
In conclusion, the $K^\pi=8^-$ state of $^{126}$Te could not be observed in our work;
its energy is likely too high to allow for its population in the fusion-fission
process. Moreover,
because of its high excitation energy, the $K^\pi=8^-$ state likely has a very short
half-life, since it can decay to several excited states, such as the 7$^-$ level at 2495
keV (see Fig.~\ref{schema126}).
\subsubsection{Level scheme of $^{128}$Te\label{te128}}
Several medium-spin states were known in $^{128}$Te prior to this work. From the $\beta$
decay of the $I^\pi=8^-$ isomeric state of $^{128}$Sb, yrast structures were
unambiguously identified up to spin 6$^+$ and 7$^-$~\cite{ke72a}. Later on, by using
deep-inelastic reactions, a long-lived isomeric state was established by means of two
delayed transitions populating the 6$^+$ level and assigned as the 10$^+$ level from
the $(\nu h_{11/2})^2$ configuration~\cite{zh98}.
Moreover four new $\gamma$ rays were measured and located above the 10$^+$ state,
spin and parity values of (12$^+$) and (14$^+$) being suggested for two of the
newly-established levels.
All these yrast states are confirmed by the analyses of both data sets of the
present work. Moreover, the spectra doubly-gated on the known transitions allowed us to
identify many new $\gamma$ lines which extend the level scheme up to 6.2 MeV excitation
energy (see Fig.~\ref{schema128}).
Three parallel structures are found to populate the 3506-keV level.
\begin{figure}[!ht]
\begin{center}
\includegraphics*[width=8.5cm]{f04_128Te_color.ps}
\caption{(Color online) Level scheme of $^{128}$Te deduced in the present work.
The colored levels are new. The width of
the arrows is proportional to the relative intensity of the $\gamma$ rays. The
half-life of the 2789-keV level is from this work and the one of the
2336-keV level is from Ref.~\cite{nndc}.
}
\label{schema128}
\end{center}
\end{figure}
One of them is also linked to the 7$^-$ state, defining the negative-parity
band already known in the lighter isotopes. The two spectra shown in
Fig.~\ref{spectres128} display some of the new transitions belonging to these
new structures.
\begin{figure}[!ht]
\begin{center}
\includegraphics*[width=7.5cm]{f05_spectres_128Te.eps}
\caption{Examples of coincidence spectra double-gated on two transitions of
$^{128}$Te, built from the $^{18}$O + $^{208}$Pb data set.
The first gate is set on one of the first three yrast transitions and the second
gate is set either on the 717-keV transition (a) or on the 526-keV transition (b).
The $\gamma$ rays emitted by the $^{92-94}$Sr complementary fragments are labeled.
The peaks marked with a star are
contaminants.}
\label{spectres128}
\end{center}
\end{figure}
Angular correlations of successive $\gamma$ rays have been extracted for the most
intense transitions of $^{128}$Te. The experimental results are given in
Table~\ref{correl_128Te}.
The coincidence rates between the low-lying $\gamma$ rays are in agreement
with the results of the internal conversion electron measurements done in
Ref.~\cite{ke72a}, following the $\beta$ decay of $^{128}$Sb, from which the spin
and parity of the states located below 2.4-MeV excitation energy were
determined. Moreover, the results given in Table~\ref{correl_128Te} indicate
that the 387-, 563-, 627-, 717-, 813-, and 923-keV transitions have a stretched
quadrupole character, while the 326-, 629-, and 833-keV transitions have a
stretched dipole one. Thus the spin and parity values of most of the excited
states in the energy range between 2.4 and 4.8 MeV are firmly assigned, such as
the 12$^+$ and 14$^+$ states (see the left part of Fig.~\ref{schema128}) and the
8$^-$, 9$^-$, 11$^-$, 13$^-$, 14$^-$, and 15$^-$ states (see the right part of
Fig.~\ref{schema128}). In addition, the electric character of the 833-keV
$\gamma$ ray (which is a stretched-dipole transition from results of
Table~\ref{correl_128Te}) is unambiguously
determined from the fact that the $I=13$ state is linked to the 7$^-$ state by means of a
cascade of three $\gamma$ rays (two of them being stretched-quadrupole transitions).
For the higher-spin part of the level scheme, we have assumed that
the spin values increase with excitation energy and that states close in excitation
energy have the same spin value.
\begin{table}[!ht]
\begin{center}
\caption{Coincidence rates between the low-lying $\gamma$-rays of $^{128}$Te
as a function of their relative angle of detection, normalized to
the ones obtained around 75$^\circ$.}
\label{correl_128Te}
\begin{tabular}{cccc}
\hline
E$_\gamma$-E$_\gamma$&R(22$^\circ)^{(a)}$&R(46$^\circ)^{(a)}$ &R(75$^\circ$)\\
\hline
314~-~753 & 1.14(5) & 1.07(5) &1.00 \\
314~-~743 & 1.13(5) & 1.06(5) &1.00 \\
877~-~314 & 1.13(6) & 1.04(6) &1.00 \\
526~-~314 & 0.88(6) & 0.97(6) &1.00 \\
629~-~314 & 0.86(8) & 0.92(7) &1.00 \\
813~-~314 & 1.12(9) & 1.05(5) &1.00 \\
563~-~314 & 1.19(12) & 1.10(8) &1.00 \\
717~-~314 & 1.18(5) & 1.06(5) &1.00 \\
& & & \\
717~-~743 & 1.15(5) & 1.10(5) &1.00 \\
833~-~717 & 0.80(1) & 0.90(5) &1.00 \\
923~-~717 & 1.12(8) & 1.08(7) &1.00 \\
326~-~717 & 0.85(8) & 0.95(6) &1.00 \\
387~-~717 & 1.12(8) & 1.05(5) &1.00 \\
\hline
\end{tabular}
\end{center}
$^{(a)}$ The number in parentheses is the error in the least significant digit shown.\\
\end{table}
Noteworthy is the fact that the 14$^+$ yrast state is not the one previously
proposed in Ref.~\cite{zh98}. There, the argument of the large intensity of
the 833-keV transition was used to suggest that the 4340-keV level belongs to
the positive-parity yrast band. Moreover the decay of the 3149-keV level, as
well as the multipolarity of the 813-keV transition is at
variance with that given in Ref.~\cite{ke72a}. In that work,
the 3149-keV level was firmly defined by the coincidence relationships of the
intense 813-keV transition while two transitions having lower intensity
were also proposed to deexcite this level, because their energies fit well
the difference between the 3149-keV level and two low-lying levels. These two
transitions (at 227 and 1340 keV) have not been observed in our gated spectra.
In addition, the very low value of the K conversion coefficient given in
Table~3 of
Ref.~\cite{ke72a} indicates that the 813-keV transition has an $E1$ character, while
the results of the angular correlation measurements done in the present work
lead to an $E2$ one. Nevertheless, the value of the K conversion
coefficient is questionable, as it does not seem to be in agreement with what
can be seen in the conversion electron spectrum (see the figure 2 of
Ref.~\cite{ke72a}). Indeed the number of counts of the 813K line is very low, but
it is of the same order as those of the 743L or 753L lines. Thus given the theoretical
values of the L conversion coefficients (0.0003) and the relative intensity of the
813-keV transition (13\%), one may compute the value of the K conversion
coefficient, $\alpha _K(813) \sim 0.002$, in good agreement with an $E2$
multipolarity.
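This estimate is simple arithmetic and can be checked directly. The following sketch only illustrates the counting argument, using the numbers quoted above: if the 813K electron line has about the same number of counts as the 743L (or 753L) line, then $I(813)\,\alpha_K(813) \approx I(743)\,\alpha_L(743)$.

```python
# Rough check of the alpha_K(813) estimate quoted in the text
alpha_L_743 = 0.0003   # theoretical L-conversion coefficient (from the text)
I_743 = 100.0          # relative gamma intensity (normalization)
I_813 = 13.0           # relative gamma intensity of the 813-keV transition

# equal electron-line counts => I(813)*alpha_K(813) = I(743)*alpha_L(743)
alpha_K_813 = alpha_L_743 * I_743 / I_813
print(f"alpha_K(813) ~ {alpha_K_813:.4f}")  # ~0.0023, consistent with E2
```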
As expected, the five $\gamma$ rays deexciting the 10$^+$ isomeric state are
observed in the SAPhIR experiment.
The time distribution between the detection of two fragments by SAPhIR and the
emission of one of the first four $\gamma$-rays of the yrast cascade (877-314-753-743)
is shown in Fig.~\ref{TAC128Te}.
\begin{figure}[h]
\begin{center}
\includegraphics*[width=7cm]{f06_tac_128Te.eps}
\caption[]{(Color online) Half-life of the 2789~keV state of $^{128}$Te obtained
from the sum of the time distributions of the first four yrast transitions
(743, 753, 314, and 877 keV). See text for further details about the procedures
and the gating conditions.
}
\label{TAC128Te}
\end{center}
\end{figure}
In order to reduce the background, we have selected
the events containing a second $\gamma$-ray belonging to the cascade.
The least-squares fit of this spectrum gives
$T_{1/2}=236(20)$~ns, a more precise value than that obtained
from a previous analysis of the same data~\cite{ho00}, $T_{1/2}=250(40)$~ns.
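The fitting procedure can be illustrated with a minimal sketch. The snippet below uses synthetic, noise-free data (the actual analysis fitted the measured, background-subtracted time spectrum) and extracts the half-life from a least-squares fit of $\ln N$ versus time.

```python
import math

# Illustrative sketch (not the actual analysis): recover a half-life from an
# exponential decay-time spectrum via a log-linear least-squares fit.
T_HALF_TRUE = 236e-9  # s, value to recover (synthetic data)
lam = math.log(2) / T_HALF_TRUE
times = [i * 20e-9 for i in range(1, 40)]           # 20-ns time bins
counts = [1e4 * math.exp(-lam * t) for t in times]  # noise-free for clarity

# linear least squares on (t, ln N)
ys = [math.log(c) for c in counts]
n = len(times)
sx, sy = sum(times), sum(ys)
sxx = sum(t * t for t in times)
sxy = sum(t * y for t, y in zip(times, ys))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)

t_half = math.log(2) / -slope
print(f"T1/2 = {t_half * 1e9:.0f} ns")  # prints: T1/2 = 236 ns
```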
Thus the half-life of the 10$^+$ isomeric state of $^{128}$Te is significantly shorter
than that reported in Ref.~\cite{zh98}, $T_{1/2}=0.37(3)~\mu$s, and adopted in
the latest compilation~\cite{ka01}. The new value of the
$B(E2; 10^+ \rightarrow 8^+)$, 85(7)~$e^2$fm$^4$, i.e., 2.2(2)~W.u., will be
discussed below (see Sec.~\ref{discuss_even}).
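As a consistency check, the quoted $B(E2)$ value can be recovered from the measured half-life and the 101.3-keV transition energy with the standard $E2$ rate formula, provided internal conversion is included. In the sketch below, the total conversion coefficient is an assumed illustrative value (about 1.65; a 101-keV $E2$ transition in tellurium is strongly converted), not a number taken from this work.

```python
import math

# Sketch: B(E2; 10+ -> 8+) in 128Te from the measured half-life, using the
# standard rate formula T(E2)[1/s] = 1.223e9 * E^5[MeV] * B(E2)[e^2 fm^4].
T_HALF = 236e-9    # s, measured half-life of the 2789-keV state
E_GAMMA = 0.1013   # MeV, 10+ -> 8+ transition energy
ALPHA_TOT = 1.65   # assumed total internal-conversion coefficient (illustrative)

lam_total = math.log(2) / T_HALF           # total decay constant
lam_gamma = lam_total / (1 + ALPHA_TOT)    # gamma-ray partial decay constant
b_e2 = lam_gamma / (1.223e9 * E_GAMMA**5)  # e^2 fm^4

# Weisskopf unit for E2: B_W = 5.94e-2 * A^(4/3) e^2 fm^4
b_w = 5.94e-2 * 128 ** (4 / 3)
print(f"B(E2) = {b_e2:.0f} e^2 fm^4 = {b_e2 / b_w:.1f} W.u.")
```

With this assumed conversion coefficient one recovers about 85~$e^2$fm$^4$, i.e., 2.2~W.u., matching the quoted result.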
We have gathered in Table~\ref{gammas_te128} the properties of all the transitions
assigned to $^{128}$Te from this work.
\begin{table}[!ht]
\begin{center}
\caption{Properties of the transitions assigned to $^{128}$Te observed in this
work.}\label{gammas_te128}
\begin{tabular}{rrccc}
\hline
E$_\gamma^{(a)}$(keV)& I$_\gamma^{(a),(b)}$& J$_i^\pi \rightarrow$J$_f^\pi$ &E$_i$&E$_f$ \\
\hline
101.3(3)& 17(4)& 10$^+$ $\rightarrow$ 8$^+$ & 2788.9& 2687.6 \\
262.1(4)& 3.5(14)& (14$^+$) $\rightarrow$ (13$^+$) & 4525.2& 4263.1 \\
313.6(2)& 80(12)& 6$^+$ $\rightarrow$ 4$^+$ &1810.1 &1496.5 \\
322.4(5)& 1.8(9)& $\rightarrow$ 5$^-$ &2454.9 & 2132.5 \\
326.0(4)& 6.2(19)& 14$^-$ $\rightarrow$ 13$^-$ &4665.6 & 4339.6 \\
387.0(4)& 5.4(22)& 15$^-$ $\rightarrow$ 13$^-$ &4726.6 &4339.6 \\
457.1(5)& 2.8(14)& (12$^-$)$\rightarrow$ 11$^-$ & 4169.7& 3712.6 \\
467.3(5)& 1.0(5)& (16$^+$)$\rightarrow$ (15$^+$) &5542.7 &5075.4 \\
526.3(3)& 28(6)& 7$^-$ $\rightarrow$ 6$^+$ & 2336.4&1810.1 \\
527.6(4)& 5.6(17)& $\rightarrow$ 12$^+$ & 4033.6&3506.0 \\
550.2(5)& 1.6(8)& (15$^+$)$\rightarrow$ (14$^+$) &5075.4 &4525.2 \\
563.1(4)& 10(3)& 11$^-$ $\rightarrow$ 9$^-$ &3712.6 & 3149.5 \\
627.1(5)& 4.0(16)& 13$^-$ $\rightarrow$ 11$^-$ & 4339.6& 3712.6 \\
629.2(4)& 8(2)& 8$^-$ $\rightarrow$ 7$^-$ & 2965.6& 2336.4 \\
636.0(4)& 12(4)& 5$^-$ $\rightarrow$ 4$^+$ &2132.5 & 1496.5 \\
670.1(4)& 4.1(16)& $\rightarrow$ 8$^-$ & 3635.7& 2965.6 \\
706.9(5)& 2.9(14)& (16$^-$) $\rightarrow$ 15$^-$ & 5433.4& 4726.6 \\
717.1(3)& 40(8)& 12$^+$ $\rightarrow$ 10$^+$ & 3506.0& 2788.9 \\
743.0(2)& 100 & 2$^+$ $\rightarrow$ 0$^+$ &743.0 &0.0 \\
753.5(2)& 95(14)& 4$^+$ $\rightarrow$ 2$^+$ & 1496.5&743.0 \\
757.1(4)& 7(2)& (13$^+$) $\rightarrow$ 12$^+$ & 4263.1&3506.0 \\
764.1(7)& 1.9(9)& (18$^+$) $\rightarrow$ (16$^+$) & 6209.2&5445.1 \\
767.6(5)& 4.8(19)& (16$^-$) $\rightarrow$ 14$^-$ & 5433.4& 4665.6 \\
813.1(4)& 15(4)& 9$^-$ $\rightarrow$ 7$^-$ & 3149.5& 2336.4 \\
833.5(4)& 18(4)& 13$^-$ $\rightarrow$ 12$^+$ & 4339.6&3506.0 \\
877.5(3)& 48(10)& 8$^+$ $\rightarrow$ 6$^+$ & 2687.6& 1810.1 \\
922.6(4)& 10(3)& 14$^+$ $\rightarrow$ 12$^+$ & 4428.6& 3506.0 \\
1016.5(5)& 3.2(15)& (16$^+$) $\rightarrow$ 14$^+$ & 5445.1&4428.6 \\
1217.7(6)& 1.8(9)& (17$^-$) $\rightarrow$ 15$^-$ & 5944.3& 4726.6 \\
\hline
\end{tabular}
\end{center}
$^{(a)}$ The number in parentheses is the error in the least significant digit shown.\\
$^{(b)}$ The relative intensities are normalized to $I_\gamma(743) = 100$.\\
\end{table}
\subsubsection{Level scheme of $^{130}$Te\label{te130}}
The $\beta$ decay of $^{130}$Sb, similar to that of $^{128}$Sb,
populates medium-spin states of $^{130}$Te. Its study
led to the identification of the first yrast states of $^{130}$Te, by
means of $\gamma$-ray and conversion electron measurements~\cite{ke72b}.
Then, by using deep-inelastic reactions, a long-lived
isomeric state ($T_{1/2}=4.2(9) \mu$s) was proposed thanks to the observed
delay of the first three yrast transitions as well as of two new
$\gamma$ lines, interpreted as the $8^+ \rightarrow 6^+$ and
$8^+ \rightarrow 7^-$ transitions~\cite{zh98}. This isomeric state was
assumed to be the expected
10$^+$ state from the $(\nu h_{11/2})^2$ configuration,
the $10^+ \rightarrow 8^+$ transition not being detected because of its
very low energy. Some years later, the conversion electrons of this
transition were looked for, the $^{130}$Te nuclei being produced by
thermal-neutron induced fission of Pu~\cite{ge01}. The $10^+ \rightarrow 8^+$
transition could not be observed, thus a conservative upper limit of its
energy was proposed to be 25~keV. Nevertheless, by using the delayed
$\gamma$ rays, the half-life of the isomeric state was remeasured,
$T_{1/2}=1.90(8)~\mu$s, i.e., more than a factor of 2 smaller than the previous
value.
High-spin states lying above 3-MeV excitation energy were identified from
the observation of a new cascade of three delayed transitions,
during the preliminary analysis of our SAPhIR experiment~\cite{ho98}. Later,
the analysis of another deep-inelastic experiment, $^{136}$Xe + $^{232}$Th,
performed with the Gammasphere array~\cite{br04}, led to a more detailed
decay scheme of this new isomeric state, displaying new paths which allowed the
authors of Ref.~\cite{br04} to determine the energy of the long-lived 10$^+$
state, 18.5 keV above the 8$^+$ state.
The new isomeric state was interpreted as the 15$^-$ state arising
from the maximum spin coupling of the four neutron holes,
$(\nu h_{11/2})^{-3}(\nu d_{3/2})^{-1}$~\cite{br04}. It is worth pointing out that such a
configuration has been recently established in neighboring even-$A$ Sn
isotopes, where the 15$^-$ state is also an isomeric state in the range of
several tens to several hundreds of nanoseconds~\cite{pi11,as12}.
All the yrast states of $^{130}$Te previously identified have been observed
in the present work. Moreover, the careful analysis of the coincidence
relationships allowed us to extend the level scheme by a few transitions
(see Fig.~\ref{schema130}).
\begin{figure}[!ht]
\begin{center}
\includegraphics*[width=8.5cm]{f07_130Te_color.ps}
\caption{(Color online) Level scheme of $^{130}$Te deduced in the present work.
The colored levels are new. The width of
the arrows is proportional to the relative intensity of the $\gamma$ rays. The
half-lives of the 1814- and 2664-keV levels are from Ref.~\cite{nndc} and the
ones of the 2145- and 4374-keV levels from this work.
}
\label{schema130}
\end{center}
\end{figure}
Many values of spin and parity are unambiguously established, using
both previous results on the electron conversion measurements~\cite{ke72b}
and present results on $\gamma-\gamma$ angular correlations. The K-conversion
coefficients of the 182-, 793-, and 839-keV transitions lead to an
$E2$ multipolarity, while that of the 331-keV transition corresponds to an $E1$
multipolarity. This is in good agreement with our angular-correlation results
(see Table~\ref{correl_130Te}), which also show that the 935- and 458-keV
$\gamma$ rays have a quadrupole character. Thus the spin and parity values of
all the yrast states are now determined up to $I=11$.
The spin and parity values proposed for the other states of the level scheme
(see Fig.~\ref{schema130}) are based on the arguments already used in the
preceding sections.
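The multipolarity assignments above rest on these coincidence-rate ratios: for a stretched quadrupole--quadrupole cascade such as $4(2)2(2)0$, the standard point-detector correlation $W(\theta)=1+0.102\,P_2(\cos\theta)+0.0091\,P_4(\cos\theta)$ predicts $R(22^\circ)\simeq 1.13$ and $R(46^\circ)\simeq 1.06$, close to the measured values, while quadrupole--dipole pairs give ratios below unity. A minimal numerical sketch (idealized point-like detectors; the finite solid angles of a real array attenuate the ratios somewhat):

```python
import math

def legendre_p2(x):
    return 0.5 * (3 * x**2 - 1)

def legendre_p4(x):
    return 0.125 * (35 * x**4 - 30 * x**2 + 3)

def w_corr(theta_deg, a2, a4):
    """Point-detector angular correlation W(theta) = 1 + a2*P2 + a4*P4."""
    x = math.cos(math.radians(theta_deg))
    return 1 + a2 * legendre_p2(x) + a4 * legendre_p4(x)

# Standard coefficients for a stretched 4(2)2(2)0 quadrupole-quadrupole cascade
A2, A4 = 0.1020, 0.0091

# Ratios normalized to 75 degrees, as in the tables of this work
for theta in (22, 46, 75):
    r = w_corr(theta, A2, A4) / w_corr(75, A2, A4)
    print(f"R({theta} deg) = {r:.3f}")
```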
\begin{table}[!ht]
\begin{center}
\caption{Coincidence rates between the low-lying $\gamma$ rays of $^{130}$Te
as a function of their relative angle of detection, normalized to
the ones obtained around 75$^\circ$.}\label{correl_130Te}
\begin{tabular}{cccc}
\hline
E$_\gamma$-E$_\gamma$&R(22$^\circ)^{(a)}$&R(46$^\circ)^{(a)}$ &R(75$^\circ$)\\
\hline
182~-~793 & 1.10(7) & 1.05(5) &1.00 \\
182~-~839 & 1.09(7) & 1.04(5) &1.00 \\
331~-~182 & 0.89(8) & 0.96(4) &1.00 \\
331~-~839 & 0.89(7) & 0.96(4) &1.00 \\
935~-~839 & 1.11(8) & 1.04(4) &1.00 \\
935~-~331 & 0.88(7) & 0.96(5) &1.00 \\
458~-~331 & 0.85(9) & 0.90(7) &1.00 \\
\hline
\end{tabular}
\end{center}
$^{(a)}$ The number in parentheses is the error in the least significant digit shown.\\
\end{table}
As mentioned above, a high-energy isomeric state was measured in
$^{130}$Te~\cite{ho98,br04}. Its main decay path is illustrated by the
spectrum of Fig.~\ref{spsaphir130}, which has been built from the data of
the SAPhIR experiment.
The time distribution between the detection of two fragments by SAPhIR and
the emission of one $\gamma$-ray of the 710-458-935 cascade is shown in
Fig.~\ref{TAC130Te_15moins}.
\begin{figure}[!ht]
\begin{center}
\includegraphics*[width=8.5cm]{f08_935saphir.eps}
\caption{Spectrum of $\gamma$ rays detected in
the 50~ns--1~$\mu$s time interval after the detection of two fragments
by SAPhIR and in prompt coincidence with the 935-keV transition of $^{130}$Te.}
\label{spsaphir130}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics*[width=7cm]{f09_tac_130Te_15moins.eps}
\caption[]{(Color online) Half-life of the 4374~keV state of $^{130}$Te
obtained
from the sum of the time distributions of the 935-, 458- and the 710-keV
transitions. See text for further details about the procedures
and the gating conditions.
}
\label{TAC130Te_15moins}
\end{center}
\end{figure}
In order to reduce the background, we have selected
the events containing a second $\gamma$-ray belonging to the
main decay path of the isomeric state, the 710-458-935-331-182-793-839
cascade. The fit of the time distribution gives $T_{1/2}=53(8)$ns, which is
in agreement with the value given in Ref.~\cite{br04} within the error bars.
In addition, we have computed the value of the total internal conversion
coefficient of the 126-keV transition. Its intensity imbalance, measured in
spectra in double coincidence with one $\gamma$ ray located above it and
another located below it, leads to
$\alpha_{tot}(126)=0.7(2)$, in agreement with an $E2$ multipolarity,
$\alpha_{tot}(126, E2, Z=52)=0.75$~\cite{BRICC}.
Thus the $B(E2)$ value of the isomeric decay is 193(29)~$e^2fm^4$, i.e.,
4.9(7)~W.u. This value will be discussed in Sect.~\ref{discuss_even}.
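The $B(E2)$ quoted above follows from the measured half-life, transition energy, and total conversion coefficient through the standard $E2$ decay-rate relation $\lambda_\gamma = 1.223\times10^{9}\,E_\gamma^5\,B(E2)$ (with $E_\gamma$ in MeV and $B(E2)$ in $e^2fm^4$). A minimal sketch reproducing the numbers of this section, where the Weisskopf estimate $0.0594\,A^{4/3}$ is the usual single-particle unit:

```python
import math

def be2_from_halflife(t_half_s, e_gamma_mev, alpha_tot):
    """B(E2) in e^2 fm^4 from an isomer half-life, assuming the level
    decays only through this one transition (the case of the 126-keV branch)."""
    lam_total = math.log(2) / t_half_s          # total decay rate (1/s)
    lam_gamma = lam_total / (1 + alpha_tot)     # gamma-ray partial rate
    return lam_gamma / (1.223e9 * e_gamma_mev**5)

def weisskopf_e2(a_mass):
    """Weisskopf single-particle estimate for E2, in e^2 fm^4."""
    return 0.0594 * a_mass**(4.0 / 3.0)

# T1/2 and E_gamma measured in this work; alpha_tot(E2) from BrIcc
be2 = be2_from_halflife(53e-9, 0.126, 0.75)
print(f"B(E2) = {be2:.0f} e^2 fm^4 = {be2 / weisskopf_e2(130):.1f} W.u.")
```

With the values of this section the sketch recovers $B(E2)\approx 192~e^2fm^4$, i.e., about 4.9~W.u., consistent with the quoted 193(29)~$e^2fm^4$.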
For the sake of completeness, we have remeasured the half-life of the 7$^-$ state. For that
purpose we did not use the data of the SAPhIR experiment since part of the population
of the 7$^-$ state comes from the decays of higher-lying isomeric states, mainly the
10$^+$ state with $T_{1/2}=1.9~\mu$s. That gives rise to a second component in the time
spectrum, which is not easy to subtract as the time window extends only to 1~$\mu$s.
Therefore we have used the timing information of the Ge detectors of Euroball (see
Ref.~\cite{as06} for the procedures and the calibrations). The time distribution
between the emission of one transition populating the 7$^-$ state
(458- and 935-keV $\gamma$ rays) and one transition involved in its decay (331- and
793-keV $\gamma$ rays) is shown in Fig.~\ref{TAC130Te_7moins}.
The slope, $T_{1/2}=110(5)$~ns, is in good agreement with the previous
value, 115(8)~ns~\cite{ke72b}.
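The half-lives in this section come from fitting an exponential decay plus a flat background to delayed-time distributions such as the one in Fig.~\ref{TAC130Te_7moins}. A minimal sketch of such a fit on synthetic data (the count rate, binning, and background level are illustrative only, not the experimental ones):

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Synthetic delayed-gamma time spectrum: exponential decay on a flat
# background; the true half-life of 110 ns is the 7- state value of
# this work, the rates are purely illustrative.
t = np.arange(0.0, 1000.0, 10.0)               # time bins (ns)
true_t12 = 110.0
counts = rng.poisson(500.0 * np.exp(-np.log(2) * t / true_t12) + 5.0)

# Estimate the flat background from the tail of the spectrum, then fit
# a straight line to the log of the net counts in the early, high-
# statistics part of the distribution; its slope gives the half-life.
bkg = counts[-20:].mean()
net = counts[:40].astype(float) - bkg
slope, _ = np.polyfit(t[:40], np.log(net), 1)
t12 = -np.log(2) / slope
print(f"fitted T1/2 = {t12:.0f} ns (true: {true_t12:.0f} ns)")
```

In the real analysis the fit range must exclude the prompt peak and any feeding component from higher-lying isomers, which is why the Euroball timing data were preferred here.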
\begin{figure}[h]
\begin{center}
\includegraphics*[width=7cm]{f10_tac_130Te_7moins.eps}
\caption[]{(Color online) Time distribution between the emission of the
458- or 935-keV $\gamma$ rays and of the 331- or 793-keV $\gamma$ rays, built
from the timing information of the Ge detectors. The slope is the half-life of
the 2145-keV state of $^{130}$Te.
}
\label{TAC130Te_7moins}
\end{center}
\end{figure}
We have gathered in Table~\ref{gammas_te130} the properties of all the transitions
assigned to $^{130}$Te from this work.
\begin{table}[!ht]
\begin{center}
\caption{Properties of the transitions assigned to $^{130}$Te observed in this
work.}\label{gammas_te130}
\begin{tabular}{rrccc}
\hline
E$_\gamma^{(a)}$(keV)& I$_\gamma^{(a),(b)}$& J$_i^\pi \rightarrow$J$_f^\pi$ &E$_i$&E$_f$ \\
\hline
126.2(3)& 14(4)& (15$^-$) $\rightarrow$ (13$^-$) & 4373.7 &4247.5\\
181.8(3)& 60(12)& 6$^+$ $\rightarrow$ 4$^+$ &1814.2 & 1632.4 \\
252.8(5)& 4(2)&(17$^-$) $\rightarrow$ (16$^-$) &5702.6 &5449.7 \\
299.0(5)& 3(1) & (13$^-$) $\rightarrow$ (11$^-$) &4247.5 & 3948.6 \\
330.7(3)& 43(11)& 7$^-$ $\rightarrow$ 6$^+$ &2144.9 & 1814.2 \\
360.1(5)& 4(2)& (14$^+$) $\rightarrow$ (13$^+$) &4534.8 &4174.7 \\
411.4(5)& 3(1) & (11$^-$) $\rightarrow$ 11$^-$ & 3948.6 & 3537.4 \\
458.0(4)& 18(4)& 11$^-$ $\rightarrow$ 9$^-$ &3537.4 &3079.4 \\
467.9(4)& 12(3)& 5$^-$ $\rightarrow$ 4$^+$ &2100.3 &1632.4 \\
501.5(5)& 4.5(10)& 8$^+$ $\rightarrow$ 7$^-$ &2646.4 &2144.9 \\
654.4(5)& 8(3)& &3531.0 &2876.6 \\
710.1(4)& 13(4)& (13$^-$) $\rightarrow$ 11$^-$ &4247.5 & 3537.4 \\
717.7(4)& 19()& (12$^+$) $\rightarrow$ 10$^+$ &3382.2 & 2664.5 \\
731.7(5)& 9(3)& $\rightarrow$ 7$^-$ &2876.6 & 2144.9 \\
792.5(5)& 8(3)& (13$^+$) $\rightarrow$ (12$^+$) &4174.7 & 3382.2 \\
793.2(3)& 74(15)& 4$^+$ $\rightarrow$ 2$^+$ &1632.4 & 839.2 \\
832.7(3)& 25(5)& 8$^+$ $\rightarrow$ 6$^+$ & 2646.4 & 1814.2 \\
839.2(3)& 100 & 2$^+$ $\rightarrow$ 0$^+$ &839.2& 0.0 \\
865.2(5)& 7(3)& (13$^-$) $\rightarrow$ (12$^+$) &4247.5 &3382.2 \\
869.3(6)& 2(1) & (11$^-$) $\rightarrow$ 9$^-$ & 3948.6 & 3079.4 \\
872.9(5)& 6(2)& 11$^-$ $\rightarrow$ 10$^+$ &3537.4& 2664.5 \\
899(1)& 2(1)& (14$^+$) $\rightarrow$ (12$^+$) &4281.2 & 3382.2 \\
934.5(4)& 24(6)& 9$^-$ $\rightarrow$ 7$^-$ &3079.4 & 2144.9 \\
1076.0(5)& 6(2)& (16$^-$) $\rightarrow$ (15$^-$) & 5449.7& 4373.7\\
1329.1(6)& 2.5(12)& (17$^-$) $\rightarrow$ (15$^-$) &5702.6 & 4373.7\\
\hline
\end{tabular}
\end{center}
$^{(a)}$ The number in parentheses is the error in the least significant digit shown.\\
$^{(b)}$ The relative intensities are normalized to $I_\gamma(839) = 100$.\\
\end{table}
\subsection{Study of the odd-$A$ $^{125-131}$Te isotopes}
\subsubsection{Level scheme of $^{125}$Te\label{te125}}
Medium-spin states of $^{125}$Te had been investigated using the
$^{124}$Sn($\alpha$, 3n$\gamma$) reaction and in-beam techniques
(excitation functions, $\gamma-\gamma$ coincidences and $\gamma$-ray angular
distributions)~\cite{ke72c}. This study led to the
identification of three sets of levels, (i) three levels built on
the 3/2$^+_1$ state at 35.5~keV, (ii) three levels built on
the 11/2$^-_1$ state at 144.8~keV, (iii) two states decaying to both
structures. Thus the level scheme extended up to 2.57~MeV excitation energy
and a maximum spin value of (23/2). Moreover some $\gamma$ lines, assigned to
$^{125}$Te because of their excitation function, were not placed in the
published level scheme.
The 601-555-378 triple coincidence is observed in our data set, meaning that
the three levels built on the 3/2$^+_1$ state at 35.5~keV [set (i)] are populated in the
fusion-fission reactions. No other $\gamma$ lines have been detected in coincidence
with these three transitions, thus we do not confirm the existence of the 805- and
195-keV transitions, which had been placed above this structure only because they
fit the energy differences between states
established from coincidence relationships~\cite{ke72c}. Hence the two highest-spin
states [set (iii)] only decay to the levels of set (ii).
Thanks to all the mutual $\gamma-\gamma-\gamma$ coincidences of our two data
sets, we have extended the level scheme of $^{125}$Te up to 5453-keV excitation
energy (see Fig.~\ref{schema125}).
\begin{figure}[!ht]
\begin{center}
\includegraphics*[width=8.5cm]{f11_125Te_color.ps}
\caption{(Color online) Level scheme of $^{125}$Te deduced in the present work.
The colored levels are new. The width of
the arrows is proportional to the relative intensity of the $\gamma$ rays.
The energies of the 3/2$^+_1$ state and of the isomeric 11/2$^-$
state are from Ref.~\cite{nndc}.
}
\label{schema125}
\end{center}
\end{figure}
An example of doubly-gated coincidence spectra
showing the transitions deexciting the new states of the left part of the
level scheme is given in Fig.~\ref{spectre125}. This spectrum demonstrates
that the 430-keV transition is a doublet, in self-coincidence.
\begin{figure}[!ht]
\begin{center}
\includegraphics*[width=7.5cm]{f12_spectre_125Te.eps}
\caption{(Color online) Coincidence spectrum double-gated on the 659- and
430-keV transitions of
$^{125}$Te, built from the $^{12}$C + $^{238}$U data set.
The $\gamma$ ray emitted by its main Pd complementary fragment is labeled.
The peaks marked with a star are contaminants from
$^{127}$I, as its high-spin level scheme
displays two transitions close in energy to those used to build the present
spectrum (the transition emitted
by its main Rh complementary fragment is marked with two stars).}
\label{spectre125}
\end{center}
\end{figure}
Moreover, our level scheme includes another doublet at 696 and 697~keV,
the latter $\gamma$ line being located between the 350- and the 703-keV
transitions. Noteworthy is the fact that this doublet was not suspected in
the previous work~\cite{ke72c}, thus the 703-keV transition had been placed incorrectly.
The statistics of our $^{125}$Te data being too low to perform
$\gamma-\gamma$ angular correlation analyses, the
spin assignments given in Fig.~\ref{schema125} come from the $\gamma$-ray
angular distribution results of Ref.~\cite{ke72c}, namely, the 378-,
555-, 601-, 696-, 659-,
873- and 762-keV transitions are quadrupole ones with $\Delta I=2$, and the
350-, 523- and 717-keV transitions are dipole ones with $\Delta I=1$.
That leads to the spin and parity values given without parentheses in
Fig.~\ref{schema125}. The other values have been chosen by using the same
arguments as in the preceding sections.
We have gathered in Table~\ref{gammas_te125} the properties of all the transitions
assigned to $^{125}$Te from this work.
\begin{table}[!ht]
\begin{center}
\caption{Properties of the transitions assigned to $^{125}$Te observed in this
work. The energies of the 3/2$^+_1$ state and of the isomeric 11/2$^-$
state are from Ref.~\cite{nndc}.}\label{gammas_te125}
\begin{tabular}{rrccc}
\hline
E$_\gamma^{(a)}$(keV)& I$_\gamma^{(a),(b)}$& J$_i^\pi \rightarrow$J$_f^\pi$ &E$_i$&E$_f$ \\
\hline
349.9(3)& 91(14)& 21/2$^-$ $\rightarrow$ 19/2$^-$ &1850.0 & 1500.1\\
377.7(4)& 26(6)& (15/2$^+$) $\rightarrow$ (11/2$^+$) & 1568.5& 1190.8\\
430.3(4)& 10(3)& (33/2$^-$) $\rightarrow$ (31/2$^-$) & 4484.0& 4053.7\\
430.3(5)& 5(2)& (37/2$^-$) $\rightarrow$ (35/2$^-$) & 5452.7& 5022.4\\
523.4(4)& 27(5)& 23/2$^-$ $\rightarrow$ 21/2$^-$ & 2373.4& 1850.0\\
538.4(5)& 7(2)& (35/2$^-$) $\rightarrow$ (33/2$^-$) & 5022.4& 4484.0\\
554.7(4)& $>$26& (11/2$^+$) $\rightarrow$ 7/2$^+$$^{(c)}$ & 1190.8& 636.1\\
600.6(4)& $>$26& 7/2$^+$$^{(c)}$ $\rightarrow$ 3/2$^+$ & 636.1&35.5\\
659.3(3)& $>$100 & 19/2$^-$ $\rightarrow$ 15/2$^-$ &1500.1 &840.8 \\
696.0(3)& $>$100 & 15/2$^-$ $\rightarrow$ 11/2$^-$ &840.8 &144.8\\
696.8(4)& 25(5)& (25/2$^-$) $\rightarrow$ 21/2$^-$ & 2546.8& 1850.0\\
702.7(4)& 15(4)& (27/2$^-$) $\rightarrow$ (25/2$^-$) &3249.7 &2546.8 \\
717.4(4)& 34(7)& 23/2$^{(+)}$ $\rightarrow$ 21/2$^-$ & 2567.4&1850.0 \\
731.3(5)& 9(3)& (31/2$^+$) $\rightarrow$ 27/2$^{(+)}$ & 4060.6& 3249.7\\
761.9(4)& 27(5)& 27/2$^{(+)}$ $\rightarrow$ 23/2$^{(+)}$ & 3329.3& 2567.4\\
804.0(4)& 14(3)& (31/2$^-$) $\rightarrow$ (27/2$^-$) & 4053.7& 3680.0\\
873.2(5)& 9(3)& 23/2$^-$ $\rightarrow$ 19/2$^-$ & 2373.4& 1500.1\\
876.4(4)& 14(4)& (27/2$^-$) $\rightarrow$ 23/2$^-$ & 3249.7& 2373.4\\
975.3(5)& 6(2)& (29/2$^-$) $\rightarrow$ (25/2$^-$) & 3912.6& 2937.3\\
1087.3(5)& 8(3)& (25/2$^-$) $\rightarrow$ 21/2$^-$ &2937.3 &1850.0 \\
\hline
\end{tabular}
\end{center}
$^{(a)}$ The number in parentheses is the error in the least significant digit shown.\\
$^{(b)}$ The relative intensities are normalized to the sum of the
populations of the 19/2$^-$ state, $I_\gamma(350)+ I_\gamma(873)= 100$.\\
$^{(c)}$ Spin value from Ref.~\cite{nds125}.\\
\end{table}
\subsubsection{Level scheme of $^{127}$Te\label{te127}}
The $\beta$ decay of $^{127}$Sb was studied many years ago~\cite{nndc},
leading to the identification of many states of $^{127}$Te with an
excitation energy below 1.4~MeV and $I$ values around 7/2, which is the spin
value of the ground state of $^{127}$Sb. Among them, two states deserve to be
mentioned as they are expected to be populated in reactions induced by heavy ions,
the 7/2$^+$ state built on the 3/2$^+$ ground state as well as the 11/2$^-$ state
from the promotion of the odd neutron to the $\nu h_{11/2}$ subshell. Indeed
these two states were observed in deep-inelastic reactions, and new structures
were proposed above them~\cite{zh98}. In that work, two new transitions were
assigned to the structure built on the 3/2$^+$ ground state and several states
were discovered at the top of the structure built on the 11/2$^-$ state.
Regarding the structure built on the 3/2$^+$ ground state, we have not observed the
685-668-263 triple coincidence proposed in the latter work (see Table 2
of~\cite{zh98}). On the other hand, the 685-keV transition being detected in coincidence
with transitions emitted by Sr fragments in the $^{18}$O + $^{208}$Pb reaction
and by Pd fragments in the $^{12}$C + $^{238}$U reaction, we have looked for
the other members of the cascade built on the 3/2$^+$ ground state. As an
example, the coincidence spectrum double-gated on the 685-keV
transition of $^{127}$Te and the 837-keV transition of $^{94}$Sr,
built from the $^{18}$O + $^{208}$Pb data set, shows two new transitions at 604
and 381 keV [see Fig.~\ref{spectre127}(a)], which do not belong to $^{94}$Sr.
Second, the coincidence spectrum double-gated on the 685- and 604-keV transitions of
$^{127}$Te confirms the existence of the 381-keV $\gamma$ line, as well as
the correlations with the Sr complementary fragments.
Similar results are obtained using the $^{12}$C + $^{238}$U data set, namely
the 685-604-381 triple coincidence of $^{127}$Te and the correlations with the
transitions emitted by the Pd complementary fragments.
\begin{figure}[!ht]
\begin{center}
\includegraphics*[width=7.8cm]{f13_bande3demi_127Te.eps}
\caption{(a) Coincidence spectrum double-gated on the 685-keV
transition of $^{127}$Te and the 837-keV transition of $^{94}$Sr,
built from the $^{18}$O + $^{208}$Pb data set.
The $\gamma$ rays known to be emitted by $^{94}$Sr are labeled
by Sr. The peak at 374 keV marked with a star is a contaminant belonging
to $^{110}$Pd, where the 374-838-688 triple coincidence occurs.
(b) Coincidence spectrum double-gated on the 685- and 604-keV transitions of
$^{127}$Te; the $\gamma$ rays emitted by its Sr complementary fragments are
labeled.
}
\label{spectre127}
\end{center}
\end{figure}
All the states belonging to the structure built on the 11/2$^-$
state~\cite{zh98} have been confirmed by the analyses of both data sets of
the present work.
In total, nine new states have been identified at higher energy,
extending the level scheme up to 4.82~MeV excitation energy
(see Fig.~\ref{schema127}).
\begin{figure}[!ht]
\begin{center}
\includegraphics*[width=8.5cm]{f14_127Te_color.ps}
\caption{(Color online) Level scheme of $^{127}$Te deduced in the present work.
The colored levels are new. The width of
the arrows is proportional to the relative intensity of the $\gamma$ rays.
The energy of the long-lived isomeric 11/2$^-$ state is from Ref.~\cite{nndc}.
}
\label{schema127}
\end{center}
\end{figure}
Angular correlations of successive $\gamma$ rays have been extracted
for the most intense transitions of $^{127}$Te. The experimental
results are given in Table~\ref{correl_127Te}.
\begin{table}[!ht]
\begin{center}
\caption{Coincidence rates between the low-lying $\gamma$ rays of $^{127}$Te
as a function of their relative angle of detection, normalized to
the ones obtained around 75$^\circ$.
}
\label{correl_127Te}
\begin{tabular}{cccc}
\hline
E$_\gamma$-E$_\gamma$&R(22$^\circ)^{(a)}$&R(46$^\circ)^{(a)}$ &R(75$^\circ$)\\
\hline
358~-~491 &1.06(4) &1.05(4) &1.00\\
358~-~678 &0.95(4) &0.96(4) &1.00\\
&&&\\
392~-~458 &1.08(4) &1.03(3) &1.00\\
392~-~678 &0.80(5) &0.94(4) &1.00\\
392~-~697 &0.80(4) &0.95(4) &1.00\\
&&&\\
678~-~458 &0.95(4) &0.95(4) &1.00\\
678~-~591 &1.14(5) &1.04(4) &1.00\\
678~-~953 &1.14(5) &1.07(3) &1.00\\
&&&\\
816~-~392 & 0.90(5) &0.95(5) &1.00\\
816~-~458 & 0.93(3) &0.96(4) &1.00\\
816~-~697 & 1.2(1) & 1.07(6) &1.00\\
&&&\\
953~-~591 &1.11(5) &1.07(4) &1.00\\
\hline
\end{tabular}
\end{center}
$^{(a)}$ The number in parentheses is the error in the least significant digit shown.\\
\end{table}
They show that the 697-, 678-, 953-keV transitions, as well as the 591- and 816-keV ones,
are quadrupole, while the 358-, 392-, 458-, and 491-keV transitions are dipole.
This defines the
spin values of several excited states, which are given in Fig.~\ref{schema127}.
The spin assignments of the higher-lying states, given in parentheses, are based on
the same arguments as those used in the preceding sections.
We have gathered in Table~\ref{gammas_te127} the properties of all the
transitions assigned to $^{127}$Te from this work.
\begin{table}[!ht]
\begin{center}
\caption{Properties of the transitions assigned to $^{127}$Te observed in this
work. The energy of the isomeric 11/2$^-$
state is from Ref.~\cite{nndc}.}\label{gammas_te127}
\begin{tabular}{rrccc}
\hline
E$_\gamma^{(a)}$(keV)& I$_\gamma^{(a),(b)}$& J$_i^\pi \rightarrow$J$_f^\pi$ &E$_i$&E$_f$ \\
\hline
337.8(3)& 9(3)& (31/2$^+$) $\rightarrow$ (29/2$^+$) &4029.7 &3691.9\\
358.2(3)& 30(6)& 23/2$^{(+)}$ $\rightarrow$ 21/2$^-$ &2313.2 &1954.9\\
381.0(5)& 4(2)& (15/2$^+$) $\rightarrow$ (11/2$^+$) &1669.8 &1288.8\\
392.0(3)& 31(6)& 21/2$^-$ $\rightarrow$ 19/2$^-$ &1855.7 &1463.7 \\
394.1(5)& 2(1)& (35/2$^-$) $\rightarrow$ (33/2$^-$) &4816.1 &4422.1\\
410.3(3)& 10(3)& 21/2$^-$ $\rightarrow$ (19/2$^-$) &1954.9 &1544.6\\
457.6(3)& 18(4)& 23/2$^{(+)}$ $\rightarrow$ 21/2$^-$ &2313.2 &1855.7\\
491.3(3)& 17(4)& 21/2$^-$ $\rightarrow$ 19/2$^-$ &1954.9 &1463.7\\
560.5(3)& 10(3)& 23/2$^-$ $\rightarrow$ 21/2$^-$ &2416.3 &1855.7\\
562.3(3)& 15(4)& (29/2$^+$) $\rightarrow$ 27/2$^{(+)}$ &3691.9 &3129.6\\
590.6(3)& 20(5)& 27/2$^-$ $\rightarrow$ 23/2$^-$ &3006.9 &2416.3\\
599.2(5)& 3.6(14)& (33/2$^-$) $\rightarrow$ (31/2$^-$) &4422.0 &3822.6\\
604.2(6)& $>$4& (11/2$^+$) $\rightarrow$ 7/2$^+$$^{(c)}$ &1288.8 &684.6\\
629.4(6)& 2.8(14)& (33/2$^-$) $\rightarrow$ (29/2$^-$) &4422.0 &3792.7\\
678.1(2)& 87(13)& 19/2$^-$ $\rightarrow$ 15/2$^-$ &1463.7 &785.6\\
684.6(6)& $>$4& 7/2$^+$$^{(c)}$ $\rightarrow$ 3/2$^+$ &684.6 &0\\
697.4(2)& $>$100& 15/2$^-$ $\rightarrow$ 11/2$^-$ &785.6 & 88.2\\
759.0(4)& 13(3)& (19/2$^-$) $\rightarrow$ 15/2$^-$ &1544.6 &785.6\\
785.8(5)& 5.7(17)& (29/2$^-$) $\rightarrow$ 27/2$^-$ &3792.7 &3006.9\\
815.7(5)& 8(3)& (31/2$^-$) $\rightarrow$ 27/2$^-$ &3822.6 &3006.9\\
816.4(4)& 25(5)& 27/2$^{(+)}$ $\rightarrow$ 23/2$^{(+)}$ &3129.6 &2313.3\\
889.3(7)& 1.0(5)& (31/2$^-$) $\rightarrow$ (27/2$^-$) &4240.9 &3351.6\\
935.2(6)& 3.2(15)& (27/2$^-$) $\rightarrow$ 23/2$^-$ &3351.6 &2416.3\\
952.7(4)& 24(5)& 23/2$^-$ $\rightarrow$ 19/2$^-$ &2416.3 &1463.7 \\
993.8(7)& 1.4(7)& (35/2$^-$) $\rightarrow$ (31/2$^-$) &4816.1 &3822.6\\
\hline
\end{tabular}
\end{center}
$^{(a)}$ The number in parentheses is the error in the least significant digit shown.\\
$^{(b)}$ The relative intensities are normalized to the sum of the
populations of the 15/2$^-$ state, $I_\gamma(678)+ I_\gamma(759)= 100$.\\
$^{(c)}$ Spin value from Ref.~\cite{nds127}.\\
\end{table}
\subsubsection{Level scheme of $^{129}$Te\label{te129}}
The first results on the excited states of $^{129}$Te were obtained from the
$\beta$-decay studies of $^{129}$Sb, in which an isomeric state was discovered many
years ago~\cite{hu82}. Several years later, its spin value,
$I^\pi=19/2^-$, was established thanks to the $M4$ internal transition~\cite{st87}.
Given that the $\beta$ branching of the $^{129}$Sb$^m$
decay is 85\%, several high-spin states of $^{129}$Te populated by this decay were
finally unambiguously identified by using results of another measurement, in which
the $^{129}$Te nuclei were produced in deep-inelastic $^{130}$Te + $^{64}$Ni
reactions~\cite{zh95}. Then the high-spin level scheme of $^{129}$Te was defined up
to 2.1~MeV excitation energy and spin value of 23/2$^+$. Later on, a more detailed
analysis of the deep-inelastic measurements led to the assignment of two new
transitions, which extended the level scheme to 3~MeV excitation energy~\cite{zh98}.
All the yrast states of $^{129}$Te have been confirmed by the analyses of both data
sets of the present work. Moreover the spectra doubly-gated on the known transitions
allowed us to identify many new $\gamma$ lines which extend the level scheme up to
4.825~MeV. For instance, the spectrum of Fig.~\ref{spectre129} shows the new
transitions located above the 2136-keV state.
\begin{figure}[!ht]
\begin{center}
\includegraphics*[width=8cm]{f15_G251et759_129Te.eps}
\caption{Coincidence spectrum double-gated on the 759- and
251-keV transitions of
$^{129}$Te, built from the $^{12}$C + $^{238}$U data set.
The $\gamma$ rays emitted by its main Pd complementary fragments are labeled.
The peaks marked with a star are identified
contaminants.}
\label{spectre129}
\end{center}
\end{figure}
The level scheme built from these analyses is shown in Fig.~\ref{schema129}.
Its high-energy part comprises two independent structures, one built on the 2136-keV state and the
other on the 2510-keV state, which have different parities according to the previous
work~\cite{zh98}.
\begin{figure}[!ht]
\begin{center}
\includegraphics*[width=8.5cm]{f16_129Te_color.ps}
\caption{(Color online) Level scheme of $^{129}$Te deduced in the present work.
The colored levels are new. The width of
the arrows is proportional to the relative intensity of the $\gamma$ rays.
The energy of the long-lived isomeric 11/2$^-$ state is from Ref.~\cite{nndc} and the half-life of
the 2136-keV isomeric state from Ref.~\cite{zh95}.
}
\label{schema129}
\end{center}
\end{figure}
In addition, the 812-703-211 triple coincidence proposed in the
previous work, which defines the structure built on the 3/2$^+$ ground state, is observed
in both data sets of the present work. Even though the number of counts associated
with these $\gamma$ lines is too low to observe their coincidences with the
transitions emitted by the complementary fragments, we adopt the same assignment as
previously (see Fig.~\ref{schema129}).
Angular correlations of successive $\gamma$ rays have been extracted
for the most intense transitions of $^{129}$Te. The experimental
results are given in Table~\ref{correl_129Te}.
\begin{table}[!ht]
\begin{center}
\caption{Coincidence rates between the low-lying $\gamma$ rays of $^{129}$Te
as a function of their relative angle of detection, normalized to
the ones obtained around 75$^\circ$.
}
\label{correl_129Te}
\begin{tabular}{cccc}
\hline
E$_\gamma$-E$_\gamma$&R(22$^\circ)^{(a)}$&R(46$^\circ)^{(a)}$ &R(75$^\circ$)\\
\hline
759~-~330 &1.10(5) &1.06(3) &1.00\\
759~-~433 &0.87(8) &0.96(4) &1.00\\
759~-~673 &0.8(1) &0.90(7) &1.00\\
759~-~914 &1.17(8) &1.08(4) &1.00\\
759~-~988 &1.12(6) &1.04(3) &1.00\\
&&&\\
914~-~180 &0.91(5)&0.95(5)&1.00\\
914~-~251 &0.92(6)&0.89(8)&1.00\\
914~-~363 &0.8(1) &0.92(6)&1.00\\
914~-~658 &1.16(8)&1.05(4)&1.00\\
&&&\\
988~-~330 &1.15(9) &1.07(5) &1.00\\
\hline
\end{tabular}
\end{center}
$^{(a)}$ The number in parentheses is the error in the least significant digit shown.\\
\end{table}
They show that five transitions have a quadrupole character, the 759-, 658-, 988-, and
330-keV transitions located on the left side of the level scheme
(see Fig.~\ref{schema129}) and the 914-keV one located on the right side. In addition,
the four transitions linking the two parts (at 180, 251, 363 and 433 keV) are dipole.
Thus the spin values of most of the states located below 3513-keV excitation energy are
now determined. A positive parity is assigned to the 2136-keV state because of its
delayed decay, which can be explained only if the dipole 180- and 251-keV $\gamma$ rays
are $E1$~\cite{zh95}. The values of the reduced transition probabilities (given in Table~\ref{BE1})
will be discussed in Sect.~\ref{discuss_odd}. The spin values given in parentheses
have been chosen by using the same arguments as in the preceding sections.
We have gathered in Table~\ref{gammas_te129} the properties of all the
transitions assigned to $^{129}$Te from this work.
\begin{table}[!ht]
\begin{center}
\caption{Properties of the transitions assigned to $^{129}$Te observed in this
work. The energy of the isomeric 11/2$^-$
state is from Ref.~\cite{nndc}.}\label{gammas_te129}
\begin{tabular}{rrccc}
\hline
E$_\gamma^{(a)}$(keV)& I$_\gamma^{(a),(b)}$& J$_i^\pi \rightarrow$J$_f^\pi$ &E$_i$&E$_f$ \\
\hline
130.9(5)& 4(2)& (19/2$^-$) $\rightarrow$ 19/2$^-$ &1653.5 &1522.4\\
180.1(4)& 9.4(28)& 23/2$^+$ $\rightarrow$ 21/2$^-$ &2135.8 &1955.8\\
211.4(5) & 1.4(7)& (15/2$^+$) $\rightarrow$ (11/2$^+$) &1727.1 &1515.7\\
231.6(3)& 8(2)& 21/2$^-$ $\rightarrow$ (19/2$^-$) &1885.2 &1653.5\\
250.5(3)& 17(4)& 23/2$^+$ $\rightarrow$ 21/2$^-$ &2135.8 &1885.2\\
330.4(3)& 19(5)& 27/2$^-$ $\rightarrow$ 23/2$^-$ &2840.3 &2509.9\\
362.9(3)& 23(5)& 21/2$^-$ $\rightarrow$ 19/2$^-$ &1885.2 &1522.4\\
389.9(5)& 1.4(7)& (35/2$^-$) $\rightarrow$ (33/2$^-$) &4825.1 &4435.3\\
396.4(5)& 2.7(13)& (31/2$^+$) $\rightarrow$ (29/2$^+$) &4033.0 &3636.6\\
433.4(3)& 16(4)& 21/2$^-$ $\rightarrow$ 19/2$^-$ &1955.8 &1522.4\\
541.2(5)& 2(1)& (33/2$^+$) $\rightarrow$ (31/2$^+$) &4696.7 &4155.5\\
586.8(4)& 6.4(19)& (29/2$^+$) $\rightarrow$ 27/2$^+$ &3636.6 &3049.8\\
657.5(2)& 82(12)& 19/2$^-$ $\rightarrow$ 15/2$^-$ &1522.4 &864.9\\
672.6(4)& 10(3)& 29/2$^-$ $\rightarrow$ 27/2$^-$ &3512.9 &2840.3\\
703.3(6) & $>$1.4& (11/2$^+$) $\rightarrow$ (7/2$^+$)$^{(c)}$ &1515.7 &812.4\\
748.0(6)& 1.8(9)& $\rightarrow$ 23/2$^+$ &2883.8 &2135.8\\
759.4(2)& 100 &15/2$^-$ $\rightarrow$ 11/2$^-$ &864.9 &105.5\\
776.7(6)& 2(1)& (31/2$^-$) $\rightarrow$ 27/2$^-$ &3617.0 &2840.3\\
788.8(6)& 5(2)& (19/2$^-$) $\rightarrow$ 15/2$^-$ &1653.5 &864.9\\
812.4(6) & $>$1.4& (7/2$^+$)$^{(c)}$ $\rightarrow$ 3/2$^+$ &812.4 &0.0\\
914.1(4)& 13(3)& 27/2$^+$ $\rightarrow$ 23/2$^+$ &3049.8 &2135.8\\
922.4(6)& 2(1)& (33/2$^-$) $\rightarrow$ 29/2$^-$ &4435.3 &3512.9\\
987.5(4)& 22(4)& 23/2$^-$ $\rightarrow$ 19/2$^-$ &2509.9 &1522.4\\
1105.7(6)& 3.0(12)& (31/2$^+$) $\rightarrow$ 27/2$^+$ &4155.5 &3049.8\\
\hline
\end{tabular}
\end{center}
$^{(a)}$ The number in parentheses is the error in the least significant digit shown.\\
$^{(b)}$ The relative intensities are normalized to $I_\gamma(759) = 100$.\\
$^{(c)}$ Spin value from Ref.~\cite{nds129}.\\
\end{table}
\subsubsection{Level scheme of $^{131}$Te\label{te131}}
Very few high-spin levels were known in $^{131}$Te prior to this work.
In a first experiment, three transitions (at 833, 565 and 361~keV) were identified from deep-inelastic $^{130}$Te~+~$^{64}$Ni
reactions~\cite{zh98} and assigned, on the basis of systematics, as the
21/2$^-$ $\rightarrow$ 19/2$^-$ $\rightarrow$ 15/2$^-$ $\rightarrow$ 11/2$^-$ cascade.
Since these transitions were observed to decay slowly in the off-beam spectra
($T_{1/2} > 1\mu$s), the authors assumed that the 23/2$^+$ state lies just above
the 21/2$^-$ state and decays by a low-energy transition which is delayed.
In a second experiment performed at the Osiris mass separator, a very long-lived
isomeric state of $^{131}$Te ($T_{1/2}=93$~ms) was discovered following the thermal
fission of U isotopes and identified as the expected 23/2$^+$ state~\cite{fo98}. The
same three transitions were observed for its
decay. In addition, the conversion electron measurements showed that the 361-keV
transition, having a high multipolarity, is the delayed transition. On the basis
of the values of the reduced transition probabilities, the authors of
Ref.~\cite{fo98} chose the $E3$ multipolarity. It is worth mentioning, however, that
the $M2$ and $E3$ multipolarities lead to similar values of the $\alpha_K$ coefficient
in this case, so an
$M2$ character for the 361-keV transition cannot be excluded when only considering
the experimental results.
The 833-564-361 triple coincidence has also been observed in the two data sets of the
present work. No other transition is correlated to this cascade, in agreement with the
decay of an isomeric state, as proposed in the previous works. Moreover, since none of
these $\gamma$ lines is measured in the SAPhIR experiment, the half-life must be
much longer than $10~\mu$s.
On the other hand, the spectrum double-gated on the first two transitions exhibits
new transitions which do not belong to the complementary fragments.
Two examples of coincidence spectra double-gated on new
transitions of $^{131}$Te are shown in Fig.~\ref{spectre131}.
Thus a cascade of five new transitions is unambiguously assigned to $^{131}$Te, extending the
structure built on the 11/2$^-$ state up to 4688 keV (see Fig.~\ref{schema131}).
\begin{figure}[!ht]
\begin{center}
\includegraphics*[width=7cm]{f17_spectres_131Te.eps}
\caption{Coincidence spectra double-gated on new transitions of
$^{131}$Te, built from the $^{12}$C + $^{238}$U data set. (a) the gates are set on the
391- and 1104-keV transitions, (b) the gates are set on the 267- and 391-keV
transitions.}
\label{spectre131}
\end{center}
\end{figure}
\begin{figure}[!ht]
\begin{center}
\includegraphics*[width=6cm]{f18_131Te_color.ps}
\caption{(Color online) Level scheme of $^{131}$Te deduced in the present work.
The colored levels are new. The width of
the arrows is proportional to the relative intensity of the $\gamma$ rays.
The energy of the long-lived isomeric 11/2$^-$ state is from Ref.~\cite{nndc} and the half-life of
the 1941-keV isomeric state from Ref.~\cite{fo98}.
}
\label{schema131}
\end{center}
\end{figure}
Angular correlation measurements could not be performed, since the statistics of the $^{131}$Te
$\gamma$ rays are too low. Thus the spin and parity values (given in parentheses
in Fig.~\ref{schema131}) have been chosen by analogy with the level structures of the
lighter isotopes. The 1016- and 1580-keV states are assumed to be the 15/2$^-$ and 19/2$^-$
levels, respectively, and the long-lived isomeric state the 23/2$^+$ level.
Thus the 361-keV transition has an
$M2$ character, at variance with the choice made by the authors of Ref.~\cite{fo98}. The
very low value of the reduced transition probability of the isomeric transition,
$B(M2; 23/2^+ \rightarrow 19/2^-) = 2 \times 10^{-6}$~W.u., will be discussed in
Sect.~\ref{discuss_odd}.
We have gathered in Table~\ref{gammas_te131} the properties of all the
transitions assigned to $^{131}$Te from this work.
\begin{table}[!ht]
\begin{center}
\caption{Properties of the transitions assigned to $^{131}$Te observed in this
work. The energy of the isomeric 11/2$^-$
state is from Ref.~\cite{nndc}.}\label{gammas_te131}
\begin{tabular}{rrccc}
\hline
E$_\gamma^{(a)}$(keV)& I$_\gamma^{(a),(b)}$& 2J$_i^\pi \rightarrow$2J$_f^\pi$ &E$_i$&E$_f$ \\
\hline
266.7(5)& 7(3)& (35$^-$) $\rightarrow$ (33$^-$) &4688.0 &4421.3\\
330.2(5)& 12(5)& (33$^-$) $\rightarrow$ (29$^-$,31$^-$) &4421.3 &4091.1\\
361(1) & 18(6)& (23$^+$) $\rightarrow$ (19$^-$) &1941 &1579.9\\
390.7(5)& 25(5)& (27$^-$) $\rightarrow$ (23$^-$) &3074.8 &2684.1\\
564.3(4)& 100 & (19$^-$) $\rightarrow$ (15$^-$) &1579.9 &1015.6\\
833.3(4)& $>$100& (15$^-$) $\rightarrow$ 11$^-$ &1015.6 &182.3\\
1016.3(5)& 17(6)&(29$^-$,31$^-$)$\rightarrow$ (27$^-$) &4091.1 &3074.8\\
1104.2(5)& 36(9)& (23$^-$) $\rightarrow$ (19$^-$) &2684.1 &1579.9\\
\hline
\end{tabular}
\end{center}
$^{(a)}$ The number in parentheses is the error in the least significant digit shown.\\
$^{(b)}$ The relative intensities are normalized to $I_\gamma(564) = 100$.\\
\end{table}
\subsection{$^{132-136}$Te and observation of several singular partners
\label{te_lourds}}
As mentioned in the introduction of Sect.~\ref{results}, $^{132}$Te is located in
the high-$A$ tail of the Te fragment distribution
in both fusion-fission reactions used in the present work. Thus the $\gamma$ rays emitted
by its first excited states are barely observed and we could not find any new
cascade to be placed above its two long-lived isomeric states, $I^\pi=7^-$ and
10$^+$~\cite{ge01}.
However, in the C+U reaction, we have also identified $\gamma$ rays emitted by four other
isotopes with heavier masses, $^{133-136}$Te. The transitions deexciting
the $I^\pi=19/2^-$ isomeric states of $^{133}$Te and $^{135}$Te, as well as those of the
$I^\pi=6^+$ isomeric state of $^{134}$Te are clearly observed in the SAPhIR experiment.
Moreover, several $\gamma$ lines known to be located in the medium-spin part of the
$^{134,136}$Te level schemes~\cite{zh96,ko00} are seen in the spectrum gated by their
first two transitions in the Euroball experiment (see, for instance, the spectrum
of Fig.~\ref{spectre134}).
In addition, $\gamma$ rays emitted by $^{102,104}$Mo are
unambiguously observed in the doubly-gated spectrum of $^{134}$Te
(five transitions are labeled in Fig.~\ref{spectre134}).
In this case, the total
number of protons of the two partners is 94, instead of the $Z_{tot}=98$
obtained for $^{124-130}$Te, which are produced from the C+U complete fusion.
\begin{figure}[!ht]
\begin{center}
\includegraphics*[width=7cm]{f19_G1279et297_134Te.eps}
\caption{(Color online) Coincidence spectrum double-gated on the
first two transitions of $^{134}$Te, built from the $^{12}$C + $^{238}$U data set.
The $\gamma$ rays emitted by $^{134}$Te are written in red
and those by the $_{42}$Mo complementary fragments in black.
}
\label{spectre134}
\end{center}
\end{figure}
The spectra double-gated on the first transitions of many neighboring fission
fragments were then carefully analyzed in order to determine
whether a total number of protons lower than 98 is obtained for other pairs of partners.
The results are gathered in Table~\ref{partners}.
\begin{table}[!ht]
\begin{center}
\caption{List of singular partners observed in the $^{12}$C + $^{238}$U
reaction.}\label{partners}
\begin{tabular}{cccc}
\hline
fragment & partners & partners & partners \\
& set 1 & set 2 & set 3\\
\hline
&&&\\
$^{146}$Ce &$^{94-97}$Zr &$^{94}$Sr &\\
$^{144}$Ce &$^{94-97}$Zr &$^{94}$Sr &\\
$^{144}$Ba &$^{98}$Mo & &$^{94}$Sr\\
$^{142}$Ba &$^{100-102}$Mo &$^{96-98}$Zr &$^{96}$Sr\\
$^{140}$Xe & & &$^{98}$Zr\\
$^{138}$Xe &$^{104}$Ru &$^{102}$Mo &$^{98-100}$Zr\\
$^{134}$Te & & &$^{102,104}$Mo\\
&&&\\
$A_a+A_b$ &240-244 &238-240 &236-238\\
$Z_a+Z_b$ &98 &96 &94\\
\hline
\end{tabular}
\end{center}
\end{table}
While the $\gamma$ rays of $^{128-136}_{54}$Xe isotopes are only detected in
coincidence with those of $_{44}$Ru, the partners of $^{138}$Xe are numerous:
Zr, Mo, and Ru. This shows that the fissioning nuclei are $_{94}$Pu, $_{96}$Cm
and $_{98}$Cf, respectively. Similar results are found for several heavy $_{56}$Ba
and $_{58}$Ce isotopes.
In summary, many heavy-$A$ fragments, belonging to the region located
just above $^{132}$Sn, come from the fission of Pu and Cm isotopes. The Pu and Cm nuclei
are produced in particular exit channels of the C+U reaction,
$^{238}$U ($^{12}$C,$^{8}$Be) $^{242}$Pu* and
$^{238}$U ($^{12}$C,$^{4}$He) $^{246}$Cm*, i.e., incomplete fusion or transfer reactions.
Thanks to the identification of their
partners, the production of the most neutron-rich Te, Xe, and Ba isotopes is
unambiguously attributed to the fission of Pu isotopes and thus cannot be
misattributed to particular shell effects which would be
at work in the low-energy fission of the Cf isotopes produced in the C+U complete
fusion at 90-MeV bombarding energy.
\section{Discussion}\label{discuss}
\subsection{General features of the high-spin structures of the heavy Te isotopes}\label{general}
The high-spin structures of the heavy Te isotopes are expected to be more
intricate than those of the Sn isotopes since, in addition to the breaking of
neutron pairs observed in Sn isotopes~\cite{pi11,as12,lo08}, the breaking of the
proton pair has to be taken into account.
Table~\ref{spinmax} gives the various configurations expected in the heavy-$A$ Te
isotopes when the valence space includes the $\nu h_{11/2}$ and
$\nu d_{3/2}$ subshells, as well as the $\pi g_{7/2}$ and $\pi d_{5/2}$ ones.
Noteworthy is the fact that the increase of angular momentum along the
yrast line would involve the neutron-pair and
proton-pair breakings, either alternately or simultaneously.
\begin{table}[!ht]
\begin{center}
\caption{Various configurations expected in heavy-$A$ Te isotopes
with several broken pairs belonging to the subshells close to the Fermi
levels ($\nu h_{11/2}$, $\nu d_{3/2}$, $\pi g_{7/2}$, and $\pi d_{5/2}$).
The seniority, S, of each configuration is given in the fourth column.
The $I^\pi_{max}$ value corresponding to the breaking of one $\nu h_{11/2}$ pair
is written in bold.}\label{spinmax}
\begin{tabular}{llcc}
\hline
Neutron part& Proton part & $I^\pi_{max}$&S\\
\hline
&$(\pi g_{7/2})^2$ or $(\pi g_{7/2})^1(\pi d_{5/2})^1$& 6$^+$& 2\\
$(\nu h_{11/2})^2$& & {\bf 10$^+$} & 2 \\
$(\nu h_{11/2})^2$&$(\pi g_{7/2})^2$ or $(\pi g_{7/2})^1(\pi d_{5/2})^1$ & 16$^+$ & 4 \\
$(\nu h_{11/2})^4$& & 16$^+$ & 4 \\
$(\nu h_{11/2})^6$& & 18$^+$ & 6 \\
$(\nu h_{11/2})^1(\nu d_{3/2})^1$& & 7$^-$ & 2\\
$(\nu h_{11/2})^1(\nu d_{3/2})^1$&$(\pi g_{7/2})^2$ or $(\pi g_{7/2})^1(\pi d_{5/2})^1$& 13$^-$& 4\\
$(\nu h_{11/2})^3(\nu d_{3/2})^1$& &{\bf 15$^-$} & 4\\
$(\nu h_{11/2})^3(\nu d_{3/2})^1$&$(\pi g_{7/2})^2$ or $(\pi g_{7/2})^1(\pi d_{5/2})^1$&21$^-$& 6\\
\hline
$(\nu h_{11/2})^1$ &$(\pi g_{7/2})^2$ or $(\pi g_{7/2})^1(\pi d_{5/2})^1$ & 23/2$^-$ & 3\\
$(\nu h_{11/2})^3$ & & {\bf 27/2$^-$} & 3\\
$(\nu h_{11/2})^5$ & & 35/2$^-$ & 5\\
$(\nu h_{11/2})^3$ &$(\pi g_{7/2})^2$ or $(\pi g_{7/2})^1(\pi d_{5/2})^1$ & 39/2$^-$ & 5\\
$(\nu d_{3/2})^1$&$(\pi g_{7/2})^2$ or $(\pi g_{7/2})^1(\pi d_{5/2})^1$ & 15/2$^+$& 3\\
$(\nu h_{11/2})^2(\nu d_{3/2})^1$& & {\bf 23/2$^+$}&3 \\
$(\nu h_{11/2})^2(\nu d_{3/2})^1$&$(\pi g_{7/2})^2$ or $(\pi g_{7/2})^1(\pi d_{5/2})^1$ & 35/2$^+$& 5\\
$(\nu h_{11/2})^4(\nu d_{3/2})^1$& & 35/2$^+$& 5\\
$(\nu h_{11/2})^4(\nu d_{3/2})^1$&$(\pi g_{7/2})^2$ or $(\pi g_{7/2})^1(\pi d_{5/2})^1$ &47/2$^+$& 7\\
\hline
\end{tabular}
\end{center}
\end{table}
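The $I^\pi_{max}$ values in Table~\ref{spinmax} follow from the stretched coupling allowed by the Pauli principle: in a $j^n$ configuration, the $n$ identical nucleons occupy the magnetic substates $m=j, j-1, \dots, j-n+1$. As a sketch (standard textbook result, added here only as a check of the table):

```latex
\[
  I_{max}(j^{\,n}) \;=\; \sum_{k=0}^{n-1}\,(j-k) \;=\; nj-\frac{n(n-1)}{2}\, .
\]
% Check for the (nu h_{11/2})^n configurations, j = 11/2:
%   n = 2:  11/2 + 9/2                = 10
%   n = 3:  11/2 + 9/2 + 7/2          = 27/2
%   n = 4:  11/2 + 9/2 + 7/2 + 5/2    = 16
%   n = 6:  11/2 + 9/2 + ... + 1/2    = 18
% Mixed configurations add the aligned spins of each group, e.g.
% (nu h_{11/2})^3 (nu d_{3/2})^1: 27/2 + 3/2 = 15.
```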
In addition, it is well known that most of the even-$A$ Te isotopes have a
vibrational behavior at low spin.
With regard to the Te isotopes studied in this work, the one-phonon energy lies between
600 and 840~keV. Such an energy may compete favorably with those involved in the
successive breakings of nucleon pairs given in Table~\ref{spinmax}. Thus
the yrast lines likely comprise several fragments of vibrational bands (cascades
of two or three transitions with $E_\gamma \sim 700$~keV) lying just
above the fully aligned states arising from the pair breakings.
\subsubsection{Evolution of states in the even-$A$ isotopes}\label{discuss_even}
The systematics of the high-spin states in the $^{122-134}$Te even-$A$ isotopes is
shown in Figs.~\ref{systematic_evenTe}(a) and~\ref{systematic_evenTe}(b).
Regarding the positive-parity states,
the breaking of the proton pair shows up when
$N > 76$ [see the closeness of the 6$^+$ and 4$^+$ states, drawn with diamonds in
Fig.~\ref{systematic_evenTe}(a)] and that
of the first neutron pair when $N > 74$ (see the closeness of the 8$^+$ and 10$^+$
states, drawn with circles).
From $N=70$ to 78, a cascade of two transitions of similar energies
defines two states with $I^\pi$ = 12$^+$ and 14$^+$ which can be interpreted as a
vibrational motion
above the 10$^+$ state [see the squares in Fig.~\ref{systematic_evenTe}(a)].
In addition, a 16$^+$ state has been observed in $^{126,128}$Te which only decays
towards the quasi-vibrational 14$^+$ level (see Figs.~\ref{schema126}
and~\ref{schema128}).
Around 5~MeV, the group of several states with spin 13$^+$,
14$^+$, 15$^+$, and 16$^+$ [see the empty diamonds in Fig.~\ref{systematic_evenTe}(a)]
likely involves the breaking of one
neutron pair (with $I_{max}=10^+$) and of one proton pair (with $I_{max}=6^+$).
\begin{figure}[!ht]
\begin{center}
\includegraphics*[width=8.5cm]{f20_syst_even_N.eps}
\caption{Evolution of the high-spin states of the even-$A$
Te isotopes as a function of the neutron number (this work and Ref.~\cite{pa96} for
$^{122}$Te), (a) positive-parity states, (b) negative-parity states.
}
\label{systematic_evenTe}
\end{center}
\end{figure}
The evolution of the negative-parity states is shown in
Fig.~\ref{systematic_evenTe}(b). Above the well-known 7$^-$ state having the
$(\nu h_{11/2})^1(\nu d_{3/2})^1$ configuration, we expect the breaking of the proton
pair (giving $I_{max}=13^-$, see Table~\ref{spinmax}), as well as the breaking of another
neutron pair (giving $I_{max}=15^-$, see Table~\ref{spinmax}). Such configurations can be assigned to
the experimental states with $I^\pi=9^-$, 11$^-$, 12$^-$, 13$^-$, and 15$^-$. Because of the closeness
of the 13$^-$ and 15$^-$ states in $^{128}$Te, one could assume that both of them have
a $(\nu h_{11/2})^3(\nu d_{3/2})^1$ configuration, as observed in
$^{120-128}$Sn~\cite{as12,pi11}.
In summary, the closeness of the 8$^+$ and 10$^+$ states on the one hand and that
of the 13$^-$ and 15$^-$ states on the other hand, for $N > 76$, could be interpreted in terms of
pure neutron configurations.
Such an assumption can be tested using the values of the transition probabilities, since the
$B(E2)$ reduced transition probabilities show large
values as soon as proton components are involved. Table~\ref{BE2} presents the characteristics of the $E2$
decay of the 10$^+$ state of $^{126-132}$Te and the $B(E2)$ values are drawn in Fig.~\ref{BE2_Te_Sn}(a) in
comparison with those obtained in Sn isotopes.
\begin{table}[!ht]
\begin{center}
\caption{Properties of the 10$^+$ isomeric states of $^{126-132}$Te.}
\label{BE2}
\begin{tabular}{cccccc}
\hline
& $E_i$&$E_\gamma$& T$_{1/2}^{(a)}$ &$B(E2)^{(a)}$ &$B(E2)^{(a)}$ \\
& keV & keV & & $e^2fm^4$ & W.u.\\
\hline
$^{126}$Te &2972 &208.1 &10.7(9) ns $^{(b)}$ & 120(10) & 3.2(3)\\
$^{128}$Te &2789 &101.3 &236(20) ns$^{(c)}$ & 85(7) & 2.2(2)\\
$^{130}$Te &2664 &18.5(5) &1.90(8) $\mu$s$^{(d)}$ & 85(4) & 2.2(1)\\
$^{132}$Te &2723 &22(1)$^{(d)}$ &3.70(9) $\mu$s$^{(d)}$ & 42(1) & 1.05(3)\\
\hline
\end{tabular}
\end{center}
$^{(a)}$ The number in parentheses is the error in the least significant digit shown.\\
$^{(b)}$ From Ref.~\cite{nndc}.\\
$^{(c)}$ This work.\\
$^{(d)}$ From Ref.~\cite{ge01}.\\
\end{table}
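The conversion between the two $B(E2)$ columns of Table~\ref{BE2} uses the Weisskopf single-particle estimate (standard textbook expression, quoted here only as a sketch of the arithmetic):

```latex
\[
  B_W(E2) \;=\; 5.940\times10^{-2}\,A^{4/3}\ e^2\mathrm{fm}^4 .
\]
% Example for ^{128}Te (A = 128): B_W(E2) ~ 38 e^2 fm^4, so that
% B(E2) = 85(7) e^2 fm^4 corresponds to 85/38 ~ 2.2 W.u.
```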
The behavior of the Sn isotopes is a textbook example of the effect of the gradual filling
of a $j$ subshell on the $B(E2;I_{max} \rightarrow I_{max}-2 )$ value for a $j^2$ configuration.
A very low value is obtained at mid-shell, namely at $N=73$. The Te isotopes do
not follow the same trend; their much higher $B(E2;10^+ \rightarrow 8^+)$ values indicate that the wave functions of both the
10$^+$ and 8$^+$ states do have proton components~\cite{ge01}. The slow decrease with
increasing $N$ indicates that
the proton component diminishes when approaching the neutron magic number.
\begin{figure}[!ht]
\begin{center}
\includegraphics*[width=8.5cm]{f21_BE2_Te_Sn.eps}
\caption{(Color online) Comparison of the B(E2) reduced probabilities measured in Te and
Sn isotopes, (a) for the $10^+ \rightarrow 8^+$ transitions, (b) for
the $15^- \rightarrow 13^-$ transitions. Experimental data are from this work and
Refs.~\cite{ge01,as12,nndc}.
}
\label{BE2_Te_Sn}
\end{center}
\end{figure}
The reduced probabilities of the $15^- \rightarrow 13^-$ transitions are drawn in
Fig.~\ref{BE2_Te_Sn}(b). It is worth recalling that the $B(E2;15^- \rightarrow 13^-)$ and
$B(E2;10^+ \rightarrow 8^+)$ values are explicitly linked provided that the states have
the $j^3$ and $j^2$ configurations, respectively
(this is observed in the Sn isotopes, see Fig.~20 of Ref.~\cite{as12}). The
value measured in $^{130}$Te is much higher than that of $^{128}$Sn, implying
that the $^{130}$Te states do have proton components.
\subsubsection{Evolution of states in the odd-$A$ isotopes}\label{discuss_odd}
The evolution of the high-spin states of the odd-$A$ Te isotopes is drawn in
Figs.~\ref{systematic_oddTe}(a) and~\ref{systematic_oddTe}(b). The 23/2$^-$
level is likely the fully-aligned $(\nu h_{11/2})^1$ $(\pi g_{7/2})^2$ state.
Its excitation
energy above the 11/2$^-$ level increases as a function of neutron number.
Such an effect is due to the residual interaction between the odd neutron and the two
protons. At $N=81$, the intensity of the interaction between the neutron
hole and the proton particles is maximum, while the interaction strongly decreases
at mid-shell, i.e., for $N \sim 71$. This behavior has already been pointed out in
$_{49}$In for the evolution of the high-spin states of the
$(\nu h_{11/2})^2$ $(\pi g_{9/2})^1$ configuration, where the proton state is a
hole~\cite{po04}. Using the empirical two-body residual interactions, the evolution
of the three-quasiparticle multiplet was computed as a function of the filling
of the neutron orbit. Starting from mid-shell where all the states are close to
each other, the
decrease of the $\nu h_{11/2}$ occupation probability (in order to obtain a
particle-hole configuration, such as the one of Te isotopes discussed here) leads to
an increase of the energies of the highest spin states, as
observed experimentally~\cite{po04,lu02}.
\begin{figure}[!ht]
\begin{center}
\includegraphics*[width=8.5cm]{f22_syst_odd_N.eps}
\caption{Evolution of the high-spin states of the odd-$A$
Te isotopes as a function of the neutron number (this work, Ref.~\cite{bl96} for
$^{123}$Te and Ref.~\cite{bh01} for $^{133}$Te), (a) negative-parity states, (b)
positive-parity states.
}
\label{systematic_oddTe}
\end{center}
\end{figure}
Two 21/2$^-$ levels have been identified close in
energy in $^{127,129}$Te (see Figs.~\ref{schema127} and~\ref{schema129}). The
21/2$^-_2$ state of $^{129}$Te was assigned as
$(\nu h_{11/2})^1$ $(\pi g_{7/2})^1(\pi d_{5/2})^1$
because it is well populated in the $\beta$ decay of the
high-spin isomeric state of $^{129}$Sb~\cite{zh95}. The 21/2$^-_2$ state of
$^{127}$Te likely has the same configuration.
Regarding the 27/2$^-$ level, it is likely the fully aligned $(\nu h_{11/2})^3$ state. Its
excitation energy decreases as a function of the neutron number, as does the 10$^+$ state of
the even-$N$ isotopes.
For the spin values higher than 27/2$^-$, the breaking of another pair has to be considered,
either a neutron one, leading to $I_{max}=35/2^-$, or a proton one, leading to
$I_{max}=39/2^-$ (see Table~\ref{spinmax}).
The positive-parity states are shown in Fig.~\ref{systematic_oddTe}(b). The
15/2$^+$ level is due to the $(\nu d_{3/2})^1(\pi g_{7/2})^2$ configuration, while
the 23/2$^+$ state comes from the breaking of one $(\nu h_{11/2})^2$ pair
(see Table~\ref{spinmax}). Noteworthy is the fact that the 23/2$^+$ level of
$^{123-129}$Te decays towards the 21/2$^-$ states by means of $E1$ transitions.
As the difference in energy between these states decreases with increasing $N$,
the 23/2$^+$ level of $^{129}$Te becomes isomeric, with $T_{1/2}=33(3)$~ns. The
obtained $B(E1)$ values are of the same
order of magnitude as those measured for the $7^- \rightarrow 6^+$ transition in the two neighboring
even-$A$ isotopes (see Table~\ref{BE1}).
\begin{table}[!ht]
\begin{center}
\caption{Characteristics of the isomeric $E1$ transitions measured
in $^{129}$Te and $^{126-132}$Te.}\label{BE1}
\begin{tabular}{ccccc}
\hline
Nucleus & $E_i$ &$E_\gamma$ & T$_{1/2}^{(a)}$& $B(E1)^{(a)}$\\
& keV & keV & & W.u.\\
\hline
$^{129}$Te &2135.8 &180 &33(3) ns$^{(b)}$ & $4.7(12) \times 10^{-7}$\\
& &251 & & $3.2(8) \times 10^{-7}$\\
&&&&\\
$^{126}$Te &2495 &720 &0.152(5) ns$^{(c)}$ & $4.55(16) \times 10^{-6}$\\
$^{128}$Te &2336 &526 &2.404(24) ns$^{(c)}$ & $7.4(6) \times 10^{-7}$\\
$^{130}$Te &2145 &331 &110(5) ns$^{(d)}$ & $6.6(3) \times 10^{-8}$\\
$^{132}$Te &1925 &151 &28.1(15) $\mu$s$^{(c)}$ & $2.56(14) \times 10^{-9}$\\
\hline
\end{tabular}
\end{center}
$^{(a)}$ The number in parentheses is the error in the least significant digit shown.\\
$^{(b)}$ From Ref.~\cite{zh95}.\\
$^{(c)}$ From Ref.~\cite{nndc}.\\
$^{(d)}$ This work.\\
\end{table}
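The $B(E1)$ values of Table~\ref{BE1} can be recovered from the measured half-lives with the standard electromagnetic decay-rate and Weisskopf-unit expressions (a sketch using textbook constants; internal conversion, negligible for $E1$ at these energies, and possible competing branches are ignored):

```latex
\[
  \lambda(E1) \;\simeq\; 1.59\times10^{15}
  \left[\frac{E_\gamma}{\mathrm{MeV}}\right]^{3} B(E1)\ \mathrm{s}^{-1},
  \qquad
  B_W(E1) \;=\; 6.446\times10^{-2}\,A^{2/3}\ e^2\mathrm{fm}^2 ,
\]
% with B(E1) in e^2 fm^2.
% Check for ^{130}Te: B(E1) = 6.6e-8 W.u. and B_W(E1) ~ 1.65 e^2 fm^2 give
% lambda ~ 1.59e15 x (0.331)^3 x 1.1e-7 ~ 6.3e6 s^-1, i.e.
% T_1/2 = ln2/lambda ~ 110 ns, consistent with the measured half-life.
```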
The last point of this section is devoted to the discussion of the isomeric transition
of $^{131}$Te which populates the second excited state measured above the 11/2$^-$ level
(see Fig.~\ref{schema131}). Using the excitation energy of the 21/2$^-_1$ levels above
the 11/2$^-$ level in $^{129}$Te and $^{133}$Te [see Fig.~\ref{systematic_oddTe}(a)],
we can estimate the excitation energy of the 21/2$^-_1$ level in $^{131}$Te,
$E(21/2^-) \sim 2070$~keV. Such a value explains why
the 23/2$^+$ level of $^{131}$Te, lying at 1941~keV, is a very long-lived isomeric
state, since it can only decay towards the 19/2$^-$ state.
The authors of the previous work~\cite{fo98} had chosen an
$E3$ multipolarity for the 361-keV transition, since an $M2$ multipolarity leads to
$B(M2)=1.9 \times 10^{-6}$~W.u., a very low value compared to those measured in Sn
isotopes. Such a choice would imply that the 1580-keV state has
$I^\pi = 17/2^-$. Given that no 17/2$^-$ state is observed in the yrast lines of the neighboring Te isotopes,
while a 19/2$^-$ state is measured around this energy, the $M2$ multipolarity is much
more likely.
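The strong hindrance quoted above can be sketched from the measured half-life with the standard single-photon decay-rate formula and Weisskopf $M2$ unit (textbook expressions, not taken from this work; internal conversion is neglected here):

```latex
\[
  \lambda(M2) \;\simeq\; 1.35\times10^{7}
  \left[\frac{E_\gamma}{\mathrm{MeV}}\right]^{5} B(M2)\ \mathrm{s}^{-1},
  \qquad
  B_W(M2) \;=\; 1.650\,A^{2/3}\ \mu_N^2\mathrm{fm}^2 ,
\]
% with B(M2) in mu_N^2 fm^2.
% For ^{131}Te: lambda = ln2 / 93 ms ~ 7.5 s^-1 and E_gamma = 0.361 MeV give
% B(M2) ~ 9e-5 mu_N^2 fm^2 ~ 2e-6 W.u. (B_W ~ 43 mu_N^2 fm^2 for A = 131),
% of the order of the value quoted above.
```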
Nevertheless it is worth checking the configurations involved in the $M2$ transitions of the Sn
and Te isotopes.
In $^{123-127}$Sn, the main components of the initial state are
$(\nu h_{11/2})^2 (\nu d_{3/2})^1/(\nu h_{11/2})^2 (\nu s_{1/2})^1$ and that of the
final state, $(\nu h_{11/2})^3$. The hindrance of the $M2$ transition is due to the change of the
neutron orbit, $\nu d_{3/2}/ \nu s_{1/2} \rightarrow \nu h_{11/2}$, implying
$\Delta \ell =3$ and $\Delta j =4$, at least.
In $^{131}$Te, the main component of the 23/2$^+$ state is
$(\nu h_{11/2})^2 (\nu d_{3/2})^1$ and
that of the 19/2$^-$ state is $(\nu h_{11/2})^1 (\pi g_{7/2})^2$. The $M2$
transition is then more hindered because it has to involve the change of one
proton state (due to the breaking of one proton pair), in addition to that of the neutron
orbit.
\subsection{Shell-model calculations}
In order to have a deeper understanding of the excitations involved in the high-spin states
identified in the heavy-$A$ Te isotopes, particularly the respective roles of the neutrons
and the protons, we have performed shell-model (SM) calculations using the SN100PN
interaction taken from Brown {\it et al.}~\cite{br05}, which contains four parts: the
proton-proton, neutron-neutron, and proton-neutron interactions, the Coulomb
repulsion being added to the interaction between protons. We used the shell-model code
NuShellX@MSU~\cite{br07}. The valence space includes five proton orbits and five neutron orbits
($g_{7/2}$, $d_{5/2}$, $d_{3/2}$, $s_{1/2}$, $h_{11/2}$), which is suitable
for the description of nuclei with $Z \ge 50$ and $N \le 82$.
We have calculated the excited states of $^{128-131}$Te in the full space.
The lightest isotopes, $^{128,129}$Te, with their numerous holes in
the $N=50-82$ shell, are good cases to investigate
the breaking of several neutron pairs in the presence of protons.
On the other hand, the calculations
of lighter isotopes would lead to too large dimensions and would require
valence-space truncations, which may
lead to ambiguities in the interpretation of the results.
We have also calculated, without any truncation, the excited states of the $^{125-130}$Sn
isotopes using the same interaction
and valence space; this allows us to test the neutron part of the SN100PN interaction.
In Sec.~\ref{Sn_SM}, we first compare the experimental and calculated level schemes for
$^{126,125}$Sn. Then in Sec.~\ref{Te_SM}, we discuss the results obtained for the excited
states of $^{128-131}$Te. The analysis of the wave functions of some selected states allows
us to determine to what extent the breaking of the proton pair affects the Te high-spin
states, particularly the existence of states having the same neutron configurations
as those of the Sn isotopes.
\subsubsection{Description of the high-spin states of Sn isotopes}\label{Sn_SM}
The evolution of the experimental high-spin states of the heavy Sn isotopes as a function
of the neutron number is very smooth (see Figs.~19 and 21 of Ref.~\cite{as12}). This feature
is well described by the SM calculations done in the present work. Thus we only show two
typical results, those of $^{126}$Sn and $^{125}$Sn.
Results of the shell-model calculations for $^{126}$Sn are compared to the experimental
results~\cite{as12} in Fig.~\ref{SM_126Sn}.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=8cm]{f23_Sn126_exp_SM_color.ps}
\caption{(Color online) Comparison of experimental~\cite{as12} and calculated
high-spin states of $^{126}$Sn. The SM levels drawn in red are due to the
complete alignment of the angular momenta of the neutrons of the broken pairs
(see text).
}
\label{SM_126Sn}
\end{center}
\end{figure}
The excitation energies of most of the SM states
are very close to the experimental ones; only the 12$^+$, 14$^+$, 16$^+$, and 18$^+$ states are predicted
$\sim 200$~keV too low.
The four SM states drawn in red,
with $I^\pi=7^-$, 10$^+$, 15$^-$ and 16$^+$, are the fully aligned states of
broken-pair configurations involving $n$ neutrons in the $\nu h_{11/2}$ orbit,
with $n=$1, 2, 3 and 4, respectively (see Table~\ref{spinmax}).
Similarly, results of the shell-model calculations for $^{125}$Sn are compared to the experimental
results~\cite{as12,lo08,nndc} in Fig.~\ref{SM_125Sn}.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=8.5cm]{f24_Sn125_exp_SM_color.ps}
\caption{(Color online) Comparison of experimental~\cite{as12,lo08,nndc} and calculated
high-spin states of $^{125}$Sn. The SM levels drawn in red are due to the
complete alignment of the angular momenta of the neutrons of the broken pairs
(see text).
}
\label{SM_125Sn}
\end{center}
\end{figure}
In this case also, the deviation between experimental and calculated energies is low, mostly
below 100~keV. Regarding the positive-parity states, it is important to notice that
the 15/2$^+$ and 19/2$^+$ levels are predicted very close to each other
but in a reverse order. Moreover, even though no excited state lying above the 23/2$^+$ level
has yet been identified experimentally in $^{125}$Sn, the excitation energy of the expected
35/2$^+$ state can be
estimated from the experimental results of $^{121-123}$Sn, where the gap in energy between the
35/2$^+$ and 23/2$^+$ states is
2280~keV and 2225~keV, respectively~\cite{as12}. This is in very good agreement with the SM
prediction for $^{125}$Sn, the 35/2$^+$ state lying 2099~keV above the 23/2$^+$ one.
The three states drawn in red, with $I^\pi=23/2^+$, 27/2$^-$ and 35/2$^+$,
are the fully aligned states of broken-pair configurations involving $n$ neutrons
in the $\nu h_{11/2}$ orbit, with $n=$2, 3, and 4, respectively (see Table~\ref{spinmax}).
The 35/2$^-$ state is partially drawn in red as only 60\% of its wave function is the
$(\nu h_{11/2})^5$ fully aligned state.
\subsubsection{ Description of the high-spin states of Te isotopes}\label{Te_SM}
\paragraph{Even-$N$ isotopes: $^{128,130}$Te.}
Results of the shell-model calculations for $^{128}$Te are given in Fig.~\ref{SM_128Te}, in
comparison with the experimental results obtained in the present work.
Every experimental state has a theoretical counterpart in the same energy range.
The new features of the high-spin level scheme of $^{128}$Te,
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=9cm]{f25_Te128_exp_SM_color.ps}
\caption{(Color online) Comparison of experimental and calculated
high-spin states of $^{128}$Te. The major part ($\ge $50\%) of the wave functions
of the states drawn in magenta is only due to the breaking of
neutron pairs ($I_p=0$), those drawn in green have a broken proton pair with
$I_p=6$, and those drawn in blue have several components with various
values of $I_n$ and $I_p$ (some cases are shown in Fig.~\ref{config128}).
}
\label{SM_128Te}
\end{center}
\end{figure}
as compared to the one of its isotone, $^{126}$Sn, are well described:
(i) the closeness of the $6^+$ and $4^+$ states, (ii)
the presence of a second set of states above the 12$^+$ level, forming a $\Delta I=1$
series, and (iii) the existence of the 12$^-$ and 14$^-$ states in the negative-parity yrast
line.
Comparison of experimental and calculated levels of $^{130}$Te is shown in Fig.~\ref{SM_130Te}.
The excitation energies of most of the SM states are even closer to the experimental ones
than in the case of $^{128}$Te.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=9cm]{f26_Te130_exp_SM_color.ps}
\caption{(Color online) Comparison of experimental and calculated high-spin states
of $^{130}$Te.
The color code of the SM states is the same as that of Fig.~\ref{SM_128Te}.
}
\label{SM_130Te}
\end{center}
\end{figure}
The analysis of the wave functions allows us to identify which nucleon pairs are broken to
obtain the total angular momentum of the calculated states.
For that purpose, we use (i) the values of the two components, $I_n$ and $I_p$, which are
coupled to give the total angular momentum of each state and (ii) for each $I_n$ and $I_p$
component, its decomposition in terms of proton-neutron configurations, i.e., the occupation
numbers of the ten valence orbits which are considered in the present calculations.
Typical results for positive-parity states of $^{128}$Te are given in
Figs.~\ref{config128}(a)--\ref{config128}(d).
\begin{figure*}[!ht]
\begin{center}
\includegraphics*[angle=-90,width=17cm]{f27_config_128Te.eps}
\caption{(Color online) Decomposition of the total angular momentum of selected
states of $^{128}$Te into their $I_n \otimes I_p$ components. The percentages
above 10\% are
written inside the squares, whose areas are proportional to the percentages. Percentages
below 5\% are not written. The color code is the same as that of
Fig.~\ref{SM_128Te}. (a)--(d) positive-parity states, (e)--(h) negative-parity states.
}
\label{config128}
\end{center}
\end{figure*}
The major component (62\%) of the 10$^+$ state predicted at 2560~keV comes from the
breaking of the neutron pair ($I_n=10$), the two protons being paired ($I_p=0$). Such a
feature does not hold for the 12$^+$ state predicted at 3221~keV, where the component
corresponding to the breaking of the neutron pair ($I_n=12$), the two protons being paired
($I_p=0$), is only 38\%, while another large component (32\%) involves the breaking of both
neutron and proton pairs. The comparison of the wave functions of the two 14$^+$
states is very instructive. The 14$^+_1$ state calculated at 4064~keV shows many
components, while the 14$^+_2$ state calculated at 4259~keV has mainly $I_p=6$ (with
$I_n=$8--12), i.e., the proton pair being broken and the two angular momenta being fully
aligned. In summary, the positive-parity states of $^{128}$Te given in Fig.~\ref{SM_128Te}
can be sorted into three families, drawn with three colors. The major part ($\ge $50\%) of
the wave functions of the states drawn in magenta is only due to the breaking
of a neutron pair ($I_p=0$), this is the case of the 8$^+$ and 10$^+$ states.
Those drawn in green have a broken proton pair with $I_p=6$, such as the 6$^+$ state calculated
at 1770~keV and the set of $\Delta I=1$ states, from 13$^+$ to 18$^+$. Finally,
those drawn in blue have several components with various values of $I_n$ and $I_p$, such as
the 2$^+$, 4$^+$, 12$^+$ states, as well as the set of $\Delta I=2$ states, from 14$^+$
to 18$^+$. Because of their large number of components, these levels resemble ``collective''
states.
Finally, it is important to note that the SM calculations give a coherent
picture of the two structures measured above the 12$^+_1$ states in the even-$N$ Te isotopes.
The components of typical negative-parity states of $^{128}$Te are given in
Figs.~\ref{config128}(e)--\ref{config128}(h). The major component (63\%) of the 7$^-$ state predicted at 2319~keV comes from the
breaking of the neutron pair ($I_n=7$), the two protons being paired ($I_p=0$). The 13$^-$
state predicted at 4028~keV and the 15$^-$ state (4397~keV) also have their major component
(52\% and 50\%, respectively) coming from the breaking of neutron pairs ($I_n=13$ and
15, respectively), the two protons being paired ($I_p=0$).
On the other hand, the 11$^-$ state calculated at 3665~keV shows many
components. The negative-parity states of $^{128}$Te given in Fig.~\ref{SM_128Te}
can then be sorted into two families: the major part ($\ge$50\%) of the wave function of the states
drawn in magenta is due to the breaking of a neutron pair ($I_p=0$) while the states drawn in
blue have several components with various values of $I_n$ and $I_p$.
\begin{table}[!ht]
\begin{center}
\caption{
Comparison of the experimental and calculated values of $B(E2)$ for transitions de-exciting
isomeric states in $^{128,130,132}$Te.}\label{BE2_SM}
\begin{tabular}{cccc}
\hline
Nucleus & J$_i^\pi \rightarrow$J$_f^\pi$ & $B_{exp}(E2)^{(a)}$ &$B_{SM}(E2)$\\
& &$e^2\,\mathrm{fm}^4$ &$e^2\,\mathrm{fm}^4$\\
\hline
$^{128}$Te &$10^+ \rightarrow 8^+$ &85(7) &110 \\
$^{130}$Te & &85(4) &154 \\
$^{132}$Te & &42(1) &30 \\
&&&\\
$^{128}$Te &$15^- \rightarrow 13^-$ & - &274 \\
$^{130}$Te & &193(29) &135 \\
\hline
\end{tabular}
\end{center}
$^{(a)}$ The number in parentheses is the error in the least significant digit shown.\\
\end{table}
\paragraph{Odd-$N$ isotopes: $^{129,131}$Te.}
The negative-parity yrast line predicted in $^{129,131}$Te (see
Figs.~\ref{SM_129Te} and~\ref{SM_131Te}) exhibits several new high-spin states
compared to that of the odd-$N$ Sn isotopes, such as the 21/2$^-$ level lying
above the 19/2$^-$ one, or the 29/2$^-$ and 33/2$^-$ levels lying between the 31/2$^-$
and 35/2$^-$ ones. This is in good agreement with the experimental results.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=9cm]{f28_Te129_exp_SM_color.ps}
\caption{(Color online) Comparison of experimental and calculated
high-spin states of $^{129}$Te. The color code of the SM states
is the same as that of Fig.~\ref{SM_128Te}.
}
\label{SM_129Te}
\end{center}
\end{figure}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=9cm]{f29_Te131_exp_SM_color.ps}
\caption{(Color online) Comparison of experimental and calculated high-spin states
of $^{131}$Te.
The color code of the SM states is the same as that of Fig.~\ref{SM_128Te}.
}
\label{SM_131Te}
\end{center}
\end{figure}
All the calculated levels of $^{129,131}$Te show the same sequence as the experimental ones
and lie in the right energy range. Nevertheless, it is important to note that
the differences between the experimental and calculated excitation energies can reach
$\sim$400~keV in $^{129}$Te, its SM level scheme being more compressed than the
experimental one.
The components of typical negative-parity states of $^{129}$Te are given in
Figs.~\ref{config129}(a)--\ref{config129}(d). All the components of the 21/2$^-$ state predicted at 1796~keV
come from the breaking of the proton pair ($I_p=6$), and the major component
(63\%) has one odd neutron in the $h_{11/2}$ orbit ($I_n=11/2$).
The major component (55\%) of the 27/2$^-$ state predicted at 2485~keV comes from the
breaking of a neutron pair ($I_n=27/2$), the two protons being paired ($I_p=0$). Above the
27/2$^-$ state, two sets of states are predicted to coexist in the same energy range. The
wave function of the 31/2$^-$ state calculated at 3189~keV displays many components with
various values of both $I_p$ and $I_n$ (similar results are obtained for the 35/2$^-$ state at
4193~keV and the 39/2$^-$ state at 5254~keV). On the other hand, the components of the
29/2$^-$ state calculated at 3464~keV come mainly from the breaking of the proton pair
($I_p=6$). The same is observed for the 33/2$^-$ state at
4034~keV and the 37/2$^-$ state at 4855~keV.
\begin{figure*}[!ht]
\begin{center}
\includegraphics*[angle=-90,width=17cm]{f30_config_129Te.eps}
\caption{(Color online) Decomposition of the total angular momentum of selected
states of $^{129}$Te into their $I_n \otimes I_p$ components. The percentages
above 10\% are written inside the squares, which are drawn with an area proportional
to the percentage. Percentages below 5\% are not written. The color code is the same
as the one of Fig.~\ref{SM_129Te}. (a)--(d) negative-parity states,
(e)--(h) positive-parity states.
}
\label{config129}
\end{center}
\end{figure*}
The components of typical positive-parity states of $^{129}$Te are given in
Figs.~\ref{config129}(e)--\ref{config129}(h). The main components of the 15/2$^+$ state calculated at
1687~keV come from the breaking of the proton pair ($I_p=6$), and the major component
(64\%) has one odd neutron in the $d_{3/2}$ orbit ($I_n=3/2$). Less than 300~keV above, the
23/2$^+$ state has a major component (59\%) coming from the breaking of a $\nu h_{11/2}$ pair
($I_n=23/2$ and $I_p=0$). The two sets of states predicted above the 23/2$^+$ have different
configurations. The 27/2$^+$ level exhibits many components with
various values of both $I_p$ and $I_n$ (similar results are obtained for the 31/2$_1^+$
state at 3545~keV and the 35/2$^+$ state at 4513~keV), while the components of the
29/2$^+$ state calculated at 3422~keV come mainly from the breaking of the proton pair
($I_p=6$), as the ones of the 31/2$_2^+$ and 33/2$^+$ states.
In summary, the calculated states of $^{129,131}$Te given in Figs.~\ref{SM_129Te}
and~\ref{SM_131Te} can be sorted into three families drawn with the same colors as used for
$^{128,130}$Te. The breaking of the first neutron pair leads to the major component of the
19/2$^+$, 23/2$^+$, and 27/2$^-$ states. Two sets of states are predicted to coexist above them.
The levels of one set (drawn in blue) have many components, resembling ``collective'' states, while all the
levels of the other one (drawn in green) have mainly a broken proton pair.
\subsubsection{Conclusion}
The neutron part of the SN100PN effective interaction gives a very good
description of the high-spin states of the heavy Sn isotopes.
The agreement between the calculated and experimental
energies of the excited states of the heavy Te isotopes is slightly poorer,
the predicted level schemes being too compressed compared to the experimental results.
One could suspect that the values of some proton-proton two-body matrix elements (TBME)
are slightly too attractive, as similar features were observed previously in several
$N=82$ isotones, where only such TBME are active. It is worth
recalling that the SN100PN interaction has been recently used~\cite{sr13} to calculate their
high-spin states which were measured previously~\cite{as12b}.
The theoretical level schemes of $^{137}$Cs, $^{138}$Ba, $^{139}$La, and $^{140}$Ce
display the same sequences as the experimental ones, but some high-spin states are
predicted too low in energy (by an amount from $\sim$~200~keV to 400~keV).
Noteworthy is the fact that only the fully aligned states corresponding to the breaking of
one neutron pair in the $h_{11/2}$ orbit (see the cases written in bold in
Table~\ref{spinmax})
are observed in the heavy-$A$ Te isotopes. On the other hand, the main configurations of
all the higher spin states contain the breaking of the proton pair. This is at variance
with the Sn isotopes where the breaking of two and three neutron pairs are
identified~\cite{as12}.
\section{Summary}
In this work, the $^{124-131}$Te isotopes were produced in two fusion-fission reactions,
$^{18}$O+$^{208}$Pb and $^{12}$C+$^{238}$U, and the emitted
$\gamma$ rays were detected by the Euroball array. In another experiment, a fragment
detector was also coupled to the $\gamma$-ray detection. All the data sets recorded
in these experiments have allowed us to extend the level schemes of $^{124-131}$Te
isotopes to higher spins ($\sim 17\hbar$) and higher excitation energies ($\sim 5-6$~MeV).
Furthermore, the performed $\gamma-\gamma$ angular correlations supported the former
spin assignments in most cases and yielded about 30 new spin assignments.
The half-lives of three isomeric states were measured, showing that the decay of the
10$^+$ state of $^{128}$Te is much faster than reported in the last compilation.
In addition, the
unexpected production of heavier-$A$ Te nuclei as well as of some isotopes of Xe, Ba,
and Ce was discussed in terms of transfer/incomplete fusion in the $^{12}$C+$^{238}$U reaction,
thanks to the unambiguous identification of their complementary fragments.
The high-spin structures of the $^{124-131}$Te isotopes were first discussed in comparison with
the general features known in the mass region. Then shell-model calculations using the SN100PN
effective interaction have been successfully compared to experimental results. Thanks to the
$I_n$ and $I_p$ components of the SM wave functions, the effect of the proton-pair breaking
has been identified. The fully aligned component with $I_p=6$ shows up clearly by the appearance
of sets of excited levels, for instance above the $(\nu h_{11/2})^2$ 10$^+$ state in the
even-$N$ isotopes and the $(\nu h_{11/2})^3$ 27/2$^-$ state in the odd-$N$ ones.
\begin{acknowledgments}
The Euroball project was a collaboration among France, the
United Kingdom, Germany, Italy, Denmark and Sweden.
The first experiment has been performed under
U.E. contract (ERB FHGECT 980 110) at Legnaro.
The second experiment has
been supported in part by the EU under contract HPRI-CT-1999-00078 (EUROVIV).
We thank many colleagues for their
active participation in the experiments, Drs. A.~Bogachev, A.~Buta, J.L.~Durell,
Th.~Ethvignot, F.~Khalfalla, I.~ Piqueras, A.A.~Roach, A.G.~Smith and B.J.~Varley.
We thank the crews of the Vivitron.
We are very indebted to M.-A. Saettle
for preparing the Pb target, P. Bednarczyk, J. Devin, J.-M. Gallone,
P. M\'edina, and D. Vintache for their help during the experiment.
\end{acknowledgments}
\section{Introduction}
\subsection{Motivations and state of the art}
In the past few years there has been a growing interest in the study of Dirac operators among the mathematical physics community, the main reason being that low-energy electrons in a single-layered sheet of graphene are governed by an effective hamiltonian which is a two-dimensional massless Dirac operator.
Various mathematical studies have been undertaken, starting with a rigorous mathematical derivation of such hamiltonians, see \emph{e.g.} \cite{FW12} for the effective hamiltonian derivation or \cite{ALTMR,BCLTS,MOBP,SV18} for the justification of the so-called infinite mass boundary conditions. Many properties of such operators have been investigated, such as their self-adjointness in bounded domains with specified boundary conditions or when coupled with so-called $\delta$-interactions, see \cite{BHPOB,BFVdBS17}. Let us also mention recent works on spectral properties and asymptotics of Dirac-type operators in specific asymptotic regimes (see \cite{ALTR,HOBP}).
In this work, we are interested in finding geometrical bounds on the eigenvalues of one of the simplest Dirac operator relevant in physics: the two-dimensional massless Dirac operator with infinite mass boundary conditions.
To set the stage, let $\Omega \subset \mathbb{R}^2$ be a $C^\infty$ simply connected domain and let $n = (n_1,n_2)^\top$ be the outward pointing normal field on $\partial\Omega$. The Dirac operator with infinite mass boundary conditions in $L^2(\Omega,\mathbb{C}^2)$ is defined as
\begin{multline*}
D^\Omega := \begin{pmatrix}
0 & -2\mathrm{i}\partial_z\\
-2\mathrm{i}\partial_{\bar z} & 0
\end{pmatrix},\\\dom(D^\Omega) := \{ u = (u_1,u_2)^\top \in H^1(\Omega,\mathbb{C}^2) : u_2 = \mathrm{i} {\bf n}u_1 \text{ on }\partial\Omega \},
\end{multline*}
where we have set ${\bf n} := n_1 + \mathrm{i} n_2$ and with the Wirtinger operators defined as usual by
\[
\partial_z = \frac12(\partial_1 - \mathrm{i} \partial_2),\quad \partial_{\bar{z}} = \frac12(\partial_1 + \mathrm{i} \partial_2).
\]
The Dirac operator with infinite mass boundary conditions $D^\Omega$ is known to be self-adjoint (see \cite[Thm. 1.1.]{BFVdBS17}); moreover, its spectrum is symmetric with respect to the origin and consists of eigenvalues of finite multiplicity satisfying
\[
\cdots \leq -E_k(\Omega) \leq\cdots \leq-E_{1}(\Omega) < 0 < E_{1}(\Omega) \leq \cdots \leq E_k(\Omega) \leq \cdots.
\]
In the recent paper \cite{BFVdBS17b}, the following geometrical lower bound is obtained
\begin{equation}\label{eqn:lbBFSVdB}
E_1(\Omega) \geq \sqrt{\frac{2\pi}{|\Omega|}},
\end{equation}
where $|\Omega|$ denotes the area of the domain $\Omega$. However, this lower bound is never attained among Euclidean domains and, by analogy with the famous Faber-Krahn inequality \cite{F23,K25}, a natural conjecture for the optimal lower bound is the following.
\begin{conj}\label{conj:FK} There holds
\[
E_1(\Omega) \geq \sqrt{\frac{\pi}{|\Omega|}} E_1(\mathbb{D}),
\]
where $\mathbb{D}$ is the unit disk. There is equality in the above inequality if and only if $\Omega$ is a disk.
\end{conj}
\begin{rem} As explained in \cite[Remark 2]{BFVdBS17b} (see also \cite[Appendix]{LOB}), the eigenstructure of the unit disk is explicit. Indeed, $E_1(\mathbb{D}) \simeq 1.435\dots$ is the first non-negative root of the equation $J_0(E) = J_1(E)$, where $J_0$ and $J_1$ are the Bessel functions of the first kind of order $0$ and of order $1$, respectively. Moreover, an associated eigenfunction is given for $x = (x_1,x_2)\in \mathbb{D}$ by
\[
\begin{pmatrix}J_0(E_1(\mathbb{D})|x|)\\ \mathrm{i}\frac{x_1 + \mathrm{i} x_2}{|x|}J_1(E_1(\mathbb{D})|x|)\end{pmatrix}.
\]
Note that on $\partial\mathbb{D}$ the boundary condition $u_2 = \mathrm{i}\mathbf{n}u_1$ reduces precisely to the equation $J_1(E_1(\mathbb{D})) = J_0(E_1(\mathbb{D}))$.
\label{rem:fundisk}
\end{rem}
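As a quick numerical cross-check of the value quoted in the remark (an illustration, not part of the original text), the root can be located with SciPy; note that it lies strictly above $\sqrt{2}$, the value of the lower bound \eqref{eqn:lbBFSVdB} for the unit disk:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import j0, j1

# J_0 - J_1 is positive at E = 1 and negative at E = 2, so the first
# root of J_0(E) = J_1(E) is bracketed in (1, 2).
E1_disk = brentq(lambda E: j0(E) - j1(E), 1.0, 2.0)
print(E1_disk)                                 # approximately 1.435

# The lower bound sqrt(2*pi/|D|) = sqrt(2) for the unit disk lies
# strictly below E_1(D), consistent with the bound never being attained.
print(np.sqrt(2.0) < E1_disk)                  # True
```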
Conjecture \ref{conj:FK} motivated part of this paper and is still an open question. However, in Section \ref{sec:numerics} we provide strong numerical evidence supporting it and in Section \ref{sec:aboutFK} we show how Conjecture \ref{conj:FK} is intimately connected to the famous Bossel-Daners inequality for the Robin Laplacian (see \cite{Boss86,Dan06}).
The quest for a geometrical upper-bound has also attracted attention recently, as for instance in \cite{LOB}. In that work, the given geometrical upper-bound is sharp in the sense that it is an equality if and only if the considered domain is a disk. Nevertheless, this upper-bound depends in a complicated fashion on several geometrical parameters and may be hard to compute in practice.
Let us also mention that similar questions are dealt with in the differential geometry literature for lower bounds and upper bounds for Dirac operators on spin-manifolds (see for instance \cite{AF99,B92,B98,R06}).
One of the main results of this paper is the following theorem, which gives a geometrical upper-bound in terms of simple geometric quantities: the area $|\Omega|$, the perimeter $|\partial\Omega|$, and the inradius $r_i$ of $\Omega$.
\begin{thm} Let $\Omega \subset \mathbb{R}^2$ be a $C^\infty$ simply connected domain. There holds
\[
E_1(\Omega) \leq \frac{|\partial\Omega|}{(\pi r_i^2 + |\Omega|)}E_1(\mathbb{D}),
\]
with equality if and only if $\Omega$ is a disk.
\label{thm:ineq}
\end{thm}
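For concreteness (a numerical illustration, not from the paper), one can evaluate this upper bound together with the lower bound \eqref{eqn:lbBFSVdB}. For the unit disk the upper bound equals $E_1(\mathbb{D})$, the equality case, while for a square of side $2$ (formally ignoring the smoothness assumption of the theorem) the two bounds bracket a window for $E_1$:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import j0, j1

# E_1(D): first positive root of J_0(E) = J_1(E), for the unit disk.
E1_disk = brentq(lambda E: j0(E) - j1(E), 1.0, 2.0)

def e1_bounds(perimeter, area, inradius):
    lower = np.sqrt(2.0 * np.pi / area)                         # known lower bound
    upper = perimeter * E1_disk / (np.pi * inradius**2 + area)  # theorem above
    return lower, upper

lo_disk, up_disk = e1_bounds(2.0 * np.pi, np.pi, 1.0)  # unit disk: equality case
lo_sq, up_sq = e1_bounds(8.0, 4.0, 1.0)                # square of side 2
print(lo_disk, up_disk)                        # 1.414..., 1.434...
print(lo_sq, up_sq)                            # 1.253..., 1.607...
```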
The proof combines a new variational characterization of $E_1(\Omega)$, inspired by the min-max techniques for operators with gaps introduced in \cite{DES00}, with the classical proof of Szeg\"o about the eigenvalues of membranes of fixed area \cite{Sze54}.
It turns out that this new variational characterization is of interest in itself because it also allows for numerical simulations, and we believe that it could be an adequate starting point to prove Conjecture \ref{conj:FK}, as discussed further in Section \ref{sec:aboutFK}. To introduce it, consider the quadratic form
\begin{equation}
q_{E,0}^\Omega(u) := 4 \int_\Omega |\partial_{\bar z} u|^2 dx - E^2 \int_{\Omega}|u|^2dx + E \int_{\partial\Omega} |u|^2 ds, \quad \dom(q_{E,0}^\Omega) := C^\infty(\overline{\Omega},\mathbb{C}).
\label{eqn:deffq}
\end{equation}
For $E>0$, $q_{E,0}^\Omega$ is bounded below with dense domain, and we denote by $q_E^\Omega$ its closure in $L^2(\Omega)$. Then, we define the first min-max level
\begin{equation}
\mu^\Omega(E) := \inf_{u \in \dom(q_E^\Omega)\setminus\{0\}} \frac{4 \int_\Omega |\partial_{\bar z} u|^2 dx - E^2 \int_{\Omega}|u|^2dx + E \int_{\partial\Omega} |u|^2 ds}{\int_\Omega |u|^2 dx}.
\label{eqn:firstminmax}
\end{equation}
The second main result of this paper is the following non-linear variational characterization of $E_1(\Omega)$.
\begin{thm} $E>0$ is the first non-negative eigenvalue of $D^\Omega$ if and only if $\mu^\Omega(E) = 0$.
\label{thm:vf}
\end{thm}
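As an illustration on the unit disk (a numerical sketch, not taken from the paper): for the radial test function $u(x) = J_0(E|x|)$ one computes $4|\partial_{\bar z}u|^2 = E^2 J_1(E|x|)^2$, so $q_E^{\mathbb{D}}(u)$ reduces to one-dimensional Bessel integrals. It is positive for small $E>0$ and vanishes first at $E = E_1(\mathbb{D})$, in agreement with Theorem \ref{thm:vf} and Remark \ref{rem:fundisk}:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq
from scipy.special import j0, j1

# First positive root of J_0(E) = J_1(E), i.e. E_1(D) for the unit disk.
E1 = brentq(lambda E: j0(E) - j1(E), 1.0, 2.0)

def q_disk(E):
    # q_E^D on u(x) = J_0(E|x|): the bulk terms combine into
    # E^2 * 2*pi * int_0^1 (J_1(Er)^2 - J_0(Er)^2) r dr, and the
    # boundary term is E * 2*pi * J_0(E)^2.
    bulk, _ = quad(lambda r: (j1(E * r)**2 - j0(E * r)**2) * r, 0.0, 1.0)
    return 2.0 * np.pi * (E**2 * bulk + E * j0(E)**2)

print(q_disk(1.2), q_disk(E1), q_disk(1.7))    # positive, ~0, negative
```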
The advantage of the quadratic form $q_E^\Omega$ is two-fold. First, functions in the considered variational space are now scalar valued and, second, the infinite mass boundary condition does not appear in the variational formulation. The drawback is that $\dom{(q_E^\Omega)}$ contains the Hardy space $\mathcal{H}_{\rm h}^2(\Omega)$, constituted of holomorphic functions with traces in $L^2(\partial\Omega)$. In particular, $\dom{(q_E^\Omega)}$ is not a usual Sobolev space and special care is needed in order to prove Theorem \ref{thm:vf}: it requires a precise description of the domain $\dom{(q_E^\Omega)}$ as well as of the domain of the associated self-adjoint operator obtained \emph{via} Kato's first representation theorem (see \cite[Chap. VI, Thm. 2.1]{Kat}). This is done using convolution operators reminiscent of those in \cite{AMV,OBV}, elliptic regularity properties of the maximal Wirtinger operators, and Cauchy singular integral operators on $\partial\Omega$, seen as periodic pseudo-differential operators.\\
Theorem \ref{thm:vf} is reminiscent of \cite{DES00,DES03}, where a similar strategy is used to deal with the Dirac-Coulomb operator. To our knowledge, this is the first time this idea is extended to boundary value problems, and we now describe the heuristic behind it.
Let $(u,v)^\top \in \dom{(D^\Omega)}$ be an eigenfunction associated with the eigenvalue $E>0$. In $\Omega$, the eigenvalue equation reads
\begin{equation}\label{eqn:vpdir}
-2\mathrm{i}\partial_z v = E u,\quad -2\mathrm{i}\partial_{\bar z} u = E v.
\end{equation}
If we assume that this identity is true up to the boundary $\partial\Omega$, we obtain the following boundary condition for $u$:
\begin{equation}\label{eqn:bc1}
\overline{\mathbf{n}}\, \partial_{\bar z} u + \frac{E}2 u = 0 \text{ on } \partial\Omega.
\end{equation}
Now, Equation \eqref{eqn:vpdir} gives
\begin{equation}\label{eqn:eqvpdeduced}
-4 \partial_z \partial_{\bar z} u = E^2 u \text{ in } \Omega.
\end{equation}
Hence, a weak formulation is obtained by taking the scalar product with $u$, integrating by parts, and taking into account the boundary condition \eqref{eqn:bc1}. This formally gives $q_E^\Omega(u) = 0$, which is the reason for introducing the quadratic form $q_E^\Omega$ in \eqref{eqn:deffq}.
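Explicitly, a worked version of this formal computation, using $\partial_z \bar u = \overline{\partial_{\bar z} u}$, the divergence theorem in Wirtinger form $\int_\Omega \partial_z g \, dx = \tfrac12\int_{\partial\Omega} g\,\overline{\mathbf{n}}\, ds$, and the boundary condition \eqref{eqn:bc1}, reads:

```latex
\begin{align*}
E^2 \int_\Omega |u|^2\,dx
&= -4\int_\Omega \bar u\,\partial_z\partial_{\bar z} u\,dx
 = 4\int_\Omega |\partial_{\bar z} u|^2\,dx
   - 2\int_{\partial\Omega} \overline{\mathbf{n}}\,\bar u\,\partial_{\bar z} u\,ds\\
&= 4\int_\Omega |\partial_{\bar z} u|^2\,dx
   + E\int_{\partial\Omega} |u|^2\,ds,
\end{align*}
```

that is, $q_E^\Omega(u) = 0$.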
Let us add two remarks. The first one explains that \eqref{eqn:bc1}--\eqref{eqn:eqvpdeduced} can be recast into a non-linear eigenvalue problem for a Laplace operator with oblique boundary conditions. The second remark explains how Theorem \ref{thm:vf} could be extended to handle the next eigenvalues.
\begin{rem}Note that \eqref{eqn:eqvpdeduced} is an eigenvalue equation for the Laplace operator and reads $-\Delta u = E^2 u$. The boundary condition \eqref{eqn:bc1} is a relation between the normal derivative, the tangential derivative and the value of the function on $\partial\Omega$. If we let ${\bf t}$ be the tangent field on $\partial\Omega$ such that $({\bf n},{\bf t})$ is a direct frame, the problem can be re-interpreted as an oblique problem
\begin{equation}
\label{eqn:vpnl}
\left\{\begin{array}{ccll}
-\Delta u & = & E^2 u &\text{ in } \Omega,\\
\partial_{n}u + \mathrm{i} \partial_t u+ E u &=& 0 &\text{ on } \partial\Omega,
\end{array}
\right.
\end{equation}
where $\partial_n$ and $\partial_t$ are the normal and tangential derivatives, respectively.
Note that Problem \eqref{eqn:vpnl} is non-linear because the parameter $E>0$ appears both in the eigenvalue equation and in the boundary condition.
\end{rem}
\begin{rem} For $j\geq1$, one can consider the $j$-th min-max level of $q_E^\Omega$ defined as
\[
\mu_j^\Omega(E) := \inf_{\tiny{\begin{array}{c}F \subset \dom{(q_E^\Omega)}\\ \dim F = j\end{array}}}\sup_{u \in F\setminus\{0\}} \frac{4 \int_\Omega |\partial_{\bar z} u|^2 dx - E^2 \int_{\Omega}|u|^2dx + E \int_{\partial\Omega} |u|^2 ds}{\int_\Omega |u|^2 dx}.
\]
As in \cite{DES00}, Theorem \ref{thm:vf} could be extended as follows: $E>0$ is the $j$-th non-negative eigenvalue of $D^\Omega$ if and only if $\mu_j^\Omega(E) = 0$. We do not discuss it here because we are concerned only with the principal eigenvalue $E_1(\Omega)$.
\end{rem}
Finally, let us comment on the hypotheses on $\Omega$. First, one would like to lower the smoothness hypothesis in order to handle, for instance, Lipschitz domains. This is a natural question, but there is no reason for the Dirac operator with infinite mass boundary conditions to be self-adjoint on the domain $\dom{(D^\Omega)}$ for such domains (see the case of polygonal domains in \cite{LTOB}). Moreover, as part of the proof relies on pseudo-differential techniques, we prefer to keep the $C^\infty$ smoothness assumption on $\partial\Omega$ because it allows for a more efficient treatment of singular integral operators on the boundary. Second, the simple connectedness assumption may be an unnecessary hypothesis for Theorem \ref{thm:vf} to hold. Nevertheless, we are not able to drop it in Theorem \ref{thm:ineq} because the proof relies on the Riemann mapping theorem to build an admissible test function for $q_E^\Omega$.
\subsection{Structure of the paper}
In Section \ref{sec:prelim}, we gather several results on Sobolev spaces on $\partial\Omega$, periodic pseudo-differential operators on $\partial\Omega$ and deduce various mapping properties of the Cauchy singular integral operators.
Section \ref{sec:maxc-rop} contains a description of the domain of the maximal Wirtinger operators. In particular, we discuss the existence of a trace operator for functions belonging to these domains and state a fundamental elliptic regularity result.
Section \ref{sec:berghard} deals with the description of the Bergman and Hardy spaces on $\Omega$ thanks to integral operators. This is done by introducing the Szeg\"o projectors on the Sobolev spaces on the boundary $H^s(\partial\Omega)$ ($s\in\{-\frac12,0,\frac12\}$). As a byproduct of this analysis we are able to describe explicitly the domains of the maximal Wirtinger operators.
Theorem \ref{thm:vf} is proved in Section \ref{sec:vcproof}. We start by describing the domain of the quadratic form $q_E^\Omega$ in terms of the first-order Sobolev space $H^1(\Omega)$ and the Hardy space on $\Omega$. Then, the analysis is pushed forward to study the domain of the self-adjoint operator associated with $q_E^\Omega$ \emph{via} Kato's first representation theorem (see \cite[Chap. VI, Thm. 2.1]{Kat}). Combining these tools, we prove Theorem \ref{thm:vf}.
Then, we apply Theorem \ref{thm:vf} in Section \ref{sec:isopin} to prove Theorem \ref{thm:ineq}. The proof is by adapting the well-known proof of Szeg\"o \cite{Sze54} to our setting, constructing an adequate test function for the new variational formulation.
In Section \ref{sec:aboutFK}, we show that Conjecture \ref{conj:FK} can be reformulated and that it is related to the famous Bossel-Daners inequality.
We conclude in Section \ref{sec:numerics} by illustrating with numerical experiments the validity of Conjecture \ref{conj:FK} and several theoretical results discussed throughout the paper.
\section{Preliminaries}\label{sec:prelim}
\subsection{Sobolev spaces on $\partial\Omega$}
In the following, $\mathbb{T}$ is the torus $\mathbb{T} := \mathbb{R}/\mathbb{Z}$, $\mathcal{D}(\mathbb{T}) = C^\infty(\mathbb{T})$ is the space of smooth periodic functions on $\mathbb{T}$, and $\mathcal{D}(\mathbb{T})'$ is the space of periodic distributions on $\mathbb{T}$.
For $f \in \mathcal{D}(\mathbb{T})'$, we define its Fourier coefficients using the duality pairing by
\[
\widehat{f}(n) := \langle f, e_{-n}\rangle_{\mathcal{D}(\mathbb{T})',\mathcal{D}(\mathbb{T})},\quad e_n := t\in \mathbb{T} \mapsto e^{2\mathrm{i}\pi n t}.
\]
For $s \in \mathbb{R}$, the Sobolev space of order $s$ on $\mathbb{T}$ is defined as
\[
H^s(\mathbb{T}) := \{f \in \mathcal{D}(\mathbb{T})' : \sum_{n =-\infty}^{+\infty}(1+|n|)^{2s}|\widehat{f}(n)|^2 < +\infty\}.
\]
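As a small numerical illustration (not part of the paper), the $H^s(\mathbb{T})$ norm can be approximated from equispaced samples via the FFT, with the discrete Fourier coefficients standing in for $\widehat{f}(n)$:

```python
import numpy as np

def sobolev_norm_T(samples, s):
    """Approximate H^s(T) norm of a 1-periodic function from N equispaced
    samples, using discrete Fourier coefficients as stand-ins for f-hat(n)."""
    N = len(samples)
    coeffs = np.fft.fft(samples) / N           # f-hat(n), up to aliasing
    n = np.fft.fftfreq(N, d=1.0 / N)           # integer frequencies n
    return np.sqrt(np.sum((1.0 + np.abs(n)) ** (2 * s) * np.abs(coeffs) ** 2))

t = np.arange(256) / 256.0
e1 = np.exp(2j * np.pi * t)                    # the exponential e_1
print(sobolev_norm_T(e1, 0.0))                 # L^2 norm: 1
print(sobolev_norm_T(e1, 1.0))                 # (1 + |1|)^1 = 2
```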
Set $\ell := |\partial\Omega|$ and let $\gamma : \mathbb{R} \big/ [0,\ell] \to \partial\Omega$ be a smooth arc-length parametrization of $\partial\Omega$. Consider the map
\[
U^* : \mathcal{D}(\mathbb{T}) \to \mathcal{D}(\partial\Omega),\quad (U^*g)(x) := \ell^{-1}g(\ell^{-1}\gamma^{-1}(x)),\ x\in\partial\Omega,
\]
where we have set $\mathcal{D}(\partial\Omega) := C^\infty(\partial\Omega)$. We define the map $U : \mathcal{D}(\partial\Omega)' \to \mathcal{D}(\mathbb{T})'$ as
\begin{equation}\label{eqn:defU}
\langle U f, g\rangle_{\mathcal{D}(\mathbb{T})',\mathcal{D}(\mathbb{T})} := \langle f, U^*g\rangle_{\mathcal{D}(\partial\Omega)',\mathcal{D}(\partial\Omega)}.
\end{equation}
The Sobolev space of order $s\in\mathbb{R}$ on $\partial\Omega$ is defined as
\[
H^s(\partial\Omega) := \{ f \in \mathcal{D}(\partial\Omega)' : Uf \in H^s(\mathbb{T})\}.
\]
\subsection{Periodic pseudo-differential operators}
Let us start by defining periodic pseudo-differential operators on $\mathbb{T}$.
\begin{defin} A linear operator $H$ on $C^\infty(\mathbb{T})$ is a periodic pseudo-differential operator on $\mathbb{T}$ if there exists $h : \mathbb{T} \times \mathbb{Z} \to \mathbb{C}$ such that:
\begin{enumerate}
\item for all $n\in \mathbb{Z}$, $h(\cdot,n) \in C^\infty(\mathbb{T})$,
\item $H$ acts as $Hf = \sum_{n\in\mathbb{Z}} h(\cdot,n) \widehat{f}(n) e_n$,
\item there exists $\alpha \in \mathbb{R}$ such that for all $p,q\in\mathbb{N}_0$ there exists $c_{p,q}>0$ such that there holds
\[
\Big|\big(\frac{d^p}{dt^p} (\omega^q h)\big)(t,n)\Big| \leq c_{p,q}(1+|n|)^{\alpha-q},
\]
where the operator $\omega$ is defined for all $(t,n)\in\mathbb{T}\times\mathbb{Z}$ by $(\omega h)(t,n) := h(t,n+1) - h(t,n)$.
\end{enumerate}
$\alpha$ is called the order of the pseudo-differential operator $H$. The set of pseudo-differential operators of order $\alpha$ on $\mathbb{T}$ is denoted $\Psi^\alpha$ and we define
\[
\Psi^{-\infty} := \bigcap_{\alpha\in\mathbb{R}} \Psi^{\alpha}.
\]
\end{defin}
\begin{exm} For further use, we introduce the example of multiplication operators. Consider $H : C^\infty(\mathbb{T}) \to C^\infty(\mathbb{T})$ defined as
\[
(Hf)(t) := h(t) f(t),\quad h\in C^\infty(\mathbb{T}).
\]
Decomposing in Fourier series, one immediately obtains
\[
(Hf) = \sum_{n\in\mathbb{Z}} h\widehat{f}(n) e_n.
\]
There holds $\omega^q h =0$ for all $q\geq1$ and, as $h\in C^\infty(\mathbb{T})$, for all $t\in\mathbb{T}$ we obtain
\[
\Big|\big(\frac{d^p h}{dt^p}\big)(t)\Big| \leq c_{p},\quad\text{for some } c_p>0
\]
and we get $H \in \Psi^0$.
\label{exm:pseudo0}
\end{exm}
Using the map $U$ defined in \eqref{eqn:defU}, we define periodic pseudo-differential operators on $\partial\Omega$ as follows.
\begin{defin} A linear operator $H$ on $C^\infty(\partial\Omega)$ is a periodic pseudo-differential operator on $\partial\Omega$ of order $\alpha\in \mathbb{R}$ if the operator $H_0 := U H U^{-1} \in \Psi^\alpha$. The set of pseudo differential operators on $\partial\Omega$ of order $\alpha$ is denoted $\Psi^\alpha_{\partial\Omega}$ and we set
\[
\Psi_{\partial\Omega}^{-\infty} := \bigcap_{\alpha\in\mathbb{R}}\Psi_{\partial\Omega}^\alpha.
\]
\end{defin}
We will need the following properties of pseudo-differential operators on $\partial\Omega$. They can be found in \cite[\S 5.8 \& 5.9]{SV}.
\begin{prop}\label{prop:pseudoimpo} Let $s,\alpha,\beta\in \mathbb{R}$ and $H\in \Psi_{\partial\Omega}^\alpha, G \in \Psi_{\partial\Omega}^\beta$.
\begin{enumerate}
\item $H$ extends uniquely to a bounded linear operator, also denoted $H$, from $H^s(\partial\Omega)$ to $H^{s-\alpha}(\partial\Omega)$.
\item\label{itm:22} \label{itm:pseudoimp1}There holds
\[
H + G \in \Psi_{\partial\Omega}^{\max(\alpha,\beta)},\quad HG \in \Psi_{\partial\Omega}^{\alpha+\beta},\quad [H,G] \in \Psi_{\partial\Omega}^{\alpha+\beta -1}.
\]
\end{enumerate}
\end{prop}
\subsection{Cauchy singular integral operators}
For $f \in C^\infty(\partial\Omega)$, the Cauchy singular integral operator is defined as a principal value by
\[
S_{\rm h}(f)(z) := \frac1{\mathrm{i}\pi} {\rm p.v.} \int_{\partial\Omega} \frac{f(\xi)}{\xi-z}\,d\xi,\quad z\in \partial\Omega.
\]
We define its anti-holomorphic counterpart as
\[
S_{\rm ah}(f)(z) := \overline{S_{\rm h}(\overline{f})(z)} = -\frac1{\mathrm{i}\pi} {\rm p.v.} \int_{\partial\Omega} \frac{f(\xi)}{\overline{\xi}-\bar z}\,d\overline{\xi},\quad z\in \partial\Omega.
\]
It turns out that $S_{\rm h}$ and $S_{\rm ah}$ are periodic pseudo-differential operators on $\partial\Omega$; this is the purpose of the following proposition.
\begin{prop} The linear maps $S_{\rm h}$ and $S_{\rm ah}$ are periodic pseudo-differential operators of order $0$. In particular, they are bounded linear operators from $H^s(\partial\Omega)$ onto itself for all $s\in \mathbb{R}$.
\label{prop:mapcauchy}
\end{prop}
\begin{proof} This is proved in \cite[Prop. 2.9]{BHOBP19}, where the operators $S_{\rm h}$ and $S_{\rm ah}$ are denoted $C_{\Sigma}$ and $-C_\Sigma'$ respectively (with $\Sigma := \partial\Omega$).
\end{proof}
We will also need the following property.
\begin{prop} Let $H_\mathbf{n}$ be the multiplication operator by the normal $\mathbf{n}$ in $C^\infty(\partial\Omega)$. There holds:
\begin{enumerate}
\item\label{itm:1pseudo} $H_{\mathbf{n}}$ is a periodic pseudo-differential operator of order $0$.
\item\label{itm:2pseudo} For $\sharp \in \{{\rm h},{\rm ah}\}$, there holds $[H_{\mathbf{n}},S_\sharp] \in \Psi_{\partial\Omega}^{-1}$.
\item\label{itm:3pseudo} There holds $S_{\rm ah} + S_{\rm h} \in \Psi_{\partial\Omega}^{-\infty}$.
\end{enumerate}
\label{prop:pseudocommut}
\end{prop}
\begin{proof} Point \eqref{itm:1pseudo} is proved by remarking that the operator $U H_{\mathbf{n}}U^{-1}$ is a multiplication operator on $\mathbb{T}$. Thanks to Example \ref{exm:pseudo0}, we know that $U H_{\mathbf{n}}U^{-1} \in \Psi^0$, hence by definition we get $H_{\mathbf{n}}\in \Psi_{\partial\Omega}^0$.
Let us deal with Point \eqref{itm:2pseudo}. Let $\sharp \in \{{\rm h},{\rm ah}\}$; by Proposition \ref{prop:mapcauchy}, $S_\sharp \in \Psi_{\partial\Omega}^0$ and by Point \eqref{itm:1pseudo}, $H_{\mathbf{n}} \in \Psi_{\partial\Omega}^0$. Hence, by Point \eqref{itm:pseudoimp1} of Proposition \ref{prop:pseudoimpo}, we obtain Point \eqref{itm:2pseudo}.
Finally, we prove Point \eqref{itm:3pseudo}. By \cite[Proposition 2.9.]{BHOBP19} there exists $L\in \Psi_{\partial\Omega}^0$ and $R_1,R_2 \in \Psi_{\partial\Omega}^{-\infty}$ such that
\[
S_{\rm h} = L + R_1,\quad S_{\rm ah} = -L + R_2.
\]
Hence, $S_{\rm h} + S_{\rm ah} = R_1 + R_2 \in \Psi_{\partial\Omega}^{-\infty}$ by Point \eqref{itm:22} of Proposition \ref{prop:pseudoimpo}.
\end{proof}
\section{Maximal Wirtinger operators}
\label{sec:maxc-rop}
In this section we describe elementary properties of the maximal Wirtinger operators defined as
\begin{align*}
{\partial}_{\rm h} u = \partial_{\bar z}u ,& \quad \dom( \partial_{\rm h}) := \{u \in L^2(\Omega) : \partial_{\bar z}u \in L^2(\Omega)\},\\
{\partial}_{\rm ah} u = \partial_z u ,& \quad \dom( \partial_{\rm ah}) := \{u \in L^2(\Omega) : \partial_{z}u \in L^2(\Omega)\}.
\end{align*}
For $\sharp \in \{{\rm h}, {\rm ah}\}$, consider the graph norms $\|\cdot\|_{\sharp}$ defined as
\[
\|u\|_{\sharp}^2 := \|\partial_{\sharp} u\|_{L^2(\Omega)}^2 + \|u\|_{L^2(\Omega)}^2,\quad u\in \dom(\partial_{\sharp}).
\]
In particular, $\dom(\partial_{\sharp})$ endowed with the scalar product defined for $u,v\in\dom(\partial_{\sharp})$ by
\[
\langle u,v\rangle_\sharp =\langle\partial_\sharp u,\partial_\sharp v\rangle_{L^2(\Omega)} + \langle u,v\rangle_{L^2(\Omega)}
\]
is a Hilbert space: completeness follows from the closedness of $\partial_\sharp$, which is itself a consequence of the continuity of differentiation in $\mathcal{D}'(\Omega)$.
The first lemma is obtained by a simple integration by parts.
\begin{lem} The following identities hold.
\[
H^1(\mathbb{R}^2) = \{f\in L^2(\mathbb{R}^2) : \partial_z f \in L^2(\mathbb{R}^2)\} = \{f\in L^2(\mathbb{R}^2) : \partial_{\bar z} f \in L^2(\mathbb{R}^2)\}
\]
\label{lem:equivfullspace}
\end{lem}
\begin{proof} Let $f\in C_0^\infty(\mathbb{R}^2)$. Integrating by parts several times, we obtain:
\begin{align*}
\|\nabla f\|_{L^2(\mathbb{R}^2)}^2 = \langle f, -\Delta f\rangle_{L^2(\mathbb{R}^2)} &= 4 \langle f, -\partial_{z}\partial_{\bar z} f\rangle_{L^2(\mathbb{R}^2)} = 4 \|\partial_{\bar z} f\|_{L^2(\mathbb{R}^2)}^2\\
&= 4 \langle f, -\partial_{\bar z}\partial_{z} f\rangle_{L^2(\mathbb{R}^2)} = 4\|\partial_z f\|_{L^2(\mathbb{R}^2)}^2.
\end{align*}
As $C_0^\infty(\mathbb{R}^2)$ is dense in $H^1(\mathbb{R}^2)$ as well as in the two other spaces, we obtain the expected result.
\end{proof}
The next lemma is a density result.
\begin{lem} Let $\sharp \in \{{\rm h}, {\rm ah}\}$. The space $C^\infty(\overline{\Omega}) := C^\infty(\overline{\Omega},\mathbb{C})$ is dense in $\dom(\partial_{\sharp})$.
\end{lem}
\begin{proof} It suffices to prove that the only element of $\dom(\partial_{\rm h})$ orthogonal to $C^\infty(\overline{\Omega})$ for $\langle\cdot,\cdot\rangle_{\rm h}$ is zero. Let $u \in \dom(\partial_{\rm h})$ and assume that for all $\varphi \in C^\infty(\overline{\Omega})$ there holds
\[
0 = \ps{u}{\varphi}_{\rm h} = \ps{\partial_{\bar z} u}{\partial_{\bar z} \varphi}_{L^2(\Omega)} + \ps{u}{\varphi}_{L^2(\Omega)}.
\]
In particular, if $\varphi \in C_0^\infty(\Omega)$, we obtain $-\Delta u = - 4 u$ first in $\mathcal{D}'(\Omega)$ then in $L^2(\Omega)$. Define $v = \partial_{\bar z} u$ and denote by $v_0$ its extension to the whole $\mathbb{R}^2$ by 0. For $\varphi \in C_0^\infty(\mathbb{R}^2)$ there holds
\begin{align*}
\langle \partial_z v_0,\overline{\varphi}\rangle_{\mathcal{D}'(\mathbb{R}^2),\mathcal{D}(\mathbb{R}^2)} &= - \langle v_0,\overline{\partial_{\bar z}\varphi}\rangle_{\mathcal{D}'(\mathbb{R}^2),\mathcal{D}(\mathbb{R}^2)}\\& = - \langle v,\partial_{\bar z}\varphi\rangle_{L^2(\Omega)}\\& = - \langle \partial_{\bar z} u,\partial_{\bar z}\varphi\rangle_{L^2(\Omega)}\\& = \langle u,\varphi\rangle_{L^2(\Omega)}\\
& = \langle u_0,\varphi\rangle_{L^2(\mathbb{R}^2)}\\
& = \langle u_0,\overline{\varphi}\rangle_{\mathcal{D}'(\mathbb{R}^2),\mathcal{D}(\mathbb{R}^2)},
\end{align*}
where $u_0$ denotes the extension by zero of $u$ to the whole $\mathbb{R}^2$. It gives $\partial_z v_0 = u_0 \in L^2(\mathbb{R}^2)$. By Lemma \ref{lem:equivfullspace}, $v_0$ is in $H^1(\mathbb{R}^2)$ and by \cite[Prop. IX.18.]{Bre} we get $v\in H_0^1(\Omega)$. Remark that in $\mathcal{D}'(\Omega)$, there holds $\partial_{\bar z}\partial_{ z} v = v$. Indeed, as $\partial_{z}\partial_{\bar z} u = \frac14\Delta u = u$, we have
\[
\partial_{\bar z}\partial_{ z} v = \partial_{\bar z}\big(\partial_{z}\partial_{\bar z} u\big) = \partial_{\bar z} u = v.
\]
In particular this identity also holds true in $L^2(\Omega)$. Now, pick a sequence $v_n \in C_0^\infty(\Omega)$ converging to $v$ in the $H^1(\Omega)$-norm. There holds
\begin{align*}
\langle v,v_n\rangle_{L^2(\Omega)} = \langle \partial_{z}\partial_{\bar z} v, v_n\rangle_{L^2(\Omega)} &= -\langle \partial_{\bar z} v, \overline{\partial_{\bar z}v_n}\rangle_{\mathcal{D}'(\Omega),\mathcal{D}(\Omega)}\\&=- \langle \partial_{\bar z} v, \partial_{\bar z}v_n\rangle_{L^2(\Omega)}.
\end{align*}
Letting $n\to +\infty$, one obtains $\|v\|_{L^2(\Omega)}^2 = - \|\partial_{\bar z} v\|_{L^2(\Omega)}^2$, which implies $v = 0$. In $\mathcal{D}'(\Omega)$ we have $\partial_z v = \partial_z \partial_{\bar z} u = u$. As $v=0$, we get $u=0$, which concludes the proof for $\sharp = {\rm h}$. The case $\sharp = {\rm ah}$ is handled similarly.
\end{proof}
In order to describe precisely the domains $\dom(\partial_\sharp)$ ($\sharp \in \{{\rm h}, {\rm ah}\}$) we need to prove the existence of traces on $\partial\Omega$ for functions in $\dom(\partial_\sharp)$. To this aim, define the following Dirichlet trace operators
\begin{equation}\label{eqn:deftrace}
\Gamma^+ : H^1(\Omega) \to H^{\frac12}(\partial\Omega),\quad \Gamma^- : H_{loc}^1(\mathbb{R}^2\setminus\overline{\Omega}) \to H^{\frac12}(\partial\Omega).
\end{equation}
These linear operators are known to be bounded (see \cite[Thm. 3.37]{McLean}) and there exist continuous extension operators $E^+$ and $E^-$ such that for $f\in H^{\frac12}(\partial\Omega)$ there holds
\[
E^+f \in H^{1}(\Omega),\quad E^-f\in H^1(\mathbb{R}^2\setminus\overline{\Omega}) \quad\text{and}\quad \Gamma^\pm E^\pm f = f.
\]
Actually, the operator $\Gamma^+$ can be extended to functions in $\dom(\partial_\sharp)$ ($\sharp \in \{{\rm h},{\rm ah}\}$). This is the purpose of the following lemma.
\begin{lem}Let $\sharp \in \{{\rm h}, {\rm ah}\}$. The operator $\Gamma^+$ defined in \eqref{eqn:deftrace} extends to a bounded linear operator from $\dom(\partial_{\sharp})$ to $H^{-\frac12}(\partial\Omega)$.
\label{lem:extop}
\end{lem}
\begin{proof} Let $v \in \dom(\partial_{\rm h})$ and let $(v_n)_{n\in\mathbb{N}} \in C^\infty(\overline{\Omega})^\mathbb{N}$ be a sequence that converges to $v$ in the $\|\cdot\|_{\rm h}$-norm when $n\to +\infty$. Let us prove that $(\Gamma^+ v_n)_{n\in\mathbb{N}}$ has a limit in $H^{-\frac12}(\partial\Omega)$. First recall the integration by parts formula
\[
\frac12 \langle \Gamma^+u,\overline{\bf n} \Gamma^+w\rangle_{L^2(\partial\Omega)} = \langle \partial_{\bar z} u, w\rangle_{L^2(\Omega)} + \langle u,\partial_{z}w\rangle_{L^2(\Omega)}
\]
valid for any $u,w \in H^1(\Omega)$. Second, pick $f \in H^{\frac12}(\partial\Omega)$ and consider $w = E^+ ({\bf n} f) \in H^{1}(\Omega)$. There holds
\[
\langle \Gamma^+(v_n - v_m),f\rangle_{L^2(\partial\Omega)} = 2 \langle \partial_{\bar z} (v_n - v_m), w\rangle_{L^2(\Omega)} + 2 \langle v_n-v_m,\partial_{z}w\rangle_{L^2(\Omega)}.
\]
In particular, we have
\begin{align*}
\big|\langle \Gamma^+(v_n - v_m),f\rangle_{L^2(\partial\Omega)}\big| &\leq 2 \|\partial_{\bar z} (v_n - v_m)\|_{L^2(\Omega)}\|w\|_{L^2(\Omega)} + 2\|v_n-v_m\|_{L^2(\Omega)}\|\partial_z w\|_{L^2(\Omega)}\\
& \leq 4 \|w\|_{H^1(\Omega)}\|v_n - v_m\|_{\rm h}\\
& \leq 4 c_\Omega \|f\|_{H^{\frac12}(\partial\Omega)}\|v_n - v_m\|_{\rm h}\quad(\text{for some } c_\Omega > 0),
\end{align*}
where we have used that $E^+$ is a continuous linear map and that the multiplication operator by $\bf n$ is bounded from $H^{\frac12}(\partial\Omega)$ onto itself. When $n,m \to +\infty$ we obtain $\|\Gamma^+ (v_n - v_m)\|_{H^{-\frac12}(\partial\Omega)} \to 0$. In particular $(\Gamma^+v_n)_{n\in\mathbb{N}}$ is a Cauchy sequence in $H^{-\frac12}(\partial\Omega)$ thus converges to an element $g\in H^{-\frac12}(\partial\Omega)$ and we define $\Gamma^+ v := g$. Remark that the definition of $\Gamma^+v$ does not depend on the chosen sequence $(v_n)_{n\in\mathbb{N}}$ and that we have
\[
\|\Gamma^+ v_n\|_{H^{-\frac12}(\partial\Omega)} \leq 4 c_\Omega \|v_n\|_{\rm h}
\]
which implies, when $n\to +\infty$, that $\Gamma^+$ is bounded from $\dom(\partial_{\rm h})$ to $H^{-\frac12}(\partial\Omega)$. The proof for $\dom(\partial_{\rm ah})$ is handled similarly.
\end{proof}
\begin{rem}\label{rem:extop} If one picks $R>0$ such that $\overline{\Omega} \subset B(0,R) := \{x\in \mathbb{R}^2 : \|x\| < R\}$, one can prove that for $\star \in \{z,\overline{z}\}$, $\Gamma^-$ extends into a linear bounded operator between the space $\{u \in L^2(B(0,R)\setminus\Omega) : \partial_\star u \in L^2(B(0,R)\setminus\Omega)\}$ and $H^{-\frac12}(\partial\Omega)$. The proof goes along the same lines as the one of Lemma \ref{lem:extop}, using an extension operator $E^- : H^{\frac12}(\partial\Omega) \to H^1(B(0,R)\setminus\Omega)$ constructed such that for all $f\in H^{\frac12}(\partial\Omega)$, $E^-(f)|_{\partial B(0,R)} = 0$.
\end{rem}
\begin{rem} Pick $u\in\dom(\partial_{\rm ah})$ and $w\in H^1(\Omega)$. Note that by definition, the following Green's Formula holds
\begin{equation}
\langle\partial_z u,w\rangle_{L^2(\Omega)} = - \langle u,\partial_{\bar z}w\rangle_{L^2(\Omega)} + \frac12\langle \overline{\bf n}\Gamma^+u,\Gamma^+w\rangle_{H^{-\frac12}(\partial\Omega),H^{\frac12}(\partial\Omega)}.
\label{eqn:Green}
\end{equation}
\end{rem}
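\begin{rem} For the holomorphic side, the same density argument yields the analogous formula: for $u\in\dom(\partial_{\rm h})$ and $w \in H^1(\Omega)$, there holds
\[
\langle\partial_{\bar z} u,w\rangle_{L^2(\Omega)} = - \langle u,\partial_{z}w\rangle_{L^2(\Omega)} + \frac12\langle {\bf n}\Gamma^+u,\Gamma^+w\rangle_{H^{-\frac12}(\partial\Omega),H^{\frac12}(\partial\Omega)}.
\]
\end{rem}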
The following elliptic regularity result is rather well known (see the analogous statement \cite[Lemma 2.4.]{BFVdBS17}).
\begin{lem}\label{lem:ellipregul} Let $\sharp \in \{{\rm h}, {\rm ah}\}$ and $u \in \dom(\partial_{\sharp})$. If $\Gamma^+ u \in H^{\frac12}(\partial\Omega)$ then $u \in H^1(\Omega)$.
\end{lem}
\begin{proof} Let $u \in \dom(\partial_{\rm h})$ be such that $\Gamma^+ u \in H^{\frac12}(\partial\Omega)$ and set $v = u - E^+(\Gamma^+ u)$. Then, $\Gamma^+ v = 0$ and if we show that $v \in H_0^1(\Omega)$, the result is proved. Let $(v_n)_{n\in\mathbb{N}} \in C^\infty(\overline{\Omega})^{\mathbb{N}}$ be a sequence converging to $v$ in the $\|\cdot\|_{\rm h}$-norm; there holds $\Gamma^+ v_n \to 0$ in $H^{-\frac12}(\partial\Omega)$ by Lemma \ref{lem:extop}. In particular, it gives for any $w \in H^1(\Omega)$
\begin{align*}
\langle v, \partial_z w \rangle_{L^2(\Omega)} &= \lim_{n\to +\infty}\Big(- \langle \partial_{\bar z}v_n, w \rangle_{L^2(\Omega)} + \frac12 \langle \Gamma^+v_n, \overline{\bf n}\Gamma^+w \rangle_{L^2(\partial\Omega)}\Big)\\
& = - \langle \partial_{\bar z}v, w\rangle_{L^2(\Omega)}.
\end{align*}
Let $v_0$ (resp. $h_0$) be the extension of $v$ (resp. $h := \partial_{\bar z}v$) by zero to the whole $\mathbb{R}^2$. If $\varphi \in C_0^\infty(\mathbb{R}^2)$, there holds
\begin{align*}
- \langle h_0,\overline{\varphi}\rangle_{\mathcal{D}'(\mathbb{R}^2),\mathcal{D}(\mathbb{R}^2)} = - \langle h,\varphi\rangle_{L^2(\Omega)} &= \langle v,{\partial_{z}\varphi}\rangle_{L^2(\Omega)}\\& = \langle v_0, \partial_{\bar z}\overline{\varphi}\rangle_{\mathcal{D}'(\mathbb{R}^2),\mathcal{D}(\mathbb{R}^2)}\\
& = - \langle \partial_{\bar z }v_0,\overline{\varphi}\rangle_{\mathcal{D}'(\mathbb{R}^2),\mathcal{D}(\mathbb{R}^2)}.
\end{align*}
Thus $\partial_{\bar z}v_0 = h_0 \in L^2(\mathbb{R}^2)$ and by Lemma \ref{lem:equivfullspace}, $v_0 \in H^1(\mathbb{R}^2)$ and $v\in H_0^1(\Omega)$. The proof for $u \in \dom(\partial_{\rm ah})$ is handled similarly.
\end{proof}
\section{Bergman and Hardy spaces on $\Omega$}\label{sec:berghard}
We introduce the holomorphic and anti-holomorphic Bergman spaces on $\Omega$, denoted $\mathcal{A}_{\rm h}^2(\Omega)$ and $\mathcal{A}_{\rm ah}^2(\Omega)$, respectively. They are defined as
\[
\mathcal{A}_{\rm h}^2(\Omega) := Hol(\Omega)\cap L^2(\Omega),\quad \mathcal{A}_{\rm ah}^2(\Omega) := \{u : \overline{u}\in \mathcal{A}_{\rm h}^2(\Omega)\},
\]
where $Hol(\Omega)$ denotes the space of holomorphic functions in $\Omega$. The holomorphic and anti-holomorphic Hardy spaces, denoted $\mathcal{H}_{\rm h}^2(\Omega)$ and $\mathcal{H}_{\rm ah}^2(\Omega)$, respectively, are defined as
\begin{equation}
\mathcal{H}_{\rm h}^2(\Omega) := \{u\in \mathcal{A}_{\rm h}^2(\Omega) : \Gamma^+u \in L^2(\partial\Omega)\},\quad \mathcal{H}_{\rm ah}^2(\Omega) := \{u : \overline{u} \in \mathcal{H}_{\rm h}^2(\Omega)\}.
\label{eqn:Hardyspace}
\end{equation}
This section aims to describe explicitly the Bergman and Hardy spaces on $\Omega$ in terms of Cauchy integrals and Szeg\"o projectors, which we define now.
For $f \in C^\infty(\partial\Omega)$ consider the Cauchy integrals defined for $z\in \mathbb{C} \setminus \partial\Omega$ by
\[
\Phi_{\rm h}(f) (z) := \frac{1}{2\mathrm{i} \pi}\int_{\partial\Omega}\frac{f(\xi)}{\xi - z}d\xi,\quad \Phi_{\rm ah}(f) (z) := -\frac{1}{2 \mathrm{i} \pi} \int_{\partial\Omega}\frac{f(\xi)}{\overline{\xi} - \overline{z}}d\overline{\xi}.
\]
It is well-known (see \cite[\S 4.1.2.]{SV}) that $\Phi_{\rm h}(f)$ (resp. $\Phi_{\rm ah}(f)$) defines a holomorphic function (resp. anti-holomorphic function) in $\mathbb{R}^2\setminus\partial\Omega$.
The well-known Plemelj-Sokhotski formula (see \cite[Thm. 4.1.1]{SV}) states that for $f \in C^\infty(\partial\Omega)$ the functions $\Phi_{\rm h}(f)$ and $\Phi_{\rm ah}(f)$ have an interior and an exterior Dirichlet trace, denoted respectively $\gamma_0^+$ and $\gamma_0^-$, such that:
\begin{equation}\label{eqn:Plemeljdef}
\gamma_0^\pm \Phi_{\rm h}(f) = \pm \frac12 f + \frac12 S_{\rm h}f,\quad \gamma_0^\pm \Phi_{\rm ah}(f) = \pm \frac12 f + \frac12 S_{\rm ah}f.
\end{equation}
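In particular, subtracting the two traces yields the jump formulas
\[
\gamma_0^+ \Phi_{\rm h}(f) - \gamma_0^- \Phi_{\rm h}(f) = f,\quad \gamma_0^+ \Phi_{\rm ah}(f) - \gamma_0^- \Phi_{\rm ah}(f) = f.
\]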
Let $\sharp \in \{{\rm h},{\rm ah}\}$. Note that by \cite[Theorem 3.1.]{Bell}, for $f\in C^\infty(\partial\Omega)$ we know that $\Phi_\sharp(f)|_{\Omega} \in C^\infty(\overline{\Omega})$ as well as $\Phi_\sharp(f)|_{\mathbb{R}^2\setminus\overline{\Omega}}\in C^\infty(\mathbb{R}^2\setminus\Omega)$. In particular, the traces $\gamma_0^\pm\Phi_\sharp(f)$ coincide with $\Gamma^\pm \Phi_\sharp(f)$, where $\Gamma^\pm$ are the trace operators defined in Lemma \ref{lem:extop} and Remark \ref{rem:extop}.
\begin{defin}\label{def:szeproj} We define the Szeg\"o projectors in $C^\infty(\partial\Omega)$ by
\begin{equation}\label{eqn:szegodef}
\Pi_{\rm h}^\pm := \pm \Gamma^\pm \Phi_{\rm h},\quad \Pi_{\rm ah}^\pm := \pm\Gamma^\pm \Phi_{\rm ah}.
\end{equation}
\end{defin}
\begin{prop} Let $s\in\mathbb{R}$ and $\sharp \in \{{\rm h},{\rm ah}\}$. The Szeg\"o projectors $\Pi_{\sharp}^\pm$ extend uniquely to bounded linear operators from $H^s(\partial\Omega)$ to itself. Moreover, $\Pi_\sharp^\pm$ are projectors and $\Pi_\sharp^+ + \Pi_\sharp^-= 1$.
\label{prop:proj_conti}
\end{prop}
\begin{proof}Remark that for $\sharp\in\{{\rm h},{\rm ah}\}$ and $f \in C^\infty(\partial\Omega)$, there holds
\[
\Pi_{\sharp}^\pm f = \frac12 f \pm \frac12 S_{\sharp}f.
\]
By Proposition \ref{prop:mapcauchy}, $\Pi_\sharp^\pm$ extends to a bounded linear operator from $H^s(\partial\Omega)$ to itself for all $s\in\mathbb{R}$.
Let $s\in\mathbb{R}$ and $f\in H^s(\partial\Omega)$. A fundamental fact is that $S_{\rm h}^2f = f$ (see \cite[Eqn. (4.10)]{SV}); by complex conjugation, it implies that $S_{\rm ah}^2f = f$. Hence, we obtain
\begin{align*}
(\Pi_\sharp^\pm)^2 &= \Big(\frac12 \pm \frac12S_{\sharp}\Big)\Big(\frac12 \pm \frac12S_{\sharp}\Big)\\&=\frac14 + \frac14 S_{\sharp}^2 \pm \frac12 S_\sharp\\
&= \frac12 \pm \frac12S_\sharp\\
& = \Pi_\sharp^\pm.
\end{align*}
Hence $\Pi_\sharp^\pm$ are projectors and one easily checks that $\Pi_\sharp^+ + \Pi_\sharp^- = 1$.
\end{proof}
The main goal of this section is to prove the following description of the Bergman and Hardy spaces. As we will see further on in Proposition \ref{prop:extphi}, this description relies on an extension of the operators $\Phi_\sharp$ to Sobolev spaces on the boundary $\partial\Omega$ ($\sharp \in \{{\rm h},{\rm ah}\}$).
\begin{thm}\label{thm:berhar} Let $\sharp \in \{{\rm h},{\rm ah}\}$. The Bergman spaces satisfy
\[
\mathcal{A}_\sharp^2(\Omega) = \{\Phi_\sharp(f) : f \in H^{-\frac12}(\partial\Omega), \Pi_\sharp^- f = 0\}.
\]
The Hardy spaces verify
\[
\mathcal{H}_\sharp^2(\Omega) = \{\Phi_\sharp(f) : f \in L^{2}(\partial\Omega), \Pi_\sharp^- f = 0\}.
\]
\end{thm}
\subsection{Potential theory of the Wirtinger derivatives}
In this paragraph we prove the following proposition.
\begin{prop}\label{prop:extphi} Let $\sharp \in \{{\rm h}, {\rm ah}\}$ and $s\in\{-\frac12,0,\frac12\}$. The operator $\Phi_{\sharp}$ extends uniquely to a bounded operator from $H^{s}(\partial\Omega)$ to $H^{s+\frac12}(\Omega)$, also denoted $\Phi_\sharp$.
\end{prop}
In order to prove Proposition \ref{prop:extphi}, we will need a few lemmas. Let us start by defining fundamental solutions of the Wirtinger operators $\partial_{\rm h}$ and $\partial_{\rm ah}$:
\[
\varphi_{\rm h}(x) = \frac{1}{\pi (x_1 + i x_2)},\quad \varphi_{\rm ah}(x) = \frac{1}{ \pi(x_1 - i x_2)}.
\]
\begin{lem} Let $\sharp\in \{{\rm h}, {\rm ah}\}$. The linear map
\[
N_\sharp : u\in L^2(\Omega) \mapsto \varphi_{\sharp}\ast u_0
\]
is bounded from $L^2(\Omega)$ to $H_{loc}^1(\mathbb{R}^2)$. Here $u_0$ denotes the extension of $u$ by zero to the whole $\mathbb{R}^2$.
\label{lem:newtonpot}
\end{lem}
\begin{proof} Let us prove it for $\sharp = {\rm h}$, the proof for $\sharp = {\rm ah}$ being similar. In the space of distributions $\mathcal{D}'(\mathbb{R}^2)$, there holds
\begin{equation}\label{eqn:fundsolz}
\partial_{\bar z} \varphi_{\rm h} = \delta_0,
\end{equation}
where $\delta_0$ is the Dirac delta distribution.
Now, for $u$ in the Schwartz space $\mathcal{S}(\mathbb{R}^2)$ recall that the Fourier transform of $u$ is defined as
\[
\widehat{u}(k) := \int_{\mathbb{R}^2} u(x) e^{-2\mathrm{i}\pi \langle x,k\rangle_{\mathbb{R}^2}} dx,\quad \text{for all } k\in\mathbb{R}^2
\]
and $\widehat{u}\in \mathcal{S}(\mathbb{R}^2)$. The Fourier transform extends to the space of tempered distributions $\mathcal{S}'(\mathbb{R}^2)$ and, as $\delta_0 \in \mathcal{S}'(\mathbb{R}^2)$, taking the Fourier transform of \eqref{eqn:fundsolz} yields
\[
\widehat{\varphi_{\rm h}}(k) = \frac{1}{\pi \mathrm{i} (k_1 + \mathrm{i} k_2)},\quad k = (k_1,k_2)\in\mathbb{R}^2\setminus\{(0,0)\}.
\]
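This expression can be checked directly: with the convention above, $\widehat{\partial_{x_j} u}(k) = 2\mathrm{i}\pi k_j\, \widehat{u}(k)$ and $\partial_{\bar z} = \frac12(\partial_{x_1} + \mathrm{i}\,\partial_{x_2})$, so that
\[
1 = \widehat{\delta_0}(k) = \widehat{\partial_{\bar z}\varphi_{\rm h}}(k) = \frac12\big(2\mathrm{i}\pi k_1 + \mathrm{i}\cdot 2\mathrm{i}\pi k_2\big)\widehat{\varphi_{\rm h}}(k) = \mathrm{i}\pi(k_1 + \mathrm{i} k_2)\,\widehat{\varphi_{\rm h}}(k).
\]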
Let $K$ be a compact subset of $\mathbb{R}^2$ and take $u \in L^2(\Omega)$. We extend $u$ by zero to $\mathbb{R}^2$ and denote this extension by $u_0 \in L^2(\mathbb{R}^2)$. By the Plancherel theorem, there holds
\begin{align*}
\|\varphi_{\rm h}\ast u_0\|_{H^1(K)}^2 &\leq \|\varphi_{\rm h}\ast u_0\|_{L^2(K)}^2 + 4\pi^2\int_{\mathbb{R}^2} |k|^2|(\widehat{\varphi_{\rm h}\ast u_0})(k)|^2 dk\\&= \|\varphi_{\rm h}\ast u_0\|_{L^2(K)}^2 + 4\pi^2\int_{\mathbb{R}^2} |k|^2|\widehat{\varphi_{\rm h}}(k)\widehat{u_0}(k)|^2dk\\& = \|\varphi_{\rm h}\ast u_0\|_{L^2(K)}^2 + 4\int_{\mathbb{R}^2} |\widehat{u_0}(k)|^2dk\\& = \|\varphi_{\rm h}\ast u_0\|_{L^2(K)}^2 + 4\|u\|_{L^2(\Omega)}^2.
\end{align*}
Now, let $R>0$ be such that $K \subset \{ x \in \mathbb{R}^2 : |x| < R\}$ and $\overline{\Omega} \subset \{ x \in \mathbb{R}^2 : |x| < R\}$. Consider a cut-off function $\chi \in C_0^\infty([0,+\infty))$, with $0\leq\chi\leq1$, such that
\[
\chi(\rho) = 1\text{ whenever } 0\leq\rho < 2R,\quad \chi(\rho) = 0 \text{ whenever } \rho > 3R.
\]
Define the function $u_\chi$ as
\[
u_\chi (x) := \int_{\mathbb{R}^2}\chi(|x-y|) \varphi_{\rm h}(x-y) u_0(y) dy.
\]
As defined, $u_\chi|_{K}\equiv (\varphi_{\rm h}\ast u_0)|_{K}$ because $|x-y| < 2R$ whenever $x \in K$ and $y \in \overline{\Omega}$. Hence, we get
\begin{align*}
\|\varphi_{\rm h}\ast u_0\|_{L^2(K)} = \|u_\chi\|_{L^2(K)} &\leq \|u_\chi\|_{L^2(\mathbb{R}^2)}\\&\leq \|\chi(|\cdot|)\varphi_{\rm h}\|_{L^1(\mathbb{R}^2)} \|u\|_{L^2(\Omega)},
\end{align*}
where we have used Young's inequality because $\chi(|\cdot|)\varphi_{\rm h} \in L^1(\mathbb{R}^2)$. Indeed, there holds
\[
\|\chi(|\cdot|)\varphi_{\rm h}\|_{L^1(\mathbb{R}^2)} \leq \frac1\pi\int_{B(0,3R)} \frac{1}{|x|}dx = 6 R.
\]
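The last equality is a direct computation in polar coordinates:
\[
\frac1\pi\int_{B(0,3R)} \frac{1}{|x|}dx = \frac1\pi\int_{0}^{2\pi}\int_{0}^{3R} \frac{1}{r}\, r\, dr\, d\theta = 6R.
\]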
In particular, there exists $c_K > 0$, such that
\[
\|\varphi_{\rm h}\ast u_0\|_{H^1(K)} \leq c_K \|u\|_{L^2(\Omega)}.
\]
Hence, for any compact $K\subset \mathbb{R}^2$, $N_\sharp$ is a bounded linear operator from $L^2(\Omega)$ to $H^1(K)$ and the lemma is proved.
\end{proof}
Next, we recall that the Dirichlet trace on $\partial\Omega$ of a function in $H_{loc}^1(\mathbb{R}^2)$ can be defined as
\[
\Gamma : H_{loc}^1(\mathbb{R}^2) \to H^{\frac12}(\partial\Omega)
\]
and is a continuous linear operator (see \cite[Thm. 3.37]{McLean}).
Moreover, for $s \in [0,\ell]$, we introduce ${\bf t}(s) := \gamma_1'(s) + \mathrm{i} \gamma_2'(s)$ the expression of the tangent vector in the complex plane at the point $\gamma_1(s) + \mathrm{i} \gamma_2(s)$.
\begin{lem} The dual adjoints of $({\bf t} \Gamma N_{\rm h})$ and $(\overline{{\bf t}}\Gamma N_{\rm ah})$, denoted $({\bf t} \Gamma N_{\rm h})'$ and $(\overline{{\bf t}}\Gamma N_{\rm ah})'$ respectively, are bounded linear maps from $H^{-\frac12}(\partial\Omega)$ to $L^2(\Omega)$. Moreover if $f\in C^\infty(\partial\Omega)$, in $L^2(\Omega)$ there holds:
\[
\Phi_{\rm ah }(f) = \frac{\mathrm{i}}2({\bf t}\Gamma N_{\rm h})'(f),\quad \Phi_{\rm h}(f) = -\frac{\mathrm{i}}2(\overline{{\bf t}}\Gamma N_{\rm ah})'(f).
\]
\label{lem:map}
\end{lem}
\begin{proof} Thanks to Lemma \ref{lem:newtonpot} and the mapping properties of $\Gamma$, we know that $\Gamma N_\sharp$ is a bounded linear map from $L^2(\Omega)$ to $H^{\frac12}(\partial\Omega)$ (for $\sharp \in \{{\rm h}, {\rm ah}\}$). As $\Omega$ is smooth, ${\bf t} \in C^\infty(\partial\Omega)$ and $\overline{{\bf t}} \in C^\infty(\partial\Omega)$. In particular the multiplication operators by ${\bf t}$ and $\overline{{\bf t}}$ are bounded and invertible in $H^{\frac12}(\partial\Omega)$. Hence, their dual adjoints satisfy the expected mapping property.
Now, pick $f\in C^\infty(\partial\Omega)$ and $v\in L^2(\Omega)$. Denoting by $v_0$ the extension of $v$ by zero to the whole $\mathbb{R}^2$ and using Fubini's theorem, there holds
\begin{align*}
\langle ({\bf t}\Gamma N_{\rm h})'f, v\rangle_{L^2(\Omega)} &= \langle f, {\bf t} \Gamma N_{\rm h}v\rangle_{H^{-\frac12}(\partial\Omega),H^{\frac12}(\partial\Omega)}\\
&= \langle f, {\bf t} \Gamma N_{\rm h}v\rangle_{L^2(\partial\Omega)}\\
& = \int_{x\in\partial\Omega}\int_{y\in\mathbb{R}^2} \frac{f(x)\overline{v_0(y)}\overline{{\bf t}(x)}}{\pi\big((x_1-\mathrm{i} x_2) -(y_1 - \mathrm{i} y_2)\big)}dyds(x)\\
& = \int_{y\in \mathbb{R}^2} \frac1\pi\left(\int_{s=0}^\ell \frac{f(\gamma(s))(\gamma_1'(s) - \mathrm{i} \gamma_2'(s))}{(\gamma_1(s) - \mathrm{i} \gamma_2(s)) - (y_1 - \mathrm{i} y_2)}ds\right)\overline{v_0(y)} dy\\
&=\int_{y\in \mathbb{R}^2} \Big(\frac1{\pi}\int_{\xi\in\partial \Omega} \frac{f(\xi)}{\overline{\xi} - (y_1 - \mathrm{i} y_2)}d\overline{\xi}\Big)\overline{v_0(y)} dy\\
& = \langle - 2 \mathrm{i} \Phi_{\rm ah}(f),v\rangle_{L^2(\Omega)}.
\end{align*}
The proof for $(\overline{{\bf t}}\Gamma N_{\rm ah})'$ goes along the same lines, which concludes the proof of this lemma.
\end{proof}
For further use, we still denote by $\Phi_{\rm ah}$ and $\Phi_{\rm h}$ the operators $\frac{\mathrm{i}}2({\bf t} \Gamma N_{\rm h})'$ and $-\frac{\mathrm{i}}2(\overline{{\bf t}}\Gamma N_{\rm ah})'$, respectively. Now, for $\sharp \in \{{\rm h},{\rm ah}\}$, when considering the operators
\[
\Phi_{\sharp} : \big(C^\infty(\partial\Omega),\|\cdot\|_{H^{-\frac12}(\partial\Omega)}\big) \rightarrow \big(\dom(\partial_\sharp), \|\cdot\|_\sharp\big)
\]
they are bounded because for any $f \in C^\infty(\partial\Omega)$, $\Phi_{\rm h}(f)$ and $\Phi_{\rm ah}(f)$ are holomorphic and anti-holomorphic in $\Omega$, respectively, so that $\partial_\sharp\Phi_\sharp(f) = 0$ and $\|\Phi_\sharp(f)\|_\sharp = \|\Phi_\sharp(f)\|_{L^2(\Omega)}$.
The density of $C^\infty(\partial\Omega)$ in $H^{-\frac12}(\partial\Omega)$ yields for each operator a unique extension to $H^{-\frac12}(\partial\Omega)$, which coincides with the previous one. In particular, for any $f \in H^{-\frac12}(\partial\Omega)$, $\Phi_\sharp(f) \in \dom(\partial_\sharp)$ and $\partial_\sharp \Phi_\sharp(f) = 0$.
Now, we have collected all the tools to prove Proposition \ref{prop:extphi}.
\begin{proof}[Proof of Proposition \ref{prop:extphi}] For $s = -\frac12$, Proposition \ref{prop:extphi} holds true, because of Lemma \ref{lem:map} and the density of $C^\infty(\partial\Omega)$ in $H^{-\frac12}(\partial\Omega)$.
Let us prove it for $s = \frac12$. Remark that $\Phi_\sharp(f) \in \dom(\partial_\sharp)$ so if $f\in H^{\frac12}(\partial\Omega)$ we also have $\Gamma^+\Phi_\sharp(f) = \Pi_\sharp^+ f \in H^{\frac12}(\partial\Omega)$ by Proposition \ref{prop:proj_conti}. Hence, by Lemma \ref{lem:ellipregul}, $\Phi_\sharp(f) \in H^1(\Omega)$.
Let us use the closed graph theorem and take a sequence of functions $f_n \in H^{\frac12}(\partial\Omega)$ such that $f_n \to f$ in the $H^{\frac12}(\partial\Omega)$-norm. Assume also that $\Phi_\sharp(f_n) \to u \in H^1(\Omega)$ where the convergence holds in the $H^1(\Omega)$-norm.
Because of the continuous embedding of $H^{\frac12}(\partial\Omega)$ into $H^{-\frac12}(\partial\Omega)$, $f_n \to f$ also in the $H^{-\frac12}(\partial\Omega)$-norm. In particular, by Proposition \ref{prop:extphi} for $s = -\frac12$, $\Phi_\sharp(f_n) \to \Phi_\sharp(f)$ in $L^2(\Omega)$. Consequently, the equality $u = \Phi_\sharp(f)$ holds not only in $L^2(\Omega)$ but also in $H^1(\Omega)$ and by the closed graph theorem, $\Phi_\sharp$ is a continuous linear map between $H^{\frac12}(\partial\Omega)$ and $H^1(\Omega)$.
The result for $s = 0$ holds by (real) interpolation theory (see \cite[Prop. 2.1.62. \& Prop. 2.3.11. \& Prop. 2.4.3.]{SauSch}).
\end{proof}
\subsection{Explicit description of the Bergman and Hardy spaces}
Let us prove Theorem \ref{thm:berhar}, starting with the following proposition concerning the Bergman spaces.
\begin{prop}\label{prop:propberg} Let $\sharp \in \{{\rm h},{\rm ah}\}$. There holds:
\begin{equation}\label{eqn:propberg}
\mathcal{A}_\sharp^2(\Omega) = \{ \Phi_\sharp (f) : f\in H^{-\frac12}(\partial\Omega) \text{ such that } \Pi_\sharp^- f = 0\}.
\end{equation}
Moreover, for all $f\in H^{-\frac12}(\partial\Omega)$ there holds
\[
\Phi_\sharp(f) = \Phi_\sharp(\Pi_\sharp^+ f).
\]
\end{prop}
\begin{proof} Denote by $\mathcal{E}_\sharp$ the set on the right-hand side of \eqref{eqn:propberg}. We prove the statement for $\sharp = {\rm h}$, the proof for $\sharp = {\rm ah}$ being similar.
\paragraph{\underline{Inclusion $\mathcal{E}_{\rm h} \subset \mathcal{A}_{\rm h}^2(\Omega)$}}
Let $ u = \Phi_{\rm h}(f) \in \mathcal{E}_{\rm h}$, with $f \in H^{-\frac12}(\partial\Omega)$ such that $\Pi_{\rm h}^- f = 0$. By Proposition \ref{prop:extphi}, $\Phi_{\rm h}$ maps $H^{-\frac12}(\partial\Omega)$ to $L^2(\Omega)$ thus $u \in L^2(\Omega)$. Moreover, there holds $\partial_{\bar z}u = 0$ which implies that $u \in \mathcal{A}_{\rm h}^2 (\Omega)$.
\paragraph{\underline{Inclusion $\mathcal{A}_{\rm h}^2(\Omega) \subset \mathcal{E}_{\rm h}$}} For $u \in C^\infty(\overline{\Omega})$, $x \in \Omega$ and $\varepsilon > 0$ sufficiently small there holds
\begin{align*}
0 &= \frac1\pi\int_{\Omega\setminus B(x,\varepsilon)} \partial_{\bar z}\Big(\frac1{ (x_1 +\mathrm{i} x_2) - (y_1 +\mathrm{i} y_2)}\Big) u(y) dy\\& = -\frac1\pi\int_{\Omega\setminus B(x,\varepsilon)} \frac{\partial_{\bar z} u(y)}{(x_1 +\mathrm{i} x_2) - (y_1 +\mathrm{i} y_2)}dy\\& \quad \quad+ \frac1{2\pi}\int_{\partial\Omega}\frac{u(y)}{(x_1+\mathrm{i} x_2) - (y_1+\mathrm{i} y_2)} \textbf{n}(y) ds(y)\\&\quad\quad\quad + \frac1{2\pi}\int_{\partial B(x,\varepsilon)}\frac{u(y)}{(x_1+\mathrm{i} x_2) - (y_1+\mathrm{i} y_2)} \frac{(x_1 +\mathrm{i} x_2) - (y_1 +\mathrm{i} y_2)}{|y-x|} ds(y)\\
& := -A + B + C.
\end{align*}
However, we have
\[
C = \frac1{2\pi}\int_{0}^{2\pi}u(x + \varepsilon(\cos t,\sin t)) d t \longrightarrow u(x),\quad \text{when }\varepsilon \to 0.
\]
By definition, if $\gamma : [0,\ell] \to \partial\Omega$ is a smooth arc-length parametrization of $\partial\Omega$ there holds
\begin{align*}
B & = -\frac{\mathrm{i}}{2\pi} \int_{0}^\ell \frac{u(\gamma(t))}{(x_1+\mathrm{i} x_2) - (\gamma_1(t) + \mathrm{i} \gamma_2(t))}(\gamma_1'(t) + \mathrm{i} \gamma_2'(t)) dt\\
& = -\frac{1}{2\mathrm{i} \pi} \int_{\partial\Omega} \frac{u(\xi)}{\xi- (x_1 + \mathrm{i} x_2)} d\xi = - \Phi_{\rm h}(\Gamma^+ u)(x).
\end{align*}
In particular, we obtain
\begin{align}
\nonumber\Phi_{\rm h}(\Gamma^+ u)(x) =\ & \frac1{2\pi}\int_{0}^{2\pi} u(x+\varepsilon (\cos(t),\sin(t))) dt\\& \nonumber\qquad\qquad\qquad - \frac1{\pi}\int_{\Omega\setminus B(x,\varepsilon)} \frac{\partial_{\bar z} u(y)}{(x_1 + \mathrm{i} x_2) - (y_1 + \mathrm{i} y_2)} dy\\ \nonumber=\ &\frac1{2\pi}\int_{0}^{2\pi} u(x+\varepsilon (\cos(t),\sin(t))) dt\\\label{eqn:decdist}&\qquad\qquad\qquad - \frac1{\pi}\int_{\mathbb{R}^2} \frac{\mathds{1}_{\mathbb{R}^2\setminus B(0,\varepsilon)}(x-y)}{(x_1 + \mathrm{i} x_2) - (y_1 + \mathrm{i} y_2)} (\partial_{\bar z} u(y)\mathds{1}_{\Omega}(y)) dy.
\end{align}
Note that the linear form on $C_0^\infty(\mathbb{R}^2)$ defined by
\[
p.v.\Big(\frac1{x_1+\mathrm{i} x_2}\Big) := \varphi \in C_0^\infty(\mathbb{R}^2) \mapsto \lim_{\varepsilon \to 0} \int_{\mathbb{R}^2}\frac{\mathds{1}_{\mathbb{R}^2 \setminus B(0,\varepsilon)}(x)}{x_1 +\mathrm{i} x_2}\varphi(x) dx \in \mathbb{C}
\]
belongs to $\mathcal{D}'(\mathbb{R}^2)$. Remark that $(\partial_{\bar z}u) \mathds{1}_{\Omega} \in \mathcal{D}'(\mathbb{R}^2)$ and has compact support. Hence, $p.v.(\frac1{x_1+\mathrm{i} x_2}) * ((\partial_{\bar z}u) \mathds{1}_{\Omega}) \in \mathcal{D}'(\mathbb{R}^2)$ and, taking the duality pairing with $\varphi \in C_0^\infty(\Omega)$ in \eqref{eqn:decdist} and letting $\varepsilon \to 0$, we get
\begin{equation}
\langle\Phi_{\rm h}(\Gamma^+ u ) - u,\varphi\rangle_{\mathcal{D}'(\Omega),\mathcal{D}(\Omega)} = \frac1{\pi}\langle p.v.\Big(\frac1{x_1 +\mathrm{i} x_2}\Big)*(\partial_{\bar z} u \mathds{1}_{\Omega}),\varphi\rangle_{\mathcal{D}'(\mathbb{R}^2),\mathcal{D}(\mathbb{R}^2)}.
\label{eqn:cvddistrib}
\end{equation}
Now, remark that $\mathcal{A}_{\rm h}^2(\Omega) \subset \dom(\partial_{\rm h})$ and pick a sequence of $C^\infty(\overline{\Omega})$ functions $(v_n)_{n\in\mathbb{N}}$ which converges to $v \in \mathcal{A}_{\rm h}^2(\Omega)$ in the norm of $\dom(\partial_{\rm h})$ when $n\to +\infty$. In particular, when $n\to+\infty$, $(v_n)_{n\in\mathbb{N}}$ converges to $v$ and $((\partial_{\bar z} v_n)\mathds{1}_\Omega)_{n\in\mathbb{N}}$ converges to $0$ in $\mathcal{D}'(\mathbb{R}^2)$. Using \eqref{eqn:cvddistrib} for $u = v_n$ and letting $n \to +\infty$, we obtain that $v = \Phi_{\rm h}(\Gamma^+v)$ holds in $\mathcal{D}'(\Omega)$, where we have used the continuity of the map $\Phi_{\rm h} \circ \Gamma^+ : \dom(\partial_{\rm h}) \to L^2(\Omega)$ and the continuity of the convolution in $\mathcal{D}'(\mathbb{R}^2)$.
Now, remark that we also have $v = \Phi_{\rm h}(\Gamma^+ v)$ in $\mathcal{A}_{\rm h}^2(\Omega)$ and taking the trace $\Gamma^+$ on both sides of this identity we get
\[
\Pi_{\rm h}^+\Gamma^+ v = \Gamma^+ v
\]
which implies $v = \Phi_{\rm h}(\Pi_{\rm h}^+ \Gamma^+ v)$ and proves the other inclusion.
\end{proof}
We are now in a good position to prove Theorem \ref{thm:berhar}.
\begin{proof}[Proof of Theorem \ref{thm:berhar}] Proposition \ref{prop:propberg} is precisely the first statement of Theorem \ref{thm:berhar} thus, the only thing left to prove is the statement for the Hardy spaces. Now, recall that for $\sharp\in\{{\rm h},{\rm ah}\}$, we have defined the Hardy spaces in \eqref{eqn:Hardyspace} and that we want to prove
\[
\mathcal{H}_\sharp^2(\Omega) = \{\Phi_\sharp(f) : f \in L^2(\partial\Omega), \Pi_\sharp^-f = 0\}.
\]
Let $\mathcal{E}_\sharp$ be the set on the right-hand side, we prove both inclusions.
\underline{Inclusion $\mathcal{E}_\sharp\subset \mathcal{H}_\sharp^2(\Omega)$.}
Let $ u = \Phi_\sharp(f) \in \mathcal{E}_\sharp$. By definition, $u \in \mathcal{A}_\sharp^2(\Omega)$ and $\Gamma^+ u = f \in L^2(\partial\Omega)\subset H^{-\frac12}(\partial\Omega)$, which proves this inclusion.
\underline{Inclusion $\mathcal{H}_\sharp^2(\Omega) \subset \mathcal{E}_\sharp$.}
Let $u \in \mathcal{H}_\sharp^2(\Omega)$. We know that in particular $u = \Phi_\sharp(f)$ for some $f \in H^{-\frac12}(\partial\Omega)$ such that $\Pi_\sharp^- f = 0$. But, as $u \in \mathcal{H}_\sharp^2(\Omega)$, we have $f = \Gamma^+ u \in L^2(\partial\Omega)$, which proves this inclusion and concludes the proof.
\end{proof}
\subsection{Explicit description of the domain of the maximal Wirtinger operators}
In this paragraph, we prove the following description of the domains of the maximal Wirtinger operators introduced in Section \ref{sec:maxc-rop}. This description involves the Bergman spaces introduced at the beginning of Section \ref{sec:berghard}.
\begin{prop} Let $\sharp \in \{{\rm h}, {\rm ah}\}$. The following direct sum decomposition holds:
\[
\dom(\partial_\sharp) = \{u \in H^{1}(\Omega) : \Pi_\sharp^+ \Gamma^+ u = 0\} \dotplus \mathcal{A}_\sharp^2(\Omega).
\]
\label{prop:dirsum}
\end{prop}
For $\sharp \in \{{\rm h},{\rm ah}\}$, the range of the trace operator $\Gamma^+ : \dom{(\partial_\sharp)} \to H^{-\frac12}(\partial\Omega)$ is of crucial importance to prove Proposition \ref{prop:dirsum}. We describe this range now, thanks to the Szeg\"o projectors introduced in Definition \ref{def:szeproj}, but first we prove a regularization result.
\begin{lem}\label{lem:regular} Let $\sharp \in \{\rm h, {\rm ah}\}$. The operator $\Pi_\sharp^- \circ \Gamma^+$ is a bounded linear operator from $\dom(\partial_\sharp)$ to $H^{\frac12}(\partial\Omega)$.
\end{lem}
\begin{proof} Let $u\in \dom(\partial_{\rm h})$ and let $u_n \in C^\infty(\overline{\Omega})$ be a sequence converging to $u$ in the $\|\cdot\|_{\rm h}$-norm as $n\to +\infty$. Pick $f\in C^\infty(\partial\Omega)$; an integration by parts yields:
\[
\langle\Gamma^+u_n, \overline{\bf n}\ \Pi_{\rm ah}^+f\rangle_{L^2(\partial\Omega)} = 2 \langle\partial_{\bar z} u_n, \Phi_{\rm ah}(f)\rangle_{L^2(\Omega)}.
\]
This gives
\[
|\langle\Gamma^+u_n, \overline{\bf n}\ \Pi_{\rm ah}^+f\rangle_{L^2(\partial\Omega)}| \leq 2 c \|u_n\|_{\rm h}\|f\|_{H^{-\frac12}(\partial\Omega)},
\]
for some $c>0$, where we have used Lemma \ref{lem:extop} and Proposition \ref{prop:extphi}. As $S_{\rm ah}^* = - S_{\rm h}$ in $L^2(\partial\Omega)$, we get
\[
(\overline{\mathbf{n}}\Pi_{\rm ah}^+)^* = \Pi_{\rm h}^-\mathbf{n}.
\]
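Indeed, if, as in \eqref{eqn:Plemeljdef}, the Szeg\"o projectors are given by $\Pi_\sharp^\pm = \frac12(\mathrm{Id} \pm S_\sharp)$ (we only sketch the computation under this convention), then
\[
(\overline{\bf n}\,\Pi_{\rm ah}^+)^* = (\Pi_{\rm ah}^+)^*\,{\bf n} = \tfrac12(\mathrm{Id} + S_{\rm ah}^*)\,{\bf n} = \tfrac12(\mathrm{Id} - S_{\rm h})\,{\bf n} = \Pi_{\rm h}^-\,{\bf n},
\]
where we used that the adjoint of the multiplication operator by $\overline{\bf n}$ is the multiplication operator by ${\bf n}$.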
In particular, there holds
\[
|\langle\Gamma^+u_n, \overline{\bf n}\ \Pi_{\rm ah}^+f\rangle_{L^2(\partial\Omega)}| = |\langle \Pi_{\rm h}^-{\bf n}\Gamma^+u_n, f\rangle_{L^2(\partial\Omega)}| \leq 2 c \|u_n\|_{\rm h}\|f\|_{H^{-\frac12}(\partial\Omega)}.
\]
Letting $n\to +\infty$, we get $\Pi_{\rm h}^- {\bf n}\Gamma^+u \in H^{\frac12}(\partial\Omega)$ and that $\Pi_{\rm h}^-\,{\bf n}\,\Gamma^+$ is a bounded linear map from $\dom(\partial_{\rm h})$ (endowed with the $\|\cdot\|_{\rm h}$-norm) to $H^{\frac12}(\partial\Omega)$. Now, there holds
\[
\Pi_{\rm h}^- \Gamma^+ u = \Big(\overline{\bf n} \Pi_{\rm h}^- {\bf n} - \overline{\bf n} [\Pi_{\rm h}^-,{\bf n}]\Big)\Gamma^+ u = \overline{\bf n} \Pi_{\rm h}^- {\bf n}\Gamma^+ u + \overline{\bf n} [S_{\rm h},{\bf n}]\Gamma^+ u.
\]
By \eqref{itm:2pseudo} of Proposition \ref{prop:pseudocommut}, $[S_{\rm h},{\bf n}]\in \Psi_{\partial\Omega}^{-1}$; hence, it is a bounded operator from $H^{-\frac12}(\partial\Omega)$ to $H^{\frac12}(\partial\Omega)$. Finally, since the multiplication operator by $\overline{\bf n}$ is bounded in $H^{\frac12}(\partial\Omega)$, we obtain the expected result.
The case $u \in \dom(\partial_{\rm ah})$ is handled similarly.
\end{proof}
We are now in a good position to describe the range of the trace operator $\Gamma^+$.
\begin{corol} Let $\sharp \in \{{\rm h},{\rm ah}\}$. There holds
\[
\ran (\Gamma^+) = \{f \in H^{-\frac12}(\partial\Omega) : \Pi_{\sharp}^- f \in H^{\frac12}(\partial\Omega) \}.
\]
\end{corol}
\begin{proof} Let us start by proving the reverse inclusion. Let $f$ be in the set on the right-hand side; there holds $f = \Pi_\sharp^+ f + \Pi_\sharp^-f$. We know that there exists an extension operator $E^+$ from $H^{\frac12}(\partial\Omega)$ to $H^{1}(\Omega)$ such that $\Gamma^+ E^+ g = g$ for all $g \in H^{\frac12}(\partial\Omega)$. Now, as $\Pi_\sharp^- f \in H^{\frac12}(\partial\Omega)$, we set
\[
u := \Phi_\sharp(\Pi_\sharp^+ f) + E^+(\Pi_\sharp^- f).
\]
It is easily seen that $u \in \dom(\partial_\sharp)$ and $\Gamma^+ u = \Pi_\sharp^+ f + \Pi_\sharp^- f = f$.
Now, let us prove the direct inclusion and pick $f \in \ran(\Gamma^+)$. We know that there exists $u\in \dom(\partial_\sharp)$ such that $f = \Gamma^+ u $. In particular, by Lemma \ref{lem:regular} we know that $\Pi_\sharp^- f = \Pi_\sharp^- \Gamma^+ u\in H^{\frac12}(\partial\Omega)$ which concludes the proof.
\end{proof}
We are now able to prove Proposition \ref{prop:dirsum}.
\begin{proof}[Proof of Proposition \ref{prop:dirsum}] First, let us prove that the sum is direct. Let $v = \Phi_\sharp(f) = u$ with $\Pi_\sharp^- f = 0$ and $\Pi_\sharp^+ \Gamma^+ u = 0$. Then, taking the traces we obtain:
\[
\Gamma^+ v = \Pi_\sharp^+ f = \Pi_\sharp^- \Gamma^+ u,
\]
Applying $\Pi_\sharp^+$ and $\Pi_\sharp^-$ to this identity, we obtain $f = \Gamma^+ u = 0$. Consequently, $v = \Phi_\sharp(f) = 0$.
Second, let us pick $v \in \dom(\partial_\sharp)$. There holds
\[
v = \Phi_\sharp(\Pi_\sharp^+\Gamma^+ v) + v - \Phi_\sharp(\Pi_\sharp^+\Gamma^+ v).
\]
However, remark that $u := v - \Phi_\sharp(\Pi_\sharp^+\Gamma^+ v) \in \dom(\partial_\sharp)$ and satisfies $\Gamma^+ u = \Pi_\sharp^- \Gamma^+ v \in H^{\frac12}(\partial\Omega)$ by Lemma \ref{lem:regular}. Hence, by Lemma \ref{lem:ellipregul}, we obtain $u \in H^1(\Omega)$ and $\Gamma^+ u \in \ker\Pi_\sharp^+ = \ran \Pi_\sharp^-$, which concludes the proof.
\end{proof}
\section{Variational characterization of the principal eigenvalue}\label{sec:vcproof}
The aim of this section is to prove Theorem \ref{thm:vf}. In \S \ref{subsec:para1} we describe precisely the domains $\dom(q_E^\Omega)$ and $\dom(H_E^\Omega)$, where $H_E^\Omega$ is the unique self-adjoint operator associated with $q_E^\Omega$ \emph{via} Kato's first representation theorem. In \S \ref{subsec:para2}, we investigate the behavior of the map $E \in [0,+\infty) \mapsto \mu_\Omega(E)$. Finally, in \S \ref{subsec:para3}, we prove Theorem \ref{thm:vf}.
\subsection{The quadratic form $q_E^\Omega$ and its associated self-adjoint operator $H_E^\Omega$}\label{subsec:para1}
For $E>0$, recall that $q_E^\Omega$ is defined in \eqref{eqn:deffq} on the domain consisting of the closure of the $C^\infty(\overline{\Omega})$ functions with respect to the norm of the quadratic form
\[
N_E^\Omega(u) := \sqrt{\|\partial_{\bar z} u\|_{L^2(\Omega)}^2 + \|u\|_{L^2(\Omega)}^2 + E\|u\|_{L^2(\partial\Omega)}^2}.
\]
Remark that, as defined, $q_E^\Omega$ is a closed, densely defined and bounded-below quadratic form; thus, by Kato's first representation theorem (see \cite[Chap. VI, Thm. 2.1]{Kat}), $q_E^\Omega$ is associated with a unique self-adjoint operator $H_E^\Omega$ acting in $L^2(\Omega)$ satisfying
\[
\dom(H_E^\Omega)\subset\dom(q_E^\Omega).
\]
In this paragraph, we describe properties of the domains $\dom(q_E^\Omega)$ and $\dom(H_E^\Omega)$ and start with the domain of the quadratic form $q_E^\Omega$.
\begin{prop}\label{prop:descformdom} Let $E >0$. The form domain $\dom(q_E^\Omega)$ admits the following direct sum decomposition
\[
\dom(q_E^\Omega) = \{u \in H^1(\Omega) : \Pi_{\rm h}^+ \Gamma^+ u = 0\} \dotplus \mathcal{H}_{\rm h}^2(\Omega).
\]
Moreover, $\dom(q_E^\Omega)$ is continuously embedded in $H^{\frac12}(\Omega)$.
\end{prop}
\begin{proof} Set $\mathcal{E} = \{u \in H^1(\Omega) : \Pi_{\rm h}^+ \Gamma^+ u = 0\} \dotplus \mathcal{H}_{\rm h}^2(\Omega)$ and remark that the sum is direct by the same arguments as in the proof of Proposition \ref{prop:dirsum}. We prove the set equality by proving both inclusions.
\paragraph{\underline{Inclusion $\mathcal{E}\subset\dom(q_{E}^\Omega)$}} Let $ v:= u + \Phi_{\rm h}(f) \in \mathcal{E}$ and take $(u_n)_{n\in \mathbb{N}}$ and $(f_n)_{n\in \mathbb{N}}$ two sequences of functions such that
\[
\text{for all } n\in \mathbb{N}\quad u_n \in C^\infty(\overline{\Omega}),\quad f_n \in C^\infty(\partial\Omega);
\]
and
\[
\text{when } n\to +\infty\text{ there holds }\|u_n - u\|_{H^1(\Omega)} \to 0,\quad\|f_n - f\|_{L^2(\partial\Omega)}\to 0.
\]
By \cite[Theorem 3.1]{Bell}, we have $v_n := u_n + \Phi_{\rm h}(f_n) \in C^\infty(\overline{\Omega})$ and for $E>0$, there exists $C>0$ such that there holds
\begin{align*}
q_E^\Omega(v-v_n) +(E^2+1)\|v-v_n\|_{L^2(\Omega)}^2 =&\ 4\|\partial_{\bar z} (u - u_n)\|_{L^2(\Omega)}^2 \\& \quad+ E \|\Gamma^+(u - u_n) + \Pi_{\rm h}^+(f - f_n)\|_{L^2(\partial\Omega)}^2 \\&\quad\quad + \|(u-u_n) + \Phi_{\rm h}(f-f_n)\|_{L^2(\Omega)}^2\\& \leq C\bigg(\|u-u_n\|_{H^1(\Omega)} + \|f- f_n\|_{L^2(\partial\Omega)}\bigg),
\end{align*}
where we have used the mapping properties of $\Phi_{\rm h}$, $\Gamma^+$, $\Pi_{\rm h}^+$ and the continuity of the embedding of $L^2(\partial\Omega)$ into $H^{-\frac12}(\partial\Omega)$.
Letting $n\to +\infty$, we obtain that $v \in \dom(q_{E}^\Omega)$ and this inclusion is proved.
\paragraph{\underline{Inclusion $\dom(q_{E}^\Omega)\subset\mathcal{E}$}} For all $u \in C^\infty(\overline{\Omega})$, there holds
\[
q_E^\Omega(u) + (E^2+1) \|u\|_{L^2(\Omega)}^2 \geq \|u\|_{\rm h}^2.
\]
In particular, the closure of $C^\infty(\overline{\Omega})$ for the norm $N_E^\Omega$ is included in $\dom(\partial_{\rm h})$; in other words, $\dom(q_E^\Omega)\subset \dom(\partial_{\rm h})$. By Proposition \ref{prop:dirsum}, any $v \in \dom(q_E^\Omega)$ can be written $v = u + \Phi_{\rm h}(f)$, for some $u \in H^1(\Omega)$ with $\Pi_{\rm h}^+ \Gamma^+ u = 0$ and some $f \in H^{-\frac12}(\partial\Omega)$ with $\Pi_{\rm h}^- f = 0$. Now, if $v_n \in C^\infty(\overline{\Omega})$ converges to $v\in \dom(q_E^\Omega)$ in the norm of the quadratic form, we have
\[
\|\Gamma^+ v - \Gamma^+ v_n\|_{L^2(\partial\Omega)}^2 \leq E^{-1} \big(q_E^\Omega(v-v_n) + E^2\|v-v_n\|_{L^2(\Omega)}^2\big)\to 0,\quad n\to +\infty.
\]
In particular $\Gamma^+ v = \Gamma^+ u + f \in L^2(\partial\Omega)$ and as $\Gamma^+ u \in H^{\frac12}(\partial\Omega)$ we get $f\in L^2(\partial\Omega)$ which concludes the proof of this inclusion.
Let us now consider the inclusion map
\[
\mathcal{I} : \dom(q_E^\Omega) \to H^{\frac12}(\Omega),\quad \mathcal{I} u = u.
\]
By Proposition \ref{prop:extphi} for $s = 0$, this map is well-defined. Consider $v_n := u_n + \Phi_{\rm h}(f_n) \in \dom(q_E^\Omega)$ which converges to $v$ in the norm of the quadratic form $q_E^\Omega$ and assume that $v_n \to w$ in the $H^\frac12(\Omega)$-norm. In particular, as $v \in \dom(q_E^\Omega)$, there holds $v = u + \Phi_{\rm h}(f)$ for some $u\in H^1(\Omega)$ and $f\in L^2(\partial\Omega)$ as in the definition of $\mathcal{E}$. In particular, in $\mathcal{D}'(\Omega)$ we obtain
\[
u + \Phi_{\rm h}(f) = w
\]
and as the left-hand side belongs to $H^{\frac12}(\Omega)$, this equality holds in $H^{\frac12}(\Omega)$, that is, $\mathcal{I}v = w$; the closed graph theorem then gives that $\mathcal{I}$ is continuous.
\end{proof}
Because of the compact embedding of $H^{\frac12}(\Omega)$ into $L^2(\Omega)$, an immediate corollary of Proposition \ref{prop:descformdom} reads as follows.
\begin{corol}Let $E>0$. The operator $H_E^\Omega$ has compact resolvent and its spectrum consists of a non-decreasing sequence of eigenvalues denoted $(\mu_j^\Omega(E))_{j\geq1}$. Moreover, there holds
\[
\mu_j^\Omega(E) =\inf_{\tiny{\begin{array}{c}F \subset \dom{(q_E^\Omega)}\\ \dim F = j\end{array}}}\sup_{u \in F\setminus\{0\}} \frac{4 \int_\Omega |\partial_{\bar z} u|^2 dx - E^2 \int_{\Omega}|u|^2dx + E \int_{\partial\Omega} |u|^2 ds}{\int_\Omega |u|^2 dx}.
\]
\end{corol}
\begin{rem} For $E = 0$, the counterpart of Proposition \ref{prop:descformdom} would read
\[
\dom(q_0^\Omega) = \{u \in H^1(\Omega) : \Pi_{\rm h}^+ \Gamma^+ u = 0\} \dotplus \mathcal{A}_{\rm h}^2(\Omega).
\]
In particular, note that $\dom(q_0^\Omega)$ cannot be included in any Sobolev space $H^{s}(\Omega)$ ($s>0$). Indeed, for any Bergman function $u \in \mathcal{A}_{\rm h}^2(\Omega)$, there holds $q_0^\Omega(u) = 0$, which implies that for all $j \geq 1$ we have $\mu_j^\Omega(0) = 0$. Thus $0$ is an eigenvalue of $H_0^\Omega$ of infinite multiplicity, which would not be possible if we had $\dom(q_0^\Omega) \subset H^s(\Omega)$, because of the compact embedding of $H^s(\Omega)$ in $L^2(\Omega)$. This phenomenon is reminiscent of what happens for the Dirac operator with zig-zag boundary conditions as discussed in \cite{Schm95}.
\end{rem}
We conclude this paragraph by a description of the domain of the operator $H_E^\Omega$.
\begin{prop} Let $E>0$. There holds:
\[
\dom(H_E^\Omega) = \{u \in H^1(\Omega) : \partial_{\bar z} u \in H^1(\Omega)\text{ and } \partial_{\bar z}u + {\bf n}\frac{E}2 u = 0 \text{ on }\partial\Omega\}.
\]
\label{prop:domope}
\end{prop}
\begin{proof} Let $\mathcal{E}$ denote the set in the right-hand side of Proposition \ref{prop:domope}. The proof is performed by proving both inclusions.
\paragraph{\underline{Inclusion $\dom(H_E^\Omega)\subset\mathcal{E}$}}
Let $u\in \dom(H_E^\Omega)$ and $v \in C_0^\infty(\Omega)$; there holds
\begin{align*}
\langle H_E^\Omega u, \overline{v}\rangle_{\mathcal{D}'(\Omega),\mathcal{D}(\Omega)} = \langle H_E^\Omega u, v\rangle_{L^2(\Omega)} &= q_E^\Omega[u,v]\\
& = 4 \langle\partial_{\bar z}u,\partial_{z} \overline{v}\rangle_{\mathcal{D}'(\Omega),\mathcal{D}(\Omega)} - E^2\langle u,\overline{v}\rangle_{\mathcal{D}'(\Omega),\mathcal{D}(\Omega)}\\
& = \langle (-\Delta -E^2)u, \overline{v}\rangle_{\mathcal{D}'(\Omega),\mathcal{D}(\Omega)},
\end{align*}
where $q_E^\Omega[\cdot,\cdot]$ denotes the sesquilinear form associated with the quadratic form $q_E^\Omega$. Hence, in $L^2(\Omega)$, there holds $H_E^\Omega u = (-\Delta - E^2) u$. Remark that if $u \in \dom(H_E^\Omega)$ then $\partial_{\bar z} u \in \dom(\partial_{\rm ah})$, in particular, by Green's Formula \eqref{eqn:Green}, for all $v \in C^\infty(\overline{\Omega})$ we get:
\begin{align*}
\langle H_E^\Omega u, v\rangle_{L^2(\Omega)} &=\ -4 \langle \partial_z (\partial_{\bar z}u), v\rangle_{L^2(\Omega)} - E^2\langle u,v\rangle_{L^2(\Omega)}\\
& = \ 4\langle \partial_{\bar z} u, \partial_{\bar z}v \rangle_{L^2(\Omega)} - E^2\langle u,v\rangle_{L^2(\Omega)} - 2\langle\overline{\bf n} \Gamma^+\partial_{\bar z}u,\Gamma^+v\rangle_{H^{-\frac12}(\partial\Omega),H^{\frac12}(\partial\Omega)}\\
& = \ q_E^\Omega[u,v] - \langle2\overline{\bf n} \Gamma^+\partial_{\bar z}u + E u,\Gamma^+v\rangle_{H^{-\frac12}(\partial\Omega),H^{\frac12}(\partial\Omega)}.
\end{align*}
As $v \in \dom(q_E^\Omega)$, the definition of $H_E^\Omega$ gives $\langle H_E^\Omega u, v\rangle_{L^2(\Omega)} = q_E^\Omega[u,v]$, so we necessarily have $\langle2\overline{\bf n} \Gamma^+\partial_{\bar z}u + E u,\Gamma^+v\rangle_{H^{-\frac12}(\partial\Omega),H^{\frac12}(\partial\Omega)} = 0$. As this is true for all $v \in C^\infty(\overline{\Omega})$, we obtain
\begin{equation}\label{eqn:bc}
2\overline{\bf n} \Gamma^+\partial_{\bar z}u + E \Gamma^+u = 0,\quad \text{in } H^{-\frac12}(\partial\Omega).
\end{equation}
Taking the Szeg\"o projectors in \eqref{eqn:bc} we obtain
\[
(\Gamma^+\partial_{\bar z} u) + \frac{\bf n}{2}E \Gamma^+u = 0 \Longleftrightarrow \left\{\begin{array}{lcl}\Pi_{\rm ah}^+(\Gamma^+(\partial_{\bar z} u)) + \frac{E}2 \Pi_{\rm ah}^+{\bf n}\Gamma^+ u &=& 0\\
\Pi_{\rm ah}^-(\Gamma^+(\partial_{\bar z} u)) + \frac{E}2 \Pi_{\rm ah}^-{\bf n}\Gamma^+ u &=& 0\end{array}\right.
\]
Nevertheless, there holds
\[
\Pi_{\rm ah}^- = \Pi_{\rm h}^+ - \frac12(S_{\rm h} + S_{\rm ah}),\quad \Pi_{\rm ah}^+ = \Pi_{\rm h}^- + \frac12(S_{\rm ah} + S_{\rm h}).
\]
In particular, we get
\begin{align*}
\Pi_{\rm ah}^- (\Gamma^+(\partial_{\bar z} u)) = - \frac{E}2 \Pi_{\rm ah}^-(\mathbf{n} \Gamma^+u) &= -\frac{E}2\Big(\mathbf{n}\Pi_{\rm h}^+\Gamma^+ u + [\Pi_{\rm h}^+,\mathbf{n}]\Gamma^+u - \frac12(S_{\rm h} + S_{\rm ah})(\mathbf{n}\Gamma^+u)\Big)\\ &= -\frac{E}2\Big(\mathbf{n}\Pi_{\rm h}^+\Gamma^+u + [S_{\rm h},\mathbf{n}]\Gamma^+u - \frac12(S_{\rm h} + S_{\rm ah})(\mathbf{n}\Gamma^+u)\Big).
\end{align*}
This can be rewritten as
\[
\Pi_{\rm h}^+ \Gamma^+ u = -\overline{\bf n} \Big(\frac2E \Pi_{\rm ah}^- \Gamma^+(\partial_{\bar z} u) + [S_{\rm h},{\bf n}]\Gamma^+u - \frac12(S_{\rm h}+S_{\rm ah})({\bf n}\Gamma^+ u)\Big).
\]
Remark that the right-hand side belongs to $H^{\frac12}(\partial\Omega)$: this holds for the first term because of Lemma \ref{lem:regular} and for the last two terms because of Proposition \ref{prop:pseudocommut}. As $\Pi_{\rm h}^- \Gamma^+ u \in H^{\frac12}(\partial\Omega)$ by Lemma \ref{lem:regular}, we get $\Gamma^+ u = \Pi_{\rm h}^+ \Gamma^+ u + \Pi_{\rm h}^- \Gamma^+ u \in H^{\frac12}(\partial\Omega)$; thus, by Lemma \ref{lem:ellipregul}, $u \in H^1(\Omega)$. In particular, $\Pi_{\rm ah}^+(\Gamma^+(\partial_{\bar z} u)) = - \frac{E}2\Pi_{\rm ah}^+{\bf n}\Gamma^+ u \in H^{\frac12}(\partial\Omega)$ and, as $\Pi_{\rm ah}^- \Gamma^+ (\partial_{\bar z}u) \in H^{\frac12}(\partial\Omega)$ by Lemma \ref{lem:regular}, we obtain $\Gamma^+ \partial_{\bar z} u = \Pi_{\rm ah}^- \Gamma^+ (\partial_{\bar z}u) + \Pi_{\rm ah}^+ \Gamma^+ (\partial_{\bar z}u) \in H^{\frac12}(\partial\Omega)$. By Lemma \ref{lem:ellipregul}, this yields $\partial_{\bar z}u \in H^1(\Omega)$, which concludes the proof of this inclusion.
\paragraph{\underline{Inclusion $\mathcal{E}\subset \dom(H_E^\Omega)$}} Pick $u\in \mathcal{E}$. One easily sees that $(-\Delta - E^2)u \in L^2(\Omega)$; moreover, for all $v \in \dom(q_E^\Omega)$, there holds
\[
q_E^\Omega[u,v] = \langle(-\Delta - E^2)u,v\rangle_{L^2(\Omega)}.
\]
By definition of $H_E^\Omega$ it implies $u \in \dom(H_E^\Omega)$ and $H_E^\Omega u = (-\Delta -E^2)u$.
\end{proof}
\subsection{Concavity of the first min-max level}\label{subsec:para2}
In this paragraph we investigate the behavior of the first min-max level $\mu^\Omega(E)$ with respect to the spectral parameter $E>0$. This behavior is illustrated in Figure \ref{fig:figure4} for various domains $\Omega$.
\begin{prop}\label{prop:monotonicity} The map $\mu^\Omega: E \geq 0 \mapsto \mu^\Omega(E)$ verifies the following properties.
\begin{enumerate}
\item\label{itm:1} $\mu^\Omega$ is a continuous and concave function on $\mathbb{R}_+$.
\item\label{itm:2} We have $\mu^\Omega(0) = 0$ and there exists $E_\star^\Omega > 0$ such that for all $E \in (0,E_\star^\Omega)$ there holds $\mu^\Omega(E) > 0$.
\item\label{itm:4} Let $0<E_1<E_2$. There holds
\[
\mu^\Omega(E_2) \leq \frac{E_2}{E_1}\mu^\Omega(E_1) - E_2(E_2-E_1).
\]
In particular, if $\mu^\Omega(E_1) = 0$ (resp. $\mu^\Omega(E_2) = 0$) there holds $\mu^\Omega(E_2) < 0$ (resp. $\mu^\Omega(E_1)>0$).
\end{enumerate}
\end{prop}
\begin{proof} As for all $u\in \dom(q_E^\Omega)$ the function $\big(E \geq0 \mapsto q_E^\Omega(u)\big)$ is continuous and concave, so is $\big(E\geq0 \mapsto \mu^\Omega(E)\big)$, and Point \eqref{itm:1} is proved.
Regarding Point \eqref{itm:2}, one observes that for all $u\in \dom(q_E^\Omega)$ there holds $q_0^\Omega(u) \geq 0$ and in particular $\mu^\Omega(0) \geq 0$. Now, for any $f\in L^2(\partial\Omega)$ we have $\Phi_{\rm h}(f) \in \dom(q_E^\Omega)$ and $q_0^\Omega(\Phi_{\rm h}(f)) = 0$ because $\Phi_{\rm h}(f)$ is holomorphic in $\Omega$. Consequently, there holds $\mu^\Omega(0) = 0$.
To prove the second part of Point \eqref{itm:2}, let $u \in \dom(q_E^\Omega)$ and remark that
\begin{equation}
\label{eqn:minmaxpos}
q_E^\Omega(u) = (4-E) \|\partial_{\bar z} u\|_{L^2(\Omega)}^2 - E^2 \|u\|_{L^2(\Omega)}^2 + E \mathfrak{Q}(u)
\end{equation}
where the quadratic form $\mathfrak{Q}$ is defined as
\[
\mathfrak{Q}(u) = \|\partial_{\bar z} u\|_{L^2(\Omega)}^2 + \|u\|_{L^2(\partial\Omega)}^2,\quad \dom(\mathfrak{Q}) = \dom (q_E^\Omega).
\]
Now, remark that $\mathfrak{Q} \geq 0$; thus, by Kato's first representation theorem, there exists a unique self-adjoint operator $\mathfrak{H}$ such that $\dom(\mathfrak{H}) \subset \dom(\mathfrak{Q})$, and its spectrum is a non-decreasing sequence of eigenvalues because $\dom(\mathfrak{Q}) = \dom(q_E^\Omega)$ is compactly embedded into $L^2(\Omega)$. Let $\lambda_1^\Omega$ be its smallest eigenvalue; we already know by the min-max principle that $\lambda_1^\Omega \geq 0$. Moreover, if $\lambda_1^\Omega = 0$, then for an associated eigenfunction $u$ we obtain $\mathfrak{Q}(u) = 0$, which implies $\partial_{\bar z} u = 0$; hence $u$ is holomorphic with trace in $L^2(\partial\Omega)$. Consequently, $u$ belongs to $\mathcal{H}_{\rm h}^2(\Omega)$ and $u = \Phi_{\rm h}(f)$ for some $f \in L^2(\partial\Omega)$ such that $\Gamma^+ u = f$. However, as $\mathfrak{Q}(u) = 0$, we also obtain $\Gamma^+ u = f = 0$, which yields $u=0$; this is not possible because $u$ is an eigenfunction. It implies that $\lambda_1^\Omega > 0$ and, using the min-max principle in \eqref{eqn:minmaxpos}, we get for all $u \in \dom(q_E^\Omega)$:
\[
q_E^\Omega(u) \geq (4-E) \|\partial_{\bar z} u\|_{L^2(\Omega)}^2 - E^2 \|u\|_{L^2(\Omega)}^2 + E \lambda_1^\Omega \|u\|_{L^2(\Omega)}^2.
\]
In particular, if $E< 4$ we obtain
\[
q_E^\Omega(u) \geq E \big(\lambda_1^\Omega - E\big)\|u\|_{L^2(\Omega)}^2
\]
and the min-max principle yields
\[
\mu^\Omega(E) \geq E(\lambda_1^\Omega - E).
\]
Thus, setting $E_\star^\Omega := \min(4,\lambda_1^\Omega)$, for all $E \in (0,E_\star^\Omega)$, we have $\mu^\Omega(E) >0$.
Let us prove Point \eqref{itm:4}. Let $u \in \dom(q_E^\Omega)$ and $0 < E_1 < E_2$. There holds
\begin{equation}\label{eqn:monotonicity}
q_{E_2}^\Omega (u) = q_{E_1}^\Omega(u) - (E_2^2 - E_1^2)\int_{\Omega}|u|^2 dx + (E_2 - E_1)\int_{\partial\Omega}|u|^2 ds.
\end{equation}
Now, pick $u_1$ a normalized eigenfunction of $H_{E_1}^\Omega$ associated with the eigenvalue $\mu^\Omega(E_1)$. We have $q_{E_1}^\Omega(u_1) = \mu^\Omega(E_1)$ which implies
\[
\int_{\partial\Omega}|u_1|^2 ds \leq \frac{1}{E_1}\Big(4\int_\Omega |\partial_{\bar z}u_1|^2 dx + E_1 \int_{\partial\Omega}|u_1|^2ds\Big) = \frac1{E_1}(q_{E_1}^\Omega (u_1) + E_1^2 )\leq \frac{E_1^2 + \mu^\Omega(E_1)}{E_1}.
\]
Thus, evaluating \eqref{eqn:monotonicity} with $u = u_1$ we obtain
\[
q_{E_2}^\Omega(u_1) \leq \mu^\Omega(E_1) - (E_2^2-E_1^2) + \frac{E_2 - E_1}{E_1}(E_1^2 + \mu^\Omega(E_1)).
\]
The min-max principle finally gives the sought inequality
\[
\begin{array}{lcl}
\mu^\Omega(E_2) &\leq&\displaystyle \mu^\Omega(E_1) - (E_2^2-E_1^2) + \frac{E_2 - E_1}{E_1}(E_1^2 + \mu^\Omega(E_1))\\
&=& \displaystyle\frac{E_2}{E_1}\mu^\Omega(E_1) - E_2(E_2 - E_1).
\end{array}
\]
Now, assume that $\mu^\Omega(E_1) = 0$. It yields
\[
\mu^\Omega(E_2) \leq - E_2(E_2 - E_1) < 0.
\]
Similarly, if $\mu^\Omega(E_2) = 0$ we get
\[
0 < E_1(E_2 - E_1) \leq \mu^\Omega(E_1).
\]
\end{proof}
\subsection{Proof of the variational principle}\label{subsec:para3}
To prove Theorem \ref{thm:vf}, we will need the following two propositions.
\begin{prop}Let $E>0$ be such that $\mu^\Omega(E) = 0$. Then $E \in Sp_{dis}(D^\Omega)$.
\label{prop:sens1}
\end{prop}
\begin{proof} Let $E>0$ be such that $\mu^\Omega(E) = 0$ and consider a normalized associated eigenfunction $v \in \dom(H_E^\Omega)$. Set $u = (u_1,u_2)^\top = (v, -\frac{2 \mathrm{i}}{E}\partial_{\bar z}v)^\top$. By Proposition \ref{prop:domope}, $u\in H^1(\Omega,\mathbb{C}^2)$ and, as $v\in\dom(H_E^\Omega)$, in $H^{\frac12}(\partial\Omega)$ there holds
\[
\Gamma^+(\partial_{\bar z} v) +{\bf n}\frac{E}2 \Gamma^+ v = 0 \Longleftrightarrow -2E^{-1}\mathrm{i} \Gamma^+(\partial_{\bar z} v) = \mathrm{i} {\bf n} \Gamma^+u_1 \Longleftrightarrow \Gamma^+ u_2 = \mathrm{i} {\bf n}\Gamma^+ u_1.
\]
Hence, $(u_1,u_2)^\top \in \dom(D^\Omega)$ and there holds
\begin{align*}
D^\Omega (u_1,u_2)^\top = \begin{pmatrix} 0 & -2\mathrm{i}\partial_{z}\\-2\mathrm{i}\partial_{\bar z} & 0\end{pmatrix} (u_1,u_2)^\top &= (-2\mathrm{i}\partial_z u_2, -2\mathrm{i}\partial_{\bar z} u_1)^\top\\&= (-\frac1{E}\Delta u_1, E u_2)^\top\\
& = E(u_1,u_2)^\top,
\end{align*}
where in the last equality we used that $H_E^\Omega v = (-\Delta - E^2)v = \mu^\Omega(E)v = 0$, that is, $-\Delta u_1 = E^2 u_1$.
Hence, $E \in Sp_{dis}(D^\Omega)$, which concludes the proof of Proposition \ref{prop:sens1}.
\end{proof}
\begin{prop}Let $E \in Sp_{dis}(D^\Omega)\cap\mathbb{R}_+^*$. Then $\mu^\Omega(E) \leq 0$.
\label{prop:sens2}
\end{prop}
\begin{proof}Let $E \in Sp_{dis}(D^\Omega)\cap\mathbb{R}_+^*$ and pick $u = (u_1,u_2)^\top \in \dom(D^\Omega)$ a normalized eigenfunction of $D^\Omega$ associated with $E$. We have
\[
\left\{
\begin{array}{ll}
D^\Omega u = E u& \text{in } \Omega,\\
u_2 = \mathrm{i} {\bf n} u_1 & \text{on } \partial\Omega.
\end{array}
\right.
\]
In particular, we have $-2\mathrm{i} \partial_{\bar z} u_1 = E u_2$ and $\partial_{\bar z}u_1 \in H^1(\Omega)$. It yields
\[
E u_1 = -2\mathrm{i} \partial_{z} u_2 = - \frac4{E} \partial_{z}\partial_{\bar z} u_1.
\]
Taking the scalar product with $u_1$ on both sides of the previous equation, we get
\begin{equation}\label{eqn:obtfq}
E^2 \int_{\Omega}|u_1|^2 dx = - 4 \int_{\Omega} (\partial_{z}\partial_{\bar z} u_1)\overline{u_1}dx = 4\int_{\Omega}|\partial_{\bar z} u_1|^2dx - 2\int_{\partial\Omega}\overline{\bf n}(\partial_{\bar z}u_1) \overline{u_1} ds.
\end{equation}
Now, remark that on $\partial\Omega$, we have
\[
-\frac{2\mathrm{i}}{E} \partial_{\bar z}u_1 = u_2 = \mathrm{i} {\bf n} u_1
\]
which implies that on $\partial\Omega$
\[
2\overline{\bf n}\partial_{\bar z}u_1 + {E}u_1 = 0.
\]
Hence, \eqref{eqn:obtfq} becomes
\[
E^2\int_{\Omega} |u_1|^2 dx = 4\int_{\Omega}|\partial_{\bar z}u_1|^2 dx + E \int_{\partial\Omega}|u_1|^2 ds
\]
which reads $q_{E}^\Omega(u_1) = 0$; thus, the min-max principle gives $\mu^\Omega(E)\leq 0$.
\end{proof}
Now, we have all the tools to prove Theorem \ref{thm:vf}. The proof is performed by proving each implication.
\begin{proof}[Proof of Theorem \ref{thm:vf}]~
By Proposition \ref{prop:sens2}, we have $\mu^\Omega(E_1(\Omega)) \leq 0$. Assume that $\mu^\Omega(E_1(\Omega)) < 0$. By Proposition \ref{prop:monotonicity} (Points \eqref{itm:1} and \eqref{itm:2}, combined with the intermediate value theorem), there exists $0<E < E_1(\Omega)$ such that $\mu^\Omega(E) = 0$ which, by Proposition \ref{prop:sens1}, implies $E \in Sp_{dis}(D^\Omega)$. This is not possible because, by definition of $E_1(\Omega)$, $E\geq E_1(\Omega)$. Consequently, we obtain $\mu^\Omega(E_1(\Omega)) = 0$.
Let $E>0$ be such that $\mu^\Omega(E) = 0$. By Proposition \ref{prop:sens1}, $E \in Sp_{dis}(D^\Omega)$ and necessarily $E \geq E_1(\Omega)$. If $E>E_1(\Omega)$, by Point \eqref{itm:4} of Proposition \ref{prop:monotonicity} we obtain $\mu^\Omega(E_1(\Omega)) > 0$; but by Proposition \ref{prop:sens2} we have $\mu^\Omega(E_1(\Omega)) \leq 0$. Hence, necessarily, $E = E_1(\Omega)$.
\end{proof}
\section{Geometric upper bounds on the spectral gap}\label{sec:isopin}
The goal of this section is to prove Theorem \ref{thm:ineq} and this is discussed in \S \ref{subsec:sharpub}. But first, in \S \ref{subsec:simpleub}, we give a simple geometric upper bound on the spectral gap which illustrates how Theorem \ref{thm:vf} can be used.
\subsection{A simple upper bound}\label{subsec:simpleub}
An immediate consequence of Theorem \ref{thm:vf} reads as follows.
\begin{prop}
Let $\Omega \subset \mathbb{R}^2$ be $C^\infty$ and simply connected. There holds
\[
E_1(\Omega) \leq \frac{|\partial\Omega|}{|\Omega|}.
\]
\end{prop}
There is no reason for the above upper bound to be attained among Euclidean domains. However, the bound brings into play simple geometric quantities: the perimeter and the area of $\Omega$.
\begin{proof}
Let $E>0$ and let $u \equiv 1$ be the constant function equal to $1$ on $\Omega$. As $u\in \dom(q_E^\Omega)$, by the min-max principle we obtain
\[
\mu^\Omega(E) \leq \frac{q_E^\Omega(u)}{\|u\|_{L^2(\Omega)}^2} = E\Big(\frac{|\partial\Omega|}{|\Omega|} - E\Big).
\]
So, at $E_{\rm crit} := \frac{|\partial\Omega|}{|\Omega|}$, we get $\mu^\Omega(E_{\rm crit}) \leq 0$ and, by Theorem \ref{thm:vf} and Proposition \ref{prop:monotonicity}, we know that
\[
E_1(\Omega) \leq E_{\rm crit} = \frac{|\partial\Omega|}{|\Omega|}.
\]
\end{proof}
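\begin{rem} As an illustration (this is only a sanity check of the bound, not a sharp value), for the unit disk the above proposition reads
\[
E_1(\mathbb{D}) \leq \frac{|\partial\mathbb{D}|}{|\mathbb{D}|} = \frac{2\pi}{\pi} = 2.
\]
\end{rem}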
\subsection{A sharp upper bound}\label{subsec:sharpub}
It turns out that Theorem \ref{thm:ineq} is a consequence of the following result.
\begin{thm} Let $\Omega \subset \mathbb{R}^2$ be a $C^\infty$ simply connected domain. There holds
\[
E_1(\Omega) \leq \frac{|\partial\Omega| + \sqrt{|\partial\Omega|^2 + 8\pi E_1(\mathbb{D})(E_1(\mathbb{D}) - 1)(\pi r_i^2 +|\Omega|)}}{2(\pi r_i^2 + |\Omega|)}
\]
with equality if and only if $\Omega$ is a disk.
\label{thm:isoper}
\end{thm}
Now, we have all the tools to prove Theorem \ref{thm:ineq}.
\begin{proof}[Proof of Theorem \ref{thm:ineq}] Using that $\pi r_i^2 \leq |\Omega|$ and the isoperimetric inequality, we obtain $4\pi^2 r_i^2 \leq 4\pi |\Omega| \leq |\partial\Omega|^2$; in particular, $\pi r_i^2 + |\Omega| \leq \frac{|\partial\Omega|^2}{2\pi}$. Since $E_1(\mathbb{D}) \geq 1$ and $1 + 4E_1(\mathbb{D})\big(E_1(\mathbb{D})-1\big) = \big(2E_1(\mathbb{D})-1\big)^2$, this gives
\[
|\partial\Omega|^2 + 8\pi E_1(\mathbb{D})(E_1(\mathbb{D}) - 1)(\pi r_i^2 +|\Omega|) \leq |\partial\Omega|^2(2E_1(\mathbb{D}) - 1)^2.
\]
Note that in the above inequalities, we have equality if and only if $\Omega$ is a disk. Combining this bound with the one of Theorem \ref{thm:isoper}, we get Theorem \ref{thm:ineq}.
\end{proof}
In the rest of this section we focus on proving Theorem \ref{thm:isoper} and assume, without loss of generality, the following.
\begin{enumerate}[label=(\roman*)]
\item $0\in \Omega$ is such that $r_i = \mathrm{dist}(0,\partial\Omega) = \min_{x\in\partial\Omega}|x|$,
\item $f : \mathbb{D} \to \Omega$ is a conformal map such that $f(0) = 0$ and we write
\[
f(z) = \sum_{n\geq1} c_n z^n,
\]
where $(c_n)_{n\geq1}$ is a sequence of complex numbers.
\end{enumerate}
Before going through the proof of Theorem \ref{thm:isoper}, we gather in the following paragraph some known properties linking the geometry of $\Omega$ with the conformal map $f$.
\subsubsection{Preliminaries}
The next proposition can be found in \cite[\S 3.10.2]{P16} and relates the area of $\Omega$ with the conformal map $f$.
\begin{prop}[Area formula]\label{eqn:area} There holds
\begin{equation*}
|\Omega| = \pi \sum_{n\geq 1}n|c_n|^2.
\end{equation*}
\end{prop}
The second proposition is a consequence of the Schwarz lemma (see Koebe's estimate in \cite[Chap. I, Thm. 4.3]{GM05}). It gives a relation between the first coefficient $c_1$ of the conformal map $f$ and the inradius $r_i$.
\begin{prop}[Koebe's estimate]\label{prop:Koebe} There holds
\[
|f'(0)| = |c_1| \geq r_i.
\]
\end{prop}
Finally, the last geometric relation between the conformal map $f$ and the geometry of $\Omega$ we need to prove Theorem \ref{thm:isoper} is that the perimeter $|\partial\Omega|$ of $\Omega$ can be expressed as
\begin{equation}
|\partial\Omega| = \int_{0}^{2\pi}|f'(e^{i\theta})|d\theta.
\label{eqn:perim}
\end{equation}
Identity \eqref{eqn:perim} is a simple consequence of the fact that $f|_{\mathbb{S}^1}$ is a parametrization of $\partial\Omega$.
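Spelled out, writing $\theta \mapsto f(e^{\mathrm{i}\theta})$ for this parametrization, one gets
\[
|\partial\Omega| = \int_0^{2\pi}\Big|\frac{d}{d\theta}f(e^{\mathrm{i}\theta})\Big|\,d\theta = \int_0^{2\pi}\big|\mathrm{i} e^{\mathrm{i}\theta}f'(e^{\mathrm{i}\theta})\big|\,d\theta = \int_{0}^{2\pi}|f'(e^{\mathrm{i}\theta})|\,d\theta.
\]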
\subsubsection{Proof of the upper bound on the spectral gap}
To prove Theorem \ref{thm:isoper}, we construct an adequate test function for $q_E^\Omega$ transplanting the eigenfunction of the unit disk $\mathbb{D}$ in the domain $\Omega$ thanks to the conformal map $f$. We obtain an upper bound on $\mu^\Omega(E)$ which is a second order polynomial in the spectral parameter $E>0$ and with coefficients depending on the geometry of $\Omega$. It translates into an optimization problem for the spectral parameter $E>0$ that we solve in the last step of the proof.
\begin{proof}[Proof of Theorem \ref{thm:isoper}]
Let us go through all the steps of the proof.\\
\paragraph{Step 1} Let us denote by $J_0$ (resp. $J_1$) the Bessel function of the first kind of order $0$ (resp. of order $1$). For $x \in \mathbb{D}$, consider $u_0(x) = J_0\big(E_1(\mathbb{D}) |x|\big) \in H^1(\mathbb{D}) \subset \dom(q_{E_1(\mathbb{D})}^{\mathbb{D}})$. As explained in Remark \ref{rem:fundisk}, $u(x) = (u_0(x), \mathrm{i} \frac{x_1 + \mathrm{i} x_2}{|x|}J_1\big(E_1(\mathbb{D}) |x|\big))^\top$ is an eigenfunction of $D^{\mathbb{D}}$ associated with $E_1(\mathbb{D})$. Theorem \ref{thm:vf} implies
\begin{align}\nonumber
0 = q_{E_1(\mathbb{D})}^\mathbb{D}(u_0) = &\ 2\pi E_1(\mathbb{D})^2\int_0^1 J_1\big(E_1(\mathbb{D}) r\big)^2 rdr - 2\pi E_1(\mathbb{D})^2\int_0^1 J_0\big(E_1(\mathbb{D}) r\big)^2 r dr\\&\quad + 2\pi E_1(\mathbb{D}) J_0\big(E_1(\mathbb{D})\big)^2.
\label{eqn:carvpdisk}
\end{align}
\paragraph{Step 2} For $x=(x_1,x_2)\in \Omega$, consider $v_0(x_1,x_2) = u_0(f^{-1}(x_1 +\mathrm{i} x_2)) \in H^1(\Omega) \subset \dom(q_E^\Omega)$. By the min-max principle, there holds
\begin{equation}
\mu^\Omega(E) \leq \frac{q_E^\Omega(v_0)}{\|v_0\|_{L^2(\Omega)}^2} = \frac{\|\nabla v_0\|_{L^2(\Omega)}^2 + E \|v_0\|_{L^2(\partial\Omega)}^2}{\|v_0\|_{L^2(\Omega)}^2} - E^2,
\label{eqn:ub1step}
\end{equation}
where we have used that $v_0$ is real valued to ensure that $\|\nabla v_0\|_{L^2(\Omega)}^2 = 4\|\partial_{\bar z}v_0\|_{L^2(\Omega)}^2$.
\paragraph{Step 3} Now, as $f$ is a conformal map, we know that
\begin{equation}
\|\nabla v_0\|_{L^2(\Omega)}^2 = \|\nabla u_0\|_{L^2(\mathbb{D})}^2 = 2 \pi E_1(\mathbb{D})^2 \int_0^1 J_1\big(E_1(\mathbb{D}) r\big)^2 r dr.
\label{eqn:eqnfinal1}
\end{equation}
Using \eqref{eqn:perim}, we obtain
\begin{equation}
\|v_0\|_{L^2(\partial\Omega)}^2 = \int_{0}^{2\pi} |v_0(f(e^{\mathrm{i} \theta}))|^2 |f'(e^{\mathrm{i} \theta})| d\theta = J_0\big(E_1(\mathbb{D})\big)^2 |\partial \Omega|.
\label{eqn:eqnfinal2}
\end{equation}
Finally, the last integral reads
\begin{align}\nonumber
\|v_0\|_{L^2(\Omega)}^2 =& \int_0^1\int_0^{2\pi}|u_0(r)|^2 |f'(re^{\mathrm{i} \theta})|^2 r dr d\theta\\\nonumber=& \int_0^1 |u_0(r)|^2 \Big(\int_0^{2\pi}\Big|\sum_{n\geq 1} n c_n r^{n-1} e^{\mathrm{i} (n-1)\theta}\Big|^2 d\theta\Big) r dr\\
=& 2 \pi \sum_{n\geq 1} n |c_n|^2 M_n,\quad \text{where for } n\geq 1, M_n := n\int_{0}^1J_0\big(E_1(\mathbb{D})r\big)^2 r^{2n-1} dr,
\label{eqn:eqnfinal3}
\end{align}
where we have used Parseval's identity.
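For the reader's convenience, the Parseval step reads, for $0<r<1$,
\[
\int_0^{2\pi}\Big|\sum_{n\geq 1} n c_n r^{n-1} e^{\mathrm{i} (n-1)\theta}\Big|^2 d\theta = 2\pi \sum_{n\geq 1} n^2 |c_n|^2 r^{2n-2},
\]
so that $\displaystyle\int_0^1 |u_0(r)|^2\, 2\pi\sum_{n\geq1} n^2|c_n|^2 r^{2n-1}\,dr = 2\pi\sum_{n\geq1}n|c_n|^2 M_n$.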
\paragraph{Step 4}
Taking into account \eqref{eqn:eqnfinal1},\eqref{eqn:eqnfinal2} and \eqref{eqn:eqnfinal3}, \eqref{eqn:ub1step} becomes
\begin{align}
\nonumber\mu^\Omega(E) \leq 2\pi E_1(\mathbb{D})^2 \frac{\displaystyle\int_{0}^1J_1\big(E_1(\mathbb{D}) r\big)^2 rdr}{\displaystyle2\pi \sum_{n\geq 1}n|c_n|^2M_n} &\ - E^2\\&\ + E\frac{J_0\big(E_1(\mathbb{D})\big)^2|\partial\Omega|}{\displaystyle2\pi \sum_{n\geq 1}n|c_n|^2M_n}.\label{eqn:ub1}
\end{align}
Let us find a lower bound on the sequence $(M_n)_{n\geq 1}$. Using first an integration by parts we find
\[
M_n = \frac12J_0\big(E_1(\mathbb{D})\big)^2 + E_1(\mathbb{D})\int_{0}^1J_0\big(E_1(\mathbb{D}) r\big)J_1\big(E_1(\mathbb{D}) r\big) r^{2n} dr.
\]
In particular, for $n=1$, using the identity $\int_0^1 J_0(ar)^2 rdr = \frac12\big(J_0(a)^2 + J_1(a)^2\big)$ together with $J_0\big(E_1(\mathbb{D})\big) = J_1\big(E_1(\mathbb{D})\big)$, it gives
\begin{align}
M_1 = \int_0^1 J_0\big(E_1(\mathbb{D}) r\big)^2 rdr & =J_0\big(E_1(\mathbb{D}) \big)^2\label{eqn:intJ0} \\\nonumber & = \frac12J_0\big(E_1(\mathbb{D}) \big)^2 + E_1(\mathbb{D}) \int_{0}^1 J_0\big(E_1(\mathbb{D}) r\big) J_1\big(E_1(\mathbb{D}) r\big) r^2 dr,
\end{align}
so that $E_1(\mathbb{D}) \int_{0}^1 J_0\big(E_1(\mathbb{D}) r\big) J_1\big(E_1(\mathbb{D}) r\big) r^2 dr = \frac12 M_1$.
Now, for $n\geq1$, one notices that $h_1 := \Big(r \mapsto (J_0J_1)\big(E_1(\mathbb{D})r\big)r^2\Big)$ and $h_2 := \Big(r \mapsto r^{2n-2}\Big)$ are non-decreasing functions on $[0,1]$ and by Chebyshev's inequality for non-decreasing functions, we obtain
\[
M_n \geq \frac12 M_1 + \frac12M_1 \int_{0}^1r^{2n-2}dr = \frac{n}{2n-1}M_1.
\]
\]
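As a numerical sanity check of this lower bound, one can evaluate the $M_n$ directly; the following stdlib-only Python sketch uses a power series for $J_0$, composite Simpson quadrature, and the numerical value of $E_1(\mathbb{D})$ quoted in Section~\ref{sec:numerics} (illustrative only, not part of the proof):

```python
import math

E1 = 1.434695650819  # numerical value of E_1(D) quoted in the numerics section

def j0(x):
    # Bessel J0 via its power series: sum_k (-1)^k (x/2)^(2k) / (k!)^2
    s, term = 0.0, 1.0
    for k in range(1, 40):
        s += term
        term *= -(x * x / 4.0) / (k * k)
    return s

def simpson(f, a, b, m=2000):
    # composite Simpson rule on [a, b] with m (even) subintervals
    h = (b - a) / m
    s = f(a) + f(b)
    for i in range(1, m):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0

def M(n):
    # M_n = n * int_0^1 J0(E1 r)^2 r^(2n-1) dr
    return simpson(lambda r: n * j0(E1 * r) ** 2 * r ** (2 * n - 1), 0.0, 1.0)
```

One checks in particular that $M_1 \approx J_0\big(E_1(\mathbb{D})\big)^2$ and that $M_n \geq \frac{n}{2n-1}M_1$ for the first values of $n$.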
In particular, we have
\begin{align}
\nonumber 2\pi \sum_{n\geq 1}n |c_n|^2 M_n &\geq J_0\big(E_1(\mathbb{D})\big)^2\Big(2\pi |c_1|^2 + 2\pi\sum_{n\geq 2}\frac{n^2}{2n-1}|c_n|^2\Big)\\\nonumber& \geq J_0\big(E_1(\mathbb{D})\big)^2\Big(2\pi |c_1|^2 + \pi\sum_{n\geq 2}n|c_n|^2\Big) \\\nonumber
& = J_0\big(E_1(\mathbb{D})\big)^2 (\pi |c_1|^2 + |\Omega|)\\
& \label{eqn:lb1}\geq J_0\big(E_1(\mathbb{D})\big)^2(\pi |r_i|^2 + |\Omega|),
\end{align}
where we have used Proposition \ref{eqn:area} and Proposition \ref{prop:Koebe}. Remark that in the first two inequalities above we have equality if and only if $c_n = 0$ for all $n\geq2$. Similarly, in the last inequality, we have equality if and only if $|c_1| = r_i$. In particular, there is equality in the above inequalities if and only if $f(z) = c_1 z$ and $\Omega$ is a disk centered at $0$ of radius $r_i$.
Combining \eqref{eqn:carvpdisk} and \eqref{eqn:lb1} in \eqref{eqn:ub1}, we obtain
\begin{multline*}
\mu^\Omega(E) \leq - E^2 + \frac{2 \pi E_1(\mathbb{D})^2 \int_0^1 J_0\big(E_1(\mathbb{D})r\big)^2 rdr + J_0\big(E_1(\mathbb{D})\big)^2\Big( E |\partial\Omega| - 2\pi E_1(\mathbb{D})\Big)}{J_0\big(E_1(\mathbb{D})\big)^2(\pi r_i^2 + |\Omega|)}.
\end{multline*}
Using \eqref{eqn:intJ0}, we obtain
\begin{align*}
\mu^\Omega(E) &\leq -E^2 + \frac{2\pi E_1(\mathbb{D})^2 + \big(E |\partial\Omega| - 2\pi E_1(\mathbb{D}) \big)}{\pi r_i^2 + |\Omega|}\\ &= \frac{\big(2\pi E_1(\mathbb{D})^2 - (\pi r_i^2 + |\Omega|)E^2\big) + \big(E |\partial\Omega| - 2\pi E_1(\mathbb{D})\big)}{\pi r_i^2 + |\Omega|}\\ &= \frac{P(E)}{\pi r_i^2 + |\Omega|}, \quad P(E) := -E^2(\pi r_i^2 + |\Omega|)+ E |\partial\Omega| +2\pi E_1(\mathbb{D})\big(E_1(\mathbb{D})-1\big).
\end{align*}
\paragraph{Step 5}
Remark that by \eqref{eqn:lbBFSVdB}, there holds $E_1(\mathbb{D}) -1 \geq \sqrt{2} - 1 >0$. In particular, the discriminant of $P$ satisfies
\[
\delta(P) := |\partial\Omega|^2 + 8\pi E_1(\mathbb{D})\big(E_1(\mathbb{D}) - 1\big)(\pi r_i^2 +|\Omega|)>0.
\]
Thus, $P$ has two real roots and as $P(0)>0$, the only positive root is
\[
E_{\rm crit} := \frac{|\partial\Omega| + \sqrt{|\partial\Omega|^2 + 8\pi E_1(\mathbb{D})\big(E_1(\mathbb{D}) - 1\big)(\pi r_i^2 + |\Omega|)}}{2(\pi r_i^2 +|\Omega|)}.
\]
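A consistency check: for the unit disk itself ($r_i = 1$, $|\Omega| = \pi$, $|\partial\Omega| = 2\pi$) the discriminant collapses to $4\pi^2\big(2E_1(\mathbb{D})-1\big)^2$, so that $E_{\rm crit} = E_1(\mathbb{D})$ and the bound is saturated. This can be verified in a few lines of stdlib Python (the numerical value of $E_1(\mathbb{D})$ is the one quoted in Section~\ref{sec:numerics}):

```python
import math

E1 = 1.434695650819   # E_1(D), quoted in the numerics section
perim = 2 * math.pi   # |dOmega| for the unit disk
area = math.pi        # |Omega|
r_i = 1.0             # inradius

disc = perim ** 2 + 8 * math.pi * E1 * (E1 - 1) * (math.pi * r_i ** 2 + area)
E_crit = (perim + math.sqrt(disc)) / (2 * (math.pi * r_i ** 2 + area))
```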
One obtains $\mu^\Omega(E_{\rm crit}) \leq \frac{P(E_{\rm crit})}{\pi r_i^2 + |\Omega|} = 0$ and by Proposition \ref{prop:monotonicity} and Theorem \ref{thm:vf} we get
\[
E_1(\mathbb{D}) \leq E_{\rm crit}
\]
which is precisely Theorem \ref{thm:isoper}.
\end{proof}
\section{About the Faber-Krahn conjecture}\label{sec:aboutFK}
In this section we discuss how the variational formulation established in Theorem \ref{thm:vf} can be used to investigate Conjecture \ref{conj:FK}. \S \ref{par:newconj} deals with a new Faber-Krahn type conjecture for the operator $H_E^\Omega$ introduced in \S \ref{subsec:para1} and how this new conjecture is related to Conjecture \ref{conj:FK}. In \S \ref{par:BosselDaners}, we discuss how the well-known Bossel-Daners inequality for the Robin Laplacian is linked to Conjecture \ref{conj:FK} (see \cite{Boss86,Dan06}).
\subsection{A new conjecture}\label{par:newconj}
Let us introduce a new Faber-Krahn type conjecture for $\mu^\Omega(E)$, the first eigenvalue of $H_E^\Omega$.
\begin{conj}Let $\Omega \subset \mathbb{R}^2$ be $C^\infty$ and simply connected. For all $E>0$, there holds
\[
\mu^\Omega(E) \geq \frac{\pi}{|\Omega|}\mu^\mathbb{D}\Big(\sqrt{\frac{|\Omega|}{\pi}}E\Big).
\]
Moreover, there is equality in the above inequality if and only if $\Omega$ is a disk.
\label{conj:2}
\end{conj}
It turns out that Conjecture \ref{conj:2} is equivalent to Conjecture \ref{conj:FK}, and this is what we prove in the rest of this paragraph.
\begin{proof} First, remark that a simple scaling argument gives, for all $E>0$, that
\[
\sqrt{\frac{\pi}{|\Omega|}}E_1(\mathbb{D}) = E_1(\rho \mathbb{D}), \quad \mu^{\rho \mathbb{D}}(E) = \frac{\pi}{|\Omega|}\mu^{\mathbb{D}}\Big(\sqrt{\frac{|\Omega|}{\pi}}E\Big)\quad \text{where } \rho := \sqrt{\frac{|\Omega|}{\pi}}.
\]
Second, assume that Conjecture \ref{conj:FK} holds true. If $\Omega$ is a disk, there holds $\mu^\Omega(E) = \mu^{\rho\mathbb{D}}(E)$ so now, we assume that $\Omega$ is not a disk. Let us prove that for all $E>0$ there holds
\[
\mu^\Omega(E) > \mu^{\rho \mathbb{D}}(E).
\]
Let us reason by \textit{reductio ad absurdum} and assume there exists $E_\star > 0$ such that $\mu^\Omega(E_\star) \leq \mu^{\rho \mathbb{D}}(E_\star)$.
\underline{Case $E_\star < E_1(\rho\mathbb{D})$.} By hypothesis and Proposition \ref{prop:monotonicity}, there holds
\begin{align*}
\mu^\Omega(E_\star) \leq \mu^{\rho \mathbb{D}}(E_\star) &\leq \frac{E_1(\rho\mathbb{D})}{E_\star} \mu^{\rho\mathbb{D}}\big(E_1(\rho\mathbb{D})\big) - E_1(\rho\mathbb{D})(E_1(\rho\mathbb{D}) - E_\star)\\
& = - E_1(\rho\mathbb{D})(E_1(\rho\mathbb{D}) - E_\star) < 0.
\end{align*}
In particular, $\mu^\Omega(E_\star) < 0$ which implies $E_\star > E_1(\Omega)$. However, if Conjecture \ref{conj:FK} holds true we obtain $E_\star > E_1(\Omega) > E_1(\rho \mathbb{D})$ which contradicts our hypothesis.
\underline{Case $E_1(\rho\mathbb{D}) \leq E_\star \leq E_1(\Omega)$.} By hypothesis and Proposition \ref{prop:monotonicity}, there holds
\[
0 \leq \mu^\Omega(E_\star) \leq \mu^{\rho \mathbb{D}}(E_\star) \leq 0,
\]
which contradicts our hypothesis: we obtain $E_\star = E_1(\Omega) = E_1(\rho\mathbb{D})$, but we have assumed that $\Omega$ is not a disk, hence this equality cannot hold if Conjecture \ref{conj:FK} holds true.
\underline{Case $E_\star > E_1(\Omega)$.} By hypothesis and Proposition \ref{prop:monotonicity}, there holds
\begin{align*}
0 = \mu^\Omega\big(E_1(\Omega)\big) &\leq \frac{E_\star}{E_1(\Omega)}\mu^\Omega(E_\star) - E_\star\big(E_\star - E_1(\Omega)\big)\\& \leq \frac{E_\star}{E_1(\Omega)}\mu^{\rho \mathbb{D}}(E_\star) - E_\star\big(E_\star - E_1(\Omega)\big).
\end{align*}
In particular, we obtain $\mu^{\rho \mathbb{D}}(E_\star) \geq E_1(\Omega)\big(E_\star -E_1(\Omega)\big) > 0$. Hence, $E_\star < E_1(\rho \mathbb{D})$ which contradicts Conjecture \ref{conj:FK}.
Consequently, we have proved that if Conjecture \ref{conj:FK} holds true so does Conjecture \ref{conj:2}.
Finally, let us assume that Conjecture \ref{conj:2} holds true. If $\Omega$ is a disk, we obtain that for all $E> 0$, $\mu^\Omega(E) = \mu^{\rho\mathbb{D}}(E)$. In particular, at $E = E_1(\Omega)$ we get $\mu^{\rho\mathbb{D}}\big(E_1(\Omega)\big) = 0$ and $E_1(\rho \mathbb{D}) = E_1(\Omega)$.
When $\Omega$ is not a disk, for all $E>0$ there holds $\mu^{\rho \mathbb{D}}(E) < \mu^\Omega(E)$. At $E=E_1(\Omega)$ we obtain $\mu^{\rho\mathbb{D}}\big(E_1(\Omega)\big) < 0$ and by Proposition \ref{prop:monotonicity} we obtain $E_1(\rho\mathbb{D}) < E_1(\Omega)$, which is precisely Conjecture \ref{conj:FK}.
\end{proof}
\subsection{Link with the Bossel-Daners inequality}\label{par:BosselDaners}
The first eigenvalue of the Robin Laplacian with positive parameter $E>0$ in the domain $\Omega$, denoted $\lambda_{\rm Rob}^\Omega(E)$, is given by the variational characterization
\[
\lambda_{\rm Rob}^\Omega(E) := \inf_{u \in C^\infty(\overline{\Omega})\setminus\{0\}} \frac{\|\nabla u\|_{L^2(\Omega)}^2 + E \int_{\partial\Omega}|u|^2 ds}{\|u\|_{L^2(\Omega)}^2}
\]
and the Bossel-Daners inequality states that
\begin{equation}
\lambda_{\rm Rob}^\Omega(E) \geq \frac{\pi}{|\Omega|}\lambda_{\rm Rob}^\mathbb{D}\Big(\sqrt{\frac{|\Omega|}{\pi}}E\Big),
\label{eqn:Boss-Dan}
\end{equation}
with equality if and only if $\Omega$ is a disk.
Note that the structure of \eqref{eqn:Boss-Dan} is similar to that of Conjecture \ref{conj:2} and it turns out they are intimately connected. This is the purpose of the following proposition.
\begin{prop} Conjecture \ref{conj:FK} implies the Bossel-Daners inequality \eqref{eqn:Boss-Dan}.
\end{prop}
\begin{proof} As Conjecture \ref{conj:FK} is equivalent to Conjecture \ref{conj:2} as discussed in \S \ref{par:newconj}, we can assume that Conjecture \ref{conj:2} holds. Let us start by remarking that for all $E>0$, if $u\in \dom(H_E^\mathbb{D})$ is a normalized eigenfunction associated with $\mu^{\mathbb{D}}(E)$ then $u$ can be chosen real-valued. Hence, we get
\begin{align}
\nonumber\mu^{\mathbb{D}}(E) &= \inf_{v \in C^\infty(\overline{\mathbb{D}},\mathbb{R})\setminus\{0\}} \frac{\|\nabla v\|_{L^2(\mathbb{D})}^2 - E^2\|v\|_{L^2(\mathbb{D})}^2 + E \int_{\partial\mathbb{D}}|v|^2 ds}{\|v\|_{L^2(\mathbb{D})}^2}\\
& = \lambda_{\rm Rob}^{\mathbb{D}}(E) - E^2.\label{eqn:minfin2}
\end{align}
Now, we remark that for any domain $\Omega$ there holds
\begin{align}\nonumber
\lambda_{\rm Rob}^\Omega(E) - E^2 &= \inf_{v \in C^\infty(\overline{\Omega},\mathbb{R})\setminus\{0\}} \frac{\|\nabla v\|_{L^2(\Omega)}^2 - E^2\|v\|_{L^2(\Omega)}^2 + E \int_{\partial\Omega}|v|^2 ds}{\|v\|_{L^2(\Omega)}^2}\\&\nonumber = \inf_{v \in C^\infty(\overline{\Omega},\mathbb{R})\setminus\{0\}} \frac{4\|\partial_{\bar z} v\|_{L^2(\Omega)}^2 - E^2\|v\|_{L^2(\Omega)}^2 + E \int_{\partial\Omega}|v|^2 ds}{\|v\|_{L^2(\Omega)}^2}\\
& \nonumber\geq \inf_{v \in \dom(q_E^\Omega)\setminus\{0\}} \frac{4\|\partial_{\bar z} v\|_{L^2(\Omega)}^2 - E^2\|v\|_{L^2(\Omega)}^2 + E \int_{\partial\Omega}|v|^2 ds}{\|v\|_{L^2(\Omega)}^2}\\
& = \mu^\Omega(E).\label{eqn:eqmin1fin}
\end{align}
Hence, using \eqref{eqn:minfin2} and \eqref{eqn:eqmin1fin}, we get
\[
\lambda_{\rm Rob}^\Omega(E) - E^2 \geq \mu^\Omega(E) \geq \frac{\pi}{|\Omega|}\mu^\mathbb{D}\Big(\sqrt{\frac{|\Omega|}{\pi}}E\Big) = \frac{\pi}{|\Omega|}\lambda_{\rm Rob}^\mathbb{D}\Big(\sqrt{\frac{|\Omega|}{\pi}}E\Big) - E^2.
\]
If $\Omega$ is a disk, all the above inequalities are equalities. Otherwise, we obtain
\[
\lambda_{\rm Rob}^\Omega(E) > \frac{\pi}{|\Omega|}\lambda_{\rm Rob}^\mathbb{D}\Big(\sqrt{\frac{|\Omega|}{\pi}}E\Big),
\]
which is precisely the Bossel-Daners inequality \eqref{eqn:Boss-Dan}.
\end{proof}
\section{Numerics}\label{sec:numerics}
The goal of this section is to illustrate numerically some of the theoretical results discussed in the previous sections and to support the validity of Conjecture \ref{conj:FK}.
In \S \ref{nummet}, we discuss the two numerical schemes we have employed in \S \ref{numres} in order to study the principal eigenvalue of the Dirac operator with infinite mass boundary conditions in various domains $\Omega$. We also discuss the structure of the associated eigenfunctions.
\subsection{Numerical Methods}
\label{nummet}
In this paragraph we present a brief description of the numerical methods that we use in this work.
We have implemented two different numerical approaches, respectively to calculate the eigenvalues of the Dirac operator with infinite mass boundary conditions, directly from the formulation of the eigenvalue problem and to solve the minimization problem associated with the non-linear variational characterization \eqref{eqn:firstminmax}, defining $\mu^\Omega(E)$.
The eigenvalues of the Dirac operator with infinite mass boundary conditions are calculated using a numerical method based on Radial Basis Functions (RBF) (see \textit{e.g.}~\cite{Kansa,Fornberg}). We have chosen a set of RBF centers $y_1,...,y_N\in\mathbb{R}^2$, for some $N\in\mathbb{N}$, which are generated by a node repel algorithm (see~\cite{A19} for details). The eigenfunction $u=(u_1,u_2)^\top$ is defined in $H^1(\Omega,\mathbb{C}^2)$ and we use the notation $u_1=v_1+\mathrm{i} w_1$ and $u_2=v_2+\mathrm{i} w_2$, where $v_1$, $w_1$ and $v_2$, $w_2$ are the real and imaginary parts of $u_1$ and $u_2$, respectively. The RBF numerical approximation for each of these functions is defined by
\begin{equation}
\label{rbf}
\begin{array}{c}
v_1(x)=\sum_{j=1}^N\alpha^{(1)}_j\phi_j(x),\quad w_1(x)=\sum_{j=1}^N\beta^{(1)}_j\phi_j(x),\\ v_2(x)=\sum_{j=1}^N\alpha^{(2)}_j\phi_j(x),\quad w_2(x)=\sum_{j=1}^N\beta^{(2)}_j\phi_j(x),\quad
\end{array}
\end{equation}
where $\phi_j(x)=\phi(|x-y_j|)$, for some function $\phi:\mathbb{R}_0^+\rightarrow\mathbb{R}$. Several RBF functions can be considered (see \textit{e.g.}~\cite{Fornberg,A19}), but in this work we consider the multiquadric one, $\phi(r)=\sqrt{1+(\epsilon r)^2}$, for some $\epsilon>0$.
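To make the collocation idea concrete, here is a minimal stdlib-only Python sketch of multiquadric RBF interpolation on a handful of 2D centers (a toy interpolation problem, not the Dirac eigenvalue solver; center locations and $\epsilon$ are illustrative):

```python
import math

def multiquadric(r, eps):
    # phi(r) = sqrt(1 + (eps r)^2)
    return math.sqrt(1.0 + (eps * r) ** 2)

def solve(A, b):
    # Gaussian elimination with partial pivoting (fine for tiny dense systems)
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda i: abs(M[i][c]))
        M[c], M[p] = M[p], M[c]
        for i in range(c + 1, n):
            f = M[i][c] / M[c][c]
            for k in range(c, n + 1):
                M[i][k] -= f * M[c][k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][k] * x[k] for k in range(i + 1, n))) / M[i][i]
    return x

def rbf_fit(centers, values, eps):
    # collocation: solve A c = values with A_ij = phi(|x_i - x_j|)
    A = [[multiquadric(math.dist(ci, cj), eps) for cj in centers] for ci in centers]
    return solve(A, values)

def rbf_eval(x, centers, coeffs, eps):
    return sum(a * multiquadric(math.dist(x, c), eps) for a, c in zip(coeffs, centers))

# toy data: interpolate f(x, y) = x + y on five centers (locations illustrative)
centers = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (0.5, 0.5)]
values = [x + y for x, y in centers]
coeffs = rbf_fit(centers, values, 1.0)
```

By construction the interpolant reproduces the data at the centers; the multiquadric collocation matrix is invertible for distinct centers.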
The eigenvalue problem for the Dirac operator with infinite mass boundary conditions can be written as
\[ \left\{ \begin{array}{cl}
-\frac{\partial v_2}{\partial x_2}+\frac{\partial w_2}{\partial x_1}+\mathrm{i}\left(-\frac{\partial v_2}{\partial x_1}-\frac{\partial w_2}{\partial x_2}\right) = E \left(v_1+\mathrm{i} w_1\right) & \text{ in } \Omega \\
\frac{\partial w_1}{\partial x_1}+\frac{\partial v_1}{\partial x_2}+\mathrm{i}\left(-\frac{\partial v_1}{\partial x_1}+\frac{\partial w_1}{\partial x_2}\right) = E \left(v_2+\mathrm{i} w_2\right) & \text{ in } \Omega \\
\left(v_2+\mathrm{i} w_2\right) = \mathrm{i} (n_1+\mathrm{i} n_2)(v_1+\mathrm{i} w_1) & \text{ on } \partial \Omega
\end{array} \right.\]
and splitting in real and imaginary parts we have
\begin{equation}
\label{equationsnm}
\left\{ \begin{array}{cl}
-\frac{\partial v_2}{\partial x_2}+\frac{\partial w_2}{\partial x_1} = E v_1 & \text{ in } \Omega \\
-\frac{\partial v_2}{\partial x_1}-\frac{\partial w_2}{\partial x_2} = E w_1 & \text{ in } \Omega \\
\frac{\partial w_1}{\partial x_1}+\frac{\partial v_1}{\partial x_2} = E v_2 & \text{ in } \Omega \\
-\frac{\partial v_1}{\partial x_1}+\frac{\partial w_1}{\partial x_2} = E w_2 & \text{ in } \Omega \\
v_2 = -n_1w_1-n_2v_1 & \text{ on } \partial \Omega \\
w_2=n_1v_1-n_2w_1 & \text{ on } \partial \Omega
\end{array} \right.
\end{equation}
These equations are imposed at a discrete set of interior and boundary points. We consider $M^{\partial\Omega}\in\mathbb{N}$ points $p_1,...,p_{M^{\partial\Omega}}$ uniformly distributed on $\partial\Omega$ and $M^{\Omega}\in\mathbb{N}$ points $q_1,...,q_{M^\Omega}$ located at a grid defined on $\Omega$. Then, we calculate the matrices
\[\mathbf{M}^\Omega=\begin{bmatrix}
\phi_1(q_1) & \cdots & \phi_N(q_1)\\
\vdots & \ddots & \vdots\\
\phi_1(q_{M^{\Omega}}) & \cdots & \phi_N(q_{M^{\Omega}})
\end{bmatrix},\quad \mathbf{M}_1^\Omega=\begin{bmatrix}
\partial_1\phi_1(q_1) & \cdots & \partial_1\phi_N(q_1)\\
\vdots & \ddots & \vdots\\
\partial_1\phi_1(q_{M^{\Omega}}) & \cdots & \partial_1\phi_N(q_{M^{\Omega}})
\end{bmatrix},\]
\[\mathbf{M}_2^\Omega=\begin{bmatrix}
\partial_2\phi_1(q_1) & \cdots & \partial_2\phi_N(q_1)\\
\vdots & \ddots & \vdots\\
\partial_2\phi_1(q_{M^{\Omega}}) & \cdots & \partial_2\phi_N(q_{M^{\Omega}})
\end{bmatrix},\quad\mathbf{M}^{\partial\Omega}=\begin{bmatrix}
\phi_1(p_1) & \cdots & \phi_N(p_1)\\
\vdots & \ddots & \vdots\\
\phi_1(p_{M^{\partial\Omega}}) & \cdots & \phi_N(p_{M^{\partial\Omega}})
\end{bmatrix},\]
and
\[\mathbf{M}_1^{\partial\Omega}=\begin{bmatrix}
n_1(p_1)\phi_1(p_1) & \cdots & n_1(p_1)\phi_N(p_1)\\
\vdots & \ddots & \vdots\\
n_1(p_{M^{\partial\Omega}})\phi_1(p_{M^{\partial\Omega}}) & \cdots & n_1(p_{M^{\partial\Omega}})\phi_N(p_{M^{\partial\Omega}})
\end{bmatrix},\]
\[\mathbf{M}_2^{\partial\Omega}=\begin{bmatrix}
n_2(p_1)\phi_1(p_1) & \cdots & n_2(p_1)\phi_N(p_1)\\
\vdots & \ddots & \vdots\\
n_2(p_{M^{\partial\Omega}})\phi_1(p_{M^{\partial\Omega}}) & \cdots & n_2(p_{M^{\partial\Omega}})\phi_N(p_{M^{\partial\Omega}})
\end{bmatrix}.\]
Taking into account the definitions of the RBF linear combinations \eqref{rbf}, the numerical approximations for the eigenvalues are the values $E$ for which we have nonzero solutions of the overdetermined system of linear equations
\begin{equation}
\label{matric}\begin{bmatrix}\mathbf{0}\\
\mathbf{0}\\
\mathbf{0}\\
\mathbf{0}\\
\mathbf{0}\\
\mathbf{0}
\end{bmatrix}=
\left(\begin{bmatrix}
\mathbf{0} & \mathbf{0} & -\mathbf{M}_2^\Omega &\mathbf{M}_1^\Omega\\
\mathbf{0} & \mathbf{0} & -\mathbf{M}_1^\Omega &-\mathbf{M}_2^\Omega\\
\mathbf{M}_2^\Omega &\mathbf{M}_1^\Omega&\mathbf{0} & \mathbf{0} \\
-\mathbf{M}_1^\Omega &\mathbf{M}_2^\Omega&\mathbf{0} & \mathbf{0} \\
\mathbf{M}_2^{\partial\Omega}&\mathbf{M}_1^{\partial\Omega}&\mathbf{M}^{\partial\Omega}&\mathbf{0}\\
-\mathbf{M}_1^{\partial\Omega}&\mathbf{M}_2^{\partial\Omega}&\mathbf{0}&\mathbf{M}^{\partial\Omega}
\end{bmatrix}-E\begin{bmatrix}
\mathbf{M}^\Omega&\mathbf{0}&\mathbf{0}&\mathbf{0}\\
\mathbf{0}&\mathbf{M}^\Omega&\mathbf{0}&\mathbf{0}\\
\mathbf{0}&\mathbf{0}&\mathbf{M}^\Omega&\mathbf{0}\\
\mathbf{0}&\mathbf{0}&\mathbf{0}&\mathbf{M}^\Omega\\
\mathbf{0}&\mathbf{0}&\mathbf{0}&\mathbf{0}\\
\mathbf{0}&\mathbf{0}&\mathbf{0}&\mathbf{0}
\end{bmatrix}\right).\begin{bmatrix}\mathbf{\alpha}^{(1)}\\
\mathbf{\beta}^{(1)}\\
\mathbf{\alpha}^{(2)}\\
\mathbf{\beta}^{(2)}
\end{bmatrix}.
\end{equation}
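The eigenvalues are thus the values of $E$ at which the pencil in \eqref{matric} admits nonzero solutions, i.e. becomes rank deficient. For a square toy pencil this reduces to a sign change of $\det(A - E B)$, which the following stdlib-only sketch locates by bisection (the overdetermined system above is treated differently in practice, e.g. through its smallest singular value; the $2\times2$ matrices below are purely illustrative):

```python
def det(M):
    # Laplace expansion; fine for the tiny matrices of this toy
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def pencil_eigen(A, B, a, b, tol=1e-12):
    # bisection on E -> det(A - E B); assumes a single sign change on [a, b]
    n = len(A)
    f = lambda E: det([[A[i][j] - E * B[i][j] for j in range(n)] for i in range(n)])
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

# toy pencil with eigenvalues -2 and 2; the bracket [1, 3] isolates the positive one
A = [[0.0, 2.0], [2.0, 0.0]]
B = [[1.0, 0.0], [0.0, 1.0]]
E_found = pencil_eigen(A, B, 1.0, 3.0)
```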
The numerical solution of the minimization problem associated to the non-linear variational characterization is obtained directly from \eqref{eqn:firstminmax}, defining the function
\[\mathcal{F}(\alpha_1^{(1)},...,\alpha_N^{(1)},\beta_1^{(1)},...,\beta_N^{(1)})=\frac{4 \int_\Omega |\partial_{\bar z} u_1|^2 dx - E^2 \int_{\Omega}|u_1|^2dx + E \int_{\partial\Omega} |u_1|^2 ds}{\int_\Omega |u_1|^2 dx}\]
that we minimize by a gradient type method. We refer to~\cite{A19} for details about the numerical quadratures to approximate the boundary and volume integrals in the definition of $\mathcal{F}$.
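The gradient-type minimization of $\mathcal{F}$ follows the standard pattern of descending a Rayleigh-type quotient; the following stdlib-only toy applies it to $x \mapsto \langle x, Ax\rangle/\langle x,x\rangle$ for a small symmetric matrix (a sketch of the principle only, not the quadrature-based evaluation of $\mathcal{F}$ from~\cite{A19}):

```python
def rayleigh_minimize(A, x, steps=2000, lr=0.1):
    # gradient descent on R(x) = <x, Ax> / <x, x> for a small symmetric matrix A
    n = len(A)
    for _ in range(steps):
        Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        nx = sum(v * v for v in x)
        rho = sum(x[i] * Ax[i] for i in range(n)) / nx
        grad = [2.0 * (Ax[i] - rho * x[i]) / nx for i in range(n)]
        x = [x[i] - lr * grad[i] for i in range(n)]
    Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    return sum(x[i] * Ax[i] for i in range(n)) / sum(v * v for v in x)

# toy symmetric matrix with eigenvalues 1 and 3; the quotient converges to 1
rho_min = rayleigh_minimize([[1.0, 0.0], [0.0, 3.0]], [1.0, 1.0])
```

Since the gradient of the quotient is orthogonal to $x$, the iteration drives the quotient down to the smallest eigenvalue without normalization tricks.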
\subsection{Numerical Results}
\label{numres}
We start by testing our numerical algorithm for the calculation of the eigenvalues of the Dirac operator with infinite mass boundary conditions in the case of the unit disk, for which we know that the principal eigenvalue $E_1(\mathbb{D})$ is the smallest non-negative solution of the equation
\[J_0(\mu)=J_1(\mu)\]
and we have $E_1(\mathbb{D})=1.434695650819...$ In Table~\ref{table:numerrors} we show the absolute errors of the numerical approximations for the principal eigenvalue $E_1(\mathbb{D})$, for several choices of $\epsilon$ and $N$, and observe that the numerical method can be highly accurate, even with a moderate value of $N$.
\begin{table}[!ht]
\centering
\begin{tabular}{|c|c|c|c|}
\hline
& N=242 & N=323 & N=402 \\
\hline$\epsilon=5$& $4.45\times10^{-7}$ & $8.55\times10^{-8}$& $1.33\times10^{-8}$\\
\hline$\epsilon=10$& $1.30\times10^{-5}$ &$2.78\times10^{-6}$ &$4.93\times10^{-8}$ \\
\hline$\epsilon=15$& $4.92\times10^{-5}$ & $9.21\times10^{-6}$& $1.16\times10^{-6}$\\
\hline
\end{tabular}
\caption{Absolute errors of the numerical approximations for the principal eigenvalue $E_1(\mathbb{D})$, for several choices of $\epsilon$ and $N$.}
\label{table:numerrors}
\end{table}
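The reference value $E_1(\mathbb{D})=1.434695650819...$ can be reproduced independently of the RBF scheme with a short stdlib-only root-finder, evaluating $J_0$ and $J_1$ by their power series and bisecting $J_0(\mu)-J_1(\mu)$ on $[1,2]$:

```python
import math

def bessel_j(n, x, terms=40):
    # J_n via its power series: sum_k (-1)^k (x/2)^(n+2k) / (k! (n+k)!)
    s, term = 0.0, (x / 2.0) ** n / math.factorial(n)
    for k in range(1, terms):
        s += term
        term *= -(x * x / 4.0) / (k * (n + k))
    return s

def bisect(f, a, b, tol=1e-13):
    # plain bisection; assumes f changes sign on [a, b]
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

# E_1(D) is the smallest positive root of J0(mu) = J1(mu); it lies in [1, 2]
E1 = bisect(lambda mu: bessel_j(0, mu) - bessel_j(1, mu), 1.0, 2.0)
```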
We have computed the principal eigenvalue for 2500 domains (with smooth boundary) randomly generated satisfying $|\Omega|=\pi$. The corresponding eigenvalues are plotted in Figure~\ref{fig:figure2}, as a function of the perimeter. We observe that the principal eigenvalue is minimized for the domain which also minimizes the perimeter. By the classical isoperimetric inequality it is well known that, for fixed area, the perimeter is minimized by the disk. Thus, these numerical results suggest that the Faber-Krahn type inequality stated in Conjecture \ref{conj:FK} should hold for the Dirac operator with infinite mass boundary conditions.
\begin{figure}[!ht]
\includegraphics[scale=0.6]{figura.png}
\caption{Plot of the principal eigenvalue for 2500 domains (with smooth boundary) randomly generated satisfying $|\Omega|=\pi$, as a function of the perimeter.}
\label{fig:figure2}
\end{figure}
Next, we present some numerical results for the minimization problem associated to the non-linear variational characterization \eqref{eqn:firstminmax}. Figure~\ref{fig:figure3} shows three domains (denoted by $\Omega_1$, $\Omega_2$ and $\Omega_3$) satisfying $|\Omega_i|=\pi,\ (i=1,2,3)$, chosen to illustrate the numerical results that we gathered. In Figure~\ref{fig:figure4} we plot $\mu^{\Omega_i}(E),\ i=1,2,3$ together with the curve $\mu^{\mathbb{D}}(E)$. We verify that for all $E>0$, we have
\[
\mu^{\Omega_i}(E) \geq \mu^\mathbb{D}(E),\ i=1,2,3
\]
which illustrates Conjecture~\ref{conj:2}.
\begin{figure}[!ht]
\includegraphics[width=0.32\textwidth]{domain1.png}
\includegraphics[width=0.32\textwidth]{domain2.png}
\includegraphics[width=0.32\textwidth]{domain3.png}
\caption{Plots of domains $\Omega_1$, $\Omega_2$ and $\Omega_3$.}
\label{fig:figure3}
\end{figure}
\begin{figure}[!ht]
\includegraphics[width=0.9\textwidth]{plots.png}
\caption{Plots of $\mu^{\Omega_i},\ i=1,2,3$, together with the curve $\mu^{\mathbb{D}}$ as a function of the spectral parameter $E>0$.}
\label{fig:figure4}
\end{figure}
Finally, Figure~\ref{fig:figure5} shows the absolute value (left plots) and argument (right plots) of a (normalized) eigenfunction associated to the principal eigenvalue of the domains $\Omega_i,\ i=1,2,3$. Remark that the point of maximal modulus seems to be located at the incenter of $\Omega_i$, which is in line with our choice of test function in the proof of Theorem \ref{thm:ineq}. However, there is absolutely no reason for the associated eigenfunction to be real-valued and this has two consequences. First, Theorem \ref{thm:ineq} could be improved if one considers an adequate test function in the domain of the operator and not only in the form domain as we do. Second, Conjecture \ref{conj:FK} cannot be reduced to the Bossel-Daners inequality because, contrary to the Robin eigenvalue problem, there is \textit{a priori} no reason for an eigenfunction to have a constant argument, as illustrated in Figure~\ref{fig:figure5}.
\begin{figure}[!ht]
\includegraphics[width=0.49\textwidth]{dom1abs.png}
\includegraphics[width=0.49\textwidth]{dom1arg.png}
\includegraphics[width=0.49\textwidth]{dom2abs.png}
\includegraphics[width=0.49\textwidth]{dom2arg.png}
\includegraphics[width=0.49\textwidth]{dom3abs.png}
\includegraphics[width=0.49\textwidth]{dom3arg.png}
\caption{Plots of the absolute value (left plots) and argument (right plots) of the eigenfunction associated to the principal eigenvalue of $\Omega_i,\ i=1,2,3$.}
\label{fig:figure5}
\end{figure}
\clearpage
\section*{Acknowledgments}
The work of R.~D.~Benguria has been partially supported by FONDECYT (Chile) project 116-0856.
R.~D.~Benguria, V.~Lotoreichik and T.~Ourmi\`eres-Bonafos are very grateful to the American Institute of Mathematics (AIM) for supporting their participation to the AIM workshop \emph{Shape optimization with surface interactions} in 2019, where this project was initiated.
T.~Ourmi\`eres-Bonafos thanks Nicolas Raymond for pointing out that the projectors introduced in Definition \ref{def:szeproj} are named after the famous mathematician G\'abor Szeg\"o.
COVID-19 emerged in December 2019 in the city of Wuhan in China~\cite{zhu2020novel} and disseminated around the world, leading the World Health Organization to declare the COVID-19 outbreak a pandemic. The disease is caused by the SARS-CoV-2 virus and the leading cause of death is respiratory failure due to severe viral pneumonia~\cite{zhou2020clinical}. Chest computed tomography (CT) has rapidly gained a major role for COVID-19 diagnosis. Indeed, despite being considered as the gold standard to make a definitive diagnosis, reverse transcription polymerase chain reaction (RT-PCR) suffers from false negatives, shortage of available supply test kits and long turnaround times~\cite{ai2020correlation,fang2020sensitivity,xie2020chest}.
Artificial intelligence has gained significant attention during the past decade and many applications have been proposed in medical imaging, including segmentation and characterization tasks such as lung cancer screening on CT~\cite{chassagnon2020artificial,litjens2017survey,ardila2019end}. A few studies have already reported deep learning to diagnose COVID-19 pneumonia on chest radiograph~\cite{wang2020covid} or CT~\cite{li2020artificial}. Other authors used deep learning to quantify COVID-19 disease extent on CT but none of them used a multi-centric cohort while providing comparisons with segmentations done by radiologists~\cite{chaganti2020quantification,huang2020serial}. Disease extent is the only parameter that can be visually estimated on chest CT to quantify disease severity~\cite{li2020ct,yuan2020association}, but visual quantification is difficult and usually coarse. Several AI-based tools have been recently developed to quantify interstitial lung diseases (ILD)~\cite{jacob2016mortality,humphries2017idiopathic,10.1007/978-3-030-00937-3_75,anthimopoulos2018semantic}, which share common CT features with COVID-19 pneumonia, especially a predominance of ground glass opacities. In this study, we investigated a fully automatic method (Figure~\ref{fig:fig1}) for disease quantification, staging and short-term prognosis. The approach relied on (i) a disease quantification solution that exploited 2D and 3D convolutional neural networks using an ensemble method, (ii) a biomarker discovery approach seeking to determine the shared space of features that are the most informative for staging and prognosis, and (iii) a robust ensemble supervised classification method to distinguish patients with severe vs non-severe short-term outcome and, among severe patients, those intubated from those who did not survive.
\begin{figure}[t!]
\centering
\includegraphics[width=\linewidth]{figures/Figure2.png}
\caption{Comparison between automated and manual segmentations. Delineation of the diseased areas on chest CT in a COVID-19 patient. First row: input, AI segmentation, expert I segmentation, expert II segmentation. Second row: box-plot comparisons in terms of Dice similarity and Hausdorff distance between the AI solution, expert I and expert II, and plot of the correlation between the disease extent automatically measured and the average disease extent measured from the $2$ manual segmentations. Disease extent is expressed as the percentage of lung affected by the disease. Third row: statistical measures on comparisons between AI, expert I, and expert II segmentations.}
\label{fig:fig2}
\end{figure}
\section*{Part I: Disease Quantification}
In the context of this work, we report a deep learning-based segmentation tool to quantify COVID-19 disease and lung volume. For this purpose, we used an ensemble network approach inspired by the AtlasNet framework~\cite{10.1007/978-3-030-00937-3_75}. We investigated a combination of 2D slice-based~\cite{badrinarayanan2017segnet} and 3D patch-based ensemble architectures~\cite{cciccek20163d}. The development of the deep learning-based segmentation solution was done on the basis of a multi-centric cohort of $478$ unenhanced chest CT scans (208,668 slices) of COVID-19 patients with positive RT-PCR. The multicentric dataset was acquired at $6$ hospitals, equipped with $4$ different CT models from $3$ different manufacturers, with different acquisition protocols and radiation dose (Table~\ref{tab:tab1}). Fifty CT exams from $3$ centers were used for training and $130$ CT exams from $3$ other centers were used for test (Table~\ref{tab:tab2}). Disease and lung were delineated on all $23,423$ images used as the training dataset, and on only $20$ images per exam, but by $2$ independent annotators, in the test dataset ($2,600$ images). The overall annotation effort took approximately $800$ hours and involved $15$ radiologists with $1$ to $7$ years of experience in chest imaging. The consensus between manual ($2$ annotators) and automated segmentation was measured using the Dice similarity score (DSC)~\cite{dice1945measures} and the Hausdorff distance (HD). The CovidENet performed as well as trained radiologists in terms of DSC and better in terms of HD (Figure~\ref{fig:fig2}). The mean/median DSCs between the two experts' annotations on the test dataset were $0.70/0.72$ for disease segmentation. For the same task, DSCs between CovidENet and the manual segmentations were $0.69/0.71$ and $0.70/0.73$. In terms of HDs, the observed average value between the two experts was $9.16$mm while it was $8.96$mm between CovidENet and the two experts.
When looking at disease extent, defined as the percentage of lung affected by the disease, we found no significant difference between automated segmentation and the average of the two manual segmentations ($19.9\%$ $\pm17.7$ [$0.5 - 73.2$] vs $19.5\%$ $\pm16.5$ [$1.1 - 75.7$]; p= $0.352$).
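For reference, the two agreement metrics used above admit a compact stdlib-only implementation when masks are encoded as sets of pixel coordinates (a low-resolution toy with unit pixel spacing; the study's measurements were made on the full-resolution masks):

```python
import math

def dice(a, b):
    # Dice similarity between two binary masks given as sets of pixel coordinates
    return 2.0 * len(a & b) / (len(a) + len(b))

def hausdorff(a, b):
    # symmetric Hausdorff distance between two point sets (brute force)
    h = lambda s, t: max(min(math.dist(p, q) for q in t) for p in s)
    return max(h(a, b), h(b, a))

# toy masks as sets of (row, col) pixels (values illustrative)
mask_ai = {(0, 0), (0, 1), (1, 0), (1, 1)}
mask_expert = {(0, 0), (0, 1), (1, 0)}
```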
\begin{figure}[t]
\centering
\includegraphics[width=3.25in]{figures/slide1.png} \\
\includegraphics[width=3.25in]{figures/slide2.png}
\includegraphics[width=3.25in]{figures/slide3.png}
\caption{Spider-chart distribution of features depicting their minimum and maximum values [mean value (blue), 70th percentile (yellow) and 90th percentile (red) lines] with respect to the different outcomes, in the following order: top: non-severe, bottom left: intensive care support, bottom right: deceased in the testing set. White and red circles represent respectively 40\% and 60\% of the maximum value of each feature. Clear separation was observed in this feature space between the non-severe and severe cases. In terms of deceased versus intensive care patients, notable differences were observed with respect to three variables: the age of the patient, the condition of the healthy lung and the non-uniformity of the disease (indicated with gray in the spider-chart).}
\label{fig:fig3b}
\end{figure}
\section*{Part II: Imaging Biomarker Discovery}
To assess the prognostic value of chest computed tomography (CT), an extended multi-centric data set was built. We reviewed outcomes in patient charts within the $4$ days following chest CT and divided the patients into $3$ groups: those who did not survive, those who required mechanical ventilation and those who were still alive and not intubated. Out of the $478$ included patients, $27$ died ($6\%$) and $83$ were intubated ($17\%$), forming a group of $110$ patients with severe short-term outcome ($23\%$). Data of $383$ patients from $3$ centers were used for training and those of $85$ patients from $3$ other centers composed an independent test dataset (Table~\ref{tab:tab3}). Radiomics-based prognosis gained significant attention in the recent years towards predicting treatment outcomes~\cite{sun2018radiomics}. In this study we adopted a similar strategy: we extracted $107$ features related to first order, higher order statistics, texture and shape information for lungs, disease extent and heart. Feature selection was performed on a basis of predictive value consensus. We created several representative partitions of the training set ($80\%$ training and $20\%$ validation) and ran $13$ different supervised classification methods towards optimal separation of the observed clinical ground truth between severe and non-severe cases (Table~\ref{tab:tab4}). The features that were shared between the different classifiers were retained as robust imaging biomarkers using a cut-off probability of $0.25$ and were aggregated to patients' age and gender (Table~\ref{tab:tab5}). In total $12$ features were retained for the prognosis part and included age, gender, disease extent, descriptors of disease heterogeneity and extension, features of healthy lung and a descriptor of cardiac heterogeneity. Correlations between some of these features and the clinical outcome are presented in Figure~\ref{fig:fig3} while a representation of this feature space with respect to the different classes is presented in Figure~\ref{fig:fig3b}.
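A schematic version of this consensus rule is sketched below (feature names and vote counts are hypothetical; the real pipeline used the $13$ classifiers over the resampled partitions):

```python
def consensus_features(selections, cutoff=0.25):
    # selections: one list of retained feature names per (classifier, partition) run;
    # keep the features selected in at least `cutoff` of the runs
    counts = {}
    for run in selections:
        for feat in set(run):
            counts[feat] = counts.get(feat, 0) + 1
    n = len(selections)
    return sorted(feat for feat, c in counts.items() if c / n >= cutoff)

# hypothetical votes from five runs (feature names illustrative)
runs = [["age", "extent"], ["age"], ["age", "extent"], ["extent"], ["age", "noise"]]
retained = consensus_features(runs)
```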
\section*{Part III: Staging and Prognosis}
The staging/prognosis was implemented using a hierarchical classification principle, targeting first staging and subsequently prognosis. The staging component sought to separate patients with severe and non-severe short-term outcomes, while the prognosis component sought to predict the risk of death among severe patients. On the basis of the feature selection step, the machine learning algorithms that had a balanced accuracy greater than $60\%$ on validation were considered. The selection of these methods was made on the basis of minimum discrepancy between performance on the training set and the internal validation subset. We built two sequential classifiers using this ensemble method, one to determine the severe cases and a second to predict survival. The classifier aiming to separate patients with severe and non-severe short-term outcomes had a balanced accuracy of $74\%$, a weighted precision of $79\%$, a weighted sensitivity of $69\%$ and a specificity of $79\%$ for predicting a severe short-term outcome (Figure~\ref{fig:fig4}, Table~\ref{tab:tab6}). The performance of the second classifier, aiming to differentiate between intubated and deceased patients, was even higher, with a balanced accuracy of $81\%$ (Figure~\ref{fig:fig4}, Table~\ref{tab:tab7}). The hierarchical classifier combining the $3$ classes had a balanced accuracy of $68\%$, a weighted precision of $79\%$, a weighted sensitivity of $67\%$ and a specificity of $83\%$ (Figure~\ref{fig:fig4}). The prognosis performance difference between training and external cohort testing was low, suggesting that the most important information present in the CT scans was recovered, and that additional information should be integrated in order to fully explain the outcome.
\begin{figure}[t!]
\centering
\includegraphics[width=\linewidth]{figures/Fig4-v2.jpg}
\caption{Classification performance of dual \& aggregated classifiers with respect to the non-severe vs severe case, the intubated vs deceased case and the three classes. Sensitivity and confusion tables are presented with respect to the different classification problems.}
\label{fig:fig4}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=\linewidth]{figures/Figure3.png}
\caption{Discovery of imaging biomarkers through consensus. Generic variables (G: age, sex), disease related variables (D: extent, volume, maximum diameter, etc.), lung variables (L: skewness, etc.) as well as heart related variables (H: non-uniformity) have been automatically selected. The prevalence of the features as well as their distribution with respect to the different classes is presented for some of them, showing rather clear separation and strong correlations with the ground truth.}
\label{fig:fig3}
\end{figure}
\section*{Part IV: Conclusions}
In conclusion, artificial intelligence enhanced the value of chest CT by providing fast, accurate, and precise disease extent quantification and by helping to identify patients with severe short-term outcomes. This could be of great help in the current context of the pandemic, with healthcare resources under extreme pressure. In a context where the sensitivity of RT-PCR has been shown to be low, such as $63\%$ when performed on nasal swabs~\cite{wang2020detection}, chest CT has been shown to provide higher sensitivity for diagnosis of COVID-19 as compared with initial RT-PCR from pharyngeal swab samples~\cite{ai2020correlation}. The current COVID-19 pandemic requires the implementation of rapid clinical triage in healthcare facilities to categorize patients into different urgency categories~\cite{CDC}, often in the context of limited access to biological tests. Beyond the diagnostic value of CT for COVID-19, our study suggests that AI should be part of the triage process. The developed tool will be made publicly available. Our prognosis and staging method achieved state-of-the-art results through the deployment of a highly robust ensemble classification strategy with automatic feature selection of imaging biomarkers and patients' characteristics available within the image metadata. In terms of future work, the continuous enrichment of the database with new examples is a necessary action, on top of updating the outcome of patients included in the study. The integration of non-imaging data and other related clinical and categorical variables such as lymphopenia, the D-dimer level and other comorbidities~\cite{zhou2020clinical,tang2020abnormal,onder2020case,guo2020diabetes} is a necessity towards better understanding the disease and predicting the outcomes.
This is clearly demonstrated by the inability of any of the state-of-the-art classification methods (including neural networks and multi-layer perceptron models) to predict the outcome with a balanced accuracy greater than $80\%$ on the training data. Our findings could have a strong impact in terms of (i) patient stratification with respect to the different therapeutic strategies, (ii) accelerated drug development through rapid, reproducible and quantified assessment of treatment response through the different mid/end-points of the trial, and (iii) continuous monitoring of the patient's response to treatment.
\section*{Methods}
\subsection*{Study Design and Participants}
This retrospective multi-center study was approved by our Institutional Review Board (AAA-2020-08007), which waived the need for patients' consent. Patients diagnosed with COVID-19 from March $4$th to $29$th at six large University Hospitals were eligible if they had a positive RT-PCR and signs of COVID-19 pneumonia on unenhanced chest CT. A total of $478$ patients formed the full dataset ($208,668$ CT slices). Only one CT examination was included for each patient. Exclusion criteria were (i) contrast medium injection and (ii) important motion artifacts.
For the COVID-19 radiological pattern segmentation part, $50$ patients from $3$ centers (A: $20$ patients; B: $15$ patients, C: $15$ patients) were included to compose a training and validation dataset, $130$ patients from the remaining $3$ centers (D: $50$ patients; E: $50$ patients, F: $30$ patients) were included to compose the test dataset (Table~\ref{tab:tab2}). The proportion between the CT manufacturers in the datasets was pre-determined in order to maximize the model generalizability while taking into account the data distribution.
For the radiomics driven prognosis study, $298$ additional patients from centers A ($96$ patients), B ($64$ patients) and D ($138$ patients) were included to increase the size of the dataset. Data of $383$ patients from $3$ centers (A, B and D) were used for training and those of $95$ patients from $3$ other centers (C, E, F) composed an independent test set (Table~\ref{tab:tab3}). Only one CT examination was included for each patient. Exclusion criteria were (i) contrast medium injection and (ii) important motion artifacts. For short-term outcome assessment, patients were divided into $2$ groups: those who died or were intubated in the $4$ days following the CT scan composed the severe short-term outcome subgroup, while the others composed the non-severe short-term outcome subgroup.
\subsection*{CT Acquisitions}
Chest CT exams were acquired on $4$ different CT models from $3$ manufacturers (Aquilion Prime from Canon Medical Systems, Otawara, Japan; Revolution HD from GE Healthcare, Milwaukee, WI; Somatom Edge and Somatom AS+ from Siemens Healthineer, Erlangen, Germany). The different acquisition and reconstruction parameters are summarized in Table~\ref{tab:tab1}. CT exams were mostly acquired at $120$ (n=$103/180$; $57\%$) and $100$ kVp (n=$76/180$; $42\%$). Images were reconstructed using iterative reconstruction with a $512\times512$ matrix and a slice thickness of $0.625$ or $1$ mm depending on the CT equipment. Only the lung images reconstructed with high frequency kernels were used for analysis. For each CT examination, dose length product (DLP) and volume Computed Tomography Dose Index (CTDIvol) were collected.
\subsection*{Data Annotation}
Fifteen radiologists (GC, TNHT, SD, EG, NH, SEH, FB, SN, CH, IS, HK, SB, AC, GF and MB) with $1$ to $7$ years of experience in chest imaging participated in the data annotation which was conducted over a $2$-week period.
For the training and validation set for the COVID-19 radiological pattern segmentation, the whole CT examinations were manually annotated slice by slice using the open source software ITK-SNAP~\footnote{\url{http://www.itksnap.org}}. On each of the $23,423$ axial slices composing this dataset, all the COVID-19 related CT abnormalities (ground glass opacities, band consolidations, and reticulations) were segmented as a single class. Additionally, the whole lung was segmented to create another class (lung). To facilitate the collection of the ground truth for the lung anatomy, a preliminary lung segmentation was performed with the Myrian XP-Lung software (version $1.19.1$, Intrasense, Montpellier, France) and then manually corrected.
As far as test cohort for the segmentation is concerned, $20$ CT slices equally spaced from the superior border of aortic arch to the lowest diaphragmatic dome were selected to compose a $2,600$ images dataset. Each of these images were systematically annotated by $2$ out of the $15$ participating radiologists who independently performed the annotation. Annotation consisted of manual delineation of the disease and manual segmentation of the lung without using any preliminary lung segmentation.
\subsection*{Deep Learning Construction}
The segmentation tool was built under the paradigm of ensemble methods using a 2D fully convolutional network together with the AtlasNet framework~\cite{10.1007/978-3-030-00937-3_75} and a 3D fully convolutional network~\cite{cciccek20163d}. The AtlasNet framework combines a registration stage of the CT scans to a number of anatomical templates and consequently utilizes multiple deep learning-based classifiers trained for each template. At the end, the prediction of each model is mapped back to the original anatomy and a majority voting scheme is used to produce the final prediction, combining the results of the different networks. A major advantage of the AtlasNet framework is that it incorporates a natural data augmentation by registering each CT scan to several templates. Moreover, the framework is agnostic to the segmentation model that will be utilized. For the registration of the CT scans to the templates, an elastic registration framework based on Markov Random Fields was used, providing the optimal displacements for each template~\cite{ferrante2017deformable}.
The architecture of the implemented segmentation models was based on already established fully convolutional neural network designs from the literature~\cite{badrinarayanan2017segnet,cciccek20163d}. Fully convolutional networks following an encoder-decoder architecture both in 2D and 3D were developed and evaluated. For the 2D models the CT scans were separated on the axial view. The network included five convolutional blocks, each one containing two Conv-BN-ReLU layer successions. Maxpooling layers were also placed at the end of each convolutional block for the encoding part. Transposed convolutions were used in the decoding part to restore the spatial resolution of the slices, together with the same successions of layers. For the 3D pipeline, the model similarly consisted of five blocks with a down-sampling operation applied every two consequent Conv3D-BN-ReLU layers. Additionally, five decoding blocks were utilized for the decoding path; at each block a transposed convolution was performed in order to up-sample the input. Skip connections were also employed between the encoding and decoding paths. In order to train this model, cubic patches of size $64\times64\times64$ were randomly extracted within a close range of the ground-truth annotation border. Corresponding cubic patches were also extracted from the ground-truth annotation masks and the lung anatomy segmentation masks. To this end, we trained the model with the CT scan patch as input, the annotation patch as target and the lung anatomy annotation patch as a mask for calculating the loss function only within the lung region. In order to train all the models, each CT scan was normalized by clipping the Hounsfield units to the range [$-1024$, $1000$].
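The Hounsfield-unit normalization mentioned above can be sketched as follows; this is a minimal NumPy illustration (the clipping range [$-1024$, $1000$] is the one stated in the text, the function name is ours):

```python
import numpy as np

def normalize_hu(ct, lo=-1024.0, hi=1000.0):
    """Clip Hounsfield units to [lo, hi] and rescale linearly to [0, 1]."""
    ct = np.clip(ct.astype(np.float32), lo, hi)
    return (ct - lo) / (hi - lo)
```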
Regarding implementation details, $6$ templates were used for the AtlasNet framework, together with normalized cross correlation and mutual information as similarity metrics. The 2D networks were trained using a weighted cross-entropy loss, with weights depending on the frequency of each class, combined with a dice loss, while the 3D network was trained using a dice loss.
The Dice loss (DL) and weighted cross entropy (WCE) are defined as follows,
\begin{equation*}
DL = 1 - \dfrac{2pg+1}{p+g+1}, \hspace{5mm} WCE = -(\beta g \log(p) + (1-g) \log(1-p))
\end{equation*}
where $p$ is the value predicted by the network and $g$ the target/ground-truth value; $\beta$ is the weight given to the less represented class. For network optimization, only the class corresponding to the diseased regions was used.
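As a hedged illustration, the two losses above can be written in NumPy as follows; this is a minimal sketch (the actual training used TensorFlow on batched tensors, and the value of `beta` below is illustrative, not the one used in the study):

```python
import numpy as np

def dice_loss(p, g, smooth=1.0):
    """Soft Dice loss as defined above; p: predicted probabilities, g: binary ground truth.
    The smooth term corresponds to the +1 in the numerator and denominator."""
    p, g = p.ravel().astype(float), g.ravel().astype(float)
    return 1.0 - (2.0 * np.sum(p * g) + smooth) / (np.sum(p) + np.sum(g) + smooth)

def weighted_cross_entropy(p, g, beta=2.0, eps=1e-7):
    """WCE with weight beta on the less represented (positive) class."""
    p = np.clip(p.astype(float), eps, 1.0 - eps)
    return float(np.mean(-(beta * g * np.log(p) + (1.0 - g) * np.log(1.0 - p))))
```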
For the 2D experiments we used classic stochastic gradient descent for the optimization, with an initial learning rate of $0.01$, a learning rate decrease of $2.5\cdot10^{-3}$ every $10$ epochs, a momentum of $0.9$ and a weight decay of $5\cdot10^{-4}$. For the 3D experiments we used the AMSGrad optimizer and a learning rate of $0.001$.
The training of a single network for both 2D and 3D network was completed in approximately $12$ hours using a GeForce GTX 1080 GPU, while the prediction for a single CT scan was done in a few seconds. Training and validation curves for one template of AtlasNet and the 3D network are shown in Figure~\ref{fig:fig5}.
Both the Dice similarity score and the Hausdorff distance were higher with the 2D approach compared to the 3D approach (Figures~\ref{fig:fig2} and~\ref{fig:exfig3}). However, the combination of their probability scores led to a significant improvement. Thus, the ensemble of 2D and 3D architectures was selected for the final COVID-19 segmentation tool.
Moreover, segmentation masks of the lung and heart of all patients were extracted using ART-Plan software (TheraPanacea, Paris, France). ART-Plan is a CE-marked solution for automatic annotation of organs, harnessing a combination of anatomically preserving and deep learning concepts. This software was trained using a combination of a transformation loss and an image loss. The transformation loss penalizes the normalized error between the prediction of the network and the affine registration parameters describing the registration between the source volume and the whole-body scan. These parameters are determined automatically using a downhill simplex optimization approach. The second loss function of the network involved an image similarity function, the zero-normalized cross correlation loss, which seeks to create an optimal visual correspondence between the observed CT values of the source volume and the corresponding ones in the full-body CT reference volume. This network was trained using as input a combination of $360,000$ pairs of CT scans of all anatomies and full-body CT scans. These projections were used to determine the organs present in the test volume. Using the transformation between the test volume and the full-body CT, we were able to determine a surrounding patch for each organ present in the volume. These patches were used to train the deep learning model for each full-body CT. The next step consisted of creating multiple annotations in the different reference spaces; for that, a 3D fully convolutional architecture was trained for every reference anatomy. This architecture takes as input the annotations for each organ once mapped to the reference anatomy and then seeks to determine, for each anatomy, a network that can optimally segment the organ of interest, similar to the AtlasNet framework used for the disease segmentation. This procedure was applied for every organ of interest present in the input CT scan.
On average, $6,600$ samples were used for training per organ after data augmentation. These networks were trained using a conventional dice loss. The final organ segmentation was achieved through a winner-takes-all approach over an ensemble of networks. For each organ and each full-body reference CT, a specific network was built, and the segmentation masks generated by each network were mapped back to the original space. The consensus of the recommendations of the different subnetworks was used to determine the optimal label at the voxel level.
\subsection*{Staging / Prognosis}
As a preprocessing step, all images were resampled by cubic interpolation to obtain isotropic voxels with a size of $1$ mm. Subsequently, disease, lung and heart masks were used to extract $107$ radiomic features~\cite{van2017computational} for each of them (left and right lungs were considered separately, both for the disease extent and for the entire lung). The features included first order statistics and shape-based features in 2D and 3D, together with texture-based features. Radiomics features were enriched with clinical data available from the image metadata (age, gender), disease extent and number of diseased regions. The minimum and maximum values were calculated on the training and validation cohorts and Min-Max normalization was used to normalize the features; the same values were then applied to the test set.
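The Min-Max normalization step, fitted on the training and validation cohorts and then applied unchanged to the test set, can be sketched as follows (a minimal NumPy illustration; function names are ours, and test-set values may fall outside [0, 1] by construction):

```python
import numpy as np

def fit_minmax(X_train):
    """Learn per-feature minimum and maximum on the training cohort only."""
    return X_train.min(axis=0), X_train.max(axis=0)

def apply_minmax(X, lo, hi, eps=1e-12):
    """Apply the training-set Min-Max parameters to any cohort."""
    return (X - lo) / (hi - lo + eps)
```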
As a first step, a number of features were selected using a lasso linear model in order to decrease the dimensionality. The lasso estimator seeks to optimize the following objective function:
\begin{equation*}
\dfrac{||y - Xw||^2_2}{2 n} + \alpha ||w||_1
\end{equation*}
where $\alpha$ is a constant, $||w||_1$ is the L1-norm of the coefficient vector and $n$ is the number of samples. The Lasso method was used with $200$ alphas along a regularization path of length $0.01$ and limited to $1000$ iterations. The staging/prognosis component was addressed using an ensemble learning approach. First, the training data set was subdivided into training and validation sets on the principle of $80\%-20\%$, while ensuring that the distribution of classes between the two subsets was identical to the observed one. We created $10$ subdivisions on this basis and evaluated the average performance of the following supervised classification methods: Nearest Neighbor, \{Linear, Sigmoid, Radial Basis Function, Polynomial Kernel\} Support Vector Machines, Gaussian Process, Decision Trees, Random Forests, AdaBoost, Gaussian Naive Bayes, Bernoulli Naive Bayes, Multi-Layer Perceptron \& Quadratic Discriminant Analysis. Feature selection was performed on the training set of each subdivision. In particular, the features selected in at least three subdivisions were considered critical and were used later for the staging and prognosis.
\begin{itemize}
\item Age
\item Gender
\item Disease Extent
\item From the diseased areas: maximum attenuation, surface, maximum 2D diameter per slice, volume, non-uniformity of the Gray level Size Zone matrix (GLSZM) and non-uniformity of the Gray level Run Length matrix (GLRLM)
\item From the lung areas: skewness and $90$th percentile
\item From the area of the heart: non-uniformity on the Gray level Size Zone matrix (GLSZM)
\end{itemize}
These features included first order features (maximum attenuation, skewness and $90$th percentile), shape features (surface, maximum 2D diameter per slice and volume) and texture features (non-uniformity of the GLSZM and GLRLM).
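The consensus selection over repeated stratified subdivisions described above can be sketched with scikit-learn; the `alpha` value, vote threshold and function name below are illustrative assumptions, not the exact settings of the study:

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import StratifiedShuffleSplit

def consensus_lasso_selection(X, y, n_splits=10, alpha=0.05, min_votes=3, seed=0):
    """Count how often each feature receives a nonzero lasso coefficient across
    stratified 80/20 subdivisions; keep features voted at least min_votes times."""
    votes = np.zeros(X.shape[1], dtype=int)
    splitter = StratifiedShuffleSplit(n_splits=n_splits, test_size=0.2, random_state=seed)
    for train_idx, _ in splitter.split(X, y):
        coef = Lasso(alpha=alpha, max_iter=1000).fit(X[train_idx], y[train_idx]).coef_
        votes += (np.abs(coef) > 1e-8).astype(int)
    return np.flatnonzero(votes >= min_votes)
```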
Subsequently, this reduced feature space was considered the most appropriate for training, and the $7$ classification methods with acceptable performance (balanced accuracy $>60\%$ on validation) and coherent behavior between training and validation (decrease in balanced accuracy $<20\%$) were trained and combined through a winner-takes-all approach to determine the optimal outcome (Table~\ref{tab:tab4}). The final selected methods were the \{Linear, Polynomial Kernel, Radial Basis Function\} Support Vector Machines, Decision Trees, Random Forests, AdaBoost, and Gaussian Naive Bayes. To overcome the imbalance between the different classes, each class received a weight inversely proportional to its size. The Support Vector Machines were all three assigned a polynomial kernel function of degree $3$ and a penalty parameter of $0.25$; in addition, the one with a Radial Basis Function kernel was assigned a kernel coefficient of $3$. The decision tree classifier was limited to a depth of $3$ to avoid overfitting. The random forest classifier was composed of $8$ such trees. The AdaBoost classifier was based on a decision tree of maximal depth $2$ boosted three times.
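A minimal sketch of this ensemble, using the hyperparameters quoted above (depth-$3$ tree, $8$-tree forest, penalty parameter $0.25$, depth-$2$ AdaBoost base tree boosted three times) and scikit-learn's hard-voting wrapper; this is an assumption-laden reconstruction, not the exact pipeline:

```python
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier, VotingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

def build_severity_ensemble():
    """Majority (winner-takes-all) vote over the 7 retained classifiers;
    class_weight='balanced' mirrors the inverse-frequency class weighting."""
    members = [
        ("svm_linear", SVC(kernel="linear", C=0.25, class_weight="balanced")),
        ("svm_poly", SVC(kernel="poly", degree=3, C=0.25, class_weight="balanced")),
        ("svm_rbf", SVC(kernel="rbf", gamma=3, C=0.25, class_weight="balanced")),
        ("tree", DecisionTreeClassifier(max_depth=3, class_weight="balanced")),
        ("forest", RandomForestClassifier(n_estimators=8, class_weight="balanced")),
        ("adaboost", AdaBoostClassifier(DecisionTreeClassifier(max_depth=2), n_estimators=3)),
        ("gnb", GaussianNB()),
    ]
    return VotingClassifier(members, voting="hard")
```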
The classifiers were applied in a hierarchical way, performing first the staging and then the prognosis. More specifically, a majority voting method was applied to classify patients into severe and non-severe cases (Table~\ref{tab:tab6}). Then, another majority voting was applied on the cases predicted as severe only, to classify them into intubated or deceased (Table~\ref{tab:tab7}). In such a setup, the correlations of the reported features are summarized in Table~\ref{tab:tab5}. For the hierarchical prognosis on the three classes, a voting classifier for the prediction of each class against the others was applied to aggregate the predicted outcomes from the $7$ selected methods. In Figure~\ref{fig:fig6} we visualize the distributions of the different features along the ground-truth labels and the prediction of the hierarchical classifier for each subject. In particular, all the samples are grouped using their ground-truth labels and a boxplot is generated for each group and each feature. Additionally, color-coded points are overlaid on each boxplot, denoting the predicted label. It is clearly visible that some features, such as the disease extent, the age, the shape of the disease and the uniformity, seem to be very important in separating the different subjects.
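The hierarchical application of the two majority-voting classifiers (staging first, then prognosis on predicted-severe cases only) can be sketched as follows; the integer label encoding (0 non-severe, 1 intubated, 2 deceased) is an assumption made for illustration:

```python
import numpy as np

def hierarchical_predict(stage_clf, prognosis_clf, X):
    """Stage 1 separates severe (1) from non-severe (0); stage 2 runs only on
    predicted-severe cases, mapping them to 1 (intubated) or 2 (deceased)."""
    out = np.asarray(stage_clf.predict(X), dtype=int)   # 0 = non-severe, 1 = severe
    severe = np.flatnonzero(out == 1)
    if severe.size:
        # assumed prognosis encoding: 0 = intubated, 1 = deceased
        out[severe] = 1 + np.asarray(prognosis_clf.predict(X[severe]), dtype=int)
    return out
```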
\subsection*{Statistical Analysis}
The statistical analysis for the deep learning-based segmentation framework and the radiomics study was performed using Python $3.7$ and the Scipy~\cite{virtanen2020scipy}, Scikit-learn~\cite{pedregosa2011scikit}, TensorFlow~\cite{abadi2015tensorflow} and Pyradiomics~\cite{van2017computational} libraries. The Dice similarity coefficient (DSC)~\cite{dice1945measures} was calculated to assess the similarity between the $2$ manual segmentations of each CT exam of the test dataset and between manual and automated segmentations. The DSC between manual segmentations served as reference to evaluate the similarity between the automated and the two manual segmentations. Moreover, the Hausdorff distance was also calculated to evaluate the quality of the automated segmentations in a similar manner. Disease extent was calculated by dividing the volume of diseased lung by the lung volume and expressed as a percentage of the total lung volume. Disease extent measurements between manual segmentations and between automated and manual segmentations were compared using paired Student's t-tests.
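The Dice similarity coefficient between two binary masks can be computed as follows (a minimal NumPy sketch of the metric used above; the convention of returning $1.0$ for two empty masks is our choice):

```python
import numpy as np

def dice_score(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = np.asarray(a, dtype=bool), np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom
```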
For the stratification of the dataset into the different categories, classic machine learning metrics, namely balanced accuracy, weighted precision, and weighted specificity and sensitivity, were used. Moreover, the correlation between each feature and the outcome was computed using a Pearson correlation over the entire dataset.
CT parameters between the $6$ centers were compared using the analysis of variance, while patient characteristics between training/validation and test datasets were compared using chi-square and Student's t-tests.
\begin{figure}[t!]
\centering
\includegraphics[width=0.9\linewidth]{figures/ExFigure1.pdf}
\caption{Training and validation curves for one template of AtlasNet and the 3D U-Net.}
\label{fig:fig5}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=0.9\linewidth]{figures/ExFig3.jpg}
\caption{Additional qualitative analysis comparing automated and manual segmentations. Delineation of the diseased areas on chest CT in different slices of COVID-19 patients. From left to right: input, AI segmentation, expert I segmentation, expert II segmentation.}
\label{fig:exfig3}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=0.9\linewidth]{figures/ExFigure2.pdf}
\caption{Boxplots of the selected features and their association with the predicted outcomes and ground truth labels.}
\label{fig:fig6}
\end{figure}
\begin{table}[h!]
\centering{
\begin{tabular}{|p{20mm}|l|l|l|l|l|l|}
\hline
& \multicolumn{3}{|l|}{Training/ validation dataset} & \multicolumn{3}{|l|}{Testing dataset} \\
\cline{2-7}
& Center A & Center B & Center C & Center D & Center E & Center F \\
\hline
CT equipment & Somatom AS+ & Revolution HD & Aquilion Prime & Somatom Edge & Revolution HD & Aquilion Prime \\
\hline
Kilovoltage & 100-120 & 120 & 100-120 & 100-120 & 120-140 & 100-120 \\
\hline
DLP (mGy.cm) & \begin{tabular}[c]{@{}l@{}}$109\pm42$\\
{[}44-256{]}\end{tabular} & \begin{tabular}[c]{@{}l@{}}$306 \pm 104$\\
{[}123-648{]}\end{tabular} & \begin{tabular}[c]{@{}l@{}}$102 \pm 30$\\
{[}43-189{]}\end{tabular} & \begin{tabular}[c]{@{}l@{}}$131 \pm 44$\\
{[}55-499{]}\end{tabular} & \begin{tabular}[c]{@{}l@{}}$177 \pm 48$\\
{[}43-276{]}\end{tabular} & \begin{tabular}[c]{@{}l@{}}$115 \pm 26$\\ {[}75 - 186{]}\end{tabular} \\
\hline
CTDIvol (mGy) & \begin{tabular}[c]{@{}l@{}}$3.2 \pm1.5$\\ {[}1.2-11.9{]}\end{tabular} & \begin{tabular}[c]{@{}l@{}}$8.7 \pm 2.8$\\ {[}3.9-18.5{]}\end{tabular} & \begin{tabular}[c]{@{}l@{}}$2.7 \pm 0.9$\\ {[}1.0-5.3{]}\end{tabular} & \begin{tabular}[c]{@{}l@{}}$3.2 \pm 0.9$\\ {[}1.4-9.5{]}\end{tabular} & \begin{tabular}[c]{@{}l@{}}$5.5 \pm 1.8$\\ {[}1.2-12.3{]}\end{tabular} & \begin{tabular}[c]{@{}l@{}}$2.5 \pm 0.6$\\ {[}1.7-4.3{]}\end{tabular} \\
\hline
Slice thickness & 1mm & 0.625mm & 1mm & 0.625mm & 1mm & 1mm \\
\hline
Convolution Kernel & i70 & Lung & FC51-FC52 & i50 & Lung & FC51-FC52 \\
\hline
Iterative reconstructions & SAFIRE 3 & ASIR-v 80\% & IDR 3D0.67 & SAFIRE 4 & ASIR-v 60\% & IDR 3D \\
\hline
\end{tabular}
}
\caption{\label{tab:tab1} Acquisition and reconstruction parameters. \textit{Note.— For quantitative variables, data are mean $\pm$ standard deviation, and numbers in brackets are the range. CT = Computed Tomography ; CTDIvol = volume Computed Tomography Dose Index ; DLP = Dose Length Product
* significant difference with p $< 0.001$}}
\end{table}
\begin{table}[t!]
\begin{tabular}{|l|l|l|l|}
\hline
& \begin{tabular}[c]{@{}l@{}}Training/Validation Dataset\\ (Centers A+B+C; n=50)\end{tabular} & \begin{tabular}[c]{@{}l@{}}Test Dataset\\ (Centers D+E+F; n=130)\end{tabular} & p value \\
\hline
Age (y) & $57 \pm 17$ {[}26-97{]} & $59 \pm 16$ {[}17-95{]} & 0.363 \\
\hline
No. of Men & 31(62) & 87(67) & 0.534 \\
\hline
Disease extent* & & & \\
\hline
Manual & $18.1 \pm 14.9$ {[}0.3-68.5{]} & $19.5 \pm 16.5$ {[}1.1-75.7{]} & 0.574 \\
\hline
Automated & - & $19.9 \pm 17.7$ {[}0.5-73.2{]} & - \\
\hline
DLP (mGy.cm) & $180 \pm 124$ {[}43-527{]} & $139 \pm 49.0$ {[}43-276{]} & 0.026 \\
\hline
CTDIvol (mGy) & $4.9 \pm 3.4$ {[}1.0-13.0{]} & $4.0 \pm 1.9$ {[}1.2-12.3{]} & 0.064 \\
\hline
\end{tabular}
\caption{\label{tab:tab2} Patient characteristics in the datasets used for developing the segmentation tool. \textit{Note. For quantitative variables, data are mean $\pm$ standard deviation, and numbers in brackets are the range. For qualitative variables, data are numbers of patients, and numbers in parentheses are percentages.
CTDIvol = volume Computed Tomography Dose Index; DLP = Dose Length Product
*Percentage of lung volume on CT, calculated on the full volume for the training/validation dataset and 20 slices in the test dataset}}
\end{table}
\begin{table}[t!]
\begin{tabular}{|l|l|l|l|}
\hline
& \begin{tabular}[c]{@{}l@{}}Training/Validation Dataset\\ (Centers A+B+D*; n=383)\end{tabular} & \begin{tabular}[c]{@{}l@{}}Test Dataset\\ (Centers C+E+F; n=95)\end{tabular} & p value \\
\hline
Age (y) & $63 \pm 16$ {[}24-98{]} & $57 \pm 15$ {[}17-97{]} & 0.001 \\
\hline
No. of Men & 255(67) & 65(68) & 0.732 \\
\hline
Disease extent** & $19.6 \pm 17.0$ {[}0.0-85.1{]} & $22.5 \pm 16.4$ {[}1.1-64.7{]} & 0.126 \\
\hline
Short-term outcome & & & \\
\hline
Deceased & 19(5)& 8(8)& \\
\hline
Intubated & 65(17)& 18(19)& \\
\hline
Alive and Not Intubated & 299(78) & 69(73) & \\ \hline
DLP (mGy.cm) & $160 \pm 97$ {[}44-648{]} & $146 \pm 52.0$ {[} 43-276 {]} & 0.047 \\
\hline
CTDIvol (mGy) & $4.3 \pm 2.8$ {[}1.2-18.5{]} & $4.1 \pm 2.0$ {[}1.0-12.3{]} & 0.064 \\
\hline
\end{tabular}
\caption{\label{tab:tab3} Patient characteristics in the dataset used for the developed prognosis model using radiomics. \textit{Note.— For quantitative variables, data are mean $\pm$ standard deviation, and numbers in brackets are the range. For qualitative variables, data are numbers of patients, and numbers in parentheses are percentages.
CTDIvol = volume Computed Tomography Dose Index; DLP = Dose Length Product, $*$Enlarged by including all eligible patients over the study period, $**$Percentage of lung volume on the whole CT}}
\end{table}
\begin{table}[t!]
\begin{tabular}{|l|l|l|l|l|l|l|l|l|}
\hline
\multirow{2}{*}{Classifier} & \multicolumn{2}{|l|}{\begin{tabular}[c]{@{}l@{}}Balanced Accuracy\end{tabular}} & \multicolumn{2}{|l|}{\begin{tabular}[c]{@{}l@{}}Weighted Precision\end{tabular}} & \multicolumn{2}{|l|}{\begin{tabular}[c]{@{}l@{}}Weighted Sensitivity\end{tabular}} & \multicolumn{2}{|l|}{\begin{tabular}[c]{@{}l@{}}Weighted Specificity\end{tabular}} \\
\cline{2-9}
& Training & Validation & Training & Validation & Training & Validation & Training & Validation \\
\hline
\begin{tabular}[c]{@{}l@{}}Nearest\\ Neighbors\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.66 \\ $\pm 0.02$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.55\\ $\pm0.03$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.88\\ $\pm0.01$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.73\\ $\pm0.03$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.85\\ $\pm0.01$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.78\\ $\pm0.01$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.47\\ $\pm0.03$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.32\\ $\pm0.04$\end{tabular} \\
\hline
\textbf{L-SVM*} & \begin{tabular}[c]{@{}l@{}}0.71\\ $\pm0.02$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.67\\ $\pm0.04$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.80\\ $\pm0.02$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.77\\ $\pm0.03$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.68\\ $\pm0.02$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.67\\ $\pm0.04$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.74\\ $\pm0.03$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.67\\ $\pm0.06$\end{tabular} \\
\hline
\textbf{P-SVM*} & \begin{tabular}[c]{@{}l@{}}0.77\\ $\pm0.02$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.66\\ $\pm0.04$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.83\\ $\pm0.01$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.76\\ $\pm0.03$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.77\\ $\pm0.03$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.72\\ $\pm0.03$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.76\\ $\pm0.03$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.61\\ $\pm0.07$\end{tabular} \\
\hline
S-SVM & \begin{tabular}[c]{@{}l@{}}0.53\\ $\pm0.02$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.55\\ $\pm0.05$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.69\\ $\pm$0.04\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.69\\ $\pm0.04$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.49\\ $\pm0.08$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.51\\ $\pm0.1$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.57\\ $\pm0.09$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.58\\ $\pm0.11$\end{tabular} \\
\hline
\textbf{RBF-SVM*} & \begin{tabular}[c]{@{}l@{}}0.74\\ $\pm0.02$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.67\\ $\pm0.06$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.82\\ $\pm0.01$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.77\\ $\pm0.04$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.66\\ $\pm0.02$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.63\\ $\pm0.06$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.81\\ $\pm0.02$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.71\\ $\pm0.08$\end{tabular} \\
\hline
\begin{tabular}[c]{@{}l@{}}Gaussian\\ Process\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.63\\ $\pm0.03$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.59\\ $\pm0.04$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.82\\ $\pm0.02$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.77\\ $\pm0.04$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.83\\ $\pm0.01$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.80\\ $\pm0.02$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.43\\ $\pm0.04$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.38\\ $\pm0.05$\end{tabular} \\
\hline
\begin{tabular}[c]{@{}l@{}}\textbf{Decision}\\ \textbf{Tree}*\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.78\\ $\pm0.02$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.68\\ $\pm0.04$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.85\\ $\pm0.01$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.78\\ $\pm0.03$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.69\\ $\pm0.03$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.64\\ $\pm0.05$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.87\\ $\pm0.03$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.72\\ $\pm0.08$\end{tabular} \\
\hline
\begin{tabular}[c]{@{}l@{}}\textbf{Random}\\ \textbf{Forest*}\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.78\\ $\pm0.02$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.64\\ $\pm0.06$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.84\\ $\pm0.02$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.75\\ $\pm0.04$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.74\\ $\pm0.03$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.65\\ $\pm0.07$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.82\\ $\pm0.04$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.63\\ $\pm0.09$\end{tabular} \\
\hline
\begin{tabular}[c]{@{}l@{}}Multi-Layer\\ Perceptron\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.83\\ $\pm0.04$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.58\\ $\pm0.03$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.90\\ $\pm0.02$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.72\\ $\pm0.02$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.9\\ $\pm0.02$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.73\\ $\pm0.03$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.77\\ $\pm0.06$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.43\\ $\pm0.05$\end{tabular} \\
\hline
\textbf{AdaBoost*} & \begin{tabular}[c]{@{}l@{}}0.8\\ $\pm0.03$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.63\\ $\pm0.03$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.86\\ $\pm0.01$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.75\\ $\pm0.03$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.76\\ $\pm0.07$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.66\\ $\pm0.07$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.83\\ $\pm0.08$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.6\\ $\pm0.11$\end{tabular} \\
\hline
\begin{tabular}[c]{@{}l@{}}\textbf{Gaussian}\\ \textbf{Naive Bayes*}\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.66\\ $\pm0.02$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.63\\ $\pm0.05$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.76\\ $\pm0.01$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.74\\ $\pm0.03$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.74\\ $\pm0.01$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.72\\ $\pm0.04$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.57\\ $\pm0.03$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.53\\ $\pm0.07$\end{tabular} \\
\hline
\begin{tabular}[c]{@{}l@{}}Bernouilli\\ Naive Bayes\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.51\\ $\pm0.01$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.50 \\ $\pm0.01$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.74\\ $\pm0.05$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.63\\ $\pm0.05$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.78\\ $\pm0.0$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.77\\ $\pm0.01$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.24\\ $\pm0.01$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.22\\ $\pm0.02$\end{tabular} \\
\hline
QDA & \begin{tabular}[c]{@{}l@{}}0.72\\ $\pm0.01$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.60\\ $\pm0.04$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.82\\ $\pm0.01$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.73\\ $\pm0.03$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.82\\ $\pm0.01$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.75\\ $\pm0.03$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.62\\ $\pm0.02$\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.45\\ $\pm0.07$\end{tabular} \\
\hline
\end{tabular}
\caption{\label{tab:tab4} Performances of the 13 evaluated classifiers. \textit{Note.— Asterisks indicate the 7 classifiers reporting a balanced accuracy higher than 0.60 that were finally selected.
L-SVM = Support Vector Machine with a linear kernel; P-SVM = Support Vector Machine with a polynomial kernel; S-SVM = Support Vector Machine with a sigmoid kernel; RBF-SVM = Support Vector Machine with a Radial Basis Function kernel; QDA = Quadratic Discriminant Analysis}}
\end{table}
\begin{table}[t!]
\begin{tabular}{|l|l|l|l|l|l|l|l|}
\hline
\multicolumn{2}{|l|}{\multirow{2}{*}{Features}} & \multicolumn{6}{|l|}{Correlation} \\
\cline{3-8}
\multicolumn{2}{|l|}{} & \multicolumn{2}{|l|}{Severe vs Non Severe} & \multicolumn{2}{|l|}{Intubated vs Diseased} & \multicolumn{2}{|l|}{Hierarchical Classifier} \\
\hline
\multicolumn{2}{|l|}{Age} & \multicolumn{2}{|l|}{-0.0681} & \multicolumn{2}{|l|}{0.6745} & \multicolumn{2}{|l|}{0.1853} \\
\hline
\multicolumn{2}{|l|}{Gender} & \multicolumn{2}{|l|}{-0.1517} & \multicolumn{2}{|l|}{0.0211} & \multicolumn{2}{|l|}{0.1443} \\
\hline
\multicolumn{2}{|l|}{Disease Extent} & \multicolumn{2}{|l|}{-0.3809} & \multicolumn{2}{|l|}{-0.0280} & \multicolumn{2}{|l|}{0.3491} \\
\hline
Heart & \begin{tabular}[c]{@{}l@{}}non-uniformity \\ on the GLSZM\end{tabular} & \multicolumn{2}{|l|}{-0.1953} & \multicolumn{2}{|l|}{-0.0935} & \multicolumn{2}{|l|}{0.1537} \\
\cline{2-8}
& & Left & Right & Left & Right & Left & Right \\
\hline
\multirow{2}{*}{Lung} & Skewness & 0.3324 & 0.3675 & 0.1018 & -0.0229 & -0.2967 & -0.3447 \\
\cline{2-8}
& 90th Percentile & -0.2808 & -0.3221 & -0.0576 & 0.0035 & 0.2526 & 0.3119 \\
\hline
\multirow{6}{*}{Disease} & \begin{tabular}[c]{@{}l@{}}Maximum\\ attenuation\end{tabular} & -0.0867 & -0.1978 & 0.0885 & 0.0035 & 0.0958 & 0.1846 \\
\cline{2-8}
& Surface & -0.3382 & -0.3289 & -0.0174 & 0.0381 & 0.3034 & 0.3112 \\
\cline{2-8}
& \begin{tabular}[c]{@{}l@{}}Maximum 2D\\ diameter per\\ slice\end{tabular} & -0.2304 & -0.2236 & 0.1238 & 0.1010 & 0.2288 & 0.2183 \\
\cline{2-8}
& Volume & -0.3592 & -0.3971 & -0.0489 & 0.0840 & 0.3241 & 0.3863 \\
\cline{2-8}
& \begin{tabular}[c]{@{}l@{}}Non-uniformity \\ \\ on the GLSZM\end{tabular} & -0.2944 & -0.3448 & -0.1379 & -0.0805 & 0.2375 & 0.3004 \\
\cline{2-8}
& \begin{tabular}[c]{@{}l@{}}Non-uniformity\\ on the GLRLM\end{tabular} & -0.3280 & -0.3372 & -0.0219 & 0.0829 & 0.3000 & 0.3319 \\
\hline
\end{tabular}
\caption{\label{tab:tab5} Correlation between outcome and the 12 selected features. \textit{Note.— GLSZM = Gray Level Size Zone Matrix, GLRLM = Gray Level Run Length Matrix}}
\end{table}
\begin{table}[b!]
\begin{tabular}{|l|l|l|l|l|l|l|l|l|}
\hline
Classifier & \multicolumn{2}{l|}{\begin{tabular}[c]{@{}l@{}}Balanced\\ Accuracy\end{tabular}} & \multicolumn{2}{l|}{\begin{tabular}[c]{@{}l@{}}Weighted\\ Precision\end{tabular}} & \multicolumn{2}{l|}{\begin{tabular}[c]{@{}l@{}}Weighted\\ Sensitivity\end{tabular}} & \multicolumn{2}{l|}{\begin{tabular}[c]{@{}l@{}}Weighted\\ Specificity\end{tabular}} \\
\cline{2-9}
& Training & Test & Training & Test & Training & Test & Training & Test \\
\hline
L-SVM & 0.71 & 0.7 & 0.8 & 0.76 & 0.68 & 0.67 & 0.75 & 0.73 \\
\hline
P-SVM & 0.76 & 0.67 & 0.82 & 0.73 & 0.75 & 0.69 & 0.76 & 0.65 \\
\hline
RBF-SVM & 0.72 & 0.71 & 0.81 & 0.79 & 0.65 & 0.63 & 0.79 & 0.79 \\
\hline
Decision Tree & 0.76 & 0.65 & 0.85 & 0.74 & 0.65 & 0.56 & 0.88 & 0.74 \\
\hline
Random Forest & 0.76 & 0.7 & 0.82 & 0.76 & 0.71 & 0.65 & 0.8 & 0.75 \\
\hline
AdaBoost & 0.77 & 0.67 & 0.83 & 0.74 & 0.71 & 0.62 & 0.82 & 0.71 \\
\hline
\begin{tabular}[c]{@{}l@{}}Gaussian\\ Naive Bayes\end{tabular} & 0.65 & 0.73 & 0.76 & 0.77 & 0.73 & 0.76 & 0.57 & 0.69 \\
\hline
\begin{tabular}[c]{@{}l@{}}\textbf{Ensemble}\\ \textbf{Classifier}\end{tabular} & 0.75 & 0.74 & 0.82 & 0.79 & 0.69 & 0.69 & 0.81 & 0.79 \\
\hline
\end{tabular}
\caption{\label{tab:tab6}Performances of each of the 7 individual classifiers and of the ensemble classifier to differentiate between patients with severe and non-severe short-term outcome. \textit{Note.— L-SVM = Support Vector Machine with a linear kernel; P-SVM = Support Vector Machine with a polynomial kernel; RBF-SVM = Support Vector Machine with a Radial Basis Function kernel}}
\end{table}
\begin{table}[t!]
\begin{tabular}{|l|l|l|l|l|l|l|l|l|}
\hline
Classifier & \multicolumn{2}{l|}{\begin{tabular}[c]{@{}l@{}}Balanced\\ Accuracy\end{tabular}} & \multicolumn{2}{l|}{\begin{tabular}[c]{@{}l@{}}Weighted\\ Precision\end{tabular}} & \multicolumn{2}{l|}{\begin{tabular}[c]{@{}l@{}}Weighted\\ Sensitivity\end{tabular}} & \multicolumn{2}{l|}{\begin{tabular}[c]{@{}l@{}}Weighted\\ Specificity\end{tabular}} \\
\cline{2-9}
& Training & Test & Training & Test & Training & Test & Training & Test \\
\hline
L-SVM & 0.84 & 0.78 & 0.87 & 0.84 & 0.81 & 0.85 & 0.87 & 0.72 \\
\hline
P-SVM & 0.95 & 0.81 & 0.96 & 0.9 & 0.95 & 0.88 & 0.95 & 0.74 \\
\hline
RBF-SVM & 0.9 & 0.62 & 0.92 & 0.83 & 0.90 & 0.77 & 0.9 & 0.48 \\
\hline
Decision Tree & 0.97 & 0.75 & 0.98 & 0.87 & 0.98 & 0.85 & 0.96 & 0.65 \\
\hline
Random Forest & 0.94 & 0.75 & 0.96 & 0.87 & 0.96 & 0.85 & 0.92 & 0.65 \\
\hline
AdaBoost & 1 & 0.62 & 1 & 0.83 & 1 & 0.77 & 1 & 0.48 \\
\hline
\begin{tabular}[c]{@{}l@{}}Gaussian\\ Naive Bayes\end{tabular} & 0.82 & 0.75 & 0.86 & 0.87 & 0.83 & 0.85 & 0.8 & 0.65 \\
\hline
\begin{tabular}[c]{@{}l@{}} \textbf{Ensemble}\\ \textbf{Classifier}\end{tabular} & 0.96 & 0.81 & 0.97 & 0.9 & 0.96 & 0.88 & 0.95 & 0.74 \\ \hline
\end{tabular}
\caption{\label{tab:tab7} Performances of each of the 7 individual classifiers and of the ensemble classifier to differentiate between intubated and deceased patients. \textit{Note.— L-SVM = Support Vector Machine with a linear kernel; P-SVM = Support Vector Machine with a polynomial kernel; RBF-SVM = Support Vector Machine with a Radial Basis Function kernel}}
\end{table}
\section{Conclusion}
\label{sec:conclusion}
In this paper, we propose {CGL}, a collaborative graph learning model that jointly learns representations of patients and diseases and effectively utilizes the clinical notes in EHR data. We conducted experiments on real-world EHR data to demonstrate the effectiveness of the learned representations and the performance improvements of {{CGL}} over state-of-the-art models. We also provide an analysis of {{CGL}} on multiple aspects, including new-onset diseases, disease embeddings, and the contribution of clinical notes. In the future, we plan to explore methods to quantify the contributions of individual admissions to each predicted medical code.
Usage of single admission records in EHR data will also be considered for further investigation.
\section*{Acknowledgments}
This work was supported in part by US National Science
Foundation under grants 1838730 and 1948432.
SK was supported in part by the NLM of the NIH under Award Number R01LM013308.
\bibliographystyle{named}
\section{Experiments}
\subsection{Experimental Setup}
\label{sec:experiments}
\subsubsection{Dataset Description}
We use the MIMIC-III dataset~\cite{johnson2016mimic} to evaluate {CGL}. Table~\ref{tab:dataset} shows the basic statistics of MIMIC-III. We select patients with multiple visits (\# of visits $\ge$ 2) and keep all clinical notes except those of type ``Discharge summary'', since discharge summaries strongly indicate the prediction targets and would be unfair to use as features. For each note, we keep the first 50,000 words and truncate the rest for computational efficiency; the average number of words per note is below 5,000. We split MIMIC-III randomly by patient into training/validation/test sets of 6000/125/1000 patients. We use the codes in each patient's last visit as labels and the other visits as features. For heart failure prediction, we set the label to 1 if the patient is diagnosed with heart failure in the last visit. Finally, the observation graph is built from the training set. A 5-level hierarchical structure and the ontology graph are built according to ICD-9-CM.
\subsubsection{Evaluation Metrics}
We adopt weighted $F_1$ score (w-$F_1$~\cite{bai2018interpretable}) and top $k$ recall (R@$k$~\cite{choi2016doctor}) for diagnosis predictions. w-$F_1$ is a weighted sum of $F_1$ for each class. R@$k$ is the ratio of true positive numbers in top $k$ predictions by the total number of positive samples, which measures the prediction performance on a subset of classes. For heart failure predictions, we use $F_1$ and the area under the ROC curve (AUC), since it is a binary classification on imbalanced test data.
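As a concrete toy sketch of the R@$k$ metric described above (w-$F_1$ is available off the shelf, e.g. via scikit-learn's \texttt{f1\_score} with \texttt{average='weighted'}); all arrays and values here are illustrative:

```python
import numpy as np

def recall_at_k(y_true, y_score, k):
    """Top-k recall (R@k): fraction of a patient's true codes that appear
    among the k highest-scoring predictions, averaged over patients."""
    recalls = []
    for truth, score in zip(y_true, y_score):
        top_k = np.argsort(score)[::-1][:k]   # indices of the k largest scores
        n_pos = truth.sum()
        if n_pos == 0:
            continue                           # skip patients with no positive codes
        recalls.append(truth[top_k].sum() / n_pos)
    return float(np.mean(recalls))

# toy example: 2 patients, 4 medical codes
y_true = np.array([[1, 0, 1, 0],
                   [0, 1, 0, 0]])
y_score = np.array([[0.9, 0.1, 0.2, 0.8],
                    [0.1, 0.7, 0.3, 0.2]])
print(recall_at_k(y_true, y_score, 2))  # → 0.75
```

For the first patient one of two true codes is in the top 2 (0.5); for the second, the single true code is (1.0), giving a mean of 0.75.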
\subsubsection{Baselines}
To compare {CGL}~with state-of-the-art models, we select the following models as baselines: 1) \textit{RNN-based models}: {RETAIN}~\cite{choi2016retain}, {Dipole}~\cite{ma2017dipole}, {Timeline}~\cite{bai2018interpretable}; 2) \textit{CNN-based models}: {Deepr}~\cite{Phuoc2017deepr}; 3) \textit{Graph-based models}: {GRAM}~\cite{choi2017gram}, {MedGCN}~\cite{mao2019medgcn}; and 4) \textit{a logistic regression model}, {{LR$_{\text{notes}}$}}, on clinical notes using only TF-IDF features of each note (whose dimension is the dictionary size).
Deepr, GRAM, and Timeline use medical code embeddings as inputs, while others use multi-hot vectors of medical codes. We do not consider SMR~\cite{wang2017safe} because 1) it does not compare with the above state-of-the-art models and 2) it focuses on medication recommendation which is different from our tasks. We also do not compare with MiME~\cite{choi2018mime} and GCT~\cite{gct_aaai20} because we do not use treatments and lab results in our data.
\subsubsection{Parameters}
We randomly initialize embeddings for diseases, patients, and clinical notes and select the parameters by grid search. The embedding sizes $d_c$, $d_p$, and $d_w$ are 32, 16, and 16. The graph layer number $L$ is 2. The hidden dimensions {\small$d_p^{(1)}$, $d_c^{(1)}$, and $d_c^{(2)}$} are 32, 64, and 128, and the GRU unit $h$ is set to 200. The coefficient $\lambda$ in $\mathcal{L}_0$ for diagnosis and heart failure prediction is 0.3 and 0.1, respectively. We set the learning rate as $10^{-3}$, optimizer as Adam, and use 200 epochs for training. The source code of {{CGL}} is released at \href{https://github.com/LuChang-CS/CGL}{https://github.com/LuChang-CS/CGL}.
\begin{table}
\centering
\scalebox{0.85}{\begin{tabular}{lcccc}
\toprule
\multirow{1}{*}{\textbf{Models}} & \multirow{1}{*}{\textbf{w-}$\boldsymbol{F_1}$ (\%)} & \textbf{R@20} (\%) & \textbf{R@40} (\%) &
\multirow{1}{*}{ \textbf{Param.}} \\
\midrule
{RETAIN} & 19.66 (0.58) & 33.90 (0.47) & 42.93 (0.39) & 2.90M \\
{Deepr} & 12.38 (0.01) & 28.15 (0.08) & 37.26 (0.14) & 0.80M \\
{GRAM} & 21.06 (0.19) & 36.37 (0.16) & 45.61 (0.27) & 1.38M \\
{Dipole} & 11.24 (0.19) & 26.96 (0.15) & 36.83 (0.26) & 2.08M \\
{Timeline} & 16.83 (0.62) & 32.08 (0.66) & 41.97 (0.74) & 1.23M \\
{MedGCN} & 20.93 (0.25) & 35.69 (0.50) & 43.36 (0.46) & 4.59M \\
LR$_{\text{notes}}$ & 17.56 (0.41) & 36.71 (0.28) & 46.02 (0.38) & 325.65M \\
\midrule
{CGL} & \textbf{22.97 (0.19)} & \textbf{38.19 (0.16)} & \textbf{48.26 (0.15)} & 3.55M \\
\bottomrule
\end{tabular}}
\caption{Diagnosis prediction results in w-${F_1}$ and R@${k}$.}
\label{tab:result_code}
\end{table}
\section{Introduction}
Electronic health records (EHR) consist of patients' temporal visit information in health facilities, such as medical history and doctors' diagnoses. The usage and analysis of EHR not only improves the quality and efficiency of in-hospital patient care but also provides valuable data sources for researchers to predict health events, including diagnoses, medications, and mortality rates, etc. A key research problem is improving prediction performance by learning better representations of patients and diseases so that improved risk control and treatments can be provided. There have been many works on this problem using deep learning models, such as recurrent neural networks (RNN)~\cite{choi2016doctor}, convolutional neural networks (CNN)~\cite{Phuoc2017deepr}, and attention-based mechanisms~\cite{ma2017dipole}. However, several challenges remain in utilizing EHR data and interpreting models:
\begin{figure}
\centering
\includegraphics[width=0.82\linewidth]{figures/hierachical_example_v3}
\caption{An example of the hierarchical structure of the ICD-9-CM system, disease interaction, and patient diagnosis.}
\label{fig:hierarchical_example}
\end{figure}
\begin{enumerate}[leftmargin=*]
\item \textbf{Effectively utilizing the domain knowledge of diseases}. Recently, graph structures are being adopted~\cite{choi2017gram} using disease hierarchies, where diseases are classified into various types at different levels. For example, \figurename{~\ref{fig:hierarchical_example}} shows a classification of two forms of hypertension and one form of heart failure. One problem is that existing works~\cite{choi2017gram,shang2019pre} only consider the vertical relationship between a disease and its ancestors (hierarchical link). However, they ignore horizontal disease links that can reflect disease complications and help to predict future diagnoses.
\item \textbf{Collaboratively learning patient-disease interactions.} Patients with the same diagnoses may have other similar diseases (patient diagnosis in \figurename~{\ref{fig:hierarchical_example}}). Existing approaches ~\cite{choi2017gram,ma2017dipole} treat patients as independent samples by using diagnoses to represent patients, but they fail to capture patient similarities, which help in predicting new-onset diseases from other patients.
\item \textbf{Incorporating unstructured text.} Unstructured data in EHR including clinical notes contain indicative features such as physical conditions and medical history. For~example, a note: ``\textit{The patient was intubated for respiratory distress and increased work of breathing. He was also hypertensive with systolic in the 70s}'' indicates that this patient has a history of respiratory problems and hypertension. Most models~\cite{choi2016retain,bai2018interpretable} do not fully utilize such data. This often leads to unsatisfactory prediction performance and lack of interpretability.
\end{enumerate}
To address these problems, we first present a hierarchical embedding method for diseases to utilize medical domain knowledge. Then, we design a collaborative graph neural network to learn hidden representations from two graphs: a {\it patient-disease observation graph} and a {\it disease ontology graph}. In the observation graph, if a patient is diagnosed with a disease, we create an edge between this patient and the disease. The ontology graph uses weighted ontology edges to describe horizontal disease interactions. Moreover, to learn the contributions of keywords for predictions, we design a TF-IDF-rectified attention mechanism for clinical notes which takes visit temporal features as context information. Finally, combining disease and text features, the proposed model is evaluated on two tasks: predicting patients' future diagnoses and heart failure events. The main contributions of this work are summarized as follows:
\begin{itemize}
\item We propose to collaboratively learn the representations of patients and diseases on the observation and ontology graphs. We also utilize the hierarchical structure of medical domain knowledge and introduce an ontology weight to capture hidden disease correlations.
\item We integrate structured information of patients' previous diagnoses and unstructured information of clinical notes with a TF-IDF-rectified attention method.
It allows us to regulate attention scores without any manual intervention and alleviates the issue of using attention as a tool to audit a model~\cite{jain-wallace-2019-attention}.
\item We conduct extensive experiments and illustrate that the proposed model outperforms state-of-the-art models for prediction tasks on MIMIC-III dataset. We also provide detailed analysis for model predictions.
\end{itemize}
\section{Methodology}\label{sec:methodology}
\subsection{Problem Formulation}
An EHR dataset is a collection of patient visit records. Let $\mathcal{C} = \{c_1, c_2, \dots, c_{|\mathcal{C}|}\}$ be the entire set of diseases represented by medical codes in an EHR dataset, where $|\mathcal{C}|$ is the medical code number. Let $\mathcal{N} = \{\omega_1, \omega_2, \dots, \omega_{|\mathcal{N}|}\}$ be the dictionary of clinical notes, where $|\mathcal{N}|$ is the word number.
\paragraph{EHR dataset.} An EHR dataset is given by $\mathcal{D} = \{ \gamma_u | u \in \mathcal{U} \}$ where $\mathcal{U}$ is the collection of patients in $\mathcal{D}$ and $\gamma_u = (V^u_1,V^u_2, \dots, V^u_T)$ is a visit sequence of patient $u$. Each visit $V^u_t = \{C^u_t, N^u_t \}$ is recorded with a subset of medical codes $C^u_t \subset \mathcal{C}$, and a paragraph of clinical notes $ N^u_t \subset \mathcal{N}$ containing a sequence of $|N^u_t|$ words.
\paragraph{Diagnosis prediction.} Given a patient $u$'s previous $T$ visits, this task predicts a binary vector $\mathbf{\hat{y}} \in \{0, 1\}^{|\mathcal{C}|}$ which represents the possible diagnoses in the $(T + 1)$-th visit. $\mathbf{\hat{y}}_i = 1$ denotes that $c_i$ is predicted to be in $C^u_{T + 1}$.
\paragraph{Heart failure prediction.} Given a patient $u$'s previous $T$ visits, this task predicts a binary value $\hat{y} \in \{0, 1\}$. $\hat{y} = 1$~denotes that $u$ is predicted with heart failure\footnote{The medical codes of heart failure start with 428 in ICD-9-CM.} in $(T + 1)$-th visit.
In the rest of this paper, we drop the superscript $u$ in $V^u_t, C^u_t$, and $N^u_t$ for convenience unless otherwise stated.
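The two label constructions above can be sketched as follows; the medical-code vocabulary here is a toy stand-in, and the heart failure rule uses the ICD-9-CM prefix 428 mentioned in the footnote:

```python
import numpy as np

codes = ["4280", "4019", "25000", "41401"]   # toy medical-code vocabulary C
code_idx = {c: i for i, c in enumerate(codes)}

def to_multi_hot(diagnoses):
    """Encode the diagnosis set of a visit as y in {0,1}^|C|."""
    y = np.zeros(len(codes), dtype=int)
    for c in diagnoses:
        y[code_idx[c]] = 1
    return y

last_visit = {"4280", "4019"}                # codes of the (T+1)-th visit
y = to_multi_hot(last_visit)
print(y.tolist())                            # → [1, 1, 0, 0]

# heart failure label: 1 iff any code in the last visit starts with 428
heart_failure = int(any(c.startswith("428") for c in last_visit))
print(heart_failure)                         # → 1
```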
\subsection{The Proposed Model}
In this section, we propose a \textbf{C}ollaborative \textbf{G}raph \textbf{L}earning model, \textbf{{CGL}}. An overview of {{CGL}} is shown in \figurename{~\ref{fig:proposed_model}}.
\subsubsection{Hierarchical Embedding for Medical Codes}
ICD-9-CM is an official system of assigning codes to diseases. It hierarchically classifies medical codes into different types of diseases in $K$ levels. This forms a tree structure where each node has only one parent. Note that most medical codes in patients' visits from EHR data are leaf nodes. However, a patient can be diagnosed with a higher level disease, i.e., non-leaf node. Therefore, we recursively create virtual child nodes for each non-leaf node to pad them into virtual leaf nodes. We assume there are $n_k$ nodes at each level $k$ (smaller $k$ means higher level in the hierarchical structure).
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figures/system_framework-GALE.pdf}
\caption{An overview of the proposed model. The graph learning first learns disease hidden features with two collaborative graphs: an observation graph and an ontology graph, based on the hierarchical embedding from medical domain knowledge. Then an RNN is designed to learn temporal information of visit sequences. Rectified Attention mechanism encodes clinical notes with the guide of TF-IDF and uses the visit representation as an attention context vector to integrate structured visit records and unstructured clinical notes.}
\label{fig:proposed_model}
\end{figure}
We create an embedding tensor $\{ \mathbf{E}_k \}_{k \in [1, 2, \dots, K]}$ for nodes in the tree. $\mathbf{E}_k \in \mathbb{R}^{n_k \times d_c}$ is the embedding matrix for nodes in level $k$, and $d_c$ is the embedding size. For a medical code $c_i$ as a leaf node, we first identify its ancestors in each level $k=[1, 2, \dots, K-1]$ in the tree and select corresponding embedding vectors from $\{\mathbf{E}_k\}$. Then, the hierarchical embedding $\mathbf{e}_i \in \mathbb{R}^{Kd_c}$ of $c_i$ is calculated by concatenating the embeddings in each level: $\mathbf{e}_i = \mathbf{E}_1^i \oplus \mathbf{E}_2^i \oplus \dots \oplus \mathbf{E}_K^i$, where $\oplus$ denotes the concatenation. We use $\mathbf{E} \in \mathbb{R}^{|\mathcal{C}| \times Kd_c}$ to represent medical codes after hierarchical embedding.
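A minimal sketch of this hierarchical embedding lookup, with toy level sizes and a hand-made ancestor table (in the real model the ancestors come from ICD-9-CM):

```python
import numpy as np

rng = np.random.default_rng(0)
K, d_c = 3, 4                    # number of levels and per-level embedding size (toy)
n_k = [2, 5, 10]                 # node counts per level
E = [rng.normal(size=(n, d_c)) for n in n_k]   # embedding tensor {E_k}

# ancestors[i] lists code c_i's node index at each level 1..K (leaf at level K)
ancestors = {0: [0, 1, 3], 1: [1, 4, 9]}

def hierarchical_embedding(i):
    # e_i = E_1^i ⊕ E_2^i ⊕ ... ⊕ E_K^i  (concatenation over all K levels)
    return np.concatenate([E[k][ancestors[i][k]] for k in range(K)])

print(hierarchical_embedding(0).shape)  # → (12,) = (K * d_c,)
```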
\subsubsection{Graph Representation}
In visit records, specific diagnosis co-occurrences could reveal hidden similarities of patients and diseases. We explore this relationship through the following hypotheses:
\begin{enumerate}[leftmargin=*]
\item \textbf{Diagnostic similarity of patients}. If two patients get diagnosed with the same diseases, they tend to have diagnostic similarities and get similar diagnoses in the future.
\item \textbf{Medical similarity of diseases}.
If two diseases belong to the same higher-level disease, they might have medical similarities such as symptoms, causes, and complications.
\end{enumerate}
Based on these hypotheses, we construct a collaborative graph $\mathcal{G} = \{ \mathcal{G}_{\mathcal{U}\mathcal{C}}, \mathcal{G}_{\mathcal{C}\mathcal{C}} \}$ for patients and medical codes. $\mathcal{G}_{\mathcal{U}\mathcal{C}}$ is the patient-disease \textbf{\textit{observation graph}} built from EHR data. Its nodes are patients and medical codes. We use a patient-code adjacency matrix $\mathbf{A}_{\mathcal{U}\mathcal{C}} \in \{0, 1\}^{|\mathcal{U}|\times|\mathcal{C}|}$ to represent $\mathcal{G}_{\mathcal{U}\mathcal{C}}$. Given patient $u$, if $u$ is diagnosed with a code $c_i$ in a previous visit, we add an edge $(u, c_i)$ and set $\mathbf{A}_{\mathcal{U}\mathcal{C}}[u][i] = 1$. $\mathcal{G}_{\mathcal{C}\mathcal{C}}$ is the \textbf{\textit{ontology graph}}. Its nodes are medical codes. To model horizontal links of two medical codes (leaf nodes), we create a code-code adjacency matrix $\mathbf{A}'_{\mathcal{C}\mathcal{C}} \in \mathbb{N}^{|\mathcal{C}| \times |\mathcal{C}|}$. If two medical codes $c_i$ and $c_j$ have their lowest common ancestor in level $k$, we add an ontology edge $(c_i, c_j)_k$ and set $\mathbf{A}'_{\mathcal{C}\mathcal{C}}[i][j] = k$. This process is based on the idea that two medical codes with a common ancestor in lower levels of the hierarchical graph of ICD-9-CM should be similar diseases. Finally, we set $\mathbf{A}'_{\mathcal{C}\mathcal{C}}[i][i] = 0$ for all diagonal elements. Although $\mathbf{A}'_{\mathcal{C}\mathcal{C}}$ can reflect the hierarchical structure of medical codes, it is a dense matrix and generates a nearly complete ontology graph, which will cause a high complexity for graph learning. We further propose a disease co-occurrence indicator matrix $\mathbf{B}_{\mathcal{C}\mathcal{C}}$ initialized with all zeros. If two medical codes $c_i$ and $c_j$ appear in a patient's visit record, we set $\mathbf{B}_{\mathcal{C}\mathcal{C}}[i][j]$ and $\mathbf{B}_{\mathcal{C}\mathcal{C}}[j][i]$ as 1.
Then, we let $\mathbf{A}_{\mathcal{C}\mathcal{C}} = \mathbf{A}'_{\mathcal{C}\mathcal{C}} \odot \mathbf{B}_{\mathcal{C}\mathcal{C}}$ be a new adjacency matrix for $\mathcal{G}_{\mathcal{C}\mathcal{C}}$ to neglect disease pairs in $\mathbf{A}'_{\mathcal{C}\mathcal{C}}$ which never co-occur in EHR data. Here $\odot$ denotes element-wise multiplication. Finally, we not only create a sparse ontology graph for computational efficiency, but also focus on more common and reasonable disease connections in the ontology graph.
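The graph construction above can be sketched as follows; the visit records and the lowest-common-ancestor level matrix are hand-made toy data standing in for MIMIC-III and ICD-9-CM:

```python
import numpy as np

n_patients, n_codes = 3, 4
visits = {0: [{0, 1}], 1: [{1, 2}], 2: [{3}]}   # toy diagnosis sets per patient

# observation graph: A_UC[u][i] = 1 if patient u was diagnosed with code c_i
A_uc = np.zeros((n_patients, n_codes), dtype=int)
for u, vs in visits.items():
    for code_set in vs:
        for i in code_set:
            A_uc[u, i] = 1

# A'_CC[i][j] = level of the lowest common ancestor of c_i and c_j (0 on diagonal)
lca_level = np.array([[0, 2, 1, 1],
                      [2, 0, 1, 1],
                      [1, 1, 0, 2],
                      [1, 1, 2, 0]])

# co-occurrence indicator B_CC: codes appearing together in some visit
B = np.zeros((n_codes, n_codes), dtype=int)
for vs in visits.values():
    for code_set in vs:
        for i in code_set:
            for j in code_set:
                if i != j:
                    B[i, j] = 1

# A_CC = A'_CC ⊙ B_CC: keep only ontology edges that actually co-occur
A_cc = lca_level * B
print(A_cc)
```

Only the pairs $(c_0, c_1)$ and $(c_1, c_2)$ survive here, since no patient was ever diagnosed with the other combinations.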
\subsubsection{Collaborative Graph Learning}
To learn hidden features of medical codes and patients, we design a collaborative graph learning method on the observation and ontology graphs. Instead of calculating patient embeddings with medical codes like DeepPatient~\cite{miotto2016deep}, we assign each patient an initial embedding. $\mathbf{P} \in \mathbb{R}^{|\mathcal{U}| \times d_p}$ is the embedding matrix of all patients with the size of $d_p$. Let $\mathbf{H}_p^{(0)} = \mathbf{P}, \mathbf{H}_c^{(0)} = \mathbf{E}$ and $\mathbf{H}_p^{(l)} \in \mathbb{R}^{|\mathcal{U}| \times d_p^{(l)}}, \mathbf{H}_c^{(l)} \in \mathbb{R}^{|\mathcal{C}| \times d_c^{(l)}}$ be the hidden features of patients and medical codes (i.e., inputs of the $l$-th graph layer). We design a graph aggregation method to calculate the hidden features of patients and medical codes in the next layer. First, we map the medical code features $\mathbf{H}_c^{(l)}$ into the patient dimension and aggregate adjacent medical codes from the observation graph ($\mathbf{A}_{\mathcal{U}\mathcal{C}}$) for each patient:
\begin{align}
\label{eq:z_p}
\mathbf{Z}_p^{(l)} = \mathbf{H}_p^{(l)} + \mathbf{A}_{\mathcal{U}\mathcal{C}}\mathbf{H}_c^{(l)}\mathbf{W_{\mathcal{C}\mathcal{U}}}^{(l)} \in \mathbb{R}^{|\mathcal{U}| \times d_p^{(l)}}.
\end{align}
Here $\mathbf{W_{\mathcal{C}\mathcal{U}}}^{(l)} \in \mathbb{R}^{d_c^{(l)} \times d_p^{(l)}}$ maps code embeddings to patient embeddings. For the ontology graph, if $c_i$, $c_j$ are connected in level $k$, we assign an ontology weight $\phi_j$ to $c_j$ when aggregating $c_j$ into $c_i$:
\begin{align}
\phi_j\left(k\right) = \sigma\left({\mu}_j \times k + \theta_j\right).
\end{align}
Here $\sigma$ is the sigmoid function. $\mu_j, \theta_j \in \mathbb{R}$ are trainable variables for $c_j$. $\phi_j(k)$ is a monotonic function w.r.t. level $k$. This function enables the model to describe the horizontal influence of a disease on other diseases via assigning increasing or decreasing weights by levels. Let $\Phi \in \mathbb{R}^{|\mathcal{C}| \times|\mathcal{C}|}$ be the ontology weight matrix and $\mathbf{M}, \mathbf{\Theta} \in \mathbb{R}^{|\mathcal{C}|}$ be the collection of $\mu, \theta$. $\mathbf{H}_p^{(l)}$ is mapped into the medical code dimension and aggregated with adjacent patients from the observation graph:
\begin{align}
\Phi = \sigma(\mathbf{M} \odot \mathbf{A}_{\mathcal{C}\mathcal{C}} + \mathbf{\Theta})\in \mathbb{R}^{|\mathcal{C}| \times|\mathcal{C}|}, \label{eq:phi}
\end{align}
\begin{align}
\mathbf{Z}_c^{(l)} = \mathbf{H}_c^{(l)} + \mathbf{A}^{\top}_{\mathcal{U}\mathcal{C}}\mathbf{H}_p^{(l)}\mathbf{W_{\mathcal{U}\mathcal{C}}}^{(l)} + {\Phi\mathbf{H}_c^{(l)}} \in \mathbb{R}^{|\mathcal{C}| \times d_c^{(l)}}. \label{eq:z_c}
\end{align}
Here $\mathbf{W}_{\mathcal{U}\mathcal{C}} \in \mathbb{R}^{d_p^{(l)} \times d_c^{(l)}}$ maps patient embeddings to code embeddings. Given that $\mathbf{A}_{\mathcal{C}\mathcal{C}}$ stores the level where two diseases are connected, we use $\mathbf{A}_{\mathcal{C}\mathcal{C}}$ to compute $\Phi$. Finally, $\mathbf{H}_p^{(l)}$ and $\mathbf{H}_c^{(l)}$ of the next layer are calculated as follows:
\begin{align}
\label{eq:layer_o}
\mathbf{H}_{\{p, c\}}^{(l + 1)} = \text{ReLU}\left( \text{BatchNorm}\left( \mathbf{Z}_{\{p, c\}}^{(l)}\mathbf{W}^{(l)}_{\{p, c\}} \right)\right),
\end{align}
where $\mathbf{W}^{(l)}_{\{p, c\}}$ maps $\mathbf{Z}^{(l)}_{\{p, c\}}$ to the $(l + 1)$-th layer, and we use batch normalization to normalize features. In the $L$-th graph layers, we do not calculate $\mathbf{H}_{p}^{(L)}$ and only calculate $\mathbf{H}_{c}^{(L)}$ as the graph output, since the medical codes are required for further calculation. We let $\mathbf{H}_c = \mathbf{H}_{c}^{(L)} \in \mathbb{R}^{{|\mathcal{C}| \times d^{(L)}_c}}$ be the final embedding for medical codes.
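One layer of this collaborative aggregation can be sketched in NumPy as below; all sizes and random matrices are toy stand-ins, and batch normalisation is omitted for brevity:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    return np.maximum(x, 0.0)

rng = np.random.default_rng(0)
nU, nC, dp, dc = 3, 4, 5, 6                          # |U|, |C|, d_p, d_c (toy)
A_uc = rng.integers(0, 2, (nU, nC)).astype(float)    # observation graph A_UC
A_cc = rng.integers(0, 3, (nC, nC)).astype(float)    # ontology levels A_CC
H_p = rng.normal(size=(nU, dp))                      # patient features H_p^(l)
H_c = rng.normal(size=(nC, dc))                      # code features H_c^(l)
W_cu = rng.normal(size=(dc, dp))                     # code -> patient map
W_uc = rng.normal(size=(dp, dc))                     # patient -> code map
M = rng.normal(size=(1, nC))                         # mu_j, one per code c_j
Theta = rng.normal(size=(1, nC))                     # theta_j, one per code c_j

# Eq. (1): aggregate adjacent codes into each patient
Z_p = H_p + A_uc @ H_c @ W_cu
# Eq. (3): ontology weights, Phi[i, j] = sigma(mu_j * A_CC[i, j] + theta_j)
Phi = sigmoid(M * A_cc + Theta)
# Eq. (4): aggregate adjacent patients and weighted ontology neighbours per code
Z_c = H_c + A_uc.T @ H_p @ W_uc + Phi @ H_c
# Eq. (5): next-layer features (BatchNorm omitted in this sketch)
H_c_next = relu(Z_c @ rng.normal(size=(dc, dc)))
print(Z_p.shape, Z_c.shape)  # → (3, 5) (4, 6)
```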
\subsubsection{Temporal Learning for Visits}
Given a patient $u$, we first compute an embedding $\mathbf{v}_t$ for visit~$t$:
\begin{align}
\label{eq:visit_emb}
\mathbf{v}_t = \frac{1}{\mid C_t \mid}\sum_{c_i \in C_t}{\mathbf{H}_c^i} \in \mathbb{R}^{{d^{(L)}_c}}.
\end{align}
After the collaborative graph learning, $\mathbf{H}_c^i$ contains the information of its multi-hop neighbor diseases by the connection of patient nodes. Hence, different from {{GRAM}}, it enables the model to effectively predict diseases that have never been diagnosed on a patient before. We then employ GRU on $\mathbf{v}_t$ to learn visit temporal features and get a hidden representation $\mathbf{R} = \{\mathbf{r}_1, \mathbf{r}_2, \dots, \mathbf{r}_T\}$ where the size of the RNN cell is $h$:
\begin{align}
\label{eq:gru1}
\mathbf{R} = \mathbf{r}_1, \mathbf{r}_2, \dots, \mathbf{r}_T &= \text{GRU}(\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_T) \in \mathbb{R}^{T \times h}.
\end{align}
Then we apply a location-based attention~\cite{luong2015effective} to calculate the final hidden representation $\mathbf{o}_v$ of all visits:
\begin{align}
\label{eq:attention1}
\boldsymbol{\alpha} &= \text{softmax}\left( \mathbf{R}\mathbf{w}_{\alpha} \right) \in \mathbb{R}^{T}, \\
\mathbf{o}_v &= \boldsymbol{\alpha}\mathbf{R} \in \mathbb{R}^h, \label{eq:attention3}
\end{align}
where $\mathbf{w}_{\alpha} \in \mathbb{R}^{h}$ is a context vector for attention and $\boldsymbol{\alpha}$ is the attention weight for each visit.
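The pooling over visits in Eqs.~(\ref{eq:attention1})--(\ref{eq:attention3}) is a standard location-based attention; a minimal NumPy sketch, assuming the GRU hidden states $\mathbf{R}$ are already computed (shapes are illustrative):

```python
import numpy as np

def location_attention(R, w_alpha):
    """Location-based attention over visit hidden states.

    R: (T, h) GRU hidden states; w_alpha: (h,) context vector.
    Returns attention weights alpha (T,) and the pooled output o_v (h,).
    """
    scores = R @ w_alpha
    scores -= scores.max()                       # numerically stable softmax
    alpha = np.exp(scores) / np.exp(scores).sum()
    return alpha, alpha @ R                      # (1, T) x (T, h) -> (h,)

rng = np.random.default_rng(1)
T, h = 5, 8
alpha, o_v = location_attention(rng.normal(size=(T, h)), rng.normal(size=h))
print(alpha.shape, o_v.shape)  # (5,) (8,)
```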
\subsubsection{Guiding Attention on Clinical Notes}
We incorporate the clinical notes $N_T$ from the latest visit $V_T$, since $N_T$ generally contains the medical history and future plan for a patient. Because traditional attention mechanisms in NLP have raised concerns when used as a tool to audit a model~\cite{jain-wallace-2019-attention,serrano-smith-2019-attention}, we propose an attention regulation strategy that automatically highlights key words. Pruthi \textit{et al.}~\cite{pruthi-etal-2020-learning} present a manipulation strategy that uses a set of pre-defined impermissible tokens and penalizes the attention weights on these tokens. To implement the regulation strategy, we propose a TF-IDF-rectified attention method on clinical notes. Regarding all patients' notes as a corpus and each patient's note as a document, for a patient $u$, we first calculate the TF-IDF weight $\beta_{i}$ for each word $\omega_i$ in $u$'s note $N_T$ and normalize the weights into $[0, 1]$. Then, we select the embedding $\mathbf{q}_i \in \mathbb{R}^{d_w}$ from a randomly initialized word embedding matrix $\mathbf{Q} \in \mathbb{R}^{|\mathcal{N}| \times d_w}$. In the attention of Eq.~(\ref{eq:attention1}), the context vector $\mathbf{w}_{\alpha}$ is randomly initialized; since clinical notes are correlated with diagnoses, here we instead adopt $\mathbf{o}_v$ as the context vector. First, we project the word embeddings $\mathbf{Q}$ into the dimension of visits so that they can be multiplied with the context vector $\mathbf{o}_v$:
\begin{align}
\mathbf{Q}' = \mathbf{Q}\mathbf{W}_q \in \mathbb{R}^{|\mathcal{N}| \times h}.
\end{align}
Then, letting $\mathbf{N}$ be the embedding matrix selected from $\mathbf{Q}'$ for the words in $N_T$, we calculate the attention weight $\boldsymbol{\alpha}'$ as well as the output ${\mathbf{o}_n}$ for clinical~notes:
\begin{align}
\boldsymbol{\alpha}' &= \text{softmax}\left( \mathbf{N}\mathbf{o}_v \right) \in \mathbb{R}^{|N_T|}, \\
\mathbf{o}_n &= \boldsymbol{\alpha}'\mathbf{N} \in \mathbb{R}^{h}.
\end{align}
For a word with a high TF-IDF weight in a clinical note, we expect the model to focus on this word with a high attention weight. Therefore, we introduce a TF-IDF-rectified attention penalty $\mathcal{L}_0$ for the attention weights of words:
\begin{align}
\mathcal{L}_0 = -\sum_{\omega_i \in N_T}{\left(\alpha'_i \log\beta_{i} + (1 - \alpha'_i)\log{(1 - \beta_{i})}\right)}.
\end{align}
The attention weights that mismatch the TF-IDF weights will be penalized. We believe that irrelevant (impermissible) words such as ``patient'' and ``doctor'' tend to have low TF-IDF weights. Finally, we concatenate ${\mathbf{o}_n}$ and $\mathbf{o}_v$ as the output ${\mathbf{O}} \in \mathbb{R}^{2h}$ for patient $u$: ${\mathbf{O}} = \mathbf{o}_v \oplus {\mathbf{o}}_n$.
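The penalty $\mathcal{L}_0$ is simply a binary cross-entropy between the word attention weights $\alpha'_i$ and the normalized TF-IDF weights $\beta_i$. A minimal sketch (the clipping constant is ours, added only for numerical stability):

```python
import numpy as np

def tfidf_rectified_penalty(alpha, beta, eps=1e-9):
    """Binary cross-entropy between attention weights alpha' and
    normalized TF-IDF weights beta (both in [0, 1])."""
    alpha = np.clip(alpha, eps, 1 - eps)
    beta = np.clip(beta, eps, 1 - eps)
    return -np.sum(alpha * np.log(beta) + (1 - alpha) * np.log(1 - beta))

alpha = np.array([0.7, 0.2, 0.1])   # attention weights of three words
beta  = np.array([0.8, 0.1, 0.05])  # their normalized TF-IDF weights
penalty = tfidf_rectified_penalty(alpha, beta)
print(penalty)
```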
\subsubsection{Prediction and Inference}
Diagnosis prediction is a multi-label classification task, while heart failure prediction is a binary classification task. For both tasks, we use a dense layer with a sigmoid activation function on the model output $\mathbf{O}$ to calculate the predicted probability $\hat{\mathbf{y}}$. The loss function of classification for both tasks is the cross-entropy loss $\mathcal{L}_c$. Then, we combine the TF-IDF-rectified penalty $\mathcal{L}_0$ and the cross-entropy loss as the final loss $\mathcal{L}$ to train the model:
\begin{align}
\label{eq:model_loss}
\mathcal{L} = \lambda\mathcal{L}_0 + \mathcal{L}_c(\hat{\mathbf{y}}, \mathbf{y}).
\end{align}
Here, $\mathbf{y}$ is the ground-truth label of medical codes or heart failure, and $\lambda$ is a coefficient to adjust $\mathcal{L}_0$. In the inference phase, we freeze the trained model and retrieve the embeddings $\mathbf{H}_c$ of medical codes at the output of heterogeneous graph learning. Then, given a new patient for inference, we continue from Eq.~(\ref{eq:visit_emb}) and make predictions.
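A sketch of the final loss in Eq.~(\ref{eq:model_loss}); the concrete value of $\lambda$ and the averaging (rather than summing) inside the cross-entropy are illustrative assumptions:

```python
import numpy as np

def combined_loss(y_hat, y, L0, lam=0.1, eps=1e-9):
    """Final training loss: lam * L0 + cross-entropy over the labels."""
    y_hat = np.clip(y_hat, eps, 1 - eps)
    ce = -np.mean(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))
    return lam * L0 + ce

y     = np.array([1.0, 0.0, 1.0, 0.0])  # multi-hot ground-truth codes
y_hat = np.array([0.9, 0.2, 0.6, 0.1])  # predicted probabilities
loss = combined_loss(y_hat, y, L0=2.5)
print(round(loss, 4))
```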
\section{Related Work}
\label{sec:related_works}
Deep learning models, especially RNN models, have been applied to predict health events and learn representations of medical concepts. DoctorAI~\cite{choi2016doctor} uses RNN to predict diagnoses in patients' next visits and the time duration between patients' current and next visits. RETAIN~\cite{choi2016retain} improves the prediction accuracy through a sophisticated attention process on RNN. Dipole~\cite{ma2017dipole} uses a bi-directional RNN and attention to predict diagnoses of patients' next visits. Both Timeline~\cite{bai2018interpretable} and ConCare~\cite{concare2020} utilize time-aware attention mechanisms in RNN for health event predictions. However, RNN-based models regard patients as independent samples and ignore relationships between diseases and patients which help to predict diagnoses for similar patients.
Recently, graph structures have been adopted to explore medical domain knowledge and relations of medical concepts. GRAM~\cite{choi2017gram} constructs a disease graph from medical knowledge. MiME~\cite{choi2018mime} utilizes connections between diagnoses and treatments in each visit to construct a graph. GBERT~\cite{shang2019pre} jointly learns two graph structures of diseases and medications to recommend medications. It uses a bi-directional transformer to learn visit embeddings. MedGCN~\cite{mao2019medgcn} combines patients, visits, lab results, and medicines to construct a heterogeneous graph for medication recommendations. GCT~\cite{gct_aaai20} also builds graph structures of diagnoses, treatments, and lab results. However, these models only consider disease hierarchical structures, while neglecting disease horizontal links that reflect hidden disease complications. As a result, prediction performance is limited.
In addition, CNN and Autoencoder are also adopted to predict health events. DeepPatient~\cite{miotto2016deep} uses an MLP as an autoencoder to rebuild features in EHR. Deepr~\cite{Phuoc2017deepr} treats diagnoses in a visit as words to predict future risks such as readmissions in three months. AdaCare~\cite{ma2020adacare} uses multi-scale dilated convolution to capture dynamic variations of biomarkers over time. However, these models neither consider medical domain knowledge nor explore patient similarities as discussed.
In this paper, we explore disease horizontal connections using a disease ontology graph. We collaboratively learn representations of both patients and diseases in their associated networks. We also design an attention regulation strategy on unstructured text features to provide quantified contributions of clinical notes and interpretations of prediction results.
\subsection{Experimental Results}
\subsubsection{Diagnosis and Heart Failure Prediction}
Table~\ref{tab:result_code} shows the results of baselines and {CGL}~on diagnosis prediction. We use $k = [20, 40]$ for R@$k$. Each model is trained 5 times with different variable initializations. The mean and standard deviation are reported. The proposed {CGL}~model outperforms all the baselines. We think this is mostly because {{CGL}~} captures hidden connections of patients and diseases and utilizes clinical notes. In addition, the results of {LR$_{\text{notes}}$}~indicate that only using clinical notes does not improve performance in predicting diagnoses. Table~\ref{tab:result_hf} shows the heart failure prediction results. We observe that {CGL}~also achieves the best performance in terms of AUC and $F_1$.
\begin{table}
\centering
\scalebox{0.85}{\begin{tabular}{lccc}
\toprule
\textbf{Models} & \textbf{AUC} (\%) & $\boldsymbol{F_1}$ (\%) & \textbf{Param.} \\
\midrule
{RETAIN} & 82.73 (0.21) & 71.12 (0.37) & 1.67M \\
{Deepr} & 81.29 (0.01) & 68.42 (0.01) & 0.49M \\
{GRAM} & 82.82 (0.06) & 71.43 (0.05) & 0.76M \\
{Dipole} & 81.66 (0.07) & 70.01 (0.04) & 1.45M \\
{Timeline} & 80.75 (0.46) & 69.81 (0.34) & 0.95M \\
{MedGCN} & 81.25 (0.15) & 70.86 (0.18) & 3.98M \\
LR$_{\text{notes}}$ & 80.33 (0.12) & 69.18 (0.27) & 0.07M \\
\midrule
{CGL} & \textbf{85.66 (0.19)} & \textbf{72.68 (0.22)} & 1.62M \\
\bottomrule
\end{tabular}}
\caption{Heart failure prediction results in AUC and $F_1$.}
\label{tab:result_hf}
\end{table}
\begin{table}
\centering
\scalebox{0.85}{\begin{tabular}{lcccccc}
\toprule
\multirow{2}{*}{\textbf{Models}} & \multicolumn{3}{c}{\textbf{Diagnosis}} &\multicolumn{3}{c}{\textbf{Heart failure}}\\
\cmidrule{2-4} \cmidrule{5-7}
& w-$F_1$ & R@20 & Param. & AUC & $F_1$ & Param. \\
\midrule
{CGL} $_{h\text{-}}$ & 20.87 & 35.66 & 3.98M & 82.58 & 71.02 & 2.04M \\
{CGL} $_{n\text{-}}$ & 22.10 & 37.59 & 1.50M & 84.53 & 71.96 & 0.53M \\
{CGL} $_{w\text{-}}$ & 22.06 & 37.31 & 3.54M & 83.91 & 71.59 & 1.60M \\
{CGL} & \textbf{22.97} & \textbf{38.19} & 3.55M & \textbf{85.66} & \textbf{72.68} & 1.62M \\
\bottomrule
\end{tabular}}
\caption{w-$F_1$, R@20 of diagnosis prediction and AUC, $F_1$ of heart failure prediction for {{CGL}} variants. {CGL}$_{h\text{-}}$: no hierarchical embedding; {CGL}$_{n\text{-}}$: no clinical notes; {CGL}$_{w\text{-}}$: no ontology weights.}
\label{tab:ablation}
\end{table}
\subsubsection{Ablation Study}
To study the effectiveness of components, we also compare 3 {{CGL}} variants: {CGL}~without hierarchical embedding ({CGL}$_{h\text{-}}$), {CGL}~without clinical notes as inputs ({CGL}$_{n\text{-}}$), and {CGL}~without ontology weights ({CGL}$_{w\text{-}}$). The results are shown in Table~\ref{tab:ablation}. We observe that even without clinical notes, {{CGL} $_{n\text{-}}$} with hierarchical embeddings and ontology weights still outperforms all the baselines. This indicates that domain knowledge, including hierarchical embeddings and ontology weights, also helps to learn better representations of medical codes. In addition, from Table~\ref{tab:ablation} we can infer that the complexity of {{CGL}} mostly comes from modeling clinical notes, i.e., word embeddings. Therefore, {{CGL}} is scalable and can be generalized to other tasks when clinical notes are not accessible.
\subsubsection{Prediction Analysis}
\paragraph{New-onset diseases.}
For a patient, new-onset diseases denote new diseases in future visits which have not occurred in previous visits of this patient. We use the ability of predicting new-onset diseases to measure learned diagnostic similarity of patients. It is natural for a model to predict diseases that have occurred in previous visits. With the help of other similar patients' records, the model should be able to predict new diseases for a patient. The idea is similar to collaborative filtering in recommender systems. If two patients are similar, one of them may be diagnosed with new-onset diseases which have occurred in the other patient. We also use R@$k$ ($k=[20, 40]$) to evaluate the ability of predicting occurred and new-onset diseases. Here, R@$k$ denotes the ratio between the number of correctly predicted occurred (or new) diseases and the number of ground-truth diseases. We select GRAM and MedGCN which have good performance in diagnosis prediction, and {{CGL} $_{n\text{-}}$} without clinical notes, because we want to explore the effectiveness of the proposed observation and ontology graphs. Table~\ref{tab:occurred} shows the results of R@$k$ on test data. We can see that {{CGL} $_{n\text{-}}$} has similar results to {{GRAM}} on occurred diseases while achieving superior performance on new-onset diseases. This verifies that our proposed collaborative graph learning is able to learn from similar patients and predict new-onset diseases in the future.
\begin{table}
\centering
\scalebox{0.85}{\begin{tabular}{lcccc}
\toprule
\multirow{2}{*}{\textbf{Models}} & \multicolumn{2}{c}{\textbf{Occurred}} &\multicolumn{2}{c}{\textbf{New-onset}}\\
\cmidrule{2-3} \cmidrule{4-5}
& R@20 & R@40 & R@20 & R@40 \\
\midrule
{GRAM} & 21.05 & 23.11 & 15.32 & 22.50 \\
{MedGCN} & 20.51 & 21.89 & 15.38 & 21.53 \\
{CGL} $_{n\text{-}}$ & \textbf{21.26} & \textbf{23.85} & \textbf{16.33} & \textbf{23.58} \\
\bottomrule
\end{tabular}}
\caption{R@$k$ of predicting occurred/new-onset diseases.}
\label{tab:occurred}
\end{table}
\begin{figure}
\centering
\scalebox{0.85}{\begin{subfigure}[t]{0.32\linewidth}
\includegraphics[width=\linewidth]{figures/gram_code_l1}
\subcaption{GRAM level 1}
\end{subfigure}}
\scalebox{0.85}{\begin{subfigure}[t]{0.32\linewidth}
\includegraphics[width=\linewidth]{figures/gram_code_l2}
\subcaption{GRAM level 2}
\end{subfigure}}
\scalebox{0.85}{\begin{subfigure}[t]{0.32\linewidth}
\includegraphics[width=\linewidth]{figures/gram_code_l3}
\subcaption{GRAM level 3}
\end{subfigure}}
\par\bigskip
\scalebox{0.85}{\begin{subfigure}[t]{0.32\linewidth}
\includegraphics[width=\linewidth]{figures/timeline_code_l1}
\subcaption{Timeline level 1}
\end{subfigure}}
\scalebox{0.85}{\begin{subfigure}[t]{0.32\linewidth}
\includegraphics[width=\linewidth]{figures/timeline_code_l2}
\subcaption{Timeline level 2}
\end{subfigure}}
\scalebox{0.85}{\begin{subfigure}[t]{0.32\linewidth}
\includegraphics[width=\linewidth]{figures/timeline_code_l3}
\subcaption{Timeline level 3}
\end{subfigure}}
\par\bigskip
\scalebox{0.85}{\begin{subfigure}[t]{0.32\linewidth}
\includegraphics[width=\linewidth]{figures/code_l1}
\subcaption{{CGL}~level 1}
\end{subfigure}}
\scalebox{0.85}{\begin{subfigure}[t]{0.32\linewidth}
\includegraphics[width=\linewidth]{figures/code_l2}
\subcaption{{CGL}~level 2}
\end{subfigure}}
\scalebox{0.85}{\begin{subfigure}[t]{0.32\linewidth}
\includegraphics[width=\linewidth]{figures/code_l3}
\subcaption{{CGL}~level 3}
\end{subfigure}}
\caption{Code embeddings in 3 levels learned by GRAM, Timeline, and {CGL}. Colors correspond to disease types in each level.}
\label{fig:code_emb}
\end{figure}
\paragraph{Disease embeddings.}
To show the similarity of diseases, we plot the learned 4795 code embeddings $\mathbf{H}_c$ using t-SNE \cite{maaten2008visualizing}. \figurename{~\ref{fig:code_emb}} shows the embeddings learned by GRAM, Timeline, and {CGL}~in 3 levels. Colors denote different disease types in each level. In \figurename{~\ref{fig:code_emb}}, the disease embeddings learned by GRAM and {{CGL}} are basically clustered according to their real categories, while the embeddings learned by Timeline appear randomly distributed. In the plot of GRAM, the clusters are far apart (large inter-cluster distances), while nodes within a cluster are tightly packed (small intra-cluster distances). We can observe that the embeddings learned by GRAM capture neither distinguishable features of low-level diseases nor the relationships across clusters. Therefore, we can infer that learning proper representations that reflect disease hierarchical structures and correlations is helpful for predictions.
\paragraph{Contribution of notes.}
We compare the proposed TF-IDF-rectified attention weights with regular attention weights to verify whether the model focuses on important words. Table~\ref{tab:contrib_note} demonstrates an example with a part of a note and predicted diagnoses. In this example, the patient is diagnosed with 33 diseases, and {CGL}~predicts 10 of them correctly among the top 20 predicted codes. Important words with high $\alpha'$ values are highlighted in pink. We first observe that the pink words are relevant to the diagnoses. In addition, we notice that the rectified attention weights are more semantically interpretable. For example, ``acute'' and ``HCAP'' (health care-associated pneumonia) get higher weights with the rectified attention loss. Meanwhile, we show the unimportant words with low $\alpha'$ values in gray. We observe that our model detects unimportant words that contribute less. For example, ``patient'' and ``diagnosis'' are regarded as unimportant words but are not captured as such by the regular attention mechanism. Therefore, we may conclude that the TF-IDF-rectified attention method improves the accuracy of interpretations using clinical notes.
\newcommand{\cbx}[2]{\setlength{\fboxsep}{0.5pt}\colorbox{#1}{#2}}
\begin{table}
\small
\centering
\scalebox{0.75}{\begin{tabular}{p{0.41\linewidth}|p{0.41\linewidth}|p{0.36\linewidth}}
\toprule
\textbf{Without penalty} & \textbf{With penalty} &\textbf{ Correct Predictions} \\\midrule
... Patient had fairly acute \cbx{pink}{decompensation} of \cbx{lightgray}{respiratory} status today with \cbx{pink}{hypoxia} and \cbx{pink}{hypercarbia} associated with \cbx{pink}{hypertension} ... Differential diagnosis includes \cbx{lightgray}{flash} \cbx{pink}{pulmonary} \cbx{pink}{edema}~and acute exacerbation of CHF vs aspiration vs infection (HCAP) ... Acuity suggests possible \cbx{lightgray}{flash} \cbx{pink}{pulmonary} \cbx{pink}{edema} vs aspiration ...
& ... \cbx{lightgray}{Patient} had fairly \cbx{pink}{acute} \cbx{pink}{decompensation} of \cbx{pink}{respiratory} status today with \cbx{pink}{hypoxia} and \cbx{pink}{hypercarbia} associated with \cbx{pink}{hypertension} ... Differential \cbx{lightgray}{diagnosis} includes flash \cbx{pink}{pulmonary} \cbx{pink}{edema} and \cbx{pink}{acute} exacerbation of CHF vs aspiration vs infection (\cbx{pink}{HCAP}) ... Acuity suggests possible flash \cbx{pink}{pulmonary} \cbx{pink}{edema} vs aspiration ...
& {\begin{itemize}[leftmargin=*]
\item {Hypertensive chronic kidney disease}
\item {Acute respiratory failure}
\item {Congestive heart failure}
\item {Diabetes}
\item ...
\end{itemize}} \\
\bottomrule
\end{tabular}}
\caption{An example of word contributions without/with the TF-IDF rectified penalty. The pink/gray color denotes high/low attention weights. }
\label{tab:contrib_note}
\end{table}
\subsection{Tensor Train Decomposition}
\label{sec:scheme-tt}
Let us dwell on the concepts and the properties of the Tensor Train format.
A $d$-way tensor $\ty \in \set{R}^{N_1 \times N_2 \times \ldots \times N_d}$ is said to be in the TT-format~\cite{oseledets2011tensor}, if its elements are represented by the following formula
\begin{multline}\label{eq:tt-repr-tns}
\ty [n_1, n_2, \ldots, n_d]
=
\sum_{r_1=1}^{R_1}
\sum_{r_2=1}^{R_2}
\cdots
\sum_{r_{d-1}=1}^{R_{d-1}}
\\
\tg_1 [1, n_1, r_1]
\;
\tg_2 [r_1, n_2, r_2]
\;
\ldots
\\
\tg_{d-1} [r_{d-2}, n_{d-1}, r_{d-1}]
\;
\tg_d [r_{d-1}, n_d, 1],
\end{multline}
where $(n_1, n_2, \ldots, n_d)$ is a multi-index ($n_i = 1, 2, \ldots, N_i$ for $i = 1, 2, \ldots, d$), integers $R_{0}, R_{1}, \ldots, R_{d}$ (with convention $R_{0} = R_{d} = 1$) are named TT-ranks, and three-dimensional tensors $\tg_i \in \set{R}^{R_{i-1} \times N_i \times R_i}$ ($i = 1, 2, \ldots, d$) are named TT-cores.
The TT-decomposition~\eqref{eq:tt-repr-tns} makes it possible to represent a tensor or a discretized multivariable function in a compact and descriptive low-parameter form, which is linear in dimension~$d$ (see the illustration in \cref{fig:tt-element}), i.\,e., it has fewer than $d \cdot \max_{i=1,\ldots,d}(N_i R_i^2)$ parameters.
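Evaluating a single element via Eq.~\eqref{eq:tt-repr-tns} reduces to a chain of small matrix products over the TT-cores; a minimal NumPy sketch on a rank-1 example:

```python
import numpy as np

def tt_element(cores, idx):
    """Evaluate one element of a tensor given in the TT-format.

    cores: list of TT-cores G_i of shape (R_{i-1}, N_i, R_i), with R_0 = R_d = 1.
    idx:   multi-index (n_1, ..., n_d), zero-based.
    """
    v = np.ones((1, 1))
    for G, n in zip(cores, idx):
        v = v @ G[:, n, :]  # chain of small matrix products
    return v[0, 0]

# Rank-1 example: the tensor y[n1, n2, n3] = n1 * n2 * n3.
cores = [np.arange(4, dtype=float).reshape(1, 4, 1) for _ in range(3)]
print(tt_element(cores, (2, 3, 1)))  # 6.0
```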
Many useful algorithms (e.\,g., element-wise addition,
multiplication,
solution of linear systems,
convolution,
integration, etc.)
have effective implementations for tensors given in the TT representation.
The complexity of these algorithms turns out to be polynomial in dimension and mode size if the TT-ranks are bounded.
It makes TT-decomposition extremely popular in a wide range of applications, including computational mathematics and machine learning.
A detailed description of the TT-format and related algorithms are given in works~\cite{oseledets2011tensor, cichocki2016tensor}.
\subsection{Discrete Optimization and Gradient Free Methods}
\label{sec:related-gradientfree}
In recent years, machine learning algorithms have achieved impressive results in various applications.
An important part of these algorithms is the optimization procedure.
In many situations, the problem-specific target function is not differentiable, too complex, or its gradients are not helpful due to the non-convex nature of the problem~\cite{kolda2003ds, ALARIE2021100011}, and standard well-known gradient-based methods cannot be applied directly.
The examples include hyper-parameter selection during the training of neural models, training neural networks with discrete weights or with non-differentiable loss functions, policy optimization in reinforcement learning, and many other optimization problems in complex scenarios.
In all these contexts, efficient direct gradient-free optimization procedures are highly needed.
In the case of high dimensional optimization, evolutionary strategies (ES)~\cite{doerr2021survey} are one of the most advanced methods of black-box optimization.
This approach aims to optimize the parameters of the search distribution, typically a multidimensional Gaussian, to maximize the objective function.
Finite difference schemes are commonly used to approximate gradients of the parameters of the search distribution. Numerous works proposed techniques to improve the convergence of ES~\cite{Nesterov2017}, for example, second-order natural gradient~\cite{JMLR:v15:wierstra14a}, or the history of recent updates (Covariance Matrix Adaptation Evolution Strategy; CMA-ES)~\cite{Hansen2006}, or even surrogate gradients~\cite{pmlr-v97-maheswaranathan19a} may be used to generate updates.
There is a large variety of other heuristic methods for finding the global extremum.
In particular, we note such popular approaches as NoisyBandit~\cite{scarlett2017lower}, Particle Swarm Optimization (PSO)~\cite{kennedy1995particle}, Simultaneous Perturbation Stochastic Approximation (SPSA)~\cite{maryak2001global}, Differential Evolution (DE)~\cite{Storn1997-DE} and scrambled-Hammersley (scr-Hammersley)~\cite{hammersley1960monte}.
\subsection{Tensor Based Optimization Methods}
\label{sec:related-tensor}
Recently, TT-decomposition has been actively used for the optimization of multidimensional arrays and multivariable functions.
An iterative method TTOpt based on the maximum volume approach is proposed in the work~\cite{sozykin2022ttopt}.
TTOpt relies on a theorem stating that the maximum-modulus element of a maximum-volume submatrix (i.\,e., the submatrix whose determinant has the maximum modulus) is sufficiently close to the maximum-modulus element of the tensor.
Based on this observation, tensor elements are sampled by analogy with the TT-cross method from specially selected successive unfoldings of the tensor, and then the search for the optimum is carried out among the elements of the obtained maximum volume submatrices.
To be able to find the minimum element, dynamic mapping of the tensor elements is carried out, which converts the minimum values into maximum ones.
The authors applied this approach to the problem of optimizing the weights of neural networks in the framework of reinforcement learning problems in~\cite{sozykin2022ttopt} and to the QUBO problem in~\cite{nikitin2022quantum}.
A similar optimization approach was also considered in~\cite{selvanayagam2022global} with practical applications of the method for optimizing the housings of electronic devices and in~\cite{shetty2022tensor} for optimizing the movement in space of robotic arms.
A new algorithm Optima-TT based on the probabilistic sampling from the TT-tensor was proposed in recent work~\cite{chertkov2022optimization}.
This approach makes it possible to obtain a very accurate approximation for the optimum of the given TT-tensor within the framework of successive tensor multiplication of TT-cores with an intelligent selection of potential candidates for the optimum.
However, we note that this method is intended for directly optimizing the TT-tensors, which means that its success strongly depends on the quality of the approximation of the original multidimensional data array.
For this purpose, one of the approximation methods in the TT-format (TT-SVD, TT-ALS, TT-cross, etc.) should be used.
We note that the dependence of the quality of the optimization result on the accuracy of the tensor approximation has not been studied for this method.
\subsection{List of Baselines}
\label{sec:related-baselines}
Taking into account the analysis of discrete optimization methods, as baselines we consider two tensor based optimization methods: TTOpt\footnote{
We used implementation of the method from \url{https://github.com/AndreiChertkov/ttopt}.
} (BS1) and Optima-TT\footnote{
We used implementation from \url{https://github.com/AndreiChertkov/teneva}.
The TT-tensor for optimization was generated by TT-cross method.
} (BS2), and five popular gradient free optimization algorithms from the nevergrad framework~\cite{Nevergrad}\footnote{
See \url{https://github.com/facebookresearch/nevergrad}
}: OnePlusOne (BS3), PSO (BS4), NoisyBandit (BS5), SPSA (BS6) and Portfolio approach (BS7), which is based on the combination of CMA-ES, DE and scr-Hammersley methods.
\subsection{Multivariable Analytic Functions}
\label{sec:exp-analytic}
First, we consider the optimization task for various tensors
arising from the discretization of multivariable analytic functions.
We consider $10$ popular benchmarks: Ackley (P-01), Alpine (P-02), Exponential (P-03), Grienwank (P-04), Michalewicz (P-05), Piston\footnote{
This function corresponds to the practical problem of modeling the time that takes a piston to complete one cycle within a cylinder; the description of related parameters and their ranges can be found in \cite{zankin2018gradient, chertkov2022black}.
} (P-06), Qing (P-07), Rastrigin (P-08), Schaffer (P-09) and Schwefel (P-10).
These functions have a complex landscape and are often used in problems of evaluating the effectiveness of optimization algorithms~\cite{dieterich2012empirical,jamil2013literature}.
This set of functions is also well suited for the study of tensor based methods, since the functions have a varying low-rank structure~\cite{chertkov2022black, strossner2022approximation}.
We consider the 7-dimensional case (since this is the dimension of the Piston function) and discretization on a uniform grid with $16$ nodes.
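For instance, discretizing the Ackley benchmark on a uniform grid turns the continuous problem into a tensor minimization; the sketch below uses $d = 3$ (instead of $d = 7$) so that exhaustive search over the $16^d$ grid nodes is feasible:

```python
import numpy as np
from itertools import product

def ackley(x, a=20.0, b=0.2, c=2 * np.pi):
    """d-dimensional Ackley function; its global minimum is 0 at x = 0."""
    d = len(x)
    return (-a * np.exp(-b * np.sqrt(np.sum(x ** 2) / d))
            - np.exp(np.sum(np.cos(c * x)) / d) + a + np.e)

# Uniform grid with 16 nodes per mode on the standard domain; d = 3 here so
# that brute force over all 16**3 tensor elements is possible.
grid = np.linspace(-32.768, 32.768, 16)
best = min(ackley(np.array(p)) for p in product(grid, repeat=3))
print(best > 0)  # True: the grid misses the exact optimum at the origin
```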
As follows from the results presented in \cref{tbl:results} (benchmarks P-01, P-02, \ldots, P-10), our method, like the two other tensor-based approaches (BS1 and BS2), gave the most accurate solution for all model problems.
The most sophisticated approach from the nevergrad package (BS7) turned out to be the next most accurate (it failed to converge in only two cases out of ten).
\subsection{Quadratic Unconstrained Binary Optimization}
\label{sec:exp-qubo}
Quadratic Unconstrained Binary Optimization (QUBO)~\cite{glover2022quantum} is a widely known NP-hard problem which unifies a rich variety of combinatorial optimization problems from finance and economics applications to machine learning and quantum computing.
Constraints can be embedded into the QUBO formulation in a very natural manner using penalty functions, yielding exact model representations in contrast to the approximate representations produced by customary uses of penalty functions.
The standard QUBO problem can be formulated as follows:
\begin{equation}
\func{F}(\vect{x}) =
\vect{x}^T \matr{Q} \vect{x}
\rightarrow
\min\limits_{\vect{x}},
\quad
\textit{ s.t. }
\vect{x} \in \{ 0, 1\}^d,
\end{equation}
where $\vect{x}$ is a vector of binary decision variables of the length $d$ and $\matr{Q} \in \set{R}^{d \times d}$ is a square matrix of constants.
In all experiments we select dimension $d = 50$.
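A toy illustration of the QUBO objective; the $3 \times 3$ matrix $\matr{Q}$ is invented for the example, and exhaustive search is only possible because $d$ is tiny (for $d = 50$, the $2^{50}$ candidates rule it out):

```python
import numpy as np
from itertools import product

def qubo(Q, x):
    """QUBO objective x^T Q x for a binary vector x."""
    return x @ Q @ x

# Tiny instance (d = 3), solved by enumerating all 2**3 binary vectors.
Q = np.array([[-1.0, 2.0, 0.0],
              [0.0, -1.0, 2.0],
              [0.0, 0.0, -1.0]])
best = min(product([0, 1], repeat=3), key=lambda x: qubo(Q, np.array(x)))
print(best, qubo(Q, np.array(best)))  # (1, 0, 1) -2.0
```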
We consider the following QUBO problems from qubogen\footnote{
See \url{https://github.com/tamuhey/qubogen}
} package: the Max-Cut Problem (P-11; search for a partition of an undirected graph into two sets such that the number of edges between the two sets is as large as possible), the Minimum Vertex Cover Problem (P-12; search for a subset of the graph vertices of minimum size such that each edge in the graph is incident to at least one vertex in the subset) and the Quadratic Knapsack Problem (P-13; search for a subset of maximum profit that satisfies the budget limitations from a set of potential projects with specified interactions between pairs of projects).
We also considered one more benchmark (P-14) from the work~\cite{dong2021phase} (problem $k_3$; $d = 50$), where the angle-modulated bat algorithm (AMBA) was proposed for high-dimensional binary optimization problems, with an engineering application to antenna topology optimization.
This is the ordinary binary knapsack problem with fixed (randomly generated) weights $w_i \in [5, 20]$, profits $p_i \in [50, 100]$ ($i = 1, 2, \ldots, d$) and the maximum capacity $C = 1000$.
As can be seen from \cref{tbl:results}, the proposed method PROTES gives the best result for all four considered problems (P-11, P-12, P-13, P-14).
The baseline BS7 again turned out to be the next most accurate.
We note that the exact solution for P-14 is provided in~\cite{dong2021phase} ($-3103$).
Several methods, including the one proposed by the authors, were compared in~\cite{dong2021phase} for this problem: BPSO ($-2854$), BBA ($-2976$), AMBA ($-2956$), A-AMBA ($-2961$), P-AMBA ($-2989$).
Thus, only PROTES converges to the global optimum for this problem.
To demonstrate the convergence behavior of different methods depending on the number of performed requests to the objective function, we present the corresponding graph\footnote{
For clarity of demonstration, the values of the objective function on the graph are inverted.
} on \cref{fig:deps}.
\begin{figure}[t!]
\vskip 0.2in
\centering
\includegraphics
[scale=0.42]
{figures/dep.png}
\caption{
Convergence of optimization methods depending on the number of requests to the objective function.
}
\label{fig:deps}
\vskip -0.2in
\end{figure}
\subsection{Optimal Control}
\label{sec:exp-control}
\def\iseq#1{#1(0), \,#1(1), \,\ldots, \,#1(T)}
Suppose we have a state variable $x \in \mathbb{R}$ controlled by a \textbf{binary} variable $i$ called \textbf{control} (i.\,e., a switch with modes ``off'' = 0 and ``on'' = 1) over a discrete time interval $[0, T]$. The state evolves according to $\dot{x}(t) = f(x(t), i(t))$, where the function $f$ is called an \textbf{equation function}; hence the state at time $t+1$ is determined by the previous state $x(t)$ and the control $i(t)$. The \textbf{Optimal Control} problem is to find a sequence of controls $\textbf{i}^* = [\iseq{i^*}]$ over the time interval $[0, T]$ that minimizes a given function $F$. This sequence is called an \textbf{optimal solution}, and the function $F$ is called an \textbf{objective function}. Any other sequence $\textbf{i} = [\iseq i]$ we will call simply a \textbf{solution}.
Formulating the problem mathematically, we need to find such a solution
\begin{equation}
F(\textbf{x}, \textbf{i}) \rightarrow
\min\limits_{\textbf{x}, \textbf{i}}
, \quad
\text{s.t.} \;\;
\left\{ \begin{array}{l}
x(0) = x_0, \\
\dot{x}(t) = f(x(t), i(t)), \\
i(t) \in \{0, 1\}
\end{array} \right.
\label{optimal-control}
\end{equation}
where $\textbf{x} = [x(0), x(1), \dots, x(T)]$ is the state variable path.
We assume that function~$f$ is nonlinear.
In this case, finding an optimal solution raises a lot of difficulties.
Note that we can easily reformulate the above-described problem as finding the minimum of a tensor. Since we consider only binary solution vectors $\textbf{i} \in \{0, 1\}^{T}$, there are only $2^T$ possible variants and, hence, only $2^T$ possible values of the objective $F(\textbf{x}, \textbf{i}) = F(\textbf{x}(\textbf{i}), \textbf{i}) = \hat{F}(\textbf{i})$. This means that we can represent the set of all solutions in the form of a tensor:
\begin{equation*}
\mathbf{F}[\textbf{i}] =
\mathbf{F}[i_0, i_1, \dots, i_{T-1}] =
\hat{F}(\textbf{i})
\end{equation*}
The optimal solution $\textbf{i}^*$ is then nothing more than the multi-index of the minimum of this tensor.
In the numerical experiments we consider
the nonlinear equation function $f(x, i) = x^3 - i$
and set the objective function $F$ as
\begin{equation*}
F(\textbf{x}, \textbf{i}) = \frac{1}{2}\sum\limits^{T}_{t=0} (x(t) - x_{\text{ref}})^2
\end{equation*}
The initial and the reference state are fixed at values $x_0 = 0.8, \ x_{\text{ref}} = 0.7$.
For $T$, we considered several values, such as $25, 50$ and $100$ (benchmarks P-15, P-16 and P-17 respectively).
As follows from the results presented in \cref{tbl:results}, our method gave the most accurate solution in all three cases.
\subsection{Optimal Control with Constraints}
\label{sec:exp-control_constr}
As mentioned earlier, some conditions or constraints may be imposed on the solution.
There are two types of constraints. The first must be satisfied by the solution $\textbf{i}$ alone (we therefore call it a \textbf{control constraint}); it is usually given as a predicate $P(\textbf{i}) \in \{\text{True}, \text{False}\}$.
The second type comprises constraints of the form $h(x(t), i(t)) \, \forall t$, called \textbf{path constraints}. For example, we can require that $x(t)$ does not exceed some given value $M$ at each step. Thus, the second type differs from the first in that it depends not only on the control, but also on the state variable.
We only consider constraints of the first type and define $P$ as follows:
\textit{once switched on, the control variable must keep the value 1 for at least 3 consecutive time steps}.
Formally, this can be written as follows:
\begin{equation*}
P =
\left\{
\textbf{i} \in \{0, 1\}^{T} \ \bigg| \
\begin{aligned}
& i(t) \geq i(t-1) - i(t-2) \\
& i(t) \geq i(t-1) - i(t-3)
\end{aligned}
\right\}
\end{equation*}
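The predicate $P$ above can be checked directly. A minimal sketch (our own helper, not from the paper's code) follows; it assumes that controls before $t = 0$ are read as $0$, i.e. the switch starts in the ``off'' state.

```python
def satisfies_P(i_seq):
    """Check i(t) >= i(t-1) - i(t-2) and i(t) >= i(t-1) - i(t-3) for every t.
    Out-of-range indices (t < 0) are read as 0: the control starts switched off."""
    def at(t):
        return i_seq[t] if t >= 0 else 0
    for t in range(len(i_seq)):
        if at(t) < at(t - 1) - at(t - 2):
            return False
        if at(t) < at(t - 1) - at(t - 3):
            return False
    return True
```

Under this convention, any run of 1's that ends before the horizon must have length at least 3 (a run truncated by the end of the horizon cannot be rejected, since its continuation is unobserved).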
However, not all solutions satisfying the condition $P$ are feasible when the equation function is $f(x, i) = x^3 - i$, since for some values of $x(t), i(t)$ reachable during the solution process, the next state $x(t+1)$ may not be found.
In this case, in the numerical experiments, the objective function returns a very large number as a penalty.
To account for this condition in our algorithm,
we construct the initial distribution in the form of an indicator tensor
as described in the implementation details of the algorithm; the details of this construction are given in the Appendix.
The numerical results for $T=25$ (P-18), $T=50$ (P-19) and $T=100$ (P-20) are presented in \cref{tbl:results}. Only our method successfully found the local optimum in all three cases.
\section*{Acknowledgements}
The work was supported by the Analytical center under the RF Government (subsidy agreement 000000D730321P5Q0002, Grant No. 70-2021-00145 02.11.2021).
\fi
\unless\ifdefined\isaccepted
\fi
\section{Introduction}
\label{sec:intro}
\input{content/3_intro.tex}
\section{Method}
\label{sec:scheme}
\input{content/4_scheme.tex}
\section{Related Work and Baselines}
\label{sec:related}
\input{content/5_related.tex}
\section{Numerical Experiments}
\label{sec:exp}
\input{content/6_exp.tex}
\section{Conclusions}
\label{sec:concl}
\input{content/7_concl.tex}
\subsection{Proposed Optimization Method}
\input{content/4-2 scheme_method.tex}
\subsection{Tensor Train Decomposition}
\label{sec:scheme-tt}
\input{content/4-1 scheme_tt.tex}
\fi
\unless\ifdefined\isaccepted
\fi
\subsection{Discrete Optimization and Gradient Free Methods}
\label{sec:related-gradientfree}
\input{content/5-1 related_gradientfree.tex}
\subsection{Tensor Based Optimization Methods}
\label{sec:related-tensor}
\input{content/5-2 related_tensor.tex}
\subsection{List of Baselines}
\label{sec:related-baselines}
\input{content/5-3 related_baselines.tex}
\subsection{Multivariable Analytic Functions}
\label{sec:exp-analytic}
\input{content/6-1 exp_analytic.tex}
\subsection{Quadratic Unconstrained Binary Optimization}
\label{sec:exp-qubo}
\input{content/6-2 exp_qubo.tex}
\subsection{Optimal Control}
\label{sec:exp-control}
\input{content/6-3 exp_control.tex}
\subsection{Optimal Control with Constraints}
\label{sec:exp-control_constr}
\input{content/6-4 exp_control_constr.tex}
\section{Introduction}
The standard model (SM) of particle physics is considered to be an effective theory,
even though it has succeeded in describing the experimental data available
up to now. Supersymmetry (SUSY) between bosons and fermions at the unification-energy
scale is believed to be one of the most promising extensions of the SM. Among
the supersymmetric theories, the minimal supersymmetric extension of the SM (MSSM)
is a well-studied framework of SUSY because of its compactness.
In the MSSM, however, there are many unfixed parameters. A promising approach for limiting
the possible region of the MSSM parameters is the so-called ``natural SUSY''.
In this framework, a light stop with a large A-term and light higgsinos provide a solution
to the fine-tuning problems of the MSSM\cite{Kitano:2006}. We regard the use of experimental results
as the top priority for limiting the MSSM parameters, and investigate the possibility of survival of a light stop.
Recently, we have found the possible region of the MSSM parameters\cite{Kouda:2016} within the bounds from
the experimental results of (i) the Higgs mass\cite{HiggsMass:2015}, (ii) the branching ratio,
$\mathrm{Br}(b \rightarrow s \gamma)$\cite{Amhis:2014hma}, (iii) the muon $g-2$\cite{Hoecker:2014},
(iv) the dark matter (DM) abundance\cite{Adam:2015rua,Ade:2015xua}, and
(v) the direct searches for the lighter stop at the LHC\cite{Aad2015,PhysRevD.94.032005,
ATLAS-CONF-2016-076,ATLAS-CONF-2016-050,ATLAS-CONF-2016-077,CMS-PAS-SUS-16-029}.
In the parameter searches, we use {\tt SuSpect2}\cite{suspect2}, {\tt SUSY-HIT}\cite{susyhit}, and
{\tt micrOMEGAs}\cite{micromega} for (i)--(iv).
Moreover, we have studied indirect searches for SUSY particles at the ILC by using benchmark points
within the possible region of the MSSM parameters\cite{Kouda:2016,Kouda:2017,Kon:2017}. We have calculated
the 1-loop effects of SUSY particles in the processes, $e^+ e^- \rightarrow \tau^+ \tau^-$,
$e^+ e^- \rightarrow b \overline{b}$, $e^+ e^- \rightarrow t \overline{t}$\cite{hollik}, and
$e^+ e^- \rightarrow Z h$\cite{Cao:2014rma,Heinemeyer:2015qbu}
with the aid of {\tt GRACE/SUSY-loop}\cite{gracesusyloop}.
\section{MSSM parameter searches}
Our criterion for searching the MSSM parameters is that the MSSM effects are within the bounds from the
following experimental results:
\begin{enumerate}
\item the Higgs mass, $m_h$,
\item the branching ratio of the rare decay mode of $b$-quark, $\mathrm{Br}(b \rightarrow s \gamma)$,
\item the muon anomalous magnetic moment, $a_{\mu}$,
\item the DM abundance, $\mathnormal{\Omega}_{\mathrm{DM}} h^2$,
\item the lower bounds of the lighter stop mass, $m_{\tilde{t}_1}$ in the direct searches at the LHC.
\end{enumerate}
We have performed systematic multidimensional scans in the entire range of the MSSM parameter
space\cite{Ichikawa:2016}, but we present only the results which are consistent with the five observables above.
\subsection{Higgs mass}
The experimental value of the Higgs mass is measured\cite{HiggsMass:2015} as
\begin{equation}
m_{h}^{\mathrm{exp}} = 125.09 \pm 0.21 \pm 0.11~\mathrm{GeV}\quad . \label{Hm}
\end{equation}
The MSSM contribution to the Higgs mass mainly depends on the masses of stops, $m_{\tilde{t}_1}$,
$m_{\tilde{t}_2}$, the A-term, $A_t$, the higgsino mass parameter, $\mu$, and
$\tan \beta$\cite{Haber:1991PhysRevLett,Okada:1991PTP,Ellis:1991,Haber:1996fp,Carena:2000}.
An alternative parameter, $X_t$, is useful due to the relation among the parameters,
\begin{equation}
X_t \equiv \frac{m_{\tilde{t}_2}^2 - m_{\tilde{t}_1}^2}{m_t} = 2(A_t - \mu \cot \beta)\quad .
\end{equation}
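As a quick numerical consistency check of this relation (our own sketch; the parameter values $A_t = -2.4~\mathrm{TeV}$, $\mu = 600~\mathrm{GeV}$, and $\tan\beta = 30$ are the ones quoted in the text):

```python
def x_t(a_t, mu, tan_beta):
    """X_t = 2 * (A_t - mu * cot(beta)); all mass parameters in GeV."""
    return 2.0 * (a_t - mu / tan_beta)

# Values quoted in the text: A_t = -2.4 TeV, mu = 600 GeV, tan(beta) = 30
xt = x_t(-2400.0, 600.0, 30.0)  # -4840.0 GeV
```

The result, $X_t \approx -4.8~\mathrm{TeV}$, indeed falls inside the band $X_t = -5~\mathrm{TeV}$ to $-2~\mathrm{TeV}$ of the light stop scenario.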
Figure~\ref{Fig:xtmst1} shows the possible contours on the $X_t$--$m_{\tilde{t}_1}$ plane
which are consistent with \eqref{Hm} for $\tan \beta = 30$ and $\mu = 600~ \mathrm{GeV}$.
The reason for the large value of $\mu$ here is explained in the next subsection.
For the allowed mass of the lighter stop, $m_{\tilde{t}_1}$, there are two possibilities as follows:
\begin{enumerate}
\item the light stop scenario, $m_{\tilde{t}_1} \lesssim 1.5~ \mathrm{TeV}$
with $X_t = -5~ \mathrm{TeV~ to~} -2~ \mathrm{TeV}$,
\item the heavy stop scenario, $m_{\tilde{t}_1} > 1.5~ \mathrm{TeV}$
with $X_t \gtrsim -4~ \mathrm{TeV}$.
\end{enumerate}
\begin{figure}[htb]
\centering
\includegraphics[width=0.7\columnwidth]{Fig1.eps}
\caption{The possible contours on the $X_t$--$m_{\tilde{t}_1}$ plane plotted by green crosses.}
\label{Fig:xtmst1}
\end{figure}
\subsection{Rare decay mode of $\boldsymbol{b}$-quark}
At any allowed point in Figure~\ref{Fig:xtmst1}, the value of $A_t$ is severely restricted by the Higgs mass.
Figure~\ref{Fig:Ldetmu} shows the dependence of the Higgs mass, $m_h$ on $A_t$. Thus, we obtain a solution,
$A_t = -2.4~ \mathrm{TeV}$ for $m_h = 125~ \mathrm{GeV}$.
The MSSM parameters are also constrained by the experimental value of the branching ratio\cite{Amhis:2014hma},
\begin{equation}
\mathrm{Br}(B \rightarrow X_s \gamma) = (3.43 \pm 0.21 \pm 0.07)\times 10^{-4}\quad .
\end{equation}
Figure~\ref{Fig:Rdetmu} shows the dependence of the branching ratio, $\mathrm{Br}(b \rightarrow s \gamma)$
on $A_t$. Thus, we obtain a constraint, $\mu > 0.5~ \mathrm{TeV}$ for
$A_t = -2.4~ \mathrm{TeV}$.
\begin{figure}[bht]
\centering
\begin{subfigure}{0.4\columnwidth}
\centering
\includegraphics[width=\columnwidth]{Fig2a.eps}
\caption{The dependence of the Higgs mass, $m_h$ on $A_t$. The cyan circle indicates a solution.}
\label{Fig:Ldetmu}
\end{subfigure}
\hspace{2mm}
\begin{subfigure}{0.4\columnwidth}
\centering
\includegraphics[width=\columnwidth]{Fig2b.eps}
\caption{The dependence of the branching ratio, $\mathrm{Br}(b \rightarrow s \gamma)$ on $A_t$. The allowed
region is between the two horizontal lines.}
\label{Fig:Rdetmu}
\end{subfigure}
\caption{The constraints on the MSSM parameters, $A_t$ and $\mu$ in the light stop scenario.}
\label{Fig:detmu}
\end{figure}
\clearpage
\subsection{Muon $\boldsymbol{g-2}$}
The muon anomalous magnetic moment, $a_{\mu}\equiv (g_{\mu} - 2)/2$, has been accurately measured,
which restricts the MSSM contribution\cite{wang:2015,Endo:2014274}. The experimental value,
$a_{\mu}^{\mathrm{exp}}$, the SM prediction, $a_{\mu}^{\mathrm{SM}}$, and the difference,
$\Delta a_\mu = a_{\mu}^{\mathrm{exp}} - a_{\mu}^{\mathrm{SM}}$\cite{Hoecker:2014} are
\begin{equation}
\begin{split}
a_{\mu}^{\mathrm{exp}} &= (1165920.91 \pm 0.54 \pm 0.33)\times 10^{-9}\quad ,\\
a_{\mu}^{\mathrm{SM}} &= (1165918.03 \pm 0.01 \pm 0.42 \pm 0.26)\times 10^{-9}\quad ,\\
\Delta a_\mu &= (2.88 \pm 0.63 \pm 0.49)\times 10^{-9}\quad .
\end{split}
\end{equation}
The MSSM contribution to $a_{\mu}$ depends on the slepton mass, $m_{\tilde{l}_L}$, $\tan \beta$, $\mu$, the bino mass, $M_1$, and
the wino mass, $M_2$\cite{Cho:2011}.
\begin{figure}[htb]
\centering
\includegraphics[width=0.5\columnwidth]{Fig3.eps}
\caption{The constraint of the muon $g-2$ for $\tan \beta = 30$,
$\mu = 600~ \mathrm{GeV}$, and $M_2 = 2M_1$. The allowed region is surrounded by the light green lines.
}
\label{Fig:gminus2}
\end{figure}
Figure~\ref{Fig:gminus2} shows the allowed region on the $m_{\tilde{l}_L}$--$M_1$ plane for $\tan \beta = 30$,
$\mu = 600~ \mathrm{GeV}$, and $M_2 = 2M_1$. It is remarkable that the bino mass is constrained as $M_1\lesssim 360~ \mathrm{GeV}$, so the LSP is bino-like since $M_1 < \mu$.
Three red points numbered 1 to 3 are the benchmark points which are defined in
section 2.5\cite{Kouda:2016,Kouda:2017,Kon:2017}.
\subsection{DM abundance}
The lightest SUSY particle (LSP) is a promising candidate for the DM. We consider the lightest neutralino,
$\tilde{\chi}_1^0$ as the LSP. The DM abundance can be explained by the LSP-pair annihilation\cite{Ellis:1984453},
the stau--LSP co-annihilation\cite{Ellis:1998367}, and the stop--LSP co-annihilation\cite{Ellis:2014,Ibarra:2015}.
There are three scenarios as in Table~\ref{Scenarios} by combining the light or the heavy stop with the mass
degeneracy of stau--LSP or stop--LSP. If the masses of the stau, the stop, and the LSP are degenerate, both co-annihilation
processes occur.
\begin{table}[htb]
\caption{Three scenarios for the stau mass and the stop mass.}
\label{Scenarios}
\begin{center}
\begin{tabular}{|c|c|c|} \hline
& Light stop & Heavy stop \\ \hline
$m_{\tilde{\tau}_1} \cong m_{\tilde{\chi}_1^0}$ & stau--LSP co-annihilation & stau--LSP co-annihilation \\ \hline
$m_{\tilde{t}_1} \cong m_{\tilde{\chi}_1^0}$ & stop--LSP co-annihilation & --- \\ \hline
\end{tabular}
\end{center}
\end{table}
The observed value of the DM abundance\cite{Adam:2015rua,Ade:2015xua},
\begin{equation}
\mathnormal{\Omega}_{\mathrm{DM}}^{\mathrm{exp}} h^2 = 0.1188 \pm 0.0010\quad , \label{EO}
\end{equation}
restricts the MSSM parameters within the narrow region.
Figure~\ref{Fig:StopDM} shows the projection of the MSSM parameter scan on the
$m_{\tilde{t}_R}$--$\mathnormal{\Omega}_{\mathrm{DM}} h^2$ plane. The data are plotted for two values of
the stau mass, $m_{\tilde{\tau}_R}$.
\begin{figure}[htb]
\centering
\includegraphics[width=0.65\columnwidth]{Fig4.eps}
\caption{The dependence of the DM abundance on the stop and the stau mass. The allowed region is between
the two horizontal lines.}
\label{Fig:StopDM}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[width=0.5\columnwidth]{Fig5.eps}
\caption{The stop mass and the stau mass consistent with the DM abundance in the light stop scenario
for $M_1 = 220~ \mathrm{GeV}$.}
\label{Fig:StopStau}
\end{figure}
Figure~\ref{Fig:StopStau} shows the projection of the MSSM parameter scan on the
$m_{\tilde{t}_1}$--$m_{\tilde{\tau}_1}$ plane which is consistent with \eqref{EO} in the light stop scenario for
$M_1 = 220~ \mathrm{GeV}$. The data within the red ellipse are consistent with the five observables as mentioned
in the opening paragraph of section 2. In the allowed region, the lighter stop mass and the lighter stau mass are
restricted as $m_{\tilde{t}_1} = 300~\mathrm{to}~ 400~ \mathrm{GeV}$ and
$m_{\tilde{\tau}_1} = 225~\mathrm{to}~ 235~ \mathrm{GeV}$. These mass ranges depend on $M_1$. There is,
however, still room for the heavy stop scenario if $m_{\tilde{\tau}_1} \cong m_{\tilde{\chi}_1^0}$.
\subsection{Direct searches at the LHC}
In the direct searches of SUSY particles at the LHC, there has been no evidence of SUSY particle production
up to now. We, however, still have a possible region of the MSSM parameters which has not been excluded
by the experiments.
Figure~\ref{Fig:spec} shows the mass spectra for the benchmark points which we take from the possible region of
the MSSM parameters corresponding to three scenarios in Table~\ref{Scenarios}. The MSSM parameter sets
named set1 and set3 are for the light stop scenario, and that named set2 is for the heavy stop scenario.
\begin{figure}[htb]
\centering
\begin{subfigure}{0.32\columnwidth}
\centering
\includegraphics[width=\columnwidth]{Fig6a.eps}
\caption{The mass spectrum for set1.}
\label{Fig:set1}
\end{subfigure}
\begin{subfigure}{0.32\columnwidth}
\centering
\includegraphics[width=\columnwidth]{Fig6b.eps}
\caption{The mass spectrum for set2.}
\label{Fig:set2}
\end{subfigure}
\begin{subfigure}{0.32\columnwidth}
\centering
\includegraphics[width=\columnwidth]{Fig6c.eps}
\caption{The mass spectrum for set3.}
\label{Fig:set3}
\end{subfigure}
\caption{The mass spectra of the SUSY particles for three parameter sets.}
\label{Fig:spec}
\end{figure}
Figure~\ref{Fig:LHC} shows the mass bounds of the lighter stop, $\tilde{t}_1$ from the direct searches at the
LHC\cite{Aad2015,PhysRevD.94.032005,ATLAS-CONF-2016-076,ATLAS-CONF-2016-050,ATLAS-CONF-2016-077,
CMS-PAS-SUS-16-029}. The two benchmark points plotted in the figure are located outside of the excluded regions.
Thus we find that two kinds of the light stop scenarios and the heavy stop scenario still survive.
\begin{figure}[hbt]
\centering
\begin{subfigure}{0.45\columnwidth}
\centering
\includegraphics[width=\columnwidth]{Fig7a.eps}
\caption{The lighter stop searches in the ATLAS experiments\cite{Aad2015,PhysRevD.94.032005,
ATLAS-CONF-2016-076,ATLAS-CONF-2016-050,ATLAS-CONF-2016-077}.}
\label{Fig:ATLAS}
\end{subfigure}
\hspace{2mm}
\begin{subfigure}{0.45\columnwidth}
\centering
\includegraphics[width=\columnwidth]{Fig7b.eps}
\caption{The lighter stop searches in the CMS experiments\cite{CMS-PAS-SUS-16-029}.}
\label{Fig:CMS}
\end{subfigure}
\caption{The lighter stop searches at the LHC. The gray regions are excluded by experiments. The blue star and
the red one indicate the benchmark points, set1 and set3, respectively. The benchmark point, set2 is out
of the ranges of the graphs because of the heavy stop mass, $m_{\tilde{t}_1} = 1799~ \mathrm{GeV}.$
}
\label{Fig:LHC}
\end{figure}
\section{Numerical results}
For indirect searches of SUSY particles, SUSY effects are revealed through 1-loop corrections. However,
the SM also contributes at the 1-loop level, so the following quantities are useful\cite{hollik}.
\begin{align}
\delta_{\mathrm{susy}}^{\mathrm{EW}} & \equiv \frac{d \sigma_{\mathrm{1-loop}}^{\mathrm{MSSM, EW}}
- d \sigma_{\mathrm{1-loop}}^{\mathrm{SM, EW}}}{d \sigma_{\mathrm{tree}}}\quad ,\\
\delta_{\mathrm{susy}}^{\mathrm{QCD}} & \equiv \frac{d\sigma_{\mathrm{1-loop}}^{\mathrm{MSSM, QCD}}
- d \sigma_{\mathrm{1-loop}}^{\mathrm{SM, QCD}}}{d \sigma_{\mathrm{tree}}}\quad ,\\
\delta_{\mathrm{susy}} & \equiv \delta_{\mathrm{susy}}^{\mathrm{EW}} + \delta_{\mathrm{susy}}^{\mathrm{QCD}}
\quad .
\end{align}
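In practice these quantities are assembled from the differential cross sections computed by {\tt GRACE/SUSY-loop}; schematically (our own sketch, with invented placeholder numbers rather than real cross sections):

```python
def delta_susy(d_tree, d_mssm_ew, d_sm_ew, d_mssm_qcd, d_sm_qcd):
    """delta_susy = delta_EW + delta_QCD, each MSSM-minus-SM 1-loop difference
    normalized by the tree-level differential cross section."""
    delta_ew = (d_mssm_ew - d_sm_ew) / d_tree
    delta_qcd = (d_mssm_qcd - d_sm_qcd) / d_tree
    return delta_ew + delta_qcd

# Placeholder differential cross sections (illustrative numbers only):
delta = delta_susy(d_tree=1.0, d_mssm_ew=1.05, d_sm_ew=1.02,
                   d_mssm_qcd=1.01, d_sm_qcd=1.0)  # 0.03 + 0.01 = 0.04
```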
\begin{figure}[hbt]
\centering
\includegraphics[width=0.38\columnwidth]{Fig8.eps}
\caption{The dependence of the tree level cross sections on $E_{\mathrm{cm}}$.}
\label{Fig:tree}
\end{figure}
For the indirect searches of SUSY particles, we consider the following processes:
\begin{align*}
(\mathrm{i})~ & e^+ + e^- \rightarrow \tau^+ + \tau^- \qquad& \mathrm{at}~ E_{\mathrm{cm}} = 250~ \mathrm{GeV}
\quad , & \hspace{145pt}~ \\
(\mathrm{ii})~ & e^+ + e^- \rightarrow b + \overline{b} \qquad & \mathrm{at}~ E_{\mathrm{cm}} = 250~ \mathrm{GeV}
\quad , & \hspace{145pt}~ \\
(\mathrm{iii})~ & e^+ + e^- \rightarrow t + \overline{t} \qquad & \mathrm{at}~ E_{\mathrm{cm}} = 500~ \mathrm{GeV}
\quad , & \hspace{145pt}~ \\
(\mathrm{iv})~ & e^+ + e^- \rightarrow Z + h \qquad & \mathrm{at}~ E_{\mathrm{cm}} = 250~ \mathrm{GeV}\quad ,
& \hspace{145pt}~
\end{align*}
where the center-of-mass energy, $E_{\mathrm{cm}}$ is determined by the tree level cross sections as in
Figure~\ref{Fig:tree} and by the technical design of the ILC\cite{ilcd}.
Figure~\ref{Fig:TAUtau} and Figure~\ref{Fig:bB} show the angular distributions of $\delta_{\mathrm{susy}}$
in the $\tau^+ \tau^-$-pair production process and the $b \overline{b}$-pair production one, respectively.
It is remarkable that the SUSY effects can be revealed as an excess over the SM estimation at the 1-loop level,
because the value of $\delta_{\mathrm{susy}}$ is larger than the expected error bars. For the integrated
luminosity $\int L dt = 250~\mathrm{fb^{-1}}$, however, the parameter set of the light stop scenario, set1, cannot be
discriminated from that of the heavy stop scenario, set2, because of the mass relations,
$m_{\tilde{\tau}_1} \cong m_{\tilde{\chi}_1^0}$ and $m_{\tilde{b}_1} \gg E_{\mathrm{cm}}$ in both parameter sets.
\begin{figure}[hbt]
\centering
\begin{subfigure}{0.45\columnwidth}
\centering
\includegraphics[width=\columnwidth]{Fig9a.eps}
\caption{The angular distribution of $\delta_{\mathrm{susy}}$ in the $\tau^+ \tau^-$-pair production process.}
\label{Fig:TAUtau}
\end{subfigure}
\hspace{2mm}
\begin{subfigure}{0.45\columnwidth}
\centering
\includegraphics[width=\columnwidth]{Fig9b.eps}
\caption{The angular distribution of $\delta_{\mathrm{susy}}$ in the $b \overline{b}$-pair production process.}
\label{Fig:bB}
\end{subfigure}
\caption{The indirect searches of SUSY particles for the parameter sets, set1 and set2 at
$E_{\mathrm{cm}} = 250~ \mathrm{GeV}$. In (a), statistical errors are shown only for the results of set1
where we assumed that the integrated luminosity is $250 \mathrm{fb^{-1}}$.}
\label{Fig:Ecm250}
\end{figure}
Figure~\ref{Fig:tT500} shows the angular distribution of $\delta_{\mathrm{susy}}$ in the $t \overline{t}$-pair production
process. There is a small difference between the $\delta_{\mathrm{susy}}$'s of the two parameter sets in the forward direction of the top quark.
\clearpage
\begin{figure}[htb]
\centering
\includegraphics[width=0.43\columnwidth]{Fig10.eps}
\caption{The angular distribution of $\delta_{\mathrm{susy}}$ in the $t \overline{t}$-pair production process for
the parameter sets, set1 and set2 at $E_{\mathrm{cm}} = 500~ \mathrm{GeV}$. Statistical errors are shown only
for the results of set1 where we assumed that the integrated luminosity is $500 \mathrm{fb^{-1}}$,
but are negligibly small.}
\label{Fig:tT500}
\end{figure}
\begin{figure}[hbt]
\centering
\includegraphics[width=0.43\columnwidth]{Fig11.eps}
\caption{The angular distribution of $\delta_{\mathrm{susy}}$ in the $Z h$ production process for the parameter sets,
set1 and set2 at $E_{\mathrm{cm}} = 250~ \mathrm{GeV}$. Statistical errors are shown only
for the results of set1 where we assumed that the integrated luminosity is $250 \mathrm{fb^{-1}}$.}
\label{Fig:Zh250}
\end{figure}
\clearpage
Figure~\ref{Fig:Zh250} shows the angular distribution of $\delta_{\mathrm{susy}}$ in the $Z h$ production process.
There is a difference between the $\delta_{\mathrm{susy}}$'s of the two parameter sets which is larger than
the error bars. It is remarkable that the 1-loop corrections of this process are sensitive to the lighter stop mass, $m_{\tilde{t}_1}$.
\section{Summary}
We have obtained possible MSSM parameter sets which are consistent with the bounds from the experimental
results of the Higgs mass, the rare decay mode of $b$-quark, the muon $g-2$, the dark matter abundance, and the direct searches for the lighter stop at the LHC.
For these parameter sets, the virtual effects of the MSSM in the indirect search processes are estimated at $1\%$--$5\%$
in the early stage of the ILC, which is larger than the error bars. For discriminating between the light and the heavy stop scenarios, $Z h$ production is the most promising process to investigate.
\section{Entanglement entropy}
Suppose we are interested in the physics of events localized in a region $A$ of the space. The Hilbert space of states can be decomposed accordingly as a tensor product $
{\cal H}={\cal H}_{A}\otimes {\cal H}_{-A}$ where ${\cal H}_{A}$ and ${\cal H}_{-A}$ are the spaces of states localized in $A$ and in the complementary region $-A$ respectively. Take the vacuum (the ground state) as the global state $\left| \Psi \right>$ of the system, with density matrix $
\rho_0= \left| \Psi \right> \left< \Psi \right|$.
The state $\rho_A$ relevant to the algebra of operators acting on ${\cal H}_{A}$
follows from the partial trace of $\rho_0$ over the Hilbert space ${\cal H}_{-A}$. That is, we obtain the local reduced density matrix
\begin{equation}
\rho_{A}=\textrm{tr}_{{\cal H}_{-A}}\left| \Psi \right> \left< \Psi \right|\,.
\end{equation}
The global state $\rho_0$ is generally entangled in the bipartite system ${\cal H}_{A}\otimes {\cal H}_{-A}$, and in consequence this matrix is a mixed (non-pure) state for the local region.
The corresponding entropy
\begin{equation}
S(A)=-\textrm{tr}(\rho_A\log \rho_A)\,
\end{equation}
is usually called entanglement or geometric entropy.
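These definitions can be made concrete in a finite-dimensional toy model. The sketch below (our own illustration, using {\tt numpy}) writes a pure state on ${\cal H}_{A}\otimes {\cal H}_{-A}$ as a matrix, obtains $\rho_A$ by a partial trace, and evaluates the von Neumann entropy:

```python
import numpy as np

def entanglement_entropy(psi, dim_A, dim_B):
    """Von Neumann entropy of rho_A = tr_B |psi><psi| for a pure state psi on H_A (x) H_B."""
    M = psi.reshape(dim_A, dim_B)     # |psi> = sum_{a,b} M[a, b] |a>|b>
    rho_A = M @ M.conj().T            # partial trace over H_B
    evals = np.linalg.eigvalsh(rho_A)
    evals = evals[evals > 1e-12]      # discard numerical zeros
    return float(-np.sum(evals * np.log(evals)))

# Maximally entangled qubit pair (|00> + |11>) / sqrt(2):
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)
S = entanglement_entropy(bell, 2, 2)  # log(2)
```

A product state gives $S = 0$, while the maximally entangled pair gives $S = \log 2$, the maximum for a qubit.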
The entanglement entropy is one of the most prominent candidates to explain the intriguing entropy of black holes \cite{bombelli}. However, in this proposal the role of quantum gravity is fundamental to produce a finite entropy, and the whole subject is still controversial. In a series of recent papers a definite conjecture for the meaning of the entanglement entropy in the context of the AdS-CFT duality has been given \cite{ryu}.
This and other measures of entanglement have also been extensively studied in condensed matter and low dimensional systems, partially motivated by advances in quantum information theory and the density matrix renormalization group method. As a result, it was uncovered that a variety of phenomena have an interesting correlation with the entanglement properties of ground states. This includes quantum phase transitions \cite{phase}, topological order \cite{topological}, and, as will be reviewed in this paper, the renormalization group irreversibility in two dimensions \cite{ch1,rg}.
\section{Entanglement entropy and quantum field theory}
From the point of view of quantum field theory (QFT) the entanglement entropy $S(A)$ can be considered as a non-local variable which is definable for any theory, regardless of the field content. In this sense it can be identified as a natural tool to investigate general properties of QFT. In fact, the entanglement entropy satisfies very remarkable non-perturbative relations (see for example \cite{h1})
\begin{eqnarray}
S(A)=S(-A)\,,\\
S(A)+S(B)\ge S(A\cup B) +S(A\cap B)\,.
\end{eqnarray}
The first one follows from the purity of the vacuum state, while the second, called strong subadditivity (SSA), is a general and very important property of the entropy for multipartite systems.
From a technical point of view, the entanglement entropy $S(A)$ is proportional to the variation of the Euclidean free energy with respect to the introduction of a small conical singularity at the boundary of $A$ \cite{conical,conformal}. Explicitly we have
\begin{equation}
S(A)=\lim_{\alpha\rightarrow 1} \left(1-\frac{d}{d\alpha}\right) \log Z(A,\alpha),\label{alpha}
\end{equation}
where $Z(A,\alpha)$ is the Euclidean partition function of the theory on a space with conical singularity of angle $2\pi \alpha$ at the boundary of $A$.
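In a finite-dimensional setting, (\ref{alpha}) reduces to the replica formula with $Z(\alpha) = \textrm{tr}\,\rho^\alpha$. The sketch below (our own illustration; the $\alpha$-derivative is approximated by a central finite difference) reproduces the von Neumann entropy of a normalized density matrix:

```python
import numpy as np

def replica_entropy(rho, eps=1e-5):
    """S = lim_{alpha -> 1} (1 - d/dalpha) log tr rho^alpha, via a finite difference."""
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]                  # spectrum of rho, numerical zeros dropped
    def logZ(alpha):
        return np.log(np.sum(p ** alpha))
    dlogZ = (logZ(1.0 + eps) - logZ(1.0 - eps)) / (2.0 * eps)
    return logZ(1.0) - dlogZ          # (1 - d/dalpha) log Z at alpha = 1

rho = np.diag([0.7, 0.3])
S_replica = replica_entropy(rho)      # approximates the exact von Neumann entropy
S_exact = -0.7 * np.log(0.7) - 0.3 * np.log(0.3)
```

For a normalized $\rho$ one has $\log Z(1) = 0$, so the formula collapses to $-\frac{d}{d\alpha}\log \textrm{tr}\,\rho^\alpha$ at $\alpha = 1$, which equals $-\textrm{tr}(\rho\log\rho)$.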
The entanglement entropy in the continuum theory is ultraviolet divergent. However, it has a nice geometrical structure of divergences. In $d$ spatial dimensions we have \cite{ch2}
\begin{equation}
S(V)=g_{d-1}[\partial V]\,\epsilon^{-(d-1)}+...+ g_1[\partial V]\,\epsilon^{-1} + g_0[\partial V]\,\log (\epsilon)+ S_0(V)\,, \label{div}
\end{equation}
where $S_0(V)$ is a finite part, $\epsilon$ is a short distance cutoff, and the $g_i$ are local and extensive functions on the boundary $\partial V$, which are homogeneous of degree $i$. The coefficient $g_{d-1}[\partial V]$ of the leading divergent term is then proportional to the $(d-1)$-th power of the size of $V$, and this is usually referred to as the area law for the entanglement entropy \cite{srednicki}. However, $g_i$ for $i> 0$ depends on the regularization procedure, and $g_{d-1}$ is not proportional to the area if the latter is not rotationally invariant. These terms are not physical within QFT since they are not related to continuum quantities.
Universal quantities are, however, present in $S(A)$. In particular, the mutual information
\begin{equation}
I(A,B)=S(A)+S(B)-S(A\cup B)
\end{equation}
between two non-intersecting regions is universal, since the boundary terms get subtracted away \cite{ch1}. In fact, a general theorem based on the strong subadditivity property and Lorentz covariance \cite{h1} establishes that if the entanglement entropy is finite, the mutual information is identically zero for a relativistic QFT. The presence of a non-zero mutual information implies that the entropy cannot be made finite through contributions coming from the physics of the high energy sectors (within QFT).
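As with the entropy itself, the mutual information can be evaluated explicitly in a toy model. The following self-contained sketch (our own, using {\tt numpy}) computes $I(A,B)$ for two single qubits of a three-qubit GHZ state, for which $I(A,B) = \log 2$:

```python
import numpy as np

def subsystem_entropy(psi, dims, keep):
    """Entropy of the reduced state of the factors listed in `keep` (pure global state psi)."""
    n = len(dims)
    psi = np.asarray(psi).reshape(dims)
    keep = sorted(keep)
    rest = [k for k in range(n) if k not in keep]
    M = np.transpose(psi, keep + rest).reshape(
        int(np.prod([dims[k] for k in keep])), -1)
    rho = M @ M.conj().T              # partial trace over the `rest` factors
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log(ev)))

def mutual_information(psi, dims, A, B):
    """I(A, B) = S(A) + S(B) - S(A u B) for non-intersecting index sets A, B."""
    return (subsystem_entropy(psi, dims, A) + subsystem_entropy(psi, dims, B)
            - subsystem_entropy(psi, dims, A + B))

# GHZ state (|000> + |111>) / sqrt(2) on three qubits
ghz = np.zeros(8)
ghz[0] = ghz[7] = 1.0 / np.sqrt(2.0)
I_AB = mutual_information(ghz, (2, 2, 2), [0], [1])   # log(2)
```

Here $S(A) = S(B) = S(A\cup B) = \log 2$, so $I(A,B) = \log 2$; for a product state the mutual information vanishes.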
\section{The c-theorem}
The statement of the c-theorem in $1+1$ dimensions \cite{zamo} can be expressed in two equivalent ways:
\bigskip
\noindent (a) There is a universal dimensionless function of the couplings $c(\{\lambda_i \})$ in the theories space in $1+1$ dimensions which is non-increasing along the renormalization group trajectories and stationary at the fixed points, where it takes a finite value proportional to the Virasoro central charge $C_V$.
\bigskip
\noindent (b) For any theory in $1+1$ dimensions there is a universal dimensionless function of a distance $c(r)$ which is non-increasing under dilatations and takes a finite value proportional to $C_V$ at the fixed points.
\bigskip
An interesting interpretation which follows from formulation (a) is that the c-function measures some kind of entropy related to the information which is lost during the renormalization group transformations. However, if we take this too literally, we could conclude that the theorem should also be valid outside the domain of unitary and relativistic theories, where it actually does not hold. In contrast, formulation (b) does not refer to the renormalization group and shows the c-theorem as a property of the continuum theory, regardless of the procedure used to define it.
Zamolodchikov's proof of the c-theorem is given in the Euclidean formulation of two-dimensional QFT, and involves the covariance of the energy-momentum tensor correlators plus the reflection positivity property. Here we show a very simple alternative proof based on the strong subadditivity property of the entanglement entropy and Lorentz invariance in a real space-time formulation \cite{ch1}.
Consider two intervals of lengths $b$ and $c$, boosted relative to each other and located as shown in figure 1. The causal domains of dependence of these intervals (causal shadows) have an intersection and a causal union given by the domains corresponding to the two intervals of lengths $a$ and $d$. These sets appear on the smaller side of the relativistic version of the strong subadditive inequality between $b$ and $c$ \cite{h1}. Then, we have\footnote{To be explicit, this relation is just the standard SSA inequality on the (non time-like) surface $c_1\cup a \cup b_1$ (see figure 1), which gives $S(c_1\cup a) +S(a\cup b_1)\ge S(a)+S(c_1\cup a\cup b_1)$, followed by the identification of $S(c_1\cup a)=S(c)$, $S(a\cup b_1)=S(b)$, and $S(c_1\cup a\cup b_1)=S(d)$, which is a consequence of the unitarity of the causal evolution.}
\begin{equation}
S(c)+S(b)\ge S(a)+S(d)\,.\label{sss}
\end{equation}
The relativistic geometry gives the simple relation
\begin{equation}
a\,d=c\,b\,,
\end{equation}
which can be rewritten as
\begin{equation}
c=\lambda a,\hspace{1cm} d=\lambda b,\hspace{1cm}\lambda=\frac{c}{a}=\frac{d}{b}\geq 1 \,.
\end{equation}
Therefore (\ref{sss}) gives
\begin{equation}
S(b)-S(a)\ge S(\lambda b)-S(\lambda a)\,.\label{catorce}
\end{equation}
This means that the difference of entropies $S(b)-S(a)$, with $b\ge a$, is non-increasing under dilatations. It is also dimensionless and universal, since according to (\ref{div}) the divergent term for an interval in two dimensions is a constant (independent of the interval size) proportional to $\log (\epsilon )$.
At the conformal fixed points the entropy can be explicitly evaluated \cite{conformal}, and is given by
\begin{equation}
S(r)=\frac{C_V}{3} \log (r/\epsilon)\,.
\end{equation}
Thus, at the fixed points $S(b)-S(a)=(C_V/3) \log(b/a)\ge 0$. We conclude that $S(b)-S(a)$ is a c-function for any $b > a$.
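As a trivial consistency check of the fixed-point statement, the conformal entropy can be evaluated numerically; the values of $C_V$ and $\epsilon$ in the sketch below are illustrative, not taken from the text:

```python
import math

# Interval entropy at a conformal fixed point, S(r) = (C_V/3) log(r/eps).
# C_V and eps are illustrative values, not taken from the text.
def s_cft(r, c_v=1.0, eps=1e-3):
    return (c_v / 3.0) * math.log(r / eps)

# S(b) - S(a) is cutoff-independent, non-negative for b >= a, and
# invariant under dilatations: the SSA inequality saturates at fixed points.
a, b, lam = 1.0, 2.5, 3.0
d1 = s_cft(b) - s_cft(a)
d2 = s_cft(lam * b) - s_cft(lam * a)
```

The difference depends only on the ratio $b/a$, so it is unchanged by the common rescaling $\lambda$, in agreement with $S(b)-S(a)=(C_V/3)\log(b/a)$.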
\begin{figure}[t]
\centering
\leavevmode
\epsfysize=5cm
\epsfbox{figu.eps}
\caption{Time is the vertical axis, the horizontal axis is the spatial coordinate $x$, and the null lines are drawn at $45^\circ$. The causal domains of dependence (diamond-shaped sets drawn with dashed lines) corresponding to the spatial intervals $b$, $c$ have an intersection given by the domain of dependence of $a$ and a union (followed by causal completion) given by the domain of dependence of $d$.}
\end{figure}
An entropic c-function depending on a single parameter (and containing all the information in $S(b)-S(a)$) can be defined by
\begin{equation}
c(r)=r\frac{dS(r)}{dr},
\end{equation}
which is dimensionless, universal, and positive, and which according to (\ref{catorce}) satisfies
\begin{equation}
c^{\prime}(r)=r\,S^{\prime\prime}(r)+S^{\prime}(r)\le 0 \,.
\end{equation}
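The inequality stated above follows from (\ref{catorce}) by an infinitesimal argument; a sketch:

```latex
% Take b = a + \delta a with \delta a \to 0^+ in (\ref{catorce}):
% S'(a)\,\delta a \ge \lambda\,S'(\lambda a)\,\delta a, i.e. c(r)=r\,S'(r) obeys
\[
c(\lambda a)=\lambda a\,S'(\lambda a)\le a\,S'(a)=c(a)\,,\qquad \lambda\ge 1\,,
\]
% and differentiating with respect to \lambda at \lambda=1 gives
% c'(r)=r\,S''(r)+S'(r)\le 0.
```

At a fixed point $S(r)=(C_V/3)\log(r/\epsilon)$, so $c(r)=C_V/3$ is constant, consistent with stationarity.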
\begin{figure}[t]
\centering
\leavevmode
\epsfysize=5cm
\epsfbox{f2.eps}
\caption{From top to bottom: one third of the Zamolodchikov c-functions for a real scalar and a Dirac field, and entropic c-functions for a Dirac (dashed curve) and a real scalar field (dotted curve).}
\end{figure}
We have computed the entropic $c$-functions for free fermions and bosons numerically on a lattice, and found analytic short- and long-distance expansions using (\ref{alpha}) \cite{ch3}. The analytic results involve sine-Gordon model correlators of exponential operators, which are given in terms of Painlev\'e V differential equations. As shown in figure 2, these are quite different from Zamolodchikov's c-functions. The leading short- and long-distance terms read
\begin{eqnarray}
c_D(t)\sim \frac{1}{3} -\frac{1}{3} t^2 \log^2 (t)\,\,\,\,\, \textrm{for}\,\, t\ll 1\,\,; \hspace{.4cm} c_D(t)\sim \frac{1}{2} t K_1(2 t)\,\,\,\,\, \textrm{for}\,\,t\gg 1\,, \\
c_S(t)\sim \frac{1}{3}+\frac{1}{2 \log (t)}\,\,\,\,\, \textrm{for}\,\,t\ll1\,\,; \hspace{.4cm} c_S(t) \sim \frac{1}{4} t K_1(2 t)\,\,\,\,\, \textrm{for}\,\,t\gg 1\,,
\end{eqnarray}
where $t=m r$, $m$ is the field mass, and $c_D(t)$ and $c_S(t)$ are the entropic c-functions corresponding to a free Dirac and a real scalar field respectively.
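As a numerical illustration of the quoted long-distance tails (not the lattice computation of \cite{ch3}), $K_1$ can be evaluated from its integral representation; the quadrature below is a rough sketch:

```python
import math

def bessel_k1(x, upper=12.0, n=20000):
    """Modified Bessel function K_1 via the integral representation
    K_1(x) = int_0^inf exp(-x cosh t) cosh t dt (trapezoidal rule)."""
    h = upper / n
    total = 0.0
    for i in range(n + 1):
        t = i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * math.exp(-x * math.cosh(t)) * math.cosh(t)
    return h * total

def c_dirac_long(t):
    """Long-distance tail of the Dirac entropic c-function, (1/2) t K_1(2t)."""
    return 0.5 * t * bessel_k1(2.0 * t)

def c_scalar_long(t):
    """Long-distance tail of the scalar entropic c-function, (1/4) t K_1(2t)."""
    return 0.25 * t * bessel_k1(2.0 * t)

# Both tails lie below the UV value 1/3, decay monotonically with t = m r,
# and the scalar tail is exactly half of the Dirac one.
tail = [c_dirac_long(t) for t in (1.0, 2.0, 4.0)]
```

For $t\gg 1$ the tails fall off as $e^{-2t}$, reflecting the mass gap in the correlators.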
\section{Is there an entropic c-theorem in more dimensions?}
Though several authors have made much progress towards extending the c-theorem to higher dimensions using correlators of the energy-momentum tensor, no definitive result has been obtained so far \cite{sofar}. Thus, it is tempting to try the present method in more dimensions, exploiting the Lorentz symmetry and the strong subadditive inequality for the geometric entropy. However, one has to bear in mind that no compelling physical principle guarantees that a version of the theorem has to hold in $d\ge 2$ \cite{frad}.
We have tried different possibilities without success, since several obstacles show up in higher dimensions. What follows is a list of some of them. They may be avoided in de Sitter space, but at the expense of introducing a dimensionful parameter (the space-time radius) which prevents the proof of the theorem.
\noindent {\bf a-.} {\sl The shape problem:} a generalization of the construction in figure 1 requires selecting a specific shape for the set replacing the interval in two dimensions. The first obstacle is then that in $d\ge 2$ the intersections and unions of sets of a given shape are in general of a different shape. Also, the SSA relation between them in general contains divergent terms on each side which do not cancel, due to the new surface features contained in the intersection and the union. This makes it difficult to extract useful information from the SSA relation.
\noindent {\bf b-.} {\sl The commutativity problem:} there are certain shapes for which the previous problem does not appear, for example two rectangles obtained by moving one of them along one of its sides. However, if the rectangles are boosted with respect to each other (this is essential in (\ref{catorce})) there are pieces of the boundary of one of the rectangles which are at a timelike distance from the other: technically they do not commute in the algebra of causal sets, and the SSA relation cannot be used \cite{h1}. We can avoid this problem by using the limit of rectangles with one large side. However, in this case we have to consider quantities intensive with respect to the large side; the corresponding $c$-functions are dimensionful and cannot be used to prove the theorem.
\noindent {\bf c-.} {\sl The order of the divergences:} this is perhaps the deepest problem encountered. Because of its nature, the SSA relation, when used between infinitesimally displaced sets, allows us to obtain inequalities involving only second-order derivatives of the entropy. However, in $d$ spatial dimensions there are general divergent terms of degree up to $d-1$. Thus, the local SSA relation cannot give inequalities for universal quantities if $d\ge 2$.
\noindent {\bf d-.} {\sl Logarithmic terms:} the mutual information $I(A,B)$ is universal, dimensionless, and increasing with the sizes of $A$ and $B$. Thus, $I(A,B)$ is decreasing under dilatations for star-shaped non-intersecting sets $A$ and $B$. However, $I(A,B)$ diverges logarithmically at critical points due to the logarithmic terms induced by the vertices \cite{ch2}.
\bigskip
\section{INTRODUCTION}
\label{s:intro}
Numerous models of pulsars (see Michel 1991 for a review) proposed over the last
three decades have tried to make specific predictions about the emission of
gamma-rays and X-rays. Gamma-rays are particularly important
as a direct signature of
basic non-thermal processes in pulsar magnetospheres,
and should potentially help to discriminate among different models. Interpreting
gamma-rays should also be less ambiguous than interpreting X-rays; in
the latter case, especially for objects younger than $10^6$ yr, contributions
of unknown magnitude from initial cooling, internal friction, etc.
may dominate the total X-ray emission.
There are seven positive detections of pulsars by {\it CGRO~} (see Table 1), i.e. less than
one per cent of all pulsars known to date.
In all cases the sources were identified by virtue of gamma-ray flux modulations with
previously known $P$.
Crab and Vela are the only pulsars seen by three of the {\it CGRO~} detectors.
\begin{table*}
\quad\quad\quad\label{t:par}
\caption{Gamma-ray luminosities for pulsars detected with {\it CGRO~}
(in log of $[~{\rm erg}~{\rm s}^{-1} ]$). Beaming angle of emission $\Omega_\gamma = 1 {\rm ~sr}$ was assumed.}
\begin{tabular}{llcccccc}
\hline
PSR & & log $L_{\rm sd}$ & {\it EGRET~} & {\it COMPTEL~} & {\it OSSE~} & Refs. & `total'\\
\hline
B0531$+$21 & Crab & $38.65$ & $34.6$ & $35.0$ & $35.1$ & 1,2,3 & $35.42$ \\
B1509$-$58 & & $37.25$ & -- & -- & $34.4$ & -,-,3 & $34.40$ \\
B0833$-$45 & Vela & $36.84$ & $34.2$ & $33.5$ & $31.3$ & 1,2,3 & $34.28$ \\
B1951$+$32 & & $36.57$ & $34.1$ & $34.1$ & -- & 1,4,- & $34.40$ \\
B1706$-$44 & & $36.53$ & $34.4$ & -- & -- & 1,-,- & $34.40$ \\
J0633$+$1746 & Geminga$^*$ & $34.51$ & $32.9$ & $+$ & -- & 1,5,- & $32.90$ \\
B1055$-$52 & & $34.48$ & $33.4$ & -- & -- & 6,-,- & $33.40$ \\
\hline \\
\end{tabular}\\
$^*$at the distance of 157 pc. \\
Refs. to flux values:\\
1) Ramanamurthy et al.(1995),
2) Carrami{\~n}ana et al.(1995),
3) Schroeder et al.(1995),\\
4) Kuiper et al.(1996a),
5) Kuiper et al.(1996b),
6) Fierro (1995)\\
\end{table*}
The {\it EGRET~} data have so far been the only ground for testing theoretical
models. The latest critical review of three models of gamma-ray emission
(the polar cap models of Harding 1981 and Dermer \& Sturner 1994,
and the outer gap model of Yadigaroglu \& Romani 1995)
comes from Nel et al.(1996). They conclude that
none of the models fits the observations satisfactorily, for two major reasons.
First,
the confrontation of polar cap models (Harding 1981, and Dermer \& Sturner 1994)
with the observations leads to a~relation
\begin{equation}
L_\gamma({\rm observ.}) \propto L_\gamma^\alpha({\rm model}),
\label{e1}
\end{equation}
with $\alpha \simeq 0.6$ and 0.5, respectively, instead of $\alpha = 1$.
Second, there have always been some cases among the $\sim 350$ {\it EGRET~} upper limits that apparently
contradict the predictions of $L_\gamma$~made by the models. The troublesome limits usually come from the pulsars
B1509-58,
B1046-58, B0656+14, B1929+10, and B0950+08, as well as from the millisecond object J0437-4715.
Of the two problems raised by Nel et al.(1996), the latter is more severe in our opinion.
The former may be
solved to some degree by updating the parameters used for the two objects in the low-luminosity domain, i.e. B1055-52
and Geminga. After Thompson et al.(1994), Nel et al. used for B1055-52 the spectral index $\gamma = 1.18$
determined by Fierro et al. (1993) from the first three viewing periods of {\it EGRET~}.
However, a substantially higher value, $\gamma = 1.59$, based on the data from 10 viewing periods,
has recently become available (Fierro 1995).
This value of the spectral slope reduces the $L_\gamma$~of B1055-52 by a factor of $\sim 4$.
(We shall discuss other consequences of the steeper spectral
slope for B1055-52 in Section 3.) A further reduction of $L_\gamma$~in the case of B1055-52 is possible by
lowering the distance
$D$ to the source. The argument for lowering the distance (the usually assumed
value is $1.53\,$kpc, after the model of Taylor \& Cordes 1993) may
come from {\it ROSAT~} PSPC observations: if the bulk of the (presumably) thermal
X-ray emission from B1055-52 is due to
initial cooling, then the inferred radius of the neutron star at $D = 1.53\,$kpc exceeds
$30\,$km ({\"O}gelman 1995), a value hardly acceptable from the theoretical point of view.
In the case of Geminga, the inferred luminosity drops by a factor of $\sim 2.8$ relative
to the value used by Nel et al., due to the newly determined HST parallax distance of $157\,$pc
(Caraveo et al. 1996), instead of
$D = 250\,$pc.
Fig. \ref{fig:fig1} presents the inferred {\it EGRET~} luminosities versus the model predictions of Harding (1981),
with the Geminga and B1055-52 points corrected for $D = 157\,$pc and $\gamma = 1.59$, respectively.
Generally,
the agreement between theory and observations looks quite satisfactory, even though
the Crab pulsar visibly deviates from the trend predicted by Harding (1981).
\begin{figure}
\begin{center}
\leavevmode
\epsfxsize=8.4cm \epsfbox{Fig1.eps}
\end{center}
\caption{ Gamma-ray luminosity above $100$ MeV for 6 {\it EGRET~} sources
is plotted against predictions of the polar cap model of Harding (1981).
The normalization factor is arbitrary; $\tau = P/2\dot P$ is a characteristic age expressed in [yr].
(Note: The Crab pulsar is the first dot from the right.) Bars indicate uncertainties in $L_\gamma$~
arising from uncertainties in distance and flux.
In the case of Vela (next to Crab) the lower
distance limit was extended down to 200~pc
to conform to recent estimates
based on X-ray and gamma-ray observations
of the Vela SNR ($400 \pm 200$pc after Aschenbach et al.1995
and $\mathrel{\copy\simlessbox} 350\,$pc after Oberlack et al.1994, respectively).
}
\label{fig:fig1}
\end{figure}
Arons (1996) presented a simple argument against all models which offer a functional relation
for $L_\gamma$~roughly similar to
$L_\gamma \propto B/P^2$, by plotting the voltage ($\sim B/P^2$)
available for
particles against the gamma-ray luminosity $L_\gamma(>100~\rm{MeV})$ for the six {\it EGRET~} pulsars. An extrapolation of
the resulting trend towards
low values of $B/P^2$ leads to $L_\gamma$~exceeding $L_{\rm sd}$~below $\sim 10^{14}\,$Volts. Since $L_\gamma$~must never exceed $L_{\rm sd}$,
this special value of the voltage then defines `an empirical gamma-ray death line'
on the $P-\dot P$ diagram (or on the $Voltage-P$
diagram, as originally presented by Arons). It is hard to reconcile such a~line
with the observed death line for
radio emission, which corresponds to $\sim 10^{12}\,$Volts. The effect was already known earlier,
when the polar cap models of Buccheri et al.(1978) and Harding (1981) were introduced.
In the latter model, for a pulsar with $\dot P \approx 10^{-15}$s~s$^{-1}$, $L_\gamma$~would reach $L_{\rm sd}$~
at a characteristic age of $3 \times 10^7$ years (Harding 1981).
Below we propose a polar cap model
which is free of the two problems discussed above. This model reproduces the gamma-ray luminosities
inferred for the seven
observed pulsars. At the same time
it avoids the problem of the empirical gamma-ray
death line of Arons (1996),
and it relaxes the upper-limit constraints of
Nel et al. (1996), especially for old classical pulsars and
millisecond pulsars. In Section 2 we start by recalling the model for the total power contained
in outflowing particles, which refers directly to the relation analysed by Arons (1996). We then present arguments
for summing all available {\it CGRO~} data, in order to get a better starting point for a modified polar cap model
than the {\it EGRET~} data alone. Section 3 contains a description of our model and its relation
to the existing information on gamma-rays from pulsars. A summary and comments are given in
Section 4, which also contains a ranking of the pulsars with the highest gamma-ray fluxes resulting
from our model.
\section{SIMPLE MODELS VS. {\it CGRO} DATA}
\label{s:section2}
Though {\it EGRET~} has been far more successful than the other
{\it CGRO~} instruments in detecting pulsars,
this does not
mean that pulsars' gamma-ray emission above 100~MeV
dominates energetically over the emission from lower energy bands.
In the case of the Crab pulsar most of the energy output occurs within
the {\it COMPTEL~} and {\it OSSE~} energy ranges (e.g. Fierro 1995).
In the extreme case of B1509-58 there is
only an {\it OSSE~} detection: early positive reports from the {\it COMPTEL~} team (Carrami{\~n}ana
et al. 1995) have not been confirmed (Kuiper 1996), and {\it EGRET~} set only upper
limits for the source (Thompson et al. 1994).
The luminosities listed in
Table~1 were inferred from phase-averaged fluxes, assuming a beaming solid angle
of gamma-ray emission $\Omega_\gamma$ equal to 1 steradian. For the 3 objects (Crab, Vela, and B1951+32) detected with
more than one instrument, we can also construct a sum of the inferred luminosities (ignoring
possible changes of the beaming angle with energy range) to get an estimate of the `total' gamma-ray luminosity.
For the remaining 4 pulsars we use their luminosities in the {\it EGRET~} energy range as `total'.
The values of the `total' gamma-ray luminosities are shown in the last column of Table 1.
Let us now compare these `total' gamma-ray luminosities
with the simplest possible phenomenological polar cap model.
According to this model,
the gamma-ray luminosity $L_\gamma$~is proportional to the power contained in the outflowing primary electrons,
$L_{\rm particles}$,
which in turn is proportional to the product of
the primary electron
energy $E_0$, the surface area
of the canonical polar cap $A_{\rm pc} \approx \pi R_{\rm pc}^2 \propto 1/P$, and the Goldreich-Julian
flux $\dot n_{\rm GJ} \propto B/P$ (Goldreich \& Julian 1969) of outflowing primary electrons:
\begin{equation}
L_\gamma \propto L_{\rm particles} \propto E_0 \cdot \dot n_{\rm GJ} \cdot A_{\rm pc}.
\label{e2}
\end{equation}
Assuming $E_0 = {\rm const.}$ for all objects one obtains
\begin{equation}
L_\gamma \propto L_{\rm particles} \propto B/P^2 \propto L_{\rm sd}^{1/2}.
\label{e2a}
\end{equation}
(Note: The model for $L_{\rm particles}$ from equation (\ref{e2a}) was actually a starting point
in the work of Harding 1981, who assumed $E_0 = 10^{13}$eV and whose objective
was to find a prescription for $L_\gamma(>100~\rm{MeV})$.
We shall return to this point in the next section.)
\begin{figure}
\begin{center}
\leavevmode
\epsfxsize=8.4cm \epsfbox{Fig2.eps}
\end{center}
\caption{`Total' gamma-ray luminosity $L_\gamma$ for all seven {\it CGRO~} sources,
inferred from {\it EGRET~}, {\it COMPTEL~}, and {\it OSSE~}
observations (wherever available - see Table 1.) are compared
with a simple model (equation \ref{e2a}) discussed in Section 2.
(Note: B1951+32 and B1706-44 practically coincide.
The dot lying at the upper right part
of the diagonal represents the Crab pulsar.)
}
\label{fig:fig2}
\end{figure}
Figure \ref{fig:fig2} shows how the relation of equation (\ref{e2a}) compares with the observations.
The normalization factor ($C = 10^{16}$) has been chosen
to obtain the best `by eye' fit. The overall agreement looks quite impressive.
There is a substantial improvement for Crab compared to Figure \ref{fig:fig1}, due to
the significant contributions from the
{\it OSSE~} and {\it COMPTEL~} data.
Moreover, B1509-58 adds up smoothly to the six {\it EGRET~} pulsars. The same functional relation,
but for the {\it EGRET~} points only, was
considered by Arons (1996) (see previous section).
The relation $L_\gamma = 10^{16} L_{\rm sd}^{1/2}[~{\rm erg}~{\rm s}^{-1} ]$ presented in
Fig.\ref{fig:fig2}
cannot hold for all radio pulsars. It formally leads to
$L_\gamma$~reaching
$L_{\rm sd}$~at $10^{32}~{\rm erg}~{\rm s}^{-1} $ (which corresponds to Arons' empirical gamma-ray death line),
whereas pulsars are observed down to $L_{\rm sd}\simeq 10^{30}~{\rm erg}~{\rm s}^{-1} $.
Clearly, pulsar models which predict $L_\gamma$~as a simple
combination of $B$ and $P$ require some revision.
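Both numerical statements above, the proportionality $B/P^2 \propto L_{\rm sd}^{1/2}$ of equation (\ref{e2a}) and the crossing of $L_\gamma$~with $L_{\rm sd}$~at $10^{32}~{\rm erg}~{\rm s}^{-1} $, can be checked in a few lines. In the sketch below the dipole-like normalization $k$ is an illustrative choice, not taken from the text; only the proportionality $L_{\rm sd}\propto B^2/P^4$ matters:

```python
import math

# Schematic dipole spin-down luminosity, L_sd ~ B^2 / P^4 (B in G, P in s).
# The prefactor k is illustrative; it gives roughly Crab-like numbers.
def l_sd(B, P, k=3.85e31):
    return k * (B / 1e12) ** 2 / P ** 4

# For any (B, P), the voltage-like combination B / P^2 tracks sqrt(L_sd):
# the ratio below is the same constant for every sample pulsar.
samples = [(3.8e12, 0.033), (3.4e12, 0.089), (1e12, 0.5), (1e9, 0.005)]
ratios = [(B / P ** 2) / math.sqrt(l_sd(B, P)) for B, P in samples]

# The extrapolated relation L_gamma = 1e16 * sqrt(L_sd) crosses L_gamma = L_sd
# at sqrt(L_sd) = 1e16, i.e. L_sd = 1e32 erg/s (the empirical death line).
l_death = (1e16) ** 2
```

The constancy of the ratio is exact, since $(B/P^2)/\sqrt{L_{\rm sd}}$ depends only on the normalization $k$.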
\section{HOW DO ELECTRON--POSITRON PAIRS CONTRIBUTE TO GAMMA-RAYS}
\label{s:rq}
According to the model of Daugherty \& Harding 1982 (DH82),
primary electrons are
accelerated
along open magnetic field lines
to high energies ($\sim 10^{13}$eV) by the
rotation-induced electric field. The model assumes a dipolar structure of the magnetic field.
Curvature photons emitted by the primary
electrons are absorbed by the magnetic field, with the subsequent creation
of electron-positron pairs (Sturrock pairs). These pairs cool off instantly
via synchrotron radiation. Synchrotron photons may lead to further
pair creation. Electromagnetic cascades propagating
in the pulsar's magnetic field may be very rich, with several
subsequent generations
of pairs and photons.
The numerical treatment of
electromagnetic cascades initiated by primary electrons
above the polar cap and propagating across the magnetosphere
(DH82) does not include the effects of
possible acceleration of the Sturrock pairs. The only contribution from pairs to gamma-rays
taken into account is the synchrotron emission of the created pairs. The pairs themselves
do not accelerate, and consequently do not contribute to the curvature radiation.
Such a simplification is usually justified by arguing that the appearance of conductive plasma
above some height effectively leads to a~screening of the electric field
parallel to the local magnetic field lines. If, however, the density of created pairs is lower
than the local corotation plasma density, the electrons from the pairs will be subject to
further acceleration (whereas the positrons will be decelerated; eventually some of them
will be stopped and reversed towards the stellar surface).
In the context of the polar cap models
developed by Daugherty \& Harding
(1982, 1994, 1996) it became clear that a potential contribution to gamma-rays
from pairs might be necessary to account for the observed gamma-ray fluxes.
If these secondary particles
are indeed subject to effective acceleration at significant altitudes above
the polar cap surface, e.g. at heights of several NS radii (Daugherty \& Harding 1996),
the resulting beaming angles
of gamma-ray emission $\Omega_\gamma$ will be wider than those measured at
the polar cap surface. The requirement for ``nearly aligned rotators'' (Dermer \& Sturner 1994,
Daugherty \& Harding 1994),
necessary to explain the large duty cycles of gamma-ray emission, might then be relaxed.
[Note: Relaxing the assumption about small inclination angles between the rotation
and magnetic axes may be inevitable
on observational grounds. Inclination estimates carried out by Lyne \& Manchester (1988)
and Rankin (1990) show that in many pulsars the inclination angles are indeed large.
However, a mutual comparison of these results shows that they
agree only for small inclination angles ($\mathrel{\copy\simlessbox} 40\deg$), and there is no correlation
if either estimate is larger than this value (Miller \& Hamilton 1993).]
Hereafter we will assume that for all pulsars $\Omega_\gamma = 1 {\rm sr}$
(corresponding to opening angles of $\sim 30$ degrees).
Suppose that the secondary particles, which we assume to
be created as described in the model of DH82, participate in the gamma-ray production
in the same way as the primary electrons do.
In the spirit of equations (\ref{e2}) and (\ref{e2a}), we propose the following prescription for $L_\gamma$:
\begin{equation}
L_\gamma = C \cdot n_{\pm} \cdot E_{\pm} \cdot L_{\rm sd}^{1/2},
\label{e3}
\end{equation}
where $n_{\pm}$ is the number of $e^\pm$~pairs created per primary electron,
and $E_{\pm}$ is the characteristic energy attained by the particles due to acceleration.
The normalization constant $C$ will be determined by fitting the observations.
We will assume
that the $E_{\pm}$ achieved by the secondary particles is similar to the energy attained by
the primary electrons $E_0$, i.e. $E_{\pm} \simeq E_0$.
We begin by making a choice for the value of $E_0$, since it is
this parameter which, along with $B$ and $P$, determines the number
of pairs $n_{\pm}$ created per primary electron.
The analytical fit of Harding (1981) for $L_\gamma$~above 100 MeV, which
gained so much popularity in testing her polar cap model against {\it EGRET~} observations, was obtained
from numerical simulations performed for a fixed primary electron energy
$E_0 = 10^{13}{\rm eV}$. Originally, this energy was chosen to give the best spectral fits
above 100 MeV to the {\it COS-B~} data of Crab and Vela. However, such an assumption about $E_0$ cannot hold
throughout the pulsar's entire lifetime,
because the energy of the outgoing electrons
is subject to a twofold limitation (e.g. Sturrock 1971): there
is an absolute upper limit
\begin{equation}
E_{\rm W} = 1.2 \times 10^7 \, B_{12} P^{-2} \,[~\rm{MeV}]
\end{equation}
($B_{12} = B/10^{12}\,$G, the period $P$ is in [s])
set by the potential drop across the polar cap, and a maximum value
\begin{equation}
E_{\rm max} = 4.6\times 10^7 \, B_{12}^{1/4} P^{-1/8}\, [~\rm{MeV}]
\end{equation}
set by
curvature cooling in a purely dipolar magnetic field.
The energy $E_0$ of the primary electrons must conform to
one of these two limits (whichever comes first) as the pulsar slows down, since
both $E_{\rm W}$ and $E_{\rm max}$ decrease as $P$ increases.
In consequence,
the prescription for $L_\gamma(>100\,~\rm{MeV})$ of Harding (1981) should not be
treated as accurate everywhere.
Moreover, accelerating electrons should cross the threshold
energy
\begin{equation}
E_{\rm min} = 1.2 \times 10^7 \, B_{12}^{-1/3} P^{1/3} [~\rm{MeV}]
\end{equation}
required for pair creation.
Whenever $E_{\rm W}$ falls below
$E_{\rm min}$, a classical pulsar crosses the well-known death line and enters `the graveyard for pulsars'.
Another death line occurs for millisecond pulsars even earlier, when $E_{\rm max}$ falls below
$E_{\rm min}$ (Rudak \& Ritter 1994).
An ultrarelativistic primary electron, sliding along a curved magnetic field line, emits curvature
photons which in turn may be converted into $e^\pm$~pairs. The total number of pairs created per
electron depends
on the local component of the magnetic field
$B$ perpendicular to the
direction of propagation of the photon, on the energy of the electron $E_0$, and on the curvature of the magnetic
field lines.
As the primary electron accelerates, its energy will cross the threshold value
$E_{\rm min}$,
thus triggering the creation of a first pair.
The number of created pairs quickly increases as $E_0$ exceeds $E_{\rm min}$.
At some point
it becomes high enough to make further acceleration of electrons less effective due to screening effects.
Instead of fixing $E_0$ at some specific value, we thus assume that
electrons may be accelerated up to an energy $E_0$ satisfying the following condition
throughout the pulsar's entire lifetime:
\begin{equation}
E_0 = \min \{\zeta \cdot E_{\rm min}, \,E_{\rm W}, \,E_{\rm max}\},
\label{e4}
\end{equation}
where $\zeta > 1$.
The best choice for the value of the parameter $\zeta$ was made a posteriori, so as to
reproduce the empirical trend of $L_\gamma$~for the seven {\it CGRO~} pulsars with similar accuracy
as equation (\ref{e2a}) does in Fig.2. We found that the range $2 \mathrel{\copy\simlessbox} \zeta \mathrel{\copy\simlessbox} 5$ fulfills
this requirement. All results presented below are for $\zeta = 2.5$.
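With the analytical formulae quoted above, the prescription for $E_0$ is easy to tabulate; the sketch below (energies in MeV, $B_{12}=B/10^{12}\,$G, $P$ in seconds, $\zeta=2.5$) uses illustrative parameter values:

```python
def e_w(B12, P):
    """Absolute upper limit from the polar-cap potential drop [MeV]."""
    return 1.2e7 * B12 / P ** 2

def e_max(B12, P):
    """Curvature-cooling limit in a purely dipolar field [MeV]."""
    return 4.6e7 * B12 ** 0.25 * P ** -0.125

def e_min(B12, P):
    """Pair-creation threshold [MeV]."""
    return 1.2e7 * B12 ** (-1.0 / 3.0) * P ** (1.0 / 3.0)

def e_0(B12, P, zeta=2.5):
    """Primary electron energy: E_0 = min{zeta*E_min, E_W, E_max}."""
    return min(zeta * e_min(B12, P), e_w(B12, P), e_max(B12, P))

def p_death(B12):
    """Classical death line E_W = E_min: solving the two power laws
    gives P_death = B12**(4/7) seconds, e.g. 1 s for B12 = 1."""
    return B12 ** (4.0 / 7.0)
```

For Crab-like parameters ($B_{12}\simeq 3.8$, $P=0.033\,$s) the minimum is attained by $\zeta E_{\rm min}\approx 6\times 10^{6}\,$MeV, of the same order as the fixed $E_0=10^{13}\,$eV adopted by Harding (1981).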
For $E_{\rm min}$ we preferred to take the numerically obtained values whenever they differed
from the analytical approximation
(the analytical formulae for $E_{\rm W},\, E_{\rm max}$, and $E_{\rm min}$ are taken
from Rudak \& Ritter 1994).
We found that the former are consistently smaller than the latter,
by a factor of $\sim 1.5$ in most cases.
\begin{figure}
\begin{center}
\leavevmode
\epsfxsize=8.4cm \epsfbox{Fig3.eps}
\end{center}
\caption{`Total' gamma-ray luminosity is plotted against
spin-down luminosity $L_{\rm sd}$~for seven {\it CGRO~} pulsars (filled dots).
Open triangles are the {\it EGRET~} upper limits as given by Nel et al.1996 for 350 objects, including
seven millisecond pulsars.
Filled triangle indicates position of J0437-4715 (after Fierro et al.1995).
The dashed line corresponds to
$L_\gamma = 10^{16} \cdot L_{\rm sd}^{1/2}$ (see Section 2). The upper solid
curve shows the evolutionary track of a pulsar with $B_{\rm pc} = 10^{12}$G, calculated
according to eqs. (\ref{e3}) and (\ref{e4}). The lower solid curve shows the evolutionary track for
a millisecond pulsar with $B_{\rm pc} = 10^9$G.
The evolutionary tracks end up when the objects reach their `death points' in the $L_{\rm sd}$~space
(marked with dotted lines).
}
\label{fig:fig3}
\end{figure}
The development of the cascades was followed by means of the numerical simulations described
in DH82.
Calculations of the number of pairs $n_\pm$ were performed
with these simulations after choosing $P$ and $B$ and setting the primary electron energy
$E_0$ according to equation (\ref{e4}). The normalization constant in our model of $L_\gamma$~
(equation \ref{e3}) was determined
by fitting the numerical results to the seven detections (the last column of Table 1).
We then calculated two evolutionary tracks in the $L_\gamma$~-- $L_{\rm sd}$~space, for representatives of
the classical pulsars (with a typical magnetic field strength $B \sim 10^{12}$G) and
of the millisecond pulsars ($B \sim 10^9$G). Both tracks are shown in Fig.\ref{fig:fig3} as
solid curves. The upper curve ($B \sim 10^{12}$G) starts at $L_{\rm sd} \simeq 10^{39}~{\rm erg}~{\rm s}^{-1} $, near Crab,
and down to $L_{\rm sd} \sim 10^{34}~{\rm erg}~{\rm s}^{-1} $ ~it roughly follows the dashed line, which
depicts the relation $L_\gamma = 10^{16} \cdot L_{\rm sd}^{1/2}$ from Section 2. As $L_{\rm sd}$~decreases,
our exemplary classical pulsar enters a region where the proximity of the pulsar's death line
becomes important. The number of created pairs
$n_\pm$ declines steadily as the pulsar slows down, and it starts to decrease dramatically
when $E_{\rm W}$ falls below $\zeta \, E_{\rm min}$, thus affecting $E_0$ in equation (\ref{e4}).
At the point where
$E_{\rm W} = E_{\rm min}$, the creation of pairs ceases ($n_\pm = 0$) -- the pulsar reaches its death line.
The exemplary millisecond pulsar ($B_{\rm pc} = 10^9$G) follows the lower solid curve in Fig.\ref{fig:fig3}.
It starts at $L_{\rm sd} \simeq 10^{37}~{\rm erg}~{\rm s}^{-1} $, corresponding to an initial period of one millisecond.
Unlike the classical pulsar, it encounters a different death line, at $L_{\rm sd} \sim 10^{33.5}~{\rm erg}~{\rm s}^{-1} $,
due to the equality $E_{\rm max} = E_{\rm min}$.
The efficiency of pair creation for the millisecond
pulsar throughout its lifetime is significantly lower than for the classical pulsar.
The low strength of the magnetic field plays a decisive role here, and it cannot be compensated for by the faster
rotation (and therefore by the smaller curvature radii available).
The filled triangle in Fig.\ref{fig:fig3} indicates the upper limit for J0437-4715
set from {\it EGRET~} observations (after Fierro et al.1995). In addition, 350 {\it EGRET~} upper limits
(Nel et al. 1996),
including seven limits for millisecond pulsars, are shown for comparison.
Apart from J0437-4715, four objects with {\it EGRET~} upper limits only are placed clearly below
the upper evolutionary track. These are
(from right to left): B1046-58, B0656+14, B1929+10, and B0950+08. If the proposed model is correct, these objects
should be the best candidates for detection in gamma-rays (though not necessarily in the
{\it EGRET~} energy range).
B1951+32 and B1509-58, present in the data of Nel et al.~(1996), have been replaced with the detections (filled dots)
by {\it EGRET~} and {\it OSSE~}, respectively.
\begin{figure}
\begin{center}
\leavevmode
\epsfxsize=8.4cm \epsfbox{Fig4.eps}
\end{center}
\caption{
`Total' gamma-ray luminosity is plotted against
predicted luminosity, calculated
according to eqs. (\ref{e3}) and (\ref{e4}), for seven {\it CGRO~} pulsars (filled dots).
Open triangles denote combined {\it OSSE~}, {\it COMPTEL~} and {\it EGRET~} upper limits (wherever available).
The continuous diagonal line corresponds to a perfect agreement between
predictions and observations.
}
\label{fig:fig4}
\end{figure}
The comparison of how our model reproduces $L_\gamma$~ for the seven {\it CGRO~} pulsars, along with combined
upper limits from {\it EGRET~}, {\it COMPTEL~}, and {\it OSSE~}
(from Thompson et al.1994, Fierro et al.1995, Schroeder et al.1995, and Carrami{\~n}ana et al.1995),
wherever available, is shown in Fig.\ref{fig:fig4}.
In the case of Geminga and B1509-58, the model overestimates
the observed $L_\gamma$~ by a factor of $\sim 2$. However, existing
upper limits from {\it COMPTEL~} and {\it EGRET~}, respectively, improve the agreement.
The upper limit for J0437-4715, based on {\it EGRET~} only, is one order of magnitude
above the predicted value of $L_\gamma$. For four other objects (B1046-58, B0656+14, B1929+10, and B0950+08)
the stringent upper limits
from {\it EGRET~} differ from the model predictions by no more than a factor of $\sim 3$.
Moreover,
upper limits from {\it COMPTEL~} on B1046-58 and B1929+10 place them on the safe
side of the diagonal line of perfect agreement in Fig.~\ref{fig:fig4}.
There is no information available about B0950+08 and B0656+14 from any {\it COMPTEL~} observations.
\section{SUMMARY}
We have proposed a semi-phenomenological model of gamma-ray emission from pulsars, which is
based on polar cap activity triggered by
primary electrons. The energy of electrons is only a few times higher than the threshold energy
required to induce pair creation in the presence of a dipolar magnetic field, with other
restrictions applied when necessary.
Electromagnetic cascades induced via curvature radiation
were treated in the same way as described by DH82.
An important ingredient of the model is the assumption that secondary particles, produced
in cascades due to one-photon absorption, contribute to the overall gamma-ray emission
in a similar way to primary electrons.
The model was confronted with the gamma-ray luminosities for seven pulsars inferred from
available data from {\it OSSE~}, {\it COMPTEL~}, and {\it EGRET~} experiments. We find that the model is
consistent with the existing data. Moreover,
the model does not violate the energetics of pulsars with low spin-down luminosity $L_{\rm sd}$: the predicted
gamma-ray luminosity $L_\gamma$~never reaches $L_{\rm sd}$. It therefore avoids the problem
of `an empirical gamma-ray death-line' raised by Arons (1996).
We also used the {\it EGRET~} archive of 350 upper limits
along with {\it OSSE~}, and {\it COMPTEL~} upper limits (published for 15, and 18 pulsars, respectively),
to find likely restrictions on the model. We have used some updates with respect
to the data used by Nel et al.~(1996) in their analysis.
The {\it EGRET~} upper limit for B1951+32 was replaced with
its detections by {\it EGRET~} (Ramanamurthy et al. 1995), and {\it COMPTEL~} (Kuiper et al.1996a).
Similarly, the {\it EGRET~} upper limit for B1509-58 was replaced with {\it OSSE~} detection (Schroeder et al.1995).
In the case of B1929+10 the model distance of 170~pc
was replaced with 250~pc (see Yancopoulos, Hamilton \& Helfand~1994 for detailed arguments).
For a fixed value of
$L_{\rm sd}$, the predicted $L_\gamma$~depends rather weakly on magnetic field strength $B$ as long as
$10^{11}{\rm G} \mathrel{\copy\simlessbox} B \mathrel{\copy\simlessbox} 10^{13}{\rm G}$ (Dyks 1997), especially for pulsars
with $L_{\rm sd} \mathrel{\copy\simgreatbox} 10^{34}~{\rm erg}~{\rm s}^{-1} $. That is why all evolutionary tracks calculated for
high values of $B$ converge roughly
to the asymptotic relation
of equation (\ref{e2a}) for $L_{\rm sd} \mathrel{\copy\simgreatbox} 10^{34}~{\rm erg}~{\rm s}^{-1} $ (dashed line in Fig.3).
Only when $B$ enters the domain of millisecond pulsars
($\sim 10^8 - 10^9\,{\rm G}$) does $L_\gamma$~ drop significantly. Therefore, all millisecond pulsars, including
J0437-4715, are expected to be very weak gamma-ray emitters regardless of their $L_{\rm sd}$, and
their {\it EGRET~} upper limits alone are still one order of magnitude
above our predictions for their gamma-ray luminosities. A qualitatively similar behaviour of millisecond
pulsars, though for different physical reasons,
results from the model of Dermer \& Sturner (1994),
which was applied explicitly to millisecond pulsars
by Sturner \& Dermer (1994). For their luminosity $L_{SD94} = 1.1 \times 10^{10} B^{3/2} P^{-3}~{\rm erg}~{\rm s}^{-1} $
of gamma-rays beamed into a solid angle
$\Omega_\gamma \approx 1.5 \times 10^{-3} P^{-1}$sr
(eqs.2 and 3 of Sturner \& Dermer 1994, respectively), the apparent gamma-ray luminosity
for 1 sr can be expressed as
\begin{equation}
L_\gamma = \Omega_\gamma^{-1}L_{SD94} \approx 1.2 \times 10^9 \, B^{1/2} L_{\rm sd}^{1/2}~{\rm erg}~{\rm s}^{-1} ,
\label{e7}
\end{equation}
and accordingly
for two pulsars with $B = 10^9{\rm G}$ and $10^{12}{\rm G}$
but identical spin-down luminosity
$L_{\rm sd}$~ the former object will be placed below the latter one in a diagram like Fig.3.
The {\it EGRET~} upper limit for J0437-4715 ($\sim 10^{32}~{\rm erg}~{\rm s}^{-1} $) is well above
$1.6 \times 10^{30}~{\rm erg}~{\rm s}^{-1} $ resulting from equation (\ref{e7}).
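As a numerical illustration of equation (\ref{e7}) (a sketch only; the spin-down luminosity used below is a round, assumed value, not a fitted one):

```python
import math

def l_gamma_sd94(B, L_sd):
    """Apparent 1-sr gamma-ray luminosity of eq. (7):
    L_gamma ~ 1.2e9 * B^(1/2) * L_sd^(1/2) erg/s,
    with B in gauss and L_sd in erg/s."""
    return 1.2e9 * math.sqrt(B) * math.sqrt(L_sd)

# Two pulsars with identical spin-down luminosity (illustrative value)
L_sd = 1.0e34                          # erg/s, assumed for illustration
low_B  = l_gamma_sd94(1.0e9,  L_sd)    # millisecond-pulsar field
high_B = l_gamma_sd94(1.0e12, L_sd)    # classical-pulsar field

# The low-B object falls below the high-B one by sqrt(1000) ~ 31.6
print(high_B / low_B)
```

This makes explicit why, in a diagram like Fig.~3, a millisecond pulsar sits well below a classical pulsar of the same $L_{\rm sd}$.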
Out of several objects in the analysis of Nel et al.~(1996) with uncomfortably low {\it EGRET~}
upper limits, which thus contradict the models they discuss, only two remain
a potential threat to our model: B0950+08 and B0656+14.
The {\it EGRET~} limits for these two sources are
too low to be accommodated by the model.
It is encouraging, however, that B0656+14 was reported as a possible
{\it EGRET~} source (Ramanamurthy et al. 1996).
Both pulsars were on the priority list of {\it COMPTEL~}
but with low ranks, and no results are available for them so far. There are no {\it OSSE~} limits
available for B0656+14 either.
B0656+14 definitely deserves more attention as a promising target for gamma-ray experiments below
the energy range of {\it EGRET~}. Its parameters are very similar to those of Geminga and B1055-52.
Moreover, all three pulsars are strong X-ray emitters, and are thought to be the best candidates for
initial cooling ({\"O}gelman 1995, Becker \& Tr{\"u}mper 1997). On the other hand,
the combined upper limits available for B1046-58
and B1929+10 ({\it EGRET~}, {\it COMPTEL~}, and {\it OSSE~} in both cases) do not rule out our model.
The upper limits for energy fluxes adopted from Nel et al.~(1996), and used also in this work,
require a word of comment. They were inferred from upper limits for photon fluxes
under the assumption that
all photon spectra above 100~MeV have a spectral index $\gamma$ obeying a trend derived by
Thompson et al.~(1994) from five {\it EGRET~} pulsars:
\begin{equation}
\gamma = 0.33 \log \tau - 3.08,
\label{e5}
\end{equation}
where the characteristic
age of pulsars $\tau = P/2\dot P$ is expressed in years. The trend is based essentially on
the Crab pulsar ($\gamma=2.16,\, \tau=1.3 \times 10^3$) on one side,
and on B1055-52 ($\gamma=1.18,\, \tau=5.3 \times 10^5$) on the other side. With the new determination
of the spectral slope for B1055-52, $\gamma=1.59$ (Fierro 1995), the prescription
for $\gamma(\tau)$ looks questionable. As a consequence, upper limits for $L_\gamma$~ derived for old pulsars,
especially for millisecond pulsars, might be somewhat tighter. That would put models discussed by
Nel et al.(1996) into even deeper trouble, whereas the model we propose still remains intact.
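The size of the discrepancy is easy to quantify. The sketch below (our own illustration; we take $|\gamma|$ of equation (\ref{e5}) as the positive photon index) evaluates the trend at the characteristic ages quoted above:

```python
import math

def photon_index(tau_yr):
    """Photon index implied by the Thompson et al. (1994) trend,
    eq. (5): gamma = 0.33 log10(tau) - 3.08, with tau in years.
    The absolute value is taken as the (positive) photon index."""
    return abs(0.33 * math.log10(tau_yr) - 3.08)

tau_1055 = 5.3e5                  # characteristic age of B1055-52 [yr]
print(photon_index(tau_1055))     # ~1.19: close to the old slope 1.18,
                                  # but far from the new value 1.59
```

For the Crab's age ($\tau = 1.3 \times 10^3$ yr) the trend gives $|\gamma| \approx 2.05$, illustrating that it is a fit rather than an exact relation even for the anchor points.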
Pulsars from the database
of Taylor, Manchester \& Lyne (1993) extended by Taylor et al. (1995), plus Geminga,
arranged in a traditional ranking,
based just on spin-down fluxes $L_{\rm sd}/D^2$,
start with Crab and five other gamma-ray pulsars. But then, there is a wide gap
(of no gamma-ray detections) before the seventh
gamma-ray pulsar, B1055-52, emerges as No.33.
The gap contains
several millisecond pulsars, with their flagship J0437-4715 occupying a very
high overall position -- No.~7.
Our ranking of top 30 candidates for gamma-ray emission,
arranged by a predicted flux resulting from equations (\ref{e3}) and (\ref{e4}),
\begin{equation}
f_\gamma = C \cdot {n_\pm \cdot E_0 \cdot L_{\rm sd}^{1/2}\over D^2},
\label{e6}
\end{equation}
is presented in Table~2.
The pulsar database of Taylor et al.~(1993, 1995) ordered by $f_\gamma$
now starts with Vela, followed by Crab and Geminga.
B1055-52 advances by 13 positions to No.20. The millisecond
pulsars (from the gap), including J0437-4715, disappear from the list of `Top 30'.
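The ranking step itself amounts to simple bookkeeping; a minimal sketch is shown below (flux values copied from Table~2; with the full model one would rank by $f_\gamma$ of equation (\ref{e6}) in exactly the same way):

```python
# Order candidates by predicted gamma-ray flux.  The L_gamma/D^2 values
# of a few pulsars are taken from Table 2 (units: erg/s/cm^2).
candidates = {
    "B0833-45 (Vela)": 0.1302e-7,
    "B0531+21 (Crab)": 0.8963e-8,
    "Geminga":         0.7308e-8,
    "B0950+08":        0.5646e-9,
    "B1055-52":        0.6346e-10,
}

ranking = sorted(candidates, key=candidates.get, reverse=True)
for i, name in enumerate(ranking, start=1):
    print(i, name)
```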
\newpage
\begin{table}
\label{t:par2}
\caption{Our ranking of top $30$ gamma-ray candidates.
The seven {\it CGRO~} detections are marked with $\gamma$.}
\vskip 1mm
\begin{tabular}{|r|c|c|c|c|c|}
\hline
& PSR B & PSR J & & $D$ & $L_\gamma/D^2$ \\
& & & & [kpc] & $[~{\rm erg}~{\rm s}^{-1}~{\rm cm}^{-2} ]$ \\
\hline
1&0833$-$45 &0835$-$4510 &$\gamma$&0.50& 0.1302E-07 \\
2&0531$+$21 &0534$+$2200 &$\gamma$&2.00& 0.8963E-08 \\
3& &0633$+$1746 &$\gamma$&0.15& 0.7308E-08 \\
4&1706$-$44 &1709$-$4428 &$\gamma$&1.82& 0.6430E-09 \\
5&1929$+$10 &1932$+$1059 &&0.25& 0.5678E-09 \\
6&0950$+$08 &0953$+$0755 &&0.12& 0.5646E-09 \\
7&1951$+$32 &1952$+$3252 &$\gamma$&2.50& 0.4108E-09 \\
8&0656$+$14 &0659$+$1414 &&0.76& 0.2130E-09 \\
9&1509$-$58 &1513$-$5908 &$\gamma$&4.40& 0.2080E-09 \\
10&1046$-$58 &1048$-$5832 &&2.98& 0.1904E-09 \\
11& &2043$+$2740 &&1.13& 0.1653E-09 \\
12&1823$-$13 &1826$-$1334 &&4.12& 0.1239E-09 \\
13&1800$-$21 &1803$-$2137 &&3.94& 0.1193E-09 \\
14&0740$-$28 &0742$-$2822 &&1.89& 0.1143E-09 \\
15&0114$+$58 &0117$+$5914 &&2.14& 0.1136E-09 \\
16&1757$-$24 &1801$-$2451 &&4.61& 0.9188E-10 \\
17& &1908$+$0734 &&0.58& 0.9129E-10 \\
18&1727$-$33 &1730$-$3350 &&4.24& 0.7281E-10 \\
19& &0538$+$2817 &&1.77& 0.6567E-10 \\
20&1055$-$52 &1057$-$5226 &$\gamma$&1.53& 0.6346E-10 \\
21&0823$+$26 &0826$+$2637 &&0.38& 0.6209E-10 \\
22& &1918$+$1541 &&0.68& 0.5512E-10 \\
23&0355$+$54 &0358$+$5413 &&2.07& 0.4791E-10 \\
24&1853$+$01 &1856$+$0113 &&3.30& 0.4525E-10 \\
25&0450$+$55 &0454$+$5543 &&0.79& 0.4274E-10 \\
26&1822$-$09 &1825$-$0935 &&1.01& 0.4079E-10 \\
27&1702$-$19 &1705$-$1906 &&1.18& 0.3806E-10 \\
28&1133$+$16 &1136$+$1551 &&0.27& 0.2123E-10 \\
29&0906$-$17 &0908$-$1739 &&0.63& 0.1433E-10 \\
30&1451$-$68 &1456$-$6843 &&0.45& 0.8666E-11 \\
\hline
\end{tabular}
\end{table}
\section*{ACKNOWLEDGEMENTS}
The numerical code used in this work is based on a code originally developed and kindly provided by A.K.~Harding.
This research has been financed by the KBN grant 2P03D.009.11. We acknowledge useful remarks on the
typescript from T.~Bulik. We thank the anonymous referee for comments and useful suggestions.
\section*{ADDENDUM}
After our paper had been submitted for publication, Lucien Kuiper pointed out to us
that the COMPTEL group has found indications of a signal from B0656+14 in the energy interval
of $10 - 30\,$MeV
(Hermsen, W., et al., 1997, Proceedings 2nd INTEGRAL Workshop, ESA SP-382, 287).
\section{Introduction}
Core-collapse supernovae (CCSNe) occur at a rate of several per second in the Universe, each releasing a prolific $\sim$ $10^{58}$ neutrinos and antineutrinos. Their detection provides a rich bounty for stellar and neutrino studies, shown by the detection of the neutrinos from SN 1987A \cite{Hirata:1987hu,Bionta:1987qt,Hirata:1988ad,Bratton:1988ww,Arnett:1987iz,Bahcall:1987nx,Raffelt:1987yt,Barbieri:1988nh,Lattimer:1988mf,Arnett:1990au,Jegerlehner:1996kx,Lunardini:2004bj}. While a core-collapse supernova in the Milky Way would easily be detected in neutrinos, the occurrence rate is only $\lesssim$ 3 per century \cite{VanDenBergh:1991ke,Tammann:1994ev,Diehl:2006cf}. Proposed neutrino detectors should be able to detect supernovae up to 10 Mpc away with an occurrence rate of $\sim$ 1 per year \cite{Ando:2005ka,Kistler:2008us}. The vast majority of supernovae are therefore undetectable. However, the cumulative emission from all past core-collapse supernovae, which forms the Diffuse Supernova Neutrino Background (DSNB), has promising detection prospects \cite{BisnovatyiKogan:1984,Krauss:1983zn,Dar:1984aj,Woosley:1986aa,Totani:1995rg,Totani:1995dw,Malaney:1996ar,Hartmann:1997qe,Kaplinghat:1999xi,Fukugita:2002qw,Ando:2002zj,Ando:2002ky,Strigari:2005hu,Lunardini:2005jf,Yuksel:2007mn,Chakraborty:2008zp}. The Super-Kamiokande (SK) limit of $\phi(E_{\bar{\nu}_e} > 19.3 \,\mathrm{MeV}) < 1.2$ cm$^{-2}$ s$^{-1}$ \cite{Malek:2002ns} on the DSNB flux is already close to theoretical predictions \cite{Strigari:2005hu,Yuksel:2007mn}.
Predicting the DSNB for a given supernova neutrino emission model requires knowledge of the rate of core-collapse supernovae. In the past, this was not well known, and various studies provided insights on the supernova rate \emph{from} the DSNB \cite{Totani:1995dw,Fukugita:2002qw,Ando:2004sb}. Indeed, the SK limit on the DSNB flux is strong enough to rule out some supernova rate evolution models, assuming a fiducial neutrino emission model. However, our understanding of the cosmic star formation history (CSFH) has been greatly augmented by improved direct measurements in different wavebands and redshifts (see, e.g., Refs.~\cite{Hopkins:2004ma,Hopkins:2006bw} and references therein). Cross-checks with other well-measured observables are now constraining, so that the CSFH is well determined by methods other than the DSNB. Thus it is both timely and important to study the prospects of the DSNB for probing stellar and neutrino physics.
In this paper we start with the CSFH of Hopkins \& Beacom (hereafter HB06 \cite{Hopkins:2006bw}) which is based on a compilation of recent data, and assess it by cross-checks, with the aim of evaluating the uncertainties that are carried forward into DSNB predictions through the core-collapse supernova rate. We henceforth refer to these as the \underline{\it astrophysical} inputs and uncertainties on the DSNB. On the other hand, we refer to the supernova neutrino emission and neutrino properties as the \underline{\it emission} inputs and uncertainties on the DSNB.
As cross-check material we consider measurements of the rate of core-collapse supernovae, which have been significantly updated \cite{Cappellaro:2004ti,Dahlen:2004km,Botticella:2007er,Smartt:2008zd,Dahlen:2008aa}, the extragalactic background light, which records the total stellar emission over all time (for a recent review, see, e.g., Ref.~\cite{Hauser:2001xs}), and, finally, the stellar mass density (see e.g., Ref.~\cite{Wilkins:2008be} and references therein). While our approach is similar to previous studies such as Ref.~\cite{Strigari:2005hu}, we perform novel checks and with higher precision. In particular, our analysis delivers fiducial inputs with unprecedentedly small uncertainties.
Using our constrained astrophysical inputs, we find that the DSNB uncertainty is dominated by the emission inputs, demonstrating the potential to study stellar and neutrino physics using the DSNB. Furthermore, taking into account both astrophysical and emission inputs, we find that the predicted DSNB is within at most a factor \mbox{$\sim$ 4} of the SK limit set in 2003. For a 6 MeV thermal spectrum, typical of scenarios with neutrino mixing, this factor reduces to $\sim$ 2. At the lower detection threshold energy for a gadolinium-enhanced SK \mbox{($\sim 10$ MeV} \cite{Beacom:2003nk,Yuksel:2005ae}), the combined uncertainty on the predicted DSNB is remarkably only a factor 2, with an event rate of 1.2--5.6 events yr$^{-1}$ in the energy range 10--26 MeV. Thus, the core-collapse rate and the neutrino emission per supernova are large enough to allow the imminent detection of the DSNB in SK. The successful detection will confirm the ubiquitous emission of neutrinos from core-collapse supernovae and initiate the much-anticipated study of stellar and neutrino physics, while a non-detection would require new stellar or neutrino physics.
The paper is organized as follows. In Sections \ref{sec:astrophysics} and \ref{sec:dsnb} we discuss the astrophysical and emission inputs for the DSNB. We discuss the DSNB detectability and make future predictions for a gadolinium-enhanced SK in Section \ref{sec:sk}, and finish with conclusions in Section \ref{sec:conclusion}. We adopt the standard $\Lambda$CDM cosmology with $\Omega_m=0.3$, $\Omega_\Lambda=0.7$, and $H_0=70$ km s$^{-1}$ Mpc$^{-1}$.
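The adopted cosmology enters all redshift integrals below through the dimensionless Hubble rate $E(z)=H(z)/H_0$; a minimal sketch (our own illustration, using a simple trapezoidal integration) is:

```python
# Adopted flat LambdaCDM cosmology (as in the text).
OMEGA_M, OMEGA_L, H0 = 0.3, 0.7, 70.0   # H0 in km/s/Mpc
C_LIGHT = 299792.458                     # speed of light in km/s

def E(z):
    """Dimensionless Hubble rate H(z)/H0 for flat LambdaCDM."""
    return (OMEGA_M * (1.0 + z) ** 3 + OMEGA_L) ** 0.5

def comoving_distance(z, n=20000):
    """Line-of-sight comoving distance in Mpc (trapezoidal rule)."""
    dz = z / n
    s = 0.5 * (1.0 / E(0.0) + 1.0 / E(z))
    s += sum(1.0 / E(i * dz) for i in range(1, n))
    return (C_LIGHT / H0) * s * dz

print(comoving_distance(1.0))   # ~3.3e3 Mpc
```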
\section{Astrophysical inputs \label{sec:astrophysics}}
For a given supernova neutrino spectrum, the key input for calculating the DSNB flux is the history of the rate of CCSNe (this includes Type II and the subdominant Type Ib/c supernovae). The CCSN rate is directly related to the birth and death rate of massive stars. In recent years, data on the star formation rate (SFR) have improved both in breadth and sophistication, leading to an unprecedented understanding of the CSFH. In this section, we start with the latest CSFH and cross-check it with the measured CCSN rate, extragalactic background light (EBL), and stellar mass. Importantly, these checks probe different stellar life phases and are sensitive to different stellar masses, making them complementary. Thus we are able to self-consistently assess the astrophysical inputs for the DSNB.
\subsection{Cosmic star formation history \label{sec:sfr}}
The SFR is most often derived from measurements of living massive stars. The measured luminosities, together with knowledge of their masses and lifetimes, give their birth rates. Since the most massive stars have the shortest lifetimes, they provide a measure of the most recent star formation activity. In practice, the observed luminosities are corrected for dust, and the total SFR (over the entire stellar mass range) is derived by extrapolation to lower masses using the initial mass function (IMF). The calibration is done using a stellar population code that calculates the radiative output from a population of stars given an IMF.
We compute calibration factors using the PEGASE.2 stellar population code \cite{Fioc:1997sr}, which contains a careful treatment of stellar physics. We assume constant SFR bursts of 10$^8$ years, a close binary fraction of 0.05, evolutionary tracks with stellar winds, the supernova model B of Woosley \& Weaver \cite{Woosley:1995ip}, and a constant metallicity of $Z=0.02$ (i.e., solar). The parameter that the results are the most sensitive to is the star formation duration. For integrated measurements of galaxies, it is usually appropriate to assume the SFR has remained constant over time scales that are long compared to the lifetimes of the dominant UV emitting population ($\sim 10^8$ years) \cite{Kennicutt:1998zb}; however, there are calibration uncertainties of a few tens of percent (see Table \ref{table:calibration}). In contrast, studies suggest that the mean metallicity of star formation gas is close to the solar value for redshifts of a few \cite{Panter:2002ed,Panter:2008vw}, resulting in calibration uncertainties of ten percent or less. The other parameters yield variations of only a few percent. We note the agreement of PEGASE.2 outputs with other stellar population synthesis codes demonstrated in e.g., Refs.~\cite{Bruzual:2003tq,Fardal:2006sd}.
\begin{figure}[t]
\includegraphics[width=3.25in]{fig1}
\caption{\label{fig:sfr} Redshift evolution of the comoving SFR density. Data derived from various indicators are included as labeled, all scaled to a Salpeter IMF \cite{Salpeter:1955it}. The majority are from the compilation of HB06 \cite{Hopkins:2006bw}, with additional data from LBG \cite{Reddy:2008rj} and H$\alpha$ measurements \cite{James:2008pa}. We plot the fiducial CSFH (thick solid curve) and our generous adopted uncertainty range (thin solid curves). The curves take into account SFR data at higher redshifts \cite{Hopkins:2006bw,Yuksel:2008cu} that are not shown here, where we focus on the lower redshifts most relevant for the DSNB.}
\end{figure}
\begin{table}
\caption{Calibration factors, $f_{\rm bol}=L_{\rm bol}/\dot{\rho}_*$ and $f_{\rm UV}=L_{\rm UV}/\dot{\rho}_*$, calculated using PEGASE.2 \cite{Fioc:1997sr}, for three IMFs \cite{Salpeter:1955it,Kroupa:2000iv,Baldry:2003xi} and three epochs (yr). For a particular IMF, the star formation duration causes an uncertainty of a few tens of percent about the central $10^8$ yr value. This contributes to the scatter in the SFR for a given IMF. Note $f_{\rm UV}$ is determined at $\lambda = 0.2$ $\mathrm{\mu m}$ since the UV output is nearly constant over wavelength for the durations considered.}\label{table:calibration}
\begin{ruledtabular}
\begin{tabular}{lcccccccccc}
& \multicolumn{2}{c}{IMF slope\footnotemark[1]} && \multicolumn{3}{c}{$f_\mathrm{bol}$\footnotemark[2]} &&
\multicolumn{3}{c}{$f_\mathrm{UV}$\footnotemark[3]} \\
IMF & $\xi_1$ & $\xi_2$ && $10^7$ & $10^8$ & $10^9$ && $10^7$ & $10^8$ & $10^9$ \\
\hline
Salpeter (1955) & 2.35 & 2.35 && 4.3 & 6.5 & 8.6 && 5.1 & 7.8 & 8.9 \\
Kroupa (2001) & 1.3 & 2.3 && 6.7 & 9.9 & 13 && 7.9 & 12 & 13 \\
BG (2003) & 1.5 & 2.15 && 8.5 & 12 & 15 && 9.9 & 14 & 16 \\
\end{tabular}
\footnotetext[1]{$\psi(M) \propto M^{-\xi_1}$ for 0.1--0.5 $\mathrm{M_\odot}$ and $M^{-\xi_2}$ for 0.5--100 $\mathrm{M_\odot}$}
\footnotetext[2]{in $10^9$ $\mathrm{L_\odot \, (M_\odot/yr)^{-1}}$}
\footnotetext[3]{in $10^{27}$ $\mathrm{ erg \, s^{-1} \, Hz^{-1} (M_\odot/yr)^{-1}}$}
\end{ruledtabular}
\end{table}
We calculate results for three IMFs: the traditional steeper Salpeter IMF \cite{Salpeter:1955it}, an intermediate Kroupa IMF \cite{Kroupa:2000iv}, and a shallower Baldry-Glazebrook (BG) IMF \cite{Baldry:2003xi}. We define the IMF as $\psi(M)=\mathrm{d}N/\mathrm{d}M$ so that $\psi(M) \mathrm{d}M$ gives the number of stars in the mass range $M$ to $M+\mathrm{d}M$. The slopes $\psi(M) \propto M^{-\xi}$ are shown in Table \ref{table:calibration}.
We take the SFR compilation and fit of HB06 as our starting point and add recent data. The HB06 data consist of various SFR indicators, including UV measurements from SDSS \cite{Baldry:2005df}, GALEX \cite{Schiminovich:2004km,Arnouts:2005aa} and COMBO17 \cite{Wolf:2002ks}, far-infrared (FIR) from Spitzer \cite{PerezGonzalez:2005bt}, and high redshift measurements from the Hubble Ultra Deep Field (UDF) \cite{Thompson:2006bb}; to this we add recently-derived data from Lyman Break Galaxies (LBG) \cite{Reddy:2008rj} and from H$\alpha$ emission \cite{James:2008pa}. These are shown in Fig.~\ref{fig:sfr} as a function of redshift, all scaled to a Salpeter IMF. Assuming the Kroupa or BG IMF results in values that are lower by an overall factor $\simeq$ 0.66 and $\simeq$ 0.55, respectively.
Fig.~\ref{fig:sfr} demonstrates the overall consistency between SFR densities estimated by different indicators, over a wide range of redshifts. In general, the systematic scatter due to different indicators dominates over the formal uncertainties from the calibrations of each indicator. However, at low redshifts ($z < 1$), the scatter decreases and approaches the calibration uncertainties. To further constrain the CSFH in the future would require calibration uncertainties to be examined. We note that redshift dependent dust corrections have been applied. At $z<1$, the UV and FIR measurements are combined, while for $1<z<3$, a constant dust correction is made; see Ref.~\cite{Hopkins:2004ma} for further details.
To compare the CSFH to other observables, it is useful to define an analytic fit. We adopt a continuous broken power-law as in Ref.~\cite{Yuksel:2008cu},
\begin{equation}\label{fit}
\dot{\rho}_*(z) = \dot{\rho}_0 \left[
(1+z)^{\alpha \eta} +
\left( \frac{ 1+z }{B} \right)^{\beta \eta} +
\left( \frac{ 1+z }{C} \right)^{\gamma \eta} \,
\right]^{1/\eta},
\end{equation}
where $\dot{\rho}_0$ is the normalization, $B$ and $C$ encode the redshift breaks, the transitions are smoothed by the choice $\eta \simeq -10$, and $\alpha$, $\beta$ and $\gamma$ are the logarithmic slopes of the low, intermediate, and high redshift regimes, respectively. The constants $B$ and $C$ are defined as
\begin{eqnarray}
B &=& (1 + z_1)^{1-\alpha/\beta}, \\
C &=& (1 + z_1)^{(\beta-\alpha)/\gamma} (1 + z_2)^{1-\beta/\gamma},
\end{eqnarray}
where $z_1$ and $z_2$ are the redshift breaks. We adopt the fiducial CSFH fit obtained by combining the HB06 compilation with new measurements at high redshift derived from gamma-ray bursts \cite{Kistler:2007ud,Yuksel:2008cu} (note that high-redshift data are not shown in Fig.~\ref{fig:sfr}). We define a generous envelope that takes into account the scatter in the data, as shown in Fig.~\ref{fig:sfr}. The parameters are given in Table \ref{table:fit}.
\begin{table}
\caption{CSFH parametric fit to the form of Eq.~(\ref{fit})}\label{table:fit}
\begin{ruledtabular}
\begin{tabular}{ccccccc}
Analytic fits\footnotemark[1] & $\dot{\rho}_0$ & $\alpha$ & $\beta$ & $\gamma$ & $z_1$ & $z_2$ \\
\hline
Upper & 0.0213 & 3.6 & -0.1 & -2.5 & 1 & 4 \\
Fiducial & 0.0178 & 3.4 & -0.3 & -3.5 & 1 & 4 \\
Lower & 0.0142 & 3.2 & -0.5 & -4.5 & 1 & 4 \\
\end{tabular}
\end{ruledtabular}
\footnotetext[1]{Shown for the Salpeter IMF. For the Kroupa and BG IMFs the normalization $\dot{\rho}_0$ decreases by a factor $\simeq$ 0.66 and $\simeq$ 0.55, respectively; the overall shape is not greatly affected (see, e.g., Table 2 of Ref.~\cite{Hopkins:2006bw}). Units of $\rho_0$ are in $\mathrm{M_\odot \, yr^{-1} \, Mpc^{-3}}$.}
\end{table}
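The fit of Eq.~(\ref{fit}) with the fiducial parameters of Table~\ref{table:fit} can be sketched as follows (our own illustration, for the Salpeter IMF):

```python
# Fiducial CSFH fit of eq. (1) with the Table II parameters (Salpeter IMF).
RHO0, ALPHA, BETA, GAMMA = 0.0178, 3.4, -0.3, -3.5
Z1, Z2, ETA = 1.0, 4.0, -10.0

# Redshift-break constants, eqs. (2)-(3)
B = (1.0 + Z1) ** (1.0 - ALPHA / BETA)
C = (1.0 + Z1) ** ((BETA - ALPHA) / GAMMA) * (1.0 + Z2) ** (1.0 - BETA / GAMMA)

def sfr_density(z):
    """Comoving SFR density in Msun/yr/Mpc^3 (fiducial fit)."""
    x = 1.0 + z
    return RHO0 * (x ** (ALPHA * ETA)
                   + (x / B) ** (BETA * ETA)
                   + (x / C) ** (GAMMA * ETA)) ** (1.0 / ETA)

print(sfr_density(0.0))   # ~0.0178, the local normalization
```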
\subsection{Rate of core-collapse supernovae \label{sec:CCSN}}
The number of stars per unit mass undergoing core collapse is dependent on the IMF and mass range of stars that lead to core collapse. The predicted comoving CCSN rate history, $R_{\rm CCSN}(z)$, follows directly from the CSFH,
\begin{equation}\label{CCSNrate}
R_{\rm CCSN}(z) = \dot{\rho}_*(z)
\frac{\int_{8}^{50}\psi(M)dM}{\int_{0.1}^{100} M \psi(M)dM},
\end{equation}
where the ratio of integrals is 0.0070/$\mathrm{M_\odot}$, 0.0109/$\mathrm{M_\odot}$, and 0.0132/$\mathrm{M_\odot}$ for the Salpeter, Kroupa, and BG IMFs, respectively.
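The quoted prefactors follow directly from the IMF integrals; a sketch for the Salpeter case (the normalization of $\psi$ cancels in the ratio):

```python
# Verify the Salpeter-IMF prefactor of eq. (3):
#   ratio = Int_8^50 psi(M) dM / Int_0.1^100 M psi(M) dM,
# with psi(M) ~ M^(-2.35); the IMF normalization cancels.

def powerlaw_integral(exponent, lo, hi):
    """Analytic integral of M**exponent from lo to hi (exponent != -1)."""
    p = exponent + 1.0
    return (hi ** p - lo ** p) / p

n_ccsn  = powerlaw_integral(-2.35, 8.0, 50.0)    # number of 8-50 Msun stars
m_total = powerlaw_integral(-1.35, 0.1, 100.0)   # total mass formed

print(n_ccsn / m_total)   # ~0.0070 per Msun, as quoted
```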
The lower mass threshold is the most important parameter for calculating $R_{\rm CCSN}$. In general, it is difficult to predict accurately from theory, because stellar properties change rapidly between \mbox{$\sim$ 6--10 $M_\odot$}; it is usually assumed to be 8 $\mathrm{M_\odot}$. On the other hand, the mass can be determined from direct identifications of progenitor stars from pre-explosion imaging. The most recent study places the mass threshold for Type II-P supernovae at $8.5^{+1}_{-1.5}\,\mathrm{M_\odot}$ \cite{Smartt:2008zd}. The uncertainties would translate to an uncertainty of about ten percent in $R_{\rm CCSN}$. However, one should keep in mind that explosions of O-Ne-Mg cores may result from a lower mass range that leads to a different variety of CCSN. The upper mass limit is less important because of the steep IMF, and only affects the results at the percent level. Theoretically, prompt black hole formation occurs above some critical mass; the precise value depends strongly on properties such as rotation and metallicity, but is thought to lie around 30--50 $\mathrm{M_\odot}$ \cite{Fryer:1999mi}. Such explosions will generally not produce optical supernovae but will likely produce neutrinos. Observationally, a progenitor mass of at least 40 $\mathrm{M_\odot}$ has been claimed to have been inferred from a neutron star in a very young cluster \cite{Muno:2005xn}.
\begin{figure}[t]
\includegraphics[width=3.25in]{fig2}
\caption{\label{fig:CCSNrate} Evolution of the core-collapse supernova rate (top panel) and Type Ia to CCSN ratio (bottom panel), shown against distance in Mpc (for $z<0.1$) and redshift (for $z>0.1$). Data as labeled; filled symbols indicate data or limits we adopt, and empty points indicate published rates we treat as lower limits. Error bars show statistical errors, except for horizontal bars on Dahlen et al.~points that show bin size. In the top panel, predictions from the CSFH are shown by the solid curve, where the uncertainty band propagates from Fig.~\ref{fig:sfr}. The SFR measurements, scaled to a CCSN rate, are shown in the background (small light brown squares). The BG IMF has been adopted; using other IMFs causes differences of only a few percent. In the bottom panel, the dashed lines denote conservative upper limits from the non-observation of any Type Ia supernovae within 10 Mpc \cite{Kistler:2008us}. The 10 Mpc point has been very conservatively selected from Ref.~\cite{Kistler:2008us}, and the rate has been further corrected downwards (see text).}
\end{figure}
It should be noted that the predicted $R_{\rm CCSN}$ is largely insensitive to the IMF. This is because the effects of the IMF nearly cancel out between $\dot{\rho}_*$ and the integrals in Eq.~(\ref{CCSNrate}). For instance, a shallower IMF contains relatively more massive stars but has a smaller $\dot{\rho}_*$. Numerically, e.g., $R_{\rm CCSN}^{Sal} \approx 0.96 R_{\rm CCSN}^{BG}$. This is a natural outcome because both the CCSN and the SFR indicators are associated with the same massive stars. Thus, the CCSN rate allows an almost IMF-independent check of the CSFH.
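The near-cancellation can be checked from the numbers already quoted (a back-of-the-envelope sketch; the SFR rescaling factors are those given in Section \ref{sec:sfr}):

```python
# Back-of-the-envelope check of the IMF insensitivity of R_CCSN.
# prefactor: the integral ratio of eq. (3) [per Msun];
# sfr_scale: factor by which dot-rho_* changes relative to Salpeter.
prefactor = {"Salpeter": 0.0070, "Kroupa": 0.0109, "BG": 0.0132}
sfr_scale = {"Salpeter": 1.00,   "Kroupa": 0.66,   "BG": 0.55}

r = {imf: prefactor[imf] * sfr_scale[imf] for imf in prefactor}
print(r["Salpeter"] / r["BG"])   # ~0.96, as quoted in the text
```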
In the top panel of Fig.~\ref{fig:CCSNrate} we show the predicted CCSN rate, using the BG IMF, for the fiducial CSFH (thick solid curve) as well as the upper and lower CSFH (thin solid curves). Using the Kroupa or Salpeter IMF produces variations of only a few percent, as expected. For data, we plot rates derived from supernova compilations \cite{Kistler:2008us,Smartt:2008zd}, extended supernova surveys using the Hubble Space Telescope \cite{Dahlen:2004km,Dahlen:2008aa}, searches in nearby galaxies \cite{Cappellaro:1999qy,Cappellaro:2004ti}, and the Southern inTermediate Redshift ESO Supernova Search (STRESS) \cite{Botticella:2007er}. We also show in the background the SFR measurements scaled via Eq.~(\ref{CCSNrate}) to a CCSN rate using the BG IMF.
For the local Universe, CCSN rates are derived from compilations of supernovae occurring within 10 Mpc over a time period of 10 years \cite{Kistler:2008us}, and within 28 Mpc over a time period of 10.5 years \cite{Smartt:2008zd}. These compilations include Types Ia, II, Ib, and Ic supernovae, the latter three of which are used to derive the CCSN rate. Due to the known incompleteness of the compilations, both should be treated as lower limits on the CCSN rate. In addition, for the 10 Mpc point, we conservatively exclude sources that might inadvertently be outside 10 Mpc and we further exclude transients of peculiar origin (e.g., SN 2002kg, SN 2008S and the 2008 transient in NGC 300) \cite{Thompson:2008sv}, which reduces the number of CCSN to 13. Also, the 10 Mpc point has been corrected downwards for the local increase in SFR density. We apply a simple correction assuming the SFR density traces the galaxy density; the galaxy over-density in 10 Mpc is about a factor 2 according to galaxy catalogues \cite{Karachentsev:2004aa} (see also Fig.~2 of Ref.~\cite{Blanton:2000dr}). The correction does not affect the Type Ia to CCSN ratio that we discuss below.
Since the lifetimes of massive stars are short, the shape of the CCSN rate evolution must follow that of the CSFH. On the other hand, the normalization needs to be checked. We find excellent agreement between predictions from the CSFH and the most recent data from the survey of Dahlen et al.~using the Hubble Space Telescope \cite{Dahlen:2008aa}. These data have been derived by periodically observing the same patch of sky, locating supernovae within a volume only limited by flux. The new data (red solid squares) \cite{Dahlen:2008aa} are updates of their previous data (blue empty circles) \cite{Dahlen:2004km}, and have much better statistics and a more detailed analysis (we plot the data which have been corrected for dust). We find further agreement with recently published supernova compilations within 10 Mpc \cite{Kistler:2008us} and 28 Mpc \cite{Smartt:2008zd}, even though they are certainly incomplete, especially in the Southern sky, confirming the minimal normalization over a wide distance scale.
On the other hand, data from Cappellaro et al.~\cite{Cappellaro:1999qy,Cappellaro:2004ti} and STRESS \cite{Botticella:2007er} fall short of the trend of other data. Unlike in Dahlen et al., these surveys search for supernovae by periodically observing a pre-selected sample of galaxies in a given field. While a large number of galaxies are selected, even this is likely incomplete, as small galaxies are often under-sampled. In addition, as the authors clarify, host galaxy extinction is an important uncertainty on supernova surveys \cite{Botticella:2007er}. The surveys of Cappellaro et al.~do not include host galaxy extinction, and as the authors state, they most likely have too few CCSN \cite{Cappellaro:2004ti}.
The ratio of Type Ia to CCSN also assists in assessing the results of supernova surveys. Due to their brighter nature, Type Ia supernovae are more easily detected, so that the ratio of Type Ia to CCSN will increase if CCSN are increasingly missed. In the bottom panel of Fig.~\ref{fig:CCSNrate}, we show the ratio of raw supernova counts for Ref.~\cite{Smartt:2008zd}, and the ratio of reported Type Ia and CCSN rates for other studies. From the non-observation of any Type Ia supernovae within 10 Mpc -- of the supernovae considered in the 10 Mpc compilation, none were Type Ia \cite{Kistler:2008us} -- we determine 90\% and 99\% upper limits on the ratio, shown by dashed lines. We apply these upper limits over all distances and redshifts; although the ratio in principle evolves with time due to Type Ia time-delay effects, the evolution is not strong for global samples and $z \lesssim 1$ (see, e.g., Refs.~\cite{Neill:2007kg,Pritchet:2008np}), and moreover, a delayed component to the Ia rate relative to the CCSN rate would only make the limits stronger by increasing the local ratio. We therefore use the dashed lines as a conservative indicator for the true ratio, for simplicity. We see from Fig.~\ref{fig:CCSNrate} that more CCSN are being missed in the data we treat as lower limits (empty symbols). For example, the large ratio of Ref.~\cite{Cappellaro:1999qy} predicts $\sim 5$ Type Ia within 10 Mpc, in disagreement with observations \cite{Kistler:2008us}. Alternatively, taking the 90\% upper limit as described above and respecting the reported Type Ia supernova rates, the CCSN rates (empty symbols) increase to values more in line with theoretical predictions from the CSFH.
We therefore conclude that the data confirm the CSFH normalization over a large range of distances. Moreover, this range is precisely that of interest for the DSNB (almost all of the detectable flux comes from $z < 1$).
As an aside, while predictions include all core collapses, including those with little or no optical signals due to the prompt formation of a black hole or due to dust obscuration, the data are derived from optical supernovae only. The excellent agreement between prediction and data suggests that dark core collapses comprise a minority of all core collapses. Alternatively, the fraction of optically-dark but neutrino-bright collapses can be increased by adjusting the CSFH and mass range for core collapse; however, this would be accompanied by a correspondingly larger DSNB flux, which is constrained by the tight experimental limits discussed below.
We note in this context the red supergiant problem noted by Smartt et al.~\cite{Smartt:2008zd}. They find a shortage of Type II-P supernovae associated with red supergiants in the mass range 17--25 $\mathrm{M_\odot}$, despite clear evidence that these massive supergiants exist. These stars constitute $\sim$ 10\% of all massive stars leading to core collapse, and although slightly smaller in mass than theoretical predictions, they could be dark core collapses. Monitoring nearby red supergiants can reveal the occurrence rate of dark core collapses, as proposed in Ref.~\cite{Kochanek:2008mp}.
\subsection{Extragalactic background light}
It is widely accepted that the EBL is the record of the total stellar emission over all time. The observed EBL is dominated by two peaks of comparable energy density. The first peak in the optical to near infrared (NIR) is powered by direct starlight, while the second in the FIR is dominated by starlight that is absorbed and re-emitted by dust. Since circumstellar dust absorbs most efficiently in the UV, the power source of the FIR peak is primarily massive stars. The relative energy densities of the two peaks are a testament to the heavy dust content of the present and past Universe (for a review, see e.g., Ref.~\cite{Hauser:2001xs}).
We check the CSFH by using the \emph{total observed EBL}. In principle, if one had precise knowledge of dust -- its quantity, properties and evolution -- one could use the spectral shape of the EBL to probe different stellar mass regimes. However, a full treatment is beyond the scope of this paper and we simply exploit the total EBL as a calorimeter of the total energy radiated by stars. Although this includes contributions from less massive stars, half of the total EBL is powered by stars with masses $M \gtrsim 3 \, \mathrm{M_\odot}$ for the BG IMF.
Direct measurements of the EBL in the optical to NIR are complicated by foreground contamination \cite{Hauser:2001xs}. Indirect limits have been placed from integrated galaxy counts and from the opacity of the Universe to gamma rays using distant TeV blazars. We caution, however, that gamma-ray constraints depend on the assumed intrinsic blazar spectral index, so the result may be weakened in light of unknown acceleration mechanisms \cite{Dwek:2004pp,Stecker:2007aa,DeAngelis:2008sk,Razzaque:2008te,Stecker:2008aa}. In Fig.~\ref{fig:ebl} we present the latest data on the EBL, including direct measurements (empty symbols) \cite{Bernstein:2002aa,Bernstein:2002ab,Bernstein:2005aa,Bernstein:2007aa,Cambresy:2001aa,Wright:2000aa,Dwek:1998aa,Lagache:2000aa,Hauser:1998aa}, lower limits from galaxy counts (filled symbols) \cite{Gardner:2000aa,Madau:1999yh,Levenson:2008aa,Fazio:2004aa,Metcalfe:2003zi,Elbaz:2002vd,Papovich:2004aa,Dole:2006aa,Frayer:2006aa}, and upper limits from gamma-ray attenuation (thick curves) \cite{Aharonian:2005gh,Aharonian:2007aa,Aliu:2008ay}.
\begin{figure}
\includegraphics[width=3.25in]{fig3}
\caption{\label{fig:ebl}The observed EBL spectrum. Filled symbols are based on integrated galaxy counts, while empty symbols and the black lines at high wavelength represent absolute measurements. In the UV to NIR, we also show the gamma-ray upper limits placed by H.E.S.S. (red solid curve) \cite{Aharonian:2005gh}, MAGIC (blue dashed curve) \cite{Aliu:2008ay}, and Edelstein et al.~2000 (filled downward triangle at 0.1 $\mu$m) \cite{Edelstein:2000aa}. We show three shaded regions representing the minimum (dark shading), nominal (light shading) and maximum (lightest shading) EBL. The integrated total EBL values from these are listed at the top-right.\medskip \\[-2mm]
\indent The integrated galaxy counts shown include (in order of increasing $\lambda$) Gardner et al.~2000 (red circle) \cite{Gardner:2000aa}, Madau \& Pozzetti 2000 (green square) \cite{Madau:1999yh}, Levenson \& Wright 2008 (blue diamond) \cite{Levenson:2008aa}, Fazio et al.~2004 (yellow right-pointed triangle) \cite{Fazio:2004aa}, Metcalfe et al.~2003 (red up-pointed triangle) \cite{Metcalfe:2003zi}, Elbaz et al.~2002 (blue down-pointed triangle) \cite{Elbaz:2002vd}, Papovich et al.~2004 (green right-pointed triangle) \cite{Papovich:2004aa}, Dole et al.~2006 (yellow circle) \cite{Dole:2006aa}, and Frayer et al.~2006 (purple square) \cite{Frayer:2006aa}. Absolute measurements shown include (in order of increasing $\lambda$) Cambresy et al.~2001 (dark blue right-pointed triangle) \cite{Cambresy:2001aa}, Wright 2001 (purple left-pointed triangle) \cite{Wright:2000aa}, Dwek \& Arendt 1998 (dark green square) \cite{Dwek:1998aa}, Lagache et al.~2000 (green diamond) \cite{Lagache:2000aa}, and Hauser et al.~1998 (red circle) \cite{Hauser:1998aa}. For the results of Bernstein et al.~\cite{Bernstein:2002aa,Bernstein:2002ab,Bernstein:2005aa,Bernstein:2007aa}, we show $2\sigma$ upper limits (filled downward-pointing triangles), nominal values (empty circles), and lower limits (filled upward-pointing triangles), all connected by dashed lines. We show the diffuse sky measurements of Matsumoto et al.~2005 (small empty circles) \cite{Matsumoto:2005aa}, and the FIR measurements by Fixsen et al.~1998 (filled black circles) \cite{Fixsen:1998kq} connected by black lines.}
\end{figure}
The systematic uncertainties on galaxy photometry and zodiacal light subtraction suggest that a curve between the measurements and counts represents the most appropriate assumption for the total observed EBL. We show three EBL regions: the minimum, which essentially traces the galaxy counts, the nominal, which respects the gamma-ray constraints, and the maximum, which lies within most of the data error bars. We do not consider the 1--4 micron diffuse sky emission detected by IRTS \cite{Matsumoto:2005aa} to be of extragalactic origin, since it is most likely caused by emission from interplanetary dust particles \cite{Dwek:2005dj}. The total EBL values for these three regions are 52, 73, and 99, respectively, all in units of $\mathrm{nW \, m^{-2} \, sr^{-1}}$. Our estimates are slightly higher than those of Refs.~\cite{Madau:1999yh,Madau:2000yn}, who find a best estimate of 60 $\mathrm{nW \, m^{-2} \, sr^{-1}}$. On the other hand, they are more in line with recent estimates of 60--93 $\mathrm{nW \, m^{-2} \, sr^{-1}}$ \cite{Gispert:2000np}, 45--170 $\mathrm{nW \, m^{-2} \, sr^{-1}}$ \cite{Hauser:2001xs}, and 50--129 $\mathrm{nW \, m^{-2} \, sr^{-1}}$ \cite{Fardal:2006sd}.
To calculate the EBL for a given CSFH, we need the spectral luminosity density $\epsilon(\nu,z)$ as a function of $z$, measured since the epoch $z_*$ when stars first turned on. Assuming the EBL arises dominantly from stellar radiation, $\epsilon$ is given by \cite{Dwek:1998ab}
\begin{equation}
\epsilon(\nu,z) = \int^{t_z}_{t_*} \dot{\rho}_*(t) \mathrm{d}t
\int^{M(t^\prime)}_{0.1} L(\nu,M,t^\prime) \psi(M) \mathrm{d}M,
\end{equation}
where $L(\nu,M,t^\prime)$ is the luminosity per unit mass of a star of initial main-sequence mass $M$ at time $t^\prime = t_z - t$, $\psi(M)$ is the IMF in the range 0.1--100 $\mathrm{M_\odot}$, and $M(t^\prime)$ is the initial main-sequence mass of a star with a lifetime of $t^\prime$. We calculate this quantity using the PEGASE.2 stellar population code, with the same assumptions as described in Section \ref{sec:sfr}, but including metallicity evolution from an initial value of $Z=0.001$ at $z=10$. The specific EBL intensity $I(\nu)$ at the observed frequency $\nu$ is then the integral of $\epsilon(\nu^\prime,z)$, from the comoving volume element at $z$, over redshifts,
\begin{equation}
I(\nu)= \frac{c}{4 \pi} \int^{z_*}_0
\epsilon(\nu^\prime,z) \left| \frac{\mathrm{d}t}{\mathrm{d}z} \right| \mathrm{d}z,
\end{equation}
where $\nu^\prime=\nu(1+z)$ is the frequency at emission and $|\mathrm{d}z/\mathrm{d}t|=H_0(1+z) [\Omega_m(1+z)^3+\Omega_\Lambda]^{1/2}$.
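The redshift integral for $I(\nu)$ can be sketched numerically. The emissivity below is a purely hypothetical stand-in (a frequency-flat spectrum with a toy star-formation shape and an arbitrary amplitude), since the real calculation requires a population-synthesis code such as PEGASE.2; the cosmological parameters are assumed illustrative values.

```python
import numpy as np
from scipy.integrate import quad

# Assumed flat Lambda-CDM parameters (illustrative values).
H0 = 70.0 * 1.0e5 / 3.086e24       # 70 km/s/Mpc expressed in s^-1
OM, OL = 0.3, 0.7
C_CM = 2.998e10                    # speed of light [cm/s]

def dt_dz(z):
    """|dt/dz| = [H0 (1+z) sqrt(Om (1+z)^3 + OL)]^{-1}, in seconds."""
    return 1.0 / (H0 * (1.0 + z) * np.sqrt(OM * (1.0 + z)**3 + OL))

def eps_toy(nu, z):
    """Hypothetical comoving emissivity [erg s^-1 Hz^-1 cm^-3]:
    frequency-flat, rising as (1+z)^3 out to z=1 and flat beyond,
    crudely mimicking the shape of the CSFH.  Amplitude is arbitrary."""
    return 1.0e-40 * (1.0 + min(z, 1.0))**3

def ebl_intensity(nu, z_star=5.0):
    """I(nu) = (c / 4 pi) * int eps(nu (1+z), z) |dt/dz| dz,
    in erg s^-1 cm^-2 Hz^-1 sr^-1."""
    integrand = lambda z: eps_toy(nu * (1.0 + z), z) * dt_dz(z)
    val, _ = quad(integrand, 0.0, z_star)
    return C_CM / (4.0 * np.pi) * val
```

Because the toy emissivity is flat in frequency, $I(\nu)$ here is frequency-independent; with a real stellar emissivity the redshifted argument $\nu(1+z)$ shifts the emitted spectrum into the observed frame.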
The values of the calculated total EBL are $95^{+39}_{-30}$ $\mathrm{nW \, m^{-2} \, sr^{-1}}$, $88^{+36}_{-28}$ $\mathrm{nW \, m^{-2} \, sr^{-1}}$, and $78^{+31}_{-24}$ $\mathrm{nW \, m^{-2} \, sr^{-1}}$ for the Salpeter, Kroupa, and BG IMFs, respectively. While the CSFH differs by almost a factor 2 between the Salpeter and BG IMFs, the calculated total EBL differs by much less, because the total EBL is dominated by relatively higher mass stars. On the other hand, steeper IMFs have more low-mass stars, which are long-lived and accumulate, working to increase the total EBL. These dependencies on the IMF have in fact been studied by various authors to constrain the IMF (e.g., Ref.~\cite{Fardal:2006sd}). Of importance to us is the agreement between observations and predictions, when calculated using recent IMFs with shallower slopes and suppression at the lower mass end (BG IMF).
We note that this result contrasts with a recent study which found predictions that were smaller than observations \cite{Fardal:2006sd}. In their study, the predicted EBL was $\sim 50$ $\mathrm{nW \, m^{-2} \, sr^{-1}}$ for the BG IMF and they estimated the observed EBL to be $77$ $\mathrm{nW \, m^{-2} \, sr^{-1}}$. Our study differs in three aspects. First, by including the gamma-ray constraints, the updated observed EBL is slightly smaller. Second, the updated CSFH is somewhat larger. Finally, we include consistent evolution of metallicity in our calculations, whereas the authors of Ref.~\cite{Fardal:2006sd} assumed a constant $Z=0.02$ over all redshifts. Lower metallicity leads to less mass loss and hence higher time-integrated radiative output. These three factors explain why we obtain better agreement.
\subsection{Total stellar mass \label{sec:mass}}
Integration of the CSFH over redshift with appropriate corrections for stellar mass loss yields the stellar mass density. This quantity can be independently measured using galaxy surveys, which are often coupled to NIR observations as a proxy for stellar mass. Therefore, it also provides an independent check of the CSFH. The comparison has the property of probing a lower stellar mass range than the EBL. Numerous studies have been made, with results varying from good agreement \cite{Cole:2000ea,Fontana:2004aa,Arnouts:2007wi} to the CSFH over-producing stars \cite{Cole:2005sx,Hopkins:2006bw}. In these comparisons, the IMF plays a critical role \cite{Kroupa:2007wz}. A recent detailed study shows that the CSFH and observations of stellar mass density are in good agreement for redshifts $z \lesssim 0.7$ \cite{Wilkins:2008be}. In a subsequent paper, the authors find that if the IMF is constant in time, the best-fit IMF slope is 2.15 \cite{Wilkins:2008sa}, which is the same as our adopted BG IMF. Although the check becomes complicated by large scatter in measurements at redshifts above 0.7, the studies illustrate the overall consistency of the CSFH and stellar mass in the redshift range of our interest, and the preference for a shallow IMF.
\subsection{Consistency of the CSFH and observations}
Besides systematic uncertainties that contribute to the scatter in the CSFH, another issue is the CSFH normalization uncertainty arising from dust correction. As we discuss below, the true CSFH can be smaller than the fiducial one by at most a factor of $\sim$ 2, and even this seems unlikely.
Various studies have shown that the EBL is dominated by stars, with little contribution from non-nucleosynthesis energy sources such as active galactic nuclei \cite{Hopkins:2005fb,Fardal:2006sd}. Hence, we infer the minimum CSFH from the minimum observed EBL, $\approx$ $50 \, \mathrm{nW \, m^{-2} \, sr^{-1}}$. Requiring the minimum observed EBL to be explained, the true CSFH could be at most a factor $95/50 \sim 2$ smaller than the fiducial CSFH.
However, this would require several unlikely changes. First, the Salpeter IMF is disfavored by other observables such as the stellar mass density. Second, the dust correction applied to the CSFH is typically a factor 2--3. Invoking a true dust-correction of 1--1.5 would imply almost negligible true dust, in conflict with the observed FIR EBL peak. Third, to obtain consistency with the CCSN rates, one would need to either increase the mass range for CCSN or invoke almost negligible true dust correction, and neither seems plausible.
Therefore, we conclude that the CSFH cannot be smaller by even a factor $\sim$ 2; the maximal reduction factor is perhaps $78/50 \sim 1.5$. This sets the lower limit of the CSFH normalization. We remind the reader that the DSNB detectability is not directly affected by the value of this factor, since the flux is directly normalized by the CCSN rate and is independent of uncertainties associated with low-mass star formation.
\section{DSNB Predictions \label{sec:dsnb}}
In the previous section we explored the consistent picture of stellar birth, life, and death. As a result, we obtained the fiducial astrophysical input and uncertainties. In this section, we introduce the DSNB and discuss the neutrino emission per supernova and neutrino properties, i.e., the emission inputs and uncertainties. We then make predictions for the DSNB flux.
\subsection{DSNB formalism}
The predicted DSNB number flux, over $4\pi$, is calculated by integrating $R_{\rm CCSN}(z)$ multiplied by the neutrino emission per supernova, $\mathrm{d}N/\mathrm{d}E$, appropriately redshifted, over cosmic time \cite{Ando:2004hc},
\begin{equation} \label{DSNB}
\frac{\mathrm{d}\phi(E)}{\mathrm{d}E} = c \int R_{\rm CCSN}(z)
\frac{\mathrm{d}N(E^\prime)}{\mathrm{d}E^\prime} (1+z) \left| \frac{\mathrm{d}t}{\mathrm{d}z} \right| \mathrm{d}z,
\end{equation}
where $E^\prime = E (1+z)$.
Progenitors over a wide range of masses lead to similar neutron star masses and hence neutrino emissions \cite{Takahashi:2003rn}. The dominant neutrino emission occurs during the Kelvin-Helmholtz cooling phase, when the newly formed hot and dense protoneutron star cools to a neutron star \cite{Burrows:1986me} (see also Refs.~\cite{Raffelt:book} and \cite{Kotake:2005zn}). Neutrinos and anti-neutrinos of all flavors are produced ($\nu_e$, $\bar{\nu}_e$, and $\nu_x$; where $\nu_x$ refers to $\nu_\mu$, $\nu_\tau$, and their antiparticles), and each species carries away an approximately equal fraction of the total energy, $E_\nu^{\mathrm{tot}} \approx 3\times 10^{53}$ erg. Their spectra are to a good approximation thermal; we summarize some temperatures from numerical supernova simulations in Table \ref{table:temperature} (where we use $i$ to explicitly denote quantities at production). The hierarchy $T_{\nu_e}^i<T_{\bar{\nu}_e}^i<T_{\nu_x}^i$ reflects the different radii at which each neutrino species decouples, which in turn arises from the relevant neutrino interactions.
The observed $\bar{\nu}_e$ spectra outside the star are linear combinations of the neutrino spectra at production, owing to neutrino mixing. The time-integrated $\bar{\nu}_e$ spectrum per supernova is well approximated by the Fermi-Dirac distribution with zero chemical potential \cite{Raffelt:book,Kotake:2005zn},
\begin{equation} \label{spectrum}
\frac{dN_{\bar{\nu}_e}}{dE^\prime_{\bar{\nu}_e}}(E^\prime_{\bar{\nu}_e})=
\frac{E_\nu^\mathrm{tot}}{6}
\frac{120}{7\pi^4}
\frac{E^{\prime 2}_{\bar{\nu}_e}}{T^4_{\bar{\nu}_e}}
\left( e^{E^\prime_{\bar{\nu}_e}/T_{\bar{\nu}_e}}+1 \right)^{-1},
\end{equation}
where $T_{\bar{\nu}_e}$ is the effective $\bar{\nu}_e$ temperature outside the star after neutrino mixing. This is the temperature that is measured by neutrino detectors and we therefore use it for predictions. The effective temperature contains information on stellar and neutrino physics, and it is a separate problem to work backwards from the effective spectrum to the initial spectra, taking into account the effects of neutrino mixing.
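Eqs.~(\ref{DSNB}) and (\ref{spectrum}) can be combined into a short numerical sketch. The CCSN rate below is a hypothetical placeholder (a plausible local normalization of $\sim 10^{-4}\,\mathrm{yr^{-1}\,Mpc^{-3}}$ rising to $z=1$), and the cosmological parameters are assumed values, not the inputs actually fitted in the text.

```python
import numpy as np
from scipy.integrate import quad

# Assumed inputs (illustrative; see text for the ranges actually used).
H0 = 70.0 * 1.0e5 / 3.086e24        # Hubble constant in s^-1
OM, OL = 0.3, 0.7
C_CM = 2.998e10                      # speed of light [cm/s]
MEV = 1.602e-6                       # erg per MeV
E_TOT = (3.0e53 / 6.0) / MEV         # nu_e-bar energy budget per SN, in MeV
YR = 3.156e7                         # seconds per year
MPC3 = (3.086e24)**3                 # cm^3 per Mpc^3

def dN_dE(E, T=6.0):
    """Eq. (spectrum): time-integrated thermal nu_e-bar spectrum [MeV^-1],
    normalized so that int E dN/dE dE = E_tot / 6."""
    return E_TOT * (120.0 / (7.0 * np.pi**4)) * E**2 / T**4 / (np.exp(E / T) + 1.0)

def R_ccsn(z):
    """Placeholder CCSN rate [yr^-1 Mpc^-3]: ~1e-4 locally, (1+z)^3 rise to z=1."""
    return 1.0e-4 * (1.0 + min(z, 1.0))**3

def dphi_dE(E, T=6.0, z_max=5.0):
    """Eq. (DSNB): DSNB flux spectrum [cm^-2 s^-1 MeV^-1]."""
    def integrand(z):
        dtdz = 1.0 / (H0 * (1.0 + z) * np.sqrt(OM * (1.0 + z)**3 + OL))
        return (R_ccsn(z) / YR / MPC3) * dN_dE(E * (1.0 + z), T) * (1.0 + z) * dtdz
    val, _ = quad(integrand, 0.0, z_max)
    return C_CM * val
```

The $(1+z)$ factors redshift the emitted spectrum into the observed frame, which is why most of the detectable flux above the SK threshold originates at $z \lesssim 1$.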
\subsection{Neutrino emission per supernova}\label{neutrinoemission}
It is important to address the range of expected neutrino emission per supernova, because we integrate over the entire CCSN population. Potential processes that affect the neutrino emission are both microphysical \cite{Pons:1998mm,Horowitz:2004yf,Burrows:2004vq,Langanke:2007ua} and macrophysical \cite{Thompson:2004if,Beacom:2000qy,Sumiyoshi:2007aa,Sumiyoshi:2008zw}. Neutrino mixing also plays an important role \cite{Takahashi:2001ep,Takahashi:2002cm,Ando:2002zj}. Although a body of predictions is given in the literature, it should be emphasized that numerical simulations usually do not explode, and that even the most state-of-the-art simulations do not reach beyond the accretion phase at a few hundred milliseconds after bounce. We therefore try to be as general as possible and focus on the time-integrated emission needed for the DSNB.
\begin{table}
\caption{Flavor-dependent temperatures from some examples of numerical supernova simulations in the literature.}\label{table:temperature}
\begin{ruledtabular}
\begin{tabular}{lccccc}
Author & time\footnotemark[1] & $T_{\nu_e}^i$\footnotemark[2] & $T_{\bar{\nu}_e}^i$\footnotemark[2] & $T_{\nu_x}^i$\footnotemark[2] & Ref. \\
\hline
Myra \& Burrows (1990) & 0.2 & 3.3 & 4.0 & 8.0 & \cite{Myra:1990tt} \\
Totani et al.~(1998) & 0.5 & 3.9 & 4.9 & 6.3 & \cite{Totani:1997vj} \\
& 10 & 3.5 & 6.3 & 7.9 & \\
Rampp \& Janka (2000) & 0.5 & 2.3 & 3.4 & & \cite{Rampp:2000ws} \\
Liebendoerfer et al.~(2001) & 0.5 & 4.2 & 4.6 & 5.3 & \cite{Liebendoerfer:2000cq} \\
Mezzacappa et al.~(2001) & 0.5 & 3.6 & 4.2 & 5.2 & \cite{Mezzacappa:2000jb} \\
Keil et al.~(2003) & & 3.7 & 4.0 & 5.2 & \cite{Keil:2002in} \\
Thompson et al.~(2003) & 0.2 & 2.9 & 3.5 & 4.5 & \cite{Thompson:2002mw} \\
Liebendoerfer et al.~(2004) & 0.5 & 3.7 & 4.2 & 4.5 & \cite{Liebendoerfer:2002xn} \\
& 0.5\footnotemark[3] & 4.5 & 4.8 & 7.5 & \\
Sumiyoshi et al.~(2007) & 1.4\footnotemark[3] & 6.6 & 7.0 & 10 & \cite{Sumiyoshi:2007aa} \\
\end{tabular}
\end{ruledtabular}
\footnotetext[1]{postbounce time in seconds}
\footnotetext[2]{temperatures at production in MeV}
\footnotetext[3]{leads to black hole formation}
\end{table}
\subsubsection{Total neutrino energies}
The total energy budget in all flavors of neutrinos is dictated by the binding energy of the final remnant,
\begin{equation} \label{binding}
E_\nu^{\mathrm{tot}} \simeq E_{\rm bind} = 3 \times 10^{53}
\left( \frac{M_{NS}}{1.4 \, \mathrm{M_\odot}} \right)^2
\left( \frac{R_{NS}}{10\,\mathrm{km}} \right)^{-1}
\mathrm{erg},
\end{equation}
where $M_{NS}$ and $R_{NS}$ are the neutron star mass and radius. The neutron star mass is best estimated by measurements in high-mass X-ray binaries and binary pulsars containing a radio pulsar and a neutron star companion. The most likely value is $\approx$ 1.4 $\mathrm{M_\odot}$, with a range 1.2--1.6 $\mathrm{M_\odot}$ \cite{Lattimer:2006xb}. The radii are more difficult to measure, but are consistent with being $\sim$ 10 km. Note that the higher masses measured in low-mass X-ray binaries are attributed to their longer mass accretion histories.
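The sensitivity of Eq.~(\ref{binding}) to the measured mass range is easy to make explicit (a one-line evaluation, assuming the quoted 1.2--1.6 $\mathrm{M_\odot}$ range and a 10 km radius):

```python
def e_bind(m_ns=1.4, r_ns=10.0):
    """Eq. (binding): E_bind ~ 3e53 (M/1.4 Msun)^2 (R/10 km)^-1 erg."""
    return 3.0e53 * (m_ns / 1.4)**2 / (r_ns / 10.0)

# The measured mass range 1.2-1.6 Msun maps onto roughly (2.2-3.9)e53 erg,
# i.e., the total energy budget is pinned down to better than a factor ~2.
e_low, e_high = e_bind(1.2), e_bind(1.6)
```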
The $\bar{\nu}_e$ emission depends on how $E_\nu^{\mathrm{tot}}$ is partitioned between neutrino flavors. In the numerical supernova simulations of the Lawrence Livermore (LL) group, which successfully followed the simulation into the Kelvin-Helmholtz cooling phase ($\sim$ 18 s) \cite{Totani:1997vj}, almost exact luminosity equipartition is seen throughout. The time-integrated total $\bar{\nu}_e$ energy is $4.7 \times 10^{52}$ erg, in good agreement with $E_\nu^{\mathrm{tot}}/6$.
While the LL group succeeded in obtaining an explosion, it has been realized that they lacked neutrino processes now recognized as important. A review of the literature shows that luminosity equipartition is not universal: while $L_{\nu_e}^i \approx L_{\bar{\nu}_e}^i$ appears to be robust, $L_{\nu_x}^i$ varies by a factor 2--3 in either direction, depending on the evolutionary phase and on numerical methods. However, we remind the reader that these simulations only reach a few hundred milliseconds at most, when the bulk of neutrinos have not yet been emitted, and therefore say little about energy equipartition.
In addition, the observed $\bar{\nu}_e$ emission is a linear combination of $\bar{\nu}_e$ and $\bar{\nu}_x$ at production, due to neutrino mixing. Mixing scenarios, which are dependent on the neutrino mass hierarchy and oscillation parameters as well as the neutrino and stellar densities through which the neutrinos propagate, have been studied systematically \cite{Takahashi:2001ep,Takahashi:2002cm,Ando:2002zj,Dasgupta:2007ws,EstebanPretel:2007yq,Chakraborty:2008zp}. These give the effective $\bar{\nu}_e$ temperature a value in between the extremes corresponding to a pure $\bar{\nu}_e$ at production and a pure $\bar{\nu}_x$ at production.
Observationally, analyses of $\bar{\nu}_e$'s from SN 1987A show that the observed $\bar{\nu}_e$ energy budget is 3--$6 \times 10^{52}$ erg \cite{Arafune:1987ua,Janka:1989aa}, confirming the approximate energy equipartition after neutrino mixing. Variations away from energy equipartition at production are likely reduced by the effects of neutrino mixing as described above.
In some cases, the total energy budget may be smaller than Eq.~(\ref{binding}). One example is the effect of the equation of state for dense matter. Another possibility would seem to be the case of prompt black hole formation, where the neutrino signal is expected to be abruptly cut short \cite{Baumgarte:1996iu,Liebendoerfer:2002xn}. However, recent simulations show that black hole formation is preceded by an increase in the neutrino luminosity \cite{Sumiyoshi:2007aa,Sumiyoshi:2008zw,Fischer:2008rh}, as the gravitational energy of the rapidly accreting matter is released. The time-integrated neutrino luminosity can be as high as or higher than in normal neutron star formation. Another potential case is rotation, which could decrease Eq.~(\ref{binding}) by bloating the neutron star, although the effects are most likely not significantly large for an integrated population (see the light curves converging with time in Fig.~2 of Ref.~\cite{Thompson:2004if}). All of these are interesting physics topics to study using supernova neutrinos.
\subsubsection{Average neutrino energies}
In general, the neutrino temperature is only weakly dependent on the neutron star parameters. Assuming a thermal spectrum, the neutrino luminosity is $L_\nu \propto R_\nu^2 T_\nu^4$. Since the total energy budget is well defined, it is useful to consider $E_\nu^{\mathrm{tot}} \approx L_\nu \Delta t$, where $\Delta t \approx 10 \, \mathrm{s}\, (R_{NS}/10\,\mathrm{km})^2 (\rho/2 \rho_{nm})^{2/3}$ is the time scale for the neutrino cooling phase \cite{Burrows:1984aa,Burrows:1990ts} and $\rho_{nm}$ is the nuclear density. This gives $T_\nu \sim 5 \, (M_{NS}/1.4 \, \mathrm{M_\odot})^{1/3} (R_{NS}/10\,\mathrm{km})^{-3/4}$ MeV. Although this is a simplistic analysis, it shows that $T_\nu$ is weakly dependent on the neutron star mass and radius. For example, the neutrino emission from a more massive neutron star-neutron star merger system is comparable to that of a normal supernova \cite{Rosswog:2003rv}.
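The weak dependence quoted above can be made explicit with a short evaluation (the same simplistic scaling, with hypothetical parameter variations):

```python
def t_nu(m_ns=1.4, r_ns=10.0):
    """Simplistic scaling from the text:
    T_nu ~ 5 MeV (M/1.4 Msun)^(1/3) (R/10 km)^(-3/4)."""
    return 5.0 * (m_ns / 1.4)**(1.0 / 3.0) * (r_ns / 10.0)**(-0.75)

# A ~30% increase in neutron star mass (1.4 -> 1.8 Msun)
# shifts T_nu by less than 10%:
shift = t_nu(1.8) / t_nu(1.4) - 1.0
```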
Indirect constraints on the $\nu_x^i$ spectrum have been placed from the chemical abundance of $\nu_x$-process induced elements. Since the inelastic $\nu_x$ scattering off nuclei uniquely leads to the production of specific isotopes \cite{Heger:2003mm}, the required $\nu_x$ flux can be inferred. For $E_{\nu_x}^i=4$--$5.8 \times 10^{52}$ erg, $T^i_{\nu_x}$ is constrained to lie between 4--7 MeV \cite{Yoshida:2005uy,Yoshida:2008zb}.
The DSNB $\bar{\nu}_e$ spectrum can be calculated from the observed SN 1987A $\bar{\nu}_e$ \cite{Fukugita:2002qw,Lunardini:2005jf,Yuksel:2007mn}. The SN 1987A spectrum constructed by fitting \cite{Lunardini:2005jf} or by nonparametric methods \cite{Yuksel:2007mn} shows similar distortions from a thermal spectrum. For the purposes of DSNB detection, the spectrum near detector threshold is important, which as we discuss below is $\approx 20$ MeV for the current SK and $\approx 10$ MeV in a gadolinium-enhanced SK. The spectrum of Ref.~\cite{Yuksel:2007mn} is similar to a 5 MeV thermal spectrum for high energies ($E_\nu \gtrsim 30$ MeV), and closer to a 4 MeV thermal spectrum at somewhat smaller energies ($E_\nu \lesssim 30$ MeV). Therefore, the SN 1987A spectrum could be treated as similar to a 4 MeV thermal spectrum, which is on the low end of the theory predictions. The limited sample of the SN 1987A data means this spectrum serves as guidance rather than as the definitive neutrino spectrum per supernova; there is no good reason to invoke a smaller spectrum.
\begin{figure}[t]
\includegraphics[width=3.25in]{fig4}
\caption{\label{fig:DSNBflux} DSNB flux spectrum for emitted neutrino spectra as labeled. For each spectrum, two curves are plotted representing the full range of uncertainties due to astrophysical inputs (the fiducial prediction lies in between). The shadings indicate backgrounds, with origins as labeled. Decays of invisible muons and spallation products would be reduced in a gadolinium-enhanced SK, opening the energy region 10 MeV and above to a rate-limited DSNB search; see Fig.~\ref{fig:DSNBevent}.}
\end{figure}
\subsection{DSNB flux prediction} \label{DSNBflux}
In Fig.~\ref{fig:DSNBflux} we show the predicted DSNB flux for a selection of neutrino spectra as labeled. For each, the two curves correspond to the upper and lower $R_{\rm CCSN}$ inputs from Fig.~\ref{fig:CCSNrate}. The fiducial astrophysical input lies in between. The figure illustrates the relative sizes of the DSNB uncertainties originating from the astrophysics and emission inputs. The shading represents the relevant backgrounds which detections of the DSNB must compete against. We discuss in Sec.~\ref{sec:future} how these backgrounds would be rejected in a gadolinium-enhanced SK.
We adopt thermal spectra given by Eq.~(\ref{spectrum}) with effective temperatures of 4, 6, and 8 MeV. These are effective temperatures, reflecting the range of initial temperatures shown in Table \ref{table:temperature}, as well as the range of neutrino mixing effects described in Section \ref{neutrinoemission}. We also show the neutrino spectrum directly reconstructed from the SN 1987A neutrino data \cite{Yuksel:2007mn}.
We see that with the current astrophysical uncertainties, the emission uncertainties dominate at $E_\nu \gtrsim 20$ MeV. The DSNB in this energy range therefore provides information on the emission inputs. At lower energy the fluxes and uncertainties converge.
\section{DSNB constraints and detection \label{sec:sk}}
\subsection{Super-Kamiokande constraints}
\begin{figure}[t]
\includegraphics[width=3.25in]{fig5}
\caption{\label{fig:DSNBevent} DSNB event rates at SK (flux spectra weighted with the detection cross section) against positron energy. Note the linear axis. We hatch in the 2003 upper limit by the Super-Kamiokande Collaboration, $< 2$ events (22.5 kton yr)$^{-1}$ in the energy range 18--26 MeV. The limit applies to all spectra (see text). In a gadolinium-enhanced SK, decays of invisible muons and spallation products would be reduced, opening up the energy range $\gtrsim$ 10 MeV for the DSNB search (unshaded region).}
\end{figure}
Super-Kamiokande is a water \v{C}erenkov neutrino detector with a fiducial volume of 22.5 kton, sensitive to the DSNB through the inverse beta reaction $\bar{\nu}_e p \to e^+ n$ on free protons (hydrogen nuclei). The positron energy faithfully represents the neutrino energy, and $E_e \simeq E_{\bar{\nu}_e}-1.3 \,\mathrm{MeV}$ to zeroth order in $1/M_p$. In 2003, the Super-Kamiokande Collaboration reported a stringent upper limit on the integrated DSNB $\bar{\nu}_e$ signal above $E_e=18$ MeV, using 1496 days (4.1 yr) of data. The dominant background in this energy range was the decay of invisible muons, the fixed spectrum of which rises steeply with positron energy \cite{Kaplinghat:1999xi,Malek:2002ns}. The best signal to noise ratio is therefore obtained at the lowest energy bin, 18--26 MeV, where the background rate is some $\sim 3$ events yr$^{-1}$. By searching for DSNB neutrinos over detector backgrounds, the Super-Kamiokande Collaboration has limited the DSNB flux to be $\phi(E_{\bar{\nu}_e} > 19.3 \,\mathrm{MeV}) < 1.2$ cm$^{-2}$ s$^{-1}$. In a more physically relevant form, the limit is $\lesssim$ 2 events (22.5 kton yr)$^{-1}$ in the lowest energy bins \cite{Malek:2002ns,Yuksel:2005ae}.
\subsection{DSNB event spectrum}
In Fig.~\ref{fig:DSNBevent} we show the predicted event rates (flux spectrum weighted with the detection cross section \cite{Vogel:1999zy,Strumia:2003zx}) as a function of positron energy, which is the observed quantity in SK. We account for corrections of order $1/M_p$ in both the cross section and kinematics, which results in a net $\sim$ 20\% reduction in event rates at the current SK (and 10\% reduction at a gadolinium-enhanced SK). While the SK limit could be weakened by these corrections \cite{Lunardini:2008xd}, we adopt the published limit \cite{Malek:2002ns}, since the net effect of the corrections is smaller than the statistical uncertainties on the number of background events at SK. Furthermore, corrections should also be applied to the source, which would reduce the neutrino opacity and lead to more high-energy neutrinos escaping, partially canceling the effect of the corrections. One should also consider that there are uncertainties in approximating the neutrino spectrum, which will not be purely thermal.
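The weighting of the flux with the cross section can be sketched as follows. The cross section here is the lowest-order inverse-beta approximation of Vogel and Beacom (without the $1/M_p$ corrections applied in the text), and the proton count assumes pure water; both are stated approximations, not the analysis actually used for Fig.~\ref{fig:DSNBevent}.

```python
import numpy as np

N_A = 6.022e23
# Free-proton targets in the 22.5 kton fiducial volume:
# 2 H nuclei per H2O molecule (assuming pure water).
N_P = 22.5e9 / 18.0 * 2.0 * N_A          # ~1.5e33 protons
M_E = 0.511                               # electron mass [MeV]

def sigma_ibd(e_nu):
    """Lowest-order inverse-beta cross section [cm^2]:
    sigma ~ 9.52e-44 (E_e p_e / MeV^2), with E_e ~ E_nu - 1.3 MeV."""
    e_e = e_nu - 1.3
    if e_e <= M_E:
        return 0.0
    p_e = np.sqrt(e_e**2 - M_E**2)
    return 9.52e-44 * e_e * p_e

def event_rate(e_nu, dphi_de):
    """Events per year per MeV, for a DSNB flux spectrum value
    dphi_de in cm^-2 s^-1 MeV^-1."""
    YR = 3.156e7
    return N_P * sigma_ibd(e_nu) * dphi_de * YR
```

Integrating this weighted spectrum over the stated energy windows gives the kind of per-bin event counts quoted in Table \ref{table:counts}.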
We show four DSNB spectra in the figure, as labeled. We also show the region already excluded by SK \cite{Malek:2002ns,Yuksel:2005ae} by hatched shading. The exclusion region is drawn by scaling the 6 MeV spectra up so that the predicted event rate in the energy range 18--26 MeV is 2 events (22.5 kton yr)$^{-1}$. Since SK currently probes a narrow range of the exponential tail of the thermal neutrino spectra, the exclusion region has an almost temperature-independent shape.
The SK limit already partially excludes the dominance of $\bar{\nu}_e$ effective temperatures at or above 8 MeV. The 6 MeV DSNB, which is typical of scenarios with neutrino mixing, is within a factor of 2 of the current SK limit. Furthermore, the 4 MeV spectrum, which reflects our lower estimate for the effective temperature, lies within a factor of 4 of the current SK limit. Effective temperatures in the relevant energy range lower than 4 MeV are unlikely given the SN 1987A data and the general considerations above. In Table \ref{table:counts} we show the predicted event numbers in the energy bin 18--26 MeV, which can be compared to the SK event limit, and which can be improved soon.
\begin{table}
\caption{Integrated DSNB event rates in the positron energy range 18--26 MeV for the current SK and 10--26 MeV for a gadolinium-enhanced SK. The uncertainties reflect the upper and lower $R_{\rm CCSN}$, defined from the CSFH.}\label{table:counts}
\begin{ruledtabular}
\begin{tabular}{ccc}
$\bar{\nu}_e$ spectrum &
\multicolumn{2}{c}{events [(22.5 kton yr)$^{-1}$]} \\
& $10<E_e/\mathrm{MeV}<26$ & $18<E_e/\mathrm{MeV}<26$ \\
\hline
8 MeV & $4.2 \pm 1.4$ & $2.0 \pm 0.7$ \\
6 MeV & $3.5 \pm 1.1$ & $1.3 \pm 0.4$ \\
4 MeV & $1.8 \pm 0.5$ & $0.4 \pm 0.1$ \\
SN 1987A & $1.7 \pm 0.5$ & $0.5 \pm 0.1$ \\
\end{tabular}
\end{ruledtabular}
\end{table}
\subsection{DSNB future prospects} \label{sec:future}
In the current SK, DSNB $\bar{\nu}_e$ are detected in a singles positron search, $\bar{\nu}_e p \to e^+ n$, for which there are very large background rates \cite{Malek:2002ns}. The largest background above $\sim$ 18 MeV comes from the decay of invisible muons, i.e., non-relativistic muons produced from atmospheric $\nu_\mu$ and $\bar{\nu}_\mu$ \cite{Kaplinghat:1999xi}. The electrons and positrons produced in muon decays cannot be distinguished from those of the signal inverse beta decay reaction. At energies below $\sim$ 18 MeV, the decays of spallation products of cosmic-ray muons also contribute to the background (see Fig.~\ref{fig:DSNBflux}).
With 0.2\% dissolved gadolinium, the neutrons produced in inverse beta decay can be identified with high efficiency, via the photons emitted upon neutron capture on gadolinium. This delayed signal allows tight temporal and spatial coincidence for signal events, reducing the invisible muon background by a factor $\sim$ 5 and removing the spallation backgrounds in the range 10--18 MeV, opening it up to DSNB searches \cite{Beacom:2003nk,Yuksel:2005ae}. This is shown in Fig.~\ref{fig:DSNBevent} where the relevant shadings shown in Fig.~\ref{fig:DSNBflux} have been removed. Below 10 MeV, reactor $\bar{\nu}_e$ still overwhelm the DSNB signal \cite{Beacom:2003nk}.
Importantly, the enhancement allows a mostly rate-limited, rather than background-limited, DSNB search, so that the sensitivity improves linearly with exposure. Another important point is that a gadolinium enhancement could be applied to the existing SK detector \cite{Beacom:2003nk}.
The advantages of an enhanced SK are dramatic. At the lowered energy threshold, the event spectrum and uncertainties converge, so that the total range in predictions is only a factor $\sim$ 2 (i.e., from the lowest predicted 4 MeV curve to the highest predicted 6 MeV curve). The predicted total event rate in the energy range 10--26 MeV is almost triple that in SK with a high energy threshold, since the peak of the DSNB spectra can be probed; a gadolinium-enhanced SK is almost guaranteed to detect the DSNB. A non-detection would require novel stellar or neutrino physics, for example a change in the equation of state, extremely fast rotation, invisible neutrino decays on cosmological scales \cite{Fogli:2004gy,Ando:2003ie}, or the effect of hypothetical particles on the emission model of supernovae. Any such explanations would have to show why the SN 1987A detection results were atypical.
\section{Discussions and Conclusions \label{sec:conclusion}}
Neutrinos are the only probes (possibly apart from gravitational waves) of the central regions of core-collapse events, and their study and detection are strongly motivated by many areas of physics. In this paper we assess the uncertainties for DSNB searches and implications for stellar and neutrino physics. The results can largely be divided into three categories.
\subsection{On astrophysical inputs}
We start with an up-to-date compilation of the CSFH and cross-check it with other observables. While our aim is to constrain the astrophysical inputs for the DSNB (i.e., the CCSN rate), these new results are of value in their own right.
\begin{itemize}
\item \emph{Consistency with CCSN rates}: Using a collection of recently observed CCSN rates, we show that they are in agreement with predicted CCSN rates from the CSFH, up to $z \sim 1$. Importantly, this check is almost independent of the IMF.
\item \emph{Consistency with EBL}: Using new EBL measurements, including TeV gamma-ray constraints, we show that predictions from the CSFH are in agreement with observations. The observed total EBL is $73^{+21}_{-26} \, \mathrm{nW \, m^{-2} \, sr^{-1}}$, while we predict $78^{+31}_{-24} \, \mathrm{nW \, m^{-2} \, sr^{-1}}$ for the BG IMF.
\item \emph{Consistency with stellar mass density}: The choice of the BG IMF is independently supported by studies of the stellar mass density, which show agreement in $z \lesssim 0.7$.
\end{itemize}
In conclusion, our fiducial CSFH together with the BG IMF gives excellent agreement among all observations considered, preparing the astrophysical inputs for the DSNB. In particular, we stress how the astrophysical inputs cannot be too low, in order to maintain the self-consistent picture of the birth, life, and death of stars, as illustrated by the cross-checks.
\subsection{On emission inputs}
Next we assess the range of neutrino emission expected from supernovae. Coupled to our cross-checked astrophysical inputs, this allows us to discuss how constraining the present limit on the DSNB is. Our best-determined results are shown in Fig.~\ref{fig:DSNBevent}.
\begin{itemize}
\item \emph{Neutrino emission}: Ultimately, this is a quantity to be studied using supernova neutrino detections. We thus discuss the generic emission of $\bar{\nu}_e$ from supernovae using simple arguments based on the energetics of core collapse, neutrino mixing, and the observed SN 1987A neutrino burst. We also show that the neutrino temperature depends weakly on the remnant parameters. We conclude that the relevant high-energy $\bar{\nu}_e$ emission cannot be made too small.
\item \emph{Present constraints}: The SK limit is already probing interesting parameter regions. The dominance of an effective $\bar{\nu}_e$ temperature above 8 MeV is constrained. Similarly, a high rate of dark core collapses is also prohibited by an amount that depends on the assumed temperature.
\item \emph{Implications for stellar and neutrino physics}: If an effective temperature is ruled out, so are the neutrino mixing scenarios and initial neutrino temperatures that would lead to that effective temperature in the DSNB energy range. The correspondence between the initial and effective neutrino emissions depends on the neutrino mixing scenario, which has been studied systematically by various authors.
\end{itemize}
To conclude, from general and SN 1987A considerations, the neutrino emission cannot be too low. The current SK limit is already constraining interesting neutrino effective temperatures. Noting the SK limit was placed in 2003, more data and improved cuts will allow better sensitivity.
\subsection{On detection in SK}
With the DSNB inputs and their uncertainties checked, we discuss implications for DSNB detection in the future.
\begin{itemize}
\item \emph{SK prospects}: The DSNB is near detection in SK. The 6 MeV spectrum, typical of scenarios with neutrino mixing, is within a factor $\sim$ 2 of the current SK limit. The factor increases to $\sim$ 4 for the lower 4 MeV and SN 1987A reconstructed spectra.
\item \emph{SK with gadolinium}: Intriguingly, the fluxes and uncertainties converge at the improved detection threshold (\mbox{$\sim$ 10 MeV}), so that predictions span an uncertainty of a factor $\sim$ 2. The predicted event rate between 10--26 MeV is 1.2--5.6 events yr$^{-1}$, leaving no room for the DSNB to escape detection.
\item \emph{Future physics with SK}: The effective temperature $T_{\bar{\nu}_e}$ contains physics concerning neutrino emission from the collapsed core and neutrino mixing. Stellar and neutrino physics can be extracted by future comparisons between the measured and theoretical $T_{\bar{\nu}_e}$. Future SK analyses should report results directly in terms of the time-integrated luminosity and effective temperature \cite{Yuksel:2005ae}, using the astrophysically measured supernova rate.
\end{itemize}
To conclude, while the current SK will continue probing interesting physics, a gadolinium-enhanced SK is almost guaranteed to detect DSNB events. A non-detection would require novel stellar or neutrino physics. Together with the decreasing astrophysical uncertainties, these results strongly support the imminent detection of the DSNB and solidify its important role in understanding supernova and neutrino physics.
\section*{Acknowledgments}
We thank Shin'ichiro Ando, Maria Teresa Botticella, Thomas Dahlen, Andrew Hopkins, Cecilia Lunardini, Katsuhiko Sato, Stephen Smartt, Todd Thompson, Stephen Wilkins, and Mark Vagins for helpful discussions; Matt Kistler and Hasan Yuksel for helpful discussions and technical assistance. S.H.~thanks the hospitality of CCAPP, Ohio State University, where this work took place. S.H.~was supported by CCAPP, J.F.B.~was supported by NSF CAREER Grant PHY-0547102, and E.D.~was partially supported by NASA grant LTSA 03-0000-065.
\bibliographystyle{h-physrev4}
|
3,212,635,537,749 | arxiv | \section{Introduction}
The global increase of mobile users as well as massive newly-formed wireless connections (e.g. autonomous driving cars and the Internet of Things) have raised unprecedented challenges to cellular communication systems. Rolling out more intelligent network management approaches is the primary direction for systems beyond the 5th generation (Beyond 5G) \cite{r2020}. The concept of the self-organized network (SON) is summarized in \cite{klaine2017survey} to address this trend, where network techniques are enabled with self-configuration, self-healing and self-optimization
functionalities. To achieve these goals, the techniques are anticipated to handle the coupling relations between resource allocation, handover management, interference cancellation and coverage optimization, etc.
As a paradigm of network self-optimization, we consider an automatic way to jointly optimize cell load balance \cite{mao17} and network throughput. Specifically, the realization is required to be as consistent as possible with the operation mode of current cellular systems (i.e. a smooth evolution). We observe that the inter-cell handover (HO) criteria and massive MIMO (e.g. full-dimension MIMO (FD-MIMO) \cite{nam2013full}) antenna angles are often static in the 4G LTE/LTE-Advanced system \cite{zhang17}. Motivated by this fact, we exploit the freedom of changing these parameters, i.e. we aim to design an online policy for updating the inter-cell handover thresholds and FD-MIMO antenna tilt angles. More importantly, the optimization objective is no longer a single metric but comprises multiple ones.
In contrast, non-self-optimization approaches are referred to as static optimization approaches, where the network parameters are configured according to a pre-assumed model or long-term statistics of the environment. These analytical methods can rarely be extended to online operation due to the bottlenecks of modeling the network dynamics and developing feasible solvers \cite{imran2014challenges}. The experiments in \cite{ruiz2015analysis} demonstrate that directly applying the results of a static optimization approach to a dynamic scenario can lead to 85\% to 97\% handover failures and connection outages. The negative impacts of model mismatch (applying strategies based on long-term measurements to dynamic situations) are also discussed in \cite{andrews2014overview}. Accordingly, we can summarize the practical challenges of using these non-adaptive approaches as follows:
\begin{itemize}
\item Model-based utility optimization cannot extensively incorporate all hidden factors into a unique analytical framework.
\item Even explicit factors cannot be precisely characterized or timely measured, such as user mobility and the relation between antenna tilt angles and reference signal receiving power (RSRP), etc.
\item Usually, a non-convex batch-based optimization problem has to be solved at the end due to the mixed-integer features of network parameters (Here, the antenna tilt angles are discrete variables due to hardware constraints). The non-convex feature makes the static formulation based approaches being intractable online.
\end{itemize}
To adaptively adjust network parameters with light effort, dynamic programming \cite{ye13} as well as heuristic approaches \cite{damnjanovic2011survey} can be considered. However, these methods often rely on assumptions about the environment or offer uncertain optimality. The randomness of user mobility, which often results in fluctuating cell loads and frequent handover \cite{lopez2012mobility}, is the hardest part to characterize. To avoid any assumptions on the environment, we adopt a reinforcement learning (RL) framework for the online algorithm design. The success of the RL mechanism has been demonstrated in diverse tasks, such as computer games \cite{schrittwieser2019mastering}, including Atari \cite{mnih2013playing} and StarCraft II \cite{vinyals2019grandmaster}, chemical reactions \cite{zhou2017optimizing} and image captioning \cite{mnih2014recurrent}, which is our primary motivation for applying RL-based optimization approaches. In our RL approach, a central agent tracks user measurements and adjusts cell individual offsets (CIOs) and BS antenna tilt angles in favor of the predefined objectives. More importantly, the RL actions are anticipated to be proactive rather than reactive, i.e., the actions take into account predictions of user mobility.
In this paper, we introduce a multi-objective RL method and verify the effectiveness of using our measurement data associated with a comprehensive system-level simulator. The main contributions of this paper are as follows,
\begin{itemize}
\item {\textbf{A New Operational Mode:}} Our innovative combination of network handover management and FD-MIMO antenna tilt is a hierarchical approach to user mobility management: the tilt actions essentially change the cell coverage as a large-scale optimization, while the handover management, which determines the user association, is considered a small-scale adjustment since it especially adheres to the cell edge users. This functional separation is explicitly mapped to the network hardware components, which enables enhanced flexibility in the network configuration.
\item {\textbf{A New Algorithm:}} Most existing RL algorithms can only accumulate scalar rewards for single-objective optimization, whereas our method can learn through vector rewards and thus handle multi-objective optimization online. To the best of our knowledge, it is the first Pareto-optimization based RL algorithm with continuous action space. This extension allows the underlying policy network to generate mixed actions (i.e., both continuous and discrete actions, where the latter can easily be obtained through quantization). Therefore, the policies controlling the handover thresholds (continuous variables) and antenna tilt angles (discrete variables) can be jointly optimized.
\item {\textbf{Extensive Evaluations:}} We evaluate our method in a comprehensive simulator designed according to the 3GPP standards on the air interface and access networks \cite{5grrc,5gphysical}. The user-side power distribution is obtained from our measurement data in a real city environment. We include hardware constraints on the network parameters, such that the FD-MIMO antenna tilt and handover thresholds may only be tuned periodically. Moreover, for a self-contained evaluation, we formulate the multi-objective optimization problem from a static perspective and introduce brute-force search based methods as the benchmark: we give these static methods full access to the user mobility and, at the same time, relax the periodic action constraints for them. Our results demonstrate that the RL method performs comparably to the static approaches.
\end{itemize}
In addition, our RL framework can be generalized to handle more than two objectives, supporting vector rewards of arbitrary dimension. The action operation can also be extended to an asynchronous manner, which we consider a promising direction for future work.
The rest of this paper is organized as follows: In Section \ref{sec2}, we will review the network optimization methods including both model-based static/online optimization and learning-based approach. In Section \ref{sec3}, we will introduce the background on our network configuration including FD-MIMO antenna tilt, handover management, and network utilities. Section \ref{sec4} will elaborate on our vector reward based multiple objective reinforcement learning methods. Evaluation results are presented in Section \ref{sec6}, where the details of a static solver for the joint optimization will be put in the Appendix as a comparison benchmark. Finally, Section \ref{sec7} will conclude the paper.
\section{Related Work}
\label{sec2}
Cell load balancing is often considered the primary objective in network optimization. \cite{ye13} introduced a near-optimal distributed solver for heterogeneous networks. In \cite{bethanabhotla2014user}, an optimizer of user association in massive MIMO systems is proposed. However, the resulting analytical solution is based on an idealized assumption on the massive MIMO channel and involves no specific MIMO operations, such as precoding/beamforming. Meanwhile, \cite{razaviyayn2013linear} introduced a downlink precoding method for mobility load balance, but this method does not optimize the user association. Moreover, a stochastic geometry based analytical approach is proposed in \cite{singh2013offloading}. While this method can solve for a static user association distribution, it cannot be applied online, and it relies on assumptions about user mobility. Overall, the aforementioned approaches are limited to ideal scenarios and have high computational complexity. In current cellular systems, the user association is defined by handover events; thus the method in \cite{Hasan18}, which adaptively optimizes handover parameters using reference signal receiving power (RSRP) measurements, is more relevant. This method comprises load prediction and resource adjustment, and indeed offers a practical framework for user association optimization. However, it is a heuristic approach whose optimality cannot be guaranteed.
Rather than the aforementioned single-objective or single-variable optimization, \cite{wang2010dynamic} considers optimizing load balance and maintaining throughput at the same time, where the solver is formulated as an integer optimization. To overcome the computational bottlenecks of this NP-complete problem, online multi-objective optimization methods are proposed in \cite{son2009dynamic,ao2017approximation}. \cite{son2009dynamic} jointly solves frequency reuse and load balance by changing the user association. The method in \cite{ao2017approximation} combines the objectives of throughput maximization and load balance, and is shown to achieve $60\%$--$67\%$ optimality. However, these methods often assume ideal RSRP measurements without handover delay.
Although the previous methods offer certain strategies on network parameter optimization, the performance is vulnerable to environment variation due to model mismatch. Alternatively, learning-based approaches were introduced: \cite{mwanje2013q} proposed a Q-learning based algorithm to adaptively adjust cell individual offsets (CIOs), which is shown to be superior to a CIO adjustment strategy with fixed step-size.
\cite{mwanje2016cognitive} proposed a mobility load balance algorithm using generalized Q-learning. On the other hand, \cite{kudo2014q} introduced a framework that operates the handover decision merely through user-side calculation: each user is configured with a reinforcement agent which aims to minimize transmission outage and cell load fluctuation. \cite{xu2019deep,xu2019load} introduced RL for CIO control, where the RL observation consists of well-designed environment features which are demonstrated to effectively handle the curse of dimensionality. Alternatively, an asynchronous multi-agent RL framework is proposed in \cite{wang2018handover} to deal with a large number of users. Rather than an RL based approach, \cite{zappone2018user} introduced a feed-forward neural network based supervised learning approach, where the input is directly chosen as the point cloud comprised of user locations and power measurements, and the network output is chosen as the ideal user association. However, the operations rely on users' real-time feedback to BSs, and predefined training labels on the optimal user association are required.
Considering the pros and cons of the previous methods, we employ a deterministic policy gradient based RL method \cite{lillicrap2015continuous} to handle the CIO adjustment as well as the FD-MIMO antenna tilting. Regarding multi-objective RL techniques, \cite{van2014multi} extended scalar-reward Q-learning to a vector-reward approach; however, the action space is restricted to discrete values. A simple treatment of vector rewards, transforming them into scalar rewards through a scalarization function, can be found in \cite{miettinen2002scalarizing}; nonetheless, this method relies on handcrafted scalarization functions and cross-validation over multiple policies. \cite{van2017hybrid} proposed an RL framework that decomposes a single reward into multiple rewards to accelerate the convergence of Q-learning; moreover, this method aims at optimization over discrete rather than continuous variables.
\section{Network Scenario}
\label{sec3}
In this section, we introduce our network layout and utilities. We consider $N$ base stations (BSs) distributed in a city area. For ease of discussion, we refer to the coverage of one BS as a cell, where cells are geographically adjacent to each other; Fig. \ref{network_layout} depicts a case of four cells. Our RL agent controls the parameters of the adjacent BSs in a centralized way. We assume there are $K$ users moving around this area. We let $t$ represent the system time index, $k$ the user index, and $n$ the BS index.
\begin{figure}[tp]
\centering
\includegraphics[width = 0.6 \linewidth, height = 0.4 \linewidth]{figures/mobility_scenario.PNG}
\caption{Network layout.}
\label{network_layout}
\end{figure}
\subsection{FD-MIMO with Antenna E-Tilt}
Each BS is deployed with a 2D antenna array to facilitate the FD-MIMO operation, which offers an extra degree of freedom to the access network. In addition to beamforming towards narrow directions, the BS is enabled with an antenna electrical-tilt (E-Tilt) function as shown in Fig. \ref{antenna_tile}, where the antenna tilt covers both elevation and azimuth directions. The tilt action essentially allows a broadcast beam adaptation for downlink transmission. Thus, adjusting the BSs' antenna tilt angles can alternatively change the cell coverage.
\begin{figure}[tp]
\centering
\includegraphics[width = 0.9 \linewidth, height = 0.3 \linewidth]{figures/Antenna_tilt.PNG}
\caption{FD-MIMO antenna tilt.}
\label{antenna_tile}
\end{figure}
Clearly, the received power by user $k$ from BS $n$ at time instant $t$ is a function of the tilt angle, which we denote as $p_{n,k}(b_n(t))$. $b_n(t)$ comes from a predefined dictionary $\{\theta_0, \theta_1, \cdots, \theta_{M-1} \}$, with $\theta_m$ being a combination of azimuth and elevation angles. For simplicity, we collect all BSs' tilt angles as a vector ${\boldsymbol b}(t)$.\\
{\textbf{ Remarks}}
\begin{itemize}
\item The downlink received power ${p_{n,k}(b_n(t))}$ can be characterized as a function of the channel realization. However, it is challenging to give an analytical expression of ${p_{n,k}(b_n(t))}$ in terms of both tilt angles and channel coefficients. In our simulation, ${p_{n,k}(b_n(t))}$ is determined via our measurement data set of RSRP. The received signal to interference plus noise ratio (SINR) is defined as ${p_{n,k}(b_n(t))\over{\sum_{n' \neq n} p_{n',k}(b_{n'}(t))}}$, where $n$ is the serving cell; it is jointly determined by the tilt angles of both the serving cell and the interfering cells. In our system, the SINR is first mapped to a channel quality indicator (CQI), and then reported to the BS periodically.
\item Compared to downlink beamforming, the antenna tilt is a slower action due to hardware constraints. Meanwhile, the tilt action impacts a wider range of users and thus adjusts the cell coverage.
\end{itemize}
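As a minimal numerical sketch of this SINR computation and periodic CQI reporting, the snippet below uses a hypothetical noise power and hypothetical per-CQI SINR thresholds; neither value, nor the function names, is specified by the system above.

```python
import numpy as np

def sinr(p_mw, serving, noise_mw=1e-9):
    """SINR of one user: serving-cell power over interference plus noise.
    p_mw[n] is the received power (mW) from cell n; in the text this is
    p_{n,k}(b_n(t)), i.e. a function of each cell's tilt angle."""
    interference = p_mw.sum() - p_mw[serving]
    return p_mw[serving] / (interference + noise_mw)

def sinr_to_cqi(sinr_linear, thresholds_db=np.arange(-6, 24, 2)):
    """Map a linear SINR to a CQI index via hypothetical SINR thresholds (dB)."""
    return int(np.searchsorted(thresholds_db, 10 * np.log10(sinr_linear)))

p = np.array([2.0, 0.5, 0.25])          # received powers from three cells (mW)
cqi = sinr_to_cqi(sinr(p, serving=0))   # reported periodically to the BS
```

Retilting any cell changes the entries of `p` and hence every user's SINR, which is the coupling exploited by the tilt action.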
\subsection{Handover Management}
The handover process is introduced to switch the user association between cells. Due to user mobility, handover happens when the signal quality from neighbor cells is better than the serving cell. In our system, users report their RSRP measurement to the serving BS, thereafter the BS determines the handover event according to certain criteria. Particularly, we consider A3-event based inter-cell handover which is defined as
\begin{align}
p_{n',k}(t) - p_{n,k}(t) > O_{n',n}(t) + H_{ys}
\end{align}
where $n$ represents the serving cell and $n'$ represents the neighbor cell; $O_{n',n}(t)$ is the cell individual offset from cell $n$ to $n'$, which is usually antisymmetric to $O_{n, n'}(t)$ such that $O_{n',n}(t) = - O_{n, n'}(t)$; $H_{ys}$ is a hysteresis parameter which is set as a constant to avoid frequent handover. When we increase the value of $O_{n, n'}(t)$, the number of handed-over users can be reduced; accordingly, the cell load changes, and vice versa. Therefore, adjusting the CIOs can control the cell load, which motivates us to consider them as parameters in our optimization framework. For ease of discussion, we stack the CIOs as a matrix ${\boldsymbol O}(t)$.
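For concreteness, the A3 criterion above can be evaluated per user with a few lines. The helper below is a sketch only: the function name and the choice to return the neighbor with the largest margin are our own, not part of the standard.

```python
def a3_handover_target(p, serving, cio, hys=2.0):
    """Return the neighbour cell triggering an A3 event, or None.
    p[n]: RSRP from cell n (dBm); cio[n1][n2]: offset O_{n1,n2} (dB);
    hys: the hysteresis H_ys. A3 event: p[n'] - p[n] > O_{n',n} + H_ys."""
    best, best_margin = None, 0.0
    for n2, p2 in enumerate(p):
        if n2 == serving:
            continue
        margin = p2 - p[serving] - cio[n2][serving] - hys
        if margin > best_margin:
            best, best_margin = n2, margin
    return best

cio = [[0.0] * 3 for _ in range(3)]  # antisymmetric in general; all zero here
target = a3_handover_target([-90.0, -85.0, -95.0], 0, cio)  # 5 dB > 0 + 2 dB
```

Raising `cio[n2][serving]` shrinks the margin and thus defers handovers into cell `n2`, which is exactly the load-steering lever discussed above.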
\subsection{Network Utilities}
Given user SINR and association, we define the following network utilities.
\subsubsection{Cell Load}
We define $l_k(t) := \min \{C_k(t)/r_k(t), l_{limit} \} $ as the load/bandwidth occupation of user $k$ in units of physical resource blocks (PRBs), where $r_k(t)$ is the rate of user $k$, calculated by mapping through the CQI table adopted in the system;
$C_k(t)$ is the traffic of user $k$ and $l_{limit}$ is the maximum load allowed for each user. Here, $C_k(t)$ is a pre-given parameter rather than a variable to optimize. For cell $n$, the cell load is given by
\begin{align}
\label{cell_load}
L_n({\boldsymbol I}(t), {\boldsymbol b}(t)) &= \sum_k I_{n,k} \cdot l_k(t),
\end{align}
where $I_{n,k}(t)$ is the association indicator of user $k$ to BS $n$: $I_{n,k}(t) = 1$ when BS $n$ is associated with user $k$, and $I_{n,k}(t) = 0$ otherwise. Note that although we spell out user association rather than handover thresholds as variables of the cell load, CIOs are preferred as the parameters in online optimization due to feasibility and compatibility. Moreover, the load metric is often normalized by the maximum number of PRBs per cell.
\subsubsection{Cell Throughput}
Given the user association, user rates and user loads, the throughput of cell $n$ is given by
\begin{align}
\label{throughput}
R_n({\boldsymbol I}(t), {\boldsymbol b}(t)) &= \sum_k I_{n,k} \cdot r_k(t) \cdot l_k(t).
\end{align}
Comparing (\ref{throughput}) to (\ref{cell_load}), we see (\ref{throughput}) is a weighted summation of the same terms in (\ref{cell_load}) which indicates a coupling relation between cell load and throughput.
Intuitively, the coupling between cell load balancing and throughput maximization can be observed as follows. Suppose users are \textit{unevenly distributed} in a bandwidth-limited system. To achieve a high throughput (an increase of $\sum_n R_n$), every BS tends to connect to high-rate users; therefore, users are likely to be selected by their nearest BSs. However, this strategy can increase the peak load in some cells. Consequently, the available PRBs in the highly loaded cells can fall below the PRBs their users need, which limits the increase of network throughput. Conversely, if BSs hand over some high-rate users to neighbor cells to balance the cell load, a higher network throughput can potentially be achieved, because the neighbor cells can allocate more PRBs to low-rate users to compensate for the power loss without sacrificing the overall throughput. Thus, choosing a proper user association (via CIO adjustment) and power allocation (through antenna tilt) is important for both cell load balancing and network throughput maximization. Moreover, it is important to notice that choosing load balancing as the sole objective is not enough, because balancing the cell load can trigger user link drops which decrease the overall network throughput. Therefore, load balancing and throughput maximization are jointly considered as the objective in this paper. In addition, when the network operates with a low peak load, it is more robust to user mobility anomalies. For ease of discussion, we summarize the introduced notations in Table \ref{notation_tables}.
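The coupling above can be reproduced with a few lines. The helper below evaluates the per-cell load (\ref{cell_load}) and throughput (\ref{throughput}) for a toy association; the function name, traffic values and PRB cap are illustrative choices, not system parameters.

```python
import numpy as np

def cell_utilities(I, r, C, l_limit=10.0):
    """Per-cell load L_n (PRBs) and throughput R_n (bit/s).
    I: (N, K) 0/1 association matrix I_{n,k}; r: user rates (bit/s/PRB);
    C: user traffic demands (bit/s)."""
    l = np.minimum(C / r, l_limit)  # user load l_k = min(C_k / r_k, l_limit)
    return I @ l, I @ (r * l)       # L_n = sum I_{n,k} l_k,  R_n = sum I_{n,k} r_k l_k

# User 0 is load-capped: it demands 15 PRBs but may only occupy 10,
# so its served throughput drops from 30 to 20 bit/s.
I = np.array([[1, 0], [0, 1]])
L, R = cell_utilities(I, r=np.array([2.0, 4.0]), C=np.array([30.0, 6.0]))
```

The capped user shows the mechanism in the text: when a cell runs out of PRBs, throughput saturates, and handing such users to a lightly loaded neighbor can raise $\sum_n R_n$.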
\begin{table}[]
\centering
\caption{Notations}
\begin{tabular}{|l|l|l|}
\hline
Symbols & Units &Definition \\
\hline
$N$ & N/A &The number of BSs; also denotes the set of BSs \\
\hline
$K$ & N/A &The number of users; also denotes the set of users \\
\hline
$T$ & h/s &The antenna tilt period; also the RL action period and the time index set\\
\hline
$b_n(t)$ & N/A &The antenna tilt index of cell $n$\\
\hline
$I_{n,k}(t)$ & N/A & Association identification from user $k$ to BS $n$:\\
& &$I_{n,k}(t) = 1$ when BS $n$ is associated with user $k$, and $I_{n,k}(t) = 0$ otherwise\\
\hline
${\boldsymbol I}(t)$& N/A & User association matrix \\
\hline
${\boldsymbol b}(t)$& N/A & A vector by stacking the tilt index of all BSs \\
\hline
$L_n({\boldsymbol I}(t), {\boldsymbol b}(t))$ or $L_n(t)$ & PRB &The load of cell $n$ \\
\hline
${\boldsymbol l}(t)$& PRB &A vector by stacking the load of all cells: ${\boldsymbol l}(t) = [L_1(t), L_2(t), \cdots, L_N(t)]$\\
\hline
$R_n({\boldsymbol I}(t), {\boldsymbol b}(t))$ or $R_n(t)$ & bit/s &The throughput of cell $n$ \\
\hline
$U({\boldsymbol I}(t), {\boldsymbol b}(t))$ or $U(t)$ & N/A &The tradeoff utility of the overall network \\
\hline
$p_{n,k}(b_n(t))$ or $p_{n,k}(t)$ & mW &The $k$th user's received power from the $n$th cell.\\
\hline
$r_k(t) $ & bit/s/PRB &The rate of user $k$, a function of $p$\\
\hline
$l_k(t)$ & PRB &The load of user $k$, $l_k(t) := \min \{C_k(t)/r_k(t), l_{limit} \} $\\
\hline
$l_{limit}$ & PRB &The maximum load constraint for each user\\
\hline
$C_{k}(t)$ & bit/s &The traffic of user $k$, also known as the bit rate requirement\\
\hline
$O_{n, n'}(t)$ & mW &The CIO parameter between cell $n$ and cell $n'$\\
\hline
${\boldsymbol O}(t)$ & mW &The CIO matrix\\
\hline
$e_n(t)$ & N/A &The percentage of cell edge users of cell $n$\\
\hline
${\boldsymbol e}(t)$& N/A &A vector: ${\boldsymbol e}(t) = [e_1(t), e_2(t), \cdots, e_N(t)]$\\
\hline
\end{tabular}
\label{notation_tables}
\end{table}
\section{The Introduced Approach}
\label{sec4}
In this section, we first introduce some important concepts of reinforcement learning. Then, we present an RL formulation of the online joint optimization of cell load and throughput. Finally, we present the details of our algorithm flow.
\subsection{Preliminary on Reinforcement Learning}
From the perspective of online optimization, reinforcement learning can be viewed as an approximation to dynamic programming, with the major difference that RL can operate without a mathematical model of the environment. There are two components in the RL framework: an environment and an agent. RL algorithms control the agent to interact with the environment and accumulate rewards from it. The RL operation can generally be described as a Markov decision process (MDP), a 4-tuple ${\big (}S, A, P_{\boldsymbol a}({\boldsymbol s}(t)|{\boldsymbol s}(t-1)), r_{\boldsymbol a}({\boldsymbol s}(t)|{\boldsymbol s}(t-1)){\big )}$, where
\begin{itemize}
\item $S$ represents the state space, which serves as the interface from the environment to the RL agent.
\item $A$ is the action set at the RL agent. The RL agent takes actions to affect the environment and then observes a change of the states. Actions are taken according to a policy denoted as $\pi$.
\item $P_{\boldsymbol a}({\boldsymbol s}(t)|{\boldsymbol s}(t-1))$ represents the state transition probability from ${\boldsymbol s}({t-1}) \in S$ to ${\boldsymbol s}({t}) \in S$ under action ${\boldsymbol a} \in A$.
\item $r_{\boldsymbol a}({\boldsymbol s}(t)|{\boldsymbol s}({t-1}))$ is the immediate reward after action $\boldsymbol a$ is conducted.
\end{itemize}
The Markov property of the states is captured by the state transition probability $P_{\boldsymbol a}({\boldsymbol s}(t)|{\boldsymbol s}({t-1}))$. The RL agent takes actions in discrete time steps: at time $t$, the agent observes a state ${\boldsymbol s}(t)$ and selects an action ${\boldsymbol a}(t) \in A$ according to a policy $\pi$. The states are then updated to ${\boldsymbol s({t+1})}$, and the RL agent receives a feedback reward $r_{{\boldsymbol a}(t)}({\boldsymbol s}(t+1)|{\boldsymbol s}(t))$. This transition can be stacked as a 4-tuple ${\big(}{\boldsymbol s}(t), {\boldsymbol a}(t), r_{{\boldsymbol a}(t)}({\boldsymbol s}(t+1)|{\boldsymbol s}(t)),{\boldsymbol s}(t+1) {\big)}$, namely a record from the RL experiments. Importantly, the underlying state transition $P_{\boldsymbol a}({\boldsymbol s}(t)|{\boldsymbol s}(t-1))$ and reward $r_{\boldsymbol a}({\boldsymbol s}(t)|{\boldsymbol s}({t-1}))$ cannot be analytically characterized in most scenarios, so RL-based approaches fully rely on samples from these distributions.
The objective of an RL agent is to accumulate as much reward from the environment as possible in the near future. To achieve this goal, an RL algorithm has two features: 1) it can learn a policy that yields good rewards; 2) it can learn a criterion to evaluate whether the policy is good enough. Here, the criterion is called the ``value function'', which estimates the future accumulated reward obtained by following policy $\pi$ from a state $\boldsymbol s$,
\begin{equation}
\label{value_function}
V_{\pi}(\boldsymbol s)={\mathbb{E}}_{\pi}\left[\sum_{t=0}^{\infty} \gamma^{t} r_{{\boldsymbol a}(t)}({\boldsymbol s}(t+1)|{\boldsymbol s}(t)) | {\boldsymbol s}({0})={\boldsymbol s}\right],
\end{equation}
where $0\leq \gamma\leq 1$ is a discount factor introduced to characterize the uncertainty about the future environment and to avoid cyclic rewards, and ${\mathbb{E}}_{\pi}$ is taken over the states, rewards and policy. The policy is usually formulated as a conditional probability $\pi({\boldsymbol a}(t) |{\boldsymbol s}(t) )$ because of the fundamental trade-off between exploration and exploitation in online learning: exploitation takes the best action based on current experiments, while exploration gathers more information from the environment. Standard techniques to achieve this trade-off include $\epsilon$-greedy, decayed $\epsilon$-greedy and soft-max \cite{kaelbling1996reinforcement}.
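The exploration techniques just mentioned can be illustrated with a minimal sketch (a generic illustration, not part of our algorithm; the Q estimates and decay constants are made up):

```python
import math
import random

def epsilon_greedy(q_values, epsilon):
    """Pick the greedy action with probability 1 - epsilon, otherwise explore uniformly."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))                    # explore
    return max(range(len(q_values)), key=lambda a: q_values[a])   # exploit

def decayed_epsilon(step, eps_start=1.0, eps_end=0.05, decay=1e-3):
    """Shrink the exploration rate exponentially as experience accumulates."""
    return eps_end + (eps_start - eps_end) * math.exp(-decay * step)
```

With `epsilon = 0` the rule is purely greedy; the decayed variant gradually shifts the agent from exploration toward exploitation.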
With a slight modification to the value function, we can define the action-value function \cite{mnih2013playing},
\begin{align}
\label{Q_definition}
Q_{\pi}({{\boldsymbol s}, {\boldsymbol a}}) &= {\mathbb{E}}_{\pi}\left[\sum_{t=0}^{\infty} \gamma^{t} r_{{\boldsymbol a}(t)}({\boldsymbol s}(t+1)|{\boldsymbol s}(t)) | {\boldsymbol s}({0})={\boldsymbol s}, {\boldsymbol a}({0})={\boldsymbol a}\right]\\
&= {\mathbb{E}}_{\pi}\left[ r_{{\boldsymbol a}}({\boldsymbol s}(1)|{\boldsymbol s}) + \sum_{t=1}^{\infty}\gamma^{t} r_{{\boldsymbol a}(t)}({\boldsymbol s}(t+1)|{\boldsymbol s}(t)) \right]\\
&= {\mathbb{E}}\left[ r_{{\boldsymbol a}}({\boldsymbol s}(1)|{\boldsymbol s}) + \gamma Q_{\pi}({{\boldsymbol s(1)},{\boldsymbol a}({1}})) \right]
\label{recursive_form},
\end{align}
where the expectation in (\ref{recursive_form}) operates over the state ${\boldsymbol s}(1)$ and action ${\boldsymbol a}(1)$. The recursive form (\ref{recursive_form}) provides an unbiased estimate of $Q_{\pi}({{\boldsymbol s}, {\boldsymbol a}})$ and, together with (\ref{Q_definition}), is known as the Bellman equation. In general, by setting the starting time in (\ref{Q_definition}) to $t$, we immediately obtain an expression for $Q_{\pi}({\boldsymbol s}(t), {\boldsymbol a}(t))$.
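As a quick sanity check on the recursion (\ref{recursive_form}), the discounted return of a finite reward trajectory can be computed both directly, as in (\ref{value_function}), and via the one-step recursion $G_t = r_t + \gamma G_{t+1}$; both yield the same value (a toy deterministic example with made-up rewards):

```python
def discounted_return(rewards, gamma):
    """Direct computation: sum_t gamma^t * r_t."""
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

def discounted_return_recursive(rewards, gamma):
    """Bellman-style recursion: G_t = r_t + gamma * G_{t+1}, evaluated backward."""
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

rewards = [1.0, 0.5, 2.0, 0.0, 1.5]
direct = discounted_return(rewards, 0.9)
recursive = discounted_return_recursive(rewards, 0.9)
assert abs(direct - recursive) < 1e-12
```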
Moreover, we can observe that computing the $Q$ function requires statistics of the future, i.e., the formulation is non-causal. Therefore, to evaluate the current states and actions, we first have to estimate the $Q$ function. This is often achieved by solving the Bellman equation. In many RL algorithms, the solver is based on minimizing a temporal difference (TD) \cite{sutton1988learning}, defined as
\begin{align}
\label{TD}
TD(Q) = l(Q_{\pi}({{\boldsymbol s}(t), {\boldsymbol a}(t)}) - r_{{\boldsymbol a}(t)}({\boldsymbol s}(t+1)|{\boldsymbol s}(t)) - \gamma {\mathbb E}_{\pi'}[Q_{\pi'}'({{\boldsymbol s(t+1)},{\boldsymbol a}({t+1}}))] )
\end{align}
where $l$ represents a predefined loss function, such as ${\mathbb E}|\cdot|^2$, and $\pi'$ and $Q'$ are respectively named the target policy and target value function. In this paper, we consider deterministic-policy-based optimization, so the inner expectation ${\mathbb{E}}_{\pi'}$ can be dropped. The TD objective then becomes
\begin{align}
\label{TD1}
TD(Q) = l(Q_{\pi}({{\boldsymbol s}(t), {\boldsymbol a}(t)}) - r_{{\boldsymbol a}(t)}({\boldsymbol s}(t+1)|{\boldsymbol s}(t)) - \gamma Q_{\pi'}'({{\boldsymbol s(t+1)},{\boldsymbol a}({t+1}})) ).
\end{align}
We may further notice that the variable of $TD$ is $Q$ rather than $Q_{\pi}$. This is because we employ an \emph{off-policy} formulation, which allows learning the value function and the policy in a decoupled way. In general, off-policy learning can significantly improve data efficiency and offer fast convergence \cite{munos2016safe}. On-policy learning, on the other hand, relies on $Q_{\pi'}' = Q_{\pi}$ or ${\boldsymbol a}(t+1) \sim \pi$ in (\ref{TD}).
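A minimal numeric sketch of the off-policy TD objective (\ref{TD1}) with the squared loss $l(\cdot) = |\cdot|^2$ (all input values are hypothetical):

```python
def td_target(reward, next_q_target, gamma):
    """Off-policy TD target: r + gamma * Q'(s', a'), with a' drawn from the target policy."""
    return reward + gamma * next_q_target

def td_loss(q_current, reward, next_q_target, gamma):
    """Squared temporal difference, i.e. l(.) = |.|^2."""
    delta = q_current - td_target(reward, next_q_target, gamma)
    return delta ** 2

# Example: current estimate 2.0, reward 1.0, target-network value 1.5, gamma 0.9.
loss = td_loss(2.0, 1.0, 1.5, 0.9)   # (2.0 - (1.0 + 0.9*1.5))^2 = 0.1225
```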
Multiple objective reinforcement learning is defined by decomposing the total value function as a linear combination of multiple individual value functions. For example, in the case of two objectives,
\begin{align}
\label{value_function2}
V_{\pi}(\boldsymbol s) = w_1 V_{\pi}^{(1)}(\boldsymbol s) + w_2 V_{\pi}^{(2)}(\boldsymbol s)
\end{align}
where $V_{\pi}^{(1)}(\boldsymbol s)$ and $V_{\pi}^{(2)}(\boldsymbol s)$ are two separate value functions following the same definition in (\ref{value_function}); $w_1$ and $w_2$ are weights satisfying $w_1 + w_2 = 1$.
\subsection{RL Formulation}
\label{RL_form}
We follow the routine of MDP to formulate our network parameters optimization.
\subsubsection{Action Set $A$ }
The action set consists of the antenna tilt and CIO adjustments discussed in Section \ref{sec3}, i.e., at time $t$, ${\boldsymbol a}(t) = [{\boldsymbol b}(t), {\boldsymbol O}(t)]$. ${\boldsymbol O}(t)$ and ${\boldsymbol b}(t)$ are characterized as random processes, i.e., $\int P({\boldsymbol O}(t))d{\boldsymbol O}(t) = 1$ and $\int P({\boldsymbol b}(t))d{\boldsymbol b}(t) = 1$. The randomness of ${\boldsymbol O}(t)$ and ${\boldsymbol b}(t)$ stems from the fundamental trade-off between exploitation and exploration in online learning.
\subsubsection{State Space $S$}
We define the state vector as a stack of the cell loads and the ratios of cell edge users (denoted as ${\boldsymbol e}(t)$), i.e., ${\boldsymbol s}(t) = [{\boldsymbol l}(t), {\boldsymbol e}(t)]$. A user $k$ is counted as an edge user if its throughput is smaller than a pre-defined threshold. This handcrafted state definition essentially captures the features of the user geometry: the load reveals the density of users inside the cell, and the edge user ratio reflects the number of potential users for handover events. Compared with directly using user locations as the states, this reduced state dimension can significantly lower the training cost of feature extraction.
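The state construction above can be sketched as follows (a generic illustration; the cell IDs, per-user throughputs and edge threshold are made-up values):

```python
def build_state(cell_loads, user_throughputs, edge_threshold):
    """State s(t) = [l(t), e(t)]: per-cell loads followed by per-cell edge-user ratios.

    cell_loads:       {cell_id: load in PRBs}
    user_throughputs: {cell_id: [throughput of each associated user, bit/s]}
    edge_threshold:   throughput below which a user counts as an edge user
    """
    cells = sorted(cell_loads)
    loads = [cell_loads[n] for n in cells]
    edge_ratios = []
    for n in cells:
        tps = user_throughputs[n]
        edge = sum(1 for tp in tps if tp < edge_threshold)
        edge_ratios.append(edge / len(tps) if tps else 0.0)
    return loads + edge_ratios

# Two cells: loads 40 and 75 PRBs; 550 kbps edge threshold.
state = build_state({1: 40, 2: 75},
                    {1: [1.2e6, 0.3e6], 2: [0.4e6, 0.9e6, 0.5e6]},
                    5.5e5)
# state -> [40, 75, 0.5, 2/3]
```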
\subsubsection{Objective (Value Function)}
For convenience, we first let $\sum_n F(L_n({\boldsymbol I}(t),{\boldsymbol b}(t)))$ (briefly denoted $F(t)$) represent a function that measures the level of cell load balance. Then, we define the multiple objective optimization as
\begin{equation}
\label{optimization0}
\begin{aligned}
&\max_{\{{\boldsymbol I}(t): t>0\}, \{{\boldsymbol b}(t): t>0\} } {\mathbb E}[\sum_{t=0}^{\infty}\gamma^t {
R}(t)] + \lambda{\mathbb E}[\sum_{t=0}^{\infty}\gamma^t {F}(t) ]\\
&s.t.
\quad \boldsymbol{I}(t)=\{I_{n, k}(t): I_{n,k}(t) \in \{0, 1\}, n \in N, k \in K\}\\
&\qquad {\boldsymbol b}(t) = \{b_{n}(t):b_{n}(t) \in \{\theta_0, \theta_1, \cdots, \theta_{L-1} \}, n \in N\}\\
&\qquad {\boldsymbol I}(t) = {\boldsymbol I}(t') \qquad \forall t, t' \in (pT', (p+1)T'] \quad p \in {\mathbb N} \\
&\qquad {\boldsymbol b}(t) = {\boldsymbol b}(t')\qquad \forall t, t' \in (pT, (p+1)T] \quad p \in {\mathbb N}\\
\end{aligned}
\end{equation}
where the expectation $\mathbb E$ takes into account all the environmental randomness; $T$ is the minimum period for changing the antenna tilt, $T'$ represents the minimum handover time, and $t=0$ stands for the initial operation time.
To be consistent with our action space definition, we have an alternative formulation as follows,
\begin{equation}
\label{optimization1}
\begin{aligned}
&\max_{\{{\boldsymbol a}(t): t>0\}\sim \pi} {\mathbb E}_{\pi}[\sum_{t=0}^{\infty}\gamma^t {
R}(t)] + \lambda{\mathbb E}_{\pi}[\sum_{t=0}^{\infty}\gamma^t {F}(t) ]\\
&s.t.
\quad \boldsymbol{O}(t)=\{O_{n, n'}(t): O_{n,n'}(t)\in [O_{min}, O_{max}], n \in N, n' \in N\}\\
&\qquad {\boldsymbol b}(t) = \{b_{n}(t):b_{n}(t) \in \{\theta_0, \theta_1, \cdots, \theta_{L-1} \}, n \in N\}\\
&\qquad {\boldsymbol O}(t) = {\boldsymbol O}(t') \qquad {\boldsymbol b}(t) = {\boldsymbol b}(t')\qquad\\ &\qquad \forall t, t' \in (pT, (p+1)T] \quad p \in {\mathbb N}\\
\end{aligned}
\end{equation}
Comparing (\ref{optimization0}) to (\ref{optimization1}), we make the following changes,
\begin{itemize}
\item The variable ${\boldsymbol I}(t)$ is replaced by ${\boldsymbol O}(t)$, which instead determines the user association via inter-cell handovers. The algorithm is expected to automatically adjust ${\boldsymbol O}(t)$ to proactively change the user association.
\item The time constraint on the CIO is altered to $T$. We make this modification to synchronize the network adjustments for ease of RL-based action operations. However, the user association is still determined every $T'$ according to the definition of the A3 handover events.
\item We use ${\boldsymbol a}(t)$ to briefly represent the action policy on ${\boldsymbol O}(t)$ and ${\boldsymbol b}(t)$.
\end{itemize}
\subsubsection{Rewards}
We note that the objective in (\ref{optimization1}) can be merged as ${\mathbb E}[\sum_{t=0}^{\infty}\gamma^t {U}(t)]$, where $U(t) := R(t) + \lambda F(t)$. Following the definition of the single objective value function in (\ref{value_function}), we could directly set $U(t)$ as the reward. However, this simple treatment leads to the following ambiguity issue: suppose we obtain a reward $U$; $U$ can be composed either as $U = R_1 +\lambda F_1$ or as $U = R_2 +\lambda F_2$ with $R_1 \neq R_2$ and $F_1 \neq F_2$. This gives an intuitive understanding of why directly merging the two objectives into one scalar reward can cause convergence issues for RL-based approaches.
To solve the ambiguity issue, we consider defining the reward as a vector:
\begin{align}
{\boldsymbol r}_{\boldsymbol a}(t) = [r^{(1)}_{\boldsymbol a}(t), r^{(2)}_{\boldsymbol a}(t)] = [\sum_n R_n(t), F(t)].
\end{align}
Now, we scale the objective and define ${\boldsymbol w} = [w_1, w_2]$, where $w_1 = {1\over{1+\lambda}}$ and $w_2 = {\lambda\over{1+\lambda}}$. Then, we have the following optimization,
\begin{align}
\label{vector_rewards}
\max_{\{{\boldsymbol a}(t): t>0\}\sim \pi} w_1{\mathbb E}_{\pi}[\sum_{t=0}^{\infty}\gamma^t r^{(1)}_{\boldsymbol a}(t)] + w_2{\mathbb E}_{\pi}[\sum_{t=0}^{\infty}\gamma^t r^{(2)}_{\boldsymbol a}(t) ]\ .
\end{align}
We name the above formulation vector reward based multiple objective RL. Intuitively, we can take $F(t)$ to be the negative peak cell load, i.e., minus the maximum of the average cell load over a period $T$. Penalizing the peak load avoids over-loading particular cells, which in turn distributes user associations more evenly. Accordingly, the reward vector is defined as
\begin{align}
\label{vector_rewards_1}
{\boldsymbol r}_{\boldsymbol a}(t) = [{1\over T}\sum_{t\in T}\sum_{n\in N} R_n(t), -\max_n {1\over T}\sum_{t\in T}L_n(t)]
\end{align}
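Computing the vector reward (\ref{vector_rewards_1}) from per-period samples can be sketched as follows (the per-cell rate and load samples are made up for illustration):

```python
def vector_reward(rate_samples, load_samples):
    """Two-entry vector reward: [avg network throughput, -peak time-averaged cell load].

    rate_samples: list over time steps of per-cell throughputs [R_1, ..., R_N]
    load_samples: list over time steps of per-cell loads       [L_1, ..., L_N]
    """
    T = len(rate_samples)
    # Entry 1: time-averaged total network throughput.
    r1 = sum(sum(rates) for rates in rate_samples) / T
    # Entry 2: negative peak of the time-averaged per-cell load.
    n_cells = len(load_samples[0])
    avg_load = [sum(step[n] for step in load_samples) / T for n in range(n_cells)]
    r2 = -max(avg_load)
    return [r1, r2]

# Two cells, two time steps within one tilt period T.
r = vector_reward(rate_samples=[[10.0, 6.0], [12.0, 8.0]],
                  load_samples=[[50, 80], [70, 90]])
# r -> [18.0, -85.0]
```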
Moreover, we can consider a composite term to characterize the cell load balance. For instance, let $F(t) = -\max_n {1\over T}\sum_{t\in T}L_n(t) -\sigma(\{L_n(t)\}_{ n\in N, t\in T})$, where $\sigma$ is the standard deviation function. The two terms of $F(t)$ can be jointly treated as the last entry of the vector reward; more generally, the second term of $F(t)$ can be included in the vector reward as a new dimension. The optimization of the action-value function then becomes
\begin{align}
\label{vector_rewards2}
\max_{\{{\boldsymbol a}(t): t>0\}\sim \pi} w_1{\mathbb E}_{\pi}[\sum_{t=0}^{\infty}\gamma^t r^{(1)}_{\boldsymbol a}(t)] + w_2{\mathbb E}_{\pi}[\sum_{t=0}^{\infty}\gamma^t r^{(2)}_{\boldsymbol a}(t) ]+ w_3{\mathbb E}_{\pi}[\sum_{t=0}^{\infty}\gamma^t r^{(3)}_{\boldsymbol a}(t) ]\ .
\end{align}
Thus, the vector reward becomes,
\begin{align}
\label{vector_rewards_2}
{\boldsymbol r}_{\boldsymbol a}(t) = [{1\over T}\sum_{t\in T}\sum_{n\in N} R_n(t), -\max_n {1\over T}\sum_{t\in T}L_n(t), -\sigma(\{L_n(t)\}_{ n\in N, t\in T})]
\end{align}
Note that the above formulation still addresses the same multiple objective optimization defined in (\ref{optimization1}), but uses a vector reward with more entries. The basic idea of vector reward based RL is thus to map the features of the multiple objectives to the dimensions of the vector reward.
\subsection{Pareto Deterministic Policy Gradient}
\begin{figure}
\centering
\includegraphics[width = 0.8 \linewidth, height = 0.5 \linewidth]{figure_new/PDPG.PNG}
\caption{The framework of Pareto Deterministic Policy Gradient.}
\label{PDPG}
\end{figure}
Now, we consider how to solve vector reward based RL problems. For simplicity, we only elaborate the case of two-dimensional rewards; the extension to reward vectors with more than two dimensions is straightforward. According to the definition of the action-value function in (\ref{recursive_form}), we can rephrase (\ref{vector_rewards}) as
\begin{equation*}
\label{Q_value}
\begin{aligned}
Q_{\pi}({\boldsymbol s}, {\boldsymbol a}) &= w_1Q_{\pi}^{(1)}({\boldsymbol s}, {\boldsymbol a}) + w_2Q_{\pi}^{(2)}({\boldsymbol s}, {\boldsymbol a})\\
&= w_1{\mathbb{E}}\left[ r_{{\boldsymbol a}}^{(1)}({\boldsymbol s}(1)|{\boldsymbol s}) + \gamma Q_{\pi}^{(1)}({{\boldsymbol s(1)},{\boldsymbol a}({1}})) \right] + w_2{\mathbb{E}}\left[ r_{{\boldsymbol a}}^{(2)}({\boldsymbol s}(1)|{\boldsymbol s}) + \gamma Q_{\pi}^{(2)}({{\boldsymbol s(1)},{\boldsymbol a}({1}})) \right]
\end{aligned}
\end{equation*}
By rearranging the above equation, we have
\begin{align*}
w_1{\mathbb{E}}\left[(Q_{\pi}^{(1)}({\boldsymbol s}, {\boldsymbol a}) - r_{{\boldsymbol a}}^{(1)}({\boldsymbol s}(1)|{\boldsymbol s}) - \gamma Q_{\pi}^{(1)}({{\boldsymbol s(1)},{\boldsymbol a}({1}})) \right])
\\= -w_2{\mathbb{E}}\left[(Q_{\pi}^{(2)}({\boldsymbol s}, {\boldsymbol a}) - r_{{\boldsymbol a}}^{(2)}({\boldsymbol s}(1)|{\boldsymbol s}) - \gamma Q_{\pi}^{(2)}({{\boldsymbol s(1)},{\boldsymbol a}({1}})) \right]).
\end{align*}
This equation implies that the temporal differences on $Q^{(1)}$ and $Q^{(2)}$ are proportional to each other. Motivated by this fact, we define a new TD objective for the vector reward RL as
\begin{align}
\label{TD_objective}
TD(Q^{(1)}, Q^{(2)}) = w_1 TD^{(1)}(Q^{(1)}) + w_2 TD^{(2)}(Q^{(2)}) ,
\end{align}
where
\begin{align*}
TD^{(1)}(Q^{(1)}) = l(Q_{\pi}^{(1)}({\boldsymbol s}(t), {\boldsymbol a}(t))- r_{{\boldsymbol a}(t)}^{(1)}({\boldsymbol s}(t+1)|{\boldsymbol s}(t)) - \gamma {Q_{\pi'}'}^{(1)}({{\boldsymbol s(t+1)},{\boldsymbol a}({t+1}})))\\
TD^{(2)}(Q^{(2)}) = l(Q_{\pi}^{(2)}({\boldsymbol s}(t), {\boldsymbol a}(t))- r_{{\boldsymbol a}(t)}^{(2)}({\boldsymbol s}(t+1)|{\boldsymbol s}(t)) - \gamma {Q_{\pi'}'}^{(2)}({{\boldsymbol s(t+1)},{\boldsymbol a}({t+1}}))) .
\end{align*}
By minimizing $TD(Q^{(1)}, Q^{(2)})$, we can simultaneously optimize the temporal difference of $TD^{(1)}(Q^{(1)})$ and $TD^{(2)}(Q^{(2)})$ with weights $w_1$ and $w_2$ respectively.
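A numeric sketch of this weighted TD objective with the squared loss $l(\cdot) = |\cdot|^2$ (all input values are hypothetical):

```python
def weighted_td_loss(q1, q2, r1, r2, q1_next, q2_next, gamma, w1, w2):
    """Weighted sum of the two critics' squared temporal differences:
    TD(Q1, Q2) = w1 * TD1 + w2 * TD2, with l(.) = |.|^2."""
    td1 = (q1 - r1 - gamma * q1_next) ** 2
    td2 = (q2 - r2 - gamma * q2_next) ** 2
    return w1 * td1 + w2 * td2

loss = weighted_td_loss(q1=2.0, q2=1.0, r1=1.0, r2=0.5,
                        q1_next=1.5, q2_next=1.0, gamma=0.9,
                        w1=0.7, w2=0.3)
# td1 = (2.0 - 2.35)^2 = 0.1225, td2 = (1.0 - 1.4)^2 = 0.16
# loss = 0.7*0.1225 + 0.3*0.16 = 0.13375
```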
Moreover, we use neural networks to represent the $Q$ function and the policy $\pi$, denoted respectively as ${\mathcal Q}({\boldsymbol s}, {\boldsymbol a}|\theta_{\mathcal Q})$ and ${\mathcal A}({\boldsymbol s}|\theta_{\mathcal A})$. Overall, the learning of $\theta_{\mathcal Q}$ and $\theta_{\mathcal A}$ is briefly summarized as follows:
\begin{itemize}
\item Fix $\theta_{\mathcal A}$ and alternatively update $\theta_{{\mathcal Q}^{(1)}}$ and $\theta_{{\mathcal Q}^{(2)}}$ using the gradients of $TD(Q^{(1)}(\theta_{{\mathcal Q}^{(1)}}), Q^{(2)}(\theta_{{\mathcal Q}^{(2)}}))$.
\item Fix $\theta_{{\mathcal Q}^{(1)}}$ and $\theta_{{\mathcal Q}^{(2)}}$, and update $\theta_{\mathcal A}$ through the chain rule of $Q({\boldsymbol s}_t, {\mathcal A}({\boldsymbol s}_t|\theta_{\mathcal A}))$.
\item Repeat the above two stages until convergence.
\end{itemize}
This alternating updating rule is also called the actor-critic algorithm in the context of RL. As the name suggests, it operates in an actor-critic way: the actor (policy network) performs actions, and the critic (value network) evaluates the actions and critiques them when low reward values are received. In addition, the output dimension of ${\mathcal A}({\boldsymbol s}|\theta_{\mathcal A})$ is the same as the number of parameters defined in (\ref{optimization0}). Parameters with integer constraints are quantized afterward.
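The after-the-fact quantization of integer-constrained outputs can be sketched as follows (a generic illustration; the Tanh output range and the tilt angle grid are assumptions made for the example):

```python
def quantize_tilt(raw_output, tilt_angles):
    """Map a Tanh actor output in [-1, 1] to the nearest discrete tilt angle.

    raw_output:  scalar in [-1, 1] from the policy network
    tilt_angles: the admissible angles {theta_0, ..., theta_{L-1}}
    """
    L = len(tilt_angles)
    # Rescale [-1, 1] -> [0, L-1], then round to the nearest index and clamp.
    idx = round((raw_output + 1.0) / 2.0 * (L - 1))
    idx = max(0, min(L - 1, idx))
    return tilt_angles[idx]

angles = [0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20]  # 11 hypothetical tilt angles
assert quantize_tilt(-1.0, angles) == 0
assert quantize_tilt(1.0, angles) == 20
assert quantize_tilt(0.0, angles) == 10
```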
Overall, the method is summarized in Algorithm \ref{algorithm1}. We name it the Pareto deterministic policy gradient (PDPG) algorithm, as it optimizes a deterministic policy via gradient descent on the action-value objectives. The general framework for vector rewards with more than two dimensions is illustrated in Fig. \ref{PDPG}. Note also that our method optimizes a single policy network rather than cross-validating over multiple policy networks.
\begin{algorithm}
\caption{Pareto Deterministic Policy Gradient}
\label{algorithm1}
\begin{algorithmic}[1]
\label{algorithm_5}
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\REQUIRE Initial coefficients of the critic and actor networks: ${\boldsymbol c}_{\mathcal Q}$, ${\boldsymbol c}_{\mathcal A}$; discount factor: $\gamma$; soft update parameter: $\tau$; exploration random process: ${\mathcal N}(t)$; normalized tradeoff weight $w = {\lambda \over{1+\lambda}}$
\ENSURE Online action output: ${\boldsymbol a}(t) = [{\boldsymbol b}(t), {\boldsymbol O}(t)]$
\STATE \textit{Initialization} : Replay buffer $\mathcal R$; Critic network ${\mathcal Q}_{1}({\boldsymbol s},{\boldsymbol a} |\theta_{{\mathcal Q}_1} = {\boldsymbol c}_{\mathcal Q})$; Critic network 2 ${\mathcal Q}_{2}({\boldsymbol s},{\boldsymbol a} |\theta_{{\mathcal Q}_2} = {\boldsymbol c}_{\mathcal Q})$; Actor network ${\mathcal A} ({\boldsymbol s} |\theta_{\mathcal A} = {\boldsymbol c}_{\mathcal A})$; Target critic networks ${\mathcal Q}'_1({\boldsymbol s},{\boldsymbol a} |\theta_{{\mathcal Q'}_1} ={\boldsymbol c}_{\mathcal Q})$ and ${\mathcal Q}'_2({\boldsymbol s},{\boldsymbol a} |\theta_{{\mathcal Q'}_2} ={\boldsymbol c}_{\mathcal Q})$ and target actor network ${\mathcal A}' ({\boldsymbol s} |\theta_{\mathcal A'} = {\boldsymbol c}_{\mathcal A})$; $t = 0$
\FOR{$t$ until the ends}
\STATE Select action ${\boldsymbol a}(t) = {\mathcal A} ({\boldsymbol s}(t) |\theta_{\mathcal A} ) + {\mathcal N}(t)$
\STATE Render the environment and obtain $r^{(1)}(t)$, $r^{(2)}(t)$ and ${\boldsymbol s}(t+1)$
\STATE Queue $({\boldsymbol s}(t), {\boldsymbol a}(t), r^{(1)}(t), r^{(2)}(t), {\boldsymbol s}(t+1))$ into a buffer $\mathcal R$
\STATE Sample a minibatch $\Omega$ from $\mathcal R$
\STATE Set $\{y^{(1)}(t'): y^{(1)}(t') = r^{(1)}(t') + \gamma Q'_1({\boldsymbol s}(t'+1), {\mathcal A}'({\boldsymbol s}(t'+1)|\theta_{\mathcal A'})|\theta_{\mathcal Q'_1}) , t' \in \Omega\}$ and $\{y^{(2)}(t'): y^{(2)}(t') = r^{(2)}({t'}) + \gamma Q'_2({\boldsymbol s}({t'+1}), {\mathcal A}'({\boldsymbol s}({t'+1})|\theta_{\mathcal A'})|\theta_{\mathcal Q'_2}) , t' \in \Omega\}$
\STATE Update the two critic networks' coefficients $\theta_{{{\mathcal Q}_1}}$ and $\theta_{{{\mathcal Q}_2}}$ individually by respectively using the gradients $ \sum_{t'}\nabla_{\theta_{{\mathcal Q}_1}}l(y^{(1)}({t'}), {\mathcal Q}_1({\boldsymbol s}({t'}), {\boldsymbol a}({t'})|\theta_{{\mathcal Q}_1}))$ and $ \sum_{t'}\nabla_{\theta_{{\mathcal Q}_2}}l(y^{(2)}({t'}), {\mathcal Q}_2({\boldsymbol s}({t'}), {\boldsymbol a}({t'})|\theta_{{\mathcal Q}_2}))$
\STATE Update the actor network $\theta_{\mathcal A}$ using the gradient $w\sum_{t'} \nabla_{{\boldsymbol a}}Q_1({\boldsymbol s}(t'), {\mathcal A}({\boldsymbol s}(t')|\theta_{\mathcal A})|\theta_{{\mathcal Q}_1})\nabla_{\theta_{\mathcal A}} {\mathcal A}({\boldsymbol s}({t'})|\theta_{\mathcal A}) + (1-w)\sum_{t'} \nabla_{{\boldsymbol a}}Q_2({\boldsymbol s}({t'}), {\mathcal A}({\boldsymbol s}(t')|\theta_{\mathcal A})|\theta_{{\mathcal Q}_2})\nabla_{\theta_{\mathcal A}} {\mathcal A}({\boldsymbol s}({t'})|\theta_{\mathcal A})$
\STATE Soft-update the target networks:
\begin{equation}
\begin{array}{l}
\theta_{Q'_1}\leftarrow \tau \theta_{Q_1}+(1-\tau) \theta_{{Q^{\prime}_1}} \\
\theta_{Q'_2}\leftarrow \tau \theta_{Q_2}+(1-\tau) \theta_{{Q^{\prime}_2}} \\
\theta_{{\mathcal A}^{\prime}} \leftarrow \tau \theta_{\mathcal A}+(1-\tau) \theta_{{\mathcal A}^{\prime}}
\end{array}
\end{equation}
\ENDFOR
\end{algorithmic}
\end{algorithm}
In addition, the following experimental techniques are included to stabilize the convergence of our algorithm:
\begin{itemize}
\item Experience Replay: We use a replay buffer $\mathcal R$, a finite-sized cache recording the transition tuples. The replay buffer is a queue structure that times out the oldest tuples. At each learning step, the networks are updated by sampling a minibatch uniformly from the buffer. Because PDPG is an off-policy algorithm, a large replay buffer can store more uncorrelated transitions and thus improve the learning convergence.
\item Feature Scaling: We scale each entry of the state vector into a certain range and adjust the rewards through shifting and re-scaling. Our experiments show that this technique makes the network learning more effective.
\item Noise Exploration: We add an independent zero-mean Gaussian process with variance $\sigma$ to the output of the deterministic policy network to form an exploration policy, i.e., ${\boldsymbol a}(t) = {\mathcal A}({\boldsymbol s}(t)) + {\mathcal N}(t)$. Other random processes, such as the Ornstein-Uhlenbeck process, are also widely utilized in the RL literature \cite{lillicrap2015continuous}. Note that the transition history in the experience replay is based on this exploration policy rather than on our target deterministic policy.
\item Soft Target Update: In TD based learning, it is important to choose an updating rule for the target value function and policy that ensures stable convergence. We follow the ``soft'' target update strategy rather than directly copying the learned coefficients into the target networks. The weight $\tau$ is chosen as a very small value so that the target networks change slowly.
\end{itemize}
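Two of the techniques above, experience replay and the soft target update, can be sketched as follows (a generic, framework-independent illustration; the transition contents are placeholders):

```python
import random
from collections import deque

class ReplayBuffer:
    """Finite FIFO cache of transition tuples with uniform minibatch sampling."""
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)  # oldest tuples time out automatically

    def push(self, transition):
        self.buffer.append(transition)

    def sample(self, batch_size):
        return random.sample(list(self.buffer), min(batch_size, len(self.buffer)))

def soft_update(target_weights, online_weights, tau):
    """theta' <- tau * theta + (1 - tau) * theta', applied element-wise."""
    return [tau * w + (1.0 - tau) * w_t
            for w_t, w in zip(target_weights, online_weights)]

buf = ReplayBuffer(capacity=3)
for t in range(5):
    buf.push((t, t + 1))          # dummy (s, s') transitions
# Capacity 3: only the 3 newest transitions remain.
assert list(buf.buffer) == [(2, 3), (3, 4), (4, 5)]
assert soft_update([0.0], [1.0], tau=0.01) == [0.01]
```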
\section{Evaluation}
\label{sec6}
We evaluate our algorithm on a simulator developed based on 3GPP TS 38.331 and TS 38.213 \cite{5grrc, 5gphysical}. The simulation environment is a city area with FD-MIMO deployment at the BSs. In particular, we apply an RL agent to a $400$m $\times$ $400$m sub-area with 4 BSs. The RL control signals for the 4 cells are assumed to be synchronized since the cells neighbor each other. In the coverage area of the 4 cells, the average number of users is set to $80$. In the simulator, the user mobility data is generated according to predefined mobility models; by default we use the random waypoint (RWP) model, which assumes users walk in random directions with random step sizes. Given the users' locations and the BS antenna tilt angles, the downlink RSRPs are calculated from our measurement data, which is stored in a look-up table as a 3-mode tensor whose modes are the index of the anchor location, the tilt angle and the BS. The number of measurement anchor locations in the sub-area is 24,573. The number of tunable antenna tilt angles for each BS is $11$. The maximum time length of the mobility data is set to 200 days.
Given the SINR values on the user side, the corresponding CQI values can be obtained, and the user rate is then determined from a lookup table. In our simulator, the traffic model for each user is a constant bit rate (CBR), i.e., $C_k = 1$Mbps. The maximal load for each user is set to 6 PRBs. The total number of PRBs in one cell is 100. The CIO range is [$-12$dB, $12$dB]. When the associated users require more PRBs than the maximal PRBs per cell, the simulator runs a resource scheduling algorithm, where the eventual PRB allocation is proportional to the rate ranking of the scheduled users.
The parameters in Algorithm \ref{algorithm1} are configured in the intervals $\gamma \in [0.1,0.6]$ and $\tau \in [0.001,0.01]$. The variance of ${\mathcal N}(t)$ is set to $0.1$ at the beginning of the learning stage to drive the exploration policy; when the number of iterations exceeds a predefined threshold (our experimental value varies from $100$ to $500$), it is set to a smaller value to promote the exploitation policy. Both the actor network and the critic network are fully-connected feed-forward neural networks with three layers, where the activation functions of the intermediate layers are ReLU, the output activation of the actor network is the Tanh function, and the critic network output activation is linear. The numbers of neurons in the two intermediate layers are set to $50$ and $100$, respectively. For the RL state vector, the throughput threshold defining edge users is set to 550 kbps. Finally, in the RL algorithm, the period of antenna tilt and CIO changes is set to $2$ hours in terms of the mobility model.
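The three-layer feed-forward actor described above can be sketched as a plain forward pass (a minimal NumPy version with random, untrained weights; the state and action dimensions are assumptions made for the example):

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def actor_forward(state, weights):
    """Three-layer actor: two ReLU hidden layers, Tanh output bounded in [-1, 1]."""
    (W1, b1), (W2, b2), (W3, b3) = weights
    h1 = relu(state @ W1 + b1)    # first hidden layer (50 units)
    h2 = relu(h1 @ W2 + b2)       # second hidden layer (100 units)
    return np.tanh(h2 @ W3 + b3)  # bounded action output

rng = np.random.default_rng(0)
state_dim, action_dim = 8, 20     # e.g. 4 loads + 4 edge ratios; tilt/CIO outputs (assumed)
weights = [
    (rng.standard_normal((state_dim, 50)) * 0.1, np.zeros(50)),
    (rng.standard_normal((50, 100)) * 0.1, np.zeros(100)),
    (rng.standard_normal((100, action_dim)) * 0.1, np.zeros(action_dim)),
]
a = actor_forward(rng.standard_normal(state_dim), weights)
assert a.shape == (action_dim,) and np.all(np.abs(a) <= 1.0)
```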
\subsection{Comparison to Different RL Rewards}
We first evaluate the performance of different rewards in the RL algorithm. We consider the CDF curves of network throughput sampled every 15 minutes over 200 days of running the algorithms in the environment. In Fig. \ref{Throughput_CDF_RL_Alg}, the x-axis is the normalized network throughput, with the normalizer chosen as 100Mbps. The legend labels respectively represent: the DDPG algorithm \cite{lillicrap2015continuous} with scalar reward ${1\over T}\sum_{t\in T}\sum_{n\in N} R_n(t)$; DDPG with scalar reward $-\max_n {1\over T}\sum_{t\in T}L_n(t)$; DDPG with scalar reward $w_1{1\over T}\sum_{t\in T}\sum_{n\in N} R_n(t) - w_2\max_n {1\over T}\sum_{t\in T}L_n(t)$; our introduced PDPG algorithm with vector reward (\ref{vector_rewards_1}); PDPG with vector reward (\ref{vector_rewards_2}); and DDPG with scalar reward $w_1{1\over T}\sum_{t\in T}\sum_{n\in N} R_n(t) - w_2\max_n {1\over T}\sum_{t\in T}L_n(t) -w_3\sigma(\{L_n(t)\}_{ n\in N, t\in T})$. In this figure, all the weights inside the rewards are carefully cross-validated to optimize the throughput distribution. In our experience, the throughput distributions of the vector reward based methods differ little across combinations of $w_1$, $w_2$ and $w_3$, whereas scalar reward based approaches require more cross-validation tests to obtain the resulting distribution.
\begin{figure}
\centering
\includegraphics[width = 0.5 \linewidth, height = 0.4 \linewidth]{figure_new/throughput_cdf.png}
\caption{Throughput CDF of using different RL algorithms.}
\label{Throughput_CDF_RL_Alg}
\end{figure}
In Fig. \ref{cell_load_alg}, we show cell load box plots of the underlying 4 cells, corresponding to the same throughput results as Fig. \ref{Throughput_CDF_RL_Alg}. As we can see, the scalar reward with only the throughput term yields a less balanced cell load. Consequently, the unbalanced cell load hampers the overall throughput optimization, and the algorithm only converges to a local optimum with a smaller performance advantage. Meanwhile, the scalar reward with only the maximum-load term sacrifices the throughput gain in favor of a more balanced load, which also deviates from the joint optimization purpose. Using a linear combination of the previous scalar rewards to form another scalar reward can lead to convergence issues, as demonstrated in Fig. \ref{convergence_RL}; this is due to the ambiguity issue of mixing scalar rewards discussed in Section \ref{RL_form}. The convergence curves together with the throughput and cell load distributions corroborate the advantage of the vector reward based approaches. Overall, the vector reward based RL approaches achieve a more balanced cell load and a high throughput, jointly meeting our objective.
\begin{figure}
\centering
\includegraphics[width = 0.5 \linewidth, height = 0.4 \linewidth]{figure_new/cell_load.png}
\caption{Cell load result of using different RL algorithms.}
\label{cell_load_alg}
\end{figure}
\begin{figure}
\centering
\includegraphics[width = 0.5 \linewidth, height = 0.4 \linewidth]{figure_new/convergence.png}
\caption{RL convergence curves of using different rewards.}
\label{convergence_RL}
\end{figure}
\subsection{Comparison to Static Approach}
To make the evaluation self-contained, we add static-optimization-based brute-force algorithms to the comparison; the details of these algorithms can be found in the Appendix. The resulting CDF curves of the network throughput are presented in Fig. \ref{throughput_alg2} (see Algorithm \ref{algorithm_12} and Algorithm \ref{algorithm_13}). Regarding the labels in this figure, ``static optimization 1'' stands for Algorithm \ref{algorithm_13} and ``static optimization 2'' for Algorithm \ref{algorithm_12}, where for these two algorithms the antenna tilt angles may only change every 2 hours in terms of the mobility model; ``static optimization 3'' stands for Algorithm \ref{algorithm_13} using the best cross-validated $w$, with the antenna tilt angles allowed to change every 15 minutes. We observe that the CDF curve of our introduced RL method is very similar to that of static optimization 3. Here, the CDF curve of the RL algorithm is computed after the algorithm has converged (convergence is declared once the number of iterations exceeds 10000, as shown in Fig. \ref{convergence_alg2}). Fig. \ref{cell_load_alg2} shows that the RL approach achieves a cell load balance similar to the ideal static case, static optimization 3. The cell loads obtained from the other static optimization approaches have high variance, and the corresponding throughput performance deteriorates accordingly. Moreover, it is important to note that our solvers for the static optimization problem are still sub-optimal; the computational complexity of finding the true global optimum is prohibitively high. Overall, we believe the RL approach achieves good performance on cell load balancing as well as throughput maximization.
\begin{figure}
\centering
\includegraphics[width = 0.5 \linewidth, height = 0.4 \linewidth]{figure_new/throughput_cdf_2.png}
\caption{Throughput CDF of static optimization and vector reward based RL algorithm.}
\label{throughput_alg2}
\end{figure}
\begin{figure}
\centering
\includegraphics[width = 0.5 \linewidth, height = 0.4 \linewidth]{figure_new/cell_load_2.png}
\caption{Cell load distribution of static optimization and vector reward based RL algorithm.}
\label{cell_load_alg2}
\end{figure}
\begin{figure}
\centering
\includegraphics[width = 0.5 \linewidth, height = 0.4 \linewidth]{figure_new/convergence_2.png}
\caption{The throughput changing over time for static optimization and vector reward based RL algorithm. }
\label{convergence_alg2}
\end{figure}
\subsection{Mobility Models}
\label{mobility_model}
To fully evaluate our introduced RL algorithm, we test it using another mobility model: the self-similar least-action walk (SLAW) model \cite{SLAW_model}, which assumes that user mobility is based on clusters (users only move among these clusters). Fig. \ref{convergence_RL} shows the convergence of the vector-reward-based RL algorithm under the two mobility models. Consistent with our previous findings, using the vector reward requires less parameter re-tuning across different mobility models. This is because the vector-formed reward conveys the objective features to the RL value network in a more efficient way. Overall, we conclude that our introduced vector-reward-based RL algorithm is a robust learning approach with fewer cross-validation requirements.
\begin{figure}
\centering
\includegraphics[width = 0.5 \linewidth, height = 0.4 \linewidth]{figure_new/RL_mobility.png}
\caption{The time dynamics of RL convergence under the SLAW model, where the metric for the upper panel is the network peak load and that for the lower panel is the throughput.}
\label{time_dynamic}
\end{figure}
\section{Conclusion}
\label{sec7}
In this paper, we introduced a vector-reward-based multiple-objective reinforcement learning algorithm, which is utilized to jointly optimize the cell load balance and throughput in mobility management tasks. We chose FD-MIMO antenna tilt and handover CIO adjustment as the RL action space. In addition, the RL reward is designed as a vector, where each entry represents a feature of the joint optimization objective. Accordingly, to promote Pareto optimality, the introduced RL agent is configured with multiple value networks along with a single policy network. The weighting parameters on the multiple objectives are embedded into the value network, which is then used for guiding the policy network learning. Moreover, we developed a static formulation of the same joint optimization problem and compared it to our introduced RL algorithm. The algorithms are evaluated in several ways, including the cell load distribution, the throughput distribution, and the learning curves of the RL algorithms.
For future work, we can consider asynchronous-action-based multiple-objective learning. Meanwhile, we can include other handover protocols in the framework, such as inter-frequency handover based on A2 and A5 events. Moreover, other network features can be incorporated to test the performance of using high-dimensional vector rewards.
\section*{Acknowledgement}
\section{What \textit{is} an Interpretation of Quantum Mechanics?}
\sectionmark{What is an Interpretation of Quantum Mechanics?}
\label{sec:1}
Interpretations of quantum mechanics provide accounts of the meaning of the quantum formalism, guidance as to how to use the formalism to connect with nature and to make predictions on the outcome of experiments, and understanding of the counterintuitive aspects of the formalism. The first interpretation of quantum mechanics was the Copenhagen Interpretation, developed by Werner Heisenberg and Niels Bohr in the late 1920s. (See reference \cite{Cr86} for a complete description of and references for the Copenhagen Interpretation.) It quickly became the orthodox view of the meaning of the quantum formalism, and it is currently used in most quantum mechanics textbooks. However, its ambiguities have generated a large number of interpretational paradoxes associated with relativity conflicts, wave-particle duality, observer-dependent behavior, wave function collapse, and quantum nonlocality.
These problems generated by the Copenhagen Interpretation have led to a plethora of alternative ``interpretations", many of which are outlined in this book. The reader is cautioned, however, to examine each of these interpretations carefully to determine if it really qualifies for the status of a \textit{full interpretation}, in that it deals with \textit{all} of the many interpretational problems inherent in the standard quantum formalism or raised by aspects of the Copenhagen Interpretation.
One would think that clever experimentalists could go into the quantum optics laboratory and determine which of these interpretations is correct by testing their experimental predictions. However, this is not the case. It is the \textit{formalism} of quantum mechanics that makes the testable experimental predictions, and all of the many interpretations are attempting to give meaning to that same formalism. Thus, an interpretation could only be falsified if it was found to be inconsistent with the formalism of quantum mechanics, and otherwise the choice between interpretations becomes a matter of individual preference and philosophical aesthetics.
The interpretational problem of quantum nonlocality, which many would-be interpretations completely ignore, is a particularly difficult philosophical hurdle. Many interpretational attempts instead focus on some particular problem, e.g., wave function collapse, to the exclusion of other interpretational problems, including nonlocality. As we will see, the Transactional Interpretation is unique in providing a graphic picture of the mechanisms behind quantum nonlocality while dealing with all of the other interpretational problems as well.
\section{Quantum Nonlocality}
\sectionmark{Quantum Nonlocality}
\label{sec:2}
Quantum mechanics, our standard theoretical model of the physical world at the smallest scales of energy and size, differs from the classical mechanics of Newton that preceded it in one very important way. Newtonian systems are always \textit{local}. If a Newtonian system breaks up, each of its parts has a definite and well-defined energy, momentum, and angular momentum, parceled out at breakup by the system while respecting conservation laws. After the component subsystems are separated, the properties of any subsystem are completely independent and do not depend on those of the other subsystems.
On the other hand, quantum mechanics is \textit{nonlocal}, meaning that the component parts of a quantum system may continue to influence each other, even when they are well separated in space and out of speed-of-light contact. This characteristic of standard quantum theory was first pointed out by Albert Einstein and his colleagues Boris Podolsky and Nathan Rosen (EPR) in 1935, in a critical paper\cite{Ei35} in which they held up the discovered nonlocality as a devastating flaw that, it was claimed, demonstrated that the standard quantum formalism must be incomplete or wrong. Einstein called nonlocality ``spooky actions at a distance". Schr\"{o}dinger followed on the discovery of quantum nonlocality by showing in detail how the components of a multi-part quantum system must depend on each other, even when they are well separated\cite{Sc35}.
Beginning in 1972 with the pioneering experimental work of Stuart Freedman and John Clauser\cite{Fr72}, a series of quantum-optics EPR experiments testing Bell-inequality violations\cite{Be64} and other aspects of entangled quantum systems were performed. This body of experimental results can be taken as a demonstration that, like it or not, both quantum mechanics and the underlying reality it describes are intrinsically nonlocal. Einstein's spooky actions-at-a-distance are really out there in the physical world, whether we understand and accept them or not.\\
How and why is quantum mechanics nonlocal? Nonlocality comes from two seemingly conflicting aspects of the quantum formalism: (1) energy, momentum, and angular momentum, important properties of light and matter, are conserved in all quantum systems, in the sense that, in the absence of external forces and torques, their net values must remain unchanged as the system evolves, while (2) in the wave functions describing quantum systems, as required by Heisenberg's uncertainty principle\cite{He27}, the conserved quantities may be indefinite and unspecified and typically can span a large range of possible values. This non-specificity persists until a measurement is made that ``collapses" the wave function and fixes the measured quantities with specific values. These seemingly inconsistent requirements of (1) and (2) raise an important question: how can the wave functions describing the separated members of a system of particles, which may be light-years apart, have arbitrary and unspecified values for the conserved quantities and yet respect the conservation laws when the wave functions are collapsed?
This paradox is accommodated in the formalism of quantum mechanics because the quantum wave functions of particles are \textit{entangled}, the term coined by Schr\"{o}dinger\cite{Sc35} to mean that even when the wave functions describe system parts that are spatially separated and out of light-speed contact, the separate wave functions continue to depend on each other and cannot be separately specified. In particular, the conserved quantities in the system's parts (even though individually indefinite) must always add up to the values possessed by the overall quantum system before it separated into parts.
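The entanglement constraint described above can be made concrete with the standard two-spin singlet state, a generic textbook example (the numerical construction below is an illustration, not taken from the text): each spin is individually indefinite, yet the conserved total is exact and the outcomes are perfectly anti-correlated, however far apart the parts may be.

```python
import numpy as np

# Two spin-1/2 particles in the singlet (total spin 0) state, written in the
# product basis {|uu>, |ud>, |du>, |dd>}; spins in units of hbar/2.
singlet = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2.0)

sz1 = np.diag([1.0, 1.0, -1.0, -1.0])   # S_z of particle 1
sz2 = np.diag([1.0, -1.0, 1.0, -1.0])   # S_z of particle 2

# Each particle's spin is individually indefinite (expectation zero)...
assert np.isclose(singlet @ sz1 @ singlet, 0.0)
assert np.isclose(singlet @ sz2 @ singlet, 0.0)

# ...yet the conserved total S_z is exactly zero: the singlet is an
# eigenstate of sz1 + sz2 with eigenvalue 0, so the parts must always
# add up to the value the system had before separating.
assert np.allclose((sz1 + sz2) @ singlet, 0.0)

# And the individual measurement outcomes are perfectly anti-correlated.
assert np.isclose(singlet @ (sz1 @ sz2) @ singlet, -1.0)
```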
How could this entanglement and preservation of conservation laws possibly be arranged by Nature? The mathematics of quantum mechanics gives us no answers to this question, it only insists that the wave functions of separated parts of a quantum system do depend on each other. Theorists prone to abstraction have found it convenient to abandon the three-dimensional universe and describe such quantum systems as residing in a many-dimensional Hilbert hyper-space in which the conserved variables form extra dimensions and in which the interconnections between particle wave functions are represented as allowed sub-regions of the overall hyper-space. That has led to elegant mathematics, but it provides little assistance in visualizing what is really going on in the physical world.\\
Consider these questions:
\begin{itemize}
\item Is the quantum wave function a real object present in space-time?
\item What are the true roles of the observers and measurements in quantum processes?
\item What is wave function collapse?
\item How can quantum nonlocality be understood?
\item How can quantum nonlocality be visualized?
\item What are the underlying physical processes that make quantum nonlocality possible?
\end{itemize}
To our knowledge, the only interpretation that adequately answers these questions is the Transactional Interpretation of quantum mechanics\cite{Cr86,Cr88,Cr01}, which will be described in what follows.
\section{The One-Dimensional Transaction Model}
\sectionmark{The One-Dimensional Transaction Model}
\label{sec:3}
The starting point for the Transactional Interpretation of quantum mechanics is to view the ``normal" wave functions $\psi$ appearing in the wave-mechanics formalism as Wheeler-Feynman retarded waves, to view the complex-conjugated wave functions $\psi*$ as Wheeler-Feynman advanced waves, and to view the calculations in which they appear together as Wheeler-Feynman ``handshakes" between emitter and absorber\cite{Wh45}.
\begin{figure}
\center
\includegraphics[width=6 cm]{Fig-1.eps}
\caption{(color online) Schematic of the emission stage of a transaction. An emitter produces a retarded wave (solid) toward an absorber and an advanced wave (dashed) in the other time direction.}
\label{fig:1}
\end{figure}
While there are notable similarities between the Wheeler-Feynman time-symmetric approach to electrodynamics and this approach to the quantum formalism, there are also important differences. In the classical electrodynamics of Wheeler-Feynman, it is the advanced-wave responses from all of the absorbers in the future universe, arriving together back at the emitter that cause it to radiate, lose energy, and recoil during emission. There are no photons and there is no quantization of energy, and so there is no single future absorber that receives all of the energy and momentum that the emitter has transmitted. Further, the emitter is responding to the full intensity of the superimposed advanced-wave fields from the future in a completely deterministic way, losing energy and gaining recoil momentum as a moving electric charge responding to external electric and magnetic fields.
In the domain of quantum mechanics these rules must be changed to reflect quantization and the probabilistic nature of quantum mechanics. In the case of photon emission and absorption, an emitter emits a single photon, losing a quantum of energy and experiencing momentum recoil. An absorber receives a single photon, gaining a quantum of energy $\hbar\omega$ and experiencing momentum recoil $\hbar k$. The rest of the future universe does not explicitly participate in the process. If the wave function $\psi$ propagates for a significant distance before absorption, it becomes progressively weaker, much too weak to be consistent with the behavior of an electric charge simply responding to external fields as in classical Wheeler-Feynman electrodynamics.
\begin{figure}
\center
\includegraphics[width=7 cm]{Fig-2.eps}
\caption{(color online) Schematic of the confirmation stage of a transaction. An absorber responds with an advanced wave (dashed) back to the emitter and a retarded wave (solid) going forward in time beyond the absorber.}
\label{fig:2}
\end{figure}
As an intermediate conceptual step, it is useful to think about the quantum situation in a single space dimension $x$ and in one time dimension $t$, so that the attenuation of the wave function with distance can be put aside, for the moment. This is a wave-on-a-string situation in which the light cone becomes a diagonal Minkowski line connecting emitter to absorber, as shown in Fig.\ref{fig:1}. In the spirit of even-handed time symmetry, the emitter must simultaneously send out retarded wave function $F_{1}(x, t) = \psi = A\exp[i(kx-\omega t)]$ and advanced wave function $G_{1}(x, t) = \psi* = A\exp[-i(kx-\omega t)]$ in the two time and space directions, i.e., in both directions from the emitter along the Minkowski line. The energy and momentum eigenvalues of $ F_{1}$ are $\hbar\omega$ and $\hbar k$, while the eigenvalues of $G_{1}$ are $-\hbar\omega$ and $-\hbar k$. Therefore, the emission of the composite wave function $ F_{1}+ G_{1}$ involves no change in energy or momentum, i.e., it has no energy or momentum cost. This is to be expected, since the emission process is time-symmetric, and time-symmetric fields should not produce any time-asymmetric loss of energy or momentum.
The absorber at some later time receives the retarded wave $ F_{1}$ and terminates it by producing a canceling wave $F_{2} = -A \exp[i(kx-\omega t)]$, as shown in Fig. \ref{fig:2}. Because the absorber must respond in a time-symmetric way, it must also produce an advanced wave $G_{2} = -A \exp[-i(kx-\omega t)]$, which travels back along the Minkowski line until it reaches the emitter. At the emitter it exactly cancels the advanced wave $G_{1}$ that the emitter had produced in the negative time direction.
\begin{figure}
\center
\includegraphics[width=7 cm]{Fig-3.eps}
\caption{(color online) Schematic of the completed transaction. Extra waves cancel, leaving an advanced-retarded ``handshake" that transfers energy $\hbar\omega$ and momentum $\hbar k$ from emitter to absorber.}
\label{fig:3}
\end{figure}
The net result is that a superposition of $ F_{1}+G_{2}$ connects emitter with absorber, the emitting charge interacts with $G_{2}$ by losing energy $\hbar\omega$ and recoiling with momentum $-\hbar k$, and the absorbing charge interacts with $F_{1}$ by gaining energy $\hbar\omega$ and recoiling with momentum $\hbar k$. Due to the cancellations beyond the interaction points, there is no wave function on the Minkowski diagonal before emission or after absorption. A Wheeler-Feynman handshake, shown in Fig. \ref{fig:3}, has moved a quantum of energy $\hbar\omega$ and momentum $\hbar k$ from emitter to absorber. An observer, unaware of the time-symmetric processes involved, would say that a forward-going wave was emitted and subsequently absorbed.
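The cancellation bookkeeping above is easy to check numerically. A minimal sketch (the numeric values of $A$, $k$, and $\omega$ are arbitrary illustrations, not taken from the text):

```python
import numpy as np

# Plane-wave pieces of the one-dimensional transaction model.
A, k, w = 1.0, 2.0, 3.0   # illustrative amplitude, wavenumber, frequency

def F1(x, t):  # emitter's retarded offer wave
    return A * np.exp(1j * (k * x - w * t))

def G1(x, t):  # emitter's advanced wave, sent in the other time direction
    return A * np.exp(-1j * (k * x - w * t))

def F2(x, t):  # absorber's canceling retarded wave
    return -A * np.exp(1j * (k * x - w * t))

def G2(x, t):  # absorber's advanced confirmation wave
    return -A * np.exp(-1j * (k * x - w * t))

x = np.linspace(0.0, 10.0, 201)
t = 0.5

# F2 cancels F1 beyond the absorber; G2 cancels G1 behind the emitter.
assert np.allclose(F1(x, t) + F2(x, t), 0.0)
assert np.allclose(G1(x, t) + G2(x, t), 0.0)

# Between the vertices, the surviving superposition F1 + G2 is the
# handshake; algebraically it is the standing wave 2iA*sin(kx - wt).
handshake = F1(x, t) + G2(x, t)
assert np.allclose(handshake, 2j * A * np.sin(k * x - w * t))
```

The check only confirms the amplitude cancellations; the physical picture additionally restricts each wave to one side of its vertex, as in Figs. \ref{fig:1}--\ref{fig:3}.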
\section{The Three-Dimensional Transaction Model}
\sectionmark{The Three-Dimensional Transaction Model}
\label{sec:4}
Now let us consider the more realistic situation of three spatial dimensions and one time dimension. Now, assuming symmetric emission, the wave function $\psi$ spreads out in three dimensions like a bubble expanding from the central source location. The wave function, attenuated by distance, can reach many potential absorbers, each of which can respond by producing an advanced wave function $\psi*$ that, also attenuated by distance, travels back to the emitter. The emitter at the instant of emission can thus receive many advanced-wave ``echoes". In this way, guided by the quantum formalism, attenuation and competition have been added to this picture.
We can think of the retarded waves from the emitter as offers to transmit energy or ``offer waves", the first step in a handshake process that may ultimately produce the emission of a photon. Similarly, the advanced wave responses from potential absorbers can be thought of as ``confirmation waves", the second step in the handshake process to transfer a photon. The advanced waves travel in the negative time direction and arrive back at the emission space-time location \textit{at the instant of emission}, each with a strength $\psi$ that reflects the attenuation of the offer wave in traveling forward from emitter to absorber multiplied by a strength $\psi*$ that reflects the attenuation of the confirmation wave in traveling back from absorber to emitter.
Therefore, the emitter receives an ``echo" of magnitude $\psi_{i}\psi_{i}*$ from the $i^{th}$ potential future absorber. To proceed with the process, the emitter must ``choose" one (or none) of these offer-confirmation echoes as the initial basis for a photon-emission handshake or ``transaction", with the choice weighted in probability by the strength of each echo. After the choice is made, there must be repeated emitter-absorber wave exchanges, until the strength of the space-time standing wave that thus develops is sufficient in strength to transfer a quantum of energy $\hbar\omega$ and momentum $\hbar k$ from the emitter to the absorber, completing the transaction.
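The echo-weighted stochastic choice can be sketched as a Monte Carlo process. In the sketch below the absorber distances and the $1/r$ attenuation of the offer wave are illustrative assumptions; the point is that selection in proportion to the echo strength $\psi_{i}\psi_{i}*$ reproduces Born-rule frequencies:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical absorbers at distances r_i from an isotropic point emitter.
# In three dimensions the offer wave attenuates as 1/r, so the echo
# strength psi_i * conj(psi_i) arriving back at the emitter falls as 1/r**2.
r = np.array([1.0, 2.0, 4.0])          # illustrative distances
psi = 1.0 / r                          # offer-wave amplitude at each absorber
echo = psi * np.conj(psi)              # echo strength back at the emitter

# Stochastic choice: one absorber is selected with probability proportional
# to its echo strength -- i.e., to psi*psi, the Born-rule weight.
p = np.real(echo / echo.sum())
choices = rng.choice(len(r), size=100_000, p=p)

freq = np.bincount(choices, minlength=len(r)) / len(choices)
assert np.allclose(freq, p, atol=0.01)  # empirical frequencies track psi*psi
```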
As a criticism of this transaction model, it might be argued that while the quantum wave function $\psi$ is a solution of the Schr\"{o}dinger wave equation, its complex conjugate $\psi*$ is not, and therefore the transaction model is inappropriately mixing solutions with non-solutions. However, we observe that the Schr\"{o}dinger wave equation is inconsistent with Lorentz invariance and can be regarded as only the non-relativistic limit of the ``true" relativistic wave equation, i.e., the Klein-Gordon equation for bosons or the Dirac wave equation for fermions, both consistent with relativity. Taking the non-relativistic limit of the Klein-Gordon or Dirac wave equation produces \textit{two} wave equations, the Schr\"{o}dinger wave equation and its complex conjugate. The wave function $\psi$ is a solution of the Schr\"{o}dinger wave equation, while $\psi*$ is a solution of the complex conjugate of the Schr\"{o}dinger wave equation, and so both are equally valid solutions. The quantum version of the electromagnetic wave equation, which is relativistically invariant and is appropriate for describing the emission and absorption of photons, has both advanced and retarded solutions.
We note here that the sequence of stages in the emitter-absorber transaction presented here employs the semantic device of ``pseudo-time", describing a process between emitter and absorber extending across lightlike or timelike intervals of space-time as if it occurred in a time sequence external to the process. This is only a pedagogical convention for the purposes of description. The process itself is atemporal, and the only observables come from the superposition of all of the steps that form the final transaction.
This is the transaction model by which the Transactional Interpretation describes the elements of the wave-mechanics formalism and accounts for quantum mechanical processes. The wave functions $\psi$ of the wave-mechanics formalism are the offer waves. In some sense they are real waves traveling through space, but in another sense they are not real because they represent only a mathematical encoding of the \textit{possibility} of a quantum process. The transaction that forms after the emitter-absorber offer-confirmation exchange process goes to completion is the real object, what we would call the ``particle" that has been transferred from emitter to absorber. In that sense, the real objects in our universe are waves, while particles are an illusion created by the boundary conditions that must be observed at the vertices of the wave-exchange transactions.
What happens to the offer and confirmation waves that do not result in the formation of a transaction? Since the formation of a transaction produces all of the observable effects, such waves are ephemeral, in that they produce no observable effects, and their presence or absence has no physical consequences. However, in explaining seemingly paradoxical quantum phenomena such as interaction-free measurements\cite{El93,Cr06}, such waves can be viewed as ``feeling out" components of the system even when no transaction forms.
The transactional model not only provides a description of the process that underlies the calculation of a quantum mechanical matrix element, but it also explains and justifies Born's probability rule\cite{Bo26}. In particular, it explains why a quantum event described by a wave function $\psi$ has a probability of occurrence given by $\psi\psi*$. In the transaction model, the quantities $\psi\psi*$ are the strengths of the advanced-wave echoes arriving back at the site of emission at the instant of emission. The ``lightning strike" of a transaction formation depends probabilistically on the strengths of these echoes.
The Born probability rule is an assumption of the Copenhagen Interpretation, asserted axiomatically without justification as one of the tenets of the interpretation. On the other hand, the Born probability rule follows naturally from the transactional account of the Transactional Interpretation and does not need to be added as a separate assumption. In that sense, the Transactional Interpretation is superior to the Copenhagen Interpretation because it is more philosophically ``economical", requiring fewer independent assumptions.\\
There is one more element of the transaction model, \textit{hierarchy}, which needs to be added in order to avoid transactional inconsistencies pointed out by Maudlin\cite{Ma96}. All advanced-wave echoes are not equal. Those propagating back to the emitter from small space-time separation intervals rank higher in the selection hierarchy than those propagating back to the emitter from large space-time separation intervals. The emitter's probabilistic decision to select or not select an echo propagating back from a small space-time interval must occur ``before" any echoes from larger space-time intervals are considered and their transaction allowed to form.
This hierarchy of transaction formation has interesting implications for time itself in quantum processes. In some sense, the entire future of the universe is reflected in the formation of each transaction, with the echoes from time-distant future events allowed the possibility of forming transactions only after the echoes from near future absorbers have been weighed and rejected.
To make another analogy, the emergence of the unique present from the future of multiple possibilities, in the view of the Transactional Interpretation, is rather like the progressive formation of frost crystals on a cold windowpane, first nearby and then extending further out. As the frost pattern expands, there is no clear freeze-line, but rather a moving boundary, with fingers of frost reaching out well beyond the general trend, until ultimately the whole window pane is frozen into a fixed pattern. In the same way, the emergence of the present involves a lace-work of connections with the future and the past, insuring that the conservation laws are respected and the balances of energy and momentum are preserved.
\section{The Transactional Interpretation of Quantum Mechanics}
\sectionmark{The Transactional Interpretation of Quantum Mechanics}
\label{sec:5}
The Transactional Interpretation of quantum mechanics\cite{Cr86, Cr88,Cr01}, inspired by the structure of the quantum wave mechanics formalism itself, views each quantum event as a Wheeler-Feynman ``handshake" or ``transaction" process extending across space-time that involves the exchange of advanced and retarded quantum wave functions to enforce the conservation of certain quantities (energy, momentum, angular momentum, etc.). It asserts that each quantum transition forms in four stages: (1) emission, (2) response, (3) stochastic choice, and (4) repetition to completion.
The first stage of a quantum event is the emission of an ``offer wave" by the ``source", which is the object supplying the quantities transferred. The offer wave is the time-dependent retarded quantum wave function $\psi$, as used in standard quantum mechanics. It spreads through space-time until it encounters the ``absorber", the object receiving the conserved quantities.
The second stage of a quantum event is the response to the offer wave by any potential absorber (there may be many in a given event). Such an absorber produces an advanced ``confirmation wave" $\psi*$, the complex conjugate of the quantum offer wave function $\psi$. The confirmation wave travels in the reverse time direction and arrives back at the source at precisely the instant of emission with an amplitude of $\psi\psi*$.
The third stage of a quantum event is the stochastic choice that the source exercises in selecting one of the many received confirmations. The strengths $\psi\psi*$ of the advanced-wave ``echoes" determine which transaction forms in a linear probabilistic way.
The final stage of a quantum event is the repetition to completion of this process by the source and selected absorber, reinforcing the selected transaction repeatedly until the conserved quantities are transferred and the potential quantum event becomes real.\\
Here we summarize the principal elements of the Transactional Interpretation, structured in order to contrast it with the Copenhagen Interpretation:
\begin{itemize}
\item The fundamental quantum mechanical interaction is taken to be the transaction. The state vector $\psi$ of the quantum mechanical formalism is a physical wave with spatial extent and is identical with the initial ``offer wave" of the transaction. The complex conjugate of the state vector $\psi*$ is also a physical wave and is identical with the subsequent ``confirmation wave" of the transaction. The particle (photon, electron, etc.) and the collapsed state vector are identical with the completed transaction. The transaction may involve a single emitter and absorber and two vertices or multiple emitters and absorbers and many vertices, but is only complete when appropriate quantum boundary conditions are satisfied at all vertices, i.e., loci of emission and absorption. Particles transferred have no separate identity independent from the satisfaction of the boundary conditions at the vertices.
\item The correspondence of ``knowledge of the system" with the state vector $\psi$ is a fortuitous but deceptive consequence of the transaction, in that such knowledge must necessarily follow and describe the transaction.
\item Heisenberg's Uncertainty Principle\cite{He27} is a consequence of the fact that a transaction in going to completion is able to project out and localize only one of a pair of conjugate variables (e.g., position or momentum) from the offer wave, and in the process it delocalizes the other member of the pair, as required by the mathematics of Fourier analysis. Thus, the Uncertainty Principle is a consequence of the transactional model and is not a separate assumption.
\item Born's Probability Rule\cite{Bo26} is a consequence of the fact that the magnitude of the ``echo" received by the emitter, which initiates a transaction in a linear probabilistic way, has strength $P = \psi\psi*$. Thus, Born's Probability Rule is a consequence of the transactional model and is not a separate assumption of the interpretation.
\item All physical processes have equal status, with the observer, intelligent or otherwise, given no special status. Measurement and measuring apparatus have no special status, except that they happen to be processes that connect and provide information to observers.
\item Bohr's ``wholeness" of measurement and measured system exists, but is not related to any special character of measurements but rather to the connection between emitter and absorber through the transaction.
\item Bohr's ``complementarity" between conjugate variables exists, but like the uncertainty principle is just a manifestation of the requirement that a given transaction going to completion can project out only one of a pair of conjugate variables, as required by the mathematics of Fourier analysis.
\item Resort to the positivism of ``don't-ask-don't-tell" is unnecessary and undesirable. A distinction is made between observable and inferred quantities. The former are firm predictions of the overall theory and may be subjected to experimental verification. The latter, particularly those that are complex quantities, are not verifiable and are useful only for visualization, interpretational, and pedagogical purposes. It is assumed that both kinds of quantities must obey conservation laws, macroscopic causality conditions, relativistic invariance, etc.
\end{itemize}
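The Fourier trade-off invoked in the uncertainty and complementarity points above is a generic mathematical fact that can be checked directly. A minimal numerical sketch (units with $\hbar = 1$; grid parameters are illustrative): a Gaussian packet of position width $\sigma_x$ has momentum-space width $\sigma_k = 1/(2\sigma_x)$, so localizing one conjugate variable delocalizes the other.

```python
import numpy as np

# Gaussian wave packet of position width sigma_x; its Fourier transform
# has width sigma_k = 1/(2*sigma_x), the minimum-uncertainty product.
sigma_x = 0.25
x = np.linspace(-20.0, 20.0, 4096)
dx = x[1] - x[0]

psi = np.exp(-x**2 / (4.0 * sigma_x**2))
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)        # normalize

k = 2.0 * np.pi * np.fft.fftfreq(len(x), d=dx)     # conjugate wavenumbers
phi = np.fft.fft(psi)                              # momentum-space amplitude

prob_x = np.abs(psi)**2
prob_k = np.abs(phi)**2

sx = np.sqrt(np.sum(prob_x * x**2) * dx)               # position spread
sk = np.sqrt(np.sum(prob_k * k**2) / np.sum(prob_k))   # momentum spread

# The uncertainty product reaches its Fourier-analysis minimum of 1/2.
assert abs(sx * sk - 0.5) < 1e-2
```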
In summary, the Transactional Interpretation explains the origin of the major elements of the Copenhagen Interpretation while avoiding their paradoxical implications. It drops the positivism of the Copenhagen Interpretation as unnecessary, because the positivist curtain is no longer needed to hide the nonlocal backstage machinery.
It should also be pointed out that giving some level of objective reality to the state vector colors all of the other elements of the interpretation. Although in the Transactional Interpretation, the uncertainty principle and the statistical interpretation are formally the same as in the Copenhagen Interpretation, their philosophical implications, about which so much has been written from the Copenhagen viewpoint, may be rather different.
The Transactional Interpretation offers the possibility of resolving \textit{all} of the many interpretational paradoxes that quantum mechanics has accumulated over the years. Many of these are analyzed in reference \cite{Cr86}, the publication in which the Transactional Interpretation was introduced. Here we will not attempt to deal with all of the paradoxes. We will instead focus on the interpretational problems associated with quantum nonlocality and entanglement.
\section{The Transactional Interpretation and Nonlocality}
\sectionmark{The Transactional Interpretation and Nonlocality}
\label{sec:6}
As we discussed in Section \ref{sec:2}, quantum nonlocality is one of the principal counter-intuitive aspects of quantum mechanics. Einstein's ``spooky action-at-a-distance" is a real feature of quantum mechanics, but the quantum formalism and the orthodox Copenhagen Interpretation provide little assistance in understanding nonlocality or in visualizing what is going on in a nonlocal process. The Transactional Interpretation provides the tools for doing this.
Perhaps the first example of a nonlocality paradox is Einstein's bubble paradox, proposed by Albert Einstein at the 5th Solvay Conference in 1927\cite{Ja66}. A source emits a single photon isotropically, so that there is no preferred emission direction. According to the quantum formalism, this should produce a spherical wave function $\psi$ that expands like an inflating bubble centered on the source. At some later time, the photon is detected, and according to the quantum formalism, the bubble visualized as the wave function should ``pop", disappearing instantaneously from all locations except the position of the detector. Einstein asked how the parts of the wave function away from the detector could ``know" that they should disappear, and how it could be arranged that only a single photon was detected.
\begin{figure}
\center
\includegraphics[width=9 cm]{Fig-4.eps}
\caption{(color online) Schematic of the transaction involved in Einstein's bubble paradox. The offer wave $\psi$ forms a spherical wave front, reaching the detector on the right and causing it to return a confirmation wave $\psi*$, so that a transaction forms and a photon of energy is transferred. Other detectors also return confirmation waves, but the source has randomly selected the detector on the right for a transaction.}
\label{fig:4}
\end{figure}
At the 5th Solvay Conference, Werner Heisenberg\cite{Ja66} dismissed Einstein's bubble paradox by asserting that the wave function is not a real object moving through space, as Einstein had implicitly assumed, but instead is a mathematical representation of the knowledge of some observer who is watching the process. Until detection, the observer knows nothing about the location of the emitted photon, so the wave function must be spherical, distributed over the $4\pi$ solid angle to represent his ignorance. However, after detection the location of the photon is known to the observer, so the wave function ``collapses" and is localized at the detector. One photon is detected because only one photon was emitted.
The Transactional Interpretation provides an alternative explanation, one that permits the wave function to be, in some sense, a real object moving through space. This is illustrated in Fig. \ref{fig:4}. The offer wave $\psi$ from the source indeed spreads out as a spherical wave front and eventually encounters the detector on the right. The detector responds by returning to the source a confirmation wave $\psi*$. Other detectors (i.e., potential absorbers) also return confirmation waves, but the source randomly, weighted by the $\psi\psi*$ echoes from the potential absorbers, selects the detector on the right to form a transaction. The transaction forms between source and detector, and one $\hbar\omega$ photon's worth of energy is transferred from the source to the detector. The formation of this particular transaction, satisfying the source boundary condition that only one photon is emitted, prevents the formation of any other transaction to another possible photon absorber, so only one photon is detected. Further, the wave function bubble never pops, as Einstein had suggested. The unblocked parts of the offer wave keep moving outward and encountering more distant absorbers, which participate in the competition for transaction formation.
One might ask about the ``left over" offer and confirmation waves that do not participate in the formation of a transaction. However, we note that the first two stages (offer and confirmation) of transaction formation map directly into classical Wheeler-Feynman time-symmetric electrodynamics\cite{Wh45}, in which the advanced and retarded waves before emission and after absorption largely cancel out, leaving little in the way of residue.
This is an illustration of a simple two-vertex transaction in which the transfer of a single photon is implemented nonlocally. It avoids Heisenberg's peculiar assertion that the mathematical solution to a simple 2nd order differential equation involving momentum, energy, time, and space is somehow a map of the mind, deductions, and knowledge of a hypothetical observer.\\
As another example of nonlocality in action, consider the Freedman-Clauser experiment\cite{Fr72}. An atomic transition source produces a pair of polarization-entangled photons. The three-level atomic transitions that produce the two photons have a net orbital angular momentum change of $L=0$ and even parity, so, if the photons are emitted back-to-back, they must both be in the same state of circular polarization or linear polarization. Measurements on the photons with linear polarimeters in each arm of the experiment show that when the planes of the polarimeters are aligned, independent of the direction of alignment, the two polarimeters always measure HH or VV for the two linear polarization states.
When the plane of one polarimeter is rotated by an angle $\theta$ with respect to the other plane, some opposite-correlation HV and VH events creep in, and these grow as $1-\cos^{2}(\theta)$, which for small values of $\theta$ is proportional to $\theta^2$. This polarization correlation behavior produces a dramatic violation of the Bell inequalities\cite{Be64}, which for local hidden variable alternatives to standard quantum mechanics require a \textit{linear} growth in HV and VH with $\theta$. The implication of the Bell-inequality violations is that quantum nonlocality is required to explain the observed quadratic polarization correlations.
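The distinction between the quadratic quantum prediction and a linear local-hidden-variable law is a one-line computation. The sketch below (our own illustration with arbitrary angles, not data from the experiment) doubles the misalignment angle and watches the opposite-correlation rate quadruple:

```python
import math

def hv_fraction(theta):
    """Quantum prediction for the fraction of opposite-correlation
    (HV or VH) events at polarimeter misalignment angle theta."""
    return 1.0 - math.cos(theta) ** 2    # = sin^2(theta) ~ theta^2

ratio = hv_fraction(0.2) / hv_fraction(0.1)
# Doubling the angle quadruples the rate (quadratic growth); a local
# hidden-variable (linear) law would only double it.
print(round(ratio, 2))                   # -> 3.96
```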
\begin{figure}
\center
\includegraphics[width=11 cm]{Fig-5.eps}
\caption{(color online) Space-time schematic of a nonlocal ``V" transaction for visualizing the polarization-entangled Freedman-Clauser EPR experiment. Offer waves $\psi_{L}$ and $\psi_{R}$ (blue/solid) move from source to detectors, and in response, confirmation waves $\psi_{L}*$ and $\psi_{R}*$ (red/dashed) move from detectors to source. The three-vertex transaction can form only if angular momentum is conserved by having correlated and consistent measured polarizations for both detected photons.}
\label{fig:5}
\end{figure}
How are the nonlocality-based polarization correlations of the Freedman-Clauser experiment possible? The Transactional Interpretation provides a clear answer, which is illustrated in Fig. \ref{fig:5}. The source of the polarization-entangled photons seeks to emit a photon pair by sending out offer waves $\psi_{L}$ and $\psi_{R}$ to the left and right detectors. The detectors respond by returning confirmation waves $\psi_{L}*$ and $\psi_{R}*$ back to the source. A completed three-vertex transaction can form from these, however, only if the two potential detections are compatible with the conservation of angular momentum at the source. This requirement produces the observed polarization correlations.
There are also a large number of more complicated experiments that demonstrate the peculiarities of quantum nonlocality and entanglement in other ways, and which involve transactions with more than three vertices. One such example is the interaction-free measurement experiment of Elitzur and Vaidman\cite{El93}, which has been analyzed with the Transactional Interpretation\cite{Cr06}. In all such cases, the Transactional Interpretation provides a way of visualizing multi-vertex nonlocal processes that otherwise seem strange and counter-intuitive.
\section{ Do Quantum Wave Functions Exist in Real Three-Dimensional Space or Only in Hilbert Space?}
\sectionmark{ Do Quantum Wave Functions Exist in Real 3D Space?}
\label{sec:7}
In classical wave mechanics, propagating waves, e.g. light or sound waves, are viewed as existing in and propagating through normal three-dimensional space. However, early in the development of the formalism of quantum mechanics it was realized that there was a problem with treating the quantum wave functions of multi-particle systems in the same way. Because of conservation laws and entanglement, the uncollapsed wave function of each particle in such a system was not only a function of its own space and momentum coordinates and other variables (e.g. spin and angular momentum), but might also be dependent on the equivalent coordinates of the other particles in the system. For example, the momentum magnitude and direction of each particle of a quantum system might be unspecified and allowed to take on any value over a wide range, but their momenta must be correlated so that the overall momentum of the system can have a well defined value.
Therefore, it was concluded that Hilbert space provided a general way of describing quantum systems and that in multi-particle systems, a quantum mechanical wave function could not exist in simple three-dimensional space, but must instead reside in a higher Hilbert space of many more dimensions, with a dimension for each relevant variable. The wave function of a ``free" independent non-entangled particle in such a space simply ``traces over" the extra Hilbert space dimensions, allowing the extra variables to take on any value because there is no dependence on them. In such a Hilbert space the inter-dependences of multi-particle systems could be described, conservation laws could be defined as ``allowed regions" that the wave functions could occupy, and powerful mathematical operations appropriate to higher dimensional spaces could be applied to the quantum formalism. The assertion that quantum wave functions cannot be considered to exist in normal space and must be viewed as existing only in an abstract higher-dimensional space, of course, creates a severe roadblock for any attempt to visualize quantum processes. (We note that Ruth Kastner's ``Possibilist Transactional Interpretation"\cite{Ka12,Ka13} adopts this point of view and treats quantum wave functions as being real objects only in an abstract multidimensional Hilbert space, from which transactions emerge in real space. The possibilist approach is not incorrect, but we consider it to be unnecessarily abstract.)
The ``standard" Transactional Interpretation, with its insights into the mechanism behind wave function collapse through multi-vertex transaction formation, provides a new view of the situation that makes the retreat to Hilbert space unnecessary. The offer wave for each particle can be considered as the wave function of a free particle, initially free of the constraints of conservation laws and independent of the characteristics of other particles, and can be viewed as existing in normal three dimensional space. The connections between an ensemble of such free particles is only established when the multi-vertex transaction forms. The application of conservation laws and the influence of the variables of the other particles of the system comes not in the initial offer wave stage of the process but in the final formation of the transaction. The transaction ``knits together" the various otherwise independent particle wave functions that can span a wide range of possible parameter values into an interaction, and only those wave function sub-components that correlate to satisfy the conservation law boundary conditions can participate in the final multi-vertex transaction formation. The ``allowed zones" of Hilbert space arise from the action of transaction formation, not from constraints on the initial offer waves, i.e., particle wave functions. Hilbert space is the map, not the territory.
Thus, the assertion that the quantum wave functions of individual particles in a multi-particle quantum system cannot exist in ordinary three-dimensional space might be a misinterpretation of the role of Hilbert space, the application of conservation laws, and the origins of entanglement. Offer waves are somewhat ephemeral three-dimensional space objects, but only those components of the offer wave that satisfy conservation laws and entanglement criteria are permitted to be projected out in the final transaction, which also exists in three-dimensional space.
Another interesting question, relevant to the current need for a yet-unknown theory of quantum gravity, is whether the Transactional Interpretation would be consistent with the existence of a ``universal" quantum wave function that could describe the state of the entire universe. The Copenhagen Interpretation, with its focus on observers, has a severe problem with a universal wave function that would be interpreted as a description of observer knowledge and would require an observer outside the universe to collapse it. The Transactional Interpretation, which is independent of observers and observer knowledge, has no such problems. Further, it is relativistically invariant, and therefore could, in principle, be extended to a theory of quantum gravity, should one that used wave functions emerge from the current theoretical effort.
The two examples of nonlocality-based \textit{gedankenexperiments} presented here provide only a sample of the power of the Transactional Interpretation in analyzing the complex and counter-intuitive experimental results that seem to be emerging from experimental quantum optics at an exponentially increasing rate. The transactional analysis of interaction-free measurements\cite{El93,Cr06} is another example of the power of the method. However, as the experiments become more complex, the analysis inevitably becomes more elaborate and difficult to follow. Therefore, for the purposes of the present discussion, we will confine ourselves to the above two analysis examples.
\section{Conclusion}
\sectionmark{Conclusion}
\label{sec:8}
Is free will possible in such a system? It is our view that it is. Freedom of choice does not include the freedom to choose to violate physical laws. The transactional handshakes between present and future are acting to enforce physical laws, and they restrict the choices between future possibilities only to that extent.
By analogy, when you present a debit card to purchase groceries, there is a nearly-instantaneous electronic transaction between the cash register and the bank that deducts the purchase cost from your bank account and ensures that you have sufficient funds for the purchase. The bank transaction enforces some ``law of conservation of money" as applied to your finances. But the transaction is not deterministic, and in particular it does not determine what you buy, only that you can afford what you have bought. This is similar to what goes on in a photon emission-absorption transaction, the transaction ensuring that precisely one photon-worth of energy and momentum is deducted from one system and added to another system.\\
We conclude that the Transactional Interpretation does not require (but \textit{is} consistent with) a deterministic block universe. It does, however, imply that the emergence of present reality from future possibility is a far more hierarchical and complex process than we have previously been able to imagine.
We have seen that the Transactional Interpretation of quantum mechanics provides the tools for understanding the many counterintuitive aspects of the quantum formalism and for visualizing nonlocal quantum processes. Further, the transaction model is ``visible" in the quantum formalism itself, once one associates the wave function $\psi$ with an offer, the conjugated wave function $\psi*$ with a confirmation, and quantum matrix elements with completed transactions.
To our knowledge, the Transactional Interpretation of quantum mechanics is the only interpretation that adequately answers the questions arising from quantum nonlocality and entanglement and also deals with all of the other interpretational problems of the quantum formalism.
\begin{acknowledgement}
The author would like to thank the many people who, over the years, have made comments and raised interesting questions that can be addressed with the Transactional Interpretation. These include the late Sir Rudolph Peierls, the late John Wheeler, John Clauser, Rudolph M\"{o}ssbauer, Avshalom Elitzur, Ruth Kastner, Gerald Miller, Vince Feng, James Woodward, Heidi Fearn, and Nick Herbert.
\end{acknowledgement}
\section{Introduction}
The investigation of the electromagnetic properties of a relativistic
plasma in a strong external magnetic field is interesting both from a
theoretical point of view and for possible applications in cosmology
and astrophysics. Large magnetic fields are known to be present in
neutron stars \cite{neustar}, where $B \sim 10^{12}$ G, in supernovas
\cite{supernova}, where $B \sim 10^{14}$ G, and several models foresee
very large magnetic fields in the early Universe \cite{Kronberg}.
When such strong magnetic fields are present the propagation of
electromagnetic waves in a plasma is considerably modified with
respect to the free field case. These strong fields can be not
treated like a perturbation. Indeed, the tensorial structure of the
polarization tensor need to be completely reconsidered \cite{Rojas}.
According to the pioneering approach of Fradkin \cite{Fradkin} one of
our aims is to relate the electromagnetic properties of a QED plasma
in a strong magnetic field to the thermodynamic properties of the
system. This should give the reader a physically more transparent
interpretation of the results that are already partially present in
the literature, as well as of the new results that we obtain here.
All the thermodynamic properties of a QED plasma can be derived
starting from the effective Lagrangian of the system. Assuming ${\bf
B(x)} = (0,0,B)$ the Lagrangian can be written
\begin{equation}
{\cal L}^{\rm eff} = {\cal L}^{\rm eff}(B) + {\cal L}^{\rm eff}(B,T,\mu) ,
\end{equation}
where the vacuum contribution at the one loop level is
\cite{Schwinger}
\begin{equation}
{\cal L}^{{\rm eff}} (B)=-\frac{1}{8\pi^2} \int_0^\infty \frac{ds}{s^3}
\left[ eBs \coth(eBs)-1-\frac13 (eBs)^2 \right] \exp(-m^2s) ,
\end{equation}
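As a numerical sanity check (our own sketch, not part of the original derivation; units $\hbar = c = m = 1$, with $b \equiv eB/m^2$ chosen arbitrarily small), the proper-time integral above can be evaluated numerically and compared with its known weak-field (Heisenberg--Euler) limit ${\cal L}^{\rm eff}(B) \simeq (eB)^4/360\pi^2 m^4$:

```python
import math

def integrand(s, b):
    """Proper-time integrand of the one-loop vacuum Lagrangian
    (units hbar = c = m = 1, b = eB/m^2)."""
    x = b * s
    if x < 0.05:
        # series of x*coth(x) - 1 - x^2/3, avoids cancellation at small x
        bracket = -x**4 / 45.0 + 2.0 * x**6 / 945.0
    else:
        bracket = x * math.cosh(x) / math.sinh(x) - 1.0 - x * x / 3.0
    return -bracket * math.exp(-s) / (8.0 * math.pi**2 * s**3)

def l_vacuum(b, smax=60.0, n=50000):
    """Composite trapezoid; the integrand vanishes smoothly at s = 0."""
    h = smax / n
    total = 0.5 * integrand(smax, b)
    for i in range(1, n):
        total += integrand(i * h, b)
    return total * h

b = 0.05                                 # weak field, eB = 0.05 m^2
weak = b**4 / (360.0 * math.pi**2)       # Heisenberg-Euler limit
ratio = l_vacuum(b) / weak
print(round(ratio, 3))                   # -> 0.999, deviation of order b^2
```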
whereas the one-loop matter contribution is
\begin{eqnarray}
\lefteqn{{\cal L}^{\rm eff}(B,\mu,T) = \frac {\ln Z}{\beta V}=}
\nonumber \\
&&\frac 1 {\beta} \frac{|e|B}{2\pi^2}
\sum_{n=\hat 0}^\infty
\int_{-\infty}^\infty d\!k_{3} \left\{
\ln[ 1+e^{-\beta(E_n(k_{3})-\mu)}] +
\ln[ 1+e^{-\beta(E_n(k_{3})+\mu)}] \right\}
\label{lbt}.
\end{eqnarray}
Here
\begin{equation}
E_n(k_{3}) = \sqrt{k_{3}^2 + 2neB + m^2}
\end{equation}
is the energy of the $n$-th Landau level at tree level. The sum
$\sum_{n=\hat 0} \equiv {1 \over 2 } \sum_n (2 -
\delta_{n0})$
takes into account the double spin degeneracy of the Landau levels
with the exception of the lowest. Equation (\ref{lbt}) follows from a
simple phase space consideration {\footnote{ Although Eq.(\ref{lbt})
can be obtained from the fermion propagator as done in
refs.\cite{Dittrich}\cite{Elmfors93}, we prefer to adopt here this
simpler physical interpretation.}}. In fact, due to the coalescing of
the transverse momentum states into those of a two-dimensional
harmonic oscillator, the number of available states for any given
value of $n$ is
\begin{equation}
\frac {V|e|B}{(2\pi)^2}(2 - \delta_{n0})\,dk_{3}~~~.
\label{phasesp}
\end{equation}
One of the most important effects of the modification of the electron
phase-space is the ``pair generation" that takes place when magnetic
fields larger than $B_c \equiv m^2/e$ are applied to the plasma. We
use quotation marks in order not to confuse the reader about the
meaning of pair generation in this context. Clearly, a constant
magnetic field cannot induce pair generation from the
vacuum. However, things can be different at finite temperature and/or
density. Indeed, the number density of electrons+positrons is
\cite{Miller}
\begin{equation}
n = \frac {|e|B}{2\pi^2} \sum_{n=\hat 0}^\infty \int_{-\infty}^\infty
dk_{3} \left\{ \frac 1 {1+e^{\beta (E_n(k_{3})-\mu)}} + \frac
1 {1+e^{\beta(E_n(k_{3})+\mu)}} \right\}~~~. \label{npairs}
\end{equation}
It increases roughly linearly with B when $B \gg B_c$ and $T
\ut{<} (|e|B)^{1/2}$. This phenomenon can be understood as a
shift in the equilibrium of photons and pairs \cite{Dittrich}: the
equilibrium of the process $e^+e^- \leftrightarrow \gamma$ moves to
the left owing to the growth in the number of available states that
electrons and positrons can occupy in the lowest Landau level. Thanks
to the amplification of its phase space, this level is practically the
only occupied one when $eB \gg T^2$. Since $B$ modifies the density
of the charge carriers, it is reasonable to expect the
electromagnetic properties of the plasma to be affected by strong
magnetic fields. The charge neutrality of the plasma is preserved,
provided that $\mu = 0$, since the increases in the electron and
positron number densities balance each other. Indeed, the charge
density is
\begin{eqnarray}
\lefteqn{\rho(\mu) = \frac {\partial {\cal L}^{\rm eff}(B,T,\mu)}
{\partial \mu} = }\\ && \frac {|e|B}{2\pi^2} \sum_{n=\hat 0}^\infty
\int_{-\infty}^\infty dk_{3} \left\{ \frac 1 {1+e^{\beta
(E_n(k_{3})-\mu)}} - \frac 1 {1+e^{\beta(E_n(k_{3})+\mu)}} \right\}
\label{pairs}\ .\nonumber
\end{eqnarray}
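The near-linear growth of the pair density with overcritical fields can be checked directly from Eq.~(\ref{npairs}). The sketch below is our own illustration (units $m = 1$, $\mu = 0$, $\beta = 1$, $b \equiv eB/m^2$, with the Landau sum truncated once a level is thermally frozen out); it shows that doubling a strongly overcritical field doubles $n$:

```python
import math

def pair_density(b, beta=1.0, mu=0.0, kmax=40.0, nk=2000):
    """Electron+positron density, Eq. (npairs); units m = 1, b = eB/m^2.
    The Landau sum is truncated once a level is thermally frozen out."""
    h = kmax / nk
    total, n = 0.0, 0
    while True:
        msq = 1.0 + 2.0 * b * n               # m^2 + 2eBn
        if beta * math.sqrt(msq) > 40.0:      # exp(-40): negligible
            break
        deg = 0.5 * (2.0 - (1.0 if n == 0 else 0.0))  # sum_{n = 0-hat}
        acc = 0.0
        for i in range(nk + 1):
            k = i * h
            e = math.sqrt(k * k + msq)
            f = 1.0 / (1.0 + math.exp(beta * (e - mu))) \
              + 1.0 / (1.0 + math.exp(beta * (e + mu)))
            acc += f * (0.5 if i in (0, nk) else 1.0)
        total += deg * 2.0 * acc * h          # k3 over the full line
        n += 1
    return b * total / (2.0 * math.pi ** 2)

# For strongly overcritical fields only the lowest Landau level is
# populated, so doubling B doubles the density:
r = pair_density(100.0) / pair_density(50.0)
print(round(r, 2))                            # -> 2.0
```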
The magnetization plays an essential role in determining the screening
properties of the plasma. Like the energy density, the magnetization
${\bf M}$ can be obtained starting from the effective Lagrangian
\begin{equation}
{\bf M} = \frac {\partial {\cal L}}{\partial {\bf B}} =
\frac {\partial {\cal L}^{\rm eff}(B)}{\partial {\bf B}} +
\frac {\partial {\cal L}^{\rm eff}(B,T,\mu)}{\partial {\bf B}}~~~.
\end{equation}
For example, the matter contribution to the magnetization in the limit
$T = 0$ is (we refer the reader to ref. \cite{Elmfors93} for a more
general expression) $$ M(B,0,\mu) = $$
\begin{equation}
{|e| \over 2 \pi ^{2}}
\sum _{n=\hat{0}}^{\left[ {\mu ^{2}-m^{2} \over 2eB} \right]}
\left( \mu \sqrt{\mu ^{2} -m^{2}-2eBn} -
(m^{2} +4eBn) \log {\mu + \sqrt{\mu ^{2} -m^{2} -2eBn} \over
\sqrt{m^{2}+2eBn}} \right) . \label{magn}
\end{equation}
Note that, in contrast to what happens for $\rho$, the magnetization
receives a contribution from the vacuum part of the effective
Lagrangian. In other words, the vacuum polarizes in strong magnetic
fields. The relative strength of the vacuum and matter polarization
has been evaluated in ref.\cite{Elmfors93}. The vacuum contribution
to the magnetization exceeds the matter contribution if $eB \gg
T^2,~|\mu^2 - m^2|$. We do not consider here the other vacuum
contributions to the polarization tensor since they are already widely
reported in the literature (see e.g. ref. \cite{Dit-Reu}).
In the next section we will show that the components of the
polarization tensor can be related to the thermodynamic quantities
$\rho$, ${\bf M}$ and the magnetic susceptibility $\chi = \partial
M/\partial B$. In section 3 we write the general expression of the
polarization tensor in terms of the fermion propagator in an external
magnetic field. In section 4 we apply this expression to get the
components of $\Pi^{\mu\nu}$ in the static limit and verify that they
coincide with the results obtained in section 2. In section 5 we give
examples of computations of some components of $\Pi^{\mu\nu}$ beyond
the static limit. Finally, section 6 contains our conclusions.
\section{The screening operator in terms of ${\cal L}^{\rm eff}$}
The polarization operator for a QED plasma is defined by
\begin{equation}
\Pi_{\mu\nu}(x,x') \equiv i~\frac {\delta <j_\mu(x')> }
{\delta A_\nu(x)} = i~\frac {\delta}{\delta A_\nu(x)}\,
\frac {\delta \Gamma^{\rm eff}}{\delta A_\mu(x')}~~~.
\label{def}
\end{equation}
where $\Gamma^{\rm eff} = \int d^4x\,{\cal L}^{\rm eff}$ is the effective
action. If the plasma has a non-vanishing chemical potential the tree
level Lagrangian is
\begin{equation}
{\cal L} = -\frac14 F_{\mu\nu}F^{\mu\nu} +\bar{\psi}(i {\partial
\kern-0.5em/} -e {A\kern-0.5em/} - \gamma_0 \mu -m)\psi~~~.
\label{tree}
\end{equation}
The form of Eq. (\ref{def}) remains unchanged if we replace $A(x)$
with $\tilde A(x) \equiv (A^0 + \frac 1e \mu, {\bf A})$. Note that
only this combination of the chemical potential and the vector
potential is physically meaningful
\cite{Elmfors94}. In fact, whenever $A_0$ is changed, the
polarization of the plasma rearranges itself so as to fulfil $\mu + eA_0 = {\rm
const.}$ at the equilibrium \cite{Kapusta}. Thus, $\mu$ is determined
by the charge distribution of the plasma, and its value may depend on
the position. It is then convenient to work in the gauge $A_0 = 0$ in
order to have a well defined $\tilde A(x) = \mu = const.$ for the
whole plasma.
As a first application of the above formula we compute the limit for
$x \rightarrow x'$ of $\Pi^{00}$. We then have
\begin{eqnarray}
\Pi^{00}(B,T,\mu) &=&e^2 \frac {\partial^2 {\cal L}^{\rm
eff}(B,T,\mu)} {\partial \mu^2} = e^2 {\partial \rho\over \partial
\mu} =
\label{pi00}\\
&=& {|e|^3B\beta \over (2\pi )^{2}}\sum_{n = \hat{0}}^{\infty}
{\int_{0}^{\infty} dk_3~ {\left( {1 \over \cosh ^{2}\left( {\beta (E_n
(k_{3}) + \mu )
\over 2}\right)} + {1 \over \cosh ^{2}\left( {\beta (E_n (k_{3}) -
\mu ) \over 2}
\right)}\right)}} \nonumber
\end{eqnarray}
where, assuming a uniform ${\bf B}$ and working in the Euclidean
space, we have used $\Gamma^{\rm eff}_{\rm mat} = -i \beta V
{\cal L} (B,T,\mu)$.
It is worthwhile to compare this result with the corresponding
quantity obtained for $B = 0$ \cite{Midorikawa}
\begin{equation}
\Pi^{00}(0,T,\mu) =
{e^2\beta \over 2(2\pi)^3}{\int_{0}^{\infty} d^3{\bf k}{\left( {1
\over \cosh ^{2}\left( {\beta (E + \mu ) \over 2}\right)} + {1 \over
\cosh ^{2}\left( {\beta (E - \mu )\over 2} \right)}\right)}}
\end{equation}
Keeping in mind Eq.(\ref{phasesp}) it is evident at a glance that the
effect of the magnetic field on $\Pi^{00}$ is completely mediated by
the modification of the electron phase space. The increase of the
screening power of static electric fields when overcritical magnetic
fields are applied to the plasma can also be interpreted as due to the
increase of the number of pairs (see Eq. (\ref{npairs})) as we
anticipated in the introduction. In this sense the magnetic field
plays a role analogous to that played by the temperature.
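Since $\rho = \partial {\cal L}^{\rm eff}/\partial\mu$, the static limit above amounts to $\Pi^{00} = e^2\,\partial\rho/\partial\mu$, and the $\cosh^{-2}$ integrand of Eq.~(\ref{pi00}) is just the $\mu$-derivative of the Fermi factors appearing in Eq.~(\ref{pairs}). A minimal numerical sketch of this check (our own, with arbitrary parameter values; units $m = |e| = 1$, $b = 1$, $\beta = 1$, $\mu = 0.5$):

```python
import math

B, BETA = 1.0, 1.0        # b = eB/m^2 and inverse temperature, units m = 1
NMAX, KMAX, NK = 40, 40.0, 4000

def levels():
    for n in range(NMAX + 1):
        deg = 0.5 * (2.0 - (1.0 if n == 0 else 0.0))  # sum_{n = 0-hat}
        yield deg, 1.0 + 2.0 * B * n                  # degeneracy, m^2 + 2eBn

def integrate(f):                                     # trapezoid on [0, KMAX]
    h = KMAX / NK
    s = sum(f(i * h) * (0.5 if i in (0, NK) else 1.0) for i in range(NK + 1))
    return s * h

def rho(mu):
    """Charge density, Eq. (pairs), with |e| = 1."""
    tot = 0.0
    for deg, msq in levels():
        def g(k):
            e = math.sqrt(k * k + msq)
            return (1.0 / (1.0 + math.exp(BETA * (e - mu)))
                    - 1.0 / (1.0 + math.exp(BETA * (e + mu))))
        tot += deg * 2.0 * integrate(g)               # k3 over the full line
    return B * tot / (2.0 * math.pi ** 2)

def pi00(mu):
    """Static Pi^00, Eq. (pi00), with e = 1."""
    tot = 0.0
    for deg, msq in levels():
        def g(k):
            e = math.sqrt(k * k + msq)
            return (1.0 / math.cosh(BETA * (e + mu) / 2.0) ** 2
                    + 1.0 / math.cosh(BETA * (e - mu) / 2.0) ** 2)
        tot += deg * integrate(g)
    return B * BETA * tot / (2.0 * math.pi) ** 2

mu, h = 0.5, 1e-3
ratio = (rho(mu + h) - rho(mu - h)) / (2.0 * h) / pi00(mu)
print(round(ratio, 6))    # -> 1.0
```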
Let us now consider the $\Pi^{0i}$ components ($i = 1,2,3$). In the
same gauge as we used above, we have
\begin{equation}
\Pi_{0i}(x,x') = i~\frac \delta{\delta \tilde A^0}
\frac \delta {\delta \tilde A^i}~\Gamma^{\rm eff} =
ie\,{\varepsilon}_{ijk} \partial_k \frac {\delta \rho}{\delta B^j(x')}.
\end{equation}
where we used the chain rule for the functional derivative to get
$\displaystyle\frac \delta {\delta \tilde A^i} = {\varepsilon}_{ij}^{k} \partial_k
\displaystyle\frac \delta {\delta B^j}$.
Whereas $\Pi_{03} = 0$ identically and $\Pi_{01}=\Pi_{02} = 0$ if the
momentum insertion is zero or parallel to the field, the transversal
components receive a non-vanishing contribution if the momentum
insertion is perpendicular to the field. More precisely, the Fourier
transform of (14) in terms of the photon momentum components $p_{i}$
is
\begin{equation}
\Pi_{0i} = ie {\varepsilon}_{ij}~p_j \frac {\partial\rho}{\partial
B} + O(p^2) \quad i = 1,2 \label{pi0i}
\end{equation}
while $\Pi_{i0} = \Pi_{0i}^*$. These terms are related to the
interaction of the net charge of the plasma with the magnetic field
and contribute to the Hall conductivity. For the spatial components
we get the general formula
\begin{equation}
\Pi_{ij} = {\varepsilon}_{i}^{kl}{\varepsilon}_{j}^{mn}~p_kp_m \left[ \left(\delta_{nl} -
\frac {B_nB_l}{B^2}\right) \frac 1 B \frac \partial {\partial
B} + \frac {B_nB_l}{B^2} \frac {\partial^2} {\partial B^2}
\right] {\cal L}^{\rm eff} \label{piij}
\end{equation}
This gives
\begin{eqnarray}
\Pi _{33} &=& (p_{1}^2 + p_{2}^2) \frac 1 B~M~~~; \label{pizz}\\
\Pi _{11} &=& p_{3}^2 \frac 1 B~M + p_{2}^2~ \chi~~~;\label{pixx}\\
\Pi _{22} &=& p_{3}^2 \frac 1 B~M + p_1^2~ \chi~~~;\label{piyy}\\
\Pi_{12} &=&\Pi_{21} = p_1p_{2}~ \chi~~~;\\
\Pi_{13} &=&\Pi_{31} = - p_{3}p_1 \frac 1 B~M ~~~;\\
\Pi_{23} &=&\Pi_{32} = p_{3}p_{2} \frac 1 B~M ~~~.\label{piyz}
\end{eqnarray}
It is worthwhile to note the symmetry of the spatial part of the
polarization tensor. This agrees with the tensorial structure obtained
in ref.\cite{Rojas}.
\section{The electron propagator in an external magnetic field}
Although the results of the previous section have been obtained
without any use of the fermion propagator $S$, a treatment aimed at
determining the components of the polarization tensor beyond the
leading order in momenta cannot leave $S$ out of consideration. The
fermion propagator in a constant magnetic field is given by
\cite{Kob-Sak}
\begin{eqnarray}
S(x,x') &=& \sum_{n=0}^{\infty}\int {d\omega dk_{2} dk_{3} \over (2\pi
)^{3}} e^{-i\omega (t-t') +ik_{2} (y-y') +ik_{3}(z-z')}
\nonumber\\
&&\times {1 \over \omega ^{2} -k_{3}^{2}-m^{2}-2eBn +i\epsilon}
S(n;\omega ,k_{2},k_{3})
\end{eqnarray}
where $$ S(n;\omega ,k_{2},k_{3}) = $$
\begin{equation}
\left( \matrix{
mI_{n,n} & 0 & -(\omega + k_{3})I_{n,n} & -i\sqrt{2eBn}I_{n,n-1}\cr 0
& mI_{n-1,n-1} & i\sqrt{2eBn} I_{n-1,n} & -(\omega -
k_{3})I_{n-1,n-1} \cr -(\omega - k_{3})I_{n,n} & i\sqrt{2eBn}
I_{n,n-1} & mI_{n,n} & 0 \cr -i\sqrt{2eBn} I_{n-1,n} & -(\omega +
k_{3})I_{n-1,n-1} & 0 & mI_{n-1,n-1} \cr }\right ) \nonumber
\end{equation}
and
\begin{equation}
I_{n,l} = I_{n;k_{2}}(x) I_{l;k_{2}}(x')
\end{equation}
where
\begin{equation}
I_{n;k_{2}}(x) = \left( {eB \over \pi} \right) ^{1/4} e^{-{1 \over
2}eB\left( x-{k_{2} \over eB} \right) ^{2}} {1 \over 2^{n/2} \sqrt{n!}
} H_{n}\left( \sqrt{eB} \left( x-{k_{2} \over eB}
\right) \right) .
\end{equation}
$H_{n}$ are the Hermite polynomials and we have chosen the
$\gamma$-matrices in the chiral representation. The propagator is
obtained by solving Dirac's equation in a magnetic field. Some useful
identities are
\begin{equation}
\sum_{n=0}^{\infty}{I_{n;k_{2}}(x)
I_{n;k_{2}}(x')} = \delta (x-x')
\end{equation}
and
\begin{equation}
\int_{-\infty}^{\infty}{dx I_{n;k_{2}}(x) I_{l;k_{2}}(x)} =
\delta _{n,l} \label{xint} .
\end{equation}
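Since the $I_{n;k_2}$ are harmonic-oscillator eigenfunctions centered at $x = k_2/eB$, Eq.~(\ref{xint}) is the standard orthonormality relation. A small numerical sketch (our own; units $eB = 1$, $k_2 = 0$) verifies it using the Hermite recurrence:

```python
import math

def hermite(n, x):
    """Physicists' Hermite polynomial via H_{n+1} = 2x H_n - 2n H_{n-1}."""
    h0, h1 = 1.0, 2.0 * x
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, 2.0 * x * h1 - 2.0 * k * h0
    return h1

def I(n, x):
    """I_{n;k2}(x) for eB = 1 and k2 = 0."""
    norm = math.pi ** -0.25 / math.sqrt(2.0 ** n * math.factorial(n))
    return norm * math.exp(-0.5 * x * x) * hermite(n, x)

def overlap(n, l, xmax=10.0, npts=4000):
    """Trapezoid approximation to the integral of I_n I_l over x."""
    h = 2.0 * xmax / npts
    s = 0.0
    for i in range(npts + 1):
        x = -xmax + i * h
        s += I(n, x) * I(l, x) * (0.5 if i in (0, npts) else 1.0)
    return s * h

# Orthonormality, Eq. (xint): the overlap equals the Kronecker delta.
print(round(overlap(2, 2), 6), round(abs(overlap(2, 3)), 6))  # -> 1.0 0.0
```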
In terms of $S(x,x')$ the polarization tensor is
\begin{equation}
\Pi^{\mu \nu}(x) = e^2 \int {dt'dx'dy'dz' Tr\left( \gamma ^{\mu}
S(x,x') \gamma ^{\nu} S(x',x) \right) } \label{PIS} .
\end{equation}
Before we pass to the computation of the components of $\Pi^{\mu \nu}$
beyond the static limit we will show how the results of the previous
section can be reproduced using Eq. (\ref{PIS}).
\section{The static limit}
Whenever the momentum insertion from the electromagnetic field into
the fermion loop vanishes, the polarization tensor can be written
\begin{equation}
\Pi^{\mu \nu} = \sum_{n,l=0}^{\infty}
\int {d\omega dk_{2} dk_{3} \over (2\pi )^{3}}
{1 \over \left( \omega ^{2} -k_{3}^{2}-m^{2}-2eBn \right)
\left( \omega ^{2} -k_{3}^{2}-m^{2}-2eBl \right)}\;\pi ^{\mu \nu}
\end{equation}
where
\begin{equation}
\pi ^{\mu \nu} = \int dx'~ Tr\left( \gamma ^{\mu} S(n;\omega ,
k_{2},k_{3}) \gamma ^{\nu} S(l;\omega ,k_{2},k_{3}) \right)
\label{pimunu}
\end{equation}
It can be calculated, for example, that
\begin{equation}
\pi ^{00} =2(\omega ^{2} + k_{3}^{2} +2eBn +m^{2}) \left( I_{n,n} +
I_{n-1,n-1} \right) \delta _{l,n} \label{pi00f}
\end{equation}
\begin{equation}
\pi ^{33} =2(\omega ^{2} + k_{3}^{2} -2eBn -m^{2}) \left( I_{n,n} +
I_{n-1,n-1} \right) \delta _{l,n} \label{piI33}
\end{equation}
\begin{eqnarray}
\lefteqn{ \pi ^{11} =2(\omega ^{2} - k_{3}^{2}- m^{2}) (I_{n,n}
\delta _{l,n+1} + I_{n-1,n-1} \delta _{l,n-1}) +}\nonumber \\
&&2eB\left( \sqrt{n(n+1)} I_{n-1,n+1} \delta _{l,n+1} +
\sqrt{n(n-1)} I_{n,n-2} \delta _{l,n-1}\right) \label{pi11f}
\end{eqnarray}
$\pi ^{11}$ differs from $\pi ^{22}$ only in that the last term has
the opposite sign. As expected $\Pi^{00}$ and $\Pi^{33}$ correspond
to zero angular momentum transfer between the photon and the plasma,
implying the selection rule $\Delta n = 0$. Instead, transversally
polarized waves can induce transitions between different Landau
levels.
\subsection{The $T=0$ finite $\mu$ contribution}
Let us consider the $T=0$ contribution for the various components. We
have
\begin{equation}
\Pi ^{00}(B,0,\mu) = {8|e|^3 B \over \left( 2\pi \right) ^{2}}
\sum_{n=\hat{0}}^{\infty} {\oint_C{{d\omega \over 2\pi
i}\int_{0}^{\infty}{dk_3\, {\omega ^{2} +k_3^{2} +2eBn+m^{2} \over
\left(
\omega ^{2} -k_3^{2}-2eBn-m^{2} \right) ^{2}}}}}
\label{pi00mu}
\end{equation}
To obtain this, perform the $k_{2}$ integration using (\ref{xint}).
It follows that $$
\Pi^{00}(B,0,\mu) =
$$ $$ {8|e| ^3B \over \left( 2\pi \right) ^{2}}
\sum_{n=\hat{0}}^{\infty} {\oint{{d\omega \over 2\pi
i}\int_{0}^{\infty}{dk_3\, \left({2\omega ^{2}
\over \left( \omega ^{2} -k_3^{2}-2eBn-m^{2} \right) ^{2}}-{1 \over
\omega ^{2}-k_3^{2}-2eBn-m^{2}}\right) }}} =
$$ $$ {8|e|^3 B \over \left( 2\pi \right) ^{2}}
\sum_{n=\hat{0}}^{\infty}{\left( {d \over
dx}{\int_{0}^{\sqrt{\mu^{2}-m^{2}-x}}{dk_3\sqrt{k_3^{2}+m^{2}+x}}}-\int_{0
}^{\sqrt{\mu^{2}-m^{2}-x}}{{dk_3\, \over 2 \sqrt{k_3^{2}+m^{2}+x}}}
\right)} =
$$
\begin{equation}
{|e|^3 B \over \pi ^{2}} \sum_{n=\hat{0}}^{\left[ {\mu ^{2}-m^{2}
\over 2eB} \right]}{{\mu \over \sqrt{\mu^{2}-m^{2}-2enB}}} \label{pi00Bmu} ,
\end{equation}
where $x=2eBn$. As can be seen from above, the only non-vanishing
contribution is coming from the point where the $\omega = E$ double
pole is crossing the integration contour. More about the contour $C$
and the methods we use to perform the integrals can be found in the
Appendix.
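The Landau-level sum in Eq.(\ref{pi00Bmu}) is easy to evaluate numerically. The Python sketch below is illustrative and not part of the derivation; it assumes natural units and, consistently with Eq.(\ref{pi00B}), that the hatted lower limit gives the $n=0$ level weight $1/2$ (its Landau degeneracy being half that of the higher levels).

```python
import numpy as np

def pi00_T0(e, B, mu, m):
    """T = 0 Landau-level sum of Eq. (pi00Bmu) for Pi^00(B,0,mu).

    Natural units; assumes the hatted lower limit means the n = 0
    level enters with weight 1/2 (half Landau degeneracy).
    """
    n_max = int((mu**2 - m**2) / (2 * e * B))   # highest occupied level
    total = 0.0
    for n in range(n_max + 1):
        weight = 0.5 if n == 0 else 1.0
        total += weight * mu / np.sqrt(mu**2 - m**2 - 2 * e * B * n)
    return abs(e)**3 * B / np.pi**2 * total
```

For $2eB > \mu^{2}-m^{2}$ only the $n=0$ term survives and the sum reduces to Eq.(\ref{pi00B}).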
There are two regions where the expression simplifies. One is the
limit $B = 0$ where we get
\begin{equation}
\Pi ^{00}(0,0,\mu) = {e^2 \mu \sqrt{\mu ^{2} -m^{2}} \over 2\pi ^{2}}
\label{pi000mu}
\end{equation}
and the other is $2eB > \mu^{2} - m^{2}$ where
\begin{equation}
\Pi ^{00}(B,0,\mu) = {|e|^3 B \over 2\pi ^{2}} {\mu \over \sqrt{\mu
^{2}- m^{2}}} ~~~.\label{pi00B}
\end{equation}
Eq. (\ref{pi00B}) can be understood in terms of the modified relation
between the charge density and the chemical potential of the plasma.
In fact, in this limit Eq. (\ref{pairs}) becomes
\begin{equation}
\rho(\mu) \approx {eB\over 2\pi^2} \sqrt{\mu^2 - m^2}
\label{rhomu}
\end{equation}
and Eq.(\ref{pi00B}) can easily be reproduced using $\Pi^{00} = e^2
\partial \rho /\partial \mu$. Eq. (\ref{pi000mu}) disagrees with the
result of ref.\cite{Midorikawa}. In fact, the authors obtain zero
for $\Pi ^{00}(0,0,\mu)$. On the other hand, our Eq.(\ref{pi00Bmu})
agrees completely with the result of ref.\cite{Zeitlin}.
Let us now consider the spatial components and verify that they are
zero. From (\ref{piI33}) we have
\begin{equation}
\Pi ^{33} (B,0,\mu ) = {8|e|^3 B \over \left( 2\pi \right) ^{2}}
\sum_{n=\hat{0}}^{\infty} {\oint{
{d\omega \over 2\pi i}\int_{0}^{\infty}{ dk_3~{ \omega ^{2} +k_3^{2}
-2eBn-m^{2} \over \left( \omega ^{2} -k_3^{2}-2eBn-m^{2} \right)
^{2}}}}} = 0
\end{equation}
Again it is important to take the contour crossing into account in
order to obtain the correct result.
$\Pi ^{11}(B,0,\mu)$ and $\Pi ^{22}(B,0,\mu)$ are easily computed.
The last term of (\ref{pi11f}) is killed by the $k_{2}$ integration
and we are left with a vanishing result.
Hence we conclude that
\begin{equation}
\Pi ^{11}(B,0,\mu) = \Pi ^{22}(B,0,\mu) = \Pi ^{33}(B,0,\mu) =0
\end{equation}
Thus static magnetic fields are not screened in the present limit.
These equations clearly remain valid when ${\bf B}\rightarrow 0$.
Again we disagree with the conclusion of ref.\cite{Midorikawa} where a
magnetic screening length proportional to the Fermi momenta was
obtained at $B_{\rm ext} = 0$. The disagreement that we obtain here
and for $\Pi^{00}(0,0,\mu)$ suggests to us that the contributions from
the crossing of the poles through the contour $C$ were ignored by the
authors of ref.\cite{Midorikawa}.
Proceeding similarly to what we have done for the other components we
easily deduce that $\Pi ^{0j}(B,0,\mu) =0$ if there is no momentum
insertion in the fermion loop.
\subsection{The finite $T$ contribution}
At finite temperature we must add two more contour integrals:
\begin{eqnarray}
\Pi ^{00}(B,T,\mu) &=& {8|e|^3B \over \left( 2\pi \right)^{2}}
\sum_{n=\hat{0}}^{\infty} \left(
\int_{-i \infty +\mu -\epsilon}^{i \infty +\mu -\epsilon}
{d\omega \over 2\pi i}~ {1 \over 1+e^{-\beta (\omega -\mu)}}
I(\omega,k,n) \right. \\ &+& \left.\int_{-i \infty +\mu +\epsilon}^{i
\infty+\mu +\epsilon} {d\omega \over 2\pi i}~ {1 \over
1+e^{\beta(\omega -\mu)}} I(\omega,k,n) + \oint_C {d\omega \over 2\pi
i}~ I(\omega,k,n)
\right) \nonumber
\end{eqnarray}
where
\begin{equation}
I(\omega,k,n) = {\omega ^{2} +k^{2} +2eBn+m^{2} \over
\left( \omega ^{2}-k^{2}-2eBn-m^{2} \right)^2}~~~.
\end{equation}
The integrals are performed according to the prescription given in the
Appendix with the result
\begin{equation}
\Pi^{00}(B,T,\mu) = {|e|^3 B\beta \over 4 \pi
^{2}}\sum_{n = \hat{0}}^{\infty}{\int_{0}^{\infty} dk_3~ {\left({1
\over
\cosh ^{2}\left( {\beta (E + \mu) \over 2} \right)} + {1 \over
\cosh ^{2}\left( {\beta (E - \mu) \over 2} \right)} \right)}}
\end{equation}
in agreement with (\ref{pi00}). Here the result coincides
with that of ref.\cite{Midorikawa} in the limit ${\bf B} = 0$; this
can easily be understood since the contributions from the pole
crossing through the contours $C$ and $C_\pm$ cancel in this case.
Similarly one can verify that the spatial components remain zero even
at non-zero temperature. The behavior of $\Pi^{00}$ as a function of
$B$ is shown in Fig. 1. As we anticipated in the introduction, we can
see from this figure that the screening properties of the plasma are
sensitive to the magnetic field only when $eB \gg T^2$. Thus the
effect of even a large magnetic field can be neglected if this
condition is not fulfilled.
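One can also check numerically, level by level, that the finite-$T$ expression reduces to the $T=0$ sum: as $\beta \to \infty$ the factor $(\beta/4)\cosh^{-2}(\beta(E-\mu)/2)$ approaches $\delta(E-\mu)$, so the $k_3$-integral of a single Landau level tends to $(dE/dk_3)^{-1}|_{E=\mu} = \mu/\sqrt{\mu^2-m^2-2eBn}$. A small illustrative Python sketch (not part of the paper), written with $x = 2eBn$:

```python
import numpy as np

def sech2(y):
    # numerically stable 1/cosh(y)^2
    a = np.exp(-2.0 * np.abs(y))
    return 4.0 * a / (1.0 + a)**2

def level_integral(beta, mu, m, x):
    """(beta/4) * int_0^inf dk [sech^2(beta(E+mu)/2) + sech^2(beta(E-mu)/2)]
    with E = sqrt(k^2 + m^2 + x), x = 2eBn: one Landau level of Pi^00(B,T,mu)."""
    k = np.linspace(0.0, 30.0, 400001)   # cutoff large compared to all scales
    E = np.sqrt(k**2 + m**2 + x)
    f = sech2(beta * (E + mu) / 2) + sech2(beta * (E - mu) / 2)
    # trapezoidal rule on the uniform grid
    return (beta / 4.0) * (k[1] - k[0]) * (np.sum(f) - 0.5 * (f[0] + f[-1]))
```

At low temperature ($\beta(\mu-m) \gg 1$) the result approaches the corresponding $T=0$ term $\mu/\sqrt{\mu^2-m^2-x}$.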
\begin{center}
\leavevmode
\epsfysize=7cm
\epsfbox{pi00.ps}
\end{center}
{\small Fig.1. $\Pi^{00}$ for three different temperatures. From top,
$T=10m_{e}$, $T=m_{e}$ and $T=0.1m_{e}$.}
\section{Beyond the static limit}
The application of an external magnetic field breaks the isotropy of
the plasma. It is thus understandable that the electromagnetic
response of the plasma will depend on the direction of the
momentum insertion with respect to the field orientation. In this
section we will give some sample calculations when the photon has
non-zero momentum or energy.
\subsection{${\bf p} \parallel {\bf B}$}
We start by considering a momentum insertion parallel to the magnetic
field. Put $k'_{3} = k_{3} + p_{3}$ where $p_{3}$ is the momentum
insertion. $\Pi ^{\mu \nu}$ is then obtained by replacing $k_{3}^{2}$
with $k_{3}'^{2}$ in one of the propagators and replacing $k_{3}^2$
with $k_{3} k'_{3}$ in $\pi^{\mu \nu}$ in
Eqs.(\ref{pi00f}-\ref{pi11f}). To compute $\Pi^{00}$ we use
\begin{equation}
I(\omega,k_3,n) = {\omega ^{2} +k_{3}k'_{3} +2enB +m^2 \over \left(
\omega ^{2} -E_{n}(k_{3})^{2}
\right)\left( \omega ^{2} -E_{n}(k'_{3})^{2} \right)}
\end{equation}
as the integrand in Eq.(\ref{pi00mu}). Performing the $\omega$
integration over the contour $C$ (see the appendix) this becomes
\begin{equation}
{m^2+2eBn \over 2p_{3}}\left( \int_{-\sqrt{\mu ^{2}-m^{2}-2eBn}
}^{\sqrt{\mu ^{2}-m^{2}-2eBn}}{{ dk\over
(2k+p_{3})\sqrt{k^{2}+m^{2}+2eBn}}} - (p_{3} \rightarrow -p_{3})
\right) \label{pi00C}
\end{equation}
Expanding to second order in $p_{3}^{2}$ we find $$
\Pi ^{00} (B,0,\mu|p_{3}) = \Pi ^{00} (B,0,\mu|p_{3} =0) +
$$ $$ {p_{3}^{2}|e|^3B \over \pi ^{2}} \sum_{n=\hat{0}}^{\left[ {\mu
^{2}-m^{2} \over 2eB} \right]} \left( {\mu \over 12 \left( \mu ^{2} -
m^{2} -2eBn \right) ^{3/2}} -{\mu \over 6 \sqrt{\mu ^{2} - m^{2}
-2eBn}(m^2 +2eBn)} \right) $$
\begin{equation}
+ O(p_{3}^4) \label{pi00py2}
\end{equation}
where $\Pi ^{00}(B,0,\mu)$ is given by Eq.(\ref{pi00Bmu}).
At $T \neq 0$ we have to manage two simple poles in the complex
$\omega$-plane. As discussed in the Appendix, the integration over
$C_+$ splits into two integrals, each corresponding to one of the two
possible positions of the positive-energy pole: $E > \mu$ or $E <
\mu$. A third integral, over $C_-$, receives a contribution only from
the negative-energy pole, always contained in $C_-$. A
$T$-independent term coming from the integration over $C_+$ and $C_-$
cancels exactly with the result of the integral over $C$ given by
Eq.(\ref{pi00C}). Thus we get $$
\Pi ^{00}(B,T,\mu|p_{3}) =
-{8|e|^3B \over (2\pi)^{2}} \sum_{n=\hat{0}}^{\infty} \frac {2eBn +
m^2}{p_{3}}\times $$
\begin{equation}
\int^\infty_{-\infty} {dk \over (2k + p_{3})\sqrt{k^2+2eBn+m^2}}
\left( {1 \over 1+e^{\beta (E -\mu)}} + {1 \over 1+e^{\beta
(E +\mu)}} \right) ~.
\end{equation}
Previously we found that $\Pi ^{33}(B,T,\mu) =0$ at zero momentum. At
non-zero $p_{3}$ we have to integrate
\begin{equation}
{\omega ^{2} +k_{3}k'_{3} -2enB -m^2 \over \left( \omega
^{2}-E_{n}(k_{3})^{2} \right)
\left( \omega ^{2} -E_{n}(k'_{3})^{2} \right)}
\end{equation}
over $k_3$ and $\omega$. The residue at $\omega = E_{n}$ is
$-\displaystyle\frac{k_{3}}{2E_{n}p_{3}}$. Clearly, $\Pi^{33} = 0$ even at
non-zero $p_{3}$ after integrating over both signs of $k_{3}$. This
agrees with Eq.(\ref{pizz}).
It is also worthwhile to check $\Pi^{11}$ with non-zero $p_3$. Using
Eq.(\ref{pi11f}) we find that we now have to manage the integrand
\begin{equation}
{\omega ^{2} -k_3k'_3 - m^{2} \over \left( \omega ^{2}
-k_{3}^2-m^{2}-2neB
\right) \left( \omega ^{2} -k^{'2}_{3}-m^{2}-2(n+1)eB \right) \ .
\end{equation}
After the integrations we get
\begin{eqnarray}
\lefteqn{\Pi^{11}(B,0,\mu|p_3) = \frac {p_3^2}{B} \left[{|e|
\over 2 \pi ^{2}}
\sum _{n=\hat{0}}^{\infty} \left( \mu \sqrt{\mu ^{2} -m^{2}-2eBn}
\right.\right.}
\nonumber\\
&& \left.\left. -(m^{2} +4eBn) \log {\mu +
\sqrt{\mu ^{2} -m^{2} -2eBn} \over \sqrt{m^{2}+2eBn}} \right)\right] +
O(p_3^4)
\label{pi113}
\end{eqnarray}
which agrees with Eq.(\ref{pixx}), as the reader can check keeping in
mind Eq.(\ref{magn}). The same result as (\ref{pi113}) is found for
$\Pi^{22}(B,0,\mu|p_3)$.
Let us now move on to $\Pi ^{0j}$. Using (\ref{pimunu}) we find that
\begin{equation}
\pi ^{03} = 2\omega \left( k_{3} + k_{3}'\right)
\left( I_{n,n} + I_{n-1,n-1} \right) \delta _{n,l}
\end{equation}
This clearly gives zero even for $p_{3} \neq 0$. In fact, the
integrand can be rewritten as
\begin{equation}
{\omega \over p_{3}}\left( {1 \over \omega ^{2} -k^{2}-2eBn-m^{2}} -
{1 \over \omega ^{2} -k'^{2}-2eBn-m^{2}}\right)
\end{equation}
Shifting $k'$ in the second term gives zero after integration. We
leave the verification of $\Pi^{01}(B,0,\mu|p_3) = \Pi^{02}(B,0,\mu|p_3)
= 0$ as an easy exercise for the reader.
\subsection{ ${\bf p} \bot {\bf B}$}
The case with momentum transverse to the magnetic field is slightly
more complicated. Of special interest are the $\Pi ^{0i}\ (i = 1,2)$
components at non-zero $p_i$, which give, through
\begin{equation}
\sigma ^{ij} = i { \partial \Pi ^{0i} (p) \over \partial p_{j}}
\end{equation}
the plasma conductivity. Using (\ref{pimunu}) we find that
\begin{equation}
\pi ^{01} =\int dx' 4i\sqrt{2eBl} \omega \left( I_{n,n} +
I_{n-1,n-1} \right) I_{l,l-1}
\end{equation}
At non-zero $p_{2}$ we have to calculate $$\int dx'dy'dk_{2}dk_{2}'~
e^{ik_{2}(y-y')-ik_{2}'(y-y')+ ip_{2}(y-y')} I_{n;k_{2}}(x)
I_{n';k_{2}}(x')I_{l';k_{2}'} (x')I_{l;k_{2}'}(x) =$$
\begin{equation}
eB\int dxdx'~ I_{n;0}(x) I_{n';0}(x')I_{l;p_{2}}(x)I_{l';p_{2}}(x')
\end{equation}
This can be found to be
\begin{equation}
eB e^{-{p_{2}^2 \over 2eB}} \left( -{p_{2} \over(2eB)^{1/2}}
\right)^{n+n'-l-l'} \left( {l!l'! \over n!n'!}\right)^{1/2}
L_{l}^{n-l} \left( {p_{2}^{2} \over 2eB} \right) L_{l'}^{n'-l'}
\left( {p_{2}^{2} \over 2eB} \right)
\end{equation}
for $n \geq l$ and $n' \geq l'$, where the $L$'s are
Laguerre polynomials. Expanding to order $p_{2}$ we find, in our case,
\begin{equation}
-\sqrt{2eB}p_{2} \left( l \delta _{n,l+1} + l
\delta _{n,l} \right) + ( n \rightarrow n+1)
\end{equation}
and to order $p_{2}^{3}$ $$ {p_{2}^{3} \over 2\sqrt{2eB}} \left(
\left( 3l^{2} +l \right)
\delta_{n,l+1}
-\left( l^{2}+l \right) \delta _{n,l+2} +\left( 3l^{2}-l \right)
\delta _{n,l} -\left( l^{2}-l \right) \delta _{n,l-1} \right)
$$
\begin{equation}
+ ( n \rightarrow n+1) .
\end{equation}
For larger powers of $p_{2}$, larger jumps between the Landau levels
are allowed. We get $$
\Pi ^{01} (B,0,\mu |p_{2}) =
{ie^2 p_{2} \over \pi ^{2}} \sum_{n={\hat 0}}^{\left[ { \mu ^{2} -
m^{2}\over 2eB} \right] }\left(
\sqrt{\mu ^{2} -m^{2} -2eBn}
- {eBn \over \sqrt{\mu ^{2} -m^{2} -2eBn} } \right) $$
\begin{equation}
-{ie p_{2}^{3} \over 4\pi ^{2}B} \sum_{n=1}^{\left[ { \mu ^{2}
- m^{2} \over 2eB} \right]}\left( 2n\sqrt{\mu ^{2}
-m^{2} -2eBn} - {3eBn^{2} \over \sqrt{\mu ^{2} -m^{2}
-2eBn}}
\right) + O(p_{2}^{5})\ . \label{pi01py}
\end{equation}
$\Pi^{02}$ is given by the same expression with $p_2$ replaced by
$p_1$. Eq.(\ref{pi01py}) confirms the result for the conductivity of
\cite{Zeitlin} and
extends it to second order in momentum. Note that the higher order correction
vanishes for
$2eB > \mu ^{2} -m^{2}$. The expression for the
conductivity given in ref.\cite{Gonzalez}, on the other hand, does not
agree with our result or the result of \cite{Zeitlin}.
We are also able to extend our result to finite temperature.
$$
\Pi ^{01} (B,T,\mu | p_{2}) =
{ie^2 p_{2} \over \pi ^{2}} \sum_{n={\hat 0}}^{\infty}\left( 1 +B{d
\over dB}\right) \int_{0}^{\infty} dk_3~\left( {1 \over 1+e^{\beta
(E-\mu )}} - {1 \over 1+e^{\beta (E+\mu )}}
\right)
$$
$$ -{ie p_{2} ^{3} \over 4\pi ^{2}B} \sum_{n=1}^{\infty} \left(
2n +3Bn{d \over dB}\right) \int _{0}^{\infty} dk_3~
\left({1 \over 1+e^{\beta (E-\mu )}} - {1 \over 1+e^{\beta (E+\mu
)}}\right) $$
\begin{equation}
+ O(p_{2}^{5})
\end{equation}
The first order term can again be checked using (\ref{pi0i}).
We may also consider $\Pi ^{00}$ and $\Pi ^{ii}$ at nonzero $p_{1}$
and $p_{2}$. The calculation uses the same method as above. After
expanding the Laguerre-polynomials and performing the
contour-integrals, we can verify Eqs.(17-19). On the other hand we
find
$$
\Pi^{00}(B,0,\mu | p_{2}) = \Pi ^{00} (B,0,\mu | p_2=0) +
$$
$$
{e^2 p_{2}^{2} \over 2\pi ^{2} eB} \sum _{n=\hat{0}} ^{\infty}
\left( \mu \sqrt{\mu ^{2}
-m^{2}-2eBn} + m^{2} \log {\mu + \sqrt{\mu ^{2} -m^{2} -2eBn} \over
\sqrt{m^{2}+2eBn}} -{4eBn\mu \over \sqrt{\mu ^{2} - m^{2} -2eBn}}
\right)
$$
\begin{equation}
+ O(p_{2}^{4})
\end{equation}
Together with (\ref{pi00py2}) we then have the complete expression to
second order in momentum. In this case we cannot check the result
using section 1.
\subsection{Energy insertion}
Let us now consider the case in which the photon energy is different
from zero. In this case the energy of one of the fermions in the
polarization loop will be shifted to $\omega' = \omega + p_0$ where
the photon energy $p_0$ is assumed to be real. To compute
$\Pi^{00}(B,0,\mu)$, the integrand to replace the one in Eq.(\ref{pi00mu}) is
\begin{equation}
I(\omega,k_3,n) = {\omega\omega' +k_{3}^2 +2enB +m^2 \over \left(
\omega ^{2} - E_{n}(k_{3})^{2}
\right)\left( \omega^{'2} - E_{n}(k_{3})^{2} \right)}~~~.
\end{equation}
Performing the $\omega$-integration we find $$
\Pi^{00}(B,0,\mu|p_0)=
$$ $$ {|e|^3 B \over p_{0} \pi^2}
\left(
\sum_{n = \hat 0}^{\left[ {\mu^{2}-m^{2} \over 2eB} \right]}
\sqrt{\mu^2 - 2enB - m^2} -
\sum_{n = \hat 0}^{\left[ {(\mu -p_{0})^{2}-m^{2} \over 2eB} \right]}
\sqrt{(\mu - p_0)^2 - 2enB - m^2}\right) =
$$ $$
\Pi^{00}(B,0,\mu)+{2|e|^3 B \over \pi^2}
\sum_{n =\hat 0}^{\left[ {\mu^{2}-m^{2} \over 2eB} \right]}\left(
p_0 {2enB + m^2\over 2 (\mu^2 - 2enB - m^2)^{3/2}} + p_0^2 {\mu (2enB
+ m^2)\over (\mu^2 - 2enB -m^2)^{5/2}}\right) $$
\begin{equation}
+O(p_0^3) ~~~.
\end{equation}
At $T\neq 0$ we have instead $$\Pi^{00}(B,T,\mu|p_0) = {8|e|^3 B
\over(2\pi)^2 p_{0}}\sum ^{\infty}_{n=\hat 0}
\left[\int^{\infty}_{-\infty} dk_3~\left(\frac 1
{e^{\beta(E-\mu)}+1} - \frac 1 {e^{\beta(E+\mu)}+1}\right)\right. $$
\begin{equation}
-\left.\int^{\infty}_{-\infty} dk_3~\left(\frac 1
{e^{\beta(E-(\mu+p_0))}+1} - \frac 1 {e^{\beta(E+(\mu+p_0))}+1}
\right)\right]
\end{equation}
\section{Conclusions}
In this paper we have studied the polarization tensor of a QED plasma
placed in a magnetic field. Although this has been the subject of
several studies, some discrepancies, even in the simplest limit $B =
0$ \cite{Midorikawa}, called for a more careful analysis.
Furthermore, recent results concerning the thermodynamics of a QED
plasma in a strong magnetic field allow a new, physically more
transparent, interpretation of old results as well as of the new
results that we obtain here.
We have used two different methods. The first relates the polarization
tensor directly to the thermodynamic quantities of the system. The
second makes use of the fermion propagator in an external magnetic
field. These methods give the same results in the first non-vanishing
order in the photon 4-momentum. Beyond this limit the first method is
not applicable and the introduction of the second cannot be avoided.
For instance, this is needed if one wants to find the dispersion
relations for the electromagnetic waves.
Some of our results have a simple physical interpretation. Static
electric fields are screened by the plasma in strong magnetic fields
($eB > T^2, \mu^2 - m^2$) more effectively than in the free field
case. This can be understood since static electric fields are
screened by the charge rearrangement in the plasma. At $T = 0$ no
thermal pair production is active and the screening can be achieved
only in the presence of a charge asymmetry. In strong magnetic fields
the relation with the charge asymmetry is modified. Indeed, the charge
density becomes proportional to the magnetic induction when $eB$ is
larger than the Fermi momentum squared, $\mu^2 - m^2$ (see
Eq.(\ref{rhomu})). Consequently, $\Pi^{00}$ grows linearly with $B$
(see Eq.(\ref{pi00B})).
At $T\neq 0$, as we discussed in the introduction, the thermal pair
production is amplified by large magnetic fields. Then more charge
carriers are available to screen static electric fields. The reader
can see in Fig. 1 that $\Pi^{00}$ starts to grow linearly in $B$, once
all electrons and positrons have dropped into the lowest Landau level.
In this static limit the anisotropy induced by the external magnetic
field cannot play any role and the definition of an electric
screening length $\lambda^{-2} = m_{el}^2 = \Pi^{00}(p_0=0, {\bf p}
\rightarrow 0)$
\cite{Kapusta} is still meaningful.
Static magnetic fields are not screened at all. In fact, macroscopic
spatial currents cannot be obtained if ${\bf p} = 0$ (see
Eq.(\ref{piij})). This conclusion is not modified in the presence of
strong external magnetic fields, either at vanishing or at finite
temperature. However, magnetic screening is achieved if ${\bf p} \neq
0$ and Eqs.(\ref{pizz}-\ref{piyz}) show how this is related to the
magnetization and magnetic susceptibility of the plasma in a
nontrivial way.
We also computed the electric conductivity of the plasma. Our result
confirms that of ref.\cite{Zeitlin}, but we improve the calculation by
including second order terms in the momentum expansion. On the other
hand, we disagree with the result of ref.\cite{Gonzalez}.
The determination of the complete dispersion relations for
electromagnetic waves propagating through plasmas in strong magnetic
fields is beyond the purposes of the present paper. One would need to
take into account the full tensorial structure of the polarization
operator as done in Refs.\cite{Rojas}. Using our results according to
the prescription of Refs.\cite{Rojas} would provide physically more
accessible information about the propagation of electromagnetic waves
through a plasma in the presence of strong magnetic fields.
\section*{Acknowledgements}
We would like to thank Lars Bergstr\"om and Hector Rubinstein for
discussions and encouragement while writing this paper. The work of
D. G. was supported by a Twinning EEC contract.
\section*{Appendix}
Let us give some details on how the contours are specified. At finite
temperature the integral $\int d\omega$ is replaced by a sum over
$\omega ={i\pi \over \beta} (2n+1)$. This can be accomplished by
integration along a counterclockwise contour around the poles of
$\tanh{{\pi z \over 2}}$, e.g.
\begin{equation}
\sum _{n} f(2n+1) = {i \over 4 } \oint dz \tanh {\pi z \over 2} f(z)
\end{equation}
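As a sanity check of this frequency-sum machinery (an illustration, not taken from the text): applied to a propagator-like $f$ with simple poles, the tanh representation reproduces the standard fermionic result $T\sum_n \left[\omega_n^2+E^2\right]^{-1} = \tanh(\beta E/2)/(2E)$, with $\omega_n = (2n+1)\pi T$. A brute-force numerical confirmation in Python:

```python
import numpy as np

def matsubara_sum(E, beta, N=200000):
    """Truncated fermionic Matsubara sum T * sum_n 1/(omega_n^2 + E^2)
    with omega_n = (2n+1) pi / beta, summed over |n| <= N."""
    n = np.arange(-N, N + 1)
    omega = (2 * n + 1) * np.pi / beta
    return np.sum(1.0 / (omega**2 + E**2)) / beta
```

The truncation error falls off like $\beta/(2\pi^2 N)$, so the agreement with the closed form is already excellent at moderate $N$.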
Deforming the contours gives
\begin{eqnarray}
\lefteqn{ \int _{-i\infty}^{i\infty} \frac {d\omega}{2\pi i} f(\omega ) +
\int_{-i \infty +\mu +\epsilon}^{i \infty +\mu +\epsilon}
\frac {d\omega}{2\pi i} f(\omega)
{1 \over e^{\beta (\omega -\mu)}+1} +} \\ && \int_{-i \infty +\mu
-\epsilon} ^{i \infty +\mu -\epsilon}
\frac {d\omega}{2\pi i} f(\omega)
{1 \over e^{-\beta (\omega -\mu)}+1} + \oint_C
\frac {d\omega}{2\pi i} f(\omega)
\nonumber
\end{eqnarray}
The first term is the divergent vacuum contribution, the second and
third terms are non-zero only at finite temperature and the last term
is the $T=0$ non-zero $\mu$ contribution. The contours are depicted
in fig. 2.
\begin{center}
\leavevmode
\epsfysize=7cm
\epsfbox{contour.ps}
\end{center}
{\small Fig. 2. Contours for $T = 0$ on the left and for $T \neq 0$ on
the right.}
\vskip 1cm
It is worthwhile also to give some details about how we performed the
integrals. Mainly we wish to direct the attention of the reader to the
crossing of poles through the contours of integration.
We start by considering the $T = 0$ finite-density contribution. This
term has the form
\begin{equation}
\oint_C \frac {d\omega}{2\pi i}\int^{\infty}_{-\infty} dk
f(\omega,k) = \sum_{i}\frac{1}{(m_i-1)!}\lim_{\omega \rightarrow E_i}
\frac {d^{m_i-1}}{d\omega^{m_i-1}}\int^{\infty}_{-\infty} dk
f(\omega,k) (\omega - E_i)^{m_i}\theta(\mu - \omega)
\label{intC}
\end{equation}
where we have assumed that the function $f(\omega,k)$ has poles
$\omega = E_i$ on the real axis of order $m_i$. The contour of
integration $C$ is depicted in Fig. 2. The result of the integration
is zero if the poles lie outside the region delimited by the contour
$C$. This justifies the $\theta$-function on the right-hand side of
Eq.(\ref{intC}). The effect of the $\theta$-function is simply to
modify the effective integration limits when the pole is
first-order.
More interestingly, when the pole is second-order (we do not encounter
higher-order poles in our one-loop computations) a new term
appears containing a delta-function coming from the derivative of the
$\theta$. This term corresponds to the crossing of the pole through the
contour of integration. The $\delta$ kills the $k$-integral and leaves
us with a term like that in the result of Eq.(\ref{pi00Bmu}).
The finite temperature contribution has the general form
\begin{eqnarray}
\lefteqn{\int_{-i \infty +\mu +\epsilon}^{i \infty +\mu +\epsilon}
\frac {d\omega}{2\pi i} \int_{-\infty}^{\infty} dk~ f(\omega,k)
{1 \over e^{\beta (\omega -\mu)}+1} +} \\ && \int_{-i \infty +\mu
-\epsilon} ^{i \infty +\mu -\epsilon}
\frac {d\omega}{2\pi i} \int_{-\infty}^{\infty} dk~ f(\omega,k)
{1 \over e^{-\beta (\omega -\mu)}+1} + \oint_C
\frac {d\omega}{2\pi i} \int_{-\infty}^{\infty} dk~ f(\omega,k)
\nonumber \label{Tint}
\end{eqnarray}
When the poles are first order, the first two, temperature-dependent,
integrals split into three non-vanishing parts. The first part
corresponds to the integral over the contour $C_+$ (see Fig. 2.) for
values of $k$ such that the pole $\omega = E$ is inside $C_+$ ($E >
\mu$). It can be written
\begin{equation}
\int^{\infty}_{-\infty} dk~ f(E,k) \frac 1 {e^{\beta(E - \mu)}+1} -
\int^{\sqrt{\mu^2 - x - m^2}}_{-\sqrt{\mu^2 - x - m^2}} dk~ f(E,k)
\frac 1 {e^{\beta(E - \mu)}+1}~~~; \label{C+}
\end{equation}
the second part corresponds to the integral over $C_-$ when only the
pole $\omega = -E$ is inside $C_-$
\begin{equation}
\int^{\infty}_{-\infty} dk f(-E,k) \frac 1 {e^{\beta(E + \mu)}+1}
\end{equation}
and the third one comes from the contribution to the same integral
when $k$ is such that the pole $\omega = E$ is inside $C_-$
\begin{equation}
\int^{\sqrt{\mu^2 - x - m^2}}_{-\sqrt{\mu^2 - x - m^2}} dk~ f(E,k)
\frac 1 {e^{-\beta(E - \mu)}+1}~~~.\label{C-2}
\end{equation}
It is then straightforward to verify that the sum of the second
integral in (\ref{C+}) and the integral in (\ref{C-2}) is temperature
independent and cancels with the integral over $C$ in
(\ref{Tint}). Then (\ref{Tint}) takes the form
\begin{equation}
\int^{\infty}_{-\infty} dk~ f(E,k) \left(\frac 1 {e^{\beta(E - \mu)}
+ 1} + \frac 1 {e^{\beta(E + \mu)} + 1}\right)
\end{equation}
where the separate contributions of electrons and positrons are
evident. If the poles are second-order we have to take care of the
contribution coming from the derivative of the $\theta$-function
integrated through the $C_{\pm}$ contours. This generates the terms $$
\int^{\infty}_{-\infty} dk~ f(E,k)
\frac 1 {e^{\beta(E - \mu)} + 1} \delta(E - \mu)
+ \int^{\infty}_{-\infty} dk~ f(E,k)
\frac 1 {e^{-\beta(E - \mu)} +1} \delta(E - \mu) =
$$
\begin{equation}
\frac 1 2 \left(\frac {dE}{dk}\right)^{-1}_{E = \mu}
f(E = \mu) = \frac \mu {\sqrt{\mu^2 - x - m^2}}~f(E = \mu) ~~~.
\end{equation}
This is a $T$-independent term. When inserted in (\ref{Tint}) it cancels
the result of the integral over $C$.
\newpage
\bigskip
\section{Supplemental theoretical parameters}
In addition to Fig.~1, we list the hyperparameters of the three charge density generators (CDGs).
Before listing the hyperparameters, we explain the mathematical meaning of the operators.
These networks are built from six structural layer operators.
\noindent
1.~Input layer $\mathbf{x}$:
\begin{equation}
\mathbf{x} = \{P,N\},
\end{equation}
where $P$ is proton number and $N$ is neutron number.
\noindent
2.~Feature layer($\hat{l}_{i2\to i8}$):
\begin{equation}
\{P,N\} \to \{P,N,\mathcal{O}_p,\mathcal{O}_n, \delta, v_p, v_n, \mathcal{P}\},
\end{equation}
where
\begin{equation}
\begin{aligned}
&\mathcal{O}_p = P \bmod 2, &&\mathcal{O}_n = N \bmod 2,\\
&\delta = [(-1)^{P}+(-1)^{N}]/2, &&\mathcal{P}=v_pv_n/(v_p+v_n),
\end{aligned}
\end{equation}
$v_p$ ($v_n$) is the distance of the proton (neutron) number $P$ ($N$) from the nearest magic number (8, 20, 28, 50, 82, 126).
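As an illustration, the feature layer can be sketched in a few lines of Python (not part of the paper; the doubly magic case $v_p = v_n = 0$, where $\mathcal{P}$ is formally $0/0$, is resolved to $0$ here by assumption):

```python
import numpy as np

MAGIC = np.array([8, 20, 28, 50, 82, 126])

def feature_layer(P, N):
    """{P, N} -> {P, N, O_p, O_n, delta, v_p, v_n, P_factor}."""
    O_p, O_n = P % 2, N % 2
    delta = ((-1)**P + (-1)**N) / 2
    v_p = int(np.min(np.abs(MAGIC - P)))   # distance to nearest magic number
    v_n = int(np.min(np.abs(MAGIC - N)))
    # promiscuity-like pairing factor; set to 0 when v_p = v_n = 0
    P_factor = v_p * v_n / (v_p + v_n) if v_p + v_n > 0 else 0.0
    return [P, N, O_p, O_n, delta, v_p, v_n, P_factor]
```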
\noindent
3.~Fully connected (dense) layer($\hat{l}_{f,g(x)}$):
\begin{equation}
\begin{aligned}
y_j^{(L)} &= \sum^{\mathrm{D}^{(L-1)}}_i w_{ij}^{(L)} * x_{i}^{(L)} + b_{j}^{(L)}, \\
x^{(L+1)}_i &= g(y^{(L)}_i),
\end{aligned}
\end{equation}
where $x_{i}^{(L)}$ and $y_j^{(L)}$ are the inputs and outputs of the layer $L$, $w_{ij}^{(L)}$ and $b_{j}^{(L)}$ are the weights and biases of the layer $L$.
$g(x)$ denotes the activation function; thus $\hat{l}_{f,r}$ and $\hat{l}_{f,s}$ correspond to $g(x) = \mathrm{ReLU}(x) = \max(0,x)$ and $g(x) = \mathrm{Sigmoid}(x) = 1/(1+e^{-x})$, respectively.
\noindent
4.~Convolutional Layer($\hat{l}_{c,g(x)}$) \cite{Simonyan2014.}:
\begin{equation}
\begin{aligned}
y_{kn}^{(L)} &=\sum_j^{C_{in}^{(L)}} \sum^{K_s^{(L)}}_{i} w_{ki}^{(L)} * x_{j,S_t*n + i}^{(L)} + b_{kn}^{(L)}, \\
x^{(L+1)}_{kn} &= g(y^{(L)}_{kn}),
\end{aligned}
\end{equation}
where $k \in [0, C_{out}^{(L)})$ and $n \in[0, \lceil \mathrm{D}^{(L-1)}/S_t^{(L)} \rceil)$; $C_{in}^{(L)}$ and $C_{out}^{(L)}$ denote the numbers of input and output channels, respectively.
$C_{out}^{(L)}$ is also equal to the number of convolution kernels $w_{ki}^{(L)}$ in layer $L$.
$S_t^{(L)}$ denotes the stride when the convolution kernel slides.
$K_s^{(L)}$ denotes the convolution kernel size.
In this paper, one-dimensional convolution is used.
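A minimal numpy version of this layer may clarify the index conventions. This is a sketch: following standard practice we also sum the kernel over the input channels (i.e.\ $w_{kji}$, which the compact notation above suppresses), and out-of-range taps are zero-padded so that the output length is $\lceil \mathrm{D}/S_t\rceil$, consistent with the stated range of $n$.

```python
import numpy as np

def conv1d(x, w, b, stride):
    """One-dimensional convolution as in the layer formula above.
    x: (C_in, D) input, w: (C_out, C_in, K_s) kernels, b: (C_out,) biases."""
    c_out, c_in, ks = w.shape
    d = x.shape[1]
    n_out = -(-d // stride)                  # ceil(D / S_t)
    # zero-pad so every window x[:, n*stride : n*stride+ks] is defined
    xp = np.zeros((c_in, max(d, (n_out - 1) * stride + ks)))
    xp[:, :d] = x
    y = np.empty((c_out, n_out))
    for k in range(c_out):
        for n in range(n_out):
            y[k, n] = np.sum(w[k] * xp[:, n * stride:n * stride + ks]) + b[k]
    return y
```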
\noindent
5.~Pooling Layer($\hat{l}_{p}$):
\begin{equation}
x^{(L+1)}_{n} = \max(x^{(L)}_{S_t*n},x^{(L)}_{S_t*n+1},...,x^{(L)}_{S_t*n+K_s-1}),
\end{equation}
where $S_t^{(L)}$ and $K_s^{(L)}$ denote the stride and the size.
\noindent
6.~{\it rms-R} Layer($\hat{l}_{\rho \to R}$):
\begin{equation}
R_{pre} = \left[\frac{\int \rho_{pre} r^4 dr}{\int \rho_{pre} r^2 dr}\right]^{\frac{1}{2}}.
\end{equation}
Through this layer, the lattice density $\rho_{pre}$ is transformed into the root mean square radius $R_{pre}$.
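On a radial lattice the {\it rms-R} layer amounts to two quadratures; a Python sketch (a uniform grid is assumed):

```python
import numpy as np

def rms_radius(rho, r):
    """rms-R layer: R = [ int rho r^4 dr / int rho r^2 dr ]^(1/2),
    evaluated with a Riemann sum on a uniform radial lattice."""
    dr = r[1] - r[0]
    num = np.sum(rho * r**4) * dr
    den = np.sum(rho * r**2) * dr
    return np.sqrt(num / den)
```

For a uniform sphere of radius $R_0$ this reproduces the textbook value $R_{rms} = \sqrt{3/5}\,R_0$.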
By stacking the above unit layers, CDG-1 (Table~\ref{ts1}), CDG-2 (Table~\ref{ts2}), and CDG-3 (Table~\ref{ts3}) are built.
The numerical normalization factor is 1000/111; multiplying the outputs by it keeps them between 0 and 1 without affecting the root mean square radius, which accelerates the convergence \cite{Yang2021Phys.Lett.B823.136650}.
Adaptive Moment Estimation (Adam) \cite{Kingma2015.} is chosen as the optimizer for stochastic gradient descent.
The prediction of the network depends slightly on the initialization of the parameters $\mathbf{w}_{initial}$, which makes it possible to derive statistical errors by initializing the parameters differently.
We extend the 2-dimensional input to 800 dimensions by linear or non-linear operations, which spontaneously generates additional features from which density-distribution information can be extracted.
Subsequently, this high-dimensional information is reduced in dimension to produce the density distribution $\rho_{c,i}$.
For this module, fully connected layers and convolutional layers are used for the modeling, respectively.
We list the table of hyperparameters of the CDGs as follows.
\begin{table}[!htb]
\centering
\caption{The hyperparameter set of CDG-1}
\begin{tabular}{cccc}
\hline
L & Types and Operators & D & $g(x)$ \\
\hline
0 & Input Layer($\mathbf{x}$) & 2 & - \\
1 & Dense Layer($\hat{l}_{f,r}$) & 50 & ReLU \\
2 & Dense Layer($\hat{l}_{f,r}$) & 100 & ReLU \\
3 & Dense Layer($\hat{l}_{f,r}$) & 200 & ReLU \\
4 & Dense Layer($\hat{l}_{f,r}$) & 400 & ReLU \\
5 & Dense Layer($\hat{l}_{f,r}$) & 800 & ReLU \\
6 & Dense Layer($\hat{l}_{f,r}$) & 400 & ReLU \\
7 & Dense Layer($\hat{l}_{f,r}$) & 200 & ReLU \\
8 & Output Layer($\hat{l}_{f,s}$) & 150 & Sigmoid \\
9 & {\it rms-R} Layer($\hat{l}_{\rho \to R}$) & 1 & - \\
\hline
\multicolumn{2}{c}{Other hyperparameters} & \multicolumn{2}{c}{Values and properties} \\
\hline
\multicolumn{2}{c}{Numerical Normalization Factor} & \multicolumn{2}{c}{1000/111} \\
\multicolumn{2}{c}{Optimizer} & \multicolumn{2}{c}{Adam}
\\
\multicolumn{2}{c}{Learning rate} & \multicolumn{2}{c}{0.0005}
\\
\multicolumn{2}{c}{Total params} & \multicolumn{2}{c}{857,400}
\\
\multicolumn{2}{c}{Batch size} & \multicolumn{2}{c}{64}
\\
\multicolumn{2}{c}{Running time}& \multicolumn{2}{c}{447.543 s}
\\
\hline
\end{tabular}
\label{ts1}
\end{table}
\begin{table}[!htb]
\centering
\caption{The hyperparameter set of CDG-2}
\begin{tabular}{cccccccc}
\hline
L & ~~~~~~~Types and Operators~~~~~~~ & D &$C_{in}$ & $C_{out}$ & $K_s$ & $S_t$ & $g(x)$ \\
\hline
0 & Input Layer($\mathbf{x}$) & 2 &- & - & - & - & - \\
1 & Dense Layer($\hat{l}_{f,r}$) & 50 &- &- &- & - & ReLU \\
2 & Dense Layer($\hat{l}_{f,r}$) & 100 &- & - &- & - & ReLU \\
3 & Dense Layer($\hat{l}_{f,r}$) & 200 &- & - & - & - & ReLU \\
4 & Dense Layer($\hat{l}_{f,r}$) & 400 &- & - & - & - & ReLU \\
5 & Dense Layer($\hat{l}_{f,r}$) & 800 &- & - & - &- & ReLU \\
6 & Convolutional Layer($\hat{l}_{c,r}$) & - &1 & 5 & 4 & 2 & ReLU \\
7 & Pooling Layer($\hat{l}_{p}$) & - &- & - & 3 & 3 & - \\
8 & Convolutional Layer($\hat{l}_{c,r}$) & - &5 & 20 & 5 & 2 & ReLU \\
9 & Pooling Layer($\hat{l}_{p}$) & - &- & - & 5 & 3 & - \\
10 & Convolutional Layer($\hat{l}_{c,r}$) & - &20 & 100 & 5 & 1 & ReLU \\
11 & Pooling Layer($\hat{l}_{p}$) & - &- & - & 5 & 3 & - \\
12 & Convolutional Layer($\hat{l}_{c,r}$) & - &100 & 200 & 5 & 1 & ReLU \\
13 & Output Layer($\hat{l}_{f,s}$) & 150 &- & - & - & - & Sigmoid \\
14 & {\it rms-R} Layer($\hat{l}_{\rho \to R}$) & 1 &- & - & - & - & - \\
\hline
\multicolumn{3}{c}{~~~~~~~~~~~Other hyperparameters~~~~~~~~~~~} & \multicolumn{5}{c}{~~~~~~~Values and properties~~~~~~~~} \\
\hline
\multicolumn{3}{c}{Numerical Normalization Factor} & \multicolumn{5}{c}{1000/111} \\
\multicolumn{3}{c}{Optimizer} & \multicolumn{5}{c}{Adam}
\\
\multicolumn{3}{c}{Learning rate} & \multicolumn{5}{c}{0.0005}
\\
\multicolumn{3}{c}{Total params} & \multicolumn{5}{c}{567,645}
\\
\multicolumn{3}{c}{Batch size} & \multicolumn{5}{c}{64}
\\
\multicolumn{3}{c}{Running time}& \multicolumn{5}{c}{817.412 s}
\\
\hline
\end{tabular}
\label{ts2}
\end{table}
\begin{table}[!htb]
\centering
\caption{The hyperparameter set of CDG-3}
\begin{tabular}{cccc}
\hline
L & Types and Operators & D & $g(x)$ \\
\hline
0 & Input Layer($\mathbf{x}$) & 2 & - \\
1 & Feature Layer($\hat{l}_{i2\to i8}$) & 8 & - \\
2 & Dense Layer($\hat{l}_{f,r}$) & 50 & ReLU \\
3 & Dense Layer($\hat{l}_{f,r}$) & 100 & ReLU \\
4 & Dense Layer($\hat{l}_{f,r}$) & 200 & ReLU \\
5 & Dense Layer($\hat{l}_{f,r}$) & 400 & ReLU \\
6 & Dense Layer($\hat{l}_{f,r}$) & 800 & ReLU \\
7 & Dense Layer($\hat{l}_{f,r}$) & 400 & ReLU \\
8 & Dense Layer($\hat{l}_{f,r}$) & 200 & ReLU \\
9 & Output Layer($\hat{l}_{f,s}$) & 150 & Sigmoid \\
10 & {\it rms-R} Layer($\hat{l}_{\rho \to R}$) & 1 & - \\
\hline
\multicolumn{2}{c}{Other hyperparameters} & \multicolumn{2}{c}{Values and properties} \\
\hline
\multicolumn{2}{c}{Numerical Normalization Factor} & \multicolumn{2}{c}{1000/111} \\
\multicolumn{2}{c}{Optimizer} & \multicolumn{2}{c}{Adam}
\\
\multicolumn{2}{c}{Learning rate} & \multicolumn{2}{c}{0.0005}
\\
\multicolumn{2}{c}{Total params} & \multicolumn{2}{c}{857,700}
\\
\multicolumn{2}{c}{Batch size} & \multicolumn{2}{c}{64}
\\
\multicolumn{2}{c}{Running time}& \multicolumn{2}{c}{450.277 s}
\\
\hline
\end{tabular}
\label{ts3}
\end{table}
We denote the models as
\begin{equation}
\begin{aligned}
\mathcal{M}_{CDG-1} &= \{\hat{l}_{f,r}^7, \hat{l}_{f,s}, \hat{l}_{\rho \to R} \}, \\
\mathcal{M}_{CDG-2} &= \{\hat{l}_{f,r}^5, (\hat{l}_{c,r}, \hat{l}_{p})^3, \hat{l}_{c,r}, \hat{l}_{f,s}, \hat{l}_{\rho \to R} \}, \\
\mathcal{M}_{CDG-3} &= \{\hat{l}_{i2 \to i8}, \hat{l}_{f,r}^7, \hat{l}_{f,s}, \hat{l}_{\rho \to R} \}.
\end{aligned}
\end{equation}
We calculate the matter density using the equation \cite{Reinhard1991.2850} $\rho_\tau(\mathbf{r}) = \sum_{i,\sigma} w_i \vert\phi_i(\mathbf{r},\sigma,\tau) \vert^2$, where $w_i$ denotes the pairing weight of each single-particle state and $\tau = p\,(n)$ corresponds to protons (neutrons).
The total wave function reads: $\phi_i(\mathbf{r}) = R_i(r)/r \times \mathcal{Y}_{jlm}(\theta,\psi)$.
The functions $\mathcal{Y}_{jlm}(\theta,\psi)$ are spinor spherical harmonics, so the density needs to be calculated only in the radial direction.
Taking into account that the nucleon itself has an intrinsic electromagnetic structure, we need to fold the proton and neutron densities with the intrinsic charge density of the nucleons.
Folding becomes a simple product in Fourier space, so that we transform the densities into the so-called form factors $F_{\tau}(k)$:
\begin{equation}
F_{\tau}(k)=4 \pi \int_{0}^{\infty} \mathrm{d} r r^{2} j_{0}(k r) \rho_{\tau}(r),
\label{eqs1}
\end{equation}
where $j_0$ is the spherical Bessel function of zeroth order.
Similarly, the form factor of the spin-orbit current $\nabla \boldsymbol{J}$ (accounting for magnetic contributions to the charge density) is written as $F_{ls,\tau}(k)$.
The charge form factor is then given by
\begin{equation}
\begin{aligned}
F_{\mathrm{C}}(k)=&\sum_{\tau}\left[F_{\tau}(k) G_{\mathrm{E}, \tau}(k)+F_{l s, \tau}(k) G_{\mathrm{M}}(k)\right] \\
&\times \exp (\frac{1}{8}(\hbar k)^{2} /\left\langle P_{\mathrm{cm}}^{2}\right\rangle),
\end{aligned}
\label{eqs2}
\end{equation}
where $G_{\mathrm{E}, \tau}(k)$ and $G_{\mathrm{M}}(k)$ are the electric and magnetic form factors of the nucleons, and the exponential factor is the centre-of-mass correction (see Ref.\cite{Reinhard1991.2850} for details).
The charge density is obtained from the charge form factor by the inverse
Fourier-Bessel transform,
\begin{equation}
\rho_{c,theo}(r)=\frac{1}{2 \pi^{2}} \int \mathrm{d} k k^{2} j_{0}(k r) F_{\mathrm{C}}(k).
\label{eqs3}
\end{equation}
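As a numerical cross-check, the transform pair of Eqs.(\ref{eqs1}) and (\ref{eqs3}) can be sketched with plain quadrature. This is our own illustration, not the authors' code: the grids and the toy Gaussian density are arbitrary choices, and because both integrands vanish at the grid ends, simple Riemann sums are already very accurate here.

```python
import numpy as np

def j0(x):
    """Spherical Bessel function of order zero, j0(x) = sin(x)/x."""
    small = np.abs(x) < 1e-12
    return np.where(small, 1.0, np.sin(x) / np.where(small, 1.0, x))

def form_factor(r, rho, k):
    """Eq. (S1): F(k) = 4 pi * int dr r^2 j0(k r) rho(r), on a uniform grid."""
    dr = r[1] - r[0]
    return 4.0 * np.pi * np.sum(j0(np.outer(k, r)) * r**2 * rho, axis=1) * dr

def inverse_transform(k, F, r):
    """Eq. (S3): rho(r) = 1/(2 pi^2) * int dk k^2 j0(k r) F(k)."""
    dk = k[1] - k[0]
    return np.sum(j0(np.outer(r, k)) * k**2 * F, axis=1) * dk / (2.0 * np.pi**2)

# Round trip on a toy Gaussian density: the recovered density should match
# the input up to quadrature/truncation error.
r = np.linspace(1e-4, 15.0, 600)   # fm
k = np.linspace(1e-4, 10.0, 600)   # fm^-1
rho = np.exp(-r**2)
assert np.max(np.abs(inverse_transform(k, form_factor(r, rho, k), r) - rho)) < 1e-3
```

The round trip verifies that the two equations are indeed mutual inverses of the spherical Fourier transform.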
As the charge radii are updated, other physical quantities change.
In addition to Fig.5 we list the hyperparameters of M2 (Table \ref{m2}) and M3 (Table \ref{m3}).
M2 can be seen as the inverse of Eqs.(\ref{eqs1})-(\ref{eqs3}), i.e., a charge density to matter density (CDTMD) map.
\vspace*{3\baselineskip}
\begin{table}[!htb]
\centering
\caption{The hyperparameter set of M2}
\begin{tabular}{cccccccc}
\hline
\multicolumn{8}{c} {Cell-1 ($\hat{C}_1 = \{ \hat{l}_{f,r}^3\}$)} \\
\hline
L & Types and Operators & D & & & & & $g(x)$ \\
\hline
0 & Input Layer($\mathbf{x}$) & 2 & & & & & - \\
1 & Dense Layer($\hat{l}_{f,r}$) & 32 & & & & & ReLU \\
2 & Dense Layer($\hat{l}_{f,r}$) & 64 & & & & & ReLU \\
3 & Dense Layer($\hat{l}_{f,r}$) & 128 & & & & & ReLU \\
\hline
\multicolumn{8}{c} {Cell-2 ($\hat{C}_2 = \{ \hat{l}_{c,r},\hat{l}_{p},\hat{l}_{c,r}, \hat{l}_{f,r}\} $) }\\
\hline
L & ~~~~~~~Types and Operators~~~~~~~ & D &$C_{in}$ & $C_{out}$ & $K_s$ & $S_t$ & $g(x)$ \\
\hline
0 & Input Layer($\rho_c$) & 150 &- & - & - & - & - \\
1 & Convolutional Layer($\hat{l}_{c,r}$) & - &1 & 8 & 3 & 3 & ReLU \\
2 & Pooling Layer($\hat{l}_{p}$) & - &- & - & 2 & 2 & - \\
3 & Convolutional Layer($\hat{l}_{c,r}$) & - &8 & 32 & 5 & 5 & ReLU \\
4 & Dense Layer($\hat{l}_{f,r}$) & 512 &- & - & - & - & ReLU \\
\hline
\multicolumn{8}{c} {Cell-3 ($\hat{C}_3= \{ \hat{l}_{f,r}^2 \} $) }\\
\hline
L & ~~~~~~~Types and Operators~~~~~~~ & D & & & & & $g(x)$ \\
\hline
0 & Dense Layer($\hat{l}_{f,r}$) & 768 & & & & & ReLU \\
1 & Dense Layer($\hat{l}_{f,r}$) & 1024 & & & & & ReLU \\
\hline
\multicolumn{8}{c} {Cell-4 ($\hat{C}_4= \{ \hat{l}_{f,r}^2, \hat{l}_{f,s}\}$ ) }\\
\hline
L & ~~~~~~~Types and Operators~~~~~~~ & D & & & & & $g(x)$ \\
\hline
0 & Dense Layer($\hat{l}_{f,r}$) & 512 & & & & & ReLU \\
1 & Dense Layer($\hat{l}_{f,r}$) & 256 & & & & & ReLU \\
2 & Dense Layer($\hat{l}_{f,s}$) & 150 & & & & & Sigmoid \\
\hline
\multicolumn{8}{c} {Cell-5 ($\hat{C}_5= \{ \hat{l}_{f,r}^2, \hat{l}_{f,s}\}$) }\\
\hline
L & ~~~~~~~Types and Operators~~~~~~~ & D & & & & & $g(x)$ \\
\hline
0 & Dense Layer($\hat{l}_{f,r}$) & 512 & & & & & ReLU \\
1 & Dense Layer($\hat{l}_{f,r}$) & 256 & & & & & ReLU \\
2 & Dense Layer($\hat{l}_{f,s}$) & 150 & & & & & Sigmoid \\
\hline
\multicolumn{3}{c}{Other hyperparameters} & \multicolumn{5}{c}{Values and properties} \\
\hline
\multicolumn{3}{c}{Numerical Normalization Factor} & \multicolumn{5}{c}{6.25} \\
\multicolumn{3}{c}{Optimizer} & \multicolumn{5}{c}{Adam}
\\
\multicolumn{3}{c}{Learning rate} & \multicolumn{5}{c}{0.001}
\\
\multicolumn{3}{c}{Total params} & \multicolumn{5}{c}{2,763,404}
\\
\multicolumn{3}{c}{Batch size} & \multicolumn{5}{c}{32}
\\
\multicolumn{3}{c}{Running time}& \multicolumn{5}{c}{2241.901 s}
\\
\hline
\end{tabular}
\label{m2}
\end{table}
M2 is composed of 5 cells ($\hat{C}_{1,2,3,4,5}$).
We denote the model as
\begin{equation}
\left [
\begin{array}{c}
\hat{C}_4 \\
\hat{C}_5
\end{array}
\right ]
\left [
\hat{C}_3 (\hat{C}_1(\mathbf{x}) \uplus \hat{C}_2(\rho_c))
\right ] =
\left [
\begin{array}{c}
\rho_p \\
\rho_n
\end{array}
\right ],
\end{equation}
where the symbol $\uplus$ denotes the concatenation of two vectors.
\clearpage
M3 is a density to binding energy per nucleon (DTBE) map.
\vspace*{3\baselineskip}
\begin{table}[!htb]
\centering
\caption{The hyperparameter set of M3}
\begin{tabular}{cccccccc}
\hline
\multicolumn{8}{c} {Cell-1 ($\hat{C}_1 = \{ \hat{l}_{f,r}^3\}$)} \\
\hline
L & Types and Operators & D & & & & & $g(x)$ \\
\hline
0 & Input Layer($\mathbf{x}$) & 2 & & & & & - \\
1 & Dense Layer($\hat{l}_{f,r}$) & 32 & & & & & ReLU \\
2 & Dense Layer($\hat{l}_{f,r}$) & 64 & & & & & ReLU \\
3 & Dense Layer($\hat{l}_{f,r}$) & 128 & & & & & ReLU \\
\hline
\multicolumn{8}{c} {Cell-2 ($\hat{C}_2 = \{ \hat{l}_{c,r},\hat{l}_{p},\hat{l}_{c,r}, \hat{l}_{f,r}\} $) }\\
\hline
L & ~~~~~~~Types and Operators~~~~~~~ & D &$C_{in}$ & $C_{out}$ & $K_s$ & $S_t$ & $g(x)$ \\
\hline
0 & Input Layer($\rho_{p,n,c}$) & 150 &3 & - & - & - & - \\
1 & Convolutional Layer($\hat{l}_{c,r}$) & - &3 & 24 & 3 & 3 & ReLU \\
2 & Pooling Layer($\hat{l}_{p}$) & - &- & - & 2 & 2 & - \\
3 & Convolutional Layer($\hat{l}_{c,r}$) & - &24 & 72 & 5 & 5 & ReLU \\
4 & Dense Layer($\hat{l}_{f,r}$) & 512 &- & - & - & - & ReLU \\
\hline
\multicolumn{8}{c} {Cell-3 ($\hat{C}_3= \{ \hat{l}_{f,r}^2 \} $) }\\
\hline
L & ~~~~~~~Types and Operators~~~~~~~ & D & & & & & $g(x)$ \\
\hline
0 & Dense Layer($\hat{l}_{f,r}$) & 768 & & & & & ReLU \\
1 & Dense Layer($\hat{l}_{f,r}$) & 1024 & & & & & ReLU \\
\hline
\multicolumn{8}{c} {Cell-4 ($\hat{C}_4= \{ \hat{l}_{f,r}^5, \hat{l}_{f,s}\}$ ) }\\
\hline
L & ~~~~~~~Types and Operators~~~~~~~ & D & & & & & $g(x)$ \\
\hline
0 & Dense Layer($\hat{l}_{f,r}$) & 1024 & & & & & ReLU \\
1 & Dense Layer($\hat{l}_{f,r}$) & 512 & & & & & ReLU \\
2 & Dense Layer($\hat{l}_{f,r}$) & 256 & & & & & ReLU \\
3 & Dense Layer($\hat{l}_{f,r}$) & 128 & & & & & ReLU \\
4 & Dense Layer($\hat{l}_{f,r}$) & 32 & & & & & ReLU \\
5 & Dense Layer($\hat{l}_{f,s}$) & 1 & & & & & Sigmoid \\
\hline
\multicolumn{3}{c}{Other hyperparameters} & \multicolumn{5}{c}{Values and properties} \\
\hline
\multicolumn{3}{c}{Numerical Normalization Factor} & \multicolumn{5}{c}{6.25} \\
\multicolumn{3}{c}{Optimizer} & \multicolumn{5}{c}{Adam}
\\
\multicolumn{3}{c}{Learning rate} & \multicolumn{5}{c}{0.0001}
\\
\multicolumn{3}{c}{Total params} & \multicolumn{5}{c}{2,177,241}
\\
\multicolumn{3}{c}{Batch size} & \multicolumn{5}{c}{32}
\\
\multicolumn{3}{c}{Running time}& \multicolumn{5}{c}{2740.545 s}
\\
\hline
\end{tabular}
\label{m3}
\end{table}
M3 is composed of 4 cells ($\hat{C}_{1,2,3,4}$).
We denote the model as
\begin{equation}
\hat{C}_4
\left [
\hat{C}_3 (\hat{C}_1(\mathbf{x}) \uplus \hat{C}_2(\rho_{p,n,c}))
\right ] = E/A.
\end{equation}
M2 and M3 are trained on the densities and binding energies of DDHF under different nuclear forces (SkM, SkM*, SkIII, SLy4, SkT, and SkT3), with learning deviations of less than 0.2\%.
\clearpage
\section{Supplemental figures}
We show the experimental data and the selected training nuclei (Fig.\ref{fig:exp}).
In addition to Fig.3 we also include here the results for CDG-1 (Fig.\ref{fig:cdg1}) and CDG-2 (Fig.\ref{fig:cdg2}).
\begin{figure}[!htb]
\includegraphics[width=13 cm]{Figure_SM/Expdata.eps}
\caption{\label{fig:exp} (Color online) Experimental charge radii and the positions of the trained nuclides.
}
\end{figure}
\begin{figure}[!htb]
\includegraphics[width=13 cm]{Figure_SM/CDG-1.eps}
\caption{\label{fig:cdg1} (Color online) Prediction of charge radius by CDG-1.
}
\end{figure}
\begin{figure}[!htb]
\includegraphics[width=13 cm]{Figure_SM/CDG-2.eps}
\caption{\label{fig:cdg2} (Color online) Prediction of charge radius by CDG-2.
}
\end{figure}
\begin{figure}[!htb]
\includegraphics[width=14 cm]{Figure_SM/profile_2.eps}
\caption{\label{fig:profile} (Color online)
The charge density distributions of four untrained, randomly chosen nuclei, $^{26}$Na (a), $^{103}$Mo (b), $^{152}$Sm (c), and $^{204}$Pb (d), predicted by CDG-3 and compared with the original DDHF data; the inner panels use logarithmic scales. Also shown are the corrections $\delta \rho_c$ to DDHF by the neural network for the same nuclei.
}
\end{figure}
The radius residual information is transferred to the correction of the density distribution by
\begin{equation}
\label{eq:drho}
\begin{aligned}
(\delta \rho, \delta R) &= \mathcal{M}(\mathbf{x},\mathbf{w}^{\mathrm{MLE}}_{theo} + \delta \mathbf{w}) - (\rho , R)_{theo}, \\
\delta \mathbf{w} &= \sum_{t_2} \mathcal{F} \left(\frac{\partial Loss(0.7, \mathbf{w}^{(t_2)})}{\partial \mathbf{w}}\right),
\end{aligned}
\end{equation}
where $\mathcal{F}$ is the Adam optimizer.
The network weights $\mathbf{w}^{\mathrm{MLE}}_{theo}$ that describe the theoretical distribution are shifted by $\delta \mathbf{w}$ due to the inclusion of experimental data.
The corresponding results are presented in Fig.\ref{fig:profile}.
The charge density distributions of the four untrained, randomly chosen nuclei $^{26}$Na (a), $^{103}$Mo (b), $^{152}$Sm (c) and $^{204}$Pb (d) predicted by CDG-3 are shown and compared with the original DDHF data, with the inner panels in logarithmic scales.
As can be seen from Fig.\ref{fig:profile}, the corrections are particularly small.
These fine corrections are on the order of $\rm 10^{-3}$ $\rm fm^{-3}$ and converge within a very narrow interval.
Different nuclei converge to different correction patterns, and these patterns are determined by $\delta \mathbf{w}$.
The correction patterns deserve further study for isotope chains.
\begin{figure}[!htb]
\includegraphics[width=10 cm]{Figure_SM/radiipre_COMPARE.eps}
\caption{\label{fig:radii2} (Color online) Predictions for Ca by CDG-3 with Ca isotopes trained (a) and untrained (b), and predictions for Pb by CDG-3 with Pb isotopes trained (c) and untrained (d). }
\end{figure}
In order to further test the predictive power of the network, we trained CDG-3 on a train set in which the Ca isotopes are replaced by other random nuclides; the resulting predictions are shown in Fig.\ref{fig:radii2}(b), compared with Fig.\ref{fig:radii2}(a), which employs the same train set as Fig.\ref{fig:exp}.
In this case, there are also only a few nuclei ($^{39}$Ca, $^{44}$Ca, $^{48}$Ca) that are not covered within one standard deviation.
Fig.\ref{fig:radii2}(c) and Fig.\ref{fig:radii2}(d) show the CDG-3 predictions for Pb isotopes, where (c) employs the same train set as Fig.\ref{fig:exp}, while (d) employs a train set that replaces the Pb isotopes with other random nuclides.
One can see that even though no nuclei in the isotope chain are trained, the experimental data are still roughly predicted.
\bibliographystyle{apsrev4-1}
\section{Introduction}
{\it Introduction}-The charge density distribution is essential for extracting nuclear structure information, including the evolution of shell structure, shape coexistence, shape transition, and neutron-skin thickness \cite{Campbell2016Prog.Part.Nucl.Phys.86.127180, Wood1992Phys.Rep.215.101201, Bender2003Rev.Mod.Phys.75.121180, Li2016Nucl.Sci.Tech.27.16}.
Experimentally, the relative radii of neighboring nuclei are determined using muonic-atom spectra \cite{Angeli2013At.DataNucl.DataTables99.6995} and electron scattering experiments \cite{Angeli2013At.DataNucl.DataTables99.6995, Helm1956Phys.Rev.104.14661475,Hofstadter1956Rev.Mod.Phys.28.214254}, as well as isotope shifts.
Given the strong connection between charge density and matter density, one generally calculates the charge density distribution by merging the form factor \cite{Dreher1974Nucl.Phys.A235.219248, Friar1975Adv.Nucl.Phys..219376} with the matter density determined by proton-nucleon scattering experiments \cite{Mutschler2016Nat.Phys.13.152156}, proton-knockout experiments, or even heavy-ion collisions \cite{Fan2019Phys.Rev.C99.041601}.
Early on, the Fermi distributions \cite{Yennie1954Phys.Rev.95.500512, Ullah1994Pramana43.165168} and Fourier-Bessel expansion \cite{Ullah1994Pramana43.165168, Euteneuer1978Nucl.Phys.A298.452476} were used to describe the matter and charge densities of atomic nuclei roughly.
With the development of computational power, various branching models \cite{Naito2019Phys.Rev.C99.024309, Zhang2017Phys.Rev.C95.014316, Bertulani2019Phys.Rev.C100.015802, Bardeen1957Phys.Rev.106.162164,Bogolyubov1958Sov.Phys.JETP7.4146, Neff2015Phys.Rev.C92.024003, Dean2008Phys.Rev.Lett.101.119201,Roth2009Phys.Rev.C79.064324,Dytrych2007Phys.Rev.Lett.98.162503, Dytrych2013Phys.Rev.Lett.111.252501} derived from density functional theory (DFT) and shell models have become more popular among theorists.
However, due to the complexity of nuclear many-body systems, theoretical calculations remain inadequate owing to beyond-mean-field effects and nucleon-nucleon correlations \cite{Yang2019Phys.Rev.C100.054325, Tanaka2021Science371.260264, Duer2018Nature560.617621, Miller2019Phys.Rev.Lett.123.232003}.
For nuclear complex systems, back propagation neural networks have achieved brilliant success in various aspects, such as nuclear masses \cite{Niu2018Phys.Lett.B778.4853, Ma2020Phys.Rev.C101.045204, Athanassopoulos2004Nucl.Phys.A743.222235}, nuclear spins and parities \cite{Gernoth1993Phys.Lett.B300.17}, charge radii \cite{Utama2016J.Phys.GNucl.Part.Phys.43.114002,Wu2020Phys.Rev.C102.054323}, excited states \cite{Lasseri2020Phys.Rev.Lett.124.162502}, extrapolation problems in $ab~initio$ method \cite{Negoita2019Phys.Rev.C99.054308}, $\alpha$-decay half-lives \cite{Saxena2021J.Phys.GNucl.Part.Phys.48.055103}, $\beta$-decay half-lives \cite{Niu2019Phys.Rev.C99.064307}, fission yields \cite{Wang2019Phys.Rev.Lett.123.122501, Qiao2021Phys.Rev.C103.034621}, and so on.
Most present neural networks learn and predict the residuals between theoretical and experimental values, which has the advantage that the corrected predictions outperform the existing theoretical model \cite{Niu2018Phys.Lett.B778.4853, Utama2016Phys.Rev.C93.014311,Niu2019Phys.Rev.C99.064307}.
However, such corrections are difficult to make self-consistent across different observables and have weak physical interpretability.
Google proposed a hybrid quantum-classical machine learning model for training on beyond-classical data types, where back-propagation is used to tune the quantum logic gate parameters, allowing a deep integration of physics and neural networks \cite{Broughton2020.}.
Based on the Hohenberg-Kohn maps \cite{Moreno2020Phys.Rev.Lett.125.076402} of DFT and the proven strong generalization ability of neural networks in describing density distributions \cite{Yang2021Phys.Lett.B823.136650}, we collectively constrain the charge density distributions by back-propagation with experimental charge radii data, which makes the residual information flow back from radii to densities.
We construct the charge densities to binding energy map to achieve the further transfer of information from radii to binding energies.
{\it The charge density generators}-Previously, a multilayer feed-forward neural network with an error back-propagation algorithm was elaborated to perform the maximum likelihood estimation \cite{Blundell2015.16131622} in the process of generating density distributions \cite{Yang2021Phys.Lett.B823.136650} approximating the theoretical calculation.
A network quickly trained on about 200-300 nuclei is sufficient to describe the density distributions of all nuclei on the nuclear chart and has powerful extrapolation capability.
Based on the high computational efficiency and generalization ability of neural networks, we introduce the correlation between the theoretically calculated charge density and the experimental root mean square charge radius (RMSCR) into a new hybrid neural network.
\begin{figure}[!htb]
\includegraphics[width=8 cm]{Figure_main/NET01.pdf}
\caption{\label{fig:net} (Color online) Schematic diagram of the structure of the charge density generator neural network.
}
\end{figure}
The structure of the network is shown in Fig.\ref{fig:net}, where $\rho_{c,i} = \rho_c(r_i)$ is the density on the lattice with $r_i = 0.1 \times i ~\rm{fm}$.
The input is the number of protons and neutrons of a certain nuclide, $\mathbf{x} = \{P, N\}$.
The outputs are the charge density $\rho_c$ and the RMSCR ($R_c$) obtained from the customized integration layer $\hat{l}_{\rho_c \rightarrow R_c}$,
\begin{equation}
\hat{l}_{\rho_c \rightarrow R_c} : R_{c,pre} =\sqrt{\frac{\int \rho_{c,pre} r^4 dr}{\int \rho_{c,pre} r^2 dr}}.
\end{equation}
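Numerically, this layer is just a ratio of two quadratures on the radial lattice, in which the grid spacing cancels. A minimal NumPy stand-in (our sketch, using the $r_i = 0.1\,i$ fm lattice described below, not the authors' implementation) is:

```python
import numpy as np

# Radial lattice of the density output, r_i = 0.1 * i fm, i = 1..150.
r = 0.1 * np.arange(1, 151)

def rms_radius(rho_c):
    """R_c = sqrt( int rho r^4 dr / int rho r^2 dr ); dr cancels in the ratio."""
    return np.sqrt(np.sum(rho_c * r**4) / np.sum(rho_c * r**2))

# Sanity check: for a Gaussian rho ~ exp(-r^2 / (2 a^2)) the exact
# root-mean-square radius is sqrt(3) * a.
a = 1.5
rho_gauss = np.exp(-r**2 / (2 * a**2))
assert abs(rms_radius(rho_gauss) - np.sqrt(3.0) * a) < 1e-2
```

Because the layer is a smooth function of the density values, its gradient propagates through to the density outputs during back-propagation, which is what lets the radius residuals correct the density later on.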
For the hidden module, three attempts are made: 1. A deep fully connected neural network; 2. A convolutional neural network, suitable for learning the gradient information of features; 3. A neural network with a feature layer $\hat{l}_{i2\to i8}$, containing parity, pairing and shell effects,
\begin{equation}
\hat{l}_{i2\to i8} : \{P,N\} \to \{P,N,\mathcal{O}_p,\mathcal{O}_n, \delta, v_p, v_n, \mathcal{P}\}
\end{equation}
where
\begin{equation}
\begin{aligned}
&\mathcal{O}_p = P \bmod 2, &&\mathcal{O}_n = N \bmod 2,\\
&\delta = [(-1)^P+(-1)^N]/2, &&\mathcal{P}=v_pv_n/(v_p+v_n).
\end{aligned}
\end{equation}
$v_p$ ($v_n$) is the difference between the actual nucleon number $P$ ($N$) and the nearest magic number (8, 20, 28, 50, 82, 126).
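In code, the feature layer amounts to a few integer operations per nucleus. A plain-Python sketch follows (our illustration; in particular, returning $\mathcal{P}=0$ for doubly magic nuclei with $v_p = v_n = 0$ is our own guard, not specified in the text):

```python
MAGIC = (8, 20, 28, 50, 82, 126)

def feature_layer(P, N):
    """Map {P, N} to the eight physics-motivated inputs of CDG-3."""
    v_p = min(abs(P - m) for m in MAGIC)   # distance to nearest proton magic number
    v_n = min(abs(N - m) for m in MAGIC)   # distance to nearest neutron magic number
    O_p, O_n = P % 2, N % 2                # parities
    delta = ((-1) ** P + (-1) ** N) / 2    # pairing term
    # Promiscuity factor; guard the v_p = v_n = 0 case (doubly magic nuclei).
    Pf = v_p * v_n / (v_p + v_n) if (v_p + v_n) > 0 else 0.0
    return (P, N, O_p, O_n, delta, v_p, v_n, Pf)

# Example: 48Ca (P = 20, N = 28) is doubly magic, so v_p = v_n = 0.
assert feature_layer(20, 28) == (20, 28, 0, 0, 1.0, 0, 0, 0.0)
```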
Hereafter, we refer to the three schemes $\mathcal{M}$ as charge density generator-1 (CDG-1), CDG-2 and CDG-3, respectively.
Please see the supplementary materials (SM) for parameter details.
This process is simply noted as
\begin{equation}
(\rho_{c} , R_{c})_{pre} = \mathcal{M}(\mathbf{x},\mathbf{w}),
\end{equation}
where $\mathbf{w}$ is the set of trainable parameters.
The charge density distribution calculated by physical models $\rho_{c,theo}$ is quite accurate.
The RMSCR residuals between theory and experiment can be eliminated by assuming a correction $\delta\rho_c$ to charge density with $\int_0^\infty \delta\rho_c(r) r^2 dr = 0$.
We record this process as standardization.
The standardized charge density distribution $\rho_{c,std}$ satisfying $\hat{l}_{\rho_c \rightarrow R_c}(\rho_{c,std}) \simeq R_{c,exp}$ can be obtained by superimposing the correction,
\begin{equation}
\label{eq1}
\rho_{c,std} = \rho_{c,theo}+ \delta\rho_c,
\end{equation}
where the experimental RMSCR values $R_{c,exp}$, obtained by various means with high accuracy, are taken from Ref. \cite{Angeli2013At.DataNucl.DataTables99.6995}.
Additionally, we add an assumption to make the correction $\delta\rho_c$ reasonable: {\it make the smallest possible corrections to the theory}.
To this end, we design a composite loss function.
The normalized mean square error (NMSE) \cite{Yang2021Phys.Lett.B823.136650} is employed as a valid assessment of the density distribution, $L_d$:
\begin{equation}
\begin{aligned}
L_d &= \frac{1}{N_g}\int (\lambda \rho_{c,pre}(r)-\rho_{c,theo}(r))^2 dr, \\
\lambda &= \frac{Ze}{\int 4\pi \rho_{c,pre}(r) r^2 d r},
\end{aligned}
\end{equation}
where $\lambda$ is a normalization factor constrained by nuclear charge number $Ze$.
$N_g = 150$ indicates the number of grid points.
$\rho_{c,theo}(r)$ denotes the charge density distribution calculated by density-dependent Hartree-Fock (DDHF), where the SkM* interaction is adopted and the Bardeen-Cooper-Schrieffer (BCS) effect is also taken into account.
The charge density is obtained from the charge form factor $F_{\mathrm{C}}$ by the inverse Fourier-Bessel transform,
\begin{equation}
\label{eq:rhocth}
\rho_{c,theo}(r)=\frac{1}{2 \pi^{2}} \int \mathrm{d} k k^{2} j_{0}(k r) F_{\mathrm{C}}(k),
\end{equation}
where $j_0$ is the spherical Bessel function of zeroth order.
The contributions of matter density and spin-orbit current are folded in $F_{\mathrm{C}}$ \cite{Reinhard1991.2850} (see SM for details).
The RMSCR has a large range of variation (about 1 - 6 fm), and therefore the Pearson $\chi^2$ divergence is chosen as its loss function $L_r$,
\begin{equation}
L_r = \frac{( R_{c,pre}-R_{c,exp})^2}{R_{c,pre}} .
\end{equation}
Combining the two loss functions ensures that the assumption holds:
\begin{equation}
Loss(\mathcal{W},\mathbf{w}):=\frac{1}{B_s} \sum_{nu}^{B_s} [(1-\mathcal{W})\times L^{nu}_d(\mathbf{w}) + \mathcal{W}\times L^{nu}_r(\mathbf{w})] ,
\end{equation}
where $\mathcal{W}$ is the weighting factor.
$B_s=64$ is the batch size, which means that 64 nuclei ($nu$) are constrained simultaneously in each training step.
Actually, the corrections for different nuclei are derived from the same parameter updates $\delta \mathbf{w}$, i.e., {\it ~the correction of each nucleus is uniformly constrained by the other nuclei}.
We randomly take 640 nuclei (10 batches) of the about 900 nuclei measured to date in laboratories as the train set.
The remaining nuclei are recorded as the validation set (see SM for details).
One epoch is recorded when all nuclei in the train set have been trained once.
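Putting the two terms together, the per-nucleus objective can be sketched as follows. This is a NumPy illustration under our own grid assumptions ($N_g = 150$ points, $dr = 0.1$ fm); in the actual network the same quantities are evaluated inside the training graph so that gradients flow back to the weights.

```python
import numpy as np

N_g, dr = 150, 0.1
r = dr * np.arange(1, N_g + 1)

def composite_loss(rho_pre, rho_theo, R_pre, R_exp, Ze, W):
    """(1 - W) * L_d + W * L_r for a single nucleus, following the text."""
    lam = Ze / (np.sum(4 * np.pi * rho_pre * r**2) * dr)   # charge normalization
    L_d = np.sum((lam * rho_pre - rho_theo) ** 2) * dr / N_g
    L_r = (R_pre - R_exp) ** 2 / R_pre                      # Pearson chi^2 term
    return (1 - W) * L_d + W * L_r

# With W = 1 a perfect radius gives zero loss regardless of the density;
# with W = 0 the loss reduces to the density term only.
rho = np.exp(-r)
assert composite_loss(rho, rho * 2, 3.0, 3.0, 1.0, W=1.0) == 0.0
```

Setting $\mathcal{W}=0.7$, as in the fine-tuning stage, makes the radius term slightly dominate while still penalizing departures from the theoretical density.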
\begin{figure}[!htb]
\includegraphics[width=9 cm]{Figure_main/loss.pdf}
\caption{\label{fig:loss} (Color online)
Upper panel: The evaluation functions NMSE ($\mathcal{W}=0$) for density distributions as a function of epochs on the train set and validation set. Lower panel: The evaluation functions Pearson $\chi^2$ divergence ($\mathcal{W}=1$) for charge radii as a function of epochs on the train set and validation set.
The inner panels: The errors of the densities (upper) and radii (lower) of CDG-1, CDG-2 and CDG-3 on validation set labeled as G1, G2 and G3.}
\end{figure}
{\it Machine learning processes}-The process of machine learning is divided into two stages, simulating the DDHF theory and correcting with experimental data.
The evaluation of these processes is shown in Fig.\ref{fig:loss}.
The first 3000 epochs are the stage of simulating the DDHF theory, during which the maximum likelihood estimation of the objective function $Loss(0,\mathbf{w})=L_d$ is performed.
One can see that the loss functions of the train set and validation set almost completely overlap.
This means that there is no overfitting for either the charge density (upper panel) or the RMSCR (lower panel), which demonstrates the generalization ability of the network.
The short magenta line is the Pearson $\chi^2$ divergence $L_{r,theo}$ on the validation set between the charge radius $R_{c,theo}$ of the DDHF and the experimental value $R_{c,exp}$, defined as
\begin{equation}
L_{r,theo}=\frac{1}{V} \sum_{nu \in val.} \frac{( R_{c,theo}^{nu}-R_{c,exp}^{nu})^2}{R_{c,theo}^{nu}},
\end{equation}
where $V$ is the number of nuclides in the validation set.
Obviously, after a short training, the loss values $L_r$ overlap with $L_{r,theo}$.
This indicates that the network naturally captures the RMSCR information well in the process of learning the density distribution.
After 3000 epochs, the pre-trained model is further tuned with objective function $Loss(\mathcal{W}=0.7)$, which allows the importance of experimental data to slightly exceed that of theoretical calculations.
Thus, the charge density distributions under collective constraints are spontaneously corrected by the network.
Since the correction brings the predicted charge radius close to the experimental value while making the distribution deviate from the physical model, the loss value $L_r$ falls and the loss value $L_d$ jumps, as shown in Fig.\ref{fig:loss}.
We define the degree of optimization $\Delta$ of the residuals between theoretical and experimental values:
\begin{equation}
\Delta = L_{r,theo}-L_{r,s},
\end{equation}
where $L_{r,s} = Loss(1, \mathbf{w}^{\mathrm{MLE}}_{theo}+\delta \mathbf{w})$ indicates the loss value $L_r$ of the trained network.
$\mathbf{w}^{\mathrm{MLE}}_{theo}$ achieves the maximum likelihood estimate of the theory.
For any other theoretical models, the correction is still valid if $\Delta>0$.
Notably, the training costs only 10 GPU minutes.
\begin{figure}[htb]
\includegraphics[width=8.5 cm]{Figure_main/predictandacc.pdf}
\caption{\label{fig:map2} (Color online)
The trained nuclides' positions on the nuclear chart (blue square box), the charge radii predicted by CDG-3 (a), and the prediction accuracy of different generators (b).
The accuracy indicates the share of experimental data on validation set falling within the different predicted standard deviations (red for 1 $std$., blue for 1.5 $std$. and pink for 2 $std$.) .}
\end{figure}
{\it Standardized charge density and charge radius}-
To explore the performance of the networks, the errors of the densities (upper) and radii (lower) of the retained CDG-1, CDG-2 and CDG-3 on the validation set are plotted in the inner panels of Fig.\ref{fig:loss}, labeled as G1, G2 and G3.
It can be observed that the validation set errors of CDG-3 are minimal for both density and radius.
The predictions of CDG-3 are closest to the experimental radii, and the corrections to the theory of CDG-3 are smallest, which agrees with our assumption.
This is due to the strong connection between odd-even staggering and the pairing and shell effects taken into account in CDG-3, which is consistent with the mass research \cite{Niu2018Phys.Lett.B778.4853}.
Therefore, we conclude that network structures that do not consider these physical factors, such as CDG-1 and CDG-2, are insufficient to improve the prediction accuracy.
Accuracy is a more intuitive representation of CDGs' performance than error.
Fig.\ref{fig:map2}(b) shows the prediction accuracy of the three networks, where the accuracy indicates the share of experimental data on validation set falling within the different predicted standard deviations (red for 1 $std$., blue for 1.5 $std$. and pink for 2 $std$.).
The accuracies are consistent with the errors on the validation set shown in Fig.\ref{fig:loss} for the different networks, i.e., the smaller the error, the higher the accuracy.
In particular, the share of experimental data falling within 2 standard deviations does not exceed 80\% for CDG-1 and CDG-2.
Meanwhile, the accuracy of CDG-3 (1 $std$.) is already close to 80\%, and even reaches 96\% when expanded to 2 $std$.
It can be speculated that CDG-3 can predict the radii of the remaining about 2000 unmeasured nuclides with high precision.
We present the predictions for nuclear charge radii in Fig.\ref{fig:map2}(a) with colorful squares.
\begin{figure}
\includegraphics[width=9 cm]{Figure_main/radiipre.pdf}
\caption{\label{fig:radii2} (Color online)
The radii obtained by CDG-3 prediction with statistical error (1 $std.$), experiment and DDHF on the Ca (a), Pb (b), Dy (c), and Tb (d) isotope chains.}
\end{figure}
The neural network gives nuclear radii different from those of DDHF.
We display the radii obtained by CDG-3 prediction with statistical error (1 $std.$), experiment and DDHF on different isotope chains in Fig.\ref{fig:radii2}.
It is found that the trained network with parity, pairing and shell effects describes well the odd-even staggering of the Ca isotopes, as shown in Fig.\ref{fig:radii2}(a).
For Pb isotopes (b), the performance of CDG-3 is comparable to the theoretical model, because the properties of Pb serve as a benchmark for the SkM* interaction.
The present predictions almost match the experimental data in the case of training several Pb isotopes.
The predictive power is still significant, even if Ca and Pb isotopes are removed from the train set (see SM).
Given the excellent performance of CDG-3, we make predictions for the untrained $\rm ^A$Dy (c) and $\rm ^A$Tb (d).
For $\rm ^A$Dy, our predictions agree with the experiment, but with much narrower error intervals.
For $\rm ^A$Tb, our predictions are larger than the experimental average by almost 0.1 fm.
The findings from CDG-3 can clearly be tested in laboratories worldwide.
\begin{figure}[!htb]
\includegraphics[width=7 cm]{Figure_main/totNet.pdf}
\caption{\label{fig:net2} (Color online) Schematic diagram of the structure of the feedforward neural network for the CDTBE map.
}
\end{figure}
{\it Charge density to binding energy map}-
Furthermore, corrections to the density will theoretically change the other nuclear properties.
According to the Hohenberg-Kohn theorem \cite{Moreno2020Phys.Rev.Lett.125.076402}, there exists a bijective map between the local matter density and the one-body potential.
This means that the corrections can be mapped to other observables, such as binding energy \cite{Ryczko2019Phys.Rev.A100.022512} and wavefunctions.
We construct the charge density to binding energy (CDTBE) map.
The schematic diagram of the structure of feedforward neural network is shown in Fig.\ref{fig:net2}.
There are three parts in the CDTBE map: M1, M2 and M3.
M1 is our previously trained charge density generator, and we take the CDG-3.
M2 is a charge density to matter density map, i.e., the inverse of Eq.(\ref{eq:rhocth}).
And M3 is a density to binding energy per nucleon map.
M2 and M3 are trained on the densities and binding energies of DDHF with different nuclear interactions (SkM, SkM*, SkIII, SLy4, SkT, and SkT3 \cite{Friedrich1986Phys.Rev.C33.335351}), with learning deviations of less than 0.2\%.
This allows the constructed map to contain the self-consistency and completeness of the Schr{\"o}dinger equation.
Details of the parameters for M2 and M3 can be found in the SM.
As the charge density updates in CDG-3, the residual information flows to other observables, $\delta R_c \to \delta \rho_c \to \{ \delta\rho_n , \delta\rho_p \} \to \delta(E/A)$.
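As an illustration of how such a feedforward map can be trained, the following is a minimal, self-contained sketch of a one-hidden-layer network fitted by full-batch gradient descent. The layer sizes, activation, learning rate, and synthetic input-output data here are purely illustrative assumptions and not the actual architecture or training data of M1, M2 or M3 (those are given in the SM).

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic stand-in data: a few "density features" x -> a smooth scalar target
X = rng.uniform(-1, 1, size=(256, 4))
y = np.sin(X @ np.array([1.0, -0.5, 0.3, 0.8]))[:, None]

# one hidden tanh layer (sizes are illustrative, not those of M2/M3)
W1 = rng.normal(0, 0.5, (4, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)

losses, lr = [], 0.2
for step in range(3000):
    h = np.tanh(X @ W1 + b1)              # forward pass
    out = h @ W2 + b2
    err = out - y
    losses.append(float(np.mean(err ** 2)))
    g_out = 2 * err / len(X)              # dL/d(out) for mean squared error
    gW2 = h.T @ g_out; gb2 = g_out.sum(0)
    g_h = (g_out @ W2.T) * (1 - h ** 2)   # backpropagate through tanh
    gW1 = X.T @ g_h; gb1 = g_h.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1        # gradient descent update
    W2 -= lr * gW2; b2 -= lr * gb2
```

The training loss decreases steadily, which is all the sketch is meant to show; the actual maps are trained on DDHF densities and binding energies to sub-0.2\% deviation.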
\begin{figure}[!htb]
\includegraphics[width=8 cm]{Figure_main/CaEtoA.pdf}
\caption{\label{fig:skin} (Color online) Binding energy per nucleon of the Ca isotope chain obtained from the CDG-3 residual information flow with statistical error (1 $std.$), compared with experiment and DDHF.
}
\end{figure}
The binding energy is one of the most intriguing observables, and the effect from radii residuals for Ca isotopes is visualized in Fig.\ref{fig:skin}.
We note that the propagated corrections provide a better description for the binding energy of the Ca isotopes excluding $^{47,48,49}$Ca.
Especially for $^{40,42,44}$Ca, both radii (see Fig.\ref{fig:radii2}(a)) and binding energies coincide well with the experimental values.
This illustrates that the physical mechanisms contained in the CDTBE map based on density functional theory are adequate for these nuclei, i.e., other effects such as relativistic effects, deformations, and mixed configurations are not essential for them.
For $^{48}$Ca, the radius is corrected to a smaller value than DDHF, leading to an increase of the nuclear densities and a further decrease of the binding energy, which is consistent with the equation of state.
However, it results in further deviations from the experiment, which supports an indispensable beyond-mean-field effect near $^{48}$Ca.
Employing more advanced physical models to train neural networks with more physical mechanisms, such as relativistic effects, deformations, and mixed configurations, would help to connect more observables and enhance the predictions.
\section{Summary}
In this study, supervised learning of DDHF model data is first performed to generate the nuclear charge density distributions through deep neural networks, CDG-1, CDG-2 and CDG-3.
The pre-trained networks are further tuned by adding the experimental root mean square radii at the outputs, and this collective constraint corrects the neuron weights.
CDG-3 achieves higher accuracy than CDG-1 and CDG-2: 80\% of the untrained nuclei fall within one standard deviation of the prediction and 96\% within two standard deviations, since a feature capture layer with parity, pairing and shell effects is added in CDG-3 to tame odd-even staggering.
CDG-3 significantly reduces the error interval for the radii of $\rm ^A$Dy.
For $\rm ^A$Tb, the predictions are larger than the experimental average by almost 0.1 fm and different from DDHF.
These findings can be tested by experiments worldwide.
Correlation of physical quantities is essential for the development of nuclear many-body models and the measurement of experimental data.
The residuals in the charge radii are further propagated to charge density, matter density and binding energy.
The propagated corrections provide a better description for the binding energy of the Ca isotopes excluding $^{47,48,49}$Ca, which supports an indispensable beyond-mean-field effect near $^{48}$Ca.
We propose a new phenomenological approach combining theory and experiment, which makes it possible to find a realistic Hohenberg-Kohn map for complex systems.
A more realistic map requires connecting more physical quantities by the neural networks.
However, as the residuals propagate deeply, the model dependence of the observables grows, which can be eliminated by employing Bayesian model averaging (BMA) \cite{Neufcourt2019Phys.Rev.Lett.122.062502, Neufcourt2020Phys.Rev.C101.044307}.
In the current framework, we can not only improve the accuracy of theoretical predictions, but also maintain self-consistency and interpretability.
\section{Acknowledgements}
This work is supported by the National Natural Science Foundation of China under Grants No.~12005175,
the Fundamental Research Funds for the Central Universities under Grant No.~SWU119076,
the JSPS Grant-in-Aid for Early-Career Scientists under Grant No.~18K13549,
the JSPS Grant-in-Aid for Scientific Research (S) under Grant No.~20H05648.
This work was also partially supported by the RIKEN Pioneering Project: Evolution of Matter in the Universe and the RIKEN Special Post-doctoral Researcher Program.
\bibliographystyle{apsrev4-1}
\section{Introduction} \selabel{intro}
The parabosonic algebra has a long history in both theoretical and
mathematical physics. Although formally introduced in the fifties
by Green \cite{Green}, Greenberg-Messiah \cite{GreeMe}, and Volkov
\cite{Vo} in the context of second quantization, its history
traces back to the fundamental conceptual problems of quantum
mechanics; in particular to Wigner's approach to first
quantization \cite{Wi}. In quantum mechanics we consider a unital
associative non-commutative algebra, generated by the
elements $p_{i}, q_{i}, I$, $i=1,\ldots,n$, subject to the relations (we
have set $\hbar = 1$):
\begin{equation} \label{CCR}
[q_{i}, p_{j}] = i \delta_{ij} I, \qquad [q_{i},
q_{j}] = [p_{i}, p_{j}] = 0
\end{equation}
$I$ is of course the unity of the algebra and $[x, y]$ stands for
$xy-yx$. The states of the system are vectors of a Hilbert space,
where the elements of the above mentioned algebra act. The
dynamics is determined by the Heisenberg equations of motion:
\begin{equation} \label{Heisenberg}
i \frac{dq_{i}}{dt} = [q_{i}, H],\qquad i
\frac{dp_{i}}{dt} = [p_{i}, H]
\end{equation}
Relations \eqref{CCR} are known in the physical
literature as the Weyl algebra, or the Heisenberg-Weyl algebra or
more commonly as the Canonical Commutation Relations often
abbreviated as CCR. Their central importance for the quantization
procedure, lies in the fact that if one accepts the algebraic
relations \eqref{CCR} together with the quantum dynamical
equations \eqref{Heisenberg} then it is an easy matter (see
\cite{Ehrenf}) to extract the classical Hamiltonian equations of
motion while on the other hand the acceptance of the classical
Hamiltonian equations together with \eqref{CCR} reproduces the
quantum dynamics exactly as described by \eqref{Heisenberg}. We do
not consider arbitrary Hamiltonians of course but functions of the
form $H = \sum_{i=1}^{n}p_{i}^{2} + V(q_{1},\ldots,q_{n})$ which
however are general enough for simple physical systems. In this
way the CCR emerge as a fundamental link between the classical and
the quantum description of the dynamics.
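As a quick sanity check of this link, one can verify the operator identities $[q,H] = ip$ and $[p,H] = -iq$ for the harmonic oscillator $H = (p^{2}+q^{2})/2$ in a truncated matrix representation, so that the Heisenberg equations reproduce the classical ones, $\dot{q} = p$ and $\dot{p} = -q$. This is only a sketch: the truncation size is arbitrary and the identities hold exactly only away from the cutoff.

```python
import numpy as np

N = 12
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # truncated lowering operator
q = (a + a.T) / np.sqrt(2)
p = 1j * (a.T - a) / np.sqrt(2)
H = (p @ p + q @ q) / 2                      # harmonic oscillator, hbar = m = omega = 1

comm = lambda x, y: x @ y - y @ x
dq_dt = -1j * comm(q, H)                     # i dq/dt = [q, H]
dp_dt = -1j * comm(p, H)                     # i dp/dt = [p, H]
```

Away from the last few basis states, `dq_dt` agrees with `p` and `dp_dt` with `-q`, exactly as the classical Hamiltonian equations demand.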
For technical reasons it is common to use, instead of
the variables $p_{i}, q_{i}$, the linear combinations:
$$
b_{j}^{+} = \frac{1}{\sqrt{2}}(q_{j} - ip_{j}), \qquad
b_{j}^{-} = \frac{1}{\sqrt{2}}(q_{j} + ip_{j})
$$
for $j=1,\ldots,n$ in terms of which \eqref{CCR} become:
\begin{equation} \label{CCRbose}
[b_{i}^{-}, b_{j}^{+}] = \delta_{ij} I, \qquad
[b_{i}^{-}, b_{j}^{-}] = [b_{i}^{+}, b_{j}^{+}] = 0
\end{equation}
for $i,j=1,\ldots,n$. These latter relations are usually called
the bosonic algebra (of $n$ bosons), and in the case of
infinitely many degrees of freedom $i,j = 1, 2, \ldots \ $ they become
the starting point of free field theory (i.e.: second
quantisation).
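These combinations can be checked numerically: in the truncated harmonic-oscillator representation, the matrices $b^{\mp}$ built from $q$ and $p$ reproduce the usual lowering and raising operators, and the first relation of \eqref{CCRbose} holds away from the truncation boundary (a sketch for a single degree of freedom):

```python
import numpy as np

N = 10
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # truncated lowering operator
q = (a + a.T) / np.sqrt(2)                   # position matrix
p = 1j * (a.T - a) / np.sqrt(2)              # momentum matrix (hbar = 1)

bm = (q + 1j * p) / np.sqrt(2)               # b^-
bp = (q - 1j * p) / np.sqrt(2)               # b^+
comm = bm @ bp - bp @ bm                     # identity, away from the cutoff
```

The only failure of $[b^{-}, b^{+}] = I$ sits in the last diagonal entry, an artifact of the finite truncation.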
In 1950 E.P. Wigner in a two page publication \cite{Wi}, noticed
that what the above approach implies is that the CCR \eqref{CCR}
are sufficient conditions -but not necessary- for the equivalence
between the classical Hamiltonian equations and the Heisenberg
quantum dynamical equations \eqref{Heisenberg}. In a kind of
reversing the problem, Wigner posed the question of looking for
necessary conditions for the simultaneous fulfillment of classical
and quantum dynamical equations. Working with the simplest example
of a single, one dimensional harmonic oscillator, he stated an
infinite set of solutions for the above mentioned problem. It is
worth noting that a particular irreducible representation of the
CCR was included as one special case among Wigner's infinite
solutions.
A few years later, in 1953, Green in his celebrated paper
\cite{Green} introduced the parabosonic algebra (in possibly
infinite degrees of freedom), by means of generators and
relations:
\begin{equation} \label{CCRparabose}
\begin{array}{c}
\big[ B_{m}^{-}, \{ B_{k}^{+}, B_{l}^{-} \} \big] = 2\delta_{km}B_{l}^{-} \\
\\
\big[ B_{m}^{-}, \{ B_{k}^{-}, B_{l}^{-} \} \big]= 0 \\
\\
\big[ B_{m}^{+}, \{ B_{k}^{-}, B_{l}^{-} \} \big] = - 2\delta_{lm}B_{k}^{-} - 2\delta_{km}B_{l}^{-}\\
\end{array}
\end{equation}
$k,l,m = 1, 2, \ldots$ and $\{x, y \}$ stands for $xy+yx$. Green
was primarily interested in field theoretic implications of the
above mentioned algebra, in the sense that he considered it as an
alternative starting point for the second quantisation problem,
generalizing \eqref{CCRbose}. However, despite his original
motivation he was the first to realize -see also \cite{OhKa}- that
Wigner's infinite solutions were nothing else but inequivalent
irreducible representations of the parabosonic algebra
\eqref{CCRparabose}. (See also the discussion in \cite{Pal1}). \\
This paper consists logically of two parts. The first part
includes \seref{1}, \seref{2}, \seref{3}, where the basic elements of
the structure of the parabosonic algebra are presented. In \seref{1} we
state the definition and derive basic properties of the
parabosonic algebra in infinite degrees of freedom. The
parabosonic algebra is considered to be a $\mathbb{Z}_2$-graded
associative algebra with an infinite set of (odd) generators
$B_{i}^{\pm}$ for $i = 1, 2, ... \ $. Its $\mathbb{Z}_2$-grading
is inherited by the natural $\mathbb{Z}_2$-grading of the tensor
algebra. The notions of $\mathbb{Z}_2$-graded algebra and
$\mathbb{Z}_{2}$-graded tensor products \cite{Che}, are discussed
as special examples of the more general and modern notions of
$\mathbb{G}$-module algebras ($\mathbb{G}$: a finite abelian
group) and of braiding in monoidal categories \cite{Mon, Maj2,
Maj3}. In \seref{2} the notion of the super-Hopf algebra is
presented in connection with the non-trivial quasitriangular
structure of the $\mathbb{CZ}_{2}$ group Hopf algebra and the
braided monoidal category of its representations
${}_{\mathbb{CZ}_{2}}\mathcal{M}$. The super-Hopf algebraic
structure of the parabosonic algebra is established, without
appealing to its Lie superalgebraic structure, and this is the
central result of this part of the paper. In
\seref{3}, for the sake of completeness, well-known results
regarding the Lie superalgebraic structure of the parabosonic
algebra in finite degrees of freedom are reviewed.
The second part of the paper consists of \seref{4}. We begin the
section with a review of the bosonisation technique for switching
a Hopf algebra $A$ in a braided monoidal category $\mathcal{C}$
into an ordinary Hopf algebra. Although we do not present the
method in its full generality (see \cite{Maj1}), we give
sufficient details for its application in a much more general
class of problems than those involved in the ``super" or even in
the $\mathbb{G}$-graded ($\mathbb{G}$ finite and abelian) case: We
consider the case of a Hopf algebra in the braided monoidal
category ${}_{H}\mathcal{M}$ where $H$ is some quasitriangular
Hopf algebra, and explain in detail how we can construct an
ordinary Hopf algebra out of it. The construction is achieved by
means of a smash product algebra $A \star H$, and uses older
results \cite{Mo}, \cite{Ra}, which guarantee the compatibility
between the algebraic and the coalgebraic structure, in order for
a smash product to be a Hopf algebra. The construction is such
that the (braided) modules of the original (braided) Hopf algebra
$A$ and the (ordinary) modules of the ``bosonised" (ordinary) Hopf
algebra $A \star H$ are in a bijective correspondence, providing
thus an equivalence of categories. We apply the method in the case
of the parabosonic algebra, i.e. the case for which $H =
\mathbb{CZ}_{2}$ equipped with its non-trivial quasitriangular
structure, producing a ``variant" of the parabosonic algebra. This
variant $P_{B} \star \mathbb{CZ}_{2}$, which we will denote by
$P_{B(g)}$, is a smash product Hopf algebra between the
parabosonic super-Hopf algebra $P_{B}$ and the group Hopf algebra
$\mathbb{CZ}_{2}$, and it is a Hopf algebra in the ordinary sense
(and not in the ``super" sense). We explicitly state the structure
maps (multiplication, comultiplication, counity and the antipode)
for the (ordinary) Hopf algebraic structure of $P_{B(g)}$. Finally
one more variant of the bosonisation for the parabosonic algebra
is presented, which achieves the same goal as the bosonisation
technique. We construct an algebra $P_{B(K^{\pm})}$, which is a
little ``bigger'' than the parabosonic algebra $P_{B}$ or its
bosonised form $P_{B(g)}$, and we establish its (ordinary) Hopf
algebraic structure. So we prove that the bosonisation technique
is not unique.
In what follows, all vector spaces and algebras and all
tensor products will be considered over the field of complex
numbers. Whenever the symbol $i$ enters a formula in another place
than an index, it always denotes the imaginary unit $i^{2} = -1$.
Furthermore, whenever formulas from physics enter the text, we use
the traditional convention: $\hbar = m = \omega = 1$. Finally, the
Sweedler's notation for the comultiplication is freely used
throughout the text.
\section{Super-algebraic structure of Parabosons} \selabel{1}
The parabosonic algebra, was originally defined in terms of
generators and relations by Green \cite{Green} and
Greenberg-Messiah \cite{GreeMe}. We begin with restating their
definition, in a modern algebraic context.
Let us consider the vector space $V_{X}$ freely
generated by the elements: $X_{i}^{+}, X_{j}^{-}$, $i,j=1, 2, ...
\ $. Let $T(V_{X})$ denote the tensor algebra of $V_{X}$:
$$
T(V_{X}) = \bigoplus _{n \geq 0} T^{n}(V_{X})
$$
where $T^{0}(V_{X}) = \mathbb{C}$, $ \ T^{1}(V_{X}) = V_{X}$ and
for $n \geq 2$: $T^{n}(V_{X}) = V_{X} \otimes ... \otimes V_{X}$
the $n$-th tensor power of $V_{X}$. It is well known \cite{Che}
that $ \ T(V_{X})$ is -up to isomorphism- the free algebra
generated by the elements $X_{i}^{+}$, $X_{j}^{-}$ ($i,j=1, 2, ...
\ $) of the basis of $V_{X}$ or equivalently the non-commutative
polynomial algebra generated over the indeterminates $X_{i}^{+}$,
$X_{j}^{-}$ ($i,j=1, 2, ... \ $). In $T(V_{X})$ we consider the
two-sided ideal $I_{P_{B}}$, generated by the following elements:
\begin{equation} \label{eq:pbdef}
\big[ \{ X_{i}^{\xi}, X_{j}^{\eta}\}, X_{k}^{\epsilon} \big] -
(\epsilon - \eta)\delta_{jk}X_{i}^{\xi} - (\epsilon - \xi)\delta_{ik}X_{j}^{\eta}
\end{equation}
for all values of $\xi, \eta, \epsilon = \pm 1$ and
$i,j,k=1, 2, ... \ $. $ \ I_{X}$ is the unity of the tensor algebra.
$[A, B]$ stands for $A \otimes B - B \otimes A$ and $ \{A, B \}$
stands for $A \otimes B + B \otimes A$, where $A$ and $B$ are
arbitrary elements of the tensor algebra $T(V_{X})$. We now have
the following:
\begin{definition} \delabel{parabosonsbosons}
The parabosonic algebra $P_{B}$ is the quotient algebra of the
tensor algebra $T(V_{X})$ of $V_{X}$ with the ideal $I_{P_{B}}$:
$$
P_{B} = T(V_{X}) / I_{P_{B}}
$$
\end{definition}
We denote by $\pi_{P_{B}}: T(V_{X}) \rightarrow P_{B}$ the
canonical projection. The elements $X_{i}^{+}$, $ \ X_{j}^{-}$, $
\ I_{X}$, where $i,j=1, 2, ... \ $ and $I_{X}$ is the unity of the
tensor algebra, are the generators of the tensor algebra
$T(V_{X})$. The elements $\pi_{P_{B}}(X_{i}^{+}),
\pi_{P_{B}}(X_{j}^{-}), \pi_{P_{B}}(I_{X}) \ $, $ \ i,j=1,...$ are
a set of generators of the parabosonic algebra $P_{B}$, and they
will be denoted by $B_{i}^{+}, B_{j}^{-}, I$ for $i,j=1, 2, ...$
respectively, from now on. $\pi_{P_{B}}(I_{X}) = I$ is the unity
of the parabosonic algebra. The generators of the parabosonic algebra
satisfy equ. (\ref{CCRparabose}).
Based on the above definitions we prove now the following
proposition:
\begin{proposition} \prlabel{parabosonstobosons}
The parabosonic algebra $P_{B}$ is a $\mathbb{Z}_{2}$-graded
associative algebra with its generators $B_{i}^{\pm}$, $i=1,
2, \ldots$, being odd elements.
\end{proposition}
\begin{proof}
It is obvious that the tensor algebra $T(V_{X})$ is a
$\mathbb{Z}_{2}$-graded algebra with the monomials being
homogeneous elements. If $x$ is an arbitrary monomial of the
tensor algebra, the degree of $x$ is denoted by $|x|={\rm deg}\,x$.
Then $|x| = 0$, namely $x$ is an even element,
if it consists of an even number of factors (an even number of
generators of $T(V_{X})$), and $|x| = 1$, namely $x$ is an odd
element, if it consists of an odd number of factors (an odd
number of generators of $T(V_{X})$). The generators $X_{i}^{+},
X_{j}^{-}$, $i,j=1, 2, \ldots$, are odd elements in the above
mentioned gradation.
In view of the above description we can easily conclude that the
$\mathbb{Z}_{2}$-gradation of the tensor algebra is immediately
``transferred'' to the algebra $P_{B}$. The ideal $I_{P_{B}}$ is a
homogeneous ideal of the tensor algebra, since it is generated by
homogeneous elements of $T(V_{X})$. Consequently, the projection
homomorphism $\pi_{P_{B}}$ is a homogeneous algebra map of degree
zero, or we can equivalently say that it is an even algebra
homomorphism.
\end{proof}
The rise of the theory of quasitriangular Hopf algebras from the
mid-80's \cite{Dri} and thereafter and especially the study and
abstraction of their representations (see: \cite{Maj2, Maj3},
\cite{Mon} and references therein), has provided us with a novel
understanding of the notion and the properties of
$\mathbb{G}$-graded algebras,
where $\mathbb{G}$ is a finite abelian group.
We are restricting ourselves to the simplest case
where $\mathbb{G} = \mathbb{Z}_{2}$ and we denote by $\{1, g\}$
the elements of the $\mathbb{Z}_{2}$ group (written
multiplicatively). An algebra $A$ being a $\mathbb{Z}_{2}$-graded
algebra (in the physics literature the term superalgebra is also
of widespread use) is equivalent to saying that $A$ is a
$\mathbb{CZ}_{2}$-module algebra, via the $\mathbb{Z}_{2}$-action
determined by:
\[ 1 \triangleright a = a \; \mbox{ and }\; g \triangleright a
= (-1)^{|a|}a
\]
for any $a$ homogeneous in $A$, with $\ |a| \ $ its degree.
What we
actually mean is that $A$, apart from being an algebra is also a
$\mathbb{CZ}_{2}$-module and at the same time the structure maps
of $A$ (i.e.: the multiplication and the unity map which embeds
the field into the center of the algebra) are
$\mathbb{CZ}_{2}$-module maps, which is nothing else but
homogeneous linear maps of degree $0$ (or: even linear maps).
Stated more generally, the $\mathbb{G}$-grading of $A$ can be
equivalently described in terms of a specific action of the finite
abelian group $\mathbb{G}$ on $A$, thus in terms of a specific
action of the $\mathbb{CG}$ group algebra on $A$. This is not
something new. In fact such ideas already appear in works such as
\cite{CohMon} and \cite{Stee}.
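Concretely, on a graded representation space the action of $g$ is implemented by the grading operator $\Gamma = \mathrm{diag}((-1)^{n})$, and on operators it becomes conjugation $g \triangleright x = \Gamma x \Gamma$. A small numerical sketch (the lowering matrix is just a convenient odd operator; the last check is the module-algebra property, that the action respects products):

```python
import numpy as np

N = 8
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # an odd operator: shifts degree by 1
Gamma = np.diag((-1.0) ** np.arange(N))      # action of g on the graded space

act = lambda x: Gamma @ x @ Gamma            # g |> x  (note Gamma^2 = 1)

odd_part = act(a)        # expected: -a,   since a is odd
even_part = act(a @ a)   # expected: +a@a, since a^2 is even
```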
In ref \cite{Maj3}, \cite{Mon} the construction of the tensor
products of $\mathbb{G}$-graded objects, is presented as a
consequence of the quasitriangularity of the $\mathbb{CG}$ group
Hopf algebra (for $\mathbb{G}$ a finite abelian group, see
\cite{Scheu1}) or in other words: as a consequence of the braiding
of the monoidal category ${}_{\mathbb{CG}}\mathcal{M}$ (category
of $\mathbb{CG}$-modules).
It is well known that for any group $\mathbb{G}$, the group
algebra $\mathbb{CG}$ equipped with the maps:
$$
\begin{array}{ccccc}
\Delta(z) = z \otimes z & & \varepsilon(z) = 1 & & S(z) = z^{-1} \\
\end{array}
$$
for any $z \in \mathbb{G}$, becomes a Hopf algebra. Focusing again
in the special case $\mathbb{G} = \mathbb{Z}_{2}$, the fact that
$A$ is a $\mathbb{Z}_2$-graded algebra is equivalently described
by saying that $A$ is an algebra in the braided monoidal category
of $\mathbb{CZ}_{2}$-modules ${}_{\mathbb{CZ}_{2}}\mathcal{M}$. In
this case the braiding is induced by the non-trivial
quasitriangular structure of the $\mathbb{CZ}_{2}$ Hopf algebra
i.e. by the non-trivial $R$-matrix:
\begin{equation} \label{eq:nontrivRmatrcz2}
R_{Z_{2}} = \frac{1}{2}(1 \otimes 1 + 1 \otimes g + g \otimes 1 -
g \otimes g)
\end{equation}
We digress here for a moment, to recall
that (see \cite{Maj2, Maj3} or \cite{Mon}) if $(H,R_{H})$ is a
quasitriangular Hopf algebra through the $R$-matrix $R_{H} = \sum
R_{H}^{(1)} \otimes R_{H}^{(2)}$, then the category of modules
${}_{H}\mathcal{M}$ is a braided monoidal category, where the
braiding is given by a natural family of isomorphisms $\Psi_{V,W}:
V \otimes W \cong W \otimes V$, given explicitly by:
\begin{equation} \label{eq:braid}
\Psi_{V,W}(v \otimes w) = \sum (R_{H}^{(2)} \vartriangleright w)
\otimes (R_{H}^{(1)} \vartriangleright v)
\end{equation}
for any $V,W \in obj({}_{H}\mathcal{M})$. By $v,w$ we denote any
elements of $V,W$ respectively. \\
Combining eq. \eqref{eq:nontrivRmatrcz2} and \eqref{eq:braid} we
immediately get the braiding in the
${}_{\mathbb{CZ}_{2}}\mathcal{M}$ category:
\begin{equation} \label{symmbraid}
\Psi_{V,W}(v \otimes w) = (-1)^{|v||w|} w \otimes v
\end{equation}
This is a symmetric braiding,
since
\[\Psi_{V,W} \circ \Psi_{W,V} = Id\]
so we actually have a
symmetric monoidal category ${}_{\mathbb{CZ}_{2}}\mathcal{M}$,
rather than a truly braided one.
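The $R$-matrix \eqref{eq:nontrivRmatrcz2} and the resulting braiding can be realized explicitly on a two-dimensional superspace (a sketch: $g$ acts as $\mathrm{diag}(1,-1)$, the braiding is the flip composed with the componentwise action of the $R$-matrix as in \eqref{eq:braid}, and $\Psi^{2} = \mathrm{id}$ is the symmetry of \eqref{symmbraid}):

```python
import numpy as np

# V = C^2 with basis (e0, e1), |e0| = 0, |e1| = 1; g acts as diag(1, -1)
I2 = np.eye(2)
G = np.diag([1.0, -1.0])

# R-matrix of CZ2 acting on V (x) V  (basis e_i (x) e_j at index 2*i + j)
R = 0.5 * (np.kron(I2, I2) + np.kron(I2, G) + np.kron(G, I2) - np.kron(G, G))

# flip map tau(v (x) w) = w (x) v
tau = np.zeros((4, 4))
for i in range(2):
    for j in range(2):
        tau[2 * j + i, 2 * i + j] = 1.0

Psi = tau @ R   # Psi(v (x) w) = sum (R2 |> w) (x) (R1 |> v)
```

On homogeneous basis vectors $\Psi$ acts as the flip with a minus sign exactly on the odd-odd component, reproducing \eqref{symmbraid}.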
The really important thing about the existence of the braiding
\eqref{symmbraid} is that it provides us with an alternative way
of forming tensor products of $\mathbb{Z}_{2}$-graded algebras. If
$A$ and $B$ are superalgebras with multiplications:
\[
m_{A}: A
\otimes A \rightarrow A\; \mbox{ and }\; m_{B}: B \otimes B \rightarrow B
\]
respectively, then the super vector space $A \otimes B$ (with the
obvious $\mathbb{Z}_{2}$-gradation) is equipped with the
associative multiplication
\begin{equation} \label{braidedtenspr}
(m_{A} \otimes m_{B})(Id \otimes \Psi_{B,A} \otimes Id): A \otimes
B \otimes A \otimes B \longrightarrow A \otimes B
\end{equation}
given equivalently by:
\[
(a \otimes b)(c \otimes d) = (-1)^{|b||c|}ac \otimes bd
\]
for $b,c$ homogeneous in $B, A$ respectively. The tensor product
becomes a superalgebra (or equivalently an algebra in the braided
monoidal category of $\mathbb{CZ}_{2}$-modules
${}_{\mathbb{CZ}_{2}}\mathcal{M}$) which we will denote: $A
\underline{\otimes} B$ and call the braided tensor product algebra
from now on.
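The sign rule can be made concrete on the smallest nontrivial example, the Grassmann algebra $A = \Lambda[\theta]$ on one odd generator. In this sketch, elements of $A \underline{\otimes} A$ are coefficient vectors over the basis $(1\otimes 1,\ 1\otimes\theta,\ \theta\otimes 1,\ \theta\otimes\theta)$, and the check shows that the two tensor copies of $\theta$ anticommute in $A \underline{\otimes} A$, as odd elements should:

```python
import numpy as np

# A = Lambda[theta]: basis {1, theta}, theta^2 = 0, |1| = 0, |theta| = 1
deg = [0, 1]

def mult_A(i, j):
    # product of basis elements of A: returns (index, coefficient)
    if i == 0:
        return j, 1
    if j == 0:
        return i, 1
    return 0, 0            # theta * theta = 0

def braided_mult(x, y):
    """(a (x) b)(c (x) d) = (-1)^{|b||c|} ac (x) bd, extended bilinearly."""
    out = np.zeros(4)
    for ab in range(4):
        for cd in range(4):
            if x[ab] == 0 or y[cd] == 0:
                continue
            a, b = divmod(ab, 2)
            c, d = divmod(cd, 2)
            k1, s1 = mult_A(a, c)
            k2, s2 = mult_A(b, d)
            sign = (-1) ** (deg[b] * deg[c])
            out[2 * k1 + k2] += sign * s1 * s2 * x[ab] * y[cd]
    return out
```

With the ordinary (unbraided) tensor product the two copies of $\theta$ would commute; the sign in `braided_mult` is what makes $A \underline{\otimes} A$ again a superalgebra.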
Let us close this description with two important remarks. First,
we stress that in \eqref{braidedtenspr} both superalgebras $A$ and
$B$ are viewed as $\mathbb{CZ}_{2}$-modules and as such we have $B
\otimes A \cong A \otimes B$ through $b \otimes c \mapsto
(-1)^{|c||b|} c \otimes b$. Second we underline that the tensor
product \eqref{braidedtenspr} had been already known from the past
\cite{Che} but rather as a special possibility of forming tensor
products of superalgebras than as an example of the more general
conceptual framework of the braiding applicable not only to
superalgebras but to any $\mathbb{G}$-graded algebra ($\mathbb{G}$
a finite abelian group) as long as $\mathbb{CG}$ is equipped with
a non-trivial quasitriangular structure or equivalently
\cite{Mon}, \cite{Scheu1}, a bicharacter on $\mathbb{G}$ is given.
\section{Super-Hopf structure of Parabosons: a braided
group}\selabel{2}
The notion of $\mathbb{G}$-graded Hopf algebra, for $\mathbb{G}$ a
finite abelian group, is not a new one neither in physics nor in
mathematics. The idea appears already in the work of Milnor and
Moore \cite{MiMo}, where we actually have $\mathbb{Z}$-graded Hopf
algebras. On the other hand, universal enveloping algebras of Lie
superalgebras are widely used in physics and they are examples of
$\mathbb{Z}_{2}$-graded Hopf algebras (see for example \cite{Ko},
\cite{Scheu}). These structures are strongly resemblant of Hopf
algebras but they are not Hopf algebras at least in the ordinary
sense.
Restricting again to the simplest case where $\mathbb{G} =
\mathbb{Z}_{2}$ we briefly recall this idea: An algebra $A$ being
a $\mathbb{Z}_{2}$-graded Hopf algebra (or super-Hopf algebra)
means first of all that $A$ is a $\mathbb{Z}_{2}$-graded
associative algebra (or: superalgebra). We now consider the
braided tensor product algebra $A \underline{\otimes} A$. Then $A$
is equipped with a coproduct
\begin{equation} \label{braidedcom}
\underline{\Delta} : A \rightarrow A \underline{\otimes} A
\end{equation}
which is an superalgebra homomorphism from $A$ to the braided
tensor product algebra $A \underline{\otimes} A$ :
$$
\underline{\Delta}(ab) = \sum (-1)^{|a_{2}||b_{1}|}a_{1}b_{1}
\otimes a_{2}b_{2} = \underline{\Delta}(a) \cdot
\underline{\Delta}(b)
$$
for any $a,b$ in $A$, with $\underline{\Delta}(a) = \sum a_{1}
\otimes a_{2}$, $\underline{\Delta}(b) = \sum b_{1} \otimes
b_{2}$, and $a_{2}$, $b_{1}$ homogeneous. We emphasize here that
this is exactly the central point of difference between the
``super" and the ``ordinary" Hopf algebraic structure: In an
ordinary Hopf algebra $H$ we should have a coproduct $\Delta : H
\rightarrow H \otimes H$ which should be an algebra homomorphism
from $H$ to the usual tensor product algebra $H \otimes H$.
Similarly, $A$ is equipped with an antipode $\underline{S} : A
\rightarrow A$ which is not an algebra anti-homomorphism (as it
should be in an ordinary Hopf algebra) but a superalgebra
anti-homomorphism (or: ``twisted" anti-homomorphism or: braided
anti-homomorphism) in the following sense (for any homogeneous
$a,b \in A$):
\begin{equation} \label{braidedanti}
\underline{S}(ab) = (-1)^{|a||b|}\underline{S}(b)\underline{S}(a)
\end{equation}
The rest of the axioms which complete the super-Hopf algebraic
structure (i.e.: coassociativity, counity property, and
compatibility with the antipode) have the same formal description
as in ordinary Hopf algebras.
Once again, the abstraction of the representation theory of
quasitriangular Hopf algebras provides us with a language in which
the above description becomes much more compact: We simply say
that $A$ is a Hopf algebra in the braided monoidal category of
$\mathbb{CZ}_{2}$-modules ${}_{\mathbb{CZ}_{2}}\mathcal{M}$ or: a
braided group where the braiding is given in equation
\eqref{symmbraid}. What we actually mean is that $A$ is
simultaneously an algebra, a coalgebra and a
$\mathbb{CZ}_{2}$-module, while all the structure maps of $A$
(multiplication, comultiplication, unity, counity and the
antipode) are also $\mathbb{CZ}_{2}$-module maps and at the same
time the comultiplication $\underline{\Delta} : A \rightarrow A
\underline{\otimes} A$ and the counit are algebra morphisms in the
category ${}_{\mathbb{CZ}_{2}}\mathcal{M}$ (see also \cite{Maj2,
Maj3} or \cite{Mon} for a more detailed description).
\\
We proceed now to the proof of the following proposition which
establishes the super-Hopf algebraic structure of the parabosonic
algebra $P_{B}$:
\begin{proposition} \prlabel{superHopfPb}
The parabosonic algebra equipped with the even linear maps
$\underline{\Delta}: P_{B} \rightarrow P_{B} \underline{\otimes}
P_{B} \ \ $, $\ \ \underline{S}: P_{B} \rightarrow P_{B} \ \ $, $\
\ \underline{\varepsilon}: P_{B} \rightarrow \mathbb{C} \ \ $,
determined by their values on the generators:
\begin{equation} \label{eq:HopfPB}
\begin{array}{ccccc}
\underline{\Delta}(B_{i}^{\pm}) = 1 \otimes B_{i}^{\pm} + B_{i}^{\pm} \otimes 1 &
& \underline{\varepsilon}(B_{i}^{\pm}) = 0 & & \underline{S}(B_{i}^{\pm}) = - B_{i}^{\pm} \\
\end{array}
\end{equation}
for $i = 1, 2, \ldots \ $, becomes a super-Hopf algebra.
\end{proposition}
\begin{proof}
Recall that by definition $P_{B} = T(V_{X}) / I_{P_{B}}$. Consider
the linear map: $$\underline{\Delta}^{T}: V_{X} \rightarrow P_{B}
\underline{\otimes} P_{B}$$ determined by it's values on the basis
elements specified by: $$\underline{\Delta}^{T}(X_{i}^{\pm}) = I
\otimes B_{i}^{\pm} + B_{i}^{\pm} \otimes I$$ By the universality
of the tensor algebra this map is uniquely extended to a
superalgebra homomorphism: $\underline{\Delta}^{T}: T(V_{X})
\rightarrow P_{B} \underline{\otimes} P_{B}$. After lengthy
algebraic calculations (see Appendix \ref{app}) we can prove that:
\begin{equation}
\underline{\Delta}^{T}(\big[ \{ X_{i}^{\xi}, X_{j}^{\eta}\},
X_{k}^{\epsilon} \big] -
(\epsilon - \eta)\delta_{jk}X_{i}^{\xi} - (\epsilon -
\xi)\delta_{ik}X_{j}^{\eta})= 0
\label{eq:comultinsuperparab}
\end{equation}
for all values of $\xi, \eta, \epsilon = \pm 1$ and $i,j=1, 2, ...
\ $. This means that $I_{P_{B}} \subseteq
ker(\underline{\Delta}^{T}$), which in turn implies that
$\underline{\Delta}^{T}$ is uniquely extended to a superalgebra
homomorphism: $\underline{\Delta}: P_{B} \rightarrow P_{B}
\underline{\otimes} P_{B}$, according to the following
(commutative) diagram:
\begin{displaymath}
\xymatrix{T(V_{X}) \ar[rr]^{\underline{\Delta}^{T}}
\ar[dr]_{\pi_{P_{B}}} & &
P_{B} \underline{\otimes} P_{B} \\
& P_{B} \ar@{.>}[ur]_{ \underline{\Delta}} & }
\end{displaymath}
with values on the generators determined by \eqref{eq:HopfPB}.
Proceeding in the same way we construct the maps $\
\underline{\varepsilon} \ $, $\ \ \underline{S} \ $, as determined
in \eqref{eq:HopfPB}.
For the case of $\underline{\varepsilon}$, we start by defining the
zero map
\[
\underline{\varepsilon}^{T}\,:\, V_{X}\to \mathbb{C}, \qquad \underline{\varepsilon}^{T}(X_{i}^{\pm}) = 0
\]
and we (uniquely) extend its definition to a superalgebra
homomorphism $ \ \underline{\varepsilon}: P_{B} \rightarrow
\mathbb{C} \ $ following the commutative diagram:
\begin{displaymath}
\xymatrix{T(V_{X}) \ar[rr]^{\underline{\varepsilon}^{T}}
\ar[dr]_{\pi_{P_{B}}} & &
\mathbb{C} \\
& P_{B} \ar@{.>}[ur]_{ \ \underline{\varepsilon}} & }
\end{displaymath}
with values on the generators determined by \eqref{eq:HopfPB}.
In the case of the antipode $\underline{S}$ we need the notion of
the $\mathbb{Z}_{2}$-graded opposite algebra (or: opposite
superalgebra) $P_{B}^{op}$, which is a superalgebra defined as
follows: $P_{B}^{op}$ has the same underlying super vector space
as $P_{B}$, but the multiplication is now defined as: $a \cdot b =
(-1)^{|a||b|}ba$, for all $a,b \in P_{B}$. (In the right hand
side, the product is of course the product of $P_{B}$). We start
by defining a linear map
$$\underline{S}^{T}: V_{X} \rightarrow P_{B}^{op}$$ determined by:
$$\underline{S}^{T}(X_{i}^{\pm}) = -B_{i}^{\pm}$$ This map is (uniquely)
extended to a superalgebra homomorphism: $\underline{S}^{T}:
T(V_{X}) \rightarrow P_{B}^{op}$. Now we can compute:
\begin{equation} \label{eq:antipodeinsuperparab}
\underline{S}^{T}(\big[ \{ X_{i}^{\xi}, X_{j}^{\eta}\},
X_{k}^{\epsilon} \big] -
(\epsilon - \eta)\delta_{jk}X_{i}^{\xi} - (\epsilon -
\xi)\delta_{ik}X_{j}^{\eta}) = 0
\end{equation}
for all values of $\xi, \eta, \epsilon = \pm 1$ and $i,j=1, 2, ...
\ $. This means that $I_{P_{B}} \subseteq ker(\underline{S}^{T})$,
which in turn implies that $\underline{S}^{T}$ is uniquely
extended to a superalgebra homomorphism $\underline{S}: P_{B}
\rightarrow P_{B}^{op}$, according to the following commutative
diagram:
\begin{displaymath}
\xymatrix{T(V_{X}) \ar[rr]^{\underline{S}^{T}}
\ar[dr]_{\pi_{P_{B}}} & &
P_{B}^{op} \\
& P_{B} \ar@{.>}[ur]_{ \underline{S}} & }
\end{displaymath}
thus to a superalgebra anti-homomorphism: $\underline{S}: P_{B}
\rightarrow P_{B}$, with values on the generators determined by
\eqref{eq:HopfPB}.
Now it is sufficient to verify the rest of the super-Hopf algebra
axioms (coassociativity, counity and the compatibility condition
for the antipode) on the generators of $P_{B}$. This can be done
with straightforward computations.
\end{proof}
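The sign bookkeeping in this proof can be checked numerically. On a tensor product of two truncated single-mode representations of orders $p$ and $q$ (with the standard Fock-type coefficients, an assumption of this sketch), the braided tensor product action amounts to inserting the grading operator $\Gamma = \mathrm{diag}((-1)^{n})$ in the first leg, so that $\underline{\Delta}(B^{\pm})$ acts as $B^{\pm}\otimes I + \Gamma\otimes B^{\pm}$. Away from the cutoff these matrices again satisfy \eqref{CCRparabose}, now as a paraboson of order $p+q$:

```python
import numpy as np

def paraboson(N, p):
    # truncated single-mode paraboson of order p: B^+ e_n = sqrt(c_{n+1}) e_{n+1}
    c = np.array([n if n % 2 == 0 else n + p - 1 for n in range(N)], float)
    Bp = np.zeros((N, N))
    for n in range(N - 1):
        Bp[n + 1, n] = np.sqrt(c[n + 1])
    return Bp, Bp.T

N, p, q = 10, 2, 3
Bp1, Bm1 = paraboson(N, p)
Bp2, Bm2 = paraboson(N, q)
Gam = np.diag((-1.0) ** np.arange(N))
I = np.eye(N)

# Delta(B^{pm}) acting on V (x) V with the braided sign rule
DBp = np.kron(Bp1, I) + np.kron(Gam, Bp2)
DBm = np.kron(Bm1, I) + np.kron(Gam, Bm2)

A = DBp @ DBm + DBm @ DBp
C = DBm @ A - A @ DBm      # should equal 2 * DBm away from the cutoff
```

The vacuum matrix element $\underline{\Delta}(B^{-})\underline{\Delta}(B^{+})$ on $e_{0}\otimes e_{0}$ equals $p+q$: the orders add, which is the content of Green's ansatz.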
Let us note here, that the above proposition generalizes a result
which -in the case of finite degrees of freedom- is a direct
consequence of the work in \cite{Pal}. In that work the
parabosonic algebra in $2n$ generators ($n$-paraboson algebra)
$P_{B}^{(n)}$ is shown to be isomorphic to the universal
enveloping algebra of the orthosymplectic Lie superalgebra:
$P_{B}^{(n)} \cong U(B(0,n))$. We present this accomplishment in
detail in \seref{3}. See also the discussion in \cite{KaDa2}.
\section{Lie super-algebraic structure of Parabosons:
the case of finite degrees of freedom}\selabel{3}
In this section, we restrict ourselves to the case of the finite
degrees of freedom (finite number of parabosons), in order to
recall an important development in the study of the structure of
the parabosonic algebra. We thus consider the parabosonic algebra
generated by $B_{i}^{+}, B_{j}^{-}, I$, for $i,j=1, 2, \ldots, n$,
where $n$ is a positive integer. The generators satisfy exactly
the same relations as before, determined by \eqref{CCRparabose} or
equivalently \eqref{eq:pbdef}. The difference is that we only have
a finite number of generators now and we will call this algebra
the parabosonic algebra in $2n$ generators or the $n$-paraboson
algebra from now on. We are going to denote it by: $P_{B}^{(n)}$.
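As a quick numerical illustration (our own sketch, not part of the original text), the defining trilinear relations can be checked in the simplest Green-ansatz realization $p=1$, where $B^{\pm}$ reduce to ordinary boson creation and annihilation operators on a truncated Fock space; the cutoff dimension $D$ and the single-mode setting are choices made here for the sketch.

```python
import numpy as np

# Truncated single-mode Fock space; the p = 1 Green-ansatz realization,
# in which B^- is an ordinary boson annihilation operator a.
D = 12
a = np.diag(np.sqrt(np.arange(1, D)), k=1)    # a|m> = sqrt(m)|m-1>
B = {+1: a.T, -1: a}                          # B^+ = a^dagger, B^- = a

acomm = lambda x, y: x @ y + y @ x
comm = lambda x, y: x @ y - y @ x

# Defining trilinear relation (single mode, so all deltas equal 1):
#   [{B^xi, B^eta}, B^eps] = (eps - eta) B^xi + (eps - xi) B^eta
ok = True
for xi in (+1, -1):
    for eta in (+1, -1):
        for eps in (+1, -1):
            lhs = comm(acomm(B[xi], B[eta]), B[eps])
            rhs = (eps - eta) * B[xi] + (eps - xi) * B[eta]
            # compare only matrix elements unaffected by the truncation
            ok &= np.allclose(lhs[:D-3, :D-3], rhs[:D-3, :D-3])
print(ok)
```

All eight sign choices pass; for a finite order $p>1$ one would instead sum $p$ mutually anticommuting copies of such boson operators (the Green ansatz).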
It was conjectured \cite{OmOhKa} that, due to the mixing of
commutators and anticommutators in $P_{B}^{(n)}$, the proper
mathematical ``playground" for the study of the structure of
$P_{B}^{(n)}$ should be some kind of Lie superalgebra
($\mathbb{Z}_{2}$-graded Lie algebra). Starting in the early
1980s, and using the then-recent results on the
classification of the finite dimensional simple complex Lie
superalgebras obtained by Kac (see \cite{Kac1, Kac2}
but also \cite{kap}), Palev managed to identify the parabosonic
algebra with the universal enveloping algebra of a certain simple
complex Lie superalgebra. In \cite{Pal}, \cite{Pal5} and
\cite{Pal2}, Palev shows the following:
\begin{lemma}
In the k-vector space $P_{B}^{(n)}$ we consider the k-subspace
generated by the set of elements:
$$
\Big\{ \{B_{i}^{\xi}, B_{j}^{\eta}\}, \ B_{k}^{\epsilon} \ \ |
\xi, \eta, \epsilon = \pm, \ i,j,k = 1,\ldots,n \Big\}
$$
This vector space is a superspace (i.e.: a $\mathbb{Z}_{2}$-graded
vector space or, equivalently, a $\mathbb{CZ}_{2}$-module). The
elements $B_{i}^{\xi}$ span the odd subspace and the elements
$\{B_{i}^{\xi}, B_{j}^{\eta}\}$ span the even subspace. \\
The above vector space endowed with a bilinear multiplication $\langle..,..\rangle$
whose values are determined by the values of
the anticommutator and the commutator in $P_{B}^{(n)}$, i.e.:
$$
\langle B_{i}^{\xi}, B_{j}^{\eta} \rangle = \{B_{i}^{\xi},
B_{j}^{\eta}\}
$$
and:
$$
\big\langle \{ B_{i}^{\xi}, B_{j}^{\eta}\}, B_{k}^{\epsilon}
\big\rangle = \big[ \{ B_{i}^{\xi}, B_{j}^{\eta}\},
B_{k}^{\epsilon} \big] = (\epsilon - \eta)\delta_{jk}B_{i}^{\xi} +
(\epsilon - \xi)\delta_{ik}B_{j}^{\eta}
$$
respectively, according to the above mentioned gradation, is a
simple, complex Lie superalgebra (or: $\mathbb{Z}_{2}$-graded Lie
algebra) isomorphic to $B(0,n)$. \\
The elements:
$$
\begin{array}{ccccc}
- \frac{1}{2} \{ B_{i}^{-}, B_{i}^{+} \}, & \{ B_{i}^{-}, B_{j}^{+} \}, & \{ B_{i}^{\xi}, B_{j}^{\xi}
\},
& (B_{i}^{\xi})^{2}, & B_{i}^{\xi} \\
\end{array}
$$
for all values $i \neq j = 1, 2, \ldots n$ and $\xi = \pm$ ,
constitute a Cartan-Weyl basis of $B(0,n)$.
\end{lemma}
Note that, according to the above lemma, the elements $\big\{
\{B_{i}^{\xi}, B_{j}^{\eta}\} \ \ | \xi, \eta = \pm, \ i,j =
1,\ldots,n \big\}$ constitute a basis of the even part of
$B(0,n)$. This is a subalgebra of $B(0,n)$ isomorphic to the Lie
algebra $sp(2n)$. Its Lie multiplication can be readily deduced
from the commutators given above and reads:
$$
\begin{array}{c}
\big\langle \{ B_{i}^{\xi}, B_{j}^{\eta}\}, \{ B_{k}^{\epsilon},
B_{l}^{\phi} \} \big\rangle = \big[ \{ B_{i}^{\xi},
B_{j}^{\eta}\}, \{ B_{k}^{\epsilon}, B_{l}^{\phi} \} \big] =
\\
\\
(\epsilon - \eta)\delta_{jk} \{ B_{i}^{\xi}, B_{l}^{\phi} \} +
(\epsilon - \xi)\delta_{ik}\{ B_{j}^{\eta}, B_{l}^{\phi} \} +
(\phi - \eta)\delta_{jl}\{ B_{i}^{\xi}, B_{k}^{\epsilon} \} +
(\phi - \xi)\delta_{il}\{ B_{j}^{\eta}, B_{k}^{\epsilon} \} \\
\end{array}
$$
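This bracket can also be checked numerically (again our own sketch, not from the text), taking the $B_i^{\pm}$ to be two ordinary boson modes (the $p=1$ Green-ansatz realization) on a truncated Fock space; the cutoff $D$ and the comparison window are illustrative choices.

```python
import itertools
import numpy as np

# Two ordinary boson modes (p = 1 realization) on a truncated Fock space.
D = 8
a = np.diag(np.sqrt(np.arange(1, D)), k=1)
I = np.eye(D)
B = {(1, +1): np.kron(a.T, I), (1, -1): np.kron(a, I),
     (2, +1): np.kron(I, a.T), (2, -1): np.kron(I, a)}

acomm = lambda x, y: x @ y + y @ x
comm = lambda x, y: x @ y - y @ x
d = lambda i, j: 1.0 if i == j else 0.0

# Projector onto states with both occupation numbers well below the
# cutoff, so truncation artifacts are excluded from the comparison.
n = np.arange(D)
low = ((n[:, None] < D - 5) & (n[None, :] < D - 5)).ravel()
P = np.diag(low.astype(float))

ok = True
for (i, xi), (j, eta), (k, eps), (l, ph) in itertools.product(B, repeat=4):
    lhs = comm(acomm(B[i, xi], B[j, eta]), acomm(B[k, eps], B[l, ph]))
    rhs = ((eps - eta) * d(j, k) * acomm(B[i, xi], B[l, ph])
           + (eps - xi) * d(i, k) * acomm(B[j, eta], B[l, ph])
           + (ph - eta) * d(j, l) * acomm(B[i, xi], B[k, eps])
           + (ph - xi) * d(i, l) * acomm(B[j, eta], B[k, eps]))
    ok &= np.allclose(P @ lhs @ P, P @ rhs @ P)
print(ok)
```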
On the other hand the elements $\big\{ B_{k}^{\epsilon} \ \ |
\epsilon = \pm, \ k = 1,\ldots,n \big\}$ constitute a basis of the
odd part of $B(0,n)$.
Note also that $B(0,n)$, in Kac's
notation, is the classical simple complex orthosymplectic Lie
superalgebra denoted $osp(1,2n)$ in the notation traditionally
used by physicists until then.
Based on the above observations, Palev finally proves:
\begin{proposition} \prlabel{parab}
The parabosonic algebra in $2n$ generators is isomorphic to the
universal enveloping algebra of the classical simple complex Lie
superalgebra $B(0,n)$ (according to the classification of the
simple complex Lie superalgebras given by Kac), i.e.:
$$
P_{B}^{(n)} \cong U(B(0,n))
$$
\end{proposition}
Lie superalgebras are exactly the algebraic structures underlying
the idea of supersymmetry. The above mentioned proposition thus
indicates a link between parafield theories and supersymmetry. For
a similar discussion one should also see \cite{Pl1}.
\prref{parab} also indicates that in the case of the finite
degrees of freedom, the representation theory of the parabosonic
algebra $P_{B}^{(n)}$ coincides with the representation theory of
the orthosymplectic Lie superalgebra $osp(1/2n)$ \cite{Kac3}.
In the case of the finite degrees of freedom, the super-Hopf
structure of the parabosonic algebra $P_{B}^{(n)}$ can be deduced
from the fact that the universal enveloping algebra $U(L)$ of any
Lie superalgebra $L$ is a super-Hopf algebra. In the case of the
infinite degrees of freedom, the parabosonic algebra is reported
in the bibliography \cite{Pal1} to be the universal enveloping
algebra of some Lie superalgebra as well. Let us stress here,
however, that our proof of \prref{superHopfPb} does not make use
of any kind of underlying Lie superalgebraic structure for either
the $P_{B}^{(n)}$ or the $P_{B}$ algebras.
\section{Ordinary Hopf structures in Parabosons}\selabel{4}
\subsection{Review of the bosonisation technique}
A general scheme for transforming a Hopf algebra $A$ in the
braided monoidal category ${}_{H}\mathcal{M}$ (where $H$ is a
quasitriangular Hopf algebra) into an ordinary one, namely the
smash product Hopf algebra $A \star H$, such that the category of
braided modules of $A$ and the category of (ordinary) modules of
$A \star H$ are equivalent, has been developed in the original
reference \cite{Maj1}, see also \cite{Maj2, Maj3, Mon}. The
technique is called bosonisation, the term coming from physics.
This technique uses ideas developed by Molnar in \cite{Mo} and by
Radford in \cite{Ra}, which guarantee the compatibility between an
algebraic and a coalgebraic structure in a tensor product
\cite{Mo} or even in a smash product \cite{Ra}, in order for it to
become a bialgebra and finally a Hopf algebra. It is also
presented and applied in \cite{Andru, Fi, FiMon}. For clarity
we give a compact review of the main points of the above
method.
In general, $A$ being a Hopf algebra in a category means that
$A$, apart from being an algebra and a coalgebra, is also an object of
the category and at the same time its structure maps (comultiplication, antipode, etc.)
are morphisms in the category.
In particular, if $H$ is some quasitriangular Hopf algebra, $A$ being
a Hopf algebra in the braided monoidal category
${}_{H}\mathcal{M}$ , means that the $H$-module $A$ is an algebra
in ${}_{H}\mathcal{M}$ (or $H$-module algebra) and a coalgebra in
${}_{H}\mathcal{M}$ (or $H$-module coalgebra) and at the same time
$\Delta_{A}$ and $\varepsilon_{A}$ are algebra morphisms in the
category ${}_{H}\mathcal{M}$. (For more details on the above
definitions one may consult for example \cite{Maj2, Maj3} or
\cite{Mon}).
Since $A$ is an $H$-module algebra we can form the
cross product algebra $A \rtimes H$ (also called: smash product
algebra) which as a k-vector space is $A \otimes H$ (i.e. we
write: $a \rtimes h \equiv a \otimes h$ for every $a \in A$, $h
\in H$), with multiplication given by:
\begin{equation} \label{eq:crosspralg}
(b \otimes h)(c \otimes g) = \sum b(h_{1} \vartriangleright c)
\otimes h_{2}g
\end{equation}
for all $b,c \in A$ and $h,g \in H$, where $\otimes$ is the usual
tensor product and $\Delta (h) = \sum h_1 \otimes h_2$.
On the other hand, $A$ is a (left) $H$-module
coalgebra, while $H$ becomes quasitriangular through the
$R$-matrix $R_{H} = \sum R_{H}^{(1)} \otimes R_{H}^{(2)}$.
Quasitriangularity
switches the (left) action of $H$ on $A$ into a (left) coaction
$\rho: A \rightarrow H \otimes A$ through:
\begin{equation} \label{eq:act-coact}
\rho(a) = \sum R_{H}^{(2)} \otimes (R_{H}^{(1)} \vartriangleright
a)
\end{equation}
and $A$ endowed with this coaction becomes (see \cite{Maj2, Maj3})
a (left) $H$-comodule coalgebra or equivalently a coalgebra in
${}^{H}\mathcal{M}$ (meaning that $\Delta_{A}$ and
$\varepsilon_{A}$ are (left) $H$-comodule morphisms, see
\cite{Mon}).
We recall here (see: \cite{Maj2, Maj3}) that when
$H$ is a Hopf algebra and $A$ is a (left) $H$-comodule coalgebra
with the (left) $H$-coaction given by: $\rho(a) = \sum a^{(1)}
\otimes a^{(0)}$ , one may form the cross coproduct coalgebra $A
\rtimes H$, which as a k-vector space is $A \otimes H$ (i.e. we
write: $a \rtimes h \equiv a \otimes h$ for every $a \in A$, $h
\in H$), with comultiplication given by:
\begin{equation} \label{eq:crosscoprcoalg}
\Delta(a \otimes h) = \sum a_{1} \otimes a_{2}^{ \ (1)} \ h_{1}
\otimes a_{2}^{ \ (0)} \otimes h_{2}
\end{equation}
and counit: $\varepsilon(a \otimes h) = \varepsilon_{A}(a)
\varepsilon_{H}(h)$. (In the above: $\Delta_{A}(a) = \sum a_{1}
\otimes a_{2}$ and we use in the elements of $A$ upper indices
included in parenthesis to denote the components of the coaction
according to the Sweedler notation, with the convention that
$a^{(i)} \in H$
for $i \neq 0$). \\
Now we proceed by applying the above described construction of the
cross coproduct coalgebra $A \rtimes H$, with the special form of
the (left) coaction given by eq. \eqref{eq:act-coact}. Substituting
eq. \eqref{eq:act-coact} into eq. \eqref{eq:crosscoprcoalg},
we get for the special case of the quasitriangular Hopf algebra $H$
the cross coproduct comultiplication:
\begin{equation} \label{eq:crosscoprcoalgR}
\Delta(a \otimes h) = \sum a_{1} \otimes R_{H}^{(2)}h_{1} \otimes
(R_{H}^{(1)} \vartriangleright a_{2}) \otimes h_{2}
\end{equation}
Finally we can show that the cross product algebra (with
multiplication given by \eqref{eq:crosspralg}) and the cross
coproduct coalgebra (with comultiplication given by
\eqref{eq:crosscoprcoalgR}) fit together and form a bialgebra
(see: \cite{Maj2, Maj3, Mo, Mon, Ra}). This bialgebra, furnished
with an antipode
\begin{equation} \label{antipodecrosspr}
S(a \otimes h) = (S_{H}(h_{2}))u(R^{(1)} \vartriangleright
S_{A}(a)) \otimes S(R^{(2)}h_{1})
\end{equation}
where $u = \sum S_{H}(R^{(2)})R^{(1)}$, and $S_{A}$ the (braided)
antipode of $A$, becomes (see \cite{Maj2}) an ordinary Hopf
algebra. This is the smash product Hopf algebra denoted $A \star
H$.
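As a toy illustration of the multiplication rule \eqref{eq:crosspralg} (our own sketch, not from the text): take $H=\mathbb{CZ}_{2}$ acting on the exterior algebra $A$ on one odd generator $\theta$ by $g \triangleright x = (-1)^{|x|}x$. Since $g$ is group-like, the rule reduces to $(\theta^{i} \otimes g^{s})(\theta^{j} \otimes g^{t}) = (-1)^{sj}\,\theta^{i+j}\otimes g^{s+t}$:

```python
# Basis element (i, s) represents theta^i ⊗ g^s in A ⋆ CZ2, where A is
# the exterior algebra on one odd generator theta (theta^2 = 0, |theta| = 1).
def mult(x, y):
    (i, s), (j, t) = x, y
    if i + j > 1:
        return None, 0                    # theta^2 = 0
    sign = (-1) ** (s * j)                # g ▷ theta = -theta
    return (i + j, (s + t) % 2), sign

# g * theta vs theta * g:
print(mult((0, 1), (1, 0)), mult((1, 0), (0, 1)))
# ((1, 1), -1)  vs  ((1, 1), 1):  g theta = -theta g, so the grading
# has become the inner automorphism g a g = (-1)^{|a|} a.
```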
Apart from the above described construction, it is worth
mentioning two more important points proved in \cite{Maj1}. First,
it is shown that if $H$ is triangular and $A$ is quasitriangular
in the category ${}_{H}\mathcal{M}$, then $A \star H$ is
(ordinarily) quasitriangular.
Second, it is shown that the braided modules of the original braided Hopf algebra
$A$ ($A$-modules in ${}_{H}\mathcal{M}$, where $A$ is an algebra
in ${}_{H}\mathcal{M}$) and the (ordinary) modules of the
``bosonised" (ordinary) Hopf algebra $A \star H$ are in a
bijective correspondence, providing thus an equivalence of
categories the category of the braided modules of $A$ ($A$-modules
in ${}_{H}\mathcal{M}$) where the braiding is given by a natural
family of isomorphisms $\Psi_{V,W}: V \otimes W \cong W \otimes
V$, stated explicitly by
\begin{equation} \label{eq:braid2}
\Psi_{V,W}(v \otimes w) = \sum (R_{H}^{(2)} \vartriangleright w)
\otimes (R_{H}^{(1)} \vartriangleright v)
\end{equation}
for any $V,W \in obj({}_{H}\mathcal{M})$ (by $v,w$ we denote any
elements of $V,W$ respectively), is equivalent to the
category of the (ordinary) modules of $A \star H$. Let us stress
here that, from the mathematician's viewpoint, this does not prove
that we have a Morita equivalence, since such an
equivalence would presuppose the whole category of modules over
$A$ and not its subcategory of braided modules.
Let us close this review of the bosonisation technique, with a
note on terminology. The term ``bosonisation" was first introduced
by Majid in \cite{Maj1}. It comes from physics and stems
from the point of view, widespread among physicists, which
considers the bosonic algebra to be a quotient algebra of the
universal enveloping algebra of the Heisenberg Lie algebra, with
its elements thus being even, or ungraded, elements.
In the case that
$H = \mathbb{CG}$ where $\mathbb{G}$ is a finite abelian group,
the Hopf algebra in ${}_{\mathbb{CG}}\mathcal{M}$ is just a
$\mathbb{G}$-graded Hopf algebra in the sense of \cite{Ko},
\cite{Mon} or \cite{Scheu}. The result of the bosonisation
technique in this case is the construction of an ordinary Hopf
algebra $A \star \mathbb{CG}$ which absorbs the grading and whose
elements are ungraded or ``bosonic" elements. This is the original
motivation which led Majid to the use of the term bosonisation
(see also \cite{Maj2, Maj3}).
Finally, let us note that for another use of the term
bosonisation, which is technically reminiscent of the above but it
is not explicitly related to the Hopf structure, one should also
see \cite{Pl2}.
\subsection{Bosonisation of $P_{B}$ using the smash product}
In the special case that $A$ is some super-Hopf
algebra, then $H = \mathbb{CZ}_{2}$, equipped with its
non-trivial quasitriangular structure mentioned earlier. In this
case, the technique simplifies and the ordinary Hopf algebra
produced is the smash product Hopf algebra $A \star
\mathbb{CZ}_{2}$. The grading in $A$ is induced by the
$\mathbb{CZ}_{2}$-action on $A$:
\begin{equation} \label{eq:cz2action}
1\vartriangleright a = a,\quad g \vartriangleright a = (-1)^{|a|}a
\end{equation}
for $a$ homogeneous in $A$. Utilizing the non-trivial $R$-matrix
$R_{g}$ and using eq. \eqref{eq:nontrivRmatrcz2} and eq.
\eqref{eq:act-coact} we can readily deduce the form of the induced
$\mathbb{CZ}_{2}$-coaction on $A$:
\begin{equation} \label{eq:cz2coaction}
\rho(a) = g^{|a|} \otimes a \equiv\left\{ \begin{array}{ccc}
1 \otimes a & , & a: \textrm{even} \\
g \otimes a & , & a: \textrm{odd} \\
\end{array} \right.
\end{equation}
Let us note here that instead of invoking the non-trivial
quasitriangular structure $R_{g}$ we could alternatively extract
the (left) coaction \eqref{eq:cz2coaction} utilizing the
self-duality of the $\mathbb{CZ}_{2}$ Hopf algebra. For any
abelian group $\mathbb{G}$ a (left) action of $\mathbb{CG}$
coincides with a (right) action of $\mathbb{CG}$. On the other
hand, for any finite group, a (right) action of $\mathbb{CG}$ is
the same thing as a (left) coaction of the dual Hopf algebra
$(\mathbb{CG})^{*}$. Since $\mathbb{CZ}_{2}$ is both finite and
abelian and hence self-dual in the sense that: $\mathbb{CZ}_{2}
\cong (\mathbb{CZ}_{2})^{*}$ as Hopf algebras, it is immediate to
see that the (left) action \eqref{eq:cz2action} and the (left)
coaction \eqref{eq:cz2coaction} are virtually the same thing.
The above mentioned action and coaction enable us to form the
cross product algebra and the cross coproduct coalgebra according
to the preceding discussion which finally form the smash product
Hopf algebra $A \star \mathbb{CZ}_{2}$. The grading of $A$ is
``absorbed" in $A \star \mathbb{CZ}_{2}$, and becomes an inner
automorphism:
$$
gag = (-1)^{|a|}a
$$
where we have identified $a \star 1 \equiv a$ and $1 \star g
\equiv g$ in $A \star \mathbb{CZ}_{2}$, and $a$ is a homogeneous
element of $A$. This inner automorphism is exactly the adjoint
action of $g$ on $A \star \mathbb{CZ}_{2}$ (as an ordinary Hopf
algebra). The following proposition is proved, as an example of
the bosonisation technique, in \cite{Maj2}:
\begin{proposition} \label{bosonisat}
Corresponding to every super-Hopf algebra $A$ there is an ordinary
Hopf algebra $A \star \mathbb{CZ}_{2}$, its bosonisation,
consisting of $A$ extended by adjoining an element $g$ with
relations, coproduct, counit and antipode:
\begin{equation} \label{eq:HopfPBg}
\begin{array}{cccc}
g^{2} = 1 & ga = (-1)^{|a|}ag & \Delta(g) = g \otimes g & \Delta(a) = \sum a_{1}g^{|a_{2}|} \otimes a_{2} \\
\\
S(g) = g & S(a) = g^{-|a|}\underline{S}(a) & \varepsilon(g) = 1 & \varepsilon(a) = \underline{\varepsilon}(a) \\
\end{array}
\end{equation}
where $\underline{S}$ and $\underline{\varepsilon}$ denote the
original maps of the super-Hopf algebra $A$.
In the case that $A$
is super-quasitriangular via the $R$-matrix $$ \ \underline{R} =
\sum \underline{R}^{(1)} \otimes \underline{R}^{(2)} \ $$ then the
bosonised Hopf algebra $A \star \mathbb{CZ}_{2}$ is
quasitriangular (in the ordinary sense) via the $R$-matrix: $$ \
R_{smash} = R_{Z_{2}} \sum
\underline{R}^{(1)}g^{|\underline{R}^{(2)}|} \otimes
\underline{R}^{(2)} \ $$
Moreover, the representations of the bosonised Hopf algebra $A
\star \mathbb{CZ}_{2}$ are precisely the super-representations of
the original superalgebra $A$.
\end{proposition}
The application of the above proposition in the case of the
parabosonic algebra $P_{B}$ is straightforward: we immediately get
its bosonised form $P_{B(g)}$, which by definition is $ P_{B(g)}
\equiv P_{B} \star \mathbb{CZ}_{2} $. Utilizing equations
\eqref{eq:HopfPB} which describe the super-Hopf algebraic
structure of the parabosonic algebra $P_{B}$, and replacing them
into equations \eqref{eq:HopfPBg} which describe the ordinary Hopf
algebra structure of the bosonised superalgebra, we get after
straightforward calculations the explicit form of the (ordinary)
Hopf algebra structure of $P_{B(g)} \equiv P_{B} \star
\mathbb{CZ}_{2}$ which reads:
\begin{equation} \label{eq:HopfPBgexpl}
\begin{array}{cccc}
\Delta(B_{i}^{\pm}) = B_{i}^{\pm} \otimes 1 + g \otimes B_{i}^{\pm} & \Delta(g) = g \otimes g
& \varepsilon(B_{i}^{\pm}) = 0 & \varepsilon(g) = 1 \\
\\
S(B_{i}^{\pm}) = B_{i}^{\pm}g = -gB_{i}^{\pm} & S(g) = g & g^{2} = 1 & \{g,B_{i}^{\pm}\} = 0 \\
\end{array}
\end{equation}
where $i = 1, 2, \ldots \ $ and we have again identified
$B_{i}^{\pm} \star 1 \equiv B_{i}^{\pm}$ and $1 \star g \equiv g$
in $P_{B} \star \mathbb{CZ}_{2}$.
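As a consistency check (our own numerical sketch, not part of the text), one can verify in the $p=1$ truncated Fock realization that the coproduct of \eqref{eq:HopfPBgexpl} is indeed an algebra map: the operators $\Delta(B^{\pm}) = B^{\pm}\otimes 1 + g \otimes B^{\pm}$ again satisfy the defining trilinear relations (in effect, a two-component Green ansatz). The cutoff $D$ and the comparison window are choices for the sketch.

```python
import numpy as np

D = 10
a = np.diag(np.sqrt(np.arange(1, D)), k=1)
g = np.diag((-1.0) ** np.arange(D))         # parity: g^2 = 1, {g, a} = 0
I = np.eye(D)

# Coproduct on generators: Delta(B^-) = B^- ⊗ 1 + g ⊗ B^-
Da = np.kron(a, I) + np.kron(g, a)
Dad = np.kron(a.T, I) + np.kron(g, a.T)     # Delta(B^+)
B = {+1: Dad, -1: Da}

acomm = lambda x, y: x @ y + y @ x
comm = lambda x, y: x @ y - y @ x

# Restrict to matrix elements safely below the per-mode cutoff.
keep = np.array([(n1 < D - 4) and (n2 < D - 4)
                 for n1 in range(D) for n2 in range(D)])
P = np.diag(keep.astype(float))

# Delta is an algebra map, so Delta(B^pm) must again satisfy the
# paraboson trilinear relations (single mode, all deltas = 1):
ok = True
for xi in (+1, -1):
    for eta in (+1, -1):
        for eps in (+1, -1):
            lhs = comm(acomm(B[xi], B[eta]), B[eps])
            rhs = (eps - eta) * B[xi] + (eps - xi) * B[eta]
            ok &= np.allclose(P @ lhs @ P, P @ rhs @ P)
print(ok)
```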
Finally, we can easily check that since $\mathbb{CZ}_{2}$ is
triangular (via $R_{Z_{2}}$) and $P_{B}$ is super-quasitriangular
(trivially since it is super-cocommutative)
it is an immediate consequence of the above proposition that
$P_{B(g)}$ is quasitriangular (in the ordinary sense) via the
$R$-matrix:
\begin{equation} \label{eq:nontrivRmatrcz2smash}
R_{smash} = \frac{1}{2}(1 \star 1 \otimes 1 \star 1 + 1 \star 1
\otimes 1 \star g + 1 \star g \otimes 1 \star 1 - 1 \star g
\otimes 1 \star g)
\end{equation}
which under the above mentioned identification: $1 \star g \equiv
g$ completely coincides with the $R$-matrix $R_{Z_{2}}$ given in
eq. \eqref{eq:nontrivRmatrcz2}.
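The triangularity of $\mathbb{CZ}_{2}$ via this $R$-matrix can be checked concretely (our own sketch) in the two-dimensional regular representation, where $g$ acts as the swap matrix; one verifies $R^{2} = \mathrm{id}$ and $R_{21} = R$, hence $R_{21}R = 1 \otimes 1$.

```python
import numpy as np

# CZ_2 in its regular representation: g acts as the 2x2 swap matrix.
I2 = np.eye(2)
G = np.array([[0.0, 1.0], [1.0, 0.0]])    # g, with g^2 = 1

# R_{Z_2} = (1⊗1 + 1⊗g + g⊗1 - g⊗g)/2 acting on CZ_2 ⊗ CZ_2
R = 0.5 * (np.kron(I2, I2) + np.kron(I2, G)
           + np.kron(G, I2) - np.kron(G, G))

# Flip map tau(x ⊗ y) = y ⊗ x as a permutation matrix; R21 = tau R tau
tau = np.zeros((4, 4))
for i in range(2):
    for j in range(2):
        tau[2 * j + i, 2 * i + j] = 1.0
R21 = tau @ R @ tau

print(np.allclose(R @ R, np.eye(4)), np.allclose(R21, R))
```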
\subsection{Bosonisation of $P_{B}$ using two additional operators $K^{\pm}$}
Let us now describe a different construction (see also
\cite{DaKa, KaDa2} for the case of finite degrees of freedom
and \cite{KaDa1} for the general case) which achieves the same
objective, i.e. the determination of an ordinary Hopf structure for
the parabosonic algebra $P_{B}$.
\begin{proposition} \label{altern}
Corresponding to the super-Hopf algebra $P_{B}$ there is an
ordinary Hopf algebra $P_{B(K^{\pm})}$, consisting of $P_{B}$
extended by adjoining two elements $K^{+}$, $K^{-}$ with
relations, coproduct, counit and antipode:
\begin{equation} \label{eq:HopfPBK}
\begin{array}{cc}
\Delta(B_{i}^{\pm}) = B_{i}^{\pm} \otimes 1 + K^{\pm} \otimes B_{i}^{\pm} & \Delta(K^{\pm}) = K^{\pm} \otimes K^{\pm} \\
\\
\varepsilon(B_{i}^{\pm}) = 0 & \varepsilon(K^{\pm}) = 1 \\
\\
S(B_{i}^{\pm}) = B_{i}^{\pm}K^{\mp} & S(K^{\pm}) = K^{\mp} \\
\\
K^{+}K^{-} = K^{-}K^{+} = 1 & \{K^{+},B_{i}^{\pm}\} = 0 = \{K^{-},B_{i}^{\pm}\} \\
\end{array}
\end{equation}
for all values $i = 1, 2, \ldots \ $.
\end{proposition}
\begin{proof}
Consider the complex vector space $\mathbb{C}\langle X_{i}^{+},
X_{j}^{-}, K^{\pm} \rangle$ freely generated by the elements
$X_{i}^{+}, X_{j}^{-}, K^{+}, K^{-}$ where $i = 1, 2, \ldots \ $.
Denote $T(X_{i}^{+}, X_{j}^{-}, K^{\pm})$ its tensor algebra. In
the tensor algebra we denote $I_{BK}$ the ideal generated by all
the elements of the form \eqref{eq:pbdef} together with all
elements of the form: $\ K^{+}K^{-}-1 \ $, $\ K^{-}K^{+}-1 \ $, $\
\{K^{+}, X_{i}^{\pm}\} \ $, $\ \{K^{-}, X_{i}^{\pm}\} \ $, for all
values of $ \ i = 1, 2, \ldots \ $. We define:
$$
P_{B(K^{\pm})} = T(X_{i}^{+}, X_{j}^{-}, K^{\pm}) / I_{BK} \
$$
We denote by $B_{i}^{\pm}, K^{\pm}$ where $i = 1, 2, \ldots \ $
the images of the generators $X_{i}^{\pm}, K^{\pm}$, $ \ i = 1, 2,
\ldots \ $ of the tensor algebra, under the canonical projection.
These are a set of generators of $P_{B(K^{\pm})}$. \\
Consider the linear map
$$
\Delta^{T} : \mathbb{C}\langle X_{i}^{+}, X_{j}^{-}, K^{\pm}
\rangle \rightarrow P_{B(K^{\pm})} \otimes P_{B(K^{\pm})}
$$
determined by
$$
\begin{array}{c}
\Delta^{T}(X_{i}^{\pm}) = B_{i}^{\pm}
\otimes 1 + K^{\pm} \otimes B_{i}^{\pm} \\
\\
\Delta^{T}(K^{\pm}) = K^{\pm} \otimes K^{\pm} \\
\end{array}
$$
By the universality property of the tensor algebra, this map is
uniquely extended to an algebra homomorphism:
$$
\Delta^{T}: T(X_{i}^{+}, X_{j}^{-}, K^{\pm}) \rightarrow
P_{B(K^{\pm})} \otimes P_{B(K^{\pm})}
$$
We emphasize that the usual tensor product algebra $P_{B(K^{\pm})}
\otimes P_{B(K^{\pm})}$ is now considered, with multiplication $(a
\otimes b)(c \otimes d) = ac \otimes bd$ for any $a,b,c,d \in
P_{B(K^{\pm})}$. Now we can trivially verify that
\begin{equation} \label{eq:DKb}
\Delta^{T}(\{K^{\pm},X_{i}^{\pm}\})
= \Delta^{T}(K^{+}K^{-} -1) = \Delta^{T}(K^{-}K^{+}-1) = 0
\end{equation}
After lengthy algebraic calculations we also get:
\begin{equation} \label{eq:Db}
\Delta^{T}(\big[ \{ X_{i}^{\xi}, X_{j}^{\eta}\}, X_{k}^{\epsilon}
\big] -
(\epsilon - \eta)\delta_{jk}X_{i}^{\xi} - (\epsilon -
\xi)\delta_{ik}X_{j}^{\eta}) = 0
\end{equation}
The calculations are carried out in the same spirit as the calculation
found in Appendix \ref{app}, but we note that this time we use the
comultiplication stated in equation \eqref{eq:HopfPBK} and the
usual tensor product algebra $P_{B(K^{\pm})} \otimes
P_{B(K^{\pm})}$ is considered instead of the braided tensor
product algebra $P_{B(K^{\pm})} \underline{\otimes}
P_{B(K^{\pm})}$ used in Appendix \ref{app}.
Relations \eqref{eq:DKb} and \eqref{eq:Db} mean that $I_{BK}
\subseteq ker (\Delta^{T})$, which in turn implies that
$\Delta^{T}$ is uniquely extended to an algebra homomorphism from
$ P_{B(K^{\pm})}$ to the usual tensor product algebra
$P_{B(K^{\pm})} \otimes P_{B(K^{\pm})}$, with the values on the
generators determined by \eqref{eq:HopfPBK}, according to the
following (commutative) diagram:
\begin{displaymath}
\xymatrix{T(X_{i}^{+}, X_{j}^{-}, K^{\pm}) \ar[rr]^{\Delta^{T}}
\ar[dr]_{\pi} & & P_{B(K^{\pm})} \otimes
P_{B(K^{\pm})} \\
& P_{B(K^{\pm})} \ar@{.>}[ur]^{ \ \Delta} & }
\end{displaymath}
Following the same
procedure we construct an algebra homomorphism $\varepsilon:
P_{B(K^{\pm})} \rightarrow \mathbb{C}$ and an algebra
antihomomorphism $S: P_{B(K^{\pm})} \rightarrow P_{B(K^{\pm})}$
which are completely determined by their values on the generators
of $P_{B(K^{\pm})}$ as given in \eqref{eq:HopfPBK}. Note that in
the case of the antipode we start by defining a linear map $S^{T}$
from $\mathbb{C}\langle X_{i}^{+}, X_{j}^{-}, K^{\pm} \rangle$ to
the opposite algebra $P_{B(K^{\pm})}^{op}$, with values determined
by $S^{T}(X_{i}^{\pm}) = B_{i}^{\pm}K^{\mp}$ and $S^{T}(K^{\pm}) =
K^{\mp} \ $. Following the above described procedure we verify
that $I_{BK} \subseteq ker(S^{T})$, thus obtaining an algebra
anti-homomorphism:
$$
S: P_{B(K^{\pm})} \rightarrow P_{B(K^{\pm})}
$$
with values on the generators determined by \eqref{eq:HopfPBK}.
Now it is
sufficient to verify the rest of the Hopf algebra axioms (i.e.:
coassociativity of $\Delta$, counity property for $\varepsilon$,
and the compatibility condition which ensures us that $S$ is an
antipode) on the generators of $P_{B(K^{\pm})}$. This can be done
with straightforward computations (see \cite{DaKa}).
\end{proof}
Let us note here that the motivation for the above
construction lies in the case of finite degrees of freedom: if
we consider the parabosonic algebra in $2n$ generators
($n$-paraboson algebra) and denote it $P_{B}^{(n)}$, it is
possible to construct explicit realizations of the elements
$K^{+}$ and $K^{-}$ in terms of formal power series, such that the
relations specified in \eqref{eq:HopfPBK} hold. Briefly, the
construction (see also \cite{DaKa}) is as follows: We define
$$
\mathcal{N} = \sum_{i=1}^{n}N_{ii} =
\frac{1}{2}\sum_{i=1}^{n}\{B_{i}^{+},B_{i}^{-}\}
$$
We inductively prove
$$
\mathcal{N}^{m} B_{i}^{\pm}= B_{i}^{\pm}\left(\mathcal{N}\pm I\right)^{m}
$$
For any entire complex function $f(z)$ we get
\[
f\left(\mathcal{N}\right)B_{i}^{\pm} =B_{i}^{\pm} f\left(\mathcal{N}+I\right)
\]
We now introduce the following elements:
$$
K^{+} = \exp(i \pi \mathcal{N}), \qquad K^{-} = \exp(-i \pi \mathcal{N})
$$
Then we get
\begin{equation} \label{eq:Kb}
\begin{array}{lr}
\{K^{+},B_{i}^{\pm}\} = 0, & \{K^{-},B_{i}^{\pm}\} = 0 \\
\end{array}
\end{equation}
A direct application of the Baker-Campbell-Hausdorff formula leads
also to:
\begin{equation} \label{eq:KK}
K^{+}K^{-} = K^{-}K^{+} = 1
\end{equation}
which completes the construction.
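These relations can be illustrated numerically (our own single-mode sketch with $n=1$, $p=1$, so $B^{-}=a$ on a truncated Fock space): since $\mathcal{N} = \frac{1}{2}\{B^{+},B^{-}\}$ has exact eigenvalues $m + \frac{1}{2}$ in this realization, we build $K^{\pm}$ directly from those eigenvalues, sidestepping truncation artifacts in $\mathcal{N}$ itself.

```python
import numpy as np

# Single-mode sketch (n = 1, p = 1): B^- = a on a truncated Fock space;
# N = (1/2){B^+, B^-} has exact eigenvalues m + 1/2 on |m>.
D = 10
m = np.arange(D)
a = np.diag(np.sqrt(m[1:]), k=1)                 # a|m> = sqrt(m)|m-1>
Kp = np.diag(np.exp(+1j * np.pi * (m + 0.5)))    # K^+ = exp(+i pi N)
Km = np.diag(np.exp(-1j * np.pi * (m + 0.5)))    # K^- = exp(-i pi N)

acomm = lambda x, y: x @ y + y @ x

print(np.allclose(Kp @ Km, np.eye(D)),     # K^+ K^- = 1   (eq. KK)
      np.allclose(Km @ Kp, np.eye(D)),     # K^- K^+ = 1
      np.allclose(acomm(Kp, a), 0),        # {K^+, B^-} = 0 (eq. Kb)
      np.allclose(acomm(Kp, a.T), 0),      # {K^+, B^+} = 0
      np.allclose(acomm(Km, a), 0),        # {K^-, B^-} = 0
      np.allclose(acomm(Km, a.T), 0))      # {K^-, B^+} = 0
```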
\section{Discussion}
Several points which deserve to be discussed stem from the
constructions of the preceding paragraphs:
First of all we should mention that an analogous treatment
regarding the (super-) algebraic and the (super-) Hopf algebraic
structure can be given for the parafermionic algebras and for
mixed systems of paraparticles as well. The parafermionic algebra
in finite degrees of freedom has been shown \cite{Kata},
\cite{RySu} to be isomorphic to the universal enveloping algebra
of the Lie algebra $B_{n} = so(2n+1)$, and thus to be an ordinary Hopf
algebra \cite{DaKa}; consequently the grading does not seem to
play an important role in its structure. On the other hand,
algebras which describe mixed systems of paraparticles such as the
relative parafermi or the relative parabose sets (see
\cite{GreeMe} for their description) have been shown to be
$\mathbb{Z}_{2}$-graded (see \cite{Pal6}) or $\mathbb{Z}_{2}
\times \mathbb{Z}_{2}$-graded (see \cite{Ya}) algebras
respectively. It would thus be an interesting idea to apply
similar techniques to these algebras and obtain results about
their braided representations and their tensor products, and about
their super-Hopf structure and their bosonised forms as well. Of
course such questions inevitably involve questions of pure
mathematical interest, such as the possible quasitriangular
structures (and thus the possible braidings) for a
$\mathbb{C}(\mathbb{Z}_{2} \times \mathbb{Z}_{2})$ group Hopf
algebra, which, to our knowledge, have not yet been determined in
general (see \cite{Scheu1} for a
relevant discussion). \\
Let us note here that the super-Hopf algebraic structure of the
parabosonic algebra established in \seref{2} has important
applications in physics: It has recently been shown \cite{KaDa3}
that using the results of \prref{superHopfPb}, one may obtain the
construction of the parabosonic Fock-like representations
corresponding to an arbitrary value of the positive integer $p$
(see: \cite{GreeMe}) as irreducible submodules of the braided
tensor product representations between $p$-copies of the first
Fock-like representation (corresponding to the value of $p=1$).
The super-Hopf algebraic structure of the parabosonic algebra is
essential in this process and leads us to a purely braided
interpretation of the Green ansatz for parabosons (see
\cite{KaDa3} for a more detailed description of the method).
Regarding now the results of the last section, i.e.: the
``bosonised" variants $P_{B(g)}$, $P_{B(K^{\pm})}$ of the
parabosonic algebras, various questions can be posed: \\
From the point of view of the structure,
an obvious question arises: while $P_{B(g)}$ is a quasitriangular
Hopf algebra through the $R$-matrix $R_{Z_{2}}$ given in eq.
\eqref{eq:nontrivRmatrcz2}, there is yet no suitable $R$-matrix
for the Hopf algebra $P_{B(K^{\pm})}$. Thus the question of the
quasitriangular structure of $P_{B(K^{\pm})}$ is open. \\
On the other hand, regarding representations, we have already
noted that the super representations of $P_{B}$
($\mathbb{Z}_{2}$-graded modules of $P_{B}$ or, equivalently,
$P_{B}$-modules in
${}_{\mathbb{CZ}_{2}}\mathcal{M}$) are in one-to-one correspondence
with the (ordinary) representations of $P_{B(g)}$. The construction
of the representation of $P_{B(g)}$ which
corresponds to any given representation of $P_{B}$ can be
done straightforwardly \cite{Maj1, Maj2}. Although we do
not have such a strong result for the representations of
$P_{B(K^{\pm})}$, the construction at the end of \seref{4} for the case of
finite degrees of freedom, enables us to uniquely extend the
Fock-like \cite{GreeMe} representations of $P_{B}^{(n)}$ to representations of
$P_{B(K^{\pm})}^{(n)}$. Since the Fock-like representations of
$P_{B}$ are unique up to unitary equivalence (see the proof in
\cite{GreeMe} or \cite{OhKa}), this is a point which deserves to be discussed
analytically in a forthcoming work. We must note here that this
question has to be discussed in connection with the explicit
construction of the parabosonic Fock-like representations which
is yet another open problem (see the discussion in \cite{KaDa3}
or \cite{LiStVdJ}).
Finally, it will be interesting to study the (ordinary)
tensor products of representations of $P_{B(g)}$ and
$P_{B(K^{\pm})}$, through the comultiplications stated in
\eqref{eq:HopfPBgexpl} and \eqref{eq:HopfPBK} respectively, in
comparison with the (braided) tensor products of (braided)
representations of $P_{B}$ through the comultiplication stated in
\eqref{eq:HopfPB}. Specifically, it will be of interest to answer
the question of whether the ordinary Hopf structures presented in
the last section of this paper are capable of generating essentially
new representations of the parabosonic algebra: The possibility
that the reduction of (ordinary) tensor product representations of
either $P_{B(g)}$ or $P_{B(K^{\pm})}$ might lead to submodules
non-equivalent to the parabosonic Fock-like representations (the
latter emerge as irreducible submodules in the reduction of the
braided tensor product representations of $P_{B}$) is an
intriguing one and deserves to be discussed analytically in a
forthcoming work.
\textbf{Acknowledgements:} This paper is part of a
project supported by ``Pythagoras II", contract number 80897.
\section{Introduction}
For 15 years, the problem of large-scale flows of galaxies has seen little
attention relative to other probes of large scale structure in the Universe. Data on peculiar velocities have been difficult to obtain and have led to contradictory conclusions \citep{sw95, z02}. They are limited to small redshifts ($\sim 100 h^{-1} \rm Mpc$), out to which distance indicators can be used reliably. These earlier forays into the subject led to
disagreements that few people wanted to sift through. But in the interval
the data have improved dramatically, stirring recent activity in the subject.
Peculiar velocities are unique in that they
provide explicit information on the three dimensional mass
distribution, and measure mass on scales of $20-50 h^{-1}$ Mpc, a scale
untouched by alternative methods.
Local peculiar velocity data are, in principle, rich in
cosmological information.
Power spectra and correlation functions
could be derived from the data by direct calculation or by maximum likelihood techniques \citep[e.g.][]{gd89, jk95, freu99, jusf00, brid01, abate09}. Low order moments of the flow, such as the bulk
motion and the shear, could also be calculated directly and analyzed within the framework of cosmological models
\citep[e.g.][]{feldwh10}. Some authors claim that the bulk flow of mixed catalogs of galaxies argues for problems with
$\Lambda CDM$ \citep{watkins09}, but recent results \citep{nuss11} show that the SFI++ catalog by itself has a large scale bulk flow that is consistent with $\Lambda CDM$, and this analysis has smaller error bars.
Other, perhaps more ambitious, applications could involve an assessment of the
statistical nature of the initial cosmological large scale fluctuations, i.e. whether gaussian or otherwise
\citep{nd93, berj95}. All these analyses could be performed with peculiar velocity measurements alone.
Here we will be concerned with a comparison of the observed peculiar velocities on the one hand and the velocities derived from the fluctuations in the galaxy distribution
on the other. The basic physical principle behind this comparison is simple.
The large scale flows are almost certainly the result of the process
of gravitational instability with overdense regions attracting material, and
underdense regions repelling material. Initial conditions in the early universe
might have been somewhat chaotic, so that the original peculiar
velocity field (i.e. deviations from Hubble flow) was uncorrelated
with the mass distribution, or even contained vorticity. But those
components of the velocity field which are not coherent with the
density fluctuations will adiabatically decay as the Universe expands,
and so at late times one expects the velocity field to be aligned with
the gravity field, at least in the limit of small amplitude
fluctuations \citep{Peeb80, n91}.
In the linear regime, this relation implies a simple proportionality between
the gravity field {\bf g} and the velocity field ${\bf v}_{\rm g}$,
namely ${\bf v}_{\rm g} \propto {\bf g}~ t$ where
the only possible time $t$ is the Hubble time. The exact expression
depends on the mean cosmological density
parameter $\Omega$ and is given by \citep{Peeb80},
\begin{equation}
\label{eq:vg}
{\bf v}_{\rm g}(r) = {\frac{2 f(\Omega)}{ 3 H_0 \Omega}} {\bf g}({\bf r}) \ .
\end{equation}
Given complete knowledge of the mass fluctuation field $\delta_\rho({\bf r})$
over all space, the gravity field ${\bf g(r) } $ is
\begin{equation}
\label{eq:grho}
{\bf g}({\bf r}) = G\bar{\rho} \int d^3 {\bf r'} \delta_\rho({\bf r'})
\frac{{\bf r' }-{\bf r}} { |{\bf r'}-{\bf r}|^3} \ ,
\end{equation}
where $\bar\rho$ is the mean mass density of the Universe.
If the galaxy distribution at least
approximately traces the mass on large scales, with linear bias $b$
between the galaxy fluctuations $\delta_g$ and the mass fluctuations
(i.e. $\delta_g = b \delta_\rho$),
then from (\ref{eq:vg}) and (\ref{eq:grho}) we have
\begin{equation}
\label{eq:vrho}
{\bf v}_{\rm g}(r) = {H_0 \beta \over 4 \pi {\bar n}}
\sum_{i} {1\over\phi(r_i)} {{\bf r}_i -{\bf r} \over | {\bf r}_i -{\bf r}|^3 }
+ {{H_0 \beta }\over 3}{\bf r}\ ,
\end{equation}
where $\bar n$ is the true mean galaxy density in the sample, $\beta \equiv f(\Omega)/b$ with $f\approx \Omega^{0.55}$
the linear growth rate \citep{lind05}, and where we have replaced the integral over space
with a sum over the galaxies in a catalog, with radial selection
function $\phi(r)$\footnote{$\phi(r)$ is
defined as the fraction of the luminosity distribution function observable
at distance $r$ for a given flux limit; see \citep[e.g.][]{y91}.}.
The second term is
for the uniform component of the galaxy distribution and would exactly
cancel the first term in the absence of clustering within the survey volume. Note that the result is insensitive to the value of $H_0$, as
the right hand side has units of velocity. We shall henceforth quote
all distances in units of $\ {\rm km\,s^{-1}}$.
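As a concrete illustration, the discrete sum of equation (\ref{eq:vrho}) is straightforward to evaluate numerically. The sketch below is an illustration only, not our production code; the function and array names are hypothetical, and all positions are taken in $\ {\rm km\,s^{-1}}$ units:

```python
import numpy as np

def predicted_velocity(r_obs, r_gal, phi_gal, beta, nbar, H0=100.0):
    """Linear-theory peculiar velocity at position r_obs, summing
    1/phi-weighted inverse-square contributions from catalog galaxies
    and adding the uniform-background term.  Positions in km/s units."""
    sep = r_gal - r_obs                       # (N, 3) separations r_i - r
    dist3 = np.sum(sep**2, axis=1) ** 1.5
    keep = dist3 > 0.0                        # exclude a galaxy sitting at r_obs
    weights = 1.0 / (phi_gal[keep] * dist3[keep])
    clustered = (H0 * beta / (4.0 * np.pi * nbar)) * \
        np.sum(weights[:, None] * sep[keep], axis=0)
    uniform = (H0 * beta / 3.0) * r_obs       # the uniform-background term
    return clustered + uniform
```

For a statistically uniform catalog the clustered and uniform terms nearly cancel, as noted above.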
The sum in equation (\ref{eq:vrho}) is to be computed in real space, whereas the galaxy catalog exists in redshift space. As we shall see in \S\ref{sec:zdist}, the modified equation, which
includes redshift distortions, maintains a dependence on $\Omega$ and $b$ through the
parameter $\beta$.
Therefore, a comparison of the measured velocities of galaxies to the
predicted velocities, ${\bf v}_{\rm g}(r)$, gives us a measure of $\beta$.
Further, a detailed comparison of the flow patterns addresses
fundamental questions regarding the way galaxies trace mass on large scales and
the validity of gravitational instability theory.
In this paper we shall make this comparison
using the best presently available data for both the
velocity and gravity fields.
The direct comparison of the peculiar velocities is fraught with
difficulty. Distances to individual galaxies are typically uncertain at
the 20\% level and are furthermore subject to considerable
Malmquist bias. We shall elaborate on a method first presented 15 years ago \citep{DNW96} (hereafter DNW96), which was designed to alleviate
most of the observational biases. But the peculiar velocity data at that time were poor, and our results,
as well as those of others \citep[e.g.][]{h95,k91,y88,sd88,branf91,ndac01,zb02}, were all meant to be preliminary; none of their conclusions were compelling.
Recently ideal data sets have been assembled, thus allowing a new, definitive analysis of large scale flows. The new gravity field is very well described by the nearly whole sky Two Mass Redshift Survey (2MRS) (Huchra et al. 2005b), and the new peculiar velocity catalog is the SFI++ sample \citep{spring07, spring09}.
The 2MRS has previously been used to address the gravity field in considerable detail \citep[e.g.][]{e06,h08}, and some effort has gone into
the comparison of the 2MRS predicted velocities versus the SFI++ measured velocities, in particular by \cite{p05, l10}.
To date, however, none of these comparisons has
included a proper treatment of the correlated noise in the analysis.
Here we shall compare the observed versus predicted radial
velocities, taking into account a full error analysis based on a suite of elaborate mock catalogs designed to
match the 2MRS and SFI++ data sets.
We shall use a refinement of the method of
orthogonal mode expansion by \cite{ND94}
(hereafter ND94) and \cite{ND95} (hereafter
ND95). Analysis of peculiar velocity data is
inevitably plagued by systematics, random measurement errors and sparseness of the data.
The methods employed here are specifically designed to minimize these biases, thus achieving a
robust, unbiased comparison between the measured SFI++ and the predicted 2MRS velocities.
In \S\ref{sec:data} we introduce the 2MRS and the SFI++ data sets and the various cuts that we apply to ensure
unambiguous, reliable results.
In \S\ref{sec:vrecon} we describe the method for extracting large scale peculiar motions from both data sets.
We discuss the linear equation for predicting the peculiar velocity field associated with a distribution of galaxies in
redshift space and review our earlier method for
deriving estimates of galaxy peculiar velocities from the inverse Tully-Fisher (ITF) relation by means
of an expansion over orthonormal modes (basis functions). We focus on the new refinements designed to
optimize the extraction of the signal from the data.
As has been the case in the past, mock catalogs
constructed from $N$-body simulations are essential for debugging and
calibrating the methods. This is especially so for our application,
since the entire analysis is performed in essentially pure redshift
space. We present details of the mocks in \S\ref{sec:mocks}.
In \S\ref{sec:results} we inspect the flow fields reconstructed from the 2MRS and SFI++ data, visually and statistically, demonstrating that
differences between them are similar to those expected in the mocks.
We present our constraints on the cosmological parameters in \S\ref{sec:const}.
In the concluding \S\ref{sec:disc} we summarize our findings, discussing their implications and
contrasting them with other results in the literature. For readers wanting to avoid the `how to' details, we suggest skipping \S\ref{sec:vrecon} and \S\ref{sec:mocks}, but then coming back to understand how our machinery operates.
\section{New Data for the Comparison}
\label{sec:data}
\subsection{Gravity field}
Twenty years ago the only catalog of galaxy photometry with uniform
coverage over the full sky was derived from the IRAS satellite
\citep{sd88,y88}. From the point source catalog
(galaxies were unresolved in IRAS) a flux limited sample at $60\mu$m was
selected, and redshifts were obtained for all objects to construct
the IRAS PSCz (Point Source Catalog redshift survey, Strauss et al.
1990). Among other problems, this PSCz catalog gave little weight to
ellipticals (which are dim at $60\mu m$ as this wavelength is dominated by
dusty star formation) and suffered from severe confusion in regions of high
density. However the uniform full-sky coverage was unique in enabling
the estimation of local gravity, and furthermore our local gravity
field (in a relatively low density region of the universe) is
dominated by spiral, not elliptical, galaxies and IRAS gave a fair,
although noisy, representation of the spirals.
Much larger redshift surveys do now exist e.g. SDSS \citep{abaz} and 2dF
\citep{colless03}, but few have attempted
to be complete over the whole sky, as many cosmological measurements do
not require such complete surveys and a trade-off has been made
between depth and sky coverage given the available telescope time and
resources. The most recent {\it all sky} imaging survey was the Two Micron
All-Sky Survey (2MASS, \cite{skrut}), and the 2MASS Extended
Source Catalog (XSC, Jarrett et al. 2000) extracts from that imaging a
flux limited (to K=13.5) sample of half a million extragalactic objects. The 2MASS
Redshift Survey (2MRS, Huchra et al. 2005a) is a program to obtain
redshifts for all galaxies in the 2MASS XSC to a fixed flux limit in
the K-band. The K=11.25 magnitude limited version of 2MRS consists of
$\sim$23,000 galaxy redshifts with uniform sky coverage to within 5 degrees
of the Galactic plane, except towards the Galactic centre where
stellar confusion limits the catalog to $\pm 10$ degrees (Huchra et al. 2005b). A K=11.75 mag
limited version of 2MRS is almost complete, consists of $\sim$43,000 galaxy
redshifts, and will be made available soon (Huchra et al. in prep.).
Since the sample is K band selected, the extinction correction is modest and it is ideal for calculating local gravity.
In the Southern hemisphere redshifts for the 2MASS galaxies were observed as part of the 6dFGS \citep{jo05, jo09}, which used the 6dF multi-fiber spectrograph on the 1.2m UK Schmidt in Siding Spring, Australia. Their ultimate product was a map of 110,256 2MASS galaxies in the Southern sky to a magnitude limit of K=12.75 mag and to within 10 deg of the Galactic plane. This survey is far deeper than the stated goal of 2MRS, but also has a higher Galactic latitude limit. In the Northern hemisphere the 2MRS builds on a strong tradition of redshift surveys at the CfA: the CfA redshift survey and ZCAT
\citep{dav82,lgh86}. In the absence of a northern hemisphere equivalent to the 6dF, new redshift observations are done galaxy by galaxy using the 1.2m telescope at the Fred Lawrence Whipple Observatory on Mt. Hopkins, AZ. The average density of galaxies at the magnitudes 2MRS is observing is about 1 per degree, so without a wide-field multi-object spectrograph in the Northern hemisphere this remains the most efficient way to get new redshifts. Lower Galactic latitude galaxies in the Southern hemisphere ($ |b|> 5$) were added to 2MRS from observations at CTIO.
The version of the 2MRS which is complete to K=11.25 (consisting of
23,200 galaxies; Huchra et al. 2005, Westover 2007) has been used
to calculate the acceleration on the Local Group by \cite{e06a}.
The dipole estimate seems to converge to the CMB result within $60h^{-1}$ Mpc, suggesting that the bulk of the motion of the Local Group comes from structures within that distance. They have also done a dipole analysis weighting the sample by luminosity rather than counts, and find relatively minor changes. Density and velocity fields have been calculated by
\cite{e06} for the K=11.25 sample. All major local superclusters and voids are successfully identified, and backside infall onto the ``Great Attractor'' region (at $50h^{-1}$ Mpc) is detected.
The 2MRS catalog appears to be a fair tracer of the underlying mass distribution.
The real-space correlation length, $r_0$, is best fit by a regression $r_0 = (7.5 \pm 0.5) - (3.0 \pm 0.6) {\rm log}_{10} n$, where $n$ is the cumulative number density in units of $10^{-3}~h^3~{\rm Mpc}^{-3}$ \citep{w09}. In contrast, \cite{ze10} report that the R band optically selected SDSS survey gives $r_0 = (6.7 \pm 0.1) - (2.0 \pm 0.1) {\rm log}_{10} n$.
In terms of bias estimates, \cite{w09} reports $b/b_* = 0.73 + 0.24L/L_*$, while
\cite{no02} state that $b/b_* = 0.85 + 0.15L/L_*$ for the 2dF survey.
In other words, Westover's data show the 2MRS correlations are more dependent on luminosity than are optically selected samples.
In view of this luminosity dependent result, it makes the most sense to evaluate the gravity field in a luminosity-weighted manner; it is computed below with a variety of luminosity weightings.
\cite{w09} has also made a mock catalog for the missing galaxies at low latitude by interpolating the galaxy density above and below them in three dimensions. We shall use this catalog as an estimate of the local mass density.
\subsection{TF sample}
Twenty years ago, the mis-calibration of full sky Tully-Fisher data was the
problem that led to very discrepant results for the determination of
$\beta$, with $\beta=0.5\pm 0.2$ \citep{DNW96} and
$\beta=1.0\pm 0.2$ \citep[e.g.][]{dek93}. The mistaken TF
calibration led to a large scale flow that confused both analyses, but
in the end, it was a calibration error in the Southern sky which made
a false large-scale flow \citep{wcf97}. In one analysis this led to a higher
$ \chi^2 $ than was acceptable, and in the other it led to a biased result.
For the analysis below, we use the recently completed survey of spiral
galaxies with I-band Tully-Fisher distances, SFI++ \citep{mas06, spring07, spring09},
which in turn builds on the original Spiral Field
I-band Survey \citep{ gio94, gio95, hg99} and
Spiral Cluster I-band Survey \citep{gio97a, gio97b}. We use
the published SFI++ magnitudes and velocity widths, and derive our own
peculiar velocities, rather than taking the published distances as given.
We use the SFI++ catalog as it includes several datasets to give full sky coverage. It is not essential for our analysis that the peculiar velocity sample have uniform sky coverage, but they must have a uniform calibration.
The other major Tully-Fisher catalog was published by \cite{t08}. This survey is restricted to $cz<3000$ km/s and includes many of the same galaxies in that redshift range as SFI++. Tully also makes use of a different algorithm for measuring spectral line widths, which are not easily comparable to the values derived for SFI++. So while one could in principle combine the two catalogs for this analysis, the small potential gain in sample size is not enough to justify the resulting heterogeneity in observational methods and data analysis.
We shall use the inferred distances, as well as redshifts, to derive an estimate of peculiar velocity for each galaxy. Correlation analysis \citep{bor00} indicates that peculiar velocities in the SFI++ behave as expected for $\Lambda CDM$ models \citep[see esp.][]{bran01,freu99, daCostaNuss, feldw08}.
\begin{figure}
\centering
\includegraphics[ scale=0.4 ,angle=00]{eta_M.ps}
\caption{Top: the scatter of the $\eta-M_I$ relationship, with $\gamma=0.12$. Bottom: the mean $\Delta\eta$
in 0.5 magnitude bins of the raw distribution (recall that $\eta=$ log(W)). The red histogram is for galaxies with $cz > 5000$ km/s, the blue for $cz < 5000$ km/s. Note the change where $M_I > -20$, and also how the red and blue curves appear to have different TF relations. The histograms are for the raw data, with no flow model, while the points, with $1\sigma$ error bars, are the mean values of $\eta-M_I$ after the fit. The nearby and more distant galaxies now have identical TF relations.}
\label{fig:eta_M}
\end{figure}
In the analysis below, we shall use the inverse of the Tully-Fisher (ITF)
relationship, as given in equation \ref{eq:ITF}. We begin by drawing the published
magnitudes, velocity widths, and redshifts from \cite{spring07, spring09}. We
include all field, group, and cluster galaxies, which leaves us with an
initial sample of 4859 galaxies. Galaxies in groups and clusters are
treated as individual objects, though the redshifts for template cluster
galaxies are replaced by the systematic redshift of the cluster. Following
\cite{gio97a}, we brighten the magnitudes of Sb galaxies by 0.10
magnitudes and brighten the magnitudes of spirals earlier than Sb by 0.32
magnitudes, while leaving types later than Sb unchanged. This is done in
order to account for subtle differences in the TF relation of different
spiral subclasses.
We select only objects with inclination $i > 45$ deg to ease
problems with inclination corrections. The data must be transformed to the
LG frame, and galaxies with $cz<200$ km/s are deleted. All the analysis is done in the LG frame as the boundary conditions then simply becomes $v_g \rightarrow 0$ and $v_{itf} \rightarrow 0$.
The few galaxies with large residual $\Delta\eta$ ($\Delta\eta$ is the residual from equation \ref{eq:ITF}; see figure \ref{fig:eta_M}) are sufficiently deviant to be a worry for statistics that depend on data with a compact core and no long tails. The typical outlying object is not unusually nearby in redshift, so peculiar motions cannot be the explanation. The vast majority of SFI++ galaxies have well behaved TF relationships; perhaps the outliers are undergoing mergers. There are large negative $\Delta\eta$ outliers but few corresponding large positive ones; we deal with this by
clipping the outliers at $| \Delta\eta | > 0.20$.
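The clipping step can be sketched as an iterative refit (a minimal illustration using the stated cut of $|\Delta\eta| > 0.20$; the helper below is hypothetical and not the fitting machinery of \S\ref{sec:refine}):

```python
import numpy as np

def clip_itf_outliers(eta, mag, gamma, eta0, cut=0.20):
    """Iteratively refit the linear ITF relation eta = gamma*M + eta0 and
    drop galaxies whose residual |Delta eta| exceeds the cut."""
    keep = np.ones(len(eta), dtype=bool)
    for _ in range(5):                        # a few passes suffice in practice
        resid = eta - (gamma * mag + eta0)
        new_keep = np.abs(resid) <= cut
        if np.array_equal(new_keep, keep):
            break
        keep = new_keep
        # refit slope and zero point on the surviving galaxies
        gamma, eta0 = np.polyfit(mag[keep], eta[keep], 1)
    return keep, gamma, eta0
```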
The top panel of figure \ref{fig:eta_M} shows the distribution of $\Delta\eta$ {\it before} galaxies have had their magnitudes adjusted for peculiar velocities;
the bottom panel shows the result of averaging the raw data into 0.5 mag bins, where the red histogram is for galaxies at $cz > 5000$ km/s and the blue histogram is for those with $cz < 5000$ km/s. The red and blue points, plus
$1-\sigma$ error bars, are the average $<\Delta\eta>$ versus $M$ {\it after} the best flow model, described
in \S\ref{sec:refine}, is fit to the data. The blue point with $M<-23$ is deviant, but it only represents 11 galaxies, compared to an average of $170$ galaxies in the other bins. Note that the zero point of the $<\Delta\eta>$ behavior makes no difference; only the slope, the constancy of $<\Delta\eta>$ versus $M$ is important. Compared to the different slopes before the flow corrections are applied, the TF relation is now identical in the foreground versus the background of the SFI++ data.
This figure is for illustration only, as the data is not binned during the fitting process.
The bend in the TF relationship at $M=-20$ is known to be a result of the reduced mass in the Baryonic Tully-Fisher relation \citep{stark09,guro10}. We are missing the data to straighten out the curve, and since
the ITF method is easiest to apply if there is a linear relationship between $\eta$ and $M$,
we simply delete all galaxies with $M > -20$ from further consideration. After all these cuts, we are left with 2830 spiral galaxies with $200 < cz_{LG} < 10000$ km/s.
The raw distribution of $\Delta \eta$, after limiting the sample and fitting the best linear curve, is found to approximately fit a gaussian with
$\sigma = 0.059$. This is the dispersion with {\it no flow model} applied.
The gaussian width of the distribution is $\sigma = 0.0558$ after the flow model is applied. The small decrease is limited by the intrinsic, dominant noise of the TF relation. This noise has numerous causes, such as the uncertainty in the inclination correction of the SFI++ galaxies, or small variations in the outer limits of the rotation curves of the galaxies.
The following sections, \S\ref{sec:vrecon} and \S\ref{sec:results}, explain the machinery for effecting this reduction.
\section{Reconstruction of Peculiar Velocities}
\label{sec:vrecon}
In this section we outline our method described in ND94, ND95 and DNW96 for deriving the smooth peculiar velocities of galaxies from
an observed distribution of galaxies in redshift space and, independently,
from a sample of spiral galaxies with measured circular velocities
$\eta$ and apparent magnitudes $m$.
\subsection{Peculiar Velocities from the Distribution of Galaxies in Redshift Space}
\label{sec:zdist}
There are several methods
for generating peculiar velocities from redshift surveys,
using linear \citep[e.g.][]{Fisher95b} and non-linear relations
\citep[e.g.][]{Peeb80,cc98,NussBranch,Frisch, Ensslin09}.
Here we restrict ourselves to large scales where linear theory is applicable.
We will use the method of ND94 for reconstructing velocities from the 2MRS. This method
is particularly convenient, as it is easy to implement, fast,
and requires no iterations. Most importantly, this redshift space
analysis closely parallels the ITF estimate described below.
We next present a very brief summary of the
methodology.
We follow the notation of DNW96. The comoving redshift space coordinate and
the comoving peculiar velocity relative to the Local Group (LG) are, respectively, denoted by
${\bf s}$ (i.e. $s= cz/H_0$) and ${\bf v}({\bf s})$.
To first order, the peculiar velocity is irrotational
in redshift space \citep{chodnuss} and can be expressed as ${\bf v}_g({\bf s})=-\pmb{\nabla}\Phi({\bf s})$ where $\Phi({\bf s})$ is a potential function.
As an estimate of the fluctuations in the fractional density field $\delta_0({\bf s})$ traced by the discrete distribution
of galaxies in redshift space we consider,
\begin{equation}
\delta_0({\bf s})=
{1\over {(2\pi)^{3/2}{\bar n} \sigma^3}}\sum_i {{w(L_{0i})}\over {\phi(s_i)}}
\exp\left[-{{\left({\bf s} - {\bf s}_i\right)^2}\over
{2\sigma^2}}\right] -1 \quad ,
\label{eq:deldef}
\end{equation}
where $\bar n=\sum_i w(L_{0i})/\phi(s_i)$ and $ w$ weighs each galaxy according to its estimated luminosity, $L_{0i}$.
The 2MRS density field is here smoothed by a gaussian window
with a redshift independent width,
$\sigma=350\ {\rm km\,s^{-1}}$. This is in contrast to DNW96 where the {\it IRAS~}
density was smoothed with a width proportional to the mean particle separation.
The reason for adopting a constant smoothing for 2MRS is its dense sampling, which is nearly four times
higher than that of {\it IRAS}.
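A minimal sketch of the estimator in equation (\ref{eq:deldef}), with hypothetical array names and the constant $\sigma=350\ {\rm km\,s^{-1}}$ smoothing, is:

```python
import numpy as np

def redshift_space_density(s_grid, s_gal, w_gal, phi_gal, sigma=350.0):
    """Gaussian-smoothed overdensity delta_0 at grid points s_grid.
    Positions in km/s; w_gal are luminosity weights and phi_gal is the
    selection function evaluated at each galaxy's redshift."""
    weights = w_gal / phi_gal
    nbar = np.sum(weights)                    # the n-bar normalization
    norm = (2.0 * np.pi) ** 1.5 * nbar * sigma ** 3
    diff = s_grid[:, None, :] - s_gal[None, :, :]   # (M, N, 3) separations
    r2 = np.sum(diff ** 2, axis=2)
    kernel = np.exp(-0.5 * r2 / sigma ** 2)
    return np.sum(weights[None, :] * kernel, axis=1) / norm - 1.0
```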
We emphasize that the coordinates ${\bf s}$ are in {\it
observed redshift} space, expressed in a galactic reference frame. The only
correction from pure redshift space coordinates is the
collapse of the fingers
of god of the known rich clusters prior to the redshift space smoothing
\citep{y91}.
Weighting the galaxies in equation (\ref{eq:deldef}) by
the selection function and luminosities evaluated at their redshifts rather than the
actual (unknown) distances yields a biased estimate for
the density field. This bias gives rise to Kaiser's rocket effect \citep{kais87}.
To construct the density field, equation \ref{eq:deldef}, we volume limit the 2MRS sample to 3000 km/s, so that $\phi(s<3000)= 1$, resulting in $\phi(s=10000)=0.27$ \citep{w09}. In practice, this means we delete galaxies from the 2MRS sample fainter than $M_* + 2$. Galaxies at 10,000 km/s therefore have $1/\phi = 3.7$ times the weight of foreground galaxies in the generation of the velocity field, $v_g$.
If we expand the angular dependence of $\Phi$ and $\delta_0({\bf s})$
in redshift space in
spherical harmonics
in the form,
\begin{equation}
\Phi({\bf s})=
\sum_{l=0}^{\infty}\sum_{m=-l}^l \Phi_{lm}(s)Y_{lm}(\theta,\varphi)
\end{equation}
and similarly for $\delta_0$, then, to first order,
$\Phi_{lm}$ and $\delta_{0lm}$ satisfy,
\begin{eqnarray}
\label{eq:phis}
{1\over {s^2}} {{\rm d} \over {{\rm d} s}}
\left(s^2 {{{\rm d} \Phi_{lm}} \over {{\rm d} s}}\right)
&-&{1 \over {1+\beta}}{{l(l+1) \Phi_{lm}} \over {s^2}}\\
&=&{\beta \over {1+\beta}} \left(\delta_{0lm} -
{ \kappa(s) } { {{\rm d} \Phi_{lm}} \over {{\rm d} s}}\right)\; , \nonumber
\end{eqnarray}
where
\begin{equation}
\kappa=\frac {{\rm d} {\rm ln}\phi} {{\rm d} s} - \frac{2}{s} \frac{d {\rm ln} w(L_{0i})}{d {\rm ln} L_{0i}}
\end{equation}
represents the correction for the bias introduced by
the generalized Kaiser rocket effect. As emphasized by
ND94, the solutions to equation (\ref{eq:phis}) for the monopole ($l=0$) and the dipole
($l=1$) components of the radial peculiar velocity in the LG frame
are uniquely determined by specifying vanishing velocity at the origin.
That is, the radial velocity field at redshift $\bf s$, when expanded to
harmonic $l \le 1$, is not influenced by material at redshifts greater
than $\bf s$.
In this paper, we shall consider solutions as a function
of $\beta$ and the parameter $\alpha$ defining a power law form $w_i\propto L_i^\alpha$ for
the galaxy weights.
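For a single $(l,m)$ harmonic, equation (\ref{eq:phis}) is an ordinary differential equation in $s$ and can be solved by standard finite differences. The sketch below assumes a uniform radial grid and simple boundary conditions ($\Phi=0$ at the origin, zero gradient at the outer edge); it is an illustration of the structure of the problem, not the ND94 solver:

```python
import numpy as np

def solve_radial_mode(s, delta_lm, l, beta, kappa):
    """Finite-difference solve of the radial equation for one (l, m)
    harmonic of the potential Phi_lm on a uniform grid s."""
    n, h = len(s), s[1] - s[0]
    A = np.zeros((n, n))
    b = np.zeros(n)
    fac = beta / (1.0 + beta)
    for i in range(1, n - 1):
        # (1/s^2) d/ds (s^2 dPhi/ds) = Phi'' + (2/s) Phi', discretized at s[i]
        A[i, i - 1] = 1.0 / h**2 - 1.0 / (s[i] * h)
        A[i, i + 1] = 1.0 / h**2 + 1.0 / (s[i] * h)
        A[i, i] = -2.0 / h**2 - l * (l + 1) / ((1.0 + beta) * s[i] ** 2)
        # move the kappa * dPhi/ds source term to the left-hand side
        A[i, i - 1] -= fac * kappa[i] / (2.0 * h)
        A[i, i + 1] += fac * kappa[i] / (2.0 * h)
        b[i] = fac * delta_lm[i]
    A[0, 0] = 1.0                             # Phi = 0 at the origin
    A[-1, -1], A[-1, -2] = 1.0, -1.0          # dPhi/ds = 0 at the outer edge
    return np.linalg.solve(A, b)
```

The radial velocity of the mode then follows from $-{\rm d}\Phi_{lm}/{\rm d}s$ by numerical differentiation.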
\subsection{Peculiar Velocities from the Inverse Tully-Fisher relation}
\label{sec:itfd}
Given a sample of galaxies with measured circular velocity
parameters, $\eta_i \equiv {\rm log}\,\omega_i$, where $\omega_i$ is the linewidth, apparent magnitudes $m_i$,
and redshifts $z_i$, the goal is to derive an estimate for the smooth
underlying peculiar velocity field. We assume that the circular
velocity parameter, $\eta$, of a galaxy is, up to a random
scatter, related to its absolute magnitude, $M$, by means of a
linear {\it inverse} Tully-Fisher (ITF) relation, i.e.,
\begin{equation}
\label{eq:ITF}
\eta=\gamma M + \eta_0 .
\end{equation}
One of the main advantages of inverse TF methods is that samples selected by magnitude, as most are, will be minimally plagued by Malmquist bias effects when analyzed in the inverse
direction \citep{schechter, a82}.
We write the absolute magnitude of a galaxy,
\begin{equation}
M_i = M_{0i} + P_i
\end{equation}
where
\begin{equation}
M_{0i} = m_i + 5{\rm log}(z_i)-15
\end{equation}
and
\begin{equation}
P_i = 5{\rm log}(1- u_i/z_i)
\label{eq:P_i}
\end{equation}
where $m_i$ is the
apparent magnitude of the galaxy, $z_i$ is its redshift
in units of $\ {\rm km\,s^{-1}},$
and $u_i$ its radial peculiar velocity in the LG frame.
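The relations above amount to a simple, invertible mapping between a radial peculiar velocity and a magnitude offset; as a trivial sketch (with $u$ and $z$ both in $\ {\rm km\,s^{-1}}$, function names hypothetical):

```python
import numpy as np

def magnitude_offset(u, z):
    """P = 5 log10(1 - u/z): the magnitude shift produced by a radial
    peculiar velocity u for a galaxy at redshift z (both in km/s)."""
    return 5.0 * np.log10(1.0 - u / z)

def peculiar_velocity(P, z):
    """Invert the offset to recover u from a fitted P."""
    return z * (1.0 - 10.0 ** (P / 5.0))
```

A receding galaxy ($u>0$) is nearer than its redshift distance suggests, so its inferred absolute magnitude shifts by a negative $P$.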
ND95 base a velocity model on spherical harmonics and spherical Bessel functions, for galaxies distributed over the sky to 6000 km/s. With the 2MRS we extend the gravity field to 10,000 km/s.
In general, one can write the
function $P_i$ in terms of an expansion over $j_{_{\rm m}}$ orthogonal basis functions, ${F}^j_i$,
\begin{equation}
\label{eq:pF}
P_i =\sum_{j=0}^{j_{_{\rm m}}} a^j {F}^j_i
\end{equation}
with orthonormality conditions,
\begin{equation}
\label{eq:Forth}
\sum_{i=1}^{N_g} {F}^j_i {F}^{j^\prime}_i= \delta_K^{j , {j^\prime}}
\end{equation}
and the zeroth mode defined by ${F}^0_i=1/\sqrt N_g$, where $N_g$ is
the number of galaxies in the sample. The mode ${F}^0$
describes a Hubble-like flow in the space of the data set which is
degenerate with the zero point of the ITF relation.
Here we
set $a^0=0$; the corresponding Hubble-like flow is likewise removed from the gravity field below.
The best fit mode coefficients, $a^j$, the slope, $\gamma$, and the zero point
$\eta_0$, are found by minimizing the $\chi^2$ statistic
\begin{equation}
\label{eq:chi}
\chi^2_{_{\rm ITF} } =\sum_{i=1}^{N_{\rm g}}
\frac{\left(\gamma M_{0i}+\gamma P_i+\eta_0-\eta_i\right)^2}{\sigma^2_{\eta,{\rm int}}}\; ,
\end{equation}
where $\sigma_{\eta,{\rm int}}$ is the $rms$ of the intrinsic scatter in $\eta$
about the ITF relation, and $N_{\rm g}$ is the number of galaxies in the sample.
Given the orthonormality condition, the solution to the equations $\partial \chi^2_{_{\rm ITF} } /\partial a^j=0$,
$\partial \chi^2_{_{\rm ITF} } /\partial \gamma=0$ and $\partial \chi^2_{_{\rm ITF} } /\partial \eta_0=0$ is straightforward.
Thanks to the orthonormality condition, the covariance matrix $\langle\delta a^j \delta a^{j^\prime}\rangle$ of the errors in $a^j$
is diagonal, with
\begin{equation}
\label{eq:siga}
\sigma_a=<(\delta a^j)^2>^{1/2}=\frac{\sigma_{\eta,{\rm int}}}{\gamma} \; .
\end{equation}
This lack of covariance of the errors in the coefficients is most rewarding as it makes the
ITF error analysis exceptionally simple.
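The decoupling can be made explicit: with an orthonormal design matrix whose columns are the modes ${F}^j_i$, minimizing equation (\ref{eq:chi}) over the $a^j$ at fixed $\gamma$ and $\eta_0$ reduces to independent projections of the ITF residuals. A sketch (hypothetical names; the full fit also iterates $\gamma$ and $\eta_0$):

```python
import numpy as np

def fit_itf_modes(eta, M0, F, gamma, eta0):
    """Given an orthonormal design matrix F (N x J, F^T F = I), the
    chi^2 decouples and each mode coefficient follows independently:
    minimizing ||gamma*M0 + gamma*F a + eta0 - eta||^2 over a gives
    a = F^T (eta - gamma*M0 - eta0) / gamma, with diagonal covariance."""
    resid = eta - gamma * M0 - eta0           # residual from the bare ITF relation
    return F.T @ resid / gamma
```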
Therefore, statistical assessment of the match between the data sets will be done at the level of the modes
rather than the peculiar velocities.
The interested reader will find details in ND95 and DNW96.
\subsection{The orthonormal basis functions}
The choice of radial basis functions for
the expansion of the modes can be made with considerable latitude. The
functions should obviously be linearly independent, and close to orthogonal
when integrated over volume. They should be smooth and close to a complete
set of functions up to a given resolution limit. ND95 chose spherical
harmonics $Y_l^m$ for the angular wavefunctions and the
derivatives of spherical Bessel functions for the radial basis functions,
motivated by the desire to use functions which automatically satisfy
potential theory boundary conditions at the origin and the outer boundary.
That is, they chose
\begin{equation}
P(y,\theta,\phi)=\sum_{n=0}^{n_{max}}\sum_{l=0}^{l_{max}}\sum_{m=-l}^{m=l}
{a_{nlm} \over y}\left(j^{\prime}_{l}\left(k_ny\right)-c_{_{l1}}\right)
Y_{lm}\left(\theta,\phi\right) \; .
\end{equation}
The function $y(z)$ is designed to compress the distance scale, increasing the smoothing scale of the mode to deal with increased noise at large distances. For this analysis we use
\begin{equation}
\label{eq:yz}
y(z) = \sqrt{{\rm log}(1+(z/z_s)^2)}
\end{equation}
where $z_s = 5000$ km/s.
The constant $c_{l1}$ is non-zero for the dipole term only and ensures that
$P=0$ at the origin while remaining non-zero at the outer boundary.
Details of how the orthogonalized functions $\tilde F_i^j$ are derived
from this expansion are given in ND95.
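In practice the orthonormalization of the raw basis functions over the galaxy positions can be done by a QR decomposition, which is equivalent to Gram-Schmidt. The following is an illustration of the construction (not the ND95 code):

```python
import numpy as np

def orthonormalize_modes(raw_modes):
    """QR (Gram-Schmidt) orthonormalization of raw basis functions
    evaluated at the N galaxy positions, so that sum_i F^j_i F^j'_i
    equals delta_jj'.  Column 0 is the constant mode F^0 = 1/sqrt(N)."""
    n = raw_modes.shape[0]
    stacked = np.column_stack([np.ones(n) / np.sqrt(n), raw_modes])
    Q, R = np.linalg.qr(stacked)
    # fix the sign convention so each mode correlates positively with its raw input
    Q *= np.sign(np.diag(R))
    return Q
```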
The spherical harmonics are expanded to a maximum $n=5$ and $l=3$, except we delete the $n=1$ mode for $l=0$, as this mode can be confused with the false Hubble flow described in the next section. We also include an external quadrupole, distinct from the internal quadrupole, to describe the gravity induced by material at distances $cz>10000\ {\rm km\,s^{-1}}$. Summing over the values of $m$, that makes a total of 72 modes fit to minimize the $\chi^2$ of equation \ref{eq:chi}. The use of $y(z)$ is designed to allow the radial resolution to degrade with distance; for example, the $n=5$ modes have a wavelength in the radial direction of $3000\ {\rm km\,s^{-1}}$ at $cz=8000\ {\rm km\,s^{-1}}$ and a wavelength of $1300\ {\rm km\,s^{-1}}$ at $cz=1000\ {\rm km\,s^{-1}}$.
\subsection{ Expanding the 2MRS gravity field}
In order to assess the match between the velocities by
means of the expansion coefficients, and to ensure that both fields
are smoothed similarly, the 2MRS predicted velocities ${\bf g}$ must also be described by
an expansion over the basis functions used in the $v_{_{\rm itf}}$ model.
Using the machinery for computing a gravity field described in \S\ref{sec:zdist},
one can generate a linear theory predicted peculiar velocity
${\bf g}$ for any point in space as a function of its redshift for
any value of $\beta$. We must ensure that the smoothing scales
of the ITF and 2MRS predicted peculiar velocities are matched
to the same resolution. Therefore we expand ${\bf g}$ in terms of the
modes used in the velocity model. Because of the
orthonormality, we can write the mode coefficients as
\begin{equation}
\label{eq:a_g}
a_{_{\rm g}}^j = 5 \sum_{i=1}^{N_{\rm g}} {\rm log}\left(1 - {(v_{_{{\rm g},i }} - H' cz_i)\over cz_i}
\right) {F}^j_i
~ ,
\end{equation}
where the $H'$ term is a correction for Hubble flow and the summation
index $i$
is restricted to be
over the positions of the same galaxies in the ITF expansion.
This procedure will filter out fluctuations that are not
described by the resolution of our basis functions.
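Thanks to the orthonormality of the basis functions, equation (\ref{eq:a_g}) is a plain projection of the log variable onto the modes. A minimal Python sketch (the function name and array layout are ours, and we read ${\rm log}$ as $\log_{10}$, consistent with the magnitude-like variable $P$):

```python
import numpy as np

def gravity_mode_coefficients(v_g, cz, H_prime, F):
    """Project a predicted velocity field onto orthonormal basis functions.

    v_g     : (N,) predicted peculiar velocities at the ITF galaxy positions
    cz      : (N,) redshift distances (km/s)
    H_prime : residual Hubble-flow correction already fitted
    F       : (N, J) orthonormal basis functions F^j_i (assumed layout)

    Returns the (J,) coefficients a_g^j of equation (eq:a_g).
    """
    # ITF-style log variable: 5 log10(1 - (v_g - H' cz)/cz)
    P = 5.0 * np.log10(1.0 - (v_g - H_prime * cz) / cz)
    # Orthonormality turns the expansion into a plain projection.
    return F.T @ P
```

With an orthonormal $F$ this projection exactly recovers the coefficients of any field lying in the span of the modes.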
We do not include any mode such that $P_i = $ constant, which would be a pure Hubble flow.
In the fitting for the ITF modes, pure Hubble flow
is absorbed into a shift of the zero point $\eta_0$ and the
orthogonality is ensured. Within a given set of test points
occupying a volume smaller than that used to define the gravity
field, it is possible for $v_g$ to have a non-zero value of
Hubble flow $H'$, which must be removed from $v_g$ before we
tabulate the mode coefficients. That is, we tabulate the mean
Hubble ratio
\begin{equation}
H' = \frac{\sum_{k=1}^{N_{\rm g}} v_{g,k}\, cz_k }{ \sum_{k=1}^{N_{\rm g}} (cz_k) ^2}
\end{equation}
and subtract it from the predicted field $v_g$. This ``breathing
mode'' which mimics a Hubble flow is not
trivial in amplitude, and can be a 10\% correction on the effective Hubble
constant within simulated catalogs. This mode is cosmologically expected to a modest degree, but a larger portion of the effect is caused by error in the determination of $\overline{n}$, which we estimate by assuming that the weighted counts within $12{,}000\ {\rm km\,s^{-1}}$ equal the mean value.
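The fit and removal of this residual Hubble flow amount to one-parameter least squares. A sketch, assuming the standard least-squares estimator for $v_g \approx H' cz$ (the function name is ours):

```python
import numpy as np

def remove_breathing_mode(v_g, cz):
    """Remove a residual Hubble-flow ('breathing') mode from a velocity field.

    Fits v_g ~ H' * cz by least squares and subtracts the fitted flow.
    Returns (residual velocities, H_prime).  A sketch, not the paper's code.
    """
    H_prime = np.sum(v_g * cz) / np.sum(cz**2)
    return v_g - H_prime * cz, H_prime
```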
\subsection{Refinements of the ND95 functions}
\label{sec:refine}
The \cite{ND95} reconstruction of the base functions can only provide a rough estimate of
the spatial distribution of galaxies in the TF data. It does not guarantee that the signal to noise in the
filtered fields is uniform all over the sample.
The ND95 expansion yields reliable TF velocities of nearby galaxies,
but very noisy estimates at larger distances.
Moreover, it is difficult to achieve a desired resolution as a function of redshift and
to ensure equal resolutions in the radial and angular directions.
The ND95 method expands the observed velocities in terms of harmonic functions, but the
individual harmonic modes are not regularized and may acquire unrealistically large amplitudes, depending on the spatial coverage of the data.
We aim here at generating base functions which are themselves smoothed
with a variable isotropic smoothing window designed to yield a constant signal to noise in the
estimated $v_{_{\rm itf} }$.
We construct these new basis functions with the help
of the ND95 orthogonal functions denoted here by $F^{\rm ND}$.
Suppose a single radial velocity field, $\Vs$, with the appropriate variable smoothing
has been found.
We term $\Vs$ the {\it seed} field as the new modes will stem from it. As will be described below this field will be chosen as
the 2MRS predicted velocity field, but any other field representing a viable velocity field could serve as $\Vs$. Given
\begin{equation}
\Psii=5{\rm log}(1-\Vsi/cz_i)
\end{equation}
(where $i$ refers to galaxies in the TF sample)
we expand $\Psii$ in base functions constructed according to ND95,
\begin{equation}
\Asj=\sum_i \Psii F^{NDj}_i\; .
\end{equation}
Here, the number of the modes $F^{ND}$ is sufficiently large so that the inverse transformation $\sum_j \Asj F^{NDj}_i$
reproduces
$\Psii$. In practice we use about 1400 ND95 modes (we go to $l=17$).
We then form additional fields, $P_\alpha$, according to
\begin{equation}
P_{\alpha, i}=\sum_j{\cal R}^j_{\alpha} \Asj F^{NDj}_i \; ,
\end{equation}
where $\cal R$ is a set of normally distributed
random numbers with zero mean and standard deviation of unity.
This reconstruction of the additional
fields preserves the ``power" in the modal expansion and randomizes the phases.
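The construction of the additional fields $P_\alpha$ can be sketched as follows (the array layout and function name are ours; a fresh set of random numbers ${\cal R}^j_\alpha$ is drawn for each field):

```python
import numpy as np

def randomized_phase_fields(A_seed, F_nd, n_fields, rng):
    """Generate additional fields P_alpha that keep the modal 'power'
    of the seed expansion while randomizing the phases with
    unit-variance Gaussian numbers R^j_alpha.

    A_seed : (J,) seed-field expansion coefficients A_s^j
    F_nd   : (N, J) ND95 basis functions at the galaxy positions
    Returns an (n_fields, N) array, row alpha = sum_j R^j_alpha A^j F^j_i.
    """
    R = rng.normal(size=(n_fields, A_seed.size))   # R^j_alpha, N(0, 1)
    return (R * A_seed) @ F_nd.T
```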
So far all these fields, $P_{\alpha,i}$, are unfiltered and may contain non-linear small scale fluctuations.
Therefore, we smooth all fields $P_{\alpha,i}$ according to
\begin{equation}
P^{\rm smooth}_{\alpha,i}=\sum_{{\rm all\ galaxies}\ i'} P_{\alpha,i'} W(s_{i,i'},R_{s,i'})\; ,
\end{equation}
where $s_{i,i'} $ is distance (in redshift space) between the galaxies $i$ and $i'$, and $W$ is a gaussian window
of width $R_{s,i}$ which depends on the galaxy $i$.
The smoothing width $R_{s,i}$ is tuned such that the expected
error in the ITF velocity of galaxy $i$ is $\sim 150\ {\rm km\,s^{-1}}$ and therefore it depends on the redshift and the
local density of galaxies near $i$.
The smoothing length at the positions of galaxies in the SFI++ sample is shown in figure (\ref{fig:Rs}); it varies roughly linearly with redshift, ranging from
$1h^{-1}\,{\rm Mpc}$ nearby up-to $30h^{-1}\,{\rm Mpc}$ for galaxies at redshifts $\sim 10000\ {\rm km\,s^{-1}}$.
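The variable-width smoothing is a weighted sum over neighbouring galaxies. A sketch (the normalization of the window is an assumption, as the text does not spell it out):

```python
import numpy as np

def adaptive_gaussian_smooth(P, pos, R_s):
    """Smooth a field sampled at galaxy positions with a per-galaxy
    Gaussian window of width R_s[i'] attached to each source galaxy.

    P   : (N,) field values P_alpha,i
    pos : (N, 3) redshift-space positions
    R_s : (N,) smoothing widths
    """
    # pairwise separations s_{i,i'}
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    W = np.exp(-0.5 * (d / R_s[None, :])**2)
    W /= W.sum(axis=1, keepdims=True)   # normalize each window (assumed)
    return W @ P
```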
The new basis functions are then obtained by orthonormalizing $P^{\rm smooth}_{\alpha,i}$.
We refer to the new
functions by the standard notation $F^j_i$, with $F^1_i$ being the smoothed $\Psii$. These new modes will
be used in the expansion given in equation (\ref{eq:pF}). In \S\ref{sec:TheITF} we describe how we
determine the number of modes, $j_{_{\rm m}}$, to be used in the expansion.
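The orthonormalization of the smoothed fields, with the smoothed $\Psii$ kept as the leading mode, can be sketched with classical Gram-Schmidt (a minimal illustration, not the paper's implementation):

```python
import numpy as np

def orthonormalize(fields):
    """Orthonormalize a set of fields (rows) over the galaxy positions,
    keeping the first field as the leading mode.

    fields : (M, N) -> (K<=M, N) with rows F^j_i satisfying F F^T = I.
    """
    F = []
    for f in fields.astype(float):
        for g in F:
            f = f - (f @ g) * g        # remove projections on earlier modes
        norm = np.linalg.norm(f)
        if norm > 1e-12:               # drop (near-)degenerate fields
            F.append(f / norm)
    return np.array(F)
```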
The seed field $\Vs$ could be constructed by interpolating any unfiltered cosmological velocity field
on the positions of the
galaxies in the SFI++ sample.
Nevertheless, we could improve on this by constructing $\Vs$ from the
unfiltered 2MRS velocities given directly by the solution to equation (\ref{eq:phis}).
In practice, we use unfiltered $\pmb{$g$}$ obtained with $\beta=0.2$. The choice of $\pmb{$g$}$ for $\beta=0.2$ is arbitrary; the predicted field with any other $\beta$ could be used.
If the $\Vs \sim \pmb{$g$}$ and $v_{_{\rm itf} }$ both describe the same underlying velocity field then
the additional modes should mainly reflect the covariance of the errors
between the two fields.
The flow patterns of
9 of the modes are shown in the 9 panels of figure (\ref{fig:modes}). The color scheme throughout this paper is conventional: red (blue) means outflowing (infalling) relative to the central point.
We have extensively tested a broad variety of choices for the first mode.
None of the results of the analysis reported here are sensitive to this choice of the
first mode.
The figure shows that the higher order modes exhibit smaller scale structures.
This is a direct result of the orthogonalization process. The $(j+1)^{\rm th}$ mode has to be orthogonal to
all previous $j$ modes. In order to achieve that, the orthogonalized $(j+1)^{\rm th}$ mode picks up more of the small scale structure.
\begin{figure}
\centering
\includegraphics[ scale=0.45,angle=00]{Rs}
\caption{ The width of the gaussian smoothing window
versus galaxy redshift in the SFI++ sample. The scatter reflects the angular variations in the
density of galaxies. }
\label{fig:Rs}
\end{figure}
\begin{figure*}
\centering
\includegraphics[ scale=1.1 ,angle=00]{modes}
\caption{ The flow patterns of 9 modes for galaxies within $ 5h^{-1}\,{\rm Mpc}$ of the supergalactic plane. The order of the mode is indicated
in the corresponding panel. }
\label{fig:modes}
\end{figure*}
\subsection{The ITF scatter and the number of modes}
\label{sec:TheITF}
Once the basis functions for the modal expansion are given, we proceed to
solve for the coefficients $a^j$ in $P_i=\sum a^j F^j_i$ by minimizing $\chi^2_{\rm ITF}$ in (\ref{eq:chi}).
The minimization is also performed, at the same time, with respect to the slope and zero point of the ITF. The estimated slope is
$\gamma=-0.1297\pm 0.0015$ and $-0.13 \pm 0.0016$, respectively, for 20 and 30 modes used in the flow model.
The raw slope before fitting the model is $\gamma=-0.1267\pm 0.0016$. The zero point plays no role at all here and
we do not keep track of its estimated values.
All estimated parameters, including $a^j$, are independent of
the (assumed constant) intrinsic scatter $\sigma_{\eta,{\rm int}}$ in the ITF. The velocity model can be used to estimate the unknown value of $\sigma_{\eta,{\rm int}}$.
Given the residual
\begin{equation}
\Delta \eta_i=\eta_i - (\gamma M_{0i} +\gamma P_i -\eta_0)
\end{equation}
we approximate $\sigma_{\eta,{\rm int}}$ by
\begin{equation}
\sigma_\eta^2=\sum_{i} (\Delta \eta_i)^2/N_{_{\rm d.o.f}}
\end{equation}
where
\begin{equation}
N_{_{\rm d.o.f}}=N_{\rm g}-(j_{_{\rm m}}+2)
\end{equation}
is the number of degrees of freedom taking into account
that the minimization of $\chi^2_{\rm ITF}$ is done with respect to $j_{_{\rm m}}$ coefficients plus the slope and zero point of the ITF.
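Putting the three expressions above together, a minimal sketch of the scatter estimate (the function name and array layout are ours):

```python
import numpy as np

def itf_scatter(eta, M0, P, gamma, eta0, j_m):
    """Estimate the ITF scatter sigma_eta from the fit residuals.

    Implements Delta eta_i = eta_i - (gamma*M0_i + gamma*P_i - eta0)
    with N_dof = N_g - (j_m + 2): j_m mode coefficients plus the
    slope and zero point of the ITF.
    """
    d_eta = eta - (gamma * M0 + gamma * P - eta0)
    n_dof = eta.size - (j_m + 2)
    return np.sqrt(np.sum(d_eta**2) / n_dof)
```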
This $\sigma_\eta$ decreases as the number
of modes, $j_{_{\rm m}}$, in the expansion is increased.
If $j_{_{\rm m}}$ is too large then the higher order modes will be dominated by noise.
If $j_{_{\rm m}} $ is too small then the model may miss significant components of the underlying
true galaxy velocities.
The optimal range of $j_{_{\rm m}} $ for our comparison can be seen
by inspecting the behavior of $\sigma_\eta$ as a function of $j_{_{\rm m}}$.
The (blue) circles in the top panel of figure \ref{fig:chi_TF} show $\sigma_\eta $ versus the number of modes
for modes generated from the seed field $\Vs=\pmb{$g$}(\beta=0.2) $ (see \S\ref{sec:refine}).
Most of the reduction in $\sigma^2_\eta$ is already achieved by the first mode.
This is very encouraging since this means that $\pmb{$g$}$ picks up a significant
contribution of the velocities as described by the ITF data. It also means that both the 2MRS and the ITF data are likely to provide approximations to the underlying flow field.
However, the 2MRS predicted field $\pmb{$g$}$ deviates from the underlying field by the presence of correlated errors in the
reconstruction scheme.
The inclusion of additional expansion modes in the ITF velocity model absorbs these
errors. The average reduction in the variance $\sigma^2_\eta$ per mode becomes insignificant beyond $j_{_{\rm m}}=64$;
the average reduction per mode for the first 64 modes is 10 times larger than that for the next 75 modes.
An F-test also confirms that the reduction in the variance is marginal beyond $ j_{_{\rm m}}=64$.
Hence we will adopt $\sigma_{\eta,{\rm int}}=0.0558$, the value acquired by $\sigma_\eta$
for 64 modes in the velocity model. The F-test thus argues that $64$ is the maximum number of modes needed to
model the ITF.
The next step is to determine the minimum number of modes needed to describe the ITF flow assuming that
$\sigma_{\eta,{\rm int}}=0.0558$. To do so we tabulate $\chi^2_{\rm ITF}$ as a function of $j_{_{\rm m}}$ and
compute the probability $Q=Q(\chi^2_{\rm ITF}|N_{_{\rm dof}})$ that the value $ \chi^2_{\rm ITF} $ is exceeded by
chance \citep[c.f. \S6.2 in ][]{press}.
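The exceedance probability $Q$ is the regularized upper incomplete gamma function, available directly in SciPy:

```python
from scipy.special import gammaincc

def chi2_exceedance_prob(chi2, n_dof):
    """Probability Q that a chi-square value at least this large arises
    by chance: Q(chi^2 | N_dof) = Gamma(N_dof/2, chi^2/2) / Gamma(N_dof/2),
    the regularized upper incomplete gamma function (c.f. Press et al.).
    """
    return gammaincc(0.5 * n_dof, 0.5 * chi2)
```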
The values of $\chi^2_{\rm ITF}/N_{_{\rm dof}}$ and $Q$ are shown as the (blue) circles in the middle and bottom panels, respectively, of figure \ref{fig:chi_TF}.
For $j_{_{\rm m}}=64$, we get $\chi^2_{\rm ITF}/N_{_{\rm d.o.f}}=1$ corresponding to $Q=0.5$, in accordance with our choice of $\sigma_{\eta,{\rm int}}$.
Without a velocity model, i.e. $j_{_{\rm m}}=0$, we get $(\chi^2_{\rm ITF}/N_{_{\rm dof}}, Q) =(1.125, 3\times 10^{-6})$. This
exceedingly low $Q$
rejects a vanishing velocity field with very high confidence level (CL).
Including the first mode alone gives a highly significant improvement: $(\chi^2_{\rm ITF}/N_{_{\rm d.o.f}},Q)=(1.031, 0.12)$.
The hypothesis that the $\chi^2_{\rm ITF}$ value corresponding to the first mode is obtained by chance is
rejected only at the 0.12 CL. This is encouraging since the first mode velocity field is proportional to the
2MRS predicted velocities, $\pmb{$g$}$ (for $\beta=0.2$).
For $j_{_{\rm m}}=30$ and 20 we get $(\chi^2_{\rm ITF}/N_{_{\rm d.o.f}},Q)=(1.01, 0.29)$ and $ (1.014, 0.2)$, respectively.
For comparison with $\Vs=\pmb{$g$}$, the corresponding results for a random choice of the seed field, $\Vs$, are shown as the (red) crosses in the top and bottom panels of figure \ref{fig:chi_TF}.
With a random $ \Vs$, about 20 modes are
needed to reduce $\sigma_\eta$ to the level achieved by the single mode $\pmb{$g$}$.
\begin{figure*}
\centering
\includegraphics[ scale=0.8 ,angle=00]{mocks.ps}
\caption{ Scatter plots of $v_{_{\rm itf} }$ versus $\pmb{$g$}$ (expanded in 20 modes) for galaxies in 9 mock catalogs.
About 800 galaxies are plotted for each mock.}
\label{fig:mocks}
\end{figure*}
\begin{figure}
\centering
\includegraphics[ scale=0.8,angle=00]{chi_TF_Q}
\caption{ {\it Top:} The rms value, $\sigma_\eta$, of the ITF as a function of the number of modes used
in the velocity model. {\it middle:} $\chi^2_{\rm ITF}$ per degrees-of-freedom versus the number of modes.
It is unity at 64 modes.
{\it Bottom:} The probability that the
$\chi^2_{\rm ITF}$ exceeds a certain value by chance, as a function of the number of modes. The chi-square is computed assuming an intrinsic scatter $\sigma_{\eta,{\rm int}}=0.0558$. The value $Q=0.5$ is achieved at 64 modes. (see text for details)}
\label{fig:chi_TF}
\end{figure}
\begin{figure}
\centering
\includegraphics[ scale=.8 ,angle=00]{vvsigma6_beta035.ps}
\caption{ The peculiar velocities $ v_{_{\rm itf} }$ versus $\pmb{$g$}$ of galaxies in the real data (top) and the mocks (bottom)
for 20 and 70 expansion modes, as indicated in the figure.
The real $\pmb{$g$}$ has been reconstructed with $\beta=0.35$ and $\alpha=0$.
Velocities of about 400 randomly selected galaxies are plotted in each panel, where each mock is represented by about 25 galaxies. All velocities in the top panel are normalized by the rms value, $\sigma_g=233\ {\rm km\,s^{-1}}$ of $\pmb{$g$}$, while velocities
in the bottom panel are normalized by the
rms value of $\pmb{$g$}$ of their corresponding catalogs. }
\label{fig:vv}
\end{figure}
\subsection{What is the purpose of the ITF machinery?}
The expansion of the gravity field is conceptually very clean when computed in the LG frame (ND95).
The Poisson-like equation for the 3-D gravitational field has been solved as a sum over the spherical harmonic functions $Y_{lm}$ times 1-D functions of $r$ that satisfy physically reasonable boundary conditions at the origin. For the purposes of the ITF solution, we furthermore quantize the radial solutions with quantum number $n$.
The ITF method is backward from the usual methodology of TF applications; one does not fit curves to the scatter of peculiar velocities. Instead, the $\chi^2$ of equation \ref{eq:chi} is minimized by the addition of linear combinations of the orthonormal functions of $n,l,m$, each of which describes a large-scale flow that satisfies the boundary conditions. Furthermore, we have endeavored to form a first, ``seed'', mode based on the linear growth rate, but in which large scale graininess is filtered out by the use of the smoothing shown in figure \ref{fig:Rs}.
The individual galaxy's peculiar velocity enters by equation \ref{eq:P_i}, with differential
$d P_i \propto d u_i /z_i $, and since the uncertainty of peculiar velocity $u_i$ is proportional to redshift, the uncertainty of $P_i $ is redshift independent. This means that each object is given equal weight in a fit, and our window function is therefore equivalent to the display of figure \ref{fig:vitf}, which shows the positions of the SFI++ galaxies.
\section{Mock catalogs and error analysis}
\label{sec:mocks}
As a measure of the agreement between the TF and predicted velocities by means of the corresponding
expansion coefficients, $a_{_{\rm itf}}$ and $a_{_{\rm g}}$, we will consider the
$\chi^2$ function
\begin{equation}
\label{eq:chib}
\chi^2=\sum_{(j,k)=1}^{j_{_{\rm m}}} \left(a_{_{\rm g}}^j-a_{_{\rm itf}}^j \right)\left(\sigma_a^2+\xi_{_{\rm g}}\right)^{-1}_{j,k}\left(a_{_{\rm g}}^{k}-a_{_{\rm itf}}^{k} \right)
\; .
\end{equation}
The parameters $\alpha $ and $\beta$ will be obtained by minimization of this function.
The covariance of the residual, $a_{_{\rm itf}}-a_{_{\rm g}}$, is
the sum of the covariance matrices of the errors in the estimation of $a_{_{\rm itf}}$ and
$a_{_{\rm g}}$, respectively.
Thanks to the orthonormality of the basis functions, the error covariance in the estimation of $a_{_{\rm itf}}$ is diagonal with constant terms
$\sigma_a^2=\left(\sigma_\eta/\gamma\right)^2$ (see \S\ref{sec:itfd} and DNW96).
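Equation (\ref{eq:chib}) is a quadratic form in the coefficient residuals with covariance $\sigma_a^2 I + \xi_{_{\rm g}}$. A minimal sketch (the helper name is ours):

```python
import numpy as np

def mode_chi2(a_g, a_itf, sigma_a, xi_g):
    """chi^2 of equation (eq:chib): quadratic form of the residuals
    a_g - a_itf with covariance sigma_a^2 * I + xi_g.

    a_g, a_itf : (J,) expansion coefficients
    sigma_a    : scalar ITF coefficient error (diagonal part)
    xi_g       : (J, J) covariance of the a_g errors
    """
    r = a_g - a_itf
    C = sigma_a**2 * np.eye(r.size) + xi_g
    # solve instead of an explicit inverse for numerical stability
    return r @ np.linalg.solve(C, r)
```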
The matrix $\xi_{_{\rm g}}$ represents the covariance of the errors in the
determination of $a_{_{\rm g}}$.
The origin of these errors is as follows:
\begin{enumerate}
\item { Equation \ref{eq:phis} is expected to predict reliable velocity fields
only for small amplitude fluctuations.
Small scale nonlinear deviations from linear theory inevitably leak to large scales.}
\item{ The 2MRS is a finite number sampling of the underlying density field.
This leads to ``shot-noise" errors in the estimation of the density field.}
\item{ Small scale random motions of galaxies, especially in groups and
clusters, give rise to a smearing of the distribution of galaxies along the line of sight
in redshift space.}
\item{ There is a possible large scale stochastic biasing \citep{dh99,sb00,wpl05} between the galaxy distribution and the mass fluctuations.}
\end{enumerate}
The only way to achieve a reliable estimate of $\xi_{_{\rm g}}$ taking into
account all of these complicated errors is by means of mock catalogs designed to
match the general properties of the 2MRS.
A parent simulated catalog of the whole 2MASS catalog has already been
prepared \citep{delucia} by incorporating semi-analytic galaxy formation models
in the Millennium simulation \citep{mill}.
From this parent catalog we have drawn 15 independent mock 2MRS catalogues
satisfying the following conditions:
\begin{enumerate}
\item{ The ``observer" in each mock is selected to reside in a galaxy with
a quiet velocity field within $500$ km/s, similar to the observed universe. That is, the central observer sees only one cluster that has high enough peculiar velocities to result in negative redshifts. Recall that in the LG frame, the only galaxies with negative redshift are in the Virgo cluster. }
\item{ The motion of the central galaxy is 500 to 700 km/s. }
\item{ The density in the environment of the local group, averaged over a sphere of radius 400 km/s, is less than twice the mean. }
\end{enumerate}
Corresponding mock ITF catalogs were also prepared.
A counts-in-cells statistic shows that the distribution of galaxies in the mocks is
unbiased relative to the dark matter, i.e. $b=1$.
The preparation of the mocks for velocity reconstruction is done in the
same way as the real data.
Equation (\ref{eq:phis}) is used to generate prediction of mock $\pmb{$g$}$ with $\beta=f(\Omega,\Lambda)/b =0.47$ corresponding to $b=1$ and
$\Omega=0.25$ and $\Lambda=0.75$
as in the Millennium simulation.
The mean of the rms values of $v_g$ in the mocks is $269\ {\rm km\,s^{-1}}$ and the standard deviation from
this mean is $56 \ {\rm km\,s^{-1}}$.
For contrast, the rms value of $v_g$ derived from the 2MRS with $\beta=0.35$ is
$233 \ {\rm km\,s^{-1}}$.
To better illustrate the covariance between the residuals in the mocks, we plot in
figure \ref{fig:mocks} the velocities $v_{_{\rm itf} }$ versus $\pmb{$g$}$ for 9 individual mocks. These are velocities expanded with 20 modes.
Note the similarity between the structure of the distribution of points in the individual panels and
the top panel in figure \ref{fig:vv}, showing $v_{_{\rm itf} }$ versus $\pmb{$g$}$ (with $\beta=0.35$) for the real data.
\subsection{The error covariance matrix}
The covariance matrix $\xi_{_{\rm g}}$ is computed from the 15 mocks by
projecting the correlation function, $\xi_{_{\rm P}}$, of the residuals
$\Delta P=P_{\rm itf} -P_{\rm g}$ onto the basis functions, where
$P_{\rm itf}=5{\rm log}(1-v_{_{\rm itf} }/cz)$ and correspondingly
for $P_{\rm g}$. That is
\begin{equation}
\label{eq:xiaa}
\xi_{_{\rm g}}(j,k)=<(a_{_{\rm itf}}^j-a_{_{\rm g}}^j)(a_{_{\rm itf}}^k-a_{_{\rm g}}^k)>=\sum_{i,i'} F^j_i \xi_{_{\rm P}}(i,i') F^k_{i'} \; ,
\end{equation}
where the summation in the last term on the right-hand side is over all data galaxies and
$F$ are the basis functions used for the real data.
In this calculation, the $v_{_{\rm itf} }$ velocities are reconstructed from an ITF relation without adding the intrinsic scatter of the TF relation.
The reason is that the error in $a_{_{\rm itf}}$ resulting from the intrinsic scatter has a simple analytic form
given by $\sigma_a$.
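The projection of equation (\ref{eq:xiaa}) is a congruence transform of the pairwise correlation matrix onto the basis functions. A sketch (names and layout are ours):

```python
import numpy as np

def project_covariance(xi_P, F):
    """Project the residual correlation function xi_P(i, i') onto the
    basis functions to get the coefficient covariance xi_g(j, k):
    xi_g = F^T xi_P F, as in equation (eq:xiaa).

    xi_P : (N, N) pairwise correlation of Delta P
    F    : (N, J) basis functions
    """
    return F.T @ xi_P @ F
```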
The function $\xi_{_{\rm P}}$ is computed from the 15 mocks as follows.
Denote line-of-sight and projected separations in redshift space by, $s_\parallel$ and $s_\perp$, respectively.
For each mock we tabulate the average $ <\Delta P_1\Delta P_2> $ over pairs
with separations defined by the grid.
We then normalize this quantity by the variance of
$v_{_{\rm itf} }$ (for zero ITF intrinsic scatter) in the corresponding mock.
This is reasonable since the rms values of the velocity field vary considerably among the mocks and some of them
are significantly different than the real data. To minimize this cosmic variance and to derive $\xi_{_{\rm g}} $ given the observed
rms value of the velocity, this normalization of $ <\Delta P_1\Delta P_2> $ for each mock is prudent.
The average over all mocks is then computed and interpolated from the grid onto the
actual pair separations in the TF catalog to obtain the normalized $\xi_{_{\rm P}}$.
The normalized covariance matrix $\xi_{_{\rm g}}$ is then computed according to (\ref{eq:xiaa}) and
scaled by a factor matching the velocity variance estimated from the observed $v_{_{\rm itf} }$.
\section{The reconstructed velocities}
\label{sec:results}
This section presents
a visual inspection of the fields and assesses the coherence of the residual $v_{_{\rm itf} } - \pmb{$g$}$ by means of
a velocity correlation analysis.
The quantification of the agreement between the fields and the extraction of constraints on $\alpha$ and $\beta$ will
be deferred to later sections.
\begin{figure*}
\centering
\includegraphics[ scale=0.6,angle=00]{12plots.400.eps}
\vspace{0pt}
\caption{ The derived peculiar velocities $v_{_{\rm itf} }$, $\pmb{$g$}$, and $v_{_{\rm itf} } -\pmb{$g$}$ of galaxies on aitoff projections on the sky in galactic
coordinates. The rows correspond to galaxies with $cz<2000$, $2000<cz<4000$,
$4000<cz<6000\ {\rm km\,s^{-1}}$ and $6000<cz<10000$ km/s, respectively. The size of the symbols is linearly proportional to the velocity amplitude (see key to the size of the symbols given at the bottom of the figure). In order to better see the differences, a 400 km/s dipole, in the direction of the CMB dipole, has been subtracted from the $ v_{_{\rm itf} }$ and $\pmb{$g$}$ velocities. Note that $v_{_{\rm itf} } -\pmb{$g$}$ is considerably smaller than $v_{_{\rm itf} } $ or $\pmb{$g$}$, even for the most distant galaxies.}
\label{fig:vitf}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[ scale=.65 ,angle=90]{v_sg}
\vspace{10pt}
\caption{Supergalactic plane projection, $|SGB| <30$, of the derived flows. To better see the differences in the plots, a dipole of 400 km/s in the direction of the CMB dipole has been subtracted from the fields, and is shown in the bottom left and bottom center. A quadrupole velocity pattern is now visible in the plots. The points are drawn at the estimated distance of an SFI++ galaxy, and the line, blue or red, is drawn to the galaxy's redshift. In other words, the length of the arrow is the peculiar velocity. The lower right plot shows $v_{_{\rm itf} }-\pmb{$g$}$ for a mock catalog, and the upper right shows $v_{_{\rm itf} }-\pmb{$g$}$ for the data. They have very similar degrees of coherence.}
\label{fig:6fig}
\end{figure*}
\subsection{Visual inspection of the flows}
Blue dots and red crosses of figure \ref{fig:vv} show velocities expanded in 20 and 70 modes, respectively.
In the bottom panel, $v_{_{\rm itf} }$ versus $\pmb{$g$}$ from the mocks are shown. In the mocks, $v_{_{\rm itf} }$ velocities are obtained from a fake ITF relation with
an intrinsic slope $\gamma=-0.1$ and a scatter with $\sigma_\eta=0.05$.
For the sake of clarity only 400 randomly selected galaxies are shown in either panel.
Further, each mock is represented by about 25 galaxies (randomly selected).
The velocities are scaled by the corresponding
rms value of $\pmb{$g$}$ in the corresponding catalog.
There is an excellent overall agreement between $v_{_{\rm itf} }$ and $\pmb{$g$}$ for 20 modes, both in the real data and the mocks. A good agreement prevails even for 70 modes despite the clearly enhanced noise contamination.
In the real data, the rms of $v_{_{\rm itf} }-\pmb{$g$}$ for 20 expansion modes is $99\ {\rm km\,s^{-1}}$, significantly smaller than $\sigma_g$.
For 70 modes the rms values of $\pmb{$g$}$ and $v_{_{\rm itf} }-\pmb{$g$}$ are $238\ {\rm km\,s^{-1}} $
and $231\ {\rm km\,s^{-1}}$, respectively. Both panels show clear structures in the
distribution of points, implying strong covariance between the residuals, $v_{_{\rm itf} } -\pmb{$g$}$, in the real, or mock, data.
Because the bottom panel represents
random selections of galaxies from all the mocks, the covariance pattern between the velocities is diluted in the distribution of points. The covariance pattern is, however, clear in figure \ref{fig:mocks} where scatter velocity plots for a few mocks are shown individually.
In the aitoff projections in Figure \ref{fig:vitf} we plot the TF peculiar velocities, $v_{_{\rm itf} }$ and the derived gravity modes, $\pmb{$g$}$, for galaxies
in redshift shells, $cz<2000$, $2000<cz<4000$, $4000<cz<6000$, and $6000<cz<10000\ {\rm km\,s^{-1}}$. The projections are in galactic coordinates centered on $l,b=0$ and with $b=90$ at the top.
Figures \ref{fig:vitf} and \ref{fig:6fig} show $\pmb{$g$}$ with $\beta=0.35$.
The rightmost plots are the residuals $v_{_{\rm itf} }-\pmb{$g$}$. The key point is to note that the residuals are small for the entire sky and have amplitude that is constant with redshift.
The amplitude and coherence of the residuals $v_{_{\rm itf} }-\pmb{$g$}$ is the same as for the mock catalogs
in figure \ref{fig:6fig},
where for example the lower right picture shows $v_{_{\rm itf} }-\pmb{$g$}$ for a mock catalog. It is not very dissimilar from the real plot of $v_{_{\rm itf} }-\pmb{$g$}$ in the upper right, demonstrating the feasibility of the entire method.
Note the quadrupole pattern for $cz<4000\ {\rm km\,s^{-1}}$ in figures \ref{fig:vitf} and \ref{fig:6fig}, visible after a $400\ {\rm km\,s^{-1}}$ dipole has been subtracted from the flow. This has been previously noted by \cite{hht07} on the basis of the flows detected in 133 SNe. The quadrupole is the typical pattern observed in N-body simulations and is the principal mode of collapse to a 1-D structure.
There is remarkably good overall agreement between the large scale motions
as described by $v_{_{\rm itf} }$ and $\pmb{$g$}$. The residual velocities are coherent over large scales but they are clearly of smaller amplitude than $v_{_{\rm itf} }$ and $\pmb{$g$}$.
Note that the residuals $v_{_{\rm itf} }-\pmb{$g$}$, particularly visible in the shell $6000<cz<10,000\ {\rm km\,s^{-1}}$, are dominated by $l=4$, because the fit for the reduction of the TF $\chi^2$ is limited to $l=3$ modes.
\begin{figure}
\centering
\includegraphics[ scale=0.5,angle=00]{Psiv.ps}
\vspace{10pt}
\caption{{\it Top: } The velocity correlation of the real data and 15 mock catalogs.
The dashed red and solid red curves are, respectively, the correlations of $v_{_{\rm itf} }$ and $v_{_{\rm itf} } - \pmb{$g$}$ in the real data.
The blue lines are the correlations of $v_{_{\rm itf} }-\pmb{$g$}$ for the individual mock catalogs. {\it Bottom:} Velocity correlations for 15 mock catalogs. The red curves show
the correlation of $v_{itf}$, the dot-dashed curves show the correlation of $(v_{true}-v_{g})$, and the blue curves correspond to $(v_{true}-v_{itf})$.
Both $v_{true}$ and $v_{g}$ are first smoothed with the 20 mode expansion before the autocovariance is computed. Note that the correlation of $v_{_{\rm itf} }-\pmb{$g$}$ is only slightly worse than the correlation of $v_{true}-v_{g}$, showing that the velocity reconstruction dominates the errors. Note also that we are plotting the square root of the velocity correlation $\Psi$. }
\label{fig:psiv}
\end{figure}
\subsection{Correlations}
The residuals, both in the real and mock data, have error fields, $ v_{_{\rm itf} } - \pmb{$g$}$, that show large regions of coherence. To address the significance of these errors, we show in figure \ref{fig:psiv} the velocity correlation function, \citep{gd89}, defined as
\begin{equation}
\label{eq:Psiv}
\Psi(s; {u})=\frac{\sum_{pairs} u_1u_2 {\rm cos}\theta_{12}}{\sum_{pairs} {\rm cos}^2\theta_{12}}
\end{equation}
where the sum is over all pairs, 1 and 2, separated by vector distance $\bf s_{12}$
(in redshift space), $\theta_{12}$ is the angle between the position vectors of points $1$ and $2$, and $u$ is either $v_{_{\rm itf} }$
(dashed red)
or $v_{_{\rm itf} } - \pmb{$g$}$ (red for data, blue for 15 mock catalogs).
At small lags for the real data, the function $\Psi(s; {v_{_{\rm itf} }-\pmb{$g$}})$ is a factor of 3 less than
$\Psi(s;{ v_{_{\rm itf} }})$, about the same as for the mock catalogs. Note how the large coherence of $v_{_{\rm itf} }$ is enormously diminished in
$\Psi(s>2000 {\rm km/s};{v_{_{\rm itf} }-\pmb{$g$}})$. This shows that the coherence seen in the residual field,
figure \ref{fig:psiv}, is expected and is not a problem. The large scale drift of a sample is demonstrated by the persistent amplitude of $\Psi$ beyond $\approx 60-80$ Mpc.
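The estimator of equation (\ref{eq:Psiv}) can be sketched as a pair sum binned in separation (the binning convention and the use of unit line-of-sight vectors are assumptions; names are ours):

```python
import numpy as np

def velocity_correlation(pos, u, s_bins):
    """Binned velocity correlation Psi(s) of equation (eq:Psiv):
    sum over pairs of u1*u2*cos(theta12) / sum over pairs of cos^2(theta12),
    with theta12 the angle between the two lines of sight.

    pos    : (N, 3) redshift-space positions
    u      : (N,) radial peculiar velocities
    s_bins : (B+1,) separation bin edges
    """
    rhat = pos / np.linalg.norm(pos, axis=1, keepdims=True)
    cos12 = rhat @ rhat.T                        # cos(theta12), all pairs
    s = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
    uu = np.outer(u, u) * cos12
    iu = np.triu_indices(len(u), k=1)            # count each pair once
    num, _ = np.histogram(s[iu], bins=s_bins, weights=uu[iu])
    den, _ = np.histogram(s[iu], bins=s_bins, weights=(cos12**2)[iu])
    with np.errstate(invalid="ignore", divide="ignore"):
        return num / den
```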
The bottom panel of figure \ref{fig:psiv} shows velocity correlations for 15 mock catalogs where the actual velocity, $v_{true}$, generated in the N-body code and then smoothed with the 20 mode expansion, can be compared to either $v_{_{\rm itf} }$ or $\pmb{$g$}$. Note that the raw velocities, $v_{itf}$ (red), have an enormous correlation that reaches large lags, while the correlations of $(v_{true}-v_{_{\rm itf} })$ (blue) are extremely small. This is because the only difference with $v_{true}$ is the gaussian error of $\Delta\eta = 0.05$ that affects $v_{_{\rm itf} }$. The blue curves show this error is not a problem, because the mode expansions are insensitive to gaussian noise in the 2500 galaxies, i.e. they are essentially perfect. This demonstrates that even though the TF noise is as large as for the actual data, the ability to find the correct flow, when characterized by only 20 numbers, is intact.
Note also that the auto-covariance of $(v_{true} - \pmb{$g$})$ (dot-dashed curves) is greatly reduced from that of $v_{_{\rm itf} }$. Recall that $\pmb{$g$}$ assumes linear theory estimated from the distribution of $\sim 20{,}000$ galaxies. Occasionally the correlations are badly mistaken, when a large cluster (much larger than Virgo) is in the foreground and complicates the difference between physical and redshift space separations, but overall $\pmb{$g$}$ is an excellent approximation to the TF velocity.
\begin{figure}
\centering
\includegraphics[ scale=0.78 ,angle=00]{chi_2030beta.ps}
\caption{The difference $\Delta \chi^2=\chi^2-\chi^2_{\rm min} $ versus $\beta$ computed for three values of $\alpha$, as
indicated in the figure. Horizontal lines mark the 68\% and 95.4 \% CLs. }
\label{fig:chibeta}
\end{figure}
\begin{figure}
\centering
\includegraphics[ scale=0.5,angle=00]{betaj.ps}
\caption{The thick solid red line is the best fit $\beta$ as a function of the number of modes in the velocity model, for $\alpha=0$.
The thin blue lines mark the $95\%$ CL on the best fit.
The circles denote the ``differential" best fit $\beta$ obtained with the single $j^{th}$ mode. After the first 4 modes,
only 1 in 4 modes is
represented. The errorbars attached to the circles
correspond to $95\%$ CLs on the differential best fit. }
\label{fig:betaj}
\end{figure}
\begin{figure}
\centering
\includegraphics[ scale=0.4 ,angle=00]{chi_2030alpha_beta_one.ps}
\caption{ Contour plot of $\Delta \chi^2$ in the plane of $\alpha$ and $\beta$. The contours are 2.3, 6.17 and 9.2 corresponding to CLs of 68\% 95.4\% and 99\%, respectively. }
\label{fig:chiab}
\end{figure}
\section{The Constraints on $\alpha$, $\beta$}
\label{sec:const}
Equipped with the error covariance matrices, we proceed to
minimize $\chi^2$ in equation (\ref{eq:chib}) with respect to $\alpha$ and $\beta$. We shall present
detailed results for $\alpha$ and $\beta$ for fields expanded in 20 modes and 30 modes.
The minimization is done by computing $\chi^2$ on a grid of values in the plane $\alpha$ and $\beta$.
At the minimum point $\chi^2=\chi^2_{\rm min}=21.5$ which is very reasonable given that
the standard deviation from the expected value of 22 (20 mode coefficients plus TF slope and zero point) is $\sim \sqrt{44}\approx 7$ \citep{press}.
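A minimal sketch of the grid minimization, assuming for illustration a quadratic $\chi^2$ surface with invented widths (the real surface of course comes from equation (\ref{eq:chib}) and the mode coefficients):

```python
import numpy as np

# Grid in the (alpha, beta) plane; the beta range matches the bounds used in the text.
alpha = np.linspace(-1.0, 1.0, 201)
beta = np.linspace(0.05, 0.7, 201)
A, B = np.meshgrid(alpha, beta, indexing="ij")

# Stand-in quadratic chi^2 surface (widths 0.25 and 0.04 are invented for the sketch).
chi2 = 21.5 + (A / 0.25) ** 2 + ((B - 0.33) / 0.04) ** 2

# Locate the minimum on the grid and form Delta chi^2.
i, j = np.unravel_index(np.argmin(chi2), chi2.shape)
chi2_min = chi2[i, j]
dchi2 = chi2 - chi2_min

# Joint two-parameter confidence regions: dchi2 < 2.3 (68%), 6.17 (95.4%), 9.2 (99%).
inside_68 = dchi2 < 2.3
beta_lo, beta_hi = B[inside_68].min(), B[inside_68].max()
```

The quoted CL boundaries are then read off as the extent of each $\Delta\chi^2$ region along the parameter of interest.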
Figure \ref{fig:chibeta} shows the difference $\Delta \chi^2=\chi^2 -\chi^2_{\rm min}$ as a function
of $\beta$ for three fixed values of $\alpha$, as indicated in the figure.
The horizontal lines indicate 68\% and 95\% confidence levels (CLs).
Figure \ref{fig:chiab} is a contour plot of $\Delta \chi^2$ in the plane of $\alpha$ and $\beta$.
The higher frequency basis functions should probe smaller scales.
Hence, if our assumption of linear bias which is independent of scale is valid then
varying the number of modes, $j_{_{\rm m}}$, should yield consistent constraints on $\beta$.
Figure \ref{fig:betaj} shows the best fit $\beta$ (thick red curve) and the corresponding $95\%$ (2$\sigma$) CLs (thin blue solid lines) versus the number of modes in the expansion, for $\alpha=0$.
The circles show the ``differential'' best fit $\beta$ obtained from a single mode as a function of the order of
the mode. The $2\sigma $ errorbars on this differential $\beta$ are significantly enhanced beyond the
second order mode. A few points lie at the ends of the errorbars corresponding to best fit $\beta$
obtained at either $0.05$ or $0.7$ which are the bounds of the range of $\beta$ values used
in the 2MRS reconstruction.
There is a hint that $\beta$ declines with increasing $j_{_{\rm m}}$ but this is completely dominated by the
noise.
This figure shows clearly that we get consistent constraints on $\beta$ (the red curve) when varying the
number of modes in the expansion. Further, it shows that most of the signal is contained
the very few first modes.
\section{Discussion}
\label{sec:disc}
The analysis reported here demonstrates a good match between the 2MRS predicted and TF observed velocities.
The analysis is unique in several respects.
Firstly, it completely avoids dealing with covariance matrices of errors in the
velocities estimated from the TF sample.
Secondly, it uses elaborate mock galaxy catalogues to compute the error covariance in the predicted velocities from the redshift survey.
Thirdly, the TF and predicted velocities are filtered in a very similar fashion, taking special care to minimize the effects of noise in the comparison.
In this analysis faint galaxies with $M>-20 $ are excised from the TF catalog since they systematically show strong deviations from the linear TF relation.
The comparison yields $\beta=f/b=0.33\pm 0.04$ ($1\sigma$ error).
The quoted error is not actually far from the limit of what the current data can constrain in the
absence of any errors on the 2MRS predicted velocities
\footnote{In the absence of gravity errors and for $\beta$ close to the best fit value $\beta_{_{\rm 0}}$, the $\chi^2$ function is approximated as
\[ \chi^2= \sum_j[a_{_{\rm itf}}^j -B(\beta) a_{_{\rm g 0}}^j ]^2/\sigma_a^2\] where $a_{_{\rm g 0}}^j$ correspond to reconstruction
with $\beta=\beta_{_{\rm 0}}$ and $B=(\beta/\beta_{_{\rm 0}})(1+1.5\beta_{_{\rm 0}})/(1+1.5\beta)$ approximates the
dependence of $\pmb{$g$}$ on $\beta$ (in contrast to the dependence $\beta/\beta_{_{\rm 0}}$ in reconstruction from galaxy distribution in real space).
The $1\sigma $ error in $B$ is $\sigma_{\rm a}/\sqrt{\sum_j (a_{_{\rm g 0}}^j)^2}\approx 0.06$, where we took $\sigma_{\rm a}=\sigma_\eta/\gamma=0.43$ and $ \sum_j (a_{_{\rm g 0}}^j)^2=52 $ as given from the solution with $\beta=0.35$. This error in $B$ translates into
an error of $0.03$ in $\beta$ which is close to the error obtained with the full analysis.}.
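The footnote's arithmetic is easy to check numerically, using only the values it quotes ($\sigma_a=\sigma_\eta/\gamma=0.43$, $\sum_j (a_{_{\rm g 0}}^j)^2=52$, $\beta_{_{\rm 0}}=0.35$):

```python
import math

sigma_a = 0.43    # sigma_eta / gamma, from the footnote
sum_a2 = 52.0     # sum_j (a_g0^j)^2 for the beta = 0.35 solution
beta0 = 0.35

# 1-sigma error in the fitted amplitude B.
sigma_B = sigma_a / math.sqrt(sum_a2)

# dB/dbeta at beta = beta0, with B = (beta/beta0)(1 + 1.5*beta0)/(1 + 1.5*beta);
# differentiating gives dB/dbeta|_{beta0} = 1 / (beta0 * (1 + 1.5*beta0)).
dB_dbeta = 1.0 / (beta0 * (1.0 + 1.5 * beta0))

# Propagate to an error in beta.
sigma_beta = sigma_B / dB_dbeta
```

This reproduces the footnote's $\sigma_B \approx 0.06$ and $\sigma_\beta \approx 0.03$.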
A moderate reduction of the errors by
a factor of two requires a significant enlargement in the number of peculiar velocity measurements by a factor of 4, which
could be done if dedicated time is available, but the TF samples already use the best local galaxies.
Going to larger distance is not the answer, as the error of a peculiar velocity increases linearly with the distance, and the 2MRS density field becomes very dilutely sampled. Two surveys, WALLABY, to be undertaken by the ASKAP telescope in Western Australia, and ALFALFA, an ongoing project at Arecibo, will hopefully produce good TF data
for $cz<12000$ km/s.
Another strategy would involve peculiar velocities inferred from
more precise distance indicators than the TF relation. SNe and surface brightness fluctuations
techniques are likely candidates, but such measurements are available for a much smaller number of galaxies.
Larger samples of local SNe are turning out to have increased errors (Ganeshalingam, private communication), significantly larger than previously measured \citep[e.g.][]{r97}.
The good match between the gravity and velocity fields implies that they probe the same underlying
potential field within the framework of the gravitational instability paradigm for structure formation.
The agreement is achieved assuming a linear biasing relation between mass and galaxies on
large scales. Linear biasing is consistent with theoretical predictions \citep[e.g.][]{kn97} for the large scale clustering of galaxies. Further, no scale-dependent biasing seems to be required by the velocity comparison.
However, the theoretically expected scale dependence of the bias factor \citep[e.g.][]{pesm,dej10} is well below
the level which can be probed by the velocity comparison carried out here.
There is also no clear indication for a scale dependence bias from the observed
galaxy clustering on the relevant scales \citep[e.g.][]{ver2df}.
DNW96, which compared the predicted velocities from the {\it IRAS } 1.2 Jy redshift surveys and the MARK III Tully-Fisher data, revealed systematic discrepancies that could not be attributed to
errors in the data and the reconstruction methods.
Comparison of the flow fields obtained in the current work with
those presented in DNW96 (see their figures 9--13) clearly shows that the problem lies in the MARK III data set. The velocity fields predicted from the {\it IRAS}
and {2MRS} surveys have similar patterns which grossly deviate from
MARK III but are in accordance with SFI++.
The 2MRS has all the attributes that one would want for estimating the gravity field including a very weak bias. The survey was done by an instrument that was photometrically stable, which is important to avoid large scale drifts in the derived gravity field. The survey is far superior to the {\it IRAS} survey, the first full sky galaxy survey, which detected galaxies at $60\mu m$, a sign of star formation and not a good indicator of mass. However, that survey led to sensible results, and was not at fault for the disagreement 15 years ago.
\cite{radsm} compared the predicted velocities from the {\it IRAS PSCZ} survey with measured peculiar velocities of SNe. They found a best-fitting
$\beta_{_{IRAS}}=f/b_{_{IRAS}}= 0.55 \pm 0.06$. The lower value of $\beta$ derived in our work could be
due to a difference in the biasing factor between 2MRS and the {\it IRAS} galaxies, but we also emphasize
that our estimation of the error in the predicted velocities should be more reliable as it is based on realistic mock catalogs.
\cite{p05} performed a comparison of the 2MRS predicted velocities with
direct velocity measurements from three different samples, including 836 SFI++ galaxies
within $cz=5000 \ {\rm km\,s^{-1}}$. Their analysis yields $\beta=0.55 \pm 0.05$ for the comparison of gravity
with the SFI++. They derive $\Omega=0.55\pm 0.05$, inconsistent with our result at more than the 2.5$\sigma$ level. However, they did not calibrate their methods with advanced
mock catalogues, nor did they include the expected covariance of the
predicted velocities.
\cite{l10} employed a sophisticated version of the nonlinear MAK reconstruction method \citep{f02}
to compare the 2MRS predicted velocities with the 3K velocity catalog
\citep{t08} of 1791 galaxies with
redshifts $<3000\ {\rm km\,s^{-1}}$. They derive $\Omega=0.31\pm 0.05$, corresponding to $\beta \sim 0.52$. However, as they point out, their error analysis is incomplete. Their method is promising as
it takes into account nonlinear effects. Nevertheless, they do not account for the covariance of the
errors in their smoothed observed velocities and predicted
velocities.
Both \cite{p05} and \cite{l10} use iterative schemes based on \cite{y91} for
deriving the peculiar velocities from redshift surveys.
These schemes rely on a relation between the peculiar velocity and density
in real space.
At any iteration, this relation is solved for new peculiar velocities,
given real-space coordinates obtained from the observed redshifts by subtracting
the old peculiar velocities derived in the previous iteration.
We caution here that these schemes are intrinsically biased:
errors in the velocities used to estimate the distances will
yield a biased density field in real space (see \S\ref{sec:note} for details). Hence the
estimation of the velocity field is actually done from a biased distribution in real space.
The bias produces an undesired smoothing of density field along the radial direction.
The smoothing width (in $\ {\rm km\,s^{-1}}$) is equal to the rms random error in the
velocities $\sim 200-300 \ {\rm km\,s^{-1}}$ \citep{be02, NussBranch}. Therefore, the bias is more pronounced in nonlinear methods which aim at probing small scales.
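A toy numerical illustration of this radial smoothing (the shell distance, velocity error, and sample size below are invented; only the qualitative effect matters): galaxies at a single true distance get scattered along the line of sight by exactly the rms velocity error used to place them in real space.

```python
import numpy as np

rng = np.random.default_rng(1)

d_true = 5000.0     # km/s: a thin shell of galaxies at one true distance
sigma_v = 250.0     # km/s: rms random velocity error (the 200-300 km/s of the text)
n_gal = 20000

# Each galaxy's real-space position is estimated by subtracting a noisy
# peculiar velocity from its redshift, so the estimated distances scatter.
d_est = d_true + rng.normal(0.0, sigma_v, n_gal)

# The recovered radial density profile of the shell is smoothed with a
# width equal to sigma_v, even though the true profile is a delta function.
radial_spread = d_est.std()
recovered_center = d_est.mean()
```

The delta-function shell comes back as a Gaussian of width $\sim\sigma_v$ along the radial direction, which is exactly the bias described above.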
Checks to find the best way of estimating the gravity field did not lead to improvements. Weighting the galaxy maps by the 2MRS luminosity led to a worse agreement, and recall that 2MRS is selected in K band, which is closest to a measure of the stellar mass. Giving the elliptical galaxies double weight, as indicated by lensing analysis \citep{mskh06}, did not improve the agreement. This in itself is not too surprising, because on large scale the 2MRS survey is dominated by spiral galaxies. It seems that the galaxies brighter than $M_{*}+2$
are each surrounded by a dark matter halo that has the same mass on average. There is no hint that the dark matter mass is larger if the luminosity is increased.
Using our estimate for $\beta$ we can constrain the amplitude of mass
fluctuations. As a measure of the
amplitude we consider the rms of density fluctuations in spheres of $8h^{-1}\,{\rm Mpc}$ in radius, denoted
by $\sigma_8$ and $\sigma_{\rm 8g}$ for the mass and galaxy distributions, respectively.
Adopting $\Omega=0.266$ \citep{wmap7}
gives $f(\Omega,\Lambda=1-\Omega) = 0.483 $ \citep{lind05}.
Comparing this to our result $\beta=f/b=0.33\pm 0.04 $ ($1\sigma$ error) we get a bias factor $b=1.46\pm .20$
between the dark matter and the 2MRS galaxy distribution.
Taking $\sigma_{\rm 8g}=0.97\pm 0.05$ \citep{w09,rpe09}, yields $\sigma_8=\sigma_{\rm 8g}/b=0.65 \pm 0.11$
for the underlying mass density field, marginally consistent with the latest WMAP results
\citep{wmap7} of $\sigma_8=0.8\pm 0.03$ (see also
\cite{jaro10}).
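The bias factor and $\sigma_8$ follow from simple error propagation; the sketch below uses naive quadrature propagation of the quoted $1\sigma$ errors, which lands close to, though not exactly on, the quoted $\pm 0.20$ and $\pm 0.11$:

```python
f = 0.483                  # f(Omega=0.266, Lambda=1-Omega), from Linder's fit
beta, dbeta = 0.33, 0.04   # this work's result, 1-sigma
s8g, ds8g = 0.97, 0.05     # sigma_8 of the 2MRS galaxies

# Bias factor b = f / beta, with the error from beta alone.
b = f / beta
db = b * (dbeta / beta)

# sigma_8 of the mass: sigma_8g / b, with beta and sigma_8g errors in quadrature.
s8 = s8g / b
ds8 = s8 * ((dbeta / beta) ** 2 + (ds8g / s8g) ** 2) ** 0.5
```

Naive propagation gives $b \approx 1.46 \pm 0.18$ and $\sigma_8 \approx 0.66 \pm 0.09$, consistent with the rounded values in the text.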
\section{Conclusions}
We summarize the major conclusions of our work:
\begin{itemize}
\item{After a detailed examination of the 2MRS and SFI++ catalogs, we find the local gravity field to be a fine predictor of the local velocity field. Such a conclusion is a comfort for linear perturbation theory in an expanding universe and was certainly expected. It is interesting that the counts of galaxies give the best possible gravity field, reinforcing the old idea that the mass of the halo around a galaxy is not very well correlated with the luminosity of that galaxy.}
\item{ We see no evidence that the dark matter does not follow the galaxy distribution; the data are consistent with constant bias on large scales. There is no evidence for a non-linear bias in the local flows. A smooth component to the universe is not something testable with these methods.}
\item{ Linear perturbation theory appears to be adequate for the large scales tested by our method.}
\item{The solution favors $\alpha=0$, no correlation between luminosity and mass, and $\beta = 0.33 \pm 0.04$, which is consistent with, but more than twice as tight as, \cite{e06a}. Using the derived $\Omega$ from WMAP \citep{jaro10} leads to an estimate $\sigma_8=0.65 \pm 0.10$, deviating from WMAP's reports at the $1.5\sigma$ level.}
\item{Our estimate of $\sigma_8 $ gives the most precise value at $z\sim 0$ and is useful for tests of the growth rate and Dark Energy.}
\item{The velocity-gravity comparison measures the acceleration on scales up to $30 - 50$ Mpc, and since we derived a similar value of $\beta$ as for clusters of galaxies, we conclude that dark matter appears to fully participate in the clustering on scales of a few Megaparsecs and larger.}
\item{We find no evidence for large-scale flows such as reported by, for example,
\citet{hs99,he97,feldwh10}. Note that our analysis has not used the CMBR dipole, but we see a velocity field that is fully consistent with those previously reported \citep{e06, e06a, el09}, which are consistent with the CMBR dipole radiation. We see no evidence that the dipole in the CMBR is produced by anything other than our motion in the universe.}
\end{itemize}
\section{Acknowledgments}
We are all grieving over the untimely death of John Huchra, an old dear friend who first collaborated on papers describing the CfA1 survey. We are indebted to him for his contribution to this research, including being responsible for completion of the 2MRS and for pushing the observations to low galactic latitude.
We thank Enzo Branchini for a careful reading of the manuscript. MD acknowledges the support provided by the NSF grant AST-0807630.
The majority of this research was carried out at the MPA, Garching, when MD was supported by a Humboldt fellowship. AN thanks the MPA for the hospitality.
This work was supported by THE ISRAEL SCIENCE FOUNDATION (grant No.203/09), the German-Israeli Foundation for
Research and Development, the Asher Space Research
Institute and by the WINNIPEG RESEARCH FUND.
KLM acknowledges funding from the Peter and Patricia Gruber Foundation
as the 2008 Peter and Patricia Gruber Foundation International
Astronomical Union Fellow, from a 2010 Leverhulme Trust Early Career
Fellowship and from the University of Portsmouth and SEPnet
(www.sepnet.ac.uk).
The Millennium Simulation databases used in this paper and the web
application providing online access to them were constructed as part of
the activities of the German Astrophysical Virtual Observatory.
\bibliographystyle{mn2e}
\section{Introduction}
Over the past 20 years, superconducting qubits have made great progress in both quantity and quality~\cite{Krantz}. With the increase of the number of qubits and the improvement of gate fidelity, superconducting circuits have emerged as a powerful platform for quantum simulation~\cite{Buluta,Houck,Georgescu} and also as a promising implementation of fault-tolerant quantum computation~\cite{Devoret,Campbell}. For typical superconducting transmon/Xmon qubits~\cite{Koch2007,Barends}, there are many proposals to realize two-qubit gates. One main class of proposals is implemented with frequency-tunable qubits with static capacitive couplings. By carefully tuning the frequencies of the qubits to bring the $|11\rangle$ state into resonance with the $|02\rangle$ state, a CZ gate with low leakage to higher energy levels has been implemented~\cite{Martinis}. But due to the static coupling, the idle frequencies of the two qubits must be set far away from each other to reduce the residual $ZZ$ coupling between $|11\rangle$ and $|02\rangle$ and to lower the error when simultaneously performing single-qubit gate operations. When the number of qubits in the system increases, the frequency crowding problem becomes worse. Superconducting qubits with a tunable coupler~\cite{Chen2014,YanF2018,Niskanen,Hime} have been studied as a proposal to solve these problems, and high-fidelity two-qubit gates have been implemented experimentally~\cite{sung2020,Foxen}.
In this work, we experimentally realize an extensible superconducting circuit with four tunable couplers and five transmon qubits, and implement a high-fidelity two-qubit CZ gate between the center qubit and one of the four nearest-neighbor qubits. Via optimized control, we demonstrate a two-qubit CZ gate with an average fidelity of 99.65\% in cross-entropy benchmarking~\cite{DgateXEB2019,XEB2018}. We also use speckle purity benchmarking (SPB)~\cite{Google-supermacy2019} to assess the average purity fidelity of the CZ gate, which is 99.69\%, giving a control error of 0.04\%. This result means that we can expand the design into a two-dimensional structure and realize two-qubit CZ gates between any qubit and any of its nearest-neighbor qubits. The design in this work and the proposals to realize the CZ gate may pave the way toward fault-tolerant quantum computation.
\begin{figure}[h]
\centering
\includegraphics[width=8.2cm]{Fig1.pdf}
\caption{ (a) Design of the qubit chip of the device. There are five transmon qubits ($\text{Q}_{1}$-$\text{Q}_{5}$) and four tunable couplers. The center qubit ($\text{Q}_{1}$) capacitively couples to the other four qubits. Each pair of nearest neighbours shares one coupler. Control lines and readout resonators are in the control chip, which is not shown here. (b) Schematic diagram of $\text{Q}_{1}$, $\text{Q}_{2}$ and their coupler in the qubit chip. $\text{C}_{qc}$ represents the capacitance between the coupler and the qubit and $\text{C}_{qq}$ represents the direct capacitance between the nearest qubits. Each qubit and coupler has a SQUID for individual Z control. In (a) false colors (green, yellow and red) are used to represent the corresponding components in (b).}
\label{image}
\end{figure}
To realize an extensible two-dimensional design, flip-chip technology has been applied in our device, and two chips are fabricated: a qubit chip for the qubit and coupler capacitive structures, and a control chip for qubit readout and qubit/coupler control. The design of the qubit chip and the schematic diagram of $\text{Q}_{1}$, $\text{Q}_{2}$ and their coupler are shown in Fig.~\ref{image}. There are 5 qubits and 4 couplers in the qubit chip. Each coupler has a fast Z bias control line in the control chip to implement Z control. The qubit frequencies can be tuned over a large range of several GHz by both a dc Z bias control line and a fast Z bias control line. Each qubit also has an inductive XY control line to implement single-qubit rotations. The readout resonators are separated into two groups: one group of resonators for $\text{Q}_{3}$ and $\text{Q}_{4}$ shares one readout transmission line, and the other group for $\text{Q}_{1}$, $\text{Q}_{2}$ and $\text{Q}_{5}$ shares one Purcell filter. Both groups of readout resonators have their signals amplified by Josephson parametric amplifiers (JPA)~\cite{JPA2014}. More experimental details of our device and system are presented in Appendix A and Appendix B.
In our device, the center qubit $\text{Q}_{1}$ can tunably couple to the other four qubits through direct capacitive coupling and indirect coupling via the corresponding coupler. Many quantum simulation proposals, and even some fault-tolerant quantum proposals, can be implemented if high-fidelity qubit gates are realized in this design and the design is then expanded. We therefore focus our attention on $\text{Q}_{1}$, $\text{Q}_{2}$ and their coupler. The other three qubits ($\text{Q}_{3}$, $\text{Q}_{4}$, $\text{Q}_{5}$) are always idled at frequencies below 4 GHz with their dc Z bias control lines, and the corresponding couplers are idled at their symmetric points before and during the experiment with their fast Z bias control lines, which we call the work-bias mode. This procedure lowers the unwanted effective coupling strength between $\text{Q}_{3}$, $\text{Q}_{4}$, $\text{Q}_{5}$ and $\text{Q}_{1}$, and reduces the influence on the readout of $\text{Q}_{1}$. After all this preparatory work, we can write an effective Hamiltonian of the three-body system as ($\hbar=1$)
\begin{equation}
\label{Hbh}
H=\sum_{i=\text{1,2,c}}(w_{i}\hat{b}_{i}^\dagger \hat{b}_{i}+\frac{\eta_{i}}{2}\hat{b}_{i}^\dagger\hat{b}_{i}^\dagger \hat{b}_{i} \hat{b}_{i})\
+\sum_{{i}\textless{j}}g_{ij}(\hat{b}_{i}^\dagger \hat{b}_{j}+\hat{b}_{j}^\dagger \hat{b}_{i}),
\end{equation}
where $\hat{b}_{i}^\dagger$ and $\hat{b}_{i}$ ($\text{i=1,2,c}$) are the raising and lowering operators of the corresponding oscillators. The anharmonicity and frequency of each oscillator are denoted by $\eta_{i}$ and $w_{i}$.
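For concreteness, the Hamiltonian of equation (\ref{Hbh}) can be built numerically in a truncated Fock space; the frequencies, anharmonicities and couplings below are placeholder values for a sketch, not the device parameters:

```python
import numpy as np

d = 3  # Fock levels kept per mode (Q1, Q2, coupler)
a = np.diag(np.sqrt(np.arange(1, d)), 1)  # lowering operator in the truncated space
I = np.eye(d)

def embed(op, which):
    """Put a single-mode operator on mode `which` of the 3-mode tensor product."""
    ops = [I, I, I]
    ops[which] = op
    return np.kron(np.kron(ops[0], ops[1]), ops[2])

# Placeholder parameters in angular frequency units (order: Q1, Q2, coupler).
w = 2 * np.pi * np.array([5.0, 4.9, 6.5])
eta = 2 * np.pi * np.array([-0.25, -0.25, -0.30])
g = {(0, 1): 2 * np.pi * 0.006, (0, 2): 2 * np.pi * 0.09, (1, 2): 2 * np.pi * 0.09}

b = [embed(a, k) for k in range(3)]

# H = sum_i (w_i b_i^dag b_i + eta_i/2 b_i^dag b_i^dag b_i b_i)
#   + sum_{i<j} g_ij (b_i^dag b_j + b_j^dag b_i)
H = sum(w[k] * b[k].conj().T @ b[k]
        + 0.5 * eta[k] * b[k].conj().T @ b[k].conj().T @ b[k] @ b[k]
        for k in range(3))
for (i, j), gij in g.items():
    H = H + gij * (b[i].conj().T @ b[j] + b[j].conj().T @ b[i])
```

Diagonalizing this matrix (e.g. with `np.linalg.eigh`) gives the dressed levels used to track the $|101\rangle$-$|200\rangle$ interaction during the gate.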
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{Fig3.pdf}
\caption{ (a) Waveforms corresponding to the highest fidelities for the three types of control waveforms. $\text{Q}_{2}$ is placed at the $|1\rangle \rightarrow |2\rangle$ transition frequency of $\text{Q}_{1}$ when idle. (b) Leakage to unwanted energy levels for the three types of waveforms.}
\label{leakage}
\end{figure}
We use the notation $|\text{Q}_{1},\text{Coupler},\text{Q}_{2}\rangle$ to represent the eigenstates of the system (Eq.~\eqref{Hbh}), where the coupler is placed at the frequency at which the effective $\text{Q}_{1}$--$\text{Q}_{2}$ coupling strength is nearly zero. To realize high-fidelity CZ gates, the whole Hamiltonian needs to be considered, especially the existence of the coupler. We perform numerical simulations of the three-body system with $\text{QuTiP}$~\cite{QuTiP1,QuTiP2}. Three types of control waveforms of the coupler energy level are used: square-shaped, Slepian-shaped~\cite{Martinis} and cosine-shaped control pulses. For simplicity, we do not consider the decoherence of the system and assume the coupling strengths $g_{ij}$ stay the same when the energy levels change. For each waveform type, with the same lowest coupler energy level $w_{c}$, we analyze the CZ gate fidelity as a function of the pulse length and the energy level of $\text{Q}_{1}$. In Fig.~\ref{leakage}(a), the waveforms corresponding to the highest CZ gate fidelities of the three types are plotted. The required pulse lengths differ: the square-shaped waveform needs the shortest, around 25 ns, the Slepian-shaped waveform needs about 30 ns, and the cosine-shaped waveform needs the longest, around 63 ns. The highest CZ gate fidelities also differ: the square-shaped waveform reaches about 99.4$\%$, while the other two waveforms can reach above 99.9$\%$. Given these fidelities and pulse lengths, the same waveforms with different pulse lengths are used to analyze the leakage to unwanted energy levels in Fig.~\ref{leakage}(b). The square-shaped waveform has the highest periodic leakage, up to the 0.1$\%$ level, which we believe is the reason why the CZ gate fidelity of this type of waveform is lower.
The leakage of the other two waveforms gradually decreases with increasing pulse length and can be lower than 0.01$\%$ when the pulse length is longer than 40 ns. Due to decoherence, we finally choose the Slepian-shaped waveform for our experiment.
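Of the three pulse shapes, the cosine-shaped one is the simplest to write down (the Slepian shape requires a spectral optimization); a sketch with an arbitrary unit amplitude and the $\sim$63 ns length quoted above:

```python
import numpy as np

def cosine_pulse(t, length, amp):
    """Cosine-shaped coupler excursion: zero at both ends, maximal at the
    midpoint, with a continuous first derivative (hence low spectral leakage)."""
    t = np.asarray(t, dtype=float)
    env = 0.5 * (1.0 - np.cos(2.0 * np.pi * t / length))
    return np.where((t >= 0.0) & (t <= length), amp * env, 0.0)

t = np.linspace(0.0, 63.0, 631)       # ns, sampled at 0.1 ns
z = cosine_pulse(t, 63.0, 1.0)        # unit amplitude for the sketch
```

The square pulse, by contrast, has discontinuous edges, which is consistent with its periodic leakage pattern in Fig.~\ref{leakage}(b).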
In our experiment, we have several differences from the numerical simulation. The first difference is that we compensate the frequency shifts of the qubits when the coupler bias changes. Actually, to avoid two-level systems, the frequencies of the qubits when performing CZ gates need to be tuned precisely, especially in many-body systems. The compensation is achieved by first measuring the qubit frequency as a function of the qubit flux bias and then as a function of the coupler bias. The uncalibrated and calibrated qubit frequencies versus coupler flux bias are shown in Fig.~\ref{calif01}. The calibrated frequencies are almost completely independent of the coupler bias.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{Fig4.pdf}
\caption{ (a) The frequency of $\text{Q}_{1}$ as a function of the coupler flux bias before and after calibration while $\text{Q}_{2}$ is far detuned. We compensate the frequency shift, which ranges from 0 to -30 MHz, by tuning the flux bias of $\text{Q}_{1}$, and then sweep the microwave drive frequency and measure the qubit excited state probability. The drive frequency range is based on the original qubit frequency where the coupler is placed at the frequency at which the effective coupling strength between $\text{Q}_{1}$ and $\text{Q}_{2}$ is nearly zero. (b) The frequency of $\text{Q}_{2}$ as a function of the coupler flux bias before and after calibration while $\text{Q}_{1}$ is far detuned.}
\label{calif01}
\end{figure}
The second difference is that the fast Z bias pulses are distorted when reaching the qubits. This mismatch is corrected by performing deconvolution on the ideal pulse sample data~\cite{Yan}. The third difference is that dc control is not used for the couplers, $\text{Q}_{1}$ and $\text{Q}_{2}$, which means their frequencies are not always placed at the idle frequencies. In our experiments, the coupler and qubits needed are placed at their original frequencies with zero flux bias most of the time; they are detuned to the idle frequencies several microseconds before and during the experiment with their fast Z bias control lines, and finally detuned back to their original frequencies after the readout pulses end. This control method can reduce the number of control lines in large systems and is thus useful. We cannot directly measure the coupler frequency as a function of the coupler flux bias due to the lack of a readout resonator for the coupler. So the fourth difference is that we simply use the Slepian-shaped pulse. The relationship between the $XX$ coupling strength and the coupler flux bias is measured, and the fast Z control in our experiment is then expressed in terms of the coupling strength. The last difference is that $\text{Q}_{2}$'s idle frequency is set around 50 MHz away from the $|1\rangle \rightarrow |2\rangle$ transition frequency of $\text{Q}_{1}$.
\begin{figure}[b]
\centering
\includegraphics[width=\linewidth]{cphase_and_leakage.pdf}
\caption{ (a) Schematic of measuring leakage from $|101\rangle$ and the experimental data as a function of $XX$ coupling strength and $\text{Q}_{2}$ detune frequency. (b) Schematic of a Ramsey-type experiment measuring the conditional phase and the experimental data. Red star in (a) and (b) represents rough CZ point. (c) Delicate measurement of the leakage as a function of $XX$ coupling strength and the number of CZ gates. (d) Delicate measurement of the conditional phase angle ($\phi_{CZ}-(N_{CZ}-1)\times180\degree$) as a function of $\text{Q}_{2}$ detune frequency and the number of CZ gates. }
\label{cphase}
\end{figure}
We calibrate the CZ gate by adjusting the Z control amplitudes for a fixed gate length (45 ns) and measuring the conditional phase angle and the leakage from $|101\rangle$. To measure the leakage from $|101\rangle$, we first perform $X$ gates on both qubits and then measure the state population of $|2i0\rangle$ ($i=\text{0,1,2}$) after a CZ gate (Fig.~\ref{cphase}(a)), since we cannot measure the state of the coupler. To measure the conditional phase angle, we perform a Ramsey-type experiment in Fig.~\ref{cphase}(b). The red star in Fig.~\ref{cphase}(a) and (b) represents the rough optimal point for the CZ gate, which has both low leakage and an accurate conditional phase angle. To obtain a more delicate coupler flux bias, we fix the $\text{Q}_{2}$ detune frequency and measure the leakage as a function of the $XX$ coupling strength and the number of CZ gates in Fig.~\ref{cphase}(c). After the optimal $XX$ coupling strength is determined, we measure the conditional phase angle as a function of the $\text{Q}_{2}$ detune frequency and the number of CZ gates in Fig.~\ref{cphase}(d).
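The conditional-phase extraction from the two Ramsey fringes can be sketched as follows, with idealized noiseless fringes standing in for the measured populations (the single-qubit phase offset 0.2 rad is invented):

```python
import numpy as np

def fringe_phase(angles, p1):
    """Least-squares phase of a fringe P(theta) = 0.5 + 0.5*cos(theta + phi)."""
    c = np.sum((p1 - 0.5) * np.cos(angles))
    s = np.sum((p1 - 0.5) * np.sin(angles))
    return np.arctan2(-s, c)

theta = np.linspace(0.0, 2.0 * np.pi, 50, endpoint=False)

# Simulated target-qubit fringes with the control qubit prepared in |0> and |1>;
# an ideal CZ gate shifts the fringe by exactly pi.
p_ctrl0 = 0.5 + 0.5 * np.cos(theta + 0.2)
p_ctrl1 = 0.5 + 0.5 * np.cos(theta + 0.2 + np.pi)

cphase = (fringe_phase(theta, p_ctrl1) - fringe_phase(theta, p_ctrl0)) % (2.0 * np.pi)
```

The conditional phase is the difference of the two fitted fringe phases, so any common single-qubit phase (here the 0.2 rad offset) drops out.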
After the preparatory experiments above, speckle purity benchmarking (SPB) and cross-entropy benchmarking (XEB) are performed to assess the fidelities of the single-qubit gates and the CZ gate. The Pauli fidelities of the single-qubit $\pi/2$ gates are 99.84$\pm$0.01\% and 99.81$\pm$0.02\%, and the Pauli purity fidelities are 99.88$\pm$0.02\% and 99.84$\pm$0.02\%, for $\text{Q}_{1}$ and $\text{Q}_{2}$ respectively, as shown in Fig.~\ref{all_xeb}(b) and Fig.~\ref{all_xeb}(a). We use the following unitary to express the CZ gate
\begin{equation}
\label{cz}
\left(
\begin{array}{cccc}
1 & 0 & 0 & 0\\
0 & e^{i(\Delta_{+}+\Delta_{-})} & 0 & 0\\
0 & 0 & e^{i(\Delta_{+}-\Delta_{-})} & 0\\
0 & 0 & 0 & e^{i(2\Delta_{+}-\pi)} \\
\end{array}
\right)
\end{equation}
The additional Z rotations can be corrected by applying virtual Z gates that do not influence the gate fidelity~\cite{McKay}. So we use the Nelder--Mead (NM) algorithm to fit the rotation angles $\Delta_{+}$ and $\Delta_{-}$, and then obtain an average CZ gate fidelity of 99.65$\pm$0.04\% and an average purity fidelity of 99.69$\pm$0.04\% in Fig.~\ref{all_xeb}(c), which means the control error is about 0.04$\%$. We think the Z-pulse distortion of the coupler Z bias control lines is the main reason for the control error.
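That the single-qubit phases in equation (\ref{cz}) are removable by virtual Z gates can be verified directly; the angles $\Delta_{+}$, $\Delta_{-}$ below are arbitrary example values:

```python
import numpy as np

dp, dm = 0.37, -0.12  # example fitted values of Delta_+ and Delta_-

# The gate unitary of equation (2), in the |q1 q2> basis.
U = np.diag([1.0,
             np.exp(1j * (dp + dm)),
             np.exp(1j * (dp - dm)),
             np.exp(1j * (2.0 * dp - np.pi))])

def virtual_z(phi1, phi2):
    """Virtual Z gates diag(1, e^{i*phi}) on q1 and q2 (up to global phase)."""
    z1 = np.diag([1.0, np.exp(1j * phi1)])
    z2 = np.diag([1.0, np.exp(1j * phi2)])
    return np.kron(z1, z2)

# Undo the single-qubit phases: phi1 = -(dp - dm), phi2 = -(dp + dm).
corrected = virtual_z(-(dp - dm), -(dp + dm)) @ U
ideal_cz = np.diag([1.0, 1.0, 1.0, -1.0])
```

With these angles the $|01\rangle$ and $|10\rangle$ phases cancel exactly, and the $|11\rangle$ entry becomes $e^{i(2\Delta_{+}-\pi)}e^{-2i\Delta_{+}} = -1$, i.e. the ideal CZ.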
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{all_xeb.pdf}
\caption{ (a) Pauli purity fidelities for single-qubit $\pi/2$ gates. (b) Pauli fidelities for single-qubit $\pi/2$ gates. (c) Average purity fidelity and average fidelity of the CZ gate.}
\label{all_xeb}
\end{figure}
In conclusion, our work provides a path towards building an extensible superconducting qubit system in which each qubit can tunably couple to four nearest-neighbor qubits. We realize a high-fidelity CZ gate in this prototype system, which can improve the accuracy of various quantum simulations and promote the development of fault-tolerant quantum computation. We also raise the question of how to correct the Z-pulse distortion of the coupler Z control line without a corresponding readout resonator. Taken together, these designs and demonstrations will help resolve many challenges in the implementation of large-scale quantum systems.
\section{appendix}
\subsection{Search space}
\paragraph{Standard image classification on CIFAR-10 and ImageNet}
The search space for CIFAR-10 and ImageNet classification experiments includes the following operations:
\begin{itemize}
\item identity
\item $1\times1$ convolution
\item $3\times3$ separable convolution
\item $5\times5$ separable convolution
\item $3\times3$ dilated separable convolution
\item $5\times5$ dilated separable convolution
\item $1\times7$ convolution followed by $7\times1$ convolution
\item $3\times3$ max pooling
\item $3\times3$ average pooling
\end{itemize}
A block forms an output by concatenating all leaf nodes in the graph. Blocks have 2 input nodes, which ingest the outputs of block $i-1$ and block $i-2$ respectively. The input nodes are bottleneck layers, and can reduce the spatial size by using stride 2.
Note that while ENAS supports only 5 operators due to memory constraints, GHNs can search for more operators.
This is because ENAS (and other methods which use one-shot models) must store all the parameters in memory because it finds paths in a larger model.
Thus the memory requirements are $O(KN)$ where $K$ is the number of operations and $N$ is the number of nodes in the candidate architecture.
In contrast, the memory requirement for GHNs is $O(N) + O(K)$ for the candidate architecture and GHN respectively.
\paragraph{Anytime prediction on CIFAR-10}
The search space for the CIFAR-10 anytime prediction experiments includes the following operations:
\begin{itemize}
\item $1\times1$ convolution
\item $3\times3$ convolution
\item $5\times5$ convolution
\item $3\times3$ max pooling
\item $3\times3$ average pooling
\end{itemize}
In the anytime setting, nodes concatenate their inputs rather than sum. Thus, the identity operator was removed as it would be redundant. The search space does not include separable convolutions so that it is comparable with our baselines \citep{huang2017multi}. Block 1 contains nodes which may operate on any of the 3 scales ($32\times32, 16\times16, 8\times8$). Block 2 contains nodes which can only operate on scales $16\times16$ and $8\times 8$. Block 3 only contains nodes which operate on the scale $8\times 8$. We fix the number of exit nodes. These choices are inspired by \cite{huang2017multi}.
\subsection{Graph HyperNetwork details}
\paragraph{Standard image classification on CIFAR-10 and ImageNet}
While node embeddings are initialized to a one-hot vector representing the computational operator of the node, we found it helpful to pass the sparse vector through a learned embedding matrix prior to graph propagation. The GHN is trained for 200 epochs with batch size 64 using the ADAM optimizer with an initial learning rate of 1e-3 that is divided by 2 at epochs 100 and 150. A naive hypernet would have a separate output branch for each possible node type, and simply ignore branches that are not applicable to the specific node. In this manner, the number of parameters of the hypernetwork scales with the number of possible node computations. In contrast, the number of parameters of a one-shot model scales with the number of nodes in the graph. We further reduce the number of parameters by obtaining smaller convolution kernels through the slicing of larger kernels.
\paragraph{Anytime prediction }
In the anytime prediction setting, two one-hot vectors representing the node's scale and the presence of an early-exit classifier are additionally concatenated to the first initialized node embedding. We found it helpful to train the GHN with a random number of nodes per block, with the maximum number of allowed nodes being the evaluation block size. Because nodes concatenate their inputs, a bottleneck layer is required. The hypernetwork can predict bottleneck parameters for a varying number of input nodes by generating weights based on edge activations rather than node activations. We form edge activations by concatenating the node activations of the parent and child. Edge weights generated this way can be concatenated, allowing the dimensionality of the predicted bottleneck weights to be proportional to the number of incoming edges.
\subsection{Final architecture training details}
\paragraph{CIFAR-10} Following existing NAS methods \citep{zoph2017learning,real2018regularized}, the final candidates are trained for 600 epochs using SGD with momentum 0.9, a single period cosine schedule with $l_{max}=0.025$, and batch size 64. For regularization, we use scheduled drop-path with a final dropout probability of 0.4. We use an auxiliary head located at 2/3 of the network weighted by 0.5. We accelerate training by performing distributed training across 32 GPUs; the learning rate is multiplied by 32 with an initial linear warmup of 5 epochs.
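For reference, a single-period cosine schedule of the kind mentioned above can be sketched as follows; the exact functional form is an assumption (a standard SGDR-style decay), with $l_{max}=0.025$ taken from the text:

```python
# Single-period cosine learning-rate schedule (assumed SGDR-style form):
# lr(t) = 0.5 * l_max * (1 + cos(pi * t / T)), decaying from l_max to 0.
import math

def cosine_lr(t, total_epochs=600, l_max=0.025):
    return 0.5 * l_max * (1.0 + math.cos(math.pi * t / total_epochs))

lr0, lr_mid, lr_end = cosine_lr(0), cosine_lr(300), cosine_lr(600)
```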
\paragraph{ImageNet Mobile} For ImageNet mobile experiments, we use an image size of $224\times224$. Following existing NAS methods \citep{zoph2017learning,real2018regularized}, the final candidates are trained for 250 epochs using SGD with momentum 0.9, initial learning rate 0.1 multiplied by 0.97 every epoch. We use an auxiliary head located at 2/3 of the network weighted by 0.5. We use the same regularization techniques, and similarly accelerate training in a distributed fashion.
\paragraph{Anytime}Following \cite{huang2017multi}, the final candidates are trained using SGD with momentum 0.9. We train the models for 300 epochs with an initial learning rate of 0.1, which is divided by 10 after epochs 150 and 225, using a batch size of 64. We accelerate training with distributed training in a similar fashion to the CIFAR-10 classification and ImageNet mobile experiments.
The number of filters for the final architecture is chosen such that the number of FLOPS is comparable to existing baselines.
\subsection{Investigating Accuracy Drop off}
Figure \ref{fig:sample_correlation} shows a plot comparing the accuracy of an architecture that is trained for 50 epochs and the accuracy of the same architecture using GHN generated weights.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\linewidth]{figures/sample_correlation.pdf}
\caption{Accuracy with GHN-generated weights vs.\ accuracy after 50 epochs of training, for 100 randomly sampled architectures.}
\label{fig:sample_correlation}
\end{figure}
\subsection{Visualization of Final architectures}
\subsubsection{CIFAR-10 and ImageNet Classification}
Figure \ref{fig:best_block} shows the best found block in the CIFAR-10 Experiments.
\begin{figure}[h]
\includegraphics[width=\linewidth]{figures/graph_-2132791772218152156.pdf}
\caption{Best block found for classification.}
\label{fig:best_block}
\end{figure}
\subsubsection{Anytime Prediction}
Figures \ref{fig:best_anytime1}, \ref{fig:best_anytime2} and \ref{fig:best_anytime3} show blocks 1, 2, and 3 of the best architecture found in the anytime experiments. The color red denotes that an early exit is attached to the output of the node.
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.4\linewidth]{figures/graph_642261663860860599_cell_0.pdf}
\end{center}
\caption{Block 1 for anytime network. Red color denotes early exit.}
\label{fig:best_anytime1}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.6\linewidth]{figures/graph_642261663860860599_cell_1.pdf}
\caption{Block 2 for anytime network. Red color denotes early exit.}
\label{fig:best_anytime2}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=\linewidth]{figures/graph_642261663860860599_cell_2.pdf}
\caption{Block 3 for anytime network. Red color denotes early exit.}
\label{fig:best_anytime3}
\end{center}
\end{figure}
\section{Conclusion}
In this work, we propose the Graph HyperNetwork (GHN), a composition of graph neural networks and
hypernetworks that generates the weights of any architecture by operating directly on their
computation graph representation. We demonstrate a strong correlation between performance with the
generated weights and performance with fully-trained weights. Using our GHN to form a surrogate search
signal, we achieve competitive results on CIFAR-10 and ImageNet mobile with nearly 10$\times$ faster
speed compared to other random search methods. Furthermore, we show that our proposed method can be
extended to outperform the best human-designed architectures in the setting of anytime prediction,
greatly reducing the computation cost of real-time neural networks.
\section{Introduction}
The success of deep learning marks the transition from manual feature engineering to automated
feature learning. However, designing effective neural network architectures requires expert domain
knowledge and repetitive trial and error. Recently, there has been a surge of interest in {\it
neural architecture search} (NAS), where neural network architectures are automatically optimized.
One approach for architecture search is to consider it as a nested optimization problem, where the
inner loop finds the optimal parameters $w^*$ for a given architecture $a$ w.r.t. the training loss
${\mathcal{L}}_{train}$, and the outer loop searches the optimal architecture w.r.t. a validation loss
${\mathcal{L}}_{val}$:
\begin{equation}
\label{eq:inner}
w^*(a) = \argmin_w {\mathcal{L}}_{train}(w, a)
\end{equation}
\begin{equation}
\label{eq:outer}
a^* = \argmin_a {\mathcal{L}}_{val}(w^*(a),a)
\end{equation}
Traditional NAS is expensive since solving the inner optimization in Eq.~\ref{eq:inner} requires a
lengthy optimization process (e.g. stochastic gradient descent (SGD)). Instead, we propose to learn
a parametric function approximation referred to as a hypernetwork
\citep{ha2016hypernetworks,brock2017smash}, which attempts to \textit{generate} the network weights
directly. Learning a hypernetwork is an amortization of the cost of solving Eq.~\ref{eq:inner}
repeatedly for multiple architectures. A trained hypernetwork is well correlated with SGD and can
act as a much faster substitute.
Yet, the architecture of the hypernet itself is still to be determined. Existing methods have
explored a variety of tactics to represent architectures, such as an ingenious 3D tensor encoding
scheme \citep{brock2017smash}, or a string serialization processed by an LSTM
\citep{zoph2016neural,zoph2017learning,pham2018efficient}. In this work, we advocate for a
\textit{computation graph} representation as it allows for the topology of an architecture to be
explicitly modeled. Furthermore, it is intuitive to understand and easily extensible to
various graph sizes.
To this end, in this paper we propose the \textit{Graph HyperNetwork} (GHN), which can aggregate
graph level information by directly learning on the graph representation. Using a hypernetwork to
guide architecture search, our approach requires significantly less computation when compared to
state-of-the-art methods. The computation graph representation allows GHNs to be the first
hypernetwork to generate all the weights of arbitrary CNNs rather than a subset (e.g.
\cite{brock2017smash}), achieving stronger correlation and thus making the search more efficient and
accurate.
While the validation accuracy is often the primary goal in architecture search, networks must also
be resource aware in real-world applications. Towards this goal, we exploit the flexibility of the
GHN by extending it to the problem of \textit{anytime prediction}. Models capable of anytime
prediction progressively update their predictions, allowing for a prediction at any time. This is
desirable in settings such as real-time systems, where the computational budget available for each
test case may vary greatly and cannot be known ahead of time. Although anytime models have
non-trivial differences to classical models, we show the GHN is amenable to these changes.
We summarize our main contributions of this work:
\vspace{-0.1cm}
\begin{enumerate}
\setlength{\itemsep}{1pt}
\item We propose the Graph HyperNetwork, which predicts the parameters of unseen neural networks by
directly operating on their computational graph representations.
\item Our approach achieves highly competitive results with state-of-the-art NAS methods on both
CIFAR-10 and ImageNet-mobile and is 10$\times$ faster than other random search methods.
\item We demonstrate that our approach can be generalized and applied in the domain of
anytime prediction, previously unexplored by NAS methods, outperforming the existing manually
designed state-of-the-art models.
\end{enumerate}
\section{Background}
We review the two major building blocks of our model: graph neural networks and hypernetworks.
\paragraph{Graph Neural Network:}
A graph neural network \citep{ScarselliGTHM09,li2015gated,KipfW16} is a collection of nodes and
edges $({\mathcal{V}}, {\mathcal{E}})$, where each node is a recurrent neural network (RNN) that individually sends and
receives messages along the edges, spanning over the horizon of message passing. Each node $v$
stores an internal node embedding vector ${\bm{h}}_v^{(t)} \in \mathbb{R}^D$, and is updated recurrently:
\begin{equation}
\label{eq:gnn_prop}
{\bm{h}}_v^{(t+1)} =
\begin{cases}
U \left({\bm{h}}_v^{(t)}, {\bm{m}}_v^{(t)} \right) \ \ & \text{if node $v$ is active},\\
{\bm{h}}_v^{(t)} \ \ & \text{otherwise},
\end{cases}
\end{equation}
where $U$ is a recurrent cell function and ${\bm{m}}_v^{(t)}$ is the message received by $v$ at time step
$t$:
\begin{equation}
{\bm{m}}_v^{(t)}=\sum_{u\in N_{in}(v)} M \left({\bm{h}}_u^{(t)} \right),
\end{equation}
with $M$ the message function and $N_{in}(v)$ the set of neighbors with incoming edges pointing
towards $v$. $U$ is often modeled with a long short-term memory (LSTM) unit \citep{hochreiter97lstm}
or gated recurrent unit (GRU) \citep{cho14gru}, and $M$ with an MLP. Given a graph ${\mathcal{A}}$, we define
the GNN operator $G_{\mathcal{A}}$ to be a mapping from a set of initial node embeddings $\{{\bm{h}}_v^{(0)}\}$ to
a set of different node embeddings $\{{\bm{h}}_v^{(t)}\}$, parameterized by some learnable parameters
${\bm{\phi}}$:
\begin{equation}
\left\{{\bm{h}}_v^{(t)} | v \in {\mathcal{V}}\right\} =
G_{\mathcal{A}}^{(t)} \left(\left\{{\bm{h}}_v^{(0)} | v \in {\mathcal{V}} \right\}; {\bm{\phi}} \right).
\end{equation}
Throughout propagation the node embeddings ${\bm{h}}_v^{(t)}$ continuously aggregate graph level
information, which can be used for tasks such as node prediction and graph prediction by further
aggregation. Similar to RNNs, GNNs are typically learned using backpropagation through time (BPTT)
\citep{bptt}.
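As a concrete illustration, the synchronous propagation defined by the equations above can be sketched in a few lines. The message function $M$ and update $U$ below are toy stand-ins (the paper uses an MLP for $M$ and a GRU/LSTM cell for $U$), and all sizes are illustrative:

```python
# Toy sketch of synchronous GNN message passing.
# M halves its input and U adds the message to the state; both are
# illustrative stand-ins for the learned MLP and recurrent cell.

def propagate(edges, h, steps,
              M=lambda e: [x * 0.5 for x in e],
              U=lambda s, m: [a + b for a, b in zip(s, m)]):
    """edges: list of (u, v) pairs; h: dict node -> embedding (list of floats)."""
    dim = len(next(iter(h.values())))
    for _ in range(steps):
        # m_v = sum over incoming neighbours u of M(h_u)
        msgs = {v: [0.0] * dim for v in h}
        for u, v in edges:
            msgs[v] = [a + b for a, b in zip(msgs[v], M(h[u]))]
        # synchronous scheme: every node updates simultaneously
        h = {v: U(h[v], msgs[v]) for v in h}
    return h

# a 3-node chain 0 -> 1 -> 2, propagated for two steps
h = propagate([(0, 1), (1, 2)], {0: [1.0], 1: [0.0], 2: [0.0]}, steps=2)
```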
\paragraph{Hypernetwork:}
A hypernetwork \citep{ha2016hypernetworks} is a neural network that generates the parameters of
another network. For a typical deep feedforward network with $D$ layers, the parameters of the
$j$-th layer $W_j$ can be generated by a learned function $H$:
\begin{equation}
W_j = H(z_j), \ \ \forall j = 1, \dots, D,
\end{equation}
where $z_j$ is the layer embedding, and $H$ is shared for all layers. The output dimensionality of
the hypernetwork is fixed, but it is possible to accommodate predicting weights for layers of varying
kernel sizes by concatenating multiple kernels of the fixed size. Varying spatial sizes can also be
accommodated by slicing in the spatial dimensions. Hypernetworks have been found effective in
standard image recognition and text classification problems, and can be viewed as a relaxed weight
sharing mechanism. Recently, they have shown to be effective in accelerating architecture search
\citep{brock2017smash}.
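A minimal sketch of this idea, including the kernel-slicing trick just described: a single shared map $H$ produces a flat $5\times5$ kernel from a layer embedding, and smaller kernels are cut out of its centre. The linear form of $H$ and all dimensions are illustrative assumptions:

```python
# Toy hypernetwork: one shared linear map H generates a flat 5x5 kernel
# from a layer embedding z_j; smaller kernels are sliced from the centre.
import random

random.seed(0)
EMB, MAX_K = 4, 5  # embedding size, largest kernel side (both illustrative)
H = [[random.uniform(-1, 1) for _ in range(EMB)] for _ in range(MAX_K * MAX_K)]

def generate(z):
    """H(z): a flat MAX_K x MAX_K weight vector from embedding z."""
    return [sum(w * x for w, x in zip(row, z)) for row in H]

def kernel(z, k):
    """Slice a centred k x k kernel out of the generated 5x5 kernel."""
    flat = generate(z)
    full = [flat[i * MAX_K:(i + 1) * MAX_K] for i in range(MAX_K)]
    off = (MAX_K - k) // 2
    return [row[off:off + k] for row in full[off:off + k]]

k3 = kernel([1.0, 0.0, 0.0, 0.0], 3)  # 3x3 kernel shares weights with the 5x5
```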
\section{Graph Hypernetworks for Neural Architectural Search}
Our proposed Graph HyperNetwork (GHN) is a composition of a graph neural network and a hypernetwork.
It takes in a computation graph (CG) and generates all free parameters in the graph. During
evaluation, the generated parameters are used to evaluate the fitness of a random architecture, and
the top performer architecture on a separate validation set is then selected. This allows us to
search over a large number of architectures at the cost of training a single GHN. We refer the
reader to Figure~\ref{fig:main} for a high level system overview.
\begin{figure}
\vspace{-0.9cm}
\includegraphics[width=\linewidth]{figures/main3.pdf}
\vspace{-1cm}
\caption{Our system diagram. \textbf{A}: A neural network architecture
is randomly sampled, forming a GHN. \textbf{B}: After graph propagation, each node in the GHN
generates its own weight parameters. \textbf{C}: The GHN is trained to minimize the training loss of
the sampled network with the generated weights. Random networks are ranked according to their
performance using GHN generated weights. }
\label{fig:main}
\vspace{-0.3cm}
\end{figure}
\subsection{Graphical Representation}
We represent a given architecture as a directed acyclic graph ${\mathcal{A}} = ({\mathcal{V}}, {\mathcal{E}})$, where each node $v
\in {\mathcal{V}}$ has an associated computational operator $f_v$ parametrized by $w_v$, which produces an
output activation tensor $x_v$. Edges $e_{u \mapsto v} = (u, v) \in {\mathcal{E}}$ represent the flow of
activation tensors from node $u$ to node $v$. $x_v$ is computed by applying its associated
computational operator to each of its inputs and summing the results as follows
\begin{equation}
\label{eq:compute_node}
x_v = \sum_{e_{u \mapsto v} \in {\mathcal{E}}} f_v(x_u; w_v), \ \ \forall v \in {\mathcal{V}}.
\end{equation}
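Evaluating Eq.~\ref{eq:compute_node} over a small DAG can be sketched as follows; the scalar operators stand in for the parametrized operators $f_v$, and the node numbering is assumed to be a topological order:

```python
# Sketch of Eq. (compute_node): each node applies its operator f_v to every
# incoming activation and sums the results. Scalar toy operators stand in
# for parametrized convolutions; node ids are assumed topologically sorted.
def run_graph(ops, edges, x_in):
    """ops: dict v -> f_v (node 0 is the input); edges: list of (u, v)."""
    x = {0: x_in}
    for v in (v for v in ops if v != 0):
        x[v] = sum(ops[v](x[u]) for (u, w) in edges if w == v)
    return x

x = run_graph({0: None, 1: lambda t: 2 * t, 2: lambda t: t + 1},
              [(0, 1), (0, 2), (1, 2)], x_in=3.0)
# node 1 sees x_0 = 3; node 2 sums f_2(x_0) and f_2(x_1)
```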
\subsection{Graph Hypernetwork}
Our proposed Graph Hypernetwork is defined as a composition of a GNN and a hypernetwork. First,
given an input architecture, we use the graphical representation discussed above to form a graph
${\mathcal{A}}$. A parallel GNN $G_{\mathcal{A}}$ is then constructed to be \textit{homomorphic} to ${\mathcal{A}}$ with the exact
same topology. Node embeddings are initialized to one-hot vectors representing the node's
computational operator. After graph message-passing steps, a hypernet uses the node embeddings to
generate each node's associated parameters. Let ${\bm{h}}_v^{(T)}$ be the embedding of node $v$ after $T$
steps of GNN propagation, and let $H \left(\cdot; {\bm{\varphi}}\right)$ be a hypernetwork parametrized by
${\bm{\varphi}}$, the generated parameters $\tilde{{\bm{w}}}_v$ are:
\begin{equation}
\tilde{{\bm{w}}}_v = H \left({\bm{h}}_v^{(T)}; {\bm{\varphi}}\right).
\end{equation}
For simplicity, we implement $H$ with a multilayer perceptron (MLP). It is important to note that
$H$ is shared across all nodes, which can be viewed as an output prediction branch in each node of
the GNN. Thus the final set of generated weights of the entire architecture $\tilde{{\bm{w}}}$ is found
by applying $H$ on all the nodes and their respective embeddings which are computed by $G_{\mathcal{A}}$:
\begin{align}
\tilde{{\bm{w}}}=\left\{\tilde{{\bm{w}}}_v | \ v \in {\mathcal{V}} \right\}
&= \left\{H\left({\bm{h}}_v^{(T)}; {\bm{\varphi}}\right) \big| \ v \in {\mathcal{V}} \right\} \\
&= \left\{H\left({\bm{h}}; {\bm{\varphi}}\right) \big| \ {\bm{h}} \in G_{\mathcal{A}}^{(T)}\left(\left\{{\bm{h}}_v^{(0)} \big| v \in {\mathcal{V}} \right\}; {\bm{\phi}}\right)\right\} \\
&= GHN\left({\mathcal{A}}; {\bm{\phi}}, {\bm{\varphi}}\right).
\end{align}
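Putting the two pieces together, the GHN pipeline (one-hot embeddings, propagation, then a shared read-out $H$ per node) can be caricatured as below. The operator vocabulary, the additive ``propagation'', and the linear $H$ are all illustrative stand-ins for the learned GNN and MLP:

```python
# Caricature of the GHN: one-hot node embeddings -> toy message passing ->
# a single shared read-out H maps each final embedding to node "weights".
OPS = ["conv3", "conv5", "pool"]  # illustrative operator vocabulary

def one_hot(op):
    return [1.0 if o == op else 0.0 for o in OPS]

def ghn(arch_ops, edges, T=2):
    h = {v: one_hot(op) for v, op in enumerate(arch_ops)}
    for _ in range(T):  # stand-in for learned GNN propagation
        new_h = {}
        for v in h:
            inc = [h[u] for (u, w) in edges if w == v]
            m = [sum(col) for col in zip(*inc)] if inc else [0.0] * len(OPS)
            new_h[v] = [a + b for a, b in zip(h[v], m)]
        h = new_h
    H = lambda e: [2.0 * x for x in e]  # stand-in for the shared MLP hypernet
    return {v: H(e) for v, e in h.items()}

w = ghn(["conv3", "pool"], [(0, 1)])  # "weights" for a 2-node chain
```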
\subsection{Architectural Motifs and Stacked GNNs}
\label{section:graph_cells}
\begin{wrapfigure}[]{r}{0.33\textwidth}
\vspace*{-0.5cm}
\includegraphics[width=\linewidth]{figures/graph_cells.pdf}
\caption{Stacked GHN along the depth dimension.}
\label{fig:graph_cells}
\end{wrapfigure}
The computation graph of some popular CNN architectures often spans over hundreds of nodes
\citep{he2016deep,huang2017densely}, which makes the search problem scale poorly. Repeated
architectural motifs were originally exploited in architectures where the computation performed by
each block is identical across resolutions, e.g. ResNet \citep{he2016resnet}. Recently,
the use of architectural motifs has also become popular in the context of neural architecture search,
e.g. \citep{zoph2017learning, pham2018efficient}, where a small graph module with fewer
computation nodes is searched, and the final architecture is formed by repeatedly stacking the same
module. \cite{zoph2017learning} showed that this leads to stronger performance due to a reduced
search space; the module can also be transferred to larger datasets by adopting a different
repeating pattern.
Our proposed method scales naturally with the design of repeated modules by stacking the same graph
hypernetwork along the depth dimension. Let ${\mathcal{A}}$ be a graph composed of a chain of repeated modules
$\{{\mathcal{A}}_i\}_{i=1}^N$. A graph-level embedding ${\bm{h}}_{{\mathcal{A}}_i}$ is computed by averaging all
node embeddings after a full propagation of the current module, and is passed onwards to the input node
of the next module as a message before graph propagation continues.
\begin{align}
{\bm{h}}_{{\mathcal{A}}_0} &= 0,\\
{\bm{h}}_{{\mathcal{A}}_i} &= \frac{1}{|{\mathcal{V}}_i|}\sum_{v\in {\mathcal{V}}_i} {\bm{h}}_v^{(T)} \label{eq:agg}\\
&= \frac{1}{|{\mathcal{V}}_i|}\sum G_{{\mathcal{A}}_i}^{(T)}\left(\left\{{\bm{h}}_v^{(0)} | v \in {\mathcal{V}}_i \right\}, {\bm{h}}_{{\mathcal{A}}_{i-1}}; {\bm{\phi}}\right) \ \ \forall i > 0 \label{eq:next_cell}
\end{align}
Note that $G_{{\mathcal{A}}_i}$ share parameters for all ${\mathcal{A}}_i$. Please see Figure~\ref{fig:graph_cells} for an overview.
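Equations~\ref{eq:agg}--\ref{eq:next_cell} amount to the following loop, where \texttt{propagate\_module} is an illustrative stand-in for one full GNN pass over module ${\mathcal{A}}_i$ and the toy embeddings are made up:

```python
# Sketch of stacked-GHN aggregation: the graph-level embedding of module i
# is the mean of its node embeddings, and is handed to module i+1 as an
# extra incoming message. propagate_module is an illustrative stand-in.
def mean_embed(node_embeds):
    n = len(node_embeds)
    return [sum(col) / n for col in zip(*node_embeds)]

def stacked(modules, propagate_module):
    h_graph = [0.0, 0.0]  # h_{A_0} = 0
    for module in modules:
        node_embeds = propagate_module(module, h_graph)
        h_graph = mean_embed(node_embeds)
    return h_graph

# toy propagation: node i's embedding is the incoming graph embedding + i
out = stacked([2, 3], lambda n, g: [[g[0] + i, g[1] + i] for i in range(n)])
```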
\subsection{Forward-backward GNN message passing}
\label{sec:prop_scheme}
Standard GNNs employ the \textit{synchronous propagation scheme} \citep{li2015gated}, where the node
embeddings of all nodes are updated simultaneously at every step (see Equation~\ref{eq:gnn_prop}).
Recently, \cite{liao2018graph} found that such a propagation scheme is inefficient at passing
long-range messages and suffers from the vanishing gradient problem, as do regular RNNs. To mitigate
these shortcomings they proposed \textit{asynchronous propagation} using graph partitions. In our
application domain, deep neural architectures are chain-like graphs with a long diameter; this can
make synchronous message passing difficult. Inspired by the backpropagation algorithm, we propose
another variant of the asynchronous propagation scheme, which we call \textit{forward-backward}
propagation, which directly mimics the order of node execution in the backpropagation algorithm.
Specifically, let $s$ be a topological sort of the nodes in the computation graph in a forward pass,
\begin{equation}
\label{eq:gnn_prop2}
{\bm{h}}_v^{(t+1)} =
\begin{cases}
U \left({\bm{h}}_v^{(t)}, {\bm{m}}_v^{(t)} \right) \ \ & \text{if } s(t) = v \text{ and } 1 \le t \le |{\mathcal{V}}|\\
\ \ & \text{or if } s(2|{\mathcal{V}}| - t) = v \text{ and } |{\mathcal{V}}| + 1 \le t < 2|{\mathcal{V}}|,\\
{\bm{h}}_v^{(t)} \ \ & \text{otherwise}.
\end{cases}
\end{equation}
The total number of propagation steps $T$ for a full forward-backward pass will then become $2|{\mathcal{V}}|-1$. Under the synchronous scheme, propagating information across a graph with diameter $|{\mathcal{V}}|$ would require $O(|{\mathcal{V}}|^2)$ messages. This is reduced to $O(|{\mathcal{V}}|)$ under the forward-backward scheme.
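The resulting update order is simply a topological sort followed by its reverse, which makes the $2|{\mathcal{V}}|-1$ step count easy to verify:

```python
# Forward-backward schedule: update nodes one at a time along a topological
# order s, then back along the reverse, for 2|V| - 1 single-node updates.
def forward_backward_order(topo):
    return topo + topo[-2::-1]  # forward pass, then reverse (last node not repeated)

order = forward_backward_order(["a", "b", "c"])
```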
\subsection{Learning}
Learning a graph hypernetwork is straightforward since $\tilde{{\bm{w}}}$ are directly generated by a
differentiable network. We compute gradients of the graph hypernetwork parameters ${\bm{\phi}}, {\bm{\varphi}}$
using the chain rule:
\begin{equation}
\nabla_{{\bm{\phi}}, {\bm{\varphi}}}{{\mathcal{L}}_{train}(\tilde{{\bm{w}}})} = \nabla_{\tilde{{\bm{w}}}}{
{\mathcal{L}}_{train}(\tilde{{\bm{w}}})} \cdot \nabla_{{\bm{\phi}}, {\bm{\varphi}}}{\tilde{{\bm{w}}}}
\end{equation}
The first term is the gradient of the loss with respect to the generated network parameters; the second term is decomposed as
\begin{align}
\nabla_{\bm{\phi}}{\tilde{{\bm{w}}}} &= \left\{ \nabla_{\bm{h}} H( {\bm{h}}; {\bm{\varphi}}) \cdot \nabla_{\bm{\phi}} {\bm{h}} \ \big| \ {\bm{h}} \in G^{(T)} \left( \{{\bm{h}}_v^{(0)}\}, {\mathcal{A}}, {\bm{\phi}} \right) \right\}, \label{eq:gnn_grad} \\
\nabla_{\bm{\varphi}}{\tilde{{\bm{w}}}} &= \left\{ \nabla_{\bm{\varphi}} H( {\bm{h}}_v^{(T)}; {\bm{\varphi}}) \ \big| \ v \in {\mathcal{V}} \right\} \label{eq:hypernet_grad}
\end{align}
where (Eq. \ref{eq:gnn_grad}) is the contribution from GNN module $G$ and (Eq.
\ref{eq:hypernet_grad}) is the contribution from the hypernet module $H$. Both $G$ and $H$ are
jointly learned throughout training.
\section{Related Work}
Various search methods such as reinforcement learning~\citep{zoph2016neural,
baker2016designing,zoph2017learning}, evolutionary
methods~\citep{real2017large,miikkulainen2017evolving,xie2017genetic,liu2017hierarchical,real2018regularized}
and gradient-based methods~\citep{liu2018darts,luo2018neural} have been proposed to address the
outer optimization (Eq.~\ref{eq:outer}) of NAS, where an agent learns to sample architectures that
are more likely to achieve higher accuracy. Different from these methods, this paper places its
focus on the inner-loop: inferring the parameters of a given network (Eq.~\ref{eq:inner}). Following
\cite{brock2017smash,bender2018understanding}, we opt for a simple random search algorithm to
complete the outer loop.
While initial NAS methods simply train candidate architectures for a brief period with SGD to obtain
the search signal, recent approaches have proposed alternatives in the interest of computational
cost. \cite{baker2017accelerating} propose directly predicting performance from the learning curve,
and \cite{deng2017peephole} propose to predict performance directly from the architecture without
learning curve information. However, training a performance predictor requires a ground truth, thus
the expensive process of computing the inner optimization is not avoided.
\cite{pham2018efficient,bender2018understanding,liu2018darts} use parameter sharing, where a
``one-shot'' model containing all possible architectures in the search space is trained. Individual
architectures are sampled by deactivating some nodes or edges in the one-shot model. In this case,
predicting $w^*(a)$ can be seen as using a selection function from the set of parameters in the
one-shot model.
Prior work has shown the feasibility of predicting parameters in a network with a function
approximator \citep{denil2013predictparams}. \cite{Schmidhuber92Learning,schmidhuber1993self}
proposed ``fast-weights'', where one network produces weight changes for another. HyperNetworks
\citep{ha2016hypernetworks} generate the weights of another network and show strong results in
large-scale language modeling and image classification experiments. SMASH~\citep{brock2017smash}
applied HyperNetworks to perform NAS, where an architecture is encoded as a 3D tensor using a memory
channel scheme. In contrast, we encode a network as a computation graph and use a graph neural
network. While SMASH predicts a subset of the weights, our graph model is able to predict
\textit{all} the free weights.
While earlier NAS methods focused on standard image classification and language modeling, recent
literature has extended NAS to search for architectures that are computationally efficient
~\citep{tan2018mnasnet,dong2018dpp,hsu2018monas,elsken2018multi,zhou2018resource}. In this work, we
applied our GHN based search program on the task of anytime prediction, where we not only optimize
for the final speed but the entire speed-accuracy trade-off curve.
\subsection{Ablation Studies (CIFAR-10)}
\label{section:ablations}
\begin{figure}[t]
\vspace{-1.0cm}
\begin{center}
\begin{subfigure}{.48\textwidth}
\includegraphics[width=0.8\linewidth]{figures/nodes.pdf}
\caption{Varying the number of nodes; $T=5$, forward-backward}
\label{fig:sfig2}
\end{subfigure}
\begin{subfigure}{.48\textwidth}
\centering
\includegraphics[width=0.8\linewidth]{figures/tsteps.pdf}
\caption{Varying the propagation scheme; $N=7$}
\label{fig:sfig1}
\end{subfigure}%
\caption{GHN performance when varying the number of nodes and the propagation scheme.}
\label{fig:ghn_hyps}
\end{center}
\vspace{-0.5cm}
\end{figure}
\vspace{-0.25cm}
\paragraph{Number of graph nodes:}
The GHN is compatible with a varying number of nodes: graphs used in training need not be the same
size as the graphs used for evaluation. Figure~\ref{fig:sfig2} shows how GHN performance varies as a
function of the number of nodes employed during training; fewer nodes generally produce better
performance. While the GHN has difficulty learning on larger graphs, likely due to the vanishing
gradient problem, it generalizes well from learning on smaller graphs alone. Note that all GHNs
are tested with the full graph size ($N=17$ nodes).
\vspace{-0.25cm}
\paragraph{Number of propagation steps:}
We now compare the forward-backward propagation scheme with the regular synchronous propagation
scheme. Note that $T=1$ synchronous step corresponds to one full forward-backward phase. As shown in
Figure~\ref{fig:sfig1}, the forward-backward scheme consistently outperforms the synchronous
scheme. More propagation steps also improve performance, with diminishing returns. While
the forward-backward scheme is less amenable to acceleration through parallelization due to its
sequential nature, it is possible to parallelize the evaluation phase across multiple GHNs when
testing the fitness of candidate architectures.
\begin{wraptable}[8]{r}{5.5cm}
\footnotesize
\vspace{-0.4cm}
\begin{center}
\begin{tabular}{ c c c c}
SP & PE & \multicolumn{2}{c}{Correlation} \\
&& Random-100 & Top-50 \\
\hline
\xmark & \xmark & 0.24 & 0.15\\
\xmark & \cmark & 0.44 & 0.37\\
\cmark & \cmark & 0.68 & 0.48
\end{tabular}
\end{center}
\vspace{-0.1in}
\caption{Stacked GHN correlation. SP denotes sharing parameters; PE denotes passing embeddings.}
\label{table:stacked}
\end{wraptable}
\vspace{-0.25cm}
\paragraph{Stacked GHN for architectural motifs:}
We also evaluate different design choices of GHNs for representing architectural motifs. We compare
1) individual GHNs, each predicting one block independently;
2) a stacked GHN in which individual GHNs
pass on their graph embedding without sharing parameters;
3) a stacked GHN with shared parameters (our proposed approach).
As shown in Table~\ref{table:stacked}, passing messages between GHNs is crucial, and sharing parameters produces better performance.
\subsection{Anytime Prediction}
In the real-time setting, the computational budget available can vary for each test case and cannot
be known ahead of time. This is formalized in anytime prediction \citep{grubb2012speedboost}, the
setting in which, for each test example ${\mathbf{x}}$, there is a non-deterministic computational budget $B$
drawn from the joint distribution $P({\mathbf{x}}, B)$. The goal is then to minimize the expected loss $L(f)
= \mathbb{E}\left[ L\left( f({\mathbf{x}}), B \right)\right]_{P({\mathbf{x}}, B)}$, where $f(\cdot)$ is the model and
$L(\cdot)$ is the loss for an $f(\cdot)$ that must produce a prediction within the budget $B$.
We conduct experiments on CIFAR-10. Our anytime search space consists of networks with 3 cells
containing 24, 16, and 8 nodes. Each node is given the additional properties: 1) the spatial size it
operates at and 2) whether an early-exit classifier is attached to it. A node enforces its spatial size
by pooling or upsampling any input feature maps that are of a different scale. Note that while
a naive one-shot model would triple its size to include three different parameter sets at three
different scales, the GHN is negligibly affected by such a change. The GHN uses the area under the
predicted accuracy-FLOPS curve as its selection criterion. The search space contains various
convolution and pooling operators. The training methodology of the final architectures is chosen to
match \cite{huang2017multi} and can be found in the Appendix.
Figure \ref{fig:test1} shows a comparison with the various methods presented by
\cite{huang2017multi}. Our experiments show that the best searched architectures can outperform the
current state-of-the-art human designed networks. We see the GHN is amenable to the changes proposed
above, and can find efficient architectures with a random search when used with a strong search
space.
\begin{figure}[t]
\vspace{-0.5cm}
\centering
\begin{minipage}{.48\textwidth}
\centering
\includegraphics[width=0.8\linewidth]{figures/anytime_compare.pdf}
\captionof{figure}{Comparison with state-of-the-art\\ human-designed networks on CIFAR-10.}
\label{table:Results4}
\label{fig:test1}
\end{minipage}%
\begin{minipage}{.48\textwidth}
\centering
\includegraphics[width=0.8\linewidth]{figures/anytime_randoms.pdf}
\captionof{figure}{Comparison between random 10 and\\ top 10 networks on CIFAR-10.}
\label{fig:test2}
\end{minipage}
\end{figure}
\subsection{Predicted performance correlation (CIFAR-10)}
\begin{table}[t]
\caption{Benchmarking the correlation between the predicted and true performance of the GHN against SGD and one-shot model baselines. Results are on CIFAR-10.}
\vspace{-0.2cm}
\label{table:correlation}
\small
\begin{center}
\begin{tabular}{ c c c c c}
Method & \multicolumn{2}{c}{Computation cost} & \multicolumn{2}{c}{Correlation} \\
& Initial (GPU hours) & Per arch. (GPU seconds) & Random-100 & Top-50 \\
\hline
SGD 10 Steps & - & 0.9 & 0.26 & -0.05\\
SGD 100 Steps & - & 9 & 0.59 & 0.06\\
SGD 200 Steps & - & 18 & 0.62 & 0.20 \\
SGD 1000 Steps & - & 90 & 0.77 & 0.26 \\
One-Shot & 9.8 & 0.06 & 0.58 & 0.31\\
\hline
\hline
GHN & 6.1 & 0.08 & 0.68 & 0.48
\end{tabular}
\end{center}
\end{table}
In this section, we evaluate whether the parameters generated from GHN can be indicative of the
final performance. Our metric is the correlation between the accuracy of a model with trained
weights vs. GHN generated weights. We use a fixed set of 100 random architectures that have not been
seen by the GHN during training, and we train them for 50 epochs to obtain our ``ground-truth''
accuracy, and finally compare with the accuracy obtained from GHN generated weights. We report the
Pearson's R score on all 100 random architectures and the top 50 performing architectures (i.e.\
above average architectures). Since we are interested in searching for the best architecture,
obtaining a higher correlation on top performing architectures is more meaningful.
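The metric itself is the standard Pearson correlation; a self-contained sketch follows, where the accuracy values are invented for illustration and are not experimental numbers:

```python
# Pearson's R between "ground-truth" accuracy (50-epoch SGD) and accuracy
# with GHN-generated weights. The four accuracy pairs are invented.
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

trained = [0.91, 0.93, 0.89, 0.94]     # illustrative 50-epoch accuracies
generated = [0.55, 0.60, 0.52, 0.61]   # illustrative GHN-weight accuracies
r = pearson_r(trained, generated)      # high R despite the accuracy gap
```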
To evaluate the effectiveness of GHN, we further consider two baselines: 1) training a network with
SGD from scratch for a varying number of steps, and 2) our own implementation of the one-shot model
proposed by \citet{pham2018efficient}, where nodes store a set of shared parameters for each
possible operation. Unlike GHN, which is compatible with a varying number of nodes, the one-shot model
must be trained with $N=17$ nodes to match the evaluation. The GHN is trained with $N=7$, $T=5$
using forward-backward propagation. These GHN parameters are selected based on the results found in
Section~\ref{section:ablations}.
Table \ref{table:correlation} shows performance correlation and search cost of SGD, the one-shot
model, and our GHN. Note that GHN clearly outperforms the one-shot model, showing the effectiveness
of dynamically predicting parameters based on graph topology. While it takes 1000 SGD steps to
surpass GHN in the ``Random-100'' setting, GHN is still the strongest in the ``Top-50'' setting,
which is more important for architecture search. Moreover, compared to GHN, running 1000 SGD steps
for every random architecture is over 1000 times more computationally expensive. In contrast, GHN
only requires a pre-training stage of 6 hours, and afterwards, the trained GHN can be used to
efficiently evaluate a massive number of random architectures of different sizes.
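The cost comparison can be made concrete with a back-of-the-envelope calculation using the figures from Table~\ref{table:correlation} (6.1 GPU hours of pre-training and 0.08 GPU seconds per architecture for the GHN, 90 GPU seconds per architecture for 1000 SGD steps):

```python
def total_cost_gpu_hours(initial_hours, per_arch_seconds, n_arch):
    """One-time (pre-)training cost plus per-architecture evaluation cost."""
    return initial_hours + per_arch_seconds * n_arch / 3600.0

n = 1000  # number of random architectures to evaluate

ghn = total_cost_gpu_hours(6.1, 0.08, n)  # GHN: pre-train once, then cheap
sgd = total_cost_gpu_hours(0.0, 90.0, n)  # SGD 1000 steps per architecture

print(f"GHN: {ghn:.2f} GPU hours, SGD-1000: {sgd:.2f} GPU hours")
print(f"per-architecture ratio: {90.0 / 0.08:.0f}x")  # over 1000x
```

The one-time GHN training cost is amortized quickly: already at 1000 candidate architectures, total GHN cost is a small fraction of the SGD-1000 cost.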
\subsection{NAS benchmarks}
\input{results1}
\input{results2}
\input{results3}
\subsubsection{CIFAR-10}
\label{section:cifar10}
We conduct our initial set of experiments on CIFAR-10 \citep{krizhevsky2009cifar}, which contains 10
object classes and 50,000 training images and 10,000 test images of size 32$\times$32$\times$3. We
use 5,000 images split from the training set as our validation set.
\vspace{-0.25cm}
\paragraph{Search space:}
Following existing NAS methods, we choose to search for optimal blocks rather than the entire
network. Each block contains 17 nodes, with 8 possible operations. The final architecture is formed
by stacking 18 blocks. The spatial size is halved and the number of channels is doubled after blocks
6 and 12. These settings are all chosen following recent NAS methods
\citep{zoph2016neural,pham2018efficient,liu2018darts}, with details in the Appendix.
\vspace{-0.25cm}
\paragraph{Training:}
For the GNN module, we use a standard GRU cell \citep{cho14gru} with hidden size 32 and a 2-layer
MLP with hidden size 32 as the recurrent cell function $U$ and the message function $M$,
respectively. The shared hypernetwork $H \left(\cdot; {\bm{\varphi}}\right)$ is a 2-layer MLP with hidden
size 64. Based on the results of the ablation studies in Section~\ref{section:ablations}, the GHN is
trained with blocks with $N=7$ nodes and $T=5$ propagations under the forward-backward scheme, using
the ADAM optimizer \citep{kingma2015adam}. Training details of the final selected architectures are
chosen to follow existing works and can be found in the Appendix.
\vspace{-0.25cm}
\paragraph{Evaluation:}
First, we compare to similar methods that use random search with a hypernetwork or a one-shot model
as a surrogate search signal. For our random baseline, we randomly sample 10 architectures and
train them until convergence. Next, we randomly sample 1000 architectures, and select the top 10 performing
architectures with GHN generated weights, which we refer to as GHN Top. Our reported search cost
includes both the GHN training and evaluation phase. Shown in Table~\ref{table:Results1}, the GHN
achieves competitive results with nearly an order of magnitude reduction in search cost.
In Table~\ref{table:Results2}, we compare with methods which use more advanced search methods, such
as reinforcement learning and evolution. Once again, we sample 1000 architectures and use the GHN to
select the top 10. To make a fair comparison for random search, we train the top 10 for a short
period before selecting the best to train until convergence. The accuracy reported for GHN Top-Best
is the average of 5 runs of the same final architecture. Note that all methods in
Table~\ref{table:Results2} use CutOut~\citep{devriescutout17}. GHN achieves very competitive results
with a simple random search algorithm, while only using a fraction of the total search cost. Using
advanced search methods with GHNs may bring further gains.
\subsubsection{ImageNet-Mobile}
We also run our GHN algorithm on the ImageNet dataset \citep{russakovsky2015imagenet}, which
contains 1.28 million training images. We report the top-1 accuracy on the 50,000 validation
images. Following existing literature, we conduct the ImageNet experiments in the mobile setting,
where the model is constrained to be under 600M FLOPs. We directly transfer the best architecture
block found in the CIFAR-10 experiments, using an initial convolution layer of stride 2 before
stacking 14 blocks with scale reduction at blocks 1, 2, 6 and 10. The total number of FLOPs is
constrained by choosing the initial number of channels. We follow existing NAS methods on the
training procedure of the final architecture; details can be found in the Appendix. As shown in
Table \ref{table:Results3} the transferred block is competitive with other NAS methods which require
a far greater search cost.
\section{Experiments}
In this section, we use our proposed GHN to search for the best CNN architecture for image
classification. First, we evaluate the GHN on the standard CIFAR \citep{krizhevsky2009cifar} and
ImageNet \citep{russakovsky2015imagenet} architecture search benchmarks. Next, we apply GHN on an
``anytime prediction'' task where we optimize the speed-accuracy tradeoff that is key for many
real-time applications. Finally, we benchmark the GHN's predicted-performance correlation and
explore various factors in an ablation study.
\input{results.nasbench}
\input{results.anytime}
\input{results.correlation}
\input{results.ablation}
\section{Introduction}
There has been a rising interest in running high-quality CNN models under strict constraints on memory and computational budget. Many innovative architectures, such as MobileNets \cite{howard2017mobilenets}, ShuffleNet \cite{zhang2017shufflenet}, NASNet-A \cite{zoph2017learning}, and MobileNetV2 \cite{sandler2018mobilenetv2}, have been proposed in recent years. However, all of these architectures depend heavily on depthwise separable convolution \cite{Szegedy_2015_CVPR}, which lacks an efficient implementation. Meanwhile, there are few studies that combine efficient models with fast object detection algorithms \cite{huang2016speed}.
This research explores the design of an efficient CNN architecture for both image classification and object detection tasks.
It makes a number of major contributions, listed as follows:
\textbf{We propose a variant of DenseNet \cite{huang2016densely} architecture called PeleeNet for mobile devices.}
PeleeNet follows the connectivity pattern and some of key design principals of DenseNet.
It is also designed to meet strict constraints on memory and computational budget. Experimental results on the Stanford Dogs \cite{khosla2011novel} dataset show that our proposed PeleeNet achieves 5.05\% higher accuracy than the network built with the original DenseNet architecture and 6.53\% higher accuracy than MobileNet \cite{howard2017mobilenets}.
PeleeNet achieves a compelling result on ImageNet ILSVRC 2012 \cite{deng2009imagenet} as well. The top-1 accuracy of PeleeNet is 72.6\% which is higher than that of MobileNet by 2.1\%. It is also important to point out that PeleeNet is only 66\% of the model size of MobileNet. Some of the key features of PeleeNet are:
\begin{itemize}
\item \textbf{Two-Way Dense Layer}
Motivated by GoogLeNet \cite{Szegedy_2015_CVPR}, we use a 2-way dense layer to get different scales of receptive fields. One way of the layer uses a 3x3 kernel size; the other uses two stacked 3x3 convolutions to learn visual patterns for large objects. The structure is shown in Fig. \ref{fig1_2waydb}.
\begin{figure}[ht]
\begin{center}
\begin{tabular}[ht]{cc}
\includegraphics[height =0.27\textwidth ]{images/originalDenseLayer.pdf} &
\includegraphics[height =0.27\textwidth ]{images/2wDenseLayer.pdf} \\
(a) original dense layer &
(b) 2-way dense layer
\end{tabular}
\caption{Structure of 2-way dense layer\label{fig1_2waydb}}
\end{center}
\end{figure}
\item
\textbf{Stem Block}
Motivated by Inception-v4 \cite{szegedy2017inception} and DSOD \cite{shen2017dsod}, we design a cost-efficient stem block before the first dense layer. The structure of the stem block is shown in Fig. \ref{fig1_stem}. This stem block can effectively improve feature expression ability without adding much computational cost, performing better than other, more expensive methods such as increasing the number of channels of the first convolution layer or increasing the growth rate.
\begin{figure}[ht]
\begin{center}
\begin{tabular}[ht]{cc}
\includegraphics[width =0.5\textwidth ]{images/stem2.pdf}
\end{tabular}
\caption{Structure of stem block\label{fig1_stem}}
\end{center}
\end{figure}
\item
\textbf{Dynamic Number of Channels in Bottleneck Layer}
Another highlight is that the number of channels in the bottleneck layer varies according to the input shape, instead of the fixed 4 times the growth rate used in the original DenseNet. In DenseNet, we observe that for the first several dense layers, the number of bottleneck channels is much larger than the number of input channels, which means that for these layers the bottleneck layer increases the computational cost instead of reducing it. To maintain the consistency of the architecture, we still add the bottleneck layer to all dense layers, but the number of channels is dynamically adjusted according to the input shape, to ensure that it does not exceed the number of input channels.
Compared to the original DenseNet structure, our experiments show that this method can save up to 28.5\% of the computational cost with a small impact on accuracy. (Fig. \ref{fig1_dydl})
\begin{figure}[ht]
\begin{center}
\begin{tabular}[ht]{cc}
\includegraphics[height =0.2\textwidth ]{images/dynamic_bl3.png} & \includegraphics[height =0.2\textwidth ]{images/dynamic_bl2.png} \\
(a) Dense layer with bottleneck & (b) Computational cost of the first 4 dense layers
\end{tabular}
\caption{Dynamic number of channels in bottleneck layer\label{fig1_dydl}}
\end{center}
\end{figure}
\item
\textbf{Transition Layer without Compression}
Our experiments show that the compression factor proposed by DenseNet hurts feature expression. We always keep the number of output channels the same as the number of input channels in transition layers.
\item
\textbf{Composite Function}
To improve actual speed, we use the conventional wisdom of ``post-activation'' (Convolution - Batch Normalization \cite{ioffe2015batch} - ReLU) as our composite function, instead of the pre-activation used in DenseNet. With post-activation, all batch normalization layers can be merged with the convolution layer at the inference stage, which greatly accelerates inference.
To compensate for the negative impact on accuracy caused by this change, we use a shallow and wide network structure. We also add a 1x1 convolution layer after the last dense block to obtain stronger representational abilities.
\end{itemize}
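The batch-normalization merge described under ``Composite Function'' can be checked numerically. The sketch below treats a 1x1 convolution as a matrix multiply (a simplification of the full 4-D kernel case) and folds the BN statistics into the weights, so that a single convolution reproduces conv + BN at inference:

```python
import numpy as np

def fold_bn_into_conv(W, b, gamma, beta, mean, var, eps=1e-5):
    """Fold BatchNorm statistics into the preceding convolution.

    W: (C_out, C_in) weight matrix (a 1x1 conv); b: (C_out,) bias.
    Returns (W_f, b_f) such that W_f @ x + b_f == BN(W @ x + b).
    """
    scale = gamma / np.sqrt(var + eps)  # per-output-channel scale
    return scale[:, None] * W, scale * (b - mean) + beta

rng = np.random.default_rng(0)
C_out, C_in = 8, 4
W, b = rng.normal(size=(C_out, C_in)), rng.normal(size=C_out)
gamma, beta = rng.normal(size=C_out), rng.normal(size=C_out)
mean, var = rng.normal(size=C_out), rng.uniform(0.5, 2.0, size=C_out)

x = rng.normal(size=C_in)
bn_out = gamma * ((W @ x + b) - mean) / np.sqrt(var + 1e-5) + beta
W_f, b_f = fold_bn_into_conv(W, b, gamma, beta, mean, var)
assert np.allclose(W_f @ x + b_f, bn_out)  # one conv, same output
```

With pre-activation, BN is followed by a nonlinearity before the convolution, so this merge is not possible, which is why post-activation runs faster at inference.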
\textbf{We optimize the network architecture of Single Shot MultiBox Detector (SSD) \cite{liu2016ssd} for speed acceleration and then combine it with PeleeNet.}
Our proposed system, named Pelee, achieves 76.4\% mAP on PASCAL VOC \cite{everingham2010pascal} 2007 and 22.4 mAP on COCO. It outperforms YOLOv2 \cite{redmon2016yolo9000} in terms of accuracy, speed and model size. The major enhancements proposed to balance speed and accuracy are:
\begin{itemize}
\item
\textbf{Feature Map Selection}
We build the object detection network in a way different from the original SSD, with a carefully selected set of 5 scale feature maps (19 x 19, 10 x 10, 5 x 5, 3 x 3, and 1 x 1). To reduce computational cost, we do not use the 38 x 38 feature map.
\item
\textbf{Residual Prediction Block}
We follow the design ideas proposed by \cite{lee2017residual} that encourage features to be passed along the feature extraction network. For each feature map used for detection, we build a residual \cite{he2016deep} block (ResBlock) before conducting prediction. The structure of ResBlock is shown in Fig. \ref{fig_resblock}.
\item
\textbf{Small Convolutional Kernel for Prediction}
Residual prediction block makes it possible for us to apply 1x1 convolutional kernels to predict category scores and box offsets.
Our experiments show that the accuracy of the model using 1x1 kernels is almost the same as that of the model using 3x3 kernels. However, 1x1 kernels reduce the computational cost by 21.5\%.
\end{itemize}
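The 21.5\% figure applies to the whole detection network; per prediction layer, the saving follows directly from the standard multiply-accumulate count of a convolution, $H \times W \times C_{in} \times C_{out} \times k \times k$. The sketch below uses the five Pelee feature-map sizes with hypothetical channel counts (the actual channel counts are not stated here):

```python
def conv_flops(h, w, c_in, c_out, k):
    """Multiply-accumulates of a k x k convolution on an h x w feature map."""
    return h * w * c_in * c_out * k * k

# Five prediction scales used by Pelee; channel counts are placeholders.
feature_maps = [(19, 256), (10, 256), (5, 256), (3, 256), (1, 256)]
c_out = 6 * (21 + 4)  # hypothetical: 6 boxes x (classes + box offsets)

for k in (3, 1):
    total = sum(conv_flops(s, s, c_in, c_out, k) for s, c_in in feature_maps)
    print(f"{k}x{k} prediction heads: {total / 1e6:.1f} MFLOPs")
# A 1x1 head costs exactly 1/9 of a 3x3 head per prediction layer.
```

The overall 21.5\% saving is smaller than 1/9 per head because the prediction heads are only part of the network's total cost.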
\begin{figure}[ht]
\begin{center}
\begin{tabular}[ht]{cc}
\includegraphics[width =0.4\linewidth ]{images/resblock3.png} &
\includegraphics[width =0.6\linewidth, height=0.3\linewidth ]{images/pelee_arch.png} \\
(a) ResBlock &
(b) Network of Pelee
\end{tabular}
\caption{Residual prediction block\label{fig_resblock}}
\end{center}
\end{figure}
\textbf{We provide a benchmark test} for different efficient classification models and different one-stage
object detection methods on the NVIDIA TX2 embedded platform and iPhone 8.
\section{PeleeNet: An Efficient Feature Extraction Network}
\subsection{Architecture}
The architecture of our proposed PeleeNet is shown in Table \ref{tab3.3.7}. The entire network consists of a stem block and four stages of feature extractor. Except for the last stage, the last layer in each stage is an average pooling layer with stride 2. A four-stage structure is commonly used in large model design. ShuffleNet \cite{zhang2017shufflenet} uses a three-stage structure and shrinks the feature map size at the beginning of each stage. Although this can effectively reduce computational cost, we argue that early-stage features are very important for vision tasks, and that prematurely reducing the feature map size can impair representational abilities. Therefore, we still maintain a four-stage structure. The number of layers in the first two stages is specifically controlled to an acceptable range.
\begingroup
\renewcommand{\baselinestretch}{1.2}
\begin{table}[ht]
\begin{center}
\caption{Overview of PeleeNet architecture\label{tab3.3.7}}
\begin{tabular}[ht]{cccc}
\hline
\multicolumn{2}{c}{\textbf{Stage}} &
\textbf{Layer}&
\textbf{Output Shape}
\\
\hline
\hline
\multicolumn{3}{c}{Input}
&224 x 224 x 3 \\
\hline
\textbf{Stage 0} &
Stem Block & &56 x 56 x 32 \\
\hline
\multirow{3}{*}
{\textbf{Stage 1} }&
Dense Block &DenseLayer \textbf{\large x 3}&
\multirow{3}{*}
{28 x 28 x 128}
\\
\cline{2-3}
&
\multirow{2}{*}{Transition Layer}& 1 x 1 conv, stride 1& \\
\cline{3-3}&
&2 x 2 average pool, stride 2 & \\
\hline
\multirow{3}{*}
{\textbf{Stage 2} }&
Dense Block &DenseLayer \textbf{\large x 4}&
\multirow{3}{*}
{14 x 14 x 256}
\\
\cline{2-3}
&
\multirow{2}{*}{Transition Layer}& 1 x 1 conv, stride 1& \\
\cline{3-3}&
&2 x 2 average pool, stride 2 & \\
\hline
\multirow{3}{*}
{\textbf{Stage 3} }&
Dense Block &DenseLayer \textbf{\large x 8}&
\multirow{3}{*}
{7 x 7 x 512}
\\
\cline{2-3}
&
\multirow{2}{*}{Transition Layer}& 1 x 1 conv, stride 1& \\
\cline{3-3}&
&2 x 2 average pool, stride 2 & \\
\hline
\multirow{2}{*}
{\textbf{Stage 4} }&
Dense Block &DenseLayer \textbf{\large x 6}&
\multirow{2}{*}
{7 x 7 x 704}
\\
\cline{2-3}
&
Transition Layer& 1 x 1 conv, stride 1& \\
\hline
\multicolumn{2}{c}{\multirow{2}{*}{\textbf{Classification Layer}}} & 7 x 7 global average pool &1 x 1 x 704\\
\cline{3-4}
\multicolumn{2}{c}{}
&\multicolumn{2}{c}{1000D fully-connected, softmax} \\
\hline
\end{tabular}
\end{center}
\end{table}
\endgroup
\subsection{Ablation Study}
\subsubsection{Dataset}
We build a customized Stanford Dogs dataset for the ablation study.
The Stanford Dogs \cite{khosla2011novel} dataset contains images of 120 breeds of dogs from around the world. It was built using images and annotations from ImageNet for the task of fine-grained image classification.
We believe a dataset for this kind of task is complicated enough to evaluate the performance of a network architecture. However, there are only 14,580 training images, about 120 per class, in the original Stanford Dogs dataset, which is not large enough to train a model from scratch.
Instead of using the original Stanford Dogs, we build a subset of ILSVRC 2012 according to the ImageNet wnids used in Stanford Dogs. Both training data and validation data are copied exactly from the ILSVRC 2012 dataset. In the following sections, the term Stanford Dogs refers to this subset of ILSVRC 2012 instead of the original dataset. Its contents are:
\begin{itemize}
\item Number of categories: 120
\item Number of training images: 150,466
\item Number of validation images: 6,000
\end{itemize}
\subsubsection{Effects of Various Design Choices on the Performance}
We build a DenseNet-like network called DenseNet-41 as our baseline model. There are two differences between this model and the original DenseNet. The first is the parameters of the first conv layer: there are 24 channels in the first conv layer instead of 64, and the kernel size is changed from 7 x 7 to 3 x 3 as well. The second is that the number of layers in each dense block is adjusted to meet the computational budget.
All our models in this section are trained by PyTorch with mini-batch size 256 for 120 epochs. We follow most of the training settings and hyper-parameters used in ResNet on ILSVRC 2012. Table \ref{tab3.46.2} shows the effects of various design choices on the performance.
We can see that, after combining all these design choices, PeleeNet achieves 79.25\% accuracy on Stanford Dogs, which is 4.23\% higher than DenseNet-41 at a lower computational cost.
\begin{table}[ht]
\begin{center}
\caption{Effects of various design choices and components on performance\label{tab3.46.2}}
\begin{tabular}[c]{m{14em}cccccccc}
\hline
& \multicolumn{7}{c}{\textbf{From DenseNet-41 to PeleeNet}} \\
\hline
\hline
Transition layer without compression &
&\ding{51}&\ding{51}&\ding{51}&\ding{51}&\ding{51}&\ding{51} \\
\hline
Post-activation&&&\ding{51}&&&\ding{51}&\ding{51} \\
\hline
Dynamic bottleneck channels&&&&\ding{51}&\ding{51}&\ding{51}&\ding{51} \\
\hline
Stem Block&&&&&\ding{51}&\ding{51}&\ding{51} \\
\hline
Two-way dense layer&&&&&&\ding{51}&\ding{51} \\
\hline
Go deeper (add 3 extra dense layers)&&&&&&&\ding{51} \\
\hline
\textbf{Top 1 accuracy} &\textbf{75.02} &76.1 &75.2 &75.8 &76.8 &78.8 &\textbf{79.25} \\
\hline
\end{tabular}
\end{center}
\end{table}
\subsection{Results on ImageNet ILSVRC 2012}
Our PeleeNet is trained by PyTorch with mini-batch size 512 on two GPUs. The model is trained with a cosine learning rate annealing schedule, similar to what is used by \cite{pleiss2017memory} and \cite{loshchilov2016sgdr}. The initial learning rate is set to 0.25 and the total amount of epochs is 120. We then fine tune the model with the initial learning rate of 5e-3 for 20 epochs. Other hyper-parameters are the same as the one used on Stanford Dogs dataset.
\textbf{Cosine Learning Rate Annealing} means that the learning rate decays with a cosine shape (the learning rate of epoch $\mathit{t\ (t \le 120)}$ is set to $\mathit{0.5 \cdot lr \cdot (\cos(\pi \cdot t/120)+1)}$).
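A minimal sketch of this schedule with the settings above (initial rate 0.25, 120 epochs):

```python
import math

def cosine_lr(epoch, base_lr=0.25, total_epochs=120):
    """Cosine-annealed learning rate: 0.5 * lr * (cos(pi * t / T) + 1)."""
    assert 0 <= epoch <= total_epochs
    return 0.5 * base_lr * (math.cos(math.pi * epoch / total_epochs) + 1)

print(cosine_lr(0))    # 0.25  (starts at the base rate)
print(cosine_lr(60))   # 0.125 (half-way: half the base rate)
print(cosine_lr(120))  # 0.0   (decays to zero at the final epoch)
```

Unlike step schedules, the rate decreases smoothly every epoch, which \cite{loshchilov2016sgdr} found to work well without tuning decay milestones.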
As can be seen from Table \ref{tab3.4.7}, PeleeNet achieves a higher accuracy than both MobileNet and ShuffleNet with no more than 66\% of their model size and a lower computational cost. The model size of PeleeNet is only 1/49 of that of VGG16.
\begin{table}[ht]
\begin{center}
\caption{Results on ImageNet ILSVRC 2012 \label{tab3.4.7}}
\begin{tabular}[c]{ccccc}
\hline
\multirow{2}{*}{\textbf{Model}} &
\multirow{2}{*}{
\begin{tabular}[c]{@{}c@{}}
\textbf{Computational Cost} \\
\textbf{(FLOPs)}
\end{tabular}}&
\multirow{2}{*}{
\begin{tabular}[c]{@{}c@{}}
\textbf{Model Size} \\
\textbf{(Parameters)}
\end{tabular}}&
\multicolumn{2}{c}{
\textbf{Accuracy (\%)}
}\\
&&&\textbf{Top-1}&\textbf{Top-5}\\
\hline
\hline
VGG16 &15,346 M&138 M&71.5 & 89.8 \\
\hline
1.0 MobileNet &569 M&4.24 M&70.6 & 89.5 \\
ShuffleNet 2x (g = 3) &524 M&5.2 M&70.9&- \\
NASNet-A &564 M&5.3 M&74.0&91.6 \\
\hline
\textbf{PeleeNet (ours)} &\textbf{508 M} &\textbf{2.8 M} &\textbf{72.6}& \textbf{90.6} \\
\hline
\end{tabular}
\end{center}
\end{table}
\subsection{Speed on Real Devices}
Counting FLOPs (the number of multiply-accumulates) is widely used to measure computational cost. However, it cannot replace a speed test on real devices, considering that there are many other factors that may influence the actual time cost, e.g., caching, I/O, and hardware optimization.
This section evaluates the performance of efficient models on iPhone 8 and NVIDIA TX2 embedded platform.
Speed is calculated as the average time of processing 100 images with batch size 1.
We run this 100-image benchmark 10 times separately and average the results.
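The measurement procedure can be sketched as follows; `run_inference` is a placeholder standing in for one forward pass of the model on the device:

```python
import time

def benchmark_fps(run_inference, n_images=100, n_runs=10):
    """Average images/second over n_runs passes of n_images, batch size 1."""
    times = []
    for _ in range(n_runs):
        start = time.perf_counter()
        for _ in range(n_images):
            run_inference()
        times.append(time.perf_counter() - start)
    avg_time = sum(times) / n_runs
    return n_images / avg_time

# Placeholder workload in place of a real model forward pass.
fps = benchmark_fps(lambda: sum(i * i for i in range(1000)))
print(f"{fps:.1f} images/second")
```

Averaging over repeated runs smooths out one-off effects such as cold caches or background activity on the device.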
As can be seen in Table \ref{tab_inet_tx2}, PeleeNet is much faster than MobileNet and MobileNetV2 on TX2. Although MobileNetV2 achieves a high accuracy with 300M FLOPs, the actual speed of the model is slower than that of MobileNet with 569M FLOPs.
Using half-precision floating point (FP16) instead of single-precision floating point (FP32) is a widely used method to accelerate deep learning inference.
As can be seen in Figure \ref{fig_inet_speed}, PeleeNet runs 1.8 times faster in FP16 mode than in FP32 mode. In contrast, the network that is built with depthwise separable convolution is hard to benefit from the TX2 half-precision (FP16) inference engine. The speed of MobileNet and MobileNetV2 running in FP16 mode is almost the same as the ones running in FP32 mode.
On iPhone 8, PeleeNet is slower than MobileNet for small input dimensions but faster than MobileNet for large input dimensions. There are two possible reasons for this unfavorable result on iPhone. The first is related to CoreML, which is built on Apple's Metal API. Metal is a 3D graphics API and was not originally designed for CNNs. It can only hold 4 channels of data (originally used to hold RGBA data), so the high-level API has to slice the channels by 4 and cache the result of each slice. Separable convolution benefits more from this mechanism than conventional convolution. The second reason is the architecture of PeleeNet itself. PeleeNet is built in a multi-branch, narrow-channel style with 113 convolution layers. Our original design was misled by the FLOPs count and involves unnecessary complexity.
\begin{table}[ht]
\begin{center}
\caption{\textbf{Speed on NVIDIA TX2} (The larger the better) The benchmark tool is built with NVIDIA TensorRT4.0 library. \label{tab_inet_tx2}}
\begin{tabular}[ht]{cccccc}
\hline
\multirow{3}{*}{\textbf{Model}}&
\multirow{3}{*}{
\begin{tabular}[c]{@{}c@{}}
\textbf{Top-1 Accuracy} \\
\textbf{on ILSVRC2012} \\
@224x224
\end{tabular}}&
\multirow{3}{*}{
\begin{tabular}[c]{@{}c@{}}
\textbf{FLOPs}\\
@224x224
\end{tabular}}
&
\multicolumn{3}{c}{
\begin{tabular}[c]{@{}c@{}}
\textbf{Speed}\\
\textbf{(images per second)}
\end{tabular}
}
\\
\cline{4-6}
&&&\multicolumn{3}{c}{Input Dimension}
\\
\cline{4-6}
&&&224x224&320x320&640x640 \\
\hline
\hline
1.0 MobileNet& 70.6&569 M&136.2&75.7&22.4 \\
\hline
1.0 MobileNetV2& 72.0&300 M&123.1&68.8&21.6 \\
\hline
ShuffleNet 2x (g = 3)& \textbf{73.7}&524 M&110&65.3&19.8 \\
\hline
\textbf{PeleeNet (ours) }&72.6&508 M&\textbf{240.3}&\textbf{129.1}&\textbf{37.2} \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}[ht]
\begin{center}
\begin{tabular}[ht]{cc}
\includegraphics[width =0.48\textwidth ]{images/speed_vs_acc.png} & \includegraphics[width =0.48\textwidth ]{images/peleenet_fp32vsfp16.png} \\
(a) Speed and accuracy on FP16 mode & (b) FP32 vs FP16 by 224x224 dimension
\end{tabular}
\caption{Speed on NVIDIA TX2\label{fig_inet_speed}}
\end{center}
\end{figure}
\begin{table}[ht]
\begin{center}
\caption{\textbf{Speed on iPhone 8} (The larger the better) The benchmark tool is built with CoreML library. \label{tab_inet_ip8}}
\begin{tabular}[ht]{ccccc}
\hline
\multirow{3}{*}{\textbf{Model}}&
\multirow{3}{*}{
\begin{tabular}[c]{@{}c@{}}
\textbf{Top-1 Accuracy} \\
\textbf{on ILSVRC2012} \\
@224x224
\end{tabular}}&
\multirow{3}{*}{
\begin{tabular}[c]{@{}c@{}}
\textbf{FLOPs}\\
@224x224
\end{tabular}}
&
\multicolumn{2}{c}{
\begin{tabular}[c]{@{}c@{}}
\textbf{Speed}\\
\textbf{(images per second)}
\end{tabular}
}
\\
\cline{4-5}
&&&\multicolumn{2}{c}{Input Dimension}
\\
\cline{4-5}
&&&224x224&320x320 \\
\hline
\hline
1.0 MobileNet& 70.6&569 M&\textbf{27.7}&20.3 \\
\hline
\textbf{PeleeNet (ours) }&\textbf{72.6}&508 M&26.1&\textbf{22.8} \\
\hline
\end{tabular}
\end{center}
\end{table}
\section{Pelee: A Real-Time Object Detection System}
\subsection{Overview}
This section introduces our object detection system and the optimization of SSD. The main purpose of our optimization is to improve speed with acceptable accuracy.
Besides the efficient feature extraction network proposed in the last section, we also build the object detection network in a way different from the original SSD, with a carefully selected set of 5 scale feature maps. In the meantime, for each feature map used for detection, we build a residual block before conducting prediction (Fig. \ref{fig_resblock}). We also use small convolutional kernels to predict object categories and bounding box locations to reduce computational cost. In addition, we use quite different training hyperparameters. Although these contributions may seem small independently, we note that the final system achieves 70.9\% mAP on PASCAL VOC 2007 and 22.4 mAP on the MS COCO dataset. The result on COCO outperforms YOLOv2 considering its higher precision, 13.6 times lower computational cost, and 11.3 times smaller model size.
There are 5 scales of feature maps used in our system for prediction: 19 x 19, 10 x 10, 5 x 5, 3 x 3, and 1 x 1. We do not use the 38 x 38 feature map layer, to strike a balance between speed and accuracy. The 19 x 19 feature map is associated with two different scales of default boxes, and each of the other 4 feature maps is associated with one scale of default box. Huang et al. \cite{huang2016speed} also do not use the 38 x 38 scale feature map when combining SSD with MobileNet. However, they add another 2 x 2 feature map to keep 6 scales of feature maps used for prediction, which differs from our solution.
\begin{table}[ht]
\begin{center}
\caption{Scale of feature map and default box \label{tab_fp}}
\begin{tabular}[ht]{ccccccc}
\hline
\textbf{Model} &
\multicolumn{6}{c}{\textbf{Scale of Feature Map : Scale of Default Box}} \\
\hline
\hline
\begin{tabular}[c]{@{}c@{}}
Original \\
SSD
\end{tabular}&
38x38:30 & 19x19:60 & 10x10:110 & 5x5:162 & 3x3:213 & 1x1:264 \\
\hline
\begin{tabular}[c]{@{}c@{}}
SSD + \\
MobileNet
\end{tabular}&
19x19:60 & 10x10:105 & 5x5:150 & 3x3:195 & 2x2:240 & 1x1:285 \\
\hline
\textbf{Pelee (ours)} &
\multicolumn{2}{c}{ 19x19: 30.4 \& 60.8 } & 10x10:112.5 & 5x5:164.2 & 3x3:215.8 & 1x1:267.4 \\
\hline
\end{tabular}
\end{center}
\end{table}
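The Pelee scales in Table \ref{tab_fp} appear consistent with the usual SSD rule of linearly spaced scales between a minimum and a maximum. The sketch below assumes endpoints of 60.8 and 267.4 read off the Pelee row (the extra 30.4 box on the 19 x 19 map being half the minimum); the table's values differ from this interpolation only by small rounding:

```python
def default_box_scales(s_min, s_max, num_maps):
    """SSD-style linearly spaced default box scales (in pixels)."""
    step = (s_max - s_min) / (num_maps - 1)
    return [s_min + step * k for k in range(num_maps)]

# Assumed endpoints taken from the Pelee row of the table above.
scales = default_box_scales(60.8, 267.4, 5)
print([f"{s:.1f}" for s in scales])
```

This is a reconstruction for illustration, not the paper's stated formula; the original SSD paper derives scales the same way from $s_{min}$ and $s_{max}$.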
\subsection{Results on VOC 2007}
Our object detection system is based on the source code of SSD\footnote{https://github.com/weiliu89/caffe/tree/ssd} and is trained with Caffe \cite{jia2014caffe}.
The batch size is set to 32. The learning rate is set to 0.005 initially, then decreased by a factor of 10 at 80k and 100k iterations, respectively. The total number of iterations is 120k.
\subsubsection{Effects of Various Design Choices}
Table \ref{tab5.2} shows the effects of our design choices on performance. We can see that the residual prediction block effectively improves accuracy: the model with the residual prediction block achieves 2.2\% higher accuracy than the model without it. The accuracy of the model using 1x1 kernels for prediction is almost the same as that of the model using 3x3 kernels, while 1x1 kernels reduce the computational cost by 21.5\% and the model size by 33.9\%.
\begin{table}[ht]
\begin{center}
\caption{Effects of various design choices on performance\label{tab5.2}}
\begin{tabular}[ht]{cccccc}
\hline
\textbf{38x38 Feature} &
\textbf{ResBlock} &
\begingroup
\renewcommand{\arraystretch}{0.5}
\begin{tabular}[c]{@{}c@{}}
\\
\textbf{Kernel Size}\\
\textbf{for}\\
\textbf{Prediction}
\end{tabular}
\endgroup &
\textbf{FLOPs}
&
\begingroup
\renewcommand{\arraystretch}{0.5}
\begin{tabular}[c]{@{}c@{}}
\\
\textbf{Model Size}\\
\textbf{(Parameters)}
\end{tabular}
\endgroup &
\textbf{mAP (\%)} \\
\hline
\hline
\ding{51}&\ding{55}&3x3&1,670 M& 5.69 M& 69.3\\
\ding{55}&\ding{55}&3x3& 1,340 M &5.63 M & 68.6\\
\hline
\ding{55}&\ding{51}&3x3&1,470 M& 7.27 M& 70.8\\
\ding{55}&\ding{51}&1x1&1,210 M& 5.43 M& \textbf{70.9}\\
\hline
\end{tabular}
\end{center}
\end{table}
\subsubsection{Comparison with Other Frameworks}
As can be seen from Table \ref{tab3.3.2}, the accuracy of Pelee is higher than that of Tiny-YOLOv2 by 13.8\% and higher than that of SSD+MobileNet \cite{huang2016speed} by 2.9\%. It is even higher than that of YOLOv2-288 at only 14.5\% of YOLOv2-288's computational cost. Pelee achieves 76.4\% mAP when we take the model trained on COCO trainval35k, as described in Section \ref{sec3_4}, and fine-tune it on the 07+12 dataset.
\begin{table}[ht]
\begin{center}
\caption{\textbf{Results on PASCAL VOC 2007.} Data:
``07+12'': union of VOC2007 and VOC2012 trainval.
``07+12+COCO'': first train on COCO trainval35k, then fine-tune on 07+12 \label{tab3.3.2}}
\begin{tabular}[ht]{cccccc}
\hline
\textbf{Model} &
\begin{tabular}[c]{@{}c@{}}
\textbf{Input} \\
\textbf{Dimension}
\end{tabular}&
\textbf{FLOPs}
&
\begin{tabular}[c]{@{}c@{}}
\textbf{Model Size} \\
\textbf{(Parameters)}
\end{tabular}&
\textbf{Data} &
\textbf{mAP (\%)} \\
\hline
\hline
YOLOv2&288x288& 8,360 M& 67.13 M&07+12& 69.0\\
Tiny-YOLOv2&416x416 &3,490 M&15.86 M&07+12&57.1 \\
SSD+MobileNet&300x300 &1,150 M&5.77 M&07+12 &68 \\
\textbf{Pelee (ours)}&304x304&1,210 M&5.43 M&07+12&\textbf{70.9} \\
\hline
SSD+MobileNet&300x300 &1,150 M&5.77 M&07+12+COCO &72.7 \\
\textbf{Pelee (ours)}&304x304&1,210 M&5.43 M&07+12+COCO&\textbf{76.4} \\
\hline
\end{tabular}
\end{center}
\end{table}
\subsubsection{Speed on Real Devices}
We then evaluate the actual inference speed of Pelee on real devices. The speed is calculated as the average time over 100 images processed by the benchmark tool. This time includes the image pre-processing time, but not the time of the post-processing part (decoding the bounding boxes and performing non-maximum suppression). Usually, post-processing is done on the CPU and can be executed asynchronously with the other parts that run on the mobile GPU. Hence, the actual speed should be very close to our test result.
Although residual prediction block used in Pelee increases the computational cost, Pelee still runs faster than SSD+MobileNet on iPhone and on TX2 in FP32 mode. As can be seen from Table \ref{tab_coco_speed}, Pelee has a greater speed advantage compared to SSD+MobileNet and SSDLite+MobileNetV2 in FP16 mode.
\begin{table}[ht]
\begin{center}
\caption{Speed on real devices \label{tab_coco_speed}}
\begin{tabular}[ht]{cccccc}
\hline
\multirow{2}{*}{\textbf{Model}}
&
\multirow{2}{*}{
\begin{tabular}[c]{@{}c@{}}
\textbf{Input} \\
\textbf{Dimension}
\end{tabular}}&
\multirow{2}{*}{
\textbf{FLOPs}}
&
\multicolumn{3}{c}{\textbf{Speed} (FPS)} \\
\cline{4-6}
&
&
&
\textbf{iPhone 8} &
\begin{tabular}[c]{@{}c@{}}
\textbf{TX2} \\
(FP16)
\end{tabular}
&
\begin{tabular}[c]{@{}c@{}}
\textbf{TX2} \\
(FP32)
\end{tabular}
\\
\hline
\hline
SSD+MobileNet&300x300&1,200 M&22.8&82&73 \\
SSDLite+MobileNetV2&320x320&805 M&-&62&60\\
\textbf{Pelee (ours)}&304x304&1,290 M&\textbf{23.6}&\textbf{125}&\textbf{77} \\
\hline
\end{tabular}
\end{center}
\end{table}
\subsection{Results on COCO} \label{sec3_4}
We further validate Pelee on the COCO dataset. The models are trained on the COCO train+val dataset excluding 5000 minival images and evaluated on the test-dev2015 set. The batch size is set to 128. We first train the model with a learning rate of $10^{-2}$ for 70k iterations, and then continue training for 10k iterations with $10^{-3}$ and 20k iterations with $10^{-4}$.
Table \ref{tab3.4.4} shows the results on test-dev2015. Pelee is not only more accurate than SSD+MobileNet \cite{huang2016speed}, but also more accurate than YOLOv2 \cite{redmon2016yolo9000} in both mAP@[0.5:0.95] and mAP@0.75. Meanwhile, Pelee is 3.7 times faster in speed and 11.3 times smaller in model size than YOLOv2.
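The factors quoted above can be reproduced directly from the entries of Table \ref{tab3.4.4} (120 vs.\ 32.2 FPS on TX2, and 67.43 M vs.\ 5.98 M parameters):

```python
# Reproducing the speed and model-size factors quoted in the text from the
# table entries: Pelee runs at 120 FPS with 5.98 M parameters, YOLOv2 at
# 32.2 FPS with 67.43 M parameters.
speed_ratio = 120 / 32.2        # Pelee speed vs. YOLOv2
size_ratio = 67.43 / 5.98       # YOLOv2 model size vs. Pelee
print(round(speed_ratio, 1), round(size_ratio, 1))  # 3.7 11.3
```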
\begin{table}[ht]
\begin{center}
\caption{Results on COCO test-dev2015 \label{tab3.4.4}}
\begin{tabular}[ht]{ccccccc}
\hline
\multirow{2}{*}{\textbf{Model}} &
\multirow{2}{*}{
\begin{tabular}[c]{@{}c@{}}
\textbf{Input} \\
\textbf{Dimension}
\end{tabular}
} &
\begin{tabular}[c]{@{}c@{}}
\textbf{Speed} \\
\textbf{on TX2} \\
\textbf{ (FPS)}
\end{tabular}
&
\multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}
\textbf{Model Size} \\
\textbf{(Parameters)}
\end{tabular}}&
\multicolumn{3}{c}{
\textbf{Avg. Precision (\%), IoU:} }
\\
\cline{5-7}
&&&&\textbf{0.5:0.95}
&\textbf{0.5}
&\textbf{0.75} \\
\hline
\hline
\begin{tabular}[c]{@{}c@{}}
Original \\
SSD
\end{tabular}
&300x300&-&34.30 M&25.1&43.1&25.8 \\
YOLOv2&416x416&32.2&67.43 M&21.6&44.0&19.2 \\
YOLOv3&320x320&21.5&62.3 M&-&51.5&- \\
\hline
YOLOv3-Tiny&416x416&105&12.3 M&-&33.1&- \\
SSD+MobileNet&300x300&80&6.80 M&18.8&-&- \\
\begin{tabular}[c]{@{}c@{}}
SSDlite + \\
MobileNet v2
\end{tabular}
&320x320&61&4.3 M&22&-&- \\
\textbf{Pelee (ours)}&304x304&120 &5.98 M &\textbf{22.4} &\textbf{38.3}&\textbf{22.9} \\
\hline
\end{tabular}
\end{center}
\end{table}
\section{Conclusion}
\label{conclusions}
Depthwise separable convolution is not the only way to build an efficient model. Instead of using depthwise separable convolution, our proposed PeleeNet and Pelee are built with conventional convolution and have achieved compelling results on ILSVRC 2012, VOC 2007 and COCO.
By combining efficient architecture design with mobile GPU and hardware-specified optimized runtime libraries, we are able to perform real-time prediction for image classification and object detection tasks on mobile devices. For example, Pelee, our proposed object detection system, can run 23.6 FPS on iPhone 8 and 125 FPS on NVIDIA TX2 with high accuracy.
\section{Introduction}
The true nature of dark matter is not known. Weakly interacting massive particles (WIMPs) are still considered one of the most likely candidates for cold dark matter (CDM), and several experiments are ongoing to try to detect such particles. These are, however, closing in on the neutrino floor, where any signal would be drowned in the solar neutrino background \citep{2007PhRvD..76c3007M}.
Promising alternatives to WIMPs are ultra-light axions, fuzzy dark matter \citep{2016PhR...643....1M,2017PhRvD..95d3541H,PhysRevD.28.1243,PhysRevLett.64.1084,2000NewA....5..103G,Peebles_2000,49c0ac5f658e4b0693e5f368f6d94393,2003PhRvD..68b4023G,PhysRevD.93.025027,2006PhLB..642..192A,2000PhRvL..85.1158H}, and superfluid dark matter \citep{2015PhRvD..92j3510B,2016arXiv160508443K,2015arXiv150703013K,2019JCAP...05..054S}. These models have distinct and observable signatures on the small scales of structure formation, and they are also able to solve some of the discrepancies observed in CDM simulations, such as the missing satellites problem \citep{1999ApJ...524L..19M,1999ApJ...522...82K}, the cusp-core problem \citep{2010AdAst2010E...5D,2018PhRvD..98h3027B,2019PhRvD..99j3020B} and the too-big-to-fail problem \citep{2011MNRAS.415L..40B}. These disparities could, however, have a solution within baryonic processes, which are usually not included in standard CDM simulations, as shown in \citet{2012ApJ...744L...9M}, \citet{Brooks:2012ah}, \citet{2010Natur.463..203G}, \citet{2014ApJ...789L..17M}, \citet{2013MNRAS.429.3068T}, \citet{2012MNRAS.422.1231G}, \citet{2012MNRAS.421.3464P}, \citet{2016MNRAS.457.1931S}, and \citet{2017ApJ...850...97B}. To understand this better, one should ideally perform simulations including both of these components.
In order to quantify the effects of axion-like dark matter models, one needs to either solve a Schr\"odinger-Poisson system or use the Madelung formulation. The latter consists of a set of traditional hydrodynamics equations with an additional pressure term, which can be solved using methods such as smoothed particle hydrodynamics (SPH), as proposed by \citet{2015PhRvD..91l3520M}.
However, it is known that numerical methods based on the Madelung formulation of quantum mechanics are troublesome in regions around voids. This formulation, indeed, breaks down where the density approaches zero and at interference nodes, as the quantum pressure term can easily become singular \citep{2015PhRvD..91l3520M, 2019PhRvD..99f3509L}.
The Schr\"odinger-Poisson system has several applications in cosmology. For instance, the six dimensional Vlasov equation describing collisionless self-gravitating matter is approximated by a Schr\"odinger-Poisson system for a complex wave-function in three dimensions. This was proposed as an alternative way for simulating CDM in \citet{1993ApJ...416L..71W}. It was later shown, by solving the Schr\"odinger-Poisson system and comparing it to the Vlasov solver \texttt{ColDICE} \citep{2016JCoPh.321..644S} in two dimensions, that one has excellent qualitative and quantitative agreement in the solution \citep{2017PhRvD..96l3532K}. A similar study is given in \citet{2018PhRvD..97h3519M}, where the system was solved using a spectral method, demonstrating that one recovers the classical behaviour in the limit where $\hbar \to 0$.
Unfortunately, the methods employed in the above-mentioned papers, despite being very accurate, are too expensive to perform high-resolution simulations in three dimensions. The first high-resolution cosmological simulation of fuzzy dark matter in three dimensions was performed in \citet{2014NatPh..10..496S} using the code \texttt{GAMER} \citep{2010ApJS..186..457S,2018MNRAS.481.4815S}. There, an explicit method, similar to the one we present in this paper, was used. In \citet{PhysRevD.98.043509}, the classical wave-function was used to perform zoom-in simulations studying the formation and evolution of ultralight bosonic dark matter halos from realistic cosmological initial conditions, and in \citet{2017MNRAS.471.4559M} galaxy formation with Bose-Einstein condensate dark matter was studied using a pseudo-spectral method (see also \citet{2018JCAP...10..027E}). There have also been a handful of papers performing simulations more in line with the hydrodynamical formulation. In \citet{2018ApJ...853...51Z}, a new technique to discretise the quantum pressure is proposed and shown to reproduce the expected density profile of dark matter halos. In \citet{2018arXiv180108144N}, a module \texttt{AX-GADGET} for cosmological simulations using SPH inside the \texttt{P-GADGET3} code is presented. These methods do not solve for the wave-function, but they have the advantage of being much less expensive to run than a full wave-function solver like ours. There have also been simulations performed using other numerical techniques in either two \citep{2017PhRvD..96l3532K} or three spatial dimensions \citep{2009ApJ...697..850W}.
In this paper we present \texttt{SCALAR} (Simulation Code for ultrA Light Axions in \texttt{RAMSES}): a general adaptive mesh refinement (AMR) code to solve the Schr\"odinger-Poisson system. Our implementation is in the hydrodynamics and {\it N}-body code \texttt{RAMSES} \citep{2002AA...385..337T}.
The structure of the paper is as follows: in Section~\ref{sec:theory} we present the equations we are to solve, in Section~\ref{sec:numerical} we present the numerical implementation, in Section~\ref{sec:tests} we present tests of the code and in Section~\ref{sec:cosmo} we discuss possible cosmological applications before concluding in Section~\ref{sec:conc}.
\section{Theoretical model}\label{sec:theory}
A Bose-Einstein condensate (BEC) is a system of identical bosons, where a large fraction of particles occupies the lowest quantum energy state, or the ground state. This phenomenon typically takes place in gases, at very low temperatures or very high densities and it was observed for the first time in \citet{Anderson:1995gf} and \citet{PhysRevLett.75.3969}. In the condensate regime, these quantum systems behave as a macroscopic fluid and their peculiar features are a macroscopic manifestation of quantum effects.
In general, when Bose-Einstein condensation occurs, the thermal de Broglie wavelengths of particles start to overlap, as they become greater than the mean inter-particle distance. At this point, a coherent state develops and the system behaves as a macroscopic fluid, where only binary collisions are relevant. The dynamics of BECs is complicated, due to the difficulty in modelling particle self-interactions.
However, in the Hartree mean-field theory and in the limit of $T \to 0$, binary collisions are modelled via an effective potential and the whole quantum system can be described by a single-particle wave-function $\psi(\mathbf{x},t)$ obeying the non-linear Schr\"odinger equation:
\begin{align} \label{eq:GPE}
i\hbar\frac{\partial \psi}{\partial t} = \left [ -\frac{\hbar^2}{2m} \nabla^2 + g | \psi |^2 + mV_{\rm ext} \right ] \psi,
\end{align}
where $m$ is the mass of the boson and $g$ is the self-interaction coupling constant. Often, the trapping potential $V_{\rm ext}(\mathbf{x},t)$ is introduced by hand in order to model the presence of a trap, which is responsible for keeping particles confined.
The single-particle wave-function is normalised such that:
\begin{align}
\int | \psi |^2 ~ {\rm d}^3 x= N,
\end{align}
where $N$ is the total number of particles present in the system. As a consequence, the quantity $|\psi(\mathbf{x},t)|^2$ represents the number density of particles.
An alternative description of the macroscopic fluid is provided by the so-called Madelung formulation of the Schr\"odinger equation. In this case, by expressing the single-particle wave-function in polar coordinates:
\begin{align} \label{eq:psipolar}
\psi = \sqrt{\frac{\rho}{m}} \exp \left ( \frac{i}{\hbar}\theta \right ),
\end{align}
the dynamics of the system is described in terms of the mass density and velocity, which are macroscopic physical quantities, respectively defined as:
\begin{align}
\rho(\mathbf{x},t) &= m |\psi(\mathbf{x},t)|^2, \label{eq:def_dens} \\
\vec{v}(\mathbf{x},t) &= \frac{1}{m} \vec{\nabla} \theta(\mathbf{x},t). \label{eq:def_vel}
\end{align}
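As a minimal illustration of these definitions (not code from \texttt{SCALAR}; grid size, box length and units are arbitrary choices), the Madelung variables can be recovered numerically from a sampled wave-function, here a plane wave with known density and velocity:

```python
import numpy as np

# Illustrative sketch (not code from SCALAR): recovering the Madelung
# variables rho = m |psi|^2 and v = grad(theta)/m from a sampled
# wave-function on a periodic grid. All parameters are arbitrary choices.
m, hbar = 1.0, 1.0
N, L = 256, 2.0 * np.pi
x = np.linspace(0.0, L, N, endpoint=False)

# Plane wave psi = sqrt(rho0/m) * exp(i k x): uniform density rho0 and
# constant velocity hbar*k/m.
rho0, k = 2.0, 3.0
psi = np.sqrt(rho0 / m) * np.exp(1j * k * x)

rho = m * np.abs(psi) ** 2                  # mass density, Eq. (def_dens)
theta = hbar * np.unwrap(np.angle(psi))     # continuous phase
v = np.gradient(theta, x) / m               # velocity, Eq. (def_vel)

print(np.allclose(rho, rho0), np.allclose(v, hbar * k / m))  # True True
```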
Thus, the Schr\"odinger equation can be cast into the following system of equations:
\begin{align}
&\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \vec{v}) = 0, \label{eq:madelung1} \\
&\frac{\partial \mathbf{v}}{\partial t} + (\vec{v}\cdot \nabla)\vec{v} = - \nabla \left ( V_{\rm ext} + \frac{g}{m^2}\rho + Q \right ), \label{eq:madelung2}
\end{align}
which are known as the Madelung or quantum Euler equations. We recognise Eq.~\eqref{eq:madelung1} as a continuity equation, expressing conservation of mass. The second Madelung equation, Eq.~\eqref{eq:madelung2}, expresses conservation of momentum, but it is not the same as the classical momentum equation, as it contains an additional term $Q$, called the quantum pressure, which is defined as:
\begin{align} \label{eq:QP}
Q \equiv -\frac{\hbar^2}{2m^2}\frac{\nabla^2\sqrt{\rho}}{\sqrt{\rho}}.
\end{align}
The quantum pressure is a macroscopic manifestation of quantum effects and it is characteristic of Bose-Einstein condensates.
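A hedged numerical sketch of this term, evaluated with a standard second-order finite difference on a periodic grid (grid size and units are arbitrary choices), also makes explicit why Madelung-based schemes struggle as $\rho \to 0$, where the expression becomes ill-defined:

```python
import numpy as np

# Hedged sketch (not code from SCALAR): evaluating the quantum pressure
# Q = -(hbar^2 / 2 m^2) * lap(sqrt(rho)) / sqrt(rho) with a second-order
# finite difference on a periodic grid. Grid size and units are arbitrary.
def quantum_pressure(rho, dx, hbar=1.0, m=1.0):
    s = np.sqrt(rho)
    lap_s = (np.roll(s, -1) - 2.0 * s + np.roll(s, 1)) / dx**2
    return -hbar**2 / (2.0 * m**2) * lap_s / s

# For a uniform density the quantum pressure vanishes identically; as
# rho -> 0 the division above blows up, the breakdown noted in the text.
print(np.allclose(quantum_pressure(np.full(64, 3.0), 0.1), 0.0))  # True
```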
In this formulation, by defining the velocity as in Eq.~\eqref{eq:def_vel}, we are intrinsically assuming that the fluid is irrotational, since:
\begin{align}
\vec{\nabla} \times \vec{v} = \vec{\nabla} \times \vec{\nabla} \theta = 0.
\end{align}
However, during the evolution of the wave-function, the phase can develop discontinuities of multiples of $2\pi\hbar$ and its gradient can subsequently generate vorticity in the field, as shown in \citet{2014PhRvD..90b3517U,2019PhRvD..99h3524U}.
In cosmology, these kinds of systems can be used to model the dark matter contribution to the energy budget of the Universe. In particular, in the last few decades, models where dark matter is a light boson, such as ultra-light axions or fuzzy dark matter, have received a lot of attention. Due to the small mass of these bosons, macroscopic quantum effects manifest at astronomically relevant scales. In these alternative dark matter models, new signatures are expected within the structure formation process at highly non-linear scales and, therefore, numerical simulations are required in order to explore these scenarios.
Here, the dark matter is again described as a system of gravitationally bound identical bosons. Therefore, the governing equation is a non-linear Schr\"odinger equation, Eq.~\eqref{eq:GPE}, where the external potential is replaced by the gravitational potential. In this class of alternative dark matter models, self-interactions between bosons are often neglected, as the coupling constant $g$ is usually parametrically small. The resulting system of equations describing the dynamics of the dark matter fluid is called the Schr\"odinger-Poisson system and, for an expanding Universe, it reads:
\begin{align}\label{eq:axioncosmo}
&i\hbar \left ( \dfrac{\partial \psi}{\partial t} + \dfrac{3}{2}H\psi \right ) = \left ( -\dfrac{\hbar^2}{2ma^2}\nabla^2 + m\Phi \right ) \psi, \\
&\nabla^2 \Phi = 4\pi G a^2 \left ( |\psi|^2-|\psi(a)|^2 \right ),
\end{align}
where $a$ is the scale-factor of the Universe, $H \equiv d\log(a)/dt$ is the Hubble rate of expansion, and $\Phi$ is the gravitational potential. With a change of variables $\psi \to a^{3/2}\psi$, the non-linear Schr\"odinger equation above takes on the form of Eq.~\eqref{eq:GPE}.
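For illustration only (\texttt{SCALAR} itself relies on the existing gravity solver of \texttt{RAMSES}), the Poisson part of the system can be sketched on a periodic grid with an FFT; subtracting the mean of the source, as in the equation above, keeps the periodic problem well posed:

```python
import numpy as np

# Hedged illustration of the Poisson part of the system on a periodic 1D
# grid, solved with an FFT (SCALAR itself uses the gravity solver already
# present in RAMSES). The k = 0 mode is left at zero, consistent with
# subtracting the mean of the source term as in the equation above.
def poisson_periodic(source, L):
    N = len(source)
    k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)   # angular wavenumbers
    src_hat = np.fft.fft(source - source.mean())
    phi_hat = np.zeros_like(src_hat)
    nz = k != 0
    phi_hat[nz] = -src_hat[nz] / k[nz] ** 2        # invert nabla^2 -> -k^2
    return np.fft.ifft(phi_hat).real

# Analytic check: phi = cos(x) satisfies phi'' = -cos(x).
L = 2.0 * np.pi
x = np.linspace(0.0, L, 128, endpoint=False)
phi = poisson_periodic(-np.cos(x), L)
print(np.allclose(phi, np.cos(x)))  # True
```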
\section{Numerical methods}\label{sec:numerical}
In this section we provide a brief overview of the code \texttt{RAMSES} and the AMR technique. Then, we discuss in detail the numerical aspects of the algorithm we implemented in order to solve the non-linear Schr\"odinger equation. Throughout this section, the dimensionality of the problem is denoted by ${\rm dim}$ and it can be 1, 2 or 3.
\subsection{Overview of \texttt{RAMSES}}
The \texttt{RAMSES} code was originally designed for cosmological simulations of structure formation and subsequently extended to astrophysical applications. It consists of an {\it N}-body particle mesh (PM) code, which solves the gravitational dynamics of a set of macroparticles, sampling the phase space distribution of the dark matter component in the Universe. Through PM algorithms, the mass of each macroparticle is interpolated on a grid and the Poisson equation is solved in order to compute the gravitational potential. Thus, the gravitational force acting on each element of the system and the new phase space position of each macroparticle are computed by solving the corresponding {\it N}-body equation with a leapfrog scheme. In addition, \texttt{RAMSES} can solve the dynamics of the baryonic component present in the Universe. In this case, the grid is also used to sample gas parcels and the evolution of the system is described by the equations of hydrodynamics, which are solved by means of a Godunov scheme. For this purpose, Riemann solvers can be used for computing fluxes of conserved physical quantities among cells.
The \texttt{RAMSES} code implements an AMR strategy, where a hierarchy of nested grids is created in order to increase the local resolution according to a set of refinement criteria. In this way, \texttt{RAMSES} can accurately solve the gas dynamics and the gravitational potential only where more precision is actually needed. This approach considerably reduces the amount of memory needed in cosmological and hydrodynamical simulations, compared to the case where a uniform high-resolution grid is used.
In \texttt{SCALAR}, we rely on the efficient AMR implementation of \texttt{RAMSES}. In order to solve the dynamics of our theoretical model, the single-particle wave-function is sampled by using the original grid allocated by \texttt{RAMSES} for the Poisson and hydrodynamics equations. Also in this case, the AMR approach provides the opportunity to solve the Schr\"odinger equation with higher resolution only where features of the wave-function are more demanding.
\subsection{Adaptive mesh refinement}
The basic unit of the AMR hierarchy is an oct, which consists of a set of $2^{\rm dim}$ cells. At each level in the AMR hierarchy, a grid is a collection of octs with the same resolution. The grid with the coarsest resolution is called domain grid and it covers the whole computational domain. During the evolution of the physical system, when the solution starts to develop features and its tracking requires higher resolution, any cell at a given level can be split into a child oct, with double the resolution of the parent cell.
At each time-step, the AMR structure is modified according to a set of refinement criteria. First, for a generic level of refinement $\ell$, a refinement map is created by marking all those cells satisfying at least one refinement criterion. Cells violating the strict refinement rule are also marked for refinement, in order to guarantee that each child oct at level $\ell+1$ is surrounded by at least $3^{\rm dim}-1$ neighbours at the coarser level. However, if a given cell at level $\ell$ no longer satisfies any refinement criterion, it is marked for de-refinement and its child octs are subsequently destroyed. Then, a new child oct is created at level $\ell+1$ for each marked cell and all the relevant physical quantities are interpolated from level $\ell$. Coarse-fine data interpolation, in an AMR context, is often called prolongation and it can be done by using any of the interpolation schemes described in the section below.
When computing the refinement map, physical quantities can fluctuate around the refinement threshold in subsequent time-steps. This means that some cells at the boundary of a fine-resolution region can be refined and de-refined repeatedly. In this case, the refinement map tends to be quite noisy, and each interpolation operation in a region where the solution is characterised by strong gradients introduces numerical noise into the solution of the non-linear Schr\"odinger equation. For this reason, once the refinement map is computed according to a given set of refinement criteria, a mesh smoothing operator is applied: a cubic buffer of $n_{\rm expand}$ cells around the computed map is additionally marked for refinement. In this way, even if octs are created and destroyed at coarse-fine boundaries, the interpolation procedure is applied in regions where the wave-function is smoother and, thus, it introduces a lower level of numerical noise.
\subsection{The Schr\"odinger equation}
The \texttt{SCALAR} code evolves the solution of the non-linear Schr\"odinger equation by using a Taylor's method, similar to the one designed in \texttt{GAMER}.
Given the wave-function $\psi(\mathbf{x},t_0)$, the formal solution of the non-linear Schr\"odinger equation, Eq.~\eqref{eq:GPE}, at time $t_1 = t_0 + \Delta t$ reads:
\begin{align}\label{eq:formal_sol}
\psi(\mathbf{x},t_1) = \hat{U}(t_1,t_0) \psi (\mathbf{x},t_0),
\end{align}
where $\hat{U}(t_1,t_0)$ is the time evolution operator and it maps the solution of the Schr\"odinger equation at two different times. In the general case, the time evolution operator is defined as:
\begin{align} \label{eq:time_evo}
\hat{U}(t_1,t_0) = \exp \left ( -\frac{i}{\hbar} \int_{t_0}^{t_1} \hat{H}(\mathbf{x},t') ~ {\rm d}t' \right ),
\end{align}
where $\hat{H}(\mathbf{x},t)$ denotes the Hamiltonian of the system. The operator $\hat{U}(t_1,t_0)$ has the following main properties:
\begin{itemize}
\item
$\begin{aligned}[t]
\hat{U}(t,t) = 1,
\end{aligned}$
\item
$\begin{aligned}[t]
\hat{U}(t_1,t_2) ~ \hat{U}(t_2,t_3) = \hat{U}(t_1,t_3),
\end{aligned}$
\item
$\begin{aligned}[t]
\hat{U}(t_1,t_2) = \hat{U}^{\dag}(t_2,t_1) = \hat{U}^{-1}(t_2,t_1).
\end{aligned}$
\end{itemize}
In the limit $\Delta t \ll 1$, the following approximation holds:
\begin{align}
\int_{t_0}^{t_1} \hat{H}(\mathbf{x},t') ~ {\rm d}t' \approx \hat{H}(\mathbf{x},t_0) ~ \Delta t,
\end{align}
and, therefore, Eq.~\eqref{eq:time_evo} can be approximated as:
\begin{align} \label{eq:time_evo_discr}
\hat{U}(t_1,t_0) \approx \exp \left ( {-\frac{i}{\hbar} \hat{H}(\mathbf{x},t_0) \Delta t} \right ).
\end{align}
In the general case, the Hamiltonian $\hat{H}(\mathbf{x},t)$ contains different contributions to the total energy of the system. In particular, we can express $\hat{H}(\mathbf{x},t)$ as a sum of contributions describing kinetic and potential energies. Here, we denote these two operators respectively as $\hat{K}(\mathbf{x},t)$ and $\hat{W}(\mathbf{x},t)$, and they are defined as:
\begin{align}
&\hat{K}(\mathbf{x},t) \equiv -\frac{\hbar^2}{2m}\nabla^2, \\
&\hat{W}(\mathbf{x},t) \equiv m \left(V(\mathbf{x},t) + \frac{g}{m}|\psi(\mathbf{x},t)|^2\right).
\end{align}
By means of the Lie-Trotter formula \citep{10.2307/2033649}, the time evolution operator can be split as well:
\begin{align} \label{eq:time_evo_split}
\hat{U}(t_1,t_0) \approx \exp \left ( {-\frac{i}{\hbar} \hat{W}(\mathbf{x},t_0) \Delta t} \right ) \exp \left ( {-\frac{i}{\hbar} \hat{K}(\mathbf{x},t_0) \Delta t} \right ).
\end{align}
As a consequence, the formal solution of the Schr\"odinger equation can be written as:
\begin{align}
\psi (\mathbf{x},t_1) = &\exp \left ( {-\frac{i}{\hbar} \hat{W}(\mathbf{x},t_0) \Delta t} \right ) \exp \left ( {-\frac{i}{\hbar} \hat{K}(\mathbf{x},t_0) \Delta t} \right ) \psi(\mathbf{x},t_0).
\end{align}
In \texttt{SCALAR}, the two contributions to the time evolution operator are applied separately. First, the \textquotesingle drift\textquotesingle ~due to the kinetic part of the Hamiltonian is approximated via Taylor expansion (here for ${\rm dim} = 3$):
\begin{align} \label{eq:taylor_K}
\bar{\psi}_{i,j,k}^{n+1} &= \exp \left ( {-\frac{i}{\hbar} \hat{K}(\mathbf{x},t_0) \Delta t} \right ) \psi^n_{i,j,k} \nonumber \\
&= \left [ \sum^{\infty}_{N=0} \frac{1}{N!} \left ( -\frac{i}{\hbar} \hat{K}(\mathbf{x},t_0) \Delta t \right )^{N} \right ] \psi^n_{i,j,k} \nonumber \\
&= \left [ 1 + \left(\frac{i\hbar \Delta t}{2m}\nabla^2\right) + \frac{1}{2}\left(\frac{i\hbar \Delta t}{2m}\nabla^2\right)^2 + \ldots\right] \psi^n_{i,j,k},
\end{align}
where, for a generic operator $\hat{O}$, the notation $\hat{O}^{N}$ denotes $N$ consecutive applications of the same operator. In the \texttt{SCALAR} code, the Taylor expansion is performed up to $\mathcal{O}( \Delta t^3)$, which is the minimum required by the stability analysis of the numerical scheme. Furthermore, the Laplacian operator is discretised by a standard second-order finite difference formula:
\begin{align}
\nabla^2 \psi_{i,j,k}^n = &\frac{\psi_{i+1,j,k}^n+\psi_{i-1,j,k}^n-2\psi^n_{i,j,k}}{\Delta x^2}+ \nonumber\\
&\frac{\psi_{i,j+1,k}^n+\psi_{i,j-1,k}^n-2\psi^n_{i,j,k}}{\Delta x^2}+ \nonumber\\
&\frac{\psi_{i,j,k+1}^n+\psi_{i,j,k-1}^n-2\psi^n_{i,j,k}}{\Delta x^2}.
\end{align}
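A minimal one-dimensional sketch of this \textquotesingle drift\textquotesingle, truncating the Taylor series as displayed above and using the kinetic CFL condition of Eq.~\eqref{eq:CFL} with $C_K = 0.2$ (all grid parameters and the initial wave-function are illustrative choices, not taken from \texttt{SCALAR}):

```python
import numpy as np

# Minimal 1D sketch of the Taylor-expanded 'drift' on a periodic grid,
# truncated as in the displayed equation. Grid size, units and the initial
# wave-function are arbitrary; SCALAR applies this oct by oct on the AMR grid.
hbar, m = 1.0, 1.0
N, L = 256, 2.0 * np.pi
dx = L / N
x = np.arange(N) * dx

def laplacian(f, dx):
    return (np.roll(f, -1) - 2.0 * f + np.roll(f, 1)) / dx**2

def drift(psi, dt, dx):
    a = 1j * hbar * dt / (2.0 * m)       # coefficient of nabla^2 above
    l1 = laplacian(psi, dx)
    l2 = laplacian(l1, dx)
    return psi + a * l1 + 0.5 * a**2 * l2

dt = 0.2 * np.sqrt(3.0) / 2.0 * m * dx**2 / hbar   # kinetic CFL, C_K = 0.2
psi = np.exp(1j * np.sin(x))
psi_new = drift(psi, dt, dx)

# The truncated expansion is not exactly unitary, but for a CFL-limited
# step the norm is conserved to high accuracy.
print(abs(np.sum(np.abs(psi_new)**2) / np.sum(np.abs(psi)**2) - 1.0) < 1e-6)  # True
```

The residual non-unitarity visible here is exactly what the continuity-equation correction described below is designed to suppress.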
Then, the \textquotesingle kick\textquotesingle ~due to the potential is computed and the wave-function at the new time-step reads:
\begin{align} \label{eq:exp_W}
\psi^{n+1}_{i,j,k} = \exp \left ( {-\frac{i}{\hbar} \hat{W}^n_{i,j,k} \Delta t} \right ) \bar{\psi}^{n+1}_{i,j,k}.
\end{align}
Here, the advantage of the Lie-Trotter splitting is clear: while the kinetic contribution to the time evolution operator needs Taylor expansion in order to be applied, the potential contribution only provides a phase rotation of the wave-function and it can be computed exactly.
Once the new wave-function has been computed, the corresponding mass density is obtained according to:
\begin{align}
\rho_{i,j,k}^{n+1} \big |_S = m \left | \psi_{i,j,k}^{n+1} \right | ^ 2.
\end{align}
\subsection{The continuity equation}
In quantum mechanics, the time evolution operator is unitary, as expressed by the properties listed above. This means that the mass density carried by the wave-function is conserved, and this also holds if we consider the two contributions to the Hamiltonian separately. However, the Taylor expansion, Eq.~\eqref{eq:taylor_K}, breaks the unitarity of the time evolution operator. Therefore, in order to improve the conservation properties of our main numerical scheme, we implement a secondary solver for the continuity equation associated with the non-linear Schr\"odinger equation.
Eq.~\eqref{eq:madelung1} can be written in its conservative form:
\begin{align} \label{eq:cont_eqn_cons}
\frac{\partial \rho}{\partial t} + \mathbf{\nabla} \cdot \mathbf{j} = 0,
\end{align}
where $\rho(\mathbf{x},t)$ represents the mass density. Here, the quantity $\mathbf{j}(\mathbf{x},t)$ is the associated density current, or flux, and it is defined as:
\begin{align} \label{eq:fluxes}
\mathbf{j} \equiv -i\frac{\hbar}{2m} \left ( \psi^{*} \mathbf{\nabla} \psi - \psi \mathbf{\nabla} \psi^{*} \right ).
\end{align}
By explicitly considering the real and imaginary parts of the wave-function, the density current can also be expressed as:
\begin{align}\label{eq:fluxes2}
\mathbf{j} = \frac{\hbar}{m} \left ( \Re[\psi] \nabla \Im[\psi] - \Im[\psi] \nabla \Re[\psi] \right ).
\end{align}
In \texttt{SCALAR}, Eq.~\eqref{eq:cont_eqn_cons} is discretised by using a first-order Godunov scheme:
\begin{align} \label{eq:new_dens}
\frac{\rho_{i,j,k}^{n+1} - \rho_{i,j,k}^{n}}{\Delta t} &+ \frac{ \left (j_{i+\frac{1}{2},j,k}^{n+\frac{1}{2}} - j_{i-\frac{1}{2},j,k}^{n+\frac{1}{2}} \right )}{\Delta x} \nonumber\\
&+ \frac{\left (j_{i,j+\frac{1}{2},k}^{n+\frac{1}{2}} - j_{i,j-\frac{1}{2},k}^{n+\frac{1}{2}} \right )}{\Delta x} \nonumber\\
&+ \frac{\left (j_{i,j,k+\frac{1}{2}}^{n+\frac{1}{2}} - j_{i,j,k-\frac{1}{2}}^{n+\frac{1}{2}} \right ) }{\Delta x} = 0,
\end{align}
where the time-centered fluxes are computed at cell interfaces. In order to compute them, the wave-function is first computed at half time-step, by advancing the solution by $0.5\Delta t$. Then, the wave-function at cell interfaces is estimated in each dimension by linear interpolation:
\begin{align}\label{eq:lin_int_psi}
\psi_{i+\frac{1}{2},j,k}^{n+\frac{1}{2}} &= \frac{\psi_{i,j,k}^{n+\frac{1}{2}}+\psi_{i+1,j,k}^{n+\frac{1}{2}}}{2}, \nonumber \\
\psi_{i,j+\frac{1}{2},k}^{n+\frac{1}{2}} &= \frac{\psi_{i,j,k}^{n+\frac{1}{2}}+\psi_{i,j+1,k}^{n+\frac{1}{2}}}{2}, \\
\psi_{i,j,k+\frac{1}{2}}^{n+\frac{1}{2}} &= \frac{\psi_{i,j,k}^{n+\frac{1}{2}}+\psi_{i,j,k+1}^{n+\frac{1}{2}}}{2}. \nonumber
\end{align}
Its gradient, instead, is computed in each dimension by means of the first-order finite difference formula and it reads:
\begin{align}\label{eq:lin_int_nabla_psi}
\nabla\psi_{i+\frac{1}{2},j,k}^{n+\frac{1}{2}} &= \frac{\psi_{i+1,j,k}^{n+\frac{1}{2}}-\psi_{i,j,k}^{n+\frac{1}{2}}}{\Delta x}, \nonumber \\
\nabla\psi_{i,j+\frac{1}{2},k}^{n+\frac{1}{2}} &= \frac{\psi_{i,j+1,k}^{n+\frac{1}{2}}-\psi_{i,j,k}^{n+\frac{1}{2}}}{\Delta x}, \\
\nabla\psi_{i,j,k+\frac{1}{2}}^{n+\frac{1}{2}} &= \frac{\psi_{i,j,k+1}^{n+\frac{1}{2}}-\psi_{i,j,k}^{n+\frac{1}{2}}}{\Delta x}. \nonumber
\end{align}
Thus, the time-centered density current at cell interfaces is computed by means of Eq.~\eqref{eq:fluxes2}.
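In one dimension and on a periodic grid, the interface construction of Eqs.~\eqref{eq:lin_int_psi}--\eqref{eq:lin_int_nabla_psi} combined with Eq.~\eqref{eq:fluxes2} can be sketched as follows (illustrative parameters; for a plane wave the current approaches $\hbar k |A|^2/m$ up to discretisation error):

```python
import numpy as np

# Illustrative 1D, periodic-grid version of the interface flux computation:
# the half-step wave-function is averaged onto cell faces, its gradient is
# taken with a first-order difference, and the current follows Eq. (fluxes2).
hbar, m = 1.0, 1.0

def interface_current(psi_half, dx):
    psi_face = 0.5 * (psi_half + np.roll(psi_half, -1))    # psi at i+1/2
    grad_face = (np.roll(psi_half, -1) - psi_half) / dx    # gradient at i+1/2
    return hbar / m * (psi_face.real * grad_face.imag
                       - psi_face.imag * grad_face.real)

# Plane wave A exp(i k x): the current should approach A^2 * hbar * k / m.
N, L, k, A = 256, 2.0 * np.pi, 1.0, 1.3
dx = L / N
x = np.arange(N) * dx
j = interface_current(A * np.exp(1j * k * x), dx)
print(np.allclose(j, hbar * k * A**2 / m, rtol=1e-3))  # True
```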
However, this solver is not used to explicitly advance the mass density in time, but only to enforce the conservation of mass. Indeed, denoting by $\rho_{i,j,k}^{n+1} \big |_S$ and $\rho_{i,j,k}^{n+1} \big |_C$ the new mass densities computed by the main and the secondary solvers, respectively, a correcting factor is computed as follows:
\begin{align} \label{eq:r}
R = \frac{\rho^{n+1}_{i,j,k} \big |_C}{\rho^{n+1}_{i,j,k} \big |_S},
\end{align}
which is used to rescale the wave-function.
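A small sketch of this rescaling step, under the natural reading that the amplitude of the wave-function is multiplied by $\sqrt{R}$ so that its density matches the one delivered by the continuity solver (this interpretation, and the toy numbers, are ours):

```python
import numpy as np

# Sketch of the rescaling: the phase of the wave-function is kept, while its
# amplitude is taken from the mass density delivered by the continuity
# solver (multiplying psi by sqrt(R), with R from Eq. (r)); m = 1 here.
def rescale(psi_S, rho_S, rho_C):
    R = rho_C / rho_S
    return psi_S * np.sqrt(R)

psi = np.array([1.0 + 1.0j, 2.0 + 0.0j])
rho_S = np.abs(psi) ** 2                 # density from the kinetic solver
rho_C = np.array([2.2, 3.9])             # made-up 'conserved' densities
psi_new = rescale(psi, rho_S, rho_C)
print(np.allclose(np.abs(psi_new) ** 2, rho_C))  # True
```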
Although solving the continuity equation on top of the Schr\"odinger equation considerably improves the conservation properties of the algorithm, this process does not ensure perfect conservation of mass. Since we truncate the Taylor expansion of the kinetic operator, Eq.~\eqref{eq:taylor_K}, at third order in time, the kinetic solver introduces truncation errors in the solution of the Schr\"odinger equation. However, by rescaling the wave-function with the correct amplitude computed from the continuity equation, we are able to significantly reduce the truncation errors, as shown by the improvement of several orders of magnitude in the conservation properties of \texttt{SCALAR} in the tests we present in Section~\ref{sec:tests}.
Unfortunately, the rescaling procedure is subject to accumulation of round-off errors, leading to an evolution of the error in the conservation of mass itself. However, the round-off errors are of the order of machine precision, in contrast to the much higher amplitude of the truncation errors introduced by the kinetic solver. This solution was already adopted in the \texttt{GAMER} code \citep{2014NatPh..10..496S}.
\subsection{The solver}
\texttt{SCALAR} solves the Schr\"odinger equation from the coarser to the finer level in the AMR hierarchy. For a generic refinement level $\ell$, the optimal time-step is chosen as:
\begin{align} \label{eq:CFL}
\Delta t = \min\left[C_K \cdot \frac{\sqrt{3}}{2\hbar}m(\Delta x)^2,~~C_W \cdot \frac{2\pi \hbar}{m|V_{\rm max}|}\right],
\end{align}
where $|V_{\rm max}|$ denotes the maximum absolute value of the effective potential $V + \frac{g}{m}|\psi|^2$. Here, $C_K$ and $C_W$ are Courant factors which are required to be smaller than one. The first term in the square brackets is determined by the Von Neumann stability analysis of the kinetic part of the solver. The second term, instead, requires that the phase of the wave-function does not rotate by a bigger angle than $2\pi C_W$ within a time-step.
In general, the Courant factors $C_K$ and $C_W$ are chosen empirically, depending on the characteristics of the physical system that one aims to model. In our case, we set $C_K = C_W = 0.2$ since it provides a good accuracy on the solution of the non-linear Schr\"odinger equation, without sacrificing too much computation time. In Appendix~\ref{sect:app_stability} we provide a detailed discussion of the Von Neumann stability analysis of the numerical scheme.
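The time-step criterion of Eq.~\eqref{eq:CFL} is simple enough to transcribe directly (illustrative units; $C_K = C_W = 0.2$ as adopted in the text):

```python
import numpy as np

# Direct transcription of the time-step criterion, Eq. (CFL), with the
# Courant factors C_K = C_W = 0.2 used in the text. Units are illustrative.
def time_step(dx, V_max, hbar=1.0, m=1.0, C_K=0.2, C_W=0.2):
    dt_K = C_K * np.sqrt(3.0) / 2.0 * m * dx**2 / hbar   # kinetic limit
    dt_W = C_W * 2.0 * np.pi * hbar / (m * abs(V_max))   # phase-rotation limit
    return min(dt_K, dt_W)

# The kinetic limit scales as dx^2: halving the cell size cuts dt by four,
# which motivates the flexible time-step option described below.
print(np.isclose(time_step(0.1, 1e-6) / time_step(0.05, 1e-6), 4.0))  # True
```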
In the original \texttt{RAMSES} code, two different options are available for the choice of the time-step: a single or an adaptive time-step. While the former consists in using the same time-step for all refinement levels, determined by the finest level in the AMR hierarchy, the latter allows smaller time-steps to be used on finer refinement levels. However, in the case of an adaptive time-step, for each coarse time-step at level $\ell$ it is possible to perform only two fine time-steps at level $\ell+1$.
In \texttt{SCALAR}, an additional option is available: a flexible time-step, where for each coarse time-step at level $\ell$, the number of fine steps at level $\ell+1$ is flexible and determined by a level-dependent Courant-Friedrichs-Lewy (CFL) condition. From Eq.~\eqref{eq:CFL}, when the optimal time-step is chosen by the kinetic CFL condition, the time-step scales with the grid size as $\Delta t \propto \Delta x^2$, which is a stricter condition than in the usual case of the hydrodynamics equations. Therefore, a flexible time-step can significantly reduce the total number of coarse time-steps in a simulation.
Within a generic level of refinement $\ell$, \texttt{SCALAR} solves the non-linear Schr\"odinger equation for each oct separately. Thus, in order to advance the solution over a time-step, the solver proceeds as follows:
\begin{enumerate}
\item For a given oct, a cubic buffer of neighbouring cells is collected. The equations are actually solved only for the central oct, while the buffer cells are used to compute Laplacians at each order of the Taylor expansion. If the central oct lies next to the coarse-fine boundary, the wave-function is interpolated into ghost cells from level $\ell-1$.
\item The kinetic solver evolves the wave-function to the new time-step by means of Eq.~\eqref{eq:taylor_K}. First, by advancing the solution by $0.5\Delta t$, the half time-step solution $\bar{\psi}_{i,j,k}^{n+1/2}$ is estimated from $\psi_{i,j,k}^{n}$ and is later used to compute the mass density currents. Then, the full-step solution $\bar{\psi}_{i,j,k}^{n+1}$ and the new mass density $\rho_{i,j,k}^{n+1} \big |_S$ are computed.
\item The wave-function at half time-step $\bar{\psi}_{i,j,k}^{n+1/2}$ is interpolated at cell interfaces by using Eqs.~\eqref{eq:lin_int_psi}--\eqref{eq:lin_int_nabla_psi} and the time-centered density currents are computed by means of Eq.~\eqref{eq:fluxes2}.
\item The continuity equation is solved and the new mass density $\rho_{i,j,k}^{n+1} \big |_C$ is computed via Eq.~\eqref{eq:new_dens}.
\item The rescaling factor given by Eq.~\eqref{eq:r} is computed and $\bar{\psi}_{i,j,k}^{n+1}$ is rescaled in order to preserve mass conservation.
\item If the given oct lies next to the coarse-fine boundary, the estimated flux is stored for the subsequent reflux operation.
\item Finally, the phase rotation due to the potential is computed by applying Eq.~\eqref{eq:exp_W} to $\bar{\psi}_{i,j,k}^{n+1}$, and $\psi_{i,j,k}^{n+1}$ is evaluated.
\end{enumerate}
All steps, except the first and the last, are performed separately for each physical dimension. This procedure is called dimensional splitting and it reduces an $N$-dimensional problem to a system of $N$ one-dimensional problems. It has the advantage of relaxing the CFL condition of the solver and, therefore, it allows larger time-steps. A flowchart of our solver is shown in Fig.~(\ref{fig:flowchart}).
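The dimensional-splitting idea can be sketched with a toy one-dimensional kinetic update applied along each axis in turn (a first-order truncation of the Taylor propagator, for illustration only; the actual solver uses the higher-order expansion of Eq.~\eqref{eq:taylor_K}, and periodic `np.roll` neighbours stand in for the oct buffers):

```python
import numpy as np

def kinetic_step_1d(psi, dt, dx, m, axis):
    # One-dimensional kinetic update (code units, hbar = 1), first-order
    # Taylor truncation: psi <- psi + (i dt / 2m) * d^2(psi)/dx^2.
    lap = (np.roll(psi, -1, axis) - 2.0 * psi + np.roll(psi, 1, axis)) / dx**2
    return psi + 0.5j * dt / m * lap

def kinetic_step(psi, dt, dx, m):
    # Dimensional splitting: the N-dimensional update is performed as a
    # sequence of N one-dimensional updates, one per axis.
    for axis in range(psi.ndim):
        psi = kinetic_step_1d(psi, dt, dx, m, axis)
    return psi
```

Each 1D sweep only couples cells along a single axis, which is what relaxes the CFL condition compared to a fully multi-dimensional update.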
\begin{figure}
\centering
\includegraphics[width=0.9\columnwidth]{flowchart.pdf}
\caption{Flowchart of the numerical algorithm implemented to solve the Schr\"odinger-Poisson system. The kinetic solver is shown in blue (left) and the continuity solver is shown in red (right).}
\label{fig:flowchart}
\end{figure}
\subsection{Refinement strategy}
In \texttt{SCALAR}, we implement the same \textquotesingle quasi-Lagrangian\textquotesingle ~approach as \texttt{RAMSES} uses for hydrodynamics: when the total mass of a given cell exceeds a given threshold, the cell is marked for refinement. The level dependent density threshold is defined as:
\begin{align}\label{eq:masscrit}
\rho_{\ell} = \frac{M_c}{(\Delta x_{\ell})^{\rm dim}},
\end{align}
where $M_c$ corresponds to the maximum mass allowed per cell.
In addition, following \citet{2014NatPh..10..496S} and the \texttt{FLASH} code \citep{Fryxell2000}, we implement support for the invariant version of the L\"ohner error estimator. It is based on the second derivative of a given physical quantity, normalised by the first derivative. Considering a generic physical quantity $f$, the error estimator $E_L$ reads:
\begin{align} \label{eq:lohner}
E_L = \left \{ \frac{ \sum_{ij} \left ( \dfrac{\partial^2 f}{\partial x_i \partial x_j} \right )^2 }{
\sum _{ij} \left [
\dfrac{1}{2\Delta x_j} \left ( \left | \dfrac{\partial f}{\partial x_i} \right |_{i+1} + \left | \dfrac{\partial f}{\partial x_i} \right |_{i-1} \right ) + \xi \dfrac{ |\bar{f}_{ij}| }{ \Delta x_i \Delta x_j }
\right ]^2
} \right \}^{1/2},
\end{align}
where the indices $i$,$j$ run over the physical dimensions. Small fluctuations of the physical quantity $f$ are filtered out by the second term in the denominator. The quantity $|\bar{f}_{ij}|$ is an average of $f$ over dimensions $i$,$j$ and $\xi$ is a small constant. This error estimator is dimensionless and can therefore be applied to any physical quantity. Furthermore, in Eq.~\eqref{eq:lohner}, $E_L$ is bounded in the interval $[0,1]$. In \texttt{SCALAR}, we apply the L\"ohner error estimator separately to ${\Re[\psi]}$ and ${\Im[\psi]}$. Then, the final estimate of the error on the wave-function is given by:
\begin{align}
E_L = \sqrt{ \left (E_L^{ \Re } \right )^2 + \left ( E_L^{ \Im } \right )^2 },
\end{align}
and if it exceeds a user-defined threshold, the cell is marked for refinement.
This threshold can be chosen empirically, depending on the features one needs to resolve in the solution of the governing equations. Although it is currently implemented in \texttt{SCALAR}, we did not employ the L\"ohner error estimator in the tests we present in Section~\ref{sec:tests}. In general, while testing the implementation of the L\"ohner error estimator separately, we find that a value of $E_L = 0.7$ provides a good balance between computational cost and accuracy for the solution of the non-linear Schr\"odinger equation.
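In one dimension the estimator reduces to a simple ratio of second to first differences; a Python sketch (illustrative function names; periodic boundaries via `np.roll`, and the multi-dimensional cross terms of Eq.~\eqref{eq:lohner} are omitted):

```python
import numpy as np

def lohner_1d(f, xi=0.01):
    # One-dimensional Lohner estimator: |second difference| normalised by
    # the sum of |first differences| plus a small filter term xi*|f|,
    # which suppresses flagging of tiny fluctuations.
    num = np.abs(np.roll(f, -1) - 2.0 * f + np.roll(f, 1))
    den = (np.abs(np.roll(f, -1) - f) + np.abs(f - np.roll(f, 1))
           + xi * (np.abs(np.roll(f, -1)) + 2.0 * np.abs(f) + np.abs(np.roll(f, 1))))
    return num / np.maximum(den, 1e-30)

def refine_flags(psi, threshold=0.7, xi=0.01):
    # Apply the estimator to real and imaginary parts, combine in
    # quadrature, and flag cells exceeding the threshold.
    err = np.sqrt(lohner_1d(psi.real, xi)**2 + lohner_1d(psi.imag, xi)**2)
    return err > threshold
```

A smooth sine wave stays well below the threshold, while a sharp jump is flagged, which is the intended behaviour of the criterion.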
\subsection{Spatial and temporal interpolation}
In \texttt{SCALAR}, interpolation is required when a generic level $\ell$ in the AMR hierarchy is refined and new child octs are created at level $\ell+1$,
or when, during the solving process, boundary conditions need to be specified for a fine-grid patch and ghost cells are created.
In both cases, the wave-function at the coarse level $\ell$ is interpolated down to level $\ell +1$. When the Laplacian operator is applied while solving the equations of motion, any discontinuity in the second derivative of the wave-function introduces an error, which propagates into the solution of the non-linear Schr\"odinger equation and can destroy the wave-function. Therefore, high-order interpolation schemes are implemented in order to keep the wave-function as smooth as possible.
In particular, in \texttt{SCALAR}, we implement two high-order interpolation schemes. In both cases, the interpolating function is a fourth-order polynomial, but the coefficients of the polynomials are chosen in different ways. In one case, Lagrange basis polynomials are computed in order to set the coefficients, resulting in a fourth-order Lagrange interpolation scheme. In the second case, fourth-order conservative interpolation is performed and the coefficients of the interpolation are set by requiring that cell averages of the interpolated quantities are preserved. In the case of adaptive time integration, linear temporal interpolation can also be applied when computing boundary conditions for a fine-level patch, since coarse-grid and fine-grid wave-functions can be discretised at different times.
Furthermore, the interpolation can be performed on two different sets of variables: the original set of variables ${\Re[\psi]}$ and ${\Im[\psi]}$, or derived variables $m|\psi|^2$ and $\rm{Arg}[\psi]$, corresponding to mass density and phase of the wave-function. The interpolation schemes and the set of variables used for the interpolation process can be specified by the user in the parameter file.
Further details on the interpolation schemes can be found in Appendix~\ref{sect:app_prolong}.
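As a one-dimensional analogue, the value of the fourth-order Lagrange polynomial through five coarse samples can be evaluated at a fine-cell position as follows (a sketch with illustrative names; the code works on three-dimensional octs and also offers the conservative variant):

```python
import numpy as np

def lagrange4_interp(f_coarse, x):
    # Fourth-order Lagrange interpolation: evaluate the unique quartic
    # through five coarse samples located at x = -2, -1, 0, 1, 2
    # (coarse-cell units) at the target position x.
    nodes = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
    result = 0.0
    for i, xi in enumerate(nodes):
        # Lagrange basis weight for node i.
        w = np.prod([(x - xj) / (xi - xj) for j, xj in enumerate(nodes) if j != i])
        result += w * f_coarse[i]
    return result
```

Since the interpolant is a quartic, any polynomial of degree up to four is reproduced exactly, which keeps the second derivative of the interpolated wave-function smooth.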
\subsection{Artificial viscosity}
In the tests shown in the upcoming sections, when run at the domain level only, the solution of the non-linear Schr\"odinger equation remains stable for as long as we could run \texttt{SCALAR}. However, when refinements were included, the solver had a tendency to develop spurious high-frequency waves at coarse-fine boundaries, even after improving the order of accuracy of the interpolation schemes. In order to artificially damp spurious oscillations, we introduced an empirical viscosity term in the non-linear Schr\"odinger equation. Incorporating the viscosity term, the non-linear Schr\"odinger equation, Eq.~\eqref{eq:GPE}, is replaced by:
\begin{align}
i\hbar\frac{\partial \psi(\mathbf{x},t)}{\partial t} = \left [ -\frac{\hbar}{2m} \left( 1- i \epsilon \right) \nabla^2 + g | \psi(\mathbf{x},t) |^2 + mV_{\rm ext}(\mathbf{x},t) \right ] \psi(\mathbf{x},t),
\end{align}
where the constant $\epsilon>0$ quantifies the strength of the damping term. For example, if we consider a single plane-wave:
\begin{align}
\psi \propto \exp(i\omega t - ikx),
\end{align}
the viscosity term acts in a similar way to a Gaussian filter, damping the wave-function by a factor of:
\begin{align}
\exp \left ( -\frac{k^2 \epsilon t}{2m} \right ).
\end{align}
This means that, including the artificial viscosity, the wave-function is simply replaced by:
\begin{align}
\psi \to \psi \exp \left ( -\frac{k^2 \epsilon t}{2m} \right ).
\end{align}
In this way, the filter leaves untouched physical low frequency modes in the wave-function, while smoothing the spurious numerical oscillations.
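The per-mode effect of the viscosity term can be sketched directly (illustrative helper; code units with $\hbar=1$, as elsewhere in the paper):

```python
import numpy as np

def damping_factor(k, t, m, eps):
    # Damping factor applied to a plane wave of wavenumber k after time t
    # by the artificial viscosity term: exp(-k^2 eps t / (2 m)).
    return np.exp(-k**2 * eps * t / (2.0 * m))
```

For $\epsilon = 0.2$ a box-scale mode is left essentially untouched, while a grid-scale mode with $k \sim \pi/\Delta x$ is suppressed by many orders of magnitude.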
In general, an artificial viscosity term would affect mass conservation. However, by solving the continuity equation on top of the non-linear Schr\"odinger equation, mass conservation is enforced and the artificial viscosity simply acts as a viscous force. Indeed, if we consider the Madelung formulation of quantum mechanics instead, Eq.~\eqref{eq:madelung1} and Eq.~\eqref{eq:madelung2}, the artificial viscosity term would be placed, together with the quantum force, in the momentum equation. Thus, Eq.~\eqref{eq:madelung2} would read:
\begin{align}
\frac{\partial \vec{v}}{\partial t} + (\vec{v}\cdot \nabla)\vec{v} = - \nabla \left ( V + \frac{g}{m^2}\rho + Q - \frac{\epsilon}{2m} \frac{ \nabla \left ( \rho\vec{v} \right )}{\rho} \right ).
\end{align}
The new viscous force term:
\begin{align*}
F_{\rm{viscous}} = -\frac{\epsilon}{2m} \nabla \left( \frac{ \nabla \left ( \rho\vec{v} \right )}{\rho} \right ),
\end{align*}
helps prevent high-frequency waves from building up in time. With the addition of such a viscosity term, we are able to evolve the wave-function without any issues over hundreds of oscillation periods in our tests while, at the same time, preserving mass, energy and agreement with analytical solutions. There is no unique prescription for solving the issues we encountered and, in general, it is possible to design more elaborate artificial viscosity terms.
In an AMR context, when choosing the value of the damping constant $\epsilon$, one should keep in mind that spurious high-frequency oscillations appear on scales of the local grid resolution, which is not fixed but changes according to the refinement criteria. Thus, the strength of the damping term should be chosen such that it does not over-suppress the wave-function in high-resolution regions, but rather slightly under-suppresses spurious oscillations in low-resolution regions. This limitation of the artificial viscosity term we implemented motivates further investigations to develop a more accurate scheme for damping high-frequency spurious oscillations.
Empirically, we find that a value in the range $0.2 < \epsilon < 1$ ensures stability over a long time in all the test cases we present (except the soliton test case, where we set $\epsilon = 0$ and do not use any artificial viscosity), by preventing the growth of spurious oscillations in the solution of the non-linear Schr\"odinger equation.
\subsection{Code units}
We adopt the set of \textquotesingle super-comoving coordinates\textquotesingle ~introduced in \citet{1998MNRAS.297..467M}, and already used in \texttt{RAMSES}. Thus, the following change of variables is performed:
\begin{align}
\begin{gathered}
\tilde{x} ~= ~ \frac{x}{a L},~~~d\tilde{\tau} ~ = ~ \frac{H_0{\rm d}t}{a^2},~~~\tilde{\psi} ~ = ~ \frac{\psi}{\overline{\psi}}, \\~~~\tilde{V} ~ = ~ \frac{V a^2}{(H_0L)^2},~~~\tilde{m} ~ = ~ \frac{mH_0L^2}{\hbar},~~~\tilde{g} = \frac{a^2g}{H_0\hbar}|\overline{\psi}|^2,
\end{gathered}
\end{align}
where $H_0$ is the Hubble constant, $L$ is the box size, and $\overline{\psi}$ is chosen to ensure that $\int |\tilde{\psi}|^2 ~ {\rm d}^{\rm dim}\tilde{x} = 1$. As a consequence, the resulting non-linear Schr\"odinger equation reads:
\begin{align}
i\frac{d\tilde{\psi}}{d\tilde{\tau}} + \frac{1}{2\tilde{m}}\tilde{\nabla}^2\tilde{\psi} - \tilde{m}\tilde{V} \tilde{\psi} - \tilde{g}|\tilde{\psi}|^2\tilde{\psi} = 0.
\end{align}
This set of coordinates was specifically designed for cosmological applications. However, it can be used for any application by setting the scale factor $a$ to unity and replacing $H_0$ by a general inverse time scale $T^{-1}$. In the remainder of the paper, all equations are in these code units.
For the particular case of axion dark matter in a cosmological setting (see Eq.~\eqref{eq:axioncosmo}) we have $\tilde{g} = 0$, $\overline{\psi} \propto a^{-3/2}$, and the potential is determined via the Poisson equation:
\begin{align}
\tilde{\nabla}^2 \tilde{V} &= \frac{3}{2}a \Big [\Omega_{\rm axions}(|\tilde{\psi}|^2-1) + \Omega_{\rm CDM}(\tilde{\rho}_{\rm CDM}-1) \nonumber \\
&+ \Omega_{\rm baryons}(\tilde{\rho}_{\rm baryons}-1) + \ldots \Big ] ~ ,
\end{align}
where $\Omega_{i}$ is the fraction of the energy budget of our Universe that is in matter component $i$ (axions, baryons, CDM, etc.) and the mean value of $\tilde{\rho}_i$ over the box is set to unity.
\section{Tests of the code}\label{sec:tests}
In this section we present the numerical experiments we performed in order to test the main features of \texttt{SCALAR}. When testing the accuracy of our numerical schemes, we rely on three main tests: conservation of mass, energy, and reproduction of analytical solutions. Given the total mass $M\left( t \right)$ in the simulation box and the total energy $E\left( t \right)$, the corresponding errors are respectively defined as:
\begin{align*}
\epsilon_{\rm{mass}} = \left | \frac{M\left( t \right)-M\left( 0 \right)}{M\left( 0 \right)} \right | ~~~ \text{and} ~~~ \epsilon_{\rm{energy}} = \left | \frac{E\left( t \right)-E\left( 0 \right)}{E\left( 0 \right)} \right | ~ .
\end{align*}
Similarly, denoting the analytical solution by $\psi_{\rm{a}}\left(x,t\right)$ and the numerical solution by $\psi_{\rm{n}}\left(x,t\right)$, the error with respect to the analytical solution at a given time is computed according to the following formula:
\begin{align*}
\epsilon_{\rm{solution}} = \frac{\left<(\psi_{\rm{a}}\left(x,t\right)-\psi_{\rm{n}}\left(x,t\right))^2\right>_x}{\left<\psi^2_{\rm{a}}\left(x,t\right)\right>_x} ~ ,
\end{align*}
where $\left<\,\cdot\,\right>_x$ denotes the mean over the box. For the tests we present here, we always assume that each cell in the AMR grid has the same size in all dimensions: $\Delta x = \Delta y = \Delta z$. Furthermore, in all the tests, we only employ the refinement criterion based on density, as we found no need to use the L\"ohner error estimator in order to achieve an accurate and stable solution over time.
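These error definitions translate directly into code (a sketch with illustrative names; for complex fields we interpret the squares in $\epsilon_{\rm solution}$ as squared moduli):

```python
import numpy as np

def mass_error(M_t, M_0):
    # Relative error on the conserved total mass.
    return abs((M_t - M_0) / M_0)

def energy_error(E_t, E_0):
    # Relative error on the conserved total energy.
    return abs((E_t - E_0) / E_0)

def solution_error(psi_a, psi_n):
    # Box-averaged squared deviation from the analytical solution,
    # normalised by the box-averaged squared analytical solution.
    return np.mean(np.abs(psi_a - psi_n)**2) / np.mean(np.abs(psi_a)**2)
```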
\paragraph{Accuracy and performance.}
While running the tests we discuss in this section, we measured the overall accuracy and performance of the numerical algorithms implemented in \texttt{SCALAR}.
The accuracy of a numerical method is often measured by comparing the analytical solution with the numerical solution.
For this purpose, we compute the error with respect to the analytical solution as given above. The overall accuracy of \texttt{SCALAR} is obtained by measuring how the global error, computed at a given time, scales with the grid spacing. In Fig.~\ref{fig:accuracy}, we show the sample of errors we obtained for one of the numerical experiments we performed, the soliton test. The data points are fitted by the formula:
\begin{align*}
\log_2 \left (y \right ) = \alpha \log_2\left( x \right) + \beta ~ ,
\end{align*}
where $x$ and $y$ represent the grid size and the error with respect to the analytical solution, respectively. The parameters of the fit, $\alpha$ and $\beta$, are determined by means of the Linear Least Squares (LLS) method and, in this case, we obtain $\alpha \sim 1.8$ and $\beta \sim 6.7$.
While $\beta$ is not relevant when measuring the convergence of a numerical scheme, the parameter $\alpha$ represents the slope of the fitting polynomial and it corresponds to the global accuracy of the solver. Thus, for \texttt{SCALAR}, we achieve an overall second-order accuracy.
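The fit itself is a two-parameter linear least-squares problem in log-log space; a minimal sketch using NumPy (illustrative name):

```python
import numpy as np

def convergence_order(dx, err):
    # Fit log2(err) = alpha * log2(dx) + beta by linear least squares.
    # alpha estimates the global order of accuracy of the scheme.
    alpha, beta = np.polyfit(np.log2(dx), np.log2(err), 1)
    return alpha, beta
```

Feeding synthetic data with a known error law recovers the expected slope, which is a useful sanity check of the fitting procedure itself.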
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Accuracy.png}
\caption{
Overall accuracy of the numerical scheme implemented to solve the Schr\"odinger-Poisson system. Blue points represent errors with respect to the analytical solution for the soliton test, computed at a given time for $\Delta x = 2^{-8}$, $2^{-7}$, $2^{-6}$. The orange dashed line corresponds to the polynomial fit.}
\label{fig:accuracy}
\end{figure}
We measured the general performance of \texttt{SCALAR} on a small cluster. Every node of the cluster is equipped with two Intel E5-2670 processors, with 8 cores each, a clock frequency of $f_{\rm CPU} = 2.6$ GHz and a total memory of $128$ GB. Nodes within the cluster are interconnected through a Mellanox 56 Gb FDR InfiniBand network. In applications with gravity enabled, \texttt{SCALAR} can evolve the solution of the non-linear Schr\"odinger equation by updating a single cell in the AMR hierarchy in $\sim11$ $\mu$s. This means that our code is able to reach $\sim 10^{5}$ cell updates per second.
\paragraph{Conservation of mass.}
The non-linear Schr\"odinger equation has the conserved quantity:
\begin{align*}
M = \int |\psi|^2 ~ {\rm d}^{\rm dim}x,
\end{align*}
which in the Madelung formulation is just the total mass of the fluid.
Mass and energy are not manifestly conserved by the main solver, therefore monitoring them is a useful test.
In simulations with no refinements and without enforcing mass conservation, we typically find an error in the conservation of mass of the order of (the prefactor corresponds to $\Delta x = 2^{-6}$):
\begin{align*}
\frac{\Delta M}{M} \sim 10^{-6}\left(\frac{t}{T} \right),
\end{align*}
where $T$ is the oscillation period. When we allow for refinements, the situation is typically worse, and it is not good enough for cosmological simulations. However, by solving the continuity equation on top of the Schr\"odinger equation, we observe an improvement in the conservation of mass down to:
\begin{align*}
\frac{\Delta M}{M} \sim 10^{-13}\left(\frac{t}{T}\right).
\end{align*}
This does not change when we allow refinements and, even though the error grows linearly in time, it is good enough for cosmological simulations. This is shown in Fig.~(\ref{fig:mass}), where we perform the sine wave test on the domain grid only, with a resolution of $N_{\rm cell} = 2^6$ cells, corresponding to $\ell=6$ and $\Delta x = 2^{-6}$, in one dimension. Details regarding the sine wave test are described in Section~\ref{sec:sinewave}.
See Section~\ref{sec:numerical} for an explanation of the temporal evolution of the error in the conservation of mass.
\paragraph{Conservation of energy.}
Since we enforce mass conservation by solving the continuity equation, energy conservation is a better accuracy test for our code.
By defining kinetic and potential energy as:
\begin{align}
K &\equiv \frac{1}{2m^2}\int |\nabla\psi|^2 ~{\rm d}^{\rm dim}x, \\
W &\equiv \frac{1}{2} \int V_{\rm eff}|\psi|^2 ~ {\rm d}^{\rm dim}x,
\end{align}
the temporal change in the total energy of the system is expressed by:
\begin{align}
\frac{d}{dt}(K+W) = \frac{1}{2}\int \frac{\partial V_{\rm eff}}{\partial t}|\psi|^2 ~ {\rm d}^{\rm dim}x.
\end{align}
As we can see, in the case where the effective potential has no explicit time-derivatives, the energy $E = K + W$ is conserved under the evolution.
In a cosmological setting, the potential $V$ depends on time via the scale-factor and this leads to a Layzer-Irvine equation \citep{2017PhRvD..96l3532K}:
\begin{align}\label{eq:LI}
\frac{d}{dt}(K+W) - HW = 0,
\end{align}
which can be monitored by integrating it during the simulation.
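Monitoring Eq.~\eqref{eq:LI} amounts to accumulating the $HW$ source term along the run; a sketch using stored snapshots (illustrative names; trapezoidal quadrature):

```python
import numpy as np

def layzer_irvine_residual(t, K, W, a):
    # Cumulative residual of d(K+W)/dt - H W = 0, with H = (da/dt)/a,
    # integrated over stored snapshot arrays (t, K, W, a) with the
    # trapezoidal rule. A well-behaved run keeps this residual small.
    H = np.gradient(a, t) / a
    integral = np.concatenate(
        ([0.0], np.cumsum(0.5 * np.diff(t) * (H[1:] * W[1:] + H[:-1] * W[:-1]))))
    return (K + W) - (K[0] + W[0]) - integral
```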
\paragraph{Comparison to analytical solutions.}
The most stringent test we can perform is to directly compare the numerical solution with an analytical solution. However, the discretised version of the governing equations is a different problem from the theoretical continuous limit and, thus, admits a different solution. Usually, the main difference between the solutions of the continuous and discretised non-linear Schr\"odinger equations is that the wave-functions evolve with slightly different temporal phases. For this reason, we estimate the phase difference between the two solutions and, starting from the analytical solution of the continuous non-linear Schr\"odinger equation, we compute the theoretical solution of the discretised equation. Then, we compare the numerical solution with the proper solution of the discretised non-linear Schr\"odinger equation.
This provides a way of comparing the numerical solution with that of the problem we are actually modelling. In the limit $\Delta x \to 0$, the solutions of the continuous and discretised non-linear Schr\"odinger equations converge to the same solution.
In this way, we show that the numerical solution is in excellent agreement with the theoretical solution of the discretised problem, even when the resolution is not extremely high and the computation time therefore remains moderate.
In the plots below we show, for different choices of initial conditions and potential, the error on the conservation of mass, the error on the conservation of energy, and the error with respect to the analytical solution.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Mass.png}
\caption{
Errors in conservation of mass and energy when solving the Schr\"odinger equation only (K) and when solving together the Schr\"odinger and the continuity equations (K+C). Solving the continuity equation together with the Schr\"odinger equation improves the conservation properties of the algorithm by $\sim6$ orders of magnitude.}
\label{fig:mass}
\end{figure}
\subsection{Sine wave}\label{sec:sinewave}
The sine wave test evolves a static one-dimensional density profile, where the initial wave-function is set as:
\begin{align}
\psi(x,0) = \sin(2\pi n x).
\end{align}
It evolves in a constant potential, which is defined as:
\begin{align}
V = 1 - \frac{2\pi^2 n^2}{m^2},
\end{align}
where $m$ is the mass carried by the wave-function and the period of oscillation is given by:
\begin{align}
T = \frac{2\pi}{m}.
\end{align}
The full analytical solution reads:
\begin{align}\label{eq:an_sol_sine}
\psi(x,t) = e^{-i\frac{2\pi t}{T}} \sin(2\pi n x).
\end{align}
This numerical experiment is designed to test the creation of ghost cells when computing fine-level boundary conditions. Here we only refine according to the mass criterion, Eq.~\eqref{eq:masscrit}, and, since the density profile does not evolve in time, there is no dynamical creation or destruction of grids: once the refinement map is computed at the beginning, it does not change. We evolve the solution of the Schr\"odinger equation over $100$ periods of oscillation.
It is possible to show that the solution of the discretised equation (the one we are solving) is the same as Eq.~\eqref{eq:an_sol_sine}, up to second order in space, but with a slightly different period of oscillation. Therefore, to factor out the dependence of the period on resolution (which we test separately), we correct the analytical solution by replacing $T$ with $T_{\rm discrete}$, where:
\begin{align}\label{eq:periodsinwave}
\frac{T_{\rm discrete}}{T} &= \frac{1}{1 + T^2 n^2\left(\frac{1-\cos(2\pi n\Delta x)}{(2\pi n\Delta x)^2} - \frac{1}{2}\right)} \nonumber\\
&\approx \frac{1}{1-\frac{\pi^2T^2n^4}{6}\Delta x^2}.
\end{align}
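The exact ratio and its small-$\Delta x$ expansion can be sketched as follows (illustrative names; note that expanding the exact expression with $(1-\cos u)/u^2 \approx 1/2 - u^2/24$ gives a correction $\propto n^4\Delta x^2/6$):

```python
import numpy as np

def period_ratio_exact(T, n, dx):
    # Exact ratio T_discrete / T for the sine-wave test with a
    # second-order discrete Laplacian.
    u = 2.0 * np.pi * n * dx
    corr = T**2 * n**2 * ((1.0 - np.cos(u)) / u**2 - 0.5)
    return 1.0 / (1.0 + corr)

def period_ratio_approx(T, n, dx):
    # Leading-order expansion of the exact ratio in dx.
    return 1.0 / (1.0 - np.pi**2 * T**2 * n**4 * dx**2 / 6.0)
```

Since the bracketed correction is negative, the discrete period is slightly longer than the continuous one, and the two expressions agree to high accuracy at the resolutions used in the test.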
This test was performed for ${\rm dim} = 1$. The non-linear Schr\"odinger equation is solved together with the continuity equation, in order to enforce conservation of mass. The domain grid resolution is $N_{\rm cell} = 64$, corresponding to $\ell=6$ and $\Delta x = 2^{-6}$, and the maximum refinement level is set to $\ell_{\rm max} = 8$. When boundary conditions for fine levels are needed, phase and density are interpolated in ghost cells by means of fourth-order conservative interpolation. We used artificial viscosity with $\epsilon = 0.2$. The results from this test are shown in Fig.~(\ref{fig:coswavetest}).
We have also performed a similar test using a quadratic potential leading to a Gaussian profile:
\begin{align*}
\psi \propto \exp\left(-\frac{x^2}{\sigma^2}\right),
\end{align*}
with very similar results.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{CosineWave.png}
\caption{Evolution of the three errors as a function of time for the sine wave test. While the error on the conservation of mass (orange) evolves according to $\Delta M / M \sim 10^{-13}(t/T)$, the error on the conservation of energy (blue) stays constant. Furthermore, the error with respect to the analytical solution (green) does not evolve over time.
}
\label{fig:coswavetest}
\end{figure}
\subsection{Travelling wave}
This test simulates a one-dimensional wave-packet travelling through a periodic box. Here, we test the dynamical creation and destruction of grids, since the AMR hierarchy follows the density profile as it moves in the direction of wave propagation. In this case, there is no potential and the initial conditions are defined as:
\begin{align}
\psi(x,0) = \frac{1}{\sqrt{2}}\left[e^{i k_1x} + e^{i k_2x}\right],
\end{align}
where $k_1 = 2\pi n_1$, $k_2 = 2\pi n_2$ with $n_1\not= n_2\in\mathbb{N}$. The oscillation frequency of a single mode is:
\begin{align*}\omega(k) = \frac{k^2}{2m},
\end{align*}
and the analytical solution of the Schr\"odinger equation reads:
\begin{align}
\psi(x,t) = \frac{1}{\sqrt{2}}\left[e^{i(k_1x - \omega(k_1) t)} + e^{i(k_2x - \omega(k_2) t)}\right].
\end{align}
As a consequence, the density is given by:
\begin{align}
|\psi(x,t)|^2 = 1 + \cos \left(2\pi x(n_2-n_1) + \frac{2\pi t}{T}\right),
\end{align}
where the oscillation period is defined as:
\begin{align}
T = \frac{m}{\pi(n_1^2-n_2^2)}.
\end{align}
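The identity between the two-mode wave-function and the travelling density pattern can be checked directly (a sketch with illustrative names):

```python
import numpy as np

def psi_travelling(x, t, n1, n2, m):
    # Two-mode superposition with free-particle dispersion w(k) = k^2/2m.
    k1, k2 = 2.0 * np.pi * n1, 2.0 * np.pi * n2
    w = lambda k: k**2 / (2.0 * m)
    return (np.exp(1j * (k1 * x - w(k1) * t))
            + np.exp(1j * (k2 * x - w(k2) * t))) / np.sqrt(2.0)

def density_travelling(x, t, n1, n2, m):
    # Closed-form density: 1 + cos(2 pi x (n2 - n1) + 2 pi t / T).
    T = m / (np.pi * (n1**2 - n2**2))
    return 1.0 + np.cos(2.0 * np.pi * x * (n2 - n1) + 2.0 * np.pi * t / T)
```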
The wave-function is evolved in time over $100$ oscillation periods.
The non-linear Schr\"odinger equation is solved together with the continuity equation, in order to enforce conservation of mass.
In this case too, the coarse-fine data interpolation is performed by means of fourth-order conservative interpolation. However, while density and phase are interpolated in ghost cells, new refinements are made by interpolating the real and imaginary parts of the wave-function. The domain grid has the same resolution as in the previous test and refinements are allowed up to $\ell_{\rm max} = 8$. We used artificial viscosity with $\epsilon = 0.2$. The results from this test are shown in Fig.~(\ref{fig:travwavetest}).
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{TravellingWave.png}
\caption{Evolution of the three errors as a function of time for the travelling wave test. While the error on the conservation of mass (orange) evolves according to $\Delta M / M \sim 10^{-13}(t/T)$, the error on the conservation of energy (blue) stays roughly constant. Furthermore, the error with respect to the analytical solution (green) does not evolve over time.
}
\label{fig:travwavetest}
\end{figure}
\subsection{Soliton}
In a cosmological context, \texttt{SCALAR} can be used to simulate the structure formation process with fuzzy dark matter. In this case, the density profiles of the dark matter halos differ from those of standard CDM.
We can find a stationary solution which can be tested by taking:
\begin{align*}
\psi(x,t) = e^{-i\frac{2\pi t}{T}}\chi(x),
\end{align*}
and solving the resulting ODE for $\chi(x)$. A numerical fit to the density profile of a soliton in three dimensions was first suggested in \citet{2014NatPh..10..496S} and then in \citet{2015MNRAS.451.2479M}. Although it is an approximate solution, it is useful when coupling the Poisson equation to the non-linear Schr\"odinger equation. In these works, the density profile of the soliton was found to be of the form:
\begin{align}
\rho \propto \frac{1}{\left[1 + (r/r_{\rm core})^2\right]^8},
\end{align}
where $r_{\rm core}$ can be chosen as a free parameter, see Appendix~\ref{sect:app_soliton} for more details.
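For reference, the fitting profile can be written as a one-line helper (illustrative names; normalisation $\rho_0$ and core radius $r_{\rm core}$ are treated as free parameters of the fit):

```python
import numpy as np

def soliton_density(r, rho0, r_core):
    # Soliton core fitting profile: rho(r) = rho0 / [1 + (r/r_core)^2]^8.
    return rho0 / (1.0 + (r / r_core)**2)**8
```

With this parametrisation the density drops by a factor $2^8 = 256$ at $r = r_{\rm core}$ and falls off as $r^{-16}$ at large radii.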
We set this density profile analytically and evolve the system. The density profile remains approximately stationary while the wave-function oscillates as:
\begin{align*}
\psi(x,t) \propto e^{-i\frac{2\pi t}{T}}.
\end{align*}
This test was performed for $\rm{dim} = 3$ and the non-linear Schr\"odinger equation is solved together with the continuity equation. The domain grid contains $N_{\rm cell}= 64^3$ cells, corresponding to $\ell=6$ and $\Delta x = 2^{-6}$, and the maximum refinement level allowed is $\ell = 8$. In this case, both refinement and ghost-cell interpolation are performed by fourth-order conservative interpolation on density and phase.
The artificial viscosity term is set to $\epsilon=0$: we found that self-gravity is able to stabilise the wave-function against spurious numerical oscillations. We tested the same physical case with different values of artificial viscosity, but did not find any improvement in accuracy. In the other tests, where we set $\epsilon=0.2$, the artificial viscosity term was introduced only to ensure stability and accuracy of the solution to the non-linear Schr\"odinger equation over a very large number of oscillation periods.
The results for this test are shown in Fig.~(\ref{fig:solitontest}).
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Soliton.png}
\caption{Evolution of the three errors as a function of time, for a self-gravitating soliton. While the error on the conservation of mass (orange) evolves according to $\Delta M / M \sim 10^{-13}(t/T)$, the error on the conservation of energy (blue) stays constant. Furthermore, the error with respect to the analytical solution (green) does not evolve over time.
}
\label{fig:solitontest}
\end{figure}
\section{Cosmological applications}\label{sec:cosmo}
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{Delta.png}
\caption{
Evolution of the dimensionless power spectrum $\Delta^2(k)$ with redshift.
}\label{Fig:pofk}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{Errs.png}
\caption{
Evolution of the errors on conservation of mass (orange) and energy (blue), as a function of redshift.
}\label{Fig:errs}
\end{figure}
\begin{figure*}[!htb]
\centering
\includegraphics[width=\linewidth]{Dens21n.png}
\caption{
Projection along the $z$ axis of the dark matter density field, normalised by the critical density of the Universe. The box is $1~\text{Mpc}/h$ in comoving units and represents the entire simulation box.
}\label{Fig:ax_dens}
\end{figure*}
\begin{figure*}[!htb]
\centering
\includegraphics[width=\linewidth]{Real21.png}
\caption{
Projection along the $z$ axis of the real part of the dark matter field. The box is $1~\text{Mpc}/h$ in comoving units and represents the entire simulation box.
}\label{Fig:ax_real}
\end{figure*}
\begin{figure*}[!htb]
\centering
\includegraphics[width=\linewidth]{Imag21.png}
\caption{
Projection along the $z$ axis of the imaginary part of the dark matter field. The box is $1~\text{Mpc}/h$ in comoving units and represents the entire simulation box.
}\label{Fig:ax_imag}
\end{figure*}
The \texttt{SCALAR} code was originally developed in order to perform numerical simulations of structure formation with fuzzy dark matter. To demonstrate the capabilities of our code, in this section we present a test involving a full 3D cosmological setup.
The simulation box models a $B_0 = 1~\rm{Mpc}/h$ portion of the Universe expanding over time, according to a flat $\Lambda$CDM model. We take $\Omega_{\rm{\Lambda}} = 0.7$ and $\Omega_{\rm{M}} = 0.3$, and the Hubble constant is set to $H_0 = 100 ~h~\rm{km}~\rm{s}^{-1}~\rm{Mpc}^{-1}$, where $h=0.67$. The base resolution, which defines the domain level, is set to $N_{\rm cell} = 256^3$ and up to one level of refinement is allowed. This means that, in this test, we achieve a maximum resolution of $\Delta x \sim 4~h^{-1}\rm{kpc}$. The mass of the boson is set to $m = 10^{-21}~\rm{eV}$ and conservation of mass is enforced by solving the continuity equation on top of the non-linear Schr\"odinger equation.
In order to better appreciate the differences between CDM and fuzzy dark matter, the initial conditions are computed for the case of CDM.
The initial density and velocity fields are computed by using the Zel'dovich approximation. Then, following \citet{2017PhRvD..96l3532K}, we convert them into an initial wave-function.
The evolution starts at redshift $z=1000$ and we run the simulation as long as we are able to resolve the quantum force with at least two cells, meaning until redshift $z=30$. We did not run this simple test further because, due to lack of resolution, we cannot resolve the inner part of collapsed objects. This will be the subject of an upcoming paper.
In Fig.~\ref{Fig:ax_dens}, Fig.~\ref{Fig:ax_real} and Fig.~\ref{Fig:ax_imag} we show the dark matter density field, the real and the imaginary parts of the wave-function for a selection of redshifts, $z = 200, 100, 50, 30$. As the field clusters under the effect of gravity, the wave-function develops the wave patterns which are characteristic of this class of models.
By taking the density contrast $\delta \left( \mathbf{x} \right)$ inside the simulation box, we expand it in Fourier modes as follows:
\begin{align}
\delta\left( \mathbf{x} \right) = \int d^3 k ~ \delta \left( \mathbf{k} \right) \exp\left( -i \mathbf{k} \cdot \mathbf{x} \right).
\end{align}
The matter power spectrum is defined by means of the autocorrelation function, which can be expressed as:
\begin{align}
\braket{\delta \left( \mathbf{x} \right)\delta \left( \mathbf{x} \right)} &= \int_0^{\infty} \frac{dk}{k} \frac{k^3 \left| \delta\left( k \right)\right|^2}{2\pi^2} \\
&= \int_0^{\infty} \frac{dk}{k} \frac{k^3 P\left( k \right)}{2\pi^2}.
\end{align}
In Fig.~\ref{Fig:pofk}, we plot the dimensionless power spectrum, defined as:
\begin{align}
\Delta^{2} \left( k \right) = \frac{k^3 P\left( k \right)}{2 \pi^2},
\end{align}
for $z = 200, 100, 50, 30$. The results here show the same quantitative behaviour as seen in \citet[see Fig.~2]{2009ApJ...697..850W}, who performed the same kind of simulation as we do here. As the field gravitationally collapses, the quantum pressure immediately leaves its imprints on $\Delta^2 \left( k \right)$ by producing the characteristic suppression of power at small scales, above $k \sim 300~h/\text{Mpc}$. Given the mass of the boson, the suppression scale is expected to be around the redshift-dependent Jeans wave-number, which can be defined as:
\begin{align} \label{eq:supp_scale2}
k_{J} &= \left( \frac{16 \pi G \rho_{a}}{1+z} m^2 \right)^{1/4} \nonumber \\
&= 66.5 \left( \frac{\Omega_{a}}{0.12 h^2} \right)^{1/4} \left( \frac{m}{10^{-22}~{\rm eV}} \right)^{1/2} \left( 1+z \right)^{-1/4} ~ \frac{h}{\text{Mpc}}~,
\end{align}
where $\Omega_a$ is the dimensionless density parameter of axions and $h$ is the dimensionless Hubble constant.
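For orientation, the scaling formula of Eq.~(\ref{eq:supp_scale2}) can be evaluated numerically. The sketch below assumes the density-parameter factor is to be read as $\Omega_a h^2/0.12$ and uses illustrative input values:

```python
def k_jeans(m22, z, omega_a_h2=0.12):
    """Comoving Jeans wave-number in h/Mpc from the fitted scaling formula.

    m22: boson mass in units of 1e-22 eV; omega_a_h2: Omega_a * h^2.
    (The Omega_a*h^2/0.12 reading of the density factor is an assumption.)
    """
    return 66.5 * (omega_a_h2 / 0.12)**0.25 * m22**0.5 * (1.0 + z)**-0.25

# boson mass used in this test (m = 1e-21 eV) at the final redshift z = 30
print(round(k_jeans(10.0, 30.0)))  # -> 89
```

The heavier boson pushes $k_J$ to roughly a hundred $h/\text{Mpc}$ at the final redshift, consistent with the suppression appearing only on the smallest scales probed by the box.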
Between $k = 10~h/\text{Mpc}$ and $k = 100~h/\text{Mpc}$, the power spectrum describes modes of the density field that are still in the linear regime and, therefore, $\Delta^2 \left( k \right)$ evolves with redshift according to linear theory:
\begin{align*}
\Delta^2 \propto \left( 1+z \right)^{-2},
\end{align*}
in a similar way to the CDM case.
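Measuring $\Delta^2(k)$ from a simulated density-contrast grid can be sketched with a standard FFT estimator. This is a minimal version with simple spherical binning and no shot-noise or window corrections; the grid size and number of bins are illustrative:

```python
import numpy as np

def dimensionless_power(delta, box, nbins=20):
    """Estimate Delta^2(k) = k^3 P(k) / (2 pi^2) from a periodic
    density-contrast grid of side length `box` (minimal sketch)."""
    n = delta.shape[0]
    dhat = np.fft.fftn(delta) * (box / n)**3          # approximate continuum FT
    kfreq = 2.0*np.pi * np.fft.fftfreq(n, d=box/n)
    kx, ky, kz = np.meshgrid(kfreq, kfreq, kfreq, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)
    pk = np.abs(dhat)**2 / box**3                     # P(k) estimator
    edges = np.linspace(0.0, kmag.max(), nbins)
    kcen = 0.5 * (edges[1:] + edges[:-1])
    pbin = np.array([pk[(kmag >= lo) & (kmag < hi)].mean()
                     for lo, hi in zip(edges[:-1], edges[1:])])
    return kcen, kcen**3 * pbin / (2.0*np.pi**2)

rng = np.random.default_rng(0)
kc, d2 = dimensionless_power(rng.standard_normal((32, 32, 32)), 1.0)
```

For a white-noise field, $P(k)$ is flat and the estimator returns $\Delta^2 \propto k^3$, which is a quick sanity check of the binning.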
In Fig.~\ref{Fig:errs}, the evolution of errors in conservation of mass and energy are plotted against redshift. The error in the conservation of mass slowly evolves in time, as described in Section \ref{sec:tests}.
Furthermore, we track the evolution of the error in the conservation of energy, by integrating the Layzer-Irvine equation along the simulation and checking at which level of accuracy Eq.~\eqref{eq:LI} is satisfied. As shown in the same figure, the error in the conservation of energy does not grow significantly, thus ensuring that no energy is numerically dissipated by the solver or by the artificial viscosity term.
\section{Conclusions}\label{sec:conc}
In \texttt{SCALAR}, we implemented a set of numerical algorithms developed in order to solve the non-linear Schr\"odinger equation in an AMR framework. Eq.~\eqref{eq:GPE} can be used to describe the dynamics of a Bose-Einstein condensate, a system of identical bosons in the ground energy state, by means of a single-particle wave-function in a mean field approach. Here, the non-linearity arises from an effective potential, which can contain both a self-interaction term and a generic external potential. Bose-Einstein condensates find their application in several fields. As an example, alternative dark matter models involving Bose-Einstein condensates have been recently developed, such as ultra-light axion dark matter, fuzzy dark matter and superfluid dark matter. The Schr\"odinger equation is solved with a Taylor method, similar to the algorithm developed in \texttt{GAMER}. In order to improve the conservation properties of the numerical scheme, the continuity equation is solved on top of the non-linear Schr\"odinger equation and mass conservation is enforced by construction.
Empirically, by running several tests, we found that our numerical method is second-order accurate.
In order to test the main components of \texttt{SCALAR}, a test suite was designed. In particular, we tested the performance of the solver with and without solving the continuity equation on top of the non-linear Schr\"odinger equation, the creation of ghost cells when boundary conditions need to be specified for levels with fine resolution, and the dynamical creation and destruction of grids during the process of mesh refinement. For this purpose, we tracked the mass and energy conservation properties of our numerical schemes during the evolution of the system. The result is that both mass and energy are well conserved. While the latter remains roughly constant in all the different cases, the former evolves in time. Indeed, the error on the conservation of mass grows according to:
\begin{align*}
\frac{\Delta M}{M} \sim 10^{-6} \left( \frac{t}{T} \right),
\end{align*}
for all our test-cases as we advance the solution in time. However, by solving the continuity equation on top of the non-linear Schr\"odinger equation, the error on the conservation of mass improves by several orders of magnitude, growing as:
\begin{align*}
\frac{\Delta M}{M} \sim 10^{-13} \left( \frac{t}{T} \right),
\end{align*}
and it remains very small even for cosmological simulations. Furthermore, we compared the numerical solutions found by \texttt{SCALAR} with the analytical solutions of all the test cases. We show that the numerical solution tracks the analytical one very well over a long evolution time. In this case also, the error with respect to the analytical solution remains roughly constant over time. We showed that high-frequency spurious oscillations created at coarse-fine boundaries by interpolation schemes are efficiently damped by an artificial viscosity term. However, the long-term evolution of the single-particle wave-function still represents a challenge if the artificial viscosity term is not included. We also ran a small cosmological simulation showing that \texttt{SCALAR} is able to capture the relevant features of models like fuzzy dark matter on cosmological scales.
Future work will aim at developing new high-order interpolation schemes which will not require the inclusion of an artificial viscosity term. Furthermore, we plan to compare the performance and accuracy of \texttt{SCALAR} with similar codes. In a following paper, we plan to exploit \texttt{SCALAR} to explore the non-linear regime of structure formation with alternative dark matter models. In particular, we want to run a set of high-resolution cosmological simulations in order to test fuzzy dark matter and provide further predictions.
The code will soon be publicly available through our GitHub repository\footnote{http://github.com/mattiamina}.
\begin{acknowledgements}
We thank the Research Council of Norway for their support. Computations were performed on resources provided by UNINETT Sigma2 -- the National Infrastructure for High Performance Computing and Data Storage in Norway. HAW was supported by the European Research Council through 646702 (CosTesGrav). We also thank the anonymous referee for several suggestions that helped improve the paper.
\end{acknowledgements}
\section{Introduction}
The theoretical discovery of the second-generation topological insulators \cite{TI_Zhang} (2GTIs) triggered an intensive experimental effort to observe the predicted surface states \cite{TI_SbTe_exp,time_dep_ARPES,bise_arpes2,bise_arpes,SARPES_bite,dichroism,SARPES_bite2,CucontorolledBB,FeonBiSe} (SSs) being protected by time-reversal symmetry \cite{fu}.
It turned out that the physical properties of the prepared samples are greatly affected by electron acceptor/donor impurities, which can be found either in the bulk or on the surface.
As-grown Bi$_2$Se$_3\,$ samples are typically electron doped by intrinsic point defects \cite{BiSe_vacancies_exp,BiSe_vacancies,Cadopants_exp}.
However, it has been shown that the Fermi level can be tuned by the insertion of further bulk dopants into the system \cite{SARPES_bite,bise_arpes,Cadopants_exp}.
The presence of the charged impurities in the system and the inhomogeneous charge distribution close to the surface generates an effective electrostatic field, which can be probed experimentally, for example by second-harmonic generation \cite{potential_probe1,potential_probe2}.
In addition, the evolved electrostatic field induces a band bending (BB) in the bulk band structure close to the surface, which was successfully observed by angle-resolved photoemission spectroscopy (ARPES) experiments as well \cite{TI_SbTe_exp,time_dep_ARPES,bise_arpes,surfandbulk,Kdopants_exp1,Cadopants_exp}.
The experimental manipulation of the BB field was also accomplished by the insertion of bulk Cu dopants into the Bi$_2$Se$_3\,$ matrix \cite{CucontorolledBB}.
Moreover, ARPES experiments also demonstrated the possibility to shift the Dirac cone by applying charge dopants on the Bi$_2$Se$_3\,$ surface \cite{SARPES_bite,Cadopants_exp}.
An experimental evidence for a large shift of the Dirac cone towards the conduction band was also reported by gated terahertz cyclotron resonance measurements performed on thin Bi$_2$Se$_3\,$ film \cite{Dirac_shift_exp1}.
Besides these comprehensive experimental studies, numerous theoretical works were also devoted to the description of the physical properties of the 2GTIs, including first-principles calculations \cite{TI_Zhang,GW,GW2,KKR_bite,Kdopants,Kdopants2,Bibilayer}, tight-binding \cite{Kdopants,TBmodel,TBcikk} or effective continuous \cite{fu} models.
The structure of the 2GTIs can be described by a sequence of weakly bound quintuple layers (QLs), each consisting of five atomic layers.
The effect of the bulk dopants on the electronic behavior of Bi$_2$Se$_3\,$ crystal was also addressed in recent theoretical studies \cite{TBcikk,BiSe_doping,Kdopants,Kdopants2}.
In addition, Galanakis \emph{et al.} \cite{TBcikk} also suggested that the BB profile and the energy of the Dirac point can be controlled by electrostatic effects.
The importance of the BB has been demonstrated in other 2GTI based nanostructures as well, including topological/normal insulator interfaces \cite{BB_TI_I_exp,Dirac_engeneering,BB_TI_I,spatial_Dirac_shift}.
Motivated by these research findings, in this work we study the BB effect and the Dirac cone shift on the surface of 2GTIs theoretically.
In particular, we perform screened Korringa-Kohn-Rostoker (SKKR) calculations to examine the role of the charged dopants at Bi$_2$Se$_3\,$ surface.
The slow dynamics of the band bending process suggests that the charge accumulation at the surface is coupled to a much slower surface lattice relaxation \cite{time_dep_ARPES}.
In this work we do not aim to describe the time dependence of the outlined process, but rather to examine the effect of the surface dopants on the band structure.
In addition, according to Ref.~\cite{Kdopants2}, the lattice relaxation in the presence of adatoms is expected to be negligibly small.
Thus, in our surface calculations we consider a rigid lattice excluding any structural relaxation processes of the surface layers.
The rest of the paper is organized as follows.
In Sec.~\ref{sec:method} we present the details of our numerical approach to study the bulk and surface properties of Bi$_2$Se$_3$.
Then, in Sec.~\ref{sec:results}, we examine the BB and the effect of the surface dopants on the dispersion of the SSs.
Finally, we summarize our work in Sec.~\ref{sec:summary}.
\section{Details of the numerical calculations} \label{sec:method}
In order to describe the electron structure of Bi$_2$Se$_3$, we used the relativistic spin-polarized SKKR method \cite{KKR3}.
The first-principles calculations were performed by density functional theory using the local spin-density approximation and the Ceperley-Alder parametrization of the exchange correlation functional \cite{CA_exchange} within the atomic-sphere approximation (ASA).
In our calculations we used an angular momentum cutoff $l_{\rm max}=2$.
The 2GTIs possess a rhombohedral lattice structure where the atoms are located in parallel layers forming a triangular lattice \cite{BiSe_lattice}.
This lattice structure can be described by a periodic sequence of QLs, where each QL consists of five strongly bound atomic layers.
The QLs, on the other hand, are weakly bound to each other by van der Waals couplings.
In particular, the QLs in the Bi$_2$Se$_3$ crystal consist of atomic layers Se1-Bi-Se2-Bi-Se1, whereas Se1 and Se2 are selenium atoms at inequivalent geometrical positions.
Fig.~\ref{fig:lattice}(a) shows the structure of one QL in the lattice.
\begin{figure}[thb]
\centering
\includegraphics[scale=0.4]{kristalyracs.eps}
\caption{(a) The structure of the Bi$_2$Se$_3\,$ crystal within one QL.
(b) The two-dimensional Brillouin zone related to the surface terminated by an atomic plane of a QL.
(c) The scheme of the studied system. Six QLs are surrounded by a semi-infinite bulk of Bi$_2$Se$_3\,$ crystal from the left and by vacuum from the right.
} \label{fig:lattice}
\end{figure}
Since the QLs are weakly bound to each other, the crystal surface is favored to be terminated by Se atoms.
Thus, in our calculations we assumed a flat surface formed by the last Se atomic plane of a QL.
The considered surface has a hexagonal structure, with a 2D lattice constant $a=4.138$ \AA\; and lattice vectors given by $\mathbf{a}_1 = (a, 0,0)$ and $\mathbf{a}_2 = (-\frac{1}{2}a, \frac{\sqrt{3}}{2}a, 0)$, where the axis $z$ is perpendicular to the surface.
The shortest period in the lattice structure along the $z$ axis is given by three successive QLs \cite{TI_Zhang}.
Still, the periodicity of the Bi$_2$Se$_3$ lattice in the $z$ direction can be well described by a skew lattice vector $\mathbf{a}_3 = (5a, \frac{5}{\sqrt{3}}a, z_{QL})$ (with \mbox{$z_{QL}=9.547$ \AA\;} standing for the height of one QL) resulting in a period length of one QL only in the direction spanned by $\mathbf{a}_3$.
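The stacking encoded in $\mathbf{a}_3$ can be checked with a few lines of linear algebra: expressing the in-plane projection of $\mathbf{a}_3$ in the basis $(\mathbf{a}_1,\mathbf{a}_2)$ yields coefficients whose fractional parts are $(2/3,1/3)$, i.e. the ABC stacking of successive QLs. A small sketch, using the lattice constants quoted above:

```python
import numpy as np

a = 4.138            # 2D hexagonal lattice constant, Angstrom
z_ql = 9.547         # height of one QL, Angstrom
a1 = np.array([a, 0.0, 0.0])
a2 = np.array([-a/2.0, np.sqrt(3.0)/2.0*a, 0.0])
a3 = np.array([5.0*a, 5.0/np.sqrt(3.0)*a, z_ql])   # skew stacking vector

# express the in-plane projection of a3 in the basis (a1, a2)
M = np.array([a1[:2], a2[:2]]).T
coeffs = np.linalg.solve(M, a3[:2])
print(np.round(coeffs, 6))   # fractional parts (2/3, 1/3): ABC stacking of the QLs
```

The integer parts of the coefficients are irrelevant (they amount to a relabeling of in-plane cells); only the $(2/3,1/3)$ shift is physical, and after three QLs it accumulates to a full lattice vector, recovering the three-QL period mentioned above.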
According to the SKKR method \cite{KKR3}, we modeled the bulk system by a single QL surrounded by semi-infinite bulk regions from both the left and right sides.
The surface Green's functions of the semi-infinite regions were calculated by an iterative method \cite{iterGreen1,iterGreen2}.
The charge density distribution was calculated by means of the energy-dependent Green's function, integrated up to the Fermi energy.
The position of the Fermi energy was determined to satisfy the total charge neutrality condition.
Because of the ASA used in our calculations, however, the value of the Fermi level is obtained only within a numerical error, while its correct position is essential for insulating systems.
Several previous works \cite{Lloyd1,Lloyd2,Lloyd3} proposed a procedure to correct the value of the Fermi energy in a self-consistent way based on Lloyd's formula \cite{Lloyd_orig}.
Following this procedure we obtained the proper Fermi energy by re-normalizing the wave functions in order to obtain the correct space-integrated charge distribution.
For the surface calculations we considered an interface region surrounded by a semi-infinite bulk system from the left and by a vacuum from the right [see Fig.~\ref{fig:lattice}.(c)].
The interface region was constructed from six QLs to support a smooth transition of the atomic potentials as we proceed from the bulk layers to the vacuum.
We also included additional (in total eight) vacuum layers between the surface of the interface region and the semi-infinite vacuum side.
From an electrostatic point of view, the atomic potentials (as well as the charge density distribution) were obtained by imposing zero electrostatic field far away from the surface of the crystal.
The described numerical method is sufficient to obtain plausible results for the studied system \cite{KKR_bite}; however, the inclusion of empty spheres between the atomic layers further stabilized our numerical approach.
\begin{table}[ht]
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
Type & x [\AA]& y [\AA]& z [\AA]& $R_{WS}$ [\AA] \\
\hline
\hline
E1 & 0 & 0 & 0 & 1.447 \\
Se1 & 2.069 & 1.195 & 1.184 & 1.587 \\
E2 & 4.138 & 2.389 & 2.1408 & 1.226 \\
Bi & 6.207 & 3.584 & 2.893 & 1.758 \\
E3 & 8.276 & 4.778 & 3.664 & 1.1854 \\
Se2 & 10.3450 & 5.973 & 4.773 & 1.698 \\
E3 & 12.414 & 7.167 & 5.883 & 1.185 \\
Bi & 14.483 & 8.362 & 6.654 & 1.758 \\
E2 & 16.552 & 9.556 & 7.406 & 1.226 \\
Se1 & 18.621 & 10.751 & 8.363 & 1.587 \\
\hline
\end{tabular}
\caption{The $x$, $y$ and $z$ coordinates and the Wigner-Seitz radii ($R_{WS}$) of the atomic and empty spheres in one QL. Notations Se1 and Se2 [E1, E2, E3] stand for the inequivalent selenium [empty] spheres.
The position of the other spheres in the lattice can be computed via the lattice vectors $\mathbf{a}_1$, $\mathbf{a}_2$ and $\mathbf{a}_3$ (see the text for details). \label{table:spheres}}
\end{center}
\end{table}
In our calculations we used identical geometrical parameters for the lattice structure as in Ref. \cite{BiSe_lattice}, however, we optimized the positions and the Wigner-Seitz radii of the empty spheres to reproduce the main features of the experimentally observed band structure of the SSs \cite{bise_arpes}.
Focusing on the direct band gap at the $\Gamma$ point and on the slope of the Dirac cone, the obtained numerical parameters are summarized in Table \ref{table:spheres}.
Finally, the band structure can be obtained from the Bloch spectral function (BSF).
For surface calculations the layer-resolved BSF depends on the energy $E$ and on the parallel momentum $k_{\parallel}$:
\begin{equation}
A_n(E, \mathbf{k}_{\parallel}) = -\frac{1}{\pi}{\rm Im}{\rm Tr}\int\limits_{\Omega_n} {\rm d}^3r_n\;G^+(\mathbf{r}_n,\mathbf{r}_n,E, \mathbf{k}_{\parallel}), \label{eq:spectral}
\end{equation}
where $G^+(\mathbf{r}_n,\mathbf{r}_n,E, \mathbf{k}_{\parallel})$ is the retarded Greens function at position $\mathbf{r}_n$ in a selected atomic sphere $\Omega_n$ of layer $n$, and the trace is taken over the quantum numbers of the total angular momentum.
Thus, the BSF is ideal to study the surface states, as the three-dimensional bulk band structure is projected onto the two-dimensional Brillouin zone (BZ) corresponding to the crystal surface [see Fig.~\ref{fig:lattice}(b)].
\section{Band bending and shifting of the Dirac cone by charge dopants} \label{sec:results}
We now turn our attention to the SSs formed on the surface of Bi$_2$Se$_3\,$ crystal.
In this section we discuss our results on the BB effect induced by the electric charge accumulation and/or inhomogeneous charge distribution close to the surface.
We also show that the Dirac cone can be shifted in energy due to the effective electric field generated by the deposited surface dopants.
Our findings are in good agreement with the ARPES experiments \cite{SARPES_bite}.
Making use of the atomic potentials determined by the self-consistent calculations described in Sec. \ref{sec:method} we calculated the layer-resolved BSF in the QLs beneath the surface.
(The band structure calculated for the bulk crystal is presented in appendix \ref{subsec:bulk}.)
\begin{figure}[thb]
\centering
\includegraphics[scale=0.5]{comparison2effmodel.eps}
\caption{The BSF given in Eq.~(\ref{eq:spectral}) and summed over the layers of the (a) 1QL, (b) 2QL, (c) 3QL and (d) 4QL in the interface region plotted along the $K\Gamma M$ cross section of the two-dimensional BZ.
The projection of the bulk bands onto the two-dimensional BZ is shown by colored areas, while the narrow lines correspond to the bands with low dispersion in the $z$ direction, including the SSs.
The intensity of the SSs rapidly decreases in deeper QLs.
The results of the SKKR model are compared to the energy bands of the effective model (\ref{eq:TI_spektrum}) using parameter set (a) $v_0\approx3.55$ eV\AA, $\lambda\approx128$ eV\AA$^3$, $1/(2m)=0$, $\alpha=0$ and (b) $v_0^{*}\approx1.63$ eV\AA, $\lambda^{*}\approx108$ eV\AA$^3$, $1/(2m^{*})\approx20$ eV\AA$^2$, $\alpha^{*}\approx78$ eV\AA$^3$ (see the text for details).
The BSF was calculated at complex energies with small imaginary part of $\sim0.7$ meV.
} \label{fig:comparison}
\end{figure}
Fig.~\ref{fig:comparison} shows the calculated BSF summed over the layers of the individual QLs.
The narrow lines correspond to the bands with low dispersion in the $z$ direction, including the SSs.
Within the bulk band gap the dispersion of the protected SSs form a Dirac cone, which is anisotropic in the $k_{\parallel}$ plane.
The Dirac point is located $\sim 250$ meV below the bulk conduction band minimum, in good agreement with the ARPES measurements \cite{SARPES_bite}.
The SSs penetrate below the surface down to the third QL, where their intensity eventually vanishes.
The signatures of the bulk band structure, however, can be observed in all the QLs up to the topmost one.
Thus, the SSs spatially overlap with the bulk states, hence they can hybridize.
Consequently, one can observe an increased width of the SS bands in the vicinity of the bulk bands.
The SSs that are not hybridized with the bulk states, on the other hand, can be described by a $2\times2$ effective Hamiltonian proposed by Fu \emph{et al.} \cite{fu} up to third order in the momentum $\hbar\mathbf{k} = (\hbar k_x, \hbar k_y)$:
\begin{equation}
H(\mathbf{k}) = \frac{k^2}{2m} + v_{k}(k_x\sigma_y - k_y\sigma_x) + \frac{\lambda}{2}(k_+^3 + k_-^3)\sigma_z\;. \label{eq:TI_FU_ham}
\end{equation}
Here $v_k = v_0 + \alpha k^2$, $k_{\pm}=k_x\pm{\rm i}k_y$ and $\sigma_{x,y,z}$ stand for the Pauli matrices acting in the spin space.
Parameters $m$, $v_0$, $\alpha$ and $\lambda$ are to be determined either from first-principles calculations or from fits to the experimental data.
The energy eigenvalues of the electron states are given by the expression
\begin{equation}
E_{\pm}(\mathbf{k}) = \frac{k^2}{2m} \pm \sqrt{v_k^2k^2 + \lambda^2k_x^2(k_x^2 - 3k_y^2)^2}\;, \label{eq:TI_spektrum}
\end{equation}
where $\pm$ labels the bands above/below the Dirac point.
The effective mass $m$ introduces an asymmetry between the upper and lower side of the Dirac cone that is, indeed, significant according to the calculated band structure shown in Fig.~\ref{fig:comparison}.
In Reference \cite{bise_arpes} the parameters of Eq.~(\ref{eq:TI_spektrum}) were fitted to the experimental ARPES data on the upper side of the Dirac cone.
Focusing on the parameters describing the most pronounced features of the dispersion, namely the Fermi velocity $v_0$ and the parameter $\lambda$ responsible for the hexagonal warping, the obtained numerical values were $v_0\approx3.55$ eV\AA, $\lambda\approx128$ eV\AA$^3$, $1/(2m)=0$ and $\alpha=0$.
Fig.~\ref{fig:comparison}(a) compares the band structure of the effective model to the SKKR band structure.
As one can see, the effective model with the given parameters reproduces the calculated band structure of the SSs above the Dirac point well.
However, below the Dirac point the agreement between the effective model and the SKKR results is poor.
To describe the asymmetry between the upper and lower side of the Dirac cone we propose another set of parameters, namely $v_0^{*}\approx1.63$ eV\AA, $\lambda^{*}\approx108$ eV\AA$^3$, $1/(2m^{*})\approx20$ eV\AA$^2$ and $\alpha^{*}\approx78$ eV\AA$^3$.
Using this parameter set we find that both the upper and the lower side of the Dirac cone obtained by Eq.~(\ref{eq:TI_spektrum}) are close to the dispersion of the SSs calculated within the SKKR framework.
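As a consistency check, the closed-form bands of Eq.~(\ref{eq:TI_spektrum}) can be compared with a direct numerical diagonalization of the $2\times2$ Hamiltonian of Eq.~(\ref{eq:TI_FU_ham}). The sketch below uses the second parameter set quoted in the text (units: eV and \AA$^{-1}$, with $\hbar$ absorbed into the parameters); the sampled $k$-point is arbitrary:

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H(kx, ky, v0=1.63, lam=108.0, inv2m=20.0, alpha=78.0):
    """Effective surface Hamiltonian (warped Dirac cone with mass asymmetry)."""
    k2 = kx**2 + ky**2
    vk = v0 + alpha * k2
    kp, km = kx + 1j*ky, kx - 1j*ky
    return (inv2m * k2 * np.eye(2)
            + vk * (kx * sy - ky * sx)
            + 0.5 * lam * (kp**3 + km**3) * sz)

def E_pm(kx, ky, v0=1.63, lam=108.0, inv2m=20.0, alpha=78.0):
    """Closed-form bands of the same model."""
    k2 = kx**2 + ky**2
    vk = v0 + alpha * k2
    root = np.sqrt(vk**2 * k2 + lam**2 * kx**2 * (kx**2 - 3*ky**2)**2)
    return inv2m * k2 - root, inv2m * k2 + root

kx, ky = 0.07, 0.03                      # a generic point of the 2D BZ, 1/Angstrom
evals = np.linalg.eigvalsh(H(kx, ky))    # sorted ascending
print(np.allclose(evals, E_pm(kx, ky)))  # -> True
```

The warping term $\frac{\lambda}{2}(k_+^3+k_-^3) = \lambda k^3\cos 3\theta$ is real, so the Hamiltonian is Hermitian and the two eigenvalues reproduce $E_{\pm}$ exactly.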
Comparing the band structure shown in Fig.~\ref{fig:comparison}(a) to the ones of Figs.~\ref{fig:comparison}(b)-(d), one can observe the BB in the topmost QL even without any extra charge dopants added to the system.
Due to the effective electrostatic field induced by the inhomogeneous charge distribution close to the surface, the bulk conduction band is repelled upward by about \mbox{$100$ meV} in the first QL compared to the conduction band minimum in the other QLs (see Fig.~\ref{fig:comparison}).
A BB of similar magnitude on the surface of pristine Bi$_2$Se$_3\,$ crystal was also reported in other DFT calculations \cite{fregoso}.
We expect that charge dopants deposited on the Bi$_2$Se$_3\,$ surface further modify the electrostatic configuration of the top layers.
Indeed, Hsieh \emph{et al.} \cite{SARPES_bite} dosed the surface of a Bi$_2$Se$_3\,$ sample with NO$_2$ molecules and managed to shift the Dirac point towards the conduction band.
The presence of charged impurities in the sample, like electron donor Se vacancies generated during the sample preparation process or reactive chemical doping \cite{BiSe_vacancies_exp}, also plays a crucial role in the electrostatic properties of the surface.
The slow migration of the Se vacancies \cite{TI_SbTe_exp,BiSe_vacancies_exp,BiSe_vacancies} results in an increased concentration of positively charged impurities close to the surface.
Thus, a time-dependent BB was observed in ARPES experiments \cite{time_dep_ARPES,TI_SbTe_exp,SARPES_bite,BiSe_vacancies_exp,Cadopants_exp,Kdopants_exp1} where the bands at the surface were gradually bent downward.
In addition, it has been shown both experimentally \cite{Kdopants_exp1} and theoretically \cite{Kdopants,Kdopants2} that a potassium (K) layer deposited on the surface of the Bi$_2$Se$_3\,$ crystal triggers similar BB effects on the band structure as the Se vacancies.
Thus, the electron donor K adatoms can be also described by positively charged impurities close to the surface.
\subsection{Charge dopants on the surface of the Bi$_2$Se$_3\,$ crystal}
In our calculations we simulated the effect of the surface dopants by a planar capacitor situated on the surface of the Bi$_2$Se$_3\,$ crystal, and charged by $\delta q_s$ per two-dimensional unit cell.
For example, the presence of the electron donor Se vacancies or K adatoms can be described by a positively charged capacitor.
On the other hand, the presence of the electron acceptor NO$_2$ molecules used in the ARPES experiments of Ref. \cite{SARPES_bite} can be modeled by a negatively charged capacitor.
This is a good assumption because the characteristic size of the probed samples is typically much larger than the lattice constant, and the surface physics can be studied in terms of average physical quantities, such as the average concentration of the charged surface dopants.
In our model the charge of the capacitor corresponds to the average charge transfer between the dopants and the surface, while we neglect the inhomogeneities on the atomic length scale.
Additionally, since we are not confined by the supercell model of other first principle calculations \cite{Kdopants,Kdopants2},
we can now study the shift of the Dirac cone as a function of the doping in the entire concentration range.
The electric field of the charged capacitor is accounted for by means of a term in the Poisson equation, that is solved self-consistently within the SKKR code.
In the vacuum the electric field of the capacitor is canceled due to the attracted (or repelled) electrons from (to) the bulk reservoir.
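The screening described above can be illustrated with a one-dimensional toy Poisson problem: a charged sheet plus the induced compensating charge produce a field that vanishes outside the double layer. The geometry and charge values below are purely illustrative, not the self-consistent SKKR solution:

```python
import numpy as np

# grid along the surface normal (Angstrom); eps0 set to 1 for simplicity
n, L = 400, 40.0
z = np.linspace(0.0, L, n)
dz = z[1] - z[0]

rho = np.zeros(n)
rho[np.argmin(np.abs(z - 5.0))] = +1.0 / dz    # charged capacitor sheet
rho[np.argmin(np.abs(z - 15.0))] = -1.0 / dz   # induced screening charge

# integrate the 1D Poisson equation twice: E(z) = int rho dz, phi(z) = -int E dz
E_field = np.cumsum(rho) * dz
phi = -np.cumsum(E_field) * dz
print(abs(E_field[-1]) < 1e-12)  # -> True: field vanishes outside the double layer
```

The potential is constant outside the double layer and drops linearly between the two sheets, which is the planar-capacitor picture used in our surface calculations.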
\begin{figure}[thb]
\centering
\includegraphics[scale=0.43]{delta_charge.eps}
\caption{Charge excess on the individual layers induced by a capacitor charged with \mbox{$\delta q=0.01q_e$} per two-dimensional unit cell.
The interface region consists of six QLs, surrounded by the vacuum (V) from the left and by the bulk (B) from the right.
The solid vertical lines indicate the position of the capacitor in each figure.
} \label{fig:delta_charge}
\end{figure}
Fig.~\ref{fig:delta_charge} shows the induced excess charge ($\Delta Q$) on the individual layers compared to the undoped system.
The results were obtained for a capacitor located close to the surface, and being charged to $\delta q=0.01q_e$, where $q_e\approx1.6\times10^{-19}$~C is the elementary charge.
In particular, Fig.~\ref{fig:delta_charge} shows the excess charge obtained for two different positions of the capacitor.
As expected, the charge of the capacitor is compensated by the accumulated electrons within the first two QLs, since the induced excess charge can be hosted only by the unsaturated SSs.
It can also be noticed that the induced effective electric field might generate further charge transfer between the layers in the low-lying QLs as well.
For example, in Fig.~\ref{fig:delta_charge}(a) one can clearly observe oscillations of $\Delta Q$ up to the $5$th QL.
Since the oscillations are centered around $\Delta Q=0$, the net charge of the corresponding QLs remains zero within the numerical precision of our calculations.
Our conclusions are consistent with previous theoretical works \cite{Kdopants,Kdopants2} predicting long range oscillations in the charge transfer.
However, our results indicate that these oscillations become suppressed for charge dopants located beneath the surface of the Bi$_2$Se$_3\,$ crystal.
In the following we present our results obtained for the planar capacitor located at the position corresponding to Fig.~\ref{fig:delta_charge}(a).
Figure~\ref{fig:spectral_surfdoping} compares the band structures of the doped and clean systems.
\begin{figure}[thb]
\centering
\includegraphics[scale=0.5]{surfdoping.eps}
\caption{Top panels: The BSF at positive surface doping compared to the clean system.
The BSF was calculated using Eq.~(\ref{eq:spectral}) and summed over the atomic layers in the (a) 1QL and (b) 3QL.
Bottom panels: Similar to the top panels but for negative surface doping.
The projection of the bulk bands onto the two-dimensional BZ is shown by colored areas, while the narrow lines correspond to the bands with low dispersion in the $z$ direction, including the SSs.
The BSF was calculated at complex energies with small imaginary part of $\sim0.7$ meV.
} \label{fig:spectral_surfdoping}
\end{figure}
For a positively charged capacitor the accumulation of electrons close to the surface is energetically favorable.
The states for these accumulated electrons are provided by the downward shift of the unsaturated SS bands [see Figs.~\ref{fig:spectral_surfdoping}(a) and (b)].
Moreover, the BB of the bulk bands can also be tuned by a surface doping.
For a given concentration of the surface dopants, for example, the position of the conduction band at the surface can be shifted back to that in the bulk.
For a negatively charged capacitor one can follow analogous reasoning.
In this case the bands are shifted upward as shown in Fig.~\ref{fig:spectral_surfdoping}(c) and (d).
In previous numerical studies the effective electric field responsible for the BB was controlled by a charge transfer originating from a sheet of adatoms located above the Bi$_2$Se$_3\,$ surface \cite{Kdopants,Kdopants2,Bibilayer,spatial_Dirac_shift}.
In these calculations additional SSs were identified in the band structure being attributed to the presence of the given adatoms.
In the case of potassium for example, even quantum well states have been found within the bulk band gap where the Dirac cone is situated \cite{Kdopants}.
Moreover, due to a conventional insulator interface on the top of Bi$_2$Se$_3\,$, the protected SSs can even be spatially shifted towards the bulk of the Bi$_2$Se$_3\,$ crystal \cite{spatial_Dirac_shift}.
Since we employ a different approach to model the BB effect, these artifacts are entirely absent from our results.
As a result, our method enables us to study the evolution of the Dirac cone as a function of the surface doping in the entire concentration range, especially in the low-concentration limit, which is hardly accessible for supercell-based approaches due to numerical limitations.
The energy shift of the Dirac cone was successfully demonstrated by ARPES measurements as well \cite{SARPES_bite,Kdopants_exp1}.
As shown in Fig.~\ref{fig:spectral_surfdoping}, the surface bands are shifted upward or downward equally in each QL.
This behavior is in line with the localized nature of the surface states showing no momentum dispersion along the direction perpendicular to the surface.
On the other hand, the bend of the bulk bands varies continuously with the distance measured from the surface.
Indeed, in Fig.~\ref{fig:comparison}(a)-(d) one can observe a gradual decrease of the conduction band minimum in the successive QLs.
The layer resolved BSF (not shown in the manuscript) also confirms that the energy shift of the conduction band minimum varies smoothly from layer to layer.
We now examine the relation between the energy shift of the Dirac cone and the concentration of the charge dopants.
If the Fermi energy is located inside the bulk band gap, the excess charge induced by the surface doping can be hosted only by the unsaturated SSs, forming a Dirac cone at the center of the two-dimensional BZ.
One can then expect that the energy shift of the Dirac cone corresponds to the increase or decrease in the number of occupied electron states required to host the induced excess charge.
The total charge per two-dimensional unit cell ($\Omega$) corresponding to the electron states on the Dirac cone between energies $E_1$ and $E_2$ can be calculated as
\begin{equation}
\delta q_D = q_e\int\limits_{E_1}^{E_2}\rho_D(E) dE\;, \label{eq:q_dirac}
\end{equation}
where
\begin{equation}
\rho_D(E) = \frac{\Omega}{(2\pi)^2}\oint\limits_{\Gamma_E}\frac{dk_{\parallel}}{\hbar v_g(\mathbf{k})}
\end{equation}
is the density of states and $\hbar v_g(\mathbf{k})=|{\rm grad}_{\mathbf{k}}E_{\pm}(\mathbf{k})|$ is the group velocity.
The integral path is taken over the constant energy contour $\Gamma_E$ of the SS spectrum (\ref{eq:TI_spektrum}) at energy $E$.
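As a hedged illustration of Eq.~(\ref{eq:q_dirac}), the following Python sketch assumes an isotropic linear Dirac dispersion $E_{\pm}(\mathbf{k})=E_D\pm\hbar v_g|\mathbf{k}|$, for which the constant-energy contour is a circle and the contour integral reduces to a closed form; the Fermi velocity and unit-cell area below are illustrative values, not parameters taken from the paper.

```python
import numpy as np

HBAR = 6.582119569e-16  # eV*s, reduced Planck constant
V_F = 5.0e5             # m/s, assumed Fermi velocity of the Dirac cone
OMEGA = 1.5e-19         # m^2, assumed 2D unit-cell area

def dos_dirac(E, E_D=0.0):
    """rho_D(E) for an isotropic Dirac cone E_pm(k) = E_D +- hbar*v_F*|k|.

    The contour Gamma_E is a circle of radius |E - E_D|/(hbar*v_F) and the
    group velocity is constant, so the contour integral reduces to
    Omega*|E - E_D| / (2*pi*(hbar*v_F)**2).
    """
    return OMEGA * np.abs(E - E_D) / (2.0 * np.pi * (HBAR * V_F) ** 2)

def excess_charge(E1, E2, q_e=1.0, n=2001):
    """delta_q_D = q_e * integral_{E1}^{E2} rho_D(E) dE (trapezoid rule)."""
    E = np.linspace(E1, E2, n)
    rho = dos_dirac(E)
    return q_e * float(np.sum((rho[1:] + rho[:-1]) * 0.5) * (E[1] - E[0]))

# Closed form for the same integral (E_D = 0, 0 <= E1 < E2):
# q_e * Omega * (E2^2 - E1^2) / (4*pi*(hbar*v_F)^2)
analytic = OMEGA * (0.1**2) / (4.0 * np.pi * (HBAR * V_F) ** 2)
```

For the actual SS spectrum of Eq.~(\ref{eq:TI_spektrum}), which may include warping terms, the contour integral would have to be evaluated numerically along $\Gamma_E$.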
\begin{figure}[thb]
\centering
\includegraphics[scale=0.5]{Q_E.eps}
\caption{Energy shift of the Dirac cone predicted by Eq.~(\ref{eq:q_dirac}) as a function of the surface doping per two-dimensional unit cell (solid line).
Red crosses represent the results obtained by the SKKR calculations.
} \label{fig:Q_E}
\end{figure}
Figure~\ref{fig:Q_E} shows the calculated energy shift of the Dirac cone as a function of the deposited excess charge.
Surprisingly, we have found a significant difference between the predictions of the effective model and the SKKR results.
The effective model greatly overestimates the energy shift of the Dirac cone compared to the results obtained by the SKKR model.
One can explain this inconsistency by the following arguments.
Within the effective model we assumed that the surface bands are shifted without any notable changes of the dispersion.
However, this approximation is valid only at energies close to the Dirac point.
Indeed, in Fig.~\ref{fig:spectral_surfdoping} the shape of the Dirac cone is similar in the doped and undoped cases.
One can, however, observe remarkable changes in the surface band structure around the crossing points with the other bands when surface dopants are present in the system (see Fig.~\ref{fig:spectral_surfdoping}).
Consequently, the charge density distribution on these parts of the band structure undergoes a significant change as well.
Thus, the low-energy segments of the unsaturated surface bands also play an important role in the screening of the surface dopants.
Comparing our results to other DFT based calculations \cite{Kdopants,Kdopants2}, we generally found a good qualitative agreement. The quantitative
deviations can be related to the differences in the applied physical models.
While most of the DFT calculations use a supercell approach, our method relies on using semi-infinite bulk and vacuum regions.
Secondly, in Ref.~\cite{Kdopants} the charge transfer between the deposited potassium (K) atomic layer and the Bi$_2$Se$_3\,$ surface was controlled by the
K-Bi$_2$Se$_3\,$ distance instead of the K concentration on the surface.
In that work the position of the Dirac point was found to be more sensitive to the amount of charge transfer, resulting in a relation between the Dirac cone shift
and the charge transfer that is closer to the prediction of the effective model.
Our results, on the other hand, indicate that the energy shift of the Dirac cone cannot be estimated within the effective model given by
Hamiltonian (\ref{eq:TI_FU_ham}), since a significant portion of the induced excess charge is hosted by the low-energy segments of the surface bands.
Thus, the position of the Dirac point is very much influenced by the treatment of the electrostatic potential used in the specific surface calculations.
\subsection{Spatially distributed charge dopants}
Besides a planar capacitor, we also considered a scenario of spatially distributed charge dopants below the surface, which might be closer to the realistic case of an exposed surface.
One can expect that the concentration of the surface dopants vanishes exponentially with the distance measured from the surface of the crystal.
Thus, in our calculations we described the concentration of the charged impurities by an exponential function
\begin{equation}
\delta q_{\xi}(z) = \frac{\delta q}{\xi}\exp\left(-\frac{z}{\xi}\right). \label{eq:spatial_dopants}
\end{equation}
The parameter $\xi$ describes the penetration depth of the dopants into the crystal.
The case of the planar capacitor can be recovered by the limit $\xi\rightarrow 0$.
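A minimal sketch of how the profile of Eq.~(\ref{eq:spatial_dopants}) can be discretised into per-layer charges; the layer spacing used in the example is an assumed value, not the Bi$_2$Se$_3\,$ lattice parameter.

```python
import math

def layer_charges(delta_q, xi, z_layers):
    """Distribute a total dopant charge delta_q over atomic layers at
    depths z_layers (same length unit as xi), following the profile
    delta_q_xi(z) = (delta_q/xi) * exp(-z/xi).

    Each layer receives the integral of the profile over its cell
    (boundaries halfway between consecutive layers, last cell open),
    so the charges sum exactly to delta_q.
    """
    bounds = [0.0]
    for a, b in zip(z_layers, z_layers[1:]):
        bounds.append(0.5 * (a + b))
    bounds.append(float("inf"))
    charges = []
    for lo, hi in zip(bounds, bounds[1:]):
        hi_term = 0.0 if math.isinf(hi) else math.exp(-hi / xi)
        charges.append(delta_q * (math.exp(-lo / xi) - hi_term))
    return charges

# Example: total charge -0.01 q_e, xi = 5 A, assumed layer spacing of 3 A.
q = layer_charges(-0.01, 5.0, [i * 3.0 for i in range(10)])
```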
In this section we discuss the results of our SKKR calculations with finite $\xi$ parameters.
In particular, the layer-dependent excess charge in Fig.~\ref{fig:delta_charge_spatial} was calculated for charge dopants concentrated into the first two QLs below the surface.
\begin{figure}[thb]
\centering
\includegraphics[scale=0.43]{delta_charge_spatial.eps}
\caption{Charge excess on the individual layers induced by dopants of total charge \mbox{$\delta q=-0.01q_e$} spatially distributed in the layers close to the surface.
The smooth blue line describes the concentration of the charge dopants as a function of the distance measured from the surface calculated by Eq. (\ref{eq:spatial_dopants}) using $\xi\approx5$ \AA.
The interface region consists of six QLs, surrounded by the vacuum (V) from the left and by the bulk (B) from the right.
} \label{fig:delta_charge_spatial}
\end{figure}
As one can see in Fig.~\ref{fig:delta_charge_spatial}, the oscillations in the excess charge decay on a length scale of $4-6$ QLs, which is longer than what we observed for a planar capacitor located inside the crystal.
Our calculations also indicate that the decaying length of the oscillations increases with the parameter $\xi$, which is consistent with our expectations.
Figure~\ref{fig:spectral_surfdoping_spatial} compares the band structures of the spatially doped and clean systems.
\begin{figure}[thb]
\centering
\includegraphics[scale=0.5]{surfdoping_spatial.eps}
\caption{Top panels: The BSF at positive spatial doping compared to the clean system.
The BSF was calculated using Eq.~(\ref{eq:spectral}) and summed over the atomic layers of (a) the first QL and (b) the first three QLs.
Bottom panels: Similar to the top panels but for negative spatial doping.
The projection of the bulk bands onto the two-dimensional BZ is shown by colored areas, while the narrow lines correspond to the bands with low dispersion in the $z$ direction, including the SSs.
The BSF was calculated at complex energies with small imaginary part of $\sim0.7$ meV, and $\xi\approx5$ \AA.
} \label{fig:spectral_surfdoping_spatial}
\end{figure}
We found very similar results compared to the case when the charge dopants were modeled by a planar capacitor on the surface (see Fig.~\ref{fig:spectral_surfdoping}).
The downward shift of the Dirac cone induced by positive charge dopants [see Figs.~\ref{fig:spectral_surfdoping_spatial}(a) and (b)] is smaller than, but very close to, that induced by the planar capacitor used in the previous section [see Figs.~\ref{fig:spectral_surfdoping}(a) and (b)].
For negative charge dopants, on the other hand, the upward shift of the Dirac cone is about twice the shift induced by the planar capacitor.
However, this energy shift is still much smaller than the value predicted by the effective continuous model.
Thus, the qualitative conclusions made in the previous section are also applicable for the case of spatially distributed charge dopants.
\section{Summary}
\label{sec:summary}
In summary, we have calculated the band structure of Bi$_2$Se$_3\,$ topological insulator by using the SKKR method.
In order to examine the effect of the charged impurities on the properties of the Bi$_2$Se$_3\,$ surface, we also calculated the surface band structure in the presence of a charged planar capacitor situated close to the surface and for spatially distributed dopants under the surface.
We have found that for a Fermi energy located in the bulk band gap the induced excess charge is hosted by the unsaturated SSs.
Thus, the charge of the surface dopants is also screened within the first few QLs below the surface.
In addition, due to the excess charge and the inhomogeneous charge distribution close to the surface, the bulk bands undergo a BB effect even in the pristine Bi$_2$Se$_3\,$ crystal.
Consequently, the bulk bands become bent in the atomic layers close to the surface, but one can already recover the properties of the bulk crystal starting from the third QL.
Our results also indicate that the BB profile can be tuned via the deposited surface dopants.
In contrast to the bulk bands, the Dirac cone (being formed by the SS bands inside the bulk band gap) becomes shifted in energy due to the deposited surface dopants.
The magnitude and the direction of this energy shift depends on the concentration and on the sign of the deposited dopants.
However, this effect cannot be described within the scope of the effective continuous model of the SSs.
Our self-consistent numerical results showed that besides the Dirac cone the low-energy segments of the surface bands also play an important role in the electrostatic properties of the surface.
We also found that, in agreement with recent theoretical studies \cite{Kdopants,Kdopants2}, the charged impurities on the surface induce oscillations in the charge density extending deep into the crystal.
However, our results indicate that when the dopants are located beneath the surface, these oscillations become highly suppressed.
In order to verify our findings experimentally, one needs to independently measure the doping concentration on the surface and the energy shift of the Dirac cone.
We believe that the combination of the scanning tunneling microscope and ARPES techniques can serve this purpose.
Moreover, the surface doping of the 2GTIs can be used to cancel the BB effect on the surface, which is essential to take advantage of the protected SSs in transport devices.
Finally, the possibility of tuning the position of the Dirac cone might also be of great importance for future experimental applications of these materials.
\section*{Acknowledgments}
We acknowledge the support from the Hungarian Scientific Research Fund No. OTKA K115575, K115632 and K108676.
\section{Introduction}
\label{sec:introduction}
A real-world complex system often comprises multiple interactions between many different entities. When these interactions are regrouped under multiple families of entities, multilayer network modelling becomes a tool of choice to capture the key components of the system. The use of this model emerges in all fields of science, from social sciences to finance, through logistics, biology, and many more~\cite{kivela2014multilayer}.
With multilayer networks, the study of \textit{multiple viewpoints} (or aspects~\cite{kivela2019visual}) on the same network data becomes possible.
This is critical, for example, in social network analysis, to study and compare the roles of users in different networks (the same individual may behave differently on LinkedIn, Twitter, or Facebook). These different networks form different types of links that may be overlaid.
Motivated by their practical interest, \textit{multilayer networks} also show interesting structures~\cite{battiston2014structural} that could be exploited to mine \textit{community structures} or to study the roles of nodes and edges through centrality, for example. These analyses are also possible from a traditional network analysis standpoint, but they often require some kind of simplification (such as one-mode projection); recent advances show that interesting structures can be obtained \textit{directly} from the multilayer networks~\cite{gomez2013diffusion,chen2018suppressing,vskrlj2019cbssd}.
The key concept in multilayer networks is the layers themselves. Since the structure of such networks is driven by the layers and their aspect~\cite{kivela2014multilayer}, understanding how the layers organise can reveal properties unique to a given multilayer network model~\cite{renoust2015detangler, skrlj2019patterns}. Particularly, the intertwining of edges, or \textit{layer entanglement}~\cite{renoust2014entanglement, renoust2013measuring}, shows how layers overlap to form coherent structures and substructures.
Although recent works have focused on multilayer network analysis and description~\cite{wang2018social,omodei2015characterizing}, few have focused on a large-scale analysis grouping multilayer networks of different natures, produced in different disciplines, while comparing them to synthetic models. One comparative study of flow analysis~\cite{de2015identifying} has particularly influenced this paper: it describes emerging structures, albeit without comparing them to synthetic models.
In their seminal work, McPherson \textit{et al.}~\cite{mcpherson2001birds} discuss how ties emerge in social systems. They investigate how similarity between people, \textit{i.e.} homophily, is a strong driver of the formation of ties, which in addition makes them more durable in a dynamic system. They investigate social ties in a multilayer manner, and argue for further research: \textit{``in the impact of multiplex ties on the patterns of homophily; [and] the dynamic of network change over time [...]''}.
Our original work~\cite{skrlj2019patterns} -- which we extend in this paper -- particularly resonates with the first point of McPherson \textit{et al.}, in that we displayed a link between homophily~\cite{mcpherson2001birds,borgatti2009network} in social networks and networks of high entanglement intensity.
This paper extends~\cite{skrlj2019patterns}, which originally contributed an open-source implementation of entanglement homogeneity and intensity for multiplex networks, evaluated over 30 real-world networks. We also proposed a synthetic multiplex network generator. The generation of over 10k synthetic networks, and their comparison with the real-world networks, displayed common patterns of entanglement homogeneity and intensity that could be specific to the families of applications that \textit{generated} the networks. In this extended work, we contribute with:
\begin{itemize}
\item the theoretical extension of the entanglement computation to a fully multiplex model that takes into account coupling edges;
\item the extension of our synthetic generator accordingly;
\item the computations on a wider range of real and synthetic networks (1{,}329{,}696 synthetic networks were considered);
\item \REVISE{the study of entanglement in large, temporal multiplex networks;}
\item \REVISE{an open-source implementation of all conducted experiments.}
\end{itemize}
\section{Coupled multilayer and multiplex networks}
A multilayer network can be defined as a sequence $M = \{G_l\}_{l \in L} = \{(V_{l},E_{l})\}_{l \in L}$ where \REVISE{$E_{l} \subseteq V_l \times V_l$} is a set of edges in one network $l \in L$ of the sequence~\cite{kivela2014multilayer}.
Multilayer networks are commonly understood as layers comprised of interactions, where each layer corresponds to a specific aspect of the system. \REVISE{Coupling accounts for transitions between layers. Kivel\"{a} et al.~\cite{kivela2014multilayer} consider a multiplex network as \textit{``diagonally coupled multilayer networks in which each layer shares at least one node with some other layer in the network''}. They also consider \textit{node-aligned multiplex networks}, which do not specifically address coupling of nodes, but assume that nodes are shared (and coupled) across all layers. In our context, we refer to \textit{coupled multilayer networks} when we specifically consider networks with coupling between nodes across layers, and simply to \textit{multiplex networks} when considering node-aligned multiplex networks. The difference between these two types of multiplex networks is only whether or not we consider the coupling between layers. In multiplex networks, nodes represent \textit{the same} entity across all layers.}
We represent a multiplex network as a structure $M'=(V_M, E_M)$, where $V_M$ is the set of nodes and $E_M$ the set of all edges (in \textit{all} layers). \REVISE{$\mathbb{V}$ denotes the super set of all nodes, and $\mathbb{E} = \mathbb{V} \times \mathbb{V}$ the super set of all edges, regardless of the layers.}
There may exist coupling edges connecting nodes through layers, forming \textit{transition coupling}. This may concern, for example, coupled multilayer networks which are modelling transportation systems~\cite{cozzo2015structure}.
In that case, we can differentiate the \textit{elementary} layers (holding \textit{inner-layer} edges) from the \textit{transition} coupling (holding \textit{coupling} edges). Each \REVISE{transition coupling} $t = (l, l')$ between layers $l$ and $l'$ \REVISE{can be modelled similarly to a} layer, with a set of nodes and edges. If $S \subset L$ represents the subset of all \textit{elementary} layers, and $T \subset L$ the subset of all \textit{transition} couplings, we may define our coupled multilayer network $M$ as the union of a multilayer network with elementary layers only and another with \REVISE{transition coupling} only: $M = \{G_l\}_{l \in L} = M_S \cup M_T = \{G_s\}_{s \in S} \cup \{G_t\}_{t \in T}$.
The coupling can heavily influence the structural behaviour of multilayer networks~\cite{cozzo2016characterization}. It can also influence the resilience of the network against failures~\cite{de2014navigability} and naturally the diffusion phenomena~\cite{tejedor2018diffusion} too.
Among other examples of coupled multilayer networks, a biological system can be studied at the protein, RNA, or gene level~\cite{valdeolivas2018random}. Similarly, social networks can be studied by taking into account a person's presence on multiple platforms~\cite{mittal2019analysis}.
For computational purposes, such networks are commonly represented in the form of supra-adjacency matrices, in which a \textit{block-diagonal structure} connecting the same node across individual layers emerges~\cite{cozzo2015structure}. Algorithms can operate on such matrices directly, and thus exploit additional information representing multiple aspects.
Algorithms for analysis of multilayer networks can also operate on sparse adjacency data structure of the multilayer network directly. Yet, they need to take into account that a given node is present in multiple layers. Such representation is suitable for this work, as we are focused primarily on how edges co-occur across \textit{layers}. Hence, this work focuses primarily on the relations \textit{between the layers} of a given multilayer network.
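The supra-adjacency representation mentioned above can be sketched as follows, for a node-aligned multiplex with explicit coupling; the function name and data layout are illustrative assumptions, not the paper's released code.

```python
import numpy as np

def supra_adjacency(n, layer_edges, coupled_nodes):
    """Build the supra-adjacency matrix of a node-aligned multiplex.

    n            : number of nodes (shared across layers)
    layer_edges  : list of edge lists, one per layer
    coupled_nodes: dict {(layer_i, layer_j): [node, ...]} coupling the
                   same node across two layers

    Diagonal blocks hold inner-layer adjacency; off-diagonal blocks hold
    the coupling, i.e. identity-like entries on the shared nodes.
    """
    m = len(layer_edges)
    A = np.zeros((n * m, n * m), dtype=int)
    for l, edges in enumerate(layer_edges):
        for u, v in edges:
            A[l * n + u, l * n + v] = A[l * n + v, l * n + u] = 1
    for (li, lj), nodes in coupled_nodes.items():
        for u in nodes:
            A[li * n + u, lj * n + u] = A[lj * n + u, li * n + u] = 1
    return A

# Two layers over 3 nodes, with node 0 coupled across the two layers.
A = supra_adjacency(3, [[(0, 1)], [(1, 2)]], {(0, 1): [0]})
```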
We next discuss the two measures we consider throughout this work.
\section{Entanglement in multiplex networks}
\label{sec:entanglement}
We briefly recall the entanglement measures definitions from previous work~\cite{renoust2014entanglement}.
\begin{figure}[ht!]
\centering
\captionsetup{width=.90\linewidth}
\includegraphics[width=.95\linewidth]{ent_example_thick}
\caption{A toy example of layer entanglement computation: a) separated layers considered in a multiplex network; b) constructing the layer interaction network from the example; c) measuring entanglement from the example.}
\label{fig:toyexample}
\end{figure}
\subsection{Layer interaction network}
\label{subsec:LIN}
Recall our multiplex network $M=(V_M, E_M)=\{G_l \}_{l \in L}$. As mentioned earlier, such a network really distinguishes itself from classical graphs through the use of different layers to connect nodes. These layers may have different patterns and may overlap together. There may even exist latent dependencies among these layers.
To investigate this matter, each layer can be abstracted to a single node, forming a new graph, the \textit{Layer Interaction Network} (hereafter $LIN$)~\cite{renoust2014entanglement}. Visualizing the \REVISE{$LIN$} is a key component of multiplex network visualization, as in Detangler~\cite{renoust2015detangler}.
In the \REVISE{$LIN$}, $LIN=(L,F)$, each node $u_{l}, u_{l'}, u_{l''} \ldots $ corresponds to a layer $l, l', l'', \ldots \in L$ of the multiplex network $M$, and each edge $f \in F$ captures when two layers overlap through edges.
More formally, there exists an edge $f=(u_l, u_{l'})$ whenever there exist at least two nodes $v, v' \in V_M$ such that at least one edge connects these two nodes on each layer, $e_M = (v,v') \in l$ and $e'_M = (v,v') \in l'$.
The \REVISE{$LIN$} can be interpreted as an edge-layer co-occurrence graph, and the weight of an edge $f = (u_l, u_{l'})$, denoted $n_{l, l'}$, equals the number of times layers $l$ and $l'$ co-occur.
By extension, $n_{l, l}$ is the number of edges on layer $l$. This process is illustrated in Figure~\ref{fig:toyexample}b.
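The construction of the $LIN$ weights $n_{l,l'}$ can be sketched as follows; this is an illustrative implementation, not the released one, and the data layout (edge sets per layer as sets of node pairs) is an assumption.

```python
from collections import defaultdict
from itertools import combinations

def layer_interaction_network(multiplex_edges):
    """Build the LIN weights n_{l,l'} from a multiplex given as
    {layer: set of frozenset node pairs}.

    n[l][l]  = number of edges on layer l
    n[l][l'] = number of node pairs connected on both l and l'
    """
    layers = list(multiplex_edges)
    n = defaultdict(dict)
    for l in layers:
        n[l][l] = len(multiplex_edges[l])
    for l, lp in combinations(layers, 2):
        w = len(multiplex_edges[l] & multiplex_edges[lp])  # co-occurrences
        if w:
            n[l][lp] = n[lp][l] = w
    return dict(n)

# Two layers: L1 holds edges (1,2) and (2,3); L2 holds (1,2) only.
edges = {
    "L1": {frozenset((1, 2)), frozenset((2, 3))},
    "L2": {frozenset((1, 2))},
}
n = layer_interaction_network(edges)
```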
\subsection{Layer entanglement}
\label{subsec:entanglement}
The analysis of layer entanglement is inspired by the analysis of \textit{relation content} in social networks~\cite{burt1985relation}. The idea is to study the redundancy between relation contents, each of which forms a different layer in our formalism. The layer entanglement measures the ``influence'' of a layer in its neighbourhood.
This measure is recursively defined: the entanglement $\gamma_l$ of a layer $l$ is defined upon the entanglement of the layers it is entangled with. Similarly to eigenvector centrality~\cite{Wasserman1994}, this translates into the recursive equation:
\REVISE{$$\gamma_l\,\lambda=\sum_{l'\in L}{\frac{n_{l,l'}}{n_{l,l}}\gamma_{l'}}.$$} The entanglement of a layer $\gamma_l$ can be retrieved from the vector $\vec{\gamma}$ corresponding to the right eigenvector (associated with the maximum eigenvalue $\lambda$) of the layer overlap frequency matrix, defined as:
$$
\REVISE{
C = (c_{l,l'}), \quad \textrm{where} \quad c_{l,l'} = \frac{n_{l,l'}}{n_{l,l}} \quad \textrm{and} \quad c_{l,l} = \frac{n_{l,l}}{|\mathbb{E}|}}
$$
This metric was initially introduced in~\cite{burt1985relation}, and later constructed using the weights in the $LIN$~\cite{renoust2014entanglement} (see Figures~\ref{fig:toyexample} and~\ref{fig:maxHI}).
\subsection{Entanglement intensity and homogeneity}
\label{subsec:inthom}
\begin{figure}[ht!]
\centering
\captionsetup{width=.90\linewidth}
\includegraphics[width=.95\linewidth]{ent_extremes}
\caption{Two very different cases of maximum homogeneity $\mathit{H}=1$, the multiplex network and the $LIN$ are shown, with matrices and entanglement measures. a) all layers are saturating all edges, so we have maximum intensity $\mathit{I}=1$; b) layers are well balanced, but we may have a lot more interactions possible.}
\label{fig:maxHI}
\end{figure}
The layer entanglement $\gamma_l$ measures the share of layer $l$ overlapping with other layers. The more a group of layers interacts together, the more cohesive the nodes they connect will be in view of \textit{these} layers, and hence the more similar the values $\gamma_l$, $\forall l \in L$, will be (their shares of entanglement will be similar). This is captured by the \textit{entanglement homogeneity}~\cite{renoust2014entanglement}, which is defined as the following cosine similarity:
\REVISE{$$\mathit{H}=\frac{<\vec{\mathbf{e}_L},\vec{\gamma}>}{\lVert\vec{\mathbf{e}_L} \rVert \lVert\vec{\gamma} \rVert} \in [0,1],$$ with $\vec{\mathbf{e}_L}=[1,1, \ldots, 1]_L$ the vector of size $|L|$ filled with 1's.} Optimal homogeneity is not necessarily reached only when all nodes are connected through all layers, but also when all nodes are connected in a very balanced manner between all layers (see Figure~\ref{fig:maxHI}). Homogeneity thus permits various \textit{symmetries} in a given \REVISE{$LIN$}.
When a maximum overlap is reached through all layers in the network, the frequencies in the matrix $C$ (of size $|L| \times |L|)$ are saturated with $C_{i,j}=1$. This gives us a theoretical limit to measure the amount of layer overlap through the \textit{entanglement intensity}~\cite{renoust2014entanglement}, defined as: $$\mathit{I}=\lambda/|L|.$$ In practice, both entanglement intensity and homogeneity have been used to measure the coherence of clusters of documents~\cite{renoust2013measuring}.
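A minimal numerical sketch of intensity and homogeneity, checked on the saturated case of Figure~\ref{fig:maxHI}a where both reach 1; names and data layout are illustrative, not the released implementation.

```python
import numpy as np

def entanglement(n_mat, total_edges):
    """Entanglement intensity I and homogeneity H from LIN weights.

    n_mat       : (|L| x |L|) matrix of co-occurrence counts n_{l,l'}
    total_edges : |E|, the number of distinct node pairs over all layers
    """
    L = n_mat.shape[0]
    diag = np.diag(n_mat).astype(float)
    C = n_mat / diag[:, None]                 # c_{l,l'} = n_{l,l'} / n_{l,l}
    np.fill_diagonal(C, diag / total_edges)   # c_{l,l}  = n_{l,l} / |E|
    eigvals, eigvecs = np.linalg.eig(C)
    k = int(np.argmax(eigvals.real))
    lam = eigvals[k].real
    gamma = np.abs(eigvecs[:, k].real)        # entanglement vector gamma
    intensity = lam / L                       # I = lambda / |L|
    # cosine similarity between gamma and the all-ones vector
    homogeneity = gamma.sum() / (np.linalg.norm(gamma) * np.sqrt(L))
    return intensity, homogeneity

# Saturated case: 3 layers, each holding the same 4 edges, so C is the
# all-ones matrix, lambda = |L|, and the eigenvector is uniform.
n_mat = np.full((3, 3), 4.0)
I, H = entanglement(n_mat, total_edges=4)
```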
\subsection{\REVISE{Transition coupling} entanglement}
\label{sec:interlayerdefinition}
We have defined the layer entanglement, which measures overlap between layers of a multiplex network, but many multiplex networks include another critical component: coupling edges~\cite{battiston2014structural}.
The coupling often models the transition of nodes \textit{between} layers; these transitions are captured by edges connecting nodes \textit{across} layers.
\begin{figure}[ht!]
\centering
\captionsetup{width=.90\linewidth}
\includegraphics[width=0.95\linewidth]{transition_edges.png}
\caption{Computing entanglement on the \REVISE{transition coupling} edges. (a) Coupling edges are illustrated in orange ($L_1-L_2$ edges) and in purple ($L_2-L_3$ edges). (b) Computing the corresponding $LIN$ and entanglement measures. Coupling edges of a same node resemble loops except they are defined across two layers. We may notice that: the \REVISE{transition coupling} $L_2-L_3$ shows a slightly higher index since there are more transitions for this coupling; the homogeneity $H$ is (almost) maximal since both layers are (almost) equally intertwined (only 2 layers, actual $H \approx 0.99986$).}
\label{fig:transition_layers}
\end{figure}
\begin{figure}[ht!]
\centering
\captionsetup{width=.90\linewidth}
\includegraphics[width=0.95\linewidth]{all_edges.png}
\caption{Computing entanglement on both inner-layer and coupling edges. (a) Note that in contrast to the example in Figure~\ref{fig:transition_layers}, we have added a loop to node $p_5$ in layer $L_3$ (in red) and a coupling edge connecting nodes $p_3$ of layer $L_2$ to $p_5$ in $L_3$. (b) Computing the corresponding $LIN$ and entanglement measures. We can notice that the most intertwined \REVISE{transition coupling} displays the highest entanglement index. Because there is limited overlap between elementary layer edges and \REVISE{transition coupling} edges, entanglement intensity $I$ is rather low.}
\label{fig:all_layers}
\end{figure}
Recall our multiplex graph $M = (V_M, E_M)$. Suppose $S$ is the set of \textit{elementary} layers, we can then have transitions between any pair of elementary layers $l \in S$ and $l' \in S$.
Let $u_l = (u,l)$, $u \in V_M$, $l \in S$, denote the instance of a node $u$ within a layer $l$.
A \REVISE{transition coupling} edge $e$ can be defined as follows: $e=(u_l,v_{l'}) \in E_M$ such that $e$ connects nodes $\{u, v\} \subseteq V_M$ across layers $l \neq l', \{l, l'\} \subseteq S$.
Coupling edges often connect the same node across two layers and may be used to model a physical transition, such as a change from subway to train in a station of a transportation network.
As a consequence, a pair of layers $(l, l')=t$ forms a \REVISE{transition coupling} $t \in T$ when there exists at least one such edge $e=(u_l,v_{l'}) \in E_M$.
Note that taken together, these elementary and \REVISE{transition coupling} subsets form the set of all layers $S \cup T = L$, and that the size of $T$ is bounded by the size of $S$ such that $|T| \leq \frac{1}{2}|S|(|S|-1)$.
Now, given this definition, nothing limits the computation of entanglement (introduced in Sections~\ref{subsec:LIN} to~\ref{subsec:inthom}) to the elementary layers of $M_S$ only, as illustrated in Figure~\ref{fig:transition_layers}.
\REVISE{Entanglement can also be used to characterise the coupling between these \textit{elementary} layers if applied only to the edges of the \textit{transition} coupling $M_T$.}
\REVISE{The nature of coupling often captures a very distinct characteristic of the network in comparison to its elementary layers. A transition coupling edge mostly connects the same node across layers, while elementary layers do not always display loops. Such cases may happen on rare occasions, one example being an underground path connecting subway stations modelled as a transition coupling, but the literature offers very few such examples. It is however technically possible to consider both elementary layers and transition coupling in one multiplex network $M$ to compute entanglement (as shown in Figure~\ref{fig:all_layers}), but we keep this discussion for the Appendix. In practice, the intensity and homogeneity greatly differ between them, and this often results in clearly separated components of the $LIN$.}
\section{A coupled multilayer network generator}
\label{sec:multiplex-generator}
In this section, we describe an algorithm which generates synthetic coupled multilayer networks, \textit{i.e.} multilayer networks which share some nodes across some layers, but do not guarantee that all nodes are being shared between all layers. \REVISE{These kinds of networks make the link between general multilayer networks and node-aligned multiplex networks (for which the assumption is that all nodes are shared through all layers~\cite{kivela2014multilayer}).}
The algorithm is based on the following observations. Let $M = (V_M, E_M)$ represent a coupled multilayer network with layer set $L$.
Each node is associated with a random subset of layers $\{l_1, l_2, \ldots, l_i\} \subseteq L$.
Now for each layer $l_i \in L$ there is a set of nodes $V_{l_i} \subseteq V_M$ which forms a potential set of edges of size $|E_{l_i}| = \frac{1}{2}|V_{l_i}|(|V_{l_i}|-1)$. We introduce $o$, a parameter determining the probability of a node occurring at a given layer. We then introduce the probability $p$ of an edge being created between any pair of nodes belonging to a layer, so that we may avoid cliques forming on each layer.
In our previous work \cite{skrlj2019patterns} we referred to the edge dropout $d = 1-p$ as the share of links we drop from the clique model.
Intuitively, the more similar a given random multiplex is to a clique over each layer, the higher its elementary layer intensity should be.
\REVISE{Hence, high intensity implies larger probability that two given nodes will have an edge between them on more than one layer.}
The generator also accounts for coupling by adding \REVISE{transition coupling} edges. These coupling edges connect nodes across two layers. We introduce $q$, the probability of the same node being connected across two layers. The higher $q$, the more nodes will be connected through layers. Note that in our initial work \cite{skrlj2019patterns}, neither $o$ nor $q$ was considered ($o$ was in fact picked uniformly).
\begin{algorithm}[t]
\Parameter{Number of nodes $n$, number of layers $m$, node occurrence probability $o$, inner-layer edge probability $p$, coupling edge probability $q$}
\KwResult{A coupled multilayer network $M$}
$M$ $\leftarrow$ emptyMultilayerObject\;
\For{node in $[1 \dots n]$}{
layerNodes $\leftarrow$ assignNodeToLayers(node, $o$, $m$)\\\Comment*[r]{Nodes are assigned to layers among $m$ with probability $o$.}
update($M$, layerNodes)\Comment*[r]{Update global network.}
}
\For{layer $l_i$ with corresponding node set $V_{l_i}$}{
nodeClique $\leftarrow$ generator of node pairs from $V_{l_i}$\Comment*[r]{With or without possible loops.}
innerLayerEdges $\leftarrow$ sampleWithProbability(nodeClique, $p$)\Comment*[r]{Sample via $p$.}
update($M$,innerLayerEdges)\Comment*[r]{Update global network.}
}
\For{layers $l_i, l_j$ with shared node set $V_{l_i, l_j}$}{
sameNodeTransitionCouplingEdges $\leftarrow$ sampleWithProbability($V_{l_i, l_j}$, $q$)\Comment*[r]{Sample via $q$.}
update($M$,sameNodeTransitionCouplingEdges)\Comment*[r]{Update global network.}
}
\Return{$M$}\;
\caption{A coupled multilayer network generator.}
\label{algo:rep}
\end{algorithm}
The purpose of this generator is to offer a simple \textit{testbed} for further exploration, as well as additional evidence of the relation between homogeneity and intensity on many random, synthetic networks. Algorithm~\ref{algo:rep} presents the proposed procedure.
The generator first randomly assigns each node index to a subset of the layers (lines 2-5). Once assigned, the layers are processed by sampling from the $\binom{|V_{l_i}|}{2}$ possible edges in layer $l_i$.
Note that in line 7, this clique is only virtually generated.
The global multiplex is updated during this process (lines 6-10).
These steps are then repeated for \REVISE{each \REVISE{transition coupling}} \textit{i.e.} pairs of elementary layers (lines 11-14).
\REVISE{The implementation thus uses a generator, for which \textit{lazy evaluation} avoids potential combinatorial explosion when considering a large number of nodes and low edge probability.}
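As an illustration, the lazy sampling described above can be sketched in Python as follows. All function and variable names here are ours, not those of the actual implementation; the virtual clique is never materialised, as each candidate edge is drawn and tested on the fly:

```python
import itertools
import random

def coupled_multilayer(n, m, o, p, q, seed=None):
    """Sketch of the generator of Algorithm 1 (illustrative names).

    Returns (inner_edges, coupling_edges): inner edges are ((u, v), layer)
    pairs; coupling edges are (node, layer_i, layer_j) triples.
    """
    rng = random.Random(seed)
    # Lines 2-5: assign each node to every layer independently with prob. o.
    layer_nodes = {l: [v for v in range(n) if rng.random() < o]
                   for l in range(m)}
    # Lines 6-10: lazily walk the virtual clique of each layer and keep
    # each candidate edge with probability p (no clique is materialised).
    inner_edges = [((u, v), l)
                   for l, nodes in layer_nodes.items()
                   for u, v in itertools.combinations(nodes, 2)
                   if rng.random() < p]
    # Lines 11-14: couple the same node across each layer pair with prob. q.
    coupling_edges = [(v, li, lj)
                      for li, lj in itertools.combinations(range(m), 2)
                      for v in set(layer_nodes[li]) & set(layer_nodes[lj])
                      if rng.random() < q]
    return inner_edges, coupling_edges
```

For instance, `coupled_multilayer(6, 3, 1.0, 1.0, 1.0)` yields the densest configuration: 45 inner-layer edges and 18 coupling edges.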
\subsection{Some theoretical properties of the generator}
\REVISE{In this section we show two properties of the proposed generator. We denote $n = |V_M|$ the parameter setting the number of nodes of the network, $m = |L|$ the parameter setting the number of edge layers in the network, and $p$ the inner-layer edge probability.
Let $\phi \in \mathbb{N}^{+}$ represent the number of possible edges. Then $\phi \leq m \cdot \binom{n}{2}.$
Let $o=1$. Each layer can have at most $n$ nodes. Assuming they form a clique, each layer thus comprises $\binom{n}{2}$ edges. As there are $m$ layers, there can be at most $m \cdot \binom{n}{2}$ edges --- a clique of $n$ nodes in each layer (assuming $p = 1$). We refer to this bound as $\phi \leq m \cdot \binom{n}{2}$.}
\REVISE{In the limit, as $p \rightarrow 1$, a full clique needs to be constructed, assuming each node is projected across all layers. The complexity \textit{w.r.t.} the number of layers and edges is: $\mathcal{O}(m \cdot \binom{n}{2}) = \mathcal{O}(|E_M|).$
Note that, even though the proposed generator conceptually creates a clique and then samples from it, the current lazy implementation only \textit{generates} the edges needed to satisfy a given probability $p$. In practice, the generator needs larger amounts of space (and time) only when $p \approx 1$. As fully connected networks do not represent real systems, we were able to generate a multitude of very diverse networks. This generator-based implementation does not make large spatial overheads impossible: such situations occur when very dense networks are considered.}
\REVISE{We next discuss the impact of the $q$ parameter.
The number of coupling edges has a worst-case complexity of $\mathcal{O}(\binom{m}{2}\cdot n)$, since $q$ directly depends on the number of available layers.
Let $l_a$ and $l_b$ represent a given pair of layers, where each layer consists of all $n$ possible nodes. As each node couples only to itself, there are at most $n$ edges between $l_a$ and $l_b$. As there are $\binom{m}{2}$ possible layer pairs, if the nodes in each pair are fully coupled, the network can have at most $\binom{m}{2} \cdot n$ coupling edges.}
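Both bounds can be checked by exhaustively enumerating the densest configuration ($o = p = q = 1$) for small, illustrative sizes:

```python
from itertools import combinations
from math import comb

n, m = 6, 3  # small illustrative sizes
# Enumerate every possible edge of the densest configuration.
inner = [(u, v, l) for l in range(m) for u, v in combinations(range(n), 2)]
coupling = [(v, li, lj) for li, lj in combinations(range(m), 2)
            for v in range(n)]
assert len(inner) == m * comb(n, 2)     # phi <= m * C(n, 2)
assert len(coupling) == comb(m, 2) * n  # at most C(m, 2) * n coupling edges
```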
However, is that also the case when considering only \REVISE{transition coupling}? Consider the following example of a multiplex network without coupling edges. No matter which $p$ is employed, if $q \approx 0$, coupling intensity will be low: very few coupling edges are introduced, and the observed \REVISE{$LIN$} will be very sparse. Hence, we posit that the distribution of intensity is \textit{constant} with respect to a given $p$. The proof of this claim is by contradiction: assume that $p$ did influence coupling entanglement intensity. Since transition coupling intensity is defined solely on the coupling edges, this would imply a dependency between $p$ and $q$, which by definition (and design) is not the case. Even if the nodes are \textit{isolated} in each layer, transition coupling intensity can be high. Note also that the node positioning, governed by $o$, directly impacts both elementary and transition coupling entanglement, since there are more opportunities for edges to overlap when nodes belong to many layers. These points are illustrated in our empirical evaluation (Section~\ref{sec:evaluation}) and further in the Appendix materials.
\section{Layer entanglement in temporal multiplex networks}
\label{sec:ent-description-model}
Analysis of temporal multiplex networks has shown promising results in multiple fields of science, such as healthcare and transportation \cite{sannino2017visibility}.
Since patterns of layer interaction networks result in typical entanglement values,
considering temporal entanglement means emphasizing particular topologies of a temporal multiplex network. For example, a high intensity among members of a multiplex social network communicating through different social media corresponds to a synchronization of communications between them. When such synchronization corresponds to the preparation of a particular event, understanding it could help forecast the event.
In this section, we first discuss how we define temporal multiplex networks and entanglement time series. We limit the following discussion to the consideration of entanglement between elementary layers only, \textit{i.e.} only inner-layer edges.
\subsection{Temporal multiplex networks and entanglement}
\begin{figure}[ht!]
\centering
\captionsetup{width=.90\linewidth}
\includegraphics[width = .95\linewidth]{scheme-temporal.png}
\caption{Converting temporal edges of multiple types into temporal entanglement series. a) Edges of different types are defined over time between $t_0$ and $t_f$. b) Time frames $d_1$, $d_2$, and $d_3$ are defined so we may construct the three corresponding multiplex network slices. c) For each slice, we can compute a $LIN$ and the corresponding entanglement intensity $I$ and homogeneity $H$, which compose the series once taken together among all slices.}
\label{schematic:multiplex}
\end{figure}
Real-life networks often evolve over time, making them behave differently at different points in time. In our current setting, we define the temporal aspect of our network such that each edge $e_t$ is defined at a specific time point $t$. A multiplex network $M_{d}$ can then be defined for a given time window ${d}$. A time window ${d} = [t_0, t_f]$ covers a time frame (beginning at $t_0$ and ending at $t_f$), and the multiplex network $M_d$ is defined such that each edge exists within the time window:
\begin{equation*}
M_d=(V_M,\{e_t \in E_M\}_{t\in d}).
\end{equation*}
\noindent The second scenario we consider is that of \textit{moving time windows}. Here, edges from the $f$ \textbf{past} windows are considered when constructing a given network $M$, \textit{i.e.},
\begin{equation*}
M_f=(V_M,\{e_t \in E_M\}_{t\in \{d-f,\dots,d-1\}}).
\end{equation*}
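As an illustration, the two slicing schemes above can be sketched as follows; the edge representation and function names are ours, not part of the original implementation:

```python
from collections import defaultdict

def slice_edges(edges, t0, tf, r):
    """Bucket timestamped edges (u, v, layer, t) into windows of resolution r.

    Returns {window_index: edge_list}; window d covers
    [t0 + d*r, t0 + (d+1)*r), so each slice defines one multiplex network M_d.
    """
    slices = defaultdict(list)
    for u, v, layer, t in edges:
        if t0 <= t < tf:
            slices[int((t - t0) // r)].append((u, v, layer))
    return dict(slices)

def moving_window(slices, d, f):
    """Edges of the f past slices {d-f, ..., d-1}, i.e. the network M_f."""
    return [e for k in range(d - f, d) for e in slices.get(k, [])]
```

Entanglement intensity and homogeneity, as defined in the previous sections, can then be computed on the multiplex network built from each slice to obtain the series ${\mathbb S}_{I}$ and ${\mathbb S}_{H}$.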
Our intuition is to compare the shape of a network at different moving time windows. For example, we could compare political social networks under different rulers of a country~\cite{renoust2016face, renoust2016visual}. To do so, we can simply compute entanglement homogeneity and intensity for each time window and compare them. Since our computation focuses only on edges, we consider the network as multiplex: the nodes are shared across all time frames.
Slicing into time windows is a separate topic, and many options are available~\cite{gomez2013diffusion, beck2014state}. For example, it could be achieved manually, with equal time slices, with a moving window, or based on the \textit{volume of changes}. In our context, we identify time windows through slices of equal duration in time, but the principle can be extended. We refer to the duration $r$ in time of the slices as the \textit{time resolution}.
We may now investigate entanglement homogeneity and intensity properties with respect to time resolution ($r$), and verify if patterns of intensity/homogeneity variation can be predicted.
Note that one challenge of slice-based modelling of temporal multiplex networks is the problem of selecting the correct resolution $r$, \textit{i.e.} how coarse (or fine)-grained the intervals must be in order to capture desired dynamics.
In a system covering a global period of $D$, once a slicing resolution is chosen, we can observe values of homogeneity and intensity at the time series level, \textit{i.e.} for each slice $d \in D$, and define the intensity time series ${\mathbb S}_{I} = \{I_{M_d}\}, \forall d \in D$ and the homogeneity time series as ${\mathbb S}_{H}= \{H_{M_d}\}, \forall d \in D$. These intensity and homogeneity time series can now feed further processing.
Note that \REVISE{$\mathbb{S}_{I_f}$ and $\mathbb{S}_{H_f}$} are defined analogously (entanglement for the past $f$ slices, moving in the increments of one slice).
The whole processing from temporal edges to time series is illustrated in Figure~\ref{schematic:multiplex}.
In our following evaluation (Section~\ref{sec:temporal}), we explore $\mathbb{S}_I$ and $\mathbb{S}_H$ when also considering a moving window of previous $f$ time slices. The rationale for considering past $f$ slices up to the considered time point is that such information only includes past data, and could indicate whether entanglement can be also used for \textit{forecasting purposes}. The second option considered, where only the current time slice was plotted, can shed insight on whether online monitoring based on $I$ or $H$ is a sensible option.
\section{Empirical evaluation}
\label{sec:evaluation}
We now study entanglement intensity and homogeneity across different series of networks.
We first investigate entanglement measures across different parameters of synthetic settings. We follow with investigations on a large panel of real-world networks. We conclude with the study of entanglement in temporal multiplex networks.
\begin{figure}[ht!]
\centering
\captionsetup{width=.90\linewidth}
\begin{tabular}{cc}
\subcaptionbox{Lower elementary $I$.}{\includegraphics[width = 0.45\linewidth]{elementary_H.png}} &
\subcaptionbox{Higher elementary $I$.}{\includegraphics[width = 0.45\linewidth]{elementary_I.png}} \\
\end{tabular}
\caption{\label{fig:viz-inn}Visualization of inner-layer edges in synthetic coupled multilayer networks.}
\end{figure}
\subsection{Entanglement in synthetic networks}
\label{sec:multilayer}
In this first study, we compare entanglement measures over a series of synthetic multiplex networks, using our proposed generator.
\REVISE{
We consider, for all our generations, the following key parameters:
\begin{itemize}
\item Number of nodes ($n$) from 10 to 200 in increments of 10.
\item Number of layers ($m$) in $\{1, 2, 3, 4, 6, 7, 9, 10\}$.
\item Layer assignment probability ($o$) from 0 to 1 in increments of 0.05.
\item Edge probability ($p$) from 0 to 1 in increments of 0.05.
\item \REVISE{Transition coupling} edge probability ($q$) from 0 to 1 in increments of 0.05.
\end{itemize}
}
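As an illustration, this grid can be enumerated lazily; the exact total number of generated networks depends on which parameters vary in each experiment:

```python
import itertools

# Parameter grid of the synthetic study (increments as listed above).
ns = list(range(10, 201, 10))                    # number of nodes n
ms = [1, 2, 3, 4, 6, 7, 9, 10]                   # number of layers m
probs = [round(0.05 * i, 2) for i in range(21)]  # 0.00 ... 1.00, for o, p, q

# One generator configuration per tuple (n, m, o, p, q), enumerated lazily.
grid = itertools.product(ns, ms, probs, probs, probs)
first = next(grid)  # the first configuration of the sweep
```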
\subsubsection{Multiplex networks without \REVISE{transition coupling}}
A first generation concerns multiplex networks settings in which \REVISE{transition coupling} is not specified (for example, friendship over different social platforms), \REVISE{so we do not consider parameter $q$ here.}
We have generated in total 1{,}329{,}696 synthetic networks (a couple are illustrated in Figure~\ref{fig:viz-inn}).
\begin{figure}[ht!]
\centering
\captionsetup{width=.90\linewidth}
\includegraphics[width = 0.75\linewidth]{synthetic_TYPE1_GLOBALHI.png}
\caption{Homogeneity and intensity $H \times I$ on 1{,}329{,}696 synthetic multiplex networks without \REVISE{transition coupling} with density lines (Gaussian kernel density estimation).}
\label{fig:syn_parameters}
\end{figure}
We measure entanglement intensity $I$ and homogeneity $H$ on each generated network (averaged over all connected components \REVISE{of the layer overlap frequency matrix}). We investigate the role of the different parameters over the entanglement measures, as illustrated in Figures~\ref{fig:syn_parameters}, \ref{fig:inner_nm} and~\ref{fig:syn_dropout}.
There is an obvious dependency between entanglement intensity and homogeneity since we cannot obtain low homogeneity with high intensity values (Figure~\ref{fig:syn_parameters}). This is due to the nature of both measures. With a high intensity, most of the layers are overlapping over most of the network. As a consequence, there is little space for permutations in the way layers overlap, this means the entanglement of all individual layers $\gamma_l$ tends to align, hence resulting in high values of homogeneity.
This leads to a denser production of high homogeneity networks as illustrated by the density lines in Figure~\ref{fig:syn_parameters}.
\begin{figure}[ht!]
\centering
\captionsetup{width=.90\linewidth}
\begin{tabular}{cc}
\subcaptionbox{Elementary $H \times n$}{\includegraphics[width = 0.45\linewidth]{synthetic_TYPE1_Homogeneityn.png}} &
\subcaptionbox{Elementary $I \times n$}{\includegraphics[width = 0.45\linewidth]{synthetic_TYPE1_Intensityn.png}} \\
\subcaptionbox{Elementary $H \times m$}{\includegraphics[width = 0.45\linewidth]{synthetic_TYPE1_Homogeneitym.png}} &
\subcaptionbox{Elementary $I \times m$}{\includegraphics[width = 0.45\linewidth]{synthetic_TYPE1_Intensitym.png}} \\
\end{tabular}
\caption{Results on synthetic multiplex networks without considering \REVISE{transition coupling}. Dependency of the elementary layer entanglement on the number of nodes $n$ (a, b) and layers $m$ (c, d). The intensity (b, d) shows some dependency on each parameter.}
\label{fig:inner_nm}
\end{figure}
The number of nodes $n$ and layers $m$ do not show a strong dependency with homogeneity, but a slight one on intensity. Higher values of $n$ and $m$ make it easier to obtain sparser networks, resulting in lower values of intensity. We further illustrate these effects in Figure~\ref{fig:inner_nm}. This effect diminishes quickly with higher numbers of nodes and layers.
\begin{figure}[ht!]
\centering
\captionsetup{width=.90\linewidth}
\begin{tabular}{cc}
\subcaptionbox{Elementary $H \times o$}{\includegraphics[width = 0.45\linewidth]{synthetic_TYPE1_Homogeneityo.png}} &
\subcaptionbox{Elementary $I \times o$}{\includegraphics[width = 0.45\linewidth]{synthetic_TYPE1_Intensityo.png}} \\
\subcaptionbox{Elementary $H \times p$}{\includegraphics[width = 0.45\linewidth]{synthetic_TYPE1_Homogeneity.png}} &
\subcaptionbox{Elementary $I \times p$}{\includegraphics[width = 0.45\linewidth]{synthetic_TYPE1_Intensity.png}} \\
\end{tabular}
\caption{Results on synthetic multiplex networks without considering \REVISE{transition coupling}. There is a small dependency on the layer assignment probability $o$ to nodes, since the higher it is, the more overlap may occur. The homogeneity (c) shows less dependency on the inner-layer edge probability $p$ than intensity (d), which also increases the likelihood of layer overlap.}
\label{fig:syn_dropout}
\end{figure}
We further explore the \REVISE{layer} assignment probability of a node $o$, and the inner-layer edge probability $p$ in Figure~\ref{fig:syn_dropout}.
A first dependency appears on the layer assignment probability $o$, for which higher values tend to produce higher homogeneity (Figure~\ref{fig:syn_dropout}a). Higher homogeneity is reached when all layers contribute equally, meaning that a higher $o$ gives each layer more chances to contain most of the nodes.
We may also observe an apparent linear trend between the edge probability $p$ (sparseness) and entanglement intensity (Figure~\ref{fig:syn_dropout}d). This trend confirms that sparser networks (\textit{i.e.} lower $p$) overlap less ``intensely'' over edges. As intensity directly measures this property, this result outlines one of the \textit{desired} properties of the proposed network generator.
\subsubsection{Multiplex networks with \REVISE{transition coupling}}
\begin{figure}[ht!]
\centering
\captionsetup{width=.90\linewidth}
\begin{tabular}{cc}
\subcaptionbox{Lower transition $I$.}{\includegraphics[width = 0.45\linewidth]{transition_I1.png}} &
\subcaptionbox{Higher transition $I$.}{\includegraphics[width = 0.45\linewidth]{transition_I2.png}} \\
\end{tabular}
\caption{\label{fig:viz-int}Visualization of coupling edges in synthetic coupled multilayer networks.}
\end{figure}
A second experiment focuses on multiplex graphs with \REVISE{transition coupling}, \textit{i.e.} considering \REVISE{only the} coupling edges in our 1{,}329{,}696 generated networks (illustrated in Figure~\ref{fig:viz-int}). This experiment reproduces the previous one, but focuses on the \REVISE{transition coupling} entanglement.
Results are shown in Figures~\ref{fig:multilayer-synth} and~\ref{fig:coupled-gen}; the dependency on the number of nodes and layers is illustrated in the Appendix. From Figure~\ref{fig:multilayer-synth}, the shape is globally the same, the difference being a density skewed towards high homogeneity, without a dense production of very-low-intensity generated networks (see the density lines).
\begin{figure}[ht!]
\centering
\captionsetup{width=.90\linewidth}
\includegraphics[width = 0.75\linewidth]{synthetic_TYPE2_GLOBALHI.png}
\caption{Homogeneity and intensity $H \times I$ results on 1{,}329{,}696 synthetic multiplex networks considering their \REVISE{transition coupling} with density lines (Gaussian kernel density estimation).
}
\label{fig:multilayer-synth}
\end{figure}
\begin{figure}[ht!]
\centering
\captionsetup{width=.90\linewidth}
\begin{tabular}{cc}
\subcaptionbox{Transition ($H \times o$).}{\includegraphics[width = 0.45\linewidth]{synthetic_TYPE2_Homogeneityo.png}} &
\subcaptionbox{Transition ($I \times o$).}{\includegraphics[width = 0.45\linewidth]{synthetic_TYPE2_Intensityo.png}} \\
\subcaptionbox{Transition ($H \times q$).}{\includegraphics[width = 0.45\linewidth]{synthetic_TYPE2_Homogeneityq.png}} &
\subcaptionbox{Transition ($I \times q$).}{\includegraphics[width = 0.45\linewidth]{synthetic_TYPE2_Intensityq.png}} \\
\end{tabular}
\caption{\label{fig:coupled-gen}Homogeneity and intensity distributions in \REVISE{transition coupling} entanglement \textit{w.r.t.} \REVISE{$o$} and $q$.}
\end{figure}
The profile is essentially the same as that of the previous experiment, except that the layer assignment probability $o$ appears to have a more diffuse impact, and the direct dependency is this time observed on the coupling edge probability $q$. Parameter $p$ obviously does not influence this entanglement; the corresponding comparison can be found in the Appendix materials for additional inspection.
Overall, the networks with \REVISE{transition coupling} are more \textit{saturated} when compared to the ones without transition. The reason may be that we consider here only \REVISE{transition coupling} edges, which connect the \textit{same node} across layers.
For the interested reader, we also illustrate in the Appendix material the independence of the elementary layer entanglement from $q$, and of the \REVISE{transition coupling} entanglement from $p$. We also report there the computation of entanglement over the combined elementary layers and \REVISE{transition coupling}, which displays a dependency on both $p$ and $q$ parameters. \REVISE{Finally, we have computed the layer correlation coefficient, as suggested in~\cite{nicosia2015measuring}, confirming the role of the different parameters of our generator.}
\subsection{Multiplex network comparison across disciplines}
\label{sec:results-comparison}
\begin{figure}[htbp]
\centering
\captionsetup{width=.90\linewidth}
\begin{tabular}{cc}
\subcaptionbox{\REVISE{Real networks: $H$}}{\includegraphics[width = 0.40\linewidth]{real_Homogeneity}} &
\subcaptionbox{\REVISE{Real networks: $I$}}{\includegraphics[width = 0.40\linewidth]{real_Intensity}} \\
\end{tabular}
\caption{Entanglement homogeneity and intensity compared for each category of networks, showing a quite diverse set of properties proper to the different families of networks.}
\label{fig:real}
\end{figure}
\begin{table}[ht!]
\centering
\caption{Real multiplex networks and their properties. The ID in the second column corresponds to Figure~\ref{fig:real_overlay}.}
\resizebox{0.9\textwidth}{!}{
\begin{tabular}{lccccccccc}\toprule
Dataset & ID & Type & Nodes & Edges & Number of layers & Mean degree & CC & Intensity & Homogeneity \\ \midrule
arXiv-Netscience~\cite{de2015identifying} & 34 &Coauthorship & 26796 & 59026 & 13 & 4.41 & 3660 & 0.114786 & 0.641670 \\
PierreAuger~\cite{de2015identifying} & 26 & Coauthorship & 965 & 7153 & 16 & 14.82 & 131 & 0.086551 & 0.716156 \\
Arabidopsis~\cite{stark2006biogrid} & 0 & Genetic & 8765 & 18655 & 7 & 4.26 & 387 & 0.111636 & 0.408940 \\
Bos~\cite{stark2006biogrid} & 1 & Genetic & 369 & 322 & 4 & 1.75 & 82 & 0.160341 & 0.582015 \\
Candida~\cite{stark2006biogrid} & 5 & Genetic & 418 & 398 & 7 & 1.90 & 50 & 0.284783 & 0.888476 \\
Celegans~\cite{stark2006biogrid} & 7 & Genetic & 4557 & 8182 & 6 & 3.59 & 193 &0.115718 & 0.420231 \\
DanioRerio~\cite{stark2006biogrid} & 8 & Genetic & 180 & 188 & 5 & 2.09 & 45 & 0.068219 & 0.870304\\
Drosophila~\cite{stark2006biogrid} & 9 & Genetic & 11970 & 43367 & 7 & 7.25 & 346 & 0.082283 & 0.405509 \\
Gallus~\cite{stark2006biogrid} & 12 & Genetic & 367 & 389 & 6 & 2.12 & 54 & 0.151845 & 0.433374 \\
HepatitusCVirus~\cite{stark2006biogrid} & 13 & Genetic & 129 & 137 & 3 & 2.12 & 4 & 0.304679 & 0.777382 \\
Homo Sapiens~\cite{stark2006biogrid} & 14 & Genetic & 36194 & 170899 & 7 & 9.44 & 785 & 0.101047 & 0.519648 \\
HumanHerpes4~\cite{stark2006biogrid} & 16 & Genetic & 261 & 259 & 4 & 1.98 & 21 & 0.245979 & 0.595037 \\
HumanHIV1~\cite{stark2006biogrid} & 15 & Genetic & 1195 & 1355 & 5 & 2.27 & 13 & 0.158347 & 0.583648 \\
Oryctolagus~\cite{stark2006biogrid} & 24 & Genetic & 151 & 144 & 3 & 1.91 & 21 & 0.241322 & 0.635943 \\
Plasmodium~\cite{stark2006biogrid} & 27 & Genetic & 1206 & 2522 & 3 & 4.18 & 27 & 0.249623 & 0.853694 \\
Rattus~\cite{stark2006biogrid} & 28 & Genetic & 3263 & 4268 & 6 & 2.62 & 296 & 0.126889 & 0.457888 \\
SacchCere~\cite{stark2006biogrid} & 29 & Genetic & 27994 & 282755 & 7 & 20.20 & 432 & 0.070428 & 0.695150 \\
SacchPomb~\cite{stark2006biogrid} & 30 & Genetic & 10178 & 63677 & 7 & 12.51 & 286 & 0.079756 & 0.407135 \\
Xenopus~\cite{stark2006biogrid} & 32 & Genetic & 582 & 620 & 5 & 2.13 & 109 & 0.082539 & 0.829466\\
YeastLandscape~\cite{costanzo2010genetic} & 33 & Genetic & 17770 & 8473997 & 4 & 953.74 & 4 & 0.132035 & 0.534030 \\
CElegans~\cite{chen2006wiring} & 7 & Neuronal & 791 & 5863 & 3 & 14.82 & 6 & 0.339461 & 0.856373 \\
Cannes2013~\cite{omodei2015characterizing} & 6 & Social & 659951 & 991854 & 3 & 3.01 & 48375 & 0.269159 & 0.900587 \\
CKM-Physicians-Innovation~\cite{coleman1957diffusion} & 3 & Social & 674 & 1551 & 3 & 4.60 & 12 &0.394666 & 0.988309 \\
CS-Aarhus~\cite{magnani2013combinatorial} & 4 & Social & 224 & 620 & 5 & 5.54 & 13 &0.341388 & 0.894766 \\
Kapferer-Tailor-Shop~\cite{kapferer1972strategy} & 17 & Social & 150 & 1018 & 4 & 13.57 & 5 & 0.438509 & 0.910168 \\
Krackhardt-High-Tech~\cite{krackhardt1987cognitive} & 18 & Social & 63 & 312 & 3 & 9.90 & 3 &0.412875 & 0.838791 \\
Lazega-Law-Firm~\cite{lazega2001collegial} & 19 & Social & 211 & 2571 & 3 & 24.37 & 3 & 0.516232 & 0.970364 \\
MLKing2013~\cite{omodei2015characterizing} & 21 & Social & 392542 & 396671 & 3 & 2.02 & 36041 & 0.260099 & 0.624426 \\
MoscowAthletics2013~\cite{omodei2015characterizing} & 22 & Social & 133619 & 210250 & 3 & 3.15 & 6323 & 0.246321 & 0.880520 \\
ObamaInIsrael2013~\cite{omodei2015characterizing} & 23 & Social & 3457453 & 4061960 & 3 & 2.35 & 651141 &0.316202 &0.835469 \\
Padgett-Florence-Families~\cite{padgett1993robust} & 25 & Social & 26 & 35 & 2 & 2.69 & 2 & 0.547715 & 0.986433 \\
Vickers-Chan-7thGraders~\cite{vickers1981representing} & 31 & Social & 87 & 740 & 3 & 17.01 & 3 &0.705372 & 0.968908 \\
FAO~\cite{de2015structural} & 11 & Trade & 41713 & 318346 & 364 & 15.26 & 571 & 0.290018 & 0.843847 \\
EUAir~\cite{cardillo2013emergence} & 10 & Transport & 2034 & 3588 & 37 & 3.53 & 41 &0.015499 & 0.743443 \\
London~\cite{de2014navigability} & 20 & Transport & 399 & 441 & 3 & 2.21 & 3 & 0.236502 & 0.875838 \\
\bottomrule
\end{tabular}
}
\label{tab:summary}
\end{table}
We now consider real world static networks.
All considered networks are summarised with their main characteristics in Table~\ref{tab:summary}\footnote{The networks are hosted at \url{https://comunelab.fbk.eu/data.php}}. Unfortunately, we have not found a real case with a large number of \REVISE{transition coupling} edges, so we limit this evaluation to elementary layer entanglement. For each network, we computed elementary layer homogeneity and intensity, for all connected components.
We first investigate individual results through the distributions of each metric across network types (Figure~\ref{fig:real}). We then compare individual networks across entanglement intensity and homogeneity (Figure~\ref{fig:real_overlay}).
\begin{figure}[ht!]
\centering
\captionsetup{width=.90\linewidth}
\includegraphics[width = 0.95\linewidth]{real_hom_int}
\caption{Real networks: $H \times I$. Labels of networks map to Table~\ref{tab:summary} (ID). Grey dots represent synthetic samples of Figure~\ref{fig:syn_parameters}, with Gaussian kernel density estimation lines over the real-world samples. Social networks show \REVISE{a tendency to fall within the high homogeneity/intensity range, coinciding with the high inner-layer edge probability parameter $p$ of synthetic networks.}}
\label{fig:real_overlay}
\end{figure}
\begin{figure}[htbp]
\centering
\captionsetup{width=.90\linewidth}
\begin{tabular}{cc}
\subcaptionbox{\REVISE{Genetic \textit{vs.} Social networks - $H$}}{\includegraphics[width = 0.40\linewidth]{real_homogeneity_dist_}} &
\subcaptionbox{\REVISE{Genetic \textit{vs.} Social networks - $I$}}{\includegraphics[width = 0.40\linewidth]{real_intensity_dist_}} \\
\end{tabular}
\caption{Distributions of homogeneity and intensity when genetic networks are compared to social ones.}
\label{fig:dist}
\end{figure}
Two main observations are apparent when studying the results on real networks. First, the difference between social and genetic (biological) multiplex networks becomes obvious when both entanglement intensity and homogeneity are considered (Figure~\ref{fig:real_overlay}).
To confirm these differences, we further compare their distributions, \textit{i.e.}, the intensity and homogeneity of social \textit{vs.} genetic networks, in Figure~\ref{fig:dist}.
In addition, from Figure~\ref{fig:real_overlay}, we may observe that many genetic networks sit in relatively low intensity/homogeneity regions, whereas social networks sit in the top-right corner: the high entanglement homogeneity of social networks is quite noticeable. This suggests a few interpretations:
\begin{itemize}
\item genetic networks show in general very little layer overlap;
\item \REVISE{some genetic networks could be matched to synthetic networks of low inner-layer edge probability, especially when homogeneity is low, being very sparse, potentially pointing at low layer assignment probability too;}
\item layers in social networks tend to overlap a lot;
\item social networks tend to be quite dense and may be simulated by synthetic networks with a high inner-layer edge probability.
\end{itemize}
The results on social networks indicate a high level of layer overlap, which may be due to the overall behaviour of people being rather similar across different networks, whatever their means of interaction. \textit{Simmelian ties, triadic closure, and homophily} (which are well studied in social sciences) are probably strong drivers of this layer overlap.
\subsection{Entanglement in temporal multiplex networks}
\label{sec:temporal}
In our last experiment, we investigate entanglement across time slices of three real-life temporal multiplex networks:
\textit{MLKing2013}, \textit{MoscowAthletics2013}, and \textit{Cannes2013} (as found in \cite{omodei2015characterizing}). Each network consists of a collection of Twitter activity related to some event. The networks comprise three layers of connection, namely \textit{retweets}, \textit{replies} and \textit{comments}. They can be summarised as follows. The \textit{MLKing2013} data set consists of 421{,}083 events covering a week of celebration of M.L. King's speech ``\textit{I have a dream}'' in 2013, forming 396{,}671 edges between 327{,}708 nodes. The \textit{MoscowAthletics2013} data set consists of 303{,}330 events covering two weeks of the World Championships of Athletics held in Moscow in 2013, forming 210{,}250 edges between 88{,}805 nodes.
The \textit{Cannes2013} network consists of 1{,}297{,}545 events (temporal edges) covering a month of the 2013 Cannes Film Festival, together forming a network of 930{,}419 edges and 438{,}538 nodes. Note that the networks are not trivially small, offering additional evidence of the stability of the entanglement computation.
The networks were analysed following the methodology introduced in Section~\ref{sec:ent-description-model}. We propose two experiments with regard to time segmentation.
The first experiment considers fixed time windows of sizes 1h, 3h, 6h, and 12h. We compare with the activity volume in the form of the total number of tweets, as found in \cite{omodei2015characterizing} (their Figure 1, for a 1h window size), here reported in Figures~\ref{fig:mlking}a,~\ref{fig:moscow}a, and~\ref{fig:cannes}a. We normalise this volume so values lie in $[0, 1]$.
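This normalisation can be sketched as follows; min--max scaling is one plausible reading (our assumption -- the volume may instead simply be divided by its maximum):

```python
def normalise(volume):
    """Min-max rescaling of an activity-volume series into [0, 1]."""
    lo, hi = min(volume), max(volume)
    if hi == lo:  # constant series: map everything to 0
        return [0.0] * len(volume)
    return [(v - lo) / (hi - lo) for v in volume]
```

For example, `normalise([2, 4, 6])` returns `[0.0, 0.5, 1.0]`.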
We selected the coarsened window size offering the best readability for each dataset (3h for \textit{MLKing2013} in Figure~\ref{fig:mlking}b, 6h for \textit{MoscowAthletics2013} in Figure~\ref{fig:moscow}b, and 12h for \textit{Cannes2013} in Figure~\ref{fig:cannes}b) -- each coarsening is further illustrated in the Appendix.
A second experiment considers a moving window of the corresponding best size, sliding by one hour (Figures~\ref{fig:mlking}c,~\ref{fig:moscow}c, and~\ref{fig:cannes}c).
\begin{figure}[ht!]
\centering
\captionsetup{width=.90\linewidth}
\subcaptionbox{\textit{MLKing2013}, static window of 1h}{\includegraphics[width = 0.45\linewidth]{both_basic_slice_1h_1_mlking_1.png}} \\
\begin{tabular}{cc}
\subcaptionbox{\textit{MLKing2013}, static window of 3h}{\includegraphics[width = 0.45\linewidth]{both_basic_slice_3h_1_mlking_1.png}} &
\subcaptionbox{\textit{MLKing2013}, sliding window of 3h, \REVISE{with 1h steps}}{\includegraphics[width = 0.45\linewidth]{both_window_slice_1h_1_mlking_3.png}}
\end{tabular}
\caption{Visualization of temporal entanglement across \textit{MLKing2013}. In grey, volume over the period of time (dotted line for the aggregated volume over sliding window (c)). Intensity in blue and homogeneity in yellow.}
\label{fig:mlking}
\end{figure}
\begin{figure}[ht!]
\centering
\captionsetup{width=.90\linewidth}
\subcaptionbox{\textit{MoscowAthletics2013}, static window of 1h}{\includegraphics[width = 0.45\linewidth]{both_basic_slice_1h_1_moscow_1.png}} \\
\begin{tabular}{cc}
\subcaptionbox{\textit{MoscowAthletics2013}, static window of 6h}{\includegraphics[width = 0.45\linewidth]{both_basic_slice_6h_1_moscow_1.png}} &
\subcaptionbox{\textit{MoscowAthletics2013}, sliding window of 6h, \REVISE{with 1h steps}}{\includegraphics[width = 0.45\linewidth]{both_window_slice_1h_1_moscow_6.png}}
\end{tabular}
\caption{Visualization of temporal entanglement across \textit{MoscowAthletics2013}. In grey, volume over the period of time (dotted line for the aggregated volume over sliding window (c)). Intensity in blue and homogeneity in yellow.}
\label{fig:moscow}
\end{figure}
\begin{figure}[ht!]
\centering
\captionsetup{width=.90\linewidth}
\subcaptionbox{\textit{Cannes2013}, static window of 1h }{\includegraphics[width = 0.45\linewidth]{both_basic_slice_1h_1_cannes_1.png}} \\
\begin{tabular}{cc}
\subcaptionbox{\textit{Cannes2013}, static window of 12h}{\includegraphics[width = 0.45\linewidth]{both_basic_slice_12h_1_cannes_1.png}} &
\subcaptionbox{\textit{Cannes2013}, sliding window of 12h, \REVISE{with 1h steps}}{\includegraphics[width = 0.45\linewidth]{both_window_slice_1h_1_cannes_12.png}}
\end{tabular}
\caption{Visualization of temporal entanglement across \textit{Cannes2013}. In grey, volume over the period of time (dotted line for the aggregated volume over sliding window (c)). Intensity in blue and homogeneity in yellow.}
\label{fig:cannes}
\end{figure}
In the \textit{MLKing2013} data set (Figure~\ref{fig:mlking}), we can observe that spikes of intensity surround the main spike of volume activity. A smaller spike of intensity consistently coincides with a smaller spike of volume at the end of the main spike.
In the \textit{MoscowAthletics2013} data set (Figure~\ref{fig:moscow}), the 1h-time window does not show a consistent behaviour. However, we can see that spikes in coarser time windows coincide with the spikes in volume. A larger spike in intensity appears before the final spike in volume.
In the \textit{Cannes2013} data set (Figure~\ref{fig:cannes}), the 1h-time window shows some spikes in intensity, especially a major one near the end of the period of volume activity. In coarser time windows, we can notice four main spikes: one before the beginning of the volume activity; the next two appear just before a slight increase in the daily volume; the last one appears the day before the last day of volume activity. This last peak appears even more prominent in the sliding-window view.
The volume captures Twitter activity, governed by the human activity following the day/night rhythm. Although entanglement intensity is also subject to this rhythm, we see emerging patterns that seem proper to each type of event. The activity of entanglement definitely shows some relationship with volume while telling a different story. The sports event \textit{MoscowAthletics2013} may be much more subject to the day-by-day routine in which different disciplines are at play. On the other hand, the speech celebration in \textit{MLKing2013} has some very specific activity before (could it be anticipation?) and after (could it be ripples?) the event. The movie festival in \textit{Cannes2013} may be governed by sub-events of different importance in terms of networking activity.
In accordance with the position of social networks in our evaluation of real-world networks in Section~\ref{sec:results-comparison}, we see a decrease in homogeneity whenever we see spiking of intensity. This may indicate that a lot of the network activity suddenly focuses on one specific modality of exchange (such as \textit{replies}). Entanglement study may help in targeting when this is driven by a particular modality.
Further studies on the nature of the events, and on the specific topologies of the $LIN$ networks that gave rise to these entanglement values, are necessary for a more in-depth analysis of each case. Since we see some spiking activity of entanglement before actual events took place, we may suspect that, beyond monitoring, modelling time series from entanglement in past data (sliding windows) has some \textit{predictive power}.
\section{Discussion and conclusions}
In this work, we have revisited the notion of layer entanglement and extended it to coupled multilayer networks and temporal networks. To investigate entanglement, we have proposed a random generator for coupled multilayer networks, and generated a large set of synthetic ones. We have evaluated entanglement intensity and homogeneity in all cases, and compared to static and temporal real world networks.
Our analysis of the synthetic networks outlined that entanglement intensity is directly correlated with the edge probability parameter -- the sparser the network, the lower the intensity. This result indicates that the proposed generator indeed emits networks which adhere to this property. We have also observed that large parts of the generated networks are subject to high homogeneity with various degrees of entanglement intensity.
\REVISE{Entanglement in the synthetic networks appears very sensitive to the different probabilities characterising the model ($o$, $p$, and eventually $q$ for the coupled multilayer networks). The influence of each parameter should also be investigated theoretically in future work.}
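One plausible reading of the three probabilities can be sketched as follows; this is a hypothetical reconstruction for illustration, not the published generator: $o$ as the probability that a node is active in a layer, $p$ as the intra-layer edge probability between active nodes, and $q$ as the inter-layer coupling probability.

```python
import random

def coupled_multilayer(n, layers, o, p, q, rng=random.Random(42)):
    """Sketch of a coupled multilayer generator (hypothetical parameter reading):
    o = node-to-layer assignment probability, p = intra-layer edge probability,
    q = inter-layer coupling edge probability."""
    # assign each node to each layer independently with probability o
    active = {l: [v for v in range(n) if rng.random() < o] for l in range(layers)}
    # intra-layer edges between active nodes, with probability p
    intra = []
    for l, nodes in active.items():
        for i, u in enumerate(nodes):
            for v in nodes[i + 1:]:
                if rng.random() < p:
                    intra.append((u, v, l))
    # coupling edges across layer pairs, with probability q
    coupling = []
    for l1 in range(layers):
        for l2 in range(l1 + 1, layers):
            for u in active[l1]:
                for v in active[l2]:
                    if u != v and rng.random() < q:
                        coupling.append((u, l1, v, l2))
    return intra, coupling
```

Sweeping $o$, $p$ and $q$ over a grid in such a sketch is one way to probe, empirically, the sensitivity discussed above.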
The high homogeneity observed may be a byproduct of our computations. First, our random generation \REVISE{induces a lot of small connected components of the coupled multilayer networks, and small components tend to show higher homogeneity since there are not so many degrees of freedom for edges to overlap}. Because we are averaging the entanglement intensity and homogeneity over all components, this may go in favour of high homogeneity. Understanding this effect deserves more investigation. Second, entanglement homogeneity is a cosine measure, and the observed values may suffer from the skewness of cosine values when distributed in a linear space, amplifying the effect of having large values. Furthermore, it might also suffer from the curse of dimensionality in the case of a high number of layers. It would be worth considering normalizing this homogeneity with respect to the number of layers involved and the number of edges they cover. \REVISE{Instead of cosine, a Shannon's entropy measure may overcome some of these limitations.}
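The skewness effect can be illustrated with a toy computation (a sketch with random nonnegative vectors, not the paper's homogeneity pipeline): cosine similarity between unstructured random profiles already sits around $3/4$ and concentrates there as the dimension grows, while a Shannon-entropy score normalized by the logarithm of the dimension remains comparable across dimensions.

```python
import math
import random

def cosine(u, v):
    # plain cosine similarity between two nonnegative profiles
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def normalized_entropy(v):
    # Shannon entropy of v viewed as a distribution, normalized to [0, 1] by log(dim)
    total = sum(v)
    probs = [x / total for x in v if x > 0]
    return -sum(p * math.log(p) for p in probs) / math.log(len(v))

def mean_cosine(dim, trials=200, rng=random.Random(0)):
    acc = 0.0
    for _ in range(trials):
        u = [rng.random() for _ in range(dim)]
        v = [rng.random() for _ in range(dim)]
        acc += cosine(u, v)
    return acc / trials

# For uniform nonnegative vectors the mean cosine tends to 3/4 and its spread
# shrinks as the dimension grows, so high "homogeneity" is cheap in high dimension.
low_dim, high_dim = mean_cosine(3), mean_cosine(100)
```

By contrast, `normalized_entropy([1, 1, 1, 1])` is exactly $1$ while any skewed profile scores strictly below it, independently of the dimension, which is one way such a replacement could sidestep the cosine concentration.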
\REVISE{One of the aspects that was not extensively evaluated as a part of this work is the processing of the repeated links in a given time slice. The current implementation considers, for each time slice, the collection of \textit{unique} links, which are not weighted by their possible multiple occurrences. This way, the diversity of connections is emphasized, instead of link frequency. A more detailed study of how the links can be re-weighted will be considered in future work.}
We further demonstrated that the two measures offer interesting insights when computed across a wide array of real-world networks. The observed relationship between the intensity and homogeneity of layer entanglement and the family of the dataset was previously reported for clusters of documents (in~\cite{renoust2013measuring}, Figure 5). In these previous experiments, clusters of documents were mostly located at the left frontier of high intensity for a varying homogeneity. Our current experiments showed that real networks cluster based on their type (\textit{e.g.} biological \textit{vs.} social), also close to this frontier. \REVISE{We have observed (from Figure~\ref{fig:real_overlay}) that the set of genetic networks tends to sit in areas with low entanglement intensity, which could correspond to a lower edge probability $p$, but they also tend to show a wider span of entanglement homogeneity including our lowest measured values (from Figure~\ref{fig:real}), which could correspond to a lower layer assignment probability $o$. \REVISE{Further work should be invested in finding the reason why genetic networks tend to show lower homogeneity.} This is opposed to social networks, which tend to find their way in the higher probability area.} This should be further investigated, but it may be related to \textit{homophily}~\cite{mcpherson2001birds,borgatti2009network}. Homophily is the implied similarity of two entities in a social network, and the property of entities to agglomerate when \textit{being similar}. If the reason for \textit{`being similar'} could be modelled as a layer of interaction, the result of a group of entities \textit{`being similar'} would lead to the formation of a clique in this layer, hence locating social networks in \REVISE{high probability areas}.
The proposed work offers at least two prospects of multiplex network study which we believe are worth exploring further. The difference between the genetic and social networks is possibly subject to very distinct topologies which emerge in individual layers. This claim may further be investigated via other measurements, such as graphlets, communities or other structures. Next, genetic networks are less homogeneous. Future work includes exploration of this fact, as it may be merely a property of the networks considered, of the empirical methodology used to obtain them, or of some other effect.
We believe that the theoretical properties of the proposed network generator can also be further studied, offering potential insights into how multiplex networks behave and whether the human-made aspects are indeed representative of a given system's state. The model that we are currently exploring only takes into account a probability of linkage through (or within) layers, without a guarantee of connectivity. We made this choice to be able to compare between different fields, without prior assumptions which could, for example, rule in favour of similarity to social networks. Our future work will investigate other generation models, including Erd\H{o}s-R\'enyi-based ones~\cite{caimo2020multilayer} or others with preferential attachment~\cite{nicosia2014nonlinear}.
The analysis of real-life temporal networks offers cues on evolution in layer entanglement which can happen prior to some other events. We have tested multiple time scales.
Too small time windows mostly result in noisy time series carrying low amounts of useful information, while higher coarsening shows activity related to volume, but sheds a different light on the events that are captured. Future work will dive deeper into these events, and consider testing entanglement as a predictor using approaches such as Prophet~\cite{taylor2018forecasting}.
When considering entanglement as either a monitoring or a predictive variable, its utility largely depends on the time scale at which a given edge stream needs to be considered. We leave an extensive, possibly automatic determination of a setting where entanglement would be of practical relevance for future work.
To study the parameters driving the dynamics of entanglement in temporal networks, we will consider comparing entanglement measures with synthetic temporal networks in our future investigations.
\section*{Availability}
The code for reproduction of experiments is freely available at \url{https://gitlab.com/skblaz/entanglement-multiplex}.
\section*{Acknowledgements}
The work of the first author was funded by the Slovenian Research Agency through a young researcher grant.
The work of other authors was supported by the Slovenian Research Agency (ARRS) core research programme \textit{Knowledge Technologies} (P2-0103) and ARRS funded research project
\textit{Semantic Data Mining for Linked Open Data} (financed under the ERC Complementary Scheme, N2-0078). We also acknowledge Dagstuhl seminar 19061 where many ideas implemented in this paper emerged.
\section*{Competing interests}
The authors declare that they have no competing interests.
\section*{Author's contributions}
Both authors have contributed equally to the theoretical background, design of the experiments, and the writing of the manuscript. BR contributed to the experiments, but most of the experiments were handled by BS.
\section{\label{sec:level1}Introduction}
\noindent The concept of the cosmological time function was defined in \cite{AG}. Time functions are important in the study of the global causal theory of space-time. They can be defined arbitrarily and may have little physical significance. But the cosmological time function is defined canonically and, consequently, studying it gives us important information about space-time.
Let us recall its definition.\\
Let $(M,g)$ be a space-time and $d:M\times M\rightarrow [0,\infty]$ be the Lorentzian distance function. The cosmological time function $\tau:M\rightarrow [0,\infty)$ is defined by:
$$\tau(q):=\sup_{p\leq q} d(p,q),$$
\noindent where $p\leq q$ means that $q\in J^{+}(p)$.\\
A time function on the space-time $(M,g)$ in the usual sense is a real-valued continuous function which is strictly increasing on causal curves. The existence of such a function on $M$ requires stable causality (there is no closed causal curve in any Lorentzian metric sufficiently close to the space-time metric). Although $\tau$ is not always well behaved, if it is regular then it is a Cauchy time function \cite{AG}.
\begin{defn}\cite{AG} The cosmological time function $\tau$ of $(M,g)$ is regular if and only if:
\begin{itemize}
\item $\tau (q)<\infty $, for all $q\in M,$
\item $\tau\rightarrow 0$ along every past inextendible causal curve.
\end{itemize}
\end{defn}
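Two examples from \cite{AG} illustrate these conditions. In Minkowski space every point has past-directed timelike curves of arbitrarily large length, so the first condition already fails; restricting to the chronological future of a point repairs this:

```latex
% Minkowski space (R^n, -dt^2 + |dx|^2): for every q, translating q to the
% past along a unit timelike direction gives curves of arbitrary length, so
\tau(q)=\sup_{p\leq q} d(p,q)=\infty .
% The open cone M=I^{+}(0) with the induced metric: by the reverse triangle
% inequality d(0,p)+d(p,q)\leq d(0,q) for 0\leq p\leq q, the supremum is
% approached along the radial geodesic, hence
\tau(q)=d(0,q)=\sqrt{t^{2}-\vert x\vert^{2}},\qquad q=(t,x)\in I^{+}(0),
% which is finite and tends to 0 along every past inextendible causal curve,
% i.e. the cosmological time function of I^{+}(0) is regular.
```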
\noindent The first condition is an assertion that for each point $q$ of the space-time any particle that passes through $q$ has been in existence for a finite time (the space-time has an initial singularity in the strong sense). The second condition asserts that every particle came into existence at the initial singularity.\\
\noindent It is proved in \cite{AG} that if the cosmological time function of $(M,g)$ is regular then:
\begin{itemize}
\item it is an almost everywhere differentiable time function;
\item $(M,g)$ is globally hyperbolic;
\item for each $q\in M$ there is a future-directed timelike ray $\gamma_{q}:(0,\tau(q)]\rightarrow M$ that realizes the distance from the initial singularity to $q$; that is, $\gamma_{q}$ is a future-directed timelike unit geodesic which is maximal on each segment, such that:\\
\begin{equation}
\gamma_{q}(\tau(q))=q,~~~~\tau(\gamma_{q}(t))=t
,~~~~ for ~t\in (0,\tau(q)].
\end{equation}
\end{itemize}
We recall that since the Lorentzian distance is not conformally invariant, neither is $\tau$, and even if $(M,g)$ is globally hyperbolic, $\tau$ is not necessarily regular.
However, it is proved in \cite{NE} that if $(M,g)$ is globally hyperbolic then there is a smooth real function $\Omega >0$ such that $\tau$ is regular on $(M,\Omega g)$. This gives a characterization of global hyperbolicity.
\\ It is also proved in \cite{CH} that a regular $\tau$ is differentiable at $q$ if and only if there exists exactly one timelike ray of the form (1).\\
Throughout this paper we suppose that $\tau$ is finite.
We weaken the second condition of regularity and investigate the effects of this on the cosmological time function and on the causal properties of space-time.\\
In the second section, we recall some definitions and theorems which are used widely in this paper.
In the third section of this paper it is proved that a point of discontinuity can only appear on a past lightlike ray. Hence $\tau$ is a time function in space-times with no past lightlike rays. Using this time function, it can be shown by a simple proof that non-totally vicious space-times without lightlike rays are globally hyperbolic. A different proof was given in \cite{MC}.\\
In addition, one can ask whether it is possible to weaken the second condition in the following way:\\
$\tau\rightarrow 0$, along every past inextendible causal geodesic.\\
The following example shows that this is not the case in general.
\begin{exmp} \cite{AG} Let $M:=\{(x,y,t)\in S^1 \times \mathbb{R} \times \mathbb{R} : t>-1\}$ with the metric
\begin{center}
$ g:= dy^2 +e^{2y}(dxdt+(\vert t \vert^{2\alpha} +(e^{y^{2}}-1))dx^2 ).$
\end{center}
\end{exmp}
Although $\tau$ goes to zero along past inextendible causal geodesics, it does not go to zero along all causal curves.\\
In Section 4, we investigate some conditions on $(M,g)$ and $\tau$ which imply that $\tau$ is regular in this situation. In addition, using the cosmological time function, sufficient conditions for causal simplicity and causal pseudoconvexity of space-time are given.\\
\section{Preliminaries}
The standard notations from Lorentzian geometry are used in this paper. The reader is referred to \cite{B, P}. We denote by $(M,g)$ a $C^{\infty}$ space-time (a connected, Hausdorff, time-oriented Lorentzian manifold) of dimension $n\geq 2$ and signature $(-,+,...,+)$. If $p, q\in M$, then $q\in I^{+}(p)$ (resp. $q\in J^{+}(p)$) means that there is a future directed timelike (resp. causal) curve from $p$ to $q$. $I^{+}(p)$ is called the chronological future and $J^{+}(p)$ the causal future of $p$. Likewise $I^{-}(p)$ and $J^{-}(p)$ are defined and are called the chronological and causal past of $p$. $(M,g)$ is causal (resp. chronological) if there is no closed causal (resp. timelike) curve in it. A causal space-time is globally hyperbolic if $J^{+}(p)\cap J^{-}(q)$ is compact for every $p, q\in M$, and is causally simple if $J^{\pm}(p)$ are closed for every $p\in M$. In addition, a space-time is strongly causal if every point of it has arbitrarily small causally convex neighbourhoods. The interested reader is referred to \cite{M2} for more details.\\
If $q\in J^{+}(p)$, the Lorentzian distance $d(p,q)$ is the supremum of the lengths of all causal curves from $p$ to $q$, and if $q\notin J^{+}(p)$ then $d(p,q)=0$. If $(M,g)$ is strongly causal then there is a metric in the conformal class of $g$ for which $d$ and $\tau$ are bounded.
\begin{lem}\cite{M}
Let $h$ be an auxiliary complete Riemannian metric on $M$ and let $\rho$ be the associated distance. Let $q\in M$ and $B_{n}(q)=\{r: \rho(q,r)<n\}$ be the open ball of radius $n$ centered at $q$. If $(M, g)$ is strongly causal, then there is a smooth function $\Omega>0$, such that $\textmd{diam}(M,\Omega g)=sup \{d(p,q), p,q\in M\}$ is finite and for every $\epsilon>0$ there is $n\in N$ such that if $\gamma:I\rightarrow M$ is any $C^{1}$ causal curve,
$$\int_{I\cap \gamma^{-1}(M-\overline{B_{n}(q)})}\sqrt{-g(\dot{\gamma},\dot{\gamma})}dt<\epsilon$$
that is, its many connected pieces contained in the open set $M-\overline{B_{n}(q)}$ have a total Lorentzian length less than $\epsilon$.
\end{lem}
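The mechanism behind Lemma 1 is the behaviour of causal lengths under a conformal change: a conformal factor preserves the causal character of vectors, so the relations $J^{\pm}$ are unchanged, while for the metric $\Omega g$,

```latex
L_{\Omega g}(\gamma)
  =\int_{I}\sqrt{-\Omega\,g(\dot{\gamma},\dot{\gamma})}\;dt
  =\int_{I}\sqrt{\Omega}\,\sqrt{-g(\dot{\gamma},\dot{\gamma})}\;dt .
```

Hence choosing $\Omega$ to decay sufficiently fast outside the exhausting balls $\overline{B_{n}(q)}$ makes the far-away pieces of every causal curve contribute less than any prescribed $\epsilon$, which is exactly the estimate in the lemma.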
The limit curve theorem will be used several times; the reader is referred to \cite{ML} for a strong formulation. All the curves are parametrized by $h$-arc length, where $h$ is a complete Riemannian metric. A past (future) ray in $(M,g)$ is a maximal past (future) inextendible causal geodesic $\gamma:[0,\infty)\rightarrow M$.\\ A sequence of causal curves $\gamma_{n}:[a_{n},b_{n}]\rightarrow M$ is called limit maximizing if:\\
$$L(\gamma_{n})\geq d(\gamma_{n}(a_{n}), \gamma_{n}(b_{n}))-\epsilon_{n},$$ where $\epsilon_{n}\rightarrow 0$.\\
The following lemma is used in Sections 3 and 4.
\begin{lem}\cite{G2} Let $z_{n}$ be a sequence in $M$ with $z_{n}\rightarrow z$. Let $z_{n}\in I^{+}(p_{n})$ with finite $d(z_{n},p_{n})$. Let $\gamma_{n}:[0, a_{n}]\rightarrow M$ be a limit maximizing sequence of causal curves with $\gamma_{n}(0)=z_{n}$ and $\gamma_{n}(a_{n})=p_{n}$. Let $\widehat{\gamma_{n}}:[0,\infty)\rightarrow M$ be any future extension of $\gamma_{n}$. Suppose either:
\begin{itemize}
\item $p_{n}\rightarrow \infty$, i.e. no subsequence is convergent,
\item $d(z_{n},p_{n})\rightarrow \infty$.
\end{itemize}
Then any limit curve $\gamma:[0,\infty)\rightarrow M$ of the sequence $\widehat{\gamma_{n}}$ is a causal ray starting at $z$.
\end{lem}
\section{Continuity of cosmological time function and lightlike rays}
The cosmological time function $\tau$ is not a time function in general, but if $\tau<\infty$ it has the following property:\\
\begin{equation}
q\in J^{+}(p)\Rightarrow \tau(p)+d(p,q)\leq \tau(q).
\end{equation}
This implies that $\tau$ is isotone.
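Inequality (2) is a direct consequence of the reverse triangle inequality $d(r,p)+d(p,q)\leq d(r,q)$, valid for $r\leq p\leq q$:

```latex
\tau(p)+d(p,q)
  =\sup_{r\leq p}\bigl(d(r,p)+d(p,q)\bigr)
  \leq\sup_{r\leq p} d(r,q)
  \leq\sup_{r\leq q} d(r,q)
  =\tau(q).
```

In particular, if $q\in I^{+}(p)$ then $d(p,q)>0$, so $\tau(p)<\tau(q)$.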
\begin{defn} A function $t:M\rightarrow R$ which satisfies $q\in J^{+}(p)\Rightarrow t(p)\leq t(q)$ is said to be isotone.
\end{defn}
It is proved in \cite{M2} that isotone functions are almost everywhere continuous and differentiable.
\begin{thm}
Every isotone function $f:M\rightarrow R$ on $(M,g)$ is almost everywhere continuous and almost everywhere differentiable. Moreover, it is differentiable at $p\in M$ iff it is G\^{a}teaux-differentiable at $p$. Finally, if $x:I \rightarrow M$ is a timelike curve, the isotone function $f$ is upper/lower semi-continuous at $x_0 = x(t_0)$ iff $f\circ x$ has the same property at $t_0$.
\end{thm}
Hence the cosmological time function is almost everywhere continuous. The question is where a point of discontinuity appears.
\begin{thm} Let $(M,g)$ be a space-time with finite cosmological time function $\tau$. If $\tau$ is discontinuous at $q\in M$, then $q$ lies on a past lightlike ray.
\end{thm}
\begin{proof} Since $p\mapsto d(p,q)$ is lower semi-continuous for every $q\in M$, $\tau$ is lower semi-continuous too. Hence it is not upper semi-continuous at $q$. Consequently, there is a sequence $\{q_{n}\}$ and $\epsilon>0$ such that $q_{n}\rightarrow q$ and $\tau(q_{n})\geq \tau(q)+\epsilon$. Let $\{p_{n}\}$ be a sequence in $M$ such that $d(p_{n},q_{n})\geq \tau(q_{n})-1/n$. Inequality (2) implies that $\tau(p_{n})+d(p_{n},q_{n})\leq\tau(q_{n})$. Since $\tau$ is finite, we have $d(p_{n},q_{n})\leq \tau(q_{n})-\tau(p_{n})$ and consequently:\\
$\tau(q_{n})-1/n\leq d(p_{n},q_{n})\leq \tau(q_{n})-\tau(p_{n})$.\\
This implies that $\tau(p_{n})\rightarrow 0$. Let $\{\gamma_{n}\}$ be a limit maximizing sequence such that for each $n$, $\gamma_{n}:[0,a_{n}]\rightarrow M$ is a past-directed timelike curve with $\gamma_{n}(0)=q_{n}$, $\gamma_{n}(a_{n})=p_{n}$, i.e. $L(\gamma_{n}) > d(p_{n},q_{n})-\epsilon_{n}$ where $\epsilon_{n}\rightarrow 0$.\\
Let $\widehat{\gamma_{n}}: [0, \infty)\rightarrow M$ be any past inextendible extension of $\gamma_{n}$ and let $\gamma:[0,a]\rightarrow M$ be the limit curve of $\widehat{\gamma_{n}}$ with $\gamma(0)=q$. The sequence $\{p_{n}\}$ diverges to infinity (if $p_{n}\rightarrow p$ then, by lower semi-continuity of $\tau$, $\tau(p)\leq \liminf(\tau(p_{n}))$; this implies that $\tau(p)=0$, which is a contradiction). Lemma 2 implies that $\gamma$ is a causal ray. If $\gamma$ is not a null ray, it is timelike. As all the curves are parametrized by arc length, $\limsup( l(\gamma_{n}\vert_{ [0,b]}))\leq l(\gamma\vert_{[0,b]})$. There are $\delta, b>0$ such that $l(\gamma\vert_{[0,b]})+\delta \leq \epsilon/2$. Consequently, $l(\gamma_{n}\vert_{[0,b]})\leq l(\gamma\vert_{[0,b]})+\delta\leq \epsilon/2$ for sufficiently large $n$.\\
$l(\gamma_{n}\vert_{[b,a_{n}]})=l(\gamma_{n})-l(\gamma_{n}\vert _{[0,b]})\geq \tau(q)+\epsilon/2 -1/n-\epsilon_{n}$.\\
Hence for sufficiently large $n$, $l(\gamma_{n}\vert_{[b,a_{n}]})>\tau(q)$. Since $\gamma$ is timelike, $\gamma_{n}(b)\in I^{-}(q)$ for sufficiently large $n$. Hence $\tau(q)\geq l(\gamma_{n}\vert_{[b,a_{n}]})>\tau(q)$, which is a contradiction.
\end{proof}
As an application of Theorem 2 the following two theorems can be proved.
\begin{thm}
Let $(M,g)$ be a space-time without past lightlike rays with $\tau< \infty$. Then for each $q$ there is a future-directed unit speed timelike ray $\gamma_{q}:(0,\tau(q)]\rightarrow M$ which is maximal on each segment, such that:\\
$$\gamma_{q}(\tau(q))=q,~~~\tau(\gamma_{q}(t))=t,~~~t\in (0,\tau(q)].$$
\end{thm}
\begin{proof}
Let $q\in M$ and let $\{p_{n}\}$ be a sequence such that $d(p_{n},q)\geq \tau(q)-1/n$. The sequence $\{p_{n}\}$ diverges to infinity, as proved in the previous theorem. Let $\gamma_{n}:[0,a_{n}]\rightarrow M$, $\gamma_{n}(0)=q$, $\gamma_{n}(a_{n})=p_{n}$, be a limit maximizing sequence of curves:\\
$\epsilon_{n}=d(\gamma_{n}(a_{n}),\gamma_{n}(0))- l(\gamma_{n}\vert_{[0,a_{n}]}),~~\epsilon_{n}\rightarrow 0,$\\
and let $\widehat{\gamma_{n}}$ be a past inextendible extension of $\gamma_{n}$, for $n\in \mathbb{N}$. By Lemma 2 and the assumption, $\widehat{\gamma_{n}}$ converges to a timelike ray $\gamma:[0,\infty)\rightarrow M$. It suffices to prove that\\ $d(\gamma(b),q)=\tau(q)-\tau(\gamma(b))$, for every $b\in [0,\infty)$.\\
$l(\gamma_{n}\vert_{[0,b]})=l(\gamma_{n})-l(\gamma_{n}\vert_{[b,a_{n}]})\geq \tau(q)-(1/n)-\epsilon_{n}-\tau(\gamma_{n}(b))$.\\ Since $\tau$ is continuous, we have\\
$d(\gamma(b),q)\geq l(\gamma\vert_{[0,b]})\geq \limsup( l(\gamma_{n}\vert_{[0,b]}))\geq \tau(q)-\limsup(\tau(\gamma_{n}(b)))=\tau(q)-\tau(\gamma(b))$.\\ The reverse inequality is immediate from (2), and the proof is complete.
\end{proof}
The proof of the following theorem is similar to the one given for regular cosmological time functions in \cite{AG}.
\begin{thm}
Let $(M,g)$ be a space-time without past lightlike rays. If $\tau< \infty$ then it is a time function.
\end{thm}
\begin{proof} If $q\in I^{+}(p)$ then by (2) it is clear that $\tau(p)< \tau(q)$. Assume that $q\in J^{+}(p)-I^{+}(p)$; then there is a lightlike geodesic from $p$ to $q$. Let $\gamma_{p}$ be the timelike ray to $p$ guaranteed by Theorem 3 and let $x\in \gamma_{p}$. By a cutting-the-corner argument near $p$ we have:\\
$d(x,q)>d(x,p)+d(p,q)$. Consequently, $\tau(q)-\tau(x)\geq d(x,q)>d(x,p)=\tau(p)-\tau(x)$, hence $\tau(q)>\tau(p)$, and the proof is complete.
\end{proof}
The following theorem was proved in \cite{MC}.
\begin{thm}
A non-totally vicious space-time $(M,g)$ with no lightlike rays is globally hyperbolic.
\end{thm}
It is proved in \cite{MC} that a chronological space-time without lightlike lines is stably causal and consequently has a time function. Then, using this time function, the above theorem is proved. In this paper, using Theorem 4 and the following lemmas, it can be shown that there is a representation of $(M,g)$ such that the cosmological time function is regular.
\begin{lem}
Let $(M,g)$ be a chronological space-time without past (or future) lightlike rays. Then $(M,g)$ is strongly causal.
\end{lem}
\begin{proof}
Suppose by contradiction that $(M,g)$ is not strongly causal. Then there is $p\in M$ and a sequence of arbitrarily small relatively compact neighbourhoods $U_{n}$, $n\in \mathbb{N}$, of $p$ which are not causally convex, i.e., for every $U_{n}$ there exist $p_{n}, q_{n}\in U_{n}$ and a causal curve $\gamma_{n}$ from $p_{n}$ to $q_{n}$ which is not contained in $U_{n}$, with $p_{n}\rightarrow p$, $q_{n}\rightarrow q$ and $p=q$. The second part of the limit curve theorem, Theorem 3.1 of \cite{ML}, implies that one of the following cases occurs:\\
1) The $\gamma_{n}$ are contained in a compact set. Since $M$ is chronological, the limit curve $\gamma_{q}$ is a closed maximal lightlike curve. The curve $\gamma$ making infinitely many rounds around $\gamma_{q}$ is a past lightlike ray, which is a contradiction.\\
2) The $\gamma_{n}$ are not contained in a compact set. Then $\gamma_{p}\circ \gamma_{q}=\gamma$ is the limit curve of the $\gamma_{n}$. Since $\gamma$ is not a lightlike line, the chronology violating set is non-empty. This is a contradiction, since $\tau< \infty$.
\end{proof}
Lemma 2 and Lemma 3 imply the following theorem.
\begin{thm}Let $(M,g)$ be a chronological space-time without past lightlike rays. Then there is a positive real function $\Omega$ such that the cosmological time function of $(M,\Omega g)$ is regular.
\end{thm}
\begin{proof} $(M,g)$ is strongly causal by Lemma 3. Let $q_{0}\in M$ and $\Omega>0$ be given as in Lemma 1. It is clear that $\tau$ is finite. Suppose by contradiction that there is a past inextendible causal curve $\eta:[0,\infty)\rightarrow M$, $\eta(0)=p$, such that $\tau\rightarrow a>0$ along $\eta$. Choose $n$ such that the length of any causal curve outside $\overline{B_{n}(q_{0})}$ is less than $\epsilon<a$ and $p\in B_{n}(q_{0})$. Since $M$ is strongly causal it is non-totally imprisoning, and consequently there is $t_{0}\in \mathbb{R}$ such that $\eta(t)\in M-\overline{B_{n}(q_{0})}$ for $t>t_{0}$. In addition, suppose that $\{t_{n}\}$, $t_{n}\rightarrow \infty$, is a sequence of real numbers and $p_{n}=\eta(t_{n})$. Let $\gamma_{p_{n}}$ be the maximal timelike ray guaranteed by Theorem 3. Since $l(\gamma_{p_{n}})=\tau(p_{n})> \epsilon$, we have $\gamma_{p_{n}}\cap \overline{B_{n}(q_{0})}\neq \emptyset$. Again by the non-total imprisonment condition, $\gamma_{p_{n}}$ escapes $\overline{B_{n}(q_{0})}$ at a point $q_{n}=\gamma_{p_{n}}(s_{n})$. Since $\overline{B_{n}(q_{0})}$ is compact, $q_{n}\rightarrow q$. Let $\gamma'_{n}$ be a reparametrization of $\gamma_{p_{n}}\vert_{[0,s_{n}]}$ by $h$-arc length in such a way that $\gamma'_{n}(t)=\gamma_{p_{n}}(s_{n}-t)$. Then $\gamma'_{n}\rightarrow \gamma$, $\gamma(0)=q$.\\
Since $\gamma_{p_{n}}$ is maximal, for every $n$, Lemma 2 implies that $\gamma$ is a timelike ray.\\
Indeed, we have:\\
$\tau(\gamma(b))-\tau(\gamma(0))=\limsup(\tau(\gamma'_{n}(b))-\tau(\gamma'_{n}(0)))=\limsup(l(\gamma'_{n}\vert_{[0,b]}))\leq l(\gamma\vert_{[0,b]})=d(\gamma(0),\gamma(b))\leq \liminf(d(\gamma'_{n}(0),\gamma'_{n}(b)))=\tau(\gamma(b))-\tau(\gamma(0))$,\\
since $\tau$ is continuous. In addition, $\tau$ is a time function and consequently $\tau(\gamma(b))-\tau(\gamma(0))>0$.\\
Let $\eta_{n}= \gamma'_{n}\circ \eta$. Then $\gamma$ and $\eta$ are the limit curves of $\eta_{n}$, and neither of them is a lightlike ray. The limit curve theorem implies that
$\eta\subset I^{+}(q)$. This means that $M$ has a TIF, which is a contradiction, since $M$ has no past lightlike rays and the boundary of a TIF is generated by past lightlike rays.
\end{proof}
A non-totally vicious space-time without lightlike rays is chronological \cite{ME}, and Theorem 5 follows easily from Theorem 6.
\section{The cosmological time function and causal rays}
In this section, we weaken the second condition of regularity and investigate the causality properties of space-times which satisfy:
\begin{itemize}\label{a}
\item $\tau$ is finite;
\item $\tau\rightarrow 0$ on past inextendible causal geodesics (or on past null rays).
\end{itemize}
Example 1 shows that in this case $\tau$ is not necessarily regular. Indeed, it can be checked that the space-time of Example 1 is not globally hyperbolic. Causal pseudoconvexity and causal simplicity are weaker conditions than global hyperbolicity. We will prove that in a reflecting space-time the above conditions imply causal pseudoconvexity (or causal simplicity). The space-time $(M,g)$ is called causally pseudoconvex if for each compact set $K$ there exists a compact set $K'$ such that each causal geodesic with both end points in $K$ has its image in $K'$.
\begin{defn}\cite{V1} Assume $p_n \rightarrow p$ and $q_n \rightarrow q$ for distinct points $p$ and $q$ in a space-time $M$. We say that the space-time $M$ has the causal limit geodesic segment property (LGS) if, whenever each pair $p_n$ and $q_n$ can be joined by a geodesic segment, there is a limit geodesic segment from $p$ to $q$.
\end{defn}
We can define the causal, null or maximally null LGS property by restricting the condition of the above definition to causal, null or maximally null geodesics, respectively.
Theorem 7, Theorem 8 and Lemma 4 are used to prove the main results of this section. The following theorem gives a characterization of pseudoconvexity.
\begin{thm}\cite{V1}
Let $(M,g)$ be a strongly causal space-time. Then it is (null or maximal null) causal pseudoconvex if and only if it has (null or maximally null) causal LGS property.
\end{thm}
\begin{lem} If the cosmological time function of $(M,g)$ satisfies the following conditions:
\begin{itemize}
\item $\tau< \infty$;
\item $\tau \rightarrow 0$, along past lightlike rays,
\end{itemize}
then $(M,g)$ is non-totally imprisoning.
\end{lem}
\begin{proof} Since $\tau< \infty$, $(M,g)$ is chronological. Suppose by contradiction that there is a past inextendible causal curve which is imprisoned in a compact set $K$. Then there is a lightlike line which is imprisoned in $K$ \cite{ME}. Since $K$ is compact, there are $p_{i}\in M$, $i=1,...,n$, such that $K\subset \cup I^{+}(p_{i})$ and consequently $\tau(x)\geq \min(\tau(p_{i}))$, $i=1,...,n$, for all $x\in K$. This contradicts the assumption.
\end{proof}
\begin{thm}\cite{P}
Let $S \subset M$, and set $B =\partial I^{+}(S)$. Then if $x \in B$, there exists a null geodesic $\eta \subseteq B$ with future endpoint $x$ and which is either past-endless or has a past endpoint on $S$.
\end{thm}
\begin{thm} \cite{V2} Assume that $(M,g)$ is a causal space-time which is not causally simple. Then there is (1) a point $p\in M$ which has a future inextendible maximal null geodesic ray in $\partial I^{-}(p)$, or (2) a point $q\in M$ which has a past inextendible maximal null geodesic ray in $\partial I^{+}(q)$.
Conversely, assume $(M,g)$ has a point $p$ (resp. $q$) such that $\partial I^{-}(p)$ contains a future inextendible maximal null geodesic ray (resp. $\partial I^{+}(q)$ contains a past inextendible maximal null geodesic ray); then $(M,g)$ is not causally simple.
\end{thm}
{\bf Remark.} We recall that $(M,g)$ is reflecting iff $p \in \overline{I^{-}(q)}\Leftrightarrow q\in \overline{I^{+}(p)}$, for $p,q\in M$.
\begin{thm}
Let $(M,g)$ be a reflecting space-time such that its cosmological time function $\tau$ has the following properties:
\begin{itemize}
\item $\tau$ is finite;
\item $\tau\rightarrow 0$, along every past lightlike ray $\gamma$;
\end{itemize}
then $(M,g)$ is causally simple.
\end{thm}
\begin{proof}
Lemma 4 implies that $(M,g)$ is causal. Suppose by contradiction that $(M,g)$ is not causally simple. Hence there are $p$, $q\in M$ such that $q\in \overline{J^{+}(p)}-J^{+}(p)$ and (1) or (2) in Theorem 9 occurs:\\
In the first case there is a past lightlike ray $\gamma$ with future endpoint $q$ which lies in $\partial I^{+}(p)$. Since $(M,g)$ is reflecting, there is a sequence $\{p_{n}\}$, $p_{n}\rightarrow p$, such that $p_{n}\in I^{-}(q)$. Lower semi-continuity of the cosmological time function implies that $\tau(p)\leq \liminf \tau(p_{n})\leq \tau(q)$. Since $q$ can be chosen arbitrarily on the causal ray $\gamma$, we obtain $\tau\geq \tau(p)$ along $\gamma$, which is a contradiction, since $\tau\rightarrow 0$ along it.\\
In the second case, reflectivity of $(M,g)$ again implies that $q$ lies in $\partial I^{+}(p)$; hence by Theorem 8 there is a past lightlike ray $\alpha$ with endpoint $q$ in $\partial I^{+}(p)$. Indeed, it cannot be a null geodesic from $p$ to $q$, since $q\notin J^{+}(p)$. The proof in this case is similar to case (1).
\end{proof}
\begin{thm} Let $(M,g)$ be a reflecting space-time such that:
\begin{itemize}
\item $\tau$ is finite,
\item $\tau\rightarrow 0$ along past inextendible causal geodesics,
\end{itemize}
then $(M,g)$ is causally pseudoconvex.
\end{thm}
\begin{proof}
Theorem 10 implies that $(M,g)$ is causally simple. Suppose by contradiction that $(M,g)$ is not causally pseudoconvex. Then Theorem 7 implies that there are sequences $\{p_{n}\}$, $\{q_{n}\}$ and geodesics $\gamma_{n}$ from $p_{n}$ to $q_{n}$ such that $p_{n}\rightarrow p$ and $q_{n}\rightarrow q$, but their limit curve $\gamma$ with $\gamma(0)=q$ does not contain $p$. Hence the limit curve theorem implies that $\gamma(t)\in J^{+}(p)$, and consequently $\tau(p)\leq \tau(\gamma(t))$ for all $t$, which contradicts the assumption that $\tau\rightarrow 0$ along the past inextendible causal geodesic $\gamma$.
\end{proof}
\begin{thm} Let $(M,g)$ be a space-time whose cosmological time function $\tau$ satisfies the following properties:
\begin{itemize}
\item $\tau< \infty$;
\item $\tau\rightarrow 0$, along past inextendible causal rays;
\item $\tau$ is continuous;
\end{itemize}
then $\tau$ is regular.
\end{thm}
\begin{proof} Suppose by contradiction that $\tau\nrightarrow 0$ along a past inextendible causal curve $\gamma:[0,\infty)\rightarrow M$; then $a:=\inf_{t}\tau(\gamma(t))>0$. Let $p_{n}=\gamma(t_{n})$, where $t_{n}\rightarrow \infty$, and let $\gamma_{n}:[0,a_{n}]\rightarrow M$, with $\gamma_{n}(0)=p$ and $\gamma_{n}(a_{n})=p_{n}$, be a limit maximizing sequence.
Since $(M,g)$ is non-totally imprisoning by Lemma 4, $\{p_{n}\}$ escapes to infinity. Consequently, the limit curve of the $\gamma_{n}$, a past inextendible causal ray $\eta:[0,\infty)\rightarrow M$ with $\eta(0)=p$, exists. Moreover, $\gamma_{n}(t)\in I^{+}(p_{m})$ for $m\geq n$, hence $\eta\subset \overline{I^{+}(\gamma)}$. Since $\tau>a$ on $I^{+}(\gamma)$ and $\tau$ is continuous, $\tau(\eta(t))\geq a$ for every $t$, which contradicts the assumption that $\tau\rightarrow 0$ along past inextendible causal rays.
\end{proof}
\begin{thm} Let $(M,g)$ be a space-time of dimension $n+1$, conformally embedded as an open subset of Minkowski space-time $(E^{n,1},h)$, whose cosmological time function satisfies the following properties:
\begin{itemize}
\item $\tau< \infty$;
\item $\tau\rightarrow 0$, along past causal rays;
\end{itemize}
then $\tau$ is regular.
\end{thm}
\begin{proof} Suppose that $\tau\rightarrow a$, $a\neq 0$, along a past inextendible causal curve $\gamma:[0,\infty)\rightarrow M$. Let $p_{n}$, $\gamma_{n}$ and $\eta$ be as in the previous theorem.
It is clear that $\eta\subseteq \overline{I^{+}(\gamma)}$, and $\partial I^{+}(\gamma)$ is an achronal boundary. We have $\tau> a$ on $I^{+}(\gamma)$, but $\tau\rightarrow 0$ on $\eta$. Hence $\eta$ has to leave $I^{+}(\gamma)$ at some point $x=\eta(s)$, and $\eta\vert_{[s,\infty)}\subseteq \partial I^{+}(\gamma)$. This implies that $\eta$ is not a timelike ray. In addition, if $\eta'(0)=w$, then $\langle w,\alpha'\rangle_{h}=0$ for any lightlike ray $\alpha$ in $\partial I^{+}(\gamma)$, since $M$ is an open subset of Minkowski space-time. Hence $\langle w,\eta'\rangle_{h}=0$, which is a contradiction.
\end{proof}
\section{Introduction}
\label{sec:intro}
Recently, the possibility of fermion pairing in a two-component
system with unequal Fermi surfaces has attracted
a considerable amount of attention~\cite{Casalbuoni2004a}.
In an electronic system non-matched Fermi surfaces could
be due to the magnetic field interacting
with electron spins and in color superconductivity, due to unequal quark masses.
The newly realized strongly interacting superfluid Fermi
gases~\cite{Jochim2003b,Greiner2003a,Regal2004a,Zwierlein2004a,Bartenstein2004b,Kinast2004a,Chin2004a,Kinast2005a,Zwierlein2005a}
offer a promising playground for the study of pairing and
superfluidity, also with imbalanced Fermi energies.
In ultracold degenerate gases, the different components are typically atoms of the same species in different internal states, but different isotopes can also be considered. In ultracold gases the mismatch of the Fermi surfaces can be due to an unequal number of atoms in the different states. After the first experimental studies using a mixture of
two internal states of $^{6}{\rm Li}$
atoms~\cite{Zwierlein2006a,Partridge2006a},
the properties of such
systems, in a harmonic trapping potential, have been extensively discussed in the recent
literature~\cite{Bedaque2003a,Castorina2005a,Mizushima2005b,Yang2005a,Sheehy2006a,Kinnunen2006a,Pieri2005a,Yi2006a,Chevy2006a,DeSilva2006a,Haque2006a}.
In principle, many different types of fermions can be trapped in the same trap. In order to reach long lifetimes, the atoms should have favorable collisional properties, but current knowledge of these collisional properties is limited. However, mixtures
of either Bose
condensed or fermionic atoms in a variety of different internal states
have been experimentally
demonstrated~\cite{Myatt1997a,Stenger1998a,Modugno2002a,Roati2002a,Schmaljohann2004a}, and experimental effort towards realizing
three-component Fermi gases has been initiated. For this reason
we explore the novel possibilities in a three-component
Fermi gas by generalizing the BCS (Bardeen-Cooper-Schrieffer)
theory to three components.
A related three-component Fermi system has been previously studied by Honerkamp and Hofstetter~\cite{Honerkamp2004a,Honerkamp2004b}
and more recently by Bedaque and D'Incao~\cite{Bedaque2006a}.
The first reference employs a Hubbard lattice Hamiltonian and is focused
on the rather special case
when all fermions have the same mass and interact with
different fermions (as well as with fermions of the same type)
with a single interaction strength.
The setting used in the latter reference includes the possibility
of unequal interaction strengths between various
components. Bedaque and D'Incao draw general
qualitative conclusions on the symmetries of the possible
zero temperature phases, but all fermion masses are assumed equal.
In this paper we discuss pairing in a three-component system both at zero and at non-zero temperatures. Furthermore, we allow for the possibility
that the third fermion type has a different mass from the other
two and since a priori there is little reason to expect
identical scattering properties between different fermion components,
we formulate our theory assuming
different interaction strengths between
the components. In ultracold gases, such a system could be realized by trapping, for instance, $^{6}{\rm Li}$ and $^{40}{\rm K}$, using atoms in two internal states for one of these species and one internal state for the other.
In order to restrict the parameter space somewhat, we make
one simplifying assumption in our Hamiltonian.
Generally one expects three different interactions in a three-component
system. One between the first and the second component, one between
the second and the third component,
and one between the third and the first component. In this paper we
assume that the interaction between the third and the first component
is weaker than the other two interactions and can
be ignored next to the dominant contributions. This restriction is simply
a matter of convenience as it restricts parameter space to manageable
proportions, but does not affect the qualitative picture we find.
We find that, within the BCS theory,
the possible pairing always gives rise to just
one order parameter of broken $U(1)$ symmetry. Whether the pairing takes
place between the first and the second component ($1-2$ channel) or
between the second and the third component ($2-3$ channel)
depends on the strengths of the interactions, atomic masses, as well as
on the differences between the chemical potentials
of different components. The unpaired component constitutes a normal Fermi sea.
At zero temperature there is
a possibility of a quantum phase-transition from
a phase with pairing in the $1-2$ channel into
a phase with pairing in the $2-3$ channel as chemical potential
differences are varied appropriately. This transition is of the first order.
Furthermore, as the temperature increases it is possible
to have a second order transition from a paired phase in the
$2-3$ channel into a paired phase in the $1-2$ channel. At even higher temperature,
there is another transition from the paired state into the normal state.
This paper is organized as follows. In Sec.~\ref{sec:Normal}
we discuss the normal three-component Fermi gas
and the possibility of phase-separation in such a system.
In Sec.~\ref{sec:FE} we present the BCS-style mean-field theory
and discuss the qualitative behavior of the associated
grand potential at zero temperature. In this section we present
the energy landscape
of the grand potential for several different scenarios.
In Sec.~\ref{sec:GE} we find the solutions to the gap-equations
that correspond to the global minimum of the grand potential
for fermions where the third component has
a different mass from the other two components.
We end with some concluding remarks in Sec.~\ref{sec:Conclusions}.
\section{Normal three-component Fermi gas}
\label{sec:Normal}
We study a homogeneous three component Fermi gas whose components are
either atoms in different internal states or atoms of different isotopes.
For now we assume that the interaction between the first and the third component is sufficiently small,
so that it is justified to focus on only the interactions between the first and the second component
and the second and the third component. The second quantized Hamiltonian for this system is therefore
\begin{equation}
\label{eq:hamilton1}
\begin{split}
H&=\int d{\bf r} \,\left(\sum_{\sigma=1,2,3} {\hat \psi}_{\sigma}^{\dagger}({\bf r})
\left(-\frac{\hbar^2 \nabla^2}{2m_{\sigma}}-\mu_{\sigma}\right){\hat\psi}_{\sigma}({\bf r})\right)\\
&+\tilde g_{12}\int d{\bf r} \,{\hat\psi}_{1}^{\dagger}({\bf r})
{\hat\psi}_{2}^{\dagger}({\bf r}){\hat\psi}_{2}({\bf r}){\hat\psi}_{1}({\bf r})
+\tilde g_{23}\int d{\bf r} \,{\hat\psi}_{3}^{\dagger}({\bf r})
{\hat\psi}_{2}^{\dagger}({\bf r}){\hat\psi}_{2}({\bf r}){\hat\psi}_{3}({\bf r}),
\end{split}
\end{equation}
where ${\hat\psi}_{\sigma}({\bf r})$ and ${\hat\psi}_{\sigma}^\dagger({\bf r})$
are the usual field-operators which annihilate and create particles in state $\sigma$.
In addition, $m_{\sigma}$ is the atomic mass, $\tilde g_{12}$ is the interaction strength between the first and
the second component, and $\tilde g_{23}$ is the interaction strength between the second
and the third component. In terms of the scattering lengths $a_{ij}$
these interaction strengths are
given by $\tilde g_{ij}=2\pi\hbar^2 a_{ij}/\mu_{ij}$, where $\mu_{ij}=m_im_j/(m_i+m_j)$
is the reduced mass of the scattering atoms.
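As a small numerical sketch (ours; the physical constants are standard values, while the scattering length below is a hypothetical placeholder, not a value from this paper), the bare couplings follow directly from the reduced mass:

```python
import math

HBAR = 1.054571817e-34    # J s
AMU  = 1.66053906660e-27  # kg
A0   = 5.29177210903e-11  # Bohr radius, m

def reduced_mass(m_i, m_j):
    return m_i * m_j / (m_i + m_j)

def coupling(a_ij, m_i, m_j):
    """Bare contact-interaction strength g~_ij = 2*pi*hbar^2*a_ij/mu_ij (J m^3)."""
    return 2.0 * math.pi * HBAR**2 * a_ij / reduced_mass(m_i, m_j)

m_li = 6.015 * AMU   # ^6Li
m_k  = 39.964 * AMU  # ^40K
a = -100.0 * A0      # hypothetical attractive scattering length, both channels

g_lili = coupling(a, m_li, m_li)  # 1-2 channel (two 6Li states)
g_lik  = coupling(a, m_li, m_k)   # 2-3 channel (6Li with 40K)
print(g_lili, g_lik)
```

For equal scattering lengths the ratio of the two couplings is fixed by the masses alone, $\tilde g_{12}/\tilde g_{23}=2m_3/(m_2+m_3)$.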
Let us first discuss a mixture of three normal fermion components.
If the normal three-component mixture phase-separates spatially, it is clear that there is no possibility of a superfluid in which all three components are mixed together, and the problem reduces to that of a two-component Fermi gas surrounded by the normal third component.
Since the temperatures of interest are very low compared to the Fermi energies of the various components, we can consider the zero-temperature limit, in which the free-energy density $f$ of the mixture of normal Fermi gases is given by
\begin{equation}
\label{eq:free1}
f=\frac{3\hbar^2}{10}(6\pi^2)^{2/3}(\frac{n_1^{5/3}}{m_1}+\frac{n_2^{5/3}}{m_2}+\frac{n_3^{5/3}}{m_3})+ \tilde g_{12}n_1n_2
+\tilde g_{23}n_2n_3,
\end{equation}
where $n_i$ are the component densities.
If both interaction strengths $\tilde g_{12}$, $\tilde g_{23}$ are positive then all three components
separate. The reason is that the positive-definite free energy in Eq.~\eqref{eq:free1} is then minimized when the interaction terms give vanishing contributions. If one of
the interactions is positive and the other is negative,
the components with attractive interaction
mix and the third component separates. Therefore the case when all
three components can coexist spatially occurs only when
both interactions are attractive.
If one of the interactions is attractive then
formally Eq.~\eqref{eq:free1} has a global minimum at infinite density of
the interacting components.
This global minimum is not physically relevant since
there is an energetic barrier separating this solution from the physically
relevant regime of densities. In fact the height of this barrier increases
as the interactions become weaker. From now on we will focus on the most interesting
region where both interactions are fairly large and negative.
\section{Grand potential of the three-component Fermi gas}
\label{sec:FE}
The BCS theory involves approximating the Hamiltonian by the mean-field Hamiltonian density
(in the ${\bf k}$-space)
\begin{equation}
\begin{split}
H_{BCS}&=\frac{1}{V}\sum_{\bf k}\sum_{\sigma=1,2,3}\left(\frac{\hbar^2 k^2}{2m_\sigma}-\mu_{\sigma}\right){\hat
\psi}^{{\dagger}}_{\sigma{\bf k}}
{\hat \psi}_{\sigma{\bf k}}+\Delta_{12} {\hat \psi}^{{\dagger}}_{1,\bf k} {\hat \psi}^{{\dagger}}_{2,\bf -k}
+ \Delta^*_{12} {\hat \psi}_{2,\bf -k} {\hat \psi}_{1,\bf k}
\\
&+\Delta_{23} {\hat \psi}^{{\dagger}}_{3,\bf {k}}
{\hat \psi}^{{\dagger}}_{2,\bf -k}+
\Delta_{23}^*{\hat \psi}_{2,\bf -k} {\hat \psi}_{3,{\bf k}}
-\frac{|\Delta_{12}|^2}{g_{12}}-\frac{|\Delta_{23}|^2}{g_{23}},
\end{split}
\end{equation}
where
\[\Delta_{\sigma\sigma'}=\frac{g_{\sigma\sigma'}}{V}\sum_{\bf k}\langle {\hat \psi}_{\sigma\bf k}
{\hat \psi}_{\sigma'\bf -k}\rangle\] are the order parameters to be determined self-consistently.
As a short hand notation we define the vector ${\bf \Delta}=\left(\Delta_{12},\Delta_{23}\right)$
characterizing the state of the system.
Since the mean-field Hamiltonian is of second order in the operators it can be
easily diagonalized with a canonical transformation. In this way we can
calculate the grand potential (or free energy)
\begin{equation}
\label{eq:fe_1}
\Omega\left(\Delta_{12},\Delta_{23}\right)=-k_BT\log[Tr(\exp(-\beta H_{BCS}))],
\end{equation}
where $k_B$ is the Boltzmann constant and $\beta=1/k_BT$. In the continuum limit
the grand potential is ultraviolet divergent. This divergence is caused by
the unphysical short distance behavior of the contact interaction and is removed, in the
usual way, by subtracting the divergent contribution from the grand potential.
The mean-field Hamiltonian can be diagonalized analytically, but in the general case the resulting formulas are inconveniently long and not very informative. In order to get an overview of the expected system behavior, it is more instructive to focus on the behavior of the free energy.
We choose our units by assuming that the first and the second components are
$^{6}{\rm Li}$ atoms and by
using the ideal gas of the second component as a benchmark.
The unit of energy is then $\epsilon_F=\hbar^2k_{F,2}^2/2m_2$,
where $k_{F,2}=(6\pi^2n_2)^{1/3}$ is the Fermi wave-vector. Furthermore, we use the unit
of length $1/k_{F,2}$. Using these units we define the dimensionless coupling strengths as
$g_{12}=2k_{F,2}a_{12}/\pi$ and $g_{23}=(m_2+m_3)k_{F,2}a_{23}/m_2\pi$.
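These conventions can be collected into a short script (ours; the density and scattering-length values are hypothetical placeholders used only to exercise the formulas):

```python
import math

def k_fermi(n2):
    """Fermi wave-vector of the second component, k_F2 = (6 pi^2 n2)^(1/3)."""
    return (6.0 * math.pi**2 * n2) ** (1.0 / 3.0)

def g12_dimensionless(a12, n2):
    """g12 = 2 k_F2 a12 / pi (both components are 6Li)."""
    return 2.0 * k_fermi(n2) * a12 / math.pi

def g23_dimensionless(a23, n2, m2, m3):
    """g23 = (m2 + m3) k_F2 a23 / (m2 pi)."""
    return (m2 + m3) * k_fermi(n2) * a23 / (m2 * math.pi)

n2 = 1.0e18             # m^-3, hypothetical density of component 2
a = -2.0e-9             # m, hypothetical scattering length in both channels
m2, m3 = 6.015, 39.964  # 6Li and 40K masses in amu (only the ratio enters)

g12 = g12_dimensionless(a, n2)
g23 = g23_dimensionless(a, n2, m2, m3)
print(g12, g23)  # for equal a: g23/g12 = (m2 + m3)/(2 m2)
```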
\subsection{Grand potential when all fermions have the same mass}
Fig.~\ref{fig:fe_1} demonstrates the behavior of the free energy landscape as a function
of the gaps $\Delta_{12}$ and $\Delta_{23}$ when all fermion components have the same mass.
It is clear that when all chemical potentials are equal the free energy landscape
is only a function of $\Delta_{12}^2+\Delta_{23}^2$ and at zero temperature
displays a clear minimum corresponding to the solution of the BCS gap equation.
With the parameters used in the figure, the minimum is located at
$\sqrt{\Delta_{12}^2+\Delta_{23}^2}\approx 0.135$; see the quarter circle in Fig.~\ref{fig:fe_1}(a).
Any difference in the chemical potentials (or coupling strengths) breaks the above symmetry and makes one of the phases ${\bf \Delta}=(\Delta_{12},0)$ or ${\bf \Delta}=(0,\Delta_{23})$ energetically favorable. When the interactions are the same,
which of these alternatives is chosen depends on the
average chemical potential between the paired components. The channel corresponding
to the higher average chemical potential has a lower free energy and
is therefore physically realized. At zero temperature, the densities of the paired
components are equal, as is to be expected from the BCS-state.
At higher temperatures the normal state
${\bf \Delta}=(0,0)$ eventually becomes the free energy minimum.
Figure~\ref{fig:fe_1} also demonstrates that, in general, the grand potential has several stationary points.
\begin{figure}
\begin{tabular}{ll}
\includegraphics[width=0.50\columnwidth]{fig_1a.eps} & \includegraphics[width=0.50\columnwidth]{fig_1b.eps}\\
\includegraphics[width=0.50\columnwidth]{fig_1c.eps} & \includegraphics[width=0.50\columnwidth]{fig_1d.eps}
\end{tabular}
\caption[Fig1]{(Color online) The free energy landscape as a function of
$\Delta_{12}$ and $\Delta_{23}$ when all three components have the same mass.
The dash-dotted line in (a) and the filled circles in (b)-(d) show the location of the global minimum.
We used equal coupling strengths
$g_{12}=g_{23}=-0.50$ and
the figures (a)-(c) were calculated at zero temperature while
the figure (d) was calculated at $k_BT/\epsilon_F=0.08$, which is above the critical temperature. The chemical potentials were such that
in the figure (a) $\mu_1=\mu_2=\mu_3=1$, in (b) $\mu_1=1.01$, $\mu_2=1$, $\mu_3=0.99$,
in (c) $\mu_1=0.99$, $\mu_2=1$, $\mu_3=1.01$, and
finally in (d) $\mu_1=\mu_2=\mu_3=1$.
}
\label{fig:fe_1}
\end{figure}
\subsection{Grand potential when the fermion masses differ}
The different components could in principle also have different masses.
For concreteness we assume that the first and the second component are different internal states of
$^{6}{\rm Li}$ atoms while the third component is a
$^{40}{\rm K}$ atom. This therefore corresponds to the mass ratio $m_r=m_1/m_3=0.15$.
In Fig.~\ref{fig:fe_2} we show some typical free energy landscapes
in this case, when the chemical potential of the third component is varied.
It is clear that the grand potential again has many different stationary points.
For small values of $\mu_3$ the pairing is only possible in the $1-2$ channel.
However, the Fermi surfaces are matched when $\mu_3=m_r$ and this is reflected
as a possibility of pairing in the $2-3$ channel around this value.
The reason why pairing in this channel is favored for matched Fermi surfaces, even when the coupling strengths are equal, is the higher density of states of the heavier atoms. This higher density of states translates into a reduction in energy. Naturally, if $g_{23}$ is reduced sufficiently
we enter a parameter region where pairing in the $2-3$ channel will
never take place. For the parameters used in the figure
this happens when $|g_{23}|<0.33$.
Again we find that at zero temperature the densities of the paired
components are always equal.
It is also interesting to observe that by increasing the temperature, one
can induce a transition from the ${\bf \Delta}=(0,\Delta_{23})$ phase
into the ${\bf \Delta}=(\Delta_{12},0)$ phase before entering the normal state.
In summary, two transitions are visible: a zero temperature quantum phase transition between
the two pairing channels (transition from Fig.~\ref{fig:fe_2}(a) to Fig.~\ref{fig:fe_2}(b), and from Fig.~\ref{fig:fe_2}(b) to
Fig.~\ref{fig:fe_2}(c)), and a finite temperature
second order transition between the pairing channels (transition from Fig.~\ref{fig:fe_2}(b) to Fig.~\ref{fig:fe_2}(d)).
\begin{figure}
\begin{tabular}{ll}
\includegraphics[width=0.50\columnwidth]{fig_2a.eps} & \includegraphics[width=0.50\columnwidth]{fig_2b.eps}\\
\includegraphics[width=0.50\columnwidth]{fig_2c.eps} & \includegraphics[width=0.50\columnwidth]{fig_2d.eps}
\end{tabular}
\caption[Fig2]{(Color online) The free energy landscape as a function of
$\Delta_{12}$ and $\Delta_{23}$ with the mass ratio $m_1/m_3=0.15$
and $\mu_1=\mu_2=1.0$.
The filled circle shows the location of the global minimum.
The coupling strengths were $g_{12}=g_{23}=-0.5$ and
the figures (a)-(c) were calculated at zero temperature while
the figure (d) was calculated at $k_BT/\epsilon_F=0.07$.
In figure (a) $\mu_3=0.07$, in (b) $\mu_3=0.15$, in (c) $\mu_3=0.25$, and
in (d) $\mu_3=0.15$. (In figure (d) we used a logarithmic scale for the contour plot in order to enhance the relevant features.)
The results show two types of transitions: a zero temperature quantum phase transition between
the two pairing channels (from (a) to (b), and from (b) to (c)), and a finite temperature
second order transition between the pairing channels (from (b) to (d)).
}
\label{fig:fe_2}
\end{figure}
\section{Solutions of the gap equations of the three-component Fermi gas}
\label{sec:GE}
For a given set of chemical potentials the self-consistent BCS-solution
is found at the minimum of the grand potential. At the extremum value of
the grand potential, the gap-equations
\begin{equation}
\frac{\partial \Omega\left(\Delta_{12},\Delta_{23}\right)}{\partial \Delta_{12}}=0
\end{equation}
and
\begin{equation}
\frac{\partial \Omega\left(\Delta_{12},\Delta_{23}\right)}{\partial \Delta_{23}}=0
\end{equation}
are satisfied. These equations are satisfied at all extremal points of the grand potential,
including local minima, local maxima, and saddle points.
As we found in the previous section, the grand potential can have several extremal points, so care must be taken to ensure that the numerics converge to the physically relevant global minimum of the grand potential rather than to a local stationary point.
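As a numerical illustration (ours; the paper's actual computation treats both channels simultaneously), the zero-temperature stationarity condition for a single pairing channel with equal masses reduces, in the units of Sec.~\ref{sec:FE} and after the standard contact-interaction renormalization, to $\int_0^\infty \bigl(k^2/\sqrt{(k^2-\mu)^2+\Delta^2}-1\bigr)\,dk = -1/g$. A minimal solver:

```python
import math

def gap_integral(delta, mu=1.0, kmax=50.0, n=50_000):
    """I(Delta) = int_0^inf (k^2/E_k - 1) dk, E_k = sqrt((k^2-mu)^2 + Delta^2).
    The subtracted 1 is the contact-interaction renormalization; the tail
    above kmax behaves as mu/k^2 and is added analytically."""
    h = kmax / n
    total = 0.0
    for i in range(n + 1):
        k = i * h
        xi = k * k - mu
        f = k * k / math.sqrt(xi * xi + delta * delta) - 1.0
        total += f if 0 < i < n else 0.5 * f  # trapezoid rule
    return total * h + mu / kmax

def solve_gap(g, mu=1.0):
    """Bisection for I(Delta) = -1/g with attractive g < 0 (I decreases in Delta)."""
    lo, hi = 1e-3, 1.0
    for _ in range(30):
        mid = 0.5 * (lo + hi)
        if gap_integral(mid, mu) > -1.0 / g:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

delta = solve_gap(-0.5)
print(delta)  # comparable to the weak-coupling estimate 8*exp(-2 - 1/|g|) ~ 0.15
```

Since for $g_{12}=g_{23}$ and equal chemical potentials the grand potential depends only on $\sqrt{\Delta_{12}^2+\Delta_{23}^2}$, this single-channel magnitude is directly comparable to the minimum quoted for Fig.~\ref{fig:fe_1}(a).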
We have found the global minima of the free energy by solving the gap equations
under many different circumstances. Fig.~\ref{fig:gaps} shows typical results
for the gaps as a function of temperature and $\mu_3$. At zero temperature
there exists a first order quantum phase transition, as the chemical potential
is varied, from the ${\bf\Delta}=(\Delta_{12},0)$ phase into the ${\bf\Delta}=(0,\Delta_{23})$
phase. We also find that with increasing temperature there can
be a second order transition from the ${\bf\Delta}=(0,\Delta_{23})$
phase into the ${\bf\Delta}=(\Delta_{12},0)$ phase, which then makes a transition
into the normal state at a higher temperature. However, whether this sequence of transitions
takes place or not, does depend on the strength of the interactions.
For weaker interactions it is possible that the ${\bf\Delta}=(0,\Delta_{23})$ phase is
not completely surrounded by the ${\bf\Delta}=(\Delta_{12},0)$ phase as it is in
Fig.~\ref{fig:gaps}.
This overall behavior can be understood in the following way. The pairing between atoms of different masses happens around the point where their Fermi momenta are nearly equal (following from the condition $\mu_3/\mu_1 \approx m_1/m_3$). This pairing is favored over the pairing between the same-mass components due to the higher density of states of the more massive component. At finite temperature, the Fermi surfaces of both pairing components are smeared out; however, for components with different masses this smoothing of the Fermi edge happens differently (the Fermi edge of the higher-mass component spreads more). Therefore, with increasing temperature, it becomes increasingly difficult to find a pairing partner with the same magnitude (but opposite sign) of momentum, and the advantage given by the higher density of states of the more massive component also becomes less significant. This makes the pairing between the same-mass components potentially more favorable at high temperatures.
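The mass dependence of the thermal smearing invoked above can be made quantitative with a small sketch (ours; the mass and temperature values are illustrative). For a dispersion $\epsilon(k)=k^2/2m$ with matched Fermi momenta, the $k$-interval over which the Fermi-Dirac occupation falls from $0.9$ to $0.1$ grows with the mass:

```python
import math

def edge_width(m, kF=1.0, T=0.01):
    """k-interval over which the Fermi-Dirac occupation of a free dispersion
    eps(k) = k^2/(2m) drops from 0.9 to 0.1, with mu = kF^2/(2m) so that the
    Fermi momenta of components with different masses are matched."""
    mu = kF * kF / (2.0 * m)
    s = T * math.log(9.0)  # (eps - mu)/T = +/- ln 9 at occupations 0.1 / 0.9
    k_occ_09 = math.sqrt(max(2.0 * m * (mu - s), 0.0))
    k_occ_01 = math.sqrt(2.0 * m * (mu + s))
    return k_occ_01 - k_occ_09

w_light = edge_width(m=1.0)         # a 6Li-like component
w_heavy = edge_width(m=40.0 / 6.0)  # a 40K-like component, ~6.7 times heavier
print(w_light, w_heavy)  # the heavier component's Fermi edge is wider in k
```

For a narrow edge the width is approximately $2mT\ln 9/k_F$, i.e.\ linear in the mass.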
\begin{figure}
\begin{tabular}{lll}
\includegraphics[width=0.333\columnwidth]{fig_3a.eps} & \includegraphics[width=0.333\columnwidth]{fig_3b.eps}
& \includegraphics[width=0.333\columnwidth]{fig_3c.eps}
\end{tabular}
\caption[Fig3]{The gap parameters as a function of temperature $T$ and of the third component
chemical potential $\mu_3$. We used the mass ratio $m_1/m_3=0.15$, $\mu_1=\mu_2=1$,
and the coupling strengths were $g_{12}=g_{23}=-0.5$. The figure (c) shows
$\Delta=\sqrt{\Delta_{12}^2+\Delta_{23}^2}$ and
demonstrates clearly the sharp change in
the order parameter at zero temperature (the quantum phase transition) as well as the smoother transition from the
$\Delta_{23}$ superfluid to the $\Delta_{12}$ superfluid at a finite temperature.
}
\label{fig:gaps}
\end{figure}
\section{Summary and conclusions}
\label{sec:Conclusions}
In this paper we have considered the mean-field theory
of an interacting three-component Fermi gas. We found
that in the highly symmetric case
when all fermion masses as well as interactions strengths are equal the
free energy is only a function of $\Delta_{12}^2+\Delta_{23}^2$.
However, any deviation from the most symmetric situation, whether it is
due to different atomic masses, different interaction strengths, or
different chemical potentials, breaks the above symmetry of
the free energy and makes one of the paired states $(\Delta_{12}\neq0 ,\Delta_{23}=0)$,
$(\Delta_{12}=0 ,\Delta_{23}\neq 0)$ or the normal-state
$(\Delta_{12}=0 ,\Delta_{23}= 0)$
energetically favored. The magnitudes of the non-zero gaps, as well as the point where the transition to the normal state occurs, depend on the parameters, but we have found the global minima of the free energy for a fairly representative range of parameters.
When using a grand canonical ensemble the average densities are quantities
derived from the grand potential through $n_i=-\partial\Omega/\partial\mu_i$.
In this work we have explored the system behavior as a function of the chemical potentials
by minimizing the grand potential. It is therefore useful
to elaborate what kind of density ratios different sets of chemical potentials
actually correspond to. For the case investigated in the previous section, with a mass ratio $m_1/m_3=0.15$ at zero temperature, we find at $\mu_3=0.05$ pairing in the $1-2$ channel and density ratios of $n_3/n_1=n_3/n_2\approx 0.26$. As the third component chemical potential is increased to $\mu_3=0.15$, we find pairing in the $2-3$ channel with density ratios $n_2/n_1=n_3/n_1\approx 0.72$. Finally, when $\mu_3=0.25$, the pairing reverts to the $1-2$ channel and the density ratios are $n_3/n_1=n_3/n_2\approx 2.9$. Therefore, this transition could be realized simply by
varying the density of one component while keeping the other two fixed. Note that transitions could also be induced by changing the interaction strengths, i.e.\ the scattering lengths, between the three components, as discussed in~\cite{Bedaque2006a}. However, at present the method used for tuning the scattering lengths, i.e.\ the use of Feshbach resonances, does not allow the different scattering lengths to be tuned independently, whereas the densities can be varied completely independently of each other.
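As a sanity check of the relation $n_i=-\partial\Omega/\partial\mu_i$ (our own single-component illustration at $T=0$, in units $\hbar=m=1$), a numerical derivative of the ideal-gas grand potential reproduces the closed-form density $n=k_F^3/6\pi^2$:

```python
import math

def omega_density(mu, m=1.0):
    """T=0 grand-potential density of one ideal Fermi component (one internal
    state, units hbar = 1): Omega/V = -(2m)^{3/2} mu^{5/2} / (15 pi^2)."""
    return -((2.0 * m) ** 1.5) * mu ** 2.5 / (15.0 * math.pi ** 2)

def density_exact(mu, m=1.0):
    """n = (2 m mu)^{3/2} / (6 pi^2) = k_F^3 / (6 pi^2)."""
    return (2.0 * m * mu) ** 1.5 / (6.0 * math.pi ** 2)

mu, h = 1.0, 1e-6
n_numeric = -(omega_density(mu + h) - omega_density(mu - h)) / (2.0 * h)
print(n_numeric, density_exact(mu))  # the two values agree
```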
The change in the pairing gap related to the transitions
could be observed using RF-spectroscopy of the pairing
gap~\cite{Chin2004a,Kinnunen2004b,Torma2000a}
and corresponding condensate fraction
by the pair projection method~\cite{Regal2004a,Zwierlein2004a},
and superfluidity could be directly demonstrated by the
creation of
vortices~\cite{Zwierlein2005a}; all these can be studied selectively
for any choice for the pairing between the three
components.
In this paper we have for convenience limited ourselves to the special case where the interaction between the first and the third component is small enough to be neglected.
This requirement can be relaxed easily and one
could solve the most general case with unequal masses as well
as with unequal interactions. We have checked that
the presence of the omitted interaction
between the first and the third component does not
change the qualitative picture presented in this paper.
For some parameter values the new gap-function
$\Delta_{13}$ can become non-zero, but otherwise the qualitative
physics remains similar to the cases discussed in this paper.
Three-component Fermi systems have also been studied theoretically in~\cite{Honerkamp2004a,Honerkamp2004b,Bedaque2006a,Paananen2006a}.
\begin{acknowledgments}
The authors thank T. Koponen and J. Kinnunen for useful discussions.
This work was supported by Academy of Finland and EUROHORCs
(EURYI award, Academy project numbers 207083, 106299, and 205470).
\end{acknowledgments}
\bibliographystyle{apsrev}
\section{Introduction}\label{sec:intro}
\emph{Higher-order recursion schemes} (\emph{schemes} in short) are used to faithfully represent the control flow of programs in languages with higher-order functions~\cite{Damm82, KNU-hopda, Ong-hoschemes, KobayashiPrograms}.
This formalism is equivalent via direct translations to simply-typed $\lambda Y$-calcu\-lus~\cite{schemes-lY}.
Collapsible pushdown systems~\cite{collapsible} and ordered tree-pushdown systems~\cite{Ordered-Tree-Pushdown} are other equivalent formalisms.
Schemes cover some other models such as indexed grammars~\cite{AhoIndexed} and ordered multi-pushdown automata~\cite{OrderedMultiPushdown}.
In our setting, a scheme is a finite description of an infinite tree.
A useful property of schemes is that the \emph{MSO-model-checking problem} for schemes is decidable.
This means that given a scheme ${\mathcal G}$ and an {\upshape MSO}\xspace sentence $\phi$, it can be algorithmically decided whether the tree generated by ${\mathcal G}$ satisfies $\phi$.
This result has several different proofs~\cite{Ong-hoschemes, collapsible, KobayashiOngtypes, KrivineWS},
and also some extensions like global model checking~\cite{globalMC}, logical reflection~\cite{reflection}, effective selection~\cite{selection}, existence of lambda-calculus model~\cite{ModelSM}.
When the property of trees is given as an automaton, not as a formula, the model-checking problem can be solved efficiently,
in the sense that there exist implementations working in a reasonable running time~\cite{KobayashiPrograms, GTRecS, HorSat, PrefaceTool, TravMC2} (most tools cover only a fragment of {\upshape MSO}\xspace, though).
Recently, interest has arisen in model-checking trees generated by schemes against properties not expressible in the {\upshape MSO}\xspace logic.
These are properties expressing boundedness and unboundedness of some quantities.
More precisely, it was shown that the \emph{simultaneous unboundedness problem} (aka.~\emph{diagonal problem}) for schemes is decidable~\cite{diagonal-safe, diagonal, types-diagonal}.
This problem asks, given a scheme ${\mathcal G}$ and a set of letters $A$, whether for every $n\in\mathbb N$ there exists a path in the tree generated by ${\mathcal G}$ such that every letter from $A$ appears on this path at least $n$ times.
This result turns out to be interesting, because it entails other decidability results for recursion schemes,
concerning in particular computability of the downward closure of recognized languages~\cite{Zetzsche-dc},
and the problem of separability by piecewise testable languages~\cite{sep-piecewise-test}.
In this paper we show a result of a more general style.
Instead of considering a particular property, like in the simultaneous unboundedness problem, we consider a logic capable of expressing properties that talk about boundedness.
More precisely, we extend the {\upshape MSO}\xspace logic by the unbounding quantifier, \ensuremath{\mathsf U}\xspace~\cite{BojanczykU}.
A formula using this quantifier, $\ensuremath{\mathsf U}\xspace \mathsf{X}.\phi$, says that $\phi$ holds for arbitrarily large finite sets $\mathsf{X}$.
We impose a restriction that $\ensuremath{\mathsf U}\xspace \mathsf{X}.\phi$ can be used only in a context where all free variables of $\phi$ represent finite sets.
We call the resulting logic {\upshape MSO+}\ensuremath{\mathsf{U^{fin}}}\xspace.
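For a first intuition, consider the following sentence (an illustration of ours, in the syntax introduced in Section~\ref{sec:prelim}):
\begin{align*}
\ensuremath{\mathsf U}\xspace\mathsf{F}.\,a(\mathsf{F})\,.
\end{align*}
It holds in a tree if and only if for every $n\in\mathbb N$ there is a finite set of at least $n$ nodes that are all labeled by $a$, that is, if and only if the tree contains infinitely many $a$-labeled nodes.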
The goal of this paper is to prove the following theorem.
\begin{thm}\label{thm:main}
It is decidable whether the tree generated by a given scheme satisfies a given {\upshape MSO+}\ensuremath{\mathsf{U^{fin}}}\xspace sentence.
\end{thm}
We remark that the {\upshape MSO+}\ensuremath{\mathsf{U^{fin}}}\xspace logic extends the {\upshape WMSO+}\unbound logic,
which was widely considered in the context of infinite words~\cite{wmso+u-words} and infinite trees~\cite{wmso+u-kaiser, wmso+u-trees, wmso+up}.
The difference is that in {\upshape WMSO+}\unbound only quantification over finite sets is allowed.
In consequence, {\upshape WMSO+}\unbound cannot express all properties of {\upshape MSO}\xspace~\cite{hummel-skrzypczak-topological}.
On the other hand, in {\upshape MSO+}\ensuremath{\mathsf{U^{fin}}}\xspace we allow quantification over infinite sets like in standard {\upshape MSO}\xspace,
and we only restrict the use of the \ensuremath{\mathsf U}\xspace quantifier to subformulae in which all free variables represent finite sets.
Furthermore, we remark that some restriction on the \ensuremath{\mathsf U}\xspace quantifier is necessary.
Indeed, the model-checking problem for the full {\upshape MSO+}\unbound logic (where the \ensuremath{\mathsf U}\xspace quantifier can be used in an unrestricted way)
is undecidable already over the infinite word without labels~\cite{mso+u-undecid},
hence a fortiori over the trees generated by higher-order recursion schemes.
While proving Theorem~\ref{thm:main}, we depend on several earlier results.
First, we translate {\upshape MSO+}\ensuremath{\mathsf{U^{fin}}}\xspace formulae to an equivalent automata model using the notion of logical types (aka.\ composition method) following a long series of previous work
(some selection:~\cite{FefermanVaught, Shelah, Lauchli, BlumensathColcombet, wmso+u-kaiser, wmso-model}).
Second, we use the logical-reflection property of schemes~\cite{reflection}.
It says that given a scheme ${\mathcal G}$ and an {\upshape MSO}\xspace sentence $\phi$ one can construct a scheme ${\mathcal G}_\phi$ generating the same tree as ${\mathcal G}$,
where in every node it is additionally written whether $\phi$ is satisfied in the subtree starting in this node.
Third, we use an analogous property for the simultaneous unboundedness problem, called \emph{SUP reflection}~\cite{types-diagonal-journal}:
given a scheme ${\mathcal G}$ we can construct a scheme ${\mathcal G}_\mathit{SUP}$ generating the same tree as ${\mathcal G}$,
where every node is additionally annotated by the solution of the simultaneous unboundedness problem in the subtree starting in this node.
Finally, we use the fact that schemes can be composed with finite tree transducers transforming the generated trees;
this follows directly from the equivalence between schemes and collapsible pushdown systems~\cite{collapsible}.
Although our algorithm depends on a solution to the simultaneous unboundedness problem, it is not known whether the simultaneous unboundedness problem itself can be expressed in {\upshape MSO+}\ensuremath{\mathsf{U^{fin}}}\xspace.
The difficulty is that every \ensuremath{\mathsf U}\xspace quantifier can entail unboundedness only of a single quantity,
and it seems difficult to express simultaneous unboundedness of multiple quantities in {\upshape MSO+}\ensuremath{\mathsf{U^{fin}}}\xspace.
This paper is an extended version of a conference paper~\cite{wmsou-schemes}, where the result is shown for the {\upshape WMSO+}\unbound logic.
Besides the fact that we work here with a slightly stronger logic (namely, with {\upshape MSO+}\ensuremath{\mathsf{U^{fin}}}\xspace instead of {\upshape WMSO+}\unbound),
our proofs follow basically the same ideas as proofs contained in the conference paper~\cite{wmsou-schemes}.
We remark that the conference paper~\cite{wmsou-schemes} contained additionally a justification of the SUP-reflection property for schemes.
This justification was already expanded in another paper~\cite{types-diagonal-journal},
and for this reason we do not include it here.
Our paper is structured as follows.
In Section~\ref{sec:prelim} we introduce all necessary definitions.
In Section~\ref{sec:automata} we show how to translate {\upshape MSO+}\ensuremath{\mathsf{U^{fin}}}\xspace sentences to automata.
In Section~\ref{sec:main-thm} we prove the main theorem.
Section~\ref{sec:conclusion} contains a few extensions of the main theorem.
\section{Preliminaries}\label{sec:prelim}
The powerset of a set $X$ is denoted ${\mathcal P}(X)$, and the set of finite subsets of $X$ is denoted $\Pp^\mathsf{fin}(X)$.
For a relation $r$, we write $r^*$ for the reflexive transitive closure of $r$.
When $f$ is a function, by $f[x\mapsto y]$ we mean the function that maps $x$ to $y$ and every other $z\in\mathrm{dom}(f)$ to $f(z)$.
\subsection*{Infinitary lambda-calculus.}
We consider infinitary, simply-typed lambda-calculus.
In particular, each lambda-term has an associated sort (aka.\ simple type).
The set of \emph{sorts} is constructed from a unique ground sort $\mathsf{o}$ using a binary operation $\mathbin{\to}$; namely $\mathsf{o}$ is a sort, and if $\alpha$ and $\beta$ are sorts, so is $\alpha\mathbin{\to}\beta$.
By convention, $\mathbin{\to}$ associates to the right, that is, $\alpha\mathbin{\to}\beta\mathbin{\to}\gamma$ is understood as $\alpha\mathbin{\to}(\beta\mathbin{\to}\gamma)$.
While defining lambda-terms we assume
a set of variables $\mathit{Vars}^\lambda=\{x^\alpha,y^\beta,z^\gamma,\dots\}$ containing infinitely many variables of every sort (sort of a variable is written in superscript).
\emph{Infinitary lambda-terms} (or just \emph{lambda-terms}) are defined by coinduction, according to the following rules:
\begin{itemize}
\item node constructor---if $K_1^\mathsf{o},\dots,K_r^\mathsf{o}$ are lambda-terms, and $a$ is an arbitrary object, called a \emph{letter},
then $(\symb{a}[K_1^\mathsf{o}][\dots][K_r^\mathsf{o}])^\mathsf{o}$ is a lambda-term,
\item variable---every variable $x^\alpha\in\mathit{Vars}^\lambda$ is a lambda-term,
\item application---if $K^{\alpha\mathbin{\to}\beta}$ and $L^\alpha$ are lambda-terms, then $(K^{\alpha\mathbin{\to}\beta}\,L^\alpha)^\beta$ is a lambda-term, and
\item lambda-binder---if $K^\beta$ is a lambda-term and $x^\alpha$ is a variable, then $(\lambda x^\alpha.K^\beta)^{\alpha\mathbin{\to}\beta}$ is a lambda-term.
\end{itemize}
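The four formation rules above can be mirrored by a small Python sketch for finite lambda-terms (an illustration of ours; the paper's lambda-terms may be infinite, which a finite program cannot represent directly, and the name \texttt{sort\_of} is ours):

```python
# Sketch: sorted lambda-terms with the four constructors above, plus a
# sort-computation function mirroring the sort annotations of the rules.
from dataclasses import dataclass
from typing import Tuple, Union

Sort = Union[str, Tuple["Sort", "Sort"]]   # "o", or (alpha, beta) for alpha -> beta

@dataclass
class Node:            # node constructor a[K_1]...[K_r]
    letter: str
    children: tuple    # each child must have sort "o"

@dataclass
class Var:             # variable x^alpha
    name: str
    sort: Sort

@dataclass
class App:             # application (K L)
    fun: "Term"
    arg: "Term"

@dataclass
class Lam:             # lambda-binder (lambda x. K)
    var: Var
    body: "Term"

Term = Union[Node, Var, App, Lam]

def sort_of(t: Term) -> Sort:
    if isinstance(t, Node):
        assert all(sort_of(c) == "o" for c in t.children)
        return "o"
    if isinstance(t, Var):
        return t.sort
    if isinstance(t, App):              # fun : alpha -> beta, arg : alpha
        a, b = sort_of(t.fun), sort_of(t.arg)
        assert isinstance(a, tuple) and a[0] == b
        return a[1]
    return (t.var.sort, sort_of(t.body))  # Lam : alpha -> beta

# lambda x^o. a[x][c]  has sort  o -> o
x = Var("x", "o")
term = Lam(x, Node("a", (x, Node("c", ()))))
assert sort_of(term) == ("o", "o")
```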
Sets of letters are called \emph{alphabets}.
We use unranked letters; this subsumes the setting of ranked letters.
We naturally identify lambda-terms differing only in names of bound variables.
We often omit the sort annotations of lambda-terms, but we keep in mind that every lambda-term (and every variable) has a fixed sort.
Free variables and subterms of a lambda-term, as well as beta-reductions, are defined as usual.
A lambda-term $K$ is \emph{closed} if it has no free variables.
We restrict ourselves to those lambda-terms for which the set of sorts of all subterms is finite.
\subsection*{Trees; B\"ohm trees.}
A \emph{tree} is defined as a lambda-term that is built using only node constructors, that is, not using variables, applications, nor lambda-binders.
For a tree $T=\symb{a}[T_1][\dots][T_r]$, its set of nodes is defined as the smallest set such that
\begin{itemize}
\item $\epsilon$ is a node of $T$, labeled by $a$, and
\item if $u$ is a node of $T_i$ for some $i\in\{1,\dots,r\}$, labeled by $b$, then $iu$ is a node of $T$, also labeled by $b$.
\end{itemize}
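For finite trees, the inductive definition of the node set can be turned into a short sketch (ours), with nodes addressed by strings of child indices:

```python
# Sketch: compute the node -> label mapping of a finite tree, following
# the definition above. Trees are nested tuples (letter, child_1, ..., child_r);
# nodes are strings over digits, the empty string being the root.

def nodes(tree):
    letter, children = tree[0], tree[1:]
    result = {"": letter}
    for i, child in enumerate(children, start=1):
        for u, b in nodes(child).items():
            result[str(i) + u] = b      # node u of T_i becomes node iu of T
    return result

T = ("a", ("b", ("c",)), ("c",))        # the tree a[b[c]][c]
assert nodes(T) == {"": "a", "1": "b", "11": "c", "2": "c"}
```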
A node $v$ is the \emph{$i$-th child} of $u$ if $v=u i$.
We say that two trees $T,T'$ are of \emph{the same shape} if they have the same nodes.
A tree $T$ is \emph{over alphabet $\Sigma$} if all labels of its nodes belong to $\Sigma$,
and it has \emph{maximal arity} $\nChld_{\max}\in\mathbb N$ if its every node has at most $\nChld_{\max}$ children.
When both these conditions are satisfied, we say that $T$ is a \emph{$(\Sigma,\nChld_{\max})$-tree}.
For a tree $T$ and its node $u$, by $T{\restriction}_u$ we denote the \emph{subtree} of $T$ starting at $u$, defined in the expected way.
We consider B\"ohm trees only for closed lambda-terms of sort $\mathsf{o}$.
For such a lambda-term $K$, its \emph{B\"ohm tree} is constructed by coinduction, as follows:
if there is a sequence of beta-reductions from $K$ to a lambda-term of the form $\symb{a}[K_1][\dots][K_r]$,
and $T_1,\dots,T_r$ are B\"ohm trees of $K_1,\dots,K_r$, respectively, then $\symb{a}[T_1][\dots][T_r]$ is a B\"ohm tree of $K$;
if there is no such sequence of beta-reductions from $K$, then $\symb{\omega}$ is a B\"ohm tree of $K$ (where $\omega$ is a fixed letter).
It is folklore that every closed lambda-term of sort $\mathsf{o}$ has exactly one B\"ohm tree (the order in which beta-reductions are performed does not matter); this tree is denoted by $\mathit{BT}(K)$.
A closed lambda-term $K$ of sort $\mathsf{o}$ is called \emph{fully convergent} if every node of $\mathit{BT}(K)$ is explicitly created by a node constructor from $K$
(e.g., $\symb{\omega}$ is fully convergent, while $K=(\lambda x^\mathsf{o}.x)\,K$ is not).
More formally: we consider the lambda-term $K_{-\omega}$ obtained from $K$ by replacing $\omega$ with some other letter $\omega'$, and
we say that $K$ is fully convergent if in $\mathit{BT}(K_{-\omega})$ there are no $\omega$-labeled nodes.
\subsection*{Higher-order recursion schemes.}
Our definition of schemes is less restrictive than usually, as we see them only as finite representations of infinite lambda-terms.
Thus a \emph{higher-order recursion scheme} (or just a \emph{scheme}) is a triple ${\mathcal G}=({\mathcal N},{\mathcal R},N_\mathsf{0}^\mathsf{o})$, where
\begin{itemize}
\item
${\mathcal N}\subseteq\mathit{Vars}^\lambda$ is a finite set of nonterminals,
\item
${\mathcal R}$ is a function that maps every nonterminal $N\in{\mathcal N}$ to a finite lambda-term whose all free variables are contained in ${\mathcal N}$ and whose sort equals the sort of $N$,
and \item $N_\mathsf{0}^\mathsf{o}\in{\mathcal N}$ is a starting nonterminal, being of sort $\mathsf{o}$.
\end{itemize}
We assume that elements of ${\mathcal N}$ are not used as bound variables, and that ${\mathcal R}(N)$ is not a nonterminal for any $N\in{\mathcal N}$.
For a scheme ${\mathcal G}=({\mathcal N},{\mathcal R},N_\mathsf{0})$, and for a lambda-term $K$ whose free variables are contained in ${\mathcal N}$,
we define the infinitary lambda-term \emph{represented by} $K$ with respect to ${\mathcal G}$, denoted $\Lambda_{\mathcal G}(K)$, by coinduction:
to obtain $\Lambda_{\mathcal G}(K)$ we replace in $K$ every nonterminal $N\in{\mathcal N}$ with $\Lambda_{\mathcal G}({\mathcal R}(N))$.
Observe that $\Lambda_{\mathcal G}(K)$ is a closed lambda-term of the same sort as $K$.
The infinitary lambda-term \emph{represented by} ${\mathcal G}$, denoted $\Lambda({\mathcal G})$, equals $\Lambda_{\mathcal G}(N_\mathsf{0})$.
By the \emph{tree generated by ${\mathcal G}$} we mean $\mathit{BT}(\Lambda({\mathcal G}))$.
We write $\Sigma_{\mathcal G}$ for the finite alphabet containing $\omega$ and letters used in node constructors appearing in ${\mathcal G}$,
and $\nChld_{\max}({\mathcal G})$ for the maximal arity of node constructors appearing in ${\mathcal G}$.
Clearly $\mathit{BT}(\Lambda({\mathcal G}))$ is a $(\Sigma_{\mathcal G},\nChld_{\max}({\mathcal G}))$-tree.
In our constructions it is convenient to consider only schemes representing fully-conver\-gent lambda-terms, which is possible due to the following standard result.
\begin{factfactC}[\cite{haddad-fics,models-reflection}]\label{fact:convergent}
For every scheme ${\mathcal G}$ we can construct a scheme ${\mathcal G}'$ generating the same tree as ${\mathcal G}$, and such that $\Lambda({\mathcal G}')$ is fully convergent.
\qed\end{factfactC}
\begin{exam}
Consider the scheme ${\mathcal G}_1=(\{M^\mathsf{o},N^{\mathsf{o}\mathbin{\to}\mathsf{o}}\},{\mathcal R},M)$, where
\begin{align*}
{\mathcal R}(N)=\lambda x^\mathsf{o}.\symb{a}[x][N\,(\symb{b}[x])]\,,
&&\mbox{and}&&
{\mathcal R}(M)=N\,(\symb{c})\,.
\end{align*}
We obtain $\Lambda({\mathcal G}_1)=K\,(\symb{c})$, where $K$ is the unique lambda-term for which it holds $K=\lambda x^\mathsf{o}.\symb{a}[x][K\,(\symb{b}[x])]$.
The tree generated by ${\mathcal G}_1$ equals $\symb{a}[T_0][\symb{a}[T_1][\symb{a}[T_2][\dots]]]$, where $T_0=\symb{c}$ and $T_i=\symb{b}[T_{i-1}]$ for all $i\geq 1$.
\end{exam}
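The shape of the generated tree can be checked mechanically by unfolding the recursion $N\,x=\symb{a}[x][N\,(\symb{b}[x])]$ semantically; the following sketch (ours, with hypothetical function names) compares a finite prefix of the unfolding with the closed form given above:

```python
# Sketch: unfold the example scheme G_1 to a finite depth and compare with
# the closed form a[T_0][a[T_1][...]], where T_0 = c and T_i = b[T_{i-1}].

def unfold_N(x, depth):
    """Expand N applied to the (already built) argument x, `depth` spine steps."""
    if depth == 0:
        return "?"                              # unexpanded remainder
    return ("a", x, unfold_N(("b", x), depth - 1))

def closed_form(i, depth):
    """a[T_i][a[T_{i+1}][...]] truncated after `depth` a-labeled spine nodes."""
    if depth == 0:
        return "?"
    t = "c"
    for _ in range(i):                          # T_i = b^i[c]
        t = ("b", t)
    return ("a", t, closed_form(i + 1, depth - 1))

spine = unfold_N("c", 5)                        # M = N(c)
assert spine == closed_form(0, 5)
```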
\begin{rem*}
The usual definition of schemes is more restrictive than ours:
it is required that ${\mathcal R}(N)$ is of the form $\lambda x_1.\cdots{}.\lambda x_s.K$, where $K$ is of sort $\mathsf{o}$ and does not use any lambda-binders.
We do not have this requirement, so possibly ${\mathcal R}(N)$ does not start with a full sequence of lambda-binders, and possibly some lambda-binders are nested deeper in the lambda-term.
It is, though, not difficult to convert a scheme respecting only our definition to a scheme satisfying these additional requirements (at the cost of introducing more nonterminals).
We can, for example, use a translation between schemes and $\lambda Y$-terms from Salvati and Walukiewicz~\cite{schemes-lY}:
their translation from schemes to $\lambda Y$-terms works well with our definition of schemes,
while the translation from $\lambda Y$-terms to schemes produces schemes respecting the more restrictive definition.
Another difference is that in the definition of the B\"ohm tree we allow arbitrary beta-reductions, while it is sometimes assumed that only outermost beta-reductions are allowed.
It is folklore that these two definitions are equivalent.
There is one more difference: we expand a scheme to an infinite lambda-term, and then we operate on this lambda-term,
while often finite lambda-terms containing nonterminals are considered, and appearances of nonterminals are expanded only when needed.
This is a purely syntactical difference.
\end{rem*}
\subsection*{{\upshape MSO+}\ensuremath{\mathsf{U^{fin}}}\xspace.}
For technical convenience, we use a syntax in which there are no first-order variables.
It is easy to translate a formula from a more standard syntax to ours (at least when the maximal arity of considered trees is fixed).
We assume two infinite sets of variables, ${\mathcal V}^\mathsf{fin}$ and ${\mathcal V}^\mathsf{inf}$, and we let ${\mathcal V}={\mathcal V}^\mathsf{fin}\uplus{\mathcal V}^\mathsf{inf}$.
Variables from ${\mathcal V}^\mathsf{fin}$ are used to quantify over finite sets, while variables from ${\mathcal V}^\mathsf{inf}$ over arbitrary (potentially infinite) sets.
In the syntax of {\upshape MSO+}\ensuremath{\mathsf{U^{fin}}}\xspace we have the following constructions:
\begin{align*}
\phi::=a(\mathsf{X})\mid
\mathsf{X}\curlywedgedownarrow_i \mathsf{Y}\mid
\mathsf{X}\subseteq \mathsf{Y}\mid
\phi_1\land\phi_2\mid
\neg\phi'\mid
\exists \mathsf{Z}.\phi'\mid
{\exists_{\mathsf{fin}}} \mathsf{F}.\phi'\mid
\ensuremath{\mathsf U}\xspace \mathsf{F}.\phi'
\end{align*}
where $a$ is a letter, and $i\in\mathbb N_+$, and $\mathsf{X},\mathsf{Y}\in{\mathcal V}$, and $\mathsf{Z}\in{\mathcal V}^\mathsf{inf}$, and $\mathsf{F}\in{\mathcal V}^\mathsf{fin}$.
Free variables of a formula are defined as usual; in particular $\ensuremath{\mathsf U}\xspace \mathsf{F}$ is a quantifier, hence it binds the variable $\mathsf{F}$.
We impose the restriction that $\ensuremath{\mathsf U}\xspace\mathsf{F}.\phi'$ can be used only when all free variables of $\phi'$ are from ${\mathcal V}^\mathsf{fin}$.
The {\upshape MSO}\xspace logic is defined likewise, with the exception that the \ensuremath{\mathsf U}\xspace quantifier is disallowed.
(The fact that a set of tree nodes is finite is expressible in {\upshape MSO}\xspace without using the ${\exists_{\mathsf{fin}}}$ quantifier,
thus the presence of this quantifier does not change the expressive power of {\upshape MSO}\xspace.)
We evaluate formulae of {\upshape MSO+}\ensuremath{\mathsf{U^{fin}}}\xspace in $\Sigma$-labeled trees.
In order to evaluate a formula $\phi$ in a tree $T$, we also need a \emph{valuation}, that is, a partial function $\nu$ from ${\mathcal V}$ to sets of nodes of $T$,
such that $\nu(\mathsf{F})$ is finite whenever $\mathsf{F}\in{\mathcal V}^\mathsf{fin}\cap\mathrm{dom}(\nu)$.
The function should be defined at least for all free variables of $\phi$.
The semantics is defined as follows:
\begin{itemize}
\item $a(\mathsf{X})$ holds when every node in $\nu(\mathsf{X})$ is labeled by $a$,
\item $\mathsf{X}\curlywedgedownarrow_i \mathsf{Y}$ holds when both $\nu(\mathsf{X})$ and $\nu(\mathsf{Y})$ are singletons,
and the unique node in $\nu(\mathsf{Y})$ is the $i$-th child of the unique node in $\nu(\mathsf{X})$,
\item $\mathsf{X}\subseteq \mathsf{Y}$ holds when $\nu(\mathsf{X})\subseteq\nu(\mathsf{Y})$,
\item $\phi_1\land\phi_2$ holds when both $\phi_1$ and $\phi_2$ hold,
\item $\neg\phi'$ holds when $\phi'$ does not hold,
\item $\exists\mathsf{Z}.\phi'$ holds when $\phi'$ holds for an extension of $\nu$ that maps $\mathsf{Z}$ to some set of nodes of $T$,
\item ${\exists_{\mathsf{fin}}} \mathsf{F}.\phi'$ holds when $\phi'$ holds for an extension of $\nu$ that maps $\mathsf{F}$ to some finite set of nodes of $T$, and
\item $\ensuremath{\mathsf U}\xspace \mathsf{F}.\phi'$ holds when for every $n\in\mathbb N$,
$\phi'$ holds for an extension of $\nu$ that maps $\mathsf{F}$ to some finite set of nodes of $T$ of cardinality at least $n$.
\end{itemize}
We write $T,\nu\models\phi$ to denote that $\phi$ holds in $T$ with respect to the valuation $\nu$.
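The semantics above can be prototyped by a naive evaluator over finite trees, a sketch of ours (not from the paper); note that on a finite tree $\ensuremath{\mathsf U}\xspace\mathsf{F}.\phi'$ is always false, since a finite tree contains no arbitrarily large sets of nodes:

```python
# Sketch: naive evaluation of MSO+U^fin over *finite* trees.
# Trees: dict from node addresses (digit strings) to labels.
# Formulae: nested tuples, e.g. ("ex", "X", ("lab", "b", "X")).
from itertools import chain, combinations

def subsets(xs):
    return chain.from_iterable(combinations(xs, k) for k in range(len(xs) + 1))

def holds(tree, nu, phi):
    op = phi[0]
    if op == "lab":                              # a(X)
        _, a, X = phi
        return all(tree[u] == a for u in nu[X])
    if op == "child":                            # X curlywedge_i Y
        _, i, X, Y = phi
        return (len(nu[X]) == len(nu[Y]) == 1
                and next(iter(nu[Y])) == next(iter(nu[X])) + str(i))
    if op == "sub":                              # X subseteq Y
        return nu[phi[1]] <= nu[phi[2]]
    if op == "and":
        return holds(tree, nu, phi[1]) and holds(tree, nu, phi[2])
    if op == "not":
        return not holds(tree, nu, phi[1])
    if op in ("ex", "exfin"):                    # all node sets are finite here
        _, Z, psi = phi
        return any(holds(tree, {**nu, Z: set(S)}, psi) for S in subsets(tree))
    if op == "U":                                # no arbitrarily large finite sets
        return False

T = {"": "a", "1": "b", "2": "c"}                # the tree a[b][c]
assert holds(T, {"X": {""}, "Y": {"1"}}, ("child", 1, "X", "Y"))
assert holds(T, {}, ("ex", "X", ("lab", "b", "X")))
assert not holds(T, {}, ("U", "F", ("lab", "a", "F")))
```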
In order to see that our definition of {\upshape MSO+}\ensuremath{\mathsf{U^{fin}}}\xspace is not too poor, let us write a few example formulae.
\begin{itemize}
\item The fact that $\mathsf{X}$ represents an empty set can be expressed as $\mathit{empty}(\mathsf{X})\equiv a(\mathsf{X})\land b(\mathsf{X})$ (where $a,b$ are any two different letters).
\item The fact that $\mathsf{X}$ represents a set of size at least $2$
can be expressed as $\mathit{big}(\mathsf{X})\equiv\exists \mathsf{Y}.(\mathsf{Y}\subseteq \mathsf{X}\land \neg(\mathsf{X}\subseteq \mathsf{Y})\land\neg\mathit{empty}(\mathsf{Y}))$.
\item The fact that $\mathsf{X}$ represents a singleton can be expressed as $\mathit{sing}(\mathsf{X})\equiv\neg\mathit{empty}(\mathsf{X})\land\neg\mathit{big}(\mathsf{X})$.
\item When we only consider trees of a fixed maximal arity $\nChld_{\max}$,
the fact that $\mathsf{X}$ and $\mathsf{Y}$ represent singletons $\{x\},\{y\}$, respectively, such that $y$ is a child of $x$ can be expressed as
\begin{align*}
(\mathsf{X}\curlywedgedownarrow_1 \mathsf{Y})\lor\dots\lor(\mathsf{X}\curlywedgedownarrow_{\nChld_{\max}} \mathsf{Y})\,,
\end{align*}
where $\phi_1\lor\phi_2$ stands for $\neg(\neg\phi_1\land\neg\phi_2)$.
\item Let $A=\{a_1,\dots,a_k\}$ be a finite set of letters.
The fact that every node in the set represented by $\mathsf{X}$ has its label in $A$ can be expressed as
\begin{align*}
\forall\mathsf{Y}.(\mathit{sing}(\mathsf{Y})\land\mathsf{Y}\subseteq\mathsf{X})\mathbin{\to}(a_1(\mathsf{Y})\lor\dots\lor a_k(\mathsf{Y}))\,,
\end{align*}
where $\forall\mathsf{Y}.\phi$ stands for $\neg\exists\mathsf{Y}.\neg\phi$, and $\phi_1\mathbin{\to}\phi_2$ stands for $\neg(\phi_1\land\neg\phi_2)$.
\end{itemize}
Like in most logics, in {\upshape MSO+}\ensuremath{\mathsf{U^{fin}}}\xspace we can relativize formulae, as described by Fact~\ref{fact:relativize}.
\begin{factfact}\label{fact:relativize}
Let $\nChld_{\max}\in\mathbb N$.
For every {\upshape MSO+}\ensuremath{\mathsf{U^{fin}}}\xspace sentence $\phi$ we can construct an {\upshape MSO+}\ensuremath{\mathsf{U^{fin}}}\xspace formula $\widehat\phi(\mathsf{X})$ with one free variable $\mathsf{X}$
such that for every tree $T$ of maximal arity $\nChld_{\max}$ and every valuation $\nu$,
it holds $T,\nu\models\widehat\phi$ if and only if $\phi$ holds in $T{\restriction}_u$ for every $u\in\nu(\mathsf{X})$.
\end{factfact}
\begin{proof}[Proof sketch]
Suppose first that we want to construct a formula $\phi'(\mathsf{X})$ that satisfies the fact only for valuations mapping $\mathsf{X}$ to singleton sets $\{u\}$.
To this end, we need to relativize quantification in $\phi$ to the subtree starting in $u$.
This means that we replace subformulae of the form $\exists\mathsf{Y}.\psi$ (and likewise ${\exists_{\mathsf{fin}}}\mathsf{Y}.\psi$ and $\ensuremath{\mathsf U}\xspace\mathsf{Y}.\psi$) by $\exists\mathsf{Y}.\eta(\mathsf{X},\mathsf{Y})\land\psi$,
where $\eta(\mathsf{X},\mathsf{Y})$ says that the set represented by $\mathsf{Y}$ contains only (not necessarily proper) descendants of the node represented by $\mathsf{X}$.
We conclude by taking $\widehat\phi(\mathsf{X})\equiv\forall\mathsf{X}'.(\mathit{sing}(\mathsf{X}')\land\mathsf{X}'\subseteq\mathsf{X})\mathbin{\to}\phi'(\mathsf{X}')$,
saying that the formula $\phi'(\mathsf{X}')$ holds whenever $\mathsf{X}'$ represents a singleton subset of the set represented by $\mathsf{X}$.
\end{proof}
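For completeness, here is one standard way (a sketch of ours; the proof above leaves $\eta$ implicit) to express $\eta(\mathsf{X},\mathsf{Y})$ in {\upshape MSO}\xspace for trees of maximal arity $\nChld_{\max}$: a node is a descendant of $x$ if and only if it belongs to every set that contains $x$ and is closed under taking children,
\begin{align*}
\eta(\mathsf{X},\mathsf{Y})\equiv{}&\forall\mathsf{Z}.\bigl(\mathsf{X}\subseteq\mathsf{Z}\land\mathit{closed}(\mathsf{Z})\bigr)\mathbin{\to}\mathsf{Y}\subseteq\mathsf{Z}\,,\qquad\mbox{where}\\
\mathit{closed}(\mathsf{Z})\equiv{}&\forall\mathsf{X}'.\forall\mathsf{Y}'.\bigl(\mathsf{X}'\subseteq\mathsf{Z}\land(\mathsf{X}'\curlywedgedownarrow_1\mathsf{Y}'\lor\dots\lor\mathsf{X}'\curlywedgedownarrow_{\nChld_{\max}}\mathsf{Y}')\bigr)\mathbin{\to}\mathsf{Y}'\subseteq\mathsf{Z}\,.
\end{align*}
Note that $\mathsf{X}'\curlywedgedownarrow_i\mathsf{Y}'$ already forces both sets to be singletons, so no explicit $\mathit{sing}$ conjuncts are needed.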
\section{Nested \ensuremath{\mathsf U}\xspace-prefix {\upshape MSO}\xspace automata}\label{sec:automata}
In this section we give a definition of nested \ensuremath{\mathsf U}\xspace-prefix {\upshape MSO}\xspace automata, a formalism equivalent to the {\upshape MSO+}\ensuremath{\mathsf{U^{fin}}}\xspace logic.
These are compositions of \ensuremath{\mathsf U}\xspace-prefix automata and {\upshape MSO}\xspace automata, defined below.
A \emph{\ensuremath{\mathsf U}\xspace-prefix automaton} is a tuple ${\mathcal A}=(\Sigma,Q,Q_\mathsf{imp},\Delta)$,
where $\Sigma$ is a finite alphabet,
$Q$ is a finite set of states, $Q_\mathsf{imp}\subseteq Q$ is a set of \emph{important} states,
and $\Delta\subseteq Q\times\Sigma\times (Q\cup\{\top\})^*$ is a finite transition relation (we assume $\top\not\in Q$).
A \emph{run} of ${\mathcal A}$ on a $\Sigma$-labeled tree $T$ is a mapping $\rho$ from the set of nodes of $T$ to $Q\cup\{\top\}$ such that
\begin{itemize}
\item there are only finitely many nodes $u$ such that $\rho(u)\in Q$, and
\item for every node $u$ of $T$, with label $a$ and $r$ children, it holds that either $\rho(u)=\top=\rho(u1)=\dots=\rho(ur)$ or $(\rho(u),a,\rho(u1),\dots,\rho(ur))\in\Delta$.
\end{itemize}
We use \ensuremath{\mathsf U}\xspace-prefix automata as transducers, relabeling nodes of $T$:
we define ${\mathcal A}(T)$ to be the tree of the same shape as $T$, and such that its every node $u$ originally labeled by $a_u$ becomes labeled by the pair $(a_u,f_u)$,
where $f_u\colon Q\to\{0,1,2\}$ is the function that assigns to every state $q\in Q$
\begin{itemize}
\item $2$, if for every $n\in\mathbb N$ there is a run $\rho_n$ of ${\mathcal A}$ on $T{\restriction}_u$ that assigns $q$ to the root of $T{\restriction}_u$, and such that for at least $n$ nodes $w$ it holds that $\rho_n(w)\in Q_\mathsf{imp}$;
\item $1$, if the above does not hold, but there is a run of ${\mathcal A}$ on $T{\restriction}_u$ that assigns $q$ to the root of $T{\restriction}_u$;
\item $0$, if none of the above holds.
\end{itemize}
\begin{exam}\label{exa:3.1}
Consider the \ensuremath{\mathsf U}\xspace-prefix automaton ${\mathcal A}_1=(\{a\},\{q_{\exists\mathit{lf}},q_\mathsf{fin}\},\{q_\mathsf{fin}\},\Delta)$,
where $\Delta$ contains transitions
\begin{align*}
&(q_\mathsf{fin},a),(q_\mathsf{fin},a,q_\mathsf{fin}),(q_\mathsf{fin},a,q_\mathsf{fin},q_\mathsf{fin}),(q_{\exists\mathit{lf}},a),&&\mbox{and}\\
&(q_{\exists\mathit{lf}},a,q),(q_{\exists\mathit{lf}},a,q,\top),(q_{\exists\mathit{lf}},a,\top,q)&&\mbox{for }q\in\{q_{\exists\mathit{lf}},q_\mathsf{fin}\}\,.
\end{align*}
Suppose now that a $(\{a\},2)$-tree $T$ comes.
If the state $q_\mathsf{fin}$ is assigned to some node $u$ of $T$, then it has to be assigned to all descendants of $u$ as well.
Thus, there is a run of ${\mathcal A}_1$ on $T$ with state $q_\mathsf{fin}$ in the root exactly when the tree is finite.
This is because the definition of a run allows states (other than $\top$) to be assigned only to finitely many nodes of the tree.
Going further, there is a run of ${\mathcal A}_1$ on $T$ with state $q_{\exists\mathit{lf}}$ in the root exactly when there is a leaf in the tree.
The run can assign $q_{\exists\mathit{lf}}$ to all nodes on the branch leading to a selected leaf, and $\top$ to all other nodes.
Alternatively, it can assign $q_{\exists\mathit{lf}}$ to nodes on a branch leading to some node $u$,
and then $q_\mathsf{fin}$ to all descendants of $u$, assuming that the subtree starting in $u$ is finite.
Let $B_i$ be the full binary tree of height $i$, for $i\in\mathbb N$.
Let $T_1$ be the tree consisting of an infinite branch, with the tree $B_i$ attached as the second child of the $i$-th node of the branch;
that is, we take $T_i=\symb{a}[T_{i+1}][B_i]$ for $i\in\mathbb N_+$.
By definition, ${\mathcal A}_1(T_1)$ has the same shape as $T_1$.
Nodes inside all $B_i$ become relabeled to $(a,[q_{\exists\mathit{lf}}\mapsto1,q_\mathsf{fin}\mapsto1])$.
This is because every subtree of every $B_i$ is finite and has a leaf.
Moreover, the number of nodes of this subtree to which $q_\mathsf{fin}$ is assigned is bounded by the size of the subtree
(and hence we do not use the value $2$ in the new label).
Nodes of the leftmost branch of $T_1$ are, in turn, relabeled to $(a,[q_{\exists\mathit{lf}}\mapsto2,q_\mathsf{fin}\mapsto0])$.
The value $2$ in the $i$-th node of the branch means that for every $n\in\mathbb N$ there is a run $\rho_n$ on $T_i$ that assigns $q_{\exists\mathit{lf}}$ to the root of $T_i$,
and assigns $q_\mathsf{fin}$ to at least $n$ nodes.
Such a run $\rho_n$ assigns $q_{\exists\mathit{lf}}$ on a branch entering some $B_j$ with at least $n$ nodes,
and assigns $q_\mathsf{fin}$ to all nodes of this~$B_j$.
\end{exam}
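The claims of this example can be verified on a small instance by brute force; the following sketch (ours, not from the paper) enumerates all runs of ${\mathcal A}_1$ on the finite tree $B_2$, where the finiteness condition on runs is vacuous:

```python
# Sketch: enumerate all runs of A_1 on B_2 (full binary tree of height 2)
# and inspect which states are possible at the root, and how many
# important (q_fin) occurrences a run with q_lf at the root can use.
from itertools import product

# B_2: node ids with children lists; all nodes are labeled a
children = {0: [1, 2], 1: [3, 4], 2: [5, 6], 3: [], 4: [], 5: [], 6: []}

QLF, QFIN, TOP = "qlf", "qfin", "top"
delta = {(QFIN, ()), (QFIN, (QFIN,)), (QFIN, (QFIN, QFIN)), (QLF, ())}
for q in (QLF, QFIN):
    delta |= {(QLF, (q,)), (QLF, (q, TOP)), (QLF, (TOP, q))}

def is_run(rho):
    for u, cs in children.items():
        kids = tuple(rho[c] for c in cs)
        if rho[u] == TOP:                 # top nodes need top children
            if any(k != TOP for k in kids):
                return False
        elif (rho[u], kids) not in delta:
            return False
    return True

runs = [rho for rs in product((QLF, QFIN, TOP), repeat=len(children))
        if is_run(rho := dict(zip(children, rs)))]

assert any(r[0] == QFIN for r in runs)    # B_2 is finite
assert any(r[0] == QLF for r in runs)     # B_2 has a leaf
# with q_lf at the root, at most one subtree (of size 3) can be all-q_fin
assert max(sum(v == QFIN for v in r.values()) for r in runs if r[0] == QLF) == 3
```

In particular, the last assertion illustrates why the value $2$ never arises on a finite tree: the number of important states in a run is bounded by the size of the tree.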
An \emph{{\upshape MSO}\xspace automaton} is a triple ${\mathcal A}=(\Sigma,Q,(\phi_q)_{q\in Q})$, where $\Sigma$ is a finite alphabet,
$Q$ is a finite set of states,
and $(\phi_q)_{q\in Q}$ is a bundle of MSO sentences indexed by elements of $Q$.
An effect of running such an automaton ${\mathcal A}$ on a $\Sigma$-labeled tree $T$ is the tree ${\mathcal A}(T)$ that is of the same shape as $T$,
and such that its every node $u$ originally labeled by $a_u$ becomes labeled by the pair $(a_u,f_u)$,
where $f_u\colon Q\to\{0,1,2\}$ is the function that assigns to every index $q\in Q$
\begin{itemize}
\item $1$ if $\phi_q$ is true in $T{\restriction}_u$;
\item $0$ otherwise.\footnote{%
{\upshape MSO}\xspace automata never assign the value $2$;
nevertheless, for uniformity between \ensuremath{\mathsf U}\xspace-prefix automata and {\upshape MSO}\xspace automata, we assume that the set of values is $\{0,1,2\}$.}
\end{itemize}
\begin{exam}
Let ${\mathcal A}_2=(\{a\}\times\{0,1,2\}^{\{q_{\exists\mathit{lf}},q_\mathsf{fin}\}},\{q_1,q_2\},(\phi_{q_1},\phi_{q_2}))$,
where $\phi_{q_1}$ says that the second child of the root exists and is labeled by $(a,[q_{\exists\mathit{lf}}\mapsto1,q_\mathsf{fin}\mapsto1])$,
and $\phi_{q_2}$ says that there exists an infinite branch with all nodes labeled by $(a,[q_{\exists\mathit{lf}}\mapsto2,q_\mathsf{fin}\mapsto0])$.
Let us analyze ${\mathcal A}_2({\mathcal A}_1(T_1))$, for the tree $T_1$ from Example~\ref{exa:3.1}.
It has the same shape as ${\mathcal A}_1(T_1)$, and as $T_1$.
All leaves become labeled by $((a,[q_{\exists\mathit{lf}}\mapsto1,q_\mathsf{fin}\mapsto1]),[q_1\mapsto0,q_2\mapsto0])$,
other nodes of $B_i$ become labeled by $((a,[q_{\exists\mathit{lf}}\mapsto1,q_\mathsf{fin}\mapsto1]),[q_1\mapsto1,q_2\mapsto0])$,
and nodes on the leftmost branch of the tree become labeled by $((a,[q_{\exists\mathit{lf}}\mapsto2,q_\mathsf{fin}\mapsto0]),[q_1\mapsto1,q_2\mapsto1])$.
\end{exam}
By the \emph{input alphabet} of ${\mathcal A}$, where ${\mathcal A}$ is either a \ensuremath{\mathsf U}\xspace-prefix automaton $(\Sigma,Q,Q_\mathsf{imp},\Delta)$ or an {\upshape MSO}\xspace automaton $(\Sigma,Q,(\phi_q)_{q\in Q})$,
we mean the set $\Sigma^\mathsf{in}({\mathcal A})=\Sigma$.
The \emph{output alphabet} of ${\mathcal A}$ is $\Sigma^\mathsf{out}({\mathcal A})=\Sigma^\mathsf{in}({\mathcal A})\times\{0,1,2\}^Q$.
A \emph{nested \ensuremath{\mathsf U}\xspace-prefix {\upshape MSO}\xspace automaton} is a sequence ${\mathcal A}={\mathcal A}_1\vartriangleright\dots\vartriangleright{\mathcal A}_k$ (with $k\geq 1$),
where every ${\mathcal A}_i$ is either a \ensuremath{\mathsf U}\xspace-prefix automaton or an {\upshape MSO}\xspace automaton, and where $\Sigma^\mathsf{in}({\mathcal A}_{i+1})=\Sigma^\mathsf{out}({\mathcal A}_i)$ for $i\in\{1,\dots,k-1\}$.
We define ${\mathcal A}(T)$ to be ${\mathcal A}_k(\dots({\mathcal A}_1(T))\dots)$.
The input and output alphabets of ${\mathcal A}$, denoted $\Sigma^\mathsf{in}({\mathcal A})$ and $\Sigma^\mathsf{out}({\mathcal A})$, equal $\Sigma^\mathsf{in}({\mathcal A}_1)$ and $\Sigma^\mathsf{out}({\mathcal A}_k)$, respectively.
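The composition ${\mathcal A}(T)={\mathcal A}_k(\dots({\mathcal A}_1(T))\dots)$ is plain function composition of relabeling transducers, each preserving the shape of the tree; a minimal sketch (ours, with toy transducers on tuple-encoded trees):

```python
# Sketch: nested automata as composed tree-relabeling functions.
from functools import reduce

def compose(*transducers):
    """Apply A_1, then A_2, ..., then A_k, as in A(T) = A_k(...A_1(T)...)."""
    return lambda tree: reduce(lambda t, a: a(t), transducers, tree)

# toy shape-preserving transducers on trees (letter, *children)
mark_leaf = lambda t: (t[0] + ("!" if len(t) == 1 else ""), *map(mark_leaf, t[1:]))
upper = lambda t: (t[0].upper(), *map(upper, t[1:]))

A = compose(mark_leaf, upper)
assert A(("a", ("b",))) == ("A", ("B!",))
```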
The key property is that these automata can check properties expressed in {\upshape MSO+}\ensuremath{\mathsf{U^{fin}}}\xspace,
as we state in Lemma~\ref{lem:logic-to-automata}, and we prove in the remainder of this section.
\begin{lem}\label{lem:logic-to-automata}
Let $\Sigma$ be a finite alphabet, and let $\nChld_{\max}\in\mathbb N$.
For every {\upshape MSO+}\ensuremath{\mathsf{U^{fin}}}\xspace sentence $\phi$ we can construct a nested \ensuremath{\mathsf U}\xspace-prefix {\upshape MSO}\xspace automaton ${\mathcal A}_\phi$ with $\Sigma^\mathsf{in}({\mathcal A}_\phi)=\Sigma$,
and a subset $\Sigma_\mathsf{F}\subseteq\Sigma^\mathsf{out}({\mathcal A}_\phi)$ such that for every $(\Sigma,\nChld_{\max})$-tree $T$,
the root of ${\mathcal A}_\phi(T)$ is labeled by a letter in $\Sigma_\mathsf{F}$ if and only if $\phi$ holds in $T$.
\end{lem}
Recall that our aim is to evaluate $\phi$ in a tree $T$ generated by a recursion scheme ${\mathcal G}$, so the restriction to $(\Sigma,\nChld_{\max})$-trees is not harmful:
as $(\Sigma,\nChld_{\max})$ we are going to take $(\Sigma_{\mathcal G},\nChld_{\max}({\mathcal G}))$.
It is not difficult to see that in {\upshape MSO+}\ensuremath{\mathsf{U^{fin}}}\xspace we can express properties checked by nested \ensuremath{\mathsf U}\xspace-prefix {\upshape MSO}\xspace automata.
This means that the two formalisms are actually equivalent.
Although we do not need this second direction in order to prove Theorem~\ref{thm:main}, we state it in Lemma~\ref{lem:automata-to-logic} for completeness.
\begin{lem}\label{lem:automata-to-logic}
Let $\nChld_{\max}\in\mathbb N$.
For every nested \ensuremath{\mathsf U}\xspace-prefix {\upshape MSO}\xspace automaton ${\mathcal A}$, and every letter $\eta\in\Sigma^\mathsf{out}({\mathcal A})$ we can construct an {\upshape MSO+}\ensuremath{\mathsf{U^{fin}}}\xspace sentence $\phi_{{\mathcal A},\eta}$
such that for every $(\Sigma^\mathsf{in}({\mathcal A}),\nChld_{\max})$-tree $T$, the root of ${\mathcal A}(T)$ is labeled by $\eta$ if and only if $\phi_{{\mathcal A},\eta}$ holds in $T$.
\end{lem}
\begin{proof}[Proof sketch]
When ${\mathcal A}$ is a single {\upshape MSO}\xspace automaton, ${\mathcal A}=(\Sigma,Q,(\psi_q)_{q\in Q})$, it is straightforward to construct the required $\phi_{{\mathcal A},\eta}$.
Namely, when $\eta=(a,f)$, as $\phi_{{\mathcal A},\eta}$ we take
\begin{align*}
\xi_a\land\bigwedge_{q:f(q)=1}\psi_q\land\bigwedge_{q:f(q)=0}\neg\psi_q\,,
\end{align*}
where $\xi_a$ says that the root is labeled by $a$.
It is also not difficult to deal with a single \ensuremath{\mathsf U}\xspace-prefix automaton.
Indeed, it is standard to express in {\upshape MSO}\xspace that a run of an automaton exists.
The fact that there exist runs with arbitrarily many important states is expressed using the \ensuremath{\mathsf U}\xspace quantifier.
It remains to simulate composition of automata.
Suppose that ${\mathcal A}={\mathcal A}_1\vartriangleright{\mathcal A}_2$ (where ${\mathcal A}_1,{\mathcal A}_2$ may be nested again),
and that we already have sentences $\psi_{{\mathcal A}_1,a}$ corresponding to ${\mathcal A}_1$ for $a\in\Sigma^\mathsf{out}({\mathcal A}_1)=\Sigma^\mathsf{in}({\mathcal A}_2)$,
and $\psi_{{\mathcal A}_2,\eta}$ corresponding to ${\mathcal A}_2$.
Out of every sentence $\psi_{{\mathcal A}_1,a}$ we construct a formula $\widehat\psi_{{\mathcal A}_1,a}(\mathsf{Z})$ saying that $\psi_{{\mathcal A}_1,a}$
holds in all subtrees starting in elements of the set represented by $\mathsf{Z}$ (cf.~Fact~\ref{fact:relativize}).
The formula $\psi_{{\mathcal A}_2,\eta}$ is evaluated in ${\mathcal A}_1(T)$, while the formula $\phi_{{\mathcal A},\eta}$ that we are going to construct is evaluated in $T$.
Thus, whenever $\psi_{{\mathcal A}_2,\eta}$ uses $a(\mathsf{Z})$ for some letter $a\in\Sigma^\mathsf{in}({\mathcal A}_2)$ and some variable $\mathsf{Z}$,
in $\phi_{{\mathcal A},\eta}$ we replace it by $\widehat\psi_{{\mathcal A}_1,a}(\mathsf{Z})$.
\end{proof}
We now come to the proof of Lemma~\ref{lem:logic-to-automata}.
We notice that due to the nested structure, our automata are quite close to the logic.
It is clear that {\upshape MSO}\xspace automata can simulate all of {\upshape MSO}\xspace.
On the other hand, $\ensuremath{\mathsf U}\xspace$-prefix automata check whether something is unbounded, which corresponds to \ensuremath{\mathsf U}\xspace quantifiers.
As states of the $\ensuremath{\mathsf U}\xspace$-prefix automata we take \emph{phenotypes} (also known as logical types), which are defined next.
Let $\phi$ be a formula of {\upshape MSO+}\ensuremath{\mathsf{U^{fin}}}\xspace, let $T$ be a tree, and let $\nu$ be a valuation (defined at least for all free variables of $\phi$).
We define the \emph{$\phi$-phenotype} of $T$ under valuation $\nu$, denoted $\pht{\phi}{T}{\nu}$, by induction on the size of $\phi$ as follows:
\begin{itemize}
\item if $\phi$ is of the form $a(\mathsf{X})$ (for some letter $a$) or $\mathsf{X}\subseteq\mathsf{Y}$ then $\pht{\phi}{T}{\nu}$ is the logical value of $\phi$ in $T,\nu$, that is,
$\mathsf{tt}$ if $T,\nu\models\phi$ and $\mathsf{ff}$ otherwise,
\item if $\phi$ is of the form $\mathsf{X}\curlywedgedownarrow_i \mathsf{Y}$, then $\pht{\phi}{T}{\nu}$ equals
\begin{itemize}
\item $\mathsf{tt}$ if $T,\nu\models\phi$,
\item $\mathsf{empty}$ if $\nu(\mathsf{X})=\nu(\mathsf{Y})=\emptyset$,
\item $\mathsf{root}$ if $\nu(\mathsf{X})=\emptyset$ and $\nu(\mathsf{Y})=\{\epsilon\}$, and
\item $\mathsf{ff}$ otherwise,
\end{itemize}
\item if $\phi\equiv(\psi_1\land\psi_2)$, then $\pht{\phi}{T}{\nu}=(\pht{\psi_1}{T}{\nu},\pht{\psi_2}{T}{\nu})$,
\item if $\phi\equiv(\neg\psi)$, then $\pht{\phi}{T}{\nu}=\pht{\psi}{T}{\nu}$,
\item if $\phi\equiv\exists\mathsf{X}.\psi$, then
\begin{align*}
\pht{\phi}{T}{\nu}=\{\sigma\mid\exists X.\pht{\psi}{T}{\nu[\mathsf{X}\mapsto X]}=\sigma\}\,,
\end{align*}
\item if $\phi\equiv{\exists_{\mathsf{fin}}} \mathsf{X}.\psi$, then
\begin{align*}
\pht{\phi}{T}{\nu}=\{\sigma\mid\exists X.\pht{\psi}{T}{\nu[\mathsf{X}\mapsto X]}=\sigma\land |X|<\infty\}\,,\tag*{\mbox{and}}
\end{align*}
\item if $\phi\equiv\ensuremath{\mathsf U}\xspace \mathsf{X}.\psi$, then
\begin{align*}
\pht{\phi}{T}{\nu}=(&\{\sigma\mid\exists X.\pht{\psi}{T}{\nu[\mathsf{X}\mapsto X]}=\sigma\land |X|<\infty\},\\
&\{\sigma\mid\forall n\in\mathbb N.\exists X.\pht{\psi}{T}{\nu[\mathsf{X}\mapsto X]}=\sigma\land n\leq|X|<\infty\})\,,
\end{align*}
\end{itemize}
where $X$ ranges over sets of nodes of $T$.
For each $\phi$, let $\Pht\phi$ denote the set of all potential $\phi$-phenotypes.
Namely, $\Pht\phi=\{\mathsf{tt},\mathsf{ff}\}$ in the first case,
$\Pht\phi=\{\mathsf{tt}, \mathsf{empty}, \mathsf{root}, \mathsf{ff}\}$ in the second case,
$\Pht\phi=\Pht{\psi_1}\times\Pht{\psi_2}$ in the third case,
$\Pht\phi=\Pht{\psi}$ in the fourth case,
$\Pht\phi={\mathcal P}(\Pht\psi)$ in the fifth and sixth case, and
$\Pht\phi=({\mathcal P}(\Pht\psi))^2$ in the last case.
We immediately see two facts.
First, $\Pht\phi$ is finite for every $\phi$.
Second, whether $\phi$ holds in $T,\nu$ is determined by $\pht{\phi}{T}{\nu}$.
This means that there is a function $\mathit{tv}_\phi\colon\Pht\phi\to\{\mathsf{tt},\mathsf{ff}\}$ such that $\mathit{tv}_\phi(\pht{\phi}{T}{\nu})=\mathsf{tt}$ if and only if $T,\nu\models
\phi$.\label{page:tv}
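The definition above can be prototyped directly on finite trees. The following sketch (our own encoding, covering only the fragment $a(\mathsf{X})$, $\land$, $\neg$, and $\exists$; since the trees are finite, $\exists$ simply ranges over all subsets of nodes) illustrates both observations: phenotypes are finite data computed by induction on the formula, and the truth value is recovered from the phenotype alone by a function $\mathit{tv}$.

```python
from itertools import chain, combinations

# Finite trees encoded as (label, [children]); nodes are identified by paths.
def nodes(t, path=()):
    _, children = t
    yield path
    for i, c in enumerate(children):
        yield from nodes(c, path + (i,))

def label_at(t, path):
    label, children = t
    for i in path:
        label, children = children[i]
    return label

def powerset(xs):
    xs = list(xs)
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

# Formulas: ('atom', a, X) for a(X), ('and', f, g), ('not', f), ('ex', X, f).
def phenotype(phi, t, nu):
    if phi[0] == 'atom':               # a(X): all nodes in X carry label a
        _, a, X = phi
        return all(label_at(t, u) == a for u in nu.get(X, ()))
    if phi[0] == 'and':
        return (phenotype(phi[1], t, nu), phenotype(phi[2], t, nu))
    if phi[0] == 'not':                # phenotype of !psi equals that of psi
        return phenotype(phi[1], t, nu)
    if phi[0] == 'ex':                 # phenotypes reachable by some choice of X
        _, X, psi = phi
        return frozenset(phenotype(psi, t, {**nu, X: set(S)})
                         for S in powerset(nodes(t)))

def tv(phi, pht):
    """Truth value of phi is a function of its phenotype alone."""
    if phi[0] == 'atom':
        return pht
    if phi[0] == 'and':
        return tv(phi[1], pht[0]) and tv(phi[2], pht[1])
    if phi[0] == 'not':
        return not tv(phi[1], pht)
    if phi[0] == 'ex':
        return any(tv(phi[2], s) for s in pht)
```

Note how negation leaves the phenotype unchanged and only flips the derived truth value, exactly as in the case $\phi\equiv(\neg\psi)$ above.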
Next, we observe that phenotypes behave in a compositional way, as formalized below.
Here for a valuation $\nu$ and a node $u$, by $\nu{\restriction}_u$ we mean the valuation that restricts $\nu$ to the subtree starting at $u$,
that is, maps every variable $\mathsf{X}\in\mathrm{dom}(\nu)$ to $\{w\mid uw\in\nu(\mathsf{X})\}$.
\begin{lemC}[\cite{wmso+u-kaiser,wmso-model}]\label{lem:compositionality}
For every letter $a$, every $r\in\mathbb N$, and every {\upshape MSO+}\ensuremath{\mathsf{U^{fin}}}\xspace formula $\phi$,
one can compute a function $\mathit{Comp}_{a,r,\phi}\colon\Pp^\mathsf{fin}({\mathcal V})\times(\Pht\phi)^r\to\Pht\phi$ such that
for every tree $T$ whose root has label $a$ and $r$ children, and for every valuation $\nu$,
\begin{align*}
\pht{\phi}{T}{\nu}=\mathit{Comp}_{a,r,\phi}(\{\mathsf{X}\in\mathrm{dom}(\nu)\mid\epsilon\in\nu(\mathsf{X})\},\pht{\phi}{T{\restriction}_1}{\nu{\restriction}_1},\dots,\pht{\phi}{T{\restriction}_r}{\nu{\restriction}_r})\,.
\end{align*}
\end{lemC}
\begin{proof}
We proceed by induction on the size of $\phi$.
When $\phi$ is of the form $b(\mathsf{X})$ or $\mathsf{X}\subseteq \mathsf{Y}$, then we see that $\phi$ holds in $T,\nu$ if and only if it holds in every subtree $T{\restriction}_i,\nu{\restriction}_i$ and in the root of $T$.
Thus, for $\phi\equiv b(\mathsf{X})$ as $\mathit{Comp}_{a,r,\phi}(R,\tau_1,\dots,\tau_r)$ we take $\mathsf{tt}$ when $\tau_i=\mathsf{tt}$ for all $i\in\{1,\dots,r\}$ and either $a=b$ or $\mathsf{X}\not\in R$.
For $\phi\equiv(\mathsf{X}\subseteq \mathsf{Y})$ the last part of the condition is replaced by ``if $\mathsf{X}\in R$ then $\mathsf{Y}\in R$''.
Next, suppose that $\phi\equiv(\mathsf{X}\curlywedgedownarrow_k \mathsf{Y})$.
Then as $\mathit{Comp}_{a,r,\phi}(R,\tau_1,\dots,\tau_r)$ we take
\begin{itemize}
\item $\mathsf{tt}$ if $\tau_j=\mathsf{tt}$ for some $j\in\{1,\dots,r\}$, and $\tau_i=\mathsf{empty}$ for all $i\in\{1,\dots,r\}\setminus\{j\}$, and $\mathsf{X}\not\in R$, and $\mathsf{Y}\not\in R$,
\item $\mathsf{tt}$ also if $\tau_k=\mathsf{root}$, and $\tau_i=\mathsf{empty}$ for all $i\in\{1,\dots,r\}\setminus\{k\}$, and $\mathsf{X}\in R$, and $\mathsf{Y}\not\in R$,
\item $\mathsf{empty}$ if $\tau_i=\mathsf{empty}$ for all $i\in\{1,\dots,r\}$, and $\mathsf{X}\not\in R$, and $\mathsf{Y}\not\in R$,
\item $\mathsf{root}$ if $\tau_i=\mathsf{empty}$ for all $i\in\{1,\dots,r\}$, and $\mathsf{X}\not\in R$, and $\mathsf{Y}\in R$, and
\item $\mathsf{ff}$ otherwise.
\end{itemize}
By comparing this definition with the definition of the phenotype we immediately see that the claim holds.
When $\phi\equiv(\neg\psi)$, we simply take $\mathit{Comp}_{a,r,\phi}=\mathit{Comp}_{a,r,\psi}$, and when $\phi\equiv(\psi_1\land\psi_2)$, as $\mathit{Comp}_{a,r,\phi}(R,(\tau_1^1,\tau_1^2),\dots,(\tau^1_r,\tau^2_r))$ we take the pair of $\mathit{Comp}_{a,r,\psi_i}(R,\tau_1^i,\dots,\tau^i_r)$ for $i\in\{1,2\}$.
Suppose now that $\phi\equiv\exists \mathsf{X}.\psi$ or $\phi\equiv{\exists_{\mathsf{fin}}} \mathsf{X}.\psi$.
As $\mathit{Comp}_{a,r,\phi}(R,\tau_1,\dots,\tau_r)$ we take
\begin{align*}
&\{\mathit{Comp}_{a,r,\psi}(R\cup\{\mathsf{X}\},\sigma_1,\dots,\sigma_r),\mathit{Comp}_{a,r,\psi}(R\setminus\{\mathsf{X}\},\sigma_1,\dots,\sigma_r)\\
&\hspace{20em}\mid(\sigma_1,\dots,\sigma_r)\in \tau_1\times\dots\times\tau_r\}\,.
\end{align*}
The two possibilities, $R\cup\{\mathsf{X}\}$ and $R\setminus\{\mathsf{X}\}$, correspond to the fact that when quantifying over $\mathsf{X}$,
the root of $T$ may be either taken to the set represented by $\mathsf{X}$ or not.
Notice that the cases of $\exists\mathsf{X}$ and ${\exists_{\mathsf{fin}}}\mathsf{X}$ are handled in the same way:
for a local behavior near the root it does not matter whether we quantify over all sets or only over finite sets.
Finally, suppose that $\phi\equiv\ensuremath{\mathsf U}\xspace \mathsf{X}.\psi$.
The arguments of $\mathit{Comp}_{a,r,\phi}$ are pairs $(\tau_1,\rho_1),\dots,\allowbreak(\tau_r,\rho_r)$.
Let $A$ be the set of all tuples $(\sigma_1,\dots,\sigma_r)\in\tau_1\times\dots\times\tau_r$,
and let $B$ be the set of tuples $(\sigma_1,\dots,\sigma_r)$ such that $\sigma_j\in\rho_j$ for some $j\in\{1,\dots,r\}$ and $\sigma_i\in\tau_i$ for all $i\in\{1,\dots,r\}\setminus\{j\}$.
As $\mathit{Comp}_{a,r,\phi}(R,(\tau_1,\rho_1),\dots,(\tau_r,\rho_r))$ we take
\begin{align*}
(&\{\mathit{Comp}_{a,r,\psi}(R\cup\{\mathsf{X}\},\sigma_1,\dots,\sigma_r),\mathit{Comp}_{a,r,\psi}(R\setminus\{\mathsf{X}\},\sigma_1,\dots,\sigma_r)\mid(\sigma_1,\dots,\sigma_r)\in A\},\\
&\{\mathit{Comp}_{a,r,\psi}(R\cup\{\mathsf{X}\},\sigma_1,\dots,\sigma_r),\mathit{Comp}_{a,r,\psi}(R\setminus\{\mathsf{X}\},\sigma_1,\dots,\sigma_r)\mid(\sigma_1,\dots,\sigma_r)\in B\})\,.
\end{align*}
The first coordinate is defined as for the existential quantifiers.
The second coordinate is computed correctly due to the pigeonhole principle: if for every $n$ we have a set $X_n$ of cardinality at least $n$ (satisfying some property),
then we can choose an infinite subsequence of these sets such that either the root belongs to all of them or to none of them,
and such that for some fixed $j\in\{1,\dots,r\}$ the sets contain unboundedly many nodes in the subtree of the $j$-th child.
\end{proof}
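The set comprehensions in the quantifier cases are mechanical enough to prototype. The sketch below (with hypothetical helper names of our own; \texttt{comp\_psi} stands for $\mathit{Comp}_{a,r,\psi}$ with the children phenotypes passed as a tuple) mirrors the definitions of $A$ and $B$ from the proof:

```python
from itertools import product

def comp_exists(comp_psi, R, X, taus):
    """Comp_{a,r,phi} for phi = EX X.psi: range over all tuples of children
    phenotypes and over whether the root joins the set X."""
    return frozenset(comp_psi(Rp, sigmas)
                     for sigmas in product(*taus)
                     for Rp in (R | {X}, R - {X}))

def comp_U(comp_psi, R, X, pairs):
    """Comp for phi = U X.psi; each child contributes a pair (tau_i, rho_i)."""
    taus = [tau for tau, _ in pairs]
    A = set(product(*taus))              # all tuples, as for EX
    B = set()                            # exactly one coordinate drawn from rho_j
    for j in range(len(pairs)):
        opts = [pairs[i][1] if i == j else pairs[i][0] for i in range(len(pairs))]
        B |= set(product(*opts))
    mk = lambda tuples: frozenset(comp_psi(Rp, s)
                                  for s in tuples for Rp in (R | {X}, R - {X}))
    return (mk(A), mk(B))
```

Plugging in any concrete \texttt{comp\_psi} (for example, a stub recording whether the root was added to $X$ and aggregating the children phenotypes) lets one check that the first coordinate coincides with the existential case, while the second draws one coordinate from the unbounded part.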
In order to prove Lemma~\ref{lem:logic-to-automata} by induction on the structure of the sentence $\phi$, we need to generalize it a bit; this is done in Lemma~\ref{lem:logic-to-automata-aux}.
In particular, we need to use phenotypes, instead of the truth value of the sentence (because phenotypes are compositional, unlike truth values).
We also need to allow formulae with free variables, not just sentences, as well as arbitrary valuations.
A special role is played by the valuation $\nu_\emptyset$ that maps every variable to the empty set; for this valuation we have a stronger version of the lemma.
\begin{lem}\label{lem:logic-to-automata-aux}
Let $\Sigma$ be a finite alphabet, and let $\nChld_{\max}\in\mathbb N$.
Then for every {\upshape MSO+}\ensuremath{\mathsf{U^{fin}}}\xspace formula $\phi$ we can construct
\begin{enumerate}
\item a nested \ensuremath{\mathsf U}\xspace-prefix {\upshape MSO}\xspace automaton ${\mathcal A}_\phi$ with $\Sigma^\mathsf{in}({\mathcal A}_\phi)=\Sigma$, and {\upshape MSO}\xspace formulae $\xi_{\phi,\tau}$ for all $\tau\in\Pht{\phi}$,
such that for every $(\Sigma,\nChld_{\max})$-tree $T$, every valuation $\nu$ in $T$, and every $\tau\in\Pht{\phi}$
it holds ${\mathcal A}_\phi(T),\nu\models\xi_{\phi,\tau}$ if and only if $\pht{\phi}{T}{\nu}=\tau$, and
\item a nested \ensuremath{\mathsf U}\xspace-prefix {\upshape MSO}\xspace automaton ${\mathcal B}_\phi$ with $\Sigma^\mathsf{in}({\mathcal B}_\phi)=\Sigma$, and a function $f_\phi\colon\allowbreak\Sigma^\mathsf{out}({\mathcal B}_\phi)\to\Pht{\phi}$,
such that for every $(\Sigma,\nChld_{\max})$-tree $T$ the root of ${\mathcal B}_\phi(T)$ is labeled by a letter $\eta$ such that $f_\phi(\eta)=\pht{\phi}{T}{\nu_\emptyset}$.
\end{enumerate}
\end{lem}
\begin{proof}
Induction on the size of $\phi$.
We start by observing how Item (2) follows from Item (1).
Item (1) gives us an automaton ${\mathcal A}_\phi$ and {\upshape MSO}\xspace formulae $\xi_{\phi,\tau}$ for all $\tau\in\Pht{\phi}$.
We change these formulae into sentences $\xi_{\phi,\tau}'$ by hard-coding the assumption that all their free variables are mapped to the empty set.
More precisely, for every free variable $\mathsf{X}$ we change subformulae of $\xi_{\phi,\tau}$ of the form $a(\mathsf{X})$ and $\mathsf{X}\subseteq\mathsf{Y}$ into $\mathsf{tt}$,
subformulae of the form $\mathsf{X}\curlywedgedownarrow_i \mathsf{Y}$ and $\mathsf{Y}\curlywedgedownarrow_i \mathsf{X}$ into $\mathsf{ff}$,
and subformulae of the form $\mathsf{Y}\subseteq\mathsf{X}$, where $\mathsf{Y}$ is a bound variable, into formulae checking that the set represented by $\mathsf{Y}$ is empty.
Then, we take ${\mathcal B}_\phi={\mathcal A}_\phi\vartriangleright{\mathcal C}$ for ${\mathcal C}=(\Sigma^\mathsf{out}({\mathcal A}_\phi),\Pht{\phi},(\xi_{\phi,\tau}')_{\tau\in\Pht{\phi}})$.
If $\eta=(a,h)$ for a function $h$ mapping exactly one phenotype $\tau$ to $1$, we define $f_\phi(\eta)$ to be this phenotype $\tau$,
and for $\eta=(a,h)$ with $|h^{-1}(1)|\neq1$ we define $f_\phi(\eta)$ arbitrarily.
Consider now a $(\Sigma,\nChld_{\max})$-tree $T$.
By Item (1), for $\tau=\pht{\phi}{T}{\nu_\emptyset}$ we have ${\mathcal A}_\phi(T),\nu_\emptyset\models\xi_{\phi,\tau}$ (equivalently, ${\mathcal A}_\phi(T)\models\xi_{\phi,\tau}'$)
and for $\tau\in\Pht{\phi}\setminus\{\pht{\phi}{T}{\nu_\emptyset}\}$ we have ${\mathcal A}_\phi(T),\nu_\emptyset\not\models\xi_{\phi,\tau}$ (equivalently, ${\mathcal A}_\phi(T)\not\models\xi_{\phi,\tau}'$).
It follows that the root of ${\mathcal B}_\phi(T)$ is labeled by $\eta=(a,h)$ where $a$ is the label of the root in ${\mathcal A}_\phi(T)$, and $h(\pht{\phi}{T}{\nu_\emptyset})=1$,
and $h(\tau)=0$ for $\tau\in\Pht{\phi}\setminus\{\pht{\phi}{T}{\nu_\emptyset}\}$.
Then $f_\phi(\eta)=\pht{\phi}{T}{\nu_\emptyset}$, as required.
We now come to the proof of Item (1), where we proceed by case distinction.
When $\phi$ is an atomic formula, that is, equals $a(\mathsf{X})$, $\mathsf{X}\subseteq\mathsf{Y}$, or $\mathsf{X}\curlywedgedownarrow_i\mathsf{Y}$, then the automaton ${\mathcal A}_\phi$ is not needed:
as ${\mathcal A}_\phi$ we can take the {\upshape MSO}\xspace automaton with an empty set of states (and input alphabet $\Sigma$).
For such an automaton we have that ${\mathcal A}_\phi(T)=T$ for every $(\Sigma,\nChld_{\max})$-tree $T$.
As $\xi_{\phi,\mathsf{tt}}$ we take $\phi$.
When $\phi$ equals $a(\mathsf{X})$ or $\mathsf{X}\subseteq\mathsf{Y}$, the only phenotypes are $\mathsf{tt}$ and $\mathsf{ff}$, and thus we take $\xi_{\phi,\mathsf{ff}}\equiv\neg\phi$.
In the case of $\phi\equiv\mathsf{X}\curlywedgedownarrow_i\mathsf{Y}$, the situation when the formula is false is divided into three phenotypes: $\mathsf{empty}$, $\mathsf{root}$, and $\mathsf{ff}$.
Nevertheless, it is easy to express in {\upshape MSO}\xspace that we have a particular phenotype, following the definition of $\pht{\phi}{T}{\nu}$.
Suppose now that $\phi$ is of the form $\psi_1\land\psi_2$.
From the induction assumption, Item (1), we have two automata, ${\mathcal A}_{\psi_1}$ and ${\mathcal A}_{\psi_2}$,
as well as formulae $\xi_{\psi_1,\tau_1}$ for all $\tau_1\in\Pht{\psi_1}$ and $\xi_{\psi_2,\tau_2}$ for all $\tau_2\in\Pht{\psi_2}$.
We combine the two automata into a single automaton ${\mathcal A}_\phi$ with $\Sigma^\mathsf{in}({\mathcal A}_\phi)=\Sigma$.
More precisely, we take ${\mathcal A}_\phi={\mathcal A}_{\psi_1}\vartriangleright{\mathcal A}_{\psi_2}'$, where ${\mathcal A}_{\psi_2}'$ works exactly like ${\mathcal A}_{\psi_2}$,
but instead of reading a tree $T$ over alphabet $\Sigma$, it reads the tree ${\mathcal A}_{\psi_1}(T)$ and ignores the part of its labels added by ${\mathcal A}_{\psi_1}$.
We also amend $\xi_{\psi_1,\tau_1}$ and $\xi_{\psi_2,\tau_2}$ so that they can read the output of ${\mathcal A}_\phi$:
formulae $\xi_{\psi_1,\tau_1}'$ work like $\xi_{\psi_1,\tau_1}$ but ignore the parts of labels added by ${\mathcal A}_{\psi_2}'$,
and formulae $\xi_{\psi_2,\tau_2}'$ work like $\xi_{\psi_2,\tau_2}$ but ignore the parts of labels added by ${\mathcal A}_{\psi_1}$.
Having the above, for every $\tau=(\tau_1,\tau_2)\in\Pht{\phi}$ we take $\xi_{\phi,\tau}\equiv\xi_{\psi_1,\tau_1}'\land\xi_{\psi_2,\tau_2}'$.
Because a tree has $\phi$-phenotype $\tau$ exactly when it has $\psi_1$-phenotype $\tau_1$ and simultaneously $\psi_2$-phenotype $\tau_2$,
the claim of Item (1) follows.
When $\phi$ is of the form $\neg\psi$, or $\exists\mathsf{X}.\psi$, or ${\exists_{\mathsf{fin}}}\mathsf{X}.\psi$, as ${\mathcal A}_\phi$ we take ${\mathcal A}_\psi$ existing by the induction assumption, Item (1).
The induction assumption gives us also formulae $\xi_{\psi,\tau}$ for all $\tau\in\Pht\psi$.
If $\phi\equiv\neg\psi$, we take $\xi_{\phi,\tau}\equiv\xi_{\psi,\tau}$.
If $\phi\equiv\exists\mathsf{X}.\psi$, we take
\begin{align*}
\xi_{\phi,\tau}\equiv\bigwedge_{\sigma\in\tau}(\exists\mathsf{X}.\xi_{\psi,\sigma})\land\bigwedge_{\sigma\in\Pht{\psi}\setminus\tau}(\neg\exists\mathsf{X}.\xi_{\psi,\sigma})\,.
\end{align*}
If $\phi\equiv{\exists_{\mathsf{fin}}}\mathsf{X}.\psi$, we take the same formula, but with ${\exists_{\mathsf{fin}}}$ quantifiers instead of $\exists$.
Finally, suppose that $\phi\equiv\ensuremath{\mathsf U}\xspace \mathsf{X}.\psi$.
We cannot proceed like in the previous cases, because the \ensuremath{\mathsf U}\xspace quantifier cannot be expressed in {\upshape MSO}\xspace;
we rather need to append a new \ensuremath{\mathsf U}\xspace-prefix automaton at the end of the constructed automaton.
In this case we first prove Item (2), and then we deduce Item (1) out of Item (2).
By Item (2) of the induction assumption we have an automaton ${\mathcal B}_\psi$ and a function $f_\psi\colon\Sigma^\mathsf{out}({\mathcal B}_\psi)\to\Pht{\psi}$ such that
for every node $u$ of $T$, the root of ${\mathcal B}_\psi(T{\restriction}_u)$ is labeled by a letter $\eta_u$ such that $f_\psi(\eta_u)=\pht{\psi}{T{\restriction}_u}{\nu_\emptyset}$.
Moreover, there is a function $g\colon\Sigma^\mathsf{out}({\mathcal B}_\psi)\to\Sigma$ such that $g(\eta_u)$ is the original label of $u$ in $T$
(such a function exists, because the labels from $T$ remain as a part of the labels in ${\mathcal B}_\psi(T)$).
Recall that ${\mathcal B}_\psi(T)$ has the same shape as $T$, and actually $({\mathcal B}_\psi(T)){\restriction}_u={\mathcal B}_\psi(T{\restriction}_u)$ for every node $u$.
We construct a new layer ${\mathcal C}$, which calculates $\phi$-phenotypes based on $\psi$-phenotypes, and we take ${\mathcal B}_\phi={\mathcal B}_\psi\vartriangleright{\mathcal C}$.
As the state set of ${\mathcal C}$ we take $Q=\{0,1\}\times\Pht{\psi}$; states from $\{1\}\times\Pht{\psi}$ are considered as important.
Transitions are determined by the $\mathit{Comp}$ function from Lemma~\ref{lem:compositionality}.
More precisely, for every $r\leq\nChld_{\max}$, every $\eta\in\Sigma^\mathsf{out}({\mathcal B}_\psi)$, and all $((i_1,\sigma_1),\dots,(i_r,\sigma_r))\in Q^r$ we have transitions
\begin{align*}
&((0,\mathit{Comp}_{g(\eta),r,\psi}(\emptyset,\sigma_1,\dots,\sigma_r)),\eta,(i_1,\sigma_1),\dots,(i_r,\sigma_r))\,,&\mbox{and}\\
&((1,\mathit{Comp}_{g(\eta),r,\psi}(\{\mathsf{X}\},\sigma_1,\dots,\sigma_r)),\eta,(i_1,\sigma_1),\dots,(i_r,\sigma_r))\,.
\end{align*}
Moreover, we have transitions that read the $\psi$-phenotype from the label:
\begin{align*}
&((0,f_\psi(\eta)),\eta,\underbrace{\top,\dots,\top}_r)&\mbox{for }r\leq\nChld_{\max}.
\end{align*}
We notice that there is a direct correspondence between runs of ${\mathcal C}$ and choices of a set of nodes $X$ to which the variable $\mathsf{X}$ is mapped.
The first coordinate of the state is set to $1$ in nodes chosen to belong to the set $X$.
The second coordinate contains the $\psi$-phenotype under the valuation mapping $\mathsf{X}$ to $X$ and every other variable to the empty set.
In some nodes below the chosen set $X$ we use transitions of the second kind, reading the $\psi$-phenotype from the label;
it does not matter in which nodes this is done, since a correct $\psi$-phenotype is written everywhere.
The fact that we quantify only over finite sets $X$ corresponds to the fact that the run of ${\mathcal C}$ can assign non-$\top$ states only to a finite prefix of the tree.
Moreover, the cardinality of $X$ is reflected by the number of important states assigned by a run.
It follows that for every $\sigma\in\Pht{\psi}$,
\begin{itemize}
\item there exists a finite set $X$ of nodes of $T$ such that $\pht{\psi}{T}{\nu_\emptyset[\mathsf{X}\mapsto X]}=\sigma$ if and only if for some $i\in\{0,1\}$ there is a run of ${\mathcal C}$ on ${\mathcal B}_\psi(T)$ that assigns $(i,\sigma)$ to the root, and
\item for every $n\in\mathbb N$ there exists a finite set $X_n$ of nodes of $T$ such that $\pht{\psi}{T}{\nu_\emptyset[\mathsf{X}\mapsto X_n]}=\sigma$ and $|X_n|\geq n$
if and only if for some $i\in\{0,1\}$ and for every $n\in\mathbb N$ there is a run $\rho_n$ of ${\mathcal C}$ on ${\mathcal B}_\psi(T)$ that assigns $(i,\sigma)$ to the root,
and such that $\rho_n$ assigns an important state to at least $n$ nodes.
\end{itemize}
Thus, looking at the root's label in ${\mathcal B}_\phi(T)$ we can determine $\pht{\phi}{T}{\nu_\emptyset}$.
This finishes the proof of Item (2) in the case of the \ensuremath{\mathsf U}\xspace quantifier.
Next, still supposing that $\phi\equiv\ensuremath{\mathsf U}\xspace \mathsf{X}.\psi$, we prove Item (1) using Item (2), which is already proved.
It is essential that, by the definition of the {\upshape MSO+}\ensuremath{\mathsf{U^{fin}}}\xspace logic, all free variables of $\phi$ come from ${\mathcal V}^\mathsf{fin}$, that is, represent finite sets.
This means that only nodes from a finite prefix of a considered tree can belong to $\nu(\mathsf{Y})$ for $\mathsf{Y}$ free in $\phi$ (since clearly the number of free variables is finite).
Outside of this finite prefix we can read the $\phi$-phenotype from the output of ${\mathcal B}_\phi$ (because the valuation is empty there),
and in the finite prefix we can compute them using the $\mathit{Comp}$ function.
More precisely, as ${\mathcal A}_\phi$ we take ${\mathcal B}_\phi$, coming from Item (2).
Item (2) gives us a function $f_\phi\colon\Sigma^\mathsf{out}({\mathcal A}_\phi)\to\Pht\phi$ reading the $\phi$-phenotype of a $(\Sigma,\nChld_{\max})$-tree $T$ out of the root label of ${\mathcal A}_\phi(T)$;
we also have a function $g\colon\Sigma^\mathsf{out}({\mathcal A}_\phi)\to\Sigma$ that extracts original labels out of labels in ${\mathcal A}_\phi(T)$.
For every $\tau\in\Pht\phi$ we define the formula $\xi_{\phi,\tau}$ as follows.
It starts with a sequence of $|\Pht{\phi}|$ existential quantifiers,
quantifying over variables $\mathsf{X}_\rho$ for all $\rho\in\Pht{\phi}$.
The intention is that, in a tree $T$, every $\mathsf{X}_\rho$ represents the set of nodes $u$ such that $\pht{\phi}{T{\restriction}_u}{\nu{\restriction}_u}=\rho$.
Inside the quantification we say that
\begin{itemize}
\item the sets represented by these variables are disjoint, and every node belongs to one of them,
\item the root belongs to $\mathsf{X}_\tau$,
\item if a node with label $\eta\in\Sigma^\mathsf{out}({\mathcal A}_\phi)$ belongs to $\mathsf{X}_\rho$, and its children belong to $\mathsf{X}_{\rho_1},\dots,\mathsf{X}_{\rho_r}$, respectively
(where $r\leq\nChld_{\max}$), and $R$ is the set of free variables $\mathsf{Y}$ of $\phi$ for which the node belongs to $\nu(\mathsf{Y})$, then
$\rho=\mathit{Comp}_{g(\eta),r,\phi}(R,\rho_1,\dots,\rho_r)$
(there are only finitely many possibilities for $\rho,\rho_1,\dots,\rho_r\in\Pht\phi$, for $r\in\{0,\dots,\nChld_{\max}\}$, for $\eta\in\Sigma^\mathsf{out}({\mathcal A}_\phi)$,
and finitely many free variables of $\phi$, thus the constructed formula can be just a big disjunction listing all possible cases), and
\item if a node with label $\eta\in\Sigma^\mathsf{out}({\mathcal A}_\phi)$ belongs to $\mathsf{X}_\rho$ and none of $\nu(\mathsf{Y})$ for $\mathsf{Y}$ free in $\phi$ contains this node or some of its descendants,
then $\rho=f_\phi(\eta)$.
\end{itemize}
Consider now a $(\Sigma,\nChld_{\max})$-tree $T$, and a valuation $\nu$ in this tree.
If $\pht{\phi}{T}{\nu}=\tau$, then we can show that $\xi_{\phi,\tau}$ is true
by taking for $\mathsf{X}_\rho$ the set of nodes $u$ for which $\pht{\phi}{T{\restriction}_u}{\nu{\restriction}_u}=\rho$ (for every $\rho\in\Pht{\phi}$).
Conversely, suppose that $\xi_{\phi,\tau}$ is true.
Then we can prove that a node $u$ can belong to the set represented by $\mathsf{X}_\rho$ (for $\rho\in\Pht{\phi}$) only when $\pht{\phi}{T{\restriction}_u}{\nu{\restriction}_u}=\rho$.
The proof is by a straightforward induction on the number of descendants of $u$ that belong to $\nu(\mathsf{Y})$ for some $\mathsf{Y}$ free in $\phi$;
we use Lemma~\ref{lem:compositionality} for the induction step.
\end{proof}
Now the proof of Lemma~\ref{lem:logic-to-automata} follows easily.
Indeed, when $\phi$ is a sentence (has no free variables), $\pht{\phi}{T}{\nu_\emptyset}$ determines whether $\phi$ holds in $T$.
Thus, it is enough to take the automaton ${\mathcal B}_\phi$ constructed in Lemma~\ref{lem:logic-to-automata-aux},
and replace the function $f_\phi$ by the set $\Sigma_\mathsf{F}=\{\eta\in\Sigma^\mathsf{out}({\mathcal A})\mid\mathit{tv}_\phi(f_\phi(\eta))\}$
(where $\mathit{tv}_\phi$, defined on page~\pageref{page:tv}, given a $\phi$-phenotype says whether $\phi$ holds in trees having this $\phi$-phenotype).
We remark that the {\upshape WMSO+}\unbound logic (which is weaker than {\upshape MSO+}\ensuremath{\mathsf{U^{fin}}}\xspace) corresponds to nested \ensuremath{\mathsf U}\xspace-prefix automata, composed of \ensuremath{\mathsf U}\xspace-prefix automata only (i.e., not using {\upshape MSO}\xspace automata).
Indeed, {\upshape MSO}\xspace automata are needed only to deal with infinite sets;
when all quantified sets are finite, we can simulate all the constructs using \ensuremath{\mathsf U}\xspace-prefix automata~\cite{wmsou-schemes}.
We also remark that Bojańczyk and Toruńczyk~\cite{wmso+u-trees} introduce another model of automata equivalent to {\upshape WMSO+}\unbound: nested limsup automata.
A common property of these two models is that both of them are nested; the components of nested limsup automata are of a different form, though.
\section{Proof of the main theorem}\label{sec:main-thm}
In this section we prove our main theorem---Theorem~\ref{thm:main}.
To this end, we have to recall three properties of recursion schemes: logical reflection (Fact~\ref{fact:mso-reflection}), SUP reflection (Fact~\ref{fact:sup-reflection}),
and closure under composition with finite tree transducers (Fact~\ref{fact:transducer}).
The property of logical reflection for schemes comes from Broadbent, Carayol, Ong, and Serre~\cite{reflection}.
They state it for sentences of $\mu$-calculus, but $\mu$-calculus and {\upshape MSO}\xspace are equivalent over infinite trees~\cite{EmersonJutla}.
\begin{factfact}[logical reflection {\cite[Theorem 2(ii)]{reflection}}]\label{fact:mso-reflection}
For every {\upshape MSO}\xspace sentence $\phi$ and every scheme ${\mathcal G}$ generating a tree $T$
one can construct a scheme ${\mathcal G}_\phi$ that generates a tree of the same shape as $T$, and such that its every node $u$ is labeled by a pair $(a_u,b_u)$,
where $a_u$ is the label of $u$ in $T$, and $b_u$ is $\mathsf{tt}$ if $\phi$ is satisfied in $T{\restriction}_u$ and $\mathsf{ff}$ otherwise.
\qed\end{factfact}
The SUP reflection is the heart of our proof.
In order to talk about this property, we need a few more definitions.
By $\#_a(U)$ we denote the number of $a$-labeled nodes in a (finite) tree $U$.
For a set of (finite) trees ${\mathcal L}$ and a set of symbols $A$, we define a predicate $\mathsf{SUP}_A({\mathcal L})$, which holds if for every $n\in\mathbb N$ there is some $U_n\in {\mathcal L}$ such that for all $a\in A$ it holds that $\#_a(U_n)\geq n$.
Originally, in the simultaneous unboundedness problem we consider nondeterministic higher-order recursion schemes, which instead of generating a single infinite tree, recognize a set of finite trees.
We use here an equivalent formulation, in which the set of finite trees is encoded in a single infinite tree.
To this end, we use a special letter $\nd$, denoting a nondeterministic choice.
We write $T\to_\nd U$ if $U$ is obtained from $T$ by choosing some $\nd$-labeled node $u$ together with one of its children $v$, and attaching $T{\restriction}_v$ in place of $T{\restriction}_u$.
In other words, $\to_\nd$ is the smallest relation such that $\symb{\nd}[T_1][\dots][T_r]\to_\nd T_j$ for $j\in\{1,\dots,r\}$,
and if $T_j\to_\nd T_j'$ for some $j\in\{1,\dots,r\}$, and $T_i=T_i'$ for all $i\in\{1,\dots,r\}\setminus\{j\}$,
then $\symb{a}[T_1][\dots][T_r]\to_\nd \symb{a}[T_1'][\dots][T_r']$.
For a tree $T$, ${\mathcal L}(T)$ is the set of all finite trees $U$ such that $\#_\nd(U)=\#_\omega(U)=0$ and $T\to_\nd^* U$.
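For finite trees, the relation $\to_\nd$ and the language ${\mathcal L}(T)$ can be prototyped directly. The sketch below (an encoding of our own, restricted to finite input trees, so it does not cover the infinite trees generated by schemes) enumerates all full resolutions of the $\nd$-choices and counts letter occurrences $\#_a$:

```python
def lang(t, ND='nd'):
    """All ND-free trees reachable by resolving every nondeterministic choice.
    Trees are encoded as (label, [children])."""
    label, children = t
    if label == ND:                  # pick one child and keep resolving inside it
        return [u for c in children for u in lang(c, ND)]
    resolved = [[]]                  # all combinations of resolved children
    for c in children:
        resolved = [pre + [u] for pre in resolved for u in lang(c, ND)]
    return [(label, ch) for ch in resolved]

def count(t, a):
    """#_a(U): the number of a-labeled nodes in a finite tree U."""
    label, children = t
    return (label == a) + sum(count(c, a) for c in children)
```

For example, a tree with a single binary $\nd$-node yields a two-element language, and the counts $\#_a$ of its members are what the predicate $\mathsf{SUP}_A$ inspects for unboundedness.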
\begin{factfact}[SUP reflection~{\cite[Theorem 10.1]{types-diagonal-journal}}]\label{fact:sup-reflection}
For every scheme ${\mathcal G}$ generating a tree $T$ one can construct a scheme ${\mathcal G}_\mathit{SUP}$ that generates a tree of the same shape as $T$,
and such that its every node $u$, having in $T$ label $a_u$, is labeled by
\begin{itemize}
\item a pair $(a_u,\{A\subseteq\Sigma_{\mathcal G}\mid \mathsf{SUP}_A({\mathcal L}(T{\restriction}_u))\})$, if $a_u\neq\nd$, and
\item the letter $\nd$, if $a_u=\nd$.
\qed\end{itemize}
\end{factfact}
The third recalled fact (Fact~\ref{fact:transducer}) talks about finite tree transducers.
A \emph{(deterministic, top-down) finite tree transducer} is a tuple ${\mathcal T}=(\Sigma,\nChld_{\max},Q,q_\mathsf{0},\delta)$, where
$\Sigma$ is a finite alphabet, $\nChld_{\max}$ is the maximal arity of considered trees, $Q$ is a finite set of states, $q_\mathsf{0}\in Q$ is an initial state,
and $\delta$ is a transition function mapping $Q\times\Sigma\times\{0,\dots,\nChld_{\max}\}$ to finite lambda-terms.
A triple $(q,a,r)$ should be mapped by $\delta$ to a term that uses only node constructors and variables of the form $\mathsf{x}_{i,p}$, where $i\in\{1,\dots,r\}$ and $p\in Q$ (applications and lambda-binders are not allowed);
at least one node constructor has to be used (the whole $\delta(q,a,r)$ cannot be equal to a variable).
For a $(\Sigma,\nChld_{\max})$-tree $T$ and a state $q\in Q$, we define ${\mathcal T}_q(T)$ by coinduction, as follows:
if $T=\symb{a}[T_1][\dots][T_r]$, then ${\mathcal T}_q(T)$ is the tree obtained from $\delta(q,a,r)$ by substituting ${\mathcal T}_p(T_i)$ for the variable $\mathsf{x}_{i,p}$, for all $i\in\{1,\dots,r\}$ and $p\in Q$.
In the root we start from the initial state, that is, we define ${\mathcal T}(T)={\mathcal T}_{q_\mathsf{0}}(T)$.
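On finite trees this coinductive definition becomes a plain recursion, which can be sketched as follows (encoding and names are ours; terms produced by $\delta$ are built from node constructors and the variables $\mathsf{x}_{i,p}$):

```python
def transduce(delta, q0, t):
    """Deterministic top-down finite tree transducer on finite trees.
    delta maps (state, label, arity) to a term built from node constructors
    ('node', b, [terms]) and variables ('var', i, p) standing for x_{i,p}."""
    def run(q, t):
        a, children = t
        def subst(term):
            if term[0] == 'var':
                _, i, p = term           # x_{i,p}: process child i in state p
                return run(p, children[i - 1])
            _, b, args = term
            return (b, [subst(s) for s in args])
        return subst(delta[(q, a, len(children))])
    return run(q0, t)
```

As a toy instance, a one-state transducer whose transitions swap the leaf labels $a$ and $b$ under a binary $f$ maps $\symb{f}[\symb{a}][\symb{b}]$ to $\symb{f}[\symb{b}][\symb{a}]$.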
We have the following fact.
\begin{factfact}\label{fact:transducer}
For every finite tree transducer ${\mathcal T}=(\Sigma,\nChld_{\max},Q,q_\mathsf{0},\delta)$ and every scheme ${\mathcal G}$ generating a $(\Sigma,\nChld_{\max})$-tree $T$,
one can construct a scheme ${\mathcal G}_{\mathcal T}$ that generates the tree ${\mathcal T}(T)$.
\qed\end{factfact}
This fact follows from the equivalence between schemes and collapsible pushdown systems~\cite{collapsible}, as it is straightforward to compose a collapsible pushdown system with ${\mathcal T}$
(where due to Fact~\ref{fact:convergent} we can assume that $\Lambda({\mathcal G})$ is fully convergent, i.e., that every node of $T$ is explicitly generated by the collapsible pushdown system).
Since we are not aware of any proof of this fact in the literature, we give more details in Appendix~\ref{app:transducer}.
Using Fact~\ref{fact:mso-reflection} we can compose schemes with {\upshape MSO}\xspace automata, as stated below.
\begin{lem}\label{lem:mso-and-schemes}
For every {\upshape MSO}\xspace automaton ${\mathcal A}$ and every scheme ${\mathcal G}$ generating a tree $T$, where $\Sigma_{\mathcal G}\subseteq\Sigma^\mathsf{in}({\mathcal A})$, one can construct a scheme ${\mathcal G}_{\mathcal A}$ that generates the tree ${\mathcal A}(T)$.
\end{lem}
\begin{proof}
Let ${\mathcal A}=(\Sigma,Q,(\phi_q)_{q\in Q})$.
Assume that $Q=\{1,\dots,n\}$, and take ${\mathcal G}_0={\mathcal G}$ and $T_0=T$.
Consecutively for $q=1,\dots,n$ we want to apply Fact~\ref{fact:mso-reflection} to $\phi_q$ and ${\mathcal G}_{q-1}$, and obtain a scheme ${\mathcal G}_q$ that generates a tree $T_q$ of the same shape as $T$,
and such that its every node $u$ is labeled by a tuple $(a,b_1,\dots,b_q)$, where $a$ is the label of $u$ in $T$,
and $b_i$ says whether $\phi_i$ is satisfied in $T{\restriction}_u$ for $i\in\{1,\dots,q\}$.
Strictly speaking, we cannot apply Fact~\ref{fact:mso-reflection} to the original sentences $\phi_q$ (these sentences can be evaluated in $T$, but not in $T_{q-1}$).
We need to slightly modify the sentences: out of $\phi_q$ we obtain $\phi_q'$ by changing every subformula of the form $a(\mathsf{X})$
to a formula saying that every node in the set represented by $\mathsf{X}$ is labeled by a letter from $\{a\}\times\{\mathsf{tt},\mathsf{ff}\}^{q-1}$.
Then $\phi_q$ is satisfied in $T{\restriction}_u$ if and only if $\phi_q'$ is satisfied in $T_{q-1}{\restriction}_u$;
in consequence, we can apply Fact~\ref{fact:mso-reflection} to $\phi_q'$ and ${\mathcal G}_{q-1}$.
The last tree, $T_n$, contains truth values of all sentences $\phi_q$.
In order to obtain ${\mathcal G}_{\mathcal A}$ as required, it is thus enough to rename letters appearing in ${\mathcal G}_n$:
we change every letter $(a,b_1,\dots,b_n)$ to $(a,f)$ for $f\colon Q\to\{0,1,2\}$ mapping every $q\in Q$ to $1$ if $b_q=\mathsf{tt}$, and to $0$ if $b_q=\mathsf{ff}$.
\end{proof}
As one would expect, we can also compose schemes with \ensuremath{\mathsf U}\xspace-prefix automata; for that we need Facts~\ref{fact:sup-reflection} and~\ref{fact:transducer}.
\begin{lem}\label{lem:main-technical}
For every \ensuremath{\mathsf U}\xspace-prefix automaton ${\mathcal A}$ and every scheme ${\mathcal G}$ generating a tree $T$, where $\Sigma_{\mathcal G}\subseteq\Sigma^\mathsf{in}({\mathcal A})$,
one can construct a scheme ${\mathcal G}_{\mathcal A}$ that generates the tree ${\mathcal A}(T)$.
\end{lem}
It is easy to deduce Theorem~\ref{thm:main} from Lemmata~\ref{lem:mso-and-schemes} and~\ref{lem:main-technical}.
Indeed, consider an {\upshape MSO+}\ensuremath{\mathsf{U^{fin}}}\xspace sentence $\phi$ and a scheme ${\mathcal G}_0$ generating a tree $T_0$.
By Lemma~\ref{lem:logic-to-automata}, $\phi$ is equivalent to a nested \ensuremath{\mathsf U}\xspace-prefix {\upshape MSO}\xspace automaton ${\mathcal A}={\mathcal A}_1\vartriangleright\dots\vartriangleright{\mathcal A}_k$, together with an accepting set $\Sigma_\mathsf{F}$.
By consecutively applying Lemmata~\ref{lem:mso-and-schemes} and~\ref{lem:main-technical} for $i=1,\dots,k$, we combine ${\mathcal G}_{i-1}$ with ${\mathcal A}_i$, obtaining a scheme ${\mathcal G}_i$ that generates the tree $T_i={\mathcal A}_i(T_{i-1})$.
The root of $T_k={\mathcal A}(T_0)$ has label in $\Sigma_\mathsf{F}$ if and only if $\phi$ is satisfied in $T_0$.
This label can indeed be computed: having ${\mathcal G}_k$, we simply start generating the tree $T_k$, until its root is generated
(by Fact~\ref{fact:convergent}, we can assume that $\Lambda({\mathcal G}_k)$ is fully convergent).
We now come to the proof of Lemma~\ref{lem:main-technical}.
We are thus given a \ensuremath{\mathsf U}\xspace-prefix automaton ${\mathcal A}=(\Sigma,Q,Q_\mathsf{imp},\Delta)$, and a scheme ${\mathcal G}$ generating a tree $T$, where $\Sigma_{\mathcal G}\subseteq\Sigma$;
our goal is to create a scheme ${\mathcal G}_{\mathcal A}$ that generates the tree ${\mathcal A}(T)$.
As a first step, we create a finite tree transducer ${\mathcal T}$ that converts $T$ into a tree containing all runs of ${\mathcal A}$ on all subtrees of $T$.
Let us write $Q=\{p_1,\dots,p_{|Q|}\}$.
As ${\mathcal T}$ we take $(\Sigma_{\mathcal G},\nChld_{\max}({\mathcal G}),Q\cup\{q_\mathsf{0},\top\},q_\mathsf{0},\delta)$, where $q_\mathsf{0}\not\in Q$ is a fresh state, and $\delta$ is defined as follows.
For $q\in Q$, $a\in\Sigma_{\mathcal G}$, and $r\leq\nChld_{\max}({\mathcal G})$ we take
\begin{align*}
\delta(q,a,r)=\symb{\nd}[\symb{q}[\mathsf{x}_{1,q_{11}}][\dots][\mathsf{x}_{r,q_{1r}}]][\dots][\symb{q}[\mathsf{x}_{1,q_{k1}}][\dots][\mathsf{x}_{r,q_{kr}}]]\,,
\end{align*}
where $(q,a,q_{11},\dots,q_{1r}),\dots,(q,a,q_{k1},\dots,q_{kr})$ are all the elements of $\Delta$ of length $r+2$ having $q$ and $a$ on the first two coordinates.
Moreover, for $a\in\Sigma_{\mathcal G}$ and $r\leq\nChld_{\max}({\mathcal G})$ (and for a special letter ``?'') we take
\begin{align*}
\delta(q_\mathsf{0},a,r)&=\symb{a}[\mathsf{x}_{1,q_\mathsf{0}}][\dots][\mathsf{x}_{r,q_\mathsf{0}}][\symb{?}[\delta(p_1,a,r)]][\dots][\symb{?}[\delta(p_{|Q|},a,r)]]\,,
&&\mbox{and}\\
\delta(\top,a,r)&=\symb{\top}\,.
\end{align*}
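The branching case of $\delta$ for states $q\in Q$ can be sketched as follows (a hedged illustration; terms are encoded as nested tuples, and the encoding and all names are our assumptions, not the paper's):

```python
# Sketch of delta(q, a, r) for q in Q: an 'nd'-labeled node whose children
# enumerate, one per transition (q, a, q_1, ..., q_r) in Delta, the term
# q[x_{1,q_1}]...[x_{r,q_r}]. Encoding and names are assumptions.
def node(label, *children):      # node constructor a[T1]...[Tr]
    return ('node', label, children)

def var(i, p):                   # variable x_{i,p}
    return ('var', i, p)

def run_branching_delta(Delta):
    def delta(q, a, r):
        branches = [node(q, *(var(i + 1, qs[i]) for i in range(r)))
                    for (p, b, *qs) in Delta
                    if p == q and b == a and len(qs) == r]
        return node('nd', *branches)
    return delta
```

For example, with `Delta = [('q', 'a', 'q', 'q')]`, `delta('q', 'a', 2)` yields a single branch `q[x_{1,q}][x_{2,q}]` under an `nd` node.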
We see that ${\mathcal T}(T)$ contains all nodes of the original tree $T$.
Additionally, below every node $u$ coming from $T$ we have $|Q|$ new children labeled by $?$, such that subtrees starting below these children describe runs of ${\mathcal A}$ on $T{\restriction}_u$,
starting in particular states.
More precisely, when $u$ has $r$ children in $T$, for every $i\in\{1,\dots,|Q|\}$
there is a bijection between trees $U$ in ${\mathcal L}({\mathcal T}(T){\restriction}_{u(r+i)1})$ and runs $\rho$ of ${\mathcal A}$ on $T{\restriction}_u$ such that $\rho(\epsilon)=p_i$.
The label of every node $v$ in such a tree $U$ contains the state assigned by $\rho$ to $v$,
where $U$ contains exactly all nodes to which $\rho$ assigns a state from $Q$, and all minimal nodes to which $\rho$ assigns $\top$
(i.e., such that $\rho$ does not assign $\top$ to their parents).
Recall that by definition $\rho$ can assign a state from $Q$ only to a finite prefix of the tree $T{\restriction}_u$,
which corresponds to the fact that ${\mathcal L}({\mathcal T}(T){\restriction}_{u(r+i)1})$ contains only finite trees.
Actually, we need to consider a transducer ${\mathcal T}'$ obtained from ${\mathcal T}$ by a slight modification: we replace the letter $q$ appearing in $\delta(q,a,r)$ by $1$ if $q\in Q_\mathsf{imp}$, and by $0$ if $q\not\in Q_\mathsf{imp}$.
Then, for a node $u$ of $T$ having $r$ children, and for $i\in\{1,\dots,|Q|\}$, we have the following equivalence:
$\mathsf{SUP}_{\{1\}}({\mathcal T}'(T){\restriction}_{u(r+i)})$ holds if and only if for every $n\in\mathbb N$ there is a run $\rho_n$ of ${\mathcal A}$ on $T{\restriction}_u$ that assigns $p_i$ to the root of $T{\restriction}_u$,
and such that for at least $n$ nodes $v$ it holds that $\rho_n(v)\in Q_\mathsf{imp}$.
We now apply Fact~\ref{fact:transducer} to ${\mathcal G}$ and ${\mathcal T}'$; we obtain a scheme ${\mathcal G}_{{\mathcal T}'}$ that generates the tree ${\mathcal T}'(T)$.
Then, we apply Fact~\ref{fact:sup-reflection} (SUP reflection) to ${\mathcal G}_{{\mathcal T}'}$, which gives us a scheme ${\mathcal G}'$.
The tree $T'$ generated by ${\mathcal G}'$ has the same shape as ${\mathcal T}'(T)$, but in the label of every node $v$ (originally having label other than $\nd$)
there is additionally written a set ${\mathcal U}$ containing these sets $A\subseteq\Sigma_{{\mathcal G}_{{\mathcal T}'}}$ for which $\mathsf{SUP}_A({\mathcal L}(T{\restriction}_v))$ holds.
Next, using Fact~\ref{fact:mso-reflection} (logical reflection) $2|Q|$ times, we annotate every node $u$ of $T'$, having $r'$ children, by logical values of the following properties, for $i=1,\dots,|Q|$:
\begin{itemize}
\item whether $r'\geq|Q|$ and ${\mathcal L}(T'{\restriction}_{u(r'-|Q|+i)1})$ is nonempty, and
\item whether $r'\geq|Q|$ and the label $(a,{\mathcal U})$ of node $u(r'-|Q|+i)$ in $T'$ satisfies $\{1\}\in{\mathcal U}$.
\end{itemize}
Clearly both these properties can be expressed in {\upshape MSO}\xspace.
For nodes $u$ coming from $T$, the first property holds when there is a run of ${\mathcal A}$ on $T{\restriction}_u$ that assigns $p_i$ to the root of $T{\restriction}_u$,
and the second property holds when for every $n\in\mathbb N$ there is a run $\rho_n$ of ${\mathcal A}$ on $T{\restriction}_u$ that assigns $p_i$ to the root of $T{\restriction}_u$, and such that for at least $n$ nodes $w$ it holds that $\rho_n(w)\in Q_\mathsf{imp}$.
Let ${\mathcal G}''$ be the scheme generating the tree $T''$ containing these annotations.
Finally, we create ${\mathcal G}_{\mathcal A}$ by slightly modifying ${\mathcal G}''$:
we replace every node constructor $\symb{(a,{\mathcal U},\sigma_1,\tau_1,\dots,\sigma_{|Q|},\tau_{|Q|})}[P_1][\dots][P_{r+|Q|}]$ with $\symb{(a,f)}[P_1][\dots][P_r]$, where $f\colon Q\to\{0,1,2\}$ is such that
$f(p_i)=2$ if $\tau_i=\mathsf{tt}$, and $f(p_i)=1$ if $\sigma_i=\mathsf{tt}$ but $\tau_i=\mathsf{ff}$, and $f(p_i)=0$ otherwise, for all $i\in\{1,\dots,|Q|\}$
(we do not do anything with node constructors of arity smaller than $|Q|$).
As a result, only the nodes coming from $T$ remain, and they are appropriately relabeled.
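The final relabeling step, computing $f$ from the reflected truth values $\sigma_i,\tau_i$, amounts to the following (an illustrative sketch; representation assumed):

```python
# Sketch of the relabeling (a, U, s_1, t_1, ..., s_|Q|, t_|Q|) -> (a, f):
# f(p_i) = 2 if tau_i holds, 1 if sigma_i holds but tau_i does not,
# and 0 otherwise.
def relabel(sigma_tau):
    """sigma_tau: list of (sigma_i, tau_i) boolean pairs, i = 1..|Q|."""
    return [2 if tau else (1 if sig else 0) for sig, tau in sigma_tau]
```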
\section{Extensions}\label{sec:conclusion}
In this section we give a few possible extensions of our main theorem, saying that we can evaluate {\upshape MSO+}\ensuremath{\mathsf{U^{fin}}}\xspace sentences on trees generated by recursion schemes.
First, we notice that our solution actually proves a stronger result: logical reflection for {\upshape MSO+}\ensuremath{\mathsf{U^{fin}}}\xspace.
\begin{thm}\label{thm:msoufin-reflection}
For every {\upshape MSO+}\ensuremath{\mathsf{U^{fin}}}\xspace sentence $\phi$ and every scheme ${\mathcal G}$ generating a tree $T$ one can construct a scheme ${\mathcal G}_\phi$ that generates a tree of the same shape as $T$,
and such that its every node $u$ is labeled by a pair $(a_u,b_u)$,
where $a_u$ is the label of $u$ in $T$, and $b_u$ is $\mathsf{tt}$ if $\phi$ is satisfied in $T{\restriction}_u$ and $\mathsf{ff}$ otherwise.
\end{thm}
\begin{proof}
In the proof of Theorem~\ref{thm:main} we have constructed a nested \ensuremath{\mathsf U}\xspace-prefix automaton ${\mathcal A}$ equivalent to $\phi$, and then a scheme ${\mathcal G}_{\mathcal A}$ that generates the tree ${\mathcal A}(T)$.
In every node $u$ of ${\mathcal A}(T)$ it is written whether $T{\restriction}_u$ satisfies $\phi$.
Moreover, labels of ${\mathcal A}(T)$ contain also original labels coming from $T$.
Thus in order to obtain ${\mathcal G}_\phi$ it is enough to appropriately relabel node constructors appearing in ${\mathcal G}_{\mathcal A}$.
\end{proof}
In Theorem~\ref{thm:msoufin-reflection}, the sentence $\phi$ talks only about the subtree starting in $u$.
One can obtain a stronger version of logical reflection (Theorem~\ref{thm:better-reflection}), where $\phi$ is a formula allowed to talk about $u$ in the context of the whole tree.
This version can be obtained as a simple corollary of Theorem~\ref{thm:msoufin-reflection} by using the same methods as in Broadbent et al.~\cite[Proof of Corollary 2]{reflection}.
As shown on page~\pageref{proof:better-reflection}, it is also an immediate consequence of our next theorem (Theorem~\ref{thm:msoufin-selection}).
\begin{thm}\label{thm:better-reflection}
For every {\upshape MSO+}\ensuremath{\mathsf{U^{fin}}}\xspace formula $\phi(\mathsf{X})$ with one free variable $\mathsf{X}$ and every scheme ${\mathcal G}$ generating a tree $T$,
one can construct a scheme ${\mathcal G}_\phi$ that generates a tree of the same shape as $T$, and such that its every node $u$ is labeled by a pair $(a_u,b_u)$,
where $a_u$ is the label of $u$ in $T$, and $b_u$ is $\mathsf{tt}$ if $\phi$ is satisfied in $T$ with $\mathsf{X}$ valuated to $\{u\}$, and $\mathsf{ff}$ otherwise.
\end{thm}
Carayol and Serre~\cite{selection} show one more property of {\upshape MSO}\xspace and schemes, called effective selection.
This time we are given an {\upshape MSO}\xspace sentence $\psi$ of the form $\exists \mathsf{X}.\phi$.
Assuming that $\psi$ is satisfied in the tree $T$ generated by a scheme ${\mathcal G}$, one wants to compute an example set $X$ of nodes of $T$,
such that $\phi$ is true in $T$ with the variable $\mathsf{X}$ valuated to this set $X$.
The theorem says that it is possible to create a scheme ${\mathcal G}_\phi$ that generates a tree of the same shape as $T$, in which nodes belonging to some such example set $X$ are marked.
We can show the same for {\upshape MSO+}\ensuremath{\mathsf{U^{fin}}}\xspace.
\begin{thm}\label{thm:msoufin-selection}
For every {\upshape MSO+}\ensuremath{\mathsf{U^{fin}}}\xspace formula $\phi(\mathsf{X})$ with one free variable $\mathsf{X}\in{\mathcal V}^\mathsf{inf}$ and every scheme ${\mathcal G}$ generating a tree $T$,
if $\exists\mathsf{X}.\phi(\mathsf{X})$ holds in $T$, then
one can construct a scheme ${\mathcal G}_\phi$ that generates a tree $T'$ of the same shape as $T$, and such that its every node $u$ is labeled by a pair $(a_u,b_u)$,
where $a_u$ is the label of $u$ in $T$, and $b_u$ belongs to $\{\mathsf{tt},\mathsf{ff}\}$; when $X$ is the set of nodes of $T'$ having $\mathsf{tt}$ on the second coordinate of the label,
$\phi$ holds in $T$ with $\mathsf{X}$ valuated to $X$.
\end{thm}
The proof of this theorem is based on the following lemma, which is also interesting in itself.
\begin{lem}\label{lem:selection-aux}
For every {\upshape MSO+}\ensuremath{\mathsf{U^{fin}}}\xspace formula $\phi$ and every scheme ${\mathcal G}$ generating a tree $T$ one can construct a scheme ${\mathcal G}_+$ that generates a tree $T'$ of the same shape as $T$,
and an {\upshape MSO}\xspace formula $\phi_\mathit{MSO}$ (all of whose free variables are also free in $\phi$) such that for every valuation $\nu$ in $T$ (defined at least for all free variables of $\phi$)
it holds that $T',\nu\models\phi_\mathit{MSO}$ if and only if $T,\nu\models\phi$.
Moreover, the label of every node of $T'$ contains as its part the label of that node in $T$.
\end{lem}
\begin{proof}
Recall that Lemma~\ref{lem:logic-to-automata-aux} gives us a nested \ensuremath{\mathsf U}\xspace-prefix {\upshape MSO}\xspace automaton ${\mathcal A}_\phi$ and {\upshape MSO}\xspace formulae $\xi_{\phi,\tau}$ for all $\tau\in\Pht{\phi}$
such that for every valuation $\nu$ in $T$ (where $T$ is now the fixed $(\Sigma_{\mathcal G},\nChld_{\max}({\mathcal G}))$-tree generated by ${\mathcal G}$)
it holds that ${\mathcal A}_\phi(T),\nu\models\xi_{\phi,\tau}$ if and only if $\pht{\phi}{T}{\nu}=\tau$.
Applying Lemmata~\ref{lem:mso-and-schemes} and~\ref{lem:main-technical} to components of the automaton ${\mathcal A}_\phi$,
out of the scheme ${\mathcal G}$ we can construct a scheme ${\mathcal G}_+$ that generates the tree ${\mathcal A}_\phi(T)$.
Recall that $\mathit{tv}_\phi(\tau)$ says whether $\phi$ is true in a tree having $\phi$-phenotype $\tau$, and
consider the {\upshape MSO}\xspace formula
\begin{align*}
\phi_\mathit{MSO}\equiv\bigvee_{\substack{\tau\in\Pht\phi\\\mathit{tv}_\phi(\tau)}}\xi_{\phi,\tau}\,.
\end{align*}
By the above, for every valuation $\nu$, it holds that ${\mathcal A}_\phi(T),\nu\models\phi_\mathit{MSO}$ if and only if $T,\nu\models\phi$, as required.
\end{proof}
Using the above lemma, we can easily deduce effective selection for {\upshape MSO+}\ensuremath{\mathsf{U^{fin}}}\xspace from effective selection for {\upshape MSO}\xspace.
\begin{proof}[Proof of Theorem~\ref{thm:msoufin-selection}]
Using effective selection for {\upshape MSO}\xspace (which is a theorem with the same statement as Theorem~\ref{thm:msoufin-selection}, but for the {\upshape MSO}\xspace logic~\cite{selection})
for the formula $\phi_\mathit{MSO}$ and for the scheme ${\mathcal G}_+$ (created by Lemma~\ref{lem:selection-aux}) we obtain a scheme ${\mathcal G}_\phi'$.
It is almost as required: it generates a tree $T'$ of the same shape as $T$ (but with some additional parts of labels, added by ${\mathcal G}_+$),
where additionally nodes of some set $X$ are marked, so that $\phi_\mathit{MSO}$ holds in $T'$ with $\mathsf{X}$ valuated to $X$.
Lemma~\ref{lem:selection-aux} implies that then also $\phi$ holds in $T$ with $\mathsf{X}$ valuated to $X$.
Thus, it is enough to modify node constructors of ${\mathcal G}_\phi'$: out of every letter we leave only the original letter coming from ${\mathcal G}$, and the last component marking the set $X$,
while we remove all the components added by ${\mathcal G}_+$.
\end{proof}
One may want to obtain an analogous theorem for $\mathsf{X}\in{\mathcal V}^\mathsf{fin}$, that is, for a sentence of the form ${\exists_{\mathsf{fin}}}\mathsf{X}.\phi(\mathsf{X})$.
It is, however, a special case of Theorem~\ref{thm:msoufin-selection}, which can be used with the sentence $\exists\mathsf{X}'.{\exists_{\mathsf{fin}}}\mathsf{X}.\mathsf{X}\subseteq\mathsf{X}'\land\mathsf{X}'\subseteq\mathsf{X}\land\phi(\mathsf{X})$.
We remark, though, that the version of Theorem~\ref{thm:msoufin-selection} for $\mathsf{X}\in{\mathcal V}^\mathsf{fin}$ is actually also a corollary of Theorem~\ref{thm:main},
because there are only countably many finite sets $X$, so we may try them one after another, until we find some set for which $\phi$ is satisfied;
it is easy to hardcode a given set $X$ in the formula (or in the scheme).
We now show how Theorem~\ref{thm:better-reflection} follows from Theorem~\ref{thm:msoufin-selection}.
\begin{proof}[Proof of Theorem~\ref{thm:better-reflection}]\label{proof:better-reflection}
We use Theorem~\ref{thm:msoufin-selection} for
\begin{align*}
\phi'(\mathsf{X}')\equiv\forall\mathsf{X}.\mathit{sing}(\mathsf{X})\mathbin{\to}(\mathsf{X}\subseteq\mathsf{X}'\mathbin{\to}\phi(\mathsf{X}))\land(\neg(\mathsf{X}\subseteq\mathsf{X}')\mathbin{\to}\neg\phi(\mathsf{X}))\,.
\end{align*}
The only set $X'$ for which $\phi'$ is true in a tree $T$ is the set of all nodes $u$ for which $\phi$ is true in $T$ with $\mathsf{X}$ valuated to $\{u\}$.
Thus the scheme ${\mathcal G}_{\phi'}$ obtained from Theorem~\ref{thm:msoufin-selection} satisfies the statement of Theorem~\ref{thm:better-reflection}.
\end{proof}
Our algorithm for Theorem~\ref{thm:main} has nonelementary complexity.
This is unavoidable, as already model-checking of WMSO sentences on the infinite word over a unary alphabet is nonelementary.
It would be interesting to find some other formalism for expressing unboundedness properties, maybe using some model of automata, for which the model-checking problem has better complexity.
We leave this issue for future work.
Finally, we remark that our solution does not use the full power of the simultaneous unboundedness problem; we only use the single-letter case.
On the other hand, it seems that {\upshape MSO+}\ensuremath{\mathsf{U^{fin}}}\xspace is not capable of expressing simultaneous unboundedness, only its single-letter case.
Thus, another direction for future work is to extend {\upshape MSO+}\ensuremath{\mathsf{U^{fin}}}\xspace to a logic that can actually express simultaneous unboundedness.
As a possible candidate we see the qcMSO logic introduced in Kaiser, Lang, Le{\ss}enich, and L{\"{o}}ding~\cite{qcMSO}, in which simultaneous unboundedness is expressible.
\bibliographystyle{alpha}
\section{Introduction}
Planetary nebulae (PNe) consist of material ejected from the
Asymptotic Giant Branch (AGB) progenitor, which is photoionized by the
hot central star and swept up by the fast stellar wind
\citep{KOW78}. The morphology of PNe has attracted the attention
of many observers because it is related to and contains information
about the processes of mass ejection involved in their formation.
A number of investigations have been devoted to the morphological
classification of PNe \citep{BAL87,SCH92,STA93,MAN96}. In general,
three broad morphological classes have been considered: round (or
circular), elliptical and bipolar. The formation of the three classes
was explained within the Generalized Interacting Stellar Winds (GISW)
Model \citep{BAL87} assuming an azimuthal dependence of the density in the
slow wind, having its maximum at the equatorial plane. Bipolar and elliptical
PNe would present ``high'' and ``intermediate'' density contrasts, while round PNe
would present a homogeneous slow wind. Numerical simulations have shown that
the three main morphological classes can be reproduced under this
hypothesis \citep[e.g.,][]{FRA93}.
The detection of collimated outflows in PNe
\citep[e.g.,][]{GIE85,MIR92} poses a serious difficulty to the GISW
model because this model is unable to explain the presence of highly
collimated ejections. Moreover, no collimating agent capable of producing
such high collimation was foreseen in these evolutionary phases. In
addition, the collimated ejections in most PNe present different
orientations with respect to the central star, suggesting that the
collimating agent can precess or wobble \citep{MIR92,MIR99,GUE08}.
More recently, high resolution images obtained with the {\it HST} have
shown that PNe are highly complex objects and that simplistic
models are unable to account for the large variety of structures and
microstructures observed in these objects. In particular, the
signature of collimated outflows appears in a very large fraction of
PNe \citep[e.g.,][]{SAH98}. Many PNe show multiple bipolar ejections with
different geometries that
require episodic ejections \citep[e.g.,][]{GUE04}. These observations have led
to the suggestion that collimated outflows are the basic mechanism that shapes
PNe \citep{SAH98}. Nevertheless, other physical phenomena should also be
present in PN formation, such as wind interaction, magnetic fields and/or the binary nature of the central
star \citep[see][]{BYF02}. Although a large number of PNe have already been
imaged \citep{BAL87,SCH92,MAN96,SAH98}, in some cases the images lack spatial resolution and/or
quality to enable a proper identification of the morphological components. In fact,
images at higher resolution and/or deeper than previous ones continue
to reveal new structures, even in well-observed PNe, which turn out to be
key for a proper interpretation of their formation
\citep[e.g.,][]{MGM04,MIR06}. These results and the complexity of PNe
clearly justify re-imaging those PNe whose existing images are of insufficient quality.
In this paper, we present narrow-band images of a sample of relatively
large PNe, including elliptical and bipolar PNe. The images have been
taken under particularly good seeing conditions and reveal novel
details in the objects.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.8, angle=0]{AS09028Mirandafig1.eps}
\caption{H$\alpha$, {\mbox{[O\,{\sc iii}]}} and {\mbox{[N\,{\sc ii}]}} images of IC\,351. North is up, east to the left.
The images are displayed in a logarithmic scale. White represents high values of the intensity.}\label{fig1}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.8, angle=0]{AS09028Mirandafig2.eps}
\caption{
H$\alpha$ and {\mbox{[N\,{\sc ii}]}} images of Vy\,1-1. North is up,
east to the left. The images are displayed in a logarithmic scale. White represents high values of the
intensity.}\label{fig2}
\end{center}
\end{figure}
\section{Sample selection and observations}
The observed PNe were selected from the \citet{MAN96} catalog among
those with no independent H$\alpha$ and {\mbox{[N\,{\sc ii}]}} images and/or with low
resolution images due to poor seeing conditions. Images isolating the
H$\alpha$ and {\mbox{[N\,{\sc ii}]}} lines are crucial because these
two emission lines may trace very different regions of the
nebulae; in particular, collimated outflows and microstructures can be
better recognized and studied in {\mbox{[N\,{\sc ii}]}}. In addition,
good seeing conditions allow a detailed morphological study of the
structures present in a PN. We have added IC\,5217 and NGC\,6778 to this
sample in order to study at high spatial resolution the structures
identified by \citet{MIR06} and \citet{MGM04}, respectively.
Direct images were obtained on 2006 June and 2008 September using
ALFOSC at the 2.56\,m Nordic Optical Telescope on Roque de los
Muchachos Observatory (La Palma, Spain). A 2K$\times$2K E2V CCD
with plate scale of 0.19$''$\,pixel$^{-1}$ was used as detector. We used three narrow-band
filters that isolate the light of H$\alpha$ ($\Delta$$\lambda$ = 9 {\AA}), [N\,{\sc
ii}]$\lambda$6584 (hereafter {\mbox{[N\,{\sc ii}]}}) ($\Delta$$\lambda$ = 9 {\AA}), and [O\,{\sc iii}]$\lambda$5007
(hereafter {\mbox{[O\,{\sc iii}]}}) ($\Delta$$\lambda$ = 30 {\AA}). Exposure time per filter was
900\,s or 1800\,s. The seeing during the observations was between 0.5$''$ and
0.9$''$. The images were reduced using standard IRAF routines.
\section{Results and Discussion}
In the following we will discuss the images of the objects
observed. In some cases, we have grouped together some objects
because of their morphological similarities.
\subsection{IC\,351 and Vy\,1-1}
In the \citet{MAN96} catalog, IC\,351 is classified as an elliptical PN with
internal structures while Vy\,1-1 is classified as an elliptical
multiple-shell PN. The two objects are not included in the lists of PNe with low-ionization
structures, knots or jets by \citet{DEN01}.
Figures\,1 and 2 show our images of IC\,351 and Vy\,1-1,
respectively. The two PNe present noticeable morphological
similarities, including low-ionization polar structures
which are identified in these images for the first time.
IC\,351 is a high-excitation PN consisting of an inner elliptical shell oriented along PA $\sim$
355$^{\circ}$ surrounded by an outer round attached shell. The central star is detected
in the {\mbox{[N\,{\sc ii}]}} and {\mbox{[O\,{\sc iii}]}} filters. The {\mbox{[N\,{\sc ii}]}} image shows
two collimated structures emanating from bright polar caps of the
inner elliptical shell that extend $\sim$ 3$''$ and end in bright knots. The H$\alpha$ and {\mbox{[O\,{\sc iii}]}}
images also show hints of these structures but they are relatively
much fainter, implying low-excitation.
Vy\,1-1 (Figure\,2) shows an inner elliptical shell oriented at PA
$\sim$ 70$^{\circ}$, and an outer round attached shell. Although
no {\mbox{[O\,{\sc iii}]}} image has been obtained for Vy\,1-1, the faint emission in {\mbox{[N\,{\sc ii}]}}
and the bright emission in {\mbox{[O\,{\sc iii}]}} \citep[see][]{MAN96} indicate that both shells
are of high-excitation. The polar regions of the inner shell are particularly bright in
{\mbox{[N\,{\sc ii}]}}. Two polar knots are observed outside but connected to the
elliptical inner shell in the {\mbox{[N\,{\sc ii}]}} image; they are not recognizable in the
H$\alpha$ image. The south-western knot is brighter than the
north-eastern one.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.7, angle=0]{AS09028Mirandafig3.eps}
\caption{
(top) Colour composite picture of NGC\,6778 (green = H$\alpha$, blue =
{\mbox{[O\,{\sc iii}]}} and red = {\mbox{[N\,{\sc ii}]}}). The image is displayed in a logarithmic
scale. (bottom) Unsharp masking {\mbox{[N\,{\sc ii}]}} image. The image is
displayed in a linear scale. North is up, east
to the left in the images.}\label{fig3}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.6, angle=0]{AS09028Mirandafig4.eps}
\caption{
(top) Colour composite picture of K\,3-17 (green = H$\alpha$, blue =
{\mbox{[O\,{\sc iii}]}} and red = {\mbox{[N\,{\sc ii}]}}). (bottom) Enlargement of the bright central
region in the {\mbox{[N\,{\sc ii}]}} and {\mbox{[O\,{\sc iii}]}} images. The images are displayed in a
logarithmic scale. North is up, east to
the left in the images. The spatial scale is indicated in the {\mbox{[N\,{\sc ii}]}}
image.}\label{fig4}
\end{center}
\end{figure}
IC\,351 and Vy\,1-1 closely resemble NGC\,6826 and NGC\,7009
\citep{BAL98}. Although the radial velocities of
the polar structures in IC\,351 and Vy\,1-1 have not been measured, it
is probable that they represent FLIERs as in the case of
NGC\,6826 and NGC\,7009. The morphological resemblances
suggest similar formation processes in the four PNe.
\subsection{NGC\,6778}
NGC\,6778 is a PN that contains two bipolar jet systems oriented at
different directions and moving at 100--200 km\,s$^{-1}$ \citep{MGM04}.
Figure\,3 shows a colour composite picture of NGC\,6778 constructed with the H$\alpha$, {\mbox{[O\,{\sc iii}]}} and
{\mbox{[N\,{\sc ii}]}} images. The high resolution of the new images allows a
detailed description of the nebula. The appearance of NGC\,6778 is
that of a bipolar nebula with its major axis along PA $\simeq$
15$^{\circ}$. The nebula does not display the characteristic waist of
butterfly PNe, but the region along its minor axis is defined by a
fragmented structure that is particularly bright in {\mbox{[N\,{\sc ii}]}}. In the
high-excitation [O\,{\sc iii}] emission, the nebula appears elliptical rather
than bipolar. The images reveal details of the bipolar jets. In
particular, the jet oriented at PA $\simeq$ 195$^{\circ}$ consists of
at least four filaments emanating from a bright knot. In addition,
the nebula presents many low-excitation knots with a cometary
appearance, embedded in high-excitation material. These knots are more
clearly observed in the unsharp masking {\mbox{[N\,{\sc ii}]}} image also shown in
Figure\,3. This image further emphasizes the complexity of the nebula. We
find interesting similarities between
NGC\,6778 and Sh\,2-71 \citep{BOH01,MIR05}, as both PNe exhibit
extremely knotty and filamentary morphologies, suggesting that the previously existing
structures have been fragmented and probably swept up by the fast stellar wind.
\subsection{K\,3-17}
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.5, angle=0]{AS09028Mirandafig5.eps}
\caption{
(top) H$\alpha$ image of IC\,5217 showing the whole bipolar structure
and the point-symmetric features. North is up, east to the left. (bottom) Images of the equatorial
regions in several filters (upper left). The central star is observed
in the [S\,{\sc ii}] filter. The images are displayed in a logarithmic scale.}\label{fig5}
\end{center}
\end{figure}
K\,3-17 presents a bipolar morphology in the \citet{MAN96} catalog
consisting of a bright compact core and two faint bipolar lobes in the
H$\alpha$+{\mbox{[N\,{\sc ii}]}} image. In the {\mbox{[O\,{\sc iii}]}} image only the bright compact
core was detected. Our new, higher
resolution images reveal that K\,3-17 is a complex PN with a
wealth of structures. A colour composite picture is shown in Figure\,4
along with {\mbox{[N\,{\sc ii}]}} and {\mbox{[O\,{\sc iii}]}} images of the bright core. In the new
images, the bipolar lobes present a spindle-like shape and are
dominated by {\mbox{[N\,{\sc ii}]}} emission. The core is resolved into a series of
bubbles oriented mainly perpendicular to the bipolar lobes, although
small bubbles oriented along the bipolar axis are also observed. No
ring-like or toroidal structure can be identified in the center, only
bright knots without a particular orientation. Most of the
structures are of low excitation, while high excitation is only
observed at the central knots and in the regions of the lobes near the center.
The spindle-like morphology of the lobes suggests the action of
collimated outflows in their formation. A similar morphology is observed in
Hu\,2-1, in which the distortions of the lobes can be attributed to the
action of bipolar collimated
outflows \citep{MIR01}. As for the origin of the
equatorial bubbles in K\,3-17, it is possible that the stellar wind is
protruding through or breaking an original equatorial
structure. Alternatively, the bubbles may represent collimated
outflows along different directions. In this case, the orientation of the
collimation axis must have changed drastically, by $\simeq$
90$^{\circ}$. Large differences in the orientation of multipolar lobes are also
found in other PNe, Sh\,2-71 being an extreme case \citep{MIR05}, while the
equatorial bubbles in K\,3-17 are similar to those observed in {\it HST}
images of Hb\,5 \citep[see][]{MON09}.
\section{IC\,5217 and KjPn\,6}
IC\,5217 is an edge-on bipolar PN with polar point-symmetric features, a large
axial ratio and a narrow waist that contains a very thin, bright equatorial
ring \citep{MIR06}.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.8, angle=0]{AS09028Mirandafig6.eps}
\caption{H$\alpha$, {\mbox{[N\,{\sc ii}]}}, and {\mbox{[O\,{\sc iii}]}} images of KjPn\,6. The images
are displayed in a logarithmic scale. North is up, east to the left.}\label{fig6}
\end{center}
\end{figure}
Figure\,5 presents a new H$\alpha$ image and images of the ring in several
emission lines. The large scale H$\alpha$ image
is similar to that presented by \citet{MIR06}. In particular, the point-symmetric
structures at large distances from the center are observed in this image as
well as the bipolar lobes emanating from the nebular central regions that seem
to be connected with the point-symmetric
features. The higher resolution
of the new images and the observations in several emission line filters
allow us to analyze the bright equatorial ring in great detail. Four local
maxima along the ring are distinguished in the
H$\alpha$ image in agreement with the radio continuum morphology at 3.6\,cm
\citep[see][]{MIR06}. In the rest of the emission lines, a clear
ionization gradient is observed with the radius of the ring in {\mbox{[O\,{\sc iii}]}} being
smaller than in {\mbox{[N\,{\sc ii}]}} and [S\,{\sc ii}]. We also note that
the central star is detected in the [S\,{\sc ii}] image at the center of the ring. It
is worth noting that, had only a short-exposure image been
acquired, IC\,5217 would have been described
as a highly collimated bipolar jet emanating from the central star. This is
not the case as high-resolution, long-slit spectroscopy
demonstrates that the bright structure is an edge-on ring \citep{MIR06}.
KjPn\,6 presents a peculiar, almost triangular shape in the
H$\alpha$+{\mbox{[N\,{\sc ii}]}} image and extremely faint emission in the {\mbox{[O\,{\sc iii}]}}
image \citep{MAN96}. The new images of KjPn\,6 are shown in
Figure\,6. The structure in H$\alpha$ and {\mbox{[O\,{\sc iii}]}} resembles an
elliptical PN with the major axis near the north-south direction,
while in {\mbox{[N\,{\sc ii}]}} the nebula appears more squarish. Faint details are
observed in the inner regions, which are particularly bright in {\mbox{[N\,{\sc ii}]}}
and H$\alpha$ but absent in {\mbox{[O\,{\sc iii}]}}. The morphology of KjPn\,6 is very similar
to that observed in the inner regions of IC\,5217 (Figure\,5), which have been
identified with an edge-on ring. These similarities lead us to suggest that KjPn\,6
may be a bipolar PN with bipolar lobes much fainter than its equatorial
regions. Therefore, IC\,5217 and KjPn\,6 are probably related to ring-like PNe
that are characterized by the
presence of a bright ring accompanied by faint bipolar lobes, although these are not
always detected \citep[e.g.,][]{BPW03}. Interesting similarities are found with
the edge-on ring-like PNe Me\,1-1 \citep{PER08} and IC\,2149
\citep{VAZ02}.
\section{Bipolar planetary nebulae}
Figure\,7 shows colour composite pictures of six bipolar PNe. The new images are
deeper and of higher quality than those published elsewhere. BV\,1 is an
edge-on bipolar with a bright ring and extremely faint bipolar
lobes \citep{KAL88}. HaTr\,10, M\,4-17, and K\,3-46 are likely similar
to BV\,1, the main difference being the orientation
of the polar axis with respect to the observer and/or the evolutionary
stage. The case of NGC\,650 could be different as this PN
is observed almost edge-on, as is the case of BV\,1, but the equatorial region
is thick and there is no large intensity contrast between this equatorial
region and the bipolar lobes. In addition, NGC\,650 shows bubbles and microstructures that
suggest the presence of focused (collimated) outflows. M\,2-48 is a
bipolar PN with multiple structures and high velocity collimated
outflows \citep{VAZ00,LOP02}. Our high resolution image (Figure\,7) shows in great
detail the structures previously detected. A dark lane, rather than a
bright equatorial torus or ring, separates the bipolar lobes. Elongated protrusions are
observed along the minor axis of the nebula, although their orientation is
different from that of the bipolar lobes and collimated outflows.
A series of knots encircle the bipolar lobes tracing an apparent circular
shell. This shell is brighter at four regions along the main
symmetry axis and its center does not coincide with the nebular
core. The distant outflows
discovered by \citet{VAZ00} appear as knotty structures, particularly the
north-eastern one. Its south-western counterpart is clearly detected in the new
images, while only hints of it were found by
\citet{VAZ00}. M\,2-48 presents noticeable
differences when compared to the other bipolar PNe in Figure\,7. On the other
hand, the presence of distant outflows in M\,2-48 resembles the situation
observed in IC\,5217 (see above).
\begin{figure*}[h]
\begin{center}
\includegraphics[scale=1.2, angle=0]{AS09028Mirandafig7.eps}
\caption{Colour composite pictures of BV\,1 (field of view [fov] $\simeq$
285$''$$\times$155$''$),
M\,4-17 (fov $\simeq$ 255$''$$\times$130$''$), HaTr\,10 (fov $\simeq$
170$''$$\times$140$''$), NGC\,650 (fov $\simeq$ 130$''$$\times$105$''$),
K\,3-46 (fov $\simeq$ 215$''$$\times$150$''$), and M\,2-48 (fov $\simeq$
245$''$$\times$155$''$). In all cases the colour code is green =
H$\alpha$, blue = {\mbox{[O\,{\sc iii}]}} and red = {\mbox{[N\,{\sc ii}]}}. The images are displayed in a
logarithmic scale. North is up, east
to the left in each image.}\label{fig7}
\end{center}
\end{figure*}
\section{Final remarks}
Images obtained at sub-arcsec resolution (0.5$''$ -- 0.9$''$) have
revealed new morphological details in selected PNe. In IC\,351 and Vy\,1-1,
low-excitation, jet-like features have been
detected for the first time. The morphology of these two objects is very
similar to that of well-studied PNe like NGC\,6826 and NGC\,7009, strongly
suggesting that the same processes have operated in the formation of these
objects. In NGC\,6778 and K\,3-17 we found very complex structures. NGC\,6778
consists of a large number of cometary knots and filaments with two
systems of collimated outflows. K\,3-17 presents equatorial bubbles oriented at different
directions. Evidence for collimated outflows in K\,3-17 is
provided by the spindle-like bipolar lobes. IC\,5217 and KjPn\,6 are bipolar
PNe consisting of a bright edge-on equatorial ring and faint bipolar lobes. A
comparison of the bipolar PNe in the sample
suggests that the formation processes of NGC\,6778, K\,3-17, and M\,2-48, involving the
action of collimated outflows, may differ from that of NGC\,650, K\,3-46, and BV\,1, which
fit the expectations of the GISW model.
\section*{Acknowledgments}
This work has been supported partially by AYA2005-01495 of the Spanish MEC
(co-funded with FEDER funds),
and AYA2008-01934 of the Spanish MICINN (co-funded with FEDER funds), and
by Consejer\'{\i}a de Innovaci\'on,
Ciencia y Empresa of Junta de Andaluc\'{\i}a.
\section{Introduction}
For well over half a century, metabolism has been described as modular,
i.e.\ divisible into relatively autonomous subunits such
as the citric acid cycle~\cite{citric}, glycolysis~\cite{stryer:bio},
etc. For the pioneers of the 20th century it was a great feat to
describe such connected set of reactions. But do these subunits tell
us that the organization of metabolic networks is fundamentally
modular, or are they a result of the limited knowledge of the time
they were discovered?
For systems such as metabolism, where a continuous flow is needed
throughout a large part of the system, a modular organization is
less robust than a more integrated
topology~\cite{wagner:robu}---and metabolic networks are robust;
metabolite fluxes are restored in minutes after large
perturbations~\cite{palsson:minutes}. A very modular system would be
expected to consist of modules with simple (or narrow) inputs and
outputs and a more complex interior. Severing the input of a module
(in a biochemical context, this could happen through mutation or
exposure to atypical conditions, e.g. different types of starvation)
would then affect the whole functioning of the module. On the other
hand, a homogeneous network without explicit interfaces between
modules would be very robust in a way termed ``distributed
robustness''~\cite{wagner:robu}. With the biochemical reaction data
available today, answers to questions about the modularity and robustness
of cellular systems are within reach, and we will argue that
biochemical networks are neither purely modular nor purely distributed,
but probably best described as having a little bit of both.
To address such general questions one soon gets into
technicalities. First of all, one has to choose an appropriate level
of description. As many other large-scale studies~\cite{zhao:meta} we
simplify the biochemistry to a network and use graph theory to
describe its organization. Here we use a \textit{substrate graph}
representation where chemical compounds are nodes and (undirected)
edges connect nodes if one of them can produce the other through a
reaction; other representations (reaction graphs, enzyme-centric
graphs and bipartite compound-reaction graphs) have been used (see
Ref.~\cite{zhao:meta} and references therein). Arita~\cite{arita:not}
introduced yet another representation based on the carbon atoms that
are actually transferred during metabolic reactions, and argued that
other representations give a skewed estimate of average path
lengths. However, he also stated that his structure-based description
in a sense gives a compressed view of metabolism, making it difficult
to assess the overall robustness of the networks~\cite{arita:not},
which is one of our aims in this study.
Although biochemical modules are ultimately dynamical
entities~\cite{dyn:mod} there is a prevailing supposition that the
modules of metabolism can be identified with network clusters (densely
connected regions of the
networks)~\cite{our:snwkh,ravasz:hier,gui:meta,ma:meta,ma:zeng}. But
identifying clusters is far from straightforward. One has to choose
network representation, cluster-detection algorithm and, last but not
least, whether or not to preprocess the data by removing abundant
metabolites. The logic behind such preprocessing is that such
substrates (like water, carbon dioxide or adenosine triphosphate) are
so plentiful in normally functioning cells that their concentrations
put no constraints on the activity of a module; in other words, they
can be regarded as externally buffered with respect to the
system~\cite{schus:dec}. Such ubiquitous substrates are sometimes, by
analogy with economics, termed \textit{currency metabolites}---they have a
high turnover and occur in widely different exchange processes. (In
some studies they are instead referred to as ``current'' metabolites,
emphasizing their flow through the metabolic networks.) For example,
adenosine triphosphate (ATP) can be seen as the energy currency of the
cell. Continuing this analogy we will call non-currency metabolites
\textit{commodity metabolites}. So far, the identification of currency
metabolites has been made based on non-formalized chemical
considerations. Among authors who have chosen to preprocess their
metabolic network data some compounds (ATP and NADH, for example) are
virtually always considered currency metabolites, others (e.g.\ small
molecules like water and oxygen) are sometimes removed (e.g.\
Ref.~\cite{ma:zeng}) and sometimes not (e.g.\
Ref.~\cite{wagner:sw}). Since currency metabolites have high degrees
(number of neighbors),
they turn up as ``hub metabolites'' in studies where abundant
substances have not been filtered away (e.g.\
Ref.~\cite{jeong:meta}). Interestingly, versatile currency or hub
metabolites like ATP, NADH and $\textrm{H}_2\textrm{O}$ seem to
support enzyme variability and stimulate pathway
evolution~\cite{schmidt:metab}. We propose a way to identify a set of
currency metabolites from the network structure alone, and, in
extension, a scheme to study the modularity of metabolism that does
not rely on outer information about the substrates. With this
carefully justified network decomposition scheme we look at the
large-scale organization of metabolic networks from several different
organisms.
In the rest of this paper we will describe how the networks are
compiled, state the precise definition of the algorithm, and present
results from the analysis of a number of organisms.
\section{Network construction}
Interaction networks were computed using data downloaded (between
March 13 and 26, 2006) from the anonymous FTP service of KEGG (Kyoto
Encyclopedia of Genes and Genomes) at
\url{http://www.genome.jp/anonftp/}. The following steps were used for
each organism (109 in all): 1. A list of the known enzymes encoded by
the organism's genome was downloaded from the ``genomes''
database of KEGG. 2. The file specifying reactions in the ``ligand''
database of KEGG was scanned for all reactions catalyzed by enzymes
present in the organism in question. 3. Reactants and products were
extracted for each of these reactions and all resulting
reactant-product pairs were written to an edge-list specifying the
connections between the substances. A set of Python scripts automated
the process so that all files could be generated in a single batch
without manual intervention. In the resulting networks for each
organism, substances become nodes and the links between substances
from the link file become edges; we will call the number of nodes $N$
and the number of edges $M$ in the following. The adjacency matrix
$A$ for a graph is defined as a matrix where the element $A_{ij}$ is
set to one if the edge $(i,j)$ exists in the graph and zero if it does
not.
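The mapping from reaction lists to the substrate graph (step 3 above) can be sketched in a few lines. The following Python fragment is our own illustration: the function name and the toy reaction are hypothetical, and the KEGG file parsing of steps 1--2 is omitted.

```python
def substrate_graph(reactions):
    """Build the undirected substrate-graph edge list: compounds are
    nodes, and an edge joins each reactant of a reaction to each of
    its products. `reactions` is a list of (reactants, products)."""
    edges = set()
    for reactants, products in reactions:
        for r in reactants:
            for p in products:
                if r != p:
                    # store each undirected edge in a canonical order
                    edges.add(tuple(sorted((r, p))))
    return sorted(edges)

# toy example: a single hexokinase-like reaction
rxns = [(["glucose", "ATP"], ["glucose-6-phosphate", "ADP"])]
print(substrate_graph(rxns))
```

The toy reaction yields four edges (each reactant linked to each product), illustrating how a single reaction already contributes a small clique-like motif to the substrate graph.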
\section{Modules and network clusters}
The problem of dividing a network into subnetworks that are relatively
densely connected within, and sparsely connected between, is an old
graph-theoretical problem that has recently experienced a second
blooming. Part of the difficulty in constructing such \textit{graph
clustering} algorithms is that the objective is not completely
well-defined---the proper definition of a densely connected
cluster is, to some extent, problem dependent. One definition
commonly used by many modern clustering
algorithms~\cite{gui:mod,arenas:clu,mejn:spectrum} is
\begin{equation}\label{eq:q}
Q=\sum_i\left[e_{ii}-\left(\sum_je_{ij}\right)^2\right],
\end{equation}
where the sum is over the clusters of a partition and $e_{ij}$ is
the fraction of edges that lead between vertices of clusters $i$ and
$j$. Given a partition of a network into clusters, $Q$ is the fraction
of edges within clusters minus the expected value of this fraction if
the edges were wired with no structural bias. The division of the network
that maximizes $Q$ is then usually taken as the desired
partition. Sampling all divisions is clearly infeasible---different
algorithms typically differ in their methods of performing this
sampling. The specific algorithm we use, presented in
Ref.~\cite{mejn:spectrum}, is based on a spectral method that, roughly
speaking, iteratively splits clusters as long as a further split
increases $Q$ (for technical details, see
Ref.~\cite{mejn:spectrum}). More precisely, for a subgraph $H$ define
a matrix $\mathbf{B}$ with elements
\begin{equation}\label{eq:b}
B_{ij}=A_{ij}-\frac{k_ik_j}{2M}-\delta_{ij}\left[h_i -
\frac{k_i}{2M} \sum_{l\in H}k_l\right]
\end{equation}
where $h_i$ is $i$'s number of neighbors in $H$, $k_i$ is $i$'s number
of neighbors in $G$, and $\delta_{ij}$ is Kronecker's delta. Then if
$\mathbf{B}$ has a positive leading eigenvalue there is a division of
$H$ such that $Q$ increases. The division is given by the signs of the
leading eigenvector.
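To make Eqs.~(\ref{eq:q}) and (\ref{eq:b}) concrete, the following Python sketch---our own illustration, not the implementation of Ref.~\cite{mejn:spectrum}---evaluates $Q$ for a given partition and performs the first spectral split for the special case $H=G$, where the bracketed term of Eq.~(\ref{eq:b}) vanishes (nodes are assumed to be labeled $0,\dots,n-1$):

```python
from collections import defaultdict

def modularity(edges, cluster_of):
    """Q of Eq. (1): sum_i [e_ii - (sum_j e_ij)^2], where e_ii is the
    fraction of edges inside cluster i and sum_j e_ij is the fraction
    of edge ends attached to cluster i."""
    m = len(edges)
    within, ends = defaultdict(int), defaultdict(int)
    for u, v in edges:
        cu, cv = cluster_of[u], cluster_of[v]
        if cu == cv:
            within[cu] += 1
        ends[cu] += 1
        ends[cv] += 1
    return sum(within[c] / m - (ends[c] / (2 * m)) ** 2 for c in ends)

def spectral_split(edges, n):
    """For H = G the bracket in Eq. (2) vanishes and B_ij reduces to
    A_ij - k_i k_j / (2M); the signs of its leading eigenvector
    (found here by shifted power iteration) give the first split."""
    m = len(edges)
    B = [[0.0] * n for _ in range(n)]
    k = [0] * n
    for u, v in edges:
        B[u][v] += 1.0
        B[v][u] += 1.0
        k[u] += 1
        k[v] += 1
    for i in range(n):
        for j in range(n):
            B[i][j] -= k[i] * k[j] / (2.0 * m)
    # shift makes all eigenvalues of B + shift*I nonnegative, so power
    # iteration converges to the most positive eigenvalue of B
    shift = max(sum(abs(x) for x in row) for row in B)
    vec = [float(i + 1) for i in range(n)]  # arbitrary start vector
    for _ in range(300):
        new = [sum(B[i][j] * vec[j] for j in range(n)) + shift * vec[i]
               for i in range(n)]
        norm = max(abs(x) for x in new) or 1.0
        vec = [x / norm for x in new]
    return [x >= 0 for x in vec]

# demo: two triangles joined by a single bridge edge
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
part = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
print(modularity(edges, part))   # 6/7 - 1/2
print(spectral_split(edges, 6))  # the two triangles get opposite signs
```

On this toy graph the leading eigenvector of $\mathbf{B}$ recovers the two triangles as the first split, and the corresponding partition has $Q = 6/7 - 1/2 \approx 0.36$.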
$Q$ is a measure of modularity with respect to different partitions of
the same graph. The maximal $Q$ value obtained during the partition
process, $\hat{Q}$, is a crude measure of the modularity of a whole
graph. However, fluctuations make $\hat{Q}$ positive even for random
networks in the $N\rightarrow\infty$ limit~\cite{gui:mod}. Indeed,
finite structureless random graphs can have almost any value of
$\hat{Q}$. Instead of measuring $\hat{Q}$ alone, we measure the difference
between $\hat{Q}$ and its average over an ensemble of
null-model networks. Since the degree of a vertex is a rather
intrinsic quantity, related to molecular traits (and the set of other
present substrates), we choose random graphs conditioned on the set of
degrees as our null model. Such null-model networks can be
instantiated by randomly rewiring the original
network~\cite{roberts:mcmc}.
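Such a rewiring can be sketched as a sequence of degree-preserving double-edge swaps. The fragment below is a minimal illustration of the idea only; Ref.~\cite{roberts:mcmc} should be consulted for how to sample the ensemble without bias.

```python
import random

def rewire(edges, n_swaps, seed=0):
    """Degree-preserving randomization: repeatedly pick two edges
    (a,b) and (c,d) and replace them by (a,d) and (c,b), rejecting
    swaps that would create self-loops or multi-edges."""
    rng = random.Random(seed)
    edges = [tuple(e) for e in edges]
    present = {frozenset(e) for e in edges}
    done, attempts = 0, 0
    while done < n_swaps and attempts < 100 * n_swaps:
        attempts += 1
        i, j = rng.randrange(len(edges)), rng.randrange(len(edges))
        (a, b), (c, d) = edges[i], edges[j]
        if len({a, b, c, d}) < 4:
            continue  # would create a self-loop (or reuse an edge)
        e1, e2 = frozenset((a, d)), frozenset((c, b))
        if e1 in present or e2 in present:
            continue  # would create a multi-edge
        present -= {frozenset((a, b)), frozenset((c, d))}
        present |= {e1, e2}
        edges[i], edges[j] = (a, d), (c, b)
        done += 1
    return edges
```

Each accepted swap leaves every vertex degree unchanged, so the randomized network has exactly the degree sequence of the original.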
\begin{figure}
\resizebox*{0.95 \linewidth}{!}{\includegraphics{human.eps}}
\caption{ Subsequent values of the effective modularity $\Delta Q$
during the run of the algorithm for the human metabolic
network. The horizontal line marks the $\Delta Q$ value of the
original network. The identified currency metabolites are (in the
order of deletion, from left to right): water, oxygen, hydrogen ion,
nicotinamide adenine dinucleotide phosphate (reduced form, NADPH),
adenosine triphosphate (ATP), nicotinamide adenine dinucleotide
phosphate (NADP), nicotinamide adenine dinucleotide (NAD$^{+}$),
nicotinamide adenine dinucleotide (reduced form, NADH), phosphate
and adenosine diphosphate (ADP). 100 averages of the null-model
networks are used for the calculation of $\Delta Q$. Error bars
would be smaller than the symbol size.}
\label{fig:human}
\end{figure}
\begin{figure*}
\resizebox*{\linewidth}{!}{\includegraphics{size.eps}}
\caption{ Relative sizes of the detected clusters in the human
metabolic networks. Lines mark the connections between
clusters. The widths of the lines are proportional to the number
of connections. The functional assignments are done by
inspection---they should not be viewed as absolute: a given
assignment reflects the most common function in a cluster, but
other functions are usually also represented to a lesser degree.}
\label{fig:size}
\end{figure*}
How can one identify currency metabolites from a metabolic network? As
previously mentioned, currency metabolites are abundant; not only are
they present in relatively high concentrations throughout the cell,
they are also present in many different reactions. This reflects the
fact that they are used by many different enzyme
superfamilies~\cite{schmidt:metab}. That a vertex is present in many
reactions means two things: First, it has to have large
degree. Second, it will have functionally different substrates
as neighbors in the network, i.e.\ it will have edges to vertices in
different modules. The second statement means that the effective
modularity will increase when a currency metabolite is deleted. To
combine these two precepts, we delete vertices in order of degree and
take the network with the highest value of the effective modularity as
our set of commodity metabolites. More precisely, let $G_t$ be the
network after $t$ vertices are deleted, then the algorithm is as
follows:
\begin{enumerate}
\item Let $G_t$ be $G_{t-1}$ without its vertex of highest degree
and all its incident edges. (If more than one vertex of highest
degree exists, select one randomly.)
\item Run the clustering scheme for the current network $G_t$ and $n$
randomizations of $G_t$.
\item Calculate the effective modularity $\Delta Q = \hat{Q} -
\langle\hat{Q}\rangle$. If it is higher than the currently highest
value, then save the partition.
\end{enumerate}
In practice $\Delta Q$ reaches its maximal value after about ten
iterations, seemingly independent of the network size, so the running
time of the algorithm will be a constant factor times the running
time of the clustering algorithm ($O(N^2\log N)$ in our case).
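The loop above can be sketched as follows, with \texttt{effective\_modularity} standing in for the full clustering-plus-null-model computation of steps 2--3, which we do not reproduce here (the function name and interface are our own):

```python
from collections import Counter

def currency_metabolites(edges, effective_modularity, max_steps=20):
    """Greedily delete the highest-degree vertex and keep the deletion
    step at which the effective modularity Delta-Q is maximal.
    `effective_modularity` maps an edge list to a Delta-Q value."""
    edges = list(edges)
    removed = []
    best_dq, best_removed = effective_modularity(edges), []
    for _ in range(max_steps):
        deg = Counter()
        for u, v in edges:
            deg[u] += 1
            deg[v] += 1
        if not deg:
            break
        hub = max(deg, key=deg.get)            # step 1: highest degree
        edges = [e for e in edges if hub not in e]
        removed.append(hub)
        dq = effective_modularity(edges)       # steps 2-3
        if dq > best_dq:
            best_dq, best_removed = dq, list(removed)
    return best_dq, best_removed
```

The returned list contains the deleted vertices (the currency metabolites) in their order of deletion, as in Fig.~\ref{fig:human}.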
\section{Numerical results}
\subsection{The human metabolic network: a case study}
\begin{table*}
\begin{ruledtabular}
\begin{tabular}{r|llllll}
taxonomy & number & $\langle N\rangle$ & $\langle M\rangle$ & $\langle
N_\mathrm{currency}\rangle$ & $\langle\Delta Q_\mathrm{max}\rangle$ &
$\langle\Delta Q_0\rangle$ \\ \hline
animals & 5 & $1621\pm 124$&$4662\pm 473$ & $6.2 \pm 1.9$ & $0.157\pm
0.006$ & $0.136\pm 0.002$\\
plants & 1 & 1561 & 4302 & 1 & 0.144 & 0.130\\
fungi & 2 & $1281\pm 97$ & $3654\pm 289$& $1.5\pm 0.5$ & $0.150\pm 0.004$
& $0.135\pm 0.007$\\
bacteria & 99 & $1059\pm 35$& $2739\pm 108$ & $1.7\pm 0.2$ &
$0.140\pm 0.001$ & $0.132\pm 0.001$\\
\end{tabular}
\end{ruledtabular}
\vspace{4mm}
\caption{ Statistics for different classes of organisms. The average
number of substrates $\langle N\rangle$, edges $\langle
M\rangle$, currency metabolites $\langle
N_\mathrm{currency}\rangle$, average maximal effective modularity
$\langle\Delta Q_\mathrm{max}\rangle$, and average effective
modularity of the original network $\langle\Delta
Q_0\rangle$.}
\label{tab:stat}
\end{table*}
Figure~\ref{fig:human} shows the effective modularity of the human
metabolic network as a function of the number of nodes removed as the
algorithm progresses. After ten removed vertices the effective
modularity reaches its maximum. After this point $\Delta Q$
decreases roughly monotonically---no larger increase of $\Delta Q$ is
observed even if one lets the algorithm run until no vertices
remain---so the human currency metabolites seem quite well defined by
this procedure. Their identities correspond to some of the most
commonly used currency metabolites from previous studies: ATP,
NAD(P)(H), water, oxygen, the hydrogen ion etc. Indeed, our ten
currency metabolites are almost identical to the ten most abundant
metabolites in enzymatic reactions from all organisms in
KEGG~\cite{schmidt:metab} (the only difference is that our human
network has the hydrogen ion substituted for carbon dioxide).
The partition of nodes into groups for the human metabolic network at
its most modular stage of the deletion procedure (after ten currency
metabolites have been removed) is shown in
Fig.~\ref{fig:size}. Although the high-degree currency metabolites are
deleted (and with them 1988 edges), a large part of the network is
still connected. These cross-module edges make the network less than
perfectly modular and contribute to a distributed robustness.
Do the groups identified here correspond to biologically meaningful
subsets of compounds? It turns out that while some of the groups have
a clear biological interpretation, others seem more mixed, and
others---due to peculiarities of the algorithm---are isolated islands
of substances that would have been expected to end up in one of the
larger groups.
The largest clusters in Fig.~\ref{fig:size} correspond roughly to
amino acid metabolism and protein synthesis (ii), sugar metabolism
(iii) and citric acid cycle / porphyrin synthesis (iv). However, the groups are to some
extent overlapping; for example, although cluster (ii) is the main
amino acid metabolism cluster, three nodes representing amino acids occur
in the citric acid cycle cluster (iv), and the nitrogen metabolism
cluster (i) contains nodes relating to the synthesis of cysteine and
methionine. (It also contains many D amino acids, but these never
occur in proteins; most L amino acids are, as expected, located in the
amino acid metabolism and protein synthesis cluster). Also, the
nucleotide metabolism seems to be distributed onto two clusters: one which
mainly deals with DNA metabolism and also contains several substances
relating to glycolysis, such as phosphoenolpyruvate and D-glucose
1-phosphate (ix), and one more clearly separated nucleoside/
nucleotide metabolism cluster (x). Comparing our partitions with previous studies, we note
that such cases of overlap and unclean clusters have been found in
previous studies~\cite{ravasz:hier,gui:meta}. Interestingly, the size
difference among the largest detected clusters is so small that the
size distribution can hardly be a power law. If the networks were
truly scale-free, one would expect a power-law distribution of cluster
sizes. So even if the degree is power-law distributed, the metabolic
networks are (using the terminology of Ref.~\cite{tanaka:scalerich})
``scale-rich.''
\subsection{General organization of currency metabolites and modules}
A comparison of 109 different organisms (Table~\ref{tab:stat}) shows
that the number of currency metabolites is typically rather low and varies
little within each group of organisms. However, animals (human, mouse,
rat, fruit fly and \textit{Caenorhabditis elegans}) have significantly
more than the other groups, which tend to end up with only two
currency metabolites, typically ATP and water. This presumably reflects a
general increase in metabolic complexity in higher organisms; as
mentioned before, currency metabolites are linked to enzyme
variability and pathway evolution~\cite{schmidt:metab}. All computed
networks have markedly higher modularity than null (randomly rewired)
networks with the same degree distribution, even before any currency
metabolites have been removed. This suggests that a considerable
modularity is indeed present in metabolic networks, even if the
modules may be only partly mappable to cellular functions as
understood by contemporary biochemistry. Furthermore, the presence of
inter-modular edges (after deletion of the currency metabolites) and
the non-power-law cluster-size distribution of the human metabolic
network are also observed in the vast majority of other organisms.
\section{Summary and conclusions}
In the present paper we propose a network-based method to partition
metabolites into functional groups. In concordance with other
works~\cite{schus:dec,ma:zeng,ma:meta} we propose a fundamental
dichotomy between currency and more specific, commodity
metabolites. We define the currency metabolites as the substrates
that, if omitted, increase the effective modularity of the
network. The effective modularity can be calculated by any modern
graph clustering algorithm~\cite{gui:mod,arenas:clu,mejn:spectrum} (we
use the one in Ref.~\cite{mejn:spectrum}). The same algorithm can be
used (on the fly, while tracing the core of currency metabolites) to
partition the specific metabolites into functional subgroups. Our
method is thus purely graph theoretical and does not rely on any
additional information about the substrates apart from their
connections. This is a rather simple network view of metabolism; in
contrast, e.g.\ Ref.~\cite{gui:meta} first deletes a set of currency
metabolites from the network (based on the chemical considerations of
Ref.~\cite{ma:zeng}) and then proposes seven additional functional
categories among the remaining commodity vertices (defined by regions
in the event space of two vertex-specific network measures). One can
indeed proceed from our partitions and group vertices according to
other network properties, but more refined levels are not only more
prone to errors and incompleteness of the data, they are also closer
to the realm where a more complete, dynamical,
modeling~\cite{aaeau:system} is called for.
We find that the networks of all 109 organisms show a clearly positive
effective modularity. This supports the age-old idea of a modular
organization of the cellular biochemistry. But modular architectures
of systems like metabolism are also assumed to be
fragile~\cite{wagner:robu}, and metabolism is known to be robust. We
attribute this robustness to the fair number of cross-modular
edges---after the removal of currency metabolites the network is still
largely connected. We note that some authors actually associate
modularity with robustness---damage may then be confined to a small
part of the network~\cite{kitano:robu}. While this may be true in some
cases (like the spread of disease in a population or pathogens in an
organism), it scarcely seems applicable to metabolism: If (assuming
that modules can be mapped onto canonical biochemical pathways) an
entire module corresponding to e.g.\ amino acid metabolism or
nucleotide metabolism were knocked out, the organism would hardly be
able to survive even though the rest of the modules had remained
unscathed. Rather than isolating pathways, it would seem to make sense
to intertwine them, so that metabolites whose standard synthesis
pathway has been disrupted can be synthesized through alternative
pathways.
The view of biochemistry that emerges from our study is that of a core
of currency metabolites and a periphery that is strongly, but not
completely, modular. We conjecture that the additional edges making
the modular description less than perfect are important for the
robustness of the biochemical pathways. Presumably the normal
activities of a cell can be well understood from the function of
modules, but to assess the robustness to abnormal conditions one needs
to consider the full network.
The degrees of currency metabolites are, by our definition, higher
than the average degree of commodity metabolites. This broad degree
distribution has been pointed out~\cite{jeong:meta} as a fundamental
organizational principle of metabolic networks. That the structure of
the network, manifested through a modular organization, is more
clearly visible when the currency metabolites are removed, suggests
that the functionality and active evolution are dependent on the
commodity metabolites. That the presence of high-degree vertices is an
inherent property of chemical reaction networks, rather than a result
of evolution, is supported by the fact that astrochemical networks
also show a broad, power-law-like degree
distribution~\cite{wagner:space}. The analogy with currency and
commodities is indeed quite apt: Given that we have a market
(metabolism) the presence of currency (metabolites) is inevitable. The
development (evolution) of the market occurs at the level of
commodities, but never in the absence of currency.
\begin{acknowledgements}
P.H. acknowledges financial support from the Wenner-Gren
foundations. The authors thank Todd Kaplan and Mark Newman for
helpful comments.
\end{acknowledgements}
\section{Introduction}
Characterizing protoplanetary disk structures is essential for understanding planet formation. Disk properties are often inferred by modeling millimeter observations of CO isotopologues (e.g. \citealt{2003AA...399..773D,2013ApJ...774...16R, 2014ApJ...788...59W}). The vertical structure of a disk is typically described in terms of a surface layer in which strong UV radiation photodissociates molecules, an intermediate warm molecular layer, and a cold midplane \citep{2002AA...386..622A}. The relationship between CO abundances and overall gas density structure is tied to the disk temperature profile. Near the star, CO is in the gas phase, but in the cold outer disk midplane, CO is expected to be frozen out onto dust grains. The boundary separating these two regions is the CO snowline, thought to influence the formation locations and compositions of planets and planetesimals (e.g., \citealt{2011ApJ...743L..16O, 2013Sci...341..630Q,2014ApJ...793....9A}). While the fractional abundance of gas-phase CO is generally assumed to be constant in the warm molecular layer, depletion due to photodissociation and freezeout in other layers must be accounted for in order to avoid underestimating the disk mass or surface density \citep{2011ApJ...740...84Q}. Because $^{12}$CO is optically thick in disks, $^{13}$CO and C$^{18}$O observations have been used to derive disk masses, but these may be substantially underestimated if selective photodissociation is neglected \citep{2014AA...572A..96M}. Comparisons of gas mass estimates from HD and C$^{18}$O observations for the TW Hya disk also indicate that the CO:H$_2$ ratio in the warm molecular layer may not follow typical ISM abundances \citep{2013Natur.493..644B, 2013ApJ...776L..38F}.
Desorption in the outer disk midplane, preventing the complete freezeout of CO exterior to the snowline, may be an additional complicating factor in characterizing disk structure with CO observations. Two mechanisms may return CO to the gas phase in the outer disk. First, non-thermal desorption may occur when disk densities decrease enough for cosmic rays, X-rays, and UV radiation to penetrate to the midplane (e.g. \citealt{2007ApJ...660..441W, 2010ApJ...722.1607W}). Second, the inward drift of large grains or external photoevaporation may invert the radial temperature profile, leading to thermal CO desorption \citep{2016ApJ...816L..21C, 2016MNRAS.457.3593F}. {\"O}berg et al. \citeyearpar{2015ApJ...810..112O} presented indirect evidence for CO desorption in the outer disk of T Tauri star IM Lup with a detection of double DCO$^+$ rings, attributing the outer ring to increased production of DCO$^+$ due to UV photodesorption of CO ice.
In this letter, we present evidence for a CO desorption front in the outer disk of AS 209, based primarily on the emission pattern of C$^{18}$O. In sections 2 and 3, we describe ALMA observations of $^{12}$CO, $^{13}$CO, and C$^{18}$O $J=2-1$ transitions in the disk around AS 209, a 1.6 Myr T Tauri star with a mass of 0.9 $M_\odot$ \citep{2009ApJ...700.1502A}. The system is thought to be part of the Ophiuchus star-forming region about 120 pc away \citep{2008ApJ...675L..29L}. In section 4, we simulate observations with a toy model to demonstrate that a CO abundance enhancement in the outer disk is consistent with the different isotopologue emission morphologies. Section 5 discusses approaches for further characterizing CO desorption in disks and implications for disk structure inference.
\section{Observations and Data Reduction}
AS 209 (J2000.0 R.A. $16^{\textup{h}}49^{\textup{m}}15^{\textup{s}}.29$, decl. $-14\degr 22'08\farcs 6$) was observed with the Atacama Large Millimeter/Submillimeter Array on 2014 July 2 (project code ADS/JAO.ALMA\#2013.1.00226), with 21 minutes on source. The configuration consisted of thirty-four 12 m antennae, with baselines between 20 and 650 m. Observations were set up with thirteen Band 6 spectral windows (SPWs). Twelve (including those with CO isotopologue lines) had spectral resolutions of 61 kHz and bandwidths of 59 MHz, while the thirteenth had a resolution of 122 kHz and bandwidth of 469 MHz. An earlier reduction of the 1.4 mm continuum and $^{13}$CO data was published in \citet*{2015ApJ...809L..26H} as part of an N$_2$D$^+$ analysis.
ALMA/NAASC staff provided a calibration script using the quasar J1733-1304 for bandpass and phase calibration and Titan for flux calibration. It was modified to scale visibility weights properly\footnote{See \newline\url{https://casaguides.nrao.edu/index.php/DataWeightsAndCombination}} and executed in CASA 4.4.0 to calibrate visibilities. The CO isotopologue SPWs were phase self-calibrated with solutions obtained from averaging six SPWs free of strong line emission. After subtracting the continuum in the uv-plane, each line was imaged and CLEANed with Keplerian rotation masks. The 1.4 mm dust continuum noise level is $\sigma =0.27$ mJy beam$^{-1}$, and the ALMA systematic flux uncertainty is $\sim$10$\%$. The continuum flux density, obtained by integrating interior to the 3$\sigma$ contour, is 252 $\pm$ 25 mJy, which is consistent with Submillimeter Array observations of AS 209 presented in \citet{2011ApJ...734...98O}. Channel rms for 0.1 km s$^{-1}$ bins and integrated flux values are listed in Table \ref{Table1}.
\begin{deluxetable*}{cccccc}
\tablecaption{Summary of Line Observations\label{Table1}}
\tablehead{
\colhead{Transition} & \colhead{Rest Frequency} &\colhead{ E$_u$} &\colhead{ Beam (P. A.)} &\colhead{Channel rms}&\colhead{Integrated Flux}\\
\colhead{}& \colhead{(GHz)} & \colhead{(K)} &\colhead{}&\colhead{(mJy beam$^{-1}$)}& \colhead{(Jy km s$^{-1}$) }
}
\startdata
$ ^{12}$CO $J = 2-1$ & 230.53800&16.6 &$0\farcs 62\times0\farcs 57$ $(-78\fdg 48)$ &10&7.5$\pm$0.8 \\
$^{13}$CO $J = 2-1$ & 220.39868 & 15.9 &$0\farcs 63\times 0\farcs 58$ $(74\fdg 31)$ &15&2.1$\pm$0.2\\
C$^{18}$O $J = 2-1$ &219.56035 &15.8 &$0\farcs 65\times 0\farcs 59$ $(-72 \fdg 99)$ &8 &0.54$\pm$0.05\\
\enddata
\end{deluxetable*}
\section{Observational Results}
\begin{figure}[htp]
\epsscale{1.3}
\plotone{f1.eps}
\caption{AS 209 disk emission maps and radial intensity profiles. \textit{Left column}: Top left panel shows the 1.4 mm continuum intensity, with [4, 8, 16, 32...]$\sigma$ contours, where $\sigma= 0.27$ mJy beam$^{-1}$. The next three panels are $^{12}$CO, $^{13}$CO, and C$^{18}$O $J=2-1$ integrated intensity maps. Line emission color bars start at 2$\sigma$. Synthesized beams are shown in each panel's lower left. Red crosses mark the continuum peak position. \textit{Right column}: Deprojected and normalized continuum, $^{12}$CO, $^{13}$CO and C$^{18}$O $J=2-1$ radial intensity profiles. Purple ribbons show the standard deviation at each radius.}
\label{fig1}
\end{figure}
\begin{figure}[htp]
\epsscale{1.2}
\plotone{f2.eps}
\caption{Channel maps for the AS 209 disk C$^{18}$O $J=2-1$ transition. Synthesized beams are shown in the lower left and channel velocities (km s$^{-1}$) in the lower right of each panel. Red crosses mark the phase center. The red ellipse traces a $1''$ projected radius, assuming P.A. = 86$\degr$ and inclination = 38$\degr$. Contours are [3, 5, 7]$\sigma$, with $\sigma$ = 5 mJy beam$^{-1}$.
\label{fig2}}
\end{figure}
Column one of Figure \ref{fig1} shows the 1.4 mm dust continuum intensity and $^{12}$CO, $^{13}$CO, and C$^{18}$O $J=2-1$ integrated intensity maps (summed from -2.0 to 11.0 km s$^{-1}$). Column two of Figure \ref{fig1} shows the corresponding deprojected and azimuthally averaged intensity profiles. The position angle and inclination of 86$\degr$ and 38$\degr$, respectively, are adopted from \citet{2009ApJ...700.1502A}. The continuum intensity is normalized to its peak value, 71.7 mJy beam$^{-1}$ (synthesized beam: $0\farcs 53\times0\farcs 51$ $(-78\fdg 07)$). Line profiles are normalized to the peak integrated intensities of 881, 266, and 91 mJy beam$^{-1}$ km s$^{-1}$ for $^{12}$CO, $^{13}$CO, and C$^{18}$O, respectively. The rms noise levels in the integrated intensity maps are $\sigma = $ 23, 27, and 14 mJy beam$^{-1}$ km s$^{-1}$ for $^{12}$CO, $^{13}$CO, and C$^{18}$O, respectively.
The continuum and $^{12}$CO emission are centrally peaked and decrease monotonically with radius, although there is significant $^{12}$CO emission outside the continuum detection threshold. The $^{12}$CO emission is weaker on the west side, likely due to cloud contamination \citep{2011ApJ...734...98O}. In contrast to $^{12}$CO, the $^{13}$CO profile bulges outward at $\sim1''$ (120 AU). Like the other isotopologues, the C$^{18}$O emission is centrally peaked, but also has a ring at $\sim1''$ coinciding with the $^{13}$CO ``bulge."
Figure \ref{fig2} shows C$^{18}$O $J=2-1$ channel maps. A red ellipse traces a $1''$ projected radius. Line wings contribute to the central peak observed in the integrated intensity map, but in channels near the systemic velocity of $\sim 4.5$ km s$^{-1}$, little emission is present within $1''$ of the disk center. Like the C$^{18}$O integrated intensity map, the channel maps demonstrate that the emission traces out a ring at $\sim1''$.
\section{A toy model for CO desorption in the outer disk}
A CO desorption front, the onset of desorption in the outer disk outside a CO snowline, may produce an emission ring. To explore qualitatively how such a CO distribution can affect the emission profiles of different isotopologues, we simulated observations of a T Tauri disk using a toy model adapted from parametric gas density and temperature models that have been applied to CO emission in many disks (e.g. \citealt{2013ApJ...774...16R,2014ApJ...788...59W,2015ApJ...806..154C}). Of the isotopologues observed in the AS 209 disk, we assume that C$^{18}$O, the most optically thin, best traces the CO column density. Because the millimeter continuum emission from the AS 209 disk is optically thin at all radii \citep{2012ApJ...760L..17P,2016AA...588A..53T}, dust opacity is expected to have negligible effects on the C$^{18}$O emission. Our CO surface density model is therefore motivated by the continuum-subtracted C$^{18}$O radial intensity profile.
The gas surface density of protoplanetary disks is often modeled with the Lynden-Bell and Pringle \citeyearpar{1974MNRAS.168..603L} similarity solution for a viscous accretion disk. We adapt this for our model CO surface density profile (in cylindrical coordinates) by adding a Gaussian ring in the outer disk to simulate a large-scale return of CO into the gas phase:
\begin{multline}
\Sigma_\text{CO}(r) =\\
\Sigma_c \left ( \left(\frac{r}{r_c}\right)^{-\gamma}\exp\left(-\left(\frac{r}{r_c}\right)^{2-\gamma}\right) + B\exp\left[-\frac{1}{2} \left( \frac{r-r_\text{ring}}{\sigma_\text{ring}} \right)^2\right] \right)
\end{multline}
Following standard prescriptions for gas abundance distributions (e.g. \citealt{2013ApJ...775..136R}), we scale CO abundances vertically with the midplane pressure scale height:
\begin{equation}\label{eq:1}
\rho_\text{CO}(r,z) = \frac{\Sigma_\text{CO}(r)}{\sqrt{2\pi} H_\text{mid}(r)}\exp\left[-0.5\left(\frac{ z} {H_\text{mid}(r) }\right)^2\right],
\end{equation}
with
\begin{equation}\label{eq:3}
H_\text{mid}(r) = \sqrt{\frac{k_B T_\text{mid}(r)r^3}{\mu_\text{gas}m_\text{H} G M_\ast}},
\end{equation}
where $m_\text{H}$ is the mass of atomic hydrogen, and $\mu_\text{gas}=2.37$ is the mean gas particle mass.
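For concreteness, the CO density structure defined by the three equations above can be evaluated numerically. The following sketch (Python with NumPy, in cgs units, using the parameter values adopted for the ring model later in this section) is illustrative only and is not the modeling code used for the paper:

```python
import numpy as np

# Physical constants (cgs)
k_B = 1.380649e-16   # Boltzmann constant [erg K^-1]
m_H = 1.6726e-24     # mass of atomic hydrogen [g]
G = 6.674e-8         # gravitational constant [cm^3 g^-1 s^-2]
M_sun = 1.989e33     # solar mass [g]
AU = 1.496e13        # astronomical unit [cm]

# Ring-model parameters quoted in this section
Sigma_c, gamma = 5e-5, 1.0             # [g cm^-2], power-law index
r_c, B = 100 * AU, 3.0                 # characteristic radius, ring amplitude
r_ring, sigma_ring = 150 * AU, 20 * AU
mu_gas, M_star = 2.37, 0.9 * M_sun     # mean particle mass, stellar mass

def T_mid(r):
    # Midplane temperature power law: T_mid,10 = 47.3 K, q_mid = 0.48
    return 47.3 * (r / (10 * AU)) ** -0.48

def Sigma_CO(r):
    # Similarity solution plus a Gaussian ring in the outer disk
    tapered = (r / r_c) ** -gamma * np.exp(-((r / r_c) ** (2 - gamma)))
    ring = B * np.exp(-0.5 * ((r - r_ring) / sigma_ring) ** 2)
    return Sigma_c * (tapered + ring)

def H_mid(r):
    # Midplane pressure scale height (Eq. 3)
    return np.sqrt(k_B * T_mid(r) * r**3 / (mu_gas * m_H * G * M_star))

def rho_CO(r, z):
    # Vertically Gaussian CO density (Eq. 2)
    H = H_mid(r)
    return Sigma_CO(r) / (np.sqrt(2 * np.pi) * H) * np.exp(-0.5 * (z / H) ** 2)
```

Integrating $\rho_\text{CO}$ over $z$ recovers $\Sigma_\text{CO}$ by construction, and the ring term makes the column density peak again near 150 AU, mimicking a desorption front.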
Similarly to Rosenfeld et al. \citeyearpar{2013ApJ...774...16R} and Dartois et al. \citeyearpar{2003AA...399..773D}, we model the vertical temperature gradient in cylindrical coordinates as
\begin{equation}
T(r,z) = \begin{cases}
T_{\textup{atm}}(r)+(T_{\textup{mid}}(r)-T_{\textup{atm}}(r))\cos^2{\left(\frac{\pi z}{2 z_q}\right)} & z\leq z_q \\
T_{\textup{atm}}(r) & z>z_q
\end{cases}
\end{equation}
where
\begin{equation}
T_\textup{atm}(r) = T_{\textup{atm},10}\left(\frac{r}{\textup{10 AU}}\right)^{-q_\textup{atm}}
\end{equation}
\begin{equation}
T_\textup{mid}(r) = T_{\textup{mid},10}\left(\frac{r}{\textup{10 AU}}\right)^{-q_\textup{mid}}.
\end{equation}
$ T_{\textup{atm},10}$ and $T_{\textup{mid},10}$ are the atmosphere and midplane temperatures, respectively, at $r = 10$ AU. The temperature gradient scale height $z_q$ is
\begin{equation}
\label{eq:6}
z_q = 4H_\text{mid}(r).
\end{equation}
From a power-law fit to the midplane dust temperature that Andrews et al. \citeyearpar{2009ApJ...700.1502A} calculated for the AS 209 disk, $T_{\textup{mid},10} = 47.3$ K and $q_\textup{mid} = 0.48$. Since the atmosphere gas temperature profile of the AS 209 disk has not been constrained, we set $q_\textup{atm} = 0.5$ and $T_{\textup{atm},10} = 150$ K, similar to values Dartois et al. \citeyearpar{2003AA...399..773D} found for the disk around T Tauri star DM Tau. After some experimentation to determine parameter values that could reproduce the main features of the observations, we set $\Sigma_c = 5\times 10^{-5} \text{ g cm}^{-2}$, $\gamma = 1$, $r_c = 100$ AU, $B = 3$, $r_\text{ring} = 150$ AU, and $\sigma_\text{ring} = 20$ AU.
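A minimal numerical version of this temperature prescription, using the parameter values just quoted, might look like the following (Python/NumPy; $z_q$ is passed in explicitly and equals four midplane scale heights per Eq. 6; this is an illustrative sketch, not the actual model code):

```python
import numpy as np

AU = 1.496e13  # astronomical unit [cm]

def T_atm(r, T_atm10=150.0, q_atm=0.5):
    # Atmosphere temperature power law (DM Tau-like values)
    return T_atm10 * (r / (10 * AU)) ** -q_atm

def T_mid(r, T_mid10=47.3, q_mid=0.48):
    # Midplane temperature power law from the Andrews et al. fit
    return T_mid10 * (r / (10 * AU)) ** -q_mid

def temperature(r, z, z_q):
    # Dartois-type vertical gradient: cos^2 interpolation below z_q,
    # isothermal atmosphere above; z_q = 4 H_mid(r) in the toy model.
    T_a, T_m = T_atm(r), T_mid(r)
    interp = T_a + (T_m - T_a) * np.cos(np.pi * z / (2 * z_q)) ** 2
    return np.where(np.abs(z) <= z_q, interp, T_a)
```

The profile is continuous at $z = z_q$, where the $\cos^2$ factor vanishes and the two branches meet at $T_\textup{atm}$.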
We use this model and molecular data from the LAMDA database \citep{2005AA...432..369S, 2010ApJ...718.1062Y} to calculate $^{12}$CO, $^{13}$CO, and C$^{18}$O $J=2-1$ intensities with the radiative transfer code \textsc{RADMC-3D} \citep{2012ascl.soft02015D}. We fix the $^{12}$CO/$^{13}$CO and $^{12}$CO/C$^{18}$O ratios to the ISM values of 69 and 557, respectively \citep{1999RPPh...62..143W}. Local thermodynamic equilibrium was assumed because CO and its isotopologues have low critical densities relative to disk gas densities \citep{2007ApJ...669.1262P}. Keplerian gas velocities were assumed, and the turbulent broadening parameter was set to $\xi =$ 0.01 km s$^{-1}$, on par with the TW Hya disk upper limit derived by Hughes et al. \citeyearpar{2011ApJ...727...85H}. After computing sky brightness images at the orientation of the AS 209 disk, model visibilities were produced at the same spatial frequencies as the AS 209 data by using the Python package \textsc{vis\_sample}.\footnote{Available at \url{https://github.com/AstroChem/vis_sample}} Model visibilities were imaged and CLEANed in CASA 4.4.0, then integrated across the same velocity ranges as the observations to produce integrated intensity maps. Observations were then simulated by adding Gaussian noise to the model visibilities so that the rms in the model image cubes was comparable to that of the data image cubes. Temperature and column density profiles and integrated intensity maps are shown for the model with the Gaussian CO ring in Figure \ref{fig3}. For comparison, we also model CO abundances without an outer ring by setting $B = 0$ in the surface density profile, holding the other parameters fixed. Corresponding structure plots and integrated intensity maps are shown in Figure \ref{fig4}.
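The noise-injection step of the simulated observations can be sketched as follows (Python/NumPy; the function name and per-visibility noise level are illustrative assumptions, not part of the actual pipeline):

```python
import numpy as np

def add_vis_noise(vis, sigma, seed=0):
    """Add circularly symmetric complex Gaussian noise to model visibilities.

    `vis`: complex array of model visibilities sampled at the observed
    (u, v) points. `sigma`: per-visibility standard deviation, tuned so the
    rms of the imaged model cube matches that of the data cube.
    """
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma, vis.shape) \
          + 1j * rng.normal(0.0, sigma, vis.shape)
    return vis + noise
```

Because the noise is added in the visibility domain, it propagates through imaging and CLEANing exactly as thermal noise does in the real data.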
\begin{figure*}[htp]
\epsscale{1.15}
\plotone{f3.eps}
\caption{Model structures and resulting integrated intensity maps for the model with an outer CO ring ($B = 3$ in the surface density profile). Top left: Temperature gradient profile up to four pressure scale heights (Eq. \ref{eq:3}) above the midplane. The red line denotes the 20 K isotherm. Bottom left: CO column density. Middle: Integrated intensity maps for $^{12}$CO, $^{13}$CO, and C$^{18}$O $J=2-1$. Right column: Corresponding maps with Gaussian noise added to the model visibilities. Synthesized beams are drawn in the lower left. Red crosses mark phase centers.
\label{fig3}}
\end{figure*}
\begin{figure*}[htp]
\epsscale{1.15}
\plotone{f4.eps}
\caption{Same as Figure \ref{fig3}, but for a model with monotonically decreasing CO column densities ($B = 0$ in the surface density profile).
\label{fig4}}
\end{figure*}
For either model, the $^{12}$CO integrated emission is centrally peaked, in line with observations. However, only the model with an outer CO ring, shown in Figure \ref{fig3}, reproduces the central peak and ring in the C$^{18}$O emission, and the shoulder in $^{13}$CO emission. The morphological similarities between the data and model with an outer CO ring suggest that excess CO in the outer disk in conjunction with line opacity can account for the variation in emission patterns for the AS 209 disk's CO isotopologues. The model results highlight that a non-monotonic CO radial column density profile can be obscured in (partially) optically thick $^{12}$CO and $^{13}$CO observations, and thus direct evidence of CO desorption in the outer disk would often require observations of rarer, optically thin isotopologues such as C$^{18}$O.
\section{Discussion}
\subsection{Origin of the CO isotopologue emission morphology}
The outer emission ring in C$^{18}$O observed in the AS 209 disk suggests that its CO abundances are enhanced at large radii. In the radial intensity profile of C$^{18}$O in Figure \ref{fig1}, the ring peaks at $\sim$ 120 AU. While no snowline estimate has been reported in the literature for the AS 209 disk, the CO snowline has been estimated to lie at 30 AU for the TW Hya disk and 90 AU for the HD 163296 disk, corresponding to midplane temperatures of 17 and 25 K, respectively \citep{2013Sci...341..630Q, 2015ApJ...813..128Q}. TW Hya is a T Tauri star with $T_\text{eff}= 3400$\textendash4000 K and $ L_\ast$ = 0.19\textendash 0.23 $L_\odot$ \citep{1999ApJ...512L..63W,2011ApJ...732....8V}. AS 209 is somewhat warmer ($T_\text{eff} = 4250$ K and $L_\ast = 1.5$ $L_\odot$), suggesting that its CO snowline should lie at a slightly larger radius compared to TW Hya. As a Herbig Ae star, HD 163296 has a much warmer disk, so its CO snowline radius can be treated as an upper bound for AS 209. Furthermore, the AS 209 disk midplane dust temperature profile derived by Andrews et al. \citeyearpar{2009ApJ...700.1502A} drops to 25 K at 35 AU and 17 K at 80 AU, suggesting that the CO snowline lies within this range. The decrement in emission between the central peak and outer ring of C$^{18}$O $J=2-1$ in the AS 209 disk occurs at $\sim0 \farcs 7$ (80 AU), consistent with the expected location of initial CO freezeout. Meanwhile, the C$^{18}$O ring in the AS 209 disk occurs well outside the expected CO snowline.
Based on T Tauri disk models indicating that radial drift could increase midplane dust temperatures at large radii, Cleeves \citeyearpar{2016ApJ...816L..21C} suggested that subsequent thermal desorption of CO in the outer disk could cause CO enhancement followed by a second CO snowline once temperatures began falling again. P\'erez et al. \citeyearpar{2012ApJ...760L..17P} found that larger dust grains emitted from a more compact region than small grains in the AS 209 disk, providing evidence of radial drift of solids. Cleeves' models and P\'erez et al.'s observations suggest that the C$^{18}$O emission morphology in the AS 209 disk can plausibly be explained by thermal CO desorption in the outer disk, with CO snowlines both interior and exterior to the desorption ring. Non-thermal desorption may also substantially enhance outer disk CO. Further observations and modeling are necessary to constrain the relative contributions of thermal versus non-thermal processes.
Simulated observations from our toy model of CO desorption in the outer disk indicate that isotopologue opacity differences in the AS 209 disk can explain why there is a C$^{18}$O ring, but only a shoulder in $^{13}$CO and a monotonically decreasing intensity profile in $^{12}$CO. Selective photodissociation is a less likely alternative. Models from \citet{2014AA...572A..96M} indicate that selective photodissociation leads to lower C$^{18}$O intensities in the outer disk compared to models without, whereas we observe an increase in C$^{18}$O intensity in the outer disk of AS 209.
Another possibility to examine is that dust opacity creates the appearance of a ring in optically thin C$^{18}$O emission. The peak brightness temperature of the 1.4 mm AS 209 continuum is only 7 K, which is consistent with the findings of \citet{2012ApJ...760L..17P} and \citet{2016AA...588A..53T} that the disk is optically thin. However, \citet{2015ApJ...810..112O} suggested that such low brightness temperatures could also be consistent with dust concentrated into optically thick rings much narrower than the beam. While existing observations do not point to this scenario, it could be tested with ALMA at higher spatial resolutions.
\subsection{Implications for disk structure modeling}
Disk structure inferences made from CO observations depend on assumptions about chemistry (e.g. \citealt{2013ApJ...776L..38F, 2014AA...572A..96M,2014ApJ...788...59W}). Common parametric disk gas structure models stipulate that CO emission decreases monotonically with radius \citep{2013ApJ...774...16R,2013ApJ...775..136R}, in line with most $^{12}$CO observations and chemical models \citep{2001AA...371.1107A, 2007ApJ...660..441W,2012ApJ...747..114W}. However, the optically thick lower-$J$ transitions of $^{12}$CO in disks could easily obscure CO substructures, as indicated by our toy model. Although CO isotopologue emission rings have been observed in disks, they have been primarily associated with systems with prominent dust cavities, suggesting that gas and dust cleared for similar reasons \citep{2014AA...562A..26B, 2016AA...585A..58V,2016ApJ...820...19T}. The non-monotonicity of the C$^{18}$O radial intensity profile for the AS 209 disk, as well as the double DCO$^+$ rings of the IM Lup disk in \citet{2015ApJ...810..112O}, are unexpected because neither disk shows millimeter dust gaps. Since the outer C$^{18}$O emission ring observed in the AS 209 disk seems inconsistent with the standard parameterizations used to model CO emission in disks, the observations presented in this paper indicate that desorption effects should also be accounted for when tracing disk structure with CO.
\subsection{Developing further constraints on desorption}
Given the evidence of CO desorption in the AS 209 disk in this work, as well as observations of IM Lup \citep{2015ApJ...810..112O}, we expect that signatures of CO desorption and second snowlines will be apparent in the outer regions of other disks observed at high resolution with ALMA. Recent observations of $^{13}$CO and C$^{18}$O $J = 3-2$ in the TW Hya disk indicate an outer ring, which \citet{schwarz} suggested could be due to desorption. Because optically thick $^{12}$CO and even $^{13}$CO transitions could obscure significant features in CO abundance profiles, especially for more massive, gas-rich disks, sensitive high-resolution observations of C$^{18}$O and C$^{17}$O can place further constraints on desorption mechanisms and better inform CO emission parameterizations. {\"O}berg et al. \citeyearpar{2015ApJ...810..112O} suggested that observing multiple transitions would provide better estimates of gas temperature in the outer disk, thereby distinguishing between thermal and non-thermal desorption. Better temperature measurements will also be necessary to derive CO column densities more reliably in the outer disk. Furthermore, N$_2$H$^+$ is often used to locate the CO snowline because its formation is inhibited and its destruction accelerated by gas-phase CO \citep{2013Sci...341..630Q}. If abundant CO returns to the gas phase near the midplane, then N$_2$H$^+$ may trace this desorption front.
\subsection{Summary}
We present ALMA observations of $^{12}$CO, $^{13}$CO, and C$^{18}$O $J=2-1$ emission in the AS 209 disk. The C$^{18}$O emission ring is interpreted as the onset of large-scale CO desorption in the outer disk. Based on simulated observations from a toy model, we propose that CO desorption in the outer disk, along with line opacity differences, can explain variations in emission patterns among the isotopologues. The CO isotopologue emission in AS 209 is not described well by common parametric models used to infer disk structure from CO observations, indicating that CO is not a straightforward tracer of total disk gas properties. To constrain the effects of desorption on CO abundances, especially on the creation of outer disk CO desorption fronts or secondary snowlines, high-resolution observations of rare, optically thin CO isotopologues are crucial.
\acknowledgments
We thank the referee for comments improving this paper, Ian Czekala, Ilse Cleeves, and Meredith MacGregor for helpful discussions, and Ryan Loomis for access to \textsc{vis\_sample}. This paper makes use of ALMA data ADS/JAO.ALMA\#2013.1.00226.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada) and NSC and ASIAA (Taiwan), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. KI\"O acknowledges funding from the Alfred P. Sloan Foundation and the Packard Foundation. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1144152.
\software{CASA 4.4.0, RADMC-3D, \textsc{pwkit} (\url{https://github.com/pkgw/pwkit}), \textsc{vis\_sample} (\url{https://github.com/AstroChem/vis_sample}), \textsc{cubehelix} (\url{https://github.com/jradavenport/cubehelix})}
\section{Background} \label{sec:background}
The atmospheric transmission between $5,500$ and $12,000$~\AA\ is
dominated by absorption due to PWV (Figure~\ref{fig:pwv_comparison}). The
strength of PWV absorption lines in observed spectra correlates strongly
with measurements of localized PWV column density \citep{blake11}. This
indicates that PWV measurements can be combined with atmospheric models to
provide estimates of the atmospheric transmission at a given date and time.
However, accomplishing this requires a source of accurate and readily
accessible PWV measurements. Furthermore, since PWV levels can change by
over 10\% per hour, measurements must be available in close to real time.
By measuring the delay of dual-band GPS signals traveling through the
atmosphere, it is possible to determine the PWV column density along the line
of sight (see \cite{braun01}, \cite{dumont01}, and \cite{nahmias04}). This
approach is made even more appealing by the existence of several established
GPS networks dedicated to the measurement of geological and meteorological
data on the international scale. The SuomiNet project \footnote{For more
information see \url{https://www.suominet.ucar.edu}} \citep{ware00} is a
meteorological initiative that uses data from multiple GPS networks to
provide semi-hourly PWV measurements. It currently publishes meteorological
data from hundreds of receivers throughout the United States and Central
America.
\subsection{Effects of PWV on Photometric Calibration} \label{ssec:calibration_effects}
When correcting photometric observations for atmospheric effects,
astronomers commonly express atmospheric absorption as a linear
function of airmass. In this approach photometric observations are corrected
by fitting for a set of extinction coefficients $k'$ and $k''$ in each band.
For example, given an airmass $X$, the observed $i$ and $z$ band magnitudes of
a standard star are related to the tabulated, intrinsic magnitudes $z_0$ and
$i_0$ by a set of linear equations
\begin{eqnarray}
\text{z} &=& \text{z}_0 + k_{\text{z}}' \cdot X + k_{\text{z}}'' (\text{b} - \text{v}) \cdot X \label{eq:z} \\
\text{i} &=& \text{i}_0 + k_{\text{i}}' \cdot X + k_{\text{i}}'' (\text{b} - \text{v}) \cdot X \label{eq:i}
\end{eqnarray}
The first order extinction term $k'$ accounts for the decrease in a star's
observed flux with airmass. The inclusion of a second order coefficient $k''$
accounts for the fact that the observed flux of blue stars decreases faster
than that of red stars as they approach the horizon.
To measure the second-order extinction, observations are taken of a red and
blue star over a wide airmass range. The second-order extinction in each
band can then be found by fitting for the difference in magnitude between
the two stars.
\begin{eqnarray}
\Delta \text{z} &=& k''_{\text{z}} \Delta(\text{b}-\text{v}) \cdot X + \Delta \text{z}_0 \\
\Delta \text{i} &=& k''_{\text{i}} \Delta(\text{b}-\text{v}) \cdot X + \Delta \text{i}_0
\end{eqnarray}
Using the resulting value for $k''$, the first order extinction coefficient
is then found by fitting Equations \ref{eq:z} and \ref{eq:i}. Although this
method accounts for a first order airmass dependence, it does not account
for components of the atmospheric transmission that vary nonlinearly with
airmass.
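As an illustration of this two-step procedure, the sketch below recovers $k'$ and $k''$ from synthetic observations of a blue/red standard-star pair (Python/NumPy; all numerical values are invented for the demonstration and are not measurements):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(1.0, 2.5, 20)        # airmass samples
k1_true, k2_true = 0.08, -0.02       # illustrative k'_z and k''_z
bv_blue, bv_red = 0.1, 1.4           # (b - v) colors of the two standards
z0_blue, z0_red = 11.2, 12.5         # intrinsic z-band magnitudes

def observed(z0, bv):
    # Observed magnitude following the linear extinction equation,
    # plus small measurement noise
    return z0 + k1_true * X + k2_true * bv * X + rng.normal(0, 1e-4, X.size)

z_blue, z_red = observed(z0_blue, bv_blue), observed(z0_red, bv_red)

# Step 1: fit the magnitude difference to isolate the second-order term k''
k2_fit = np.polyfit((bv_red - bv_blue) * X, z_red - z_blue, 1)[0]

# Step 2: with k'' fixed, fit the single-star equation for k'
k1_fit = np.polyfit(X, z_blue - k2_fit * bv_blue * X, 1)[0]
```

Differencing the two stars removes the shared $k' X$ term, which is why the second-order coefficient can be fit first and then held fixed.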
\begin{figure*}[t]
\includegraphics[width=\textwidth]{zero_point_error.pdf}
\caption{Correcting photometric observations using tabulated values of a
standard star introduces residual error in the magnitudes of
other stars with different spectral types. The residual error
in $z$ band photometric zero point due to absorption by
precipitable water vapor is shown for three black bodies at
$3,000$ (M type), $6,000$ (G type), and $10,000$ K (A type).
Results are shown as a function of the color
of the reference star used to calculate the zero point. Error
values are shown for a PWV column density of 5 (left) and 30 mm
(right).}
\label{fig:zp_error}
\end{figure*}
For a PWV column density at zenith PWV$_{z}$, the column density along the
line of sight is given by
\begin{equation} \label{eq:pwv_los}
\text{PWV}_{\rm los} = \text{PWV}_{z} \cdot X
\end{equation}
However, due to saturation, not all absorption features scale linearly with
PWV concentration -- some features saturate at relatively low concentrations
($<10$~mm). Thus a linear function of airmass and color is not sufficient to
describe the atmospheric transmission from PWV.
Figure~\ref{fig:black_body} details the error introduced by considering PWV
absorption averaged over a bandpass versus the actual absorption spectrum.
Because atmospheric absorption varies with wavelength, it affects stars
differently depending on their spectral type. This means that variations in
the spectral types of photometric standards used to correct an image
introduce errors in the magnitudes of observed targets. This effect is more
pronounced for higher airmass due to the increased PWV along the line of sight,
and is an important consideration for KPNO where $\text{PWV}_{z}$
exceeds $20$ mm over 13\% of the time.
As demonstrated in Figure \ref{fig:zp_error}, using a type A star to
correct cooler G or M type stars can introduce errors as large as
$-0.02$ mag, owing to spectral variations between the stars used in the
atmospheric correction.
This error is particularly important when performing high accuracy photometry
to 1\% or better. An alternative is to correct photometric observations using
atmospheric models.
For an atmospheric transmission $T(\lambda)$, the photometric
correction for an object with a spectral energy distribution $S(\lambda)$ is
given by
\begin{equation} \label{eq:atm_correction}
C = \frac{\int_{\lambda_i}^{\lambda_j} S(\lambda) \cdot T(\lambda) \, d\lambda}
{\int_{\lambda_i}^{\lambda_j} S(\lambda) \, d\lambda}
\end{equation}
where the integration bounds are defined by the wavelength range of the
photometric bandpass. Using atmospheric models, measurements of the PWV
column density are used to determine $T(\lambda)$ at a given date, time,
and airmass. If tabulated values for $S(\lambda)$ are not available,
spectral templates are used instead. For example, the SED of a star is well
estimated by its color, due to the strong relationship between stellar
spectral type and intrinsic color.
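As a toy illustration of the correction integral above, the sketch below evaluates the correction factor numerically for a blackbody spectral template (Python/NumPy; the band edges and transmission curve are made-up stand-ins for a real filter and atmospheric model):

```python
import numpy as np

h, c, k_B = 6.626e-27, 2.998e10, 1.381e-16  # cgs constants

def planck(wave_cm, T):
    # Blackbody B_lambda as a stand-in spectral template S(lambda)
    return (2 * h * c**2 / wave_cm**5) / np.expm1(h * c / (wave_cm * k_B * T))

def correction_factor(wave_ang, transmission, T_eff):
    # Ratio of band-integrated flux with and without atmospheric
    # transmission, evaluated on the sampled wavelength grid
    S = planck(wave_ang * 1e-8, T_eff)
    dl = np.gradient(wave_ang)
    return np.sum(S * transmission * dl) / np.sum(S * dl)
```

Because a cool template weights the red end of the band more heavily, absorption concentrated at long wavelengths lowers its correction factor relative to a hot star, which is the origin of the color-dependent zero-point errors shown in Figure \ref{fig:zp_error}.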
\subsection{Use of GPS at Kitt Peak} \label{ssec:gps_at_kitt_peak}
In March of 2015, we installed a SuomiNet-connected weather station on top of
the WIYN 3.5 meter telescope building at Kitt Peak National Observatory. In
addition to a GPS receiver, the station includes barometric, temperature,
and wind speed sensors. SuomiNet compiles measurements from its affiliated
weather stations at thirty minute intervals. These semi-hourly measurements,
including the local PWV column density at zenith, are then released
publicly on an hourly basis.
In order to prevent equipment damage, the weather station at Kitt Peak is
powered down during lightning storms. This creates gaps in the available
SuomiNet data for Kitt Peak. Additionally, the barometric sensor
malfunctioned from January through March of 2016, so we ignore any SuomiNet
data published for Kitt Peak during that period. The sensor has since
been repaired, but occasionally records a non-physical drop in pressure. We
therefore discard any meteorological measurements taken at Kitt Peak with a
pressure below 775~mbar.
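The pressure cut can be implemented as a simple row filter. The sketch below (Python/NumPy; the dict-of-arrays layout and column names are assumptions for illustration, not the package's actual internal format) drops any record with a non-physical pressure:

```python
import numpy as np

def drop_bad_pressure(data, min_pressure=775.0):
    # Keep only records whose surface pressure [mbar] is physically
    # plausible; the same boolean mask is applied to every column so the
    # rows of the table stay aligned.
    good = data["pressure"] >= min_pressure
    return {name: values[good] for name, values in data.items()}
```

Applying the mask to every column at once avoids mismatched timestamps between the pressure readings and the PWV values.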
In order to determine the PWV level during periods without SuomiNet data,
measurements from other nearby receivers can be used to model the PWV level
at Kitt Peak. This model can also be used for times before the Kitt Peak
receiver was installed. In addition to data taken at Kitt Peak, the \texttt{pwv\_kpno}\
package uses measurements from four other receivers within a 45 mile radius
at varying altitudes.
These include receivers located at Amado (AMAZ), Sahuarita (P014), Tucson
(SA46), and Sells (SA48), Arizona. The locations of these receivers are shown
in Figure \ref{fig:gps_locations}, with SuomiNet measurements for Kitt Peak,
Amado, and Sells shown in Figure \ref{fig:suomi_data}.
\begin{figure*}
\subfigure[]{
\label{fig:gps_locations}
\epsscale{.6}
\includegraphics[width=0.45\textwidth]{gps_locations.png}
}
\hspace{0.05\textwidth}
\subfigure[]{
\includegraphics[width=0.45\textwidth]{elevation.png}
}
\caption{The \texttt{pwv\_kpno}\ package uses PWV measurements for five locations
within a 45 mile radius of Kitt Peak. Shown on the left, these
locations include Kitt Peak (1), Amado (2), Sahuarita (3),
Tucson (4), and Sells (5) Arizona. The elevation of each location
is shown on the right.}
\end{figure*}
\begin{figure*}
\includegraphics[width=\textwidth]{measured_pwv.png}
\caption{Measurements of precipitable water vapor (PWV) from the
SuomiNet project from 2010 onward. From top to bottom, SuomiNet
measurements for Kitt Peak National Observatory, Sahuarita AZ,
and Sells AZ (blue). The modeled PWV level at Kitt Peak is
shown in orange. Periods of one day or longer where there are
no modeled PWV values are shown in the top panel in grey. The geographic
proximity of these locations means that the primary difference
in PWV between locations is due to differences in altitude.
Measurements taken at Kitt Peak National Observatory begin in
March of 2015.}
\label{fig:suomi_data}
\end{figure*}
\begin{figure*}
\epsscale{.95}
\setlength{\belowcaptionskip}{12pt}
\hspace*{-1cm}
\plotone{site_correlations.png}
\caption{Linear fits to measurements of precipitable water vapor (PWV)
column density taken at four different locations versus
simultaneous measurements taken at Kitt Peak. Each row
corresponds to a different location being compared against
Kitt Peak, with measurements shown on the left and binned
residuals shown on the right. The slope ($m$) and y-intercept ($b$)
are shown for each fit. The correlation in PWV column
density between different sites allows the PWV column density
at Kitt Peak to be modeled using measurements from other
locations.}
\label{fig:site_correlations}
\end{figure*}
Note that the PWV level at each location follows the same seasonal trend,
but the PWV concentration at Kitt Peak tends to be lower. Since the
chosen receivers are geographically close together, variations in PWV
between Kitt Peak and the four supplementary locations are predominantly
caused by differences in altitude. Shown in Figure
\ref{fig:site_correlations}, the PWV level at each location can be related
to the PWV level at Kitt Peak by applying a linear fit. Each fit is able to
predict the PWV column density at Kitt Peak to a precision of 1~mm plus 10\%
of the predicted value.
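As a sketch of how such a fit can be computed (assuming hypothetical arrays
\code{pwv\_site} and \code{pwv\_kitt} of simultaneous measurements in mm;
this is illustrative and not the package's internal implementation):
\begin{lstlisting}
>>> import numpy as np
>>>
>>> # First order fit:
>>> # pwv_kitt ~ m * pwv_site + b
>>> m, b = np.polyfit(pwv_site, pwv_kitt, 1)
>>> predicted_kitt = m * np.array(pwv_site) + b
\end{lstlisting}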
For times when SuomiNet data is unavailable for Kitt Peak, each of the
linear fits is used to estimate the PWV column density at Kitt Peak. The
resulting estimations are then averaged and used to supplement data taken by
the Kitt Peak weather station. This full data set provides a model for the
PWV column density at zenith over time.
To determine the PWV column density for a specific date and time, \texttt{pwv\_kpno}\
first determines the concentration along zenith by interpolating from the
supplemented PWV data. The PWV column density along the line of sight is then
calculated using Equation \ref{eq:pwv_los}. Using this value, \texttt{pwv\_kpno}\ is
able to determine the atmospheric transmission using a set of tabulated
MODTRAN models.
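As an illustrative sketch of the first two steps (using hypothetical arrays
\code{model\_times} and \code{model\_pwv} to represent the supplemented PWV
model, and assuming Equation \ref{eq:pwv_los} reduces to a plane-parallel
scaling of the zenith value by the airmass):
\begin{lstlisting}
>>> # Zenith PWV at the requested time
>>> pwv_zenith = np.interp(obs_timestamp,
                           model_times,
                           model_pwv)
>>>
>>> # PWV along the line of sight
>>> pwv_los = pwv_zenith * airmass
\end{lstlisting}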
\section{Conclusion and Future Work} \label{sec:conclusion}
Atmospheric transmission in the near-infrared is highly dependent on the
column density of precipitable water vapor. By measuring the delay in GPS
signals through the atmosphere, initiatives such as the SuomiNet project
provide accurate water vapor measurements for multiple, international
locations. Through the use of atmospheric models, these measurements provide
a means for determining the atmospheric transmission due to precipitable
water vapor at each location.
Current methods for removing atmospheric effects commonly rely on fitting
for a set of extinction coefficients. Unfortunately, this method does not
capture the complex nature of the atmospheric transmission function.
When calibrating a photometric image, this introduces errors due to spectral
variations of the stars used to determine the extinction coefficients.
Atmospheric modeling has the potential to provide an alternative
that is not influenced by spectral differences.
The Python package \texttt{pwv\_kpno}\ provides models for the atmospheric transmission
due to H$_2$O at user specified sites. For a given date, time, and airmass,
the package uses
measurements from the SuomiNet project to determine the corresponding PWV
column density along the line of sight. By using a set of MODTRAN models,
the resulting concentration is then used to determine the PWV
transmission function between $3,000$ and $12,000$ \r A.
Future work is planned by the primary author to further explore the
relationship between PWV measured by geographically separated GPS receivers.
Measurements from two geographically close receivers can be related
by linearly fitting the PWV concentration measured at both sites. However,
this linear relationship does not capture the intrinsic scatter of the
measured data. Additional models will be explored that take into account
simultaneous temperature, pressure, and relative humidity measurements to
improve the ability to model the PWV relationship between GPS receivers.
\section{Package Demonstration} \label{sec:package_demo}
The \texttt{pwv\_kpno}\ package can be used to correct both spectrographic and photometric observations.
As an example, we use the \texttt{pwv\_kpno}\ package to determine the atmospheric correction presented in
Figure \ref{fig:spectral_correction}. We also demonstrate how to calculate the photometric correction
factor defined in Equation \ref{eq:atm_correction} for a black body.
\subsection{Correcting Spectra} \label{ssec:correcting_spectra}
Spectrographic observations are corrected by dividing observed spectra by the modeled atmospheric
transmission function. To account for the spectral resolution function of the observing spectrograph, the
modeled transmission is first binned to approximately match the observed spectrum's
resolution. Depending on the resolution of the observation, further smoothing can then be performed using a
Gaussian kernel. Assume that the observed wavelength and flux values are stored in equal length arrays
\code{obs\_wavelength} and \code{obs\_flux} respectively. Using the date, time, and airmass of the observation,
the binned transmission function is found by running
\begin{lstlisting}
>>> import numpy as np
>>> import pytz
>>> from datetime import datetime
>>>
>>> resolution = 16  # Angstroms
>>> bins = np.arange(min(obs_wavelength),
                     max(obs_wavelength) + 1,
                     resolution)
>>>
>>> airmass = 1.2
>>> obs_date = datetime(2010, 9, 19, 6, 29,
                        tzinfo=pytz.utc)
>>> transm = pwv_atm.trans_for_date(obs_date,
                                    airmass,
                                    bins)
\end{lstlisting}
Note that since the GPS receiver at Kitt Peak was not installed until March of 2015, the transmission function for
this observation is found using a modeled PWV column density from the surrounding desert floor.
In order to divide the observed spectrum and modeled transmission, we linearly interpolate the binned transmission
to the observed wavelength values. We then apply a Gaussian smoothing using an arbitrary standard deviation of 2
\AA.
\begin{lstlisting}
>>> from scipy.ndimage.filters import \
gaussian_filter
>>>
>>> interp_transm = np.interp(
obs_wavelength,
transm['wavelength'],
transm['transmission'])
>>> smoothed_transm = gaussian_filter(
input=interp_transm, sigma=2)
\end{lstlisting}
The corrected spectrum is then given as the observed flux
divided by the smoothed transmission function on a wavelength by wavelength basis.
\begin{lstlisting}
>>> corrected_spec = np.divide(
obs_flux,
smoothed_transm)
\end{lstlisting}
\subsection{Correcting Photometry} \label{ssec:correcting_photometry}
The \texttt{pwv\_kpno}\ package can also be used to correct photometric observations of objects with a known spectral
type. To do so, it
is necessary to evaluate Equation \ref{eq:atm_correction}. Note that the product in the numerator $S(\lambda) \cdot
T(\lambda)$ represents the SED under the influence of atmospheric effects, while $S(\lambda)$ in the denominator
represents the intrinsic SED. For a black body observed in the $i$ band, these values can be found as
\begin{lstlisting}
>>> # Example inputs: temperature (K), i band
>>> # wavelength grid (Angstroms), PWV (mm)
>>> sed_temp = 8000
>>> i_band = np.arange(7000, 8500, 50)
>>> pwv = 15
>>>
>>> # S(lambda) * T(lambda)
>>> sed_with_atm = bb_atm.sed(
        sed_temp, i_band, pwv)
>>>
>>> # S(lambda)
>>> intrinsic_sed = bb_atm.sed(
        sed_temp, i_band, 0)
\end{lstlisting}
In practice the SED of a photometrically observed object may not be available. In such a case it is sufficient
to use spectral templates instead. For example, the SED of a star can be reasonably well parametrized by its
observed color.
Using the above results, we evaluate Equation \ref{eq:atm_correction} by performing trapezoidal integration with
the NumPy package.
\begin{lstlisting}
>>> numerator = np.trapz(sed_with_atm, i_band)
>>> denominator = np.trapz(
        intrinsic_sed, i_band)
>>>
>>> photo_corr = np.divide(
        numerator, denominator)
\end{lstlisting}
The corrected photometric flux of the black body is then found by dividing the observed flux by the correction
factor \code{photo\_corr}.
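Assuming the observed flux is stored in a hypothetical variable
\code{observed\_flux}, this final step is simply
\begin{lstlisting}
>>> corrected_flux = np.divide(
        observed_flux, photo_corr)
\end{lstlisting}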
\section{Features and Use of pwv\_kpno} \label{sec:features_and_use}
The \texttt{pwv\_kpno}\ package provides access to models for the atmospheric
transmission due to PWV at any location within the SuomiNet GPS network.
However, the package is configured by default to return models for Kitt Peak
National Observatory. We here demonstrate the features of \texttt{pwv\_kpno}\ using
the default model for Kitt Peak and further discuss modeling custom
sites in Section \ref{sec:other_locations}.
\texttt{pwv\_kpno}\ is registered with the Python Package Index and is compatible
with both Python 2.7 and 3.5 through 3.7. Using PWV measurements published
by the SuomiNet project, the package is able to determine the atmospheric
transmission between $3,000$ and $12,000$ \r A. The package also provides
methods for the automated retrieval and processing of published SuomiNet data.
\subsection{Accessing PWV Data} \label{ssec:accessing_pwv}
In order to model the atmospheric transmission for a given date and time,
\texttt{pwv\_kpno}\ requires there to be SuomiNet data stored on the user's local
machine. Each package release contains the necessary data to return models
for Kitt Peak from 2010 through the end of the previous year. This data is
automatically included when installing the package.
Access to tabulated PWV data and modeling of the PWV transmission function
is provided by the \code{pwv\_atm} module. A list of years that have been
downloaded from SuomiNet to the user's local machine can be retrieved
using the \code{downloaded\_years} method.
\begin{lstlisting}
>>> from pwv_kpno import pwv_atm
>>> pwv_atm.downloaded_years()
[2010, 2011, 2012, 2013, 2014,
2015, 2016, 2017]
\end{lstlisting}
The returned list includes all years for which any amount of data has been
downloaded.
In order to update the locally stored data, \texttt{pwv\_kpno}\ can be used to
automatically retrieve and process new data from SuomiNet. This is
achieved using the \code{update\_models} method.
\begin{lstlisting}
>>> pwv_atm.update_models()
[2017, 2018]
\end{lstlisting}
Here the returned list includes any years for which new data was
downloaded. By default, the function will download all published data for
any years not currently present on the local machine. In addition, it will
also download data for the most recent year that is locally available. This
method ensures there are no years with incomplete measurements in the
locally available data. If desired, the user can instead specify a
single year to download from 2010 onward.
In addition to downloading data for Kitt Peak, the \code{update\_models}
method also downloads measurements taken at the four supplementary
locations shown in Figure \ref{fig:gps_locations}. Each time the method is
run, a new set of linear fits is created to describe the PWV concentration
at Kitt Peak as a function of the PWV concentration at each supplementary
location. These new fits are then used to recreate the entire supplemented PWV
model for Kitt Peak. The error in PWV modeled using each of these fits
is taken as the standard deviation of that fit's residuals.
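As a sketch of this error estimate (using hypothetical arrays
\code{pwv\_site} and \code{pwv\_kitt} of simultaneous measurements, and fit
parameters \code{m} and \code{b} from the corresponding linear fit):
\begin{lstlisting}
>>> residuals = pwv_kitt - (m * pwv_site + b)
>>> fit_error = np.std(residuals)
\end{lstlisting}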
Users can access the locally available SuomiNet data using the
\code{measured\_pwv} method. Results are returned as an Astropy table
\citep{astropy} and can be independently filtered by year, month, day, and
hour.
\begin{lstlisting}
>>> pwv_atm.measured_pwv(
year=2016, month=11, day=14)
date KITT KITT_err P014 ...
UTC mm mm mm ...
---------------- ---- -------- ---- ...
2016-11-14 00:15 4.7 1.025 6.9 ...
2016-11-14 00:45 4.3 1.025 6.7 ...
2016-11-14 01:15 3.9 0.925 6.7 ...
... ... ... ... ...
\end{lstlisting}
Excluding the date column, each column is labeled using the SuomiNet
identification codes for the GPS receivers.
\texttt{pwv\_kpno}\ also provides access to the modeled PWV column density at Kitt
Peak via the \code{modeled\_pwv} method. As in the previous example,
these results can also be filtered independently by year, month,
day, and hour.
\begin{lstlisting}
>>> pwv_atm.modeled_pwv(
year=2016, month=11, day=14)
date pwv pwv_err
UTC mm mm
---------------- --- -------
2016-11-14 00:15 4.7 1.025
2016-11-14 00:45 4.3 1.025
2016-11-14 01:15 3.9 0.925
... ... ...
\end{lstlisting}
\subsection{Modeling the Atmosphere} \label{ssec:modeling_atm}
For a known PWV column density, the package provides access to the modeled
atmospheric transmission via the \code{trans\_for\_pwv} function. This
function returns the modeled transmission function as an Astropy table
with wavelengths ranging from $3,000$ to $12,000$ \r A. For example, given
a PWV column density of $13.5$ mm:
\begin{lstlisting}
>>> pwv_atm.trans_for_pwv(13.5)
wavelength transmission
Angstrom
---------- ------------
3000.00 0.9999999916
3000.05 0.9999999916
3000.10 0.9999999916
... ...
\end{lstlisting}
Atmospheric models can also be accessed for a given
datetime and airmass using the function
\code{trans\_for\_date}.
\begin{lstlisting}
>>> from datetime import datetime
>>> import pytz
>>>
>>> obsv_date = datetime(
>>> year=2013, month=12, day=15,
>>> hour=5, minute=35, tzinfo=pytz.utc)
>>>
>>> pwv_atm.trans_for_date(
date=obsv_date, airmass=1.2)
wavelength transmission transmission_err
Angstrom
---------- ------------ ----------------
3000.00 0.9999999916 1.7305648359e-08
3000.05 0.9999999916 1.7305648359e-08
3000.10 0.9999999916 1.7305648359e-08
... ... ...
\end{lstlisting}
If \texttt{pwv\_kpno}\ does not have any supplemented SuomiNet data within a day
of the requested datetime, an exception is raised. Both the
\code{trans\_for\_pwv} and \code{trans\_for\_date} functions
determine the atmospheric transmission by returning a set of MODTRAN
transmission models.
\subsection{Modeling a Black Body} \label{ssec:modeling_blackbody}
When fitting for a set of extinction coefficients, astronomers typically
choose bright stars of a known spectral type with very few absorption
features. Early A-type stars are commonly used for this purpose due to
their few, weak metal lines and reasonable approximation as a black body at
about $10,000$ K \citep{Vacca03}. It is therefore useful to understand the
impact of atmospheric absorption on the spectral energy distribution (SED)
of a black body.
The \code{blackbody\_with\_atm} module provides functions for modeling the
effects of PWV absorption on a black body SED. For example, consider a
black body at \mbox{$8,000$ K} under the effects of atmospheric absorption
due to $15$ mm of PWV. For a given array of wavelengths in angstroms, the
\code{sed} method returns the corresponding spectral energy distribution.
\begin{lstlisting}
>>> from pwv_kpno import \
        blackbody_with_atm as bb_atm
>>>
>>> temp = 8000
>>> wavelength = np.arange(7000, 10000, 100)
>>> pwv = 15
>>>
>>> sed = bb_atm.sed(temp, wavelength, pwv)
\end{lstlisting}
The SED from the above example can be seen in Figure \ref{fig:black_body}.
If desired, the SED of a black body without atmospheric effects can also be
achieved by specifying a PWV column density of zero.
Using the \code{magnitude} function, users can determine the magnitude
of a black body in a given band. For example, in the $i$ band, which ranges
from $7,000$ to $8,500$ \r A, the AB magnitude of a black body is found by
running
\begin{lstlisting}
>>> band = (7000, 8500)
>>> mag = bb_atm.magnitude(temp, band, pwv)
\end{lstlisting}
Here the $i$ band is treated as a top-hat function; however, the
\code{magnitude} function also accepts \code{band} as a two-dimensional
array specifying the wavelength and response function of a real-world
band. As in the previous example, the magnitude of a black body without
the effects of atmospheric absorption can be found by specifying a PWV
level of zero.
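For instance, a sketch of such a call might look as follows, where the
wavelength grid and flat response values are hypothetical, and the exact
array orientation expected by the package should be checked against its
documentation:
\begin{lstlisting}
>>> import numpy as np
>>>
>>> band_wave = np.arange(7000, 8500, 50)
>>> band_resp = np.ones_like(band_wave,
                             dtype=float)
>>> real_band = np.array([band_wave,
                          band_resp])
>>> mag = bb_atm.magnitude(temp, real_band, pwv)
\end{lstlisting}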
\section{Introduction} \label{sec:introduction}
Upcoming
ground-based surveys, such as the Large Synoptic Survey Telescope,
will require a photometric precision of one percent or better. Understanding
and calibrating for the effects of atmospheric absorption is an important part
of achieving this precision level (see \cite{li16}, \cite{burke14}, and
\cite{burke10}). Ground-based photometry redward of 5,500~\r A suffers from
significant and variable opacity due to water vapor in the atmosphere.
While ozone and aerosol scattering also play significant roles, their
opacity is relatively smooth with wavelength. In contrast, the absorption
due to precipitable water vapor (PWV) has a distinct and complex spectrum.
Astronomers traditionally calibrate broad-band imaging by using a reference
catalog to compute correction terms for color, airmass, and perhaps a
higher-order color-airmass term. This approach implicitly accounts for the
effects of atmospheric opacity on observed images. In general, the color
term accounts for the difference in filter and detector sensitivity with
wavelength, but also includes some average contribution of the atmosphere
above the telescope being used.
More detailed information can be obtained by observing a telluric standard
star. These bright stars of known spectral energy distribution are well
suited for determining the absorption and scattering of the atmosphere. In
order to describe atmospheric effects, spectroscopy should be performed on
a telluric standard at the same airmass as a desired target. This is
ideally performed at the same position and time as the photometric
observations. The total atmospheric absorption per wavelength can then be
found by dividing the observed spectrum by tabulated results already
corrected for absorption.
While this method is effective, the majority of telescopes are not
configured to have an auxiliary spectrograph for observing telluric stars.
Because atmospheric absorption is variable over time, observations of a
standard star must be performed repeatedly and within a short time interval
of other targets. Even in setups with the capability to easily switch back
and forth between mosaic imaging and single-object spectroscopy, such
observations require diverting valuable observation time away from other
targets.
As an alternative, astronomers commonly express the atmospheric absorption
as a linear function of airmass. Using photometric observations taken over
a range of airmass values, corrections are performed by fitting the linear
function in each band. This approach assumes that the absorption scales
linearly with airmass. However, the absorption spectrum of water is a
complex series of very narrow absorption lines. These individual lines can
saturate, and thus the absorption does not scale linearly with airmass.
This non-linearity introduces errors due to higher order effects when
calibrating photometric images \citep{blake11}.
In the redder range of CCD sensitivity ($5,500 < \lambda < 12,000$ \r A),
the atmospheric transmission function is dominated by absorption due to
PWV. The use of GPS to measure the localized PWV column density
has recently emerged in astronomy as an
accurate alternative to traditional methods \citep{dumont01}. Through the
use of atmospheric modeling, these PWV measurements can be used to simulate
the atmospheric transmission. The resulting transmission function can then
be used to correct photometric observations of sources with known spectral
energy distributions for atmospheric absorption.
We here introduce \texttt{pwv\_kpno}\footnote{\texttt{pwv\_kpno}\ can be downloaded using the
pip package manager or at \url{https://mwvgroup.github.io/pwv_kpno/}}: a
Python package that provides models for the atmospheric transmission due to
H$_2$O at user-specified sites. By using MODTRAN models
\citep{modtran} in conjunction with publicly available PWV measurements,
\texttt{pwv\_kpno}\ is able to return models for the atmospheric transmission between
$3,000$ and $12,000$ \r A.
The package was beta tested at Kitt Peak National Observatory
(KPNO) using a dual-band GPS system that was installed at the
WIYN 3.5-m telescope in 2015. Thus, direct PWV measurements at Kitt Peak are
available starting in early 2015, but by using measurements from stations
on the surrounding desert floor, the package is capable of modeling the
atmospheric transmission for years 2010 onward. The package also provides
access to tabulated PWV measurements, along with easy-to-use utility
functions for retrieving and processing newly published PWV data.
In Section~\ref{sec:background} we discuss the use of PWV measurements as a
tool for correcting photometric observations. In
Section~\ref{sec:features_and_use} we present the features and
functionality of \texttt{pwv\_kpno}, including how to access tabulated PWV
measurements and the package's modeling capabilities. Section
\ref{sec:other_locations} demonstrates how to model the atmosphere for a
user-specified site other than Kitt Peak. In Section \ref{sec:validation} we
present a validation of the package as a tool for correcting
ground-based observations. A demonstration
of how to use the package to correct for atmospheric effects is presented
in Section~\ref{sec:package_demo}. Finally, we present our conclusions in
Section~\ref{sec:conclusion}.
\begin{figure*}
\includegraphics[width=\textwidth]{transmission_levels.pdf}
\caption{The $r$, $i$ and $z$ band mosaic filters of Kitt Peak
National Observatory (grey) compared against the MODTRAN
modeled atmospheric transmission function due to precipitable
water vapor (blue). Atmospheric transmission functions are
shown for an airmass of one and a precipitable water vapor
(PWV) column density of $1$ mm (top), $15$ mm (middle), and
$30$ mm (bottom). Note that absorption features do not scale
linearly with PWV, and some saturate at relatively low column
densities.}
\label{fig:pwv_comparison}
\end{figure*}
\begin{figure*}
\includegraphics[width=\textwidth]{black_body_sed.pdf}
\caption{The SED of a blackbody at 8,000 K (black) across the $i$-band
(left) and $z$-band (right) ranges. Shown in grey, the modeled
atmospheric absorption for a PWV column density of 15 mm is
applied to the SED. This is compared to the black body SED
scaled by the integrated absorption in each band (red).
\label{fig:black_body}
\end{figure*}
\section{Modeling Other Locations} \label{sec:other_locations}
By default, \texttt{pwv\_kpno}\ provides models for the PWV transmission
function at Kitt Peak National Observatory. However, \texttt{pwv\_kpno}\ also
provides atmospheric modeling for user-customized locations.
Modeling multiple locations is handled by the \code{package\_settings}
module, and allows modeling at any location with a SuomiNet-connected
GPS receiver.
Each site modeled by \texttt{pwv\_kpno}\ is represented by a unique configuration
file. Using the \code{ConfigBuilder} class, users can create customized
configuration files for any SuomiNet site. As an example, we create a
new model for the Cerro Tololo Inter-American Observatory (CTIO) near
La Serena, Chile.
\begin{lstlisting}
>>> from pwv_kpno.package_settings import \
        ConfigBuilder
>>>
>>> new_config = ConfigBuilder(
site_name='cerro_tololo',
primary_rec='CTIO',
sup_rec=[],
wavelength=custom_wavelengths,
cross_section=custom_cross_sections
)
>>> new_config.save_to_ecsv(
'./cerro_tololo.ecsv')
\end{lstlisting}
Here \code{site\_name} specifies a unique identifier for the site being modeled, \code{primary\_rec} is the
SuomiNet ID code for the GPS receiver located at the modeled site, and \code{sup\_rec} is a list of SuomiNet ID
codes for nearby receivers used to supplement measurements taken by the primary receiver. Unlike the default model
for KPNO, there are no additional receivers near CTIO, and so \code{sup\_rec} in this example is left empty (the
default value). By default, \texttt{pwv\_kpno}\ models use MODTRAN estimates for the wavelength dependent cross section of
H$_2$O from 3,000 to 12,000 \AA. The optional \code{wavelength} and \code{cross\_section} arguments allow a user
to customize these cross sections in units of Angstroms and cm$^2$ respectively.
If desired, users can specify custom data cuts on SuomiNet data used by the package. Data cuts are defined using a
nested dictionary of boundary values. The first key specifies which receiver the data cuts apply to. The second key
specifies what values to cut. Following SuomiNet's naming convention, values that can be cut include PWV
(\code{"PWV"}), the PWV error (\code{"PWVerr"}), surface pressure (\code{"SrfcPress"}), surface temperature
(\code{"SrfcTemp"}), and relative humidity (\code{"SrfcRH"}). For example, if we wanted to ignore measurements
taken between two dates, we can specify those dates as UTC timestamps and run
\begin{lstlisting}
>>> data_cuts = {'CTIO':
{'SrfcPress': [
[time_start, time_end]
]}
}
>>> new_config = ConfigBuilder(
site_name='cerro_tololo',
primary_rec='CTIO',
data_cuts=data_cuts)
\end{lstlisting}
Once a configuration file has been created, it can be permanently added to the
locally installed \texttt{pwv\_kpno}\ package by running
\begin{lstlisting}
>>> from pwv_kpno.package_settings import \
        settings
>>>
>>> settings.import_site_config(
'./cerro_tololo.ecsv')
\end{lstlisting}
This command only needs to be run once, after which \texttt{pwv\_kpno}\ will retain the new model on disk, even in between
package updates. The package can then be configured to use the new model by running
\begin{lstlisting}
>>> settings.set_site('cerro_tololo')
\end{lstlisting}
After setting \texttt{pwv\_kpno}\ to model a specific site, the package will return atmospheric models and PWV data
exclusively for that site. It is important to note that this setting is not persistent. When \texttt{pwv\_kpno}\ is first
imported into a new environment the package will always default to using the standard model for Kitt Peak, and the
above command will have to be rerun.
A complete summary of package settings can be accessed using attributes of the \code{settings} object.
\begin{lstlisting}
>>> settings.set_site('kitt_peak')
>>> print(settings.site_name)
kitt_peak
>>> print(settings.available_sites)
['kitt_peak', 'cerro_tololo']
>>> print(settings.receivers)
['AZAM', 'KITT', 'P014', 'SA46', 'SA48']
>>> print(settings.primary_rec)
KITT
>>> print(settings.supplement_rec)
['AZAM', 'P014', 'SA46', 'SA48']
\end{lstlisting}
The configuration file for the currently modeled location can be exported by running
\begin{lstlisting}
>>> settings.export_site_config(
'./current_site_name.ecsv')
\end{lstlisting}
\section{Validation} \label{sec:validation}
\begin{figure*}
\epsscale{.9}
\plotone{spectral_correction.pdf}
\caption{Observations of BD+26 2606 were taken using the R. C.
Spectrograph at Kitt Peak National Observatory. The top panel
demonstrates the modeled PWV transmission function at the time of
observation (blue) smoothed by a Gaussian kernel (orange). The middle
panel shows the observed spectrum (purple) and the spectrum
corrected using catalog values (green). These are compared against the
catalog corrected spectrum multiplied by the modeled transmission (orange)
and the observed spectrum divided by the smoothed transmission (blue).
Residuals between the catalog corrected and model corrected spectrum
are shown in the bottom panel. H$\alpha$ lines are highlighted in
red and O$_2$ lines in grey.}
\label{fig:spectral_correction}
\end{figure*}
From 2010 September 16th through September 20th, an observation
run was performed on 18 standard stars using the R.C. spectrograph
on the Mayall 4m telescope. To reduce flux
loss due to atmospheric dispersion, the spectrograph was configured to
use a wide 7\arcsec\ slit. Observations were recorded between 5,500 and
10,200 \r A with an average dispersion of 3.4 \r A per pixel. Seeing for
all observations varied between 1 and 2\arcsec.
As an example, Figure \ref{fig:spectral_correction} shows the SED of BD+26 2606 observed at an
airmass of 1.67. To flux-calibrate the observed spectrum, low-airmass
observations were taken of BD+17 4708 each night. This minimized the
introduction of additional telluric effects in the calibrated
spectrum. To correct the observed spectrum for atmospheric effects, the
absorption in the standard star was scaled to match the airmass of the
other observations following the prescription of \citet{wade88}.
Note that the atmospheric models used by \texttt{pwv\_kpno}\ do not directly
account for the smoothing that occurs in observed spectra due to a
spectrograph's spectral resolution function. As a result, directly dividing
the observed spectrum by the modeled transmission will produce a very high,
unphysical flux for wavelengths where the transmission function is saturated.
To account for any saturated features, the modeled transmission is first
binned to approximately match the observed spectrum's resolution. The
transmission is then smoothed further using a Gaussian kernel.
To correct for atmospheric effects using the \texttt{pwv\_kpno}\ package, the
observed spectrum is divided by the smoothed PWV
transmission function. We note that the observed spectrum was taken
before a GPS receiver was installed at Kitt Peak. This means that
no direct PWV measurements are available for the time of observation,
and we instead determine the modeled PWV transmission using
measurements from GPS receivers on the surrounding desert floor.
In the model-corrected spectrum, the
absorption feature at 6,550 \r A is an H$\alpha$ line intrinsic to the
observed spectrum. Furthermore, the absorption features at 6,875 and 7,650
\AA\ are caused by O$_2$ absorption. Since \texttt{pwv\_kpno}\ only provides models
for the PWV absorption, these two features remain uncorrected. Given that
there are no emission lines relative to the continuum, the feature at 9,350
\AA\ is categorized as an unidentified artifact from the reduction process.
Corrections for the PWV absorption features agree reasonably well between
the catalog and model corrected spectra. The largest deviations between the
corrected spectra occur redward of 9,000 \AA. Some of these deviations can
be attributed to cloudy observation conditions, creating large spatial
and time variations in the PWV concentration along the line of sight
\citep{querel14}. However, correcting this feature is also difficult since
it is in fact a number of thin, saturated lines that have been blended
together. Overall, we find that the model struggles to correct the observed
spectrum past 9,000 \r A, but performs well enough to be used to
satisfactorily correct photometric observations.
\section{Introduction}
Unmanned Aerial Vehicles (UAVs) are increasingly being sought in applications such as surveillance, environmental monitoring, and agriculture due to their ability to monitor large areas in a short period of time. {One bottleneck in practice that limits their application is the limited battery capacity, especially for multi-rotor UAVs. One way to overcome this bottleneck is to use a team of aerial and ground vehicles for such tasks, in which the UGV can work as a mobile recharging station and will recharge the UAV during long-range operations. The key to achieving such cooperation on the decision-making level is to design efficient routing algorithms that can tell robots \textit{which task node to visit next}, and \textit{when and where the UAV should be recharged}. Moreover, the rate of battery discharge of a UAV is stochastic in the real world. The routing algorithm should be able to deal with such uncertainties, e.g., trade off task performance with failure risks.}
{In this paper, we consider the cooperative routing problem for a team of a single UGV that can work as a mobile charger and a single energy-constrained UAV, in which the UAV and UGV need to complete a task by visiting task nodes distributed throughout the task area. The UGV can only move on the road network, but the UAV can fly directly between any pair of nodes (assuming it has enough charge). Given the task nodes to visit and the stochastic energy consumption model of the UAV, we are interested in finding a routing strategy for the UAV and the UGV such that the expected time to finish the task is minimized and the probability of running out of charge is less than a user-defined tolerance. Such problems can be formulated within the stochastic programming (SP) framework \cite{du2020cooperative}. However, since we need to consider not only routes but also recharge decisions and chance constraints, an SP-based formulation would involve too many variables, rendering it solvable only for very small instances.}
\begin{figure}
\centering
\includegraphics[scale=0.65]{figures/rendezvous.pdf}
\caption{An illustrative example of the rendezvous problem considered in this paper. When the UAV and UGV are executing tasks, they need to decide when and where to rendezvous to replenish the battery of the UAV while minimizing the travel time of the UAV and satisfying the risk constraint induced by stochastic energy consumption. When they need to rendezvous, they will deviate from their task and meet at a chosen rendezvous location.}
\label{fig:illustrative_example}
\end{figure}
{In this paper, we propose to find the routing strategy in two decoupled phases. In the first phase, a higher-level planner finds deterministic routes for the UAV and the UGV based on the task requirement, without considering stochasticity in energy consumption. In the second phase, a risk-aware planner refines the routes generated in the previous phase to decide when and where to rendezvous so as to satisfy the chance constraint while minimizing the time to finish the UAV task\footnote{We focus on minimizing the time taken for the UAV task instead of the total time, which would be the maximum of the UAV and UGV travel times. If the UGV task takes longer than the UAV's, then once the UAV's task is done, the UGV simply executes the remaining portion of its route. Therefore, optimizing the UAV time is appropriate. Furthermore, in many applications \cite{yu2018algorithms}, the UGV's task is simply to act as a mobile recharging station.}. Our focus in this paper is mainly on the second phase. We formulate the risk-aware refinement as a CMDP, in which the chance constraint is modeled as a secondary cost in the constraint. To the best of our knowledge, this is the first CMDP-based formulation for UAV-UGV routing problems under energy consumption uncertainty. We use Linear Programming (LP) to find the optimal stationary policy. We validate our formulation and solution in an Intelligence, Surveillance, and Reconnaissance (ISR) mission.}
{
The main contributions of this paper are:
\begin{itemize}
\item We show how to formulate the stochastic cooperative UAV and UGV rendezvous problem with energy constraint as a CMDP and use LP to solve the problem optimally in polynomial time.
\item We demonstrate the effectiveness of the formulation and the solution using realistic models obtained from field data.
\end{itemize}
}
The rest of this paper is organized as follows. We first give a brief overview of the related work in Section \ref{sec:rel_wrk}. Then we discuss the general problem definition along with its mathematical formulation as a Chance-Constrained Markov Decision Process (CCMDP) in Section \ref{sec:prb_frml}. Next in Section \ref{sec:sol}, we discuss the solution to this problem as an LP instance. Finally, we present the results from numerical simulations in Section \ref{section:numerical simulation} with conclusions in Section \ref{section:conclusion}.
\section{Related Work}
\label{sec:rel_wrk}
The routing of energy-constrained UAVs with stationary recharging stations or assistive UGVs has been studied extensively \cite{otto2018optimization, li2021ground, choudhury2021efficient, maini2018persistent}. Even with deterministic environmental changes or stationary conditions, this problem can be reduced to the Traveling Salesman Problem (TSP) \cite{yu2019coverage, mathew2015planning}, making it an NP-hard problem.
The cooperative UAV and UGV routing problem has been studied from different perspectives and thus received various formulations in the literature. It is most commonly formulated as a type of vehicle routing problem. Manyam et al.~\cite{manyam2019cooperative} use a team of one UAV and one UGV with communication constraints to cooperatively visit targets. Along with an exact solver to Mixed Integer Linear Programming (MILP) formulation, they also provide heuristic reduction to the generalized traveling salesman problem (GTSP). Maini et al. \cite{maini2019cooperative} present a two-fold strategy: first, they identify feasible rendezvous points, then they formulate a MILP to find the optimal routes for the UAV and UGV. Thayer et al.~\cite{thayer2021adaptive} present a solution to the Stochastic Orienteering Problem, where the objective is to maximize the sum of rewards associated with each visited node while constrained by the maximum budget over edges with stochastic cost.
Murray and Chu~\cite{murray2015flying} introduced the flying sidekick TSP (FSTSP) for parcel delivery systems, which was later adopted in last-mile delivery applications using drones~\cite{agatz2018optimization, ha2018min}. In the literature, the term multi-echelon scheme is often used for systems where delivery consists of multiple layers. Specifically, the two-echelon vehicle routing problem (2E-VRP) is concerned with finding minimal-cost routes to deliver packages with trucks/UGVs and drones~\cite{li2021ground, liu2019cooperative}. An important differentiation from the original vehicle routing problem is the synchronization of UAV and UGV tasks.
Learning-based approaches have also been used to address cooperative UAV and UGV routing problems. Ermugan et al.~\cite{ermaugan2022learning} propose a two-phase approach: first, they find a route for the UAVs without taking the energy constraints into account; then, the planner learns to insert recharging stations into the route and replans a new TSP route. Reinforcement learning has also proven to be a viable approach to this problem \cite{wu2021reinforcement}.
In our previous work, we studied cooperative planning with a single UGV and an energy-constrained UAV as well~\cite{tokekar2016sensor, yu2018algorithms, yu2019coverage}. Our approach in~\cite{tokekar2016sensor} demonstrated how to maximize the number of sites visited on a single charge in conjunction with the ability to land a UAV on top of a UGV to be transported to the next take-off site. We extended this in~\cite{yu2018algorithms} to allow the UAV to also be recharged while either being transported or stationary on the UGV\@. We extended the latter to the area coverage path planning problem by formulating it as a GTSP~\cite{yu2019coverage}. Here, we extend this body of work by introducing stochasticity in the UAV's energy consumption and by assuming that the UGV has its own required set of tasks to carry out. To the best of our knowledge and based on the presented literature review, none of these works takes into account the stochastic nature of energy consumption.
\section{Problem Formulation}
\label{sec:prb_frml}
The cooperative routing problem studied in this paper involves one UAV and one UGV\@. The UAV and the UGV are executing tasks, which are given by task planners as described in Section \ref{subsection: UAV_task_planning}. The UAV needs to visit a sequence of task nodes in order to finish its task, but its battery capacity may not be enough to finish the task in a single flight without recharging. Moreover, the energy consumption of the UAV is stochastic. The UAV needs to decide \textit{when} and \textit{where} it should rendezvous with the UGV to replenish its battery while minimizing the total travel time to finish the task. When the UAV decides to rendezvous with the UGV, both vehicles take a detour from their respective tasks and return to them after recharging.
At a high level, the problem studied in this paper is stated below.
\begin{problem}[Risk-aware UAV-UGV rendezvous]\label{problem:risk-aware-rendezvous}
Given a route of nodes for the UAV $\mathcal{T}_a$, a route of nodes for the UGV $\mathcal{T}_g$, and the stochastic energy consumption model and battery capacity of the UAV, find a policy for the UAV to decide when and where to rendezvous with the UGV for recharging such that the total travel time is minimized and the probability of running out of charge during flight is less than a given tolerance.
\end{problem}
Next, we present the setup and the assumptions used in this paper. We then present our CCMDP-based formulation and show how to transform the CCMDP into a CMDP\@.
\subsection{Environment and Task Model}
Our problem considers a two-dimensional Euclidean space, which
consists of a road network graph $G=(V_r, E)$ and a set of task points $V_t$ for the UAV to visit.
The UGV has to move on the road network, and its task is specified as a sequence of road-network nodes. The UAV's task is specified by a task planner using nodes in $V_t$; more details on the task planner are given in Section \ref{subsection: UAV_task_planning}. Both the UAV and the UGV follow their task specifications, visiting the task nodes in order, and deviate from the task route to rendezvous when necessary.
\subsection{Vehicle Motion Model}
The UGV moves at a fixed speed $v_g$ when it transits between two nodes in the road network. When the UAV transits between two nodes, it flies at either the best endurance speed, $v_{be}$, or the best range speed, $v_{br}$. The best endurance speed is the speed at which the energy consumption rate is minimized; at this speed, the propellers of the multirotor operate more efficiently than in hover, and the UAV achieves its greatest flight duration. By contrast, the best range speed minimizes the energy consumed per unit distance traveled. Flying at $v_{br}$ results in a shorter flight duration than at $v_{be}$, but allows a greater range to be traveled per unit of energy. In a no-wind condition, the best range speed is always greater than the best endurance speed.
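As a concrete illustration, both characteristic speeds can be recovered from the power model by a simple grid search: $v_{be}$ minimizes $P(v)$, while $v_{br}$ minimizes the energy per unit distance $P(v)/v$. The sketch below uses the coefficients reported later in Table \ref{table:coefficient}, with the weight fixed at its 2.3 kg mean and zero wind; the speed grid itself is our own illustrative choice.

```python
# Grid-search sketch for the best endurance and best range speeds.
# Coefficients follow Table I (weight fixed at 2.3 kg, zero wind);
# the speed grid is an illustrative assumption.
B = (-88.77, 3.53, -0.42, 0.043, 107.5, -2.74)  # b0..b5
W = 2.3  # nominal vehicle weight [kg]

def power(v, w=W):
    """Power draw [W] at airspeed v [m/s] for weight w [kg]."""
    b0, b1, b2, b3, b4, b5 = B
    return b0 + b1*v + b2*v**2 + b3*v**3 + b4*w + b5*v*w

speeds = [1.0 + 0.1*i for i in range(291)]       # 1.0 .. 30.0 m/s
v_be = min(speeds, key=power)                    # minimizes power draw
v_br = min(speeds, key=lambda v: power(v) / v)   # minimizes energy per meter
```

On this model the search lands near 9 m/s for $v_{be}$ and near 14 m/s for $v_{br}$, consistent with the speeds used in the experiments.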
\subsection{Recharging and Stochastic Energy Consumption Model}
We assume that it takes constant time $T$ to finish the recharging process, which includes the landing/take-off time and battery-swapping time.
In this paper, we only consider the power consumption when a UAV traverses the route with the assumption that the power needed for computation, takeoff, and landing has been reserved by the power management system. As described in the transition model, the UAV will fly at a fixed speed when it transits between two nodes in the environment. However, given that constant speed, the energy consumption is stochastic considering the disturbances in the environment.
\iffalse
One example is
\begin{equation*}
P_{\bm{\Theta}}(v)=\bm{\xi}_0 + \bm{\xi}_1 v + \bm{\xi}_2 v^2 + \bm{\xi}_3 v^3,
\end{equation*}
where $\bm{\Theta}=[\bm{\xi}_0, \bm{\xi}_1, \bm{\xi}_2, \bm{\xi}_3]$.
\fi
Given the distance $l$ between two task nodes and the flying speed $v$, the energy consumption can be computed as
\begin{equation}\label{equation:energy_consumption}
e_{l, v} = \int_{t=0}^{\frac{l}{v}} P_{\bm{\Theta}}(v) dt,
\end{equation}
where $P_{\bm{\Theta}}(v)$ is the power consumption of the UAV when it flies at a speed $v$, and $\bm{\Theta}$ is a vector of parameters for stochastic variables.
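Because the speed is held constant over a leg, the integral reduces to $e_{l,v} = P_{\bm{\Theta}}(v)\, l / v$. A minimal Monte Carlo sketch of how a stochastic power draw turns into a distribution over discretized battery levels (the quantity needed later for the MDP transitions) is given below; the Gaussian power draw and the bin sizes are illustrative assumptions, not the fitted model used in the experiments.

```python
import random

# Monte Carlo sketch: distribution of the discretized battery level after
# one leg. The Gaussian power draw is an illustrative stand-in for the
# fitted stochastic model; bin 0 (charge <= 0) maps to the failure state.
random.seed(0)

def leg_energy_kj(l, v, mean_power_w=140.0, sd_power_w=10.0):
    """One sample of e = P(v) * l / v, in kJ."""
    p = random.gauss(mean_power_w, sd_power_w)  # stochastic power draw [W]
    return p * (l / v) / 1000.0

def level_distribution(soc_kj, l, v, bin_kj=24.0, n_bins=10, n=20000):
    counts = [0] * (n_bins + 1)
    for _ in range(n):
        remaining = soc_kj - leg_energy_kj(l, v)
        if remaining <= 0.0:
            b = 0                                # out of charge
        else:
            b = min(n_bins, int(remaining // bin_kj) + 1)
        counts[b] += 1
    return [c / n for c in counts]

dist = level_distribution(soc_kj=48.0, l=2000.0, v=14.0)
```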
\subsection{Rendezvous Model}
\begin{figure}
\centering
\includegraphics[scale=0.75]{figures/rendezvous_process.pdf}
\caption{First step in the rendezvous process. The UGV (blue triangle) needs to deviate from its task node to rendezvous with the UAV at the rendezvous point (pink star). The rendezvous paths are in dashed lines.}
\label{fig:rendezvous_process}
\end{figure}
\iffalse
\subsubsection{No communication}
When the UAV reaches a node and decides to rendezvous with the UGV, it will not send a rendezvous request to UGV. Instead, it will decide the rendezvous position and time based on the deterministic motion model of the UGV\@. Let $\mathcal{P}_g: T \to \mathbb{R}^2$ be the path executed by the UGV and $d: \mathbb{R}^2 \times \mathbb{R}^2 \to \mathbb{R}_{+}$ be the distance metric between two points. We use $p_a(t_0) \in \mathbb{R}^2$ and $p_g(t_0) \in \mathbb{R}^2$ to denote the position of the UAV and UGV when UAV decides to rendezvous at time $t$. The task node after $p_a(t)$ for the UAV is denoted as $p_a(t^{+})$. Then problem to find the rendezvous point can be formulated as
\begin{align}
\min_{p_r \in \mathcal{P}_g(t:\rm{end})} & \Delta t + \frac{d(p_r, p_a(t^{+}))}{v_a} \\
\rm{s.t.}~& p_g(t_0 + \Delta t) = p_r \\
& \frac{d(p_a(t_0), p_r)}{v_a} \leq \Delta t
\end{align}
\fi
When the UAV reaches a node in $\mathcal{T}_a$ and decides to rendezvous with the UGV, the UAV and UGV will temporarily deviate from their tasks to complete the rendezvous process. There are two steps in the rendezvous process. In the first step, the UAV and UGV meet at a rendezvous point, as shown in Fig. \ref{fig:rendezvous_process}, and in the second step, they go to their next task nodes in $\mathcal{T}_a$ and $\mathcal{T}_g$, respectively. We want to optimize the time consumed in these two steps to find the optimal rendezvous points.
Let $d: \mathbb{R}^2 \times \mathbb{R}^2 \to \mathbb{R}_{+}$ be the distance metric between two points in the Euclidean space. We use $d_G: V_r \times V_r \to \mathbb{R}_{+}$ to denote the length of the shortest path between two nodes in the road network. We use $\mathcal{T}_a(k)$ to denote the position of the UAV when it decides to rendezvous at the $k$th node in its task route and $\mathcal{T}_a(k+1)$ to denote the next position to visit for UAV after the rendezvous. With a slight abuse of notation, we use $\mathcal{T}_g(k)$ to denote the position of the UGV in the road network when the UAV decides to rendezvous at the $k$th node in its task route. With the above notations, the problem to find the rendezvous point can be stated below.
\begin{problem}[Where to rendezvous]\label{problem:where_to_rendezvous}
Given the positions of UAV ($\mathcal{T}_a(k)$) and UGV ($\mathcal{T}_g(k)$) at the beginning of the rendezvous process, UAV's next position to go $\mathcal{T}_a(k+1)$, UAV's flight speed $v_a$, UGV's transition speed $v_g$, and the road network $G$, we want to find a rendezvous point $p_r \in G$ such that the time consumed in the rendezvous process is minimized. Mathematically,
\begin{align}
& \min_{\Delta \geq 0, ~p_r \in G} ~ \Delta + \frac{d(p_r, ~\mathcal{T}_a(k+1))}{v_a} \\
&\text{s.t.}~ \Delta = \max (\frac{d_G(\mathcal{T}_g(k), ~p_r)}{v_g}, ~\frac{d(\mathcal{T}_a(k), ~p_r)}{v_a}).
\end{align}
\end{problem}
In the first step of the rendezvous process, if the UAV or UGV reaches the rendezvous point first, it has to wait for the other vehicle. Therefore, the time consumed in the first step is determined by whichever vehicle reaches the rendezvous point later. We encode this fact in the optimization problem by introducing the variable $\Delta$, which describes the maximum time needed for both the UAV and the UGV to reach the rendezvous point. The time consumed in the second step of the rendezvous process is the time needed for the UAV to fly back to its next task node.
Problem \ref{problem:where_to_rendezvous} can be solved by iterating over the nodes in the road network, as we do in the case study. However, such a method increases the time needed to extract transition information for the CMDP\@. More efficient ways of solving Problem \ref{problem:where_to_rendezvous} are left for future work.
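This node enumeration can be sketched as follows; the road graph, positions, and speeds below are hypothetical, and the road distances $d_G$ from the UGV's current node are assumed precomputed (e.g., by Dijkstra's algorithm).

```python
import math

# Brute-force sketch of Problem 2: evaluate every road node as a
# candidate rendezvous point. All coordinates and distances are
# hypothetical; d_G holds precomputed road distances from the UGV.
V_A, V_G = 14.0, 4.5                          # UAV / UGV speeds [m/s]
pos = {'a': (0.0, 0.0), 'b': (500.0, 0.0), 'c': (500.0, 400.0)}
d_G = {'a': 0.0, 'b': 500.0, 'c': 900.0}      # road distance to each node
uav_now, uav_next = (300.0, 300.0), (800.0, 300.0)

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def rendezvous_cost(node):
    # Delta: whichever vehicle arrives later fixes the meeting time.
    delta = max(d_G[node] / V_G, dist(uav_now, pos[node]) / V_A)
    # Plus the flight back to the UAV's next task node.
    return delta + dist(pos[node], uav_next) / V_A

p_r = min(pos, key=rendezvous_cost)
```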
\subsection{Chance-Constrained Markov Decision Process}
\begin{figure}
\centering
\includegraphics[scale=0.65]{figures/state_transition.pdf}
\caption{State transition graph in CMDP.}
\label{fig:state_transition}
\end{figure}
One natural choice for modeling the sequential decision-making problem described in Problem \ref{problem:risk-aware-rendezvous} is an MDP\@. In this section, we first show how to formulate Problem \ref{problem:risk-aware-rendezvous} as a CCMDP and then, in the following section, show how to transform the CCMDP into a CMDP.
The rigorous definition of an MDP can be found in \cite{bertsekas2012dynamic}. Here we define the MDP from the perspective of the application. The MDP corresponding to Problem \ref{problem:risk-aware-rendezvous} is defined as a tuple $\mathcal{M} = (S, A, T, C, s_0)$, where
\begin{itemize}[leftmargin=*]
\item $S = \mathcal{T}_a \times S_g \times \mathcal{T}_g \times \mathcal{B} ~\cup~ \{s_{ob}, ~s_l\}$ is the state space of the problem, where $\mathcal{T}_a$ is used here as an unordered set describing all possible positions of the UAV in a task route; $S_g$ is the set of possible UGV positions, which is needed to compute rendezvous points; $\mathcal{T}_g$ is used here as an unordered set describing the task nodes the UGV will visit.
$\mathcal{T}_g$ is included in the state space to inform the MDP about the next node the UGV needs to visit after a rendezvous; without this information, the system would be non-Markovian. $\mathcal{B}$ is a discretized variable describing the state of charge of the UAV; $s_{ob}$ is a failure state representing the out-of-charge condition, to which the UAV transits whenever it cannot finish its task route; $s_{l}$ is an absorbing terminal state, to which the UAV transits when it either finishes its route or reaches the failure state. {One illustrative example of state transitions is given in Fig. \ref{fig:state_transition}.}
\item $A$ is the action space of the UAV\@. If the UAV has not finished its route and is not in the failure state, there are four actions to choose from:
\begin{enumerate*}
\item $v_{be}$: move to the next node in $\mathcal{T}_a$ with the best endurance velocity.
\item $v_{br}$: move to the next node in $\mathcal{T}_a$ with the best range velocity.
\item $v_{be\_be}$: rendezvous with the best endurance velocity.
\item $v_{br\_br}$: rendezvous with the best range velocity.
\end{enumerate*}
When the UAV is in the failure state or has finished its route, there is only one action, which makes the system transit to the terminal state $s_l$.
\item $T(s, s^{\prime}, a) = P( s^{\prime} \mid s, a)$ is the transition function, which depends on the stochastic energy consumption model. When the UAV chooses to move forward to its next task node, its battery state at the destination node is a random variable that depends on the current battery state and Equation \eqref{equation:energy_consumption}. Since we have discretized the battery charge levels at each node, the probability of reaching the destination node with a given battery charge can be calculated using Equation \eqref{equation:energy_consumption}. When the UAV cannot reach the next task node, i.e., it runs out of charge with non-zero probability, it transits to the failure state $s_{ob}$ with that probability. When the UAV chooses to rendezvous, a rendezvous point is first computed by solving Problem \ref{problem:where_to_rendezvous}. Then the distribution of the battery remaining when it reaches the rendezvous point can be computed based on Equation \eqref{equation:energy_consumption}; the non-positive portion of the distribution corresponds to the failure probability. After recharging, the UAV transits to its next task node starting with a full battery.
When the UAV transits to the failure state or finishes the task route, it moves to the terminal state $s_l$ with probability 1, as shown in Fig. \ref{fig:state_transition}, and the system then loops in $s_l$.
\item $C(s, a, s^{\prime})$ is the cost function for the UAV\@. We define it as the time needed to transit between two states. If the UAV chooses to move to the next node, the cost is the time consumed during that transition. If the UAV chooses to rendezvous, the cost is the sum of the time consumed in the two steps of the rendezvous process. Transitions into the failure state or the terminal state incur zero cost.
\item $s_0$ is the initial state of the system.
\end{itemize}
\begin{definition}[Risk]
Let $\pi$ be a policy, the risk of the policy given initial state $s_0$ is defined as
\begin{equation}
\rho^{\pi} (s_0) = \mathbf{P}(\exists ~t: s_t = s_{ob} \mid s_0).
\end{equation}
\end{definition}
We seek the optimal policy $\pi^*$ that satisfies
\begin{align}
\pi^* = & \argmin_{\pi} \mathbb{E}\left [ \sum_{i=0}^{\infty} C(s_i, \pi(s_i)) \right ] \\
\text{s.t.}~&~ \rho^{\pi}(s_0) \leq \delta,
\end{align}
where $\delta$ is the user-specified risk tolerance.
\subsection{Constrained Markov Decision Process}
We can transform a CCMDP into a CMDP by introducing a new cost function $\overline{C}: S \times A \times S \to \{0, 1\}$ \cite{geibel2005risk}. As shown in Fig. \ref{fig:state_transition}, when the system transits from a non-failure state to the failure state $s_{ob}$, it incurs a cost of one; all other transitions incur zero cost. The new cost function $\overline{C}$ is defined as
\begin{equation}
\overline{C}(s, a, s^{\prime}) =
\begin{cases*}
1 & if $s \neq s_{ob}$ and $s^{\prime}=s_{ob}$ \\
0 & otherwise.
\end{cases*}
\end{equation}
As shown in Proposition 4.1 of \cite{geibel2005risk}, the risk can be expressed using the new cost function $\overline{C}$ as
\begin{equation}
\rho^{\pi}(s_0) = \mathbb{E} \left [ \sum_{i=0}^{\infty} \overline{C}(S_i, \pi(S_i)) \mid s_0 \right].
\end{equation}
As a result, the CCMDP problem can be formulated as
\begin{align}
\pi^* = & \argmin_{\pi} \mathbb{E}\left [ \sum_{i=0}^{\infty} C(s_i, \pi(s_i)) \right ] \\
\text{s.t.}~&~ \mathbb{E} \left [ \sum_{i=0}^{\infty} \overline{C}(S_i, \pi(S_i)) \mid s_0 \right] \leq \delta.
\end{align}
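For a fixed policy, the constraint's left-hand side can be evaluated directly by propagating the expected accumulated cost $\overline{C}$ through the chain, as in the sketch below; the toy transition probabilities are hypothetical.

```python
# Risk of a fixed policy evaluated as the expected accumulated cost
# C_bar. The toy chain and its probabilities are hypothetical.
P = {  # P[s] = list of (next_state, probability) under the fixed policy
    's0':   [('s1', 0.9), ('s_ob', 0.1)],
    's1':   [('s_l', 0.95), ('s_ob', 0.05)],
    's_ob': [('s_l', 1.0)],
    's_l':  [],  # terminal state: no further cost accrues
}

def c_bar(s, s_next):
    # Unit cost only on entering the failure state s_ob.
    return 1.0 if s != 's_ob' and s_next == 's_ob' else 0.0

def risk(s):
    # rho(s) = sum_{s'} P(s' | s) [ C_bar(s, s') + rho(s') ]
    return sum(p * (c_bar(s, s2) + risk(s2)) for s2, p in P[s])

rho0 = risk('s0')  # 0.1 + 0.9 * 0.05 = 0.145
```

With a tolerance of $\delta = 0.1$, this fixed policy would violate the constraint, so an optimal solution would shift probability mass toward safer actions.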
\section{Solutions to CMDP}
\label{sec:sol}
A CMDP can be solved using Linear Programming (LP) \cite{altman1999constrained, thiebaux2016rao}. The decision variables $y$ in the LP are the occupancy measures for each state-action pair, defined as
\begin{equation}
y(s, a) = \sum_{t} \text{Pr} (S_t = s, A_t = a).
\end{equation}
The LP is formulated as:
\begin{align}
& \min_{y(s, a), \forall s, a} \sum_{(s, a) \in S \times A} y(s, a) C(s, a) \\
\text{s.t.}~ & \sum_{s, a} y(s, a) \overline{C}(s, a) \leq \delta \label{problem:LP_cost_constraint}\\
&\begin{aligned}
\sum_{a^{\prime}} y(s^{\prime}, a^{\prime}) = \mathbb{I}(s^{\prime}, s_0)+\sum_{s, a} y(s, a) \text{Pr}(s^{\prime} \mid s, a)~\\
\forall s^{\prime} \in S \setminus \{s_l\}
\end{aligned}\label{problem:LP_flow_constraint}\\
& y(s, a) \geq 0 ~ \forall s, a \label{problem:LP_occupancy_measure_constraint},
\end{align}
where $\mathbb{I}(s^{\prime}, s_0)$ is an indicator function that returns 1 when $s^{\prime}=s_0$ and 0 otherwise. This LP corresponds to the dual linear program for MDPs \cite{altman1999constrained} with one extra cost constraint \eqref{problem:LP_cost_constraint}, which enforces that the expected cost of entering the failure state be lower than the predefined risk tolerance. Constraint \eqref{problem:LP_flow_constraint} is a flow conservation constraint that defines valid occupancy measures in terms of the initial state and the transition probabilities (see \cite{altman1999constrained}, ch. 8 for details). The last constraint \eqref{problem:LP_occupancy_measure_constraint} guarantees that $y(s, a)$ is non-negative.
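The structure of the flow conservation constraint can be checked numerically on a toy instance: for a valid occupancy measure, the out-flow of every non-terminal state equals its in-flow plus the initial-state indicator. The states, actions, and probabilities below are hypothetical.

```python
# Sketch: verify the flow-conservation constraint for a hand-picked
# occupancy measure on a toy transient MDP (all numbers hypothetical).
# The terminal state s_l is excluded from the constraint, as in the LP.
S = ['s0', 's1', 's_ob']                      # non-terminal states
s0 = 's0'
Pr = {                                        # Pr[(s, a)][s']
    ('s0', 'forward'):    {'s1': 0.8, 's_ob': 0.2},
    ('s0', 'rendezvous'): {'s1': 1.0},
    ('s1', 'forward'):    {'s_l': 1.0},
    ('s1', 'rendezvous'): {'s_l': 1.0},
    ('s_ob', 'stop'):     {'s_l': 1.0},
}
# Occupancy measure of the deterministic policy "always rendezvous":
y = {sa: 0.0 for sa in Pr}
y[('s0', 'rendezvous')] = 1.0
y[('s1', 'rendezvous')] = 1.0

def flow_residual(sp):
    inflow = (1.0 if sp == s0 else 0.0) + sum(
        y[sa] * Pr[sa].get(sp, 0.0) for sa in y)
    outflow = sum(y[(s, a)] for (s, a) in y if s == sp)
    return outflow - inflow

residuals = [flow_residual(s) for s in S]
```

Under this occupancy measure the risk cost $\sum_{s,a} y(s,a)\overline{C}(s,a)$ is zero, since the policy never enters $s_{ob}$.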
If the LP admits a solution, we can construct the policy from the occupancy measures by normalizing them:
\begin{equation}\label{policy_traction}
\pi^*(s, a) = \frac{y(s, a)}{\sum_{a^{\prime}} y(s, a^{\prime})}~ \forall (s, a) \in S \times A,
\end{equation}
where $\pi^*(s, a)$ is the probability of taking action $a$ in the state $s$ in the optimal stationary randomized policy.
If Eq. \eqref{policy_traction} has a zero denominator, which suggests that state $s$ is not reachable from $s_0$, the policy for $(s, a)$ can be defined arbitrarily.
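A minimal sketch of this normalization step, with illustrative occupancy values:

```python
# Policy extraction sketch: normalize occupancy measures into action
# probabilities. The occupancy values here are illustrative.
y = {('s_k', 'rendezvous'): 0.997, ('s_k', 'forward'): 0.003}

def policy(s, a, occupancy):
    total = sum(v for (s2, _), v in occupancy.items() if s2 == s)
    if total == 0.0:
        return None   # state unreachable from s0: define arbitrarily
    return occupancy.get((s, a), 0.0) / total

p_rdv = policy('s_k', 'rendezvous', y)  # 0.997
```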
An illustrative example to explain the policy extracted from the solution of LP is given in Fig. \ref{fig:different_risk_tolerance_policy}. When the UAV reaches $s_{k}^a$, it has 20\% battery remaining. It can choose to rendezvous to replenish itself or move forward to its next task node $s_{k+1}^a$.
If the UAV chooses to rendezvous, the distribution of the battery remaining at the rendezvous point is shown in Fig. \ref{fig:different_risk_tolerance_policy}(a); the distribution corresponding to the forward action is given in Fig. \ref{fig:different_risk_tolerance_policy}(b). If we set $\delta=0.01$, the policy constructed from the LP chooses rendezvous with probability 0.997 and forward with probability 0.003.
Otherwise, if the UAV were to choose to move forward, it would have less than 10\% battery charge remaining with a probability of 0.2, which is not enough to ensure a rendezvous or forward action at $s_{k+1}^a$, implying a failure probability of at least 0.2. By contrast, if we set $\delta=0.5$, the policy chooses rendezvous with probability 0.504 and forward with probability 0.496. One reason for such a policy is that with probability 0.8 the UAV will have 10\% battery remaining when it moves to $s_{k+1}^a$, where a rendezvous with a shorter travel distance may be possible with a failure probability lower than the risk tolerance, as shown in Fig.~\ref{fig:different_risk_tolerance_policy}(c).
\begin{figure}
\centering
\includegraphics[width=0.38\textwidth]{figures/different_risk_tolerance.pdf}
\caption{An illustrative example to explain the policy constructed with occupancy measures from the LP\@. When the UAV is at state $s_{k}^a$ with State of Charge (SOC) equal to 20\%, it has two actions: move forward (blue dashed line) to its next task node $s_{k+1}^a$ or rendezvous with the UGV (red dashed line) to recharge itself. The distribution of SOC after the rendezvous action is shown in (a) and that for the forward action in (b). (c) shows the distribution of SOC if the UAV first chooses the forward action at $s_{k}^a$ and then chooses to rendezvous at $s_{k+1}^a$. }
\label{fig:different_risk_tolerance_policy}
\end{figure}
\section{Experiments}\label{section:numerical simulation}
\begin{figure*}[t]
\centering
\subfloat[]{
\includegraphics[width=0.25 \textwidth]{figures/task_scenario.pdf}
\label{fig:rendezvous_input}
}
\subfloat[]{
\includegraphics[width=0.28\textwidth]{figures/UAV_sample_route_qualitative_example0.1.pdf}
\label{fig:rendezvous_output_uav}
}
\subfloat[]{
\includegraphics[width=0.28\textwidth]{figures/UGV_sample_route_qualitative_example0.1.pdf}
\label{fig:rendezvous_output_ugv}
}
\caption{
A qualitative example to illustrate how the UAV and UGV rendezvous with each other under the policy $\pi$ obtained by solving the CMDP\@. The risk tolerance is set to $\delta=0.1$ in this case study. (a) The input of the risk-aware rendezvous problem. (b) One sample route of the UAV when executing the policy $\pi$. (c) The corresponding route of the UGV\@. Lowercase letters with or without primes denote the same road node visited at different times.}
\label{fig:rendezvous_illustrative_example}
\end{figure*}
In this section, we first present a qualitative example to show what the input and output look like for our problem. Next, we study how system parameters (different risk tolerances) influence the rendezvous behaviors between the UAV and the UGV\@. Then, we present quantitative results for the ISR application that motivates our research. Specifically, we will use Monte Carlo (MC) simulations to evaluate
\begin{enumerate*}
\item the satisfaction of the risk constraint for the policy constructed from LP;
\item the effectiveness of the policy in minimizing the expected task duration;
\item the risk tolerance-task duration Pareto curves.
\end{enumerate*}
Moreover, the running time of the LP for the CMDP is empirically evaluated. All experiments are conducted using Python 3.8 on a PC with an i9-8950HK processor. The LP is solved using Gurobi 9.5.0.
\subsection{{Task route planner}} \label{subsection: UAV_task_planning}
{The task routes $\mathcal{T}_a$ and $\mathcal{T}_g$ used in Problem \ref{problem:risk-aware-rendezvous} can either be generated jointly by existing task planners \cite{manyam2019cooperative, yu2018algorithms} or separately by different task planners. In our case study, the task for the UGV is to persistently monitor nodes A, B, and C (blue squares in Fig. \ref{fig:rendezvous_illustrative_example}). The task nodes for the UAV are the red dots in Fig. \ref{fig:rendezvous_illustrative_example}, and the task route (from node 0 to 18 and back to 0) is generated by a Traveling Salesman Problem (TSP) planner. }
\subsection{System Models}
\begin{table}[ht]
\centering
\caption{Coefficients for stochastic energy consumption model}
\begin{tabular}[t]{lcccccc}
\toprule
&$b_0$ &$b_1$ &$b_2$ &$b_3$ &$b_4$ &$b_5$\\
\midrule
{Value} &-88.77 &3.53 &-0.42 &0.043 &107.5 &-2.74\\
\bottomrule
\end{tabular}
\label{table:coefficient}
\end{table}
The UAV and UGV tasks are from our ongoing project on intelligence, surveillance, and reconnaissance (ISR), as shown in Fig. \ref{fig:rendezvous_input}. In this project, we are interested in the case where $\delta=0.1$. The UAV has about 240 kJ of energy, and its best range and best endurance speeds are 14 m/s and 9.8 m/s, respectively. The UGV moves at 4.5 m/s. The rendezvous process takes 300 seconds.
We consider two sources of stochasticity in the energy consumption model of the UAV: the weight and the wind velocity contribution to the longitudinal steady airspeed.
The deterministic energy consumption model of the UAV is a polynomial fit constructed from analytical aircraft modeling data, given as
\begin{equation}
P(\bm{v_\infty}) = b_0 + b_1 \bm{v_\infty} + b_2 \bm{v_\infty}^2 + b_3 \bm{v_\infty}^3 + b_4 \bm{w} + b_5\bm{v_\infty} \bm{w},
\end{equation}
where $b_0$ to $b_5$ are coefficients, and their experimental values are listed in Table \ref{table:coefficient}.
Figure~\ref{fig:power_consumption} shows the agreement between the polynomial regression fit model and the analytical data that it was derived from.
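As a quick numerical illustration (not from the paper's code; the function name and the sample operating point of 14 m/s and 2.3 kg are ours), the fitted model can be evaluated directly:

```python
def uav_power(v_inf, w):
    # Polynomial fit of the UAV power requirement, using the coefficients
    # b0..b5 from the coefficient table above (airspeed in m/s, weight in kg).
    b0, b1, b2, b3, b4, b5 = -88.77, 3.53, -0.42, 0.043, 107.5, -2.74
    return (b0 + b1 * v_inf + b2 * v_inf**2 + b3 * v_inf**3
            + b4 * w + b5 * v_inf * w)

# Power draw at the best range speed (14 m/s) for the mean weight (2.3 kg).
p = uav_power(14.0, 2.3)
```

At this operating point the fit evaluates to about 155 in the model's power units (the paper does not state the units of the regression data explicitly).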
Weight is randomly selected following a normal distribution with a mean of 2.3 kg and a standard deviation of 0.05 kg, $\bm{w} \sim \mathcal{N}(\mu_{\bm{w}}, \sigma^2_{\bm{w}})$.
Vehicle airspeed, $v_\infty$, is the sum of the vehicle ground speed, $v_g$, and the component of the wind velocity that is parallel to the vehicle ground speed, ignoring sideslip angle and lateral wind components:
\begin{equation}
v_\infty = \lvert \bar{v}_g + \cos(-\psi)\,\bm{\xi}_{a, b} \rvert .
\end{equation}
The longitudinal wind speed contribution is derived from two random parameters: wind speed and wind direction.
Wind speed is modeled using the Weibull probability distribution model of wind speed distribution, $\bm{\xi}_{a, b}$, with a characteristic velocity $a=1.5$ m/s and a shape parameter $b=3$. This is representative of a fairly mild steady wind near ground level.
Wind direction $\psi$ is the heading direction of the wind and is selected uniformly at random from $[0, 360)$ degrees.
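The stochastic model above can be sampled with Python's standard library (a minimal sketch; the helper names are ours, not the simulation code used in the paper):

```python
import math
import random

def sample_airspeed(v_ground, rng):
    # One draw of the longitudinal airspeed |v_g + cos(-psi) * xi| with
    # Weibull wind speed (scale a = 1.5 m/s, shape b = 3) and a uniformly
    # random wind heading psi in [0, 360) degrees.
    xi = rng.weibullvariate(1.5, 3.0)
    psi = rng.uniform(0.0, 360.0)
    return abs(v_ground + math.cos(math.radians(-psi)) * xi)

def sample_weight(rng):
    # Weight ~ N(2.3 kg, (0.05 kg)^2).
    return rng.gauss(2.3, 0.05)
```

Averaged over the wind heading, the longitudinal wind contribution is zero-mean, so sampled airspeeds scatter around the commanded ground speed.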
\begin{figure}
\centering
\includegraphics[width=0.28 \textwidth]{figures/powermod.pdf}
\caption{
Comparison of the polynomial regression fit model of the UAV power requirement with the analytical data used to derive it, at three weights and across 11 airspeeds. }
\label{fig:power_consumption}
\end{figure}
\begin{figure*}
\centering
\subfloat[Low risk tolerance ($\delta=0.01$)]{
\includegraphics[width=0.26\textwidth]{figures/UAV_sample_route_risk_level_example_0.01-1.pdf}
\label{fig:risk_example0.01_UAV}
}
\subfloat[Medium risk tolerance ($\delta=0.2$)]{
\includegraphics[width=0.26\textwidth]{figures/UAV_sample_route_risk_level_example_0.2-2.pdf}
\label{fig:risk_example0.2_UAV}
}
\subfloat[High risk tolerance ($\delta=0.5$)]{
\includegraphics[width=0.26\textwidth]{figures/UAV_sample_route_risk_level_example_0.5-3.pdf}
\label{fig:risk_example0.5_UAV}
} \\
\subfloat[UGV route corresponding to Fig. \ref{fig:risk_example0.01_UAV}]{
\includegraphics[width=0.26\textwidth]{figures/UGV_sample_route_risk_level_example_0.01-1.pdf}
\label{fig:risk_example0.01_UGV}
}
\subfloat[UGV route corresponding to Fig. \ref{fig:risk_example0.2_UAV}]{
\includegraphics[width=0.26\textwidth]{figures/UGV_sample_route_risk_level_example_0.2-2.pdf}
\label{fig:risk_example0.2_UGV}
}
\subfloat[UGV route corresponding to Fig. \ref{fig:risk_example0.5_UAV}]{
\includegraphics[width=0.26\textwidth]{figures/UGV_sample_route_risk_level_example_0.5-3.pdf}
\label{fig:risk_example0.5_UGV}
}\\
\subfloat[Quantitative results.]{
\begin{tabular}[t]{lccc}
\toprule
UAV data &$\delta=0.01$ &$\delta=0.2$ &$\delta=0.5$ \\
\midrule
{Empirical failure rate} &0.00975 &0.201 &0.497 \\
Average route travel time &11330 s &{10418} s &10404 s \\
Route travel time overhead &160.2\% &139.2\% &138.9\% \\
Average route travel distance &96.8 km &97.8 km &97.6 km \\
Travel distance overhead &58.2\% &60.5\% &60.3\% \\
Average \# of rendezvous &8.5 &6.4 &6.3 \\
\bottomrule
\end{tabular}
\label{table:risk_example_table}
}
\caption{ How different risk thresholds influence the rendezvous behaviors. The UAV route time at the best range speed is 4354 s, and the route distance is 61.0 km. (a) The UAV is very risk-averse, with a risk threshold of 0.01. (b) The UAV is less risk-averse, with a risk threshold of 0.2. (c) The UAV is risk-neutral, with a risk threshold of 0.5.}
\label{fig:risk_averse_example}
\end{figure*}
\subsection{Simulation Results}
An illustrative example of the input and the output of the problem considered is shown in Fig. \ref{fig:rendezvous_illustrative_example}. The input of the problem is shown in Fig. \ref{fig:rendezvous_input}, which consists of UAV task nodes (red dots), UGV task nodes (blue squares), and the road network (black nodes). Figs. \ref{fig:rendezvous_output_uav} and \ref{fig:rendezvous_output_ugv} show one sample route of the UAV and the UGV, respectively, when the system executes the policy computed by the LP\@. The UAV's route starts from node 0. When the UAV reaches node 4, it chooses to rendezvous with the UGV at a rendezvous point (denoted by a star), moving at the best range speed, and then goes to its next task node 5. Similarly, the UAV rendezvouses with the UGV when it reaches nodes 7, 8, 11, 14, and 15. The corresponding route of the UGV is presented in Fig. \ref{fig:rendezvous_output_ugv}.
Next, we show how different risk tolerances influence the rendezvous behaviors under our CMDP formulation. In these experiments, we set the risk tolerance $\delta$ to 0.01, 0.2, and 0.5. The results shown in Fig. \ref{fig:risk_averse_example} include sample routes for the UAV and the UGV and statistical data for the policies. Figs. \ref{fig:risk_example0.01_UAV}, \ref{fig:risk_example0.2_UAV}, and \ref{fig:risk_example0.5_UAV} are sample routes for the UAV when it executes the policy. Figs. \ref{fig:risk_example0.01_UGV}, \ref{fig:risk_example0.2_UGV}, and \ref{fig:risk_example0.5_UGV} are the corresponding routes of the UGV\@. The rendezvous point is denoted by a star. The SOC is annotated in red text next to the task node at which the UAV decides to rendezvous. Statistical data are summarized in Table \ref{table:risk_example_table}. In general, we observe that when the risk tolerance is small, the UAV tends to rendezvous more often and the average route travel time is higher. Here the average route travel time is computed by considering only trials in which the UAV finishes its task route. By contrast, as the risk tolerance is relaxed to a larger value, the average route travel time decreases, at the cost of a higher failure probability.
We also conducted several quantitative experiments to validate our formulation.
The first experiment uses MC simulation to check whether the failure probability is upper bounded by the set risk tolerance of 0.1.
As can be seen in Table \ref{table:MC_convergence}, as the number of MC trials increases, the empirical failure rate converges to a value close to, and below, the theoretical failure probability of 0.1. In the following experiments, we use $N_{\scaleto{MC}{3pt}}=2000$ for simulation.
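The convergence check above can be sketched in a few lines; here each policy rollout is abstracted as a Bernoulli draw with a known failure probability (an assumption for illustration; the paper rolls out the full CMDP policy instead):

```python
import random

def empirical_failure_rate(n_trials, p_fail, seed=0):
    # Monte Carlo sketch: estimate the failure probability by repeated
    # independent rollouts; the estimate converges toward p_fail as the
    # number of trials grows.
    rng = random.Random(seed)
    failures = sum(rng.random() < p_fail for _ in range(n_trials))
    return failures / n_trials
```

With a few thousand trials, the estimate is within a couple of percentage points of the true value, matching the convergence pattern in the table.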
\begin{figure}
\centering
\includegraphics[width=0.35\textwidth]{figures/comparison0.1_short.pdf}
\caption{Results comparisons for the CMDP policy and the greedy policies with $\delta=0.1$. The dashed black line represents the task duration if the UAV moves with the best range speed without considering battery limitation. }
\label{fig:comparison}
\end{figure}
\begin{table}[t]
\centering
\caption{Empirical evaluation of failure probability. $\delta =0.1$. }
\begin{tabular}[t]{lcccc}
\toprule
$N_{\scaleto{MC}{3pt}}$ (\# of MC trials)&$500$ &$1000$ &$3000$ &$5000$\\
\midrule
{Failure rate} &0.108 &0.105 &0.099 &0.097\\
\bottomrule
\end{tabular}
\label{table:MC_convergence}
\end{table}
\begin{figure}
\centering
\includegraphics[scale=0.40]{figures/task_duration_risk_tolerance.pdf}
\caption{Risk tolerance vs task duration Pareto Curve. }
\label{fig:pareto_curve}
\end{figure}
To validate that the policy constructed from the LP minimizes the expected task duration, we compare our policy with a greedy baseline. The greedy policy is: always fly at the best range speed and choose to rendezvous when the state of charge drops below a set threshold. {What we observe in experiments is that when the route of a UAV is long (for example, more than 15 nodes), the probability of finishing the route is close to zero for the greedy baseline no matter what threshold we set. To have a more informative comparison, we use only nodes 0 to 11 for the task route in the following experiments.} We consider four threshold values, $40\%$, $50\%$, $60\%$, and $70\%$, and denote the corresponding policies as \textit{Greedy-40}, \textit{Greedy-50}, \textit{Greedy-60}, and \textit{Greedy-70}. As shown in Fig. \ref{fig:comparison}, our policy guarantees a success probability above the set value of 0.9, and the expected travel time of the UAV is shorter than that of the baseline. Though the baseline can achieve a higher success probability in some cases, as shown in Fig. \ref{fig:comparison}, its expected task duration is still longer than that of our policy.
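The greedy rule can be sketched as a one-line decision function (the function name and the representation of SOC as a fraction in $[0,1]$ are our assumptions, not the paper's implementation):

```python
def greedy_action(soc, threshold):
    # Greedy baseline (sketch): fly at the best range speed and request a
    # rendezvous as soon as the state of charge drops below the threshold.
    return "rendezvous" if soc < threshold else "continue"
```

Higher thresholds (e.g. Greedy-70) trigger rendezvous earlier, trading travel time for a lower failure probability.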
The empirical Pareto curve for risk tolerance and the task duration is shown in Fig. \ref{fig:pareto_curve}. The green curve is the mean value and the shaded area is formed using one standard deviation from the mean. When the risk level is set to be a higher value, the UAV will tend to make more risky decisions, leading to a lower travel time at the cost of a higher failure probability.
\iffalse
\begin{table}[ht]
\centering
\caption{The gap between task duration and conditional task duration.}
\begin{tabular}[t]{lccccc}
\toprule
&$\delta = 0.01$ &$\delta = 0.03$ &$\delta = 0.05$ &$\delta = 0.08$ &$\delta = 0.1$\\
\midrule
Error &0.18\% &1.95\% &3.01\% &5.6\% &6.18\%\\
\bottomrule
\end{tabular}
\label{table:objective_gap}
\end{table}
\subsubsection*{Discussion}
In this paper, the objective for CMDP is to minimize the task duration the same as those in the related work \cite{chow2015trading}. One alternative objective is to minimize the task duration that only considering the route that the UAV successfully finishes its task route. However, as suggested \cite{chow2015trading}, when the risk level is small, for example, $\delta \leq 0.1$, the solution gap between these two formulations is small. We empirically compare these two formulations. As shown in Table \ref{table:objective_gap}, the gap is only about $6\%$ when $\delta \leq 0.1$, which is the case we are interested in for applications. {One way to remove this gap for our application is to modify the CMDP formulation. Instead of letting the state transit to the terminal state when the UAV runs out of battery, we allow the UAV and UGV to transit back to their tasks with an additional cost added to the main objective, which in practice represents the time consumed to do failure recovery for the UAV\@. Another direction to remove the gap would be to find new algorithms that can minimize the expected length of only those successful routes. }
\fi
The running time for the proposed routing problem consists of three parts: the first extracts transition information for the LP, the second constructs the LP model with Gurobi, and the last solves the LP\@. In our case study, there are about 54000 states; it takes about 6 minutes to extract transition information, 9 minutes to create the LP model, and about 1 second to solve the LP\@.
\iffalse
$$
\begin{aligned}
\mathbb{E} \left [ \rm{task~duration} \right ] = \mathbb{E} \left [ \rm{task~duration} \mid \rm{success} \right]P(\rm{success}) \\
+ \mathbb{E} \left [ \rm{task~duration} \mid \rm{failure} \right]P(\rm{failure})
\end{aligned}
$$
$$
\begin{aligned}
\frac{\mathbb{E} \left [ \rm{task~duration} \right ]}{P(\rm{success})}= \mathbb{E} \left [ \rm{task~duration} \mid \rm{success} \right] \\
+ \mathbb{E} \left [ \rm{task~duration} \mid \rm{failure} \right]\frac{P(\rm{failure})}{P(\rm{success})} \\
\leq \mathbb{E} \left [ \rm{task~duration} \mid \rm{success} \right]
+ \\
\mathbb{E} \left [ \rm{task~duration} \mid \rm{success} \right]\frac{P(\rm{failure})}{P(\rm{success})}
\end{aligned}
$$
$$
\begin{aligned}
\frac{\frac{\mathbb{E} \left [ \rm{task~duration} \right ]}{P(\rm{success})} - \mathbb{E} \left [ \rm{task~duration} \mid \rm{success} \right]}{\mathbb{E} \left [ \rm{task~duration} \mid \rm{success} \right]} \\
\leq
\frac{P(\rm{failure})}{P(\rm{success})}
\end{aligned}
$$
\fi
\section{Conclusion}
\label{section:conclusion}
In this paper, we study a variant of the cooperative aerial-ground routing problem with an energy chance constraint on the UAV\@. We formulate the problem as a CMDP and use LP to find the optimal policy for the CMDP\@. We validate our formulation and solution in an ISR application. In future work, one direction we will explore is extending the one-UAV one-UGV routing problem to multiple UAVs and UGVs and considering distributed solutions for such risk-aware routing problems.
\bibliographystyle{IEEEtran}
\section{Introduction}
Cosmic strings are known as topological defects that occur in
theories with spontaneous symmetry breaking of a local U(1) gauge
symmetry which were formed during phase transitions in the early
universe. They are trapped infinitely long very thin tubes filled
with a previous false-vacuum phase characterized by a energy density
per unit length $\mu_0=T_0\sim\sigma^2$, with $T_0$ the string
tension and $\sigma$ the symmetry breaking scale, immersed in the a
true-vacuum phase created after the phase transition [1]. Typically,
cosmic strings have been hypothesized as the seeds for ulterior
galaxy formation [2] or as the cosmic sites where primordial
inflation took place [3]. At any event, among the different
theoretical objects that are thought to have populated the universe
at some previous or current periods, which also include black holes
with distinct sizes, Lorentzian wormholes or ringholes, etc., cosmic
strings are the sole objects whose existence has been confirmed in
the laboratory [4]. Therefore, the great interest that cosmic
strings raised when they were first introduced in cosmology has
remained alive all the way up to now. Two general kinds of cosmic
strings have been so far considered, straight strings and string
loops [1]. In the present paper we shall investigate how distinct
forms of dark energy can be accreted by cosmic strings. We shall
restrict ourselves to consider only straight cosmic strings. These
are usually described by a static space-time exterior metric, first
derived by Vilenkin [5]
\begin{equation}
ds^2 = -dt^2+dr^2+dz^2+(1-8G\mu_0)r^2 d\phi^2 .
\end{equation}
By defining a new cylindrical angular coordinate $\phi
'=(1-8G\mu_0)\phi$, it can immediately be seen that this metric
corresponds to a flat spacetime with a conical singularity that is
associated with a deficit angle given by $\Delta=8\pi G\mu_0$.
However, an incoming or outgoing energy flux due to dark energy
accretion is no longer strictly possible for an exterior locally
flat metric like that of a motionless straight string having no
wiggles [1]. In fact, a motionless string with no wiggles [1]
cannot accrete anything that is motionless and homogeneous around
it --- in particular, it could not accrete dark energy. Thus, if we want to consider accretion of dark energy onto cosmic strings, we need these cosmic strings to be perturbed by wiggles. In that case
the exterior string metric can no longer be given by the locally
flat line element (1.1), as wiggle-induced variations of the
string mass per unit length and tension would convert these
quantities into space-time dependent functions, $\mu$ and $T$,
with the state equation $\mu T=\mu_0^2$ and $\mu>T$, whose values
can initially be considered to be very similar to each other and
therefore also very similar to their unperturbed counterparts in
the linear approximation [1]. The linearized wiggly string metric
reads [1]
\begin{eqnarray}
&&ds^2=-\left[1+4G\left(\mu-T\right)\ln\frac{r}{r_0}\right]dt^2+dr^2\nonumber\\
&&+\left[1-4G\left(\mu-T\right)\ln\frac{r}{r_0}\right]dz^2\nonumber\\
&&+\left[1-4G\left(\mu+T\right)\right]r^2 d\theta^2 ,
\end{eqnarray}
which, contrary to metric (1.1), produces a non-vanishing Newtonian
potential. In this case the deficit angle is given by $4\pi
G(\mu+T)$.
Nowadays cosmology, on the other hand, relies mainly on the idea
that the total energy of the current universe and possibly that of
the early universe (that is the two cosmic periods known to show
accelerating expansion) is dominated by some form of the so-called
dark energy [6]. It is therefore of interest to investigate the
effects that dark energy may cause in cosmic strings. Following
the recent studies performed on black holes [7,8], one can
actually suppose that dark energy can also be accreted onto a
cosmic string, inducing some variation in its energy density per
unit length $\mu$. This work aims at considering the effects that
the accretion of dark energy may have in the fate of wiggly
straight cosmic string in an accelerating universe. We shall
represent dark energy as a perfect fluid characterized by a
negative parameter $-1/3>w=p/\rho$ (with $p$ the pressure and
$\rho$ the energy density) filling a Friedmann-Robertson-Walker
universe whose scale factor is given by [9]
\begin{equation}
a(t)=a_0\left(1+\frac{3}{2}(1+w)C^{1/2}(t-t_0)\right)^{2/[3(1+w)]}
,
\end{equation}
where $C=8\pi G\rho_0/3$ and we have taken for the energy density
$\rho=\rho_0 a^{-3(1+w)}$, with $\rho_0$ an integration constant,
if we adopt a general quintessence model. It can be readily seen
that, whereas the universe enters a steady regime of accelerating
expansion which keeps it being finite all the way up to an
infinite time if $w>-1$, in the case that $w<-1$ (a case at which
the dark energy is called phantom energy [10]) the universe would
expand along super-accelerated patterns that drive it to a
singularity at a finite time in the future at which everything
--- even the elementary particles --- loses any independent, local behavior of its own, being ripped apart under the sole influence of
the global phantom cosmological law. This singularity has been
dubbed the big rip [11] and takes place at a time
\begin{equation}
t_* =t_0 +\frac{2}{3(|w|-1)C^{1/2}} .
\end{equation}
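Eqs. (1.3) and (1.4) can be checked numerically: for $w<-1$ the bracket in the scale factor vanishes exactly at $t_*$, where the negative exponent $2/[3(1+w)]$ then drives $a\to\infty$. The sketch below uses sample values of our own choosing:

```python
import math

def big_rip_time(t0, w, C):
    # Eq. (1.4): the big rip occurs at a finite time only for w < -1.
    assert w < -1.0
    return t0 + 2.0 / (3.0 * (abs(w) - 1.0) * math.sqrt(C))

def bracket(t, t0, w, C):
    # The bracket of Eq. (1.3): a(t) = a0 * bracket(t)**(2 / (3 * (1 + w))).
    return 1.0 + 1.5 * (1.0 + w) * math.sqrt(C) * (t - t0)
```

For $w>-1$ the bracket instead grows without bound and $a(t)$ stays finite at all finite times.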
Such a rather weird behavior also takes place when the other main
contender model for dark energy, that is to say the K-essence model
[12], is assumed to dominate. In fact, if $w<-1$ we obtain in this
case [13]
\begin{equation}
a(t)\propto (t-t_*)^{-2\beta/[3(1-\beta)]} ,\;\;\; 0<\beta<1 ,
\end{equation}
where $t_*$ again represents the time for the big rip which is an
arbitrary parameter in this case.
Of particular interest is the scenario in which we consider that the
dark energy is given in terms of a generalized Chaplygin gas having
an equation of state $p=-A\rho^{-\alpha}$, with $A>0$ and $\alpha$ a
parameter [14]. In this case the cosmic time $t$ relates to the
scale factor by the more complicated expression [15]
\begin{eqnarray}
&&t-t_0=\frac{2\left[1+\frac{B}{A}a^{-3(1+
\alpha)}\right]^{\frac{1+2\alpha}{2(1+
\alpha)}}}{3(1+2\alpha)\sqrt{CA^{1/(1+
\alpha)}/\rho_0}}\times\nonumber\\
&&F\left(1,\frac{1+2\alpha}{2(1+\alpha)};\frac{3+
4\alpha}{2(1+\alpha)}; 1+\frac{B}{A}a^{-3(1+\alpha)}\right) ,
\end{eqnarray}
with $F$ a hypergeometric function and
$B=(\rho_0^{1+\alpha}-A)a_{0}^{3(1+\alpha)}$. It can be seen that even
in the phantom energy regime a generalized Chaplygin gas does not
lead to any big rip singularity in the future [15,16], but it always
drives a steady regular accelerating expansion for the universe.
In this paper we use a formalism which is able to encompass the
accretion of dark energy described by any of the above models onto
wiggly straight cosmic strings. We obtain that as quintessence or
K-essence dark energy is accreted onto a perturbed straight cosmic
string the energy density per unit length of this string either
progressively increases up to a constant finite value if $w>-1$,
or steadily decreases down to the unperturbed value first and
might then enter a region where quantum accretion makes it reach a
minimum value, quite before the occurrence of the big rip
singularity if $w<-1$. The behavior of the strings when they
accrete Chaplygin gas is similar: their energy density per unit
length also progressively either increases or decreases toward an extremal value, depending on whether the dominant energy condition
is satisfied or violated.
The paper can be outlined as follows. In Sec. II we present the
general formalism for the accretion of dark energy onto straight
wiggly cosmic strings and obtain a general rate equation for the
string core energy density per unit length, $\mu$, in terms of the
internal dark energy, and apply such a formalism to quintessence
and K-essence cosmological fields, so as to the generalized
Chaplygin gas model. Approximate expressions of $\mu$ as a
function of time for the first two dark energy models are also
derived, both for $w=p/\rho>-1$ and $w=p/\rho<-1$, analyzing the
corresponding evolution of the cosmic strings. Finally we conclude
and add some further comments in Sec. III.
\section{Dark energy accretion onto wiggly straight cosmic strings}
We shall consider next how the general accretion theory can be
applied to the case in which dark energy is accreted onto wiggly
cosmic strings. We shall generally follow the procedure put
forward by Babichev, Dokuchaev and Eroschenko [7] for the case of
Schwarzschild black holes, generalizing it to the case of straight
wiggly cosmic strings. Thus, we start by integrating the
energy-momentum conservation law by using the exterior metric
(1.2). Although for metric (1.1) there are only two non-vanishing
components of the Christoffel symbols,
$\Gamma^r_{\theta\theta}=-\left(1-8G\mu\right)r$ and
$\Gamma^\theta_{r\theta}=1/r$, when the string is perturbed with
wiggles there will be twenty-one generally non-vanishing components of the Christoffel symbols, which make the calculation that follows more complicated. For a cylindrical symmetry we then
have from the time-component of the conservation law of the
energy-momentum tensor, $T_{\mu;\nu}^{\nu}=0$,
\begin{equation}
\sqrt{\mu}ru\sqrt{1-h_{00}}\sqrt{1-b}(1+h_{00})\sqrt{u^2-1}(p+\rho)=C
,
\end{equation}
where
\begin{equation}
h_{00}=4G(\mu-T)\ln\left(r/r_0\right)
\end{equation}
\begin{equation}
b=4G(\mu+T),
\end{equation}
with $r_0$ and $C$ integration constants and $u=dr/ds$.
After integrating the conservation law for the energy-momentum
tensor projected onto the four-velocity,
$u_{\mu}T_{;\nu}^{\mu\nu}=0$, we also obtain
\begin{equation}
ur\sqrt{\mu(1-h^2_{00})(1-b)}e^{\int_{\rho_{\infty}}^{\rho}\frac{d\rho}{p+\rho}}=A,
\end{equation}
where we have taken into account that $u$ should be positive for
incoming energy flux in this case, and $A$ is a positive constant.
From Eqs. (2.1) and (2.4) we can then get
\begin{equation}
\sqrt{(u^2-1)(1+h_{00})}(p+\rho)e^{-\int_{\rho_{\infty}}^{\rho}\frac{d\rho}{p+\rho}}=C_2,
\end{equation}
in which the constant $C_2$ can be expressed as
$C_2=C/A=\hat{A}\left[\rho_{\infty}+p(\rho_{\infty})\right]$, with
$\hat{A}>0$ a constant, for the cylindrical symmetry used.
By integrating now the momentum density $T^r_0$ over the circular
length element of the cylinder we can obtain the rate of change of
the energy per unit length of the wiggly cosmic string, so that
\begin{equation}
\dot{\mu}=-\int_0^{2\pi}rT^r_0
d\phi=\int_0^{2\pi}r(p+\rho)(1+h_{00})\frac{dt}{ds}\frac{dr}{ds}d\phi
.
\end{equation}
Using then the property
$\sqrt{1+h_{00}}dt=\sqrt{\frac{dr^2}{ds^2}-1}ds$ stemming from the
cylindrical symmetry being used and Eqs. (2.5) and (2.6), we
finally derive the relevant rate equation for the energy density
of a wiggly cosmic string
\begin{equation}
\dot{\mu}=\frac{2\pi\bar{A}\left[\rho_{\infty}+
p(\rho_{\infty})\right]}{\sqrt{\mu(1-b)(1-h_{00}^2)}} ,
\end{equation}
with $\bar{A}=A\hat{A}>0$ a constant. Therefore, one has the
following integral expression that governs the evolution of the
wiggled mass per unit length of the cosmic string
\begin{equation}
\int_{\mu_i}^{\mu}\sqrt{\mu(1-b)(1-h_{00}^2)}d\mu=
2\pi\bar{A}\int_{t_0}^{t}\left[\rho_{\infty}+p\left(\rho_{\infty}\right)\right]dt
.
\end{equation}
It is worth noticing that the above expressions restrict by
themselves the interval along which the quantity $\mu$ is allowed to
vary on its real values. In fact, one can derive the two conditions
\begin{equation}
\mu_0 < \frac{1}{8G}
\end{equation}
\begin{equation}
\frac{1-\sqrt{1-64G^2\mu_0^2}}{8G}<\mu<\frac{1+\sqrt{1-64G^2\mu_0^2}}{8G}
.
\end{equation}
Condition (2.9) expresses nothing but the impossibility for a supermassive, wiggleless cosmic string to reach a linear energy
density larger than nearly $1/G$. Even though the concepts of radius
and mass per unit length for a source like the string core are not
unambiguously defined [17], specially in the presence of an
interacting dark energy fluid, at the extreme supermassive case
$\mu=1/8G$ one would expect the string to no longer exist, because it would then correspond to the situation where all the exterior broken phase has collapsed into the core, leaving a pure false-vacuum phase in which the picture of a cosmic string with a core region of trapped false vacuum is lost [18]. When the string is wiggled, condition
(2.9) reflects into condition (2.10) by which it is seen that a
wiggly cosmic string cannot exceed a given maximum value or be less
than a given minimum nonzero value. If a cosmic string has the
extreme supermassive linear mass density, then it cannot be wiggled
nor accrete any kind of dark energy.
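Conditions (2.9) and (2.10) are easy to evaluate; the sketch below uses geometric units with $G=1$ and a sample $\mu_0$ (both our choices):

```python
import math

def mu_bounds(mu0, G=1.0):
    # Conditions (2.9)-(2.10): the allowed range of the wiggly string's
    # linear energy density mu; the unperturbed mu0 always lies inside it.
    assert mu0 < 1.0 / (8.0 * G), "condition (2.9) violated"
    s = math.sqrt(1.0 - 64.0 * G**2 * mu0**2)
    return (1.0 - s) / (8.0 * G), (1.0 + s) / (8.0 * G)
```

As $\mu_0 \to 1/(8G)$ the square root vanishes and the interval collapses to the single point $\mu = 1/(8G)$, consistent with the extreme supermassive case admitting no wiggles.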
Now, the integration in the left-hand-side of Eq. (2.8) appears to
be very difficult to perform and, in fact, we have been unable to
obtain an integrated expression from it in closed form.
Nevertheless, in the physically relevant cases that $\mu$ is very
close to $\mu_0$ and/or $r$ is very close to $r_0$, that term can
be integrated to approximately give
\begin{eqnarray}
&&\int_{\mu_i}^{\mu}d\mu\sqrt{\mu(1-b)(1-h_{00}^2)}\simeq I(\mu)=
\nonumber\\
&&\left[\frac{8G\mu-1}{16G}\sqrt{-4G\mu^2+\mu-4G\mu_0^2}\right.\nonumber\\
&&\left.\left.+\frac{64G^2\mu_0^2-1}{64G^{3/2}}\arcsin\left(\frac{1-
8G\mu}{\sqrt{1-64G^2\mu_0^2}}\right)\right]\right|_{\mu_i}^{\mu}
.
\end{eqnarray}
The integration of the right-hand-side of Eq. (2.8) will be
performed in what follows for the distinct dark energy models
considered in the Introduction.
\subsection{Quintessence and K-Essence}
Starting with the equation of state $p=w\rho$, where $w$ is
assumed constant, we can use the conservation of cosmic energy to
finally derive
\begin{equation}
\rho=\rho_0\left(\frac{a_0}{a}\right)^{3(1+w)} ,
\end{equation}
with $\rho_0$ and $a_0$ constants. Hence
\begin{eqnarray}
&&2\pi\bar{A}\int_{t_0}^{t}\left[\rho_{\infty}+p\left(\rho_{\infty}\right)\right]dt
=\nonumber\\ &&2\pi\bar{A}(1+w)\rho_0 a_0^{3(1+w)}\int_{t_0}^t dt
a^{-3(1+w)}.
\end{eqnarray}
We then have for the scale factor (1.3) corresponding to a general
flat quintessence universe
\begin{equation}
t=t_0+\frac{I(\mu)}{(1+w)\left(2\pi\bar{A}\rho_0-\frac{3}{2}C^{1/2}I(\mu)\right)}
,
\end{equation}
where $I(\mu)$ is defined in Eq. (2.11). This is a parametric
equation from which one can obtain how the energy per unit length
of a wiggled cosmic string evolves in the accelerating universe.
Thus, if $w>-1$ we see that the string energy in the core will
progressively increase from its initial value $\mu_i$, tending to
the maximum value
\[\mu_{\rm max}=\frac{1+\sqrt{1-64G^2\mu_0^2}}{8G}.\]
The larger $w$, the shorter the time required by the accretion process to make the string reach $\mu_{\rm max}$. If $w<-1$, i.e.\ if we are in the phantom regime, then the linear energy density in the string core will rapidly decrease from its initial
value down to recover its unperturbed value at $\mu_0$. The
smaller $w$, the shorter the time taken by the system to reach the
value $\mu_0$. As the string is approaching that value the
gravitational potential should take on smaller and smaller
values to finally vanish at $\mu_0$, so that the classical
accretion process will stop at that point. Such a behavior is also
checked to occur in the case that phantom K-essence energy is accreted.
An interesting question is however posed in the two considered
kinds of phantom energy. Even though the classical, continuous
accretion process must only proceed down to $\mu_0$, if we assumed
that phantom energy accretion would proceed by discrete steps,
then the limit at $\mu_0$ should be overtaken and the linear
energy density of the string core would continue decreasing below
$\mu_0$ as the phantom energy was being accreted. We would reach
in this way a regime where $T>\mu$, which would end when $\mu$
reached the minimum value
\[\mu_{\rm min}=\frac{1-\sqrt{1-64G^2\mu_0^2}}{8G} ,\]
which would never vanish provided $\mu_0>0$. The spacetime metric of
the cosmic string given by Eq. (1.2) would then exchange the values
between the $tt$ and $zz$ components, as in this case $\mu<T$.
\subsection{Generalized Chaplygin gas}
We shall derive now the expression for the rate $\dot{\mu}$ in the
case of a generalized Chaplygin gas. We start with the expression
for the energy density
\begin{equation}
\rho=\left(A_{ch} +\frac{B}{a^{3(1+\alpha)}}\right)^{1/(1+\alpha)} ,
\end{equation}
which has been obtained by integrating the cosmic conservation law
for the case of the equation of state of a generalized Chaplygin
gas, that is $p=-A_{ch}/\rho^{\alpha}$. Now, from the Friedmann
equation we can get
\begin{equation}
\dot{a}=\sqrt{\frac{8\pi
G}{3}}a(t)\left(A_{ch}+\frac{B}{a^{3(1+\alpha)}}\right)^{1/[2(1+\alpha)]}
.
\end{equation}
Hence, from Eq. (2.11) it can be obtained
\begin{eqnarray}
&&I(\mu)=B\bar{A}\sqrt{\frac{3\pi}{2G}}\int_{a_0}^{a}
\frac{\frac{1}{a^{3(1+\alpha)}}}{a\left(A_{ch}+
\frac{B}{a^{3(1+\alpha)}}\right)^{(2\alpha+1)/[2(1+\alpha)]}}da
\nonumber\\
&&=-\bar{A}\sqrt{\frac{2\pi}{3G}}\left[\left(A_{ch}+\frac{B}{a^{3(1+
\alpha)}}\right)^{1/[2(1+\alpha)]}- \sqrt{\rho_0}\right].
\end{eqnarray}
It follows
\begin{equation}
a^{3(1+\alpha)}=\frac{B}{\left(\sqrt{\rho_0}-\sqrt{\frac{3G}{2\pi
\bar{A}^2}}I(\mu)\right)^{2(1+\alpha)}-A_{ch}} .
\end{equation}
Again in this case the setting of a constant $B>0$ implies a
progressive increase of $\mu$ with $a$ up to a maximum given by
$\mu_{\rm max}$, and the assumption of a constant $B<0$ (phantom)
leads to a decrease of $\mu$ with $a$ down to $\mu_0$ classically or
to $\mu_{\rm min}$ if the Chaplygin phantom energy is supposed to be
accreted in discrete steps.
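The qualitative behavior follows directly from Eq. (2.15): for either sign of $B$, the density tends to the constant $A_{ch}^{1/(1+\alpha)}$ as $a$ grows, so no big rip occurs. A minimal sketch, with sample parameters of our own choosing:

```python
A_CH, ALPHA = 1.0, 0.5   # sample Chaplygin parameters (our choices)

def rho(a, B):
    # Eq. (2.15): generalized Chaplygin gas density; B > 0 is the standard
    # case, B < 0 the phantom one. Either way rho(a) -> A_CH**(1/(1+ALPHA))
    # as the scale factor a grows.
    return (A_CH + B * a**(-3.0 * (1.0 + ALPHA)))**(1.0 / (1.0 + ALPHA))
```

In the standard case the density decreases toward this constant, while in the phantom case it increases toward it from below; since it stays bounded, the accreted string density also tends to a constant rather than diverging.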
\section{Conclusions and further comments}
While cosmic strings have a long tradition and incidence in
theoretical cosmology, the introduction of cosmic dark energy has
taken place quite more recently though not with less incidence or
surprise. Perhaps therefore their potential mutual relations and
interactions have not been so far considered. This paper is a
first step in the task of studying the effects that the presence
of dark energy may have in the fate of cosmic string in an
accelerating universe. We have restricted ourselves here to just
looking at an approximate model describing how straight wiggly
cosmic strings accrete dark energy during the accelerating
expansion of the universe, leaving for future publications the
accurate treatment for both wiggly straight strings and the
similar accretion onto circular strings, so as the kinematic
effects that the acceleration of the universe may have on the
shape and size of any cosmic strings. A generalized description
has first been thus built up and then adapted to the case of the
cylindrically symmetric accretion of dark energy onto straight
cosmic strings. That description is based on the integration of
the conservation laws for the energy-momentum tensor and its
projection on four-velocity using the exterior geometry of a
wiggly cosmic string. We have considered the dark energy accretion
onto straight cosmic strings using several scalar field models for
the cosmological vacuum, namely quintessence and K-essence field
models with equation of state $p=w\rho$, and a generalized model
of Chaplygin gas with the unusual equation of state
$p=-A/\rho^{\alpha}$. A rate equation for the energy density per
unit length of the strings has been derived in this way and
finally integrated for each of these dark energy models. This
ultimately leads to the prediction that, whereas when the energy
density of the cosmic vacuum decreases with time the linear energy
density of the straight strings progressively increases as the
universe grows bigger for all dark energy models, if the energy
density of the universe grows with expansion, inducing a universal
violation of the dominant energy condition, the stringy energy
density steadily decreases. This drop in energy density makes the
strings eventually become free of wiggles and thereafter enter a
quantum accretion regime where the string energy density finally
reaches a minimum nonzero value, before the occurrence of
any future big rip singularities.
It appears that the current value of the parameter $w$ in the
equation of state of the universe may be less than $-1$. So, one
could be tempted to think that the above evolution of cosmic
strings leading eventually to the formation of exotic topological
defects with negative-wiggles perturbations would be inescapable.
However, having now $w<-1$ (provided this turns out to be
definitively the case most favoured by observations) does not
guarantee at all that the phantom regime will endure in the
future. In fact, most general descriptions of quintessence field
are based on tracking models where the parameter $w$ is time
dependent [19] and, therefore, it could well be that what is now
less than $-1$ would later turn out to be greater than $-1$, thus
making the cosmic string evolution predicted by our constant-$w$
models inapplicable in the far future. Nevertheless, the initial
string evolution implied by our phantom models looks probable.
That behaviour by itself would still be important enough for a
variety of subjects. But even such a behaviour would not be
guaranteed as phantom fields are characterized by Lagrangians
containing negative kinetic terms which have very weird properties
and lead to unwanted instabilities [20] making the whole phantom
scenario problematic.
\acknowledgements
\noindent We thank Professors J.A.S. Lima and E. Babichev for
useful explanations, discussions and correspondence. We also
acknowledge A. Ferrera and M. Rodr\'{\i}guez for constructive
discussions and criticisms. This work was supported by DGICYT
under Research Projects BMF2002-03758 and BFM2002-00778. JAJM
wants to acknowledge IMAFF for kind hospitality.
\pagebreak
\section{Introduction}
Oceanic refractive-index spectrum (ORIS) plays a key role in oceanic optics \cite{Hill78OSA, Hill2, Nikishov:2000,OK:2019}.
Unlike the atmospheric spectrum governed by temperature and humidity \cite{Obukhov:1962, Hill78RC,Hill:78,Andrews:1992},
ORIS is a second-order statistical description governed by the fluctuations of temperature, salinity, and their coupling \cite{Hill78OSA,Nikishov:2000, Leandri:2011}.
ORIS has been modelled in many cases based on H1 and H4
\cite{ Nikishov:2000,Yao:2017,Li:2019,Yi:2018},
but the oceanic outer scale, a potentially important parameter of ORIS, has not been widely discussed.
Indeed,
the outer scale narrowly restricts the range of energy injection \cite{Coulman:1988},
and it is defined as the scale of highest anisotropy \cite{Toselli:2014}.
It noticeably affects beam properties,
such as the low-frequency behavior \cite{Reinhardt:1972}, beam spread \cite{Yura:1973}, adaptive optics \cite{ V.V95,Ziad:2004}, scintillation index \cite{Andrews:1999,Yi:2012}, and angle-of-arrival fluctuations \cite{Cui:2014}.
Moreover, the outer scale can be measured directly by using the temperature structure function \cite{Clifford:1971},
and it has also been obtained directly for water turbulence under laboratory conditions \cite{ MD:1997}.
Recent work proposed back-reflected light as a new technique for estimating the atmospheric turbulence outer scale \cite{Kulikov:2019}.
Compared with its great significance in atmospheric optics, the outer scale has received only limited attention in oceanic optics \cite{Li:2019}.
It has been introduced into H1-based oceanic spectra, yielding a lower beam wander \cite{Yue:2019}
and a lower off-axis scintillation index \cite{Li:2019} for Gaussian beams.
However, as stated in Ref.\cite{Hill2},
H4-based spectra are more accurate than H1-based spectra in high wavenumbers.
Motivated by this, an H4-based oceanic refractive-index spectrum was proposed, which shows good precision over a wide range of Prandtl/Schmidt numbers \cite{Yao:2019}.
An application of the spectrum in a bi-LIDAR system elucidated that underwater optical turbulence degrades the spectral density and the degree of coherence \cite{OK:2019bi}.
Consequently,
considering the significance of the outer scale and the advantage of the recently proposed H4-based spectrum, we introduce an outer-scale parameter into the H4-based spectrum
and use it to calculate light propagation in the ocean.
This paper is arranged as follows.
In Section 2.1 we develop an outer-scaled oceanic spectrum based on the approximate H4-based spectrum;
Section 2.2 gives the analytical expression of the beam wander with the outer-scaled oceanic spectrum.
Section 3 presents numerical calculations and discussion.
Finally, Section 4 summarises the work.
\section{Theory}
The power spectrum of oceanic refractive-index fluctuations is generally given by a linear combination of the temperature spectrum, the salinity spectrum and their co-spectrum.
Each of these spectra is based on H1 \cite{Nikishov:2000} or H4 \cite{Yao:2019}.
The H4-based ORIS can describe complex oceanic environments affected by seasonal or extreme average temperature, salinity and/or other factors.
In this section, we introduce the outer scale into the H4-based spectrum and derive the beam wander based on it.
\subsection{The modification of ORIS about outer scale}
Generally, the outer scale can be described in three forms \cite{V.Vouter:1995}:
the exponential form, $\kappa^{-11 / 3}\left[1-\exp \left(-\kappa^{2} / \kappa_{0}^{2}\right)\right]$;
the von K\'arm\'an form, ${\left( {{\kappa ^2} + \kappa _0^2} \right)^{ - 11/6}}$;
and the Greenwood form, ${\left( {{\kappa ^2} + \kappa {\kappa _{0}}} \right)^{ - 11/6}}$,
where $\kappa$ is the spatial frequency and
${\kappa _0}$ contains the outer scale.
Here we choose the exponential form for its mathematical simplicity and physical reasonableness.
The 3D oceanic spectrum ${\Phi _{n}}(\kappa )$ is given by \cite{Yao:2019}:
\begin{align} \label{eq1}
{\Phi _{n}}(\kappa ) =
{A^2}{\Phi _{\rm{T}}}(\kappa ) + {B^2}{\Phi _{\rm{S}}}(\kappa ) -
2AB{\Phi _{{\rm{TS}}}}(\kappa ),
\end{align}
where $A$ is the thermal expansion coefficient and
$B$ is the saline contraction coefficient.
We express the three spectra on the right of Eq.(\ref{eq1}) with the outer-scaled H4-based model:
\begin{align} \label{eq2}
\nonumber
{\Phi _i}(\kappa ) =& \frac{1}{{4\pi }}\beta {\varepsilon ^{ - \frac{1}{3}}}{\kappa ^{{\rm{ - 11/3}}}}\left[{{\rm{1 - exp( - }}\frac{{{\kappa ^2}}}{{\kappa _0^2}}{\rm{)}}} \right]{\chi _i}{g_i}\left( {\kappa \eta } \right),\\
\quad & i \in \{ {\rm{T}},{\rm{S}},{\rm{TS}}\} ,
\end{align}
with
\begin{align} \label{eq3}
\nonumber
{g_i}\left( {\kappa \eta } \right) =& \left[ {1 + 21.61{{(\kappa \eta )}^{0.61}}c_i^{0.02} - 18.18{{(\kappa \eta )}^{0.55}}c_i^{0.04}} \right]{\rm{ }} \\
&\times \exp \left[ { - 174.90{{(\kappa \eta )}^2}c_i^{0.96}} \right]{\rm{ }},\\
{c_i} =&\; 0.072^{4/3}\beta Pr_i^{ - 1},\\
\kappa_0 = \frac{C_0}{L_0},
\end{align}
where $\beta = 0.72 $ is the Obukhov–Corrsin constant;
$\varepsilon $ is the dissipation rate of turbulent kinetic energy per unit mass of fluid,
which varies in the range of $[{10^{ - 10}},{10^{ - 4}}]$ $\rm{m^2}{s^{-3}}$;
$\eta $ is the Kolmogorov microscale length that varies in the range of $[{10^{ - 4}},{10^{ - 2}}]$ $\rm{m}$;
$\chi _i$ is the ensemble-averaged variance dissipation rate;
$P{r_{\rm{T}}} , P{r_{\rm{S}}} $ are the temperature Prandtl number and the salinity Schmidt number, respectively;
the temperature-salinity Prandtl-Schmidt number $P{r_{{\rm{TS}}}} = 2P{r_{\rm{T}}}P{r_{\rm{S}}}{\left( {P{r_{\rm{T}}} + P{r_{\rm{S}}}} \right)^{ - 1}}$;
$L_0$ is the outer scale;
${C_0} \in [2\pi ,8\pi ]$ is a scaling constant whose value depends on the application; we choose ${C_0} = 4\pi$ here.
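To make Eqs.(\ref{eq2})-(\ref{eq5}) concrete, the following minimal Python sketch evaluates one component spectrum. It is an illustrative aid only; the default parameter values ($\varepsilon$, $\eta$, $L_0$, $C_0$, $Pr$) are assumptions taken from the ranges quoted above, not measured data.

```python
from math import exp, pi

def g(kappa_eta, c):
    # Non-dimensional correction g_i of Eq.(3)
    return ((1.0 + 21.61 * kappa_eta**0.61 * c**0.02
             - 18.18 * kappa_eta**0.55 * c**0.04)
            * exp(-174.90 * kappa_eta**2 * c**0.96))

def phi_component(kappa, chi, Pr, eps=1e-2, eta=1e-3, L0=10.0,
                  C0=4.0 * pi, beta=0.72):
    # Outer-scaled H4-based component spectrum, Eqs.(2), (4) and (5)
    c = 0.072**(4.0 / 3.0) * beta / Pr          # c_i of Eq.(4)
    kappa0 = C0 / L0                            # Eq.(5)
    outer = 1.0 - exp(-(kappa / kappa0)**2)     # exponential outer-scale factor
    return (beta * eps**(-1.0 / 3.0) / (4.0 * pi)
            * kappa**(-11.0 / 3.0) * outer * chi * g(kappa * eta, c))
```

The outer-scale factor lies in $(0,1)$ and suppresses the spectrum at wavenumbers below $\kappa_0$, which is the low-frequency boundary effect discussed later.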
\subsection{Theoretical derivation of beam wander}
According to the outer-scaled H4-based ORIS given in the previous section, we derive the beam wander as follows.
The beam wander of a Gaussian beam is generally expressed as \cite{BOOK1}:
\begin{align} \label{eq4}
\nonumber \left\langle {r_c^2} \right\rangle =
&\frac{4{\pi ^2}{k^2}{W^2}L}{{n}_{0}^{2}}\int_0^1 {\int_0^\infty \kappa } {{\Phi_n}(\kappa )}{H_{\rm{LS}}}(\kappa ,\xi ) \\
& \times \left[ {1 - \exp \left( { - \Lambda L{\kappa ^2}{\xi ^2}/k} \right)} \right]{\rm{d}}\xi {\rm{d}}\kappa,
\end{align}
where
$k = 2\pi {n_0}/\lambda $ is the wavenumber,
which contains the refractive index of seawater ${n_0}$ and the incident wavelength $\lambda$;
the beam radius,
$W = {W_0}\sqrt {\Theta _0^2 + \Lambda _0^2} $,
is associated with the waist of the input Gaussian beam ${W_0}$;
${\Theta _0} = 1 - {\bar \Theta _0}$ is the beam curvature parameter at the input plane,
with ${\bar \Theta _0}$ its complementary parameter;
$\Lambda = 2L/(k{W^2})$ and ${\Lambda _0} = 2L/(k{W_0}^2)$ are the Fresnel ratios of the beam at the receiver and the transmitter, respectively;
$\xi = 1 - z/L$ is the normalized distance variable that carries the propagation-distance information $L$ and $z$;
${H_{\rm{LS}}}(\kappa ,\xi )$ is the large-scale Gaussian filter function.
To capture the influence of the outer scale,
we let the filter function retain the components contributing to beam wander; it is generally given by \cite{BOOK1}:
\begin{align} \label{eq5}
{H_{\rm{LS}}}(\kappa ,\xi ) = \exp \left\{ { - {\kappa ^2}W_0^2\left[ {{{\left( {{\Theta _0} + {{\bar \Theta }_0}\xi } \right)}^2} + \Lambda _0^2{{(1 - \xi )}^2}} \right]} \right\}.
\end{align}
Diffraction and small-scale effects are neglected \cite{BOOK1},
so we drop the last term and
use the following approximation:
\begin{align} \label{eq6}
1 - \exp \left( { - \Lambda L{\kappa ^2}{\xi ^2}/k} \right) \approx \Lambda L{\kappa ^2}{\xi ^2}/k,\quad L{\kappa ^2}/k \ll 1.
\end{align}
Combining Eqs.(\ref{eq4})-(\ref{eq6}), we have
\begin{align} \label{eq7}
\left\langle {r_c^2} \right\rangle = \frac{8{\pi ^2}{L^3}}{{n}_{0}^{2}}\int_0^1 {\int_0^\infty {{\kappa ^3}} {\xi ^2}} {{\Phi_n}(\kappa )}\exp \left[ { - {\kappa ^2}W_0^2{{({\Theta _0} + {{\bar \Theta }_0}\xi )}^2}} \right]{\rm{d}}\xi {\rm{ d}}\kappa,
\end{align}
where the value of
${\Theta _0}$ determines whether the transmitted beam is collimated (${\Theta _0}=1$), focused or divergent.
Here we adopt the outer-scaled H4-based ORIS of Eq.(\ref{eq2}) and substitute it into Eq.(\ref{eq7}):
\begin{align} \label{eq8}
\left\langle {r_c^2} \right\rangle = \frac{2 \pi {L^3}\beta {\varepsilon ^{ - \frac {1} {3}}}{A^2}{\chi _{\rm{T}}}}{{n}_{0}^{2}}\sum\limits_{i,j} {{Y_{i,j}}},
\end{align}
where
\begin{align} \label{eq9}
\nonumber {Y_{i,j}} = &\int_0^1 {\int_0^\infty {{\kappa ^{{d_{i,j}} - \frac {2} {3}}}} \left[ {{\rm{1 - exp( - }}\frac{{{\kappa ^2}}}{{\kappa _0^2}}{\rm{)}}} \right]{\xi ^2}} {b_{i,j}} \\
&\exp \left[ { - {\kappa ^2}({a_{i,j}} + W_0^2{{({\Theta _0} + {{\bar \Theta }_0}\xi )}^2})} \right]{\rm{d}}\xi {\rm{ d}}\kappa.
\end{align}
It follows directly from Eq.(\ref{eq8}) that the beam wander is proportional to ${\varepsilon ^{ - \frac {1} {3}}}$ and $\chi _{\rm{T}}$.
${Y_{i,j}}$ is a $3\times 3$ matrix determined by the three coefficient matrices ${a_{i,j}}$, ${b_{i,j}}$ and ${d_{i, j}}$.
\begin{flushleft}
Further, for the case of a collimated beam ($\Theta_{0}=1$),
\end{flushleft}
\newpage
\begin{strip}
\begin{align} \label{eq10}
\nonumber{Y_{i,j}} &= \int_0^1 {\int_0^\infty {{\kappa ^{d_{i,j} - \frac {2} {3}}}} \left[ {{\rm{1 - exp( - }}\frac{{{\kappa ^2}}}{{\kappa _0^2}}{\rm{)}}} \right]{\xi ^2}} b_{i,j}\exp \left[ { - {\kappa ^2}(a_{i,j} + W_0^2)} \right]{\rm{d}}\xi {\rm{ d}}\kappa\\
&=\frac{1}{6}\left[ {\frac{1}{{{{\left( {{a_{{i,j}}} + W_0^2 } \right)}^{\frac{1}{6} + \frac{{{d_{{{i,j}}}}}}{2}}}}}{-}\frac{1}{{{{\left( {{a_{{i,j}}} + W_0^2 + {\kappa_{0} ^{ - 2}}} \right)}^{\frac{1}{6} + \frac{{{d_{{i,j}}}}}{2}}}}}} \right]{b_{{i,j}}}\Gamma \left[ {\frac{1}{6} + \frac{{{d_{{i,j}}}}}{2}} \right].
\end{align}
\end{strip}
\begin{strip}
\begin{align} \label{eq11}
\left\{ {{a_{i,j}}} \right\} = 174.9{\eta ^2}\left[ {\begin{array}{*{20}{c}}
{c_{\rm{T}}^{0.96}}&{c_{\rm{T}}^{0.96}}&{c_{\rm{T}}^{0.96}}\\
{c_{\rm{S}}^{0.96}}&{c_{\rm{S}}^{0.96}}&{c_{\rm{S}}^{0.96}}\\
{c_{\rm{TS}}^{0.96}}&{c_{\rm{TS}}^{0.96}}&{c_{\rm{TS}}^{0.96}}
\end{array}} \right],
\left\{ {{d_{i,j}}} \right\} = \left[ {\begin{array}{*{20}{c}}
{\rm{0}}&{0.61}&{0.55}\\
{\rm{0}}&{0.61}&{0.55}\\
{\rm{0}}&{0.61}&{0.55}
\end{array}} \right]
\end{align}
\begin{align} \label{eq12}
\left\{b_{i, j}\right\} = \left[ \begin{array}{ccc}{1} & {21.61 \eta^{0.61} c_{\rm{T}}^{0.02}} & {-18.18 \eta^{0.55} c_{\rm{T}}^{0.04}} \\
{\frac{1}{\omega^{2}} d_{r}} & {21.61 \eta^{0.61} c_{\rm{S}}^{0.02} \frac{1}{\omega^{2}} d_{r}} & {-18.18 \eta^{0.55} c_{\rm{S}}^{0.04} \frac{1}{\omega^{2}} d_{r}} \\
{-\frac{1}{\omega}(1+d_{r})} & {-21.61 \eta^{0.61} c_{\rm{TS}}^{0.02} \frac{1}{\omega}(1+d_{r}) }&
{18.18 \eta^{0.55} c_{\rm{TS}}^{0.04} \frac{1}{\omega}(1+d_{r})}\end{array}\right];
\end{align}
\end{strip}
\noindent
where $\omega \in [-5,0)$ is the relative strength of temperature and salinity fluctuations.
On substituting Eqs.(\ref{eq9})-(\ref{eq12}) into Eq.(\ref{eq8}), the beam wander exhibits a complex dependence on $\omega $, $L_{0}$ and $c_{i}$.
Besides, $\chi _{\rm{T}}$, $\chi_{\rm{S}}$ and $\chi _{\rm{TS}}$ satisfy the relations \cite{Elamassie:17}:
\begin{align} \label{eq13}
{\chi _{\rm{S}}} = \frac{{{A^2}}}{{{\omega ^2}{B^2}}}{\chi _{\rm{T}}}{d_r},\quad {\chi _{\rm{TS}}} = \frac{A}{{2\omega B}}{\chi _{\rm{T}}}\left( {1 + {d_r}} \right).
\end{align}
and
\begin{align} \label{eq14}
{d_r} \approx \left\{ {\begin{array}{*{20}{l}}
{|\omega {|} + \sqrt{|\omega {|}{{(|\omega | - 1)}}},}&{|\omega | \ge 1}\\
{1.85|\omega | - 0.85,}&{0.5 \le |\omega | < 1}\\
{0.15|\omega |,}&{|\omega | < 0.5}
\end{array}} \right.,
Pr \in [3,3000].
\end{align}
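The piecewise approximation of Eq.(\ref{eq14}) and the relations of Eq.(\ref{eq13}) can be sketched in Python as follows. This is an informal aid; the coefficient values used in any check (e.g.\ for $A$, $B$, $\chi_{\rm T}$) are illustrative assumptions.

```python
from math import sqrt

def d_r(omega):
    # Eddy diffusivity ratio approximation, Eq.(14)
    w = abs(omega)
    if w >= 1.0:
        return w + sqrt(w * (w - 1.0))
    if w >= 0.5:
        return 1.85 * w - 0.85
    return 0.15 * w

def chi_S(omega, chi_T, A, B):
    # Salinity variance dissipation rate, Eq.(13)
    return (A**2 / (omega**2 * B**2)) * chi_T * d_r(omega)

def chi_TS(omega, chi_T, A, B):
    # Temperature-salinity co-spectrum dissipation rate, Eq.(13)
    return (A / (2.0 * omega * B)) * chi_T * (1.0 + d_r(omega))
```

Note that $d_r$ is continuous at $|\omega|=1$ and $|\omega|=0.5$, but its slope is not; these kinks are the origin of the ``jump phenomenon'' discussed in Section 3.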
In summary,
an outer-scaled ORIS has been obtained.
We have applied it to laser-beam transmission and,
in particular,
derived an analytical expression for the beam wander.
We now discuss the factors that affect it.
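As a sanity check of the closed form in Eq.(\ref{eq10}), the following Python sketch compares it with a brute-force quadrature of the corresponding double integral (the $\xi$-integral contributes the analytic factor $1/3$). The sample values of $a_{i,j}$, $b_{i,j}$, $d_{i,j}$, $W_0$ and $\kappa_0$ are arbitrary illustrative choices, not part of the derivation.

```python
from math import gamma, exp

def Y_closed(a, b, d, W0, kappa0):
    # Closed form of Eq.(10) for a collimated beam
    s = 1.0 / 6.0 + d / 2.0
    return (b / 6.0) * gamma(s) * ((a + W0**2)**(-s)
                                   - (a + W0**2 + kappa0**(-2))**(-s))

def Y_numeric(a, b, d, W0, kappa0, kmax=100.0, n=100_000):
    # Trapezoidal evaluation of the kappa-integral in Eq.(10);
    # the integrand vanishes at kappa = 0, so the endpoint term is dropped.
    h = kmax / n
    total = 0.0
    for i in range(1, n + 1):
        k = i * h
        f = (k**(d - 2.0 / 3.0)
             * (1.0 - exp(-(k / kappa0)**2))
             * exp(-k * k * (a + W0**2)))
        total += (0.5 if i == n else 1.0) * f
    return b * h * total / 3.0
```

The two evaluations agree to well below a percent for the sample parameters, which also confirms that the factor inside the brackets of Eq.(\ref{eq10}) must carry $W_0^2$, consistent with the integrand.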
\section{Numerical results and analyses }
In this section, by using the proposed outer-scaled ORIS of Eq.(\ref{eq2}) and the expression for the beam wander in Eq.(\ref{eq9}),
we compare and discuss the beam wander and the proportions of it influenced by temperature, salinity and their coupling term.
In what follows,
we set
${n_0} = 1.34$,
$ A = 2.56 \times {10^{ - 4}}{\deg ^{ - 1}}$,
$\chi _{\rm{T}} = {10^{ - 5}}{\rm{K}^2}{\rm{s}^{ - 1}}$,
$\varepsilon = 1 \times {10^{ - 2}}{\rm{m}^2}{\rm{s}^{ - 3}}$,
$L = 15\rm{m}$,
${W_0} = 0.1\rm{m}$.
As $\langle T\rangle$ varies, the corresponding values of $P{r_{\rm{T}}}$, $P{r_{\rm{S}}}$ and ${\eta }$ are taken from Table 3 of Ref.\cite{Yao:2019}.
In Fig.\ref{fig1} we illustrate the influence of the outer scale on the beam wander when $\langle T\rangle =15^{\circ} \rm C$.
As the outer scale increases,
the beam wander gradually increases,
but the extent of the impact gradually slows down.
Apparently,
$\left\langle {r_c^2} \right\rangle_{\omega=-0.25} \geq \left\langle {r_c^2} \right\rangle_{\omega=-2.5} \geq \left\langle {r_c^2} \right\rangle_{\omega=-0.5}$,
so there is a nonlinear relationship between $\omega$ and the beam wander.
We present details of such a nonlinear relationship in Fig.\ref{fig2}.
There are three regions: $\omega \in[-5, -1]$, $[-1, -0.5]$ and $[-0.5, 0]$.
In the first region,
the curve increases slowly,
then decreases near $\omega = -1$;
in the second region it remains stable,
then begins to fall near $\omega = -0.5$;
after that,
the curve rises rapidly,
and the influence of the outer scale gradually converges.
As can be seen from Eq.(\ref{eq14}),
the ``jump phenomenon'' mainly comes from the modulation of $d_{r}$ by $\omega $.
Figure \ref{fig3} shows the influence of $\langle T\rangle$ and $L_{0}$ on the beam wander.
The beam wander is influenced more strongly by the outer scale $L _{0}$ than by the average temperature, even though we let $L _{0}$ vary from $10 \rm m$ to $100 \rm m$ while $\langle T\rangle$ ranges from $0^{\circ} \rm C$ to $30^{\circ} \rm C$.
Physically,
this can be understood as follows:
the outer scale affects the low-frequency inertial region of the power spectrum more strongly than the average temperature does.
In addition, it can be seen from Eq.(\ref{eq1}) and Eq.(\ref{eq7}) that the beam wander consists of temperature, salinity and coupling terms.
The proportions of the three parts as functions of $\omega$ are illustrated in Fig.\ref{fig4}(a)-(d) for further discussion, where we vary the values of $\langle T\rangle$ and $L_{0}$.
The proportion caused by the coupling term exceeds the others.
It drops slowly in the first region; in the second region it remains at the same level; in the third region a considerable increase occurs.
This can be interpreted as the modulation of $\chi _i$ in Eq.(\ref{eq13}), through the relation between $\omega$ and $d_r$, resulting in the larger proportion of the coupling term.
Moreover, the proportions are almost unchanged under relative changes of temperature and outer scale,
i.e., the difference in outer scale and temperature does not obviously affect the proportion of the beam wander caused by the individual terms.
From the viewpoint of the proportions caused by the individual terms,
the proportion of the temperature term decreases slowly, rises slowly, and falls sharply in the three regions, respectively.
Although the proportion of the salinity term accounts for a relatively small part, its trend is more complex than the others:
in the first region it rises slowly and then falls a little; in the second region it rises first and then falls sharply; in the third region it rises slowly again.
In particular,
the first region can be considered temperature-dominated, where the effect of salinity is not very obvious.
In the second region, the proportion of the temperature term rises and the proportion of the salinity term decreases,
so the proportion of the coupling term decreases slowly.
In the third region, the proportion of the temperature term drops sharply while the proportion of the salinity term rises slowly,
and their contributions make the proportion of the coupling term rise sharply.
Therefore, more attention should be paid to the change of salinity.
\begin{figure}
\centering
\includegraphics[width = 0.35\textwidth]{fg1.eps}
\caption{Beam wander versus $ L_0$ with various $\omega$, for $\langle T\rangle = 15^{\circ} \rm C$.}
\label{fig1}
\end{figure}
\begin{figure}
\centering
\includegraphics[width = 0.35\textwidth]{fg2.eps}
\caption{Beam wander versus $\omega$ with various $ L_0$, for $\langle T\rangle = 15^{\circ} \rm C$.}
\label{fig2}
\end{figure}
\begin{figure}
\centering
\includegraphics[width = 0.35\textwidth]{fg3.eps}
\caption{Beam wander versus $\omega$ with various $L_0$ and various $\langle T\rangle$. (a) is local enlarged drawing when ${L_0} = 100\rm{m}$, (b) is local enlarged drawing when ${L_0} = 10\rm{m}$.}
\label{fig3}
\end{figure}
\begin{figure*}
\centering
\subfigure {
\includegraphics[width = 0.35\textwidth]{OMG_P_1.eps}
\label{a}}
\subfigure {
\includegraphics[width = 0.35\textwidth]{OMG_P_11.eps}
\label{b}}
\subfigure {
\includegraphics[width = 0.35\textwidth]{OMG_P_2.eps}
\label{c}}
\subfigure {
\includegraphics[width =0.35\textwidth]{OMG_P_22.eps}
\label{d}}
\caption{Proportion of the beam wander caused by the temperature term, the salinity term and the coupling term, for several different values of $\langle T\rangle$ and ${L_0}$:
(a) $\langle T\rangle = 0^{\circ} \rm C$, ${L_0} = 10 \rm{m}$,
(b) $\langle T\rangle = 30^{\circ} \rm C$, ${L_0} = 10\rm{m}$,
(c) $\langle T\rangle = 0^{\circ} \rm C$, ${L_0} = 100\rm{m}$,
(d) $\langle T\rangle = 30^{\circ} \rm C$, ${L_0} = 100\rm{m}$.}
\label{fig4}
\end{figure*}
\section{Conclusion}
In conclusion,
the outer scale has been introduced into the H4-based oceanic refractive-index spectrum,
describing the boundary effect in the low-frequency region.
Through the outer-scaled H4-based spectrum, we derived the analytical expression of the beam wander and plotted numerical curves of the beam wander varying with several parameters.
Similar to a previous report \cite{Yang:2017},
the beam wander rises rapidly with increasing outer scale at the beginning and then tends to flatten.
When salinity prevails, the influences of the average temperature and the outer scale converge gradually.
We also find that
the variation of the beam wander contributed by the outer scale is larger than that contributed by the average temperature.
Moreover,
regarding the proportions of influence on the beam wander,
the temperature-salinity coupling term is the largest part.
Although the influence of salinity fluctuations on the beam wander is relatively small,
its trend is obviously more complex.
Overall,
the choice of an appropriate outer scale significantly influences oceanic-optics quantities such as the beam wander in communication, detection and sensing.
This work is an extension of the present theoretical significance of the outer scale,
and future experiments could probe the parameter regions that depend on the outer scale.
We look forward to this providing a realistic means of characterising the fascinating ocean.
\bibliographystyle{unsrt}
\section{Introduction}
This paper is a new chapter in the saga of the axiomatisation of the \emph{parallel composition operator} $\mathbin{\|}$ (also known as ``\emph{full}'' \emph{merge} \cite{BK84b,BK85}) of the Calculus of Communicating Systems (CCS) \cite{Mi80}.
The saga has its roots in the works \cite{HM80,HM85}, in which Hennessy and Milner studied the \emph{equational theory} of (recursion free) CCS and proposed a \emph{ground-complete axiomatisation} for it modulo \emph{strong bisimilarity} and \emph{observational congruence}, two classic notions of behavioural \emph{congruence} (i.e., an equivalence relation that is compositional with respect to the language operators) that allow one to establish whether two processes have the same \emph{observable behaviour} \cite{Pa81}.
That axiomatisation included infinitely many axioms, which were instances of the \emph{expansion law} used to ``simulate equationally'' the operational semantics of $\mathbin{\|}$.
Then, Bergstra and Klop showed, in \cite{BK84b}, that a \emph{finite} ground-complete axiomatisation modulo bisimilarity can be obtained by enriching CCS with two auxiliary operators, i.e., the \emph{left merge} $~\!\lmerge$ and the \emph{communication merge} $|$, expressing one step in the pure interleaving and the synchronous behaviour of $\|$, respectively.
Their result was strengthened by Aceto et al.\ in \cite{AFIL09}, where it is proved that, over the fragment of CCS without recursion, restriction and relabelling, the auxiliary operators $~\!\lmerge$ and $|$ allow for finitely axiomatising $\|$ modulo bisimilarity also when CCS terms with variables are considered.
Moreover, in \cite{AILT08} that result is extended to the fragment of CCS with relabelling and restriction, but without communication.
From those studies, we can infer that $~\!\lmerge$ and $|$ are \emph{sufficient} to finitely axiomatise $\mathbin{\|}$ over CCS modulo bisimilarity.
(\emph{Henceforth, we only consider the recursion, restriction and relabelling free fragment of CCS.})
Moller showed, in \cite{Mo89,Mo90}, that they are also \emph{necessary}.
He considered a minimal fragment of CCS, including only the inactive process, action prefixing, nondeterministic choice and interleaving, and proved that, even in the presence of a single action, bisimilarity does not afford a finite ground-complete axiomatisation over that language.
Moller's proof technique was then used to show that the same negative result holds if we replace $~\!\lmerge$ and $|$ with the so-called \emph{Hennessy's merge} \cite{He88}, which denotes an asymmetric interleaving with communication, or, more generally, with a single binary auxiliary operator satisfying three assumptions given in \cite{ACFIL21}.
The aforementioned works considered equational characterisations of $\mathbin{\|}$ modulo strong bisimilarity.
However, a plethora of behavioural congruences have been proposed in the literature, corresponding to different levels of abstraction from the information on process execution.
Hence, another chapter in the saga consisted in extending the studies recalled above to the behavioural congruences in van Glabbeek's linear time-branching time spectrum \cite{vG90}.
The work \cite{ACILP20} delineated the \emph{boundary} between finite and non-finite axiomatisability of $\mathbin{\|}$ modulo all the congruences in the spectrum.
\subparagraph{Our contribution: branching bisimulation congruence.}
Some information on process behaviour can either be considered irrelevant or be unavailable to an external observer.
\emph{Weak behavioural semantics} have been introduced to study the effects of these unobservable (or \emph{silent}) actions, usually denoted by $\tau$, on the observable behaviour of processes, each semantics considering a different level of abstraction.
A taxonomy of weak semantics is given in \cite{vG93}, and studies on the equational theories of various of these semantics have been carried out over the algebra BCCSP, which consists of the basic operators from CCS and CSP \cite{Ho85} but does not include $\mathbin{\|}$ (see, among others, \cite{AdFGI14,CFvG08,dNH83,vGW96,HM85}).
A finite, ground-complete axiomatisation of parallel composition modulo \emph{rooted weak bisimilarity} (also known as \emph{observational congruence} \cite{HM85}) is provided by Bergstra and Klop in \cite{BK85} over the algebra $\text{ACP}_\tau$ that includes the auxiliary operators $~\!\lmerge$ and $|$.
To the best of our knowledge, the only study on the axiomatisability of CCS's $\mathbin{\|}$ over open terms modulo weak congruences is the negative result from \cite{AACIL21}, which shows that a class of weak congruences (including rooted weak bisimilarity) does not afford a finite, complete axiomatisation over the open terms of the minimal fragment of CCS with interleaving.
In this paper we focus on \emph{branching bisimilarity} \cite{vGW89}, which generalises strong bisimilarity to abstract away from $\tau$-steps of terms while preserving their \emph{branching structure} \cite{vGW89,vGW96}, and its \emph{rooted} version, which is a congruence with respect to CCS operators.
As a first main contribution, we show that \emph{rooted branching bisimilarity affords no finite ground-complete axiomatisation over CCS}.
To this end, we adapt the proof-theoretic technique used by Moller to prove the corresponding negative result for strong bisimilarity.
We remark that, even though the general proof strategy is a natural extension of Moller's, our proof requires a number of original, non-trivial technical results on (rooted) branching bisimilarity.
In particular, we observe that equational proofs of $\tau$-free equations might involve terms having occurrences of $\tau$ in some intermediate steps (see, e.g., page 175 of Moller’s thesis \cite{Mo89}), and our proof of the negative result for rooted branching bisimilarity will account for those uses of $\tau$, thus making our results special for the considered weak congruence.
Moreover, as an intermediate step in our proof, we establish a result of independent interest: we show that \emph{each CCS process has a unique decomposition into indecomposable processes modulo branching bisimilarity}.
A similar result was proven in \cite{Bas16}, but only for interleaving parallel composition.
Here, we extend this result to the full merge operator, including thus the possibility of communication between the parallel components.
Having established the negative result, a natural question is whether the use of the auxiliary operators from \cite{BK84b} can help us to obtain an equational basis for rooted branching bisimilarity.
Hence, as our second main contribution, we consider the language $\text{CCS}_{\text{LC}}$, namely CCS enriched with $~\!\lmerge$ and $|$, and \emph{we provide a complete axiomatisation for rooted branching bisimilarity over $\text{CCS}_{\text{LC}}$ that is finite when so is the set of actions over which terms are defined}.
This axiomatisation is obtained by extending the complete axiom system for strong bisimilarity over $\text{CCS}_{\text{LC}}$ from \cite{AFIL09} with axioms expressing the behaviour of $~\!\lmerge$ and $|$ in the presence of $\tau$-actions (from \cite{BK85}), and with the suitable $\tau$-laws (from \cite{HM85,vGW96}) necessary to deal with rooted branching bisimilarity.
Specifically, we will see that we can express equationally the fact that left merge and communication merge distribute over choice (left merge in one argument, communication merge in both), thus allowing us to expand the behaviour of the parallel components using only a finite number of axioms, regardless of their size.
A key step in the proof of the completeness result consists in another intermediate original contribution of this work: the definition of the semantics of \emph{open} $\text{CCS}_{\text{LC}}$ terms.
Our contribution can then be summarised as follows:
\begin{enumerate}
\item We show that every branching equivalence class of CCS processes has a unique parallel decomposition into indecomposables.
\item We prove that rooted branching bisimilarity admits no finite equational axiomatisation over CCS.
\item We define the semantics of open $\text{CCS}_{\text{LC}}$ terms.
\item We provide a (finite) complete axiomatisation for $\sim_\mathtt{RBB}$ over $\text{CCS}_{\text{LC}}$.
\end{enumerate}
\section{Background}
\label{sec:background}
\subparagraph{Labelled transition systems}
As semantic model we consider classic \emph{labelled transition systems} \cite{Ke76}.
We assume a non-empty set of action names $\mathcal{A}$, and we let $\overline{\mathcal{A}}$ denote the set of action co-names, i.e., $\overline{\mathcal{A}}=\{\overline{a} \mid a \in \mathcal{A}\}$.
As usual, we postulate that $\overline{\overline{a}}=a$ and $a \neq \overline{a}$ for all $a \in \mathcal{A}$.
Then, we define $\mathcal{A}_\tau = \mathcal{A} \cup \overline{\mathcal{A}} \cup \{\tau\}$, where $\tau \not \in \mathcal{A}\cup\overline{\mathcal{A}}$.
Henceforth, we let $\mu,\nu,\dots$ range over actions in $\mathcal{A}_\tau$, and $\alpha,\beta,\dots$ range over actions in $\mathcal{A} \cup \overline{\mathcal{A}}$.
\begin{definition}
[Labelled Transition System]
\label{Def:lts}
A {\sl labelled transition system} (LTS) is a triple $(\mathbf{P},\mathcal{A}_\tau,\trans[])$, where $\mathbf{P}$ is a set of \emph{processes} (or \emph{states}), $\mathcal{A}_\tau$ is a set of {\sl actions}, and ${\trans[]} \subseteq \mathbf{P} \times \mathcal{A}_\tau \times \mathbf{P}$ is a ({\sl labelled}) {\sl transition relation}.
\end{definition}
As usual, we use $p \trans[\mu] p'$ in lieu of $(p,\mu,p') \in {\trans[]}$.
For each $p \in \mathbf{P}$ and $\mu \in \mathcal{A}_\tau$, we write $p \trans[\mu]$ if $p \trans[\mu] p'$ holds for some $p'$, and $p \ntrans[\mu]$ otherwise.
The \emph{initials} of $p$ are the actions that label the outgoing transitions of $p$, that is, $\mathrm{init}(p) = \{\mu \in \mathcal{A}_\tau \mid p \trans[\mu] \}$.
\subparagraph{The language CCS}
We consider the recursion, relabelling and restriction free fragment of Milner's CCS~\cite{Mi89}, which for simplicity we still call CCS, given by the following grammar:
\[
t ::=\; \mathbf{0} \;|\;
x \;|\;
\mu.t \;|\;
t+t \;|\;
t \mathbin{\|} t
\enspace ,
\]
where $x$ is a variable drawn from a countably infinite set $\mathcal{V}$ disjoint from $\mathcal{A}_\tau$, and $\mu \in \mathcal{A}_\tau$.
We use the \emph{Structural Operational Semantics} (SOS) framework \cite{Pl81} to equip processes with an operational semantics.
The SOS rules (or inference rules) for the CCS operators given above are reported in Table~\ref{tab:sos_rules} (symmetric rules for $+$ and $\mathbin{\|}$ are omitted).
\begin{table}[t]
\begin{gather*}
\SOSrule{}{\mu.t \trans[\mu] t}
\qquad
\SOSrule{t \trans[\mu] t'}{t + u \trans[\mu] t'}
\qquad
\SOSrule{t \trans[\mu] t'}{t \mathbin{\|} u \trans[\mu] t' \mathbin{\|} u}
\qquad
\SOSrule{t \trans[\alpha] t' \quad u \trans[\overline{\alpha}] u'}{t \mathbin{\|} u \trans[\tau] t' \mathbin{\|} u'}
\end{gather*}
\caption{\label{tab:sos_rules} The SOS rules for CCS operators
($\mu \in \mathcal{A}_\tau$, $\alpha \in \mathcal{A}\cup\overline{\mathcal{A}}$).
}
\end{table}
We shall use the meta-variables $t,u,v,w$ to range over process terms, and write $\mathrm{var}(t)$ for the collection of variables occurring in the term $t$.
We use a {\em summation} $\sum_{i\in\{1,\ldots,k\}}t_i$ to abbreviate $t_1+\cdots+t_k$, where the empty sum represents $\mathbf{0}$.
We call the term $t_j$ ($j \in \{1,\dots,k\}$) a \emph{summand} of $t = \sum_{i \in \{1,\dots,k\}} t_i$ if it does not have $+$ as head operator.
The {\sl size} of a term $t$, denoted by $\mathrm{size}(t)$, is the number of operator symbols in $t$.
A term is {\em closed} if it does not contain any variables.
Closed terms, or {\sl processes}, will be denoted by $p,q,r$.
Moreover, we omit trailing $\mathbf{0}$'s from terms.
A {\sl (closed) substitution} is a mapping from process variables to (closed) terms.
Substitutions are extended from variables to terms, transitions, and rules in the usual way.
Note that $\sigma(t)$ is closed whenever $\sigma$ is a closed substitution.
We let $\sigma[x\mapsto p]$ denote the substitution that maps the variable $x$ into process $p$ and behaves like $\sigma$ on all other variables.
In particular, $[x \mapsto p]$ denotes the substitution that maps the variable $x$ into process $p$ and behaves like the identity on all other variables.
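The extension of substitutions from variables to terms can be sketched as follows, again using an illustrative tuple encoding of terms (\texttt{("0",)}, \texttt{("var", x)}, \texttt{("pre", mu, t)}, \texttt{("+", t, u)}, \texttt{("par", t, u)}) that is not part of the paper's formal syntax. A substitution is a dictionary from variable names to terms, behaving like the identity outside its domain, as for $[x \mapsto p]$.

```python
# Applying a substitution sigma (a dict from variable names to terms) to a
# term in the illustrative tuple encoding described above.

def apply_subst(sigma, t):
    kind = t[0]
    if kind == "0":
        return t
    if kind == "var":
        return sigma.get(t[1], t)   # identity on variables outside dom(sigma)
    if kind == "pre":
        return ("pre", t[1], apply_subst(sigma, t[2]))
    return (kind, apply_subst(sigma, t[1]), apply_subst(sigma, t[2]))

zero = ("0",)
t = ("+", ("var", "x"), ("pre", "a", ("var", "y")))
sigma = {"x": ("pre", "b", zero)}   # the substitution [x -> b.0]
assert apply_subst(sigma, t) == ("+", ("pre", "b", zero), ("pre", "a", ("var", "y")))
```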
The inference rules in Table~\ref{tab:sos_rules} allow us to derive valid transitions between closed terms.
The operational semantics for our language is then modelled by the LTS whose processes are the closed terms, and whose labelled transitions are those that are provable from the SOS rules.
Henceforth, we let $\mathbf{P}$ denote the set of CCS processes.
We remark that whenever $p \trans[\mu] p'$, then $\mathrm{size}(p) > \mathrm{size}(p')$.
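The SOS rules in Table~\ref{tab:sos_rules} can be read as a recursive procedure computing, for a closed term, all derivable transitions. The sketch below does so for the tuple encoding \texttt{("0",)}, \texttt{("pre", mu, t)}, \texttt{("+", t, u)}, \texttt{("par", t, u)}; co-names are written with a leading \texttt{"\textasciitilde"} and \texttt{"tau"} is the silent action. These conventions are ours, for illustration only.

```python
# Deriving all transitions (mu, t') of a closed CCS term per Table 1:
# prefix, choice (both symmetric rules), interleaving, and communication.

def bar(a):
    """Complementation on visible actions: bar(bar(a)) == a."""
    return a[1:] if a.startswith("~") else "~" + a

def transitions(t):
    kind = t[0]
    if kind == "0":
        return set()
    if kind == "pre":                        # prefix rule
        return {(t[1], t[2])}
    if kind == "+":                          # choice, both sides
        return transitions(t[1]) | transitions(t[2])
    if kind == "par":                        # interleaving + communication
        l, r = t[1], t[2]
        res = {(mu, ("par", l2, r)) for (mu, l2) in transitions(l)}
        res |= {(mu, ("par", l, r2)) for (mu, r2) in transitions(r)}
        res |= {("tau", ("par", l2, r2))
                for (a, l2) in transitions(l)
                for (b, r2) in transitions(r)
                if a != "tau" and b == bar(a)}
        return res
    raise ValueError(kind)

zero = ("0",)
p = ("par", ("pre", "a", zero), ("pre", "~a", zero))   # a.0 || ~a.0
assert {mu for (mu, _) in transitions(p)} == {"a", "~a", "tau"}
```

Note that each recursive call is on a strictly smaller term, mirroring the remark that $p \trans[\mu] p'$ implies $\mathrm{size}(p) > \mathrm{size}(p')$.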
\subparagraph{Branching bisimilarity}
\emph{Branching bisimilarity} is a bisimulation-based behavioural equivalence that abstracts away from computation steps in processes that are deemed unobservable, while preserving their \emph{branching structure}.
The abstraction is achieved by labelling these computation steps with $\tau$, and giving $\tau$-labelled transitions a special treatment in the definition of the behavioural equivalence.
Preservation of the branching structure is mainly due to the \emph{stuttering} nature of branching bisimulation, which guarantees that the behaviour of a term is preserved in the execution of a sequence of silent steps \cite{vGW89,vGW96}.
Let $\trans[\varepsilon]$ denote the reflexive and transitive closure of the transition $\trans[\tau]$.
\begin{definition}
[Branching bisimilarity]
\label{def:bb}
Let $(\mathbf{P},\mathcal{A}_\tau,\trans[])$ be an LTS.
\emph{Branching bisimilarity}, denoted by $\sim_\mathtt{BB}$, is the largest symmetric relation over $\mathbf{P}$ such that, whenever $p \sim_\mathtt{BB} q$, if $p \trans[\mu] p'$, then either:
\begin{itemize}
\item $\mu = \tau$ and $p' \sim_\mathtt{BB} q$, or
\item there are processes $q',q''$ such that $q \trans[\varepsilon] q'' \trans[\mu] q'$, $p \sim_\mathtt{BB} q''$, and $p' \sim_\mathtt{BB} q'$.
\end{itemize}
\end{definition}
Branching bisimilarity satisfies the \emph{stuttering property} \cite[Lemma 2.5]{vGW96}:
\emph{Assume that $p \sim_\mathtt{BB} q$.
Whenever $p \trans[\tau] p_1 \trans[\tau] \dots \trans[\tau] p_n$ and $p_n \sim_\mathtt{BB} q$, for some $n \ge 1$, then $p_i \sim_\mathtt{BB} q$ for all $i=1,\dots,n-1$.}
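On a finite LTS, $\sim_\mathtt{BB}$ can be computed as a greatest fixpoint: start from the full relation and repeatedly discard pairs violating the transfer condition of Definition~\ref{def:bb}. The following Python sketch does exactly this; it is a naive illustration under our ad hoc encoding (triples $(p,\mu,p')$, silent action \texttt{"tau"}), not an efficient algorithm.

```python
# Naive greatest-fixpoint computation of branching bisimilarity on a
# finite LTS given as a set of triples (p, mu, p').

def eps(lts, p):
    """Reflexive-transitive closure of tau-steps from p (the relation -eps->)."""
    seen, todo = {p}, [p]
    while todo:
        q = todo.pop()
        for (r, mu, r2) in lts:
            if r == q and mu == "tau" and r2 not in seen:
                seen.add(r2); todo.append(r2)
    return seen

def branching_bisimilarity(lts, states):
    R = {(p, q) for p in states for q in states}   # start from the full relation
    def ok(p, q, R):
        for (r, mu, p2) in lts:
            if r != p:
                continue
            if mu == "tau" and (p2, q) in R:       # first clause of Def. 2.3
                continue
            if any((q2, mu, q1) in lts and (p, q2) in R and (p2, q1) in R
                   for q2 in eps(lts, q) for q1 in states):
                continue                           # second clause: q -eps-> q'' -mu-> q'
            return False
        return True
    changed = True
    while changed:                                 # refine until stable
        R2 = {(p, q) for (p, q) in R if ok(p, q, R) and ok(q, p, R)}
        changed, R = (R2 != R), R2
    return R

lts = {("p", "tau", "p1"), ("p1", "a", "p2"), ("q", "a", "q1")}
states = {"p", "p1", "p2", "q", "q1"}
R = branching_bisimilarity(lts, states)
assert ("p", "q") in R          # the initial tau-step of p is absorbed
assert ("p", "p2") not in R
```

In this example $p \sim_\mathtt{BB} q$, yet the two states are told apart once roots are taken into account, since the initial $\tau$-step of $p$ cannot be matched by $q$; this is exactly the distinction drawn by the root condition discussed below.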
To guarantee compositional reasoning over a process language, we require a behavioural equivalence $\sim$ to be a \emph{congruence} with respect to all language operators.
This consists in verifying that, for all $n$-ary operators $f$:
\[
\text{if } t_i \sim t_i' \text{ for all } i = 1,\dots,n, \text{ then } f(t_1,\dots,t_n) \sim f(t_1',\dots,t_n').
\]
It is well known that branching bisimilarity is an equivalence relation \cite{vGW96,Ba96}.
Moreover, action prefixing and parallel composition satisfy the \emph{simple BB cool rule format} \cite{vG11} and hence $\sim_\mathtt{BB}$ is compositional with respect to those operators.
However, $\sim_\mathtt{BB}$ is not a congruence with respect to nondeterministic choice.
To remedy this inconvenience, the \emph{root condition} is introduced:
\emph{rooted branching bisimilarity} behaves like strong bisimilarity on the initial transitions, and like branching bisimilarity on subsequent transitions.
\begin{definition}
[Rooted branching bisimilarity]
\label{def:rbb}
\emph{Rooted branching bisimilarity}, denoted by $\sim_\mathtt{RBB}$, is the largest symmetric relation over $\mathbf{P}$ such that, whenever $p \sim_\mathtt{RBB} q$, if $p \trans[\mu] p'$, then there is a process $q'$ such that $q \trans[\mu] q'$ and $p' \sim_\mathtt{BB} q'$.
\end{definition}
It is well known that rooted branching bisimilarity is an equivalence relation \cite{vGW96,Ba96}, and that $\sim_\mathtt{RBB}$ is a congruence over CCS (see, e.g., \cite{vG11}).
\subparagraph{Equational Logic}
An \emph{axiom system} $\mathcal{E}$ is a collection of (\emph{process}) \emph{equations} $t \approx u$ over the considered language, thus CCS in this paper.
An equation $t \approx u$ is \emph{derivable} from an axiom system $\mathcal{E}$, notation $\mathcal{E} \vdash t \approx u$, if there is an \emph{equational proof} for it from $\mathcal{E}$, namely if $t \approx u$ can be inferred from the axioms in $\mathcal{E}$ using the \emph{rules} of \emph{equational logic}.
The rules over CCS are reported in Table~\ref{tab:equational_logic}.
\begin{table}[t]
\begin{gather*}
\scalebox{0.9}{($e_1$)}\; t \approx t
\qquad
\scalebox{0.9}{($e_2$)}\; \frac{t \approx u}{u \approx t}
\qquad
\scalebox{0.9}{($e_3$)}\; \frac{{t \approx u ~~ u \approx v}}{{t \approx v}}
\qquad
\scalebox{0.9}{($e_4$)}\; \frac{{t \approx u}}{{\sigma(t) \approx \sigma(u)}} \\[.2cm]
\scalebox{0.9}{($e_5$)}\; \frac{t \approx u}{\mu. t \approx \mu. u}
\qquad
\scalebox{0.9}{($e_6$)}\; \frac{t \approx u~~ t' \approx u'}{t+t' \approx u+u'}
\qquad
\scalebox{0.9}{($e_7$)}\; \frac{t \approx u~~ t' \approx u'}{t\mathbin{\|} t' \approx u\mathbin{\|} u'}
\enspace .
\end{gather*}
\caption{\label{tab:equational_logic} The rules of equational logic}
\end{table}
We assume, without loss of generality, that the substitution rule is only applied on equations $(t \approx u) \in \mathcal{E}$.
In this case, $\sigma(t) \approx \sigma(u)$ is called a {\em substitution instance} of an axiom in $\mathcal{E}$.
Moreover, by postulating that for each axiom in $\mathcal{E}$ also its symmetric counterpart is present in $\mathcal{E}$, one may assume that the symmetry rule is never used in equational proofs.
\begin{table*}[t]
\setlength{\tabcolsep}{15pt}
\centering
\begin{tabular}{llll}
\multicolumn{3}{l}{Some axioms for bisimilarity over CCS:} \\[.2cm]
\multicolumn{2}{l}{$\scalebox{0.85}{A0}\quad x + \mathbf{0} \approx x$}
&
\multicolumn{2}{l}{$\scalebox{0.85}{P0}\quad x \mathbin{\|} \mathbf{0} \approx x$}
\\
\multicolumn{2}{l}{$\scalebox{0.85}{A1}\quad x+y \approx y+x$}
&
\multicolumn{2}{l}{$\scalebox{0.85}{P1}\quad x \mathbin{\|} y \approx y \mathbin{\|} x$}\\
\multicolumn{2}{l}{$\scalebox{0.85}{A2}\quad (x+y)+z \approx x+(y+z)$}
&
\multicolumn{2}{l}{$\scalebox{0.85}{P2}\quad (x \mathbin{\|} y) \mathbin{\|} z \approx x \mathbin{\|} (y \mathbin{\|} z)$}\\
\multicolumn{2}{l}{$\scalebox{0.85}{A3}\quad x + x \approx x$}
\\
\hline
\hline\\
\multicolumn{3}{l}{Additional axioms for rooted branching bisimilarity over CCS:} \\[.2cm]
\multicolumn{2}{l}{$\scalebox{0.85}{TB}\quad \mu (\tau(x+y) + y) \approx \mu (x +y)$}
&
\multicolumn{2}{l}{$\scalebox{0.85}{T1}\quad \mu \tau x \approx \mu x$}
\end{tabular}
\caption{\label{tab:axioms_b} Some axioms for rooted branching bisimilarity.}
\end{table*}
We are interested in equations that are valid modulo some congruence relation $\sim$ over terms.
The equation $t \approx u$ is said to be \emph{sound} modulo $\sim$ if $\sigma(t) \sim \sigma(u)$ for all closed substitutions $\sigma$.
For simplicity, if $t \approx u$ is sound, then we write $t \sim u$.
An axiom system is \emph{sound} modulo $\sim$ if, and only if, all of its equations are sound modulo $\sim$.
Conversely, we say that $\mathcal{E}$ is \emph{complete} modulo $\sim$ if $t \sim u$ implies $\mathcal{E} \vdash t \approx u$ for all terms $t,u$.
If we restrict ourselves to consider only equations over closed terms then $\mathcal{E}$ is said to be \emph{ground-complete} modulo $\sim$.
We say that $\sim$ has a finite, (ground-)complete axiomatisation if there is a finite axiom system $\mathcal{E}$ that is sound and (ground-)complete for $\sim$.
Henceforth, we exploit the associativity and commutativity of $+$ and $\mathbin{\|}$ modulo the relevant behavioural equivalences.
The symbol $=$ will then denote equality modulo A1-A2 and P1-P2 in Table~\ref{tab:axioms_b}.
\section{The main results}
\label{sec:main_results}
Our aim is to study the axiomatisability of rooted branching bisimilarity over CCS.
Our investigations produced, as main outcomes, a negative result (Theorem~\ref{thm:rbb_negative}) and a positive one (Theorem~\ref{thm:rbb_complete_ccslc}).
In detail, in the first part of the paper we prove the following theorem:
\begin{restatable}{theorem}{thmrbbnegative}
\label{thm:rbb_negative}
Rooted branching bisimilarity has no finite equational ground-complete axiomatisation over CCS.
\end{restatable}
Given the negative result, it is natural to wonder whether an equational basis for rooted branching bisimilarity can be obtained if we enrich CCS with some auxiliary operators.
Considering the similarities between $\sim_\mathtt{RBB}$ and strong bisimilarity, the principal candidates for this role are the left merge $~\!\lmerge$ and the communication merge $|$ from \cite{BK84b}.
Indeed, we show that if we add those two operators to the syntax of CCS, then we can obtain a complete axiomatisation of rooted branching bisimilarity over the new language, denoted by $\text{CCS}_{\text{LC}}$.
The desired equational basis is given by the axiom system $\mathcal{E}_\mathtt{RBB}$, which is presented fully in Table~\ref{tab:axioms_rbb} in Section~\ref{sec:completeness}.
$\mathcal{E}_\mathtt{RBB}$ is an extension of the complete axiom system for strong bisimilarity over $\text{CCS}_{\text{LC}}$ from \cite{AFIL09} with axioms expressing the behaviour of left merge and communication merge in the presence of $\tau$-actions (taken from \cite{BK85}), and with the suitable $\tau$-laws necessary to deal with rooted branching bisimilarity (taken from \cite{HM85,vGW96}).
Formally, our second main contribution consists in a proof of the following theorem:
\begin{restatable}[Completeness]{theorem}{thmrbbcompleteccslc}
\label{thm:rbb_complete_ccslc}
Let $t,u$ be $\text{CCS}_{\text{LC}}$ terms.
If $t \sim_\mathtt{RBB} u$, then $\mathcal{E}_\mathtt{RBB} \vdash t \approx u$.
\end{restatable}
We will also argue that this axiomatisation is finite when so is the set of actions.
Hence, when $\mathcal{A}$ is finite, $\text{CCS}_{\text{LC}}$ modulo $\sim_\mathtt{RBB}$ is finitely based, unlike CCS.
Considering the amount of technical results that we will need to fulfil our objectives, we devote Section~\ref{sec:roadmap} to a presentation of the proof strategy that we will apply to obtain Theorem~\ref{thm:rbb_negative}.
Sections~\ref{sec:decomposition}--\ref{sec:negative_result} then present the formalisation of the ideas discussed in that section.
Similarly, in Section~\ref{sec:completeness_roadmap} we give a high-level description of the approach that we will follow to prove Theorem~\ref{thm:rbb_complete_ccslc}.
The technical development of the proof is then reported in Sections~\ref{sec:configurations_bis}--\ref{sec:completeness}.
All the complementary results needed to prove the two theorems are reported in the Appendix.
\section{Proof strategy for Theorem~\ref{thm:rbb_negative}}
\label{sec:roadmap}
In this section we present the proof strategy we will apply to obtain Theorem~\ref{thm:rbb_negative}.
Our proof follows the so-called \emph{proof-theoretic approach} to non-finite-axiomatisability results, whose use in the field of process algebra stems from \cite{Mo89,Mo90,Mo90a}, where Moller proved that CCS modulo strong bisimilarity is not finitely based.
In the proof-theoretic approach, the idea is to identify a specific property of terms parametric in $n \ge 0$, say $\mathbb{P}_n$, and show that if $\mathcal{E}$ is an arbitrary finite axiom system that is sound with respect to $\sim_\mathtt{RBB}$, then $\mathbb{P}_n$ is preserved by provability from $\mathcal{E}$ when $n$ is ``\emph{large enough}''.
Next, we exhibit an infinite family of equations $\{\mathfrak{e}_n \mid n \ge 0\}$ over closed terms that are all sound modulo $\sim_\mathtt{RBB}$, but are such that only one side of $\mathfrak{e}_n$ satisfies $\mathbb{P}_n$, for each $n \ge 0$.
In particular, this implies that whenever $n$ is ``large enough'' then the sound equation $\mathfrak{e}_n$ cannot be proved from $\mathcal{E}$.
Since $\mathcal{E}$ is an arbitrary finite sound axiom system, it follows that no finite sound axiomatisation can prove all the equations in the family $\{\mathfrak{e}_n \mid n \ge 0\}$ and therefore no finite sound axiomatisation is ground-complete for CCS modulo $\sim_\mathtt{RBB}$.
\subparagraph{The choice of $\mathbb{P}_n$ and the family of equations}
In \cite{Mo89,Mo90,Mo90a} Moller applied the proof method sketched above to prove that strong bisimilarity has no finite, complete axiomatisation over CCS.
The key idea underlying this result is that, since $\mathbin{\|}$ does not distribute over $+$ in either of its arguments modulo strong bisimilarity, no finite, sound axiom system can ``\emph{expand}'' the initial behaviour of process $a \mathbin{\|} \sum_{i = 1}^n a^i$ (where $a^i = aa^{i-1}$ for each $i = 1,\dots,n$, with $a^0 = \mathbf{0}$) when $n$ is large.
Since, by definition, rooted branching bisimilarity behaves exactly like strong bisimilarity on the first step, and parallel composition does not distribute over choice in either of its arguments, modulo $\sim_\mathtt{RBB}$, it is natural to exploit a similar strategy to prove Theorem~\ref{thm:rbb_negative}.
In detail, we will consider, for each $n \ge 2$, the process $p_n = \sum_{i = 2}^n aa^{\le i}$, where $a^{\le i} = \sum_{j = 1}^i a^j$ for each $i = 2,\dots,n$.
Then, for each $n \ge 2$, the property $\mathbb{P}_n$ will consist in having a summand rooted branching bisimilar to the process $a \mathbin{\|} p_n$, and we will show that, when $n$ is large enough, $\mathbb{P}_n$ is an invariant under provability from an arbitrary finite, sound axiom system (Theorem~\ref{thm:rbb_preserves_property}).
Hence, the sound equation
$
\mathfrak{e}_n \;\colon\; a \mathbin{\|} p_n \,\approx\, ap_n + \sum_{i = 2}^n a(a \mathbin{\|} a^{\le i})
$
cannot be derived from $\mathcal{E}$ because its right-hand side has no summand that is rooted branching bisimilar to $a \mathbin{\|} p_n$, unlike its left-hand side.
Therefore no finite sound axiom system can prove the infinite family of equations $\{\mathfrak{e}_n \mid n \ge 2\}$, yielding the desired negative result.
In proving that $\mathbb{P}_n$ is invariant under provability, one pivotal ingredient will be the fact that processes $p_n$ and $a^{\le i}$, for $n \ge 2$ and $i \in \{2,\dots,n\}$, are \emph{indecomposable}.
The existence of a unique parallel decomposition into indecomposable processes modulo \emph{branching bisimilarity} over CCS with \emph{interleaving parallel composition} was studied in \cite{Bas16}.
In Section~\ref{sec:indecomposables}, we extend the result from \cite{Bas16} to the full merge operator, thus including communication (Proposition~\ref{prop:unique_par}).
\subparagraph{The choice of $n$}
The choice of a sufficiently large $n$ plays a crucial
role in proving that $\mathbb{P}_n$ is an invariant under provability from a finite, sound axiom system $\mathcal{E}$ (Theorem~\ref{thm:rbb_preserves_property}).
The key step in that proof deals with the case in which $p \approx q$ is a substitution instance of an equation in $\mathcal{E}$ (Proposition~\ref{prop:rbb_substitution_case}), i.e., $p = \sigma(t)$, $q = \sigma(u)$, and $t \approx u \in \mathcal{E}$ for some terms $t, u$ and closed substitution $\sigma$.
In this case, assuming that $n > \mathrm{size}(t)$, we can prove that if $p = \sigma(t)$ satisfies $\mathbb{P}_n$ then this is due to the behaviour of $\sigma(x)$ for some variable $x$.
In order to reach this conclusion, in Section~\ref{sec:decomposition}, we study how the behaviour of closed instances of terms may depend on the behaviour of the closed instances of variables occurring in them.
Moreover, we show that if $t \approx u$ is sound modulo rooted branching bisimilarity and $x$ occurs in $t$, then it occurs also in $u$ (Proposition~\ref{prop:same_var}).
Hence, we can infer that $\sigma(x)$ triggers in $\sigma(u)$ the same behaviour that it induced in $\sigma(t)$, and thus that $q = \sigma(u)$ satisfies $\mathbb{P}_n$.
All the additional properties of process $a \mathbin{\|} p_n$ used to achieve this conclusion are presented in Appendix~\ref{sec:preliminaries}.
\section{Decomposing the semantics of terms}
\label{sec:decomposition}
In the proofs to follow, we shall sometimes need to establish a correspondence between the behaviour of open terms and that of their closed instances.
In detail, we are interested in the correspondence between a transition $\sigma(t) \trans[\mu] p$, for some term $t$, closed substitution $\sigma$, action $\mu$, and process $p$, and the behaviour of $t$ and that of $\sigma(x)$, for each variable $x$ occurring in $t$.
The simplest case is a direct application of the operational semantics in Table~\ref{tab:sos_rules}.
\begin{lemma}
\label{lem:substitution}
For all terms $t,t'$, substitution $\sigma$, and $\mu \in \mathcal{A}_\tau$, if $t \trans[\mu] t'$ then $\sigma(t) \trans[\mu] \sigma(t')$.
\end{lemma}
Let us focus now on the role of variables.
A transition $\sigma(t) \trans[\mu] p$ may also derive from the initial behaviour of some closed term $\sigma(x)$, provided that the collection of initial moves of $\sigma(t)$ depends, in some formal sense, on that of the closed term substituted for the variable $x$.
In this case, we say that $x$ \emph{triggers the behaviour} of $t$.
To fully describe this situation, we introduce an auxiliary transition relation over open terms.
The notion of {\sl configuration} over terms, which stems from \cite{AFIN06}, will play an important role in their definition.
The presence of communication in CCS entails a complex definition of the semantics of configurations.
In particular, it is necessary to introduce a fresh set of variables $\mathcal{V}_{\mathcal{A}_\tau} = \{x_\mu \mid x \in \mathcal{V}, \mu \in \mathcal{A}_\tau\}$, disjoint from $\mathcal{V}$ and from the set of terms.
Intuitively, the symbol $x_\mu$ denotes that the closed term substituted for an occurrence of variable $x$ has begun its execution (expressed in terms of a $\mu$-action), and thus contributes to triggering the behaviour of the term in which $x$ occurs (see Example~\ref{ex:variabili_vi_odio} below).
Moreover, we also need to introduce special labels and subscripts for the auxiliary transitions over configurations, which will be of the form $c \trans[\ell]_\rho c'$.
Briefly, the label $\ell$ is used to keep track of the variables that trigger the transition $c \trans[\ell]_\rho c'$.
The subscript $\rho$, instead, will allow us to correctly define the semantics of communication: it will allow us to distinguish a $\tau$-action directly performed by (the term substituted for) a variable $x$ (transition $c \trans[(x)]_\tau c'$, with $\rho = \tau$), from a $\tau$-action resulting from the communication of $x$ with a subterm of the configuration (transition $c \trans[(x)]_{\alpha,\tau} c'$, with $\rho = \alpha,\tau$, where $\alpha$ is the action performed by the term substituted for $x$).
\emph{CCS configurations} are defined over the set of variables $\mathcal{V}_{\mathcal{A}_\tau}$ and CCS terms.
\begin{definition}
The collection of {\sl CCS configurations}, denoted by $\mathcal{C}$, is given by:
\[
c :: = \; x_\mu \quad | \quad
t \quad | \quad
c \mathbin{\|} c
\enspace ,
\quad
\text{ where } t \text{ is a term, and } x_\mu \in \mathcal{V}_{\mathcal{A}_\tau}.
\]
\end{definition}
\begin{table}[t]
\begin{gather*}
\scalebox{0.9}{$(a_1)$}\; \SOSrule{}{x \trans[(x)]_{\mu} x_{\mu}}
\qquad
\scalebox{0.9}{$(a_2)$}\; \SOSrule{t \trans[\ell]_\rho c}
{t+u \trans[\ell]_\rho c}
\qquad
\scalebox{0.9}{$(a_3)$}\; \SOSrule{t \trans[\ell]_\rho c}
{t \mathbin{\|} u \trans[\ell]_{\rho} c \mathbin{\|} u}
\\[.2cm]
\scalebox{0.9}{$(a_4)$}\; \SOSrule{t \trans[(x)]_\alpha c \quad u \trans[(y)]_{\overline{\alpha}} c'}
{t \mathbin{\|} u \trans[(x,y)]_{\tau} c \mathbin{\|} c'}
\qquad
\scalebox{0.9}{$(a_5)$}\; \SOSrule{t \trans[(x)]_\alpha c \quad u \trans[\overline{\alpha}] u'}
{t \mathbin{\|} u \trans[(x)]_{\,\alpha,\tau} c \mathbin{\|} u'}
\qquad
\scalebox{0.9}{$(a_6)$}\; \SOSrule{t \trans[\alpha] t' \quad u \trans[(x)]_{\overline{\alpha}} c}
{t \mathbin{\|} u \trans[(x)]_{\,\overline{\alpha},\tau} t' \mathbin{\|} c}
\end{gather*}
\caption{\label{tab:ell_rules} Inference rules for the transition relation $\trans[\ell]_\rho$ ($\mu \in \mathcal{A}_\tau$, $\alpha \in \mathcal{A}\cup\overline{\mathcal{A}}$).}
\end{table}
The auxiliary transitions of the form $\trans[\ell]_{\rho}$ are then formally defined via the inference rules in Table~\ref{tab:ell_rules}, where we omit the symmetric versions of rules ($a_2$), ($a_4$), ($a_5$) and ($a_6$).
We have that $\rho \in \mathcal{A}_\tau \cup ((\mathcal{A} \cup \overline{\mathcal{A}}) \times \{\tau\})$, whereas the label $\ell$ can be either of the form $(x)$ or $(x,y)$, for some variables $x,y \in \mathcal{V}$.
Given a variable $x$ and a label $\ell$, we write $x \in \ell$ if $x$ occurs in $\ell$.
The distinguished variables $x_\mu$ allow us to keep track of which variable and action trigger the behaviour of the term, and they also allow us to present substitutions in an intuitive fashion.
As explained in the following example, it is precisely because of substitutions (and communication) that we need to make the action $\mu$ explicit in $x_\mu$.
\begin{example}
\label{ex:variabili_vi_odio}
Let $x \in \mathcal{V}$ and consider the term $x \mathbin{\|} x$.
By rules ($a_1$) and ($a_4$) in Table~\ref{tab:ell_rules}, we have that $x \mathbin{\|} x \trans[(x,x)]_\tau x_{\alpha} \mathbin{\|} x_{\overline{\alpha}}$ because $x \trans[(x)]_\alpha x_{\alpha}$ and $x \trans[(x)]_{\overline{\alpha}} x_{\overline{\alpha}}$.
Hence, given any substitution $\sigma$ such that $\sigma(x) \trans[\alpha] p_1$ and $\sigma(x) \trans[\overline{\alpha}] p_2$, for some terms $p_1,p_2$, we want to be able to correctly infer that $\sigma(x) \mathbin{\|} \sigma(x) \trans[\tau] p_1 \mathbin{\|} p_2$.
Since the two occurrences of $x$, $x_{\alpha}$ and $x_{\overline{\alpha}}$, can be distinguished by the subscripts, the substitution $\sigma[x_\alpha \mapsto p_1, x_{\overline{\alpha}} \mapsto p_2](x_\alpha \mathbin{\|} x_{\overline{\alpha}}) = p_1 \mathbin{\|} p_2$ is well-defined.
Without the subscripts, it would not have been possible to correctly define the substitution $\sigma$ on the configuration $c$ that is the target of $x \mathbin{\|} x \trans[(x,x)]_\tau c$.
\end{example}
\begin{lemma}
\label{lem:var_to_term}
Let $t$ be a term and $\sigma$ be a closed substitution.
Let $x,y \in \mathcal{V}$.
\begin{enumerate}
\item For any $\mu \in \mathcal{A}_\tau$, if $\sigma(x) \trans[\mu] p$, for some process $p$, and $t \trans[(x)]_\mu c$, for some configuration $c \in \mathcal{C}$, then $\sigma(t) \trans[\mu] \sigma[x_\mu \mapsto p](c)$.
\item \label{item:x_u_to_t} For any $\alpha \in \mathcal{A}\cup\overline{\mathcal{A}}$, if $\sigma(x) \trans[\alpha] p$, for some process $p$, and $t \trans[(x)]_{\alpha,\tau} c$, for some configuration $c\in \mathcal{C}$, then $\sigma(t) \trans[\tau] \sigma[x_\alpha \mapsto p](c)$.
\item For any $\alpha \in \mathcal{A}\cup\overline{\mathcal{A}}$, if $\sigma(x) \trans[\alpha] p_x$, $\sigma(y) \trans[\overline{\alpha}] p_y$, for some processes $p_x,p_y$, and $t \trans[(x,y)]_{\tau} c$, for some configuration $c \in \mathcal{C}$, then $\sigma(t) \trans[\tau] \sigma[x_\alpha \mapsto p_x, y_{\overline{\alpha}} \mapsto p_y](c)$.
\end{enumerate}
\end{lemma}
Lemma~\ref{lem:var_to_term} shows how the auxiliary transitions can be used to derive the behaviour of $\sigma(t)$ from those of the variables in $t$.
We are now interested in analysing the converse situation: we show how a transition $\sigma(t) \trans[\mu] p$ can stem from transitions of the term $t$ and of the process $\sigma(x)$, for $x \in \mathrm{var}(t)$.
We limit ourselves to presenting the case of silent actions $\sigma(t) \trans[\tau] p$, as it requires a detailed analysis.
The case of transitions labelled with observable actions is simpler and can be found as Lemma~\ref{lem:closed2open_alpha} in Appendix~\ref{app:decomposition}.
\begin{restatable}{lemma}{lemclosedopentau}
\label{lem:closed2open_tau}
Let $t$ be a term, $\sigma$ be a closed substitution, and $p$ be a process.
If $\sigma(t) \trans[\tau] p$, then one of the following holds:
\begin{enumerate}
\item \label{lem:c2o_prefix}
There is a term $t'$ s.t.\ $t \trans[\tau] t'$ and $\sigma(t') = p$.
\item \label{lem:c2o_x}
There are a variable $x$, a process $q$, and a configuration $c$ s.t.\ $\sigma(x) \trans[\tau] q$, $t \trans[(x)]_\tau c$, and $\sigma[x_\tau \mapsto q](c) = p$.
\item \label{lem:c2o_xu}
There are a variable $x$, a process $q$, and a configuration $c$ s.t., for some $\alpha \in \mathcal{A}\cup\overline{\mathcal{A}}$, $\sigma(x) \trans[\alpha] q$, $t \trans[(x)]_{\alpha,\tau} c$, and $\sigma[x_\alpha \mapsto q](c) = p$.
\item \label{lem:c2o_xy}
There are variables $x,y$, processes $q_x,q_y$ and a configuration $c$ s.t., for some $\alpha \in \mathcal{A}\cup\overline{\mathcal{A}}$, $\sigma(x) \trans[\alpha] q_x$, $\sigma(y) \trans[\overline{\alpha}] q_y$, $t \trans[(x,y)]_\tau c$, and $\sigma[x_\alpha \mapsto q_x, y_{\overline{\alpha}} \mapsto q_y](c) = p$.
\end{enumerate}
\end{restatable}
\section{Unique parallel decomposition}
\label{sec:indecomposables}
As explained in Section~\ref{sec:roadmap}, our approach for establishing that $\mathbb{P}_n$ is invariant under equational proofs relies on processes having a unique parallel decomposition modulo $\sim_\mathtt{BB}$.
\begin{definition}
[Parallel decomposition modulo $\sim_\mathtt{BB}$]
\label{def:indecomposable}
A process $p$ is \emph{indecomposable} if $p\not\sim_\mathtt{BB}\mathbf{0}$ and $p \sim_\mathtt{BB} p_1 \mathbin{\|} p_2$ implies $p_1 \sim_\mathtt{BB} \mathbf{0}$ or $p_2 \sim_\mathtt{BB} \mathbf{0}$, for all processes $p_1$ and $p_2$. A \emph{parallel decomposition} of a process $p$ is a finite multiset $\lbag p_1,\dots,p_k \rbag$ of indecomposable processes $p_1,\dots,p_k$ such that $p\sim_\mathtt{BB} p_1\mathbin{\|}\cdots\mathbin{\|}p_k$. We say that $p$ has a \emph{unique parallel decomposition} if $p$ has a parallel decomposition $\lbag p_1,\dots,p_k\rbag$ and for every other parallel decomposition $\lbag p_1',\dots,p_\ell'\rbag$ of $p$ there exists a bijection $f:\{1,\dots,k\}\rightarrow\{1,\dots,\ell\}$ such that $p_i\sim_\mathtt{BB} p_{f(i)}'$ for all $1\leq i \leq k$.
\end{definition}
To prove that processes have a unique parallel decomposition we shall exploit a general result stating that a partial commutative monoid has unique decomposition if it can be endowed with a \emph{weak decomposition order} that satisfies \emph{power cancellation} \cite{Bas16}; we shall define and explain the notions below.
Note that, in view of axioms P0--P2, which are (also) sound modulo $\sim_\mathtt{BB}$, the set of processes $\mathbf{P}$ modulo $\sim_\mathtt{BB}$ is a commutative monoid with respect to the binary operation naturally induced by $\mathbin{\|}$ on $\sim_{\mathtt{BB}}$-equivalence classes and the $\sim_{\mathtt{BB}}$-equivalence class of $\mathbf{0}$ as identity element.
We permit ourselves a minor abuse of notation and use $\rightarrow$ to (also) denote the binary relation $\{(p,q)\mid \exists \mu.\ p\trans[\mu]q\}$, and proceed to argue that $\rightarrow$ induces a weak decomposition order satisfying power cancellation on the commutative monoid of processes modulo $\sim_\mathtt{BB}$.
Given any process $p$ and $n \ge 1$, let $p^n$ denote the $n$-fold parallel composition $p \mathbin{\|} p^{n-1}$, with $p^0 = \mathbf{0}$.
We first state some properties of the reflexive-transitive closure $\rightarrow^{*}$ of $\rightarrow$:
\begin{restatable}{proposition}{rttransprops}
\label{prop:rttransprops}
The relation $\rightarrow^{*}$ is an inversely well-founded partial order on processes satisfying the following properties:
\begin{enumerate}
\item \label{item:rttranspropsleast}
For every process $p$ there exists a process $p'$ such that $p\mathrel{\rightarrow^{*}}p'\sim_\mathtt{BB} \mathbf{0}$.
\item \label{item:rttranspropscompatible}
For all processes $p$, $p'$ and $q$, if $p\rightarrow^{*}p'$, then $p\mathbin{\|}q\rightarrow^{*}p'\mathbin{\|}q$ and $q\mathbin{\|}p\rightarrow^{*}q\mathbin{\|}p'$.
\item \label{item:rttranspropsprecompositional}
For all processes $p$, $q$ and $r$, if $p\mathbin{\|}q\rightarrow^{*}r$, then there exist $p'$ and $q'$ such that $p\rightarrow^{*}p'$, $q \rightarrow^{*}q'$ and $r=p'\mathbin{\|}q'$.
\item \label{item:rttranspropsArchimedean}
For all processes $p$ and $q$, if $p\rightarrow^{*}q^n$
for all $n\in\mathbb{N}$, then $q\sim_{\mathtt{BB}}\mathbf{0}$.
\end{enumerate}
\end{restatable}
The following lemma is a direct consequence of the definition of branching bisimilarity.
\begin{lemma}
For all processes $p$, $p'$ and $q$, if $p\sim_{\mathtt{BB}}q$ and $p\rightarrow^{*}p'$, then there exists $q'$ such that $q\rightarrow^{*}q'$ and $p'\sim_{\mathtt{BB}}q'$.
\end{lemma}
By this lemma we can define a binary relation $\preceq$ on $\mathbf{P}/{\sim_{\mathtt{BB}}}$, the set of $\sim_{\mathtt{BB}}$-equivalence classes of processes, by stating that $[p]_{\sim_{\mathtt{BB}}}\preceq [q]_{\sim_{\mathtt{BB}}}$ if, and only if, there exists $p'\in [p]_{\sim_{\mathtt{BB}}}$ such that $q\rightarrow^{*}p'$ (here $[p]_{\sim_{\mathtt{BB}}}$ and $[q]_{\sim_{\mathtt{BB}}}$ denote the $\sim_{\mathtt{BB}}$-equivalence classes of $p$ and $q$, respectively).
The following result is then a straightforward corollary of Proposition~\ref{prop:rttransprops}.
\begin{corollary}
The relation $\preceq$ is a weak decomposition order on $\mathbf{P}/{\sim_{\mathtt{BB}}}$, namely:
\begin{enumerate}
\item it is well-founded, i.e., every non-empty subset of $\mathbf{P}/{\sim_{\mathtt{BB}}}$ has a $\preceq$-minimal element;
\item the identity element $[\mathbf{0}]_{\sim_{\mathtt{BB}}}$ of $\mathbf{P}/{\sim_{\mathtt{BB}}}$ is the least element of $\mathbf{P}/{\sim_{\mathtt{BB}}}$ with respect to $\preceq$, i.e., $[\mathbf{0}]_{\sim_\mathtt{BB}} \preceq [p]_{\sim_\mathtt{BB}}$ for all $p \in \mathbf{P}$;
\item it is compatible, i.e., for all $p, q, r \in \mathbf{P}$ if $[p]_{\sim_\mathtt{BB}} \preceq [q]_{\sim_\mathtt{BB}}$, then $[p \mathbin{\|} r]_{\sim_\mathtt{BB}} \preceq [q \mathbin{\|} r]_{\sim_\mathtt{BB}}$;
\item it is precompositional, i.e., for all $p, q, r \in \mathbf{P}$ we have that $[p]_{\sim_\mathtt{BB}} \preceq [q \mathbin{\|} r]_{\sim_\mathtt{BB}}$ implies $[p]_{\sim_\mathtt{BB}} = [q' \mathbin{\|} r']_{\sim_\mathtt{BB}}$ for some $[q']_{\sim_\mathtt{BB}} \preceq [q]_{\sim_\mathtt{BB}}$ and $[r']_{\sim_\mathtt{BB}} \preceq [r]_{\sim_\mathtt{BB}}$; and
\item it is Archimedean, i.e., for all $p, q \in \mathbf{P}$ we have that $[p^n]_{\sim_\mathtt{BB}} \preceq [q]_{\sim_\mathtt{BB}}$ for all $n \in \mathbb{N}$ implies that $[p]_{\sim_\mathtt{BB}} = [\mathbf{0}]_{\sim_\mathtt{BB}}$.
\end{enumerate}
\end{corollary}
According to \cite[Theorem 34]{Bas16} it now remains to prove that $\preceq$ satisfies power cancellation.
The weak decomposition order $\preceq$ on the commutative monoid of processes modulo $\sim_{\mathtt{BB}}$ satisfies \emph{power cancellation} if for every indecomposable process $p$ and for all processes $q$ and $r$ such that $[p]_{\sim_{\mathtt{BB}}}\not\prec [q]_{\sim_{\mathtt{BB}}},[r]_{\sim_{\mathtt{BB}}}$, for all $k\in\mathbb{N}$, we have that $[p^k\mathbin{\|}q]_{\sim_{\mathtt{BB}}} = [p^k\mathbin{\|}r]_{\sim_{\mathtt{BB}}}$ implies $[q]_{\sim_\mathtt{BB}} = [r]_{\sim_\mathtt{BB}}$.
\begin{restatable}{proposition}{propbbcancellation}
\label{prop:bb_cancellation}
The weak decomposition order $\preceq$ on the commutative monoid of processes modulo $\sim_{\mathtt{BB}}$ satisfies power cancellation.
\end{restatable}
We have now established that $\preceq$ is a weak decomposition order on the commutative monoid of processes modulo $\sim_{\mathtt{BB}}$ that satisfies power cancellation. Thus, with an application of \cite[Theorem 34]{Bas16} we get the following unique parallel decomposition result.
\begin{proposition}
\label{prop:unique_par}
Every process in $\mathbf{P}{}$ has a unique parallel decomposition.
\end{proposition}
In what follows, we shall make use of the following direct consequence of Proposition~\ref{prop:unique_par}.
\begin{corollary}
\label{cor:cancellation}
If $p \mathbin{\|} r \sim_\mathtt{BB} q \mathbin{\|} r$, then $p \sim_\mathtt{BB} q$.
\end{corollary}
\section{Nonexistence of a finite axiomatisation}
\label{sec:negative_result}
We devote this section to proving Theorem~\ref{thm:rbb_negative}.
Following the strategy sketched in Section~\ref{sec:roadmap}, we introduce a particular family of equations on which we will build our negative result:
\begin{align*}
& p_n \;=\; \sum_{i=2}^{n} a a^{\le i} & (n \ge 2)\phantom{.} \\
& \mathfrak{e}_n \colon \quad a \mathbin{\|} p_n \;\approx\; a p_n + \sum_{i=2}^{n} a (a \mathbin{\|} a^{\le i}) & (n \ge 2).
\end{align*}
It is easy to check that each equation $\mathfrak{e}_n$, for $n \ge 2$, is sound modulo rooted branching bisimilarity (as, in particular, it is sound modulo strong bisimilarity).
In order to prove Theorem~\ref{thm:rbb_negative}, we proceed to show that no finite collection of equations over CCS that are sound modulo rooted branching bisimilarity can prove all of the equations $\mathfrak{e}_n$ ($n \ge 2$) from the family given above.
Formally, for each $n \ge 2$, we consider the property
$\mathbb{P}_n$: \textit{having a summand rooted branching bisimilar to } $a \mathbin{\|} p_n$.
Then, we prove the following:
\begin{restatable}{theorem}{thmrbbpreservesproperty}
\label{thm:rbb_preserves_property}
Let $\mathcal{E}$ be a finite axiom system over CCS that is sound modulo $\sim_\mathtt{RBB}$, let
$n$ be larger than the size of each term in the equations in $\mathcal{E}$, and let $p,q$ be closed terms such that $p,q \sim_\mathtt{RBB} a \mathbin{\|} p_n$.
If $\mathcal{E} \vdash p \approx q$ and $p$ satisfies $\mathbb{P}_n$ then so does $q$.
\end{restatable}
The crucial step in the proof of Theorem~\ref{thm:rbb_preserves_property} is delivered by the proposition below, which ensures that the property $\mathbb{P}_n$ ($n \ge 2$) is preserved by the closure under substitutions of equations in a finite, sound axiom system.
Proposition~\ref{prop:rbb_substitution_case} is proved by means of the technical results provided so far, the ones in Appendix~\ref{sec:preliminaries}, and the notion of $\mathbf{0}$-\emph{factor} of a term:
\begin{definition}
\label{Lem:nil-factors}
We say that a term $t$ has a $\mathbf{0}$-\emph{factor} if it contains a subterm of the form $t' \mathbin{\|} t''$, and either $t' \sim_\mathtt{RBB} \mathbf{0}$ or $t'' \sim_\mathtt{RBB} \mathbf{0}$.
\end{definition}
\begin{restatable}{proposition}{proprbbsubstitutioncase}
\label{prop:rbb_substitution_case}
Let $t \approx u$ be an equation over CCS terms that is sound modulo $\sim_\mathtt{RBB}$.
Let $\sigma$ be a closed substitution with $p = \sigma(t)$ and $q = \sigma(u)$.
Suppose that $p$ and $q$ have neither $\mathbf{0}$ summands nor $\mathbf{0}$ factors, and $p,q \sim_\mathtt{RBB} a \mathbin{\|} p_n$ for some $n$ larger than the sizes of $t$ and $u$.
If $p$ satisfies $\mathbb{P}_n$, then so does $q$.
\end{restatable}
Theorem~\ref{thm:rbb_preserves_property} shows the property $\mathbb{P}_n$ to be an invariant under provability from finite sound axiom systems.
As the left-hand side of equation $\mathfrak{e}_n$, i.e., the term $a \mathbin{\|} p_n$, satisfies $\mathbb{P}_n$, whilst the right-hand side, i.e., the term $a p_n + \sum_{i = 2}^{n} a (a \mathbin{\|} a^{\le i})$, does not, we can conclude that the infinite collection of equations $\mathfrak{e}_n$ ($n \ge 2$) cannot be derived from any finite, sound axiom system.
Hence, Theorem~\ref{thm:rbb_negative} follows.
\section{Towards a positive result}
\label{sec:completeness_roadmap}
We now proceed to study the role of the auxiliary operators \emph{left merge} ($~\!\lmerge$) and \emph{communication merge} ($\,|\,$) from \cite{BK84b} in the axiomatisation of parallel composition modulo $\sim_\mathtt{RBB}$.
We will show that by adding them to CCS we can obtain a complete axiomatisation of rooted branching bisimilarity over the new language.
This axiomatisation is finite whenever $\mathcal{A}_\tau$ is.
We denote the language obtained by enriching CCS with $~\!\lmerge$ and $|$ by $\text{CCS}_{\text{LC}}$:
\begin{equation}
\tag{$\text{CCS}_{\text{LC}}$}
t ::=\; \mathbf{0} \;|\;
x \;|\;
\mu.t \;|\;
t+t \;|\;
t \mathbin{\|} t \;|\;
t \lmerge t \;|\;
t ~|~ t
\enspace ,
\end{equation}
where $x \in \mathcal{V}$, and $\mu \in \mathcal{A}_\tau$.
The SOS rules for the $\text{CCS}_{\text{LC}}$ operators are given by the rules in Table~\ref{tab:sos_rules} plus those reported in Table~\ref{tab:sos_rules_ccslc}.
\begin{table}[t]
\begin{gather*}
\SOSrule{t \trans[\mu] t'}{t \lmerge u \trans[\mu] t' \mathbin{\|} u}
\qquad
\SOSrule{t \trans[\alpha] t' \quad u \trans[\overline{\alpha}] u'}{t ~|~ u \trans[\tau] t' \mathbin{\|} u'}
\end{gather*}
\caption{\label{tab:sos_rules_ccslc} Additional SOS rules for $\text{CCS}_{\text{LC}}$ operators ($\mu \in \mathcal{A}_\tau$, $\alpha \in \mathcal{A}\cup\overline{\mathcal{A}}$).
}
\end{table}
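To make the operational reading of these rules concrete, the following Python sketch (not part of the paper's development; the tuple encoding of closed $\text{CCS}_{\text{LC}}$ terms and all function names are our own, purely illustrative choices) computes the outgoing transitions of a closed term, including the rules for $\lmerge$ and $|$ from Table~\ref{tab:sos_rules_ccslc}.

```python
# Hypothetical encoding of closed CCS_LC terms as nested tuples:
#   '0'                inaction
#   ('pre', mu, t)     prefix mu.t  (mu == 'tau' or a visible action such as 'a', '~a')
#   ('sum', t, u)      choice t + u
#   ('par', t, u)      parallel composition t || u
#   ('lmerge', t, u)   left merge
#   ('comm', t, u)     communication merge

def bar(a):
    # complement of a visible action: 'a' <-> '~a'
    return a[1:] if a.startswith('~') else '~' + a

def trans(t):
    """All pairs (action, successor) derivable from the SOS rules."""
    if t == '0':
        return []
    op = t[0]
    if op == 'pre':
        return [(t[1], t[2])]
    if op == 'sum':
        return trans(t[1]) + trans(t[2])
    if op == 'par':
        p, q = t[1], t[2]
        moves = [(m, ('par', p1, q)) for m, p1 in trans(p)]
        moves += [(m, ('par', p, q1)) for m, q1 in trans(q)]
        # synchronisation of complementary visible actions
        moves += [('tau', ('par', p1, q1))
                  for m, p1 in trans(p) if m != 'tau'
                  for n, q1 in trans(q) if n == bar(m)]
        return moves
    if op == 'lmerge':
        # first rule of the table: t |_ u --mu--> t' || u
        return [(m, ('par', p1, t[2])) for m, p1 in trans(t[1])]
    if op == 'comm':
        # second rule of the table: only synchronisations, yielding a tau
        return [('tau', ('par', p1, q1))
                for m, p1 in trans(t[1]) if m != 'tau'
                for n, q1 in trans(t[2]) if n == bar(m)]
    raise ValueError(t)
```

For instance, `trans(('lmerge', ('pre','a','0'), ('pre','b','0')))` yields the single transition `('a', ('par', '0', ('pre','b','0')))`, matching $t \lmerge u \trans[\mu] t' \mathbin{\|} u$; note also that a $\tau$-prefixed left argument gives the communication merge no transitions at all, in accordance with axiom DT3.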
To obtain the desired completeness result, we consider the axiom system $\mathcal{E}_\mathtt{RBB}$ (see Table~\ref{tab:axioms_rbb} in Section~\ref{sec:completeness}), obtained by extending the complete axiom system for strong bisimilarity over $\text{CCS}_{\text{LC}}$ from \cite{AFIL09} with axioms expressing the behaviour of $~\!\lmerge$ and $|$ in the presence of $\tau$-actions (from \cite{BK85}), and with the suitable $\tau$-laws (from \cite{HM85,vGW96}) necessary to deal with rooted branching bisimilarity.
Then, we adjust the semantics of configurations given in Section~\ref{sec:decomposition} to the $\text{CCS}_{\text{LC}}$ setting, and we use it to extend the definition of rooted branching bisimilarity to open $\text{CCS}_{\text{LC}}$ terms (Definition~\ref{def:open_rbb}).
Usually, a behavioural equivalence $\sim$ is defined over processes and is then possibly extended to open terms by saying that $t \sim u$ if{f} $\sigma(t) \sim \sigma(u)$ for all closed substitutions $\sigma$.
However, we adopt the same approach as, e.g., \cite{Mi89b,vG93c,AvGFI96}, and present the definition of $\sim_\mathtt{RBB}$ directly over configurations.
We will show in Section~\ref{sec:configurations_bis} that the two approaches yield the same equivalence relation over terms (Theorem~\ref{thm:rbb_on_open}).
Finally, we apply the strategy used in \cite{AvGFI96} to obtain the completeness of the axiomatisation of prefix iteration with silent moves modulo rooted branching bisimilarity:
\begin{enumerate}
\item\label{step1} We identify \emph{normal forms} for $\text{CCS}_{\text{LC}}$ terms (Definition~\ref{def:rbb_nf_ccslc}) and show that each term can be proven equal to a normal form using $\mathcal{E}_\mathtt{RBB}$ (Proposition~\ref{prop:rbb_nf_ccslc}).
\item\label{step2} We establish a relationship between $\sim_\mathtt{BB}$ and derivability in $\mathcal{E}_\mathtt{RBB}$ (Proposition~\ref{prop:rbb_provable_ccslc}).
\item\label{step3} We show that for all terms $t,u$, if $t \sim_\mathtt{RBB} u$, then $\mathcal{E}_\mathtt{RBB} \vdash t \approx u$ (Theorem~\ref{thm:rbb_complete_ccslc}).
\end{enumerate}
\section{Rooted branching bisimilarity over terms}
\label{sec:configurations_bis}
In this section we discuss the decomposition of the semantics of $\text{CCS}_{\text{LC}}$ terms, and the extension of the definition of (rooted) branching bisimilarity to open $\text{CCS}_{\text{LC}}$ terms.
The first step towards our completeness result consists in providing a semantics for open $\text{CCS}_{\text{LC}}$ terms.
To this end, we need to extend the semantics of configurations given in Section~\ref{sec:decomposition}.
For the sake of readability, we present the syntax of $\text{CCS}_{\text{LC}}$ configurations and the inference rules for variables and summations, even though they are identical to the corresponding ones presented in Section~\ref{sec:decomposition} for CCS.
However, we omit the explanations on the roles of labels $\ell$, $\rho$, and variables $x_\mu$, as those can be found in Section~\ref{sec:decomposition}.
In particular, the use of variables $x_\mu \in \mathcal{V}_{\mathcal{A}_\tau}$ (as explained in Example~\ref{ex:variabili_vi_odio}) remains unchanged.
\begin{definition}
[$\text{CCS}_{\text{LC}}$ configuration]
The collection of {\sl $\text{CCS}_{\text{LC}}$ configurations}, denoted by $\mathcal{C}_{\text{LC}}$, is given by:
\[
c ::= \; x_\mu \quad | \quad
t \quad | \quad
c \mathbin{\|} c
\enspace , \quad
\text{ where } t \text{ is a } \text{CCS}_{\text{LC}} \text{ term, and } x_\mu \in \mathcal{V}_{\mathcal{A}_\tau}.
\]
\end{definition}
\begin{table}[t]
\begin{gather*}
\scalebox{0.9}{$(a'_3)$}\; \SOSrule{t \trans[\ell]_\rho c}
{t \lmerge u \trans[\ell]_{\rho} c \mathbin{\|} u}
\\[.3cm]
\scalebox{0.9}{$(a'_4)$}\; \SOSrule{t \trans[(x)]_\alpha c \quad u \trans[(y)]_{\overline{\alpha}} c'}
{t ~|~ u \trans[(x,y)]_{\tau} c \mathbin{\|} c'}
\quad
\scalebox{0.9}{$(a'_5)$}\; \SOSrule{t \trans[(x)]_\alpha c \quad u \trans[\overline{\alpha}] u'}
{t ~|~ u \trans[(x)]_{\,\alpha,\tau} c \mathbin{\|} u'}
\quad
\scalebox{0.9}{$(a'_6)$}\; \SOSrule{t \trans[\alpha] t' \quad u \trans[(x)]_{\overline{\alpha}} c}
{t ~|~ u \trans[(x)]_{\,\overline{\alpha},\tau} t' \mathbin{\|} c}
\end{gather*}
\caption{\label{tab:ell_rules_ccslc} Inference rules for the transition relation $\trans[\ell]_\rho$ ($\mu \in \mathcal{A}_\tau$, $\alpha \in \mathcal{A}\cup\overline{\mathcal{A}}$).}
\end{table}
The auxiliary transitions of the form $\trans[\ell]_{\rho}$ are formally defined via the inference rules in Table~\ref{tab:ell_rules_ccslc}, where we omit the rules ($a'_1$) and ($a'_2$) for prefixing and choice (which are identical to rules ($a_1$) and ($a_2$) in Table~\ref{tab:ell_rules}, respectively), the rules symmetric to ($a'_2$), ($a'_4$), ($a'_5$) and ($a'_6$), as well as the rules for $\mathbin{\|}$.
We remark that Lemma~\ref{lem:closed2open_alpha} and Lemma~\ref{lem:closed2open_tau} can be easily extended to $\text{CCS}_{\text{LC}}$ to show how a transition $\sigma(t) \trans[\mu] p$ can stem from transitions of the $\text{CCS}_{\text{LC}}$ term $t$ and of the process $\sigma(x)$, for $x \in \mathrm{var}(t)$.
\begin{table}[t]
\[
\scalebox{0.9}{$(c_1)$}\; \SOSrule{}{x_\mu \trans[x_\mu] x_{\mu}}
\quad
\scalebox{0.9}{$(c_2)$}\; \SOSrule{c_1 \trans[x_\mu] c_1'}
{c_1 \mathbin{\|} c_2 \trans[x_\mu] c_1' \mathbin{\|} c_2}
\quad
\scalebox{0.9}{$(c_3)$}\; \SOSrule{c_1 \trans[\mu] c_1'}{c_1 \mathbin{\|} c_2 \trans[\mu] c_1' \mathbin{\|} c_2}
\quad
\scalebox{0.9}{$(c_4)$}\; \SOSrule{c_1 \trans[\ell]_\rho c_1'}
{c_1 \mathbin{\|} c_2 \trans[\ell]_\rho c_1' \mathbin{\|} c_2}
\]
\caption{\label{tab:c_rules} Inference rules completing the operational semantics of $\text{CCS}_{\text{LC}}$ configurations ($\mu \in \mathcal{A}_\tau$).}
\end{table}
Since $\mathcal{V}_{\mathcal{A}_\tau}$ is disjoint from $\mathcal{V}$, we also need to introduce auxiliary rules for the special configuration $x_\mu \in \mathcal{V}_{\mathcal{A}_\tau}$.
These are identified by a proper label $x_\mu$ on the transition and reported in Table~\ref{tab:c_rules} as rules ($c_1$) and ($c_2$).
To conclude our analysis of the decomposition of the semantics of terms, we then need to extend the transition relations $\trans[\mu]$ and $\trans[\ell]_\rho$ to configurations.
This is done by rules ($c_3$) and ($c_4$) in Table~\ref{tab:c_rules}, where their symmetric counterparts have been omitted.
Let $\overset{\xi}\twoheadrightarrow$ range over the possible transitions over configurations, i.e., $\overset{\xi}\twoheadrightarrow$ can be either $\trans[\mu]$, $\trans[\ell]_\rho$, or $\trans[x_\mu]$.
The operational semantics of $\text{CCS}_{\text{LC}}$ configurations is then given by the LTS whose states are configurations in $\mathcal{C}_{\text{LC}}$, whose actions are in $\mathcal{A}_\tau \cup \mathcal{V} \cup \mathcal{V}_{\mathcal{A}_\tau}$, and whose transitions are those that are provable from the rules in Tables~\ref{tab:sos_rules},~\ref{tab:sos_rules_ccslc},~\ref{tab:ell_rules_ccslc}, and~\ref{tab:c_rules}.
Following the same approach as, e.g., \cite{Mi89b,vG93c,AvGFI96}, we now present the definitions of branching and rooted branching bisimulation equivalences directly over configurations.
\begin{definition}
[Branching bisimulation over configurations]
\label{def:open_bb}
A symmetric relation $\,{\mathcal R}\,$ over $\mathcal{C}_{\text{LC}}$ is a \emph{branching bisimulation} if{f} whenever $c_1 \,{\mathcal R}\, c_2$, if $c_1 \overset{\xi}\twoheadrightarrow c_1'$ then:
\begin{itemize}
\item either $\overset{\xi}\twoheadrightarrow\; =\; \trans[\tau]$ and $c_1' \,{\mathcal R}\, c_2$,
\item or $c_2 \trans[\varepsilon] c_2'' \overset{\xi}\twoheadrightarrow c_2'$ for some $c_2'',c_2'$ such that $c_1 \,{\mathcal R}\, c_2''$ and $c_1' \,{\mathcal R}\, c_2'$.
\end{itemize}
Two configurations $c_1,c_2$ are \emph{branching bisimilar}, denoted by $c_1 \sim_\mathtt{BB} c_2$, if{f} there exists a branching bisimulation $\,{\mathcal R}\,$ such that $c_1 \,{\mathcal R}\, c_2$.
\end{definition}
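On finite closed systems, Definition~\ref{def:open_bb} can be checked mechanically by greatest-fixpoint iteration: start from the full relation over the states and repeatedly discard pairs violating the transfer condition. The following Python sketch is our own illustration (the dictionary-based LTS encoding is an assumption, and the string `'tau'` plays the role of $\tau$); it is not an efficient algorithm, merely a direct rendering of the definition.

```python
def eps(lts, s):
    """States reachable from s via zero or more tau-steps (the relation written ->^epsilon)."""
    seen, stack = {s}, [s]
    while stack:
        x = stack.pop()
        for a, y in lts.get(x, []):
            if a == 'tau' and y not in seen:
                seen.add(y)
                stack.append(y)
    return seen

def matches(lts, p, q, R):
    """Transfer condition: every move of p is answered by q, relative to R."""
    for a, p1 in lts.get(p, []):
        if a == 'tau' and (p1, q) in R:
            continue  # first clause: the tau-step is matched by q staying put
        if not any((p, q2) in R and (p1, q3) in R
                   for q2 in eps(lts, q)
                   for b, q3 in lts.get(q2, []) if b == a):
            return False  # second clause fails as well: discard the pair
    return True

def branching_bisimilarity(lts, states):
    """Greatest fixpoint: the largest symmetric relation satisfying the transfer condition."""
    R = {(p, q) for p in states for q in states}
    changed = True
    while changed:
        changed = False
        for p, q in list(R):
            if not (matches(lts, p, q, R) and matches(lts, q, p, R)):
                R.discard((p, q))
                changed = True
    return R
```

On the LTS with $s_0 \trans[\tau] s_1 \trans[a] s_2$ and $t_0 \trans[a] t_1$ the fixpoint relates $s_0$ and $t_0$, since the initial $\tau$ is inert; by contrast, the analogues of $\tau.a + b$ and $a + b$ are correctly told apart, because there the $\tau$-step discards the $b$-option.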
The definition of $\sim_\mathtt{BB}$ given in Definition~\ref{def:open_bb} yields the same equivalence relation over configurations that we would have obtained with the standard approach, i.e., by defining $c_1 \sim_\mathtt{BB} c_2$ if{f} $\sigma(c_1) \sim_\mathtt{BB} \sigma(c_2)$ for all closed substitutions $\sigma$.
\begin{restatable}{theorem}{thmbbonopen}
\label{thm:bb_on_open}
For all configurations $c_1, c_2 \in \mathcal{C}_{\text{LC}}$ it holds that $c_1 \sim_\mathtt{BB} c_2$ if{f} $\sigma(c_1) \sim_\mathtt{BB} \sigma(c_2)$ for all closed substitutions $\sigma$.
\end{restatable}
The approach for $\sim_\mathtt{BB}$ can be extended in a straightforward manner to $\sim_\mathtt{RBB}$.
\begin{definition}
[Rooted branching bisimilarity over configurations]
\label{def:open_rbb}
Let $c_1,c_2 \in \mathcal{C}_{\text{LC}}$.
We say that $c_1$ and $c_2$ are \emph{rooted branching bisimilar}, denoted by $c_1 \sim_\mathtt{RBB} c_2$, if{f}:
\begin{itemize}
\item if $c_1 \overset{\xi}\twoheadrightarrow c_1'$ then $c_2 \overset{\xi}\twoheadrightarrow c_2'$ for some $c_2'$ such that $c_1' \sim_\mathtt{BB} c_2'$;
\item if $c_2 \overset{\xi}\twoheadrightarrow c_2'$ then $c_1 \overset{\xi}\twoheadrightarrow c_1'$ for some $c_1'$ such that $c_1' \sim_\mathtt{BB} c_2'$.
\end{itemize}
\end{definition}
\begin{theorem}
\label{thm:rbb_on_open}
For all $c_1, c_2 \in \mathcal{C}_{\text{LC}}$ it holds that $c_1 \sim_\mathtt{RBB} c_2$ if{f} $\sigma(c_1) \sim_\mathtt{RBB} \sigma(c_2)$ for all closed substitutions $\sigma$.
\end{theorem}
\section{The equational basis}
\label{sec:completeness}
We now present the complete axiomatisation for rooted branching bisimilarity over $\text{CCS}_{\text{LC}}$.
\begin{table*}[t]
\setlength{\tabcolsep}{15pt}
\centering
\begin{tabular}{llll}
\multicolumn{3}{l}{Equational basis modulo strong bisimilarity: $\mathcal{E}_\mathtt{B}$} \\[.2cm]
\multicolumn{2}{l}{$\scalebox{0.85}{A0}\quad x + \mathbf{0} \approx x$}
&
\multicolumn{2}{l}{$\scalebox{0.85}{C0}\quad \mathbf{0} ~|~ x \approx \mathbf{0}$}\\
\multicolumn{2}{l}{$\scalebox{0.85}{A1}\quad x+y \approx y+x$}
&
\multicolumn{2}{l}{$\scalebox{0.85}{C1}\quad x ~|~ y \approx y ~|~ x$}\\
\multicolumn{2}{l}{$\scalebox{0.85}{A2}\quad (x+y)+z \approx x+(y+z)$}
&
\multicolumn{2}{l}{$\scalebox{0.85}{C2}\quad (x ~|~ y) ~|~ z \approx x ~|~ (y ~|~ z)$}\\
\multicolumn{2}{l}{$\scalebox{0.85}{A3}\quad x + x \approx x$}
&
\multicolumn{2}{l}{$\scalebox{0.85}{C3}\quad (x + y) ~|~ z \approx x ~|~ z + y ~|~ z$}\\
& &
\multicolumn{2}{l}{$\scalebox{0.85}{C4}\quad \alpha x ~|~ \beta y \approx \tau(x \mathbin{\|} y)$ \quad if $\alpha = \overline{\beta}$}\\
\multicolumn{2}{l}{$\scalebox{0.85}{L0}\quad \mathbf{0} \lmerge x \approx \mathbf{0}$}
&
\multicolumn{2}{l}{$\scalebox{0.85}{C5}\quad \alpha x ~|~ \beta y \approx \mathbf{0}$ \quad if $\alpha \neq \overline{\beta}$}\\
\multicolumn{2}{l}{$\scalebox{0.85}{L1}\quad \mu x \lmerge y \approx \mu (x \mathbin{\|} y)$}
&
\multicolumn{2}{l}{$\scalebox{0.85}{C6}\quad (x \lmerge y) ~|~ z \approx (x ~|~ z) \lmerge y$} \\
\multicolumn{2}{l}{$\scalebox{0.85}{L2}\quad (x \lmerge y) \lmerge z \approx x \lmerge (y \mathbin{\|} z)$}
&
\multicolumn{2}{l}{$\scalebox{0.85}{C7}\quad x ~|~ y ~|~ z \approx \mathbf{0}$}\\
\multicolumn{2}{l}{$\scalebox{0.85}{L3}\quad x \lmerge \mathbf{0} \approx x$}
\\
\multicolumn{2}{l}{$\scalebox{0.85}{L4}\quad (x + y) \lmerge z \approx x \lmerge z + y \lmerge z$}
& \multicolumn{2}{l}{$\scalebox{0.85}{P}\quad x \mathbin{\|} y \approx x \lmerge y + y \lmerge x + x ~|~ y$} \\[.2cm]
\hline
\hline\\
\multicolumn{3}{l}{Additional axioms for $\sim_\mathtt{RBB}$: $\mathcal{E}_\mathtt{RBB} = \mathcal{E}_\mathtt{B} \cup \{TB,TL\}$} \\[.2cm]
\multicolumn{2}{l}{$\scalebox{0.85}{TB}\quad \mu (\tau(x+y) + y) \approx \mu (x +y)$}
&
\multicolumn{2}{l}{$\scalebox{0.85}{TL}\quad x \lmerge \tau y \approx x \lmerge y$}
\\[.2cm]
\hline
\hline\\
\multicolumn{2}{l}{Derivable axioms} \\[.2cm]
\multicolumn{2}{l}{$\scalebox{0.85}{D1}\quad x \mathbin{\|} y \approx y \mathbin{\|} x$}
&
\multicolumn{2}{l}{$\scalebox{0.85}{DT1}\quad \mu \tau x \approx \mu x$}\\
\multicolumn{2}{l}{$\scalebox{0.85}{D2}\quad (x \mathbin{\|} y) \mathbin{\|} z \approx x \mathbin{\|} (y \mathbin{\|} z)$}
&
\multicolumn{2}{l}{$\scalebox{0.85}{DT2}\quad x \lmerge (\tau(y+z) + y) \approx x \lmerge (y + z)$}\\
\multicolumn{2}{l}{$\scalebox{0.85}{D3}\quad (x \lmerge y) ~|~ (z \lmerge w) \approx (x ~|~ z) \lmerge (y \mathbin{\|} w)$}
&
\multicolumn{2}{l}{$\scalebox{0.85}{DT3}\quad \tau x ~|~ y \approx \mathbf{0}$}
\\
\multicolumn{2}{l}{$\scalebox{0.85}{D4}\quad x \mathbin{\|} \mathbf{0} \approx x$}
\end{tabular}
\caption{\label{tab:axioms_rbb} Equational basis modulo rooted branching bisimilarity.}
\end{table*}
In \cite{vGW96} it was proved that if we consider the fragment BCCS of CCS (i.e., the fragment consisting only of $\mathbf{0}$, variables, prefixing, and choice), then a ground-complete axiomatisation of rooted branching bisimilarity over BCCS is given by $\mathcal{E}_0 \cup \{\text{TB}\}$, where $\mathcal{E}_0 = \{\text{A0,A1,A2,A3}\}$ from Table~\ref{tab:axioms_b} (also reported in Table~\ref{tab:axioms_rbb}), and axiom TB is in Table~\ref{tab:axioms_rbb}.
Informally, TB reflects that if executing a $\tau$-step does not discard any observable behaviour, then it is redundant.
In \cite{AFIL09} it was proved that the axiom system $\mathcal{E}_\mathtt{B}$ given in Table~\ref{tab:axioms_rbb}, is a complete axiomatisation of bisimilarity over $\text{CCS}_{\text{LC}}$.
Starting from these works, we now study a complete axiomatisation for $\sim_\mathtt{RBB}$.
Our aim is to show that the axiom system $\mathcal{E}_\mathtt{RBB} = \mathcal{E}_\mathtt{B} \cup \{\text{TB,TL}\}$ presented in Table~\ref{tab:axioms_rbb} is a \emph{complete axiomatisation of rooted branching bisimilarity} over $\text{CCS}_{\text{LC}}$.
If executing a $\tau$-move does not resolve a choice within a parallel component,
then it will also not resolve a choice of the parallel composition; axiom TL expresses this property of rooted branching bisimilarity for left merge.
Interestingly, by combining TL and TB, it is possible to derive, as shown below, equation DT2 in Table~\ref{tab:axioms_rbb}, which is the equation for the left merge corresponding to TB.
\[
x \lmerge (\tau(y+z) +y)
\stackrel{\scalebox{0.75}{(TL)}}{\approx{}}
\;
x \lmerge \tau (\tau(y+z) +y)
\stackrel{\scalebox{0.75}{(TB)}\phantom{1}}{\approx{}}
\;
x \lmerge \tau (y+z)
\stackrel{\scalebox{0.75}{(TL)}}{\approx{}}
\;
x \lmerge (y+z).
\]
In Table~\ref{tab:axioms_rbb} we report also some other equations that can be derived from $\mathcal{E}_\mathtt{RBB}$, and that are useful in the technical development of our results.
We refer the reader interested in the derivation proofs of D1--D3 and DT3 to \cite{AFIL09}.
Notice that DT1 corresponds essentially to the substitution instance of TB in which $y$ is mapped to $\mathbf{0}$.
First of all, it is immediate to prove the soundness of $\mathcal{E}_{\mathtt{RBB}}$ modulo $\sim_\mathtt{RBB}$.
\begin{theorem}
[Soundness]
The axiom system $\mathcal{E}_\mathtt{RBB}$ is sound modulo $\sim_\mathtt{RBB}$ over $\text{CCS}_{\text{LC}}$.
\end{theorem}
To obtain the desired completeness result, we apply the same strategy used in \cite{AvGFI96}, which consists of the three steps discussed in Section~\ref{sec:completeness_roadmap}.
Let us proceed to the first step: identifying normal forms for $\text{CCS}_{\text{LC}}$ terms.
\begin{definition}
[Normal forms]
\label{def:rbb_nf_ccslc}
The set of normal forms over $\text{CCS}_{\text{LC}}$ is generated by the following grammar:
\begin{align*}
S ::={} &
\mu.N \quad | \quad
x \lmerge N \quad | \quad
(x ~|~ \alpha) \lmerge N \quad | \quad
(x ~|~ y) \lmerge N \\
N ::={} &
\mathbf{0} \quad | \quad
S \quad | \quad
N + N
\end{align*}
where $x,y \in \mathcal{V}$, $\mu \in \mathcal{A}_\tau$ and $\alpha \in \mathcal{A} \cup \overline{\mathcal{A}}$.
Normal forms generated by $S$ are also called \emph{simple normal forms} and are characterised by the fact that they do not have $+$ as head operator.
\end{definition}
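Assuming a tuple-based encoding of $\text{CCS}_{\text{LC}}$ terms (our own, purely illustrative: `('pre', mu, t)`, `('sum', t, u)`, `('lmerge', t, u)`, `('comm', t, u)`, `('var', x)`, and `('act', alpha)` for an action constant occurring inside a communication merge), the grammar of Definition~\ref{def:rbb_nf_ccslc} translates directly into a pair of mutually recursive predicates:

```python
def is_simple_nf(t):
    """Simple normal forms S: mu.N, x |_ N, (x | alpha) |_ N, (x | y) |_ N."""
    if not isinstance(t, tuple):
        return False
    if t[0] == 'pre':                        # mu.N
        return is_nf(t[2])
    if t[0] == 'lmerge':
        left, right = t[1], t[2]
        if not is_nf(right):
            return False
        if left[0] == 'var':                 # x |_ N
            return True
        if left[0] == 'comm':                # (x | alpha) |_ N  or  (x | y) |_ N
            return left[1][0] == 'var' and left[2][0] in ('var', 'act')
    return False

def is_nf(t):
    """Normal forms N: 0, a simple normal form, or a sum of normal forms."""
    if t == '0':
        return True
    if isinstance(t, tuple) and t[0] == 'sum':
        return is_nf(t[1]) and is_nf(t[2])
    return is_simple_nf(t)
```

Note how the predicate rejects any term with $\mathbin{\|}$ as head operator, and rejects a left merge whose left argument is not of one of the three shapes permitted by the grammar: Proposition~\ref{prop:rbb_nf_ccslc} guarantees that every term can nevertheless be proven equal to such a normal form.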
\begin{restatable}{proposition}{proprbbnfccslc}
\label{prop:rbb_nf_ccslc}
For every term $t$ there is a normal form $N$ such that $\mathcal{E}_\mathtt{RBB} \vdash t \approx N$.
\end{restatable}
We can then proceed to relate branching bisimilarity to derivability in the axiom system $\mathcal{E}_\mathtt{RBB}$: branching bisimilar terms become provably equal once guarded by a common prefix.
\begin{restatable}{proposition}{proprbbprovableccslc}
\label{prop:rbb_provable_ccslc}
For $\text{CCS}_{\text{LC}}$ terms $t,u$,
if $t \sim_\mathtt{BB} u$ then $\mathcal{E}_\mathtt{RBB} \vdash \mu.t \approx \mu.u$, for any $\mu \in \mathcal{A}_\tau$.
\end{restatable}
The completeness of the axiom system $\mathcal{E}_\mathtt{RBB}$ then follows from Proposition~\ref{prop:rbb_nf_ccslc} and Proposition~\ref{prop:rbb_provable_ccslc}.
Notice that axioms L1 and TB are actually axiom schemata that both generate $|\mathcal{A}_\tau|$ axioms.
Similarly, the schema C4 generates $2|\mathcal{A}|$ axioms, and C5 generates $2|\mathcal{A}| \times (2|\mathcal{A}|-1)$ axioms.
Hence, $\mathcal{E}_\mathtt{RBB}$ is \emph{finite} whenever the set of actions is.
\thmrbbcompleteccslc*
\section{Concluding remarks}
\label{sec:conclusion}
In this paper we have shown that the use of auxiliary operators left merge and communication merge is crucial to obtain a finite, complete axiomatisation of the CCS parallel composition operator modulo rooted branching bisimilarity.
A natural direction for future research is the extension of our results to other weak congruences from the spectrum \cite{vG93}.
In detail, we will investigate the existence of a general technique to lift the negative result from rooted branching bisimilarity to other weak congruences.
Regarding the positive result, we will focus on three weak congruences, namely \emph{rooted} $\eta$-\emph{bisimilarity} ($\sim_{\mathtt{R}\eta\mathtt{B}}$), \emph{rooted delay bisimilarity} ($\sim_\mathtt{RDB}$), and \emph{rooted weak bisimilarity} ($\sim_\mathtt{RWB}$), and provide the complete axiomatisations for them.
We are confident that the axiomatisation for $\sim_{\mathtt{R}\eta\mathtt{B}}$ can be obtained by exploiting a proof technique from \cite{vG95,AvGFI96} based on the notion of \emph{saturation}.
Intuitively, $\sim_{\mathtt{R}\eta\mathtt{B}}$ coincides with $\sim_\mathtt{RBB}$ on the class of $\eta$-saturated terms.
Hence, if we can show that each term is provably equal to an $\eta$-saturated term using the axiom system for $\sim_{\mathtt{R}\eta\mathtt{B}}$, the completeness of the considered axiom system then directly follows from that for $\sim_\mathtt{RBB}$ we provided in this paper.
The quest for complete axiomatisations for $\sim_\mathtt{RDB}$ and $\sim_\mathtt{RWB}$ will require a different approach, as these equivalences are not preserved by the communication merge operator.
For instance, we have that $\tau.a \sim_\mathtt{RWB} \tau.a + a$, but
$\mathbf{0} \sim_\mathtt{RWB} \tau.a ~|~ \overline{a}.b \not\sim_\mathtt{RWB} (\tau.a + a) ~|~ \overline{a}.b \sim_\mathtt{RWB} \tau.b$.
For $\sim_\mathtt{RDB}$, we have a similar outcome (see \cite{vG11} for more details).
If we look at the seminal paper \cite{BK85}, the complete axiomatisation for observational congruence \cite{HM85} (and thus rooted weak bisimilarity) over $\text{ACP}_\tau$ comes with the axiom
\begin{equation}
\tag{TC}
\tau.x ~|~ y \approx x ~|~ y.
\end{equation}
Similarly, in \cite{He88,Ac94}, it was argued that, in order to reason compositionally and obtain an equational theory of CCS modulo observational congruence, it is necessary to define the operational semantics of communication merge in terms of inference rules of the form
\[
\SOSrule{t \wtrans[\alpha] t' \quad u \wtrans[\overline{\alpha}] u'}{t ~|~ u \wtrans[\tau] t' \mathbin{\|} u'}
\]
where we use $\wtrans[\mu]$ as a short-hand for the sequence of transitions $\trans[\varepsilon]\trans[\mu]\trans[\varepsilon]$.
This means that in order for $|$ to preserve $\sim_\mathtt{RWB}$ (and/or $\sim_\mathtt{RDB}$), we need to consider a sequence of weak transitions as a single step.
Clearly, since $|$ is an auxiliary operator that we introduce specifically to obtain the axiomatisations, its semantics can be defined in the most suitable way for our purposes, i.e., so that it is consistent with the considered congruence relation.
However, it is also clear that if we modify the semantics of one operator in $\text{CCS}_{\text{LC}}$, then we are working with a new language.
In particular, some axioms that are sound modulo \emph{strong} bisimilarity (and thus also modulo $\sim_\mathtt{RBB}$) over $\text{CCS}_{\text{LC}}$ become unsound modulo rooted weak bisimilarity over the new language: this is the case of axioms C6 and C7 in Table~\ref{tab:axioms_rbb}.
As a consequence, we cannot exploit the completeness of the axiomatisation for rooted branching bisimilarity to derive complete axiomatisations for rooted weak bisimilarity and rooted delay bisimilarity, but we must provide new axiomatisations for them and prove their completeness from scratch.
Hence, we leave as future work the quest for complete axiomatisations for $\sim_\mathtt{RWB}$ and $\sim_\mathtt{RDB}$ over (recursion, relabelling, and restriction free) CCS with left merge and communication merge.
\bibliographystyle{plainurl}
\section{Additional background notions and results}
In this section we present some additional notions and general simple results on CCS terms that we omitted from the main text because of space limits, and that will be useful in the upcoming proofs.
Firstly, we introduce the notion of \emph{derivative} of a process.
\begin{definition}
[Derivative]
For a process $p$, the set of \emph{derivatives} of $p$, notation $\mathrm{der}(p)$, is the least set containing $p$ that is closed under $\trans[]$, i.e., the least set satisfying:
\begin{itemize}
\item $p \in \mathrm{der}(p)$, and
\item if $q \in \mathrm{der}(p)$ and $q \trans[\mu] q'$, for some action $\mu \in \mathcal{A}_\tau$, then $q' \in \mathrm{der}(p)$.
\end{itemize}
\end{definition}
In particular, we say that $p' \in \mathrm{der}(p)$ is a $\mu$-\emph{derivative} of $p$, for some $\mu \in \mathcal{A}_\tau$, if $p \trans[\mu] p'$.
Moreover, we say that $p'$ is a \emph{proper} derivative of $p$ if $p' \in \mathrm{der}(p) \setminus \{p\}$.
Let $p \wtrans[\mu] q$ be a shorthand for $p \trans[\varepsilon] p' \trans[\mu] q$, for some $p'$.
For a sequence of actions $\varphi = \alpha_1 \cdots \alpha_k \in (\mathcal{A}\cup\overline{\mathcal{A}})^*$ ($k \geq 0$), and processes $p,p'$, we write that $p \wtrans[\varphi] p'$ if and only if there exists a sequence of transitions $p = p_0 \wtrans[\alpha_1] p_1 \wtrans[\alpha_2] \cdots \wtrans[\alpha_k] p_k = p'$.
If $p \wtrans[\varphi] p'$ holds for some process $p'$, then $\varphi$ is an \emph{observable trace} of $p$.
Moreover, we say that $\varphi$ is a \emph{maximal} observable trace of $p$ if, in addition, $\mathrm{init}(p') = \emptyset$.
By means of observable traces, we associate a classic notion with a process $p$, i.e., its (\emph{observable}) \emph{depth}, denoted by $\mathrm{depth}(p)$.
For a process $p$ whose set of traces is finite, it expresses the length of a \emph{longest} observable trace.
Formally, denoting by $|\varphi|$ the length of $\varphi$,
\[
\mathrm{depth}(p) = \max \{ k \mid p \wtrans[\varphi] p' \;\wedge\; |\varphi| = k\}.
\]
The case of rooted branching bisimilarity is slightly more complicated, as we need to treat a possible initial $\tau$-move as an observable action.
To this end, we define the \emph{rooted depth} of a process, denoted by $\mathrm{Rdepth}(p)$:
\[
\mathrm{Rdepth}(p) = \sup \{ 1 + \mathrm{depth}(p') \mid \exists \mu \in \mathcal{A}_\tau, p' \in \mathbf{P} \text{ s.t. } p \trans[\mu] p' \}.
\]
Then, we notice that we can give the following inductive characterisation of the (rooted) depth of CCS processes:
\begin{itemize}
\item $\mathrm{depth}(\mathbf{0}) = 0$;
\item $\mathrm{depth}(\tau.p) = \mathrm{depth}(p)$;
\item $\mathrm{depth}(\alpha.p) = 1+ \mathrm{depth}(p)$;
\item $\mathrm{depth}(p + q) = \max\{\mathrm{depth}(p), \mathrm{depth}(q)\}$;
\item $\mathrm{depth}(p \mathbin{\|} q) = \mathrm{depth}(p) + \mathrm{depth}(q)$;
\item $\mathrm{Rdepth}(\mathbf{0}) = 0$;
\item $\mathrm{Rdepth}(\mu.p) = 1 + \mathrm{depth}(p)$;
\item $\mathrm{Rdepth}(p + q) = \max\{\mathrm{Rdepth}(p), \mathrm{Rdepth}(q)\}$;
\item $\mathrm{Rdepth}(p \mathbin{\|} q) = \mathrm{Rdepth}(p) + \mathrm{Rdepth}(q)$.
\end{itemize}
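The inductive characterisation above can be phrased as two mutually recursive functions. The following Python sketch (the tuple encoding of closed terms is our own assumption, not the paper's) mirrors the clauses one-to-one:

```python
def depth(p):
    """Observable depth: tau-prefixes contribute nothing."""
    if p == '0':
        return 0
    op = p[0]
    if op == 'pre':
        mu, q = p[1], p[2]
        return depth(q) if mu == 'tau' else 1 + depth(q)
    if op == 'sum':
        return max(depth(p[1]), depth(p[2]))
    if op == 'par':
        return depth(p[1]) + depth(p[2])
    raise ValueError(p)

def rdepth(p):
    """Rooted depth: a possible initial tau counts as an observable step."""
    if p == '0':
        return 0
    op = p[0]
    if op == 'pre':
        return 1 + depth(p[2])  # Rdepth(mu.p) = 1 + depth(p), even for mu = tau
    if op == 'sum':
        return max(rdepth(p[1]), rdepth(p[2]))
    if op == 'par':
        return rdepth(p[1]) + rdepth(p[2])
    raise ValueError(p)
```

For instance, $\tau.a.\mathbf{0}$ has observable depth $1$ but rooted depth $2$: the initial $\tau$ is invisible to $\mathrm{depth}$ but counted by $\mathrm{Rdepth}$, exactly as required to distinguish rooted from non-rooted branching bisimilarity.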
An immediate result is that branching bisimilarity preserves the observable depth of processes:
\begin{lemma}
\label{lem:bb_same_depth}
If $p \sim_\mathtt{BB} q$, then $\mathrm{depth}(p) = \mathrm{depth}(q)$.
\end{lemma}
Similarly, rooted branching bisimilarity preserves the rooted depth of processes:
\begin{lemma}
\label{lem:rbb_same_depth}
If $p \sim_\mathtt{RBB} q$, then $\mathrm{Rdepth}(p) = \mathrm{Rdepth}(q)$.
\end{lemma}
Moreover, it is well known that rooted branching bisimilarity is finer than branching bisimilarity, in the sense that whenever $p \sim_\mathtt{RBB} q$, then also $p \sim_\mathtt{BB} q$ holds.
In general, the converse implication does not hold.
However, we present here a simple case in which we can establish that two branching bisimilar processes are also rooted branching bisimilar:
\begin{lemma}
\label{lem:same_initials}
Assume that processes $p,q$ are such that $p \sim_\mathtt{BB} q$.
If $\tau \not \in \mathrm{init}(p)\cup\mathrm{init}(q)$, then $p \sim_\mathtt{RBB} q$.
Moreover, if $\tau \not\in \mathrm{init}(p)$, then $p \not\sim_\mathtt{RBB} \mathbf{0}$ implies $p \not\sim_\mathtt{BB} \mathbf{0}$.
\end{lemma}
We conclude this section by recalling that we can restrict the axiom system to a collection of equations that do not introduce unnecessary terms that are rooted branching bisimilar to $\mathbf{0}$ in the equational proofs, namely $\mathbf{0}$ summands and $\mathbf{0}$ factors.
An example of a collection of equations over CCS that are sound modulo $\sim_\mathtt{RBB}$ is given by axioms A0--A3, P0--P2, in Table~\ref{tab:axioms_b}.
Interestingly, axioms A0 and P0 in Table~\ref{tab:axioms_b} (used from left to right) are enough to establish that each term that is rooted branching bisimilar to $\mathbf{0}$ is also provably equal to $\mathbf{0}$.
It is well-known (see, e.g., Section~2 in \cite{Gr90}) that if an equation relating two closed terms can be proven from an axiom system $\mathcal{E}$, then there is a closed proof for it.
In addition, if $\mathcal{E}$ satisfies a further closure property, called \emph{saturation} in \cite[Definition 5.1.1]{Mo89}, and that closed equation relates two terms containing no occurrences of $\mathbf{0}$ as a summand or factor, then there is a closed proof for it in which all of the terms have no occurrences of $\mathbf{0}$ as a summand or factor, as formalised in the following proposition.
\begin{proposition}
\label{Propn:proofswithout0_text}
Assume that $\mathcal{E}$ is a saturated axiom system.
Then, the proof from $\mathcal{E}$ of an equation $p \approx q$, where $p$ and $q$ are terms not containing occurrences of $\mathbf{0}$ as a summand or factor, need not use terms containing occurrences of $\mathbf{0}$ as a summand or factor.
\end{proposition}
Since the proof of this result follows the same lines as that of \cite[Proposition~5.1.5]{Mo89}, we omit it.
In light of Proposition~\ref{Propn:proofswithout0_text} we shall, henceforth, limit ourselves to considering saturated axiom systems.
\section{Additional results to Section~\ref{sec:decomposition}}
\label{app:decomposition}
Firstly, we present a general result on the structure of configurations.
\begin{lemma}
\label{lem:c_structure}
Let $t$ be a term and $x$ be a variable in $\mathcal{V}$.
If $t \trans[\ell]_\rho c$ for some $\rho \in \mathcal{A}_\tau \cup ((\mathcal{A} \cup \overline{\mathcal{A}})\times \{\tau\})$ and label $\ell$ with $x \in \ell$, then $c$ is of the form $x_\mu \mathbin{\|} c'$, for some configuration $c'$, and action $\mu \in \mathcal{A}_\tau$ (modulo the axioms in Table~\ref{tab:axioms_b}).
\end{lemma}
The following lemma shows how transitions labelled with observable actions are triggered by variables.
\begin{lemma}
\label{lem:closed2open_alpha}
Let $\alpha \in \mathcal{A} \cup \overline{\mathcal{A}}$, $t$ be a term, $\sigma$ be a closed substitution, and $p$ be a process.
If $\sigma(t) \trans[\alpha] p$, then one of the following holds:
\begin{enumerate}
\item \label{c2o_prefix}
There is a term $t'$ s.t.\ $t \trans[\alpha] t'$ and $\sigma(t') = p$.
\item \label{c2o_x}
There are a variable $x$, a process $q$ and a configuration $c$ s.t.\ $\sigma(x) \trans[\alpha] q$, $t \trans[(x)]_\alpha c$, and $\sigma[x_\alpha \mapsto q](c) = p$.
\end{enumerate}
\end{lemma}
\section{Proof of Lemma~\ref{lem:closed2open_tau}}
\lemclosedopentau*
\begin{proof}
The proof follows by induction on the derivation of the transition $\sigma(t) \trans[\tau] p$.
The interesting case is the base case corresponding to a communication in the term $t = t_1 \mathbin{\|} t_2$, for some CCS terms $t_1,t_2$, that we expand below.
We can assume, without loss of generality, that $\sigma(t_1) \trans[\alpha] p_1$ and $\sigma(t_2) \trans[\overline{\alpha}] p_2$, for some processes $p_1,p_2$ such that $p_1 \mathbin{\|} p_2 = p$.
The symmetric case of $\sigma(t_1) \trans[\overline{\alpha}]$ and $\sigma(t_2) \trans[\alpha]$ follows by applying the same arguments we use below, switching the roles of $\sigma(t_1)$ and $\sigma(t_2)$.
By Lemma~\ref{lem:closed2open_alpha}, from $\sigma(t_1) \trans[\alpha] p_1$ we can infer that either $t_1 \trans[\alpha] t_1'$ for some term $t_1'$ such that $\sigma(t_1') = p_1$, or there are a variable $x$, a process $q_1$ and a configuration $c_1$ such that $\sigma(x) \trans[\alpha] q_1$, $t_1 \trans[(x)]_\alpha c_1$, and $\sigma[x_\alpha \mapsto q_1](c_1) = p_1$.
Similarly, by Lemma~\ref{lem:closed2open_alpha}, from $\sigma(t_2) \trans[\overline{\alpha}] p_2$ we can infer that either $t_2 \trans[\overline{\alpha}] t_2'$ for some term $t_2'$ such that $\sigma(t_2') = p_2$, or there are a variable $y$, a process $q_2$ and a configuration $c_2$ such that $\sigma(y) \trans[\overline{\alpha}] q_2$, $t_2 \trans[(y)]_{\overline{\alpha}} c_2$, and $\sigma[y_{\overline{\alpha}} \mapsto q_2](c_2) = p_2$.
We can then distinguish four cases, depending on how the possible derivations of the transitions of $t_1$ and $t_2$ mentioned above are combined in the derivation of the synchronisation.
\begin{enumerate}
\item Case $t_1 \trans[\alpha] t_1'$ and $t_2 \trans[\overline{\alpha}] t_2'$.
In this case, the operational semantics of $\mathbin{\|}$ allows us to directly infer that $t_1 \mathbin{\|} t_2 \trans[\tau] t_1' \mathbin{\|} t_2'$.
Hence we have that there is $t' = t_1' \mathbin{\|} t_2'$ such that $t \trans[\tau] t'$ and $\sigma(t') = p_1 \mathbin{\|} p_2 = p$.
\item Case $\sigma(x) \trans[\alpha] q_1$, $t_1 \trans[(x)]_\alpha c_1$, $\sigma[x_\alpha \mapsto q_1](c_1) = p_1$, and $t_2 \trans[\overline{\alpha}] t_2'$.
As $t_1 \trans[(x)]_\alpha c_1$ and $t_2 \trans[\overline{\alpha}] t_2'$ we can apply the auxiliary rule ($a_5$) and obtain that $t_1 \mathbin{\|} t_2 \trans[(x)]_{\alpha,\tau} c_1 \mathbin{\|} t_2'$.
Hence we have that there are a variable $x$, a subterm $t_2$ of $t$, a process $q_1$ and a configuration $c = c_1 \mathbin{\|} t_2'$ such that $\sigma(x) \trans[\alpha] q_1$, $t_2 \trans[\overline{\alpha}] t_2'$, $t \trans[(x)]_{\alpha,\tau} c$ and $\sigma[x_\alpha \mapsto q_1](c) = \sigma[x_\alpha \mapsto q_1](c_1) \mathbin{\|} \sigma(t_2') = p_1 \mathbin{\|} p_2 = p$.
\item Case $t_1 \trans[\alpha] t_1'$, $\sigma(y) \trans[\overline{\alpha}] q_2$, $t_2 \trans[(y)]_{\overline{\alpha}} c_2$, and $\sigma[y_{\overline{\alpha}} \mapsto q_2](c_2) = p_2$.
This is the symmetrical of the previous case and it follows by applying the same reasoning used above, using the auxiliary rule ($a_6$) in place of ($a_5$).
\item Case $\sigma(x) \trans[\alpha] q_1$, $t_1 \trans[(x)]_\alpha c_1$, $\sigma[x_\alpha \mapsto q_1](c_1) = p_1$, $\sigma(y) \trans[\overline{\alpha}] q_2$, $t_2 \trans[(y)]_{\overline{\alpha}} c_2$, and $\sigma[y_{\overline{\alpha}} \mapsto q_2](c_2) = p_2$.
As $t_1 \trans[(x)]_\alpha c_1$ and $t_2 \trans[(y)]_{\overline{\alpha}} c_2$, we can apply the auxiliary rule ($a_4$) and thus obtain $t_1 \mathbin{\|} t_2 \trans[(x,y)]_{\tau} c_1 \mathbin{\|} c_2$.
Hence we have obtained that there are variables $x,y$, processes $q_1,q_2$ and a configuration $c = c_1 \mathbin{\|} c_2$ such that $\sigma(x) \trans[\alpha] q_1$, $\sigma(y) \trans[\overline{\alpha}] q_2$, $t \trans[(x,y)]_\tau c$, and $\sigma[x_\alpha \mapsto q_1, y_{\overline{\alpha}} \mapsto q_2](c) =
\sigma[x_\alpha \mapsto q_1](c_1) \mathbin{\|} \sigma[y_{\overline{\alpha}} \mapsto q_2](c_2) =
p_1 \mathbin{\|} p_2 = p$, where we can distribute the substitutions over $\mathbin{\|}$ since $x_\alpha$ and $y_{\overline{\alpha}}$ are unique by construction (even if $x = y$, the two subscripts allow us to distinguish them).
\end{enumerate}
\end{proof}
\section{Proof of Proposition~\ref{prop:rttransprops}}
\rttransprops*
\begin{proof}
Clearly, $\rightarrow^{*}$ is reflexive and transitive by definition, and since $p\trans[\mu]p'$ implies $\mathrm{size}(p)>\mathrm{size}(p')$
it follows that $\rightarrow^{*}$ is inversely well-founded. Every inversely well-founded reflexive and transitive relation is clearly also anti-symmetric, so $\rightarrow^{*}$ is an inversely well-founded partial order.
By inverse well-foundedness, for every process $p$ there exists a process $p'$ such that $p\rightarrow^{*}p'$ and $p'{\not\rightarrow}$; from $p'{\not\rightarrow}$ it follows that $p'\sim_{\mathtt{BB}}\mathbf{0}$. Hence $\rightarrow^{*}$ satisfies property \ref{item:rttranspropsleast}.
That $\rightarrow^{*}$ satisfies property~\ref{item:rttranspropscompatible} is an immediate consequence of the two rules at the bottom left of Table~\ref{tab:sos_rules}.
That $\rightarrow^{*}$ satisfies property~\ref{item:rttranspropsprecompositional} is straightforwardly established with induction on the length of a transition sequence witnessing $p\mathbin{\|}q\rightarrow^{*}r$, using that the last SOS rule applied in the derivation of each individual transition must be one of the three rules for $\mathbin{\|}$.
To see that $p\rightarrow^{*}q^n$ for all $n\in\mathbb{N}$ implies $q\sim_{\mathtt{BB}}\mathbf{0}$, note that $p\rightarrow^{*}q^n$ implies $\mathrm{depth}(p)\geq\mathrm{depth}(q^n)$ and $\mathrm{depth}(q^n)=n\cdot\mathrm{depth}(q)$, from which it follows that $\mathrm{depth}(q)=0$ and hence $q\sim_{\mathtt{BB}}\mathbf{0}$. This proves that $\rightarrow^{*}$ satisfies property~\ref{item:rttranspropsArchimedean}.
\end{proof}
\section{Proof of Proposition~\ref{prop:bb_cancellation}}
\propbbcancellation*
\begin{proof}
Let $p$ be an indecomposable process and let $q$ and $r$ be processes such that $[p]_{\sim_{\mathtt{BB}}}\not\prec [q]_{\sim_{\mathtt{BB}}}, [r]_{\sim_{\mathtt{BB}}}$, and suppose that $[p^k\mathbin{\|}q]_{\sim_{\mathtt{BB}}}=[p^k\mathbin{\|}r]_{\sim_{\mathtt{BB}}}$ for some $k\in\mathbb{N}$. We need to prove that $[q]_{\sim_{\mathtt{BB}}}=[r]_{\sim_{\mathtt{BB}}}$, and for this it suffices to argue that the relation
\begin{equation*}
\mathcal{R}=\{(q,r)\}\cup{\sim_{\mathtt{BB}}}
\end{equation*}
is a branching bisimulation relation.
The weak decomposition order $\preceq$, since it is well-founded, induces a well-founded order on triples of $\sim_{\mathtt{BB}}$-equivalence classes of processes, defining
$([p_1]_{\sim_{\mathtt{BB}}},[p_2]_{\sim_{\mathtt{BB}}},[p_3]_{\sim_{\mathtt{BB}}})
\preceq
([q_1]_{\sim_{\mathtt{BB}}},[q_2]_{\sim_{\mathtt{BB}}},[q_3]_{\sim_{\mathtt{BB}}})$
if, and only if,
$[p_1]_{\sim_{\mathtt{BB}}}\preceq [q_1]_{\sim_{\mathtt{BB}}}$
and whenever $[p_1]_{\sim_{\mathtt{BB}}}=[q_1]_{\sim_{\mathtt{BB}}}$, then also $[p_2]_{\sim_{\mathtt{BB}}}\preceq [q_2]_{\sim_{\mathtt{BB}}}$ and $[p_3]_{\sim_{\mathtt{BB}}}\preceq [q_3]_{\sim_{\mathtt{BB}}}$.
Our proof is by induction on the well-founded order $\preceq$ on triples of $\sim_{\mathtt{BB}}$-equivalence classes of processes: we assume, by way of induction hypothesis, that for all $p'$, $q'$ and $r'$ such that
\begin{equation*}
([(p')^{k}\mathbin{\|}q']_{\sim_{\mathtt{BB}}},[q']_{\sim_{\mathtt{BB}}},[r']_{\sim_{\mathtt{BB}}})
\prec
([p^{k}\mathbin{\|}q]_{\sim_{\mathtt{BB}}},[q]_{\sim_{\mathtt{BB}}},[r]_{\sim_{\mathtt{BB}}})
\end{equation*}
we have that $(p')^k\mathbin{\|}q'\sim_{\mathtt{BB}} (p')^k\mathbin{\|} r'$ implies $q'\sim_{\mathtt{BB}}r'$,
and establish the property for $p$, $q$ and $r$.
Since $q\rightarrow q'$ implies $\mathrm{size}(q)>\mathrm{size}(q')$, and hence $\rightarrow$ is terminating, we may, without loss of generality, choose $q$ such that there does not exist a $q'$ such that $q\rightarrow q'$ and $q\sim_{\mathtt{BB}}q'$; similarly, we may, without loss of generality, choose $r$ such that there does not exist an $r'\neq r$ such that $r\rightarrow r'$ and $r\sim_{\mathtt{BB}}r'$.
Since $\sim_{\mathtt{BB}}$ is the largest branching bisimulation, it is immediate that all pairs in $\sim_{\mathtt{BB}}$ satisfy the conditions of branching bisimulations. We do need to establish that the pair $(q,r)$ also satisfies these conditions, and to this end suppose that $q\trans[\mu]q'$ for some $q'$; we establish that either $\mu=\tau$ and $q'\mathrel{\mathcal{R}} r$, or there exist processes $r'$ and $r''$ such that $r\trans[\epsilon]r'\trans[\mu]r''$, $q\mathrel{\mathcal{R}}r'$ and $q'\mathrel{\mathcal{R}}r''$. We distinguish two cases:
\begin{enumerate}
\item Suppose that $p^k\mathbin{\|}q'\sim_{\mathtt{BB}} p^k\mathbin{\|}q$. Then, since $\sim_{\mathtt{BB}}$ preserves depth and the depth of a parallel composition is the sum of the depths of its components, we must have that $\mathrm{depth}(q')=\mathrm{depth}(q)$ and hence $\mu=\tau$. Now, by the choice of $q$ it follows that $[q']_{\sim_{\mathtt{BB}}}\prec [q]_{\sim_{\mathtt{BB}}}$. Moreover, from $p^k\mathbin{\|}q'\sim_{\mathtt{BB}} p^k\mathbin{\|}q$ it follows that $[p^k\mathbin{\|}q']_{\sim_{\mathtt{BB}}}=[p^k\mathbin{\|}q]_{\sim_{\mathtt{BB}}}$. Thus we find that
\begin{equation*}
([p^{k}\mathbin{\|}q']_{\sim_{\mathtt{BB}}},[q']_{\sim_{\mathtt{BB}}},[r]_{\sim_{\mathtt{BB}}})
\prec
([p^{k}\mathbin{\|}q]_{\sim_{\mathtt{BB}}},[q]_{\sim_{\mathtt{BB}}},[r]_{\sim_{\mathtt{BB}}})
\enskip,
\end{equation*}
so by the induction hypothesis $q'\sim_{\mathtt{BB}} r$, and hence $q'\mathrel{\mathcal{R}}r$.
\item Suppose that $p^k\mathbin{\|}q'\not\sim_{\mathtt{BB}} p^k\mathbin{\|}q$. Then
$[p^k\mathbin{\|}q']_{\sim_{\mathtt{BB}}}\prec [p^k\mathbin{\|}q]_{\sim_{\mathtt{BB}}}$, and,
by the induction hypothesis, the weak decomposition order $\preceq$ on the partial commutative submonoid $\{x\mid x\preceq [p^k\mathbin{\|}q']_{\sim_{\mathtt{BB}}}\}$ of $\mathbf{P}$ modulo $\sim_{\mathtt{BB}}$ satisfies power cancellation. Hence, by \cite[Theorem 34]{Bas16}, if $s$ is any process such that $p^k\mathbin{\|}q'\sim_{\mathtt{BB}};\rightarrow^{*}s$ (where $;$ denotes relation composition), then $s$ has a unique parallel decomposition.
From $q\trans[\mu]q'$ it follows that $p^k\mathbin{\|}q \trans[\mu]p^k\mathbin{\|}q'$, and hence, since $p^k\mathbin{\|}q\sim_{\mathtt{BB}}p^k\mathbin{\|}r$, there exist processes $p',p'',r',r''$ such that
\begin{gather*}
p^k\rightarrow^{*}p'\rightarrow^{*}p''\enskip,\\
r\rightarrow^{*}r'\rightarrow^{*}r''\enskip,\\
p^k\mathbin{\|}q\sim_{\mathtt{BB}}p'\mathbin{\|}r'\enskip,\ \text{and}\\
p^k\mathbin{\|}q'\sim_{\mathtt{BB}}p''\mathbin{\|}r''\enskip.
\end{gather*}
Moreover, from $p''\mathbin{\|}r''\sim_{\mathtt{BB}}p^k\mathbin{\|}q'\not\sim_{\mathtt{BB}}p^k\mathbin{\|}q\sim_{\mathtt{BB}}p^k\mathbin{\|}r$ it follows that either $p''\not\sim_{\mathtt{BB}} p^k$ or $r''\not\sim_{\mathtt{BB}} r$; we distinguish these two subcases:
\begin{enumerate}
\item Suppose $r''\not\sim_{\mathtt{BB}} r$. Then, since $[r'']_{\sim_{\mathtt{BB}}}\prec[r]_{\sim_{\mathtt{BB}}}$ and $[p]_{\sim_{\mathtt{BB}}}\not\prec [r]_{\sim_{\mathtt{BB}}}$, the unique parallel decomposition of $r''$ cannot have occurrences of a process branching bisimilar to $p$. Since the unique decomposition of $p^k\mathbin{\|}q'$ has at least $k$ occurrences of a process branching bisimilar to $p$, it follows that
$[p^k]_{\sim_{\mathtt{BB}}}\preceq [p'']_{\sim_{\mathtt{BB}}} \preceq [p']_{\sim_{\mathtt{BB}}} \preceq [p^k]_{\sim_{\mathtt{BB}}}$, so $p'\sim_{\mathtt{BB}}p''\sim_{\mathtt{BB}}p^k$. From $p^k\mathbin{\|}q'\sim_{\mathtt{BB}}p^k\mathbin{\|}r''$ it then follows by the induction hypothesis that $q'\sim_{\mathtt{BB}}r''$, and hence $q'\mathrel{\mathcal{R}}r''$.
It remains to establish that $q\mathrel{\mathcal{R}}r'$. If $r'=r$, then, since $q\mathrel{\mathcal{R}}r$ this is immediate. If $r'\neq r$, then by the choice of $r$ such that there does not exist $r'$ such that $r\rightarrow r'$ and $r\sim_{\mathtt{BB}}r'$, it follows that $[r']_{\sim_{\mathtt{BB}}}\prec [r]_{\sim_{\mathtt{BB}}}$. Furthermore, since clearly $p^k\mathbin{\|}r \trans[\epsilon] p^k\mathbin{\|}r' \trans[\epsilon] p'\mathbin{\|}r'$ and $p^k\mathbin{\|}r \sim_{\mathtt{BB}} p'\mathbin{\|}r'$, it follows by the stuttering property
that $p^k\mathbin{\|}r\sim_{\mathtt{BB}}p^k\mathbin{\|}r'$. Then $q\sim_{\mathtt{BB}}r'$ follows by the induction hypothesis, and hence $q\mathrel{\mathcal{R}}r'$.
\item Suppose $p''\not\sim_{\mathtt{BB}} p^k$. Then $[p'']_{\sim_{\mathtt{BB}}}\prec[p^k]_{\sim_{\mathtt{BB}}}$, so the multiplicity of $p$ in the unique decomposition of $p''$ is at most $k-1$. Hence, since $p''\mathbin{\|}r''\sim_{\mathtt{BB}}p^k\mathbin{\|}q'$, it follows that $p$ must be an element of the parallel decomposition of $r''$. This means that $[p]_{\sim_{\mathtt{BB}}}\preceq [r'']_{\sim_{\mathtt{BB}}}\preceq [r']_{\sim_{\mathtt{BB}}} \preceq [r]_{\sim_{\mathtt{BB}}}$ while at the same time $[p]_{\sim_{\mathtt{BB}}}\not\prec [r]_{\sim_{\mathtt{BB}}}$ by assumption. It follows that
\begin{equation*}
p\sim_{\mathtt{BB}}r\sim_{\mathtt{BB}}r'\sim_{\mathtt{BB}}r''\enskip.
\end{equation*} Hence, since $r''$ can contribute at most $1$ to the multiplicity of $p$ in the unique parallel decomposition of $p''\mathbin{\|}r''$, while the multiplicity of $p$ is at least $k$, it now follows that the multiplicity of $p$ in the parallel decomposition of $p''$ must be $k-1$. So we can assume without loss of generality that there exist processes $p_1,\dots,p_k$ and $p_1'$ such that
\begin{gather*}
p' = p_1\mathbin{\|}p_2\mathbin{\|}\dots\mathbin{\|}p_k\enskip,\\
p''= p_1' \mathbin{\|}p_2\mathbin{\|}\dots\mathbin{\|}p_k\enskip,\\
p\trans[\epsilon] p_i\quad (1\leq i \leq k)\enskip,\\
p\sim_{\mathtt{BB}}p_i\quad (2\leq i \leq k)\enskip,\ \text{and}\\
p_1\trans[\mu]p_1'\enskip.
\end{gather*}
We must have that $p_1'\not\sim_{\mathtt{BB}}p$, for otherwise we would have $p''\sim_{\mathtt{BB}}p^k$, contradicting the assumption in this case. Therefore, since $p_1\sim_{\mathtt{BB}} p\sim_{\mathtt{BB}}r$, there exist $r_1'$ and $r_1''$ such that $r\trans[\epsilon]r_1'\trans[\mu]r_1''$ with $p\sim_{\mathtt{BB}}r_1'$ and $p_1'\sim_{\mathtt{BB}}r_1''$.
By our choice of $r$ such that there does not exist an $r_1'\neq r$ such that $r\rightarrow r_1'$ and $r\sim_{\mathtt{BB}}r_1'$, it is immediate that $r=r_1'$ and hence $q\mathrel{\mathcal{R}}r_1'$.
Moreover, from $p^k\mathbin{\|}q' \sim_{\mathtt{BB}} r\mathbin{\|}p_2\mathbin{\|}\cdots\mathbin{\|}p_k\mathbin{\|}p_1'$, $p\sim_{\mathtt{BB}}r$ and $p\sim_{\mathtt{BB}}p_i$ ($2\leq i \leq k$) it follows that $q'\sim_{\mathtt{BB}} p_1'\sim_{\mathtt{BB}} r_1''$, so $q'\mathrel{\mathcal{R}}r_1''$.
\end{enumerate}
\end{enumerate}
\end{proof}
\section{Properties of processes}
\label{sec:preliminaries}
In this section we discuss some properties of special processes on which we will build our proof of the negative result, alongside some general properties of terms modulo (rooted) branching bisimilarity.
First of all, we notice that the (rooted) depth of a closed instance of a term is always an upper bound to the (rooted) depth of the closed instances of the variables occurring in it.
\begin{lemma}
\label{lem:var_depth}
Let $t$ be a term, and $\sigma$ be a closed substitution.
For each $x \in \mathrm{var}(t)$, $\mathrm{depth}(\sigma(x)) \le \mathrm{depth}(\sigma(t))$, and $\mathrm{Rdepth}(\sigma(x))\le \mathrm{Rdepth}(\sigma(t))$.
\end{lemma}
Then, we recall the notion of $\mathbf{0}$ \emph{factor} of a term:
\begin{definition}
We say that a term $t$ has a $\mathbf{0}$ \emph{factor} if it contains a subterm of the form $t' \mathbin{\|} t''$, and either $t' \sim_\mathtt{RBB} \mathbf{0}$ or $t'' \sim_\mathtt{RBB} \mathbf{0}$.
\end{definition}
Next, we prove that whenever an equation between two terms is sound modulo rooted branching bisimilarity, the two terms have the same variables as summands.
\begin{remark}
\label{rmk:summands}
Whenever a process term $t$ has neither $\mathbf{0}$ summands nor $\mathbf{0}$ factors, we can assume that, for some finite non-empty index set $I$, $t = \sum_{i \in I} t_i$ for some terms $t_i$ such that none of them has $+$ as head operator and, moreover, none of them has $\mathbf{0}$ summands or $\mathbf{0}$ factors.
\end{remark}
\begin{restatable}{proposition}{propsamevar}
\label{prop:same_var}
Let $t,u$ be two terms without $\mathbf{0}$ summands and $\mathbf{0}$ factors, and let $t \approx u$ be sound modulo $\sim_\mathtt{RBB}$.
If $t$ has a summand $x$, then so does $u$.
\end{restatable}
\begin{proof}
Observe, first of all, that since $t$ and $u$ have no $\mathbf{0}$ summands or factors, by Remark~\ref{rmk:summands} we can assume that $t = \sum_{i\in I} t_i$ and $u = \sum_{j\in J} u_j$ for some finite non-empty index sets $I, J$, where none of the $t_i$ ($i\in I$) and $u_j$ ($j\in J$) has $+$ as its head operator, and none of the $t_i$ ($i\in I$) and $u_j$ ($j\in J$) have $\mathbf{0}$ summands or factors.
Therefore, if $t$ has a summand $x$, then there is some index $i \in I$ such that $t_i = x$.
We aim to show that $u$ has a summand $x$ as well.
Consider the substitution $\sigma_{\mathbf{0}}$ mapping each variable to $\mathbf{0}$.
Pick an integer $m$ larger than the rooted depth of $\sigma_{\mathbf{0}}(t)$ and of $\sigma_{\mathbf{0}}(u)$.
Let $\sigma$ be the substitution mapping $x$ to the term $a^{m+1}$ and agreeing with $\sigma_{\mathbf{0}}$ on all the other variables.
As $t \approx u$ is sound modulo rooted branching bisimilarity, we have that
$
\sigma(t) \sim_\mathtt{RBB} \sigma(u).
$
Moreover, the term $\sigma(t)$ affords the transition $\sigma(t) \trans[a] a^{m}$, since $t_i=x$ and $\sigma(x)=a^{m+1} \trans[a] a^{m}$.
Hence, for some closed term $p$,
\[
\sigma(u)= \sum_{j\in J} \sigma(u_j) \trans[a] p \sim_\mathtt{BB} a^{m} \enspace .
\]
This means that there is a $j \in J$ such that $\sigma(u_j) \trans[a] p$.
We claim that this $u_j$ can only be the variable $x$.
To see that this claim holds, observe, first of all, that $x\in \mathrm{var}(u_j)$.
In fact, if $x$ did not occur in $u_j$, then we would reach a contradiction thus:
\[
m =
\mathrm{depth}(p) < \mathrm{Rdepth}(\sigma(u_j)) =
\mathrm{Rdepth}(\sigma_{\mathbf{0}}(u_j)) \le
\mathrm{Rdepth}(\sigma_{\mathbf{0}}(u)) <
m \enspace .
\]
Using this observation and Lemma~\ref{lem:var_depth}, it is not hard to show that, for each of the other possible forms $u_j$ may
have, $\sigma(u_j)$ does not afford an $a$-labelled transition leading to a term of depth $m$.
We may therefore conclude that $u_j=x$, which was to be shown.
\end{proof}
We now proceed to analyse the properties of the following family of processes, which will play a crucial role in the proof of our first main result:
\[
p_n = \sum_{i = 2}^n a.a^{\le i}
\qquad (n \ge 2)
\]
where $a^{\le i} = \sum_{j = 1}^i a^j$, and $a^j = a.a^{j-1}$ for each $j \in \{1,\dots,i\}$ (with $a^0 = \mathbf{0}$), $i \in \{2,\dots,n\}$.
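To fix intuitions (a worked instance, added here for illustration), the two smallest members of this family are
\[
p_2 = a.(a + a.a) ,
\qquad
p_3 = a.(a + a.a) + a.(a + a.a + a.a.a) ,
\]
writing $a$ for $a.\mathbf{0}$; in general, $\mathrm{depth}(p_n) = n+1$, realised by the summand $a.a^{\le n}$.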
\begin{restatable}{lemma}{lemaiindecomposable}
\label{lem:ai_indecomposable}
Let $i\ge 2$.
Process $a^{\le i}$ is indecomposable.
\end{restatable}
\begin{proof}
Assume, towards a contradiction, that there are processes $p,q \not\sim_\mathtt{BB} \mathbf{0}$ such that $a^{\le i} \sim_\mathtt{BB} p \mathbin{\|} q$.
As $a^{\le i} \trans[a] \mathbf{0}$, we have that $p \mathbin{\|} q \trans[\varepsilon] r \trans[a] r'$ for some processes $r,r'$ such that $r \sim_\mathtt{BB} a^{\le i}$ and $r' \sim_\mathtt{BB} \mathbf{0}$.
Notice that $p \not\sim_\mathtt{BB} \mathbf{0}$ implies that there is at least one $p' \in \mathrm{der}(p)$ such that $p' \trans[\mu] p''$ for some process $p''$ and action $\mu \neq \tau$.
A similar property holds for $q$.
Since, moreover, $a^{\le i}$ can only perform sequences of $a$-moves, $p \mathbin{\|} q \sim_\mathtt{BB} a^{\le i}$ implies that also $p$ and $q$ can perform (weak) sequences of $a$-moves.
In particular, it follows that there is no derivative of $p$ or $q$ that can perform action $\overline{a}$.
As a consequence, $p$ and $q$ cannot synchronise.
This implies that there are processes $p',q'$ such that $p \trans[\varepsilon] p'$, $q \trans[\varepsilon] q'$ and $r = p' \mathbin{\|} q'$.
Then $r \trans[a] r'$ can follow either from $p' \trans[a] p''$, or $q' \trans[a] q''$, for some $p'',q''$.
Assume, without loss of generality, that $p' \trans[a] p''$ and $r' = p'' \mathbin{\|} q'$ (the case $q' \trans[a] q''$ is analogous).
Now $p'' \mathbin{\|} q' \sim_\mathtt{BB} \mathbf{0}$ implies $p''\sim_\mathtt{BB} \mathbf{0}$ and $q' \sim_\mathtt{BB} \mathbf{0}$.
Hence, $p' \sim_\mathtt{BB} p' \mathbin{\|} q' = r \sim_\mathtt{BB} a^{\le i}$ and, thus, $\mathrm{depth}(p) \ge i$.
Since $q \not\sim_\mathtt{BB} \mathbf{0}$ gives $\mathrm{depth}(q) \ge 1$, we get that $\mathrm{depth}(p \mathbin{\|} q) > i$.
This gives a contradiction with $p \mathbin{\|} q \sim_\mathtt{BB} a^{\le i}$, since $\mathrm{depth}(a^{\le i}) = i \neq \mathrm{depth}(p \mathbin{\|} q)$ (cf.\ Lemma~\ref{lem:bb_same_depth}).
\end{proof}
\begin{remark}
\label{rmk:norm}
We could not directly infer that $a^{\le i}$ is indecomposable from the fact that the norm of the process is $1$ (where the norm of a process is the length of the shortest maximal trace that it can perform).
This is due to the fact that we are considering the full merge operator.
Consider, for instance, the process $\tau.a + \alpha.a.\overline{\alpha} + \overline{\alpha}.\alpha.a + \alpha.\overline{\alpha}.a$.
This process has observable norm $1$, yet it is not indecomposable since it is branching bisimilar to $\alpha.a \mathbin{\|} \overline{\alpha}$.
Similarly, the process $\tau + \overline{\alpha}.\alpha + \alpha.\overline{\alpha}$ has norm $1$, but it is (rooted) branching bisimilar to $\alpha \mathbin{\|} \overline{\alpha}$.
\end{remark}
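The first equivalence claimed in the remark can be verified with the expansion law (a routine check, added here for the reader's convenience): since $a$ and $\overline{\alpha}$ are not complementary, $a \mathbin{\|} \overline{\alpha}$ is strongly bisimilar to $a.\overline{\alpha} + \overline{\alpha}.a$, while $\alpha.a \mathbin{\|} \overline{\alpha}$ is strongly bisimilar to $\tau.a + \alpha.(a \mathbin{\|} \overline{\alpha}) + \overline{\alpha}.\alpha.a$; putting the two together, $\alpha.a \mathbin{\|} \overline{\alpha}$ is strongly, and hence branching, bisimilar to $\tau.a + \alpha.a.\overline{\alpha} + \alpha.\overline{\alpha}.a + \overline{\alpha}.\alpha.a$.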
\begin{restatable}{lemma}{lempnindecomposable}
\label{lem:pn_indecomposable}
Let $n\ge 2$.
Process $p_n$ is indecomposable.
\end{restatable}
\begin{proof}
Assume, towards a contradiction, that there are processes $p,q \not\sim_\mathtt{BB} \mathbf{0}$ such that $p_n \sim_\mathtt{BB} p \mathbin{\|} q$.
Notice that $p \not\sim_\mathtt{BB} \mathbf{0}$ implies that there is at least one $p' \in \mathrm{der}(p)$ such that $p' \trans[\mu] p''$ for some process $p''$ and action $\mu \neq \tau$.
A similar property holds for $q$.
Given any $i \in \{2,\dots,n\}$, we have that $p_n \trans[a] a^{\le i}$.
Hence, from $p_n \sim_\mathtt{BB} p \mathbin{\|} q$, we get that $p \mathbin{\|} q \trans[\varepsilon] r \trans[a] r'$ for some processes $r,r'$ such that $r \sim_\mathtt{BB} p_n$ and $r' \sim_\mathtt{BB} a^{\le i}$.
We remark that, since $p_n$ can only perform sequences of $a$-moves, $p \mathbin{\|} q \sim_\mathtt{BB} p_n$ implies that also $p$ and $q$ can perform (weak) sequences of $a$-moves.
In particular, it follows that there is no derivative of $p$ or $q$ that can perform action $\overline{a}$.
As a consequence, $p$ and $q$ cannot synchronise.
We can then distinguish four cases, according to how the sequence of transitions $p \mathbin{\|} q \trans[\varepsilon] r \trans[a] r'$ is derived:
\begin{itemize}
\item $p \mathbin{\|} q \trans[\varepsilon] p' \mathbin{\|} q \trans[a] p'' \mathbin{\|} q$, with $p' \mathbin{\|} q \sim_\mathtt{BB} p_n$ and $p'' \mathbin{\|} q \sim_\mathtt{BB} a^{\le i}$.
Since $q \not\sim_\mathtt{BB} \mathbf{0}$ and $a^{\le i}$ is indecomposable by Lemma~\ref{lem:ai_indecomposable}, we get that $p'' \sim_\mathtt{BB} \mathbf{0}$ and $q \sim_\mathtt{BB} a^{\le i}$.
Hence, given any $j \in \{0,\dots,i-1\}$, $a^{\le i} \trans[a] a^j$ implies that $q \trans[\varepsilon] q' \trans[a] q''$ for some $q',q''$ such that $q' \sim_\mathtt{BB} a^{\le i}$ and $q'' \sim_\mathtt{BB} a^j$.
Therefore, $p \mathbin{\|} q \trans[\varepsilon] p \mathbin{\|} q' \trans[a] p \mathbin{\|} q''$ and $p_n \sim_\mathtt{BB} p \mathbin{\|} q$ imply that $p_n \trans[a] r$ for some process $r$ such that $r \sim_\mathtt{BB} q'' \sim_\mathtt{BB} a^j$ ($p_n$ cannot perform any $\tau$-move, hence it must match any initial sequence of $\tau$-transitions by doing nothing).
However, $p_n \trans[a] a^{\le k}$, for any $k \in \{2,\dots,n\}$, and there are no $j,k$ such that $a^j \sim_\mathtt{BB} a^{\le k}$, thus giving a contradiction with $p_n \sim_\mathtt{BB} p \mathbin{\|} q$.
\item $p \mathbin{\|} q \trans[\varepsilon] p \mathbin{\|} q' \trans[a] p \mathbin{\|} q''$, with $p \mathbin{\|} q' \sim_\mathtt{BB} p_n$ and $p \mathbin{\|} q'' \sim_\mathtt{BB} a^{\le i}$.
This case is symmetric to the previous one and therefore omitted.
\item $p \mathbin{\|} q \trans[\varepsilon] p' \mathbin{\|} q' \trans[a] p'' \mathbin{\|} q'$, with $p' \mathbin{\|} q' \sim_\mathtt{BB} p_n$ and $p'' \mathbin{\|} q'\sim_\mathtt{BB} a^{\le i}$.
Since $a^{\le i}$ is indecomposable (Lemma~\ref{lem:ai_indecomposable}), we can distinguish two cases:
\begin{itemize}
\item $p'' \sim_\mathtt{BB} \mathbf{0}$ and $q' \sim_\mathtt{BB} a^{\le i}$.
Then we can proceed as in the previous case, and show that $p \mathbin{\|} q \trans[\varepsilon] p \mathbin{\|} q' \trans[a] a^j$, for some $j \le i-1$, and thus $p_n \not\sim_\mathtt{BB} p \mathbin{\|} q$.
\item $p'' \sim_\mathtt{BB} a^{\le i}$ and $q' \sim_\mathtt{BB} \mathbf{0}$.
In this case we have that $p \mathbin{\|} q \trans[\varepsilon] p' \mathbin{\|} q \trans[a] p'' \mathbin{\|} q$.
Hence, since $p_n$ cannot perform any $\tau$-move, $p_n \sim_\mathtt{BB} p \mathbin{\|} q$ requires that $p_n \trans[a] a^{\le k}$ for some $k \in \{2,\dots,n\}$ such that $a^{\le k} \sim_\mathtt{BB} a^{\le i} \mathbin{\|} q$.
As $a^{\le k}$ is indecomposable (Lemma~\ref{lem:ai_indecomposable}), $a^{\le i} \not\sim_\mathtt{BB} \mathbf{0}$ for any $i \in \{2,\dots,n\}$, and $q \not\sim_\mathtt{BB} \mathbf{0}$ by assumption, there is no index $k \in \{2,\dots,n\}$ realising the required equivalence.
Hence, also this case gives a contradiction with $p_n \sim_\mathtt{BB} p \mathbin{\|} q$.
\end{itemize}
\item $p \mathbin{\|} q \trans[\varepsilon] p' \mathbin{\|} q' \trans[a] p' \mathbin{\|} q''$, with $p' \mathbin{\|} q' \sim_\mathtt{BB} p_n$ and $p' \mathbin{\|} q''\sim_\mathtt{BB} a^{\le i}$.
This case is symmetric to the previous one and therefore omitted.
\end{itemize}
\end{proof}
The next lemma characterises the parallel components of a process that is branching bisimilar to the process $a \mathbin{\|} p_n$, for some $n \ge 2$.
\begin{restatable}{lemma}{lemapnpar}
\label{lem:apn_par}
Let $n \ge 2$.
Assume that $p \mathbin{\|} q \sim_\mathtt{BB} a \mathbin{\|} p_n$ for some processes $p,q \not\sim_\mathtt{BB} \mathbf{0}$.
Then either $p \sim_\mathtt{BB} a$ and $q \sim_\mathtt{BB} p_n$, or $p \sim_\mathtt{BB} p_n$ and $q \sim_\mathtt{BB} a$.
\end{restatable}
\begin{proof}
First of all, we notice that we can apply the same reasoning used in the proof of Lemma~\ref{lem:ai_indecomposable} to argue that no $\tau$-move performed by any derivative of $p \mathbin{\|} q$ can follow from a communication between the two parallel components, for otherwise we get a contradiction with $p \mathbin{\|} q \sim_\mathtt{BB} a \mathbin{\|} p_n$.
We have that $a \mathbin{\|} p_n \trans[a] \mathbf{0} \mathbin{\|} p_n \sim_\mathtt{BB} p_n$.
Hence, as $p \mathbin{\|} q \sim_\mathtt{BB} a \mathbin{\|} p_n$, there are processes $r,r'$ such that $p \mathbin{\|} q \trans[\varepsilon] r \trans[a] r'$, $r \sim_\mathtt{BB} a \mathbin{\|} p_n$, and $r' \sim_\mathtt{BB} p_n$.
Since no communication can occur, we have that there are processes $p',q'$ such that $p \trans[\varepsilon] p'$, $q \trans[\varepsilon] q'$, and $r = p' \mathbin{\|} q'$.
As $r \trans[a] r'$, assume, without loss of generality, that $p' \trans[a] p''$ and $r' = p'' \mathbin{\|} q'$.
(Under this assumption we will obtain that $p \sim_\mathtt{BB} a$ and $q \sim_\mathtt{BB} p_n$.
The symmetric case is obtained when $q' \trans[a] q''$ and $r' = p' \mathbin{\|} q''$, by applying the same reasoning.)
Since $p'' \mathbin{\|} q' \sim_\mathtt{BB} p_n$ and $p_n$ is indecomposable by Lemma~\ref{lem:pn_indecomposable}, we can distinguish the following two cases:
\begin{itemize}
\item $p'' \sim_\mathtt{BB} p_n$ and $q' \sim_\mathtt{BB} \mathbf{0}$.
As $p' \trans[a] p''$, this implies that $\mathrm{depth}(p) \ge n+2$.
At the same time, $q \not\sim_\mathtt{BB} \mathbf{0}$ gives $\mathrm{depth}(q) \ge 1$.
Consequently, we have that $\mathrm{depth}(p \mathbin{\|} q) = \mathrm{depth}(p) + \mathrm{depth}(q) \ge n+3 > n+2 = \mathrm{depth}(a \mathbin{\|} p_n)$, giving thus a contradiction with $p \mathbin{\|} q \sim_\mathtt{BB} a \mathbin{\|} p_n$.
\item $p'' \sim_\mathtt{BB} \mathbf{0}$ and $q' \sim_\mathtt{BB} p_n$.
In this case, we have that $r = p' \mathbin{\|} q' \sim_\mathtt{BB} p' \mathbin{\|} p_n$.
Moreover, $q \trans[\varepsilon] q'$ and $\mathrm{depth}(q') = n+1$ give that $\mathrm{depth}(q) \ge n+1$.
Now, as $p \not\sim_\mathtt{BB} \mathbf{0}$ gives $\mathrm{depth}(p) \ge 1$, from $\mathrm{depth}(p \mathbin{\|} q) = \mathrm{depth}(a \mathbin{\|} p_n) = n+2$ we infer that $\mathrm{depth}(q) = n+1$ and $\mathrm{depth}(p) = 1$.
As $a \mathbin{\|} p_n$ can perform only sequences of $a$-moves, and $\mathrm{depth}(p) = 1$, we obtain that whenever $p \wtrans[\alpha] p'$ then $\alpha = a$ and $p' \sim_\mathtt{BB} \mathbf{0}$.
We can also show that there is no $p'$ such that $p \trans[\varepsilon] p'$ and $p' \sim_\mathtt{BB} \mathbf{0}$.
Assume, towards a contradiction, that such a process $p'$ exists.
Then $p \mathbin{\|} q \trans[\varepsilon] p' \mathbin{\|} q \sim_\mathtt{BB} q$.
As $a \mathbin{\|} p_n$ cannot perform any $\tau$-transition, the branching bisimulation relation requires that $a \mathbin{\|} p_n \sim_\mathtt{BB} q$.
However, this is not possible since $\mathrm{depth}(a \mathbin{\|} p_n) \neq \mathrm{depth}(q)$.
We have therefore obtained that:
\begin{itemize}
\item whenever $p \wtrans[\alpha] p'$ then $\alpha = a$ and $p' \sim_\mathtt{BB} \mathbf{0}$, and
\item there is no $p'$ such that $p \trans[\varepsilon] p'$ and $p' \sim_\mathtt{BB} \mathbf{0}$.
\end{itemize}
This is enough to infer that $p \sim_\mathtt{BB} a$ (as the second condition guarantees that whenever $p \trans[\varepsilon] p' \trans[a] p''$ then $p' \sim_\mathtt{BB} a$ also holds).
It remains to show that $q \sim_\mathtt{BB} p_n$.
Let $p'$ be any process such that $p \trans[\varepsilon] p' \trans[a] p''$ for some $p'' \sim_\mathtt{BB} \mathbf{0}$.
Then $p \mathbin{\|} q \trans[\varepsilon] p' \mathbin{\|} q \trans[a] p'' \mathbin{\|} q \sim_\mathtt{BB} q$.
As $p \mathbin{\|} q \sim_\mathtt{BB} a \mathbin{\|} p_n$, then $a \mathbin{\|} p_n \trans[\varepsilon] r \trans[a] r'$ for some processes $r,r'$ such that $r \sim_\mathtt{BB} p' \mathbin{\|} q$ and $r' \sim_\mathtt{BB} q$.
Since, however, $a \mathbin{\|} p_n$ cannot perform any $\tau$-transition, we get that $a \mathbin{\|} p_n \trans[a] r'$ for some $r' \sim_\mathtt{BB} q$.
Moreover, as $\mathrm{depth}(q) = n+1$, we have that either $r' = \mathbf{0} \mathbin{\|} p_n \sim_\mathtt{BB} p_n$ or $r' = a \mathbin{\|} a^{\le n}$, as no other $a$-derivative of $a \mathbin{\|} p_n$ has observable depth $n+1$.
Hence, to conclude, it is enough to show that the case $r' = a \mathbin{\|} a^{\le n}$ yields a contradiction with $p \mathbin{\|} q \sim_\mathtt{BB} a \mathbin{\|} p_n$.
Assume that $q \sim_\mathtt{BB} a \mathbin{\|} a^{\le n}$.
Then, as $p \sim_\mathtt{BB} a$, we get that $p \mathbin{\|} q \sim_\mathtt{BB} a \mathbin{\|} (a \mathbin{\|} a^{\le n})$.
Now $a \mathbin{\|} (a \mathbin{\|} a^{\le n}) \trans[a] a \mathbin{\|} (a \mathbin{\|} a^{n-1}) \sim_\mathtt{BB} a^{n+1}$.
However, there is no process $r$ such that $a \mathbin{\|} p_n \trans[a] r$ and $r \sim_\mathtt{BB} a^{n+1}$.
Hence $a \mathbin{\|} p_n \not\sim_\mathtt{BB} a \mathbin{\|} (a \mathbin{\|} a^{\le n})$, giving thus a contradiction with $a \mathbin{\|} p_n \sim_\mathtt{BB} p \mathbin{\|} q$.
\end{itemize}
We have therefore obtained that $p \sim_\mathtt{BB} a$ and $q \sim_\mathtt{BB} p_n$, thus concluding the proof.
\end{proof}
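All processes considered in the lemma above are $\tau$-free, and on $\tau$-free processes branching bisimilarity coincides with strong bisimilarity. The distinguishing claims used in the proof can therefore be sanity-checked mechanically. The Python sketch below is not part of the formal development: it uses a hypothetical tuple encoding of finite CCS terms, a naive greatest-fixpoint bisimilarity check, and it assumes $p_n = \sum_{i=2}^{n} a.a^{\le i}$, which is consistent with the $a$-derivatives of $a \mathbin{\|} p_n$ used in the proof.

```python
from itertools import product

# Hypothetical tuple encoding of finite, tau-free CCS terms:
#   ("0",)          inaction
#   ("pre", a, p)   action prefix a.p
#   ("sum", p, q)   choice p + q
#   ("par", p, q)   parallel composition (pure interleaving here)
NIL = ("0",)

def steps(p):
    """All (action, successor) pairs derivable from p."""
    if p[0] == "0":
        return []
    if p[0] == "pre":
        return [(p[1], p[2])]
    if p[0] == "sum":
        return steps(p[1]) + steps(p[2])
    if p[0] == "par":
        l, r = p[1], p[2]
        return ([(a, ("par", l2, r)) for a, l2 in steps(l)]
                + [(a, ("par", l, r2)) for a, r2 in steps(r)])
    raise ValueError(p)

def reachable(p):
    seen, todo = {p}, [p]
    while todo:
        for _, q in steps(todo.pop()):
            if q not in seen:
                seen.add(q)
                todo.append(q)
    return seen

def bisimilar(p, q):
    """Naive greatest-fixpoint strong bisimilarity check; adequate for
    branching bisimilarity on tau-free processes."""
    rel = set(product(reachable(p) | reachable(q), repeat=2))
    changed = True
    while changed:
        changed = False
        for s, t in list(rel):
            fwd = all(any(a == b and (s2, t2) in rel for b, t2 in steps(t))
                      for a, s2 in steps(s))
            bwd = all(any(a == b and (s2, t2) in rel for a, s2 in steps(s))
                      for b, t2 in steps(t))
            if not (fwd and bwd):
                rel.discard((s, t))
                changed = True
    return (p, q) in rel

def chain(n):                  # a^n
    p = NIL
    for _ in range(n):
        p = ("pre", "a", p)
    return p

def a_le(n):                   # a^{<=n} = a + a^2 + ... + a^n
    p = chain(1)
    for i in range(2, n + 1):
        p = ("sum", p, chain(i))
    return p

def p_(n):                     # assumed form: p_n = sum_{i=2}^n a.a^{<=i}
    p = ("pre", "a", a_le(2))
    for i in range(3, n + 1):
        p = ("sum", p, ("pre", "a", a_le(i)))
    return p
```

For instance, with $n = 3$ this check confirms that no $a$-derivative of $a \mathbin{\|} p_3$ is bisimilar to $a^{4}$, matching the claim made at the end of the proof.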
We also need Lemma~\ref{lem:apn_par} to hold modulo rooted branching bisimilarity.
\begin{restatable}{lemma}{lemapnparrbb}
\label{lem:apn_par_rbb}
Let $n \ge 2$.
Assume that $p \mathbin{\|} q \sim_\mathtt{RBB} a \mathbin{\|} p_n$ for some processes $p,q \not\sim_\mathtt{RBB} \mathbf{0}$.
Then either $p \sim_\mathtt{RBB} a$ and $q \sim_\mathtt{RBB} p_n$, or $p \sim_\mathtt{RBB} p_n$ and $q \sim_\mathtt{RBB} a$.
\end{restatable}
\begin{proof}
First of all, we notice that $\tau \not \in \mathrm{init}(a \mathbin{\|} p_n)$ implies $\tau \not \in \mathrm{init}(p)$ and $\tau \not \in \mathrm{init}(q)$.
In particular, this guarantees that $p,q \not\sim_\mathtt{RBB} \mathbf{0}$ implies $p,q \not\sim_\mathtt{BB} \mathbf{0}$ (Lemma~\ref{lem:same_initials}).
We have that $a \mathbin{\|} p_n \trans[a] \mathbf{0} \mathbin{\|} p_n \sim_\mathtt{BB} p_n$.
Hence, as $p \mathbin{\|} q \sim_\mathtt{RBB} a \mathbin{\|} p_n$, there is a process $r$ such that $p \mathbin{\|} q \trans[a] r$ and $r \sim_\mathtt{BB} p_n$.
We can assume, without loss of generality, that $p \trans[a] p'$ and $r = p' \mathbin{\|} q$, for some process $p'$.
(Under this assumption we will obtain that $p \sim_\mathtt{RBB} a$ and $q \sim_\mathtt{RBB} p_n$.
The symmetric case is obtained when $q \trans[a] q'$ and $r = p \mathbin{\|} q'$, by applying the same reasoning.)
Since $p' \mathbin{\|} q \sim_\mathtt{BB} p_n$, $q \not\sim_\mathtt{BB} \mathbf{0}$, and $p_n$ is indecomposable by Lemma~\ref{lem:pn_indecomposable}, it follows that $p' \sim_\mathtt{BB} \mathbf{0}$ and $q \sim_\mathtt{BB} p_n$.
Moreover, as $\mathrm{init}(q) = \mathrm{init}(p_n)$ and $\tau \not\in \mathrm{init}(p_n)$, by Lemma~\ref{lem:same_initials} we get $q \sim_\mathtt{RBB} p_n$.
Hence, $p \mathbin{\|} q \sim_\mathtt{RBB} p \mathbin{\|} p_n$.
Assume now that $p \trans[\mu] p'$ for some action $\mu$ and process $p'$.
From $p \mathbin{\|} p_n \sim_\mathtt{RBB} a \mathbin{\|} p_n$ we infer $\mu = a$, and we can distinguish two cases, according to which $a$-move is performed by $a \mathbin{\|} p_n$ to match the $a$-transition by $p \mathbin{\|} p_n$:
\begin{itemize}
\item $p' \mathbin{\|} p_n \sim_\mathtt{BB} p_n$.
As $p_n$ is indecomposable (Lemma~\ref{lem:pn_indecomposable}), we can directly conclude that $p' \sim_\mathtt{BB} \mathbf{0}$.
\item $p' \mathbin{\|} p_n \sim_\mathtt{BB} a \mathbin{\|} a^{\le i}$ for some $i \in \{2,\dots,n\}$.
This case is not possible: comparing depths, $p' \mathbin{\|} p_n \sim_\mathtt{BB} a \mathbin{\|} a^{\le i}$ would force $p' \sim_\mathtt{BB} \mathbf{0}$ and $i = n$, and hence $p_n \sim_\mathtt{BB} a \mathbin{\|} a^{\le n}$; but $a \mathbin{\|} a^{\le n} \trans[a] a \mathbin{\|} \mathbf{0} \sim_\mathtt{BB} a$, whereas every $a$-derivative $a^{\le i}$ of $p_n$ has depth at least $2$.
\end{itemize}
Thus, whenever $p \trans[a] p'$ we have that $p' \sim_\mathtt{BB} \mathbf{0}$, which, together with $\mathrm{init}(p) = \{a\}$, gives $p \sim_\mathtt{RBB} a$.
\end{proof}
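Both lemmas above rely on the additivity of depth over parallel composition, $\mathrm{depth}(p \mathbin{\|} q) = \mathrm{depth}(p) + \mathrm{depth}(q)$, and on $\mathrm{depth}(a \mathbin{\|} p_n) = n+2$. A few lines of Python make these depth computations concrete. The encoding of finite $\tau$-free terms as tuples is hypothetical (it is not the paper's syntax), and the definition of $p_n$ as $\sum_{i=2}^{n} a.a^{\le i}$ is an assumption consistent with the derivatives used in the proofs.

```python
# Hypothetical tuple encoding of finite, tau-free CCS terms:
#   ("0",) | ("pre", a, p) | ("sum", p, q) | ("par", p, q)
NIL = ("0",)

def depth(p):
    """Length of a longest action sequence p can perform."""
    if p[0] == "0":
        return 0
    if p[0] == "pre":
        return 1 + depth(p[2])
    if p[0] == "sum":
        return max(depth(p[1]), depth(p[2]))
    if p[0] == "par":              # interleaving: longest traces add up
        return depth(p[1]) + depth(p[2])
    raise ValueError(p)

def chain(n):                      # a^n
    p = NIL
    for _ in range(n):
        p = ("pre", "a", p)
    return p

def a_le(n):                       # a^{<=n} = a + a^2 + ... + a^n
    p = chain(1)
    for i in range(2, n + 1):
        p = ("sum", p, chain(i))
    return p

def p_(n):                         # assumed: p_n = sum_{i=2}^n a.a^{<=i}
    p = ("pre", "a", a_le(2))
    for i in range(3, n + 1):
        p = ("sum", p, ("pre", "a", a_le(i)))
    return p
```

With $n = 5$, for example, this yields $\mathrm{depth}(p_5) = 6 = n+1$ and $\mathrm{depth}(a \mathbin{\|} p_5) = 7 = n+2$, exactly the values used in the depth arguments above.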
We conclude this section with the following result, showing a particular case in which we can establish that not only a variable $x$ triggers the behaviour of a term $t$, but that $t$ actually coincides with $x$.
\begin{restatable}{lemma}{lemaisum}
\label{lem:ai_sum}
Let $t$ be a term that does not have $+$ as head operator, and let $\sigma$ be a closed substitution.
Assume that $\sigma(t)$ has neither $\mathbf{0}$ summands nor $\mathbf{0}$ factors, and that $\sigma(t) \sim_\mathtt{RBB} \sum_{k = 1}^m aa^{\le i_k}$, for some $m > 1$, and $2 \le i_1 < \dots < i_m$.
Then $t = x$ for some variable $x$.
\end{restatable}
\begin{proof}
Assume towards a contradiction that $t$ is not a variable.
We proceed by a case analysis on the other possible forms $t$ may have:
\begin{itemize}
\item $t = \mu.t'$ for some $\mu \in \mathcal{A}_\tau$ and term $t'$.
Then $\mu = a$ and $a^{\le i_1} \sim_\mathtt{BB} \sigma(t') \sim_\mathtt{BB} a^{\le i_m}$.
However, this is a contradiction because, as $i_1 < i_m$, the terms $a^{\le i_1}$ and $a^{\le i_m}$ have different observable depths, and therefore they cannot be branching bisimilar.
\item $t = t' \mathbin{\|} t''$.
Since $\sigma(t)$ has no $\mathbf{0}$ factors we have that $\sigma(t'),\sigma(t'') \not \sim_\mathtt{RBB} \mathbf{0}$.
In particular, notice that since $\tau \not\in \mathrm{init}(\sigma(t')) \cup \mathrm{init}(\sigma(t''))$ (as $\tau \not\in \mathrm{init}(\sum_{k = 1}^m aa^{\le i_k})$), it follows that $\sigma(t'),\sigma(t'') \not \sim_\mathtt{BB} \mathbf{0}$ as well (Lemma~\ref{lem:same_initials}).
Moreover, observe that $\sum_{k =1}^m a.a^{\le i_k} \trans[a] a^{\le i_m}$.
As $\sigma(t) = \sigma(t') \mathbin{\|} \sigma(t'') \sim_\mathtt{RBB} \sum_{k=1}^m a.a^{\le i_k}$, there is a process $r$ such that $\sigma(t') \mathbin{\|} \sigma(t'') \trans[a] r$ and $r \sim_\mathtt{BB} a^{\le i_m}$.
Assume, without loss of generality, that $\sigma(t') \trans[a] p$ and $r = p \mathbin{\|} \sigma(t'')$, for some process $p$.
As $\sigma(t'') \not\sim_\mathtt{BB} \mathbf{0}$ and $a^{\le i_m}$ is indecomposable (Lemma~\ref{lem:ai_indecomposable}), it follows that $p \sim_\mathtt{BB} \mathbf{0}$ and $\sigma(t'') \sim_\mathtt{BB} a^{\le i_m}$.
Now notice that $\sum_{k=1}^m a.a^{\le i_k} \trans[a] a^{\le i_1}$.
We can distinguish two cases according to how this transition is matched by $\sigma(t') \mathbin{\|} \sigma(t'')$ in the rooted branching bisimulation game:
\begin{itemize}
\item $\sigma(t') \trans[a] p'$, for some process $p'$, and $p' \mathbin{\|} \sigma(t'') \sim_\mathtt{BB} a^{\le i_1}$.
Then we can proceed as above to show that $\sigma(t'') \sim_\mathtt{BB} a^{\le i_1}$.
As $a^{\le i_1} \not\sim_\mathtt{BB} a^{\le i_m}$, this gives a contradiction.
\item $\sigma(t'') \trans[a] q$, for some process $q$, and $\sigma(t') \mathbin{\|} q \sim_\mathtt{BB} a^{\le i_1}$.
Since $\sigma(t') \not\sim_\mathtt{BB} \mathbf{0}$ and $a^{\le i_1}$ is indecomposable (Lemma~\ref{lem:ai_indecomposable}), it follows that $q \sim_\mathtt{BB} \mathbf{0}$ and $\sigma(t') \sim_\mathtt{BB} a^{\le i_1}$.
This implies that $\sigma(t') \mathbin{\|} \sigma(t'') \sim_\mathtt{BB} a^{\le i_1} \mathbin{\|} a^{\le i_m}$, and thus $\mathrm{depth}(\sigma(t') \mathbin{\|} \sigma(t'')) = i_1 + i_m > i_m +1$ (since $i_1 \ge 2$).
In particular, since $\tau \not \in \mathrm{init}(\sigma(t') \mathbin{\|} \sigma(t'')) \cup \mathrm{init}(a^{\le i_1} \mathbin{\|} a^{\le i_m})$, we can infer that $\mathrm{depth}(\sigma(t') \mathbin{\|} \sigma(t'')) = \mathrm{Rdepth}(\sigma(t') \mathbin{\|} \sigma(t''))$.
Therefore, we get a contradiction with $\sigma(t') \mathbin{\|} \sigma(t'') \sim_\mathtt{RBB} \sum_{k=1}^m a.a^{\le i_k}$, since $\mathrm{Rdepth}(\sum_{k=1}^m a.a^{\le i_k}) = i_m+1$.
\end{itemize}
\end{itemize}
\end{proof}
\section{Proof of Proposition~\ref{prop:rbb_substitution_case}}
\proprbbsubstitutioncase*
\begin{proof}
Observe, first of all, that since $\sigma(t)=p$ and $\sigma(u)=q$ have no $\mathbf{0}$ summands or factors, neither do $t$ and $u$.
Hence, by Remark~\ref{rmk:summands}, we have that for some finite non-empty index sets $I, J$,
$
t = \sum_{i\in I} t_i
\text{ and }
u = \sum_{j\in J} u_j,
$
where none of the $t_i$ ($i\in I$) and $u_j$ ($j\in J$) is $\mathbf{0}$, has $+$ as its head operator, or has $\mathbf{0}$ summands or factors.
As $p=\sigma(t)$ has a summand rooted branching bisimilar to $a \mathbin{\|} p_n$, there is an index $i\in I$ such that $\sigma(t_i) \sim_\mathtt{RBB} a \mathbin{\|} p_n$.
Our aim is now to show that there is an index $j\in J$ such that $\sigma(u_j) \sim_\mathtt{RBB} a \mathbin{\|} p_n$, proving that $q=\sigma(u)$ also has a summand rooted branching bisimilar to $a \mathbin{\|} p_n$.
We proceed by a case analysis on the form $t_i$ may have.
\begin{enumerate}
\item \label{Case:t-var}
\textcolor{blue}{\sc Case $t_i = x$ for some variable $x$}.
In this case, we have $\sigma(x) \sim_\mathtt{RBB} a \mathbin{\|} p_n$, and $t$ has $x$ as a summand.
As $t \approx u$ is sound modulo rooted branching bisimilarity and neither $t$ nor $u$ have $\mathbf{0}$ summands or factors, it follows that $u$ also has $x$ as a summand (Proposition~\ref{prop:same_var}).
Thus there is an index $j\in J$ such that $u_j = x$, and, modulo rooted branching bisimulation, $\sigma(u)$ has $a \mathbin{\|} p_n$ as a summand, which was to be shown.
\item \label{Case:t-pre}
\textcolor{blue}{\sc Case $t_i = \mu t'$ for some term $t'$}.
This case is vacuous: the root condition implies $\mu = a$, and since $a \sigma(t') \trans[a] \sigma(t')$ is the only transition afforded by $\sigma(t_i)$, this term cannot be rooted branching bisimilar to $a \mathbin{\|} p_n$.
Indeed, $a \mathbin{\|} p_n$ can perform both $a \mathbin{\|} p_n \trans[a] \mathbf{0} \mathbin{\|} p_n \sim_\mathtt{BB} p_n$ and $a \mathbin{\|} p_n \trans[a] a \mathbin{\|} a^{\le i}$, for any $i \in \{2,\dots,n\}$.
Clearly, $p_n \not\sim_\mathtt{BB} a \mathbin{\|} a^{\le i}$ for any $i \in \{2,\dots,n\}$, and thus $\sigma(t')$ cannot be branching bisimilar to both of them.
\item \label{Case:t-par}
\textcolor{blue}{\sc Case $t_i = t' \mathbin{\|} t''$ for some terms $t',t''$}.
In this case, we have $\sigma(t') \mathbin{\|} \sigma(t'') \sim_\mathtt{RBB} a \mathbin{\|} p_n$.
As $\sigma(t_i)$ has no $\mathbf{0}$ factors, it follows that $\sigma(t') \not\sim_\mathtt{RBB} \mathbf{0}$ and $\sigma(t'') \not\sim_\mathtt{RBB} \mathbf{0}$.
Thus by Lemma~\ref{lem:apn_par_rbb} (and without loss of generality) $\sigma(t') \sim_\mathtt{RBB} a$ and $\sigma(t'') \sim_\mathtt{RBB} p_n$.
Now, $t''$ can be written as $t'' = v_1 + \cdots + v_M$ ($M > 0$), where none of the summands $v_i$ is $\mathbf{0}$ or a sum.
Observe that, since $n$ is larger than the size of $t$, we have that $M < n$.
Hence, since $\sigma(t'') \sim_\mathtt{RBB} p_n$, there must be some $h \in \{1,\ldots,M\}$ such that $\sigma(v_h) \sim_\mathtt{RBB} \sum_{k=1}^{m} a a^{\le i_k}$ for some $m>1$ and $2 \le i_1 < \ldots < i_m \le n$.
The term $\sigma(v_h)$ has no $\mathbf{0}$ summands or factors, or else so would $\sigma(t'')$, and thus $p=\sigma(t)$.
By Lemma~\ref{lem:ai_sum}, it follows that $v_h$ can only be a variable $x$ and thus that
\begin{equation}
\label{eq:sigma(x)}
\sigma(x) \sim_\mathtt{RBB} \sum_{k=1}^{m} a a^{\le i_k} \enspace .
\end{equation}
Observe, for later use, that, since $t'$ has no $\mathbf{0}$ factors, the above equation yields that $x \not\in \mathrm{var}(t')$, or else $\sigma(t') \not\sim_\mathtt{RBB} a$ (Lemma~\ref{lem:var_depth}).
So, modulo rooted branching bisimilarity, $t_i$ has the form $t' \mathbin{\|} (x+t''')$, for some term $t'''$, with $x \not\in \mathrm{var}(t')$ and $\sigma(t') \sim_\mathtt{RBB} a$.
Our order of business will now be to use the information collected so far to argue that $\sigma(u)$ has a summand rooted branching bisimilar to $a \mathbin{\|} p_n$.
To this end, consider the substitution
\[
\sigma' = \sigma[x \mapsto \overline{a} (a \mathbin{\|} p_n)] \enspace .
\]
We have that
\begin{align*}
\sigma'(t_i) & ={} \sigma'(t') \mathbin{\|} \sigma'(t'') \\
& ={} \sigma(t') \mathbin{\|} \sigma'(t'') & \text{(As $x\not\in\mathrm{var}(t')$)} \\
& \sim_\mathtt{RBB} a \mathbin{\|} (\overline{a} (a \mathbin{\|} p_n) +\sigma'(t''')) & \text{(As $t'' = x + t'''$).}
\end{align*}
Thus, there is some process $p'$ such that $\sigma'(t_i) \trans[\tau] p' \sim_\mathtt{BB} a \mathbin{\|} p_n$, so that
\[
\sigma'(t) \trans[\tau] p' \sim_\mathtt{BB} a \mathbin{\|} p_n
\]
also holds.
Since $t\approx u$ is sound modulo $\sim_\mathtt{RBB}$, it follows that
\[
\sigma'(t) \sim_\mathtt{RBB} \sigma'(u) \enspace .
\]
Hence, we can infer that there are a $j\in J$ and a $q'$ such that
\begin{equation}
\label{eq:A.14}
\sigma'(u_j) \trans[\tau] q' \sim_\mathtt{BB} a \mathbin{\|} p_n \enspace .
\end{equation}
Recall that, by one of the assumptions of the proposition, $\sigma(u) \sim_\mathtt{RBB} a \mathbin{\|} p_n$, and thus $\sigma(u)$ has rooted depth $n+2$.
On the other hand, by Equation (\ref{eq:A.14}),
\[
\mathrm{Rdepth}(\sigma'(u_j)) \ge n+3 \enspace .
\]
Since $\sigma$ and $\sigma'$ differ only in the closed term they map variable $x$ to, it follows that
\begin{equation}
\label{eq:A.15}
x \in \mathrm{var}(u_j) \enspace .
\end{equation}
We now proceed to show that $\sigma(u_j) \sim_\mathtt{RBB} a \mathbin{\|} p_n$ by a further case analysis on the form a term $u_j$ satisfying Equations (\ref{eq:A.14}) and (\ref{eq:A.15}) may have.
\begin{enumerate}
\item \label{Case:uj-var}
\textcolor{red}{\sc Case $u_j = x$}.
This case is vacuous because $\sigma' (x) = \overline{a} (a \mathbin{\|} p_n) \ntrans[\tau]$, and thus this possible form for $u_j$ does not meet Equation (\ref{eq:A.14}).
\item \label{Case:uj-pre}
\textcolor{red}{\sc Case $u_j = \mu u'$ for some term $u'$}.
In light of Equation (\ref{eq:A.14}), we have that $\mu=\tau$ and $q'=\sigma'(u') \sim_\mathtt{BB} a \mathbin{\|} p_n$.
Using Equation (\ref{eq:A.15}), we get $x \in \mathrm{var}(u')$ and, thus, $\mathrm{depth}(\sigma'(u')) \geq n+3$ (Lemma~\ref{lem:var_depth}).
Since $a \mathbin{\|} p_n$ has depth $n+2$, this contradicts $q' \sim_\mathtt{BB} a \mathbin{\|} p_n$.
\item \label{Case:uj-hmerge}
\textcolor{red}{\sc Case $u_j = u' \mathbin{\|} u''$ for some terms $u',u''$}.
Our assumption that $\sigma(u)$ has no $\mathbf{0}$ factors yields that none of the terms $u',u'',\sigma(u')$ and $\sigma(u'')$ is rooted branching bisimilar to $\mathbf{0}$.
In addition, $\mathrm{init}(\sigma(u)) = \{a\}$ gives $\tau \not\in \mathrm{init}(\sigma(u')) \cup \mathrm{init}(\sigma(u''))$, so that, by Lemma~\ref{lem:same_initials}, $\sigma(u'),\sigma(u'') \not\sim_\mathtt{BB} \mathbf{0}$ holds.
Moreover, by Equation (\ref{eq:A.15}), either $x\in\mathrm{var}(u')$ or $x\in\mathrm{var}(u'')$ (and a simple observation on the depth of terms guarantees that it cannot occur in both).
By Equation (\ref{eq:A.14}), we have that $\sigma(u') \mathbin{\|} \sigma'(u'') \trans[\tau] q' \sim_\mathtt{BB} a \mathbin{\|} p_n$.
Hence, we can distinguish three cases, corresponding to the possible origins of such a transition according to Lemma~\ref{lem:closed2open_tau}:
\begin{enumerate}
\item \textcolor{ForestGreen}{There is a term $w$ such that $u' \mathbin{\|} u'' \trans[\tau] w$.}
This case is vacuous, since $u' \mathbin{\|} u'' \trans[\tau] w$ implies $u \trans[\tau] w$, and, thus, $\sigma(u) \trans[\tau] \sigma(w)$, giving a contradiction with $\sigma(u) \sim_\mathtt{RBB} a \mathbin{\|} p_n$.
\item \textcolor{ForestGreen}{$u' \trans[\alpha] w$ and there is a variable $y \in \mathrm{var}(u'')$ such that $\sigma'(y) \trans[\overline{\alpha}] r_y$, $u' \mathbin{\|} u'' \trans[(y)]_{\overline{\alpha},\tau} c$, and $\sigma'[y \mapsto r_y](c) = q'$.}
(The symmetric case with $u'' \trans[\alpha] w$ and $y \in \mathrm{var}(u')$ can be treated in the same way and it is therefore omitted.)
Notice that if $\alpha \neq a$ then $\sigma(u') \trans[\alpha]$ implies $\sigma(u) \trans[\alpha]$, and thus $\mathrm{init}(\sigma(u)) \neq \mathrm{init}(a \mathbin{\|} p_n)$, giving a contradiction with $\sigma(u) \sim_\mathtt{RBB} a \mathbin{\|} p_n$.
Hence, $\alpha = a$ and, consequently, $\overline{\alpha} = \overline{a}$.
Moreover, if $y \neq x$ then $\sigma'(y) = \sigma(y)$ giving $\sigma(y) \trans[\overline{a}]$ and thus $\sigma(u) \trans[\overline{a}]$.
Also in this case we obtain a contradiction with $\sigma(u) \sim_\mathtt{RBB} a \mathbin{\|} p_n$.
Hence $y = x$.
Since $\sigma'(x) \trans[\overline{a}] a \mathbin{\|} p_n$, we have $\sigma'(u') \mathbin{\|} \sigma'(u'') \trans[\tau] \sigma'(w) \mathbin{\|} (a \mathbin{\|} p_n) \mathbin{\|} \sigma'(w')$ (where the general term $w'$ is included since, from $x \in \mathrm{var}(u'')$ and $u'' \trans[(x)]_{\mu}$, we can infer that $u''$ can be written in the general form $((x+w_1) \mathbin{\|} w') + w_2$).
From $\sigma'(w) \mathbin{\|} (a \mathbin{\|} p_n) \mathbin{\|} \sigma'(w') \sim_\mathtt{BB} a \mathbin{\|} p_n$ it follows (reasoning on the depth of terms) that $\sigma'(w) \sim_\mathtt{BB} \mathbf{0} \sim_\mathtt{BB} \sigma'(w')$ and, in particular, $\sigma'(w) = \sigma(w)$ and $\sigma'(w') = \sigma(w')$.
Moreover, as $x \not \in \mathrm{var}(u')$, we have that $\sigma'(u') = \sigma(u')$.
Summarising, so far we have obtained that there is a process $r$ such that $\sigma(u') \trans[a] r$ and $r \sim_\mathtt{BB} \mathbf{0}$, and that $x \in \mathrm{var}(u'')$ with $u'' \trans[(x)]_\mu$.
Next, we notice that $\sigma(u') \trans[a] r$ implies that $\sigma(u_j) \trans[a] r \mathbin{\|} \sigma(u'')$, so that $\sigma(u) \trans[a] r \mathbin{\|} \sigma(u'') \sim_\mathtt{BB} \sigma(u'')$.
From $\sigma(u) \sim_\mathtt{RBB} a \mathbin{\|} p_n$, it then follows that $a \mathbin{\|} p_n \trans[a] r'$ for some process $r'$ such that $r' \sim_\mathtt{BB} \sigma(u'')$.
Given the structure of $a \mathbin{\|} p_n$, we can distinguish two major cases, according to the possible form of $r'$:
\begin{itemize}
\item $r' = a \mathbin{\|} a^{\le i}$ for some $i \in \{2,\dots,n\}$.
We proceed to show that in this case it is not possible to have $r' \sim_\mathtt{BB} \sigma(u'')$.
Recall that, by Equation (\ref{eq:sigma(x)}), $\sigma(x) \sim_\mathtt{RBB} \sum_{k=1}^m aa^{\le i_k}$ for $m>1$ and $2 \le i_1 < \dots < i_m \le n$.
As $u'' \trans[(x)]_a$, by Lemma~\ref{lem:var_to_term} we have that $\sigma(u'') \trans[a] a^{\le i_1}$ and $\sigma(u'') \trans[a] a^{\le i_m}$
(where no parallel composition is considered because we have seen in the analysis above that any potential parallel component would be branching bisimilar to $\mathbf{0}$).
In particular, $i_1 < i_m$ gives $a^{\le i_1} \not\sim_\mathtt{BB} a^{\le i_m}$ as the two processes have different depths (besides different branching structures).
On the one hand, we have that $r' \trans[a] a \mathbin{\|} a^j$ for some $j \in \{0,\dots,i-1\}$, and $a \mathbin{\|} a^j \sim_\mathtt{BB} a^{j+1} \not\sim_\mathtt{BB} a^{\le i_1},a^{\le i_m}$ for any $j \in \{0,\dots,i-1\}$ and $i_1,i_m \ge 2$.
On the other hand, $r' \trans[a] \mathbf{0} \mathbin{\|} a^{\le i} \sim_\mathtt{BB} a^{\le i}$, but $a^{\le i}$ cannot be branching bisimilar to both $a^{\le i_1}$ and $a^{\le i_m}$.
Hence, $r' \not\sim_\mathtt{BB} \sigma(u'')$ in this case.
\item $r' = \mathbf{0} \mathbin{\|} p_n$.
Then $\sigma(u'') \sim_\mathtt{BB} p_n$.
Notice that $\sigma(u) \sim_\mathtt{RBB} a \mathbin{\|} p_n$ implies $\mathrm{init}(\sigma(u'')) = \{a\}$.
Hence, by Lemma~\ref{lem:same_initials}, $\tau \not\in \mathrm{init}(\sigma(u'')) \cup \mathrm{init}(p_n)$ and $\sigma(u'') \sim_\mathtt{BB} p_n$ imply $\sigma(u'') \sim_\mathtt{RBB} p_n$.
In particular, it follows that $\mathrm{Rdepth}(\sigma(u'')) = n+1$.
Since $\mathrm{Rdepth}(\sigma(u)) = n+2 \ge \mathrm{Rdepth}(\sigma(u_j)) = \mathrm{Rdepth}(\sigma(u')) + \mathrm{Rdepth}(\sigma(u''))$, we obtain that $\mathrm{Rdepth}(\sigma(u')) = 1$.
In addition, $\mathrm{init}(\sigma(u')) = \{a\}$.
It follows that whenever $\sigma(u') \trans[\mu] q'$, for some process $q'$, then $\mu = a$ and $q' \sim_\mathtt{BB} \mathbf{0}$.
Hence, $\sigma(u') \sim_\mathtt{RBB} a$.
Summarising, we have obtained that $\sigma(u_j) = \sigma(u') \mathbin{\|} \sigma(u'') \sim_\mathtt{RBB} a \mathbin{\|} p_n$, and it is thus the summand we were looking for.
\end{itemize}
\item \textcolor{ForestGreen}{There are variables $y,z$ such that $\sigma'(y) \trans[\alpha] r_y$, $\sigma'(z) \trans[\overline{\alpha}] r_z$, $u' \mathbin{\|} u'' \trans[(y,z)]_{\tau} c$, and $\sigma'[y \mapsto r_y, z \mapsto r_z](c) = q'$.}
We can proceed as in the previous case to argue that if both $y$ and $z$ are different from $x$, then we obtain a contradiction with $\sigma(u) \sim_\mathtt{RBB} a \mathbin{\|} p_n$.
Similarly, it cannot be the case that both $y$ and $z$ are equal to $x$, for otherwise they would not be able to synchronise as $\mathrm{init}(\sigma'(x)) = \{\overline{a}\}$.
Hence, we can assume, without loss of generality, that $y \neq x$ and $z = x$.
Thus, $\sigma'(y) = \sigma(y)$ and $\alpha = a$.
We can also assume, without loss of generality, that $y \in \mathrm{var}(u')$ and $x \in \mathrm{var}(u'')$ (having both in one term would give a contradiction either with $q' \sim_\mathtt{BB} a \mathbin{\|} p_n$ or with $u',u'' \not\sim_\mathtt{RBB} \mathbf{0}$).
We can then proceed exactly as in the previous case to show that $\sigma(u') \sim_\mathtt{RBB} a$ and $\sigma(u'') \sim_\mathtt{RBB} p_n$, giving that $u_j$ is the desired summand.
\end{enumerate}
This completes the proof for the case $u_j = u' \mathbin{\|} u''$ for some terms $u',u''$.
\end{enumerate}
This completes the analysis for the case $t_i = t' \mathbin{\|} t''$ for some terms $t',t''$.
\end{enumerate}
The proof of Proposition~\ref{prop:rbb_substitution_case} is now complete.
\end{proof}
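Case (\ref{Case:t-par}) above hinges on the $\tau$-step obtained by letting the $a$ contributed by $\sigma(t')$ synchronise with the $\overline{a}$-prefix that $\sigma'$ substitutes for $x$. A minimal Python sketch of the CCS communication rule confirms that such a synchronisation produces a $\tau$-transition; the tuple encoding of terms and the convention of writing the complement of an action with a leading `~` are hypothetical conveniences, not the paper's notation.

```python
def co(a):
    """Complement of an action name ('a' <-> '~a')."""
    return a[1:] if a.startswith("~") else "~" + a

def steps(p):
    """(action, successor) pairs; parallel components may interleave or
    synchronise on complementary actions, yielding 'tau'."""
    if p[0] == "0":
        return []
    if p[0] == "pre":
        return [(p[1], p[2])]
    if p[0] == "sum":
        return steps(p[1]) + steps(p[2])
    if p[0] == "par":
        l, r = p[1], p[2]
        moves = [(a, ("par", l2, r)) for a, l2 in steps(l)]
        moves += [(a, ("par", l, r2)) for a, r2 in steps(r)]
        moves += [("tau", ("par", l2, r2))         # communication rule
                  for a, l2 in steps(l)
                  for b, r2 in steps(r)
                  if a != "tau" and b != "tau" and b == co(a)]
        return moves
    raise ValueError(p)
```

For example, a stand-in for the shape of $\sigma'(t_i)$, namely $a \mathbin{\|} \overline{a}.b$, has exactly one $\tau$-derivative, $\mathbf{0} \mathbin{\|} b$, mirroring the step $\sigma'(t_i) \trans[\tau] p'$ used in the proof.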
\section{Proof of Theorem~\ref{thm:rbb_preserves_property}}
\thmrbbpreservesproperty*
\begin{proof}
Assume that $\mathcal{E}$ is a finite axiom system over the language CCS that is sound modulo rooted branching bisimilarity.
Recall that, without loss of generality, we may assume that the closed terms involved in the proof of an equation $p \approx q$ have no $\mathbf{0}$ summands or factors (by Proposition~\ref{Propn:proofswithout0_text}, as $\mathcal{E}$ may be assumed to be saturated), and that applications of symmetry happen first in equational proofs
(that is, $\mathcal{E}$ is closed with respect to symmetry).
Hence, by the proviso of the theorem, the following hold, for some closed terms $p$ and $q$ and positive integer $n$ larger than the size of each term in the equations in $\mathcal{E}$:
\begin{enumerate}
\item $\mathcal{E} \vdash p \approx q$,
\item $p \sim_\mathtt{RBB} q \sim_\mathtt{RBB} a \mathbin{\|} p_n$,
\item $p$ and $q$ contain no occurrences of $\mathbf{0}$ as a summand or factor, and
\item $p$ has a summand rooted branching bisimilar to $a \mathbin{\|} p_n$.
\end{enumerate}
We prove that $q$ also has a summand rooted branching bisimilar to $a \mathbin{\|} p_n$ by induction on the depth of the closed proof of the equation $p \approx q$ from $\mathcal{E}$.
We proceed by a case analysis on the last rule used in the proof of $p \approx q$ from $\mathcal{E}$.
The case of reflexivity is trivial, and that of transitivity follows immediately by using the inductive hypothesis twice.
Below we only consider the other possibilities.
\begin{itemize}
\item \textcolor{blue}{\sc Case $\mathcal{E} \vdash p \approx q$, because $\sigma(t) = p$ and $\sigma(u) = q$ for some equation $(t\approx u)\in \mathcal{E}$ and closed substitution $\sigma$}.
Since $\sigma(t) = p$ and $\sigma(u) = q$ have no $\mathbf{0}$ summands or factors, and $n$ is larger than the size of each term mentioned in equations in $\mathcal{E}$, the claim follows by Proposition~\ref{prop:rbb_substitution_case}.
\item \textcolor{blue}{\sc Case $\mathcal{E} \vdash p \approx q$, because $p=\mu p'$ and $q=\mu q'$ for some $p',q'$ such that $\mathcal{E} \vdash p' \approx q'$}.
This case is vacuous because $p=\mu p'\not \sim_\mathtt{RBB} a \mathbin{\|} p_n$, and thus $p$ does not have a summand rooted branching bisimilar to $a \mathbin{\|} p_n$.
\item \textcolor{blue}{\sc Case $\mathcal{E} \vdash p \approx q$, because $p = p'+p''$ and $q = q'+q''$ for some $p',q',p'',q''$ such that $\mathcal{E} \vdash p' \approx q'$ and $\mathcal{E} \vdash p'' \approx q''$}.
Since $p$ has a summand rooted branching bisimilar to $a \mathbin{\|} p_n$, we have that so does either $p'$ or $p''$.
Assume, without loss of generality, that $p'$ has a summand rooted branching bisimilar to $a \mathbin{\|} p_n$.
Since $p$ is rooted branching bisimilar to $a \mathbin{\|} p_n$, so is $p'$.
Using the soundness of $\mathcal{E}$ modulo $\sim_\mathtt{RBB}$, it follows that $q' \sim_\mathtt{RBB} a \mathbin{\|} p_n$.
The inductive hypothesis now yields that $q'$ has a summand rooted branching bisimilar to $a \mathbin{\|} p_n$.
Hence, $q$ has a summand rooted branching bisimilar to $a \mathbin{\|} p_n$, which was to be shown.
\item \textcolor{blue}{\sc Case $\mathcal{E} \vdash p \approx q$, because $p = p' \mathbin{\|} p''$ and $q = q' \mathbin{\|} q''$ for some $p',q',p'',q''$ such that $\mathcal{E} \vdash p' \approx q'$ and $\mathcal{E} \vdash p'' \approx q''$}.
Since the proof involves no uses of $\mathbf{0}$ as a summand or a factor, we have that $p',p''\not\sim_\mathtt{RBB} \mathbf{0}$ and $q',q''\not\sim_\mathtt{RBB} \mathbf{0}$.
It follows that $q$ is a summand of itself.
By our assumptions,
$
a \mathbin{\|} p_n \sim_\mathtt{RBB} q.
$
Therefore we have that $q$ has a summand rooted branching bisimilar to $a \mathbin{\|} p_n$, and we are done.
\end{itemize}
This completes the proof of Theorem~\ref{thm:rbb_preserves_property}.
\end{proof}
\section{Additional properties of terms and auxiliary transitions}
We discuss here some properties of configurations and auxiliary transitions that we omitted from the main text in Section~\ref{sec:configurations_bis} due to space limitations.
The proofs of the results in this section follow by simple inductions on the derivation of transitions and/or on the structure of terms (configurations), and are therefore omitted.
We recall that, thanks to the axioms in Table~\ref{tab:axioms_b}, we can prove the following classic result for $\text{CCS}_{\text{LC}}$ terms, which will be useful in the rest of the paper:
\begin{lemma}
\label{lem:summands}
For every $\text{CCS}_{\text{LC}}$ term $t$ there are terms $t_1,\dots,t_n$ ($n \ge 0$) that do not have $+$ as head operator such that $t \approx \sum_{i = 1}^n t_i$ (by A0--A3).
\end{lemma}
The terms $t_i$ are also called the \emph{summands} of $t$.
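The decomposition in Lemma~\ref{lem:summands} amounts to flattening nested sums while discarding $\mathbf{0}$ summands, as licensed by the sum axioms A0--A3. As an informal illustration (again under a hypothetical tuple encoding of terms, not the paper's syntax), the flattening can be sketched as:

```python
def summands(t):
    """Collect the summands of t: flatten nested '+' nodes and drop 0,
    mirroring the normal form of Lemma lem:summands."""
    if t[0] == "sum":
        return summands(t[1]) + summands(t[2])
    if t[0] == "0":
        return []        # a 0 summand contributes nothing
    return [t]           # t itself does not have + as head operator
```

Each returned term has a head operator other than $+$, and an empty result corresponds to the case $n = 0$ of the lemma.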
Moreover, notice that Lemmas~\ref{lem:c_structure} and~\ref{lem:var_to_term}, presented for CCS configurations, can be extended in a straightforward manner to $\text{CCS}_{\text{LC}}$ terms.
\begin{remark}
For Lemma~\ref{lem:var_to_term}.\ref{item:x_u_to_t}, Lemma~\ref{lem:substitution} guarantees that applying $\sigma$ to the term $t'$ that triggers, together with $x$, the transition $t \trans[(x)]_{\alpha,\tau}$, does not affect the possibility of synchronisation between $\sigma(x)$ and $\sigma(t')$.
\end{remark}
We also underline the following properties of variables $x_\mu \in \mathcal{V}_{\mathcal{A}_\tau}$.
We extend the definition of $\mathrm{var}(c)$ in accordance with the newly introduced set of variables:
$\mathrm{var}(c) = \{x \in \mathcal{V} \mid x \text{ occurs in } c\} \cup \{x_\mu \in \mathcal{V}_{\mathcal{A}_\tau} \mid x_\mu \text{ occurs in } c\}$.
\begin{lemma}
\label{lem:xmu_occurrence}
Let $c \in \mathcal{C}_{\text{LC}}$ and $x \in \mathcal{V}$.
If $c \trans[\ell]_\rho c'$, for some $c'$ and $\ell$ with $x \in \ell$,
then $c'$ is of the form $x_\mu \mathbin{\|} c''$ for some (possibly null) configuration $c''$ (modulo the axioms in Table~\ref{tab:axioms_b}).
Similarly, if $x_\mu \in \mathrm{var}(c)$, then $c$ is of the form $x_\mu \mathbin{\|} c''$ for some (possibly null) configuration $c''$ (modulo the axioms in Table~\ref{tab:axioms_b}).
\end{lemma}
The following two lemmas, that correspond to Lemma~\ref{lem:closed2open_alpha} and Lemma~\ref{lem:closed2open_tau} respectively, allow us to show how a transition $\sigma(t) \trans[\mu] p$ can stem from transitions of the $\text{CCS}_{\text{LC}}$ term $t$ and of the process $\sigma(x)$, for $x \in \mathrm{var}(t)$.
\begin{lemma}
\label{lem:closed2open_alpha_ccslc}
Let $\alpha \in \mathcal{A} \cup \overline{\mathcal{A}}$, $t$ be a $\text{CCS}_{\text{LC}}$ term, $\sigma$ be a closed substitution, and $p$ be a process.
If $\sigma(t) \trans[\alpha] p$, then one of the following holds:
\begin{enumerate}
\item \label{c2o_prefix_ccslc}
There is a $\text{CCS}_{\text{LC}}$ term $t'$ s.t.\ $t \trans[\alpha] t'$ and $\sigma(t') = p$.
\item \label{c2o_x_ccslc}
There are a variable $x$, a process $q$ and a configuration $c$ s.t.\ $\sigma(x) \trans[\alpha] q$, $t \trans[(x)]_\alpha c$, and $\sigma[x_\alpha \mapsto q](c) = p$.
\end{enumerate}
\end{lemma}
\begin{lemma}
\label{lem:closed2open_tau_ccslc}
Let $t$ be a $\text{CCS}_{\text{LC}}$ term, $\sigma$ be a closed substitution, and $p$ be a process.
If $\sigma(t) \trans[\tau] p$, then one of the following holds:
\begin{enumerate}
\item \label{lem:c2o_prefix_ccslc}
There is a $\text{CCS}_{\text{LC}}$ term $t'$ s.t.\ $t \trans[\tau] t'$ and $\sigma(t') = p$.
\item \label{lem:c2o_x_ccslc}
There are a variable $x$, a process $q$, and a configuration $c$ s.t.\ $\sigma(x) \trans[\tau] q$, $t \trans[(x)]_\tau c$, and $\sigma[x_\tau \mapsto q](c) = p$.
\item \label{lem:c2o_xu_ccslc}
There are a variable $x$, a process $q$, and a configuration $c$ s.t., for some $\alpha \in \mathcal{A}\cup\overline{\mathcal{A}}$, $\sigma(x) \trans[\alpha] q$, $t \trans[(x)]_{\alpha,\tau} c$, and $\sigma[x_\alpha \mapsto q](c) = p$.
\item \label{lem:c2o_xy_ccslc}
There are variables $x,y$, processes $q_x,q_y$ and a configuration $c$ s.t., for some $\alpha \in \mathcal{A}\cup\overline{\mathcal{A}}$, $\sigma(x) \trans[\alpha] q_x$, $\sigma(y) \trans[\overline{\alpha}] q_y$, $t \trans[(x,y)]_\tau c$, and $\sigma[x_\alpha \mapsto q_x, y_{\overline{\alpha}} \mapsto q_y](c) = p$.
\end{enumerate}
\end{lemma}
\begin{proof}
The proof follows by induction on the derivation of the transition $\sigma(t) \trans[\tau] p$.
The interesting case is the base case $t = t_1 ~|~ t_2$, for some CCS terms $t_1,t_2$, that we expand below.
According to the operational semantics of the communication merge, a $\tau$-move by $\sigma(t_1 ~|~ t_2)$ can only be generated by a communication between $\sigma(t_1)$ and $\sigma(t_2)$.
We can assume, without loss of generality, that $\sigma(t_1) \trans[\alpha] p_1$ and $\sigma(t_2) \trans[\overline{\alpha}] p_2$, for some processes $p_1,p_2$ such that $p_1 \mathbin{\|} p_2 = p$.
The symmetric case of $\sigma(t_1) \trans[\overline{\alpha}]$ and $\sigma(t_2) \trans[\alpha]$ follows by applying the same arguments we use below, switching the roles of $\sigma(t_1)$ and $\sigma(t_2)$.
By Lemma~\ref{lem:closed2open_alpha_ccslc}, from $\sigma(t_1) \trans[\alpha] p_1$ we can infer that either $t_1 \trans[\alpha] t_1'$ for some term $t_1'$ such that $\sigma(t_1') = p_1$, or there are a variable $x$, a process $q_1$ and a configuration $c_1$ such that $\sigma(x) \trans[\alpha] q_1$, $t_1 \trans[(x)]_\alpha c_1$, and $\sigma[x_\alpha \mapsto q_1](c_1) = p_1$.
Similarly, by Lemma~\ref{lem:closed2open_alpha_ccslc}, from $\sigma(t_2) \trans[\overline{\alpha}] p_2$ we can infer that either $t_2 \trans[\overline{\alpha}] t_2'$ for some term $t_2'$ such that $\sigma(t_2') = p_2$, or there are a variable $y$, a process $q_2$ and a configuration $c_2$ such that $\sigma(y) \trans[\overline{\alpha}] q_2$, $t_2 \trans[(y)]_{\overline{\alpha}} c_2$, and $\sigma[y_{\overline{\alpha}} \mapsto q_2](c_2) = p_2$.
We can then distinguish four cases, depending on how the possible derivations of the transitions of $t_1$ and $t_2$ mentioned above are combined in the derivation of the synchronisation.
\begin{enumerate}
\item Case $t_1 \trans[\alpha] t_1'$ and $t_2 \trans[\overline{\alpha}] t_2'$.
In this case, the operational semantics of $~|~$ allows us to directly infer that $t_1 ~|~ t_2 \trans[\tau] t_1' \mathbin{\|} t_2'$.
Hence we have that there is $t' = t_1' \mathbin{\|} t_2'$ such that $t \trans[\tau] t'$ and $\sigma(t') = p_1 \mathbin{\|} p_2 = p$.
\item Case $\sigma(x) \trans[\alpha] q_1$, $t_1 \trans[(x)]_\alpha c_1$, $\sigma[x_\alpha \mapsto q_1](c_1) = p_1$, and $t_2 \trans[\overline{\alpha}] t_2'$.
As $t_1 \trans[(x)]_\alpha c_1$ and $t_2 \trans[\overline{\alpha}] t_2'$ we can apply the auxiliary rule ($a_5$) and obtain that $t_1 ~|~ t_2 \trans[(x)]_{\alpha,\tau} c_1 \mathbin{\|} t_2'$.
Hence, there are a variable $x$, a process $q_1$ and a configuration $c = c_1 \mathbin{\|} t_2'$ such that $\sigma(x) \trans[\alpha] q_1$, $t \trans[(x)]_{\alpha,\tau} c$ and $\sigma[x_\alpha \mapsto q_1](c) = \sigma[x_\alpha \mapsto q_1](c_1) \mathbin{\|} \sigma(t_2') = p_1 \mathbin{\|} p_2 = p$.
\item Case $t_1 \trans[\alpha] t_1'$, $\sigma(y) \trans[\overline{\alpha}] q_2$, $t_2 \trans[(y)]_{\overline{\alpha}} c_2$, and $\sigma[y_{\overline{\alpha}} \mapsto q_2](c_2) = p_2$.
This case is symmetric to the previous one and follows from the same reasoning, using the auxiliary rule ($a_6$) in place of ($a_5$).
\item Case $\sigma(x) \trans[\alpha] q_1$, $t_1 \trans[(x)]_\alpha c_1$, $\sigma[x_\alpha \mapsto q_1](c_1) = p_1$, $\sigma(y) \trans[\overline{\alpha}] q_2$, $t_2 \trans[(y)]_{\overline{\alpha}} c_2$, and $\sigma[y_{\overline{\alpha}} \mapsto q_2](c_2) = p_2$.
As $t_1 \trans[(x)]_\alpha c_1$ and $t_2 \trans[(y)]_{\overline{\alpha}} c_2$, we can apply the auxiliary rule ($a_4$) and thus obtain $t_1 ~|~ t_2 \trans[(x,y)]_{\tau} c_1 \mathbin{\|} c_2$.
Hence we have obtained that there are variables $x,y$, processes $q_1,q_2$ and a configuration $c = c_1 \mathbin{\|} c_2$ such that $\sigma(x) \trans[\alpha] q_1$, $\sigma(y) \trans[\overline{\alpha}] q_2$, $t \trans[(x,y)]_\tau c$, and $\sigma[x_\alpha \mapsto q_1, y_{\overline{\alpha}} \mapsto q_2](c) =
\sigma[x_\alpha \mapsto q_1](c_1) \mathbin{\|} \sigma[y_{\overline{\alpha}} \mapsto q_2](c_2) =
p_1 \mathbin{\|} p_2 = p$, where we can distribute the substitutions over $\mathbin{\|}$ since $x_\alpha$ and $y_{\overline{\alpha}}$ are unique by construction (even if $x = y$, the two subscripts allow us to distinguish them).
\end{enumerate}
\end{proof}
\begin{remark}
\label{rmk:single_step}
It is worth noticing that in Lemmas~\ref{lem:closed2open_alpha_ccslc} and~\ref{lem:closed2open_tau_ccslc}, we only consider the derivation of a \emph{single computation step}.
In particular, each (occurrence of a) variable can trigger at most one (possibly nondeterministic) computation step of a term.
In other words, we cannot use a single (occurrence of a) variable to derive a sequence of transitions.
This step-by-step approach applies to substitutions as well.
Consider the term $x \mathbin{\|} x$ and let $\sigma$ be a closed substitution such that $\sigma(x) = \mu.p_1 + \mu.p_2$, with $p_1 \neq p_2$.
It is immediate that $\sigma(x) \mathbin{\|} \sigma(x) \trans[\mu] p_1 \mathbin{\|} \sigma(x) \trans[\mu] p_1 \mathbin{\|} p_2$ is a valid computation.
At the same time, we have that $x \mathbin{\|} x \trans[x] x_{\mathrm{d}} \mathbin{\|} x$, and $x_{\mathrm{d}} \mathbin{\|} x \trans[x] x_\mathrm{d} \mathbin{\|} x_{\mathrm{d}}$.
Notice that there is no way to distinguish the two occurrences of $x_\mathrm{d}$ in the last term.
Thus, the substitution $\sigma[x_\mathrm{d} \mapsto p_1, x_\mathrm{d} \mapsto p_2](x_\mathrm{d} \mathbin{\|} x_\mathrm{d})$ is not well defined.
However, this is not a problem, since the substitution is applied at each step, thus $\sigma(x) \mathbin{\|} \sigma(x) \trans[\mu] \sigma[x_\mathrm{d} \mapsto p_1] (x_\mathrm{d} \mathbin{\|} x) = p_1 \mathbin{\|} \sigma(x)$ and $p_1 \mathbin{\|} \sigma(x) \trans[\mu] \sigma[x_\mathrm{d} \mapsto p_2](p_1 \mathbin{\|} x_\mathrm{d}) = p_1 \mathbin{\|} p_2$.
\end{remark}
\section{Proof of Theorem~\ref{thm:bb_on_open}}
Before proceeding to the proof, we list some properties that can be directly inferred from the operational semantics given in Tables~\ref{tab:sos_rules},~\ref{tab:sos_rules_ccslc},~\ref{tab:ell_rules_ccslc}, and~\ref{tab:c_rules}.
They can be seen as an extension of Lemma~\ref{lem:var_to_term} from terms to configurations.
\begin{lemma}
\label{lem:config_properties}
Let $c$ be a $\text{CCS}_{\text{LC}}$ configuration and $\sigma$ be a closed substitution.
Then:
\begin{enumerate}
\item For all $\mu \in \mathcal{A}_\tau$, whenever $c \trans[\mu] c'$ for some configuration $c'$, then $\sigma(c) \trans[\mu] \sigma(c')$.
\item For all variables $x,y \in \mathcal{V}$:
\begin{enumerate}
\item For all $\mu \in \mathcal{A}_\tau$, whenever $c \trans[(x)]_\mu c'$ for some configuration $c'$, then $\sigma(x) \trans[\mu] p_x$ implies $\sigma(c) \trans[\mu] \sigma[x_\mu \mapsto p_x](c')$.
\item Whenever $c \trans[(x,y)]_\tau c'$ for some configuration $c'$, then $\sigma(x) \trans[\alpha] p_x$ and $\sigma(y) \trans[\overline{\alpha}] p_y$ imply $\sigma(c) \trans[\tau] \sigma[x_\alpha \mapsto p_x, y_{\overline{\alpha}} \mapsto p_y](c')$, for any $\alpha \in \mathcal{A}\cup\overline{\mathcal{A}}$.
\item For all $\alpha \in \mathcal{A}\cup\overline{\mathcal{A}}$, whenever $c \trans[(x)]_{\alpha,\tau} c'$ for some configuration $c'$, then $\sigma(x) \trans[\alpha] p_x$ implies $\sigma(c) \trans[\tau] \sigma[x_\alpha \mapsto p_x](c')$.
\end{enumerate}
\item Whenever $c \trans[x_\mu] c'$ for some configuration $c'$, then $c \not\sim_\mathtt{BB} \mathbf{0}$, and $\sigma(c) = \sigma(c')$ for all closed substitutions $\sigma$.
\end{enumerate}
\end{lemma}
The following lemma proves a fundamental, albeit immediate, property of branching bisimilar configurations: they contain the same variables.
\begin{lemma}
\label{lem:same_variables}
Assume that $c_1 \sim_\mathtt{BB} c_2$.
Then, given any variable $x \in \mathcal{V}$, $x \in \mathrm{var}(c_1)$ if{f} $x \in \mathrm{var}(c_2)$.
Similarly, for any $\mu \in \mathcal{A}_\tau$, $x_\mu \in \mathrm{var}(c_1)$ if{f} $x_\mu \in \mathrm{var}(c_2)$.
\end{lemma}
Moreover, we extend the notion of \emph{derivative} to configurations.
\begin{definition}
[Derivative]
For a configuration $c$, the set of \emph{derivatives} of $c$, notation $\mathrm{der}(c)$, is the least set containing $c$ that is closed under $\twoheadrightarrow$, i.e., the least set satisfying:
\begin{itemize}
\item $c \in \mathrm{der}(c)$, and
\item if $c' \in \mathrm{der}(c)$ and $c' \overset{\xi}\twoheadrightarrow c''$, for some label $\xi$, then $c'' \in \mathrm{der}(c)$.
\end{itemize}
\end{definition}
The following lemma can be proved by induction over the structure of configurations (noticing that $\mathrm{der}(x_\mathrm{d}) = \{x_\mathrm{d}\}$ and $\mathrm{der}(x_\mu) = \{x_\mu\}$, for all $x_\mathrm{d} \in \mathcal{V}_\mathrm{d}$ and $x_\mu \in \mathcal{V}_{\mathcal{A}_\tau}$).
\begin{lemma}
\label{lem:der_finite}
For every $\text{CCS}_{\text{LC}}$ configuration $c$, the set $\mathrm{der}(c)$ is finite.
\end{lemma}
We can now prove Theorem~\ref{thm:bb_on_open}.
\thmbbonopen*
\begin{proof}
We prove the statement only for branching bisimilarity $\sim_\mathtt{BB}$.
The proofs for the other equivalences can be carried out in a similar fashion.
We prove the two implications separately.
$(\Rightarrow)$.
Assume that $c_1 \sim_\mathtt{BB} c_2$.
Our aim is to show that $\sigma(c_1) \sim_\mathtt{BB} \sigma(c_2)$ for all closed substitutions $\sigma$.
To this end it is enough to prove that the relation
\[
\,{\mathcal R}\, = \{ (\sigma(c), \sigma(c')) \mid c \sim_\mathtt{BB} c', \sigma \text{ a closed substitution}\}
\]
is a branching bisimulation.
This is an immediate consequence of Lemma~\ref{lem:substitution}, Lemma~\ref{lem:config_properties} and Definition~\ref{def:open_bb}.\\
$(\Leftarrow)$.
Assume now that $\sigma(c_1) \sim_\mathtt{BB} \sigma(c_2)$ for all closed substitutions $\sigma$.
We aim to show that $c_1 \sim_\mathtt{BB} c_2$ according to Definition~\ref{def:open_bb}.
To this end we proceed by induction on the number of variables occurring in $c_1$ and $c_2$, i.e. on $|\mathrm{var}(c_1) \cup \mathrm{var}(c_2)|$.
\begin{itemize}
\item Base case: $\mathrm{var}(c_1) \cup \mathrm{var}(c_2) = \emptyset$.
In this case, we have that $c_1$ and $c_2$ are closed CCS terms, giving that $c_i = \sigma(c_i)$ for all closed substitutions $\sigma$, $i = 1,2$, and also that all the transitions from $c_1,c_2$ and their derivatives are of the form $\trans[\mu]$ for some $\mu \in \mathcal{A}_\tau$.
It is then immediate to verify that $c_1 \sim_\mathtt{BB} c_2$.
\item Inductive step: $\mathrm{var}(c_1) \cup \mathrm{var}(c_2) \neq \emptyset$.
This means that there is at least one variable $x \in \mathcal{V}$ or $y_\mu \in \mathcal{V}_{\mathcal{A}_\tau}$ in the union.
Assume that we actually have both, i.e., $x,y_\mu \in \mathrm{var}(c_1) \cup \mathrm{var}(c_2)$.
This assumption causes no loss of generality: the cases in which only $x$ or only $y_\mu$ occurs in $\mathrm{var}(c_1) \cup \mathrm{var}(c_2)$ can be easily obtained from the combined case that we present here.
Let $\alpha \in \mathcal{A}\cup\overline{\mathcal{A}}$.
It is immediate to verify that, for $n,m > 0$, $\alpha^n \sim_\mathtt{BB} \alpha^m$ if{f} $n = m$ (notice that $\alpha \neq \tau$).
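For completeness, we sketch the standard argument behind this claim: since $\alpha \neq \tau$, no $\alpha$-transition can be mimicked by silent moves, so if $m < n$ then matching the $m$ consecutive $\alpha$-transitions of $\alpha^m$ would leave the pair
\[
\alpha^{n-m} \;\sim_\mathtt{BB}\; \mathbf{0},
\]
which is impossible, as $\alpha^{n-m} \trans[\alpha]$ whereas $\mathbf{0}$ affords no transitions.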
Since the set of derivatives of $c_1$ and $c_2$ are finite (Lemma~\ref{lem:der_finite}), we can find $n > 0$ such that
\begin{equation}
\label{eq:alphan_not_bb_c}
\alpha^n \not\sim_\mathtt{BB} c \quad \text{ for each } c \in \mathrm{der}(c_1) \cup \mathrm{der}(c_2).
\end{equation}
In particular, this implies that
\begin{equation}
\label{eq:alphan_not_bb_c_sub}
\begin{split}
& \alpha^n \not\sim_\mathtt{BB} [x \mapsto \alpha^{n+2}](c), \\
& \alpha^n \not\sim_\mathtt{BB} [y_\mu \mapsto \alpha^{n+1}](c), \text{ and} \\
& \alpha^n \not\sim_\mathtt{BB} [x \mapsto \alpha^{n+2}, y_\mu \mapsto \alpha^{n+1}](c).
\end{split}
\end{equation}
In fact, we have that:
\begin{itemize}
\item If $x,y_\mu \not \in \mathrm{var}(c)$, then $[x \mapsto \alpha^{n+2}](c) = [y_\mu \mapsto \alpha^{n+1}](c) = [x \mapsto \alpha^{n+2}, y_\mu \mapsto \alpha^{n+1}](c) = c$, and the relation immediately follows from Equation (\ref{eq:alphan_not_bb_c}).
The same reasoning applies to all derivatives $c$ such that $c \ntrans[\ell]_\rho $ for any $\ell,\rho$ with $x \in \ell$, and for all those such that $c \ntrans[y_\mu]$.
\item If $y_\mu \in \mathrm{var}(c)$, then $c$ is of the form $y_\mu \mathbin{\|} c'$ for some configuration $c'$ (Lemma~\ref{lem:xmu_occurrence}), and it then immediate to verify that $[y_\mu \mapsto \alpha^{n+1}](c)$ can perform a sequence of $n+1$ $\alpha$-moves, whereas $\alpha^n$ can perform at most $n$ of such transitions.
\item If $x \in \mathrm{var}(c)$, then the only interesting case to analyse is that of an occurrence of $x$ in $c$ within the scope of communication merge.
All other cases can be treated as done for $y_\mu$.
We remark here that a potential occurrence of $x$ in the scope of $~|~$ is precisely the reason why $x$ is mapped to $\alpha^{n+2}$ whereas $y_\mu$ is mapped to $\alpha^{n+1}$.
According to the operational semantics in Table~\ref{tab:ell_rules_ccslc}, we can distinguish three cases:
\begin{itemize}
\item $c \trans[(x,y)]_\tau c'$ for some variable $y$ and configuration $c'$.
Hence, the occurrence of $x$ is in a subterm of the form $x ~|~ y$.
Consequently, no synchronisation can occur in $[x \mapsto \alpha^{n+2}](c)$ between $\alpha^{n+2}$ and $y$, and this case can be treated as the case in which $x \not \in \mathrm{var}(c)$.
\item $c \trans[(x)]_{\beta,\tau} c'$ for some configuration $c'$ and $\beta \neq \alpha$.
Hence, $x$ occurs in a subterm of the form $x ~|~ \overline{\beta}.t$ for some $t$, and we can proceed as in the previous case.
\item $c \trans[(x)]_{\alpha,\tau} c'$ for some configuration $c'$.
The occurrence of $x$ is then in a subterm of the form $x ~|~ \overline{\alpha}.t$.
Hence, in this case $[x \mapsto \alpha^{n+2}](c)$ has a derivative (the one obtained from the synchronisation of $\alpha^{n+2}$ with $\overline{\alpha}.t$) that can perform a trace with at least $n+1$ $\alpha$-transitions.
Clearly, $\alpha^n$ has no such derivative.
\end{itemize}
\end{itemize}
We now notice that for every closed substitution $\sigma$ it holds that
\[
\sigma[x \mapsto \alpha^{n+2}, y_\mu \mapsto \alpha^{n+1}](c_1)
\sim_\mathtt{BB}
\sigma[x \mapsto \alpha^{n+2}, y_\mu \mapsto \alpha^{n+1}](c_2).
\]
Since
\[
\left|
\begin{array}{c}
\mathrm{var}([x \mapsto \alpha^{n+2}, y_\mu \mapsto \alpha^{n+1}](c_1))\; \cup \\ \mathrm{var}([x \mapsto \alpha^{n+2}, y_\mu \mapsto \alpha^{n+1}](c_2))
\end{array}
\right|
< |\mathrm{var}(c_1) \cup \mathrm{var}(c_2)|,
\]
we can apply the inductive hypothesis and obtain that
\begin{equation}
\label{eq:derivatives_all_bb}
[x \mapsto \alpha^{n+2}, y_\mu \mapsto \alpha^{n+1}](c_1)
\sim_\mathtt{BB}
[x \mapsto \alpha^{n+2}, y_\mu \mapsto \alpha^{n+1}](c_2).
\end{equation}
Hence, to conclude our proof, we proceed to show that this implies that $c_1 \sim_\mathtt{BB} c_2$.
This can be done simply by showing that the relation
\begin{align*}
\,{\mathcal R}\, = \{ (c,c') \mid & (c,c') \in \mathrm{der}(c_1) \times \mathrm{der}(c_2) \text{ and } \\
& [x \mapsto \alpha^{n+2}, y_\mu \mapsto \alpha^{n+1}](c) \sim_\mathtt{BB} [x \mapsto \alpha^{n+2}, y_\mu \mapsto \alpha^{n+1}](c')\}
\end{align*}
is a branching bisimulation.
This follows from Equation (\ref{eq:derivatives_all_bb}), Lemma~\ref{lem:substitution}, Lemma~\ref{lem:config_properties} and Equation (\ref{eq:alphan_not_bb_c_sub}), which ensures that whenever $c \,{\mathcal R}\, c'$, then $x \in \mathrm{var}(c)$ if{f} $x \in \mathrm{var}(c')$ (and similarly for $y_\mu$).
\end{itemize}
\end{proof}
\section{Proof of Proposition~\ref{prop:rbb_nf_ccslc}}
Since we defined $\sim_\mathtt{RBB}$ over the set of $\text{CCS}_{\text{LC}}$ configurations (which include $\text{CCS}_{\text{LC}}$ terms), before proceeding we prove that, in equational proofs from $\mathcal{E}_\mathtt{RBB}$, terms are never transformed into configurations: if a variable $x_\mu$ is not introduced in the proof by a substitution instance of an axiom, then it cannot be introduced by any application of an axiom or of a rule of equational logic.
Hence, we can prove our results on the set of $\text{CCS}_{\text{LC}}$ terms (thus simplifying the technical development).
\begin{lemma}
\label{lem:only_terms_ccslc}
\begin{enumerate}
\item \label{lem:only_terms_ccslc_1}
If $\mathcal{E}_\mathtt{RBB} \vdash t \approx u$ and $t$ is a $\text{CCS}_{\text{LC}}$ term (i.e. it does not contain any occurrence of a variable from $\mathcal{V}_{\mathcal{A}_\tau}$), then also $u$ is a $\text{CCS}_{\text{LC}}$ term.
\item \label{lem:only_terms_ccslc_2}
If $\mathcal{E}_\mathtt{RBB} \vdash t \approx u$ and $t$ is a $\text{CCS}_{\text{LC}}$ term, then a proof of $t \approx u$ from $\mathcal{E}_\mathtt{RBB}$ uses only equations over $\text{CCS}_{\text{LC}}$ terms.
\end{enumerate}
\end{lemma}
\begin{proof}
The first item directly follows from the soundness of $\mathcal{E}_\mathtt{RBB}$ modulo $\sim_\mathtt{RBB}$.
In fact, as $t \approx u$ implies $t \sim_\mathtt{RBB} u$, by Lemma~\ref{lem:same_variables} we get that $\mathrm{var}(t) = \mathrm{var}(u)$.
Hence, $\mathrm{var}(t) \cap \mathcal{V}_{\mathcal{A}_\tau} = \emptyset$ implies that $\mathrm{var}(u) \cap \mathcal{V}_{\mathcal{A}_\tau} = \emptyset$ as well.
Let us now deal with the second item.
First of all, we notice that since $t$ is a $\text{CCS}_{\text{LC}}$ term, we can apply Lemma~\ref{lem:only_terms_ccslc}.\ref{lem:only_terms_ccslc_1} and obtain that $u$ is a $\text{CCS}_{\text{LC}}$ term as well.
The proof then proceeds by induction on the length of the proof of $t \approx u$ from $\mathcal{E}_\mathtt{RBB}$, where the inductive step is carried out by a case analysis on the last rule of equational logic that is used in the proof.
The proof is standard and therefore we omit it.
We only want to point out that the only way in which a variable $x_\mu \in \mathcal{V}_{\mathcal{A}_\tau}$ can occur in an equational proof is by means of an application of a substitution.
However, we remark that, in our setting, substitutions can only be applied to axioms in equational proofs.
We can then distinguish two cases:
\begin{itemize}
\item Either no substitution instance happen in the proof for $t \approx u$, or it happens, but it does not map any variable into a configuration containing an occurrence of a variable in $\mathcal{V}_{\mathcal{A}_\tau}$.
In this case, the proof follows by induction.
\item A substitution instance is applied and it introduces a configuration with (at least) an occurrence of a variable $x_\mu$ from $\mathcal{V}_{\mathcal{A}_\tau}$.
Then we can prove that $x_\mu$ is not removed by the rules of equational logic.
More precisely, one can show that if $\mathcal{E}_\mathtt{RBB} \vdash c_1 \approx c_2$ for some configurations $c_1,c_2$, and $x_\mu \in \mathrm{var}(c_1)$, then, not only $x_\mu \in \mathrm{var}(c_2)$, but $x_\mu$ occurs in all the equations in the equational proof.
This property can be proved by induction on the length of the proof of $c_1 \approx c_2$ from $\mathcal{E}_\mathtt{RBB}$.
Hence, in this case we get a contradiction with the proviso of the lemma stating that $t$ is a $\text{CCS}_{\text{LC}}$ term.
\end{itemize}
\end{proof}
The property of terms in Remark~\ref{rmk:summands} can be extended to normal forms: every normal form can be rewritten modulo A0--A3 as a summation of simple normal forms.
\begin{lemma}
\label{lem:simple_sums}
For each normal form $N$ there is a sequence (possibly empty) of simple normal forms $S_1,\dots,S_n$ such that $N \approx \sum_{i=1}^n S_i$ (by A0--A3).
\end{lemma}
\begin{remark}
\label{rmk:rbb_nf_ccslc}
Notice that variables in $\mathcal{V}_{\mathcal{A}_\tau}$ cannot occur in normal forms.
\end{remark}
Using $\mathcal{E}_\mathtt{RBB}$, each term can be proven equal to a normal form.
\proprbbnfccslc*
\begin{proof}
The proof can be carried out by induction over the size of the term, exactly as in the proof of Lemma 4.4 in \cite{AFIL09}, and therefore we omit it.
\end{proof}
\begin{remark}
As a direct consequence of Proposition~\ref{prop:rbb_nf_ccslc}, we can henceforth assume that each $\text{CCS}_{\text{LC}}$ term $t$ can be expressed in the general form
\[
t \approx
\sum_{i \in I} \mu_i.N_i +
\sum_{j \in J} x_j \lmerge N_j +
\sum_{h \in H} (x_h ~|~ \alpha_h) \lmerge N_h +
\sum_{k \in K} (x_k ~|~ x'_k) \lmerge N_k
.
\]
\end{remark}
\section{Proof of Proposition~\ref{prop:rbb_provable_ccslc}}
Before proceeding to the proof, we present an auxiliary result.
\begin{lemma}
\label{lem:cancellation_xmu}
Let $x_\mu \in \mathcal{V}_{\mathcal{A}_\tau}$.
Let $c_1,c_2$ be two configurations such that either $x_\mu \not\in \mathrm{var}(c_1) \cup \mathrm{var}(c_2)$, or $x_\mu \in \mathrm{var}(c_1) \cap \mathrm{var}(c_2)$.
If $x_\mu \mathbin{\|} c_1 \sim_\mathtt{BB} x_\mu \mathbin{\|} c_2$, then $c_1 \sim_\mathtt{BB} c_2$.
\end{lemma}
\begin{proof}
To prove the statement it is enough to show that the relation
\[
\,{\mathcal R}\, = \{(c,c') \mid x_\mu \mathbin{\|} c \sim_\mathtt{BB} x_\mu \mathbin{\|} c' \text{ and either } x_\mu \not\in \mathrm{var}(c) \cup \mathrm{var}(c') \text{, or } x_\mu \in \mathrm{var}(c) \cap \mathrm{var}(c')\}
\]
is a branching bisimulation.
Let $(c_1,c_2) \in \,{\mathcal R}\,$.
We consider first the case of $c_1,c_2$ such that $x_\mu \not \in \mathrm{var}(c_1) \cup \mathrm{var}(c_2)$.
Notice that, in this case, there is no $c_1'$ such that $c_1 \trans[\varepsilon] c_1' \trans[x_\mu]$, and no $c_2'$ such that $c_2 \trans[\varepsilon] c_2' \trans[x_\mu]$.
Assume that $c_1 \overset{\xi}\twoheadrightarrow c_1'$.
Clearly, this gives $x_\mu \mathbin{\|} c_1 \trans[\xi] x_\mu \mathbin{\|} c_1'$.
We proceed by a case analysis on the possible forms of $\xi$ and on how $x_\mu \mathbin{\|} c_2$ matches the transition from $x_\mu \mathbin{\|} c_1$, according to Definition~\ref{def:open_bb}:
\begin{itemize}
\item $\xi = \tau$ and $x_\mu \mathbin{\|} c_1' \sim_\mathtt{BB} x_\mu \mathbin{\|} c_2$.
Clearly, in this case, $x_\mu \not \in \mathrm{var}(c_1') \cup \mathrm{var}(c_2)$, and thus $(c_1',c_2) \in \,{\mathcal R}\,$.
\item $c_1 \overset{\xi}\twoheadrightarrow c_1'$ either because $c_1 \trans[\mu] c_1'$ for some $\mu \in \mathcal{A}_\tau$, or because $c_1 \trans[\ell]_\rho c_1'$ for some $\ell$ with either $x \not\in \ell$, or $x \in \ell$ but $\rho \neq \mu,(\mu,\tau)$.
(Notice that this includes also the case of $\xi = \tau$ and $x_\mu \mathbin{\|} c_2$ matching the $\tau$-transition from $x_\mu \mathbin{\|} c_1$ through a sequence of silent moves.)
Then $x_\mu \mathbin{\|} c_1 \overset{\xi}\twoheadrightarrow x_\mu \mathbin{\|} c_1'$, and $x_\mu \mathbin{\|} c_1 \sim_\mathtt{BB} x_\mu \mathbin{\|} c_2$ implies that there are configurations $r,r'$ such that $x_\mu \mathbin{\|} c_2 \trans[\varepsilon] r \overset{\xi}\twoheadrightarrow r'$, $r \sim_\mathtt{BB} x_\mu \mathbin{\|} c_1$, and $r' \sim_\mathtt{BB} x_\mu \mathbin{\|} c_1'$.
Since none of the transitions in the sequence $x_\mu \mathbin{\|} c_2 \trans[\varepsilon] r \overset{\xi}\twoheadrightarrow r'$ can be performed by $x_\mu$, we have that there are $c_2',c_2''$ such that $c_2 \trans[\varepsilon] c_2' \overset{\xi}\twoheadrightarrow c_2''$, $r = x_\mu \mathbin{\|} c_2'$ and $r' = x_\mu \mathbin{\|} c_2''$.
Moreover, from the considerations above, we can also infer that $x_\mu \not\in \mathrm{var}(c_2') \cup \mathrm{var}(c_2'')$.
Therefore, we have obtained that there are configurations $c_2',c_2''$ such that $c_2 \trans[\varepsilon] c_2' \overset{\xi}\twoheadrightarrow c_2''$, $(c_1,c_2') \in \,{\mathcal R}\,$, and $(c_1',c_2'') \in \,{\mathcal R}\,$.
\item $c_1 \overset{\xi}\twoheadrightarrow c_1'$ because $c_1 \trans[\ell]_\rho c_1'$ with $x \in \ell$ and $\rho\in\{\mu,(\mu,\tau)\}$.
Then $c_1 \overset{\xi}\twoheadrightarrow c_1'$ where, by Lemma~\ref{lem:xmu_occurrence}, $c_1'$ is of the form $x_\mu \mathbin{\|} c$ for some (possibly null) configuration $c$.
As $x_\mu \mathbin{\|} c_1 \sim_\mathtt{BB} x_\mu \mathbin{\|} c_2$, it follows that $x_\mu \mathbin{\|} c_2 \trans[\varepsilon] r \overset{\xi}\twoheadrightarrow r'$ for some configurations $r,r'$ such that $r \sim_\mathtt{BB} x_\mu \mathbin{\|} c_1$ and $r' \sim_\mathtt{BB} x_\mu \mathbin{\|} c_1'$.
Also in this case, none of the transitions in the sequence $x_\mu \mathbin{\|} c_2 \trans[\varepsilon] r \overset{\xi}\twoheadrightarrow r'$ can be due to $x_\mu$.
Hence, there are configurations $c_2',c_2''$ such that $c_2 \trans[\varepsilon] c_2' \overset{\xi}\twoheadrightarrow c_2''$, $r = x_\mu \mathbin{\|} c_2'$, and $r' = x_\mu \mathbin{\|} c_2''$.
Moreover, we notice that $x_\mu \not \in \mathrm{var}(c_2')$, whereas, by Lemma~\ref{lem:xmu_occurrence}, $x_\mu \in \mathrm{var}(c_2'')$.
Summarising, we have obtained that there are configurations $c_2',c_2''$ such that $c_2 \trans[\varepsilon] c_2' \overset{\xi}\twoheadrightarrow c_2''$, $(c_1,c_2') \in \,{\mathcal R}\,$, and $(c_1',c_2'') \in \,{\mathcal R}\,$.
\end{itemize}
Symmetrically, we can prove that any transition from $c_2$ is matched by $c_1$ according to the definition of branching bisimilarity.
Consider now the case in which $c_1 \overset{\xi}\twoheadrightarrow c_1'$ and $x_\mu \in \mathrm{var}(c_1) \cap \mathrm{var}(c_2)$.
The proof follows as in the previous case, by noticing that, by the operational semantics of $\mathbin{\|}$ defined in Tables~\ref{tab:sos_rules_ccslc},~\ref{tab:ell_rules_ccslc}, and~\ref{tab:c_rules}, for any configuration $c$ it holds that $x_\mu \in \mathrm{var}(c)$ implies $x_\mu \in \mathrm{var}(c')$ for all $c' \in \mathrm{der}(c)$.
Moreover, there is an extra case that we need to consider in the case analysis over the possible forms of the label $\xi$, namely:
\begin{itemize}
\item $\xi = x_\mu$.
This case is trivial as $x_\mu \in \mathrm{var}(c_i)$ implies $c_i \trans[x_\mu] c_i$, for $i \in \{1,2\}$.
\end{itemize}
\end{proof}
\begin{remark}
\label{rmk:index_notation}
We are aware that, in the following proof, the notation used for the sets of indexes is a bit heavy.
We did our best to explain every step of the proof so that the reader always has a clear idea of what we are doing.
\end{remark}
\proprbbprovableccslc*
\begin{proof}
By Proposition~\ref{prop:rbb_nf_ccslc} and Lemma~\ref{lem:only_terms_ccslc} it is enough to prove the statement for normal forms.
So let $t$ and $u$ be branching bisimilar normal forms.
In particular we can write them in the general forms
\begin{align*}
t \approx{} &
\sum_{i \in I} \mu_i.N_i +
\sum_{j \in J} x_j \lmerge N_j +
\sum_{h \in H} (x_h ~|~ \alpha_h) \lmerge N_h +
\sum_{k \in K} (x_k ~|~ x'_k) \lmerge N_k
\\
u \approx{} &
\sum_{\bar{i} \in \bar{I}} \nu_{\bar{i}}.M_{\bar{i}} +
\sum_{\bar{j} \in \bar{J}} y_{\bar{j}} \lmerge M_{\bar{j}} +
\sum_{\bar{h} \in \bar{H}} (y_{\bar{h}} ~|~ \beta_{\bar{h}}) \lmerge M_{\bar{h}} +
\sum_{\bar{k} \in \bar{K}} (y_{\bar{k}} ~|~ y'_{\bar{k}}) \lmerge M_{\bar{k}},
\end{align*}
where all the $N_i,N_j,N_h,N_k,M_{\bar{i}},M_{\bar{j}},M_{\bar{h}},M_{\bar{k}}$ are themselves in normal form.
We proceed to prove that $\mathcal{E}_\mathtt{RBB} \vdash \mu.t \approx \mu.u$ by induction over the sum of the sizes of $t$ and $u$.
As $t \sim_\mathtt{BB} u$, we can distinguish three cases, according to the form of the actions $\mu_i,\nu_{\bar{i}}$, for $i \in I, \bar{i} \in \bar{I}$.
\begin{enumerate}
\item\label{case:uno} There is no $i \in I$ such that $\mu_i = \tau$ and $N_i \sim_\mathtt{BB} u$, and there is no $\bar{i} \in \bar{I}$ such that $\nu_{\bar{i}} = \tau$ and $t \sim_\mathtt{BB} M_{\bar{i}}$.
In this case, as $t \sim_\mathtt{BB} u$, each transition $t \overset{\xi}\twoheadrightarrow t'$ must be matched by a transition $u \overset{\xi}\twoheadrightarrow u'$ for some $u'$ such that $t' \sim_\mathtt{BB} u'$.
Given the characterisation of the semantics of terms, and the definition of $\sim_\mathtt{BB}$ (Definition~\ref{def:open_bb}), we can infer the following:
\begin{itemize}
\item For each summand $\mu_i.N_i$ there is a summand $\nu_{\bar{i}}.M_{\bar{i}}$ such that $\nu_{\bar{i}} = \mu_i$ and $N_i \sim_\mathtt{BB} M_{\bar{i}}$.
Symmetrically, for each summand $\nu_{\bar{i}}.M_{\bar{i}}$ there is a summand $\mu_i.N_i$ such that $\mu_i = \nu_{\bar{i}}$ and $N_i \sim_\mathtt{BB} M_{\bar{i}}$.
As the sum of the sizes of $N_i$ and $M_{\bar{i}}$ is strictly smaller than $\mathrm{size}(t)+\mathrm{size}(u)$, by induction we get that
\begin{equation}
\label{eq:rbb_provable_I}
\mathcal{E}_\mathtt{RBB} \vdash \mu_i.N_i \approx \mu_i.M_{\bar{i}} = \nu_{\bar{i}}.M_{\bar{i}}.
\end{equation}
\item For each $k \in K$ there is a $\bar{k} \in \bar{K}$ such that, for any $\alpha \in \mathcal{A}\cup \overline{\mathcal{A}}$:
\begin{enumerate}
\item $t \trans[(x_k,x'_k)]_\tau x_{k,\alpha} \mathbin{\|} x'_{k,\overline{\alpha}} \mathbin{\|} N_k$;
\item $u \trans[(y_{\bar{k}},y'_{\bar{k}})]_\tau y_{\bar{k},\alpha} \mathbin{\|} y'_{\bar{k},\overline{\alpha}} \mathbin{\|} M_{\bar{k}}$;
\item\label{item:tre} $x_k ~|~ x'_k = y_{\bar{k}} ~|~ y'_{\bar{k}}$ modulo C1;
\item\label{item:quattro} $x_{k,\alpha} \mathbin{\|} x'_{k,\overline{\alpha}} \mathbin{\|} N_k \sim_\mathtt{BB} y_{\bar{k},\alpha} \mathbin{\|} y'_{\bar{k},\overline{\alpha}} \mathbin{\|} M_{\bar{k}}$.
\end{enumerate}
Symmetrical relations hold for each $\bar{k} \in \bar{K}$.
As $x_{k,\alpha},x'_{k,\overline{\alpha}},y_{\bar{k},\alpha},y'_{\bar{k},\overline{\alpha}}$ are uniquely determined by, respectively, $x_k,x'_k,y_{\bar{k}},y'_{\bar{k}}$, from item (\ref{item:tre}) above we infer that $x_{k,\alpha} \mathbin{\|} x'_{k,\overline{\alpha}} = y_{\bar{k},\alpha} \mathbin{\|} y'_{\bar{k},\overline{\alpha}}$ modulo D1.
Hence, by item (\ref{item:quattro}) and two applications of Lemma~\ref{lem:cancellation_xmu}, we obtain that $N_k \sim_\mathtt{BB} M_{\bar{k}}$.
As the sum of the sizes of $N_k$ and $M_{\bar{k}}$ is strictly smaller than $\mathrm{size}(t)+\mathrm{size}(u)$, by induction we get that $\mathcal{E}_\mathtt{RBB} \vdash \tau.N_k \approx \tau.M_{\bar{k}}$.
(Please notice that since $N_k$ and $M_{\bar{k}}$ are normal forms, they are in particular CCS terms, and induction is therefore well defined on them.)
Then, we have
\begin{align*}
(x_k ~|~ x'_k) \lmerge N_k &
\stackrel{\scalebox{0.7}{(TL)}}{\approx{}} (x_k ~|~ x'_k) \lmerge \tau.N_k \\
& \approx{} (x_k ~|~ x'_k) \lmerge \tau.M_{\bar{k}} \\
& \stackrel{\scalebox{0.7}{(TL)}}{\approx{}} (x_k ~|~ x'_k) \lmerge M_{\bar{k}} \\
& \approx{} (y_{\bar{k}} ~|~ y'_{\bar{k}}) \lmerge M_{\bar{k}}.
\end{align*}
Summarising, we have obtained that for each $k \in K$ (respectively, $\bar{k} \in \bar{K}$) there is a $\bar{k} \in \bar{K}$ (respectively, $k \in K$) such that:
\begin{equation}
\label{eq:rbb_provable_K}
\mathcal{E}_\mathtt{RBB} \vdash (x_k ~|~ x'_k) \lmerge N_k \approx (y_{\bar{k}} ~|~ y'_{\bar{k}}) \lmerge M_{\bar{k}}.
\end{equation}
\item For each $j \in J$ there is a $\bar{j} \in \bar{J}$ such that, given any $\mu \in \mathcal{A}_\tau$:
\begin{enumerate}
\item $t \trans[(x_j)]_\mu x_{j,\mu} \mathbin{\|} N_j$;
\item $u \trans[(y_{\bar{j}})]_\mu y_{\bar{j},\mu} \mathbin{\|} M_{\bar{j}}$;
\item $x_j = y_{\bar{j}}$;
\item $x_{j,\mu} \mathbin{\|} N_j \sim_\mathtt{BB} y_{\bar{j},\mu} \mathbin{\|} M_{\bar{j}}$.
\end{enumerate}
Symmetrical relations hold for each $\bar{j} \in \bar{J}$.
We can then proceed as in the case of the indexes in $K, \bar{K}$ and obtain that for each $j \in J$ (respectively, $\bar{j} \in \bar{J}$) there is a $\bar{j} \in \bar{J}$ (respectively, $j \in J$) such that:
\begin{equation}
\label{eq:rbb_provable_J}
\mathcal{E}_\mathtt{RBB} \vdash x_j \lmerge N_j \approx y_{\bar{j}} \lmerge M_{\bar{j}}.
\end{equation}
\item For each $h \in H$ there is a $\bar{h} \in \bar{H}$ such that, given any $\alpha \in \mathcal{A}\cup\overline{\mathcal{A}}$:
\begin{enumerate}
\item $t \trans[(x_h)]_{\overline{\alpha_h},\tau} x_{h,\overline{\alpha_h}} \mathbin{\|} N_h$;
\item $u \trans[(y_{\bar{h}})]_{\overline{\beta_{\bar{h}}},\tau} y_{\bar{h},\overline{\beta_{\bar{h}}}} \mathbin{\|} M_{\bar{h}}$;
\item $x_h = y_{\bar{h}}$ and $\alpha_h = \beta_{\bar{h}}$;
\item $x_{h,\overline{\alpha_h}} \mathbin{\|} N_h \sim_\mathtt{BB} y_{\bar{h},\overline{\beta_{\bar{h}}}} \mathbin{\|} M_{\bar{h}}$.
\end{enumerate}
Symmetrical relations hold for each $\bar{h} \in \bar{H}$.
We can then proceed as in the case of the indexes in $K, \bar{K}$ and obtain that for each $h \in H$ (respectively, $\bar{h} \in \bar{H}$) there is a $\bar{h} \in \bar{H}$ (respectively, $h \in H$) such that:
\begin{equation}
\label{eq:rbb_provable_H}
\mathcal{E}_\mathtt{RBB} \vdash (x_h ~|~ \alpha_h) \lmerge N_h \approx (y_{\bar{h}} ~|~ \beta_{\bar{h}}) \lmerge M_{\bar{h}}.
\end{equation}
\end{itemize}
Equations (\ref{eq:rbb_provable_I})--(\ref{eq:rbb_provable_H}) taken together give $\mathcal{E}_\mathtt{RBB} \vdash t \approx u$, from which it is immediate to infer $\mathcal{E}_\mathtt{RBB} \vdash \mu.t \approx \mu.u$, for any $\mu \in \mathcal{A}_\tau$, and the proof is complete in this case.
\item Assume now that $\mu_i = \tau$ and $N_i \sim_\mathtt{BB} u$ for some $i \in I$, and that $\nu_{\bar{i}} = \tau$ and $t \sim_\mathtt{BB} M_{\bar{i}}$ for some $\bar{i} \in \bar{I}$.
Clearly, we have that $N_i \sim_\mathtt{BB} u \sim_\mathtt{BB} t \sim_\mathtt{BB} M_{\bar{i}}$, and $\mathrm{size}(N_i) + \mathrm{size}(M_{\bar{i}}) < \mathrm{size}(t) + \mathrm{size}(u)$, so that by induction we obtain
\[
\mathcal{E}_\mathtt{RBB} \vdash \mu.N_i \approx \mu.M_{\bar{i}}
\qquad
\mathcal{E}_\mathtt{RBB} \vdash \mu.t \approx \mu.M_{\bar{i}}
\qquad
\mathcal{E}_\mathtt{RBB} \vdash \mu.N_i \approx \mu.u
\]
from which $\mathcal{E}_\mathtt{RBB} \vdash \mu.t \approx \mu.u$ can be inferred, and the proof is complete in this case.
\item Assume that there is an index $i \in I$ such that $\mu_i = \tau$ and $N_i \sim_\mathtt{BB} u$, but there is no $\bar{i} \in \bar{I}$ such that $\nu_{\bar{i}} = \tau$ and $t \sim_\mathtt{BB} M_{\bar{i}}$.
(The symmetric case can be treated similarly and it is therefore omitted.)
For every summand $\tau.N_i$ of $t$ with $N_i \sim_\mathtt{BB} u$ we have that the sum of the sizes of $N_i$ and $u$ is strictly smaller than $\mathrm{size}(t)+\mathrm{size}(u)$.
Hence, by induction we obtain that $\mathcal{E}_\mathtt{RBB} \vdash \tau.N_i \approx \tau.u$ for all such summands.
Thus, possibly applying axioms A0--A3, we can infer that
\begin{equation}
\mathcal{E}_\mathtt{RBB} \vdash t \approx \tau.u + N
\end{equation}
where
\[
N = \sum_{i \in I_t} \mu_i.N_i +
\sum_{j \in J} x_j \lmerge N_j +
\sum_{h \in H} (x_h ~|~ \alpha_h) \lmerge N_h +
\sum_{k \in K} (x_k ~|~ x'_k) \lmerge N_k
\]
with $I_t = \{i \in I \mid \mu_i \neq \tau \vee N_i \not\sim_\mathtt{BB} u\}$.
Given the condition on the indexes in $I_t$, and considering that there is no $\bar{i} \in \bar{I}$ such that $\nu_{\bar{i}} = \tau$ and $t \sim_\mathtt{BB} M_{\bar{i}}$, it is immediate to verify that whenever $N \overset{\xi}\twoheadrightarrow C$ then $u \overset{\xi}\twoheadrightarrow c$ for some $c$ such that $C \sim_\mathtt{BB} c$.
In particular, by applying the same reasoning used in the analysis of case~\ref{case:uno} above, we have:
\begin{itemize}
\item for each $i \in I_t$ there is a $\bar{i}_i \in \bar{I}$ such that
\[
\mathcal{E}_\mathtt{RBB} \vdash \mu_i.N_i \approx \nu_{\bar{i}_i}.M_{\bar{i}_i};
\]
\item for each $j \in J$ there is a $\bar{j}_j \in \bar{J}$ such that
\[
\mathcal{E}_\mathtt{RBB} \vdash x_j \lmerge N_j \approx y_{\bar{j}_j} \lmerge M_{\bar{j}_j};
\]
\item for each $h \in H$ there is a $\bar{h}_h \in \bar{H}$ such that
\[
\mathcal{E}_\mathtt{RBB} \vdash (x_h ~|~ \alpha_h) \lmerge N_h \approx (y_{\bar{h}_h} ~|~ \beta_{\bar{h}_h}) \lmerge M_{\bar{h}_h};
\]
\item for each $k \in K$ there is a $\bar{k}_k \in \bar{K}$ such that
\[
\mathcal{E}_\mathtt{RBB} \vdash (x_k ~|~ x'_k) \lmerge N_k \approx (y_{\bar{k}_k} ~|~ y'_{\bar{k}_k}) \lmerge M_{\bar{k}_k}.
\]
\end{itemize}
Summarising, we have obtained that
\[
\mathcal{E}_\mathtt{RBB} \vdash u \approx N + M
\]
where
\begin{align*}
M ={} &
\sum \{\nu_{\bar{i}}.M_{\bar{i}} \mid \bar{i} \neq \bar{i}_i \text{ for all } i\} +
\sum \{y_{\bar{j}} \lmerge M_{\bar{j}} \mid \bar{j} \neq \bar{j}_j \text{ for all } j \} + \\
& \sum \{(y_{\bar{h}} ~|~ \beta_{\bar{h}}) \lmerge M_{\bar{h}} \mid \bar{h} \neq \bar{h}_h \text{ for all } h\} +
\sum \{(y_{\bar{k}} ~|~ y'_{\bar{k}}) \lmerge M_{\bar{k}} \mid \bar{k} \neq \bar{k}_k \text{ for all } k\}.
\end{align*}
Then:
\begin{align*}
\mathcal{E}_\mathtt{RBB} \vdash \mu.t \approx{} &
\mu.(\tau.u + N) \\
\approx{} &
\mu.(\tau.(N + M) + N) \\
\stackrel{\scalebox{0.7}{(TB)}}{\approx{}} &
\mu.(N + M) \\
\approx{} &
\mu.u
\end{align*}
and the proof follows also in this case.
\end{enumerate}
\end{proof}
\section{Proof of Theorem~\ref{thm:rbb_complete_ccslc}}
\thmrbbcompleteccslc*
\begin{proof}
To prove the claim, it is enough to prove that $\mathcal{E}_\mathtt{RBB} \vdash t \approx t + u$, since, by symmetry of $\sim_\mathtt{RBB}$, this also gives $\mathcal{E}_\mathtt{RBB} \vdash u \approx t + u$ and thus that $\mathcal{E}_\mathtt{RBB} \vdash t \approx u$.
By Proposition~\ref{prop:rbb_nf_ccslc} and Lemma~\ref{lem:only_terms_ccslc}, $t$ and $u$ can be written in normal form as follows:
\begin{align*}
t \approx{} &
\sum_{i \in I} \mu_i.N_i +
\sum_{j \in J} x_j \lmerge N_j +
\sum_{h \in H} (x_h ~|~ \alpha_h) \lmerge N_h +
\sum_{k \in K} (x_k ~|~ x'_k) \lmerge N_k
\\
u \approx{} &
\sum_{\bar{i} \in \bar{I}} \nu_{\bar{i}}.M_{\bar{i}} +
\sum_{\bar{j} \in \bar{J}} y_{\bar{j}} \lmerge M_{\bar{j}} +
\sum_{\bar{h} \in \bar{H}} (y_{\bar{h}} ~|~ \beta_{\bar{h}}) \lmerge M_{\bar{h}} +
\sum_{\bar{k} \in \bar{K}} (y_{\bar{k}} ~|~ y'_{\bar{k}}) \lmerge M_{\bar{k}},
\end{align*}
where all the $N_i,N_j,N_h,N_k,M_{\bar{i}},M_{\bar{j}},M_{\bar{h}},M_{\bar{k}}$ are themselves in normal form.
As $t \sim_\mathtt{RBB} u$ and the equations in $\mathcal{E}_\mathtt{RBB}$ are sound modulo rooted branching bisimilarity, we can apply the same reasoning used in case~\ref{case:uno} of the proof of Proposition~\ref{prop:rbb_provable_ccslc} and obtain that:
\begin{itemize}
\item For each index $\bar{i} \in \bar{I}$ there is an index $i_{\bar{i}} \in I$ such that $\mu_{i_{\bar{i}}} = \nu_{\bar{i}}$ and $N_{i_{\bar{i}}} \sim_\mathtt{BB} M_{\bar{i}}$.
Then, for each $\bar{i} \in \bar{I}$, by Proposition~\ref{prop:rbb_provable_ccslc} we obtain that
\begin{equation}
\label{eq:rbb_complete_I}
\mathcal{E}_\mathtt{RBB} \vdash \mu_{i_{\bar{i}}}.N_{i_{\bar{i}}} \approx \mu_{i_{\bar{i}}}.M_{\bar{i}} = \nu_{\bar{i}}.M_{\bar{i}}.
\end{equation}
\item For each $\bar{j} \in \bar{J}$ there is a $j_{\bar{j}} \in J$ such that $y_{\bar{j}} = x_{j_{\bar{j}}}$ and $M_{\bar{j}} \sim_\mathtt{BB} N_{j_{\bar{j}}}$.
Then, for each $\bar{j} \in \bar{J}$, by Proposition~\ref{prop:rbb_provable_ccslc} and axiom TL we obtain that
\begin{equation}
\label{eq:rbb_complete_J}
\mathcal{E}_\mathtt{RBB} \vdash x_{j_{\bar{j}}} \lmerge N_{j_{\bar{j}}} \approx
x_{j_{\bar{j}}} \lmerge \tau.N_{j_{\bar{j}}} \approx y_{\bar{j}} \lmerge \tau.M_{\bar{j}} \approx y_{\bar{j}} \lmerge M_{\bar{j}}.
\end{equation}
\item For each $\bar{h} \in \bar{H}$ there is a $h_{\bar{h}} \in H$ such that $y_{\bar{h}} = x_{h_{\bar{h}}}$, $\beta_{\bar{h}} = \alpha_{h_{\bar{h}}}$ and $M_{\bar{h}} \sim_\mathtt{BB} N_{h_{\bar{h}}}$.
Then, for each $\bar{h} \in \bar{H}$, by Proposition~\ref{prop:rbb_provable_ccslc} and axiom TL we obtain that
\begin{equation}
\label{eq:rbb_complete_H}
\mathcal{E}_\mathtt{RBB} \vdash (x_{h_{\bar{h}}} ~|~ \alpha_{h_{\bar{h}}}) \lmerge N_{h_{\bar{h}}} \approx (x_{h_{\bar{h}}} ~|~ \alpha_{h_{\bar{h}}}) \lmerge \tau.N_{h_{\bar{h}}} \approx (y_{\bar{h}} ~|~ \beta_{\bar{h}}) \lmerge \tau.M_{\bar{h}} \approx (y_{\bar{h}} ~|~ \beta_{\bar{h}}) \lmerge M_{\bar{h}}.
\end{equation}
\item For each $\bar{k} \in \bar{K}$ there is a $k_{\bar{k}} \in K$ such that $y_{\bar{k}} ~|~ y'_{\bar{k}} = x_{k_{\bar{k}}} ~|~ x'_{k_{\bar{k}}}$, modulo C1, and $M_{\bar{k}} \sim_\mathtt{BB} N_{k_{\bar{k}}}$.
Then, for each $\bar{k} \in \bar{K}$, by Proposition~\ref{prop:rbb_provable_ccslc} and axiom TL we obtain that
\begin{equation}
\label{eq:rbb_complete_K}
\mathcal{E}_\mathtt{RBB} \vdash (x_{k_{\bar{k}}} ~|~ x'_{k_{\bar{k}}}) \lmerge N_{k_{\bar{k}}} \approx (x_{k_{\bar{k}}} ~|~ x'_{k_{\bar{k}}}) \lmerge \tau.N_{k_{\bar{k}}} \approx (y_{\bar{k}} ~|~ y'_{\bar{k}}) \lmerge \tau.M_{\bar{k}} \approx (y_{\bar{k}} ~|~ y'_{\bar{k}}) \lmerge M_{\bar{k}}.
\end{equation}
\end{itemize}
The fact that $\mathcal{E}_\mathtt{RBB} \vdash t \approx t + u$ then immediately follows from Equations (\ref{eq:rbb_complete_I})--(\ref{eq:rbb_complete_K}).
\end{proof}
A newborn protostar generates a fast and well collimated jet, possibly surrounded by a wider angle wind. In turn, the ejected material drives (bow-)shocks travelling through the surrounding high-density medium and traced by H$_2$ ro-vibrational lines at excitation temperatures of around 2000 K. Consequently, slower and cold (10--20 K) molecular outflows are formed by swept-up material, usually traced by CO. Shocks heat the gas and trigger several processes such as endothermic chemical reactions and ice grain mantle sublimation or sputtering. Several molecules, such as H$_2$O, NH$_3$, CH$_3$OH, H$_2$CO, undergo spectacular enhancements by orders of magnitude in their abundances \citep{vandis98}, as observed at mm-wavelengths in a number of outflows \citep{Garay98,Bachiller97,Jorgensen07}. The link between the gas components at $\sim$ 10 K and the hot 2000 K shocked component is crucial to understand how the protostellar wind transfers momentum and energy back to the ambient medium. In this context, studies of the chemical composition of typical molecules in bow-shocks are essential because they represent a very powerful diagnostic tool for probing their physical conditions. Such studies are also paramount to get a comprehensive understanding of chemistry throughout the various phases of the interstellar medium.
\begin{figure
\centering
\includegraphics[bb=82 27 537 450,angle=-90,width=9.0cm]{iras4a_map.ps}
\caption{NGC 1333 IRAS4A outflow and corresponding observed positions. The background image represents the H$_2$O emission at 179 $\mu$m from the WISH program \citep{Santangelo14}. The circles show the vertical (V) and horizontal (H) polarization HPBW at the observed positions, whose centers are indicated by the triangles. The stars mark the positions of the continuum sources \citep[A and B:][]{Looney00}.
}
\label{map-iras4}
\end{figure}
\begin{figure
\centering
\includegraphics[bb=72 65 532 428,angle=-90,width=9.0cm]{l1157_map.ps}
\caption{L1157 outflow and corresponding observed positions. The background image represents the H$_2$O emission at 179 $\mu$m from the WISH program \citep{Nisini10b}. The blue contours show the SiO emission from \citet{Bachiller2001}. The circles show the vertical (V) and horizontal (H) polarization HPBW at the observed positions, whose centers are indicated by the triangles. The star marks the position of the central source (L1157-mm).
}
\label{map-l1157}
\end{figure}
As part of the Herschel Key Program CHESS \citep[Chemical Herschel Surveys of Star forming regions:][]{Cecc10}, the bow-shock L1157-B1 has been investigated with a spectral survey using the HIFI instrument. From the comparison between the NH$_3$(1$_0$--0$_0$) and H$_2$O(1$_{\rm 10}$--1$_{\rm 01}$) profiles, a straightforward estimate of the relative abundance ratios of the gas at different velocities was obtained \citep{Codella10}. As a notable example, the NH$_3$/H$_2$O intensity ratio decreases by a factor of $\sim$ 5 moving towards higher velocities, suggesting, in the case of optically thin emission along the wings, a similar decrease in the abundance ratios. Other tracers of shocked material, such as CH$_3$OH and H$_2$CO, show the same profile as that of NH$_3$. In \citet{Codella10} we proposed that the difference between H$_2$O and the other species reflects different formation mechanisms: for example, while the bulk of NH$_3$ is released from the grain mantles, H$_2$O is enhanced by the release of the icy mantles {\it as well as} by endothermic reactions occurring in the warm ($\geq 220$ K) shocked gas, which convert all gaseous atomic oxygen into water \citep[e.g.,][and references therein]{Isaskun08}.
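The assumption underlying this argument can be made explicit: for optically thin lines, the velocity-resolved intensity of each transition scales linearly with the column density of the emitting molecules, so that
\[
\frac{T_{\rm MB}({\rm NH_3};v)}{T_{\rm MB}({\rm H_2O};v)} \propto \frac{N({\rm NH_3};v)}{N({\rm H_2O};v)} ,
\]
with a proportionality constant set by the frequencies, Einstein coefficients, and excitation conditions of the two lines, here assumed not to vary strongly with velocity.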
However, a model by \citet{Viti11} tailored to this data set suggests that the differences observed in the profiles of the different molecular tracers are due mainly to the temperature of the gas: if the latter undergoes a period at a temperature close to 4000 K, then NH$_3$ is easily destroyed by the reaction with hydrogen which leads to NH$_2$ $+$ H$_2$ (this reaction has a high barrier of $\sim$5000 K),
while the H$_2$O abundance remains high. Such a scenario can be explained by the presence of a C-type shock whose pre-shock density and velocity are such that the maximum temperature of the shock reaches 4000 K along the B1 shock of L1157. These findings called for observations of more molecular shocked regions associated with protostellar outflows, to investigate whether the difference in profiles between H$_2$O and other species is unique to L1157 or whether it is a ubiquitous characteristic of chemically rich outflows.
\begin{figure
\centering
\includegraphics[bb=152 67 511 342,angle=-90,width=9.0cm]{l1448_map.ps}
\caption{L1448 outflow and corresponding observed positions. The background image represents the H$_2$O emission at 179 $\mu$m from the WISH program \citep{Nisini13}. The circles show the vertical (V) and horizontal (H) polarization HPBW at the observed positions, whose centers are indicated by the triangles. The green crosses indicate the positions of the millimeter continuum sources \citep[N, S, A, B, and W:][]{Kwon06}.
}
\label{map-l1448}
\end{figure}
In this article we present observations of the $J_K=$1$_{\rm 0}$--0$_{\rm 0}$ transition of ortho-NH$_3$ at 572.5 GHz in a number of outflow spots already observed in the ortho-H$_2$O(1$_{\rm 10}$--1$_{\rm 01}$) line as part of the Herschel Key Program WISH \citep[Water In Star-forming regions with Herschel:][]{WISH} and reported by \citet{Tafalla13}. In Sect. 2 the target selection and Herschel observations with HIFI are described, in Sect. 3 we report the line profiles obtained, in Sects. 4 and 5 we develop the analysis of the data, and in Sect. 6 we present the summary and conclusions.
\begin{figure
\centering
\includegraphics[angle=90,width=8.5cm]{iras2a-key.ps}
\caption{NGC1333 IRAS2A outflow and corresponding observed positions \citep[image adapted from][]{Bachiller98}. The contours show the CH$_3$OH ($2_k-1_k$) PdBI map. The circles show the vertical (V) and horizontal (H) polarization HPBW at the observed positions, whose centers are indicated by the triangles. The star indicates the position of the central source (IRAS2A).
}
\label{map-iras2}
\end{figure}
\section{Observations}
\subsection{Targeted outflow positions}
The WISH protostellar outflow sample contains 26 outflows driven by Class 0/I low-mass Young Stellar Objects \citep{Tafalla13}. For each source, two 'hot' spots (blue- and red-shifted) have been observed. The positions were carefully selected by inspecting the maps obtained from ground-based facilities using typical tracers of shocked material (e.g. high-velocity CO, CH$_3$OH, SiO, H$_2$CO). The selected spots are associated with a definite enhancement of the abundance of molecules tracing shocked material or with high-velocity molecular components (up to about 100 km s$^{-1}$ with respect to the systemic velocity).
From the WISH survey we selected a sample containing 8 'hot' spots in shocks associated with four 'classical' flows driven by low-mass Young Stellar Objects (YSOs) in the earliest evolutionary stages (Class 0), and associated with bright H$_2$O emission: L1448, L1157, NGC1333-IRAS2A, NGC1333-IRAS4A (see Table \ref{soulist}). These YSOs are approximately at the same distance from Earth, with bolometric luminosities between 4 and 25 $L_{\odot}$.
\subsection{HIFI Observations}
The observations were carried out with the band 1b of the HIFI instrument on-board Herschel, during 2011 October and December, and 2012 February, as part of an OT-1 observing program (OT1\_ccodella\_1). The observations log is shown in Table \ref{obs}. The pointed positions are shown in Table 1 and indicated on Figs. 1--4. The single-pointing observations were made in Position Switching mode and with spatial offsets derived by inspecting large-scale CO maps in order to avoid off-source contamination.
\begin{table
\centering
\caption{HIFI band 1b observations.}
\label{obs}
\centering
\begin{tabular}{lccc}
\hline
Target & Duration & Date & Obs. Id \\
&(seconds)&&\\
\hline
L1448-R4 & 5703 & 2012-02-24 & 1342239631 \\
L1448-B2 & 5703 & 2012-02-24 & 1342239632 \\
IRAS4A-B & 5703 & 2012-02-25 & 1342239633 \\
IRAS4A-R & 5703 & 2012-02-25 & 1342239634 \\
IRAS2A-B & 5703 & 2012-02-25 & 1342239635 \\
IRAS2A-R & 5703 & 2012-02-25 & 1342239636 \\
L1157-R & 12828 & 2011-12-22 & 1342235065 \\
L1157-B2 & 5703 & 2011-10-08 & 1342230366 \\
\hline
\end{tabular}
\end{table}
The Wide Band Spectrometer (WBS) was used with a frequency resolution of 1.1 MHz. Two High Resolution Spectrometers (HRS) were used in parallel with a frequency resolution of 250 kHz. The observing set-up was prepared in order to observe the o--NH$_3$(1$_0$--0$_0$) line in the lower side band with both the WBS and one HRS unit. In addition to our target line, we observed the CS(12--11) line at 587.6 GHz in the upper side band with the WBS and the second HRS unit, and the CO(5--4) line at 576.3 GHz in the lower side band with the WBS. Also, the a--CH$_3$OH (12$_{1,11}$--11$_{1,10}$) line at 584.8 GHz and the p--H$_2$CO (8$_{2,6}$--7$_{2,5}$) line at 587.4 GHz were observed in the upper side band with the WBS only. Both H and V polarizations were observed and then averaged to increase sensitivity. However, we caution that the H and V pointings were separated by 6$\farcs$6
and therefore each polarization may cover a slightly different region. Thus our analysis refers to the whole area covered by the two beams. In Appendix A we provide and discuss the H and V spectra. The molecular line parameters, as well as Herschel's antenna HPBW, taken according to \citet{Roelf12}, are reported in Table \ref{nspecie}. The Herschel observations were processed with the ESA-supported package HIPE 8.10\footnote{HIPE is a joint development by the Herschel Science Ground Segment Consortium, consisting of ESA, the NASA Herschel Science Center, and the HIFI, PACS and SPIRE consortia.} \citep[Herschel Interactive Processing Environment;][]{Ott10}. FITS files from level 2 were then created and transformed into GILDAS\footnote{http://iram.fr/IRAMFR/PDB/gildas/gildas.html} format for data analysis.
The spectra in this article are reported in units of main-beam brightness temperature ($T_{\rm MB}=T_A^* \times F_{\rm eff}/B_{\rm eff}$), for which we have used $F_{\rm eff}=0.96$ and $B_{\rm eff}=0.76$ (nominal for band 1b) for all the lines, according to \citet{Roelf12}. After smoothing, the spectral resolution is 0.5 km s$^{-1}$ in all cases.
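For reference, with the adopted efficiencies this calibration reduces to a single multiplicative factor applied to all spectra:
\[
T_{\rm MB} = \frac{F_{\rm eff}}{B_{\rm eff}}\,T_A^* = \frac{0.96}{0.76}\,T_A^* \simeq 1.26\,T_A^* .
\]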
\begin{table
\centering
\caption{Transitions and parameters of the lines observed.}
\footnotesize{
\label{nspecie}
\centering
\begin{tabular}{lcccc}
\hline
Transition$^a$ & $\nu_{\rm 0}$ & $E_{\rm u}$ & $A_{\rm ij}$ & HPBW \\
& (GHz) & (K) & (10$^{-3}$s$^{-1}$)&($''$) \\
\hline
o--NH$_3$(1$_0$--0$_0$) & 572.498 & 28 & 1.61 & 37 \\
CO(5--4) & 576.267 & 83 & 0.01 & 37 \\
CH$_3$OH(12$_{1,11}$--11$_{1,10}$)A$^+$ & 584.822 & 197 & 0.89 & 36 \\
p--H$_2$CO(8$_{2,6}$--7$_{2,5}$) & 587.453 & 173 & 5.66 & 36 \\
CS(12--11) & 587.616 & 183 & 4.34 & 36 \\
\hline
o--H$_2$O(1$_{\rm 10}$--1$_{\rm 01}$)$^b$ & 556.936 & 61 & 3.46 & 39 \\
\hline
\end{tabular}}
\begin{center}
$^a$ Transition properties are taken from the Cologne Database for Molecular Spectroscopy: \citet{muller05}. $^b$ Taken from \citet{Tafalla13} \\
\end{center}
\end{table}
\section{Results}
\begin{table*
\centering
\caption{Line properties.}
\scriptsize{
\label{linesdata}
\centering
\begin{tabular}{lccccc}
\hline
Transition$^a$ & $T_{\rm MB}^{\rm peak}$ & r.m.s. & $V_{peak}$ & $V_{min},V_{max}$$^b$ & $\int$ $T_{\rm MB}dv$$^c$\\
& (mK) & (mK) & (km s$^{-1}$) & (km s$^{-1}$) & (mK km s$^{-1}$)\\
\hline
\multicolumn{6}{c}{IRAS4A-B}\\
\hline
o--NH$_3$(1$_0$--0$_0$) & 280(4) & 4 & +5.7(0.5) & $-$9.0, $+$12.7 & 624(10) \\
o--H$_2$O(1$_{\rm 10}$--1$_{\rm 01}$) & 852(2) & 2 & +1.1(0.5) & $-$15.9, $+$25.1 & 14237(9) \\
CO(5--4) & 5131(2) & 2 & +6.0(0.5) & $-$18.1, $+$49.7 & 29472(12) \\
CH$_3$OH(12$_{1,11}$--11$_{1,10}$) A$^+$ & 30(3) & 3 & +6.2(0.5) & $+$0.9, $+$9.6 & 130(7) \\
p--H$_2$CO(8$_{2,6}$--7$_{2,5}$) & 20(3) & 3 & +5.7(0.5) & $+$3.0, $+$7.8 & 88(7) \\
CS(12--11) & 46(3) & 3 & +3.3(0.5) & $-$5.6, $+$13.3 & 532(9) \\
\hline
\multicolumn{6}{c}{IRAS4A-R}\\
\hline
o--NH$_3$(1$_0$--0$_0$) & 220(4) & 4 & +5.5(0.5) & $-$4.8, $+$24.2 & 721(8) \\
o--H$_2$O(1$_{\rm 10}$--1$_{\rm 01}$) & 532(2) & 2 & +10.5(0.5) & $-$1.7, $+$31.9 & 7263(8) \\
CO(5--4) & 4604(2) & 2 & +6.0(0.5) & $-$12.3, $+$51.5 & 28245(12) \\
CH$_3$OH(12$_{1,11}$--11$_{1,10}$) A$^+$ & 30(4) & 4 & +8.0(0.5) & $+$4.9, $+$12.0 & 130(8) \\
p--H$_2$CO(8$_{2,6}$--7$_{2,5}$) & 30(3) & 3 & +7.8(0.5) & $+$6.1, $+$10.1 & 77(9) \\
CS(12--11) & 51(3) & 3 & +8.2(0.5) & $+$5.0, $+$19.4 & 379(10) \\
\hline
\multicolumn{6}{c}{IRAS2A-B}\\
\hline
o--NH$_3$(1$_0$--0$_0$) & 50(4) & 4 & +8.5(0.5) & $-$5.0, $+$9.0 & 413(11) \\
o--H$_2$O(1$_{\rm 10}$--1$_{\rm 01}$) & 565(2) & 2 & +0.2(0.5) & $-$11.2, $+$7.6 & 6427(6) \\
CO(5--4) & 1063(3) & 2 & +5.5(0.5) & $-$16.1, $+$6.2 & 6427(7) \\
CH$_3$OH(12$_{1,11}$--11$_{1,10}$) A$^+$ & -- & 3 & -- & -- & -- \\
p--H$_2$CO(8$_{2,6}$--7$_{2,5}$) & -- & 3 & -- & -- & -- \\
CS(12--11) & -- & 2 & -- & --& -- \\
\hline
\multicolumn{6}{c}{IRAS2A-R}\\
\hline
o--NH$_3$(1$_0$--0$_0$) & 190(4) & 4 & +6.5(0.5) & $+$4.5, $+$19.2 & 858(14) \\
o--H$_2$O(1$_{\rm 10}$--1$_{\rm 01}$) & 581(2) & 2 & +11.5(0.5) & $+$3.2, $+$28.5 & 7182(8) \\
CO(5--4) & 5832(3) & 3 & +6.7(0.5) & $-$11.8, $+$33.3 & 33186(9) \\
CH$_3$OH(12$_{1,11}$--11$_{1,10}$) A$^+$ & 20(3) & 3 & +10.0(0.5) & $+$8.3, $+$10.7 & 64(7) \\
p--H$_2$CO(8$_{2,6}$--7$_{2,5}$) & -- & 4 & -- & -- & -- \\
CS(12--11) & 9(3) & 3 & +10.0(0.5) & $+$7.2, $+$11.4 & 44(4) \\
\hline
\multicolumn{6}{c}{L1448-B2}\\
\hline
o--NH$_3$(1$_0$--0$_0$) & 120(4) & 4 & +4.2(0.5) & $+$2.7, $+$6.3 & 295(8) \\
o--H$_2$O(1$_{\rm 10}$--1$_{\rm 01}$) & 348(2) & 2 & $-$1.1(0.5) & $-$61.7, $+$5.5 & 8911(11) \\
CO(5--4) & 2711(3) & 3 & +3.5(0.5) & $-$77.3, $+$35.8 & 31603(9) \\
CH$_3$OH(12$_{1,11}$--11$_{1,10}$) A$^+$ & -- & 3 & -- & -- & -- \\
p--H$_2$CO(8$_{2,6}$--7$_{2,5}$) & -- & 4 & -- & -- & -- \\
CS(12--11) & -- & 3 & -- & -- & -- \\
\hline
\multicolumn{6}{c}{L1448-R4}\\
\hline
o--NH$_3$(1$_0$--0$_0$) & 20(3) & 4 & +10.2(0.5) & $+$6.3, $+$18.8 & 179(14) \\
o--H$_2$O(1$_{\rm 10}$--1$_{\rm 01}$) & 406(1) & 1 & +19.5(0.5) & $+$1.3, $+$47.4 & 11388(6) \\
CO(5--4) & 1936(03) & 3 & +6.2(0.5) & $-$1.1, $+$51.5 & 18381(10) \\
CH$_3$OH(12$_{1,11}$--11$_{1,10}$) A$^+$ & -- & 4 & -- & -- & -- \\
p--H$_2$CO(8$_{2,6}$--7$_{2,5}$) & -- & 4 & -- & -- & -- \\
CS(12--11) & -- & 4 & -- & -- & -- \\
\hline
\multicolumn{6}{c}{L1157-B2}\\
\hline
o--NH$_3$(1$_0$--0$_0$) & 130(4) & 4 & +1.1(0.5) & $-$4.5, $+$5.5 & 717(9) \\
o--H$_2$O(1$_{\rm 10}$--1$_{\rm 01}$) & 1150(5) & 5 & +4.5(0.5) & $-$6.6, $+$9.4 & 9274(13) \\
CO(5--4) & 6394(2) & 2 & +1.9(0.5) & $-$15.2, $+$9.1 & 21733(7) \\
CH$_3$OH(12$_{1,11}$--11$_{1,10}$) A$^+$ & -- & 4 & -- & -- & -- \\
p--H$_2$CO(8$_{2,6}$--7$_{2,5}$) & -- & 3 & -- & -- & -- \\
CS(12--11) & -- & 4 & -- & -- & -- \\
\hline
\multicolumn{6}{c}{L1157-R}\\
\hline
o--NH$_3$(1$_0$--0$_0$) & 40(4)& 4 & +7.6(0.5) & $+$2.2, $+$18.7 & 383(13) \\
o--H$_2$O(1$_{\rm 10}$--1$_{\rm 01}$) & 360(5) & 5 & +19.3(0.5) & $-$1.1, $+$30.5 & 6651(20)\\
CO(5--4) & 2323(2) & 2 & +4.1(0.5) & $-$2.9, $+$33.5 & 21975(8) \\
CH$_3$OH(12$_{1,11}$--11$_{1,10}$) A$^+$ & -- & 3 & -- & -- & -- \\
p--H$_2$CO(8$_{2,6}$--7$_{2,5}$) & -- & 2 & -- & -- & -- \\
CS(12--11) & -- & 4 & -- & -- & -- \\
\hline
\end{tabular}}
\begin{center}
$^a$ Apart from the transitions reported in Table \ref{nspecie}, included here is the o--H$_2$O transition taken from \citet{Tafalla13}. $^b$ Velocity boundaries where the emission is $\geq$ 3$\sigma$. $^c$ The integrated area between $V_{min}$ and $V_{max}$. \\
\end{center}
\end{table*}
Table \ref{linesdata} summarizes the results of the observations, indicating the line intensities ($T_{\rm MB}^{\rm peak}$), velocity of the peak ($V_{\rm peak}$), velocity limits of the emission ($V_{min},V_{max}$), and the total integrated emission ($\int$ $T_{\rm MB}dv$). Table \ref{linesdata} also reports the same parameters for the H$_2$O line, which we measured from the spectra reported in \citet{Tafalla13}. We computed the column densities from the total integrated emission, assuming LTE and optically thin emission. Table \ref{coldens} shows the results assuming a typical range of temperatures observed toward this kind of object \citep[20--100 K, e.g.,][]{Lefloch12,Tafalla13}. We find that the ammonia column densities are in the range of $\sim$ 10$^{10}$--5$\times$10$^{11}$ cm$^{-2}$. To verify that these estimates are accurate, in Appendix B we present the results of radiative transfer calculations showing that the ammonia emission is optically thin, $\tau$ $\leq$ 1.3 $\times$ 10$^{-2}$, for these values of column densities and temperatures (see Fig. \ref{tau-nh3}). For CS, CO, CH$_3$OH, and H$_2$CO, the column density ranges are $\sim$ 10$^{12}$--5$\times$10$^{14}$, 10$^{15}$--7$\times$10$^{16}$, 8$\times$10$^{12}$--8$\times$10$^{15}$, and 9$\times$10$^{11}$--9$\times$10$^{13}$ cm$^{-2}$, respectively. Note that in the case of CO, due to the strong absorption, the numbers are only lower limits.
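The column densities quoted above follow from the standard optically thin LTE relation (written here in the Rayleigh--Jeans limit and neglecting the background term),
\[
N = \frac{8\pi k \nu_0^{2}}{h c^{3} A_{ul}}\,\frac{Q(T_{\rm ex})}{g_u}\,e^{E_u/kT_{\rm ex}} \int T_{\rm MB}\,dv ,
\]
where $Q(T_{\rm ex})$ is the partition function and $g_u$ the degeneracy of the upper level; the ranges quoted above simply reflect the adopted 20--100 K interval in $T_{\rm ex}$.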
\subsection{NH$_3$ Profiles}
\begin{figure*}
\centering
\includegraphics[angle=-90,width=15.5cm]{h2onh3_spectra_wbs.ps}
\caption{The WBS NH$_3$ spectra (magenta) at the different observed positions. Also displayed are the ortho--H$_2$O spectra (black) from \citet{Tafalla13}. Dashed vertical lines indicates the cloud velocity.
}
\label{nh3-h2o}
\end{figure*}
The ammonia WBS spectra observed at each of the outflow positions are shown in Fig. \ref{nh3-h2o}, overlaid on the corresponding water spectra. Ammonia emission was detected in all positions, showing extended line wings (up to $\sim$ 15 km s$^{-1}$ with respect to the systemic velocity) in all but the L1448-B2 position, where we found just a narrow profile (a peculiar feature discussed in Sect. 4).
With the exception of L1448-B2, the WBS spectra show an absorbing dip at systemic velocity plausibly due to the absorption from the extended envelope of the protostar.
The high resolution spectra, from the HRS, do not show additional information on the low-velocity NH$_3$ emission beyond that already provided by the WBS spectra, with the exception of L1448-B2, which shows multiple peaks at low velocities (Fig. \ref{L1448B2-hr}). The Horizontal and Vertical polarization spectra are slightly different, as a result of the different area covered by each of them (see Sect. 2.2). The two most prominent spectral features are seen in the Vertical polarization spectrum (which corresponds to the beam closer to the central region), showing peaks at 4.0 and 4.7 km s$^{-1}$. These spectral features are discussed in Sects. 4 and 5.
\subsection{CS, H$_2$CO, and CH$_3$OH profiles}
The spectra of all observed lines are presented in Figs. A1--A4. The CS, H$_2$CO, and CH$_3$OH transitions were detected only towards IRAS4A-B and IRAS4A-R. These two positions are also the strongest line emitters among the sources studied here. This is possibly due to the chemical richness of this source, as reported by previous investigations \citep[e.g.,][]{Wakelam05,Santangelo14}; however, we point out that the latter result might also be due to sensitivity. We notice that these species have larger column densities than NH$_3$. In general, we see that the CS, H$_2$CO, and CH$_3$OH transitions have profiles more similar to that of NH$_3$ than to that of H$_2$O, in particular in terms of the maximum velocity. While a more detailed analysis of these species is beyond the scope of the present paper, we shall briefly discuss them in Sect. 5.
\begin{figure
\centering
\includegraphics[angle=0, bb=1 1 650 360,width=8.5cm]{l1448B2-hr.eps}
\caption{High spectral resolution (HRS) NH$_3$ spectrum towards L1448-B2. Horizontal and Vertical polarization are shown in black and red, respectively. The vertical dashed lines indicate the velocities of the two clouds (4.0 km s$^{-1}$ and 4.7 km s$^{-1}$) found in ammonia centimeter transitions by Bachiller \& Cernicharo (1986).
}
\label{L1448B2-hr}
\end{figure}
\begin{figure*}
\centering
\includegraphics[angle=-90,width=15.5cm]{ammwat-rat.ps}
\caption{The NH$_3$/H$_2$O intensity ratio at the different observed positions. Dashed vertical lines indicate the cloud velocity. To indicate the region where the ratio is valid, the velocity boundaries (V$_{\rm min}$,V$_{\rm max}$) where the NH$_3$ and H$_2$O emission is $\geq$ 3$\sigma$ are indicated by vertical lines, in magenta and black, respectively. The NH$_3$ spectra are shown in magenta as a reference.
}
\label{nh3-h2o-rat}
\end{figure*}
\section{Ammonia versus water}
In order to properly compare the ammonia and water profiles, a more complete radiative transfer analysis of both species would be necessary to rule out excitation and opacity effects. Such an analysis is beyond the scope of the present work, since we only have one NH$_3$ transition. Despite this, we have shown in Appendix B that our NH$_3$ transition is optically thin at the positions investigated here, and \citet{Tafalla13}, based on the analysis of two H$_2$O transitions, provided evidence that the water emission is also optically thin at the same positions. In addition, radiative transfer models from \citet{Tafalla13} revealed a water gas component with a density in the range of 10$^7$--10$^8$ cm$^{-3}$, and therefore close to the critical density of our NH$_3$ transition \citep[$\sim$10$^8$ cm$^{-3}$, e.g.][]{Menten10}. All this information gives us confidence that the line profile comparison presented in the following is appropriate. We also caution in Sect. 5 about the need for radiative transfer calculations in the context of the chemical models.
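For clarity, the critical density mentioned above is defined, in the usual two-level approximation, as the ratio between the spontaneous emission rate and the collisional de-excitation rate coefficient of the transition,
\[
n_{\rm crit}(T) = \frac{A_{ul}}{\gamma_{ul}(T)} ;
\]
with $A_{ul} \simeq 1.6\times10^{-3}$ s$^{-1}$ (Table \ref{nspecie}) and rate coefficients of order $10^{-11}$ cm$^3$ s$^{-1}$, this indeed gives $n_{\rm crit} \sim 10^{8}$ cm$^{-3}$.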
From the comparison between the NH$_3$ and H$_2$O spectra we see that the profiles of the two species often differ in shape and in maximum velocity. In all positions the maximum terminal velocity ($V_{\rm ter}$) reached by NH$_3$ is lower than that of H$_2$O (always by more than a factor of two; see Fig. \ref{velocities}), which confirms the tendency first found in L1157-B1 by \citet{Codella10}. The most prominent cases are L1157-R and L1448-B2, which show either a considerably different $V_{\rm peak}$ or very different linewidths.
The ammonia-to-water line ratio, as a function of velocity, is shown in Fig. \ref{nh3-h2o-rat}. The maximum line ratio is found around the cloud velocity in all but the IRAS2A-B position. In the L1157-B2 and L1157-R positions the behaviour of the line ratio as a function of velocity is similar to what was found by \citet{Codella10} at the L1157-B1 position, i.e. a maximum close to the cloud velocity and a slow decrease of the ratio toward high velocities.
The most peculiar case is the L1448-B2 position, in which the line ratio profile is just a sharp peak at the systemic velocity. This is the consequence of the very different NH$_3$ line profile with respect to H$_2$O, as pointed out previously. Here the main difference is that while the H$_2$O reaches V$_{\rm LSR}\sim -$ 61 km s$^{-1}$, the NH$_3$ reaches only V$_{\rm LSR}\sim +$3 km s$^{-1}$ (with linewidths of $\sim$ 67 and 4 km s$^{-1}$, respectively). As pointed out in the previous section, the high spectral resolution data (Fig. \ref{L1448B2-hr}) reveal the structure of this narrow profile: at least two peaks, at 4.0 and 4.7 km s$^{-1}$.
Similar narrow profiles were also found by \citet{Isaskun05} in CH$_3$OH millimeter transitions at other nearby positions in the blue lobe of L1448, which they interpreted as the magnetic precursor \citep[see also][]{Isaskun04,Isaskun09}. However, the spectral feature of such a precursor is very narrow ($\sim$0.6 km s$^{-1}$) and red-shifted by about 0.5 km s$^{-1}$ with respect to the 4.7 km s$^{-1}$ cloud, i.e. at V$_{\rm LSR}\sim +$5.2 km s$^{-1}$. On the other hand, our NH$_3$ spectrum is not that narrow (in fact, a few km s$^{-1}$ wide) and does not show a peak at that V$_{\rm LSR}$. We notice that our peaks coincide with the peak velocities of the clouds previously reported in centimeter ammonia transitions by \cite{Bachiller86}. Taking into account the discrepancy between the H and V polarizations (the two peaks are only clearly seen in the V polarization) and the noise, it does not seem reliable to identify the narrow feature as the magnetic precursor. Although the noisy high resolution spectra might not be sufficient evidence, our ammonia observations of the L1448-B2 position suggest that the emission may be tracing these two clouds enclosed within the HIFI beam. An additional argument for the absence of ammonia emission from the high-velocity shock is that the previous NH$_3$(1,1) and (2,1) interferometric maps by \citet{Curiel99} show no emission from the B2 position, with emission found only towards the central parts of the L1448-C and L1448-N objects. This peculiar narrow line profile is further discussed, in its comparison with the water profile, within the context of the chemical models in Sect. 5.
\begin{figure}
\centering
\includegraphics[angle=0,width=8.0cm]{terminal-velo-2.ps}
\caption{Comparison between the maximum (terminal, $V_{\rm ter}$) velocity (absolute values with respect to the systemic velocity, see Table 4) of the H$_2$O emission at 179 $\mu$m and that of NH$_3$ at 572 GHz. Black points show the difference $V_{\rm ter}$[H$_2$O]\,$-$\,$V_{\rm ter}$[NH$_3$] versus $V_{\rm ter}$[H$_2$O], while red points compare $V_{\rm ter}$[NH$_3$] with $V_{\rm ter}$[H$_2$O].}
\label{velocities}
\end{figure}
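The quantities plotted in the terminal-velocity figure reduce to simple arithmetic; a minimal sketch with placeholder numbers (not the measured values of Table 4):

```python
# For each position, take |V_ter - V_sys| for both tracers and form the
# difference V_ter[H2O] - V_ter[NH3]. The numbers below are placeholders
# for illustration only, not the measured values of Table 4.
positions = {
    # name: (V_sys, V_ter[NH3], V_ter[H2O]) in km/s (illustrative)
    "example-blue": (2.7, -3.0, -25.0),
}
for name, (v_sys, v_nh3, v_h2o) in positions.items():
    dv_nh3 = abs(v_nh3 - v_sys)   # NH3 terminal-velocity offset
    dv_h2o = abs(v_h2o - v_sys)   # H2O terminal-velocity offset
    print(name, round(dv_h2o - dv_nh3, 2), round(dv_h2o, 2))
```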
\section{Chemical models}
The NH$_3$-H$_2$O profile differences first discovered in L1157-B1 (e.g. Codella et al. 2010) are clearly present in other outflows. In Viti et al. (2011)
a gas-grain chemical and shock model was used to investigate the differences in profile between the water and the ammonia transitions; it was found that these differences are purely chemical and can be explained by the presence of a C-type shock whose maximum temperature must be close to $\sim$4000 K along the B1 clump. More specifically, assuming, as Codella et al. (2010) imply, that the NH$_3$/H$_2$O intensity ratio decreases at high velocities because a similar decrease in the abundance ratio occurs, models where the maximum temperature of the shock is close to $\sim$4000 K lead to water remaining abundant at all velocities (i.e. throughout the C-shock) while NH$_3$ decreases at high velocities in the warm postshock gas. This is due to i) the efficient destruction of ammonia in the postshock gas via the endothermic reaction H $+$ NH$_3$ $\rightarrow$ NH$_2$ $+$ H$_2$ once its activation barrier ($\sim$5000 K) is overcome at temperatures higher than $\sim$4000 K (see Viti et al. 2011); and ii) the absence of formation routes of NH$_3$ at high temperature. In Viti et al. (2011) the models that best fit the NH$_3$/H$_2$O ratio also fit the emission from other shock tracers such as CH$_3$OH and H$_2$CO, which closely follow the behaviour of ammonia (see also Section 3.2 for the case of these molecules in this work). Owing to the few detections of the latter molecules and the low signal-to-noise ratio in their wings, we do not discuss them further here in the context of the models.
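The steepness of this temperature threshold can be illustrated with the bare Boltzmann factor $\exp(-E_a/T)$ of the destruction channel; this is only the exponential part of an Arrhenius rate, not the full rate coefficient:

```python
import math

# Bare Boltzmann factor exp(-Ea/T) for the endothermic destruction channel
# H + NH3 -> NH2 + H2, with the ~5000 K activation barrier quoted above.
# Only the exponential part of an Arrhenius rate is shown, to illustrate
# how sharply the channel opens as the shock temperature approaches ~4000 K.
Ea = 5000.0  # activation barrier [K]
for T in (300.0, 2000.0, 4000.0):
    print(f"T = {T:6.0f} K  exp(-Ea/T) = {math.exp(-Ea / T):.2e}")
```

At 300 K the factor is negligible ($\sim$10$^{-8}$), while near 4000 K a significant fraction of collisions can overcome the barrier.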
The maximum temperature of the shock in turn constrains the pre-shock density of the clump and the velocity of the shock. If the temperature of the gas is indeed the key physical parameter determining the amount of ammonia as a function of velocity, then outflow spots differing in pre-shock density and shock velocity will show different profile behaviours for NH$_3$ and hence different water-to-ammonia ratios as a function of velocity.
To illustrate this, in Figure 8 we report the difference between the terminal velocities of H$_2$O and NH$_3$ as a function of the terminal velocity of H$_2$O. Since the abundance of H$_2$O remains high across the full shock width (Viti et al. 2011), we use the terminal velocity of the H$_2$O line profile as a proxy for the velocity of the C-shock. We note that this is strictly true for shock velocities between 10 and 45 km$\,$s$^{-1}$, i.e. for the majority of the shock spots discussed in this paper (see Section 5.2 below). Therefore, we assume that the terminal velocity of H$_2$O lies close to the actual velocity of the shock. As a result, from Figure 8 we find that the terminal velocities of water and ammonia become increasingly different for increasing C-shock velocities, suggesting a clear chemical effect on ammonia with the strength of the shock. In this section we test this hypothesis by using the \citet{Viti11} model and by determining a set of physical and shock parameters that can fit each of the spectra.
\subsection{The shock and chemical model}
The code used in \citet{Viti11} is UCL$\_$CHEM \citep{Viti04a} augmented with a shock module (from Jimenez-Serra et al. 2008). The code runs essentially in two phases: Phase I follows the evolution of a gravitationally collapsing core, with an initial density of 100 cm$^{-3}$; gas-phase chemistry occurs, together with sticking onto dust particles and subsequent surface processing. The sticking coefficients for all species are assumed to be 1, equivalent to a 100\% sticking efficiency \citep[see][Equation 2]{Rawlings92}. However, for consistency with the Viti et al. (2011) study, we have varied these coefficients in order to simulate different degrees of freeze-out at the end of Phase I. The final density is a free parameter (see below). Note that the collapse in Phase I is not meant to represent the formation of a protostar, but is simply a way to compute the chemistry of high-density gas in a self-consistent manner starting from diffuse atomic gas, i.e. without assuming the chemical composition at its final density. Phase II computes the time-dependent chemical evolution of the gas and dust once the core
has formed and stellar activity and shocks are present. The model self-consistently accounts for both thermal desorption, due to the dust being heated by the presence of the outflow, and sputtering of the icy mantles: the latter are sputtered once the dynamical age across the C-shock has reached the ``saturation timescale'' ($t_{sat}$; see Table 5), as in Jimenez-Serra et al. (2008).
In all the cases considered, sputtering occurs at earlier times than thermal desorption. The model is the same as that employed in Viti et al. (2011) for the L1157-B1 shock spot, so we refer the reader to that paper for further details.
For this work, we ran a grid of models varying in (i) pre-shock density (n$_H$); (ii) shock velocity (v$_{\rm shock}$); and (iii) efficiency of freeze-out of gas-phase species during the cold phase (Phase I). The maximum temperature of the shock, which varies depending on the pre-shock density and shock velocity, is extracted from Figures 8b and 9b in \citet{Draine83}. The saturation times are taken from Jimenez-Serra et al. (2008).\footnote{The saturation times are related to the time-scales within the shock at which most of the molecular material in the ices has been injected into the gas phase by the sputtering of dust grains (see Jimenez-Serra et al. 2008 for the actual definition and determination of this parameter).} Table 5 lists the models run. Column 7 lists, for each model, the length of the dissipation region, which is the shock length scale and depends on the shock velocity as well as on the pre-shock density.
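The construction of such a grid can be sketched as a Cartesian product over the three varied parameters; the actual grid of Table 5 is irregular, so the values below are only illustrative:

```python
from itertools import product

# Illustrative enumeration of the model grid: each model combines a
# pre-shock density, a shock velocity, and a Phase I freeze-out efficiency.
# The actual grid of Table 5 is irregular; this Cartesian product is only
# a sketch of how such a grid can be generated.
densities = [1e3, 1e4, 1e5, 1e6]      # n_H [cm^-3]
velocities = [10, 20, 30, 40, 60]     # v_shock [km s^-1]
freeze_out = [1, 15, 30, 60]          # depletion at end of Phase I [%]
grid = [
    {"n_H": n, "v_shock": v, "freeze_out": f}
    for n, v, f in product(densities, velocities, freeze_out)
]
print(len(grid))  # 80 parameter combinations in this illustrative grid
```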
\begin{table*}
\centering
\caption{Model parameters: model number, pre-shock density, shock velocity, saturation
time, maximum temperature of the neutral gas (these four parameters are interconnected -- see Jimenez-Serra et al. 2008), degree of depletion
(note that we use the fraction of CO in the icy mantles at the end of Phase I to estimate this fraction),
and dissipation length. The last column lists the group into which we categorized the model (see Sect. 5.1). a(b) stands for a $\times$ 10$^b$.}
\label{models}
\centering
\begin{tabular}{lccccccc}
\hline
n$^o$ & n$_H$ (cm$^{-3}$) & V$_s$ (km s$^{-1}$) & t$_{sat}$ (yrs) & T$_{max}$ (K) & Freeze-out (\%) & L$_{diss}$ (cm) & Group \\
\hline
1&10$^5$& 40& 4.6& 4000& 1 & 1.5(16) &2 \\
2&10$^5$& 40& 4.6& 4000& 15 & 1.5(16) &2 \\
3&10$^5$& 40& 4.6& 4000& 30 & 1.5(16) &2 \\
4&10$^5$& 35& 4.6& 3200& 30 & 1.3(16) &1 \\
5&10$^5$& 35& 4.6& 3200& 1 & 1.3(16) &1\\
6&10$^3$& 40& 455& 2200& 1 & 1.5(18) &1 \\
7&10$^4$& 40& 45.5& 2200& 1 & 1.5(17) &1 \\
8&10$^4$& 60& 38& 4000& 3 & 2.2(17) &2 \\
9&10$^3$& 60& 380& 4000& 3 & 2.2(18) &2 \\
10&10$^3$& 60& 380& 4000& 1 & 2.2(18) &2 \\
11&10$^5$& 40& 4.6& 4000& 60 & 1.5(16) &2 \\
12&10$^4$& 40& 45.5& 2200& 6 & 1.5(17) &1 \\
13&10$^5$& 35& 4.6& 3200& 60 & 1.3(16) &1 \\
14&10$^4$& 10& 10.5& 300& 18 & 3.7(16) &1 \\
15&10$^4$& 40& 45.5& 2200& 18 & 1.5(17) &1 \\
16&10$^3$& 10& 954& 300& 1 & 3.7(17) &3 \\
17&10$^4$& 10& 10.5& 300& 3 & 3.7(16) &1 \\
18&10$^5$& 10& 10.5& 300& 60 & 3.7(15) &1 \\
19&10$^5$& 10& 10.5& 300& 30 & 3.7(15) &1 \\
20&10$^5$& 10& 10.5& 300& 15 & 3.7(15) &1 \\
21&10$^5$& 10& 10.5& 300& 1 & 3.7(15) &1 \\
22&10$^3$& 20& 570& 900& 1 & 7.4(17) &3 \\
23&10$^3$& 20& 570& 900& 3 & 7.4(17) &3 \\
24&10$^4$& 20& 5.7& 900& 3 & 7.4(16) &3 \\
25&10$^4$& 20& 5.7& 900& 18 & 7.4(16) &1 \\
26&10$^4$& 20& 57.0& 900& 18 & 7.4(16) &1 \\
27&10$^4$& 20& 57.0& 900& 3 & 7.4(16) &1 \\
28&10$^5$& 20& 5.7& 800& 15 & 7.4(15) &1 \\
29&10$^5$& 20& 5.7& 800& 1 & 7.4(15) &1 \\
30&10$^5$& 20& 5.7& 800& 60 & 7.4(15) &1 \\
31&10$^5$& 20& 5.7& 800& 30 & 7.4(15) &1 \\
32&10$^4$& 10& 95.4& 300& 18 & 3.7(16) &1 \\
33&10$^4$& 10& 95.4& 300& 3 & 3.7(16) &1 \\
34&10$^6$& 40& 0.5& 4000& 100 & 1.5(15) &1 \\
35&10$^6$& 40& 0.5& 4000& 50 & 1.5(15) &2 \\
36&10$^6$& 40& 0.5& 4000& 80 & 1.5(15) &2 \\
37&10$^3$& 30& 440& 1800& 1 & 1.1(18) &2 \\
38&10$^6$& 30& 0.4& 2000& 80 & 1.1(15) &3 \\
39&10$^3$& 30& 440& 1800& 3 & 1.1(18) &3 \\
40&10$^4$& 30& 44.0& 1800& 18 & 1.1(17) &1 \\
41&10$^4$& 30& 44.0& 1800& 3 & 1.1(17) &1 \\
42&10$^6$& 30& 0.4& 2000& 100 & 1.1(18) &1 \\
43&10$^6$& 30& 0.4& 2000& 50 & 1.1(18) &1 \\
44&10$^5$& 30& 4.4& 2000& 30 & 1.1(16) &1 \\
45&10$^5$& 30& 4.4& 2000& 1 & 1.1(16) &1 \\
46&10$^5$& 30& 4.4& 2000& 6 & 1.1(16) &1 \\
47&10$^5$& 30& 4.4& 2000& 15 & 1.1(16) &1 \\
48&10$^4$& 15& 68 & 600& 18 & 5.5(16) &1 \\
49&10$^5$& 15& 6.8& 550& 60 & 5.5(15) &1 \\
50&10$^6$& 15& 0.7& 550& 80 & 5.5(14) &1 \\
51&10$^4$& 25& 49 & 1200& 18 & 9.2(16) &1 \\
52&10$^5$& 25& 4.9& 1500& 60 & 9.2(15) &1 \\
53&10$^6$& 25& 0.5& 1500& 80 & 9.2(14) &1 \\
54&10$^4$& 45& 33.8& 2800& 18 & 1.7(17) &1 \\
55&10$^5$& 45& 3.4& 5000& 60 & 1.7(16) &3 \\
56&10$^6$& 45& 0.3& 5000& 80 & 1.7(15) & 3\\
57& 5$\times$10$^4$ & 45& 6.7& 6500 & 30& 3.7(16) &3 \\
58&10$^4$& 65& 38& 10000& 30 & 2.2(17) &3 \\
\hline
\end{tabular}
\end{table*}
\subsection{Chemical trends}
The behaviour of both water and ammonia in each model was analysed, and we find that, broadly speaking, we can divide our models into three groups (see last column of Table 5):
\begin{itemize}
\item Group 1: models where ammonia and water behave in a similar way, i.e.
both species are either abundant or depleted together at each position across the dissipation length; Models 4-7, 12-15, 17-21, 25-34, and 40-54 belong to this category.
\item Group 2: models where ammonia decreases `earlier' in the postshock gas (i.e. at lower velocities) than water; Models 1-3, 8-11, and 35-36 belong to this group.
\item Group 3: models where the behaviour of NH$_3$ and/or water does not follow a clear trend; Models 16, 22-24, 37-39, and 55-58 belong to this category.
\end{itemize}
In Figure 9 we plot the fractional abundance of water with respect to the total number of hydrogen nuclei (black line) and ammonia (red line) as a function of velocity within the postshock gas for a selected subset of models covering all the behaviours.
\begin{figure}
\centering
\includegraphics[angle=0,width=8.5cm]{newfigure9.eps}
\caption{Fractional abundances with respect to the total number of hydrogen nuclei of water and ammonia as a function of velocity within the postshock gas, for representative models within each Group.
}
\label{modelabundances}
\end{figure}
The behaviour of ammonia and water in Groups 1 and 2 was amply discussed in Viti et al. (2011).
Group 3 includes models where the trends of water and ammonia are not easily categorized;
for example, in Model 16 both species decrease considerably, but the water abundance does not drop as much as the ammonia one and eventually reaches a plateau. Models 22, 23, 24, and 37 show the same behaviour as models belonging to Group 1 up to 7 km s$^{-1}$, but then water increases, owing to
the temperature in the shock rising above $\sim$200 K;
ammonia, on the other hand, reaches a plateau: both species are in fact quite abundant throughout the dissipation length.
Models 38 and 39 do show a decrease in ammonia as the velocity increases, but without a sharp drop in its abundance, which remains constant at 10$^{-8}$. This decrease is again due to the high gas temperature reached in the shock, which allows the destruction of NH$_3$ to occur (Viti et al. 2011). The decrease of the NH$_3$/H$_2$O ratio for Models 22, 23, and 24 is expected to be similar to that for Models 38 and 39. However, the reactions responsible for this decrease are different: if the shock velocity is high, ammonia is destroyed (Models 38 and 39), while if the shock velocity is low, water is enhanced (Models 22, 23, and 24). In Models 55, 56, 57, and 58, where the temperature of the gas is $\geq$ 5000 K, both
water and ammonia decrease in abundance with velocity, but then water increases again
and remains abundant, while NH$_3$ only briefly increases in abundance: the higher the
shock temperature, the larger and longer is the drop in water abundance.
The subdivision above is purely based on an empirical analysis of our plots. We now attempt to identify the chemical or physical processes that yield such a categorization. In Viti et al. (2011) we claimed that the differences between NH$_3$ and H$_2$O are purely chemical and
are driven by the maximum shock temperature, which needed to be over $\sim$4000~K in order for the NH$_3$ to be destroyed at high velocities. In order to test this claim we now compare the models belonging to Groups 1 and 2 (see Table 5): we note indeed that all models belonging to Group 1 have a maximum shock temperature below 3500~K and, on the contrary, all the models in Group 2 have a temperature of $\sim$4000~K. Of particular interest are the models in Group 3: most have a maximum shock temperature well below 3500~K, indicating that the observed behaviour of our two species is not simply determined
by whether the maximum temperature of the shock is high enough to destroy ammonia at high velocities but still low enough not to affect the water abundance. It seems that at very low temperatures ($<$ 1000 K) the abundance of ammonia never increases in the first place
(as its main route of formation, H$_2$ + NH$_2$, is endothermic, with a barrier of $\sim$1400 K -- see Viti et al. 2011 for more details on the formation of NH$_3$ under different conditions); more interestingly, Models 55-58 have a maximum shock temperature above $\sim$4000~K, which means that the clear trend where H$_2$O
shows bright emission at all the sampled high velocities only holds for a very narrow range of maximum shock temperatures: once the temperature is close to 5000~K, water is also destroyed, as indeed we stated in Viti et al. (2011).
In conclusion, while we confirm that it is the maximum shock temperature that determines the behaviour of these species as a function of velocity, different maximum shock temperatures lead to a varied range of behaviours. Independently of the shock model employed, our results indicate that the behaviours of water and ammonia are simply a function of temperature, which is set by the shock velocity and the physical conditions of the pre-shock gas. Although the temperature varies very quickly with the passage of the shock, at the high temperatures reached within it the chemistry of ammonia is fast enough to leave a signature in the post-shock gas.
\subsection{Comparison with observations}
Now that we have established the sensitivity of the NH$_3$ and H$_2$O profiles to the shock conditions, we attempt, qualitatively, to associate each observed outflow spot with a group of models and, if possible, with a range of physical conditions. We note that it is not possible to directly compare the abundances of our models to the observations. Nevertheless, if one assumes, as in Codella et al. (2010), that the differences between the H$_2$O and NH$_3$ profiles as a function of velocity should be reflected in differences in abundances as a function of velocity, then one can use the models as plotted in Figure 9 to aid such a comparison.
For the comparison with observations, it is also important to note that the high-temperature region in our shock model does not correspond to the temperature of any gas component that can be observed directly in these outflow spots: what we are likely tracing is the far downstream postshock gas carrying the `fingerprint' of the chemistry that occurred during the high-temperature shock phase(s). We also underline that without a detailed radiative transfer model that takes into consideration the source size, beam dilution, and the excitation of the H$_2$O and NH$_3$ lines, it is impossible at this stage to quantitatively match a particular model to an object. In the following, we try to determine the most likely shock parameters that best match the NH$_3$ and H$_2$O line profiles observed in our outflow spot sample. We do this by comparing the observed profile as a function of velocity with the molecular abundance as a function of depth. We recognize that this can only lead to a qualitative match and that the abundances would need to be fed into a radiative transfer model in order to directly fit the observations (as was done in Viti et al. 2011).
{\it L1157-B2 and IRAS2A-B}: these two objects show very similar, and relatively narrow, profiles in both molecules; the models from Group 1 that seem to best match these objects are Models 45, 50, and 53; however, they are probably better matched by some models in Group 3, i.e. Model 16 may be the best match for L1157-B2, while Models 55-57 may be the closest to IRAS2A-B. Model 16 implies that L1157-B2 has a lower pre-shock density than L1157-B1 as well as a much lower shock velocity (10 km s$^{-1}$), which is also consistent with the behaviour of the CO emission (see Figure A.2). IRAS2A-B, on the other hand, may be an example of a very fast shock with a high pre-shock density: although the terminal velocity of water is only $\sim$ 20 km s$^{-1}$, the CO spectrum (see Figure A.3) shows emission at higher velocities. This object is in fact considered one of the strongest emitters.
The fact that two very different models are invoked to match two objects with very similar water and ammonia profiles is a consequence of the fact that very different physical and/or chemical conditions can lead to theoretical abundance profiles that can be grouped together. Indeed, the difference in CO profiles between L1157-B2 and IRAS2A-B is an indication that these two objects may in fact be very different. Nevertheless, it is worth underlining that, since more than one model can match the behavioural trend of NH$_3$ and H$_2$O, we are {\it not} claiming a unique match between one model and one object.
{\it L1448-B2, IRAS4A-R, and IRAS2A-R}: these three objects show narrow ammonia emission, while the water profile is quite extended. We would therefore expect these objects to be best matched by models in Group 2, where ammonia is only abundant for a short period of time (i.e. for a small velocity bin). Models that may be good matches are Models 1, 8, and 10, where the pre-shock density can range from 10$^3$ to 10$^6$ cm$^{-3}$ but where the shock velocity is always at least 40 km s$^{-1}$ and the depletion onto the grains during the pre-shock phase is low: this implies a lower abundance of water and ammonia at the time of the sputtering of the icy mantles (as both species are enhanced on the grains as a function of freeze-out, due to the hydrogenation of oxygen and nitrogen, respectively). The peculiar narrow profile in L1448-B2 can then be understood in the context of these models: since the NH$_3$ decreases earlier, the narrow line implies that almost all the shocked ammonia is gone and we mainly see the contribution of the cloud cores.
{\it L1157-R, L1448-R4, and IRAS4A-B}: these objects resemble L1157-B1, in that water is indeed more extended in velocity than ammonia, but the latter does not have a narrow profile. Models from Group 2 are the best fits, in particular Models 2, 3, 9, 11, or 35. These models span the same pre-shock densities and maximum velocities as for the L1448-B2, IRAS4A-R, and IRAS2A-R objects, but seem to have a higher depletion onto grains during Phase I.
The association of particular models with individual objects has been done {\it solely} on the basis of the comparison of the line profiles with the abundances as a function of velocity, as explained at the beginning of this section. The iteration of such parameters to match observations is all we can do without line radiative transfer modelling. It is nevertheless useful to crudely estimate whether the abundances in our chosen models can at least lead to observable intensities for the ammonia lines. We therefore ran some RADEX calculations (van der Tak et al. 2007) for Models 1, 5, and 16 (as representative of Groups 2, 1, and 3) using representative values of the abundance of NH$_3$ at different velocities. We find that for Models 1 and 5 it is very easy to reach the observed line intensities; for Model 16 we can obtain line intensities of the order of 0.1 K, as long as we use the abundance averaged only up to 3.5 km s$^{-1}$ and a narrow ($<$ 10 km s$^{-1}$) linewidth.
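As a sanity check on inputs of this kind, an order-of-magnitude column density follows from $N \sim x\, n_{\rm H}\, L_{\rm diss}$; the density and dissipation length below are the Model 1 entries of Table 5, while the post-shock abundance is an assumed representative value, not a number quoted in the paper:

```python
# Order-of-magnitude column density behind a RADEX-style check:
# N(NH3) ~ x(NH3) * n_H * L_diss. n_H and L_diss are the Model 1 entries
# of Table 5; the post-shock NH3 fractional abundance x_nh3 is an assumed
# representative value, not a model output quoted in the paper.
n_H = 1e5        # pre-shock density [cm^-3] (Model 1)
L_diss = 1.5e16  # dissipation length [cm] (Model 1)
x_nh3 = 1e-7     # assumed NH3 fractional abundance
N_nh3 = x_nh3 * n_H * L_diss
print(f"N(NH3) ~ {N_nh3:.1e} cm^-2")  # ~1.5e14 cm^-2
```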
\section{Summary and conclusions}
In the following we summarize the main results and conclusions from our {\it Herschel}/HIFI observations of the ammonia emission from protostellar outflows:
\begin{enumerate}
\item We detected the NH$_3$ emission from all eight outflow positions we have observed.
In all cases, the ammonia emission reaches terminal velocities ($V_{\rm ter}$) that are lower than those of H$_2$O, showing that this behaviour is not exclusive to the L1157-B1 position.
In addition to ammonia, all the bonus lines (due to CS,
H$_2$CO, and CH$_3$OH) were detected only in the IRAS4A-B and IRAS4A-R positions, confirming the chemical richness of these regions.
\item Comparison with chemical modelling confirms that the behaviour of ammonia is determined principally by the temperature of the gas.
\item While a quantitative comparison between models and observations is not feasible without a proper line radiative transfer model, we constrain the pre-shock density and/or shock velocity for each object based on a comparison of abundance trends. We find that, while several models show agreement with the profiles of the different objects, the best-matching model for L1157-B2 has a very low pre-shock density (10$^3$ cm$^{-3}$) and velocity (10 km s$^{-1}$), while the IRAS2A-B abundances are best reproduced by gas that has undergone a relatively high-velocity shock (45 km s$^{-1}$) with a pre-shock density of $\sim$ 10$^5$ cm$^{-3}$. L1448-B2, IRAS4A-R, and IRAS2A-R are matched by models where ammonia is heavily destroyed at high velocities, owing, as explained above, to the short period when the temperature of the gas is high, at 4000 K. We are not able to constrain the pre-shock density for these objects, as it can range from as low as 1000 cm$^{-3}$ to as high as 10$^{6}$ cm$^{-3}$ {\it as long as} the maximum temperature of the shock is 4000 K, which can be achieved for a shock velocity of $\sim$ 40 km s$^{-1}$. The best-matching models also indicate a low level of depletion in the cold phase prior to the passage of the shock; hence it is likely that the pre-shock density is in fact towards the lower limit. Finally, L1157-R, L1448-R4, and IRAS4A-B seem to resemble very closely the abundance profile of L1157-B1. They are therefore likely to have a pre-shock density of 10$^5$--10$^6$ cm$^{-3}$ and a shock velocity of the order of 40 km s$^{-1}$, although we cannot exclude a faster shock with a lower pre-shock density: in other words, the behaviour of H$_2$O/NH$_3$ is again determined by the high temperature the gas can attain, and the latter can be achieved by more than one combination of shock parameters.
In terms of theoretical abundances, a high H$_2$O/NH$_3$ ratio for much of the dissipation length is only reached within a small range of maximum shock temperatures; in terms of profiles, on the other hand, half of our sample shows a high H$_2$O/NH$_3$ ratio: not all of them, however, require a maximum shock temperature close to 4000 K, as the Group 3 models indicate. It is also important to point out that within the observed beam it is unlikely that we are seeing a single episodic shock or a group of shocks all at the same velocity; hence, with the present observations it is not possible to draw any statistically meaningful conclusion on the type of shock that is prevalent in outflows around low-mass stars.
In conclusion, the H$_2$O/NH$_3$ ratio as a function of velocity can be used to determine the most likely combination of pre-shock density and shock velocity, although it is not sufficient in itself to constrain each individual parameter.
\end{enumerate}
\section{Acknowledgements}
The Italian authors gratefully acknowledge the support from the Italian Space Agency (ASI) through the contract I/005/011/0, which also provided a fellowship for A.I. G\'omez-Ruiz, who is now supported by Consejo Nacional de Ciencia y Tecnolog\'ia, through the program C\'atedras CONACYT para J\'ovenes Investigadores. I.J.-S. acknowledges the financial support received from the STFC through an Ernest Rutherford Fellowship (proposal number ST/L004801/1). HIFI has been designed and built by a consortium of institutes and university departments from across Europe, Canada and the United States under the leadership of SRON Netherlands Institute for Space Research, Groningen, The Netherlands and with major contributions from Germany, France and the US. Consortium members are: Canada: CSA, U. Waterloo; France: CESR, LAB, LERMA, IRAM; Germany: KOSMA, MPIfR, MPS; Ireland: NUI Maynooth; Italy: ASI, IFSI-INAF, Osservatorio Astrofisico di Arcetri-INAF; The Netherlands: SRON, TUD; Poland: CAMK, CBK; Spain: Observatorio Astron\'omico Nacional (IGN), Centro de Astrobiolog\'ia (CSIC-INTA); Sweden: Chalmers University of Technology - MC2, RSS \& GARD, Onsala Space Observatory, Swedish National Space Board, Stockholm University - Stockholm Observatory; Switzerland: ETH Zurich, FHNW; USA: Caltech, JPL, NHSC.
\section{Introduction}
\label{sec:introduction}
The excitation spectrum of bound quark-antiquark states that are
composed of $u$, $d$, and $s$~quarks, \ie light-quark mesons, has
regained interest in recent years. Excited light-quark mesons are
currently studied extensively in high-flux fixed-target experiments
with hadrons at CERN~\cite{Abbon:2014aex} and with photons at
Jefferson Lab~\cite{Battaglieri:2010zza,Ghoul:2015ifw}. They are also
produced, for example, in multibody decays of heavy-quark mesons and
in $e^+ e^-$ collisions with initial-state radiation. Both processes
are studied, for example, at BESIII~\cite{Asner:2008nq},
\babar~\cite{Bevan:2014iga}, and Belle~\cite{Bevan:2014iga}.
Recently, the formulation of QCD on the lattice has gained new
momentum because it now also addresses light-meson decays; see \eg\
\refsCite{Feng:2010es,Wilson:2015dqa,Prelovsek:2013ela,Wilson:2014cna}.
In the future, this will lead to more realistic predictions for masses
and widths of excited hadrons. Hence obtaining more precise
experimental knowledge of the properties of light mesons has become
important. Despite many decades of research, the spectroscopic
information coming from different experiments is sometimes
inconsistent or even controversial. Extensive discussions of the
light-meson sector can be found
in~\refsCite{Patrignani:2016xqp,klempt:2007cp,Crede:2008vw,Ochs:2013gi,Meyer:2010ku,Meyer:2015eta,Brambilla:2014jmp}.
Light-meson states are characterized by spin~$J$, parity $P$, charge
conjugation~$C$,\footnote{Although the $C$~parity is not defined for
the charged states considered here, it is customary to quote the
\JPC quantum numbers of the corresponding neutral partner state in
the isospin triplet. The $C$~parity can be generalized to the
$G$~parity, $G \equiv C\, e^{i \pi I_y}$, which is a multiplicative
quantum number that is defined for the nonstrange states of a meson
multiplet. Here, $I_y$ is the $y$~component of the isospin.} and
isospin~$I$ quantum numbers. The mesons are grouped into
SU(3)$_\text{flavor}$ multiplets that contain states with the same \JP
quantum numbers. In this paper, we restrict ourselves to isovector
mesons with masses below about \SI{2.1}{\GeVcc}, which decay into
three charged pions and hence have negative $G$~parity. The Particle
Data Group (PDG) provides a complete listing of the known
states~\cite{Patrignani:2016xqp}.
\Cref{fig:spectroscopy_recent_measurements} shows a summary of recent
measurements of masses and widths of these states grouped by their
\JPC quantum numbers. For each resonance, the four most recent
entries from the PDG are confronted with the results that will be
presented in this work. For some states, the variation of the
resonance parameters extracted from different experiments is by far
larger than the statistical uncertainties of the individual
measurements. In many cases, these variations originate from
different analysis methods and model assumptions. Substantial
differences among the measurements are found, for example, for the
parameters of the \PaOne* ground state, \PaOne, and the first excited
states of the \PaOne* and the \PaTwo*, \PaOne[1640] and \PaTwo[1700].
The situation is similar for the \PpiOne[1600], which has
\enquote{exotic} $\JPC = 1^{-+}$ quantum numbers that are forbidden
for ordinary \qqbarPrime quark-model states in the nonrelativistic
limit. The resonance interpretation of the \PpiOne[1600] signal is
controversial, in particular in the $\Prho \pi$ decay mode that will
be addressed in this analysis. For all states discussed here, we
exploit the observed dependence of the production amplitudes on the
squared four-momentum transfer in order to better separate resonant
and nonresonant contributions. We also extract branching-fraction
ratios for the $\Prho \pi$ and $\PfTwo \pi$ decays of \PaTwo and
\PaFour.
\begin{figure}[tbp]
\centering
\includegraphics[width=\linewidthOr{0.75\textwidth}]{fig1}
\caption{Masses and widths of light isovector mesons with positive
$C$~parity and a $3\pi$ decay mode. For each resonance, the four
most recent measurements of masses (circles) and widths (vertical
size of boxes), as listed by the PDG~\cite{Patrignani:2016xqp},
are compared to the masses and widths obtained in this analysis
(crosses and black-framed boxes, respectively). The measurements
are grouped according to the \JPC quantum numbers of the states.
Higher excitations with the same \JPC are shown in different
colors.}
\label{fig:spectroscopy_recent_measurements}
\end{figure}
The COMPASS Collaboration has already published properties of
isovector $3\pi$ resonances with masses in the range between
\SIlist{1.1;2.1}{\GeVcc}, produced in pion scattering off a solid-lead
target~\cite{alekseev:2009aa,adolph:2014mup}. In particular, we
reported in \refCite{alekseev:2009aa} the observation of the
spin-exotic \PpiOne[1600] in the $\Prho \pi$ decay mode. Our recent
observation of a new axial-vector resonancelike structure, the
\PaOne[1420], with the same quantum numbers as the elusive
\PaOne~\cite{Adolph:2015pws} has spurred much work on the
interpretation of states (including heavy-quark states), for which the
assignment to quark-model multiplets is unclear; see \eg\
\refsCite{wang:2014bua,Basdevant:2015ysa,Basdevant:2015wma,Ketzer:2015tqa,Wang:2015cis,Chen:2015fwa,Aceti:2016yeb,Gutsche:2017oro,Liu:2015taa,Guo:2017jvc}.
The present study uses the same data but yields more accurate
resonance parameters.
This work is based on the world's largest data set to date on
diffractively produced mesons decaying into three charged pions. The
data were obtained by the COMPASS experiment and were already
presented in detail in~\refCite{Adolph:2015tqa}. They contain
exclusive events from the inelastic reaction
\begin{equation}
\reaction,
\label{eq:reaction}
\end{equation}
which was induced by a \SI{190}{\GeVc} $\pi^-$ beam impinging on a
liquid-hydrogen target. The recoiling target proton is denoted by
$p_\text{recoil}$. In such single-diffractive reactions, the target
particle stays intact and the beam pion is excited via the exchange of
a Pomeron with the target nucleon to a short-lived intermediate state
$X^-$ that then decays into \threePi as shown in
\cref{fig:3pi_reaction_isobar}.
\begin{figure}[tbp]
\centering
\includegraphics[width=\linewidthOr{\twoPlotWidth}]{fig2}
\caption{Diffractive dissociation of a beam pion on a target proton
into the \threePi final state via an intermediate $3\pi$ state
$X^-$. The decay of $X^-$ is described using the isobar model,
which assumes that the decay proceeds via an intermediate \twoPi
state $\xi^0$, the so-called isobar. At the two decay vertices,
the couplings $\alpha_{X \to \xi \pi}$ (vertex~1) and
$\alpha_{\xi \to \pi \pi}$ (vertex~2) appear, which are in general
complex numbers.}
\label{fig:3pi_reaction_isobar}
\end{figure}
Reaction~\eqref{eq:reaction} depends on two Mandelstam variables: the
squared $\pi^- p$ center-of-mass energy~$s$, which is fixed by the
beam energy, and the squared four-momentum~$t$ transferred by the
Pomeron. It is convenient to define the \emph{reduced four-momentum
transfer squared}
\begin{equation}
\tpr \equiv \tabs - \tmin \geq 0,
~\text{where}~
\tmin \approx \rBrk{\frac{\mThreePi^2 - m_\pi^2}{2 \Abs{\vec{p}_\text{beam}}}}^2
\label{eq:tPrime}
\end{equation}
is the minimum absolute value of the four-momentum transfer
needed to excite the beam pion to a $3\pi$ state with invariant mass
\mThreePi. The beam momentum $\vec{p}_\text{beam}$ is
defined in the laboratory frame. The analysis is
limited to the kinematic range \SIvalRange{0.1}{\tpr}{1.0}{\GeVcsq}.
Typical values of \tmin are well below \SI{e-3}{\GeVcsq} for the
$3\pi$ mass range from \SIrange{0.5}{2.5}{\GeVcc} considered in this
analysis.
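As a numerical cross-check of \cref{eq:tPrime}, the following Python sketch (illustrative only, not part of the analysis software) evaluates \tmin for the quoted beam momentum and confirms that it stays well below \SI{e-3}{\GeVcsq} over the analyzed mass range.

```python
# Illustrative sketch (not COMPASS analysis code): evaluate the t_min
# approximation of Eq. (2) for the quoted beam momentum.
M_PI = 0.13957      # charged-pion mass [GeV/c^2]
P_BEAM = 190.0      # beam momentum in the laboratory frame [GeV/c]

def t_min(m_3pi: float) -> float:
    """Minimum |t| needed to excite the beam pion to mass m_3pi [GeV^2/c^2]."""
    return ((m_3pi**2 - M_PI**2) / (2.0 * P_BEAM)) ** 2

# t_min stays well below 1e-3 GeV^2/c^2 over the analyzed mass range:
for m in (0.5, 1.5, 2.5):
    print(f"m_3pi = {m:.1f} GeV/c^2 -> t_min = {t_min(m):.2e} GeV^2/c^2")
```

Even at the upper end of the mass range, $\mThreePi = \SI{2.5}{\GeVcc}$, the sketch yields $\tmin \approx \SI{2.7e-4}{\GeVcsq}$, consistent with the statement above.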
Since reaction~\eqref{eq:reaction} is dominated by Pomeron
exchange,\footnote{The Pomeron is a quasiparticle with vacuum quantum
numbers and therefore has $\IG = 0^+$.} isospin and $G$ parity of
the beam pion are conserved so that the quantum numbers of the
intermediate state $X^-$ are restricted\footnote{We do not consider
flavor-exotic states with isospin~2.} to $\IG = 1^-$. This limits
the analysis to meson states that belong to the \piJ and \aJ
families.\footnote{Note that due to parity conservation, $a_0$ states
cannot decay into \threePi.} The $X^-$ decay is assumed to proceed
independently of the $X^-$ production; \ie the amplitude for the
process factorizes into production and decay amplitudes.
In our previous publication~\cite{Adolph:2015tqa}, the data were
subjected to a partial-wave analysis~(PWA) of the outgoing $3\pi$
system. The employed PWA model relies on the isobar model, which
describes the $X^- \to \threePi$ decay as a sequence of two two-body
decays, $X^- \to \xi^0 \pi^-$ and $\xi^0 \to \twoPi$ via intermediate
\twoPi states $\xi^0$, the so-called \emph{isobars} (see
\cref{fig:3pi_reaction_isobar}). Each isobar is characterized by its
\IGJPC quantum numbers and an assumed dependence of its decay
amplitude on the \twoPi invariant mass \mTwoPi, which in the simplest
case is a Breit-Wigner amplitude representing a $\pi\pi$ resonance.
The PWA model used in \refCite{Adolph:2015tqa} assumed that the data
are a mixture of interfering contributions of various partial waves
that are defined by the quantum numbers of the $X^-$ and their decay
modes. This set of partial waves included six different isobars, and
we allowed for total spins $0 \leq J \leq 6$ and orbital angular
momenta $0 \leq L \leq 6$ between the isobars and the bachelor
$\pi^-$. Independent fits of the set of partial-wave amplitudes to
the data were carried out in \num{1100} $(\mThreePi, \tpr)$ bins
without applying model assumptions about the resonance content of the
$3\pi$ system. We refer to this first step that was performed prior
to the present analysis as \emph{mass-independent analysis}. The
results of a PWA fit in a given $(\mThreePi, \tpr)$ bin were
represented in terms of a spin-density matrix that contains all
information about the partial-wave amplitudes and their mutual
interferences that can be extracted from the data. This
mass-independent analysis is a prerequisite to searching for $3\pi$
resonances produced in reaction~\eqref{eq:reaction}, which can be
identified only if we combine the information contained in the
spin-density matrices over a wide range of \mThreePi.
In this paper, the results of the mass-independent analysis from
\refCite{Adolph:2015tqa} are used as input for a resonance-model fit,
which is also referred to as \emph{mass-dependent fit}. In this
second analysis step, we search for $3\pi$ resonances that contribute
to the intermediate $X^-$ states by modeling the \mThreePi dependence
of the earlier extracted spin-density matrices over a wide range of
\mThreePi. Resonances appear as characteristic structures in the
\mThreePi dependence not only of the moduli squared of the
partial-wave amplitudes, \ie in the \emph{partial-wave intensities},
but also of the mutual interference terms of the partial waves. In
addition to the product of the moduli of the partial-wave amplitudes,
an interference term contains information about the relative phase
between a pair of waves. The change of a relative phase with
increasing \mThreePi is called \emph{phase motion}. The fit model
assumes that the partial-wave amplitudes can be described by a
coherent sum of Breit-Wigner amplitudes representing the resonances
and amplitudes that describe nonresonant components. In a novel
approach, we extend this analysis technique that was used in most of
the previous analyses (see \eg\
\refsCite{daum:1980ay,Amelin:1995gt,Gunter:2000am,alekseev:2009aa,Salgado:2013dja})
by including, for the first time to our knowledge, the information on
the dependence of the partial-wave amplitudes on \tpr in the fit. By
requiring that the shape parameters of the resonances are independent
of \tpr, a better separation of the resonant and nonresonant
components is achieved, which is a substantial improvement over
previous analyses.
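As an illustration of the resonance description referred to above, the following Python sketch evaluates a relativistic Breit-Wigner amplitude with constant width (an assumption made for this example only; the fit model introduced in \cref{sec:method} uses more refined parametrizations for several resonances). The mass and width values are merely in the ballpark of the \PaTwo. At the nominal mass, the amplitude has unit modulus and a phase of 90~degrees, which illustrates the characteristic phase motion of a resonance.

```python
# Illustrative sketch (assumption: constant-width relativistic Breit-Wigner;
# the actual fit uses dynamic widths for several resonances).
import cmath

def breit_wigner(m: float, m0: float, gamma0: float) -> complex:
    """Relativistic Breit-Wigner amplitude with constant width gamma0."""
    return (m0 * gamma0) / (m0**2 - m**2 - 1j * m0 * gamma0)

# At the nominal mass the phase passes through 90 degrees ("phase motion"):
amp = breit_wigner(1.318, m0=1.318, gamma0=0.105)  # illustrative parameters
print(abs(amp), cmath.phase(amp))                  # modulus 1, phase pi/2
```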
Most of the details on the event selection and the mass-independent
analysis have already been presented in our previous
publication~\cite{Adolph:2015tqa}. Therefore, we give in
\cref{sec:setup_and_event_selection} only a brief summary of the basic
features of the experimental setup and the event selection.
\Cref{sec:mass-independent_fit} contains a discussion of those details
of the mass-independent analysis from \refCite{Adolph:2015tqa} that
are relevant for the resonance-model fit. In \cref{sec:method}, we
explain the fit model and the employed fitting method. Because of the
large number of events, statistical uncertainties of the extracted
resonance parameters are negligible compared to systematic
uncertainties. Hence we performed extensive systematic studies, which
are described in \cref{sec:systematics}. The results of the
resonance-model fit are presented and discussed in \cref{sec:results}
grouped by the \JPC quantum numbers of the resonances. This includes
a comparison of the obtained resonance parameters with world data and
a discussion of the extracted \tpr spectra of the resonant and
nonresonant components. The \tpr dependence of the relative phases of
the wave components is discussed in \cref{sec:production_phases}. In
\cref{sec:conclusions}, we summarize our findings. The appendixes
contain the details about an alternative description of the
nonresonant contributions, about alternative formulations of the
\chisq~function that is minimized to determine the resonance
parameters, and about the systematic uncertainties of the extracted
resonance parameters. The supplemental
material\ifMultiColumnLayout{~\cite{paper3_supplemental_material}}{ in
\cref{sec:spin-dens_matrices,sec:phase-space_vol}} contains the
amplitude data that enter in the resonance-model fit, the full fit
result, and additional information required to perform the
resonance-model fit. The data required to perform the resonance-model
fit are provided in computer-readable format at~\cite{paper3_hepdata}.
%
\section{Experimental setup and event selection}
\label{sec:setup_and_event_selection}
The experimental setup and the data selection criteria are described
in detail in \refsCite{Adolph:2015tqa,haas:2014bzm}. Here, we give
only a brief summary.
The COMPASS experiment~\cite{Abbon:2007pq,Abbon:2014aex} is located at
the M2~beam line of the CERN Super Proton Synchrotron. The data used
for the analysis presented in this paper were recorded in the year
2008. A beam of negatively charged hadrons with \SI{190}{\GeVc}
momentum and \SI{96.8}{\percent} $\pi^-$ content was incident on a
\SI{40}{\cm} long liquid-hydrogen target that was surrounded by a
recoil-proton detector (RPD). Incoming pions were identified using a
pair of beam Cherenkov detectors (CEDARs) that were placed in the beam
line upstream of the target. Outgoing charged particles were detected
by the tracking system, and their momenta were determined using two
large-aperture dipole magnets. The large-acceptance high-precision
two-stage magnetic spectrometer was well suited for investigating
high-energy reactions at low to intermediate values of the reduced
four-momentum transfer squared \tpr. For the present analysis, \tpr
was chosen to be in the range from \SIrange{0.1}{1.0}{\GeVcsq}, where
the lower bound is dictated by the acceptance of the RPD and the upper
bound by the decrease of the number of events with increasing \tpr.
Data were recorded using a trigger based on a recoil-proton signal in
the RPD in coincidence with an incoming beam particle and no signal in
the veto counters (see Sec.~II~B in \refCite{Adolph:2015tqa}). In the
analysis, we require a production vertex located within the target
volume. This vertex must have one incoming beam pion and three
outgoing charged particles. The sum of the energies of the outgoing
particles, $E_\text{sum}$, is required to be equal to the average beam
energy within 2~standard deviations $\sigma_{E_\text{sum}}$, \ie
within $\pm \SI{3.78}{\GeV}$. Contributions from double-diffractive
processes, in which also the target proton is excited, are suppressed
by the RPD and veto trigger signals and by requiring exactly one
recoil particle detected in the RPD that is back-to-back with the
outgoing \threePi system in the plane transverse to the beam
(transverse momentum balance; see Sec.~II~C in
\refCite{Adolph:2015tqa}). Events are disregarded if the incoming
beam particle is identified by the CEDARs as a kaon. If at least one
of the three forward-going particles is identified by the ring-imaging
Cherenkov detector (RICH) as not being a pion, the event is also
rejected. In addition, we require Feynman-$x$ of the fastest
final-state $\pi^-$ to be below~0.9 for rapidity differences between
the fast $\pi^-$ and the slower \twoPi pair in the range from
\numrange{2.7}{4.5}. This suppresses the small contamination by
centrally produced \twoPi final states in the analyzed mass range (see
Sec.~II~C in \refCite{Adolph:2015tqa}). The selected kinematic region
of \SIvalRange{0.5}{\mThreePi}{2.5}{\GeVcc} and
\SIvalRange{0.1}{\tpr}{1.0}{\GeVcsq} contains a total of \num{46E6}
exclusive events that enter into the partial-wave analysis (see
\cref{sec:mass-independent_fit}).
%
\section{Partial-wave decomposition}
\label{sec:mass-independent_fit}
We use a two-step procedure for the determination of the spectrum of
$3\pi$ resonances produced in the reaction \reaction. In the first
analysis step published in \refCite{Adolph:2015tqa}, a partial-wave
decomposition was performed independently in 100~\mThreePi bins each
divided into 11~\tpr bins, which serves as input for the
resonance-model fit presented in this paper. The PWA method and the
results are discussed in detail in \refCite{Adolph:2015tqa}. Here, we
summarize the facts relevant for the resonance-model fit, which is
introduced in \cref{sec:method}.
Our basic assumption for the PWA model is that resonances dominate the
$3\pi$ intermediate states $X^-$ that are produced in the scattering
process. We therefore describe the process as an inelastic two-body
scattering reaction $\pi^- + p \to X^- + p_{\text{recoil}}$ with
subsequent decay of $X^-$ into the three final-state pions,
$X^- \to \threePi$.
For fixed center-of-mass energy~$\sqrt{s}$, the kinematic distribution
of the final-state particles depends on \mThreePi, \tpr, and a set of
five additional phase-space variables represented by $\tau$. The
latter fully describes the three-body decay. The set of variables
used in our analysis is defined in Sec.~III~A of
\refCite{Adolph:2015tqa}. For the reaction \reaction, a perfect
detector with unit acceptance would measure the intensity distribution
\begin{equation}
\label{eq:intensity_def}
\begin{splitOrNot}
\mathcal{I}(\mThreePi, \tpr, \tau)
\alignOrNot\equiv \frac{\dif{N}}{\dif{\mThreePi}\, \dif{\tpr}\, \dif{\varphi_3}(\mThreePi, \tau)} \newLineOrNot
\alignOrNot\propto \frac{\dif{\sigma_{\reaction}}}{\dif{\mThreePi}\, \dif{\tpr}\, \dif{\varphi_3}(\mThreePi, \tau)} \newLineOrNot
\alignOrNot\propto \mThreePi\, \Abs{\mathcal{M}_{fi}(\mThreePi, \tpr, \tau)}^2,
\end{splitOrNot}
\end{equation}
where $N$ is the number of events, $\dif{\varphi_3}$ the
five-dimensional differential Lorentz-invariant three-body phase-space
element of the three outgoing pions, $\dif{\sigma_{\reaction}}$ the
differential cross section for the measured process, and
$\mathcal{M}_{fi}$ the transition matrix element from the initial to
the final state.\footnote{To simplify notation, the term
$\Abs{\mathcal{M}_{fi}}^2$ is assumed to include incoherent sums,
\eg over the helicities of the particles with nonzero spin [see
\cref{eq:intensity_ansatz}].} The right-hand side of
\cref{eq:intensity_def} is derived from Fermi's golden rule as given
\eg in \refCite{pdg_kinematics:2016}. We factorize the phase space of
the four outgoing particles into the two-body phase space for $X^-$
and $p_{\text{recoil}}$ and the three-body phase space for the decay
$X^- \to \threePi$, which introduces the factor \mThreePi. The
differential two-body phase space element is expressed in terms of
\tpr. All constant factors have been dropped from the right-hand side
of \cref{eq:intensity_def}. It is worth noting that, since
$\mathcal{I}$ is differential in the three-body phase-space element,
it is independent of the particular choice of the
variables~$\tau$.\footnote{The simplest parametrization of the
differential three-body phase-space element is in terms of the
energies of two of the final-state particles, \eg $E_1$ and $E_3$,
and the Euler angles $(\alpha, \beta, \gamma)$ that define the
spatial orientation of the plane that is formed by the daughter
particles in the $X^-$~rest frame:
\begin{equation*}
\dif{\varphi_3}(\mThreePi, \underbrace{E_1, E_3, \alpha, \beta, \gamma}_{\equiv \tau})
\propto \dif{E_1}\, \dif{E_3}\, \dif{\alpha}\, \dif{\cos \beta}\, \dif{\gamma}
\end{equation*}
For different choices of~$\tau$, the respective Jacobians have to be
taken into account.}
Since we assume that the $3\pi$ intermediate state is dominated by
resonances, the production of $X^-$ can be treated independently of
its decay (see \cref{fig:3pi_reaction_isobar}). The amplitude for a
particular intermediate state $X^-$ therefore factorizes into two
terms: \one~the \emph{transition amplitude}
$\mathcal{T}(\mThreePi, \tpr)$, which encodes the \mThreePi-dependent
strength and phase of the production of a state $X^-$ with specific
quantum numbers, and \two~the \emph{decay amplitude}
$\Psi(\mThreePi, \tau)$, which describes the decay of $X^-$ into a
particular \threePi final state.
As demonstrated in \refCite{Adolph:2015tqa}, we observe dominant
contributions of resonances in the \twoPi subsystem of the \threePi
final state. Therefore, we factorize the three-body decay amplitude
into two two-body decay terms (see \cref{fig:3pi_reaction_isobar}).
This factorization is known as the \emph{isobar model}\footnote{An
early detailed discussion can be found in \refCite{herndon:1973yn}.}
and the intermediate neutral \twoPi state $\xi^0$ is called the
\emph{isobar}. In the first two-body decay, $X^- \to \xi^0 \pi^-$, a
relative orbital angular momentum $L$ appears. The orbital angular
momentum in the isobar decay $\xi^0 \to \twoPi$ is equal to the spin
of the isobar. For a given three-pion mass, the decay amplitude
accounts for the deviation of the kinematic distribution of the three
outgoing pions from the isotropic phase-space distribution and is
specified by the quantum numbers of $X^-$ (isospin $I$, $G$ parity,
spin $J$, parity $P$, $C$ parity, and the spin projection $M$) and its
decay mode ($\xi$, $L$). For convenience, we introduce the
partial-wave index
\begin{equation}
\label{eq:wave_index}
a \equiv (\IG, \JPC, M, \xi, L).
\end{equation}
We describe the decay $X^- \to \xi^0 \pi^-$ in the Gottfried-Jackson
rest frame of the $X^-$ (see Sec.~III~A in \refCite{Adolph:2015tqa}),
where the quantization axis is chosen along the beam direction, and we
employ the \emph{reflectivity basis}, where positive and negative
values of the spin projection $M$ are combined to yield amplitudes
characterized by $M \geq 0$ and by the reflectivity quantum number
$\refl = \pm 1$~\cite{chung:1974fq}. The reflectivity \refl is the
eigenvalue of the reflection through the $X^-$~production plane. In
the high-energy limit, \refl corresponds to the naturality of the
exchange in the scattering process such that $\refl = +1$ corresponds
to natural spin parity of the exchanged Reggeon, \ie
$\JP = (\text{odd})^-$ or $(\text{even})^+$ transfer to the beam
particle. Conversely, $\refl = -1$ corresponds to unnatural spin
parity of the exchanged Reggeon, \ie $\JP = (\text{even})^-$ or
$(\text{odd})^+$ transfer to the beam particle.
The isobar-model decay amplitudes are calculable using the helicity
formalism up to the unknown complex-valued couplings
$\alpha_{X \to \xi \pi}$ and $\alpha_{\xi \to \pi \pi}$, which appear
at each decay vertex (see \cref{fig:3pi_reaction_isobar}). Assuming
that these couplings do not depend on the kinematics, they are moved
from the decay amplitudes into the transition amplitudes. The
transition and decay amplitudes redefined in this way are represented
by $\Widebar{\mathcal{T}}_a(\mThreePi, \tpr)$ and
$\Widebar{\Psi}_a(\mThreePi, \tau)$. It is worth noting that due to
this redefinition, the transition amplitudes $\Widebar{\mathcal{T}}_a$
depend not only on the $X^-$ quantum numbers but also on the $X^-$
decay mode. Details are explained in Sec.~III~B of
\refCite{Adolph:2015tqa}.
We model the intensity distribution
$\mathcal{I}(\mThreePi, \tpr, \tau)$ of the final-state particles in
\cref{eq:intensity_def} as a truncated series of partial waves, which
are denoted by the index $a$ as defined in \cref{eq:wave_index}. The
$N_\text{waves}^\refl$ partial-wave amplitudes for the contributing
intermediate $X^-$ states and their decays are summed coherently:
\begin{multlineOrEq}
\label{eq:intensity_ansatz}
\mathcal{I}(\mThreePi, \tpr, \tau) \newLineOrNot
= \sum_{\refl = \pm1} \sum_{r = 1}^{N_r^\refl} \Abs[3]{\sum_a^{N_\text{waves}^\refl}
\Widebar{\mathcal{T}}_a^{r \refl}(\mThreePi, \tpr)\,
\Widebar{\Psi}_a^\refl(\mThreePi, \tau)}^2 \newLineOrNot
+ \Widebar{\mathcal{T}}_\text{flat}^2(\mThreePi, \tpr).
\end{multlineOrEq}
In the above formula,\footnote{\Cref{eq:intensity_ansatz} corresponds
to Eq.~(17) in \refCite{Adolph:2015tqa}. The explicit factor
\mThreePi that appears on the right-hand side of
\cref{eq:intensity_def} is absorbed into
$\Widebar{\mathcal{T}}_a^{r \refl}(\mThreePi, \tpr)$.} the
contributions to the intensity distribution corresponding to
reflectivity \refl and \emph{rank index} $r$ (see next paragraph) are
summed incoherently. The former is due to parity conservation that
forbids interference of states with different
reflectivities~\cite{chung:1974fq}. We also introduced an additional
incoherently added wave that is isotropic in the three-body phase
space and is referred to as \emph{flat wave}. The purpose of this
wave is to absorb intensity of events with three uncorrelated pions in
the final state, \eg nonexclusive background. The corresponding
transition amplitude $\Widebar{\mathcal{T}}_\text{flat}$ is
real-valued.\footnote{The decay amplitude
$\Widebar{\Psi}_\text{flat}(\mThreePi, \tau)$ of the flat wave is a
constant and was set to unity.}
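The structure of \cref{eq:intensity_ansatz}, \ie coherent summation over partial waves and incoherent summation over reflectivity and rank, can be made explicit in a few lines of Python (all amplitude values below are made up for illustration and are not fit results):

```python
# Illustrative sketch (not the COMPASS fitter): evaluate the intensity model
# of Eq. (5) at a single phase-space point tau, with toy amplitude values.
import numpy as np

def intensity(T, psi, T_flat):
    """T[eps]: (N_r, N_waves) transition amplitudes; psi[eps]: (N_waves,)
    decay amplitudes at tau.  Partial waves are summed coherently; rank and
    reflectivity contributions are summed incoherently."""
    total = T_flat**2                         # flat wave, added incoherently
    for eps in T:
        coherent = T[eps] @ psi[eps]          # shape (N_r,): coherent sum per rank
        total += np.sum(np.abs(coherent)**2)  # incoherent sum over rank
    return total

T = {+1: np.array([[0.8 + 0.2j, 0.1 - 0.4j]]),   # rank 1, two waves
     -1: np.array([[0.05 + 0.0j]])}              # rank 1, one wave
psi = {+1: np.array([0.3 + 0.1j, -0.2 + 0.5j]),
       -1: np.array([0.1 + 0.2j])}
print(intensity(T, psi, T_flat=0.02))
```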
Several processes, \eg spin-flip and spin-nonflip processes or the
excitation of baryon resonances at the target vertex, may disturb the
coherence of the intermediate states. Incoherence may also be
introduced by integrating over large ranges of \tpr, if intermediate
states are produced with different dependences on \tpr. Incoherences
are incorporated by the additional rank index~$r$ for the transition
amplitudes, which is summed over incoherently [see
\cref{eq:intensity_ansatz}]. In general, the \emph{rank} $N_r$ may be
different in the two reflectivity sectors, \ie $N_r^\refl$.
The goal of the partial-wave analysis is to extract the unknown
transition amplitudes in \cref{eq:intensity_ansatz} from the data.
The $\Widebar{\mathcal{T}}_a^{r \refl}$ contain information about the
intermediate $3\pi$ resonances. Since the \mThreePi dependence of the
transition amplitudes is unknown, the event sample is divided into
\mThreePi bins that are chosen to be much narrower than the width of
typical hadronic resonances. The analyzed mass range
\SIvalRange{0.5}{\mThreePi}{2.5}{\GeVcc} is subdivided into
100~equidistant \mThreePi bins with a width of \SI{20}{\MeVcc}.
Within each mass bin, the \mThreePi dependence of the amplitudes is
assumed to be negligible, so that the transition amplitudes only
depend on \tpr.
We do not know \apriori the \tpr dependence of the transition
amplitudes. In previous analyses, it was often assumed that the
\mThreePi and \tpr dependences are uncorrelated and the \tpr
dependence was modeled by real functions $g_a^\refl(\tpr)$. These
functions were extracted from the analyzed data sample by integrating
over wide \mThreePi ranges, often only for groups of waves. We have
shown in \refCite{Adolph:2015tqa} that for the process under study
this assumption is not valid. The \tpr dependence of the intensity of
individual waves depends on \mThreePi and may differ significantly
from wave to wave. This agrees with previous studies of diffractive
dissociation of pions (see \eg\
\refsCite{alekseev:2009aa,dzierba:2005jg,kachaev:2001jj,daum:1980ay}),
which revealed contributions of nonresonant background processes such
as the Deck effect~\cite{deck:1964hm}. The nonresonant processes
typically exhibit \mThreePi and \tpr dependences that are different
from those of resonances. In particular, the analyses presented
in~\refsCite{dzierba:2005jg,daum:1980ay} showed the importance of the
kinematic variable \tpr in a partial-wave analysis of the
diffractively produced $3\pi$ system and illustrated the power of
accounting for the different \tpr dependences of the reaction
mechanisms and also of the different resonances. Therefore, for each
\mThreePi bin the partial-wave decomposition was performed
independently in 11~nonequidistant \tpr slices of the analyzed range
\SIvalRange{0.1}{\tpr}{1.0}{\GeVcsq} as listed in \cref{tab:t-bins}.
Within each \tpr bin, we assumed the transition amplitudes to be
independent of \tpr. In this work, we further develop this approach
to better disentangle resonant and nonresonant components (see
\cref{sec:method,sec:production_phases}).
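Nonequidistant bins with approximately equal population, as listed in \cref{tab:t-bins}, can be constructed from quantiles of the measured \tpr distribution. The Python sketch below illustrates such a quantile-based construction on a toy sample with a roughly exponential diffractive slope; it is an illustration of the principle, not necessarily the exact procedure applied to the real data.

```python
# Illustrative sketch (assumption: quantile-based construction on toy data):
# nonequidistant t' bin borders with approximately equal event counts.
import numpy as np

rng = np.random.default_rng(1)
# Toy t' sample with a roughly exponential diffractive slope, truncated
# to the analyzed range 0.1 < t' < 1.0 GeV^2/c^2:
tpr = rng.exponential(scale=0.2, size=100_000)
tpr = tpr[(tpr > 0.1) & (tpr < 1.0)]

n_bins = 11
edges = np.quantile(tpr, np.linspace(0.0, 1.0, n_bins + 1))
counts, _ = np.histogram(tpr, bins=edges)
print(edges.round(3))
print(counts)   # approximately equal population per bin
```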
\begin{table*}[tbp]
\sisetup{%
round-mode = places,
round-precision = 3
}
\caption{Borders of the 11~nonequidistant \tpr bins, in which
the partial-wave analysis is performed. The intervals are chosen
such that each bin contains approximately \num[round-mode =
places, round-precision = 1]{4.6e6} events. Only the last range
from \SIrange{0.448588}{1.000000}{\GeVcsq} is subdivided further
into two bins.}
\label{tab:t-bins}
\renewcommand{\arraystretch}{1.2}
\newcolumntype{Z}{%
>{\Makebox[0pt][c]\bgroup}%
c%
<{\egroup}%
}
\setlength{\tabcolsep}{0pt} %
\ifMultiColumnLayout{%
\begin{tabular}{l@{\extracolsep{12pt}}c@{\extracolsep{6pt}}Z*{10}{cZ}c}
\hline
\hline
Bin && 1 && 2 && 3 && 4 && 5 && 6 && 7 && 8 && 9 && 10 && 11 & \\
\hline
\tpr [\si{\GeVcsq}] &
\num{0.100000} &&
\num{0.112853} &&
\num{0.127471} &&
\num{0.144385} &&
\num{0.164401} &&
\num{0.188816} &&
\num{0.219907} &&
\num{0.262177} &&
\num{0.326380} &&
\num{0.448588} &&
\num{0.724294} &&
\num{1.000000} \\
\hline
\hline
\end{tabular}%
}{%
\begin{tabular}{l@{\extracolsep{12pt}}c@{\extracolsep{6pt}}Z*{5}{cZ}c}
\hline
\hline
Bin && 1 && 2 && 3 && 4 && 5 && 6 \\
\hline
\tpr [\si{\GeVcsq}] &
\num{0.100000} &&
\num{0.112853} &&
\num{0.127471} &&
\num{0.144385} &&
\num{0.164401} &&
\num{0.188816} &&
\num{0.219907} \\
\hline
\hline
\end{tabular}
\\[3ex]
\begin{tabular}{l@{\extracolsep{12pt}}c@{\extracolsep{6pt}}Z*{4}{cZ}c}
\hline
\hline
Bin && 7 && 8 && 9 && 10 && 11 & \\
\hline
\tpr [\si{\GeVcsq}] &
\num{0.219907} &&
\num{0.262177} &&
\num{0.326380} &&
\num{0.448588} &&
\num{0.724294} &&
\num{1.000000} \\
\hline
\hline
\end{tabular}%
}
\end{table*}
In order to simplify notation, we consider the intensity in
\cref{eq:intensity_ansatz} in a particular $(\mThreePi, \tpr)$ bin.
Within this kinematic bin, \mThreePi and \tpr are considered to be
constant, and hence $\mathcal{I}$ is only a function of the set $\tau$
of phase-space variables.
In the resonance-model fit, special care has to be taken about the
normalization of the transition amplitudes. A consistent
normalization that makes the transition amplitudes comparable across
different experiments is achieved by normalizing the decay amplitudes
to the integrals $I_{a a}^\refl$, which are the diagonal elements of
the integral matrix
\begin{equation}
\label{eq:integral_matrix_def}
I_{a b}^\refl(\mThreePi)
\equiv \int\! \dif{\varphi_3(\tau; \mThreePi)}\,
\Widebar{\Psi}_a^\refl(\tau; \mThreePi)\, \Widebar{\Psi}_b^{\refl \text{*}}(\tau; \mThreePi),
\end{equation}
where $a$~and~$b$ are wave indices as defined in \cref{eq:wave_index}.
We define\footnote{Since the decay amplitude
$\Widebar{\Psi}_\text{flat}$ of the flat wave was set to unity, the
corresponding normalized decay amplitude is given by
\begin{equation}
\label{eq:decay_amplitude_norm_flat}
\Psi_\text{flat}(\tau; \mThreePi)
\equiv \frac{1}{\sqrt{V_{\varphi_3}(\mThreePi)}}
\end{equation}
with
\begin{equation}
\label{eq:phase_space_vol}
V_{\varphi_3}(\mThreePi)
\equiv \int\! \dif{\varphi_3(\tau; \mThreePi)}.
\end{equation}}
\begin{equation}
\label{eq:decay_amplitude_norm}
\Psi_a^\refl(\tau; \mThreePi)
\equiv \frac{\Widebar{\Psi}_a^\refl(\tau; \mThreePi)}{\sqrt{I_{a
a}^\refl(\mThreePi)}}.
\end{equation}
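The integral matrix of \cref{eq:integral_matrix_def} and the normalization of \cref{eq:decay_amplitude_norm} can be evaluated by Monte Carlo integration over the three-body phase space; the sketch below assumes this approach and uses toy one-dimensional decay amplitudes in place of the isobar-model ones. The common phase-space volume factor cancels in the normalized overlap.

```python
# Illustrative sketch (assumption: Monte Carlo estimate of Eq. (6) with toy
# one-dimensional decay amplitudes in place of the isobar-model ones).
import numpy as np

rng = np.random.default_rng(7)
n_mc = 50_000
tau = rng.uniform(-1.0, 1.0, size=n_mc)   # stand-in for the 5D variables

# Two toy (unnormalized) decay amplitudes evaluated on the MC sample:
psi_bar = np.stack([1.5 * tau + 0.2j * tau**2,      # "wave a"
                    tau**2 - 0.5 + 0.1j * tau])     # "wave b"

# MC estimate of I_ab, up to the common phase-space volume factor:
I = psi_bar @ psi_bar.conj().T / n_mc

# Normalization as in Eq. (7): unit diagonal; the off-diagonal element
# becomes the overlap factor I_ab / sqrt(I_aa * I_bb):
psi = psi_bar / np.sqrt(np.real(np.diag(I)))[:, None]
I_norm = psi @ psi.conj().T / n_mc
print(I_norm.round(3))
```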
The normalization of the transition amplitudes is determined by the
expression for the number of events $N_{\text{pred}}$ predicted for
the $(\mThreePi, \tpr)$ bin by the model in
\cref{eq:intensity_ansatz}:
\begin{equation}
\label{eq:expected_ev_nmb_corr}
N_{\text{pred}}(\mThreePi, \tpr)
= \int\! \dif{\varphi_3(\tau; \mThreePi)}\, \mathcal{I}(\tau; \mThreePi, \tpr).
\end{equation}
Based on \cref{eq:decay_amplitude_norm}, the transition amplitudes are
redefined according to\footnote{Similarly, the transition amplitude of
the flat wave is redefined based on
\cref{eq:decay_amplitude_norm_flat}:
\begin{equation}
\label{eq:flat_amplitude_norm}
\mathcal{T}_\text{flat}(\mThreePi, \tpr)
\equiv \Widebar{\mathcal{T}}_\text{flat}(\mThreePi, \tpr)\, \sqrt{\smash[b]{V_{\varphi_3}}(\mThreePi)}.
\end{equation}}
\begin{equation}
\label{eq:prod_amplitude_norm}
\mathcal{T}_a^{r \refl}(\mThreePi, \tpr)
\equiv \Widebar{\mathcal{T}}_a^{r \refl}(\mThreePi, \tpr)\, \sqrt{I_{a a}^\refl(\mThreePi)},
\end{equation}
so that $\mathcal{I}$ remains unchanged. Using the fact that the
decay amplitudes $\Psi_a^\refl$ are normalized via
\cref{eq:decay_amplitude_norm,eq:decay_amplitude_norm_flat},
\cref{eq:expected_ev_nmb_corr} reads
\begin{multlineOrEq}
\label{eq:expected_ev_nmb_corr_amp}
N_{\text{pred}}
= \sum_{\refl = \pm1} \Bigg\{ \sum_a^{N_\text{waves}^\refl}
\sum_{r = 1}^{N_r^\refl} \Abs[1]{\mathcal{T}_a^{r \refl}}^2 \newLineOrNot
\ifMultiColumnLayout{\mbox{}\hfill}{{}}
+ 2 \sum_{a < b}^{N_\text{waves}^\refl}
\Re\!\sBrk[4]{\sum_{r = 1}^{N_r^\refl} \mathcal{T}_a^{r \refl}\, \mathcal{T}_b^{r \refl \text{*}}
\frac{I_{a b}^\refl}{\sqrt{I_{a a}^\refl}\, \sqrt{I_{b b}^\refl}}} \Bigg\} \newLineOrNot
+ \mathcal{T}_\text{flat}^2.
\end{multlineOrEq}
We introduce the \emph{spin-density matrix} for the
$(\mThreePi, \tpr)$ bin,
\begin{equation}
\label{eq:spin_density}
\varrho_{a b}^\refl(\mThreePi, \tpr)
\equiv \sum_{r = 1}^{N_r^\refl} \mathcal{T}_a^{r \refl}(\mThreePi, \tpr)\, \mathcal{T}_b^{r \refl \text{*}}(\mThreePi, \tpr),
\end{equation}
which represents the full information that can be obtained about the
$X^-$ states. The parameter~$N_r^\refl$ is the \emph{rank} of the
spin-density matrix. With the above,
\cref{eq:expected_ev_nmb_corr_amp} simplifies to
\begin{multlineOrEq}
\label{eq:expected_ev_nmb_corr_rho}
N_{\text{pred}}
= \sum_{\refl = \pm1} \Bigg\{ \sum_a^{N_\text{waves}^\refl}
\ifMultiColumnLayout{\overbrace}{\underbrace}{\varrho_{a a}^\refl
\ifMultiColumnLayout{}{\vphantom{\sum_r}}}\ifMultiColumnLayout{^}{_}{%
\mathclap{\displaystyle \text{\ifMultiColumnLayout{\hspace*{2.5em}}{}Intensities}}} \newLineOrNot
\ifMultiColumnLayout{\mbox{}\hfill}{{}}
+ \sum_{a < b}^{N_\text{waves}^\refl}
\underbrace{2 \Re\!\sBrk[4]{\varrho_{a b}^\refl
\frac{I_{a b}^\refl}{\sqrt{I_{a a}^\refl}\, \sqrt{I_{b b}^\refl}}}}_{%
\displaystyle \text{Overlaps}} \Bigg\} \newLineOrNot
+ \mathcal{T}_\text{flat}^2.
\end{multlineOrEq}
From this equation, we can derive an interpretation for the
spin-density matrix elements. The diagonal elements
$\varrho_{a a}^\refl$ are the \emph{partial-wave intensities}, \ie the
expected number of events in wave~$a$.\footnote{For a real experiment,
this corresponds to the acceptance-corrected number of events.} The
off-diagonal elements $\varrho_{a b}^\refl$, which contain information
about the relative phase between waves~$a$ and~$b$, contribute to the
so-called \emph{overlaps}, which are the number of events originating
from the interference between waves~$a$ and~$b$.\footnote{For
constructive interference, this number is positive; for destructive
interference, it is negative.} Limiting the summation in
\cref{eq:expected_ev_nmb_corr_rho} to a subset of partial waves yields
the expected number of events in these waves including all
interferences. Such sums will be denoted as \emph{coherent sums} of
partial waves in the following text.
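The relations among transition amplitudes, spin-density matrix elements, intensities, and overlaps can be summarized in a short Python sketch; the rank-2 amplitudes for two waves below are toy values chosen for illustration.

```python
# Illustrative sketch (toy numbers): the spin-density matrix of Eq. (10)
# built from rank-2 transition amplitudes for two waves in one bin.
import numpy as np

# T[r, a]: transition amplitude of wave a for rank index r
T = np.array([[1.0 + 0.5j, 0.3 - 0.8j],
              [0.2 + 0.1j, 0.4 + 0.0j]])

# rho_ab = sum_r T[r, a] * conj(T[r, b]); Hermitian and positive semidefinite
rho = T.T @ T.conj()

intensities = np.real(np.diag(rho))   # expected number of events per wave
rel_phase = np.angle(rho[0, 1])       # effective relative phase (rank > 1)
print(intensities, np.degrees(rel_phase))
```

For rank $N_r^\refl = 1$, the phase of the off-diagonal element is the relative phase of the two transition amplitudes; for higher rank it is an effective phase of the incoherent sum.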
We used an extended maximum-likelihood approach~\cite{Barlow:1990vc}
to determine the unknown transition amplitudes
$\mathcal{T}_a^{r \refl}$ by fitting the model intensity
$\mathcal{I}(\tau)$ of \cref{eq:intensity_ansatz} to the measured
$\tau$ distribution, in narrow bins of \mThreePi and \tpr. The
extended likelihood function for a $(\mThreePi, \tpr)$
bin,\footnote{For better readability, we do not explicitly write the
\mThreePi and \tpr dependences.}
\begin{equation}
\label{eq:likelihood_function_ansatz}
\mathcal{L}
= \underbrace{\frac{\Widebar{N}^{N}\, e^{-\Widebar{N}}}{N!\vphantom{\Widebar{N}}}}_{%
\substack{\displaystyle{\text{Poisson\vphantom{y}}} \\ \displaystyle{\text{probability}}}}\,
\prod_{i = 1}^{N}
\underbrace{\frac{\mathcal{I}(\tau_i)}{\Widebar{N}}}_{%
\mathclap{\substack{\displaystyle{\text{Probability}} \\ \displaystyle{\text{for event $i$}}}}},
\end{equation}
contains a Poisson term for the actually \emph{observed} number of
events $N(\mThreePi, \tpr)$ and the number of events
\begin{multlineOrEq}
\label{eq:expected_ev_nmb}
\Widebar{N}(\mThreePi, \tpr) \newLineOrNot
= \int\! \dif{\varphi_3(\tau; \mThreePi)}\, \eta(\tau; \mThreePi, \tpr)\, \mathcal{I}(\tau; \mThreePi, \tpr)
\end{multlineOrEq}
that is \emph{expected} to be observed by the detector. Via this
term, the detection efficiency $\eta(\tau; \mThreePi, \tpr)$ of the
experimental setup is taken into account by the PWA model. In
addition, \cref{eq:expected_ev_nmb} together with
\cref{eq:decay_amplitude_norm,eq:decay_amplitude_norm_flat} ensures
the correct normalization of the transition amplitudes according to
\cref{eq:prod_amplitude_norm,eq:flat_amplitude_norm}. This also fixes
the normalization of the diagonal elements of the spin-density matrix
in \cref{eq:spin_density} to the acceptance-corrected number of events
in the particular wave.
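Taking the logarithm of \cref{eq:likelihood_function_ansatz}, the $\Widebar{N}$ factors of the Poisson term and of the per-event probabilities cancel, and the constant $\ln N!$ can be dropped. A minimal numerical sketch of the resulting objective function (function and variable names are hypothetical; the model intensities are taken as given inputs):

```python
import numpy as np

def neg_log_likelihood(model_intensities, n_expected):
    """Extended negative log-likelihood for one (m3pi, t') bin.

    model_intensities: model intensity I(tau_i) evaluated at each of the
                       N observed events (hypothetical input array)
    n_expected:        acceptance-weighted model integral N-bar

    After taking the log and dropping the constant ln N!, the quantity
    to minimize is  N-bar - sum_i ln I(tau_i).
    """
    return n_expected - np.sum(np.log(model_intensities))
```

In a real fit, `n_expected` would be recomputed from the model parameters at every minimization step via \cref{eq:expected_ev_nmb}.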
In principle, the partial-wave expansion in \cref{eq:intensity_ansatz}
includes an infinite number of waves. In practice, the expansion
series has to be truncated. We thus have to define a \emph{wave set}
describing the data sufficiently well, without too many free
parameters. We included \pipiS, \Prho, \PfZero[980], \PfTwo,
\PfZero[1500], and \PrhoThree as isobars in the fit model, where
\pipiS represents a parametrization of the broad component of the
\pipiSW, which dominates the \mTwoPi spectrum from low to intermediate
two-pion masses and exhibits a slow phase motion (see Fig.~10 in
\refCite{Adolph:2015tqa}). This selection of isobars is based on
features observed in the \twoPi invariant mass spectrum (see
\refCite{Adolph:2015tqa}) and on analyses of previous
experiments~\cite{amelin:1995gu,adams:1998ff,kachaev:2001jj,chung:2002pu,dzierba:2005jg,alekseev:2009aa}.
Based on the six isobars, we have constructed a set of 88~partial
waves, \ie 80~waves with reflectivity $\refl = +1$, seven waves with
$\refl = -1$, and a noninterfering flat wave representing three
uncorrelated pions (see Table~IX in Appendix~A of
\refCite{Adolph:2015tqa} for a complete list). This wave set is the
largest used so far in a PWA of the \threePi final state. It includes
partial waves with spin $J \leq 6$, orbital angular momentum
$L \leq 6$, and spin projection $M = \numlist{0;1;2}$. The wave set
consists mainly of positive-reflectivity waves, which is expected due
to Pomeron dominance at high energies. As discussed in
\refCite{Adolph:2015tqa}, it was found that the ranks
$N_r^{(\refl = +1)} = 1$ and $N_r^{(\refl = -1)} = 2$ describe the
data well. In the reflectivity basis, partial waves are completely
defined by the wave index~$a$, as given in \cref{eq:wave_index}, and
the reflectivity \refl. For the remaining text, we adopt the
\emph{partial-wave notation}
\wave{J}{PC}{M}{\refl}{[\text{isobar}]}{L}.
The \emph{total intensity} of all partial waves is defined as the
total number of acceptance-corrected events as given by
\cref{eq:expected_ev_nmb_corr}. The \emph{relative intensity} of a
particular partial wave, as \eg listed in
\cref{tab:method:fitmodel:waveset} in \cref{sec:method}, is defined as
the ratio of its intensity integral over the analyzed range
\SIvalRange{0.5}{\mThreePi}{2.5}{\GeVcc} and the corresponding
integral of the total intensity. Owing to interference effects
between the waves, \ie overlaps, this value is in general different
from the contribution of a wave to the total intensity.\footnote{The
relative intensities include effects from interference due to Bose
symmetrization of the two indistinguishable final-state $\pi^-$.}
Hence in our fit, the relative intensities of all 88~partial waves add
up to \SI{105.3}{\percent} instead of \SI{100}{\percent}.
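The difference between relative intensities and coherent sums can be illustrated with a toy example of two interfering waves. For simplicity, the decay amplitudes are assumed orthonormal, so that the overlap of \cref{eq:expected_ev_nmb_corr_rho} reduces to $2\Re[\mathcal{T}_a\, \mathcal{T}_b^\text{*}]$ (amplitude values are arbitrary):

```python
import numpy as np

# Toy transition amplitudes of two interfering waves (arbitrary values);
# the decay amplitudes are assumed orthonormal, so the overlap reduces
# to 2 Re[T_a T_b*].
T = np.array([3.0 + 0.0j, 1.0 + 1.0j])

intensities = np.abs(T)**2                     # partial-wave intensities
overlap = 2.0 * np.real(T[0] * np.conj(T[1]))  # interference contribution
coherent_sum = np.abs(np.sum(T))**2            # includes the overlap

# The incoherent sum of intensities differs from the coherent sum by the
# overlap, so relative intensities need not add up to 100%.
assert np.isclose(coherent_sum, intensities.sum() + overlap)
```

For destructive interference (negative overlap), the incoherent sum exceeds the coherent one, as is the case for the full 88-wave set.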
As shown in \refCite{Adolph:2015tqa}, the waves with negative
reflectivity corresponding to unnatural-parity exchange processes
contribute only \SI{2.2}{\percent} to the total intensity and do not
interfere with the positive-reflectivity waves. This dominance of
natural-parity exchange processes is consistent with the expected
dominance of the Pomeron contribution at COMPASS energies. In this
paper, we only consider a selection of positive-reflectivity partial
waves.
%
\section{Resonance-model fit}
\label{sec:method}
The goal of the analysis described in this paper is to extract $3\pi$
resonances contributing to the reaction \reaction and to determine
their quantum numbers and parameters, \ie masses and widths. The
starting point of the analysis is the spin-density matrix
$\varrho_{a b}(\mThreePi, \tpr)$ as defined in \cref{eq:spin_density}.
It has been extracted from the data in the first step of the analysis
by performing a partial-wave decomposition independently in 100~bins
of \mThreePi and 11~bins of \tpr for each \mThreePi bin using a model
with 88~waves (see \refCite{Adolph:2015tqa} and
\cref{sec:mass-independent_fit}).
For the resonance extraction presented here, we select a subset of
waves that exhibit resonance signals in their intensity spectra and in
their phase motions. Some waves contain well-known resonances that
are used as an interferometer to study the resonance content of more
interesting waves, such as the spin-exotic
\wave{1}{-+}{1}{+}{\Prho}{P} wave. All selected waves have positive
reflectivity. Since the spin-density submatrix of the $\refl = +1$
waves was chosen to have rank~1, we will drop reflectivity and rank
indices from \cref{eq:spin_density} and from all formulas that will
follow below. We therefore write
\begin{equation}
\label{eq:method:spindens}
\varrho_{a b}(\mThreePi, \tpr)
= \mathcal{T}_a(\mThreePi, \tpr)\, \mathcal{T}_b^\text{*}(\mThreePi, \tpr).
\end{equation}
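\Cref{eq:method:spindens} states that the spin-density submatrix is the outer product of the transition-amplitude vector with its complex conjugate and therefore has rank~1. A short numerical check with hypothetical amplitude values:

```python
import numpy as np

# Hypothetical transition amplitudes of three selected waves in one
# (m3pi, t') cell
T = np.array([2.0 + 1.0j, 0.5 - 1.5j, 1.0 + 0.0j])

rho = np.outer(T, np.conj(T))   # rank-1 spin-density submatrix

assert np.linalg.matrix_rank(rho) == 1
# Diagonal elements are the partial-wave intensities ...
assert np.allclose(np.diag(rho).real, np.abs(T)**2)
# ... and the off-diagonal phases are the relative phases between waves.
assert np.isclose(np.angle(rho[0, 1]), np.angle(T[0]) - np.angle(T[1]))
```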
For the selected waves, the \mThreePi and \tpr dependences of the
corresponding elements of the spin-density submatrix in
\cref{eq:method:spindens} are parametrized in terms of the transition
amplitudes. The fit model must therefore reproduce not only the
measured partial-wave intensities but also their mutual interferences.
Performing the analysis on the amplitude level greatly improves the
sensitivity for potential resonance signals. We employ a
parametrization similar to the ones used by previous analyses (see
\eg\
\refsCite{daum:1980ay,Amelin:1995gt,Chung:1999we,chung:2002pu,alekseev:2009aa,Salgado:2013dja}).
In the following, model quantities will be distinguished from the
corresponding measured quantities by a hat
(\enquote{\,$\widehat{\phantom{\rho}}$\;}).
We model the transition amplitudes $\mathcal{T}_a(\mThreePi, \tpr)$ as
the product of an amplitude $\mathcal{P}(\mThreePi, \tpr)$, which
accounts for the overall strength of the production of a $3\pi$ system
with mass \mThreePi at a given \tpr (see \cref{sec:method:fitmodel}),
and a term that coherently sums over possible resonance propagators
and nonresonant background contributions of the $3\pi$ system with
quantum numbers defined by the wave index~$a$ [see
\cref{eq:wave_index}]. The model $\widehat{\mathcal{T}}_a$ for the
measured transition amplitude $\mathcal{T}_a$ for wave~$a$ is
\begin{multlineOrEq}
\label{eq:method:transitionampl}
\widehat{\mathcal{T}}_a(\mThreePi, \tpr)
= \sqrt{I_{a a}(\mThreePi)}\, \sqrt{\mThreePi\vphantom{I_a}}\;
\mathcal{P}(\mThreePi, \tpr)\, \newLineTimesOrNot
\sum_{\mathclap{j\; \in\; \mathbb{S}_a}} \mathcal{C}_a^j(\tpr)\, \mathcal{D}_j(\mThreePi, \tpr; \zeta_j).
\end{multlineOrEq}
Here, $I_{a a}$ is the \emph{decay phase-space volume} of wave~$a$ as
defined in \cref{eq:integral_matrix_def}. This factor enters, because
the partial-wave intensities $\abs{\mathcal{T}_a}^2$ are normalized
via \cref{eq:prod_amplitude_norm} to represent the
acceptance-corrected number of events in wave~$a$. The factor
$\sqrt{\mThreePi}$ results from the splitting of the four-body phase
space of the final-state particles in \cref{eq:intensity_def}. The
functions $\mathcal{D}_j(\mThreePi, \tpr; \zeta_j)$ are the
\emph{dynamical amplitudes} that represent the resonant or nonresonant
wave components, which are enumerated by the index~$j$. The coherent
sum runs over the subset $\mathbb{S}_a$ of the indices of those wave
components that we assume to appear in wave~$a$.  The dynamical
components that we assume to appear in wave~$a$.
amplitudes depend on the set $\zeta_j$ of \emph{shape parameters},
which are \eg the masses and widths in the case of resonance
components. It should be stressed that if the same wave component
$\mathcal{D}_j(\mThreePi, \tpr; \zeta_j)$ appears in several partial
waves, which must have the same \JPC quantum numbers, it has the same
values of the shape parameters $\zeta_j$. The coefficients
$\mathcal{C}_a^j(\tpr)$ in \cref{eq:method:transitionampl} are the
so-called \emph{coupling amplitudes}. They collect the unknown parts
of the model, which are the \tpr dependences of the production
strengths and phases of the $X^-$ and the complex-valued couplings,
$\alpha_{X \to \xi \pi}$ and $\alpha_{\xi \to \pi \pi}$, which appear
at the two vertices in the isobar decay chain.
Based on \cref{eq:method:transitionampl}, we can formulate the model
for the spin-density submatrix of the selected waves
\begin{wideEqOrNot}%
\begin{equation}
\label{eq:method:param:spindens}
\begin{aligned}
\widehat{\varrho}_{a b}(\mThreePi, \tpr)
&= \widehat{\mathcal{T}}_a(\mThreePi, \tpr)\, \widehat{\mathcal{T}}_b^\text{*}(\mThreePi, \tpr) \\
&= \ifMultiColumnLayout{}{\aligned[t]}
\ifMultiColumnLayout{}{&}\sqrt{I_{a a}(\mThreePi)}\, \sqrt{I_{b b}(\mThreePi)}\;
\mThreePi\, \Abs[1]{\mathcal{P}(\mThreePi, \tpr)}^2 \ifMultiColumnLayout{}{\\}
\ifMultiColumnLayout{}{&\times} \sBrk[4]{\,\smashoperator[r]{\sum_{j\; \in\; \mathbb{S}_a}}
\mathcal{C}_a^j(\tpr)\, \mathcal{D}_j(\mThreePi, \tpr; \zeta_j)}
\sBrk[4]{\,\smashoperator[r]{\sum_{k\; \in\; \mathbb{S}_b}}
\mathcal{C}_b^k(\tpr)\, \mathcal{D}_k(\mThreePi, \tpr; \zeta_k)}^\text{*},
\ifMultiColumnLayout{}{\endaligned}
\end{aligned}
\end{equation}
\end{wideEqOrNot}%
which describes the \mThreePi and \tpr dependences of the measured
spin-density matrix elements $\varrho_{a b}(\mThreePi, \tpr)$. The
free parameters to be determined by the resonance-model fit are the
coupling amplitudes $\mathcal{C}_a^j(\tpr)$ and the shape parameters
$\zeta_j$.
In \cref{eq:method:param:spindens} we extended the commonly used
ansatz for the parametrization of the spin-density matrix to
explicitly include the \tpr dependence. In particular, the coupling
amplitudes $\mathcal{C}_a^j(\tpr)$ are allowed to take different
values in each \tpr bin. This novel approach allows us to perform for
the first time a \tpr-resolved resonance-model fit. The \tpr
information that was extracted in the mass-independent analysis
performed in the first analysis step (see
\cref{sec:mass-independent_fit}) is exploited here to better separate
the resonant and nonresonant contributions by allowing them to have
different \tpr dependences. The resonance-model fit yields as
additional results the \tpr dependence of the intensity and the
production phases of the wave components (see
\cref{sec:method:tp,sec:production_phases}).
Assuming factorization of production and decay of the intermediate
$3\pi$ state $X^-$, the resonant amplitudes
$\mathcal{D}^\text{R}_j(\mThreePi; \zeta^\text{R}_j)$, which represent
the on-shell propagators of the produced $3\pi$ resonances, should be
independent of \tpr. This is in particular true for the corresponding
shape parameters $\zeta^\text{R}_j$ of the resonant amplitudes, \ie
the masses and widths of the resonances. This constraint is built
into the model by using the same shape parameters across all \tpr
bins. Only the strengths and coupling phases of the resonant
components, which are represented by the $\mathcal{C}_a^j(\tpr)$, can
be chosen freely by the fit for each individual \tpr bin. We exploit
the factorization of production and decay further for the case where
a resonance appears in several partial waves that have the same
\JPCMrefl quantum numbers. These waves represent different decay
modes of the same $X^-$~state and differ only in the isobar~$\xi^0$ or
the orbital angular momentum~$L$. The resonant amplitude is expected
to follow the same \tpr dependence in these partial waves. This is
built into the model by fixing the \tpr dependence
$\mathcal{C}_b^j(\tpr)$ of a resonance~$j$ that appears in wave~$b$ to
the \tpr dependence $\mathcal{C}_a^j(\tpr)$ that this resonance has in
wave~$a$ via
\begin{equation}
\label{eq:method:branchingdefinition}
\mathcal{C}_b^j(\tpr)
= \prescript{}{b}{\mathcal{B}}_a^j\, \mathcal{C}_a^j(\tpr).
\end{equation}
This replaces the set of independent coupling amplitudes
$\mathcal{C}_b^j(\tpr)$ for wave~$b$ by a single \tpr-independent
complex-valued \emph{branching amplitude}
$\prescript{}{b}{\mathcal{B}}_a^j$ as a free fit parameter. This
quantity represents the relative strength and phase of the two decay
modes of resonance~$j$. The constraint expressed by
\cref{eq:method:branchingdefinition} significantly reduces the number
of free parameters and was also found to stabilize the fit (see
\cref{sec:systematics,sec:results}).
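The effect of \cref{eq:method:branchingdefinition} can be sketched numerically: the 22~real parameters of a free coupling amplitude (one complex value per \tpr bin) are replaced by the 2~real parameters of a single branching amplitude (all values below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
n_tprime_bins = 11

# Free coupling amplitudes of resonance j in wave a: one complex value
# per t' bin, i.e. 22 real fit parameters (hypothetical values).
C_a = rng.normal(size=n_tprime_bins) + 1j * rng.normal(size=n_tprime_bins)

# Constrained wave b: a single complex branching amplitude B (2 real
# parameters) replaces the 22 free parameters of C_b.
B = 0.6 * np.exp(1j * 0.4)
C_b = B * C_a

# The t' dependence of wave b is locked to that of wave a: the ratio is
# constant in t', with fixed relative strength and phase.
assert np.allclose(C_b / C_a, B)
```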
In general, the above assumptions do not hold for the nonresonant
amplitudes
$\mathcal{D}^\text{NR}_j(\mThreePi, \tpr; \zeta^\text{NR}_j)$. The
shape of their \mThreePi distribution may vary with \tpr and may also
depend on the $X^-$ quantum numbers and decay mode. Therefore, for
each wave in the fit, a separate nonresonant component is added to the
model. Although the nonresonant amplitudes may have an explicit \tpr
dependence, the shape parameters $\zeta^\text{NR}_j$ are kept the same
across all \tpr bins.
\subsection{Fit model}
\label{sec:method:fitmodel}
Ideally, the resonance model would describe the \mThreePi dependence
of the full $88 \times 88$ spin-density matrix obtained from the PWA
fit in the first analysis step. However, in practice such a fit would
require very large computing resources owing to the large number of
free parameters. In addition, some partial waves, which mostly have
small relative intensities, are affected by imperfections in the PWA
model. These imperfections may cause artifacts at the stage of the
mass-independent analysis that the physical model is not able to
describe. Thus the resonance-model fit is commonly performed using
only a selected submatrix of the spin-density matrix. For the present
analysis, we selected a subset of 14~waves that are listed in
\cref{tab:method:fitmodel:waveset} out of the 88~waves used in the
partial-wave decomposition (see Table~IX in Appendix~A of
\refCite{Adolph:2015tqa}).  Compared to previous analyses of the
$3\pi$ final state, this constitutes the largest wave set included
so far in a resonance-model fit.  The sum of the relative
intensities (see definition in \cref{sec:mass-independent_fit}) of the
14~waves is \SI{56.8}{\percent}, whereas the coherent sum of these
waves amounts to \SI{57.9}{\percent}. The intensity distributions of
the waves are discussed in detail in \refCite{Adolph:2015tqa} with the
exception of the spin-exotic \wave{1}{-+}{1}{+}{\Prho}{P} wave. The
waves contain signals of the well-known resonances \PaOne, \PaTwo,
\PpiTwo, \Ppi[1800], \PpiTwo[1880], and \PaFour, which appear as peaks
in the intensity distributions of the partial waves with the
corresponding quantum numbers. In addition, the set of selected waves
includes a clear signal of the novel resonancelike \PaOne[1420], which
was first reported in \refCite{Adolph:2015pws}, and potential signals
of the less well-known or disputed states \PpiOne[1600], \PaOne[1640],
and \PaTwo[1700]. In the development of the analysis model it was
found that a third $\JPC = 2^{-+}$ resonance, the \PpiTwo[2005], is
required to describe the data.
\subsubsection{Parametrization of the dynamical amplitudes for resonances}
\label{sec:method:fitmodel:resonances}
The selected 14~waves are described using the resonance model of
\cref{eq:method:param:spindens} with six \aJ-like and five \piJ-like
resonances. The resonances are parametrized using relativistic
Breit-Wigner amplitudes~\cite{breit:1936zzb},
\begin{equation}
\label{eq:BreitWigner}
\mathcal{D}^\text{R}_j(\mThreePi; \underbrace{m_j, \Gamma_j}_{\displaystyle{\equiv \zeta^\text{R}_j}})
= \frac{m_j\, \Gamma_j}{m_j^2 - \mThreePi^2 - i\, m_j\, \Gamma_{j, \text{tot}}(\mThreePi)},
\end{equation}
with the mass-dependent total width
$\Gamma_{j, \text{tot}}(\mThreePi)$. The shape parameters to be
determined by the fit are mass $m_j$ and width $\Gamma_j$ of the
resonance~$j$. For most resonances, the decay modes and relative
branching fractions are not or only poorly known. In these cases, we
approximate the mass-dependent width by a constant:
\begin{equation}
\label{eq:method:fixedwidth}
\Gamma_{j, \text{tot}}(\mThreePi) \approx \Gamma_j.
\end{equation}
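A minimal sketch of \cref{eq:BreitWigner} with the constant-width approximation of \cref{eq:method:fixedwidth}; the resonance parameters below are illustrative only, not fit results:

```python
import numpy as np

def breit_wigner(m, m0, gamma0):
    """Relativistic Breit-Wigner amplitude with a constant total width
    Gamma_tot(m) = Gamma_0; all quantities in GeV/c^2."""
    return m0 * gamma0 / (m0**2 - m**2 - 1j * m0 * gamma0)

# On resonance the amplitude has unit magnitude and a phase of +90 deg.
d = breit_wigner(m=1.8, m0=1.8, gamma0=0.2)
assert np.isclose(abs(d), 1.0)
assert np.isclose(np.angle(d), np.pi / 2)
```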
Only for \PaOne and \PaTwo are different parametrizations used. Due
to the large width of the \PaOne, we use the Bowler parametrization
[\namecref{eq:method:bowlerG}~(9) in \refCite{Bowler:1987bj}] to
account for the variation of the decay phase space across the
resonance width:
\begin{multlineOrEq}
\label{eq:method:bowlerG}
\Gamma_{\PaOne, \text{tot}}(\mThreePi)
\newLineOrNot
= \Gamma_{\PaOne}\, \frac{I_{aa}(\mThreePi)}{I_{aa}(m_{\PaOne})}\, \frac{m_{\PaOne}}{\mThreePi}
\end{multlineOrEq}
with $a = \wave{1}{++}{0}{+}{\Prho}{S}$. Here, $I_{aa}$ is the decay
phase-space volume of the \wave{1}{++}{0}{+}{\Prho}{S} wave calculated
according to \cref{eq:integral_matrix_def}, which takes into account
the finite width of the \Prho, the angular-momentum barrier factor in
the \Prho decay, and the Bose symmetrization of the decay amplitude.
For the \PaTwo, we approximate the total width by assuming that it is
saturated by the two dominant decay modes, $\Prho \pi$ and
$\Peta* \pi$, both in a
$D$~wave~\cite{beladidze:1993km,Adolph:2014rpp},\footnote{We neglect
the additional mass dependence of the \PaTwo width that would be
induced by the $\omega\pi\pi$ and \KKbar decay modes, which have
branching fractions of \SI{10.6(32)}{\percent} and
\SI{4.9(8)}{\percent}, respectively~\cite{Patrignani:2016xqp}.}
\begin{wideEqOrNot}%
\begin{multline}
\label{eq:method:a2dynamicwidth}
\Gamma_{\PaTwo, \text{tot}}(\mThreePi)
= \Gamma_{\PaTwo}\, \frac{m_{\PaTwo}}{\mThreePi}\,
\Bigg[ (1 - x)\, \frac{q_{\Prho* \pi}(\mThreePi)}{q_{\Prho* \pi}(m_{\PaTwo})}\,
\frac{F_2^2\rBrk[1]{q_{\Prho* \pi}(\mThreePi)}}{F_2^2\rBrk[1]{q_{\Prho* \pi}(m_{\PaTwo})}} \\
{} + x\, \frac{q_{\Peta* \pi}(\mThreePi)}{q_{\Peta* \pi}(m_{\PaTwo})}\,
\frac{F_2^2\rBrk[1]{q_{\Peta* \pi}(\mThreePi)}}{F_2^2\rBrk[1]{q_{\Peta* \pi}(m_{\PaTwo})}} \Bigg].
\end{multline}
\end{wideEqOrNot}%
In \cref{eq:method:a2dynamicwidth}, we neglect the width of the \Prho
and use the quasi-two-body approximation, where $q_{\xi \pi}$ is the
two-body breakup momentum in the decay $X^- \to \xi^0 \pi^-$. It is
given by
\begin{multlineOrEq}
\label{eq:method:breakupmomentum}
q_{\xi \pi}^2(\mThreePi)
\newLineOrNot
= \frac{\sBrk{\mThreePi^2 - (m_\pi + m_\xi)^2} \sBrk{\mThreePi^2 - (m_\pi - m_\xi)^2}}{4\mThreePi^2}
\end{multlineOrEq}
with $m_\xi$ being the mass of the isobar~$\xi^0$.\footnote{For the
\PaTwo, the lower bound of the fitted \mThreePi range was chosen
such that $q_{\Prho* \pi}^2 > 0$.} The $F_\ell(q_{\xi \pi})$ terms
in \cref{eq:method:a2dynamicwidth} are the Blatt-Weisskopf
angular-momentum barrier factors~\cite{blatt:1952}, which take into
account the centrifugal-barrier effect caused by the orbital angular
momentum $\ell = 2$ between the bachelor $\pi^-$ and the \Prho or the
\Peta*. We use the parametrization of von~Hippel and
Quigg~\cite{VonHippel:1972fg} as given in Sec.~IV~A of
\refCite{Adolph:2015tqa} with a range parameter of
$q_R = \SI{200}{\MeVc}$.\footnote{This corresponds to an assumed
strong-interaction range of \SI{1}{\femto\meter}.} We approximate
the relative branching fraction between both \PaTwo decay modes by
setting $x = 0.2$.\footnote{The masses of $\pi$, \Peta*, and \Prho in
\cref{eq:method:breakupmomentum} are set to
$m_\pi = \SI{139}{\MeVcc}$, $m_{\Peta*} = \SI{547}{\MeVcc}$, and
$m_{\Prho*} = \SI{770}{\MeVcc}$.}
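\Cref{eq:method:a2dynamicwidth,eq:method:breakupmomentum} can be sketched as follows. The barrier factor is written in the common von~Hippel-Quigg form for $\ell = 2$ (its normalization cancels in the ratios), and the \PaTwo parameters are illustrative placeholders, not fit results:

```python
import numpy as np

def breakup_momentum_sq(m, m1, m2):
    """Squared two-body breakup momentum for X -> (m1)(m2)."""
    return (m**2 - (m1 + m2)**2) * (m**2 - (m1 - m2)**2) / (4.0 * m**2)

def barrier_sq_l2(q_sq, q_r=0.200):
    """Squared l = 2 barrier factor in the von Hippel-Quigg form with
    range parameter q_R = 200 MeV/c; its overall normalization cancels
    in the ratios of the width formula."""
    z = q_sq / q_r**2
    return 13.0 * z**2 / ((z - 3.0)**2 + 9.0 * z)

def gamma_a2_tot(m, m_a2, gamma_a2, x=0.2,
                 m_pi=0.139, m_eta=0.547, m_rho=0.770):
    """Mass-dependent a2 width; m_a2 and gamma_a2 are fit parameters,
    so only illustrative values are used in the check below."""
    total = 0.0
    for weight, m_isobar in ((1.0 - x, m_rho), (x, m_eta)):
        q_sq = breakup_momentum_sq(m, m_pi, m_isobar)
        q0_sq = breakup_momentum_sq(m_a2, m_pi, m_isobar)
        total += (weight * np.sqrt(q_sq / q0_sq)
                  * barrier_sq_l2(q_sq) / barrier_sq_l2(q0_sq))
    return gamma_a2 * (m_a2 / m) * total

# At m = m_a2 all ratios are unity and the width equals gamma_a2.
assert np.isclose(gamma_a2_tot(1.32, m_a2=1.32, gamma_a2=0.11), 0.11)
```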
\subsubsection{Parametrization of the dynamical amplitudes for nonresonant components}
\label{sec:method:fitmodel:nonres}
For each of the 14~selected partial waves, a separate nonresonant
component is included in the fit model. We adopt a phenomenological
parametrization for the nonresonant amplitude in the form of a
Gaussian in the two-body breakup momentum~$q$ of the decay that was
inspired by \refCite{tornqvist:1995kr}. We extend this
parametrization to have a more flexible threshold behavior and to
include an explicit empirical \tpr dependence:
\begin{multlineOrEq}
\label{eq:method:nonresterm}
\mathcal{D}^\text{NR}_j(\mThreePi, \tpr;
\ifMultiColumnLayout{\overbrace}{\underbrace}{b, c_0, c_1, c_2}\ifMultiColumnLayout{^}{_}{%
\displaystyle{\equiv \zeta^\text{NR}_j}}) \newLineOrNot
= \sBrk{\frac{\mThreePi - m_\text{thr}}{m_\text{norm}}}^b\,
e^{-(c_0 + c_1 \tpr + c_2\tpr^2)\, \tilde{q}_{\xi \pi}^2(\mThreePi)}.
\end{multlineOrEq}
Here, $b$ and the $c_i$ are the free shape parameters for the
nonresonant component~$j$.\footnote{In order to simplify notation, we
omit the subscript~$j$ for these parameters.} The parameters
$m_\text{norm}$ and $m_\text{thr}$ are the same for all nonresonant
components and are empirically fixed to \SI{1}{\GeVcc} and
\SI{0.5}{\GeVcc}, respectively. The quasi-two-body breakup momentum
for the decay $X^- \to \xi^0 \pi^-$ is represented by
$\tilde{q}_{\xi \pi}(\mThreePi)$. However, we cannot use
\cref{eq:method:breakupmomentum} to calculate this quantity because
$q_{\xi \pi}(\mThreePi)$ becomes imaginary for
$\mThreePi < m_\pi + m_\xi$. We therefore construct an approximation,
$\tilde{q}_{\xi \pi}(\mThreePi)$, to the two-body breakup momentum,
which is valid also below the quasi-two-body threshold and takes into
account the finite width of the isobar~$\xi^0$,\footnote{We start from
the ansatz that the two-body phase-space volume
$\varphi_2 \propto q_{\xi \pi} / \mThreePi$ approximates the
three-body phase-space volume $I_{aa}$ well at large values of
\mThreePi because the effects from the finite width of the $\xi^0$
and from the barrier factors become negligible. For lower values of
\mThreePi, these effects are taken into account by defining an
\enquote{effective} two-body breakup momentum via
$I_{aa} \propto \tilde{q}_{\xi \pi} / \mThreePi$.}
\begin{equation}
\label{eq:method:nonresterm:qtilde}
\tilde{q}_{\xi \pi}(\mThreePi)
\equiv q_{\xi \pi}(m_\text{norm})\, \frac{I_{aa}(\mThreePi)}{I_{aa}(m_\text{norm})}\,
\frac{\mThreePi}{m_\text{norm}}.
\end{equation}
Here, $\tilde{q}_{\xi \pi}$ is normalized such that it is equal to the
value of $q_{\xi \pi}$ at
$m_\text{norm} = \SI{2.4}{\GeVcc}$.\footnote{The value of
$m_\text{norm}$ was somewhat arbitrarily chosen to lie above the
maximum of the fit range of \SI{2.3}{\GeVcc} (see
\cref{tab:method:fitmodel:waveset}) and low enough so that the decay
phase-space volume $I_{aa}(m_\text{norm})$ can be calculated
reliably.} The decay phase-space volume $I_{aa}$ of wave~$a$ is
calculated according to \cref{eq:integral_matrix_def}.
For partial waves with small relative intensities
$\leq \SI{2.4}{\percent}$, we simplify the parametrization in
\cref{eq:method:nonresterm} to
\begin{multlineOrEq}
\label{eq:method:nonrestermsmall}
\mathcal{D}^\text{NR}_j(\mThreePi; b = 0, c_0, c_1 = 0, c_2 = 0)
\newLineOrNot
= e^{-c_0\, \tilde{q}_{\xi \pi}^2(\mThreePi)}.
\end{multlineOrEq}
This reduces the number of free parameters and increases the fit
stability. The only exception is the spin-exotic
\wave{1}{-+}{1}{+}{\Prho}{P} wave because of its dominant nonresonant
contribution.
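A sketch of the nonresonant parametrization of \cref{eq:method:nonresterm}; since the effective breakup momentum of \cref{eq:method:nonresterm:qtilde} requires the decay phase-space volume $I_{aa}$, it is taken as an input here:

```python
import numpy as np

def nonres_amplitude(m, tprime, q_tilde_sq, b, c0, c1=0.0, c2=0.0,
                     m_thr=0.5, m_norm=1.0):
    """Phenomenological nonresonant shape; masses in GeV/c^2.

    q_tilde_sq: squared effective breakup momentum; in the paper it is
    derived from the decay phase-space volume I_aa, here it is an input.
    """
    threshold = ((m - m_thr) / m_norm)**b
    return threshold * np.exp(-(c0 + c1 * tprime + c2 * tprime**2)
                              * q_tilde_sq)

# With b = 0 and c1 = c2 = 0 the shape reduces to the simplified
# parametrization of Eq. (nonrestermsmall), a pure Gaussian in q-tilde.
q_sq = 0.36   # illustrative value of q-tilde^2 in (GeV/c)^2
assert np.isclose(nonres_amplitude(1.5, 0.2, q_sq, b=0.0, c0=2.0),
                  np.exp(-2.0 * q_sq))
```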
\subsubsection{Parametrization of the production probability}
\label{sec:method:fitmodel:prod}
At high energies, hadronic scattering reactions are dominated by
$t$-channel Pomeron (\Ppom) exchange. In earlier measurements of
inclusive diffractive reactions of the type $p + p \to X^+ + p$ at the
CERN ISR~\cite{Albrow:1976sv}, the differential cross section
$\dif[2]{\sigma} / \dif{m_X^2} \dif{t}$ was observed to fall
approximately as $s / m_X^2$, with $\sqrt{s}$ being the center-of-mass
energy of the reaction and $m_X$ the invariant mass of the produced
system~$X^+$. This behavior is described by Regge
theory~\cite{Collins:1977jy,Kaidalov:1979jz},
\begin{equation}
\label{eq:method:param:regge_cross_sect}
\frac{\dif[2]{\sigma}}{\dif{m_X^2}\, \dif{t}}
= g^\Ppom_{pp}(t)\, \sigma_{\Ppom p}^\text{tot}(m_X^2, t)\, \sBrk[3]{\frac{s}{m_X^2}}^{2 \alpha_\Ppom(t) - 1},
\end{equation}
where $g^\Ppom_{pp}$ is the $t$-dependent proton-proton-Pomeron
coupling and $\sigma_{\Ppom p}^\text{tot}(m_X^2, t)$ is the total
Pomeron-proton cross section. The Regge trajectory of the Pomeron is
$\alpha_\Ppom(t) = \alpha_0 + \alpha'\, t$, which yields the
$\dif[2]{\sigma} / \dif{m_X^2} \dif{t} \propto s / m_X^2$ behavior for
$\alpha_0 = 1$ and $\alpha' = 0$.
In \refCite{Ataian:1991gn}, a phenomenological Regge framework was
developed to describe exclusive central-production reactions of the
type $p + p \to p + X^0 + p$ in terms of double-Pomeron exchange. In
these calculations, the cross section is proportional to the so-called
\enquote{Pomeron flux} factor
\begin{equation}
\label{eq:method:param:pomeron_flux}
F_{\Ppom p}(x_\Ppom, t)
\propto \frac{e^{-b_\Ppom\, \abs{t}}}{x_\Ppom^{2 \alpha_\Ppom(t) - 1}}
\end{equation}
using the approximate relation $m_X^2 / s \approx x_\Ppom$ with
$x_\Ppom$ being the longitudinal proton-momentum fraction carried by
the Pomeron in the center-of-mass frame of the reaction. The slope
parameter of the Pomeron exchange is $b_\Ppom$.
\Cref{eq:method:param:pomeron_flux} can be interpreted as the
probability for Pomeron emission by the proton, which in the limit of
$\alpha_0 = 1$ and $\alpha' = 0$ is proportional to $1 / x_\Ppom$ and
therefore similar to the probability of photon emission in the case of
bremsstrahlung. Assuming that \cref{eq:method:param:pomeron_flux} is
universal, it can be used to model various diffractive processes in
terms of single-Pomeron exchange~\cite{Cox:2000jt}. We follow this
approach and have chosen the $3\pi$ production probability in
\cref{eq:method:param:spindens} to be proportional to the probability
of Pomeron emission by the target proton:
\begin{equation}
\label{eq:method:param:prods}
\Abs[1]{\mathcal{P}(\mThreePi, \tpr)}^2
\equiv \frac{1}{x_\Ppom^{2 \alpha_\Ppom(\tpr) - 1}}
= \sBrk[3]{\frac{s}{\mThreePi^2}}^{2 \alpha_\Ppom(\tpr) - 1}.
\end{equation}
Here, \mThreePi takes the role of $m_X$ and we have made the
approximation $\tpr \approx -t$, thereby neglecting \tmin, so that
$\alpha_\Ppom(\tpr) = \alpha_0 - \alpha' \tpr$. The normalization and
the explicitly \tpr-dependent factor $e^{-b_\Ppom\, \tpr}$ in
\cref{eq:method:param:pomeron_flux} are both absorbed into the
coupling amplitudes $\mathcal{C}_a^j(\tpr)$ in
\cref{eq:method:param:spindens}. We use a value of $\alpha_0 = 1.2$,
based on an analysis of data from the H1~experiment at
HERA~\cite{Adloff:1997sc}, while for the shrinkage parameter we use a
value of $\alpha' = \SI{0.26}{\perGeVcsq}$, which was obtained from a
simultaneous fit to CDF (Fermilab) and ISR (CERN)
data~\cite{Abe:1993xx}.\footnote{The result for $\alpha_0$ in
\refCite{Adloff:1997sc} is based on the $\alpha'$ value from
\refCite{Abe:1993xx}. The results of our resonance-model fit are
not sensitive to the particular choice of the values for $\alpha_0$
and $\alpha'$.} \Cref{fig:method:param:prods} shows the deviation
of \cref{eq:method:param:prods} from the $s / \mThreePi^2$ dependence
in the analyzed kinematic range.
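\Cref{eq:method:param:prods} can be evaluated directly. The value of $s$ used below is an illustrative estimate for a 190~GeV/$c$ beam; the overall normalization is irrelevant since it is absorbed into the coupling amplitudes:

```python
def production_prob_sq(m3pi, tprime, s=357.0, alpha0=1.2, alpha_prime=0.26):
    """|P(m3pi, t')|^2; m3pi in GeV/c^2, t' in (GeV/c)^2.

    s = 357 GeV^2 is an illustrative estimate for a 190 GeV/c beam;
    the fit results do not depend on the absolute normalization."""
    alpha = alpha0 - alpha_prime * tprime
    return (s / m3pi**2)**(2.0 * alpha - 1.0)

# For alpha_0 = 1 and alpha' = 0 the expression reduces to the
# s / m3pi^2 behavior observed at the ISR.
m = 1.5
assert abs(production_prob_sq(m, 0.3, alpha0=1.0, alpha_prime=0.0)
           - 357.0 / m**2) < 1e-9
```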
\begin{figure}[tbp]
\centering
\includegraphics[width=\twoPlotWidth]{fig3}
\caption{Deviation of the \mThreePi dependence of the $3\pi$
production probability $\Abs[1]{\mathcal{P}(\mThreePi, \tpr)}^2$,
as given by \cref{eq:method:param:prods}, from the
$s / \mThreePi^2$ dependence for various \tpr values. The curves
are normalized to~1 at $\mThreePi = \SI{0.5}{\GeVcc}$.}
\label{fig:method:param:prods}
\end{figure}
\subsubsection{Discussion of the fit model}
\label{sec:method:fitmodel:discussion}
Our analysis focuses on $3\pi$ resonances with masses up to about
\SI{2}{\GeVcc}. The goal was to parametrize the data with a minimum
number of resonances while at the same time covering an \mThreePi
range as large as possible. The employed \mThreePi fit ranges are
listed in \cref{tab:method:fitmodel:waveset}. For most waves, the
lower bound of the fit range is determined either by thresholds
applied in the PWA (see Table~IX in Appendix~A of
\refCite{Adolph:2015tqa}) or by the phase-space opening. For some
waves, the reduced phase-space volume at low \mThreePi causes
ambiguities in the solutions of the mass-independent analysis leading
to unphysical structures. Such regions are excluded.\footnote{By
limiting the fit ranges, \SI{4.2}{\percent} of the summed
intensities of all 14~waves are excluded from the fit.} Seven of
the 14~waves are described by the model up to masses of
\SI{2.3}{\GeVcc}. For the other waves, the model departs from the
data already at lower masses. This could be due to higher-lying
excited states above \SI{2}{\GeVcc} or due to increased nonresonant
contributions. Motivations for the particular choice of the fit
ranges will be discussed in more detail in \cref{sec:results}.
We summarize in \cref{tab:method:fitmodel:waveset} the 14-wave fit
model.  In total, the model has 722~free real-valued parameters to be
determined by the fit: 22~resonance shape parameters, 29~shape
parameters for the nonresonant components, 22~real-valued parameters
for the branching amplitudes $\prescript{}{b}{\mathcal{B}}_a^j$ [see
\cref{eq:method:branchingdefinition}], and 649~real-valued parameters
for the coupling amplitudes. The coupling amplitudes for the \PaOne
in the \wave{1}{++}{0}{+}{\Prho}{S} wave are chosen to be real.
\begin{wideTableOrNot}[tbp]
\renewcommand{\arraystretch}{1.2}
\centering
\caption{Fit model with 11~resonances to describe the elements of
the spin-density matrix of the selected 14~partial waves from six
\JPC sectors using \cref{eq:method:param:spindens}. The relative
intensities listed in the second column are evaluated as a sum
over the 11~\tpr bins and are normalized to the total number of
acceptance-corrected events~\cite{Adolph:2015tqa}. The relative
intensities do not include interference effects between the waves.
The third column lists the resonances used to describe the waves.
For most resonances, the total width is approximated by a constant
[see \cref{eq:method:fixedwidth}]. For the other resonances, the
width parametrization is given in square brackets. The fourth
column lists the parametrizations used for the nonresonant
components, the last column the fit ranges (see
\cref{sec:method:fitmethod} for details).}
\label{tab:method:fitmodel:waveset}
\begin{tabular}{lrp{\ifMultiColumnLayout{0.25\textwidth}{0.33\textwidth}}cc}
\hline
\hline
Partial wave & {Relative} & Resonances & Nonresonant & \mThreePi fit range \\
& {intensity} & & component Eq. & [\si\GeVcc] \\
\hline
\wave{0}{-+}{0}{+}{\PfZero}{S} & \SI{2.4}{\percent} & \Ppi[1800] & \eqref{eq:method:nonrestermsmall} & \numrange{1.20}{2.30} \\[1.2ex]
\wave{1}{++}{0}{+}{\Prho}{S} & \SI{32.7}{\percent} & \PaOne~[\cref{eq:method:bowlerG}],
\PaOne[1640] & \eqref{eq:method:nonresterm} & \numrange{0.90}{2.30} \\
\wave{1}{++}{0}{+}{\PfZero}{P} & \SI{0.3}{\percent} & \PaOne[1420] & \eqref{eq:method:nonrestermsmall} & \numrange{1.30}{1.60} \\
\wave{1}{++}{0}{+}{\PfTwo}{P} & \SI{0.4}{\percent} & \PaOne~[\cref{eq:method:bowlerG}],
\PaOne[1640] & \eqref{eq:method:nonrestermsmall} & \numrange{1.40}{2.10} \\[1.2ex]
\wave{1}{-+}{1}{+}{\Prho}{P} & \SI{0.8}{\percent} & \PpiOne[1600] & \eqref{eq:method:nonresterm} & \numrange{0.90}{2.00} \\[1.2ex]
\wave{2}{++}{1}{+}{\Prho}{D} & \SI{7.7}{\percent} & \rdelim\}{3}{\linewidth}[~\mbox{\PaTwo~[\cref{eq:method:a2dynamicwidth}],~\PaTwo[1700]}]
& \eqref{eq:method:nonresterm} & \numrange{0.90}{2.00} \\
\wave{2}{++}{2}{+}{\Prho}{D} & \SI{0.3}{\percent} & & \eqref{eq:method:nonrestermsmall} & \numrange{1.00}{2.00} \\
\wave{2}{++}{1}{+}{\PfTwo}{P} & \SI{0.5}{\percent} & & \eqref{eq:method:nonrestermsmall} & \numrange{1.00}{2.00} \\[1.2ex]
\wave{2}{-+}{0}{+}{\Prho}{F} & \SI{2.2}{\percent} & \rdelim\}{4}{\linewidth}[~\mbox{\PpiTwo, \PpiTwo[1880], \PpiTwo[2005]}]
& \eqref{eq:method:nonresterm} & \numrange{1.20}{2.10} \\
\wave{2}{-+}{0}{+}{\PfTwo}{S} & \SI{6.7}{\percent} & & \eqref{eq:method:nonresterm} & \numrange{1.40}{2.30} \\
\wave{2}{-+}{1}{+}{\PfTwo}{S} & \SI{0.9}{\percent} & & \eqref{eq:method:nonrestermsmall} & \numrange{1.40}{2.30} \\
\wave{2}{-+}{0}{+}{\PfTwo}{D} & \SI{0.9}{\percent} & & \eqref{eq:method:nonrestermsmall} & \numrange{1.60}{2.30} \\[1.2ex]
\wave{4}{++}{1}{+}{\Prho}{G} & \SI{0.8}{\percent} & \rdelim\}{2}{\linewidth}[~\mbox{\PaFour}]
& \eqref{eq:method:nonrestermsmall} & \numrange{1.25}{2.30} \\
\wave{4}{++}{1}{+}{\PfTwo}{F} & \SI{0.2}{\percent} & & \eqref{eq:method:nonrestermsmall} & \numrange{1.40}{2.30} \\[1.2ex]
Intensity sum & \SI{56.8}{\percent} & & & \\
\hline
\hline
\end{tabular}
\end{wideTableOrNot}
In the partial-wave decomposition (see
\cref{sec:mass-independent_fit}), resolution effects of the
spectrometer in \mThreePi and \tpr are not corrected for, because the
analysis is performed independently in $(\mThreePi, \tpr)$ bins.
Since the estimated resolution effects are small,\footnote{The $3\pi$
mass resolution varies between \SI{5.4}{\MeVcc} at small \mThreePi
(in the range from \SIrange{0.5}{1.0}{\GeVcc}) and \SI{15.5}{\MeVcc}
at large \mThreePi (in the range from \SIrange{2.0}{2.5}{\GeVcc}).
The \tpr resolution as obtained from the reconstructed $3\pi$ final
state ranges between \SIlist{7e-3;20e-3}{\GeVcsq} depending on the
\mThreePi and \tpr region. See \refCite{Adolph:2015tqa} for
details.} they are neglected in the resonance-model fit.
Although the fit model describes the data rather well (see
\cref{sec:results}), it has a number of potential caveats and
limitations that are mainly rooted in its
simplicity~\cite{pdg_resonances:2016}. Breit-Wigner amplitudes are in
general good approximations only for single narrow resonances. When
using a constant-width parametrization [\cref{eq:method:fixedwidth}],
the resonance in addition has to be far above thresholds. The
description of a set of resonances with the same quantum numbers as a
sum of Breit-Wigner amplitudes may violate unitarity and is a good
approximation only for well-separated resonances with little overlap.
In particular for the $\JPC = 2^{-+}$ resonances, this condition is
not well fulfilled. Coupled-channel effects are also not taken into
account. All the above effects render the extracted Breit-Wigner
parameters model- and process-dependent. An additional process and
model dependence is introduced by the decomposition of the
partial-wave amplitudes into resonant and nonresonant components,
which is not unique. However, our results can be compared directly to
previous analyses of diffractive three-pion production (see \eg\
\refCite{daum:1980ay,adams:1998ff,chung:2002pu,alekseev:2009aa}). The
model assumption that the phase of the nonresonant amplitudes does not
depend on \mThreePi may not be well justified for cases where these
amplitudes exhibit pronounced peaks in their intensity distribution.
One may also remark that singularities in the scattering matrix that
are not related to resonances might mimic Breit-Wigner resonances. A
possible example is the \PaOne[1420]~\cite{Adolph:2015pws}, which
could be the singularity of a triangle
diagram~\cite{Ketzer:2015tqa,Aceti:2016yeb} (see also
\cref{sec:onePP}).
Some of the potential issues mentioned above are expected to be
mitigated by the fact that in our model most of the resonances are
fitted in at least two decay modes. In addition, we combine in the
fit the information of 11~\tpr bins while forcing the resonances to
appear with the same parameters in each \tpr bin. By performing such
a \tpr-resolved analysis, resonance parameters are constrained by the
various production processes that may contribute with different
strengths and phases to the reaction under study depending on the \tpr
region.
Instead of Breit-Wigner parameters, one could attempt to extract the
poles on the second Riemann sheet of the scattering amplitude, which
correspond to resonances. The location of a resonance pole in the
complex energy plane and its residue represent the universal resonance
properties. However, the construction of coupled-channel models for
the reaction \reaction that are consistent with the fundamental
principles of unitarity and analyticity is a formidable task. In the
past, quasi-two-body $K$-matrix approaches were applied to analyze
$3\pi$ resonances in diffractive production (see \eg\
\refCite{daum:1980ay,amelin:1995gu}). The extraction of resonance
pole positions using an analytical model based on the principles of
the relativistic $S$-matrix is currently under
development~\cite{Jackura:2016llm,Mikhasenko:2017jtg}. A first
successful application of this model to the $\eta \pi$ $D$-wave
extracted from COMPASS data yielded pole positions for the \PaTwo and
\PaTwo[1700]~\cite{Jackura:2017amb}. In \cref{sec:twoPP_discussion}
we compare those results to the ones from our analysis.
\subsection{Fit method}
\label{sec:method:fitmethod}
The free parameters of the model in \cref{eq:method:param:spindens},
\ie the set of coupling amplitudes $\mathcal{C}_a^j(\tpr)$ and the set
of shape parameters $\zeta_j$ of the wave components, are extracted by
a fit to the spin-density matrix $\varrho_{a b}(\mThreePi, \tpr)$ that
was extracted in the mass-independent analysis (see
\cref{sec:mass-independent_fit}). In the resonance-model fit, the
information of the Hermitian spin-density matrix is represented by a
real-valued matrix $\Lambda_{a b}(\mThreePi, \tpr)$ of the same
dimension. The elements of this matrix are defined by the upper
triangular part of $\varrho_{a b}$:
\begin{equation}
\label{eq:method:fitmethod:rhoredef}
\Lambda_{a b}(\mThreePi, \tpr) =
\begin{cases}
\Re\sBrk{\varrho_{a b}(\mThreePi, \tpr)} & \text{for}~a < b, \\[1.1ex]
\Im\sBrk{\varrho_{b a}(\mThreePi, \tpr)} & \text{for}~a > b, \\[1.1ex]
\varrho_{a a}(\mThreePi, \tpr) = \Abs{\mathcal{T}_a(\mThreePi, \tpr)}^2 & \text{for}~a = b.
\end{cases}
\end{equation}
Hence the diagonal elements of $\Lambda_{a b}(\mThreePi, \tpr)$ are
the partial-wave intensities, the upper off-diagonal elements are the
real parts of the interference terms, and the lower off-diagonal
elements are the corresponding imaginary parts.
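The packing of \cref{eq:method:fitmethod:rhoredef} can be sketched numerically. The following Python helper is purely illustrative (the function name and interface are hypothetical, not part of the analysis software); it maps a Hermitian spin-density matrix to the real-valued matrix $\Lambda_{a b}$:

```python
import numpy as np

def lambda_matrix(rho):
    """Pack a Hermitian spin-density matrix rho into the real-valued
    matrix Lambda: intensities on the diagonal, real parts of the
    interference terms above it, imaginary parts below it."""
    rho = np.asarray(rho, dtype=complex)
    n = rho.shape[0]
    lam = np.empty((n, n))
    for a in range(n):
        for b in range(n):
            if a < b:
                lam[a, b] = rho[a, b].real
            elif a > b:
                lam[a, b] = rho[b, a].imag
            else:
                lam[a, a] = rho[a, a].real  # = |T_a|^2
    return lam
```

For a rank-1 matrix $\varrho = \mathcal{T} \mathcal{T}^\dagger$, the diagonal of the result reproduces the partial-wave intensities $\Abs{\mathcal{T}_a}^2$.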
The deviation of the resonance model $\widehat{\Lambda}_{a b}$ from
the matrix $\Lambda_{a b}$, which is extracted from data, is measured
by summing up the squared Pearson's residuals~\cite{Pearson:1900} of
all matrix elements for all \mThreePi and \tpr
bins~\cite{adams:1998ff}:
\begin{wideEqOrNot}%
\begin{equation}
\label{eq:method:fitmethod:chi2}
\chisq
= \sum_{a, b}^{N_{\text{waves}}\vphantom{)_{a b}}}\;
\sum^{\text{\tpr bins}\vphantom{N_{\text{w}})_{a b}}}\;
\sum^{(\text{\mThreePi bins})_{a b}\vphantom{N_{\text{w}}}}
\sBrk{\frac{\Lambda_{a b}(\mThreePi, \tpr)
- \widehat{\Lambda}_{a b}(\mThreePi, \tpr)}{\sigma_{a b}(\mThreePi, \tpr)}}^2.
\end{equation}
\end{wideEqOrNot}%
Here, $N_{\text{waves}}$ is the number of partial waves included in
the fit model and $\sigma_{a b}(\mThreePi, \tpr)$ is the statistical
uncertainty of $\Lambda_{a b}(\mThreePi, \tpr)$ as determined by the
mass-independent analysis. The sum in \cref{eq:method:fitmethod:chi2}
runs over all 11~\tpr bins and those \mThreePi bins that lie within
the fit ranges. The fit ranges for the intensity terms
$\Lambda_{a a}$ are listed in \cref{tab:method:fitmodel:waveset}. The
fit ranges for the off-diagonal interference terms $\Lambda_{a b}$ are
defined by the intersections of the fit ranges for the intensities of
waves~$a$ and~$b$. The values of the model parameters are determined
by minimizing the \chisq function using the \textsc{Migrad} algorithm
of the \textsc{Minuit} program~\cite{James:1975dr}.
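A minimal sketch of the sum of squared Pearson's residuals in \cref{eq:method:fitmethod:chi2} is given below (illustrative only; the boolean array \texttt{mask} stands in for the selection of $(\mThreePi, \tpr)$ bins inside the fit ranges):

```python
import numpy as np

def chi2_sum(lam_data, lam_model, sigma, mask):
    """Sum of squared Pearson's residuals over all matrix elements
    and bins selected by `mask`."""
    r = (lam_data - lam_model) / sigma
    return float(np.sum(r[mask] ** 2))
```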
Although we use the notation \chisq in \cref{eq:method:fitmethod:chi2}
for the quantity that is minimized in the resonance-model fit, it is
important to note that the minimum of \cref{eq:method:fitmethod:chi2}
does not follow a \chisq~distribution. Therefore, the expectation
value of~\chisq is not the number of degrees of freedom~(n.d.f.),
nor is its deviation from the n.d.f. an absolute measure of the
goodness of the fit. The reason for this is that
\cref{eq:method:fitmethod:chi2} does not take into account
correlations among the spin-density matrix elements. Although the
spin-density matrix elements from different \mThreePi or \tpr bins are
independent of each other, within an $(\mThreePi, \tpr)$ bin, two
kinds of correlations appear: \one~statistical correlations of the
spin-density matrix elements and \two~mathematical dependences caused
by using a rank-1 spin-density matrix for the positive-reflectivity
waves in the partial-wave decomposition (see
\cref{sec:mass-independent_fit}). The result of the mass-independent
analysis in principle includes the covariance matrix of the extracted
transition amplitudes $\mathcal{T}_a$. However, the propagation of
this information to the covariance matrix for $\Lambda_{a b}$ is not
well-defined because the spin-density matrix has more free real-valued
parameters than the set of transition amplitudes.\footnote{In each
$(\mThreePi, \tpr)$ bin, the resonance-model fit minimizes the
distance to $N_{\text{waves}}^2$ data points, which are the elements
of $\Lambda_{a b}(\mThreePi, \tpr)$. However, the transition
amplitudes extracted in the mass-independent analysis with rank-1
spin-density matrix represent only $(2 N_{\text{waves}} - 1)$ data
points.} The rank-1 condition leads to analytical relations among
the spin-density matrix elements for waves~$a$, $b$, $c$, and~$d$ of
the form
\begin{equation}
\label{eq:spin_dens_corr}
\varrho_{a b}\, \varrho_{c d}
= \varrho_{a d}\, \varrho_{c b}.
\end{equation}
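That this relation holds identically for any rank-1 matrix $\varrho = \mathcal{T} \mathcal{T}^\dagger$ can be verified with a short numerical check (illustrative only; the amplitudes are random):

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.normal(size=4) + 1j * rng.normal(size=4)  # transition amplitudes
rho = np.outer(T, np.conj(T))                     # rank-1 spin-density matrix

# rho_ab * rho_cd == rho_ad * rho_cb for all index combinations,
# since both sides equal T_a T_b* T_c T_d*
for a in range(4):
    for b in range(4):
        for c in range(4):
            for d in range(4):
                assert np.isclose(rho[a, b] * rho[c, d],
                                  rho[a, d] * rho[c, b])
```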
We have performed studies using alternative formulations of~\chisq
that take into account the statistical correlations and
\cref{eq:spin_dens_corr} (see \cref{sec:alt_chi_2}). For most
parameters, the obtained results are similar to those obtained with
\cref{eq:method:fitmethod:chi2} and the systematic effects are smaller
than those from the other systematic studies (see
\cref{sec:systematics}). Exceptions are discussed in
\cref{sec:results,sec:syst_uncert}. Given the limitations of our
model in describing details of the data, the \chisq~formulation in
\cref{eq:method:fitmethod:chi2} has practical advantages. The
information from the 14~waves enters symmetrically; \ie
\cref{eq:method:fitmethod:chi2} does not require one to choose a
reference wave, as is the case in the alternative
\chisq~formulations. In addition, compared to the alternative
\chisq~formulations, \cref{eq:method:fitmethod:chi2} effectively
assigns more weight to the interference terms, which contain the phase
information. This tends to improve the fit stability as imperfections
in the description of the intensity distributions of some waves have
less influence. A possible issue of neglecting the correlations of
the spin-density matrix elements in \cref{eq:method:fitmethod:chi2} is
that it may lead to biased estimates for the statistical uncertainties
of the fit parameters. However, in our analysis this effect can be
safely ignored because, due to the large data set, all uncertainties
on physical parameters are dominated by systematic effects, which
outweigh the statistical ones. For the above reasons, we use the
\chisq~definition of \cref{eq:method:fitmethod:chi2} to determine the
physical parameters.
The extraction of the resonance parameters using the fit model
described in \cref{sec:method:fitmodel} is based on highly precise
physical information obtained from the mass-independent analysis. The
722~free parameters of the model are constrained by the matrix
$\Lambda_{a b}(\mThreePi, \tpr)$, which has $14 \times 14$~elements
for each of the 100~\mThreePi and 11~\tpr bins. Taking into account
the chosen \mThreePi fit ranges (see
\cref{tab:method:fitmodel:waveset}), this yields a total number of
\num{76505}~data points that enter into the fit.
The fit model described in \cref{sec:method:fitmodel} is highly
nonlinear in the shape parameters $\zeta_j$ of the wave components.
Some of the model parameters are also strongly correlated. In
addition, the employed parametrizations are only approximations or in
the case of the nonresonant components purely empirical. Hence they
often do not describe all details of our high-precision data. The
resulting deviations between model and data lead to a multimodal
behavior of the minimized \chisq~function. Therefore, the fit result
may depend on the start values for the fit parameters. To avoid the
fit being trapped in local \chisq~minima, we perform numerous fit
attempts using different sets of start values for the shape
parameters, which are randomly picked from uniform distributions. For
the resonance parameters, conservatively wide ranges are chosen for
these distributions based on previous
measurements~\cite{Patrignani:2016xqp}. The ranges are shown as
dotted rectangles in \cref{fig:method:fitmethod:startparameters}. For
the shape parameters of the nonresonant components, we use wide
uniform distributions to pick the start values as there is no prior
knowledge. Details are discussed in \refCite{msc_thesis_wallner}.
The central values for the fit parameters are estimated by performing
fits with 250~different sets of start values, which are shown as dots
in \cref{fig:method:fitmethod:startparameters}. For the systematic
studies discussed in \cref{sec:systematics}, we typically use
50~random sets of start values.
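The multistart strategy can be sketched as follows. This is a simplified stand-in, not the analysis code: a generic simplex minimizer from SciPy replaces \textsc{Minuit}'s \textsc{Migrad}, and the objective function passed in is a hypothetical placeholder for the actual \chisq:

```python
import numpy as np
from scipy.optimize import minimize

def multistart_fit(chi2, bounds, n_attempts=50, seed=1):
    """Run many fit attempts with start values drawn uniformly from
    `bounds` (one (low, high) pair per parameter) and keep the
    attempt with the lowest chi2 value."""
    rng = np.random.default_rng(seed)
    lo, hi = np.transpose(bounds)
    best = None
    for _ in range(n_attempts):
        x0 = rng.uniform(lo, hi)          # random start values
        res = minimize(chi2, x0, method="Nelder-Mead")
        if best is None or res.fun < best.fun:
            best = res
    return best
```

On a multimodal objective, single attempts may be trapped in local minima; keeping the best of many random starts mitigates this, at the cost of proportionally more computation.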
\begin{figure}[tbp]
\centering
\includegraphics[width=\linewidthOr{0.8\linewidth}]{fig4}
\caption{The dotted boxes indicate the ranges of the uniform
distributions that were used to randomly generate start values for
the mass and width parameters of the resonances included in the
fit model. Different colors encode different resonances.}
\label{fig:method:fitmethod:startparameters}
\end{figure}
With the randomly chosen start values for the fit parameters, it is in
general not possible to fit all 722~free parameters at once.
Therefore, a multistaged approach is used, where first only a subset
of the parameters is left free, while the others are kept fixed. The
parameter values found in this first stage are then used as start
values for the next fit stages, in which in addition some of the
previously fixed parameters are freed. In the last fit stage, all
722~model parameters are left free. Since the order in which the
parameters are released during the fit may also influence the fit
result, we perform for each set of start values four different schemes
of releasing the fit parameters (see \refCite{msc_thesis_wallner} for
details). Using this procedure, the central values of the model
parameters are estimated based on a total of \num{1000}~fit attempts
performed using the 250~independent randomly chosen sets of start
values.
\Cref{fig:method:fitmethod:chi2:beforecuts} shows the frequency
distribution of the \chisq~values from the \num{1000} fit attempts in
narrow bins of \SI{0.1}{units} of~\chisq. We assume that fits falling
into the same \chisq~bin correspond to identical solutions. In order
to remove unphysical solutions from this set of solutions, we apply a
series of selection criteria. Most of these criteria aim at rejecting
solutions, where components of the resonance model are misused to
compensate for imperfections in the model. The fit ranges listed in
\cref{tab:method:fitmodel:waveset} were chosen such that they cover
the peak regions of the resonances included in our model. Therefore,
solutions are rejected if the mass value of any of the resonance
components lies outside of the respective fit ranges\footnote{An
exception is made for the \PaOne component in the
\wave{1}{++}{0}{+}{\PfTwo}{P} wave.} (see
\cref{tab:method:fitmodel:waveset}). Solutions are also rejected if
any of the resonance width values lie at the border of the allowed
parameter range from \SIrange{40}{1000}{\MeVcc}. Furthermore,
solutions are rejected if a component that represents an excited
resonance is misused by the fit to describe a lower-lying state and
vice versa. Such solutions are clearly unphysical. For example, in
some unphysical solutions the \PaOne and \PaOne[1640] components
become wide and have nearly identical masses to better describe the
dominant peak in the intensity distribution of the
\wave{1}{++}{0}{+}{\Prho}{S} wave. The above condition removes in
particular all 17~solutions that have a lower~\chisq than the
selected physical solution [the latter one is shown in red in
\cref{fig:method:fitmethod:chi2:beforecuts}]. In the last step, we
remove solutions that are found only once.\footnote{With this step, we
remove in particular solutions, where the fitting algorithm was
trapped in shallow local minima. It is worth stressing that all
solutions removed by this criterion have a larger \chisq than the
selected physical solution.} More details can be found in
\refsCite{msc_thesis_wallner,msc_thesis_schmeing}.
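The grouping of fit attempts into solutions via narrow \chisq~bins can be sketched as follows (the bin width of 0.1 follows the text above; the helper itself is hypothetical):

```python
import numpy as np

def group_solutions(chi2_values, bin_width=0.1):
    """Group fit attempts whose chi2 values fall into the same narrow
    bin; attempts in one bin are assumed to be the same solution."""
    bins = np.floor(np.asarray(chi2_values) / bin_width).astype(int)
    groups = {}
    for i, b in enumerate(bins):
        groups.setdefault(int(b), []).append(i)
    return groups
```

Solutions found only once (groups of size~1) can then be discarded, in analogy to the last selection step described above.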
The fit method described above is computationally expensive, but it
avoids constraining the range of parameter values in the fit, while at
the same time it allows us to use wide ranges for the random choice of
the start values.
\begin{figure}
\centering
\subfloat[][]{%
\includegraphics[width=\twoPlotWidth]{fig5a}%
\label{fig:method:fitmethod:chi2:beforecuts}%
}%
\newLineOrHspace{\twoPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\twoPlotWidth]{fig5b}%
\label{fig:method:fitmethod:chi2:aftercuts}%
}%
\caption{\subfloatLabel{fig:method:fitmethod:chi2:beforecuts}~Distribution
of the \chisq~values of the 832~fits that converged out of the
\num{1000}~fit attempts. The selected physical solution (see
text) is shown in red. Additional solutions that are considered
physical are shown in blue and unphysical solutions in green.
\subfloatLabel{fig:method:fitmethod:chi2:aftercuts}~Corresponding
distribution after removing all unphysical solutions. Note the
narrower \chisq~range.}
\end{figure}
For~252 out of the total of \num{1000}~fit attempts, the \chisq
minimization procedure converged and the resulting solution passed the
selection criteria. The \chisq~distribution of those solutions is
shown in \cref{fig:method:fitmethod:chi2:aftercuts}. The solution
with the lowest~\chisq of \num{289834} is shown in red and is found
190~times. In addition, \cref{fig:method:fitmethod:chi2:aftercuts}
shows four physical solutions with slightly larger \chisq~values. For
all four solutions, the parameter values lie within the estimated
systematic uncertainties (see \cref{sec:systematics}). The solution
with the lowest \chisq, which is also the most frequently found
solution, is called the \emph{main solution} in the remaining text. It is
interesting to disentangle the contributions from the intensities and
interference terms to the \chisq in \cref{eq:method:fitmethod:chi2}.
This is visualized for the main solution in \cref{fig:chi2Matrix} in
the form of a matrix, which shows the \chisq contributions (summed
over the \mThreePi and \tpr bins) from the elements of the matrix
$\Lambda_{a b}(\mThreePi, \tpr)$ defined in
\cref{eq:method:fitmethod:rhoredef}. The diagonal elements in
\cref{fig:chi2Matrix} show the \chisq contributions from the intensity
distributions of each partial wave, the off-diagonal elements the
\chisq contributions from the real (upper triangle) and imaginary
parts (lower triangle) of the interference terms between the waves.
The intensity distribution of the \wave{1}{++}{0}{+}{\Prho}{S} wave
gives by far the largest contribution to the \chisq. The \chisq
contributions of some of its interference terms are also large. The reason
for this is that the model is not able to describe all details of this
partial-wave amplitude within the extremely small statistical
uncertainties, which are a consequence of the large relative intensity
of the \wave{1}{++}{0}{+}{\Prho}{S} wave of \SI{32.7}{\percent} and
the large data set. Due to the dominant contribution of the
\wave{1}{++}{0}{+}{\Prho}{S} amplitude to the \chisq, the parameters
of resonances in other waves are sensitive to the parametrizations
used for the $1^{++}$ waves (see
\cref{sec:systematics,sec:syst_uncert}).
\begin{figure}[tbp]
\centering
\includegraphics[width=\linewidthOr{\twoPlotWidth}]{fig6}
\caption{Contributions from the intensities and interference terms
to the \chisq in \cref{eq:method:fitmethod:chi2} summed over the
\mThreePi and \tpr bins. The two cells for the interference term
of the \wave{1}{++}{0}{+}{\PfZero[980]}{P} and
\wave{2}{-+}{0}{+}{\PfTwo}{D} waves are empty because the fit
ranges for these two waves do not overlap (see
\cref{tab:method:fitmodel:waveset}).}
\label{fig:chi2Matrix}
\end{figure}
\subsection{Extraction of \tpr spectra of wave components}
\label{sec:method:tp}
Performing the partial-wave analysis in bins of \tpr not only helps to
better disentangle resonant and nonresonant contributions via their
different \tpr dependences but also allows us to determine the \tpr
dependence of each wave component in the resonance model. Since the
analysis is performed on the amplitude level, we can extract the \tpr
dependence of the intensity, \ie the \tpr spectrum, of each wave
component and the \tpr dependence of the relative phases of the
coupling amplitudes $\mathcal{C}_a^j(\tpr)$ of the components. The
latter is discussed in more detail in \cref{sec:production_phases}.
Starting from \cref{eq:method:transitionampl}, we can write the model
$\widehat{\mathcal{T}}_a$ for the transition amplitude of wave~$a$ as
\begin{equation}
\widehat{\mathcal{T}}_a(\mThreePi, \tpr)
= \smashoperator[r]{\sum_{j\; \in\; \mathbb{S}_a}} \widehat{\mathcal{T}}_a^j(\mThreePi, \tpr).
\end{equation}
Here, $\widehat{\mathcal{T}}_a^j$ is the transition amplitude for
component~$j$ in this wave and given by
\begin{multlineOrEq}
\label{eq:method:transitionamplcomp}
\widehat{\mathcal{T}}_a^j(\mThreePi, \tpr)
\equiv \sqrt{I_{aa}(\mThreePi)}\, \sqrt{\mThreePi\vphantom{I_a}}\;
\mathcal{P}(\mThreePi, \tpr)\,
\newLineTimesOrNot
\mathcal{C}_a^j(\tpr)\, \mathcal{D}_j(\mThreePi, \tpr; \zeta_j).
\end{multlineOrEq}
With the above, the partial-wave intensity reads
\begin{wideEqOrNot}%
\begin{multline}
\label{eq:method:intensitywave}
\Abs[1]{\widehat{\mathcal{T}}_a(\mThreePi, \tpr)}^2
= \smashoperator[r]{\sum_{j\; \in\; \mathbb{S}_a}}
\Overbrace{I_{aa}(\mThreePi)\, \mThreePi\, \Abs[1]{\mathcal{P}(\mThreePi, \tpr)}^2\,
\Abs[1]{\mathcal{C}_a^j(\tpr)}^2\, \Abs[1]{\mathcal{D}_j(\mThreePi, \tpr; \zeta_j)}^2}{%
\text{Intensity of wave component $j$}} \\
+ \sum_{\mathclap{j\, <\, k\; \in\; \mathbb{S}_a}} I_{aa}(\mThreePi)\, \mThreePi\, \Abs[1]{\mathcal{P}(\mThreePi, \tpr)}^2\,
\Underbrace{2\Re\sBrk{\mathcal{C}_a^j(\tpr)\, \mathcal{D}_j(\mThreePi, \tpr; \zeta_j)\,
\mathcal{C}_a^{k \text{*}}(\tpr)\, \mathcal{D}_k^\text{*}(\mThreePi, \tpr; \zeta_k)}}{%
\text{Overlap of wave components $j$ and $k$}}.
\end{multline}
\end{wideEqOrNot}%
Due to the chosen normalization of the transition amplitudes via
\cref{eq:prod_amplitude_norm}, the partial-wave intensity in
\cref{eq:method:intensitywave} corresponds to the expected number of
events in wave~$a$. Using the same reasoning as for
\cref{eq:expected_ev_nmb_corr_rho}, we interpret the terms
\begin{equation}
\label{eq:method:intensitycomp}
\begin{splitOrNot}
\Abs[1]{\widehat{\mathcal{T}}_a^j(\mThreePi, \tpr)}^2
\alignOrNot= \begin{multlinedOrNot} I_{aa}(\mThreePi)\, \mThreePi\, \Abs[1]{\mathcal{P}(\mThreePi, \tpr)}^2\,
\newLineTimesOrNot
\Abs[1]{\mathcal{C}_a^j(\tpr)}^2\, \Abs[1]{\mathcal{D}_j(\mThreePi, \tpr; \zeta_j)}^2 \end{multlinedOrNot}
\newLineOrNot
\alignOrNot\equiv \frac{\dif{N_a^j}}{\dif{\mThreePi}\, \dif{\tpr}}
\end{splitOrNot}
\end{equation}
as the expected number of events $N_a^j$ in component~$j$ in wave~$a$
in the $(\mThreePi, \tpr)$ bin. Integrating
\cref{eq:method:intensitycomp} over \mThreePi gives the \tpr-dependent
yield, \ie the \tpr spectrum
$\mathcal{I}_a^j(\tpr) \equiv \dif{N_a^j} / \dif{\tpr}$ of wave
component~$j$ in wave~$a$. To account for the nonequidistant \tpr
binning, we normalize in each \tpr bin the intensity to the respective
bin width $\Delta \tpr$:
\begin{multlineOrEq}
\label{eq:tprim-dependence}
\mathcal{I}_a^j(\tpr)
= \frac{1}{\Delta \tpr}\, \Abs[1]{\mathcal{C}_a^j(\tpr)}^2
\smashoperator[r]{\int_{m_\text{min}}^{m_\text{max}}}\! \dif{\mThreePi}\,
I_{aa}(\mThreePi)\, \mThreePi\,
\newLineTimesOrNot
\Abs[1]{\mathcal{P}(\mThreePi, \tpr)}^2\, \Abs[1]{\mathcal{D}_j(\mThreePi, \tpr; \zeta_j)}^2.
\end{multlineOrEq}
The model for the nonresonant amplitudes is valid only within the
applied fit ranges in \mThreePi. Therefore, we use the fit ranges
from \cref{tab:method:fitmodel:waveset} as the \mThreePi integration
range in \cref{eq:tprim-dependence} for all wave components.
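The integral in \cref{eq:tprim-dependence} can be evaluated numerically on a mass grid. The sketch below is illustrative only; the callables passed in are hypothetical stand-ins for $I_{aa}(\mThreePi)$, $\mathcal{P}(\mThreePi, \tpr)$, and $\mathcal{D}_j(\mThreePi, \tpr; \zeta_j)$:

```python
import numpy as np

def tprime_spectrum(coupling, tpr, dt, m, phase_space, prod_prob, dyn_amp):
    """Evaluate the t'-bin intensity of one wave component: |C|^2 / dt
    times the mass integral of I_aa(m) * m * |P(m,t')|^2 * |D(m,t')|^2,
    using the trapezoidal rule on a uniform mass grid `m`."""
    f = (phase_space(m) * m
         * np.abs(prod_prob(m, tpr)) ** 2
         * np.abs(dyn_amp(m, tpr)) ** 2)
    dm = m[1] - m[0]                                  # uniform grid assumed
    integral = dm * (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1])
    return np.abs(coupling) ** 2 / dt * integral
```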
As an example, we show in \cref{fig:method:tp:examplespectrum:m0} the
\tpr spectrum of the \PpiTwo[1880] component in the
\wave{2}{-+}{0}{+}{\Prho}{F} wave. In each \tpr bin, the black
horizontal line indicates the central value of the intensity
$\mathcal{I}_a^j(\tpr)$ of the wave component as determined by
\cref{eq:tprim-dependence}. The horizontal extent of the line
indicates the width of the \tpr bin. The statistical uncertainty is
represented by the height of the gray box around the central value.
It is calculated from the statistical uncertainties of the
resonance-model parameters using Monte Carlo error propagation. For
many wave components, the statistical uncertainties are very small and
barely visible in the diagrams.
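Monte Carlo error propagation can be sketched generically as below (hypothetical helper; in the analysis, the parameter covariance matrix is provided by the resonance-model fit):

```python
import numpy as np

def mc_propagate(func, params, cov, n=20000, seed=2):
    """Monte Carlo error propagation: sample parameter vectors from a
    multivariate normal with mean `params` and covariance `cov`,
    evaluate `func` on each sample, and return the standard deviation
    of the resulting values."""
    rng = np.random.default_rng(seed)
    samples = rng.multivariate_normal(params, cov, size=n)
    values = np.array([func(p) for p in samples])
    return values.std(ddof=1)
```

Unlike linear (Gaussian) error propagation, this approach remains valid for strongly nonlinear functions of the fit parameters, at the cost of more evaluations.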
\begin{figure}[tbp]
\centering
\subfloat[][]{%
\includegraphics[width=\twoPlotWidth]{fig7a}%
\label{fig:method:tp:examplespectrum:m0}%
}%
\newLineOrHspace{\twoPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\twoPlotWidth]{fig7b}%
\label{fig:method:tp:examplespectrum:m1}%
}%
\caption{Examples for \tpr spectra of wave components extracted
according to \cref{eq:tprim-dependence}. The black horizontal
lines indicate the central values, and the gray boxes the
statistical uncertainties (see text for details).
\subfloatLabel{fig:method:tp:examplespectrum:m0}~\PpiTwo[1880]
component in the \wave{2}{-+}{0}{+}{\Prho}{F} wave;
\subfloatLabel{fig:method:tp:examplespectrum:m1}~\PaTwo[1700]
component in the \wave{2}{++}{1}{+}{\PfTwo}{P} wave. The red
curves and lines show the result of a fit of
\cref{eq:slope-parametrization} to the data (see text for
details).}
\label{fig:method:tp:examplespectrum}
\end{figure}
The intensities of most wave components fall approximately
exponentially with increasing \tpr. This is consistent with Regge
theory, which at high energies describes the scattering process as
Pomeron exchange between the beam pion and target proton. For waves
with spin projection $M \neq 0$, the exponential behavior is modified
by an additional $\rbrk{\tpr}^{\abs{M}}$ factor, which is given by the
forward limit of the Wigner $D$-functions~\cite{perl:1974high} and
suppresses the intensity at small~\tpr [see for example
\cref{fig:method:tp:examplespectrum:m1}]. We therefore parametrize
the \tpr spectra by the model
\begin{equation}
\label{eq:slope-parametrization}
\widehat{\mathcal{I}}_a^j(\tpr)
= \dod{\widehat{N}_a^j}{\tpr}
= A_a^j \cdot \rBrk{\tpr}^{\abs{M}} \cdot e^{-b_a^j\, \tpr}
\end{equation}
with the real-valued amplitude parameter $A_a^j$ and the slope
parameter $b_a^j$ for component~$j$ in wave~$a$ as free parameters.
The red curves in \cref{fig:method:tp:examplespectrum} show the result
of a \chisq fit of \cref{eq:slope-parametrization} to the data. In
the formulation of the \chisq, the model function is integrated over
each \tpr bin (red horizontal lines) and compared to the data (black
horizontal lines). For most wave components, the simple model in
\cref{eq:slope-parametrization} holds only approximately and in a
limited \tpr range. Therefore, we exclude the two extremal \tpr bins
and fit the data in the reduced range
\SIvalRange{0.113}{\tpr}{0.724}{\GeVcsq}. For some wave components,
narrower fit ranges are used (see \cref{tab:slopes} in
\cref{sec:results}). The \tpr bins excluded from the fit and the
extrapolations of the model curve are shown in lighter colors.
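A sketch of fitting \cref{eq:slope-parametrization} with SciPy is shown below. For simplicity, the model is evaluated at bin centers rather than integrated over each \tpr bin as in the actual analysis, and the data points are synthetic:

```python
import numpy as np
from scipy.optimize import curve_fit

def tprime_model(tpr, A, b, M=1):
    """A * t'^|M| * exp(-b t'): exponential fall-off modified by the
    t'^|M| factor for spin projection M."""
    return A * tpr ** abs(M) * np.exp(-b * tpr)

# illustrative fit to synthetic, noise-free points (not COMPASS data)
tpr = np.linspace(0.113, 0.724, 9)
y = tprime_model(tpr, A=1000.0, b=8.0)
popt, _ = curve_fit(lambda t, A, b: tprime_model(t, A, b, M=1),
                    tpr, y, p0=(500.0, 5.0))
```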
Special cases are resonance components, for which the coupling
amplitudes in different waves are constrained via
\cref{eq:method:branchingdefinition}. This constrains the \tpr
dependence of the coupling amplitudes $\mathcal{C}_a^j(\tpr)$ in the
different waves to be the same up to complex-valued proportionality
constants, \ie the branching amplitudes
$\prescript{}{b}{\mathcal{B}}_a^j$. Although the dynamic amplitude
$\mathcal{D}_j$ for a resonance component is independent of \tpr, the
\tpr spectra of the resonance component in the different waves can be
slightly different even in this case. This is caused by the
$I_{aa}(\mThreePi)\, \Abs[1]{\mathcal{P}(\mThreePi, \tpr)}^2$ term in
the integrand in \cref{eq:tprim-dependence} as the function
$I_{aa}(\mThreePi)$ is different for different waves. In addition,
the statistical uncertainties of the extracted intensities
$\mathcal{I}_a^j(\tpr)$ are different in the different waves.
Therefore, the slope parameters of resonances in different waves,
which are extracted using \cref{eq:slope-parametrization}, may be
slightly different even though the coupling amplitudes are related by
\cref{eq:method:branchingdefinition}.
\subsection{Extraction of branching-fraction ratios}
\label{sec:method:br}
In order to extract the branching-fraction ratios of resonances that
appear in more than one decay channel, we calculate the yields
$\mathcal{N}_a^j(\tpr)$ of resonance component~$j$ in the
corresponding waves. To this end, we integrate the resonance
intensity in a given \tpr bin over \mThreePi:
\begin{multlineOrEq}
\label{eq:resonance_yield}
\mathcal{N}_a^j(\tpr)
= \Abs[1]{\mathcal{C}_a^j(\tpr)}^2
\smashoperator[r]{\int_{m_\text{min}}^{m_\text{max}}}\! \dif{\mThreePi}\,
I_{aa}(\mThreePi)\, \mThreePi\,
\newLineTimesOrNot
\Abs[1]{\mathcal{D}_j^\text{R}(\mThreePi; \zeta_j^\text{R})}^2.
\end{multlineOrEq}
This expression corresponds to \cref{eq:tprim-dependence} with the
production probability $\Abs[1]{\mathcal{P}(\mThreePi, \tpr)}^2$ set
to unity\footnote{\Cref{eq:resonance_yield} does not include the
production probability because the branching-fraction ratio is a
property of the resonance decay only. Therefore, the yields have
arbitrary units and are not normalized to the number of events.} and
without the division by the \tpr bin width. The branching-fraction
ratio for resonance component~$j$ is defined as the ratio of the
\tpr-summed yields in the two waves~$a$ and~$b$:
\begin{equation}
\label{eq:branch_fract_ratio}
B_{ab}^j
\equiv \frac{\dsum^{\tpr~\text{bins}} \mathcal{N}_a^j(\tpr)}{\dsum^{\tpr~\text{bins}} \mathcal{N}_b^j(\tpr)}.
\end{equation}
It is important to note that due to the phase-space factor
$I_{aa}(\mThreePi)$ in \cref{eq:resonance_yield},
$\mathcal{N}_{a, b}^j(\tpr)$ and therefore also $B_{ab}^j$ depend on
the chosen \mThreePi integration limits. We use
$m_\text{min} = \SI{0.5}{\GeVcc}$ and
$m_\text{max} = \SI{2.5}{\GeVcc}$ for all resonances in all waves.
This mass range is much wider than the width of any of the resonances.
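As an illustration of \cref{eq:resonance_yield,eq:branch_fract_ratio}, the yield integral and the \tpr-summed ratio can be sketched numerically. This is only a toy setup: the constant-width Breit-Wigner and the flat phase-space factors below are hypothetical stand-ins for $\mathcal{D}_j^\text{R}$ and $I_{aa}$, not the parametrizations used in the actual fit.

```python
import numpy as np

def resonance_yield(coupling2, m, phase_space, m0=1.3145, g0=0.1066):
    """N = |C|^2 * integral dm I_aa(m) * m * |D(m)|^2, as a Riemann sum
    over a uniform mass grid. Constant-width Breit-Wigner (toy shape)."""
    bw = (m0 * g0) / (m0**2 - m**2 - 1j * m0 * g0)
    dm = m[1] - m[0]
    return coupling2 * np.sum(phase_space * m * np.abs(bw)**2) * dm

def branching_ratio(yields_a, yields_b):
    """B_ab = (sum over t' bins of N_a) / (sum over t' bins of N_b)."""
    return sum(yields_a) / sum(yields_b)

m = np.linspace(0.5, 2.5, 2001)       # integration range in GeV/c^2
ps_a = np.ones_like(m)                # hypothetical phase-space factors
ps_b = 0.5 * np.ones_like(m)
# one yield per t' bin; the couplings |C|^2 differ from bin to bin
N_a = [resonance_yield(c, m, ps_a) for c in (1.0, 0.7, 0.4)]
N_b = [resonance_yield(c, m, ps_b) for c in (1.0, 0.7, 0.4)]
B = branching_ratio(N_a, N_b)
```

With identical couplings in both waves, the ratio reduces to the ratio of the phase-space integrals, which illustrates why $B_{ab}^j$ depends on the chosen integration limits.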
%
\section{Systematic studies}
\label{sec:systematics}
The physical parameters obtained from the resonance-model fit, \ie the
resonance parameters, the branching-fraction ratios, and the \tpr
slope parameters of the wave components, are subject to systematic
uncertainties related to our fit model and fitting method (see
\cref{sec:method}). In order to estimate these uncertainties, we
performed a large variety of studies. In each study, an aspect of the
analysis is modified and the result is compared to our main result.
In addition to studies that test the stability of the fit result, we
performed studies to evaluate the evidence for selected resonance
signals. These studies are discussed in
\cref{sec:results,sec:syst_uncert}.
Due to the multimodal nature of the \chisq~function (see
\cref{sec:method:fitmethod}), the effects observed in the various
systematic studies are not always statistically independent of one
another. In fact, for some studies the systematic effects are
correlated in a highly nonlinear way. Because of the complexity of
the resonance-model fits and their high computational cost, it is not
possible to estimate the correlations between the various systematic
studies. We therefore estimate the systematic uncertainty intervals
using the minimum and maximum values of the physical parameters
observed in the performed studies. The uncertainties estimated with
this approach do not, in general, represent Gaussian uncertainties.
Unless stated otherwise in \cref{sec:results,sec:syst_uncert}, all
systematic studies discussed below are included in the estimation of
the uncertainty intervals for the extracted parameters. The obtained
systematic uncertainties are found to be at least one order of magnitude
larger than the statistical uncertainties. Hence, we quote in
\cref{sec:results} only the systematic uncertainties and omit the
statistical ones.
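A minimal sketch of this min/max envelope construction (the numerical values below are hypothetical, not actual fit results):

```python
def uncertainty_interval(main_value, study_values):
    """Asymmetric systematic uncertainty from the min/max envelope of
    the parameter values observed in the systematic studies."""
    values = list(study_values) + [main_value]
    upper = max(values) - main_value   # +uncertainty
    lower = main_value - min(values)   # -uncertainty
    return main_value, upper, lower

# hypothetical mass values (MeV/c^2) from a set of studies
val, up, dn = uncertainty_interval(1314.5, [1311.2, 1318.5, 1313.0])
```

Because only the extremes enter, adding further studies can widen but never shrink the interval, which makes the estimate deliberately conservative.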
In this section, we describe only the most important studies that
either define the systematic uncertainties of some resonance
parameters or illustrate interesting effects. We will discuss in
\cref{sec:results,sec:syst_uncert} the effects of these studies on the
resonance and \tpr slope parameters in detail. For easier reference,
the studies are labeled by uppercase letters.
\paragraph*{\StudyK:}
In this study, the influence of background contaminations from kaon
diffraction, kaon pairs in the final state, central-production
reactions, and nonexclusive events in the selected data sample on the
fit result is studied. To this end, the analysis is performed on a
data sample, in which \one~the information from the
particle-identification detectors for the beam (CEDARs) and the
final-state particles (RICH) was not used, \two~the rejection of
central-production events was not applied, and \three~the requirements
of exactly one recoil proton detected in the RPD and of transverse
momentum balance were not applied in the event selection (see
\cref{sec:setup_and_event_selection}). Possible background
contributions are expected to be enhanced in this data sample, which
is \SI{76.2}{\percent} larger than that used for the main analysis.
\paragraph*{Studies~\studyA through~\studyD:}
The selection of the 14~waves that enter the resonance-model fit (see
\cref{tab:method:fitmodel:waveset}) is to some extent subjective. In
addition, the fit model has difficulties describing details of some
partial-wave amplitudes. This is true in particular for the intensity
distribution of the \wave{1}{++}{0}{+}{\Prho}{S} wave, which is the
most dominant wave in the data. We therefore investigate in
Studies~\studyA through~\studyD how various waves influence the fit
result, by omitting single waves or combinations of waves from the
fit. The various studies are listed in
\cref{tab:syst_studies:wave_set}.
\begin{table}[tbp]
\renewcommand{\arraystretch}{1.2}
\centering
\caption{List of studies performed on smaller wave sets, in which some
of the 14~waves that are used in the main fit (see
\cref{tab:method:fitmodel:waveset}) are omitted.}
\label{tab:syst_studies:wave_set}
\begin{tabular}{cl}
\hline
\hline
Study & Omitted waves \\
\hline
\studyA & All four $2^{-+}$ waves \\
\studyB & \wave{1}{++}{0}{+}{\Prho}{S} and \wave{1}{++}{0}{+}{\PfTwo}{P} \\
\studyC & Both $4^{++}$ waves \\
\studyH & \wave{2}{++}{1}{+}{\Prho}{D} \\
\studyI & \wave{2}{++}{2}{+}{\Prho}{D} \\
\studyJ & \wave{2}{++}{1}{+}{\PfTwo}{P} \\
\studyE & \wave{2}{++}{2}{+}{\Prho}{D} and \wave{2}{++}{1}{+}{\PfTwo}{P} \\
\studyF & \wave{2}{++}{1}{+}{\Prho}{D} and \wave{2}{++}{1}{+}{\PfTwo}{P} \\
\studyG & \wave{2}{++}{1}{+}{\Prho}{D} and \wave{2}{++}{2}{+}{\Prho}{D} \\
\studyD & All three $2^{++}$ waves \\
\hline
\hline
\end{tabular}
\end{table}
\paragraph*{\StudyL:}
We investigate the impact of the \tpr binning by applying a coarser
\tpr binning to the data using only eight bins, which are given in
\cref{tab:t-bins_8}.
\begin{table*}[tbp]
\sisetup{%
round-mode = places,
round-precision = 3
}
\caption{Borders of the eight nonequidistant \tpr bins used for
\StudyL. The intervals are chosen such that each bin contains
approximately \num[round-mode = places, round-precision =
1]{5.8e6} events.}
\label{tab:t-bins_8}
\renewcommand{\arraystretch}{1.2}
\newcolumntype{Z}{%
>{\Makebox[0pt][c]\bgroup}%
c%
<{\egroup}%
}
\setlength{\tabcolsep}{0pt} %
\begin{tabular}{l@{\extracolsep{12pt}}c@{\extracolsep{6pt}}Z*{7}{cZ}c}
\hline
\hline
Bin && 1 && 2 && 3 && 4 && 5 && 6 && 7 && 8 & \\
\hline
\tpr [\si{\GeVcsq}] &
\num{0.100} &&
\num{0.116} &&
\num{0.136} &&
\num{0.159} &&
\num{0.188} &&
\num{0.227} &&
\num{0.285} &&
\num{0.395} &&
\num{1.000} \\
\hline
\hline
\end{tabular}%
\end{table*}
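Nonequidistant borders like those in \cref{tab:t-bins_8} can be obtained by splitting the selected \tpr range at quantiles of the measured distribution, so that every bin holds approximately the same number of events. A sketch of this construction (the exponential toy sample is an assumption, chosen only to mimic a falling \tpr spectrum):

```python
import numpy as np

def equipopulated_bins(tprime, n_bins, lo=0.1, hi=1.0):
    """Bin borders such that each bin contains ~ the same number of events."""
    sel = np.sort(tprime[(tprime >= lo) & (tprime < hi)])
    edges = np.quantile(sel, np.linspace(0.0, 1.0, n_bins + 1))
    edges[0], edges[-1] = lo, hi   # pin outer borders to the analysis range
    return edges

rng = np.random.default_rng(1)
# toy t' sample falling roughly exponentially, as in diffractive production
t = rng.exponential(scale=1.0 / 8.0, size=100_000)
edges = equipopulated_bins(t, n_bins=8)
counts, _ = np.histogram(t, bins=edges)
```

Because the spectrum falls steeply, the resulting bins are narrow at low \tpr and wide at high \tpr, as in the table above.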
\paragraph*{\StudyT:}
The impact of the assumption that the \tpr dependence of resonance
amplitudes is the same in partial waves with the same \JPCMrefl
quantum numbers but different decay modes is investigated in this
study. To this end, we performed a resonance-model fit without the
constraint in \cref{eq:method:branchingdefinition}, so that the \tpr
dependence of the resonance amplitudes can be chosen freely by the fit
in all partial waves. This model has 942~free parameters in
comparison to the 722~free parameters of the main fit. Despite the
substantially increased number of free parameters, the minimum \chisq~value
decreases only by a factor of \num{0.93} \wrt the main fit. This
shows that for many resonances the constraint in
\cref{eq:method:branchingdefinition} is consistent with the data.
\StudyT plays a special role in the determination of the systematic
uncertainties of the branching-fraction ratios that are calculated
using \cref{eq:branch_fract_ratio}. For a true resonance, the
branching-fraction ratio is expected to be independent of \tpr. We
include the values found in the individual \tpr bins in the estimation
of the uncertainty intervals for the branching-fraction ratios.
\paragraph*{\StudyO:}
As described in \cref{sec:method:fitmodel}, we use a purely
phenomenological parametrization for the nonresonant contributions
[see \cref{eq:method:nonresterm}]. The choice of this parametrization
may impact the fit result, in particular for waves with significant
nonresonant contributions. Although we cannot uniquely identify
underlying physics processes, Deck-like processes~\cite{deck:1964hm}
are believed to play a major role. Several models exist for the Deck
process. An example is shown in \cref{fig:deck_model}. Using the
Deck model in \cref{eq:deck_ampl}, which is discussed in
\cref{sec:deck_model}, we generated \num{e8}~Monte Carlo events and
performed a mass-independent analysis using the same model with
88~waves as for the real data. In \StudyO, we replace the
parametrizations of the nonresonant amplitudes [see
\cref{eq:method:nonresterm,eq:method:nonrestermsmall}] by the square
root of the intensity distributions of the Deck Monte Carlo data in
each partial wave. As in the main fit, the phases of these
partial-wave projections of the Deck amplitude are assumed to be
independent of \mThreePi. In \StudyO, the fit model has 693~free
parameters in comparison to the 722~free parameters of the main
fit. \Wrt the main fit, the minimum \chisq~value increases by a factor
of~\num{1.42}. In order to find out which partial-wave amplitudes are
described differently, we decompose the \chisq~difference between the
study and the main fit into contributions from the elements of the
matrix $\Lambda_{a b}(\mThreePi, \tpr)$ defined in
\cref{eq:method:fitmethod:rhoredef}. This is visualized in
\cref{fig:DeckMC_chi2difference} in the same way as in
\cref{fig:chi2Matrix}. The diagonal elements show the contributions
to the \chisq~difference from the intensity distributions of each
partial wave; the off-diagonal elements show the contributions from
the real (upper triangle) and imaginary (lower triangle) parts of the
interference terms between the waves.
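The arrangement of intensities and interference terms into such a matrix can be sketched as follows; the transition amplitudes~$T$ here are arbitrary complex numbers, not fit results.

```python
import numpy as np

def interference_matrix(T):
    """Arrange spin-density elements as in the chi^2 decomposition:
    diagonal: intensities |T_a|^2; upper triangle: Re(T_a T_b*);
    lower triangle: Im(T_a T_b*)."""
    n = len(T)
    L = np.empty((n, n))
    for a in range(n):
        for b in range(n):
            rho_ab = T[a] * np.conj(T[b])
            if a == b:
                L[a, b] = rho_ab.real   # intensity of wave a
            elif a < b:
                L[a, b] = rho_ab.real   # real part of interference term
            else:
                L[a, b] = rho_ab.imag   # imaginary part of interference term
    return L

# two toy waves with a 90 degree relative phase
L = interference_matrix(np.array([1.0 + 0.0j, 0.0 + 1.0j]))
```

Summing the element-wise differences of two such matrices over all $(\mThreePi, \tpr)$ bins yields the decomposition shown in the figure.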
\Cref{fig:DeckMC_chi2difference} shows that the largest contribution
to the \chisq~increase in \StudyO comes from the
\wave{1}{++}{0}{+}{\Prho}{S} amplitude. \StudyO is particularly
relevant for the interpretation of the resonance signals in the
$1^{++}$ and $1^{-+}$ waves (see
\cref{sec:onePP_results,sec:oneMP_results}).
\begin{figure}[tbp]
\centering
\ifMultiColumnLayout{\includegraphics[width=\linewidth]{fig8}}{\includegraphics[scale=1]{fig8}}
\caption{Example for a nonresonant production process for the $3\pi$
final state as proposed by Deck~\cite{deck:1964hm}. In this
process, the beam pion dissociates into the isobar~$\xi^0$ and the
bachelor~$\pi^-$, followed by diffractive scattering of one of
these beam fragments (typically the~$\pi^-$, as shown here) off
the target proton.}
\label{fig:deck_model}
\end{figure}
\begin{figure}[tbp]
\centering
\includegraphics[width=\linewidthOr{\twoPlotWidth}]{fig9}
\caption{Decomposition of the \chisq~difference between the main fit
and the fit, in which the parametrization of the nonresonant
amplitude was replaced by the square root of the intensity
distribution of the partial-wave projections of Deck Monte Carlo
data [\StudyO]. The \chisq~difference is visualized in the form
of a matrix, which shows the contributions (summed over the
\mThreePi and \tpr bins) from the intensities and interference
terms to the \chisq~difference. Positive values (red colors)
indicate that the data are described less well in the study. The
rare negative values (blue colors) indicate that the data are
described better in the study.}
\label{fig:DeckMC_chi2difference}
\end{figure}
\paragraph*{Studies~\studyS and~\studyR:}
As explained in \cref{sec:method:fitmethod}, the minimum value of the
\chisq~function that is determined by the resonance-model fit does not
follow a \chisq~distribution because \cref{eq:method:fitmethod:chi2}
does not take into account the correlations of the spin-density matrix
elements. In order to test the potential bias introduced by this, we
constructed two possible \chisq~functions that take into account these
correlations (see \cref{sec:alt_chi_2}). In \StudyS, we use the
\chisq~formulation in \cref{eq:systematics:chi2_alt1} with
\cref{eq:systematics:deviation1,eq:systematics:deviation2,eq:systematics:deviation3},
which is based on a single row of the spin-density matrix. In
\StudyR, we use \cref{eq:systematics:chi2_alt1} with
\cref{eq:systematics:substitution}, which directly compares the
modeled and measured transition amplitudes. The differences between
the resonance parameters estimated in Studies~\studyS and~\studyR are
small compared to the systematic uncertainties. Comparing the two
studies with the main solution, large effects are only observed for
the resonances in the $1^{++}$ and $1^{-+}$ waves. They are discussed
in \cref{sec:onePP_results,sec:oneMP_results}.
\paragraph*{\StudyAF:}
The model we employ for the diffractive-production probability
$\Abs[0]{\mathcal{P}(\mThreePi, \tpr)}^2$ in
\cref{eq:method:param:spindens,eq:method:param:prods} also influences
the fit result. In order to estimate the systematic effect, we
performed a study, in which this factor was set to unity. \Wrt the
main fit, the minimum \chisq~value increased by a factor of \num{1.01}
while the number of free parameters remained unchanged. This shows
that both models describe the data on average equally well. For most
of the resonance parameters the effects observed in \StudyAF are
small. Exceptions are the \PaOne[1640] (see
\cref{sec:syst_uncert_onePP}), the \Ppi[1800] (see
\cref{sec:syst_uncert_zeroMP}), and the \PpiTwo* resonances (see
\cref{sec:syst_uncert_twoMP}).
\paragraph*{Studies~\studyM and~\studyN:}
We also studied the effect of the range parameter~$q_R$ of the
Blatt-Weisskopf factors in the decay $X^- \to \xi^0 \pi^-$ (vertex~1
in \cref{fig:3pi_reaction_isobar}). These factors appear explicitly
in \cref{eq:method:a2dynamicwidth} and implicitly in the phase-space
integrals~$I_{aa}$ in
\cref{eq:method:transitionampl,eq:method:nonresterm:qtilde}. In
\StudyM, we set~$q_R$ to \SI{267}{\MeVc} and in \StudyN to
\SI{155}{\MeVc} corresponding to assumed strong-interaction ranges of
\SI{0.75}{\femto\meter} and \SI{1.29}{\femto\meter}, respectively.
Most resonance parameters change only slightly in both studies.
Exceptions are the \PaTwo* resonances (see
\cref{sec:syst_uncert_twoPP}), the \PpiOne[1600] (see
\cref{sec:syst_uncert_oneMP}), and the \PaFour (see
\cref{sec:syst_uncert_fourPP}).
Integrating the model function in \cref{eq:method:fitmethod:chi2} over
the \mThreePi bins instead of taking the function values at the mass
bin centers does not significantly influence the resonance parameters.
%
\section{Results on resonance parameters and \tpr spectra of wave components}
\label{sec:results}
In this section, we describe and discuss the results of the
resonance-model fit grouped by the \JPC quantum numbers of the
resonances. The subsections are ordered by increasing complexity of
the results. We start with the \JPC sectors that contain the clearest
resonance signals that are well described by our model and later
discuss the more complicated cases, where several resonances with the
same \JPC quantum numbers appear. In the final subsection, \cref{sec:oneMP}, we
discuss the resonance content of the spin-exotic $\JPC = 1^{-+}$ wave.
The extracted Breit-Wigner resonance parameters and their systematic
uncertainties are listed in \cref{tab:parameters} and are compared to
the PDG averages as listed in \refCite{Patrignani:2016xqp}. The
positions of the resonance poles of the Breit-Wigner amplitudes in the
complex energy plane are discussed in \cref{sec:pole_positions}. The
\tpr slope parameters of the resonant and nonresonant wave components,
determined by fitting the extracted \tpr spectra using
\cref{eq:slope-parametrization} (see \cref{sec:method:tp}), are listed
in \cref{tab:slopes}. In the presentation of the results, we restrict
ourselves to figures that illustrate the typical quality of the fit or
certain aspects of the analysis. The full fit result can be found in
the supplemental
material\ifMultiColumnLayout{~\cite{paper3_supplemental_material}}{ in
\cref{sec:spin-dens_matrices,sec:phase-space_vol}} together with
additional information required to perform the resonance-model fit.
The data required to perform the resonance-model fit are provided in
computer-readable format at~\cite{paper3_hepdata}.
\begin{wideTableOrNot}[tbh]
\renewcommand{\arraystretch}{\ifMultiColumnLayout{1.0}{1.2}}
\centering
\captionsetup[subtable]{position=top}
\caption{Resonance parameters with systematic uncertainties as
extracted in this analysis. The statistical uncertainties are at
least an order of magnitude smaller than the systematic ones and
are hence omitted. For comparison, the PDG averages are
listed~\cite{Patrignani:2016xqp}. For the \PaTwo, we quote the
PDG average for the $3\pi$ decay mode. For the two entries marked
with a \enquote{\,*\,}, no PDG average exists. The \PaOne[1420] is
listed as \enquote{omitted from summary table} and the quoted mass
and width values were estimated in an earlier COMPASS analysis
based on the same data set that is used here but with only three
waves in the resonance-model fit~\cite{Adolph:2015pws}. The
\PpiTwo[2005] is listed as a \enquote{further state} and we quote
for comparison the parameters measured by the BNL E852
experiment~\cite{Lu:2004yn} with the statistical and systematic
uncertainties added in quadrature.}
\label{tab:parameters}
\let\parColWidth\relax
\newlength{\parColWidth}
\ifMultiColumnLayout{\setlength{\parColWidth}{0.095\linewidth}}{\setlength{\parColWidth}{0.111\linewidth}}
\subfloat[$a_{J}$-like resonances]{%
\label{tab:parameters:a}%
\begin{tabular}{llp{\parColWidth}p{\parColWidth}p{\parColWidth}p{\parColWidth}p{\parColWidth}p{\parColWidth}}
\hline
\hline
& &
\multicolumn{1}{c}{\PaOne} &
\multicolumn{1}{c}{\PaOne[1420]} &
\multicolumn{1}{c}{\PaOne[1640]} &
\multicolumn{1}{c}{\PaTwo} &
\multicolumn{1}{c}{\PaTwo[1700]} &
\multicolumn{1}{c}{\PaFour} \\
& &
\multicolumn{3}{c}{(\cref{sec:onePP})} &
\multicolumn{2}{c}{(\cref{sec:twoPP})} &
\multicolumn{1}{c}{(\cref{sec:fourPP})} \\
\hline
\rule{0pt}{1.1\normalbaselineskip}
\multirow{4}{*}{\rotatebox{90}{\hspace{3pt} COMPASS}} & Mass &
\multirow{2}{*}{$1299\,^{+12}_{-28}$} &
\multirow{2}{*}{$1411\,^{+4}_{-5}$} &
\multirow{2}{*}{$1700\,^{\phantom{1}+35}_{-130}$} &
\multirow{2}{*}{$1314.5\,^{+4.0}_{-3.3}$} &
\multirow{2}{*}{$1681\,^{+22}_{-35}$} &
\multirow{2}{*}{$1935\,^{+11}_{-13}$} \\[\ifMultiColumnLayout{0ex}{-1ex}]
& {\small [\si{\MeVcc}]} & & & & & & \\
& Width &
\multirow{2}{*}{$\phantom{1}380 \pm 80$} &
\multirow{2}{*}{$\phantom{1}161\,^{+11}_{-14}$} &
\multirow{2}{*}{$\phantom{1}510\,^{+170}_{\phantom{1}-90}$} &
\multirow{2}{*}{$\phantom{1}106.6\,^{+3.4}_{-7.0}$} &
\multirow{2}{*}{$\phantom{1}436\,^{+20}_{-16}$} &
\multirow{2}{*}{$\phantom{1}333\,^{+16}_{-21}$} \\[\ifMultiColumnLayout{0ex}{-1ex}]
& {\small [\si{\MeVcc}]} & & & & & & \\[0.5ex]
\hline
\rule{0pt}{1.1\normalbaselineskip}
\multirow{4}{*}{\rotatebox{90}{\centering PDG}} & Mass &
\multirow{2}{*}{$1230 \pm 40$} &
\multirow{2}{*}{$1414\,^{+15}_{-13}$} &
\multirow{2}{*}{$1647 \pm 22$} &
\multirow{2}{*}{$1319.0\,^{+1.0}_{-1.3}$} &
\multirow{2}{*}{$1732 \pm 16$} &
\multirow{2}{*}{$1995\,^{+10}_{\phantom{1}-8}$} \\[\ifMultiColumnLayout{0ex}{-1ex}]
& {\small [\si{\MeVcc}]} & & & & & & \\
& Width &
\multirow{2}{*}{$250$ to $600$} &
\multirow{2}{*}{$\phantom{1}153\,^{\phantom{1}+8}_{-23}$} &
\multirow{2}{*}{$\phantom{1}254 \pm 27$} &
\multirow{2}{*}{$\phantom{1}105.0\,^{+1.6}_{-1.9}$} &
\multirow{2}{*}{$\phantom{1}194 \pm 40$} &
\multirow{2}{*}{$\phantom{1}257\,^{+25}_{-23}$} \\[\ifMultiColumnLayout{0ex}{-1ex}]
& {\small [\si{\MeVcc}]} & & & & & & \\[\ifMultiColumnLayout{-3ex}{-1ex}]
& & & \multicolumn{1}{c}{*} & & & & \\[\ifMultiColumnLayout{-2ex}{-1ex}]
\hline
\hline
\end{tabular}%
}%
\\
\subfloat[$\pi_{J}$-like resonances]{%
\label{tab:parameters:p}%
\begin{tabular}{llp{\parColWidth}p{\parColWidth}p{\parColWidth}p{\parColWidth}p{\parColWidth}}
\hline
\hline
& &
\multicolumn{1}{c}{\Ppi[1800]} &
\multicolumn{1}{c}{\PpiOne[1600]} &
\multicolumn{1}{c}{\PpiTwo} &
\multicolumn{1}{c}{\PpiTwo[1880]} &
\multicolumn{1}{c}{\PpiTwo[2005]} \\
& &
\multicolumn{1}{c}{(\cref{sec:zeroMP})} &
\multicolumn{1}{c}{(\cref{sec:oneMP})} &
\multicolumn{3}{c}{(\cref{sec:twoMP})} \\
\hline
\rule{0pt}{1.1\normalbaselineskip}
\multirow{4}{*}{\rotatebox{90}{\hspace{3pt} COMPASS}} & Mass &
\multirow{2}{*}{$1804\,^{+6}_{-9}$} &
\multirow{2}{*}{$1600\,^{+110}_{\phantom{1}-60}$} &
\multirow{2}{*}{$1642\,^{+12}_{\phantom{1}-1}$} &
\multirow{2}{*}{$1847\,^{+20}_{\phantom{1}-3}$} &
\multirow{2}{*}{$1962\,^{+17}_{-29}$} \\[\ifMultiColumnLayout{0ex}{-1ex}]
& {\small [\si{\MeVcc}]} & & & & & \\
& Width &
\multirow{2}{*}{$\phantom{1}220\,^{\phantom{1}+8}_{-11}$} &
\multirow{2}{*}{$\phantom{1}580\,^{+100}_{-230}$} &
\multirow{2}{*}{$\phantom{1}311\,^{+12}_{-23}$} &
\multirow{2}{*}{$\phantom{1}246\,^{+33}_{-28}$} &
\multirow{2}{*}{$\phantom{1}371\,^{\phantom{1}+16}_{-120}$} \\[\ifMultiColumnLayout{0ex}{-1ex}]
& {\small [\si{\MeVcc}]} & & & & & \\[0.5ex]
\hline
\rule{0pt}{1.1\normalbaselineskip}
\multirow{4}{*}{\rotatebox{90}{\centering PDG}} & Mass &
\multirow{2}{*}{$1812 \pm 12$} &
\multirow{2}{*}{$1662\,^{+8}_{-9}$} &
\multirow{2}{*}{$1672.2 \pm 3.0$} &
\multirow{2}{*}{$1895 \pm 16$} &
\multirow{2}{*}{$1974 \pm 84$} \\[\ifMultiColumnLayout{0ex}{-1ex}]
& {\small [\si{\MeVcc}]} & & & & & \\
& Width &
\multirow{2}{*}{$\phantom{1}208 \pm 12$} &
\multirow{2}{*}{$\phantom{1}241 \pm 40$} &
\multirow{2}{*}{$\phantom{1}260\phantom{.1} \pm 9$} &
\multirow{2}{*}{$\phantom{1}235 \pm 34$} &
\multirow{2}{*}{$\phantom{1}341 \pm 152$} \\[\ifMultiColumnLayout{0ex}{-1ex}]
& {\small [\si{\MeVcc}]} & & & & & \\[\ifMultiColumnLayout{-3ex}{-1ex}]
& & & & & & \multicolumn{1}{c}{*} \\[\ifMultiColumnLayout{-2ex}{-1ex}]
\hline
\hline
\end{tabular}%
}%
\end{wideTableOrNot}
\ifMultiColumnLayout{%
\begin{table*}[p]
%
\renewcommand{\thempfootnote}{[\arabic{mpfootnote}]} %
}{%
\begin{table}[h!]
\begin{minipage}{\textwidth}
\renewcommand\footnoterule{}}
\renewcommand{\arraystretch}{1.4}
\captionsetup[subtable]{position=top}
\caption{The \tpr slope parameters~$b_a^j$ in units of
\si{\perGeVcsq} extracted by fitting
\cref{eq:slope-parametrization} to the \tpr spectra of the wave
components. The quoted uncertainties are of systematic origin.
Statistical uncertainties are at least an order of magnitude
smaller than the systematic ones and are hence omitted. For most
wave components, the fits are performed in the range
\SIvalRange{0.113}{\tpr}{0.724}{\GeVcsq}. Reduced fit ranges are
given in the footnotes. Cases where the model is not able to
describe the \tpr spectrum are marked by a dagger
(\enquote{$\dagger$}). Partial waves, for which the \tpr
dependence of the resonance amplitudes is connected via
\cref{eq:method:branchingdefinition}, are marked with a star
(\enquote{\,*\,}). Slight differences of the extracted
slope-parameter values for the resonances in these waves originate
from differences in the decay phase-space volumes and in the
statistical uncertainties (see \cref{sec:method:tp}).}
\label{tab:slopes}
\let\parColWidth\relax
\newlength{\parColWidth}
\ifMultiColumnLayout{\setlength{\parColWidth}{0.08\linewidth}}{\setlength{\parColWidth}{0.14\linewidth}}
\let\parColWidthA\relax
\newlength{\parColWidthA}
\ifMultiColumnLayout{\setlength{\parColWidthA}{0.1475\linewidth}}{\setlength{\parColWidthA}{0.2\linewidth}}
\ifMultiColumnLayout{}{\captionsetup[subtable]{%
%
singlelinecheck=false,
justification=raggedright,
margin=6em, %
}}
\subfloat[$0^{-+}$ Waves (\cref{sec:zeroMP})]{%
\label{tab:slopes:0mp}%
\small%
\begin{tabular}{p{\parColWidthA}cp{\parColWidth}p{\parColWidth}}
\hline
\hline
\multicolumn{1}{c}{Wave} & \phantom{*} & \multicolumn{1}{c}{\Ppi[1800]} & \multicolumn{1}{c}{Nonresonant} \\
\hline
\wave{0}{-+}{0}{+}{\PfZero[980]}{S} & & $\phantom{1}8.8\,^{+0.7}_{-0.3}$ & $26\,^{+6}_{-5}$\stepcounter{footnote}\footnotemark[\value{footnote}] \\
\hline
\hline
\end{tabular}%
}%
\\
\subfloat[$1^{++}$ Waves (\cref{sec:onePP})]{%
\label{tab:slopes:1pp}%
\small%
\begin{tabular}{p{\parColWidthA}cp{\parColWidth}p{\parColWidth}p{\parColWidth}p{\parColWidth}}
\hline
\hline
\multicolumn{1}{c}{Wave} & \phantom{*} & \multicolumn{1}{c}{\PaOne} & \multicolumn{1}{c}{\PaOne[1420]} & \multicolumn{1}{c}{\PaOne[1640]} & \multicolumn{1}{c}{Nonresonant} \\
\hline
\wave{1}{++}{0}{+}{\Prho}{S} & * & $11.8\,^{+0.9}_{-4.2}$ & \multicolumn{1}{c}{---} & $\phantom{1}7.7\,^{+6.2}_{-0.4}$ & $12.5\,^{+2.1}_{-1.5}$ \\
\wave{1}{++}{0}{+}{\PfZero[980]}{P} & & \multicolumn{1}{c}{---} & $\phantom{1}9.5\,^{+0.6}_{-1.0}$ & \multicolumn{1}{c}{---} & $11.8\,^{+0.8}_{-1.2}$ \\
\wave{1}{++}{0}{+}{\PfTwo}{P} & * & $11 \pm 4$ & \multicolumn{1}{c}{---} & $\phantom{1}7.6\,^{+1.6}_{-0.5}$ & $11.2\,^{+2.7}_{-2.2}$ \\
\hline
\hline
\end{tabular}%
}%
\\
\subfloat[$1^{-+}$ Waves (\cref{sec:oneMP})]{%
\label{tab:slopes:1mp}%
\small%
\begin{tabular}{p{\parColWidthA}cp{\parColWidth}p{\parColWidth}}
\hline
\hline
\multicolumn{1}{c}{Wave} & \phantom{*} & \multicolumn{1}{c}{\PpiOne[1600]} & \multicolumn{1}{c}{Nonresonant} \\
\hline
\wave{1}{-+}{1}{+}{\Prho}{P} & & \multicolumn{1}{c}{$\dagger$} & $19.1\,^{+1.4}_{-4.7}$\stepcounter{footnote}\footnotemark[\value{footnote}] \\
\hline
\hline
\end{tabular}%
}%
\\
\subfloat[$2^{++}$ Waves (\cref{sec:twoPP})]{%
\label{tab:slopes:2pp}%
\small%
\begin{tabular}{p{\parColWidthA}cp{\parColWidth}p{\parColWidth}p{\parColWidth}}
\hline
\hline
\multicolumn{1}{c}{Wave} & \phantom{*} & \multicolumn{1}{c}{\PaTwo} & \multicolumn{1}{c}{\PaTwo[1700]} & \multicolumn{1}{c}{Nonresonant} \\
\hline
\wave{2}{++}{1}{+}{\Prho}{D} & * & $\phantom{1}7.9 \pm 0.5$ & $\phantom{1}7.3\,^{+2.4}_{-0.9}$ & $13.6\,^{+0.4}_{-1.8}$ \\
\wave{2}{++}{2}{+}{\Prho}{D} & & $\phantom{1}9.0\,^{+1.2}_{-0.7}$ & \multicolumn{1}{c}{$\dagger$} & $\phantom{1}8.1\,^{+1.6}_{-0.5}$ \\
\wave{2}{++}{1}{+}{\PfTwo}{P} & * & $\phantom{1}7.8\,^{+0.6}_{-0.5}$ & $\phantom{1}7.2\,^{+1.1}_{-0.8}$ & \multicolumn{1}{c}{$\dagger$} \\
\hline
\hline
\end{tabular}%
}%
\\
\subfloat[$2^{-+}$ Waves (\cref{sec:twoMP})]{%
\label{tab:slopes:2mp}%
\small%
\begin{tabular}{p{\parColWidthA}cp{\parColWidth}p{\parColWidth}p{\parColWidth}p{\parColWidth}}
\hline
\hline
\multicolumn{1}{c}{Wave} & \phantom{*} & \multicolumn{1}{c}{\PpiTwo} & \multicolumn{1}{c}{\PpiTwo[1880]} & \multicolumn{1}{c}{\PpiTwo[2005]} & \multicolumn{1}{c}{Nonresonant} \\
\hline
\wave{2}{-+}{0}{+}{\Prho}{F} & * & $\phantom{1}8.5\,^{+0.9}_{-0.5}$ & $\phantom{1}7.8\,^{+0.5}_{-0.9}$ & $\phantom{1}6.8\,^{+0.4}_{-3.9}$ & \multicolumn{1}{c}{$\dagger$} \\
\wave{2}{-+}{0}{+}{\PfTwo}{S} & * & $\phantom{1}8.5\,^{+0.9}_{-0.5}$ & $\phantom{1}7.8\,^{+7.5}_{-0.9}$ & $\phantom{1}6.7\,^{+0.4}_{-1.3}$ & \multicolumn{1}{c}{$\dagger$} \\
\wave{2}{-+}{1}{+}{\PfTwo}{S} & & \multicolumn{1}{c}{$\dagger$} & \multicolumn{1}{c}{$\dagger$} & $\phantom{1}7.1\,^{+3.5}_{-2.6}$ & $\phantom{1}6.9\,^{+1.1}_{-1.9}$ \\
\wave{2}{-+}{0}{+}{\PfTwo}{D} & * & $\phantom{1}8.4\,^{+0.8}_{-1.7}$ & $\phantom{1}7.8\,^{+0.5}_{-0.9}$ & $\phantom{1}6.7\,^{+0.4}_{-1.3}$ & $12\,^{+6}_{-2}$ \\
\hline
\hline
\end{tabular}%
}%
\\
\subfloat[$4^{++}$ Waves (\cref{sec:fourPP})]{%
\label{tab:slopes:4pp}%
\small%
\begin{tabular}{p{\parColWidthA}cp{\parColWidth}p{\parColWidth}}
\hline
\hline
\multicolumn{1}{c}{Wave} & \phantom{*} & \multicolumn{1}{c}{\PaFour} & \multicolumn{1}{c}{Nonresonant} \\
\hline
\wave{4}{++}{1}{+}{\Prho}{G} & * & $\phantom{1}9.2\,^{+0.8}_{-0.5}$ & $14 \pm 4$ \\
\wave{4}{++}{1}{+}{\PfTwo}{F} & * & $\phantom{1}9.2\,^{+0.8}_{-0.5}$ & $14.5\,^{+1.8}_{-3.7}$ \\
\hline
\hline
\end{tabular}%
}%
\ifMultiColumnLayout{\renewcommand{\thempfootnote}{\arabic{mpfootnote}}}{} %
\addtocounter{footnote}{-1}%
\footnotetext[\value{footnote}]{Fit range \SIvalRange{0.113}{\tpr}{0.326}{\GeVcsq}.}
\stepcounter{footnote}%
\footnotetext[\value{footnote}]{Fit range \SIvalRange{0.113}{\tpr}{0.449}{\GeVcsq}.}
\ifMultiColumnLayout{\end{table*}}{\end{minipage}\end{table}}
\clearpage
\subsection{$\JPC = 0^{-+}$ resonances}
\label{sec:zeroMP}
\subsubsection{Results on $0^{-+}$ resonances}
\label{sec:zeroMP_results}
The only $\JPC = 0^{-+}$ wave included in the resonance-model fit is
the \wave{0}{-+}{0}{+}{\PfZero[980]}{S} wave. It contributes
\SI{2.4}{\percent} to the total intensity in the mass range from
\SIrange{0.5}{2.5}{\GeVcc}. The intensity distribution of this wave
is shown in
\cref{fig:intensity_0mp_tbin1,fig:intensity_0mp_tbin10,fig:intensity_0mp_tbin11}
for three \tpr bins. Except in the highest \tpr bin, the intensity
distributions exhibit a clear peak of the \Ppi[1800] resonance at
$\mThreePi \approx \SI{1.8}{\GeVcc}$ with a shoulder toward lower
masses. The picture changes dramatically in the highest \tpr bin,
where the intensity at the \Ppi[1800] peak position is close to zero
and hence the low-mass shoulder dominates the spectrum.
\Cref{fig:intensity_phases_0mp} also shows, as an example, the
\mThreePi dependence of the relative phases of the $0^{-+}$ wave \wrt
the \wave{1}{++}{0}{+}{\Prho}{S} and the \wave{2}{-+}{1}{+}{\PfTwo}{S}
waves. Clearly rising phase motions are observed in the
\SI{1.8}{\GeVcc} mass region.
\ifMultiColumnLayout{\begin{figure*}[p]}{\begin{figure}[tbp]}
\centering
\subfloat[][]{%
\includegraphics[width=\threePlotWidth]{fig10a}%
\label{fig:intensity_0mp_tbin1}%
}%
\hspace*{\threePlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\threePlotWidth]{fig10b}%
\label{fig:phase_0mp_1pp_rho_tbin1}%
}%
\hspace*{\threePlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\threePlotWidth]{fig10c}%
\label{fig:phase_0mp_2mp_f2_tbin1}%
}%
\\
\subfloat[][]{%
\includegraphics[width=\threePlotWidth]{fig10d}%
\label{fig:intensity_0mp_tbin10}%
}%
\hspace*{\threePlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\threePlotWidth]{fig10e}%
\label{fig:phase_0mp_1pp_rho_tbin10}%
}%
\hspace*{\threePlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\threePlotWidth]{fig10f}%
\label{fig:phase_0mp_2mp_f2_tbin10}%
}%
\\
\subfloat[][]{%
\includegraphics[width=\threePlotWidth]{fig10g}%
\label{fig:intensity_0mp_tbin11}%
}%
\hspace*{\threePlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\threePlotWidth]{fig10h}%
\label{fig:phase_0mp_1pp_rho_tbin11}%
}%
\hspace*{\threePlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\threePlotWidth]{fig10i}%
\label{fig:phase_0mp_2mp_f2_tbin11}%
}%
\caption{The \wave{0}{-+}{0}{+}{\PfZero[980]}{S} partial-wave
amplitude in three \tpr bins (rows): (left column) intensity
distributions; (central column) phase motions \wrt the
\wave{1}{++}{0}{+}{\Prho}{S} wave; (right column) phase motions
\wrt the \wave{2}{-+}{1}{+}{\PfTwo}{S} wave. The data points are
taken from \refCite{Adolph:2015tqa} and represent the so-called
mass-independent analysis (see \cref{sec:mass-independent_fit}).
The red curve represents the full model (see
\cref{tab:method:fitmodel:waveset}), which is the coherent sum of
the wave components. The other curves represent the wave
components: \Ppi[1800] resonance (blue curves), nonresonant
contribution (green curves). The extrapolations of the model and
the wave components beyond the fit range are shown in lighter
colors.}
\label{fig:intensity_phases_0mp}
\ifMultiColumnLayout{\end{figure*}}{\end{figure}}
The data are well described by the fit model (red curves in
\cref{fig:intensity_phases_0mp}), which contains two $0^{-+}$
components: a Breit-Wigner resonance for the \Ppi[1800] (blue curves)
and a nonresonant component (green curves). The extrapolations of
these curves below and above the fitted mass range of
\SIvalRange{1.2}{\mThreePi}{2.3}{\GeVcc} are shown in lighter colors
in \cref{fig:intensity_phases_0mp}. The \Ppi[1800] is parametrized
using \cref{eq:BreitWigner,eq:method:fixedwidth}, the nonresonant
component using \cref{eq:method:nonrestermsmall} (see
\cref{tab:method:fitmodel:waveset}). In our fit model, the
low-mass shoulder is attributed to the nonresonant contribution. At
low values of \tpr, it interferes constructively with the resonance at
the \Ppi[1800] peak position. At higher values of \tpr, the
interference of the two components is destructive at the peak position
due to a sign flip of the coupling amplitude of the nonresonant
component at $\tpr \approx \SI{0.3}{\GeVcsq}$ (see discussion in
\cref{sec:production_phases}). In the highest \tpr bin, the
destructive interference of the two components is complete and leads
to a dip in the intensity distribution around the \Ppi[1800] mass.
The remaining low-mass shoulder is completely described by the
nonresonant component. In the intensity distributions, the model
exhibits some disagreement with the observed peak shape and does not
reproduce the high-mass shoulder in the two highest \tpr bins.
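The peak-to-dip transition caused by the sign flip of the nonresonant coupling can be illustrated with a toy coherent sum; the amplitude shapes below are hypothetical, chosen only to mimic a Breit-Wigner plus a smooth, purely real background.

```python
import numpy as np

def toy_intensity(m, c_nr, m0=1.804, g0=0.220):
    """|BW + c_nr * NR|^2 : coherent sum of a constant-width Breit-Wigner
    and a smooth, purely real nonresonant amplitude (toy shapes only)."""
    bw = (m0 * g0) / (m0**2 - m**2 - 1j * m0 * g0)
    nr = np.exp(-1.5 * (m - 0.5))   # hypothetical smooth background shape
    return np.abs(bw + c_nr * nr)**2

m = 1.70  # below the resonance peak, where Re(BW) > 0
constructive = toy_intensity(m, c_nr=+0.3)
destructive = toy_intensity(m, c_nr=-0.3)
```

Flipping only the sign of the nonresonant coupling changes the interference term $2\,c\,\text{Re}(\text{BW})\,\text{NR}$ from positive to negative, so the same two components can either enhance or deplete the intensity around the resonance.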
The strong variation of the intensity of the $0^{-+}$ wave with \tpr
originates from the very different \tpr dependences of the amplitudes
of the two $0^{-+}$ wave components. \Cref{fig:tprim_0mp} shows the
\tpr spectra for both components as determined using
\cref{eq:tprim-dependence} together with the results of fits using
\cref{eq:slope-parametrization}. While the intensity of the
\Ppi[1800] component exhibits an approximately exponential behavior
with slope parameter \SIaerrSys{8.8}{0.7}{0.3}{\perGeVcsq}, the
intensity of the nonresonant component first drops steeply with
$b = \SIaerrSys{26}{6}{5}{\perGeVcsq}$ at low values of \tpr, before
it starts to rise again with \tpr, forming a dip at
$\tpr \approx \SI{0.3}{\GeVcsq}$.
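The approximately exponential behavior referred to above can be
written in schematic shorthand (our notation; the fits themselves use
\cref{eq:slope-parametrization}) as
\[
  \frac{\mathrm{d}N}{\mathrm{d}\tpr} \propto e^{-b\, \tpr} ,
\]
with $b$ the quoted slope parameter.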
\begin{figure}[tbp]
\centering
\includegraphics[width=\twoPlotWidth]{fig11}
\caption{Similar to \cref{fig:method:tp:examplespectrum}, but
showing the \tpr spectra of the two $\JPC = 0^{-+}$ wave
components as given by \cref{eq:tprim-dependence}: the \Ppi[1800]
component is shown as blue lines (central values) and light blue
boxes (statistical uncertainties; not visible for most bins), the
nonresonant component is shown as black lines and gray boxes as in
\cref{fig:method:tp:examplespectrum}. The red and green curves
and horizontal lines represent fits using
\cref{eq:slope-parametrization}.}
\label{fig:tprim_0mp}
\end{figure}
The $0^{-+}$ wave exhibits clearly rising phases \wrt the
\wave{1}{++}{0}{+}{\Prho}{S} wave in the \Ppi[1800] region (see
central column of \cref{fig:intensity_phases_0mp}). At low \tpr, the
relative phase decreases at about \SI{1.3}{\GeVcc} due to the \PaOne
and rises at about \SI{1.8}{\GeVcc} due to the \Ppi[1800]. At higher
values of \tpr, the decrease is less pronounced and the relative phase
rises steeply starting at \SI{1.5}{\GeVcc}. This is explained in our
fit model by a sign change of the coupling amplitude of the
nonresonant $0^{-+}$ component, which dominates the low-mass region,
leading to an additional rise of the total phase of the $0^{-+}$
amplitude. The extremely rapid phase motion at \SI{1.8}{\GeVcc} in
the highest \tpr bin is a direct consequence of the nearly vanishing
intensity at this mass. Since the phase is defined \wrt the origin of
the complex plane, its value changes rapidly by \SI{\pm 180}{\degree}
if the amplitude passes close to the origin~\cite{Grigorenko:1999}.
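This behavior can be made explicit schematically (our notation, not
part of the fit model): writing the partial-wave amplitude in polar
form,
\[
  \mathcal{A}(\mThreePi) = \left| \mathcal{A}(\mThreePi) \right| e^{i \phi(\mThreePi)} ,
\]
a trajectory of $\mathcal{A}$ in the complex plane that passes close
to the origin approximately reverses its sign within a small
\mThreePi interval, so that $\phi$ changes by nearly
\SI{\pm 180}{\degree} while $\left| \mathcal{A} \right|$ stays small.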
We observe a similar \tpr dependence of the phase motions \wrt other
waves. As an example, we show in the right column of
\cref{fig:intensity_phases_0mp} the phase motions \wrt the
\wave{2}{-+}{1}{+}{\PfTwo}{S} wave. Here, the phase drop in the
lowest \tpr bin, which appears at about \SI{1.6}{\GeVcc}, is caused by
the \PpiTwo. Within the fit ranges, the model describes all relative
phases of the $0^{-+}$ wave well in all \tpr bins.
From the fit, we obtain the Breit-Wigner resonance parameters
$m_{\Ppi[1800]} = \SIaerrSys{1804}{6}{9}{\MeVcc}$ and
$\Gamma_{\Ppi[1800]} = \SIaerrSys{220}{8}{11}{\MeVcc}$. The
\Ppi[1800] resonance parameters are rather insensitive to changes of
the fit model discussed in \cref{sec:systematics}. The estimated
systematic uncertainties are therefore the smallest of all \piJ-like
resonances in the model. More details on the results of the
systematic studies are discussed in \cref{sec:syst_uncert_zeroMP}.
It is worth mentioning that in the study in which the fit range was
narrowed to \SIvalRange{1.6}{\mThreePi}{2.3}{\GeVcc}, the nonresonant
component practically vanishes. This demonstrates that indeed most of
the peak structure arises from the \Ppi[1800].
\subsubsection{Discussion of results on $0^{-+}$ resonances}
\label{sec:zeroMP_discussion}
Although the \Ppi[1800] has been well known for more than three
decades, its resonance parameters are not well determined. In
particular, the \Ppi[1800] mass values extracted by previous
experiments show a large spread and fall into two
clusters~\cite{Patrignani:2016xqp}: one with central values around
\SI{1780}{\MeVcc} and the other around \SI{1860}{\MeVcc}. Our result
for the \Ppi[1800] mass of
$m_{\Ppi[1800]} = \SIaerrSys{1804}{6}{9}{\MeVcc}$ falls between these
two clusters and is in good agreement with the PDG world average of
$m_{\Ppi[1800]} = \SI{1812(12)}{\MeVcc}$~\cite{Patrignani:2016xqp}.
This is also true for the \Ppi[1800] width, for which the PDG average
is $\Gamma_{\Ppi[1800]} = \SI{208(12)}{\MeVcc}$ compared to our value
of $\Gamma_{\Ppi[1800]} = \SIaerrSys{220}{8}{11}{\MeVcc}$. Our
measurement of the \Ppi[1800] parameters is the most precise and
accurate so far. It is also consistent within uncertainties with a
previous COMPASS measurement using a lead
target~\cite{alekseev:2009aa}.
In the \wave{0}{-+}{0}{+}{\pipiS}{S} wave, we observe a peak that is
similar in shape and position to the \Ppi[1800] peak in the
\wave{0}{-+}{0}{+}{\PfZero[980]}{S} wave [see Figs.~24 and 25(b) in
\refCite{Adolph:2015tqa}]. Although the $\pipiS \pi$ wave was not
included in the resonance model fit for reasons discussed below, the
observed similarity of the peaks suggests that the \Ppi[1800]
resonance parameters would be similar in this wave.
The \Ppi[1800] is the second radial excitation of the pion. Its
lighter partner state is the \Ppi[1300]. This state has been observed
in the $\Prho \pi$ and $\pipiS \pi$ final states, as well as in
$\gamma \gamma$ production~\cite{Patrignani:2016xqp}. The parameters
of the \Ppi[1300] are only poorly known. The world averages estimated
by the PDG are $m_{\Ppi[1300]} = \SI{1300(100)}{\MeVcc}$ and
$\Gamma_{\Ppi[1300]} =
\SIrange{200}{600}{\MeVcc}$~\cite{Patrignani:2016xqp}. The coupling
of the \Ppi[1300] to the $\pipiS \pi$ final state is also
controversial. The Obelix Collaboration claims that the coupling is
\num{2.2(4)}~times stronger than for the $\Prho \pi$ final state and
extracts a resonance mass of
$m_{\Ppi[1300]}=\SI{1200(40)}{\MeVcc}$~\cite{salvini:2004gz}. The
Crystal Barrel Collaboration, however, sets an upper limit for the
coupling to the $\pipiS \pi$ decay channel of 0.15~times the coupling
to $\Prho \pi$~\cite{Abele:2001js} and quotes a mass of
$m_{\Ppi[1300]}=\SI{1375(40)}{\MeVcc}$. The two experiments also
disagree on the value of the \Ppi[1300] width. In our data, we
observe an unusually strong \tpr dependence of the intensity of the
\wave{0}{-+}{0}{+}{\pipiS}{S} wave in the \SI{1.3}{\GeVcc} region [see
Figs.~24 and~35(a) in \refCite{Adolph:2015tqa}], which is similar to
that of the nonresonant component in the
\wave{0}{-+}{0}{+}{\PfZero[980]}{S} wave (see \cref{fig:tprim_0mp}).
In addition, the intensity in this mass region is strongly dependent
on the PWA model employed for the mass-independent analysis. We
therefore did not include the \wave{0}{-+}{0}{+}{\pipiS}{S} wave in
the resonance-model fit.
Since the intensity spectra of the \wave{0}{-+}{0}{+}{\PfZero[980]}{S}
wave show a significant shoulder at
$\mThreePi \approx \SI{1.3}{\GeVcc}$, we tried an alternative
description of this partial-wave amplitude using a \Ppi[1300]
resonance component instead of the nonresonant component. This model
describes the data less well than the main fit and does not yield
meaningful \Ppi[1300] resonance parameters.\footnote{In the solutions
with the lowest \chisq~values, the \Ppi[1300] mass is found at the
lower parameter limit of \SI{1}{\GeVcc}.}
If, in another study, we additionally include a nonresonant
component, the minimum \chisq~value decreases to \num{0.97}~times that
of the main fit.\footnote{Compared to the \num{722} free parameters of the main
fit, this fit has \num{746} free parameters.} In the solution with
the lowest~\chisq, the \Ppi[1300] is found with a mass and width of
about \SI{1630}{\MeVcc} and \SI{380}{\MeVcc}, respectively. While the
width value is compatible with previous measurements, the mass value
is clearly not.\footnote{Only local minima with significantly larger
\chisq~values yield \Ppi[1300] masses of about \SI{1270}{\MeVcc}
that are compatible with previous measurements.} Moreover, the
\wave{0}{-+}{0}{+}{\PfZero[980]}{S} wave does not show any phase rise
in the \SI{1.3}{\GeVcc} mass region. Within our model, we therefore
conclude that the data do not support a \Ppi[1300] signal in the
$\PfZero[980] \pi$ decay mode. This conclusion is consistent with the
fact that so far no observation of such a \Ppi[1300] decay has been
claimed.
Heavier excited pion states with masses around
\SIlist{2070;2360}{\MeVcc} were reported by the authors of
\refCite{anisovich:2001pn}. We do not see clear resonance signals of
heavy pions in the mass range from \SIrange{2000}{2500}{\MeVcc} in the
\wave{0}{-+}{0}{+}{\PfZero[980]}{S} wave.
%
\subsection{$\JPC = 4^{++}$ resonances}
\label{sec:fourPP}
\subsubsection{Results on $4^{++}$ resonances}
\label{sec:fourPP_results}
We include two $\JPC = 4^{++}$ waves, \wave{4}{++}{1}{+}{\Prho}{G} and
\wave{4}{++}{1}{+}{\PfTwo}{F}, in the resonance-model fit. Both have
small intensities and contribute \SI{0.8}{\percent} and
\SI{0.2}{\percent}, respectively, to the total intensity in the mass
range from \SIrange{0.5}{2.5}{\GeVcc}. The intensity distributions of
the two waves are shown for the lowest \tpr bin in
\cref{fig:intensity_4pp_f2_tbin1,fig:intensity_4pp_rho_tbin1}. In
both waves, a clear peak around \SI{1.9}{\GeVcc} is observed. The
shape of the intensity distributions depends only weakly on \tpr. The
\wave{4}{++}{1}{+}{\PfTwo}{F} wave exhibits a slight shoulder at high
masses. In the \wave{4}{++}{1}{+}{\Prho}{G} wave, this shoulder is
more pronounced, and in addition a low-mass shoulder is visible. In
both waves, these features are most pronounced at low \tpr and vanish
in the highest \tpr bin. \Cref{fig:intensity_phases_4pp} also shows,
as an example, the \mThreePi dependence of the relative phases of the
$4^{++}$ waves \wrt the \wave{1}{++}{0}{+}{\Prho}{S} and
\wave{1}{++}{0}{+}{\PfTwo}{P} waves in the lowest \tpr bin. Clearly
rising phases are observed in the \SI{1.9}{\GeVcc} mass region. In
addition, \cref{fig:phase_4pp_rho_4pp_f2_tbin1} shows the relative
phase between the two $4^{++}$ waves. The approximately constant
phase indicates that there is a common dominant resonance in the two
waves.
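The conclusion drawn from the constant relative phase can be sketched
in a simple way (our schematic notation): if both partial-wave
amplitudes are dominated by the same resonance term
$\mathcal{D}(\mThreePi)$ with approximately constant complex
couplings $c_{1,2}$, that is, $\mathcal{A}_{1,2} = c_{1,2}\,
\mathcal{D}(\mThreePi)$, then their relative phase
\[
  \Delta\phi = \arg \frac{\mathcal{A}_1}{\mathcal{A}_2} = \arg \frac{c_1}{c_2}
\]
is independent of \mThreePi, as observed.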
\begin{wideFigureOrNot}[tbp]
\centering
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig12a}%
\label{fig:intensity_4pp_rho_tbin1}%
}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig12b}%
\label{fig:phase_4pp_rho_4pp_f2_tbin1}%
}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig12c}%
\label{fig:phase_4pp_rho_1pp_rho_tbin1}%
}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig12d}%
\label{fig:phase_4pp_rho_0mp_tbin1}%
}%
\\
\hspace*{\fourPlotWidth}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig12e}%
\label{fig:intensity_4pp_f2_tbin1}%
}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig12f}%
\label{fig:phase_4pp_f2_1pp_rho_tbin1}%
}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig12g}%
\label{fig:phase_4pp_f2_0mp_tbin1}%
}%
\caption{Amplitudes of the two $\JPC = 4^{++}$ waves in the lowest
\tpr bin.
\subfloatLabel{fig:intensity_4pp_rho_tbin1}~through~\subfloatLabel{fig:phase_4pp_rho_0mp_tbin1}:
intensity distribution and relative phases for the
\wave{4}{++}{1}{+}{\Prho}{G} wave.
\subfloatLabel{fig:intensity_4pp_f2_tbin1}~through~\subfloatLabel{fig:phase_4pp_f2_0mp_tbin1}:
intensity distribution and relative phases for the
\wave{4}{++}{1}{+}{\PfTwo}{F} wave. The model and the wave
components are represented as in \cref{fig:intensity_phases_0mp},
except that here the blue curve represents the \PaFour.}
\label{fig:intensity_phases_4pp}
\end{wideFigureOrNot}
Our model contains one $\JPC = 4^{++}$ resonance, the \PaFour, which
is the only confirmed isovector state with these quantum
numbers~\cite{Patrignani:2016xqp}. The \PaFour is parametrized using
\cref{eq:BreitWigner,eq:method:fixedwidth}, the nonresonant components
using \cref{eq:method:nonrestermsmall} (see
\cref{tab:method:fitmodel:waveset}). The data are well described
within the fit range, which for the $\PfTwo \pi F$ wave is
\SIvalRange{1.4}{\mThreePi}{2.3}{\GeVcc}. The low-mass tail of the
$\Prho \pi G$ wave allows us to extend the fit range for this wave
down to \SI{1.25}{\GeVcc}. In our fit model, the nonresonant
components are small in both $4^{++}$ waves. Their contribution
decreases with increasing \tpr and almost vanishes at higher values of
\tpr. The nonresonant components interfere destructively with the
high-mass tail of the \PaFour in the $\PfTwo \pi F$ wave and
constructively with the low-mass tail of the \PaFour in the
$\Prho \pi G$ wave. The model is not able to reproduce the high-mass
shoulder in the intensity distributions of the $\Prho \pi G$ wave at
low \tpr.
The two $4^{++}$ waves exhibit clearly rising phases in the
\SI{1.9}{\GeVcc} mass region, \eg\ \wrt the
\wave{1}{++}{0}{+}{\Prho}{S} wave, as shown in
\cref{fig:phase_4pp_f2_1pp_rho_tbin1,fig:phase_4pp_rho_1pp_rho_tbin1}.
This rise is observed for all \tpr bins. Its magnitude is slightly
smaller for the \wave{4}{++}{1}{+}{\Prho}{G} wave. The phase
variations \wrt the \wave{0}{-+}{0}{+}{\PfZero[980]}{S} wave exhibit a
more complex pattern. The phase drop around \SI{1.8}{\GeVcc} due to a
stronger \Ppi[1800] signal in the \wave{0}{-+}{0}{+}{\PfZero[980]}{S}
wave is compensated by the phase motion of the \PaFour leading to a
rising phase around \SI{1.9}{\GeVcc} [see
\cref{fig:phase_4pp_f2_0mp_tbin1,fig:phase_4pp_rho_0mp_tbin1}]. The
magnitude of these phase motions decreases with \tpr.
The relative phase between the two $4^{++}$ partial-wave amplitudes
shows little variation over the fitted mass region. Together
with the phase motions discussed in the previous paragraph, this
demonstrates that the two waves are dominated by resonances and that
they have the same resonance content. The residual slight rise of the
phase between the $\PfTwo \pi F$ and $\Prho \pi G$ waves is caused by
differences in the small nonresonant components.
We extract the Breit-Wigner parameters of the \PaFour and find
$m_{\PaFour} = \SIaerrSys{1935}{11}{13}{\MeVcc}$ and
$\Gamma_{\PaFour} = \SIaerrSys{333}{16}{21}{\MeVcc}$. The \PaFour
resonance parameters are rather insensitive to the systematic studies
(see \cref{sec:systematics,sec:syst_uncert_fourPP}).
The \tpr spectra of the $4^{++}$ wave components are shown in
\cref{fig:tprim_4pp} together with the results of fits using
\cref{eq:slope-parametrization}. In our model, the \tpr dependence of
the amplitudes of the \PaFour components in the two waves is
constrained by \cref{eq:method:branchingdefinition}. The fit finds a
relative phase of the branching amplitudes close to \SI{0}{\degree}
for the \PaFour components in the two waves (see
\cref{sec:production_phases}). The slope parameters of the \tpr
spectra of the \PaFour component in the two waves have practically
identical values of \SIaerrSys{9.2}{0.8}{0.5}{\perGeVcsq}. For both
\tpr spectra, the model curve undershoots the data at small values of
\tpr. This could indicate that in this \tpr range our resonance model
overestimates the \PaFour yields in both waves. The nonresonant
contributions have more steeply falling \tpr spectra with almost identical
slope parameters of \SIerrSys{14}{4}{\perGeVcsq} for the $\Prho \pi G$
wave and \SIaerrSys{14.5}{1.8}{3.7}{\perGeVcsq} for the $\PfTwo \pi F$
wave. It is worth noting that, if we do not constrain the \PaFour
coupling amplitudes via \cref{eq:method:branchingdefinition} and thus
allow them to have different \tpr dependences [\StudyT; see
\cref{sec:systematics}], we obtain \PaFour resonance and slope
parameters that are consistent within the systematic uncertainties.
Also the relative phase of approximately \SI{0}{\degree} between the
$\Prho \pi G$ and $\PfTwo \pi F$ decay modes is recovered. This
confirms the assumptions contained in
\cref{eq:method:branchingdefinition}.
\begin{figure}[tbp]
\centering
\subfloat[][]{%
\includegraphics[width=\twoPlotWidth]{fig13a}%
\label{fig:tprim_4pp_rho}%
}%
\newLineOrHspace{\twoPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\twoPlotWidth]{fig13b}%
\label{fig:tprim_4pp_f2}%
}%
\caption{Similar to \cref{fig:tprim_0mp}, but showing the \tpr
spectra of \subfloatLabel{fig:tprim_4pp_rho}~the
\wave{4}{++}{1}{+}{\Prho}{G} and
\subfloatLabel{fig:tprim_4pp_f2}~the \wave{4}{++}{1}{+}{\PfTwo}{F}
wave components as given by \cref{eq:tprim-dependence}: the
\PaFour component is shown as blue lines and light blue boxes, and
the nonresonant components as black lines and gray boxes. The red
and green curves and horizontal lines represent fits using
\cref{eq:slope-parametrization}.}
\label{fig:tprim_4pp}
\end{figure}
From the \PaFour yields in the two analyzed decay branches we derive
the ratio of branching fractions according to
\cref{eq:branch_fract_ratio}:
\begin{multlineOrEq}
\label{eq:branch_fract_ratio_a4}
B_{\Prho* \pi G, \PfTwo* \pi F}^{\PaFour*}
\newLineOrNot
\begin{alignedOrNot}
\alignOrNot= \frac{\text{BF}\!\sBrk{\PaFour^- \to \Prho^0 \pi^- \to \threePi}\hfill}{\text{BF}\!\sBrk{\PaFour^- \to \PfTwo \pi^- \to \threePi}}
\newLineOrNot
\alignOrNot= \numaerrSys{2.5}{0.5}{0.3}.
\end{alignedOrNot}
\end{multlineOrEq}
Taking into account the unobserved decays
$\PaFour*^- \to \Prho*^- \pi^0$ and $\PaFour*^- \to \PfTwo* \pi^-$ to
the \threePiN final state and assuming isospin symmetry, this value
increases by a factor of $4/3$:
\begin{equation}
\label{eq:branch_fract_ratio_a4_iso}
\begin{splitOrNot}
B_{\Prho* \pi G, \PfTwo* \pi F}^{\PaFour*, \text{iso}}
\alignOrNot= \frac{\text{BF}\!\sBrk{\PaFour^- \to \Prho \pi \to 3\pi}\hfill}{\text{BF}\!\sBrk{\PaFour^- \to \PfTwo \pi \to 3\pi}}
\newLineOrNot
\alignOrNot= \numaerrSys{3.3}{0.7}{0.4}.
\end{splitOrNot}
\end{equation}
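The factor $4/3$ follows from isospin Clebsch-Gordan coefficients (a
standard isospin-symmetry argument, sketched here in our notation):
only half of the $\PaFour^- \to \Prho \pi$ decays proceed via
$\Prho^0 \pi^-$, whereas $2/3$ of the $\PfTwo \to \pi\pi$ decays
proceed via $\pi^+ \pi^-$, so that only
\[
  \frac{1/2}{2/3} = \frac{3}{4}
\]
of the relative yield is observed in the charged \threePi final
state, and the observed ratio has to be multiplied by $4/3$.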
The isospin factor needs to be corrected for self-interference
effects. Unlike the $\Prho \pi$ channel, the $\PfTwo \pi$ channel is
affected by different Bose symmetrizations in the \threePi and
\threePiN final states. In addition, the branching fraction of the
\PfTwo into $2\pi$ of
\SIaerr{84.2}{2.9}{0.9}{\percent}~\cite{Patrignani:2016xqp} needs to
be included. Taking both effects into account, the isospin factor
$4/3$ should be replaced by \numaerr{1.19}{0.04}{0.02}\footnote{We
only take into account the uncertainty of the $\PfTwo \to 2\pi$
branching fraction.}, leading to the corrected ratio
\begin{equation}
\label{eq:branch_fract_ratio_a4_corr}
\begin{splitOrNot}
B_{\Prho* \pi G, \PfTwo* \pi F}^{\PaFour*, \text{corr}}
\alignOrNot= \frac{\text{BF}\!\sBrk{\PaFour^- \to \Prho \pi}\hfill}{\text{BF}\!\sBrk{\PaFour^- \to \PfTwo \pi}}
\newLineOrNot
\alignOrNot= \numaerr{2.9}{0.6}{0.4}.
\end{splitOrNot}
\end{equation}
\subsubsection{Discussion of results on $4^{++}$ resonances}
\label{sec:fourPP_discussion}
The PDG world averages for mass and width of the \PaFour are
$m_{\PaFour} = \SIaerr{1995}{10}{8}{\MeVcc}$ and
$\Gamma_{\PaFour} =
\SIaerr{257}{25}{23}{\MeVcc}$~\cite{Patrignani:2016xqp}. Our
measurement of the \PaFour parameters of
$m_{\PaFour} = \SIaerrSys{1935}{11}{13}{\MeVcc}$ and
$\Gamma_{\PaFour} = \SIaerrSys{333}{16}{21}{\MeVcc}$ is the most
accurate and precise so far, but we find the \PaFour mass to be
\SI{60}{\MeVcc} smaller and the width \SI{76}{\MeVcc} larger than the
world average. Our result agrees with those of our two previous analyses: the one based
on the measurement of the \threePi final state diffractively produced
on a solid lead target~\cite{alekseev:2009aa}, and the other based on
the measurement of the $\eta \pi$ and $\eta' \pi$ final states
diffractively produced on a liquid-hydrogen
target~\cite{Adolph:2014rpp}. Also, the results on diffractively
produced \threePi by the BNL E852 experiment~\cite{chung:2002pu} and
$\omega \pi^- \pi^0$ by the VES experiment~\cite{Amelin:1999gk} are in
good agreement with our results.
Our measurement of the \PaFour width is especially at variance with
the value of $\Gamma_{\PaFour} = \SI{180(30)}{\MeVcc}$ obtained by the
authors of \refCite{anisovich:2001pn}. They analyzed $3\pi^0$,
$\eta \pi^0$, and $\eta' \pi^0$ final states produced in \pbarp
annihilations. They used a model with two $4^{++}$ resonances below
\SI{2.5}{\GeVcc} and claimed an excited \PaFour* state with a mass of
\SI{2255(40)}{\MeVcc} and a width of \SIaerr{330}{110}{50}{\MeVcc}.
In the two analyzed waves, we do not see clear resonance signals of
heavier \PaFour* resonances in the mass range from
\SIrange{2000}{2500}{\MeVcc}.
The measured value of \numaerrSys{2.5}{0.5}{0.3} of the
branching-fraction ratio $B_{\Prho* \pi G, \PfTwo* \pi F}^{\PaFour*}$
in \cref{eq:branch_fract_ratio_a4} is larger than the value
\numerrs{1.1}{0.2}{0.2} that was reported by the BNL E852 experiment
in a study of the same channel at \SI{18}{\GeVc} beam
momentum~\cite{chung:2002pu}. Taking into account the unobserved
\threePiN decay mode and the \PfTwo branching fraction into $2\pi$,
the present result of \numaerr{2.9}{0.6}{0.4} for
$B_{\Prho* \pi G, \PfTwo* \pi F}^{\PaFour*, \text{corr}}$ in
\cref{eq:branch_fract_ratio_a4_corr} agrees with the value of
\num{3.3} predicted by the ${}^3P_0$ decay model~\cite{barnes:1996ff}.
In this model, the strong decay of a \qqbar state to the
$(q\widebar{q}\,')\, (q'\widebar{q})$ exit channel proceeds via
production of a $q'\widebar{q}\,'$ pair with vacuum quantum numbers,
$\JPC = 0^{++}$. Note that the \PaFour width predicted by this model
is a factor of~2 smaller than our measured value of
$\Gamma_{\PaFour}$.
%
\subsection{$\JPC = 2^{++}$ resonances}
\label{sec:twoPP}
\subsubsection{Results on $2^{++}$ resonances}
\label{sec:twoPP_results}
We include three $\JPC = 2^{++}$ waves in the resonance-model fit:
\wave{2}{++}{1}{+}{\Prho}{D}, \wave{2}{++}{2}{+}{\Prho}{D}, and
\wave{2}{++}{1}{+}{\PfTwo}{P}. The \wave{2}{++}{1}{+}{\Prho}{D} wave
has the third largest intensity of the 88 waves in the
mass-independent analysis (see \cref{sec:mass-independent_fit}) and
contributes \SI{7.7}{\percent} to the total intensity in the mass
range from \SIrange{0.5}{2.5}{\GeVcc}. The two other $2^{++}$ waves
contribute \SI{0.3}{\percent} and \SI{0.8}{\percent} to the total
intensity, respectively. The intensity distributions of the three
waves are shown in
\cref{fig:intensity_2pp_m1_rho_tbin1,fig:intensity_2pp_m2_rho_tbin1,fig:intensity_2pp_f2_tbin1}
for the lowest \tpr bin and in
\cref{fig:intensity_2pp_m1_rho_tbin11,fig:intensity_2pp_m2_rho_tbin11,fig:intensity_2pp_f2_tbin11}
for the highest \tpr bin.
\ifMultiColumnLayout{\begin{figure*}[t]}{\begin{figure}[tbp]}
\centering
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig14a}%
\label{fig:intensity_2pp_m1_rho_tbin1}%
}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig14b}%
\label{fig:phase_2pp_m1_rho_2pp_m2_rho_tbin1}%
}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig14c}%
\label{fig:phase_2pp_m1_rho_2pp_f2_tbin1}%
}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig14d}%
\label{fig:phase_2pp_m1_rho_1pp_rho_tbin1}%
}%
\\
\hspace*{\fourPlotWidth}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig14e}%
\label{fig:intensity_2pp_m2_rho_tbin1}%
}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig14f}%
\label{fig:phase_2pp_m2_rho_2pp_f2_tbin1}%
}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig14g}%
\label{fig:phase_2pp_m2_rho_1pp_rho_tbin1}%
}%
\\
\hspace*{\fourPlotWidth}%
\hspace*{\fourPlotSpacing}%
\hspace*{\fourPlotWidth}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig14h}%
\label{fig:intensity_2pp_f2_tbin1}%
}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig14i}%
\label{fig:phase_2pp_f2_1pp_rho_tbin1}%
}%
\caption{Amplitudes of the three $\JPC = 2^{++}$ waves in the lowest
\tpr bin.
\subfloatLabel{fig:intensity_2pp_m1_rho_tbin1}~through~\subfloatLabel{fig:phase_2pp_m1_rho_1pp_rho_tbin1}:
Intensity distribution and relative phases for the
\wave{2}{++}{1}{+}{\Prho}{D}
wave. \subfloatLabel{fig:intensity_2pp_m2_rho_tbin1}~through~\subfloatLabel{fig:phase_2pp_m2_rho_1pp_rho_tbin1}:
Intensity distribution and relative phases for the
\wave{2}{++}{2}{+}{\Prho}{D} wave.
\subfloatLabel{fig:intensity_2pp_f2_tbin1}~and~\subfloatLabel{fig:phase_2pp_f2_1pp_rho_tbin1}:
Intensity distribution and relative phase for the
\wave{2}{++}{1}{+}{\PfTwo}{P} wave. The model and the wave
components are represented as in \cref{fig:intensity_phases_0mp},
except that here the blue curves represent the \PaTwo and the
\PaTwo[1700].}
\label{fig:intensity_phases_2pp_tbin1}
\ifMultiColumnLayout{\end{figure*}}{\end{figure}}
\ifMultiColumnLayout{\begin{figure*}[t]}{\begin{figure}[tbp]}
\centering
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig15a}%
\label{fig:intensity_2pp_m1_rho_tbin11}%
}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig15b}%
\label{fig:phase_2pp_m1_rho_2pp_m2_rho_tbin11}%
}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig15c}%
\label{fig:phase_2pp_m1_rho_2pp_f2_tbin11}%
}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig15d}%
\label{fig:phase_2pp_m1_rho_1pp_rho_tbin11}%
}%
\\
\hspace*{\fourPlotWidth}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig15e}%
\label{fig:intensity_2pp_m2_rho_tbin11}%
}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig15f}%
\label{fig:phase_2pp_m2_rho_2pp_f2_tbin11}%
}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig15g}%
\label{fig:phase_2pp_m2_rho_1pp_rho_tbin11}%
}%
\\
\hspace*{\fourPlotWidth}%
\hspace*{\fourPlotSpacing}%
\hspace*{\fourPlotWidth}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig15h}%
\label{fig:intensity_2pp_f2_tbin11}%
}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig15i}%
\label{fig:phase_2pp_f2_1pp_rho_tbin11}%
}%
\caption{Similar to \cref{fig:intensity_phases_2pp_tbin1} but for
the highest \tpr bin.}
\label{fig:intensity_phases_2pp_tbin11}
\ifMultiColumnLayout{\end{figure*}}{\end{figure}}
All three waves exhibit a clear peak around \SI{1.3}{\GeVcc}. The
intensity distributions of the two $\Prho \pi D$ waves are dominated
by this peak. The peak shape is nearly independent of \tpr. At low
\tpr, the $\Prho \pi D$ wave with $M = 1$ exhibits a dip in the
intensity distribution at about \SI{1.8}{\GeVcc} [see
\cref{fig:intensity_2pp_m1_rho_tbin1_log}]. With increasing \tpr,
this dip moves toward higher masses and becomes shallower until it
disappears in the two highest \tpr bins [see
\cref{fig:intensity_2pp_m1_rho_tbin8_log,fig:intensity_2pp_m1_rho_tbin11_log}].
A much stronger variation of the shape of the intensity distribution
with increasing \tpr is observed for the $\PfTwo \pi P$ wave. In
addition to the peak at \SI{1.3}{\GeVcc}, this wave exhibits a
shoulder at about \SI{1.6}{\GeVcc}, which is absent at low \tpr and
increases with increasing \tpr, and a high-mass tail that becomes
weaker with increasing \tpr.
\begin{wideFigureOrNot}[tbp]
\centering
\subfloat[][]{%
\includegraphics[width=\threePlotWidth]{fig16a}%
\label{fig:intensity_2pp_m1_rho_tbin1_log}%
}%
\hspace*{\threePlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\threePlotWidth]{fig16b}%
\label{fig:intensity_2pp_m1_rho_tbin8_log}%
}%
\hspace*{\threePlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\threePlotWidth]{fig16c}%
\label{fig:intensity_2pp_m1_rho_tbin11_log}%
}%
\caption{Intensity distributions of the \wave{2}{++}{1}{+}{\Prho}{D}
wave for three \tpr bins shown in logarithmic scale. The model
and the wave components are represented as in
\cref{fig:intensity_phases_2pp_tbin1}.}
\label{fig:intensity_2pp_log}
\end{wideFigureOrNot}
The right columns of
\cref{fig:intensity_phases_2pp_tbin1,fig:intensity_phases_2pp_tbin11}
show the \mThreePi dependence of the relative phases of the $2^{++}$
waves \wrt the \wave{1}{++}{0}{+}{\Prho}{S} wave. Clearly rising
phases are observed in the \SI{1.3}{\GeVcc} mass region in all \tpr
bins.
\Cref{fig:intensity_phases_2pp_tbin1,fig:intensity_phases_2pp_tbin11}
also show the relative phases between the three $2^{++}$ waves. Here,
a more complex pattern is observed that points to different relative
contributions of the components in these waves.
In our model, the three $\JPC = 2^{++}$ waves are described using two
resonances, \PaTwo and \PaTwo[1700]. The \PaTwo is parametrized using
\cref{eq:BreitWigner,eq:method:a2dynamicwidth}, the \PaTwo[1700] using
\cref{eq:BreitWigner,eq:method:fixedwidth}, and the nonresonant
components using \cref{eq:method:nonresterm} for the
\wave{2}{++}{1}{+}{\Prho}{D} wave and \cref{eq:method:nonrestermsmall}
for the other two $2^{++}$ waves (see
\cref{tab:method:fitmodel:waveset}).
Taking into account the high precision of the data, in particular for
the \wave{2}{++}{1}{+}{\Prho}{D} wave, the model describes the data
well within the fit range, which is
\SIvalRange{0.9}{\mThreePi}{2.0}{\GeVcc} for the
\wave{2}{++}{1}{+}{\Prho}{D} wave and
\SIvalRange{1.0}{\mThreePi}{2.0}{\GeVcc} for the other two waves. The
two $\Prho \pi D$ waves are dominated by the \PaTwo with only small
contributions from the \PaTwo[1700]. This is strikingly different in
the $\PfTwo \pi P$ wave, in which the \PaTwo[1700] has an intensity
comparable to that of the \PaTwo and the relative \PaTwo[1700]
intensity grows with increasing \tpr. In our fit model, the
nonresonant components behave differently in the three $2^{++}$ waves.
Compared to the dominant \PaTwo peak, the nonresonant component in the
\wave{2}{++}{1}{+}{\Prho}{D} wave is small and vanishes nearly
completely in the highest \tpr bin. The corresponding wave with
$M = 2$ exhibits a larger nonresonant contribution relative to the
\PaTwo; this contribution slightly increases with \tpr. We find the
largest nonresonant contribution \wrt the \PaTwo peak in the
\wave{2}{++}{1}{+}{\PfTwo}{P} wave. The relative nonresonant
intensity, which grows slightly with increasing \tpr, is concentrated
mostly in the \PaTwo[1700] region and---as in the other two $2^{++}$
waves---is small in the \PaTwo peak region.
In the \PaTwo region, interference effects of the wave components are
small in all three waves. The largest effect is a slight asymmetric
distortion of the \PaTwo peak in the \wave{2}{++}{2}{+}{\Prho}{D} wave
due to interference of the \PaTwo with the nonresonant component.
This is different for the \PaTwo[1700] region. In the
\wave{2}{++}{1}{+}{\Prho}{D} wave, a complicated interplay between
\PaTwo, \PaTwo[1700], and the nonresonant contribution becomes
apparent. At low \tpr, destructive interference causes the intensity
to drop by 4~orders of magnitude from the \PaTwo peak down to the dip
at about \SI{1.8}{\GeVcc}. In the two highest \tpr bins, the
nonresonant contribution practically vanishes in the \PaTwo[1700]
region and the interference pattern changes so that the dip in the
high-mass region disappears. This distinct interference pattern helps
the fit to separate the small \PaTwo[1700] contribution despite the
presence of the dominant \PaTwo. In the \wave{2}{++}{1}{+}{\PfTwo}{P}
wave, the high-mass shoulder is described by a relatively large
\PaTwo[1700] contribution. At high \tpr, the rather sharp drop of
this shoulder around \SI{1.8}{\GeVcc} [see
\cref{fig:intensity_2pp_f2_tbin11}] is described by the interference
of all three wave components.
Although the fit model generally describes the intensity
distributions well, it falls short in some regions. In the dominant
\wave{2}{++}{1}{+}{\Prho}{D} wave, it does not reproduce well the
high-mass tail, which is most pronounced at low \tpr [see \eg
\cref{fig:intensity_2pp_m1_rho_tbin1_log}]. Also, the extrapolation
of the fit model above \SI{2.0}{\GeVcc}, which is the upper limit of
the fit range, disagrees with the data. We observe a similar behavior
also in the \wave{2}{++}{1}{+}{\PfTwo}{P} wave [see \eg
\cref{fig:intensity_2pp_f2_tbin1}]. In this wave, the model in
addition undershoots the low-mass tail below \SI{1.2}{\GeVcc}, which
is, however, mainly defined by the opening of the $\PfTwo \pi$ phase
space.
The interpretation of the structures in the intensity distributions in
terms of resonances is supported by the relative phases \wrt selected
waves. The \wave{2}{++}{1}{+}{\Prho}{D} wave exhibits rapidly rising
phases \wrt the \wave{1}{++}{0}{+}{\Prho}{S} wave in the
\SI{1.3}{\GeVcc} region, which are caused by the \PaTwo, and slower
rising phases in the \SI{1.6}{\GeVcc} region [see
\cref{fig:phase_2pp_m1_rho_1pp_rho_tbin1,fig:phase_2pp_m1_rho_1pp_rho_tbin11}].
Both features depend only weakly on \tpr. The dominant \PaTwo leads
to approximately constant phases relative to the other $2^{++}$ waves
in the region between \SIlist{1.0;1.4}{\GeVcc} [see
\cref{fig:phase_2pp_m1_rho_2pp_m2_rho_tbin1,fig:phase_2pp_m1_rho_2pp_f2_tbin1,fig:phase_2pp_m1_rho_2pp_m2_rho_tbin11,fig:phase_2pp_m1_rho_2pp_f2_tbin11}].
The extremely rapidly decreasing phases around \SI{1.8}{\GeVcc} in
\cref{fig:phase_2pp_m1_rho_2pp_m2_rho_tbin1,fig:phase_2pp_m1_rho_2pp_f2_tbin1,fig:phase_2pp_m1_rho_1pp_rho_tbin1}
are connected to the dip in the intensity distribution of the
\wave{2}{++}{1}{+}{\Prho}{D} wave. At this dip, the partial-wave
amplitude becomes nearly zero due to destructive interference. This
behavior of the phases is analogous to the one observed in the
\wave{0}{-+}{0}{+}{\PfZero[980]}{S} wave [see
\cref{sec:zeroMP_results,fig:phase_0mp_1pp_rho_tbin11,fig:phase_0mp_2mp_f2_tbin11}].
Like the dip in the intensity distributions, the phase drop disappears
toward higher \tpr. At large values of \tpr, the phase of the
\wave{2}{++}{1}{+}{\Prho}{D} wave \wrt the
\wave{2}{++}{1}{+}{\PfTwo}{P} wave becomes approximately constant [see
\cref{fig:phase_2pp_m1_rho_2pp_f2_tbin11}], consistent with the
\PaTwo[1700] appearing in both waves. The phases of the
\wave{2}{++}{2}{+}{\Prho}{D} wave \wrt the
\wave{1}{++}{0}{+}{\Prho}{S} wave also exhibit the rapid rise in the
\SI{1.3}{\GeVcc} region due to the \PaTwo. It is followed by a drop
of the phase toward the \SI{1.7}{\GeVcc} region. The absence of a
rising phase from the \PaTwo[1700] is consistent with the small
intensity of this component in this wave. The phase motion changes only slightly
with \tpr. The phases \wrt the other two $2^{++}$ waves are
approximately constant around the \PaTwo. The phase relative to the
\wave{2}{++}{1}{+}{\PfTwo}{P} wave falls by more than
\SI{180}{\degree} above about \SI{1.4}{\GeVcc}. This drop is
approximately independent of \tpr and covers the mass region of the
\PaTwo[1700].\footnote{A similar behavior is observed for the
\PaOne[1640] in the \wave{1}{++}{0}{+}{\PfTwo}{P} wave (see
\cref{sec:onePP_discussion}).} The phases of the
\wave{2}{++}{1}{+}{\PfTwo}{P} wave \wrt the
\wave{1}{++}{0}{+}{\Prho}{S} wave exhibit two consecutive phase rises
due to \PaTwo and \PaTwo[1700]. Unlike the intensity distributions of
this wave, the phase motions do not change drastically with \tpr.
From the fit, we extract the following \PaTwo Breit-Wigner resonance
parameters: $m_{\PaTwo} = \SIaerrSys{1314.5}{4.0}{3.3}{\MeVcc}$ and
$\Gamma_{\PaTwo} = \SIaerrSys{106.6}{3.4}{7.0}{\MeVcc}$. Due to the
large intensity of the \PaTwo, its small width, and the small
contributions from the nonresonant components in the \SI{1.3}{\GeVcc}
region, the systematic uncertainties of the \PaTwo resonance
parameters are the smallest of all resonances in the model (see
\cref{sec:syst_uncert_twoPP}).
The extracted Breit-Wigner resonance parameters for the \PaTwo[1700]
are $m_{\PaTwo[1700]} = \SIaerrSys{1681}{22}{35}{\MeVcc}$ and
$\Gamma_{\PaTwo[1700]} = \SIaerrSys{436}{20}{16}{\MeVcc}$. They are
mainly determined by the \wave{2}{++}{1}{+}{\PfTwo}{P} wave. Since
the \PaTwo[1700] signal is much smaller than that of the \PaTwo, the
\PaTwo[1700] parameters have much larger systematic uncertainties.
The \PaTwo[1700] parameters are sensitive to the parametrization of
the nonresonant component and to the value of the range parameter
$q_R$ in the Blatt-Weisskopf factors (see
\cref{sec:syst_uncert_twoPP}).
The \tpr dependence of the intensities of the resonant and nonresonant
$2^{++}$ wave components is shown in
\cref{fig:tprim_2pp,fig:tprim_2pp_f2_nonres} together with the results
of fits using \cref{eq:slope-parametrization}. The coupling
amplitudes of the resonance components in the two $2^{++}$ waves with
$\Mrefl = 1^+$ are constrained by
\cref{eq:method:branchingdefinition}. Therefore, the extracted values
of the slope parameters are nearly identical: the \PaTwo slope
parameter has a value of \SIerrSys{7.9}{0.5}{\perGeVcsq} in the
$\Prho \pi D$ wave with $M = 1$ and of
\SIaerrSys{7.8}{0.6}{0.5}{\perGeVcsq} in the $\PfTwo \pi P$ wave (see
\cref{tab:slopes}). Similar to the $1^{++}$ and $2^{-+}$ sectors (see
\cref{sec:onePP_results,sec:twoMP_results}, respectively), the slope
parameter of the higher-mass state, here the \PaTwo[1700], is smaller.
It has a value of \SIaerrSys{7.3}{2.4}{0.9}{\perGeVcsq} in the
$\Prho \pi D$ wave with $M = 1$ and
\SIaerrSys{7.2}{1.1}{0.8}{\perGeVcsq} in the $\PfTwo \pi P$ wave. If
we do not constrain the coupling amplitudes of the resonance
components via \cref{eq:method:branchingdefinition} [\StudyT; see
\cref{sec:systematics}], the above slope values remain essentially
unchanged. The only exception is the \PaTwo[1700] in the
$\Prho \pi P$ wave, the slope of which becomes about
\SI{2}{\perGeVcsq} steeper.
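The slope-parameter extraction discussed above can be illustrated with a toy fit. The following is a hedged sketch (not the collaboration's actual code), assuming that \cref{eq:slope-parametrization} has the commonly used form $I(\tpr) \propto (\tpr)^{\abs{M}}\, e^{-b\,\tpr}$, so that for an $M = 1$ wave the slope parameter $b$ follows from a straight-line fit to $\log\sBrk{I(\tpr)/\tpr}$:

```python
import numpy as np

# Hedged sketch: we assume the slope parametrization has the commonly
# used form I(t') = A * (t')**|M| * exp(-b * t'), with the slope
# parameter b in (GeV/c)^-2.  For M = 1, log(I/t') = log(A) - b*t'
# is linear in t', so b follows from a degree-1 least-squares fit.

def fit_slope(tpr, intensity, M=1):
    """Extract the slope parameter b from a t' spectrum."""
    y = np.log(intensity / tpr**abs(M))
    slope, _intercept = np.polyfit(tpr, y, 1)
    return -slope

# Synthetic noise-free spectrum with a known slope of 7.9 (GeV/c)^-2,
# mimicking the a2(1320) value quoted for the rho pi D wave with M = 1.
tpr = np.linspace(0.1, 1.0, 20)
intensity = 1e4 * tpr * np.exp(-7.9 * tpr)
b = fit_slope(tpr, intensity)
print(f"fitted slope b = {b:.2f} (GeV/c)^-2")
```

On noise-free input the fit recovers the input slope exactly; with real binned data one would weight the points by their uncertainties.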
\begin{wideFigureOrNot}[tbp]
\centering
\subfloat[][]{%
\includegraphics[width=\threePlotWidth]{fig17a}%
}%
\hspace*{\threePlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\threePlotWidth]{fig17b}%
}%
\hspace*{\threePlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\threePlotWidth]{fig17c}%
}%
\\
\subfloat[][]{%
\includegraphics[width=\threePlotWidth]{fig17d}%
}%
\hspace*{\threePlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\threePlotWidth]{fig17e}%
\label{fig:tprim_2pp_m2}%
}%
\hspace*{\threePlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\threePlotWidth]{fig17f}%
}%
\caption{Similar to \cref{fig:method:tp:examplespectrum} but showing
the \tpr spectra of the components in the
\wave{2}{++}{}{}{\Prho}{D} waves with (upper row) $\Mrefl = 1^+$
and (lower row) $\Mrefl = 2^+$ as given by
\cref{eq:tprim-dependence}: (left) \PaTwo component, (center)
\PaTwo[1700] component, and (right) nonresonant components. The
red curves and horizontal lines represent fits using
\cref{eq:slope-parametrization}. \subfloatLabel{fig:tprim_2pp_m2}
shows in addition to the \PaTwo[1700] \tpr spectrum from the main
fit (black/gray) the \tpr spectra obtained in the various
systematic studies (central values shown in green, statistical
uncertainties in light green).}
\label{fig:tprim_2pp}
\end{wideFigureOrNot}
\begin{figure}[tbp]
\centering
\includegraphics[width=\twoPlotWidth]{fig18}
\caption{Similar to \cref{fig:tprim_2pp_m2}, but showing the \tpr
spectrum of the nonresonant component in the
\wave{2}{++}{1}{+}{\PfTwo}{P} wave as given by
\cref{eq:tprim-dependence}.}
\label{fig:tprim_2pp_f2_nonres}
\end{figure}
As in other waves, we observe that the \tpr spectra of the nonresonant
components in the \wave{2}{++}{1}{+}{\Prho}{D} and
\wave{2}{++}{1}{+}{\PfTwo}{P} waves are distinctly different from
those of the resonances. In particular in the $\Prho \pi D$ wave with
$M = 1$, the nonresonant component exhibits a much steeper \tpr
spectrum with a slope parameter value of
\SIaerrSys{13.6}{0.4}{1.8}{\perGeVcsq}. The nonresonant \tpr spectrum
in the $\PfTwo \pi P$ wave is sensitive to changes of the fit model
discussed in \cref{sec:systematics} (see
\cref{fig:tprim_2pp_f2_nonres}). It is not well described by the
model, \cref{eq:slope-parametrization}. The $\rbrk{\tpr}^{\abs{M}}$
factor in the model induces a downturn toward lower \tpr, which is
inconsistent with the data. From the above, we conclude that the
nonresonant component in this wave seems to have too much freedom. We
also cannot exclude that it is distorted by leakage into the small
$\PfTwo \pi P$ wave at the stage of the mass-independent analysis.
In the fit model, the \tpr dependence of the coupling amplitudes of
the resonant components in the \wave{2}{++}{2}{+}{\Prho}{D} wave is
not constrained by \cref{eq:method:branchingdefinition} and is
therefore determined independently of the other two $2^{++}$ waves.
In the $M = 2$ wave, we observe slope parameters for the \PaTwo and
the nonresonant contribution of \SIaerrSys{9.0}{1.2}{0.7}{\perGeVcsq}
and \SIaerrSys{8.1}{1.6}{0.5}{\perGeVcsq}, respectively. The value
for the \PaTwo is slightly larger than in the other two $2^{++}$
waves, while the one for the nonresonant component is significantly
smaller. Neither effect is understood at present; both illustrate
the limitations of our model. The \tpr spectrum of the \PaTwo[1700]
in the $M = 2$ wave differs strongly from the \tpr spectra in the
other two waves [see \cref{fig:tprim_2pp_m2}]. It has a rather
peculiar shape: after an initial rise with increasing \tpr, the
intensity drops sharply with \tpr until about \SI{0.3}{\GeVcsq} and
then levels off. The fit function in \cref{eq:slope-parametrization}
is not able to describe these data. The \tpr spectrum is sensitive to
changes of the fit model discussed in \cref{sec:systematics}. We
therefore conclude that with our model the \PaTwo[1700] signal in the
$M = 2$ wave is too small to reliably extract \PaTwo[1700]
yields, although it helps to constrain the \PaTwo[1700] parameters.
We extract the branching-fraction ratio for the decays of the \PaTwo
into the $\Prho \pi D$ and $\PfTwo \pi P$ decay modes with $M = 1$,
where the latter one is a subthreshold decay. Using
\cref{eq:branch_fract_ratio} we get
\begin{multlineOrEq}
\label{eq:branch_fract_ratio_a2}
B_{\Prho* \pi D, \PfTwo* \pi P}^{\PaTwo*}
\newLineOrNot
\begin{alignedOrNot}
\alignOrNot= \frac{\text{BF}\!\sBrk{\PaTwo^- \to \Prho^0 \pi^- \to \threePi}\hfill}{\text{BF}\!\sBrk{\PaTwo^- \to \PfTwo \pi^- \to \threePi}}
\newLineOrNot
\alignOrNot= \numaerrSys{17.6}{1.1}{2.6}.
\end{alignedOrNot}
\end{multlineOrEq}
This is the first measurement of this quantity. As for the \PaFour
(see \cref{sec:fourPP_results}), this ratio increases by a factor of
$4/3$ when we take into account the unobserved decays
$\PaTwo*^- \to \Prho*^- \pi^0$ and $\PaTwo*^- \to \PfTwo* \pi^-$ to
the \threePiN final state and assume isospin symmetry. Hence
\begin{equation}
\label{eq:branch_fract_ratio_a2_iso}
\begin{splitOrNot}
B_{\Prho* \pi D, \PfTwo* \pi P}^{\PaTwo*, \text{iso}}
\alignOrNot= \frac{\text{BF}\!\sBrk{\PaTwo^- \to \Prho \pi \to 3\pi}\hfill}{\text{BF}\!\sBrk{\PaTwo^- \to \PfTwo \pi \to 3\pi}}
\newLineOrNot
\alignOrNot= \numaerrSys{23.5}{1.5}{3.5}.
\end{splitOrNot}
\end{equation}
Taking into account the branching fraction of the \PfTwo to $2\pi$ and
the effect of the different Bose symmetrizations in the \threePi and
\threePiN final states, the isospin factor $4/3$ should be replaced by
\numaerr{0.936}{0.032}{0.010}\footnote{We only take into account the
uncertainty of the $\PfTwo \to 2\pi$ branching fraction.} yielding
the corrected branching-fraction ratio
\begin{equation}
\label{eq:branch_fract_ratio_a2_corr}
\begin{splitOrNot}
B_{\Prho* \pi D, \PfTwo* \pi P}^{\PaTwo*, \text{corr}}
\alignOrNot= \frac{\text{BF}\!\sBrk{\PaTwo^- \to \Prho \pi}\hfill}{\text{BF}\!\sBrk{\PaTwo^- \to \PfTwo \pi}}
\newLineOrNot
\alignOrNot= \numaerr{16.5}{1.2}{2.4}.
\end{splitOrNot}
\end{equation}
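As a quick numerical cross-check of the central values in the three branching-fraction ratios above (a sketch that ignores the quoted uncertainties):

```python
# Hedged cross-check of the central values of the branching-fraction
# ratios quoted above (statistical and systematic uncertainties ignored).
ratio = 17.6           # B(rho pi D) / B(f2 pi P), observed 3pi decays only
iso_naive = 4.0 / 3.0  # isospin factor accounting for the unobserved decays
iso_corr = 0.936       # factor including BF(f2 -> 2pi) and Bose symmetrization

ratio_iso = ratio * iso_naive   # isospin-corrected ratio
ratio_corr = ratio * iso_corr   # fully corrected ratio
print(round(ratio_iso, 1), round(ratio_corr, 1))  # -> 23.5 16.5
```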
\subsubsection{Discussion of results on $2^{++}$ resonances}
\label{sec:twoPP_discussion}
From our analysis, we conclude that we observe two resonances with
$\JPC = 2^{++}$. The \PaTwo appears as a clear peak in all three
$2^{++}$ waves, whereas the \PaTwo[1700] shows up most prominently in
the $\PfTwo \pi P$ wave and is seen to couple only weakly to
$\Prho \pi D$. In order to study the significance of the extracted
\PaTwo[1700] signal, we performed a fit, in which the \PaTwo[1700]
component was removed from the fit model. Compared to the main fit,
this fit has a minimum \chisq~value that is larger by a factor
of~\num{1.48}.\footnote{Compared to the \num{722} free parameters of
the main fit, this fit has \num{674} free parameters.}
\Cref{fig:no-a2(1700)_chi2difference} shows the contributions from the
spin-density matrix elements to the \chisq~difference between this and
the main fit. As expected, the largest contribution to the observed
\chisq~increase comes from the \wave{2}{++}{1}{+}{\PfTwo}{P} wave
intensity and from its interferences. This shows that most of the
support for the \PaTwo[1700] component comes from the
\wave{2}{++}{1}{+}{\PfTwo}{P} wave, which is consistent with the
observation that the \PaTwo[1700] signal is small in the two
$\Prho \pi D$ waves. \Cref{fig:no-a2(1700)_2pp_f2_intensity} shows
that the \wave{2}{++}{1}{+}{\PfTwo}{P} wave cannot be described
without the \PaTwo[1700]. The model without the \PaTwo[1700] that is
represented by the dashed red curve is in particular unable to
describe the shoulder at about \SI{1.6}{\GeVcc} in the intensity
distribution.
\begin{figure}[tbp]
\centering
\includegraphics[width=\linewidthOr{\twoPlotWidth}]{fig19}
\caption{Similar to \cref{fig:DeckMC_chi2difference}, but for the
study, in which the \PaTwo[1700] component was omitted from the
fit model.}
\label{fig:no-a2(1700)_chi2difference}
\end{figure}
\begin{figure}[tbp]
\centering
\includegraphics[width=\twoPlotWidth]{fig20}
\caption{\tpr-summed intensity of the \wave{2}{++}{1}{+}{\PfTwo}{P}
wave with the result of the main fit (continuous curves) and of
the fit, in which the \PaTwo[1700] component was removed from the
fit model (dashed curves). The model and the wave components are
represented as in \cref{fig:intensity_phases_2pp_tbin1}.}
\label{fig:no-a2(1700)_2pp_f2_intensity}
\end{figure}
We clearly observe the production of \PaTwo with $M = 2$. This is
consistent with the peak observed in the $M = 2$ $D$-wave of the
$\pi^-\eta$ final state [see Fig.~3(g) in \refCite{Adolph:2014rpp}].
Also, the intensity ratio of the $M = 1$ and $M = 2$ waves at the
\PaTwo peak position is similar for the two final states. In the
present analysis, we have studied in detail the \tpr dependence of the
\PaTwo component in the $\Prho \pi D$ waves with $M = 1$ and $M = 2$.
Despite the different functional dependence due to the
$\rbrk{\tpr}^{\abs{M}}$ factor in \cref{eq:slope-parametrization}, the
extracted slope parameters have similar values. In addition, the
relative phase of the coupling amplitudes of the \PaTwo in the two
waves exhibits only a weak \tpr dependence and departs from zero by no
more than \SI{20}{\degree} (see \cref{sec:production_phases}). All
this points to the same production mechanism and shows that Pomeron
exchange can transfer helicity~2 to the produced state.
The PDG quotes world averages for the \PaTwo parameters of
$m_{\PaTwo} = \SIaerr{1319.0}{1.0}{1.3}{\MeVcc}$ and
$\Gamma_{\PaTwo} =
\SIaerr{105}{1.6}{1.9}{\MeVcc}$~\cite{Patrignani:2016xqp} for the
$3\pi$ decay mode. While our estimate of
$m_{\PaTwo} = \SIaerrSys{1314.5}{4.0}{3.3}{\MeVcc}$ is
\SI{4.5}{\MeVcc} lower, our width value of
$\Gamma_{\PaTwo} = \SIaerrSys{106.6}{3.4}{7.0}{\MeVcc}$ agrees well
with the PDG average. Our present \PaTwo parameters agree with the
results of our two previous analyses: the one based on the measurement
of the \threePi final state diffractively produced on a solid lead
target~\cite{alekseev:2009aa}, and the other based on the measurement
of the $\eta \pi$ and $\eta' \pi$ final states diffractively produced
on a liquid-hydrogen target~\cite{Adolph:2014rpp}. The finite
resolution in \mThreePi, which is neglected in our analysis, is
estimated to affect the width by less than \SI{1}{\MeVcc}. Our values
for the slope parameter of the \PaTwo in the $\Prho \pi D$ and
$\PfTwo \pi P$ waves with $M = 1$ are in good agreement with the value
of \SI{7.3(1)}{\perGeVcsq} measured by ACCMOR~\cite{daum:1980ay}.
The \PaTwo[1700] is listed by the PDG as \enquote{omitted from summary
table} with world averages for mass and width of
$m_{\PaTwo[1700]} = \SI{1732(16)}{\MeVcc}$ and
$\Gamma_{\PaTwo[1700]} =
\SI{194(40)}{\MeVcc}$~\cite{Patrignani:2016xqp}. Our result of
$m_{\PaTwo[1700]} = \SIaerrSys{1681}{22}{35}{\MeVcc}$ is consistent
with the world average, but our width value of
$\Gamma_{\PaTwo[1700]} = \SIaerrSys{436}{20}{16}{\MeVcc}$ is
\SI{242}{\MeVcc} larger. Our width estimate disagrees in particular
with the result of the Belle experiment, which measured
an enhancement in the invariant mass spectrum of $K^+ K^-$ pairs
produced in two-photon collisions~\cite{Abe:2003vn} with a width of
only \SIerrs{151}{22}{24}{\MeVcc}. The PDG assigns this measurement
to the \PaTwo[1700] and includes it in the world average. It is
interesting to compare our results with an analysis of the $\eta \pi$
$D$-wave intensity using an analytical model based on the principles
of the relativistic $S$-matrix~\cite{Jackura:2017amb}. The analysis
is based on the partial-wave decomposition of COMPASS data from
\refCite{Adolph:2014rpp}. The extracted \PaTwo pole parameters from
\refCite{Jackura:2017amb} are consistent with the values of our
Breit-Wigner parameters. The same is true for the \PaTwo[1700] mass,
but the \PaTwo[1700] width of \SIerrs{280}{10}{70}{\MeVcc} that is
found in \refCite{Jackura:2017amb} appears to be lower than our value.
This is a hint that our simplifying model assumptions may cause an
overestimation of the \PaTwo[1700] width.
We observe that the \PaTwo[1700] decays predominantly into
$\PfTwo \pi P$ and less strongly into $\Prho \pi D$. This finding is difficult
to reconcile with the dominance of the $\Prho \pi$ over the
$\PfTwo \pi$ decay mode observed by the L3~experiment in an analysis
of the $\pi^+\pi^-\pi^0$ final state produced in two-photon
collisions~\cite{Shchegelsky:2006es}. At the current stage of the
analysis we do not make a quantitative statement on the \PaTwo[1700]
branching fractions because the \PaTwo[1700] region in the two
$\Prho \pi D$ waves is dominated by the \PaTwo high-mass tail and the
nonresonant components.
A number of observations of potential higher excited \PaTwo* states
are listed by the PDG as \enquote{further
states}~\cite{Patrignani:2016xqp}:
\PaTwo[1950]~\cite{anisovich:2001pn},
\PaTwo[1990]~\cite{Shchegelsky:2006es,Lu:2004yn},
\PaTwo[2030]~\cite{anisovich:2001pn},
\PaTwo[2175]~\cite{anisovich:2001pn}, and
\PaTwo[2255]~\cite{anisovich:2001pp}. We do not see clear resonance
signals of heavy \PaTwo* states above the \PaTwo[1700] in the analyzed
waves.
%
\subsection{$\JPC = 2^{-+}$ resonances}
\label{sec:twoMP}
\subsubsection{Results on $2^{-+}$ resonances}
\label{sec:twoMP_results}
We include four waves with $\JPC = 2^{-+}$ in the resonance-model fit.
The \wave{2}{-+}{0}{+}{\Prho}{F} and \wave{2}{-+}{0}{+}{\PfTwo}{S}
waves have relatively large intensities and contribute
\SI{2.2}{\percent} and \SI{6.7}{\percent} to the total intensity,
respectively. The \wave{2}{-+}{1}{+}{\PfTwo}{S} and
\wave{2}{-+}{0}{+}{\PfTwo}{D} waves have smaller intensities and each
contributes \SI{0.9}{\percent} to the total intensity.
\Cref{fig:intensity_phases_2mp} shows the intensity distributions of
the four waves for the lowest and the highest \tpr bins (first and
third rows, respectively).
\ifMultiColumnLayout{\begin{figure*}[p]}{\begin{figure}[tbp]}
\centering
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig21a}%
\label{fig:intensity_2mp_rho_tbin1}%
}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig21b}%
\label{fig:intensity_2mp_m0_f2_S_tbin1}%
}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig21c}%
\label{fig:intensity_2mp_m1_f2_S_tbin1}%
}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig21d}%
\label{fig:intensity_2mp_f2_D_tbin1}%
}%
\\
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig21e}%
\label{fig:phase_2mp_rho_1pp_rho_tbin1}%
}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig21f}%
\label{fig:phase_2mp_m0_f2_S_1pp_rho_tbin1}%
}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig21g}%
\label{fig:phase_2mp_m1_f2_S_1pp_rho_tbin1}%
}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig21h}%
\label{fig:phase_2mp_f2_D_1pp_rho_tbin1}%
}%
\\
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig21i}%
\label{fig:intensity_2mp_rho_tbin11}%
}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig21j}%
\label{fig:intensity_2mp_m0_f2_S_tbin11}%
}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig21k}%
\label{fig:intensity_2mp_m1_f2_S_tbin11}%
}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig21l}%
\label{fig:intensity_2mp_f2_D_tbin11}%
}%
\\
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig21m}%
\label{fig:phase_2mp_rho_1pp_rho_tbin11}%
}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig21n}%
\label{fig:phase_2mp_m0_f2_S_1pp_rho_tbin11}%
}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig21o}%
\label{fig:phase_2mp_m1_f2_S_1pp_rho_tbin11}%
}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig21p}%
\label{fig:phase_2mp_f2_D_1pp_rho_tbin11}%
}%
\caption{Amplitudes of the four $\JPC = 2^{-+}$ waves: (first
column) \wave{2}{-+}{0}{+}{\Prho}{F} wave, (second column)
\wave{2}{-+}{0}{+}{\PfTwo}{S} wave, (third column)
\wave{2}{-+}{1}{+}{\PfTwo}{S} wave, and (fourth column)
\wave{2}{-+}{0}{+}{\PfTwo}{D} wave. (first and third rows)
Intensity distributions in the lowest and highest \tpr bins,
respectively. (second and fourth rows) Phases relative to the
\wave{1}{++}{0}{+}{\Prho}{S} wave in the lowest and highest \tpr
bins, respectively. The model and the wave components are
represented as in \cref{fig:intensity_phases_0mp}, except that
here the blue curves represent the \PpiTwo, the \PpiTwo[1880], and
the \PpiTwo[2005].}
\label{fig:intensity_phases_2mp}
\ifMultiColumnLayout{\end{figure*}}{\end{figure}}
The intensities of the $\Prho \pi F$ wave and of the two
$\PfTwo \pi S$ waves exhibit a clear peak at \SI{1.65}{\GeVcc}, which
dominates in particular the $\PfTwo \pi S$ waves. The position of
this peak does not depend strongly on \tpr. The $\Prho \pi F$ wave
has an additional high-mass shoulder at \SI{1.9}{\GeVcc}, which
becomes a dominant peak in the highest \tpr bin. The $\PfTwo \pi S$
wave with $M = 0$ has a smaller high-mass shoulder at about
\SI{2.05}{\GeVcc}, which also grows relative to the \SI{1.65}{\GeVcc}
peak with increasing \tpr. This shoulder is absent in the
$\PfTwo \pi S$ wave with $M = 1$. The $\PfTwo \pi D$ wave has no
structure at \SI{1.65}{\GeVcc}. Instead, it exhibits a dominant peak
at \SI{1.8}{\GeVcc} and a slight high-mass shoulder at
\SI{2.05}{\GeVcc}, which becomes more pronounced toward higher \tpr.
The position of the peak is independent of \tpr.
The \wave{2}{-+}{0}{+}{\Prho}{F} wave and the two
$2^{-+} \PfTwo \pi S$ waves exhibit clearly rising phases \wrt the
\wave{1}{++}{0}{+}{\Prho}{S} wave in the region of the
\SI{1.65}{\GeVcc} peak (see second and fourth rows in
\cref{fig:intensity_phases_2mp}). At low \tpr, the phases of the
\wave{2}{-+}{0}{+}{\Prho}{F} and \wave{2}{-+}{1}{+}{\PfTwo}{S} waves
continue to rise in the \SI{1.9}{\GeVcc} region [see
\cref{fig:phase_2mp_rho_1pp_rho_tbin1,fig:phase_2mp_m1_f2_S_1pp_rho_tbin1}].
The phase motion of the \wave{2}{-+}{0}{+}{\Prho}{F} wave is
approximately independent of \tpr, whereas the phase of the
\wave{2}{-+}{1}{+}{\PfTwo}{S} wave flattens out at about
\SI{1.9}{\GeVcc} at higher \tpr, making the phase motion of this wave
similar to that of the corresponding $M = 0$ wave. The phase motion
of the \wave{2}{-+}{0}{+}{\PfTwo}{D} wave \wrt the
\wave{1}{++}{0}{+}{\Prho}{S} wave exhibits a rapid rise in the region
of the \SI{1.8}{\GeVcc} peak and a slower rise in the region of the
\SI{2.05}{\GeVcc} shoulder. The amplitude of the phase motion
decreases with increasing \tpr.
The fit model contains three resonances, \PpiTwo, \PpiTwo[1880], and
\PpiTwo[2005], to describe the four $\JPC = 2^{-+}$ waves. The
resonances are parametrized using
\cref{eq:BreitWigner,eq:method:fixedwidth}, the nonresonant components
using \cref{eq:method:nonresterm} for the
\wave{2}{-+}{0}{+}{\PfTwo}{S} and \wave{2}{-+}{0}{+}{\Prho}{F} waves
and \cref{eq:method:nonrestermsmall} for the other two $2^{-+}$ waves
(see \cref{tab:method:fitmodel:waveset}). The $\Prho \pi F$ wave is
fit in the range from \SIrange{1.2}{2.1}{\GeVcc}, the two
$\PfTwo \pi S$ waves from \SIrange{1.4}{2.3}{\GeVcc}, and the
$\PfTwo \pi D$ wave from \SIrange{1.6}{2.3}{\GeVcc}.
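To illustrate the resonance description, the following is a minimal sketch of a constant-width relativistic Breit-Wigner amplitude, one plausible reading of \cref{eq:BreitWigner,eq:method:fixedwidth}; the actual parametrization may include mass-dependent widths and barrier factors. The parameter values are \PpiTwo-like and purely illustrative:

```python
import numpy as np

# Minimal sketch of a constant-width relativistic Breit-Wigner
# amplitude (an assumption, not the exact parametrization used in the
# fit, which may include mass-dependent widths and barrier factors).
def breit_wigner(m, m0, gamma0):
    """Amplitude for a resonance of nominal mass m0 and width gamma0 (GeV)."""
    return (m0 * gamma0) / (m0**2 - m**2 - 1j * m0 * gamma0)

# pi2(1670)-like illustrative parameters; with a constant width the
# intensity |BW|^2 peaks exactly at m = m0.
m = np.linspace(1.2, 2.3, 1101)           # 3pi mass grid in GeV/c^2
intensity = np.abs(breit_wigner(m, 1.67, 0.26))**2
m_peak = m[np.argmax(intensity)]
print(f"intensity peaks at {m_peak:.2f} GeV/c^2")
```

In the fit, such amplitudes interfere coherently with the nonresonant terms, which is what produces the distorted peaks and dips discussed in the text.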
The $\Prho \pi F$ wave and the two $\PfTwo \pi S$ waves are dominated
by the \PpiTwo. In the $\Prho \pi F$ wave, the nonresonant component
is small compared to the \PpiTwo component. Only in the two highest
\tpr bins does it have a larger intensity. The contributions from the
nonresonant components are larger in the two $\PfTwo \pi S$ waves.
These waves also show a stronger interference of the wave components
in the \PpiTwo region, in particular at lower \tpr. In the
$\Prho \pi F$ wave and the two $\PfTwo \pi S$ waves, the intensities
of the two excited \PpiTwo* components are comparable to those of the
nonresonant components or even smaller. In the $\PfTwo \pi S$ wave
with $M = 0$, the \PpiTwo[1880] component practically vanishes.
The excited \PpiTwo* components show different interference patterns.
In the $\Prho \pi F$ wave, significant constructive interference of
the wave components describes the high-mass shoulder at
\SI{1.9}{\GeVcc}. In the $\PfTwo \pi S$ wave with $M = 0$, these
interference effects are much smaller, whereas in the $\PfTwo \pi S$
wave with $M = 1$ the components interfere destructively, leading to a
steeper drop of the intensity in the \SI{1.8}{\GeVcc} region at larger
\tpr.
The composition of the $\PfTwo \pi D$ wave is strikingly different.
In this wave, all three resonance components play a significant role,
with the \PpiTwo[1880] being the dominant one that destructively
interferes with the other components. At lower values of \tpr, the
\PpiTwo and the \PpiTwo[2005] appear with similar intensities. In the
two highest \tpr bins, the \PpiTwo[2005] component becomes larger.
The contribution from the nonresonant component is small.
Within the fit ranges, the fit model generally describes the intensity
distributions well. This is in particular true for the two
$\PfTwo \pi S$ waves. The fit model does not reproduce the details of
the high-mass shoulder at \SI{2.05}{\GeVcc} in the $\PfTwo \pi D$
wave. In this wave, also the extrapolation of the fit model above the
fit range of \SI{2.3}{\GeVcc} deviates from the data, in particular at
lower \tpr. In the $\Prho \pi F$ wave, the fit model does not
reproduce details of the peak at \SI{1.65}{\GeVcc} and of the shoulder
at \SI{1.9}{\GeVcc}. The extrapolation of the fit model above the fit
range of \SI{2.1}{\GeVcc} deviates from the data.
The dominance of the \PpiTwo in the $\Prho \pi F$ wave and in the two
$\PfTwo \pi S$ waves is supported by the clearly rising phases of
these waves \wrt the \wave{1}{++}{0}{+}{\Prho}{S} wave (see
\cref{fig:intensity_phases_2mp}). It is also consistent with the
approximately constant relative phases among these three $2^{-+}$
waves in the \SI{1.6}{\GeVcc} region at low \tpr (see
\cref{fig:intensity_phases_2mp_tbin1}). Above the \PpiTwo region, the
similar relative strengths of \PpiTwo[1880] and \PpiTwo[2005] in the
\wave{2}{-+}{0}{+}{\Prho}{F} and \wave{2}{-+}{1}{+}{\PfTwo}{S} waves
lead to only small variations of their relative phase. The
\wave{2}{-+}{0}{+}{\Prho}{F} and \wave{2}{-+}{1}{+}{\PfTwo}{S} waves
exhibit more pronounced phase motions in the \SI{1.9}{\GeVcc} region
\wrt the \wave{2}{-+}{0}{+}{\PfTwo}{S} wave because of the vanishing
\PpiTwo[1880] component in the latter wave. The interference pattern
of the three $2^{-+}$ waves changes toward higher \tpr mainly because
of the changing composition of the \wave{2}{-+}{0}{+}{\Prho}{F} wave
(see \cref{fig:intensity_phases_2mp_tbin11}). The phase of the
\wave{2}{-+}{0}{+}{\PfTwo}{D} wave \wrt the
\wave{1}{++}{0}{+}{\Prho}{S} wave rises in the \SI{1.8}{\GeVcc} region
and, less rapidly, in the \SI{2.0}{\GeVcc} region. This phase motion
is caused by \PpiTwo[1880] and \PpiTwo[2005] and is connected to the
phases of the coupling amplitudes of the two heavier \PpiTwo*, which
are close to \SI{180}{\degree} relative to the \PpiTwo in this wave
(see \cref{sec:production_phases}). \Wrt the other three $2^{-+}$
waves, the $\PfTwo \pi D$ wave shows similar phase motions. This is
consistent with the large contributions from \PpiTwo[1880] and
\PpiTwo[2005] in this wave compared to the \PpiTwo component.
\ifMultiColumnLayout{\begin{figure*}[t]}{\begin{figure}[p]}
\centering
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig22a}%
\label{fig:intensity_2mp_rho_tbin1_log}%
}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig22b}%
\label{fig:phase_2mp_rho_2mp_m0_f2_S_tbin1}%
}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig22c}%
\label{fig:phase_2mp_rho_2mp_m1_f2_S_tbin1}%
}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig22d}%
\label{fig:phase_2mp_rho_2mp_f2_D_tbin1}%
}%
\\
\hspace*{\fourPlotWidth}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig22e}%
\label{fig:intensity_2mp_m0_f2_S_tbin1_log}%
}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig22f}%
\label{fig:phase_2mp_m0_f2_S_2mp_m1_f2_S_tbin1}%
}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig22g}%
\label{fig:phase_2mp_m0_f2_S_2mp_f2_D_tbin1}%
}%
\\
\hspace*{\fourPlotWidth}%
\hspace*{\fourPlotSpacing}%
\hspace*{\fourPlotWidth}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig22h}%
\label{fig:intensity_2mp_m1_f2_S_tbin1_log}%
}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig22i}%
\label{fig:phase_2mp_m1_f2_S_2mp_f2_D_tbin1}%
}%
\\
\hspace*{\fourPlotWidth}%
\hspace*{\fourPlotSpacing}%
\hspace*{\fourPlotWidth}%
\hspace*{\fourPlotSpacing}%
\hspace*{\fourPlotWidth}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig22j}%
\label{fig:intensity_2mp_f2_D_tbin1_log}%
}%
\caption{Amplitudes of the four $\JPC = 2^{-+}$ waves in the lowest
\tpr bin.
\subfloatLabel{fig:intensity_2mp_rho_tbin1_log}~through~\subfloatLabel{fig:phase_2mp_rho_2mp_f2_D_tbin1}:
Intensity distribution and relative phases for the
\wave{2}{-+}{0}{+}{\Prho}{F}
wave. \subfloatLabel{fig:intensity_2mp_m0_f2_S_tbin1_log}~through~\subfloatLabel{fig:phase_2mp_m0_f2_S_2mp_f2_D_tbin1}:
Intensity distribution and relative phases for the
\wave{2}{-+}{0}{+}{\PfTwo}{S} wave.
\subfloatLabel{fig:intensity_2mp_m1_f2_S_tbin1_log}~and~\subfloatLabel{fig:phase_2mp_m1_f2_S_2mp_f2_D_tbin1}:
Intensity distribution and relative phase for the
\wave{2}{-+}{1}{+}{\PfTwo}{S} wave.
\subfloatLabel{fig:intensity_2mp_f2_D_tbin1_log}: Intensity
distribution for the \wave{2}{-+}{0}{+}{\PfTwo}{D} wave. The
model and the wave components are represented as in
\cref{fig:intensity_phases_2mp}.}
\label{fig:intensity_phases_2mp_tbin1}
\ifMultiColumnLayout{\end{figure*}}{\end{figure}}
\ifMultiColumnLayout{\begin{figure*}[t]}{\begin{figure}[p]}
\centering
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig23a}%
\label{fig:intensity_2mp_rho_tbin11_log}%
}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig23b}%
\label{fig:phase_2mp_rho_2mp_m0_f2_S_tbin11}%
}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig23c}%
\label{fig:phase_2mp_rho_2mp_m1_f2_S_tbin11}%
}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig23d}%
\label{fig:phase_2mp_rho_2mp_f2_D_tbin11}%
}%
\\
\hspace*{\fourPlotWidth}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig23e}%
\label{fig:intensity_2mp_m0_f2_S_tbin11_log}%
}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig23f}%
\label{fig:phase_2mp_m0_f2_S_2mp_m1_f2_S_tbin11}%
}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig23g}%
\label{fig:phase_2mp_m0_f2_S_2mp_f2_D_tbin11}%
}%
\\
\hspace*{\fourPlotWidth}%
\hspace*{\fourPlotSpacing}%
\hspace*{\fourPlotWidth}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig23h}%
\label{fig:intensity_2mp_m1_f2_S_tbin11_log}%
}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig23i}%
\label{fig:phase_2mp_m1_f2_S_2mp_f2_D_tbin11}%
}%
\\
\hspace*{\fourPlotWidth}%
\hspace*{\fourPlotSpacing}%
\hspace*{\fourPlotWidth}%
\hspace*{\fourPlotSpacing}%
\hspace*{\fourPlotWidth}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig23j}%
\label{fig:intensity_2mp_f2_D_tbin11_log}%
}%
\caption{Similar to \cref{fig:intensity_phases_2mp_tbin1} but for
the highest \tpr bin.}
\label{fig:intensity_phases_2mp_tbin11}
\ifMultiColumnLayout{\end{figure*}}{\end{figure}}
Within the fit range, the model describes most of the
phase motions well. Some details in the high-mass regions are not
reproduced. Often, the high-mass extrapolations of the fit model
deviate from the data (see \eg \cref{fig:intensity_phases_2mp_tbin1}
and the second row of \cref{fig:intensity_phases_2mp}). In some
cases, this is also true for the extrapolations below the low-mass
limits of the fit ranges (see \eg
\cref{fig:intensity_phases_2mp_tbin1}). However, in many of these
cases the intensities of the waves are small.
The extracted resonance parameters for \PpiTwo, \PpiTwo[1880], and
\PpiTwo[2005] are
\begin{align*}
m_{\PpiTwo} &= \SIaerrSys{1642}{12}{1}{\MeVcc}\text{,} \\
\Gamma_{\PpiTwo} &= \SIaerrSys{311}{12}{23}{\MeVcc}\text{,} \\
m_{\PpiTwo[1880]} &= \SIaerrSys{1847}{20}{3}{\MeVcc}\text{,} \\
\Gamma_{\PpiTwo[1880]} &= \SIaerrSys{246}{33}{28}{\MeVcc}\text{,} \\
m_{\PpiTwo[2005]} &= \SIaerrSys{1962}{17}{29}{\MeVcc}\text{, and} \\
\Gamma_{\PpiTwo[2005]} &= \SIaerrSys{371}{16}{120}{\MeVcc}.
\end{align*}
Hence in the $2^{-+}$ sector, the model assumption of well-separated
resonances with little overlap is not well fulfilled. Although
constrained by the amplitudes of four waves, the $2^{-+}$ resonance
parameters exhibit a larger sensitivity to changes of the fit model
discussed in \cref{sec:systematics}. They therefore have larger
systematic uncertainties than, for example, the parameters of the
\Ppi[1800]. In addition, some of the systematic uncertainty intervals
are highly asymmetric. The parameters of the three $2^{-+}$
resonances are correlated in a complicated way and depend, among other
things, on the set of waves included in the fit. The number of
background events in the selected data sample also influences the resonance
parameters. The parameters of \PpiTwo[1880] and \PpiTwo[2005] are in
addition sensitive to the number of \tpr bins. This underlines the
importance of using a fine-grained \tpr binning in order to capture
the evolution of the $2^{-+}$ amplitudes with \tpr. The $2^{-+}$
resonance parameters exhibit an exceptionally large sensitivity to the
\mThreePi and \tpr dependences of the production probability
$\Abs[0]{\mathcal{P}(\mThreePi, \tpr)}^2$ in
\cref{eq:method:param:spindens,eq:method:param:prods}.
The widths of \PpiTwo and \PpiTwo[1880] are also affected by the
interference of the $2^{-+}$ waves with the low-mass part of the
\wave{0}{-+}{0}{+}{\PfZero[980]}{S} wave. More details on the results
of the systematic studies can be found in
\cref{sec:syst_uncert_twoMP}.
\Cref{fig:tprim_2mp_1,fig:tprim_2mp_2} show the \tpr dependence of the
intensities of the resonant and nonresonant $2^{-+}$ wave components
together with the results of fits using
\cref{eq:slope-parametrization}. In our fit model, the coupling
amplitudes of the resonance components in the three $2^{-+}$ waves
with $M = 0$ are constrained by \cref{eq:method:branchingdefinition}.
The \tpr spectra of the resonance components are well described by the
exponential model in \cref{eq:slope-parametrization}. The extracted
values of the slope parameters for \PpiTwo, \PpiTwo[1880], and
\PpiTwo[2005] are approximately \SI{8.5}{\perGeVcsq},
\SI{7.8}{\perGeVcsq}, and \SI{6.7}{\perGeVcsq}, respectively [see
\cref{tab:slopes} for details], which are typical values for
resonances. As for the $1^{++}$ and $2^{++}$ resonances, the slope
parameter decreases with increasing mass of the resonance. This
flattening of the \tpr slope with increasing \mThreePi was also
observed in the \tpr spectra before partial-wave decomposition (see
\eg Fig.~31 in \refCite{Adolph:2015tqa}). The three-component Deck
model~\cite{CohenTannoudji:1976tj,Kaidalov:1979jz,Antunes:1984cy} may
explain this behavior. The relative enhancement of higher-mass states
at larger values of \tpr helps to better disentangle the various
resonance components.
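The single-exponential form referred to above can be written schematically (the exact definition, including normalization, is given by \cref{eq:slope-parametrization}) as
\begin{equation*}
  \frac{\mathrm{d}N}{\mathrm{d}\tpr} \propto e^{-b\, \tpr},
\end{equation*}
where the slope parameter $b$ quantifies how steeply the intensity of a given wave component falls with \tpr.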
In the fit model, the \tpr dependence of the coupling amplitudes of
the resonant components in the \wave{2}{-+}{1}{+}{\PfTwo}{S} wave is
not constrained by \cref{eq:method:branchingdefinition}. Due to the
relative smallness of this wave, the intensities of the wave
components are extracted less
reliably. \Cref{eq:slope-parametrization} does not describe the
\tpr spectra of the wave components well. This is true in particular for
the \PpiTwo and \PpiTwo[1880]. Hence only a rough comparison of the
slope parameters is possible. The slope parameter values for
\PpiTwo[1880] and \PpiTwo[2005] are compatible with those found in the
other three $2^{-+}$ waves. However, the slope of the \PpiTwo\ \tpr
spectrum is significantly smaller with $b = \SI{5.0}{\perGeVcsq}$.
This effect is not understood but it is consistent with the shallower
\tpr slope of the intensity of this wave in the \PpiTwo mass region
(see Table~VI in \refCite{Adolph:2015tqa}).
Compared to the other \JPC sectors, where we observe in general a
steeper \tpr slope for the nonresonant components than for the
resonances, the nonresonant components in the $2^{-+}$ sector behave
somewhat irregularly. The only exception is the nonresonant component
in the $\PfTwo \pi D$ wave. Its \tpr spectrum is well described by
the exponential in \cref{eq:slope-parametrization} and has a slope of
\SIaerrSys{12}{6}{2}{\perGeVcsq}, which is considerably steeper than
the slopes of the \PpiTwo* resonances [see
\cref{fig:tprim_2mp_f2_D_nonres}]. The \tpr spectrum of the
nonresonant component in the $\PfTwo \pi S$ wave with $M = 1$ has a
shallower slope of \SIaerrSys{6.9}{1.1}{1.9}{\perGeVcsq} that is
comparable to those of the \PpiTwo* resonances [see
\cref{fig:tprim_2mp_m1_f2_S_nonres}]. However, at low \tpr the data
deviate from the fit model. Also for the nonresonant component in the
$\PfTwo \pi S$ wave with $M = 0$, the model deviates from the data at
low \tpr [see \cref{fig:tprim_2mp_m0_f2_S_nonres}].
\Cref{eq:slope-parametrization} cannot reproduce the step at
$\tpr \approx \SI{0.16}{\GeVcsq}$. The extracted value of
\SI{5.1}{\perGeVcsq} for the slope parameter is smaller than that for
the \PpiTwo* resonances, but is not well defined. The nonresonant
component in the $\Prho \pi F$ wave exhibits a complicated \tpr
spectrum [see \cref{fig:tprim_2mp_rho_nonres}]. It has a narrow dip
at about \SI{0.16}{\GeVcsq} at the same location where we observe a
step in the \tpr spectrum of the nonresonant component in the
\wave{2}{-+}{0}{+}{\PfTwo}{S} wave. \Cref{eq:slope-parametrization}
cannot describe such a distribution. The complicated shape of the
\tpr spectrum may be an artifact caused by forcing the same \tpr
dependence of the resonances in the $M = 0$ waves via
\cref{eq:method:branchingdefinition}. However, if we leave the \tpr
dependence of all resonance components free [see discussion of \StudyT
below], the dip at low \tpr remains. Since at low \tpr the
nonresonant component is much smaller than any of the three resonance
components, its intensity is less well determined and more sensitive
to systematic effects. Monte Carlo studies of a model for the Deck
effect (see \cref{sec:deck_model}) have shown that the projection of
this nonresonant amplitude into the $\Prho \pi F$ wave is vanishingly
small so that the observed nonresonant intensity is presumably of
different origin.
\begin{wideFigureOrNot}[tbp]
\centering
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig24a}%
}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig24b}%
\label{fig:tprim_2mp_m0_f2_S_pi2_1880}%
}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig24c}%
}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig24d}%
\label{fig:tprim_2mp_m0_f2_S_nonres}%
}%
\\
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig24e}%
}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig24f}%
\label{fig:tprim_2mp_m1_f2_S_pi2_1880}%
}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig24g}%
}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig24h}%
\label{fig:tprim_2mp_m1_f2_S_nonres}%
}%
\caption{Similar to \cref{fig:method:tp:examplespectrum}, but
showing the \tpr spectra of the components in the two
$2^{-+} \PfTwo \pi S$ waves as given by
\cref{eq:tprim-dependence}: (top row) $\Mrefl = 0^+$ wave and
(bottom row) $\Mrefl = 1^+$ wave; (first column) \PpiTwo
component, (second column) \PpiTwo[1880] component, (third column)
\PpiTwo[2005] component, and (fourth column) nonresonant
components. The red curves and horizontal lines represent fits
using \cref{eq:slope-parametrization}.}
\label{fig:tprim_2mp_1}
\end{wideFigureOrNot}
\begin{figure}[tbp]
\centering
\subfloat[][]{%
\includegraphics[width=\twoPlotWidth]{fig25a}%
\label{fig:tprim_2mp_rho_nonres}%
}%
\newLineOrHspace{\twoPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\twoPlotWidth]{fig25b}%
\label{fig:tprim_2mp_f2_D_nonres}%
}%
\caption{Similar to \cref{fig:method:tp:examplespectrum}, but
showing the \tpr spectra of the nonresonant components
\subfloatLabel{fig:tprim_2mp_rho_nonres}~in the
\wave{2}{-+}{0}{+}{\Prho}{F} wave and
\subfloatLabel{fig:tprim_2mp_f2_D_nonres}~in the
\wave{2}{-+}{0}{+}{\PfTwo}{D} wave. The red curve and horizontal
lines in \subfloatLabel{fig:tprim_2mp_f2_D_nonres} represent a fit
using \cref{eq:slope-parametrization}.}
\label{fig:tprim_2mp_2}
\end{figure}
If we do not constrain the coupling amplitudes via
\cref{eq:method:branchingdefinition} and thus allow the resonance
components to have different \tpr dependences [\StudyT; see
\cref{sec:systematics}], the extracted \tpr spectra agree in general
less with the simple model of \cref{eq:slope-parametrization}. The
components of the \wave{2}{-+}{1}{+}{\PfTwo}{S} and
\wave{2}{-+}{0}{+}{\PfTwo}{D} waves show similar \tpr spectra with
slope parameters that deviate by at most \SI{2}{\perGeVcsq} from those
of the main fit. This is also true for the \PpiTwo component in the
other two $2^{-+}$ waves, the \PpiTwo[1880] component in the
\wave{2}{-+}{0}{+}{\Prho}{F} wave, and the \PpiTwo[2005] component in
the \wave{2}{-+}{0}{+}{\PfTwo}{S} wave. However, in the latter wave,
the slope parameter of the \PpiTwo[1880] becomes almost twice as large
and thus inconsistent with the \PpiTwo[1880] slope parameters in the
other three waves. In the $\Prho \pi F$ wave, the \PpiTwo[2005] \tpr
spectrum changes drastically and becomes similar to the \tpr spectrum
of the nonresonant component in the main fit. In turn, the \tpr
spectrum of the nonresonant component becomes steeper. In addition to
the \tpr spectra, also the resonance parameters of \PpiTwo[1880] and
\PpiTwo[2005] change in \StudyT. The \PpiTwo[1880] becomes
\SI{29}{\MeVcc} wider, whereas the \PpiTwo[2005] becomes
\SI{75}{\MeVcc} narrower. The results of this study indicate that
without the constraint of \cref{eq:method:branchingdefinition}, the
relative intensities of the three \PpiTwo* states and the nonresonant
components are not well constrained by the data. A possible reason
for this behavior is that our approach of modeling the partial-wave
amplitudes as a sum of Breit-Wigner amplitudes might no longer be a good
approximation because of the considerable overlap of the three
\PpiTwo* resonances. Applying more advanced models is the topic of
future research~\cite{Jackura:2016llm,Mikhasenko:2017jtg}.
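Schematically, the fit model describes each partial-wave amplitude as a coherent sum of Breit-Wigner amplitudes and a nonresonant term, that is, a form of the type
\begin{equation*}
  \mathcal{T}(\mThreePi) \propto \sum_{r} \mathcal{C}_r\,
  \frac{m_r \Gamma_r}{m_r^2 - \mThreePi^2 - i\, m_r \Gamma_r}
  + \text{nonresonant term},
\end{equation*}
where $m_r$ and $\Gamma_r$ denote the mass and width of resonance~$r$ and the $\mathcal{C}_r$ are complex coupling amplitudes (the precise parametrization used in the fit, including mass-dependent widths, is defined earlier in the text). This schematic form makes explicit that for strongly overlapping resonances the individual terms interfere over the full mass range, so that their relative intensities become poorly determined.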
\subsubsection{Discussion of results on $2^{-+}$ resonances}
\label{sec:twoMP_discussion}
We observe three distinct resonances with $\JPC = 2^{-+}$ in our data
set, which are clearly identified owing to their different production
characteristics and decay paths. The \PpiTwo appears as a dominant
peak with associated phase motion in the $\Prho \pi F$ and the two
$\PfTwo \pi S$ waves with $\Mrefl = 0^+$ and $1^+$. The strongest
signal for the \PpiTwo[1880] appears in the $\PfTwo \pi D$ wave in the
form of a dominant peak with associated phase motion. The relative
intensity of the \PpiTwo[1880] in the other three $2^{-+}$ waves is
small, which is in particular true for the
\wave{2}{-+}{0}{+}{\PfTwo}{S} wave. The \PpiTwo[2005] appears as
high-mass shoulders in the \wave{2}{-+}{0}{+}{\Prho}{F},
\wave{2}{-+}{0}{+}{\PfTwo}{S}, and \wave{2}{-+}{0}{+}{\PfTwo}{D}
waves, which, due to the shallower \tpr slope of the \PpiTwo[2005], are
more pronounced in the highest \tpr bin. In the $\Prho \pi F$ wave,
this shoulder even turns into a clear peak at large \tpr. The
\PpiTwo[2005] contribution is significantly larger than that of the
\PpiTwo[1880] in the $\Prho \pi F$ wave and in the $\PfTwo \pi S$ wave
with $M = 0$. In the $\PfTwo \pi S$ wave with $M = 1$, the two
contributions are of comparable strength.
The parameters of the \PpiTwo are well known. The PDG quotes world
averages for its mass and width of
$m_{\PpiTwo} = \SI{1672.2(30)}{\MeVcc}$ and
$\Gamma_{\PpiTwo} = \SI{260(9)}{\MeVcc}$,
respectively~\cite{Patrignani:2016xqp}. We find a mass of
$m_{\PpiTwo} = \SIaerrSys{1642}{12}{1}{\MeVcc}$, which is smaller by
\SI{30}{\MeVcc}, and a width of
$\Gamma_{\PpiTwo} = \SIaerrSys{311}{12}{23}{\MeVcc}$, which is larger
by \SI{51}{\MeVcc}. However, within uncertainties our result is
consistent with our previous measurement of the \threePi final state
diffractively produced on a solid lead target~\cite{alekseev:2009aa}.
It is interesting to note that a study with a reduced set of only
11~waves, from which all $2^{-+}$ waves but the
\wave{2}{-+}{0}{+}{\PfTwo}{S} wave have been removed, yields \PpiTwo
resonance parameters of $m_{\PpiTwo} = \SI{1663}{\MeVcc}$ and
$\Gamma_{\PpiTwo} = \SI{256}{\MeVcc}$, which are close to the world
average. In that fit, the \wave{2}{-+}{0}{+}{\PfTwo}{S} amplitude was
described in a smaller mass range from \SIrange{1.4}{1.9}{\GeVcc}
using the \PpiTwo as the only $2^{-+}$ resonance component.
The \PpiTwo[1880] appears to be experimentally well established
according to the PDG, although its measured mass and width values vary
considerably. The PDG lists no observation for the decay
$\PpiTwo[1880] \to 3\pi$. The PDG world averages of the \PpiTwo[1880]
parameters are $m_{\PpiTwo[1880]} = \SI{1895(16)}{\MeVcc}$ and
$\Gamma_{\PpiTwo[1880]} =
\SI{235(34)}{\MeVcc}$~\cite{Patrignani:2016xqp}. While we find a
value for the \PpiTwo[1880] width of
$\Gamma_{\PpiTwo[1880]} = \SIaerrSys{246}{33}{28}{\MeVcc}$ that is
compatible with the world average, our mass value of
$m_{\PpiTwo[1880]} = \SIaerrSys{1847}{20}{3}{\MeVcc}$ is
\SI{48}{\MeVcc} smaller. The four measurements listed by the PDG fall
into two subsets. The first consists of two measurements with lower
masses $m_{\PpiTwo[1880]} \leq \SI{1880}{\MeVcc}$ and smaller widths
$\Gamma_{\PpiTwo[1880]} \leq
\SI{255}{\MeVcc}$~\cite{anisovich:2001hj,Lu:2004yn}. Our estimate of
the \PpiTwo[1880] parameters is within uncertainties compatible with
these two measurements, although there is some disagreement with the
extremely small width estimate of \SIerrs{146}{17}{62}{\MeVcc} from
\refCite{Lu:2004yn}. The other two measurements with larger masses
$m_{\PpiTwo[1880]} \geq \SI{1929}{\MeVcc}$ and larger widths
$\Gamma_{\PpiTwo[1880]} \geq
\SI{306}{\MeVcc}$~\cite{kuhn:2004en,eugenio:2008zza} are more
compatible with our estimates for the \PpiTwo[2005] parameters.
The \PpiTwo[2005] is listed by the PDG only as a \enquote{further
state} with two observations~\cite{Patrignani:2016xqp}. It was
claimed in an analysis by the BNL~E852 experiment of the
$\omega \pi^0 \pi^-$ final state diffractively produced on a proton
target~\cite{Lu:2004yn} and in two analyses based on \ppbar
annihilation data from the Crystal Barrel experiment: a combined
analysis of $3\pi^0$, $\pi^0 \eta$, and $\pi^0 \eta'$ final
states~\cite{anisovich:2001pn} and an analysis of
$\eta \eta \pi^0$~\cite{anisovich:2001pp}. The mass range explored in
\ppbar annihilations in flight starts only around \SI{1.95}{\GeVcc}
and thus covers only the high-mass part of the \PpiTwo[2005]
resonance. Within uncertainties, our estimate for the \PpiTwo[2005]
parameters, $m_{\PpiTwo[2005]} = \SIaerrSys{1962}{17}{29}{\MeVcc}$ and
$\Gamma_{\PpiTwo[2005]} = \SIaerrSys{371}{16}{120}{\MeVcc}$, is
compatible with either measurement.
In order to study the significance of the \PpiTwo[2005] signal in our
data, we have performed a systematic study, in which we omitted the
\PpiTwo[2005] from the fit model. The minimum \chisq~value found in
this fit is \num{1.07}~times larger than the one of the main
fit.\footnote{Compared to the \num{722} free parameters of the main
fit, this fit has \num{672} free parameters.}
\Cref{fig:no-pi2(2005)_chi2difference} shows the contributions from
the spin-density matrix elements to the \chisq~difference between this
and the main fit. Without the \PpiTwo[2005], the model describes the
$2^{-+}$ intensity distributions and interference terms less well, in
particular for the \wave{2}{-+}{0}{+}{\PfTwo}{S} and
\wave{2}{-+}{0}{+}{\PfTwo}{D} waves.
\Cref{fig:no-pi2(2005)_intensities} shows that the high-mass shoulders
cannot be reproduced well. Omitting the \PpiTwo[2005] component also
shifts some of the resonance parameters. On the one hand, the
\PpiTwo[1880] becomes \SI{20}{\MeVcc} lighter and \SI{100}{\MeVcc}
wider, which would contradict all previous measurements. On
the other hand, the \PpiTwo parameters move closer to the PDG world
average.\footnote{The \PpiTwo becomes \SI{17}{\MeVcc} heavier and
\SI{20}{\MeVcc} narrower. Large changes are also observed for the
\PaOne[1640], which becomes \SI{42}{\MeVcc} lighter and
\SI{82}{\MeVcc} wider, and for the \PpiOne[1600], which becomes
\SI{51}{\MeVcc} narrower.}
\begin{figure}[tbp]
\centering
\includegraphics[width=\linewidthOr{\twoPlotWidth}]{fig26}
\caption{Similar to \cref{fig:DeckMC_chi2difference}, but for the
study in which the \PpiTwo[2005] resonance was omitted from the
fit model.}
\label{fig:no-pi2(2005)_chi2difference}
\end{figure}
\begin{wideFigureOrNot}[tbp]
\centering
\subfloat[][]{%
\includegraphics[width=\threePlotWidth]{fig27a}%
\label{fig:no-pi2(2005)_intensity_2mp_m0_f2_S}%
}%
\hspace*{\threePlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\threePlotWidth]{fig27b}%
\label{fig:no-pi2(2005)_intensity_2mp_m0_f2_S_zoom}%
}%
\hspace*{\threePlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\threePlotWidth]{fig27c}%
\label{fig:no-pi2(2005)_intensity_2mp_f2_D}%
}%
\caption{\tpr-summed intensities of
\subfloatLabel{fig:no-pi2(2005)_intensity_2mp_m0_f2_S}~the
\wave{2}{-+}{0}{+}{\PfTwo}{S} wave and
\subfloatLabel{fig:no-pi2(2005)_intensity_2mp_f2_D}~the
\wave{2}{-+}{0}{+}{\PfTwo}{D} wave with the result of the main fit
(continuous curves) and of the fit in which the \PpiTwo[2005]
resonance was omitted from the fit model (dashed curves). The
model and the wave components are represented as in
\cref{fig:intensity_phases_2mp}. In~\subfloatLabel{fig:no-pi2(2005)_intensity_2mp_m0_f2_S_zoom},
a zoomed view of the high-mass region in
\subfloatLabel{fig:no-pi2(2005)_intensity_2mp_m0_f2_S} is shown.}
\label{fig:no-pi2(2005)_intensities}
\end{wideFigureOrNot}
In addition to \PpiTwo, \PpiTwo[1880], and \PpiTwo[2005], the PDG
lists the \PpiTwo[2100] as \enquote{omitted from summary
table}~\cite{Patrignani:2016xqp}. The PDG entry is based on two
observations reported by the ACCMOR~\cite{daum:1980ay} and the VES
experiments~\cite{amelin:1995gu} in the diffractively produced
\threePi final state. The \PpiTwo[2100] thus requires further
experimental confirmation. It is close in mass to the \PpiTwo[2005],
but has a much larger width of \SI{625(50)}{\MeVcc}. In the ACCMOR
analysis, the intensity distributions of the
\wave{2}{-+}{0}{+}{\pipiS}{D}, \wave{2}{-+}{0}{+}{\Prho}{P},
\wave{2}{-+}{0}{+}{\PfTwo}{S}, and \wave{2}{-+}{0}{+}{\PfTwo}{D} waves
were fit together with selected relative phases of these waves using a
model with two $2^{-+}$ resonances, \PpiTwo and \PpiTwo[2100], which
was based on the $K$-matrix approach~\cite{daum:1980ay}. In this
model, the dominant peak at \SI{1.8}{\GeVcc} in the $\PfTwo \pi D$
wave is explained as a constructive interference of the two resonance
components. The VES analysis is similar and confirms this
finding~\cite{amelin:1995gu}. It is worth noting that in both
analyses rather high \PpiTwo masses $\geq \SI{1710}{\MeVcc}$ are
found. Our data exhibit features similar to those of the ACCMOR and VES data.
In particular, considering the uncertainties it is likely that our
\PpiTwo[2005] signal corresponds to the \PpiTwo[2100] measurements
discussed above, although the width estimates differ significantly.
The main difference in our analysis is that we include different waves
in the resonance-model fit. We did not include the $\Prho \pi P$ wave
because it exhibits a sizable and not well understood low-mass
enhancement below the \PpiTwo region [see Fig.~57(e) in
\refCite{Adolph:2015tqa_suppl}]. The $\pipiS \pi D$ and
$\PfZero[980] \pi D$ waves have complicated intensity distributions
[see Figs.~25(c) and 25(d) in \refCite{Adolph:2015tqa}]. At low
\mThreePi, both partial-wave intensities are sensitive to the wave set
that is used in the mass-independent analysis. They also may be
affected by the particular parametrizations chosen for the \pipiS and
\PfZero[980] isobar amplitudes. A less model-dependent analysis, in
which the amplitude of the \twoPi $S$-wave subsystem was extracted
from the data instead of using a parametrization with fixed functional
form, shows a clear correlation of a peak in the region of
$\mThreePi = \SI{1.9}{\GeVcc}$, which is presumably the \PpiTwo[1880],
with a peak in the \PfZero[980] region in the \twoPi mass spectrum
[see Figs.~40 and 43(c) in \refCite{Adolph:2015tqa}]. However, shape,
position, and strength of the observed peak structure in the \PpiTwo
region depend strongly on \tpr, which hints at large contributions
from nonresonant components.
As discussed above, the four $2^{-+}$ waves selected for the
resonance-model fit are not well described if we include only two
$2^{-+}$ Breit-Wigner resonances in the model. In particular, we do
not observe solutions similar to those found by ACCMOR or VES with a
second resonance in the \SI{2.1}{\GeVcc} region. It is therefore
unlikely that the \PpiTwo[1880] signal is caused by a constructive
interference of the other two resonances. It is also unlikely that
the \PpiTwo[1880] signal arises from an interference with a
nonresonant component since the \tpr spectrum of the \PpiTwo[1880]
exhibits a resonancelike behavior [see
\cref{fig:tprim_2mp_m0_f2_S_pi2_1880,fig:tprim_2mp_m1_f2_S_pi2_1880}].
The PDG lists another potential higher excited \PpiTwo* state, the
\PpiTwo[2285], as a \enquote{further state}~\cite{Patrignani:2016xqp}.
It was reported with the parameters
$m_{\PpiTwo[2285]} = \SIerrs{2285}{20}{25}{\MeVcc}$ and
$\Gamma_{\PpiTwo[2285]} = \SIerrs{250}{20}{25}{\MeVcc}$ by the authors
of \refCite{Anisovich:2010nh} in an analysis of the
$\eta \pi^0 \pi^0 \pi^0$ final state produced in \ppbar annihilations
in flight, which was based on data from the Crystal Barrel experiment.
Although we do not see clear resonance signals of heavy \PpiTwo*
states in the mass range from \SIrange{2200}{2500}{\MeVcc} in the
analyzed waves, we cannot exclude that the observed deviations of the
model from the data at high masses, in particular in the
\wave{2}{-+}{0}{+}{\Prho}{F} wave, are due to additional excited
\PpiTwo* states.
The mass of the \PpiTwo agrees well with the quark-model prediction
for the \PpiTwo* ground state by Godfrey and
Isgur~\cite{Godfrey:1985xj}. The mass of the \PpiTwo[2005] agrees
with the prediction for the first radial excitation of the \PpiTwo*.
However, the \PpiTwo[1880] does not fit into this picture. The
interpretations of the \PpiTwo[1880] are manifold. It has been
interpreted as a supernumerary exotic meson with conventional quantum
numbers. It has in particular been considered as a good candidate for
a hybrid meson by the authors of
\refsCite{anisovich:2001hj,klempt:2007cp}. In contrast, Li and Zhou
argue in \refCite{Li:2008xy} that the observed decay width of
approximately \SI{235}{\MeVcc} is too large for a pure hybrid state,
for which a smaller width of about \SI{100}{\MeVcc} would be
expected. In addition, the dominant decay into the $\PfTwo \pi D$
wave and the small coupling to the $\PfTwo \pi S$ wave that we observe
in our data contradict the hybrid-meson interpretation based on model
calculations for the decay of such objects performed by Page, Swanson,
and Szczepaniak in \refCite{page:1998gz}, which predict the opposite
behavior for a hybrid resonance. Li and Zhou argue that the
\PpiTwo[1880] decay pattern is more similar to model predictions for
the first radial excitation of the conventional
\PpiTwo*~\cite{Li:2008xy}. However, they do not exclude a possible
small admixture of a hybrid state.
In an alternative approach, Dudek and Szczepaniak have proposed in
\refCite{Dudek:2006ud} that the \SI{1.65}{\GeVcc} peak in the
$\PfTwo \pi S$ wave and the \SI{1.8}{\GeVcc} peak in the
$\PfTwo \pi D$ wave are caused by the same \PpiTwo* ground-state
resonance. The seemingly different structures are caused by
interference of this resonance with a type of nonresonant background
originally proposed by Deck~\cite{deck:1964hm}, which is much stronger
in the $\PfTwo \pi S$ wave. In order to explain the phase motions,
this model requires a second \PpiTwo* resonance at a higher mass as in
the ACCMOR and VES analyses discussed above. Their hypothesis may be
tested by including the \tpr dependence and the population of the $M$
substates of the Deck amplitude.
%
\subsection{$\JPC = 1^{++}$ resonances}
\label{sec:onePP}
\subsubsection{Results on $1^{++}$ resonances}
\label{sec:onePP_results}
The resonance-model fit includes three waves with $\JPC = 1^{++}$.
The \wave{1}{++}{0}{+}{\Prho}{S} wave is the dominant wave in the
88-wave set with a relative intensity of \SI{32.7}{\percent}. The
\wave{1}{++}{0}{+}{\PfZero[980]}{P} and \wave{1}{++}{0}{+}{\PfTwo}{P}
waves are about 2~orders of magnitude less intense with relative
intensities of \SI{0.4}{\percent} and \SI{0.3}{\percent},
respectively. The intensity distributions of the three $1^{++}$
waves, as shown in
\cref{fig:intensity_1pp_rho_tbin1_log,fig:intensity_1pp_f0_tbin1,fig:intensity_1pp_f2_tbin1}
for the lowest \tpr bin and in
\cref{fig:intensity_1pp_rho_tbin11_log,fig:intensity_1pp_f0_tbin11,fig:intensity_1pp_f2_tbin11}
for the highest \tpr bin, are surprisingly different.
\ifMultiColumnLayout{\begin{figure*}[t]}{\begin{figure}[p]}
\centering
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig28a}%
\label{fig:intensity_1pp_rho_tbin1_log}%
}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig28b}%
\label{fig:phase_1pp_rho_1pp_f0_tbin1}%
}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig28c}%
\label{fig:phase_1pp_rho_1pp_f2_tbin1}%
}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig28d}%
\label{fig:phase_1pp_rho_4pp_rho_tbin1}%
}%
\\
\hspace*{\fourPlotWidth}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig28e}%
\label{fig:intensity_1pp_f0_tbin1}%
}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig28f}%
\label{fig:phase_1pp_f0_1pp_f2_tbin1}%
}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig28g}%
\label{fig:phase_1pp_f0_4pp_rho_tbin1}%
}%
\\
\hspace*{\fourPlotWidth}%
\hspace*{\fourPlotSpacing}%
\hspace*{\fourPlotWidth}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig28h}%
\label{fig:intensity_1pp_f2_tbin1}%
}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig28i}%
\label{fig:phase_1pp_f2_4pp_rho_tbin1}%
}%
\caption{Amplitudes of the three $\JPC = 1^{++}$ waves in the lowest
\tpr bin.
\subfloatLabel{fig:intensity_1pp_rho_tbin1_log}~through~\subfloatLabel{fig:phase_1pp_rho_4pp_rho_tbin1}:
Intensity distribution and relative phases for the
\wave{1}{++}{0}{+}{\Prho}{S} wave. Note that the intensity
distribution in~\subfloatLabel{fig:intensity_1pp_rho_tbin1_log} is
shown in logarithmic scale.
\subfloatLabel{fig:phase_1pp_rho_4pp_rho_tbin1} corresponds to
\cref{fig:phase_4pp_rho_1pp_rho_tbin1}. \subfloatLabel{fig:intensity_1pp_f0_tbin1}~through~\subfloatLabel{fig:phase_1pp_f0_4pp_rho_tbin1}:
Intensity distribution and relative phases for the
\wave{1}{++}{0}{+}{\PfZero[980]}{P} wave.
\subfloatLabel{fig:intensity_1pp_f2_tbin1}~and~\subfloatLabel{fig:phase_1pp_f2_4pp_rho_tbin1}:
Intensity distribution and relative phase for the
\wave{1}{++}{0}{+}{\PfTwo}{P} wave. The model and the wave
components are represented as in \cref{fig:intensity_phases_0mp},
except that in
\subfloatLabel{fig:intensity_1pp_rho_tbin1_log}~and~\subfloatLabel{fig:intensity_1pp_f2_tbin1}
the blue curves represent the \PaOne and the \PaOne[1640], whereas
in~\subfloatLabel{fig:intensity_1pp_f0_tbin1} the blue curve
represents the \PaOne[1420].}
\label{fig:intensity_phases_1pp_tbin1}
\ifMultiColumnLayout{\end{figure*}}{\end{figure}}
\ifMultiColumnLayout{\begin{figure*}[t]}{\begin{figure}[p]}
\centering
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig29a}%
\label{fig:intensity_1pp_rho_tbin11_log}%
}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig29b}%
\label{fig:phase_1pp_rho_1pp_f0_tbin11}%
}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig29c}%
\label{fig:phase_1pp_rho_1pp_f2_tbin11}%
}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig29d}%
\label{fig:phase_1pp_rho_4pp_rho_tbin11}%
}%
\\
\hspace*{\fourPlotWidth}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig29e}%
\label{fig:intensity_1pp_f0_tbin11}%
}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig29f}%
\label{fig:phase_1pp_f0_1pp_f2_tbin11}%
}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig29g}%
\label{fig:phase_1pp_f0_4pp_rho_tbin11}%
}%
\\
\hspace*{\fourPlotWidth}%
\hspace*{\fourPlotSpacing}%
\hspace*{\fourPlotWidth}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig29h}%
\label{fig:intensity_1pp_f2_tbin11}%
}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig29i}%
\label{fig:phase_1pp_f2_4pp_rho_tbin11}%
}%
\caption{Similar to \cref{fig:intensity_phases_1pp_tbin1} but for
the highest \tpr bin.}
\label{fig:intensity_phases_1pp_tbin11}
\ifMultiColumnLayout{\end{figure*}}{\end{figure}}
The \wave{1}{++}{0}{+}{\Prho}{S} intensity exhibits a broad peak
around \SI{1.2}{\GeVcc}, which changes its shape and shifts by about
\SI{140}{\MeVcc} toward higher masses with increasing \tpr (see
\cref{fig:intensity_1pp_rho_pi_S}). This behavior suggests large
contributions from nonresonant components in addition to the expected
\PaOne signal and underlines the importance of a \tpr-resolved
analysis to better disentangle these components.
\ifMultiColumnLayout{\begin{figure*}[t]}{\begin{figure}[p]}
\centering
\subfloat[][]{%
\includegraphics[width=\threePlotSmallWidth]{fig30a}%
\label{fig:intensity_1pp_rho_pi_S_tbin1}%
}%
\hspace*{\threePlotSmallSpacing}%
\subfloat[][]{%
\includegraphics[width=\threePlotSmallWidth]{fig30b}%
}%
\hspace*{\threePlotSmallSpacing}%
\subfloat[][]{%
\includegraphics[width=\threePlotSmallWidth]{fig30c}%
}%
\\
\subfloat[][]{%
\includegraphics[width=\threePlotSmallWidth]{fig30d}%
}%
\hspace*{\threePlotSmallSpacing}%
\subfloat[][]{%
\includegraphics[width=\threePlotSmallWidth]{fig30e}%
}%
\hspace*{\threePlotSmallSpacing}%
\subfloat[][]{%
\includegraphics[width=\threePlotSmallWidth]{fig30f}%
}%
\\
\subfloat[][]{%
\includegraphics[width=\threePlotSmallWidth]{fig30g}%
}%
\hspace*{\threePlotSmallSpacing}%
\subfloat[][]{%
\includegraphics[width=\threePlotSmallWidth]{fig30h}%
}%
\hspace*{\threePlotSmallSpacing}%
\subfloat[][]{%
\includegraphics[width=\threePlotSmallWidth]{fig30i}%
}%
\\
\subfloat[][]{%
\includegraphics[width=\threePlotSmallWidth]{fig30j}%
\label{fig:intensity_1pp_rho_pi_S_tbin10}%
}%
\hspace*{\threePlotSmallSpacing}%
\subfloat[][]{%
\includegraphics[width=\threePlotSmallWidth]{fig30k}%
\label{fig:intensity_1pp_rho_pi_S_tbin11}%
}%
\hspace*{\threePlotSmallSpacing}%
\subfloat[][]{%
\includegraphics[width=\threePlotSmallWidth]{fig30l}%
\label{fig:intensity_1pp_rho_pi_S_tsum}%
}%
\caption{\subfloatLabel{fig:intensity_1pp_rho_pi_S_tbin1}~to~\subfloatLabel{fig:intensity_1pp_rho_pi_S_tbin11}:
Intensity distributions of the \wave{1}{++}{0}{+}{\Prho}{S} wave
in the 11~\tpr bins.
\subfloatLabel{fig:intensity_1pp_rho_pi_S_tsum}~The \tpr-summed
intensity. The model and the wave components are represented as
in \cref{fig:intensity_phases_1pp_tbin1}. The contribution of the
\PaOne[1640] component is so small that it is barely visible in
linear scale.}
\label{fig:intensity_1pp_rho_pi_S}
\ifMultiColumnLayout{\end{figure*}}{\end{figure}}
The \wave{1}{++}{0}{+}{\PfTwo}{P} intensity distribution exhibits a
low-mass enhancement below threshold and a broad peak structure at
about \SI{1.8}{\GeVcc} that disappears in the two highest \tpr bins.
In addition, a weaker enhancement appears around \SI{1.5}{\GeVcc} at
lower \tpr. A portion of the low-mass enhancement might originate
from leakage within the $1^{++}$ sector at the stage of the
partial-wave decomposition.\footnote{This is supported by our finding
that the low-mass enhancement in the intensity distribution of the
\wave{1}{++}{0}{+}{\PfTwo}{P} wave changes significantly if a
reduced set of 53~waves is used for the partial-wave decomposition
(see Sec.~IV~F in \refCite{Adolph:2015tqa}).} This leakage is
presumably induced by Deck-like nonresonant contributions. Monte
Carlo simulations of a model for the Deck amplitude (see
\cref{sec:deck_model}) have shown that at low \tpr, the shapes of the
isobars are distorted, especially that of the \Prho. This might cause
leakage into the $\PfTwo \pi P$ wave, which has an intensity that is
2~orders of magnitude smaller than that of the $\Prho \pi S$ wave.
The most peculiar intensity distribution is observed for the
\wave{1}{++}{0}{+}{\PfZero[980]}{P} wave. It has a dominant narrow
peak at approximately \SI{1.45}{\GeVcc} that disappears in the highest
\tpr bin. In this mass region, large and rapid phase motions of the
$\PfZero \pi P$ wave are observed relative to the other two $1^{++}$
waves in all \tpr bins [see
\cref{fig:phase_1pp_rho_1pp_f0_tbin1,fig:phase_1pp_f0_1pp_f2_tbin1,fig:phase_1pp_rho_1pp_f0_tbin11,fig:phase_1pp_f0_1pp_f2_tbin11}].
This suggests that the $\PfZero \pi P$ wave has a different resonance
content. Similar phase motions are also observed \wrt other waves.
As an example,
\cref{fig:phase_1pp_f0_4pp_rho_tbin1,fig:phase_1pp_f0_4pp_rho_tbin11}
show the phases relative to the \wave{4}{++}{1}{+}{\Prho}{G} wave,
where the latter was discussed in \cref{sec:fourPP_results}.
Also the relative phase between the \wave{1}{++}{0}{+}{\Prho}{S} and
\wave{1}{++}{0}{+}{\PfTwo}{P} waves changes substantially with
\mThreePi [see
\cref{fig:phase_1pp_rho_1pp_f2_tbin1,fig:phase_1pp_rho_1pp_f2_tbin11}],
which suggests that the wave components contribute with different
strengths to these two waves. At high \tpr, this phase becomes
approximately constant in the \PaOne region and the phase motion in
the \SI{1.6}{\GeVcc} region becomes shallower. In general, the
\wave{1}{++}{0}{+}{\Prho}{S} wave shows only slowly changing or
approximately constant phases \wrt other waves in the \PaOne mass
region. As an example,
\cref{fig:phase_0mp_1pp_rho_tbin1,fig:phase_0mp_1pp_rho_tbin10,fig:phase_0mp_1pp_rho_tbin11}
in \cref{sec:zeroMP_results} show the phase \wrt the
\wave{0}{-+}{0}{+}{\PfZero}{S} wave. The dominant feature is a rising
phase in the \SI{1.8}{\GeVcc} region due to the \Ppi[1800]. In a
similar way, the phase \wrt the \wave{4}{++}{1}{+}{\Prho}{G} wave is
dominated by the \PaFour\ [see
\cref{fig:phase_1pp_rho_4pp_rho_tbin1,fig:phase_1pp_rho_4pp_rho_tbin11}].
The \wave{1}{++}{0}{+}{\PfTwo}{P} wave shows phase motions in the
\SI{1.65}{\GeVcc} region, for example \wrt the other two $1^{++}$
waves and the \wave{4}{++}{1}{+}{\Prho}{G} wave [see
\cref{fig:phase_1pp_rho_1pp_f2_tbin1,fig:phase_1pp_f0_1pp_f2_tbin1,fig:phase_1pp_f2_4pp_rho_tbin1,fig:phase_1pp_rho_1pp_f2_tbin11,fig:phase_1pp_f0_1pp_f2_tbin11}].
In the highest \tpr bin, the phase \wrt the
\wave{4}{++}{1}{+}{\Prho}{G} wave becomes constant [see
\cref{fig:phase_1pp_f2_4pp_rho_tbin11}].
We model the three $1^{++}$ waves using three resonance components,
\PaOne, \PaOne[1420], and \PaOne[1640]. The \PaOne and \PaOne[1640]
appear in both the \wave{1}{++}{0}{+}{\Prho}{S} and
\wave{1}{++}{0}{+}{\PfTwo}{P} waves, whereas the
\wave{1}{++}{0}{+}{\PfZero[980]}{P} wave is described using the
\PaOne[1420] as the only resonance component (see
\cref{tab:method:fitmodel:waveset}). The \PaOne is parametrized by
\cref{eq:BreitWigner,eq:method:bowlerG}, and the \PaOne[1420] and
\PaOne[1640] by \cref{eq:BreitWigner,eq:method:fixedwidth}. For the
nonresonant component in the $\Prho \pi S$ wave we use
\cref{eq:method:nonresterm}, for those in the other two waves
\cref{eq:method:nonrestermsmall}. The $\Prho \pi S$ wave is fit in
the mass range from \SIrange{0.9}{2.3}{\GeVcc} and the $\PfTwo \pi P$
wave from \SIrange{1.4}{2.1}{\GeVcc}. For the $\PfZero[980] \pi P$
wave, a narrower fit range from \SIrange{1.3}{1.6}{\GeVcc} was
chosen.\footnote{Therefore, this wave has no overlap with the fit
range of the \wave{2}{-+}{0}{+}{\PfTwo}{D} wave, which starts only
at \SI{1.6}{\GeVcc}.}
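For orientation, the Breit-Wigner amplitudes referenced above share
the generic relativistic form (a schematic sketch; the exact
definitions are those in
\cref{eq:BreitWigner,eq:method:bowlerG,eq:method:fixedwidth})
\begin{equation*}
  \mathcal{D}(m) \;\propto\; \frac{m_0\, \Gamma_0}{m_0^2 - m^2 - i\, m_0\, \Gamma(m)},
\end{equation*}
where $m_0$ and $\Gamma_0$ denote the nominal resonance mass and
width, and the dynamic width $\Gamma(m)$ is mass dependent for the
\PaOne but fixed to $\Gamma_0$ for the \PaOne[1420] and \PaOne[1640].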
The employed model is in fair agreement with the data. In particular
it is able to describe the change of the $\Prho \pi S$ intensity with
\tpr in terms of a \tpr-dependent interference between the \PaOne and
the nonresonant component (see \cref{fig:intensity_1pp_rho_pi_S}).
The relative phase of the coupling amplitudes of the nonresonant
component \wrt the \PaOne changes from approximately \SI{0}{\degree}
at low \tpr to \SI{+100}{\degree} at high \tpr (see
\cref{fig:tprim_phase_1pp_rho} in \cref{sec:production_phases}).
Although the model reproduces the main features of the data, the
extremely small statistical uncertainties of the $\Prho \pi S$ data
points lead to significant disagreement of the model with the data in
the \PaOne region. The intensity distributions of the $\Prho \pi S$
wave and the real and imaginary parts of its interference terms in the
11~\tpr bins together contribute about \SI{25}{\percent} to the
total~\chisq of the model [see
\cref{eq:method:fitmethod:chi2}]. The model systematically deviates
from the $\Prho \pi S$ intensity in the low- and high-mass flanks of
the peak and also does not describe the tip of the peak well (see
\cref{fig:intensity_1pp_rho_pi_S_zoom}). Some of the discontinuities
in this mass region might be induced by the thresholds applied to some
of the 88~waves used in the partial-wave decomposition (see Table~IX
in Appendix~A of \refCite{Adolph:2015tqa}). The deviations of the
model from the $\Prho \pi S$ intensity increase with \tpr. In the two
highest \tpr bins, the peak becomes significantly narrower, which the
model is not able to reproduce [see
\cref{fig:intensity_1pp_rho_pi_S_tbin10,fig:intensity_1pp_rho_pi_S_tbin11}].
The model also does not reproduce smaller details in the high-mass
region. The \PaOne and the nonresonant component contribute with
similar intensities to the $\Prho \pi S$ wave and interfere
constructively in the \PaOne region. In the low-\tpr region, the two
components interfere destructively at higher masses. The contribution
of the \PaOne[1640] component to the $\Prho \pi S$ wave is
approximately 2~orders of magnitude smaller than that of the \PaOne.
It accounts for the small shoulder at \SI{1.8}{\GeVcc}.
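Schematically, each fitted data point (intensity or real or imaginary
part of an interference term) enters the total~\chisq mentioned above
as a residual of the familiar form
\begin{equation*}
  \chi^2 \;\sim\; \sum_{\text{bins}} \left[ \frac{\text{data} - \text{model}}{\sigma_{\text{data}}} \right]^2 .
\end{equation*}
This simplified sketch ignores correlations; the complete definition
in terms of the spin-density matrix elements and their covariances is
given in \cref{eq:method:fitmethod:chi2}.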
\begin{figure}[tbp]
\centering
\includegraphics[width=\twoPlotWidth]{fig31}%
\caption{Zoomed view of the \tpr-summed \wave{1}{++}{0}{+}{\Prho}{S}
intensity distribution in \cref{fig:intensity_1pp_rho_pi_S_tsum}.}
\label{fig:intensity_1pp_rho_pi_S_zoom}
\end{figure}
The \PaOne[1640] parameters are mainly determined by the
$\PfTwo \pi P$ wave. The model describes the low-mass enhancement of
the $\PfTwo \pi P$ intensity by a dominant nonresonant component that
is sharply peaked in the \PaOne region and a comparatively small
\PaOne component. The high-mass region of the $\PfTwo \pi P$
intensity is dominated by a peak at about \SI{1.8}{\GeVcc} that is
described well as the constructive interference of the \PaOne and
\PaOne[1640] components. The peak disappears toward
$\tpr = \SI{1.0}{\GeVcsq}$ and so do the resonance components. The
extrapolations of the model below and above the fit range undershoot
the $\PfTwo \pi P$ intensity at low and high \mThreePi [see
\cref{fig:intensity_1pp_f2_tbin1,fig:intensity_1pp_f2_tbin11}].
The intensity distribution of the $\PfZero[980] \pi P$ wave is
peculiar in that it shows a peak slightly above the \PaOne but
significantly narrower. The peak is well described by the third
$1^{++}$ resonance in our model, the \PaOne[1420]. The \PaOne[1420]
interferes destructively with a smaller nonresonant component that
peaks at about \SI{1.3}{\GeVcc}. The model is not able to describe
the high-mass tail, which grows with increasing \tpr. This is why the
fit range was limited to below \SI{1.6}{\GeVcc}.
Within the fit ranges, the model describes the relative phases of the
$1^{++}$ waves better than the intensity distributions discussed
above. In particular the rapid phase motion of the
\wave{1}{++}{0}{+}{\PfZero[980]}{P} \wrt other waves is well
reproduced. This is also true for the phase motions of the
\wave{1}{++}{0}{+}{\PfTwo}{P} wave in the \SI{1.6}{\GeVcc} region,
which are caused by the \PaOne[1640]. A significant \PaOne[1640]
component in this wave is also consistent with the phase relative to
the \wave{2}{++}{1}{+}{\PfTwo}{P} wave (see \cref{fig:phases_1pp_f2}).
The \PaTwo causes a decreasing phase in the \SI{1.3}{\GeVcc} region.
At higher masses, the relative phase varies only slightly due to a
compensation of the phase motions of \PaTwo[1700] and \PaOne[1640].
The phase of the \wave{1}{++}{0}{+}{\Prho}{S} wave shows a completely
different behavior [see
\cref{fig:phase_2pp_f2_1pp_rho_tbin1,fig:phase_2pp_f2_1pp_rho_tbin11}].
In addition to the rapid phase motion caused by the \PaTwo, also the
\PaTwo[1700] creates a clear phase motion that is not canceled by the
\PaOne[1640]. Hence neither \PaOne nor \PaOne[1640] causes strong
phase motions of the \wave{1}{++}{0}{+}{\Prho}{S} wave. This is also
true for the phases of this wave \wrt other waves [see \eg
\cref{fig:phase_1pp_rho_1pp_f0_tbin1,fig:phase_1pp_rho_1pp_f2_tbin1,fig:phase_1pp_rho_4pp_rho_tbin1,fig:phase_1pp_rho_1pp_f0_tbin11,fig:phase_1pp_rho_1pp_f2_tbin11,fig:phase_1pp_rho_4pp_rho_tbin11}].
The behavior of the phases is consistent with the large nonresonant
component over the full mass range and the weak signal of the
\PaOne[1640] compared to the \PaOne ground state in the $\Prho \pi S$
wave. For many phases, the extrapolations of the model below and
above the fit range follow approximately the data. Deviations appear
in particular at low \tpr. For the phases of the
\wave{1}{++}{0}{+}{\Prho}{S} wave, the model extrapolations deviate
from the data in the region above \SI{2.3}{\GeVcc} [see \eg
\cref{fig:phase_1pp_rho_1pp_f2_tbin1,fig:phase_1pp_rho_4pp_rho_tbin1}].
For the phases of the \wave{1}{++}{0}{+}{\PfTwo}{P} wave, the model
deviates typically at low masses [see \eg
\cref{fig:phase_1pp_rho_1pp_f2_tbin1,fig:phase_1pp_f2_4pp_rho_tbin1}]
where also the intensity distribution is not well reproduced.
\begin{figure}[tbp]
\centering
\subfloat[][]{%
\includegraphics[width=\twoPlotWidth]{fig32a}%
\label{fig:phase_1pp_f2_2pp_f2_tbin1}%
}%
\newLineOrHspace{\twoPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\twoPlotWidth]{fig32b}%
\label{fig:phase_1pp_f2_2pp_f2_tbin11}%
}%
\caption{Phase of the \wave{1}{++}{0}{+}{\PfTwo}{P} wave relative to
the \wave{2}{++}{1}{+}{\PfTwo}{P} wave,
\subfloatLabel{fig:phase_1pp_f2_2pp_f2_tbin1}~for the lowest \tpr
bin and \subfloatLabel{fig:phase_1pp_f2_2pp_f2_tbin11}~for the
highest \tpr bin. The model is represented as in
\cref{fig:intensity_phases_1pp_tbin1}.}
\label{fig:phases_1pp_f2}
\end{figure}
\Cref{fig:tprim_1pp} shows the \tpr spectra of the $1^{++}$ wave
components together with the results of fits using
\cref{eq:slope-parametrization}. The \tpr dependence of the
amplitudes of \PaOne and \PaOne[1640] in the
\wave{1}{++}{0}{+}{\Prho}{S} and \wave{1}{++}{0}{+}{\PfTwo}{P} waves
is constrained via \cref{eq:method:branchingdefinition}. The \tpr
dependence of the \PaOne[1420] amplitude in the
\wave{1}{++}{0}{+}{\PfZero[980]}{P} wave is independently determined
by the fit. The simple exponential model in
\cref{eq:slope-parametrization} is in fair agreement with the \tpr
spectra of all $1^{++}$ wave components. The extracted
slope-parameter values for the \PaOne are
\SIaerrSys{11.8}{0.9}{4.2}{\perGeVcsq} in the $\Prho \pi S$ wave and
\SIerrSys{11}{4}{\perGeVcsq} in the $\PfTwo \pi P$ wave. The \PaOne
has the steepest \tpr spectrum of all resonances in the model (see
\cref{tab:slopes}), although the uncertainty toward smaller slope
values is considerable. The \PaOne slope values agree within
uncertainties with the slope values of the nonresonant components in
all three $1^{++}$ waves. This is in contrast to most other waves,
for which we typically observe steeper \tpr spectra for the
nonresonant components. This might be a hint that the model is not
able to completely separate the \PaOne from the nonresonant
components. As expected, the \PaOne[1640] has a shallower \tpr
spectrum with slope-parameter values close to \SI{8}{\perGeVcsq}.
This value is similar to those of other resonances. In particular, it
agrees with the slopes of the \PaTwo[1700], which has similar
resonance parameters.
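The exponential model of \cref{eq:slope-parametrization} corresponds,
schematically, to a \tpr spectrum of the form
\begin{equation*}
  \frac{\mathrm{d}N_k}{\mathrm{d}\tpr} \;\propto\; e^{-b_k\, \tpr}
\end{equation*}
for each wave component~$k$, so that, for example, the fitted \PaOne
slope of $b \approx \SI{11.8}{\perGeVcsq}$ in the $\Prho \pi S$ wave
implies an intensity falling by roughly a factor of~3 over
$\Delta\tpr = \SI{0.1}{\GeVcsq}$.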
\begin{wideFigureOrNot}[tbp]
\centering
\subfloat[][]{%
\includegraphics[width=\threePlotWidth]{fig33a}%
}%
\hspace*{\threePlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\threePlotWidth]{fig33b}%
}%
\hspace*{\threePlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\threePlotWidth]{fig33c}%
}%
\\
\subfloat[][]{%
\includegraphics[width=\threePlotWidth]{fig33d}%
}%
\hspace*{\threePlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\threePlotWidth]{fig33e}%
}%
\hspace*{\threePlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\threePlotWidth]{fig33f}%
}%
\caption{Similar to \cref{fig:method:tp:examplespectrum}, but
showing the \tpr spectra of some of the components in the three
$1^{++}$ waves as given by \cref{eq:tprim-dependence}. The red
curves and horizontal lines represent fits using
\cref{eq:slope-parametrization}.}
\label{fig:tprim_1pp}
\end{wideFigureOrNot}
The \tpr spectrum of the \PaOne[1420] in the $\PfZero[980] \pi P$ wave
is consistent with the resonance interpretation of this signal. The
\PaOne[1420] slope parameter has a value of
\SIaerrSys{9.5}{0.6}{1.0}{\perGeVcsq}, which confirms the tendency
that slopes decrease with increasing mass.
If none of the coupling amplitudes of the resonance components is
constrained via \cref{eq:method:branchingdefinition} [\StudyT; see
\cref{sec:systematics}], the model has more freedom and can better
describe the intensity distribution of the $\Prho \pi S$ wave at high
\tpr [see \cref{fig:intensity_1pp_rho_tbin11_noCoupling}]. The \PaOne
resonance parameters change only slightly. However, the extracted
\PaOne slope parameters become inconsistent: \SI{9.0}{\perGeVcsq} in
the $\Prho \pi S$ and \SI{15}{\perGeVcsq} in the $\PfTwo \pi P$ wave.
The slope of the \PaOne[1640] increases to \SI{14}{\perGeVcsq} in the
$\Prho \pi S$ wave but remains practically unchanged in the
$\PfTwo \pi P$ wave. This confirms that the \PaOne[1640] resonance is
well determined by the $\PfTwo \pi P$ wave. The results of \StudyT
also indicate that without the constraint of
\cref{eq:method:branchingdefinition} the relative intensities of the
two \PaOne* states and the nonresonant components are not well
constrained by the data.
\begin{figure}[tbp]
\centering
\subfloat[][]{%
\includegraphics[width=\threePlotWidth]{fig34a}%
\label{fig:intensity_1pp_rho_noCoupling}%
}%
\newLineOrHspace{\threePlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\threePlotWidth]{fig34b}%
\label{fig:intensity_1pp_rho_tbin11_noCoupling}%
}%
\newLineOrHspace{\threePlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\threePlotWidth]{fig34c}%
\label{fig:intensity_1pp_f2_noCoupling}%
}%
\caption{\subfloatLabel{fig:intensity_1pp_rho_noCoupling}~\tpr-summed
intensity of the \wave{1}{++}{0}{+}{\Prho}{S} wave.
\subfloatLabel{fig:intensity_1pp_rho_tbin11_noCoupling}~Intensity
of this wave in the highest \tpr bin.
\subfloatLabel{fig:intensity_1pp_f2_noCoupling}~\tpr-summed
intensity of the \wave{1}{++}{0}{+}{\PfTwo}{P} wave. The result
of the main fit is represented by the continuous curves. The fit,
in which none of the coupling amplitudes of the resonance
components was constrained via
\cref{eq:method:branchingdefinition} [\StudyT; see
\cref{sec:systematics}], is represented by the dashed curves. The
model and the wave components are represented as in
\cref{fig:intensity_phases_1pp_tbin1}.}
\label{fig:intensities_1pp_noCoupling}
\end{figure}
For the \PaOne, we extract the resonance parameters
$m_{\PaOne} = \SIaerrSys{1299}{12}{28}{\MeVcc}$ and
$\Gamma_{\PaOne} = \SIerrSys{380}{80}{\MeVcc}$. The extracted
resonance parameters for the \PaOne[1640] are
$m_{\PaOne[1640]} = \SIaerrSys{1700}{35}{130}{\MeVcc}$ and
$\Gamma_{\PaOne[1640]} = \SIaerrSys{510}{170}{90}{\MeVcc}$. Due to
the dominance of the \PaOne signal, the parameters of the \PaOne[1640]
are correlated with those of the \PaOne. The fit model does not
describe well the $\Prho \pi S$ and $\PfTwo \pi P$ intensities in some
mass regions. This leads to a bimodal behavior of the fit, with a
second solution that has a narrower \PaOne and a wider and heavier
\PaOne[1640]. In the main fit, this solution has a larger~\chisq, but
in some of the systematic studies (see \cref{sec:systematics}), the
solution with the narrow \PaOne is preferred.
The parameters of \PaOne and \PaOne[1640] depend strongly on the
interference of the $1^{++}$ and $2^{++}$ waves and therefore on the
set of $2^{++}$ waves included in the fit.
We also observe a large dependence of the parameters of \PaOne and
\PaOne[1640] on the number of background events in the selected data
sample.
Studies~\studyS and~\studyR with alternative \chisq~formulations (see
\cref{sec:alt_chi_2}) indicate that the model deviates more from the
measured intensity distributions than from the phases of the $1^{++}$
waves.
The results from the above-mentioned systematic studies are discussed
in more detail in \cref{sec:syst_uncert_onePP}.
Since the \wave{1}{++}{0}{+}{\Prho}{S} wave has a large nonresonant
component, the fit result depends on the choice of the parametrization
used for the nonresonant component. Also the strongly peaked shape of
the nonresonant component at about \SI{1.5}{\GeVcc} in the
\wave{1}{++}{0}{+}{\PfTwo}{P} wave seems rather implausible. We
therefore studied the dependence of the fit result on the
parametrization used for the nonresonant component. In \StudyO, we
replace the parametrization of the nonresonant amplitude by the square
root of the intensity distribution of the partial-wave decomposition
of Deck Monte Carlo data that were generated according to the model
described in \cref{sec:deck_model}. This model describes the measured
\wave{1}{++}{0}{+}{\Prho}{S} amplitude well [see
\cref{fig:DeckMC_chi2difference,fig:intensity_1pp_rho_DeckMC,fig:intensity_1pp_rho_tbin11_DeckMC}].
The $\Prho \pi S$ intensity distribution in the high-\tpr region is
described even better than in the main fit. The shape of the
nonresonant component from the Deck model in the $\Prho \pi S$ wave is
qualitatively similar to that obtained in the main fit. The \PaOne
parameters change only slightly, but the yield of the \PaOne component
becomes larger and that of the nonresonant component smaller, in
particular at high \tpr. The model is also in fair agreement with the
\wave{1}{++}{0}{+}{\PfTwo}{P} intensity distribution [see
\cref{fig:intensity_1pp_f2_DeckMC}], although the shape of the
nonresonant component from the Deck model is drastically different
from that used in the main fit. The \PaOne[1640] width increases by
\SI{126}{\MeVcc} in \StudyO.
\begin{wideFigureOrNot}[tbp]
\centering
\subfloat[][]{%
\includegraphics[width=\twoPlotWidth]{fig35a}%
\label{fig:intensity_1pp_rho_DeckMC}%
}%
\hspace*{\twoPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\twoPlotWidth]{fig35b}%
\label{fig:intensity_1pp_rho_tbin11_DeckMC}%
}%
\\
\subfloat[][]{%
\includegraphics[width=\twoPlotWidth]{fig35c}%
\label{fig:intensity_1pp_f0_DeckMC}%
}%
\hspace*{\twoPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\twoPlotWidth]{fig35d}%
\label{fig:intensity_1pp_f2_DeckMC}%
}%
\caption{\subfloatLabel{fig:intensity_1pp_rho_DeckMC}~\tpr-summed
intensity of the \wave{1}{++}{0}{+}{\Prho}{S} wave.
\subfloatLabel{fig:intensity_1pp_rho_tbin11_DeckMC}~Intensity of
this wave in the highest \tpr bin.
\subfloatLabel{fig:intensity_1pp_f0_DeckMC}~and~\subfloatLabel{fig:intensity_1pp_f2_DeckMC}:
The \tpr-summed intensity of the
\wave{1}{++}{0}{+}{\PfZero[980]}{P} and the
\wave{1}{++}{0}{+}{\PfTwo}{P} wave, respectively. The result of
the main fit is represented by the continuous curves. The fit, in
which the parametrization of the nonresonant amplitude was
replaced by the square root of the intensity distribution of the
partial-wave decomposition of Deck Monte Carlo data [\StudyO; see
\cref{sec:systematics}], is represented by the dashed curves. The
model and the wave components are represented as in
\cref{fig:intensity_phases_1pp_tbin1}.}
\label{fig:intensities_1pp_DeckMC}
\end{wideFigureOrNot}
Compared to the studies discussed above, the \PaOne and \PaOne[1640]
parameters depend only weakly on the particular choice of the \tpr
binning. This was verified in \StudyL, in which the analysis was
performed using only 8~\tpr bins.
From the fit, we extract \PaOne[1420] resonance parameters of
$m_{\PaOne[1420]} = \SIaerrSys{1411}{4}{5}{\MeVcc}$ and
$\Gamma_{\PaOne[1420]} = \SIaerrSys{161}{11}{14}{\MeVcc}$. In spite
of the smallness of the \PaOne[1420] signal, its resonance parameters
are found to be remarkably stable in the systematic studies described
above, which results in small systematic uncertainties\footnote{We
excluded \StudyO in the determination of the systematic uncertainty
of the \PaOne[1420] parameters, because the shape of the intensity
distribution of the Deck model in the
\wave{1}{++}{0}{+}{\PfZero[980]}{P} wave contradicts the data [see
\cref{fig:intensity_1pp_f0_DeckMC}].} (see
\cref{sec:syst_uncert_onePP} for details). This result supersedes our
previous measurement of the \PaOne[1420] parameters reported in
\refCite{Adolph:2015pws}, which was obtained using the same data set
and the same analysis technique but with only three waves included in
the resonance-model fit.
\subsubsection{Discussion of results on $1^{++}$ resonances}
\label{sec:onePP_discussion}
We observe three $\JPC = 1^{++}$ resonances in our analysis. The
\PaOne appears in the \wave{1}{++}{0}{+}{\Prho}{S} wave, which is the
dominant wave, together with a large contribution of the
nonresonant component. The contribution of the \PaOne to the
\wave{1}{++}{0}{+}{\PfTwo}{P} wave is not well determined, since the
model does not describe well the data in the region below
\SI{1.5}{\GeVcc} because of the apparent leakage as pointed out in
\cref{sec:onePP_results} above. The \PaOne[1640] appears clearly as a
peak in the $\PfTwo \pi P$ wave with associated phase motion but has
only a small relative contribution to the $\Prho \pi S$ wave. In
general, the description of the $\Prho \pi S$ and $\PfTwo \pi P$
intensities appears to be difficult. The disagreement of the model
with the data induces large systematic uncertainties. The
\PaOne[1420] is observed as a clear peak in the
\wave{1}{++}{0}{+}{\PfZero[980]}{P} wave with associated phase motion
(see
\cref{fig:intensity_phases_1pp_tbin1,fig:intensity_phases_1pp_tbin11}).
There is no clear signature for the presence of the \PaOne[1420] in
the other two $1^{++}$ waves.
In order to study the significance of the \PaOne[1420] resonance we
have removed it from the fit model, so that the
\wave{1}{++}{0}{+}{\PfZero[980]}{P} wave is described by the
nonresonant component only. This fit has a minimum \chisq~value that
is \num{1.44} times larger than that of the main
fit.\footnote{Compared to the \num{722} free parameters of the main
fit, this fit has \num{698} free parameters.} Without the
\PaOne[1420], the model is not able to describe the
$\PfZero[980] \pi P$ intensity and relative phases (red dashed curve
in \cref{fig:intensity_phase_1pp_f0_no_a1_1420}).
\begin{figure}[tbp]
\centering
\subfloat[][]{%
\includegraphics[width=\twoPlotWidth]{fig36a}%
\label{fig:intensity_1pp_f0_no_a1_1420}%
}%
\newLineOrHspace{\twoPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\twoPlotWidth]{fig36b}%
\label{fig:phase_1pp_f0_1pp_rho_tbin1_no_a1_1420}%
}%
\caption{\subfloatLabel{fig:intensity_1pp_f0_no_a1_1420}~\tpr-summed
intensity of the \wave{1}{++}{0}{+}{\PfZero[980]}{P} wave and
\subfloatLabel{fig:phase_1pp_f0_1pp_rho_tbin1_no_a1_1420}~phase of
this wave \wrt the \wave{1}{++}{0}{+}{\Prho}{S} wave in the lowest
\tpr bin. The result of the main fit is represented by the
continuous curves. The fit, in which the \PaOne[1420] component
was removed from the model, is represented by the dashed curves.
These curves correspond to the nonresonant component. The model
and the wave components are represented as in
\cref{fig:intensity_phases_1pp_tbin1}.}
\label{fig:intensity_phase_1pp_f0_no_a1_1420}
\end{figure}
In order to check if the peak in the $\PfZero[980] \pi P$ wave could
be a threshold effect of the \PaOne, we performed a fit, in which the
\PaOne[1420] component is replaced by the \PaOne component, so that
the latter appears in all three $1^{++}$ waves. The minimum
\chisq~value of this fit is \num{1.09} times larger than that of the
main fit.\footnote{Compared to the \num{722} free parameters of the
main fit, this fit has \num{700} free parameters.}
\Cref{fig:a1(1260)_in_1pp_f0_chi2difference} shows the contributions
from the spin-density matrix elements to the \chisq~difference between
this and the main fit. The model with the \PaOne in the
$\PfZero[980] \pi P$ wave describes the peak in this wave less well
[see \cref{fig:intensity_1pp_f0_a1(1260)_in_1pp_f0}]. The model
requires a larger nonresonant component and a more destructive
interference. While the description of the peak in the $\Prho \pi S$
wave is slightly improved [see
\cref{fig:intensity_1pp_rho_a1(1260)_in_1pp_f0}], the interference
term of this wave with the \wave{2}{++}{1}{+}{\Prho}{D} wave is
described less well [see
\cref{fig:a1(1260)_in_1pp_f0_chi2difference}]. The \PaOne resonance
parameters and the decomposition of the \wave{1}{++}{0}{+}{\Prho}{S}
wave in terms of its components change drastically. The \PaOne
becomes \SI{85}{\MeVcc} heavier and \SI{188}{\MeVcc} narrower so that
its resonance parameters actually become close to those of the
\PaOne[1420] in the main fit [cf.\ continuous and dashed blue curves
in \cref{fig:intensity_1pp_f0_a1(1260)_in_1pp_f0}].\footnote{Also the
parameters of the \PaOne[1640] change. It becomes \SI{85}{\MeVcc}
heavier and \SI{20}{\MeVcc} wider.} The $\Prho \pi S$ intensity is
described nearly completely by the nonresonant component with only a
small contribution from the \PaOne\ [see
\cref{fig:intensity_1pp_rho_a1(1260)_in_1pp_f0}]. This interpretation
of the $\Prho \pi S$ intensity seems implausible and would disagree
with previous results on the \PaOne (see discussion below). We
therefore conclude that the peak in the $\PfZero[980] \pi P$ wave
requires a resonance in our model, which is not the \PaOne.
\begin{figure}[tbp]
\centering
\includegraphics[width=\linewidthOr{\twoPlotWidth}]{fig37}
\caption{Similar to \cref{fig:DeckMC_chi2difference}, but for the
study, in which the \PaOne[1420] resonance in the
\wave{1}{++}{0}{+}{\PfZero[980]}{P} wave was replaced by the
\PaOne.}
\label{fig:a1(1260)_in_1pp_f0_chi2difference}
\end{figure}
\begin{figure}[tbp]
\centering
\subfloat[][]{%
\includegraphics[width=\twoPlotWidth]{fig38a}%
\label{fig:intensity_1pp_rho_a1(1260)_in_1pp_f0}%
}%
\newLineOrHspace{\twoPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\twoPlotWidth]{fig38b}%
\label{fig:intensity_1pp_f0_a1(1260)_in_1pp_f0}%
}%
\caption{\tpr-summed intensities of the \wave{1}{++}{0}{+}{\Prho}{S}
and the \wave{1}{++}{0}{+}{\PfZero[980]}{P} wave. The result of
the main fit is represented by the continuous curves. The fit, in
which the \PaOne[1420] resonance in the
\wave{1}{++}{0}{+}{\PfZero[980]}{P} wave was replaced by the
\PaOne, is represented by the dashed curves. The model and the
wave components are represented as in
\cref{fig:intensity_phases_1pp_tbin1}.}
\label{fig:intensities_1pp_a1(1260)_in_1pp_f0}
\end{figure}
We estimate the strength of a possible \PaOne[1420] component in the
other two $1^{++}$ waves by adding the \PaOne[1420] component to the
amplitudes of these waves. The minimum~\chisq of this fit is
\num{0.96} times that of the main fit.\footnote{Compared
to the \num{722} free parameters of the main fit, this fit has
\num{766} free parameters.} The largest contribution to this
improvement in the description of the data comes from the intensity of
the $\Prho \pi S$ wave (see
\cref{fig:a1(1420)_in_all_1pp_chi2difference}). Adding the
\PaOne[1420] component to this wave improves the description of the
peak in the \PaOne region [see
\cref{fig:intensity_1pp_rho_a1(1420)_in_all_1pp}]. Within the fit
range, the description of the $\PfTwo \pi P$ wave changes only
slightly [see \cref{fig:intensity_1pp_f2_a1(1420)_in_all_1pp}].
However, the extrapolation of the model toward lower masses disagrees
even more strongly with the data than in the main fit. The
description of the $\PfZero[980] \pi P$ wave remains practically
unchanged.\footnote{The \PaOne[1420] parameters change only slightly.
Its mass increases by \SI{4}{\MeVcc} and its width by
\SI{11}{\MeVcc}. In contrast, the parameters of the \PaOne and
\PaOne[1640] change substantially. The \PaOne becomes
\SI{27}{\MeVcc} lighter and \SI{75}{\MeVcc} wider as compared to the
main fit. The width of the \PaOne[1640] decreases by
\SI{93}{\MeVcc}.} The relative contributions of the \PaOne[1420] to
the $\PfTwo \pi P$ and in particular to the $\Prho \pi S$ wave are
small. The coupling amplitudes of the \PaOne[1420] in the three waves
are not constrained by \cref{eq:method:branchingdefinition} and are
therefore freely determined by the fit. The values of the
\PaOne[1420] slope parameters in the three waves differ significantly:
in the $\Prho \pi S$ wave the slope is \SI{6.7}{\perGeVcsq}, in the
$\PfTwo \pi P$ wave it is \SI{17.5}{\perGeVcsq}, and in the
$\PfZero[980] \pi P$ wave it is \SI{9.5}{\perGeVcsq}. The latter
value is identical to the one from the main fit (see
\cref{tab:slopes}). The phase of the \PaOne[1420] coupling amplitude
in the $\Prho \pi S$ and the $\PfTwo \pi P$ waves exhibits a stronger
dependence on \tpr than that in the $\PfZero[980] \pi P$ wave. From
the above, we conclude that we do not see convincing evidence for an
\PaOne[1420] component in the $\Prho \pi S$ or the $\PfTwo \pi P$
wave, although we cannot rule out a small contribution.
\begin{figure}[tbp]
\centering
\includegraphics[width=\linewidthOr{\twoPlotWidth}]{fig39}
\caption{Similar to \cref{fig:DeckMC_chi2difference}, but for the
study, in which the \PaOne[1420] resonance was also included in
the \wave{1}{++}{0}{+}{\Prho}{S} and \wave{1}{++}{0}{+}{\PfTwo}{P}
waves.}
\label{fig:a1(1420)_in_all_1pp_chi2difference}
\end{figure}
\begin{figure}[tbp]
\centering
\subfloat[][]{%
\includegraphics[width=\twoPlotWidth]{fig40a}%
\label{fig:intensity_1pp_rho_a1(1420)_in_all_1pp}%
}%
\newLineOrHspace{\twoPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\twoPlotWidth]{fig40b}%
\label{fig:intensity_1pp_f2_a1(1420)_in_all_1pp}%
}%
\caption{\tpr-summed intensities of
\subfloatLabel{fig:intensity_1pp_rho_a1(1420)_in_all_1pp}~the
\wave{1}{++}{0}{+}{\Prho}{S} wave and
\subfloatLabel{fig:intensity_1pp_f2_a1(1420)_in_all_1pp}~the
\wave{1}{++}{0}{+}{\PfTwo}{P} wave. The result of the main fit is
represented by the continuous curves. The fit, in which the
\PaOne[1420] resonance is also included in the
\wave{1}{++}{0}{+}{\Prho}{S} and \wave{1}{++}{0}{+}{\PfTwo}{P}
waves, is represented by the dashed curves. The model and the
wave components are represented as in
\cref{fig:intensity_phases_1pp_tbin1}.}
\label{fig:intensities_1pp_a1(1420)_in_all_1pp}
\end{figure}
In order to study the significance of the \PaOne[1640] component, we
performed a fit, in which we omitted the \PaOne[1640] resonance from
the fit model. The minimum \chisq~value of this fit is \num{1.13}
times that of the main fit.\footnote{Compared to the
\num{722} free parameters of the main fit, this fit has \num{696}
free parameters.} \Cref{fig:no-a1(1640)_chi2difference} shows the
contributions from the spin-density matrix elements to the
\chisq~difference between this and the main fit. Without the
\PaOne[1640], the model describes in particular the intensity
distributions of the \wave{1}{++}{0}{+}{\Prho}{S} and
\wave{1}{++}{0}{+}{\PfTwo}{P} waves less well (see
\cref{fig:intensities_1pp_no-a1(1640)}). The width of the \PaOne
becomes \SI{47}{\MeVcc} larger. From the above, we conclude that the
\PaOne[1640] component is necessary to describe the data but its
parameters are not well determined.
\begin{figure}[tbp]
\centering
\includegraphics[width=\linewidthOr{\twoPlotWidth}]{fig41}
\caption{Similar to \cref{fig:DeckMC_chi2difference}, but for the
study, in which the \PaOne[1640] resonance was omitted from the
fit model.}
\label{fig:no-a1(1640)_chi2difference}
\end{figure}
\begin{wideFigureOrNot}[tbp]
\centering
\subfloat[][]{%
\includegraphics[width=\threePlotWidth]{fig42a}%
\label{fig:intensity_1pp_rho_no-a1(1640)}%
}%
\hspace*{\threePlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\threePlotWidth]{fig42b}%
\label{fig:intensity_1pp_rho_no-a1(1640)_zoom}%
}%
\hspace*{\threePlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\threePlotWidth]{fig42c}%
\label{fig:intensity_1pp_f2_no-a1(1640)}%
}%
\caption{\tpr-summed intensities of
\subfloatLabel{fig:intensity_1pp_rho_no-a1(1640)}~the
\wave{1}{++}{0}{+}{\Prho}{S} wave and
\subfloatLabel{fig:intensity_1pp_f2_no-a1(1640)}~the
\wave{1}{++}{0}{+}{\PfTwo}{P} wave.
In~\subfloatLabel{fig:intensity_1pp_rho_no-a1(1640)_zoom}, a
zoomed view of~\subfloatLabel{fig:intensity_1pp_rho_no-a1(1640)}
is shown. The result of the main fit is represented by the
continuous curves. The fit, in which the \PaOne[1640] component
was removed from the fit model, is represented by the dashed
curves. The model and the wave components are represented as in
\cref{fig:intensity_phases_1pp_tbin1}.}
\label{fig:intensities_1pp_no-a1(1640)}
\end{wideFigureOrNot}
Although the \PaOne is a well-established resonance that has been
observed in many experiments, its parameters are not well determined.
Depending on the analyzed process and the employed parametrizations,
the values of the \PaOne parameters differ
substantially~\cite{pdg_a1_1260:2006}. The measurements listed by the
PDG cover a wide range of mass values from
\SI{1041(13)}{\MeVcc}~\cite{Gavillet:1977kx} up to
\SIerrs{1331}{10}{3}{\MeVcc}~\cite{Asner:1999kj} and width values from
\SI{230(50)}{\MeVcc}~\cite{Gavillet:1977kx} up to
\SIerrs{814}{36}{13}{\MeVcc}~\cite{Asner:1999kj}. Due to the large
spread of the measured parameter values, the PDG does not perform an
average but provides only an estimate of
$m_{\PaOne} = \SI{1230(40)}{\MeVcc}$ and
$\Gamma_{\PaOne} =
\SIrange{250}{600}{\MeVcc}$~\cite{Patrignani:2016xqp}. Our measured
\PaOne mass of $m_{\PaOne} = \SIaerrSys{1299}{12}{28}{\MeVcc}$ is
larger than the PDG estimate but compatible within our large
uncertainties. Our measured width of
$\Gamma_{\PaOne} = \SIerrSys{380}{80}{\MeVcc}$ has large uncertainties
and is close to the center of the range estimated by the PDG.
Compared to our previous measurement of the \threePi final state
diffractively produced on a solid lead target~\cite{alekseev:2009aa},
our width value agrees well, but our mass value is larger and in
slight disagreement. However, since the lead-target data sample is
approximately 2~orders of magnitude smaller, the analysis in
\refCite{alekseev:2009aa} was performed by integrating over the \tpr
range from \SIrange{0.1}{1.0}{\GeVcsq} and assuming a model for the
\tpr dependence of the partial-wave amplitudes. Considering the
unexpected \tpr dependence of the shape of the
\wave{1}{++}{0}{+}{\Prho}{S} intensity distribution as observed in
\cref{fig:intensity_1pp_rho_pi_S}, this might have been an inadequate
approximation in former analyses and might explain the mass
difference.
As already discussed in \refCite{Adolph:2015pws}, the nature of the
peculiar resonancelike \PaOne[1420] signal, which is listed by the PDG
as \enquote{omitted from summary table}~\cite{Patrignani:2016xqp}, is
still unclear and several interpretations were proposed. In
\refCite{Adolph:2015pws} and in this analysis we have shown that it is
consistent with a Breit-Wigner amplitude. Hence, it could be the
isospin partner of the \PfOne[1420]. Isovector
$[n\, n]\,[\overline{n}\,\overline{n}]$ and
$[n\, s]\,[\overline{n}\,\overline{s}]$ states with $n = u$ or $d$
were predicted in the \SI{1.4}{\GeVcc} mass range in quark-model
calculations that included tetraquark states~\cite{Vijande:2005jd}.
The \PaOne[1420] signal was also described as a two-quark-tetraquark
mixed state~\cite{wang:2014bua} and as a tetraquark with mixed flavor
symmetry~\cite{Chen:2015fwa}. In addition, calculations based on a
soft-wall AdS/QCD approach predict a
$[n\, \overline{s}]\,[s\, \overline{n}]$ tetraquark with a mass of
\SI{1414}{\MeVcc}~\cite{Gutsche:2017oro}. The authors of
\refCite{Gutsche:2017twh} studied the two-body decay rates for the
modes $\PaOne[1420] \to \PfZero[980] \pi$ and
$\PaOne[1420] \to K \PKbar^*(892)$ for four-quark configurations using
the covariant confined quark model. They found that a molecular
configuration is preferred over a compact diquark-antidiquark state.
However, other models were proposed that do not require an additional
resonance. Basdevant and Berger proposed resonant rescattering
corrections in the Deck process as an
explanation~\cite{Basdevant:2015ysa,Basdevant:2015wma}, whereas the
authors of \refCite{Ketzer:2015tqa} suggested an anomalous triangle
singularity in the rescattering diagram for
$\PaOne \to K \PKbar^*(892) \to K \PKbar \pi \to \PfZero[980] \pi$.
The results of the latter calculation were confirmed in
\refCite{Aceti:2016yeb}. Preliminary studies show that the amplitude
for the triangle diagram describes the data as well as the
Breit-Wigner model does. In
the case of a triangle singularity, the production rates of the
\PaOne[1420] would be completely determined by those of the \PaOne.
Therefore, the slope parameters of the two peaks would be equal.
Unfortunately, in our analysis the systematic uncertainties of the
slope parameters are too large to draw any conclusion (see
\cref{tab:slopes}). Hence, more detailed studies are needed to
distinguish between the different models for the \PaOne[1420].
The \PaOne[1640] is listed by the PDG as \enquote{omitted from summary
table} based on four
measurements~\cite{Bellini:1984fz,Baker:1999fc,chung:2002pu,baker:2003jh}.
This state therefore requires further confirmation. The PDG world
averages for the \PaOne[1640] parameters are
$m_{\PaOne[1640]} = \SI{1647(22)}{\MeVcc}$ and
$\Gamma_{\PaOne[1640]} =
\SI{254(27)}{\MeVcc}$~\cite{Patrignani:2016xqp}. Compared to other
waves, the agreement of our model with the
\wave{1}{++}{0}{+}{\Prho}{S} and \wave{1}{++}{0}{+}{\PfTwo}{P}
intensities is worse and thus our measured \PaOne[1640] parameters,
$m_{\PaOne[1640]} = \SIaerrSys{1700}{35}{130}{\MeVcc}$ and
$\Gamma_{\PaOne[1640]} = \SIaerrSys{510}{170}{90}{\MeVcc}$, have large
systematic uncertainties. Our \PaOne[1640] mass value is larger than
the world average but compatible within uncertainties. However, our
width value is significantly larger. As the study with the
\PaOne[1420] component in all three $1^{++}$ waves suggests (see
discussion above), this discrepancy might be due to the disagreement
between model and data in the mass region between \PaOne and
\PaOne[1640] in the $\Prho \pi S$ and $\PfTwo \pi P$ intensities. It
might also be a consequence of not including any higher-lying \PaOne*
states in the fit model.
The PDG~\cite{Patrignani:2016xqp} lists three further \PaOne* states:
\PaOne[1930]~\cite{anisovich:2001pn}, \PaOne[2095]~\cite{kuhn:2004en},
and \PaOne[2270]~\cite{anisovich:2001pn}. Although we do not see
clear resonance signals of heavy \PaOne* states in the mass range from
\SIrange{1900}{2500}{\MeVcc} in the analyzed waves, we cannot exclude
that some of the observed deviations of the model from the data at
high masses are due to additional excited \PaOne* states that we do
not take into account.
%
\subsection{$\JPC = 1^{-+}$ resonances}
\label{sec:oneMP}
\subsubsection{Results on $1^{-+}$ resonances}
\label{sec:oneMP_results}
In addition to waves with ordinary \qqbar quantum numbers, our
analysis also includes the \wave{1}{-+}{1}{+}{\Prho}{P} wave with an
exotic \JPC\ combination. This wave contributes \SI{0.8}{\percent} to
the total intensity. \Cref{fig:intensities_1mp} shows the intensity
distributions for all 11~\tpr bins. The shapes of these distributions
exhibit a surprisingly strong dependence on \tpr. At low \tpr, the
intensity distribution is dominated by a broad structure that extends
from about \SIrange{1.0}{1.7}{\GeVcc} with a maximum at approximately
\SI{1.2}{\GeVcc}. With increasing \tpr, the structure becomes
narrower and the maximum moves to about \SI{1.6}{\GeVcc}. This
behavior suggests large contributions from nonresonant processes in
this wave. In the highest \tpr bin, a dip appears at
\SI{1.25}{\GeVcc} where the intensity nearly vanishes. At low \tpr, a
narrow enhancement appears at \SI{1.1}{\GeVcc} on top of the broad
structure. This enhancement is sensitive to details of the wave set
that is used in the partial-wave decomposition and we therefore
suspect it to be an artifact induced by imperfections in the PWA
model.
\Cref{fig:phases_1mp} shows selected phases of the
\wave{1}{-+}{1}{+}{\Prho}{P} wave \wrt other waves in the lowest and
the highest \tpr bins (top and bottom rows, respectively). At low
\tpr, decreasing phases appear at masses that correspond to resonances
in the other waves.\footnote{The slightly decreasing phase \wrt the
\wave{1}{++}{0}{+}{\Prho}{S} wave around \SI{1.2}{\GeVcc} is caused
by the \PaOne\ [see
\cref{fig:phase_1mp_1pp_rho_tbin1,sec:onePP_results}]. The rapidly
decreasing phase \wrt the \wave{2}{++}{1}{+}{\Prho}{D} wave around
\SI{1.3}{\GeVcc} is caused by the \PaTwo\ [see
\cref{fig:phase_1mp_2pp_m1_rho_tbin1,sec:twoPP_results}]. The
slightly decreasing phase \wrt the \wave{2}{-+}{0}{+}{\PfTwo}{S}
wave around \SI{1.7}{\GeVcc} is caused by the \PpiTwo\ [see
\cref{fig:phase_1mp_2mp_m0_f2_tbin1,sec:twoMP_results}]. The
decreasing phase \wrt the \wave{4}{++}{1}{+}{\Prho}{G} wave around
\SI{1.9}{\GeVcc} is caused by the \PaFour\ [see
\cref{fig:phase_1mp_4pp_rho_tbin1,sec:fourPP_results}].} In
\cref{fig:phase_1mp_1pp_rho_tbin1,fig:phase_1mp_4pp_rho_tbin1}
slightly rising phases are observed in the \SI{1.6}{\GeVcc} region.
The phase \wrt the \wave{2}{++}{1}{+}{\Prho}{D} wave is approximately
constant between \SIlist{1.4;1.6}{\GeVcc}. Its rapid rise at
\SI{1.7}{\GeVcc} [see \cref{fig:phase_1mp_2pp_m1_rho_tbin1}] is
induced by the nearly vanishing intensity of the
\wave{2}{++}{1}{+}{\Prho}{D} wave (see \cref{sec:twoPP_results}).
Compared to the intensity of the $1^{-+}$ wave, its phase motions \wrt
most waves show less dependence on \tpr in the \SI{1.6}{\GeVcc} region
(see bottom row of \cref{fig:phases_1mp}). At high \tpr, rapidly
decreasing phases appear at \SI{1.25}{\GeVcc} because of the nearly
vanishing intensity of the $1^{-+}$ wave.\footnote{This is the same
effect as seen in the \wave{0}{-+}{0}{+}{\PfZero[980]}{S} wave; see
\cref{sec:zeroMP_results,fig:phase_0mp_1pp_rho_tbin11,fig:phase_0mp_2mp_f2_tbin11}.}
It is worth noting that we do not observe any phase motions in the
\SI{1.1}{\GeVcc} region, where the narrow enhancement is observed in
the intensity distribution. This supports the interpretation of this
structure as a model artifact.
\ifMultiColumnLayout{\begin{figure*}[t]}{\begin{figure}[p]}
\centering
\subfloat[][]{%
\includegraphics[width=\threePlotSmallWidth]{fig43a}%
\label{fig:intensity_1mp_tbin1}%
}%
\hspace*{\threePlotSmallSpacing}%
\subfloat[][]{%
\includegraphics[width=\threePlotSmallWidth]{fig43b}%
}%
\hspace*{\threePlotSmallSpacing}%
\subfloat[][]{%
\includegraphics[width=\threePlotSmallWidth]{fig43c}%
}%
\\
\subfloat[][]{%
\includegraphics[width=\threePlotSmallWidth]{fig43d}%
}%
\hspace*{\threePlotSmallSpacing}%
\subfloat[][]{%
\includegraphics[width=\threePlotSmallWidth]{fig43e}%
}%
\hspace*{\threePlotSmallSpacing}%
\subfloat[][]{%
\includegraphics[width=\threePlotSmallWidth]{fig43f}%
}%
\\
\subfloat[][]{%
\includegraphics[width=\threePlotSmallWidth]{fig43g}%
}%
\hspace*{\threePlotSmallSpacing}%
\subfloat[][]{%
\includegraphics[width=\threePlotSmallWidth]{fig43h}%
}%
\hspace*{\threePlotSmallSpacing}%
\subfloat[][]{%
\includegraphics[width=\threePlotSmallWidth]{fig43i}%
}%
\\
\subfloat[][]{%
\includegraphics[width=\threePlotSmallWidth]{fig43j}%
\label{fig:intensity_1mp_tbin10}%
}%
\hspace*{\threePlotSmallSpacing}%
\subfloat[][]{%
\includegraphics[width=\threePlotSmallWidth]{fig43k}%
\label{fig:intensity_1mp_tbin11}%
}%
\hspace*{\threePlotSmallSpacing}%
\subfloat[][]{%
\includegraphics[width=\threePlotSmallWidth]{fig43l}%
\label{fig:intensity_1mp_tsum}%
}%
\caption{\subfloatLabel{fig:intensity_1mp_tbin1}~to~\subfloatLabel{fig:intensity_1mp_tbin11}:
Intensity distribution of the \wave{1}{-+}{1}{+}{\Prho}{P} wave in
the 11~\tpr bins. \subfloatLabel{fig:intensity_1mp_tsum}~The
\tpr-summed intensity. The model and the wave components are
represented as in \cref{fig:intensity_phases_0mp}, except that the
blue curve represents the \PpiOne[1600].}
\label{fig:intensities_1mp}
\ifMultiColumnLayout{\end{figure*}}{\end{figure}}
\begin{wideFigureOrNot}[tbp]
\centering
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig44a}%
\label{fig:phase_1mp_1pp_rho_tbin1}%
}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig44b}%
\label{fig:phase_1mp_2pp_m1_rho_tbin1}%
}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig44c}%
\label{fig:phase_1mp_2mp_m0_f2_tbin1}%
}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig44d}%
\label{fig:phase_1mp_4pp_rho_tbin1}%
}%
\\
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig44e}%
\label{fig:phase_1mp_1pp_rho_tbin11}%
}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig44f}%
\label{fig:phase_1mp_2pp_m1_rho_tbin11}%
}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig44g}%
\label{fig:phase_1mp_2mp_m0_f2_tbin11}%
}%
\hspace*{\fourPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\fourPlotWidth]{fig44h}%
\label{fig:phase_1mp_4pp_rho_tbin11}%
}%
\caption{Phase of the \wave{1}{-+}{1}{+}{\Prho}{P} wave relative to
\subfloatLabel{fig:phase_1mp_1pp_rho_tbin1}~the
\wave{1}{++}{0}{+}{\Prho}{S},
\subfloatLabel{fig:phase_1mp_2pp_m1_rho_tbin1}~the
\wave{2}{++}{1}{+}{\Prho}{D},
\subfloatLabel{fig:phase_1mp_2mp_m0_f2_tbin1}~the
\wave{2}{-+}{0}{+}{\PfTwo}{S}, and
\subfloatLabel{fig:phase_1mp_4pp_rho_tbin1}~the
\wave{4}{++}{1}{+}{\Prho}{G} wave for the lowest \tpr bin.
\subfloatLabel{fig:phase_1mp_1pp_rho_tbin11}~through~\subfloatLabel{fig:phase_1mp_4pp_rho_tbin11}:
The phases for the highest \tpr bin. The model is represented as
in \cref{fig:intensities_1mp}.}
\label{fig:phases_1mp}
\end{wideFigureOrNot}
We describe the \wave{1}{-+}{1}{+}{\Prho}{P} amplitude by a
spin-exotic $\JPC = 1^{-+}$ resonance, the \PpiOne[1600], and a
nonresonant component. The \PpiOne[1600] is parametrized by
\cref{eq:BreitWigner,eq:method:fixedwidth}, the nonresonant component
using \cref{eq:method:nonresterm} (see
\cref{tab:method:fitmodel:waveset}). The $1^{-+}$ wave is fit in the
mass range from \SIrange{0.9}{2.0}{\GeVcc}.
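For orientation, the constant-width Breit-Wigner amplitude used for
the \PpiOne[1600] has the generic schematic form (the exact expression
and normalization are those of
\cref{eq:BreitWigner,eq:method:fixedwidth})
\[
  \mathcal{D}(m) = \frac{m_0\, \Gamma_0}{m_0^2 - m^2 - i\, m_0\, \Gamma_0}\,,
\]
where $m_0$ and $\Gamma_0$ denote the nominal resonance mass and
width, which are free parameters of the fit.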
The model is in fair agreement with the intensity distributions. It
reproduces in particular the strong \tpr dependence of the shape of
the intensity distribution by a \tpr-dependent interference of the
\PpiOne[1600] with the nonresonant component. The latter strongly
changes shape, strength, and phase with \tpr. At low \tpr, the
intensity is dominated by the large nonresonant component, which
interferes constructively with the \PpiOne[1600] at low masses. With
increasing \tpr, the strength of the nonresonant component decreases
quickly so that the \PpiOne[1600] becomes the dominant component. In
the two highest \tpr bins, the nonresonant component is small or even
vanishes in the \SI{1.6}{\GeVcc} region and the broad peak in the data
is nearly entirely described by the \PpiOne[1600]. The intensity dip
at \SI{1.25}{\GeVcc} in the highest \tpr bin is reproduced by a
destructive interference of the \PpiOne[1600] and the nonresonant
component. However, the shape of the nonresonant component in the
highest \tpr bin seems implausible since it is inconsistent with the
continuous evolution with increasing \tpr (see the discussion of the
Deck model below). At low \tpr, the model does not describe the
low-mass part of the intensity distribution well. In particular, the model
cannot reproduce the presumably artificial narrow enhancement at
\SI{1.1}{\GeVcc}.
Within the fit range, the model describes the phases of the $1^{-+}$
wave well. The \PpiOne[1600] component causes only slight phase
motions. This becomes particularly obvious in the nearly constant
phase \wrt the \wave{4}{++}{1}{+}{\Prho}{G} wave in the
\SI{1.6}{\GeVcc} region [see
\cref{fig:phase_1mp_4pp_rho_tbin1,fig:phase_1mp_4pp_rho_tbin11}]. The
$4^{++}$ wave contains no resonance in this mass range. For some
waves, the model extrapolations to low or high masses deviate from the
data [see \eg
\cref{fig:phase_1mp_1pp_rho_tbin1,fig:phase_1mp_2mp_m0_f2_tbin1}].
The strong \tpr dependence of the relative strength of the nonresonant
and the \PpiOne[1600] components is shown in
\cref{fig:tprim_1mp_main}. For $\tpr \gtrsim \SI{0.3}{\GeVcsq}$, the
\PpiOne[1600] contribution dominates, whereas in the lowest \tpr bin
the intensity of the nonresonant component, integrated over the fit
range, is nearly an order of magnitude larger. The \tpr spectrum of
the \PpiOne[1600] is not well described by the parametrization in
\cref{eq:slope-parametrization}. The model is not able to reproduce
the downturn toward low \tpr. This may be a hint that, at low \tpr,
the fit is not able to separate the small \PpiOne[1600] component from
the dominant nonresonant component due to an inappropriate description
of the shape of the latter. This hypothesis is supported by the
result of a study, in which the shape of the nonresonant component was
determined from a Deck model (see discussion below). Limiting the fit
range to the region \SIvalRange{0.189}{\tpr}{0.724}{\GeVcsq}, where
the model is able to describe the data, yields a \PpiOne[1600] slope
parameter of \SI{7.3}{\perGeVcsq}. This value lies in the range that
is typical for resonances and is much smaller than the slope
value of the nonresonant component. The model in
\cref{eq:slope-parametrization} is in fair agreement with the \tpr
spectrum of the nonresonant component, which has a slope parameter
value of \SIaerrSys{19.1}{1.4}{4.7}{\perGeVcsq}. This is the second
largest slope value of all wave components in the fit.\footnote{Only
the nonresonant component in the \wave{0}{-+}{0}{+}{\PfZero[980]}{S}
wave has an even steeper slope (see
\cref{tab:slopes,sec:zeroMP_results}).}
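The slope parameters quoted above can be read as follows: assuming, as
a sketch, the single-exponential behavior underlying
\cref{eq:slope-parametrization},
\[
  \frac{\mathrm{d}N}{\mathrm{d}t'} \propto e^{-b\, t'}\,,
\]
a larger slope parameter~$b$ corresponds to a \tpr spectrum that falls
off more steeply, as is observed for the nonresonant components
compared to the resonances.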
\begin{wideFigureOrNot}[tbp]
\centering
\subfloat[][]{%
\includegraphics[width=\twoPlotWidth]{fig45a}%
\label{fig:tprim_1mp_main}%
}%
\hspace*{\twoPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\twoPlotWidth]{fig45b}%
\label{fig:tprim_1mp_Deck}%
}%
\caption{Similar to \cref{fig:tprim_0mp}, but showing the \tpr
spectra of the two $\JPC = 1^{-+}$ wave components as given by
\cref{eq:tprim-dependence}: the \PpiOne[1600] component is shown
as blue lines and light blue boxes, and the nonresonant component
as black lines and gray boxes. The red and green curves and
horizontal lines represent fits using
\cref{eq:slope-parametrization}. \subfloatLabel{fig:tprim_1mp_main}~The
result of the main fit. \subfloatLabel{fig:tprim_1mp_Deck}~The
result of a fit, in which the parametrization of the nonresonant
amplitude was replaced by the square root of the intensity
distribution of the partial-wave decomposition of Deck Monte Carlo
data [\StudyO; see
\cref{sec:systematics,fig:intensities_phase_1mp_DeckMC}].}
\label{fig:tprim_1mp}
\end{wideFigureOrNot}
From the fit, we obtain the Breit-Wigner resonance parameters
$m_{\PpiOne[1600]} = \SIaerrSys{1600}{110}{60}{\MeVcc}$ and
$\Gamma_{\PpiOne[1600]} = \SIaerrSys{580}{100}{230}{\MeVcc}$. Since
the $1^{-+}$ wave has a small intensity and is dominated by
nonresonant contributions, the \PpiOne[1600] resonance parameters are
sensitive to changes of the fit model discussed in
\cref{sec:systematics} and hence have large systematic uncertainties.
In the systematic studies, we observe a correlation of the
\PpiOne[1600] parameters with the \PaOne, \PaOne[1640], and
\PaTwo[1700] parameters.
We also observe that the \PpiOne[1600] parameters depend on the choice
of the waves included in the fit.
Studies~\studyS and~\studyR with alternative \chisq~formulations (see
\cref{sec:alt_chi_2}) indicate that larger width values are preferred
when less weight is given to the phase information in the
\chisq~function.
The \PpiOne[1600] parameters are also sensitive to the range
parameter~$q_R$ in the Blatt-Weisskopf factors. More details on the
results of these systematic studies are discussed in
\cref{sec:syst_uncert_oneMP}.
Since the $1^{-+}$ wave is dominated by the nonresonant component, the
fit result depends on the choice of the parametrization for the
nonresonant component. In order to estimate this dependence, we
performed \StudyO, in which the parametrization of the nonresonant
amplitude was replaced by the square root of the intensity
distribution of the partial-wave decomposition of Deck Monte Carlo
data generated according to the model described in
\cref{sec:deck_model}. This fit describes the $1^{-+}$ amplitude
fairly well (see
\cref{fig:DeckMC_chi2difference,fig:intensities_phase_1mp_DeckMC}).
The Deck model behaves qualitatively similarly to the empirical
parametrization used in the main fit, except in the highest \tpr bin,
where the Deck model has a more plausible shape. The main difference
\wrt the main fit is a larger \PpiOne[1600] yield at low \tpr. The
resulting \tpr spectrum for the \PpiOne[1600] [see
\cref{fig:tprim_1mp_Deck}] is much better described by the
parametrization in \cref{eq:slope-parametrization} than the \tpr
spectrum of the main fit [see \cref{fig:tprim_1mp_main}]. The slope
value of \SI{8.5}{\perGeVcsq} that is extracted using a fit range of
\SIvalRange{0.113}{\tpr}{0.742}{\GeVcsq} is in the range typical for
resonances. Mass and width of the \PpiOne[1600] resonance decrease by
\SI{60}{\MeVcc}. \StudyO defines the lower boundary of the
uncertainty interval for the \PpiOne[1600] mass.
\begin{wideFigureOrNot}[tbp]
\centering
\subfloat[][]{%
\includegraphics[width=\twoPlotWidth]{fig46a}%
\label{fig:intensity_1mp_DeckMC}%
}%
\hspace*{\twoPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\twoPlotWidth]{fig46b}%
\label{fig:intensity_1mp_tbin1_DeckMC}%
}%
\\
\subfloat[][]{%
\includegraphics[width=\twoPlotWidth]{fig46c}%
\label{fig:intensity_1mp_tbin11_DeckMC}%
}%
\hspace*{\twoPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\twoPlotWidth]{fig46d}%
\label{fig:phase_1mp_1pp_rho_tbin11_DeckMC}%
}%
\caption{\subfloatLabel{fig:intensity_1mp_DeckMC}~\tpr-summed
intensity of the \wave{1}{-+}{1}{+}{\Prho}{P} wave.
\subfloatLabel{fig:intensity_1mp_tbin1_DeckMC}~and~\subfloatLabel{fig:intensity_1mp_tbin11_DeckMC}:
Intensity of this wave in the lowest and highest \tpr bins,
respectively.
\subfloatLabel{fig:phase_1mp_1pp_rho_tbin11_DeckMC}~Phase of the
$1^{-+}$ wave relative to the \wave{1}{++}{0}{+}{\Prho}{S} wave in
the highest \tpr bin. The result of the main fit is represented
by the continuous curves. The fit, in which the parametrization
of the nonresonant amplitude was replaced by the square root of
the intensity distribution of the partial-wave decomposition of
Deck Monte Carlo data [\StudyO; see \cref{sec:systematics}], is
represented by the dashed curves. The model and the wave
components are represented as in \cref{fig:intensities_1mp}.}
\label{fig:intensities_phase_1mp_DeckMC}
\end{wideFigureOrNot}
In the related \StudyK, we estimate the effect of an increased
background contamination on the fit result by using weaker
event-selection criteria. The \tpr-summed \PpiOne[1600] yield remains
approximately unchanged while the strength of the nonresonant
component increases.\footnote{However, the \PpiOne[1600] parameters
change. It becomes \SI{46}{\MeVcc} heavier and \SI{130}{\MeVcc}
narrower.}
Since at low \tpr the intensity distribution of the $1^{-+}$ wave
exhibits presumably artificial structures in the low-mass region, we
performed a study, in which the fit range for the $1^{-+}$ wave was
limited to \SIvalRange{1.4}{\mThreePi}{2.0}{\GeVcc}. In this study,
the mass of the \PpiOne[1600] increases by \SI{60}{\MeVcc} but remains
within the systematic uncertainty while the width remains unchanged.
A similar result is obtained in \StudyL, in which the analysis was
performed using only eight~\tpr bins. This suggests that the
subdivision of the analyzed \tpr range into 11~bins is sufficient to
capture the rapid change of the shape of the intensity distribution of
the $1^{-+}$ wave with \tpr.
We obtain slightly changed values for mass and width, \ie
$m_{\PpiOne[1600]} = \SI{1650}{\MeVcc}$ and
$\Gamma_{\PpiOne[1600]} = \SI{560}{\MeVcc}$, if we use a
mass-dependent width for the parametrization of the \PpiOne[1600]
analogous to \cref{eq:method:a2dynamicwidth} and assume that this
width is saturated by the $\Prho \pi P$-wave decay mode.
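Schematically, such a mass-dependent width, assumed here to be
saturated by the $\Prho \pi$ $P$-wave decay, has the generic form (the
exact expression is the analog of \cref{eq:method:a2dynamicwidth})
\[
  \Gamma(m) = \Gamma_0\, \frac{m_0}{m}\,
  \frac{q(m)}{q(m_0)}\,
  \frac{F_1^2\big(q(m)\big)}{F_1^2\big(q(m_0)\big)}\,,
\]
where $q(m)$ is the $\Prho \pi$ breakup momentum and $F_1$ is the
$L = 1$ Blatt-Weisskopf barrier factor.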
\subsubsection{Discussion of results on $1^{-+}$ resonances}
\label{sec:oneMP_discussion}
The results of previous experiments on the existence of a
\PpiOne[1600] signal in the $3\pi$ final state are contradictory. On
the one hand, the BNL~E852 experiment, which analyzed pion diffraction
at \SI{18}{\GeVc} beam momentum, claimed a \PpiOne[1600] signal in the
$\Prho \pi$ decay mode~\cite{adams:1998ff,chung:2002pu}. On the other
hand, the authors of \refCite{dzierba:2005jg} concluded that the peak
structure in the \wave{1}{-+}{1}{+}{\Prho}{P} wave that was reported
in \refsCite{adams:1998ff,chung:2002pu} was due to leakage caused by a
too-small wave set and that they did not observe a significant
\PpiOne[1600] signal in the $\Prho \pi$ channel. This conclusion was
based on a partial-wave analysis of a much larger $3\pi$ data set also
from the BNL~E852 experiment in the kinematic range
\SIvalRange{0.08}{\tpr}{0.53}{\GeVcsq} using an extended wave set.
However, a \PpiOne[1600] signal was observed in a combined analysis of
$\eta' \pi$, $\PbOne \pi$, and $\Prho \pi$ final states from pion
diffraction at \SI{36.6}{\GeVc} beam momentum by the
VES~experiment~\cite{zaitsev:2000rc,Khokhlov:2000tk}. No
\PpiOne[1600] signal was found by the CLAS experiment in \threePi
photoproduction~\cite{Nozar:2008aa,Eugenio:2013xua}.
We have studied the significance of the \PpiOne[1600] signal by
performing a fit, in which we omitted the \PpiOne[1600] component from
the model. Hence in this fit, the \wave{1}{-+}{1}{+}{\Prho}{P} wave
is described solely by the nonresonant component. The minimum
\chisq~value of this fit is \num{1.17} times that of the
main fit.\footnote{Compared to the \num{722} free parameters of the
main fit, this fit has \num{698} free parameters.}
\Cref{fig:no-pi1(1600)_chi2difference} shows the contributions from
the spin-density matrix elements to the \chisq~difference between this
and the main fit. In particular, the intensity of the $1^{-+}$ wave
and its phase relative to the \wave{1}{++}{0}{+}{\Prho}{S} wave are
described less well by the model without the \PpiOne[1600] (see
\cref{fig:intensities_phase_no-pi1(1600)}). The disagreement is
largest in the two highest \tpr bins where the model cannot describe
the data. However, at lower \tpr the nonresonant component is
sufficient to describe the basic features of the data.\footnote{The
omission of the \PpiOne[1600] also affects some of the resonance
parameters in the fit. Most striking is the impact on the
description of the intensity distribution of the
\wave{1}{++}{0}{+}{\Prho}{S} wave. Although this wave has a
relative intensity that is about 40~times larger than that of the
$1^{-+}$ wave and although the \PaOne and the \PpiOne[1600] have a
mass difference of about \SI{300}{\MeVcc}, the \PaOne becomes
\SI{56}{\MeVcc} heavier and \SI{78}{\MeVcc} narrower if the
\PpiOne[1600] is omitted from the model. Also the \PaOne[1640] and
\PaTwo[1700] parameters change substantially. The \PaOne[1640]
becomes \SI{92}{\MeVcc} heavier and \SI{26}{\MeVcc} wider; the
\PaTwo[1700] becomes \SI{28}{\MeVcc} heavier and \SI{60}{\MeVcc}
wider.} Furthermore, we performed a fit with a model that describes
the $1^{-+}$ amplitude using two independent coherent nonresonant
contributions but no \PpiOne[1600]. This fit also does not yield a
satisfactory description of the data. Based on the points discussed
above, we conclude that the significance of the \PpiOne[1600] signal
is strongly \tpr dependent. At \tpr below about \SI{0.5}{\GeVcsq},
there is only weak evidence for the \PpiOne[1600]. This is consistent
with the nonobservation of the \PpiOne[1600] in the BNL~E852 data in
the kinematic range $\tpr < \SI{0.53}{\GeVcsq}$~\cite{dzierba:2005jg},
as discussed above. However, our data show that a resonancelike
signal is required to describe the data in the \tpr region above about
\SI{0.5}{\GeVcsq}, which was not analyzed in \refCite{dzierba:2005jg}.
\begin{figure}[tbp]
\centering
\includegraphics[width=\linewidthOr{\twoPlotWidth}]{fig47}
\caption{Similar to \cref{fig:DeckMC_chi2difference}, but for the
study, in which the \PpiOne[1600] resonance was omitted from the
fit model.}
\label{fig:no-pi1(1600)_chi2difference}
\end{figure}
\begin{wideFigureOrNot}[tbp]
\centering
\subfloat[][]{%
\includegraphics[width=\twoPlotWidth]{fig48a}%
\label{fig:intensity_1mp_no-pi1(1600)}%
}%
\hspace*{\twoPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\twoPlotWidth]{fig48b}%
\label{fig:intensity_1mp_tbin1_no-pi1(1600)}%
}%
\\
\subfloat[][]{%
\includegraphics[width=\twoPlotWidth]{fig48c}%
\label{fig:intensity_1mp_tbin11_no-pi1(1600)}%
}%
\hspace*{\twoPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\twoPlotWidth]{fig48d}%
\label{fig:phase_1mp_1pp_rho_tbin11_no-pi1(1600)}%
}%
\caption{\subfloatLabel{fig:intensity_1mp_no-pi1(1600)}~\tpr-summed
intensity of the \wave{1}{-+}{1}{+}{\Prho}{P} wave.
\subfloatLabel{fig:intensity_1mp_tbin1_no-pi1(1600)}~and~\subfloatLabel{fig:intensity_1mp_tbin11_no-pi1(1600)}:
intensity of this wave in the lowest and highest \tpr bins,
respectively.
\subfloatLabel{fig:phase_1mp_1pp_rho_tbin11_no-pi1(1600)}~Phase of
the $1^{-+}$ wave relative to the \wave{1}{++}{0}{+}{\Prho}{S}
wave in the highest \tpr bin. The result of the main fit is
represented by the continuous curves. The fit, in which the
\PpiOne[1600] resonance was omitted from the fit model, is
represented by the dashed curves. These curves correspond to the
nonresonant component. The model and the wave components are
represented as in \cref{fig:intensities_1mp}.}
\label{fig:intensities_phase_no-pi1(1600)}
\end{wideFigureOrNot}
The \PpiOne[1600] is considered by the PDG to be an established state.
It was seen by the BNL~E852 and VES experiments in diffractively
produced $\eta' \pi$~\cite{beladidze:1993km,Ivanov:2001rv},
$\eta \pi^+ \pi^- \pi^-$~\cite{kuhn:2004en,amelin:2005ry}, and
$\omega \pi^- \pi^0$~\cite{Lu:2004yn,amelin:2005ry} final states.
Evidence for the \PpiOne[1600] was also found in an analysis of
$\ppbar \to \omega \pi^+ \pi^- \pi^0$ Crystal Barrel
data~\cite{baker:2003jh} and in $\chi_{c1} \to \eta' \pi^+ \pi^-$
decays by the CLEO-c experiment~\cite{Adams:2011sq}.
The PDG world averages for mass and width of the \PpiOne[1600] are
$m_{\PpiOne[1600]} = \SIaerr{1662}{8}{9}{\MeVcc}$ and
$\Gamma_{\PpiOne[1600]} = \SI{241(40)}{\MeVcc}$,
respectively~\cite{Patrignani:2016xqp}. Our measured \PpiOne[1600]
mass of $m_{\PpiOne[1600]} = \SIaerrSys{1600}{110}{60}{\MeVcc}$ is
consistent with the world average within the large systematic
uncertainties; however, our measured \PpiOne[1600] width of
$\Gamma_{\PpiOne[1600]} = \SIaerrSys{580}{100}{230}{\MeVcc}$ is
larger. This discrepancy is mainly due to the extremely small width
value of $\Gamma_{\PpiOne[1600]} = \SIerrs{185}{25}{28}{\MeVcc}$
quoted by the BNL~E852 experiment for the $\omega \pi^- \pi^0$ final
state~\cite{Lu:2004yn}. The present width is also larger than our
previously published one from an analysis of the same process on a
solid-lead target~\cite{alekseev:2009aa}. Due to the approximately
2~orders of magnitude smaller data sample, the analysis in
\refCite{alekseev:2009aa} was performed by integrating over the \tpr
range from \SIrange{0.1}{1.0}{\GeVcsq} and by assuming a model for the
\tpr dependence of the partial-wave amplitudes. Therefore, the \tpr
dependence of the shape of the \wave{1}{-+}{1}{+}{\Prho}{P} amplitude
was not taken into account. It is remarkable that in the lead-target
data, the contribution of the nonresonant component is much smaller
than that in the proton-target data so that the \tpr-integrated
lead-target data resemble the high-\tpr region of the proton-target
data.
The PDG summary table lists the \PpiOne[1400] as an additional
$\JPC = 1^{-+}$ resonance. This state was observed by several
experiments in the $\eta \pi$ final
state~\cite{Alde:1988bv,Aoyagi:1993kn,Thompson:1997bs,Chung:1999we,Adams:2006sa,Dorofeev:2001xu,Abele:1998gn,Abele:1999tf}.
In the $\Prho \pi$ channel, it was only observed by the Obelix
experiment~\cite{salvini:2004gz}. We do not see any clear resonance
signal below \SI{1.5}{\GeVcc} in the \wave{1}{-+}{1}{+}{\Prho}{P}
wave. Aside from the presumably artificial narrow structure at
\SI{1.1}{\GeVcc}, the description of the intensities and phases by our
model leaves little room for a possible \PpiOne[1400] component in the
$\Prho \pi P$ wave.
The BNL~E852 experiment also reported a heavy spin-exotic state, \ie
the \PpiOne[2015], in the $\PfOne \pi$~\cite{kuhn:2004en} and
$\PbOne \pi$~\cite{Lu:2004yn} decay modes. We do not see any clear
resonance signal of a heavy \PpiOne* state in the mass range from
\SIrange{1900}{2500}{\MeVcc} in the $\Prho \pi P$ wave. However, we
cannot exclude that some of the observed deviations of the model from
the data at high masses are due to an additional excited \PpiOne*
state.
%
\section{Results on \tpr dependence of relative phases of coupling amplitudes}
\label{sec:production_phases}
As discussed in \cref{sec:method}, our fit model in
\cref{eq:method:param:spindens} contains coupling amplitudes
$\mathcal{C}_a^j(\tpr)$ for each wave component~$j$ in partial
wave~$a$, in addition to the shape parameters of the resonant and
nonresonant components. The coupling amplitudes in the 11~\tpr bins
are independent parameters of the model, which are determined by the
fit. In order to reduce the number of these fit parameters, the
coupling amplitudes of resonance components that appear in waves with
the same \JPCMrefl quantum numbers but different decay modes are
constrained to have the same \tpr dependence via
\cref{eq:method:branchingdefinition}. In \cref{sec:results}, we
already discussed the \tpr-dependent yields of the resonant and
nonresonant components as given by \cref{eq:tprim-dependence}. Most
of these \tpr spectra approximately follow the simple model in
\cref{eq:slope-parametrization}.
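The slope parameters quoted above can be illustrated with a minimal
numerical sketch. Assuming the standard single-exponential form
$\mathrm{d}N/\mathrm{d}\tpr \propto e^{-b\,\tpr}$ for diffractive
production (the explicit form of \cref{eq:slope-parametrization} is not
repeated here), the slope~$b$ follows from a log-linear least-squares
fit to the binned yields; all numerical values below are made up.

```python
import math

def fit_slope(tpr_centers, yields):
    """Log-linear least-squares fit of yield ~ N0 * exp(-b * t');
    returns the slope parameter b in (GeV/c)^-2.
    Assumes a single-exponential t' spectrum (illustrative sketch)."""
    n = len(tpr_centers)
    x_mean = sum(tpr_centers) / n
    logs = [math.log(v) for v in yields]
    y_mean = sum(logs) / n
    cov = sum((x - x_mean) * (y - y_mean) for x, y in zip(tpr_centers, logs))
    var = sum((x - x_mean) ** 2 for x in tpr_centers)
    return -cov / var  # log(yield) falls linearly with slope -b

# Made-up yields generated with b = 8 (GeV/c)^-2
tpr = [0.15, 0.25, 0.35, 0.45, 0.55]   # bin centers in (GeV/c)^2
yields = [1e5 * math.exp(-8.0 * t) for t in tpr]
print(fit_slope(tpr, yields))   # recovers b = 8 up to rounding
```

In the actual analysis, the \tpr spectra follow from
\cref{eq:tprim-dependence} and the fit takes the uncertainties into
account; the sketch only illustrates the leading exponential behavior.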
In this section, we discuss the \tpr dependence of the relative phases
between the coupling amplitudes of wave component~$j$ in wave~$a$ and
of wave component~$k$ in wave~$b$,
\begin{equation}
\label{eq:coupling_phase}
\Delta \phi_\text{coupl.}^{j, a; k, b}(\tpr)
\equiv \arg\!\sBrk{\mathcal{C}_a^j(\tpr)\, \mathcal{C}_b^{k \text{*}}(\tpr)}.
\end{equation}
In the text below, we refer to these relative phases as \emph{coupling
phases}. Coupling amplitudes of the same resonance in different
decay channels, which are constrained via
\cref{eq:method:branchingdefinition}, have \tpr-independent relative
coupling phases that correspond to
$\arg \sBrk[1]{\prescript{}{b}{\mathcal{B}}_a^j}$.
As the coupling amplitude of a particular wave component is the
product of the actual production amplitude of this wave component and
the complex-valued couplings, $\alpha_{X \to \xi \pi}$ and
$\alpha_{\xi \to \pi \pi}$, which appear in its decay via the
isobar~$\xi^0$ (see \cref{sec:method}), the physical interpretation of
the coupling phase is not straightforward. Assuming that a single
production mechanism dominates, we would expect the coupling phases of
resonances to be approximately independent of \tpr. These phases may
be altered by effects from final-state interactions.
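For illustration, the coupling phase of \cref{eq:coupling_phase} can be
evaluated numerically from two complex coupling amplitudes. The
amplitude values in the sketch below are made up; only the construction
$\arg[\mathcal{C}_a\, \mathcal{C}_b^{*}]$, with the result wrapped to
the interval $[\SI{-180}{\degree}, \SI{+180}{\degree}]$, follows the
definition in the text.

```python
import cmath

def coupling_phase_deg(c_a, c_b):
    """arg[C_a * conj(C_b)] in degrees, wrapped to [-180, +180]."""
    return cmath.phase(c_a * c_b.conjugate()) * 180.0 / cmath.pi

# Made-up coupling amplitudes for a single t' bin (illustrative only)
c_a = 2.0 * cmath.exp(1j * cmath.pi / 3)   # magnitude 2.0, phase +60 deg
c_b = 1.5 * cmath.exp(1j * cmath.pi / 6)   # magnitude 1.5, phase +30 deg

print(coupling_phase_deg(c_a, c_b))   # about 30: the magnitudes drop out
```

With this wrapping convention, phase offsets of \minusDeg{180} and
\plusDeg{180} are equivalent.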
Our fit model assumes that resonances are described by Breit-Wigner
amplitudes and that they have the same masses and widths in different
waves and in all \tpr bins. In contrast, the shape of the nonresonant
components can be adapted individually for each wave by the fit. For
some waves, we allow the shape of the nonresonant component to change
with \tpr (see \cref{tab:method:fitmodel:waveset}). In addition, the
fit has the freedom to choose the relative strengths and phases for
the different components within a single wave and the relative
strengths and phases between different waves. The imperfections in
our model, in particular concerning the parametrization of the
nonresonant components (see \cref{sec:results}), might cause offsets
in the relative phases, which may even be uncorrelated across \tpr
bins. Given these possible artifacts, we consider small phase
differences of up to \SI{20}{\degree} as insignificant for the physical
interpretation.
The discussion of coupling phases will focus mostly on the resonance
components. In \cref{fig:tprim_phase_all_res}, we show the \tpr
dependence of the coupling phases of the 11~resonance components in
the dominant wave of the respective \JPC sector relative to the
\PpiTwo in the \wave{2}{-+}{0}{+}{\PfTwo}{S} wave. The dominant waves
are characterized by a large contribution from the respective
ground-state resonance, while the contributions from higher excited
states are substantially smaller. Since the \PaOne[1420] does not
appear in the dominant $1^{++}$ wave, its coupling phase is shown for
the \wave{1}{++}{0}{+}{\PfZero[980]}{P} wave. In
\cref{fig:tprim_phase_all_res}, we have chosen the \PpiTwo as the
reference component because it turned out to be relatively stable in
our systematic studies. The coupling phases of all resonances show a
smooth variation as a function of \tpr. Since the model does not
contain any assumptions about the \tpr behavior of the coupling phases,
the observed continuous behavior is a nontrivial result, which
supports our analysis model. We observe a similar behavior for the
coupling phases of the nonresonant components, although the variation
with \tpr is typically larger (see
\crefrange{fig:tprim_phase_2mp}{fig:tprim_phase_4pp} in
\cref{sec:phases_piJ,sec:phases_aJ} below).
\begin{figure}[tbp]
\centering
%
\includegraphics[width=\linewidthOr{1.22125\twoPlotWidth}]{fig49}
\caption{\tpr dependence of the relative phases
$\Delta \phi_\text{coupl.}$ of the coupling amplitudes [see
\cref{eq:coupling_phase}] of the 11~resonance components in the
fit model \wrt the \PpiTwo. The coupling phases are shown for the
dominant wave of the respective \JPC sector:
\wave{0}{-+}{0}{+}{\PfZero}{S}, \wave{1}{++}{0}{+}{\Prho}{S},
\wave{1}{-+}{1}{+}{\Prho}{P}, \wave{2}{++}{1}{+}{\Prho}{D},
\wave{2}{-+}{0}{+}{\PfTwo}{S}, and \wave{4}{++}{1}{+}{\Prho}{G}.
The only exception is the \PaOne[1420], which appears only in the
\wave{1}{++}{0}{+}{\PfZero}{P} wave. The width of the horizontal
lines represents the statistical uncertainty. The systematic
uncertainty is not shown.}
\label{fig:tprim_phase_all_res}
\end{figure}
The coupling phases of the resonance components exhibit three striking
features in their \tpr dependence: \one~for most resonances, we find
for $\tpr \lesssim \SI{0.3}{\GeVcsq}$ a slow change of the coupling
phases with \tpr, whereas for $\tpr \gtrsim \SI{0.3}{\GeVcsq}$ the
phases level off; \two~with the exception of the \PaOne[1420], the
coupling phases of different states with the same \JPC show large
relative offsets in the highest \tpr bin; and \three~the coupling
phases of the ground-state resonances do not deviate by more than
\SI{\pm 60}{\degree} from the phase of the \PpiTwo in the highest \tpr
bin. In particular the nearly constant phases of all resonances for
$\tpr \gtrsim \SI{0.3}{\GeVcsq}$ are remarkable and appear to be
characteristic of resonances. This behavior is consistent with a
common production mechanism for the resonances.
\subsection{Relative phases of the coupling amplitudes of the \piJ resonances}
\label{sec:phases_piJ}
The \tpr dependence of the coupling phases of the $2^{-+}$ wave
components is shown in \cref{fig:tprim_phase_2mp} relative to the
\PpiTwo in the \wave{2}{-+}{0}{+}{\PfTwo}{S} wave. The \tpr
dependence of the coupling phases of the resonance components in the
three $2^{-+}$ waves with $M = 0$ is constrained via
\cref{eq:method:branchingdefinition}. Therefore, in these waves the
coupling phases of the resonances follow the same \tpr dependence but
may have relative offsets, which correspond to the phase of the
branching amplitudes $\prescript{}{b}{\mathcal{B}}_a^j$. As for the
resonance parameters and the \tpr spectra that were discussed in
\crefrange{sec:zeroMP}{sec:oneMP}, the uncertainties of the coupling
phases are dominated by systematic effects; \ie statistical
uncertainties are negligible in comparison. The \tpr dependence of
the coupling phases differs in the various systematic studies (see
\cref{sec:systematics}). In order to illustrate the magnitude of the
systematic effects at least qualitatively, we show in
\cref{fig:tprim_phase_2mp} for each wave component in addition to the
continuous lines, which represent the result of the main fit, two sets
of dashed lines. They represent the results of the two systematic
studies that in the highest \tpr bin have the largest deviation from
the coupling phase of the main fit. In order to guide the eye, the
region between the two sets of dashed lines is shaded.\footnote{Note
that the shaded areas defined in this way cannot be interpreted as
systematic uncertainties.}
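The choice of the two bracketing curves described above can be sketched
as follows; the function names and phase values are hypothetical, and
the deviation is evaluated only in the highest \tpr bin, as in the text.

```python
def wrap_deg(dphi):
    """Wrap a phase difference in degrees to the interval (-180, +180]."""
    return (dphi - 180.0) % -360.0 + 180.0

def bracketing_studies(main_phase, study_phases):
    """Return the indices of the two systematic studies whose coupling
    phase in the highest t' bin deviates most from the main fit.
    (Hypothetical helper; inputs are phases in degrees.)"""
    deviations = [abs(wrap_deg(p - main_phase)) for p in study_phases]
    ranking = sorted(range(len(deviations)), key=deviations.__getitem__,
                     reverse=True)
    return ranking[:2]

# Made-up phases: main fit at 0 deg, four systematic studies
print(bracketing_studies(0.0, [10.0, -170.0, 60.0, -30.0]))  # [1, 2]
```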
\begin{wideFigureOrNot}[tbp]
\centering
\subfloat[][]{%
\includegraphics[width=\twoPlotWidth]{fig50a}%
\label{fig:tprim_phase_2mp_rho}%
}%
\hspace*{\twoPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\twoPlotWidth]{fig50b}%
\label{fig:tprim_phase_2mp_m0_f2_S}%
}%
\\
\subfloat[][]{%
\includegraphics[width=\twoPlotWidth]{fig50c}%
\label{fig:tprim_phase_2mp_m1_f2_S}%
}%
\hspace*{\twoPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\twoPlotWidth]{fig50d}%
\label{fig:tprim_phase_2mp_f2_D}%
}%
\caption{\tpr dependence of the coupling phases of the wave
components in \subfloatLabel{fig:tprim_phase_2mp_rho}~the
\wave{2}{-+}{0}{+}{\Prho}{F},
\subfloatLabel{fig:tprim_phase_2mp_m0_f2_S}~the
\wave{2}{-+}{0}{+}{\PfTwo}{S},
\subfloatLabel{fig:tprim_phase_2mp_m1_f2_S}~the
\wave{2}{-+}{1}{+}{\PfTwo}{S}, and
\subfloatLabel{fig:tprim_phase_2mp_f2_D}~the
\wave{2}{-+}{0}{+}{\PfTwo}{D} wave. The coupling phases of the
\PpiTwo (blue lines), the \PpiTwo[1880] (green lines), the
\PpiTwo[2005] (red lines), and the nonresonant components (black
lines) are shown relative to the \PpiTwo in the
\wave{2}{-+}{0}{+}{\PfTwo}{S} wave. For each wave component, the
magnitude of the effects observed in the systematic studies (see
\cref{sec:systematics}) is illustrated qualitatively by two sets
of dashed lines with shaded area in between (see text).}
\label{fig:tprim_phase_2mp}
\end{wideFigureOrNot}
The coupling phase of the \PpiTwo in the $\PfTwo \pi S$ wave with
$M = 0$ is zero by definition. The coupling phases of the \PpiTwo in
the $\PfTwo \pi D$ wave and in the $\PfTwo \pi S$ wave with $M = 1$
are similar and offset by less than \SI{\pm 20}{\degree} (see
\cref{fig:tprim_phase_2mp}). The latter observation is remarkable
because the coupling phase of the \PpiTwo in the $\PfTwo \pi S$ wave
with $M = 1$ is not constrained via
\cref{eq:method:branchingdefinition}. In the $\Prho \pi F$ wave, the
\PpiTwo coupling amplitude shows a larger offset of about
\SI{+50}{\degree}.
The \PpiTwo[1880] shows a coupling phase offset of about
\minusDeg{180}\footnote{This is, of course, mathematically equivalent
to \plusDeg{180}.} in the $\PfTwo \pi D$ wave (see
\cref{fig:tprim_phase_2mp}). In this wave, the \PpiTwo[1880] is the
dominant component. Therefore, the corresponding coupling phase is
relatively stable \wrt the systematic studies. In contrast, the
relative contribution of the \PpiTwo[1880] to the other $2^{-+}$ waves
is much smaller, which leads to larger variations of these coupling
phases in the systematic studies. In the two $\PfTwo \pi S$ waves,
the coupling phase remains at an offset of about \minusDeg{180}.
However, in the $\Prho \pi F$ wave the \PpiTwo[1880] has a coupling
phase of about \SI{0}{\degree}.
The \PpiTwo[2005] is best determined by the $\Prho \pi F$ wave and
shows a phase offset of about \SI{+150}{\degree} (see
\cref{fig:tprim_phase_2mp}). Similar offsets, although with larger
systematic variations, are also observed in the $\PfTwo \pi D$ wave
and in the $\PfTwo \pi S$ wave with $M = 1$. In contrast, the
coupling phase in the $\PfTwo \pi S$ wave with $M = 0$ is about
\SI{-90}{\degree}.
The \tpr dependence of the coupling phase of the \Ppi[1800] follows
that of the \PpiTwo with an offset close to \plusDeg{180} (see
\cref{fig:tprim_phase_all_res}). The black lines in
\cref{fig:tprim_phase_0mp} show the coupling phase of the nonresonant
component in the \wave{0}{-+}{0}{+}{\PfZero[980]}{S} wave relative to
the \Ppi[1800]. At low \tpr, the nonresonant coupling phase is offset
by about \SI{+100}{\degree}. It then jumps by about
\SI{+150}{\degree} at $\tpr \approx \SI{0.3}{\GeVcsq}$, thereby
changing the sign of the coupling amplitude \wrt the \Ppi[1800]. At
the same \tpr value, we observe a dip in the \tpr spectrum of the
nonresonant component (see \cref{fig:tprim_0mp}).
\begin{figure}[tbp]
\centering
\includegraphics[width=\twoPlotWidth]{fig51}
\caption{Similar to \cref{fig:tprim_phase_2mp} but for the wave
components in the \wave{0}{-+}{0}{+}{\PfZero[980]}{S} wave. The
coupling phase of the nonresonant component (black lines) is shown
relative to the \Ppi[1800] (blue line).}
\label{fig:tprim_phase_0mp}
\end{figure}
The coupling phase of the \PpiOne[1600] relative to the \PpiTwo shows
the most pronounced \tpr dependence of all resonances in
\cref{fig:tprim_phase_all_res} but stays within about \SI{\pm
30}{\degree} of the \PpiTwo coupling phase. Qualitatively, the
\PpiOne[1600] coupling phase behaves similarly to that of the
ground-state resonances. The coupling phase of the nonresonant
component in the \wave{1}{-+}{1}{+}{\Prho}{P} wave relative to the
\PpiOne[1600] shows a strong \tpr dependence (see
\cref{fig:tprim_phase_1mp}). Below $\tpr \approx \SI{0.3}{\GeVcsq}$,
the coupling phase of the nonresonant component is approximately equal
to that of the \PpiOne[1600], with only a negligible offset. In this
\tpr region, the $1^{-+}$ wave is dominated by the
nonresonant component (see \cref{sec:oneMP_results}). Therefore, the
\PpiOne[1600] is not well separated from the nonresonant component.
Above $\tpr \approx \SI{0.3}{\GeVcsq}$, the coupling phase rises
rapidly to about \plusDeg{180}. This rapid change of the interference
pattern between the \PpiOne[1600] and the nonresonant component at
high \tpr is needed for the model to describe the changing shape of
the $1^{-+}$ intensity distribution. However, the variation of the
coupling phase in the systematic studies is large as in the case of
the \PpiOne[1600] resonance parameters.
\begin{figure}[tbp]
\centering
\includegraphics[width=\twoPlotWidth]{fig52}
\caption{Similar to \cref{fig:tprim_phase_2mp} but for the wave
components in the \wave{1}{-+}{1}{+}{\Prho}{P} wave. The coupling
phase of the nonresonant component (black lines) is shown relative
to the \PpiOne[1600] (blue line).}
\label{fig:tprim_phase_1mp}
\end{figure}
\subsection{Relative phases of the coupling amplitudes of the \aJ resonances}
\label{sec:phases_aJ}
The coupling phase of the \PaOne relative to the \PpiTwo shows the
largest variation with \tpr of all ground-state resonances in
\cref{fig:tprim_phase_all_res}. It starts at \SI{+100}{\degree} at
$\tpr = \SI{0.1}{\GeVcsq}$ and falls until
$\tpr \approx \SI{0.3}{\GeVcsq}$, after which it levels off at about
\SI{+50}{\degree}. \Cref{fig:tprim_phase_1pp} shows the coupling
phases of the $1^{++}$ wave components relative to the \PaOne in the
$\Prho \pi S$ wave. The \tpr dependence of the coupling phases of the
resonance components in the $\Prho \pi S$ and $\PfTwo \pi P$ waves is
constrained via \cref{eq:method:branchingdefinition}. The phase
offset between the coupling phases of the \PaOne in these two waves is
about \SI{+50}{\degree}. However, the variation of the \PaOne
coupling phase in the $\PfTwo \pi P$ wave in the systematic studies is
large. In the $\Prho \pi S$ wave, the coupling phase of the
nonresonant component rises by about \SI{100}{\degree} \wrt the \PaOne
over the analyzed \tpr range. This change of the interference pattern
is needed for the model to describe the movement of the peak in the
intensity distribution of the $\Prho \pi S$ wave with \tpr.
\begin{wideFigureOrNot}[tbp]
\centering
\subfloat[][]{%
\includegraphics[width=\threePlotWidth]{fig53a}%
\label{fig:tprim_phase_1pp_rho}%
}%
\hspace*{\threePlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\threePlotWidth]{fig53b}%
\label{fig:tprim_phase_1pp_f0}%
}%
\hspace*{\threePlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\threePlotWidth]{fig53c}%
\label{fig:tprim_phase_1pp_f2}%
}%
\caption{Similar to \cref{fig:tprim_phase_2mp} but for the wave
components in \subfloatLabel{fig:tprim_phase_1pp_rho}~the
\wave{1}{++}{0}{+}{\Prho}{S},
\subfloatLabel{fig:tprim_phase_1pp_f0}~the
\wave{1}{++}{0}{+}{\PfZero[980]}{P}, and
\subfloatLabel{fig:tprim_phase_1pp_f2}~the
\wave{1}{++}{0}{+}{\PfTwo}{P} wave. The coupling phases of the
\PaOne (blue lines), the \PaOne[1420] (red lines), the
\PaOne[1640] (green lines), and the nonresonant components (black
lines) are shown relative to the \PaOne in the
\wave{1}{++}{0}{+}{\Prho}{S} wave.}
\label{fig:tprim_phase_1pp}
\end{wideFigureOrNot}
The \PaOne[1640] coupling phase \wrt the \PpiTwo is approximately
independent of \tpr with an offset of about \SI{-70}{\degree} (see
\cref{fig:tprim_phase_all_res}). Relative to the \PaOne, the coupling
phases of the \PaOne[1640] rise up to $\tpr \approx \SI{0.3}{\GeVcsq}$
and then level off (see \cref{fig:tprim_phase_1pp}). As discussed in
\cref{sec:onePP_results}, the parameters of the \PaOne[1640] are
mainly determined by the $\PfTwo \pi P$ wave. This is also true for
its coupling phase, which has a much smaller systematic variation in
the $\PfTwo \pi P$ wave. In this wave, the \PaOne[1640] has a phase
offset \wrt the \PaOne of about \SI{+130}{\degree} at high \tpr,
whereas in the $\Prho \pi S$ wave, the phase offset is about
\SI{-130}{\degree}. However, the variation of the latter coupling
phase in the systematic studies is large because the \PaOne[1640] is
only a small signal in the tail of the dominant \PaOne.
The \PaOne[1420] has a nearly constant coupling phase relative to the
\PpiTwo with an offset of about \SI{+40}{\degree} (see
\cref{fig:tprim_phase_all_res}). It therefore behaves qualitatively
similarly to the ground-state resonances. In our model, the
\PaOne[1420] appears only in the $\PfZero[980] \pi P$ wave. Its
coupling phase relative to the \PaOne is shown as red lines in
\cref{fig:tprim_phase_1pp_f0}. This phase rises from about
\SI{-60}{\degree} at low \tpr to about \SI{0}{\degree} at high \tpr
and thus changes more strongly than the one \wrt the \PpiTwo.
The \PaTwo is the narrowest resonance in our analysis. In the two
\wave{2}{++}{}{}{\Prho}{D} waves, all other wave components are very
small in the \SI{1.3}{\GeVcc} mass range. The coupling phase of the
\PaTwo in the $\Prho \pi D$ wave with $M = 1$ relative to the \PpiTwo
shows a weak dependence on \tpr with an offset of about
\SI{+60}{\degree} (see \cref{fig:tprim_phase_all_res}).
\Cref{fig:tprim_phase_2pp} shows the coupling phases of the $2^{++}$
wave components relative to the \PaTwo in the $\Prho \pi D$ wave with
$M = 1$. The \tpr dependence of the coupling phases of the resonance
components in the $\Prho \pi D$ wave with $M = 1$ and in the
$\PfTwo \pi P$ wave is constrained via
\cref{eq:method:branchingdefinition}. The phase offset of the \PaTwo
in these two waves is close to zero, which confirms that we indeed see
the $\PfTwo \pi$ decay mode of the \PaTwo. The coupling phase of the
\PaTwo in the $\Prho \pi D$ wave with $M = 2$ is practically identical
to that in the $\Prho \pi D$ wave with $M = 1$. This result is
particularly remarkable since the $\Prho \pi D$ wave with $M = 2$ has
a small relative intensity and the coupling phase of the \PaTwo
component in this wave is not constrained via
\cref{eq:method:branchingdefinition}.
\begin{wideFigureOrNot}[tbp]
\centering
\subfloat[][]{%
\includegraphics[width=\threePlotWidth]{fig54a}%
\label{fig:tprim_phase_2pp_m1_rho}%
}%
\hspace*{\threePlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\threePlotWidth]{fig54b}%
\label{fig:tprim_phase_2pp_m2_rho}%
}%
\hspace*{\threePlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\threePlotWidth]{fig54c}%
\label{fig:tprim_phase_2pp_f2}%
}%
\caption{Similar to \cref{fig:tprim_phase_2mp} but for the wave
components in \subfloatLabel{fig:tprim_phase_2pp_m1_rho}~the
\wave{2}{++}{1}{+}{\Prho}{D},
\subfloatLabel{fig:tprim_phase_2pp_m2_rho}~the
\wave{2}{++}{2}{+}{\Prho}{D}, and
\subfloatLabel{fig:tprim_phase_2pp_f2}~the
\wave{2}{++}{1}{+}{\PfTwo}{P} wave. The coupling phases of the
\PaTwo (blue lines), the \PaTwo[1700] (green lines), and the
nonresonant component (black lines) are shown relative to the
\PaTwo in the \wave{2}{++}{1}{+}{\Prho}{D} wave.}
\label{fig:tprim_phase_2pp}
\end{wideFigureOrNot}
The coupling phase of the \PaTwo[1700] in the $\Prho \pi D$ wave with
$M = 1$ has a nearly constant offset of \SI{-140}{\degree} \wrt the
\PpiTwo\ (see \cref{fig:tprim_phase_all_res}). Relative to the
\PaTwo, the coupling phase of the \PaTwo[1700] shows a similar
behavior in the $\Prho \pi D$ wave with $M = 1$ and in the
$\PfTwo \pi P$ wave with a nearly constant offset of about
\plusDeg{180} (see \cref{fig:tprim_phase_2pp}). In the
\wave{2}{++}{2}{+}{\Prho}{D} wave, the coupling phase starts at
\SI{0}{\degree} at low \tpr and decreases to \minusDeg{180} at high
\tpr. However, the \PaTwo[1700] signal is very small in this wave and
therefore not extracted reliably (see \cref{sec:twoPP_results}).
Compared to the other ground-state resonances in
\cref{fig:tprim_phase_all_res}, the coupling phase of the \PaFour is
closest to that of the \PpiTwo with an offset of about
\SI{+30}{\degree}. \Cref{fig:tprim_phase_4pp} shows the coupling
phases of the $4^{++}$ wave components relative to the \PaFour in the
$\Prho \pi G$ wave. The \tpr dependence of the coupling phases of the
\PaFour in the $\Prho \pi G$ and $\PfTwo \pi F$ waves is constrained
via \cref{eq:method:branchingdefinition}. The coupling phase offset
of the \PaFour in the $\PfTwo \pi F$ wave is close to \SI{0}{\degree}.
\begin{wideFigureOrNot}[tbp]
\centering
\subfloat[][]{%
\includegraphics[width=\twoPlotWidth]{fig55a}%
\label{fig:tprim_phase_4pp_rho}%
}%
\hspace*{\twoPlotSpacing}%
\subfloat[][]{%
\includegraphics[width=\twoPlotWidth]{fig55b}%
\label{fig:tprim_phase_4pp_f2}%
}%
\caption{Similar to \cref{fig:tprim_phase_2mp} but for the wave
components in \subfloatLabel{fig:tprim_phase_4pp_rho}~the
\wave{4}{++}{1}{+}{\Prho}{G} and
\subfloatLabel{fig:tprim_phase_4pp_f2}~the
\wave{4}{++}{1}{+}{\PfTwo}{F} wave. The coupling phases of the
\PaFour (blue lines) and the nonresonant component (black lines)
are shown relative to the \PaFour in the
\wave{4}{++}{1}{+}{\Prho}{G} wave.}
\label{fig:tprim_phase_4pp}
\end{wideFigureOrNot}
%
\section{Summary and conclusions}
\label{sec:conclusions}
In this paper, we have presented the results of a fit of a
Breit-Wigner resonance model to 14~selected partial-wave amplitudes
with $\JPC = 0^{-+}$, $1^{++}$, $2^{++}$, $2^{-+}$, $4^{++}$, and
spin-exotic $1^{-+}$ quantum numbers. The amplitudes result from a
partial-wave analysis of \num{46e6}~exclusive events of the
diffractive reaction \reaction using a model with \num{88} partial
waves~\cite{Adolph:2015tqa}.
We have measured the masses and widths of the \aJ-like resonances:
\PaOne, \PaOne[1640], \PaTwo, \PaTwo[1700], \PaFour, and of the
resonancelike \PaOne[1420] [see \cref{fig:summary:a,tab:parameters}];
and those of the \piJ-like resonances: \Ppi[1800], \PpiTwo,
\PpiTwo[1880], \PpiTwo[2005], and the spin-exotic \PpiOne[1600] [see
\cref{fig:summary:p,tab:parameters}].
\begin{wideFigureOrNot}[tbp]
\centering
\subfloat[][]{%
\includegraphics[width=0.5\textwidth]{fig56a}%
\label{fig:summary:a}%
}%
%
\subfloat[][]{%
\includegraphics[width=0.5\textwidth]{fig56b}%
\label{fig:summary:p}%
}%
\caption{Masses and widths of \subfloatLabel{fig:summary:a}~\aJ-like
and \subfloatLabel{fig:summary:p}~\piJ-like resonances extracted
in this analysis (points). The systematic uncertainties are
represented by the boxes. The statistical uncertainties are at
least an order of magnitude smaller than the systematic ones and
are hence omitted. Different colors encode different resonances.}
\label{fig:summary}
\end{wideFigureOrNot}
The parameters of \PaOne[1420], \PaTwo, \PaFour, \Ppi[1800], and
\PpiTwo are reliably extracted with comparatively small uncertainties.
The consistency of the \PaOne[1420] signal with a Breit-Wigner
amplitude is confirmed. The \PaOne[1420] parameter values are
consistent with those from a simpler analysis of the same data
in~\refCite{Adolph:2015pws}, but have smaller uncertainties. The
\PaTwo and \Ppi[1800] parameter values are consistent with previous
measurements. The measured values of the \PaFour mass and width are
the most accurate so far. We find a lower \PaFour mass and a larger
width than some of the previous experiments.
We observe production of the \PaTwo with spin projection $M = 2$ along
the beam axis. In order to describe the $2^{++}$ partial-wave
amplitudes, the inclusion of an excited \PaTwo[1700] is necessary. It
appears most strongly in the \wave{2}{++}{1}{+}{\PfTwo}{P} wave. The
\PaTwo[1700] mass is consistent with previous measurements while the
width is larger.
In order to describe the four $2^{-+}$ partial-wave amplitudes that
are included in the fit, three resonances are needed, the \PpiTwo, the
\PpiTwo[1880], and the \PpiTwo[2005]. The latter one is not an
established state. The measured \PpiTwo[2005] parameter values are
consistent with the two measurements by previous experiments. We find
the \PpiTwo to be lighter and broader than the world average. The
$\PpiTwo[1880] \to 3\pi$ decay is observed for the first time. The
measured \PpiTwo[1880] width is consistent with the world average, and
the mass is found to be smaller.
The \wave{1}{++}{0}{+}{\Prho}{S} and \wave{1}{-+}{1}{+}{\Prho}{P}
partial-wave amplitudes are dominated by the nonresonant components
and are difficult to describe. This is a main source of systematic
uncertainty. The shape of the intensity distributions of both waves
depends strongly on \tpr. By fitting the resonance model
simultaneously in 11~\tpr~bins, we achieve a better separation of the
resonant and nonresonant components in these waves compared to
previous analyses of diffractive-dissociation reactions. In both
waves, the intensity of the nonresonant components behaves similarly to
a model for the Deck effect. The resonance model is not able to
describe all details of the \PaOne peak in the
\wave{1}{++}{0}{+}{\Prho}{S} wave, which leads to comparatively large
uncertainties for the \PaOne parameters. The data require an excited
\PaOne[1640] state. However, because of the dominant \PaOne, the
\PaOne[1640] parameters are not well determined. The data also
require a spin-exotic resonance, the \PpiOne[1600], in the
\wave{1}{-+}{1}{+}{\Prho}{P} wave. The \tpr-resolved analysis allows
us to establish for the first time that a significant \PpiOne[1600]
signal appears only for $\tpr \gtrsim \SI{0.5}{\GeVcsq}$, whereas at
low \tpr the intensity of the spin-exotic wave is saturated by
Deck-like nonresonant contributions. The \PpiOne[1600] parameters have
large uncertainties. The measured width is significantly larger than
that observed in previous experiments including our own result from
the data taken with a lead target, but it has a large systematic
uncertainty toward smaller values.
The resonance yields are found to be much more sensitive to model
assumptions than the resonance parameters. For the \PaTwo and \PaFour
the systematic uncertainties are small enough to extract their
branching-fraction ratios for the decays into $\Prho \pi$ and
$\PfTwo \pi$. The branching-fraction ratio for the \PaFour was
measured to be
$B_{\Prho* \pi G, \PfTwo* \pi F}^{\PaFour*, \text{corr}} =
\numaerr{2.9}{0.6}{0.4}$. This value is corrected for the unobserved
\threePiN decay mode, the effects from self-interference, and the
branching fraction of the \PfTwo into $2\pi$. The measured value is
in good agreement with predictions by the ${}^3P_0$ decay model. The
corresponding branching-fraction ratio
$B_{\Prho* \pi D, \PfTwo* \pi P}^{\PaTwo*, \text{corr}} =
\numaerr{16.5}{1.2}{2.4}$ for the \PaTwo was measured for the first
time to our knowledge.
Since the resonance-model fit is performed simultaneously in 11~bins
of \tpr, the \tpr dependence of the amplitudes of the resonant and
nonresonant wave components has been studied in unprecedented detail.
The \tpr dependence of the intensities of most of the resonance
signals follows approximately the expected exponential behavior with
slope parameters between about \SIrange{7}{9}{\perGeVcsq} (see
\cref{tab:slopes}). This is in particular true for the \PaOne[1420].
The \PpiOne[1600] exhibits an exponential \tpr spectrum only if the
Deck model is used to describe the nonresonant components. The slope
parameters of the higher-mass states are found to be smaller than
those of the ground states. In many waves, the slope of the
nonresonant component is steeper than that of the resonances.
The \tpr dependence of the relative phases of the wave components was
studied for the first time to our knowledge. Most resonances,
including the \PaOne[1420], are produced with a phase that is
approximately independent of \tpr, which is expected if the production
mechanism is the same over the analyzed \tpr range. The production
phase of the \PpiOne[1600] exhibits a stronger dependence on \tpr. In
many waves the production phase of the nonresonant component exhibits
a strong \tpr dependence, which is a hint that more than one
production mechanism contributes.
\section*{The COMPASS Collaboration}
\label{app:collab}
\renewcommand\labelenumi{\textsuperscript{\theenumi}~}
\renewcommand\theenumi{\arabic{enumi}}
\begin{flushleft}
M.~Aghasyan\Irefn{triest_i},
M.G.~Alexeev\Irefn{turin_u},
G.D.~Alexeev\Irefn{dubna},
A.~Amoroso\Irefnn{turin_u}{turin_i},
V.~Andrieux\Irefnn{illinois}{saclay},
N.V.~Anfimov\Irefn{dubna},
V.~Anosov\Irefn{dubna},
A.~Antoshkin\Irefn{dubna},
K.~Augsten\Irefnn{dubna}{praguectu},
W.~Augustyniak\Irefn{warsaw},
A.~Austregesilo\Irefn{munichtu},
C.D.R.~Azevedo\Irefn{aveiro},
B.~Bade{\l}ek\Irefn{warsawu},
F.~Balestra\Irefnn{turin_u}{turin_i},
M.~Ball\Irefn{bonniskp},
J.~Barth\Irefn{bonnpi},
R.~Beck\Irefn{bonniskp},
Y.~Bedfer\Irefn{saclay},
J.~Bernhard\Irefnn{mainz}{cern},
K.~Bicker\Irefnn{munichtu}{cern},
E.~R.~Bielert\Irefn{cern},
R.~Birsa\Irefn{triest_i},
M.~Bodlak\Irefn{praguecu},
P.~Bordalo\Irefn{lisbon}\Aref{a},
F.~Bradamante\Irefnn{triest_u}{triest_i},
A.~Bressan\Irefnn{triest_u}{triest_i},
M.~B\"uchele\Irefn{freiburg},
V.E.~Burtsev\Irefn{tomsk},
W.-C.~Chang\Irefn{taipei},
C.~Chatterjee\Irefn{calcutta},
M.~Chiosso\Irefnn{turin_u}{turin_i},
I.~Choi\Irefn{illinois},
A.G.~Chumakov\Irefn{tomsk},
S.-U.~Chung\Irefn{munichtu}\Aref{b},
A.~Cicuttin\Irefn{triest_i}\Aref{ictp},
M.L.~Crespo\Irefn{triest_i}\Aref{ictp},
S.~Dalla Torre\Irefn{triest_i},
S.S.~Dasgupta\Irefn{calcutta},
S.~Dasgupta\Irefnn{triest_u}{triest_i},
O.Yu.~Denisov\Irefn{turin_i}\CorAuth,
L.~Dhara\Irefn{calcutta},
S.V.~Donskov\Irefn{protvino},
N.~Doshita\Irefn{yamagata},
Ch.~Dreisbach\Irefn{munichtu},
W.~D\"unnweber\Arefs{r},
R.R.~Dusaev\Irefn{tomsk},
M.~Dziewiecki\Irefn{warsawtu},
A.~Efremov\Irefn{dubna},
P.D.~Eversheim\Irefn{bonniskp},
M.~Faessler\Arefs{r},
A.~Ferrero\Irefn{saclay},
M.~Finger\Irefn{praguecu},
M.~Finger~jr.\Irefn{praguecu},
H.~Fischer\Irefn{freiburg},
C.~Franco\Irefn{lisbon},
N.~du~Fresne~von~Hohenesche\Irefnn{mainz}{cern},
J.M.~Friedrich\Irefn{munichtu}\CorAuth,
V.~Frolov\Irefnn{dubna}{cern},
E.~Fuchey\Irefn{saclay}\Aref{p2i},
F.~Gautheron\Irefn{bochum},
O.P.~Gavrichtchouk\Irefn{dubna},
S.~Gerassimov\Irefnn{moscowlpi}{munichtu},
J.~Giarra\Irefn{mainz},
I.~Gnesi\Irefnn{turin_u}{turin_i},
M.~Gorzellik\Irefn{freiburg}\Aref{c},
A.~Grasso\Irefnn{turin_u}{turin_i},
A.~Gridin\Irefn{dubna},
M.~Grosse Perdekamp\Irefn{illinois},
B.~Grube\Irefn{munichtu}\CorAuth,
T.~Grussenmeyer\Irefn{freiburg},
A.~Guskov\Irefn{dubna},
F.~Haas\Irefn{munichtu},
D.~Hahne\Irefn{bonnpi},
G.~Hamar\Irefn{triest_i},
D.~von~Harrach\Irefn{mainz},
R.~Heitz\Irefn{illinois},
F.~Herrmann\Irefn{freiburg},
N.~Horikawa\Irefn{nagoya}\Aref{d},
N.~d'Hose\Irefn{saclay},
C.-Y.~Hsieh\Irefn{taipei}\Aref{x},
S.~Huber\Irefn{munichtu},
S.~Ishimoto\Irefn{yamagata}\Aref{e},
A.~Ivanov\Irefnn{turin_u}{turin_i},
T.~Iwata\Irefn{yamagata},
V.~Jary\Irefn{praguectu},
R.~Joosten\Irefn{bonniskp},
P.~J\"org\Irefn{freiburg},
K.~Juraskova\Irefn{praguectu},
E.~Kabu\ss\Irefn{mainz},
A.~Kerbizi\Irefnn{triest_u}{triest_i},
B.~Ketzer\Irefn{bonniskp},
G.V.~Khaustov\Irefn{protvino},
Yu.A.~Khokhlov\Irefn{protvino}\Aref{g},
Yu.~Kisselev\Irefn{dubna},
F.~Klein\Irefn{bonnpi},
J.H.~Koivuniemi\Irefnn{bochum}{illinois},
V.N.~Kolosov\Irefn{protvino},
K.~Kondo\Irefn{yamagata},
I.~Konorov\Irefnn{moscowlpi}{munichtu},
V.F.~Konstantinov\Irefn{protvino},
A.M.~Kotzinian\Irefn{turin_i}\Aref{yerevan},
O.M.~Kouznetsov\Irefn{dubna},
Z.~Kral\Irefn{praguectu},
M.~Kr\"amer\Irefn{munichtu},
F.~Krinner\Irefn{munichtu},
Z.V.~Kroumchtein\Irefn{dubna}\Deceased,
Y.~Kulinich\Irefn{illinois},
F.~Kunne\Irefn{saclay},
K.~Kurek\Irefn{warsaw},
R.P.~Kurjata\Irefn{warsawtu},
I.I.~Kuznetsov\Irefn{tomsk},
A.~Kveton\Irefn{praguectu},
A.A.~Lednev\Irefn{protvino}\Deceased,
E.A.~Levchenko\Irefn{tomsk},
M.~Levillain\Irefn{saclay},
S.~Levorato\Irefn{triest_i},
Y.-S.~Lian\Irefn{taipei}\Aref{y},
J.~Lichtenstadt\Irefn{telaviv},
R.~Longo\Irefnn{turin_u}{turin_i},
V.E.~Lyubovitskij\Irefn{tomsk},
A.~Maggiora\Irefn{turin_i},
A.~Magnon\Irefn{illinois},
N.~Makins\Irefn{illinois},
N.~Makke\Irefn{triest_i}\Aref{ictp},
G.K.~Mallot\Irefn{cern},
S.A.~Mamon\Irefn{tomsk},
B.~Marianski\Irefn{warsaw},
A.~Martin\Irefnn{triest_u}{triest_i},
J.~Marzec\Irefn{warsawtu},
J.~Matou{\v s}ek\Irefnnn{triest_u}{triest_i}{praguecu},
H.~Matsuda\Irefn{yamagata},
T.~Matsuda\Irefn{miyazaki},
G.V.~Meshcheryakov\Irefn{dubna},
M.~Meyer\Irefnn{illinois}{saclay},
W.~Meyer\Irefn{bochum},
Yu.V.~Mikhailov\Irefn{protvino},
M.~Mikhasenko\Irefn{bonniskp},
E.~Mitrofanov\Irefn{dubna},
N.~Mitrofanov\Irefn{dubna},
Y.~Miyachi\Irefn{yamagata},
A.~Moretti\Irefn{triest_u},
A.~Nagaytsev\Irefn{dubna},
F.~Nerling\Irefn{mainz},
D.~Neyret\Irefn{saclay},
J.~Nov{\'y}\Irefnn{praguectu}{cern},
W.-D.~Nowak\Irefn{mainz},
G.~Nukazuka\Irefn{yamagata},
A.S.~Nunes\Irefn{lisbon},
A.G.~Olshevsky\Irefn{dubna},
I.~Orlov\Irefn{dubna},
M.~Ostrick\Irefn{mainz},
D.~Panzieri\Irefn{turin_i}\Aref{turin_p},
B.~Parsamyan\Irefnn{turin_u}{turin_i},
S.~Paul\Irefn{munichtu},
J.-C.~Peng\Irefn{illinois},
F.~Pereira\Irefn{aveiro},
M.~Pe{\v s}ek\Irefn{praguecu},
M.~Pe{\v s}kov\'a\Irefn{praguecu},
D.V.~Peshekhonov\Irefn{dubna},
N.~Pierre\Irefnn{mainz}{saclay},
S.~Platchkov\Irefn{saclay},
J.~Pochodzalla\Irefn{mainz},
V.A.~Polyakov\Irefn{protvino},
J.~Pretz\Irefn{bonnpi}\Aref{h},
M.~Quaresma\Irefn{lisbon},
C.~Quintans\Irefn{lisbon},
S.~Ramos\Irefn{lisbon}\Aref{a},
C.~Regali\Irefn{freiburg},
G.~Reicherz\Irefn{bochum},
C.~Riedl\Irefn{illinois},
N.S.~Rogacheva\Irefn{dubna},
D.I.~Ryabchikov\Irefnn{protvino}{munichtu},
A.~Rybnikov\Irefn{dubna},
A.~Rychter\Irefn{warsawtu},
R.~Salac\Irefn{praguectu},
V.D.~Samoylenko\Irefn{protvino},
A.~Sandacz\Irefn{warsaw},
C.~Santos\Irefn{triest_i},
S.~Sarkar\Irefn{calcutta},
I.A.~Savin\Irefn{dubna},
T.~Sawada\Irefn{taipei},
G.~Sbrizzai\Irefnn{triest_u}{triest_i},
P.~Schiavon\Irefnn{triest_u}{triest_i},
T.~Schl\"uter\Arefs{r},
S.~Schmeing\Irefn{munichtu},
H.~Schmieden\Irefn{bonnpi},
K.~Sch\"onning\Irefn{cern}\Aref{i},
E.~Seder\Irefn{saclay},
A.~Selyunin\Irefn{dubna},
L.~Silva\Irefn{lisbon},
L.~Sinha\Irefn{calcutta},
S.~Sirtl\Irefn{freiburg},
M.~Slunecka\Irefn{dubna},
J.~Smolik\Irefn{dubna},
A.~Srnka\Irefn{brno},
D.~Steffen\Irefnn{cern}{munichtu},
M.~Stolarski\Irefn{lisbon},
O.~Subrt\Irefnn{cern}{praguectu},
M.~Sulc\Irefn{liberec},
H.~Suzuki\Irefn{yamagata}\Aref{d},
A.~Szabelski\Irefnnn{triest_u}{triest_i}{warsaw},
T.~Szameitat\Irefn{freiburg}\Aref{c},
P.~Sznajder\Irefn{warsaw},
M.~Tasevsky\Irefn{dubna},
S.~Tessaro\Irefn{triest_i},
F.~Tessarotto\Irefn{triest_i},
A.~Thiel\Irefn{bonniskp},
J.~Tomsa\Irefn{praguecu},
F.~Tosello\Irefn{turin_i},
V.~Tskhay\Irefn{moscowlpi},
S.~Uhl\Irefn{munichtu},
B.I.~Vasilishin\Irefn{tomsk},
A.~Vauth\Irefn{cern},
J.~Veloso\Irefn{aveiro},
A.~Vidon\Irefn{saclay},
M.~Virius\Irefn{praguectu},
S.~Wallner\Irefn{munichtu},
M.~Wilfert\Irefn{mainz},
J.~ter~Wolbeek\Irefn{freiburg}\Aref{c},
K.~Zaremba\Irefn{warsawtu},
P.~Zavada\Irefn{dubna},
M.~Zavertyaev\Irefn{moscowlpi},
E.~Zemlyanichkina\Irefn{dubna},
M.~Ziembicki\Irefn{warsawtu}
\end{flushleft}
\begin{Authlist}
\item \Idef{aveiro}{University of Aveiro, Department of Physics, 3810-193 Aveiro, Portugal}
\item \Idef{bochum}{Universit\"at Bochum, Institut f\"ur Experimentalphysik, 44780 Bochum, Germany\Arefs{l}\Aref{s}}
\item \Idef{bonniskp}{Universit\"at Bonn, Helmholtz-Institut f\"ur Strahlen- und Kernphysik, 53115 Bonn, Germany\Arefs{l}}
\item \Idef{bonnpi}{Universit\"at Bonn, Physikalisches Institut, 53115 Bonn, Germany\Arefs{l}}
\item \Idef{brno}{Institute of Scientific Instruments, AS CR, 61264 Brno, Czech Republic\Arefs{m}}
\item \Idef{calcutta}{Matrivani Institute of Experimental Research \& Education, Calcutta-700 030, India\Arefs{n}}
\item \Idef{dubna}{Joint Institute for Nuclear Research, 141980 Dubna, Moscow region, Russia}
\item \Idef{freiburg}{Universit\"at Freiburg, Physikalisches Institut, 79104 Freiburg, Germany\Arefs{l}\Aref{s}}
\item \Idef{cern}{CERN, 1211 Geneva 23, Switzerland}
\item \Idef{liberec}{Technical University in Liberec, 46117 Liberec, Czech Republic\Arefs{m}}
\item \Idef{lisbon}{LIP, 1000-149 Lisbon, Portugal\Arefs{p}}
\item \Idef{mainz}{Universit\"at Mainz, Institut f\"ur Kernphysik, 55099 Mainz, Germany\Arefs{l}}
\item \Idef{miyazaki}{University of Miyazaki, Miyazaki 889-2192, Japan\Arefs{q}}
\item \Idef{moscowlpi}{Lebedev Physical Institute, 119991 Moscow, Russia}
\item \Idef{munichtu}{Technische Universit\"at M\"unchen, Physik-Department, 85748 Garching, Germany\Arefs{l}\Aref{r}}
\item \Idef{nagoya}{Nagoya University, 464 Nagoya, Japan\Arefs{q}}
\item \Idef{praguecu}{Charles University in Prague, Faculty of Mathematics and Physics, 18000 Prague, Czech Republic\Arefs{m}}
\item \Idef{praguectu}{Czech Technical University in Prague, 16636 Prague, Czech Republic\Arefs{m}}
\item \Idef{protvino}{NRC \enquote{Kurchatov Institute}, IHEP, 142281 Protvino, Russia}
\item \Idef{saclay}{IRFU, CEA, Universit\'e Paris-Saclay, 91191 Gif-sur-Yvette, France\Arefs{s}}
\item \Idef{taipei}{Academia Sinica, Institute of Physics, Taipei 11529, Taiwan\Arefs{tw}}
\item \Idef{telaviv}{Tel Aviv University, School of Physics and Astronomy, 69978 Tel Aviv, Israel\Arefs{t}}
\item \Idef{triest_u}{University of Trieste, Department of Physics, 34127 Trieste, Italy}
\item \Idef{triest_i}{Trieste Section of INFN, 34127 Trieste, Italy}
\item \Idef{turin_u}{University of Turin, Department of Physics, 10125 Turin, Italy}
\item \Idef{turin_i}{Torino Section of INFN, 10125 Turin, Italy}
\item \Idef{tomsk}{Tomsk Polytechnic University, 634050 Tomsk, Russia\Arefs{nauka}}
\item \Idef{illinois}{University of Illinois at Urbana-Champaign, Department of Physics, Urbana, Illinois 61801-3080, USA\Arefs{nsf}}
\item \Idef{warsaw}{National Centre for Nuclear Research, 00-681 Warsaw, Poland\Arefs{u}}
\item \Idef{warsawu}{University of Warsaw, Faculty of Physics, 02-093 Warsaw, Poland\Arefs{u}}
\item \Idef{warsawtu}{Warsaw University of Technology, Institute of Radioelectronics, 00-665 Warsaw, Poland\Arefs{u} }
\item \Idef{yamagata}{Yamagata University, Yamagata 992-8510, Japan\Arefs{q} }
\end{Authlist}
\renewcommand\theenumi{\alph{enumi}}
\begin{Authlist}
\item [{\makebox[2mm][l]{\textsuperscript{\#}}}] Corresponding authors
\item [{\makebox[2mm][l]{\textsuperscript{*}}}] Deceased
\item \Adef{a}{Also at Instituto Superior T\'ecnico, Universidade de Lisboa, Lisbon, Portugal}
\item \Adef{b}{Also at Department of Physics, Pusan National University, Busan 609-735, Republic of Korea and at Physics Dept., Brookhaven National Laboratory, Upton, NY 11973, USA}
\item \Adef{ictp}{Also at Abdus Salam ICTP, 34151 Trieste, Italy}
\item \Adef{r}{Supported by the DFG cluster of excellence `Origin and Structure of the Universe' (www.universe-cluster.de) (Germany)}
\item \Adef{p2i}{Supported by the Laboratoire d'excellence P2IO (France)}
\item \Adef{c}{Supported by the DFG Research Training Group Programmes 1102 and 2044 (Germany)}
\item \Adef{d}{Also at Chubu University, Kasugai, Aichi 487-8501, Japan\Arefs{q}}
\item \Adef{x}{Also at Department of Physics, National Central University, 300 Jhongda Road, Jhongli 32001, Taiwan}
\item \Adef{e}{Also at KEK, 1-1 Oho, Tsukuba, Ibaraki 305-0801, Japan}
\item \Adef{g}{Also at Moscow Institute of Physics and Technology, Moscow Region, 141700, Russia}
\item \Adef{yerevan}{Also at Yerevan Physics Institute, Alikhanian Br. Street, Yerevan, Armenia, 0036}
\item \Adef{y}{Also at Department of Physics, National Kaohsiung Normal University, Kaohsiung County 824, Taiwan}
\item \Adef{turin_p}{Also at University of Eastern Piedmont, 15100 Alessandria, Italy}
\item \Adef{h}{Present address: RWTH Aachen University, III.\ Physikalisches Institut, 52056 Aachen, Germany}
\item \Adef{i}{Present address: Uppsala University, Box 516, 75120 Uppsala, Sweden}
\item \Adef{l}{Supported by BMBF - Bundesministerium f\"ur Bildung und Forschung (Germany)}
\item \Adef{s}{Supported by FP7, HadronPhysics3, Grant 283286 (European Union)}
\item \Adef{m}{Supported by MEYS, Grant LG13031 (Czech Republic)}
\item \Adef{n}{Supported by SAIL (CSR) and B.Sen fund (India)}
\item \Adef{p}{\raggedright Supported by FCT - Funda\c{c}\~{a}o para a Ci\^{e}ncia e Tecnologia, COMPETE and QREN, Grants CERN/FP 116376/2010, 123600/2011 and CERN/FIS-NUC/0017/2015 (Portugal)}
\item \Adef{q}{Supported by MEXT and JSPS, Grants 18002006, 20540299, 18540281 and 26247032, the Daiko and Yamada Foundations (Japan)}
\item \Adef{tw}{Supported by the Ministry of Science and Technology (Taiwan)}
\item \Adef{t}{Supported by the Israel Academy of Sciences and Humanities (Israel)}
\item \Adef{nauka}{Supported by the Russian Federation program ``Nauka'' (Contract No. 0.1764.GZB.2017) (Russia)}
\item \Adef{nsf}{Supported by the National Science Foundation, Grant no. PHY-1506416 (USA)}
\item \Adef{u}{Supported by NCN, Grant 2015/18/M/ST2/00550 (Poland)}
\end{Authlist}
The class of complex-valued harmonic mappings $f$ defined on a
simply connected domain $D$ of the complex plane $\mathbb{C}$ has attracted
the attention of function theorists because it
generalizes the class of analytic functions and has rich applications
in many different fields. Every such $f$ has the canonical decomposition $f=h+\overline{g}$,
where both $h$ and $g$ are analytic in $D$ and $g(z_0)=0$ for some prescribed point $z_0\in D$
(cf. \cite{CS,Du}). For a complex-valued and continuously differentiable mapping $f$, we let
$$\lambda_{f}=|f_{z}|-|f_{\overline{z}}| ~\mbox{ and }~
\Lambda_{f}=|f_{z}|+|f_{\overline{z}}|
$$
so that the Jacobian $J_{f}$ of $f$ takes the form
$$J_{f}=\lambda_{f}\Lambda_{f}=|f_{z}|^{2}-|f_{\overline{z}}|^{2}.
$$
Moreover, a necessary and sufficient condition for harmonic mappings
$f=h+\overline{g}$ to be locally univalent and sense preserving in
$D$ is that $J_f=|h'|^2-|g'|^2>0$, or equivalently, that its dilatation
$\omega_f(z)=g'(z)/h'(z)$ satisfies the inequality $|\omega_f(z)|<1$ for $z\in D$
(see \cite{Le} and \cite{CS,Du,PonRasi2013}).
Let $k\in [0,1)$ be a constant. Then we say that a sense-preserving harmonic
mapping $f=h+\overline{g}$ in $D$ is a $k$-quasiconformal mapping
if $|\omega_f(z)|\leq k$ holds in $D$.
Throughout this paper, we consider harmonic mappings defined on the unit disk $\mathbb{D}=\{z\in \mathbb{C}:\, |z|<1\}$.
Denote by ${\mathcal H}$ the class of harmonic mappings $f=h+\overline{g}$ in $\mathbb{D}$
such that $h(0)=g(0)=h'(0)-1=0$ and consider the family
$${\mathcal S}_{H}=\{f\in {\mathcal H}:\, f \mbox{ is sense-preserving and univalent in $\mathbb{D}$}
\}.
$$
Often it is convenient to work with
$${\mathcal S}_{H}^0=\{f\in {\mathcal S}_{H}:\, f_{\overline{z}}(0)=0\}.
$$
Although both the families ${\mathcal S}_{H}$ and ${\mathcal S}_{H}^0$ are known to be normal, only
${\mathcal S}_{H}^0$ is compact (see \cite{CS}). We also denote the class of analytic functions
$f$ in $\mathbb{D}$ with $f(0)=f'(0)-1=0$ by ${\mathcal A}$ so that $\mathcal H$ reduces to
${\mathcal A}$ when the co-analytic part $g$ of $f=h+\overline{g}\in\mathcal H$ vanishes identically in $\ID$.
Then the set ${\mathcal S}:={\mathcal A}\cap {\mathcal S}_{H}^0$ of all
normalized univalent analytic functions in $\ID$
is the central object in the study of geometric function theory so that
$ {\mathcal S}\subset {\mathcal S}_{H}^0\subset {\mathcal S}_{H}.$
We denote by $d_h(z,w)$ the hyperbolic distance between $z, w \in \mathbb{D}$, that is,
$$d_h(z,w)=\frac{1}{2}\log\left (\frac{1+\left |\frac{z-w}{1-\overline{z}w}\right |}{1-\left |\frac{z-w}{1-\overline{z}w}\right |}\right ).
$$
We say that a harmonic mapping
$f=h+\overline{g}$ in $\mathbb{D}$ is uniformly locally univalent if
$f$ is univalent in each hyperbolic disk
$$D_h(a,\rho)=\left \{z\in \mathbb{D}:\, d_h(z,a)< \rho\right \}
$$
with center $a\in \mathbb{D}$ and hyperbolic radius $\rho$ (independent of the center), $0<\rho\leq \infty$.
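A short computation makes these hyperbolic disks explicit: since
$$d_h(z,a)<\rho ~\Longleftrightarrow~ \left |\frac{z-a}{1-\overline{a}z}\right |<\tanh\rho,
$$
the disk $D_h(a,\rho)$ is the image of the Euclidean disk $\{\xi:\, |\xi|<\tanh\rho\}$ under the M\"obius transformation $\xi\mapsto (\xi+a)/(1+\overline{a}\xi)$ of $\mathbb{D}$ onto itself.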
The subscript $h$ in $d_h$ and $D_h$ should not be confused with the analytic part $h$ of the harmonic mapping $f$.
If $f$ is analytic, then the above definition reduces to that of uniformly
locally univalent (analytic) functions. We know that a holomorphic
universal covering map of a plane domain $D$ is uniformly locally
univalent if and only if the boundary of $D$ is uniformly perfect
(cf. \cite{Ch, Su}). Also, in \cite{Ya}, Yamashita showed that an
analytic function $f$ in $\mathbb{D}$ is uniformly locally univalent in $\ID$
if and only if the pre-Schwarzian derivative $T_f=f''/f'$ of $f$ is
hyperbolically bounded, i.e., the norm
$$\|T_f\|=\sup_{z\in \mathbb{D}}(1-|z|^2)|T_f(z)|
$$
is finite and this means that $\log f'$ belongs to the Bloch space
$\mathscr{B}$ (cf. \cite{ACP, Co1}).
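For instance, for the Koebe function $k(z)=z/(1-z)^{2}\in {\mathcal S}$ we have
$$T_k(z)=\frac{1}{1+z}+\frac{3}{1-z}=\frac{2(2+z)}{1-z^{2}},
$$
so that
$$\|T_k\|=\sup_{z\in \mathbb{D}}(1-|z|^{2})\,\frac{2|2+z|}{|1-z^{2}|}=6,
$$
which is known to be the largest possible value of the norm within the class ${\mathcal S}$.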
In Section \ref{sec-2} (see Theorem \ref{thm2.1}), we characterize the
uniformly locally univalent harmonic mappings $f=h+\overline{g}$ in
terms of the pre-Schwarzian derivative of $h+e^{i\theta}g$ for each $\theta\in [0,2\pi]$.
This result and the corresponding results in \cite{KS} help us to
obtain sharp distortion, growth and covering theorems (see Section
\ref{sec-3}) for the class ${\mathcal B}_{H}(\lambda)$,
where $\lambda$ is a positive real number, and
$${\mathcal B}_{H}(\lambda)=\{f=h+\overline{g}\in {\mathcal H}:\, \|T_f\|\leq 2\lambda\}
$$ with
\be\label{eq1.1} \|T_f\|:=\sup_{z\in \mathbb{D},\ \theta\in [0,
2\pi]} (1-|z|^2)\left
|\frac{h''(z)+e^{i\theta}g''(z)}{h'(z)+e^{i\theta}g'(z)}\right|.
\ee
Henceforth, $\|T_f\|$ is defined by \eqref{eq1.1} in the case of harmonic mappings $f=h+\overline{g}$ in $\ID$.
It is known that for $\lambda>1$, the class ${\mathcal B}(\lambda)$ and
the subclass ${\mathcal B}(\lambda)\cap{\mathcal S}$ are contained in the Hardy space
$H^p$ with $0<p<1/(\lambda^2-1)$ and $0<p<1/(\lambda-1)$, respectively (cf. \cite{Ki, KS1}).
In Section \ref{sec-4'}, we consider relationships between the
space ${\mathcal B}_{H}(\lambda)$ and the harmonic Hardy space. We
also prove that a $k$-quasiconformal harmonic mapping $f\in
{\mathcal B}_{H}(\lambda)$ ($\lambda>1$) is contained in the
harmonic Hardy space $h^p$ with $0<p<1/(\lambda-1)$, and also obtain
that ${\mathcal B}_{H}(\lambda)\subset h^p$ with
$0<p<1/(\lambda^2-1)$. Finally, in the last section, as applications
of distortion estimate obtained in Section \ref{sec-3}, we discuss
the growth of coefficients for harmonic mappings in ${\mathcal
B}_{H}(\lambda)$ ($\lambda>1$).
In \cite{KS}, the authors discussed the set ${\mathcal B}(\lambda):={\mathcal A}\cap {\mathcal
B}_{H}(\lambda)$ and obtained distortion estimates for analytic
functions in ${\mathcal B}(\lambda)$ in terms of $\lambda$, and
characterization for functions in ${\mathcal B}(\lambda)$ (cf.
\cite[Proposition 1.1]{KS}). As a consequence of Theorem
\ref{thm2.1} in Section \ref{sec-2} and \cite[Proposition 1.1]{KS},
we can easily obtain the following corollary which characterizes
harmonic mappings in ${\mathcal B}_{H}(\lambda)$. We omit its proof;
this particular case is indeed a generalization of an earlier known
result (see \cite[Proposition 1.1]{KS}) to the case of harmonic
mappings.
\bcor \label{thm2.2}
A locally univalent harmonic mapping $f=h+\overline{g}\in {\mathcal H}$
belongs to ${\mathcal B}_{H}(\lambda)$ if and only if, for each
pair of points $z_1$, $z_2$ in $\mathbb{D}$ and $\theta\in
[0,2\pi]$,
$$|u_\theta(z_1)-u_\theta(z_2)|\leq 2\lambda d_h(z_1, z_2),
$$
where $u_\theta(z)=\log \big(h'(z)+e^{i\theta}g'(z)\big)$.
\ecor
\section{Characterizations of Uniformly locally univalent harmonic mappings}\label{sec-2}
We now state our first result which is indeed a generalization of \cite[Theorem 1]{Ya} to the case of harmonic
mappings.
\bthm\label{thm2.1} A harmonic mapping $f=h+\overline{g}$ is
uniformly locally univalent in $\mathbb{D}$ if and only if $\|T_f\|<\infty$.
\ethm
For the proof of the sufficiency of Theorem \ref{thm2.1}, we need the following classical result due
to Noshiro \cite{No}.
\begin{Lem}\label{Noshi-lemA}
Let $f(z)=z+\sum_{k=2}^{\infty}a_{k}z^{k}$ be analytic for $|z|<R$ and $|f'(z)|<M$ for $|z|<R$.
Then the disk $|z|<R/M$ is mapped on a starlike domain with respect to the origin by $f$
and also by all its polynomial sections $f_n(z)=z+\sum_{k=2}^{n}a_{k}z^{k}$ $~(n=2,3, \ldots)$.
\end{Lem}
\subsection*{Proof of Theorem \ref{thm2.1}}
Let $f=h+\overline{g}$ be harmonic in $\mathbb{D}$ and assume that $\|T_f\|<\infty$.
Define $F(\xi)=(f\circ T)(\xi)$ for $\xi\in \mathbb{D}$, where
$$w=T(\xi)=\frac{R\xi+a}{1+\overline{a}R \xi}
$$
with $R=\tanh \rho$, the constants $a\in \mathbb{D}$ and $\rho$ ($0<\rho\leq \infty$). Then $F=H+\overline{G}$ is harmonic in $\mathbb{D}$.
Elementary computations yield \beqq
\frac{H''(\xi)+e^{i\theta}G''(\xi)}{H'(\xi)+e^{i\theta}G'(\xi)}
&=&\frac{h''(w)+e^{i\theta}g''(w)}{h'(w)+e^{i\theta}g'(w)}T'(\xi)
+\frac{T''(\xi)}{T'(\xi)},
\eeqq
where
$$T'(\xi)=\frac{R(1-|a|^2)}{(1+\overline{a}R \xi)^2}~\mbox{ and }~
T''(\xi)=-\frac{2\overline{a}(1-|a|^2)R ^2}{(1+\overline{a}R \xi)^3}
$$
so that
$$\frac{T''(\xi)}{T'(\xi)}=-\frac{2\overline{a}R}{1+\overline{a}R \xi}.
$$
Since
$$1-|w|^2=\frac{(1-|a|^2)(1-|\xi|^2 R ^2 )}{|1+\overline{a}R \xi|^2},
$$
we easily have $|T'(\xi)|(1-|\xi|^2)\leq 1-|w|^2$ and therefore, it follows that
\beqq
(1-|\xi|^2)\left |\frac{H''(\xi)+e^{i\theta}G''(\xi)}{H'(\xi)+e^{i\theta}G'(\xi)}\right|
&\leq &
(1-|w|^2)\left |\frac{h''(w)+e^{i\theta}g''(w)}{h'(w)+e^{i\theta}g'(w)}\right|\\
&& \hspace{.2cm} +(1-|\xi|^2)\left
|\frac{2\overline{a}R}{1+\overline{a}R \xi}\right |.
\eeqq
This inequality implies that
\be\label{eq2.2}
\sup_{\xi\in \mathbb{D}}(1-|\xi|^2)\left |\frac{H''(\xi)+e^{i\theta}G''(\xi)}{H'(\xi)+e^{i\theta}G'(\xi)}\right | \leq k_0<\infty ,
\ee
where
$$ k_0=\sup_{w \in \mathbb{D}}(1-|w|^2)\left|\frac{h''(w)+e^{i\theta}g''(w)}{h'(w)+e^{i\theta}g'(w)}\right |
+\frac{2R}{1-R}.
$$
Let $\varphi$ be an analytic branch of $\log\big(H'(\xi)+e^{i\theta}G'(\xi)\big)$ in $\mathbb{D}$. Then
$$\varphi'(\xi)=\frac{H''(\xi)+e^{i\theta}G''(\xi)}{H'(\xi)+e^{i\theta}G'(\xi)}.
$$
This choice is clearly possible, because $H'(\xi)+e^{i\theta}G'(\xi)\neq 0$
for $\xi\in \mathbb{D}$, by \eqref{eq2.2}. It then follows from \eqref{eq2.2} that
\be\label{eq2.3}
\left|\log\Big|\frac{H'(\xi)+e^{i\theta}G'(\xi)}{H'(0)+e^{i\theta}G'(0)}\Big|\right|\leq
|\varphi(\xi)-\varphi(0)|\leq
\frac{k_0}{2}\log\left(\frac{1+|\xi|}{1-|\xi|}\right).
\ee
Now, we introduce $H_{\theta}(\xi)$ by
$$H_{\theta}(\xi)=\frac{H(\xi)+e^{i\theta}G(\xi)}{H'(0)+e^{i\theta}G'(0)}.
$$
We see that $H_{\theta}$ is analytic in $\ID$ and is normalized so
that $H_{\theta}'(0)-1=0$. We infer from \eqref{eq2.3} that
$$\log \big|H_{\theta}'(\xi)\big|\leq
\frac{k_0}{2}\log3 ~\mbox{ for }~ |\xi|< \frac{1}{2},
$$
whence
$$\big|H_{\theta}'(\xi)\big|<3^{k_0/2} ~\mbox{ for }~ |\xi|< \frac{1}{2}.
$$
Therefore, by Lemma \Ref{Noshi-lemA} of Noshiro, $H_{\theta}(\xi)-H_{\theta}(0)$ is
univalent in the disk $|\xi|<\frac{3^{-k_0/2}}{2}$ for each $\theta$. The radius of convexity
for univalent functions is known to be $2-\sqrt{3}$
(cf. \cite[Theorem 2.13]{Du}). Thus, $H_{\theta}(\xi)-H_{\theta}(0)$
and $H(\xi)+e^{i\theta}G(\xi)$ are convex in
$|\xi|<(2-\sqrt{3})\frac{3^{-k_0/2}}{2}=\rho_0$. This implies that
$F$ is harmonic (convex) univalent in
$|\xi|<(2-\sqrt{3})\frac{3^{-k_0/2}}{2}$ (cf. \cite{CS}).
Consequently, $f$ is univalent in the hyperbolic disk $D_h(a, \rho_0)$ with
$\tanh\rho_0=(2-\sqrt{3})\frac{3^{-k_0/2}}{2}\tanh\rho$ if
$(2-\sqrt{3})\frac{3^{-k_0/2}}{2}\leq 1$, and $\rho_0=\rho$ if
$(2-\sqrt{3})\frac{3^{-k_0/2}}{2}> 1$. Hence, $f$ is uniformly
locally univalent.
To prove the necessity, we assume that $f$ is uniformly locally
univalent in $\mathbb{D}$, that is, $f$ is univalent in each hyperbolic disk $D_h(a,\rho)$,
where $a\in \mathbb{D}$ and $0<\rho\leq \infty$.
Again, as above, we consider $w=T(\xi)$ and let
$$F(\xi)=(f\circ T)(\xi)=f(w) ~\mbox{ for $\xi\in \mathbb{D}$}.
$$
Then $F=H+\overline{G}$ is univalent in $\mathbb{D}$. By Lewy's Theorem (cf. \cite{Du}),
the Jacobian $J_F$ is different from $0$ for all $\xi\in\mathbb{D}$,
and then, without loss of generality, we assume that $F$ is sense-preserving. Let
$$F_0(\xi)=\frac{H(\xi)-H(0)}{H'(0)}+\overline{\frac{G(\xi)-G(0)}{H'(0)}}=H_0(\xi)+\overline{G_0(\xi)}.
$$
Obviously, $F_0\in {\mathcal S}_{H}$. For $\xi\in \mathbb{D}$, set
$$F_1(z)=\frac{F_0\Big(\frac{z+\xi}{1+\overline{\xi}z}\Big)-F_0(\xi)}{(1-|\xi|^2)H_0'(\xi)}
= H_1(z)+\overline{G_1(z)},
$$
which again belongs to ${\mathcal S}_{H}$. The analytic function $H_1(z)$ has the form
$$H_1(z)=z+A_2(\xi)z^2+A_3(\xi)z^3+\cdots
$$
and a direct computation shows that
$$A_2(\xi)=\frac{1}{2}\left \{ (1-|\xi|^2)\frac{H_0''(\xi)}{H_0'(\xi)}-2\overline{\xi} \right \}
=\frac{1}{2}\left \{ (1-|\xi|^2)\frac{H''(\xi)}{H'(\xi)}-2\overline{\xi} \right \}.
$$
Let $\alpha=\sup\{|a_2|:\,
f(z)=\sum_{k=1}^{\infty}a_kz^k+\sum_{k=1}^{\infty}\overline{b}_k\overline{z}^k\in{\mathcal
S}_H \}$. For $f\in \mathcal {S}_H$, we have
$\frac{f(z)-\overline{b_1f(z)}}{1-|b_1|^2}\in \mathcal {S}_H^0$. It
is known that for
$f^*(z)=\sum_{k=1}^{\infty}a^*_kz^k+\sum_{k=1}^{\infty}\overline{b}^*_k\overline{z}^k\in
\mathcal {S}_H^0$, the coefficients satisfy $|a^*_2|<49$ and
$|b^*_2|<\frac{1}{2}$ (cf. \cite{Du}). Using these estimates, a direct computation gives
$|a_2|<98$. It has been recently shown by Abu-Muhanna et al. \cite{AAP2015} that $|a^*_2|\leq 16.5$,
which is indeed the best known upper bound for $|a^*_2|$. Since $F_1\in {\mathcal S}_{H}$, we must have
$|A_2(\xi)|\leq \alpha$ and therefore,
$$(1-|\xi|^2)\left |\frac{H''(\xi)}{H'(\xi)}\right |< 2(\alpha+1), ~\mbox{ $\xi\in \mathbb{D}$}.
$$
For each $c\in \mathbb{D}$, the composition of the sense-preserving affine mapping $\phi(w) =w+c\overline{w}$
with $F$, namely, the function $F+c\overline{F}$, is univalent and sense-preserving in $\ID$. Then by what we have just proved, we obtain
$$(1-|\xi|^2)\left |\frac{H''(\xi)+cG''(\xi)}{H'(\xi)+cG'(\xi)}\right |< 2(\alpha+1), ~\mbox{ $\xi\in \mathbb{D}$},
$$
which in particular implies that, for each $\theta\in [0, 2\pi]$,
$$ (1-|\xi|^2)\left |\frac{H''(\xi)+e^{i\theta}G''(\xi)}{H'(\xi)+e^{i\theta}G'(\xi)}\right |
< 2(\alpha+1), ~\mbox{ $\xi\in \mathbb{D}$}.
$$
Thus, for $f=h+\overline{g}$, we have
$$ A(\theta):=\sup_{z\in \mathbb{D}}
(1-|z|^2)\left |\frac{h''(z)+e^{i\theta}g''(z)}{h'(z)+e^{i\theta}g'(z)}\right |
<\infty.
$$
Since $A(\theta)$ is a continuous function of $\theta$ in $[0, 2\pi]$, it follows from $ A(\theta)<\infty$ that
$$ \sup_{z\in \mathbb{D},\ \theta\in [0, 2\pi]} (1-|z|^2)\left
|\frac{h''(z)+e^{i\theta}g''(z)}{h'(z)+e^{i\theta}g'(z)}\right
|<\infty.
$$
The proof of the theorem is complete.
\hfill $\Box$
\section{Growth estimate for the class ${\mathcal B}_{H}(\lambda)$}\label{sec-3}
For a nonnegative real number $\lambda$, we consider
$$H_\lambda(z)=\int_{0}^z\Big(\frac{1+t}{1-t}\Big)^{\lambda}\,dt.
$$
It is easy to verify that $\|T_{H_\lambda}\|=2\lambda$, and thus
$H_\lambda\in {\mathcal B}_H(\lambda)$. It is known that
$H_\lambda$ is univalent in $\mathbb{D}$ if and
only if $0\leq\lambda\leq1$ (see \cite[Lemma 2.1]{KS}). We will see
later that $H_\lambda$ is extremal in the class ${\mathcal
B}_{H}(\lambda)$. It follows from
Theorem \ref{thm2.1} that if $f=h+\overline{g}\in {\mathcal
B}_{H}(\lambda)$, then $\frac{h+e^{i\theta}g}{1+e^{i\theta}b_1}\in
{\mathcal B}(\lambda)$. This fact and \cite[Theorem
2.3]{KS} give the following result.
\bthm[Distortion theorem]\label{thm3.1}
Let $\lambda$ be a nonnegative real number and
$f(z)=h(z)+\overline{g(z)}=\sum_{n=1}^{\infty}a_nz^n+\sum_{n=1}^{\infty}\overline{b}_n\overline{z}^n\in
{\mathcal B}_{H}(\lambda)$. Then for $z\in \ID$, we have
$$ \big | \lambda_{f} (z)\big |=\big|\,|h'(z)|-|g'(z)|\, \big|\geq |1-|b_1|\,| \left (\frac{1-|z|}{1+|z|}\right )^\lambda=|1-|b_1|\,| H_\lambda'(-|z|),
$$
$$\big | \Lambda_{f}(z) \big |= |h'(z)|+|g'(z)|\leq (1+|b_1|) \left (\frac{1+|z|}{1-|z|}\right )^\lambda=(1+|b_1|) H_\lambda'(|z|)
$$
and $|f(z)|\leq (1+|b_1|)H_\lambda(|z|)$. Furthermore, if $f\in
\mathcal {S}_H^0\cap{\mathcal B}_{H}(\lambda)$, then
$$-H_\lambda(-|z|)\leq |f(z)|\leq H_\lambda(|z|).
$$
Equality occurs in each case when $f(z)=\overline{\mu}H_\lambda(\mu z)$ for a unimodular constant $\mu$.
\ethm
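The extremal role of $H_\lambda$ in Theorem \ref{thm3.1} rests on the norm computation mentioned at the beginning of this section: since $H_\lambda'(z)=\big(\frac{1+z}{1-z}\big)^{\lambda}$, we have
$$T_{H_\lambda}(z)=\frac{H_\lambda''(z)}{H_\lambda'(z)}=\lambda\left(\frac{1}{1+z}+\frac{1}{1-z}\right)=\frac{2\lambda}{1-z^{2}},
$$
and hence
$$(1-|z|^{2})\,|T_{H_\lambda}(z)|=\frac{2\lambda(1-|z|^{2})}{|1-z^{2}|}\leq 2\lambda,
$$
with equality on the real axis, so that $\|T_{H_\lambda}\|=2\lambda$.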
\bcor\label{cor3.1} For $\lambda>1$, each $f(z)=h(z)+\overline{g(z)}=\sum_{n=1}^{\infty}a_nz^n+\sum_{n=1}^{\infty}\overline{b}_n\overline{z}^n\in
{\mathcal B}_{H}(\lambda)$ satisfies the growth condition
$$f(z)=O\big((1-|z|)^{1-\lambda}\big)
$$
as $|z|\rightarrow 1$. On the other hand, for $\lambda<1$, each
mapping $f\in {\mathcal B}_{H}(\lambda)$ is bounded with the bound
$(1+|b_1|)H_\lambda(1)$. Moreover, if $\lambda>0$ and $f\in
\mathcal {S}_H^0\cap{\mathcal B}_{H}(\lambda)$, then the
image $f(\mathbb{D})$ contains the disk $\{w:\,|w|<-H_\lambda(-1)\}$.
\ecor
By \cite{Be, BC}, for $\lambda\leq 1/2$, ${\mathcal
B}(\lambda)\subset {\mathcal S}$ and so, by Theorem \ref{thm2.1},
for $\lambda\leq 1/2$, $f\in {\mathcal B}_{H}(\lambda)$ must be
univalent in $\ID$. We also note that, for $0\leq \lambda\leq 1$, we
have
$$-H_\lambda(-1)\geq -H_1(-1)=2\log2-1=0.38629 \ldots,
$$
and therefore the result is an improvement of the covering theorem
for harmonic mappings in ${\mathcal S}_{H}^0$.
In Corollary \ref{cor3.1}, the case $\lambda=1$ is critical. By Theorem \ref{thm3.1}, we have that, for
$f\in {\mathcal B}_{H}(1)$,
$$|f(z)|\leq (1+|b_1|)H_1(|z|)=(1+|b_1|)\big(-2\log(1-|z|)-|z|\big),
$$
which shows that functions in ${\mathcal B}_{H}(1)$ need not be bounded. The next theorem, which follows from
Theorem \ref{thm2.1} and \cite[Proposition 2.5]{KS}, gives a boundedness criterion for mappings in ${\mathcal B}_{H}(1)$.
\bthm\label{thm3.2}
If a harmonic mapping $f=h+\overline{g}$ in $\mathbb{D}$ satisfies the condition
$$ {\overline{\lim_{|z|\rightarrow 1^{-}}}}
\left \{(1-|z|^2)\left |\frac{h''(z)+e^{i\theta}g''(z)}{h'(z)+e^{i\theta}g'(z)}\right |-2\right \}
\log\frac{1}{1-|z|^2}<-2
$$
for each $\theta\in [0,2\pi]$, then $f$ is bounded. Here the constant $-2$ on the right hand side is
sharp.
\ethm
We conclude this section with the H\"{o}lder continuity of mappings in ${\mathcal B}_{H}(\lambda)$.
\bthm\label{thm3.3}
Let $0\leq \lambda<1$. Then each mapping $f\in {\mathcal B}_{H}(\lambda)$ is
H\"{o}lder continuous of exponent $1-\lambda$ in $\mathbb{D}$.
\ethm
The proof follows from Theorem \ref{thm2.1} and \cite[Theorem 2.6]{KS}, and so we omit the details.
\section{The space ${\mathcal B}_{H}(\lambda)$ and the Hardy space}\label{sec-4'}
We begin this section with the following concepts.
\bdefe\label{def4'.1}
For $0<p<\infty$, the \textit{Hardy space} $H^p$ is the set of all functions $f$ analytic in $\mathbb{D}$
for which
$$M_p(r,f)=\left\{ \frac{1}{2\pi}\int_{0}^{2\pi}|f(re^{i\theta})|^p
\,d\theta \right \}^{1/p}
$$
is bounded on $0<r<1$.
The space $h^p$ consists of all harmonic mappings $f$ in $\mathbb{D}$ for which $M_p(r,f)$ $(0<r<1)$ are bounded (cf. \cite{Du}).
\edefe
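As an illustration (our own sketch, not part of the paper), the integral mean $M_p(r,f)$ can be approximated by discretizing the $\theta$-integral; the test function $1/(1-z)$, which belongs to $H^p$ precisely for $0<p<1$, is an assumed example chosen here for concreteness:

```python
# Illustrative numerical approximation of the integral mean M_p(r, f)
# of Definition 4.1, discretizing the theta-integral on a uniform grid.
import cmath, math

def M_p(r, f, p, n=4096):
    s = sum(abs(f(r * cmath.exp(2j * math.pi * k / n))) ** p
            for k in range(n)) / n
    return s ** (1.0 / p)

# f(z) = 1/(1-z) lies in H^p exactly for 0 < p < 1:
f = lambda z: 1.0 / (1.0 - z)
print(M_p(0.9, f, 0.5), M_p(0.99, f, 0.5))   # stays bounded as r -> 1
print(M_p(0.99, f, 2.0))                     # grows like (1-r^2)^{-1/2}
```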
For a harmonic mapping $f=h+\overline{g}$ in $\mathbb{D}$, the
Bloch seminorm is given by (cf. Colonna \cite{Co})
$$\|f\|_{\mathscr{B}_{H}}=\sup_{z\in \mathbb{D}}
(1-|z|^2)\big(|h'(z)|+|g'(z)|\big),
$$
and $f$ is called a Bloch mapping when $\|f\|_{\mathscr{B}_{H}}<\infty$.
In recent years, the class of harmonic Bloch mappings has been studied extensively, together with its
higher-dimensional analog (see for example, \cite{CPVW2013,CPW2011,CPW2012,Co} and the references therein).
By Theorem \ref{thm3.1}, we have, for $f\in {\mathcal B}_{H}(\lambda)$,
$$ |f(z)|\leq (1+|b_1|)\int_{0}^{|z|}\left (\frac{1+t}{1-t}\right )^{\lambda}\,dt,
$$
which shows that
\bei
\item $f$ is bounded when $\lambda<1$,
\item $f(z)=O(-\log(1-|z|))$ $(|z|\rightarrow 1)$ when $\lambda=1$,
and
\item $f(z)=O((1-|z|)^{1-\lambda})$ $(|z|\rightarrow 1)$ when
$\lambda>1$.
\eei
Let ${\rm BMOA}$ (resp. ${\rm BMOH}$) denote the class of analytic functions
(resp. harmonic mappings) that have bounded mean oscillation on the
unit disk $\mathbb{D}$ (cf. \cite{Au}). In \cite{Ki}, Kim proved the following result for analytic functions.
\begin{Thm}\label{Thm a}\begin{enumerate}
\item If $\lambda<1$, ${\mathcal B}(\lambda)\cap {\mathcal S}\subset
H^{\infty}$,
\item If $\lambda=1$, ${\mathcal B}(\lambda)\cap {\mathcal S}\subset
{\rm BMOA}$,
\item If $\lambda>1$, ${\mathcal B}(\lambda)\cap {\mathcal S}\subset
H^{p}$ for every $0<p<1/(\lambda-1)$.
\end{enumerate}
\end{Thm}
In order to state a generalization of this result for harmonic mappings, we introduce
$${\mathcal S}_{H_k}=\{f=h+\overline{g}\in {\mathcal S}_{H}:\, f ~\mbox{ is $k$-quasiconformal}\} $$ for $0\leq k<1$.
We now state the analog of Theorem \ref{Thm a}.
\bthm\label{thm4'.0}
\begin{enumerate}
\item If $\lambda<1$, then ${\mathcal B}_{H}(\lambda)\cap {\mathcal S}_{H}\subset
h^{\infty}$.
\item If $\lambda=1$, then ${\mathcal B}_{H}(\lambda)\cap {\mathcal S}_{H}\subset
{\rm BMOH}$.
\item If $\lambda>1$, then ${\mathcal B}_{H}(\lambda)\cap {\mathcal S}_{H_k} \subset
h^{p}$ for every $0<p<1/(\lambda-1)$.
\end{enumerate}
\ethm
For the proof of Theorem \ref{thm4'.0}, we need some preparation.
\blem\label{lem4'.0} If
$f=h+\overline{g}\in {\mathcal B}_{H}(1)$, then $\|f\|_{\mathscr{B}_{H}}\leq 4(1+|b_1|).$
\elem \bpf
For $f=h+\overline{g}\in {\mathcal B}_{H}(1)$, by Theorem \ref{thm2.1},
we have $h+e^{i\theta}g\in {\mathcal B}(1)$ for each $\theta\in [0,
2\pi]$. It follows from \cite[Theorem 2.1]{Ki} that $
\|h+e^{i\theta}g\|_{\mathscr{B}}\leq 4(1+|b_1|) $, which implies
that $ \|f\|_{\mathscr{B}_{H}}\leq 4(1+|b_1|).$ \epf
In the next lemma, we shall consider the problem of how the integral means
of $k$-quasiconformal harmonic univalent mappings $f$ behave. Here the integral means of
$f$ are defined by
\be\label{eq-ext1}
I_p(r)=I_p(r,f)=\frac{1}{2\pi}\int_{0}^{2\pi}|f(re^{i\theta})|^p\, d\theta.
\ee
The following lemma is regarded as a generalization of \cite[Proposition 8.1]{Ch1}
to the case of harmonic mappings.
\blem\label{lem4'.1} Let $f\in {\mathcal S}_{H_k}$ and $p>0$. Then
$$I_p(r)\leq \frac{2(1+k^2)(|p-2|+1)}{1-k^2}\int_{0}^{r}M(\rho)^p
\rho^{-1}\, d\rho\;\;\; (0\leq r<1),
$$
where
$$ M(r):=M(r,f)=\max_{0\leq \theta\leq 2\pi}|f(re^{i\theta})|.
$$
\elem \bpf
Let $f=h+\overline{g}\in {\mathcal S}_{H_k}$ and write $z=re^{i\theta}$, where $0\leq r<1$. Writing
$$ |f(z)|^{p}=\left [\big (h(z)+\overline{g(z)}\big )\big (\overline{h(z)} +g(z)\big )\right ]^{p/2},
$$
elementary computations give
$$r\frac{\partial}{\partial r}\big(|f(z)|^p\big )=p|f(z)|^{p-2}{\rm
Re}\,\big \{\big(zh'(z)+\overline{zg'(z)}\,\big)\overline{f(z)}\,\big \}
$$
and
$$\frac{\partial}{\partial \theta}\big(|f(z)|^p\big ) =p|f(z)|^{p-2}{\rm Re}\,
\big \{\big(iz h'(z)+\overline{izg'(z)}\,\big)\overline{f(z)}\,\big \}.
$$
Further computations yield
\beqq
\Big(r\frac{\partial}{\partial r}\Big)^2\big(|f(z) |^p\big)
&=&p(p-2)|f(z)|^{p-4}\Big({\rm
Re\,}\big \{\big(zh'(z)+\overline{zg'(z)}\,\big) \overline{f(z)}\big\}\Big)^2\\
&&+p|f(z)|^{p-2} {\rm Re\,}\big \{\big(z^2h''(z)+\overline{z^2g''(z)}
+zh'(z)+\overline{zg'(z)}\, \big)\overline{f(z)}\, \big\}\\
&&+p|f(z)|^{p-2}\big|zh'(z)+\overline{zg'(z)}\big|^2
\eeqq
and
\beqq
\Big(\frac{\partial}{\partial\theta}\Big)^2\big(|f(z)|^p\big)
&=&p(p-2)|f(z)|^{p-4}\Big({\rm Re\,}\big \{\big(iz h'(z)+\overline{i zg'(z)}\,
\big)\overline{f(z)}\,\}\Big)^2\\
&&+p|f(z)|^{p-2}
{\rm Re\,}\big \{\big(-z^2h''(z)-\overline{z^2g''(z)}-zh'(z)-\overline{zg'(z)}\big)
\overline{f(z)}\, \big \}\\
&&+p|f(z)|^{p-2}\big|izh'(z)+\overline{izg'(z)}\big|^2.
\eeqq
Adding the last two expressions shows that
\vspace{8pt}
$\ds \Big(r\frac{\partial}{\partial r}\Big)^2\big(|f(z)|^p\big)
+\Big(\frac{\partial}{\partial\theta}\Big)^2\big(|f(z)|^p\big)
$
\beqq
&=&p(p-2)|f(z)|^{p-4}\left [\Big({\rm
Re\,}\big \{\big( zh'(z)+\overline{zg'(z)}\, \big)\overline{f(z)}\,\big \}\Big)^2\right . \\
&& +\left . \Big({\rm Re\,}\big \{\big(iz h'(z)+\overline{izg'(z)}\, \big)\overline{f(z)}\,\big \}\Big)^2 \right ]\\
&&+p\big|f(z)|^{p-2} \left [\big|zh'(z)+\overline{zg'(z)}\big|^2
+\big|izh'(z)+\overline{izg'(z)}\big|^2\right ]\\
&\leq&2p(1+k^2)(|p-2|+1)r^2|f(z)|^{p-2}|h'(z)|^2.
\eeqq
It follows that
\vspace{8pt}
$\ds \frac{1}{2\pi}\int_{0}^{2\pi}\left [\Big(r\frac{\partial}{\partial
r}\Big)^2\big(|f(re^{i\theta})|^p\big)
+\Big(\frac{\partial}{\partial\theta}\Big)^2\big(|f(re^{i\theta})|^p\big)\right ]d\theta\
$
\beqq
&=&\frac{1}{2\pi}\int_{0}^{2\pi}\Big(r\frac{\partial}{\partial r}\Big)^2\big(|f(re^{i\theta})|^p\big)\,d\theta\\
&=&r\frac{d}{dr}\big(r I_p'(r)\big)\\
&\leq & p(1+k^2)(|p-2|+1)r^2\frac{1}{\pi} \int_{0}^{2\pi}|f(re^{i\theta})|^{p-2}|h'(re^{i\theta})|^2
\,d\theta,
\eeqq
where we have used the fact that the integral corresponding to the second term above vanishes because of the periodicity of the
function involved in the integrand. As $J_f(z)=|h'(z)|^2-|g'(z)|^2$ and
$$|h'(z)|^2 =\frac{|h'(z)|^2}{|h'(z)|^2-|g'(z)|^2}J_f(z)\leq\frac{1}{1-k^2}J_f(z),
$$
we may integrate the last expression on both sides and obtain the inequality
\beqq
r I_p'(r)&\leq & p(1+k^2)(|p-2|+1)\iint_{|z|\leq r}|f(z)|^{p-2}|h'(z)|^2 \,d\sigma (z)\\
&\leq & \frac{p(1+k^2)(|p-2|+1)}{1-k^2}\iint_{|z|\leq r}|f(z)|^{p-2}J_f(z)\, d\sigma (z),
\eeqq
where $d\sigma (z)=(1/\pi)dx\,dy$ denotes the normalized area element.
Now, we substitute $w=f(z)$. Since $f$ is univalent in $\ID$ and $M(r)=\max_{0\leq \theta\leq
2\pi}|f(re^{i\theta})|$, the last inequality gives
\beqq
r I_p'(r)&\leq & \frac{p(1+k^2)(|p-2|+1)}{1-k^2}\iint_{|w|\leq M(r)} |w|^{p-2}\, d\sigma (w)\\
&=&\frac{2p(1+k^2)(|p-2|+1)}{1-k^2}\int_{0}^{M(r)}t^{p-1}\,dt\\
&=&\frac{2(1+k^2)(|p-2|+1)}{1-k^2}M(r)^p,
\eeqq
which upon integration on both sides shows that
$$ I_p(r)\leq \frac{2(1+k^2)(|p-2|+1)}{1-k^2}\int_{0}^{r}M(\rho)^p
\rho^{-1}\,d\rho.
$$
The desired conclusion follows. \epf
\subsection{The proof of Theorem \ref{thm4'.0}}
Let $f\in {\mathcal B}_{H}(\lambda)$ for some $\lambda<1$. Then, by Corollary \ref{cor3.1}, $f$ is bounded.
Next we assume that $f=h+\overline{g}\in {\mathcal B}_{H}(1)\cap {\mathcal S}_{H}$. Then,
by Lemma \ref{lem4'.0}, it follows that $f$ is Bloch and thus,
$h$ is Bloch, since, for $f=h+\overline{g}\in {\mathcal S}_{H}$, $h$ is Bloch if and only if $h$ is BMOA if and
only if $f$ is BMOH (cf. \cite{Au}). Consequently, $f\in {\rm BMOH}$.
Finally, we assume that $f\in {\mathcal B}_{H}(\lambda)\cap {\mathcal S}_{H_k}$ for some $\lambda>1$. Then, by Theorem
\ref{thm3.1}, we deduce that $f(z)=O((1-|z|)^{1-\lambda})$ and thus,
$M(r)=O((1-r)^{1-\lambda})$. Furthermore, using Lemma \ref{lem4'.1}, we find that
\beqq
I_p(r)&\leq& \frac{2(1+k^2)(|p-2|+1)}{1-k^2}\int_{0}^{r}M(\rho)^p \rho^{-1}\, d\rho\\
&\leq& \frac{2C (1+k^2)(|p-2|+1)}{1-k^2}\int_{0}^{r}\frac{1}{(1-\rho)^{p(\lambda-1)}}\, \rho^{-1} \,d\rho
\eeqq
for some positive constant $C$. Hence, $f\in h^p$ if $0<p<1/(\lambda-1)$.
\hfill $\Box$
\bigskip
Obviously, the assertion (1) in Theorem \Ref{Thm a} remains valid if
we replace ${\mathcal B}(\lambda)$ by ${\mathcal B}_{H}(\lambda)$.
By \cite[Theorem 1]{KS1}, we see that the assertion (3) does not hold for ${\mathcal B}_{H}(\lambda)$. On the other hand,
we will show that ${\mathcal B}_{H}(\lambda)$ is contained in some Hardy space.
\bthm\label{thm4'.1}
Let $\lambda\geq 1$. Then ${\mathcal B}_{H}(\lambda)\subset h^p$ with
$0<p<\frac{1}{\lambda^2-1}$.
\ethm
In the above, the expression $\frac{1}{\lambda^2-1}$ is interpreted as $\infty$ when
$\lambda=1$.
\bpf Assume that $f=h+\overline{g}\in {\mathcal B}_{H}(\lambda)$. By
Theorem \ref{thm2.1} and \cite[Theorem 2]{KS1}, for each $\theta$,
$h+e^{i\theta}g\in H^p$ with $0<p<\frac{1}{\lambda^2-1}$. It follows
that $h-g\in H^p$ and $h+g\in H^p$ which implies that $f\in h^p$.
\epf
\bcor
A uniformly locally univalent harmonic mapping $f$ in
$\mathbb{D}$ is contained in the Hardy space $h^p$ for some
$p=p(f)>0$.
\ecor
In \cite{Ki}, Kim also conjectured that the assertion (3) in Theorem
\Ref{Thm a} holds for ${\mathcal B}(\lambda)$.
\section{Coefficient estimates for the class ${\mathcal B}_{H}(\lambda)$}\label{sec-4}
Let
$f(z)=h(z)+\overline{g(z)}=\sum_{n=1}^{\infty}a_nz^n+\sum_{n=1}^{\infty}\overline{b}_n\overline{z}^n$
with $a_1=1$ and $b_1=0$. If $f\in {\mathcal B}_{H}(\lambda)$, then by Theorem \ref{thm2.1}, for each $\theta\in [0,
2\pi]$,
$$\left |\frac{h''(0)+e^{i\theta}g''(0)}{h'(0)+e^{i\theta}g'(0)}\right |\leq 2\lambda,
$$
which shows that $\big | |a_2|-|b_2| \big |\leq |a_2|+|b_2|\leq \lambda$. Of course,
this estimate is sharp because equality holds for $H_\lambda$.
In order to estimate the coefficients of harmonic mappings $f$ in
$\mathbb{D}$, we consider the integral mean $I_p(r,f)$ of $f$ defined by \eqref{eq-ext1},
where $p$ is a real number. For $f(z)=\sum_{n=1}^{\infty}a_nz^n+\sum_{n=1}^{\infty}\overline{b}_n\overline{z}^n
\in {\mathcal B}_{H}(\lambda)$ with $a_1=1$ and $b_1=0$, by Theorem \ref{thm2.1},
\cite[Theorem 2.3]{KS} and similar arguments as in
\cite[p.~190]{KS}, we have $|a_n+e^{i\theta}b_n|=O\big
(n^{\lambda-1} \big )$ uniformly for $\theta\in [0, 2\pi]$ as
$n\rightarrow \infty$, and then $|a_n|+|b_n|=O(n^{\lambda-1})$ as
$n\rightarrow \infty$. Moreover, if $\lambda<1$ and
$f=h+\overline{g}$ is univalent, then, by Theorem \ref{thm2.1} and
\cite[Corollary 2.4]{KS}, $H_\theta=h+e^{i\theta}g$ is uniformly
bounded for $\theta\in [0,2\pi]$. So
$${\rm Area}\,(H_\theta(\mathbb{D}))=\pi \Big (1+\sum_{n=2}^{\infty}n|a_n+e^{i\theta}b_n|^2\Big )<\infty,
$$
which implies that $|a_n|+|b_n|=o\big (n^{-1/2}\big )$ as
$n\rightarrow \infty$.
In the following theorem, we improve the exponents in these order
estimates.
\bthm\label{thm4.1} Let
$f(z)=h(z)+\overline{g(z)}=\sum_{n=1}^{\infty}a_nz^n
+\sum_{n=1}^{\infty}\overline{b}_n\overline{z}^n\in {\mathcal
B}_{H}(\lambda)$ with $a_1=1$ and $b_1=0$. Then, for each
$\varepsilon>0$, a real number $p$ and uniformly for $\theta\in [0,
2\pi]$, we have
$$I_p(r, h'+e^{i\theta}g')=O\big ((1-r)^{\alpha(|p|\lambda)-\varepsilon}\big ),
$$
and thus,
$$I_p(r, f)=O\big ((1-r)^{-\alpha(|p|\lambda)-\varepsilon}\big ),~~
|a_n|+|b_n|=O\big (n^{\alpha(\lambda)-1+\varepsilon}\big ),
$$
where $\alpha(\lambda)=\frac{\sqrt{1+4\lambda^2}-1}{2}$.
\ethm
This theorem can be proved by using Theorem \ref{thm2.1} and
\cite[Theorem 3.1]{KS}; we omit the details.
Given a harmonic mapping
$f(z)=h(z)+\overline{g(z)}=\sum_{n=1}^{\infty}a_nz^n+\sum_{n=1}^{\infty}\overline{b}_n\overline{z}^n$
with $a_1=1$ and $b_1=0$ in $\mathbb{D}$, let $\gamma(f)$ denote the
infimum of exponents $\gamma$ such that $|a_n|+|b_n|=O\big
(n^{\gamma-1}\big )$ as $n\rightarrow \infty$, that is,
$$\gamma(f)=\overline{\lim_{n\rightarrow\infty}}\frac{\log \big(n (|a_n|+|b_n|)\big)}{\log n}.
$$
For a subset $X$ of ${\mathcal H}$, we let $\gamma(X)=\sup_{f\in
X}\gamma(f)$. As for the class ${\mathcal S}_b$ of all normalized
bounded univalent functions in $\mathbb{D}$, it has been proved that
$0.24<\gamma({\mathcal S}_b)<0.4886$ (cf. \cite{Ca, Ma}), and
Carleson and Jones \cite{Ca} conjectured that $\gamma({\mathcal
S}_b)=0.25$. For a bounded univalent function $f$, we note
that the growth of the coefficients seems to reflect the irregularity of
the boundary of the image under $f$ (cf. \cite[Chapter 10]{Ch1}), and
Makarov and Pommerenke observed a remarkable phenomenon of phase
transition of the functional $\gamma(f)$ with respect to the
Minkowski dimension of the boundary curve (cf. \cite{Ma}). Recently,
in \cite{KS}, the authors established the boundedness of
$\gamma({\mathcal B}(\lambda))$ in terms of $\lambda$. As a
generalization, we consider the class ${\mathcal B}_{H}(\lambda)$
and prove that $\gamma({\mathcal B}_{H}(\lambda))$ has the same
bounds as $\gamma({\mathcal B}(\lambda))$.
For the class ${\mathcal B}_{H}(\lambda)$, Theorem \ref{thm4.1} implies that
$\gamma({\mathcal B}_{H}(\lambda))\leq \alpha(\lambda)$.
The extremal function $H_\lambda$ satisfies the relation
$\gamma(H_\lambda)=\lambda-1$. By \cite[Example 3.1]{KS}, it follows that
$\gamma({\mathcal B}_{H}(\lambda))\geq 0$ for $\lambda>0$.
Hence, we have
\bthm\label{thm4.2}
For each $\lambda\in (0,\infty)$, we have
$$\max\{\lambda-1,0\}\leq\gamma({\mathcal B}_{H}(\lambda))\leq \alpha(\lambda),
$$
where
$\alpha(\lambda)=\frac{\sqrt{1+4\lambda^2}-1}{2}$. In particular,
$\gamma({\mathcal B}_{H}(\lambda))=O(\lambda^2)$ as
$\lambda\rightarrow 0$.
\ethm
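The bounds of Theorem \ref{thm4.2} are elementary to verify numerically. The following sketch (our own illustration, not part of the paper) checks $\max\{\lambda-1,0\}\leq \alpha(\lambda)$ for sample values of $\lambda$, and the small-$\lambda$ behavior $\alpha(\lambda)\sim\lambda^2$:

```python
# Illustrative check of the bounds in Theorem 4.2: the exponent
# alpha(lambda) = (sqrt(1 + 4 lambda^2) - 1)/2 dominates max(lambda - 1, 0)
# and behaves like lambda^2 as lambda -> 0.
import math

def alpha(lam):
    return (math.sqrt(1.0 + 4.0 * lam * lam) - 1.0) / 2.0

for lam in (0.1, 0.5, 1.0, 2.0, 5.0):
    assert max(lam - 1.0, 0.0) <= alpha(lam)

# alpha(lambda)/lambda^2 -> 1, consistent with gamma(B_H(lambda)) = O(lambda^2)
print(alpha(1e-4) / 1e-8)
```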
Now we mention a connection with integral means for univalent
analytic functions. For a univalent harmonic mapping $f\in {\mathcal S}_{H}$ and a real number $p$, we let
$$\beta_{f,\theta}(p)=\overline{\lim_{r\rightarrow 1^{-}}}\frac{\log
I_p(r, h'+e^{i\theta}g')}{\log \frac{1}{1-r}}.
$$
Clearly, for a univalent analytic function $f\in {\mathcal S}$,
$$ \beta_{f}(p)=\overline{\lim_{r\rightarrow 1^{-}}}
\frac{\log I_p(r, f')}{\log \frac{1}{1-r}}.
$$
Brennan conjectured that $\beta_f(-2)\leq 1$ for univalent analytic functions $f$ (cf. \cite[Chapter 8]{Ch1}).
As a corollary of Theorem \ref{thm4.1}, we have
\bthm\label{thm4.3}
For $f\in {\mathcal B}_{H}(\lambda)$ and a real number $p$,
$$\beta_{f, \, \theta}(p)\leq \alpha(|p|\lambda)=\frac{\sqrt{1+4p^2\lambda^2}-1}{2}
$$
holds for each $\theta\in [0, 2\pi]$. In particular, the Brennan
conjecture is true for univalent functions $f$ with $\|T_f\|\leq
\sqrt{2}$.
\ethm
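The threshold in the last claim can be checked arithmetically (our own illustration; the identification of $\|T_f\|\leq\sqrt{2}$ with $|p|\lambda=\sqrt{2}$ for $p=-2$ follows the paper's convention and is assumed here): the bound $\alpha(|p|\lambda)$ equals $1$ exactly at $|p|\lambda=\sqrt{2}$, recovering $\beta_f(-2)\leq 1$.

```python
# Arithmetic check: alpha(sqrt(2)) = (sqrt(1 + 4*2) - 1)/2 = (3 - 1)/2 = 1,
# i.e. for p = -2 the bound of Theorem 4.5 gives beta_f(-2) <= 1
# precisely when lambda <= 1/sqrt(2).
import math

def alpha(x):
    return (math.sqrt(1.0 + 4.0 * x * x) - 1.0) / 2.0

lam = 1.0 / math.sqrt(2.0)      # assumed to correspond to ||T_f|| <= sqrt(2)
print(alpha(2.0 * lam))          # alpha(sqrt(2)) = 1
```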
\subsection*{Acknowledgements}
The work of Ms. Jinjing Qiao was supported by National Natural
Science Foundation of China (No. 11501159) and was partially
supported by ``INSA JRD-TATA Fellowship" of the Centre for
International Co-operation in Science (CICS).
The second author is on leave from IIT Madras.
\section{Introduction}
Nuclear structure varies tremendously from the many-body leptodermous
features of heavy nuclei to the individual properties of light nuclei
\cite{boh75,zel17,sie87,mye69,tho04,fre07}. The unexpectedly large jump
in radius observed from the lighter Li isotopes to $^{11}$Li
\cite{tan85,tan85b} triggered research on halo structures
\cite{han87} in a number of subfields of physics \cite{jen04}.
Few-body models have been especially efficient in describing the gross
features of halos, simply because the degrees of freedom essentially
decouple into two groups, with only a few nucleons determining the
low-energy properties \cite{jen04}.
The overall properties of $^{11}$Li are established as a three-body
system with constituents of two neutrons and $^{9}$Li \cite{joh90}.
In this connection the spin-spin splitting of the $s_{1/2}$ and
$p_{1/2}$ single-neutron states coupled to the $3/2^-$ ground state of
$^{9}$Li is crucial for the halo properties \cite{kat99,gar02a}.
These halo structures are consistent with reaction information
\cite{gar96,gar97,cas17} even after the binding energy has been
measured with better accuracy \cite{smi08}. The space spanned by
these single-particle states provides the $3/2^-$ ground-state as well
as the dipole-excited states of $1/2^+$, $3/2^+$ and $5/2^+$
\cite{gar02}.
The present investigation is triggered by the recent experiments
\cite{web19} on the mirror nucleus, $^{11}$O, which was preceded by
related experiments on $^{10}$N \cite{hoo17,lep02}, and quickly
followed up by theoretical papers on these and a few neighboring
nuclei \cite{wan19,mor19,for19}. Several previous publications on
$^{11}$O and $^{10}$N are available \cite{aoy97,til04,cha12}. The
comparison of isobaric analogue structures is a classical nuclear
discipline, which has provided strong support for the generalization
of the isospin concept from nucleons to nuclei
\cite{boh75,zel17,sie87}. Since halo structures only occur near
threshold for $s$- and $p$-wave valence nucleons, as in $^{11}$Li, the
structure may be strongly influenced even by small energy changes.
Thus, mirror nuclei on the driplines are most likely to exhibit larger
differences than stable nuclear mirrors \cite{jen04}.
The mirror pair, $^{11}$Li and $^{11}$O, are located on the neutron
dripline and slightly outside the proton dripline,
respectively. Still, both are accessible by experiments, which for
$^{11}$O is largely possible due to the Coulomb barrier. The major
effect is from the Coulomb interaction of the additional protons,
which has both direct and indirect influence. The same structure in
both nuclei produces an energy difference from the additional charge.
However, the structure itself is modified by this extra Coulomb
interaction, in turn resulting in a modified energy.
The total effect of the additional Coulomb interaction is quantified
in the Thomas-Ehrman shift \cite{ehr51,tho52}, which is defined as the
energy difference (apart from the neutron-proton mass difference)
between analogue states in mirror nuclei. This energy shift may
depend on the state, and is therefore probably especially sensitive
to variations of the halo structure between analogue states in
$^{11}$Li and $^{11}$O \cite{aue00,gri02,gar04}. The possible
structure variation between these analogue states may lead to sizable
state-dependent Thomas-Ehrman shifts. This may even change the
sequence of ground and excited states built on these valence
configurations.
The recent experimental activity on $^{11}$O and $^{10}$N
provides an opportunity to compare properties of mirror nuclei located around
different nucleon driplines, which has earlier proved to be
informative. Previous theoretical investigations already provided a
number of details on these nuclei. However, they are somewhat scattered,
as essentially all are incomplete in their description of the low-lying
states supported by the valence-nucleon $s_{1/2}$ and $p_{1/2}$
single-particle states, since only the $3/2^-$-state is considered.
The only exception is Ref.~\cite{wan19}, where the positive-parity
state $5/2^+$ is also investigated.
In general, the connection between these mirror nuclei is not
particularly well explored. Furthermore, the previous results deviate
quantitatively from each other, due either to different methods,
different interactions, or perhaps differences in accuracy. We therefore decided
to investigate these low-lying nuclear states by use of our well-established
few-body method, which, due to the phenomenological input,
is both simple and accurate. Thus, in the present paper we
report on detailed studies of low-energy properties of $^{11}$O in
comparison to similar investigations of $^{11}$Li. Our purpose is
two-fold, that is first to discuss few-body properties of the specific
$^{11}$O-nucleus, and second to look for general conclusions by
studying this mirror of the prototype of a halo nucleus. Larger
differences can be expected for such dripline structures in comparison
and in contrast to stable mirror nuclei.
The paper, in section II, first briefly presents the applied
hyperspherical adiabatic expansion method \cite{nie01}, the
degrees-of-freedom, and the choice of interaction form. Section III
describes the choice of parameters and the derived properties of the
subsystems, $^{10}$Li and $^{10}$N. Section IV, V and VI are devoted
to the computed three-body properties of $^{11}$O specifically in
comparison to $^{11}$Li. In section VII we present a summary and the
conclusion.
\section{Sketch of the method}
\label{method}
The three-body calculations will be performed using the well-established
hyperspherical adiabatic expansion method described in detail in \cite{nie01}.
In this method the three-body wave function, with total angular momentum $J$
and projection $M$, is written as:
\begin{equation}
\Psi^{JM}=\frac{1}{\rho^{5/2}} \sum_n f^J_n(\rho) \Phi^{JM}_n(\rho,\Omega),
\label{exp0}
\end{equation}
where $\rho$ is the hyperradius, $\Omega$ collects the five
hyperangles as defined for instance in \cite{nie01}, and $f^J_n(\rho)$
are the radial expansion functions. The basis set
$\{\Phi^{JM}_n(\rho,\Omega)\}$ used in the expansion above is formed
by the eigenfunctions of the angular part of the Schr\"{o}dinger (or
Faddeev) equations,
\begin{equation}
\left[ \hat{\Lambda}^2 + \frac{2m\rho^2}{\hbar^2}(V_{12}+V_{13}+V_{23}) \right] \Phi_n^{JM}=\lambda_n(\rho) \Phi_n^{JM}(\rho,\Omega),
\label{angf}
\end{equation}
where $\hat{\Lambda}$ is the grand-angular momentum operator \cite{nie01},
$V_{ij}$ is the interaction between particles $i$ and $j$,
$m$ is the normalization mass used to define the Jacobi coordinates \cite{nie01},
and $\lambda_n(\rho)$ is the eigenvalue associated to the angular eigenfunction
$\Phi_n^{JM}(\rho,\Omega)$.
In practice, Eq.(\ref{angf}) is solved after the expansion
\begin{equation}
\Phi^{JM}_n(\rho,\Omega)=\sum_q C_q^{(n)}(\rho)
\left[ {\cal Y}_{\ell_x\ell_y}^{K L}(\Omega) \otimes \chi_{s_x s_y}^S \right]^{JM},
\label{exp1}
\end{equation}
where $q$ collects all the quantum
numbers $\{K,\ell_x,\ell_y,L,s_x,S\}$, where $\ell_x$ and $\ell_y$ are the
relative orbital angular momenta between two of the particles, and between the
third particle and the center-of-mass of the first two, respectively. The total
orbital angular momentum $L$ results from the coupling of $\ell_x$ and $\ell_y$.
The quantum number $K$ is the so-called hypermomentum, which is defined as
$K=2\nu +\ell_x +\ell_y$, with $\nu=0,1,2,\cdots$. The dependence on these quantum numbers,
$\ell_x$, $\ell_y$, $L$, and $K$, is contained in the usual hyperspherical
harmonics, ${\cal Y}_{\ell_x\ell_y}^{K L}(\Omega)$, whose definition can also
be found in \cite{nie01}, and which satisfy $\hat{\Lambda}^2 {\cal Y}_{\ell_x\ell_y}^{K L}=
K(K+4){\cal Y}_{\ell_x\ell_y}^{K L}$.
In the same way, $s_x$ is the total spin of two of the particles, which couples to the
spin of the third particle, $s_y$, to give the total spin $S$. The total spin function
is represented in Eq.(\ref{exp1}) by $\chi_{s_x s_y}^S$. Finally $L$ and $S$ couple to the
total three-body angular momentum $J$ with projection $M$.
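The bookkeeping of these quantum numbers is easy to make concrete. The sketch below (our own illustration, not code from the paper) enumerates all $(K,\ell_x,\ell_y)$ combinations with $K=2\nu+\ell_x+\ell_y\leq K_{max}$ that can couple to a given total orbital angular momentum $L$:

```python
# Illustrative enumeration of hyperspherical quantum numbers: all
# (K, l_x, l_y) with K = 2*nu + l_x + l_y <= K_max and with l_x, l_y
# coupling to a given total orbital angular momentum L.
def channels(K_max, L):
    out = []
    for lx in range(K_max + 1):
        for ly in range(K_max + 1 - lx):
            if abs(lx - ly) <= L <= lx + ly:          # triangle rule for L
                for nu in range((K_max - lx - ly) // 2 + 1):
                    out.append((2 * nu + lx + ly, lx, ly))
    return sorted(out)

# e.g. the L = 0 channels up to K_max = 4
print(channels(4, 0))
```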
Obviously the definition of the $\bm{x}$ and $\bm{y}$ coordinates (the Jacobi coordinates) is not unique, since
for three-body systems three different sets of Jacobi coordinates can be formed \cite{nie01}. When
solving the Schr\"{o}dinger equation a choice has to be made, which means that only one
of the internal two-body subsystems is treated in its natural coordinate. In this work, however,
we solve instead the Faddeev equations, which have the nice property of treating all the three possible sets
of Jacobi coordinates on the same footing \cite{nie01}.
The radial functions, $f^J_n(\rho)$, in Eq.(\ref{exp0}) are
obtained after solving the set of coupled equations
\begin{eqnarray}
\lefteqn{
\hspace*{-1cm}
\left[
-\frac{\partial^2}{\partial \rho^2}+\frac{\lambda_n(\rho)+\frac{15}{4}}{\rho^2} -\frac{2mE}{\hbar^2}
\right] f_n^J( \rho)=
} \nonumber \\ & &
\sum_{n'} \left(2P_{nn'}(\rho)\frac{\partial}{\partial\rho} +Q_{nn'}(\rho) \right) f_{n'}^J(\rho),
\label{radf}
\end{eqnarray}
where $E$ is the three-body energy, and $\lambda_n(\rho)$ is obtained from the angular equation (\ref{angf}).
The explicit form and properties of the coupling functions $P_{nn'}(\rho)$ and
$Q_{nn'}(\rho)$ can be found in \cite{nie01}.
The set of equations (\ref{radf}) has to be solved by imposing the
appropriate asymptotic behaviour on the radial wave functions. This is
particularly simple for bound states, due to the asymptotic
exponential fall-off of the radial wave functions. In order to exploit
the simplicity of this asymptotic behaviour, we compute resonances
(understood as poles of the $S$-matrix) by means of the complex
scaling method \cite{ho83,moi98}. In this method the three-body
energy is allowed to be complex, and the radial coordinates are
rotated into the complex plane by an arbitrary angle $\theta$ ($\rho
\rightarrow \rho e^{i\theta}$). Under this transformation, and
provided that $\theta$ is sufficiently large, the resonance wave
function behaves asymptotically as a bound state, i.e., it decays
exponentially at large distances, and its complex energy,
$E=E_R-i\Gamma_R/2$, gives the resonance energy, $E_R$, and the
resonance width, $\Gamma_R$.
More specifically, after the complex scaling transformation,
Eqs.(\ref{radf}) are solved by imposing a box boundary condition. The continuum spectrum is then discretized, and
the corresponding discrete energies appear in the complex energy plane rotated by an angle equal to $2\theta$ \cite{ho83,moi98}.
The resonances show up as discrete points, independent of the complex scaling angle, and away from the cut corresponding to the continuum states.
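The rotation of the discretized continuum by $2\theta$ can be illustrated with a toy model (our own sketch, not the method of the paper): for a free particle in a box, the transformation $\rho\rightarrow\rho e^{i\theta}$ multiplies the kinetic energy by $e^{-2i\theta}$, so every discretized continuum energy acquires the argument $-2\theta$:

```python
# Toy illustration of the complex scaling property: under rho -> rho e^{i theta}
# the free kinetic operator picks up a factor e^{-2 i theta}, so the
# particle-in-a-box continuum energies rotate rigidly by 2*theta into
# the lower half of the complex energy plane.
import cmath, math

def scaled_free_energies(n, theta, box=20.0):
    # box spectrum E_k = (k pi / box)^2 / 2 (hbar = m = 1), complex scaled
    return [cmath.exp(-2j * theta) * (k * math.pi / box) ** 2 / 2.0
            for k in range(1, n + 1)]

theta = 0.3
print(scaled_free_energies(3, theta))
```

A genuine resonance, in contrast, stays put as $\theta$ varies, which is how it is identified in practice.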
Note that an accurate enough solution of the three-body problem requires convergence at two
different levels. First, one needs convergence in the expansion of the angular eigenfunctions in Eq.(\ref{exp1}), which
is necessary in order to obtain sufficient accuracy in the $\lambda_n$-eigenvalues in the radial equations
(\ref{radf}). A correct convergence
requires inclusion of the relevant $\{\ell_x,\ell_y,L,s_x,S\}$-components, and, for each of them, a sufficiently
large value, $K_{max}$, of the hypermomentum $K$ is also needed. Second, one has to reach convergence as well in the
expansion in Eq.(\ref{exp0}), which implies a sufficiently large number of adiabatic terms.
Typically, the
convergence in the expansion (\ref{exp0}) is rather fast, and for bound states and resonances (after the
complex scaling transformation) four or five
terms are usually enough. However, the expansion (\ref{exp1}) is more demanding, especially when
dealing with particles with non-zero spin, since the number of components
can increase significantly for a given total three-body angular momentum $J$. Also, for extended systems, for which
the $\lambda_n$-functions have to be accurately computed at large distances, the required maximum
value of the hypermomentum, $K_{max}$, can be rather large.
Given a three-body system, the key quantities determining its properties are the two-body potentials entering in Eq.(\ref{angf}). In this work we shall assume that the nucleon-nucleon interaction is the GPT potential described in \cite{gog70}.
For the core-nucleon potential we choose an interaction, adjusted independently for the different partial waves, each term of the form:
\begin{equation}
V_{Nc}^{(\ell)}(r)=V_c^{(\ell)}(r)+V_{ss}^{(\ell)}(r) \bm{s}_c\cdot (\bm{\ell}+\bm{s}_N)+V_{so}^{(\ell)} \bm{\ell}\cdot \bm{s}_N,
\label{eq1}
\end{equation}
where $\bm{\ell}$ is the relative orbital angular momentum between the
core and the nucleon, whose intrinsic spins are denoted by $\bm{s}_c$
and $\bm{s}_N$, respectively. As shown in \cite{gar03}, this
spin-operator structure, which is consistent with the mean-field
description of the nucleons in the core, is crucial for a correct
implementation of the Pauli principle.
Obviously, when the interaction involves two charged particles, the Coulomb potential should be added to the interactions
described above. In this work we shall describe the core as a uniformly charged sphere with radius equal to the charge radius,
which for $^9$C will be taken equal to 2.5 fm. We assume all nucleons are point-like particles.
\section{The core-nucleon system}
\label{pots}
For the case of $^{11}$Li ($^9$Li+$n$+$n$) and its mirror partner,
$^{11}$O ($^9$C+$p$+$p$), it is clear that the essential ingredient is
the nuclear part of the core-nucleon interaction. Due to the charge
symmetry of the strong interaction, these potentials will be the same
for both, $^{10}$Li ($^9$Li+$n$) and $^{10}$N ($^9$C+$p$), since also
the $^{9}$Li and $^{9}$C cores are mirror nuclei. Table~\ref{tab1a}
contains the parameters used in this work for the potential form given
in Eq.(\ref{eq1}) with the $s$- and $p$-state parameters from
Ref.\cite{mor19}. The radial shapes are for convenience chosen to be
Gaussians with the same range in all terms. The actual shape is
unimportant as long as it is of short range with a range consistent
with the core-size.
\begin{table}
\begin{tabular}{|c|c|c|c|}
\hline
$\ell$ & $S_{c}^{(\ell)}$ & $S_{ss}^{(\ell)}$ & $S_{so}^{(\ell)}$ \\ \hline
0 &$-5.4$ & $-4.5$ & -- \\
1 & 260.75 & 1.0 & 300 \\
2 & 260 & $-9.0$ & $-300$ \\ \hline
\end{tabular}
\caption{The strength parameters, $S_i^{(\ell)}$, in MeV for the
Gaussian core-nucleon potentials, $V_i^{(\ell)}=S_i^{(\ell)}
e^{-r^2/b^2}$, defined in Eq.(\ref{eq1}), with the $s$ and $p$
partial waves as in Ref.\cite{mor19} (also denoted P1I in Ref.\cite{cas17}). We
choose the same numerical value, $b = 2.55$~fm, for the range
parameter, $b$, in all terms and partial waves. }
\label{tab1a}
\end{table}
The two decisive properties of the nucleon-core system are the positions
of the two-body resonances, and the exclusion of Pauli forbidden states
occupied by the core-nucleons. The first property is achieved by the
numerical values specified in Table~\ref{tab1a}.
The second property is fulfilled by use of the shallow $s$-wave
potential without a bound state, and a large and inverse (positive)
sign of the $p$-wave spin-orbit strength, which places the
$p_{3/2}$-shell at an unreachable high energy. In this way, by
construction, the valence-nucleon can not occupy the Pauli forbidden
$s_{1/2}$- and $p_{3/2}$-shells, which already are occupied by the six
neutrons or the six protons in the $^9$Li or $^9$C-core.
\begin{figure}
\begin{center}
\includegraphics[width=\linewidth,angle=0]{pots.eps}
\end{center}
\caption{(a): Core-nucleon potentials for $s_{1/2}$ states, $1^-$ (red) and $2^-$ (black), in $^{10}$Li (solid lines) and $^{10}$N (dashed lines). (b): The
same as in panel (a), but for the $p_{1/2}$ states, $1^+$ (red) and $2^+$ (black).}
\label{figpot}
\end{figure}
The resulting nucleon-core potentials are shown in Fig.~\ref{figpot}
for the $s_{1/2}$-states, $1^-$ (red) and $2^-$ (black) in panel (a),
and for the $p_{1/2}$-states, $1^+$ (red) and $2^+$ (black) in panel
(b). The solid and dashed lines refer, respectively, to the $^{10}$Li
and $^{10}$N-cases. The difference between them arises from the
Coulomb repulsion entering the $^9$C-proton interaction in $^{10}$N.
The $s$-wave potentials in the left panel of
Fig.~\ref{figpot} reflect our choices for $^{10}$Li: an attractive
$2^-$-potential placing a virtual nucleon-core state very close to
zero energy, and a $1^-$-potential that is very weak and
slightly repulsive. The same potentials for $^{10}$N are pushed up by
the Coulomb repulsion; the $2^-$-potential still has an
attractive short-range part, whereas the $1^-$-potential is clearly
repulsive overall. The $p$-wave potentials in the right panel of
Fig.~\ref{figpot} all have an attractive short-range part, leading to
more or less known $p$-wave resonances in both $^{10}$Li and $^{10}$N.
As shown in \cite{gar02a,gar97}, the main properties of $^{11}$Li, as well
as the behaviour of the momentum distributions, are essentially determined
by the centroid energies of the spin-split $s$- and $p$-doublets.
Therefore, the subsequent three-body results would remain basically
unchanged if the order of the $1^-$ and $2^-$ virtual states, or of the
$1^+$ and $2^+$-resonances, were reversed.
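The $\sim$0.5 MeV $p$-doublet centroid quoted later in the comparison with \cite{kat99} follows from a $(2J{+}1)$-weighted average of the doublet energies (listed in Table~\ref{tab10Li}); the degeneracy weighting is the standard convention and is sketched here for orientation only:

```python
def centroid(levels):
    """(2J+1)-weighted centroid of a doublet.

    levels: list of (J, E) pairs; returns sum((2J+1)*E) / sum(2J+1)."""
    weight = sum(2 * J + 1 for J, _ in levels)
    return sum((2 * J + 1) * E for J, E in levels) / weight

# p-doublet of this work (Table values): 1+ at 0.32 MeV, 2+ at 0.58 MeV
print(centroid([(1, 0.32), (2, 0.58)]))   # close to 0.5 MeV
# same doublet from the reference calculation: 1+ at 0.42, 2+ at 0.71 MeV
print(centroid([(1, 0.42), (2, 0.71)]))   # close to 0.6 MeV
```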
\subsection{$^{10}$Li-properties}
\begin{table}
\begin{footnotesize}
\begin{center}
\begin{tabular}{|c|cc|cc|cc|c|}
\hline
&\multicolumn{2}{c|}{This work} &\multicolumn{2}{c|}{Ref.\cite{kat99}} & \multicolumn{2}{c|}{Exp.\cite{boh93}} & $\delta(E_R)=\frac{\pi}{2}$ \\ \hline
& $E_R$ & $\Gamma_R$ & $E_R$ & $\Gamma_R$ & $E_R$ & $\Gamma_R$ & $E_R$\\ \hline
1$^-$ & -- & -- & -- & -- & -- & -- & -- \\
2$^-$ & $-0.020$ & -- & $-0.028$ & -- & -- & -- & -- \\
1$^+$ & 0.32 & 0.19 & 0.42 & 0.19 & $0.42\pm0.05$ & $0.15\pm0.07$ & 0.37 \\
2$^+$ & 0.58& 0.49 & 0.71 & 0.40 & $0.80\pm0.08$ & $0.30\pm0.10$ & 0.78 \\
4$^-$ & 3.95& 2.45 & 4.13 & 3.12 & $4.47\pm0.10$ & $0.7\pm0.2$ & 4.88 \\ \hline
\end{tabular}
\end{center}
\end{footnotesize}
\caption{For $^{10}$Li, the second column shows the energies of the 1$^-$ and 2$^-$ virtual states, as well as the energies and widths of the $1^+$, $2^+$ and $4^-$ resonances,
obtained with the two-body potentials described in the text. The third column shows the energies and widths obtained in Ref.\cite{kat99}. In the fourth column
the available experimental data are given \cite{boh93}. The last column gives the resonance energies computed from the condition $\delta(E_R)=\pi/2$. All the energies, $E_R$, and
widths, $\Gamma_R$, are given in MeV.}
\label{tab10Li}
\end{table}
The $^{10}$Li-properties are determined by the potentials given by the solid curves in Figs.~\ref{figpot}a and \ref{figpot}b.
The computed spectrum is shown in the second column of Table~\ref{tab10Li}. The ground state, with an energy of about $-20$ keV, is a virtual $2^-$-state resulting
from the coupling of an $s_{1/2}$ valence-neutron with the $3/2^-$ ground-state of the core.
The corresponding potential is given by the solid black curve in Fig.~\ref{figpot}a. Due to the repulsive character of the potential
shown by the solid red curve in the same figure, the $1^-$ $s$-wave partner appears at high energy in the continuum. The
$p$-wave resonant-states, 1$^+$ and $2^+$, produced by the $p$-wave potential barriers (solid curves in Fig.~\ref{figpot}b), are
found at 0.32 MeV and 0.58 MeV, respectively, with corresponding widths of 0.19 MeV and 0.49 MeV.
\begin{figure}
\begin{center}
\includegraphics[width=\linewidth,angle=0]{10N.eps}
\end{center}
\caption{Complex energies of the $p_{1/2}$-resonances in $^{10}$Li, panel (a), and the $s_{1/2}$- and $p_{1/2}$-resonances in $^{10}$N, panel (b), after a
two-body complex scaling calculation using the potential described in Sect.~\ref{pots}.}
\label{fig1}
\end{figure}
The virtual state is obtained by finding the energy providing the correct divergent asymptotic behaviour produced by the
poles of the $S$-matrix located on the negative imaginary axis in the complex momentum plane. The resonances are also obtained as
poles of the $S$-matrix by means of the complex scaling method \cite{ho83,moi98}, which simplifies the numerical calculation by giving rise to an exponential fall-off of the complex rotated resonance wave functions. As mentioned in Sect.~\ref{method}, the complex rotated two-body problem is solved after discretization of the continuum by means of a box boundary condition.
The corresponding discrete energies appear in the complex energy plane
rotated by an angle equal to twice the angle used for the complex
scaling coordinate transformation. The resonances appear as discrete points,
independent of the complex scaling angle, away from the cuts (lines)
corresponding to continuum states. This is shown for $^{10}$Li in
Fig.~\ref{fig1}a. As we can see, a complex scaling angle of
$\theta=0.3$ rads is enough to capture the $1^+$ and
$2^+$-resonances. As shown in Table~\ref{tab10Li}, together with the
$1^+$ and $2^+$-states, the core-neutron potential described in
Sect.~\ref{pots} gives rise to a $4^-$-resonance (with the valence
neutron in the $d_{5/2}$-state) at 3.95 MeV with a width of 2.45 MeV.
For the sake of clarity in the figure, this resonance is not shown in
Fig.~\ref{fig1}a.
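The calculations themselves use the machinery of Sect.~\ref{method}; purely for orientation, the effect of complex scaling on a discretized spectrum can be sketched with a schematic $s$-wave Gaussian well (all units, $\hbar^2/2\mu=1$, and parameter values below are illustrative, not those of Table~\ref{tab1a}):

```python
import numpy as np

def spectrum(theta, n=600, rmax=15.0, v0=10.0, b=1.0):
    """Eigenvalues of the complex-scaled s-wave Hamiltonian
    H(theta) = -exp(-2i*theta) d^2/dr^2 + V(r*exp(i*theta)),
    V(r) = -v0*exp(-r^2/b^2), discretized on a box grid
    (schematic units hbar^2/(2 mu) = 1)."""
    h = rmax / (n + 1)
    r = np.arange(1, n + 1) * h
    lap = (-2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)) / h**2
    rc = r * np.exp(1j * theta)                     # rotated coordinate
    H = -np.exp(-2j * theta) * lap + np.diag(-v0 * np.exp(-(rc / b) ** 2))
    return np.linalg.eigvals(H)

e0 = spectrum(0.0)          # unrotated spectrum
e1 = spectrum(0.3)          # rotated: continuum swings down by 2*theta
bound = min(e0, key=lambda z: z.real)
print(bound, min(e1, key=lambda z: abs(z - bound)))
```

Bound-state (and resonance) eigenvalues are essentially stationary under the rotation, while the discretized continuum rotates by $2\theta$ into the lower half-plane, which is exactly the pattern seen in Fig.~\ref{fig1}.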
The computed spectrum can be compared to the one obtained in
Ref.\cite{kat99} (third column of Table~\ref{tab10Li}), where a
microscopic coupled-channel calculation is performed. The similar
virtual $s$-states in the two calculations are dictated by a demand to
reproduce measured two-neutron halo properties of $^{11}$Li in
subsequent calculations. In the fourth column of the table we give the
available experimental data. Note that the data for the $4^-$-state were
actually given a preliminary assignment of angular momentum
and parity, $2^-$, in Ref.\cite{boh93}. However, as suggested in
\cite{kat99}, the calculations might indicate that they actually
correspond to the $4^-$-resonance.
Although in both this work and Ref.\cite{kat99}, the energies are
computed as poles of the $S$-matrix, the agreement with the
experimental $1^+$ and $2^+$-energies \cite{boh93} seems to be worse
in our calculation. The centroid energy of the two spin-split
$p$-states deviates by about 100~keV (0.5~MeV in this work versus
0.6~MeV in \cite{kat99}).
However, in \cite{kat99} the calculation is performed by
fitting the energy of the $1^+$ $S$-matrix pole to the experimental energy of
the $1^+$-resonance, whereas in our case the experimental energies
are better reproduced by the energies at which the corresponding phase shifts are
equal to $\pi/2$. As seen in the last column in Table~\ref{tab10Li}, when
computed in this way,
our potential gives rise to 1$^+$ and $2^+$-energies equal to
0.37 MeV and 0.78 MeV, respectively, as well as to a $4^-$-energy of
4.88 MeV.
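For orientation, the $\delta(E_R)=\pi/2$ prescription only requires the elastic phase shift as a function of energy. A minimal $s$-wave sketch (Numerov integration plus asymptotic matching; the Gaussian strength and units, $\hbar^2/2\mu=1$, are illustrative, not the parameters of Table~\ref{tab1a}):

```python
import numpy as np

def phase_shift(energy, v0=0.5, b=1.0, rmax=12.0, h=0.01):
    """s-wave phase shift (modulo pi) for V(r) = -v0*exp(-r^2/b^2),
    schematic units hbar^2/(2 mu) = 1.  Numerov integration of
    u'' = (V - E) u, then matching of u at the two outermost grid
    points to A*sin(k r + delta)."""
    k = np.sqrt(energy)
    r = np.arange(0.0, rmax + h, h)
    f = -v0 * np.exp(-(r / b) ** 2) - energy     # u'' = f u
    u = np.zeros_like(r)
    u[1] = h                                      # regular solution, u(0)=0
    w = (1.0 - h**2 * f / 12.0) * u
    for i in range(1, len(r) - 1):
        w_next = 2.0 * w[i] - w[i - 1] + h**2 * f[i] * u[i]
        u[i + 1] = w_next / (1.0 - h**2 * f[i + 1] / 12.0)
        w[i + 1] = w_next
    r1, r2, u1, u2 = r[-2], r[-1], u[-2], u[-1]
    return np.arctan((u2 * np.sin(k * r1) - u1 * np.sin(k * r2))
                     / (u1 * np.cos(k * r2) - u2 * np.cos(k * r1)))

print(phase_shift(0.5, v0=0.0))   # free particle: delta ~ 0
print(phase_shift(0.5, v0=0.5))   # weak attraction: small positive delta
```

Scanning the energy for the crossing $\delta(E)=\pi/2$ then gives the alternative resonance-energy definition used in the last column of Table~\ref{tab10Li}.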
The differences between the resonance energies obtained through the
different mathematical definitions reflect an intrinsic uncertainty,
which can only be resolved by comparing calculations of directly
measured observables like specific scattering cross sections. In this
connection, it is important that the $^9$Li-neutron interaction used
in the present work also reproduces the experimental
excitation energy spectrum of $^{10}$Li after the breakup reaction,
$d(^9\mbox{Li},p)^{10}$Li, initiated at a $^{9}$Li laboratory energy
of $11.1$~MeV/A, see Ref.\cite{mor19}.
\subsection{$^{10}$N-properties}
\begin{table}
\begin{scriptsize}
\begin{center}
\begin{tabular}{|c|cc|cc|cc|cc|c|}
\hline
&\multicolumn{2}{c|}{This work} &\multicolumn{2}{c|}{Ref.\cite{aoy97}} &\multicolumn{2}{c|}{Ref.\cite{lep02}$^*$} &\multicolumn{2}{c|}{Ref.\cite{hoo17}} & $\delta(E_R)=\frac{\pi}{2}$ \\ \hline
& $E_R$ & $\Gamma_R$ & $E_R$ & $\Gamma_R$ & $E_R$ & $\Gamma_R$ & $E_R$ & $\Gamma_R$ & $E_R$ \\ \hline
1$^-$ & -- & -- & -- & -- & -- & --& $1.9^{+0.2}_{-0.2}$ & $2.5^{+2.0}_{-1.5}$ & -- \\
2$^-$ & 1.74 & 3.94 & 1.51 & 3.47 & -- & -- & $2.2^{+0.2}_{-0.2}$ & $3.1^{+0.9}_{-0.7}$ & -- \\
1$^+$ & 2.62 & 1.68 & 2.84 & 1.89 & $2.6^{+0.4}_{-0.4}$ & $2.3^{+1.6}_{-1.6}$ & -- & -- & 3.51 \\
2$^+$ & 2.89& 2.21 & 3.36 & 2.82 & -- & -- & -- & -- & 4.45 \\ \hline
\end{tabular}
\end{center}
\end{scriptsize}
\caption{For $^{10}$N, energies and widths, in MeV, of the 1$^-$, 2$^-$, $1^+$, and $2^+$ resonances obtained in our calculation (second column), the theoretical values
given in Ref.\cite{aoy97} (third column), and the experimental values given in Refs.\cite{lep02,hoo17} (fourth and fifth columns). The last column gives the resonance energies computed from the condition $\delta(E_R)=\pi/2$. (*) Although in \cite{lep02} the observed resonance was assigned as an $s$-wave resonance, it very likely corresponds,
as indicated in Ref.\cite{til04}, to the energy and width of the $1^+$ state.}
\label{tab1}
\end{table}
The mirror nucleus, $^{10}$N, is now assumed to have exactly the same
potentials as $^{10}$Li, except for the Coulomb interaction arising from the
substituted valence-proton. We assume point-like protons and a
spherical and homogeneously charged $^9$C-core. With these
interactions we now compute the spectrum of $^{10}$N as described by a
$^9$C-core and a proton. The immediate consequence of the Coulomb
repulsion is that all the core-nucleon potentials are pushed up in
energy, reducing the depth of the potentials, and increasing the
potential barriers. This is precisely what is seen when comparing the dashed and
solid curves in Fig.~\ref{figpot}.
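The core-proton Coulomb term for a point proton and a homogeneously charged sphere has the familiar closed form, sketched below; the sphere radius of 3.2~fm is a placeholder roughly consistent with the core rms radius of about 2.5~fm used later for the Coulomb shift:

```python
E2 = 1.44  # e^2/(4*pi*eps0) in MeV*fm (rounded standard value)

def v_coulomb(r, z_core=6, z_p=1, radius=3.2):
    """Coulomb potential (MeV) between a point charge z_p and a
    homogeneously charged sphere of charge z_core and radius `radius` (fm).
    Inside:  (Z z e^2 / 2R) * (3 - r^2/R^2);  outside:  Z z e^2 / r."""
    zz = z_core * z_p * E2
    if r >= radius:
        return zz / r
    return zz * (3.0 - (r / radius) ** 2) / (2.0 * radius)
```

Adding this monotonically decreasing repulsion to the Gaussian potentials of Table~\ref{tab1a} reduces their depth and raises the barriers, as seen in Fig.~\ref{figpot}.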
For the $s_{1/2}$-states the Coulomb barrier implies that the $1^-$ and
$2^-$-states might in principle appear as resonances. However, the
overall repulsive behaviour of the $1^-$-potential (dashed red curve
in Fig.~\ref{figpot}a) does not exhibit any barrier, and only $2^-$
resonant-states are then possible. As for $^{10}$Li, resonances in
$^{10}$N are obtained after a complex scaling calculation. The
results are shown in Fig.~\ref{fig1}b, where the resonances are the
isolated points out of the cut (line) associated with the continuum states.
The ground state, the $s$-wave $2^-$-state, is clearly broader than
the 1$^+$ and $2^+$ $p$-states, and therefore requires a larger angle
in the complex scaling transformation in order to be captured in the
calculation. In particular, the calculation shown in the figure has
been made using a complex scaling angle equal to 0.5 rads for the
$2^-$-state, and 0.3 rads for the $1^+$ and $2^+$-states.
The resonance energies and widths obtained for $^{10}$N are collected
in the second column of Table~\ref{tab1}, where the results are
compared to the values given in Refs.\cite{aoy97,lep02,hoo17}. Our
results are very consistent with the theoretical values given in
\cite{aoy97}, where the complex scaling method is also used. The
slightly different energies arise because the core-nucleon strong
interactions also produce slightly different energies for the
$^{10}$Li-states, see Table~\ref{tab10Li}. The experimental value in
Ref.\cite{lep02} was initially assigned to a 1$^-$-state, but in
\cite{til04} it is suggested that this resonance is very likely the
mirror of the probable $1^+$-state at 0.24~MeV in $^{10}$Li.
Finally, in Ref.\cite{hoo17} (fifth column in Table~\ref{tab1}) two
resonances have been measured with energies around 2.0~MeV, which are
assigned by the authors to states with angular momentum and parity, $1^-$ and
$2^-$. In this reference the authors also mention an excited $1^-$
or $2^-$ resonant-state with an energy of $2.8\pm 0.2$~MeV. As seen
in Fig.~\ref{fig1}b, we have not found any trace of such an excited
state with negative parity. It is in any case striking that the three
energies reported in \cite{hoo17} agree reasonably well with the three
energies obtained in this work for the $2^-$, $1^+$, and $2^+$-states.
Due to its large resonance width, the $2^-$-phase shift never reaches
the value of $\pi/2$, and therefore the energy of this resonance cannot
be extracted in this way. In contrast, this is possible for the
$1^+$ and $2^+$-states, where it happens at energies of 3.51
MeV and 4.45 MeV, respectively, clearly larger than the
energies obtained as poles of the $S$-matrix (last column in
Table~\ref{tab1}). Again, these different definitions reflect
an inherent uncertainty in the resonance parameters.
\section{The core-nucleon-nucleon system}
After discussing the two-body properties of $^{10}$Li and $^{10}$N, we
now investigate their effects on the structure and
properties of the three-body mirror nuclei, $^{11}$Li and $^{11}$O.
We shall do this in three steps: first, in this section, we
present the energy spectra; in the two following sections we then
discuss the properties of the different states, namely the
internal structure of the wave functions and the spatial distribution
of the three constituents.
\subsection{Energy spectrum of $^{11}$Li}
\begin{figure}
\begin{center}
\includegraphics[width=\linewidth,angle=0]{11O.eps}
\end{center}
\caption{Complex resonance energies for the computed $1/2^+$ (open brown circles), $3/2^+$ (solid green squares), and $5/2^+$ (open blue squares) states in
$^{11}$Li, panel (a), and $^{11}$O, panel (b), where the $3/2^-$-resonance (solid red circles) is also shown. The calculations have been performed with a complex scaling
angle $\theta=0.3$ rads.}
\label{fig2}
\end{figure}
The potentials used to describe $^{11}$Li are the same as
used in Ref.\cite{cas17}, where the properties of the computed $3/2^-$
ground-state wave function in $^{11}$Li are described. An effective
three-body force is used to fit the experimental two-neutron
separation energy of $369.15(65)$~keV \cite{smi08}, which leads to a
charge root-mean-square radius of 3.42 fm, also in agreement with the
experimental value reported in \cite{smi08}. In this work an attractive Gaussian
three-body force with a range of 5 fm and a strength of $-0.6$ MeV has
been used. In addition, as shown in
Ref.\cite{cas17}, the computed $^{11}$Li ground-state wave function
permits reproduction of the experimental energy-integrated angular
differential cross section for the $^{11}$Li$(p, d)^{10}$Li reaction
at 5.7 MeV/A.
In Ref.\cite{gar02} the electric dipole excitations in $^{11}$Li,
i.e. the $1/2^+$, $3/2^+$, and $5/2^+$ resonant-states, were
investigated by means of the complex scaling method
\cite{ho83,moi98}. It was found that the energies of these three
resonances are rather close to each other, with specific values
depending slightly on the properties of the core-neutron interaction.
In any case, they lie in the energy interval between 0.3~MeV and
0.7~MeV above the three-body threshold.
\begin{table}
\begin{tabular}{|c|c|c|c|}
\hline
& $^{11}$Li (Comp.) & $\Delta_c^{(1)}$ & $^{11}$O (Estim.)\\ \hline
$\frac{3}{2}^-$ & $-0.37$ & 4.93 & 4.56 \\
$\frac{1}{2}^+$ & $0.39-i0.15$ & $2.69-i0.77$ & $3.08-i0.92$ \\
$\frac{3}{2}^+$ & $0.35-i0.09$ & $3.41-i0.86$& $3.76-i0.95$\\
$\frac{5}{2}^+$ & $0.47-i0.22$ & $3.89-i1.20$& $4.36-i1.42$ \\ \hline
\end{tabular}
\caption{Computed complex energies, $E_R-i\Gamma_R/2$, of the $\frac{3}{2}^-$, $\frac{1}{2}^+$, $\frac{3}{2}^+$, and $\frac{5}{2}^+$ states in $^{11}$Li, the Coulomb shift for each of them as defined in Eq.(\ref{eq2}), and the estimated complex energies of the corresponding states in $^{11}$O. All the values are given in MeV.}
\label{tab0}
\end{table}
When the specific interactions used in this work are employed, the
complex scaling method reveals the existence of $1/2^+$, $3/2^+$, and
$5/2^+$-resonances. The results are shown in Fig.~\ref{fig2}a, where
isolated points appear clearly separated from the straight line
corresponding to the background continuum states. The resonances of
interest are indicated by the arrows in the figure. The precise
computed values for the resonance energies and widths, $(E_R,
\Gamma_R)$, are $(0.39, 0.30)$ MeV, $(0.35, 0.18)$ MeV, and $(0.47,
0.44)$ MeV for the $1/2^+$, $3/2^+$, and $5/2^+$-states, respectively.
This is also given in the second column in Table~\ref{tab0} as a
complex number for each state.
\subsection{Energy spectrum of $^{11}$O}
As expected, due to the Coulomb repulsion, the ground state in $^{11}$O
is not bound. Therefore, in this case all the states, $J^\pi=3/2^-$,
$1/2^+$, $3/2^+$, and $5/2^+$, will be computed by means of the
complex scaling method. We have used a complex scaling angle of
$\theta=0.30$ rads, and the result of the calculation is shown in
Fig.~\ref{fig2}b. The straight line, rotated by an angle equal to
$2\theta$, contains the discretized continuum states, and the points
out of this line correspond to the different resonances. The lowest
$3/2^-$, $1/2^+$, $3/2^+$, and $5/2^+$-states are indicated by the
corresponding arrows.
As seen in the figure, in all the cases a second resonance is found in
the vicinity of $E_R=5$ MeV. In the 5/2$^+$-case even a third
resonance around 5.5 MeV is seen. In order to keep the plot clean, the
cuts associated with the two-body resonances, i.e. a two-body resonance
plus the third particle in the continuum \cite{moi98}, are not shown
in the figure.
\begin{table}
\begin{tabular}{|c|cc|cc|cc|cc|}
\hline
& \multicolumn{2}{c|}{This work} & \multicolumn{2}{c|}{Ref.\cite{web19,wan19}} & \multicolumn{2}{c|}{Ref.\cite{cha12}} & \multicolumn{2}{c|}{Ref.\cite{for19}} \\ \hline
& $E_R$ & $\Gamma_R$ & $E_R$ & $\Gamma_R$& $E_R$ & $\Gamma_R$ & $E_R$ & $\Gamma_R$ \\ \hline
$\frac{3}{2}^-$ & 4.74 & 2.75 & 4.16 & 1.30 & $3.21\pm 0.84$ & -- & 4.75 & 2.51 \\
& 4.97 & 5.07 & 4.85 & 1.33 & -- &-- & -- &-- \\ \hline
$\frac{1}{2}^+$ & 3.77 & 2.74 & -- & -- & -- & -- & -- & -- \\
 & 5.02 & 4.87 & -- & -- & -- & -- & -- & -- \\ \hline
$\frac{3}{2}^+$ & 3.79 & 2.84 & -- & -- & -- & -- & -- & -- \\
 & 4.93 & 4.65 & -- & -- & -- & -- & -- & -- \\ \hline
$\frac{5}{2}^+$ & 4.16 & 3.38& 4.65 & 1.06 & -- &-- & -- &-- \\
& 4.89 & 5.36 & 6.28 & 1.96 & -- &-- & -- &-- \\ \hline
\end{tabular}
\caption{For $^{11}$O, the second column gives the computed energies, $E_R$, and widths, $\Gamma_R$, of the $3/2^-$, $1/2^+$, $3/2^+$ and $5/2^+$-states. The last
three columns show the results given in Refs.\cite{web19,wan19}, Ref.\cite{cha12}, and Ref.\cite{for19}, respectively. Both energies and widths are given in MeV.}
\label{tab3}
\end{table}
The precise values of the resonance energies, $E_R$, and widths,
$\Gamma_R$, are given in the second column of Table~\ref{tab3} for the
two lowest resonances for each of the computed $J^\pi$-states.
As seen in the table, the $1/2^+$, $3/2^+$,
and $5/2^+$-energies are similar to each other, especially the 1/2$^+$
and $3/2^+$-states, which are almost degenerate. Therefore, very
likely one of these states should actually be the ground state, since
the $3/2^-$-energy is at least 0.6~MeV higher.
At this point, we emphasize that the computed resonance energies and widths have
been obtained without inclusion of any three-body force in the radial
Eqs.(\ref{radf}). When a short-range effective three-body potential
is included, we have observed that the effect is clearly larger for
the 3/2$^-$-state than for the positive-parity resonances. This is an
indication that for the $3/2^-$-case the core and the two
valence-protons are clearly closer to each other than for the $1/2^+$,
$3/2^+$, and $5/2^+$-states.
In particular, if we use a Gaussian three-body force with a range of 5
fm and an attractive strength of $-4$~MeV, the energies and widths of
the positive parity resonances remain essentially unchanged, whereas
for the lowest $3/2^-$-state we get an energy of 3.72 MeV (with a width of
1.18 MeV), similar to that of the other resonances. The conclusion
is then that, from the pure three-body calculation, it is difficult, or
even impossible, to determine which $J^\pi$-state is actually the
ground state in $^{11}$O. In any case, it seems clear that we cannot
exclude the possibility that a state other than the $3/2^-$-state
becomes the ground state in $^{11}$O.
In Table~\ref{tab3} we compare our results with previous works. In all
of them, the ground state is assigned to an angular momentum and
parity, $3/2^-$. Only in Refs.\cite{web19,wan19} is the positive-parity
state $5/2^+$ also considered. In these references,
\cite{web19,wan19}, the computed resonances are clearly narrower than
the ones obtained in the present work, and the fact that the two
energies given for the lowest 3/2$^-$ and $5/2^+$-states are similar
to ours, but with the quantum numbers exchanged, is probably just an
accidental coincidence.
The reason for this difference is difficult to determine.
On the one hand, in our work the core is assumed to be spherical, in
contrast to Refs.\cite{web19,wan19}. Our three-body approach treats
the constituents as inert particles with central two-body
interactions, and it is therefore insensitive to deformation; we may
thus have ignored a possibly significant role played by the
deformation. On the other hand, in Refs.\cite{web19,wan19} the
calculations are performed taking a maximum value for the
hypermomentum, $K_{max}$, equal to $20$. As we shall discuss later,
this value might be too small to guarantee that convergence has been
reached in the calculations.
In any case,
as discussed in \cite{mis97}, for sufficiently weakly bound systems, or
in other words, provided the valence nucleons are located at relatively large
distance from the core, ``the quadrupole deformation of the resulting halo
is completely determined by the intrinsic structure of a weakly bound
orbital, irrespective of the shape of the core''.
Furthermore, our phenomenological choice of interaction
parameters necessarily accounts for at least part of the effects of the
core-deformation.
A further comparison is found in Ref.\cite{cha12},
where a clearly lower energy, 3.21 MeV, although with a large error bar, is given for the
$3/2^-$-state. However, this energy has not been computed directly, but
obtained as an extrapolation using the isobaric multiplet
mass-equation, whose coefficients are determined from the shell-model
energies computed for $^{11}$Li, $^{11}$Be, and $^{11}$B. In addition
to these energy properties, it was argued in Ref.\cite{for19} that
the experimental breakup data given in \cite{web19} can equally well be
reproduced by the ground state of $^{11}$O alone, whose energy and
width are given in Table~\ref{tab3} as 4.75 MeV and 2.51 MeV,
respectively, very similar to our lowest $3/2^-$ energy and width.
\subsection{Coulomb shift}
Due to the charge symmetry of the strong interaction, the
$^{11}$O-states computed in the previous subsection have been obtained
simply by adding the Coulomb potential to the nuclear interactions
used to describe $^{11}$Li. The repulsive character of the Coulomb
interaction, between the two valence-protons and between each of the
valence-protons and the core, is obviously the reason for the increase
of the energies.
An estimate of how much these energies should be modified by the Coulomb repulsion can be obtained as the first-order perturbative value of the Coulomb shift:
\begin{equation}
\Delta_c^{(1)}=\langle \Psi(\mbox{$^{11}$Li}) | V_{coul}| \Psi(\mbox{$^{11}$Li}) \rangle,
\label{eq2}
\end{equation}
where $\Psi(\mbox{$^{11}$Li})$ is the three-body $^{11}$Li wave
function corresponding to a given state, but where the
valence-neutrons are replaced by protons, and the charge of the core
is taken to be that of the mirror nucleus, $^9$C. Therefore,
$\Psi(\mbox{$^{11}$Li})$ in Eq.(\ref{eq2}) represents an artificial
$^{11}$O wave function, which is assumed to have the same structure as
the corresponding $^{11}$Li-state. The $V_{coul}$-potential is the
resulting Coulomb interaction between the three pairs of charged
particles, where the protons are point-like and the $^9$C-core is
spherical and uniformly charged, corresponding to a root-mean-square
radius of about $2.5$ fm. The first-order Coulomb shift,
$\Delta_c^{(1)}$, is then the diagonal contribution to the Coulomb
shift.
As already mentioned, the $1/2^+$, $3/2^+$, and $5/2^+$-states in $^{11}$Li have been obtained by means of the complex scaling method. The wave functions are then complex
rotated, and the corresponding value of $\Delta_c^{(1)}$ has to be obtained after complex rotation of the Coulomb potential. In this way $\Delta_c^{(1)}$ becomes a complex quantity, whose imaginary part can be interpreted as the uncertainty in the energy shift \cite{moi98}. In other words, the imaginary part of $\Delta_c^{(1)}$ in the $1/2^+$, $3/2^+$, and $5/2^+$-cases also permits us to estimate the change in the width of the resonance.
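A toy analogue of Eq.(\ref{eq2}) already shows the size systematics discussed below: averaging a point-Coulomb interaction over a single normalized Gaussian density of width $a$ gives a shift proportional to $1/a$, so more compact states acquire larger shifts. The single relative coordinate and the width values are simplifying assumptions, not the actual three-body wave function:

```python
import numpy as np

E2 = 1.44  # e^2/(4*pi*eps0) in MeV*fm (rounded standard value)

def coulomb_shift(a, z1=6, z2=1, rmax=60.0, n=200_000):
    """First-order shift <psi|V_c|psi> for a normalized 3D Gaussian
    density |psi|^2 = (pi a^2)^(-3/2) exp(-(r/a)^2) and a point
    interaction V_c = z1*z2*e^2/r.  The analytic result is
    z1*z2*e^2 * 2/(sqrt(pi)*a); here it is done by midpoint quadrature."""
    dr = rmax / n
    r = (np.arange(n) + 0.5) * dr                  # midpoint grid
    rho = 4.0 * np.pi * r**2 * np.exp(-(r / a) ** 2) / (np.pi * a**2) ** 1.5
    return float(np.sum(rho * z1 * z2 * E2 / r) * dr)

# a wider (more halo-like) density feels less Coulomb repulsion,
# mirroring the smaller Delta_c of the extended positive-parity states
print(coulomb_shift(a=4.0), coulomb_shift(a=8.0))
```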
In Table~\ref{tab0} we give the computed energies of the
$^{11}$Li-states (second column) together with the computed values of
$\Delta_c^{(1)}$ for the four states considered (third
column). The results depend slightly on how the Coulomb potential is
constructed, but the overall size and relations are very well
determined. As we can see, the value of $\Delta_c^{(1)}$ is
substantially larger for the $3/2^-$-state than for the other three
states, which again is an indication of the smaller size of the
$3/2^-$-state, since the smaller the system the larger the Coulomb
repulsion, and therefore the larger the value of $\Delta_c^{(1)}$.
When this shift is added to the $^{11}$Li-energies, we obtain the
estimate for the energies of the $^{11}$O-states given in the last
column of the table. In the case of the $3/2^-$-state, since the
$^{11}$Li wave function is real, the shift $\Delta_c^{(1)}$ is also
real, and an estimate of the width in the $3/2^-$-state in $^{11}$O is
not possible in this way. As we can see, the estimated energies given
in the last column of Table~\ref{tab0} are quite reasonable, rather
close to the computed energies given in Table~\ref{tab3} for the
lowest $3/2^-$, $1/2^+$, $3/2^+$, and $5/2^+$ states. The only
exception is perhaps the $1/2^+$-state, where a difference of about
0.7 MeV is found.
These similarities show that the
variations in the energy shift due to structure
differences are, as expected, relatively small.
The difference between the computed energy shift, $\Delta_c^{(1)}$, for
a given state and the shift, $\Delta_c$, between the
experimental energies is a measure of the structure effect of the
Thomas-Ehrman shift, that is, $\Delta_{TE}=\Delta_c - \Delta_c^{(1)}$.
Recent calculations of $\Delta_{TE}$ concerning different light
mirror nuclei are available in the literature. For example, in
Ref.\cite{aue00} the shift between the mirror system, $^{11}$Be and
$^{11}$N, was investigated. In \cite{gri02} the same was done for
$^{12}$Be-$^{12}$O and $^{16}$C-$^{16}$Ne, and in Ref.\cite{gar04}
this shift was computed for the case of $^{17}$N and $^{17}$Ne. In all
the cases the value of $\Delta_{TE}$ was found to be no more
than a few hundred keV. These sizes are consistent with the
energy difference between the lowest $J^\pi$-energies obtained in our
calculation (Table~\ref{tab3}), and the estimated energies given in
the last column of Table~\ref{tab0}.
\section{Three-body wave functions}
\begin{table}
\begin{tabular}{|c|cccccc|cc|}
\hline
& \multicolumn{6}{c|}{Component} &\multicolumn{2}{c|}{\%} \\ \hline
$J^\pi$ & $\ell_x$ & $\ell_y$ & $L$ & $s_x$ & $S$ & $K_{max}$ & $^{11}$Li & $^{11}$O\\ \hline
$\frac{3}{2}^-$ & 0 & 0 & 0 & 1 & $3/2$ & 120 &22\% & 5\% \\
& 0 & 0 & 0 & 2 & $3/2$ & 120 &35\% & 7\% \\
& 1 & 1 & 0 & 1 & $3/2$ & 60 & 6\% & 11\% \\
& 1 & 1 & 0 & 2 & $3/2$ & 80 &10\% &19\% \\
& 1 & 1 & 1 & 1 & $1/2$ & 60 & 4\% & 9\% \\
& 1 & 1 & 1 & 1 & $3/2$ & 40 &5\% & 11\% \\
& 1 & 1 & 1 & 2 & $3/2$ & 40 &3\% & 7\% \\
& 1 & 1 & 1 & 2 & $5/2$ & 80 &12\% &28\% \\ \hline
$\frac{1}{2}^+$ & 0 & 1 & 1 & 1 & $1/2$ & 121 & $<1$\% & 5\%\\
& 0 & 1 & 1 & 1 & $3/2$ & 201 &30\% & 37\% \\
& 1 & 0 & 1 & 1 & $1/2$ & 121 &6\% & 6\% \\
& 1 & 0 & 1 & 1 & $3/2$ & 121 & 7\% & 6\% \\
& 1 & 0 & 1 & 2 & $3/2$ & 201 &42\% & 41\% \\
& 1 & 2 & 1 & 1 & $3/2$ & 61 & 7\% & 5\% \\
& 2 & 1 & 1 & 1 & $3/2$ & 41 & 1\% & $<1$\% \\ \hline
$\frac{3}{2}^+$ & 0 & 1 & 1 & 1 & $1/2$ & 101 &1\% & $<1$\% \\
& 0 & 1 & 1 & 1 & $3/2$ & 101 &1\% & 1\%\\
& 0 & 1 & 1 & 2 & $3/2$ & 161 &9\% & 7\% \\
& 0 & 1 & 1 & 2 & $5/2$ & 201 &34\% & 32\% \\
& 1 & 0 & 1 & 1 & $1/2$ & 101 &1\% & 1\% \\
& 1 & 0 & 1 & 1 & $3/2$ & 161 &8\% & 7\% \\
& 1 & 0 & 1 & 2 & $3/2$ & 161 &5\% &4\%\\
& 1 & 0 & 1 & 2 & $5/2$ & 201 &36\% &43\% \\
& 1 & 2 & 1 & 2 & $5/2$ & 41 & 2\% & 4\% \\ \hline
$\frac{5}{2}^+$ & 0 & 1 & 1 & 2 & $3/2$ & 201 &24\% & 23\%\\
& 0 & 1 & 1 & 2 & $5/2$ & 201 &25\% & 20\% \\
& 1 & 0 & 1 & 1 & $3/2$ & 201 &25\% & 26\% \\
& 1 & 0 & 1 & 2 & $3/2$ & 81 &3\% & 2\% \\
& 1 & 0 & 1 & 2 & $5/2$ & 201 &23\% & 25\% \\
& 1 & 2 & 1 & 2 & $3/2$ & 31 & $<1$\% & 2\% \\
& 1 & 2 & 1 & 2 & $5/2$ & 31 & $<1$\% & 2\% \\ \hline
\end{tabular}
\caption{Dominant components (larger than 1\% probability) in the
lowest $3/2^-$, $1/2^+$, $3/2^+$, and $5/2^+$ wave functions in
$^{11}$Li and $^{11}$O in the Jacobi set with the
$\bm{x}$-coordinate defined between the core and one of the
valence nucleons. The quantum numbers are as defined below Eq.(\ref{exp1}).
Note that the core has negative parity.}
\label{tabcom0}
\end{table}
The calculation of the $^{11}$Li and $^{11}$O three-body states has
been made including all the components satisfying $\ell_x,\ell_y \leq
7$, where $\ell_x$ and $\ell_y$ are the relative angular momenta
between two of the particles, and between their center-of-mass and the
third particle, respectively. The maximum value of the hypermomentum,
$K_{max}$, has to be sufficiently large to reach convergence, but for
all partial waves it has been taken to be at least $20$. In Table~\ref{tabcom0} we
give the partial wave decomposition and the components with probability
larger than 1\% for each of the lowest $J^\pi$-states. Note here that the
use of the complex scaling method permits us to normalize the resonance
wave functions as described in Ref.\cite{moi98}. The quantum numbers
in the table are as described below Eq.(\ref{exp1}), with the
$\bm{x}$-Jacobi coordinate defined between the core and one of the
valence nucleons.
As seen in the table, the $K_{max}$-values used for these components are
rather large, very likely more than needed for a
well-converged three-body solution, especially for $^{11}$Li. A careful
analysis of each individual component could certainly reduce the
$K_{max}$-value. As a test, we have performed the same calculations
using $K_{max}=20$ for all the components. These less accurate
calculations result in an increase of the three-body energies by up to
0.4 MeV for the $^{11}$Li-states and by up to 1 MeV for the
$^{11}$O-resonances. The only exception is the lowest $3/2^-$-state (bound in
the case of $^{11}$Li), for which the increase in energy is of about
50 keV for $^{11}$Li, and of about 200~keV for $^{11}$O. This once
more reflects the smaller size of the lowest $3/2^-$-state compared
to the positive-parity states, since the closer the particles are to each
other, the smaller is the basis necessary to reach convergence.
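To see why the truncation matters, recall that for a fixed $(\ell_x,\ell_y)$ component the hypermomentum takes the values $K=\ell_x+\ell_y+2n$ with $n=0,1,2,\dots$, so the number of hyperspherical harmonics available to that component grows linearly with $K_{max}$; a counting sketch, for orientation only:

```python
def n_hyperharmonics(k_max, lx, ly):
    """Number of hyperspherical harmonics with K = lx + ly + 2n <= k_max
    for fixed orbital angular momenta (lx, ly)."""
    if k_max < lx + ly:
        return 0
    return (k_max - lx - ly) // 2 + 1

# truncating at K_max = 20 instead of, e.g., 201 (a Table value) shrinks
# the basis of an (lx, ly) = (0, 1) channel by an order of magnitude
print(n_hyperharmonics(20, 0, 1), n_hyperharmonics(201, 0, 1))
```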
\begin{table}
\begin{tabular}{|c|ccccc|cc|}
\hline
& \multicolumn{5}{c|}{Component} & \multicolumn{2}{c|}{\%} \\ \hline
$J^\pi$ & $\ell_x$ & $j_N$ & $j_x$ & $\ell_y$ & $j_y$ & $^{11}$Li & $^{11}$O \\ \hline
$\frac{3}{2}^-$ & 0 & 1/2 & 1 & 0 & $1/2$ & 22\% & 5\%\\
& 0 & 1/2 & 2 & 0 & $1/2$ & 35\% & 7\%\\
& 1 & 1/2 & 1 & 1 & $1/2$ & 15\% & 32\%\\
& 1 & 1/2 & 2 & 1 & $1/2$ & 25\% & 53\% \\ \hline
$\frac{1}{2}^+$ & 0 & 1/2 & 1 & 1 & $1/2$ & 34\% & 41\%\\
& 1 & 1/2 & 1 & 0 & $1/2$ & 52\% & 51\%\\
& 1 & 1/2 & 2 & 2 & $3/2$ & 3\% & 2\% \\
& 1 & 3/2 & 0 & 0 & $1/2$ & 1\% & 1\% \\
& 1 & 3/2 & 1 & 0 & $1/2$ & 2\% & 1\% \\
& 1 & 3/2 & 1 & 2 & $3/2$ & 1\% & 1\% \\
& 1 & 3/2 & 2 & 2 & $3/2$ & 3\% & 2\% \\ \hline
$\frac{3}{2}^+$ & 0 & 1/2 & 1 & 1 & $1/2$ & 2\% & 2\% \\
& 0 & 1/2 & 2 & 1 & $1/2$ & 45\% & 39\% \\
& 1 & 1/2 & 1 & 0 & $1/2$ & 51\% & 52\% \\
& 1 & 1/2 & 3 & 2 & $3/2$ & $<1$\% & 3\%\\ \hline
$\frac{5}{2}^+$ & 0 & 1/2 & 2 & 1 & $1/2$ & 49\% & 44\% \\
& 1 & 1/2 & 1 & 2 & $3/2$ & $<1$\% & 1\%\\
& 1 & 1/2 & 2 & 0 & $1/2$ & 49\% & 52\% \\
& 1 & 1/2 & 2 & 2 & $3/2$ & $<1$\% & 1\% \\ \hline
\end{tabular}
\caption{The same as Table~\ref{tabcom0}, but in the coupling scheme where the core-neutron relative orbital angular momentum, $\ell_x$, couples to the spin of the neutron
to give the angular momentum $j_N$, which couples to the spin of the core to give the total core-neutron angular momentum $j_x$. The relative orbital angular momentum
between the core-neutron center-of-mass and the second neutron, $\ell_y$, couples to the spin of the second neutron to give the angular momentum $j_y$. Both $j_x$
and $j_y$ couple to the total three-body angular momentum $J$.}
\label{tabcom2}
\end{table}
The same decomposition is shown in Table~\ref{tabcom2}, but in a
coupling scheme more consistent with the mean-field quantum numbers,
where the core-nucleon relative orbital angular momentum, $\ell_x$,
couples to the spin of the nucleon to provide the angular momentum
$j_N$, which in turn couples to the spin of the core to give the total
core-nucleon angular momentum, $j_x$. The relative orbital angular
momentum between the core-nucleon center-of-mass and the second
nucleon, $\ell_y$, couples to the spin of the second nucleon to give
the angular momentum, $j_y$. Both $j_x$ and $j_y$ couple to the total
three-body angular momentum, $J$.
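Schematically (with the assumption, made explicit here, that $s_N=1/2$ denotes the spin of a valence nucleon and $s_c=3/2$ the spin of the core), this coupling scheme reads
\begin{equation*}
\Big\{ \big[ (\ell_x \otimes s_N)_{j_N} \otimes s_c \big]_{j_x} \otimes \big( \ell_y \otimes s_N \big)_{j_y} \Big\}_J .
\end{equation*}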
As we can see, the structure of the $3/2^-$-state changes
substantially due to the Coulomb repulsion. In the case of $^{11}$Li
the $3/2^-$ (bound) ground state contains about 40\% of core-neutron
$p$-wave contribution. More precisely, 15\% of the wave function
corresponds to $^{10}$Li in the $1^+$-state, and 25\% to $^{10}$Li in
the $2^+$-state (Table~\ref{tabcom2}). With respect to the $s$-wave
contribution, even if the $1^-$-state in $^{10}$Li is lying high in the
continuum, 22\% corresponds to $^{10}$Li populating that state.
These characteristics are however very different when analyzing the
$3/2^-$-state in $^{11}$O. The Coulomb repulsion, which pushes up the
$s$-wave core-proton 2$^-$-state by more than 1.5 MeV, turns out to be
crucial, producing a drastic reduction of the $s$-wave
contribution. In fact, as seen in the upper part of
Tables~\ref{tabcom0} and \ref{tabcom2}, the $p$-wave components give
85\% of the wave function, whereas the $s$-wave contribution reduces
now from almost 60\% in $^{11}$Li to only about 12\% in $^{11}$O.
This result is in contrast to Ref.\cite{wan19}, where the $29\%$
$s$-wave contribution in the $3/2^-$-state in $^{11}$O is even higher
than the 25\% given for $^{11}$Li. This low $s$-wave content in the
$^{11}$Li ground-state wave function seems to disagree with previous
results in \cite{gar96,gar97,cas17}, where it is shown that the
agreement with experimental momentum distributions and angular
differential cross sections requires a $p$-wave content of about
$35\%-40\%$ in the $^{11}$Li ground-state, or, equivalently,
$60\%-65\%$ $s$-wave contribution.
For both nuclei, $^{11}$Li and $^{11}$O, the contribution of $d$-waves
in the present work is far from substantial, in total of about 3\% in
both cases, and none of the $d$-wave components provides more than 1\%
of the norm. This result seems to contradict the measured increase of
about 8.8\% \cite{neu08} of the quadrupole moment in $^{11}$Li
relative to that of $^{9}$Li, which in shell-model calculations in
Ref.\cite{suz03} is explained as due to a significant $d$-wave
contribution of similar size as the one corresponding to $p$-waves.
The small probability of $d$-waves may be related to the lack of deformation of the frozen core, as seen by the following argument. If the structure corresponding to a given deformation is expanded on another, say body-fixed, deformation, there must be partial-wave components corresponding to this deformation. However, our three-body model provides the full wave function corresponding to the one obtained with deformation after, not before, projection of angular momentum and parity. Thus, our model can only say something about the inserted frozen core-structure and the calculated valence-structure.
However, as discussed in \cite{neu08}, the measured quadrupole
moment could instead be related to an about $10\%$ increase of the
charge-radius in $^{11}$Li compared to the one of $^{9}$Li. This
increase can be interpreted as due to the neutron halo.
The two neutrons
in the zero angular momentum ground state produce a
distortion of the $^{9}$Li-core, which effectively corresponds to an
increase of the core-radius. The initially slightly deformed
$^{9}$Li-core is otherwise in principle maintained in the subsequent
three-body calculations without significant effect as argued in
\cite{mis97}. Using such an increased radius, the neutron-core
interaction still must be adjusted to the previously described
specific desired properties. These all-decisive phenomenologically
obtained interactions guarantee the same resulting three-body
structure as obtained with the bare $^{9}$Li-radius. This is
consistent with \cite{neu08}, where it is stated that there is a
striking analogy between the quadrupole moment and the
root-mean-square charge-radius without any additional change of wave
function structure.
In any case, a detailed analysis of the quadrupole moment of $^{11}$Li
requires taking into account the different sources contributing to
the measured value, since the two valence neutrons are mostly on the same side
of $^{9}$Li: first, the contribution from the original $^{9}$Li quadrupole moment;
second, the one from the rotation of the $^9$Li core around
the three-body center of mass; and third and fourth, the ones from the increased radius and the induced
deformation of $^{9}$Li produced by the two valence neutrons. From Ref.\cite{boh75} we
know that neutral nucleons polarize the charged core by an amount
of the same order as if they were charged. Thus, our model is consistent with all available
$^{11}$Li data, but for now we leave the complicated quantitative quadrupole moment
calculation for future investigations.
Concerning the $1/2^+$, $3/2^+$, and $5/2^+$-resonances, they are all
almost completely given by $sp$-interference terms. Only minor
contributions from $pd$-interferences are seen, with the largest, as
given in the tables, reaching up to 7\% in the $1/2^+$-case.
The presence of low-lying $p$-resonances in $^{10}$Li and $^{10}$N makes
the $pd$-interferences more likely than the $dp$-ones, whose weight
is always smaller than 1\%.
In general, we can see that the structure of the three positive parity
resonances does not change significantly by moving from $^{11}$Li to
$^{11}$O. The weight of the different components remains essentially
the same in both cases.
An important difference, seen in Table~\ref{tabcom2}, between the
structure of the different $J^+$-resonances, is that only the
$1/2^+$-state has substantial contributions from $s$-waves
($\ell_x=0$) in the nucleon-core $1^-$-state ($j_x=1$) in either $^{10}$Li or
$^{10}$N. In contrast, for both the $3/2^+$ and $5/2^+$-resonances
only the 2$^-$ $s$-state ($\ell_x=0$, $j_x=2$) in $^{10}$Li or
$^{10}$N is substantially populated. In the next section, we shall
discuss this difference as responsible for the very different spatial
structure of these states. Keep in mind that a similar weight of the different
partial-wave components does not necessarily imply a similar
spatial distribution of the constituents, which is in fact determined by
the radial wave functions $f_n^J(\rho)$ in Eq.(\ref{exp0}) and
the expansion coefficients $C_q^{(n)}(\rho)$ in Eq.(\ref{exp1}).
A different $\rho$-dependence can still provide a similar weight
of the partial waves after integration of the square of the wave function.
\section{Three-body spatial structure}
Let us examine now the spatial structure of the $^{11}$Li and
$^{11}$O-states. A clean indication of how the core and valence
nucleons are distributed in space is reflected by the different
two-body root-mean-square (rms) radii, which in turn permit us to
obtain a clear picture of the most probable inter-particle distances.
We therefore first discuss the radii or second radial moments, and
afterwards the origin in the structures of the wave functions.
\subsection{Radii}
Since the complex scaling method has been used in the calculations, the
corresponding resonance three-body wave functions are complex rotated.
As a consequence, for the resonant states, the rms radii have to be obtained as the
expectation value of the square of the complex rotated radial distance ($r\rightarrow
re^{i\theta}$). Therefore, the rms radii are in this case complex
quantities, and as described in \cite{moi98}, the imaginary part is a
measure of the uncertainty of the computed value.
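Schematically (a sketch of the prescription; the notation $\Psi_\theta$ for the rotated wave function is introduced here), with the c-product normalization of \cite{moi98}, i.e.\ without complex conjugation, the rotated second radial moment reads
\begin{equation*}
\langle r^2 \rangle = e^{2i\theta} \int \left( \Psi_\theta(\bm{r}) \right)^2 r^2 \, d\bm{r},
\end{equation*}
which is complex in general.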
\begin{table}
\begin{tabular}{|c|cc|cc|}
\multicolumn{5}{c}{$^{11}$Li} \\ \hline
$J^\pi$ & $\langle r_{nn}^2\rangle^{1/2}$ & $\langle r_{c,nn}^2\rangle^{1/2}$ & $\langle r_{cn}^2\rangle^{1/2}$ & $\langle r_{n,cn}^2\rangle^{1/2}$ \\ \hline
$\frac{3}{2}^-$ & 6.4 & 5.0 & 5.9 & 5.7 \\
$\frac{1}{2}^+$ &$22.3+i6.3$ & $11.7+i3.0$ & $16.5+i4.4$ & $17.1+i4.9$ \\
$\frac{3}{2}^+$ & $13.9+i4.8$ & $7.4+i3.4$ & $10.4+i4.1$ & $10.7+i4.1$ \\
$\frac{5}{2}^+$ & $9.4+i4.3$ & $3.1+i0.9$ & $6.8+i2.0$ & $7.0+i2.2$ \\ \hline
\multicolumn{5}{c}{ } \\
\end{tabular}
\begin{tabular}{|c|cc|cc|}
\multicolumn{5}{c}{$^{11}$O} \\ \hline
$J^\pi$ & $\langle r_{pp}^2\rangle^{1/2}$ & $\langle r_{c,pp}^2\rangle^{1/2}$ & $\langle r_{cp}^2\rangle^{1/2}$ & $\langle r_{p,cp}^2\rangle^{1/2}$ \\ \hline
$\frac{3}{2}^-$ & $5.1+i 3.9$ & $2.8+i2.1$ & $3.9+i3.0$ & $3.9+i3.0$ \\
$\frac{1}{2}^+$ & $12.8+i5.0$ & $6.6+i3.0$ & $9.3+i4.0$ & $9.4+i4.0$ \\
$\frac{3}{2}^+$ & $12.2+i5.4$ & $6.5+i2.9$ & $9.0+i4.1$ & $9.0+i4.1$ \\
$\frac{5}{2}^+$ & $10.7+i6.2$ & $5.2+i3.1$ & $7.5+i4.5$ & $7.5+i4.6$ \\
\hline
\end{tabular}
\caption{Computed values, in fm, of $\langle r_{ij}^2\rangle^{1/2}$ and $\langle r_{k,ij}^2\rangle^{1/2}$, for the $3/2^-$, $1/2^+$, $3/2^+$, and $5/2^+$ states
in $^{11}$Li (upper part) and $^{11}$O (lower part), where $\{ i, j, k\}$ represent the core ($c$) and the valence neutrons ($n$), or the core ($c$)
and the valence protons ($p$), respectively. The coordinate $r_{ij}$ is the distance between particles $i$ and $j$, and
$r_{k,ij}$ is the distance between particle $k$ and the center-of-mass of the $ij$-system.}
\label{tabr2}
\end{table}
In Table~\ref{tabr2} we give the rms radii, $\langle
r_{ij}^2\rangle^{1/2}$ and $\langle r_{k,ij}^2\rangle^{1/2}$, for the
different $^{11}$Li (upper part) and $^{11}$O (lower part) states.
From the right part of the table we notice that for all the states in
both $^{11}$Li and $^{11}$O, the distances, $\langle
r_{cN}^2\rangle^{1/2}$ and $\langle r_{N,cN}^2\rangle^{1/2}$, are
similar to each other, where $N$ can be either neutrons ($n$) or
protons ($p$). Since the core is about nine times heavier than the
nucleon, the value of $\langle r_{N,cN}^2\rangle^{1/2}$ is not far
from the distance between the core and the second nucleon, which
implies that the two valence-nucleons are roughly at the same distance
from the core in all the $J^\pi$-states.
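This mass argument can be quantified in a short sketch (Python; the mass ratio $m_N/(m_c+m_N)=1/10$ for a mass-9 core and the assumption of uncorrelated orientations of the two Jacobi vectors are simplifications introduced here for illustration):

```python
import math

mu = 1.0 / 10.0          # assumed m_N / (m_c + m_N) for a mass-9 core plus one nucleon
r_x, r_y = 5.9, 5.7      # 11Li 3/2-: <r_cn^2>^{1/2} and <r_{n,cn}^2>^{1/2} in fm (Table tabr2)

# The vector from the core to the second nucleon is y + mu * x; assuming
# uncorrelated orientations, the cross term averages out in the second moment.
r_core_n2 = math.sqrt(r_y**2 + (mu * r_x) ** 2)
rel_shift = (r_core_n2 - r_y) / r_y
assert rel_shift < 0.01  # sub-percent correction: r_{N,cN} ~ core-nucleon distance
```

So the center-of-mass correction is below one percent for these radii, which supports reading $\langle r_{N,cN}^2\rangle^{1/2}$ directly as the core to second-nucleon distance.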
Looking now into the left part of Table~\ref{tabr2}, we can see that
for the bound $3/2^-$-state in $^{11}$Li the neutron-neutron distance
is similar to the core-neutron distance, which indicates an
equilateral triangular structure with a particle-particle distance of
about 6 fm. However, for the resonant states in $^{11}$Li
and $^{11}$O the situation is slightly different, since the nucleon-nucleon
distance is roughly 1.4 times larger than the core-nucleon distance. This
structure corresponds to an isosceles triangle, where the unequal side
(the nucleon-nucleon distance) is about 40\% bigger than the two
equal sides given by the core-nucleon distance.
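These triangle shapes can be checked quickly from the table entries (a Python sketch; only the real parts of the rms radii in Table~\ref{tabr2} are used, which ignores the imaginary parts, i.e.\ the computed uncertainties):

```python
# Real parts of (r_NN, r_cN) in fm for the resonant states of Table tabr2.
pairs = {
    "11Li 1/2+": (22.3, 16.5), "11Li 3/2+": (13.9, 10.4), "11Li 5/2+": (9.4, 6.8),
    "11O 3/2-": (5.1, 3.9), "11O 1/2+": (12.8, 9.3),
    "11O 3/2+": (12.2, 9.0), "11O 5/2+": (10.7, 7.5),
}
ratios = {state: r_nn / r_cn for state, (r_nn, r_cn) in pairs.items()}
for state, r in ratios.items():
    print(f"{state}: r_NN / r_cN = {r:.2f}")
    assert 1.25 < r < 1.45  # roughly the quoted factor of ~1.4
```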
It is also interesting to note that the $3/2^-$-state for both nuclei
is clearly smaller than the positive-parity resonances. This was
already anticipated by the larger effect of the three-body force,
the larger value of $\Delta_c^{(1)}$ (Table~\ref{tab0}), and the smaller
$K_{max}$-values required to get convergence for the
$3/2^-$-states. This is related to the facts that in $^{11}$Li the $3/2^-$-state
is bound, and that in $^{11}$O the $3/2^-$ wave function has a
clearly dominant contribution from two valence-protons in a relative
$p$-wave with respect to the core (85\% according to
Table~\ref{tabcom2}). In this structure the potential barrier prevents
the protons from moving too far away from the core (see the dashed
curves in Fig.~\ref{figpot}b).
On the other hand, as mentioned when discussing Table~\ref{tabcom2},
the $1/2^+$, $3/2^+$, and $5/2^+$-resonances are almost entirely
$sp$-structures, which implies that one of the halo nucleons is always
populating a core-nucleon $s$-state. As seen in Fig.~\ref{figpot}a,
the $s$-wave potentials do not feel any confining barrier, except the
$2^-$-potential in $^{10}$N (dashed black curve), for which the
potential barrier is almost a factor of 2 lower than for the
$p$-potentials for the same system. As a consequence, the
positive-parity resonances are, as seen in Table~\ref{tabr2},
significantly bigger than the $3/2^-$-states.
Also, as already mentioned and shown in Table~\ref{tabcom2},
the contribution of the nucleon-core $1^-$-state ($\ell_x=0$, $j_x=1$)
to the $J^+$-resonances is only substantial for the $1/2^+$-state,
whereas for the $3/2^+$ and
$5/2^+$-resonances basically all the $\ell_x=0$ contribution arises
through the 2$^-$ state ($j_x=2$). In the case of $^{11}$Li, since
$^{10}$Li shows a very
low-lying $2^-$ virtual state, the $^{11}$Li-resonances with a large
$2^-$-component ($3/2^+$ and $5/2^+$) will show a tendency to keep the
neutron close to the core, leading therefore to a system smaller
than the $1/2^+$-state, where the $1^-$-component dominates. This is
actually seen in the upper part of Table~\ref{tabr2}, where the rms radii for the
$1/2^+$-state are significantly larger than those of the $3/2^+$ and
$5/2^+$-resonances.
In the case of $^{11}$O, the $2^-$-state in $^{10}$N is not that low
anymore, and it is actually rather broad (see Table~\ref{tab1}), being
thus close to disappearing into the continuum. The confining effect of
the $s$-wave $2^-$-resonance disappears, and the $1/2^+$, $3/2^+$, and
$5/2^+$-resonances in $^{11}$O have a similar size, see
Table~\ref{tabr2}.
\subsection{Structure}
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{3D.eps}
\end{center}
\caption{Real part of the structure function $F(r_{cn},r_{n,cn})$, as defined in Eq.(\ref{funF}), in fm$^{-2}$, for the four computed states in $^{11}$Li.
A complex scaling angle $\theta=0.30$ rads has been used.}
\label{fig0}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{3D-O.eps}
\end{center}
\caption{The same as Fig.~\ref{fig0} for the $^{11}$O states.}
\label{fig4}
\end{figure}
The origin of the average distance results discussed above can be
visualized by means of the structure function
\begin{eqnarray}
\lefteqn{ \hspace*{-7mm}
F(r_{cN},r_{N,cN})=} \nonumber \\ & &
r_{cN}^2 r_{N,cN}^2 \int \left(\Psi(\bm{r}_{cN},\bm{r}_{N,cN})\right)^2 d\Omega_{cN} d\Omega_{N,cN},
\label{funF}
\end{eqnarray}
where $N$ represents either the neutron for $^{11}$Li or the proton
for $^{11}$O, $\Psi$ is the complex rotated three-body wave function
of a given $J^\pi$-state, and $\Omega_{cN}$ and $\Omega_{N,cN}$ are
the angles defining the directions of $\bm{r}_{cN}$ and
$\bm{r}_{N,cN}$, respectively. Note that following the normalization
criteria described in Ref.\cite{moi98}, the definition of the
structure function above is made in terms of the square of the wave
function, and not in terms of the square of the modulus of the wave
function. In principle, the function $F$ depends on the complex
scaling angle, but since $\Psi$ is normalized to 1 it is obvious that
$F$ satisfies
\begin{equation}
\int F(r_{cN},r_{N,cN}) dr_{cN} dr_{N,cN}=1,
\label{funF1}
\end{equation}
which implies that the integral of the imaginary part is equal to zero.
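This cancellation can be illustrated with a toy one-dimensional analogue (a Python sketch; the exponential form and the complex constant $k$ are arbitrary choices, not the actual three-body wave function): for a c-product normalised density $u(r)=2k\,e^{-2kr}$ with $\mathrm{Re}\,k>0$, the integral equals 1 exactly, so its imaginary part vanishes:

```python
import cmath

k = 1.0 + 0.5j                  # toy complex "rotated" constant, Re k > 0
steps, upper = 200_000, 30.0    # ad hoc trapezoidal grid
h = upper / steps

total = 0.0 + 0.0j
for i in range(steps + 1):
    r = i * h
    f = 2 * k * cmath.exp(-2 * k * r)   # square of the wave function, no conjugation
    total += f if 0 < i < steps else 0.5 * f
total *= h

assert abs(total.real - 1.0) < 1e-5
assert abs(total.imag) < 1e-5
```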
The real part of the structure function $F$ is shown in
Figs.~\ref{fig0} and \ref{fig4} for all the computed states in
$^{11}$Li and $^{11}$O, respectively. For the resonances the complex
scaling angle $\theta=0.3$ rads has been used.
For $^{11}$Li, the $3/2^-$ ground-state wave function is rather
confined, with a high peak centered around average distances
determined by $r_{cn}\approx r_{n,cn}\approx 6$ fm, as expected from
the rms radii shown in Table~\ref{tabr2} for this state. When looking
at the 1/2$^+$, $3/2^+$, and $5/2^+$-resonances, we can see that the
wave function is progressively developing a tail along the $r_{cn}$-
and $r_{n,cn}$-axis, which in turn can be related to the lack of
barrier in the $s$-wave potential. The main difference in the
structure function for these three states, is the presence of a peak
at relatively small core-neutron distances for the $3/2^+$ and
$5/2^+$-states. As explained above this is attributed to the important
contribution of the $2^-$-states, which present a very low-lying
virtual state. In the $1/2^+$-state this virtual state does not
contribute, the peak then disappears, and the wave function shows
mainly two wide peaks each located along the two axes.
For $^{11}$O, we see in Fig.~\ref{fig4} that the $3/2^-$-resonance
shows a structure function with a peak apparently similar to the one
of the $3/2^+$ and $5/2^+$-states, although in this case the peak is
essentially only of $p$-wave character. For the $1/2^+$, $3/2^+$, and
$5/2^+$-resonances, the $s$-wave contributes significantly, but due to
the Coulomb repulsion, which pushes up the $2^-$-resonance in
$^{10}$N, the $s$-wave potential is not able to keep the $s$-wave
proton sufficiently close to the core, and the peak observed for the
$3/2^+$ and $5/2^+$-states in $^{11}$Li disappears.
\section{Summary and conclusions}
We have calculated three-body properties of the four lowest excited
bound states or resonances for the two light mirror nuclei, $^{11}$O
and $^{11}$Li. The phenomenological interactions are chosen to
reproduce all known properties of $^{11}$Li combined with
consistent information about the subsystem, $^{10}$Li. The only
difference in interactions is that the Coulomb potentials are added in
$^{11}$O from the charges of the two protons and the $^{9}$C-core. We
use the established hyperspherical adiabatic expansion method combined
with complex rotation to separate the resonances from the background
continuum structure.
The nuclei, $^{11}$O and $^{11}$Li, are special by being non-identical
mirrors on the neutron and proton driplines, that is at opposite sides
of the beta-stability curve. The effect of the Coulomb interactions
is rather small for most nuclei, except for a substantial translation
of the absolute energies. However, these smaller effects sometimes
carry signals about features of interest in a better understanding of
many-body nuclear structure. In general the importance lies in change
of structure, which requires theoretical models beyond the mean-field.
A prominent example is the Thomas-Ehrman shift, but in general nuclei at
the driplines are expected to maximize such structure variation.
In this report, we predict the properties of $^{11}$O and $^{10}$N
from knowledge of $^{11}$Li and its two-body subsystem, $^{10}$Li.
Other investigations are available, but to our knowledge none consider
systematically all four lowest-lying excited states/resonances, and
their relation to the properties of the nucleon-core subsystems.
Furthermore, we use phenomenological interactions, which should
enhance the reliability of our predictions. We emphasize that the
interactions in the present work are able to reproduce all known
features of $^{10}$Li and $^{11}$Li.
We first investigate the two-body nucleon-core subsystems,
proton-$^{9}$C and neutron-$^{9}$Li. In realistic calculations the
spin-spin splitting is essential, that is coupling of the $3/2^-$-core
and the $1/2^{\pm}$-proton angular momenta and parities. The sequence
of the resulting states of $1^-$ and $2^-$ is not experimentally
determined. Fortunately, the only two crucial properties are, first,
the degeneracy-weighted centroid energy and, second, that one of
these states is unbound with a marginally negative virtual energy.
The two-body potentials turn out to have attractive pockets at short
range for $2^-$, while overall repulsive for the $1^-$-state. Both
receive additional repulsion from the Coulomb potentials in the
$^{10}$N-nucleus. By construction, the $2^-$-potential for $^{10}$Li
has a marginally unbound virtual state. Both the $2^+$ and
$1^+$-potentials have attractive short-range pockets for both
$^{10}$Li and $^{10}$N. The resonance energies of course follow the
pattern of the potentials with less than $0.6$~MeV for all $^{10}$Li
states and about $2$~MeV higher energies for $^{10}$N.
The computed three-body energy of $^{11}$Li is fine-tuned to reproduce
precisely the measured ground state value, while the three positive
parity excited states of both positions and widths are experimentally
unknown, but predicted to be very similar. For $^{11}$O, we find in
contrast to $^{11}$Li that the $3/2^-$-state is about $1$~MeV higher
than the three positive-parity states. The two lowest resonances, $1/2^+$
and $3/2^+$, are very similar, and it is therefore equally likely that either of
these is the ground state. This would be a qualitative difference
between important properties of these mirror nuclei. This predicted
energy sequence in $^{11}$O is consistent with a perturbation estimate of the
Coulomb shift. However, it is important to keep in mind that the uncertainty
introduced by the unknown three-body interaction, which is seen to
play a more relevant role in the 3/2$^-$ state than in the positive-parity
states, could modify the ordering in the energy spectrum.
The structure of the wave functions is in principle revealed by the partial wave
decomposition. It is striking that the positive parity resonances all
are of very similar partial wave content in $^{11}$Li and $^{11}$O. In contrast,
the $3/2^-$-state in $^{11}$O is almost entirely made of $p$-waves of
both nucleon-core two-body states, whereas $p$-waves in $^{11}$Li only
contribute about $40\%$ and $s$-waves correspondingly about $57\%$.
The spatial distributions of the nucleons surrounding the core also
differ substantially in the two nuclei. For the $3/2^-$-state, the
two nucleons are symmetrically distributed in one peak in both cases,
but about $30\%$ closer to the core and more smeared out in the
$^{11}$O-resonance than in the $^{11}$Li bound-state. The three positive
parity resonances in $^{11}$O all exhibit two peaks in their
density distributions, corresponding to one proton close to and one
further away from the core. In $^{11}$Li, these two peaks coincide
for the $3/2^+$ and $5/2^+$-resonances, whereas they remain for
$1/2^+$, but with much larger tails. This fact shows that, even if
the partial wave content is similar (as shown in Table~\ref{tabcom2} for
the 3/2$^+$ and 5/2$^+$ resonances), the spatial distribution
of the constituents can be different.
In conclusion, all these detailed predictions are beyond present
laboratory tests, but they still display essential properties, which in
turn should inspire new experimental investigations. The
substantial differences between the two mirror nuclei are all due to
the additional Coulomb interaction. This is a new experience in
nuclear physics, where the Coulomb interaction generally is believed
to influence nuclear structure only marginally. In summary, we have
learned that mirror nuclei do not necessarily have very similar
structure. Furthermore, we have seen that dripline nuclei can still
deliver new information about nuclear structure.
\acknowledgments We want to thank H.O.U. Fynbo and K. Riisager for
drawing attention to these systems and subsequent continuous discussions.
We also thank J. Casal for constructive comments and discussions.
This work has been partially supported by the Spanish
Ministry of Science, Innovation and University MCIU/AEI/FEDER,UE (Spain) under Contract
No. FIS2018-093636-B-I00.
\section{Introduction}\label{sec:Intro}
We revisit the family of tests for exponentiality based on the residual life time as proposed by \cite{BH:2008}, and we provide further theoretical insight into the asymptotic behaviour of the test under alternatives. The problem of interest is testing the assumption that data is distributed according to the exponential distribution with unknown scale parameter. To be precise, let $X$ be a positive random variable and we write shorthand $X\sim\mbox{Exp}(\lambda)$, $\lambda > 0$, if $X$ follows an exponential distribution with scale parameter $\lambda$, hence the density is given by
\begin{equation}\label{eq:density}
f(x,\lambda) = \lambda\exp(-\lambda x),\quad x>0.
\end{equation}
Note that $X \sim$ $\mbox{Exp}(\lambda)$ if, and only if, $X/\lambda \sim$ $\mbox{Exp}(1)$, which shows that the exponential distribution belongs to the scale family of distributions, for a detailed discussion see \cite{JKB:1994}, Chapter 19. In the following we denote the family of exponential distributions by $\mathcal{E}:=\{\mbox{Exp}(\lambda):\,\lambda>0\}$. Let $X, X_1, X_2, \dotso$ be positive independent and identically distributed (iid.) random variables with distribution $\mathbb{P}^X$ defined on an underlying probability space $(\Omega,\mathcal{A},\mathbb{P})$. We test the composite hypothesis
\begin{equation}\label{eq:H0}
H_0:\;\mathbb{P}^X\in\mathcal{E}
\end{equation}
against general alternatives, based on the sample $X_1,\ldots,X_n$.
This testing problem has been studied extensively in the literature, and it is of ongoing interest, see \cite{CMO:2021,CMNO:2020,JMO:2020,MO:2016b,PWG:2022,VG:2020} for some recent related publications, \cite{A:1990,HM:2005,S:1984} for surveys on the topic, and \cite{ASSV:2017,OMO:2022} for reviews of testing procedures as well as of extensive competitive Monte Carlo power studies. The focus of this article is on the procedure proposed in \cite{BH:2008}. That paper studies a family of tests of exponentiality based on the mean residual life function with test statistic
\begin{equation*}
G_{n,a}=n\int_{0}^\infty \left[\frac1n\sum_{j=1}^n\min(Y_{j},z)-\frac1n\sum_{j=1}^n\mathbf{1}\{Y_{j}\le z\}\right]^2\exp(-az)\,\mbox{d}z,\quad a>0.
\end{equation*}
Here, $Y_j=X_j/\overline{X}_n$, $j=1,\ldots,n$, is the scaled data, where $\overline{X}_n=\frac1n\sum_{j=1}^n X_j$. This family of tests is an extension of the test in \cite{BH:2000} insofar as the exponential weight function in the original test statistic is replaced with a more flexible weight function $w(z)=e^{-az}$, $z>0$, which depends on some so-called tuning parameter $a>-1$. In \cite{BH:2008} the authors provide the limiting null distribution of the test statistics $G_{n,a}$ and a simulation study including simulated critical values, as well as a power study for different values of the tuning parameter $a$. Further insight on the behaviour of the tests under alternatives is hitherto missing. We fill this gap in the literature.
We start our investigation by providing an alternative representation of $G_{n,a}$ motivated by a characterisation of the exponential law due to \citet{S:1070}, which is obviously related to the mean residual life function. \citet{S:1070}
states that for a random variable $X$ with $\mathbb{P}(X>y)>0$ for $y>0$, we have $X\sim\mbox{Exp}(\lambda)$ if and only if
\begin{equation*}
\mathbb{E}[X|X>y]=y+\frac1\lambda,\quad \mbox{for all } y>0, \lambda>0.
\end{equation*}
Direct calculations show that under the same condition this characterisation can be restated in the following way: $X\sim\mbox{Exp}(\lambda)$ if and only if
\begin{equation}\label{eq:Char}
\mathbb{E}\left[\left(X-y-\frac1\lambda\right)\mathbf{1}\{X>y\}\right]=0,\quad \mbox{for all } y>0, \lambda>0.
\end{equation}
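The characterisation \eqref{eq:Char} is easy to check empirically (a Monte Carlo sketch in Python with $\lambda=1$; the sample size and the grid of $y$-values are arbitrary choices):

```python
import random

random.seed(7)
n = 200_000
xs = [random.expovariate(1.0) for _ in range(n)]  # X ~ Exp(1)

for y in (0.2, 1.0, 3.0):
    # empirical version of E[(X - y - 1) 1{X > y}], which is 0 under Exp(1)
    m = sum(x - y - 1.0 for x in xs if x > y) / n
    print(f"y = {y}: {m:+.4f}")
    assert abs(m) < 0.02  # well within Monte Carlo error (sd of a term <= 1)
```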
In view of the scale invariance of the family $\mathcal{E}$, we consider the scaled data $Y_j=X_j/\overline{X}_n$, $j=1,\ldots,n$, and propose the weighted $L^2$-type test statistic (fixing w.l.o.g. $\lambda=1$ in the characterisation)
\begin{equation}\label{eq:Tna}
T_{n,a}=T_{n,a}(Y_1,\ldots,Y_n)=n\int_0^\infty\left|\frac1n\sum_{j=1}^n(Y_j-y-1)\mathbf{1}\{Y_j>y\}\right|^2\exp(-a y)\,\mbox{d}y,
\end{equation}
which depends on the tuning parameter $a\ge0$. Note that $T_{n,a}$ is scale invariant, i.e. invariant w.r.t. transformations of the form $x\mapsto b x$, $b>0$, since it only depends on the scaled data $Y_1,\ldots,Y_n$. Scale invariance is indeed a desirable property, since the family $\mathcal{E}$ is closed under such transformations.
Putting $x\wedge y=\min(x,y)$ for real numbers $x,y$ and using $n^{-1}\sum_{j=1}^nY_j=1$, some algebra shows
\begin{equation}\label{eq:expl}
T_{n,a}=\frac1n\sum_{j,k=1}^n\frac1a(|Y_j-Y_k|-1)e^{-a Y_j\wedge Y_k}+\frac1{a^2}(Y_j+Y_k-2(Y_j\wedge Y_k+1))e^{-a Y_j\wedge Y_k}-\frac2{a^3}\left(e^{-a Y_j\wedge Y_k}-1\right),
\end{equation}
for each $a>0$, as well as
\begin{equation*}
T_{n,0}=\frac1n\sum_{j,k=1}^n\frac13(Y_j\wedge Y_k)^3-\frac{Y_j+Y_k-2}{2}(Y_j\wedge Y_k)^2+(Y_j-1)(Y_k-1)(Y_j\wedge Y_k).
\end{equation*}
An efficient implementation of \eqref{eq:expl} in the statistical computing language \texttt{R}, see \cite{R:2021}, can be found in Appendix \ref{app:SC}.
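The closed-form expressions above can be cross-checked numerically (a Python sketch with an arbitrary toy sample, complementing the \texttt{R} implementation of Appendix \ref{app:SC}; note that the integrand in \eqref{eq:Tna} vanishes beyond $\max_j Y_j$, so the integral can be truncated there):

```python
import math

def t_na_closed(y, a):
    """Closed-form double sum for T_{n,a}; requires mean(y) == 1."""
    n, s = len(y), 0.0
    for yj in y:
        for yk in y:
            m = min(yj, yk)
            if a == 0.0:
                s += m**3 / 3 - (yj + yk - 2) / 2 * m**2 + (yj - 1) * (yk - 1) * m
            else:
                e = math.exp(-a * m)
                s += ((abs(yj - yk) - 1) * e / a
                      + (yj + yk - 2 * (m + 1)) * e / a**2
                      - 2 * (e - 1) / a**3)
    return s / n

def t_na_numeric(y, a, steps=50_000):
    """Direct trapezoidal integration of the defining L^2 integral."""
    n, top = len(y), max(y)
    h, total = top / steps, 0.0
    for i in range(steps + 1):
        t = i * h
        g = sum(yj - t - 1.0 for yj in y if yj > t) / n
        f = n * g * g * math.exp(-a * t)
        total += f if 0 < i < steps else 0.5 * f
    return total * h

x = [0.3, 1.2, 0.7, 2.5, 0.9]        # toy sample
xbar = sum(x) / len(x)
ys = [xi / xbar for xi in x]         # scaled data with mean 1

for a in (0.0, 0.5, 1.0):
    assert abs(t_na_closed(ys, a) - t_na_numeric(ys, a)) < 1e-3
```

Larger samples only affect run time; the identity itself relies on the scaled data satisfying $n^{-1}\sum_j Y_j=1$.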
\begin{remark}\label{rem:Intro}
\begin{enumerate}
\item Some algebra shows that $T_{n,a}\equiv G_{n,a}$ holds for all $a>-1$, hence both test statistics are identical.
\item Representation \eqref{eq:Tna} leads to a new compact formula for the covariance kernel of the limiting Gaussian process, which admits direct conclusions for the limiting distribution of the test statistic.
\item Obviously the restriction $a\ge0$ can be generalised to negative values. In this article we mostly focus on the positive half axis, since the power results in the simulation study of \cite{BH:2008} suggest that negative tuning parameters lead to low power of the tests. The authors suggest using a tuning parameter $1\le a\le 2$ as a generally good choice.
\item Note that the test statistic $T_{n,0}$ is related to the test of exponentiality based on the integrated distribution function proposed in \cite{K:2001}, Section 2. The main difference in the definitions of the tests lies in the fact that $\mathbb{P}(X>y)$ in \eqref{eq:Char} is estimated from the data and not replaced with the theoretically known value under $H_0$. This difference has an impact on the limit distribution and the power of the tests, as is shown in the sequel.
\end{enumerate}
\end{remark}
The rest of the paper is organised as follows. In Section \ref{sec:Limit dist} we provide a simple and direct proof of the asymptotic limit distribution of $T_{n,a}$ under the null hypothesis and provide the first four cumulants of the limiting null distribution as well as explicit formulas of the eigenvalues and eigenfunctions associated with the integral operator induced by the covariance structure of the limiting Gaussian process. These findings corroborate the results of \cite{BH:2008} in a very short and direct way.
In Section \ref{sec:Limit_dist_fixed_alt} we provide new limit results under fixed alternatives as well as a proof of the consistency of the testing procedures. Local approximate Bahadur efficiencies are deduced and interpreted in connection to empirical power results in Section \ref{sec:ABE}. We conclude the article by pointing out open problems for further research in Section \ref{sec:conc} and by an Appendix containing explicit formulas and some source code.
\section{Limit distribution under the null hypothesis}\label{sec:Limit dist}
In this section we assume that $X_1,X_2,\ldots$ is an iid. sequence of random variables with $X_1\sim\mbox{Exp}(1)$. Due to the $L^2$-structure of the test statistics, a convenient setting is, in dependence of the constant $a\ge0$, the separable Hilbert space $\mathbb{H}_a=L^2([0,\infty),\mathcal{B},\exp(-at){\rm d}t)$ of (equivalence classes of) measurable functions $f:[0,\infty) \rightarrow \R$
satisfying $\int_0^\infty |f(t)|^2 \exp(-at)\, {\rm d}t < \infty$. Here, $\mathcal{B}$ stands for the Borel sigma-field on $[0,\infty)$. The scalar product and the norm in $\mathbb{H}_a$ will be denoted by
\begin{equation*}
\langle f,g \rangle_{\mathbb{H}_a} = \int_0^\infty f(t)g(t)\,\exp(-at)\, {\rm d}t, \quad \|f\|_{\mathbb{H}_a} = \langle f,f \rangle_{\mathbb{H}_a}^{1/2}, \quad f,g \in \mathbb{H}_a,
\end{equation*}
respectively.
A change of variable shows that
\begin{equation*}
T_{n,a}=\overline{X}_n^{-3}\int_0^\infty Z_n^2(y)\exp(-ay/\overline{X}_n)\,\mbox{d}y,
\end{equation*}
where
\begin{equation*}
Z_n(y)=\frac1{\sqrt{n}}\sum_{j=1}^n(X_j-y-\overline{X}_n)\mathbf{1}\{X_j>y\},\quad y>0.
\end{equation*}
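For numerical illustration, $T_{n,a}$ can be evaluated directly from this representation. The following Python sketch (our illustration; the function name, grid size and trapezoidal quadrature are ad hoc choices made here, not the code of Appendix \ref{app:SC}) computes $T_{n,a}$ and checks the scale invariance of the statistic, i.e. that replacing the sample by $cX_1,\ldots,cX_n$ leaves $T_{n,a}$ unchanged.

```python
import math

def T_na(x, a, grid=20000):
    """Evaluate T_{n,a} = xbar^{-3} * int_0^inf Z_n(y)^2 exp(-a*y/xbar) dy
    by the trapezoidal rule; Z_n(y) vanishes for y >= max(x), so
    truncating the integral at max(x) is exact."""
    n = len(x)
    xbar = sum(x) / n
    top = max(x)
    h = top / grid
    def f(y):
        z = sum(xj - y - xbar for xj in x if xj > y) / math.sqrt(n)
        return z * z * math.exp(-a * y / xbar)
    s = 0.5 * (f(0.0) + f(top)) + sum(f(i * h) for i in range(1, grid))
    return s * h / xbar ** 3

# Scale invariance: replacing each X_j by c*X_j leaves T_{n,a} unchanged.
x = [0.3, 1.7, 0.9, 2.4, 0.1, 1.2, 0.6, 3.1]
t1 = T_na(x, a=1.0)
t2 = T_na([2.5 * xi for xi in x], a=1.0)
```

Only the quadrature on $[0,\max_j X_j]$ is approximate; the scale-invariance check holds exactly (up to floating-point rounding) because the two quadrature grids map onto each other under scaling.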
Define the auxiliary process
\begin{equation*}
Z_n^\ast(y)=\frac1{\sqrt{n}}\sum_{j=1}^n\big[(X_j-y-1)\mathbf{1}\{X_j>y\}-(X_j-1)\mathbb{P}(X_1>y)\big],\quad y>0.
\end{equation*}
Notice that $Z_n^*$ is a normalised sum of centred iid. random elements, and we have $\mathbb{E}[Z_1^*(s)Z_1^*(t)]=K_0(s,t)$, $s,t>0$, where $K_0$ is defined below. By the central limit theorem in Hilbert spaces there exists a centred Gaussian process $Z$ in $\mathbb{H}_a$ having covariance kernel
\begin{equation}\label{eq:K0}
K_0(s,t)=\exp(-s\vee t)-\exp(-(s+t)),\quad s,t>0,
\end{equation}
such that $Z_n^*\stackrel{\mathcal{D}}{\longrightarrow} Z$ in $\mathbb{H}_a$ as $n\to\infty$. Here and in what follows, we write $s\vee t=\max(s,t)$, and $\stackrel{\mathcal{D}}{\longrightarrow}$ denotes convergence in distribution. Likewise $\stackrel{\mathcal{\PP}}{\longrightarrow}$ will denote convergence in probability. Note that $K_0$ is an alternative representation of the limiting covariance kernel $\rho$ in (7) of \cite{BH:2008}.
\begin{theorem}\label{thm:AsyDist}
Under the standing assumptions, there exists a centred Gaussian process $Z$ in $\mathbb{H}_a$ with covariance kernel $K_0$ defined in \eqref{eq:K0}, such that
\begin{equation*}
T_{n,a}\stackrel{\mathcal{D}}{\longrightarrow} \|Z\|_{\mathbb{H}_a}^2,\quad\mbox{as}\,n\rightarrow\infty.
\end{equation*}
\end{theorem}
\begin{proof}
We first consider the case $a=0$. Note that
\begin{eqnarray*}
\|Z_n-Z_n^*\|_{\mathbb{H}_0}^2&=&\left(\frac{1}{n}\sum_{j=1}^n(X_j-1)\right)^2\int_0^\infty\left|\frac{1}{\sqrt{n}}\sum_{k=1}^n\left(\mathbf{1}\{X_k>y\}-\mathbb{P}(X_1>y)\right)\right|^2\mbox{d}y.
\end{eqnarray*}
By the strong law of large numbers $n^{-1}\sum_{j=1}^n(X_j-1)$ converges to 0 a.s., and the central limit theorem in $\mathbb{H}_0$ implies that the second factor is a tight sequence. Hence, $\|Z_n-Z_n^*\|_{\mathbb{H}_0}^2=o_{\mathbb{P}}(1)$, and $T_{n,0}\stackrel{\mathcal{D}}{\longrightarrow} \|Z\|_{\mathbb{H}_0}^2$, as $n\rightarrow\infty$, follows from $Z_n^*\stackrel{\mathcal{D}}{\longrightarrow} Z$ in $\mathbb{H}_0$, Slutsky's lemma and the continuous mapping theorem.
For the case $a>0$ define
\begin{equation*}
\widetilde{T}_{n,a}=\overline{X}_n^{-3}\int_0^\infty Z_n^2(y)\exp(-ay)\,\mbox{d}y.
\end{equation*}
A first-order Taylor expansion yields
\begin{equation}\label{eq:Texp1}
\exp(-ay/\overline{X}_n)=\exp(-ay)+\frac{ay}{\Delta_n^2}\exp\left(-\frac{ay}{\Delta_n}\right)\left(\overline{X}_n-1\right),
\end{equation}
where $\Delta_n\in(\min(\overline{X}_n,1),\max(\overline{X}_n,1))$.
Since $\sup_{y>0}\, y\exp(-cy)=1/(ce)$ for every $c>0$, we obtain from \eqref{eq:Texp1}
\begin{eqnarray*}
|T_{n,a}-\widetilde{T}_{n,a}|&\le&\overline{X}_n^{-3}\int_0^\infty Z_{n}^2(y)\,|\exp(-ay/\overline{X}_n)-\exp(-ay)|\,\mbox{d}y\\
&\le&\overline{X}_n^{-3}\int_0^\infty Z_{n}^2(y)\,\mbox{d}y\,\sup_{y>0}\frac{ay}{\Delta_n^2}\exp\left(-\frac{ay}{\Delta_n}\right)\left|\overline{X}_n-1\right|\\
&=&\frac{1}{e\,\Delta_n}\,T_{n,0}\left|\overline{X}_n-1\right|.
\end{eqnarray*}
The strong law of large numbers implies $\overline{X}_n\to 1$ a.s. (and hence $\Delta_n\to1$ a.s.), and since $T_{n,0}$ is a tight sequence (see the case $a=0$), \linebreak $|T_{n,a}-\widetilde{T}_{n,a}|=o_{\mathbb{P}}(1)$ follows. By the central limit theorem in $\mathbb{H}_a$ there exists a centred Gaussian process $\widetilde{Z}$ with covariance kernel $K_0$ such that $Z_n^*\stackrel{\mathcal{D}}{\longrightarrow} \widetilde{Z}$ as $n\to\infty$ in $\mathbb{H}_a$, and the same reasoning as in the case $a=0$ yields $\|Z_n-Z_n^*\|_{\mathbb{H}_a}^2=o_{\mathbb{P}}(1)$. Invoking Slutsky's lemma and the continuous mapping theorem, we obtain $\widetilde{T}_{n,a}\stackrel{\mathcal{D}}{\longrightarrow} \|\widetilde{Z}\|_{\mathbb{H}_a}^2$, from which the claim follows.
\end{proof}
\begin{remark}\label{rem:cum}
By direct evaluation of integrals the first two cumulants of the distribution of $\|Z\|_{\mathbb{H}_a}^2$ are
\begin{equation*}
\kappa_{1,a}=\E \|Z\|_{\mathbb{H}_a}^2=\int_0^\infty K_0(t,t)\exp(-at)\,\mbox{d}t=\frac1{(a+1)(a+2)}
\end{equation*}
and
\begin{equation*}
\kappa_{2,a}=\mathbb{V}(\|Z\|_{\mathbb{H}_a}^2)=2\int_0^\infty\int_0^\infty K_0^2(s,t)\exp(-a(s+t))\,\mbox{d}s\mbox{d}t=\frac{2}{(a^2+3a+2)(2a+3)(a+2)},
\end{equation*}
where $\mathbb{V}(\cdot)$ denotes the variance. Following the methodology in \cite{H:1990,S:1976}, we calculate the third and fourth cumulants according to
\begin{equation*}
\kappa_{j,a}=2^{j-1}(j-1)!\int_{0}^\infty K_j(t,t) \exp(-at)\,\mbox{d}t,
\end{equation*}
where $K_j(s,t)$, the $j^{\text{th}}$ iterate of $K_0(s,t)$, is given by
\begin{eqnarray*}
K_{j}(s,t)&=&\int_0^\infty K_{j-1}(s,u)K_0(u,t)\exp(-au)\,\mbox{d}u,\quad j\ge2,\\
K_1(s,t)&=&K_0(s,t).
\end{eqnarray*}
Direct calculation shows that
\begin{equation*}
\kappa_{3,a}=\frac{16}{(1+a)(2a+3)(a+2)^3(3a+4)}
\end{equation*}
and
\begin{equation*}
\kappa_{4,a}=\frac{48(11a+16)}{(3a+4)(2a+3)^2(a+2)^4(1+a)(4a+5)}.
\end{equation*}
These formulas are very useful for fitting a Pearson system of distributions to approximate the critical values of the test statistic $T_{n,a}$. The approximated quantiles of the limit distribution are found in Table \ref{tab:cv}, and source code is provided in Appendix \ref{app:SC}. A comparison with Table 1 in \cite{BH:2008} shows that the approximated critical values fit the empirical critical values reasonably well for $a\ge0$.
\end{remark}
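The closed-form cumulants can be cross-checked by numerical quadrature of the defining integrals. The following Python sketch (ours; the composite Simpson rule and the truncation point $L=40$ are ad hoc choices made here, not the appendix code) reproduces $\kappa_{1,a}$ and $\kappa_{2,a}$ for $a\in\{0,1\}$ to at least three decimals.

```python
import math

def simpson(f, lo, hi, m=400):
    """Composite Simpson rule with m (even) subintervals."""
    h = (hi - lo) / m
    s = f(lo) + f(hi)
    s += 4 * sum(f(lo + (2 * i - 1) * h) for i in range(1, m // 2 + 1))
    s += 2 * sum(f(lo + 2 * i * h) for i in range(1, m // 2))
    return s * h / 3

K0 = lambda s, t: math.exp(-max(s, t)) - math.exp(-(s + t))
L = 40.0  # truncation point; all integrands here are O(exp(-2t))

def kappa1(a):
    # kappa_{1,a} = int_0^inf K0(t,t) exp(-a t) dt
    return simpson(lambda t: K0(t, t) * math.exp(-a * t), 0.0, L, 2000)

def kappa2(a):
    # kappa_{2,a} = 2 * double integral of K0(s,t)^2 exp(-a(s+t))
    def inner(s):
        g = lambda t: K0(s, t) ** 2 * math.exp(-a * (s + t))
        return simpson(g, 0.0, s) + simpson(g, s, L)  # split at the kink t = s
    return 2.0 * simpson(inner, 0.0, L, 200)
```

Splitting the inner integral at $t=s$ avoids the loss of accuracy caused by the kink of $K_0$ along the diagonal.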
\begin{table}[t]
\centering
\begin{tabular}{lrrrrrrrrrr}
$q/a$ & -0.99 & -0.9 & -0.5 & 0 & 0.5 & 1 & 1.5 & 2 & 5 & 10 \\
\hline
0.9 & 117.263 & 13.919 & 2.522 & 1.009 & 0.553 & 0.351 & 0.243 & 0.178 & 0.052 & 0.017 \\
0.95 & 123.673 & 16.162 & 3.189 & 1.309 & 0.725 & 0.463 & 0.322 & 0.237 & 0.069 & 0.022 \\
0.99 & 136.942 & 21.396 & 4.813 & 2.045 & 1.149 & 0.739 & 0.516 & 0.381 & 0.112 & 0.036
\end{tabular}
\caption{Approximated $q$-quantiles of the limit distribution in Theorem \ref{thm:AsyDist} by a Pearson system of distributions using the formulas of the cumulants in Remark \ref{rem:cum}.}\label{tab:cv}
\end{table}
From the theory of Gaussian processes it is well known (see \cite{SW:1986}, p. 206) that for each $a>0$ an orthogonal decomposition of the process $Z$ yields $\|Z\|^2_{\mathbb{H}_a}=\sum_{j=1}^\infty \lambda_j(a) N_j^2,$ where $N_1,N_2,\ldots$ are iid. standard normal, and $\lambda_1(a),\lambda_2(a),\ldots$ is the decreasing sequence of positive eigenvalues of the integral operator $\mathcal{K}_a:\mathbb{H}_a\rightarrow\mathbb{H}_a,\, f\mapsto\mathcal{K}_af(\cdot)=\int_0^\infty K_0(\cdot,t)f(t)\exp(-at)\mbox{d}t.$
To calculate the eigenvalues $\lambda_j(a)$, $j=1,2,\ldots$, of $\mathcal{K}_a$, one has to solve the homogeneous Fredholm integral equation of the second kind
\begin{equation*}\label{eq:inteq}
\int_0^\infty K_0(x,t)f(t)\exp(-at)\mbox{d}t=\lambda f(x),\quad x>0,
\end{equation*}
see, for example, \cite{KS:1947}. Usually these problems are very hard to solve explicitly and numerical or Monte Carlo simulation techniques are used to obtain approximations of $\lambda_j(a)$, see Section 5 in \cite{EH:2021} for a stochastic approximation method or the method presented in \cite{BMNO:2020}.
In the following we give explicit formulas for general tuning parameters $a$. Firstly, note that $K_0$ admits the representation
\begin{eqnarray}
K_0(s,t)&=& \min(\exp(-s),\exp(-t))-\exp(-s)\exp(-t),\quad s,t>0.\label{eq:altKern1}
\end{eqnarray}
Let $a=1$. Using the alternative representation in \eqref{eq:altKern1} together with $x=\exp(-s)$ and the substitution $y=\exp(-t)$, we have
\begin{equation*}
\int_0^\infty K_0(x,t)f(t)\exp(-t)\mbox{d}t=\int_0^1(\min(x,y)-xy)f(-\log(y))\mbox{d}y.
\end{equation*}
Since $\mathbb{K}(x,y)=\min(x,y)-xy$, $x,y\in[0,1]$, is the covariance kernel of the Brownian bridge $B(t)$, say, the eigenvalue problem is solved in this case, see e.g. \cite{AD:1952}. We conclude that the eigenvalues and corresponding eigenfunctions are
\begin{equation*}
\lambda_k(1)=\frac{1}{(k\pi)^2} \quad\mbox{and}\quad f_{k,1}(t)=\sin\left(k\pi\exp(-t)\right),\quad t>0.
\end{equation*}
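This eigenvalue equation is easy to verify numerically: applying the Brownian bridge kernel $\min(x,y)-xy$ to $\sin(k\pi y)$ must reproduce $\sin(k\pi x)/(k\pi)^2$. A small Python check (ours; Simpson quadrature, with the integration range split at the kink $y=x$ of $\min(x,y)$):

```python
import math

def simpson(f, lo, hi, m=400):
    """Composite Simpson rule with m (even) subintervals."""
    h = (hi - lo) / m
    s = f(lo) + f(hi)
    s += 4 * sum(f(lo + (2 * i - 1) * h) for i in range(1, m // 2 + 1))
    s += 2 * sum(f(lo + 2 * i * h) for i in range(1, m // 2))
    return s * h / 3

def apply_bb_kernel(x, k):
    """Integrate (min(x,y) - x*y) * sin(k*pi*y) over y in [0,1],
    splitting at y = x where min(x,y) has a kink."""
    f = lambda y: (min(x, y) - x * y) * math.sin(k * math.pi * y)
    return simpson(f, 0.0, x) + simpson(f, x, 1.0)
```

For every $k$ and $x$ tried, the result agrees with $\lambda_k(1)\sin(k\pi x)$ to high precision.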
By analogy the general case $a>0$ leads to
\begin{equation*}
\int_0^\infty K_0(x,t)f(t)\exp(-at)\mbox{d}t=\int_0^1(\min(x,y)-xy)f(-\log(y))y^{a-1}\mbox{d}y,
\end{equation*}
which is connected to the eigenvalue problem of the weighted Brownian bridge $t^{a-1}B(t)$. This problem is solved in Theorem 1.4 in \cite{DM:2003}, which states the eigenvalues and eigenfunctions explicitly. Let $\nu=(a+1)^{-1}$ (note that there is a typo in \cite{DM:2003}), denote by $J_\nu$ the Bessel function of the first kind of order $\nu$, and by $0<z_{\nu,1}<z_{\nu,2}<\ldots$ the ascending sequence of zeros of $J_{\nu}$; for details on the zeros of Bessel functions see \cite{W:1995}, Chapter XV. We have for $k=1,2,\ldots$
\begin{equation}\label{eq:eig}
\lambda_k(a)=\left(\frac{2\nu}{z_{\nu,k}}\right)^2\quad \mbox{and}\quad f_{k,a}(t)=\frac{J_\nu\left(z_{\nu,k}\exp(-\frac{t}{2\nu})\right)}{\sqrt{\nu}J_{\nu-1}(z_{\nu,k})}\exp\left(-\left(\frac1{\nu}-1\right)\frac{t}2\right),\quad t>0.
\end{equation}
By using the identities in Remark \ref{rem:cum} (compare to Corollary 1.3 in \cite{DM:2003}) we see that $\sum_{k=1}^\infty\lambda^j_k(a)=\kappa_{j,a}/(2^{j-1}(j-1)!)$, $j=1,2,3,4$. The largest twenty eigenvalues are given in Table \ref{tab:eig}, and for the sake of completeness, we approximated the scaled cumulants by the sums of the respective powers of the first 100 eigenvalues. For the mean there is still some difference in the first few digits, which is explained by the slow decay of the eigenvalues to 0.
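For $a=1$ the identity reduces to classical even zeta values, since $\lambda_k(1)=(k\pi)^{-2}$. A quick Python check of the partial sums (ours; the truncation at $10^5$ terms reflects the slow decay just mentioned):

```python
import math

# kappa_{j,1}/(2^{j-1}(j-1)!) for j = 1,...,4, evaluated from the
# closed-form cumulants at a = 1: 1/6, 1/90, 1/945, 1/9450
targets = {1: 1 / 6, 2: 1 / 90, 3: 1 / 945, 4: 1 / 9450}

# partial sums of sum_k lambda_k(1)^j with lambda_k(1) = (k*pi)^(-2)
partial = {j: sum(1.0 / (k * math.pi) ** (2 * j) for k in range(1, 100001))
           for j in targets}
```

For $j=1$ the truncation error is of order $10^{-6}$, consistent with the slow convergence visible in Table \ref{tab:eig}; for $j\ge2$ it is negligible.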
\begin{remark}
The stated formulas of eigenvalues and eigenfunctions were also derived in \cite{BH:2000} for the case $a=1$ relating the statistics to the classical Cram\'{e}r-von Mises test and for the general case $a>-1$ by solving a related differential equation in \cite{BH:2008}. From these results we see that \eqref{eq:eig} also holds in these cases.
\end{remark}
\begin{table}[t]
\centering
\begin{tabular}{lrrrrrr}
$k/a$& 0 & 1 & 2 & 3 & 4 & 5 \\
\hline
1 & 0.2724430 & 0.10132118 & 0.05275301 & 0.03232757 & 0.02183334 & 0.015732912 \\
2 & 0.0812703 & 0.02533030 & 0.01221201 & 0.00716691 & 0.00470790 & 0.003327524 \\
3 & 0.0386475 & 0.01125791 & 0.00528483 & 0.00305755 & 0.00199068 & 0.001398474 \\
4 & 0.0225326 & 0.00633257 & 0.00293284 & 0.00168481 & 0.00109214 & 0.000764966 \\
5 & 0.0147448 & 0.00405285 & 0.00186176 & 0.00106499 & 0.00068857 & 0.000481445 \\
6 & 0.0103955 & 0.00281448 & 0.00128585 & 0.00073348 & 0.00047341 & 0.000330626 \\
7 & 0.0077217 & 0.00206778 & 0.00094101 & 0.00053570 & 0.00034534 & 0.000240981 \\
8 & 0.0059612 & 0.00158314 & 0.00071835 & 0.00040833 & 0.00026299 & 0.000183403 \\
9 & 0.0047409 & 0.00125088 & 0.00056629 & 0.00032151 & 0.00020693 & 0.000144239 \\
10 & 0.0038603 & 0.00101321 & 0.00045785 & 0.00025971 & 0.00016705 & 0.000116401 \\
11 & 0.0032042 & 0.00083737 & 0.00037782 & 0.00021415 & 0.00013768 & 0.000095907 \\
12 & 0.0027021 & 0.00070362 & 0.00031708 & 0.00017960 & 0.00011543 & 0.000080385 \\
13 & 0.0023095 & 0.00059953 & 0.00026989 & 0.00015279 & 0.00009817 & 0.000068347 \\
14 & 0.0019966 & 0.00051694 & 0.00023250 & 0.00013156 & 0.00008450 & 0.000058824 \\
15 & 0.0017433 & 0.00045032 & 0.00020237 & 0.00011447 & 0.00007351 & 0.000051161 \\
16 & 0.0015352 & 0.00039579 & 0.00017774 & 0.00010050 & 0.00006452 & 0.000044903 \\
17 & 0.0013624 & 0.00035059 & 0.00015735 & 0.00008895 & 0.00005709 & 0.000039727 \\
18 & 0.0012171 & 0.00031272 & 0.00014028 & 0.00007927 & 0.00005088 & 0.000035397 \\
19 & 0.0010939 & 0.00028067 & 0.00012584 & 0.00007110 & 0.00004562 & 0.000031738 \\
20 & 0.0009885 & 0.00025330 & 0.00011352 & 0.00006412 & 0.00004114 & 0.000028618 \\ \hline
$\sum_{j=1}^{100}\lambda_j(a)$ & 0.4959773 & 0.16565850 & 0.08288489 & 0.04974765 & 0.03317179 & 0.023697320 \\
$\kappa_{1,a}$ & 0.5000000 & 0.16666667 & 0.08333333 & 0.05000000 & 0.03333333 & 0.023809524 \\
$\sum_{j=1}^{100}\lambda_j^2(a)$ & 0.0833333 & 0.01111111 & 0.00297619 & 0.00111111 & 0.00050505 & 0.000261643 \\
$\kappa_{2,a}/2$ & 0.0833333 & 0.01111111 & 0.00297619 & 0.00111111 & 0.00050505 & 0.000261643 \\
$\sum_{j=1}^{100}\lambda_j^3(a)$ & 0.0208333 & 0.00105820 & 0.00014881 & 0.00003419 & 0.00001052 & 0.000003934 \\
$\kappa_{3,a}/8$ & 0.0208333 & 0.00105820 & 0.00014881 & 0.00003419 & 0.00001052 & 0.000003934 \\
$\sum_{j=1}^{100}\lambda_j^4(a)$ & 0.0055556 & 0.00010582 & 0.00000777 & 0.00000109 & 0.00000023 & 0.000000061 \\
$\kappa_{4,a}/48$ & 0.0055556 & 0.00010582 & 0.00000777 & 0.00000109 & 0.00000023 & 0.000000061 \\
\end{tabular}
\caption{First twenty eigenvalues $\lambda_k(a)$ for different tuning parameters $a$ and sums of powers over the first 100 eigenvalues and the corresponding cumulants using the formulas in Remark \ref{rem:cum}.}\label{tab:eig}
\end{table}
\section{Limit distribution under fixed alternatives and consistency}\label{sec:Limit_dist_fixed_alt}
In this section we assume that $X_1,X_2,\ldots$ is a sequence of iid. random variables with cumulative distribution function $F$, $\mathbb{E}[X_1]=1$ and $\mathbb{E}[|X_1|^3]<\infty$. Owing to the scale invariance of $T_{n,a}$, the normalisation $\mathbb{E}[X_1]=1$ entails no loss of generality among fixed alternatives with existing third moment.
\begin{theorem}\label{thm:cons}
We have
\begin{equation*}
\frac{T_{n,a}}{n}\stackrel{\mathcal{\PP}}{\longrightarrow} \int_0^\infty z^2(y)\exp(-ay)\,{\rm d}y=\Delta_a,\quad a\ge0,\quad \mbox{as}\; n\to\infty,
\end{equation*}
where $z(y)=\mathbb{E}\left[(X_1-y-1)\mathbf{1}\{X_1>y\}\right]$, $y>0$.
\end{theorem}
\begin{proof}
Note that by the same arguments as in the proof of Theorem \ref{thm:AsyDist} we have $n^{-1}\|Z_n-Z_n^*\|_{\mathbb{H}_a}^2\to0$ a.s. Since by the strong law of large numbers in Hilbert spaces $n^{-1/2}Z_n^*\to z$ a.s. in $\mathbb{H}_a$, the claim follows by the continuous mapping theorem and Slutsky's lemma.
\end{proof}
Under the assumption $X_1\sim\mbox{Exp}(1)$, the characterisation in \eqref{eq:Char} gives $z\equiv0$ and hence $\Delta_a=0$. Since $\Delta_a$ equals 0 if and only if $X_1$ follows an exponential distribution, we conclude that the tests $T_{n,a}$ are consistent against each alternative distribution with existing first moment.
\begin{example}\label{ex:alt}
In this example we give explicit formulas for $\Delta_a$ for different $\Gamma(\beta,\beta)$ distributions. Note that for a suitable alternative we need $\mathbb{E}[X]=1$ to be satisfied. Direct evaluations show that
\begin{enumerate}
\item if $X\sim\Gamma(2,2)$, then $\Delta_a=2/(a+4)^3$,
\item if $X\sim\Gamma(3,3)$, then $\Delta_a=2(a^2+30a+252)/(a+6)^5$, and
\item if $X\sim\Gamma(4,4)$, then $\Delta_a=2\left({a}^{4}+56\,{a}^{3}+1344\,{a}^{2}+16640\,a+94720\right)/ \left( a
+8 \right) ^{7}$.
\end{enumerate}
\end{example}
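The first formula can be verified numerically from the definition of $\Delta_a$ in Theorem \ref{thm:cons}: for $X\sim\Gamma(2,2)$ (density $4xe^{-2x}$, mean $1$) one computes $z$ and then $\Delta_a$ by quadrature. A Python sketch (ours; the Simpson rule and the truncation of both integrals at length $40$ are ad hoc choices):

```python
import math

def simpson(f, lo, hi, m=400):
    """Composite Simpson rule with m (even) subintervals."""
    h = (hi - lo) / m
    s = f(lo) + f(hi)
    s += 4 * sum(f(lo + (2 * i - 1) * h) for i in range(1, m // 2 + 1))
    s += 2 * sum(f(lo + 2 * i * h) for i in range(1, m // 2))
    return s * h / 3

# Gamma(2,2) density (shape 2, rate 2): mean 1, as required.
dens = lambda x: 4.0 * x * math.exp(-2.0 * x)

# z(y) = E[(X - y - 1) 1{X > y}], truncating the tail integral at y + 40
z = lambda y: simpson(lambda x: (x - y - 1.0) * dens(x), y, y + 40.0)

# Delta_a = int_0^inf z(y)^2 exp(-a y) dy, compared below with 2/(a+4)^3
deltas = {}
for a in (0.0, 1.0):
    deltas[a] = simpson(lambda y: z(y) ** 2 * math.exp(-a * y), 0.0, 40.0, 300)
```

The quadrature values agree with $2/(a+4)^3$ ($0.03125$ for $a=0$, $0.016$ for $a=1$) to four decimals.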
In the following we derive the limit distribution of the test statistic under fixed alternatives, under the assumptions stated at the beginning of this section.
Define
\begin{equation*}
W_n(y)=Z_n(y)-\sqrt{n}z(y),\quad W_n^*(y)=Z_n^*(y)-\sqrt{n}z(y),\quad y>0.
\end{equation*}
It is easy to see that $\|W_n-W_n^*\|^2_{\mathbb{H}_a}=o_\mathbb{P}(1)$.
\begin{lemma}\label{lem:conW}
Under the standing assumptions there exists a centred Gaussian process $W$ in $\mathbb{H}_a$ with covariance kernel
\begin{eqnarray*}
K(s,t)&=&\Psi_2(s\lor t)+st\mathbb{P}(X>s\lor t) - (s+t)\Psi_1(s\lor t)\\
&&-\mathbb{P}(X>s)\left(\Psi_2(t)-t\Psi_1(t)\right)-\mathbb{P}(X>t)\left(\Psi_2(s)-s\Psi_1(s)\right)\\
&&+\mathbb{V}(X)\mathbb{P}(X>s)\mathbb{P}(X>t)-z(s)z(t),\quad s,t>0,
\end{eqnarray*}
where $s\lor t=\max(s,t)$, and $\Psi_\ell(s)=\mathbb{E}[(X-1)^\ell\mathbf{1}\{X>s\}]$, $s>0$, for $\ell=1,2$. In $\mathbb{H}_a$ we have
\begin{equation*}
W_n\stackrel{\mathcal{D}}{\longrightarrow} W\quad \mbox{as}\,n\rightarrow\infty.
\end{equation*}
\end{lemma}
\begin{proof}
Since $\|W_n-W_n^*\|^2_{\mathbb{H}_a}=o_\mathbb{P}(1)$ the limit process is determined by the limit behaviour of $W_n^*$. We have
\begin{equation*}
W_n^*(y)=\frac1{\sqrt{n}}\sum_{j=1}^n\big[(X_j-y-1)\mathbf{1}\{X_j>y\}-(X_j-1)\mathbb{P}(X_j>y)-z(y)\big],\quad y\ge0,
\end{equation*}
and $\mathbb{E}[W_n^*(y)]=0$, $y>0$, since $\mathbb{E}[X_1]=1$. Hence $W_n^*$ is a sum of iid. elements in $\mathbb{H}_a$ and thus converges as $n\to\infty$ to a centred Gaussian process $W$ with covariance kernel $K(s,t)=\mathbb{E}[W_1^*(s)W_1^*(t)]$, $s,t\ge0$, in $\mathbb{H}_a$ by the central limit theorem in Hilbert spaces. The formula for $K$ is obtained by tedious but straightforward calculations.
\end{proof}
By the results of Lemma \ref{lem:conW} and Theorem 1 in \cite{BEH:2017}, we thus have
\begin{equation}\label{eq:altasyvert}
\sqrt{n}\left(\frac{T_{n,a}}{n}-\Delta_a\right)\stackrel{\mathcal{D}}{\longrightarrow} \mbox{N}(0,\sigma_a^2),
\end{equation}
where
\begin{equation*}
\sigma_a^2=4\int_0^\infty\int_0^\infty K(x,y)z(x)z(y)\exp(-a(x+y))\,\mbox{d}x\mbox{d}y.
\end{equation*}
For some families of distributions, $\Delta_a$ and $\sigma_a^2$ can be calculated explicitly for fixed tuning parameters, see Example \ref{ex:alt}.
In general, however, we have to find a consistent estimator $\widehat{\sigma}_{n,a}^2$ of $\sigma_a^2$. In this spirit, we replace the probabilities by relative frequencies, the variance by the empirical variance $\widehat{\tau}_n^2=\frac1n\sum_{j=1}^n(Y_j-1)^2$, and the expectations by the empirical counterparts
\begin{equation*}
\Psi_{n,\ell}(s)=\frac1n\sum_{j=1}^n(Y_j-1)^\ell\mathbf{1}\{Y_j>s\},\quad \ell=1,2,\quad \mbox{and}\quad z_{n}(s)=\frac1n\sum_{j=1}^n(Y_j-s-1)\mathbf{1}\{Y_j>s\},\quad s>0,
\end{equation*}
based on the scaled random variables $Y_1,\ldots,Y_n$. Denoting by $K_n$ the resulting estimator of $K$ obtained by plugging the empirical counterparts into the formula for $K$, the estimator $\widehat{\sigma}_{n,a}^2$ of $\sigma_a^2$ is
\begin{equation*}
\widehat{\sigma}_{n,a}^2=4\int_0^\infty\int_0^\infty K_n(x,y)z_n(x)z_n(y)\exp(-a(x+y))\,\mbox{d}x\mbox{d}y.
\end{equation*}
Writing $\widehat{\tau}_n^2=n^{-1}\sum_{j=1}^n(Y_j-1)^2$, and
\begin{eqnarray*}
S_{1n}&=&\int_0^\infty\int_0^\infty \Psi_{n,2}(s\lor t)z_n(s)z_n(t)\exp(-a(s+t))\,\mbox{d}t\mbox{d}s,\\
S_{2n}&=&\frac1n\sum_{j=1}^n\int_0^\infty\int_0^\infty st\mathbf{1}\{Y_j>s\lor t\}z_n(s)z_n(t)\exp(-a(s+t))\,\mbox{d}t\mbox{d}s,\\
S_{3n}&=&\int_0^\infty\int_0^\infty (s+t)\Psi_{n,1}(s\lor t)z_n(s)z_n(t)\exp(-a(s+t))\,\mbox{d}t\mbox{d}s,\\
S_{4n}&=&\int_0^\infty\left(\Psi_{n,2}(t)-t\Psi_{n,1}(t)\right)z_n(t)\exp(-at)\mbox{d}t,\\
S_{5n}&=&\frac1n\sum_{j=1}^n\int_0^\infty \mathbf{1}\{Y_j>t\}z_n(t)\exp(-at)\,\mbox{d}t,\\
S_{6n}&=&\int_0^\infty z_n^2(t)\exp(-at)\,\mbox{d}t,
\end{eqnarray*}
we have
\begin{equation*}
\widehat{\sigma}_{n,a}^2=4\left[S_{1n}+S_{2n}-S_{3n}-2S_{4n}S_{5n}+\widehat{\tau}_n^2S_{5n}^2-S_{6n}^2\right].
\end{equation*}
Using the functions $\upsilon_{\ell,a}(x,y)$, $x,y>0$, $\ell=1,2,3$, from Appendix \ref{app:form}, we have
\begin{eqnarray*}
S_{1n}&=&\frac1{n^3}\sum_{j,k,l=1}^n(Y_j-1)^2\upsilon_{1,a}(Y_j,Y_k)\upsilon_{1,a}(Y_j,Y_l),\\
S_{2n}&=&\frac1{n^3}\sum_{j,k,l=1}^n\upsilon_{2,a}(Y_j,Y_k)\upsilon_{2,a}(Y_j,Y_l),\\
S_{3n}&=&\frac2{n^3}\sum_{j,k,l=1}^n(Y_j-1)\upsilon_{2,a}(Y_j,Y_k)\upsilon_{1,a}(Y_j,Y_l),\\
S_{4n}&=&\frac1{n^2}\sum_{j,k=1}^n\left[(Y_j-1)^2\upsilon_{1,a}(Y_j,Y_k)-(Y_j-1)\upsilon_{2,a}(Y_j,Y_k)\right],\\
S_{5n}&=&\frac1{n^2}\sum_{j,k=1}^n\upsilon_{1,a}(Y_j,Y_k),\quad\mbox{and}\quad S_{6n}=\frac1{n^2}\sum_{j,k=1}^n\upsilon_{3,a}(Y_j,Y_k).
\end{eqnarray*}
Since $\widehat{\sigma}^2_{n,a}$ is a consistent sequence of estimators of $\sigma^2_a$ for each fixed tuning parameter $a>0$, we have (in the spirit of Corollary 1 in \cite{BEH:2017}), under the assumptions stated at the beginning of this section,
\begin{equation}\label{eq:asystnorm}
\frac{\sqrt{n}}{\widehat{\sigma}_{n,a}}\left(\frac{T_{n,a}}{n}-\Delta_a\right)\stackrel{\mathcal{D}}{\longrightarrow} \mbox{N}(0,1)\quad \mbox{as}\quad n\to\infty.
\end{equation}
This result has immediate consequences, see Section 3 of \cite{BEH:2017}, which are detailed in the following subsections.
\subsection{A confidence interval for $\Delta_a$}
For $\alpha\in(0,1)$ let $u_\alpha=\Phi^{-1}(1-\alpha/2)$ be the $(1-\alpha/2)$-quantile of the standard normal law. Putting
\begin{equation*}
I_{n,a}=\left[\frac{T_{n,a}}{n}-\frac{u_\alpha\widehat{\sigma}_{n,a}}{\sqrt{n}},\frac{T_{n,a}}{n}+\frac{u_\alpha\widehat{\sigma}_{n,a}}{\sqrt{n}}\right],\quad a\ge 0,
\end{equation*}
it follows from \eqref{eq:asystnorm} that
\begin{equation*}
\lim_{n\to \infty}\mathbb{P}_F(I_{n,a}\ni \Delta_a)=1-\alpha.
\end{equation*}
Hence $I_{n,a}$ is an asymptotic confidence interval at confidence level $1-\alpha$ for $\Delta_a$. In the following we revisit the gamma distributions of Example \ref{ex:alt}. Specific values of $\Delta_a$, $\sigma_a^2$ and the corresponding estimators for $\Gamma(\beta,\beta)$ and $\beta\in\{2,3,4,5,10\}$ are found in Table \ref{tab:explval}. The discrepancy between estimators and true values is in line with the results of Table 5 in \cite{BEH:2017}, which indicates a rather slow convergence to the limit distribution under alternatives in \eqref{eq:asystnorm}. In Table \ref{tab:coverage} we present the empirical coverage probabilities of $I_{n,a}$ for $\Delta_a$ in the same setting as in Table \ref{tab:explval}. Critical values have been obtained by the Pearson-system approximation as presented in Table \ref{tab:cv}. Interestingly, the confidence interval seems to be conservative, since the estimated probability of coverage of the true value $\Delta_a$ is disproportionately high in most cases. This is in contrast to the findings in Table 6 in \cite{BEH:2017}, where lower coverage rates than indicated by the nominal level were reported.
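The interval itself is trivial to compute. The following Python sketch (ours) evaluates $I_{n,a}$ for the simulated $\Gamma(3,3)$ values at $a=1$ from Table \ref{tab:explval} ($T_{n,1}/n=0.0349$, $\widehat{\sigma}_{n,1}^2=0.0069$, $n=1000$, $\alpha=0.1$); the resulting interval covers the true value $\Delta_1=0.0337$.

```python
from math import sqrt
from statistics import NormalDist

def conf_int(t_over_n, sigma_hat, n, alpha=0.1):
    """Asymptotic level (1 - alpha) confidence interval I_{n,a} for Delta_a."""
    u = NormalDist().inv_cdf(1 - alpha / 2)  # (1 - alpha/2)-quantile of N(0,1)
    half = u * sigma_hat / sqrt(n)
    return (t_over_n - half, t_over_n + half)

# Illustration with the simulated Gamma(3,3) values for a = 1 from
# Table (explval): T_{n,1}/n = 0.0349, sigma_hat^2 = 0.0069, n = 1000.
lo, hi = conf_int(0.0349, sqrt(0.0069), 1000)
```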
\begin{table}[t]
\centering
\begin{tabular}{lrrrrr}
$a/\beta$ & 2 & 3 & 4 & 5 & 10 \\ \hline
$\Delta_0$ & 0.0312 & 0.0648 & 0.0903 & 0.1100 & 0.1661 \\
$T_{n,0}/n$ & 0.0252 & 0.0672 & 0.0944 & 0.1179 & 0.1778 \\
$\sigma_0^2$ & 0.0178 & 0.0300 & 0.0357 & 0.0382 & 0.0362\\
$\widehat{\sigma}_{n,0}^2$ & 0.0134 & 0.0313 & 0.0361 & 0.0377 & 0.0331\\ \hline
$\Delta_1$ & 0.0160 & 0.0337 & 0.0472 & 0.0575 & 0.0863 \\
$T_{n,1}/n$ & 0.0135 & 0.0349 & 0.0493 & 0.0619 & 0.0923\\
$\sigma_1^2$ & 0.0039 & 0.0065 & 0.0076 & 0.0080 & 0.0070\\
$\widehat{\sigma}_{n,1}^2$ & 0.0031 & 0.0069 & 0.0076 & 0.0080 & 0.0062 \\ \hline
$\Delta_2$ & 0.0092 & 0.0193 & 0.0268 & 0.0324 & 0.0475 \\
$T_{n,2}/n$ & 0.0081 & 0.0199 & 0.0280 & 0.0348 & 0.0506 \\
$\sigma_2^2$ & 0.0012 & 0.0018 & 0.0020 & 0.0020 & 0.0015\\
$\widehat{\sigma}_{n,2}^2$ & 0.0010 & 0.0019 & 0.0019 & 0.0020 & 0.0013 \\ \hline
$\Delta_3$ & 0.0058 & 0.0119 & 0.0162 & 0.0194 & 0.0276 \\
$T_{n,3}/n$ & 0.0053 & 0.0122 & 0.0170 & 0.0208 & 0.0292 \\
$\sigma_3^2$ & 0.0004 & 0.0006 & 0.0006 & 0.0006 & 0.0003\\
$\widehat{\sigma}_{n,3}^2$ & 0.0004 & 0.0006 & 0.0006 & 0.0006 & 0.0003 \\ \hline
$\Delta_4$ & 0.0040 & 0.0078 & 0.0104 & 0.0123 & 0.0168 \\
$T_{n,4}/n$ & 0.0036 & 0.0079 & 0.0109 & 0.0130 & 0.0177 \\
$\sigma_4^2$ & 0.0002 & 0.0002 & 0.0002 & 0.0002 & 0.0001\\
$\widehat{\sigma}_{n,4}^2$ & 0.0002 & 0.0002 & 0.0002 & 0.0002 & 0.0001
\end{tabular}
\caption{Values of $\Delta_a$ and $\sigma_a^2$ for different $\Gamma(\beta,\beta)$ distributions and simulated observations of $T_{n,a}/n$ and $\widehat{\sigma}^2_{n,a}$ for different tuning parameters $a$ ($n=1000$).}\label{tab:explval}
\end{table}
\begin{table}[b]
\centering
\begin{tabular}{llrrrrrrrrrrrrrrr}
& $\beta$ &\multicolumn{3}{c}{2} & \multicolumn{3}{c}{3} & \multicolumn{3}{c}{4} & \multicolumn{3}{c}{5} & \multicolumn{3}{c}{10} \\[1mm]
$a$ & $n$ & 20 & 50 & 100 & 20 & 50 & 100 & 20 & 50 & 100 & 20 & 50 & 100 & 20 & 50 & 100 \\\hline
0 & &0.95 & 0.95 & 0.94 & 0.96 & 0.96 & 0.97 & 0.96 & 0.98 & 0.98 & 0.97 & 0.98 & 0.99 & 0.98 & 0.99 & 0.99 \\
1 & & 0.90 & 0.92 & 0.93 & 0.92 & 0.95 & 0.95 & 0.93 & 0.96 & 0.96 & 0.94 & 0.96 & 0.97 & 0.96 & 0.98 & 0.99 \\
2 & & 0.87 & 0.90 & 0.91 & 0.89 & 0.93 & 0.94 & 0.90 & 0.94 & 0.95 & 0.91 & 0.94 & 0.96 & 0.93 & 0.96 & 0.97\\
3 & & 0.85 & 0.89 & 0.91 & 0.85 & 0.91 & 0.92 & 0.87 & 0.92 & 0.93 & 0.87 & 0.92 & 0.94 & 0.90 & 0.94 & 0.95 \\
4 & & 0.82 & 0.88 & 0.90 & 0.83 & 0.90 & 0.92 & 0.84 & 0.90 & 0.92 & 0.85 & 0.90 & 0.93 & 0.87 & 0.92 & 0.94
\end{tabular}
\caption{Empirical relative frequencies of coverage of $I_{n,a}$ for $\Delta_a$ at a nominal level of 0.9 for different $\Gamma(\beta,\beta)$ distributions (10000 replications).}\label{tab:coverage}
\end{table}
\subsection{Neighbourhood-of-model validation}
A clear drawback in the field of goodness-of-fit testing is that if a level-$\alpha$-test does not lead to the rejection of the hypothesis $H_0$, the conclusion that $H_0$ is `confirmed' is generally wrong. To overcome this problem, the results in \eqref{eq:altasyvert} lead to a so-called `neighbourhood-of-model validation', see Subsection 3.3 in \cite{BEH:2017}. In this spirit one can regard $\Delta_a$ as a distance to the null hypothesis. If we decide to `tolerate' a given value $\widetilde\Delta$, we can consider the testing problem
\begin{equation*}
H_{\widetilde\Delta}:\Delta_a(F)\ge\widetilde\Delta\quad\mbox{versus}\quad K_{\widetilde\Delta}:\Delta_a(F)<\widetilde\Delta.
\end{equation*}
From \eqref{eq:altasyvert} we obtain an asymptotic level-$\alpha$-test by rejecting $H_{\widetilde\Delta}$ whenever
\begin{equation*}
T_{n,a}\le n\widetilde\Delta-\sqrt{n}\widehat{\sigma}_{n,a}\Phi^{-1}(1-\alpha).
\end{equation*}
This statement is proved by following the reasoning in Subsection 3.3 of \cite{BEH:2017}.
\section{Asymptotic Bahadur efficiencies}\label{sec:ABE}
In this section we mainly consider the case $a>0$. We start the investigation by giving an alternative representation of $\Delta_a$ in Theorem \ref{thm:cons}.
\begin{lemma}\label{lem:stoch_lim}
Suppose that $\mu=\mathbb{E}[X]$ and $\mathbb{E}[|X|^3]<\infty$. If $a>0$, we have
\begin{equation*}
\frac{T_{n,a}}n\stackrel{\mathcal{\PP}}{\longrightarrow}\Delta_a=\frac2{a^3}-\frac1{a^3}\mathbb{E}\left[\left((1-|Y_1-Y_2|)(a^2+a)+a+2\right)\exp(-a (Y_1\wedge Y_2))\right],
\end{equation*}
and if $a=0$, we have
\begin{equation*}
\frac{T_{n,0}}n\stackrel{\mathcal{\PP}}{\longrightarrow}\Delta_0=\mathbb{E}\left[(Y_1\wedge Y_2)(Y_1-1)(Y_2-1)+(Y_1\wedge Y_2)^2\left(1-\frac12(Y_1+Y_2)\right)+\frac13(Y_1\wedge Y_2)^3\right],
\end{equation*}
where $Y_j=X_j/\mu$, $j=1,2$.
\end{lemma}
\begin{proof}
By Theorem \ref{thm:cons}, applied to the iid. random variables $Y_1,\ldots,Y_n$ with $\mathbb{E}[Y_1]=1$, we have
\begin{equation*}
\frac{T_{n,a}}n\stackrel{\mathcal{\PP}}{\longrightarrow} \Delta_a=\int_0^\infty z^2(y)\exp(-ay)\,\mbox{d}y,
\end{equation*}
where $z(y)=\mathbb{E}[(Y_1-y-1)\mathbf{1}\{Y_1>y\}]$, $y>0$. Since $Y_1$ and $Y_2$ are independent, it follows that
\begin{equation*}
z^2(y)=\mathbb{E}[(Y_1-y-1)(Y_2-y-1)\mathbf{1}\{Y_1\wedge Y_2>y\}].
\end{equation*}
Fubini's Theorem yields
\begin{eqnarray*}
\Delta_a&=&\int_0^\infty\mathbb{E}[(Y_1-y-1)(Y_2-y-1)\mathbf{1}\{Y_1\wedge Y_2>y\}]\exp(-ay)\,\mbox{d}y\\
&=&\mathbb{E}\left[\int_0^{Y_1\wedge Y_2}(Y_1-y-1)(Y_2-y-1)\exp(-ay)\,\mbox{d}y\right].
\end{eqnarray*}
Straightforward integration and some algebra conclude the proof in both cases.
\end{proof}
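As a numerical sanity check (ours, not part of the paper's source code): under the null hypothesis $Y_j\sim\mbox{Exp}(1)$ we have $\Delta_a=0$, so the expectation in the first display of the lemma must equal $2$ for every $a>0$. Double Simpson quadrature at $a=1$, with the inner integration range split at the kink $y=x$ of $|x-y|$ and $x\wedge y$, confirms this:

```python
import math

def simpson(f, lo, hi, m=400):
    """Composite Simpson rule with m (even) subintervals."""
    h = (hi - lo) / m
    s = f(lo) + f(hi)
    s += 4 * sum(f(lo + (2 * i - 1) * h) for i in range(1, m // 2 + 1))
    s += 2 * sum(f(lo + 2 * i * h) for i in range(1, m // 2))
    return s * h / 3

a, L = 1.0, 40.0   # tuning parameter and ad hoc truncation point
c1, c2 = a * a + a, a + 2.0

def integrand(x, y):
    # expectand of the lemma times the Exp(1) x Exp(1) density
    return (((1.0 - abs(x - y)) * c1 + c2)
            * math.exp(-a * min(x, y)) * math.exp(-x - y))

def inner(x):
    g = lambda y: integrand(x, y)
    return simpson(g, 0.0, x) + simpson(g, x, L)  # split at the kink y = x

expectation = simpson(inner, 0.0, L, 200)
```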
\begin{remark}
An alternative way of proving Lemma \ref{lem:stoch_lim} is to start from the representation \eqref{eq:expl} and to use symmetry arguments, the law of large numbers and Lebesgue's dominated convergence theorem, i.e., to adapt the proof of Theorem 3.1 in \cite{EH:2021}.
\end{remark}
A useful tool for a theoretical comparison of the performance of two tests is the asymptotic relative Bahadur efficiency. This concept has been used throughout the literature on exponentiality tests, see \cite{CMO:2019,JMNOV:2015,M:2016,MO:2016,VN:2015}, and for more details on the theory we refer to \cite{B:1971,N:1995}. For a brief introduction to the concept, see Section 5 of \cite{JMO:2020}. In this spirit and using the same notation, we calculate the local approximate Bahadur slope of $T_{n,a}$. Assume that $0\in\Theta\subset\R$, where $\Theta$ is an open parameter space, and that $\mathcal{G}=\{G(x;\vartheta):\vartheta\in\Theta\}$ is a family of distribution functions with densities $g(x;\vartheta)$, such that $\vartheta=0$ corresponds to the standard exponential density $g(x;0)=\exp(-x)$, $x>0$, and for each $\vartheta>0$ the density $g(x;\vartheta)$ does not correspond to an exponential distribution in $\mathcal{E}$. Moreover, we assume that the regularity assumptions WD in \cite{NP:2004} are satisfied.
In the following assume that $X_1,X_2,\ldots$ are independent copies of $X$ following the distribution with density $g(\cdot;\vartheta)$ with existing expectation $\mathbb{E}[X]=\mu(\vartheta)=\int_0^\infty x g(x;\vartheta) \mbox{d}x$. Then Lemma \ref{lem:stoch_lim} yields
\begin{equation*}
\frac{T_{n,a}}n\stackrel{\mathbb{P}_\vartheta}{\longrightarrow} b_{T_a}(\vartheta),
\end{equation*}
where $\stackrel{\mathbb{P}_\vartheta}{\longrightarrow}$ denotes convergence in probability under the true parameter $\vartheta$, and \begin{equation*}
b_{T_a}(\vartheta)=\int_0^\infty\int_0^\infty h_a(x,y;\vartheta)g(x;\vartheta)g(y;\vartheta)\mbox{d}x\mbox{d}y
\end{equation*}
with
\begin{equation*}
a^3 h_a(x,y;\vartheta)=2-\left(\left(1-\frac{|x-y|}{\mu(\vartheta)}\right)(a^2+a)+a+2\right)\exp\left(-\frac{a}{\mu(\vartheta)}(x\wedge y)\right),\quad x,y>0,\;a>0,
\end{equation*}
and
\begin{equation*}
h_0(x,y;\vartheta)=\frac{x\wedge y}{\mu(\vartheta)}\left(\frac{x}{\mu(\vartheta)}-1\right)\left(\frac{y}{\mu(\vartheta)}-1\right)+\left(\frac{x\wedge y}{\mu(\vartheta)}\right)^2\left(1-\frac{x+y}{2\mu(\vartheta)}\right)+\frac1{3}\left(\frac{x\wedge y}{\mu(\vartheta)}\right)^3,\quad x,y>0.
\end{equation*}
Note that $\mu(0)=1$, $b_{T_a}(0)=0$, and after some algebra we have $b_{T_a}'(0)=0$. Here and in the following, all derivatives are calculated w.r.t. $\vartheta$. Writing $\mu_1=\mu'(0)$, the same reasoning as in Appendix B of \cite{CMO:2021} gives
\begin{eqnarray*}
b_{T_a}''(0)&=& 2\int_0^\infty\int_0^\infty h_a(x,y;0)g'(x;0)g'(y;0)\mbox{d}x\mbox{d}y+4\mu_1\int_0^\infty\int_0^\infty h_a'(x,y;0)g(x;0)g'(y;0)\mbox{d}x\mbox{d}y\\
&& +\mu_1^2\int_0^\infty\int_0^\infty h_a''(x,y;0)g(x;0)g(y;0)\mbox{d}x\mbox{d}y.
\end{eqnarray*}
Expanding $b_{T_a}(\vartheta)$ into a Taylor series around $\vartheta_0=0$, we obtain
\begin{equation*}
b_{T_a}(\vartheta)=\frac{b_{T_a}''(0)}{2}\vartheta^2+O(\vartheta^3),\quad\mbox{as}\;\vartheta\rightarrow 0.
\end{equation*}
From Section \ref{sec:Limit dist} we know that the limit distribution of $T_{n,a}$ is $\|Z\|^2_{\mathbb{H}_a}=\sum_{j=1}^\infty \lambda_j(a) N_j^2,$ where $N_1,N_2,\ldots$ are iid. standard normal, and $(\lambda_j(a))_{j\in\N}$ is the decreasing sequence of positive eigenvalues of the integral operator $\mathcal{K}_a$. Using the result in \cite{Z:1961}, the logarithmic tail behaviour of the limit distribution of $\widetilde{T}_{n,a}=\sqrt{T_{n,a}}$ is
\begin{equation*}
\log\left(1-F_{\widetilde{T}_a}(x)\right)=-\frac{x^2}{2\lambda_1(a)}+o(x^2),\quad x\rightarrow\infty.
\end{equation*}
Since the limit in probability of $\widetilde{T}_{n,a}/\sqrt{n}$ is $\sqrt{b_{T_a}(\vartheta)}$, the approximate local Bahadur slope is given by
\begin{equation*}
c^*_{T_a}(\vartheta)=(\lambda_1(a))^{-1}b_{T_a}''(0)\vartheta^2+o(\vartheta^2),\quad \mbox{as}\,\vartheta\rightarrow0.
\end{equation*}
We compare the approximate Bahadur slope to the double Kullback-Leibler distance, also called the Kullback-Leibler information number, see \cite{NT:1996},
\begin{equation*}
KL(g)=\int_0^\infty\frac{(g'(x;0))^2}{g(x;0)}\mbox{d}x-\left(\int_0^\infty G'(x;0)\,\mbox{d}x\right)^2,
\end{equation*}
where $G(x;\vartheta)=\int_0^xg(t;\vartheta)\mbox{d}t$ is the cumulative distribution function corresponding to $g(\cdot;\vartheta)$. It is well known that the Kullback-Leibler information numbers provide an upper bound for Bahadur slopes, see \cite{B:1971,R:1970}. Hence we compute the approximate Bahadur efficiencies, given by
\begin{equation*}
\mbox{eff}(g)=\frac{b_{T_a}''(0)}{2\lambda_1(a) KL(g)}.
\end{equation*}
These are equivalent to the comparison of the local approximate Bahadur slopes of $T_{n,a}$ and the likelihood ratio test as in \cite{CMO:2021}. Note that the largest eigenvalues $\lambda_1(a)$ are given in \eqref{eq:eig} and for easy reference tabulated in the first row of Table \ref{tab:eig}. To simplify the comparison to the many well known competing procedures treated in \cite{CMO:2021}, we consider the following examples of distributions, each a member of the class $\mathcal{G}$ and a standard reference for the computation of Bahadur efficiencies of exponentiality tests, see Section 5 of \cite{N:1996}:
\begin{enumerate}
\item the Weibull distribution with density
\begin{equation*}
g(x;\vartheta)=(1+\vartheta)x^\vartheta\exp\left(-x^{1+\vartheta}\right),\quad x\ge0,
\end{equation*}
and $KL(g)=1 - 2\gamma + \pi^2/6 + \gamma^2 - (1 - \gamma)^2$, where $\gamma=0.5772156649\ldots$ is the Euler-Mascheroni constant,
\item the gamma distribution with density
\begin{equation*}
g(x;\vartheta)=x^\vartheta\exp\left(-x\right)/\Gamma(\vartheta+1),\quad x\ge0,
\end{equation*}
where $\Gamma(\cdot)$ denotes the gamma function, and $KL(g)=\pi^2/6 - 1$,
\item a linear failure rate (LFR) distribution with density
\begin{equation*}
g(x;\vartheta)=(1+\vartheta x)\exp\left(-x-\vartheta x^2/2\right),\quad x\ge0,
\end{equation*}
and $KL(g)=1$,
\item a mixture of exponential distributions with negative weights (EMNW$(\beta)$) with density
\begin{equation*}
g(x;\vartheta)=(1+\vartheta)\exp(-x)-\vartheta\beta\exp(-\beta x),\quad x\ge0,
\end{equation*}
and $KL(g)=16/45$ for $\beta=3$,
\item and a Makeham distribution with density
\begin{equation*}
g(x;\vartheta)=(1 + \vartheta(1 - \exp(-x)))\exp\left(-x - \vartheta(x - 1 + \exp(-x))\right),\quad x\ge0,
\end{equation*}
and $KL(g)=1/12$.
\end{enumerate}
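As a sanity check on the $KL(g)$ formula above, one can evaluate it numerically for the LFR alternative, where the closed form gives $KL(g)=1$. The following sketch (grid size and cutoff are ad hoc choices, not from the paper) uses plain trapezoidal integration:

```python
import numpy as np

# Numerical check of KL(g) for the LFR alternative, closed form KL(g) = 1.
# The theta-derivative of the LFR density at theta = 0 is
# g'(x; 0) = (x - x^2/2) * exp(-x), and g(x; 0) = exp(-x).

def trapezoid(y, x):
    return float(np.sum((y[1:] + y[:-1]) / 2 * np.diff(x)))

def kl_lfr(x_max=60.0, n=300_001):
    x = np.linspace(0.0, x_max, n)
    g0 = np.exp(-x)
    dg = (x - x**2 / 2) * np.exp(-x)
    fisher = trapezoid(dg**2 / g0, x)      # int (g'(x;0))^2 / g(x;0) dx = 2
    # G'(x; 0) = int_0^x g'(t; 0) dt, computed as a cumulative trapezoid sum
    Gp = np.concatenate(([0.0], np.cumsum((dg[1:] + dg[:-1]) / 2 * np.diff(x))))
    mean_term = trapezoid(Gp, x)           # int G'(x; 0) dx = 1
    return fisher - mean_term**2

print(kl_lfr())  # approximately 1.0
```

The same routine applies to the other densities in the list after replacing $g'(x;0)$ and $g(x;0)$.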
\begin{table}[t]
\centering
\begin{tabular}{lrrrrrr}
Alt.$/a$& 0 & 1 & 2 & 3 & 4 & 5 \\
\hline
Weibull & 0.722 & 0.834 & 0.865 & 0.868 & 0.859 & 0.843 \\
Gamma & 0.517 & 0.672 & 0.754 & 0.801 & 0.829 & 0.844 \\
LFR & 0.917 & 0.731 & 0.592 & 0.495 & 0.424 & 0.371 \\
EMNW$(3)$ & 0.765 & 0.940 & 0.987 & 0.982 & 0.954 & 0.917 \\
Makeham & 0.918 & 0.987 & 0.948 & 0.884 & 0.818 & 0.757
\end{tabular}
\caption{Approximate Bahadur efficiencies $\mbox{eff}(g)$ of $T_{n,a}$ for different tuning parameters $a$.}\label{tab:eff}
\end{table}
The results are reported in Table \ref{tab:eff}. Interestingly, there is a clear dependence of the efficiency of the tests on the tuning parameter $a$ under all considered alternatives. Sometimes the highest efficiency is attained for the largest considered tuning parameter, as in the gamma case, while for the LFR alternative it is attained at the lowest value of $a$. The Makeham alternative suggests $a=1$ as the best value for the tuning parameter. This behaviour is consistent with the empirical power study results in Tables 1 to 5 in \cite{BH:2008}. A comparison to the Bahadur efficiencies stated in Tables 3 and 4 in \cite{CMO:2021} for other tests of exponentiality shows that the considered procedures are competitive, especially for the LFR and EMNW(3) alternatives. The results in Table \ref{tab:eff} confirm the suggestion of \cite{BH:2008} that a tuning parameter $a$ between 1 and 2 is a good choice, since it shows a robust approximate Bahadur efficiency over all considered alternatives.
\section{Conclusions and Outlook}\label{sec:conc}
We revisited the family of tests of exponentiality of \cite{BH:2008} and provided new insight into the asymptotic behaviour of the tests under fixed alternatives as well as local Bahadur efficiencies. These results facilitate the comparison of the performances to other well known tests of exponentiality. As a result we visualised the dependence of the power of the tests on the choice of the tuning parameter $a$. This effect might be controlled by implementing a data-dependent choice of the tuning parameter as proposed in \cite{T:2019}. We leave this investigation open for further research.
We finish the article by pointing out other related open questions.
As stated in Remark \ref{rem:Intro}, the family of tests based on the integrated distribution function presented in \cite{K:2001} is very close to the test statistic $T_{n,a}$. For this family of tests, theoretical results corresponding to those in Sections \ref{sec:Limit_dist_fixed_alt} and \ref{sec:ABE} are missing. There is little hope of solving the eigenvalue problem in this case, but results regarding Bahadur efficiency may be obtained by approximating the largest eigenvalue numerically.
In \cite{SAS:2019} the authors propose a test of exponentiality based on a characterisation of the exponential law by a conditional second moment equation involving the hazard rate. Theoretical insights into this family of tests are hitherto missing, so it would be interesting to see corresponding results, since the tests based on the mean residual life function rely on a characterisation by a first conditional moment equation.
\section*{Acknowledgement}
The author thanks Bernhard Klar and Bojana Milo{\v{s}}evi{\'{c}} for fruitful discussions, and is grateful to Norbert Henze for numerous suggestions that all led to an improvement of the paper.
\bibliographystyle{apalike2}
\section{Introduction}
Understanding of non-crystalline solids
such as glasses and granular systems is an important
problem in condensed matter physics \cite{Jamming-Book,kurchan-levine}.
A useful concept is that {\it frustration} of geometrical, energetic
or kinetic origins is indispensable to avoid crystallization and allow
realization of amorphous solids \cite{Sadoc,Tarjus-review}.
In the present paper we discuss jamming in a strongly frustrated
Josephson junction array (JJA) under external magnetic field
\cite{Yoshino-Nogawa-Kim-1,Yoshino-Nogawa-Kim-2,Yoshino-Nogawa-Kim-3}.
It is a very interesting
system which provides an exceptional opportunity to study both athermal
(jamming) and thermal (glass) transitions in exactly the same settings. The
question raised by the Chicago group - whether athermal and thermal
jamming or glass transitions can be understood in a unified way
\cite{Jamming-Book,Liu-Nagel,Nagel-group} - can be asked explicitly in
this system.
In the present paper we discuss the possibility that the athermal and thermal jamming transitions in the present system can be understood as a generalization
of the Aubry's transition \cite{Aubry-Daeron,Peyrard-Aubry} found in a
family of one-dimensional models of {\it friction}, most importantly
the Frenkel-Kontorova (FK) model which exhibits very rich phenomenology
in spite of its simplicity \cite{FK-review}.
The organization of the paper is as follows. In the next section, we
discuss the sequence of connections between the FK model \cite{FK-review},
Matsukawa-Fukuyama (MF) model \cite{Matsukawa-Fukuyama} and the
frustrated Josephson junction array under magnetic field
\cite{Tinkam,Mooij-group,Martinoli-Leemann} step by step. In sec. \ref{sec-aubry} we review the Aubry's transition
\cite{Aubry-Daeron,Peyrard-Aubry} in the FK model. There we focus on the
properties of the so called hull function which is a powerful
theoretical tool to analyze the Aubry's transition. Then we sketch our
recent attempt to generalize it for the case of frustrated JJA \cite{Yoshino-Nogawa-Kim-3}. In sec.~\ref{sec-shear} we point out that
'shear' can be exerted on JJA via external electric current \cite{Yoshino}.
We discuss how tribology (sliding friction) \cite{Persson}, non-linear rheology
(soft-matters, granular matters, etc.)
\cite{rheologybook,Otsuki-Sasa,Hatano-Otsuki-Sasa,Olsson-Teitel,Hatano,Otsuki-Hayakawa}
and non-linear transport (JJA,
superconductors, etc.) \cite{Tinkam,FFH} are related to
each other, emphasizing the remarkable similarity of their scaling
features around critical points including the J (Jamming)-point.
Finally we discuss the ``Jamming phase diagram'' of the JJA, which is
analogous to the one proposed for soft-matters
\cite{Liu-Nagel,Nagel-group}, suggested by our analysis of non-linear transport properties
at zero temperature \cite{Yoshino-Nogawa-Kim-1}
and Monte Carlo simulations at finite temperatures \cite{Yoshino-Nogawa-Kim-2}.
In sec. \ref{sec-conclusions} we summarize this paper and
discuss some future outlooks.
\section{Link between the friction models and the Josephson junction
arrays}
\label{sec-link}
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.5\textwidth]{FK-JJA.eps}
\end{center}
\caption{(Color online) Schematic pictures of the friction models and the Josephson
junction array. a) the Frenkel-Kontorova (FK) model b)
Matsukawa-Fukuyama's 2-chain model on the ladder lattice
and c) the Josephson junction array (JJA) on a square lattice under
external magnetic field.
The squares in the plaquettes represent positions of the vortexes
induced either by the mismatch between the two incommensurate surfaces b) or
by the external magnetic field c).
}
\label{fig-fk-jja}
\end{figure}
\subsection{Frenkel-Kontorova model - starting point}
\subsubsection{Frustration due to mismatching}
The original Frenkel-Kontorova model \cite{FK-review} is a one-dimensional elastic chain of
particles put on a periodic substrate (see Fig.~\ref{fig-fk-jja} a)). The Hamiltonian is given by,
\begin{equation}
H=\sum_{n=1}^{L} \left\{\frac{k}{2}(u_{n+1}-u_{n}-l)^2
- \lambda \cos \left(\frac{\pi u_{n}}{a}\right) \right\}.
\end{equation}
Here $u_{n}$ denotes the position of the $n$-th particle. The particles are
connected to each other by Hookian springs of strength $k$ as described by the 1st term
in the Hamiltonian where $l$ is the natural spacing between the
particles. We impose a boundary condition such that the length
of the whole system is fixed $(u_{L+1}-u_{1})/l=L$.\footnote{ The FK
model with fixed volume (length) and that under fixed pressure (external
force) behave completely differently \cite{FK-review,Peyrard-Aubry}.
The former is relevant in the context of friction (jamming) and
the frustration is in some sense stronger than in the latter. In the
latter case the response of the system with respect to
increments of the external force exhibits devil's staircase
singularities. Note also that the Hookian spring force, which arises
due to the harmonic potential, does not enforce the natural spacing $l$
by itself.}
The 2nd term
describes the periodic potential due to the substrate whose period is $a$.
There are two important parameters: 1) the strength of the potential $\lambda$
and 2) winding number $f$,
\begin{equation}
f=\frac{l}{2a}.
\end{equation}
Both are crucial for the jamming-unjamming transition (Aubry's
transition) in the FK model \cite{Aubry-Daeron,Peyrard-Aubry}.
Later we will find equivalent two parameters in the frustrated Josephson
junction array under external magnetic field.
The sinusoidal potential allows the elastic chain to make
phase slips with respect to the substrate. Thus this simple system
allows both elastic and plastic deformations.
The elastic term prefers to keep the natural spacing $l$ while the
substrate potential prefers $2a$.
In the context of the friction between two different materials brought
in contact with each other \cite{Persson},
it is natural to suppose that the two surfaces are {\it incommensurate}
with respect to each other, namely $f$ is an {\it irrational} number -
a number which cannot be represented as a ratio of some two integers.
As the result the system becomes {\it frustrated}
as soon as $\lambda$ becomes finite.
Finding the ground state of the system, which is some periodic (possibly
of very long periodicity) crystalline
structure for rational $f$, becomes a highly non-trivial problem \cite{ground-states-isotropic}.
In the present paper we always assume $f$ is irrational.
\footnote{For technical reasons we wish to use periodic boundary conditions which
cannot be compatible with {\it irrational} $f$. Thus in practice
we use rational numbers which approximates a target irrational number.
For instance we can take a series of integers $p_{n}$ with
$n=1,2,\ldots$ from the Fibonacci series and construct a series of
rational numbers $p_{n-1}/p_{n}$ which converges to
$f=(3-\sqrt{5})/2\approx0.38196601\ldots$ in the limit $n \to \infty$.
We consider systems with linear size $L=p_{n}$ so that we arrive at the
target irrational number in the thermodynamic limit $L \to \infty$.
Note that it is easy to construct similar Fibonacci-like series for any
irrational numbers which are solutions of some quadratic equations
\cite{Yoshino-Nogawa-Kim-1}.}
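The construction in the footnote can be sketched in a few lines. The recurrence below generates every other Fibonacci number, one possible choice of the series $p_{n}$ which reproduces the approximant $f=21/55$ used later in the paper; the code itself is illustrative:

```python
from math import gcd, sqrt

# Rational approximants p_{n-1}/p_n of f = (3 - sqrt(5))/2 = 0.38196601...
# Every other Fibonacci number obeys the recurrence p_{n+1} = 3*p_n - p_{n-1}.
def approximants(count):
    p = [1, 3]
    while len(p) < count:
        p.append(3 * p[-1] - p[-2])
    return p

p = approximants(12)
f_target = (3 - sqrt(5)) / 2
print(p[:6])                                          # [1, 3, 8, 21, 55, 144]
print(abs(p[-2] / p[-1] - f_target) < 1e-8)           # True
print(all(gcd(a, b) == 1 for a, b in zip(p, p[1:])))  # True: valid fractions
```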
The system exhibits a jamming or frictional transition - called the
Aubry's transition \cite{Aubry-Daeron,Peyrard-Aubry} - which we review in section \ref{sec-aubry}.
For weak enough coupling $\lambda < \lambda_{c}$
the elastic chain is only mildly deformed and it can slide over the
substrate smoothly without energy dissipation - friction-less.
For stronger coupling $\lambda > \lambda_{c}$, the elastic chain becomes
pinned by the substrate and friction emerges. We will find later that
the parameter $\lambda$, which plays a key role in the FK model,
is equivalent to the strength of the anisotropy of the Josephson coupling in the Josephson junction array.
\subsubsection{Phase representation}
It is convenient to introduce a dimension-less ``phase'' variable $\theta_{n}$
defined by $u_{n}=\frac{a}{\pi}[\theta_{n}+2\pi (f-1)n]$ by which
the (dimension-less) Hamiltonian can be rewritten as,
\begin{equation}
H=\sum_{n=1}^{L} \left\{\frac{1}{2}(\theta_{n+1}-\theta_{n}-2\pi)^2
- \lambda\cos (\theta_{n}+2\pi f n)\right\}
\label{eq-FK-phase}
\end{equation}
where $\lambda$ is also made dimension-less by an appropriate rescaling. The
boundary condition is such that $(\theta_{L+1}-\theta_{1})/2\pi=L$ is
fixed.
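A minimal numerical sketch of this phase Hamiltonian (system size, step size and the simple gradient descent are illustrative choices, not taken from the paper) relaxes a chain towards a nearby energy minimum under the fixed-length boundary condition:

```python
import numpy as np

# H = sum_n [ (theta_{n+1} - theta_n - 2 pi)^2 / 2 - lam * cos(theta_n + 2 pi f n) ]
# with the fixed-length boundary condition theta_{L+1} = theta_1 + 2 pi L.

def fk_energy(theta, lam, f):
    L = len(theta)
    n = np.arange(L)
    theta_next = np.append(theta[1:], theta[0] + 2 * np.pi * L)
    elastic = 0.5 * np.sum((theta_next - theta - 2 * np.pi) ** 2)
    substrate = -lam * np.sum(np.cos(theta + 2 * np.pi * f * n))
    return elastic + substrate

def relax(theta, lam, f, steps=2000, eta=0.05):
    """Plain gradient descent towards a nearby energy minimum."""
    L = len(theta)
    n = np.arange(L)
    for _ in range(steps):
        theta_next = np.append(theta[1:], theta[0] + 2 * np.pi * L)
        theta_prev = np.append(theta[-1] - 2 * np.pi * L, theta[:-1])
        grad = (2 * theta - theta_next - theta_prev
                + lam * np.sin(theta + 2 * np.pi * f * n))
        theta = theta - eta * grad
    return theta

L, lam, f = 55, 0.5, 21 / 55        # rational approximant of (3 - sqrt 5)/2
theta0 = 2 * np.pi * np.arange(L)   # undistorted chain, energy 0
theta = relax(theta0, lam, f)
print(fk_energy(theta, lam, f) < fk_energy(theta0, lam, f))  # True
```

The relaxed chain lowers its energy by distorting towards the substrate minima at a small elastic cost, which is the competition behind the Aubry's transition.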
\subsection{Matsukawa-Fukuyama model - a crucial intermediate step}
\subsubsection{Phase model on ladder-lattice}
Matsukawa and Fukuyama considered a two-chain model in the context of
friction \cite{Matsukawa-Fukuyama}. Their idea is to allow the
``substrate'' in the FK model to deform elastically as well, which is
certainly more realistic than the FK model in the context of tribology \cite{Persson}. Following their idea, let us
modify the FK model \eq{eq-FK-phase} and develop a phase model defined
on a two-leg ladder lattice shown in Fig.\ref{fig-fk-jja} b). The hamiltonian is given by,
\begin{equation}
H= \sum_{\vec{e}_{ij}=\vec{e}_{x}} \frac{1}{2}(\theta_{i}-\theta_{j}-2\pi)^{2}
- \lambda \sum_{\vec{e}_{ij}=\vec{e}_{y}}
\cos(\theta_{i}-\theta_{j}-2\pi f n_{i})
\label{eq-2-chain-original}
\end{equation}
To simplify notations we relabeled the sites as $i=1,2,\ldots,N$
whose position in the real space is given by $(n_{i},m_{i})$. In the two-chain model
the index for the column takes values $n=1,2,\ldots,L$ while that for the row (or layer) takes
just two values $m=1,2$.
The sums are taken over nearest neighbour pairs connected by
displacement vector $\vec{e}_{ij}=(n_{i}-n_{j},m_{i}-m_{j})$
which is either equal to $(1,0)$ or $(0,1)$.
\subsubsection{Gauge invariance}
\label{subsubsec-gauge}
An important property of the system is {\it gauge invariance} which we
explain below. Let us rewrite the hamiltonian \eq{eq-2-chain-original} as,
\begin{equation}
H= \sum_{\vec{e}_{ij}=(1,0)} \frac{1}{2}\psi^{2}_{ij}
- \lambda \sum_{\vec{e}_{ij}=(0,1)} \cos(\psi_{ij})
\label{eq-2-chain}
\end{equation}
with the phase difference
\begin{equation}
\psi_{ij} \equiv \theta_{i}-\theta_{j}-A_{ij}.
\label{eq-def-psi}
\end{equation}
Here $A_{ij}$ is an anti-symmetric matrix $A_{ij}=-A_{ji}$ which satisfy,
\begin{equation}
\sum_{\rm plaquette} A_{ij}=2\pi f.
\label{eq-vector-potential}
\end{equation}
The sum $\sum_{\rm plaquette}$ is a directed sum over ``bonds''
along each ``plaquette'' in the anti-clockwise manner.
It is easy to see that the original representation
\eq{eq-2-chain-original} respects the condition
\eq{eq-vector-potential}.
The crucial point is that the phase differences
$\psi_{ij}$ are invariant under gauge transformations;
\begin{eqnarray}
&& \theta_{i} \to\theta_{i}+\delta \theta_{i}\\
&& A_{ij} \to A_{ij}+\delta \theta_{i} -\delta \theta_{j}
\end{eqnarray}
Thus the hamiltonian \eq{eq-2-chain-original} is gauge-invariant.
The condition \eq{eq-vector-potential} itself is also gauge-invariant.
In addition to the gauge invariance, the hamiltonian \eq{eq-2-chain}
is invariant under $A \to -A$ with $\theta_{n} \to -\theta_{n}$.
Furthermore, $f \to f+1$ does not change the problem either.
So we only need to consider $0 < f \leq 1/2$ in the following.
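The gauge invariance of $\psi_{ij}$ is easy to verify numerically. The snippet below (an illustrative check with open boundaries and a Landau-like gauge; all names are our own) applies a random gauge transformation to both $\theta$ and $A$ and confirms that the phase differences are unchanged:

```python
import numpy as np

# Landau-like gauge on a small Lx x Ly lattice: A = 2*pi*f*n on y-bonds of
# column n, A = 0 on x-bonds, so each plaquette sums to 2*pi*f.
rng = np.random.default_rng(0)
Lx = Ly = 5
f = 0.3
theta = rng.uniform(0, 2 * np.pi, size=(Lx, Ly))
A_x = np.zeros((Lx - 1, Ly))                                          # (n,m) -> (n+1,m)
A_y = 2 * np.pi * f * np.arange(Lx)[:, None] * np.ones((Lx, Ly - 1))  # (n,m) -> (n,m+1)

def psi(theta, A_x, A_y):
    """Phase differences psi_ij = theta_i - theta_j - A_ij on x- and y-bonds."""
    return theta[1:, :] - theta[:-1, :] - A_x, theta[:, 1:] - theta[:, :-1] - A_y

# gauge transformation: theta_i -> theta_i + d_i, A_ij -> A_ij + d_i - d_j
d = rng.uniform(0, 2 * np.pi, size=(Lx, Ly))
A_x2 = A_x + d[1:, :] - d[:-1, :]
A_y2 = A_y + d[:, 1:] - d[:, :-1]
px, py = psi(theta, A_x, A_y)
px2, py2 = psi(theta + d, A_x2, A_y2)
print(np.allclose(px, px2) and np.allclose(py, py2))  # True
```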
\subsection{Frustrated Josephson-junction array (JJA) under magnetic field}
\subsubsection{Frustration due to external magnetic field}
The final step is just to 1) increase the number of legs of the ladder to build a 2-dimensional square lattice and 2) replace the intra-layer elastic couplings by sinusoidal couplings (see Fig.~\ref{fig-fk-jja} c)). Then we obtain the Josephson junction array on a square lattice under external magnetic field applied perpendicularly to the array \cite{Tinkam},
\begin{equation}
H= -\sum_{\vec{e}_{ij}=(1,0)} \cos(\psi_{ij})
- \lambda \sum_{\vec{e}_{ij}=(0,1)} \cos(\psi_{ij})
\label{eq-JJA-hamiltonian}
\end{equation}
Here $\theta_{i}$ is identified as the phase of the superconducting order parameter of the $i$-th site (superconducting island). The sinusoidal couplings represent the Josephson couplings between the superconducting islands. Now the potential $A$ is identified as the vector potential due to the external magnetic field $B_{z}$ applied along the $+z$ direction. The parameter $f$ which appears in \eq{eq-vector-potential} is the number density of quantized flux lines $f=a^{2}B_{z}/\phi_{0}$, where $a^{2}$, $B_{z}$ and $\phi_{0}$ are the area of a plaquette, the strength of the magnetic field and the flux quantum, respectively.
Let us emphasize that the two important parameters in the FK model, namely 1) the parameter $\lambda$ and 2) the winding number $f$, are inherited down to the JJA. To conclude, we finally arrived at a Josephson junction array on a square lattice with {\it anisotropic} coupling - with anisotropy $\lambda$ - under external magnetic field $B_{z}=(\phi_{0}/a^{2})f$ - with irrational number density $f$ of fluxes per plaquette. In short, let us call such a system an {\it irrationally frustrated anisotropic} JJA.
Quite interestingly it is actually possible to construct {\it anisotropic} JJA in the laboratory. The strength of the Josephson coupling depends, for instance, on the thickness of the junctions. Saito and Osada \cite{Saito-Osada} created anisotropic JJA with various $\lambda$ by controlling the thickness of the junctions in the lithography process.
It may sound rather strange to consider the {\it anisotropy} seriously
since it usually plays only minor roles. Not surprisingly, previous
studies of irrationally frustrated JJA considered only isotropic systems
$\lambda=1$.\footnote{We note however that Denniston and Tang
\cite{Denniston-Tang-ladder} studied the frustrated JJA on the
ladder-lattice (with $m=1,2$) (see Fig.~\ref{fig-fk-jja} b)) and
considered variation of the inter-leg coupling $\lambda$. Their system
is almost the same as the 2-chain model by Matsukawa and Fukuyama but the elastic intra-chain coupling in \eq{eq-2-chain} is replaced by a
sinusoidal coupling. They found that the Aubry's transition also exists in the
frustrated JJA on the ladder.}
As we discuss later, it turned out in our recent studies that $\lambda$ is actually {\it relevant for irrational} $f$ \cite{Yoshino-Nogawa-Kim-1,Yoshino-Nogawa-Kim-2,Yoshino-Nogawa-Kim-3}. Quite remarkably, the isotropic point $\lambda=1$ turned out to be a critical point at zero temperature corresponding to $\lambda_{c}$ of the FK model where a jamming transition analogous to the Aubry's transition takes place. By symmetry it is obvious that we only need to consider the case $\lambda \geq 1$.
\subsubsection{Vortex - analogue of dislocation}
\label{subsubsec-vortex}
We mentioned above that the parameter $f$ can be regarded as the number
density of quantized flux lines per plaquette. As we explain below, this is
because the vector potential $A$ due to the magnetic field induces
vortexes of the phases $\theta_{i}$. The point is that vortexes are quantized objects like dislocations in crystals.
Here it is convenient to define ``charges'' of the vortexes as,
\begin{equation}
q_{i}=\frac{1}{2\pi}\sum_{\rm plaquette} s(\psi_{ij})=p_{i}-f \qquad
p_{i}=\ldots,-2,-1,0,1,2,\ldots
\end{equation}
where $s(x)$ is a saw-tooth like periodic function with period $2\pi$
and $s(x)=x$ in the range $-\pi < x \leq \pi$. By definition, the charge
$q$ takes only discrete values of the form $p_{i}-f$ with some
integer $p_{i}$ and offset $-f$ as shown above.
Physically the integer $p_{i}$ represents the number of quantized
fluxes (each carrying a flux quantum $\phi_{0}$) threading the $i$-th plaquette.
Note also that the charge $q$ is gauge invariant.
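A small numerical experiment illustrates the quantization of the charges (lattice size, gauge and sign conventions below are illustrative): for an arbitrary phase configuration, every plaquette charge is of the form $q=p-f$ with integer $p$.

```python
import numpy as np

# s(x) is the 2*pi-periodic sawtooth with s(x) = x on (-pi, pi].
def s(x):
    return np.angle(np.exp(1j * x))

rng = np.random.default_rng(1)
Lx, Ly, f = 6, 6, 0.3
theta = rng.uniform(0, 2 * np.pi, size=(Lx, Ly))
n = np.arange(Lx)[:, None]

# Landau-like gauge: A = 2*pi*f*n on y-bonds of column n, 0 on x-bonds.
psi_x = theta[1:, :] - theta[:-1, :]                       # bond (n,m) -> (n+1,m)
psi_y = theta[:, 1:] - theta[:, :-1] - 2 * np.pi * f * n   # bond (n,m) -> (n,m+1)

# Anticlockwise plaquette sum of s(psi): bottom, right, top and left bonds,
# the latter two traversed against their orientation (hence the minus signs).
q = (s(psi_x[:, :-1]) + s(psi_y[1:, :])
     - s(psi_x[:, 1:]) - s(psi_y[:-1, :])) / (2 * np.pi)
p = q + f
print(np.allclose(p, np.round(p)))  # True: charges are quantized as q = p - f
```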
The usefulness of the charge becomes manifest in the so called
coulomb-gas mapping (see Chap. 9 of \citen{Chaikin-Lubensky}) in which continuous, elastic deformations (``spin-wave'')
are integrated out to find effective hamiltonian ${\cal H}$ of the vortexes.
The resultant system is essentially equivalent to a lattice-gas of electrostatic
charges interacting with each other by the repulsive coulomb interactions,
\begin{equation}
{\cal H}= \sum_{i \neq j} q_{i}G(\vec{r}_{ij})q_{j} +G({\vec 0})\sum_{i} q^{2}_{i}
\label{eq-coulomb-gass}
\end{equation}
with ${\vec r}_{ij}=(n_{i}-n_{j},m_{i}-m_{j})$.
The interaction potential $G({\vec r})$ is the (static) Green's function
of elastic deformations (spin-wave). In 2-dimension, it scales as
$G({\vec r}) \propto \log(|{\vec r}|)$ for $r \gg 1$.
Note that the anisotropy $\lambda$ in \eq{eq-JJA-hamiltonian} is simply reflected in
the anisotropy of $G({\vec r})$, which is stronger in the
$y$-direction ${\vec r}\,||\,(0,1)$ than in the $x$-direction ${\vec r}\,||\,
(1,0)$ by a factor $\lambda\ (\geq 1)$.
The value $G({\vec 0})(>0)$ can be interpreted as the {\it core energy}
of the vortexes.
Because of the core energy, states with higher values of the vortex
charges generally have larger energies and can be neglected at low
temperatures. Since we only need to
consider $0 < f \leq 1/2$ as noted in sec \ref{subsubsec-gauge},
it is sufficient to consider two values of the charges $q=-f,1-f$.
In addition we assume the charge neutrality $\sum_{i}q_{i}=0$ holds,
which can be enforced by applying the periodic boundary conditions.
As a result we find that a fraction
$f$ of the plaquettes carries a vortex $p=1$ (or $q=1-f$)
and the other fraction $1-f$ carries no vortex $p=0$ ( or $q=-f$).
In Fig.~\ref{fig-fk-jja}, the boxes in the plaquette represent the
vortexes ($p=1$).
\subsubsection{Vortex patterns in equilibrium - vortex liquid, crystal and glass}
Let us sketch briefly possible patterns of vortexes in equilibrium states
at low temperatures. For clarity we discuss three cases 1) $f=0$ 2) $f$
is rational and 3) $f$ is irrational.
If $f=0$, the ground state of the system is trivial: the phase becomes
uniformly ordered $\theta_{i}={\rm constant}$ for all sites $i$.
In such a ground state the vortex is absent everywhere $p_{i}=0$ ($q_{i}=0$).
It can be regarded as a crystalline state (or ferromagnetic state). At
finite temperatures, pairs of vortex ($p=1$) and anti-vortex ($p=-1$)
will be created leading to melting of the crystalline state by
proliferation of the vortexes (and anti-vortexes) at some
critical temperature $T_{\rm c}$. In 2-dimension, it takes place
in a special way known as the Kosterlitz-Thouless transition \cite{KT}.
If $f$ is {\it rational}, i.~e. $f=p/q$
with some integers $p$ and $q$, the system will have a periodic {\it
vortex lattice} \cite{ground-states-isotropic}, which is analogous to
periodically ordered structure of dislocations in the so called Frank-Kasper phase
\cite{Frank-Kasper}. For example with $f=1/2$, the charges exhibit a checkerboard like order in which
the sign of the charges alternates along $x$ and $y$-axis
as $q=1/2,-1/2,1/2,-1/2,\ldots$ (or $p=1,0,1,0,\ldots$). We also note
that the 'half-vortexes' which appear in the case of $f=1/2$ are
identical to the so called {\it chirality} in frustrated magnets
\cite{Villain,TJ83-FFXY}.
In bulk superconductors formation of the vortex lattice is well
known. The latter is a triangular lattice called the Abrikosov lattice
\cite{Tinkam}. On the other hand, the vortex lattices in JJA are formed on top of the
underlying square lattice so that it is a
{\it super}-lattice. Thus the vortex lattices in JJA are usually {\it pinned}
by the underlying lattice of the JJA while those in the bulk
{\it pure} superconductors are free to move around unless some pinning
centers are present \cite{FFH,collective-pinning-vortex-glass}.
Starting from the FK model we are naturally led to consider {\it irrational} $f$.
Apparently the system cannot develop simple periodic
vortex lattices with irrational $f$ so that
finding the ground state becomes a highly
non-trivial problem. Indeed JJA with irrational $f$ - {\it irrationally
frustrated JJA} - has been
regarded as a system which possibly exhibits a glassy phase since a
seminal work by T. Halsey \cite{Halsey}. This is a quite intriguing
possibility since it means emergence of a glassy phase with frustration
but {\it without quenched disorder} - at variance with the conventional
spin-glasses and vortex-glasses (superconductors with random pinning
centers) which involve quenched disorder
\cite{FFH,collective-pinning-vortex-glass}. Disorder may be somehow self-generated in this system. Indeed, equilibrium relaxations of the irrationally frustrated JJA were found to be similar to the primary relaxation observed in typical fragile supercooled liquids \cite{JJA-relaxation}.
\section{Low lying states and Aubry's transition}
\label{sec-aubry}
\subsection{Hull function of the FK model}
\begin{figure}[b]
\begin{center}
\includegraphics[width=0.5\textwidth]{fig-hull-function-FK.eps}
\end{center}
\caption{(Color online) Hull function $G(x)$ of the FK model.
Here $\theta_{n}-2\pi n$ of the ground states
are plotted against the ``folded
coordinate'' $[n]=fn-{\rm int}(fn)$ to elucidate the hull function.
$\lambda_{c}=0.9716\ldots$ \cite{Peyrard-Aubry}.
}
\label{fig-hull-FK}
\end{figure}
Now let us turn to review the Aubry's transition found in the FK model\cite{FK-review,Aubry-Daeron,Peyrard-Aubry,Zhirov}
and related friction models including the MF model \cite{Matsukawa-Fukuyama,kawaguchi-matsukawa}. A remarkable feature of the FK model is that
mathematically rigorous analysis of the low lying states is possible, based on the fact that the configurations of the energy minima (and
maxima) of the system satisfy a recursion relation which is identical
to the so called standard map, well known in dynamical systems.
It is known rigorously that the ground state of the FK model can be expressed as\cite{Aubry-Daeron,Peyrard-Aubry},
\begin{equation}
\theta_{n}=2\pi n + G( f n + \alpha)
\label{eq-hull-FK}
\end{equation}
where $G$ is a periodic function with periodicity $1$, i.e. $G(x+1)=G(x)$
for any $x$. The function $G(x)$ is called 'hull function' and
describes distortion of the configuration of the elastic chain due to
the substrate potential. It is important to note that the entire region
$0 < x \leq 1$ becomes densely and uniformly populated in the thermodynamic limit $L \to \infty$
for {\it irrational} $f$.
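For the rational approximants $f=p/q$ used in practice, this uniform filling has a simple discrete counterpart: the folded coordinates $fn \bmod 1$, $n=0,\ldots,q-1$, visit every multiple of $1/q$ exactly once. A quick check (illustrative code):

```python
import numpy as np
from math import gcd

# For f = p/q with gcd(p, q) = 1, the folded coordinates [n] = f*n mod 1
# hit each of the q values 0, 1/q, ..., (q-1)/q exactly once per period.
p, q = 21, 55                 # Fibonacci-like approximant of (3 - sqrt 5)/2
assert gcd(p, q) == 1
folded = np.sort((p * np.arange(q)) % q) / q
print(np.array_equal(folded, np.arange(q) / q))  # True
```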
Quite remarkably the phase $\alpha$ is arbitrary, meaning that there is
a manifold of ground states which have exactly the same energy.
Moreover it is known rigorously that there is a ``phase
transition'' for irrational $f$, called the 'transition by breaking of
analyticity' (or Aubry's transition), at a critical strength of coupling
$\lambda_{c}(f)$. The basic features of the transition are summarized in Table \ref{table-aubry}.
In Fig.~\ref{fig-hull-FK} we show the hull function of the FK model
constructed from numerically generated ground states at various
$\lambda$ (see Ref \cite{Peyrard-Aubry} for the method).
\begin{table}[t]
\begin{tabular}{c||c|c}
& $G(x)$ & (meta)stable states\\ \hline
$\lambda < \lambda_{c}(f)$ & analytic & only the ground state
\\
$\lambda > \lambda_{c}(f)$ & non-analytic,
with infinitely many discontinuities & infinitely many metastable states\cite{Zhirov}
\end{tabular}
\label{table-aubry}
\caption{Changes of the low lying states by the Aubry's transition}
\end{table}
From a physical point of view, a significant consequence of the Aubry's
transition is the ``frictional transition'' between the sliding phase and jamming (pinned) phase \cite{Peyrard-Aubry}. Let us sketch the essence of the
reasoning in the following.
For $\lambda < \lambda_{c}$, starting from a ground state, one can find
a continuum of states with exactly the same energy by varying
$\alpha$. The point is that they are all related to each other by some
continuous displacements of the particles in the real space.
This comes
from the fact that $G(x)$ is analytic for $\lambda < \lambda_{c}$. Thus
no external force is needed to {\it slide} the whole system - {\it
friction-less} or {\it sliding}.
Existence of the sliding becomes trivial if the elastic
chain itself is replaced by a rigid body. In such an extreme case of
friction between two incommensurate rigid bodies, the forces between them
oscillate in space with an incommensurate
period so that the net force cancels out.
The non-trivial point is
that similar cancellation of the forces still happens even if the chain
is allowed to deform elastically as long as the coupling
$\lambda$ is sufficiently small.
For $\lambda > \lambda_{c}$, discontinuous points appear in the hull function $G(x)$. It means that variation of $\alpha$ requires discontinuous movements of the particles in the real space.
Thus the ground states are no longer connected to each other by sliding:
the system prepared in the ground state has to go over some higher
energy states (thus energy barriers) to reach another ground state. Thus the system is
jammed. Now some finite strength of external force greater than a
certain frictional force (yield stress) $f_{\rm yield}(\lambda) \propto
(\lambda-\lambda_{c})^{\beta}$ must be applied to the system to let it
move (de-pinning) \cite{Peyrard-Aubry}.
Zhirov et al. \cite{Zhirov,FK-review} have been able to find a hierarchy of
an exponentially large number of low lying states on top of the ground
state in the jammed phase $\lambda > \lambda_{\rm c}$.
This is a very interesting observation from the view point
of the physics of glasses. However, unfortunately the FK model is a
one-dimensional system so that the Aubry's transition disappears at finite temperatures.
The frictional transition and the emergence of discontinuities in the hull function have also been found in the Matsukawa-Fukuyama 2-chain model \cite{kawaguchi-matsukawa}.
Now it is very natural to expect that these features will be inherited
down to our JJA on the square lattice under magnetic field. The main
message that we find here is that we should vary the anisotropy
$\lambda$ and see what happens in the low lying states.
\subsection{Low lying states of the anisotropic JJA}
Let us now turn to the {\it anisotropic} irrationally frustrated JJA with
$\lambda > 1$.
Examples of the real space configurations of the vortexes
in equilibrium at a low temperature are shown in Fig.~\ref{fig-stripe-jja}.
The most prominent feature is the stripe pattern
of the vortexes, which are {\it regularly stacked} in the $y$-direction (stronger
coupling) and {\it undulated} along the $x$-direction (weaker
coupling)
\cite{Yoshino-Nogawa-Kim-1,Yoshino-Nogawa-Kim-2,Yoshino-Nogawa-Kim-3}. The
formation of the stripes is reasonable because the repulsive
interactions between vortexes are anisotropic if $\lambda \neq 1$, as we
noted in sec \ref{subsubsec-vortex}.
There are two important observations.
First, the undulated stripe pattern is {\it frozen in time}, i.~e. the
ergodicity is broken. The pattern of the undulation cannot evolve dynamically
by usual relaxational dynamics once such a structure is established.
This is simply because the stripes are {\it perfectly stacked} in the
$y$-direction in a belt. At first sight,
the stripe patterns may look similar to those found,
for example, in liquid crystals. But they are very different
because usual stripes fluctuate dynamically \cite{Chaikin-Lubensky}.
Second, there is a family of low lying states with
different patterns of the transverse
undulation as shown in Fig.~\ref{fig-stripe-jja}.
Apparently the ground state should have
no transverse undulation. Very interestingly, the energies of the
different patterns of the undulation shown in Fig.~\ref{fig-stripe-jja}
are very close to each other, suggesting a gap-less band of undulated states.
Thus these undulated states are all relevant in the equilibrium ensemble.
This is manifested in the structure factor of the vortexes, which
exhibits Bragg peaks in the $q_{y}$ direction but a power law tail in the
$q_{x}$ direction \cite{Yoshino-Nogawa-Kim-2,Yoshino-Nogawa-Kim-3}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.5\textwidth]{fig-stripe-jja.eps}
\end{center}
\caption{(Color online) Undulated vortexes in
the anisotropic irrationally frustrated JJA. a) Examples of real space
patterns of vortexes in thermal equilibrium at a low temperature and b)
in nearby energy minima. The thermalized configurations shown in
a) are obtained by performing Monte Carlo (MC) simulations at $T=0.2$
on a system with $f=21/55$ and $\lambda=1.5$. The equilibration
is extremely hard in this system, so we used a MC method which
combines the Metropolis method, the over-relaxation method and the exchange MC
method to ensure equilibration \cite{MC-method}.
The filled squares represent plaquettes with vortexes with charge $p=1$.
The configurations in b) are obtained by minimizing the energy by
simple energy descent algorithm starting from the thermalized
configurations shown in a). The energies of the energy minima of the configurations 1)-3) are $E=-5072.14311, -5072.38582, -5072.34445$ respectively.
}
\label{fig-stripe-jja}
\end{figure}
This is a very peculiar state of matter. Is this a glass? ``No'', in
the sense that it has Bragg peaks which one would not expect for a
glass. ``Yes'', in the sense that there are many states with different
patterns of undulation, which is a self-induced disorder, and they are
separated by energy barriers.
In a sense, the prediction by Halsey \cite{Halsey} - that a superconducting glass (without quenched disorder) emerges in the JJA with {\it irrational} $f$ - is realized. However, we must keep in mind that here we are considering the anisotropic JJA with $\lambda > 1$ instead of the isotropic JJA ($\lambda=1$) studied in most of the previous works.
Now let us examine the low lying states more closely.
In the analysis of the ground state of the FK model, the hull function
\eq{eq-hull-FK} played a central role as we noted before. Since the JJA can be regarded as
a 2-dimensional version of the FK model, we are naturally led to look for a similar function which may describe the low lying states of the JJA in a
compact way.
Because of the gauge invariance, let us focus
on the gauge-invariant phase differences across the Josephson couplings,
$\psi_{ij}=\theta_{i}-\theta_{j}-A_{ij}$, defined in \eq{eq-def-psi},
where $i$ and $j$ are nearest neighbours across a Josephson
coupling which may be either along the $x$ or $y$-axis.
As we discuss later, $\psi_{ij}$ is directly related to the Josephson
current $\sin(\psi_{ij})$, which is the analogue of the {\it stress field}
in rheology.
In Fig.~\ref{fig-hull-jja} we display the phase differences $\psi_{ij}$
at various sites $i=(n,m)$ plotted against ``folded coordinates''
$[n]=fn-{\rm int}(fn)$ and $[m]=fm-{\rm int}(fm)$, which take values in the ranges $0 < [n] \leq 1$ and $0 < [m] \leq 1$.
The purpose of this plot is to elucidate the hull function
analogously to the case of the FK model shown in Fig.~\ref{fig-hull-FK}.
Quite remarkably, the plots in the panels c) and d) strongly suggest that
there is indeed an analytic hull function of the folded coordinate along
the direction of {\it stronger coupling}. On the other hand, the panels
a) and b) suggest that there are no such analytic hull functions along the
direction of {\it weaker coupling}.
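As a concrete illustration (our own sketch, not part of the original analysis), the folded coordinates used in these plots are straightforward to compute; the value $f=21/55$ below matches the simulation parameters quoted earlier.

```python
from fractions import Fraction

def folded(k, f=Fraction(21, 55)):
    """Folded coordinate [k] = f*k - int(f*k), mapped into (0, 1]
    as in the hull-function plots (0 is identified with 1)."""
    x = f * k - int(f * k)
    return x if x > 0 else Fraction(1, 1)

# For f = 21/55 and a system of size L = 55 the folded coordinates of the
# 55 columns are all distinct: gcd(21, 55) = 1, so 21*n mod 55 visits
# every residue exactly once.
coords = sorted(float(folded(n)) for n in range(1, 56))
```

Plotting $\psi_{ij}$ against these folded coordinates along a column then reveals (or fails to reveal) an analytic hull function, as in Fig.~\ref{fig-hull-jja}.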
Recently we found that it is possible to obtain the
hull functions analytically by performing a $1/\lambda$ expansion starting from
the $\lambda=\infty$ limit \cite{Yoshino-Nogawa-Kim-3}.
It turned out that the transverse undulation is encoded in the
``phase differences'' between different columns, which one can see
in the panels c) and d).
The existence (absence) of analytic hull functions along stronger
(weaker) couplings immediately implies sliding (jamming) of the vortexes.
Starting from an energy minimum, a family of different states with exactly
the same energy can be obtained through the operation $[m]\to
[m+\alpha]$ along the direction of stronger coupling with varying phase shift parameter $\alpha$. This amounts to a unidirectional motion of the undulated vortex stripes in the
direction of stronger coupling without changing their pattern, i.~e. {\it sliding}.
In contrast, no such operation is possible along the direction of weaker
coupling, i.~e. {\it jamming}. In the next section we discuss how these
properties are reflected in physical observables associated with shear.
The above observation implies that the symmetric system with $\lambda=1$,
to which most of the previous works have been dedicated, is actually
very special. As we discuss later, the critical point
corresponding to the Aubry transition point is actually $\lambda_{c}=1$
in the JJA at zero temperature $T=0$.
\begin{figure}[h]
\begin{center}
\includegraphics[width=\textwidth]{fig-hull-jja.eps}
\end{center}
\caption{(Color online) Spatial configuration of gauge invariant phases in an energy minimum.
Phase differences $\psi_{i,i+e_{x}}$ and $\psi_{i,i+e_{y}}$ across
Josephson junctions parallel to $x$ and $y$ axes at various sites
$i=(n,m)$ are plotted against ``folded coordinates''.
Different symbols in the panels a) and b)
represent the phase differences at $n=1,2,\ldots,L$
along three arbitrarily chosen ``rows'' of the JJA.
In the panels c) and d),
the phase differences at $m=1,2,\ldots,L$
along three arbitrarily chosen ``columns'' are shown.
The system size is $L=55$.
}
\label{fig-hull-jja}
\end{figure}
\section{Response to shear - shear by external electric current}
\label{sec-shear}
Jamming is nothing but the onset of rigidity, which can be detected by the response against shear. In general shear is induced in the system through the boundaries. In rheology one can consider applying a constant external shear-stress $\sigma_{\rm ext}$ on the boundaries of the system under study. Very interestingly, this is equivalent to injecting an external electric current $I_{\rm ext}$ into a Josephson junction array, as we explain in sec \ref{subsec-dynamic-shear-response}. We will find that the stress $\sigma$ and shear rate $\dot{\gamma}$ in rheology correspond precisely to the current $I$ (or current density $J$) and voltage $V$ (or electric field $E$) in the transport problem of the driven JJA.
To study rheology (or transport) one can control either the
external stress $\sigma$ or the shear rate $\dot{\gamma}$. Here we choose to control
the strain $\gamma$ so that we can discuss static and dynamic responses to shear in the same setup.
We put shear across the Josephson junction array along, say, the $y$-axis
in the following manner. First we fix the configurations of the phase
variables $\theta_{i}$ on the bottom ($m=1$) and top ($m=L$) layers.
Second we slightly change the boundary such that a uniform displacement
is imposed on the top layer ($m=L$),
$\theta_{i} \to \theta_{i}+(L-1)\gamma$,
while the bottom layer ($m=1$) is left in the
same fixed configuration. This amounts to inducing a gradient of phase
$d\theta/dy=\gamma$ along the $y$-axis. Clearly $\gamma$ corresponds to shear-strain in the usual sense.
As a result some internal stress $\sigma$, which is the super-current
running across the Josephson junctions (see below), will be
induced in the system. To study rheology, we drive the top wall with a
constant speed so that the strain $\gamma$ increases with a constant
shear-rate $\dot{\gamma}$.
Let us also remark that shear $d \theta/d y=\gamma$ on the phases
along the $y$-axis amounts to motion of the vortexes (dislocations)
in the orthogonal direction, i.e. along the $x$-axis. This is equivalent to saying that the vortexes are driven by the Lorentz force \cite{Tinkam}.
\subsection{Static response to shear - static rigidity}
\label{subsec-static-shear-response}
From a static point of view, the emergence of rigidity can
be best quantified by the shear-modulus.
The free-energy of the system $F(\gamma)$
can be formally expanded in a power series of {\it infinitesimal}
shear strain $\gamma$ as
\begin{equation}
F(\gamma)=F(0)+N \langle \sigma \rangle \gamma +
\frac{N}{2}\mu \gamma^{2} + \ldots
\end{equation}
where $\sigma$ and $\mu$ are the shear-stress and shear-modulus
respectively. $\langle \ldots \rangle$ stands for a thermal average.
Here $\gamma$ must be
infinitesimal. The free-energy density $F(\gamma)/N$, in the thermodynamic sense, must not
depend on the boundary condition (including the shape of the container), so that the shear-modulus must be zero in
the thermodynamic sense {\it even in solids}. Thus when the shear-modulus $\mu$
defined by the fluctuation formula \eq{eq-shear-modulus} emerges,
it means that the ordering of the $\gamma \to 0$ limit and the
thermodynamic limit $N \to \infty$ no longer commute, in sharp contrast to liquids.
In turn this means linear elasticity {\it must} fail in solids. Physically
this means that elasticity and {\it plasticity} must emerge
simultaneously in solids \cite{YM2010}.
It is useful to note that the change of the boundary condition can be formally ``absorbed'' into the bulk part of the system by replacing the original Hamiltonian \eq{eq-JJA-hamiltonian} by,
\begin{equation}
H(\gamma)= -\sum_{\vec{e}_{ij}=(1,0)} \cos(\psi_{ij})
- \lambda \sum_{\vec{e}_{ij}=(0,1)} \cos(\psi_{ij}+\gamma).
\end{equation}
Based on this observation we find that the stress $\sigma$ can be expressed as,
\begin{equation}
N \sigma=\frac{\partial H(\gamma)}{\partial \gamma}
=\lambda \sum_{\vec{e}_{ij}=(0,1)} \sin(\psi_{ij}).
\label{eq-shear-stress}
\end{equation}
Similarly the shear-modulus $\mu$ can be expressed as,
\begin{equation}
\mu=b-\beta \left [ \langle \sigma^{2} \rangle - \langle \sigma \rangle^{2}
\right]
\label{eq-shear-modulus}
\end{equation}
where $b$ is {\it instantaneous} or adiabatic shear-modulus (``Born term'')
defined as,
\begin{equation}
N b=\frac{\partial^{2} H(\gamma)}{\partial \gamma^{2}}
=\lambda \sum_{\vec{e}_{ij}=(0,1)} \cos(\psi_{ij}).
\label{eq-born}
\end{equation}
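As a minimal numerical sketch of our own (not from the original text), the stress \eq{eq-shear-stress}, the Born term \eq{eq-born} and the fluctuation formula \eq{eq-shear-modulus} can be evaluated as follows; the array `psi_y` collects the gauge-invariant phase differences across the bonds parallel to the $y$-axis, and the thermal sampling of $\sigma$ and $b$ is left schematic.

```python
import numpy as np

def stress_born(psi_y, lam):
    """N*sigma (eq-shear-stress) and N*b (eq-born) from the gauge-invariant
    phase differences psi_y across the bonds parallel to the y-axis."""
    return lam * np.sin(psi_y).sum(), lam * np.cos(psi_y).sum()

def shear_modulus(sigma_samples, born_samples, beta):
    """Fluctuation formula (eq-shear-modulus):
    mu = <b> - beta * ( <sigma^2> - <sigma>^2 )."""
    s = np.asarray(sigma_samples, dtype=float)
    return np.mean(born_samples) - beta * (np.mean(s**2) - np.mean(s)**2)

# sanity check: an unstrained uniform state (psi = 0 on every bond) carries
# no stress, and its Born term is lambda per bond
N_sigma, N_b = stress_born(np.zeros(100), lam=1.5)
```

With thermal samples of $\sigma$ and $b$ from a Monte Carlo run, `shear_modulus` directly implements the cancellation test discussed below: in a liquid the fluctuation term eats the Born term and $\mu$ vanishes.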
Fluctuation formulae for elastic moduli like \eq{eq-shear-modulus}
are well known in the literature \cite{squire}. In the context of XY models
and superconductors it is usually called the helicity modulus
\cite{Chaikin-Lubensky}. The crucial term is the 2nd term, which
represents the reduction of the shear-modulus due to thermal fluctuations of
the stress $\sigma$.
In liquids, the two limits $\gamma \to 0$ and $N \to \infty$ should commute.
Then the identity $\mu=0$ must hold, meaning that exact cancellation must take place
between the Born term and the fluctuation term in \eq{eq-shear-modulus}.
In the previous section we found that the anisotropic irrationally frustrated
JJA exhibits sliding/jamming in the low lying states such that the vortexes
can slide freely along the stronger coupling but are jammed along the weaker
coupling. In turn this means that shear of the phases along the
stronger/weaker coupling causes finite/zero changes of the energy
respectively. Consequently the shear-modulus $\mu$ must be finite/zero
along the stronger/weaker coupling at zero temperature $T=0$. Indeed we observed this numerically \cite{Yoshino-Nogawa-Kim-1}.
This is a quite intriguing situation - the anisotropic system $\lambda
\neq 1$ at zero temperature $T=0$ behaves either as a solid or a liquid
depending on the axis along which one imposes the shear. From numerical
observations it seems that the picture holds up to the symmetric point
$\lambda=1$ \cite{Yoshino-Nogawa-Kim-1}, suggesting that the symmetric point
is actually the critical point $\lambda_{c}=1$ where the shear-modulus
along a given axis becomes zero/finite.
\subsection{Dynamic response to shear - transport or rheology}
\label{subsec-dynamic-shear-response}
The shear-stress $\sigma$ defined in \eq{eq-shear-stress} is nothing but the
super-current flowing along the $y$-direction in the Josephson junction array. More precisely, according to the DC/AC Josephson relations \cite{Tinkam}, the current $I_{ij}$ and voltage drop $V_{i}-V_{j}$ across the junction are given by
\begin{equation} I_{ij}=\sin(\psi_{ij}) \qquad V_{i}-V_{j}=\frac{d\psi_{ij}}{dt}
\label{eq-dc-ac-Josephson-effect}
\end{equation}
Here we are assuming some appropriate rescalings to define the
dimension-less quantities $I_{ij}$ and $V_{i}$.
At each site $i$ (super-conducting island) the current must be conserved.
Taking into account the charging of the islands and Ohmic energy dissipation, we find
\begin{equation}
C\frac{dV_{i}}{dt}+\sum_{j}\frac{V_{i}-V_{j}}{R}+\sum_{j}I_{ij}=I_{\rm ext}(\delta_{m_{i},L}-\delta_{m_{i},1})
\end{equation}
where the sums are taken over nearest-neighbours.
$C$ and $R$ are the capacitance of the islands and the resistance of the
junctions respectively.
$I_{\rm ext}$ is the strength of the external current, which is injected from
the top layer $m=L$ and extracted from the bottom layer $m=1$. Combining
with the Josephson relations \eq{eq-dc-ac-Josephson-effect} and the
definition of the gauge-invariant phase difference $\psi_{ij}$ given in
\eq{eq-def-psi}, one easily finds an effective equation of motion of the phases $\theta_{i}$, which is called the RCSJ (Resistively and Capacitively Shunted Junction) model \cite{Tinkam}.
It can be cast into the form of Newton's equation of motion,
\begin{equation}
\frac{d\theta_{i}}{dt}=v_{i} \qquad
m \frac{d v_{i}}{dt}+\frac{\partial H}{\partial \theta_{i}}
+ \eta \sum_{j}(v_{i}-v_{j})=F_{\rm ext}(\delta_{m_{i},L}-\delta_{m_{i},1}).
\label{eq-motion}
\end{equation}
which can be considered a toy model for rheology of layered systems under external shear applied on the top and bottom walls \cite{Yoshino}.
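For illustration, here is a single explicit Euler step of \eq{eq-motion} for a simple 1d chain of phases (a toy sketch of ours, with nearest-neighbour coupling $-\cos(\theta_{i+1}-\theta_{i})$ and open ends, not the full 2d JJA).

```python
import numpy as np

def rcsj_step(theta, v, m, eta, f_ext, dt):
    """One Euler step of m*dv/dt + dH/dtheta + eta*sum_j(v_i - v_j) = F_ext,
    with H = -sum_i cos(theta_{i+1} - theta_i) on an open chain, driving
    the two end sites with opposite forces as in eq. (eq-motion)."""
    d = theta[1:] - theta[:-1]
    grad = np.zeros_like(theta)          # dH/dtheta_i
    grad[:-1] -= np.sin(d)               # bond (i, i+1) acting on site i
    grad[1:] += np.sin(d)                # ... and on site i+1
    damp = np.zeros_like(theta)          # eta * sum over neighbours of (v_i - v_j)
    damp[:-1] += eta * (v[:-1] - v[1:])
    damp[1:] += eta * (v[1:] - v[:-1])
    ext = np.zeros_like(theta)
    ext[-1], ext[0] = f_ext, -f_ext      # inject at the top, extract at the bottom
    return theta + dt * v, v + dt * (ext - grad - damp) / m

# a uniform chain at rest with no drive stays at rest
theta, v = rcsj_step(np.ones(8), np.zeros(8), m=1.0, eta=0.5, f_ext=0.0, dt=0.01)
```

The dissipative coupling $\eta\sum_j(v_i-v_j)$ is what distinguishes the RCSJ dynamics from plain Newtonian dynamics; in the overdamped limit $m\to 0$ the same right-hand side drives a first-order relaxational equation.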
From the above observations, it is clear that transport properties in the
JJA and rheology are quite analogous. Because of the shear, the velocity
field $d \theta_{i}/dt$ will acquire a slope along the $y$-axis, which
can be identified with the shear rate $\dot{\gamma}$. From the AC
Josephson relation (the 2nd equation of \eq{eq-dc-ac-Josephson-effect}),
we find that this amounts to a constant electric field $E$ along the $y$-axis.
To summarize, the shear-stress $\sigma$ and shear-rate $\dot{\gamma}$ in
rheology correspond to the electric current $I$ (or current density $J$) and the voltage drop $V$ across
the system (or electric field $E$) in the transport problem of the JJA. Thus
the so called ``flow curves'' in rheology correspond to the current-voltage $IV$ (or $JE$) characteristics of the JJA. In {\it tribology} we need to
consider only two layers $m=1,2$, as in the Matsukawa-Fukuyama 2-chain
model \cite{Matsukawa-Fukuyama}. The yield stress $\sigma_{\rm c}$ is called the static frictional force.
These problems have been studied extensively in the
corresponding research communities but somehow the intimate analogy has not
been appreciated \cite{Yoshino}.
\subsection{Non-linear rheology and transport}
Let us discuss here some basic phenomenological aspects of the non-linear
rheology and the non-linear transport associated with a 2nd order
phase transition, including the jamming transition.
To be specific we will denote by
$\lambda-\lambda_{c}$ the distance to the critical point, which is
natural in the context of the anisotropic JJA at zero
temperature. However the reader can easily translate the discussion to
different situations by replacing $\lambda-\lambda_{c}$ by the distance to the
critical temperature $T-T_{c}$ or to the jamming density $\phi-\phi_{\rm J}$,
etc. depending on the problem at hand.
Let us assume the following generic scaling form:
\begin{equation}
\sigma=|\lambda-\lambda_{c}|^{\beta}\tilde{\sigma}_{\pm}\left(\frac{\dot{\gamma}
}{|\lambda-\lambda_{c}|^{\Delta}}\right)
\label{eq-scaling}
\end{equation}
where $\beta$ and $\Delta$ are critical exponents
and the subscript $\pm$ stands for $\lambda > \lambda_{c}$ and $\lambda
< \lambda_{c}$ respectively. Physically we expect the following
behaviours: 1) Newtonian behaviour in the ``sliding phase'' ($\lambda <
\lambda_{c}$),
2) Finite yield stress in the ``jammed phase'' ($\lambda > \lambda_{c}$) and
3) The explicit $\lambda$ dependence must disappear at the critical point ($J$-point) $\lambda=\lambda_{c}$. Based on these intuitions, let us conjecture
the following asymptotic behaviours of the
scaling function $\tilde{\sigma}_{\pm}(y)$:
\begin{equation}
\tilde{\sigma}_{\pm}(y) = \left \{
\begin{array}{ll}
\left \{
\begin{array}{cl}
y & \lambda < \lambda_{c}\\
\tilde{\sigma}_{-}(0) & \lambda > \lambda_{c}
\end{array} \right. & y \ll 1 \\
& \\
c y^{\beta/\Delta} & y \gg 1
\end{array} \right.
\end{equation}
where $\tilde{\sigma}_{-}(0)$ and $c$ are some constants.
Consequently the scaling ansatz predicts the following asymptotic behaviours ($\dot{\gamma}\to 0$),
\begin{equation}
\lim_{\dot{\gamma} \to 0} \sigma = \left \{
\begin{array}{lll}
\eta(\lambda) \dot{\gamma} &
\eta(\lambda)\propto (\lambda_{c}-\lambda)^{-(\Delta-\beta)} & \lambda < \lambda_{c}\\
c\; \dot{\gamma}^{\beta/\Delta} & & \lambda=\lambda_{c} \\
\sigma_{c}(\lambda) & \sigma_{c}(\lambda) = \tilde{\sigma}_{-}(0)(\lambda-\lambda_{c})^{\beta} & \lambda > \lambda_{c}
\end{array} \right.
\end{equation}
Most importantly, the power law fluid behaviour $\sigma \propto \dot{\gamma}^{\beta/\Delta}$ at the critical point is predicted. Usually $\beta/\Delta < 1$, which is called {\it shear-thinning} behaviour.
For the transport problems in superconductors including the JJA, one just
needs to replace the shear-stress $\sigma$ by the electric current density $J$ and the shear-rate $\dot{\gamma}$ by the electric field $E$. The Newtonian law corresponds to the Ohmic law
$J=\sigma E$ with the linear conductivity $\sigma$\footnote{It should not be confused with the stress $\sigma$.} and the yield stress $\sigma_{c}$ corresponds to the critical current $J_{c}$.
Recently the non-linear rheology of granular systems has been found to obey
this type of scaling around the J-point
\cite{Otsuki-Sasa,Hatano-Otsuki-Sasa,Olsson-Teitel,Hatano,Otsuki-Hayakawa}. In
granular systems Bagnold's scaling must replace the Newtonian law in the unjammed phase. At least formally, the above argument can be easily modified to account for it.
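To make the scaling ansatz \eq{eq-scaling} concrete, here is a self-contained numerical check (our own illustration, not a simulation of the JJA): we build synthetic flow curves from a scaling function with the conjectured asymptotics and verify that curves for different $\lambda$ collapse when rescaled. The interpolating forms of $\tilde{\sigma}_{\pm}$ are an assumption chosen only to reproduce the stated limits.

```python
import numpy as np

beta, Delta, lam_c = 1.19, 3.5, 1.0   # JJA values quoted for the flow-curve collapse

def sigma(gdot, lam):
    """Synthetic flow curve obeying the scaling form exactly: yield stress
    for y << 1 above lam_c, Newtonian below, sigma ~ gdot^(beta/Delta)
    for y >> 1 (assumed smooth interpolations, illustration only)."""
    eps = abs(lam - lam_c)
    y = gdot / eps**Delta
    if lam > lam_c:
        f = (1.0 + y**2) ** (beta / (2.0 * Delta))            # -> 1, then y^(beta/Delta)
    else:
        f = y * (1.0 + y**2) ** ((beta / Delta - 1.0) / 2.0)  # -> y, then y^(beta/Delta)
    return eps**beta * f

# collapse: sigma/eps^beta is the same function of y = gdot/eps^Delta for all lam
y0 = 0.7
r1 = sigma(y0 * 0.2**Delta, lam_c + 0.2) / 0.2**beta
r2 = sigma(y0 * 0.4**Delta, lam_c + 0.4) / 0.4**beta
```

In the regime $y \gg 1$ the $\epsilon$ dependence drops out of $\epsilon^{\beta} y^{\beta/\Delta}$, leaving $\sigma \propto \dot{\gamma}^{\beta/\Delta}$, i.e. the shear-thinning law at the critical point; for $y \ll 1$ below $\lambda_{c}$ one reads off the diverging viscosity $\eta \propto \epsilon^{-(\Delta-\beta)}$.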
The above scaling ansatz is quite reminiscent of the scaling property of the
magnetization of ferromagnetic models around the critical temperature $T_{c}$.
On purpose we actually used the same standard notations for the critical
exponents, i.~e. $\beta$ and $\Delta$, as in the latter problem. Namely, by
replacing the stress $\sigma$ by the magnetization $m$ and the strain rate $\dot{\gamma}$ by the magnetic field $h$, one recovers $m \propto |T-T_{c}|^{\beta}\tilde{m}_{\pm} (h/|T-T_{c}|^{\Delta})$. One can easily find precise correspondences between 1) the Newtonian (Ohmic) law v.s. the paramagnetic behaviour $m=\chi h$ with the linear susceptibility $\chi$ diverging at $T_{c}$, 2) the power law rheology $\sigma \propto \dot{\gamma}^{\beta/\Delta}$ v.s.
$m \propto h^{\delta}$ with $\delta=\beta/\Delta$ at the critical points, and 3) the yield stress (critical current)
$\sigma_{c} \propto (\lambda-\lambda_{c})^{\beta}$ v.s. the spontaneous magnetization $m_{s} \propto (T_{c}-T)^{\beta}$.
This type of scaling was first advocated in the context of the non-linear current-voltage characteristics of superconductors by Wolf, Gubser and Imry \cite{WGI}.
They studied the non-linear current-voltage characteristics of a superconducting film at
the superconducting phase transition, which is a Kosterlitz-Thouless
type 2nd order phase transition \cite{KT}. They pointed out
the analogy with the scaling of the magnetization of ferromagnets. Such a
dynamical scaling ansatz has been extensively used in studies of
transport properties in high-$T_{\rm c}$ superconductors, especially in the
context of {\it vortex-glasses} with quenched pinning
centers \cite{FFH}.
More recently Otsuki and Sasa \cite{Otsuki-Sasa} have realized the same
type of critical behaviour in the context of the non-linear rheology of
molecular glasses. Quite remarkably they were able to find a mean-field theory
which predicts that the flow curves of the non-linear rheology are
formally identical to the equation of state of the Landau-Ginzburg
theory of ferromagnets under an external magnetic field, suggesting in
particular $\beta/\Delta=1/3$.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.75\textwidth]{fig-flowcurves.eps}
\end{center}
\caption{(Color online)
Flow curves of the FK model and JJA. Both models are simulated by the RCSJ method at zero temperature $T=0$.
The panel a) displays the master flow curve of the FK model using
$\lambda_{c}=0.9716\ldots$ and $\beta=\Delta=3$. The panel b) displays the master flow curve of the irrationally frustrated anisotropic JJA using $\lambda_{c}=1$, $\beta=1.19$ and $\Delta=3.5$, so that $\beta/\Delta=0.34$.
}
\label{fig-flowcurves}
\end{figure}
Let us now discuss the dynamical scaling properties of the FK model and the JJA under shear.
We performed the RCSJ simulation on both models.
The master flow curve of the FK model is displayed in the panel a) of Fig.~\ref{fig-flowcurves}, which follows well the expected dynamical scaling behaviour around the Aubry transition point $\lambda_{c}$.
A previous work \cite{Peyrard-Aubry} found $\sigma_{c} \propto (\lambda-\lambda_{c})^{\tilde{\psi}}$ with $\lambda_{c}=0.9716\ldots$ and $2.85 < \tilde{\psi} < 3.06$. In addition we found that the system remains Newtonian over the entire sliding phase $\lambda \leq \lambda_{c}$ including the critical point, so we assumed $\beta=\Delta$ in the scaling plot.
For the irrationally frustrated anisotropic JJA,
we pointed out in sec \ref{subsec-static-shear-response} that the shear-modulus $\mu$ along the stronger/weaker
coupling is finite/zero at zero temperature $T=0$ and that the symmetric point $\lambda_{c}=1$ is the critical point where the shear-modulus $\mu$ along a given axis changes from finite/zero to zero/finite.
Then it is quite natural to expect that the current-voltage curve of the system with respect to
injection of the electric current along a given direction exhibits a dynamical scaling feature around $\lambda_{c}=1$. This is indeed observed in a numerical simulation of the RCSJ dynamics \cite{Yoshino-Nogawa-Kim-1}.
The current-voltage curves collapse onto a master curve as shown in the panel b) of Fig.~\ref{fig-flowcurves}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.5\textwidth]{fig-phasediagram.eps}
\end{center}
\caption{(Color online) ``Jamming'' phase diagram of the JJA
}
\label{fig-phasediagram}
\end{figure}
\subsection{Jamming phase diagram}
In Fig.~\ref{fig-phasediagram} we show a schematic ``jamming phase diagram'' of the irrationally frustrated anisotropic Josephson junction array. So far we discussed only the properties of the system at zero temperature $T=0$ and $\lambda \geq 1$. Under the electric current $J_{y}$ injected along the $y$-axis, the system remains jammed as long as $J_{y}$ is smaller than the critical current $J_{\rm c} \sim (\lambda-1)^{\beta}$. The configuration of the jammed solid phase is characterized by the frozen pattern of the undulated vortex stripes. Under a strong enough current $J_{y} > J_{\rm c}$, the system starts to move, exhibiting the shear-thinning behaviour. On the other hand the same solid can slide freely with respect to an electric current $J_{x}$ injected along the $x$-axis for the entire $\lambda > 1$ region. For the region $\lambda < 1$, we just need to interchange the $x$ and $y$-axes in the above discussion.
Previous studies on the irrationally frustrated JJA are almost exclusively concerned with the symmetric point $\lambda=1$. Recent intensive numerical studies at finite temperatures suggest $T_{c}(\lambda=1)=0$ \cite{Park-Choi-Kim-Jeon-Chung,Granato}, without the finite temperature glass transition anticipated in the early works \cite{Halsey,JJA-relaxation}. On the other hand our recent studies on the static properties at low temperatures \cite{Yoshino-Nogawa-Kim-2,Yoshino-Nogawa-Kim-3} strongly suggest $T_{c}(\lambda) > 0$ at least sufficiently away from the symmetric point $\lambda=1$, and that $T_{c}(\lambda)$ rapidly decreases approaching the symmetric point $\lambda=1$. These point toward the possibility of the jamming phase diagram depicted in Fig~\ref{fig-phasediagram}.
\footnote{We only show the $\lambda >1$ part. Note that $T_{c}(1/\lambda)=T_{c}(\lambda)/\lambda$ holds
due to the obvious symmetry between $x$ and $y$ axis. }
Quite interestingly it is very similar to the jamming phase diagram proposed by the Chicago group \cite{Liu-Nagel,Nagel-group}. Most notably the point $(\lambda=1,T=0)$ looks quite similar to the jamming point, which deserves further studies.
\section{Conclusions}
\label{sec-conclusions}
To conclude, we discussed the static and transport (rheological) properties of the irrationally frustrated anisotropic Josephson junction array (JJA), which exhibits vortex stripes with self-generated randomness at low temperatures. We emphasized in particular the intimate connection between friction models and the irrationally frustrated JJA, which provides valuable insights into these problems.
\section*{Acknowledgements} We thank Takahiro Hatano, Hikaru Kawamura, Jorge Kurchan
and Hiroshi Matsukawa for useful discussions.
We thank the Supercomputer Center, ISSP, University of Tokyo for the use of the facilities.
This work is supported by Grant-in-Aid for Scientific Research
on Priority Areas "Novel States of Matter Induced by Frustration"
(1905200*) and Grant-in-Aid for Scientific Research (C) (21540386).
\section{Introduction}
Is the global economic system of the present era inherently unstable? It had long been
thought that the cyclical sequence of inflations and recessions that have buffeted most
national economies throughout the 19th and 20th centuries
is an inevitable result of modern industrial capitalism. However, starting in the 1970s,
economists allied with the influential Chicago school of economics started to promote
the belief that the panacea to all economic ills of the world lay in completely and
unconditionally subscribing to their particular brand of free-market policies. Their
hubris reached its apogee at the beginning of this decade as summed up by the statement
of the Nobel Laureate Robert Lucas at the 2003 annual meeting of the American Economic
Association that ``the central problem of depression prevention has been solved,
for all practical purposes''~\cite{Lucas03}. This complacency about the economy's robustness to all
possible perturbations led not only most professional economists, but more importantly,
government bureaucrats and ministers (e.g., Gordon Brown's claims that economic
booms and busts were a thing of the past~\cite{Brown07}) to ignore or downplay
the seriousness of the present economic and financial crisis at its initial stage.
As many of the recent books on the onset of the global economic meltdown written
by Posner and others point out, the mainstream economists and those
whom they advised were blinded by their unquestioning acceptance of the assumptions
of neo-classical theory~\cite{Posner09}.
In response to the rising criticism of traditional economic theory, spearheaded by
physicists working on economic phenomena~\cite{Bouchaud08} as well as non-traditional economists who have collaborated with physicists~\cite{Lux09}, some economists are now trying to put up
a defense that the sudden collapse of markets and banks is not something that can
be predicted by economic theory as this contradicts their basic foundational
principles of rational expectations
and efficient markets. Thus, according to the conventional economic school of thought,
bubbles cannot exist because any rise in price must reflect all information available
about the underlying asset~\cite{Fama70}. Although detailed analysis of data from
markets clearly reveals that much of the observed price variation cannot be explained in
terms of changes in economic fundamentals~\cite{Shiller05}, the unquestioning belief
in the perfection
of markets has prompted several economists in the past decades to assert that the
famous historical bubbles, such as Tulipomania in 17th century Holland or the South Sea
Affair of 18th century England, were not episodes of price rise driven by irrational speculation,
but rather were based on sound economic reasons (see, e.g., Ref.~\cite{Garber90}).
This complete divorce of theory from
observations points to the basic malaise of economics. What makes it all the more
worrying is that despite the lack of any empirical verification, such economic theories
have nevertheless been used to guide the policies of national and international agencies
affecting the well-being of billions of human beings.
In fact, in its desperate effort to become a rigorous science by
adopting, among other things, the formal mathematical framework of
game theory, mainstream economics has become concerned less with
describing reality than with an idealized version of the world. As an
economist recently pointed out, in the overly mathematical formalism
of rational expectations theory, any economic transaction, including
that of a person buying a newspaper from the corner store vendor,
appears to be a complicated chess game between Kenneth Arrow and Paul
Samuelson, two of the most notable post-war economists (quoted in
Ref.~\cite{Sinha10}). In truth, almost throughout our life, we rarely
go through a complicated optimization process in an effort to
calculate the best course of action. Even if we had access to complete
information about all the options available (which is seldom the
case), the complexity of the computational problem may overwhelm our
decision-making capabilities. Thus, most often we are satisfied with
choices that seem ``good enough'' to us, rather than the best one
under all possible circumstances. Moreover, our choices may also
reflect non-economic factors such as moral values that are usually not
taken into consideration in mainstream economics.
Given these caveats, it seems that the cherished hypotheses of
efficient markets and rational agents stand on very shaky ground
indeed. The question obviously arises as to whether there are any
alternative foundations that can replace the neo-classical framework.
Behavioral economics, which tries to integrate the areas of
psychology, sociology and economics, is one possible candidate.
Another challenger is from outside the traditional boundaries of
economics, a discipline that has been dubbed
econophysics~\cite{Sinha09,Yakovenko09}. Although physicists have
earlier worked on economic problems occasionally, it was only about a
decade and a half ago that a systematic, concerted movement began which
has seen more and more physicists using the tools of their trade to
analyze phenomena occurring in a socio-economic
context~\cite{Farmer05}. This was partly driven by the availability of
large quantities of high-quality data and the means to analyze them
using computationally intensive algorithms. One of the most active
sub-fields within this area is the empirical characterization of
statistical properties of financial markets. Starting from the work
of Mantegna and Stanley~\cite{Mantegna99}, several important results
are now known about such markets which appear to be {\em universal},
in the sense that they are invariant with respect to the systems being
considered, the time-period under consideration and the type of data
being analyzed. One of the best examples of such universal features of
financial markets is the {\em inverse cubic law} for the distribution
of price (or index) fluctuations~\cite{Gopikrishnan98}. Not only has
it been observed to hold across several different time-scales and
across different types of stocks (and market indices), but more
surprisingly, it appears to be valid irrespective of the stage of
development of the market~\cite{Sinha07}.
Financial markets have also proved a fertile ground for uncovering the
structure of interactions between the different components of an
economic system. In particular, the transactions between agents buying
and selling different stocks in the market are reflected in the
correlated movements of the prices of different stocks. Analogous to
the process of inferring the movement of air molecules by watching the
Brownian motion of suspended particles, we can have a coarse-grained
view of the interaction dynamics between individuals in the market by
reconstructing the network of significantly correlated stocks (i.e.,
correlated in terms of their price fluctuations). Comparison of such
stock interaction networks for different markets has hinted that a
financial market at a later stage of development possesses many more
strongly bound clusters of co-moving stocks that are often from the
same business sector~\cite{Pan07}. As such markets tend to have
identical statistical properties in terms of the distributions of
price or index fluctuations, as well as other trading indicators,
they differ primarily in the topological structure of the interactions
between their components. Thus, network analysis can provide us with a
window into the process of economic development.
\begin{figure}
\centering
\includegraphics[width=0.98\linewidth]{price_eqlbm.eps}
\caption{{\bf Price mechanism leading to stable equilibrium between supply
and demand according to traditional economic thinking.} (Left) The supply
and demand curves indicate how increasing supply or decreasing demand
can result in falling price or vice versa. If the available supply of a certain
good in the market at any given time is less than the demand for it among
consumers, its price will go up. The perceived shortage will stimulate increase
in production that will result in an enhanced supply. However, if supply increases
beyond the point where it just balances the demand at that time, there will be
unsold stock remaining which will eventually push the price down. This in turn will
result in a decrease in the production. Thus, a negative feedback control mechanism
governed by price will move demand and supply along their respective curves
to the mutual point of intersection, where the quantity available ($Q_0$) at the
equilibrium price $P_0$ is such that supply exactly equals demand. (Right) As
the demand and supply of a product changes over time due to various different
factors, the supply and demand curves may shift on the
quantity-price space. As a result, the new equilibrium will be at a different
price ($P_0^{\prime}$) and quantity ($Q_0^{\prime}$). Until the curves shift again,
this equilibrium will be stable, i.e., any perturbation in demand or supply will
quickly decay and the system will return to the equilibrium.
}
\label{fig:price_eqlbm}
\end{figure}
When we broaden our problem from the relatively restricted context of financial markets
to general economic phenomena, the role played by networks of interactions becomes even more
intriguing. Traditionally, economics has been concerned primarily with equilibria.
Fig.~\ref{fig:price_eqlbm} shows that the price mechanism was perceived by economists
to introduce a negative feedback between perturbations in demand and supply, so that
the system quickly settles to the equilibrium where supply exactly equals demand.
Much of the pioneering work of Samuelson~\cite{Samuelson47}, Arrow~\cite{Arrow58} and
others (for a review see Ref.~\cite{Negishi62}) had been involved with
demonstrating that such equilibria can be stable, subject to
several restrictive conditions. However, the occurrence of complex networks of
interactions in reality brings new dynamical issues to the fore. Most notably, we are faced
with the question: do complex economic networks give rise to instabilities?
Given that most economic systems at present are composed of numerous strongly connected
components, will periodic and chaotic behavior be the norm for such systems rather
than static equilibrium solutions?
This question has, of course, been asked earlier in different contexts. In ecology,
it has given rise to the long-standing stability-diversity debate~\cite{May73}.
In the network framework, the ecosystem can be thought of as a network of species,
each of the nodes being associated with a variable that corresponds to the population
of the species it represents. The stability of the ecosystem is then defined by the
rate at which small perturbations to the populations of various species decay with time.
If the disturbance instead grows and gradually propagates through the system affecting
other nodes, the equilibrium is clearly unstable. Prior to the pioneering work of
May in the 1970s, it was thought that increasing complexity of an ecosystem, either
in terms of a rise in the total number of species or the density and strength of their
connections, results in enhanced stability of the ecosystem. This belief was based
on empirical observations that more diverse food-webs (e.g., in the wild) showed less violent
fluctuations in population density than simpler communities (such as in fields
under monoculture) and were less likely to suffer species extinctions. It has also been
seen that tropical forests, which generally tend to be more diverse than sub-tropical
ones, are also more resistant to invasion by foreign species~\cite{Elton58}. It was
therefore nothing short of a shock to the field when in 1972, Robert May showed in the
very brief article {\em Will a large complex system be stable?}~\cite{May72}
using linear stability arguments that as complexity increases, a randomly connected
network would tend to become more and more unstable.
The surprising demonstration that a system which has many elements and/or dense connections
between its elements is actually more likely to suffer potentially damaging large
fluctuations initiated by small perturbations immediately led to a large body of
work on this problem (see Ref.~\cite{McCann00} for a review). The two major objections
to May's results were that (a) it used linear stability analysis and (b) it assumed random
organization of the interaction structure. However, more recent work considering
systems with different types of population dynamics in the nodes, including
periodic limit-cycles and chaotic attractors~\cite{Sinha05b,Sinha06}, as well as
networks having realistic features such as the clustered small-world
property~\cite{Sinha05a} and scale-free degree distributions~\cite{Sinha05c},
has shown the result of increasing instability of complex networks to be
extremely robust. While large complex networks can still arise as a result
of gradual evolution~\cite{Wilmers02}, it is almost inevitable that such
systems will be frequently subject to large fluctuations and extinctions.
The relevance of this body of work to understanding the dynamics of economic
systems has been highlighted in the wake of the recent banking crisis when a
series of defaults, following each other in a cascading process, led to the
collapse of several major financial institutions. In fact, May and two other theoretical ecologists have written
an article entitled {\em Ecology for bankers}~\cite{May08} to point out the
strong parallels between understanding collapse in economic and ecological
networks. Recent empirical determination of networks occurring in the financial
context, such as that of inter-bank payment flows between banks through the
{\em Fedwire} real-time settlement service run by the US Federal Reserve, has
now made it possible to analyze the process by which cascades of failure events
can occur in such systems~\cite{Soramaki07}. Analogous to ecological systems, where population
fluctuations of a single species can trigger diverging deviations from the
equilibrium in the populations of other species, congestion in settling the
payment of one bank can cause other pending settlements to accumulate rapidly
setting the stage for a potential major failure event. Intriguingly, it is the
very complexity of the network, which makes it susceptible to such
network-propagated effects of local deviations, that renders global or
network-wide failure all the more likely.
As the world banking system becomes more and more connected, it may be
very valuable to understand how the topology of interactions can
affect the robustness of the network.
The economic relevance of the network stability arguments used in the ecological
context can be illustrated
from the following toy example. Consider a model financial market comprising
$N$ agents where
each agent can either buy or sell at a given time instant.
This tendency can be quantitatively measured by the
probability to buy, $p$, and its complement, the probability to sell, $1-p$.
For the market to be in equilibrium, the demand should equal supply, so that
as many agents are likely to buy as to sell, i.e., $p = 0.5$. Let us in addition
consider that agents are influenced in their decision to buy or sell by the
actions of other agents with whom they have interactions. In general, we can
consider that out of all possible pairwise interactions between agents only a fraction
$C$ are actually realized. In other words, the inter-agent
connections are characterized by the matrix of link
strengths {\bf J}=$\{J_{ij}\}$ (where $i,j=1, ..., N$ label the agents)
with a fraction $C$ of non-zero entries.
If $J_{ij}>0$, it implies that an action of agent
$j$ (buying or selling) is likely to influence agent $i$ to act in the
same manner, whereas $J_{ij}<0$ suggests that the action of $i$ will be contrary
to that of $j$. Thus, the time-evolution of the probability for agent $i$ to buy
can be described by the following linearized equation close to the equilibrium
$p_i = 0.5$ $(i=1,\ldots,N)$:
\begin{equation}
\frac{dp_i}{dt}=\epsilon_i (0.5 - p_i) + \sum_j J_{ij} (0.5-p_j),
\label{eq1}
\end{equation}
where $\epsilon_i$ is the rate of convergence of an isolated node
to its equilibrium state of equal probability for buying or selling. Without much
loss of generality we can consider $\epsilon_i = 1$ by appropriate choice of
time units for the dynamics. If in addition, we consider that for simplicity the
interactions are assigned randomly from a Gaussian distribution with mean $0$ and
variance $\sigma^2$, then the largest eigenvalue of the corresponding Jacobian
matrix {\bf J} evaluated around the equilibrium is $\lambda_{max}
= \sqrt{NC\sigma^2}-1$. For system parameters such that $NC\sigma^2 > 1$, an
initially small perturbation will gradually grow with time and drive the
system away from its equilibrium state.
Thus, even though the equilibrium $p=0.5$ is stable for
individual nodes in isolation, it may become unstable under
certain conditions when interactions between the agents are introduced.
Note that the argument can be easily generalized to the case where
the distribution from which $J_{ij}$ is chosen has a non-zero mean.
\begin{figure}
\centering
\includegraphics[width=0.98\linewidth]{price_oscillations.eps}
\caption{{\bf Delay in market response can result in persistent price
oscillations.} Ideally, the price mechanism should cause a
transient increase (decrease) in demand to be immediately matched by a
corresponding increase (decrease) in supply. However, in reality there
is delay in the information about the rise or fall in demand reaching
the producer; moreover, at the production end it may take time to
respond to the increasing demand owing to inherent delays in the
production system. Thus, the supply may always lag behind the price in
a manner that produces oscillations: as price rises, supply initially
remains low before finally increasing, by which time demand has fallen
due to the high price which (in association with the increased supply)
brings the price down. Supply continues to rise for some more time
before starting to decrease. When it falls much lower than the
demand, the price starts rising again which starts the whole cycle
anew. Thus, if the demand fluctuates at a time-scale that is shorter
than the delay involved in adjusting the production process to respond
to variations in demand, the price may evolve in a periodic or even a
chaotic manner.
}
\label{fig:price_osc}
\end{figure}
Another problem associated with the classical concept of economic equilibrium
is the process by which the system approaches it. Walras, in his original formulation
of how prices achieve their equilibrium value, had envisioned the {\em t\^{a}tonnement}
process by which a market-maker takes in buy/sell bids from all agents in the market
and gradually adjusts price until demand equals supply. Formally, it resembles
an iterative convergence procedure for determining the fixed-point solution of a
set of dynamical equations. However, as we know from the developments in nonlinear dynamics
over the past few decades, such operations on even simple non-linear systems
(e.g., the logistic equation) can result in periodic cycles or even chaos~\cite{May76}.
It is therefore not surprising to encounter a situation in which the price mechanism
actually causes supply and demand to remain forever out of step with each other, even though
each is trying to respond to changes in the other. A simple situation in which
such a scenario can occur is shown in Fig.~\ref{fig:price_osc}, where a delay in
the response of the supply to the changes in price through variations in demand
can cause persistent oscillations.
As the principal reason for the instability appears to be the delay, one can
argue that by increasing the speed of information propagation
it should be possible to stabilize the equilibrium. However, we seem to have
witnessed exactly the reverse with markets becoming more volatile as improvements
in communication enable economic transactions to be conducted faster and faster.
As a history of financial manias and panics points out, ``there is little historical
evidence to suggest that improvements in communications create docile financial
markets \ldots''~\cite{Chancellor99}.
A possible answer to this apparent paradox lies in the fact that in
any realistic economic situation, information about fluctuations in the demand
may need to be relayed through several intermediaries before it reaches
the supplier. In other situations, the market may be segmented into several
communities of agents, with significantly more interactions occurring
between agents within the same as opposed to different communities.
These features can introduce several levels of delays in the market, resulting
in a multiple time-scale problem~\cite{Pan09}. Thus, increasing the speed of transactions,
while ostensibly allowing faster communication at the global scale, can
disrupt the dynamical separation between processes operating at different
time-scales. This can prevent sub-systems from converging to their respective
equilibria before subjecting them to new perturbations, thereby always keeping
the system out of equilibrium.
Therefore, we see that far from conforming to the neo-classical ideal of a stable
equilibrium, the dynamics of the economic system is likely to be always far
from equilibrium. In analogy with the question asked about ecological and other
systems with many diverse interacting components, we can ask whether a sufficiently
complex economy is bound to exhibit instabilities. After all, just like the
neo-classical economists, natural scientists also at one time believed in the
clockwork nature of the physical world (which in turn influenced the
English philosopher Thomas Hobbes to seek the laws of social organization).
However, Poincar\'e's work on the question of whether the solar system is stable
showed the inherent problems with such a viewpoint and eventually paved the
way for the later developments of chaos theory. Possibly we are on the brink of
a similar theoretical breakthrough in econophysics, one that does not strive to
re-interpret (or even ignore) empirical data
so as to conform to a theorist's expectations but one which describes the
mechanisms by which economic
systems actually evolve over time. It may turn out that, far from being market
failures that need to be avoided, crashes and depressions are necessary
ingredients of future development, as suggested by Schumpeter in
his theory of {\em creative destruction}~\cite{Schumpeter75}.
However, most importantly, we should not forget that economic
phenomena form
just one aspect of the entire set of processes that make up the human social
organization. Econophysics has to ultimately strive to be a theory for the
entire spectrum of human social behavior. As Keynes, one of the greatest
economists, had once said
``do not let us overestimate the importance of the economic problem, or sacrifice
to its supposed necessities other matters of greater and more permanent significance''
\cite{Keynes31}.
\section{Introduction} Recent advances in the smooth number technology associated with
Waring's problem (see \cite{BW2022}) make possible an investigation of the cognate
problem to which Fre\u \i man's theorem provides a qualitative answer. Consider then
natural numbers $k_i$ $(i\ge 1)$ satisfying $2\leqslant k_1\leqslant k_2\leqslant \ldots $. We address the
problem of determining circumstances in which, given $j\in \dbN$, there exists a natural
number $s=s(j)$ such that all large integers $n$ are represented in the form
\[
x_1^{k_j}+x_2^{k_{j+1}}+\ldots +x_s^{k_{j+s-1}}=n,
\]
with $x_i\in \dbN$ $(1\leqslant i\leqslant s)$. Fre\u \i man's theorem, announced in 1949 (see
\cite{Fre1949}), asserts that this holds if and only if the infinite series $\sum k_i^{-1}$
diverges. A formal proof of this conclusion was given by Scourfield in 1960
(see \cite[Theorem 1]{Sco1960}). We now provide an effective version of this conclusion.
\begin{theorem}\label{theorem1.1}
Let $k_i\in \dbN$ $(i\ge 1)$ satisfy $2\leqslant k_1\leqslant k_2\leqslant \ldots $. Suppose that $s$ is a
natural number for which
\[
\sum_{i=3}^s\frac{1}{k_i}>2\log k_1+\frac{1}{k_2}+3.20032.
\]
Then all sufficiently large natural numbers $n$ are represented in the form
\[
x_1^{k_1}+x_2^{k_2}+\ldots +x_s^{k_s}=n,
\]
with $x_i\in \dbN$ $(1\leqslant i\leqslant s)$.
\end{theorem}
Since the hypotheses of Theorem \ref{theorem1.1} impose the condition
$k_2\ge k_1\ge 2$, one obtains an immediate consequence of this theorem that implies
Fre\u \i man's theorem.
\begin{corollary}\label{corollary1.2}
Let $k_i\in \dbN$ $(i\ge 1)$ satisfy $2\leqslant k_1\leqslant k_2\leqslant \ldots $, and suppose that
$j\in \dbN$. Then whenever $s$ is a natural number for which
\begin{equation}\label{1.1}
\sum_{i=j}^{j+s-1}\frac{1}{k_i}\ge 2\log k_j+4.71,
\end{equation}
all sufficiently large natural numbers $n$ are represented in the form
\[
x_1^{k_j}+x_2^{k_{j+1}}+\ldots +x_s^{k_{j+s-1}}=n,
\]
with $x_i\in \dbN$ $(1\leqslant i\leqslant s)$.
\end{corollary}
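To give a feel for what the sufficient condition \eqref{1.1} demands in practice, the least admissible $s$ is easily evaluated numerically. The following small script (an illustration only, not part of the argument; the function name and the chosen exponent sequences are ours) computes the smallest $s$ satisfying \eqref{1.1} for a given sequence of exponents:

```python
import math

def min_s(k, j, const=4.71):
    """Least s with sum_{i=j}^{j+s-1} 1/k(i) >= 2*log k(j) + const,
    i.e. the sufficient condition (1.1) of Corollary 1.2; k(i) returns
    the i-th exponent of the sequence."""
    target = 2.0 * math.log(k(j)) + const
    total, s = 0.0, 0
    while total < target:
        s += 1
        total += 1.0 / k(j + s - 1)
    return s

# Constant exponents k_i = 2: the condition reads s/2 >= 2 log 2 + 4.71.
print(min_s(lambda i: 2, 1))
# Mixed powers k_i = i + 1 (a square, a cube, a fourth power, ...),
# where the harmonic-like sum grows only logarithmically in s.
print(min_s(lambda i: i + 1, 1))
```

The second example illustrates why slowly growing exponent sequences require a number of summands exponential in the right-hand side of \eqref{1.1}.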
By making better use of sharper Weyl exponents available for smaller exponents, most
particularly in the situation in which one or more of the $k_i$ are equal to $2$, it would not
be difficult to reduce the number $4.71$ occurring in the lower bound \eqref{1.1} of the
statement of Corollary \ref{corollary1.2}. Back-of-the-envelope computations suggest that a
number comfortably below $3.5$ should be accessible. For larger values of $k_1$, and
$k_2$ large compared to $k_1$, on the other hand, the conclusion of Theorem
\ref{theorem1.1} has strength reflecting the limits of current technology. Standard
heuristics from the circle method, meanwhile, suggest that the conclusion of Corollary
\ref{corollary1.2} should remain valid provided only that
\[
\sum_{i=j}^{j+s-1}\frac{1}{k_i}>4.
\]
If one is prepared to accept a local solubility condition, then the assumption of
square-root cancellation for the mean values of exponential sums encountered in the
application of the circle method would reduce the lower bound $4$ here to $2$, while the
most optimistic heuristics would reduce this number further to $1$.\par
We now turn to the special case of this variant of Waring's problem involving mixed powers
in which the exponents consist of consecutive terms of an arithmetic progression. Thus,
when $k$ and $r$ are non-negative integers with $k\ge 2$, we consider the representation
of large positive integers $n$ in the shape
\begin{equation}\label{1.2}
x_1^k+x_2^{k+r}+\ldots +x_s^{k+r(s-1)}=n,
\end{equation}
with $x_i\in \dbN$ $(1\leqslant i\leqslant s)$. We denote by $R(k,r)$ the least number $s$ having the
property that all large integers $n$ are represented in the form \eqref{1.2}. In particular,
the important number $G(k)$ familiar to aficionados of Waring's problem is equal to
$R(k,0)$. Moreover, the pioneering work of Roth \cite[Theorem 2]{Rot1951} shows that
$R(2,1)\leqslant 50$, which is to say that all large enough integers $n$ have a representation in
the shape
\[
n=x_1^2+x_2^3+\ldots +x_{50}^{51},
\]
with $x_i\in \dbN$ $(1\leqslant i\leqslant 50)$.
\begin{theorem}\label{theorem1.3}
Let $k$ and $r$ be natural numbers with $k\ge 2$. Then, uniformly in $k$ and $r$ one has
$R(k,r)\leqslant A(r)(k+1)^{r+1}$, where $A(r)=r^{-1}25^r(r+1)^{r+1}$. Meanwhile, when
$r\ge k$ one has $R(k,r)\leqslant (6k+6)^{2r}$.
\end{theorem}
It would appear that the only previous work concerning this problem of such generality
hitherto available in the literature is that due to Scourfield \cite[Theorem 2]{Sco1960}. The
latter work shows that when $k\ge 12$, one has
\[
R(k,r)\leqslant C(r)k^{4r+1}(\log k)^{2r},
\]
in which $C(r)$ is a quantity depending at most on $r$, but apparently growing somewhat
more rapidly than $\exp (8r^2)$. Meanwhile, the early work of Roth \cite{Rot1951}
showing that $R(2,1)\leqslant 50$ has been improved by a sequence of authors over the past
seven decades (see \cite{Bru1987, Bru1988, For1995, For1996, Tha1968, Tha1980,
Tha1982, Tha1984, Vau1970, Vau1971}). Most recently, Liu and Zhao \cite{LZ2021} have
shown that $R(2,1)\leqslant 13$. As $r$ increases in equation \eqref{1.2}, the number of
summands required to apply available technology increases rapidly. Thus, recent work of
Kuan, Lesesvre and Xiao \cite[Theorem 2]{KLX2020} asserts that $R(2,2)\leqslant 133$.\par
We isolate two cases of the representation problem \eqref{1.2} for special attention. First,
in the case $r=1$, we note that Ford \cite[Theorems 2 and 3]{For1996} has shown that
$R(3,1)\leqslant 72$, and that for large values of $k$ one has $R(k,1)\ll k^2\log k$. A corollary of
Theorem \ref{theorem1.3} improves the order of magnitude of the latter bound.
\begin{corollary}\label{corollary1.4}
When $k$ is an integer with $k\ge 2$, one has $R(k,1)\leqslant 100(k+1)^2$.
\end{corollary}
Thus, when $k\ge 2$ and $s\ge 100(k+1)^2$, all large integers $n$ possess a
representation in the shape
\[
x_1^k+x_2^{k+1}+\ldots +x_s^{k+s-1}=n,
\]
with $x_i\in \dbN$ $(1\leqslant i\leqslant s)$. The cognate problem in which one seeks representations
of large integers $n$ in the shape
\[
x_1^k+x_2^{2k}+\ldots +x_s^{sk}=n,
\]
with $x_i\in \dbN$ $(1\leqslant i\leqslant s)$, is considerably more difficult. Here, by taking $r=k$ in
Theorem \ref{theorem1.3} we obtain the following conclusion.
\begin{corollary}\label{corollary1.5}
Let $k$ be an integer with $k\ge 2$. Then $R(k,k)\leqslant (6k+6)^{2k}$.
\end{corollary}
For comparison, the aforementioned work of Scourfield \cite{Sco1960} would deliver a
much weaker bound of the general shape $R(k,k)\ll \exp (ck^2)$ for a suitable $c>0$. It is
worth remarking, however, that the heuristic arguments noted in the discussion following the
statement of Corollary \ref{corollary1.2} suggest that one should have bounds of the shape
$R(k,1)\ll k$ and $R(k,k)\ll e^k$.\par
Our proofs of Theorems \ref{theorem1.1} and \ref{theorem1.3} are based on applications
of the Hardy-Littlewood method, and the basic infrastructure associated with this treatment
is outlined in \S2. Then, in \S3 we prepare a novel Weyl-type estimate for exponential sums
over smooth numbers. This eases our path in subsequent discussions and will likely be of
independent interest. We combine this estimate with an upper bound for mean values of
smooth Weyl sums in \S4, making use of our recent work \cite{BW2022} concerning
Waring's problem. Thereby, we obtain an acceptable upper bound for appropriate sets of
minor arcs relevant to Theorem \ref{theorem1.1} and the second conclusion of Theorem
\ref{theorem1.3}. A refinement of this approach in \S5 applies for the minor arc
contribution needed for the proof of the first conclusion of Theorem \ref{theorem1.3}. The
corresponding major arc contributions are discussed in \S6, the positivity of the singular
series requiring some additional discussion in \S7.\par
In this paper the letter $p$ is reserved to denote a prime number. We use the standard
notation $p^h\|n$ to indicate that $p^h|n$ and $p^{h+1}\nmid n$. Also, we write
$\|\theta\|$ for $\min\{|\theta-n|:n\in {\mathbb Z}\}$ and $e(z)$ for $e^{2\pi {\rm i}z}$.
\section{Preliminary infrastructure} The proofs of Theorems \ref{theorem1.1} and
\ref{theorem1.3} make use of the Hardy-Littlewood method, with smooth Weyl sums
playing a pivotal role. We denote the set of $R$-smooth integers not exceeding $P$ by
$\mathscr A(P,R)$, so that
\[
\mathscr A(P,R)=\{ n\in [1,P]\cap {\mathbb Z}:\text{$p|n$ implies $p\leqslant R$}\}.
\]
We note that the standard theory of smooth numbers shows that whenever $\eta\in (0,1)$,
then there is a positive number $c_\eta$ with the property that
$\text{card}(\mathscr A(P,R))\sim c_\eta P$ as $P\rightarrow \infty$ (see for example
\cite[Lemma 12.1]{Vau1997}).\par
Fix $k_i\in \dbN$ $(i\ge 1)$ with $2\leqslant k_1\leqslant k_2\leqslant \ldots $. Let $s$ be a natural number,
and define $\theta=\theta_s({\mathbf k})$ by putting
\begin{equation}\label{2.1}
\theta_s({\mathbf k})=\sum_{i=1}^s\frac{1}{k_i}.
\end{equation}
For now, it suffices to remark that we have in mind imposing the condition $\theta>2$,
although we shall later impose more onerous conditions on $s$. We consider a natural
number $n$ sufficiently large in terms of $s$ and $k_1,\ldots ,k_s$, and we seek a
representation of $n$ in the form
\begin{equation}\label{2.2}
x_1^{k_1}+x_2^{k_2}+\ldots +x_s^{k_s}=n.
\end{equation}
When $1\leqslant i\leqslant s$, we put
\begin{equation}\label{2.3}
P_i=n^{1/k_i}
\end{equation}
and observe that all positive integral solutions of the Diophantine equation \eqref{2.2}
satisfy the bound $x_i\leqslant P_i$ $(1\leqslant i\leqslant s)$. Fix $\eta$ to be a positive number sufficiently
small in terms of $s$ and $k_1,\ldots ,k_s$, in a manner that will become clear in due
course. Our goal is to establish a lower bound for the number $T(n;\eta)$ of solutions of
the equation \eqref{2.2} with $x_i\in \mathscr A(P_i,P_i^\eta)$ $(1\leqslant i\leqslant s)$.
\par The smooth Weyl sums
$f_i(\alpha)$ that are key to our arguments are defined by
\begin{equation}\label{2.4}
f_i(\alpha)=\sum_{x\in \mathscr A(P_i,P_i^\eta)}e(\alpha x^{k_i}).
\end{equation}
Writing
\begin{equation}\label{2.5}
\mathscr F(\alpha)=f_1(\alpha)f_2(\alpha)\cdots f_s(\alpha),
\end{equation}
it follows via orthogonality that
\begin{equation}\label{2.6}
T(n;\eta)=\int_0^1 \mathscr F(\alpha) e(-n\alpha)\,{\rm d}\alpha .
\end{equation}
\par We derive an asymptotic formula for $T(n;\eta)$ by means of the circle method, the
successful application of which requires the introduction of a Hardy-Littlewood dissection.
Write $L=\log n$. We take the set of major arcs ${\mathfrak K}$ to be the union of the intervals
\[
{\mathfrak K}(q,a)=\{ \alpha\in [0,1): |\alpha-a/q|\leqslant L^{1/15}n^{-1}\},
\]
with $0\leqslant a\leqslant q\leqslant L^{1/15}$ and $(a,q)=1$. The set of minor arcs complementary to
${\mathfrak K}$ is then ${\mathfrak k}=[0,1)\setminus {\mathfrak K}$. Our first objective, which we complete in
\S\S4 and 5, is to establish that for a suitable positive number $\delta$, provided that $s$ is
suitably large in terms of ${\mathbf k}$, one has an upper bound of the shape
\begin{equation}\label{2.7}
\int_{\mathfrak k} |\mathscr F(\alpha)|\,{\rm d}\alpha \ll n^{\theta-1}L^{-\delta}.
\end{equation}
The major arc asymptotics is then the central theme of \S\S6 and 7, where we confirm the
lower bound
\begin{equation}\label{2.8}
\int_{\mathfrak K} \mathscr F(\alpha)e(-n\alpha){\,{\rm d}}\alpha \gg n^{\theta-1},
\end{equation}
again for suitably large values of $s$, and with $n$ sufficiently large in terms of $s$, ${\mathbf k}$
and $\eta$. By combining the bounds \eqref{2.7} and \eqref{2.8} within \eqref{2.6}, we
conclude that $T(n;\eta)\gg n^{\theta-1}$, so that all large enough integers $n$
possess a representation in the shape \eqref{2.2}. This confirms the respective conclusions
of Theorems \ref{theorem1.1} and \ref{theorem1.3}, the only problem remaining being
that of determining how large $s$ must be so that the estimates \eqref{2.7} and
\eqref{2.8} hold true. Appropriate bounds on $s$ will be determined in \S\S 4, 5 and 7.
\section{An estimate of Weyl-type} This section concerns estimates for the exponential
sums $f_i(\alpha)$ of use on sets of minor arcs more general than the arcs ${\mathfrak k}$ introduced
in the previous section. Consider a natural number $k\ge 2$ and a large positive number
$P$. We take $Q$ to be a parameter with $1\leqslant Q\leqslant P^{k/2}$. The major arcs ${\mathfrak M}(Q)$
are then defined to be the union of the sets
\[
{\mathfrak M}(q,a;Q)=\{ \alpha \in [0,1): |q\alpha -a|\leqslant QP^{-k}\},
\]
with $0\leqslant a\leqslant q\leqslant Q$ and $(a,q)=1$. The complementary set of minor arcs is then defined
by putting ${\mathfrak m}(Q)=[0,1)\setminus {\mathfrak M}(Q)$. Finally, we make use of the dyadically
truncated set of arcs ${\mathfrak N}(Q)={\mathfrak M}(Q)\setminus {\mathfrak M}(Q/2)$. Notice that, as a
consequence of Dirichlet's approximation theorem, one has $[0,1)={\mathfrak M}(P^{k/2})$.\par
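For the reader's convenience, we record the short verification of this last assertion. Given $\alpha \in [0,1)$, Dirichlet's approximation theorem supplies $a\in {\mathbb Z}$ and $q\in {\mathbb N}$ with $(a,q)=1$, $1\leqslant q\leqslant P^{k/2}$ and
\[
|q\alpha -a|\leqslant P^{-k/2}=P^{k/2}\cdot P^{-k},
\]
so that $\alpha \in {\mathfrak M}(q,a;P^{k/2})$.\par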
Our interest lies in estimates for the exponential sum
\[
f(\alpha;P,R)=\sum_{x\in \mathscr A(P,R)}e(\alpha x^k),
\]
valid when $R$ is a positive number with $R\leqslant P^\eta$ for a suitably small positive number
$\eta$, and $\alpha \in {\mathfrak N}(Q)$. In order to describe these estimates, we recall the concept
of an admissible exponent from the theory of smooth Weyl sums. A real number $\Delta_s$
is referred to as an {\it admissible exponent} (for $k$) if it has the property that, whenever
$\varepsilon>0$ and $\eta$ is a positive number sufficiently small in terms of $\varepsilon$, $k$ and $s$,
then whenever $2\leqslant R\leqslant P^\eta$ and $P$ is sufficiently large, one has
\[
\int_0^1 |f(\alpha;P,R)|^s{\,{\rm d}}\alpha \ll P^{s-k+\Delta_s+\varepsilon}.
\]
Here, the underlying parameter is $P$ and the constant implicit in Vinogradov's notation
may depend on $\varepsilon$, $\eta$, $k$ and $s$. One may confirm that for all positive numbers
$s$, there is no loss of generality in supposing that one has
$\max \{ 0,k-s/2\}\leqslant \Delta_s\leqslant k$.\par
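For the reader's convenience, we sketch the standard verification of this normalisation. Since $|f(\alpha;P,R)|\leqslant P$, one has the trivial bound
\[
\int_0^1|f(\alpha;P,R)|^s{\,{\rm d}}\alpha \leqslant P^s=P^{s-k+k},
\]
so that $\Delta_s=k$ is always admissible. In the opposite direction, when $s$ is an even integer, orthogonality shows that the integral on the left-hand side counts the solutions of an underlying diagonal equation, and the diagonal solutions alone number at least $(\text{card}\,\mathscr A(P,R))^{s/2}\gg_\eta P^{s/2}$. Thus any admissible exponent satisfies $s-k+\Delta_s\ge s/2$, which is to say $\Delta_s\ge k-s/2$.\par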
In order to facilitate concision, from this point onwards we adopt the extended $\varepsilon$, $R$
notation routinely employed by scholars working with smooth Weyl sums while applying the
Hardy-Littlewood method. Thus, whenever a statement involves the letter $\varepsilon$, then it is
asserted that the statement holds for any positive real number assigned to $\varepsilon$. Implicit
constants stemming from Vinogradov or Landau symbols may depend on $\varepsilon$, as well as
ambient parameters implicitly fixed such as $k$ and $s$. If a statement also involves the
letter $R$, either implicitly or explicitly, then it is asserted that for any $\varepsilon>0$ there is a
number $\eta>0$ such that the statement holds uniformly for $2\leqslant R\leqslant P^\eta$. Our
arguments will involve only a finite number of statements, and consequently we may pass
to the smallest of the numbers $\eta$ that arise in this way, and then have all estimates in
force with the same positive number $\eta$. Notice that $\eta$ may be assumed
sufficiently small in terms of $k$, $s$ and $\varepsilon$.\par
Associated with a family $(\Delta_s)_{s>0}$ of admissible exponents for $k$ is the number
\begin{equation}\label{3.1}
\tau(k)=\max_{w\in {\mathbb N}}\frac{k-2\Delta_{2w}}{4w^2},
\end{equation}
an exponent which satisfies the bound $\tau(k)\leqslant 1/(4k)$. For each positive number $s$,
one then has the related number
\begin{equation}\label{3.2}
\Delta_s^*=\min_{0\leqslant t\leqslant s-2}\left(\Delta_{s-t}-t\tau(k)\right),
\end{equation}
which we have described elsewhere as an {\it admissible exponent for minor arcs} (see the
preamble to \cite[Theorem 5.2]{BW2022} for a discussion of these exponents).\par
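We pause to record the short verification of the bound $\tau(k)\leqslant 1/(4k)$. The normalisation $\max\{0,k-s/2\}\leqslant \Delta_s$ recorded earlier gives $\Delta_{2w}\ge \max\{0,k-w\}$, whence, on considering the cases $w<k$ and $w\ge k$ separately,
\[
\frac{k-2\Delta_{2w}}{4w^2}\leqslant \frac{2w-k}{4w^2}\leqslant \frac{1}{4k}\quad (w\in {\mathbb N}),
\]
the final inequality being equivalent to the assertion that $(w-k)^2\ge 0$.\par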
We recall two consequences of our recent work \cite{BW2022} on Waring's problem.
\begin{lemma}\label{lemma3.1} Suppose that $k\ge 2$, $s\ge 2$ and $\Delta_s^*$ is an
admissible exponent for minor arcs satisfying $\Delta_s^*<0$. Let ${\kappa}$ be a positive
number with ${\kappa}\leqslant k/2$. Then, whenever $P^{\kappa}\leqslant Q\leqslant P^{k/2}$, one has the bound
\[
\int_{{\mathfrak m}(Q)}|f(\alpha;P,R)|^s{\,{\rm d}}\alpha \ll_{\kappa} P^{s-k}Q^{\varepsilon-2|\Delta_s^*|/k}.
\]
\end{lemma}
\begin{proof} This is immediate from \cite[Theorem 5.3]{BW2022}.
\end{proof}
\begin{lemma}\label{lemma3.2} Suppose that $s$ is a real number with $s\ge 4k$ and
$\Delta_s$ is an admissible exponent. Then whenever $Q$ is a real number with
$1\leqslant Q\leqslant P^{k/2}$, one has the uniform bound
\[
\int_{{\mathfrak M}(Q)}|f(\alpha;P,R)|^s{\,{\rm d}}\alpha \ll P^{s-k}Q^{\varepsilon+2\Delta_s/k}.
\]
\end{lemma}
\begin{proof} For the sake of concision, write $f(\alpha)$ for $f(\alpha;P,R)$. Suppose first that
$P^{1/(2k)}<Q\leqslant P^{k/2}$. Then the conclusion of \cite[Theorem 4.2]{BW2022} shows
that whenever $s\ge 2$, one has
\[
\int_{{\mathfrak M}(Q)}|f(\alpha)|^s{\,{\rm d}}\alpha \ll P^{s-k+\varepsilon}Q^{2\Delta_s/k},
\]
and the desired conclusion is immediate.\par
In order to handle the range of $Q$ with $1\leqslant Q\leqslant P^{1/(2k)}$, we turn to the bounds
made available in \cite{VW1991}. We take a pedestrian approach sufficient for our
subsequent application, though we note that with greater effort the condition $s\ge 4k$
could be relaxed at this point. Suppose first that $\alpha \in {\mathfrak M}(Q)$ for some real number
$Q$ satisfying
\[
\exp ((\log P)^{1/3})\leqslant Q\leqslant P^{1/(2k)}.
\]
When $a\in {\mathbb Z}$ and $q\in {\mathbb N}$ satisfy $(a,q)=1$ and
$0\leqslant a\leqslant q\leqslant \tfrac{1}{2}P^{k/2}$, the intervals ${\mathfrak M}(q,a;\tfrac{1}{2}P^{k/2})$
comprising ${\mathfrak M}(\tfrac{1}{2}P^{k/2})$ are disjoint. For
$\alpha \in {\mathfrak M}(q,a;\tfrac{1}{2}P^{k/2})\subseteq {\mathfrak M}(\tfrac{1}{2}P^{k/2})$, we put
\[
{\Upsilon}(\alpha)=(q+P^k|q\alpha -a|)^{-1}.
\]
Meanwhile, for $\alpha\in [0,1)\setminus {\mathfrak M}(\tfrac{1}{2}P^{k/2})$, we put ${\Upsilon}(\alpha)=0$.
Note that when $\alpha \in {\mathfrak N}(Q)$ one has ${\Upsilon}(\alpha)\ll Q^{-1}$. Then as a consequence
of \cite[Lemma 7.2]{VW1991}, much as in the argument leading to
\cite[equation (6.3)]{BW2022}, we find that when $\alpha \in {\mathfrak N}(Q)$, one has
\[
f(\alpha)\ll P(\log P)^3{\Upsilon}(\alpha)^{-\varepsilon+1/(2k)}+P^{1-1/(2k)}\ll PQ^{2\varepsilon -1/(2k)}.
\]
\]
We remark in this context that the constraint $k\ge 3$ of \cite[equation (6.3)]{BW2022} is
unnecessary in present circumstances. When instead
\[
1\leqslant Q\leqslant \exp((\log P)^{1/3}),
\]
we appeal to \cite[Lemma 8.5]{VW1991}, deducing as in the cognate argument associated
with \cite[Theorem 6.1]{BW2022} that
\[
f(\alpha)\ll P{\Upsilon}(\alpha)^{-\varepsilon+1/k}+P\exp(-(\log P)^{1/3})\ll PQ^{-1/(2k)}.
\]
Again, the constraint $k\ge 3$ of \cite{BW2022} is unnecessary in present circumstances.
Thus, in view of the hypothesis $s\ge 4k$, it follows that when $1\leqslant Q\leqslant P^{1/(2k)}$ one
has
\begin{align*}
\int_{{\mathfrak N}(Q)}|f(\alpha)|^s{\,{\rm d}}\alpha &\ll P^sQ^{\varepsilon-s/(2k)}\text{mes}({\mathfrak M}(Q))\\
&\ll P^sQ^{\varepsilon -s/(2k)}(Q^2P^{-k})\ll P^{s-k}Q^\varepsilon .
\end{align*}
This again delivers the estimate asserted in the statement of the lemma, since
\[
\int_{{\mathfrak M}(Q)}|f(\alpha)|^s{\,{\rm d}}\alpha \leqslant \sum_{\substack{l=0\\ 2^l\leqslant Q}}^\infty
\int_{{\mathfrak N}(2^{-l}Q)}|f(\alpha)|^s{\,{\rm d}}\alpha \ll P^{s-k}Q^\varepsilon .
\]
This completes the proof of the lemma.
\end{proof}
We obtain a pointwise bound for $f(\alpha)=f(\alpha;P,R)$ when $\alpha \in {\mathfrak m}(Q)$ by
application of the Gallagher-Sobolev inequality.
\begin{lemma}\label{lemma3.3} Suppose that $k\ge 2$ and $0<\rho(k)<2\tau(k)/k$. Then,
uniformly in $1\leqslant Q\leqslant P^{k/2}$, one has the bound
\[
\sup_{\alpha\in {\mathfrak m}(Q)}|f(\alpha;P,R)|\ll PQ^{-\rho(k)}.
\]
\end{lemma}
\begin{proof} We consider in the first instance the situation in which
$P^{1/(2k)}\leqslant Q\leqslant P^{k/2}$. Here, we apply Lemma \ref{lemma3.1} with $s=u+tk$,
where $u=4^k$ and $t$ is sufficiently large in terms of $k$. The value of $u$ here has
been chosen large enough that the classical theory of Waring's problem is comfortably
applicable. With more care one could work with a choice for $u$ little more than $k\log k$.
On considering the underlying Diophantine equation, working with the value of $u$ already
chosen, it follows from Hua's lemma and a routine application of the circle method along the
lines described in \cite[Chapter 2]{Vau1997} that
\begin{equation}\label{3.3}
\int_0^1|f(\alpha)|^u{\,{\rm d}}\alpha \leqslant \int_0^1\biggl| \sum_{1\leqslant x\leqslant P}e(\alpha x^k)\biggr|^u
{\,{\rm d}}\alpha \ll P^{u-k}.
\end{equation}
In particular, the exponent $\Delta_{s-tk}=0$ is admissible for $k$, and thus it follows from
\eqref{3.2} that $\Delta_s^*=-tk\tau(k)$ is an admissible exponent for minor arcs. We
therefore infer from Lemma \ref{lemma3.1} that
\begin{equation}\label{3.4}
\int_{{\mathfrak m}(Q)}|f(\alpha)|^s{\,{\rm d}}\alpha \ll P^{s-k}Q^{\varepsilon-2t\tau(k)}.
\end{equation}
\par Consider next a real number $\alpha$ with $\alpha\in {\mathfrak m}(Q)$, and let $\delta$ be any real
number with $|\delta|\leqslant P^{-k}$. Suppose, if possible, that $\alpha+\delta\in {\mathfrak M}(Q/2)$. In
such circumstances, there exist $a\in {\mathbb Z}$ and $q\in {\mathbb N}$ with $(a,q)=1$, $1\leqslant q\leqslant Q/2$
and $|q(\alpha+\delta)-a|\leqslant \tfrac{1}{2}QP^{-k}$. Consequently, one has
\[
|q\alpha -a|\leqslant q|\delta|+\tfrac{1}{2}QP^{-k}\leqslant QP^{-k},
\]
whence $\alpha\in {\mathfrak M}(Q)$. This yields a contradiction, so we are forced to conclude that
$\alpha+\delta\in {\mathfrak m}(Q/2)$. This observation allows us to estimate $f(\alpha)$ pointwise on
${\mathfrak m}(Q)$ in terms of mean values for $f(\alpha)$ over ${\mathfrak m}(Q/2)$. Indeed, as a
consequence of the Sobolev-Gallagher inequality (see for example Montgomery
\cite[Lemma 1.1]{Mon1971}), we have
\[
|f(\alpha)|^s\leqslant (2P^{-k})^{-1}\int_{|\beta-\alpha|\leqslant P^{-k}}|f(\beta)|^s{\,{\rm d}}\beta
+s\int_{|\beta-\alpha|\leqslant P^{-k}}|f'(\beta)f(\beta)^{s-1}|{\,{\rm d}}\beta .
\]
Hence, whenever $\alpha \in {\mathfrak m}(Q)$, we infer that
\begin{equation}\label{3.5}
|f(\alpha)|^s\ll P^kI_1+I_2,
\end{equation}
where
\[
I_1=\int_{{\mathfrak m}(Q/2)}|f(\beta)|^s{\,{\rm d}}\beta \quad \text{and}\quad I_2=\int_{{\mathfrak m}(Q/2)}
|f'(\beta)f(\beta)^{s-1}|{\,{\rm d}}\beta .
\]
\par The bound
\begin{equation}\label{3.6}
I_1\ll P^{s-k}Q^{\varepsilon-2t\tau(k)}
\end{equation}
follows from \eqref{3.4}. Meanwhile, by applying H\"older's inequality, we see that
\begin{equation}\label{3.7}
I_2\leqslant I_3^{1/u}I_4^{1-1/u},
\end{equation}
where
\begin{equation}\label{3.8}
I_3=\int_0^1|f'(\beta)|^u{\,{\rm d}}\beta \quad \text{and}\quad I_4=\int_{{\mathfrak m}(Q/2)}
|f(\beta)|^v{\,{\rm d}}\beta ,
\end{equation}
in which
\[
v=\frac{s-1}{1-1/u}.
\]
\par Recall that $u=4^k$ is even. Then since
\[
f'(\beta)=2\pi {\rm i}\sum_{x\in \mathscr A(P,R)}x^ke(\beta x^k),
\]
it follows from \eqref{3.8} by considering the underlying Diophantine equations that
\[
I_3\leqslant (2\pi P^k)^u\int_0^1|f(\beta)|^u{\,{\rm d}}\beta .
\]
On recalling \eqref{3.3}, therefore, we deduce that
\begin{equation}\label{3.9}
I_3\ll (P^k)^uP^{u-k}.
\end{equation}
Meanwhile, since $s>u$ we have $v>s$, and so it follows from \eqref{3.8} via Lemma
\ref{lemma3.1} that
\begin{equation}\label{3.10}
I_4\ll P^{v-k}Q^{\varepsilon -2|\Delta_v^*|/k},
\end{equation}
where
\[
\Delta_v^*=-(v-u)\tau(k)=-\Bigl( \frac{s-u}{1-1/u}\Bigr) \tau(k)=-\frac{tk\tau(k)}{1-1/u}.
\]
On substituting \eqref{3.9} and \eqref{3.10} into \eqref{3.7}, we find that
\begin{equation}\label{3.11}
I_2\ll P^k(P^{u-k})^{1/u}(P^{v-k})^{1-1/u}Q^{\varepsilon-2t\tau(k)}\ll P^sQ^{\varepsilon-2t\tau(k)}.
\end{equation}
\par On substituting \eqref{3.6} and \eqref{3.11} into \eqref{3.5}, we conclude that
\[
|f(\alpha)|^s\ll P^sQ^{\varepsilon-2t\tau(k)}.
\]
Thus, whenever $\alpha \in {\mathfrak m}(Q)$, we have $f(\alpha)\ll PQ^{\varepsilon-{\kappa}}$, where
\[
{\kappa}=\frac{2t\tau(k)}{u+tk}.
\]
We now take $t$ sufficiently large in terms of $k$ and obtain the upper bound
\[
f(\alpha)\ll PQ^{\varepsilon-2\tau(k)/k}.
\]
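In the final step here, we have made use of the relation
\[
{\kappa}=\frac{2t\tau(k)}{u+tk}=\frac{2\tau(k)}{k}\Bigl( 1+\frac{u}{tk}\Bigr)^{-1}
\rightarrow \frac{2\tau(k)}{k}\quad (t\rightarrow \infty),
\]
so that any exponent smaller than $2\tau(k)/k$ is attained for $t$ sufficiently large, the difference being absorbed into $\varepsilon$ by way of the convention introduced earlier in this section.\par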
This confirms the upper bound that we sought when $P^{1/(2k)}\leqslant Q\leqslant P^{k/2}$.\par
In order to handle the range of $Q$ with $1\leqslant Q<P^{1/(2k)}$, just as in the proof of
Lemma \ref{lemma3.2} we turn to the bounds made available in \cite{VW1991}. For the
sake of concision we adopt the notation of the proof of the latter lemma. Suppose first that
one has $(\log P)^{60ks}\leqslant Q\leqslant P^{1/(2k)}$ and $\alpha \in {\mathfrak m}(Q)$. Then
\cite[Lemma 7.2]{VW1991} delivers the bound
\begin{align*}
f(\alpha)&\ll P(\log P)^3{\Upsilon}(\alpha)^{-\varepsilon+1/(2k)}+P^{1-\tau(k)+\varepsilon}\\
&\ll PQ^{-1/(3k)}\ll PQ^{\varepsilon-2\tau(k)/k}.
\end{align*}
When instead $1\leqslant Q\leqslant (\log P)^{60ks}$, we appeal to \cite[Lemma 8.5]{VW1991},
deducing that
\begin{align*}
f(\alpha)&\ll P{\Upsilon}(\alpha)^{-\varepsilon+1/k}+P(\log P)^{-60ks}\\
&\ll PQ^{-1/(2k)}\ll PQ^{\varepsilon-2\tau(k)/k}.
\end{align*}
Thus, in all circumstances, we have the estimate asserted in the statement of the lemma,
and the proof is complete.
\end{proof}
For the purposes of this paper, we apply a bound for $\tau(k)$ sufficient for our
applications, though falling very slightly short of the sharpest bound attainable using current
technology. In this context, it is useful to introduce the exponent $\omega=\omega(k)$, defined
by
\begin{equation}\label{3.12}
\omega(k)=1/(Dk^2),\quad \text{where}\quad D=4.51396.
\end{equation}
\begin{lemma}\label{lemma3.4}
When $k\ge 2$ and $1\leqslant Q\leqslant P^{k/2}$, one has the uniform bound
\[
\sup_{\alpha\in {\mathfrak m}(Q)}|f(\alpha;P,R)|\ll PQ^{-\omega}.
\]
\end{lemma}
\begin{proof} When $k\ge 4$, it is shown in \cite[Lemma 7.1]{BW2022} that there is a
family of admissible exponents satisfying the property that $\tau(k)\ge (Ck)^{-1}$, where
$C=9.027901<2D$. Thus
\[
\frac{2}{k}\tau(k)>\frac{1}{Dk^2},
\]
and the desired conclusion follows from Lemma \ref{lemma3.3}.\par
When $k$ is equal to $2$ or $3$, we appeal to the formula \eqref{3.1} with the crude
bound on admissible exponents available from Hua's lemma (see
\cite[Lemma 2.5]{Vau1997}). Thus, we have the admissible exponent $\Delta_{2^k}=0$
since
\[
\int_0^1|f(\alpha)|^{2^k}{\,{\rm d}}\alpha \leqslant \int_0^1 \biggl|\sum_{1\leqslant x\leqslant P}e(\alpha x^k)
\biggr|^{2^k}{\,{\rm d}}\alpha \ll P^{2^k-k+\varepsilon},
\]
and hence we deduce via \eqref{3.1} that $\tau(2)\ge 1/8$ and $\tau(3)\ge 3/64$. Thus
\[
\frac{2}{2}\tau(2)\ge \frac{1}{8}>\frac{1}{18}>\omega (2)\quad \text{and}\quad
\frac{2}{3}\tau(3)\ge \frac{1}{32}>\frac{1}{40}>\omega (3).
\]
In each of these cases, the desired conclusion again follows from Lemma \ref{lemma3.3}.
\end{proof}
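For ease of verification, we note that with $D=4.51396$ one has
\[
\omega(2)=\frac{1}{4D}=\frac{1}{18.05584}<\frac{1}{18}\quad \text{and}\quad
\omega(3)=\frac{1}{9D}=\frac{1}{40.62564}<\frac{1}{40},
\]
confirming the numerical comparisons employed in the proof of Lemma \ref{lemma3.4}.\par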
We finish this section with a formulation of our new minor arc estimate of sufficient flexibility
that further applications may be anticipated.
\begin{theorem}\label{theorem3.5} Suppose that $a\in {\mathbb Z}$ and $q\in {\mathbb N}$ satisfy
$(a,q)=1$. Then one has
\[
f(\alpha;P,R)\ll P\left( \lambda^{-1}+\lambda P^{-k}\right)^\omega,
\]
where $\lambda =q+P^k|q\alpha -a|$.
\end{theorem}
\begin{proof} We begin by establishing the superficially weaker assertion that, whenever
$\alpha\in {\mathbb R}$, $c\in {\mathbb Z}$ and $t\in {\mathbb N}$ satisfy $(c,t)=1$ and $|\alpha -c/t|\leqslant 1/t^2$, then
\begin{equation}\label{3.13}
f(\alpha)\ll P(t^{-1}+tP^{-k})^\omega .
\end{equation}
From this assertion, it follows via a standard transference principle (see for example
\cite[Lemma 14.1]{Woo2015b}) that the conclusion of the lemma holds.\par
Suppose then that $c$ and $t$ satisfy the relations $(c,t)=1$ and $|\alpha -c/t|\leqslant 1/t^2$. We
apply Dirichlet's approximation theorem. Thus, there exist $b\in {\mathbb Z}$ and $r\in {\mathbb N}$ with
$(b,r)=1$ satisfying $1\leqslant r\leqslant P^{k/2}$ and $|r\alpha-b|\leqslant P^{-k/2}$. We now put
\[
Q=\max \{ r, P^k|r\alpha -b|\} \leqslant P^{k/2},
\]
so that $r\leqslant Q$ and $|r\alpha -b|\leqslant QP^{-k}$, and either $r\ge Q$ or
$|r\alpha -b|\ge QP^{-k}$. Thus $\alpha \in {\mathfrak M}(Q)\setminus {\mathfrak M}(Q/2)\subseteq {\mathfrak m}(Q/2)$,
and it follows from Lemma \ref{lemma3.4} that
\begin{equation}\label{3.14}
f(\alpha)\ll PQ^{-\omega}.
\end{equation}
When $c/t\ne b/r$, it follows from the triangle inequality that
\[
\frac{1}{tr}\leqslant \Bigl| \frac{c}{t}-\frac{b}{r}\Bigr| \leqslant \Bigl| \alpha -\frac{c}{t}\Bigr|
+\Bigl| \alpha - \frac{b}{r}\Bigr| \leqslant \frac{1}{t^2}+\frac{Q}{rP^k},
\]
whence
\[
1\leqslant \frac{r}{t}+\frac{tQ}{P^k}.
\]
Thus, we have either $r\ge \tfrac{1}{2}t$ or $Q\ge \tfrac{1}{2}P^k/t$. When instead
$c/t=b/r$, we have $b=c$ and $r=t$, and the same conclusion holds. In either case,
therefore, we find that $Q=\max\{ r,P^k|r\alpha -b|\}\ge \tfrac{1}{2}\min \{ t,P^k/t\}$.
Thus, we infer from \eqref{3.14} that
\[
f(\alpha)\ll P\left( t^{-\omega}+(P^k/t)^{-\omega}\right) \ll P( t^{-1}+tP^{-k})^\omega.
\]
Thus the desired conclusion \eqref{3.13} follows, and the proof of the theorem is complete.
\end{proof}
\section{The minor arc contribution for ascending powers} We now address the
representation problem \eqref{2.2} and adopt the notation of \S2. In situations wherein
$k_2$ may be substantially larger than $k_1$, we apply a Weyl-type estimate only for the
exponential sum $f_1(\alpha)$, estimating the remaining ones in mean. Put
\begin{equation}\label{4.1}
F_1(\alpha)=f_1(\alpha),\quad G_1(\alpha)=f_2(\alpha)f_3(\alpha)\cdots f_s(\alpha),
\end{equation}
and note that in view of \eqref{2.1}, \eqref{2.3} and \eqref{2.4}, one has
$F_1(0)G_1(0)\asymp n^\theta$. We take $Q$ to be a parameter with
$1\leqslant Q\leqslant n^{1/2}$ and define a Hardy-Littlewood dissection in accordance with that
introduced in \S3. Thus, the major arcs ${\mathfrak M}(Q)$ are defined to be the union of the sets
\[
{\mathfrak M}(q,a;Q)=\{ \alpha\in [0,1):|q\alpha-a|\leqslant Qn^{-1}\},
\]
with $0\leqslant a\leqslant q\leqslant Q$ and $(a,q)=1$, and the associated set of minor arcs is defined by
setting ${\mathfrak m}(Q)=[0,1)\setminus {\mathfrak M}(Q)$. Also, we put
${\mathfrak N}(Q)={\mathfrak M}(Q)\setminus {\mathfrak M}(Q/2)$. Note that ${\mathfrak N}(Q)\subseteq {\mathfrak m}(Q/2)$. Since
$n=P_i^{k_i}$ for each $i$, these definitions align with those of \S3 when considering the
smooth Weyl sum $f_i(\alpha)$.\par
We begin by recording a Weyl-type estimate for $F_1(\alpha)$.
\begin{lemma}\label{lemma4.1}
When $1\leqslant Q\leqslant n^{1/2}$, one has the bound
\[
\sup_{\alpha\in {\mathfrak m}(Q)}|F_1(\alpha)|\ll F_1(0)Q^{-1/(Dk_1^2)}.
\]
\end{lemma}
\begin{proof} In view of \eqref{4.1}, this estimate is immediate from Lemma
\ref{lemma3.4}.
\end{proof}
The mean value estimate that we obtain for $G_1({\alpha}} \def\bfalp{{\boldsymbol \alpha})$ depends on admissible exponent
bounds. Here we note that, whenever $v$ is even, the corollary to
\cite[Theorem 2.1]{Woo1993} shows that the exponent $\Del_v$ is admissible for $k\ge 4$,
where $\Del_v$ is the unique positive solution of the equation
\begin{equation}\label{4.2}
\Del_ve^{\Del_v/k}=ke^{1-v/k}.
\end{equation}
When $k$ is equal to $2$ or $3$, the admissible exponents available from Hua's lemma
show that the real numbers $\Del_v$ defined via \eqref{4.2} are admissible. Of course,
much sharper estimates are known in these cases (see \cite{Woo2015a} for the sharpest
available conclusions when $k=3$). We note that the exponent $\Del_s$ in \cite{Woo1993}
corresponds to our $\Del_v$ with $v=2s$, owing to the slightly different definition employed
therein.\par
We next provide an upper bound for the mean value of $|G_1({\alpha}} \def\bfalp{{\boldsymbol \alpha})|$ over the intermediate
set of arcs $\grN(Q)$. In this context, it is convenient to introduce the quantity
\begin{equation}\label{4.3}
\Phi_1=\sum_{i=2}^s\frac{1}{k_i}.
\end{equation}
\begin{lemma}\label{lemma4.2}
When $1\leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec Q\leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec n^{1/2}$, one has the bound
\[
\int_{\grN(Q)}|G_1({\alpha}} \def\bfalp{{\boldsymbol \alpha})|{\,{\rm d}}{\alpha}} \def\bfalp{{\boldsymbol \alpha} \ll G_1(0)n^{-1}Q^{2\Tet_1},
\]
where $\Tet_1=e^{1-\Phi_1+2/k_2}$.
\end{lemma}
\begin{proof} Define the exponents
\begin{equation}\label{4.4}
t_i=k_i\Phi_1\quad (2\leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec i\leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec s).
\end{equation}
Then it follows from \eqref{4.3} that we have
\[
\sum_{i=2}^s\frac{1}{t_i}=\frac{1}{\Phi_1}\sum_{i=2}^s\frac{1}{k_i}=1,
\]
and hence an application of H\"older's inequality leads us from \eqref{4.1} to the bound
\begin{equation}\label{4.5}
\int_{\grN(Q)}|G_1({\alpha}} \def\bfalp{{\boldsymbol \alpha})|{\,{\rm d}}{\alpha}} \def\bfalp{{\boldsymbol \alpha} \leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec \prod_{i=2}^s I_i^{1/t_i},
\end{equation}
where
\begin{equation}\label{4.6}
I_i=\int_{\grN(Q)}|f_i({\alpha}} \def\bfalp{{\boldsymbol \alpha})|^{t_i}{\,{\rm d}}{\alpha}} \def\bfalp{{\boldsymbol \alpha} .
\end{equation}
\par For each index $i$, the largest even integer not exceeding $t_i$ is larger than $t_i-2$,
and hence it follows from \eqref{4.2} that there is an exponent $\Del_{t_i}$ admissible for
$k_i$ with
\[
\Del_{t_i}<k_ie^{1-(t_i-2)/k_i}.
\]
Since $\grN(Q)\subseteq \grM(Q)$, we find from \eqref{4.6} via Lemma \ref{lemma3.2}
that
\[
I_i\ll P_i^{t_i-k_i}Q^{\varepsilon+2\Del_{t_i}/k_i}.
\]
Thus, in view of \eqref{2.3} and \eqref{4.4}, we have
\begin{equation}\label{4.7}
I_i\ll P_i^{t_i}n^{-1}Q^{2{\delta}} \def\Del{{\Delta}_i}\quad (2\leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec i\leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec s),
\end{equation}
where
\[
{\delta}} \def\Del{{\Delta}_i=e^{1-\Phi_1+2/k_i}.
\]
On substituting \eqref{4.7} into \eqref{4.5}, we conclude that
\begin{equation}\label{4.8}
\int_{\grN(Q)}|G_1({\alpha}} \def\bfalp{{\boldsymbol \alpha})|{\,{\rm d}}{\alpha}} \def\bfalp{{\boldsymbol \alpha} \ll G_1(0)n^{-1}Q^{2\Ome_1},
\end{equation}
where, since ${\delta}} \def\Del{{\Delta}_i\leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec {\delta}} \def\Del{{\Delta}_2$ for each $i$, one has
\[
\Ome_1=\sum_{i=2}^s\frac{{\delta}} \def\Del{{\Delta}_i}{t_i}\leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec e^{1-\Phi_1+2/k_2}\sum_{i=2}^s\frac{1}{t_i}=e^{1-\Phi_1+2/k_2}=\Tet_1.
\]
Since $Q\ge 1$, the conclusion of the lemma is therefore immediate from \eqref{4.8}.
\end{proof}
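The H\"older step in this proof rests on the identity $\sum_{i=2}^s t_i^{-1}=1$, and on the observation that ${\delta}} \def\Del{{\Delta}_i$ is largest for $i=2$ when the exponents $k_i$ are nondecreasing. As an illustrative sanity check (not part of the argument), one may confirm both facts numerically for a hypothetical choice of exponents $k_2,\ldots ,k_s$ (the values below are not taken from the paper):

```python
import math

# hypothetical nondecreasing exponents k_2 <= ... <= k_s (illustrative only)
ks = [3, 4, 5, 7, 11]
phi1 = sum(1.0 / k for k in ks)          # Phi_1 as in (4.3)
ts = [k * phi1 for k in ks]              # t_i = k_i * Phi_1 as in (4.4)

# the Holder exponents are conjugate: sum of 1/t_i equals 1
assert abs(sum(1.0 / t for t in ts) - 1.0) < 1e-12

# delta_i = e^{1 - Phi_1 + 2/k_i} is largest at the smallest exponent k_2
deltas = [math.exp(1.0 - phi1 + 2.0 / k) for k in ks]
assert deltas == sorted(deltas, reverse=True)
```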
By combining the conclusions of Lemmata \ref{lemma4.1} and \ref{lemma4.2}, we obtain a
minor arc estimate sufficient for our proof of Theorem \ref{theorem1.1}. Here and
henceforth, we fix $\eta$ to be a positive number sufficiently small in terms of
$k_1,k_2,\ldots ,k_s$, and $\varepsilon$, in the context of the estimates of this and the previous
section relevant for the various admissible exponents encountered. Also, we recall the
notation of writing $L$ for $\log n$.
\begin{lemma}\label{lemma4.3}
Suppose that
\begin{equation}\label{4.9}
\sum_{i=2}^s\frac{1}{k_i}>2\log k_1+\frac{2}{k_2}+1+\log (2D).
\end{equation}
Then there is a positive number ${\delta}} \def\Del{{\Delta}$ having the property that
\[
\int_{\mathfrak k} |\mathscr F({\alpha}} \def\bfalp{{\boldsymbol \alpha})|{\,{\rm d}}{\alpha}} \def\bfalp{{\boldsymbol \alpha} \ll n^{{\theta}} \def\bftet{{\boldsymbol \theta}} \def\Tet{{\Theta}-1}L^{-{\delta}} \def\Del{{\Delta}}.
\]
\end{lemma}
\begin{proof} By referring to the definition of ${\mathfrak k}$ in \S2, we see that ${\mathfrak k}\subset
{\mathfrak m}}\def\grM{{\mathfrak M}}\def\grN{{\mathfrak N}(L^{1/15})$. When $L^{1/15}\leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec Q\leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec n^{1/2}$, it follows from Lemmata
\ref{lemma4.1} and \ref{lemma4.2} that
\begin{align*}
\int_{\grN(Q)}|F_1({\alpha}} \def\bfalp{{\boldsymbol \alpha})G_1({\alpha}} \def\bfalp{{\boldsymbol \alpha})|{\,{\rm d}}{\alpha}} \def\bfalp{{\boldsymbol \alpha} &\leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec \Bigl( \sup_{{\alpha}} \def\bfalp{{\boldsymbol \alpha} \in \grN(Q)}|F_1({\alpha}} \def\bfalp{{\boldsymbol \alpha})|\Bigr)
\int_{\grN(Q)}|G_1({\alpha}} \def\bfalp{{\boldsymbol \alpha})|{\,{\rm d}}{\alpha}} \def\bfalp{{\boldsymbol \alpha} \\
&\ll F_1(0)Q^{-1/(Dk_1^2)}G_1(0)n^{-1}Q^{2\Tet_1}.
\end{align*}
Provided that the hypothesis \eqref{4.9} holds, it follows from \eqref{4.3} that
\[
e^{\Phi_1}>2e^{1+2/k_2}Dk_1^2,
\]
whence $2\Tet_1<1/(Dk_1^2)$. Put
\[
{\delta}} \def\Del{{\Delta}=\frac{1}{15}\Bigl(\frac{1}{Dk_1^2}-2\Tet_1\Bigr).
\]
Then, on recalling \eqref{2.5}, we may conclude thus far that
\[
\int_{\grN(Q)}|\mathscr F({\alpha}} \def\bfalp{{\boldsymbol \alpha})|{\,{\rm d}}{\alpha}} \def\bfalp{{\boldsymbol \alpha} \ll n^{{\theta}} \def\bftet{{\boldsymbol \theta}} \def\Tet{{\Theta}-1}Q^{-15{\delta}} \def\Del{{\Delta}}.
\]
But ${\mathfrak k}$ is covered by the sets $\grN(Q)$ via a dyadic dissection, and we see that
\begin{align*}
\int_{\mathfrak k} |\mathscr F({\alpha}} \def\bfalp{{\boldsymbol \alpha})|{\,{\rm d}}{\alpha}} \def\bfalp{{\boldsymbol \alpha} &\leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec
\sum_{\substack{l=0\\ 2^l\leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec n^{1/2}L^{-1/15}}}^\infty \int_{\grN(2^{-l}n^{1/2})}
|\mathscr F({\alpha}} \def\bfalp{{\boldsymbol \alpha})|{\,{\rm d}}{\alpha}} \def\bfalp{{\boldsymbol \alpha} \\
&\ll n^{{\theta}} \def\bftet{{\boldsymbol \theta}} \def\Tet{{\Theta}-1}\sum_{\substack{l=0\\ 2^l\leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec n^{1/2}L^{-1/15}}}^\infty
(2^{-l}n^{1/2})^{-15{\delta}} \def\Del{{\Delta}}\\
&\ll n^{{\theta}} \def\bftet{{\boldsymbol \theta}} \def\Tet{{\Theta}-1}L^{-{\delta}} \def\Del{{\Delta}}.
\end{align*}
\end{proof}
We complete this section by addressing the particular situation relevant to the second
conclusion of Theorem \ref{theorem1.3}.
\begin{corollary}\label{corollary4.4}
Suppose that $k$ and $r$ are natural numbers with $r\ge k\ge 2$, and put $k_i=k+r(i-1)$
$(1\leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec i\leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec s)$. Then, provided that $s\ge (6k+6)^{2r}$, there is a positive number ${\delta}} \def\Del{{\Delta}$
having the property that
\[
\int_{\mathfrak k} |\mathscr F({\alpha}} \def\bfalp{{\boldsymbol \alpha})|{\,{\rm d}}{\alpha}} \def\bfalp{{\boldsymbol \alpha} \ll n^{{\theta}} \def\bftet{{\boldsymbol \theta}} \def\Tet{{\Theta}-1}L^{-{\delta}} \def\Del{{\Delta}}.
\]
\end{corollary}
\begin{proof} We apply Lemma \ref{lemma4.3}, observing that by a familiar argument one
has
\[
\sum_{i=2}^s\frac{1}{k+r(i-1)}>\int_1^s\frac{{\,{\rm d}} t}{k+rt}=\frac{1}{r}\log
\biggl( \frac{k+rs}{k+r}\biggr) .
\]
Thus, provided that one has
\begin{equation}\label{4.10}
\frac{1}{r}\log \biggl( \frac{k+rs}{k+r}\biggr) \ge 2\log k+\frac{2}{k+r}+1+\log (2D),
\end{equation}
then the conclusion of the corollary follows from Lemma \ref{lemma4.3}. We observe that
the hypothesis $r\ge k$ ensures that
\[
\frac{1}{k+r}\leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec \frac{1}{2k}<\frac{1}{k}-\frac{1}{2k^2}<\log \Bigl( 1+\frac{1}{k}\Bigr),
\]
and hence $e^{1/(k+r)}<1+1/k$. It follows that the lower bound \eqref{4.10} is satisfied
provided that
\[
k+rs\ge (2eD(k+1)^2)^r(k+r).
\]
Thus, on recalling that $k\leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec r$, we conclude that the lower bound \eqref{4.10} holds
whenever $s\ge 2(2eD)^r(k+1)^{2r}$. In view of \eqref{3.12}, however, a modicum of
computation confirms the bound $2^{1+1/r}eD<35$ for $r\ge 2$, and hence the conclusion
asserted in the corollary holds whenever $s\ge (6k+6)^{2r}$, completing the proof.
\end{proof}
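The elementary chain $1/(2k)<1/k-1/(2k^2)<\log (1+1/k)$ used in the proof above, together with its consequence $e^{1/(k+r)}<1+1/k$ for $r\ge k$, is easily confirmed numerically. The following is an illustrative check only, not part of the proof:

```python
import math

# confirm 1/(2k) < 1/k - 1/(2k^2) < log(1 + 1/k) for a range of k >= 2
for k in range(2, 200):
    lower = 1.0 / (2 * k)
    middle = 1.0 / k - 1.0 / (2 * k * k)
    upper = math.log(1.0 + 1.0 / k)
    assert lower < middle < upper
    # consequence used above: e^{1/(k+r)} < 1 + 1/k whenever r >= k
    for r in (k, 2 * k, 10 * k):
        assert math.exp(1.0 / (k + r)) < 1.0 + 1.0 / k
```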
\section{An enhanced minor arc estimate} Given exponents $k_i$ $(i\ge 1)$ having the
property that $k_i$ is small for numerous small indices $i$, one may sharpen the analysis
of the minor arcs presented in the previous section. We illustrate the underlying strategy
in this section with a consideration of the situation in which
\[
k_i=k+r(i-1)\quad (1\leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec i\leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec s),
\]
with $r$ small. We now put
\begin{equation}\label{5.1}
F_2({\alpha}} \def\bfalp{{\boldsymbol \alpha})=f_1({\alpha}} \def\bfalp{{\boldsymbol \alpha})f_2({\alpha}} \def\bfalp{{\boldsymbol \alpha})\cdots f_k({\alpha}} \def\bfalp{{\boldsymbol \alpha})\quad \text{and}\quad
G_2({\alpha}} \def\bfalp{{\boldsymbol \alpha})=f_{k+1}({\alpha}} \def\bfalp{{\boldsymbol \alpha})f_{k+2}({\alpha}} \def\bfalp{{\boldsymbol \alpha})\cdots f_s({\alpha}} \def\bfalp{{\boldsymbol \alpha}).
\end{equation}
In accord with the discussion of the previous section, we have
$F_2(0)G_2(0)\asymp n^{\theta}} \def\bftet{{\boldsymbol \theta}} \def\Tet{{\Theta}$. In all other respects, we adopt the notation of the previous
section, with an analysis so similar that we are able to economise on detail.
\begin{lemma}\label{lemma5.1}
When $1\leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec Q\leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec n^{1/2}$, one has the bound
\[
\sup_{{\alpha}} \def\bfalp{{\boldsymbol \alpha} \in {\mathfrak m}}\def\grM{{\mathfrak M}}\def\grN{{\mathfrak N}(Q)}|F_2({\alpha}} \def\bfalp{{\boldsymbol \alpha})|\ll F_2(0)Q^{-1/(Dk(r+1))}.
\]
\end{lemma}
\begin{proof} As a consequence of Lemma \ref{lemma3.4}, it follows from \eqref{5.1} that
\begin{equation}\label{5.2}
\sup_{{\alpha}} \def\bfalp{{\boldsymbol \alpha}\in {\mathfrak m}}\def\grM{{\mathfrak M}}\def\grN{{\mathfrak N}(Q)}|F_2({\alpha}} \def\bfalp{{\boldsymbol \alpha})|\ll \prod_{i=1}^k P_iQ^{-1/(Dk_i^2)}=F_2(0)Q^{-\phi},
\end{equation}
in which we put
\[
\phi=\sum_{i=1}^k\frac{1}{D(k+r(i-1))^2}.
\]
However, by applying a familiar lower bound, we find that
\[
\phi>\frac{1}{D}\int_0^k\frac{{\,{\rm d}} t}{(k+rt)^2}=\frac{1}{Dr}
\Bigl( \frac{1}{k}-\frac{1}{k(r+1)}\Bigr) =\frac{1}{Dk(r+1)}.
\]
The conclusion of the lemma follows by substituting this estimate into \eqref{5.2}.
\end{proof}
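The integral comparison in the final step of this proof is again elementary. Cancelling the common factor $1/D$ from both sides, an illustrative numerical spot-check (not part of the argument) of the inequality $\sum_{i=1}^k(k+r(i-1))^{-2}>1/(k(r+1))$ reads:

```python
# verify sum_{i=1}^{k} (k + r(i-1))^{-2} > 1/(k(r+1)) for sample (k, r);
# the common factor 1/D appearing in the proof has been cancelled
for k in range(2, 40):
    for r in range(1, 40):
        lhs = sum((k + r * (i - 1)) ** -2 for i in range(1, k + 1))
        assert lhs > 1.0 / (k * (r + 1))
```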
Our upper bound for the mean value of $|G_2({\alpha}} \def\bfalp{{\boldsymbol \alpha})|$ over $\grN(Q)$ is obtained through
a modification of the corresponding treatment of $G_1({\alpha}} \def\bfalp{{\boldsymbol \alpha})$ in Lemma \ref{lemma4.2}. We
now put
\begin{equation}\label{5.3}
\Phi_2=\sum_{i=k+1}^s\frac{1}{k_i}.
\end{equation}
\begin{lemma}\label{lemma5.2} When $1\leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec Q\leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec n^{1/2}$, one has the bound
\[
\int_{\grN(Q)}|G_2({\alpha}} \def\bfalp{{\boldsymbol \alpha})|{\,{\rm d}}{\alpha}} \def\bfalp{{\boldsymbol \alpha} \ll G_2(0)n^{-1}Q^{2\Tet_2},
\]
where $\Tet_2=e^{1-\Phi_2+2/(k(r+1))}$.
\end{lemma}
\begin{proof} Define the exponents $t_i=k_i\Phi_2$ $(k+1\leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec i\leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec s)$. Then by following the
argument of the proof of Lemma \ref{lemma4.2} mutatis mutandis, we obtain the upper
bound
\[
\int_{\grN(Q)}|G_2({\alpha}} \def\bfalp{{\boldsymbol \alpha})|{\,{\rm d}}{\alpha}} \def\bfalp{{\boldsymbol \alpha} \ll \prod_{i=k+1}^s \left( P_i^{t_i}n^{-1}Q^{2{\delta}} \def\Del{{\Delta}_i}
\right)^{1/t_i},
\]
where we now write ${\delta}} \def\Del{{\Delta}_i=e^{1-\Phi_2+2/k_i}$. Thus we infer that
\[
\int_{\grN(Q)}|G_2({\alpha}} \def\bfalp{{\boldsymbol \alpha})|{\,{\rm d}}{\alpha}} \def\bfalp{{\boldsymbol \alpha} \ll G_2(0)n^{-1}Q^{2\Ome_2},
\]
where $\Ome_2=\sum_{i=k+1}^s{\delta}} \def\Del{{\Delta}_i/t_i\leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec e^{1-\Phi_2+2/(k(r+1))}=\Tet_2$, since ${\delta}} \def\Del{{\Delta}_i$ is largest
when $k_i$ is smallest, namely for $i=k+1$ with $k_{k+1}=k(r+1)$. Since $Q\ge 1$, this
delivers the conclusion of the lemma.
\end{proof}
We now combine the conclusions of Lemmata \ref{lemma5.1} and \ref{lemma5.2} much as
in the proof of Lemma \ref{lemma4.3}.
\begin{lemma}\label{lemma5.3}
Suppose that $k$ and $r$ are natural numbers with $r\ge 1$ and $k\ge 2$, and put
$k_i=k+r(i-1)$ $(1\leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec i\leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec s)$. Then, provided that $s\ge A(r)(k+1)^{r+1}$, where
$A(r)=r^{-1}25^r(r+1)^{r+1}$, there is a positive number ${\delta}} \def\Del{{\Delta}$ having the property that
\[
\int_{\mathfrak k} |\mathscr F({\alpha}} \def\bfalp{{\boldsymbol \alpha})|{\,{\rm d}}{\alpha}} \def\bfalp{{\boldsymbol \alpha} \ll n^{{\theta}} \def\bftet{{\boldsymbol \theta}} \def\Tet{{\Theta}-1}L^{-{\delta}} \def\Del{{\Delta}}.
\]
\end{lemma}
\begin{proof} We again have ${\mathfrak k}\subset {\mathfrak m}}\def\grM{{\mathfrak M}}\def\grN{{\mathfrak N}(L^{1/15})$. When
$L^{1/15}\leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec Q\leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec n^{1/2}$, we find from Lemmata \ref{lemma5.1} and \ref{lemma5.2}
that
\begin{align}
\int_{\grN(Q)}|F_2({\alpha}} \def\bfalp{{\boldsymbol \alpha})G_2({\alpha}} \def\bfalp{{\boldsymbol \alpha})|{\,{\rm d}}{\alpha}} \def\bfalp{{\boldsymbol \alpha} &\leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec \Bigl( \sup_{{\alpha}} \def\bfalp{{\boldsymbol \alpha} \in \grN(Q)}|F_2({\alpha}} \def\bfalp{{\boldsymbol \alpha})|
\Bigr) \int_{\grN(Q)}|G_2({\alpha}} \def\bfalp{{\boldsymbol \alpha})|{\,{\rm d}}{\alpha}} \def\bfalp{{\boldsymbol \alpha} \notag\\
&\ll F_2(0)Q^{-1/(Dk(r+1))}G_2(0)n^{-1}Q^{2\Tet_2}.\label{5.4}
\end{align}
On recalling \eqref{5.3}, we see that
\[
\Phi_2=\sum_{i=k+1}^s\frac{1}{k+r(i-1)}>\int_k^s\frac{{\,{\rm d}} t}{k+rt}=\frac{1}{r}\log
\biggl( \frac{k+rs}{k(r+1)}\biggr) ,
\]
and hence
\[
\Tet_2<e^{1+2/(k(r+1))}\biggl( \frac{k(r+1)}{k+rs}\biggr)^{1/r}.
\]
Therefore, provided that
\begin{equation}\label{5.5}
2e^{1+2/(k(r+1))}\biggl( \frac{k(r+1)}{k+rs}\biggr)^{1/r}<\frac{1}{Dk(r+1)},
\end{equation}
then it follows from \eqref{5.4} that there is a positive number ${\delta}} \def\Del{{\Delta}$ having the property
that
\begin{equation}\label{5.6}
\int_{\grN(Q)}|F_2({\alpha}} \def\bfalp{{\boldsymbol \alpha})G_2({\alpha}} \def\bfalp{{\boldsymbol \alpha})|{\,{\rm d}}{\alpha}} \def\bfalp{{\boldsymbol \alpha} \ll F_2(0)G_2(0)n^{-1}Q^{-15{\delta}} \def\Del{{\Delta}}.
\end{equation}
\par We now set about establishing the inequality \eqref{5.5}. Observe first that since
$r\ge 1$, one has
\[
\frac{2r}{k(r+1)^2}\leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec \frac{1}{2k}<\frac{1}{k}-\frac{1}{2k^2}<
\log \Bigl( 1+\frac{1}{k}\Bigr) ,
\]
and thus
\[
e^{2r/(k(r+1))}<(1+1/k)^{r+1}.
\]
We consequently infer that \eqref{5.5} holds whenever
\[
k+rs>k(r+1)(2eDk(r+1))^r (1+1/k)^{r+1}.
\]
On recalling \eqref{3.12}, we find that $2eD<25$, and thus it follows that \eqref{5.5}
holds whenever
\[
s\ge r^{-1}25^r(r+1)^{r+1}(k+1)^{r+1}=A(r)(k+1)^{r+1}.
\]
Since this lower bound is one of the hypotheses of the lemma, we may henceforth work
under the assumption that the upper bound \eqref{5.6} holds.\par
On recalling \eqref{2.5} and \eqref{2.1}, we find that \eqref{5.6} yields the bound
\[
\int_{\grN(Q)}|\mathscr F({\alpha}} \def\bfalp{{\boldsymbol \alpha})|{\,{\rm d}}{\alpha}} \def\bfalp{{\boldsymbol \alpha} \ll n^{{\theta}} \def\bftet{{\boldsymbol \theta}} \def\Tet{{\Theta}-1}Q^{-15{\delta}} \def\Del{{\Delta}}.
\]
A comparison with the concluding part of the argument of the proof of Lemma
\ref{lemma4.3}, using a dyadic dissection of ${\mathfrak k}$ into subsets of the shape $\grN(Q)$,
therefore leads us from here to the conclusion of the lemma.
\end{proof}
\section{The major arc contribution} The goal of this section is to make progress on
establishing the lower bound \eqref{2.8} for the contribution of the major arcs to
$T(n;\eta)$. Throughout this section and the next, we work under the hypothesis that
\begin{equation}\label{6.1}
{\theta}} \def\bftet{{\boldsymbol \theta}} \def\Tet{{\Theta}=\sum_{i=1}^s\frac{1}{k_i}>2.
\end{equation}
The hypotheses available to us in Theorem \ref{theorem1.1} ensure that ${\theta}} \def\bftet{{\boldsymbol \theta}} \def\Tet{{\Theta}>3$,
thereby confirming \eqref{6.1} with room to spare. In the first conclusion of Theorem
\ref{theorem1.3}, meanwhile, we have in particular $s>25^r(k+1)$, and thus
\begin{align*}
{\theta}} \def\bftet{{\boldsymbol \theta}} \def\Tet{{\Theta}&=\sum_{i=1}^s\frac{1}{k+r(i-1)}>\int_0^s\frac{{\,{\rm d}} t}{k+rt}=\frac{1}{r}\log
\biggl( \frac{k+rs}{k}\biggr) \\
&>\frac{1}{r}\log (25^r)=\log 25>2,
\end{align*}
and the hypothesis \eqref{6.1} again holds. On the other hand, in the second conclusion
of Theorem \ref{theorem1.3} one has $r\ge k$ and $s\ge (6k+6)^{2r}$, whence a similar
argument yields
\[
{\theta}} \def\bftet{{\boldsymbol \theta}} \def\Tet{{\Theta}>\frac{1}{r}\log \left( (6k+6)^r\right)=\log (6k+6)\ge \log 18>2,
\]
and \eqref{6.1} holds once again. The upshot of this discussion is that we are cleared in all
circumstances to work henceforth under the assumption that \eqref{6.1} holds.\par
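The three verifications of \eqref{6.1} above are readily checked numerically. The sketch below (with illustrative parameter choices only) tests the first regime of Theorem \ref{theorem1.3} by direct summation, and the second regime via the integral lower bound, since summing the series directly there would involve an enormous number of terms:

```python
import math

def theta(k, r, s):
    # theta = sum_{i=1}^{s} 1/k_i with k_i = k + r(i-1)
    return sum(1.0 / (k + r * (i - 1)) for i in range(1, s + 1))

# first regime: s > 25^r (k+1); keep r = 1 so that s stays modest
for k in (2, 3, 5):
    s = 25 * (k + 1) + 1
    assert theta(k, 1, s) > 2

# second regime: r >= k and s >= (6k+6)^{2r}; use the lower bound
# theta > (1/r) log((k + r s)/k) instead of summing ~10^8 terms
for k in (2, 3):
    r, s = k, (6 * k + 6) ** (2 * k)
    assert math.log((k + r * s) / k) / r > 2
```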
Suppose next that ${\alpha}} \def\bfalp{{\boldsymbol \alpha}\in {\mathfrak K}}\def\grL{{\mathfrak L}}\def\grp{{\mathfrak p}(q,a)\subseteq {\mathfrak K}}\def\grL{{\mathfrak L}}\def\grp{{\mathfrak p}$. The standard theory of smooth Weyl
sums (see \cite[Lemma 5.4]{Vau1989}) shows that there is a positive number $c=c(\eta)$
such that for $1\leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec i\leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec s$, one has
\[
f_i({\alpha}} \def\bfalp{{\boldsymbol \alpha})=cq^{-1}S_i(q,a)v_i({\alpha}} \def\bfalp{{\boldsymbol \alpha}-a/q)+O(P_iL^{-1/4}),
\]
wherein
\[
S_i(q,a)=\sum_{t=1}^qe(at^{k_i}/q)\quad \text{and}\quad v_i({\beta}} \def\bfbet{{\boldsymbol \beta})=\frac{1}{k_i}
\sum_{m\leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec n}m^{-1+1/k_i}e({\beta}} \def\bfbet{{\boldsymbol \beta} m).
\]
Put
\begin{equation}\label{6.2}
{\mathfrak J}(n;X)=\int_{-X/n}^{X/n}v_1({\beta}} \def\bfbet{{\boldsymbol \beta})v_2({\beta}} \def\bfbet{{\boldsymbol \beta})\cdots v_s({\beta}} \def\bfbet{{\boldsymbol \beta})e(-{\beta}} \def\bfbet{{\boldsymbol \beta} n){\,{\rm d}}{\beta}} \def\bfbet{{\boldsymbol \beta} .
\end{equation}
Also, write
\[
\grS(n;X)=\sum_{1\leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec q\leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec X}q^{-s}U_n(q),
\]
where
\begin{equation}\label{6.4}
U_n(q)=\sum_{\substack{a=1\\ (a,q)=1}}^q S_1(q,a)S_2(q,a)\cdots S_s(q,a)e(-na/q).
\end{equation}
Then since ${\mathfrak K}}\def\grL{{\mathfrak L}}\def\grp{{\mathfrak p}$ has measure $O(L^{1/5}n^{-1})$, we see that
\begin{equation}\label{6.5}
\int_{\mathfrak K}}\def\grL{{\mathfrak L}}\def\grp{{\mathfrak p} \mathscr F({\alpha}} \def\bfalp{{\boldsymbol \alpha})e(-n{\alpha}} \def\bfalp{{\boldsymbol \alpha}){\,{\rm d}}{\alpha}} \def\bfalp{{\boldsymbol \alpha} =c^s{\mathfrak J}(n;L^{1/15})\grS(n;L^{1/15})
+O(P_1P_2\cdots P_sn^{-1}L^{-1/20}).
\end{equation}
\par We show in the next section that the singular series
\begin{equation}\label{6.6}
\grS(n)=\sum_{q=1}^\infty q^{-s}U_n(q)
\end{equation}
converges absolutely and uniformly for $n\in \dbN$, and moreover that $\grS(n)\gg 1$
whenever $s\ge 4k_1$ and the condition \eqref{6.1} holds. Moreover, under the latter
condition we show further that there is a positive number ${\delta}} \def\Del{{\Delta}$ such that
\begin{equation}\label{6.7}
\grS(n)-\grS(n;X)\ll X^{-{\delta}} \def\Del{{\Delta}}.
\end{equation}
\par Next, on making use of the bound supplied by \cite[Lemma 2.8]{Vau1997}, one finds
that
\[
v_i({\beta}} \def\bfbet{{\boldsymbol \beta})\ll P_i(1+n\| {\beta}} \def\bfbet{{\boldsymbol \beta}\|)^{-1/k_i}\quad (1\leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec i\leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec s).
\]
Hence, working under the hypothesis \eqref{6.1}, we deduce from \eqref{6.2} that there is
a positive number ${\delta}} \def\Del{{\Delta}$ such that
\begin{equation}\label{6.8}
{\mathfrak J}(n;X)={\mathfrak J}(n)+O(P_1P_2\cdots P_sn^{-1}X^{-{\delta}} \def\Del{{\Delta}}),
\end{equation}
where
\[
{\mathfrak J}(n)=\int_{-1/2}^{1/2}v_1({\beta}} \def\bfbet{{\boldsymbol \beta})v_2({\beta}} \def\bfbet{{\boldsymbol \beta})\cdots v_s({\beta}} \def\bfbet{{\boldsymbol \beta})e(-{\beta}} \def\bfbet{{\boldsymbol \beta} n){\,{\rm d}}{\beta}} \def\bfbet{{\boldsymbol \beta} .
\]
A familiar approach paralleling that of \cite[Theorem 2.3]{Vau1997} shows that
\begin{equation}\label{6.9}
{\mathfrak J}(n)=\frac{\Gam\bigl( 1+\frac{1}{k_1}\bigr)\Gam\bigl( 1+\frac{1}{k_2}\bigr) \cdots
\Gam\bigl( 1+\frac{1}{k_s}\bigr) }{\Gam \bigl( \frac{1}{k_1}+\frac{1}{k_2}+\ldots
+\frac{1}{k_s}\bigr) }n^{{\theta}} \def\bftet{{\boldsymbol \theta}} \def\Tet{{\Theta}-1}\left( 1+O(n^{-1/k_s})\right) .
\end{equation}
Thus, on combining \eqref{6.5} with \eqref{6.7}, \eqref{6.8} and \eqref{6.9}, we conclude
that there is a positive number ${\delta}} \def\Del{{\Delta}$ for which
\begin{equation}\label{6.10}
\int_{\mathfrak K}}\def\grL{{\mathfrak L}}\def\grp{{\mathfrak p} \mathscr F({\alpha}} \def\bfalp{{\boldsymbol \alpha})e(-n{\alpha}} \def\bfalp{{\boldsymbol \alpha}){\,{\rm d}}{\alpha}} \def\bfalp{{\boldsymbol \alpha} =c^s\Gam({\theta}} \def\bftet{{\boldsymbol \theta}} \def\Tet{{\Theta})^{-1}\biggl(
\prod_{i=1}^s\Gam\Bigl( 1+\frac{1}{k_i}\Bigr) \biggr)
\grS(n)n^{{\theta}} \def\bftet{{\boldsymbol \theta}} \def\Tet{{\Theta}-1}+O(n^{{\theta}} \def\bftet{{\boldsymbol \theta}} \def\Tet{{\Theta}-1}L^{-{\delta}} \def\Del{{\Delta}}).
\end{equation}
\par Subject to our verification in the next section that the lower bound $\grS(n)\gg 1$ holds
uniformly in $n$, we conclude from \eqref{6.10} that the lower bound \eqref{2.8} holds. In
combination with the minor arc estimate \eqref{2.7}, available from Lemma \ref{lemma4.3}
under the hypotheses of Theorem \ref{theorem1.1}, we conclude that
\begin{equation}\label{6.11}
T(n;\eta)=\int_{\mathfrak K}}\def\grL{{\mathfrak L}}\def\grp{{\mathfrak p} \mathscr F({\alpha}} \def\bfalp{{\boldsymbol \alpha})e(-n{\alpha}} \def\bfalp{{\boldsymbol \alpha}){\,{\rm d}}{\alpha}} \def\bfalp{{\boldsymbol \alpha} +\int_{\mathfrak k} \mathscr F({\alpha}} \def\bfalp{{\boldsymbol \alpha})e(-n{\alpha}} \def\bfalp{{\boldsymbol \alpha})
{\,{\rm d}}{\alpha}} \def\bfalp{{\boldsymbol \alpha} \gg n^{{\theta}} \def\bftet{{\boldsymbol \theta}} \def\Tet{{\Theta}-1}.
\end{equation}
This completes the proof of Theorem \ref{theorem1.1}. In order to establish Theorem
\ref{theorem1.3}, we observe on the one hand that the upper bound \eqref{2.7} follows
from Lemma \ref{lemma5.3} when $s\ge A(r)(k+1)^{r+1}$. Also, when $r\ge k$ and
$s\ge (6k+6)^{2r}$, the upper bound \eqref{2.7} follows from Corollary \ref{corollary4.4}.
Thus, in either case, we find as before that \eqref{6.11} holds, and the proof of
Theorem \ref{theorem1.3} is complete.
\section{The singular series} In this section we estimate the singular series, confirming
\eqref{6.7} and the bounds $1\ll \grS(n)\ll 1$. Our argument parallels the analogous
treatment of \cite{Sco1960}, though we introduce refinements en route. We continue
working under the hypothesis \eqref{6.1} throughout.\par
First, from \eqref{6.4} and \cite[Theorem 4.2]{Vau1997}, we see that the bound
\[
q^{-s}U_n(q)\ll q^{1-1/k_1-1/k_2-\ldots -1/k_s}=q^{1-{\theta}} \def\bftet{{\boldsymbol \theta}} \def\Tet{{\Theta}}
\]
holds uniformly in $n$. Thus, in view of \eqref{6.1}, there is a positive number ${\delta}} \def\Del{{\Delta}$ for
which $q^{-s}U_n(q)\ll q^{-1-{\delta}} \def\Del{{\Delta}}$. It follows that the singular series $\grS(n)$ defined in
\eqref{6.6} converges absolutely and uniformly in $n$, and moreover one has the bound
\eqref{6.7}. Next, by following the argument underlying the proof of
\cite[Lemma 2.11]{Vau1997}, we
see that $U_n(q)$ is a multiplicative function of $q$. In view of \eqref{6.6}, we may rewrite
$\grS(n)$ in the form $\grS(n)=\prod_p\chi_p(n)$, where the product is over all prime
numbers $p$, and
\begin{equation}\label{7.1}
\chi_p(n)=\sum_{\nu=0}^\infty p^{-s\nu}U_n(p^\nu).
\end{equation}
By orthogonality, this Euler factor is related to the number $M_n(p^\nu)$ of incongruent
solutions of the congruence
\[
x_1^{k_1}+x_2^{k_2}+\ldots +x_s^{k_s}\equiv n\mmod{p^\nu},
\]
via the relation
\begin{equation}\label{7.2}
\chi_p(n)=\lim_{\nu\rightarrow \infty}p^{(1-s)\nu}M_n(p^\nu).
\end{equation}
A model for the necessary argument, which is standard, may be found in the discussion
associated with \cite[Lemma 2.12]{Vau1997}. The limit \eqref{7.2} is seen to exist via the
relation \eqref{7.1}. In particular, the quantity $\chi_p(n)$ is a non-negative number
satisfying the relation $\chi_p(n)=1+O(p^{-1-{\delta}} \def\Del{{\Delta}})$.\par
We summarise our deliberations thus far in the form of a lemma.
\begin{lemma}\label{lemma7.1}
Suppose that \eqref{6.1} holds. Then the series \eqref{6.6} converges absolutely, and there
exists a natural number $C$ with the property that for all integers $n$, one has
\[
\grS(n)\ge \tfrac{1}{2} \prod_{p\leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec C}\chi_p(n).
\]
\end{lemma}
We have yet to obtain a lower bound for $\chi_p(n)$ when $p\leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec C$, a matter to which we
now attend. Put $D=(k_1,k_2,\ldots ,k_s)$, the greatest common divisor of
$k_1,k_2,\ldots ,k_s$. Define the non-negative integer ${\lambda}} \def\Lam{{\Lambda}} \def\Lamtil{{\widetilde \Lambda}$ by means of the relation
$p^{\lambda}} \def\Lam{{\Lambda}} \def\Lamtil{{\widetilde \Lambda}\| D$. Then we have $p^{\lambda}} \def\Lam{{\Lambda}} \def\Lamtil{{\widetilde \Lambda}|k_i$ for $1\leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec i\leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec s$, and there exists an index $j$
with $1\leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec j\leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec s$ for which $p^{\lambda}} \def\Lam{{\Lambda}} \def\Lamtil{{\widetilde \Lambda}\| k_j$. We show that for each integer $n$, there is a
solution of the congruence
\begin{equation}\label{7.3}
x_1^{k_1}+x_2^{k_2}+\ldots +x_s^{k_s}\equiv n\mmod{p^{{\lambda}} \def\Lam{{\Lambda}} \def\Lamtil{{\widetilde \Lambda}+\tau}},
\end{equation}
with $\tau=1$ for odd $p$, and with $\tau=2$ for $p=2$, in each case with $(x_j,p)=1$.
\par In order to establish this last assertion, suppose temporarily that there is an integer
$n$ having the property that \eqref{7.3} has no solution with $(x_j,p)=1$. It then follows
that the range of the left hand side of \eqref{7.3} modulo $p^{{\lambda}} \def\Lam{{\Lambda}} \def\Lamtil{{\widetilde \Lambda}+\tau}$, with
$(x_j,p)=1$, has at most $p^{{\lambda}} \def\Lam{{\Lambda}} \def\Lamtil{{\widetilde \Lambda}+\tau}-1$ elements. In the first instance we assume
that $p$ is odd. Then, the theory of power residues shows that the monomial $x^{k_j}$
takes $(p-1)/(p-1,k_j)$ values modulo $p^{{\lambda}} \def\Lam{{\Lambda}} \def\Lamtil{{\widetilde \Lambda}+1}$ as $x$ varies over
$1\leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec x\leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec p^{{\lambda}} \def\Lam{{\Lambda}} \def\Lamtil{{\widetilde \Lambda}+1}$ with $(x,p)=1$. Furthermore, for any index $i$ we see that
$y^{k_i}$ takes at least $1+(p-1)/(p-1,k_i)$ values modulo $p^{{\lambda}} \def\Lam{{\Lambda}} \def\Lamtil{{\widetilde \Lambda}+1}$ as $y$
varies over $1\leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec y\leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec p^{{\lambda}} \def\Lam{{\Lambda}} \def\Lamtil{{\widetilde \Lambda}+1}$. We now repeatedly apply the Cauchy-Davenport
theorem (see \cite[Lemma 2.14]{Vau1997}), beginning with the values of $x_j^{k_j}$, and
then adding in the remaining powers step-by-step. On recalling \eqref{6.1}, we find that
with $p\nmid x_j$, the range of the left hand side of \eqref{7.3}, modulo $p^{{\lambda}} \def\Lam{{\Lambda}} \def\Lamtil{{\widetilde \Lambda}+1}$,
contains a number of elements which is at least
\[
\sum_{i=1}^s\frac{p-1}{(p-1,k_ip^{-{\lambda}} \def\Lam{{\Lambda}} \def\Lamtil{{\widetilde \Lambda}})}\ge p^{\lambda}} \def\Lam{{\Lambda}} \def\Lamtil{{\widetilde \Lambda} (p-1)
\sum_{i=1}^s\frac{1}{k_i}>2p^{\lambda}} \def\Lam{{\Lambda}} \def\Lamtil{{\widetilde \Lambda}(p-1).
\]
This yields a contradiction, since $2p^{\lambda}} \def\Lam{{\Lambda}} \def\Lamtil{{\widetilde \Lambda} (p-1)\ge p^{{\lambda}} \def\Lam{{\Lambda}} \def\Lamtil{{\widetilde \Lambda}+1}$. Our claim concerning the
solubility of the congruence \eqref{7.3} is consequently confirmed when $p$ is odd.\par
We next consider the situation with $p=2$, where $\tau=2$. For some index $j$ with
$1\leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec j\leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec s$, one has $2^{\lambda}} \def\Lam{{\Lambda}} \def\Lamtil{{\widetilde \Lambda}\| k_j$. In \eqref{7.3} we take $x_j=1$. We can solve
\eqref{7.3} with $x_i\in \{0,1\}$ ($1\leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec i\leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec s$ and $i\ne j$) provided that
$s\ge 2^{{\lambda}} \def\Lam{{\Lambda}} \def\Lamtil{{\widetilde \Lambda}+2}$. However, we have $k_1\ge 2^{\lambda}} \def\Lam{{\Lambda}} \def\Lamtil{{\widetilde \Lambda}$, and hence the condition that
$s\ge 4k_1$ suffices to confirm our claim concerning the solubility of the congruence
\eqref{7.3} in the case that $p=2$.\par
A routine argument now bounds $M_n(p^\nu)$ from below. We observe that since
$p^{\lambda}} \def\Lam{{\Lambda}} \def\Lamtil{{\widetilde \Lambda}\|k_j$, a number coprime to $p$ is a $k_j$-th power residue modulo
$p^{{\lambda}} \def\Lam{{\Lambda}} \def\Lamtil{{\widetilde \Lambda}+\tau}$ if and only if it is a $k_j$-th power residue modulo $p^\nu$, for all
$\nu\ge {\lambda}} \def\Lam{{\Lambda}} \def\Lamtil{{\widetilde \Lambda}+\tau$. Let $x_1,x_2,\ldots ,x_s$ be a solution of \eqref{7.3}, with
$(x_j,p)=1$, and let $\nu$ be a natural number with $\nu\ge {\lambda}} \def\Lam{{\Lambda}} \def\Lamtil{{\widetilde \Lambda}+\tau$. There are
$p^{\nu-{\lambda}} \def\Lam{{\Lambda}} \def\Lamtil{{\widetilde \Lambda}-\tau}$ choices for $y_i$ with $y_i\equiv x_i\mmod{p^{{\lambda}} \def\Lam{{\Lambda}} \def\Lamtil{{\widetilde \Lambda}+\tau}}$ and
$1\leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec y_i\leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec p^\nu$. For each such choice with $1\leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec i\leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec s$ and $i\ne j$, the integer
\[
n-\sum_{\substack{i=1\\ i\ne j}}^sx_i^{k_i}
\]
is a $k_j$-th power residue modulo $p^{\lambda+\tau}$, and therefore a $k_j$-th power
residue modulo $p^\nu$. Thus, we have $M_n(p^\nu)\ge p^{(s-1)(\nu-\lambda-\tau)}$, so by
\eqref{7.2} we see that $\chi_p(n)\ge p^{-(\lambda+\tau)(s-1)}$. This lower bound holds for
all primes $p$ with $p^{\lambda}\| D$ and all $n\in \dbN$ provided that $s\ge 4k_1$ and
\eqref{6.1} holds.\par
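For orientation, the passage from the bound on $M_n(p^\nu)$ to the bound on $\chi_p(n)$ may be made explicit. Assuming that \eqref{7.2} takes the standard form $\chi_p(n)=\lim_{\nu\rightarrow\infty}p^{\nu(1-s)}M_n(p^\nu)$ (a hypothesis on our part, since that display lies outside the present excerpt), one finds that
\[
p^{\nu(1-s)}M_n(p^\nu)\ge p^{\nu(1-s)}\cdot p^{(s-1)(\nu-\lambda-\tau)}=p^{-(\lambda+\tau)(s-1)}
\]
uniformly for $\nu\ge \lambda+\tau$, which delivers the stated lower bound in the limit.\par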
We summarise these deliberations in the following lemma.
\begin{lemma}\label{lemma7.2}
Suppose that $s\ge 4k_1$, and \eqref{6.1} holds. Then there is a positive number $\omega$
having the property that $\grS(n)\ge \omega$ for all $n\in \dbN$.
\end{lemma}
This lemma completes our analysis of the singular series, and thus we have confirmed all of
the properties that were needed to complete the analysis of \S6. It is worth noting that the
condition $s\ge 4k_1$ of Lemma \ref{lemma7.2} is automatically satisfied whenever the
hypotheses of Theorem \ref{theorem1.1} hold for the exponents $k_1,k_2,\ldots ,k_s$. In
order to verify this claim, observe that
\[
\sum_{i=3}^s\frac{1}{k_i}\leqslant \frac{s}{k_1}
\]
whilst
\[
2\log k_1+\frac{1}{k_2}+3.20032>2\log 2+3>4.
\]
Thus the hypotheses of Theorem \ref{theorem1.1} can be satisfied only when $s>4k_1$.
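To spell this out, suppose (as we assume here, the statement of the theorem lying outside this excerpt) that the relevant hypothesis of Theorem \ref{theorem1.1} includes the lower bound $\sum_{i=3}^s1/k_i\ge 2\log k_1+1/k_2+3.20032$. Then the two displayed estimates combine to give
\[
\frac{s}{k_1}\ge \sum_{i=3}^s\frac{1}{k_i}\ge 2\log k_1+\frac{1}{k_2}+3.20032>4,
\]
whence $s>4k_1$.\par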
\bibliographystyle{amsbracket}
\providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace}
\section{Climatological background} \label{sec:background}
\subsection{El Ni{\~{n}}o--Southern Oscillation} \label{sec:background-ENSO}
\begin{figure}
\centering
\includegraphics[width = \columnwidth]{regions-of-interest.pdf}
\caption{Main regions of interest used within this paper. Sets of blue dots labelled with ``El Chichon'', ``Agung'' and ``Pinatubo'' indicate grid points within a 5$^\circ$ radius around the corresponding volcanoes. The numbers 3, 3.4 and 4 identify the corresponding Nino regions (cf.\ Tab.~\ref{tab:enso-regions}) commonly used for defining characteristic indices of ENSO variability. The region ``ENSO-big'' will be removed from the complete global data set when analyzing the spatial imprints of volcanic eruptions to ensure that ENSO-related effects are excluded.
}
\label{fig:map-regions-of-interest}
\end{figure}
\begin{table}
\begin{tabular}{l|l|l}
\toprule
region & latitudes & longitudes \\ \hline
Nino1+2 & 10\degree S -- 0\degree N & 90\degree W -- 80\degree W \\
Nino3 & 5\degree S -- 5\degree N & 150\degree W -- 90\degree W \\
Nino4 & 5\degree S -- 5\degree N & 160\degree E -- 150\degree W \\
Nino3.4 & 5\degree S -- 5\degree N & 170\degree W -- 120\degree W \\
ENSO-big & 30\degree S -- 10\degree N & 180\degree W -- 60\degree W \\ \bottomrule
\end{tabular}
\caption{Overview of different regions commonly used for defining characteristic temperature-based indices associated with ENSO variability. In addition, we include the definition of the ``ENSO-big'' region studied in this work, which corresponds to the region discarded in our analyses of the impacts of strong volcanic eruptions on global temperature teleconnectivity.}
\label{tab:enso-regions}
\end{table}
Among the dominant teleconnectivity patterns in the global climate system, the El Ni\~{n}o--Southern Oscillation (ENSO) is probably the most remarkable phenomenon, in terms of both the magnitude of the associated variations in sea-surface temperature (SST) and sea-level pressure and its specific impacts on different aspects of regional climate variability worldwide \citep{trenberth_definition_1997}. During the positive phase (El Ni\~{n}o) of this complex oscillation of the coupled atmosphere--ocean system in the tropical Pacific, the eastern tropical Pacific exhibits anomalous warming with respect to ``normal'' mean conditions, while the negative phase (La Ni\~{n}a) is characterized by a corresponding cooling. In comparison with the normal mean climatology, this surface temperature anomaly results in marked shifts of key atmospheric pressure systems, modifying the large-scale circulation and, thus, leading to prominent shifts of, e.g., precipitation patterns. Effects of both ENSO phases have been observed in remote regions including North and South America, Africa, the Indian subcontinent, and even Antarctica \citep{ropelewski_global_1987,dai_global_2000,neelin_tropical_2003,turner2004nino,clarke2008introduction,sarachik2010nino}.
The long-term variability of ENSO is characterized by irregular oscillations with a period of 2 to 7 years and remarkable variations in the associated characteristic frequencies and amplitudes of both temperature and pressure anomalies. Following its prominent spatial structures in tropical SST and sea-level pressure, ENSO is commonly traced by indices that capture the variability of the aforementioned observables in some key region of the tropical Pacific ocean. Notably, a set of indices has been defined in terms of average SST anomalies taken over distinct regions in the eastern and central tropical Pacific, referred to as Nino1+2, Nino3, Nino4 and Nino3.4, respectively \citep{trenberth2001indices} (see Fig.~\ref{fig:map-regions-of-interest} and Tab.~\ref{tab:enso-regions}). In this work, we will utilize the so-called Ocean Ni\~{n}o Index (ONI) for differentiating between different phases of ENSO. It is defined as the running three-month mean SST anomaly for the Ni\~{n}o 3.4 region ($5\degree$N--$5\degree$S, $120\degree$--$170\degree$W) in comparison with centered 30-year base periods that are updated every 5 years \citep{NCEP-ONI}. When the ONI exceeds $0.5\degree$C for at least five consecutive months, the corresponding situation is classified as an El Ni\~{n}o, and the magnitude of the ONI is taken as an indicator of the strength of the corresponding event. In turn, if the ONI drops below $-0.5\degree$C for at least five consecutive months, this indicates a La Ni\~{n}a episode.
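To make the classification rule concrete, the following minimal pure-Python sketch labels a sequence of (hypothetical) ONI values according to the five-consecutive-months criterion described above; operational products such as the NCEP ONI table involve further conventions not reproduced here.

```python
def classify_oni(oni, thresh=0.5, min_run=5):
    """Label each value 'EN', 'LN' or 'neutral': the ONI must stay at or
    beyond +/- `thresh` for at least `min_run` consecutive entries."""
    labels = ["neutral"] * len(oni)
    for sign, tag in ((+1, "EN"), (-1, "LN")):
        run_start = None
        for i, v in enumerate(list(oni) + [0.0]):  # sentinel closes a final run
            if sign * v >= thresh:
                if run_start is None:
                    run_start = i
            else:
                if run_start is not None and i - run_start >= min_run:
                    for j in range(run_start, i):
                        labels[j] = tag
                run_start = None
    return labels

# hypothetical three-month-mean ONI values for twelve consecutive seasons
oni = [0.1, 0.6, 0.7, 0.9, 1.1, 0.8, 0.6, 0.2, -0.6, -0.7, -0.4, -0.2]
print(classify_oni(oni))
```

The sentinel value appended at the end ensures that a run extending to the final season is still evaluated; in the toy series above, the six-season warm run is labelled as El Ni\~{n}o, while the two-season cold run is too short to qualify as La Ni\~{n}a.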
In recent years, it has been recognized that the commonly observed spatial patterns associated with El Ni\~{n}o (as well as La Ni\~{n}a) related SST anomalies are far from homogeneous across the set of observed events. Consequently, it has been suggested to further distinguish both phases into two respective flavours \citep[see][and references therein]{Wiedermann2016a}. The first type is the classic or East Pacific (EP) El Ni\~{n}o \citep{rasmusson1982variations,harrison1998nino}, which is localized in the eastern tropical Pacific and characterized by strong positive SST anomalies close to the western coast of South America. In contrast, the El Ni\~{n}o Modoki or Central Pacific (CP) El Ni\~{n}o exhibits marked SST anomalies in the central tropical Pacific close to the dateline. Notably, both spatial structures (EP and CP) can be observed in the context of La Ni\~{n}a, too. Noticing that there have been contradictory classifications in the literature for some past ENSO events, \citet{Wiedermann2016a} recently presented a new indicator for the ENSO flavour based on functional climate networks. In the remainder of this paper, we will follow their classification, which is summarized in Tab.~\ref{tab:summary-ep-cp-events}.
\begin{table}
\centering
\begin{tabular}{p{0.2\columnwidth}|p{0.7\columnwidth}}
\toprule
Event & Years \\ \hline
EP El Niño & 1957, 1965, 1972, 1976, 1982, 1997 \\
CP El Niño & 1953, 1958, 1963, 1968, 1969, 1977, 1979, 1986, 1987, 1991, 1994, 2002, 2004, 2006, 2009 \\
EP La Niña & 1964, 1970, 1973, 1988, 1998, 2007, 2010 \\
CP La Niña & 1954, 1955, 1967, 1971, 1974, 1975, 1984, 1995, 2000, 2001, 2011 \\
\bottomrule
\end{tabular}
\caption{
Classification of past ENSO episodes into the four types Eastern Pacific (EP) El Ni\~{n}o, Central Pacific (CP) El Ni\~{n}o, EP La Ni\~{n}a, and CP La Ni\~{n}a as proposed by \citet{Wiedermann2016a}. For constructing the composite maps shown in Fig.~\ref{fig:composites}, time windows corresponding to each type of event have been selected according to their midpoints coinciding with Christmas of the year given in the table. All years not listed here have been classified as ``neutral'' years with no distinct ENSO event.
}
\label{tab:summary-ep-cp-events}
\end{table}
\subsection{Volcanic eruptions}
Besides distinct ENSO episodes and their known global climate impacts, strong volcanic eruptions constitute another type of event that can substantially affect climate at large spatial and temporal scales. Similar to El Ni\~{n}o and La Ni\~{n}a episodes, such events can result in large-scale spatially coherent cooling trends due to modifications of the radiation balance by changes in atmospheric chemistry and the shielding effect of volcanic aerosols in the stratosphere. Subsequently, such cooling can again cause changes of precipitation and temperature patterns from synoptic (weather) time scales to relatively persistent multi-annual effects \citep{Robock2000} and even trigger long-lasting climate disruptions \citep{Miller2012}. In the past decades, several large volcanic eruptions have injected up to some millions of tons of sulfur dioxide into the atmosphere, which can become rapidly distributed around the globe once it enters the stratosphere.
In this study, we focus on the global effects of the three major volcanic eruptions during the second half of the 20th century. Within this period, the largest and most influential event, the Mount Pinatubo eruption \citep{McCormick1995}, took place between April and September 1991 in the Philippines, followed by the Mount Agung eruption in Indonesia (February 1963 to January 1964) and the El Chichon eruption (March to September 1982) in Mexico (see Fig.~\ref{fig:map-regions-of-interest}).
\section{Data and methods} \label{sec:methods}
\subsection{Data}
\label{sec:data}
In this study, we use daily mean surface air temperature (SAT) data (at sigma level $\sigma=0.995$) from the National Center for Environmental Prediction (NCEP) and National Center for Atmospheric Research (NCAR) Reanalysis I project \citep{Kalnay1996,kistler_ncepncar_2001}. The data cover the years 1948--2015 on a global grid with equi-angular spatial resolution of 2.5$\degree$ in both latitude and longitude, thus comprising $10,512$ individual temperature time series. Note that we consider the inclusion of land areas important, hence we chose SAT instead of the aforementioned SST. In order to remove leading-order effects of seasonality in the temperature recordings, the long-term average temperatures for each calendar day of the year have been subtracted from the raw data independently for each grid point, resulting in so-called SAT anomalies.
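The anomaly construction just described amounts to subtracting, at each grid point separately, the long-term mean for every calendar day. A minimal sketch (pure Python on a toy series of stacked full years, ignoring leap days and the actual reanalysis data layout):

```python
def sat_anomalies(series, days_per_year=365):
    """Return the series minus the multi-year mean of each calendar day."""
    n_years = len(series) // days_per_year
    climatology = [
        sum(series[y * days_per_year + d] for y in range(n_years)) / n_years
        for d in range(days_per_year)
    ]
    return [v - climatology[i % days_per_year] for i, v in enumerate(series)]

# toy example with a 3-day "year": the seasonal cycle (1, 2, 3) is removed
print(sat_anomalies([1, 2, 3, 3, 4, 5], days_per_year=3))
```

For the toy series, the two "years" (1, 2, 3) and (3, 4, 5) share the climatology (2, 3, 4), so the anomalies are $-1$ throughout the first year and $+1$ throughout the second.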
Equi-angular gridded data have, by construction, a higher density of grid points at the poles than around the equator, which would result in systematic biases of statistical characteristics overemphasizing the polar regions with apparently more data if not properly accounted for. For the latter purpose, area-weighted measures have been developed and subsequently applied in recent works \citep{tsonis2006networks,heitzig2012node,Wiedermann2013}. As an alternative, we follow here the approach of \citet{radebach2013disentangling}, where the original data have been remapped onto a grid with a much higher spatial homogeneity. Specifically, we use an icosahedral grid
as described by \citet{Heikes1994}, which finally leads to a decomposition of the Earth's surface into Voronoi cells of almost the same area. In the present case, the corresponding remapping procedure results in a set of $N=$10,242 grid points that exhibit a narrowly peaked distribution of geodesic distances between direct neighbors.
In \citet{radebach2013disentangling}, the time series associated with each new grid point have been determined based upon the values at the respective four surrounding grid points in the original equi-angular grid. In this work, we use a slightly different approach by taking the four closest points in space instead, which in some cases may deviate from the former setting. This modification is motivated by the fact that the consideration of the spatially closest ``observational'' values may provide a better approximation of climate variability at the new grid point. Moreover, these spatial neighbours can be determined efficiently using spatial search trees. Due to the commonly rather large spatial correlation length of the SAT field (as compared to other climate variables like precipitation) and its resulting spatial smoothness, we do not expect the time series resulting from both algorithmic variants to differ markedly.
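As an illustration of this remapping variant, the sketch below averages, for every new grid point, the time series of the four spatially closest original grid points. For brevity it uses a brute-force great-circle search instead of the spatial search trees mentioned above, and unweighted averaging (the actual interpolation weights may be chosen differently):

```python
from math import radians, sin, cos, asin, sqrt

def great_circle(p, q):
    """Haversine distance (in radians on the unit sphere) between
    two (latitude, longitude) pairs given in degrees."""
    la1, lo1, la2, lo2 = map(radians, (p[0], p[1], q[0], q[1]))
    h = sin((la2 - la1) / 2) ** 2 + cos(la1) * cos(la2) * sin((lo2 - lo1) / 2) ** 2
    return 2 * asin(min(1.0, sqrt(h)))

def remap(new_points, old_points, old_series, n_neighbors=4):
    """Average the series of the `n_neighbors` closest old grid points."""
    remapped = []
    for p in new_points:
        nearest = sorted(range(len(old_points)),
                         key=lambda i: great_circle(p, old_points[i]))[:n_neighbors]
        remapped.append([sum(old_series[i][t] for i in nearest) / n_neighbors
                         for t in range(len(old_series[0]))])
    return remapped
```

For a new point in the interior of a 2.5\degree{} cell, the four cell corners are selected and their values averaged; a $k$-d tree would replace the brute-force sort for the full $N=10{,}242$-node grid.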
Finally, we note that when using the global data set as described above, the temporal correlations associated with the key ENSO region and the surrounding parts of the Pacific ocean are known to dominate climate variability globally. This leads to undesired outcomes when aiming to resolve the effects of individual volcanic eruptions on global temperature patterns, since they might be masked by ENSO variability, especially in cases where the corresponding effects take place simultaneously with some El Ni\~{n}o (or La Ni\~{n}a) event. In fact, strong tropical volcanic eruptions have even been suggested to serve as triggers for El Ni\~no phases \citep{Khodri2017}.
In order to account for the problem of temporal co-occurrence between the effects of volcanic eruptions and ENSO events, we are going to use the full set of data when studying the effects of ENSO on global temperature teleconnectivity, while excluding the main ENSO region and its surroundings (referred to as ``ENSO-big'' in Fig.~\ref{fig:map-regions-of-interest} and Tab.~\ref{tab:enso-regions}) when studying the impacts of specific volcanic eruptions. Note that this excluded region has been chosen rather large on purpose (as an outcome of more systematic studies with variable regions to be discarded, which are not further discussed here for brevity) such as to ensure an as complete as possible separation between the direct ENSO impacts and the effects of volcanic eruptions, especially in cases of simultaneous events. In fact, when considering the full global SAT data set in the context of the impacts of volcanic eruptions, only the signatures of the Mount Pinatubo eruption are clearly visible \citep{radebach2013disentangling}. Alternative strategies for reducing the impact of ENSO variability in order to highlight the climate effects of other phenomena like volcanic eruptions might include conditioning out the effect of ONI or other representatives of the ENSO state on the local SAT variability at each grid point prior to network analysis. We outline further investigations that make use of such approaches as a subject of future research.
\subsection{Functional climate network analysis}
\label{sec:climate-networks}
Functional climate networks provide a coarse-grained spatial representation of the co-variability structure among globally or regionally distributed measurements of some climate variables \citep{tsonis2006networks,tsonis2012,Donner2017}. Starting from a set of records of the variable of interest, the geographical positions associated with the individual time series are identified with the $N$ nodes of some abstract network embedded on the Earth's surface. The connectivity of this network is then formed by establishing links between pairs of these nodes that fulfill some statistical similarity criterion (see below). Thus, links in such climate networks represent strong statistical associations between climate variability at different points in space. These associations may potentially indicate certain climatic processes or mechanisms interlinking the variability at the corresponding locations. Hence, the resulting linkage structure is referred to as \emph{functional connectivity}.
Like other undirected and unweighted networks, functional climate networks are conveniently represented in terms of their adjacency matrix $\mathbf{A}=(A_{ij})$, where $A_{ij} = 1$ indicates the existence of a link between node $i$ and node $j$, while $A_{ij} = 0$ corresponds to an absence of such a link.
In our specific case, the matrix $\mathbf{A}$ is time-dependent, since the spatial co-variability structure of the SAT field changes with time. In such a case, we speak of an \emph{evolving climate network} \citep{radebach2013disentangling}.
\subsubsection{Network generation}
According to our considerations presented above, we take the grid points of the icosahedral grid constructed by remapping the original NCEP/NCAR reanalysis data as nodes of an evolving SAT network (i.e., we consider a fixed node set that does not change over time).
For establishing the time-dependent link set, we consider sliding windows in time covering a set of days $\{d\}=[d_0,d_0+\Delta d]$ with a width of $\Delta d=365$ days and a mutual offset of 183 days between subsequent windows. Each of these windows is labeled with the corresponding midpoint date $d_{mid} = d_0 + \Delta d/2$. Our choice of 1-year windows is motivated by the fact that seasonality may not be completely accounted for when using shorter windows, since the consideration of SAT anomalies as defined above does not exclude seasonality in the local higher-order statistical characteristics of SAT after correcting only for the mean climatology. Moreover, due to the different distributions of land and water masses on both hemispheres of the Earth (with different persistence properties of SAT), the resulting spatial co-variability structure manifested in the climate network topology may undergo seasonal variations as well, which could affect the results of our analysis presented in this work.
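The window layout used here (width $\Delta d = 365$ days, mutual offset of 183 days, labelled by the midpoint) can be sketched as:

```python
def sliding_windows(n_days, width=365, offset=183):
    """Return (start, midpoint, end) day indices for all complete windows."""
    out = []
    d0 = 0
    while d0 + width <= n_days:
        out.append((d0, d0 + width // 2, d0 + width))
        d0 += offset
    return out

# three years of daily data yield four half-yearly shifted 1-year windows
print(sliding_windows(1095))
```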
From the seasonally adjusted temperature data at each grid point during a time window (corrected for the window-wise mean), $T_i(\{d\})$ (with $i$ denoting the respective grid point), we compute the matrix of pairwise Pearson's correlation coefficients
\begin{equation}
\label{eq:correlation-coefficient}
c_{ij}(d_{mid}) = \frac{\left<T_i(\{d\})T_j(\{d\})\right>_{\{d\}}}{\sigma_{\{d\}}(T_i(\{d\}))\,\sigma_{\{d\}}(T_j(\{d\}))}\,,
\end{equation}
\noindent
where $\left<\bullet\right>_{\{d\}}$ and $\sigma_{\{d\}}(\bullet)$ denote the mean value and standard deviation of the respective variable taken over the time window $\{d\}$. From this matrix, we identify the entries (i.e., pairs of nodes) with the highest absolute values of mutual correlations $|c_{ij}|$. Specifically, in this work, we consider the 0.5\% strongest pairwise statistical similarities among all nodes per window, i.e.,
\begin{equation}
\label{eq:adjacency-matrix}
A_{ij}(d_{mid}) = \Theta(|c_{ij}(d_{mid})|-q_{|c|,0.995}(d_{mid}))-\delta_{ij}\,,
\end{equation}
\noindent
where $\Theta(\bullet)$ is the Heaviside function, $q_{|c|,0.995}(d_{mid})$ is the $99.5$-percentile of the distribution of absolute correlation values for the time window centered at $d_{mid}$, and $\delta_{ij}$ denotes Kronecker's delta. $\mathbf{A}\left(d_{mid}\right)$ is now the mathematical representation of our evolving climate network. In the following, we omit the explicit time dependence of $\mathbf{A}$ (also in all network properties) for brevity. Note that according to our construction, $\mathbf{A}$ is symmetric.
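In code, the construction of Eqs.~\eqref{eq:correlation-coefficient} and \eqref{eq:adjacency-matrix} reduces to computing all pairwise correlations within a window and keeping the strongest fraction. The pure-Python sketch below ranks pairs by $|c_{ij}|$ instead of computing the percentile explicitly, which is equivalent for a fixed link density:

```python
def pearson(x, y):
    """Pearson correlation of two equally long sequences."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

def climate_network(window_series, density=0.005):
    """Adjacency matrix linking the `density` fraction of node pairs
    with the largest absolute correlation."""
    n = len(window_series)
    pairs = [(abs(pearson(window_series[i], window_series[j])), i, j)
             for i in range(n) for j in range(i + 1, n)]
    pairs.sort(reverse=True)
    n_links = max(1, int(round(density * len(pairs))))
    A = [[0] * n for _ in range(n)]
    for _, i, j in pairs[:n_links]:
        A[i][j] = A[j][i] = 1
    return A
```

With `density=0.005` this reproduces the 0.5\% link density used in this work; for the full $N=10{,}242$-node networks, vectorized routines would of course replace the pure-Python loops.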
\subsubsection{Node degree}
The degree $k_i$ of a node $i$ is defined as the number of links connected to $i$,
\begin{equation}
k_i = \sum_{j=1}^N A_{ij}\,.
\end{equation}
\noindent
It represents how densely a node is connected within the network. In case of a functional climate network, the degree can thus be considered as a proxy for the importance (or centrality) of a certain grid point in the spatio-temporal correlation structure of the variable of interest.
In the following, we refer to network measures like the degree, which provide a characteristic value specific to each individual node $i$, as local network characteristics. We call the full set of their values taken together with the associated spatial positions of all nodes a \emph{field}.
\subsubsection{Average link distance}
Another local network characteristic that defines a field of values upon a functional climate network is the average link distance \citep{Donner2017} of a node $i$, which is defined as
\begin{equation}
d_i = \left<d_{ij}\right>_{\left\{j|A_{ij}=1\right\}}= \frac{1}{k_i} \sum_{\left\{j|A_{ij}=1\right\}} d_{ij}= \frac{1}{k_i} \sum_{j=1}^N A_{ij}d_{ij}\,,
\end{equation}
\noindent
with $d_{ij}$ being the normalized spatial distance between two nodes $i$ and $j$. As a proper normalization, we choose here the largest possible geodesic distance between two points on the Earth's surface, i.e., half of the Earth's circumference $u_{Earth}$, so that $d_{ij} = 2 D_{ij}/u_{Earth}$, where $D_{ij}$ is the geodesic distance between nodes $i$ and $j$. A low average link distance indicates that $i$ has very localized connections, while a high value points to a node with long-ranging spatial connections. This measure is closely related to the \emph{total distance} of a node with respect to the rest of the network as previously used by \citet{Tsonis2008} in the context of functional climate networks.
We emphasize that the average link distance must not be confused with the conceptually related average path length, where $d_{ij}$, as defined above, would be replaced by the minimum number $l_{ij}$ of links separating two nodes $i$ and $j$ in the network (i.e., where $i$ and $j$ do not need to be directly connected).
Taking the average of $d_i$ over all nodes $i$ of the network gives the \emph{global average link distance}
\begin{equation}
\left<\left<d\right>\right> = \left<d_i\right>_i .
\end{equation}
\noindent
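Both node-wise fields and the global average link distance follow directly from the adjacency matrix and the node coordinates. A pure-Python sketch, normalizing great-circle distances by half the circumference (i.e., by $\pi$ on the unit sphere) so that antipodal points are at distance 1:

```python
from math import radians, sin, cos, asin, sqrt, pi

def norm_distance(p, q):
    """Great-circle distance between (lat, lon) pairs in degrees,
    normalized by half the circumference of the sphere."""
    la1, lo1, la2, lo2 = map(radians, (p[0], p[1], q[0], q[1]))
    h = sin((la2 - la1) / 2) ** 2 + cos(la1) * cos(la2) * sin((lo2 - lo1) / 2) ** 2
    return 2 * asin(min(1.0, sqrt(h))) / pi

def degree_field(A):
    """Degree k_i of every node, i.e. the row sums of A."""
    return [sum(row) for row in A]

def avg_link_distance_field(A, coords):
    """Average link distance d_i of every node (0 for isolated nodes)."""
    field = []
    for i, row in enumerate(A):
        k_i = sum(row)
        total = sum(a_ij * norm_distance(coords[i], coords[j])
                    for j, a_ij in enumerate(row))
        field.append(total / k_i if k_i else 0.0)
    return field

def global_avg_link_distance(A, coords):
    """<<d>>: the average of d_i over all nodes."""
    field = avg_link_distance_field(A, coords)
    return sum(field) / len(field)
```

As a sanity check, a hub at (0\degree, 0\degree) linked to nodes a quarter and half of the way around the equator has $k=2$ and $d = (0.5 + 1.0)/2 = 0.75$.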
\subsubsection{Transitivity}
The network transitivity quantifies how strongly the connectivity of a network is clustered. It describes the degree to which the network's adjacency property is transitive, i.e., the fraction of cases in which the presence of two links between nodes $i$ and $j$ as well as $i$ and $k$ is accompanied by a third link between $j$ and $k$. Mathematically, this is expressed as \citep{boccaletti2006,radebach2013disentangling}
\begin{align}
\mathcal{T} = \frac{\sum_{i,j,k=1}^N A_{ij} A_{jk} A_{ki}}{\sum_{\substack{i,j,k=1\\ i\ne k}}^N A_{ij} A_{jk}}\, .
\end{align}
\noindent
Like the global average link distance $\left<\left<d\right>\right>$ (but unlike degree and average link distance), $\mathcal{T}$ does not define a field, but returns a single scalar value for each network.
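A direct, naive $O(N^3)$ evaluation of the transitivity for small adjacency matrices may be sketched as follows; the degenerate $i=k$ terms are excluded from the denominator so that a fully connected triple attains $\mathcal{T}=1$:

```python
def transitivity(A):
    """Fraction of 2-paths i-j-k (with i != k) closed by a link k-i."""
    n = len(A)
    closed = sum(A[i][j] * A[j][k] * A[k][i]
                 for i in range(n) for j in range(n) for k in range(n))
    paths = sum(A[i][j] * A[j][k]
                for i in range(n) for j in range(n) for k in range(n) if i != k)
    return closed / paths if paths else 0.0

triangle = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
chain = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
print(transitivity(triangle), transitivity(chain))  # 1.0 0.0
```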
\subsubsection{Modularity}
The concept of modularity was introduced into network science by \citet{newman2004finding} to measure the degree of heterogeneity within the network structure, i.e., how well different groups of nodes can be distinguished that are densely connected within each group, but only sparsely among different groups. In the case of a climate network, modularity provides a single scalar-valued characteristic property that discriminates between a relatively homogeneous link placement (low modularity) and the existence of (commonly regionally confined) clusters of nodes (time series) that exhibit relatively coherent variability (high modularity).
The definition of modularity relies upon a partitioning of the network into meaningful subgraphs. Up to a multiplicative constant, it counts how many links are clustered within these subgraphs and compares this value with the expected number of links inside these subgraphs if the network were random,
\begin{equation}
\mathcal{Q} = \frac{1}{2m} \sum_{ij}\left( A_{ij} -\frac{k_i k_j}{2m}\right)\Delta_{ij}\,, \label{eq:modularity}
\end{equation}
where $m$ is the total number of links and $\Delta_{ij}$ an indicator function informing whether or not two nodes $i$ and $j$ belong to the same subgraph in the considered partition.
The individual subgraphs in the partition that maximizes the modularity $\mathcal{Q}$ are called communities. The higher the modularity, the more split-up (or modular) the network. Accordingly, community detection by modularity maximization has become a common tool for cluster analysis.
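Equation \eqref{eq:modularity} can be evaluated directly once a partition is given; in the sketch below, a list `membership` assigning a community label to every node plays the role of the indicator $\Delta_{ij}$:

```python
def modularity(A, membership):
    """Modularity Q of the partition encoded by `membership` (node -> label)."""
    n = len(A)
    k = [sum(row) for row in A]
    two_m = sum(k)  # 2m: each link is counted from both of its endpoints
    return sum((A[i][j] - k[i] * k[j] / two_m) / two_m
               for i in range(n) for j in range(n)
               if membership[i] == membership[j])

# two disconnected dyads: splitting them into two communities gives Q = 0.5,
# lumping all nodes together gives Q = 0
dyads = [[0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]]
print(modularity(dyads, [0, 0, 1, 1]), modularity(dyads, [0, 0, 0, 0]))
```

Community-detection heuristics such as WalkTrap search over partitions to maximize this quantity; the function above only scores a partition that is supplied to it.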
While the above definition of modularity is mathematically precise, its maximization is a hard computational problem and can only be achieved by making use of suitable heuristics. Various estimation algorithms have been proposed \citep{fortunato2010community}. It should be emphasized that many of them can result in suboptimal solutions. Thus, a good choice of the algorithm is important for obtaining reliable results. In this work, we employ the \textit{WalkTrap} method introduced by \citet{pons2006computing}. By comparing the results provided by this algorithm with those of other methods (cf. \Cref{app:modularity-comparison}), we have found that the WalkTrap solution exhibits comparably high values of modularity and relatively stable values in case of strongly overlapping windows (i.e., in cases where the considered data do not change much).
\subsection{Regionalization of field measures}
\label{sec:localization-of-field-measures}
As detailed above, node degree and average link distance constitute two important local network characteristics. In some of our following investigations, it will be useful to study the associated spatial fields in full detail. However, when focusing on the specific impacts of certain climate phenomena, it can be beneficial to perform a regionalization of these measures. Specifically, for a subset of nodes $\mathcal{X} \subseteq \left\{ 1, \dots, N \right\}$ representing a certain part of the globe, a regionalized version of the degree would be given as
\begin{equation}
k_{\mathcal{X}} = \frac{1}{|\mathcal{X}|}\sum_{i \in \mathcal{X}} k_i\,, \label{eq:localization-procedure}
\end{equation}
where $|\mathcal{X}|$ denotes the number of nodes in the considered set. As a consequence, we can not only assign a degree value to an individual node, but also (as a mean degree) to a subgraph. Note that this regionalized degree differs from the concepts of cross-degree and cross-link density between subgraphs \citep{donges2011investigating}, since unlike $k_{\mathcal{X}}$, the latter exclude contributions due to links between nodes within $\mathcal{X}$ in their definition.
For the average link distance, the corresponding regionalized property $d_{\mathcal{X}}$ can be defined in full analogy.
Below, we detail some reasonable choices for $\mathcal{X}$ to be utilized in the context of the present work, which focus on specific spatially contiguous regions of the Earth's surface that are associated with ENSO or volcanoes with strong past eruptions.
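The regionalization of Eq.~\eqref{eq:localization-procedure} itself is a plain average of a node-wise field over an index set; a one-function sketch:

```python
def regionalized(field, region):
    """Average of a per-node field (degree, average link distance, ...)
    over the node indices collected in `region`."""
    return sum(field[i] for i in region) / len(region)

degrees = [4, 2, 6, 8]   # hypothetical degree field
nino_like = {0, 2}       # hypothetical node subset of a Nino box
print(regionalized(degrees, nino_like))  # 5.0
```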
\subsubsection{El Ni{\~{n}}o--Southern Oscillation regions}
\label{sec:localization-ENSO}
As already detailed in Sect.~\ref{sec:background-ENSO}, there exist a variety of indices that measure the ``strength'' of a particular ENSO state. Among others, four Nino regions (1+2, 3, 4 and 3.4) have been defined to capture SST anomalies in different parts of the tropical Pacific.
The regionalization approach described above can be applied to these four regions by taking all nodes located within the respective spatial domains and applying Eq.~\eqref{eq:localization-procedure}. Thereby, we obtain a set of eight new scalar-valued characteristics: $k_{\text{Nino1+2}}$, $d_{\text{Nino1+2}}$, $k_{\text{Nino3}}$, $d_{\text{Nino3}}$, $k_{\text{Nino4}}$, $d_{\text{Nino4}}$, $k_{\text{Nino3.4}}$ and $d_{\text{Nino3.4}}$. In order to reduce this vast amount of information, in what follows, we will not make use of the (anyway less frequently studied) Nino1+2 region, but focus on the Nino3.4 region (which is also the basis of the nowadays most common ONI index) and its two contributors, Nino3 and Nino4.
\subsubsection{Volcano regions}
\label{sec:localization-volcanoes}
The locations of the three volcanoes responsible for the largest eruptions of the recent decades are shown in Fig.~\ref{fig:map-regions-of-interest}. To obtain interpretable information on the (tele-) connectivity induced by these eruptions, we need to integrate the connectivity properties of a sufficiently large amount of meaningfully chosen grid cells. As a first attempt, we therefore take the area within a radius of $5\degree$ around each volcano as basis for the regionalization procedure of $k_i$. This leads to the three observables $k_{\textrm{Pinatubo}}$, $k_{\textrm{Agung}}$ and $k_{\textrm{Chichon}}$. For the average link distance, one could again proceed in a similar way.
However, the aforementioned choice might not be optimal, since symmetric spatial regions in the near-field do not necessarily exhibit the strongest persistent temperature effects after an eruption. Instead, the specific local meteorological conditions (especially wind fields) during the eruption period largely control the three-dimensional patterns of atmospheric aerosol concentrations and, hence, the position of the strongest mid-term cooling to be expected. Accordingly, the induced teleconnectivity can be more evident within regions that have been shifted with respect to the locations of the volcanoes. To account for this, we also calculate regionalized degrees for accordingly shifted regions (see Sect.~\ref{sec:results-volcanoes} for details), denoted as $k^\prime_{\textrm{Pinatubo}}$, $k^\prime_{\textrm{Agung}}$ and $k^\prime_{\textrm{Chichon}}$. Here, the specific regions have been selected according to an examination of the resulting degree fields of the SAT networks for time windows succeeding the individual eruptions and the corresponding wind fields, seeking for the timing and position of the strongest anomalies in the degree field that could be attributed to each eruption (see below). Note that although a volcanic eruption may start relatively abruptly, its larger-scale atmospheric effects commonly become effective only with a considerable delay of several months or more \citep{Robock2000,McCormick1995}.
\section{Results}
\label{sec:results}
In the following, we present the results of our functional network analysis of global SAT patterns with a focus on the associated imprints of ENSO. Subsequently, we turn to analyzing and discussing the excess connectivity induced by strong volcanic eruptions.
\subsection{El Ni{\~{n}}o--Southern Oscillation}
\label{sec:results-ENSO}
\newlength{\timeserieslen}
\setlength\timeserieslen{0.9\textwidth}
Let us start with investigating the global effects of ENSO on the spatio-temporal co-variability structure of global SAT. From a complex network perspective, this problem has already been addressed in a variety of previous studies \citep[e.g.][and various others]{tsonis2008prl,yamasaki2008,gozolchiani2008,radebach2013disentangling,Wiedermann2016a,Fan2017}, making use of different approaches for constructing network structures from global climate data. However, none of these works has considered the complementarity between topological and spatial network properties in great detail, nor utilized the concepts of modularity and global average link distance that constitute key aspects of this paper and provide important new insights as demonstrated in the following.
\subsubsection{Global network properties}
\begin{figure*}
\centering
\includegraphics[width=0.8\textwidth]{enso-colorbar.pdf} \\
\subfloat{\includegraphics[width = \timeserieslen]{global-transitivity.pdf}\label{fig:ts-global-transitivity}} \\
\subfloat{\includegraphics[width = \timeserieslen]{modularity-walktrap.pdf}\label{fig:ts-modularity}} \\
\subfloat{\includegraphics[width = \timeserieslen]{global-avg-link-length-field.pdf}\label{fig:ts-global-link-length}}
\caption{Time series of climate network (a) transitivity $\mathcal{T}$, (b) modularity $\mathcal{Q}$ and (c) global average link distance $\left<\left<d\right>\right>$. Background colors highlight different ENSO phases (red: El Ni\~{n}o (EN), blue: La Ni\~{n}a (LN)) according to the Ocean Ni\~no Index (ONI), with opacity representing the corresponding index value. Ticks on the time axis indicate the 1st of January of a given year; all values are shown according to the midpoint dates of the respective time windows.}
\label{fig:ts-global}
\end{figure*}
The network transitivity $\mathcal{T}$ has been shown by \cite{Wiedermann2016a} to systematically discriminate between the EP and CP flavours of both El Ni\~no and La Ni\~na. While this reference used an area-weighted version of $\mathcal{T}$ and included information on the total pairwise correlation strength instead of just binary adjacency information, we follow here the approach of \citet{radebach2013disentangling} in using a remapping onto an icosahedral grid. Figure~\ref{fig:ts-global-transitivity} shows the corresponding results obtained using our slightly modified data set, which are, as expected, qualitatively almost indistinguishable from those of the two aforementioned studies\footnote{Note, however, that Fig.~\ref{fig:ts-global} shows the results for different network measures as a function of the window midpoint, while \citet{Wiedermann2016a} used the endpoint, leading to a 6-month shift between the respective plots.}.
In order to further quantify the strength of teleconnectivity in the global SAT field, the network modularity $\mathcal{Q}$ provides a promising candidate measure that has not yet been exploited for this purpose in previous studies. Recall that a high modularity indicates a fragmented network, whereas low values point to a relatively homogeneous connectivity structure of the network as a whole. Hence, a marked decrease in modularity could indicate an increase in the degree of organization of the global SAT network, i.e., a tendency towards more balanced co-variability in global temperatures.
Figure~\ref{fig:ts-modularity} shows that most time intervals that are characterized by elevated values of network transitivity actually exhibit a marked reduction in modularity. Consistent with previous findings of \citet{radebach2013disentangling}, most of these time windows in fact coincide with either some El Ni\~no or La Ni\~na phase, indicating again the global impact of these episodes in terms of equilibrating spatial co-variability in the Earth's SAT field. This can be considered as an expected signature of emerging teleconnectivity. Note that taken alone, this process would not necessarily imply a stronger \emph{synchronization} (as studied by, e.g., \citet{Maraun2005}) between climate variability in distinct regions, which would be reflected by higher absolute correlation values. Specifically, in this work, we use a fixed link density of 0.5\% in all window-specific climate networks and thus cannot make any statements about the overall strength of correlations. However, following previous results by \citet{radebach2013disentangling}, we may actually expect that the correlation threshold $q_{\left|c\right|,0.995}$ used for establishing network connectivity in this work exhibits maxima whenever $\mathcal{T}$ shows a peak, thereby supporting the hypothesis of El Ni\~no and La Ni\~na episodes synchronizing global SAT variability by establishing teleconnections.
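The construction just described, thresholding absolute correlations at a fixed link density of 0.5\% and evaluating topological measures on the resulting graph, can be sketched as follows. This is hypothetical illustration code, not the actual pipeline (which operates on an icosahedral grid and derives communities with the walktrap algorithm); here the community partition is simply an input to the modularity function.

```python
import numpy as np

def network_from_correlations(corr, link_density=0.005):
    """Adjacency matrix keeping the strongest |corr| pairs at a fixed link density."""
    c = np.abs(np.array(corr, dtype=float))
    np.fill_diagonal(c, 0.0)
    iu = np.triu_indices(c.shape[0], k=1)
    thresh = np.quantile(c[iu], 1.0 - link_density)  # plays the role of q_{|c|,0.995}
    A = (c >= thresh).astype(int)
    np.fill_diagonal(A, 0)
    return A

def transitivity(A):
    """Fraction of closed triplets: tr(A^3) / (sum(A^2) - tr(A^2))."""
    A2 = A @ A
    triplets = A2.sum() - np.trace(A2)
    return np.trace(A2 @ A) / triplets if triplets else 0.0

def modularity(A, labels):
    """Newman modularity Q of a hard partition given as per-node community labels."""
    m = A.sum() / 2.0
    k = A.sum(axis=1)
    labels = np.asarray(labels)
    same = labels[:, None] == labels[None, :]
    return float(((A - np.outer(k, k) / (2.0 * m)) * same).sum() / (2.0 * m))
```

For intuition: a network consisting of two disconnected triangles yields $\mathcal{T}=1$ and, for the natural two-community partition, $\mathcal{Q}=0.5$, i.e., maximal local clustering together with a strongly fragmented (high-modularity) global structure.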
Regarding the latter observation, it is remarkable that previous works by other authors rather reported a reduction of connectivity associated with a breakdown of synchronization due to the large-scale climate disruption triggered by El Ni\~no events \citep{yamasaki2008}. In fact, this observation has been recently used to develop network-based forecasting strategies for El Ni\~no \citep{ludescher2013improved,ludescher2014very}. However, the apparent contradiction between the latter results and our observations can be resolved when taking the different approaches of network construction used in the respective works into account.
Beyond their overall large-scale similarity, the temporal variability profiles of transitivity and modularity also exhibit some important differences. In particular, the strong 1982/83 El Ni\~no is represented as a single long episode of reduced modularity values while being split into two rather distinct peaks in transitivity (see Fig.~\ref{fig:ts-global-transitivity} and \ref{fig:ts-modularity}). Given the known seasonal profile of El~Ni\~no, peaking around Christmas, it is remarkable that the ONI remained high over a rather long period of time, indicating a single extended event despite the temporary decay that is captured by $\mathcal{T}$ but not by $\mathcal{Q}$. This underlines that both measures capture different aspects of network organization and thus provide complementary information.
Another notable observation relates to the abrupt shift from El Ni\~no to La Ni\~na conditions in summer 1998, leading to a very fast reorganization of the global SAT field. This transition is reflected by a negative anomaly of $\mathcal{T}$ in summer 1998 in comparison with the ``normal'' background values of this measure, which represents a unique feature in the time evolution of network transitivity over the last decades that is not accompanied by any corresponding anomaly in $\mathcal{Q}$.
Taken together, modularity and transitivity evolve similarly at larger time scales, but provide complementary viewpoints. High transitivity commonly coincides with the temporary appearance of densely connected structures in the functional climate network, which are typically well localized in space \citep{radebach2013disentangling}. In turn, modularity captures the global connectivity pattern rather than primarily local features. Specifically, a low modularity value actually highlights more \emph{global connections} in the climate network.
While network transitivity and modularity represent two key topological network characteristics, functional climate networks are systems embedded in geographical space. Thus, the spatial placement of nodes and links (which is disregarded by topological characteristics) can play a pivotal role in network structure formation \citep{radebach2013disentangling}. In order to address this aspect, we present the temporal evolution of the global average link distance $\left<\left<d\right>\right>$ in Fig.~\ref{fig:ts-global-link-length}. Notably, this measure exhibits more irregular variability, with a less clear distinction between ``background level'' and ``anomalies'' associated with different types of climate disruptions, than the two topological characteristics studied above. Yet, the general behaviour of $\left<\left<d\right>\right>$ resembles that of network transitivity in the sense that ENSO-related peaks often co-occur in both measures. This indicates that strong El Ni\~no and La Ni\~na episodes do not exclusively trigger short-range (localized) connectivity (high $\mathcal{T}$), but also global teleconnectivity (high $\left<\left<d\right>\right>$), which is in line with contemporary knowledge on the large-scale impacts of both types of ENSO phases. Notably, this result is in agreement with previous qualitative results of \citet{radebach2013disentangling} on the link distance distribution of global SAT networks.
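The spatial measures can be sketched in a few lines of numpy; this is an illustrative sketch whose conventions may differ from the actual implementation. Link lengths are taken as great-circle (haversine) distances, $d_i$ is the mean length of all links attached to node $i$, and $\left<\left<d\right>\right>$ is here taken as the mean of $d_i$ over all nodes.

```python
import numpy as np

EARTH_RADIUS_KM = 6371.0

def great_circle_distance(lat1, lon1, lat2, lon2):
    """Haversine distance (km) between points given in degrees; broadcasts over arrays."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dphi = p2 - p1
    dlam = np.radians(lon2) - np.radians(lon1)
    a = np.sin(dphi / 2.0) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlam / 2.0) ** 2
    return 2.0 * EARTH_RADIUS_KM * np.arcsin(np.sqrt(a))

def average_link_distance_field(A, lats, lons):
    """Nodal average link distance d_i; isolated nodes receive 0."""
    D = great_circle_distance(lats[:, None], lons[:, None],
                              lats[None, :], lons[None, :])
    k = A.sum(axis=1)
    return (A * D).sum(axis=1) / np.maximum(k, 1)

def global_average_link_distance(A, lats, lons):
    """<<d>>: here taken as the mean of d_i over all nodes."""
    return float(average_link_distance_field(A, lats, lons).mean())
```

Averaging over links instead of over nodes would be an equally plausible convention and yields somewhat different weights for high-degree nodes.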
From the results discussed above, we tentatively conclude that in order to distinguish globally influential ENSO events from episodes of minor (or more regional) relevance, a combination of modularity and average link distance can be useful, taking a holistic view in studying the differential imprints of different types of ENSO phases. We will recall this strategy when discussing the effects of volcanic eruptions on network organization at a global scale.
\subsubsection{Spatial patterns of network connectivity}
\begin{figure*}
\centering
\newlength\figurewidth
\setlength\figurewidth{0.4\textwidth}
\newlength\figuredistance
\setlength\figuredistance{0.08\textwidth}
\includegraphics[width=\figurewidth]{colorbar-degree-field.jpg} \hspace{\figuredistance} \includegraphics[width=\figurewidth]{colorbar-avg-link-length-field.jpg} \\
\subfloat{\label{fig:composite-marc-en-ep-degree}\includegraphics[width=\figurewidth]{mark-EN-ep-degree-field.jpg}} \hspace{\figuredistance} \subfloat{\label{fig:composite-marc-en-ep-link-length}\includegraphics[width=\figurewidth]{mark-EN-ep-avg-link-length-field.jpg}} \\
\subfloat{\label{fig:composite-marc-en-cp-degree}\includegraphics[width=\figurewidth]{mark-EN-cp-degree-field.jpg}} \hspace{\figuredistance} \subfloat{\label{fig:composite-marc-en-cp-link-length}\includegraphics[width=\figurewidth]{mark-EN-cp-avg-link-length-field.jpg}} \\
\subfloat{\label{fig:composite-marc-ln-ep-degree}\includegraphics[width=\figurewidth]{marc-LN-ep-degree-field.jpg}} \hspace{\figuredistance} \subfloat{\label{fig:composite-marc-ln-ep-link-length}\includegraphics[width=\figurewidth]{marc-LN-ep-avg-link-length-field.jpg}} \\
\subfloat{\label{fig:composite-marc-ln-cp-degree}\includegraphics[width=\figurewidth]{marc-LN-cp-degree-field.jpg}} \hspace{\figuredistance} \subfloat{\label{fig:composite-marc-ln-cp-link-length}\includegraphics[width=\figurewidth]{marc-LN-cp-avg-link-length-field.jpg}} \\
\subfloat{\label{fig:composite-other-degree}\includegraphics[width=\figurewidth]{other-degree-field.jpg}} \hspace{\figuredistance} \subfloat{\label{fig:composite-other-link-length}\includegraphics[width=\figurewidth]{other-avg-link-length-field.jpg}}
\caption{Global maps showing composites of (a,c,e,g,i) degree $k_i$ and (b,d,f,h,j) average link distance $d_i$ for different types of ENSO phases:
(a,b) EP El Niño,
(c,d) CP El Niño,
(e,f) EP La Niña,
(g,h) CP La Niña
and (i,j) all other periods. The corresponding classification of different years is summarized in Tab.~\ref{tab:summary-ep-cp-events}.
}
\label{fig:composites}
\end{figure*}
The above analysis of global network properties has largely confirmed some known effects of certain ENSO phases on the spatial co-variability structure of the global SAT field. Drawing upon the insight that topological and spatial network measures can provide different perspectives on the corresponding network patterns, we now turn to investigating the geographical characteristics of the generated functional climate networks. Specifically, following recent observations that climate network properties distinguish between the EP and CP flavours of both El Ni\~no and La Ni\~na \citep{Wiedermann2016a}, we are interested in the question of how the associated (tele-)connectivity structures are manifested in the respective spatial fields of degree and average link distance. For this purpose, Fig.~\ref{fig:composites} shows composite plots of the spatial patterns exhibited by both network properties during the different types of ENSO phases, averaging the local network properties over all time windows that are classified as showing either of these situations (see Tab.~\ref{tab:summary-ep-cp-events}).
The left panels of Fig.~\ref{fig:composites} display the respective mean degree fields for the different types of ENSO periods.
As expected, we observe a particularly strong deviation from a homogeneous pattern during EP El Ni\~nos (Fig.~\ref{fig:composite-marc-en-ep-degree}), while the degrees in the eastern-to-central tropical Pacific are only slightly larger than in the rest of the network during time windows without El Ni\~no or La Ni\~na conditions (Fig.~\ref{fig:composite-other-degree}). This general behaviour is expected from previous studies \citep{Wiedermann2016a}. Still, the observed degree patterns alone do not allow us to distinguish between a local or global phenomenon. For this purpose, the right panels of Fig.~\ref{fig:composites} show the corresponding mean average link distance fields for each type of situation. Elevated values of this measure in the typical ENSO region are present in case of all four possible types of episodes, indicating that both flavours of El Ni\~no and La Ni\~na actually generate additional connections in the tropical Pacific that span relatively large distances.
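Computationally, such composites amount to averaging the window-wise nodal fields over all windows assigned to one ENSO class. A minimal sketch (with hypothetical class labels; the actual classification follows Tab.~\ref{tab:summary-ep-cp-events}) could read:

```python
import numpy as np

def degree_field(A):
    """Nodal degree k_i of one window-specific climate network."""
    return A.sum(axis=1)

def composite_field(window_fields, window_labels, phase):
    """Mean of a nodal field over all time windows classified as `phase`
    (hypothetical labels, e.g. 'EN-EP', 'EN-CP', 'LN-EP', 'LN-CP', 'other')."""
    stack = np.array([f for f, lab in zip(window_fields, window_labels)
                      if lab == phase])
    return stack.mean(axis=0)
```

The same helper applies unchanged to the average link distance field, so degree and $d_i$ composites are guaranteed to be averaged over identical sets of windows.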
Analyzing the composite maps of the average link distance in more detail, it is important to note that beyond the ENSO region itself, additional parts of the globe exhibit elevated values. This indicates the presence of localized teleconnections that possibly link climate variability in the latter regions with ENSO. Specifically, EP El Ni\~nos (Fig.~\ref{fig:composite-marc-en-ep-link-length}) exhibit such teleconnections with Indonesia and Western Africa, which are also recovered for EP La Ni\~nas (Fig.~\ref{fig:composite-marc-ln-ep-link-length}). For CP El Ni\~nos (Fig.~\ref{fig:composite-marc-en-cp-link-length}), the $d_i$ field highlights a weak connection with Western Africa, but none with Indonesia. Similar but still weaker teleconnections can be observed for CP La Ni\~nas (Fig.~\ref{fig:composite-marc-ln-cp-link-length}).
Among the aforementioned patterns, the apparent teleconnection with Indonesia present during EP events but not during their CP counterparts is particularly interesting, as it is localized in the westernmost tropical Pacific. Thus, it connects the eastern and western Pacific while not leading to marked long-distance connections in the central Pacific close to the dateline. One appealing explanation of this finding could be that the corresponding link is mediated via the Walker circulation \citep{Ashok2009} and, thus, via airflow at higher altitudes rather than near-surface atmospheric circulation. However, it has to be noted that our analysis is based on cross-correlations only, whose values can be severely affected by distinct temporal persistence properties of SAT in the eastern and western tropical Pacific, as pointed out by recent studies making use of modern causal inference methods \citep{Balasis2013,Runge2014}. Accounting for this effect by replacing the correlation values with associated significance levels in the network generation step \citep{Palus2011} could provide a useful yet computationally demanding avenue for future research on this topic. From an impact perspective, the teleconnection suggested by our results is compatible with the documented increased likelihood of droughts in Indonesia during El Ni\~no events \citep{diaz2001enso}.
The apparent teleconnection with Western Africa spans a rather large spatial distance (about one third of the globe). In this context, \citet{joly2009influence} noted that ``a significant part of the West African monsoon (WAM) interannual variability can be explained by the remote influence of El Niño–Southern Oscillation (ENSO)''. This previously reported teleconnection could be responsible for the elevated average link distance over Western Africa, especially during EP El Ni\~nos.
In general, correlations of climate variability within the tropics are typically stronger than those between tropics and extratropics, which is mainly due to the cellular structure of the meridional atmospheric circulation that effectively decouples tropics and extratropics. In this regard, the omnipresent slightly elevated average link distance values in the polar regions are most likely data artifacts not corrected by our remapping procedure rather than indications of actual teleconnections.
\subsubsection{Regionalized network characteristics}
\begin{figure*}
\centering
\includegraphics[width=0.8\textwidth]{enso-colorbar.pdf} \\
\subfloat{\includegraphics[width = \timeserieslen]{nino-3-4-region-degree-field.pdf}\label{fig:ts-degree-3.4}} \\
\subfloat{\includegraphics[width = \timeserieslen]{nino-3-region-degree-field.pdf}\label{fig:ts-degree-3}} \\
\subfloat{\includegraphics[width = \timeserieslen]{nino-4-region-degree-field.pdf}\label{fig:ts-degree-4}} \\
\subfloat{\includegraphics[width = \timeserieslen]{nino-3-4-region-avg-link-length-field.pdf}\label{fig:ts-link-length-3.4}} \\
\subfloat{\includegraphics[width = \timeserieslen]{nino-3-region-avg-link-length-field.pdf}\label{fig:ts-link-length-3}} \\
\subfloat{\includegraphics[width = \timeserieslen]{nino-4-region-avg-link-length-field.pdf}\label{fig:ts-link-length-4}}
\caption{Time series of different regionalized climate network properties introduced in Sect.~\ref{sec:localization-ENSO}. Background colours and time axis as in Fig.~\ref{fig:ts-global}. }
\label{fig:time-series-localized}
\end{figure*}
Global and local climate network properties as discussed above still provide only incomplete information on the effects of climate variability in different parts of the ENSO region on global SAT. To obtain further insights into this aspect, we now turn to analyzing the regionalized field measures introduced in Sect.~\ref{sec:localization-of-field-measures} and study the specific connectivity associated with the Nino3.4, Nino3 and Nino4 regions in terms of degree and average link distance.
The corresponding results are summarized in Fig.~\ref{fig:time-series-localized}. We observe that the relative magnitude of variations of regionalized degree and average link distance is even stronger than that of the global network properties transitivity, modularity and global average link distance discussed above. All measures exhibit episodes of very small values alternating with episodes of much larger values, the latter often coinciding with El Ni\~no and La Ni\~na phases. Since the corresponding regions were specifically chosen for defining ENSO-related indices, this result is expected. Most importantly, the degree- and average link distance-based characteristics exhibit strong positive correlations. Notably, for climatic events with a predominantly local structure, we would expect a strong increase of $k_i$ but only a weaker increase of $d_i$ in the region under study. Hence, our corresponding observations underline that ENSO-related climate impacts are not confined to the vicinity of the ENSO region, but are controlled by large-scale teleconnections.
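One plausible reading of these regionalized measures (a sketch; the actual definition is given in Sect.~\ref{sec:localization-of-field-measures}) is the mean of a nodal field over the standard Nino boxes, whose boundaries are well established (Nino3.4: 5$\degree$S--5$\degree$N, 170$\degree$W--120$\degree$W; Nino3: 5$\degree$S--5$\degree$N, 150$\degree$W--90$\degree$W; Nino4: 5$\degree$S--5$\degree$N, 160$\degree$E--150$\degree$W):

```python
import numpy as np

# Standard Nino index boxes as (lat_min, lat_max, lon_min, lon_max), lon in [0, 360)
NINO_BOXES = {
    "nino3.4": (-5.0, 5.0, 190.0, 240.0),   # 170W-120W
    "nino3":   (-5.0, 5.0, 210.0, 270.0),   # 150W-90W
    "nino4":   (-5.0, 5.0, 160.0, 210.0),   # 160E-150W
}

def region_mask(lats, lons, box):
    """Boolean mask of grid nodes inside a lat/lon box (lon taken modulo 360)."""
    lat0, lat1, lon0, lon1 = box
    lon = np.mod(lons, 360.0)
    return (lats >= lat0) & (lats <= lat1) & (lon >= lon0) & (lon <= lon1)

def regionalized_mean(field, mask):
    """Mean of a nodal field (degree or average link distance) over a region."""
    return float(field[mask].mean())
```

Expressing all longitudes modulo 360 sidesteps the dateline crossing of the Nino4 box; an area-weighted mean would be a natural refinement on latitude-longitude grids.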
Since the different ENSO regions show partial overlap (cf.~Fig.~\ref{fig:map-regions-of-interest}), the results obtained for the individual regions exhibit a high degree of similarity. However, regarding specific El Ni\~no or La Ni\~na episodes, comparing the corresponding signatures for the Nino3 and Nino4 regions still allows attributing these events to more Eastern Pacific or Central Pacific types. For example, the strong 1997/98 El Ni\~no is reflected by very high values of the regionalized degree for the Nino3 and Nino3.4 regions, but relatively weak signatures in the more western Nino4 region, which is consistent with its classification as an EP type event.
Examining the time evolution of all six regionalized network measures in some detail, it is notable that between 1978 and 1982, there was considerable variability in all measures, pointing towards an episodic presence of teleconnections even though none of the time windows was classified as an El Ni\~no or La Ni\~na episode according to the ONI. Moreover, we find that before the year 2000, clear peaks in all properties alternate with periods of low values. In turn, during roughly the last 15 years, we rather find strong variability without any low background level, with peaks occurring almost annually, except in 2013 and 2014. This change in the overall temporal variability pattern of our regionalized network measures might point to some fundamental changes in the spatio-temporal organization of global SAT, either due to some not yet identified mode of natural variability or as a result of external interference. An attribution of this observation is, however, beyond the scope of the present work.
\subsection{Volcanic eruptions}
\label{sec:results-volcanoes}
Besides ENSO variability, strong volcanic eruptions have been identified as causes of marked disruptions in climate network properties in earlier studies \citep{radebach2013disentangling}. In this context, the application of the complementary viewpoints as used in this work for further characterizing the impacts of such eruptions promises interesting additional insights.
Regarding the global network properties, let us turn back to Fig.~\ref{fig:ts-global}. As already emphasized in our discussion of the corresponding imprints of different ENSO phases, anomalies in transitivity and modularity need to be interpreted differently in terms of global versus more regional changes in climate network connectivity. While EP El Ni\~nos and La Ni\~nas (but not their CP counterparts) consistently show peaks in transitivity coinciding with breakdowns in modularity, a similar signature has been found in the aftermath of the Mount Pinatubo eruption, suggesting that this event affected the climate system globally. However, when comparing these topological network characteristics with the spatial network property of global average link distance $\left<\left<d\right>\right>$, we find a marked difference. Specifically, for ENSO-related climate disruptions, both $\left<\left<d\right>\right>$ and $\mathcal{T}$ show the same direction of changes (i.e., peaks) with only few exceptions. In turn, we observe an opposite behaviour of both measures in 1993, which corresponds to the time windows where the cooling effects following the Mount Pinatubo eruption should have reached their maximum \citep{McCormick1995}. Hence, unlike for ENSO-related disruptions, the peak in transitivity and the simultaneous drop in $\left<\left<d\right>\right>$ indicate that the effects of the volcanic eruption have rather been regionally confined. The latter is consistent with the hypothesis of elevated correlations in the region that was most directly affected by the cooling trend following the eruption. Based on this observation, we suggest that using the global average link distance in conjunction with network transitivity and modularity enables us to discern disruptive events with global effects (strong ENSO phases) from those exhibiting more regional impacts (volcanic eruptions).
\begin{figure*}
\centering
\subfloat{\includegraphics[width = \textwidth]{pinatubo-degree-field.jpg}\label{fig:pinatubo-ts-degree}} \subfloat{\label{fig:pinatubo-ts-link-length}} \subfloat{\label{fig:pinatubo-map}} \\
\subfloat{\includegraphics[width = \textwidth]{agung-degree-field.jpg}\label{fig:agung-ts-degree}} \subfloat{\label{fig:agung-ts-link-length}} \subfloat{\label{fig:agung-map}} \\
\subfloat{\includegraphics[width = \textwidth]{el-chichon-degree-field.jpg}\label{fig:el-chichon-ts-degree}} \subfloat{\label{fig:el-chichon-ts-link-length}} \subfloat{\label{fig:el-chichon-map}}
\caption{Time series of regional mean degree and associated degree field (excluding the ENSO-big region indicated by white color) for the three main volcanic eruptions during the study period: (a-c) Mount Pinatubo, (d-f) Mount Agung, (g-i) El Chichon. In the degree maps shown in panels (c), (f) and (i), blue dots mark grid points within a radius of 5$\degree$ around each volcano, which have been used to define the regionalized degrees shown in panels (a), (d) and (g), respectively. Red dots indicate spatially shifted regions of the same size where the largest changes to the degree field have been observed. These regions serve as the basis for computing the regionalized degrees shown in panels (b), (e) and (h), respectively. Purple vertical lines indicate the timing of the respective eruptions, whereas green vertical lines indicate the midpoints of the time windows exhibiting the strongest signature in the regionalized network properties. The time series have been restricted to $\pm$10 years around the date of the respective eruption. Background colours indicate the corresponding ENSO strength as in Fig.~\ref{fig:ts-global}.}
\label{fig:volcanoes}
\end{figure*}
In general, one notable difference in comparison with ENSO-related impacts is that large-scale effects of volcanic eruptions on global SAT teleconnectivity can be observed only after a sufficiently large amount of aerosols has entered the stratosphere \citep{Robock2000}. Accompanying the resulting time shift between trigger event and response, we may also need to consider a spatial shift of the most affected region as compared to the location of the volcano. In the following, we apply the regionalization procedure described in Sect.~\ref{sec:localization-volcanoes} to studying the impacts of the Mount Pinatubo, Mount Agung and El Chichon eruptions. In order to avoid interference with the effects of ENSO events, the ENSO-big region depicted in Fig.~\ref{fig:map-regions-of-interest} is excluded from the corresponding computations. The results obtained from this analysis are summarized in Fig.~\ref{fig:volcanoes}.
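The volcano-centred selection can be sketched as follows (hypothetical code; the paper's actual procedure is described in Sect.~\ref{sec:localization-volcanoes}): take all grid points within a 5$\degree$ angular radius of the volcano, optionally drop nodes inside the ENSO-big mask, and average the degree field over the remainder.

```python
import numpy as np

def angular_distance_deg(lat1, lon1, lat2, lon2):
    """Central angle (degrees) between two points via the spherical law of cosines."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dlam = np.radians(lon2 - lon1)
    cosang = np.clip(np.sin(p1) * np.sin(p2)
                     + np.cos(p1) * np.cos(p2) * np.cos(dlam), -1.0, 1.0)
    return np.degrees(np.arccos(cosang))

def volcano_region_mean_degree(degree, lats, lons, lat0, lon0,
                               radius_deg=5.0, exclude_mask=None):
    """Mean degree over grid points within `radius_deg` of the volcano location,
    optionally excluding nodes flagged by an ENSO mask."""
    sel = angular_distance_deg(lats, lons, lat0, lon0) <= radius_deg
    if exclude_mask is not None:
        sel &= ~exclude_mask
    return float(degree[sel].mean())
```

The spatially shifted ``major impact regions'' can then be found by sweeping the centre point $(lat_0, lon_0)$ over candidate locations and keeping the one with the strongest post-eruption degree response.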
The largest of the three considered eruptions (Mount Pinatubo) had a global cooling effect and left clearly visible signatures in all considered global network measures, as discussed above. Some months after the eruption, a large region of elevated network connectivity was established, covering essentially all of the western tropical Pacific (Fig.~\ref{fig:pinatubo-map}). The temporal evolution of the average degree in the region around Mount Pinatubo displays an abrupt rise about half a year after the eruption, followed by a persistently high value for about one year (the common residence time of volcanic aerosols in the stratosphere) before dropping back to its previous level (Fig.~\ref{fig:pinatubo-ts-degree}). The region with the highest degrees is shifted northward with respect to the location of the volcano (Fig.~\ref{fig:pinatubo-map}). When computing the average degree for this region, we observe an even stronger rise of the regionalized degree than for the region surrounding the volcano (Fig.~\ref{fig:pinatubo-ts-link-length}).
The Mount Agung eruption exhibits similar, but weaker, patterns in the respective region (Fig.~\ref{fig:agung-map}). However, the region with the highest degree is shifted south-westward. The average degree in the region surrounding Mount Agung shows only weak changes after the eruption (Fig.~\ref{fig:agung-ts-degree}), as opposed to a somewhat sharper increase in the shifted region, with the peak effect occurring considerably faster after the beginning of the eruption than in the case of the Mount Pinatubo eruption (Fig.~\ref{fig:agung-ts-link-length}). However, it should be noted that some upward trend in the regionalized degree already begins before the actual eruption, pointing to a possible interference with normal natural variability.
Unlike the two other volcanic eruptions, the degree field in the period succeeding the El Chichon eruption showed hardly any marked changes (Fig.~\ref{fig:el-chichon-map}). Consequently, we also do not observe any marked signature in the temporal variability profile of the regionalized degree in the surroundings of the volcano (Fig.~\ref{fig:el-chichon-ts-degree}). Instead of a peak shortly after the eruption, we actually find a clear drop of the average degree. However, given that El Chichon is located relatively close to the extended ENSO region, we cannot rule out that this could be an effect of the strong El Ni\~no event occurring shortly after the eruption and eventually even being partially triggered by the latter \citep{Khodri2017}. In general, previous studies indicate that the El Chichon eruption caused a much weaker summer cooling than the Mount Agung eruption \citep{Man2014}, which could also explain its absent signature in our analysis.
\section{Conclusions}
We have used functional climate networks constructed from spatial correlations of daily global surface air temperature (SAT) anomalies to analyze the global impact and teleconnections of past El Niño and La Niña events as well as volcanic eruptions. By making use of the global network property of modularity, we have found that at least the East Pacific flavours of such events lead to a \emph{global reconfiguration} of SAT variations. Considering the global average link distance as a complementary spatial network characteristic, we have identified distinct qualitative differences between the imprints of these ENSO periods and the Mount Pinatubo eruption in global SAT patterns.
Using composites of the degree and average link distance fields, we have identified hallmarks of distinct ENSO teleconnections in the climate network structure, especially such linking the eastern tropical Pacific with Indonesia and West Africa during East Pacific El Ni\~nos, both of which have already been reported in previous studies \citep{diaz2001enso,joly2009influence}. By making use of a regionalization procedure applied to these two fields of local network properties, we have introduced a simple yet effective tool to unveil the differential roles of different regions in the tropical Pacific in establishing teleconnections during different El Ni\~no and La Ni\~na events.
Finally, we have analyzed the global and local connectivity properties of SAT-based climate networks in the aftermath of the strongest recent volcanic eruptions of Mount Pinatubo, Mount Agung and El Chichon. In particular, while the Mount Pinatubo eruption has been confirmed to exhibit marked impacts on global SAT, its dominating effect was rather regional (i.e., it did not trigger long-range teleconnections detectable by our approach).
While most of the results presented in this work rely on a qualitative analysis of temporal changes in the climate network properties, additional statistical quantification of their relationship with existing indicators of ENSO variability and volcanic eruptions' impacts might further strengthen our findings. Regarding ENSO, many previous studies have attempted to utilize the spatial patterns of SST anomalies to define corresponding index variables. However, the corresponding classifications of El Ni\~no and La Ni\~na phases reached only partial consensus, which was in fact the motivation of the work of \citet{Wiedermann2016a} presenting climate network transitivity as a useful and consistent index. Going one step further, one might easily quantify, for example, the correlation between transitivity and other (global or regionalized local) network characteristics. However, in our opinion, the particular value of the present work lies in identifying specific properties that are not exhibited by the former (as well as other not network-related indices). In this context, there is no established benchmark that could be used for further testing the significance of our results.
In turn, regarding the effects of volcanic eruptions, the respective regionalized degrees for the spatially shifted ``major impact regions'' of both Mount Pinatubo and Mount Agung exhibited their overall maximum values among all time windows studied in this work in the aftermath of the associated eruptions. This indicates a very high significance of our corresponding results. Note, however, that we did not find any comparably strong impact signature in the climate network properties after the eruption of El Chichon, nor after other strong volcanic eruptions of the past roughly 70 years (not shown). We relate the latter finding to the generally lower magnitude of the respective perturbations (in terms of a smaller amount of climate-active volcanic aerosols injected into the stratosphere). Moreover, some of the other major eruptions (e.g., the Mount St.~Helens eruption in 1980) occurred in the extratropics rather than the tropics. Together with the different seasonality of these events, this could imply different effects on regional and global temperature patterns, similar to what has recently been shown for global monsoon precipitation \citep{Liu2016}.
In summary, our study confirms that ENSO not only has a strong local effect on SAT, in terms of coherent SAT trends in the tropical Pacific associated with a spatially confined increase of network connectivity \citep{radebach2013disentangling}, but also dynamically reconfigures climate variability globally by triggering teleconnections, especially with other tropical regions. In this regard, one possible mechanism could involve the modulation of monsoons by strong El Ni\~no and/or La Ni\~na periods, which could be further modulated by volcanic eruptions \citep{Maraun2005}. Confirming this hypothesis in the context of climate network studies would, however, require considerably more elaborate approaches than those used in the present work, and is therefore left as a subject of future research.
\section{Introduction}
Social scientists often provide recommendations about the implementation of policies, which determine whether and in what manner a given treatment should be applied in some target context.
Crucial to this task is generating counterfactual predictions to inform such recommendations.
The methods to which social scientists turn for this purpose are varied.
They include quantitative extrapolations from existing randomized evaluations or observational studies, predictions based on structural models that interpret behavioral patterns in the target context, and more subjective expert opinions, among others.
Our goal in this paper is to offer a formal framework for evaluating the relative success of methods for generating policy recommendations.
We evaluate the success of {\it ex ante} policy recommendations that can draw on pre-existing experiments and descriptive data to recommend a treatment assignment in a target context.
Then, we conduct an {\it ex post} evaluation of the recommendation, based on information from experiments in the target context.
We are thus able to quantify and assess the statistical significance, {\it ex post}, of the differential performance of various methods for generating {\it ex ante} recommendations.
These {\it ex post} assessments can inform choices over methods to produce {\it ex ante} recommendations for new contexts where experimental data are not yet available.
Our framework is decision-based, considering optimal choices for a social planner seeking to maximize a welfare objective.
We consider a linear social welfare function, although the framework could take into consideration preferences over inequality, uncertainty, status quo bias, or other modifications.
The planner is thought to face constraints that limit the set of policy options.
Multiple methods may yield identical results for basic types of recommendations --- e.g., they may agree on whether the treatment is harmful or helpful on average, in which case they would make the same recommendation regarding the alternatives of ``treat everyone'' versus ``treat no one.''
But when it comes to estimating optimal policies, methods may differ in their recommendations.
Such differences in recommendations would be due to different methods' predictions about how different types of people respond to treatment \citep{Manski2004, Dehejia2005, Hirano2009, Imai2011, Kitagawa2015, Athey2017a}.
The wider the range of admissible policy options, the wider is the scope for methods to differ in their recommendations and thus in their relative performance.
Another way to put this is that the more refined the policy decision at stake, the more we need to ask of the methods, and thus the more refined will be our judgment about the performance of different methods.
What we refer to as a ``method'' is an approach for determining which of these policy options should be implemented.
These include reduced form methods that rely on conditional unconfoundedness to extrapolate conditional treatment effects using existing experimental or observational evaluations from other contexts, as in \cite{JosephHotz2005} and \cite{Dehejia2014}.
Another class of methods includes meta-analytical methods that model heterogeneity across contexts, as in \cite{Dehejia2003}, \cite{Meager2015}, and \cite{Vivalt2014}.
A third class of methods includes structural models.
These include models that interpret behavioral patterns among untreated observations within the target context so as to generate counterfactual predictions about outcomes under treatment, as in \cite{Todd2006} and \cite{Todd2010a}.
They also include approaches that estimate some structural model parameters using untreated units in the target context and others from available experimental evidence, as in \cite{Attanasio2012}.
Hybrids of reduced form and structural methods are also available, as in \cite{Gechter2015a}.\footnote{Our framework as currently developed does not allow for interval predictions for treatment effects, like those generated by \cite{Gechter2015a} and \cite{Andrews2017}, but integrating them through the theory of treatment choice under ambiguity reviewed in \cite{Manski2011} presents no particular problem.}
Finally, a fifth class of methods includes the solicitation of subjective opinions of experts, as discussed in \cite{Banerjee2016b} and applied in \cite{DellaVigna2016a}.
In our application, we allow methods to draw on existing experimental data from other contexts as well as data on covariates and untreated outcomes in the target context.
The resulting recommendations are thus ``{\it ex ante}'' insofar as they are made under uncertainty about the distribution of potential outcomes under treatment.
An assessment is then done {\it ex post}, using data from experiments that reveal the distribution of potential outcomes under treatment.
Our analysis allows us to determine, {\it ex post}, whether the policy recommendation from one or another method performed significantly better in terms of promoting welfare.
We use our framework to evaluate conditional cash transfer policies for increasing children's school enrollment, a policy subject to widespread consideration around the world \citep{Parker2018}.
We begin with an illustrative example of making a policy recommendation for a conditional cash transfer program in Morocco. {\it Ex ante}, the methods can draw upon data from a conditional cash transfer experiment in Mexico---specifically, the PROGRESA randomized evaluation---as well as covariates and untreated outcome data in Morocco.
We consider four methods:
(1) simple reduced form extrapolation of conditional treatment effects by age-sex strata from Mexico to Morocco, (2) more sophisticated conditional average treatment effect extrapolation using \cite{Athey2019-grf}'s Generalized Random Forest (GRF) algorithm, (3) a static, semi-parametric structural (SPS) model based on \cite{Todd2010a}, and (4) a dynamic, parametric structural model based on \cite{Attanasio2012} (AMS).
We then assess, {\it ex post}, the performance of the four methods using the results of the randomized evaluation conducted by \cite{Benhassine2015}.
The results show
that methods (1) and (4) perform best in this particular case despite making fairly different policy recommendations.
The GRF appears to suffer from ``contextual overfitting'' in the sense that the extra heterogeneity it detects is more idiosyncratic to PROGRESA than the basic heterogeneity captured by a simple stratification exercise.
SPS's non-parametric component is perhaps too flexible, leading to a more standard overfitting problem despite some regularization.
These results are preliminary, based on one reference-target pair out of 7 different CCT contexts for which we have harmonized data.
Our full set of findings is intended to speak to the question of whether policy recommendations should rely on internally valid evidence generated outside the target context, or whether one should rely on potentially confounded evidence from within the target context, as in \cite{Pritchett2013}.
Our methodological contribution is a formal framework for evaluating policy recommendations based on counterfactual predictions from competing methods.
Our aim is to provide tools that are more general in speaking to policy recommendations than the relatively informal and case-specific model validation exercises that regularly appear in the applied microeconomics literature; see, for example, \cite{Todd2006}, \cite{Keane2007a}, \cite{Duflo2012}, and \cite{Wolpin2013}, who each use predictions into holdout samples to evaluate the fit of structural models.
Our framework for evaluating policy recommendation methods builds on \cite{Pesaran2002} and \cite{Granger2006}'s notion of using information on the intended use of predictions as a basis for judging methods, foundational work on forecast evaluation theory by \cite{Diebold1995}, \cite{White2003}, and \cite{Hansen2011}, as well as work on prediction-based model comparisons by \cite{Keane2007a}, \cite{Wolpin2007}, \cite{Wolpin2013}, \cite{Schorfheide2012}, and \cite{Schorfheide2016}.
We see our work as complementary to research by \cite{DellaVigna2016a} on evaluating the quality of experts' {\it ex ante} forecasts by using experimental data {\it ex post}.
\cite{Manski1996, Manski2004}, \cite{Dehejia2005}, \cite{Hirano2009}, \cite{Tetenov2012}, \cite{Kitagawa2015}, and \cite{Athey2017a} address the issue of deriving optimal treatment regimes in decision-theoretic terms in sample; we extend these ideas to the evaluation of methods out of sample as well.
Evaluating methods through the lens of a decision problem is in line with \citet{Arrow1972}, and presents an attractive alternative to model comparison based on Kullback-Leibler divergence (as in \citealp{Vuong1989}).
Selection based on Kullback-Leibler superiority may be inadequate for a particular decision problem \citep{Marschak1968}.
Finally, our application to conditional cash transfer programs draws on the synthesis discussions in \cite{Banerjee2015b} and \citet{Garcia-Saavedra2017-CCT-meta},
as well as the specific data and analyses conducted by \cite{DeJanvry2006}, \cite{Todd2006}, and \cite{Attanasio2012} for Mexico and \cite{Benhassine2015} for Morocco.
\section{Setting}
Let $\mathcal{M}$ define a set of methods under consideration.
A method $m \in \mathcal{M}$ produces predictions for a finite set of real-valued treatment conditions, $\mathcal{T}$.
Suppose that the status quo treatment condition is given by $t= 0 \in \mathcal{T}$ for all $m \in \mathcal{M}$.
Considering our application below, we can imagine that $m$ could be a reduced form extrapolation method for predicting responses to different subsidy schedules, which are elements in the set $\mathcal{T}$, including the no subsidy condition, $t=0$.
We consider a social planner interested in using data from a set of reference contexts to devise welfare-maximizing policies in a set of target contexts.
Suppose contexts are indexed by $c \in \{1,...,C\}$.
Let $D_c$ be an indicator variable dividing the contexts into target and reference contexts, such that $D_c = 1$ when $c$ is a target context and $D_c = 0$ when $c$ is a reference context.
The planner wants to assess the methods in $\mathcal{M}$ according to their ability to assign individuals to treatments from within $\mathcal{T}$ in a way that maximizes the planner's social welfare function in the target contexts, subject to potential constraints on feasible treatments.
A set of treatment conditions, $\mathcal{T}_c \subseteq \mathcal{T}$, is active in each context $c$, although methods may be able to use data from a set $\mathcal{T}_c$ to generate counterfactual predictions for treatments that are outside this set. For example, structural methods can generate counterfactual predictions for treatment effects with treatments that have never been implemented.
For the target contexts, we suppose that {\it ex ante}, $\mathcal{T}_c = \{0\}$, meaning only data on the status quo conditions are available.
In the reference contexts, {\it ex ante}, $\mathcal{T}_c$ may contain treatments other than just the status quo.
Thus, in the target contexts, only the $t=0$ treatment is active prior to the implementation of policy, and the social planner is seeking a recommendation on how to introduce treatments from a feasible set of options so as to maximize welfare.
The recommendations are based on methods that make counterfactual predictions for the target contexts.
We observe a $J$-vector of context-level characteristics, $V_c$, which contains the indicator, $D_c$, as well as $J-1$ other attributes, such that ${\rm Supp}\,(V_c) = \mathcal{V}\subseteq \mathbb{R}^J$.
Within a given context $c$, let $i$ index individuals.
Each context is governed by a probability distribution, $\mathbb{P}_c$, on the following: an individual-level treatment variable, $T_{ic}$, where ${\rm Supp}\,{(T_{ic})} = \mathcal{T}_c$; a $K$-vector of individual-level covariates, $W_{ic}$, with ${\rm Supp}\,{(W_{ic})} = \mathcal{W}_c \subseteq \mathbb{R}^K$; and an individual-level potential outcome function, $Y_{tic}:\mathcal{T}_c \rightarrow \mathbb{R}$, that maps treatments to outcomes.
These outcomes measure individual wellbeing from the planner's perspective.
We suppose the following conditions on the outcome data:
\begin{description}
\item[C0.] $| \mathbb{E}\,_c[Y_{tic}] | < \infty$ and $0 < \mathbb{V}\,_c[Y_{tic}] < \infty$ for all $ t \in \mathcal{T}_c$ and $c \in \{1,...,C\}$.
\end{description}
Let $Y_{ic} = Y_{T_{ic}ic}$ be the observed outcome for unit $i$ in context $c$.
The observed data consist of the context-level characteristics and individual-level data, $O_{ic} = (V_c, W_{ic}, T_{ic}, Y_{ic})$, for random samples of individuals $i \in \{1,...,n_c\}$ across contexts $c \in \{1,...,C\}$.
For simplicity, we focus on the case where we can take individual-level treatments to be binary, in which case $\mathcal{T} = \{0,1\}$, where $t=0$ is a status quo control condition and $t=1$ a treated condition.
Then, individuals' potential outcomes are $\left(Y_{1ic}, Y_{0ic}\right)$ and observed outcomes are given by $Y_{ic} = T_{ic}Y_{1ic} + (1-T_{ic})Y_{0ic}$.
For a context $c \in \{1,...,C\}$, consider the following conditions on the data generating process defining $\mathbb{P}_c$:
\begin{description}
\item[C1.] conditional random assignment, $\left(Y_{1ic}, Y_{0ic} \right) \independent T_{ic}| W_{ic} = w$ for all $w \in \mathcal{W}_c$;
\item[C2.] overlap, where probabilities of assignment, $p_c(w) := \Pr[T_{ic} = 1|W_{ic} = w]$, are interior such that $0 < b_0 < p_c(w) < b_1 < 1$ for all $w \in \mathcal{W}_c$, and
\item[C3.] stable unit treatment value assumption (SUTVA; \citealp{rubin80_randomization_analysis}), such that for $T_{ic} = t$, $Y_{ic} = Y_{tic}$ with probability 1.
\end{description}
When conditions C1-C3 hold, the conditional (on covariates) potential outcome distributions for both the treatment and control groups in the reference contexts are identified.
In our setting, we suppose that, {\it ex ante}, we are working with a random sample for which conditions C1-C3 hold in the reference contexts (for which $D_{ic} = 0$).
However, in the target contexts (for which $D_c = 1$), we suppose that, {\it ex ante}, we have a random sample only for units with $T_{ic} = 0$, while the covariate distributions are also identified.
As such, {\it ex ante}, the distributions $\mathbb{P}_c[(Y_{0ic}, W_{ic})|V_c, D_c = 0]$, $\mathbb{P}_c[(Y_{1ic}, W_{ic})|V_c, D_c = 0]$, and $\mathbb{P}_c[(Y_{0ic}, W_{ic})|V_c, D_c = 1]$ are identified.
The methods make use of these distributions to derive policy recommendations.
Once the recommendations are submitted, we then suppose that conditions C1-C3 obtain in the target contexts, in which case the $\mathbb{P}_c[(Y_{1ic}, W_{ic})|V_c, D_c = 1]$ distributions are revealed. This allows the planner to judge, {\it ex post}, the quality of the methods in terms of how their recommendations fare with respect to welfare.
Conditions C1-C3 include situations where we have a set of randomized experiments that we can feed into a set of methods to produce {\it ex ante} recommendations for a new target context.
These conditions also admit observational studies in which conditional independence and overlap of treatment assignment holds over the covariate space, $\mathcal{W}_c$, although for the time being we focus on randomized experiments.
Our specification of the potential outcomes, combined with assumption C3, rules out ``interference'' (\citealp[p. 19]{cox58}), including general equilibrium effects.
We consider this a starting point for our analysis.
Generalizations could consider situations where interference is present, and derive criteria for judging methods by working with causal quantities that are identified under interference \citep{hudgens_halloran08, aronow_samii2017_interference}.
\section{Planner's Objective and {\it Ex Ante} Recommendations}
We can define the planner's objective in context $c$ in general terms as
$$
\max_{\pi} \mathcal{U}(\mathbb{P}_c(\pi))
$$
where $\pi : \mathcal{V} \times \mathcal{W}_c \rightarrow \{0,1\}$ is a treatment assignment rule that takes context $c$'s contextual attributes and an individual's covariate values and maps them to a treatment assignment.
Then,
$\mathbb{P}_c(\pi)$
is the
joint potential outcome-covariate distribution
induced by $\pi$.
The restriction to $\{0,1\}$ for each individual in the target context follows from our focus on binary treatments.
If a richer set of treatment values $\mathcal{T}$ were under consideration, the rule $\pi$ could assign distribution functions over this $\mathcal{T}$ on the basis of covariate values.
Sometimes the set of conditional treatment assignments that maximizes this objective {\it ex ante} is non-unique---i.e., there are ties.
For example, multiple units may share the same covariate value.
The treatment assignment that maximizes the objective may assign some fraction of such units to treatment.
Then, all permutations of assignments would yield the same {\it ex ante} value for the objective.
We assume that $\pi$ encodes a tie-breaking rule that is unconditionally statistically independent and equalizes the probability of treatment across such tied units.
Current approaches to this problem (as in \citealp{Manski2004} and \citealp{Tetenov2012}) consider social welfare that is linear in expected treatment and control outcomes, what \citet{Hirano2019} refer to as utilitarian welfare.
We adopt the same approach.\footnote{For the current analysis, we put to the side considerations related to the planner's sensitivity to inequality or uncertainty \citep{Dehejia2008}, as well as asymmetry in the planner's preferences toward different treatments, such as status quo bias \citep{Tetenov2012}.}
We also incorporate treatment opportunity costs by adding a cost-effectiveness term to the planner's objective.
This increases the planner's sensitivity to over- and under-estimation of the treatment effects in generating treatment rules.
We suppose that we are operating in context $c$, and therefore suppress the associated indexing except when necessary for clarification.
Thus, we define
\begin{small}
\begin{align*}
&\mathcal{U}(\mathbb{P}(\pi)) = \\
&\mathbb{E}\, \left\{ \pi(W_i) \mathbb{E}\, \left[ Y_{1i} - Y_{0i} - \frac{C(W_i,Y_{1i})}{\kappa} \bigg | W_i \right] + (1-\pi(W_i)) \mathbb{E}\, \left[ \frac{C(W_i,Y_{1i})}{\kappa} - (Y_{1i} - Y_{0i}) \bigg | W_i \right]\right\} \\
&= 2 \mathbb{E}\, \left\{ \pi(W_i) \mathbb{E}\, \left[ Y_{1i} - Y_{0i} - \frac{C(W_i,Y_{1i})}{\kappa} \bigg | W_i \right] \right\} - \mathbb{E}\, \left[ Y_{1i} - Y_{0i} - \frac{C(W_i,Y_{1i})}{\kappa} \right],
\end{align*}
\end{small}
where $\kappa > 0$ is the level of cost effectiveness at which the planner is indifferent between allocating funds to treat an individual and using the funds for some alternative purpose \citep{Garber1997, Dhaliwal2011}, and $C(w,y) \ge 0$ is the marginal cost function for treating an individual with covariate value $w$ and treated outcome $y$.
Choice of a treatment assignment with this specification will be equivalent to operating under a budget constraint that must be satisfied in expectation {\it ex ante} (cf. \citealp{Bhattacharya2012}, versus {\it ex post}, as in \citealp{Kitagawa2015}), where the marginal value of relaxing the budget constraint is $1/\kappa$.
The prediction method $m$ translates the planner's objective into an {\it ex ante} recommendation for a treatment assignment rule, $\pi_m$. Method $m$ does not have access to $\mathbb{P}$ and rather must rely on some approximation, $\widehat{\mathbb{P}}_m$. As such, method $m$ solves,
\begin{align*}
\pi_m = \text{arg}\max_{\pi} \mathcal{U}(\widehat{\mathbb{P}}_{m}(\pi)),
\end{align*}
where $\widehat{\mathbb{P}}_{m}(\pi)$ can be informed by samples from the reference contexts and status quo data from the target. Under our specification of the welfare function, we have
\begin{align*}
\pi_m = \text{arg}\max_{\pi } \widehat{\mathbb{E}\,}_m \left\{ \pi(W_i) \widehat{\mathbb{E}\,}_m \left[ Y_{1i} - Y_{0i} - \frac{C(W_i,Y_{1i})}{\kappa} \bigg | W_i \right] \right\}.
\end{align*}
In practice, we will use a ``plug-in'' rule where $\widehat{\mathbb{E}\,}_m[\cdot]$ is generated from model $m$ estimated on the reference data and status quo data from the target \citep{Hirano2019}. The problem that method $m$ solves generates a decision rule analogous to \cite{Manski2004}'s conditional empirical success rule:
\begin{align*}
\pi_m(w) = 1 \left\{ \widehat{\mathbb{E}\,}_m \left[ Y_{1i} - Y_{0i} - \frac{C(w,Y_{1i})}{\kappa} \bigg | w \right] \geq 0 \right\}.
\end{align*}
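To make the plug-in rule concrete, the following minimal Python sketch applies the conditional empirical success rule given some method-specific estimate of the conditional net benefit $\widehat{\mathbb{E}\,}_m [ Y_{1i} - Y_{0i} - C(w,Y_{1i})/\kappa \,|\, w ]$. The predictor \texttt{predict\_net\_benefit} and the toy covariate values are hypothetical stand-ins, not part of our application.

```python
def plug_in_rule(predict_net_benefit, w):
    """Conditional empirical success rule: treat at covariate value w
    iff the estimated net benefit E_m[Y1 - Y0 - C(w, Y1)/kappa | w]
    is non-negative."""
    return 1 if predict_net_benefit(w) >= 0 else 0

# Hypothetical net-benefit estimate that declines linearly in a scalar
# covariate w (e.g., age): positive for small w, negative for large w.
predict_net_benefit = lambda w: 0.5 - 0.1 * w

assignments = [plug_in_rule(predict_net_benefit, w) for w in range(10)]
```

Any method that produces a conditional net-benefit estimate can be dropped in for the predictor; the rule itself is the same across methods.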
In our application to conditional cash transfer programs, the cost function takes a form that allows for further simplifications. Those details are presented below.
\section{{\it Ex Post} Inference}
The welfare contrast for two methods, $l$ and $m$, is given by
\begin{align*}
\Delta_{lm} &= \mathbb{E}\,\left\{ (\pi_l(W_i) - \pi_m(W_i)) \mathbb{E}\, \left[ Y_{1i} - Y_{0i} - \frac{C(W_i,Y_{1i})}{\kappa} \bigg | W_i \right] \right\},
\end{align*}
where $2 \Delta_{lm}$ would correspond to the difference in utilities as per our specification of $\mathcal{U}(\mathbb{P}(\pi))$ above.
In constructing this {\it ex post} contrast, we condition on any data used to generate the $\hat{\mathbb{P}}$s.
The welfare contrast $\Delta_{lm}$ is non-zero only for values of $w$ at which the methods disagree on treatment assignment (i.e., $w$ for which $\pi_l(w) \ne \pi_m(w)$).
It equals the marginalized value of the minimum-benefit-adjusted conditional average treatment effects when $m$ says to treat and $l$ says not to (or the reverse).
{\it Ex post}, we suppose that we obtain a random sample of experimental units in the target context for which conditions C1-C3 hold.
We assume that in this {\it ex post} experiment, treatment assignment probabilities are given by $p(W_i)$, and that these probabilities are known.
The experimental data in our target context allow us to estimate this welfare contrast.
Given a random sample of size $N$ in the target context, we consider an estimator for the linear welfare contrast based on inverse-probability of treatment weighting with normalized weights.
This estimator is efficient among consistent estimators that avoid modeling of either the potential outcome surfaces or conditional treatment probabilities (\citealp{Hirano2003}; \citealp{Lunceford2004}; \citealp[p.~35]{Imbens2009}).
We define the estimator as
\begin{align*}
\hat \Delta_{lm} = &\frac{\sum_{i = 1}^{N} \frac{T_i}{p(W_i)} \left(\pi_l(W_i) - \pi_m(W_i)\right)\left(Y_i - \frac{C(W_i,Y_i)}{\kappa}\right)}{\sum_{i = 1}^{N} \frac{T_i}{p(W_i)} } \\
& - \frac{\sum_{i = 1}^{N} \frac{(1-T_i)}{1-p(W_i)} \left(\pi_l(W_i) - \pi_m(W_i)\right)Y_i}{\sum_{i = 1}^{N}\frac{(1-T_i)}{1-p(W_i)}}.
\end{align*}
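As an illustration, the estimator above can be sketched in a few lines of Python with normalized inverse-probability weights. The function name and the toy inputs are ours, not part of the paper's replication code.

```python
import numpy as np

def welfare_contrast(Y, T, p, pi_l, pi_m, cost_over_kappa):
    """Normalized (Hajek-type) IPW estimate of the welfare contrast
    Delta_lm between assignment rules pi_l and pi_m.

    Y, T : outcomes and binary treatment indicators
    p    : known treatment assignment probabilities p(W_i)
    pi_l, pi_m : each rule's assignment evaluated at W_i
    cost_over_kappa : C(W_i, Y_i) / kappa for each unit
    """
    d = pi_l - pi_m                  # nonzero only where the rules disagree
    w1 = T / p                       # treated-arm IPW weights
    w0 = (1 - T) / (1 - p)           # control-arm IPW weights
    treated_term = np.sum(w1 * d * (Y - cost_over_kappa)) / np.sum(w1)
    control_term = np.sum(w0 * d * Y) / np.sum(w0)
    return treated_term - control_term

# Toy example with a known 50/50 assignment and zero costs.
Y = np.array([1.0, 1.0, 0.0, 1.0])
T = np.array([1, 1, 0, 0])
p = np.full(4, 0.5)
pi_l = np.array([1, 1, 1, 1])
pi_m = np.array([0, 0, 0, 0])
delta_hat = welfare_contrast(Y, T, p, pi_l, pi_m, np.zeros(4))
```

Because the weights are normalized within arms, the estimate is invariant to rescaling of the assignment probabilities.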
Inference for this estimator is based on the random sampling of $(Y_i, T_i, W_i)$ values from $\mathbb{P}$ under conditions C0-C3:
\begin{prop}\label{prop:Delta}
Under conditions C0-C3, as $N \rightarrow \infty$
$$
\frac{\sqrt{N}\left(\hat \Delta_{lm} - \Delta_{lm}\right)}{\sqrt{\hat V_{\Delta_{lm}}}} \overset{d}{\rightarrow} N(0,1),
$$
where
\begin{align*}
\hat V_{\Delta_{lm}} = & \frac{\sum_{i=1}^N \frac{T_i}{p(W_i)^2}\left[(\pi_l(W_i) - \pi_m(W_i)) (Y_i - \frac{C(W_i,Y_i)}{\kappa}) - \hat \delta_1\right]^2}{\sum_{i = 1}^{N} \frac{T_i}{p(W_i)}} \\
& + \frac{\sum_{i=1}^N \frac{1-T_i}{[1-p(W_i)]^2}\left[(\pi_l(W_i) - \pi_m(W_i))Y_i - \hat \delta_0\right]^2}{\sum_{i = 1}^{N} \frac{1-T_i}{1-p(W_i)}},
\end{align*}
and
\begin{align*}
&\hat \delta_1 = \frac{\sum_{i = 1}^{N} \frac{T_i}{p(W_i)} \left(\pi_l(W_i) - \pi_m(W_i)\right)(Y_i - \frac{C(W_i,Y_i)}{\kappa})}{\sum_{i = 1}^{N} \frac{T_i}{p(W_i)} } \\
&\hat \delta_0 = \frac{\sum_{i = 1}^{N} \frac{1-T_i}{1-p(W_i)} \left(\pi_l(W_i) - \pi_m(W_i)\right)Y_i}{\sum_{i = 1}^{N}\frac{1-T_i}{1-p(W_i)}}.
\end{align*}
\end{prop}
All proofs are contained in the appendix.
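The limiting result in Proposition \ref{prop:Delta} can be turned into a standard z-test. The Python sketch below (ours, with hypothetical inputs of the same shape as those passed to an estimator of $\hat \Delta_{lm}$) computes the plug-in variance from the proposition and the resulting test statistic.

```python
import numpy as np

def contrast_z_stat(Y, T, p, pi_l, pi_m, cost_over_kappa):
    """z-statistic for H0: Delta_lm = 0 using the plug-in variance
    estimator of Proposition 1."""
    d = pi_l - pi_m
    w1, w0 = T / p, (1 - T) / (1 - p)
    g1 = d * (Y - cost_over_kappa)       # treated-arm summand
    g0 = d * Y                           # control-arm summand
    delta1 = np.sum(w1 * g1) / np.sum(w1)
    delta0 = np.sum(w0 * g0) / np.sum(w0)
    # Plug-in variance: arm-wise weighted squared deviations.
    v_hat = (np.sum((T / p**2) * (g1 - delta1)**2) / np.sum(w1)
             + np.sum(((1 - T) / (1 - p)**2) * (g0 - delta0)**2) / np.sum(w0))
    return np.sqrt(len(Y)) * (delta1 - delta0) / np.sqrt(v_hat)
```

Comparing the statistic to standard normal critical values gives the asymptotic test; with symmetric toy data where the two rules' welfare is identical, the statistic is zero by construction.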
Conditional on $W_i = w$, the recommendations, $\pi_m(w)$ and $\pi_l(w)$, are fixed.
Our uncertainty about the welfare contrast is due to sampling and treatment assignment variation in the experimental data gathered in the target context that we use for the {\it ex post} assessment.\footnote{Treating the treatment assignment rules as fixed means that we can have expert opinion among the methods considered in $\mathcal{M}$.
\citet{Diebold2015} makes this point in reviewing \citet{Diebold1995} and the literature following it, drawing a distinction between \citet{Diebold1995} and \citet{West1996}, which additionally considers uncertainty arising from the samples on which models are fitted to generate predictions.}
The {\it ex post} situation that we consider is simple in that all randomization and sampling occurs at the unit level and there is no causal interference in the outcome data generating process.
Certainly the analysis could be extended to consider cluster randomization or cluster sampling, covariate adjustment, and targeting quantities that are identified under interference.
For the present analysis, we stick with this relatively simple case.
Proposition \ref{prop:Delta} is sufficient to perform inference for any pair of methods.
\citet{Hansen2011} provide a sequential multiple testing algorithm for establishing a ``model confidence set'' (MCS) of level $1-\alpha$, which allows one to distinguish a set of best performing algorithms with an asymptotic error rate of $\alpha$.
\section{Empirical Application}
For an empirical illustration, we use data from Mexico and Morocco on the effects of conditional cash transfers (CCTs) on primary school enrollment.
We consider a policy scenario where a planner in Morocco is seeking recommendations for implementing a conditional cash transfer (CCT) program.
The planner's objective is defined as above, accounting for opportunity costs.
We use data from randomized evaluations of the PROGRESA program in Mexico \citep{Schultz2004, Behrman2005, DeJanvry2006, Todd2006, Attanasio2012} and the TAYSSIR program in Morocco \citep{Benhassine2015}.
To construct the {\it ex ante} evaluations, we are limited to using the full data from Mexico and then the covariate and control group outcome data from Morocco.
The {\it ex post} assessment is done using the full experimental data from Morocco.
\subsection{General Setup}
The outcome, $Y_{ic}$, is the school enrollment of child $i$.
The covariate set, $\mathcal{W}_c$, is defined as the intersection of the pre-treatment characteristics collected on both the PROGRESA and Moroccan questionnaires.
The cost-effectiveness benchmark, $\kappa$, is based on gains from school enrollment. \cite{Montenegro2014} report a 10\% Mincer\footnote{Controlling only for potential experience and its square.} earnings premium for each additional year of schooling in Morocco. The average market earner in the \cite{Benhassine2015} sample earns \$1,578.20 per year. At a 5\% discount rate, the net present value of an additional year spent in school is approximately \$1,000.\footnote{Over 40 years of work.}
The treatment cost function, $C(W_i, Y_{1i})$, is based on the value of the conditional transfer. In Morocco, for 6-7 year olds, this amounts to the following:
$$
60 \text{ MAD per month} \times 12 \times \frac{0.1\text{ years of ed.}}{100 USD} \times \frac{\text{1 USD}}{\text{8 MAD}} \times Y_{1i} = 0.09 Y_{1i} = \frac{C(w, Y_{1i})}{\kappa}.
$$
Correspondingly, for 8-9 year olds the transfer value is 80 MAD per month, which means $0.12 Y_{1i} = \frac{C(w, Y_{1i})}{\kappa}$, and for 10+ year olds, the transfer is 100 MAD per month, and so $0.15 Y_{1i} = \frac{C(w, Y_{1i})}{\kappa}$. If we define these age specific multipliers as $g(Age_i)$, then the objective for model $m$ reduces to,
\begin{align*}
\pi_m &= \max_{\pi} \hat{\mathbb{E}\,}_m \left\{ \pi(W_i) \hat{\mathbb{E}\,}_m \left[ (1 - g(Age_i))Y_{1i} - Y_{0i} | W_i \right] \right\},
\end{align*}
generating the decision rule
\begin{align*}
\pi_m(w) = 1 \left\{ \hat{\mathbb{E}\,}_m \left[ (1 - g(Age_i))Y_{1i} - Y_{0i} \bigg | w \right] \geq 0 \right\}.
\end{align*}
This implies evaluating the signs of estimated conditional treatment effects on the adjusted outcome,
$$
Y^{adj}_{ic} = T_{ic}(1-g(\text{Age}_{ic}))Y_{ic} + (1-T_{ic})Y_{ic}.
$$
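A small Python sketch (ours; the function names are hypothetical) reproduces the age-specific multipliers $g(\mathrm{Age}_i)$ implied by the transfer schedule and forms the adjusted outcome $Y^{adj}_{ic}$:

```python
def transfer_multiplier(age):
    """Age-specific C/kappa multiplier g(Age) implied by the TAYSSIR
    transfer schedule (60/80/100 MAD per month for ages 6-7/8-9/10+)
    and the roughly $1,000-per-school-year benefit benchmark."""
    mad_per_month = 60 if age <= 7 else 80 if age <= 9 else 100
    usd_per_year = mad_per_month * 12 / 8      # 8 MAD per USD
    return usd_per_year * 0.1 / 100            # 0.1 school years per $100

def adjusted_outcome(y, t, age):
    """Y_adj = T * (1 - g(Age)) * Y + (1 - T) * Y."""
    return t * (1 - transfer_multiplier(age)) * y + (1 - t) * y
```

The multipliers come out to 0.09, 0.12, and 0.15 for the three age bands, matching the calculation above.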
We give each method access to all observations and variables from PROGRESA ($D_c = 0$), together with the observations from TAYSSIR with $D_c = 1, T_{ic} = 0, U_{ic} \leq 0.5$, where $U_{ic} \sim U[0,1]$ is assigned once, independently of all other variables (i.e., a random 50\% split of the TAYSSIR control group). The methods use these data to compute $\pi_m(w)$ as defined above, and then for methods $l$ and $m$ we compute $\widehat{\Delta}_{lm}$ as
\begin{align*}
\hat \Delta_{lm} = &\frac{\sum_{\{i : D_c = 1, T_{ic} = 1\}} \frac{T_i}{p_c(W_i)} \left(\pi_l(W_i) - \pi_m(W_i)\right)(1 - g(Age_i))Y_i }{\sum_{\{i : D_c = 1, T_{ic} = 1\}} \frac{T_i}{p_c(W_i)} } \\
& - \frac{\sum_{ \{i : D_c = 1, T_{ic} = 0, U_{ic} > 0.5 \} } \frac{(1-T_i)}{1-p_c(W_i)} \left(\pi_l(W_i) - \pi_m(W_i)\right)Y_i}{\sum_{\{i : D_c = 1, T_{ic} = 0, U_{ic} > 0.5\} }\frac{(1-T_i)}{1-p_c(W_i)}},
\end{align*}
where for the TAYSSIR experiment the assignment probability is a known constant, $p_c(w) = p_c$. The estimate for the variance is constructed analogously.
\paragraph{Methods}
We consider methods that are already available from the current literature and that are straightforward to implement.
We highlight assumptions on the joint distribution of random variables for each method.
However, we are not concerned with testing these assumptions directly, as in \cite{Allcott2012} and \cite{Dehejia2014}.
Instead, we list them as part of the specification of each method.
We are interested in assessing the relative empirical performance of the methods, all of which we view as likely misspecified \citep{Wolpin2007}.
The first two methods rely on reduced-form extrapolation of conditional treatment effects, as per, e.g., \citet{JosephHotz2005} and \citet{Dehejia2016}.
The reduced form approaches we use include a ``low-tech'' version that simply takes the age-sex-specific treatment effects from PROGRESA and extrapolates them using the age-sex distribution in the TAYSSIR sample.
We also use a ``high-tech'' approach that applies the generalized random forest estimator for heterogeneous treatment effects, as proposed by \citet{Athey2019-grf}.
The next two methods include structural counterfactual predictions.
This includes a static semi-parametric structural model for PROGRESA based on \citet{Todd2010a} as well as a dynamic, parametric structural model for PROGRESA based on \citet{Attanasio2012}.
\subsection{Method 1: Extrapolating Age-Sex Conditional Effects}
The ``low-tech'' reduced form approach uses the adjusted outcomes ($Y^{adj}_{ic}$) from the PROGRESA data to estimate conditional treatment effects in age-sex strata for girls and boys with ages ranging from 6 to 16.
These conditional treatment effects are then extrapolated to Morocco, which is justified given the following assumption.
\begin{description}
\item[Age-Sex 1.] Unconfounded location \citep{JosephHotz2005} given age and sex.
$$
\mathbb{E}\,[(1-g(Age_i))Y_{1i} - Y_{0i} | D_c, Age_i, Sex_i] = \mathbb{E}\,[(1-g(Age_i))Y_{1i} - Y_{0i} | Age_i, Sex_i].
$$
\end{description}
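As a sketch of this stratified extrapolation (our illustration; the actual PROGRESA data handling is more involved), one can estimate the treated-minus-control difference in mean adjusted outcomes within each (age, sex) stratum of the reference experiment and recommend treatment for strata with a non-negative estimate:

```python
import itertools
import numpy as np

def age_sex_rule(y_adj, t, age, sex):
    """Within each (age, sex) stratum of the reference experiment,
    estimate the treated-minus-control difference in mean adjusted
    outcomes and recommend treatment iff the estimate is >= 0."""
    rule = {}
    for a, s in itertools.product(np.unique(age), np.unique(sex)):
        in_stratum = (age == a) & (sex == s)
        treated = y_adj[in_stratum & (t == 1)]
        control = y_adj[in_stratum & (t == 0)]
        if len(treated) and len(control):
            rule[(a, s)] = int(treated.mean() - control.mean() >= 0)
    return rule
```

The resulting dictionary maps each stratum to a 0/1 recommendation, which is then applied to the target sample by stratum membership.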
Table \ref{tab:age-sex-cates-mex} shows the raw treatment effects for the age-sex strata in PROGRESA.
Almost all of these are positive, so any treatment assignment based on them will be very liberal.
We see exactly this in the second column of Table \ref{tab:agesex-all-welfare} which compares the effects of treatment assignments based on age-sex extrapolation to simply treating everyone, as a researcher might be inclined to recommend after seeing the large effect of TAYSSIR on enrollment.
Without adjusting for cost-effectiveness, age-sex extrapolation recommends treating 97\% of the Moroccan sample.
Table \ref{tab:age-sex-adj-cates-mex} lists the enrollment gains by age-sex strata, after adjusting for cost-effectiveness.
The cost-effectiveness threshold we've imposed in Morocco is quite stringent: only a few subgroup effects remain positive.
The first column of Table \ref{tab:agesex-all-welfare} shows the implications.
Now the age-sex extrapolation only recommends treating 13\% of children in Morocco.
This results in a statistically significant 3.5 percentage point decrease in enrollment gain (third and fourth rows), but a 6.5 percentage point increase in cost-effectiveness-adjusted enrollment gain (fifth and sixth rows).
The increase comes from the fact that age-sex-based extrapolation avoids recommending treatment for younger children, whose enrollment is already almost universal.
Near-universal untreated enrollment leaves little room for enrollment gains, yet the grants must still be paid for, so treating these children is not worth the cost.
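The Method 1 decision rule can be sketched as follows. The data below are tiny synthetic stand-ins for the PROGRESA and TAYSSIR samples, and the function names are ours for illustration, not part of any released code.

```python
# Sketch of the "low-tech" age-sex extrapolation rule: estimate adjusted-outcome
# CATEs by (age, sex) stratum in the reference context, then treat a
# target-context child iff her stratum's adjusted CATE is positive.
import numpy as np

def stratum_cates(age, sex, treated, y_adj):
    """Mean adjusted-outcome difference (treated - control) per (age, sex) stratum."""
    cates = {}
    for a in np.unique(age):
        for s in np.unique(sex):
            m = (age == a) & (sex == s)
            t, c = m & (treated == 1), m & (treated == 0)
            if t.any() and c.any():
                cates[(a, s)] = y_adj[t].mean() - y_adj[c].mean()
    return cates

def recommend(age, sex, cates):
    """Treatment recommendation: stratum adjusted CATE > 0."""
    return np.array([cates.get((a, s), 0.0) > 0 for a, s in zip(age, sex)])

# hypothetical reference-context data (NOT the actual samples)
rng = np.random.default_rng(0)
age = rng.integers(6, 17, 400)
sex = rng.integers(0, 2, 400)
treated = rng.integers(0, 2, 400)
y_adj = 0.5 + 0.02 * treated * (age > 11) - 0.01 * (age - 6) + rng.normal(0, 0.1, 400)

cates = stratum_cates(age, sex, treated, y_adj)
rule = recommend(age, sex, cates)   # here extrapolated to the same covariates
print(f"treatment rate: {rule.mean():.2f}")
```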
\subsection{Method 2: Generalized Random Forest-Based Extrapolation}
\label{sec:grf}
The ``high-tech'' reduced form approach fits generalized random forest (GRF) models to the adjusted outcomes in the PROGRESA data.
At present we include as covariates the child's pre-treatment enrollment status, years of education, literacy status, age, and sex, and then for the child's household, the number of children, whether the head is male, whether it is a single-parent household, whether the father is alive, whether the mother is alive, whether the child lives with the father, and whether the child lives with the mother.
In cases of item-level missing data, we impute a zero for the missing value and then accompany the variable with a separate indicator variable for whether the value is missing.
Defining $W_i$ as the vector of variables described above, GRF-based extrapolation is justified under the following assumption.
\begin{description}
\item[GRF1.] Unconfounded location given $W_i$.
$$
\mathbb{E}\,[(1-g(Age_i))Y_{1i} - Y_{0i} | D_c, W_i ] = \mathbb{E}\,[(1-g(Age_i))Y_{1i} - Y_{0i} | W_i ].
$$
\end{description}
Using these variables, we fit a generalized random forest (GRF) using the algorithm written by \citet{Athey2019-grf}.
We set the number of trees to 2000 with a minimal leaf size of 2 units.
We split the data in half at random, build trees on one half, and then select the error-minimizing pruning by evaluating predictions at different candidate pruning levels on the other half.
These settings do not depart very much from the defaults set by \citet{Athey2019-grf}.
There are two approaches to using GRFs to characterize effect heterogeneity.
The first is to fit GRFs to the treated and control outcomes separately, and then combine those fits to construct estimates of unit-level treatment effects.
This approach targets loss on the level of treated and control outcomes, and then indirectly targets effect heterogeneity.
The second is to fit a GRF with a loss function that is specifically targeted to effect heterogeneity.
In simulations and various empirical tests on the PROGRESA sample, we found that the first approach was substantially better for our application.
This was surprising, but the reason seemed to be that the approach targeting effect heterogeneity directly tended to regularize too heavily, and therefore did not discriminate strongly enough between classes of units with large or small (or even negative) effects.
As such, our analysis applies the method that models the treated and control outcomes separately.
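The "model the treated and control outcomes separately" approach can be sketched as below. scikit-learn's \texttt{RandomForestRegressor} stands in for the R \texttt{grf} package actually used in the paper, and the data are synthetic, so this is an illustrative sketch rather than the paper's pipeline.

```python
# Fit one forest to treated adjusted outcomes and one to control outcomes,
# then difference the predictions to get unit-level effect estimates.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n = 600
W = rng.normal(size=(n, 5))                 # covariates (age, enrollment, ...)
T = rng.integers(0, 2, n)                   # treatment indicator
y = W[:, 0] + T * (0.5 * (W[:, 1] > 0)) + rng.normal(0, 0.1, n)

f1 = RandomForestRegressor(n_estimators=200, min_samples_leaf=2, random_state=0)
f0 = RandomForestRegressor(n_estimators=200, min_samples_leaf=2, random_state=0)
f1.fit(W[T == 1], y[T == 1])                # treated-outcome model
f0.fit(W[T == 0], y[T == 0])                # control-outcome model

cate_hat = f1.predict(W) - f0.predict(W)    # unit-level effect estimates
treat = cate_hat > 0                        # planner's assignment rule
```

The alternative, fitting a single forest with an effect-targeted loss, corresponds to \texttt{grf}'s causal forest; as noted above, we found the two-model approach worked better in our application.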
Tables \ref{tab:mexico-grfpo-noce-varimp} and \ref{tab:mexico-grfpo-ce-varimp} show variable importance summaries provided by \citet{Athey2019-grf} for random forests fitted to enrollment without and with cost-effectiveness adjustment, respectively.
The summaries essentially count the instances in which a variable is used in a split, with splits near the top of a tree weighted more heavily than those lower down.
Age, baseline enrollment, and number of years of education completed emerge as most important but several other variables matter as well, including parental education and presence.
Tables \ref{tab:morocco-grfpo-noce-catesum-boys} and \ref{tab:morocco-grfpo-noce-catesum-girls} display treatment effects on enrollment, aggregated by age-sex strata to allow for comparisons to the ``low-tech'' reduced form approach.
The GRF detects a great deal of treatment effect heterogeneity within strata, with positive and negative effects for all strata.
Table \ref{tab:morocco-grfpo-noce-treat-rate} shows the share in each stratum recommended for treatment, which increases quickly with age.
Tables \ref{tab:morocco-grfpo-ce-catesum-boys} and \ref{tab:morocco-grfpo-ce-catesum-girls} give treatment effect predictions within strata, after adjusting the treated outcome for cost-effectiveness.
The average treatment effect is predicted to be negative in most strata, with the exception of those including children aged 13 and up.
Table \ref{tab:morocco-grfpo-ce-treat-rate} shows the treatment recommendations.
Despite negative average effects within strata, the treatment rates are still quite high.
Table \ref{tab:grf-agesex-welfare} evaluates GRF's treatment recommendations relative to simple age-sex-based extrapolation from PROGRESA.
The results are perhaps surprising.
GRF treats more individuals, resulting in 1 percentage point more enrollment gain.
But the cost outstrips the gain and the adjusted enrollment gain is negative and statistically significant.
We believe this disappointing performance of GRF versus a much simpler method is due to a kind of ``contextual overfitting''.
While the GRF's regularization guards against within-context overfitting, the dimensions of heterogeneity generating a large share of positive treatment effects even after adjusting for cost-effectiveness may represent quirks of PROGRESA's implementation in Mexico.
It remains to be seen whether this problem persists when we add reference contexts and context characteristics to our exercise.
\begin{comment}
\subsection{Method 3: Bayesian Hierarchical Age-Sex}
\end{comment}
\subsection{Method 3: Semi-Parametric Structural Approach}
\label{sec:sps_model}
\subsubsection{\citet{Todd2010a}'s Non-Parametric Structural Model}
The third method takes as a starting point the non-parametric structural (NPS) model of school attendance proposed in \citet{Todd2010a}.
For parsimony in the non-parametric step of our own approach (described in the next section) we use the simplest version of the NPS model, which considers the enrollment decision for each child independently.
In this version of the model, households solve the following static utility maximization problem, with utility depending on household consumption $c$ and the child's enrollment status $y$.
\begin{align}
&\max_{y \in \{0, 1\}} \ U(c,y;w,\epsilon) \notag \\
\label{eq:nps_budget_constraint}
&\text{subject to } c = n + e(1-y).
\end{align}
Abusing notation slightly, $W_i$ is here redefined as $W_i \setminus \{E_i, N_i\}$, where $E_i$ represents the child's wage offer and $N_i$ household income for child $i$ excluding $i$'s own earnings.
The redefined $W_i$ and $V_c$ are observed shifters of the preference for child schooling and $\epsilon_i$ is an unobserved shifter.
Optimal school attendance is given by
$$
y_0 = \phi(n,e; w, \epsilon) = 1\{U(n,1;w,\epsilon) > U(n+e,0; w, \epsilon)\}.
$$
Now modify the budget constraint by introducing the treatment, a grant $g$ paid only when the child attends school:
\begin{align*}
c&= n+ e(1-y) + g y \\
&= (n + g) + (e - g)(1-y).
\end{align*}
It is easy to see that $y_1$, optimal school attendance with the grant program in place, can be obtained by plugging a modified version of non-child income ($n+g$) and modified child wage offer ($e-g$) into the same $\phi(\cdot)$ function as before:
$$
y_1 = \phi(n+g, e-g; w, \epsilon).
$$
In addition to this structure, NPS imposes a key assumption on the distribution of random variables:
\begin{description}
\item[NPS1.] Exogeneity of non-child income and child wage offers: $\epsilon_{i} \independent N_i, E_i | W_i$.
\end{description}
The NPS CATE for children with characteristics $w, n, e$ is identified as
\begin{align}
\notag
\mathbb{E}\, [ Y_{1i} | N_i = n, E_i = e, W_i = w] - & \mathbb{E}\, [ Y_{0i} | N_i=n, E_i=e, W_i = w ] \\
\notag
= \mathbb{E}\,[ Y_{0i} | N_i = n+g, E_i = e - g, W_i = w] - & \mathbb{E}\,[ Y_{0i} | N_i=n, E_i=e, W_i = w] \\
\label{eq:nps_cate}
= \mathbb{E}\,[ Y_i | T_i = 0, N_i = n+g, E_i = e - g, W_i = w] - & \mathbb{E}\,[ Y_i | T_i=0, N_i=n, E_i=e, W_i = w ]
\end{align}
Crucially, \eqref{eq:nps_cate} is identified in the data provided to predictors for context $c$.
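The identification result in \eqref{eq:nps_cate} maps directly into a plug-in estimator: regress $Y$ on $(N, E)$ in the control group, then difference the fit at the shifted arguments $(n+g, e-g)$ against the original $(n, e)$. The sketch below uses a simple Gaussian-kernel Nadaraya-Watson smoother as a stand-in for the mixed-datatype kernel regression used in the paper, on synthetic data.

```python
# Plug-in NPS CATE: \hat{m}(n+g, e-g) - \hat{m}(n, e), with \hat{m} estimated
# on control data only. All data and tuning choices here are illustrative.
import numpy as np

def nw(X, y, x0, h=1.0):
    """Nadaraya-Watson regression of y on X, evaluated at the rows of x0."""
    d2 = ((x0[:, None, :] - X[None, :, :]) / h) ** 2
    k = np.exp(-0.5 * d2.sum(-1))           # Gaussian product kernel
    return (k @ y) / k.sum(1)

rng = np.random.default_rng(2)
n_inc = rng.uniform(0, 10, 500)             # non-child income n
e_off = rng.uniform(0, 5, 500)              # child wage offer e
# synthetic control-group enrollment: decreasing in the wage offer
y0 = (rng.uniform(size=500) < 1 / (1 + np.exp(e_off - 2))).astype(float)

X = np.column_stack([n_inc, e_off])
g = 1.0                                     # grant amount
shifted = np.column_stack([n_inc + g, e_off - g])
cate = nw(X, y0, shifted) - nw(X, y0, X)    # predicted enrollment gain
```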
\subsubsection{Our Implementation: the Semi-Parametric Structural (SPS) Approach}
As in \citet{Todd2010a}, we do not observe wages for most children in our contexts.
In addition, in some of our contexts non-child income is not directly observable since households are smallholder farmers.
We therefore make some modifications to the NPS model.
Each child's wage offer is given by exponentiating the conditional expectation of her earnings given her age, gender, industry of work\footnote{If observed, otherwise we set this to the most common industry for child workers.}, and locality of residence.\footnote{\citet{Todd2010a}, in contrast set children's wage offers to the village-level wage for an agricultural worker but this variable is not available in many of our contexts, and in fact in only half of the PROGRESA evaluation villages they consider.}
\begin{description}
\item[SPS1.] Conditional expectation wage offers:
\begin{align*}
E_i &= \exp( \mathbb{E}\, [ \log(E_i) | Age_i, Sex_i, Industry_i, Locality_i ] ) \\
&= e_0(Age_i, Sex_i, Industry_i, Locality_i).
\end{align*}
\end{description}
This is very close to how we handle wages in the dynamic parametric structural model (DPS) described in Section \ref{sec:dps_model}, except that in the DPS model agents optimize with respect to (1) the sample mean function $\hat{\mathbb{E}\,}[ E_i | Age_i, Sex_i, Industry_i, Locality_i ]$ rather than (2) the population expectation function $ \mathbb{E}\,[ E_i | Age_i, Sex_i, Industry_i, Locality_i ]$ (the use of (1) in DPS exactly follows AMS).
We handle non-child income in the same way: by computing the sum of the expected earnings of all the members in $i$'s household, excluding $i$.
\begin{description}
\item[SPS2.] Conditional expectation non-child income:
\begin{align*}
N_i &= \sum_{j \in Household_i, j \neq i} \exp ( \mathbb{E}\, [ \log(E_j) | Age_j, Sex_j, Industry_j, Locality_i ] ) \\
&= n_0(Household_i).
\end{align*}
Additionally, assume $E_i$ is missing at random, effectively following \cite{Todd2010a}.
\end{description}
With industry and locality indicators, the set of conditioning variables is high-dimensional.
We therefore estimate the conditional expectation function by LASSO, which is justified under the following assumption.
\begin{description}
\item[SPS3.] Approximately sparse linear representation of expected wages.
\begin{align*}
&\mathbb{E}\, [ \log(E_i) | Age_i, Sex_i, Industry_i, Locality_i ] \\
&\approx \eta_0 + \eta_1 Age_{i} + \eta_2 (Age_{i} - 21 )_{+} + \eta_3 male_i + \zeta_{industry} + \lambda_{province} + \xi_{locality} + \nu_{i}
\end{align*}
where approximation is in the sense of \cite{Belloni2012}.
The notation $(\cdot)_{+}$ indicates the positive part of the expression in parentheses.
$\zeta_{industry}, \lambda_{province}$, and $\xi_{locality}$ are fixed effects.
Province is the top level subnational geographic unit (like a US state), which is defined for each context following IPUMS-International.
Locality is the smallest geographic level available in each dataset.
We do not subject the linear spline in age to the LASSO penalty because the substantial positive gradient of wage in age for youths is a key driver of the opportunity cost of enrollment.
Similarly, we always include the industries employing the majority of children in each context.\footnote{We exclude education following AMS. In practice, and echoing AMS's findings, education has little effect on earnings in our rural contexts. Education is never selected by LASSO and its inclusion has almost no impact on other variables' coefficients or the selected penalty term.} \label{ass:sparse_regression}
\end{description}
We select the LASSO penalty term by 5-fold least squares cross-validation.
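The SPS3 wage regression and its cross-validated penalty can be sketched with scikit-learn's \texttt{LassoCV}. Note one caveat: \texttt{LassoCV} penalizes every coefficient, whereas the paper exempts the age spline from the penalty, which would require additional work (e.g.\ partialling the spline out first). Data and coefficients below are synthetic.

```python
# Sketch of the SPS3 regression: log earnings on a linear age spline, sex,
# and industry dummies, with the penalty chosen by 5-fold cross-validation.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(3)
n = 800
age = rng.integers(8, 40, n)
male = rng.integers(0, 2, n)
industry = rng.integers(0, 6, n)

X = np.column_stack([
    age,
    np.maximum(age - 21, 0),                # (Age - 21)_+ spline term
    male,
    np.eye(6)[industry],                    # industry fixed effects
])
log_e = (0.08 * age - 0.06 * np.maximum(age - 21, 0)
         + 0.2 * male + rng.normal(0, 0.3, n))

model = LassoCV(cv=5, random_state=0).fit(X, log_e)
wage_offer = np.exp(model.predict(X))       # E_i = exp(E[log E_i | ...])
```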
Finally, we replace assumption NPS1 with
\begin{description}
\item[SPS4.] $\mathbb{E}\,[\varepsilon_{i} | e_0(Age_i, Sex_i, Industry_i, Locality_i), n_0(Household_i), Sex_i] = 0$ where $\varepsilon_i$ is defined in the conditional expectation equation
\begin{align}
\label{eq:sps_exogeneity}
Y_i = m_0(e_0(Age_i, Sex_i, Industry_i, Locality_i), n_0(Household_i), Sex_i) + \varepsilon_{i}.
\end{align}
\end{description}
Our approach is the following.
We estimate $e_0(Age_i, Sex_i, Industry_i, Locality_i)$ in a first stage and use the resulting $\hat{e}$ in place of $e_0$ in $n_0$.
We then estimate $m_0$, our analog of $\mathbb{E}\,[ Y_i | T_i=0, N_i=n, E_i=e, W_i = w ] $ from equation \eqref{eq:nps_cate}, with $\hat{e}$ and $\hat{n}$ in place of $e_0$ and $n_0$.
$\hat{e}$ and $\hat{n}$ are generated regressors in the sense of \cite{Mammen2012} who show that the second stage estimate is consistent for $m_0$.
Our second stage implementation is by mixed datatype kernel regression with bandwidth selected by cross-validation using the \texttt{np} package in R (\cite{Hayfield2008}).
We then use the second stage estimate to generate the predicted SPS CATE just as in equation \eqref{eq:nps_cate}.
\subsubsection{Results}
Figures \ref{fig:sps_original_density}, \ref{fig:sps_counterfactual_density}, and \ref{fig:sps_bivariate_regression} provide some intuition for how SPS works in practice.
Figure \ref{fig:sps_original_density} shows the bivariate density of child wage offers in the held out portion of the control group on the y-axis and non-child income on the x-axis.
Figure \ref{fig:sps_counterfactual_density} shows how the model predicts the effect of the TAYSSIR grant.
Effective child wage offers decrease, shifting the density down, and non-child income is increased slightly, moving density slightly right.
The impact of this shift depends on the non-parametric regression of enrollment as a function of child wage offer and non-child income in the portion of the control group (non-holdout) available to predictors, which is depicted in Figure \ref{fig:sps_bivariate_regression}.
The plot shows that non-child income has little association with enrollment conditional on a child's wage offer.
Child wage offers do matter, however, particularly in the region where probability mass is being moved.
Table \ref{tab:sps-agesex-welfare} shows that the SPS approach tends to overstate the magnitude of treatment effects.\footnote{This turns out to be true for both positive and negative treatment effects, with specifics to be included in later drafts.}
Relative to age-sex based extrapolation from PROGRESA, SPS is only slightly better in cost-adjusted enrollment gain terms than making all children in Morocco eligible for TAYSSIR grants.
We think this is because the non-parametric second step is simply too volatile, despite having been regularized by cross validation based bandwidth selection.
\subsection{Method 4: Dynamic Parametric Structural}
\label{sec:dps_model}
\paragraph{Model}
Our Dynamic Parametric Structural (DPS) model is largely based on \citet{Attanasio2012}, with a few modifications made to fit data availability and improve in-sample fit in our contexts, following standard practice (see e.g. \citet{Wolpin2013}).
The main dynamic features of the model are (1) a finite horizon - children can only be enrolled until age 17 and so can only accumulate subsidies up to this point and (2) persistence of education choices - the flow utility of enrollment is affected by the number of years the child is behind her age-appropriate grade level.
Uniquely among our methods, the DPS model allows modeling of the entire subsidy schedule by age.
\paragraph{Flow utility}
The in-period utility for child $i$\footnote{Again, schooling decisions are made by the household for each child independently.} at age $a$ in school and work are $u^S_{ia}$ and $u^W_{ia}$, respectively:
\[
\begin{array}{l}
u_{ia}^{S} = \gamma \delta g_{ia} + \mu_i + \psi^{\prime} z_{ia} + b' \cdot \text{yrs\_behind}_{ia} + 1\left( p_{ia} = 1 \right) \beta^{p} x_{ia}^{p} + 1\left( s_{ia} = 1 \right) \beta^{s} x_{ia}^{s} + \varepsilon_{ia} \\
u_{ia}^{W} = \delta w_{ia}
\end{array}
\]
$g_{ia}$ represents the grant $i$ is entitled to given her completed years of schooling and other characteristics (for example gender).
$\mu_i$ is a child-specific shifter to the preference for enrollment, drawn from a discrete distribution with $K$ points of support.
We will refer to each point of support as an unobserved child ``type''.
In practice, we estimate the model with three types.
$z_{ia}$ includes other observed covariates affecting preference for enrollment.
Specifically it includes $a$ and a dummy variable for whether $i$'s father received any formal schooling.
$yrs\_behind_{ia}$ is a three-element vector of dummy variables with indicators for being behind grade level by 1 year, 2 years, or $\geq 3$ years.
$p_{ia}$ is a dummy variable measuring whether $i$'s years of schooling at age $a$ make her eligible for primary school and $x^p_{ia}$ measures distance to school, proxying for cost of attendance.
$s_{ia}$ is a dummy variable equal to one if $i$'s years of schooling make her eligible for secondary school and $x^s_{ia}$ is a constant.
$w_{ia}$ is $i$'s wage offer.
$\varepsilon_{ia}$ is an IID idiosyncratic shock to the utility of $i$'s attending school at age $a$, which follows the logistic distribution.\footnote{Note that this $\varepsilon_i$ has no relation to the $\varepsilon_i$ from equation \eqref{eq:sps_exogeneity}.}
We allow some of the coefficients to vary by $i$'s sex.
In particular, $\beta^s$ and the component of $\psi$ multiplying age vary by gender.
The unobserved type probability also depends on the sex of the child.
\paragraph{Wage offer}
Log wage offers are computed according to the same sparse linear regression representation equation described in assumption SPS3 of the SPS model.
The only difference is that in the SPS model predicted values from the LASSO regression are treated as estimates of the true wage offer.
In the DPS model, agents are assumed to use the same predicted values as the econometrician.
\paragraph{Terminal value}
The value of having accumulated $ed_{i18}$ years of school in the terminal period (when the child is 18) is given below.
\[
V\left( ed_{i,18} \right) = \frac{\alpha_1}{1 + \exp\left( -\alpha_2\, ed_{i,18} \right)} + \alpha_3 \cdot 1\{ed_{i,18} \geq sec\}.
\]
$\alpha_3$ measures the additional value of having completed secondary school, measured by $ed_{i,18}$ being greater than the last year of secondary school, $sec$.
\paragraph{Value functions}
The value of choosing to have $i$ attend school after having completed $ed_{ia}$ years of education by age $a$ is:
\begin{align}
V_{ia}^{S}\left( ed_{ia} \right) = u_{ia}^{S} + \beta \{ p_a^S & \left( ed_{ia} + 1 \right) \mathbb{E}\, \max \left[ V_{i,a+1}^{S}\left( ed_{ia} + 1 \right), V_{i,a+1}^{W}\left( ed_{ia} + 1 \right) \right] \notag \\
& + \left( 1 - p_a^S\left( ed_{ia} + 1 \right) \right) \mathbb{E}\, \max \left[ V_{i,a+1}^{S}\left( ed_{ia} \right), V_{i,a+1}^{W}\left( ed_{ia} \right) \right] \}. \label{eq:enrolling_value}
\end{align}
We implicitly condition on covariates $W_{ia}$ redefined as $W_{ia} \setminus \{ed_{ia}\}$ and $\mu_i$.
$p_a^S(ed)$ is the probability of successfully passing grade $ed$ at age $a$ conditional on enrolling.
We estimate it non-parametrically, outside the model (like the wage offer function).
If $i$ successfully passes the grade, she expects to receive the maximum of the value of enrolling or choosing to work in the next year with her education equal to $ed_{ia} + 1$.
The expectation is taken over possible realizations of the $\varepsilon_{ i, a + 1}$ shock.
If $i$ does not pass, she expects to get the maximum of the value of enrolling/working being one year older and with education still equal to $ed_{ia}$.
The term in braces in equation \ref{eq:enrolling_value} is thus the next-period expected value.
From the point of view of this period, the expected value is discounted by $\beta$ which we set equal to 0.95, following AMS.
We add the flow utility of being enrolled to complete the definition of the value function when enrolling.
The value of having $i$ work this period is simpler since $ed_{ia}$ stays fixed:
\[
V_{ia}^{W}\left( ed_{ia} \right) = u_{ia}^{W} + \beta\, \mathbb{E}\, \max \left\{ V_{i,a+1}^{S}\left( ed_{ia} \right), V_{i,a+1}^{W}\left( ed_{ia} \right) \right\}.
\]
Given a set of candidate parameters, we solve for the value of each choice (enroll, work) at all possible combinations of age and years of education completed by backward induction, beginning from the terminal value.\footnote{Note that since the only error term $\varepsilon_{ia}$ follows an IID logistic distribution, $\mathbb{E}\, \max \left\{ V_{i,a+1}^{S}\left( ed_{ia} \right), V_{i,a+1}^{W}\left( ed_{ia} \right) \right\}$ has a closed form (see \citet{Keane2011}).
Our closed form is slightly different from theirs because they use two Type 1 Extreme Value random variables instead of one logistic draw.
We simply subtract $\varepsilon_2$ in both equations in their function to derive our closed form:
\[
\mathbb{E}\, \max \left\{ V_{ia}^{S}\left( ed_{ia} \right), V_{ia}^{W}\left( ed_{ia} \right) \right\} =
\rho \log \left[ \exp\left( V^S_{ia} / \rho \right) + \exp\left( V^W_{ia} / \rho \right) \right].
\]
We normalize the scale $\rho$ of the error term to 1 in estimation.}
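The backward-induction solution can be sketched as follows. The flow utilities, pass probability, and terminal-value parameters below are arbitrary placeholders for the estimated objects; only the recursion structure and the logistic closed form for $\mathbb{E}\max$ (with $\rho = 1$) follow the text.

```python
# Backward induction over (age, education) for the DPS value functions.
import numpy as np

BETA, MAX_ED, MAX_AGE = 0.95, 12, 17

def terminal(ed, a1=5.0, a2=0.5, a3=1.0, sec=12):
    """Terminal value at age 18 (placeholder parameter values)."""
    return a1 / (1 + np.exp(-a2 * ed)) + a3 * (ed >= sec)

def u_school(a, ed): return 0.5            # placeholder flow utilities
def u_work(a, ed):   return 0.1 * a
def p_pass(a, ed):   return 0.9            # placeholder pass probability

def emax(vs, vw):
    """Closed-form E max under a logistic shock with scale rho = 1."""
    m = np.maximum(vs, vw)
    return m + np.log(np.exp(vs - m) + np.exp(vw - m))

VS = np.zeros((MAX_AGE + 2, MAX_ED + 1))   # V^S indexed by (age, ed)
VW = np.zeros((MAX_AGE + 2, MAX_ED + 1))   # V^W indexed by (age, ed)
for ed in range(MAX_ED + 1):               # terminal values at age 18
    VS[MAX_AGE + 1, ed] = VW[MAX_AGE + 1, ed] = terminal(ed)

for a in range(MAX_AGE, 5, -1):            # induct backward from age 17 to 6
    for ed in range(MAX_ED + 1):
        ed1 = min(ed + 1, MAX_ED)
        cont_pass = emax(VS[a + 1, ed1], VW[a + 1, ed1])
        cont_fail = emax(VS[a + 1, ed], VW[a + 1, ed])
        p = p_pass(a, ed)
        VS[a, ed] = u_school(a, ed) + BETA * (p * cont_pass + (1 - p) * cont_fail)
        VW[a, ed] = u_work(a, ed) + BETA * emax(VS[a + 1, ed], VW[a + 1, ed])
```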
\paragraph{Likelihood}
With the value function in hand and the logistic error distribution assumption, it is straightforward to compute the likelihood of each child's being enrolled given observed characteristics $W_i$ and unobserved type $\mu_i$: $\Pr(Y_i = 1 | W_i, \mu_i = \mu_k)$.\footnote{We drop the $a$ subscript because the model is estimated on a single cross-section so age does not vary with $i$.}
The full likelihood associated with $Y_i = 1$ is given by:\footnote{If $Y_i = 0$ the likelihood contribution is 1 - \eqref{eq:enrolled_likelihood}.}
\begin{align}
\label{eq:enrolled_likelihood}
\Pr(Y_i = 1| W_i ) = \sum_{k=1}^K
\frac{1}{1+\exp\left( V^W_{i}(ed_i | \mu_k, W_i ) - V^S_{i}( ed_i | \mu_k, W_i) \right)}
\Pr(\mu_i = \mu_k | W_i).
\end{align}
Since the utility shock $\varepsilon_i$ is IID across time and individuals, we could in principle derive the conditional type probability as a function of the history of characteristics and education decisions of $i$ starting at the age when $i$'s enrollment was first considered ($min\_a$): $(W_{i,a-1}, \ldots, W_{i,min\_a})$ and $(Y_{i,a-1}, \ldots, Y_{i,min\_a})$ respectively:
\begin{align}
&\Pr(\mu_i = \mu_k | (Y_{i,a-1}, \ldots, Y_{i,min\_a}), (W_{i,a-1}, \ldots, W_{i,min\_a})) \notag \\
&= \frac{\Pr(Y_{ia} | W_{ia}, \mu_i = \mu_k) \cdots \Pr(Y_{i,min\_a} | W_{i,\min\_a}, \mu_i = \mu_k) \Pr (\mu_k)}
{\sum_{k=1}^{K} \Pr(Y_{ia} | W_{ia}, \mu_i = \mu_k) \cdots \Pr(Y_{i,min\_a} | W_{i,\min\_a}, \mu_i = \mu_k) \Pr (\mu_k)}. \label{eq:type_probability}
\end{align}
Actually estimating \eqref{eq:type_probability} is infeasible, however, since it requires knowledge of the full histories $(W_{i,a-1}, \ldots, W_{i,min\_a})$ and $(Y_{i,a-1}, \ldots, Y_{i,min\_a})$\footnote{These would be included in $W_i$, but unfortunately we do not have them.} and, furthermore, would be very high-dimensional.
Following \cite{Todd2006}, we instead use a multinomial logit approximation:
\begin{align*}
&\Pr(\mu_i = \mu_k | (Y_{i,a-1}, \ldots, Y_{i,min\_a}), (W_{i,a-1}, \ldots, W_{i,min\_a})) \\
&\approx \Pr(\mu_i = \mu_k | a, ed_{ia}, gender, father\_ed) \\
& \ \ \ \approx \frac{\exp(\beta^\mu_k \cdot (1, a, ed_{ia}, gender, father\_ed))}
{1 + \sum_{j=1}^{K-1} \exp(\beta^\mu_j \cdot (1, a, ed_{ia}, gender, father\_ed))}, \quad k \in \{1, \ldots, K-1\}
\end{align*}
Age proxies for the length of the history, $ed_{ia}$ for $(Y_{i,a-1}, \ldots, Y_{i,min\_a})$ and $gender$ and $father\_ed$ for $(W_{i,a-1}, \ldots, W_{i,min\_a})$ (specifically $W_{i,min\_a}$).
Importantly, $ed_{ia}$ is excluded from flow utility so there is independent variation to identify the conditional type probabilities.
We estimate the parameters described above, along with the support points $(\mu_1, \cdots, \mu_K)$ by maximum likelihood.
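The likelihood contribution in \eqref{eq:enrolled_likelihood} can be sketched as below: a logit in $V^S - V^W$, mixed over $K$ unobserved types whose conditional probabilities come from the multinomial-logit approximation. All numerical values are illustrative stand-ins for estimated quantities.

```python
# Mixture likelihood of enrollment over K unobserved types.
import numpy as np

def type_probs(x, betas):
    """Multinomial logit over K types; betas has K-1 rows, base type last."""
    scores = np.concatenate([betas @ x, [0.0]])   # base type normalized to 0
    e = np.exp(scores - scores.max())
    return e / e.sum()

def pr_enrolled(vs_by_type, vw_by_type, x, betas):
    """Pr(Y=1 | W): logistic choice probability mixed over type probabilities."""
    pk = type_probs(x, betas)
    p_enroll = 1 / (1 + np.exp(vw_by_type - vs_by_type))
    return float(pk @ p_enroll)

# hypothetical child: x = (1, age, ed, gender, father_ed); K = 3 types
x = np.array([1.0, 14.0, 7.0, 1.0, 0.0])
betas = np.array([[ 0.1, -0.02,  0.05, 0.2, 0.3],
                  [-0.2,  0.01, -0.03, 0.1, 0.0]])
vs = np.array([2.0, 1.0, 0.5])                    # V^S by type (placeholders)
vw = np.array([1.5, 1.2, 0.8])                    # V^W by type (placeholders)
lik = pr_enrolled(vs, vw, x, betas)
```

Maximum likelihood then searches over the structural parameters, the support points $(\mu_1, \ldots, \mu_K)$, and the type-probability coefficients $\beta^\mu_k$.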
\subsubsection{Results}
Figures \ref{fig:dps_fit_by_age} and \ref{fig:dps_fit_by_ed} show that the model fits well in the portion of the Moroccan control group made available to predictors.
Figure \ref{fig:dps_fit_by_age} shows the fit to enrollment rates by age.
The size of each point on the graph represents the sample size in that age-sex stratum.
The model captures delayed entry into school, near-universal enrollment at young ages, and the sharp drop in enrollment for teenagers.
Figure \ref{fig:dps_fit_by_ed} illustrates the fit by number of years of education completed.
It captures the drop in enrollment at the transition to secondary school (year 7 in Morocco).
Table \ref{tab:dps-agesex-welfare} shows the results from using the DPS model to predict the effect of the TAYSSIR program.
For all but a tiny fraction of the Moroccan holdout sample, predicted enrollment gains - while reasonable - are too small to exceed the cost-effectiveness threshold.
We show this visually in Figure \ref{fig:dps_predicted_te_dist}, plotting the CDF of predicted treatment effects due to observables.
A key point from Table \ref{tab:dps-agesex-welfare} is how close TAYSSIR is to being non-cost-effective for \emph{any} child.
Age-sex based extrapolation from PROGRESA only provides a small, statistically insignificant increase in welfare relative to DPS's no-treatment recommendation.
\section{Conclusion}
We develop a decision-based approach to comparing the relative performance of methods for generating counterfactual predictions that are then used to make policy recommendations.
We consider a social planner who is operating in a target context and is seeking recommendations on what policy to choose from a set of feasible options.
The richness of the space of policy options determines the nature of the recommendations being sought --- e.g., whether a simple up-or-down recommendation to treat everyone or no one, or a more refined recommendation about who should be treated and who not.
Recommendations could be based on econometric estimates, whether reduced form or structural, or expert opinions.
Our leading application is one where the planner maximizes a linear welfare objective in assigning treatments on the basis of available covariate information.
In this case, the success of a method for generating recommendations depends on how accurately it can predict conditional treatment effects in the target context.
We define a welfare contrast to use for conducting an {\it ex post} analysis of how well different methods performed with respect to the planner's goals.
We estimate this welfare contrast by using experimental data that reveals how a treatment affects the outcome distribution in the target population.
The welfare contrast is straightforward to compute, and it allows us to judge whether one method outperforms another in a manner that is statistically significant.
We provide an empirical illustration that considers a planner seeking a recommendation on how to implement a program using conditional cash transfers (CCTs) to boost school enrollment in Morocco.
The data available for generating recommendations include a randomized evaluation of CCTs in Mexico as well as data from Moroccan households under the status quo {\it ex ante}, in which no CCTs have been applied.
We generate recommendations from reduced form methods and structural models.
We then perform an {\it ex post} evaluation of these methods using data from a randomized evaluation of CCTs in Morocco.
We view this toy example as helping build intuition for how to specify methods to evaluate in a full-featured empirical portion of the paper including the contexts from \cite{Banerjee2015b} which we will pre-specify.
We see this exercise as making three contributions.
First, as our application attempts to show, it provides a clear framework to assess internal validity versus external validity trade-offs.
In particular, our application allows us to assess how robust and internally valid estimates from external contexts fare relative to within-context estimates that may be biased due to model misspecification \citep{Pritchett2013}.
Second, it provides a principled basis for assessing the performance of different methods by tying the assessment to welfare considerations.
This is important, because different objective functions can imply different rank orderings of methods.
Our approach thus forces one to first consider the welfare objective so as to be clear about the relevant objective.
Third, we show that each experiment or observational study may contain much more decision-relevant information than would be contained in a single treatment effect estimate.
We are undertaking a number of extensions to what we have done here.
Model selection or model averaging approaches based on our welfare criteria may lead to better predictions.
We also plan to work with evidence bases that include more external contexts.
In doing so, we would want to account for site selection, as per \citet{Allcott2015} and \citet{Gechter2018}.
\clearpage
\section{Introduction}
The radio properties of quasars with otherwise very similar optical
properties can be markedly different. There is a clear dichotomy
between radio-loud and radio-quiet quasars in optically selected
samples. The radio-loudness is usually characterized by the
radio-to-optical flux ratio. In the PG quasar sample, which is
probably the best studied quasar sample in the radio and optical
(\citeNP{KellermannSramekSchmidt1989}; \citeNP{BorosonGreen1992}),
radio-loud and radio-weak quasars separate cleanly in two distinct
populations (e.g.~\citeNP{KellermannSramekSchmidt1989}).
It is known that radio-loud AGN almost never reside in late type,
i.e.~spiral galaxies (e.g.~\citeNP{KirhakosBahcallSchneider1999};
\citeNP{BahcallKirhakosSchneider1995}) whereas radio-quiet quasars
appear both in spiral and in elliptical host galaxies. Furthermore,
all relativistically boosted jets with superluminal motion and typical
blazars have been detected in early type galaxies
(e.g.~\citeNP{ScarpaUrryFalomo2000}). It is still unclear, why AGN in
spiral galaxies, at the same optical luminosity as their elliptical
counterparts, should not be able to produce the powerful, relativistic
jets seen in radio galaxies.
However, a few sources with intermediate radio-to-optical ratios
appear to be neither radio-loud nor radio-quiet. They form a distinct
subclass with very similar radio morphological and spectral
properties. They all have a compact core at arcsecond scales and a flat and
variable spectrum in common. These properties are very similar to the
ones of radio cores in radio-loud quasars, but their low
radio-to-optical ratio and their low extended steep-spectrum emission
is atypical for radio-loud quasars. \citeN{MillerRawlingsSaunders1993}
and \citeN{FalckeSherwoodPatnaik1996} have identified a number of these
sources, called ``radio-intermediate quasars'' (RIQs), and suggested
that they might be relativistically boosted radio-weak quasars or
``radio-weak blazars''. This would imply that most, if not all,
radio-quiet quasars also have relativistic jets. In fact, Very Long
Baseline Interferometry (VLBI) observations of radio-quiet quasars
have already revealed high-brightness-temperature radio cores and jets
(\citeNP{FalckePatnaikSherwood1996}; \citeNP{BlundellBeasley1998}). A
crucial test of the relativistic jet hypothesis is the search for
apparent superluminal motion in these sources. A prime candidate for
detecting this is the brightest radio source in the RIQ sample,
III~Zw~2, which we discuss in this paper.
III~Zw~2 (PG 0007+106, Mrk 1501, $z=0.089$) was discovered by
\citeN{Zwicky1967}, classified as a Seyfert I galaxy (e.g., \citeNP{Arp1968};
\citeNP{KhachikianWeedman1974}; \citeNP{Osterbrock1977}), and later also
included in the PG quasar sample (\citeNP{SchmidtGreen1983}). The host galaxy
was classified as a spiral (e.g.~\citeNP{HutchingsCampbell1983}), and a
disk model was later confirmed by fits of model isophotes to near-IR images
(\citeNP{TaylorDunlopHughes1996}). A spiral arm was claimed
(\citeNP{Hutchings1983}) but recent observations suggest a tidal arm
with several knots of star forming regions (\citeNP{SuraceSandersEvans2001}).
III~Zw~2 is the brightest member of a group of galaxies, and extended
low-surface-brightness emission surrounding all the galaxies suggests
that they may be interacting
(\citeNP{SuraceSandersEvans2001}).
The source has shown extreme variability at radio wavelengths with at least
20-fold increases in radio flux density within 4 years
(\citeNP{AllerAllerLatimer1985}). III~Zw~2 is also known to be variable in the
optical (\citeNP{Lloyd1984}; \citeNP{ClementsSmithAller1995}) and X-ray
(\citeNP{KaastradeKorte1988}). \citeN{SalviPageStevens2002} compare the
long term radio light curves of III~Zw~2 with optical, IR and X-ray
light curves and find indications for correlated flux variations from
radio to X-ray wavelengths. Unfortunately, the time sampling at other
wavelengths than radio is very poor.
III~Zw~2 is a core-dominated flat-spectrum AGN with only a faint
extended structure (see~\citeNP{UngerLawrenceWilson1987} and
Section~\ref{iiizw2_ext}). The weak extended radio emission and the
host galaxy are quite typical for a Seyfert galaxy. Its [O{\sc III}]
luminosity is a mere factor of three brighter than that of a bright
Seyfert galaxy like Mrk~3
(e.g.~\citeNP{Alonso-HerreroWardKotilainen1997}) which explains why it
has been classified as either a Seyfert galaxy or a quasar. In this
luminosity region a distinction between the two may not be of much
significance.
Earlier VLBI observations of the source have only shown a
high-brightness-temperature unresolved core
(\citeNP{FalckeSherwoodPatnaik1996};
\citeNP{KellermannVermeulenZensus1998}), and millimeter-VLBI
observations by~\citeN{FalckeBowerLobanov1999} only barely resolved the
source into two very compact components. The broadband radio spectrum
is highly peaked, which is well explained by a very compact source and
synchrotron self-absorption.
The uniquely simple structure, together with the recurrence of radio
outbursts within five years, makes III~Zw~2 an ideal source for studying
radio-jet evolution, with relevance also to radio galaxies.
In section 2 we describe our Very Large Array (VLA) and Very Long
Baseline Array (VLBA) observations, before results from the VLA
monitoring are presented in sections 3.1--3.3. We then describe the
results from the VLBI observations in section 3.4. In section 4 we
discuss the results.
\section{Observations}
\begin{table}
\caption[]{Total observing time $t_{obs}$, fractions of observing time
on III~Zw~2 at 43 GHz ($f_{43}$) and 15 GHz ($f_{15}$), and fraction of
observing time on the phase-reference quasar ($f_{quasar}$) for the VLBA
observations.}
\label{vlbainfo}
\[
\begin{tabular}{l|cccc}
\hline
Date & $t_{obs}$ & $f_{43}$ &$f_{15}$& $f_{quasar}$ \\
\hline
1998/02/16 & 8 h &0.75 &0.25 &--\\
1998/06/13 & 9 h &0.75 &0.25 &--\\
1998/09/14 & 8 h &0.75 &0.25 &--\\
1998/12/12 & 8 h &0.75 &0.25 &--\\
1999/07/15 & 6 h &0.75 &0.25 &--\\
1999/11/12 & 8 h &0.33 &0.33 &0.33\\
2000/07/22 & 6 h &0.33 &0.33 &0.33\\
2000/08/27 & 8 h &0.33 &0.33 &0.33\\
2000/09/06 & 8 h &0.33 &0.33 &0.33\\
\hline
\end{tabular}
\]
\end{table}
In 1996 III~Zw~2 started a new major radio outburst and we initiated a
target of opportunity program to monitor the spectral evolution of the
burst with the VLA and its structural evolution with the VLBA with
excellent relative astrometry of the component separation.
We observed III~Zw~2 with the VLA 41 times from September 1998 until
September 2001 at intervals of roughly one month. The observations were
made at six frequencies ranging from 1.4 GHz to 43 GHz. Results from an
observation on 1998 May 21 at 350 MHz and on 1999 July 7 at 327.5 MHz are
also presented here.
The source 3C48 was used as the primary flux density calibrator, and
III~Zw~2 was self-calibrated and mapped with the Astronomical Image Processing
System (AIPS).
We also used the monitoring data at 8 and 15 GHz obtained with the
Michigan 26 m telescope, and at 22 and 37 GHz from the Mets\"ahovi
radio telescope. The single dish data are important for placing the VLA
and VLBA data in context, as they include a larger time window and are
more closely spaced.
We observed III~Zw~2 with the VLBA nine times over a period of 2.5 years at
15 and 43 GHz.
Details of these observations are given in Table~\ref{vlbainfo}. In the last
four epochs we included the background quasar J0011+0823 at 15 GHz as a
phase-reference source. For the second epoch, we used the Effelsberg 100 m
telescope in combination
with the VLBA. We observed four 8 MHz bands, each at right and left circular
polarization. The initial calibration was performed with the AIPS package.
A-priori amplitude calibration was applied using system temperature
measurements and standard gain curves. Fringes were found in the III~Zw~2
data on all baselines. The data were self-calibrated and mapped using the
software package DIFMAP (\citeNP{ShepherdPearsonTaylor1994}). We started with
phase-only self-calibration and later included phase-amplitude self-calibration
with solution intervals slowly decreasing down to one minute. Results
of the first five VLBA observations at 43 GHz were reported
by~\citeN{BrunthalerFalckeBower2000}.
\section{Results}
\subsection{Extended emission of III~Zw~2}
\label{iiizw2_ext}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics[bbllx=2.5cm,bburx=16.7cm,bblly=3.8cm,bbury=25.2cm,clip=,angle=-90]{iiizw2_l.ps}}
\resizebox{\hsize}{!}{\includegraphics[bbllx=2.5cm,bburx=17.7cm,bblly=3.8cm,bbury=25.2cm,clip=,angle=-90]{iiizw2_c.ps}}
\caption{Combined VLA map of 11 epochs (A,B,C and D-array) of III~Zw~2 at
1.4 GHz (top) and 4.8 GHz (bottom). All maps were convolved with a beam
of 2 $\times$2 arcseconds to detect faint extended emission. The
contours start at 0.26 mJy and 0.15 mJy at 1.4 GHz and 4.8 GHz,
respectively, and increase by factors of $\sqrt 2$.}
\label{lc-map}
\end{figure}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics[bbllx=2.5cm,bburx=16.7cm,bblly=3.8cm,bbury=25.2cm,clip=,angle=-90]{iiizw2_x.ps}}
\resizebox{\hsize}{!}{\includegraphics[bbllx=2.5cm,bburx=17.7cm,bblly=3.8cm,bbury=25.2cm,clip=,angle=-90]{iiizw2_u.ps}}
\caption{Combined VLA map of 11 epochs (A,B,C and D-array) of III~Zw~2 at
8.4 GHz (top) and 15 GHz (bottom). All maps were convolved
with a beam of 2$\times$2 arcseconds to detect faint extended
emission. The contours start at 0.12 mJy and 0.6 mJy at 8.4 GHz and 15 GHz,
respectively, and increase by factors of $\sqrt 2$.}
\label{xu-map}
\end{figure}
\citeN{UngerLawrenceWilson1987} discovered a weak radio component
$15.4''$ (23 kpc, with an angular size distance of $d_A\sim307.4$ Mpc;
$H_0=75$~km~s$^{-1}$~Mpc$^{-1}$ and $q_0=0.5$ are used throughout this paper)
southwest of the nucleus. This detection was confirmed later
(\citeNP{KukulaDunlopHughes1998}; \citeNP{FalckeBowerLobanov1999}), but
no additional extended radio emission was found.
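As an aside, the projected linear sizes quoted here follow directly from the
measured angular separations and the adopted angular size distance. The
following Python sketch (using only values stated in the text) illustrates
the conversion:

```python
import math

# Cosmology adopted in this paper: H0 = 75 km/s/Mpc, q0 = 0.5, giving an
# angular size distance d_A ~ 307.4 Mpc at z = 0.089.
def projected_size_kpc(theta_arcsec, d_A_Mpc=307.4):
    """Convert an angular separation to a projected linear size."""
    theta_rad = theta_arcsec * math.pi / (180.0 * 3600.0)
    return theta_rad * d_A_Mpc * 1e3  # Mpc -> kpc

# Southwestern lobe: 15.4 arcsec -> ~23 kpc, as quoted in the text.
print(round(projected_size_kpc(15.4), 1))  # -> 23.0
# Northeastern lobe: 21.9 arcsec -> ~32.6 kpc.
print(round(projected_size_kpc(21.9), 1))  # -> 32.6
```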
To study the extended structure in more detail, we combined the raw
data of eleven VLA observations, taken in the A, B, C and D
configurations. Since the nucleus is highly variable, we subtracted it
from the uv-data before combining the data. The combined data set was
then self-calibrated and mapped.
The combined VLA maps at 1.4, 4.8, 8.4 and 15 GHz are shown in
Figs.~\ref{lc-map} and \ref{xu-map}. The 4.8, 8.4 and 15 GHz maps were convolved
with a large beam of 2$\times$2 arcseconds to detect faint extended
structure. We detected the southwestern component at all four frequencies.
This radio lobe or hotspot is connected to the nucleus with a jet-like
structure visible at 1.4, 4.8 and 8.4 GHz. At 1.4 GHz one sees an indication
that the jet is ejected in a northwestern direction and is deflected by almost
$90^{\circ}$ towards the southwestern lobe. This is also in accordance with the
direction of the jet on sub-parsec scales (see section~\ref{vlbi}).
We also discovered a weaker secondary radio lobe $21.9''$ (32.6 kpc) on the
opposite side of the galaxy at 1.4 to 8.4 GHz. If one assumes equal expansion
velocities for both lobes,
a simple time travel argument (e.g., \citeNP{RyleLongair1967}) would suggest
that the weaker northeastern lobe is approaching and the brighter southwestern
lobe is receding. However, this scenario cannot explain the difference in
flux density between the two lobes: one would expect the approaching lobe to be
brighter due to relativistic boosting of the emission.
Hence it is more likely that the arm-length difference is explained by an
asymmetric expansion of the two lobes due to different intrinsic
velocities or differences in the ambient medium, i.e. the medium in the
southwest of III~Zw~2 has a higher density than the medium in the
northeast. This is supported by the fact that there is a close
companion galaxy only $\sim 30''$ to the south.
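For intrinsically symmetric lobes ejected with speed $\beta c$ at angle
$\theta$ to the line of sight, the light-travel-time argument gives an
apparent arm-length ratio $R=(1+\beta\cos\theta)/(1-\beta\cos\theta)$. A
short sketch with the separations measured above shows that only a modest
line-of-sight speed would be needed in the naive picture, which makes its
failure to explain the brightness asymmetry the more telling:

```python
# Naive light-travel-time argument (Ryle & Longair 1967):
# R = D_approaching / D_receding = (1 + b) / (1 - b), with b = beta*cos(theta).
D_ne = 21.9  # arcsec, northeastern lobe (longer arm, nominally approaching)
D_sw = 15.4  # arcsec, southwestern lobe (shorter arm, nominally receding)

R = D_ne / D_sw
beta_cos_theta = (R - 1.0) / (R + 1.0)
print(round(beta_cos_theta, 2))  # -> 0.17
```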
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics[bbllx=9.2cm,bburx=19.6cm,bblly=1.9cm,bbury=16.6cm,clip=,angle=-90]{hotspec.ps}}
\caption{Spectra of the southwestern (circles) and northeastern (squares)
radio lobes. The black triangles are upper limits for the northeastern lobe.}
\label{lobes}
\end{figure}
The spectra
of the two radio lobes are shown in Fig.~\ref{lobes}. The 350 and 327.5 MHz
data are from our observations on 1998 May 21 and 1999 July 7, respectively,
in which we also detected the southwestern component. Both radio lobes have a
steep spectrum with spectral indices of $\alpha =
-0.57$~to~$-1.15$. Values around $-0.7$ are typical for synchrotron
emission of optically thin radio lobes of radio galaxies. There is no
break or a steepening of the spectrum towards higher frequencies. This
indicates that both radio lobes are still active and powered by the
central engine. Otherwise, the high energy electrons would have lost
most of their energy due to radiation losses. This would lead to a
steepening in the spectrum at higher frequencies.
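The indices quoted here use the convention $S_\nu \propto \nu^{\alpha}$, so
that a two-point index follows from two flux measurements as
$\alpha = \ln(S_1/S_2)/\ln(\nu_1/\nu_2)$. A minimal sketch with hypothetical
flux values (not our measurements):

```python
import math

def spectral_index(S1, S2, nu1, nu2):
    """Two-point spectral index, convention S ~ nu**alpha."""
    return math.log(S1 / S2) / math.log(nu1 / nu2)

# Hypothetical lobe fluxes: 10 mJy at 1.4 GHz falling to 2.6 mJy at 8.4 GHz.
alpha = spectral_index(10.0, 2.6, 1.4, 8.4)
print(round(alpha, 2))  # -> -0.75, a steep, optically thin spectrum
```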
\subsection{Variability}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics[bbllx=9.2cm,bburx=18.2cm,bblly=1.9cm,bbury=16.6cm,clip=,angle=-90]{8.ps}}
\resizebox{\hsize}{!}{\includegraphics[bbllx=9.2cm,bburx=18.2cm,bblly=1.9cm,bbury=16.6cm,clip=,angle=-90]{15.ps}}
\resizebox{\hsize}{!}{\includegraphics[bbllx=9.2cm,bburx=18.2cm,bblly=1.9cm,bbury=16.6cm,clip=,angle=-90]{22.ps}}
\resizebox{\hsize}{!}{\includegraphics[bbllx=9.2cm,bburx=19.6cm,bblly=1.9cm,bbury=16.6cm,clip=,angle=-90]{43.ps}}
\caption{Radio light curves of III~Zw~2. The triangles at 8 and 15 GHz are from the Michigan monitoring program, the squares at 22 and 37 GHz are from the Mets\"ahovi monitoring program. The circles are our VLA observations at 8, 15, 22 and 43 GHz.}
\label{longterm}
\end{figure}
The core of III~Zw~2 shows extreme variability at radio wavelengths.
Long-term radio light curves of this source spanning more than 20 years are
shown in Fig.~\ref{longterm} using data from Michigan, Mets\"ahovi, and the
VLA. One can see major flares with 30-fold increases in radio flux
density within two years. These major flares occur roughly every five
years with sub-flares on shorter timescales.
The outburst discussed in this paper started in 1996 and we monitored this
flare with the VLA. The good time sampling of one observation each month
allowed us to study this outburst in great detail. Lightcurves from the
most recent flare at six frequencies from 1.4 to 43 GHz are shown in
Fig.~\ref{flare1} and~\ref{flare2} together with our best model fits to
the data. Since the Michigan data at 8 GHz are rather noisy compared to
the other frequencies and the VLA data, we used
only our VLA monitoring data for the fits at this frequency. At 15, 22
and 37 GHz we used the VLA data as well as the Michigan and Mets\"ahovi
data.
First we fitted a linear rise and decay to the flare. The rise is
consistent with a linear fit at all frequencies. The decay is also linear
at 4.8 GHz. At higher frequencies, the decay is linear only for a short
time period and deviates significantly from a linear behavior at later times.
Thus we used only the linear part of the lightcurves for our fits.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics[bbllx=9.2cm,bburx=18.2cm,bblly=1.9cm,bbury=16.6cm,clip=,angle=-90]{flare_1+exp.ps}}
\resizebox{\hsize}{!}{\includegraphics[bbllx=9.2cm,bburx=18.2cm,bblly=1.9cm,bbury=16.6cm,clip=,angle=-90]{flare_4+exp.ps}}
\resizebox{\hsize}{!}{\includegraphics[bbllx=9.2cm,bburx=19.6cm,bblly=1.9cm,bbury=16.6cm,clip=,angle=-90]{flare_8+exp.ps}}
\caption{Radio light curves of the recent flare in III~Zw~2 at 1.4, 4.8 and 8.4 GHz. The circles are our VLA observations, and the triangles are from the Michigan monitoring program. The solid lines are our linear rise and decline fits. The dashed lines are the fitted exponential rise and decay.}
\label{flare1}
\end{figure}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics[bbllx=9.2cm,bburx=18.2cm,bblly=1.9cm,bbury=16.6cm,clip=,angle=-90]{flare_15+exp.ps}}
\resizebox{\hsize}{!}{\includegraphics[bbllx=9.2cm,bburx=18.2cm,bblly=1.9cm,bbury=16.6cm,clip=,angle=-90]{flare_22+exp.ps}}
\resizebox{\hsize}{!}{\includegraphics[bbllx=9.2cm,bburx=19.6cm,bblly=1.9cm,bbury=16.6cm,clip=,angle=-90]{flare_37+exp.ps}}
\caption{Radio light curves of the recent flare in III~Zw~2 at 15, 22 and 37/43 GHz. The circles are our VLA observations, and the triangles are from the Michigan monitoring program and the squares are from the Mets\"ahovi monitoring program. The solid lines are our linear rise and decline fits. The dashed lines are the fitted exponential rise and decay.}
\label{flare2}
\end{figure}
The decay at higher frequencies can be fitted much better with an exponential
decay. Thus we also fitted an exponential rise and decay,
\begin{equation}
S(t)=S_0 e^{(t_0-t)/\tau_{r}}, \quad t \le t_0,
\end{equation}
\begin{equation}
S(t)=S_0 e^{(t_0-t)/\tau_{d}}, \quad t \ge t_0,
\end{equation}
to the lightcurves. The fitting parameters are $S_0$, the maximum amplitude
of the flare, and the rise and decay timescales $\tau_r$ and $\tau_d$
(with this sign convention the fitted rise timescales are negative). The
epochs of the flare maximum $t_0$ were taken from the observation with the
highest flux density. The increase of the lightcurves is also well fitted by
an exponential rise, and the exponential decay fits the outburst until a new,
smaller flare starts.
The slopes of the linear rise ($a_{r}$) and decay ($a_{d}$), the
exponential rise and decay timescales $\tau_r$ and $\tau_d$ and the
epoch of the flare maximum $t_0$ are listed in Table~\ref{lighttab} for
all six frequencies.
\begin{table}
\caption[]{Fitting information: The slopes of the linear rise ($a_{r}$) and decay ($a_{d}$), the exponential rise and decay timescales $\tau_r$ and $\tau_d$ and the epoch of the flare maximum $t_0$.}
\label{lighttab}
\[
\begin{tabular}{|c||cc|cc|c|}
\hline
Frequency & $a_{r}$ & $a_{d}$ & $\tau_{r}$ & $\tau_{d}$ & $t_0$ \\
$[$GHz$]$&\multicolumn{2}{c|}{[Jy~yr$^{-1}$]}&\multicolumn{2}{c|}{[yr]}&[yr]\\
\hline
1.4 & 0.021 &-&-3.088&-&- \\
4.8 & 0.20 & -0.14&-1.39& 2.16 & 2000.1\\
8.4 & 0.46 & -0.43&-1.40& 1.26&1999.5 \\
15 & 0.74 & -0.86 &-1.14 &1.01 & 1999.1\\
22 & 1.24& -2.85&-1.04 &0.73 &1998.9 \\
43 & 1.65&-4.73&-0.96 &0.53 & 1998.7\\
\hline
\end{tabular}
\]
\end{table}
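Timescales of this exponential form can be recovered with a simple
log-linear least-squares fit, since $\ln S$ is linear in $t$ on either side
of the peak. The following Python sketch demonstrates the idea on synthetic
data (illustrative values, not our measurements):

```python
import numpy as np

# Synthetic decay: S(t) = S0 * exp((t0 - t)/tau_d) with, for illustration,
# S0 = 1.5 Jy, t0 = 1999.1 and tau_d = 1.0 yr (close to the 15 GHz values).
S0_true, t0, tau_d_true = 1.5, 1999.1, 1.0
t = np.linspace(t0, t0 + 2.0, 25)
S = S0_true * np.exp((t0 - t) / tau_d_true)

# ln S = ln S0 + (t0 - t)/tau_d is linear in t with slope -1/tau_d.
slope, intercept = np.polyfit(t, np.log(S), 1)
tau_d_fit = -1.0 / slope
S0_fit = np.exp(intercept + slope * t0)
print(round(tau_d_fit, 2), round(S0_fit, 2))  # -> 1.0 1.5
```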
The lightcurve reaches its peak first at higher
frequencies. The time-lag between the peak at 43 GHz and the peak at
lower frequencies is shown in Fig.~\ref{time_peak} and is best fit
with a $\delta t \propto \nu^{-1.1}$ power law.
The rise and decay are faster at higher frequencies than at
lower frequencies. This behavior is typical for flares in AGN
(e.g.~\citeNP{TuerlerCourvoisierPaltani1999}).
\begin{figure}
\begin{center}
\includegraphics[width=0.35\textwidth,clip=,angle=-90]{time_peak_2.ps}
\caption{Time-lag between the peak at 43 GHz and the peak at the other
frequencies. The data point at 1.4 GHz is a lower limit.}
\label{time_peak}
\end{center}
\end{figure}
At 4.8 GHz the rise is faster than the decay, while at 15, 22 and
43 GHz the rise is slower than the decay. This is the case for
the linear and the exponential fits to the flare. At 8.4 GHz the linear
rise is slightly slower than the linear decay, while the exponential
rise is slightly faster than the exponential decay. The rise and decay
timescales are plotted as a function of frequency in
Fig.~\ref{t_dr}. The data can be fitted with power laws $\tau_r
\propto \nu^{-0.2}$ and $\tau_d \propto \nu^{-0.7}$. Only the rise
timescale at 1.4 GHz deviates from this power law. However, the
quiescence flux at 1.4 GHz is comparable to the flux density of the
outburst and will affect this data point.
The weak dependence of the rise timescale on frequency indicates that
optical depth effects are not very important during the rise. This is in
contrast to the decay, where optical depth effects are clearly important.
\begin{figure}
\begin{center}
\includegraphics[width=0.35\textwidth,clip=,angle=-90]{t_dr.ps}
\caption{Rise (x) and decay (+) timescales for all frequencies.}
\label{t_dr}
\end{center}
\end{figure}
\citeN{ValtaojaLaehteenmaekiTeraesranta1999} modeled the radio lightcurves
of flares at 22 and 37 GHz in 85 extragalactic radio sources. They fitted an
exponential curve to the rise and the decay and found that in virtually all
flares a good model-fit could be obtained using a constant ratio between decay
and rise timescale of $\tau_{d}=1.3\tau_{r}$, i.e. the rise is faster than
the decay. These flares can be identified with the ejection of new VLBI
components in the jets.
The fact that the lightcurves of III~Zw~2 at 15, 22 and 43 GHz show the
opposite behavior with ratios of $\tau_d / \tau_r \approx 0.6 - 0.9$,
i.e. the decay is faster than the rise, indicates that
different physical processes are involved in this source. However, it
can not be excluded that the flare is composed of two closely spaced
flares.
\subsection{Spectral evolution}
During the outburst the spectrum of the source also showed variations. An
almost simultaneous broadband radio spectrum from 1.4 to 660 GHz during the
increase in flux density in May 1998 was presented by
\citeN{FalckeBowerLobanov1999}. The spectrum was highly inverted at
centimeter wavelengths ($\alpha=+1.9\pm0.1$) with a turnover frequency
around 43 GHz. At frequencies above 43 GHz the spectrum became steep
with a spectral index of $\alpha=-0.75\pm0.15$, i.e. a textbook-like
synchrotron spectrum.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics[bbllx=9.2cm,bburx=18.2cm,bblly=1.9cm,bbury=16.6cm,clip=,angle=-90]{03spec.ps}}
\resizebox{\hsize}{!}{\includegraphics[bbllx=9.2cm,bburx=18.2cm,bblly=1.9cm,bbury=16.6cm,clip=,angle=-90]{06spec.ps}}
\resizebox{\hsize}{!}{\includegraphics[bbllx=9.2cm,bburx=18.2cm,bblly=1.9cm,bbury=16.6cm,clip=,angle=-90]{15spec.ps}}
\resizebox{\hsize}{!}{\includegraphics[bbllx=9.2cm,bburx=19.6cm,bblly=1.9cm,bbury=16.6cm,clip=,angle=-90]{30spec.ps}}
\caption{Spectra of III Zw 2 on 1998 November 04 (top), 1999 February
22 (middle), 1999 November 12 (middle) and 2001 January 30 (bottom)
together with the quiescent spectrum (horizontal line) and the best
fit spectrum.}
\label{spec}
\end{figure}
Our VLA monitoring of the spectral evolution started in September 1998.
Four of the 41 epochs yielded no or bad data due to bad weather or hardware
failure. We fitted the remaining spectra with a broken power-law plus a flat
and constant quiescence spectrum $S_q$,
\begin{equation}
S(\nu)=S_{0} \left(\frac{\nu}{\nu_0}\right)^k\left(1-e^{-\left(\frac{\nu}{\nu_0}\right)^{l-k}}\right)+S_{q},
\end{equation}
where $k$ and $l$ are the spectral indices of the rising and declining
parts of the spectrum. $S_0$ and $\nu_0$ are fitting parameters and are
not exactly equal to the maximum flux density and the peak frequency of
the fitted spectrum.
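A short numerical sketch of this model (with hypothetical parameter values
close to those found below) confirms its asymptotic behavior: for
$\nu \ll \nu_0$ the local slope approaches the optically thick index $k$,
and for $\nu \gg \nu_0$ the optically thin index $l$.

```python
import numpy as np

def spectrum(nu, S0, nu0, k, l, Sq=0.0):
    """Broken power-law spectrum plus a constant quiescent flux."""
    x = nu / nu0
    return S0 * x**k * (1.0 - np.exp(-(x**(l - k)))) + Sq

# Illustrative values: k ~ 2 (optically thick), l ~ -0.75 (optically thin).
S0, nu0, k, l = 1.0, 30.0, 2.0, -0.75

def local_index(nu):
    """Local spectral index alpha = d ln S / d ln nu, estimated numerically."""
    eps = 1e-4
    s1 = spectrum(nu * (1 - eps), S0, nu0, k, l)
    s2 = spectrum(nu * (1 + eps), S0, nu0, k, l)
    return np.log(s2 / s1) / np.log((1 + eps) / (1 - eps))

print(round(local_index(0.1), 2))     # -> 2.0 (optically thick limit)
print(round(local_index(3000.0), 2))  # -> -0.75 (optically thin limit)
```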
We assume a flat spectrum for the quiescence flux, which
is typical for quiescent cores in active galactic nuclei. Since a value of
25 mJy fit all epochs, we adopted it throughout.
The broken power-law approach has the disadvantage that the parameters
become ambiguous if the peak of the spectrum falls beyond the
frequencies covered by the observations.
This was the case in our first three observations in
September, October and November 1998. The spectral
shape in these three epochs was very similar to the May 1998 spectrum:
highly inverted at centimeter wavelengths, flattening towards higher
frequencies, and possibly turning over around 43 GHz, but with higher
flux densities (see Fig.~\ref{spec}, top).
Thus we assumed the spectral index of the declining part of the spectrum to be
$l=-0.75$, the value of the May 1998 observation. This is a reasonable
assumption since the overall spectral shape did not change
significantly between May 1998 and September 1998.
The temporal evolution of the fitting parameters $\nu_0$, $k$ and $l$ can be
seen in Fig.~\ref{turnover} and Fig.~\ref{kandl}. In some epochs we covered
only 5 frequencies from 1.4 to 22 GHz. The absence of
the 43 GHz flux density in these epochs could bias the results of the
spectral fitting. Thus we marked the epochs with only 5 frequencies in
Fig.~\ref{turnover} with triangles while the epochs with 6 frequencies are
indicated by circles. One can see that the fits of both subsets are in good
agreement.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics[bbllx=9.2cm,bburx=19.6cm,bblly=1.9cm,bbury=16.9cm,clip=,angle=-90]{new_turn.ps}}
\caption{Evolution of the ``turnover frequency'' $\nu_0$. The circles
mark epochs with 6 observed frequencies (1.4-43 GHz) and the
triangles epochs with 5 observed frequencies (1.4-22 GHz). In the
last two epochs only 4 frequencies (1.4-15 GHz) were used because of
a new outburst at high frequencies. The cross is the turnover
frequency of the spectrum in May 1998, where the 1-43 GHz data is
taken from \protect\citeN{FalckeBowerLobanov1999}. The asterisks mark
the epochs of our VLBA observations.}
\label{turnover}
\end{figure}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics[bbllx=9.2cm,bburx=19.6cm,bblly=1.9cm,bbury=16.9cm,clip=,angle=-90]{kandl.ps}}
\caption{Evolution of the spectral indices k (circles) and l (squares). The asterisks mark the epochs of our VLBA observations.}
\label{kandl}
\end{figure}
After November 1998 the spectrum underwent a dramatic change (see also
Fig.~\ref{spec}). The turnover frequency $\nu_0$, which had stayed roughly
constant at around 30 GHz from May 1998 until November 1998, dropped to
23 GHz in December 1998. In the following months, the turnover dropped
further until it reached 10 GHz in June 1999. During the next year the
turnover frequency showed only smaller and slower variations and stayed
roughly constant at $\sim 7$~GHz. The drop in turnover frequency to 4
GHz in the first months of 2000 can be explained by the onset of a new
minor flare at high frequencies (see 43 GHz lightcurve in
Fig.~\ref{flare2}). The new flare caused a flattening of the optically
thin part of the spectrum and a shift of the turnover to lower
frequencies. The flattening can also be seen in Fig.~\ref{kandl}, where
the spectral index $l$ changes from $\sim -0.9$ to $\sim -0.1$ during
that time.
The spectral index of the optically thick part of the spectrum, $k$, stays at
$\sim 2$ and slowly flattens towards later times.
A new strong outburst started at high frequencies in January 2001 (see 22 and
43 GHz lightcurves in Fig.~\ref{flare2}) and one would have to fit two
independent broken power-laws to the spectrum. Since one broken power-law
is characterized by four parameters $S_0$, $\nu_0$, $k$ and $l$,
our six data points in each spectrum are not sufficient to model two
independent components with four parameters each. In the first two epochs of
the new flare, only the 22 and 43 GHz data were affected and we fitted the
broken power-law to the remaining four frequencies.
The fast change in peak frequency also implies a strong morphological change,
i.e. a rapid expansion. This prediction was tested by VLBI observations which
are described in the next section.
\subsection{Structural evolution}
\label{vlbi}
\subsubsection{43 GHz Results}
The first three VLBA observations were made during the first phase of the
flare, marked by the increase in flux density and a roughly constant spectral
peak above 30 GHz. The constancy of the peak frequency indicates no structural
change, since the turnover is caused by synchrotron self-absorption
(\citeNP{FalckeBowerLobanov1999}). The source is slightly resolved and
the long baselines show non-zero closure phases, indicating an
asymmetric structure. Two point-like components were fitted to the
uv-data to represent the extent of the source. The source shows no
structural change (see Fig.~\ref{43maps}) and the separation of the two
components during this phase of the flare stayed constant at
$\sim76~\mu$as, corresponding to $\sim 0.11$~pc. The excellent agreement
within 2~$\mu$as between the first three epochs shows the high quality
of the data and the accuracy of the relative astrometry.
After November 1998, the VLA monitoring shows a dramatic change in the
spectrum. The peak frequency dropped quickly to 10 GHz within a few
months (Fig.~\ref{turnover}). In the framework of a simple equipartition
jet model with a $R\propto \nu_{\mathrm{ssa}}^{-1}$ dependence (e.g.,
\citeNP{BlandfordKonigl1979}; \citeNP{FalckeBiermann1995}) one would
expect a rapid expansion. Given the source size of 0.11 pc and the turnover
frequency of 33 GHz in the first, non-expanding phase, one expects a
source size of 0.36 pc once the self-absorption frequency has dropped
to 10 GHz, as indicated by the spectral evolution.
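This prediction follows directly from the $R\propto\nu_{\mathrm{ssa}}^{-1}$
scaling; a one-line check with the numbers from the text:

```python
# Equipartition jet scaling R ~ 1/nu_ssa (Blandford & Konigl 1979;
# Falcke & Biermann 1995): relate sizes at two self-absorption frequencies.
R1_pc, nu1_GHz = 0.11, 33.0   # first, non-expanding phase
nu2_GHz = 10.0                # after the drop in turnover frequency

R2_pc = R1_pc * nu1_GHz / nu2_GHz
print(round(R2_pc, 2))  # -> 0.36, close to the ~0.37 pc measured later
```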
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics[bbllx=3.3cm,bburx=16.2cm,bblly=9.8cm,bbury=27.8cm,clip=,angle=0]{7mm_maps.ps}}
\caption{All six VLBA maps of III~Zw~2 at 43 GHz convolved with a superresolved circular beam of 150 $\mu$as.}
\label{43maps}
\end{figure}
\begin{table}
\caption[]{Flux densities $S$, separation $d$ and position angle P.A. of the
two outermost point-like components of our model-fits to the 43 GHz uv-data.}
\label{7mm_tab}
\[
\begin{tabular}{l|ccccc}
\hline
Date & $S_1$ [Jy] & $S_2$ [Jy] & $S_3$ [Jy]& d [mas] & P.A.\\
\hline
1998/02/16 & 0.93 & 0.58 & -- & 0.075 & $-84^{\circ}$\\
1998/06/13 & 1.03 & 0.65 & -- & 0.077 & $-78^{\circ}$\\
1998/09/14 & 1.60 & 1.27 & -- & 0.077 & $-72^{\circ}$\\
1998/12/12 & 0.86 & 0.86 & -- & 0.106 & $-63^{\circ}$\\
1999/07/15 & 0.56 & 0.26 & 0.08& 0.245 & $-71^{\circ}$\\
1999/11/12 & 0.08 & 0.23 & 0.05 & 0.246 & $-73^{\circ}$\\
\hline
\end{tabular}
\]
\end{table}
Indeed, the fourth VLBA epoch, observed only one month after the start
of the spectral evolution, shows first signs of an expansion. The fifth
epoch shows a dramatic structural change (see Fig.~\ref{43maps}), and a
model of three point-like components is now required to describe the
data. The separation between the outer components is
$\sim245~\mu$as, corresponding to $\sim 0.37$~pc. This is in good agreement with the
expected value of 0.36 pc from the equipartition jet model. The
structure in the sixth epoch is very similar to the fifth epoch but
with lower flux density. This is again expected, since the turnover
frequency stayed around 10 GHz.
The separation of the outer components for all six epochs is plotted in
Fig.~\ref{expand1} (upper panel). For the first three epochs we measure
an upper limit for the expansion speed of 0.04 $c$. The rapid expansion
between the fourth and fifth epoch shows an apparent speed of 1.25
$c$. Between the last two epochs we again detected no expansion, with an
upper limit of 0.04 $c$.
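The apparent speed can be reproduced from the model-fit separations in
Table~\ref{7mm_tab}. A sketch of the conversion, including the standard
$(1+z)$ correction for cosmological time dilation:

```python
import math
from datetime import date

# Separations of the outer components from the 43 GHz model fits.
d1_mas, d2_mas = 0.106, 0.245  # 1998/12/12 and 1999/07/15
dt_yr = (date(1999, 7, 15) - date(1998, 12, 12)).days / 365.25
d_A_pc = 307.4e6               # angular size distance in pc
z = 0.089

# Angular expansion -> projected linear speed.
dtheta_rad = (d2_mas - d1_mas) * 1e-3 * math.pi / (180.0 * 3600.0)
v_pc_per_yr = dtheta_rad * d_A_pc / dt_yr

c_pc_per_yr = 1.0 / 3.2616     # light travels 1 pc in ~3.26 yr
beta_app = v_pc_per_yr / c_pc_per_yr * (1.0 + z)
print(round(beta_app, 2))  # -> 1.25
```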
Unfortunately the source was too weak at 43 GHz to be detected at the
last three epochs.
\subsubsection{15 GHz Results}
At 15 GHz the picture looks completely different. The source is very
compact but slightly resolved in all epochs except the first and we
fitted two point-like components to the uv-data. The flux densities,
separations and position angles of the two components are listed in
Table~\ref{2cm_tab}. The component separation of all epochs is also
plotted in Fig.~\ref{expand1}. One can see a constant expansion with an
apparent expansion speed of $\sim0.6~c$. Simple extrapolation backwards
suggests that the expansion started in May 1996. This is consistent
with the onset of the new flare in the 37 GHz lightcurve in
Fig.~\ref{longterm}. The flare started between two 37 GHz observations
in May and October 1996.
One should note that the component separation of the fifth epoch
deviates from a constant expansion. If one splits the separation into
its north-south and east-west components, the scatter is larger in the
north-south direction. This is expected, since the beam of the VLBA is
elongated in the north-south direction. This relatively large scatter
in the north-south direction also explains the scatter in the position
angles in Table~\ref{2cm_tab}.
\begin{table}
\caption[]{Flux densities $S$, separation $d$ and position angle P.A. of the
two point-like components of our model-fits to the 15 GHz uv-data.}
\label{2cm_tab}
\[
\begin{tabular}{l|cccc}
\hline
Date & $S_1$ [Jy] & $S_2$ [Jy] &d [mas] & P.A.\\
\hline
1998/02/16 & 0.32 & 0.40 & 0.085 & $-54^{\circ}$\\
1998/06/13 & 0.45 & 0.48 & 0.114 & $-79^{\circ}$\\
1998/09/14 & 0.67 & 0.50 & 0.121 & $-70^{\circ}$\\
1998/12/12 & 0.79 & 0.69 & 0.145 & $-59^{\circ}$\\
1999/07/15 & 0.50 & 0.46 & 0.142 & $-87^{\circ}$\\
1999/11/12 & 0.43 & 0.25 & 0.195 & $-68^{\circ}$\\
2000/07/22 & 0.22 & 0.10 & 0.222 & $-61^{\circ}$\\
2000/08/27 & 0.19 & 0.08 & 0.220 & $-68^{\circ}$\\
2000/09/06 & 0.18 & 0.07 & 0.229 & $-63^{\circ}$\\
\hline
\end{tabular}
\]
\end{table}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics[bbllx=9.2cm,bburx=18.2cm,bblly=1.9cm,bbury=16.6cm,clip=,angle=-90]{7mm_expand.ps}}
\resizebox{\hsize}{!}{\includegraphics[bbllx=9.2cm,bburx=19.6cm,bblly=1.9cm,bbury=16.6cm,clip=,angle=-90]{2cm_expand.ps}}
\caption{Component separation at 43 (upper) and 15 (lower) GHz. The apparent expansion speed is 0.6 c at 15 GHz and 1.25 c at 43 GHz.}
\label{expand1}
\end{figure}
\section{Discussion}
The stop-and-go behavior and the apparent contradiction between the 43
GHz and 15 GHz data can be explained by a jet interacting with the
interstellar medium in combination with optical depth effects in an
``inflating-balloon model''.
In this model, the initial phase of the flare can be explained by a
relativistic jet interacting with the interstellar medium or a torus
that creates a shock and gets frustrated. A relativistic shock was
proposed by \citeN{FalckeBowerLobanov1999} due to synchrotron cooling
times of 14-50 days which are much shorter than the duration of the
outburst. The ultra-compact hotspots are pumped up, powered by the jet
and responsible for the increase in flux density. The post-shock
material expands with the maximum sound speed of a magnetized
relativistic plasma of $c_s\approx 0.6$~c.
Since the source is optically thick at 15 GHz, one observes the outside
of the source, i.e. the post-shock material expanding with sound
speed. At 43 GHz, the source is optically thin and one can look inside
the source and see the stationary hotspots.
The rapid expansion at 43 GHz thereafter marked the phase where the
jet broke free and started to propagate relativistically into a
lower-density medium. The expansion then stopped again when the jet hit
another cloud.
The fact that spectral and structural evolution during the outburst are
closely linked demonstrates that we are dealing with a real physical
expansion and not only a phase velocity. The observations described
here produced a large amount of data that are all consistent with a
simple synchrotron self-absorbed jet model.
For the question of the nature of the radio-loud/radio-quiet dichotomy
this means that radio-weak and radio-loud quasars can indeed have
central engines that are in many respects very similar. Their optical
properties are almost indistinguishable and both types of quasars can
produce relativistic jets in their nuclei. The finding of superluminal
motion supports the hypothesis of \citeN{MillerRawlingsSaunders1993} and
\citeN{FalckePatnaikSherwood1996} that RIQs are relativistically
boosted intrinsically radio-weak AGN. Recently, a further relativistic
jet in a radio-quiet quasar was found by~\citeN{BlundellBeasleyBicknell2003}.
However, the nature of the medium interacting with the jet remains
unclear. The outbursts could be explained by a
precessing jet that hits a molecular torus roughly every five years. So
far, no direct evidence for molecular gas in the nucleus of III~Zw~2
has been found.
Some Seyfert galaxies have shown H$_2$O maser emission associated with
the nuclear jet. In these sources, the maser emission results from an
interaction of the jet with a molecular cloud. One example is
the Seyfert II galaxy Mrk\,348 (see
\citeNP{PeckHenkelUlvestad2003}). In this source, the ejection of a new
VLBI component led to a flare of the radio source similar to the
outburst in III~Zw~2. The outburst started with a peak frequency of
$\approx$ 22 GHz which gradually decreased over 20 months. During this
outburst, H$_2$O maser emission was found (\citeNP{FalckeHenkelPeck2000}). Searches for similar water
maser emission in III~Zw~2 with the Effelsberg 100-m telescope yielded
no detection (Henkel, private communication).
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics[bbllx=9.1cm,bburx=20.2cm,bblly=1.7cm,bbury=16.5cm,clip=,angle=-90]{gps_fig.ps}}
\caption{Intrinsic turnover frequency vs. linear size for GPS and CSS sources. The quasars are represented by crosses, and the galaxies by solid squares. Adapted from~\protect\citeN{ODeaBaum1997}. The two circles mark the values for III~Zw~2 before and after the expansion at 43 GHz.}
\label{nu_s}
\end{figure}
In the currently favoured youth model for Compact Steep Spectrum (CSS) and
GHz Peaked Spectrum (GPS) sources, the linear size of a source is related
to the age of the source. The correlation between the turnover frequency and
the projected linear size (e.g.,~\citeNP{ODeaBaum1997}) suggests that the
turnover frequency decreases while the source ages and expands. Therefore
the sources with the highest turnover frequencies represent the youngest
objects. In Fig.~\ref{nu_s} we plot linear size vs. turnover frequency
for GPS and CSS sources. We include III~Zw~2 before ($\nu_{ssa}\approx
33~$GHz; size $\approx$~0.11~pc) and after ($\nu_{ssa}\approx 10~$GHz;
size~$\approx$~0.37~pc) the expansion and the two points lie at the
lower end of the scatter of the linear correlation for GPS/CSS
sources. This could be explained by projection effects. Since III~Zw~2
is a Seyfert 1 galaxy with superluminal motion, the jet is probably
close to the line-of-sight. Hence the true size would be
underestimated, and the points in the plot move to the right. However,
the evolution of III~Zw~2 during the expansion is almost parallel to
the correlation. This implies that the same physical processes,
i.e. synchrotron self-absorption, are involved in III~Zw~2 and in
GPS/CSS sources. In the case of III~Zw~2, the radio source is much
older than the current outburst. Hence, it is possible that
some of the GPS/CSS sources are in fact not young, but only show
intermittent activity.
III~Zw~2 remains an extremely unusual object. Future simultaneous
multi-frequency observations of new outbursts would help to confirm the
proposed scenario of a jet-ISM interaction.
\begin{acknowledgements}
The National Radio Astronomy Observatory is a facility of the National
Science Foundation operated under cooperative agreement by Associated
Universities, Inc. The UMRAO is partially supported by funds from the
National Science Foundation and from the Univ. of Michigan Dept. of
Astronomy. The 100 m telescope at Effelsberg is operated by the
Max-Planck-Institut f\"ur Radioastronomie in Bonn.
\end{acknowledgements}
\section{Introduction}
The engineering of gauge theories in different dimensions by means of branes probing Calabi-Yau (CY) singularities in string and M-theory has received considerable attention. Among its multiple applications, this approach provides a way to construct interesting gauge theories and study their dynamics and dualities, is a framework for local model building \cite{Aldazabal:2000sa,Berenstein:2001nk,Verlinde:2005jr,Buican:2006sn} and is at the heart of the gauge/gravity correspondence \cite{Maldacena:1997re,Gubser:1998bc,Witten:1998qj}.
The well-known connection between CY $(m+2)$-folds and gauge theories on the worldvolume of D$(5-2m)$-branes for $m=0,\ldots,3$ (see e.g. \cite{Morrison:1998cs,Beasley:1999uz,Feng:2000mi,Beasley:2001zp,Feng:2001xr,Feng:2001bn,Feng:2002zw,Wijnholt:2002qz,Benvenuti:2004dy,Franco:2005rj,Benvenuti:2005ja,Franco:2005sm,Butti:2005sw} for the widely studied case of D3-branes on CY 3-folds) can be extended to arbitrary $m$ in terms of the topological B-model. In this context, the open string sector of the B-model on CY $(m+2)$-folds is described by $m$-graded quivers with superpotentials (see \cite{Aspinwall:2008jk,lam2014calabi,Franco:2017lpa,Closset:2018axq} and references therein).
This correspondence is particularly well understood in the case of toric CYs. For $m=1$, {\it brane tilings} (a.k.a. dimer models), significantly simplify the map between CY 3-folds and $4d$ $\mathcal{N}=1$ gauge theories \cite{Hanany:2005ve,Franco:2005rj,Franco:2005sm}. Progress in this area has considerably accelerated in recent years, initially fueled by a desire to develop brane constructions for lower dimensional gauge theories \cite{Franco:2015tna,Franco:2015tya,Franco:2016nwv,Franco:2016qxh,Franco:2016tcm,Franco:2017cjj}. Lately, the scope of these investigations expanded to developing tools for toric CYs of arbitrary dimension. These efforts culminated in \cite{Franco:2019bmx} with the introduction of {\it $m$-dimers}, which fully encode the $m$-graded quivers with superpotentials associated to toric CY $(m+2)$-folds and streamline the connection between quivers and geometry.
The $m$-dimers associated to specific geometries can be determined via a variety of traditional approaches, such as partial resolution and mirror symmetry, which have been extended to general $m$ \cite{Franco:2015tna}. Despite the considerable simplifications brought by $m$-dimers, their determination can sometimes become practically challenging and additional tools are desirable. Examples of such methods include {\it orbifold reduction} \cite{Franco:2016fxm} and {\it $3d$ printing} \cite{Franco:2018qsc} which were originally developed in the context of CY 4-folds but can be applied more broadly \cite{Closset:2018axq}.
In this paper we introduce a substantially more powerful approach, which we denote {\it Calabi-Yau product}. This algorithm starts from the known quiver theories\footnote{Throughout this paper, we will use the term quiver theory to indicate the combination of a quiver and its superpotential.} for a pair of toric CY$_{m+2}$ and CY$_{n+2}$ and produces the quiver theory for a related CY$_{m+n+3}$. In doing so, it enables the computation of quiver theories that were previously out of practical reach.
This paper is organized as follows. \sref{section_graded_quivers} presents a review of $m$-graded quivers. \sref{section_product_geometry} introduces the basics of the CY product, in particular the input data for the construction and how the parent geometries give rise to the product geometry. \sref{section_periodic_quiver} explains how to construct the periodic quiver for the product theory. \sref{section_superpotential} discusses the superpotential. The construction is illustrated in \sref{section_examples} with explicit examples. \sref{section_relation_to_other_constructions} considers the relation between the CY product and other constructions. We conclude and present ideas for future work in \sref{section_conclusions}. Additional details are provided in two appendices.
\section{A Brief Review of $m$-Graded Quiver Theories}
\label{section_graded_quivers}
In order to make our presentation self-contained, in this section we present a brief review of $m$-graded quivers and their dualities. We refer the interested reader to \cite{Franco:2017lpa,Closset:2018axq,Franco:2019bmx} for further details.
Given an integer $m \geq 0$, an $m$-graded quiver is a quiver with a grading for every arrow $\Phi_{ij}$ by a {\it quiver degree}:
\begin{equation}
|\Phi_{ij}| \in \{ 0, 1, \cdots, m\}~.
\end{equation}
Every node $i$ corresponds to a unitary ``gauge group'' $U(N_i)$. Arrows connecting nodes correspond to bifundamental or adjoint ``fields''.
The conjugate of every arrow $\Phi_{ij}$ has the opposite orientation and degree $m-|\Phi_{ij}|$:
\begin{equation}\label{Phi opp intro}
\overline{\Phi}_{ji}^{(m-c)}\equiv \overline{(\Phi_{ij}^{(c)})}~,
\end{equation}
where we use a superindex in parenthesis to explicitly indicate the degree of the corresponding arrow, i.e. $|\Phi_{ij}^{(c)}|=c$.
The integer $m$ determines the possible degrees, i.e. the different types of fields, which can be restricted to the range:
\begin{equation}\label{arrows fields}
\Phi_{ij}^{(c)} \; : i \longrightarrow j~, \qquad c=0, 1, \cdots, n_c-1~, \qquad n_c \equiv \floor{m+2\over 2}~,
\end{equation}
since other degrees can be obtained by conjugation. We refer to degree 0 fields as {\it chiral fields}.
Graded quivers for $m=0,1,2,3$ describe $d=6,4,2,0$ supersymmetric gauge theories with $2^{3-m}$ supercharges, respectively. Different degrees correspond to different types of superfields. These theories can be engineered in terms of Type IIB D$(5-2m)$-branes probing CY $(m+2)$-folds.
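The degree bookkeeping above, i.e. the reduction \eref{arrows fields} and the conjugation rule \eref{Phi opp intro}, can be sketched in a few lines of Python (a toy helper of ours, not part of the formalism):

```python
# Toy helpers (ours) for the degree bookkeeping of an m-graded quiver:
# the independent degrees after reduction by conjugation, and the degree
# of a conjugate arrow.

def allowed_degrees(m):
    """Degrees c = 0, ..., n_c - 1, with n_c = floor((m + 2) / 2)."""
    n_c = (m + 2) // 2
    return list(range(n_c))

def conjugate_degree(m, c):
    """The conjugate of a degree-c arrow has degree m - c."""
    return m - c

# m = 1 (4d N=1): only chirals survive the reduction; m = 3 (0d N=1):
# chirals (c = 0) and degree-1 fields, with degrees 2 and 3 their conjugates.
```
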
\paragraph{Superpotential.}
Graded quivers admit {\it superpotentials}, which are linear combinations of gauge invariant terms of degree $m-1$:
\begin{equation}
W= W(\Phi)~,\qquad\qquad |W|= m-1~.
\label{superpotential_degree}
\end{equation}
Gauge invariant terms correspond to closed oriented cycles in the quiver, which may require conjugation of some of the fields.
\paragraph{Kontsevich bracket condition.}
The superpotential must also satisfy
\begin{equation}
\{W,W \}=0 ~.
\label{superpotential_Kontsevich}
\end{equation}
Here $\{ f, g \}$ denotes the Kontsevich bracket, which is defined as follows
\begin{equation}
\{ f, g \}= \sum_\Phi \left( {\d f\over \d \Phi}{\d g\over \d\overline{\Phi}} +(-1)^{(|f|+1)|\overline{\Phi}|+(|g|+1)|\Phi|+ |\Phi||\overline{\Phi}|+1} {\d f\over \d \overline{\Phi}}{\d g\over \d \Phi} \right)~.
\end{equation}
\subsection{The Toric Case}
\label{section_toric_quivers}
The CY$_{m+2}$ associated to an $m$-graded quiver arises as its {\it classical moduli space} which, generalizing the standard notion for $m\leq 3$, is defined as the center of the Jacobian algebra with respect to fields of degree $m-1$ \cite{Franco:2017lpa}. Namely, it is obtained by imposing the relations:
\begin{equation}
{\partial W \over \partial \Phi^{(m-1)}}=0~, \ \ \ \ \ \forall \, \Phi^{(m-1)}
\label{relations}
\end{equation}
plus gauge invariance. Since the superpotential has degree $m-1$, the terms that contribute to the relations in \eref{relations} are of the general form $\Phi^{(m-1)} J(\Phi^{(0)})$, with $J(\Phi^{(0)})$ a holomorphic function of chiral fields. We will refer to such terms as $J$-terms. The relations \eref{relations} therefore comprise only chiral fields.
\paragraph{Toric superpotential.}
Every toric CY$_{m+2}$ has at least one {\it toric phase}, which is a quiver theory satisfying the following properties. First, the ranks for all nodes can be equal. In addition, the superpotential of a toric phase has a special structure, which is referred to as the {\it toric condition} \cite{Franco:2019bmx}. The toric condition implies that every field of degree $m-1$ appears in exactly two superpotential terms, with opposite signs. Namely,
\begin{equation}
W= \Phi^{(m-1)}_a J_a^+(\Phi^{(0)})- \Phi_a^{(m-1)} J_a^-(\Phi^{(0)}) + \ldots~,
\label{general_toric_W}
\end{equation}
where dots stand for terms that do not contain $\Phi_a^{(m-1)}$. The relations \eref{relations} then take the form:
\begin{equation}
J_a^+(\Phi^{(0)}) = J_a^-(\Phi^{(0)}) ~.
\label{general_toric_J}
\end{equation}
Due to this special structure, toric phases can be encoded in $m$-dimers or, equivalently, by periodic quivers on $\mathbb{T}^{m+1}$ \cite{Franco:2019bmx}.
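As an aside, the toric condition is easy to test mechanically. The following Python sketch (our own toy check, with the conifold quiver for $m=1$ as illustrative input data) verifies that every degree-$(m-1)$ field appears in exactly two superpotential terms, once with each sign:

```python
# Toy check of the toric condition (ours): every field of degree m - 1
# appears in exactly two superpotential terms, once with each sign.
# Example data: the conifold (m = 1), where the degree-(m-1) fields are
# the four chirals and W = A1 B1 A2 B2 - A1 B2 A2 B1.

def satisfies_toric_condition(w_terms, top_fields):
    """w_terms: list of (sign, fields); top_fields: the degree-(m-1) fields."""
    for f in top_fields:
        signs = sorted(s for (s, term) in w_terms if f in term)
        if signs != [-1, 1]:
            return False
    return True

w_terms = [(+1, ["A1", "B1", "A2", "B2"]), (-1, ["A1", "B2", "A2", "B1"])]
chirals = ["A1", "A2", "B1", "B2"]
```
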
\paragraph{Generalized perfect matchings.}
We define a {\it generalized perfect matching}, or perfect matching for short, $p$ as a collection of fields satisfying:
\begin{itemize}
\item[{\bf 1)}] $p$ contains precisely one field from each term in $W$.
\item[{\bf 2)}] For every field $\Phi$ in the quiver, either $\Phi$ or $\bar{\Phi}$ is in $p$.
\end{itemize}
Perfect matchings provide variables that automatically satisfy the relations \eref{general_toric_J}. Therefore, there is a one-to-one correspondence between them and GLSM fields in the toric description of the CY$_{m+2}$. Perfect matchings indeed substantially simplify the determination of the toric diagram (see \cite{Franco:2019bmx} for details).
Since for every field a perfect matching contains either the field or its conjugate, a perfect matching determines a {\it polarization} of the quiver. We define polarization as a choice of orientation for every field in the quiver, i.e. a choice of what we regard as the original field and its conjugate. In what follows, we will adopt a convention for defining the polarization such that, given a perfect matching, we orient the fields in the quiver such that the fields in the perfect matching are the only ones that appear conjugated in the superpotential.\footnote{Notice that while every perfect matching defines a polarization, not every polarization corresponds to a perfect matching. For a quiver with $N_f$ fields, there are $2^{N_f}$ possible polarizations, arising from the two choices of orientation for every field.} This choice of polarization implies that the corresponding perfect matching consists of the conjugates of all the fields in the quiver.
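To make conditions {\bf 1)} and {\bf 2)} concrete, here is a minimal Python sketch of a perfect matching test. The example data is our own choice: the conifold quiver ($m=1$, fields $A_1,A_2: 1\to 2$ and $B_1,B_2: 2\to 1$, with $W = A_1B_1A_2B_2 - A_1B_2A_2B_1$); conjugates are written with a leading tilde:

```python
# Toy perfect-matching test (ours). Data: the conifold quiver (m = 1) with
# fields A1, A2: 1 -> 2 and B1, B2: 2 -> 1, and superpotential
# W = A1 B1 A2 B2 - A1 B2 A2 B1. Conjugates carry a leading "~".

def conj(f):
    return f[1:] if f.startswith("~") else "~" + f

def is_perfect_matching(p, fields, w_terms):
    # 1) p contains precisely one field from each superpotential term
    if any(sum(f in p for f in term) != 1 for term in w_terms):
        return False
    # 2) for every field, either the field or its conjugate is in p
    return all(f in p or conj(f) in p for f in fields)

fields = ["A1", "A2", "B1", "B2"]
w_terms = [["A1", "B1", "A2", "B2"], ["A1", "B2", "A2", "B1"]]

p_good = {"A1", "~A2", "~B1", "~B2"}   # the perfect matching built on A1
p_bad = {"A1", "A2", "~B1", "~B2"}     # picks two fields from the first term
```

For $m=1$ this is in line with the classical dimer notion: the unconjugated fields in $p$ ($\{A_1\}$ here) form a standard perfect matching of the brane tiling.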
\subsection{Dualities}
\label{subsec_dualities}
$m$-graded quivers admit order $(m+1)$ mutations. For $m\leq 3$, they correspond to the dualities of the corresponding gauge theories: no duality for $6d$ $\mathcal{N}=(0,1)$, Seiberg duality for $4d$ $\mathcal{N}=1$ \cite{Seiberg:1994pq}, triality for $2d$ $\mathcal{N}=(0,2)$ \cite{Gadde:2013lxa} and quadrality for $0d$ $\mathcal{N}=1$ \cite{Franco:2016tcm}. Interestingly, these mutations generalize these dualities to $m>3$. We refer the reader to \cite{Franco:2017lpa,Closset:2018axq} for detailed discussions on the transformation of quiver theories under mutations.
\subsection{Generalized anomaly cancellation}
Under a mutation at a node $\star$, its rank transforms as:
\begin{equation}
N'_\star = N_0 - N_\star ~,
\end{equation}
where $N_0$ is the total number of incoming chiral fields. Invariance of the ranks under $m+1$ consecutive mutations of the same node leads to the {\it generalized anomaly cancellation} conditions.
For odd $m$, these conditions are given by:
\begin{equation}
\sum_j N_j \sum_{c=0}^{n_c-1} (-1)^c \left({\cal N}(\Phi_{ji}^{(c)})-{\cal N}(\Phi_{ij}^{(c)})\right)=0~, \qquad \forall i~, \qquad {\rm if}\;\; m \in 2\mathbb{Z}+1~,
\label{anomaly_odd}
\end{equation}
where ${\cal N}(\Phi_{ij}^{(c)})$ denotes the number of arrows from $i$ to $j$ of degree $c$. For every $i$, the sum over $j$ runs over all nodes in the quiver (including $i$), and $n_c$ is given by \eref{arrows fields}.
For even $m$, the conditions become
\begin{equation}
\sum_j N_j \sum_{c=0}^{n_c-1}(-1)^c \left({\cal N}(\Phi_{ji}^{(c)})+{\cal N}(\Phi_{ij}^{(c)})\right)=2N_i~, \qquad \forall i~, \qquad {\rm if}\;\; m \in 2\mathbb{Z}~.
\label{anomaly_even}
\end{equation}
For $m=0,1,2,3$, these conditions reproduce the cancellation of non-abelian anomalies in the corresponding $d=6,4,2,0$ gauge theories.
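For instance, the odd-$m$ condition \eref{anomaly_odd} can be spot-checked on the conifold quiver ($m=1$, two nodes, four chirals); the following sketch is our own toy illustration:

```python
# Toy spot-check (ours) of the odd-m condition on the conifold quiver:
# m = 1, chirals A1, A2: 1 -> 2 and B1, B2: 2 -> 1, equal ranks.

arrows = [(1, 2, 0), (1, 2, 0), (2, 1, 0), (2, 1, 0)]  # (source, target, degree)
ranks = {1: 1, 2: 1}  # normalized, N_i = 1

def anomaly_odd(node):
    """sum_j N_j sum_c (-1)^c (N(Phi_ji) - N(Phi_ij)) at the given node."""
    total = 0
    for (i, j, c) in arrows:
        if j == node:                      # incoming arrow at the node
            total += ranks[i] * (-1) ** c
        if i == node:                      # outgoing arrow from the node
            total -= ranks[j] * (-1) ** c
    return total
```

Both nodes give a vanishing anomaly, as expected for this $4d$ $\mathcal{N}=1$ theory.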
\section{Product of Toric Calabi-Yaus: the Geometry}
\label{section_product_geometry}
In this paper we will introduce the CY product. Before explaining the details of this novel algorithm, let us discuss its main ingredients and basics of the resulting geometry.
\paragraph{Initial data.}
The input for this procedure is given by:
\begin{itemize}
\item An $m$-graded quiver theory $P$ for a toric phase associated with a toric Calabi-Yau $(m+2)$-fold $\mathrm{CY}_{m+2}$. The toric diagram $T_{\mathrm{CY}_{m+2}}$ is an $(m+1)$-dimensional convex polytope consisting of points $u_{i}$. We also pick a perfect matching $p$ of $P$, which corresponds to the point $u_{0}$ of $T_{\mathrm{CY}_{m+2}}$.
\item An $n$-graded quiver theory $Q$ for a toric phase associated with a toric Calabi-Yau $(n+2)$-fold $\mathrm{CY}_{n+2}$. The toric diagram $T_{\mathrm{CY}_{n+2}}$ is an $(n+1)$-dimensional convex polytope consisting of points $v_{i}$ in it. We also pick a perfect matching $q$ of $Q$, which corresponds to the point $v_{0}$ of $T_{\mathrm{CY}_{n+2}}$.
\end{itemize}
\paragraph{The product geometry.}
The output of this algorithm is an $(m+n+1)$-graded quiver theory that we will call $P_{p}\times Q_{q}$. This theory is a toric phase for the $(m+n+3)$-dimensional toric Calabi-Yau $\mathrm{CY}_{m+n+3}$ whose toric diagram $T_{\mathrm{CY}_{m+n+3}}$ is the convex hull of points
\begin{equation}
\{(u_{i},v_{0})|u_{i} \in T_{\mathrm{CY}_{m+2}}\} \cup \{(u_{0},v_{i})|v_{i} \in T_{\mathrm{CY}_{n+2}}\}~.
\label{product_toric_diagram}
\end{equation}
$T_{\mathrm{CY}_{m+n+3}}$ is a lattice polytope in $\mathbb{Z}^{m+n+2}$. In this lattice, $T_{\mathrm{CY}_{m+2}}$ is embedded in the hyperplane spanned by the first $m+1$ coordinates, while $T_{\mathrm{CY}_{n+2}}$ is embedded in the hyperplane spanned by the last $n+1$ coordinates. These two hyperplanes are orthogonal and meet at the single point $(u_{0},v_{0})$. In other words, the final toric diagram $T_{\mathrm{CY}_{m+n+3}}$ is the convex hull of the set of points obtained by ``interlacing'' $T_{\mathrm{CY}_{m+2}}$ and $T_{\mathrm{CY}_{n+2}}$ at the point $(u_0,v_0)$. \fref{examples_product_toric_diagrams} shows two examples of this construction. Higher dimensional examples are straightforward although, obviously, difficult to visualize.
\begin{figure}[ht]
\centering
\includegraphics[width=13.5cm]{examples_product_toric_diagrams.pdf}
\caption{Two examples of the action of the Calabi-Yau product on toric diagrams. The first line is an example of CY$_2\times$CY$_2=$CY$_3$. The second line is CY$_3\times$CY$_2=$CY$_4$.}
\label{examples_product_toric_diagrams}
\end{figure}
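The interlacing in \eref{product_toric_diagram} is straightforward to implement. The following Python sketch builds the point set for a minimal example of our own choosing, $\mathbb{C}^2\times \mathbb{C}^2 = \mathbb{C}^3$, where each CY$_2$ toric diagram is the segment $\{0,1\}$ and $u_0=v_0=0$:

```python
# Toy implementation (ours) of the interlacing of two toric diagrams at the
# chosen perfect-matching points u0 and v0.

def cy_product_points(T_P, u0, T_Q, v0):
    pts = {tuple(u) + tuple(v0) for u in T_P}
    pts |= {tuple(u0) + tuple(v) for v in T_Q}
    return sorted(pts)

# Minimal example: C^2 x C^2 = C^3, with each CY_2 toric diagram the
# segment {(0,), (1,)} and u0 = v0 = (0,).
T_C2 = [(0,), (1,)]
pts = cy_product_points(T_C2, (0,), T_C2, (0,))
# pts is the toric diagram of C^3: [(0, 0), (0, 1), (1, 0)]
```

For larger parents one would feed in the full point sets of $T_{\mathrm{CY}_{m+2}}$ and $T_{\mathrm{CY}_{n+2}}$ and take the convex hull of the result.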
At first sight, the use of the term ``product'' to refer to the operation that acts on the geometry as described above might be slightly confusing. The resulting geometry is {\it not} the product of the two parent CYs. In particular, its dimension is not equal to the sum of the dimensions of the starting CYs. However, we feel that the term captures various aspects of the process and is sufficiently simple to justify its adoption.
It is clear that the product of CYs can very easily produce quiver theories for extremely complicated geometries. Moreover, iterating the process, it becomes straightforward to deal with high dimensional geometries. We will present explicit examples in \sref{section_examples}.
There is substantial freedom in this construction. A desired CY$_{m+n+3}$ can generally be decomposed into CY$_{m+2}$ and CY$_{n+2}$ geometries in multiple ways (even with different values of $m$ and $n$), and there is a choice of toric phase for each of the parent geometries and of perfect matchings for the points $u_0$ and $v_0$. Therefore, generically, the CY product method can generate a large number of quiver theories for a given CY$_{m+n+3}$, reflecting the rich space of theories related by the corresponding order $(m+n+2)$ dualities.
\section{Product of Toric Calabi-Yaus: the Periodic Quiver}
\label{section_periodic_quiver}
Having discussed the connection between the parent and product geometries, we now explain how to construct the periodic quiver for the product. The periodic quiver contains all the information defining the quiver theory, namely not only the quiver but also the superpotential. Having said that, in \sref{section_superpotential} we will present explicit rules for constructing the superpotential directly, without having to read it from the periodic quiver.
The starting point of the construction is the initial data discussed in the previous section. As already mentioned, choosing different toric phases for the two parent geometries and/or using different perfect matchings for the $u_0$ and $v_0$ points can result in different phases for the same product geometry. Similar freedom has been observed in other constructions such as $3d$ printing \cite{Franco:2018qsc} and it is natural to expect such different phases to be related by duality.
As discussed in \sref{section_toric_quivers}, in order to simplify the product construction, given a perfect matching it is convenient to pick the polarization of the quiver in which the perfect matching simply consists of the conjugates of all the fields in the quiver. We will do so here. Using the polarization of $P$ given by $p$ and the polarization of $Q$ given by $q$, we will define a polarization of the periodic quiver for $P_{p}\times Q_{q}$. As we will see later, this polarization in fact corresponds to a perfect matching of the product theory, associated with the point $(u_{0},v_{0})$.
The periodic quiver of the product theory $P_{p}\times Q_{q}$ can be elegantly defined in terms of the action of the product operation on the basic elements of the parent quivers: nodes and fields. Below, we will use the following convention to denote nodes and fields in the different quivers: $i$ and $X$ for $P$, $j$ and $Y$ for $Q$, and $(i,j)$ and $Z$ for $P_{p}\times Q_{q}$. We have three possible products:
\paragraph{Node $\times$ node.}
The product of nodes $i$ of $P$ and $j$ of $Q$ gives rise to a node $(i,j)$ of $P_{p}\times Q_{q}$. This process is illustrated in \fref{node_x_node}.
\begin{figure}[H]
\centering
\includegraphics[width=4.5cm]{node_x_node.pdf}
\caption{Node $\times$ node.}
\label{node_x_node}
\end{figure}
\paragraph{Field $\times$ node.}
The product of a field $\bar{X}^{(c)}_{i_{1}i_{2}}$ of $P$ which is in $p$ with a node $j$ of $Q$ gives rise to a field $\bar{Z}^{(c+n+1)}_{(i_{1},j)(i_{2},j)}$ in $P_{p} \times Q_{q}$. Similarly, the product of a node $i$ of $P$ and a field $\bar{Y}^{(d)}_{j_{1}j_{2}}$ of $Q$ which is in $q$ gives rise to a field $\bar{Z}^{(d+m+1)}_{(i,j_{1})(i,j_{2})}$ in $P_{p}\times Q_{q}$.\footnote{For clarity, we have emphasized that we go over the fields $\bar{X}^{(c)}_{i_{1}i_{2}}$ of $P$ which are in $p$ and the fields $\bar{Y}^{(d)}_{j_{1}j_{2}}$ of $Q$ in $q$. However, given our choice of polarization determined by $p$ and $q$, these are simply the conjugates of {\it all} the fields in $P$ and $Q$.} \fref{node_x_field} represents this operation. The horizontal and vertical directions encode the $\mathbb{T}^{m+1}$ and $\mathbb{T}^{n+1}$ tori, respectively.
\begin{figure}[H]
\centering
\includegraphics[width=15.5cm]{node_x_field.pdf}
\caption{Field $\times$ node.}
\label{node_x_field}
\end{figure}
\paragraph{Field $\times$ field.}
The product of a field $\bar{X}^{(c)}_{i_{1}i_{2}}$ of $P$ in $p$ with a field $\bar{Y}^{(d)}_{j_{1}j_{2}}$ of $Q$ in $q$ gives rise to a field $\bar{Z}_{(i_{1},j_{1})(i_{2},j_{2})}^{(c+d)}$. \fref{field_x_field} represents this operation.
\begin{figure}[H]
\centering
\includegraphics[width=10.2cm]{field_x_field.pdf}
\caption{Field $\times$ field.}
\label{field_x_field}
\end{figure}
Table \ref{table_product} summarizes the product construction. This procedure not only generates the quiver for $P_{p} \times Q_{q}$ but also constructs its periodic quiver. This is because, given an embedding of the periodic quiver of $P$ in $\mathbb{T}^{m+1}$ and of $Q$ in $\mathbb{T}^{n+1}$, these rules result in an embedding of $P_{p} \times Q_{q}$ in $\mathbb{T}^{m+n+2}$.
\begin{table}[ht]
\setstretch{2}
\centering
\begin{tabular}{|c|c|c|}
\hline
\ \ \ \ \ $P$ \ \ \ \ \ & \ \ \ \ \ $Q$ \ \ \ \ \ & \ \ \ $P_{p} \times Q_{q}$ \ \ \ \\
\hline
$i$ & $j$ & $(i,j)$ \\ \hline
$i$ & $\bar{Y}^{(d)}_{j_{1}j_{2}}$ & $\bar{Z}^{(d+m+1)}_{(i,j_{1})(i,j_{2})}$ \\
$\bar{X}^{(c)}_{i_{1}i_{2}}$ & $j$ & $\bar{Z}^{(c+n+1)}_{(i_{1},j)(i_{2},j)}$ \\ \hline
$\bar{X}^{(c)}_{i_{1}i_{2}}$ & $\bar{Y}^{(d)}_{j_{1}j_{2}}$ & $\bar{Z}_{(i_{1},j_{1})(i_{2},j_{2})}^{(c+d)}$ \\ \hline
\end{tabular}
\caption{Summary of the construction of the periodic quiver for $P_{p} \times Q_{q}$.}
\label{table_product}
\end{table}
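The degree assignments of Table~\ref{table_product} can be summarized in a short sketch (ours, for illustration); all product degrees land in the range $0,\ldots,m+n+1$ appropriate for an $(m+n+1)$-graded quiver:

```python
# Degree rules of the product (a sketch of ours): products involving barred
# (perfect-matching) fields of parents with grades m and n. Barred degrees c
# run over 0, ..., m, since any field may appear conjugated.

def field_x_node(c, n):
    """Xbar^(c) in p, times a node of Q  ->  Zbar^(c + n + 1)."""
    return c + n + 1

def node_x_field(d, m):
    """A node of P, times Ybar^(d) in q  ->  Zbar^(d + m + 1)."""
    return d + m + 1

def field_x_field(c, d):
    """Xbar^(c) in p, times Ybar^(d) in q  ->  Zbar^(c + d)."""
    return c + d
```
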
For the sake of completeness we also describe the conjugates of the fields we have written above. Their origin can be understood as follows:
\begin{itemize}
\item
The conjugate of $\bar{Z}_{(i,j_{1})(i,j_{2})}^{(d+m+1)}$ is $Z^{(n-d)}_{(i,j_{2})(i,j_{1})}$. It arises from the product between the node $i$ and the field $Y_{j_{2}j_{1}}^{(n-d)}$ which is not in $q$.
\item
The conjugate of $\bar{Z}^{(c+n+1)}_{(i_{1},j)(i_{2},j)}$ is $Z_{(i_{2},j)(i_{1},j)}^{(m-c)}$. It comes from the product between $X^{(m-c)}_{i_{2}i_{1}}$ which is not in $p$ and node $j$.
\item
The conjugate of $\bar{Z}_{(i_{1},j_{1})(i_{2},j_{2})}^{(c+d)}$ is $Z_{(i_{2},j_{2})(i_{1},j_{1})}^{(m+n+1-c-d)}$. It comes from the product between $X^{(m-c)}_{i_{2}i_{1}}$ and $Y^{(n-d)}_{j_{2}j_{1}}$.
\end{itemize}
It is important to note that at the end of this process there is no field that comes from the product of an $\bar{X}^{(c)}_{i_{1}i_{2}} \in p$ and a $Y_{j_{2}j_{1}}^{(d)} \notin q$ or vice versa. This makes the choice of $p$ and $q$ central to this construction.
\subsection{Anomaly Cancellation}
Let us begin by checking the consistency of the CY product construction we have just introduced. In this section we will show that if $P$ and $Q$ satisfy the corresponding anomaly cancellation conditions, then so does $P_{p} \times Q_{q}$. We assume that the ranks of all nodes are equal to $N$ and normalize the anomaly by this number. We first enumerate all the fields that are charged under a given node $(i,j)$ of $P_{p} \times Q_{q}$ and consider their contributions to the anomaly. These fields are given by:
\begin{enumerate}
\item
Product of incoming fields at $i$ in $P$ with node $j$ of $Q$.
\begin{enumerate}
\item
If $\bar{X}^{(c)}_{i^{\prime}i}\in p$, then it gives rise to a field $\bar{Z}^{(c+n+1)}_{(i^{\prime},j)(i,j)}$ incoming at $(i,j)$ which contributes $(-1)^{c+n+1}$ to the anomaly.
\item
If $X^{(c)}_{i^{\prime}i} \notin p$, then it gives rise to a field $Z^{(c)}_{(i^{\prime},j)(i,j)}$ incoming at $(i,j)$ which contributes $(-1)^{c}$ to the anomaly.
\end{enumerate}
\item
Product of incoming field at $j$ in $Q$ with node $i$ of $P$.
\begin{enumerate}
\item
If $\bar{Y}^{(d)}_{j^{\prime}j} \in q$, then it gives rise to a field $\bar{Z}^{(d+m+1)}_{(i,j^{\prime})(i,j)}$ incoming at $(i,j)$ which contributes $(-1)^{d+m+1}$ to the anomaly.
\item
If $Y^{(d)}_{j^{\prime}j} \notin q$, then it gives rise to a field $Z^{(d)}_{(i,j^{\prime})(i,j)}$ incoming at $(i,j)$ which contributes $(-1)^{d}$ to the anomaly.
\end{enumerate}
\item Product of a field $\bar{X}^{(c)}_{i^{\prime} i }$ that is in $p$ with a field $\bar{Y}^{(d)}_{j^{\prime}j}$ that is in $q$. This gives rise to the incoming field $\bar{Z}_{(i^{\prime},j^{\prime})(i,j)}^{(c+d)}$ which contributes $(-1)^{c+d}$ to the anomaly. This is just the product of the contribution to the anomaly at $i$ of the incoming field $\bar{X}^{(c)}_{i^{\prime} i}$ and the contribution to the anomaly at $j$ of the incoming field $\bar{Y}^{(d)}_{j^{\prime}j}$.
\item
Product of an outgoing field $\bar{X}^{(c)}_{i i^{\prime}}$ at $i$ that is in $p$ with an outgoing field $\bar{Y}^{(d)}_{jj^{\prime}}$ at $j$ that is in $q$. This gives rise to the outgoing field $\bar{Z}_{(i,j)(i^{\prime},j^{\prime})}^{(c+d)}$ at $(i,j)$. Its conjugate contributes $(-1)^{m+n+1-c-d}$ to the anomaly. This is {\it minus} the product of the contributions to the anomaly at $i$ of the incoming field $X^{(m-c)}_{i^{\prime}i}$ and the contribution to the anomaly at $j$ of the incoming field $Y^{(n-d)}_{j^{\prime}j}$.
\end{enumerate}
Adding all these contributions, the anomaly at node $(i,j)$ becomes
\begin{equation}
A = a_{\cancel{p}} + (-1)^{n+1}a_{p} + b_{\cancel{q}} + (-1)^{m+1}b_{q} + a_{p}b_{q} - a_{\cancel{p}}b_{\cancel{q}} ~,
\end{equation}
where $a_{p}$ is the contribution to the anomaly by incoming fields at $i$ which are in $p$ and $a_{\cancel{p}}$ is the contribution to the anomaly by incoming fields that are not in $p$. Similarly, $b_{q}$ is the contribution to the anomaly at node $j$ by incoming fields that are in $q$, while $b_{\cancel{q}}$ is the contribution from the fields that are not in $q$.
At this point we distinguish three cases depending on the parity of $m$ and $n$.
\paragraph{Odd $m$ and $n$.}
In this case $A$ becomes
\begin{equation}
A= a_{\cancel{p}} + a_{p} + b_{\cancel{q}} + b_{q} + a_{p}b_{q} - a_{\cancel{p}}b_{\cancel{q}} ~.
\end{equation}
For odd $m$ and $n$, the anomaly cancellation conditions for $i$ in $P$ and $j$ in $Q$ respectively are
\begin{align}
a_{\cancel{p}} = -a_{p} && b_{\cancel{q}} = -b_{q}
\end{align}
Plugging these back into the expression for $A$ results in $A = 0$, which is the anomaly cancellation condition, since $m+n+1$ is odd.
\paragraph{Even $m$ and even $n$.}
In this case $A$ becomes
\begin{equation}
A = a_{\cancel{p}} - a_{p} + b_{\cancel{q}} - b_{q} + a_{p}b_{q} - a_{\cancel{p}}b_{\cancel{q}} ~.
\end{equation}
The anomaly cancellation conditions for $i$ and $j$ respectively are
\begin{align}
a_{\cancel{p}} = 2-a_{p} && b_{\cancel{q}} = 2-b_{q}
\end{align}
Plugging these back also results in $A = 0$, which is again the anomaly cancellation condition since $m+n+1$ is odd in this case, too.
\paragraph{Odd $m$ and even $n$.}
Lastly, in this case
\begin{equation}
A= a_{\cancel{p}} - a_{p} + b_{\cancel{q}} + b_{q} + a_{p}b_{q} - a_{\cancel{p}}b_{\cancel{q}} ~.
\end{equation}
The anomaly cancellation conditions at $i$ and $j$ are
\begin{align}
a_{\cancel{p}} = -a_{p} && b_{\cancel{q}} = 2-b_{q}
\end{align}
which gives $A = 2$, i.e. the anomaly cancellation condition is satisfied since $m+n+1$ is even for this case.
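The three parity cases can also be verified numerically. The following sketch (our own check, not code from the construction) evaluates the general expression for $A$ with the parent anomaly conditions imposed:

```python
# Toy numerical check (ours): the product-node anomaly
#   A = a_np + (-1)^(n+1) a_p + b_nq + (-1)^(m+1) b_q + a_p b_q - a_np b_nq
# reduces to the required value once the parent conditions are imposed.

def product_anomaly(a_p, a_np, b_q, b_nq, m, n):
    return (a_np + (-1) ** (n + 1) * a_p + b_nq + (-1) ** (m + 1) * b_q
            + a_p * b_q - a_np * b_nq)

# odd m, odd n:   a_np = -a_p,   b_nq = -b_q   ->  A = 0
# even m, even n: a_np = 2-a_p,  b_nq = 2-b_q  ->  A = 0
# odd m, even n:  a_np = -a_p,   b_nq = 2-b_q  ->  A = 2
```

In the mixed case $A=2$, matching the even-grade condition \eref{anomaly_even} with unit normalized ranks.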
\section{Superpotential}
\label{section_superpotential}
The construction introduced in \sref{section_periodic_quiver} produces the periodic quiver for $P_{p} \times Q_{q}$, from which, in principle, its superpotential can be read off. In general, this can be rather challenging. Therefore, in this section we introduce explicit rules for the direct construction of the superpotential.
The superpotential of the product theory takes the general form
\begin{equation}
W = \mathcal{W}_P + \mathcal{W}_Q + \mathcal{W}_C + \mathcal{W}_{PQ} ~.
\end{equation}
$\mathcal{W}_P$ and $\mathcal{W}_Q$ descend from the superpotentials of $P$ and $Q$, respectively. $\mathcal{W}_C$ consists of new cubic interactions. Finally, $\mathcal{W}_{PQ}$ depends on the superpotentials of both $P$ and $Q$. We now describe each of them in detail.
\paragraph{$\mathcal{W}_P$: terms descending from the superpotential of $P$.}
Let us consider a single term $T_{P}$ in the superpotential $W_P$ of the parent theory $P$. It has the general form
\begin{equation}
T_{P} = X^{(c_{1})}_{i_{1}i_{2}}X^{(c_{2})}_{i_{2}i_{3}}\cdots X^{(c_{k-1})}_{i_{k-1}i_{k}}\bar{X}^{(c_{k})}_{i_{k}i_{1}} ~,
\label{pterm_product_potential}
\end{equation}
where $\sum_{n}c_{n} = m-1$ due to the degree constraint. Our convention for the polarization makes the perfect matching $p$ manifest: every term in $W_P$ contains a single conjugated field, which belongs to $p$. Furthermore, we order the fields in every term such that the field in $p$ occurs last.
Every term $T_P$ gives rise to various terms in $\mathcal{W}_P$, as we now discuss. First, some of these terms correspond to the product between the fields in this term and a node $j$ of $Q$. They take the form
\begin{equation}
\sum_{j\in J}Z^{(c_{1})}_{(i_{1},j)(i_{2},j)}Z^{(c_{2})}_{(i_{2},j)(i_{3},j)}\cdots Z^{(c_{k-1})}_{(i_{k-1},j)(i_{k},j)}\bar{Z}^{(c_{k}+n+1)}_{(i_{k},j)(i_{1},j)} ~,
\label{potential_w_i_p_product}
\end{equation}
where the sum is over the set $J$ of nodes $j$ of $Q$. After this operation, the degree of the superpotential changes by $n+1$ and becomes $m+n$, as required for the superpotential of an $(m+n+1)$-graded quiver.
The additional terms descending from $T_P$ are constructed as follows. We first pick a field $X^{(c)}_{i^{\prime}i}$ from those in $T_{P}$. Since this field does not appear conjugated, it is obviously not contained in $p$. We also pick a field $Y_{j^{\prime}j}^{(d)}$ that is not in $q$. We then replace $X^{(c)}_{i^{\prime}i}$ in $T_{P}$ by its product with $Y_{j^{\prime}j}^{(d)}$, i.e. by $Z^{(c+d+1)}_{(i^{\prime},j^{\prime})(i,j)}$. This operation increases the degree by $d+1$. We also replace $\bar{X}^{(c_{k})}_{i_{k}i_{1}}$ by its product with $\bar{Y}_{j j^{\prime}}^{(n-d)}$, i.e. by $\bar{Z}^{(c_{k} + n-d)}_{(i_{k},j)(i_{1},j^{\prime})}$. This changes the degree by $n-d$. Finally, we simply replace the remaining fields in $T_{P}$ by their products with the appropriate nodes of $Q$, which does not change the degrees since these fields are not in $p$. When combined, all these replacements change the degree of the superpotential term by $n+1$, as desired. Explicitly, these terms are
\begin{align}
\sum_{\bar{Y}_{j j^{\prime}}^{(n-d)} \in q}\bigg[&Z^{(c_{1}+d+1)}_{(i_{1},j^{\prime})(i_{2},j)}Z^{(c_{2})}_{(i_{2},j)(i_{3},j)}Z^{(c_{3})}_{(i_{3},j)(i_{4},j)}\cdots Z^{(c_{k-1})}_{(i_{k-1},j)(i_{k},j)}\bar{Z}^{(c_{k}+n-d)}_{(i_{k},j)(i_{1},j^{\prime})} \nonumber\\
&\,\, + (-1)^{c_{1}}Z^{(c_{1})}_{(i_{1},j^{\prime})(i_{2},j^{\prime})}Z^{(c_{2}+d+1)}_{(i_{2},j^{\prime})(i_{3},j)}Z^{(c_{3})}_{(i_{3},j)(i_{4},j)}\cdots Z^{(c_{k-1})}_{(i_{k-1},j)(i_{k},j)}\bar{Z}^{(c_{k}+n-d)}_{(i_{k},j)(i_{1},j^{\prime})} +\cdots \nonumber \\
&\,\,\,+ (-1)^{c_{1}+\cdots +c_{k-2}}Z^{(c_{1})}_{(i_{1},j^{\prime})(i_{2},j^{\prime})}Z^{(c_{2})}_{(i_{2},j^{\prime})(i_{3},j^{\prime})}Z^{(c_{3})}_{(i_{3},j^{\prime})(i_{4},j^{\prime})}\cdots Z^{(c_{k-1}+d+1)}_{(i_{k-1},j^{\prime})(i_{k},j)}\bar{Z}^{(c_{k}+n-d)}_{(i_{k},j)(i_{1},j^{\prime})}\bigg]
\label{potential_w_y_p_product}
\end{align}
To obtain $\mathcal{W}_P$, we repeat this process for all the terms in $W_P$. In addition to the signs written above, we must include the signs with which the parent superpotential terms enter $W_P$.
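The degree bookkeeping behind these replacements is easy to verify with a short script (a sketch with hypothetical gradings $m=3$, $n=2$ and an illustrative degree assignment for $T_P$):

```python
m, n = 3, 2                # hypothetical gradings of P and Q
cs = [0, 1, 0, 1]          # degrees c_1, ..., c_k of the fields in T_P
assert sum(cs) == m - 1    # degree constraint on an m-graded superpotential term

for d in range(n + 1):     # degree of the chosen field Y not in q
    # one factor gains d+1, the last factor gains n-d, the rest are unchanged;
    # the total is the same wherever the Z^{(c+d+1)} factor sits in the term
    new_degrees = [cs[0] + d + 1] + cs[1:-1] + [cs[-1] + (n - d)]
    # every descendant term has degree m+n, as required
    # for an (m+n+1)-graded superpotential
    assert sum(new_degrees) == m + n
```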
\paragraph{$\mathcal{W}_Q$: terms descending from the superpotential of $Q$.}
These terms are determined by the same procedure, after the exchange $(P,p) \leftrightarrow (Q,q)$. Let us present the final result. Every term $T_{Q}$ in the superpotential $W_Q$ of $Q$ is of the form:
\begin{equation}
T_{Q} = Y^{(d_{1})}_{j_{1}j_{2}}Y^{(d_{2})}_{j_{2}j_{3}}\cdots Y^{(d_{l-1})}_{j_{l-1}j_{l}}\bar{Y}^{(d_{l})}_{j_{l}j_{1}} ~.
\label{qterm_product_potential}
\end{equation}
As before, $T_{Q}$ gives rise to superpotential terms of two types, analogous to \eref{potential_w_i_p_product} and \eref{potential_w_y_p_product}. The first set of terms is
\begin{equation}
\sum_{i\in I}Z^{(d_{1})}_{(i,j_{1})(i,j_{2})}Z^{(d_{2})}_{(i,j_{2})(i,j_{3})}\cdots Z^{(d_{l-1})}_{(i,j_{l-1})(i,j_{l})}\bar{Z}^{(d_{l}+m+1)}_{(i,j_{l})(i,j_{1})} ~,
\label{potential_w_j_q_product}
\end{equation}
with $I$ the set of nodes of $P$.
The second set of terms is
\begin{align}
\sum_{\bar{X}_{i i^{\prime}}^{(m-c)} \in p}\bigg[&Z^{(c+d_{1}+1)}_{(i^{\prime},j_{1})(i,j_{2})}Z^{(d_{2})}_{(i,j_{2})(i,j_{3})}Z^{(d_{3})}_{(i,j_{3})(i,j_{4})}\cdots Z^{(d_{l-1})}_{(i,j_{l-1})(i,j_{l})}\bar{Z}^{(m-c + d_{l})}_{(i,j_{l})(i^{\prime},j_{1})} \nonumber\\
&\,\, + (-1)^{d_{1}}Z^{(d_{1})}_{(i^{\prime},j_{1})(i^{\prime},j_{2})}Z^{(c+d_{2}+1)}_{(i^{\prime},j_{2})(i,j_{3})}Z^{(d_{3})}_{(i,j_{3})(i,j_{4})}\cdots Z^{(d_{l-1})}_{(i,j_{l-1})(i,j_{l})}\bar{Z}^{(m-c +d_{l})}_{(i,j_{l})(i^{\prime},j_{1})}+\cdots+ \nonumber \\
&\,\,\, +(-1)^{d_{1}+\cdots+d_{l-2}}Z^{(d_{1})}_{(i^{\prime},j_{1})(i^{\prime},j_{2})}Z^{(d_{2})}_{(i^{\prime},j_{2})(i^{\prime},j_{3})}Z^{(d_{3})}_{(i^{\prime},j_{3})(i^{\prime},j_{4})}\cdots Z^{(m-c+d_{l-1})}_{(i^{\prime},j_{l-1})(i,j_{l})}\bar{Z}^{(m-c +d_{l})}_{(i,j_{l})(i^{\prime},j_{1})}\bigg] \label{potential_w_x_q_product}
\end{align}
Repeating this process for all the terms in $W_Q$, we obtain $\mathcal{W}_Q$. Once again, we need to include the signs of the parent terms in $W_Q$.
\paragraph{$\mathcal{W}_C$: new cubic interactions.}
This part of the superpotential consists of new cubic interactions. For every pair of fields $\bar{X}^{(c)}_{i_1 i_2}\in p$ and $\bar{Y}^{(d)}_{j_1 j_2}\in q$ we have a pair of cubic terms
\begin{equation}
(-1)^{c+d}\left[Z^{(n-d)}_{(i_2,j_2)(i_2,j_1)}Z^{(m-c)}_{(i_2,j_1)(i_1,j_1)}\bar{Z}^{(c+d)}_{(i_1,j_1)(i_2,j_2)} -Z^{(m-c)}_{(i_2,j_2)(i_1,j_2)}Z^{(n-d)}_{(i_1,j_2)(i_1,j_1)}\bar{Z}^{(c+d)}_{(i_1,j_1)(i_2,j_2)}\right]
\label{cubic_terms}
\end{equation}
where the fields involved are descendants of $\bar{X}^{(c)}_{i_1 i_2}$ and $\bar{Y}^{(d)}_{j_1 j_2}$, or of their conjugates, via the rules in Table \ref{table_product}. Namely,
\begin{equation}
Z^{(m-c)}_{(i_2,j_1)(i_1,j_1)} = X^{(m-c)}_{i_2 i_1} \times j_1 \ \ , \ \ Z^{(n-d)}_{(i_1,j_2)(i_1,j_1)} = i_1 \times Y^{(n-d)}_{j_2 j_1} \ \ , \ \ \bar{Z}^{(c+d)}_{(i_1,j_1)(i_2,j_2)} = \bar{X}^{(c)}_{i_1 i_2} \times \bar{Y}^{(d)}_{j_1 j_2} ~.
\end{equation}
$\mathcal{W}_C$ is the sum of \eref{cubic_terms} over all the pairs of $\bar{X}^{(c)}_{i_1 i_2}$ and $\bar{Y}^{(d)}_{j_1 j_2}$.
\paragraph{$\mathcal{W}_{PQ}$: mixed terms.}
\label{mixed_terms}
The last part of the superpotential involves contributions coming from $P$ and $Q$. A term $T_{P}$ in the superpotential of $P$ and a term $T_{Q}$ in the superpotential of $Q$ give rise to a number of terms in the superpotential of the product theory. $\mathcal{W}_{PQ}$ is the sum of all such terms. To describe them, let us first consider the special case in which both $T_{P}$ and $T_{Q}$ are cubic terms, i.e.
\begin{equation}
T_{P} = X_{i_{1}i_{2}}^{(c_{1})}X_{i_{2}i_{3}}^{(c_{2})}\bar{X}^{(m-1-c_{1}-c_{2})}_{i_{3}i_{1}} \ \ \ , \ \ \ T_{Q} = Y_{j_{1}j_{2}}^{(d_{1})}Y_{j_{2}j_{3}}^{(d_{2})}\bar{Y}_{j_{3}j_{1}}^{(n-1-d_{1}-d_{2})} ~.
\label{cubic_TP_and_TQ}
\end{equation}
In this case, they give rise to a single term that involves the pairwise product of fields,\footnote{It is useful to reflect on why we obtain a single term. First of all, we defined the polarizations of the parent theories such that every term in their superpotentials contains a single conjugated field. In addition, following the rules introduced in \sref{section_periodic_quiver}, we cannot multiply unbarred and barred fields. As a result, there are not multiple possibilities associated to cyclic permutations of the fields in \eref{cubic_TP_and_TQ}.} i.e.
\begin{equation}
(-1)^{m+n+c_{2}+d_{2}}Z_{(i_{1},j_{1}),(i_{2},j_{2})}^{(c_{1}+d_{1}+1)}Z^{(c_{2}+d_{2}+1)}_{(i_{2},j_{2}),(i_{3},j_{3})}\bar{Z}^{(n+m-2-c_{1}-c_{2}-d_{1}-d_{2})}_{(i_{3},j_{3}),(i_{1},j_{1})} ~.
\end{equation}
If $T_{P}$ and/or $T_{Q}$ are of order greater than 3, no such simple terms can be written. The reason is that the pairwise product of fields is only possible if the two terms have the same order, and the resulting terms have the correct degree, i.e. $m+n$, if and only if $T_{P}$ and $T_{Q}$ are cubic.\footnote{It is interesting to compare this to the B-model computation of the superpotential: cubic terms are special in that they correspond to $m_{2}$ of the $A_{\infty}$ algebra, which is composition of maps, while higher order terms correspond to higher $m_{k}$, which are more involved.}
One way of addressing this issue is to turn $T_{P}$ and $T_{Q}$ into a sum of cubic terms and mass terms, by integrating in auxiliary massive fields. Then we can construct $\mathcal{W}_{PQ}$ as described above, consisting exclusively of terms descending from the cubic terms. The final quiver and superpotential can then be obtained by integrating out the massive fields.
Naively, it might seem that this procedure dramatically changes our construction. A massive field in $P$ gives rise to one descendant for every field or node of $Q$ and vice versa. Nevertheless, it can be verified that all these descendants are massive, resulting in the same quiver we would have obtained without integrating in massive fields. Therefore, we can use the rule for cubic terms above as the starting point to efficiently compute the rules for higher order terms. The result is that there are $\binom{k-1}{2}\binom{l-1}{2}$ terms in $\mathcal{W}_{PQ}$ descending from terms $T_{P}$ of order $k$ and $T_{Q}$ of order $l$. All these terms are of order $k+l-3$. We provide a thorough discussion of these terms and the first few steps of this iteration in Appendix \ref{mixed_potential_terms_appendix}.
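The counting quoted above is simple to tabulate (a minimal sketch; the function name `n_mixed_terms` is ours, not one from the text):

```python
from math import comb

def n_mixed_terms(k, l):
    """Number of W_PQ terms descending from T_P of order k and T_Q of order l."""
    return comb(k - 1, 2) * comb(l - 1, 2)

# cubic x cubic reproduces the single pairwise-product term discussed above,
# of order k + l - 3 = 3
assert n_mixed_terms(3, 3) == 1
print(n_mixed_terms(4, 3), n_mixed_terms(4, 4))  # 3 9
```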
\paragraph{The geometry of the product theory.}
It is relatively straightforward, yet quite laborious, to show that the desired geometry \eref{product_toric_diagram} arises as the classical moduli space of the $P_{p} \times Q_{q}$ theory we have constructed.\footnote{The notion of moduli space has been extended to general $m$ in \cite{Franco:2017lpa}.} We present the proof in Appendix \ref{section_products_geometry}.
\subsection{Kontsevich Bracket}
As another consistency check of our construction, let us verify that the superpotential we have written satisfies $\{W,W\} = 0$, where
\begin{equation}
\{W,W\} = 2\sum_{\bar{Z}^{(b)}_{(i,j)(i^{\prime},j^{\prime})}} \pdv{W}{Z^{(b)}_{(i^{\prime},j^{\prime})(i,j)}}\pdv{W}{\bar{Z}^{(m+n+1-b)}_{(i,j)(i^{\prime},j^{\prime})}} ~.
\end{equation}
To do this, we divide $\{W,W\}$ into eight pieces,
\begin{equation}
\{W,W\} = 2(KB_{P} + KB_{Q} + KB_{PC} + KB_{QC} + KB_{PQ} + KB_{PQP} + KB_{PQQ} + KB_{PQC}) ~,
\end{equation}
each of which vanishes individually.
$KB_{P} = \frac{1}{2}\{\mathcal{W}_P,\mathcal{W}_P\}$ is the contribution that arises exclusively due to $\mathcal{W}_P$. Explicitly, its nontrivial terms are
\begin{equation}
KB_{P} = \sum_{j\in J}\sum_{\bar{X}^{(c)}_{i_{1}i_{2}} \in p}\pdv{\mathcal{W}_P}{Z^{(m-c)}_{(i_{2},j)(i_{1},j)}}\pdv{\mathcal{W}_P}{\bar{Z}_{(i_{1},j)(i_{2},j)}^{(c+n+1)}} + \sum_{\bar{Y}_{j_{1}j_{2}}^{(d)}\in q}\sum_{\bar{X}^{(c)}_{i_{1}i_{2}} \in p}\pdv{\mathcal{W}_P}{Z^{(m+n+1-c-d)}_{(i_{2},j_{2})(i_{1},j_{1})}}\pdv{\mathcal{W}_P}{\bar{Z}_{(i_{1},j_{1})(i_{2},j_{2})}^{(c+d)}} ~.
\end{equation}
It is straightforward to show that $KB_{P}$ vanishes if the superpotential $W_P$ of $P$ satisfies $\{W_P,W_P\} = 0$. The reason is that the terms in $KB_{P}$ descend from the terms of $\{W_P,W_P\}$ in a manner that is analogous to how terms in $\mathcal{W}_P$ descend from terms in $W_P$ and the signs in \eref{potential_w_y_p_product} are such that the required cancellations still occur.
Similarly, $KB_{Q} = \frac{1}{2}\{\mathcal{W}_Q,\mathcal{W}_Q\}$ is
\begin{equation}
KB_{Q} = \sum_{i\in I}\sum_{\bar{Y}^{(d)}_{j_{1}j_{2}} \in q}\pdv{\mathcal{W}_Q}{Z^{(n-d)}_{(i,j_{2})(i,j_{1})}}\pdv{\mathcal{W}_Q}{\bar{Z}_{(i,j_{1})(i,j_{2})}^{(d+m+1)}} + \sum_{\bar{X}_{i_{1}i_{2}}^{(c)}\in p}\sum_{\bar{Y}^{(d)}_{j_{1}j_{2}} \in q}\pdv{\mathcal{W}_Q}{Z^{(m+n+1-c-d)}_{(i_{2},j_{2})(i_{1},j_{1})}}\pdv{\mathcal{W}_Q}{\bar{Z}_{(i_{1},j_{1})(i_{2},j_{2})}^{(c+d)}}~,
\end{equation}
and it vanishes if the superpotential $W_Q$ of $Q$ satisfies $\{W_Q,W_Q\} = 0$.
$KB_{PC}$ and $KB_{QC}$ involve the Kontsevich bracket between $\mathcal{W}_P$ and $\mathcal{W}_Q$ with $\mathcal{W}_C$. Explicitly, $KB_{PC} = \frac{1}{2}(\{\mathcal{W}_P,\mathcal{W}_C\}+ \{\mathcal{W}_C,\mathcal{W}_P\}) $ and $KB_{QC} =\frac{1}{2}( \{\mathcal{W}_Q,\mathcal{W}_C\}+ \{\mathcal{W}_C,\mathcal{W}_Q\})$. They reduce to
\begin{align}
KB_{PC} &= \sum_{j\in J}\sum_{\bar{X}^{(c)}_{i_{1}i_{2}} \in p}\pdv{\mathcal{W}_C}{Z^{(m-c)}_{(i_{2},j)(i_{1},j)}}\pdv{\mathcal{W}_P}{\bar{Z}_{(i_{1},j)(i_{2},j)}^{(c+n+1)}} + \sum_{\bar{Y}_{j_{1}j_{2}}^{(d)}\in q}\sum_{\bar{X}^{(c)}_{i_{1}i_{2}} \in p}\pdv{\mathcal{W}_P}{Z^{(m+n+1-c-d)}_{(i_{2},j_{2})(i_{1},j_{1})}}\pdv{\mathcal{W}_C}{\bar{Z}_{(i_{1},j_{1})(i_{2},j_{2})}^{(c+d)}}\nonumber \\
KB_{QC} &= \sum_{i \in I}\sum_{\bar{Y}^{(d)}_{j_{1}j_{2}} \in q}\pdv{\mathcal{W}_C}{Z^{(n-d)}_{(i,j_{2})(i,j_{1})}}\pdv{\mathcal{W}_Q}{\bar{Z}_{(i,j_{1})(i,j_{2})}^{(d+m+1)}} + \sum_{\bar{X}_{i_{1}i_{2}}^{(c)}\in p}\sum_{\bar{Y}^{(d)}_{j_{1}j_{2}} \in q}\pdv{\mathcal{W}_Q}{Z^{(m+n+1-c-d)}_{(i_{2},j_{2})(i_{1},j_{1})}}\pdv{\mathcal{W}_C}{\bar{Z}_{(i_{1},j_{1})(i_{2},j_{2})}^{(c+d)}} ~.
\end{align}
Both $KB_{PC}$ and $KB_{QC}$ vanish independently of any conditions on $W_P$ and $W_Q$. This can be verified directly using the explicit form of $\mathcal{W}_C$.
Let us now consider $KB_{PQ} = \frac{1}{2}\{\mathcal{W}_{PQ},\mathcal{W}_{PQ}\}$. Its non-trivial part is
\begin{equation}
KB_{PQ} = \sum_{\bar{X}^{(c)}_{i_{1}i_{2}}\in p}\sum_{\bar{Y}^{(d)}_{j_{1}j_{2}} \in q}\pdv{\mathcal{W}_{PQ}}{Z^{(m+n+1-c-d)}_{(i_{2},j_{2})(i_{1},j_{1})}}\pdv{\mathcal{W}_{PQ}}{\bar{Z}_{(i_{1},j_{1})(i_{2},j_{2})}^{(c+d)}}~.
\end{equation}
First, let us consider the case in which $W_P$ and $W_Q$ are cubic, since in this case $\mathcal{W}_{PQ}$ comes purely from the pairwise product of fields, as explained earlier. Then both $\{W_P,W_P\}$ and $\{W_Q,W_Q\}$ are entirely quartic, and a term in $KB_{PQ}$ comes from the pairwise product of fields from a term in $\{W_P,W_P\}$ and a term in $\{W_Q,W_Q\}$. As a result, $KB_{PQ}$ vanishes.
To show that $KB_{PQ}$ vanishes even when $W_P$ and $W_Q$ are not cubic, we can rewrite $W_P$ and $W_Q$ as sums of cubic terms and mass terms by appropriately integrating in massive fields and using the argument above. There is an added subtlety: after integrating in these massive fields, $\{W_P,W_P\}$ and $\{W_Q,W_Q\}$ vanish only after using the equations of motion for massive fields. This is enough for our purposes, and it can be shown that $KB_{PQ}$ vanishes once we integrate out massive fields from the product theory.
All the remaining contributions, $KB_{PQP},KB_{PQQ}$ and $KB_{PQC}$, involve $\mathcal{W}_{PQ}$ and therefore it is convenient to express $W_P$ and $W_Q$ as a sum of cubic terms and mass terms. Explicitly, they are
\begin{align}
KB_{PQP} &= \frac{1}{2}\left(\{\mathcal{W}_{PQ},\mathcal{W}_P\} + \{\mathcal{W}_P,\mathcal{W}_{PQ}\}\right) \nonumber\\
KB_{PQQ} &= \frac{1}{2}\left(\{\mathcal{W}_{PQ},\mathcal{W}_Q\} + \{\mathcal{W}_Q,\mathcal{W}_{PQ}\}\right) \nonumber \\
KB_{PQC} &= \frac{1}{2}(\{\mathcal{W}_P,\mathcal{W}_Q\} + \{\mathcal{W}_Q,\mathcal{W}_P\} + \{\mathcal{W}_{PQ},\mathcal{W}_C\} + \{\mathcal{W}_C,\mathcal{W}_{PQ}\})
\end{align}
A lengthy but straightforward bookkeeping calculation shows that all of these contributions vanish up to the equations of motion for massive fields. $KB_{PQP}$ vanishes as a result of $\{W_P,W_P\}=0$, while the vanishing of $KB_{PQQ}$ follows from $\{W_Q,W_Q\}=0$. Lastly, $KB_{PQC}$ vanishes independently of any restriction on $W_P$ and $W_Q$.
\subsection{Toric Condition}
To conclude our discussion of the superpotential, we now show that if $P$ and $Q$ satisfy the toric condition, then so does $P_{p} \times Q_{q}$. We show this by considering the different ways in which a field of degree $m+n$ can arise in the superpotential of $P_{p} \times Q_{q}$. It is useful to note that all such terms must come from $\mathcal{W}_P$, $\mathcal{W}_Q$ and $\mathcal{W}_C$, but not from $\mathcal{W}_{PQ}$. As explained in Appendix \ref{mixed_potential_terms_appendix}, every term in $\mathcal{W}_{PQ}$ contains two fields coming from the product of a field not in $p$ and a field not in $q$. The degrees of such fields are greater than or equal to 1, so none of these terms can contain a degree $m+n$ field. The different scenarios are:
\begin{itemize}
\item A field of degree $m-1$, $\bar{X}^{(m-1)}_{i_{1}i_{2}}\in p$. Its product with a node $j$ of $Q$ gives rise to a field $\bar{Z}^{(m+n)}_{(i_{1},j)(i_{2},j)}$ of degree $m+n$. This field only appears in $\mathcal{W}_P$, in the form shown in \eref{potential_w_i_p_product}. Therefore, if $\bar{X}^{(m-1)}_{i_{1}i_{2}}$ participates in two terms with opposite signs, then so does $\bar{Z}^{(m+n)}_{(i_{1},j)(i_{2},j)}$. Similarly, if there is a field $\bar{Y}_{j_{1}j_{2}}^{(n-1)}\in q$, its product with a node $i$ of $P$ gives rise to $\bar{Z}^{(m+n)}_{(i,j_{1})(i,j_{2})}$. It only participates in $\mathcal{W}_Q$, as shown in \eref{potential_w_j_q_product}, namely in two terms with opposite signs.
\item The product of a conjugate chiral $\bar{X}^{(m)}_{i_{1}i_{2}}\in p$ and a conjugate chiral field $\bar{Y}^{(n)}_{j_{1}j_{2}}\in q$ gives rise to a field $\bar{Z}^{(m+n)}_{(i_{1},j_{1})(i_{2},j_{2})}$ of degree $m+n$. Since conjugate chiral fields do not appear in the superpotential, $\bar{Z}^{(m+n)}_{(i_{1},j_{1})(i_{2},j_{2})}$ does not appear in $\mathcal{W}_P$ or $\mathcal{W}_Q$. It only appears in two terms of $\mathcal{W}_C$ with opposite sign as shown in \eref{cubic_terms}.
\item The product of a field $X^{(m-1)}_{i_{1}i_{2}}\notin p$ and a conjugate chiral field $Y_{j_{1}j_{2}}^{(n)}\notin q$ gives a field $Z^{(m+n)}_{(i_{1},j_{1})(i_{2},j_{2})}$ of degree $m+n$. Since $X^{(m-1)}_{i_{1}i_{2}}$ appears in two terms with opposite signs in $W_P$, $Z^{(m+n)}_{(i_{1},j_{1})(i_{2},j_{2})}$ appears in two terms of the final superpotential with opposite signs. These terms arise as described by \eref{potential_w_y_p_product}. Since $Y_{j_{1}j_{2}}^{(n)}$ is a conjugate chiral, it does not appear in $W_Q$, which implies that $Z^{(m+n)}_{(i_{1},j_{1})(i_{2},j_{2})}$ does not appear in $\mathcal{W}_Q$. It does not appear in $\mathcal{W}_C$, either.
Similarly, the product of a conjugate chiral field $X^{(m)}_{i_{1}i_{2}}\notin p$ and $Y_{j_{1}j_{2}}^{(n-1)}\notin q$ gives rise to $Z^{(m+n)}_{(i_{1},j_{1})(i_{2},j_{2})}$, which only appears in two terms with opposite signs. These terms are in $\mathcal{W}_Q$, specifically among those described in \eref{potential_w_x_q_product}.
\end{itemize}
The discussion above covers all the fields of degree $m+n$. We conclude that the product between an $m$-graded toric phase $P$ and an $n$-graded toric phase $Q$ using arbitrary perfect matchings is an $(m+n+1)$-graded toric phase.
\section{Examples}
\label{section_examples}
In this section we illustrate the product construction with two explicit examples. The first theory we will construct is the well-known phase 2 of $F_{0}$ \cite{Feng:2002zw}.\footnote{By phase 2, we mean the phase whose quiver is shown in \eref{F0_2_quiver}. Various papers label the two phases of $F_0$ in different ways.} The second example is a product of the conifold quiver theory with itself, which results in a $0d$ $\mathcal{N}=1$ matrix model. While, to our knowledge, this is the first time the second theory appears in the literature, our primary goal is to demonstrate the simplicity of this procedure.
\subsection{$F_{0}$}
\label{subsec_F0}
Let us consider the complex cone over $F_{0}$, a toric CY 3-fold, which we refer to as $F_0$ for short. The $m=1$, i.e. $4d$ $\mathcal{N}=1$, quiver theories for this geometry have been extensively studied in the literature (see e.g. \cite{Feng:2002zw}). The toric diagram for $F_0$ can be constructed as the product of two copies of $\mathbb{C}^{2}/\mathbb{Z}_{2}$ using one of the two perfect matchings for the central point in each case, as illustrated in \fref{F_0_product_example}.
The $m=0$, i.e. $6d$ $\mathcal{N}=(1,0)$, quiver theory of the parent $\mathbb{C}^{2}/\mathbb{Z}_{2}$ geometry consists of two $U(N)$ gauge groups with two hypermultiplets stretching between them, as shown in \fref{c2_z2_periodic_quiver}.
\begin{figure}[H]
\centering
\begin{tikzpicture}[scale=2.5,decoration={markings,mark=at position 0.75 with {\arrow{latex}}}]
\tikzstyle{every node}=[circle,thick,fill=yellow2,draw,minimum size = 1em , inner sep=1pt,font=\scriptsize]
\draw (0,0) node(a){$0$};
\draw (1,0) node(b){$1$};
\draw (2,0) node(c){$0$};
\draw [ very thick ](a) -- (b);
\draw [ very thick ](b) -- (c);
\end{tikzpicture}
\caption{The periodic quiver for $\mathbb{C}^{2}/\mathbb{Z}_{2}$}
\label{c2_z2_periodic_quiver}
\end{figure}
This theory has 4 perfect matchings, which correspond to the 4 ways in which we can orient the 2 hypermultiplets. Two of them correspond to the two endpoints of the toric diagram (shown on the left of \fref{F_0_product_example}), while the other 2 correspond to the central point. As a result, we have 2 perfect matching choices for the central point of each of the $\mathbb{C}^{2}/\mathbb{Z}_{2}$ factors. But the 2 central perfect matchings are conjugates of each other, and as a result any choice of perfect matchings gives the same theory up to chiral conjugation.\footnote{We note that $m=0$ is the only case for which the conjugates of the fields in a perfect matching also form a perfect matching. This is only possible because there is no superpotential.}
\begin{figure}[H]
\centering
\begin{tikzpicture}[scale=1.25]
\tikzstyle{every node}=[circle,thick,fill=black,draw,minimum size = 0.5em , inner sep=1pt,font=\scriptsize]
\draw (0,0) node[fill = cyan](a){};
\draw (1,0) node(b){};
\draw (-1,0) node(e){};
\draw [black](e) -- (a) -- (b);
\draw (1.5,0) node[cross=8pt,red, line width = 0.75mm]{};
\draw (2,0) node[fill = cyan](a){};
\draw (2,1) node(b){};
\draw (2,-1) node(e){};
\draw [black](e) -- (a) -- (b);
\draw [blue2 , -{Triangle[angle = 90:3.5mm]},line width = 2.5mm] (2.8,0)--(3.2,0);
\draw (5,0) node[fill = cyan](a){};
\draw (6,0) node(b){};
\draw (4,0) node(c){};
\draw (5,1) node(d){};
\draw (5,-1) node(e){};
\draw (b)--(d)--(c)--(e)--(b);
\end{tikzpicture}
\caption{The toric diagram of $F_{0}$ can be obtained as the product of two copies of the toric diagram of $\mathbb{C}^{2}/\mathbb{Z}_{2}$. In both cases we use the central point of the toric diagram to take the product.}
\label{F_0_product_example}
\end{figure}
The product of the periodic quivers is presented in \fref{F_0_product_quiver}. The first step shows the two parent $6d$ $\mathcal{N}=(1,0)$ quivers. The arrows are oriented to indicate the choice of perfect matchings. The second step shows the nodes of $F_{0}$ that arise from the product of nodes in the parent theories. In the third step, we add vertical fields (which come from the product of a node in the first parent and a field in the second one) and horizontal fields (which come from the product of a field in the first parent and a node in the second one). The last step adds the diagonal fields that arise from the product of a field in the first parent with a field in the second one.
\begin{figure}[H]
\centering
\begin{tikzpicture}[scale = 1.5]
\tikzstyle{every node}=[circle,thick,fill=yellow2,draw,inner sep=1pt,font=\scriptsize]
\draw (0,0) node(a){$0$};
\draw (1,0) node(b){$1$};
\draw (2,0) node(c){$0$};
\draw [very thick,-latex] (a) -- (b);
\draw [very thick,-latex] (c) -- (b);
\draw (2.5,0) node[cross=6pt,red,ultra thick]{};
\draw (3,-1) node(a){$0$};
\draw (3,0) node(b){$1$};
\draw (3,1) node(c){$0$};
\draw [very thick,-latex] (a) -- (b);
\draw [very thick,-latex] (c) -- (b);
\draw [blue2 , -{Triangle[angle = 90:3.5mm]},line width = 2.5mm] (3.8,0)--(4.2,0);
\draw (5,-1) node(a){$\scalebox{0.5}{(0,0)}$};
\draw (5,0) node(b){$\scalebox{0.5}{(0,1)}$};
\draw (5,1) node(c){$\scalebox{0.5}{(0,0)}$};
\draw (6,-1) node(d){$\scalebox{0.5}{(1,0)}$};
\draw (6,0) node(e){$\scalebox{0.5}{(1,1)}$};
\draw (6,1) node(f){$\scalebox{0.5}{(1,0)}$};
\draw (7,-1) node(g){$\scalebox{0.5}{(0,0)}$};
\draw (7,0) node(h){$\scalebox{0.5}{(0,1)}$};
\draw (7,1) node(i){$\scalebox{0.5}{(0,0)}$};
\draw [blue2 , -{Triangle[angle = 90:3.5mm]},line width = 2.5mm] (6,-1.8)--(6,-2.2);
\begin{scope}[shift={(0,-4)}]
\draw (5,-1) node(a){$\scalebox{0.5}{(0,0)}$};
\draw (5,0) node(b){$\scalebox{0.5}{(0,1)}$};
\draw (5,1) node(c){$\scalebox{0.5}{(0,0)}$};
\draw (6,-1) node(d){$\scalebox{0.5}{(1,0)}$};
\draw (6,0) node(e){$\scalebox{0.5}{(1,1)}$};
\draw (6,1) node(f){$\scalebox{0.5}{(1,0)}$};
\draw (7,-1) node(g){$\scalebox{0.5}{(0,0)}$};
\draw (7,0) node(h){$\scalebox{0.5}{(0,1)}$};
\draw (7,1) node(i){$\scalebox{0.5}{(0,0)}$};
\draw [very thick,-latex] (a) -- (d);
\draw [very thick,-latex] (g) -- (d);
\draw [very thick,-latex] (b) -- (e);
\draw [very thick,-latex] (h) -- (e);
\draw [very thick,-latex] (c) -- (f);
\draw [very thick,-latex] (i) -- (f);
\draw [very thick,-latex] (a) -- (b);
\draw [very thick,-latex] (c) -- (b);
\draw [very thick,-latex] (d) -- (e);
\draw [very thick,-latex] (f) -- (e);
\draw [very thick,-latex] (g) -- (h);
\draw [very thick,-latex] (i) -- (h);
\end{scope}
\draw [blue2 , -{Triangle[angle = 90:3.5mm]},line width = 2.5mm] (4.2,-4)--(3.8,-4);
\begin{scope}[shift={(-4,-4)}]
\draw (5,-1) node(a){$\scalebox{0.5}{(0,0)}$};
\draw (5,0) node(b){$\scalebox{0.5}{(0,1)}$};
\draw (5,1) node(c){$\scalebox{0.5}{(0,0)}$};
\draw (6,-1) node(d){$\scalebox{0.5}{(1,0)}$};
\draw (6,0) node(e){$\scalebox{0.5}{(1,1)}$};
\draw (6,1) node(f){$\scalebox{0.5}{(1,0)}$};
\draw (7,-1) node(g){$\scalebox{0.5}{(0,0)}$};
\draw (7,0) node(h){$\scalebox{0.5}{(0,1)}$};
\draw (7,1) node(i){$\scalebox{0.5}{(0,0)}$};
\draw [very thick,-latex] (a) -- (d);
\draw [very thick,-latex] (g) -- (d);
\draw [very thick,-latex] (b) -- (e);
\draw [very thick,-latex] (h) -- (e);
\draw [very thick,-latex] (c) -- (f);
\draw [very thick,-latex] (i) -- (f);
\draw [very thick,-latex] (a) -- (b);
\draw [very thick,-latex] (c) -- (b);
\draw [very thick,-latex] (d) -- (e);
\draw [very thick,-latex] (f) -- (e);
\draw [very thick,-latex] (g) -- (h);
\draw [very thick,-latex] (i) -- (h);
\draw [very thick,-latex] (e) -- (a);
\draw [very thick,-latex] (e) -- (c);
\draw [very thick,-latex] (e) -- (g);
\draw [very thick,-latex] (e) -- (i);
\end{scope}
\end{tikzpicture}
\caption{A product of periodic quivers resulting in phase 2 of $F_{0}$. }
\label{F_0_product_quiver}
\end{figure}
The result is phase 2 of $F_{0}$ \cite{Feng:2002zw}. Since in this case the parent theories do not have superpotentials, the final superpotential consists only of the new cubic terms that arise in the product. These terms can be straightforwardly read off from the minimal plaquettes of the quiver.
For completeness, in \fref{F0_2_quiver} we show the standard quiver for this theory. Its superpotential is
\begin{align}
W &= X^{+}_{(0,0)(0,1)}X^{+}_{(0,1)(1,1)}X^{--}_{(1,1)(0,0)} - X^{+}_{(0,0)(1,0)}X^{+}_{(1,0)(1,1)}X^{--}_{(1,1)(0,0)} \nonumber \\
& + X^{+}_{(0,0)(1,0)}X^{-}_{(1,0)(1,1)}X^{-+}_{(1,1)(0,0)} - X^{-}_{(0,0)(0,1)}X^{+}_{(0,1)(1,1)}X^{-+}_{(1,1)(0,0)} \nonumber \\
& + X^{-}_{(0,0)(1,0)}X^{+}_{(1,0)(1,1)}X^{+-}_{(1,1)(0,0)} - X^{+}_{(0,0)(0,1)}X^{-}_{(0,1)(1,1)}X^{+-}_{(1,1)(0,0)} \nonumber \\
& + X^{-}_{(0,0)(0,1)}X^{-}_{(0,1)(1,1)}X^{++}_{(1,1)(0,0)} - X^{-}_{(0,0)(1,0)}X^{-}_{(1,0)(1,1)}X^{++}_{(1,1)(0,0)}
\end{align}
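The toric condition can be verified directly on this superpotential: every field appears in exactly two terms with opposite signs. A minimal sketch, with ad hoc string labels for the fields:

```python
from collections import defaultdict

# the eight terms of W above, as (sign, [field labels]);
# e.g. "X+_00,01" is shorthand for X^{+}_{(0,0)(0,1)}
terms = [
    (+1, ["X+_00,01", "X+_01,11", "X--_11,00"]),
    (-1, ["X+_00,10", "X+_10,11", "X--_11,00"]),
    (+1, ["X+_00,10", "X-_10,11", "X-+_11,00"]),
    (-1, ["X-_00,01", "X+_01,11", "X-+_11,00"]),
    (+1, ["X-_00,10", "X+_10,11", "X+-_11,00"]),
    (-1, ["X+_00,01", "X-_01,11", "X+-_11,00"]),
    (+1, ["X-_00,01", "X-_01,11", "X++_11,00"]),
    (-1, ["X-_00,10", "X-_10,11", "X++_11,00"]),
]

signs = defaultdict(list)
for s, fields in terms:
    for f in fields:
        signs[f].append(s)

# toric condition: each of the 12 fields enters two terms with opposite signs
assert len(signs) == 12
assert all(sorted(v) == [-1, 1] for v in signs.values())
```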
\begin{figure}[H]
\centering
\begin{tikzpicture}[scale=2.5,decoration={markings,mark=at position 0.75 with {\arrow{latex}}}]
\tikzstyle{every node}=[circle,thick,fill=yellow2,draw,minimum size = 0.5em , inner sep=1pt,font=\scriptsize]
\draw (0,0) node(a){$\scalebox{0.75}{(0,0)}$};
\draw (1,0) node(b){$\scalebox{0.75}{(1,0)}$};
\draw (0,1) node(c){$\scalebox{0.75}{(0,1)}$};
\draw (1,1) node(d){$\scalebox{0.75}{(1,1)}$};
\draw [very thick, postaction = {decorate}](a) -- node[rectangle , minimum size = 0.6em , pos = 0.25, draw = none , fill = white , inner sep = 0, font = \tiny]{$2$} (b);
\draw [very thick, postaction = {decorate}](a) -- node[rectangle , minimum size = 0.6em , pos = 0.25, draw = none , fill = white , inner sep = 0, font = \tiny]{$2$} (c);
\draw [very thick, postaction = {decorate}](b) -- node[rectangle , minimum size = 0.6em , pos = 0.25, draw = none , fill = white , inner sep = 0, font = \tiny]{$2$} (d);
\draw [very thick, postaction = {decorate}](c) -- node[rectangle , minimum size = 0.6em , pos = 0.25, draw = none , fill = white , inner sep = 0, font = \tiny]{$2$} (d);
\draw [very thick, postaction = {decorate}](d) -- node[rectangle , minimum size = 0.6em , pos = 0.25, draw = none , fill = white , inner sep = 0, font = \tiny]{$4$} (a);
\end{tikzpicture}
\caption{The quiver for phase 2 of $F_{0}$}
\label{F0_2_quiver}
\end{figure}
\paragraph{An Infinite Family: $F^{(m)}_{0}$.}
The process discussed above can be continued inductively to get an infinite family of toric CY $(m+2)$-folds indexed by $m$. The toric diagram for $F^{(m)}_{0}$ is
\begin{align}
\begin{array}{c}
(0,\ldots,0) \\
(\pm 1,0,\ldots, 0) \\
\vdots \\
(0,\ldots,0,\pm1)
\end{array}
\end{align}
This family was first introduced in \cite{Closset:2018axq}, where the corresponding quiver theories were also constructed.
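The toric diagram above is easy to generate programmatically (a sketch; the function name is ours):

```python
def f0m_toric_diagram(m):
    """Points of the F_0^(m) toric diagram: the origin plus +-e_i in m+1 dimensions."""
    dim = m + 1
    points = [tuple([0] * dim)]
    for i in range(dim):
        for s in (1, -1):
            p = [0] * dim
            p[i] = s
            points.append(tuple(p))
    return points

# m = 1 reproduces the F_0 toric diagram: the central point plus 4 outer points
assert len(f0m_toric_diagram(1)) == 5
```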
Roughly speaking, the periodic quiver for $F_{0}^{(m)}$ corresponds to
\begin{align}
\left(\mathord{\begin{tikzpicture}[baseline = -0.65ex,scale = 2]
\tikzstyle{every node}=[circle,thick,fill=yellow2,draw,inner sep=1.25pt,font=\scriptsize]
\draw (0,0) node(a){0};
\draw (1,0) node(b){1};
\draw (2,0) node(c){0};
\draw [very thick,-latex] (a) -- (b);
\draw [very thick,-latex] (c) -- (b);
\end{tikzpicture}}\right)^{m+1}
\end{align}
This is of course not a complete description except for $m=1$ because, at every step, to construct a periodic quiver for $F^{(n)}_{0}$ we need to choose a perfect matching for $F^{(n-1)}_{0}$. This freedom hints at the existence of multiple phases of $F_{0}^{(m)}$ for $m > 1$ and it is natural to expect that different choices of perfect matching lead to different phases related by the dualities discussed in \sref{subsec_dualities}.\footnote{For example, $F_0^{(2)}$ is also known as $Q^{1,1,1}/\mathbb{Z}_2$. This theory has 14 toric phases, which were classified in \cite{Franco:2018qsc}.}
The quiver theory $Q^{(m)}$ of one particular phase of $F^{(m)}_{0}$ can be constructed inductively as follows
\begin{align}
Q^{(0)} = \mathord{\begin{tikzpicture}[baseline = -0.65ex,scale = 1.75]
\tikzstyle{every node}=[circle,thick,fill=yellow2,draw,inner sep=1.25pt,font=\scriptsize]
\draw (0,0) node(a){0};
\draw (1,0) node(b){1};
\draw (2,0) node(c){0};
\draw [very thick] (a) -- (b);
\draw [very thick] (c) -- (b);
\end{tikzpicture}} & \ \ \ \ &p^{(0)} = \mathord{\begin{tikzpicture}[baseline = -0.65ex,scale = 1.75]
\tikzstyle{every node}=[circle,thick,fill=yellow2,draw,inner sep=1.25pt,font=\scriptsize]
\draw (0,0) node(a){0};
\draw (1,0) node(b){1};
\draw (2,0) node(c){0};
\draw [very thick,-latex] (a) -- (b);
\draw [very thick,-latex] (c) -- (b);
\end{tikzpicture}}\nonumber \\
Q^{(m+1)} = Q^{(m)}_{p^{(m)}} \times Q^{(0)}_{p^{(0)}} & & p^{(m+1)} = p^{(m)} \times p^{(0)} \label{fom_product_construction}
\end{align}
where we use the product perfect matching $p \times q$ of $P_{p}\times Q_{q}$ as defined in Appendix \ref{section_products_geometry}. This phase of $F_{0}^{(m)}$ was discussed at length in \cite{Closset:2018axq,Franco:2019bmx}, to which we refer for details.
\subsection{Conifold $\times$ Conifold}
The conifold is one of the most thoroughly studied toric CY 3-folds. Its toric diagram is shown in \fref{toric_diagram_quiver_conifold}. The corresponding gauge theory was constructed in the seminal work \cite{Klebanov:1998hh}. It consists of two $U(N)$ gauge groups and four bifundamental chiral fields $X_{01}$, $\tilde{X}_{01}$, $X_{10}$, and $\tilde{X}_{10}$, as shown in \fref{toric_diagram_quiver_conifold}. The superpotential is
\begin{equation}
W_{con} = X_{01}X_{10}\tilde{X}_{01}\tilde{X}_{10} - \tilde{X}_{01}X_{10}X_{01}\tilde{X}_{10} \label{W_conifold}
\end{equation}
\begin{figure}[ht]
\centering
\includegraphics[width=10cm]{toric_diagram_quiver_conifold.pdf}
\caption{a) Toric diagram and b) quiver for the conifold.}
\label{toric_diagram_quiver_conifold}
\end{figure}
This theory has $4$ perfect matchings, each of which consists of a single chiral field and corresponds to a corner of the toric diagram. Given the symmetry between the perfect matchings, the result is independent of which perfect matching we use for the product, up to relabeling. We will therefore drop the reference to the perfect matching and refer to this theory as conifold$\times$conifold. Without loss of generality, we choose the toric diagrams of the two conifolds to coincide at the origin. The conifold$\times$conifold is therefore a toric CY 5-fold with toric diagram
\begin{align}
& (0,0,0,0) && (1,0,0,0) && (0,1,0,0) && (1,1,0,0)\nonumber \\ &(0,0,0,1) \nonumber \\ & (0,0,1,0) \nonumber\\ & (0,0,1,1)
\end{align}
where we have indicated the two conifold factors as the row and column. Table \ref{conifold_conifold_field_table} summarizes the nodes and fields in the product $0d$ $\mathcal{N}=1$ matrix model.\footnote{See e.g. \cite{Franco:2016tcm} for the basics of $0d$ $\mathcal{N}=1$ gauge theories.} The corresponding quiver is shown in \fref{conifold_conifold_quiver}.
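The list of lattice points above can be reproduced with a short script. The sketch below follows the convention just stated (each factor embedded in an orthogonal coordinate subspace, with the two origins identified); the function name is illustrative and not from any standard package.

```python
# Sketch: toric diagram of a product CY, assuming the convention in the
# text: the two factor diagrams are embedded in orthogonal coordinate
# subspaces and glued at a chosen point of each (here both origins).

def product_toric_diagram(points_p, points_q, glue_p=(0, 0), glue_q=(0, 0)):
    """Union of the two embedded diagrams, with glue_p and glue_q
    identified at the origin of the product lattice."""
    dim_p, dim_q = len(glue_p), len(glue_q)
    embedded = set()
    for p in points_p:
        shifted = tuple(a - b for a, b in zip(p, glue_p))
        embedded.add(shifted + (0,) * dim_q)
    for q in points_q:
        shifted = tuple(a - b for a, b in zip(q, glue_q))
        embedded.add((0,) * dim_p + shifted)
    return sorted(embedded)

# Toric diagram of the conifold: the unit square.
conifold = [(0, 0), (1, 0), (0, 1), (1, 1)]

points = product_toric_diagram(conifold, conifold)
print(len(points))  # prints 7: the two sets of 4 corners share one origin
```

The seven points returned are exactly the ones listed above.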
\begin{table}[ht]
\setstretch{2}
\centering
\begin{tabular}{|c|cccccc|}
\hline
& 0 & 1 & $X_{01}^{(0)}$ & $\tilde{X}_{01}^{(0)}$ & $X_{10}^{(0)}$ & $\bar{\tilde{X}}_{01}^{(1)}$ \\
\hline
0 & $(0,0) \sim 0$ & $(0,1) \sim 1$ & $Z_{01}^{(0)}$ & $\tilde{Z}_{01}^{(0)}$ & $Z_{10}^{(0)}$ & $\bar{\Lambda}_{01} ^{(1)}$\\
1 & $(1,0) \sim 2$ & $(1,1) \sim 3$ & $Z_{23}^{(0)}$ & $\tilde{Z}_{23}^{(0)}$ & $Z_{32}^{(0)}$ & $\bar{\Lambda}_{23}^{(1)}$ \\
$X_{01}^{(0)}$ & $Z_{02}^{(0)}$ & $Z_{13}^{(0)}$ & $\bar{\Lambda}_{03} ^{(1)}$ & $\bar{\Sigma}_{03} ^{(1)}$ &$\bar{\Lambda}_{12}^{(1)}$ & $\Sigma_{03}^{(2)}$\\
$\tilde{X}_{01}^{(0)}$ & $\tilde{Z}_{02}^{(0)}$ & $\tilde{Z}_{13}^{(0)}$ & $\bar{\Gamma}_{03} ^{(1)}$ & $\bar{\Delta}_{03} ^{(1)}$ & $\bar{\Gamma}_{12}^{(1)}$ & $\Delta_{03}^{(2)}$ \\
$X_{10}^{(0)}$ & $Z_{20}^{(0)}$ & $Z_{31}^{(0)}$ & $\bar{\Lambda}_{21}^{(1)}$ & $\bar{\Sigma}_{21}^{(1)}$ & $\bar{\Lambda}_{30}^{(1)}$ & $\Sigma_{21}^{(2)}$\\
$\bar{\tilde{X}}_{01}^{(1)}$ & $\bar{\Lambda}_{02}^{(1)}$ & $\bar{\Lambda}_{13}^{(1)}$ & $\Gamma_{03} ^{(2)}$ & $\Omega_{03}^{(2)}$ & $\Gamma_{12}^{(2)}$ & $\bar{Z}_{03}^{(3)}$\\
\hline
\end{tabular}
\caption{Summary of how the nodes and fields in the conifold$\times$conifold theory descend from the two parents. For simplicity, we converted the pairs of indices labeling nodes in the product to single indices. We also indicate the degree of the fields as a superscript. We use Latin and Greek letters to indicate chiral and Fermi fields, respectively.}
\label{conifold_conifold_field_table}
\end{table}
\begin{figure}[H]
\centering
\begin{tikzpicture}[scale = 4,decoration={markings,mark=at position 0.75 with {\arrow{latex}}}]
\tikzstyle{every node}=[circle,thick,fill=yellow2,draw,inner sep=2pt,font=\small]
\draw (0,1) node(a){$0$};
\draw (1,1) node(b){$1$};
\draw (1,0) node(c){$2$};
\draw (0,0) node(d){$3$};
\draw [very thick, postaction = {decorate} , bend left = 15](a) to node[rectangle , minimum size = 0.6em , pos = 0.25, draw = none , fill = white , inner sep = 0, font = \tiny]{$2$}(b);
\draw [very thick,postaction = {decorate}, bend left = 15] (b) to (a);
\draw [very thick, postaction = {decorate} , bend left = 15](c) to node[rectangle , minimum size = 0.6em , pos = 0.25, draw = none , fill = white , inner sep = 0, font = \tiny]{$2$}(d);
\draw [very thick,postaction = {decorate}, bend left = 15] (d) to (c);
\draw [very thick, postaction = {decorate} , bend left = 10](a) to node[rectangle , minimum size = 0.6em , pos = 0.25, draw = none , fill = white , inner sep = 0, font = \tiny]{$2$}(c);
\draw [very thick,postaction = {decorate}, bend left = 10] (c) to (a);
\draw [very thick, postaction = {decorate} , bend left = 10](b) to node[rectangle , minimum size = 0.6em , pos = 0.25, draw = none , fill = white , inner sep = 0, font = \tiny]{$2$}(d);
\draw [very thick,postaction = {decorate}, bend left = 10] (d) to (b);
\draw [very thick,postaction = {decorate}] (d) to (a);
\draw [very thick,postaction = {decorate},red] (b) to (a);
\draw [very thick,postaction = {decorate},red] (d) to (c);
\draw [very thick,postaction = {decorate},red] (c) to (a);
\draw [very thick,postaction = {decorate},red] (d) to (b);
\draw [very thick, postaction = {decorate} , red , bend left = 15](d) to node[rectangle , minimum size = 0.6em , pos = 0.25, draw = none , fill = white , inner sep = 0, font = \tiny]{$4$}(a);
\draw [very thick, postaction = {decorate} , red , bend left = 15](a) to node[rectangle , minimum size = 0.6em , pos = 0.25, draw = none , fill = white , inner sep = 0, font = \tiny]{$5$}(d);
\draw [very thick, postaction = {decorate} , red , bend left = 15](b) to node[rectangle , minimum size = 0.6em , pos = 0.25, draw = none , fill = white , inner sep = 0, font = \tiny]{$3$}(c);
\draw [very thick, postaction = {decorate} , red , bend left = 15](c) to node[rectangle , minimum size = 0.6em , pos = 0.25, draw = none , fill = white , inner sep = 0, font = \tiny]{$3$}(b);
\end{tikzpicture}
\caption{Quiver for the conifold$\times$conifold. Black arrows have degree $0$ and red arrows have degree $2.$ They correspond to $0d$ $\mathcal{N}=1$ chiral and Fermi fields, respectively.}
\label{conifold_conifold_quiver}
\end{figure}
\paragraph{Superpotential.}
Since the periodic quiver in this case lives on $\mathbb{T}^{4}$, we cannot display it diagrammatically. Instead, we can construct the superpotential explicitly using the prescription given in \sref{section_superpotential}. We divide the total superpotential into four parts
\begin{equation}
W = \mathcal{W}_{1} + \mathcal{W}_{2} + \mathcal{W}_C + \mathcal{W}_{12} ~,
\end{equation}
where $\mathcal{W}_{1}$ and $\mathcal{W}_{2}$ come from the first and second conifold factors, respectively, $\mathcal{W}_C$ contains the new cubic terms, and $\mathcal{W}_{12}$ contains the mixed terms. Recall that in Table \ref{conifold_conifold_field_table} we used Latin and Greek letters to indicate chiral and Fermi fields, respectively. With this in mind, the various parts of the superpotential are:
\medskip
\paragraph{\underline{$\mathcal{W}_{1}$}:}
Since there are only two terms in the superpotential of the conifold, we write the descendants of each of them separately. We thus write $\mathcal{W}_{1} = \mathcal{W}_{1+} - \mathcal{W}_{1-}$, with $\mathcal{W}_{1+}$ and $\mathcal{W}_{1-}$ the descendants of the positive and negative terms, respectively. We get
\begin{align}
\mathcal{W}_{1+} &= Z_{01}Z_{10}\tilde{Z}_{01}\Lambda_{10} + Z_{23}Z_{32}\tilde{Z}_{23}\Lambda_{32} +\bar{\Lambda}_{03}Z_{32}\tilde{Z}_{23}\bar{\Sigma}_{30} + Z_{01}\bar{\Lambda}_{12}\tilde{Z}_{23}\bar{\Sigma}_{30} + Z_{01}Z_{10}\bar{\Sigma}_{03}\Sigma_{30} \nonumber \\
& + \bar{\Gamma}_{03}Z_{32}\tilde{Z}_{23}\bar{\Delta}_{30} + Z_{01}\bar{\Lambda}_{12}\tilde{Z}_{23}\bar{\Delta}_{30} + Z_{01}Z_{10}\bar{\Delta}_{03}\bar{\Delta}_{30} + \bar{\Lambda}_{21}Z_{10}\tilde{Z}_{01}\bar{\Sigma}_{12} + Z_{23}\bar{\Lambda}_{30}\tilde{Z}_{01}\bar{\Sigma}_{12} \nonumber \\
& + Z_{23}Z_{32}\bar{\Sigma}_{21}\bar{\Sigma}_{12} + \Gamma_{03}Z_{32}\tilde{Z}_{23}Z_{30} + Z_{01}\Gamma_{12}\tilde{Z}_{23}Z_{30} + Z_{01}Z_{10}\Omega_{03}Z_{30}
\end{align}
and
\begin{align}
\mathcal{W}_{1-} &= \tilde{Z}_{01}Z_{10}Z_{01}\Lambda_{10} + \tilde{Z}_{23}Z_{32}Z_{23}\Lambda_{32} + \bar{\Sigma}_{03}Z_{32}Z_{23}\bar{\Sigma}_{30} + \tilde{Z}_{01}\bar{\Lambda}_{12}\tilde{Z}_{23}\bar{\Sigma}_{30} + \tilde{Z}_{01}Z_{10}\bar{\Lambda}_{03}\bar{\Sigma}_{30}\nonumber \\
& + \bar{\Delta}_{03}Z_{32}Z_{23}\bar{\Delta}_{30} + \tilde{Z}_{01}\bar{\Gamma}_{12}Z_{23}\bar{\Delta}_{30} + \tilde{Z}_{01}Z_{10}\bar{\Gamma}_{03}\bar{\Delta}_{30} + \bar{\Lambda}_{21}Z_{10}Z_{01}\bar{\Sigma}_{12} + \tilde{Z}_{23}\bar{\Lambda}_{30}Z_{01}\bar{\Sigma}_{12} \nonumber\\
& + \tilde{Z}_{23}Z_{32}\bar{\Lambda}_{21}\bar{\Sigma}_{12} + \Omega_{03}Z_{32}Z_{23}Z_{30} + \tilde{Z}_{01}\Gamma_{12}Z_{23}Z_{30} + \tilde{Z}_{01}Z_{10}\Gamma_{03}Z_{30}
\end{align}
\medskip
\paragraph{\underline{$\mathcal{W}_{2}$}:}
Similarly, $\mathcal{W}_{2} = \mathcal{W}_{2+} - \mathcal{W}_{2-}$, with the two parts being
\begin{align}
\mathcal{W}_{2+} &= Z_{02}Z_{20}\tilde{Z}_{02}\Lambda_{20} + Z_{13}Z_{31}\tilde{Z}_{13}\Lambda_{31} + \bar{\Lambda}_{03}Z_{31}\tilde{Z}_{13}\bar{\Gamma}_{30} + Z_{02}\bar{\Lambda}_{21}\tilde{Z}_{13}\bar{\Gamma}_{30} + Z_{02}Z_{20}\bar{\Gamma}_{03}\bar{\Gamma}_{30} \nonumber\\
& + \bar{\Sigma}_{03}Z_{31}\tilde{Z}_{13}\bar{\Omega}_{30} + Z_{02}\bar{\Sigma}_{21}\tilde{Z}_{13}\bar{\Omega}_{30} + Z_{01}Z_{10}\bar{\Delta}_{03}\bar{\Omega}_{30} + \bar{\Lambda}_{12}Z_{20}\tilde{Z}_{02}\bar{\Gamma}_{21} + Z_{13}\bar{\Lambda}_{30}\tilde{Z}_{02}\bar{\Gamma}_{21} \nonumber \\
& + Z_{13}Z_{31}\bar{\Gamma}_{12}\bar{\Gamma}_{21} + \Sigma_{30}Z_{31}\tilde{Z}_{13}Z_{30} + Z_{02}\Sigma_{21}\tilde{Z}_{13}Z_{30} + Z_{02}Z_{20}\Delta_{03}Z_{30} \nonumber\\[.15cm]
\mathcal{W}_{2-} &= \tilde{Z}_{02}Z_{20}Z_{02}\Lambda_{20} + \tilde{Z}_{13}Z_{31}Z_{13}\Lambda_{31} + \bar{\Gamma}_{03}Z_{31}Z_{13}\bar{\Gamma}_{30} + \tilde{Z}_{02}\bar{\Lambda}_{21}Z_{13}\bar{\Gamma}_{30} + \tilde{Z}_{02}Z_{20}\bar{\Lambda}_{03}\bar{\Gamma}_{30} \nonumber \\
& + \bar{\Sigma}_{03}Z_{31}Z_{13}\bar{\Omega}_{30} + \tilde{Z}_{02}\bar{\Sigma}_{21}Z_{13}\bar{\Omega}_{30} + \tilde{Z}_{02}Z_{20}\bar{\Sigma}_{03}\bar{\Omega}_{30} + \bar{\Gamma}_{12}Z_{20}Z_{02}\bar{\Gamma}_{21} + \tilde{Z}_{13}\bar{\Lambda}_{30}Z_{02}\bar{\Gamma}_{21} \nonumber\\
& + \tilde{Z}_{13}Z_{31}\bar{\Lambda}_{12}\bar{\Gamma}_{21} + \Delta_{03}Z_{31}Z_{13}Z_{30} + \tilde{Z}_{02}\Sigma_{21}Z_{13}Z_{30} + \tilde{Z}_{02}Z_{20}\Sigma_{03}Z_{30}
\end{align}
\paragraph{\underline{$\mathcal{W}_C$}:}
As explained in \sref{section_superpotential}, there are two cubic terms in the superpotential of $P_{p}\times Q_{q}$ for every pair of fields $\bar{X}^{(c)}_{i_{1},j_{1}} \in p$ and $\bar{Y}^{(d)}_{i_{2},j_{2}} \in q$. In the present case, these terms are:
\begin{align}
\renewcommand{\arraystretch}{1.25}
\begin{array}{|c|c|c|c|c|}
\hline
& X_{01} & \tilde{X}_{01} & X_{10} & \bar{\tilde{X}}_{01} \\
\hline
X_{01} & \phantom{-} Z_{01}Z_{13}\Lambda_{30} & \phantom{-} Z_{01}\tilde{Z}_{13}\Gamma_{30} & \phantom{-} Z_{23}Z_{31}\Lambda_{12} & \phantom{-} Z_{01}\bar{\Lambda}_{13}\bar{\Gamma}_{30}\\
& - Z_{02}Z_{23}\Lambda_{30} & - \tilde{Z}_{02}Z_{23}\Gamma_{30} & - Z_{20}Z_{01}\Lambda_{12} & - \bar{\Lambda}_{02}Z_{23}\bar{\Gamma}_{30} \\
\hline
\tilde{X}_{01} &\phantom{-} \tilde{Z}_{01}Z_{13}\Sigma_{30} &\phantom{-} \tilde{Z}_{01}\tilde{Z}_{13}\Delta_{30} &\phantom{-} \tilde{Z}_{23}Z_{31}\Sigma_{12} & \phantom{-} \tilde{Z}_{01}\bar{\Lambda}_{13}\bar{\Omega}_{30} \\
& - Z_{02}\tilde{Z}_{23}\Sigma_{30} & - \tilde{Z}_{02}\tilde{Z}_{23}\Delta_{30} & - Z_{20}\tilde{Z}_{01}\Sigma_{12} & - \bar{\Lambda}_{02}\tilde{Z}_{23}\bar{\Omega}_{30} \\
\hline
X_{10} & \phantom{-} Z_{10}Z_{02}\Lambda_{21} & \phantom{-} Z_{10}\tilde{Z}_{02}\Gamma_{21} & \phantom{-} Z_{32}Z_{20}\Lambda_{03} & \phantom{-} Z_{10}\bar{\Lambda}_{02}\bar{\Gamma}_{21}\\
& - Z_{13}Z_{32}\Lambda_{21} & - \tilde{Z}_{13}Z_{32}\Gamma_{21} & - Z_{31}Z_{10}\Lambda_{03} & - \bar{\Lambda}_{13}Z_{32}\bar{\Gamma}_{21} \\
\hline
\bar{\tilde{X}}_{01} & \phantom{-} \bar{\Lambda}_{01}Z_{13}\bar{\Sigma}_{30} & \phantom{-} \bar{\Lambda}_{01}\tilde{Z}_{13}\bar{\Delta}_{30} & \phantom{-} \bar{\Lambda}_{23}Z_{31}\bar{\Sigma}_{12} & \phantom{-}\bar{\Lambda}_{01}\bar{\Lambda}_{13}Z_{30} \\
& - Z_{02}\bar{\Lambda}_{23}\bar{\Sigma}_{30} & - \tilde{Z}_{02}\bar{\Lambda}_{23}\bar{\Delta}_{30} & - Z_{20}\bar{\Lambda}_{01}\bar{\Sigma}_{12} & - \bar{\Lambda}_{02}\bar{\Lambda}_{23}Z_{30} \\
\hline
\end{array}
\end{align}
$\mathcal{W}_C$ is the sum of all these terms.
\paragraph{\underline{$\mathcal{W}_{12}$}:}
As explained in \sref{section_superpotential} and Appendix \ref{mixed_potential_terms_appendix}, for every pair of terms $T_{P}$ and $T_{Q}$, there are terms in the product superpotential that combine them. For every pair of quartic terms, there are $9$ quintic terms. As in the case of $\mathcal{W}_{1}$ and $\mathcal{W}_{2}$, we write the corresponding terms separately. So we decompose $\mathcal{W}_{12}$ as
\begin{equation}
\mathcal{W}_{12} = \mathcal{W}_{++} + \mathcal{W}_{+-} + \mathcal{W}_{-+} + \mathcal{W}_{--}~,
\end{equation}
where the signs correspond to the signs of the parent terms in the two conifolds. The individual contributions are:
\begin{align}
\mathcal{W}_{++} &= Z_{02}Z_{23}\bar{\Lambda}_{30}\bar{\Delta}_{03}Z_{30} - Z_{02}\bar{\Lambda}_{21}\bar{\Gamma}_{12}\tilde{Z}_{23}Z_{30} - Z_{02}\bar{\Lambda}_{21}Z_{10}\bar{\Delta}_{03}Z_{30} \nonumber\\
&- Z_{01}\bar{\Lambda}_{12}\bar{\Sigma}_{21}\tilde{Z}_{13}Z_{30} + \bar{\Lambda}_{03}Z_{32}\bar{\Sigma}_{21}\tilde{Z}_{13}Z_{30} - Z_{01}\bar{\Lambda}_{12}Z_{20}\bar{\Delta}_{03}Z_{30} \nonumber\\
& + \bar{\Lambda}_{03}Z_{32}Z_{20}\bar{\Delta}_{03}Z_{30} + \bar{\Lambda}_{03}\bar{\Lambda}_{30}\tilde{Z}_{02}\tilde{Z}_{23}Z_{30} + \bar{\Lambda}_{03}Z_{31}\bar{\Gamma}_{12}\tilde{Z}_{23}Z_{30} \nonumber\\[.15cm]
\mathcal{W}_{+-} &= - Z_{02}\tilde{Z}_{23}\bar{\Lambda}_{30}\bar{\Gamma}_{03}Z_{30} + Z_{02}\bar{\Sigma}_{21}\bar{\Gamma}_{12}Z_{23}Z_{30} + Z_{02}\bar{\Sigma}_{21}Z_{10}\bar{\Gamma}_{03}Z_{30} \nonumber \\
& + \tilde{Z}_{01}\bar{\Lambda}_{12}\bar{\Lambda}_{21}\tilde{Z}_{13}Z_{30} - \bar{\Sigma}_{03}Z_{32}\bar{\Lambda}_{21}\tilde{Z}_{13}Z_{30} + \tilde{Z}_{01}\bar{\Lambda}_{12}Z_{20}\bar{\Gamma}_{03}Z_{30} \nonumber \\
& - \bar{\Sigma}_{03}Z_{32}Z_{20}\bar{\Gamma}_{03}Z_{30} - \bar{\Sigma}_{03}\bar{\Lambda}_{30}\tilde{Z}_{02}Z_{23}Z_{30} - \bar{\Sigma}_{03}Z_{31}\bar{\Gamma}_{12}Z_{23}Z_{30}\nonumber \\[.15cm]
\mathcal{W}_{-+} &= - \tilde{Z}_{02}Z_{23}\bar{\Lambda}_{30}\bar{\Sigma}_{03}Z_{30} + \tilde{Z}_{02}\bar{\Lambda}_{21}\bar{\Lambda}_{12}\tilde{Z}_{23}Z_{30} + \tilde{Z}_{02}\bar{\Lambda}_{21}Z_{10}\bar{\Sigma}_{03}Z_{30} \nonumber\\
& + Z_{01}\bar{\Gamma}_{12}\bar{\Sigma}_{21}Z_{13}Z_{30} - \bar{\Gamma}_{03}Z_{32}\bar{\Sigma}_{21}Z_{13}Z_{30} + Z_{01}\bar{\Gamma}_{12}Z_{20}\bar{\Sigma}_{03}Z_{30} \nonumber \\
& - \bar{\Gamma}_{03}Z_{32}Z_{20}\bar{\Sigma}_{03}Z_{30} - \bar{\Gamma}_{03}\bar{\Lambda}_{30}Z_{02}\tilde{Z}_{23}Z_{30} - \bar{\Gamma}_{03}Z_{31}\bar{\Lambda}_{12}\tilde{Z}_{23}Z_{30} \nonumber \\[.15cm]
\mathcal{W}_{--} &= \tilde{Z}_{02}\tilde{Z}_{23}\bar{\Lambda}_{30}\bar{\Lambda}_{03}Z_{30} - \tilde{Z}_{02}\bar{\Sigma}_{21}\bar{\Lambda}_{12}Z_{23}Z_{30} - \tilde{Z}_{02}\bar{\Sigma}_{21}Z_{10}\bar{\Lambda}_{03}Z_{30} \nonumber\\
& - \tilde{Z}_{01}\bar{\Gamma}_{12}\bar{\Lambda}_{21}Z_{13}Z_{30} + \bar{\Delta}_{03}Z_{32}\bar{\Lambda}_{21}Z_{13}Z_{30} - \tilde{Z}_{01}\bar{\Gamma}_{12}Z_{20}\bar{\Lambda}_{03}Z_{30} \nonumber \\
& +\bar{\Delta}_{03}Z_{32}Z_{20}\bar{\Lambda}_{03}Z_{30} + \bar{\Delta}_{03}\bar{\Lambda}_{30}Z_{02}Z_{23}Z_{30} + \bar{\Delta}_{03}Z_{31}\bar{\Lambda}_{12}Z_{23}Z_{30}
\end{align}
This completes our description of the superpotential. All in all, it consists of $124$ terms. Of these, $38$ are $J$-terms, i.e. terms containing precisely one degree $m-1$ field (degree $2$ in this case), with all their remaining fields chiral. Each one of the $19$ degree $m-1$ fields (see the quiver in \fref{conifold_conifold_quiver}) appears in two of these terms with opposite signs, so the superpotential satisfies the toric condition. Finally, with some effort, one can verify that the Kontsevich bracket $\{W,W\}$ vanishes.
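The toric condition can be checked mechanically. The sketch below is a generic checker applied, for brevity, to the conifold superpotential \eqref{W_conifold} rather than to the full $124$-term $W$; the fields are plain string labels and the term encoding is our own.

```python
# Sketch: check the toric condition -- every field in a given set must
# appear in exactly two superpotential terms, once with each sign.

from collections import defaultdict

def satisfies_toric_condition(terms, fields):
    """terms: list of (sign, tuple_of_field_names) with sign = +1 or -1."""
    signs = defaultdict(list)
    for sign, term in terms:
        for field in set(term):
            signs[field].append(sign)
    return all(sorted(signs[field]) == [-1, 1] for field in fields)

# W_con = X01 X10 X~01 X~10 - X~01 X10 X01 X~10, from the conifold.
w_con = [
    (+1, ("X01", "X10", "Xt01", "Xt10")),
    (-1, ("Xt01", "X10", "X01", "Xt10")),
]

print(satisfies_toric_condition(w_con, ["X01", "Xt01", "X10", "Xt10"]))  # True
```

For the conifold ($m=1$) the degree $m-1$ fields are all the chiral fields, so the condition is checked on all four of them.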
\section{Relation to Other Constructions}
\label{section_relation_to_other_constructions}
We now briefly discuss how the product construction relates to other known methods for determining the quiver theories corresponding to a given geometry.
\subsection{Algebraic Dimensional Reduction}
Algebraic dimensional reduction is an algorithm for constructing the quiver theory for $\mathrm{CY}_{m+2}\times \mathbb{C}$ starting from the quiver theory for $\mathrm{CY}_{m+2}$ \cite{Franco:2017lpa}. It generalizes to arbitrary $m$ the dimensional reduction from $6d$ $\mathcal{N}=(1,0)$ theories to $4d$ $\mathcal{N}=2$ theories ($m=0\to m=1$), from $4d$ $\mathcal{N}=1$ theories to $2d$ $\mathcal{N}=(2,2)$ theories ($m=1\to m=2$), and from $2d$ $\mathcal{N}=(0,2)$ theories to $0d$ $\mathcal{N}=2$ theories ($m=2\to m=3$).\footnote{In all these cases, the dimensionally reduced theories have more than $2^{3-m}$ supercharges.}
Algebraic dimensional reduction is in fact a specific instance of the product: it corresponds to the product of the quiver theory for $\mathrm{CY}_{m+2}$ with the simplest $m=0$ quiver theory, the one for $\mathbb{C}^{2}$. This theory is shown in \fref{c2_quiver_pm} and has two perfect matchings; either of them gives the same result. Similarly, any perfect matching used for the $\mathrm{CY}_{m+2}$ theory gives the same quiver theory for $\mathrm{CY}_{m+2}\times \mathbb{C}$, up to a relabeling of fields.
\begin{figure}[H]
\centering
\begin{tikzpicture}[scale=1]
\tikzstyle{every node}=[circle,thick,fill=yellow2,draw,inner sep=2pt,font=\small]
\draw (0,0) node(a){$0$};
\draw (3,0) node(b){$0$};
\draw[very thick](a) -- (b);
\draw (5,0) node(a){$0$};
\draw (8,0) node(b){$0$};
\draw[very thick,-latex](a) -- (b);
\draw (10,0) node(a){$0$};
\draw (13,0) node(b){$0$};
\draw[very thick,latex-](a) -- (b);
\end{tikzpicture}
\caption{The periodic quiver for $\mathbb{C}^{2}$ and its perfect matchings, represented here as orientations of the quiver.}
\label{c2_quiver_pm}
\end{figure}
\subsection{Orbifold Reduction}
Orbifold reduction is a generalization of dimensional reduction that constructs a quiver theory for a toric $\mathrm{CY}_{4}$ from that of a toric $\mathrm{CY}_{3}$ \cite{Franco:2016fxm}.\footnote{This corresponds to going from $m=1$ to $m=2$. The procedure can be naturally extended to higher $m$.} It adds a third dimension to the toric diagram $T_{\mathrm{CY}_{3}}$ by adding images of one of its points up to some height $k_{+}$ above the central plane containing $T_{\mathrm{CY}_{3}}$ and some depth $k_{-}$ below it (see \fref{red_orb_orbred}).
\begin{figure}[ht]
\centering
\includegraphics[width=\textwidth]{red_orb_orbred}
\caption{Toric diagrams for: a) the dimensional reduction of $dP_3$ to $dP_3\times \mathbb{C}$, b) a $(dP_3\times \mathbb{C})/\mathbb{Z}_k$ orbifold with $k=2$ and c) an orbifold reduction of $dP_3$ with $k_+=2$ and $k_-=1$.}
\label{red_orb_orbred}
\end{figure}
This process again corresponds to a specific case of a product. The orbifold reduction of a $4d$ $\mathcal{N} = 1$ quiver theory with periodic quiver $P$ using a perfect matching $p$ corresponds to the product $P_{p} \times A^{(k_{+}+k_{-})}_{q}$. Here $A^{(k)}$ is the $6d$ $\mathcal{N} = (1,0)$ quiver theory for $\mathbb{C}^{2}/\mathbb{Z}_{k}$, i.e. the affine necklace quiver of type $A$ with $k$ nodes. A perfect matching of an $m=0$ quiver is just a choice of orientation of its edges, so the perfect matching $q$ is such that $k_{+}$ arrows point up while $k_{-}$ arrows point down. There are $\binom{k_{+} + k_{-}}{k_{-}}$ such perfect matchings. They all realize theories corresponding to the same geometry and are related by a sequence of trialities.
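This counting of perfect matchings is easy to make explicit. The sketch below enumerates the orientations of the $k_{+}+k_{-}$ edges of the necklace quiver with exactly $k_{+}$ arrows pointing up, encoding each orientation by its set of up-pointing edges; the encoding is our own choice.

```python
# Sketch: perfect matchings of the m=0 necklace quiver used in orbifold
# reduction, i.e. orientations of k_+ + k_- edges with exactly k_+ arrows
# pointing up. Their number is binomial(k_+ + k_-, k_-).

from itertools import combinations
from math import comb

def up_down_orientations(k_plus, k_minus):
    """All orientations, each encoded as the set of 'up' edge indices."""
    k = k_plus + k_minus
    return [set(up) for up in combinations(range(k), k_plus)]

k_plus, k_minus = 2, 1   # the orbifold reduction example in the figure
orientations = up_down_orientations(k_plus, k_minus)
print(len(orientations), comb(k_plus + k_minus, k_minus))  # prints 3 3
```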
\subsection{3d Printing}
\label{subsection:product_vs_printing}
Another algorithm for efficiently constructing quiver theories for toric CYs starting from simpler parent geometries is {\it $3d$ printing}, which allows one to add images of multiple points in the toric diagram (we refer to \cite{Franco:2018qsc} for details). $3d$ printing is more general than the CY product in two senses:
\begin{itemize}
\item
All the geometries that can be reached with CY products can also be reached by a sequence of $3d$ printings that increase $m$ by one at a time. The converse is not true: there are geometries that can be realized by $3d$ printing but not as CY products. The simplest such example is the conifold. As is evident from its toric diagram, shown in \fref{toric_diagram_quiver_conifold}, it can be constructed by lifting both points in the toric diagram of $\mathbb{C}^{2}$, but it clearly cannot be produced as a product.
\item
Even if the same geometry can be realized by both processes, there might be phases of the quiver theories that can be obtained via $3d$ printing but not via a product. A simple example of this phenomenon is $F_{0}$: phase 2 of $F_{0}$ can be obtained using either construction, but only $3d$ printing is able to construct phase 1.
\end{itemize}
Despite these relative disadvantages, the CY product is a superior method for geometries that can be reached via both approaches, for several reasons:
\begin{itemize}
\item
The CY product is much more efficient. This is true even for simple geometries. As an example, consider the construction of a quiver theory for the conifold$\times$conifold. In order to $3d$ print this theory starting from the conifold, we first need to produce an intermediate CY 4-fold that is the dimensional reduction of the conifold, i.e. conifold$\times \mathbb{C}$. Two points of its toric diagram must then be lifted to produce the conifold$\times$conifold. To carry out this process, we have to compute the perfect matchings not only for the conifold but also for the intermediate conifold$\times \mathbb{C}$ theory. The difficulties of constructing the necessary quiver blocks and computing perfect matchings at every intermediate step make $3d$ printing impractical when the difference between the dimensions of the input and target geometries is large.
\item
The CY product always produces {\it reduced} theories. This is not the case with $3d$ printing, which often results in reducible (also known as inconsistent) theories that need to be reduced \cite{Franco:2018qsc}.
\item
Unlike $3d$ printing, the CY product does not generate mass terms in the superpotential. This not only reduces the computational burden, but also means that the CY product provides a more direct route to the final quiver theory, without the need to integrate out massive fields at the end.
\item
More importantly, in addition to these computational advantages, the CY product provides a concise and much clearer relationship between the input and target geometries. This becomes more striking as the difference between the dimensions of the input and target geometries increases.
\end{itemize}
Having considered the relative merits of the two constructions, we turn to some speculation about their relation. While we have restricted ourselves to the case in which the periodic quivers of both theories are embedded in tori, more generally we can regard the product construction as a method for producing a quiver embedded in $S\times T$ given two quivers embedded in manifolds $S$ and $T$. We can also consider manifolds with boundary: if $T$ has a boundary $\partial T$, the resulting quiver is embedded in the manifold $S\times T$ with boundary $S\times \partial T$. Arguably the simplest case is when $T$ is the line segment $I$. The basic building block of $3d$ printing, a quiver block $\mathcal{Q}^{(m+1)}_{p}$, is a graph embedded in $\mathbb{T}^{m}\times I$, and it can indeed be regarded as the product of an $m$-graded periodic quiver $Q^{(m)}$, using a perfect matching $p$, with a simple quiver embedded in a line segment, as follows\footnote{The notation in the figure is inspired by the one used for quiver blocks in $3d$ printing in \cite{Franco:2018qsc}. In that context, the nodes $\overline{\star}$ and $\underline{\star}$ would correspond to the two images of a node $\star$ at the two endpoints of a line segment.}
\begin{align}
\mathcal{Q}^{(m+1)}_{p} = Q^{(m)}_{p} \times \mathord{\begin{tikzpicture}[baseline = 4ex,scale = 1.75]
\tikzstyle{every node}=[circle,thick,fill=yellow2,draw,inner sep=1.25pt,font=\scriptsize]
\draw (0,0) node(a){$\underline{\star}$};
\draw (0,1) node(b){$\overline{\star}$};
\draw [very thick,-latex] (a) -- (b);
\end{tikzpicture}}
\end{align}
As usual, we have indicated the perfect matching of the $m=0$ quiver by specifying an orientation of its fields. This construction realizes both the field content and the superpotential of the quiver block.
It is therefore natural to expect that $3d$ printing and the CY product are two instances of a single overarching construction. Such a procedure would include both the products of $m$-graded quivers embedded in manifolds, possibly with boundaries, and an operation that glues two such manifolds along their boundaries under suitable conditions. We leave the task of understanding this construction in complete generality, and its physical realization, to future work.
\section{Conclusions}
\label{section_conclusions}
Over the years, there has been tremendous progress in the map between the geometry of singularities and the corresponding quiver theories on branes. This started with a few isolated examples of CY 3-folds and evolved into the development of brane tilings, tools that vastly simplify the study of infinite classes of geometries. Similar tools were later developed for higher dimensional CYs. We regard the CY product as a significant addition to the arsenal of tools connecting geometry and quiver theories. It allows us to straightforwardly compute quiver theories in cases that were previously out of practical reach.
We envision multiple directions for future research. To name a few:
\begin{itemize}
\item The CY product will help in investigating the order $(m+1)$ dualities of the $m$-graded quiver theories associated to CY $(m+2)$-folds. There is a large amount of freedom in this construction: the choice of phases for the quiver theories of the parent geometries and the choice of perfect matchings for the interlacing points.\footnote{Moreover, the perfect matchings are phase dependent.} Therefore, given a target CY, there are multiple possible decompositions into CY factors. In fact, different decompositions can even differ in the dimensions of the components. It is therefore worthwhile to study the interplay between this vast landscape of possibilities and the intricate space of dual theories.
\item The CY product is particularly amenable to automatic computer implementation. It is therefore ideally suited for generating large datasets of CYs/quiver theories. Such datasets would provide valuable insights into the structure of these theories. Moreover, they can be used to test the applicability of modern ideas such as machine learning to problems involving quiver theories, such as the classification of duals for general $m$. Initial explorations of these ideas have been undertaken in \cite{toappear0}.
\item As illustrated by $F_0^{(m)}$ in \sref{subsec_F0}, the CY product can be applied iteratively or, equivalently, with multiple factors. In this way, it is possible to build quiver theories for complicated, higher dimensional geometries out of very simple, low dimensional building blocks. A similar approach was exploited to build some of the infinite classes of theories in \cite{Closset:2018axq}.
\item From a first-principles perspective, we can calculate the quivers associated with a CY$_{m+2}$ via the topological B-model \cite{Aspinwall:2008jk,lam2014calabi,Franco:2017lpa,Closset:2018axq}. However, this approach requires knowledge of the fractional branes as a starting point, which is often challenging. It would be interesting to investigate the correspondence between the B-model and CY product approaches.
\end{itemize}
\acknowledgments
We would like to thank C. Closset and G. Musiker for enjoyable discussions and related collaborations. The research of SF was supported by the U.S. National Science Foundation grants PHY-1820721 and DMS-1854179. AH was supported by INFN grant GSS (Gauge Theories, Strings and Supergravity).
\section{Introduction}\label{sec:BatchCodes}
\subsection{Background and Definitions}
Batch codes were first introduced by Ishai \textit{et al.} in~\cite{IKOS04} as a method to represent the distributed storage of a set of $n$ data items on $m$ servers. These codes were originally motivated by several applications such as load balancing in distributed storage, private information retrieval, and cryptographic protocols. Formally, these codes are defined as follows~\cite{IKOS04}.
\begin{definition}
\begin{enumerate}
\item An $(n,N,k,m,t)$ {\em batch code} over an alphabet $\Sigma$, encodes a string $x\in \Sigma^n$ into an $m$-tuple of strings $y_1,\ldots,y_m\in \Sigma^*$ (called {\em buckets} or {\em servers}) of total length $N$, such that for each $k$-tuple (called {\em batch} or {\em request}) of distinct indices $i_1,\ldots,i_k\in[n]$, the $k$ data items $x_{i_1},\ldots,x_{i_k}$ can be decoded by reading at most $t$ symbols from each server.
\item An $(n,N,k,m,t)$ {\em multiset batch code} is an $(n,N,k,m,t)$ batch code which also satisfies the following property: For any multiset request of $k$ indices $i_1,\ldots,i_k\in[n]$ there is a partition of the buckets into $k$ subsets $S_1,\ldots,S_k \subseteq [m]$ such that each item $x_{i_j}, j\in[k]$, can be retrieved by reading at most $t$ symbols from each bucket in $S_j$.
\end{enumerate}
\end{definition}
Yet another class of codes, called \emph{combinatorial batch codes} (CBC), is a special type of batch codes in which all encoded symbols are copies of the input items, i.e., these codes are replication-based. Several works have considered codes under this setup; see e.g.~\cite{BB14,BRR12,BKMS10,BT11C,BT11A,BT11B,BT12,BT15,PSW09,S14,SG13}. However, note that combinatorial batch codes are not multiset batch codes, and do not allow an item to be requested more than once.
Motivated by the works on codes which enable parallel reads for different users in distributed storage systems, for example, codes with locality and availability \cite{RPDV2016,ZY2016}, we introduce a generalization of CBCs, named \emph{multiset combinatorial batch codes}.
\begin{definition}
An $(n,N,k,m,t;r)$ {\em multiset combinatorial batch code (MCBC)} is a collection of subsets of $[n]$, ${\cal C}=\{C_1,C_2,\dots,C_m\}$ (called {\em servers}) where $N=\sum_{j=1}^m |C_j|$, such that for each multiset request $\{i_1,i_2,\dots,i_k\}$, in which every element in $[n]$ has multiplicity at most $r$, there exist subsets $D_1,\ldots,D_m$, where for all $j\in[m]$, $D_j\subseteq C_j$ with $|D_j|\leq t$, and the multiset union\footnote{For any $i\in[n]$, the multiplicity of $i$ in the multiset union of the sets $D_j$ for $j\in[m]$ is the number of subsets that contain $i$, that is $|\{j\in[m]:i\in D_j\}|$.} of $D_j$ for $j\in[m]$ contains the multiset request $\{i_1,i_2,\dots,i_k\}$.
\end{definition}
In other words, an $(n,N,k,m,t;r)$-MCBC is a coding scheme which encodes $n$ items into $m$ servers, with total storage of $N$ items, such that any multiset request of items of size at most $k$, where any item can be repeated at most $r$ times, can be retrieved by reading at most $t$ items from each server. In particular, when $r=1$ we obtain a combinatorial batch code, and when $r=k$ and $t=1$ we obtain a multiset batch code based on replication.
\begin{example}
Let us consider the following $(n=5,N=15,k=5,m=5,t=1;r=2)$ MCBC,
$$\begin{array}{|c|c|c|c|c|}
\hline
1 & 1 & 2 & 2 & 3 \\
3 & 4 & 3 & 4 & 4 \\
5 & 5 & 5 & 5 & 5 \\
\hline
\end{array}$$
where the $i$-th column contains the indices of items stored in the server $C_i\in{\cal C}$, $i\in[5]$. It is possible to verify that the code ${\cal C}$ satisfies the requirements of a $(5,15,5,5,1;2)$-MCBC. For example, the multiset request $\{3,3,4,4,5\}$ can be read by taking the subsets $D_1=\{3\}$, $D_2=\{4\}$, $D_3=\{3\}$, $D_4=\{4\}$, $D_5=\{5\}$.
\end{example}
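For small instances, the defining property can be checked by brute force. The sketch below verifies a single multiset request against the example above by assigning each requested copy to a server storing that item, with every server serving at most $t$ items; the function is illustrative and not from any library.

```python
# Sketch: brute-force check that a multiset request can be served by the
# servers of an MCBC, reading at most t items from each server.

def can_serve(servers, request, t=1):
    """servers: list of sets of item indices; request: list of requested
    items (repetitions allowed). Backtracks over assignments of request
    copies to servers."""
    load = [0] * len(servers)

    def assign(pos):
        if pos == len(request):
            return True
        item = request[pos]
        for j, server in enumerate(servers):
            if item in server and load[j] < t:
                load[j] += 1
                if assign(pos + 1):
                    return True
                load[j] -= 1
        return False

    return assign(0)

# The (5,15,5,5,1;2)-MCBC from the example; entry i lists server C_i.
servers = [{1, 3, 5}, {1, 4, 5}, {2, 3, 5}, {2, 4, 5}, {3, 4, 5}]
print(can_serve(servers, [3, 3, 4, 4, 5]))  # True, as in the example
```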
Similarly to the original problem of combinatorial batch codes, the goal in this paper is to minimize the total storage $N$ given the parameters $n,m,k,t$ and $r$ of an MCBC. Let $N(n,k,m,t;r)$ be the smallest $N$ such that an $(n,N,k,m,t;r)$-MCBC exists. An MCBC is called {\em optimal} if $N$ is minimal given $n,m,k,t,r$. In this paper, we focus on the case $t=1$, and thus omit $t$ from the notation, writing an $(n,N,k,m;r)$-MCBC and denoting its minimum storage by $N(n,k,m;r)$. In the case $r=1$, i.e., when the MCBC is a CBC, we further omit $r$, writing an $(n,N,k,m)$-CBC and denoting its minimum storage by $N(n,k,m)$.
In \cite{PSW09}, the authors studied another class of CBCs, called \emph{uniform combinatorial batch codes} (\emph{uniform CBCs}), in which each item is stored in the same number of servers. Several constructions of optimal uniform CBCs were given in~\cite{BB14,BRR12,PSW09,SG13}. In this paper we consider a slightly different class of MCBCs, in which each server stores the same number of items, and call these codes {\em regular multiset combinatorial batch codes} ({\em regular MCBCs}).
A regular $(n,N,k,m;r)$-MCBC is an MCBC in which each server stores the same number $\mu$ of items, where $\mu = N/m$. Given $n,m,k,r$, let $\mu(n,k,m;r)$ denote the smallest number of items stored in each server, then the optimal value of $N$ is determined by $\mu(n,k,m;r)$, that is, $N=m\mu(n,k,m;r)$.
\subsection{Previous Results on CBCs}
For CBCs, a significant amount of work has been done to study the value $N(n,k,m)$, and the exact value has been determined for a large range of parameters. We list some of the known results below (for more details see~\cite{BRR12,BKMS10,BT11A,BT11B,PSW09,SG13}).
\begin{theorem}
\label{CBC}
\begin{itemize}
$~$
\item[(i)] $N(n,k,k)=kn-k(k-1)$.
\item[(ii)] If $n\geq (k-1){m\choose k-1}$, then $N(n,k,m)=kn-(k-1){m\choose k-1}$.
\item[(iii)] If ${m\choose k-2}\leq n\leq (k-1){m\choose k-1}$, then $N(n,k,m)=(k-1)n-\left\lfloor\frac{(k-1){m\choose k-1}-n}{m-k+1}\right\rfloor$.
\item[(iv)] If ${m\choose k-2}-(m-k+1)A(m,4,k-3)\leq n\leq {m\choose k-2}$, then $N(n,k,m)=(k-2)n-\left\lfloor\frac{2({m\choose k-2}-n)}{m-k+1}\right\rfloor$ for $0\leq ({m\choose k-2}-n)\mod (m-k+1)<\frac{m-k+1}{2}$.
\item[(v)] $N(m+1,k,m)=m+k$.
\item[(vi)] Let $k$ and $m$ be integers with $2\leq k\leq m$, then $$N(m+2,k,m)=\begin{cases}
m+k-2+\lceil 2\sqrt{k+1}\rceil & \text{ if } m+1-k\geq \lceil \sqrt{k+1}\rceil, \\
2m-2+\left\lceil 1+\frac{k+1}{m+1-k}\right\rceil & \text{ if } m+1-k< \lceil \sqrt{k+1}\rceil. \\
\end{cases}$$
\item[(vii)] For all integers $n\geq m\geq 3$, $N(n,3,m)=\begin{cases}
2n-m+\left\lfloor\frac{n-3}{m-2}\right\rfloor & \text{ if } n\leq m^2-m, \\
3n-m^2+m & \text{ if } n\geq m^2-m. \\
\end{cases}$
\item[(viii)] For all integers $n\geq m\geq 4$,
$$N(n,4,m)=\begin{cases}
n & \text{ if n=m,} \\
2n-m+\left\lceil\frac{1+\sqrt{8n-8m+1}}{2}\right\rceil & \text{ if $m<n\leq \frac{m^2+6m}{8}$ and $m$ is even} \\
& \text{~~~ or if $m<n\leq \frac{m^2+4m+3}{8}$ and $m$ is odd,} \\
2n-m+\left\lceil\frac{5+\sqrt{8n-16m+25}}{2}\right\rceil & \text{ if $\frac{m^2+6m+8}{8}\leq n<{m\choose 2}$ and $m$ is even} \\
& \text{~~~ or if $\frac{m^2+4m+11}{8}<n<{m\choose 2}$ and $m$ is odd,} \\
2n-\frac{m-1}{2} & \text{ if $n=\frac{m^2+4m+11}{8}$ and $m$ is odd,} \\
3n-\left\lfloor\frac{m^2}{2}-\frac{n-m}{m-3}\right\rfloor & \text{ if ${m\choose 2}\leq n<3{m\choose 3}$,} \\
4n-3{m\choose 3} & \text{ if $3{m\choose 3}\leq n$.}
\end{cases}$$
\item[(ix)] For any prime power $q\geq 3$, $N(q^2+q-1,q^2-q-1,q^2-q)=q^3-q$.
\end{itemize}
\end{theorem}
\subsection{Our Contributions}
From the definition of MCBCs, one can observe that $r\leq k\leq tm$ and $n\leq N$. If $m\geq nr$, the trivial construction in which each item is replicated on $r$ distinct servers, each server storing at most one item, is optimal, achieving $N=rn$. Therefore, we only consider the case $m<nr$.
In this paper, we study the properties of MCBCs, and give a necessary and sufficient condition for the existence of MCBCs. We give the following bounds on the value of $N(n,k,m;r)$.
\begin{theorem}\label{th:bounds}
\begin{enumerate}
$~$
\item[(i)] $N(n,k,m;r)\geq rn$.
\item[(ii)] $N(n,k,m;r)\geq N(n,k,m;i)$ for $i\in [r-1]$.
\item[(iii)] $\frac{1}{r}N(rn,k,m)\leq N(n,k,m;r)\leq N(rn,k,m)$.
\item[(iv)] $N(n,k,m;r)\leq rN(n,\left\lceil\frac{k}{r}\right\rceil,\left\lfloor\frac{m}{r}\right\rfloor)$.
\item[(v)] Let $r\leq k-1$. For any $c\in[r,k-1]$,
$N(n,k,m;r)\geq nc-\left\lfloor\frac{k-c}{m-k+1}\left[\frac{\left\lfloor\frac{k-1}{r}\right\rfloor{m\choose k-1}}{{m-c\choose k-1-c}}-n\right]\right\rfloor$.
\end{enumerate}
\end{theorem}
We also provide several constructions of $(n,N,k,m;r)$-MCBC and determine the exact value of $N(n,k,m;r)$ for some specific parameters.
\begin{theorem}\label{th:constructions}
\begin{enumerate}
$~$
\item[(i)] If $n\geq \left\lfloor\frac{k-1}{r}\right\rfloor{m\choose k-1}$, then $N(n,k,m;r)=kn-\left\lfloor\frac{k-1}{r}\right\rfloor{m\choose k-1}$.
\item[(ii)] $N(n,k,m;k)=kn$, $N(n,k,m;k-1)=\begin{cases}
kn-{m\choose k-1} & \text{if $n\geq {m\choose k-1}$}, \\
(k-1)n & \text{if $n< {m\choose k-1}$}.
\end{cases}$
\item[(iii)] If $\left\lfloor\frac{k-1}{r}\right\rfloor{m\choose k-1}-(m-k+1)A(m,4,k-2)\leq n\leq \left\lfloor\frac{k-1}{r}\right\rfloor{m\choose k-1}$ and $r\leq k-2$, then $N(n,k,m;r)=(k-1)n-\left\lfloor\frac{\left\lfloor\frac{k-1}{r}\right\rfloor{m\choose k-1}-n}{m-k+1}\right\rfloor$.
\item[(iv)] $N(n,k,k;r)=kn-\left\lfloor\frac{k-1}{r}\right\rfloor k$ if either $r\mid k$ and $n\geq \frac{k}{r}$, or $r\nmid k$ and $n\geq \lfloor\frac{k}{r}\rfloor+r$.
\item[(v)] For any prime power $q$, $N(q^2+q,k,q^2;r)\leq q^3+q^2$, where $(k,r)$ satisfies either $\lfloor\frac{q}{2}\rfloor+1\leq r\leq q$ and $k\leq (q-r+1)(2r-1)$, or $r=1$ and $k\leq q^2$. In particular, when $(k,r)\in\{(q^2,1),(2q-1,q)\}$, $N(q^2+q,k,q^2;r)=q^3+q^2$.
\end{enumerate}
\end{theorem}
For a regular $(n,N,k,m;k)$-MCBC, every item has to be stored in at least $k$ different servers and so $\mu(n,k,m;k)\geq kn/m$. Our contribution in this part is finding a necessary and sufficient condition for equality in the last inequality. This result is summarized in the following theorem.
\begin{theorem}\label{th:constructionsreg}
$\mu(n,k,m;k)=\frac{kn}{m}$ if and only if $n=c\cdot\frac{m}{\gcd{(m,k)}}$ for some integer $c\geq 0$.
\end{theorem}
The rest of the paper is organized as follows. In Section~\ref{sec:hallcondition}, we give a necessary and sufficient condition for the existence of MCBCs. In Sections~\ref{sec:bound} and \ref{sec:construction}, we give several bounds and constructions for MCBCs, and establish the results on $N(n,k,m;r)$ stated in Theorems~\ref{th:bounds} and \ref{th:constructions}. In Section~\ref{sec:regular_mcbc}, we analyse regular MCBCs, and determine the value of $\mu(n,k,m;k)$ in Theorem~\ref{th:constructionsreg}.
\section{Set Systems and the Multiset Hall's Condition}\label{sec:hallcondition}
A {\em set system} is a pair $(V,{\cal C})$, where $V$ is a finite set of {\em points} and ${\cal C}$ is a collection of subsets of $V$ (called {\em blocks}). Given a set system $(V,{\cal C})$ with point set $V=\{v_1,v_2,\dots,v_n\}$ and block set ${\cal C}=\{C_1,C_2,\dots,C_m\}$, its {\em incidence matrix}
is an $m\times n$ matrix $M$, given by
$$M_{i,j}=\begin{cases}
1 & \text{if $v_j\in C_i$}, \\
0 & \text{if $v_j\not\in C_i$}.
\end{cases}$$
If $M$ is the incidence matrix of the set system $(V,{\cal C})$, then the set system having incidence matrix $M^\top$ is called the {\em dual set system} of $(V,{\cal C})$.
Let ${\cal C}=\{C_1,C_2,\dots,C_m\}$ be an $(n,N,k,m;r)$-MCBC. Similarly to the study of CBCs, by setting $V=[n]$, we consider the set system $(V,{\cal C})$ of the MCBC. In addition, we denote the set system $(X,{\cal B})$ which is given by $X=[m]$ and ${\cal B}=\{B_1,B_2,\dots,B_n\}$ where for each $i\in[n]$, $B_i\subseteq X$ consists of the servers that store the $i$-th item. Then, it is readily verified that $(X,{\cal B})$ is the dual set system of $(V,{\cal C})$. We note that a set system $(V,{\cal C})$ of this form or its dual set system $(X,{\cal B})$ uniquely determines an MCBC and thus in the rest of the paper we will usually refer to an MCBC by its set system or its dual set system.
\begin{example}
\label{e.ap4} The following is a $(20,80,16,16)$-CBC given in \cite{SG13} based on an {\em affine plane} of order $4$. Here, $V=[20]$, each column contains the indices of items stored in a server $C_i\in{\cal C}$ and also forms a block of the set system $(V,{\cal C})$.
$$\begin{array}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
1 & 2 & 3 & 4 & 1 & 2 & 3 & 4 & 1 & 2 & 3 & 4 & 1 & 2 & 3 & 4 \\
5 & 6 & 7 & 8 & 6 & 5 & 8 & 7 & 7 & 8 & 5 & 6 & 8 & 7 & 6 & 5 \\
9 & 10 & 11 & 12 & 12 & 11 & 10 & 9 & 10 & 9 & 12 & 11 & 11 & 12 & 9 & 10 \\
13 & 14 & 15 & 16 & 15 & 16 & 13 & 14 & 16 & 15 & 14 & 13 & 14 & 13 & 16 & 15 \\
17 & 17 & 17 & 17 & 18 & 18 & 18 & 18 & 19 & 19 & 19 & 19 & 20 & 20 & 20 & 20 \\
\hline
\end{array}$$
The incidence matrix of the CBC given above is as follows, where the indices of nonzero entries in the $i$-th row, $i\in[16]$, correspond to the indices of items stored in the $i$-th server $C_i$.
\begin{equation*}
\left(
\begin{array}{ccccc}
1~0~0~0~1~0~0~0~1~0~0~0~1~0~0~0~1~0~0~0 \\
0~1~0~0~0~1~0~0~0~1~0~0~0~1~0~0~1~0~0~0 \\
0~0~1~0~0~0~1~0~0~0~1~0~0~0~1~0~1~0~0~0 \\
0~0~0~1~0~0~0~1~0~0~0~1~0~0~0~1~1~0~0~0 \\
1~0~0~0~0~1~0~0~0~0~0~1~0~0~1~0~0~1~0~0 \\
0~1~0~0~1~0~0~0~0~0~1~0~0~0~0~1~0~1~0~0 \\
0~0~1~0~0~0~0~1~0~1~0~0~1~0~0~0~0~1~0~0 \\
0~0~0~1~0~0~1~0~1~0~0~0~0~1~0~0~0~1~0~0 \\
1~0~0~0~0~0~1~0~0~1~0~0~0~0~0~1~0~0~1~0 \\
0~1~0~0~0~0~0~1~1~0~0~0~0~0~1~0~0~0~1~0 \\
0~0~1~0~1~0~0~0~0~0~0~1~0~1~0~0~0~0~1~0 \\
0~0~0~1~0~1~0~0~0~0~1~0~1~0~0~0~0~0~1~0 \\
1~0~0~0~0~0~0~1~0~0~1~0~0~1~0~0~0~0~0~1 \\
0~1~0~0~0~0~1~0~0~0~0~1~1~0~0~0~0~0~0~1 \\
0~0~1~0~0~1~0~0~1~0~0~0~0~0~0~1~0~0~0~1 \\
0~0~0~1~1~0~0~0~0~1~0~0~0~0~1~0~0~0~0~1 \\
\end{array}
\right)
\end{equation*}
Here, the dual set system is $X=[16]$ and ${\cal B}=\{\{1,5,9,13\},\{2,6,10,14\},$ $\{3,7,11,15\},\{4,8,$ $12,16\},\{1,6,11,16\},\{2,5,12,15\},\{3,8,9,14\},\{4,7,10,13\},\{1,8,10,15\},\{2,7,9,16\},\{3,6,12,$ $13\},\{4,5,11,14\},\{1,7,12,14\},\{2,8,11,13\},\{3,5,10,16\},\{4,6,9,15\},\{1,2,3,4\},\{5,6,7,8\},$ $\{9,10,11,12\},\{13,14,15,16\}\}$.
\end{example}
In the rest of this section we let $(V,{\cal C})$ with $V=[n]$ and ${\cal C}=\{C_1,C_2,\dots,C_m\}$ be a set system, and $(X,{\cal B})$ with $X=[m]$ and ${\cal B}=\{B_1,B_2,\dots,B_n\}$ be its dual set system. The following theorem states a necessary and sufficient condition on the dual set system to form a construction of CBCs.
\begin{theorem}[\cite{PSW09}]
\label{hallc} The set system $(V,{\cal C})$ is an $(n,N,k,m)$-CBC if and only if its dual set system $(X,{\cal B})$ satisfies the following {\em Hall's condition}:
\\\vbox{for all $h \in [k]$, and any $h$ distinct blocks $B_{i_1},B_{i_2},\dots,B_{i_h}\in{\cal B}$, $|\cup_{j=1}^h B_{i_j}|\geq h$.}
\end{theorem}
The Hall's condition was generalized in several ways, see e.g.~\cite{AK1990,M1967,MP1966}. For example, in \cite{BT12}, the authors explored the value of $N(n,k,m,t)$ for $t>1$ with a generalization of the Hall's condition, that is, the set system $(V,{\cal C})$ is an $(n,N,k,m,t)$-CBC if and only if its dual set system satisfies the {\em $(k,t)$-Hall's condition: for all $h\in [k]$, and any $h$ distinct blocks $B_{i_1},B_{i_2},\dots,B_{i_h}\in{\cal B}$, $|\cup_{j=1}^h B_{i_j}|\geq h/t$.} In this paper, we present another generalization of the Hall's condition, named the {\em multiset Hall's condition}, and provide a necessary and sufficient condition for the construction of MCBCs.
\begin{theorem}
\label{mhallc}
The set system $(V,{\cal C})$ is an $(n,N,k,m;r)$-MCBC if and only if its dual set system $(X,{\cal B})$ satisfies the following {\em multiset Hall's condition}:
\\\vbox{for all $h\in\left[\lceil\frac{k}{r}\rceil\right]$, and any $h$ distinct blocks $B_{i_1},B_{i_2},\dots,B_{i_h}\in{\cal B}$, $|\cup_{j=1}^h B_{i_j}|\geq \min\{hr,k\}$.}
\end{theorem}
\begin{proof}
($\Rightarrow$) Assume that $(V,{\cal C})$ is an $(n,N,k,m;r)$-MCBC, and let $i_1,i_2,\dots,i_h\in V$ for some $h\in\left[\lceil\frac{k}{r}\rceil\right]$ be the indices of some $h$ different items. Then, the set $\cup_{j\in[h]} B_{i_j}$ corresponds to the indices of all the servers that contain these items.
If $h\leq \lfloor\frac{k}{r}\rfloor$, let us consider the multiset request $\{i_1,\ldots,i_1,i_2,\ldots,i_2,\ldots, i_h,\ldots,i_h\}$ where each of the $h$ elements is requested $r$ times. Since it is possible to read from each server at most one item, the number of servers that contain these $h$ items has to be at least $hr$, that is $|\cup_{j\in[h]} B_{i_j}|\geq hr$. Similarly, if $h=\lceil\frac{k}{r}\rceil$, then we need $k$ servers for the multiset request of size $k$ on $i_1,i_2,\dots,i_h$ where each $i_j$, for $j\in[h]$, is requested at most $r$ times, and so $|\cup_{j\in[h]} B_{i_j}|\geq k$. Together we conclude that $|\cup_{j=1}^h B_{i_j}|\geq \min\{hr,k\}$.
($\Leftarrow$) We construct a new set system $(U,{\cal F})$ with $U=[rn]$ and ${\cal F}=\{F_1,F_2,\dots,F_m\}$ where for $i\in [m]$, $F_i=\{c+jn:c\in C_i,j\in[0,r-1]\}$. Let $U^{(\ell)}=\{\ell+jn:j\in[0,r-1]\}$ for $\ell\in[n]$. We first show the following claim.
\begin{claim}\label{cl1}
$(V,{\cal C})$ is an $(n,N,k,m;r)$-MCBC if and only if $(U,{\cal F})$ is an $(rn,rN,k,m)$-CBC.
\end{claim}
\begin{proof}
Suppose that $(V,{\cal C})$ is an $(n,N,k,m;r)$-MCBC. For any request $P=\{i_1,i_2,\dots,i_k\}\subseteq[rn]$ of $(U,{\cal F})$, let $r_\ell$, for $\ell\in[n]$, be the number of elements requested from the set $U^{(\ell)}$, and note that $0\leq r_\ell\leq r$ and $\sum_{\ell=1}^nr_\ell=k$. Consider the multiset request $Q$ of $(V,{\cal C})$ where each $\ell\in [n]$ appears $r_\ell$ times. Since $(V,{\cal C})$ is an $(n,N,k,m;r)$-MCBC, $Q$ can be read by choosing subsets $D_j\subseteq C_j$, $|D_j|\leq 1$ for $j\in[m]$. Then $P$ can also be read from the set of servers $\{F_j:j\in[m],|D_j|=1\}$. Therefore, $(U,{\cal F})$ is an $(rn,rN,k,m)$-CBC.\footnote{We notice that this direction is not needed in the proof. But we still prove it here because we will use it in Lemma~\ref{recursive_bound} below.}
The reverse is similar. Suppose that $(U,{\cal F})$ is an $(rn,rN,k,m)$-CBC. For any multiset request $Q$ of $(V,{\cal C})$ where each $\ell\in [n]$ appears $r_\ell$ times, consider the request $P$ of $(U,{\cal F})$ which contain any $r_\ell$ distinct elements in $U^{(\ell)}$. Since $(U,{\cal F})$ is an $(rn,rN,k,m)$-CBC, $P$ can be read by taking $D_j\subseteq F_j$, $|D_j|\leq 1$ for $j\in[m]$. Then $Q$ can be read from the servers $\{C_j:j\in[m],|D_j|=1\}$. Therefore, $(V,{\cal C})$ is an $(n,N,k,m;r)$-MCBC.
\end{proof}
Let $(X =[m], {\cal G} =\{G_1,\ldots,G_{rn}\})$ be the dual set system of $(U,{\cal F})$, so $G_i$, for $i\in[rn]$, is the set of servers that contain the $i$-th item in $(U,{\cal F})$. We show that $(X,{\cal G})$ satisfies the Hall's condition. For any $i_1,i_2,\dots,i_h\in[rn]$, $h\in [k]$, let $r_\ell$, for $\ell\in[n]$, denote the number of indices among $i_1,i_2,\dots,i_h$ that belong to $U^{(\ell)}$. Then $\left|\cup_{j=1}^hG_{i_j}\right|=\left|\cup_{\ell:r_\ell\neq 0}B_\ell\right|$.
Let $a=|\{\ell:r_\ell\neq 0\}|$. By the multiset Hall's condition, when $a\leq \lceil\frac{k}{r}\rceil$, $\left|\cup_{\ell:r_\ell\neq 0}B_\ell\right|\geq \min\{ar,k\}$; when $\lceil\frac{k}{r}\rceil<a\leq k$, $$\left|\bigcup_{\ell:r_\ell\neq 0}B_\ell\right|\geq \min\{r\left\lceil\frac{k}{r}\right\rceil,k\}\geq k=\min\{ar,k\}.$$
Since $h=\sum_{\ell:r_\ell\neq 0}r_\ell\leq ar$ and $h\leq k$, we always have $|\cup_{j=1}^h G_{i_j}|\geq h$ for any $h\in[k]$, that is $(X,{\cal G})$ satisfies the Hall's condition. Hence, $(U,{\cal F})$ is an $(rn,rN,k,m)$-CBC by Theorem~\ref{hallc}, and by Claim~\ref{cl1} $(V,{\cal C})$ is an $(n,N,k,m;r)$-MCBC.
\end{proof}
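The expansion $(U,{\cal F})$ used in the proof of Claim~\ref{cl1} can be illustrated on the $(5,15,5,5,1;2)$-MCBC of the first example. The following Python sketch (an illustration under our own naming, not part of the formal development) builds $(U,{\cal F})$, forms its dual blocks, and verifies the classical Hall's condition of the resulting $(10,30,5,5)$-CBC.

```python
from itertools import combinations

# (V, C): servers of the (5,15,5,5,1;2)-MCBC example, n = 5, r = 2.
C = [{1, 3, 5}, {1, 4, 5}, {2, 3, 5}, {2, 4, 5}, {3, 4, 5}]
n, k, r = 5, 5, 2

# Expanded set system (U, F) from the proof: U = [rn],
# F_i = {c + j*n : c in C_i, j in 0..r-1}.
F = [{c + j * n for c in Ci for j in range(r)} for Ci in C]

# Dual blocks of (U, F): G_i = set of servers containing item i of U.
G = [{j + 1 for j, Fj in enumerate(F) if i in Fj} for i in range(1, r * n + 1)]

# Classical Hall's condition for an (rn, rN, k, m)-CBC:
# any h <= k distinct dual blocks have a union of size at least h.
hall = all(len(set().union(*sub)) >= h
           for h in range(1, k + 1)
           for sub in combinations(G, h))
print(hall)
```

As Claim~\ref{cl1} predicts, the expanded system passes the classical Hall's condition.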
Theorem~\ref{hallc} is a special case of Theorem~\ref{mhallc} for $r=1$. In the following, when constructing an MCBC, we always construct its dual set system $(X,{\cal B})$, and check if it satisfies the multiset Hall's condition from Theorem~\ref{mhallc}. By adding an asterisk, we let $(X,{\cal B})^\ast$ denote its dual set system $(V,{\cal C})$. The following properties are obvious.
\begin{remark}\label{basic}
\begin{enumerate}
$~$
\item[(i)] If there exists some $h_0<\lceil\frac{k}{r}\rceil$ such that for any $h_0$ blocks $B_{i_1},B_{i_2},\dots,B_{i_{h_0}}$, $|\cup_{j=1}^{h_0} B_{i_j}|\geq k$, then for any $h$ such that $h_0<h\leq \lceil\frac{k}{r}\rceil$, the multiset Hall's condition is also satisfied.
\item[(ii)] An $(n,N,k,m;r)$-MCBC is also an $(n,N,k',m;r)$-MCBC for any $k'\leq k$.
\end{enumerate}
\end{remark}
\begin{example}
By checking the multiset Hall's condition, it is possible to verify that Example~\ref{e.ap4} gives a construction of $(20,80,k,16;r)$-MCBC for any pair $(k,r)\in\{(16,1),(11,2),(10,3),(7,4)\}$. In particular, as will be shown in Construction~\ref{cons.ap} in the sequel, the code is optimal when $(k,r)\in\{(16,1),(7,4)\}$.
\end{example}
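To illustrate, the following Python sketch checks the multiset Hall's condition by brute force on the dual set system of Example~\ref{e.ap4} for the nontrivial multiset pairs above (the pair $(16,1)$ is the classical Hall's condition and is omitted here only for brevity). The function name is our own.

```python
from itertools import combinations
from math import ceil

# Dual blocks of the affine-plane code of Example (e.ap4): B_i is the
# set of servers in [16] storing item i.
B = [{1, 5, 9, 13}, {2, 6, 10, 14}, {3, 7, 11, 15}, {4, 8, 12, 16},
     {1, 6, 11, 16}, {2, 5, 12, 15}, {3, 8, 9, 14}, {4, 7, 10, 13},
     {1, 8, 10, 15}, {2, 7, 9, 16}, {3, 6, 12, 13}, {4, 5, 11, 14},
     {1, 7, 12, 14}, {2, 8, 11, 13}, {3, 5, 10, 16}, {4, 6, 9, 15},
     {1, 2, 3, 4}, {5, 6, 7, 8}, {9, 10, 11, 12}, {13, 14, 15, 16}]

def multiset_hall(blocks, k, r):
    """For every h <= ceil(k/r), any h distinct blocks must cover at
    least min(h*r, k) servers (Theorem mhallc)."""
    return all(len(set().union(*sub)) >= min(h * r, k)
               for h in range(1, ceil(k / r) + 1)
               for sub in combinations(blocks, h))

results = {(k, r): multiset_hall(B, k, r) for k, r in [(11, 2), (10, 3), (7, 4)]}
print(results)
```

All three checks succeed, matching the claim of the example.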
In the following sections, we will give several bounds and constructions of MCBCs.
\section{Bounds of MCBCs}\label{sec:bound}
In this section, we give several bounds of MCBCs, which provide the results stated in Theorem~\ref{th:bounds}.
\begin{lemma}
\label{trivial_bound}\begin{itemize}
$~$
\item[(i)] $N(n,k,m;r)\geq rn$.
\item[(ii)] $N(n,k,m;r)\geq N(n,k,m;i)$ for $i\in [r-1]$.
\item[(iii)] $N(n,k,m;k)=kn$.
\end{itemize}
\end{lemma}
\begin{proof}
(i) This inequality holds since each item has to be stored in at least $r$ servers. (ii) This inequality holds from the definition of MCBCs. (iii) By (i), $N(n,k,m;k)\geq kn$. The trivial construction where each item is stored in arbitrary $k$ servers gives an optimal code construction.
\end{proof}
\begin{lemma}
\label{recursive_bound}
\begin{itemize}
$~$
\item[(i)] $\frac{1}{r}N(nr,k,m)\leq N(n,k,m;r)\leq N(rn,k,m)$.
\item[(ii)] $N(n,k,m;r)\leq rN(n,\left\lceil\frac{k}{r}\right\rceil,\left\lfloor\frac{m}{r}\right\rfloor)$.
\end{itemize}
\end{lemma}
\begin{proof}
(i) From the proof of Claim~\ref{cl1} in Theorem~\ref{mhallc}, we can see that if there exists an $(n,N,k,m;r)$-MCBC, then there exists an $(rn,rN,k,m)$-CBC, and therefore $N(nr,k,m) \leq r N(n,k,m;r)$.
Assume that there exists an $(rn,N,k,m)$-CBC given by the set system $(U=[rn], {\cal F}=\{F_1,F_2,\dots,F_m\})$. We construct a new set system $(V=[n],{\cal C}=\{C_1,\ldots,C_m\})$ as follows.
The $i$-th server contains the items given by the set $C_i = \{((\ell-1) \bmod n)+1 : \ell \in F_i\}$.
That is, each item in the set $U^{(\ell)}=\{\ell+jn:j\in[0,r-1]\}$ in each server is replaced with $\ell$ for $\ell\in[n]$ (without repetitions). The new set system $(V,{\cal C})$ defines an $(n,N',k',m;r')$-MCBC with storage $N'\leq N$. To complete this proof we will show that $k'=k$ and $r'=r$.
Let $Q$ be a multiset request for $(V,{\cal C})$ where the $\ell$-th element, $\ell\in [n]$, is requested $r_\ell$ times, so $0\leq r_\ell\leq r$ and $\sum_{\ell=1}^nr_\ell=k$. Consider the request $P$ of $(U,{\cal F})$ which contains any $r_\ell$ distinct elements from $U^{(\ell)}$. Since $(U,{\cal F})$ is an $(rn,N,k,m)$-CBC, $P$ can be read by taking subsets $D_j\subseteq F_j$, $|D_j|\leq 1$ for $j\in[m]$. Then $Q$ can be read from the servers $\{C_j:j\in[m],|D_j|=1\}$. Hence, $(V,{\cal C})$ is an $(n,N',k,m;r)$-MCBC with $N'\leq N$, and $N(n,k,m;r)\leq N(rn,k,m)$.
(ii) Assume that there exists an $(n,N,\left\lceil\frac{k}{r}\right\rceil,\left\lfloor\frac{m}{r}\right\rfloor)$-CBC given by the set system $(V=[n], {\cal F}=\{F_1,F_2,\ldots,F_{\lfloor\frac{m}{r}\rfloor}\})$. We construct a new code by the following set system $(V=[n], {\cal C}=\{C_1,C_2,\dots,C_m\})$, such that $C_{i+j\lfloor\frac{m}{r}\rfloor}=F_i$ for any $i\in\left[\left\lfloor\frac{m}{r}\right\rfloor\right]$ and $j\in[0,r-1]$, and $C_\ell=\emptyset$ for any $r\lfloor\frac{m}{r}\rfloor+1 \leq \ell \leq m$. That is, each $F_i$ for $i\in\left[\left\lfloor\frac{m}{r}\right\rfloor\right]$ is repeated $r$ times.
Assume that the dual set system of $(V,{\cal F})$ is $(Y=\left[\left\lfloor\frac{m}{r}\right\rfloor\right],{\cal G}=\{G_1,\dots,G_n\})$, and the dual set system of $(V,{\cal C})$ is $(X=[m],{\cal B}=\{B_1,\dots,B_n\})$. Then $|B_i|=r|G_i|$ for $i\in[n]$. Since $(V,{\cal F})$ is an $(n,N,\left\lceil\frac{k}{r}\right\rceil,\left\lfloor\frac{m}{r}\right\rfloor)$-CBC, for any $1\leq h\leq \left\lceil\frac{k}{r}\right\rceil$, and distinct $i_1,\ldots,i_h\in [n]$, $|\cup_{j=1}^h G_{i_j}|\geq h$ by Theorem~\ref{hallc}. Then for any $1\leq h\leq \left\lceil\frac{k}{r}\right\rceil$, $|\cup_{j=1}^h B_{i_j}|\geq hr\geq \min\{hr,k\}$. Therefore, by Theorem~\ref{mhallc}, $(V,{\cal C})$ is an $(n,rN,k,m;r)$-MCBC, and $N(n,k,m;r)\leq rN(n,\left\lceil\frac{k}{r}\right\rceil,\left\lfloor\frac{m}{r}\right\rfloor)$.
\end{proof}
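The server-repetition argument of Lemma~\ref{recursive_bound} (ii) can be sketched on a toy instance (our own choice of parameters): starting from a CBC with $k'=2$ on $m'=3$ servers, repeating every server $r=2$ times yields an MCBC with $k=4$ and $r=2$.

```python
from itertools import combinations
from math import ceil

# Toy instance: r = 2, k = 4, starting from an (n, N, ceil(k/r),
# floor(m/r))-CBC in which each of n = 3 items sits on its own server.
r, k = 2, 4
G = [{1}, {2}, {3}]            # dual blocks of the CBC: item i on server i
# Repeating each server r times: item i is now also on server i + j*m'.
B = [{g + j * len(G) for g in Gi for j in range(r)} for Gi in G]

def multiset_hall(blocks, k, r):
    """Brute-force check of the multiset Hall's condition."""
    return all(len(set().union(*sub)) >= min(h * r, k)
               for h in range(1, ceil(k / r) + 1)
               for sub in combinations(blocks, h))

ok = multiset_hall(B, k, r)
print(ok)
```

Here $B=[\{1,4\},\{2,5\},\{3,6\}]$, and the resulting system passes the multiset Hall's condition, as the lemma guarantees.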
Let $(V,{\cal C})$ be a set system of an $(n,N,k,m;r)$-MCBC and let $(X,{\cal B})$ be its dual set system. For $i\geq 0$, we denote by $A_i$ the number of subsets in ${\cal B}$ of size $i$. Note that for $i<r$, $A_i=0$ since every item is contained in at least $r$ different servers.
As pointed out in \cite{PSW09}, we may assume $A_i=0$ for $i\geq k+1$, since any block of size larger than $k$ can be reduced to $k$ points while the multiset Hall's condition is still satisfied. The following bound is a generalization of the results in \cite{BRR12,BT11A,PSW09}.
\begin{lemma}\label{lem:bound}
\label{b.sum} If $(X,{\cal B})^\ast$ is an $(n,N,k,m;r)$-MCBC with $r\leq k-1$, and $A_i$ for $i\in[k-1]$ is defined as above, then $$\sum_{i=r}^{k-1}{m-i\choose k-1-i}A_i\leq \left\lfloor\frac{k-1}{r}\right\rfloor{m\choose k-1}.$$
\end{lemma}
\begin{proof}
Let $M_{k-1}$ be the ${m\choose k-1}\times n$ matrix, whose rows are labeled by all the $(k-1)$-subsets of $X$, and the columns are labeled by the blocks in ${\cal B}$ that contain less than $k$ points. The $(i,j)$-th entry of $M_{k-1}$ is $1$ if the $j$-th block $B_j$ is contained in the $i$-th $(k-1)$-subset of $X$, and otherwise it is $0$.
Each row in $M_{k-1}$ has at most $\left\lfloor\frac{k-1}{r}\right\rfloor$ ones. In order to verify this property, assume in the contrary that there exist $\left\lfloor\frac{k-1}{r}\right\rfloor+1$ blocks, and without loss of generality let them be the blocks $B_1,B_2,\dots,B_{\lfloor\frac{k-1}{r}\rfloor+1}$, which are all subsets of the same $(k-1)$-subset. Therefore, $|\cup_{i=1}^{\lfloor\frac{k-1}{r}\rfloor+1} B_i|\leq k-1$, and the multiset Hall's condition is not satisfied, since $\lfloor\frac{k-1}{r}\rfloor+1 = \lceil k/r \rceil$ and $\min\{ (\lfloor\frac{k-1}{r}\rfloor+1)r,k \} = k$. Every column which corresponds to a block of size $i<k$ has exactly ${m-i\choose k-1-i}$ ones. Therefore, by counting the number of ones in $M_{k-1}$ by rows and columns separately, we get that $\sum_{i=r}^{k-1}{m-i\choose k-1-i}A_i\leq\left\lfloor\frac{k-1}{r}\right\rfloor{m\choose k-1}$.
\end{proof}
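As a numerical illustration of Lemma~\ref{b.sum} (parameters of our choosing), consider the affine-plane MCBC of Example~\ref{e.ap4} with $(k,r)=(7,4)$: all $n=20$ dual blocks have size $4$, so $A_4=20$ and $A_i=0$ otherwise.

```python
from math import comb

# Counting bound of the lemma on the affine-plane MCBC, (k, r) = (7, 4).
m, k, r = 16, 7, 4
A = {4: 20}                                 # block-size distribution A_i
lhs = sum(comb(m - i, k - 1 - i) * a for i, a in A.items())
rhs = ((k - 1) // r) * comb(m, k - 1)
print(lhs, rhs, lhs <= rhs)
```

Here the left-hand side is $20\binom{12}{2}=1320$, well below the right-hand side $\binom{16}{6}=8008$, so the bound is satisfied with room to spare.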
Based on Lemma~\ref{lem:bound}, we derive the next theorem.
\begin{theorem}
\label{bound} Let $r\leq k-1$. For any $c\in[r,k-1]$,
$$N(n,k,m;r)\geq nc-\left\lfloor\frac{k-c}{m-k+1}\left[\frac{\left\lfloor\frac{k-1}{r}\right\rfloor{m\choose k-1}}{{m-c\choose k-1-c}}-n\right]\right\rfloor.$$
\end{theorem}
\begin{proof}
The proof is similar to the one given in Lemma 3.2 in~\cite{BRR12}, and hence we omit it here.
\end{proof}
\section{Constructions of MCBCs}\label{sec:construction}
In this section we present several constructions of MCBCs. Constructions~\ref{c.largen} and \ref{cons2} are generalizations of the equivalent ones in \cite{BRR12,BT11A,PSW09} which determine the value of $N(n,k,m)$ in Theorem~\ref{CBC} (ii) and (iii). Construction~\ref{cons.m=k} is a generalization of that in \cite{PSW09} which determines the value of $N(n,k,k)$ in Theorem~\ref{CBC} (i).
\subsection{A Construction by Replication}
Our first construction uses simple replication which is a generalization of the one in \cite{BRR12,BT11A,PSW09}.
\begin{construction}
\label{c.largen}
Let $n,k,m,r$ be positive integers such that $n\geq \left\lfloor\frac{k-1}{r}\right\rfloor{m\choose k-1}$ with $r<k$. We construct an $(n,N,k,m;r)$-MCBC with $N=kn-\left\lfloor\frac{k-1}{r}\right\rfloor{m\choose k-1}$, by explicitly constructing its dual set system $(X=[m],{\cal B}=\{B_1,\ldots,B_n\})$ as follows:
\begin{enumerate}
\item The first $\left\lfloor\frac{k-1}{r}\right\rfloor{m\choose k-1}$ blocks of ${\cal B}$ consist of $\left\lfloor\frac{k-1}{r}\right\rfloor$ copies of all different $(k-1)$-subsets of $[m]$.
\item Each remaining block of ${\cal B}$ is taken to be any $k$-subset of $[m]$.
\end{enumerate}
Thus, the value of $N$ is given by
$$N=\left\lfloor\frac{k-1}{r}\right\rfloor{m\choose k-1} (k-1) + \left(n- \left\lfloor\frac{k-1}{r}\right\rfloor{m\choose k-1}\right)k = kn - \left\lfloor\frac{k-1}{r}\right\rfloor{m\choose k-1}.$$
\end{construction}
The correctness of this construction is proved in the next theorem.
\begin{theorem}
The code $(X,{\cal B})^*$ from Construction~\ref{c.largen} is an $(n,N,k,m;r)$-MCBC with $n\geq \left\lfloor\frac{k-1}{r}\right\rfloor{m\choose k-1}$, $r<k$ and $N=kn-\left\lfloor\frac{k-1}{r}\right\rfloor{m\choose k-1}$.
\end{theorem}
\begin{proof}
We only need to check that $(X,{\cal B})$ satisfies the multiset Hall's condition. For $1\leq h\leq \lceil\frac{k}{r}\rceil$, let $B_{i_1},B_{i_2},\dots,B_{i_h}\in{\cal B}$ be some $h$ different blocks. If there exists a block of size $k$ or there exist two blocks which are distinct $(k-1)$-subsets, then $|\cup_{j=1}^h B_{i_j}|\geq k$; otherwise, all $h$ blocks are copies of the same $(k-1)$-subset, so $h\leq \left\lfloor\frac{k-1}{r}\right\rfloor$ by construction, and $|\cup_{j=1}^h B_{i_j}|= k-1\geq hr=\min\{hr,k\}$.
\end{proof}
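A small instance of Construction~\ref{c.largen} can be generated and verified in a few lines of Python; the parameters $m=4$, $k=3$, $r=2$ and all names below are our own choices.

```python
from itertools import combinations
from math import comb, ceil

# Small instance of Construction (c.largen): m = 4, k = 3, r = 2.
m, k, r = 4, 3, 2
copies = (k - 1) // r                      # floor((k-1)/r) = 1
base = [set(s) for s in combinations(range(1, m + 1), k - 1)
        for _ in range(copies)]            # all (k-1)-subsets, `copies` times
blocks = base + [set(range(1, k + 1))]     # one extra k-subset block
n = len(blocks)

N = sum(len(b) for b in blocks)
assert N == k * n - copies * comb(m, k - 1)   # formula of the construction

def multiset_hall(blocks, k, r):
    """Brute-force check of the multiset Hall's condition."""
    return all(len(set().union(*sub)) >= min(h * r, k)
               for h in range(1, ceil(k / r) + 1)
               for sub in combinations(blocks, h))

ok = multiset_hall(blocks, k, r)
print(n, N, ok)
```

The instance has $n=7$ blocks with $N=kn-\lfloor\frac{k-1}{r}\rfloor\binom{m}{k-1}=15$, and the multiset Hall's condition holds.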
Before we show that this construction is optimal, let us recall a useful lemma from~\cite{BRR12,BT11A}.
\begin{lemma}[\cite{BRR12,BT11A}]
\label{ineq} Let $1\leq k\leq m$ and $0\leq i\leq k-1$. Then ${m-i\choose k-1-i}-1\geq (m-k+1)(k-1-i)$.
\end{lemma}
We can now deduce that Construction~\ref{c.largen} is optimal.
\begin{cor}
\label{b.largen} For any $n\geq \left\lfloor\frac{k-1}{r}\right\rfloor{m\choose k-1}$, $N(n,k,m;r)=kn-\left\lfloor\frac{k-1}{r}\right\rfloor{m\choose k-1}$.
\end{cor}
\begin{proof}
For any $(n,N,k,m;r)$-MCBC, let $A_i$ for $i\in[k]$ be the number of blocks in the dual set system of size $i$. By Lemma~\ref{ineq}, for $i\leq k-1$, ${m-i\choose k-1-i}\geq (m-k+1)(k-1-i)+1\geq k-i$, then $$\sum_{i=r}^{k-1}(k-i)A_i\leq \sum_{i=r}^{k-1}{m-i\choose k-1-i}A_i\leq \left\lfloor\frac{k-1}{r}\right\rfloor{m\choose k-1},$$
and the last inequality holds according to Lemma~\ref{b.sum}. Therefore, we get that
\begin{align*}
N & =\sum_{i=r}^k iA_i = \sum_{i=r}^k (k-(k-i))A_i= \sum_{i=r}^k kA_i - \sum_{i=r}^k (k-i)A_i \\
& = kn - \sum_{i=r}^{k-1} (k-i)A_i\geq kn-\left\lfloor\frac{k-1}{r}\right\rfloor{m\choose k-1}.
\end{align*}
Hence, we conclude that $N(n,k,m;r)=kn-\left\lfloor\frac{k-1}{r}\right\rfloor{m\choose k-1}$ when $n\geq \left\lfloor\frac{k-1}{r}\right\rfloor{m\choose k-1}$, since the codes from Construction~\ref{c.largen} achieve this bound.
\end{proof}
As a special case when $r=k-1$ we get the following corollary.
\begin{cor}
\label{r=k-1} $N(n,k,m;k-1)=\begin{cases}
kn-{m\choose k-1} & \text{if $n\geq {m\choose k-1}$}, \\
(k-1)n & \text{if $n< {m\choose k-1}$}. \\
\end{cases}$
\end{cor}
\begin{proof}
For $n\geq {m\choose k-1}$, according to Corollary~\ref{b.largen} for $r=k-1$, we get that $N(n,k,m;k-1) = kn-{m\choose k-1}$. For $n<{m\choose k-1}$, we slightly modify the code from Construction~\ref{c.largen} such that the $n$ blocks in ${\cal B}$ are distinct $(k-1)$-subsets of $[m]$. It is readily verified that the multiset Hall's condition holds for this modified construction and thus it provides an $(n,N=n(k-1),k,m;k-1)$-MCBC. Finally, according to Lemma~\ref{trivial_bound} (i), this construction is optimal.
\end{proof}
\subsection{Constructions Based on Constant Weight Codes}
Next, we give constructions based upon constant weight codes. An $(n,d,w)$-code denotes a binary constant weight code of length $n$, weight $w$ and minimum Hamming distance $d$, and $A(n,d,w)$ denotes the maximum number of codewords of an $(n,d,w)$-code.
\begin{construction}
\label{cons.smalln} Let $X=[m]$ and ${\cal C}$ be an $(m,2(k-w),w)$-code with $n$ codewords for some $w\in[r,k-1]$. Let ${\cal B}=\{B_1,\ldots,B_n\}$ be the support sets of all the codewords in ${\cal C}$.
\end{construction}
\begin{theorem}
The code $(X,{\cal B})^*$ from Construction~\ref{cons.smalln} is an $(n,wn,k,m;r)$-MCBC.
\end{theorem}
The correctness of this construction is proved next.
\begin{proof}
We only need to check that $(X,{\cal B})$ satisfies the multiset Hall's condition. It is satisfied since the size of each block in ${\cal B}$ is $w\geq r$, and since the minimum distance of ${\cal C}$ is $2(k-w)$, any two distinct blocks in ${\cal B}$ intersect in at most $2w-k$ points, so the size of their union is at least $k$.
\end{proof}
If we take $w=r$, we get the following family of optimal codes.
\begin{cor}
\label{nsmall} For any $n\leq A(m,2(k-r),r)$, $N(n,k,m;r)=rn$.
\end{cor}
\begin{proof}
Taking $w=r$ in Construction~\ref{cons.smalln} with an $(m,2(k-r),r)$-code with $A(m,2(k-r),r)$ codewords (and discarding codewords if necessary), we obtain an $(n,rn,k,m;r)$-MCBC for any $n\leq A(m,2(k-r),r)$, which is optimal by Lemma~\ref{trivial_bound} (i).
\end{proof}
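For illustration (parameters and names our own), take $w=r$ with $m=6$, $k=4$, $r=2$: the three disjoint pairs $\{1,2\},\{3,4\},\{5,6\}$ are the supports of a $(6,4,2)$-code, yielding a $(3,6,4,6;2)$-MCBC with $N=rn$.

```python
from itertools import combinations
from math import ceil

# Construction (cons.smalln) with w = r: m = 6, k = 4, r = 2.
m, k, r = 6, 4, 2
B = [{1, 2}, {3, 4}, {5, 6}]   # supports of a (6, 4, 2) constant weight code

# Hamming distance of two weight-w words with supports S, T is
# 2*w - 2*|S & T|; it must be at least 2*(k - r).
assert all(2 * r - 2 * len(S & T) >= 2 * (k - r)
           for S, T in combinations(B, 2))

def multiset_hall(blocks, k, r):
    """Brute-force check of the multiset Hall's condition."""
    return all(len(set().union(*sub)) >= min(h * r, k)
               for h in range(1, ceil(k / r) + 1)
               for sub in combinations(blocks, h))

ok = multiset_hall(B, k, r)
total = sum(len(b) for b in B)
print(ok, total == r * len(B))
```

The code stores $N=rn=6$ items, matching the optimality statement of Corollary~\ref{nsmall}.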
Constant weight codes are used in \cite{BRR12} to prove Theorem~\ref{CBC} (iv). Now, we give a similar construction for MCBCs.
\begin{construction}
\label{cons2} Let $X=[m]$, $r\leq k-2$. Let ${\cal C}$ be an $(m,4,k-2)$-code with $\alpha$ codewords, where $\alpha\leq A(m,4,k-2)$. First, let ${\cal B}_0$ be a set of $\left\lfloor\frac{k-1}{r}\right\rfloor{m\choose k-1}$ blocks, in which each $(k-1)$-subset of $[m]$ appears $\left\lfloor\frac{k-1}{r}\right\rfloor$ times. Let ${\cal S}$ consist of the support sets of the codewords in ${\cal C}$. Then, for each block in ${\cal S}$, add it to ${\cal B}_0$, and remove one copy of each of its $m-k+2$ supersets\footnote{For a block $S\in{\cal S}$ of size $k-2$, the {\em supersets} are the $(k-1)$-subsets of $[m]$ that contain $S$.} of size $k-1$ in ${\cal B}_0$. Let the resulting block set be ${\cal B}$.
\end{construction}
\begin{theorem}
The code $(X,{\cal B})^*$ from Construction~\ref{cons2} is an $(n,N,k,m;r)$-MCBC with $$n=\left\lfloor\frac{k-1}{r}\right\rfloor{m\choose k-1}-\alpha(m-k+1)\text{ and }N=n(k-1)-\alpha,$$
where $\alpha\leq A(m,4,k-2)$.
\end{theorem}
\begin{proof}
Since the code has minimum distance four, for any two blocks in ${\cal S}$, their supersets of size $k-1$ are different. Therefore, each $(k-1)$-subset of $[m]$ is removed at most once. During the process, we add $\alpha$ blocks and remove $\alpha(m-k+2)$ blocks. Hence, we get that $n=\left\lfloor\frac{k-1}{r}\right\rfloor{m\choose k-1}-\alpha(m-k+1)$. Finally, since only $\alpha$ of all $n$ blocks are of size $k-2$, we get that $N=n(k-1)-\alpha$.
Next we show that the multiset Hall's condition holds. For $1\leq h\leq \lceil\frac{k}{r}\rceil$, let $B_{i_1},B_{i_2},\dots,B_{i_h}\in{\cal B}$ be some $h$ different blocks. In case $h=1$, the size of each block in ${\cal B}$ is at least $k-2\geq r$ so the condition holds and thus we assume that $h\geq 2$.
If there exist two blocks $B_{i_a},B_{i_b}\in{\cal B}\setminus{\cal S}$ which are distinct $(k-1)$-subsets, then $|B_{i_a}\cup B_{i_b}|\geq k$; if there exist two blocks $B_{i_a},B_{i_b}\in {\cal S}$, then $|B_{i_a}\cup B_{i_b}|\geq k$ because the minimum Hamming distance of the code ${\cal C}$ is four.
Therefore, we only need to check the case where one block is from ${\cal S}$, and the other $h-1$ blocks are copies of the same $(k-1)$-subset from ${\cal B}\setminus{\cal S}$. Without loss of generality, assume $B_{i_1}\in{\cal S}$, and $B_{i_2},\dots,B_{i_h}\in{\cal B}\setminus{\cal S}$ are copies of the same $(k-1)$-subset. If $B_{i_1}$ is not a subset of $B_{i_2}$, then $|B_{i_1}\cup B_{i_2}|\geq k$; if $B_{i_1}$ is a subset of $B_{i_2}$, then by the construction $h\leq\left\lfloor\frac{k-1}{r}\right\rfloor$ and $|\cup_{j=1}^h B_{i_j}|=k-1\geq hr=\min\{hr,k\}$. Therefore, the multiset Hall's condition holds.
\end{proof}
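As a sanity check (not part of the original construction), the following Python sketch instantiates Construction~\ref{cons2} for the hypothetical small parameters $m=6$, $k=4$, $r=2$, using the $(6,4,2)$-code whose three codewords have pairwise-disjoint weight-2 supports (so $\alpha=3$), and verifies the claimed values of $n$ and $N$ together with the multiset Hall's condition by brute force.

```python
from itertools import combinations
from math import ceil, comb

# Hypothetical small instance: m = 6, k = 4, r = 2.
# An (m,4,k-2) = (6,4,2)-code: three codewords with pairwise-disjoint
# weight-2 supports (minimum Hamming distance 4), so alpha = 3.
m, k, r = 6, 4, 2
supports = [frozenset({1, 2}), frozenset({3, 4}), frozenset({5, 6})]
alpha = len(supports)

copies = (k - 1) // r  # floor((k-1)/r) copies of each (k-1)-subset
blocks = [frozenset(T) for T in combinations(range(1, m + 1), k - 1)
          for _ in range(copies)]
for S in supports:
    blocks.append(S)
    for x in set(range(1, m + 1)) - S:       # remove one copy of each
        blocks.remove(S | {x})               # (k-1)-superset of S

n, N = len(blocks), sum(len(B) for B in blocks)
assert n == copies * comb(m, k - 1) - alpha * (m - k + 1)  # n = 11
assert N == n * (k - 1) - alpha                            # N = 30

# brute-force check of the multiset Hall's condition
for h in range(1, ceil(k / r) + 1):
    for idx in combinations(range(n), h):
        assert len(frozenset().union(*(blocks[i] for i in idx))) >= min(h * r, k)
```

For these parameters the check confirms an $(11,30,4,6;2)$-MCBC, matching the theorem with $\alpha=3$.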
The following lower bound on $A(n,4,w)$ is known.
\begin{lemma}[\cite{GS1980}]
\label{cwcd=4} $A(n,4,w)\geq\frac{1}{n}{n\choose w}$.
\end{lemma}
Next, we apply Construction~\ref{cons2} to obtain a family of optimal codes.
\begin{cor}\label{cor:r=k-2}
For any $\left\lfloor\frac{k-1}{r}\right\rfloor{m\choose k-1}-(m-k+1)A(m,4,k-2)\leq n\leq \left\lfloor\frac{k-1}{r}\right\rfloor{m\choose k-1}$, $r\leq k-2$, $$N(n,k,m;r)=n(k-1)-\left\lfloor\frac{\left\lfloor\frac{k-1}{r}\right\rfloor{m\choose k-1}-n}{m-k+1}\right\rfloor.$$
\end{cor}
\begin{proof}
When $r\leq k-2$, taking $c=k-1$ in Theorem~\ref{bound}, we have
$N\geq n(k-1)-\left\lfloor\frac{\left\lfloor\frac{k-1}{r}\right\rfloor{m\choose k-1}-n}{m-k+1}\right\rfloor$.
For any positive integers $n,m,k,r$ such that $r+2\leq k\leq m$ and $$\left\lfloor\frac{k-1}{r}\right\rfloor{m\choose k-1}-(m-k+1)A(m,4,k-2)\leq n\leq \left\lfloor\frac{k-1}{r}\right\rfloor{m\choose k-1},$$ we have that $0\leq \left\lfloor\frac{\left\lfloor\frac{k-1}{r}\right\rfloor{m\choose k-1}-n}{m-k+1}\right\rfloor\leq A(m,4,k-2)$. Let $\alpha=\left\lfloor\frac{\left\lfloor\frac{k-1}{r}\right\rfloor{m\choose k-1}-n}{m-k+1}\right\rfloor$. By Construction~\ref{cons2}, there exists an $(n',N',k,m;r)$-MCBC with $$n'=\left\lfloor\frac{k-1}{r}\right\rfloor{m\choose k-1}-\alpha(m-k+1)\text{ and }N'=n'(k-1)-\alpha.$$ Removing any $n'-n$ blocks of size $k-1$ from its dual set system, we get an optimal $(n,N,k,m;r)$-MCBC with $N=N'-(k-1)(n'-n)=n(k-1)-\alpha$.
\end{proof}
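To illustrate the closed form in Corollary~\ref{cor:r=k-2}, the short sketch below (hypothetical parameters $m=6$, $k=4$, $r=2$, for which $A(6,4,2)=3$) tabulates $N(n,4,6;2)$ over the stated range of $n$.

```python
from math import comb

m, k, r, A = 6, 4, 2, 3           # hypothetical parameters; A(6,4,2) = 3
top = ((k - 1) // r) * comb(m, k - 1)        # upper end of the range: 20
lo = top - (m - k + 1) * A                   # lower end of the range: 11
N = {n: n * (k - 1) - (top - n) // (m - k + 1) for n in range(lo, top + 1)}
assert N[lo] == 30 and N[top] == 60          # endpoints of the table
```

The endpoint $n=11$ reproduces the instance obtained directly from Construction~\ref{cons2} for these parameters.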
\subsection{A Construction for $m=k$}
In the following, we give a construction of $(n,N,k,k;r)$-MCBC and determine the value of $N(n,k,k;r)$ for $1\leq r\leq k$.
\begin{construction}
\label{cons.m=k}
Let $m=k$, $X=[k]$, and ${\cal B}=\{B_1,\ldots,B_n\}$, and write $k=\alpha r+\beta$, where $\alpha\geq 1$ and $0\leq \beta\leq r-1$. The blocks are defined as follows.
\begin{enumerate}
\item[(i)] When $\beta=0$, for any $n\geq \alpha$, let $B_i=[(i-1)r+1,(i-1)r+r]$ for $i\in [\alpha]$, and $B_i=[k]$ for any $i\in[\alpha+1,n]$.
\item[(ii)] When $\beta>0$, for any $n\geq \alpha+r$, let $B_i=[(i-1)r+1,(i-1)r+r]$ for $i\in [\alpha]$, $B_i=[k]\setminus\{i-\alpha,i-\alpha+r,i-\alpha+2r,\dots,i-\alpha+(\alpha-1)r\}$ for $i\in[\alpha+1,\alpha+r]$, and $B_i=[k]$ for any $i\in[\alpha+r+1,n]$.
\end{enumerate}
\end{construction}
\begin{theorem}
The code $(X,{\cal B})^*$ from Construction~\ref{cons.m=k} is an $(n,N,k,k;r)$-MCBC with $N=kn-\left\lfloor\frac{k-1}{r}\right\rfloor k$.
\end{theorem}
\begin{proof}
We show that in both cases, i.e. $\beta>0$ and $\beta=0$, the multiset Hall's condition holds.\\
(i) $\beta=0$. For any $h\geq 1$ different blocks $B_{i_1},B_{i_2},\dots,B_{i_h}\in{\cal B}$, if there exists some block $B_{i_j}=[k]$, then $|\cup_{j=1}^h B_{i_j}|=k\geq \min\{hr,k\}$; otherwise, $i_j\in[\alpha]$ for all $j\in[h]$, and then $|\cup_{j=1}^h B_{i_j}|=hr$ since by the construction the blocks $B_1,\ldots,B_\alpha$ are mutually disjoint. Thus, the multiset Hall's condition holds, and it is an $(n,N,k,k;r)$-MCBC with
$$N=\alpha r + (n-\alpha)k= kn-\frac{k}{r}(k-r)=kn-\left\lfloor\frac{k-1}{r}\right\rfloor k.$$
(ii) $\beta>0$. First, note that for any $i\in[\alpha+1,\alpha+r]$, $|B_i|=k-\alpha\geq r$. This holds since $k-\alpha-r=\alpha r+\beta-\alpha-r=(\alpha-1)(r-1)+(\beta-1)\geq 0$. For any $h$ different blocks $B_{i_1},B_{i_2},\dots,B_{i_h}\in {\cal B}$ with $h\geq 2$, if some $B_{i_j}=[k]$, or if $i_j\in[\alpha]$ for all $j\in[h]$, then the proof is similar to that of case (i). If there exist two blocks $B_{i_a},B_{i_b}$ such that $i_a,i_b\in[\alpha+1,\alpha+r]$, then $|B_{i_a}\cup B_{i_b}|=k$. Therefore, the remaining case to check is when only one block is from the set $\{B_i:i\in[\alpha+1,\alpha+r]\}$, and the other blocks are from the set $\{B_i:i\in[\alpha]\}$.
Without loss of generality assume that $i_1,i_2,\dots,i_{h-1}\in[\alpha]$ and $i_h\in[\alpha+1,\alpha+r]$. Since $|B_{i_h}|=k-\alpha$, when $1\leq h\leq \alpha-1$, by the construction we get that $|\cup_{j=1}^h B_{i_j}|\geq k-\alpha+h-1$. Since $$k-\alpha+h-1-hr=\alpha r+\beta-\alpha+h-1-hr=(\alpha-h)(r-1)+(\beta-1)\geq 0,$$ we conclude that $|\cup_{j=1}^h B_{i_j}|\geq hr$. If $h=\alpha$, by the construction we get that $|\cup_{j=1}^h B_{i_j}|=k-1=\alpha r+\beta-1\geq \alpha r$. Lastly, if $h=\alpha+1$, then $|\cup_{j=1}^h B_{i_j}|=k$. Therefore, the multiset Hall's condition is satisfied for any $1\leq h\leq \left\lceil\frac{k}{r}\right\rceil$, and the code is an $(n,N,k,k;r)$-MCBC with
$$N=kn-\alpha (k-r)-\alpha r=kn-\alpha k=kn-\left\lfloor\frac{k-1}{r}\right\rfloor k.$$
\end{proof}
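The case analysis above can be checked mechanically. The sketch below (a hypothetical instance with $k=7$, $r=3$, so $\alpha=2$, $\beta=1$, and $n=6$) builds the blocks of Construction~\ref{cons.m=k} and verifies both the total storage and the multiset Hall's condition.

```python
from itertools import combinations
from math import ceil

k, r, n = 7, 3, 6                 # hypothetical instance: alpha = 2, beta = 1
alpha, beta = divmod(k, r)
blocks = [set(range((i - 1) * r + 1, (i - 1) * r + r + 1))
          for i in range(1, alpha + 1)]
if beta > 0:                      # the r "punctured" blocks of size k - alpha
    for i in range(alpha + 1, alpha + r + 1):
        blocks.append(set(range(1, k + 1))
                      - {i - alpha + j * r for j in range(alpha)})
blocks += [set(range(1, k + 1))] * (n - len(blocks))  # full blocks

N = sum(len(B) for B in blocks)
assert N == k * n - ((k - 1) // r) * k       # N = 7*6 - 2*7 = 28
for h in range(1, ceil(k / r) + 1):          # multiset Hall's condition
    for idx in combinations(range(n), h):
        assert len(set().union(*(blocks[i] for i in idx))) >= min(h * r, k)
```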
The next corollary summarizes the construction and results in this section.
\begin{cor}
\label{k=m} $N(n,k,k;r)=kn-\left\lfloor\frac{k-1}{r}\right\rfloor k$ if either $r\mid k$ and $n\geq \frac{k}{r}$, or $r\nmid k$ and $n\geq \lfloor\frac{k}{r}\rfloor+r$.
\end{cor}
\begin{proof}
Taking $m=k$ in Lemma~\ref{b.sum}, we have that $\sum_{i=r}^{k-1}(k-i)A_i\leq \left\lfloor\frac{k-1}{r}\right\rfloor k$. Similarly to the proof of Corollary~\ref{b.largen}, we get that
\begin{align*}
N(n,k,k;r) & =\sum_{i=r}^k iA_i = \sum_{i=r}^k (k-(k-i))A_i= \sum_{i=r}^k kA_i - \sum_{i=r}^k (k-i)A_i \\
& =kn-\sum_{i=r}^{k-1}(k-i)A_i\geq kn-\left\lfloor\frac{k-1}{r}\right\rfloor k.
\end{align*}
Hence, we conclude that $N(n,k,k;r)=kn-\left\lfloor\frac{k-1}{r}\right\rfloor k$, since Construction~\ref{cons.m=k} gives codes that attain this bound.
\end{proof}
\subsection{A Construction from Steiner Systems}
In the following we construct a class of MCBCs based upon Steiner systems, which generalizes Example~\ref{e.ap4}.
A \emph{Steiner system} $S(2,\ell,m)$ is a set system $(X,{\cal B})$, where $X$ is a set of $m$ points, ${\cal B}$ is a collection of $\ell$-subsets (blocks) of $X$, such that each pair of points in $X$ occurs together in exactly one block of~${\cal B}$. By the well known Fisher's inequality \cite{MR2007}, for an $S(2,\ell,m)$ with $m>\ell\geq 2$, $|{\cal B}|\geq m$. For the existence of Steiner systems, we refer the reader to~\cite{CM2007}.
\begin{theorem}
\label{c.steiner} Let $(X,{\cal B})$ be an $S(2,\ell,m)$ with $m> \ell$. Then $(X,{\cal B})^*$ is a $(|{\cal B}|,\ell|{\cal B}|,k,m;r)$-MCBC for any $\lfloor\frac{\ell}{2}\rfloor+1\leq r\leq \ell$ and $k\leq (\ell-r+1)(2r-1)$.
\end{theorem}
\begin{proof}
By Remark~\ref{basic} (ii), we only need to check that $(X,{\cal B})^*$ is a $(|{\cal B}|,\ell|{\cal B}|,k,m;r)$-MCBC for any $\lfloor\frac{\ell}{2}\rfloor+1\leq r\leq \ell$ and $k=(\ell-r+1)(2r-1)$.
Let us first determine the number of points in the union of any $h$ blocks in ${\cal B}$. Since any two blocks intersect in at most one point, for any $i\in[2,h]$, if the first $i-1$ blocks are chosen, then the $i$-th block can contribute at least $\ell-(i-1)$ new points. Therefore, if $h\leq \ell+1\leq |{\cal B}|$, the union of any $h$ blocks contains at least
$$\ell+(\ell-1)+\cdots+(\ell-(h-1))=h\ell-{h\choose 2}$$ points.
Let us consider some $h$ blocks $B_{i_1},B_{i_2},\dots,B_{i_h}$ with
$$1\leq h\leq \left\lceil\frac{k}{r}\right\rceil=\left\lceil\frac{(\ell-r+1)(2r-1)}{r}\right\rceil=\left\lceil 2(\ell-r+1)+1-\frac{\ell+1}{r}\right\rceil\leq 2(\ell-r)+2,$$
where the last inequality holds since $r\leq \ell$. Note that $2(\ell-r)+2\leq \ell+1$ since $r\geq\lfloor\frac{\ell}{2}\rfloor+1$.
If $h\in[2(\ell-r)+1]$, then $r\leq \ell-\frac{h-1}{2}$, and
$$|\cup_{j=1}^h B_{i_j}|\geq h\ell-{h\choose 2}=h\left(\ell-\frac{h-1}{2}\right)\geq hr=\min\{hr,k\}.$$
If $h=2(\ell-r)+2$, then $|\cup_{j=1}^h B_{i_j}|\geq \ell h-{h\choose 2}=(\ell-r+1)(2r-1)=k$. Therefore, the multiset Hall's condition holds for any $1\leq h\leq \lceil\frac{k}{r}\rceil$.
\end{proof}
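For a concrete check of Theorem~\ref{c.steiner}, the sketch below uses the Fano plane $S(2,3,7)$ (here $\ell=3$, $m=7$) with one standard labelling of its seven lines, and verifies the multiset Hall's condition for every admissible $r$ with $k$ at its maximum value $(\ell-r+1)(2r-1)$.

```python
from itertools import combinations
from math import ceil

# Fano plane S(2,3,7) with a standard labelling of its seven lines
lines = [{1, 2, 3}, {1, 4, 5}, {1, 6, 7}, {2, 4, 6},
         {2, 5, 7}, {3, 4, 7}, {3, 5, 6}]
# any two lines meet in exactly one point
assert all(len(a & b) == 1 for a, b in combinations(lines, 2))

ell = 3
for r in range(ell // 2 + 1, ell + 1):       # r in {2, 3}
    k = (ell - r + 1) * (2 * r - 1)          # largest k covered by the theorem
    for h in range(1, ceil(k / r) + 1):
        for sub in combinations(lines, h):
            assert len(set().union(*sub)) >= min(h * r, k)
```

This confirms, for instance, a $(7,21,6,7;2)$-MCBC and a $(7,21,5,7;3)$-MCBC from the Fano plane.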
An \emph{affine plane} of order $q$ is an $S(2,q,q^2)$. It has $q^2$ points and $q^2+q$ blocks. It is well known that an affine plane exists for any prime power $q$ \cite{CM2007}. The next result, on CBCs based upon affine planes, was given in~\cite{SG13}.
\begin{theorem}[\cite{SG13}]\label{cons.ap}
Let $q$ be a prime power and $(X,{\cal B})$ be an affine plane of order $q$. Then $(X,{\cal B})^*$ is an optimal uniform $(q^2+q,q^3+q^2,q^2,q^2)$-CBC.
\end{theorem}
The code in Theorem~\ref{cons.ap} is also an optimal CBC, since for $k=m$ by Theorem~\ref{CBC} (i), $N(q^2+q,q^2,q^2)=q^3+q^2$. However, note that it is a different code from the optimal $(n,N,k,k)$-CBC in \cite{PSW09}, which is constructed as follows: Let $X=[k]$ and ${\cal B}=\{B_1,\ldots,B_n\}$, where $B_i=\{i\}$ for $i\in[k]$, and $B_i=[k]$ for $i\in[k+1,n]$. By Theorem~\ref{c.steiner}, we can see that the code in Theorem~\ref{cons.ap} is also a $(q^2+q,q^3+q^2,k,q^2;r)$-MCBC for different pairs of values of $k$ and $r$.
\begin{cor}
\label{c.ap} Let $q$ be a prime power. Then there exists a $(q^2+q,q^3+q^2,k,q^2;r)$-MCBC for any $\lfloor\frac{q}{2}\rfloor+1\leq r\leq q$ and $k\leq (q-r+1)(2r-1)$.
\end{cor}
When $r=q$, we also obtain an optimal $(q^2+q,q^3+q^2,2q-1,q^2;q)$-MCBC, since it attains the bound in Lemma~\ref{trivial_bound} (i) with total storage $N=rn=q(q^2+q)=q^3+q^2$. Note that this code could also be obtained by Construction~\ref{cons.smalln} using $(q^2,2q-2,q)$-codes. The existence of $(q^2,2q-2,q)$-codes follows from affine planes as follows. For any block $B\in{\cal B}$, we get a codeword ${\sf u}$ of length $q^2$ in which the value of each coordinate ${\sf u}_i$ for $i\in[q^2]$ is $1$ if and only if $i\in B$. Since any two blocks intersect in at most one point, the distance between every two distinct codewords is at least $2(q-1)$. Lastly, we note that it is possible to improve the value of $k$ when $r\leq\lfloor\frac{q}{2}\rfloor$.
We demonstrate this in the following example.
\begin{example}
Let $q=4$. From Corollary~\ref{c.ap}, we obtain a $(20,80,k,16;r)$-MCBC for $(k,r)\in\{(10,3),(7,4)\}$. From the incidence matrix in Example~\ref{e.ap4}, the lower bounds on the size of the union of any $h$ blocks, $1\leq h\leq 6$, are shown in the following table:
\begin{center}
\begin{tabular}{|c|c|}
\hline
$h$ & size of union \\
\hline
1 & 4 \\
2 & 7 \\
3 & 9 \\
4 & 10 \\
5 & 10 \\
6 & 11 \\
\hline
\end{tabular}
\end{center}
Therefore, we also obtain a $(20,80,11,16;2)$-MCBC.
\end{example}
\section{Regular MCBCs}\label{sec:regular_mcbc}
In this section, we study regular MCBCs, and give a construction for such codes. Given $n,m,k,r$, let $\mu(n,k,m;r)$ denote the smallest number of items stored in each server in a regular MCBC. The following lemma presents a simple lower bound on the value of $\mu(n,k,m;r)$.
\begin{lemma}
\label{bound_reg} $\mu(n,k,m;r)\geq\left\lceil\frac{N(n,k,m;r)}{m}\right\rceil$.
\end{lemma}
\begin{proof}
This property holds since a regular $(n,N,k,m;r)$-MCBC is also an $(n,N,k,m;r)$-MCBC, and therefore $m\mu(n,k,m;r)\geq N(n,k,m;r)$.
\end{proof}
\begin{remark}
It is easy to check that the constructions of MCBCs in Section~\ref{sec:construction} also give regular MCBCs for some specific parameters. Moreover, when the MCBCs are optimal, the bound in Lemma~\ref{bound_reg} holds with equality.
\end{remark}
For $r=k$, we determine when regular MCBCs with minimum storage $kn$ exist.
\begin{construction}
\label{reg_cons} Let $n=\frac{m}{\gcd(m,k)}$ and $k\leq m$; then $\frac{nk}{m}=\frac{k}{\gcd(m,k)}$ and $m\mid nk$. Let $I=[0,nk-1]\subseteq \mathbb{Z}$, and for each $i\in[n]$, let $I^{(i)}=[(i-1)k,ik-1]$. Then let $X=[m]$ and ${\cal B}=\{B_1,\dots,B_n\}$, where $B_i=\{j\pmod{m}+1:j\in I^{(i)}\}$ for $i\in[n]$.
\end{construction}
\begin{theorem}
The code $(X,{\cal B})^*$ from Construction~\ref{reg_cons} is a regular $(n,k,m;k)$-MCBC.
\end{theorem}
\begin{proof}
For any $a,b\in I$ with $a\neq b$, if $a\pmod{m}+1=b\pmod{m}+1$, then $m\mid(a-b)$. Since each $I^{(i)}$ consists of $k$ consecutive integers and $k\leq m$,
it follows that $|B_i|=k$ for any $i\in[n]$. It remains to prove that the code is regular. This holds since each $i\in[m]$ appears in exactly $\frac{nk}{m}=\frac{k}{\gcd(m,k)}$ blocks of ${\cal B}$.
\end{proof}
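A minimal sketch of Construction~\ref{reg_cons}, for the hypothetical values $m=6$ and $k=4$ (so $n=m/\gcd(m,k)=3$), confirming that every block has size $k$ and that every item appears in exactly $nk/m$ blocks:

```python
from math import gcd

m, k = 6, 4                       # hypothetical parameters with k <= m
n = m // gcd(m, k)                # n = 3
blocks = [{j % m + 1 for j in range((i - 1) * k, i * k)}
          for i in range(1, n + 1)]

assert all(len(B) == k for B in blocks)      # k consecutive residues, k <= m
counts = {x: sum(x in B for B in blocks) for x in range(1, m + 1)}
assert all(c == n * k // m for c in counts.values())  # regularity: nk/m = 2
```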
\begin{cor}
\label{rmcbc} $\mu(n,k,m;k)=\frac{kn}{m}$ if and only if $n=c\cdot\frac{m}{\gcd{(m,k)}}$ for some integer $c\geq 0$.
\end{cor}
\begin{proof}
If $\mu(n,k,m;k)=\frac{kn}{m}$, then the value of $n$ satisfies $n=c\cdot\frac{m}{\gcd{(m,k)}}$ for some integer $c\geq 0$ so that $\mu(n,k,m;k)$ is an integer. For any such $n$, Construction~\ref{reg_cons} gives a code with the desired parameters as follows. Assume that $n=c\cdot\frac{m}{\gcd{(m,k)}}$, and let $X=[m]$ and $\hat{{\cal B}}$ consist of $c$ copies of all the blocks of ${\cal B}$ in Construction~\ref{reg_cons}. Then, $(X,\hat{{\cal B}})^*$ is a regular $(n,k,m;k)$-MCBC.
\end{proof}
\section{Conclusion}\label{sec:conclusion}
In this paper, we generalized combinatorial batch codes to multiset combinatorial batch codes and regular multiset combinatorial batch codes. Several bounds and constructions of optimal codes were obtained.
\section*{Acknowledgments}
The authors would like to thank Prof. Tuvi Etzion for valuable discussions.
\section{Introduction}
Finding new collective electronic states in materials is one of the fundamental goals of condensed matter physics. While the traditional approach has been to search for such phases within naturally occurring compounds, in recent years the focus has shifted to \emph{heterostructures} \cite{Hwang:2012p38867}: artificial materials formed by interleaving two or more structurally and chemically dissimilar materials. Of particular interest is the spatial region at the \emph{interface} where dissimilar materials meet. New states may emerge here because the environment near an interface is different from that occurring in bulk (thermodynamically stable) materials.
Advances in the \emph{angstrom-scale} layer-by-layer synthesis of multi-element compounds for \emph{materials-by-design} have taken the approach to a new level of power and sophistication: It enables the atomic-scale combination of materials with different properties, granting access to a new terrain in which unusual states of matter may
arise \cite{Schlom:2008p15406}.
Heterostructures formed from transition metal oxides (TMO) are a particularly appealing hunting ground for new physics. In these materials the transition metal ($M$) ion has an open $d$-shell electronic configuration with spin, orbital, and charge degrees of freedom. Electrons in these partially filled $d$-shells are \emph{correlated}: the motion of one electron depends explicitly and non-trivially on the behavior of all of the others giving rise to interesting many-body phenomena \cite{Imada:1998p38738}. The resulting magnetic, superconducting, and multiferroic phases are of great scientific interest and are potentially capable of providing innovative energy, security, electronics and medical technology platforms. The heterostructure geometry \cite{Zubko:2011p42313,Mannhart:2010p14504,MRS:9125042} enables otherwise unattainable changes in atomic structure and chemical bonding, leading to new modalities for control and optimization of known states and potentially leading to new ones.
Over the past decade, one particular class of heterostructures, based on the interface between lanthanum aluminate (LaAlO$_3$; LAO for short) and strontium titanate (SrTiO$_3$; STO for short), has been the subject of very extensive study. In this Colloquium we choose not to discuss the LAO/STO interface or its variants, selecting our examples instead from vanadate, manganite, cuprate and nickelate-based systems for two reasons. First, the LAO/STO system and its variants have been extensively reviewed in other venues, see for example \onlinecite{Mannhart08,Zubko11,Hwang:2012p38867}.
Second, and more importantly, the LAO/STO system involves doping nominally insulating STO with maximum sheet carrier densities of fewer than 0.5 electrons ($e$) per in-plane unit cell, and the charge density is typically spread over several unit cells in the direction perpendicular to the interface. The volume carrier densities are therefore typically low, so that the situation is more closely related to a doped semiconductor than to the correlated electron materials on which we wish to focus here.
Nonetheless some of the scientific excitement has focused on superconductivity \cite{Reyren:2007p42482} (exhibited also when bulk STO is lightly doped by conventional dopants) and Anderson localization metal-insulator transitions \cite{Caviglia08}.
Reports or theoretical suggestions of other correlation phenomena in this system including charge ordering \cite{Pentcheva07} and magnetism \cite{Okamoto:2006p42492,Brinkman07,Ariando11,Bert11,Li11} have also appeared; we return to the issue in section IV and in the conclusions.
\begin{figure*}[t]
\centering
\includegraphics[width=1.11\columnwidth]{Fig1}\vspace{-0.4\baselineskip}
\caption{Anatomy of an oxide heterointerface: an illustration showing the interplay between different degrees of freedom (charge, spin, and lattice) at a coherently grown interface between ferromagnetic La$_{2/3}$Ca$_{1/3}$MnO$_3$ and superconducting YBa$_2$Cu$_3$O$_{7-x}$. The
electron micrograph is reproduced from Ref.~\onlinecite{Chien:2013p42266}, where each color represents a different chemical species.
\label{fig:hetero_anatomy}}
\end{figure*}
This Colloquium illustrates the essential features that make TMO-based heterostructures an appealing discovery platform for emergent properties. The guiding principle is that strong electronic correlations in combination with the access to new symmetries and electronic band structures provided by oxide interfaces can activate new electronic properties formerly ``hidden'' in bulk compounds. We illustrate this principle with a few selected examples, showing how charge redistributes, magnetism and orbital polarization arise, and ferroelectric order emerges from heterostructures comprised of oxide components with nominally contradictory behavior. For example, interfaces may be metallic, magnetic, or ferroelectric even though in bulk form the constituent materials are insulating, non-magnetic, or simple dielectrics. We conclude by articulating open challenges and opportunities in the field, in particular, how to translate the new understanding of when emergent phases arise into control of novel behavior by design at oxide interfaces, and the manipulation of these states by suitable mechanical, electrical or optical boundary conditions and excitations.
\section{Anatomy of an Oxide Interface}
The formation of a coherent perovskite oxide heterointerface, as shown
in \autoref{fig:hetero_anatomy}, provides a remarkable correlated electron ``playground.'' It brings different transition metal cations with their localized $d$ electron physics and interacting charge, spin and lattice degrees of freedom into intimate contact in a tunable crystalline environment.
The key structural features of transition metal oxides relate to the coordination geometry of the metal ions and the metal-oxygen-metal bond angles. These determine magnetic exchange interactions~\cite{Kanamori:1965p42321,Goodenough:1955p42314,Anderson:1950p42315} and electronic bandwidths~\cite{Eng:2003p42319}, thereby controlling the electronic and magnetic ground states. Structural and electronic changes across an interface can act to stabilize previously unanticipated phases of matter \cite{Okamoto:2004p42312}.
Consider for example a multilayer heterostructure comprised of alternating blocks of the metallic ferromagnet La$_{2/3}$Ca$_{1/3}$MnO$_3$ (LCMO), and the high-temperature cuprate superconductor YBa$_2$Cu$_3$O$_{7-x}$ (YBCO) sketched in \autoref{fig:hetero_anatomy}. The interface brings several crucial structural effects. The first is a coordination mismatch. LCMO is a three-dimensional perovskite ($AM$O$_3$ stoichiometry) with corner-connected MnO$_6$ octahedra that may be described by interleaving alternating (La,Ca)O and MnO$_2$ layers along [001]. In contrast, YBCO is a two-dimensional oxide with four- and five-fold coordinated Cu cations. The layered cuprate structure may be considered as a derivative of
perovskite, which partly facilitates coherent growth of the heterostructure. But unlike LCMO, YBCO displays an ordered network of oxygen vacancies accommodated by the valence preferences of Cu: One oxygen atom is removed from every third (001) YO plane to produce the square pyramidal CuO$_5$ coordination, then on every third CuO$_2$ layer, vacancies order along [100], producing the square planar CuO$_4$ coordination. Thus a ``coordination mismatch''
arising from the change from the 6-fold coordination of the Mn to the lower coordination of the Cu (\autoref{fig:hetero_anatomy}, lower left) occurs at the interface. As a result, a set of CuO chains ($i.e.$ charge reservoir) is missing from the interfacial YBCO unit cell to maintain a perovskite-like sequence ...MnO$_2$--BaO--CuO$_2$... across the junction~\cite{Zhang:2009p13098,Chien:2013p42266}.
Coherent epitaxial growth also produces an intrinsic strain mismatch arising from the different equilibrium lattice constants (\autoref{fig:hetero_anatomy}, center). The atomic structure at the heterointerface responds to alleviate the strain mismatch through relaxation of the interatomic distances and internal atomic degrees of freedom (for example, rotations or size deformations to the transition metal oxygen polyhedra) in the constituents along the superlattice repeat direction. These new atomic arrangements directly alter the electronic structure. Away from the interface it is characterized by carriers in the $d$-manifold with orbital symmetries $d(x^2-y^2)$ (YBCO) and $d(z^2-r^2/x^2-y^2)$ for LCMO (\autoref{fig:hetero_anatomy}, upper left), but near the interface the $d(z^2-r^2)$ become occupied in the YBCO and acquire more $d({x^2-y^2})$ character in the LCMO.
In addition to the structural effects, an electronic mismatch occurs. The ferromagnetism in LCMO relies on the cooperative \emph{parallel} alignment of spins from the narrow correlated electronic bands; singlet Cooper pair formation in YBCO, in contrast, relies on \emph{paired} spins with antiferromagnetic interactions. These antagonistic spin interactions (frustration) have been invoked to explain changes in the interfacial magnetization and superconductivity, $e.g.$ giant magnetoresistance, the appearance of an uncompensated magnetic moment on Cu in the CuO$_2$ plane, and a large modulation of the ferromagnetic magnetization profile across the heterojunction~\cite{Pena:2005p42538,Stahn:2005p42539,Chakhalian:2006p42181,Hoppler:2009p24795}.
The different valence configurations of the cations in the constituent materials of the heterostructure also induce changes in charge density and chemical bonding. In the system shown in \autoref{fig:hetero_anatomy} (lower right panel) a charge of $\sim$0.2$e$ per Cu ion is transferred from Mn to Cu ions across the interface \cite{Chakhalian:2007p25272}.
The charge transfer at other oxide interfaces has also been found to exhibit a peculiar asymmetric electronic ``roughness'' intertwined with an asymmetric interface stacking sequence or an asymmetric chemical
roughness \cite{Hoffmann:2005p41603,May:2008p7133,Chien:2013p42266}.
The effects from different stacking sequences and electronic roughness remain to be resolved.
To summarize, the following degrees of freedom are highly tunable at an oxide interface and may be exploited in uncovering new phases:
\begin{itemize} \setlength{\itemsep}{0pt}
\item Epitaxial strain mismatch owing to differences in equilibrium lattice parameters
\item Atomic coordination frustration and cation site preferences
\item Ordered spin and orbital states
\item Charge flow across the interface (layer dipole discontinuities)
\item Chemical frustration and interlayer mixing
\end{itemize}
The following examples detail how these considerations come into play,
and describe the exciting new phases born from the interplay of the
correlated electronic and atomic structure across oxide interfaces.
\begin{figure*}
\centering
\includegraphics[width=1.98\columnwidth]{Fig2}\vspace{-0.5\baselineskip}
\caption{Electronic structure of Cu in a La$_{2/3}$Ca$_{1/3}$MnO$_3$/YBa$_2$Cu$_3$O$_{7-x}$ heterostructure, determined from X-ray Linear Dichroism (XLD) and X-ray Magnetic Circular Dichroism (XMCD) measurements. Panels (a,b): XLD spectra taken on the Cu $L_3$-edge at temperature $T=15$ K with the electric-field vector E$\|ab$ plane and E$\|c$ plane, recorded in bulk-sensitive (panel a) and interface-sensitive (panel b) modes. The main peak (``white line'') in (b) is shifted towards higher energies, indicating a lower charge state of Cu at the interface. Panel (c): XMCD spectra measured at the Cu and Mn $L_3$ edges, recorded at $T=15$ K in a 5 T applied magnetic field, demonstrating that the interfacial copper cations exhibit a non-zero ferromagnetic local moment, whereas in bulk the antiferromagnetic coupling leads to a net magnetization of zero.
\label{fig:lsmo_xas}
}
\end{figure*}
\section{Charge at the interface}
Understanding and controlling the distribution of charge carriers at the interface between dissimilar semiconductors is one of the pivotal developments of modern microelectronics \cite{Gertner:2013} important both for devices and as a crucial platform for discovery of remarkable physical phenomena including integer and fractionally quantized Hall effects as well as spin-Hall and other spintronic phenomena.
In conventional semiconductor heterojunctions the basic physics is driven by the difference in work-function, which causes charge transfer across the boundary to equalize chemical potentials. The work-function difference may be manipulated by a process known as $\delta$-doping~\cite{Schubert:1990p42508,Harris:1991p42509}, in which a layer of ions is implanted in a plane at some distance from the interface. An additional advantage of $\delta$-doping is that the placement of the dopants at some distance from the interface minimizes the effects of randomness in the dopant positions. $\delta$-doping is now widely used to produce two dimensional electron gases (2DEGs) confined in close proximity to the interface ($e.g.$ GaAs/AlGaAs).
The interest in using TMO to explore similar physics was motivated by two observations \cite{Ahn:2003p42529,Ahn:2006p42528}: ($i$) in oxides, the accessible carrier density is expected to be orders of magnitude higher than that of semiconductors ($\geq 10^{20}$ cm$^{-3}$), and ($ii$) the Thomas-Fermi screening length is expected to be much shorter, so the charges may be confined to within $\textless$1-2 nm of the interface, a factor of $5-10$ shorter than the $\sim$10 nm length characteristic of semiconductor junctions. However, the current intense effort in material synthesis, theory, and device fabrication of oxide interfaces is motivated mainly by the known sensitivity of the correlated electron properties of transition metal oxides to the $d$-band filling~\cite{Tokura:1999p42470,Dagotto:2001p42472,Ovchinnikov:2003p42474,Mackenzie:2003p42475,Basov:2005p42476,Lee:2006p42477,Tokura:2006p42471,Armitage:2010p40289}. The discovery of an interface-based method of carrier doping has revived the idea of tailoring a material's electronic properties and creating novel quantum states not easily attainable in the bulk counterparts. The basic idea (analogous to that motivating $\delta$-doping) is to explore electronic and magnetic phases without the hindering effects of chemical disorder inherent in the conventional solid state chemistry methods of changing carrier concentration.
During the past several years, extensive experimentation has established that perovskite-based heterostructures are particularly susceptible to interlayer charge redistribution derived from the incompatibilities illustrated in \autoref{fig:hetero_anatomy} making them ideal candidates to explore such possibilities ~\cite{Bibes:2011p39683,May:2009p42138,Okamoto:2004p42312,Ohtomo:2004p42478,Ohtomo:2002p42318}.
\subsection{Interface Doping of a High-$T_c$ Superconductor}
To illustrate the inherent interest of charge reconstruction on interfacial states, we discuss as one of many possible examples the recent progress on cuprate/manganite heterointerfaces. Macroscopically it has been established that the introduction of a ferromagnetic (La,Ca)MnO$_{3}$ manganite layer into the heterostructure with an optimally doped YBCO cuprate triggers a \emph{suppression} of the superconducting transition temperature accompanied by a reduced ferromagnetic Curie temperature~\cite{Satapathy:2012p39966,Driza:2012p41609,Kalcheim:2011p34692,Hoppler:2009p24795,Pena:2004p41605,Holden:2004p40634,Sefrioui:2003p42479}. In a recent set of experiments (\autoref{fig:lsmo_xas}), $L$-edge polarized resonant X-ray absorption spectra taken at the Mn and Cu edges reveal the presence of a chemical shift implying a flow of electronic charge across the interface of about $\sim$0.2\,$e$ per Cu atom \cite{Chakhalian:2007p25272,Chien:2013p42266}. The depleted electrons from the MnO$_{2}$ layer are directly transferred to the CuO$_{2}$ planes, unbalancing the charge distribution between the atomic CuO$_{2}$ layers and the CuO chain charge reservoir block. The average Mn valence also increases from the as-grown value (Mn$^{+3.33}$) to around 3.5, indicative of covalent bond formation across the Mn--O--Cu interface.
The charge transfer across the interface from the Mn to Cu ions induces a major reconstruction of the $d$-orbital occupancies and frontier orbital symmetries in the interfacial CuO$_{2}$ layers \cite{Chakhalian:2007p25272,Chakhalian:2006p42181}.
In particular, the Cu $d_{3z^2-r^2}$ orbital, which is fully occupied and electronically
inactive in the bulk cuprates, becomes active at the interface (\autoref{fig:lsmo_xas}b). At the same time as charge transfer and enhanced covalent chemical bonding are observed across the interface, the Cu cations from the nominally antiferromagnetic CuO$_{2}$ plane acquire an uncompensated magnetic moment (\autoref{fig:lsmo_xas}c), attributed to spin canting of the local moments on the interfacial Cu cations.
\begin{figure}[b]
\centering
\includegraphics[width=0.98\columnwidth]{Fig3}\vspace{-0.5\baselineskip}
\caption{Panel (a): Schematic of cross-sectional scanning tunneling microscopy (XSTM)
measurements performed on an LCMO/YBCO superlattice grown on a Nb-doped STO substrate. Panel (b): Data reproduced from Ref.~\onlinecite{Chien:2013p42266}. The spatial evolution of the $dI$/$dV$ spectra averaged across the two identically terminated heterointerfaces reveals that the electronic transition is more abrupt for the bottom interface (right arrow) than the top, broader, interface (left arrow).
The red dots represent the voltage of the minimum in the density of states.}
\label{fig:XSTM}
\end{figure}
Initial studies of the interplay between the ferromagnetic and superconducting order parameters used synchrotron based X-ray and neutron reflectivity experiments. However these tools were unable to clearly resolve the length scale of interactions at the boundary between the two phases. Very recently, the issue has been addressed by use of cross-sectional scanning tunneling microscopy (XSTM) together with atomic-resolution electron microscopy (EELS/STEM). These methods enable direct observation of the charge distribution and the corresponding spatial scale for the buried interface \cite{Chien:2013p42266}. \autoref{fig:XSTM} shows the spatially resolved $dI$/$dV$ spectra, which provided the first direct evidence that the length scale for charge transfer between YBCO and LCMO has an upper limit of \textless 1 nm, and that the spatial broadening of the electronic transition is commensurate with the rougher interface.
This result sets a fundamental upper limit on the charge-transfer length scale in the YBCO/LCMO system, ruling out a class of theories based on long-range proximity effects~\cite{Hoffmann:2005p41603}. In addition to the established X-ray and neutron based probes, this powerful characterization technique provides a useful tool to achieve a microscopic direct space understanding of the electronic structure across correlated oxide interfaces.
\subsection{Additional considerations}
The complex behavior occurring at the LCMO/YBCO interface highlights the need to develop a clear language and set of concepts to describe interface electronic physics in correlated oxides. The inherently many-body nature of the correlated interface raises
fundamental questions, in particular about the applicability of the ideas, formulae, and language devised for semiconductor interfaces, where a single-particle description works well. Pioneering work of Oka and Nagaosa \cite{Oka:2005p42264} showed via density matrix renormalization group calculations of a one-dimensional model system (in essence the one-dimensional Hubbard model with a spatially varying interaction parameter and band bottom) that the standard concepts of band bending and interface dipole apply, albeit with some modifications, as long as the conduction and valence bands are replaced by the lower and upper Hubbard bands.
A growing body of literature builds on this work, using the concepts of band bending, Schottky barriers, and depletion layer creation borrowed from semiconductor physics \cite{Yajima:2011p27821,Hikita:2009p11794}, as well as more involved approaches, which unite Poisson-Schr\"odinger electrostatics with Mott-Hubbard physics
\cite{Okamoto:2004p42312,Charlebois:2013p42487,Lee:2006p42486}. Correlation physics is shown to lead to \emph{quantitative} changes in the spatial confinement of carriers near interfaces \cite{Okamoto:2004p42312,Lee:2006p42486,Lee:2007p42491}, including the possible formation of extended depletion regions of zero compressibility (so-called `Mott plateaus') \cite{Lee:2006p42486,Charlebois:2013p42487}. Other theoretically proposed possibilities, \textit{unattainable with semiconductor junctions}, include a spontaneously emerging quantum-well structure when an electron-doped Mott-Hubbard insulator is coupled to a normal metal with a large work function. Following the same line of reasoning, in a $p$-$n$ junction between two correlated insulators the local Mott gap collapses, giving rise to a 2DEG~\cite{Charlebois:2013p42487}.
With few exceptions \cite{Jin:2011p42490}, current experimental attention has focused on interfaces such as that between the two band insulators LaAlO$_3$ and SrTiO$_3$. In most of these situations the carriers are introduced via the polar catastrophe mechanism \cite{Mannhart08}; the maximum sheet carrier density is $0.5$ per in-plane unit cell and this carrier density is typically distributed \cite{Mannhart08,Okamoto:2006p42492} over several unit cells away from the interface, leading in general to volume carrier densities far below the Mott value of one per unit cell. Density functional plus Hubbard $U$ calculations \cite{Pentcheva07} indicate that a charge ordered phase in which the entire polar catastrophe charge density is in the first interface layer may be possible, but these suggestions have not yet been confirmed by experiment or beyond-DFT methods. One very interesting potential exception is the work of \onlinecite{Moetakef12} on GdTiO$_3$/SrTiO$_3$ heterostructures, where a nontrivial insulating phase was observed when two layers of SrTiO$_3$ were sandwiched between thick sheets of GdTiO$_3$. This phase has been explained by Chen, Lee, and Balents in terms of a novel `Mott dimer' phase \cite{Chen13}, even though the carrier density is far below the one electron per transition metal ion needed for Mott physics. Nonetheless, many theoretical predictions suggest alternative avenues for emergent properties to arise and warrant experimental investigation.
Issues beyond conceptual approaches to interface control also arise. The length scales in correlated oxides are typically very short, so the details of the interface may be more important than in conventional semiconductors. A local picture is needed, which is able to address the formation of chemical bonds across the junction, differing electronegativities of transition metal ions, changes in both crystal field energies and Madelung potentials, and polarity effects \cite{Salluzzo:2013p41953,GarciaBarriocanal:2013p42121,Park:2013p41899,Biscaras:2012p42167,Zhong:2010p42484,Savoia:2009p42485,Takizawa:2009p4135,Sing:2009p2694,Herranz:2007p42483,Hotta:2007p42258,Ohtomo:2004p42478}.
A further complication is that while many correlated oxides are reasonably well described by the Mott-Hubbard picture on which the above-cited works are based, some important functional TMOs are \emph{charge-transfer} compounds~\cite{Imada:1998p38738,Khomskii:1997p42493,Zaanen:1985p4144}. The role of the lower Hubbard band in these materials is usurped by the ligand states (typically oxygen $2p$), thus implying a very different physical character for the doped holes (mainly in oxygen levels) and doped electrons (mainly in transition metal $d$-levels). As a result, the alignment of the oxygen levels across the interface becomes crucial.
For all of the materials discussed in this paper, theoretical treatments which go beyond the simple Hubbard model, including chemically realistic structures and energetics on the same footing as correlation effects, are needed, as are experimental investigations of systems with higher electron densities and complete control over cation and oxygen stoichiometry.
\section{Control of Magnetism with Oxide Heterostructures}
Long range magnetic order in transition metal oxides usually arises from a combination of local moment formation on the transition metal site and inter-site coupling via the oxygen sublattice. Heterostructures offer an opportunity to generate new magnetic states by manipulating both the moment formation and the nature of the inter-site coupling. As examples, we note that the paramagnet LaCoO$_3$ can be converted to a ferromagnetic (FM) material by tensile epitaxial strain, which changes the material from a low-spin to a high-spin state \cite{Freeland:2008p25261,Park:2009p25266,Rondinelli:2009p1318,Fuchs:2007p10350}.
On the other hand, bulk antiferromagnetic (AFM) EuTiO$_3$ can be converted to a ferromagnetic insulator under modest tensile strains \cite{Lee:2010p25085}. Another notable example is the comprehensive study by \onlinecite{Seo:2010p42273}, which examined three-component SrRuO$_3$/manganite/SrRuO$_3$ heterostructures. These authors found that strong compressive strain causes relative FM alignment of magnetization in the heterostructure layers, while tensile or weak compressive strain favors AFM alignment of neighboring layers.
This sort of control over local magnetization in thin film geometries is of potential utility for oxide electronics and spintronic applications, including magnetic memory and sensing \cite{Bibes:2011p39683}. For example, electromechanical coupling via a piezoelectric
material can be used to control the orientation and strength of the magnetization by tuning the lattice parameters of the heterostructure through an applied electric field \cite{Dekker:2011p34081}. Here, we focus on going beyond strain control to make use of the
broken symmetry at the interface between two dissimilar materials to generate unique spatially structured magnetic states.
\subsection{Creating Novel Magnetic States at Interfaces}
One approach to manipulating magnetism involves interfacial charge transfer in heterostructures created from an antiferromagnetic \emph{insulator} and a paramagnetic \emph{metal} \cite{Takahashi:2001p42254,Freeland:2010p25254,Yordanov:2011p32557}. The choice of materials in this case was determined by two key factors: first, creating moments in a material without any propensity to moment formation, $i.e.$, zero moments, is difficult, so it is reasonable to begin by choosing a system with a large local moment such as CaMnO$_3$ with $3\mu_B$/Mn, which in bulk is a G-type (conventional two-sublattice N\'eel) antiferromagnet; second, modest electron doping of this material leads to strong ferromagnetic (FM) correlations \cite{Neumeier:2000p42400}. In a quantum-well heterostructure in which a paramagnetic metal (in this case CaRuO$_3$) is confined between two thick layers of CaMnO$_3$, one may expect that charge transfer from the metal to insulating CaMnO$_3$ will lead to interfacial doping and thus ferromagnetism.
Theoretical studies substantiate this argument and find that a charge of approximately $0.1$\,e per interface unit cell leaks across the interface and is confined within $\sim$1 unit cell at the CaRuO$_3$/CaMnO$_3$ interface \cite{Nanda:2007p12003}. Although the magnitude of the charge leakage is small, it has a significant impact on the antiferromagnetic order in the CaMnO$_3$, providing a mechanism for spin canting which yields large ferromagnetic moments at the interface \cite{Takahashi:2001p42254,Freeland:2010p25254,Yordanov:2011p32557}. To validate this concept, a study of the spatial distribution of the magnetism was carried out using X-ray resonant magnetic scattering (XRMS) at the Mn $L$-edge \cite{Freeland:2005p25285,Kavich:2007p25275}. \autoref{fig:CROCMO} shows the large XRMS signal, which deviates from the anticipated bulk G-type AFM state; the latter shows no ferromagnetic component to the magnetic moment under identical strain conditions, indicating that the ferromagnetism emerges from the interface \cite{Freeland:2010p25254}. By fitting this signal as a function of incident angle, the magnetic polarization was found to extend over several unit cells away from the interface, in contrast to the single unit cell predicted by theory \cite{Nanda:2007p12003}. This discrepancy in the length scale of the magnetization profile may be due to magnetic polarons, which are known to exist in lightly-doped CaMnO$_3$ \cite{Chiorescu:2007p42405}, but such interfacial polarons have not been explicitly investigated theoretically.
\begin{figure}
\centering
\includegraphics[width=0.98\columnwidth]{Fig4.pdf}\vspace{-0.2\baselineskip}
\caption{
Panel (a): Schematic crystal structure showing canted spins (pink arrows) within the MnO$_6$ octahedra of CaMnO$_3$ at the interface of the quantum-well structure with metallic CaRuO$_3$. The canting arises from electron transfer owing to the ohmic contact. Panel (b): X-ray resonant magnetic scattering data showing a large magnetic signature arising from the FM alignment of spins at the interface in the presence of a magnetic field (see associated data in Ref.~\onlinecite{Freeland:2010p25254}).
\label{fig:CROCMO}
}
\end{figure}
\subsection{Other Routes to Interface Magnetism}
Strain and layer sequencing can offer additional handles to manipulate the interfacial magnetic state in the CaRuO$_3$/CaMnO$_3$ system \cite{He:2012p41809}. The link between the metallic layer and magnetism is best illustrated by studying superlattices where the metallic layer undergoes a metal-to-insulator transition when the dimensionality is reduced in the ultra-thin layer, and correspondingly the magnetism disappears \cite{Grutter:2013p42329}. One can exploit this understanding to design new functional materials; many possibilities exist within the perovskite family that can be combined to seek new types of magnetic states \cite{Smadici:2007p29592,Bhattacharya:2008p7251,Gibert:2012,Hoffman:2013p42403}. For example, many antiferromagnets have ordering temperatures well above room temperature, so one could extend this concept to create interfacial insulating \emph{ferri}magnets that operate at high temperature \cite{Ueda15051998}.
Another possibility is to make use of the spatially localized magnetic state in proximity to a metallic layer to create a spin-polarized 2DEG \cite{Nanda:2008p54}. More broadly, one could create heterostructures with two magnetic materials, and use the competition among different collectively ordered magnetic states in addition to structural
incompatibilities to generate a plethora of interesting and potentially spatially varying magnetic phases. These are but a few of the magnetic possibilities which remain to be
uncovered at oxide heterointerfaces, chosen to highlight the large phase space still available for exploration and the opportunities available to connect with materials theory in the rational search for new magnetic systems.
\section{Interfacial Control of Orbital Polarization}
\subsection{The Case of Rare-earth Nickelates}
The orbital configuration, $i.e.$, the distribution of the $d$-electrons over the available crystal field levels, plays an important role in the formation of strongly correlated ground states in transition metal oxides \cite{Tokura:2000p42052}. In general, orbital configurations are closely linked to structure and may therefore be manipulated at interfaces. Here we discuss these issues specifically for the orthonickelate perovskites $R$NiO$_3$, where $R$ is a trivalent cation from the lanthanide series, but the ideas can be extended to other $AM$O$_3$ systems.
Interest in the nickelates, both originally and when renewed a decade later, arose from the possibility of generating a cuprate-like electronic and orbital configuration in a copper-free system
\cite{Hamada93,Anisimov:1999p30820,Lee:2004p30821,Chaloupka:2008p288,Poltavets:2010p30805}. The basic idea is that in bulk $R$NiO$_3$ the Ni is octahedrally coordinated, with only small deviations from cubic ($O_h$) symmetry. Further, formal valence considerations indicate that the nominally Ni$^{3+}$ cation is in the low-spin $d^7$ configuration, with the $t_{2g}$ states ($d_{xy,xz,yz}$) filled and one electron in the two-fold degenerate $e_g$-symmetry ($d_{3z^2-r^2,x^2-y^2}$) Ni $d$-levels. Low-spin $d^7$ is a first-order Jahn-Teller configuration, with a susceptibility to bond distortions which break the cubic point symmetry and are enhanced by correlation effects. It was thus expected that modest perturbations would split the $e_g$ levels, leaving an effective one-band configuration where the electron is fully confined to a single orbital.
The degree to which an electron occupies two different $m_{l1}$ and $m_{l2}$ orbitals
can be quantified as an \emph{orbital polarization}
\[
P_{l_1 m_{l1},l_2 m_{l2}}=\frac{n_{l_1 m_{l1}}-n_{l_2 m_{l2}}}{n_{l_1 m_{l1}}+n_{l_2 m_{l2}}}\, ,
\]
where $n_{l_1 m_{l1}}$ and $n_{l_2 m_{l2}}$ are the occupancies of the $\left|l_1 m_{l1}\right\rangle$ and $\left|l_2 m_{l2}\right\rangle$ states \cite{Han:2010p28554},
with orbital quantum number $l_i$
and magnetic quantum number $m_{li}$,
respectively.
For the rare-earth nickelates, the relevant orbital polarization arises from the
$n_{x^2-y^2}$ and $n_{3z^2-r^2}$ occupancies, and a fully polarized state $P=1$ would
be indicative of a single band electronic structure.
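As a simple numerical illustration (the occupancies here are hypothetical, chosen only to show the scale of the effect, and are not taken from any of the cited experiments), a state with all of the $e_g$ charge in the $d_{x^2-y^2}$ orbital gives
\[
P_{x^2-y^2,\,3z^2-r^2}=\frac{1-0}{1+0}=1\, ,
\]
whereas occupancies of, say, $n_{x^2-y^2}=0.625$ and $n_{3z^2-r^2}=0.375$ yield only $P=(0.625-0.375)/(0.625+0.375)=0.25$, $i.e.$, a $25\%$ polarization.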
Something akin to this effect occurs in many members of the ``colossal'' magnetoresistance manganites, where the basic configuration is high-spin $d^4$, similarly a Jahn-Teller ion that can be manipulated with strain \cite{Tokura:2000p42052}. Hubbard-model calculations further indicated that the single-band physics was very likely to appear \cite{Hansmann:2009p12185}; however, more realistic
\emph{ab-initio} calculations indicate that the actual electronic configuration for Ni is in the high-spin $d^8$ state with a hole on the oxygen ($d^8\bar{L}$) \cite{Han:2010p28554}. Since the high-spin $d^8$ configuration has one electron in each of the two $e_g$ orbitals, it is significantly less susceptible to undergoing Jahn-Teller distortions,
suggesting that it would be more difficult than initially expected to achieve the
desired degree of orbital polarization, even in the correlated case \cite{Han:2011p41333}. Studies of the dependence of orbital polarization on the different flavors of structural symmetry-breaking \cite{PhysRevB.87.155135} are thus of great experimental interest and provide a stringent test of the theory.
\subsection{Manipulating Orbitals in $R$NiO$_3$ Heterostructures}
Advances in high-quality growth of nickelates over the past few years mean that we are now in a position to test these predictions \cite{Tsubouchi:2008p179,Eguchi:2009p2955,Scherwitzl:2010p35780,Liu:2010p25255,May:2010p18583,Boris:2011p35832,Hwang:2013p42503,Bruno:2013p42438}. The basic experimental approach is to use a combination of quantum confinement, achieved by fabricating ultra-thin layers of TMO sandwiched between layers of wide-gap insulators, and epitaxial strain, obtained by varying the substrate material, to break the octahedral symmetry. Advanced x-ray techniques are then used to estimate the resulting changes in orbital occupancies.
However, \emph{ab-initio} calculations based on density functional theory indicate that the contribution of strain to octahedral symmetry breaking is not completely intuitive (see \onlinecite{Rondinelli:2012p42180} and references therein). In particular, a considerable degree of compression or tension can be accommodated by octahedral rotations, without necessarily changing the local point symmetry significantly since the NiO$_6$ units are
highly flexible \cite{Chakhalian:2011p34974}. Furthermore, quantum confinement may be affected by the chemistry of the insulating layer, with different degrees of polarization found for different choices of wide-gap insulator \cite{Han:2010p28554}.
At present, the experimental results are not completely consistent with each other or with theory. For example, examination of the Ni $L_2$ edge indicated an $\sim5\%$ orbital polarization for a single unit-cell of LaNiO$_3$ subject to tensile strain \cite{Freeland:2011p42179} and no orbital polarization for compressive strain. Other measurements employing an orbital reflectometry technique on four unit cell films also observed a similar non-zero interfacial polarization for tensile strain \cite{Benckiser:2011p27820,Frano:2013p42355}. Recent studies have indicated it is possible to increase the orbital polarization up to $25\%$ through judicious optimization of high tensile strain states and alternative spacer materials \cite{Wu:2013p42419}; the latter had been shown theoretically to play a considerable role in obtaining the targeted orbital polarization levels \cite{Han:2010p28554}.
\subsection{Open Questions in Orbital Control at Interfaces}
All experiments agree, though, that the degree of orbital polarization observed in actual superlattices is small compared to that needed to achieve a fully orbitally polarized Ni $e_g^1$ state. The main challenge is then to build the framework for understanding how to create fully orbitally polarized states in oxide heterostructures.
One important facet of this problem has to do with strain and symmetry. For example, LaNiO$_3$ has rhombohedral symmetry in the bulk which actually disfavors a uniaxial Jahn-Teller distortion \cite{Carpenter:2009p3987}. NdNiO$_3$, on the other hand, is orthorhombic which allows such a distortion without large energetic penalties. Recent studies by \onlinecite{Tung:2013p42442} show that the nickelate films maintain to some extent the symmetry of the bulk, which, due to the connection between compatible lattice distortions and crystal symmetry, directly influences the ability to orbitally polarize the $3d$-states even under large strains.
With this understanding, one may be able to choose the proper bulk symmetry of the TMO to be used in the heterostructure to build in larger orbital polarizations in NdNiO$_3$ by coupling strain with the interfacial covalency effect discussed above and interfacial proximity effects \cite{doi:10.1021/nl500235f,doi:10.1021/cg500285m}. Even for the case of NdNiO$_3$ films, however, the orbital polarization is still insufficient to create a fully polarized state \cite{Tung:2013p42442}. This is largely due to the energy scale mismatch between elastic strain ($\sim$100 meV) and the bandwidth (on the order of several eV), and the overall tendency to orbital polarization is further reduced by the $d^8\bar{L}$ character of the Ni$^{3+}$ state.
Small orbital polarizations have also been observed even in the case of the Jahn-Teller active manganites \cite{Aruta:2006p42142,Tebano:2008p42139,Pesquera:2012p42127}, which indicates that this balancing of drastically different energy scales is difficult even in systems that prefer orbital order. A potential solution is to create interfaces with large symmetry mismatch due to lattice topology or by combination of dissimilar crystal field environments.
Consider for example bulk oxides with large orbital polarization such as the cuprates \cite{Nucker:1995p42064,Chen:1992p42068} and Ruddlesden-Popper (layered-structure) nickelates \cite{Kuiper:1998p42189,Pellegrin:1996p42182} as a starting point. In these materials, the large orbital polarization arises from the strongly asymmetric crystal (ligand) field of the layered structure. As was discussed above for the LCMO/YBCO heterointerface, oxide interfaces can be harnessed to `undo' orbital polarization, but there is no reason why the converse should not also be possible. This offers a real opportunity in the area of matching systems with drastically different symmetries to create orbital states at the interface.
Orbital control can also be used to modulate strongly correlated states. Strain very effectively controls the metal-insulator transition (MIT) for NdNiO$_3$ thin films \cite{Liu:2010p25257, Liu:2013p42440}, but the underlying mechanism is not fully understood. Using quantum confinement when the layer dimensions approach the atomic limit, it was observed that orbital polarization under compressive strain tends to favor a metallic state while quantum confinement caused a re-emergence of a MIT through the interfacial reduction of the orbital polarization \cite{Liu:2012p41668}.
A similar connection was recently observed in the case of VO$_2$ thin films \cite{Aetukuri:2013p42392}, where the decrease in the MIT temperature was correlated with strain driven polarization of the V $t_{2g}$ orbitals. The potential use of strain in combination with symmetry mismatch to tune between correlated metallic and insulating phases is an important issue warranting further investigation.
\section{Ferroelectric Heterostructures from Nonferroelectric Bulk Oxides}
The electrically switchable polarization of ferroelectrics (FE) allows their integration in devices ranging from random access memories (FE-RAM), electro-optical devices, sensing microsystems, and active vibration control and surface acoustic wave systems to high-frequency devices~\cite{Setter:2006p42422}. The main challenge for future FE-RAM scaling, however, is that the FE dielectric thickness must be reduced to fit within the required device area while maintaining sufficient reproducibility and signal margins for sense amplifier differentiation
between a `0' and `1' data state~\cite{Wu:2010p42427}. Furthermore, non-destructive \emph{magnetic} sensing of \emph{electric} polarization, enhanced miniaturization and increased packaging density in magnetoelectric materials (ME)
\cite{Fiebig:2005p42423,Eerenstein:2006p42271,Ramesh:2007p42429,Velev:2011p42270} would enable the realization of four-state logic in a single device \cite{Bibes:2008p12103,Khomskii:2009p42424}.
The conventional approach for realizing strong ME materials, $i.e.$, those in which there is strong coupling between the primary electric and magnetic polarizations, uses naturally occurring materials possessing the primary ferroic orders, namely ferroelectricity and ferromagnetism. Such materials are not only rare, but often suffer from weak coupling between the spin and charge degrees of freedom \cite{Eerenstein:2006p42271}.
Recent advances in atomic layer epitaxy now enable the design and fabrication of heterostructures with atomically flat interfaces that can support new forms of ferroelectricity \cite{Bousquet:2008p42309,Mulder/Benedek/Rondinelli/Fennie:2013,Rondinelli/Fennie:2012} and magnetoelectric coupling owing to interfacial interactions among electronic spins, charges, and orbitals \cite{Wu:2010p42427}. A promising avenue to pursue in the search for new materials with emergent ferroelectricity and a strong magnetic field dependence of the electric polarization exploits a superlattice structure with broken inversion symmetry, which results from its construction from three distinct layers~\cite{Warusawithana:2003p42526,Lee:2005p42527}. The `tri-color' layering lifts inversion symmetry -- a prerequisite for an electric polarization -- whereas epitaxial strain applied to the heterostructure can promote the formation of electrically and magnetically tunable polarizations, even in the absence of ferroic components~\cite{Tokura:2007p42426,Hatt:2007p42425}.
Using a combination of complementary experimental probes, magnetoelectricity was recently demonstrated in artificial tri-layer heterostructures consisting solely of dielectric antiferromagnetic oxides (\autoref{fig:FE}a). Laser molecular-beam epitaxy was used to create the heterostructure comprising alternating LaMnO$_3$, SrMnO$_3$, and NdMnO$_3$ layers on a SrTiO$_3$ substrate. \onlinecite{Rogdakis:2012p42268} reported the emergence of ferroelectricity below 40~K (\autoref{fig:FE}c), which was found to depend on the number of NdMnO$_3$ layers $n$ in the superlattice (\autoref{fig:FE}d).
Interestingly, the authors observed slim loop-like polarization--electric ($P$-$E$) field hysteresis, with an extended tail of the polarization above the ferroelectric transition temperature and a thermal hysteresis between zero-field-cooled and field-cooled measurements.
Such features are typical of relaxor ferroelectrics and were attributed to interface effects \cite{Rogdakis:2012p42268}.
We note that this dielectric relaxation also leads to differences in the magnitudes of the measured polarization obtained from the $P$-$E$ loop and the pyrocurrent measurement, which might also be affected by the challenges of characterizing the dielectric properties of ultrathin oxide films with techniques commonly used for bulk single crystals.
Nonetheless, the magnetoelectric coupling resulted in 150\% magnetic modulation of the electric polarization, demonstrating how heterostructuring multiple compounds together to lift inversion symmetry in superlattices is an avenue to create new functionalities.
\begin{figure}
\centering
\includegraphics[width=1.\columnwidth]{Fig5}
\caption{%
Ferroelectric and magnetoelectric properties of [(NdMnO$_3$)$_n$/(SrMnO$_3$)$_n$/(LaMnO$_3$)$_n$]$_m$ superlattice, where (n,m) denotes the specific superlattice structure. (a) Schematic
[(NdMnO$_3$)$_5$/(SrMnO$_3$)$_5$/(LaMnO$_3$)$_5$]$_8$ superlattice on single-crystalline SrTiO$_3$ substrate with the metal--oxygen octahedra and A cations emphasized. The arrays of the arrows in (b) represent the corresponding antiferromagnetic spin arrangements for each component of the heterostructure. (c) Temperature ($T$) dependence of the electric polarization ($P$) measured in a superlattice of period (22,2) using the pyroelectric technique for a typical electric field (E$_\mathrm{a}$) of +100 V cm$^{-1}$ (black curve) and $-$100 V cm$^{-1}$ (red curve) applied
perpendicular to the plane of the superlattice layering. The temperature-dependent electric polarization under a magnetic field $H$=6~T applied parallel to the plane of the
superlattice layering (green curve) reveals strong magnetoelectric coupling. (d) Normalized relative change in the electric polarization at fixed electric and magnetic fields for various superlattices. Figure adapted from Ref.~\onlinecite{Rogdakis:2012p42268}. }
\label{fig:FE}
\end{figure}
First-principles density functional calculations indicated that broken space inversion symmetry and mixed valency, arising from the heterostructure geometry (cation layer sequence) and interfacial polar discontinuity, respectively, are responsible for the observed behavior. In particular, the formal charge layering of the LaMnO$_3$ and NdMnO$_3$ components at the interfaces with SrMnO$_3$ gives rise to a charge discontinuity, leading to electron transfer and cooperative off-centering of the cations. The $A$-cation layering leads to a pattern of Mn and $A$-cation displacements along the superlattice growth (normal) direction that lift inversion symmetry and therefore produce the macroscopic electric polarization.
We note that the ferroelectric relaxor behavior could not be captured by the theoretical results, which describe only the static, cation-ordered, zero-temperature behavior.
This work demonstrates yet another fascinating example of emergent functionality exhibited in heterostructures. The ability to lift inversion symmetry and independently tune spin order allows the design of many more materials with multifunctional behavior \cite{Puggioni/Rondinelli:2014,ADMI:ADMI201400042}. One may exploit these systems to engineer devices from artificial low-dimensional materials exhibiting novel tunable functions distinct from that of bulk systems.
\section{Conclusion}
The physics of interfaces between materials exhibiting correlated electronic behaviors including superconductivity, magnetism and ferroelectricity is a rapidly advancing field, situated at the intersection of materials science, solid state chemistry and condensed matter physics. Understanding and exploiting these remarkable systems places extraordinary demands on synthesis, measurement and theory, and the challenge is stimulating remarkable work in all areas. By way of conclusion we highlight challenges and prospects in correlated oxide interfaces.
\subsection{Chemical and structural order}
Characterization and control of chemical and structural order is a crucial issue. While research to date has revealed remarkable phenomena, clearly related to properties of theoretically ideal interfaces, effects of disorder are not negligible. The brutally short length scales (often only one or two unit cells) pose strong constraints on materials quality. For example, metal to insulator transitions generically occur in oxide heterostructures when the thickness of the metallic layer becomes of the order of 1-2 unit cells. Systematic dependence on strain \cite{Son:2010p14460}, and systematic evolution of electronic structure with thickness \cite{Yoshimatsu:2011p32626}, suggest an important intrinsic component, but disorder effects and changes in growth processes on these length scales cannot yet be ruled out as mechanisms. Antisite defects mean that real interfaces are not as sharp as depicted in the idealized sketches shown in this paper, and these defects are not necessarily easy to identify in transmission electron microscopy experiments, which average over columns of order $10^3$ atoms. Further, oxygen defects and interstitials play a crucial role in transition metal oxides, and oxygen partial pressure during growth and in post-growth annealing of heterostructures clearly affects properties in many cases \cite{Nakagawa:2006p12453,Ariando11}. Methods to further define and control the actual structure of interfaces are urgently needed. One area of future study is to couple the insight from {\it in-situ} studies of oxide film synthesis to that of multiscale theory in order to build a mechanistic understanding of the process by which interfaces are created.
\subsection{Theory}
The importance and interest of oxide interfaces for the general issue of the theory of correlated electron materials cannot be overemphasized. Understanding the phenomena at interfaces requires a combination of sophisticated many-body physics (to understand the correlated electron states) and \emph{ab-initio} insights (to understand the implications of the changes in octahedral rotations, atomic coordination, and lattice relaxations). The present state of the theoretical art is a combination of analysis of model systems (in particular the Hubbard model), which cannot easily encode many real materials aspects, in particular the transition-metal/ligand covalence as well as the energetics associated with lattice relaxations and \emph{ab-initio} techniques (especially the DFT+$U$ method) which have provided crucial insights but are based on a greatly oversimplified Hartree approximation to the many-body physics and may overemphasize order \cite{PhysRevB.80.235114,PhysRevB.86.195136}. In particular the status of the DFT+$U$ predictions of magnetism \cite{Okamoto:2006p42492} and charge order \cite{Pentcheva07} at the LAO/STO interface remains unclear.
The combination of density functional band theory and dynamical mean field theory (DFT+DMFT) is a promising alternative \cite{Kotliar06}, combining \emph{ab-initio} and many-body physics in a systematic way. However, working implementations of total energy calculations are only now beginning to appear \cite{Park13}, and forces cannot yet be computed, so structural optimization remains a challenge. More fundamentally, existing implementations for systems in which more than one $d$-orbital is important are based on the single-site approximation, which is believed to become poor in the two dimensional situation relevant to heterostructures.
\subsection{Topological states of matter}
Topological insulators (TIs) are a fascinating class of materials in which strong spin-orbit interaction promotes gapless electronic states on the surface ($i.e.$ edge states) with the bulk of the material remaining gapped~\cite{Fu:2007p42507,Hsieh:2008p42510,Moore:2010p42512,Qi:2010p42513,Hasan:2010p42511}. Most of the current TI materials belong to the Bi$_2X_3$ ($X$=Se, Te) family. Recently, a new approach has been proposed that is based on superlattices of two (or three) unit cells of a strongly correlated electron perovskite $AB$O$_{3}$ grown \emph{along the [111] direction} combined with a band insulator spacer layer; the resulting heterostructure structurally forms a buckled honeycomb lattice topologically equivalent to that of graphene in the case of a three-unit-cell correlated oxide. Depending on the strength of electron-electron correlations, the magnitude of Hund's coupling, and the inter-site hopping, the proposed heterostructures display potentially rich physics associated with exotic electronic and topological phases~\cite{Ruegg:2011p38511,Yang:2011p38510,Xiao:2011p41911,Ruegg:2012p41179,PhysRevLett.110.066403,Ruegg:2013p42515}.
At present, the main challenge in experimental realization is the film growth along the [111] direction since for the commonly used substrates, $e.g.$ SrTiO$_3$, LaAlO$_3$, NdGaO$_3$, YAlO$_3$, etc., the (111) structure consists of alternating $\pm4e$ or $\pm3e$ charged planes along this direction. The large polar discontinuity generally results in complex surface/interface and electronic reconstructions \cite{Enterkin:2010,Marks20092179}, which can act to compensate for the polar mismatch. To date there is limited understanding of thin film nucleation, growth and charge compensation in perovskites along
highly polar directions. Very recently, synthesis work in this direction has been initiated~\cite{middey:261602}.
\subsection{Oxygen Defect control}
While many of the examples discussed above involve oxygen stoichiometric perovskites, the ease of removal/addition of oxygen can also offer opportunities for materials that can be programmed by their chemical environment \cite{Kalinin:2012p41881,Kalinin:2013p42533}. While the role of oxygen vacancies has been explored deeply in the context of catalysis and fuel cells \cite{Adler:2004p41756}, recent work has highlighted the controlled stabilization of related oxygen deficient phases using oxide heterostructures. This is interesting for epitaxial thin film phases such as SrCoO$_{3-\delta}$ \cite{Jeen:2013p42437,Jeen:2013p42436} or La$_{1-x}$Sr$_x$FeO$_{3-\delta}$ \cite{Xie:2013p42532}, which can be reversibly converted between oxygen deficient and stoichiometric phases at low temperatures. Since these phases have drastically different ground states, it offers an interesting path for control of strongly correlated electrons via dynamic anion compositional control. By combining low conversion energy with electrochemical gating of vacancies, such as that seen recently for VO$_2$ \cite{Jeong:2013p42020} and $R$NiO$_3$ \cite{Shi:2013p42435}, this approach allows direct control of metal vs. insulating phase as well as possible elements of brain-like (neuromorphic) electronic circuits.
\subsection{Moving beyond the static realm}
Up to now, all the properties that have been discussed were limited to the quasi-equilibrium regime, but in the future one should also investigate the \emph{dynamical} degrees of freedom to explore the emergence of unique transient states. While the dynamic response of bulk materials has been extensively investigated~\cite{Averitt:2002p41326,Basov:2005p42476}, oxide heterostructures offer new possibilities. Recent pump-probe studies of oxide films illustrate the potential for ultrafast strain modulation~\cite{Daranciang:2012p41904,Wen:2013p41915}, which allows one to manipulate the lattice in a new direction since the film motion is clamped in-plane by epitaxy and can only alter the lattice out of plane. Using this epitaxial constraint allows one to drive the crystalline lattice (symmetry, rotations, etc.) into distinctly different areas of phase space. For example, experiments in manganite thin films showed the emergence of a \emph{hidden} phase that existed only in the dynamic realm~\cite{Ichikawa:2011p42146}. Moving into the mid-IR region enables direct pumping of lattice modes that can trigger phase transitions~\cite{Rini:2007p20107} and was recently used to trigger a metal-insulator transition through dynamic strain created by direct pumping of substrate phonons~\cite{Caviglia:2012p39471}. Low energy photons in the THz regime can also serve as a dynamic way to drive transitions with ultra-fast electric fields \cite{Liu:2012p41599}. Such experiments have only begun to explore the complex landscape available in the dynamic realm.
\begin{acknowledgments}
Work at Argonne National Laboratory, including the Advanced Photon Source, is supported by the U.S. Department of Energy, Office of Science under grant no.\ DEAC02-06CH11357. JC was supported by DOD-ARO under grant no.\ 0402-17291. The work in Singapore was supported by the National Research Foundation, Singapore, through Grant NRF-CRP4-2008-04. JMR was supported by ARO (W911NF-12-1-0133) and
acknowledges useful discussions leading to ideas presented in this manuscript during a workshop sponsored by the Army Research Office
(grant no.\ W911NF-12-1-0171). AJM acknowledges the US Department of Energy, Office of Science, under grant No. DE-FG02-04ER46169.
\end{acknowledgments}
\section{Introduction}
Drug discovery is an important biomedical task, which aims at finding new potential medical compounds with desired properties, such as
better absorption, distribution, metabolism, and excretion (ADME), low toxicity, and active pharmacological activity~\cite{rohrer2009maximum,abbasi2019deep,altae2017low}.
It is reported that drug discovery takes more than \$2 billion and at least 10 years on average, while the clinical success rate is around 10\%~\cite{paul2010improve,leelananda2016computational,zhavoronkov2019deep}.
To speed up this process,
quantitative structure property/activity relationship (QSPR/QSAR) modeling uses machine learning methods to establish the connection between molecular structure and particular properties~\cite{dahl2014multi}.
It usually consists of two components: a molecular encoder which encodes molecular structure as a fixed-length molecular representation, and a predictor which estimates the activity of a certain property based on the molecular representation.
Predictive models can be leveraged in virtual screening to discover potential molecules more efficiently~\cite{guo2021few}.
However,
molecular property prediction is
essentially a few-shot problem, which makes it hard to solve.
Only a small number of candidate molecules can pass virtual screening to be evaluated in the lead optimization stage of drug discovery~\cite{rong2020self}.
After a series of wet-lab experiments,
most candidates eventually fail to be a potential drug due to the lack of any desired properties~\cite{dahl2014multi}.
These together result in a limited number of labeled data~\cite{nguyen2020meta}.
Few-shot learning (FSL)~\cite{fei2006one,wang2020generalizing}
methods
aim at generalizing from a limited number of labeled data.
Recently,
they have also been introduced into molecular property prediction~\cite{altae2017low,guo2021few}.
These methods attempt to learn a predictor
from a set of property prediction tasks and generalize to predict new properties given a few labeled molecules.
As molecules can be naturally represented as graphs,
graph-based molecular representation learning methods use graph neural networks (GNNs)~\cite{kipf2016semi,hamilton2017inductive}
to obtain
graph-level representation as the molecular embedding.
Specifically,
the pioneering
IterRefLSTM~\cite{altae2017low} adopts GNN as the molecular encoder and adapts a classic FSL method~\cite{vinyals2016matching} proposed for image classification to handle few-shot molecular prediction tasks.
The recent Meta-MGNN~\cite{guo2021few} leverages a GNN pretrained from large-scale self-supervised tasks as molecular encoder and introduces additional self-supervised tasks such as bond reconstruction and atom type prediction to be jointly optimized with the molecular property prediction tasks.
However, aforementioned methods neglect two key facts in molecular property prediction.
The first fact is that different molecular properties are attributed to different
molecular substructures as found by previous QSPR studies~\cite{varnek2005substructural,ajmani2009group,costa2021chemical}.
However,
IterRefLSTM and Meta-MGNN use graph-based molecular encoder to encode molecules regardless of target properties whose relevant substructures are quite different.
The second fact is that the relationship among molecules also varies w.r.t. the target property.
This can be commonly observed in benchmark molecular property prediction datasets.
As shown in Figure~\ref{fig:relation-graph-illus}, Mol-1 and Mol-4 from the Tox21 dataset \cite{Tox21} have the same activity in SR-HSE task while acting differently in SR-MMP task.
However, existing works fail to leverage such relation graph among molecules.
\begin{figure}[t]
\centering
\begin{minipage}{0.4\textwidth}
\centering
\includegraphics[width = 0.88\textwidth]{img/intro.pdf}
\end{minipage}
\begin{minipage}{0.59\textwidth}
\small
\setlength\tabcolsep{1.5pt}
\begin{tabular}{c C{132pt}|cc}
\toprule
\multicolumn{2}{c|}{Molecules} & \multicolumn{2}{c}{Label} \\ \midrule
\multicolumn{1}{c|}{ID} & SMILES & SR-HSE & SR-MMP \\ \midrule
\multicolumn{1}{l|}{Mol-1} & c1ccc2sc(SNC3CCCCC3)nc2c1 & 1 & 1 \\ \midrule
\multicolumn{1}{l|}{Mol-2} & Cc1cccc(/N=N/c2ccc(N(C)C)cc2)c1 & 0 & 1 \\ \midrule
\multicolumn{1}{l|}{Mol-3} & C=C(C){[}C@H{]}1CN{[}C@H{]} (C(=O)O){[}C@H{]}1CC(=O)O & 0 & 0 \\ \midrule
\multicolumn{1}{l|}{Mol-4} & O=C(c1ccccc1) C1CCC1 & 1 & 0 \\ \bottomrule
\end{tabular}
\end{minipage}
\caption{Examples of relation graphs for the same molecules coexisting in two tasks of Tox21. Red (blue) edges mean the connected molecules are both active (inactive) on the target property.}
\label{fig:relation-graph-illus}
\vspace{-15pt}
\end{figure}
To handle these problems,
we propose Property-Aware Relation networks (PAR) which is compatible with existing graph-based molecular encoders, and is further equipped with the ability to
obtain property-aware molecular embeddings and model molecular relation graph adaptively.
Specifically, our contribution can be summarized as follows:
\begin{itemize}[leftmargin=*]
\item We
propose a property-aware embedding function which co-adapts each molecular embedding with respect to context information of the task and further projects it to a substructure-aware space w.r.t. the target property.
\item We propose an adaptive relation graph learning module to jointly estimate molecular relation graph and refine molecular embeddings w.r.t. the target property,
such that the limited labels can be effectively propagated among similar molecules.
\item We propose a meta-learning strategy to selectively update parameters within each task, which
is
particularly helpful to separately capture
the generic knowledge shared across different tasks and those specific to each property prediction task.
\item We conduct extensive empirical studies on real molecular property prediction datasets.
Results show that PAR consistently outperforms the others. Further model analysis shows PAR can obtain property-aware molecular embeddings and model molecular relation graph properly.
\end{itemize}
\noindent
\textbf{Notation.}
In the sequel, we denote
vectors by lowercase
boldface,
matrices by uppercase boldface,
and sets by uppercase calligraphic font.
For a vector $\mathbf{x}$, $[\bm{x}]_i$ denotes the $i$th element of $\mathbf{x}$.
For a matrix $\mathbf{X}$,
$[\mathbf{X}]_{i:}$ denotes the vector on its $i$th row,
$[\mathbf{X}]_{ij}$ denotes the $(i,j)$th element of $\mathbf{X}$.
The superscript $(\cdot)^\top$ denotes the matrix transpose.
\section{Review: Graph Neural Networks (GNNs)}
\label{sec:back-gnn}
A graph neural network (GNN) can learn expressive node/graph representations from the topological structure and associated features of a graph via neighborhood aggregation~\cite{kipf2016semi,gilmer2017neural,hu2020open}.
Consider a graph $\mathcal{G}=\{\mathcal{V},\mathcal{E}\}$ with node feature $\mathbf{h}^{(0)}_v$ for each node $v\in\mathcal{V}$ and edge feature $\mathbf{b}^{(0)}_{vu}$ for each edge $e_{vu}\in\mathcal{E}$ between nodes $v,u$.
At the $l$th layer, GNN updates the node embedding $\mathbf{h}_{v}^{(l)}$ of node $v$ as:
\begin{align}\label{eq:gnn-node}
\mathbf{h}_{v}^{(l)}
= \texttt{UPDATE}^{(l)}
\left(
\mathbf{h}_{v}^{(l-1)},\texttt{AGGREGATE}^{(l)}
\left(
\{(\mathbf{h}_{v}^{(l-1)},\mathbf{h}_{u}^{(l-1)},\mathbf{b}_{vu})|u\in\mathcal{N}(v)\}
\right)
\right) ,
\quad
\end{align}
where $\mathcal{N}(v)$ is a set of neighbors of $v$.
After $L$ iterations of aggregation, the graph-level representation $\mathbf{g}$ for $\mathcal{G}$ is obtained as
\begin{align}\label{eq:gnn-graph}
\mathbf{g}=\texttt{READOUT}
\left(
\{\mathbf{h}_v^{(L)}| v\in\mathcal{V}\}
\right) ,
\end{align}
where $\texttt{READOUT}(\cdot)$ function aggregates all node embeddings into the graph embedding~\cite{xu2018powerful}.
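As a concrete illustration of the message passing in \eqref{eq:gnn-node} and the readout in \eqref{eq:gnn-graph}, the following NumPy sketch implements one layer with mean aggregation, a concatenation-based update, and a mean readout. These particular choices (mean, concatenation, ReLU) are illustrative assumptions, not the specific \texttt{AGGREGATE}/\texttt{UPDATE}/\texttt{READOUT} functions of any GNN cited above, and edge features are omitted for brevity.

```python
import numpy as np

def gnn_layer(H, edges, W):
    """One simplified message-passing layer (cf. Eq. (1)): each node
    averages its neighbours' embeddings, concatenates the result with
    its own embedding, and applies a linear map followed by ReLU."""
    n, _ = H.shape
    agg = np.zeros_like(H)
    deg = np.zeros(n)
    for v, u in edges:            # undirected graph: list both (v,u) and (u,v)
        agg[v] += H[u]
        deg[v] += 1
    agg /= np.maximum(deg, 1)[:, None]
    return np.maximum(np.concatenate([H, agg], axis=1) @ W, 0.0)

def readout(H):
    """Graph-level embedding (cf. Eq. (2)): mean over node embeddings."""
    return H.mean(axis=0)

# toy "molecule": 3 atoms in a path 0-1-2, with 2-d node features
H0 = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
edges = [(0, 1), (1, 0), (1, 2), (2, 1)]
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 2))       # maps concat(self, agg) back to 2-d
g = readout(gnn_layer(H0, edges, W))
print(g.shape)  # (2,)
```

Stacking $L$ such layers before the readout yields the graph embedding $\mathbf{g}$ used as the molecular representation below.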
Our paper is related to GNNs in two aspects: (i) we use a graph-based molecular encoder to obtain molecular representations, and (ii) we conduct
graph structure learning to
model the relation graph among molecules.
\paragraph{Graph-based Molecular Representation Learning.}
Representing molecules properly as fixed-length vectors is vital to the success of downstream biomedical applications~\cite{gawehn2016deep}.
Recently,
graph-based molecular representation learning methods are widely used and
obtain state-of-the-art performance.
A molecule $\mathbf{x}_{i}$ is represented as an undirected graph $\mathcal{G}_{{i}}=\{\mathcal{V}_{{i}},\mathcal{E}_{{i}}\}$, where
each node $v\in\mathcal{V}_{{i}}$ represents an atom with feature $\mathbf{h}_{v}^{(0)}\in\mathbb R^{d^n}$ and each edge
$e_{vu}\in\mathcal{E}_{{i}}$ represents the bond between two nodes $v,u$ with feature $\mathbf{b}_{vu}\in\mathbb R^{d^e}$.
Graph-based molecular representation learning methods use GNNs to obtain graph-level representation $\mathbf{g}_{i}$ as molecular embedding.
Examples include graph convolutional networks (GCN)~\cite{duvenaud2015convolutional}, graph attention networks (GAT)~\cite{velivckovic2017graph},
message passing neural networks (MPNN)~\cite{gilmer2017neural},
graph isomorphism network (GIN)~\cite{xu2018powerful},
pretrained GNN (Pre-GNN)~\cite{hu2019strategies} and GROVER~\cite{rong2020self}.
The two existing works on few-shot
molecular property prediction both use graph-based molecular encoders to obtain molecular embeddings: IterRefLSTM~\cite{altae2017low} uses GCN while Meta-MGNN~\cite{guo2021few} uses Pre-GNN.
However, these graph-based molecular encoders cannot discover molecular substructures corresponding to the target property.
There exist GNNs which handle subgraphs~\cite{monti2018motifnet,alsentzer2020subgraph,fu2020magnn}, but these subgraphs are usually predefined or simply $K$-hop neighborhoods, while discovering and enumerating molecular substructures is extremely hard even for domain experts~\cite{ajmani2009group,yu2013discovery}.
In this paper, we first obtain molecular embeddings using graph-based molecular encoders.
We further learn to extract relevant substructure embeddings w.r.t. the target property upon these generic molecular embeddings,
which is more effective and improves the performance.
\paragraph{Graph Structure Learning.}
As the provided graphs may not be optimal,
a number of
graph structure learning methods aim at jointly learning the graph structure and node embeddings~\cite{zhu2021deep,chen2020iterative}.
In general, they iterate over two procedures:
(i) estimate the adjacency matrix which encodes the graph structure (i.e.,
refine the neighborhoods $\mathcal{N}(v)$) from the current node embeddings; and (ii) apply a GNN on this updated graph to obtain new node embeddings.
There exist some FSL methods~\cite{garcia2018few,liu2018learning,kim2019edge,yang2020dpgn,rodriguez2020embedding}
which learn to construct fully-connected relation graph among images in a $N$-way $K$-shot few-shot image classification task.
Their methods cannot be directly applied to the $2$-way $K$-shot property prediction tasks, where choosing a wrong neighbor
in a different class will heavily deteriorate the quality of molecular embeddings.
We share the same spirit of learning relation graph, and further
design several regularizations to encourage our adaptive property-aware relation graph learning module to select correct neighbors.
\section{Proposed Method}
\label{sec:promethod}
In this section,
we present the details of PAR, whose overall architecture is shown in Figure~\ref{fig:model_art}.
Considering few-shot molecular property prediction problem,
we first use a specially designed embedding function to
obtain property-aware molecular embedding for each molecule,
and then
adaptively learn relation graph among molecules which allows effective propagation of the limited labels.
Finally, we describe our meta-learning strategy to train PAR.
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth]{img/par_illus.pdf}
\caption{The architecture of the proposed PAR, where we plot a $2$-way $2$-shot task from Tox21.
PAR is optimized over a set of tasks. Within each task $\mathcal{T}_{\tau}$,
the modules with dotted lines are fine-tuned on support set $\mathcal{S}_\tau$ and those with solid lines are fixed.
A query molecule $\mathbf{x}_{\tau,i}$ will first be represented as $\mathbf{g}_{\tau,i}$ using graph-based molecular encoder, then transformed to $\mathbf{p}_{\tau,i}$ by our property-aware embedding function. This $\mathbf{p}_{\tau,i}$ further co-adapts with embeddings of molecules in $\mathcal{S}_{\tau}$ on the relation graph as $\mathbf{h}_{\tau,i}$, which is taken as the final molecular embedding and used for class prediction.
}
\label{fig:model_art}
\vspace{-10px}
\end{figure}
\subsection{Problem Definition}
\label{sec:problem}
Following the problem definition adopted by IterRefLSTM \cite{altae2017low} and Meta-MGNN \cite{guo2021few}, the target is to learn a predictor from a set of few-shot molecular property prediction tasks
$\{ \mathcal{T}_\tau\}_{\tau=1}^{N_t}$
and
generalize to predict new properties given a few labeled molecules.
The $\tau$th task $\mathcal{T}_\tau$ predicts whether
a molecule $\mathbf{x}_{\tau,i}$ with index $i$ is active ($y_{\tau,i}=1$) or inactive ($y_{\tau,i}=0$) on a target property, provided with a small number of $K$ labeled samples per class.
This $\mathcal{T}_\tau$ is then formulated
as a
$2$-way $K$-shot classification task with a support set
$\mathcal{S}_\tau=\{(\mathbf{x}_{\tau,i},y_{\tau,i})\}_{i=1}^{2K}$ containing the $2K$ labeled samples
and a query set $\mathcal{Q}_\tau=\{(\mathbf{x}_{\tau,j},y_{\tau,j})\}_{j=1}^{N^q_\tau}$ containing $N^q_\tau$ unlabeled samples to be classified.
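To make the episode construction concrete, the following sketch shows how a $2$-way $K$-shot task could be assembled from one property's labeled molecules: $K$ actives and $K$ inactives form the support set $\mathcal{S}_\tau$ and held-out molecules form the query set $\mathcal{Q}_\tau$. The function and variable names are our own illustrative choices, not taken from any released codebase.

```python
import random

def sample_episode(molecules, labels, K, n_query, seed=0):
    """Build a 2-way K-shot episode for one property prediction task:
    support set = K active + K inactive molecules, query set = a
    disjoint sample of the remaining labelled molecules."""
    rng = random.Random(seed)
    actives = [i for i, y in enumerate(labels) if y == 1]
    inactives = [i for i, y in enumerate(labels) if y == 0]
    rng.shuffle(actives)
    rng.shuffle(inactives)
    support_idx = actives[:K] + inactives[:K]
    rest = actives[K:] + inactives[K:]
    rng.shuffle(rest)
    query_idx = rest[:n_query]
    support = [(molecules[i], labels[i]) for i in support_idx]
    query = [(molecules[i], labels[i]) for i in query_idx]
    return support, query

# toy data: 10 molecules, alternating active/inactive labels
mols = [f"mol-{i}" for i in range(10)]
ys = [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]
S, Q = sample_episode(mols, ys, K=2, n_query=4)
print(len(S), len(Q))  # 4 4
```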
\subsection{Property-aware Molecular Embedding}
\label{sec:proposed-emb}
As different molecular properties are
attributed to different molecular
substructures, we design a property-aware embedding function to transform the generic molecular embeddings to a substructure-aware space relevant to the target property.
As introduced in Section~\ref{sec:back-gnn}, graph-based molecular encoders can obtain good molecular embeddings. By learning from large-scale tasks, they can capture generic information shared by molecules \cite{hu2019strategies,rong2020self}.
Thus,
we
first
use a graph-based molecular encoder such as GIN \cite{xu2018powerful} and Pre-GNN~\cite{hu2019strategies} to
extract a molecular embedding $\mathbf{g}_{\tau,i}\in\mathbb R^{d^g}$
of length $d^g$ for each $\mathbf{x}_{\tau,i}$.
The parameter of this graph-based molecular encoder is denoted as $\mathbf{W}_g$.
However, existing graph-based molecular encoders cannot capture property-aware substructures.
Especially when
learning across tasks, a molecule can be evaluated for multiple properties.
This leads to a one-to-many relationship between a molecule and properties, which makes few-shot molecular property prediction particularly hard.
Thus, we are motivated to implicitly capture substructures
in the embedding space w.r.t. the target property of $\mathcal{T}_\tau$.
Let $\mathbf{c}^c_{\tau}$
denote the class prototype for class $c\in\{0,1\}$, which is computed as
\begin{align}
\mathbf{c}^c_{\tau}
= \nicefrac{1}{|\mathcal{S}^c_{\tau}|} \sum\nolimits_{(\mathbf{x}_{\tau,i},y_{\tau,i})\in\mathcal{S}^c_{\tau}} \mathbf{g}_{\tau,i},
\end{align}
where $\mathcal{S}^c_{\tau}=\{(\mathbf{x}_{\tau,i},y_{\tau,i})|(\mathbf{x}_{\tau,i},y_{\tau,i})\in\mathcal{S}_{\tau}~\text{and}~y_{\tau,i}=c\}$.
We take
these class prototypes as the context information of $\mathcal{T}_\tau$, and encode them into the molecular embedding of $\mathbf{x}_{\tau,i}$ as follows:
\begin{align}
\label{eq:emb-context}
\mathbf{b}_{\tau,i}=
\big[
\texttt{softmax}
(
\nicefrac{\mathbf{C}_{\tau,i}\mathbf{C}_{\tau,i}^\top}{\sqrt{d^g}}
)
\mathbf{C}_{\tau,i}
\big]_{1:}
~\text{with}~
\mathbf{C}_{\tau,i}^\top=[\mathbf{g}_{\tau,i},\mathbf{c}^0_{\tau},\mathbf{c}^1_{\tau}]\in\mathbb R^{d^g\times 3},
\end{align}
where
$[\cdot]_{1:}$ extracts the first row vector, which corresponds to $\mathbf{x}_{\tau,i}$.
Here $\mathbf{b}_{\tau,i}$ is computed using scaled dot-product self-attention~\cite{vaswani2017attention}, such that each $\mathbf{g}_{\tau,i}$ can be compared with the class prototypes in a dimension-wise manner.
The property-aware molecular embedding $\mathbf{p}_{\tau,i}$ is then obtained as
\begin{align}\label{eq:project}
\mathbf{p}_{\tau,i}
=\texttt{MLP}_{\mathbf{W}_p}(\texttt{concat}[\mathbf{g}_{\tau,i},\mathbf{b}_{\tau,i}]).
\end{align}
$\texttt{MLP}_{\mathbf{W}_p}$ denotes the multilayer perceptron (MLP) parameterized by $\mathbf{W}_p$, which is used to find a lower-dimensional space that encodes substructures more relevant to the target property of $\mathcal{T}_\tau$.
This contextualized $\mathbf{p}_{\tau,i}$ is property-aware which can be more predictive of the target property.
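The computation of the class prototypes, the context attention \eqref{eq:emb-context}, and the projection \eqref{eq:project} can be sketched in NumPy as below. For brevity, $\texttt{MLP}_{\mathbf{W}_p}$ is replaced by a single linear map, so this is an illustrative simplification of the embedding function rather than the exact architecture.

```python
import numpy as np

def softmax(X):
    """Row-wise softmax."""
    e = np.exp(X - X.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def property_aware_embedding(g_i, c0, c1, W_p):
    """Sketch of Eqs. (4)-(5): stack the molecular embedding with the
    two class prototypes, apply scaled dot-product self-attention, keep
    the row corresponding to the molecule, then project concat[g, b]."""
    d = g_i.shape[0]
    C = np.stack([g_i, c0, c1])                   # (3, d), the matrix C
    attn = softmax(C @ C.T / np.sqrt(d))          # (3, 3) attention weights
    b_i = (attn @ C)[0]                           # first row: the molecule
    return np.concatenate([g_i, b_i]) @ W_p       # linear stand-in for MLP

rng = np.random.default_rng(1)
d = 4
g = rng.normal(size=d)                            # generic embedding g_i
S_emb = rng.normal(size=(4, d))                   # 2-way 2-shot support embeddings
y = np.array([1, 1, 0, 0])
c1 = S_emb[y == 1].mean(axis=0)                   # class prototypes, Eq. (3)
c0 = S_emb[y == 0].mean(axis=0)
W_p = rng.normal(size=(2 * d, 3))                 # projects to a 3-d space
p = property_aware_embedding(g, c0, c1, W_p)
print(p.shape)  # (3,)
```

Since the prototypes change with the support set of each task, the same molecule receives a different $\mathbf{p}_{\tau,i}$ under different target properties.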
\subsection{Adaptive Relation Graph Among Molecules}
\label{sec:proposed-relation-graph}
Apart from relevant substructures, the relationship among molecules also changes across properties.
As shown in Figure~\ref{fig:relation-graph-illus}, two molecules with a shared property can be different from each other on another property~\cite{rohrer2009maximum,kuhn2016sider,richard2016toxcast}.
Therefore, we further propose an adaptive relation graph learning module
to capture and leverage this property-aware relation graph among molecules,
such that the limited labels can be efficiently propagated between similar molecules.
In this relation graph learning module, we alternately estimate the adjacency matrix of the relation graph among molecules and refine the molecular embeddings on the learned relation graph for $T$ times.
At the $t$th iteration,
let $\mathcal{G}^{(t)}_{\tau}$ denote the relation graph whose node set $\mathcal{V}_{\tau}$ consists of the $2K$ molecules in $\mathcal{S}_{\tau}$ and a query molecule in $\mathcal{Q}_{\tau}$.
$\mathbf{A}^{(t)}_{\tau}\in\mathbb R^{(2K+1)\times(2K+1)}$ denotes the corresponding adjacency matrix encoding $\mathcal{G}^{(t)}_{\tau}$, where
$[\mathbf{A}^{(t)}_{\tau}]_{ij}\ge0$ if nodes $\mathbf{x}_{\tau,i},\mathbf{x}_{\tau,j}\in\mathcal{V}_{\tau}$ are connected.
Ideally, the similarity between property-aware molecular embeddings $\mathbf{p}_{\tau,i},\mathbf{p}_{\tau,j}$ of $\mathbf{x}_{\tau,i},\mathbf{x}_{\tau,j}$ reveals their relationship under the current property prediction task.
Therefore, we set ${\mathbf{h}}^{(0)}_{\tau,i}=\mathbf{p}_{\tau,i}$ initially.
We first
estimate $\mathbf{A}^{(t)}_{\tau}$ using the current molecular embeddings.
The $(i,j)$th element
$[\mathbf{A}^{(t)}_{\tau}]_{ij}$ records the similarity between $\mathbf{x}_{\tau,i}$ and $\mathbf{x}_{\tau,j}$, which is calculated as:
\begin{align}
\label{eq:relation-adj}
\left[
\mathbf{A}^{(t)}_{\tau} \right]_{ij}
=
\texttt{MLP}_{\mathbf{W}_a}(\exp(-|{\mathbf{h}}^{(t-1)}_{\tau,i}-{\mathbf{h}}^{(t-1)}_{\tau,j}|)),
\end{align}
where $\mathbf{W}_a$ is the parameter of this MLP.
The resultant $\mathbf{A}_{\tau}^{(t)}$ is a dense matrix, which encodes a fully connected
$\mathcal{G}^{(t)}_{\tau}$.
However, a query molecule only has $K$ real neighbors in $\mathcal{G}^{(t)}_{\tau}$ in a $2$-way $K$-shot task.
For binary classification,
choosing a wrong neighbor in the opposite class will heavily deteriorate the quality of molecular embeddings, especially when only one labeled molecule is provided per class.
To avoid the interference of wrong neighbors,
we further reduce
$\mathcal{G}^{(t)}_{\tau}$ to a $K$-nearest neighbor ($K$NN) graph, where $K$ is set to be exactly the same as the number of labeled molecules per class in $\mathcal{S}_\tau$.
The indices of the top $K$ largest
$[\mathbf{A}^{(t)}_{\tau}]_{ij}$ over $j\neq i$ for $\mathbf{x}_{\tau,i}$ are recorded in
$\mathcal{N}^{(t)}(\mathbf{x}_{\tau,i})$.
Then,
we set
\begin{align}\label{eq:adj-knn}
\left[
\hat{\mathbf{A}}^{(t)}_{\tau}
\right]_{ij}=
\begin{cases}
[\mathbf{A}^{(t)}_{\tau}]_{ij}
& \text{if\;} \mathbf{x}_{\tau,j}\in\mathcal{N}^{(t)}(\mathbf{x}_{\tau,i})
\\
0
&
\text{otherwise}
\end{cases}.
\end{align}
The values in $\hat{\mathbf{A}}^{(t)}_{\tau}$ are normalized to range between 0 and 1 by applying the softmax function on each row $[\hat{\mathbf{A}}^{(t)}_{\tau}]_{i:}$.
This normalization can also be done by
z-score, min-max, or sigmoid normalization.
Then, we co-adapt each node embedding $\mathbf{h}^{(t)}_{\tau,i}$ with respect to the other node embeddings on this updated relation graph encoded by $\hat{\mathbf{A}}_{\tau}^{(t)}$.
Let $\mathbf{H}^{(t)}_\tau$ denote all node embeddings collectively where the $i$th row corresponds to $\mathbf{h}^{(t)}_{\tau,i}$.
$\mathbf{H}^{(t)}_{\tau}$ is updated from $\mathbf{H}^{(t-1)}_{\tau}$ as
\begin{align}
\label{eq:relation-emb}
\mathbf{H}^{(t)}_{\tau} &
=
\texttt{LeakyReLU}(\hat{\mathbf{A}}^{(t)}_\tau\mathbf{H}^{(t-1)}_\tau\mathbf{W}_r),
\end{align}
where $\mathbf{W}_r$ is a learnable parameter.
After $T$ iterations,
we return $\mathbf{h}_{\tau,i} =[{\mathbf{H}}_{\tau}^{(T)}]_{i:}$ as the
final molecular embedding for $\mathbf{x}_{\tau,i}$, and $\hat{\mathbf{A}}_{\tau}=\hat{\mathbf{A}}^{(T)}_{\tau}$ as the final optimized relation graph.
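One full iteration of the module, i.e., the similarity scoring of \eqref{eq:relation-adj}, the top-$K$ sparsification of \eqref{eq:adj-knn} with row-wise softmax normalization, and the embedding refinement of \eqref{eq:relation-emb}, can be sketched in NumPy as below. The MLP of \eqref{eq:relation-adj} is reduced to a single linear layer and the LeakyReLU slope of $0.2$ is an arbitrary choice, so this is a sketch rather than the reference implementation.

```python
import numpy as np

def softmax_rows(X):
    """Row-wise softmax; -inf entries map to exactly 0."""
    e = np.exp(X - X.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def relation_graph_step(H, K, W_a, W_r):
    """One iteration of the adaptive relation graph module:
    score pairwise similarities from exp(-|h_i - h_j|), keep each
    node's top-K neighbours, row-normalise, and refine embeddings."""
    n = H.shape[0]
    # similarity scoring (linear stand-in for the MLP)
    D = np.exp(-np.abs(H[:, None, :] - H[None, :, :]))  # (n, n, d)
    A = D @ W_a                                         # (n, n) scores
    np.fill_diagonal(A, -np.inf)                        # no self loops
    # KNN sparsification: keep the top-K entries per row
    keep = np.argsort(-A, axis=1)[:, :K]
    A_hat = np.full_like(A, -np.inf)
    for i in range(n):
        A_hat[i, keep[i]] = A[i, keep[i]]
    A_hat = softmax_rows(A_hat)                         # row normalisation
    # refine embeddings on the sparsified graph, LeakyReLU (slope 0.2)
    H_new = A_hat @ H @ W_r
    return np.where(H_new > 0, H_new, 0.2 * H_new), A_hat

rng = np.random.default_rng(2)
n, d, K = 5, 3, 2          # 2K support molecules + 1 query node, K neighbours
H = rng.normal(size=(n, d))
W_a = rng.normal(size=d)
W_r = rng.normal(size=(d, d))
for _ in range(2):         # T = 2 refinement iterations
    H, A_hat = relation_graph_step(H, K, W_a, W_r)
print(A_hat.shape)  # (5, 5)
```

Each row of the final adjacency sums to one and has exactly $K$ nonzero entries, matching the $K$NN structure described above.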
We further design a neighbor alignment regularizer to penalize the selection of wrong neighbors in the relation graph. It is formulated as
\begin{align}\label{eq:neighbor-reg}
r(\hat{\mathbf{A}}_{\tau},\mathbf{A}^*_{\tau})
=\sum\nolimits_{i}\NM{[\mathbf{A}^*_{\tau}]_{i:}-[\hat{\mathbf{A}}_{\tau}]_{i:}}{2}^2,
\end{align}
where ${\mathbf{A}}^*_{\tau}$ is computed using ground-truth labels with $[\mathbf{A}^*_{\tau}]_{ij}=1$ if $y_{\tau,i}=y_{\tau,j}$ and 0 otherwise.
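A direct transcription of this regularizer, building the label-derived target $\mathbf{A}^*_{\tau}$ and summing the squared row differences over all nodes (which we take to be the intended reduction), can be sketched as:

```python
import numpy as np

def neighbor_alignment(A_hat, y):
    """Sketch of the neighbor alignment regularizer: penalise rows of
    the learned adjacency A_hat that deviate from the target A*, where
    [A*]_ij = 1 iff molecules i and j share the same label."""
    A_star = (y[:, None] == y[None, :]).astype(float)
    return np.sum((A_star - A_hat) ** 2)

y = np.array([1, 1, 0, 0])
A_hat = (y[:, None] == y[None, :]).astype(float)  # perfectly aligned graph
print(neighbor_alignment(A_hat, y))  # 0.0
```

The penalty is zero exactly when every learned edge connects molecules of the same class, and grows with each wrongly selected neighbor.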
Denote $\hat{\mathbf{y}}_{\tau,i}$ as the class prediction of $\mathbf{x}_{\tau,i}$ w.r.t. active/inactive, which is calculated as
\begin{align}\label{eq:predict}
\hat{\mathbf{y}}_{\tau,i}=\texttt{softmax}
\big(\mathbf{W}_{c}\cdot
{\mathbf{h}}_{\tau,i}
\big),\!
\end{align}
where $[\texttt{softmax}(\mathbf{x})]_i=\exp([\mathbf{x}]_i)/\sum_j\exp([\mathbf{x}]_j)$, and
$\mathbf{W}_c$ is a learnable parameter.
\subsection{Training and Inference}
\label{sec:train-infer}
For simplicity, we denote PAR as $f_{\bm{\bm{\theta}},\bm{\Phi}}$.
In particular, $\bm{\bm{\theta}}=\{\mathbf{W}_g,\mathbf{W}_a,\mathbf{W}_r\}$ denotes the collection of parameters of the graph-based molecular encoder and the adaptive relation graph learning module, while $\bm{\Phi}=\{\mathbf{W}_p,\mathbf{W}_c\}$ includes the parameters of the property-aware molecular embedding function and the classifier.
We adopt the gradient-based meta-learning strategy \cite{finn2017model}:
a good parameter initialization is learned from a set of meta-training tasks $\{ \mathcal{T}_\tau\}_{\tau=1}^{N_t}$, which acts as the starting point for each task $\mathcal{T}_\tau$.
Upon this general strategy, we selectively update parameters within tasks in order to encourage the model to capture generic and property-aware information separately.
In detail,
we keep $\bm{\bm{\theta}}$ fixed while
fine-tuning $\bm{\Phi}$ as $\bm{\Phi}_\tau$ on $\mathcal{S}_\tau$ in each $\mathcal{T}_{\tau}$.
The training loss
$\mathcal L(\mathcal{S}_\tau,f_{\bm{\bm{\theta}},\bm{\Phi}})$ evaluated on $\mathcal{S}_{\tau}$ takes the form:
\begin{align}\label{eq:support-loss}
\mathcal L(\mathcal{S}_\tau,f_{\bm{\bm{\theta}},\bm{\Phi}})
= \sum\nolimits_{(\mathbf{x}_{\tau,i},y_{\tau,i})\in\mathcal{S}_\tau} - \mathbf{y}_{\tau,i}^\top \cdot \log(\hat{\mathbf{y}}_{\tau,i})
+r(\hat{\mathbf{A}}_{\tau},\mathbf{A}^*_{\tau}),
\end{align}
where
$\mathbf{y}_{\tau,i}\in\mathbb R^2$ is the one-hot encoding of the ground-truth class $c\in\{0,1\}$.
The first term is the cross-entropy classification loss, and the second term is the
neighbor alignment regularizer defined in \eqref{eq:neighbor-reg}.
$\bm{\Phi}_\tau$ is obtained by
taking a few gradient descent updates:
\begin{align}
\label{eq:maml-inner}
\bm{\Phi}_\tau
= \bm{\Phi}-\alpha \nabla_{\bm{\Phi}} \mathcal L(\mathcal{S}_{\tau}, f_{\bm{\bm{\theta}},\bm{\Phi}}),
\end{align}
with learning rate $\alpha$.
$\bm{\bm{\theta}}^*$ and $\bm{\Phi}^*$ are learned by optimizing the following objective:
\begin{align}\label{eq:maml-outer} \min_{\bm{\bm{\theta}},\bm{\Phi}}\sum\nolimits_{\tau=1}^{N_t}\mathcal L(\mathcal{Q}_{\tau},f_{\bm{\bm{\theta}},\bm{\Phi}_\tau}),
\end{align}
where the loss $\mathcal L(\mathcal{Q}_{\tau},f_{\bm{\bm{\theta}},\bm{\Phi}_\tau})$
takes the same form as \eqref{eq:support-loss} but is evaluated on $\mathcal{Q}_{\tau}$ instead.
It is also optimized by gradient descent~\cite{finn2017model}.
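The selective-update strategy of \eqref{eq:maml-inner} and \eqref{eq:maml-outer} can be illustrated on a toy model. To stay self-contained, we use a linear ``encoder'' playing the role of $\bm{\theta}$, a logistic classifier playing the role of $\bm{\Phi}$, and a first-order approximation of the meta-gradient; all of these are simplifying assumptions and not the PAR implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grads(theta, phi, X, y):
    """Cross-entropy gradients for the toy model p = sigmoid((X@theta)@phi)."""
    h = X @ theta                       # "encoder" output
    p = sigmoid(h @ phi)
    e = (p - y) / len(y)
    return X.T @ np.outer(e, phi), h.T @ e

def finetune_phi(theta, phi, Xs, ys, alpha=0.3, steps=3):
    """Inner loop: theta stays fixed, only phi is adapted on the support set."""
    phi_t = phi.copy()
    for _ in range(steps):
        phi_t -= alpha * grads(theta, phi_t, Xs, ys)[1]
    return phi_t

def meta_loss(theta, phi, tasks):
    """Meta-objective: mean query-set loss after in-task fine-tuning."""
    tot = 0.0
    for Xs, ys, Xq, yq in tasks:
        phi_t = finetune_phi(theta, phi, Xs, ys)
        p = sigmoid((Xq @ theta) @ phi_t)
        tot -= np.mean(yq * np.log(p + 1e-9) + (1 - yq) * np.log(1 - p + 1e-9))
    return tot / len(tasks)

def meta_step(theta, phi, tasks, beta=0.05):
    """First-order meta-update: accumulate query-set gradients at the
    fine-tuned phi_t and apply them to both theta and phi."""
    g_th, g_ph = np.zeros_like(theta), np.zeros_like(phi)
    for Xs, ys, Xq, yq in tasks:
        phi_t = finetune_phi(theta, phi, Xs, ys)
        dth, dph = grads(theta, phi_t, Xq, yq)
        g_th += dth
        g_ph += dph
    return theta - beta * g_th, phi - beta * g_ph

rng = np.random.default_rng(3)
d_in, d_h = 3, 2
theta = 0.1 * rng.normal(size=(d_in, d_h))
phi = 0.1 * rng.normal(size=d_h)

def make_task():
    """Toy episode: label is the sign of feature 0 (a shared 'property')."""
    X = rng.normal(size=(8, d_in))
    y = (X[:, 0] > 0).astype(float)
    return X[:4], y[:4], X[4:], y[4:]

tasks = [make_task() for _ in range(4)]
loss0 = meta_loss(theta, phi, tasks)
for _ in range(100):
    theta, phi = meta_step(theta, phi, tasks)
loss1 = meta_loss(theta, phi, tasks)
print(loss0, loss1)
```

The point of the sketch is the division of labor: the shared encoder parameters are only updated in the outer loop, while the task-specific classifier is adapted inside every episode, mirroring the fixed $\bm{\theta}$ / fine-tuned $\bm{\Phi}_\tau$ split above.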
The complete algorithm of PAR is shown in Algorithm~\ref{alg:PAR}.
Lines~6--7 correspond to the property-aware embedding $\mathbf{p}_{\tau,i}$ which encodes substructures w.r.t.\ the target property
(Section~\ref{sec:proposed-emb}).
Lines~8--12 correspond to adaptive relation graph learning which facilitates effective label propagation among similar molecules
(Section~\ref{sec:proposed-relation-graph}).
\begin{algorithm}[t]
\caption{Meta-training procedure for PAR.}
\begin{algorithmic}[1]
\STATE initialize $\bm{\bm{\theta}}=\{\mathbf{W}_g,\mathbf{W}_a,\mathbf{W}_r\}$ and $\bm{\Phi}=\{\mathbf{W}_p,\mathbf{W}_c\}$ randomly;
if a pretrained molecular encoder is available, take its parameter as $\mathbf{W}_g$;
\WHILE {not done}
\STATE sample a batch of tasks $\mathcal{T}_{\tau}$;
\FOR {all $\mathcal{T}_{\tau}$}
\STATE sample support set $\mathcal{S}_\tau$ and query set $\mathcal{Q}_\tau$ from $\mathcal{T}_\tau$;
\STATE obtain molecular embedding $\mathbf{g}_{\tau,i}$ for each $\mathbf{x}_{\tau,i}$ by a graph-based molecular encoder;
\STATE adapt $\mathbf{g}_{\tau,i}$ to be property-aware $\mathbf{p}_{\tau,i}$ by \eqref{eq:project};
\STATE initialize node embeddings as $\mathbf{h}^{(0)}_{\tau,i}=\mathbf{p}_{\tau,i}$;
\FOR{$t=1,\dots,T$}
\STATE estimate adjacency matrix $\mathbf{A}^{(t)}_\tau$ of relation graph among molecules using $\mathbf{h}^{(t-1)}_{\tau,i}$ by \eqref{eq:relation-adj};
\STATE refine $\mathbf{h}^{(t)}_{\tau,i}$ on the updated relation graph $\mathbf{A}^{(t)}_\tau$ by \eqref{eq:relation-emb};
\ENDFOR
\STATE obtain class prediction $\hat{\mathbf{y}}_{\tau,i}$ using $\mathbf{h}_{\tau,i}=\mathbf{h}^{(T)}_{\tau,i}$;
\STATE evaluate training loss $\mathcal L(\mathcal{S}_\tau,f_{\bm{\bm{\theta}},\bm{\Phi}})$ on $\mathcal{S}_\tau$;
\STATE fine-tune $\bm{\Phi}$ as $\bm{\Phi}_\tau$ by \eqref{eq:maml-inner};
\STATE evaluate testing loss $\mathcal L(\mathcal{Q}_\tau,f_{\bm{\bm{\theta}},\bm{\Phi}_\tau})$ on $\mathcal{Q}_\tau$;
\ENDFOR
\STATE update $\bm{\bm{\theta}}$ and $\bm{\Phi}$ by \eqref{eq:maml-outer};
\ENDWHILE
\end{algorithmic}
\label{alg:PAR}
\end{algorithm}
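The relation graph steps of Algorithm~\ref{alg:PAR} (lines~8--12) can be sketched in a dependency-free toy form; note the learned MLP similarity of \eqref{eq:relation-adj} is replaced here by a fixed RBF kernel and the refinement of \eqref{eq:relation-emb} by row-normalized neighbor averaging, purely for illustration:

```python
import math

def estimate_adjacency(H):
    """Pairwise similarity among node embeddings (stand-in for Eq. relation-adj;
    the paper learns this similarity, here a fixed RBF kernel)."""
    n = len(H)
    return [[math.exp(-sum((a - b) ** 2 for a, b in zip(H[i], H[j])))
             for j in range(n)] for i in range(n)]

def knn_sparsify(A, k):
    """Keep only each node's k largest entries (stand-in for Eq. adj-knn)."""
    out = []
    for row in A:
        thresh = sorted(row, reverse=True)[k - 1]
        out.append([v if v >= thresh else 0.0 for v in row])
    return out

def refine(H, A):
    """One round of message passing on the relation graph (stand-in for
    Eq. relation-emb), simplified to weighted neighbor averaging."""
    n, d = len(H), len(H[0])
    return [[sum(A[i][j] * H[j][m] for j in range(n)) / sum(A[i])
             for m in range(d)] for i in range(n)]

# Two similar molecules and one dissimilar one (toy 2-d embeddings).
H = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]]
for _ in range(2):                       # T = 2 alternating rounds
    A = knn_sparsify(estimate_adjacency(H), k=2)
    H = refine(H, A)
```

After the alternating rounds, the embeddings of the two similar molecules move toward each other while the dissimilar one stays apart, which is the behavior the adaptive relation graph is meant to induce.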
For inference, the generalization ability of PAR is evaluated on the query set $\mathcal{Q}_{\text{new}}$ of each new task $\mathcal{T}_\text{new}$, which tests a new property in the meta-testing stage.
As in meta-training,
$\bm{\bm{\theta}}^*$ is kept fixed and
$\bm{\Phi}^*$ is fine-tuned on
$\mathcal{S}_{\text{new}}$.
\section{Experiments}
\label{sec:expts}
\begin{center}
\begin{minipage}{0.4\textwidth}
We perform experiments on widely used benchmark few-shot molecular property prediction datasets (Table~\ref{tab:data_stat}) included in MoleculeNet \cite{wu2018moleculenet}.
Details of these benchmarks are in Appendix~\ref{app:data}.
\end{minipage}
\begin{minipage}{0.59\textwidth}
\centering
\vspace{-25px}
\small
\captionof{table}{Summary of datasets used.}
\vspace{-5px}
\setlength\tabcolsep{4pt}
\begin{tabular}{l|cccc}
\hline
Dataset & Tox21 & SIDER & MUV & ToxCast \\ \hline
\# Compounds & 8014 & 1427 & 93127 & 8615 \\
\# Tasks & 12 & 27 & 17 & 617 \\
\# Meta-Training Tasks & 9 & 21 & 12 & 450 \\
\# Meta-Testing Tasks & 3 & 6 & 5 & 167 \\ \hline
\end{tabular}
\label{tab:data_stat}
\vspace{-20px}
\end{minipage}
\end{center}
\subsection{Experimental Settings}
\label{sec:expts-data}
\paragraph{Baselines.}
We compare our \textbf{PAR} (Algorithm~\ref{alg:PAR}) with two types of baselines:
(i) FSL methods with graph-based molecular encoder learned from scratch, including
\textbf{Siamese}~\cite{koch2015siamese},
\textbf{ProtoNet}~\cite{snell2017prototypical},
\textbf{MAML}~\cite{finn2017model}, \textbf{TPN}~\cite{liu2018learning},
\textbf{EGNN}~\cite{kim2019edge}, and
\textbf{IterRefLSTM}~\cite{altae2017low};
and (ii) methods which leverage pretrained graph-based molecular encoder including
\textbf{Pre-GNN}~\cite{hu2019strategies},
\textbf{Meta-MGNN}~\cite{guo2021few},
and \textbf{Pre-PAR} which is our PAR equipped with Pre-GNN.
We use the results of Siamese and IterRefLSTM reported in~\cite{altae2017low}, as their code is not available.
For the other methods,
we run the public code released by the respective authors.
More implementation details are in Appendix~\ref{app:codes}.
\paragraph{Generic Graph-based Molecular Representation.}
Following \cite{hu2019strategies,guo2021few}, we use RDKit~\cite{landrum2013rdkit} to build molecular graphs from raw SMILES, and to extract atom features (atom number and chirality tag) and bond features (bond type and bond direction).
For all methods re-implemented by us, we use GIN \cite{xu2018powerful} as the graph-based molecular encoder to extract molecular embeddings.
Pre-GNN, Meta-MGNN and Pre-PAR further use the pretrained GIN provided by the authors of~\cite{hu2019strategies}.
\paragraph{Evaluation Metrics.}
Following~\cite{hu2019strategies,guo2021few},
we evaluate the binary classification performance by ROC-AUC scores calculated on the query set of each meta-testing task.
We run each experiment ten times with different random seeds, and report the mean and standard deviation of ROC-AUC computed over all meta-testing tasks.
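This aggregation can be sketched in plain Python; the rank-based AUC below is equivalent to the Mann--Whitney U statistic and assumes no tied scores (real evaluations typically call \texttt{sklearn.metrics.roc\_auc\_score}), and the per-task numbers are made up:

```python
from statistics import mean, stdev

def roc_auc(labels, scores):
    """Rank-based ROC-AUC (Mann-Whitney U); assumes no tied scores."""
    pairs = sorted(zip(scores, labels))
    pos = sum(labels)
    neg = len(labels) - pos
    rank_sum = sum(r for r, (_, y) in enumerate(pairs, start=1) if y == 1)
    return (rank_sum - pos * (pos + 1) / 2) / (pos * neg)

# One AUC per meta-testing task (illustrative numbers only).
task_aucs = [0.82, 0.79, 0.85, 0.81]
summary = (mean(task_aucs), stdev(task_aucs))
```

The reported tables then show `mean (std)` of such per-task scores over the ten seeds.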
\subsection{Performance Comparison}
Table \ref{tab:exp_fsl} shows the results.
Results of Siamese, IterRefLSTM and Meta-MGNN on ToxCast are not provided: code for the first two methods is unavailable and they have not been evaluated on ToxCast before, while Meta-MGNN runs out of memory as it weighs the contribution of each task among all tasks during meta-training.
As can be seen, Pre-PAR consistently obtains the best performance while PAR obtains the best performance among methods using graph-based molecular encoders learned from scratch.
In terms of average improvement, PAR performs significantly better than the best baseline learned from scratch (i.e., EGNN) by 1.59\%, and Pre-PAR is better than the best baseline with a pretrained molecular encoder (i.e., Meta-MGNN) by 1.49\%.
Pre-PAR also takes less time and episodes to converge than Meta-MGNN, which is shown in Appendix~\ref{app:time}.
In addition,
we observe that
FSL methods that learn relation graphs (i.e., TPN and EGNN) obtain better performance than the classic ProtoNet and MAML.
\begin{table*}[htbp]
\small
\centering
\setlength\tabcolsep{4pt}
\caption{ROC-AUC scores on benchmark molecular property prediction datasets.
The best results (according to the pairwise t-test with 95\% confidence) are highlighted in gray.
Methods which use pretrained graph-based molecular encoder are marked in green.
}
\begin{tabular}{l|cc|cc|cc|cc}
\hline
\multirow{2}{*}{$\!\!$Method$\!\!$} & \multicolumn{2}{c|}{Tox21} & \multicolumn{2}{c|}{SIDER} & \multicolumn{2}{c|}{MUV} & \multicolumn{2}{c}{ToxCast} \\
& \multicolumn{1}{c}{10-shot} & \multicolumn{1}{c|}{1-shot} & \multicolumn{1}{c}{10-shot} & \multicolumn{1}{c|}{1-shot} & \multicolumn{1}{c}{10-shot} & \multicolumn{1}{c|}{1-shot}& \multicolumn{1}{c}{10-shot} & \multicolumn{1}{c}{1-shot} \\ \hline
$\!\!$Siamese & $\!\!$$80.40_{(0.35)}$$\!\!$ & $\!\!$$65.00_{(1.58)}$$\!\!$ & $\!\!$$71.10_{(4.32)}$$\!\!$ & $\!\!$$51.43_{(3.31)}$$\!\!$ & $\!\!$$59.96_{(5.13)}$$\!\!$ & $\!\!$$50.00_{(0.17)}$$\!\!$ &-&- \\
$\!\!$ProtoNet & $\!\!$$74.98_{(0.32)}$$\!\!$ & $\!\!$$65.58_{(1.72)}$$\!\!$ & $\!\!$$64.54_{(0.89)}$$\!\!$ & $\!\!$$57.50_{(2.34)}$$\!\!$ & {$\!\!$$65.88_{(4.11)}$$\!\!$} & $\!\!$$58.31_{(3.18)}$$\!\!$ & $\!\!$$63.70_{(1.26)}$$\!\!$ & $\!\!$$56.36_{(1.54)}$$\!\!$ \\
$\!\!$MAML & $\!\!$$80.21_{(0.24)}$$\!\!$ & $\!\!$$75.74_{(0.48)}$$\!\!$ & $\!\!$$70.43_{(0.76)}$$\!\!$ & $\!\!$$67.81_{(1.12)}$$\!\!$ & $\!\!$$63.90_{(2.28)}$$\!\!$ & $\!\!$$60.51_{(3.12)}$$\!\!$ & {$\!\!$$66.79_{(0.85)}$$\!\!$} & {$\!\!$$65.97_{(5.04)}$$\!\!$} \\
$\!\!$TPN & $\!\!$$76.05_{(0.24)}$$\!\!$ & $\!\!$$60.16_{(1.18)}$$\!\!$ & $\!\!$$67.84_{(0.95)}$$\!\!$ & $\!\!$$62.90_{(1.38)}$$\!\!$ & $\!\!$$65.22_{(5.82)}$$\!\!$ & $\!\!$$50.00_{(0.51)}$$\!\!$ & $\!\!$$62.74_{(1.45)}$$\!\!$ & $\!\!$$50.01_{(0.05)}$$\!\!$ \\
$\!\!$EGNN & {$\!\!$$81.21_{(0.16)}$$\!\!$} & $\!\!$$79.44_{(0.22)}$$\!\!$ & {$\!\!$$72.87_{(0.73)}$$\!\!$} & $\!\!70.79_{(0.95)}$$\!\!$ & $\!\!$$65.20_{(2.08)}$$\!\!$ & {$\!\!$$62.18_{(1.76)}$$\!\!$} & $\!\!$$63.65_{(1.57)}$$\!\!$ & $\!\!$$61.02_{(1.94)}$$\!\!$ \\
$\!\!$IterRefLSTM$\!\!$ & $\!\!$$81.10_{(0.17)}$$\!\!$ & \cellcolor{LightGray}{$\!\!$$80.97_{(0.10)}$$\!\!$} & $\!\!$$69.63_{(0.31)}$$\!\!$ & {$\!\!$$71.73_{(0.14)}$$\!\!$} & $\!\!$$49.56_{(5.12)}$$\!\!$ & $\!\!$$48.54_{(3.12)}$$\!\!$ &-&- \\
$\!\!$PAR & \cellcolor{LightGray}{$\!\!$$82.06_{(0.12)}$$\!\!$} & {$\!\!$$80.46_{(0.13)}$$\!\!$} & \cellcolor{LightGray}{$\!\!$$74.68_{(0.31)}$$\!\!$} & \cellcolor{LightGray}{$\!\!$$71.87_{(0.48)}$$\!\!$} & \cellcolor{LightGray}{$\!\!$$66.48_{(2.12)}$$\!\!$} & \cellcolor{LightGray}{$\!\!$$64.12_{(1.18)}$$\!\!$} & \cellcolor{LightGray}{$\!\!$$69.72_{(1.63)}$$\!\!$} & \cellcolor{LightGray}{$\!\!$$67.28_{(2.90)}$$\!\!$} \\ \hline
\cellcolor{teagreen}$\!\!$Pre-GNN & $\!\!$$82.14_{(0.08)}$$\!\!$ & $\!\!$$81.68_{(0.09)}$$\!\!$ & $\!\!$$73.96_{(0.08)}$$\!\!$ & {$\!\!$$73.24_{(0.12)}$$\!\!$} & $\!\!$$67.14_{(1.58)}$$\!\!$ & $\!\!$$64.51_{(1.45)}$$\!\!$ & {$\!\!$$73.68_{(0.74)}$$\!\!$} & {$\!\!$$72.90_{(0.84)}$$\!\!$} \\
\cellcolor{teagreen}$\!\!$Meta-MGNN$\!\!$ & {$\!\!$$82.97_{(0.10)}$$\!\!$} & {$\!\!$$82.13_{(0.13)}$$\!\!$} & {$\!\!$$75.43_{(0.21)}$$\!\!$} & {$\!\!$$73.36_{(0.32)}$$\!\!$} & {$\!\!$$68.99_{(1.84)}$$\!\!$} & {$\!\!$$65.54_{(2.13)}$$\!\!$} &-&- \\
\cellcolor{teagreen}$\!\!$Pre-PAR & \cellcolor{LightGray}{$\!\!$$84.93_{(0.11)}$$\!\!$} & \cellcolor{LightGray}{$\!\!$$83.01_{(0.09)}$$\!\!$} & \cellcolor{LightGray}{$\!\!$$78.08_{(0.16)}$$\!\!$} & \cellcolor{LightGray}{$\!\!$$74.46_{(0.29)}$$\!\!$} & \cellcolor{LightGray}{$\!\!$$69.96_{(1.37)}$$\!\!$} & \cellcolor{LightGray}{$\!\!66.94_{(1.12)}$$\!\!$} & \cellcolor{LightGray}{$\!\!$$75.12_{(0.84)}$$\!\!$} & \cellcolor{LightGray}{$\!\!$$73.63_{(1.00)}$$\!\!$} \\ \hline
\end{tabular}
\label{tab:exp_fsl}
\end{table*}
\subsection{Ablation Study}
\label{sec:expt-model}
We further compare Pre-PAR and PAR with the following variants:
(i) \textbf{w/o P}: w/o applying the property-aware embedding function;
(ii) \textbf{w/o context in P}: w/o context $\mathbf{b}_{\tau,i}$ in equation \eqref{eq:project};
(iii) \textbf{w/o R}: w/o using the adaptive relation graph learning;
(iv) \textbf{w/ cos-sim in R}: use cosine similarity to obtain the adjacency matrix as
$\big[\mathbf{A}_{\tau} \big]_{ij}={\mathbf{p}_{\tau,i}^\top\mathbf{p}_{\tau,j}}
/({\NM{\mathbf{p}_{\tau,i}}{2}\NM{\mathbf{p}_{\tau,j}}{2}})$, then calculate \eqref{eq:adj-knn} and \eqref{eq:relation-emb} as in PAR;
(v) \textbf{w/o $K$NN in R}: w/o reducing $\mathcal{G}_{\tau}$ to $K$NN graph;
(vi) \textbf{w/o reg}: w/o using the
neighbor alignment regularizer in equation \eqref{eq:support-loss};
and
(vii) \textbf{tune all}: fine-tune all parameters on line 15 of Algorithm~\ref{alg:PAR}.
Note that
these variants follow a controlled-variable design:
they cover all components of training PAR without overlapping functionalities.
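The fixed similarity used by the ``w/ cos-sim in R'' variant can be sketched as follows; the subsequent $K$NN reduction \eqref{eq:adj-knn} and refinement \eqref{eq:relation-emb} are unchanged from PAR:

```python
import math

def cosine_adjacency(P):
    """[A]_ij = p_i . p_j / (||p_i|| ||p_j||): the fixed cosine-similarity
    adjacency used by the 'w/ cos-sim in R' ablation, replacing the
    learned similarity function."""
    def norm(v):
        return math.sqrt(sum(x * x for x in v))
    n = len(P)
    return [[sum(a * b for a, b in zip(P[i], P[j])) / (norm(P[i]) * norm(P[j]))
             for j in range(n)] for i in range(n)]

# Toy property-aware embeddings of three molecules.
A = cosine_adjacency([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
```

Because this similarity is fixed, it cannot adapt to the target property, which is the deficit the ablation exposes.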
Results on 10-shot tasks are in Figure~\ref{fig:exp_abl}.
Again,
Pre-PAR
obtains better performance than PAR due to a better starting point.
PAR and Pre-PAR outperform all their variants, and removing any component leads to a significant performance drop.
In particular, the performance gain of PAR and Pre-PAR with respect to ``w/ cos-sim in R''
validates the necessity of learning a similarity function from the data rather than using a fixed cosine similarity.
We also try to iterate the estimation of relation graph constructed by cosine similarity, but observe a performance drop given more iterations.
Results on 1-shot tasks are reported in Appendix~\ref{app:ablation},
where the observations are consistent.
\begin{figure}[ht]
\centering
\subfigure[Pre-PAR]
{\includegraphics[width=0.45\textwidth]{img/prepar2w10s.pdf}}
\quad
\subfigure[PAR]
{\includegraphics[width=0.45\textwidth]{img/par2w10s.pdf}}
\captionsetup{width=.9\linewidth}
\captionof{figure}{Ablation study on 10-shot tasks from Tox21.}
\label{fig:exp_abl}
\vspace{-10pt}
\end{figure}
\subsection{Using Other Graph-based Molecular Encoders}
\label{app:other-gcn}
In the experiments, we use GIN and its pretrained version.
However,
as introduced in Section~\ref{sec:proposed-emb}, our PAR is compatible with any existing graph-based molecular encoder introduced in Section~\ref{sec:back-gnn}.
Here, we consider the following popular choices as the encoder to output $\mathbf{g}_{\tau,i}$: GIN \cite{xu2018powerful},
GCN~\cite{duvenaud2015convolutional}, GraphSAGE \cite{hamilton2017inductive} and GAT~\cite{velivckovic2017graph},
which are either learned from scratch or pretrained.
We compare the proposed PAR with simply fine-tuning the encoder on support sets (denoted as GNN).
Figure~\ref{fig:exp_encoder} shows the results.
As can be seen,
GIN is the best graph-based molecular encoder among the four chosen GNNs.
PAR outperforms the fine-tuned GNN consistently. This validates the effectiveness of the property-aware molecular embedding function and the adaptive relation graph learning module.
We further notice that using pretrained encoders can improve the performance except for GAT, which is also observed in \cite{hu2019strategies}.
Although using pretrained graph-based molecular encoders improves performance in general,
note that both pretrained encoders and encoders learned from scratch are useful.
Pretrained encoders contain rich generic molecular information learned from enormous amounts of unlabeled data, while encoders learned from scratch can carry new insights.
For example, the recent DimeNet \cite{klicpera2019directional} can model directional information such as bond angles and rotations between atoms, but has no pretrained version. As our method can use any molecular encoder to obtain generic molecular embeddings, it can easily accommodate newly proposed encoders with or without pretraining.
\begin{figure}[ht]
\centering
\subfigure[10-shot]{\includegraphics[width=0.485\textwidth]{img/encoder_10shot.pdf}}
\subfigure[1-shot]{\includegraphics[width=0.485\textwidth]{img/encoder_1shot.pdf}}
\caption{ROC-AUC scores on Tox21 using different graph-based molecular encoders.}
\label{fig:exp_encoder}
\vspace{-10pt}
\end{figure}
\subsection{Case Study}
\label{sec:expts-case-study}
Finally, we validate whether PAR
can obtain different property-aware molecular embeddings and relation graphs for tasks containing overlapping molecules but evaluating different properties.
To examine this under a controlled setting, we
sample a fixed group of 10 molecules on Tox21 (Table~\ref{tab:select10} in Appendix~\ref{app:mol-detail}) which coexist in different meta-testing tasks (i.e., the $10$th, $11$th and $12$th tasks).
Provided with the meta-learned parameters $\bm{\bm{\theta}}^*$ and $\bm{\Phi}^*$,
we take these 10 molecules as the support set to fine-tune
$\bm{\Phi}^*$ as $\bm{\Phi}^*_\tau$ and keep $\bm{\bm{\theta}}^*$ fixed in each task $\mathcal{T}_{\tau}$.
As the support set is fixed now,
the ratio of active molecules to inactive molecules among the 10 molecules may not be 1:1 in the three tasks. Thus the resultant task may not evenly contain $K$ labeled samples per class.
\paragraph{Visualization of the Learned Relation Graphs.}
As described in Section~\ref{sec:proposed-relation-graph}, PAR returns $\hat{\mathbf{A}}_{\tau}$ as the adjacency matrix encoding the optimized relation graph among molecules.
Each element $[\hat{\mathbf{A}}_{\tau}]_{ij}$ records the pairwise similarity among the 10 molecules and a random query (which is then dropped).
As the numbers of active and inactive molecules may not be equal in the support set,
we no longer reduce the adjacency matrix ${\mathbf{A}}_{\tau}$ to $\hat{\mathbf{A}}_{\tau}$, which encodes the $K$NN graph.
Figure~\ref{fig:adj_full} plots the optimized adjacency matrices obtained on all three tasks.
As can be observed, PAR obtains different adjacency matrices for different property-prediction tasks. Besides, the learned adjacency matrices are visually similar to the ones computed using ground-truth labels.
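The ground-truth reference matrices follow directly from the labels ($[\mathbf{A}^*_{\tau}]_{ij}=1$ iff molecules $i$ and $j$ share the same label); as a sketch:

```python
def ground_truth_adjacency(labels):
    """A*_ij = 1 if molecules i and j have the same label, else 0 -- the
    reference matrices shown in the first row of the comparison figure."""
    return [[1 if yi == yj else 0 for yj in labels] for yi in labels]

# Toy binary labels for four molecules of one task.
A_star = ground_truth_adjacency([1, 0, 1, 0])
```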
\begin{figure}[htbp]
\centering
{\includegraphics[width=0.32\textwidth]{img/task_10_label}}
\quad
{\includegraphics[width=0.28\textwidth]{img/task_11_label}}
\quad
{\includegraphics[width=0.32\textwidth]{img/task_12_label}}
\vspace{5pt}
\subfigure[the $10$th task]{\includegraphics[width=0.32\textwidth]{img/task_10_neighbour}}
\quad
\subfigure[the $11$th task]{\includegraphics[width=0.28\textwidth]{img/task_11_neighbour}}
\quad
\subfigure[the $12$th task]{\includegraphics[width=0.32\textwidth]{img/task_12_neighbour}}
\caption{Comparison between
$\mathbf{A}^*_{\tau}$ computed using ground-truth labels (the first row) and
adjacency matrix ${\mathbf{A}}_{\tau}$ returned by PAR (the second row)
for the ten molecules. We set
$[\mathbf{A}^*_{\tau}]_{ij}=1$ if molecules $\mathbf{x}_{\tau,i}$ and $\mathbf{x}_{\tau,j}$ have the same label and 0 otherwise.
}
\label{fig:adj_full}
\end{figure}
\begin{figure}[htbp]
\centering
{\includegraphics[width=0.32\textwidth]{img/tsne_task_10_g}}
{\includegraphics[width=0.30\textwidth]{img/tsne_task_11_g}}
{\includegraphics[width=0.30\textwidth]{img/tsne_task_12_g}}
{\includegraphics[width=0.32\textwidth]{img/tsne_task_10_p}}
{\includegraphics[width=0.30\textwidth]{img/tsne_task_11_p}}
{\includegraphics[width=0.30\textwidth]{img/tsne_task_12_p}}
\vspace{-5px}
\subfigure[the $10$th task]{\includegraphics[width=0.32\textwidth]{img/tsne_task_10_h}}
\subfigure[the $11$th task]{\includegraphics[width=0.30\textwidth]{img/tsne_task_11_h}}
\subfigure[the $12$th task]{\includegraphics[width=0.30\textwidth]{img/tsne_task_12_h}}
\caption{t-SNE visualization of $\mathbf{g}_{\tau,i}$ (the first row), $\mathbf{p}_{\tau,i}$ (the second row), and $\mathbf{h}_{\tau,i}$ (the third row) of the ten molecules. Proto\_active (proto\_inactive) denotes the class prototype of active (inactive) class. }
\label{fig:tsne_change}
\vspace{-10pt}
\end{figure}
\paragraph{Visualization of the Learned Molecular Embeddings.}
We also present the t-SNE visualization of $\mathbf{g}_{\tau,i}$ (molecular embedding obtained by graph-based molecular encoders),
$\mathbf{p}_{\tau,i}$ (molecular embedding obtained by property-aware embedding function), and
$\mathbf{h}_{\tau,i}$ (molecular embedding returned by PAR) for these 10 molecules. For the same $\mathbf{x}_{\tau,i}$, $\mathbf{g}_{\tau,i}$ is the same across the $10$th, $11$th, and $12$th tasks, while $\mathbf{p}_{\tau,i}$ and $\mathbf{h}_{\tau,i}$ are property-aware.
Figure~\ref{fig:tsne_change} shows the results.
As shown, PAR indeed captures property-aware information when encoding the same molecules for different molecular property prediction tasks. From the first row to the third row in Figure~\ref{fig:tsne_change}, molecular embeddings gradually get closer to the class prototypes on all three tasks.
\section{Conclusion}
We propose Property-Aware Relation networks (PAR) to address the few-shot molecular property prediction problem.
PAR contains:
a graph-based molecular encoder to encode the topological structure of the molecular graph, atom features, and bond features into a molecular embedding;
a property-aware embedding function to obtain property-aware embeddings encoding context information of each task;
and
an adaptive relation graph learning module to construct a relation graph to effectively propagate information among similar molecules.
Empirical results consistently show that PAR obtains state-of-the-art performance on the few-shot molecular property prediction problem.
There are several directions to explore in the future.
In this paper, PAR is evaluated on biophysics and physiology molecular properties, which are modeled as classification tasks.
Since the prediction of quantum mechanics and physical chemistry properties is mainly formulated as regression,
it is interesting to extend PAR to handle these different levels of molecular properties.
In addition, although PAR targets few-shot molecular property prediction,
the proposed property-aware embedding function, adaptive relation graph learning module, and neighbor alignment regularizer can also help improve the performance of graph-based molecular encoders in general.
Finally, interpreting the substructures learned by PAR is also a meaningful direction.
\cleardoublepage
\newpage
\section*{Acknowledgements}
We sincerely thank the anonymous reviewers for their valuable comments and suggestions.
Parts of experiments were carried out on Baidu Data Federation Platform.
{
\bibliographystyle{unsrt}
\section{INTRODUCTION}
Accurate and plausible trajectory prediction in crowd scenarios for pedestrians plays a fundamental role in different applications, such as mobile robot navigation~\cite{luo2018porca}, Intelligent Transportation Systems and Intelligent Vehicles~\cite{ballan2016knowledge}, and shared space safety~\cite{li2021autonomous}.
Unlike vehicle movement governed by traffic rules, such as lane geometry, traffic lights and the headway direction, pedestrians may stop or turn at any time and interact more with neighbors, making their behavior highly stochastic.
In order to model pedestrian behavior, a variety of different methods have been applied.
In rule-based models, the interactions among pedestrians, namely social interactions, are described as forces \cite{helbing1995social}.
In data-driven models, attention mechanisms~\cite{vaswani2017attention} and graph convolutional networks~\cite{huang2019stgat} are widely used to extract social interactions and obtain excellent results using supervised learning \cite{alahi2016social,cheng2021amenet,yuan2021agentformer}.
Goal information can also reduce the uncertainties of pedestrians' behavior.
However, each of these methods has its own drawbacks.
Although the rule-based models can simulate a certain degree of pedestrian behavior, they are relatively less robust and resilient in the face of complex scenarios.
Data-driven models often achieve better performance but are data-dependent and less interpretable~\cite{cheng2021trajectory}. In goal-based models, goal information is often directly applied as an input or as the offset from the current position to the goal~\cite{mangalam2020not,dendorfer2020goal,chiara2022goal}. This is sub-optimal because the association between the goal and the current position is not established.
To this end, we propose a novel goal-based pedestrian trajectory prediction framework called \textit{ForceFormer}.
It takes as input not only the sequential motion information but also forces to train a Transformer-based backbone.
Unlike the previous models that directly use last position information as an input feature parallel to other features describing motion dynamics, we apply the goal information to derive social forces so that the changes in velocity, position, and direction are better linked to the goal information.
In addition, we use the generative model AgentFormer~\cite{yuan2021agentformer} as the trajectory prediction backbone, which utilizes the Transformer~\cite{vaswani2017attention} network to learn social interactions over the temporal dimension.
Simultaneously, to estimate a tentative goal position for computing forces at inference time, a goal-estimation module~\cite{mangalam2021goals,chiara2022goal} is applied.
More specifically, history trajectories are concatenated with semantic scene information, and they are fed into a U-Net~\cite{ronneberger2015u} structure to predict the potential goals.
With this goal-estimation module, we can obtain reliable goal information and as well naturally take into account the constraints of environmental factors.
In summary, our major contributions are as follows:
\begin{itemize}
\item We propose a goal-based trajectory prediction framework \textbf{ForceFormer}. It imports more interpretable features, i.\,e., social forces, into a data-driven model to learn stochastic pedestrian behavior.
\item Different variants of ForceFormer making use of goal information are studied, and we find two of them effective: a) \textbf{ForceFormer-Re} applies goal positions to derive repulsive forces, reinforcing the interaction information between the ego pedestrian and its neighbors and decreasing the possibility of collisions; b) \textbf{ForceFormer-Dr} applies goals to derive the driving force, enhancing destination guidance for predicting the ego pedestrian's future trajectory.
\item Extensive empirical studies are carried out on the widely used ETH/UCY \cite{pellegrini2009you,lerner2007crowds} pedestrian datasets. The experimental results show that ForceFormer performs on par with the state-of-the-art models measured by standard distance errors but it evidently decreases collisions, especially in dense pedestrian scenarios.
\end{itemize}
\section{RELATED WORK}
This section briefly reviews the works in sequence modeling, social interaction modeling, and goal-based models.
\vspace{6pt}
\noindent\textbf{Sequence Modeling.}
Essentially, motion trajectory is composed of positional information on time series.
Therefore, converting trajectory prediction to sequence-to-sequence modeling is one of the most common approaches.
In previous works, thanks to their powerful gating functions, Long Short-Term Memories (LSTMs)~\cite{hochreiter1997long} have been widely applied to many pedestrian trajectory prediction tasks and achieved excellent results, especially in the temporal dimension \cite{alahi2016social,huang2019stgat,zhang2019sr,xu2018encoding,cheng2021amenet}.
In recent years, with the great achievements of the Transformer~\cite{vaswani2017attention} network in the domain of Natural Language Process (NLP) \cite{devlin2018bert,lan2019albert}, Transformer-based models are also applied to trajectory forecasting.
In contrast to LSTMs, Transformer networks have a better capability of modeling temporal dependencies in long sequences based on the self-attention mechanism~\cite{giuliari2021transformer,yu2020spatio,yuan2021agentformer}.
In addition to the previously adopted deterministic approaches like LSTMs, an increasing number of deep generative models, such as conditional variational autoencoders (CVAEs) \cite{kingma2013auto, sohn2015learning} and generative adversarial networks (GANs) \cite{goodfellow2020generative} are applied to trajectory forecasting.
Rather than producing one single prediction, generative models learn the potential future trajectories as a distribution and generate multiple possible predictions from latent space.
For example, Social GAN and Sophie \cite{gupta2018social,sadeghian2019sophie} are proposed for pedestrian multi-path trajectory prediction via jointly training a generator and a discriminator.
Compared to GANs, CVAE models predict multiple plausible trajectories conditioned on the past trajectories and acquire better performance in recent works \cite{ivanovic2019trajectron,tang2019multiple,salzmann2020trajectron++,cheng2021amenet, Chen_2021_ICCV}.
\vspace{6pt}
\noindent\textbf{Social Interaction Modeling.} Besides modeling individual trajectory sequences, establishing the influence of pedestrians on each other or from the environment has been a critical issue in pedestrian trajectory prediction.
As groundbreaking work, Helbing et al. \cite{helbing1995social} leverage dynamic social forces to imitate the influence of the surroundings on pedestrians, e.\,g., a repulsive force for collision avoidance and an attractive force for social connection.
The social force model has been effectively applied in various fields like robotics \cite{ferrer2013robot} and crowd analysis \cite{mehran2009abnormal,johora2022generalizability}.
Another pioneering data-driven work is Social LSTM \cite{alahi2016social}. It proposes a new structure, called the social pooling layer, to aggregate the interaction information from neighbors.
With the development of graph neural networks (GNNs) \cite{kipf2016semi}, more recent works of deterministic models like \cite{yu2020spatio,kosaraju2019social} resort to modeling a crowd as a graph and combining GNNs with attention mechanisms to learn spatial interactions.
Other approaches like \cite{salzmann2020trajectron++,huang2019stgat} first encode features over social dimension at each independent time step.
Then, these social features are fed into another temporal sequence model to summarize the social relations over time.
Unlike these methods above, we import social forces at each time step to a Transformer-based backbone, facilitating the learning of social interactions among pedestrians.
\vspace{6pt}
\noindent\textbf{Goal-based Model.} Recently, goal-based models have become an effective way to improve prediction performance \cite{mangalam2020not,Zhao_2021_ICCV}.
Diverse goal information can provide more predictive possibilities to deterministic models~\cite{zhao2020tnt,chiara2022goal}. Moreover, pedestrians are motivated by their destinations, so highly uncertain behavior can be constrained through goal information \cite{mangalam2021goals}.
In contrast to those models that directly use the goal information as an input feature, we use the goal information to calculate the social forces for each pedestrian \cite{helbing1995social}, and the resulting forces are used as input to our prediction module.
However, the goal information is not accessible at inference time.
To circumvent this issue, we utilize a goal-estimation module \cite{mangalam2021goals} to estimate the goals at inference time.
\section{METHODOLOGY}
\subsection{Problem formulation}
In the context of pedestrian trajectory prediction problems, a complete trajectory of a pedestrian can be divided into two parts, the observed and the future trajectories.
The observed trajectory at time steps $t \leq 0$ is denoted as $X = (X^{-H}, X^{-H+1}, \dots, X^0)$, which in total includes $H+1$ observed time steps, while the future trajectory at time steps $t > 0$ is denoted as $Y = (Y^1, Y^2, \dots, Y^T)$ over $T$ future time steps.
Similar to \cite{yuan2021agentformer}, we use the $x$- and $y$-coordinates and the velocity sequence in the 2D coordinate system to parameterize trajectories.
In addition, the joint social sequences of all $N$ pedestrians in the same scene at the same time step $t$ are denoted as $X^t = (x^t_1, x^t_2, \dots, x^t_N)$ for the observation and $Y^t = (y^t_1, y^t_2, \dots, y^t_N)$ for the future trajectories.
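As a concrete illustration of this parameterization (not the exact preprocessing used here), positions can be augmented with finite-difference velocities; the frame interval $\Delta t = 0.4$\,s is an assumption matching common ETH/UCY practice:

```python
def parameterize(track, dt=0.4):
    """Build (x, y, vx, vy) features for one pedestrian from raw positions,
    using finite-difference velocities with frame interval dt."""
    feats = []
    for t in range(1, len(track)):
        (x0, y0), (x1, y1) = track[t - 1], track[t]
        feats.append((x1, y1, (x1 - x0) / dt, (y1 - y0) / dt))
    return feats

# Three observed positions of one pedestrian (meters).
obs = [(0.0, 0.0), (0.4, 0.0), (0.8, 0.4)]
feats = parameterize(obs)
```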
In our proposed generative model $p_{\theta}(Y|X, G, F)$, where $\theta$ are the model parameters, the task is to forecast future trajectories $Y$ depending on not only observed trajectories $X$ but also goal information $G$ and social forces $F$.
Following \cite{chiara2022goal}, we use both the position of the last time step and the differences between every single position and the goal position to parameterize the goal information.
It should be noted that we use the ground truth of the last position $Y^T$ to derive the goal representation in the training phase, while we use the estimation from the goal-estimation module in~Sec.~\ref{framework} to derive the goal representation in the test phase.
Additionally, two kinds of forces, i.\,e., driving force $F_\text{Dr}$ and repulsive force $F_\text{Re}$, are calculated for each agent at every time step. They are also represented as sequences.
\begin{equation} \label{two kinds of forces}
F = \left\{ \begin{array}{ll}
F_\text{Dr} = ({f_\text{Dr}}_1^{-H},...,{f_\text{Dr}}_N^{-H},...,{f_\text{Dr}}_1^{T},...,{f_\text{Dr}}_N^{T}),\\
F_\text{Re} = ({f_\text{Re}}_1^{-H},...,{f_\text{Re}}_N^{-H},...,{f_\text{Re}}_1^{T},...,{f_\text{Re}}_N^{T}).
\end{array} \right.
\end{equation}
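The two forces can be instantiated following the classic social force model~\cite{helbing1995social}; the functional forms are standard, but the constants below (desired speed, relaxation time, repulsion strength and range) are illustrative, not tuned values:

```python
import math

def driving_force(pos, vel, goal, v0=1.3, tau=0.5):
    """Driving force toward the goal: f_Dr = (v0 * e_goal - v) / tau,
    with desired speed v0 and relaxation time tau."""
    dx, dy = goal[0] - pos[0], goal[1] - pos[1]
    d = math.hypot(dx, dy) or 1e-9
    ex, ey = dx / d, dy / d
    return ((v0 * ex - vel[0]) / tau, (v0 * ey - vel[1]) / tau)

def repulsive_force(pos, other, A=2.0, B=0.3):
    """Exponential repulsion from a neighbor: f_Re = A * exp(-d/B) * n,
    where n points from the neighbor to the ego pedestrian."""
    dx, dy = pos[0] - other[0], pos[1] - other[1]
    d = math.hypot(dx, dy) or 1e-9
    return (A * math.exp(-d / B) * dx / d, A * math.exp(-d / B) * dy / d)

# Ego at the origin, standing still, goal 10 m ahead, neighbor 0.5 m ahead.
f_dr = driving_force((0.0, 0.0), (0.0, 0.0), (10.0, 0.0))
f_re = repulsive_force((0.0, 0.0), (0.5, 0.0))
```

The repulsive forces are summed over all neighbors at each time step; both sequences are then fed to the backbone alongside positions and velocities.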
In order to explore different ways of incorporating the goal information, on the basis of the baseline model AgentFormer~\cite{yuan2021agentformer} that takes velocity and position sequences as input, we propose three variants of the additional goal information.
As denoted in Figure~\ref{fig:Inputscomparison}, \textbf{ForceFormer-Goal} directly adds the additional goal sequence to the input.
Alternatively, \textbf{ForceFormer-Dr} uses $ F_\text{Dr} $ as the additional conditional information.
\textbf{ForceFormer-Re} uses both the goal sequence and the repulsive force $ F_\text{Re}$ sequence as the additional input.
\begin{figure}[ht]
\centering
\includegraphics[trim=0.2in 0.1in 0.3in 0.0in, clip=true, width=\linewidth]{images/Inputs_comparison.pdf}
\caption{Inputs of AgentFormer~\cite{yuan2021agentformer} and the proposed ForceFormer-Goal, ForceFormer-Dr, and ForceFormer-Re.}
\label{fig:Inputscomparison}
\end{figure}
\subsection{The Proposed Framework}
\label{framework}
\begin{figure*}[ht]
\centering
\includegraphics[width=1\textwidth]{images/CVAE_framework.pdf}
\caption{An overview of the proposed framework {ForceFormer}.}
\label{fig:Social-AgentFormer}
\end{figure*}
Figure~\ref{fig:Social-AgentFormer} depicts the overview of our proposed framework ForceFormer. It mainly consists of three modules: AgentFormer (X-Encoder, Y-Encoder, and Decoder) as the backbone, Goal-estimation module, and Social force model.
In the training process, the goal-estimation module and AgentFormer are trained separately.
The goal information is supplied from ground truth $ Y^T $, which is used for training the goal-estimation module and the calculation of social forces.
The repulsive force and driving force are calculated by the position information, velocity information, and goal information.
However, the ground truth $ Y^T $ is unavailable in the test phase.
Hence, during the inference process, we sample $K$ goal candidates for every trajectory from the goal-estimation module.
Following the previous goal-conditioned trajectory prediction models \cite{Zhao_2021_ICCV,mangalam2020not,mangalam2021goals}, we evaluate all potential $K$ goals against the ground truth and choose the one with the smallest $L2$ error as the estimated goal position in the test phase.
\paragraph{AgentFormer}
The backbone prediction model is a CVAE-based model that establishes spatial and temporal relations using attention mechanisms.
Based on the conditions of observed trajectory $X$, goal information $G$, and social forces $F$, the future trajectory distribution is modeled as $p_{\theta}(Y|X,G,F)$.
The future trajectory distribution can be rewritten as
\begin{equation} \label{future distribution}
p_{\theta}(Y|X,G,F) = \int p_{\theta}(Y|Z,X,G,F)p_{\theta}(Z|X,G,F)dZ,
\end{equation}
where $p_{\theta}(Z|X,G,F)$ is the conditional Gaussian prior, which is learned by X-Encoder.
$ p_{\theta}(Y|Z,X,G,F)$ is the conditional likelihood.
Eq.~\eqref{future distribution} introduces a set of latent variables $Z = ( z^{(1)},...,z^{(K)} )$ that reflect the latent intent of each pedestrian $ n $, accounting for the stochasticity and multi-modality of the pedestrian's future behavior.
The negative evidence lower bound $\mathcal{L}_{elbo}$ is used to address the intractable posterior $p_{\theta}(Z|Y,X,G,F)$. Concretely, the CVAE-based model is optimized using the loss function
\begin{equation} \label{CVAE lower bound}
\begin{aligned}
\mathcal{L}_{elbo} = -\mathbb{E}_{q_{\phi}(Z|Y,X,G,F)}[\log p_{\theta}(Y|Z,X,G,F)] \\+ KL(q_{\phi}(Z|Y,X,G,F)||p_{\theta}(Z|X,G,F)),
\end{aligned}
\end{equation}
where $q_{\phi}(Z|Y,X,G,F)$ is the approximate posterior distribution parameterized by $\phi$, which is learned by Y-Encoder.
The first term in the above equation can be considered as the expected predicted probability of the future trajectory $ p_{\theta}(Y|Z,X,G,F) $.
The second term $KL(q_{\phi}(Z|Y,X,G,F)||p_{\theta}(Z|X,G,F))$ denotes the distribution difference between the prior and the approximate posterior, which both tend to be a standard normal distribution.
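As a minimal sketch of how these two terms could be computed (assuming diagonal-Gaussian prior, posterior, and likelihood, and a single-sample Monte Carlo estimate of the expectation; all function names and values are illustrative, not the authors' implementation):

```python
import math

def gauss_nll(y, mu, sigma):
    """Negative log-likelihood of y under a diagonal Gaussian: the reconstruction term."""
    return sum(0.5 * math.log(2 * math.pi * s**2) + (yi - m)**2 / (2 * s**2)
               for yi, m, s in zip(y, mu, sigma))

def kl_diag_gauss(mu_q, sig_q, mu_p, sig_p):
    """Closed-form KL(q || p) between two diagonal Gaussians: the regularization term."""
    return sum(math.log(sp / sq) + (sq**2 + (mq - mp)**2) / (2 * sp**2) - 0.5
               for mq, sq, mp, sp in zip(mu_q, sig_q, mu_p, sig_p))

def neg_elbo(y, recon_mu, recon_sigma, mu_q, sig_q, mu_p, sig_p):
    # -E_q[log p(Y|Z,X,G,F)] approximated by a single-sample reconstruction NLL,
    # plus KL(q(Z|Y,X,G,F) || p(Z|X,G,F)).
    return gauss_nll(y, recon_mu, recon_sigma) + kl_diag_gauss(mu_q, sig_q, mu_p, sig_p)
```

When the approximate posterior matches the prior, the KL term vanishes and only the reconstruction term remains.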
\paragraph{Social Forces}
A modified social force model \cite{helbing1995social} that contains both driving and repulsive forces is applied in this work.
The \textbf{driving force}, denoted by Eq.~\eqref{driving force}, describes the attractive effect related to the destination (goal position).
\begin{equation} \label{driving force}
\Vec{F}_\alpha^0 = \frac{1}{\tau_\alpha}(v_\alpha^0\Vec{e}_\alpha - \Vec{v}_\alpha).
\end{equation}
The magnitude of the driving force depends on the deviation of the current velocity $\Vec{v}_\alpha(t)$ from the desired velocity $v_\alpha^0(t) = v_\alpha^0\Vec{e}_\alpha$, which expresses that an undisturbed pedestrian walks at the desired speed $v_\alpha^0$ along the desired direction $\Vec{e}_\alpha$ pointing to the destination.
The relaxation time $\tau_\alpha$ is a parameter that represents the expected time needed to remove this deviation.
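A hypothetical 2-D implementation of the driving force could look as follows (the force is set to zero when the position coincides with the goal, where the desired direction is undefined; all parameter values are up to the caller):

```python
import math

def driving_force(pos, goal, vel, v0, tau):
    """Eq. (driving force): F = (1/tau) * (v0 * e - v), where the desired
    direction e points from the current position toward the goal.
    pos, goal, vel are 2-D tuples; v0 is the desired speed, tau the relaxation time."""
    dx, dy = goal[0] - pos[0], goal[1] - pos[1]
    dist = math.hypot(dx, dy)
    if dist == 0.0:                  # pedestrian already at the goal:
        return (0.0, 0.0)            # desired direction undefined, force set to zero
    ex, ey = dx / dist, dy / dist    # desired direction e_alpha
    return ((v0 * ex - vel[0]) / tau, (v0 * ey - vel[1]) / tau)
```

A pedestrian already walking at the desired velocity toward the goal experiences no driving force.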
To avoid collisions, pedestrians maintain a proper distance from strangers.
The \textbf{repulsive force}, denoted by Eq.~\eqref{repulsive effect}, describes the avoidance behavior between the ego pedestrian $\alpha$ and another pedestrian $\beta$,
\begin{equation} \label{repulsive effect}
\Vec{f}_{\alpha \beta}(\Vec{r}_{\alpha \beta}) = -\nabla_{\Vec{r}_{\alpha \beta}}V_{\alpha \beta}[b(\Vec{r}_{\alpha \beta})].
\end{equation}
The repulsive potential $V_{\alpha \beta}(b)$ is a monotonically decreasing function of $ b $, which represents the semi-minor axis of an ellipse.
Through $ b $, the equipotential lines keep the form of an ellipse directed along the direction of motion.
\begin{equation} \label{semi-minor axis}
2b = \sqrt{(\left\|\Vec{r}_{\alpha \beta}\right\| + \left\|\Vec{r}_{\alpha \beta} - v_\beta\Delta t \Vec{e}_\beta\right\|)^2 - (v_\beta\Delta t)^2}.
\end{equation}
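Under the common assumption from the original social force model of an exponentially decaying potential $V_{\alpha\beta}(b) = V^0_{\alpha\beta}\exp(-b/\sigma)$, the repulsive force can be sketched numerically; the values $V^0_{\alpha\beta}=2.1$ and $\sigma=0.3$ are illustrative, and the gradient in Eq.~\eqref{repulsive effect} is taken by central finite differences:

```python
import math

def semi_minor_axis(r, v_beta, e_beta, dt):
    """Eq. (semi-minor axis): 2b = sqrt((|r| + |r - v_beta*dt*e_beta|)^2 - (v_beta*dt)^2).
    r is the relative position r_alpha - r_beta, e_beta the unit direction of beta."""
    step = v_beta * dt
    d1 = math.hypot(r[0], r[1])
    d2 = math.hypot(r[0] - step * e_beta[0], r[1] - step * e_beta[1])
    return 0.5 * math.sqrt((d1 + d2) ** 2 - step ** 2)

def repulsive_force(r, v_beta, e_beta, dt, V0=2.1, sigma=0.3, h=1e-5):
    """f = -grad_r V(b(r)) with the monotonically decreasing potential
    V(b) = V0 * exp(-b / sigma), evaluated by central finite differences."""
    def V(rx, ry):
        return V0 * math.exp(-semi_minor_axis((rx, ry), v_beta, e_beta, dt) / sigma)
    fx = -(V(r[0] + h, r[1]) - V(r[0] - h, r[1])) / (2 * h)
    fy = -(V(r[0], r[1] + h) - V(r[0], r[1] - h)) / (2 * h)
    return (fx, fy)
```

For a stationary pedestrian $\beta$ ($v_\beta = 0$), the formula reduces to $b = \|\Vec{r}_{\alpha\beta}\|$, and the force points away from $\beta$ with a magnitude that decays with distance.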
In addition to distance, the influence of the field of view also needs to be considered when calculating repulsive forces.
Thus a weighting parameter $ w $ is introduced.
\begin{equation} \label{feld of view}
w(\Vec{e},\Vec{f}) = \left\{ \begin{array}{ll}
1 & \textrm{if $ \Vec{e} \cdot \Vec{f} \geq \left\| \Vec{f} \right\|\cos\epsilon $},\\
c & \textrm{otherwise},
\end{array} \right.
\end{equation}
where the effective angle of sight is $2\epsilon$, and $c$ is a constant factor.
Hence, the repulsive force, after taking the perspective factor into account, is constrained as $ \Vec{F}_{\alpha \beta}= w\Vec{f}_{\alpha \beta} $.
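A direct transcription of this weighting (with $2\epsilon = 200^{\circ}$ and $c=0.5$, the values given in the implementation details) might look like:

```python
import math

def view_weight(e, f, eps_deg=100.0, c=0.5):
    """Eq. (field of view): w = 1 if f lies within the effective angle
    2*eps around the walking direction e (a unit vector), otherwise c."""
    norm_f = math.hypot(f[0], f[1])
    dot = e[0] * f[0] + e[1] * f[1]
    return 1.0 if dot >= norm_f * math.cos(math.radians(eps_deg)) else c
```

Forces originating in front of the pedestrian are applied in full, while those from behind are attenuated by the factor $c$.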
The repulsive force rests on the premise that two pedestrians are strangers who want to keep their distance from each other and avoid collisions. In reality, however,
many pedestrians travel in pairs, such as classmates, relatives, and friends, who share a common destination.
It is therefore not meaningful to apply repulsive forces within a group, which could cause significant errors in the experimental results, especially in scenes with high pedestrian density.
Hence, we adopt DBSCAN, a density-based spatial clustering method~\cite{ester1996density,schubert2017dbscan}, at every time step.
Through the clustering, the candidates of group members are detected~\cite{cheng2019pedestrian}.
Namely, if two pedestrians are in the same cluster for more than $ \sigma $ time steps in the observed frames, they will be judged to be in the same group.
Then the intra-group repulsive forces are eliminated.
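This grouping step can be sketched as follows (using single-linkage clustering with a distance threshold as a simplified stand-in for DBSCAN; the radius value is illustrative, while the default $\sigma = 4$ matches the implementation details):

```python
import math

def clusters_at_step(positions, radius=0.8):
    """Single-linkage clustering with a distance threshold, standing in for
    DBSCAN at one time step; returns a cluster label per pedestrian."""
    n = len(positions)
    parent = list(range(n))

    def find(i):                      # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if math.hypot(positions[i][0] - positions[j][0],
                          positions[i][1] - positions[j][1]) < radius:
                parent[find(i)] = find(j)
    return [find(i) for i in range(n)]

def same_group(tracks, i, j, min_steps=4, radius=0.8):
    """Pedestrians i and j form a group if they share a cluster for at least
    `min_steps` (= sigma) of the observed time steps."""
    shared = 0
    for positions in tracks:          # tracks: list of per-step position lists
        labels = clusters_at_step(positions, radius)
        if labels[i] == labels[j]:
            shared += 1
    return shared >= min_steps
```

Pairs detected by `same_group` then have their mutual repulsive forces removed.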
\paragraph{Goal-Estimation Module}
The goal-estimation module is employed to provide the goal information for the AgentFormer prediction module and the calculation of social forces in the inference phase.
We adopt the goal module proposed in \cite{mangalam2021goals,chiara2022goal} for this purpose.
First, the past trajectories, represented as heat maps, are concatenated with the semantic map information.
The semantic map is adopted from Chiara et al.~\cite{chiara2022goal}, which is extracted from a bird's-eye view of the RGB scene image in ETH/UCY datasets using a pre-trained segmentation network \cite{mangalam2021goals}.
More specifically, through the semantic map, the constraints of environments, for instance, pavement, terrain, and building, can be naturally considered.
The segmentation results in a tensor $ S \in \mathbb{R}^{W\times L \times C} $ containing $ C $ classes,
where $W$ and $L$ are the width and length of the input image.
The past trajectory $ \left\{ x_n^{-H}, x_n^{-H+1},...,x_n^0 \right\} $ of agent $n$ is mapped to the heat map $ M \in \mathbb{R}^{W\times L \times (H+1)} $.
Then, the heat map tensor of the past trajectory is concatenated with the semantic map $ S $ along the channel dimension, generating tensor $ M_s \in \mathbb{R}^{W\times L \times (C+H+1)} $ as the input tensor for goal estimation.
Finally, the concatenated information is fed to a U-Net~\cite{ronneberger2015u} model to generate a probability map of future positions.
\section{EXPERIMENTS}
\subsection{Dataset}
The proposed framework is evaluated on ETH \cite{pellegrini2009you} and UCY \cite{lerner2007crowds}, which have been widely used as the benchmark for pedestrian trajectory prediction.
The datasets contain five different subsets as listed in Table~\ref{pedestrian density}.
A valid trajectory denotes a single pedestrian's track information over 20 consecutive frames captured at 2.5\,Hz.
These twenty time steps are divided into two parts -- the first eight time steps (3.2\,s) form the observed trajectory $ X $, based on which the twelve future time steps (4.8\,s), the future trajectory $ Y $, are predicted.
The position of the goal is located at the twentieth time step.
It can be seen that the density of pedestrians varies across the subsets.
The density in a scene largely influences the prediction results, especially in this work, because the calculation of social forces is closely related to crowd density.
\begin{table}[h!]
\begin{centering}
\caption{The number of frames and valid trajectories in the ETH/UCY \cite{pellegrini2009you,lerner2007crowds} datasets.}
\begin{tabular}{ m{1.8cm} | m{0.8cm} m{0.8cm} m{0.8cm} m{0.8cm} m{0.8cm} }
\toprule
& ETH & Hotel & Univ & zara01 & zara02 \\
\midrule
Frame & 1142 & 1788 & 947 & 883 & 1033\\
\hline
Valid trajectory & 364 & 1197 & 24334 & 2356 & 5910 \\
\bottomrule
\end{tabular}
\label{pedestrian density}
\end{centering}
\end{table}
\subsection{State-of-the-art models and baseline}
We compare our proposed method, ForceFormer, with the following models.
AgentFormer~\cite{yuan2021agentformer} is the baseline model without using any goal information.
Sophie~\cite{sadeghian2019sophie} proposes a GAN-based model that combines trajectory information with context information.
Trajectron++ \cite{salzmann2020trajectron++} is a CVAE-based model that maintains top performance on the ETH/UCY benchmark.
STAR~\cite{yu2020spatio} proposes a Temporal Transformer and a Spatial Transformer to model spatial-temporal information for pedestrian trajectory prediction.
Moreover, ForceFormer is compared with several goal-based models. Namely, PECNet \cite{mangalam2020not} is a goal-conditioning model for short-term trajectory prediction. Goal-GAN \cite{dendorfer2020goal} integrates goal information in a GAN-based model for trajectory prediction. Heading \cite{Zhao_2021_ICCV} proposes a goal retrieval module that provides goal information for trajectory prediction. Y-net \cite{mangalam2021goals} combines scene information with goals and waypoints for trajectory prediction. Goal-SAR \cite{chiara2022goal} proposes an attention-based recurrent network combined with the same goal-estimation module as ForceFormer.
\subsection{Evaluation Metrics and Protocol}
Three metrics are used to evaluate the proposed model.
First, two standard error metrics are applied to measure the trajectory prediction performance of ForceFormer and compare it fairly with the previous models.
These two distance errors are average displacement error \textbf{$ ADE_K $} and final displacement error \textbf{$ FDE_K $} of $ K $ trajectory samples of each agent compared to the corresponding ground truth.
\begin{equation}
ADE_K = \frac{1}{T} \min_{k=1}^{K}\sum_{t=1}^T\left\|\hat{y}^{t,(k)}_n - y^t_n \right\|_2,
\end{equation}
\begin{equation}
FDE_K = \min_{k=1}^{K}\left\|\hat{y}^{T,(k)}_n - y^T_n \right\|_2.
\end{equation}
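Both metrics can be computed as follows (a sketch using the per-step Euclidean distance, as is standard for these metrics; the function name and data layout are illustrative):

```python
import math

def ade_fde(samples, gt):
    """Best-of-K ADE and FDE.  samples: K candidate trajectories, each a
    list of (x, y) points; gt: the ground-truth trajectory of equal length."""
    T = len(gt)
    ades = [sum(math.hypot(p[0] - q[0], p[1] - q[1]) for p, q in zip(traj, gt)) / T
            for traj in samples]
    fdes = [math.hypot(traj[-1][0] - gt[-1][0], traj[-1][1] - gt[-1][1])
            for traj in samples]
    return min(ades), min(fdes)
```

Note that the minimum over the $K$ samples is taken independently for ADE and FDE, so the two best samples need not coincide.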
In addition, the number of collisions as another metric is leveraged to verify the social forces applied to ForceFormer.
\begin{equation}
NC = \sum^N_{\substack{m,n=1\\ m \neq n}}\sum^T_{t=1}\mathbb{1}\!\left[\left\|\hat{y}^t_n - \hat{y}^t_m \right\| < \gamma\right],
\end{equation}
where $m$ and $n$ index different pedestrians in the same scene, $t$ is the time step, $\mathbb{1}[\cdot]$ is the indicator function, and the threshold $ \gamma $ determines whether a collision occurs.
In this paper, $ \gamma = 0.1\,m$.
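A direct way to count these collisions is sketched below (counting each unordered pair of pedestrians once per time step; whether the double sum counts ordered or unordered pairs is a convention):

```python
import math

def collision_count(trajs, gamma=0.1):
    """Number of (pedestrian pair, time step) events at which two predicted
    positions come closer than gamma.  trajs: N trajectories of T (x, y) points."""
    N, T = len(trajs), len(trajs[0])
    nc = 0
    for m in range(N):
        for n in range(m + 1, N):            # each unordered pair once
            for t in range(T):
                if math.hypot(trajs[m][t][0] - trajs[n][t][0],
                              trajs[m][t][1] - trajs[n][t][1]) < gamma:
                    nc += 1
    return nc
```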
All the metrics are computed with $K=20$ samples.
The calculation of $ NC $ is based on trajectories with the best $ ADE $.
Following prior works \cite{cheng2022gatraj,yuan2021agentformer,salzmann2020trajectron++,mangalam2020not}, we adopt the leave-one-out strategy for the evaluation.
\subsection{Implementation Details}
For calculating social forces, we adopt \ang{200} as the effective angle $ 2\epsilon $. The factor $ c $ for the field of view is 0.5. The threshold $\sigma$ of the minimum number of frames in the same cluster for grouping is four.
Also, we consider that the desired direction cannot be calculated when the pedestrian position overlaps with the goal position, so we set the social forces at these positions to zero. For the AgentFormer backbone, we use all the same settings as in Yuan et al.~\cite{yuan2021agentformer}.
However, we train the CVAE model using the Adam optimizer \cite{kingma2014adam} for only 50 epochs, fewer than in the original paper.
For the goal-estimation module, in addition to the same settings as in Chiara et al.~\cite{chiara2022goal}, we add a goal-specific MSE loss function $ \mathcal{L}_{\text{MSE}} = \frac{1}{N}\sum_{i=1}^{N} \left\| y_i^T - \hat{y}_i^T \right\|^2$ and a hyper-parameter $\lambda = 10^6$ to balance the original BCE loss function.
All our models are trained on Google Colab with a single Tesla P100 GPU.
\subsection{Results}
In Table~\ref{Quantitative comparisons of all the baselines}, we compare our approaches with current state-of-the-art methods.
First, our proposed methods ForceFormer-Dr and ForceFormer-Re achieve better performance in all the subsets compared to the baseline model AgentFormer, e.\,g., on average, ForceFormer-Dr reduces FDE by $ 26\% $ and ForceFormer-Re reduces ADE by $ 17\% $.
In addition, compared to models that also use goal information, our methods perform on par with the previous best method, Y-net.
In particular, comparing the results on each subset, our models outperform Y-net on all subsets except ETH.
Moreover, in the high-density scenes, i.\,e., Univ and Zara2, ForceFormer-Dr decreases FDE by 22\% and 23\%, respectively, compared to Y-net.
The improvements indicate better performance of our method on final position predictions in high-density scenes.
\begin{table}[h!]\scriptsize
\begin{centering}
\caption{Quantitative performances of the state-of-the-art models and our proposed models on the {ETH/UCY} datasets.}
\begin{threeparttable}
\setlength{\tabcolsep}{0.85mm}
\begin{tabular}{l|cccccc}
\toprule
\textbf{Method} & \multicolumn{6}{c}{$ ADE_k/FDE_k (m)\, K=20 $ Samples} \\
\midrule
Datasets & ETH & Hotel & Univ & Zara1 & Zara2 & Average \\
Sophie \cite{sadeghian2019sophie} & 0.70/1.43 & 0.76/1.67 & 0.54/1.24 & 0.30/0.63 & 0.38/0.78 & 0.54/1.15 \\
STAR \cite{yu2020spatio} & 0.36/0.65 & 0.17/0.36 & 0.31/0.62 & 0.26/0.55 & 0.22/0.46 & 0.26/0.53 \\
Trajectron++ \cite{salzmann2020trajectron++} & 0.67/1.18 & 0.18/0.28 & 0.30/0.54 & 0.25/0.41 & 0.18/0.32 & 0.32/0.55 \\
AgentFormer \cite{yuan2021agentformer} & 0.45/0.75 & 0.14/0.22 & 0.25/0.45 & 0.18/0.30 & 0.14/0.24 & 0.23/0.39 \\
\underline{PECNet} \cite{mangalam2020not} & 0.54/0.87 & 0.18/0.24 & 0.35/0.60 & 0.22/0.39 & 0.17/0.30 & 0.29/0.48 \\
\underline{Goal-GAN} \cite{dendorfer2020goal} & 0.59/1.18 & 0.19/0.35 & 0.60/1.19 & 0.43/0.87 & 0.32/0.65 & 0.43/0.85 \\
\underline{Heading} \cite{Zhao_2021_ICCV} & 0.37/0.65 & 0.11/0.15 & \textbf{0.20}/0.44 & 0.15/0.31 & \textbf{0.12}/0.26 & 0.19/0.36 \\
\underline{Y-net} \cite{mangalam2021goals} & \textbf{0.28}/\textbf{0.33} & 0.10/\textbf{0.14} & 0.24/0.41 & 0.17/0.27 & 0.13/0.22 & \textbf{0.18}/\textbf{0.27} \\
\underline{Goal-SAR} \cite{chiara2022goal} & \textbf{0.28}/0.39 & 0.12/0.17 & 0.25/0.43 & 0.17/0.26 & 0.15/0.22 & 0.19/0.29 \\
\hline
{ForceFormer-Dr} & 0.43/0.58 & 0.12/0.16 & 0.21/\textbf{0.32} & \textbf{0.14}/\textbf{0.20} & \textbf{0.12}/\textbf{0.17} & 0.20/0.29 \\
{ForceFormer-Re} & 0.36/0.52 & \textbf{0.09}/\textbf{0.14} & 0.21/0.42 & 0.15/0.22 & \textbf{0.12}/0.20 & 0.19/0.30 \\
\bottomrule
\end{tabular}
\begin{tablenotes}
\footnotesize
\item[*] The results of Trajectron++ and Heading are updated according to the implementation issue $ 53 $ \cite{trjectron++.org} and sampling trick \cite{heading.org}. The underlined methods use goal information.
\end{tablenotes}
\end{threeparttable}
\label{Quantitative comparisons of all the baselines}
\end{centering}
\end{table}
\subsection{Ablation study}
The variants of our proposed model making use of the goal information are compared in terms of FDE in Table~\ref{AblationFDE} and the number of collisions in Table~\ref{Ablationcollision}.
First, it can be seen clearly that, compared to the baseline model AgentFormer, all the variants making use of the additional goal information achieve smaller average FDE.
Except for ForceFormer-Goal, the variants ForceFormer-Dr and ForceFormer-Re also produce markedly fewer collisions.
Among the three variants of ForceFormer, ForceFormer-Goal generally performs worse than the other models in terms of FDE and the number of collisions across the subsets.
This indicates that directly utilizing the goal sequences may not be as effective as the social forces.
A closer look shows that ForceFormer-Dr achieves the smallest FDE in high-density pedestrian scenes like Univ and Zara2. In contrast, ForceFormer-Re has the smallest total number of collisions, a 19.8\% reduction compared to the baseline model AgentFormer.
\begin{table}[t]
\begin{centering}
\setlength{\tabcolsep}{0.5mm}
\caption{Performance differences for the methods using goal information measured by FDE.}
\begin{tabular}{l|ccc|ccccc|c}
\toprule
& \multicolumn{8}{c}{$ FDE_k (m) K=20 $ Samples} \\
\hline
{Method} & Goal & $ F_{Dr} $ & $ F_{Re} $& ETH & Hotel & Univ & Zara1 & Zara2 & Average \\
\midrule
{AgentFormer} \cite{yuan2021agentformer} & & & & 0.75 & 0.22 & 0.45 & 0.30 & 0.24 & 0.39 \\
{ForceFormer-Goal} & \checkmark & & & 0.55 & 0.17 & 0.49 & 0.30 & 0.28 & 0.36 \\
{ForceFormer-Dr} & & \checkmark & & 0.58 & 0.16 & \textbf{0.32} & \textbf{0.20} & \textbf{0.17} & \textbf{0.29} \\
{ForceFormer-Re}&\checkmark & & \checkmark & 0.52 & \textbf{0.14} & 0.42 & 0.22 & 0.20 & 0.30 \\
\bottomrule
\end{tabular}
\label{AblationFDE}
\end{centering}
\end{table}
\begin{table}[ht!]
\begin{centering}
\setlength{\tabcolsep}{0.6mm}
\caption{Performance differences for the methods using goal information measured by collision numbers.}
\begin{tabular}{l|ccc|ccccc|c}
\toprule
& \multicolumn{8}{c}{\textbf{Number of collisions $ NC $, $ K = 20 $ samples}} \\
\hline
\textbf{Method} & Goal & $ F_{Dr} $ & $ F_{Re} $& ETH & Hotel & Univ & Zara1 & Zara2 & Sum \\
\midrule
AgentFormer \cite{yuan2021agentformer} & & & & 0 & 2 & 655 & 4 & 22 & 683 \\
ForceFormer-Goal &\checkmark & & & \textbf{0} & \textbf{0} & 672 & \textbf{3} & 22 & 697 \\
ForceFormer-Dr & & \checkmark & & \textbf{0} & 1 & 556 & \textbf{3} & 28 & 588\\
ForceFormer-Re & \checkmark & & \checkmark & \textbf{0} & 1 & \textbf{529} & 5 & \textbf{13} & \textbf{548} \\
\bottomrule
\end{tabular}
\label{Ablationcollision}
\end{centering}
\end{table}
\subsection{Qualitative results}
\begin{figure*}[t]
\centering
\subfigure{
\includegraphics[trim=0.0in 0.1in 0.0in 0.0in, clip=true,width=0.3\linewidth]{images/with_out129.pdf}
}
\quad
\subfigure{
\includegraphics[trim=0.0in 0.1in 0.0in 0.0in, clip=true,width=0.3\linewidth]{images/dr_129.pdf}}
\quad
\subfigure{
\includegraphics[trim=0.0in 0.1in 0.0in 0.0in, clip=true,width=0.3\linewidth]{images/re_129.pdf}}\
\subfigure{
\includegraphics[trim=0in 0.1in 0.0in 0.0in, clip=true,width=0.3\linewidth]{images/eth_without_649.pdf}
}
\quad
\subfigure{
\includegraphics[trim=0in 0.1in 0.0in 0.0in, clip=true,width=0.3\linewidth]{images/dr_eth_649.pdf}}
\quad
\subfigure{
\includegraphics[trim=0in 0.1in 0.0in 0.0in, clip=true,width=0.3\linewidth]{images/re_eth_649.pdf}}
\subfigure{
\includegraphics[trim=0in 0.1in 0.0in 0.0in, clip=true,width=0.3\linewidth]{images/without_69.pdf}
}
\quad
\subfigure{
\includegraphics[trim=0in 0.1in 0.0in 0.0in, clip=true,width=0.3\linewidth]{images/dr_hotel_69.pdf}}
\quad
\subfigure{
\includegraphics[trim=0in 0.1in 0.0in 0.0in, clip=true,width=0.3\linewidth]{images/re_hotel_69.pdf}}
\caption{Prediction results generated by AgentFormer (left column), ForceFormer-Dr (middle column), and ForceFormer-Re (right column). Each row represents a different scene.}
\label{fig:qualitative}
\end{figure*}
Figure \ref{fig:qualitative} shows the qualitative results predicted by AgentFormer (left column), ForceFormer-Dr (middle column), and ForceFormer-Re (right column), respectively.
From the upper row, we can see that, compared to AgentFormer, ForceFormer-Dr benefits from the goal information and driving force to predict trajectories around corners or turns. Although ForceFormer-Re predicts less accurate curving trajectories, its prediction for other trajectories is closer to the corresponding ground truth.
The middle row shows a scenario with two pedestrians walking in parallel: both ForceFormer-Dr and ForceFormer-Re predict more accurate final positions as the pedestrians make a left turn. In contrast, AgentFormer does not exploit the goal information from the goal-estimation module and predicts walking in the middle of the road.
A more visible scenario of predicting the final position can be seen in the bottom row. The prediction from AgentFormer largely deviates from the ground truth trajectory, while the predictions from ForceFormer-Dr and ForceFormer-Re are well aligned with the ground truth trajectory.
\vspace{6pt}
\noindent\textbf{Limitations.}
Despite the enhanced performance brought by the social forces and the goal-estimation module, several limitations of the proposed model need to be noted.
The collisions have been reduced, but the predictions from ForceFormer are not totally collision-free.
One possible reason might be that in the social force module, we do not consider the interactions and forces within groups, which also may cause collisions. In future work, we will build a more comprehensive social force module and apply it to better simulate interactions among group members.
Moreover, the overall performance of ForceFormer, especially the calculation of social forces, relies on the reliability of the goal-estimation module.
Sub-optimal performance of this module can lead to compound errors in the final prediction.
On the other hand, if we can access the ground truth goal information, we can quickly turn our model into a motion planning model.
\section{Conclusion}
This paper proposes a new goal-based trajectory predictor called ForceFormer that incorporates social forces into a Transformer-based generative model backbone.
A U-Net-based goal-estimation module is adopted to predict the goals of pedestrians' trajectories.
Additional to the position and velocity information, we derive the driving force from the estimated goal to efficiently simulate the guidance of a target on a pedestrian.
Also, repulsive forces are used to help the model learn collision avoidance among neighboring pedestrians.
ForceFormer achieves performance on par with the state-of-the-art models on widely used pedestrian datasets, with better performance in high-density scenarios.
\newpage
\bibliographystyle{IEEEtran}
\section{Introduction}
In recent years, the cultivation of genetically modified plants (GMP) has become widespread in the world, in particular in America and Asia. At the same time, the debate in the scientific community and in the public opinion about the effects of GMP is becoming increasingly heated
\cite{Mendelsohn03,Banks08,Sanvido11}.
It is our belief that one of the causes of the debate is the lack of available mathematical models that allow one to assess quantitatively the effects of GMP cultivation on biodiversity.
To this purpose, some approaches discussing the effects of Bt-maize \cite{Felke11,Meissle11,Szekacs12,Yu11} have recently appeared, together with two mathematical models for estimating the effects of Cry1Ab and Cry1F Bt-maize \cite{Perry10, Perry12} on non-target Lepidoptera. To the best of our knowledge \cite{Lang11, Perry11}, these latter models are the first and only mathematical models that quantitatively assess the effects of Bt-crops on biodiversity. For this reason, we have analyzed the models \cite{Perry10, Perry12} in detail, obtaining novel results. In particular, regarding Cry1Ab maize \cite{Perry10}, we derive the exact formula, rather than the approximation used in \cite{Perry10}, for the proportion of the population that suffers mortality $P$. The exact formula allows us to elucidate the effective dependence of $P$ on the margin from the Bt-crop edge $D$.
\\
Regarding the Cry1F model \cite{Perry12}, we derive a procedure, based on a probabilistic and statistical approach, that allows one to fix automatically the width of the non-Bt strips used as mitigation measures. Moreover, we show that the strip widths proposed in \cite{Perry12} do not implement the worst-case method recommended by the directive of the European Community. Finally, using probabilistic considerations, we derive a lower bound on the species
sensitivity of the hypothetical Lepidoptera considered in \cite{Perry12}.
\\
The work is organized as follows:
In Section \ref{Cry1ab} the model for the estimation of Cry1Ab Bt-maize effects is discussed and
the exact formula for the proportion of the population that suffers mortality $P$ is derived; In Section \ref{Cry1F} the model for the estimation of Cry1F is analyzed and the procedure that
computes the width of the non Bt-stripes is presented; finally, some conclusions are drawn in
Section \ref{concl}.
\section{Estimation of Cry1Ab Bt-maize effects}\label{Cry1ab}
Perry et al. \cite{Perry10} developed a mathematical model for estimating
the effects of exposure of three non-target Lepidoptera (e.g., \emph{Inachis io},
\emph{Vanessa atalanta}, \emph{Plutella xylostella}) to Bt-maize pollen
expressing the protein Cry1Ab.
They computed the estimated proportion of the population of a non-target
Lepidoptera that suffers mortality, hereinafter denoted by $P$ for convenience. $P$ is given by
\begin{equation}\label{eq0}
P= \frac{yzvxa (25 eh \sqrt C + fD \mu)} {(25 e \sqrt C +fD)},
\end{equation}
where the parameters $h$ (within-crop mortality), $x$ (physical effects), $a$ (temporal coincidence),
$z$ (maize cropping), $v$ (utilization rate), $y$ (host plant in arable),
$e$ (host plant within-crop), $f$ (host plant in margin), $C$ (size of maize fields),
$D$ (width of margin) are experimentally measured or taken by the literature; whereas the
parameter $\mu$, depending on $D$, has to be properly computed.
\\
Following Perry et al., before computing $\mu$ it is necessary to recall the so-called \emph{margin mortality} $g(E)$
where $E$ denotes the distance from the edge of the crop, that is given,
for larvae of \emph{Inachis io} and \emph{Vanessa atalanta}, by
\begin{equation}\label{equa1}
g(E) = \frac{\exp(-0.359E)}{33.25 + \exp(-0.359 E)};
\end{equation}
whereas for larvae of \emph{Plutella xylostella} is given by
\begin{equation}\label{equa2}
g(E) = \frac{\exp(-0.349 E)}{55.33 + \exp(-0.349 E)}.
\end{equation}
That being said, Perry et al. state that $\mu$ is obtained by averaging the value of $g(E)$
over the margin. In practice, $\mu$ is obtained by numerical integration of equation
(\ref{equa1}) or (\ref{equa2}) between the values $E=0$ and $E=D$. In this way, they
obtained an approximate estimate of $\mu$ and, consequently, of the proportion of the population that
suffers mortality $P$. Moreover, lacking an exact formula that expresses $\mu$ as a function of $D$,
they cannot establish the effective dependence of the proportion $P$ on the margin $D$.
\subsection{The exact proportion suffering mortality formula}
Now we show that, contrary to what is
claimed by Perry et al., an exact value for $\mu$, and hence for $P$, can be derived.
The parameter $\mu$ is given by
\begin{equation}\label{equa3}
\mu= \frac{1}{D} \int_{0}^D g(E) dE = \frac{1}{D} \int_{0}^D \frac{\exp(- \gamma E)}{\delta + \exp(- \gamma E)} dE,
\end{equation}
where $\gamma$ and $\delta$ are $0.359$ and $33.25$ for the larvae of
\emph{Inachis io} and \emph{Vanessa atalanta}, and $0.349$ and $55.33$ for the larvae of \emph{Plutella
xylostella}.
\\
The integral of the equation (\ref{equa3}) can be solved exactly:
\begin{eqnarray}
\mu & = & \frac{1}{D} \int_{0}^D \frac{\exp(- \gamma E)}{\delta + \exp(- \gamma E)} dE \nonumber \\
& = & \frac{1}{D} \int_{0}^D [ 1 - \frac{\delta}{\delta + \exp(-\gamma E)}] dE \nonumber \\
& = & \frac{1}{D} \int_{0}^D [ 1 - \frac{\delta \exp(\gamma E)}{\delta \exp(\gamma E)+1}] dE \nonumber \\
& = & \frac{1}{D}\displaystyle[E - \frac{1}{\gamma}\ln (1+\delta \exp(\gamma E))]_{0}^{D} \nonumber \\
& = & \frac{1}{D}[ D - \frac{1}{\gamma}\ln (1+\delta \exp(\gamma D)) + \frac{1}{\gamma} \ln (1+\delta)] \nonumber \\
& = & \frac{1}{D}[ D - \frac{1}{\gamma}\ln \frac{1+\delta \exp(\gamma D)}{1+\delta}].
\end{eqnarray}
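The agreement between this closed form and the numerical integration used by Perry et al. can be checked directly (a pure-Python sketch; the logarithm is evaluated in the algebraically equivalent form $\ln\frac{1+\delta e^{\gamma D}}{1+\delta} = \gamma D + \ln(\delta + e^{-\gamma D}) - \ln(1+\delta)$, which avoids overflow of $e^{\gamma D}$ for large $D$):

```python
import math

def mu_exact(gamma, delta, D):
    """Closed-form average margin mortality mu, with a numerically stable log."""
    log_term = gamma * D + math.log(delta + math.exp(-gamma * D)) - math.log(1 + delta)
    return (D - log_term / gamma) / D

def mu_numeric(gamma, delta, D, steps=100000):
    """Trapezoidal integration of g(E) = e^(-gamma E)/(delta + e^(-gamma E))."""
    g = lambda E: math.exp(-gamma * E) / (delta + math.exp(-gamma * E))
    h = D / steps
    total = 0.5 * (g(0.0) + g(D)) + sum(g(i * h) for i in range(1, steps))
    return total * h / D
```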
Plugging last formula in the equation (\ref{eq0}), we obtain the exact formula for computing
the proportion of population that
suffers mortality $P$
\begin{equation}\label{eqfin}
P= \frac{yzvxa \{25 eh \sqrt C + f [ D - \frac{1}{\gamma}\ln \frac{1+\delta \exp(\gamma D)}{1+\delta}]\}} {25 e \sqrt C +fD}.
\end{equation}
The previous equation allows us to predict quantitatively, and not merely qualitatively as in
Perry et al.'s work, the proportion of the population that
suffers mortality $P$ at a large distance $D$ from the crop edge.
We now study the dependence of the proportion $P$ on $D$.
When $D=0$, i.e., when the non-target Lepidoptera is
on the crop edge, $P$ becomes:
\begin{equation}\label{p0}
P(0)=yzvxah.
\end{equation}
When $D$ goes to the infinity the equation (\ref{eqfin}) reduces to:
\begin{equation}\label{eqfin2}
P(D) \approx \frac{yzvxa ( 25 eh \sqrt C + \frac{f}{\gamma} \ln \frac{1+\delta}{\delta} ) } {25 e \sqrt C +fD} \approx O{\left(\frac{1}{D}\right)}.
\end{equation}
This means that, as $D$ tends to infinity, $P$ decays to $0$ only as $\frac{1}{D}$, which implies
that even for large $D$, $P$ is not negligible.
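Both limiting behaviours, $P(0)=yzvxah$ and $D\,P(D) \to \frac{yzvxa}{f}\left(25eh\sqrt{C} + \frac{f}{\gamma}\ln\frac{1+\delta}{\delta}\right)$, can be verified numerically; the parameter values below are illustrative placeholders, since only the asymptotics matter:

```python
import math

# Illustrative placeholder values for the biological and landscape parameters.
y, z, v, x, a = 0.5, 0.6, 0.7, 0.9, 0.8
e, f, h, C = 0.1, 0.2, 0.4, 4.0
gamma, delta = 0.359, 33.25            # Inachis io / Vanessa atalanta

def P(D):
    """Exact proportion suffering mortality, with a numerically stable log."""
    log_term = gamma * D + math.log(delta + math.exp(-gamma * D)) - math.log(1 + delta)
    bracket = D - log_term / gamma     # D - (1/gamma) * ln((1+delta*e^(gamma*D))/(1+delta))
    return (y * z * v * x * a
            * (25 * e * h * math.sqrt(C) + f * bracket)
            / (25 * e * math.sqrt(C) + f * D))
```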
\section{Estimation of Cry1F Bt-maize effects}\label{Cry1F}
Perry et al. \cite{Perry12} developed a mathematical model for estimating
the effects of exposure of five hypothetical non-target Lepidoptera species
to Bt-maize pollen expressing the protein Cry1F.
In that work, the estimated proportion of the population of a non-target Lepidoptera that
suffers mortality, $P$, is given by equation (1), which we recall for convenience:
\[
P(D)= \frac{yzvxa(25eh\sqrt{C}+fD\mu)}{25e\sqrt{C}+fD}.
\]
The previous equation combines: (a) \textit{small scale parameters}, namely, \emph{e} (host plant within-crop), \emph{f} (host plant in margin), \emph{C} (size of maize field), \emph{D} (width of margin); (b) \textit{large scale parameters}, namely, \emph{y} (host plant in arable), \emph{z} (maize cropping),
\emph{x} (physical effects), \emph{a} (temporal coincidence); (c) \textit{mortality parameters},
namely, \emph{h} (within crop mortality) and $\mu$ (average mortality within a margin of any particular width $D$).
Small and large scale parameters are taken from literature or experimentally measured. The mortality parameters, instead, have to be properly computed, hence in the following the focus will be on the way this is accomplished.
\\
Equation (\ref{eq0}) is derived for a range of five hypothetical non-target Lepidoptera species rather than a specific one.
To this aim, Perry et al. introduce a further \emph{mortality parameter},
\emph{m}, representing a range of species sensitivities for the hypothetical non-target Lepidoptera.
Sensitivity is expressed by the $LC50$ values for maize $1507$, i.e., the lethal
concentration value that kills on average half of the larvae of the instar considered,
measured in pollen grains per $cm^{2}$.
The parameter \emph{m} affects the way \emph{h} and $\mu$ are computed. Let us focus on a step-by-step mortality parameter derivation
\cite{Perry12}.
The starting point is a mortality-dose laboratory-derived bioassay relationship in
which a logit-transformed probability of mortality, $P$, is regressed on a logarithmically transformed dose, $d$:
\begin{equation}\label{logitPd}
\mathrm{logit}(P) = \alpha + 2.473\;\log_{10} d.
\end{equation}
Here is where the parameter \emph{m} comes into play. The intercept $\alpha$ is determined by the sensitivity of the species to the Cry1F protein: at the dose $d=m$ (the $LC50$), half of the larvae die, so $\mathrm{logit}(P)=0$ and $\alpha = -2.473\;\log_{10} m$. Five sensitivity values, corresponding to
five hypothetical species, are considered in
\cite{Perry12}
and denoted as \emph{worst-case, extreme} $(m = 1.265)$, \emph{very high} $(m=14.36)$, \emph{high} $(m=163.2)$, \emph{above-average} $(m= 1853)$, and \emph{below-average} $(m=21057)$. These mortality-dose relationships are then combined with a field-derived regression of logarithmically transformed dose, $d$, on distance $E$, from the nearest source of the pollen:
\begin{equation}\label{log10d}
\log_{10}d = 2.346 - 0.145E,
\end{equation}
to derive a linear mortality-distance relationship for larvae in the margin, on the logit scale. In this way, for each species sensitivity the mortality-distance relationship is derived,
from equations (\ref{logitPd}) and (\ref{log10d}), as
\begin{equation}\label{logit}
logit(P) = \beta_0 - 0.3586E,
\end{equation}
where $\beta_0 = \alpha + 5.8017$; the values of $\beta_0$, for each of the considered sensitivities, are shown in
Table \ref{tab1}.
\begin{table}[h]
\begin{center}
\begin{tabular}{p{3cm}r}
\hline\hline
\\
Sensitivity & $\beta_0$ \\ [1ex]
\hline
\textit{extreme} & $5.5492$ \\ [1ex]
\textit{very high} & $2.9399$ \\ [1ex]
\textit{high} & $0.3297$ \\ [1ex]
\textit{above-average} & $-2.2798$ \\ [1ex]
\textit{below-average} & $-4.8901$ \\ [1ex]
\hline
\end{tabular}
\end{center}
\caption{Intercepts, $\beta_0$, for the linear mortality-distance relationships in
equation (\ref{logit}), corresponding to the considered species sensitivities.}
\label{tab1}
\end{table}
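As a check, the intercepts in Table \ref{tab1} follow from the $LC50$ values alone: at $d=m$ one has $logit(P)=0$, so $\alpha=-2.473\,\log_{10}m$ and $\beta_0=\alpha+5.8017$. A minimal Python sketch, using only the numbers quoted above:

```python
import math

# LC50 sensitivity values m (pollen grains per cm^2) paired with the
# intercepts beta_0 quoted in Table 1.
SENSITIVITIES = {
    "extreme": (1.265, 5.5492),
    "very high": (14.36, 2.9399),
    "high": (163.2, 0.3297),
    "above-average": (1853.0, -2.2798),
    "below-average": (21057.0, -4.8901),
}

def beta0_from_m(m):
    # At the LC50 dose d = m, logit(P) = 0, so alpha = -2.473 log10(m);
    # combining with the dose-distance regression gives beta_0 = alpha + 5.8017.
    alpha = -2.473 * math.log10(m)
    return alpha + 5.8017

for name, (m, b0_table) in SENSITIVITIES.items():
    print(f"{name:14s} m = {m:9.3f}  beta_0 = {beta0_from_m(m):8.4f}  (table: {b0_table})")
```

The computed intercepts agree with Table \ref{tab1} to the quoted rounding.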
\\
Taking the inverse of the logit function, we return to the natural scale thus obtaining the estimated probability of mortality $g(E)$, for a larva at distance $E$ into the margin from the nearest source of pollen at the edge of the field:
\begin{equation}\label{P}
g(E) = logit^{-1}(P) = \frac{\exp(\beta_0-0.3586E)}{1+\exp(\beta_0-0.3586E)}=\frac{\exp(-0.3586E)}{\beta+\exp(-0.3586E)}.
\end{equation}
In equation (\ref{P}), the corresponding values of $\beta = \exp(-\beta_0)$, for each species sensitivity, are listed in Table \ref{tab2}.
The mortality parameters $h$ and $\mu$ are derived from $g(E)$.
\begin{table}[h]
\caption{Values of $\beta$ to compute $g(E)$ in equation (\ref{P}), corresponding to the considered species sensitivities.}
\label{tab2}\centering
\begin{tabular}{p{3cm}r}
\hline\hline
\\
Sensitivity & $\beta$ \\ [1ex]
\hline
\textit{extreme} & $0.0039$ \\ [1ex]
\textit{very high} & $0.0529$ \\ [1ex]
\textit{high} & $0.7191$ \\ [1ex]
\textit{above-average} & $9.7747$ \\ [1ex]
\textit{below-average} & $132.9669$ \\ [1ex]
\hline
\end{tabular}
\end{table}
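The entries of Table \ref{tab2} and the mortality curve $g(E)$ of equation (\ref{P}) can be evaluated directly; a short sketch:

```python
import math

# Intercepts beta_0 from Table 1.
BETA0 = {"extreme": 5.5492, "very high": 2.9399, "high": 0.3297,
         "above-average": -2.2798, "below-average": -4.8901}

def beta_of(b0):
    # beta = exp(-beta_0), as used in equation (P).
    return math.exp(-b0)

def g(E, beta):
    # Estimated probability of mortality at distance E (metres) into the margin.
    return math.exp(-0.3586 * E) / (beta + math.exp(-0.3586 * E))

for name, b0 in BETA0.items():
    b = beta_of(b0)
    print(f"{name:14s} beta = {b:10.4f}  g(0) = {g(0.0, b):.4f}  g(5) = {g(5.0, b):.4f}")
```

As expected, $g(E)$ decreases monotonically with the distance $E$ into the margin.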
\\
Concretely, Perry et al. compute $\mu$ by numerically integrating $g(E)$,
in equation (\ref{P}), between $0$ and $D$, as
described in
\cite{Perry10}.\\
To estimate the probability of mortality, \emph{h}, for the five hypothetical larvae within the $Bt$-crop, it is necessary to consider that pollen deposition within a maize crop is $2.757$ times that at the edge
\cite{Perry10}.
Therefore,
\begin{equation}\label{h}
h = 2.757\;g(0) = 2.757 \frac{1}{1+\beta}.
\end{equation}
The values obtained for $h$, for the considered range of sensitivities, are shown in Table \ref{tab3}.
\begin{table}[h]
\caption{Within-crop mortality probabilities, $h$, corresponding to the considered species sensitivities. }
\label{tab3}\centering
\begin{tabular}{p{3cm}r}
\hline\hline
\\
Sensitivity & $h$ \\ [1ex]
\hline
\textit{extreme} & $2.7463$ \\ [1ex]
\textit{very high} & $2.6185$ \\ [1ex]
\textit{high} & $1.6037$ \\ [1ex]
\textit{above-average} & $0.0928$ \\ [1ex]
\textit{below-average} & $0.0075$ \\ [1ex]
\hline
\end{tabular}
\end{table}
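As a sketch, equation (\ref{h}) can be evaluated directly from the $\beta$ values of Table \ref{tab2}; the values for the three most sensitive species reproduce Table \ref{tab3} and already exceed one, a point discussed further below:

```python
# beta values from Table 2.
BETA = {"extreme": 0.0039, "very high": 0.0529, "high": 0.7191,
        "above-average": 9.7747, "below-average": 132.9669}

def h_of(beta):
    # Within-crop mortality, equation (h): pollen deposition inside the crop
    # is assumed to be 2.757 times that at the edge, so h = 2.757 g(0).
    return 2.757 / (1.0 + beta)

for name, beta in BETA.items():
    print(f"{name:14s} h = {h_of(beta):.4f}")
```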
\\
To conclude the model overview, two further parameters have to be introduced,
namely the \emph{mitigation parameter} and the \emph{large scale exposure parameter}. We
describe only the former, since the latter does not affect our discussion.
\\
Perry et al.
\cite{Perry12}
considered
mitigation measures, too. The parameter $w$ is the width in metres of the non-$Bt$ maize strips that represent the simulated mitigation measure, assumed to be planted along each of the four field edges. When there is mitigation ($w>0$), calculations similar to those just described are used. However, the mortality calculated for larvae in the margin must use an appropriate value of $E$, computed to allow for the fact that the $Bt$-maize is a distance of $w$ metres further away. The same reasoning applies to the mortality of larvae within the non-$Bt$-maize.
\subsection{Computation of the non-Bt maize strips width}
In this Section, we propose a procedure that automatically fixes the width of the non-$Bt$ maize strips.
To this purpose,
it is necessary to consider that the Cry1F Bt-maize exposure model is based on the
Cry1Ab Bt-maize one, previously described in Section \ref{Cry1ab}.
As a consequence, the model inherits from the Cry1Ab Bt-maize exposure model
the proportion of the population that suffers mortality, $P$.
Therefore, the analysis regarding
the computation of the exact value of $P$, described in Section \ref{Cry1ab},
is equally valid for the Cry1F Bt-maize exposure model. This implies that $P$, given by equation (\ref{eqfin2}),
asymptotically behaves as $\frac{1}{D}$, where $D$ is the distance from the crop edge.
That being said, we propose a method to automatically assess, starting from the expression
in equation (\ref{eqfin2}), the width $w$ of the non-Bt maize strips. We propose
to use the 3$\sigma$ rule \cite{Ross09}, borrowed from probability and statistics,
to derive a fixed value for the width $w$. Let $P(0)$, see equation (\ref{p0}), be the proportion of the population that suffers mortality
at the crop edge; the width $w$ is then given by the value of $D$ such that:
\begin{equation}
\frac{P(D)}{P(0)} \le (1- \eta),
\end{equation}
where $\eta$ is the probability that a zero-mean normal distribution with standard deviation $\sigma$ assumes values in
$[-3\sigma,3\sigma]$, i.e., $\eta \sim 0.9973$. Since $P(D) = O(\frac{1}{D})$ for large $D$, we fix:
\begin{equation}\label{rule}
w = \{ D : \frac{1}{D} \approx (1- \eta) \}\,.
\end{equation}
Hence $D$ can be fixed to $1/(1-0.9973)$, i.e. $\sim 370$m.
If we use this value of $D$ to fix the width $w$ of the non-Bt maize strips, the Bt-maize field, which for simplicity we assume to be square, must have a size larger than twice the value of $D$, i.e., $740$m. For instance, we consider the example described in \cite{Perry12} with the sole difference that the size of the field is $774$m (twice the one considered by Perry et al.\footnote{The original size of the field considered by Perry et al., $387$m, cannot be considered since it is lower than $740$m.}). It is easy to show that the area of the field that can be devoted to the Bt-maize cultivation is a square of size $34$m (see Figure \ref{circleplot}), whereas the rest of the field cannot be used, since it must be devoted to the mitigation strips.
\begin{figure}
\begin{center}
\scalebox{2.0}{\includegraphics{quadrato.png}}
\end{center}
\caption{Crop Field. The parameters $\lambda$, $\rho$, $w$ stand for the size of squared crop field,
the size of $Bt$-maize cultivated field, the width of non-$Bt$ strips, respectively. }
\label{circleplot}
\end{figure}
The example shows that the adoption of mitigation measures implies strong constraints. Firstly, field size should be larger than $740m$;
secondly, only a limited field area can be devoted to the $Bt$-maize cultivation.
We conclude the analysis by showing how the values proposed by Perry et al. can be viewed in our
statistical and probabilistic framework. In particular, if one considers the most severe value
proposed by Perry et al., namely $24$m, the application of rule (\ref{rule}) yields $1-\eta \sim 0.0416$, which corresponds approximately to applying a 2$\sigma$ rule
($2.03\sigma$ \cite{Korn00}, to be precise). This choice cannot be considered to fulfill the \emph{worst case scenario} recommended in the Directive 2001/18/EC of the European Community \cite{Europ01}.
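The numbers above can be reproduced with the standard normal distribution; a minimal sketch using the Python \texttt{statistics} module:

```python
from statistics import NormalDist

std = NormalDist()  # zero-mean, unit-variance normal distribution

def coverage(k):
    # Probability that the normal variate falls within [-k*sigma, k*sigma].
    return std.cdf(k) - std.cdf(-k)

eta = coverage(3.0)        # 3-sigma rule: ~0.9973
D = 1.0 / (1.0 - eta)      # width from rule (rule): ~370 m
print(f"eta = {eta:.4f},  D = {D:.1f} m")

# Perry et al.'s most severe mitigation width, w = 24 m, corresponds to
# 1 - eta = 1/24, i.e. roughly a 2-sigma rule:
p = 1.0 - 1.0 / 24.0
k = std.inv_cdf((1.0 + p) / 2.0)  # number of sigmas giving that coverage
print(f"w = 24 m  ->  1 - eta = {1 - p:.4f},  ~{k:.2f} sigma")
```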
\subsection{Derivation of a lower bound for species sensitivity}
Now we examine the \textit{within crop mortality} parameter, $h$,
and its assumed values listed in Table \ref{tab3} of Section \ref{Cry1F}.
Three out of five $h$ values, corresponding to the \textit{extreme}, \textit{very high} and \textit{high} sensitivities, violate the definition of probability.
Recall that \textit{a probability always takes values between zero and one} \cite{Ross09}, while the above mentioned $h$ values are \textit{greater than one}.
The joint assumptions that pollen deposition within a maize crop is $2.757$ times that at the edge, and the considered range for
the species sensitivities, undermine the theoretical soundness of the proposed model.
The right-hand side of equation (\ref{h}) should be $\le 1$, hence
equation (\ref{h}) \emph{defines a probability if and only if $\beta\geq1.757$}.
Therefore, more restrictive assumptions on the sensitivity should be made and/or further argumentation should be given concerning the assumption that pollen deposition within a maize crop is $2.757$ times that at the edge.
Obviously, the incorrect values of within-crop mortality $h$,
for \textit{extreme}, \textit{very high} and \textit{high} sensitivities are propagated
through the model affecting the correctness of the proportion of population that suffers mortality $P$ computed
in equation (\ref{eqfin}) and, consequently, of the entire model.
However, taking the above mentioned assumptions of Perry et al. as reasonable, a lower bound on the values of the sensitivity $m$
can be added to Perry et al.'s model in order to get a correct probability value for the parameter $h$.
\\
We now derive the lower bound.
By applying the definition of probability, the following relations must hold:
\begin{displaymath}
h \leq 1 \Leftrightarrow \frac{2.757}{1+\beta} \leq 1 \Rightarrow \beta \geq 1.757,
\end{displaymath}
and, since $\beta=\exp(-\beta_0)$, we get
\begin{displaymath}
\exp(-\beta_0) \geq 1.757 \Rightarrow \beta_0 \leq -\ln 1.757 \Rightarrow \beta_0 \leq -0.5636.
\end{displaymath}
From equation (\ref{logit}), $\beta_0 = \alpha+5.8017$ and therefore $\alpha \leq -6.3653$.
Now, from equation (\ref{logitPd}) we get:
\begin{equation}
logit(P)-2.473\; \log_{10}d \le -6.3653,
\end{equation}
and setting $logit(P)=0$, it follows:
\begin{eqnarray}
-2.473\; \log_{10}d & \leq &-6.3653 \nonumber \\
\log_{10}d & \geq & 2.5739 \nonumber \\
d & \geq & 10^{2.5739} \sim 374.89.
\end{eqnarray}
Therefore, since at $LC50$, $m=d$ (see supplementary material $S1$ and $S3$ in \cite{Perry12}),
the lower bound $m\geq374.89$ must hold in order for $h$ to be a probability.
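The chain of inequalities above can be verified numerically with a few lines of Python:

```python
import math

# Requiring h = 2.757/(1+beta) <= 1 gives beta >= 2.757 - 1 = 1.757.
beta_min = 2.757 - 1.0
b0_max = -math.log(beta_min)         # beta_0 <= -ln 1.757 ~ -0.5636
alpha_max = b0_max - 5.8017          # alpha  <= beta_0 - 5.8017 ~ -6.3653
# With logit(P) = 0 at d = m, equation (logitPd) gives 2.473 log10(m) >= -alpha:
log10_m_min = -alpha_max / 2.473
m_min = 10.0 ** log10_m_min          # lower bound on the sensitivity m
print(f"beta_0 <= {b0_max:.4f},  alpha <= {alpha_max:.4f},  m >= {m_min:.2f}")
```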
\section{Conclusion}\label{concl}
In this work, some mathematical models for the estimation of the effects of Cry1Ab and Cry1F Bt-maize exposure on
non-target Lepidoptera have been discussed, deriving novel results. Firstly, we obtained the exact formula for the proportion of the population that suffers mortality when exposed to Cry1Ab pollen, studying its dependence on the distance from the $Bt$-crop edge.
Besides, regarding Cry1F pollen effects, a procedure was proposed to automatically fix the width of the non-$Bt$ strips used as mitigation measures. It was also shown that the adoption of mitigation measures that take into account the worst-case scenario
recommended in Directive 2001/18/EC by the European Community implies strong constraints
on the Bt-maize cultivation. Firstly, the field size should be larger than $740$m; secondly, only a limited field area can be devoted to the $Bt$-maize cultivation.
Furthermore, on the basis of probabilistic considerations, a lower bound on the species
sensitivity of Lepidoptera has been derived.
\\
We hope that the derived novel results about mathematical models for the estimation of the effects of Cry1Ab and Cry1F Bt-maize exposure on non-target Lepidoptera can contribute to the debate in the scientific community about the GMP effects.
\section{Introduction}
The non-perturbative meson--meson interactions and the related scalar-meson spectroscopy are a topic of great importance.
The nature of the scalar mesons is still under debate, despite several decades of effort, particularly since the discovery of the resonances $f_0(980)$ \cite{Protopopescu:1973sh} and $a_0(980)$ \cite{Ammar:1968zur}.
The scalar mesons below 1~GeV, like the $f_{0}(500)/\sigma, K^*(800)/\kappa, f_0(980), a_0(980)$, are serious candidates to comprise a $J^{PC}=0^{++}$ nonet as required in Refs.~\cite{Jaffe:1976ig,Jaffe:1976ih,vanBeveren:1986ea,Napsuciale:1998ip,Black:2000qq,Oller:2003vf,Moussallam:2011zg}.
These resonances with vacuum quantum numbers are crucial for
the deep understanding of spontaneous chiral symmetry breaking of Quantum Chromodynamics (QCD), its spectroscopy and, in general, of its non-perturbative nature \cite{Moussallam:1999aq,Oller:2006xb,Oller:2007xd,Albaladejo:2010tj,Alvarez-Ruso:2009vkn,Alvarez-Ruso:2010rqm}.
Over the decades the lightest scalar resonances have been accommodated within different models, like tetraquark states \cite{Jaffe:1976ig,Jaffe:1976ih,Achasov:1980tb,Achasov:1999wv,Achasov:2020aun,Vijande:2009ac}, molecular states \cite{Weinstein:1982gc,Weinstein:1983gd,Ahmed:2020kmp,Dai:2014zta,Dai:2011bs}, dynamically generated resonances \cite{Oller:1997ti,Oller:1997ng,Oller:1998hw,Oller:1998zr,Janssen:1994wn,Lohse:1990ew}, unitarized quark models \cite{Bramon:1980ni,vanBeveren:1986ea,Tornqvist:1995kr}, linear sigma models \cite{Black:1998wt,Scadron:1997ut,Napsuciale:2004xa,Napsuciale:2004au}, etc.
For example, in Ref.~\cite{Dai:2012kf} the $a_0(980)$ is understood as a Breit-Wigner resonance, not as a dynamically generated resonance, while the $f_0(980)$ is considered as a $K\bar{K}$ bound state.
We also notice that in Ref.~\cite{Sekihara:2014qxa} the compositeness is analyzed via the $f_{0}(980)-a_{0}(980)$ mixing intensity, and it is found that the $f_{0}(980)$ and $a_{0}(980)$ cannot be simultaneously $K\bar{K}$ bound states.
The masses of the two resonances are very close and the $f_{0}(980)-a_{0}(980)$ mixing could occur via the hadronic $K\bar{K}$ loop \cite{Achasov:1979xc,Kudryavtsev:2002uu,Hanhart:2007bd,Oller:1999ag}.
Recently, there is also interest in assessing the nature of the scalar mesons by studying semileptonic decays \cite{Cheng:2017pcq,Kang:2018jzg,Kang:2013jaa}. For some reviews, see Refs.~\cite{Close:2002zu,Klempt:2007cp,Yao:2020bxx,Zyla:2020zbs}.
In fact, a meson has typically several components \cite{Cohen:2014vta}, such as the superposition of $q\bar{q}$ and tetraquarks $qq\bar{q}\bar{q}$ \cite{tHooft:2008rus}, gluonium \cite{Narison:2000dh}, meson-meson components, etc.
The compositeness, usually denoted by $X$, refers to the weight in the resonance state composition of the meson-meson components in the continuum part of the free spectrum \cite{Weinberg:1962hj,Oller:2017alp}. Therefore, it is a fundamental concept that is required for a quantitative analysis on the nature of the resonance. In contrast, the elementariness, typically called $Z$, is the weight of the bare (compact/short-range) degrees of freedom in the resonance constitution such that $1=Z+X$.
For a bound state case, the compositeness is a positive real number \cite{Weinberg:1962hj} between 0 and 1 (as it should be), but its straightforward extension to the resonance case gives rise to complex-valued results \cite{Oller:2017alp}.
Several extensions have been proposed \cite{Baru:2003qq,Hyodo:2011qc,Aceti:2012dd,Aceti:2014ala,Sekihara:2014kya,Matuschek:2020gqe} to end with real sensible values for the compositeness. In this work, we use the results of Refs.~\cite{Guo:2015daa,Oller:2017alp} that allow a probabilistic interpretation of the compositeness relation of the resonance into open channels. Studies along these lines have also been extensively done for the case of heavy-quark resonances \cite{Meissner:2015mza,Kang:2016ezb,Kang:2016jxw,Gao:2018jhk,Guo:2020vmu,Guo:2020pvt,Du:2021bgb}. In addition, we also employ the formalism based on the evaluation of the spectral density function of the bare state associated to the resonance \cite{Bogdanova:1991zz} by using a Flatt\'e parameterization \cite{Baru:2003qq}. We then compare between this formalism and the one previously referred for the evaluation of the compositeness and elementariness, finding compatible results between them.
The $f_{0}(980)$ and $a_0(980)$ resonances couple mainly to the channels $\pi\pi$-$K\bar{K}$ and $\pi\eta$-$K\bar{K}$, respectively. The main equations stem from considering the saturation of the compositeness relationship and the total width of the resonance, from which we calculate the couplings, partial compositeness coefficients and partial-decay widths.
The implication of the branching ratio to the lighter channel, which we call $r_{\rm exp}$,
together with the reproduction of the total width, is also explored within our compositeness formalism. This setup allows one to obtain more definite predictions for $X$: the smaller the branching ratio, the larger the resulting $X$, by a linear relation.
In particular, for the $f_0(980)$ the branching ratio $r_{\rm exp}=0.52\pm 0.12$ \cite{Aubert:2006nu}, the most recent one collected in the PDG \cite{Zyla:2020zbs} from $B$ decays to $K\pi\pi$, implies the largest $X$, ranging around 0.6--0.9 within errors.
In turn for the $a_0(980)$ the branching ratios reported recently \cite{CrystalBarrel:2019zqh,Zyla:2020zbs} are much larger and then $X$ calculated here is significantly smaller, around $0.2-0.4$, taking into account errors and variations in the method of calculation. This indicates that other components in addition to the meson-meson ones play an important role in the constitution of the $a_0(980)$.
However, it is worth keeping in mind that the most sophisticated theoretical studies on $\pi\eta$ scattering matched with lattice QCD \cite{Guo:2016zep,Dudek:2016cru} obtain that the $a_0(980)$ is a pole lying in a hidden Riemann sheet from the physical energy axis.
This was also obtained before in Refs.~\cite{Guo:2012yt,Guo:2012ym}.
At this point there is a caveat, because our study here cannot be applied to such scenario (in which the resonance effects manifest as a strong cusp) in order to clarify the nature of the $a_0(980)$.
We find that for the $f_0(980)$ the $K\bar{K}$ component has a much larger partial compositeness coefficient than the $\pi\pi$ channel. For the $a_0(980)$ it is obtained that still the $K\bar{K}$ compositeness coefficient is also larger than the one of the $\pi\eta$, but not overwhelmingly dominant.
These results are a verification of those already obtained in Refs.~\cite{Janssen:1994wn,Oller:1997ti}, such that if the $\pi\eta$ channel were removed the $a_0(980)$ would disappear, while the $f_0(980)$ would keep appearing as a $K\bar{K}$ bound state.
In connection with our use of the Flatt\'e parameterization, we stress the importance of distinguishing between bare and renormalized couplings and widths. The former are those appearing directly in the Flatt\'e parameterization, while the latter are associated with the actual residues of the partial-wave amplitude of interest at the pole position in the complex energy plane. We also show that for the present two-channel coupled scattering, when the pole lies in the second Riemann sheet, the theoretically calculated partial-decay width to the lighter channel in terms of renormalized couplings (residues) has to be reinterpreted, and we give the proper interpretation.
These two effects explain why the bare partial-decay widths often found in the literature are much bigger than those actually measured.
For the rest of the paper, Sec.~\ref{sec.211010.1} is dedicated to elaborating
the formalism based on the saturation of the total compositeness and decay width of the resonance. In turn, Sec.~\ref{sec.201009.3} develops the method based on the use of a Flatt\'e parameterization and introduces the spectral density function for a near-threshold resonance.
Then, we apply these methods to the study of the $f_0(980)$ and $a_0(980)$ resonances, either by taking $X$ as input in Sec.~\ref{sec.201008.1}, or by using $r_{\rm exp}$ in Sec.~\ref{sec.211126.1}. In terms of them we typically provide the resulting partial compositeness coefficients and partial-decay widths.
Finally, concluding remarks are given in Sec.~\ref{sec.211106.1}.
\section{Formulation of the compositeness-relation and decay-width method}
\label{sec.211010.1}
For definiteness, we proceed with the discussion on the components in the nature of the $f_{0}(980)$, and develop a method to investigate its partial-decay widths, couplings and compositeness. Later on we also apply this method to the related isovector scalar resonance $a_0(980)$.
In what follows, we consider two main decay channels ($\pi\pi$ and $K\bar{K}$) of the $f_0(980)$.
We follow the standard convention such that compositeness and elementariness coefficients are written as $X$ and $Z$, respectively, with $X+Z=1$. For the case of a bound state the coefficient $Z$ corresponds to the field renormalization constant \cite{Weinberg:1962hj,Salam:1962ap}, being real and positive and less than 1 (as $X$ is too).
The straightforward generalization for resonances of the compositeness and elementariness gives rise to complex numbers.
As mentioned in the Introduction, several variants for the compositeness of a resonance have been discussed in the literature.
Here we will follow Ref.~\cite{Guo:2015daa}, which formulates a probabilistic interpretation of the compositeness relation involving only positive and real coefficients for the resonance.
As explained in Ref.~\cite{Oller:2017alp} the compositeness $X$ arises by evaluating the expected value of the number of mesons in the resonance divided by 2 (because we are considering two-body meson states).
After the proper unitary phase transformation of the $S$-matrix, it gives the partial compositeness coefficient for the resonance in the form \cite{Guo:2015daa}
\begin{equation}\label{equ1}
X_{i}=\Big{|}\gamma_{i}^{2}\Big{|}\Big{|}\frac{\partial G_{i}(s)}{\partial s}\Big{|}_{s=s_{R}}\,,
\end{equation}
and the subscript $i$, with $i=1$ and 2, corresponds to the $S$-wave isoscalar $\pi\pi$ and $K\bar{K}$ channels, respectively.
The pole position in the Mandelstam variable $s$ is called $s_R$,
\begin{equation}\label{equ21}
s_{R}=(m_{R}-\frac{i}{2}\Gamma_{R})^2\,,
\end{equation}
with $m_R$ and $\Gamma_R$ the mass and width of the resonance, respectively.
Furthermore, $\gamma_{i}$ is the coupling of the resonance to the channel $i$ that is extracted from the residues of the $T$ matrix at the pole position $s_R$,
\begin{equation}
\gamma^{2}_{i}=-\lim\limits_{s\to s_{R}}(s-s_{R})T(s)_{ii}\,.
\end{equation}
$G_{i}(s)$ is the unitary two-point scalar loop function for the $i$-th channel and it can be written in the form \cite{Guo:2012yt}
\begin{equation}\label{equ2}
\begin{aligned}
G_{i}(s)&=\frac{1}{16\pi^{2}}\big\{a_{i}(\mu)+\log\frac{m_{2}^{2}}{\mu^{2}}-\frac{\Delta+s}{s}\log\frac{m_{2}}{m_{1}}+\frac{p_{i}}{\sqrt{s}}[\log(s-\Delta+2\sqrt{s}p_{i})\\
&+\log(s+\Delta+2\sqrt{s}p_{i})-\log(-s+\Delta+2\sqrt{s}p_{i})-\log(-s-\Delta+2\sqrt{s}p_{i})]\big\}~.\\
\end{aligned}
\end{equation}
Here $\Delta=m_{1}^{2}-m_{2}^{2}$ and $m_{1}$, $m_{2}$ are the masses of the two particles in the channel $i$. We do not take into account isospin-breaking effects: we use an average mass for the charged and neutral pions, and proceed analogously for the kaons. Such effects are expected to be negligible in our exploration. The term $a_{i}(\mu)+\log\frac{m_{2}^{2}}{\mu^{2}}$ in Eq.~\eqref{equ2} is independent of $s$ and it disappears when taking the derivative of $G_{i}(s)$ in Eq.~\eqref{equ1}. Finally, in Eq.~\eqref{equ2}, $p_{i}$ is the momentum of the channel $i$,
\begin{equation}\label{eq:momentum}
p_{i}(s)=\frac{\sqrt{[s-(m_{1}+m_{2})^{2}][s-(m_{1}-m_{2})^{2}]}}{2\sqrt{s}}\,.
\end{equation}
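For reference, Eq.~\eqref{eq:momentum} extends straightforwardly to complex $s$. A minimal Python sketch, where the isospin-averaged masses $m_\pi \approx 138$~MeV and $m_K \approx 496$~MeV are our assumptions, and \texttt{cmath.sqrt} picks the principal branch (so $\text{Im}\,p>0$ below threshold, as on the RS \Rmnum{1}):

```python
import cmath

M_PI, M_K = 0.1380, 0.4957  # assumed isospin-averaged masses, in GeV

def momentum(s, m1, m2):
    # Two-body center-of-mass momentum, Eq. (eq:momentum), for complex s.
    return cmath.sqrt((s - (m1 + m2) ** 2) * (s - (m1 - m2) ** 2)) / (2.0 * cmath.sqrt(s))

s = 0.996 ** 2                    # s at the f0(980) mass, GeV^2
p_pipi = momentum(s, M_PI, M_PI)  # ~0.48 GeV: the pi pi channel is wide open
p_kk = momentum(s, M_K, M_K)      # small: barely above the K Kbar threshold
print(p_pipi, p_kk)
```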
The total compositeness coefficient, $X = \sum\limits_{i = 1}^{n} X_i$, is the sum over the partial compositeness coefficients $X_i$, and it must satisfy the condition $X\leq 1 $.
As discussed in more detail in Ref.~\cite{Guo:2015daa}, Eq.~\eqref{equ1} is properly applied to the calculation of $X_i$ for the channel $i$ under the condition that the resonance lies in an unphysical Riemann sheet (RS) that is connected with the physical RS along an interval of the real-$s$ axis (where $s$ is the total energy squared in the center of mass reference frame), lying above the threshold for the channel $i$.
Eq.~\eqref{equ1} is very similar to that for a bound state case, see e.g.~\cite{Oller:2017alp},
\begin{equation}
X_{i}=-\gamma_{i}^{2}\frac{\partial G_{i}(s)}{\partial s}\Big{|}_{s=s_{R}}\,,
\end{equation}
with the difference concerning the introduction of the absolute values.
It is necessary to distinguish the RS in which $s_{R}$ lies. According to the signs of the imaginary parts of $p_{1}$ and $p_{2}$ in the complex-$s$ plane, we can define the four different RSs as
\begin{equation}
\begin{aligned}\label{sheets}
\text{Sheet} ~\text{\Rmnum{1}}: ~~~\text{Im}p_{1}>0, ~\text{Im}p_{2}>0
\\
\text{Sheet} ~\text{\Rmnum{2}}: ~~~\text{Im}p_{1}<0, ~\text{Im}p_{2}>0
\\
\text{Sheet} ~\text{\Rmnum{3}}: ~~~\text{Im}p_{1}<0, ~\text{Im}p_{2}<0
\\
\text{Sheet} ~\text{\Rmnum{4}}: ~~~\text{Im}p_{1}>0, ~\text{Im}p_{2}<0
\end{aligned}
\end{equation}
The RSs $\text{\Rmnum{2}}$ and $\text{\Rmnum{3}}$ are connected to the physical RS $\text{\Rmnum{1}}$ along the real-$s$ axis between the $\pi\pi$ and the $K\bar{K}$ thresholds, and above the $K\bar{K}$ threshold, respectively.
The threshold of the $\pi\pi$ channel is distant from the resonance mass, while the resonance location is remarkably close to the $K\bar{K}$ threshold, cf. Eq.~\eqref{equ22} below.
Next, let us discuss how
to make the analytical extrapolation from the RS \Rmnum{1} to the RSs \Rmnum{2}, \Rmnum{3}, \Rmnum{4} in order to calculate the partial compositeness coefficient $X_i$, according to the RS in which the pole lies.
We have to cross the cut of $G_{i}(s)$ and use its continuity property for real values of $s$ with $s> (m_{i,1} + m_{i,2})^2$, where $m_{i,1}$ and $m_{i,2}$ are the masses of the first and second particles in the $i_{\rm th}$ channel, respectively. Then, one has that \cite{Oller:1997ti}
\begin{equation}\label{equ20}
\begin{aligned}
G_{i}^{\text{\Rmnum{2}}}(s+i\epsilon)&=G_{i}^{\text{\Rmnum{1}}}(s-i\epsilon)=G_{i}^{\text{\Rmnum{1}}}(s+i\epsilon)-2i\text{Im}G_{i}^{\text{\Rmnum{1}}}(s+i\epsilon)\\
&=G_{i}^{\text{\Rmnum{1}}}(s+i\epsilon)
+\frac{i}{8\pi}\sqrt{\frac{[s+i\epsilon-(m_{1}+m_{2})^{2}][s+i\epsilon-(m_{1}-m_{2})^{2}]}{(s+i\epsilon)^2}}\,,
\end{aligned}
\end{equation}
where the square root is calculated in the first Riemann sheet, with the argument of the radicand between 0 and $2\pi$.
Eq.~\eqref{equ20} can then be extrapolated to any other complex value of $s$.
Thus, the RS \text{\Rmnum{1}} is obtained with $G_{1}^{\text{\Rmnum{1}}}(s), G_{2}^{\text{\Rmnum{1}}}(s)$;
the RS \Rmnum{2} corresponds to take $G_{1}^{\text{\Rmnum{2}}}(s)$, $G_{2}^{\text{\Rmnum{1}}}(s)$;
the RS \Rmnum{3} is obtained with $G_{1}^{\text{\Rmnum{2}}}(s)$, $G_{2}^{\text{\Rmnum{2}}}(s)$;
and the RS \Rmnum{4} implies $G_{1}^{\text{\Rmnum{1}}}(s)$, $G_{2}^{\text{\Rmnum{2}}}(s)$.
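The bookkeeping above is mechanical, so it can be transcribed directly; a trivial sketch of which determination of each loop function is used on each sheet:

```python
# Which determination of each loop function G_i is used on each Riemann
# sheet, transcribing the rules stated in the text for Eq. (sheets).
SHEETS = {
    "I":   ("I", "I"),
    "II":  ("II", "I"),
    "III": ("II", "II"),
    "IV":  ("I", "II"),
}

def loop_functions(sheet):
    g1, g2 = SHEETS[sheet]
    return f"G1^{g1}, G2^{g2}"

for rs in ("I", "II", "III", "IV"):
    print(f"RS {rs:3s}: {loop_functions(rs)}")
```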
The crucial inputs in the evaluation of the coefficients $X_i$, Eq.~\eqref{equ1}, are the pole position and the coupling $|\gamma_i|$.
Regarding the pole parameters of the $f_{0}(980)$, we preferentially consider the results obtained by the dispersive analysis of Ref.~\cite{GarciaMartin:2011jx} based on the use of a set of Roy-like equations called the GKPY equations \cite{Garcia-Martin:2011iqs}.
The mass and width of the resonance calculated in Ref.~\cite{GarciaMartin:2011jx} are
\begin{equation}\label{equ22}
m_{R}=996\pm7\ \text{MeV}, ~~\Gamma_{R}= 50^{+20}_{-12}\ \text{MeV},
\end{equation}
which provides a rather accurate determination of the mass, while the width is affected by rather large errors.
When using this pole we consider the RS \Rmnum{2}, because the pole was found there in the original publication \cite{GarciaMartin:2011jx}.
In addition we consider the $f_0(980)$ pole position from Ref.~\cite{Guo:2012yt}.
This reference performs an exhaustive study of $S$ and $P$-wave meson-meson scattering by unitarizing one-loop amplitudes in $U(3)\otimes U(3)$ Chiral Perturbation Theory \cite{DiVecchia:1980yfw,Rosenzweig:1979ay,Witten:1980sp,Kawarabayashi:1980dp,Kawarabayashi:1980uh,Kaiser:2000gs,Herrera-Siklody:1996tqr} with explicit exchange of resonances. A large amount of experimental data on different reactions is reproduced and, at the same time, the consistency of the approach is checked by properly reproducing QCD constraints from spectral sum rules and semilocal duality as a function of the number of colors of QCD. The resulting pole of the $f_0(980)$, found also in the RS \Rmnum{2}, is
\begin{align}
\label{211107.1}
m_{R}=978^{+7}_{-11}\ \text{MeV}, ~~\Gamma_{R}= 58^{+18}_{-22}\ \text{MeV}\,.
\end{align}
Interestingly, in this case the mass of the resonance lies clearly below the $K\bar{K}$ threshold, while $m_R$ from Ref.~\cite{GarciaMartin:2011jx} in Eq.~\eqref{equ22} is above it. In this way we can explore the effect of the relative position of the resonance mass with respect to the two-kaon threshold.
Instead, the width of the $f_0(980)$ is rather similar in both cases.
In our considerations, we ignore the multiparticle $4\pi$ channel, whose contributions are very small up to 1~GeV, as obtained in phenomenological studies where it is considered \cite{Albaladejo:2008qa,Garcia-Martin:2011iqs}, or as estimated theoretically in studies based on unitarized Chiral Perturbation Theory \cite{Salas-Bernardez:2020hua}. We also notice that the electromagnetically driven two-photon decay channel has been ignored in our calculations. References~\cite{Zyla:2020zbs,Dai:2014zta,Oller:2007sh,Oller:2008kf} obtained $\Gamma_{f_{0}(980)\rightarrow\gamma\gamma}=0.32\pm0.05~\text{MeV}$, which is a tiny portion of the total width of the $f_0(980)$ and much smaller than the $\pi\pi$ and $K\bar{K}$ partial widths.
Then, we sensibly assume that the total compositeness coefficient of $f_{0}(980)$ can be expressed as the sum of the $S$-wave isoscalar $\pi\pi$ and $K\bar K$ channels,
\begin{equation}
\label{equ:square2}
X=X_{1}+X_{2}=|\gamma_{1}|^{2}\Big{|} \frac{\partial G_{1}(s)}{\partial s}\Big{|}_{s=s_{R}}+|\gamma_{2}|^{2}\Big{|} \frac{\partial G_{2}(s)}{\partial s}\Big{|}_{s=s_{R}}\,.
\end{equation}
In addition to Eq.~(\ref{equ:square2}), another main equation stems from imposing the saturation of the width of the $f_{0}(980)$. As the threshold of the $\pi\pi$ channel is distant from the resonance we use the standard formula for the partial-decay width of the $f_0(980)$ to $\pi\pi$,
\begin{equation}\label{equ33}
\Gamma_{1}=\frac{|\gamma_{1}|^{2}p_{1}(m_{R}^{2})}{8\pi m^{2}_{R}}\,,
\end{equation}
where $p_i$ is the momentum in the rest frame of the resonance, cf.~\eqref{eq:momentum} with $s=m_{R}^{2}$.
However, the $K\bar{K}$ threshold is very close to the resonance mass, and the effect of the finite width of the $f_{0}(980)$ (around 50 \text{MeV}) on the $K\bar{K}$ phase space is not negligible.
Notice that even the lower limit of the $f_0(980)$ mass within its uncertainty region in Eq.~\eqref{equ22} is smaller than the $K\bar{K}$ threshold. However, since the uncertainty in the mass is much smaller than the width of the resonance, this fact is easily overcome by the mass distribution of the resonance, and it does not prevent the actual decay of the $f_0(980)$ to $K\bar{K}$, even when the resonance mass is below the $K\bar{K}$ threshold.
In these regards, we consider a Lorentzian mass distribution for the resonance, and the partial-decay width is written as
\begin{equation}\label{equ:square}
\Gamma_{2}=\frac{|\gamma_{2}|^{2}}{16\pi^{2}} \int_{m_{1}+m_{2}}^{+ \infty}dW\frac{p_2(W^{2})}{W^{2}}\frac{\Gamma_{R}}{(m_{R}-W)^{2}+\Gamma_{R}^2/4}\,.
\end{equation}
In the limit $\Gamma_{R}\rightarrow0$, Eq.~(\ref{equ:square}) becomes the standard formula for the decay width.
In a practical calculation, the upper limit of integration ($+\infty$) is replaced by $m_{R}+n\Gamma_{R}$.
For example, in Ref.~\cite{Kang:2016ezb}, the value of $n = 8$ is chosen for the $Z_{b}(10610)/Z_{b}(10650)$ by reproducing the experimental width;
in Ref.~\cite{Guo:2020vmu}, $n = 10$ is adopted for the $Z_{c}(3900)$, $X(4020)$ and $Z_{c}(3985)$ particles.
However, in Ref.~\cite{Meissner:2015mza}, dedicated to the study of the compositeness of the $\chi_{c1}p$ for the $P_{c}(4450)$, the upper limit of integration used was $m_{R}+2\Gamma_{R}$.
The region for $n=2$ is usually thought to be a reasonable cut in the resonance region \cite{Meissner:2015mza,Dias:2021upl}.
For our consideration, we restrict the upper integration limit in the resonance region to $m_{R}+2\Gamma_{R}$ (which comprises the resonance signal as it can be seen in Fig.~\ref{picflatt}, introduced in Sec.~\ref{sec.201009.3} within the context of a Flatt\'e parameterization).
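As an illustration, the smeared partial width of Eq.~\eqref{equ:square} with the finite upper limit $m_R+n\Gamma_R$ can be evaluated numerically with a few lines of code. The following Python sketch uses illustrative inputs only (kaon mass, pole parameters similar to those discussed above, and a trial value for $|\gamma_2|$; none of these are fit results):

```python
import math

# Illustrative inputs in GeV (not fit results).
m_K     = 0.4957   # kaon mass
m_R     = 0.996    # f0(980) mass
Gamma_R = 0.050    # total width
gamma2  = 4.6      # trial value for |gamma_2|

def p2(W2):
    """CM momentum of the K Kbar pair at squared energy W2."""
    return math.sqrt(max(W2 / 4.0 - m_K**2, 0.0))

def Gamma2(n=2, steps=20000):
    """Lorentzian-smeared partial width to K Kbar with upper limit
    m_R + n*Gamma_R, cf. Eq. (equ:square)."""
    a, b = 2.0 * m_K, m_R + n * Gamma_R
    h = (b - a) / steps
    total = 0.0
    for i in range(steps):
        W = a + (i + 0.5) * h                       # midpoint rule
        lorentz = Gamma_R / ((m_R - W)**2 + Gamma_R**2 / 4.0)
        total += p2(W * W) / W**2 * lorentz * h
    return gamma2**2 / (16.0 * math.pi**2) * total

print("Gamma_2 =", 1e3 * Gamma2(), "MeV")
```

Since the integrand is positive, $\Gamma_2$ grows monotonically with $n$, which makes explicit the sensitivity to the choice of the upper integration limit discussed above.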
Another aspect to take into account is the RS in which the pole lies, because the kaon momentum in the center-of-mass reference frame has opposite signs at the pole position in the RSs \Rmnum{2} and \Rmnum{3}.
Notice that for the latter RS the kaon momentum has the standard sign in the lower complex-energy half-plane, corresponding to $m_R-i\Gamma_1/2-i\Gamma_{2}/2$, while for the former one has instead $m_R-i\Gamma_1/2+i\Gamma_{2}/2$.
Because of this change of sign in the kaon momentum, the saturation of the resonance width obtained from the pole position varies: $\Gamma_2$ adds to $\Gamma_1$ in the standard way when the pole lies in the RS \Rmnum{3}, but $\Gamma_2$ {\it subtracts from} $\Gamma_1$ when the pole lies in the RS \Rmnum{2}. As a result, the total decay width of the $f_{0}(980)$ is then
\begin{equation}
\label{equ:square1}
\Gamma_{R}=|\gamma_{1}|^{2}\frac{p_{1}(m_{R}^{2})}{8\pi m^{2}_{R}}\pm \frac{|\gamma_{2}|^{2}}{16\pi^{2}} \int_{m_{1}+m_{2}}^{m_{R}+2\Gamma_{R} }dW\frac{p_{2}(W^{2})}{W^{2}}\frac{\Gamma_{R}}{(m_{R}-W)^{2}+\Gamma_{R}^2/4}\, ,
\end{equation}
for the pole in the RS \Rmnum{3} or \Rmnum{2}, respectively.
Another interesting consequence of this discussion on the RS in which the pole lies is the simple observation that for a pole in the RS \Rmnum{2} the combination $m_R-i\Gamma_1/2+i\Gamma_2/2$ can be rewritten as $m_R-i(\Gamma_1-2\Gamma_2)/2-i\Gamma_2/2$, so that now the decay width to $K\bar{K}$ appears with the right sign in the resonance propagator for an interpretation as a decay width, while the decay width to the lighter channel is $\Gamma_1-2\Gamma_2$. This result can also be obtained in a more straightforward mathematical way in terms of the branching ratio to the first channel, $r_{\rm exp}$, by noticing that
\begin{align}
\label{211123.2}
r_{\rm exp}&=1-\frac{\Gamma_2}{\Gamma_R}=\frac{\Gamma_R-\Gamma_2}{\Gamma_R}=\frac{\Gamma_1-2\Gamma_2}{\Gamma_R}\,,~\text{RS \Rmnum{2}}~,
\end{align}
where we have used again that $\Gamma_R=\Gamma_1-\Gamma_2$. However, for a pole in the RS \Rmnum{3} one has the standard result
\begin{align}
\label{211123.1}
r_{\rm exp}&=1-\frac{\Gamma_2}{\Gamma_R}=\frac{\Gamma_1}{\Gamma_R}\,,~\text{RS \Rmnum{3}}~.
\end{align}
Nonetheless, in what follows we keep the usual notation of directly calling the $\Gamma_i$ decay widths, though for $i=1$ and the pole in the RS \Rmnum{2} the actual decay width to the lighter channel does not coincide with $\Gamma_1$, as just discussed. For this reason we denote by $\Gamma_{\pi\pi}$ or $\Gamma(f_0(980)\to\pi\pi)$ the physical decay width of the $f_0(980)$ to $\pi\pi$, and similarly we use $\Gamma_{\pi\eta}$ or $\Gamma(a_0(980)\to\pi\eta)$ for the physical partial-decay width of the $a_0(980)$ to the lighter channel. For $K\bar{K}$ we can use indistinctly $\Gamma_2$ or $\Gamma_{K\bar{K}}$ since they coincide. They have the same meaning as $\Gamma(f_0(980)\to K\bar{K})$ or $\Gamma(a_0(980)\to K\bar{K})$ in a clear notation.
These points are further elaborated when considering a Flatté parameterization for the $f_0(980)$ and $a_0(980)$ resonances in Sec.~\ref{sec.201009.3}.
Combining Eq.~(\ref{equ:square2}) and Eq.~(\ref{equ:square1}) allows us to solve for $|\gamma_{1}|$ and $|\gamma_{2}|$ in terms of the total compositeness and width. Then we can obtain the partial-decay width $\Gamma_i$ and the individual compositeness coefficient $X_{i}$ for each channel.\footnote{The fact that the width of the $f_0(980)$ is substantially larger than the difference between its mass and the $K\bar{K}$ threshold allows one to apply in a reasonable way the standard formula Eq.~\eqref{equ1} for the partial compositeness $X_2$, even if the $f_0(980)$ pole lies in the 2nd (3rd) RS above (below) the {\it nearby} $K\bar{K}$ threshold. The mass distribution of the resonance smooths the sharp condition, alluded to after Eq.~\eqref{eq:momentum}, on the relative position between the resonance mass and the $K\bar{K}$ threshold when their distance is much smaller than the $f_0(980)$ width.\label{foot.211008.1}}
We consider several choices for $X$ in Eq.~\eqref{equ:square2}, typically from 0.2 up to 0.8 in steps of 0.2.
The information for the branching ratio $r_{\rm exp}=\Gamma(f_{0}(980)\rightarrow\pi\pi)/[\Gamma(f_{0}(980)\rightarrow \pi\pi)+\Gamma(f_{0}(980)\rightarrow K\bar{K})]$ can also be used together with the total width $\Gamma_R$ to fix $|\gamma_1|$ and $|\gamma_2|$.
The results following one way or the other are organized in Secs.~\ref{sec.201008.1} and \ref{sec.211126.1}, respectively.
They are also applied in an analogous way to the isovector scalar $a_0(980)$, involving the scattering channels $\pi\eta$ (1) and $K\bar{K}$ (2), with $r_{\rm exp}=\Gamma(a_{0}(980)\rightarrow\pi\eta)/[\Gamma(a_{0}(980)\rightarrow \pi\eta)+\Gamma(a_{0}(980)\rightarrow K\bar{K})]$ in that case.
\section{Flatt\'e parameterization and the spectral density of a bare state}
\label{sec.201009.3}
One disadvantage of the approach followed in Sec.~\ref{sec.211010.1} is the necessity to assume a value of $n$ for the upper limit of integration in Eq.~\eqref{equ:square} for evaluating the partial-decay width of the resonance into $K\bar{K}$, that is, $\Gamma_2$.
This can be overcome by using a Flatt\'e parameterization \cite{Flatte:1976xu}, without increasing the number of the input parameters needed to calculate the couplings $|\gamma_i|$, partial-decay widths $\Gamma_i$ and partial compositeness coefficients $X_i$.
Since the $f_0(980)$ and $a_0(980)$ lie very close to the $K\bar{K}$ threshold, a Flatt\'e parameterization is then especially suitable \cite{Baru:2003qq}.
Nonetheless, as discussed in Refs.~\cite{Baru:2010ww,Kang:2016jxw}, there is also a limitation in the use of a Flatté parameterization as it assumes that the corresponding $K\bar{K}$ partial-wave amplitude has no zero in the near-threshold region.
We assume that this is the case and proceed with the rather intuitive picture offered by a Flatté parameterization of dressing a bare resonance propagator, $1/D(E)$, by the self energy due to the intermediate channels 1 and 2,
\begin{align}
\label{210703.1}
D(E)&=E-E_f+i\frac{\widetilde{\Gamma}_1}{2}+\frac{i}{2}g_2\sqrt{m_K E}~.
\end{align}
Here $E$ is the total center of mass energy measured with respect to the two-kaon threshold, $E\equiv \sqrt{s}-2m_K$,
$E_f$ is the bare mass of the resonance plus the contributions at around the $K\bar{K}$ threshold from the real parts (which are taken as constants) of the meson-meson loops contributing to the resonance self energy. In addition, $g_i$ is the {\it bare} coupling squared of the resonance to the $i$-th channel,
such that the {\it bare} width $\widetilde\Gamma_1$ to channel 1 is written in terms of $g_1$ as
\begin{align}
\label{211009.2}
\widetilde{\Gamma}_1&=\frac{p_1(m_R)g_1}{8\pi m_R^2}~.
\end{align}
The pole position in the variable $E$ is called $E_R=M_R-i\Gamma_R/2$, {\it with $M_R$ the mass of the resonance with respect to $2m_K$}, $M_R=m_R-2m_K$.
The Flatt\'e parameterization contains as free parameters $E_f$, $\widetilde{\Gamma}_1$ and $g_2$ that can be fixed in terms of the mass
and width of the resonance, that is, by knowing its pole position, and from the knowledge either of the branching ratio $r_{\rm exp}$ to the lighter channel or the total compositeness $X$.
To calculate the resonance pole position we must look for the zeroes of
Eq.~\eqref{210703.1}, $D(E_R)=0$,
\begin{align}
\label{210703.2}
E_R-E_f+{\frac{i}{2}}\,\widetilde{\Gamma}_1=-\frac{i}{2}g_2\sqrt{m_K E_R}~.
\end{align}
Squaring both sides of the previous expression and solving the resulting quadratic equation,
we obtain the two roots
\begin{align}
\label{210703.4}
E_R&=E_f-\frac{1}{8}m_Kg_2^2-\frac{i}{2}\widetilde{\Gamma}_1 +\sigma\sqrt{\frac{m_Kg_2^2}{4}}
\sqrt{\frac{m_Kg_2^2}{16}-E_f+\frac{i}{2}\widetilde{\Gamma}_1}~,
\end{align}
with $\sigma=\pm 1$ in order to keep track of the two different solutions.
Later on we show that $\sigma=+1(-1)$ corresponds to the pole
lying in the RS \Rmnum{2} (\Rmnum{3}). For the calculation of the square root in the previous equation (taken such that $\Im\sqrt{z}\geq 0$, $z\in \mathbb{C}$) one needs to distinguish two cases according to the sign of
$m_K g_2^2/16-E_f$:
\begin{align}
\label{210703.5}
& \text{i})~\frac{m_Kg_2^2}{16}-E_f > 0~,\\
E_R&=E_f-\frac{i}{2}\widetilde\Gamma_1-\frac{m_K g_2^2}{8}+\frac{\sigma}{2}\sqrt{m_K g_2^2}
\left(\left(\frac{m_Kg_2^2}{16}-E_f\right)^2+\frac{\widetilde\Gamma_1^2}{4}\right)^\frac{1}{4}
\exp{\left(-\frac{i}{2}\arctan\frac{\widetilde\Gamma_1/2}{E_f-m_Kg_2^2/16}\right)}~.\nonumber
\end{align}
\begin{align}
\label{210703.6}
& \text{ii)}~\frac{m_Kg_2^2}{16}-E_f<0~,\\
E_R&=E_f-\frac{i}{2}\widetilde\Gamma_1-\frac{m_K g_2^2}{8}+\frac{\sigma}{2}\sqrt{m_Kg_2^2}
\left(\left(\frac{m_Kg_2^2}{16}-E_f\right)^2+\frac{\widetilde\Gamma_1^2}{4}\right)^\frac{1}{4}
\exp{\frac{i}{2}\left(\pi-\arctan\frac{\widetilde\Gamma_1/2}{E_f-m_Kg_2^2/16}\right)}~.\nonumber
\end{align}
In what follows we only consider the case i), because for both i) and ii) one obtains the same equations relating $E_f$, $g_2$ and $\widetilde\Gamma_1$ with the inputs $M_R$, $\Gamma_R$ and $r_{\rm exp}$ or $X$.
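The closed-form roots can be checked numerically against the squared pole condition $(E_R-E_f+i\widetilde{\Gamma}_1/2)^2=-g_2^2 m_K E_R/4$, which both values of $\sigma$ must satisfy. A minimal Python sketch, with assumed (purely illustrative) Flatt\'e parameters:

```python
import cmath

# Assumed Flatte parameters in GeV (purely illustrative).
m_K, E_f, G1t, g2 = 0.4957, 0.010, 0.100, 0.5

def roots():
    """The two solutions of Eq. (210703.4), sigma = +1 and -1."""
    inner = cmath.sqrt(m_K * g2**2 / 16.0 - E_f + 0.5j * G1t)
    return [E_f - m_K * g2**2 / 8.0 - 0.5j * G1t
            + s * 0.5 * g2 * cmath.sqrt(m_K) * inner for s in (+1, -1)]

# Both roots satisfy the squared version of Eq. (210703.2).
for E_R in roots():
    lhs = (E_R - E_f + 0.5j * G1t)**2
    rhs = -0.25 * g2**2 * m_K * E_R
    assert abs(lhs - rhs) < 1e-12
```

Note that the argument of the inner square root has a positive imaginary part ($\widetilde{\Gamma}_1>0$), so the principal branch used by `cmath.sqrt` already fulfills the convention $\Im\sqrt{z}\geq 0$ adopted above.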
We introduce the auxiliary angle $\phi$ defined by
\begin{align}
\label{210704.1}
\phi=\arctan\frac{\widetilde\Gamma_1/2}{E_f-m_K g_2^2/16}~.
\end{align}
Therefore, $E_R$ can be written as
\begin{align}
\label{210704.2}
E_R&=E_f-\frac{m_K g_2^2}{8}-\frac{i}{2}\widetilde\Gamma_1+\frac{\sigma}{2}\sqrt{m_Kg_2^2}\left(
\left(E_f-\frac{m_Kg_2^2}{16}\right)^2+\frac{\widetilde\Gamma_1^2}{4}\right)^\frac{1}{4}
(\cos\frac{\phi}{2}-i\sin\frac{\phi}{2})~.
\end{align}
Attending to the real and imaginary parts in this equation we have that
\begin{align}
\label{210704.3}
M_R&=E_f-\frac{m_K g_2^2}{8}+\sigma \frac{\sqrt{m_K g_2^2}}{2} \left(\left(E_f-\frac{m_Kg_2^2}{16}\right)^2+\frac{\widetilde\Gamma_1^2}{4}\right)^\frac{1}{4}\cos\frac{\phi}{2}~,\\
\Gamma_R&=\widetilde\Gamma_1+\sigma\sqrt{m_K g_2^2}\left(\left(E_f-\frac{m_K g_2^2}{16}\right)^2+\frac{\widetilde\Gamma_1^2}{4}\right)^\frac{1}{4} \sin\frac{\phi}{2}~.\nonumber
\end{align}
Taking into account the definition of $\phi$, one also has that
\begin{align}
\label{210704.4}
\left(E_f-\frac{m_K g_2^2}{16}\right)^2+\frac{\widetilde\Gamma_1^2}{4}=
\left(E_f-\frac{m_K g_2^2}{16}\right)^2\left(1+\tan^2\phi\right)
=\left(E_f-\frac{m_K g_2^2}{16}\right)^2\frac{1}{\cos^2\phi}~,
\end{align}
and
\begin{align}
\label{210704.6}
E_f-\frac{m_K g_2^2}{16}&=\frac{\widetilde\Gamma_1}{2}\cot\phi~.
\end{align}
Substituting these two equalities into Eq.~\eqref{210704.3}, with $\phi<0$ for case i), the latter equation becomes
\begin{align}
\label{210704.7}
M_R&=
-\frac{m_K g_2^2}{16}+\frac{\widetilde\Gamma_1}{2}\cot\phi+\frac{\sigma}{4}\sqrt{m_K g_2^2\widetilde\Gamma_1|\cot\frac{\phi}{2}|}~,\\
\Gamma_R&=
\widetilde\Gamma_1- \frac{\sigma}{2}\sqrt{m_K g_2^2\widetilde\Gamma_1 |\tan\frac{\phi}{2}|}~.\nonumber
\end{align}
From this last equation it follows that
\begin{align}
\label{210704.9}
\frac{\sigma}{2}\sqrt{m_K g_2^2\widetilde{\Gamma}_1}&= (\widetilde{\Gamma}_1-\Gamma_R)\sqrt{|\cot\frac{\phi}{2}|}~.
\end{align}
When this is taken into Eq.~\eqref{210704.7} we can write $M_R$ as
\begin{align}
\label{210705.1b}
M_R&=\frac{\Gamma_R^2}{4\widetilde{\Gamma}_1}\cot\frac{\phi}{2}\left[1-\left(\frac{\widetilde{\Gamma}_1}{\Gamma_R}\tan\frac{\phi}{2}\right)^2\right]~,\\
\Gamma_R&=\widetilde{\Gamma}_1- \frac{\sigma}{2}\sqrt{m_K g_2^2\widetilde{\Gamma}_1 |\tan\frac{\phi}{2}|}~.\nonumber
\end{align}
The equation for $M_R$ is of the form,
\begin{align}
\label{210801.6}
&x-\frac{1}{x}=\frac{4 M_R}{\Gamma_R}~,\\
&x\equiv \frac{\Gamma_R}{\widetilde{\Gamma}_1}\cot\frac{\phi}{2}<0~,\nonumber
\end{align}
and its solution for $x<0$ is
\begin{align}
\label{210801.7}
x&=\frac{2 M_R}{\Gamma_R}-\sqrt{1+\left(\frac{2M_R}{\Gamma_R}\right)^2}~.
\end{align}
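The algebra behind Eq.~\eqref{210801.7} is elementary and can be verified numerically; the following Python lines, with illustrative pole parameters, check that the quoted root is negative and solves $x-1/x=4M_R/\Gamma_R$:

```python
import math

# Illustrative pole parameters in GeV.
M_R, Gamma_R = 0.005, 0.050

# Negative root of x - 1/x = 4 M_R / Gamma_R, Eq. (210801.7).
x = 2.0 * M_R / Gamma_R - math.sqrt(1.0 + (2.0 * M_R / Gamma_R)**2)
assert x < 0.0
assert abs((x - 1.0 / x) - 4.0 * M_R / Gamma_R) < 1e-12
```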
After this is substituted in the expression for $\Gamma_R$ in Eq.~\eqref{210705.1b}, we can isolate the bare partial-decay width to the first channel,
and then the bare branching ratio $r\equiv \widetilde{\Gamma}_1/\Gamma_R$ is given by
\begin{align}
\label{210801.9}
r&=1+\frac{\sigma g_2}{2}\sqrt{\frac{m_K}{u-2M_R}}~,\\
u&\equiv (4M_R^2+\Gamma_R^2)^{1/2}~.\nonumber
\end{align}
Once we know $r$ we can also determine $\cot\phi/2$ by using the definition of $x$ and its solution in Eq.~\eqref{210801.7},
\begin{align}
\label{210801.10}
\cot\frac{\phi}{2}&=r x=
\frac{1}{2\Gamma_R}
\left(4M_R-2u
-\sigma g_2 \sqrt{m_K (u-2M_R)}\right)~.
\end{align}
Let us denote by $\beta$ the residue of $1/D(E)$ at the resonance pole,
\begin{align}
\label{210913.2}
\beta&= \left|\lim_{E\to E_R}\frac{E-E_R}{D(E)}\right|=\left|\frac{1}{1+\frac{i g_2}{4}\sqrt{\frac{m_K}{E_R}} }\right|=\frac{\sqrt{8u}}{\left(g_2^2 m_K+8 u+ 4 \sigma g_2 \sqrt{m_K(u-2M_R)}\right)^{1/2}}~.
\end{align}
This expression can be obtained by substituting $\sqrt{E_R}$ from Eq.~\eqref{210703.2},
the relation between $E_f$ and $\tan\phi$, cf. Eq.~\eqref{210704.6}, and finally
the expression for $\tan\phi$ from that of $\cot\phi/2$ given in Eq.~\eqref{210801.10}.
The {\it renormalized or dressed} coupling squared $|\gamma_i|^2$ is related to the bare one $g_i$ by evaluating the residue of the elastic
scattering amplitude for channel $i$, $g_i/D(E)$. In terms of $\beta$, we have the result
\begin{align}
\label{210801.12}
|\gamma_1|^2=g_1\beta~,\\
|\gamma_2|^2=32\pi m_K^2g_2\beta~,\nonumber
\end{align}
with the numerical factor in front of $g_2\beta$
needed for having the same normalization as in Sec.~\ref{sec.211010.1}.
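The residue $\beta$ can also be obtained numerically without the closed form of Eq.~\eqref{210913.2}, by locating the zero of $D(E)$ and evaluating $|1/D'(E_R)|$. In the following Python sketch the Flatt\'e parameters are again purely illustrative; for these values the true zero of $D(E)$ on the principal branch of the square root (the RS \Rmnum{3} convention) is selected among the two closed-form candidates:

```python
import cmath

# Assumed Flatte parameters in GeV (purely illustrative).
m_K, E_f, G1t, g2 = 0.4957, 0.010, 0.100, 0.5

def D(E):
    """Flatte denominator, Eq. (210703.1), on the principal sqrt branch."""
    return E - E_f + 0.5j * G1t + 0.5j * g2 * cmath.sqrt(m_K * E)

# Closed-form candidates from Eq. (210703.4), sigma = +1 and -1.
inner = cmath.sqrt(m_K * g2**2 / 16.0 - E_f + 0.5j * G1t)
cands = [E_f - m_K * g2**2 / 8.0 - 0.5j * G1t
         + s * 0.5 * g2 * cmath.sqrt(m_K) * inner for s in (+1, -1)]

# Keep the candidate that is a true zero of D on this branch of the sqrt.
E_R = min(cands, key=lambda E: abs(D(E)))

# Residue of 1/D at the pole: beta = |1/D'(E_R)|, cf. Eq. (210913.2);
# for these parameters cmath.sqrt(m_K/E_R) lies on the same branch as
# the square root entering D.
beta = abs(1.0 / (1.0 + 0.25j * g2 * cmath.sqrt(m_K / E_R)))
print("E_R =", E_R, " beta =", beta)
```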
To the best of our knowledge, the difference between the bare and dressed couplings in a Flatt\'e parameterization has not been clearly discussed before in the literature, and it has important implications. For example,
this is one of the reasons why the values for $\widetilde{\Gamma}_{\pi\pi}$ collected in the Table~2 of Ref.~\cite{Baru:2003qq} for the $f_0(980)$ are
typically much bigger than 100~MeV.
A similar comment can also be made for most of the entries of $\widetilde{\Gamma}_{\pi\eta}$ in Table~1 of the same reference regarding the $a_0(980)$.\footnote{The other reason applies to those poles in the RS \Rmnum{2}, because then the physical partial-decay width to the lighter channel is $\Gamma_1-2\Gamma_2$, which is smaller than $\Gamma_1$, cf. Eq.~\eqref{211123.2} and the discussion below in this section.} Indeed, this can be a source of confusion in the literature. In this respect, we notice that Ref.~\cite{GarciaMartin:2011jx} compares their $\pi\pi$ $S$-wave residue with bare couplings used in energy-dependent-width Breit-Wigner or Flatt\'e parameterizations without considering the actual residue at the resonance pole position of the parameterization.
In order to obtain $g_2$ we need another input, for which we take either the physical branching ratio $r_{\rm exp}$ or the total compositeness $X$.
For the latter case we need then the expression for calculating $X_1$ and $X_2$, with $X=X_1+X_2$. Recalling Eq.~\eqref{equ1} we have for $X_1$,
\begin{align}
\label{211012.3}
X_1&=\gamma_1^2 \left|\frac{\partial G_1}{\partial s}\right|_{s=s_R}
=\frac{8\pi m_R^2\Gamma_R}{p_1(m_R)} r \beta \left|\frac{\partial G_1}{\partial s}\right|_{s=s_R}~,
\end{align}
with $r$ and $\beta$ given in terms of $g_2$ and the pole parameters in Eqs.~\eqref{210801.9} and \eqref{210913.2}, respectively.
For the calculation of $X_2$ a simpler algebraic formula can be obtained if we use non-relativistic kinematics for the calculation of the derivative $\partial G_2(s)/\partial s$ at $s=s_R$, taking advantage of the fact that the pole lies in the vicinity of the $K\bar{K}$ threshold. Then, $G_2(s)$ is a constant plus $-i \sqrt{m_K E}/(16\pi m_K)+{\cal O}(E/m_K)$, and its derivative with respect to $s$ is
\begin{align}
\label{211106.1}
\left.\frac{\partial G_2(s)}{\partial s}\right|_{s_R}=
\frac{-i}{128\pi m_K^{3/2}\sqrt{E_R}}+{\cal O}(1)~.
\end{align}
We then multiply this derivative by $32\pi m_K^2 g_2 \beta$ and the final expression that results is
\begin{align}
\label{210801.18}
X_2&=\frac{\sqrt{m_K}g_2}{\left(g_2^2m_K+8u+4\sigma g_2 \sqrt{m_K(u-2M_R)}\right)^{\frac{1}{2}}}~.
\end{align}
The equation to be solved to obtain $g_2$ given $X$ is $X_1+X_2=X$.
Nonetheless, in the numerical results shown below we calculate $X_2$ making use of relativistic kinematics, with differences of around $10$--$15\%$ compared with the values obtained when using the non-relativistic Eq.~\eqref{210801.18}.
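For orientation, the non-relativistic expression Eq.~\eqref{210801.18} is trivial to evaluate; the following Python lines use assumed values (in GeV) for the pole parameters and $g_2$, with $\sigma=-1$ corresponding to a pole in the RS \Rmnum{3}:

```python
import math

# Assumed inputs in GeV (illustrative only); sigma = -1 for the RS III.
m_K, M_R, Gamma_R, g2, sigma = 0.4957, 0.005, 0.050, 0.5, -1

u = math.sqrt(4.0 * M_R**2 + Gamma_R**2)
den = math.sqrt(g2**2 * m_K + 8.0 * u
                + 4.0 * sigma * g2 * math.sqrt(m_K * (u - 2.0 * M_R)))
X2 = math.sqrt(m_K) * g2 / den          # Eq. (210801.18)
print("X_2 =", X2)
```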
When $r_{\rm exp}$ is the input taken one has to distinguish between whether the pole lies in the RS \Rmnum{2} or RS \Rmnum{3}, due to the change of sign in the analytical extrapolation of $\sqrt{E}$ in $D(E)$, Eq.~\eqref{210703.1}, needed to reach the pole position.
For the pole in the RS \Rmnum{3}, we have the following straightforward relation between the physical $r_{\rm exp}$ and the bare $r$,
\begin{align}
\label{211012.1}
r_{\rm exp}&=r \beta=\frac{\sqrt{2u}\left(2\sqrt{u-2M_R}+g_2\sigma\sqrt{m_K}\right)}{\sqrt{u-2M_R}\left(g_2^2 m_K+8u+4\sigma g_2 \sqrt{m_K(u-2M_R)}\right)^{1/2}}~.
\end{align}
This is a quadratic equation for $g_2$ that can be easily solved. Requiring that $g_2=0$ for $r_{\rm exp}=1$ (which implies that there is no resonance decay at all to the $K\bar{K}$ channel), there is only one acceptable solution, given by
\begin{align}
\label{211012.2}
g_2&=\frac{2\left(-\Gamma_R r_{\exp}+\sqrt{u-2M_R}\sqrt{2M_R r_{\rm exp}^2+(2-r_{\rm exp}^2)u}\right)}{\sqrt{m_K\left(2M_R r_{\rm exp}^2+(2-r_{\rm exp}^2)u\right)}}~.
\end{align}
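Equation~\eqref{211012.2} can be checked numerically: the closed form must vanish at $r_{\rm exp}=1$ and reproduce the right-hand side of Eq.~\eqref{211012.1} with $\sigma=-1$ (RS \Rmnum{3}). A Python sketch with illustrative pole parameters:

```python
import math

# Illustrative pole parameters in GeV; sigma = -1 for the RS III.
m_K, M_R, Gamma_R, sigma = 0.4957, 0.0046, 0.050, -1
u = math.sqrt(4.0 * M_R**2 + Gamma_R**2)

def g2_of(r_exp):
    """Closed-form solution, Eq. (211012.2)."""
    Q = 2.0 * M_R * r_exp**2 + (2.0 - r_exp**2) * u
    return 2.0 * (-Gamma_R * r_exp + math.sqrt(u - 2.0 * M_R)
                  * math.sqrt(Q)) / math.sqrt(m_K * Q)

def r_of(g2):
    """Right-hand side of Eq. (211012.1)."""
    s = math.sqrt(u - 2.0 * M_R)
    den = s * math.sqrt(g2**2 * m_K + 8.0 * u
                        + 4.0 * sigma * g2 * math.sqrt(m_K * (u - 2.0 * M_R)))
    return math.sqrt(2.0 * u) * (2.0 * s + sigma * g2 * math.sqrt(m_K)) / den

assert abs(g2_of(1.0)) < 1e-12      # no K Kbar decay for r_exp = 1
assert abs(r_of(g2_of(0.8)) - 0.8) < 1e-10
```

The first assertion uses that $\sqrt{(u-2M_R)(u+2M_R)}=\Gamma_R$, so the numerator of Eq.~\eqref{211012.2} vanishes identically at $r_{\rm exp}=1$.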
For a pole in the RS \Rmnum{2}, when moving to the complex-$E$ plane with negative imaginary part the square root $\sqrt[II]{E}$ has a positive imaginary part and negative real one, changing sign with respect to that when calculated in the RS \Rmnum{3}, because there $\sqrt[III]{E}=-\sqrt[II]{E}$ (the momentum $p_2$ changes sign between the two sheets).
As a result, the last term in Eq.~\eqref{210703.1}, responsible for the width to $K\bar{K}$, does not add to but subtracts from $\widetilde{\Gamma}_1$.
Then, for a physical interpretation of the different terms in this equation it is convenient to rewrite it as
\begin{align}
\label{211109.1}
D(E)&=E-E_f+\frac{i}{2}\left[\widetilde{\Gamma}_1+2g_2\sqrt[II]{m_K E}\right]-\frac{i}{2}g_2\sqrt[II]{m_K E}\\
&=E-E_f+\frac{i}{2}\left[\widetilde{\Gamma}_1-2g_2\sqrt[III]{m_K E}\right]+\frac{i}{2}g_2\sqrt[III]{m_K E}
~.\nonumber
\end{align}
Let us notice that the real part of $\sqrt[III]{M_R-i\,\Gamma_R/2}$ is positive and then
the imaginary part of the last term in Eq.~\eqref{211109.1} appears with the same sign as $\widetilde{\Gamma}_1$.
From this equation we then deduce that, once the bare couplings are dressed, cf. Eq.~\eqref{210801.12}, the partial-decay width to the lighter channel really observed in an experiment is not $\Gamma_1$ but $\Gamma_1-2\Gamma_2$, while the total decay width $\Gamma_R$ is $\Gamma_1-\Gamma_2$ for the RS \Rmnum{2} case. Therefore, when
taking $r_{\rm exp}$ as input for a pole in the RS \Rmnum{2},
\begin{align}
\label{211110.1}
r_{\rm exp}&=\frac{\Gamma_1}{\Gamma_R}-2\frac{\Gamma_2}{\Gamma_R}=r\beta-2(1-r_{\rm exp})~,
\end{align}
from which the extra equation to be taken into account is
\begin{align}
\label{211110.2}
2-r_{\rm exp}&=r\beta=\frac{\sqrt{2u}\left(2\sqrt{u-2M_R}+g_2\sigma\sqrt{m_K}\right)}{\sqrt{u-2M_R}\left(g_2^2 m_K+8u+4\sigma g_2 \sqrt{m_K(u-2M_R)}\right)^{1/2}}~,
\end{align}
instead of the straight Eq.~\eqref{211012.1} for a pole in the RS \Rmnum{3}.
The valid solution in this case, the one that gives $g_2=0$ for $r_{\rm exp}=1$, is obtained from Eq.~\eqref{211012.2} by simultaneously multiplying its right-hand side by a minus sign and replacing $r_{\rm exp}$ by $2-r_{\rm exp}$.
As a result of this analysis we then expect that $\Gamma_1>\Gamma_R$ with values in the interval $[\Gamma_R,2\Gamma_R]$ for the resonance when its pole lies in the RS \Rmnum{2}.
To illustrate this point, let us take the residues given in Refs.~\cite{GarciaMartin:2011jx,Guo:2012yt} for the pole positions of the $f_0(980)$ in Eqs.~\eqref{equ22} and \eqref{211107.1}, respectively, and evaluate $\Gamma_1/\Gamma_R$ making use of Eq.~\eqref{equ33}.
The values of the residues are $|\gamma_1|=2.3\pm 0.2$~GeV \cite{GarciaMartin:2011jx} and $|\gamma_1|=1.80\pm 0.25$~GeV \cite{Guo:2012yt}. Propagating the errors in $m_R$, $\Gamma_R$ and $|\gamma_1|$ from these references, the values that we obtain are
\begin{align}
\label{211111.1}
\frac{\Gamma_1}{\Gamma_R}&=2.0\pm 0.9\,,~\text{Eq.~\eqref{equ22}--Ref.~\cite{GarciaMartin:2011jx}}~,\\
\frac{\Gamma_1}{\Gamma_R}&=1.1\pm 0.6\,,~\text{Eq.~\eqref{211107.1}--Ref.~\cite{Guo:2012yt}}~.\nonumber
\end{align}
This more detailed discussion based on the use of the Flatt\'e parameterization extends the same topic already discussed in Sec.~\ref{sec.211010.1} for the interpretation of the $\Gamma_i$'s and their connection with the experimental decay widths for the case of a pole lying in the RS \Rmnum{2}.
It is worth stressing that when $X$ is taken as input the procedure explained above, Eqs.~\eqref{211012.3}-\eqref{210801.18}, allows one to calculate $\Gamma_1/\Gamma_R$ without relying on its connection to $r_{\rm exp}$, and the resulting values, which we discuss below, are perfectly compatible with the picture just explained.
Finally, let us now show the relationship between the sign $\sigma$ and the RS in which the pole lies.
For that, we isolate $\sqrt{E_R}$ from Eq.~\eqref{210703.2}, which can then be written as
\begin{align}
\label{210801.14}
\sqrt{E_R}&=\frac{2i}{g_2 \sqrt{m_K}}(M_R-E_f)+\frac{\Gamma_R-\widetilde{\Gamma}_1}{g_2 \sqrt{m_K}}~.
\end{align}
This equation tells us that if $M_R-E_f>0$ the pole lies in the RS \Rmnum{2}, since then $\Im \sqrt{E_R}>0$ and, conversely, if $M_R-E_f<0$ the pole is located in the RS \Rmnum{3}.
Now, we consider Eq.~\eqref{210704.3} which clearly implies that if $\sigma=-1$ then $M_R-E_f<0$, corresponding to the RS \Rmnum 3.
For $\sigma=+1$ a more careful treatment is needed because the sign of $M_R-E_f$ depends on the relative sign between the last two terms in the right-hand side of Eq.~\eqref{210704.3}.
One can straightforwardly show that the absolute value of the last term is bigger than $m_Kg_2^2/8$ by squaring and subtracting them.
In the process one has to relate $E_f$ with $\tan\phi$, Eq.~\eqref{210704.6}, and use the expression for $\cot\phi/2$ given in Eq.~\eqref{210801.10}.
As a result $M_R-E_f>0$ and the pole lies in the RS \Rmnum 2 for $\sigma=+1$.
We give the results obtained with the present formalism based on the use of the Flatté parameterization, distinguishing the cases in which $X$ or $r_{\rm exp}$ is taken as input, in Secs.~\ref{sec.201008.1} and \ref{sec.211126.1}, respectively.
\subsection{Spectral density and its integration}
\label{sec.211106.2}
Here we use the spectral density function $\omega(E)$ of a near-threshold resonance, in our case either $f_0(980)$ or $a_0(980)$, as a way to calculate the compositeness of the meson-meson states in these resonances.
We follow the formalism of Ref.~\cite{Baru:2003qq} to which we refer for further details.
There the spectral density function $\omega(E)$ is introduced, and it provides the probability distribution function in energy for finding a bare elementary state in the continuum \cite{Bogdanova:1991zz}.
As a result, its integration around the $K\bar{K}$ threshold comprising the resonance signal, which we call $W_R$, is the probability for finding the bare state. Namely, $W_R$ is calculated as \cite{Baru:2003qq}
\begin{align}
\label{211013.1}
W_R&=\int_{-\Delta}^{+\Delta} dE\, \omega(E)~,\\
\omega(E)&=\frac{1}{2\pi}
\frac{\widetilde{\Gamma}_1+g_2\sqrt{m_KE}\theta(E)}{\left(E-E_f-\frac{1}{2}g_2\sqrt{-m_KE}\theta(-E)\right)^2+\frac{1}{4}\left(\widetilde{\Gamma}_1+g_2\sqrt{m_KE}\theta(E)\right)^2}~,\nonumber
\end{align}
with $\theta(E)$ the Heaviside step function.
In Ref.~\cite{Baru:2003qq} the parameter $\Delta$ was chosen to be $50$~MeV but, since an important dependence of the final value of $W_R$ on $\Delta$ is observed, we prefer to present the results for $W_R$ as a function of $\Delta$.
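As a minimal numerical sketch, the spectral density of Eq.~\eqref{211013.1} and its integral $W_R(\Delta)$ can be computed as follows; the Flatt\'e parameters are again illustrative and not the result of a fit:

```python
import math

# Assumed Flatte parameters in GeV (illustrative only).
m_K, E_f, G1t, g2 = 0.4957, 0.010, 0.100, 0.5

def w(E):
    """Spectral density omega(E) of Eq. (211013.1)."""
    if E >= 0.0:
        num = G1t + g2 * math.sqrt(m_K * E)
        den = (E - E_f)**2 + 0.25 * num**2
    else:
        num = G1t
        den = (E - E_f - 0.5 * g2 * math.sqrt(-m_K * E))**2 + 0.25 * G1t**2
    return num / (2.0 * math.pi * den)

def W_R(Delta, steps=20000):
    """Integral of omega(E) over [-Delta, +Delta], midpoint rule."""
    h = 2.0 * Delta / steps
    return sum(w(-Delta + (i + 0.5) * h) for i in range(steps)) * h

print("W_R(50 MeV) =", W_R(0.050))
```

Plotting $W_R(\Delta)$ for increasing $\Delta$ makes the dependence discussed above explicit.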
\subsection{Reinterpretation of the method of Sec.~\ref{sec.211010.1} for poles in the Riemann sheet \Rmnum{2}}
\label{sec.211110.1}
From the application of the Flatt\'e parameterization it has become clear that for a near-threshold pole in coupled channels lying in the RS \Rmnum{2} it is necessary to change the interpretation of $\Gamma_1$.
We have seen from the last line in Eq.~\eqref{211109.1} that the partial-decay width into the lighter channel is not directly $\Gamma_1$ but $\Gamma_1-2\Gamma_2$, and that the total width from the pole position should be compared with $\Gamma_1-\Gamma_2$.
Therefore, for a pole in the RS \Rmnum{2} near the heavier threshold the equation for the saturation of the total width, cf. Eq.~\eqref{equ:square1}, reads
\begin{align}
\label{211111.3}
\Gamma_{R}=|\gamma_{1}|^{2}\frac{p_{1}(m_{R}^{2})}{8\pi m^{2}_{R}}
-
\frac{|\gamma_{2}|^{2}}{16\pi^{2}} \int_{m_{1}+m_{2}}^{m_{R}+2\Gamma_{R} }dW\frac{p_{2}(W^{2})}{W^{2}}\frac{\Gamma_{R}}{(m_{R}-W)^{2}+\Gamma_{R}^2/4}~,
\end{align}
with no change in Eq.~\eqref{equ:square2} for saturating the total compositeness $X$. The latter equation is also reproduced here
\begin{equation}
\label{211111.4}
X=|\gamma_{1}|^{2}\Big{|} \frac{\partial G_{1}(s)}{\partial s}\Big{|}_{s=s_{R}}+|\gamma_{2}|^{2}\Big{|} \frac{\partial G_{2}(s)}{\partial s}\Big{|}_{s=s_{R}}\,.
\end{equation}
However, in applications of these ideas up to now, a ``standard'' plus sign is placed in front of $|\gamma_2|^2$ instead of the minus sign in Eq.~\eqref{211111.3}. Namely,
\begin{align}
\label{211128.1}
\Gamma_{R}=|\gamma_{1}|^{2}\frac{p_{1}(m_{R}^{2})}{8\pi m^{2}_{R}}
+\frac{|\gamma_{2}|^{2}}{16\pi^{2}} \int_{m_{1}+m_{2}}^{m_{R}+2\Gamma_{R} }dW\frac{p_{2}(W^{2})}{W^{2}}\frac{\Gamma_{R}}{(m_{R}-W)^{2}+\Gamma_{R}^2/4}~.
\end{align}
At the practical level, as long as $X_1\ll X_2$, the only change is in the value of $|\gamma_1|$, because the use of Eq.~\eqref{211128.1} would provide an ``effective'' value of this coupling by reabsorbing the effect of the $-2\Gamma_2$ subtraction in Eq.~\eqref{211111.3}.
This is clear because then $X$ is saturated almost completely by $X_2$ and this quantity alone fixes $|\gamma_2|$.
The use of Eq.~\eqref{211111.3} allows one to determine $|\gamma_1|$, which is certainly larger than the ``effective'' one deduced by fulfilling Eq.~\eqref{211128.1} with an intermediate plus sign.
However, the values of $X_2$, $\Gamma_2$, and the physical partial-decay width to channel 1, which are typically the most important pieces of information to infer the nature of the resonances, barely change.
Now, if $r_{\rm exp}$ is taken as input the equations that one has to fulfill are Eq.~\eqref{211111.3} and
\begin{align}
\label{211111.5}
2-r_{\rm exp}=|\gamma_{1}|^{2}\frac{p_{1}(m_{R}^{2})}{8\pi m^{2}_{R}\Gamma_R}~,
\end{align}
which are apparently very different from the ``standard'' Eq.~\eqref{211128.1} and
\begin{align}
\label{211128.2}
r_{\rm exp}=\frac{\Gamma_1}{\Gamma_R}~,
\end{align}
already written for a pole in the RS \Rmnum{3} in Sec.~\ref{sec.211010.1}.
However, they basically provide again the same results for $\Gamma_2$, $X_2$, and the physical partial-decay width to channel 1, while the use of Eqs.~\eqref{211128.1} and \eqref{211128.2} provides the already mentioned ``effective'' value for $|\gamma_1|$.
To see this, let us divide Eq.~\eqref{211111.3} by $\Gamma_R$ and write the following equations, equivalent to Eqs.~\eqref{211111.3} and \eqref{211111.5},
\begin{align}
\label{211111.6}
1&=\frac{\Gamma_1}{\Gamma_R}-\frac{\Gamma_2}{\Gamma_R}~,\\
2-r_{\rm exp}&=\frac{\Gamma_1}{\Gamma_R}~.\nonumber
\end{align}
Subtracting the first line from the second one we then have
\begin{align}
\label{211111.7}
1-r_{\rm exp}&= \frac{\Gamma_2}{\Gamma_R}~,
\end{align}
which fixes $|\gamma_2|$ as if Eq.~\eqref{211128.1} were used.
Afterwards, Eq.~\eqref{211111.3} is employed to calculate $|\gamma_1|$, while the use of Eq.~\eqref{211128.1} would provide the so-called ``effective'' value for $|\gamma_1|$, which would be smaller than the one obtained from Eq.~\eqref{211111.3}.
\section{Results and discussions using the total compositeness as input}\label{section1}
\label{sec.201008.1}
Here we apply the formalism derived in Secs.~\ref{sec.211010.1} and \ref{sec.201009.3} to study the nature of the resonances $f_0(980)$ and $a_0(980)$. In what follows, the former method, based on the saturation of $\Gamma_R$ and $X$, is denoted by S, while the latter, based on the use of a Flatt\'e parameterization, is called F. For each resonance pole we first apply the method S and then F.
\subsection{The $f_0(980)$ resonance}
\label{sec.201009.1}
Assuming given values for the total compositeness $X$ of the $f_{0}(980)$, varying it from $0.2$ to $0.8$ in steps of 0.2,
we obtain the couplings, partial-decay widths, and compositeness coefficients by solving Eq.~(\ref{equ:square2}) and Eq.~(\ref{equ:square1}).
As already mentioned, we consider the RS \Rmnum{2}, where $s_R$ of Eq.~\eqref{equ22} lies \cite{GarciaMartin:2011jx}, and the calculated results are shown in Table~\ref{tabff1}.
One can observe from this table that the compositeness coefficient $X_{2}$ is always much larger than $X_{1}$,
which means that the $K\bar{K}$ channel plays a much more important role than the $\pi\pi$ one in the structure of $f_{0}(980)$.
Related to this point, the $f_{0}(980)$ couples much more strongly to $K\bar{K}$ than to $\pi\pi$, in agreement with the fact that $f_{0}(980)$ sits very close to the $K\bar{K}$ threshold.
We also observe from Table~\ref{tabff1} that as the total compositeness $X$ increases
the physical partial-decay width to $\pi\pi$ ($\Gamma_{\pi\pi}$) decreases, while that to $K\bar{K}$ becomes larger.
A similar behavior has also been found for other heavy-quark resonances with open and near-threshold channels \cite{Du:2021bgb,Guo:2020vmu,Guo:2020pvt,Guo:2019kdc}.
In this regard, it follows from Table~\ref{tabff1} that
$\Gamma_{K\bar{K}}>\Gamma_{\pi\pi}$ for $X\gtrsim 0.6$,\footnote{This can be easily seen because $\Gamma_{K\bar{K}}>25$~MeV.} but as $X$ decreases $\Gamma_{\pi\pi}$ becomes larger than $\Gamma_{K\bar{K}}$.
To estimate the errors associated with the input values of the resonance pole mass and width, we proceed in this work similarly to Ref.~\cite{Kang:2016ezb}. Namely, we discretize the inputs $m_R$ and $\Gamma_R$ from the pole position at several points within the one-standard-deviation region around the central values to generate a data grid.
For each of the points in the grid we proceed to calculate the different outputs so that
their central values correspond to the mean values and
the errors to the square root of the variances. We have also checked that this procedure
is (of course) stable if the number of points in the
grid is increased. For the other input $X$ the variation in the results calculated for the different values of $X$ provides an estimate of this source of uncertainty.
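The grid procedure just described can be illustrated with a few lines of Python; here only $m_R$ is discretized and the output is $\Gamma_1$ of Eq.~\eqref{equ33}, with illustrative values for $|\gamma_1|$ and the pole parameters (a realistic application would also scan $\Gamma_R$ and any other input):

```python
import math
from statistics import mean, stdev

# Illustrative inputs in GeV (not the fitted values used in the text).
m_pi, gamma1 = 0.13957, 2.3     # pion mass and |gamma_1|
m_R0, dm = 0.996, 0.007         # central pole mass and its 1-sigma error

def Gamma1(m_R):
    """Partial width to pi pi, Eq. (equ33)."""
    p1 = math.sqrt(m_R**2 / 4.0 - m_pi**2)     # pi pi CM momentum
    return gamma1**2 * p1 / (8.0 * math.pi * m_R**2)

# Discretize m_R within one standard deviation and propagate.
grid = [Gamma1(m_R0 + dm * t / 5.0) for t in range(-5, 6)]
print("Gamma_1 = %.1f +- %.1f MeV" % (1e3 * mean(grid), 1e3 * stdev(grid)))
```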
\begin{table}[!htbp]
\begin{center}
\caption{{\small Method S applied to the resonance $f_0(980)$ with pole position in the RS \Rmnum{2} (column 2) from Ref.~\cite{GarciaMartin:2011jx}, Eq.~\eqref{equ22}: The couplings $|\gamma_i|$ (columns 3, 4), corresponding partial-decay widths $\Gamma_{i}$ (columns 5, 6), and individual compositeness coefficients $X_{i}$ (columns 7, 8) are calculated for $X=0.8,$ $0.6,$ $0.4$, $0.2$ (column 1).} }
\begin{tabular}{|cc|cccccc|}
\Xhline{1pt} ~~$X~~~$&~~~$\text{RS}$&~~~$|\gamma_{\pi\pi}|(\text{GeV})$~~~&$|\gamma_{K\bar{K}}|(\text{GeV})$~~~&$\Gamma_{1}(\text{MeV})$&~~~$\Gamma_{2}(\text{MeV})$&~~~$X_{\pi\pi}$&~~~$X_{K\bar{K}}$~~~\\
\Xhline{1pt} \multirow{1}*{0.80}&~~~$\text{\Rmnum{2}}$&~~~$2.24\pm0.20$~~~&$4.65\pm0.23$~~~&$97.2\pm16.9$&~~~$43.2\pm8.4$&~~~$0.038\pm0.007$&~~~$0.762\pm0.007$~~~\\
\Xhline{1pt} \multirow{1}*{0.60}&~~~$\text{\Rmnum{2}}$&~~~$2.11\pm0.19$~~~&$4.01\pm0.19$~~~&$86.1\pm14.8$&~~~$32.1\pm6.2$&~~~$0.033\pm0.006$&~~~$0.567\pm0.006$~~~\\
\Xhline{1pt} \multirow{1}*{0.40}&~~~$\text{\Rmnum{2}}$&~~~$1.97\pm0.17$~~~&$3.24\pm0.15$~~~&$75.0\pm12.9$&~~~$21.0\pm4.0$&~~~$0.029\pm0.005$&~~~$0.371\pm0.005$~~~\\
\Xhline{1pt}
\multirow{1}*{0.20}&~~~$\text{\Rmnum{2}}$&~~~$1.82\pm0.16$~~~&$2.23\pm0.09$~~~&$63.9\pm11.0$&~~~~$9.9\pm1.8$&~~~$0.025\pm0.004$&~~~$0.175\pm0.004$~~~\\
\Xhline{1pt}
\end{tabular}
\label{tabff1}
\end{center}
\end{table}
We now consider the application of the method F, take $X=0.2$ to 0.8 in steps of 0.2, and calculate the parameters characterizing the Flatt\'e formula, Eq.~\eqref{210703.1}, that is, the bare width to the lighter channel $\widetilde{\Gamma}_1$, the bare coupling squared $g_2$ to $K\bar{K}$ and $E_f$.
As typical outputs we provide $\Gamma_1$, $\Gamma_2$, $X_1$ and $X_2$.
For the pole position of the $f_0(980)$ we take Eq.~\eqref{equ22}, with the results given in Table~\ref{tabflatt1}.
Comparing the values of $X_2$ for the $f_0(980)$ between Table~\ref{tabflatt1} and the previous Table~\ref{tabff1} we see a good agreement, with the difference in the central values affecting only the third decimal figure. This close agreement can be explained, though not completely, by noticing that $X_2\gg X_1$, so that $X_2\approx X$, which fixes it to lie very close to the total compositeness. Regarding the $\Gamma_i$, they are very close between both tables for $X=0.2$, 0.4 and 0.6, with larger differences for $X=0.8$, but perfectly compatible within errors for all values of $X$. Let us notice that the method based on the Flatt\'e parameterization directly provides the $\Gamma_i$, since $\Gamma_1=\widetilde{\Gamma}_1\beta$ and $\Gamma_2=\Gamma_1-\Gamma_R$ for a pole in the RS \Rmnum{2}, as is the case here for the $f_0(980)$ poles in Eqs.~\eqref{equ22} and \eqref{211107.1}. However, the method S in Sec.~\ref{sec.211010.1} keeps some spurious dependence on the value given to $n$ in Eq.~\eqref{equ:square}.
\begin{table}[!htbp]
\begin{center}
\caption{ Method F applied to the resonance $f_0(980)$ with pole position in the RS $\text{\Rmnum{2}}$ (column 2) from Ref.~\cite{GarciaMartin:2011jx}, Eq.~\eqref{equ22}: The value of $X$ taken as input is given in the first column. We calculate the bare width $\widetilde{\Gamma}_1$ (column 3), the bare coupling $g_2$ (column 4), $E_{f}$ (column 5), $\Gamma_{1}$ (column 6), $\Gamma_{2}$ (column 7), $X_1$ (column 8), and $X_{2}$ (column 9). }
\begin{tabular}{|cc|ccccccc|}
\Xhline{1pt}
$X~$&~$\text{RS}$&$\widetilde{\Gamma}_{\pi\pi}$(MeV)&$g_2$&$E_f$(MeV)&$\Gamma_{1}$(MeV)&$\Gamma_{2}$(MeV)&$X_{\pi\pi}$&$X_{K\bar{K}}$\\
\Xhline{1pt}
\multirow{1}*{0.8}&$\text{\Rmnum{2}}$&$~948.3\pm383.1$&\!\!$10.01\pm4.03$&$-389.7\pm210.7$&$~84.4\pm15.3$&$~30.4\pm8.3$&$~0.033\pm0.006$&$~0.767\pm0.006$\\
\Xhline{1pt}
\multirow{1}*{0.6}&$\text{\Rmnum{2}}$&$200.3\pm38.5$&$1.63\pm0.23$&$-57.6\pm19.3$&$~80.4\pm14.4$&$~26.4\pm7.0$&$~0.031\pm0.006$&$~0.569\pm0.006$\\
\Xhline{1pt}
\multirow{1}*{0.4}&$\text{\Rmnum{2}}$&$113.2\pm20.1$&$0.66\pm0.07$&$-20.4\pm8.6$&$~73.2\pm12.9$&$~19.2\pm4.9$&$~0.028\pm0.005$&$~0.372\pm0.005$\\
\Xhline{1pt}
\multirow{1}*{0.2}&~$\text{\Rmnum{2}}$&$75.2\pm13.0$&$0.24\pm0.02$&$-4.2\pm5.3$&$~63.9\pm11.1$&$~9.9\pm2.4$&$~0.025\pm0.004$&$~0.175\pm0.004$\\
\Xhline{1pt}
\end{tabular}
\label{tabflatt1}
\end{center}
\end{table}
We now move on and consider the $f_0(980)$ pole position from Ref.~\cite{Guo:2012yt} given in Eq.~\eqref{211107.1}, and proceed similarly as done regarding the pole position in Eq.~\eqref{equ22}. Then, we reproduce the values of the mass and width of the resonance in Eq.~\eqref{211107.1} together with a value given for $X$. The results obtained are shown in Table~\ref{tab.211106.1}.
Comparing Tables~\ref{tabff1} and \ref{tab.211106.1} we observe that all the outputs are rather similar. In particular, the resulting values for $X_2$ are almost coincident in both tables, with $X_2\gg X_1$. However, we notice that while the central value of the width for the pole in Eq.~\eqref{equ22} is smaller than that for the pole in Eq.~\eqref{211107.1}, the calculated $\Gamma_{1}$ and $\Gamma_{2}$ in Table~\ref{tabff1} are larger than those in Table~\ref{tab.211106.1}. This indicates that the cancellation between widths in the difference $\Gamma_{1}-2\Gamma_{2}$, which gives the actual width to $\pi\pi$, is more important for the pole of Eq.~\eqref{equ22} than for the one in Eq.~\eqref{211107.1}.
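The relations between bare and dressed quantities used for a RS \Rmnum{2} pole ($\Gamma_1=\widetilde{\Gamma}_1\beta$, $\Gamma_2=\Gamma_1-\Gamma_R$, and the physical $\pi\pi$ width $\Gamma_1-2\Gamma_2$) reduce to simple arithmetic; the following sketch uses purely hypothetical numbers, since $\beta$ is not listed in the tables.

```python
def dressed_widths(Gamma1_bare, beta, Gamma_R):
    """For a pole in the RS II: dressed width Gamma_1 = beta * bare
    width, Gamma_2 = Gamma_1 - Gamma_R, and the physical pi-pi width
    Gamma_1 - 2*Gamma_2 (equivalently r_exp * Gamma_R, with
    r_exp = 2 - Gamma_1/Gamma_R)."""
    G1 = beta * Gamma1_bare
    G2 = G1 - Gamma_R
    return G1, G2, G1 - 2.0 * G2

# Hypothetical inputs (widths in MeV, beta dimensionless).
G1, G2, G_pipi = dressed_widths(100.0, 0.8, 54.0)
```

The round trip through $r_{\rm exp}=2-\Gamma_1/\Gamma_R$ reproduces the same physical $\pi\pi$ width, which is the consistency exploited in the text.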
\begin{table}[!htbp]
\begin{center}
\caption{{\small Method S applied to the resonance $f_0(980)$ with pole position in the RS \Rmnum{2} from Ref.~\cite{Guo:2012yt}, Eq.~\eqref{211107.1}. The various entries are the same as in Table~\ref{tabff1}.} \label{tab.211106.1}}
\begin{tabular}{|cc|cccccc|}
\Xhline{1pt}
~~$X~~~$&~~~$\text{RS}$&~~~$|\gamma_{\pi\pi}|(\text{GeV})$~~~&$|\gamma_{K\bar{K}}|(\text{GeV})$~~~&$\Gamma_{1}(\text{MeV})$&~~~$\Gamma_{2}(\text{MeV})$&~~~$X_{\pi\pi}$&~~~$X_{K\bar{K}}$~~~\\
\Xhline{1pt}
\multirow{1}*{0.80}&~~~$\text{\Rmnum{2}}$&~~~$2.09\pm0.26$~~~&$4.88\pm0.30$~~~&$86.5\pm20.8$&~~~~\,$30.5\pm10.0$&~~~$0.034\pm0.008$&~~~$0.766\pm0.008$~~~\\
\Xhline{1pt} \multirow{1}*{0.60}&~~~$\text{\Rmnum{2}}$&~~~$2.00\pm0.24$~~~&$4.21\pm0.26$~~~&$78.6\pm18.4$&~~~$22.6\pm7.4$&~~~$0.031\pm0.007$&~~~$0.569\pm0.007$~~~\\
\Xhline{1pt} \multirow{1}*{0.40}&~~~$\text{\Rmnum{2}}$&~~~$1.89\pm0.22$~~~&$3.40\pm0.20$~~~&$70.8\pm16.0$&~~~$14.1\pm4.8$&~~~$0.028\pm0.006$&~~~$0.372\pm0.006$~~~\\
\Xhline{1pt}
\multirow{1}*{0.20}&~~~$\text{\Rmnum{2}}$&~~~$1.79\pm0.20$~~~&$2.33\pm0.13$~~~&$62.9\pm13.7$&~~~~$6.9\pm2.2$&~~~$0.025\pm0.005$&~~~$0.175\pm0.005$~~~\\
\Xhline{1pt}
\end{tabular}
\end{center}
\end{table}
For the method F the results are given in Table~\ref{tab.211107.1}. We observe that the values of $X_2$ obtained are very similar to those given in Table~\ref{tab.211106.1} for the same input $X$, with $X_2\gg X_1$.
The output $\Gamma_2$ is smaller now than that in Table~\ref{tab.211106.1}, though within errors they are again compatible. This clearly indicates the compatibility between the two methods for this $f_0(980)$ pole too. We also point out that the bare parameters for $X=0.8$ in Table~\ref{tab.211107.1} are essentially undetermined, as indicated by the large intervals given between parentheses, though the physical outputs come out with uncertainties of similar size to those in the other rows of the table.
\begin{table}[!htbp]
\begin{center}
\caption{Method F applied to the resonance $f_0(980)$ with the pole position in the RS \Rmnum{2} from Ref.~\cite{Guo:2012yt}, Eq.~\eqref{211107.1}. The various entries are the same as in Table~\ref{tabflatt1}. The numbers between parentheses indicate the huge range of values taken by the corresponding quantity when varying $m_R$ and $\Gamma_R$ within errors.}
\label{tab.211107.1}
{ \begin{tabular}{|cc|ccccccc|}
\Xhline{1pt}
$X$&$\text{RS}$ &$\widetilde{\Gamma}_{\pi\pi}$(MeV) &$g_2$& $E_{f}$(MeV) &$\Gamma_{1}$(MeV) &$\Gamma_{2}$(MeV) &$X_{\pi\pi}$ &$X_{K\bar{K}}$\\
\Xhline{1pt}
\multirow{1}*{0.8}& $\text{\Rmnum{2}}$& $(430,3 \!\cdot\! 10^{6})$ &$(5.4,4 \!\cdot\! 10^4)$
&$(-160,-2 \!\cdot\! 10^{6})$& ~$71.6\pm18.2$ & $16.8\pm7.9$&~$0.028\pm0.007$&~$0.772\pm0.007$\\
\Xhline{1pt}
\multirow{1}*{0.6}& $\text{\Rmnum{2}}$&$218.4\pm51.9$ &\,$2.34\pm0.52$~&\!\!$-131.6\pm52.3$&~$68.8\pm16.9$&$12.8\pm7.0$&~$0.027\pm0.007$&~$0.573\pm0.007$\\
\Xhline{1pt}
\multirow{1}*{0.4}& $\text{\Rmnum{2}}$&$113.6\pm25.1$ &$0.83\pm0.13$ &$-52.6\pm20.7$&~$65.6\pm15.4$&~$9.6\pm5.1$&~$0.026\pm0.006$&~$0.374\pm0.006$\\
\Xhline{1pt}
\multirow{1}*{0.2}& $\text{\Rmnum{2}}$&~ $75.2\pm16.1$ &$0.27\pm0.03$ &$-24.2\pm11.8$&~$61.1\pm13.6$&~$5.1\pm2.6$&~$0.024\pm0.005$&~$0.176\pm0.005$\\
\Xhline{1pt}
\end{tabular}
}
\end{center}
\end{table}
\subsection{The $a_0(980)$ resonance}
\label{sec.201009.2}
Let us explore the compositeness of the $a_0(980)$ similarly as done above in Sec.~\ref{sec.201009.1} for the $f_0(980)$.
The poles that we are going to consider next for the $a_0(980)$ stem from a recent coupled-channel partial-wave analysis of antiproton-proton annihilation data in Ref.~\cite{CrystalBarrel:2019zqh}, where the pole parameters and the partial-decay widths of the $a_0(980)$ are discussed.
In the RS \Rmnum{2} Ref.~\cite{CrystalBarrel:2019zqh} reports the values
\begin{eqnarray}
\label{211009.1}
m_{a_0}=1004.1\pm 6.67\,\, \text{MeV},~ \Gamma_{a_0}=97.2\pm6.01\,\,\text{MeV},~ \Gamma_{K\bar{K}}/\Gamma_{\pi\eta}=(13.8\pm3.5)\,\,\%,
\end{eqnarray}
while in the RS \Rmnum{3} the same reference provides
\begin{eqnarray}\label{eq:ratioa0}
m_{a_0}=1002.4\pm 6.55\,\, \text{MeV},~ \Gamma_{a_0}=127.0\pm7.08\,\,\text{MeV},~ \Gamma_{K\bar{K}}/\Gamma_{\pi\eta}=(14.9\pm3.9)\,\,\%,
\end{eqnarray}
with the uncertainties added in quadrature.
\begin{table}[!htbp]
\begin{center}
\caption{{\small Method S applied for the resonance $a_0(980)$ with pole positions from Ref.~\cite{CrystalBarrel:2019zqh}, Eqs.~\eqref{211009.1} and \eqref{eq:ratioa0}, in different RSs (column 2): We take input values for $X$ (column 1), and predict $|\gamma_i|$ (columns 3, 4), $\Gamma_i$ (columns 5, 6), and $X_i$ (columns 7, 8).} }
\begin{tabular}{|cc|cccccc|}
\Xhline{1pt}
~~$X~~~$&~~~$\text{RS}$&~~~$|\gamma_{\pi\eta}|(\text{GeV})$~~~&$|\gamma_{K\bar{K}}|(\text{GeV})$~~~&$\Gamma_{1}(\text{MeV})$&~~~$\Gamma_{2}(\text{MeV})$&~~~$X_{\pi\eta}$&~~~$X_{K\bar{K}}$~~~\\
\Xhline{1pt}
\multirow{2}*{0.8}&~~~$\text{\Rmnum{2}}$&~~~$3.57\pm0.07$~~~&$5.15\pm0.04$~~~&$169.1\pm7.0$&~~~$71.9\pm4.9$&~~~$0.151\pm0.005$&~~~$0.649\pm0.005$~~~\\
&~~~$\text{\Rmnum{3}}$&~~~$2.30\pm0.09$~~~&$4.50\pm0.06$~~~&$70.4\pm5.5$&~~~$56.6\pm4.4$&~~~$0.064\pm0.006$&~~~$0.736\pm0.006$~~~\\
\Xhline{1pt}
\multirow{2}*{0.6}&~~~$\text{\Rmnum{2}}$&~~~$3.35\pm0.06$~~~&$4.37\pm0.03$~~~&$149.0\pm5.8$&~~~$51.8\pm5.8$&~~~$0.133\pm0.005$&~~~$0.467\pm0.005$~~~\\
&~~~$\text{\Rmnum{3}}$&~~~$2.56\pm0.07$~~~&$3.78\pm0.05$~~~&$86.9\pm4.9$&~~~$40.1\pm3.1$&~~~$0.079\pm0.005$&~~~$0.521\pm0.005$~~~\\
\Xhline{1pt}
\multirow{2}*{0.4}&~~~$\text{\Rmnum{2}}$&~~~$3.12\pm0.05$~~~&$3.41\pm0.02$~~~&$128.8\pm4.6$&~~~$31.6\pm2.2$&~~~$0.115\pm0.004$&~~~$0.285\pm0.004$~~~\\
&~~~$\text{\Rmnum{3}}$&~~~$2.79\pm0.06$~~~&$2.90\pm0.04$~~~&$103.5\pm4.5$&~~~$23.5\pm1.9$&~~~$0.094\pm0.005$&~~~$0.306\pm0.005$~~~\\
\Xhline{1pt}
\multirow{2}*{0.2}&~~~$\text{\Rmnum{2}}$&~~~$2.86\pm0.05$~~~&$2.05\pm0.01$~~~&$108.6\pm3.7$&~~~$11.4\pm0.8$&~~~$0.097\pm0.003$&~~~$0.103\pm0.003$~~~\\
&~~~$\text{\Rmnum{3}}$&~~~$3.01\pm0.06$~~~&$1.58\pm0.04$~~~&$120.0\pm4.5$&~~~$7.0\pm0.7$&~~~$0.109\pm0.005$&~~~$0.091\pm0.005$~~~\\
\Xhline{1pt}
\end{tabular}
\label{tabff4}
\end{center}
\end{table}
\begin{table}[!htbp]
\begin{center}
\caption{ Method F applied to the resonance $a_0(980)$ with pole positions from Ref.~\cite{CrystalBarrel:2019zqh}, Eqs.~\eqref{211009.1} and \eqref{eq:ratioa0}, in different RSs (column 2): We calculate the bare width $\widetilde{\Gamma}_1$ (column 3), the bare coupling squared $g_2$ (column 4), $E_{f}$ (column 5), $\Gamma_1$ (column 6), $\Gamma_2$ (column 7), $X_1$ (column 8) and $X_{2}$ (column 9). The value of $X$ taken as input is given in the first column. }
\begin{tabular}{|cc|ccccccc|}
\Xhline{1pt}
$X~$&$\text{RS}$&$\widetilde{\Gamma}_{\pi\eta}$(MeV)&~$g_2$&$E_{f}(\text{MeV})$&$\Gamma_{1}$(MeV)&$\Gamma_{2}$(MeV)&$X_{\pi\eta}$&$X_{K\bar{K}}$\\
\Xhline{1pt}
\multirow{2}*{0.8}&~$\text{\Rmnum{2}}$&~$740.5\pm42.2$~&$5.16\pm0.51$~&$-237.1\pm39.4$&~$157.2\pm8.8$&~$60.0\pm7.4$&~$0.141\pm0.007$&~$0.659\pm0.007$\\
&~$\text{\Rmnum{3}}$&~$35.9\pm7.3$~&$0.66\pm0.03$~&$49.3\pm4.6$&~$45.6\pm9.7$&~$81.4\pm9.3$&~$0.042\pm0.009$&~$0.758\pm0.009$\\
\Xhline{1pt}
\multirow{2}*{0.6}&~$\text{\Rmnum{2}}$&~$289.7\pm8.5$~&$1.54\pm0.06$~&$-61.6\pm10.0$&~$146.2\pm7.2$&~$49.0\pm5.6$&$0.131\pm0.005$&~$0.469\pm0.005$\\
&~$\text{\Rmnum{3}}$&~$59.9\pm5.6$~&$0.49\pm0.02$~&$39.2\pm4.3$&~$72.7\pm7.1$&~$54.3\pm6.2$&~$0.066\pm0.007$&~$0.534\pm0.007$\\
\Xhline{1pt}
\multirow{2}*{0.4}&~$\text{\Rmnum{2}}$&~$177.4\pm5.6$~&$0.64\pm0.01$~&$-18.2\pm5.8$&~$129.9\pm5.4$&~$32.7\pm3.5$&$0.116\pm0.004$&~$0.284\pm0.004$\\
&~$\text{\Rmnum{3}}$&~~~$85.2\pm4.7$~&$0.30\pm0.01$~&$28.6\pm4.1$&~$96.9\pm5.3$&~$30.1\pm3.4$&~$0.089\pm0.006$&~$0.311\pm0.006$\\
\Xhline{1pt}
\multirow{2}*{0.2}&~$\text{\Rmnum{2}}$&~$120.2\pm3.8$~&$0.18\pm0.002$~&$4.0\pm4.3$&~$109.9\pm3.8$&~$12.7\pm1.4$&~$0.099\pm0.003$&~$0.101\pm0.003$\\
&~$\text{\Rmnum{3}}$&~$113.6\pm4.6$~&$0.10\pm0.01$~&$16.7\pm4.0$&~$118.6\pm4.6$&~$8.4\pm1.1$&~$0.108\pm0.005$&~$0.092\pm0.005$\\
\Xhline{1pt}
\end{tabular}
\label{tabflatt3}
\end{center}
\end{table}
Let us denote by 1 the lighter channel $\pi\eta$, and by 2 the heavier one $K\bar{K}$.
Combining Eq.~(\ref{equ:square2}) for the total compositeness and Eq.~(\ref{equ:square1}) for the full width, given input values for the former between 0.2 and 0.8 in steps of 0.2, we derive a series of partial compositeness coefficients and decay widths for the $a_{0}(980)$ in Table~\ref{tabff4} by applying the method S.\footnote{The same remark as for the $f_0(980)$ in footnote~\ref{foot.211008.1} also applies to the $a_0(980)$ when its pole is taken in the RS \Rmnum{2} for applying Eq.~\eqref{equ1} to calculate $X_2$.
The point is that the $a_0(980)$ mass in Eq.~\eqref{211009.1} is clearly larger than the $K\bar{K}$ threshold, but the width of the resonance is much larger than the difference between $m_{a_0}$ and the $K\bar{K}$ threshold.}
We find that $X_2$ is rather similar for the RS \Rmnum{2} and \Rmnum{3} calculations. We also obtain that $X_2\gg X_1$, except for $X=0.2$ in which case they are quite close to each other.
This tells us that the $K\bar{K}$ component typically dominates over the $\pi\eta$ one for the resonance $a_0(980)$.
The results corresponding to the method F for the $a_0(980)$ case are organized in Table~\ref{tabflatt3}.
One then finds that the values of $X_2$ obtained in Tables~\ref{tabflatt3} and \ref{tabff4} are very close independently of the RS
in which the poles lie.
It turns out that $X_2$ is typically much larger than $X_1$ except when $X$ becomes small, like $X=0.2$ in Table~\ref{tabflatt3}, and then both $X_i$ are rather close to each other.
Regarding the $\Gamma_i$, it is clear that for the pole in the RS \Rmnum{2} the values obtained now in Table~\ref{tabflatt3} are remarkably similar to those in Table~\ref{tabff4}. However, they clearly differ for $X=0.8$ in the RS \Rmnum{3} case.
As indicated above, one should distinguish between the bare and dressed parameters in a Flatt\'e parameterization.
This is reflected in the fact that the bare width $\widetilde{\Gamma}_1$ can be much larger than the total width $\Gamma_R$.
This is particularly true when the pole position is set to lie in the RS \Rmnum{2}, as is clear from Tables~\ref{tabflatt1}, \ref{tab.211107.1} and \ref{tabflatt3}.
In addition, one has to properly relate $\Gamma_1$ and $\Gamma_{\pi\pi}$, cf.~Eq.~\eqref{211110.1}.
Nonetheless, these important points have been overlooked in the literature.
Before finishing this section we should mention that in some studies the $a_0(980)$ lies in the RS \Rmnum{4}, so that there is no interval along the physical real-$s$ axis within the radius of convergence of the Laurent series around the $a_0(980)$ pole.
In such a scenario it is not justified to apply Eq.~\eqref{equ1} for the calculation of $X_2$, as discussed in detail in Ref.~\cite{Guo:2015daa}, nor is it clear how to connect the $\Gamma_i$ with the physical partial-decay widths. It then follows that our method cannot be applied and the conclusions thereof do not hold.
In the studies based on unitarization of SU(3) and U(3) Chiral Perturbation Theory it is observed that the pole for the $a_0(980)$ appears in the RS \Rmnum{2} if only tree-level amplitudes are kept.
This is necessarily the case when unitarizing the leading order amplitudes \cite{Oller:1998zr,Oller:1997ti,Oller:1999ag,Guo:2016zep}.
However, once loop contributions are accounted for the pole moves to the RS \Rmnum{4} \cite{Guo:2011pa,Guo:2012yt,Guo:2016zep}.
This is also the case in the $K$-matrix analysis of Ref.~\cite{Dudek:2016cru}.
The more recent studies among those quoted here \cite{Guo:2016zep,Dudek:2016cru} also reproduce the energy levels from lattice QCD simulations.
When the $a_0(980)$ lies in the RS \Rmnum{4} it manifests itself on the physical real-$s$ axis as a strong cusp effect.
\section{Results and discussions using the branching ratio $r_{\rm exp}$ as input}
\label{sec.211126.1}
Here, we directly explore the more stringent scenario of taking $r_{\rm exp}$ as input, together with the usual reproduction of the pole position $E_R$, in order to fix all the parameters in both the S and F methods.
The direct consideration of $r_{\rm exp}$ as input is an interesting check on the consistency of interpreting the $\Gamma_i$, calculated from the pole parameters, in connection with the experimental $r_{\rm exp}$ for a pole lying in the RS \Rmnum{2}.
As shown below, the resulting physical partial-decay widths take values within meaningful ranges, being positive
and smaller than the total width, while the $X_i$ and their sum $X$ also lie within the allowed interval $[0,1]$.\footnote{Otherwise, if $r_{\rm exp}$ is not correctly interpreted as proposed for a RS \Rmnum{2} pole, there is no solution within the method F developed in Sec.~\ref{sec.201009.3}.}
This situation is similar to the one obtained before when taking $X$ as input,
where all the Flatt\'e parameters are fixed without the need to connect the $\Gamma_i$ with $r_{\rm exp}$, which comes out as an output.
We also show below that the procedure of providing $r_{\rm exp}$ as input is not completely satisfactory, because the output value for the renormalized coupling squared to $K\bar{K}$, $|\gamma_2|^2$, can be very sensitive to small variations in the input. As a result, the calculation of $X_2$ by using Eq.~\eqref{equ1} is not very accurate, particularly for the $f_0(980)$. We can circumvent this limitation by using the method of the spectral function $\omega(E)$ to estimate $X$ with better precision. With this choice we also update the values of the resonance inputs compared with the original Ref.~\cite{Baru:2003qq}, in which the spectral density function was applied for the first time to calculate the compositeness of the $f_0(980)$ and $a_0(980)$ resonances.
\subsection{The $f_0(980)$ resonance}
\label{sec.211126.2}
Now, let us directly combine the knowledge of $E_R$ with the input for the branching ratio
$r_{\rm exp}$, $r_{\rm exp}=0.52\pm 0.12$ \cite{Aubert:2006nu} and $r_{\rm exp}=0.75^{+0.11}_{-0.13}$ \cite{BES:2005iaq}, as well as $r_{\rm exp}=0.68$ from the theoretical analysis of Ref.~\cite{Oller:1997ti}.
We start by applying the method S.
It is clear that given $r_{\rm exp}$ and $\Gamma_{R}$ one can solve for the couplings $|\gamma_i|$, $i=1,\,2$, and calculate the partial compositeness coefficients $X_i$.
Since the $f_0(980)$ poles that we are considering lie in the RS \Rmnum{2}, we have $\Gamma_{\pi\pi}=(2-r_{\rm exp})\Gamma_{R}$ and $\Gamma_{K\bar{K}}=(1-r_{\rm exp})\Gamma_{R}$, and similarly for the RS \Rmnum{2} pole of the $a_0(980)$, $\Gamma_{\pi\eta}=(2-r_{\rm exp})\Gamma_{R}$. For the $a_0(980)$ pole in the RS \Rmnum{3} we have instead $\Gamma_{\pi\eta}=r_{\rm exp}\Gamma_{R}$. By combining Eqs.~(\ref{equ1}), (\ref{equ33}) and (\ref{equ:square}),
the value of the compositeness coefficients $X_{1}$ and $X_{2}$ can be written as
\begin{eqnarray}
X_{1}&=&\frac{8\pi (2-r_{\rm exp})\Gamma_{R} m_R^2}{p_1(m_R^2)} \left|\frac{\partial G_{1}(s)}{\partial s}\right|_{s=s_{R}},~\text{RS \Rmnum {2}}~,\\
X_{1}&=&\frac{8\pi r_{\rm exp}\Gamma_{R} m_R^2}{p_1(m_R^2)} \left|\frac{\partial G_{1}(s)}{\partial s}\right|_{s=s_{R}},~\text{RS \Rmnum {3}}~,\nonumber\\
X_{2}&=&\frac{16\pi^2 (1-r_{\rm exp})\Gamma_{R}}{\int_{m_{1}+m_{2}}^{m_{R}+2\Gamma_{R} }dW\frac{p(W^{2})/W^{2}}{(m_{R}-W)^{2}+\Gamma_{R}^2/4}}
\left|\frac{\partial G_{2}(s)}{\partial s}\right|_{s=s_{R}}.
\end{eqnarray}
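A minimal numerical sketch of these relations: the partial widths follow the RS-dependent formulas above (with $\Gamma_2=(1-r_{\rm exp})\Gamma_R$ in both cases, as in the $X_2$ expression), and the integral in the $X_2$ denominator is evaluated with a simple midpoint rule, taking $p(W^2)=\sqrt{W^2/4-m_K^2}$ for the equal-mass $K\bar{K}$ channel as an assumption. All numbers used below are illustrative.

```python
import math

def partial_widths(Gamma_R, r_exp, rs):
    """Gamma_1 (lighter channel) and Gamma_2 (K Kbar) from the total
    width and r_exp, for a pole in the RS II or RS III."""
    if rs == 2:
        G1 = (2.0 - r_exp) * Gamma_R
    elif rs == 3:
        G1 = r_exp * Gamma_R
    else:
        raise ValueError("only RS II and RS III are covered here")
    return G1, (1.0 - r_exp) * Gamma_R

def x2_denominator(m_R, Gamma_R, m_K, n_steps=2000):
    """Midpoint-rule evaluation of the integral in the X_2 formula,
    int_{2 m_K}^{m_R + 2 Gamma_R} dW [p(W^2)/W^2] /
    ((m_R - W)^2 + Gamma_R^2/4), with everything in GeV units."""
    a, b = 2.0 * m_K, m_R + 2.0 * Gamma_R
    h = (b - a) / n_steps
    total = 0.0
    for i in range(n_steps):
        W = a + (i + 0.5) * h
        p = math.sqrt(max(W * W / 4.0 - m_K * m_K, 0.0))
        total += h * (p / (W * W)) / ((m_R - W) ** 2 + Gamma_R ** 2 / 4.0)
    return total
```

The integrand is regular for $\Gamma_R>0$, so the result is stable against refining the grid.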
The results are organized in Table~\ref{tabff3}, with $m_R$ and $\Gamma_R$ taken from Eq.~\eqref{equ22}, corresponding to the determination in Ref.~\cite{GarciaMartin:2011jx}. For this and the rest of the tables in this section the error bars given for $X_2$ and $X$ also contain the propagation of the error in the determination of $r_{\rm exp}$ from Refs.~\cite{Aubert:2006nu,BES:2005iaq}.
It follows from this table that $X_2$ is affected by large error bars, of around 50\%, so that the resulting $X$ is not pinned down accurately and the calculation is rather indicative. Only for $r_{\rm exp}=0.52\pm 0.12$ \cite{Aubert:2006nu} can $X$ be larger than 0.5 within errors, with $X=0.49\pm 0.20$. For the other experimental value, $r_{\rm exp}=0.75^{+0.11}_{-0.13}$ \cite{BES:2005iaq}, the compositeness is $X=0.27\pm 0.11$, smaller than 0.5.
We show in the left panel of Fig.~\ref{pic_X_r} the total compositeness $X$ as a function of $r_{\rm exp}$ for the $f_0(980)$ with the method S, and employing the central values for $m_R$ and $\Gamma_R$ for the different poles considered. Thus, the requirement that $r_{\rm exp}>0.4$ implies that $X\lesssim 0.6$ for the pole in Eq.~\eqref{equ22} in the RS \Rmnum{2}.
The linear decrease of $X$ with $r_{\rm exp}$ is also clear from Fig.~\ref{pic_X_r}.
This can be easily understood by noticing that both the
partial-decay widths and the partial compositeness coefficients are proportional to $|\gamma_i|^2$, which can in turn be written as $\Gamma_i/\theta_i$. For given values of the mass and width of the resonance, $\theta_i$ is just a measure of the available phase space for the decay to channel $i$, and then $\theta_2\ll \theta_1$. In this way, for a pole in the RS \Rmnum{3},
\begin{align}
\label{211103.1}
X&=\frac{\Gamma_R r_{\rm exp}}{\theta_1}\left|\frac{\partial G_1}{\partial s}\right|_{s=s_R}
+\frac{\Gamma_R (1-r_{\rm exp})}{\theta_2}\left|\frac{\partial G_2}{\partial s}\right|_{s=s_R}\\
&=\frac{\Gamma_R }{\theta_2}\left|\frac{\partial G_2}{\partial s}\right|_{s=s_R}
-r_{\rm exp}\Gamma_R\left(\frac{1}{\theta_2}\left|\frac{\partial G_2}{\partial s}\right|_{s=s_R}
-\frac{1}{\theta_1}\left|\frac{\partial G_1}{\partial s}\right|_{s=s_R}\right)~.\nonumber
\end{align}
This is a linear dependence with $r_{\rm exp}$, and the coefficient multiplying the latter is negative because both $1/\theta_2$ and $\left|\partial G_2/\partial s\right|_{s_R}$ are basically proportional to $|s_R/4-m_K^2|^{-1/2}$, cf. Eq.~\eqref{211106.1}.\footnote{Eq.~\eqref{211103.1} was written for a pole in the RS \Rmnum{3}.
For a pole lying in the RS \Rmnum{2} one has to take into account that $\Gamma_1=\Gamma_R(2-r_{\rm exp})$ and, then, instead of Eq.~\eqref{211103.1} we have
\begin{align}
\label{211122.1}
X&=2\frac{\Gamma_R }{\theta_1}\left|\frac{\partial G_1}{\partial s}\right|_{s=s_R}
+\frac{\Gamma_R }{\theta_2}\left|\frac{\partial G_2}{\partial s}\right|_{s=s_R}
-r_{\rm exp}\Gamma_R\left(\frac{1}{\theta_2}\left|\frac{\partial G_2}{\partial s}\right|_{s=s_R}
+\frac{1}{\theta_1}\left|\frac{\partial G_1}{\partial s}\right|_{s=s_R}\right)~,
\end{align}
so that the linear decrease of $X$ with $r_{\rm exp}$ follows.
}
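The linear dependence of $X$ on $r_{\rm exp}$ derived above (for RS \Rmnum{3} poles and, in the footnote, for RS \Rmnum{2} ones) can be encoded directly: the phase-space factors and derivatives of the $G_i$ functions enter only through the two constants $a_i=\Gamma_R|\partial G_i/\partial s|_{s_R}/\theta_i$. The numerical values below are purely illustrative placeholders.

```python
def total_compositeness(r_exp, Gamma_R, th1, th2, dG1, dG2, rs=3):
    """X(r_exp) for a two-channel resonance: a_i collects
    Gamma_R * |dG_i/ds|_{s_R} / theta_i; a RS III pole gives
    a2 + r*(a1 - a2), a RS II pole gives 2*a1 + a2 - r*(a1 + a2)."""
    a1 = Gamma_R * dG1 / th1
    a2 = Gamma_R * dG2 / th2
    if rs == 3:
        return a2 + r_exp * (a1 - a2)
    return 2.0 * a1 + a2 - r_exp * (a1 + a2)
```

Since $\theta_2\ll\theta_1$ implies $a_2>a_1$, the slope in $r_{\rm exp}$ is negative in both cases, as stated in the text.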
\begin{table}[!htbp]
\begin{center}
\caption{ Method S applied to the resonance $f_0(980)$ with pole position in the RS \Rmnum{2} (column 2) from Ref.~\cite{GarciaMartin:2011jx}, Eq.~\eqref{equ22}: By reproducing values given for $r_{\rm exp}$ (column 1), we calculate $|\gamma_i|$ (columns 3, 4), $\Gamma_{i}$ (columns 5, 6), and $X_{i}$ (columns 7, 8).}
\begin{tabular}{|cc|cccccc|}
\Xhline{1pt}
~~$r_{\rm
exp}$&~~~$\text{RS}$&~~~$|\gamma_{1}|(\text{GeV})$~~~&$|\gamma_{2}|(\text{GeV})$~~~&$\Gamma_{1}(\text{MeV})$&~~~$\Gamma_{2}(\text{MeV})$&~~~$X_{\pi\pi}$&~~~$X_{K\bar{K}}$~~~\\
\Xhline{1pt}
\multirow{1}*{0.52~\cite{Aubert:2006nu}}&~~~$\text{\Rmnum{2}}$&~~~$2.03\pm0.18$~~~&$3.62\pm0.32$~~~&$79.9\pm14.5$&~~~$25.9\pm4.7$&~~~$0.031\pm0.006$&~~~$0.46 \pm 0.20$~~~\\
\Xhline{1pt} \multirow{1}*{0.68~\cite{Oller:1997ti}}&~~~$\text{\Rmnum{2}}$&~~~$1.92\pm0.17$~~~&$2.95\pm0.26$~~~&$71.3\pm13.0$&~~~$17.3\pm3.1$&~~~$0.028\pm0.005$&~~~$0.31\pm0.13 $~~~\\
\Xhline{1pt}
\multirow{1}*{0.75~\cite{BES:2005iaq}}&~~~$\text{\Rmnum{2}}$&~~~$1.87\pm0.17$~~~&$2.61\pm0.23$~~~&$67.5\pm12.3$&~~~$13.5\pm2.5$&~~~$0.026\pm0.005$&~~~$0.24 \pm 0.11$~~~\\
\Xhline{1pt}
\end{tabular}
\label{tabff3}
\end{center}
\end{table}
\begin{table}[!htbp]
\begin{center}
\caption{ Method F applied to the resonance $f_0(980)$ with the pole position in the RS \Rmnum{2} from Ref.~\cite{GarciaMartin:2011jx}, Eq.~\eqref{equ22}: The branching ratio $r_{\text{exp}}$ is taken as input (column 1).
We then calculate the bare partial-decay width $\widetilde{\Gamma}_1$ (column 2), the bare coupling squared $g_2$ (column 3), $E_f$ (column 4), the renormalized $\Gamma_1$ (column 5) and $\Gamma_2$ (column 6), and the partial compositeness coefficients $X_1$ (column 7) and $X_2$ (column 8). The numbers between parentheses indicate large ranges of the corresponding quantities when varying $m_R$ and $\Gamma_R$ within errors. }
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}|c|ccccccc|}
\Xhline{1pt}
$r_{\text{exp}}$ &$\widetilde{\Gamma}_{\pi\pi}\,(\text{MeV})$&$g_2$ &$E_{f}\,(\text{MeV})$& $\Gamma_{1}\,(\text{MeV})$ &$\Gamma_{2}\, (\text{MeV})$ &$X_{\pi\pi}$ &$X_{K\bar{K}}$\\
\Xhline{1pt}
\multirow{1}*{0.52 \cite{Aubert:2006nu}} &$(71, 2623)$
&$(0.36, 26.8)$
&$(-21.2, 1167.2)$ &$79.9\pm14.5$ &$25.9\pm4.7$ &$0.030\pm0.006$ &$0.48\pm 0.22$\\
\Xhline{1pt}
\multirow{1}*{0.68 \cite{Oller:1997ti}} &$113.8\pm38.3$ &$0.69\pm0.45$ &$(-66.7, 6.3)$ &$71.3\pm13.0$ &$17.3\pm3.1$ &$0.028\pm0.005$ &$0.35\pm0.18$\\
\Xhline{1pt}
\multirow{1}*{0.75 \cite{BES:2005iaq}} &$89.4\pm20.4$ &$0.41\pm0.17$ &$(-42.9, 7.7)$ &$67.5\pm12.3$ &$13.5\pm2.5$ &$0.026\pm0.005$ &$0.26\pm0.14$\\
\Xhline{1pt}
\end{tabular*}
\label{tab.211111.1}
\end{center}
\end{table}
\begin{center}
\begin{figure}[!htbp]
\centering
\caption{ The total compositeness $X$ is plotted as a function of the input branching ratio ($r_\text{exp}$) when applying the method S. Panel (a) corresponds to the $f_0(980)$: one line is for the pole from Ref.~\cite{GarciaMartin:2011jx}, Eq.~\eqref{equ22}, and the other for the pole of Ref.~\cite{Guo:2012yt}, Eq.~\eqref{211107.1}. Panel (b) corresponds to the $a_0(980)$: The poles for the $a_0(980)$ are given in Eqs.~\eqref{211009.1} and \eqref{eq:ratioa0} for the RSs \Rmnum{2} and \Rmnum{3}, respectively, and they are taken from Ref.~\cite{CrystalBarrel:2019zqh}.} \label{pic_X_r}
\begin{minipage}[t]{1\linewidth}
\centering
\subfigure[]{
\includegraphics[width=0.45\textwidth]{pic_X_r1.pdf}}
\hspace{0.45in}
\subfigure[]{
\includegraphics[width=0.45\textwidth]{pic_X_r2.pdf}}
\end{minipage}
\end{figure}
\end{center}
Now we apply the method F with $r_{\rm exp}$ as input to the same $f_0(980)$ pole given in Eq.~\eqref{equ22}, and the results are presented in Table~\ref{tab.211111.1}. In columns 2--4 we provide the resulting parameters characterizing the Flatt\'e parameterization, and in the rest of the columns we give the outputs in the form of the partial-decay widths $\Gamma_i$ and partial compositeness coefficients $X_i$. We notice that for the smallest
$r_{\rm exp}=0.52$ \cite{Aubert:2006nu} the bare parameters are much more poorly determined than in the other cases. Nonetheless, the outputs have uncertainties of similar size as for the other values taken for $r_{\rm exp}$. The resulting values of $X_2$ in Tables~\ref{tab.211111.1} and \ref{tabff3} turn out to be remarkably close to each other, clearly showing the compatibility between the methods F and S.
Let us compare Table~\ref{tab.211111.1} with Table~\ref{tabflatt1}, where $X$ is taken as input and the interpretation of the $\Gamma_i$ for a pole in the RS \Rmnum{2}, according to Eq.~\eqref{211110.1}, is not used to fix the Flatt\'e parameterization.
One can then observe that similar values of $\Gamma_{K\bar{K}}$, which fix $r_{\rm exp}=1-\Gamma_{K\bar{K}}/\Gamma_R$, lead to close values of $X_2$ in both tables. This agreement is of course an indication that the equations are correctly solved,
because the same results are obtained by taking corresponding values of $X$ or $r_{\rm exp}$.
\begin{table}[!htbp]
\begin{center}
\caption{{\small Method S applied to the resonance $f_0(980)$ with pole position in the RS \Rmnum{2} from Ref.~\cite{Guo:2012yt}, Eq.~\eqref{211107.1}. The various entries are the same as in Table~\ref{tabff3}.} \label{tab.211106.2}}
\begin{tabular}{|cc|cccccc|}
\Xhline{1pt}
~~$r_{\rm exp}$&~~~$\text{RS}$&~~~$|\gamma_{\pi\pi}|(\text{GeV})$~~~&$|\gamma_{K\bar{K}}|(\text{GeV})$~~~&$\Gamma_{1}(\text{MeV})$&~~~$\Gamma_{2}(\text{MeV})$&~~~$X_{\pi\pi}$&~~~$X_{K\bar{K}}$~~~\\
\Xhline{1pt}
\multirow{1}*{0.52~\cite{Aubert:2006nu}}&~~~$\text{\Rmnum{2}}$&~~~$2.05\pm0.22$~~~&$4.71\pm0.70$~~~&$82.9\pm18.2$&~~~$26.9\pm5.9$&~~~$0.033\pm0.007$&~~$0.72\pm 0.34 $~~~\\
\Xhline{1pt} \multirow{1}*{0.68~\cite{Oller:1997ti}}&~~~$\text{\Rmnum{2}}$&~~~$1.94\pm0.21$~~~&$3.85\pm0.58$~~~&$73.9\pm16.2$&~~~$17.9\pm3.9$&~~~$0.029\pm0.006$&~~~$0.48 \pm0.23 $~~~\\
\Xhline{1pt}
\multirow{1}*{0.75~\cite{BES:2005iaq}}&~~~$\text{\Rmnum{2}}$&~~~$1.88\pm0.21$~~~&$3.40\pm0.51$~~~&$70.0\pm15.3$&~~~$14.0\pm3.1$&~~~$0.027\pm0.006$&~~~$0.38 \pm0.18 $~~~\\
\Xhline{1pt}
\end{tabular}
\end{center}
\end{table}
\begin{table}[!htbp]
\begin{center}
\caption{{\small Method F applied to the resonance $f_0(980)$ with the pole position in the RS \Rmnum{2} from Ref.~\cite{Guo:2012yt}, Eq.~\eqref{211107.1}: The entries are the same as those in Table~\ref{tab.211111.1}, except for the values of $r_{\rm exp}$ in the first column, because solutions are found only for $r_{\rm exp}>0.82$. The values of $r_{\rm exp}$ taken do not come from any experiment and are considered for theoretical reasons.}
\label{tabflatt2}}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}|c|ccccccc|}
\Xhline{1pt}
$r_{\text{exp}}$& $\widetilde{\Gamma}_{\pi\pi} (\text{MeV})$ &$g_2$& $E_{f} (\text{MeV})$ & $\Gamma_{1} (\text{MeV})$ &$\Gamma_{2} (\text{MeV})$ &$X_{\pi\pi}$ &$X_{K\bar{K}}$\\
\Xhline{1pt}
\multirow{1}*{0.86} &$(45.8, 2632)$ &$(0.13, 35.6)$ &$(-2358, 0.3)$ &$63.8\pm14.0$ &$7.8\pm1.7$ &$0.026\pm0.005$ &$0.329\pm0.140$\\
\Xhline{1pt}
\multirow{1}*{0.91} &$(41.9, 407.6)$ &$(0.08, 6.64)$ &$(-429.4, 1.3)$ &$61.0\pm13.4$ &$5.1\pm1.1$ &$0.025\pm0.005$ &$0.239\pm0.123$\\
\Xhline{1pt}
\multirow{1}*{0.96} &$66.1\pm13.0$ &$0.16\pm0.12$ &$-20.8\pm14.4$ &$58.2\pm12.8$ &$2.3\pm0.5$ &$0.023\pm0.005$ &$0.105\pm0.058$\\
\Xhline{1pt}
\end{tabular*}
\end{center}
\end{table}
Let us move on and consider the $f_0(980)$ pole position from Ref.~\cite{Guo:2012yt}, applying first the method S, with the results given in Table~\ref{tab.211106.2}.
We already noticed when comparing Tables~\ref{tabff1} and \ref{tab.211106.1} that the $\Gamma_i$ were larger for the former, despite the width of the pole in Eq.~\eqref{equ22} being smaller.
By comparing Tables~\ref{tabff3} and \ref{tab.211106.2}, with $r_{\rm exp}$ as input, we observe that the partial-decay widths $\Gamma_i$ are close to each other and that the differences translate into the central values of $X_2$, which are always larger in Table~\ref{tab.211106.2} than in Table~\ref{tabff3}. Thus, a somewhat more prominent role of the $K\bar{K}$ component arises for the $f_0(980)$ pole in Eq.~\eqref{211107.1}.
When applying the method F to the $f_0(980)$ pole from Ref.~\cite{Guo:2012yt}, Eq.~\eqref{211107.1}, there are no acceptable solutions for the input values of $r_{\rm exp}$ from Refs.~\cite{Aubert:2006nu,BES:2005iaq,Oller:1997ti}, with solutions found only for larger values of $r_{\rm exp}$.
Indeed, we see from Eq.~\eqref{211111.1} that the central value of $r_{\rm exp}$ calculated from the information given in Ref.~\cite{Guo:2012yt} is $0.91$, though the estimated error is too large to extract more stringent quantitative conclusions. Our results are given in Table~\ref{tabflatt2}, where $X_2$ varies by around a factor of 3 when changing $r_{\rm exp}$ by less than 10\%, from $r_{\rm exp}=0.96$ to $0.86$. For similar values of $\Gamma_{K\bar{K}}$ the present results and those in Table~\ref{tab.211107.1} agree well within errors, and give rise to rather small values of $X_2$.
Interestingly, we can check our procedure with the full characterization of the $f_0(980)$ pole given in Ref.~\cite{Guo:2012yt}, in which the residues, in addition to the pole position, are given. In this reference one has $|\gamma_1|=1.8^{+0.2}_{-0.3}$~GeV and $|\gamma_2/\gamma_1|=2.6^{+0.2}_{-0.3}$.
Precisely, the input value for $r_{\rm exp}=2-\frac{\Gamma_1}{\Gamma_R}=0.909$, cf. Eq.~\eqref{211111.1}, is equivalent to providing the central value of $|\gamma_1|$ as input, and solving the equation
\begin{align}
\label{211127.1}
|\gamma_1|^2&=\beta g_1=\beta \frac{\widetilde{\Gamma}_1 8\pi m_R^2}{p_1(m_R)}~,
\end{align}
instead of Eq.~\eqref{211110.2}.
The result is the same as the one given in the third row of Table~\ref{tabflatt2}, corresponding to a value $|\gamma_2|=\sqrt{32 \pi m_K^2 \beta g_2}=2.77$~GeV (which straightforwardly translates into the central value for $X_2$ by using Eq.~\eqref{equ1}). We notice that this value is a factor of 1.7 smaller than the one reported in Ref.~\cite{Guo:2012yt}. Its square is therefore a factor of 2.84 smaller than the value $|\gamma_2|^2=(4.68$~GeV)$^2$ actually obtained in Ref.~\cite{Guo:2012yt} by solving for the residues of the partial-wave amplitudes.
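As a purely numerical cross-check of Eq.~\eqref{211127.2} (illustrative only: the kaon-mass value is our assumption, and $\beta g_2$ is solved backwards from the quoted $|\gamma_2|=2.77$~GeV rather than taken from the fit):

```python
import math

m_K = 0.4957  # kaon mass in GeV (assumed numerical value)

# Solve beta*g_2 backwards from the quoted |gamma_2| = 2.77 GeV using
# Eq. (211127.2): |gamma_2|^2 = 32*pi*m_K^2*beta*g_2, then round-trip it.
beta_g2 = 2.77 ** 2 / (32.0 * math.pi * m_K ** 2)
gamma2 = math.sqrt(32.0 * math.pi * m_K ** 2 * beta_g2)

# Ratio to the residue |gamma_2| = 4.68 GeV of Ref. Guo:2012yt: a factor
# of ~1.7 for the coupling and ~2.8 for its square, as discussed above.
ratio = 4.68 / gamma2
print(round(gamma2, 2), round(ratio, 2), round(ratio ** 2, 2))  # → 2.77 1.69 2.85
```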
However, instead of $|\gamma_1|$ we can take $|\gamma_2|$ as input and solve the equation
\begin{align}
\label{211127.2}
|\gamma_2|^2&=32\pi m_K^2\beta g_2~,
\end{align}
instead of Eq.~\eqref{211127.1}.
Doing this exercise with the central value $|\gamma_2|=4.68$~GeV \cite{Guo:2012yt}, corresponding to a much larger $X_2=0.68$, we find $|\gamma_1|=1.875$~GeV, which is indeed perfectly compatible with the value $|\gamma_1|=1.8^{+0.2}_{-0.3}$~GeV given in Ref.~\cite{Guo:2012yt}. The branching ratio $r_{\rm exp}$ obtained with this value of $|\gamma_1|$ is 0.81, instead of 0.91 for $|\gamma_1|=1.8$~GeV (third row in Table~\ref{tabflatt2}).\footnote{At first sight it might seem strange that the larger coupling $|\gamma_1|=1.875>1.8$~GeV corresponds to the smaller $r_{\rm exp}=0.81<0.91$. This is because for $|\gamma_1|=1.875$~GeV the coupling $|\gamma_2|=4.68$~GeV is larger than $2.77$~GeV, so that the decay width to $K\bar{K}$, $\Gamma_1-\Gamma_R$, has increased.
The fact that the width to $K\bar{K}$ has not increased by the relative factor of 2.84 by which $|\gamma_2|^2$ does reflects some discrepancy between the F and S methods for large $K\bar{K}$ couplings in the case of the pole in Eq.~\eqref{211107.1}.
This point is also clear from the absence of solutions for $r_{\rm exp}=0.52$, 0.68 and $0.75$ within the method F, although they are found for the method S in Table~\ref{tab.211106.2}. Some discrepancy in the results when taking $X$ as input can also be seen for larger values of $X\gtrsim 0.6$ (and hence of $|\gamma_2|$) by comparing the central values of the $\Gamma_i$ between Tables~\ref{tab.211106.1} and \ref{tab.211107.1}. }
After this check we reconsider the application of the method F to the $f_0(980)$ pole in Eq.~\eqref{equ22} and discuss the central values of $|\gamma_1|\,(|\gamma_2|)$ for the input values of $r_{\rm exp}$ in Table~\ref{tab.211111.1}. For the extreme values $r_{\rm exp}=0.52$ and $0.75$ we find $1.96\,(3.88)$~GeV and $1.78\,(2.40)$~GeV, respectively.
Under a small variation of 5\% in $r_{\rm exp}$, i.e.\ between the values $0.95\, r_{\rm exp}$ and $1.05\, r_{\rm exp}$, the coupling squared $|\gamma_2|^2$ changes by 16\% for $r_{\rm exp}=0.52$ and by 40\% for $r_{\rm exp}=0.75$. This implies that, although there is no critical dependence of $|\gamma_2|$ on the value of $r_{\rm exp}$ as in the case of the $f_0(980)$ from Ref.~\cite{Guo:2012yt}, cf. Table~\ref{tabflatt2}, the variations are nonetheless important. Considering the experimental errors in $r_{\rm exp}$ from Refs.~\cite{Aubert:2006nu,BES:2005iaq}, of around 23\% for $r_{\rm exp}=0.52$ and $15\%$ for $r_{\rm exp}=0.75$, we have added in quadrature the resulting uncertainty of around 40\% in $X_2$ to the one stemming from the values of $M_R$ and $\Gamma_R$ to calculate the error bars in Table~\ref{tab.211111.1}.
A similar relative uncertainty, which also follows by inspection of the dashed line in the left panel of Fig.~\ref{pic_X_r}, has been applied to $X_2$ in Table~\ref{tabff3} calculated with the method S. We have proceeded analogously for the errorbars of $X_2$ shown in Table~\ref{tab.211106.2}. Notice that within the method S the sensitivity to the input value of $r_{\rm exp}$ for the pole in Eq.~\eqref{211107.1} is a factor of 1.6 larger than for the pole in Eq.~\eqref{equ22}, as follows by comparing the slope of the solid versus the dashed lines in the left panel of Fig.~\ref{pic_X_r}.
We then find a rather unpleasant situation in which small changes in the input values, well within the error bars provided by the analyses they come from, can give rise to very different output values for $|\gamma_2|^2$, and hence for $X_2$, the partial compositeness coefficient that typically almost saturates the whole $X$.
A way to circumvent this limitation is to use the spectral density function $\omega(E)$, Eq.~\eqref{211013.1}, in order to calculate the compositeness $X$ as $1-W_R$, since we have checked that it does not depend so sensitively on the input value of $r_{\rm exp}$. In this regard, we calculate $W_R$ as a function of the extent of the interval $[-\Delta,\Delta]$ used in Eq.~\eqref{211013.1}, because its dependence on $\Delta$ is worth noticing; the results for the $f_0(980)$ pole in Eq.~\eqref{211107.1} are given in Table~\ref{tabflatt5g}. In the first column we show the $r_{\rm exp}$ taken, the second column gives the integration interval in Eq.~\eqref{211013.1}, the third column shows the resulting $W_R$, the fourth column provides $1-W_{R}$, and the last one gives $X$ as already calculated with the method based on the Flatt\'e parameterization in Table~\ref{tabflatt2}.
In Ref.~\cite{Baru:2003qq} the cutoff $\Delta$ was taken to be 50~MeV. Since the width of the $f_0(980)$ in Ref.~\cite{Guo:2012yt} is 58~MeV, we take $\Delta=60$~MeV for this pole, but estimate an uncertainty for the resulting $W_R$ by considering also $\Delta=90$~MeV, which is 50\% larger than the nominal value.
This is illustrated in Fig.~\ref{picflatt}, where we plot $\omega(E)$ for the $f_0(980)$ and $a_0(980)$ resonances in the top and bottom panels, respectively. It is clearly seen that most of the resonance bump lies in the region $|E|<\Gamma_R$, and that for $|E|>1.5\Gamma_R$ it has already faded away. From Table~\ref{tabflatt5g} we obtain that for $\Delta=60$~MeV and $r_{\rm exp}=0.91$ (the nominal value for \cite{Guo:2012yt}) the resulting compositeness is $1-W_{f_0}=0.52$, while for $\Delta=1.5\Gamma_R\approx 90$~MeV it decreases to 0.43. We observe a variation of around 40\% for $1-W_{f_0}$ in Table~\ref{tabflatt5g} calculated with $\Delta=60$~MeV (much smaller than the 300\% in Table~\ref{tabflatt2}), decreasing from 0.68 for $r_{\rm exp}=0.86$ to 0.39 for $r_{\rm exp}=0.96$.
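As a schematic illustration of this $\Delta$-dependence (a minimal sketch only: we replace the full Flatt\'e spectral density of Eq.~\eqref{211013.1} by a simple Lorentzian line shape of width 58~MeV, so the numbers differ from those in Table~\ref{tabflatt5g}), the integrated weight grows with $\Delta$ as follows:

```python
import math

# Lorentzian stand-in for the spectral density omega(E), width in MeV.
def omega(E, E0=0.0, gamma=58.0):
    return (gamma / (2.0 * math.pi)) / ((E - E0) ** 2 + gamma ** 2 / 4.0)

# Trapezoidal integration of omega over [-delta, delta], mimicking the
# integral W_R = int omega(E) dE of Eq. (211013.1).
def W(delta, n=20000):
    h = 2.0 * delta / n
    s = 0.5 * (omega(-delta) + omega(delta))
    for i in range(1, n):
        s += omega(-delta + i * h)
    return s * h

print([round(W(d), 2) for d in (30, 60, 90, 120)])  # → [0.51, 0.71, 0.8, 0.85]
```

The monotonic growth with $\Delta$ mirrors the pattern of the tables: the integrated weight saturates only slowly once $\Delta$ exceeds about $1.5$ times the width.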
\begin{table}[!htbp]
\begin{center}
\caption{ Resonance $f_0(980)$ with the pole position of Eq.~\eqref{211107.1} in the RS \Rmnum{2}: We show the dependence of $W_{f_0}$ on the integration interval $[-\Delta,\Delta]$ with $\Delta$ up to $2\Gamma_R$. The last column gives $X=X_1+X_2$ obtained in Table~\ref{tabflatt2}. The values of $r_{\rm exp}$ given do not come from any experiment and are considered for theoretical reasons. }
\label{tabflatt5g}
\begin{tabular}{|c|ccc|c|}
\Xhline{1pt}
~~$r_{\rm exp}$&~~~$[-\Delta, \Delta]$&~~~$W_{f_{0}}$ & $1-W_{f_0}$ & $X$ \\
\Xhline{1pt}
\multirow{4}*{0.86}~~~&~~~$[-30, 30 ]$~~~&~~~$0.19$~~~& 0.81 & \\
&~~~$[-60, 60 ]$~~~&~~~$0.32$~~~& 0.68 & \\
&~~~$[-90, 90 ]$~~~&~~~$0.40$~~~& 0.60 & \\
&~~~$[-120, 120 ]$~~~&~~~$0.45$~~~& 0.55 & $0.36\pm0.14$\\
\Xhline{1pt}\multirow{4}*{0.91}~~~&~~~$[-30, 30 ]$~~~&~~~$0.30$~~~& 0.70 & \\
&~~~$[-60, 60 ]$~~~&~~~$0.48$~~~& 0.52 & \\
&~~~$[-90, 90 ]$~~~&~~~$0.57$~~~& 0.43 & \\
&~~~$[-120, 120 ]$~~~&~~~$0.63$~~~& 0.37 & $0.27\pm0.12$\\
\Xhline{1pt}
\multirow{4}*{0.96}~~~&~~~$[-30, 30 ]$~~~&~~~$0.40$~~~& 0.60 & \\
&~~~$[-60, 60 ]$~~~&~~~$0.61$~~~& 0.39 & \\
&~~~$[-90, 90 ]$~~~&~~~$0.71$~~~& 0.29 & \\
&~~~$[-120, 120 ]$~~~&~~~$0.76$~~~& 0.24 & $0.13\pm0.06$\\
\Xhline{1pt}
\end{tabular}
\end{center}
\end{table}
\begin{center}
\begin{figure}
\centering
\caption{ The spectral density function $\omega(E)$ is shown for the $f_0(980)$ in the top panels and for the $a_0(980)$ in the bottom ones.
For the $f_0(980)$ the poles considered are in the RS \Rmnum{2}, with the pole in Eq.~\eqref{211107.1} corresponding to the top left panel and the pole in Eq.~\eqref{equ22} to the top right one. For the $a_0(980)$ we take the poles in Eqs.~\eqref{211009.1} and \eqref{eq:ratioa0}, which are in the RS \Rmnum{2} (bottom left panel) and RS \Rmnum{3} (bottom right one), respectively.
Every line in a plot corresponds to the value of $r_{\rm exp}$ indicated in the legend.
\label{picflatt}}
\begin{minipage}[t]{1\linewidth}
\centering
\subfigure[]{
\includegraphics[width=0.4\textwidth]{f0978.pdf}\;}
\hspace{1in}
\subfigure[]{
\includegraphics[width=0.4\textwidth]{f0996.pdf}}
\subfigure[]{
\includegraphics[width=0.4\textwidth]{a0RS2.pdf}\;}
\hspace{1in}
\subfigure[]{
\includegraphics[width=0.4\textwidth]{a0RS3.pdf}}
\end{minipage}
\end{figure}
\end{center}
\begin{table}[!htbp]
\begin{center}
\caption{{\small Resonance $f_0(980)$ with the pole position in the RS \Rmnum{2} from Ref.~\cite{GarciaMartin:2011jx}, Eq.~\eqref{equ22}: We show the dependence of $W_{f_0}$ on the integration interval $[-\Delta,\Delta]$ with $\Delta$ up to $2\Gamma_R$. For the experimental inputs of $r_{\rm exp}$ we give our final estimate for $1-W_{f_0}$ in column 5. The total compositeness $X=X_1+X_2$ from Table~\ref{tab.211111.1} is given in the last column. }
\label{tabflatt5}}
\begin{tabular}{|c|ccc|c|c|}
\Xhline{1pt}
~~$r_{\rm exp}$&~~~$[-\Delta, \Delta]$&~~~$W_{f_{0}}$ & $(1-W_{f_0})_\Delta$ & $1-W_{f_0}$& $X$ \\
\Xhline{1pt}
\multirow{4}*{0.52
\cite{Aubert:2006nu}}~~~&~~~$[-25, 25 ]$~~~&~~~$0.13$~~~& 0.87 & &\\
&~~~$[-50, 50 ]$~~~&~~~$0.21$~~~& $0.79$ & &\\
&~~~$[-75, 75 ]$~~~&~~~$0.27$~~~& 0.73 & &\\
&~~~$[-100, 100 ]$~~~&~~~$0.31$~~~& 0.69 & $0.76\pm 0.15$ &$0.51\pm0.22$\\
\Xhline{1pt}\multirow{4}*{0.68
\cite{Oller:1997ti}}~~~&~~~$[-25, 25 ]$~~~&~~~$0.25$~~~& 0.75 & &\\
&~~~$[-50, 50 ]$~~~&~~~$0.39$~~~& 0.61 & &\\
&~~~$[-75, 75 ]$~~~&~~~$0.47$~~~& 0.53 & & \\
&~~~$[-100, 100 ]$~~~&~~~$0.53$~~~& 0.47 & & $0.38\pm0.18$\\
\Xhline{1pt}
\multirow{4}*{0.75
\cite{BES:2005iaq}}~~~&~~~$[-25, 25 ]$~~~&~~~$0.30$~~~& 0.70 & & \\
&~~~$[-50, 50 ]$~~~&~~~$0.45$~~~& $0.55$ & &\\
&~~~$[-75, 75 ]$~~~&~~~$0.55$~~~& 0.45 & & \\
&~~~$[-100, 100 ]$~~~&~~~$0.61$~~~& 0.39 & $0.50\pm 0.15$ & $0.29\pm0.14$\\
\Xhline{1pt}
\end{tabular}
\end{center}
\end{table}
Next, we apply the method based on the spectral density function $\omega(E)$ to the pole from Ref.~\cite{GarciaMartin:2011jx}, Eq.~\eqref{equ22}; the results are given in Table~\ref{tabflatt5}. The dependence on $\Delta$ of the compositeness $1-W_{f_0}$ is indicated with the subscript $\Delta$ in the fourth column of Table~\ref{tabflatt5}, $(1-W_{f_0})_\Delta$.
We take as the nominal value for $1-W_{f_0}$ the mean of the values obtained with $\Delta=\Gamma_R=50$~MeV and $\Delta=1.5\Gamma_R$.
Considering the variation of the results between these two values of $\Delta$, and the error bar of around 0.12 in $r_{\rm exp}$ in both experiments \cite{Aubert:2006nu,BES:2005iaq} (such that $r_{\rm exp}=0.68$ lies within one standard deviation of both the upper end for $r_{\rm exp}=0.52$ and the lower end for 0.75), we see that an uncertainty of at least 0.15 should be assigned to the reference value of $1-W_{f_0}$.
The final figure is given in the next-to-last column of Table~\ref{tabflatt5}.
We can appreciate that the central value obtained by integrating Eq.~\eqref{211013.1} is larger than the central value for $X$ in Table~\ref{tab.211111.1} obtained by employing Eq.~\eqref{equ1}, which is also given in the last column.
Then, we would have $1-W_{f_0}=0.76\pm 0.15$ and $0.50\pm 0.15$ for $r_{\rm exp}=0.52$ \cite{Aubert:2006nu} and $0.75$ \cite{BES:2005iaq}, respectively, as also shown in Table~\ref{tabflatt5}. These figures indicate a dominant meson-meson component, mostly $K\bar{K}$ ($X_2\gg X_1$), in the nature of the $f_0(980)$ pole from Ref.~\cite{GarciaMartin:2011jx}, Eq.~\eqref{equ22}. It also follows that the results are compatible with those given for $X$ within errors.
However, for $r_{\rm exp}$ from Ref.~\cite{BES:2005iaq} the central value of $1-W_{f_0}$ in Table~\ref{tabflatt5} is only slightly above 0.5, and both values taken for $r_{\rm exp}$ \cite{Aubert:2006nu,BES:2005iaq} generate values of $1-W_{f_0}$ that could decrease substantially once errors are taken into account. Therefore, other components apart from $K\bar{K}$ are likely to play a noticeable role in the composition of the $f_0(980)$.
Such extra components have also been unveiled in non-perturbative studies based on unitarized Chiral Perturbation Theory with and without explicit resonances \cite{Oller:1998zr,Pelaez:2006nj,Guo:2012ym,Guo:2012yt}.
\subsection{The $a_0(980)$ resonance}
\label{sec.211126.2}
Now we follow similar steps and use the fact that the ratio $\Gamma(a_0(980)\to K\bar{K})/\Gamma(a_0(980)\to \pi\eta)$ is also given in Ref.~\cite{CrystalBarrel:2019zqh}, see Eqs.~\eqref{211009.1} and \eqref{eq:ratioa0}, with the values $0.138\pm 0.035$ (RS \Rmnum{2}) and $0.149\pm 0.039$ (RS \Rmnum{3}), respectively. The typical uncertainty for the inferred $r_{\rm exp}$ is around 3\%. We also consider the average value given by the PDG \cite{Zyla:2020zbs}, $\Gamma(a_0(980)\to K\bar{K})/\Gamma(a_0(980)\to \pi\eta)=0.177\pm 0.024$, which implies an uncertainty of 1.7\% in $r_{\rm exp}$.
The results are collected in Table~\ref{tabff5}, where we apply the method S with $r_{\rm exp}$ as input to the $a_0(980)$ poles in the RS \Rmnum{2} and \Rmnum{3}.
The $K\bar{K}$ component is larger than the $\pi\eta$ one, but the total compositeness $X$ is small, being less than 0.25 and 0.35 for the RS \Rmnum{2} and \Rmnum{3} $a_0(980)$ poles, respectively.
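The conversion from the measured ratio $R=\Gamma(a_0(980)\to K\bar{K})/\Gamma(a_0(980)\to\pi\eta)$ to the branching ratio of the lighter channel can be sketched compactly; the relation $r_{\rm exp}=1/(1+R)$ and the propagated error $\delta r=\delta R/(1+R)^2$ are our reading of the procedure, and they reproduce the quoted central values and uncertainties:

```python
def r_exp_from_ratio(R, dR):
    """Branching ratio to the lighter channel and its propagated error."""
    r = 1.0 / (1.0 + R)
    dr = dR / (1.0 + R) ** 2
    return r, dr

# PDG average R = 0.177 +- 0.024 -> r_exp ~ 0.85 with ~0.017 uncertainty
r_pdg, dr_pdg = r_exp_from_ratio(0.177, 0.024)
# Crystal Barrel RS III value R = 0.149 +- 0.039 -> r_exp ~ 0.87
r_cb, dr_cb = r_exp_from_ratio(0.149, 0.039)
print(round(r_pdg, 2), round(dr_pdg, 3), round(r_cb, 2))  # → 0.85 0.017 0.87
```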
\begin{table}[!htbp]
\begin{center}
\caption{ Method S applied to the resonance $a_0(980)$ with pole positions from Ref.~\cite{CrystalBarrel:2019zqh}, Eqs.~\eqref{211009.1} and \eqref{eq:ratioa0}: By reproducing the input values of $\Gamma_{a_0}$ and $r_{\rm exp}$ (column 2), we predict $|\gamma_i|$ (columns 3, 4), $\Gamma_i$ (columns 5, 6), and $X_i$ (columns 7, 8).}
\begin{tabular}{|cc|cccccc|}
\Xhline{1pt}
~~$r_{\rm exp}$&~~~$\text{RS}~~~$&~~~$|\gamma_{\pi\eta}|(\text{GeV})$~~~&$|\gamma_{K\bar{K}}|(\text{GeV})$~~~&$\Gamma_{1}(\text{MeV})$&~~~$\Gamma_{2}(\text{MeV})$&~~~$X_{\pi\eta}$&~~~$X_{K\bar{K}}$~~~\\
\Xhline{1pt}
\multirow{2}*{0.85 \cite{Zyla:2020zbs}}&~~~$\text{\Rmnum{2}}~~~$&~~~$2.90\pm0.05$~~~&$2.32\pm0.08$~~~&$111.8\pm4.2$&~~~$14.6\pm0.6$&~~~$0.100\pm0.004$&~~~$0.132\pm0.013$~~~\\
&~~~$\text{\Rmnum{3}}~~~$&~~~$2.86\pm0.05$~~~&$2.61\pm0.72$~~~&$108.0\pm3.7$&~~~$19.1\pm0.7$&~~~$0.098\pm0.004$&~~~$0.249\pm0.017$~~~\\
\Xhline{1pt}
\multirow{2}*{0.87 \cite{CrystalBarrel:2019zqh}}&~~~$\text{\Rmnum{2}}~~~$&~~~$2.88\pm0.05$~~~&$2.16\pm0.07$~~~&$109.8\pm4.2$&~~~$12.6\pm0.5$&~~~$0.098\pm0.004$&~$0.115 \pm 0.009$ \\%0.12 \pm 0.19$~~~\\
&~~~$\text{\Rmnum{3}}~~~$&~~~$2.89\pm0.05$~~~&$2.43\pm0.07$~~~&$110.5\pm3.8$&~~~$16.5\pm0.6$&~~~$0.100\pm0.004$&~~~~~$0.216\pm 0.030$ ~~~ \\
\Xhline{1pt}
\end{tabular}
\label{tabff5}
\end{center}
\end{table}
\begin{table}[!htbp]
\begin{center}
\caption{ Method F applied to the resonance $a_0(980)$ with pole positions from Ref.~\cite{CrystalBarrel:2019zqh}, Eqs.~\eqref{211009.1} and \eqref{eq:ratioa0}: The branching ratio $r_{\text{exp}}$ is taken as input (column 1), and the RS in which the pole lies is indicated in column 2. We then calculate the bare partial-decay width $\widetilde{\Gamma}_1$ (column 3), the bare coupling squared $g_2$ (column 4), $E_f$ (column 5), the renormalized $\Gamma_1$ (column 6) and $\Gamma_2$ (column 7), and the partial compositeness coefficients $X_1$ (column 8) and $X_2$ (column 9). }
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}|cc|ccccccc|}
\Xhline{1pt}
$r_{\text{exp}}$ &$\text{RS}$ &$\widetilde{\Gamma}_{\pi\eta}\,(\text{MeV})$ &$g_2$ &$E_{f}\,(\text{MeV})$ &$\Gamma_{1}\,(\text{MeV})$ &$\Gamma_{2}\,(\text{MeV})$ &$X_{\pi\eta}$ &$X_{K\bar{K}}$\\
\Xhline{1pt}
\multirow{2}*{0.85 \cite{Zyla:2020zbs}} &$\text{\Rmnum{2}}$ &$124.0\pm5.3$ &$0.21\pm0.03$ &$3.08\pm5.11$& $111.8\pm4.2$ &$14.6\pm0.6$ &$0.100\pm0.004$ &$0.116\pm 0.017$ \\
&$\text{\Rmnum{3}}$ &$98.7\pm3.4$ &$0.21\pm0.01$ &$23.1\pm2.8$ &$108.0\pm3.6$ &$19.1\pm0.6$ &$0.099\pm0.004$ &$0.205\pm 0.028$ \\
\Xhline{1pt}
\multirow{2}*{0.87 \cite{CrystalBarrel:2019zqh}} &$\text{\Rmnum{2}}$ &$119.9\pm4.9$ &$0.18\pm0.02$ &$4.65\pm4.82$ &$109.8\pm4.2$ &$12.6\pm0.5$ &$0.098\pm0.004$ & $0.100\pm 0.016$ \\
&$\text{\Rmnum{3}}$ &$102.0\pm3.5$ &$0.18\pm0.01$ &$21.6\pm2.9$ &$110.5\pm3.7$ &$16.5\pm0.6$ &$0.101\pm0.004$ &$0.178\pm 0.038$ \\%$0.178\pm0.018$\\
\Xhline{1pt}
\end{tabular*}
\label{tabflatt4}
\end{center}
\end{table}
We now proceed with the application of the method F with $r_{\rm exp}$ as input; the results are presented in Table~\ref{tabflatt4}. We observe that the values of $X_2$ and $\Gamma_{K\bar{K}}$ are rather similar between Tables~\ref{tabflatt3} and \ref{tabflatt4}, and similarly if we compare the latter with Table~\ref{tabff5}, where $r_{\rm exp}$ is also taken as input. Thus, we find compatibility between the different methods for the $a_0(980)$ case.
It is also clear from Table~\ref{tabflatt4} that the value of $r_{\rm exp}=0.85$ obtained from the PDG average value of $\Gamma(a_0(980)\to K\bar{K})/\Gamma(a_0(980)\to \pi\eta)=0.177\pm 0.024$ \cite{Zyla:2020zbs} implies quite small values for $X_2\lesssim 0.3$.
In our opinion, an uncertainty of only $2{-}3\%$ in $r_{\rm exp}$ for the $a_0(980)$ is probably too optimistic. In this respect we notice that the analysis of Ref.~\cite{CrystalBarrel:2019zqh} for the $a_0(980)$ is based on taking poles in the RS \Rmnum{2} and \Rmnum{3} for this resonance, while recent sophisticated theoretical studies \cite{Guo:2016zep,Dudek:2016cru}, which also reproduce lattice QCD data, require a very different qualitative picture with a pole in the RS \Rmnum{4}.
In this respect, let us check the sensitivity of the results based on Eq.~\eqref{equ1} for the calculation of $X$. To that end, we take e.g.\ the $a_0(980)$ pole in the RS \Rmnum{2} and $r_{\rm exp}=0.85\pm 0.15$, which corresponds to the average value of the PDG with an ad hoc uncertainty of around 20\%.
We indeed find a strong sensitivity: for the central value $r_{\rm exp}=0.85$ we have the couplings $|\gamma_1|=2.9$~GeV and $|\gamma_2|=2.2$~GeV, while for the lower end $r_{\rm exp}=0.85-0.15=0.70$ we find a new solution with $|\gamma_1|=3.1$~GeV and $|\gamma_2|=3.2$~GeV.
With respect to the central value we have a variation of only 6\% in $|\gamma_1|$, but $|\gamma_2|$ is now 44\% larger (a factor of 2 for the square of the coupling).
The variation is of similar size if considering $r_{\rm exp}=0.87\pm 0.17$, with the central value from Ref.~\cite{CrystalBarrel:2019zqh} and an ad hoc $20\%$ uncertainty.
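The size of these variations follows from elementary arithmetic with the quoted couplings; the percentages below are computed from the rounded values and thus differ at the percent level from those in the text:

```python
# Relative changes of the couplings between the r_exp = 0.85 solution and
# the r_exp = 0.70 one, using the rounded values quoted above.
g1_old, g1_new = 2.9, 3.1   # |gamma_1| in GeV
g2_old, g2_new = 2.2, 3.2   # |gamma_2| in GeV

var1 = g1_new / g1_old - 1.0          # small change in |gamma_1|
var2 = g2_new / g2_old - 1.0          # almost 50% larger |gamma_2|
var2_sq = (g2_new / g2_old) ** 2      # about a factor of 2 in |gamma_2|^2
print(round(var1, 2), round(var2, 2), round(var2_sq, 1))  # → 0.07 0.45 2.1
```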
Thus, it is advisable to also apply to the $a_0(980)$ the method based on the spectral density function (which is less sensitive to small variations in the input value of $r_{\rm exp}$) and evaluate the compositeness $1-W_{a_0}$ as a function of $\Delta$. We give the results in Tables~\ref{tab.211111.2} and \ref{tabflatt6} for the $a_0(980)$ poles in the RS \Rmnum{2}, Eq.~\eqref{211009.1}, and RS \Rmnum{3}, Eq.~\eqref{eq:ratioa0}, respectively. The dependence on $\Delta$ is indicated by the subscript $\Delta$ in the fourth column, $(1-W_{a_0})_\Delta$.
Taking into account the variation in the value of $1-W_{a_0}$ between $\Delta\approx \Gamma_{a_0}$ and $\Delta\approx 1.5 \Gamma_{a_0}$, we give our range of values calculated for $1-W_{a_0}$ in the next-to-last column of Tables~\ref{tab.211111.2} and \ref{tabflatt6}. The output is similar in both tables, with values of $1-W_{a_0}$ typically within the range $0.3-0.4$. When compared with $X$ from Table~\ref{tabflatt4}, given in the last column of Tables~\ref{tab.211111.2} and \ref{tabflatt6}, we see a quantitative agreement in the case of the RS \Rmnum{3} pole, and a semiquantitative one for the RS \Rmnum{2} pole. The emerging picture is that $X$ is clearly less than 0.5, ranging between $0.2-0.4$ depending on the method of calculation. Therefore, other components beyond $\pi\eta$ and $K\bar{K}$ are also required \cite{Dai:2012kf,Sekihara:2014qxa}. However, if the resonance lay in the RS \Rmnum{4}, as preferred by the recent analyses \cite{Guo:2016zep,Dudek:2016cru}, our approach would not apply and we could not extend this conclusion to that case.
\begin{table}[H]
\begin{center}
\caption{Resonance $a_0(980)$ with the pole position in the RS \Rmnum{2} from Ref.~\cite{CrystalBarrel:2019zqh}, Eq.~\eqref{211009.1}: We show the dependence of $W_{a_0}$ on the integration interval $[-\Delta,\Delta]$ with $\Delta$ up to $2\Gamma_{a_0}$. In column 5 we provide our estimated interval for $1-W_{a_0}$, and in the last one $X=X_1+X_2$ from Table~\ref{tabflatt4} is given.
\label{tab.211111.2}}
\begin{tabular}{|c|ccc|c|c|}
\Xhline{1pt}
~~$r_{\rm exp}$&~~~$[-\Delta, \Delta]$&~~~$W_{a_{0}}$& $(1-W_{a_0})_{\Delta}$ & $1-W_{a_0}$ & $X$\\
\Xhline{1pt}
\multirow{4}*{~~~0.85 \cite{Zyla:2020zbs}}~~~&~~~$[-50, 50 ]$~~~&~~~~$0.38$~~~ & 0.62 & &\\
~~~&~~~$[-100, 100 ]$~~~&~~~~$0.57$~~~ & $0.43$ & &\\
~~~&~~~$[-150, 150 ]$~~~&~~~$0.67$~~~& 0.33 & &\\
~~~&~~~$[-200, 200 ]$~~~&~~~$0.73$~~~& 0.27 & $0.33-0.43$ &$0.216 \pm 0.017$\\
\Xhline{1pt}
\multirow{4}*{~~~0.87 \cite{CrystalBarrel:2019zqh}}~~~&~~~$[-50, 50 ]$~~~&~~~~$0.39$~~~ & 0.61 & &\\
~~~&~~~$[-100, 100 ]$~~~&~~~$0.59$~~~& $0.41$ & &\\
~~~&~~~$[-150, 150 ]$~~~&~~~$0.68$~~~& 0.32 & &\\
~~~&~~~$[-200, 200 ]$~~~&~~~$0.74$~~~& 0.26 & $0.32-0.41$ &$0.198 \pm 0.016$\\
\Xhline{1pt}
\end{tabular}
\end{center}
\end{table}
\begin{table}[H]
\begin{center}
\caption{ Resonance $a_0(980)$ with the pole position in the RS \Rmnum{3} from Ref.~\cite{CrystalBarrel:2019zqh}, Eq.~\eqref{eq:ratioa0}: We show the dependence of $W_{a_0}$ on the integration interval $[-\Delta,\Delta]$ with $\Delta$ up to $2\Gamma_{a_0}$. In column 5 we provide our estimated interval for $1-W_{a_0}$, and in the last one $X=X_1+X_2$ from Table~\ref{tabflatt4} is given.
\label{tabflatt6}}
\begin{tabular}{|c|ccc|c|c|}
\Xhline{1pt}
~~$r_{\rm exp}$&~~~$[-\Delta, \Delta]$&~~~$W_{a_{0}}$& $(1-W_{a_0})_\Delta$ & $1-W_{a_0}$ & $X$\\
\Xhline{1pt}
\multirow{5}*{~~~0.85 \cite{Zyla:2020zbs}}~~~&~~~$[-50, 50 ]$~~~&~~~~$0.39$~~~ & 0.61 & & \\
~~~&~~~$[-100, 100 ]$~~~&~~~~$0.59$~~~ & $0.41$ & & \\
~~~&~~~$[-150, 150 ]$~~~&~~~$0.69$~~~& 0.31 & &\\
~~~&~~~$[-200, 200 ]$~~~&~~~$0.75$~~~& 0.25 & &\\
~~~&~~~$[-250, 250 ]$~~~&~~~$0.79$~~~& 0.21 & $0.31-0.41$ &$0.303\pm0.030$\\
\Xhline{1pt}
\multirow{5}*{~~~0.87 \cite{CrystalBarrel:2019zqh}}~~~&~~~$[-50, 50 ]$~~~&~~~~$0.40$~~~ & 0.60 & &\\
~~~&~~~$[-100, 100 ]$~~~&~~~$0.60$~~~& $0.40$ & &\\
~~~&~~~$[-150, 150 ]$~~~&~~~$0.70$~~~& 0.30 & &\\
~~~&~~~$[-200, 200 ]$~~~&~~~$0.76$~~~& 0.24 & &\\
~~~&~~~$[-250, 250 ]$~~~&~~~$0.80$~~~& 0.20 & $0.30-0.40$ &$0.279\pm 0.037$\\
\Xhline{1pt}
\end{tabular}
\end{center}
\end{table}
We would also like to comment on the clearly visible cusp effect in most of the curves of $\omega(E)$ in Fig.~\ref{picflatt}. This change in the shape of $\omega(E)$ below and above the two-kaon threshold is due to the fact that if the resonance pole lies in the RS \Rmnum{2} (\Rmnum{3}) then there is no associated pole in the RS \Rmnum{3} (\Rmnum{2}) above (below) the $K\bar{K}$ threshold. Precisely, the RS \Rmnum{3} (\Rmnum{2}) is the sheet that connects with the physical region there.
\section{Summary and conclusions}
\label{sec.211106.1}
This paper discusses the importance of the continuum channels $\pi\pi$-$K\bar{K}$ and $\pi\eta$-$K\bar{K}$ in the composition of the $f_0(980)$ and $a_0(980)$ resonances, which is quantified by the concept of the total compositeness $X$.
In our calculation we exploit the tight relationship between the compositeness $X$, the mass and the decay width of a resonance.
The threshold of the $K\bar{K}$ pair is very close to the mass of each resonance, and this fact has major consequences for our results. We develop two methods: one is based on saturating the total width and compositeness; the other relies on the use of a Flatt\'e parameterization and, in some instances, of the spectral density function of a near-threshold resonance.
We provide input values for the mass and width of each resonance by taking their pole positions from relevant analyses in the literature.
Regarding the third input needed in our analyses, we first take input values for $X$ in the compositeness relationship, and
we predict the couplings, partial compositeness coefficients and partial-decay widths to the $\pi\pi$ ($\pi\eta$) and $K\bar{K}$ channels for the $f_0(980)$ ($a_0(980)$).
There is an interesting trend in the results such that the larger $X$, the smaller the branching decay ratio to the lighter channel, $r_{\rm exp}$.
This is due to the increase of the coupling to the heavier $K\bar{K}$ channel with increasing $X$, which compensates for the reduced phase space available for the decay of the resonances into this channel. It is also found that for the $f_0(980)$ the partial compositeness coefficient of $K\bar{K}$, $X_2$, is larger by orders of magnitude than the corresponding one for $\pi\pi$, $X_1$. For the $a_0(980)$ the compositeness for $K\bar{K}$ is also larger than the one associated with $\pi\eta$, but as $X$ decreases they tend to become similar in size.
Another possibility is to replace the third input $X$ by values of $r_{\rm exp}$ reported in the literature.
However, if $X_2$ is calculated in terms of the coupling squared to $K\bar{K}$ and the derivative of the unitarity-loop function in the corresponding Riemann sheet, we typically find a large sensitivity to the input value of $r_{\rm exp}$. The situation improves when using the method based on integrating the spectral density function around the $K\bar{K}$ threshold along the energy region comprising the resonance signal, so that more stable results are obtained under small changes in $r_{\rm exp}$. It turns out that, for the poles considered, the meson-meson components are dominant for the $f_0(980)$, while for the $a_0(980)$ they are subdominant.
Throughout the manuscript we have emphasized the need to distinguish in a Flatt\'e parameterization between the bare couplings and widths, on the one hand, and the dressed (renormalized) ones, on the other, and we have shown how to calculate the latter. In addition, we discuss the relationship between the partial-decay widths directly calculated in terms of the dressed couplings and the actually measured ones. In this regard, we show the changes needed for a pole in the second Riemann sheet lying near the heavier threshold, such that the total width is then $\Gamma_R=\Gamma_1-\Gamma_2$, instead of the standard $\Gamma_R=\Gamma_1+\Gamma_2$ for a pole in the third Riemann sheet.
Finally, we stress that the compositeness concept, as a quantitative examination of the inner structure of a resonance/molecule, is a relevant tool to promote a step forward in the understanding of the structure of a hadronic state.
\acknowledgments
We would like to thank Zhi-Hui Guo, J.~R. Pel\'aez and J.~Ruiz de Elvira for useful discussions. The author XWK is supported by the National Natural Science Foundation of China (NSFC) under Project No.~11805012. JAO acknowledges partial financial support by the MICINN AEI (Spain) Grant No.~PID2019-106080GB-C22/AEI/10.13039/501100011033, and by the EU Horizon 2020 research and innovation programme, STRONG-2020 project, under grant agreement No.~824093.
\bibliographystyle{apsrev4-1}
\section{Introduction}
Non-perturbative meson--meson interactions and the related scalar-meson spectroscopy are topics of great importance.
The nature of the scalar mesons is still under debate, despite the efforts of several decades, particularly since the discovery of the resonances $f_0(980)$ \cite{Protopopescu:1973sh} and $a_0(980)$ \cite{Ammar:1968zur}.
The scalar mesons below 1~GeV, like the $f_{0}(500)/\sigma, K^*(800)/\kappa, f_0(980), a_0(980)$, are serious candidates to comprise a $J^{PC}=0^{++}$ nonet as required in Refs.~\cite{Jaffe:1976ig,Jaffe:1976ih,vanBeveren:1986ea,Napsuciale:1998ip,Black:2000qq,Oller:2003vf,Moussallam:2011zg}.
These resonances with vacuum quantum numbers are crucial for a deep understanding of the spontaneous chiral symmetry breaking of Quantum Chromodynamics (QCD), of its spectroscopy and, in general, of its non-perturbative nature \cite{Moussallam:1999aq,Oller:2006xb,Oller:2007xd,Albaladejo:2010tj,Alvarez-Ruso:2009vkn,Alvarez-Ruso:2010rqm}.
Over the decades the lightest scalar resonances have been accommodated within different models, such as tetraquark states \cite{Jaffe:1976ig,Jaffe:1976ih,Achasov:1980tb,Achasov:1999wv,Achasov:2020aun,Vijande:2009ac}, molecular states \cite{Weinstein:1982gc,Weinstein:1983gd,Ahmed:2020kmp,Dai:2014zta,Dai:2011bs}, dynamically generated resonances \cite{Oller:1997ti,Oller:1997ng,Oller:1998hw,Oller:1998zr,Janssen:1994wn,Lohse:1990ew}, unitarized quark models \cite{Bramon:1980ni,vanBeveren:1986ea,Tornqvist:1995kr}, linear sigma models \cite{Black:1998wt,Scadron:1997ut,Napsuciale:2004xa,Napsuciale:2004au}, etc.
For example, in Ref.~\cite{Dai:2012kf} the $a_0(980)$ is understood as a Breit-Wigner resonance, not as a dynamically generated resonance, while the $f_0(980)$ is considered as a $K\bar{K}$ bound state.
We also notice that in Ref.~\cite{Sekihara:2014qxa} the compositeness is analyzed via the $f_{0}(980)-a_{0}(980)$ mixing intensity, and it is found that the $f_{0}(980)$ and $a_{0}(980)$ cannot be simultaneously $K\bar{K}$ bound states.
The masses of the two resonances are very close and the $f_{0}(980)-a_{0}(980)$ mixing could occur via the hadronic $K\bar{K}$ loop \cite{Achasov:1979xc,Kudryavtsev:2002uu,Hanhart:2007bd,Oller:1999ag}.
Recently, there has also been interest in assessing the nature of the scalar mesons by studying semileptonic decays \cite{Cheng:2017pcq,Kang:2018jzg,Kang:2013jaa}. For reviews, see Refs.~\cite{Close:2002zu,Klempt:2007cp,Yao:2020bxx,Zyla:2020zbs}.
In fact, a meson has typically several components \cite{Cohen:2014vta}, such as the superposition of $q\bar{q}$ and tetraquarks $qq\bar{q}\bar{q}$ \cite{tHooft:2008rus}, gluonium \cite{Narison:2000dh}, meson-meson components, etc.
The compositeness, usually denoted by $X$, refers to the weight in the resonance state of the meson-meson components in the continuum part of the free spectrum \cite{Weinberg:1962hj,Oller:2017alp}. Therefore, it is a fundamental concept required for a quantitative analysis of the nature of a resonance. In contrast, the elementariness, typically called $Z$, is the weight of the bare (compact/short-range) degrees of freedom in the resonance constitution, such that $1=Z+X$.
For a bound state, the compositeness is a positive real number \cite{Weinberg:1962hj} between 0 and 1 (as it should be), but its straightforward extension to the resonance case gives rise to complex-valued results \cite{Oller:2017alp}.
Several extensions have been proposed \cite{Baru:2003qq,Hyodo:2011qc,Aceti:2012dd,Aceti:2014ala,Sekihara:2014kya,Matuschek:2020gqe} to end up with sensible real values for the compositeness. In this work, we use the results of Refs.~\cite{Guo:2015daa,Oller:2017alp}, which allow a probabilistic interpretation of the compositeness relation of the resonance into open channels. Studies along these lines have also been extensively pursued for heavy-quark resonances \cite{Meissner:2015mza,Kang:2016ezb,Kang:2016jxw,Gao:2018jhk,Guo:2020vmu,Guo:2020pvt,Du:2021bgb}. In addition, we also employ the formalism based on the evaluation of the spectral density function of the bare state associated with the resonance \cite{Bogdanova:1991zz} by using a Flatt\'e parameterization \cite{Baru:2003qq}. We then compare this formalism with the one previously referred to for the evaluation of the compositeness and elementariness, finding compatible results between them.
The $f_{0}(980)$ and $a_0(980)$ resonances couple mainly to the channels $\pi\pi$-$K\bar{K}$ and $\pi\eta$-$K\bar{K}$, respectively. The main equations stem from considering the saturation of the compositeness relationship and the total width of the resonance, from which we calculate the couplings, partial compositeness coefficients and partial-decay widths.
The implication of the branching ratio to the lighter channel, which we call $r_{\rm exp}$, together with the reproduction of the total width, is also explored within our compositeness formalism. This setup allows one to obtain more definite predictions for $X$: the smaller the branching ratio, the larger the resulting $X$, following a linear relation.
In particular, for the $f_0(980)$ the branching ratio $r_{\rm exp}=0.52\pm 0.12$ \cite{Aubert:2006nu}, the most recent one collected in the PDG \cite{Zyla:2020zbs} from $B$ decays to $K\pi\pi$, implies the largest $X$, ranging around 0.6--0.9 within errors.
In turn, for the $a_0(980)$ the branching ratios reported recently \cite{CrystalBarrel:2019zqh,Zyla:2020zbs} are much larger, and the $X$ calculated here is correspondingly smaller, around 0.2--0.4, taking into account errors and variations in the method of calculation. This indicates that components other than the meson-meson ones play an important role in the constitution of the $a_0(980)$.
However, it is worth keeping in mind that the most sophisticated theoretical studies on $\pi\eta$ scattering matched with lattice QCD \cite{Guo:2016zep,Dudek:2016cru} obtain that the $a_0(980)$ is a pole lying in a hidden Riemann sheet from the physical energy axis.
This was also obtained before in Refs.~\cite{Guo:2012yt,Guo:2012ym}.
There is a caveat at this point, because our study cannot be applied to such a scenario (in which the resonance effects manifest themselves as a strong cusp) in order to clarify the nature of the $a_0(980)$.
We find that for the $f_0(980)$ the $K\bar{K}$ channel has a much larger partial compositeness coefficient than the $\pi\pi$ channel. For the $a_0(980)$ the $K\bar{K}$ compositeness coefficient is also larger than that of $\pi\eta$, though not overwhelmingly dominant.
These results are a verification of those already obtained in Refs.~\cite{Janssen:1994wn,Oller:1997ti}, such that if the $\pi\eta$ channel were removed the $a_0(980)$ would disappear, while the $f_0(980)$ would keep appearing as a $K\bar{K}$ bound state.
In connection with our use of Flatt\'e parameterization we stress the importance of distinguishing between bare and renormalized couplings and widths. The former ones are those appearing directly in the Flatt\'e parameterization, while the latter ones are associated to the actual residues of the partial-wave amplitude of interest at the pole position in the complex energy plane. We also show that for the present two-channel coupled scattering, when the pole lies in the second Riemann sheet, one has to modify the interpretation of the theoretically calculated partial-decay width to the lighter channel in terms of renormalized couplings (residues), and give the proper interpretation.
These two effects explain why bare partial-decay widths, often found in the literature, are much bigger than those actually measured.
For the rest of the paper, Sec.~\ref{sec.211010.1} is dedicated to elaborating the formalism based on the saturation of the total compositeness and decay width of the resonance. In turn, Sec.~\ref{sec.201009.3} develops the method based on the use of a Flatt\'e parameterization and introduces the spectral density function for a near-threshold resonance.
Then, we apply these methods to the study of the $f_0(980)$ and $a_0(980)$ resonances, either by taking $X$ as input in Sec.~\ref{sec.201008.1}, or by using $r_{\rm exp}$ in Sec.~\ref{sec.211126.1}. In terms of them we typically provide the resulting partial compositeness coefficients and partial-decay widths.
Finally, concluding remarks are given in Sec.~\ref{sec.211106.1}.
\section{Formulation of the compositeness-relation and decay-width method}
\label{sec.211010.1}
For definiteness, we proceed with the discussion on the components in the nature of the $f_{0}(980)$, and develop a method to investigate its partial-decay widths, couplings and compositeness. Later on we also apply this method to the related isovector scalar resonance $a_0(980)$.
In what follows, we consider two main decay channels ($\pi\pi$ and $K\bar{K}$) of the $f_0(980)$.
We follow the standard convention such that compositeness and elementariness coefficients are written as $X$ and $Z$, respectively, with $X+Z=1$. For the case of a bound state the coefficient $Z$ corresponds to the field renormalization constant \cite{Weinberg:1962hj,Salam:1962ap}, being real and positive and less than 1 (as $X$ is too).
The straightforward generalization for resonances of the compositeness and elementariness gives rise to complex numbers.
As mentioned in the Introduction, several variants for the compositeness of a resonance have been discussed in the literature.
Here we will follow Ref.~\cite{Guo:2015daa}, which formulates a probabilistic interpretation of the compositeness relation involving only positive and real coefficients for the resonance.
As explained in Ref.~\cite{Oller:2017alp} the compositeness $X$ arises by evaluating the expected value of the number of mesons in the resonance divided by 2 (because we are considering two-body meson states).
After the proper unitary phase transformation of the $S$-matrix, it gives the partial compositeness coefficient for the resonance in the form \cite{Guo:2015daa}
\begin{equation}\label{equ1}
X_{i}=|\gamma_{i}|^{2}\Big|\frac{\partial G_{i}(s)}{\partial s}\Big|_{s=s_{R}}\,,
\end{equation}
The subscript $i$, with $i=1$ and 2, corresponds to the $S$-wave isoscalar $\pi\pi$ and $K\bar{K}$ channels, respectively.
The pole position in the Mandelstam variable $s$ is called $s_R$,
\begin{equation}\label{equ21}
s_{R}=(m_{R}-\frac{i}{2}\Gamma_{R})^2\,,
\end{equation}
with $m_R$ and $\Gamma_R$ the mass and width of the resonance, respectively.
Furthermore, $\gamma_{i}$ is the coupling of the resonance to the channel $i$ that is extracted from the residues of the $T$ matrix at the pole position $s_R$,
\begin{equation}
\gamma^{2}_{i}=-\lim\limits_{s\to s_{R}}(s-s_{R})T(s)_{ii}\,.
\end{equation}
$G_{i}(s)$ is the unitary two-point scalar loop function for the $i$th channel, and it can be written in the form \cite{Guo:2012yt}
\begin{equation}\label{equ2}
\begin{aligned}
G_{i}(s)&=\frac{1}{16\pi^{2}}\big\{a_{i}(\mu)+\log\frac{m_{2}^{2}}{\mu^{2}}-\frac{\Delta+s}{s}\log\frac{m_{2}}{m_{1}}+\frac{p_{i}}{\sqrt{s}}[\log(s-\Delta+2\sqrt{s}p_{i})\\
&+\log(s+\Delta+2\sqrt{s}p_{i})-\log(-s+\Delta+2\sqrt{s}p_{i})-\log(-s-\Delta+2\sqrt{s}p_{i})]\big\}~.\\
\end{aligned}
\end{equation}
Here $\Delta=m_{1}^{2}-m_{2}^{2}$, and $m_{1}$, $m_{2}$ are the masses of the two particles in channel $i$. We do not take into account isospin-breaking effects and use an average mass for the charged and neutral pions, proceeding analogously for the kaons; such effects are expected to be negligible in our exploration. The term $a_{i}(\mu)+\log\frac{m_{2}^{2}}{\mu^{2}}$ in Eq.~\eqref{equ2} is independent of $s$ and disappears when taking the derivative of $G_{i}(s)$ in Eq.~\eqref{equ1}. Finally, in Eq.~\eqref{equ2}, $p_{i}$ is the momentum of channel $i$,
\begin{equation}\label{eq:momentum}
p_{i}(s)=\frac{\sqrt{[s-(m_{1}+m_{2})^{2}][s-(m_{1}-m_{2})^{2}]}}{2\sqrt{s}}\,.
\end{equation}
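In numerical applications, Eq.~\eqref{eq:momentum} is straightforward to implement. The following minimal Python sketch (not part of the analysis; the mass values in the accompanying checks are merely illustrative, in GeV) encodes it:

```python
import math

def p_cm(s, m1, m2):
    """Center-of-mass momentum of a two-body channel, cf. Eq. (eq:momentum);
    valid for real s at or above threshold, s >= (m1 + m2)**2."""
    return math.sqrt((s - (m1 + m2)**2) * (s - (m1 - m2)**2)) / (2.0 * math.sqrt(s))
```

For equal masses the expression reduces to $\sqrt{s/4-m^2}$, and it vanishes at threshold, which provides two simple checks of the implementation.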
The total compositeness coefficient, $X = \sum\limits_{i = 1}^{n} X_i$, is the sum over the partial compositeness coefficients $X_i$, and it must satisfy the condition $X\leq 1 $.
As discussed in more detail in Ref.~\cite{Guo:2015daa}, Eq.~\eqref{equ1} is properly applied to the calculation of $X_i$ for the channel $i$ under the condition that the resonance lies in an unphysical Riemann sheet (RS) that is connected with the physical RS along an interval of the real-$s$ axis (where $s$ is the total energy squared in the center of mass reference frame), lying above the threshold for the channel $i$.
Eq.~\eqref{equ1} is very similar to that for a bound state case, see e.g.~\cite{Oller:2017alp},
\begin{equation}
X_{i}=-\gamma_{i}^{2}\frac{\partial G_{i}(s)}{\partial s}\Big{|}_{s=s_{R}}\,,
\end{equation}
with the difference concerning the introduction of the absolute values.
It is necessary to distinguish the RS in which $s_{R}$ lies. According to the signs of the imaginary parts of $p_{1}$ and $p_{2}$ in the complex-$s$ plane, we can define the four different RSs as
\begin{equation}
\begin{aligned}\label{sheets}
\text{Sheet} ~\text{\Rmnum{1}}: ~~~\text{Im}p_{1}>0, ~\text{Im}p_{2}>0
\\
\text{Sheet} ~\text{\Rmnum{2}}: ~~~\text{Im}p_{1}<0, ~\text{Im}p_{2}>0
\\
\text{Sheet} ~\text{\Rmnum{3}}: ~~~\text{Im}p_{1}<0, ~\text{Im}p_{2}<0
\\
\text{Sheet} ~\text{\Rmnum{4}}: ~~~\text{Im}p_{1}>0, ~\text{Im}p_{2}<0
\end{aligned}
\end{equation}
The RSs $\text{\Rmnum{2}}$ and $\text{\Rmnum{3}}$ are connected to the physical RS $\text{\Rmnum{1}}$ along the real-$s$ axis, from the $\pi\pi$ threshold up to the $K\bar{K}$ threshold and from the $K\bar{K}$ threshold onwards, respectively.
The threshold of the $\pi\pi$ channel is distant from the resonance mass, while the resonance location is remarkably close to the $K\bar{K}$ threshold, cf. Eq.~\eqref{equ22} below.
Next, let us discuss how
to make the analytical extrapolation from the RS \Rmnum{1} to the RSs \Rmnum{2}, \Rmnum{3}, \Rmnum{4} in order to calculate the partial compositeness coefficient $X_i$, attending to the RS in which the pole lies.
We have to cross the cut of $G_{i}(s)$ and use its continuity property for real values of $s$ with $s> (m_{i,1} + m_{i,2})^2$, where $m_{i,1}$ and $m_{i,2}$ are the masses of the first and second particles in the $i_{\rm th}$ channel, respectively. Then, one has that \cite{Oller:1997ti}
\begin{equation}\label{equ20}
\begin{aligned}
G_{i}^{\text{\Rmnum{2}}}(s+i\epsilon)&=G_{i}^{\text{\Rmnum{1}}}(s-i\epsilon)=G_{i}^{\text{\Rmnum{1}}}(s+i\epsilon)-2i\text{Im}G_{i}^{\text{\Rmnum{1}}}(s+i\epsilon)\\
&=G_{i}^{\text{\Rmnum{1}}}(s+i\epsilon)
+\frac{i}{8\pi}\sqrt{\frac{[s+i\epsilon-(m_{1}+m_{2})^{2}][s+i\epsilon-(m_{1}-m_{2})^{2}]}{(s+i\epsilon)^2}}\,,
\end{aligned}
\end{equation}
where the square root is calculated in its first Riemann sheet, with the argument of the radicand taken between 0 and $2\pi$.
Eq.~\eqref{equ20} can then be analytically continued to any other complex value of $s$.
Thus, the RS \text{\Rmnum{1}} is obtained with $G_{1}^{\text{\Rmnum{1}}}(s), G_{2}^{\text{\Rmnum{1}}}(s)$;
the RS \Rmnum{2} corresponds to take $G_{1}^{\text{\Rmnum{2}}}(s)$, $G_{2}^{\text{\Rmnum{1}}}(s)$;
the RS \Rmnum{3} is obtained with $G_{1}^{\text{\Rmnum{2}}}(s)$, $G_{2}^{\text{\Rmnum{2}}}(s)$;
and the RS \Rmnum{4} implies $G_{1}^{\text{\Rmnum{1}}}(s)$, $G_{2}^{\text{\Rmnum{2}}}(s)$.
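The continuation rule of Eq.~\eqref{equ20} can be checked numerically. The sketch below uses the non-relativistic approximation of the $K\bar{K}$ loop near threshold (a real constant has been dropped, and the kaon mass value is only illustrative), and verifies that $G^{\text{\Rmnum{2}}}$ just above the real axis matches $G^{\text{\Rmnum{1}}}$ just below it:

```python
import numpy as np

mK = 0.496  # kaon mass in GeV (illustrative value)

def G_I(E):
    """First-sheet KKbar loop near threshold, non-relativistic approximation:
    G_I = -i p/(16 pi mK) with p = sqrt(mK E) and Im p >= 0 (physical sheet).
    Writing it as sqrt(-mK E)/(16 pi mK) places the cut on E > 0, as it must be."""
    return np.sqrt(-mK * np.asarray(E, dtype=complex)) / (16.0 * np.pi * mK)

def G_II(E):
    """Second sheet, adding the discontinuity as in Eq. (equ20), here in its
    non-relativistic form 2i Im G = i sqrt(mK E)/(8 pi mK)."""
    return G_I(E) + 1j * np.sqrt(mK * np.asarray(E, dtype=complex)) / (8.0 * np.pi * mK)

# Continuity across the cut: G_II just above the real axis matches G_I just below.
E0 = 0.05  # GeV above the KKbar threshold
mismatch = abs(G_II(E0 + 1e-9j) - G_I(E0 - 1e-9j))
```

Below threshold ($E<0$) the first-sheet function is real, which provides an additional sanity check.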
The crucial inputs in the evaluation of the coefficients $X_i$, Eq.~\eqref{equ1}, are the pole position and the coupling $|\gamma_i|$.
Regarding the pole parameters of the $f_{0}(980)$, we preferentially consider the results obtained by the dispersive analysis of Ref.~\cite{GarciaMartin:2011jx} based on the use of a set of Roy-like equations called the GKPY equations \cite{Garcia-Martin:2011iqs}.
The mass and width of the resonance calculated in Ref.~\cite{GarciaMartin:2011jx} are
\begin{equation}\label{equ22}
m_{R}=996\pm7\ \text{MeV}, ~~\Gamma_{R}= 50^{+20}_{-12}\ \text{MeV},
\end{equation}
which provides a rather accurate determination for the mass, while the width is affected by rather large errors.
When using this pole we consider the RS \Rmnum{2}, because the pole was found in that sheet in the original publication \cite{GarciaMartin:2011jx}.
In addition we consider the $f_0(980)$ pole position from Ref.~\cite{Guo:2012yt}.
This reference performs an exhaustive study of $S$ and $P$-wave meson-meson scattering by unitarizing one-loop amplitudes in $U(3)\otimes U(3)$ Chiral Perturbation Theory \cite{DiVecchia:1980yfw,Rosenzweig:1979ay,Witten:1980sp,Kawarabayashi:1980dp,Kawarabayashi:1980uh,Kaiser:2000gs,Herrera-Siklody:1996tqr} with explicit exchange of resonances. A large amount of experimental data on different reactions is reproduced and, at the same time, the consistency of the approach is checked by properly reproducing QCD constraints from spectral sum rules and semilocal duality as a function of the number of colors of QCD. The resulting pole of the $f_0(980)$, found also in the RS \Rmnum{2}, is
\begin{align}
\label{211107.1}
m_{R}=978^{+7}_{-11}\ \text{MeV}, ~~\Gamma_{R}= 58^{+18}_{-22}\ \text{MeV}\,.
\end{align}
Interestingly, for this case the mass of the resonance lies clearly below the $K\bar{K}$ threshold, while $m_R$ from Ref.~\cite{GarciaMartin:2011jx} in Eq.~\eqref{equ22} is above it. In this way we can explore the effect of the relative position of the resonance mass with respect to the two-kaon threshold.
Instead, the width of the $f_0(980)$ is rather similar in both cases.
In our considerations, we ignore the multiparticle $4\pi$ channel, whose contributions are very small up to 1~GeV, as obtained in phenomenological studies where it is considered \cite{Albaladejo:2008qa,Garcia-Martin:2011iqs}, or as estimated theoretically in studies based on unitarizing Chiral Perturbation Theory \cite{Salas-Bernardez:2020hua}. We notice as well that the electromagnetically driven two-photon decay channel has been ignored in our calculations. References~\cite{Zyla:2020zbs,Dai:2014zta,Oller:2007sh,Oller:2008kf} obtained that $\Gamma_{f_{0}(980)\rightarrow\gamma\gamma}=0.32\pm0.05~\text{MeV}$, which is a tiny fraction of the total width of the $f_0(980)$ and much smaller than the partial widths to $\pi\pi$ and $K\bar{K}$.
Then, we sensibly assume that the total compositeness coefficient of $f_{0}(980)$ can be expressed as the sum of the $S$-wave isoscalar $\pi\pi$ and $K\bar K$ channels,
\begin{equation}
\label{equ:square2}
X=X_{1}+X_{2}=|\gamma_{1}|^{2}\Big{|} \frac{\partial G_{1}(s)}{\partial s}\Big{|}_{s=s_{R}}+|\gamma_{2}|^{2}\Big{|} \frac{\partial G_{2}(s)}{\partial s}\Big{|}_{s=s_{R}}\,.
\end{equation}
In addition to Eq.~(\ref{equ:square2}), another main equation stems from imposing the saturation of the width of the $f_{0}(980)$. As the threshold of the $\pi\pi$ channel is distant from the resonance we use the standard formula for the partial-decay width of the $f_0(980)$ to $\pi\pi$,
\begin{equation}\label{equ33}
\Gamma_{1}=\frac{|\gamma_{1}|^{2}p_{1}(m_{R}^{2})}{8\pi m^{2}_{R}}\,,
\end{equation}
where $p_i$ is the momentum in the rest frame of the resonance, cf.~\eqref{eq:momentum} with $s=m_{R}^{2}$.
However, the $K\bar{K}$ threshold is very close to the resonance mass and the effect of the finite width of the $f_{0}(980)$ (around 50 \text{MeV}) in the $K\bar{K}$ phase space is not negligible.
Notice that even the lower limit of the $f_0(980)$ mass within its uncertainty region in Eq.~\eqref{equ22} is smaller than the $K\bar{K}$ threshold. However, since the uncertainty in the mass is much smaller than the width of the resonance, this fact is easily overturned by the mass distribution of the resonance, and it does not prevent the actual decay of the $f_0(980)$ to $K\bar{K}$ even when the resonance mass is below the $K\bar{K}$ threshold.
In these regards, we consider a Lorentzian mass distribution for the resonance, and the partial-decay width is written as
\begin{equation}\label{equ:square}
\Gamma_{2}=\frac{|\gamma_{2}|^{2}}{16\pi^{2}} \int_{m_{1}+m_{2}}^{+ \infty}dW\frac{p_2(W^{2})}{W^{2}}\frac{\Gamma_{R}}{(m_{R}-W)^{2}+\Gamma_{R}^2/4}\,.
\end{equation}
In the limit $\Gamma_{R}\rightarrow0$, Eq.~(\ref{equ:square}) becomes the standard formula for the decay width.
In a practical calculation, the upper limit of integration ($+\infty$) is replaced by $m_{R}+n\Gamma_{R}$.
For example, in Ref.~\cite{Kang:2016ezb}, the value of $n = 8$ is chosen for the $Z_{b}(10610)/Z_{b}(10650)$ by reproducing the experimental width;
in Ref.~\cite{Guo:2020vmu}, $n = 10$ is adopted for the $Z_{c}(3900)$, $X(4020)$ and $Z_{c}(3985)$ particles.
However, in Ref.~\cite{Meissner:2015mza}, dedicated to the study of the $\chi_{c1}p$ compositeness of the $P_{c}(4450)$, the upper limit of integration used was $m_{R}+2\Gamma_{R}$.
The region for $n=2$ is usually thought to be a reasonable cut in the resonance region \cite{Meissner:2015mza,Dias:2021upl}.
For our consideration, we restrict the upper integration limit in the resonance region to $m_{R}+2\Gamma_{R}$ (which comprises the resonance signal as it can be seen in Fig.~\ref{picflatt}, introduced in Sec.~\ref{sec.201009.3} within the context of a Flatt\'e parameterization).
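The sensitivity of Eq.~\eqref{equ:square} to the cutoff $n$ can be explored with a simple numerical integration. The Python sketch below (a plain trapezoidal rule; all parameter values are illustrative and in GeV) also checks that in the narrow-width limit the smeared integral reduces to the standard two-body width formula, as stated after Eq.~\eqref{equ:square}:

```python
import math

def p2(W, m1, m2):
    # CM momentum at total energy W (GeV), cf. Eq. (eq:momentum) with s = W**2
    s = W * W
    return math.sqrt((s - (m1 + m2)**2) * (s - (m1 - m2)**2)) / (2.0 * W)

def gamma2(g2sq, mR, GR, mK, n, steps=200000):
    """Lorentzian-smeared KKbar partial width, Eq. (equ:square),
    integrated up to mR + n*GR with a trapezoidal rule."""
    a, b = 2.0 * mK, mR + n * GR
    h = (b - a) / steps
    total = 0.0
    for i in range(steps + 1):
        W = a + i * h
        f = p2(W, mK, mK) / W**2 * GR / ((mR - W)**2 + GR**2 / 4.0)
        total += f if 0 < i < steps else 0.5 * f
    return g2sq / (16.0 * math.pi**2) * total * h

# Narrow-width check: for GR -> 0 with mR above threshold, the Lorentzian
# integrates to 2*pi and the result tends to |gamma2|^2 p2(mR)/(8 pi mR^2).
val = gamma2(1.0, 1.1, 0.001, 0.496, 100)
narrow = 1.0 * p2(1.1, 0.496, 0.496) / (8.0 * math.pi * 1.1**2)
```

For a realistic width of about 50~MeV the same routine makes the mild $n$ dependence explicit, which is the motivation for fixing $n=2$ in the main text.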
Another aspect to take into account is the RS in which the pole lies, because the kaon momentum in the center-of-mass reference frame has opposite signs in the RSs \Rmnum{2} and \Rmnum{3} at the pole position.
Notice that for the latter RS the kaon momentum has the standard sign in the lower complex-energy half-plane, corresponding to $m_R-i\Gamma_1/2-i\Gamma_{2}/2$, while for the former one has instead $m_R-i\Gamma_1/2+i\Gamma_{2}/2$.
Because of this change of sign in the kaon momentum, the saturation of the resonance width obtained from the pole position varies: $\Gamma_2$ adds to $\Gamma_1$ in the standard way when the pole lies in the RS \Rmnum{3}, but $\Gamma_2$ {\it subtracts from} $\Gamma_1$ when the pole lies in the RS \Rmnum{2}. As a result, the total decay width of the $f_{0}(980)$ is then
\begin{equation}
\label{equ:square1}
\Gamma_{R}=|\gamma_{1}|^{2}\frac{p_{1}(m_{R}^{2})}{8\pi m^{2}_{R}}\pm \frac{|\gamma_{2}|^{2}}{16\pi^{2}} \int_{m_{1}+m_{2}}^{m_{R}+2\Gamma_{R} }dW\frac{p_{2}(W^{2})}{W^{2}}\frac{\Gamma_{R}}{(m_{R}-W)^{2}+\Gamma_{R}^2/4}\, ,
\end{equation}
for the pole in the RS \Rmnum{3} or \Rmnum{2}, respectively.
Another interesting consequence of this discussion on the RS in which the pole lies is the simple observation that for a pole in the RS \Rmnum{2} the combination $m_R-i\Gamma_1/2+i\Gamma_2/2$ can be rewritten as $m_R-i(\Gamma_1-2\Gamma_2)/2-i\Gamma_2/2$, so that now the decay width to $K\bar{K}$ appears with the right sign in the resonance propagator for an interpretation as a decay width, while the decay width to the lighter channel is $\Gamma_1-2\Gamma_2$. This result can also be obtained in a more straightforward mathematical way in terms of the branching ratio to the first channel, $r_{\rm exp}$, by noticing that
\begin{align}
\label{211123.2}
r_{\rm exp}&=1-\frac{\Gamma_2}{\Gamma_R}=\frac{\Gamma_R-\Gamma_2}{\Gamma_R}=\frac{\Gamma_1-2\Gamma_2}{\Gamma_R}\,,~\text{RS \Rmnum{2}}~,
\end{align}
where we have used again that $\Gamma_R=\Gamma_1-\Gamma_2$. However, for a pole in the RS \Rmnum{3} one has the standard result
\begin{align}
\label{211123.1}
r_{\rm exp}&=1-\frac{\Gamma_2}{\Gamma_R}=\frac{\Gamma_1}{\Gamma_R}\,,~\text{RS \Rmnum{3}}~.
\end{align}
Nonetheless, in what follows, we keep the usual notation of directly calling $\Gamma_i$ decay widths, though for $i=1$ and the pole in the RS \Rmnum{2} the actual decay width to the lighter channel does not coincide with $\Gamma_1$, as just discussed. For this reason we denote by $\Gamma_{\pi\pi}$ or $\Gamma(f_0(980)\to\pi\pi)$ the physical decay width of the $f_0(980)$ to $\pi\pi$, and similarly we use $\Gamma_{\pi\eta}$ or $\Gamma(a_0(980)\to\pi\eta)$ for the physical partial-decay width of the $a_0(980)$ to its lighter channel. For $K\bar{K}$ we can use indistinctly $\Gamma_2$ or $\Gamma_{K\bar{K}}$, since they coincide; they have the same meaning as $\Gamma(f_0(980)\to K\bar{K})$ or $\Gamma(a_0(980)\to K\bar{K})$ in a clearer notation.
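The sheet dependence of $r_{\rm exp}$ in Eqs.~\eqref{211123.1} and \eqref{211123.2} amounts to simple arithmetic. A minimal Python sketch, with purely illustrative width values in the checks (GeV):

```python
def r_exp(Gamma1, Gamma2, sheet):
    """Physical branching ratio to the lighter channel, Eqs. (211123.1)-(211123.2).
    Gamma1, Gamma2 are the width parameters entering Eq. (equ:square1);
    the total width is Gamma1 + Gamma2 (RS III) or Gamma1 - Gamma2 (RS II)."""
    if sheet == "III":
        return Gamma1 / (Gamma1 + Gamma2)          # standard case
    if sheet == "II":
        return (Gamma1 - 2.0 * Gamma2) / (Gamma1 - Gamma2)
    raise ValueError(sheet)
```

In both cases the function reproduces $r_{\rm exp}=1-\Gamma_2/\Gamma_R$; the difference lies entirely in how $\Gamma_R$ is built from $\Gamma_1$ and $\Gamma_2$.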
These points are further elaborated when considering a Flatté parameterization for the $f_0(980)$ and $a_0(980)$ resonances in Sec.~\ref{sec.201009.3}.
Combining Eqs.~(\ref{equ:square2}) and (\ref{equ:square1}) allows us to solve for $|\gamma_{1}|$ and $|\gamma_{2}|$ in terms of the total compositeness and width. Then we can obtain the partial-decay width $\Gamma_i$ and the individual compositeness coefficient $X_{i}$ for each channel.\footnote{The fact that the width of the $f_0(980)$ is substantially larger than the difference between its mass and the $K\bar{K}$ threshold allows one to apply in a reasonable way the standard formula Eq.~\eqref{equ1} for the partial compositeness $X_2$, even if the $f_0(980)$ pole lies in the 2nd (3rd) RS above (below) the {\it nearby} $K\bar{K}$ threshold. The mass distribution of the resonance smooths the sharp condition, alluded to after Eq.~\eqref{eq:momentum}, on the relative position between the resonance mass and the $K\bar{K}$ threshold when their distance is much smaller than the $f_0(980)$ width.\label{foot.211008.1}}
We consider several choices for $X$ in Eq.~\eqref{equ:square2}, typically from 0.2 up to 0.8 in steps of 0.2.
The information for the branching ratio $r_{\rm exp}=\Gamma(f_{0}(980)\rightarrow\pi\pi)/[\Gamma(f_{0}(980)\rightarrow \pi\pi)+\Gamma(f_{0}(980)\rightarrow K\bar{K})]$ can also be used together with the total width $\Gamma_R$ to fix $|\gamma_1|$ and $|\gamma_2|$.
The results following one way or the other are organized in Secs.~\ref{sec.201008.1} and \ref{sec.211126.1}, respectively.
They are also applied in an analogous way to the isovector scalar $a_0(980)$, involving the scattering channels $\pi\eta$ (1) and $K\bar{K}$ (2), with $r_{\rm exp}=\Gamma(a_{0}(980)\rightarrow\pi\eta)/[\Gamma(a_{0}(980)\rightarrow \pi\eta)+\Gamma(a_{0}(980)\rightarrow K\bar{K})]$ in that case.
\section{Flatt\'e parameterization and the spectral density of a bare state}
\label{sec.201009.3}
One disadvantage of the approach followed in Sec.~\ref{sec.211010.1} is the necessity to assume a value of $n$ for the upper limit of integration in Eq.~\eqref{equ:square} for evaluating the partial-decay width of the resonance into $K\bar{K}$, that is, $\Gamma_2$.
This can be overcome by using a Flatt\'e parameterization \cite{Flatte:1976xu}, without increasing the number of the input parameters needed to calculate the couplings $|\gamma_i|$, partial-decay widths $\Gamma_i$ and partial compositeness coefficients $X_i$.
Since the $f_0(980)$ and $a_0(980)$ lie very close to the $K\bar{K}$ threshold a Flatt\'e parameterization is then specially suitable \cite{Baru:2003qq}.
Nonetheless, as discussed in Refs.~\cite{Baru:2010ww,Kang:2016jxw}, there is also a limitation in the use of a Flatté parameterization as it assumes that the corresponding $K\bar{K}$ partial-wave amplitude has no zero in the near-threshold region.
We assume that this is the case and proceed with the rather intuitive picture offered by a Flatté parameterization of dressing a bare resonance propagator, $1/D(E)$, by the self energy due to the intermediate channels 1 and 2,
\begin{align}
\label{210703.1}
D(E)&=E-E_f+i\frac{\widetilde{\Gamma}_1}{2}+\frac{i}{2}g_2\sqrt{m_K E}~.
\end{align}
Here $E$ is the total center of mass energy measured with respect to the two-kaon threshold, $E\equiv \sqrt{s}-2m_K$,
$E_f$ is the bare mass of the resonance plus the contributions at around the $K\bar{K}$ threshold from the real parts (which are taken as constants) of the meson-meson loops contributing to the resonance self energy. In addition, $g_i$ is the {\it bare} coupling squared of the resonance to the $i_{\rm th}$ channel,
such that the {\it bare} width $\widetilde\Gamma_1$ to channel 1 is written in terms of $g_1$ as
\begin{align}
\label{211009.2}
\widetilde{\Gamma}_1&=\frac{p_1(m_R)g_1}{8\pi m_R^2}~.
\end{align}
The pole position in the variable $E$ is called $E_R=M_R-i\Gamma_R/2$, {\it with $M_R$ the mass of the resonance with respect to $2m_K$}, $M_R=m_R-2m_K$.
The Flatt\'e parameterization contains as free parameters $E_f$, $\widetilde{\Gamma}_1$ and $g_2$ that can be fixed in terms of the mass
and width of the resonance, that is, by knowing its pole position, and from the knowledge either of the branching ratio $r_{\rm exp}$ to the lighter channel or the total compositeness $X$.
To calculate the resonance pole position we must look for the zeroes of
Eq.~\eqref{210703.1}, $D(E_R)=0$,
\begin{align}
\label{210703.2}
E_R-E_f+{\frac{i}{2}}\,\widetilde{\Gamma}_1=-\frac{i}{2}g_2\sqrt{m_K E_R}~.
\end{align}
Taking the square of both sides of the previous expression and solving the resulting quadratic algebraic equation,
we have the following solutions for the roots
\begin{align}
\label{210703.4}
E_R&=E_f-\frac{1}{8}m_Kg_2^2-\frac{i}{2}\widetilde{\Gamma}_1 +\sigma\sqrt{\frac{m_Kg_2^2}{4}}
\sqrt{\frac{m_Kg_2^2}{16}-E_f+\frac{i}{2}\widetilde{\Gamma}_1}~,
\end{align}
with $\sigma=\pm 1$ in order to keep track of the two different solutions.
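The two roots in Eq.~\eqref{210703.4} can be verified numerically. The Python sketch below (with illustrative, not fitted, Flatt\'e parameters in GeV) checks that both roots solve the squared version of Eq.~\eqref{210703.2}, and that each one solves the unsquared equation for one branch of the square root:

```python
import numpy as np

# Illustrative Flatte parameters in GeV (not fitted values).
mK, Ef, Gt1, g2 = 0.496, 0.010, 0.030, 0.5

A = Ef - 0.5j * Gt1                                     # E_f - i*Gamma1_tilde/2
inner = np.sqrt(mK * g2**2 / 16.0 - Ef + 0.5j * Gt1)    # Im sqrt >= 0 for these values
residues, mismatches = [], []
for sigma in (+1, -1):
    # Eq. (210703.4)
    ER = Ef - mK * g2**2 / 8.0 - 0.5j * Gt1 + sigma * 0.5 * np.sqrt(mK * g2**2) * inner
    # Both roots solve the squared form of Eq. (210703.2) ...
    residues.append(abs((ER - A)**2 + 0.25 * g2**2 * mK * ER))
    # ... and each solves the unsquared equation for one branch of sqrt(mK*E_R).
    mismatches.append(min(abs(ER - A + 0.5j * g2 * s * np.sqrt(mK * ER)) for s in (+1, -1)))
```

The branch ambiguity visible in the last line is the numerical counterpart of the $\sigma$ bookkeeping introduced above.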
Later on we show that $\sigma=+1(-1)$ corresponds to the pole
lying in the RS \Rmnum{2} (\Rmnum{3}). For the calculation of the square root in the previous equation (taken such that $\Im\sqrt{z}\geq 0$, $z\in \mathbb{C}$) one needs to distinguish two cases according to the sign of
$m_K g_2^2/16-E_f$:
\begin{align}
\label{210703.5}
& \text{i})~\frac{m_Kg_2^2}{16}-E_f > 0~,\\
E_R&=E_f-\frac{i}{2}\widetilde\Gamma_1-\frac{m_K g_2^2}{8}+\frac{\sigma}{2}\sqrt{m_K g_2^2}
\left(\left(\frac{m_Kg_2^2}{16}-E_f\right)^2+\frac{\widetilde\Gamma_1^2}{4}\right)^\frac{1}{4}
\exp{\left(-\frac{i}{2}\arctan\frac{\widetilde\Gamma_1/2}{E_f-m_Kg_2^2/16}\right)}~.\nonumber
\end{align}
\begin{align}
\label{210703.6}
& \text{ii)}~\frac{m_Kg_2^2}{16}-E_f<0~,\\
E_R&=E_f-\frac{i}{2}\widetilde\Gamma_1-\frac{m_K g_2^2}{8}+\frac{\sigma}{2}\sqrt{m_Kg_2^2}
\left(\left(\frac{m_Kg_2^2}{16}-E_f\right)^2+\frac{\widetilde\Gamma_1^2}{4}\right)^\frac{1}{4}
\exp{\frac{i}{2}\left(\pi-\arctan\frac{\widetilde\Gamma_1/2}{E_f-m_Kg_2^2/16}\right)}~.\nonumber
\end{align}
In what follows we only consider the case i), because for both i) and ii) one obtains the same equations relating $E_f$, $g_2$ and $\widetilde\Gamma_1$ with the inputs $M_R$, $\Gamma_R$ and $r_{\rm exp}$ or $X$.
We introduce the auxiliary angle $\phi$ defined by
\begin{align}
\label{210704.1}
\phi=\arctan\frac{\widetilde\Gamma_1/2}{E_f-m_K g_2^2/16}~.
\end{align}
Therefore, $E_R$ can be written as
\begin{align}
\label{210704.2}
E_R&=E_f-\frac{m_K g_2^2}{8}-\frac{i}{2}\widetilde\Gamma_1+\frac{\sigma}{2}\sqrt{m_Kg_2^2}\left(
\left(E_f-\frac{m_Kg_2^2}{16}\right)^2+\frac{\widetilde\Gamma_1^2}{4}\right)^\frac{1}{4}
(\cos\frac{\phi}{2}-i\sin\frac{\phi}{2})~.
\end{align}
Attending to the real and imaginary parts in this equation we have that
\begin{align}
\label{210704.3}
M_R&=E_f-\frac{m_K g_2^2}{8}+\sigma \frac{\sqrt{m_K g_2^2}}{2} \left(\left(E_f-\frac{m_Kg_2^2}{16}\right)^2+\frac{\widetilde\Gamma_1^2}{4}\right)^\frac{1}{4}\cos\frac{\phi}{2}~,\\
\Gamma_R&=\widetilde\Gamma_1+\sigma\sqrt{m_K g_2^2}\left(\left(E_f-\frac{m_K g_2^2}{16}\right)^2+\frac{\widetilde\Gamma_1^2}{4}\right)^\frac{1}{4} \sin\frac{\phi}{2}~.\nonumber
\end{align}
Taking into account the definition of $\phi$, one also has that
\begin{align}
\label{210704.4}
\left(E_f-\frac{m_K g_2^2}{16}\right)^2+\frac{\widetilde\Gamma_1^2}{4}=
\left(E_f-\frac{m_K g_2^2}{16}\right)^2\left(1+\tan^2\phi\right)
=\left(E_f-\frac{m_K g_2^2}{16}\right)^2\frac{1}{\cos^2\phi}~,
\end{align}
and
\begin{align}
\label{210704.6}
E_f-\frac{m_K g_2^2}{16}&=\frac{\widetilde\Gamma_1}{2}\cot\phi~.
\end{align}
Substituting these two equalities into Eq.~\eqref{210704.3}, with $\phi<0$ for case i), the latter equation becomes
\begin{align}
\label{210704.7}
M_R&=-\frac{m_K g_2^2}{16}+\frac{\widetilde\Gamma_1}{2}\cot\phi+\frac{\sigma}{4}\sqrt{m_K g_2^2\widetilde\Gamma_1|\cot\frac{\phi}{2}|}~,\\
\Gamma_R&=\widetilde\Gamma_1- \frac{\sigma}{2}\sqrt{m_K g_2^2\widetilde\Gamma_1 |\tan\frac{\phi}{2}|}~.\nonumber
\end{align}
From this last equation it follows that
\begin{align}
\label{210704.9}
\frac{\sigma}{2}\sqrt{m_K g_2^2\widetilde{\Gamma}_1}&= (\widetilde{\Gamma}_1-\Gamma_R)\sqrt{|\cot\frac{\phi}{2}|}~.
\end{align}
When this is taken into Eq.~\eqref{210704.7} we can write $M_R$ as
\begin{align}
\label{210705.1b}
M_R&=\frac{\Gamma_R^2}{4\widetilde{\Gamma}_1}\cot\frac{\phi}{2}\left[1-\left(\frac{\widetilde{\Gamma}_1}{\Gamma_R}\tan\frac{\phi}{2}\right)^2\right]~,\\
\Gamma_R&=\widetilde{\Gamma}_1- \frac{\sigma}{2}\sqrt{m_K g_2^2\widetilde{\Gamma}_1 |\tan\frac{\phi}{2}|}~.\nonumber
\end{align}
The equation for $M_R$ is of the form,
\begin{align}
\label{210801.6}
&x-\frac{1}{x}=\frac{4 M_R}{\Gamma_R}~,\\
&x\equiv \frac{\Gamma_R}{\widetilde{\Gamma}_1}\cot\frac{\phi}{2}<0~,\nonumber
\end{align}
and its solution for $x<0$ is
\begin{align}
\label{210801.7}
x&=\frac{2 M_R}{\Gamma_R}-\sqrt{1+\left(\frac{2M_R}{\Gamma_R}\right)^2}~.
\end{align}
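Eq.~\eqref{210801.7} can be checked directly, as in this small sketch with an illustrative pole position (GeV):

```python
import math

MR, GR = 0.004, 0.050   # illustrative pole position relative to the KKbar threshold

b = 2.0 * MR / GR
x = b - math.sqrt(1.0 + b * b)   # Eq. (210801.7)
```

The checks confirm that $x<0$, as required for the case considered, and that $x-1/x=4M_R/\Gamma_R$ is satisfied.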
After this is substituted in the expression for $\Gamma_R$ in Eq.~\eqref{210705.1b}, we can isolate the bare partial-decay width to the first channel,
and then the bare branching ratio $r\equiv \widetilde{\Gamma}_1/\Gamma_R$ is given by
\begin{align}
\label{210801.9}
r&=1+\frac{\sigma g_2}{2}\sqrt{\frac{m_K}{u-2M_R}}~,\\
u&\equiv (4M_R^2+\Gamma_R^2)^{1/2}~.\nonumber
\end{align}
Once we know $r$ we can also determine $\cot\frac{\phi}{2}$ by using the definition of $x$ and its solution in Eq.~\eqref{210801.7},
\begin{align}
\label{210801.10}
\cot\frac{\phi}{2}&=r x=
\frac{1}{2\Gamma_R}
\left(4M_R-2u
-\sigma g_2 \sqrt{m_K (u-2M_R)}\right)~.
\end{align}
Let us denote by $\beta$ the residue of $1/D(E)$ at the resonance pole,
\begin{align}
\label{210913.2}
\beta&= \left|\lim_{E\to E_R}\frac{E-E_R}{D(E)}\right|=\left|\frac{1}{1+\frac{i g_2}{4}\sqrt{\frac{m_K}{E_R}} }\right|=\frac{\sqrt{8u}}{\left(g_2^2 m_K+8 u+ 4 \sigma g_2 \sqrt{m_K(u-2M_R)}\right)^{1/2}}~.
\end{align}
This expression can be obtained by substituting $\sqrt{E_R}$ from Eq.~\eqref{210703.2},
the relation between $E_f$ and $\tan\phi$, cf. Eq.~\eqref{210704.6}, and finally
the expression for $\tan\phi$ derived from that of $\cot\frac{\phi}{2}$ given in Eq.~\eqref{210801.10}.
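The equality between the residue of $1/D(E)$ and $1/|D'(E_R)|$ underlying Eq.~\eqref{210913.2} can be verified numerically, as in the following sketch (illustrative parameters in GeV; the branch of the square root at the pole is selected numerically):

```python
import numpy as np

mK, Ef, Gt1, g2 = 0.496, 0.010, 0.030, 0.5   # illustrative values, GeV units

# Pole from Eq. (210703.4); the sigma = -1 branch is taken here as an example.
inner = np.sqrt(mK * g2**2 / 16.0 - Ef + 0.5j * Gt1)
ER = Ef - mK * g2**2 / 8.0 - 0.5j * Gt1 - 0.5 * np.sqrt(mK * g2**2) * inner

# Pick the branch sign s_b of sqrt(mK*E) for which the unsquared D(E_R) = 0 holds.
s_b = min((+1, -1),
          key=lambda s: abs(ER - Ef + 0.5j * Gt1 + 0.5j * g2 * s * np.sqrt(mK * ER)))

# |residue| of 1/D at the pole from the analytic derivative D'(E_R) ...
beta_deriv = abs(1.0 / (1.0 + 0.25j * g2 * s_b * np.sqrt(mK * ER) / ER))
# ... versus the defining limit |(E - E_R)/D(E)| evaluated close to the pole.
E = ER + 1e-8
D = E - Ef + 0.5j * Gt1 + 0.5j * g2 * s_b * np.sqrt(mK * E)
beta_lim = abs((E - ER) / D)
```

Both evaluations agree to high accuracy, which is the statement used in the first equality of Eq.~\eqref{210913.2}.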
The {\it renormalized or dressed} coupling squared $|\gamma_i|^2$ is related to the bare one $g_i$ by evaluating the residue of the elastic
scattering amplitude for channel $i$, $g_i/D(E)$. In terms of $\beta$, we have the result
\begin{align}
\label{210801.12}
|\gamma_1|^2=g_1\beta~,\\
|\gamma_2|^2=32\pi m_K^2g_2\beta~,\nonumber
\end{align}
with the numerical factor in front of $g_2\beta$
needed to have the same normalization as in Sec.~\ref{sec.211010.1}.
To the best of our knowledge, the difference between the bare and dressed couplings in a Flatt\'e parameterization has not been clearly discussed before in the literature, and it has important implications. E.g.,
this is one of the reasons why the values for $\widetilde{\Gamma}_{\pi\pi}$ collected in Table~2 of Ref.~\cite{Baru:2003qq} for the $f_0(980)$ are
typically much bigger than 100~MeV.
A similar comment can also be made for most of the entries of $\widetilde{\Gamma}_{\pi\eta}$ in Table~1 of the same reference regarding the $a_0(980)$.\footnote{The other reason applies to those poles in the RS \Rmnum{2}, because then the physical partial-decay width to the lighter channel is $\Gamma_1-2\Gamma_2$, which is smaller than $\Gamma_1$, cf.~Eq.~\eqref{211123.2} and the discussion earlier in this section.} Indeed, this can be a source of confusion in the literature. In this respect, we notice that Ref.~\cite{GarciaMartin:2011jx} compares their $\pi\pi$ $S$-wave residue with bare couplings used in energy-dependent-width Breit-Wigner or Flatt\'e parameterizations, without considering the actual residue at the resonance pole position of the parameterization.
In order to obtain $g_2$ we need another input, for which we take either the physical branching ratio $r_{\rm exp}$ or the total compositeness $X$.
For the latter case we then need the expressions for calculating $X_1$ and $X_2$, with $X=X_1+X_2$. Recalling Eq.~\eqref{equ1}, we have for $X_1$,
\begin{align}
\label{211012.3}
X_1&=\gamma_1^2 \left|\frac{\partial G_1}{\partial s}\right|_{s=s_R}
=\frac{8\pi m_R^2\Gamma_R}{p_1(m_R)} r \beta \left|\frac{\partial G_1}{\partial s}\right|_{s=s_R}~,
\end{align}
with $r$ and $\beta$ given in terms of $g_2$ and the pole parameters in Eqs.~\eqref{210801.9} and \eqref{210913.2}, respectively.
For the calculation of $X_2$ a simpler algebraic formula can be obtained if we use non-relativistic kinematics to evaluate the derivative $\partial G_2(s)/\partial s$ at $s=s_R$, taking advantage of the fact that the pole lies in the vicinity of the $K\bar{K}$ threshold. Then, $G_2(s)$ is a constant plus $-i \sqrt{m_K E}/(16\pi m_K)+{\cal O}(E/m_K)$, and its derivative with respect to $s$ is
\begin{align}
\label{211106.1}
\left.\frac{\partial G_2(s)}{\partial s}\right|_{s_R}=
\frac{-i}{128\pi m_K^{3/2}\sqrt{E_R}}+{\cal O}(1)~.
\end{align}
We then multiply this derivative by $32\pi m_K^2 g_2 \beta$ and the final expression that results is
\begin{align}
\label{210801.18}
X_2&=\frac{\sqrt{m_K}g_2}{\left(g_2^2m_K+8u+4\sigma g_2 \sqrt{m_K(u-2M_R)}\right)^{\frac{1}{2}}}~.
\end{align}
The equation to be solved to obtain $g_2$ given $X$ is $X_1+X_2=X$.
Nonetheless, in the numerical results shown below we calculate $X_2$ making use of relativistic kinematics, with differences of around $10$--$15\%$ compared with the values obtained when using the non-relativistic Eq.~\eqref{210801.18}.
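To make the chain of definitions concrete, the following sketch (hypothetical pole parameters in MeV; the function names are ours) assembles $X_2$ step by step from $\beta$, the dressed coupling $|\gamma_2|^2=32\pi m_K^2 g_2\beta$ of Eq.~\eqref{210801.12} and the non-relativistic derivative of $G_2$ in Eq.~\eqref{211106.1}, checks it against the closed form of Eq.~\eqref{210801.18}, and inverts $X_2(g_2)=X$ by bisection, neglecting $X_1$ purely for illustration (the full procedure solves $X_1+X_2=X$):

```python
import cmath
import math

def X2_closed(g2, M_R, Gamma_R, m_K, sigma):
    """Non-relativistic K Kbar compositeness, Eq. (210801.18)."""
    u = math.sqrt(4.0*M_R**2 + Gamma_R**2)
    D = g2**2*m_K + 8.0*u + 4.0*sigma*g2*math.sqrt(m_K*(u - 2.0*M_R))
    return math.sqrt(m_K)*g2/math.sqrt(D)

def X2_chain(g2, M_R, Gamma_R, m_K, sigma):
    """Same quantity assembled step by step:
    |gamma_2|^2 = 32 pi m_K^2 g2 beta, times |dG2/ds| at the pole."""
    u = math.sqrt(4.0*M_R**2 + Gamma_R**2)
    D = g2**2*m_K + 8.0*u + 4.0*sigma*g2*math.sqrt(m_K*(u - 2.0*M_R))
    beta = math.sqrt(8.0*u)/math.sqrt(D)                     # Eq. (210913.2)
    E_R = complex(M_R, -0.5*Gamma_R)
    dG2 = abs(-1j/(128.0*math.pi*m_K**1.5*cmath.sqrt(E_R)))  # Eq. (211106.1)
    return 32.0*math.pi*m_K**2*g2*beta*dG2

def g2_from_X(X, M_R, Gamma_R, m_K, sigma, g2_max=2.0):
    """Bisection for X2_closed(g2) = X on the monotonic branch [0, g2_max]."""
    lo, hi = 0.0, g2_max
    for _ in range(100):
        mid = 0.5*(lo + hi)
        if X2_closed(mid, M_R, Gamma_R, m_K, sigma) < X:
            lo = mid
        else:
            hi = mid
    return 0.5*(lo + hi)
```

The bisection is just one convenient root finder; any solver applied to $X_1+X_2-X=0$ would do in the realistic case.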
When $r_{\rm exp}$ is taken as input one has to distinguish whether the pole lies in the RS \Rmnum{2} or the RS \Rmnum{3}, due to the change of sign in the analytical extrapolation of $\sqrt{E}$ in $D(E)$, Eq.~\eqref{210703.1}, needed to reach the pole position.
For the pole in the RS \Rmnum{3}, we have the following straightforward relation between the physical $r_{\rm exp}$ and the bare $r$,
\begin{align}
\label{211012.1}
r_{\rm exp}&=r \beta=\frac{\sqrt{2u}\left(2\sqrt{u-2M_R}+g_2\sigma\sqrt{m_K}\right)}{\sqrt{u-2M_R}\left(g_2^2 m_K+8u+4\sigma g_2 \sqrt{m_K(u-2M_R)}\right)^{1/2}}~.
\end{align}
This is a quadratic equation for $g_2$ that can be easily solved. Requiring that $g_2=0$ for $r_{\rm exp}=1$ (which implies that there is no resonance decay at all to the $K\bar{K}$ channel), there is only one acceptable solution, given by
\begin{align}
\label{211012.2}
g_2&=\frac{2\left(-\Gamma_R r_{\exp}+\sqrt{u-2M_R}\sqrt{2M_R r_{\rm exp}^2+(2-r_{\rm exp}^2)u}\right)}{\sqrt{m_K\left(2M_R r_{\rm exp}^2+(2-r_{\rm exp}^2)u\right)}}~.
\end{align}
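As a cross-check (with hypothetical pole parameters; $\sigma=-1$ corresponds to the RS \Rmnum{3}, as shown at the end of this section), one can verify numerically that this solution indeed satisfies Eq.~\eqref{211012.1} and vanishes at $r_{\rm exp}=1$:

```python
import math

def g2_from_rexp_rs3(r_exp, M_R, Gamma_R, m_K):
    """Eq. (211012.2): bare coupling from the physical ratio, RS III pole."""
    u = math.sqrt(4.0*M_R**2 + Gamma_R**2)
    Q = 2.0*M_R*r_exp**2 + (2.0 - r_exp**2)*u
    return 2.0*(-Gamma_R*r_exp
                + math.sqrt(u - 2.0*M_R)*math.sqrt(Q))/math.sqrt(m_K*Q)

def r_beta(g2, M_R, Gamma_R, m_K, sigma):
    """Right-hand side of Eq. (211012.1)."""
    u = math.sqrt(4.0*M_R**2 + Gamma_R**2)
    a = math.sqrt(u - 2.0*M_R)
    D = g2**2*m_K + 8.0*u + 4.0*sigma*g2*math.sqrt(m_K)*a
    return math.sqrt(2.0*u)*(2.0*a + sigma*g2*math.sqrt(m_K))/(a*math.sqrt(D))
```

Feeding the output of `g2_from_rexp_rs3` back into `r_beta` with $\sigma=-1$ recovers the input $r_{\rm exp}$, confirming that the quoted root is the physical one.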
For a pole in the RS \Rmnum{2}, when moving to the complex-$E$ plane with negative imaginary part the square root $\sqrt[II]{E}$ has a positive imaginary part and a negative real one, changing sign with respect to its value in the RS \Rmnum{3}, because there $\sqrt[III]{E}=-\sqrt[II]{E}$ (the momentum $p_2$ changes sign between the two sheets).
As a result, the last term in Eq.~\eqref{210703.1}, responsible for the width to $K\bar{K}$, does not add to but subtracts from $\widetilde{\Gamma}_1$.
Then, for a physical interpretation of the different terms in this equation it is convenient to rewrite it as
\begin{align}
\label{211109.1}
D(E)&=E-E_f+\frac{i}{2}\left[\widetilde{\Gamma}_1+2g_2\sqrt[II]{m_K E}\right]-\frac{i}{2}g_2\sqrt[II]{m_K E}\\
&=E-E_f+\frac{i}{2}\left[\widetilde{\Gamma}_1-2g_2\sqrt[III]{m_K E}\right]+\frac{i}{2}g_2\sqrt[III]{m_K E}
~.\nonumber
\end{align}
Let us notice that the real part of $\sqrt[III]{M_R-i\,\Gamma_R/2}$ is positive and then
the imaginary part of the last term in Eq.~\eqref{211109.1} appears with the same sign as $\widetilde{\Gamma}_1$.
From this equation we then deduce that, once the bare couplings are dressed, cf. Eq.~\eqref{210801.12}, the partial-decay width to the lighter channel really observed in an experiment is not $\Gamma_1$ but $\Gamma_1-2\Gamma_2$, while the total decay width $\Gamma_R$ is $\Gamma_1-\Gamma_2$ for the RS \Rmnum{2} case. Therefore, when
taking $r_{\rm exp}$ as input for a pole in the RS \Rmnum{2},
\begin{align}
\label{211110.1}
r_{\rm exp}&=\frac{\Gamma_1}{\Gamma_R}-2\frac{\Gamma_2}{\Gamma_R}=r\beta-2(1-r_{\rm exp})
\end{align}
from which the extra equation to be taken into account is
\begin{align}
\label{211110.2}
2-r_{\rm exp}&=r\beta=\frac{\sqrt{2u}\left(2\sqrt{u-2M_R}+g_2\sigma\sqrt{m_K}\right)}{\sqrt{u-2M_R}\left(g_2^2 m_K+8u+4\sigma g_2 \sqrt{m_K(u-2M_R)}\right)^{1/2}}~,
\end{align}
instead of the straight Eq.~\eqref{211012.1} for a pole in the RS \Rmnum{3}.
The valid solution in this case, the one that gives $g_2=0$ for $r_{\rm exp}=1$, is obtained from Eq.~\eqref{211012.2} by simultaneously multiplying its right-hand side by a minus sign and replacing $r_{\rm exp}$ by $2-r_{\rm exp}$.
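The RS \Rmnum{2} solution can be tested in the same way (again with hypothetical numbers; $\sigma=+1$ corresponds to the RS \Rmnum{2}, as shown at the end of this section), checking it against Eq.~\eqref{211110.2}:

```python
import math

def g2_from_rexp_rs2(r_exp, M_R, Gamma_R, m_K):
    """Sign-flipped Eq. (211012.2) with r_exp -> 2 - r_exp (RS II pole)."""
    rp = 2.0 - r_exp
    u = math.sqrt(4.0*M_R**2 + Gamma_R**2)
    Q = 2.0*M_R*rp**2 + (2.0 - rp**2)*u
    return -2.0*(-Gamma_R*rp
                 + math.sqrt(u - 2.0*M_R)*math.sqrt(Q))/math.sqrt(m_K*Q)

def r_beta(g2, M_R, Gamma_R, m_K, sigma):
    """Right-hand side of Eqs. (211012.1) and (211110.2)."""
    u = math.sqrt(4.0*M_R**2 + Gamma_R**2)
    a = math.sqrt(u - 2.0*M_R)
    D = g2**2*m_K + 8.0*u + 4.0*sigma*g2*math.sqrt(m_K)*a
    return math.sqrt(2.0*u)*(2.0*a + sigma*g2*math.sqrt(m_K))/(a*math.sqrt(D))
```

With $\sigma=+1$ the recovered product $r\beta$ equals $2-r_{\rm exp}$, as demanded by Eq.~\eqref{211110.2}.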
As a result of this analysis we then expect that $\Gamma_1>\Gamma_R$ with values in the interval $[\Gamma_R,2\Gamma_R]$ for the resonance when its pole lies in the RS \Rmnum{2}.
To illustrate this point, let us take the residues given in the Refs.~\cite{GarciaMartin:2011jx,Guo:2012yt} for the pole positions of the $f_0(980)$ in Eqs.~\eqref{equ22} and \eqref{211107.1}, respectively, and evaluate $\Gamma_1/\Gamma_R$ making use of Eq.~\eqref{equ33}.
The values of the residues are $|\gamma_1|=2.3\pm 0.2$~GeV \cite{GarciaMartin:2011jx} and $|\gamma_1|=1.80\pm 0.25$~GeV \cite{Guo:2012yt}. Propagating the errors in $m_R$, $\Gamma_R$ and $|\gamma_1|$ from these references, the values that we obtain are
\begin{align}
\label{211111.1}
\frac{\Gamma_1}{\Gamma_R}&=2.0\pm 0.9\,,~\text{Eq.~\eqref{equ22}--Ref.~\cite{GarciaMartin:2011jx}}~,\\
\frac{\Gamma_1}{\Gamma_R}&=1.1\pm 0.6\,,~\text{Eq.~\eqref{211107.1}--Ref.~\cite{Guo:2012yt}}~.\nonumber
\end{align}
This more detailed discussion based on the use of the Flatt\'e parameterization extends the same topic already discussed in Sec.~\ref{sec.211010.1} for the interpretation of the $\Gamma_i$'s and their connection with the experimental decay widths for the case of a pole lying in the RS \Rmnum{2}.
It is worth stressing that when $X$ is taken as input the procedure explained above, Eqs.~\eqref{211012.3}--\eqref{210801.18}, allows one to calculate $\Gamma_1/\Gamma_R$ without relying on its connection to $r_{\rm exp}$, and the resulting values, which we discuss below, are perfectly compatible with the picture just explained.
Finally, let us now show the relationship between the sign $\sigma$ and the RS in which the pole lies.
For that, we isolate $\sqrt{E_R}$ from Eq.~\eqref{210703.2}, which can then be written as
\begin{align}
\label{210801.14}
\sqrt{E_R}&=\frac{2i}{g_2 \sqrt{m_K}}(M_R-E_f)+\frac{\Gamma_R-\widetilde{\Gamma}_1}{g_2 \sqrt{m_K}}~.
\end{align}
This equation tells us that if $M_R-E_f>0$ the pole lies in the RS \Rmnum{2}, since then $\Im \sqrt{E_R}>0$ and, conversely, if $M_R-E_f<0$ the pole is located in the RS \Rmnum{3}.
Now, we consider Eq.~\eqref{210704.3} which clearly implies that if $\sigma=-1$ then $M_R-E_f<0$, corresponding to the RS \Rmnum 3.
For $\sigma=+1$ a more careful treatment is needed because the sign of $M_R-E_f$ depends on the relative sign between the last two terms in the right-hand side of Eq.~\eqref{210704.3}.
One can straightforwardly show that the absolute value of the last term is bigger than $m_Kg_2^2/8$ by squaring and subtracting them.
In the process one has to relate $E_f$ with $\tan\phi$, Eq.~\eqref{210704.6}, and use the expression for $\cot\phi/2$ given in Eq.~\eqref{210801.10}.
As a result $M_R-E_f>0$ and the pole lies in the RS \Rmnum 2 for $\sigma=+1$.
We give the results obtained with the present formalism based on the use of the Flatt\'e parameterization in Secs.~\ref{sec.201008.1} and \ref{sec.211126.1}, distinguishing whether $X$ or $r_{\rm exp}$, respectively, is taken as input.
\subsection{Spectral density and its integration}
\label{sec.211106.2}
Here we use the spectral density function $\omega(E)$ of a near-threshold resonance, in our case either $f_0(980)$ or $a_0(980)$, as a way to calculate the compositeness of the meson-meson states in these resonances.
We follow the formalism of Ref.~\cite{Baru:2003qq} to which we refer for further details.
There the spectral density function $\omega(E)$ is introduced, and it provides the probability distribution function in energy for finding a bare elementary state in the continuum \cite{Bogdanova:1991zz}.
As a result, its integration around the $K\bar{K}$ threshold comprising the resonance signal, which we call $W_R$, is the probability for finding the bare state. Namely, $W_R$ is calculated as \cite{Baru:2003qq}
\begin{align}
\label{211013.1}
W_R&=\int_{-\Delta}^{+\Delta} dE\, \omega(E)~,\\
\omega(E)&=\frac{1}{2\pi}
\frac{\widetilde{\Gamma}_1+g_2\sqrt{m_KE}\theta(E)}{\left(E-E_f-\frac{1}{2}g_2\sqrt{-m_KE}\theta(-E)\right)^2+\frac{1}{4}\left(\widetilde{\Gamma}_1+g_2\sqrt{m_KE}\theta(E)\right)^2}~,\nonumber
\end{align}
with $\theta(E)$ the Heaviside step function.
In Ref.~\cite{Baru:2003qq} the parameter $\Delta$ was chosen to be $50$~MeV but, since the final value of $W_R$ shows an important dependence on $\Delta$, we prefer to present the results for $W_R$ as a function of $\Delta$.
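A minimal numerical sketch of the integration in Eq.~\eqref{211013.1} is given below; the Flatt\'e parameters are placeholders, and a simple midpoint rule stands in for any preferred quadrature:

```python
import math

def omega(E, Gamma1_bare, g2, E_f, m_K):
    """Spectral density of Eq. (211013.1); E measured from threshold."""
    if E >= 0.0:
        re = E - E_f
        im = Gamma1_bare + g2*math.sqrt(m_K*E)
    else:
        re = E - E_f - 0.5*g2*math.sqrt(-m_K*E)
        im = Gamma1_bare
    return im/(2.0*math.pi*(re**2 + 0.25*im**2))

def W_R(Delta, Gamma1_bare, g2, E_f, m_K, n=20000):
    """Midpoint-rule estimate of the integral of omega over [-Delta, Delta]."""
    h = 2.0*Delta/n
    return h*sum(omega(-Delta + (k + 0.5)*h, Gamma1_bare, g2, E_f, m_K)
                 for k in range(n))
```

In the single-channel limit $g_2=0$ the density reduces to a Lorentzian, whose integral over $[-\Delta,\Delta]$ is $(2/\pi)\arctan(2\Delta/\widetilde{\Gamma}_1)$; this provides a simple check of the routine.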
\subsection{Reinterpretation of the method of Sec.~\ref{sec.211010.1} for poles in the Riemann sheet \Rmnum{2}}
\label{sec.211110.1}
From the application of the Flatt\'e parameterization it has become clear that for a near-threshold pole in coupled channels lying in the RS \Rmnum{2} it is necessary to change the interpretation of $\Gamma_1$.
We have seen from the last line in Eq.~\eqref{211109.1} that the partial-decay width into the lighter channel is not directly $\Gamma_1$ but $\Gamma_1-2\Gamma_2$, and that the total width from the pole position should be compared with $\Gamma_1-\Gamma_2$.
Therefore, for a pole in the RS \Rmnum{2} near the heavier threshold the equation for the saturation of the total width, cf. Eq.~\eqref{equ:square1}, reads
\begin{align}
\label{211111.3}
\Gamma_{R}=|\gamma_{1}|^{2}\frac{p_{1}(m_{R}^{2})}{8\pi m^{2}_{R}}
-
\frac{|\gamma_{2}|^{2}}{16\pi^{2}} \int_{m_{1}+m_{2}}^{m_{R}+2\Gamma_{R} }dW\frac{p(W^{2})}{W^{2}}\frac{\Gamma_{R}}{(m_{R}-W)^{2}+\Gamma_{R}^2/4}~,
\end{align}
with no change in Eq.~\eqref{equ:square2} for saturating the total compositeness $X$. The latter equation is also reproduced here
\begin{equation}
\label{211111.4}
X=|\gamma_{1}|^{2}\Big{|} \frac{\partial G_{1}(s)}{\partial s}\Big{|}_{s=s_{R}}+|\gamma_{2}|^{2}\Big{|} \frac{\partial G_{2}(s)}{\partial s}\Big{|}_{s=s_{R}}\,.
\end{equation}
However, in applications of these ideas up to now, instead of the minus sign in Eq.~\eqref{211111.3} a ``standard'' plus sign is placed in front of $|\gamma_2|^2$. Namely,
\begin{align}
\label{211128.1}
\Gamma_{R}=|\gamma_{1}|^{2}\frac{p_{1}(m_{R}^{2})}{8\pi m^{2}_{R}}
+\frac{|\gamma_{2}|^{2}}{16\pi^{2}} \int_{m_{1}+m_{2}}^{m_{R}+2\Gamma_{R} }dW\frac{p(W^{2})}{W^{2}}\frac{\Gamma_{R}}{(m_{R}-W)^{2}+\Gamma_{R}^2/4}~.
\end{align}
At the practical level, as long as $X_1\ll X_2$, the only change is in the value of $|\gamma_1|$, because the use of Eq.~\eqref{211128.1} would provide an ``effective'' value of this coupling that reabsorbs the effect of subtracting $2\Gamma_2$ in Eq.~\eqref{211111.3}.
This is clear because then $X$ is saturated almost completely by $X_2$ and this quantity alone fixes $|\gamma_2|$.
The use of Eq.~\eqref{211111.3} allows one to determine $|\gamma_1|$, which is certainly larger than the ``effective'' one deduced by fulfilling Eq.~\eqref{211128.1} with an intermediate plus sign.
However, the values for $X_2$, $\Gamma_2$ and the physical partial-decay width to channel 1, which are typically the most important pieces of information for inferring the nature of the resonances, barely change.
Now, if $r_{\rm exp}$ is taken as input the equations that one has to fulfill are Eq.~\eqref{211111.3} and
\begin{align}
\label{211111.5}
2-r_{\rm exp}=|\gamma_{1}|^{2}\frac{p_{1}(m_{R}^{2})}{8\pi m^{2}_{R}\Gamma_R}~,
\end{align}
which are apparently very different from the ``standard'' Eq.~\eqref{211128.1} and
\begin{align}
\label{211128.2}
r_{\rm exp}=\frac{\Gamma_1}{\Gamma_R}~,
\end{align}
already written for a pole in the RS \Rmnum{3} in Sec.~\ref{sec.211010.1}.
However, they basically provide again the same results for $\Gamma_2$, $X_2$, and the physical partial-decay width to channel 1, while the use of Eqs.~\eqref{211128.1} and \eqref{211128.2} provides the already mentioned ``effective'' value for $|\gamma_1|$.
In order to see this, let us divide Eq.~\eqref{211111.3} by $\Gamma_R$ and write the following equations, equivalent to Eqs.~\eqref{211111.3} and \eqref{211111.5},
\begin{align}
\label{211111.6}
1&=\frac{\Gamma_1}{\Gamma_R}-\frac{\Gamma_2}{\Gamma_R}~,\\
2-r_{\rm exp}&=\frac{\Gamma_1}{\Gamma_R}~.\nonumber
\end{align}
Subtracting the first line from the second one we then have
\begin{align}
\label{211111.7}
1-r_{\rm exp}&= \frac{\Gamma_2}{\Gamma_R}~,
\end{align}
which fixes $|\gamma_2|$ as if Eq.~\eqref{211128.1} were used.
Afterwards, Eq.~\eqref{211111.3} is employed to calculate $|\gamma_1|$, while the use of Eq.~\eqref{211128.1} would provide the so-called ``effective'' value for $|\gamma_1|$, which would be smaller than the one obtained from Eq.~\eqref{211111.3}.
\section{Results and discussions using the total compositeness as input}\label{section1}
\label{sec.201008.1}
Here we apply the formalism derived in Secs.~\ref{sec.211010.1} and \ref{sec.201009.3} to study the nature of the resonances $f_0(980)$ and $a_0(980)$. In what follows, the former method, based on the saturation of $\Gamma_R$ and $X$, is denoted by S, while the latter one, based on the use of a Flatt\'e parameterization, is denoted by F. For each resonance pole we first apply the method S and then the method F.
\subsection{The $f_0(980)$ resonance}
\label{sec.201009.1}
Assuming given values for the total compositeness $X$ of the $f_{0}(980)$, varied from $0.2$ to $0.8$ in steps of $0.2$,
we obtain the couplings, partial-decay widths, and compositeness coefficients by solving Eqs.~(\ref{equ:square2}) and (\ref{equ:square1}).
As already mentioned, we consider the RS \Rmnum{2} where $s_R$ of Eq.~\eqref{equ22} lies \cite{GarciaMartin:2011jx} and the results calculated are shown in Table~\ref{tabff1}.
One can observe from this table that the compositeness coefficient $X_{2}$ is always much larger than $X_{1}$,
which means that the $K\bar{K}$ channel plays a much more important role than the $\pi\pi$ one in the structure of $f_{0}(980)$.
Related to this point, the $f_{0}(980)$ couples much more strongly to $K\bar{K}$ than to $\pi\pi$, in agreement with the fact that $f_{0}(980)$ sits very close to the $K\bar{K}$ threshold.
We also observe from Table~\ref{tabff1} that as the total compositeness $X$ increases
the physical partial-decay width to $\pi\pi$ ($\Gamma_{\pi\pi}$) decreases, while that to $K\bar{K}$ becomes larger.
A similar pattern has also been found for heavy-quark resonances with open and near-threshold channels \cite{Du:2021bgb,Guo:2020vmu,Guo:2020pvt,Guo:2019kdc}.
In these regards, it follows from Table~\ref{tabff1} that
$\Gamma_{K\bar{K}}>\Gamma_{\pi\pi}$ for $X\gtrsim 0.6$,\footnote{This can be easily seen because $\Gamma_{K\bar{K}}>25$~MeV.} but as $X$ decreases $\Gamma_{\pi\pi}$ becomes larger than $\Gamma_{K\bar{K}}$.
For estimating the errors associated with the input values of the resonance pole mass and width we proceed in this work similarly to Ref.~\cite{Kang:2016ezb}. Namely, we discretize the inputs $m_R$ and $\Gamma_R$ from the pole position at several points within the one-standard-deviation region around the central values to generate a data grid.
For each of the points in the grid we calculate the different outputs, so that their central values correspond to the mean values and the errors to the square roots of the variances. We have also checked that this procedure is (of course) stable if the number of points in the grid is increased. For the other input, $X$, the variation in the results calculated for its different values provides an estimate of this source of uncertainty.
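The grid procedure just described can be sketched as follows, with `observable` a stand-in for any output computed from $m_R$ and $\Gamma_R$ (the function name and grid size are ours):

```python
import itertools
import statistics

def propagate(observable, m_R, dm, Gamma_R, dGamma, n=7):
    """Evaluate observable(m, Gamma) on an n x n grid spanning one standard
    deviation around the central values; return (mean, spread)."""
    ms = [m_R + dm*(2.0*i/(n - 1) - 1.0) for i in range(n)]
    Gs = [Gamma_R + dGamma*(2.0*i/(n - 1) - 1.0) for i in range(n)]
    vals = [observable(m, G) for m, G in itertools.product(ms, Gs)]
    return statistics.mean(vals), statistics.pstdev(vals)
```

For a linear observable the grid mean reproduces the central value exactly, and enlarging `n` leaves the output stable, in line with the check mentioned above.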
\begin{table}[!htbp]
\begin{center}
\caption{{\small Method S applied to the resonance $f_0(980)$ with pole position in the RS \Rmnum{2} (column 2) from Ref.~\cite{GarciaMartin:2011jx}, Eq.~\eqref{equ22}: The couplings $|\gamma_i|$ (columns 3, 4), corresponding partial-decay widths $\Gamma_{i}$ (columns 5, 6), and individual compositeness coefficients $X_{i}$ (columns 7, 8) are calculated for $X=0.8,$ $0.6,$ $0.4$, $0.2$ (column 1).} }
\begin{tabular}{|cc|cccccc|}
\Xhline{1pt} ~~$X~~~$&~~~$\text{RS}$&~~~$|\gamma_{\pi\pi}|(\text{GeV})$~~~&$|\gamma_{K\bar{K}}|(\text{GeV})$~~~&$\Gamma_{1}(\text{MeV})$&~~~$\Gamma_{2}(\text{MeV})$&~~~$X_{\pi\pi}$&~~~$X_{K\bar{K}}$~~~\\
\Xhline{1pt} \multirow{1}*{0.80}&~~~$\text{\Rmnum{2}}$&~~~$2.24\pm0.20$~~~&$4.65\pm0.23$~~~&$97.2\pm16.9$&~~~$43.2\pm8.4$&~~~$0.038\pm0.007$&~~~$0.762\pm0.007$~~~\\
\Xhline{1pt} \multirow{1}*{0.60}&~~~$\text{\Rmnum{2}}$&~~~$2.11\pm0.19$~~~&$4.01\pm0.19$~~~&$86.1\pm14.8$&~~~$32.1\pm6.2$&~~~$0.033\pm0.006$&~~~$0.567\pm0.006$~~~\\
\Xhline{1pt} \multirow{1}*{0.40}&~~~$\text{\Rmnum{2}}$&~~~$1.97\pm0.17$~~~&$3.24\pm0.15$~~~&$75.0\pm12.9$&~~~$21.0\pm4.0$&~~~$0.029\pm0.005$&~~~$0.371\pm0.005$~~~\\
\Xhline{1pt}
\multirow{1}*{0.20}&~~~$\text{\Rmnum{2}}$&~~~$1.82\pm0.16$~~~&$2.23\pm0.09$~~~&$63.9\pm11.0$&~~~~$9.9\pm1.8$&~~~$0.025\pm0.004$&~~~$0.175\pm0.004$~~~\\
\Xhline{1pt}
\end{tabular}
\label{tabff1}
\end{center}
\end{table}
We now consider the application of the method F, take $X=0.2$ to 0.8 in steps of 0.2, and calculate the parameters characterizing the Flatt\'e formula, Eq.~\eqref{210703.1}, that is, the bare width to the lighter channel $\widetilde{\Gamma}_1$, the bare coupling squared $g_2$ to $K\bar{K}$ and $E_f$.
As typical outputs we provide $\Gamma_1$, $\Gamma_2$, $X_1$ and $X_2$.
For the pole position of the $f_0(980)$ we take Eq.~\eqref{equ22}, with the results given in Table~\ref{tabflatt1}.
Comparing the values of $X_2$ for the $f_0(980)$ between Table~\ref{tabflatt1} and the previous Table~\ref{tabff1} we see a good agreement, with the difference in the central values affecting the third decimal figure. This close agreement can be explained, though not completely, by noticing that $X_2\gg X_1$, so that $X_2\approx X$, which fixes it to lie very close to the total compositeness. Regarding the $\Gamma_i$, we see that they are very close between both tables for $X=0.2$, 0.4 and 0.6, with larger differences for $X=0.8$, but perfectly compatible within errors for all values of $X$. Let us notice that the method based on the Flatt\'e parameterization directly provides the $\Gamma_i$, since $\Gamma_1=\widetilde{\Gamma}_1\beta$ and $\Gamma_2=\Gamma_1-\Gamma_R$ for a pole in the RS \Rmnum{2}, as is the case here for the $f_0(980)$ poles in Eqs.~\eqref{equ22} and \eqref{211107.1}. However, the method S in Sec.~\ref{sec.211010.1} keeps some spurious dependence on the value given to $n$ in Eq.~\eqref{equ:square}.
\begin{table}[!htbp]
\begin{center}
\caption{ Method F applied to the resonance $f_0(980)$ with pole position in the RS $\text{\Rmnum{2}}$ (column 2) from Ref.~\cite{GarciaMartin:2011jx}, Eq.~\eqref{equ22}: The value of $X$ taken as input is given in the first column. We calculate the bare width $\widetilde{\Gamma}_1$ (column 3), the bare coupling $g_2$ (column 4), $E_{f}$ (column 5), $\Gamma_{1}$ (column 6), $\Gamma_{2}$ (column 7), $X_1$ (column 8), and $X_{2}$ (column 9). }
\begin{tabular}{|cc|ccccccc|}
\Xhline{1pt}
$X~$&~$\text{RS}$&$\widetilde{\Gamma}_{\pi\pi}$(MeV)&$g_2$&$E_f$(MeV)&$\Gamma_{1}$(MeV)&$\Gamma_{2}$(MeV)&$X_{\pi\pi}$&$X_{K\bar{K}}$\\
\Xhline{1pt}
\multirow{1}*{0.8}&$\text{\Rmnum{2}}$&$~948.3\pm383.1$&\!\!$10.01\pm4.03$&$-389.7\pm210.7$&$~84.4\pm15.3$&$~30.4\pm8.3$&$~0.033\pm0.006$&$~0.767\pm0.006$\\
\Xhline{1pt}
\multirow{1}*{0.6}&$\text{\Rmnum{2}}$&$200.3\pm38.5$&$1.63\pm0.23$&$-57.6\pm19.3$&$~80.4\pm14.4$&$~26.4\pm7.0$&$~0.031\pm0.006$&$~0.569\pm0.006$\\
\Xhline{1pt}
\multirow{1}*{0.4}&$\text{\Rmnum{2}}$&$113.2\pm20.1$&$0.66\pm0.07$&$-20.4\pm8.6$&$~73.2\pm12.9$&$~19.2\pm4.9$&$~0.028\pm0.005$&$~0.372\pm0.005$\\
\Xhline{1pt}
\multirow{1}*{0.2}&~$\text{\Rmnum{2}}$&$75.2\pm13.0$&$0.24\pm0.02$&$-4.2\pm5.3$&$~63.9\pm11.1$&$~9.9\pm2.4$&$~0.025\pm0.004$&$~0.175\pm0.004$\\
\Xhline{1pt}
\end{tabular}
\label{tabflatt1}
\end{center}
\end{table}
We now move on and consider the $f_0(980)$ pole position from Ref.~\cite{Guo:2012yt} given in Eq.~\eqref{211107.1}, and proceed similarly as done regarding the pole position in Eq.~\eqref{equ22}. Then, we reproduce the values of the mass and width of the resonance in Eq.~\eqref{211107.1} together with a value given for $X$. The results obtained are shown in Table~\ref{tab.211106.1}.
Comparing Tables~\ref{tabff1} and \ref{tab.211106.1} we observe that all the outputs are rather similar. In particular, the resulting values for $X_2$ are almost coincident in both tables, with $X_2\gg X_1$. However, we notice that while the central value of the width for the pole in Eq.~\eqref{equ22} is smaller than that of the pole in Eq.~\eqref{211107.1}, the calculated $\Gamma_{1}$ and $\Gamma_{2}$ in Table~\ref{tabff1} are larger than those in Table~\ref{tab.211106.1}. This indicates that the cancellation between widths in the difference $\Gamma_{1}-2\Gamma_{2}$, giving the actual width to $\pi\pi$, is more important for the pole of Eq.~\eqref{equ22} than for the one in Eq.~\eqref{211107.1}.
\begin{table}[!htbp]
\begin{center}
\caption{{\small Method S applied to the resonance $f_0(980)$ with pole position in the RS \Rmnum{2} from Ref.~\cite{Guo:2012yt}, Eq.~\eqref{211107.1}. The various entries are the same as in Table~\ref{tabff1}.} \label{tab.211106.1}}
\begin{tabular}{|cc|cccccc|}
\Xhline{1pt}
~~$X~~~$&~~~$\text{RS}$&~~~$|\gamma_{\pi\pi}|(\text{GeV})$~~~&$|\gamma_{K\bar{K}}|(\text{GeV})$~~~&$\Gamma_{1}(\text{MeV})$&~~~$\Gamma_{2}(\text{MeV})$&~~~$X_{\pi\pi}$&~~~$X_{K\bar{K}}$~~~\\
\Xhline{1pt}
\multirow{1}*{0.80}&~~~$\text{\Rmnum{2}}$&~~~$2.09\pm0.26$~~~&$4.88\pm0.30$~~~&$86.5\pm20.8$&~~~~\,$30.5\pm10.0$&~~~$0.034\pm0.008$&~~~$0.766\pm0.008$~~~\\
\Xhline{1pt} \multirow{1}*{0.60}&~~~$\text{\Rmnum{2}}$&~~~$2.00\pm0.24$~~~&$4.21\pm0.26$~~~&$78.6\pm18.4$&~~~$22.6\pm7.4$&~~~$0.031\pm0.007$&~~~$0.569\pm0.007$~~~\\
\Xhline{1pt} \multirow{1}*{0.40}&~~~$\text{\Rmnum{2}}$&~~~$1.89\pm0.22$~~~&$3.40\pm0.20$~~~&$70.8\pm16.0$&~~~$14.1\pm4.8$&~~~$0.028\pm0.006$&~~~$0.372\pm0.006$~~~\\
\Xhline{1pt}
\multirow{1}*{0.20}&~~~$\text{\Rmnum{2}}$&~~~$1.79\pm0.20$~~~&$2.33\pm0.13$~~~&$62.9\pm13.7$&~~~~$6.9\pm2.2$&~~~$0.025\pm0.005$&~~~$0.175\pm0.005$~~~\\
\Xhline{1pt}
\end{tabular}
\end{center}
\end{table}
For the method F the results are given in Table~\ref{tab.211107.1}. We observe that the values of $X_2$ obtained are very similar to those given in Table~\ref{tab.211106.1} for the same input $X$, with $X_2\gg X_1$.
The output $\Gamma_2$ is smaller now than that in Table~\ref{tab.211106.1}, though within errors they are again compatible. This clearly indicates the compatibility between the two methods for this $f_0(980)$ pole too. We also point out that the bare parameters for $X=0.8$ in Table~\ref{tab.211107.1} are essentially undetermined, as indicated by the large intervals given in parentheses, though the physical outputs appear with an uncertainty of similar size to that in the other rows of the table.
\begin{table}[!htbp]
\begin{center}
\caption{Method F applied to the resonance $f_0(980)$ with the pole position in the RS \Rmnum{2} from Ref.~\cite{Guo:2012yt}, Eq.~\eqref{211107.1}. The various entries are the same as in Table~\ref{tabflatt1}. The numbers in parentheses indicate the huge range of values taken by the corresponding quantity when varying $m_R$ and $\Gamma_R$ within errors.}
\label{tab.211107.1}
{ \begin{tabular}{|cc|ccccccc|}
\Xhline{1pt}
$X$&$\text{RS}$ &$\widetilde{\Gamma}_{\pi\pi}$(MeV) &$g_2$& $E_{f}$(MeV) &$\Gamma_{1}$(MeV) &$\Gamma_{2}$(MeV) &$X_{\pi\pi}$ &$X_{K\bar{K}}$\\
\Xhline{1pt}
\multirow{1}*{0.8}& $\text{\Rmnum{2}}$& $(430,3 \!\cdot\! 10^{6})$ &$(5.4,4 \!\cdot\! 10^4)$
&$(-160,-2 \!\cdot\! 10^{6})$& ~$71.6\pm18.2$ & $16.8\pm7.9$&~$0.028\pm0.007$&~$0.772\pm0.007$\\
\Xhline{1pt}
\multirow{1}*{0.6}& $\text{\Rmnum{2}}$&$218.4\pm51.9$ &\,$2.34\pm0.52$~&\!\!$-131.6\pm52.3$&~$68.8\pm16.9$&$12.8\pm7.0$&~$0.027\pm0.007$&~$0.573\pm0.007$\\
\Xhline{1pt}
\multirow{1}*{0.4}& $\text{\Rmnum{2}}$&$113.6\pm25.1$ &$0.83\pm0.13$ &$-52.6\pm20.7$&~$65.6\pm15.4$&~$9.6\pm5.1$&~$0.026\pm0.006$&~$0.374\pm0.006$\\
\Xhline{1pt}
\multirow{1}*{0.2}& $\text{\Rmnum{2}}$&~ $75.2\pm16.1$ &$0.27\pm0.03$ &$-24.2\pm11.8$&~$61.1\pm13.6$&~$5.1\pm2.6$&~$0.024\pm0.005$&~$0.176\pm0.005$\\
\Xhline{1pt}
\end{tabular}
}
\end{center}
\end{table}
\subsection{The $a_0(980)$ resonance}
\label{sec.201009.2}
Let us explore the compositeness of the $a_0(980)$ similarly as done above in Sec.~\ref{sec.201009.1} for the $f_0(980)$.
The poles that we are going to consider next for the $a_0(980)$ stem from a recent coupled-channel partial-wave analysis of antiproton-proton annihilation data in Ref.~\cite{CrystalBarrel:2019zqh}, where the pole parameters and the partial-decay widths of the $a_0(980)$ are discussed.
In the RS \Rmnum{2} Ref.~\cite{CrystalBarrel:2019zqh} reports the values
\begin{eqnarray}
\label{211009.1}
m_{a_0}=1004.1\pm 6.67\,\, \text{MeV},~ \Gamma_{a_0}=97.2\pm6.01\,\,\text{MeV},~ \Gamma_{K\bar{K}}/\Gamma_{\pi\eta}=(13.8\pm3.5)\,\,\%,
\end{eqnarray}
while in the RS \Rmnum{3} the same reference provides
\begin{eqnarray}\label{eq:ratioa0}
m_{a_0}=1002.4\pm 6.55\,\, \text{MeV},~ \Gamma_{a_0}=127.0\pm7.08\,\,\text{MeV},~ \Gamma_{K\bar{K}}/\Gamma_{\pi\eta}=(14.9\pm3.9)\,\,\%,
\end{eqnarray}
with the uncertainties added in quadrature.
\begin{table}[!htbp]
\begin{center}
\caption{{\small Method S applied for the resonance $a_0(980)$ with pole positions from Ref.~\cite{CrystalBarrel:2019zqh}, Eqs.~\eqref{211009.1} and \eqref{eq:ratioa0}, in different RSs (column 2): We take input values for $X$ (column 1), and predict $|\gamma_i|$ (columns 3, 4), $\Gamma_i$ (columns 5, 6), and $X_i$ (columns 7, 8).} }
\begin{tabular}{|cc|cccccc|}
\Xhline{1pt}
~~$X~~~$&~~~$\text{RS}$&~~~$|\gamma_{\pi\eta}|(\text{GeV})$~~~&$|\gamma_{K\bar{K}}|(\text{GeV})$~~~&$\Gamma_{1}(\text{MeV})$&~~~$\Gamma_{2}(\text{MeV})$&~~~$X_{\pi\eta}$&~~~$X_{K\bar{K}}$~~~\\
\Xhline{1pt}
\multirow{2}*{0.8}&~~~$\text{\Rmnum{2}}$&~~~$3.57\pm0.07$~~~&$5.15\pm0.04$~~~&$169.1\pm7.0$&~~~$71.9\pm4.9$&~~~$0.151\pm0.005$&~~~$0.649\pm0.005$~~~\\
&~~~$\text{\Rmnum{3}}$&~~~$2.30\pm0.09$~~~&$4.50\pm0.06$~~~&$70.4\pm5.5$&~~~$56.6\pm4.4$&~~~$0.064\pm0.006$&~~~$0.736\pm0.006$~~~\\
\Xhline{1pt}
\multirow{2}*{0.6}&~~~$\text{\Rmnum{2}}$&~~~$3.35\pm0.06$~~~&$4.37\pm0.03$~~~&$149.0\pm5.8$&~~~$51.8\pm5.8$&~~~$0.133\pm0.005$&~~~$0.467\pm0.005$~~~\\
&~~~$\text{\Rmnum{3}}$&~~~$2.56\pm0.07$~~~&$3.78\pm0.05$~~~&$86.9\pm4.9$&~~~$40.1\pm3.1$&~~~$0.079\pm0.005$&~~~$0.521\pm0.005$~~~\\
\Xhline{1pt}
\multirow{2}*{0.4}&~~~$\text{\Rmnum{2}}$&~~~$3.12\pm0.05$~~~&$3.41\pm0.02$~~~&$128.8\pm4.6$&~~~$31.6\pm2.2$&~~~$0.115\pm0.004$&~~~$0.285\pm0.004$~~~\\
&~~~$\text{\Rmnum{3}}$&~~~$2.79\pm0.06$~~~&$2.90\pm0.04$~~~&$103.5\pm4.5$&~~~$23.5\pm1.9$&~~~$0.094\pm0.005$&~~~$0.306\pm0.005$~~~\\
\Xhline{1pt}
\multirow{2}*{0.2}&~~~$\text{\Rmnum{2}}$&~~~$2.86\pm0.05$~~~&$2.05\pm0.01$~~~&$108.6\pm3.7$&~~~$11.4\pm0.8$&~~~$0.097\pm0.003$&~~~$0.103\pm0.003$~~~\\
&~~~$\text{\Rmnum{3}}$&~~~$3.01\pm0.06$~~~&$1.58\pm0.04$~~~&$120.0\pm4.5$&~~~$7.0\pm0.7$&~~~$0.109\pm0.005$&~~~$0.091\pm0.005$~~~\\
\Xhline{1pt}
\end{tabular}
\label{tabff4}
\end{center}
\end{table}
\begin{table}[!htbp]
\begin{center}
\caption{ Method F applied to the resonance $a_0(980)$ with pole positions from Ref.~\cite{CrystalBarrel:2019zqh}, Eqs.~\eqref{211009.1} and \eqref{eq:ratioa0}, in different RSs (column 2): We calculate the bare width $\widetilde{\Gamma}_1$ (column 3), the bare coupling squared $g_2$ (column 4), $E_{f}$ (column 5), $\Gamma_1$ (column 6), $\Gamma_2$ (column 7), $X_1$ (column 8) and $X_{2}$ (column 9). The value of $X$ taken as input is given in the first column. }
\begin{tabular}{|cc|ccccccc|}
\Xhline{1pt}
$X~$&$\text{RS}$&$\widetilde{\Gamma}_{\pi\eta}$(MeV)&~$g_2$&$E_{f}(\text{MeV})$&$\Gamma_{1}$(MeV)&$\Gamma_{2}$(MeV)&$X_{\pi\eta}$&$X_{K\bar{K}}$\\
\Xhline{1pt}
\multirow{2}*{0.8}&~$\text{\Rmnum{2}}$&~$740.5\pm42.2$~&$5.16\pm0.51$~&$-237.1\pm39.4$&~$157.2\pm8.8$&~$60.0\pm7.4$&~$0.141\pm0.007$&~$0.659\pm0.007$\\
&~$\text{\Rmnum{3}}$&~$35.9\pm7.3$~&$0.66\pm0.03$~&$49.3\pm4.6$&~$45.6\pm9.7$&~$81.4\pm9.3$&~$0.042\pm0.009$&~$0.758\pm0.009$\\
\Xhline{1pt}
\multirow{2}*{0.6}&~$\text{\Rmnum{2}}$&~$289.7\pm8.5$~&$1.54\pm0.06$~&$-61.6\pm10.0$&~$146.2\pm7.2$&~$49.0\pm5.6$&$0.131\pm0.005$&~$0.469\pm0.005$\\
&~$\text{\Rmnum{3}}$&~$59.9\pm5.6$~&$0.49\pm0.02$~&$39.2\pm4.3$&~$72.7\pm7.1$&~$54.3\pm6.2$&~$0.066\pm0.007$&~$0.534\pm0.007$\\
\Xhline{1pt}
\multirow{2}*{0.4}&~$\text{\Rmnum{2}}$&~$177.4\pm5.6$~&$0.64\pm0.01$~&$-18.2\pm5.8$&~$129.9\pm5.4$&~$32.7\pm3.5$&$0.116\pm0.004$&~$0.284\pm0.004$\\
&~$\text{\Rmnum{3}}$&~~~$85.2\pm4.7$~&$0.30\pm0.01$~&$28.6\pm4.1$&~$96.9\pm5.3$&~$30.1\pm3.4$&~$0.089\pm0.006$&~$0.311\pm0.006$\\
\Xhline{1pt}
\multirow{2}*{0.2}&~$\text{\Rmnum{2}}$&~$120.2\pm3.8$~&$0.18\pm0.002$~&$4.0\pm4.3$&~$109.9\pm3.8$&~$12.7\pm1.4$&~$0.099\pm0.003$&~$0.101\pm0.003$\\
&~$\text{\Rmnum{3}}$&~$113.6\pm4.6$~&$0.10\pm0.01$~&$16.7\pm4.0$&~$118.6\pm4.6$&~$8.4\pm1.1$&~$0.108\pm0.005$&~$0.092\pm0.005$\\
\Xhline{1pt}
\end{tabular}
\label{tabflatt3}
\end{center}
\end{table}
Let us denote by 1 the lighter channel $\pi\eta$, and by 2 the heavier one $K\bar{K}$.
Combining Eq.~(\ref{equ:square2}) for the total compositeness and Eq.~(\ref{equ:square1}) for the full width, given input values for the former between 0.2 and 0.8 in steps of 0.2, we derive a series of partial compositeness coefficients and decay widths for the $a_{0}(980)$, collected in Table~\ref{tabff4}, by applying the method S.\footnote{The same remark as for the $f_0(980)$ in footnote~\ref{foot.211008.1} also applies to the $a_0(980)$ when its pole is taken in the RS \Rmnum{2} for applying Eq.~\eqref{equ1} to calculate $X_2$.
The point is that the $a_0(980)$ mass in Eq.~\eqref{211009.1} is clearly larger than the $K\bar{K}$ threshold, but the width of the resonance is much larger than the difference between $m_{a_0}$ and the $K\bar{K}$ threshold.}
We find that $X_2$ is rather similar for the RS \Rmnum{2} and \Rmnum{3} calculations. We also obtain that $X_2\gg X_1$, except for $X=0.2$ in which case they are quite close to each other.
This tells us that the $K\bar{K}$ component typically dominates over the $\pi\eta$ one for the resonance $a_0(980)$.
The results corresponding to the method F for the $a_0(980)$ case are organized in Table~\ref{tabflatt3}.
One then finds that the values of $X_2$ obtained in Tables~\ref{tabflatt3} and \ref{tabff4} are very close independently of the RS
in which the poles lie.
It turns out that $X_2$ is typically much larger than $X_1$ except when $X$ becomes small, like $X=0.2$ in Table~\ref{tabflatt3}, and then both $X_i$ are rather close to each other.
Regarding the $\Gamma_i$, it is clear that for the pole in the RS \Rmnum{2} the values obtained now in Table~\ref{tabflatt3} are remarkably similar to those in Table~\ref{tabff4}. However, they clearly differ for $X\geq 0.8$ in the RS \Rmnum{3} case.
As indicated above one should distinguish between the bare and dressed parameters in a Flatt\'e parameterization.
This is reflected in the fact that the bare width $\widetilde{\Gamma}_1$ can be much larger than the total width $\Gamma_R$.
This is particularly true when the pole position is set to lie in the RS \Rmnum{2}, as is clear from Tables~\ref{tabflatt1}, \ref{tab.211107.1} and \ref{tabflatt3}.
Here in addition one has to properly relate $\Gamma_1$ and $\Gamma_{\pi\pi}$, cf. Eq.~\eqref{211110.1}.
Nonetheless, these important points have been overlooked in the literature.
Before finishing this section we should mention that in some studies the $a_0(980)$ lies in the RS \Rmnum{4}, so that there is no interval along the physical real-$s$ axis within the radius of convergence of the Laurent series around the $a_0(980)$.
In such a scenario it is not justified to apply Eq.~\eqref{equ1} for the calculation of $X_2$, as discussed in detail in Ref.~\cite{Guo:2015daa}, nor is it clear how to connect the $\Gamma_i$ with the physical partial-decay widths. It then follows that our method cannot be applied and the conclusions thereof do not hold.
In the studies based on unitarization of SU(3) and U(3) Chiral Perturbation Theory it is observed that the pole for the $a_0(980)$ appears in the 2nd RS if only tree level amplitudes are kept.
This is necessarily the case when unitarizing the leading order amplitudes \cite{Oller:1998zr,Oller:1997ti,Oller:1999ag,Guo:2016zep}.
However, once loop contributions are accounted for, the pole moves to the RS \Rmnum{4} \cite{Guo:2011pa,Guo:2012yt,Guo:2016zep}.
This is also the case in the $K$-matrix analysis of Ref.~\cite{Dudek:2016cru}.
The more recent studies among those quoted here \cite{Guo:2016zep,Dudek:2016cru} also reproduce the energy levels from lattice QCD simulations.
When the $a_0(980)$ lies in the RS \Rmnum{4} it manifests itself on the physical real-$s$ axis as a strong cusp effect.
\section{Results and discussions using the branching ratio $r_{\rm exp}$ as input}
\label{sec.211126.1}
Here, we directly explore the more stringent scenario of taking $r_{\rm exp}$, together with the usual reproduction of the pole position $E_R$, in order to fix all the parameters in both the S and F methods.
The direct consideration of $r_{\rm exp}$ as input is an interesting check on the consistency of interpreting the $\Gamma_i$, calculated from the pole parameters, in connection with the experimental $r_{\rm exp}$ for a pole lying in the RS \Rmnum{2}.
As shown below, the resulting physical partial-decay widths have values within meaningful ranges, such that they are positive
and smaller than the total width, while the $X_i$ and their sum $X$ also lie within the allowed interval $[0,1]$.\footnote{Otherwise, if $r_{\rm exp}$ is not understood as proposed for a RS \Rmnum{2} pole, there is no solution within the method F developed in Sec.~\ref{sec.201009.3}.}
This situation is similar to the one obtained before when taking $X$ as input,
where all the Flatt\'e parameters are fixed without the need to connect the $\Gamma_i$ with $r_{\rm exp}$, which comes out as an output.
We also show below that the procedure of providing $r_{\rm exp}$ as input is not completely satisfactory, because the output value for the renormalized coupling squared to $K\bar{K}$, $|\gamma_2|^2$, can be very sensitive to small variations in the input. As a result, the calculation of $X_2$ by using Eq.~\eqref{equ1} does not come out very accurately, particularly for the $f_0(980)$. We can circumvent this limitation by instead using the method of the spectral function $\omega(E)$ to estimate $X$ with better precision. With this choice we also update the values for the resonance inputs compared with the original Ref.~\cite{Baru:2003qq}, in which the use of the spectral density function to calculate the compositeness was applied for the first time to the $f_0(980)$ and $a_0(980)$ resonances.
\subsection{The $f_0(980)$ resonance}
\label{sec.211126.2}
Now, let us directly combine the knowledge of $E_R$ with the input for the branching ratio
$r_{\rm exp}$, $r_{\rm exp}=0.52\pm 0.12$ \cite{Aubert:2006nu} and $r_{\rm exp}=0.75^{+0.11}_{-0.13}$ \cite{BES:2005iaq}, as well as $r_{\rm exp}=0.68$ from the theoretical analysis of Ref.~\cite{Oller:1997ti}.
We start by applying the method S.
It is clear that given $r_{\rm exp}$ and $\Gamma_{R}$ one can solve for the couplings $|\gamma_i|$, $i=1,\,2$, and calculate the partial compositeness coefficients $X_i$.
Since the $f_0(980)$ poles that we are considering lie in the RS \Rmnum{2}, we have that $\Gamma_{\pi\pi}=(2-r_{\rm exp})\Gamma_{R}$ and $\Gamma_{K\bar{K}}=(1-r_{\rm exp})\Gamma_{R}$; similarly, for the RS \Rmnum{2} pole of the $a_0(980)$, $\Gamma_{\pi\eta}=(2-r_{\rm exp})\Gamma_{R}$. For the $a_0(980)$ pole in the RS \Rmnum{3} we have instead $\Gamma_{\pi\eta}=r_{\rm exp}\Gamma_{R}$. By combining Eqs.~(\ref{equ1}), (\ref{equ33}) and (\ref{equ:square}),
the value of the compositeness coefficients $X_{1}$ and $X_{2}$ can be written as
\begin{eqnarray}
X_{1}&=&\frac{8\pi (2-r_{\rm exp})\Gamma_{R} m_R^2}{p_1(m_R^2)} \left|\frac{\partial G_{1}(s)}{\partial s}\right|_{s=s_{R}},~\text{RS \Rmnum {2}}~,\\
X_{1}&=&\frac{8\pi r_{\rm exp}\Gamma_{R} m_R^2}{p_1(m_R^2)} \left|\frac{\partial G_{1}(s)}{\partial s}\right|_{s=s_{R}},~\text{RS \Rmnum {3}}~,\nonumber\\
X_{2}&=&\frac{16\pi^2 (1-r_{\rm exp})\Gamma_{R}}{\int_{m_{1}+m_{2}}^{m_{R}+2\Gamma_{R} }dW\frac{p(W^{2})/W^{2}}{(m_{R}-W)^{2}+\Gamma_{R}^2/4}}
\left|\frac{\partial G_{2}(s)}{\partial s}\right|_{s=s_{R}}.
\end{eqnarray}
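The bookkeeping of the partial widths entering these expressions can be sketched in a few lines; this is an illustration only, and the total width used below is a placeholder chosen to be close to the $f_0(980)$ case of Table~\ref{tabff3}, not a fitted result.

```python
# Minimal sketch of the partial-width relations quoted in the text; the total
# width below is illustrative, not a fitted value.
def partial_widths_rs2(r_exp, gamma_r):
    """RS II pole: Gamma_1 = (2 - r_exp)*Gamma_R, Gamma_2 = (1 - r_exp)*Gamma_R."""
    return (2.0 - r_exp) * gamma_r, (1.0 - r_exp) * gamma_r

def partial_widths_rs3(r_exp, gamma_r):
    """RS III pole: Gamma_1 = r_exp*Gamma_R, Gamma_2 = (1 - r_exp)*Gamma_R."""
    return r_exp * gamma_r, (1.0 - r_exp) * gamma_r

gamma_r = 54.0  # MeV, illustrative total width
g1, g2 = partial_widths_rs2(0.52, gamma_r)  # close to the ~79.9 and ~25.9 MeV of the table
```
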
The results are organized in Table~\ref{tabff3} with $m_R$ and $\Gamma_R$ taken from Eq.~\eqref{equ22}, corresponding to the determination in Ref.~\cite{GarciaMartin:2011jx}. For this and the rest of the tables in this section, the errorbars given for $X_2$ and $X$ also contain the propagation of the error in the determination of $r_{\rm exp}$ from Refs.~\cite{Aubert:2006nu,BES:2005iaq}.
It follows from this table that $X_2$ is affected by large errorbars, of around 50\%, so that the resulting $X$ is not pinned down accurately and the calculation is rather indicative. Only for $r_{\rm exp}=0.52\pm 0.12$ \cite{Aubert:2006nu} can $X$ be larger than 0.5 within errors, with $X=0.49\pm 0.20$. For the other experimental value, $r_{\rm exp}=0.75^{+0.11}_{-0.13}$ \cite{BES:2005iaq}, the compositeness is $X=0.27\pm 0.11$, smaller than 0.5.
We show in the left panel of Fig.~\ref{pic_X_r} the total compositeness $X$ as a function of $r_{\rm exp}$ for the $f_0(980)$ with the method S, and employing the central values for $m_R$ and $\Gamma_R$ for the different poles considered. Thus, the requirement that $r_{\rm exp}>0.4$ implies that $X\lesssim 0.6$ for the pole in Eq.~\eqref{equ22} in the RS \Rmnum{2}.
The linear decrease of $X$ with $r_{\rm exp}$ is also clear from Fig.~\ref{pic_X_r}.
This can be easily understood by noticing that both the
partial-decay widths and the partial compositeness coefficients are proportional to $|\gamma_i|^2$, which can be written in turn as $\Gamma_i/\theta_i$. For given values of the mass and width of the resonance, $\theta_i$ is just a measure of the available phase space for the decay to channel $i$, and then $\theta_2\ll \theta_1$. In this way, for a pole in the RS \Rmnum{3},
\begin{align}
\label{211103.1}
X&=\frac{\Gamma_R r_{\rm exp}}{\theta_1}\left|\frac{\partial G_1}{\partial s}\right|_{s=s_R}
+\frac{\Gamma_R (1-r_{\rm exp})}{\theta_2}\left|\frac{\partial G_2}{\partial s}\right|_{s=s_R}\\
&=\frac{\Gamma_R }{\theta_2}\left|\frac{\partial G_2}{\partial s}\right|_{s=s_R}
-r_{\rm exp}\Gamma_R\left(\frac{1}{\theta_2}\left|\frac{\partial G_2}{\partial s}\right|_{s=s_R}
-\frac{1}{\theta_1}\left|\frac{\partial G_1}{\partial s}\right|_{s=s_R}\right)~.\nonumber
\end{align}
This is a linear dependence on $r_{\rm exp}$, and the coefficient multiplying the latter is negative because both $1/\theta_2$ and $\left|\partial G_2/\partial s\right|_{s_R}$ are basically proportional to $|s_R/4-m_K^2|^{-1/2}$, cf. Eq.~\eqref{211106.1}.\footnote{Eq.~\eqref{211103.1} was written for a pole in the RS \Rmnum{3}.
For a pole lying in the RS \Rmnum{2} one has to take into account that $\Gamma_1=\Gamma_R(2-r_{\rm exp})$ and, then, instead of Eq.~\eqref{211103.1} we have
\begin{align}
\label{211122.1}
X&=2\frac{\Gamma_R }{\theta_1}\left|\frac{\partial G_1}{\partial s}\right|_{s=s_R}
+\frac{\Gamma_R }{\theta_2}\left|\frac{\partial G_2}{\partial s}\right|_{s=s_R}
-r_{\rm exp}\Gamma_R\left(\frac{1}{\theta_2}\left|\frac{\partial G_2}{\partial s}\right|_{s=s_R}
+\frac{1}{\theta_1}\left|\frac{\partial G_1}{\partial s}\right|_{s=s_R}\right)~,
\end{align}
so that the linear decrease of $X$ with $r_{\rm exp}$ follows.
}
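The linear decrease just derived can be checked with a short sketch. The coefficients $a_1$ and $a_2$ below stand for $|\partial G_i/\partial s|_{s_R}/\theta_i$ and are purely hypothetical numbers, chosen only so that $a_2\gg a_1$ as argued above; they are not fitted values.

```python
# Sketch of the linear decrease of X with r_exp for a RS III pole.
# a1, a2 stand for |dG_i/ds|_{s_R} / theta_i; hypothetical numbers with a2 >> a1.
def total_compositeness(r_exp, gamma_r, a1, a2):
    # X = Gamma_R * r_exp * a1 + Gamma_R * (1 - r_exp) * a2
    return gamma_r * (r_exp * a1 + (1.0 - r_exp) * a2)

gamma_r, a1, a2 = 0.054, 0.8, 9.0  # hypothetical inputs
xs = [total_compositeness(r, gamma_r, a1, a2) for r in (0.2, 0.5, 0.8)]
# Equally spaced r_exp values give equally spaced X values (linear dependence),
# and X decreases with r_exp because a2 > a1.
```
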
\begin{table}[!htbp]
\begin{center}
\caption{ Method S applied to the resonance $f_0(980)$ with pole position in the RS \Rmnum{2} (column 2) from Ref.~\cite{GarciaMartin:2011jx}, Eq.~\eqref{equ22}: By reproducing values given for $r_{\rm exp}$ (column 1), we calculate $|\gamma_i|$ (columns 3, 4), $\Gamma_{i}$ (columns 5, 6), and $X_{i}$ (columns 7, 8).}
\begin{tabular}{|cc|cccccc|}
\Xhline{1pt}
~~$r_{\rm
exp}$&~~~$\text{RS}$&~~~$|\gamma_{1}|(\text{GeV})$~~~&$|\gamma_{2}|(\text{GeV})$~~~&$\Gamma_{1}(\text{MeV})$&~~~$\Gamma_{2}(\text{MeV})$&~~~$X_{\pi\pi}$&~~~$X_{K\bar{K}}$~~~\\
\Xhline{1pt}
\multirow{1}*{0.52~\cite{Aubert:2006nu}}&~~~$\text{\Rmnum{2}}$&~~~$2.03\pm0.18$~~~&$3.62\pm0.32$~~~&$79.9\pm14.5$&~~~$25.9\pm4.7$&~~~$0.031\pm0.006$&~~~$0.46 \pm 0.20$~~~\\
\Xhline{1pt} \multirow{1}*{0.68~\cite{Oller:1997ti}}&~~~$\text{\Rmnum{2}}$&~~~$1.92\pm0.17$~~~&$2.95\pm0.26$~~~&$71.3\pm13.0$&~~~$17.3\pm3.1$&~~~$0.028\pm0.005$&~~~$0.31\pm0.13 $~~~\\
\Xhline{1pt}
\multirow{1}*{0.75~\cite{BES:2005iaq}}&~~~$\text{\Rmnum{2}}$&~~~$1.87\pm0.17$~~~&$2.61\pm0.23$~~~&$67.5\pm12.3$&~~~$13.5\pm2.5$&~~~$0.026\pm0.005$&~~~$0.24 \pm 0.11$~~~\\
\Xhline{1pt}
\end{tabular}
\label{tabff3}
\end{center}
\end{table}
\begin{table}[!htbp]
\begin{center}
\caption{ Method F applied to the resonance $f_0(980)$ with the pole position in the RS \Rmnum{2} from Ref.~\cite{GarciaMartin:2011jx}, Eq.~\eqref{equ22}: The branching ratio $r_{\text{exp}}$ is taken as input (column 1).
We then calculate the bare partial-decay width $\widetilde{\Gamma}_1$ (column 2), the bare coupling squared $g_2$ (column 3), $E_f$ (column 4), the renormalized $\Gamma_1$ (column 5) and $\Gamma_2$ (column 6), and the partial compositeness coefficients $X_1$ (column 7) and $X_2$ (column 8). The numbers in parentheses indicate large ranges of the corresponding quantities obtained by varying $m_R$ and $\Gamma_R$ within errors. }
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}|c|ccccccc|}
\Xhline{1pt}
$r_{\text{exp}}$ &$\widetilde{\Gamma}_{\pi\pi}\,(\text{MeV})$&$g_2$ &$E_{f}\,(\text{MeV})$& $\Gamma_{1}\,(\text{MeV})$ &$\Gamma_{2}\, (\text{MeV})$ &$X_{\pi\pi}$ &$X_{K\bar{K}}$\\
\Xhline{1pt}
\multirow{1}*{0.52 \cite{Aubert:2006nu}} &$(71, 2623)$
&$(0.36, 26.8)$
&$(-21.2, 1167.2)$ &$79.9\pm14.5$ &$25.9\pm4.7$ &$0.030\pm0.006$ &$0.48\pm 0.22$\\
\Xhline{1pt}
\multirow{1}*{0.68 \cite{Oller:1997ti}} &$113.8\pm38.3$ &$0.69\pm0.45$ &$(-66.7, 6.3)$ &$71.3\pm13.0$ &$17.3\pm3.1$ &$0.028\pm0.005$ &$0.35\pm0.18$\\
\Xhline{1pt}
\multirow{1}*{0.75 \cite{BES:2005iaq}} &$89.4\pm20.4$ &$0.41\pm0.17$ &$(-42.9, 7.7)$ &$67.5\pm12.3$ &$13.5\pm2.5$ &$0.026\pm0.005$ &$0.26\pm0.14$\\
\Xhline{1pt}
\end{tabular*}
\label{tab.211111.1}
\end{center}
\end{table}
\begin{center}
\begin{figure}[!htbp]
\centering
\caption{ The total compositeness $X$ is plotted as a function of the input branching ratio ($r_\text{exp}$) when applying the method S. Panel (a) corresponds to the $f_0(980)$: The dashed line is for the pole from Ref.~\cite{GarciaMartin:2011jx}, Eq.~\eqref{equ22}, and the solid one is for the pole of Ref.~\cite{Guo:2012yt}, Eq.~\eqref{211107.1}. Panel (b) corresponds to the $a_0(980)$: The poles for the $a_0(980)$ are given in Eqs.~\eqref{211009.1} and \eqref{eq:ratioa0} for the RSs \Rmnum{2} and \Rmnum{3}, respectively, and they are taken from Ref.~\cite{CrystalBarrel:2019zqh}.} \label{pic_X_r}
\begin{minipage}[t]{1\linewidth}
\centering
\subfigure[]{
\includegraphics[width=0.45\textwidth]{pic_X_r1.pdf}}
\hspace{0.45in}
\subfigure[]{
\includegraphics[width=0.45\textwidth]{pic_X_r2.pdf}}
\end{minipage}
\end{figure}
\end{center}
Now we apply the method F with $r_{\rm exp}$ as input to the same $f_0(980)$ pole given in Eq.~\eqref{equ22}, and the results are presented in Table~\ref{tab.211111.1}. In columns 2--4 we provide the resulting parameters characterizing the Flatt\'e parameterization, and in the remaining columns we give the outputs in the form of the partial-decay widths $\Gamma_i$ and the partial compositeness coefficients $X_i$. We notice that for the smallest
$r_{\rm exp}=0.52$ \cite{Aubert:2006nu} the bare parameters are much more poorly determined than those in the other cases. Nonetheless, the outputs have uncertainties of similar sizes as for the other values taken for $r_{\rm exp}$. The resulting values for $X_2$ in Tables~\ref{tab.211111.1} and \ref{tabff3} turn out to be remarkably close to each other, clearly showing the compatibility between the methods F and S.
Let us compare Table~\ref{tab.211111.1} with Table~\ref{tabflatt1}, where $X$ is taken as input and the interpretation of the $\Gamma_i$ for a pole in the RS \Rmnum{2}, according to Eq.~\eqref{211110.1}, is not used to fix the Flatt\'e parameterization.
One can then observe that similar values of $\Gamma_{K\bar{K}}$, which fix $r_{\rm exp}=1-\Gamma_{K\bar{K}}/\Gamma_R$, lead to close values of $X_2$ in both tables. This agreement is of course an indication that the equations are correctly solved,
because the same results are obtained by taking corresponding values of $X$ or $r_{\rm exp}$.
\begin{table}[!htbp]
\begin{center}
\caption{{\small Method S applied to the resonance $f_0(980)$ with pole position in the RS \Rmnum{2} from Ref.~\cite{Guo:2012yt}, Eq.~\eqref{211107.1}. The various entries are the same as in Table~\ref{tabff3}.} \label{tab.211106.2}}
\begin{tabular}{|cc|cccccc|}
\Xhline{1pt}
~~$r_{\rm exp}$&~~~$\text{RS}$&~~~$|\gamma_{\pi\pi}|(\text{GeV})$~~~&$|\gamma_{K\bar{K}}|(\text{GeV})$~~~&$\Gamma_{1}(\text{MeV})$&~~~$\Gamma_{2}(\text{MeV})$&~~~$X_{\pi\pi}$&~~~$X_{K\bar{K}}$~~~\\
\Xhline{1pt}
\multirow{1}*{0.52~\cite{Aubert:2006nu}}&~~~$\text{\Rmnum{2}}$&~~~$2.05\pm0.22$~~~&$4.71\pm0.70$~~~&$82.9\pm18.2$&~~~$26.9\pm5.9$&~~~$0.033\pm0.007$&~~$0.72\pm 0.34 $~~~\\
\Xhline{1pt} \multirow{1}*{0.68~\cite{Oller:1997ti}}&~~~$\text{\Rmnum{2}}$&~~~$1.94\pm0.21$~~~&$3.85\pm0.58$~~~&$73.9\pm16.2$&~~~$17.9\pm3.9$&~~~$0.029\pm0.006$&~~~$0.48 \pm0.23 $~~~\\
\Xhline{1pt}
\multirow{1}*{0.75~\cite{BES:2005iaq}}&~~~$\text{\Rmnum{2}}$&~~~$1.88\pm0.21$~~~&$3.40\pm0.51$~~~&$70.0\pm15.3$&~~~$14.0\pm3.1$&~~~$0.027\pm0.006$&~~~$0.38 \pm0.18 $~~~\\
\Xhline{1pt}
\end{tabular}
\end{center}
\end{table}
\begin{table}[!htbp]
\begin{center}
\caption{{\small Method F applied to the resonance $f_0(980)$ with the pole position in the RS \Rmnum{2} from Ref.~\cite{Guo:2012yt}, Eq.~\eqref{211107.1}: The entries are the same as those in Table~\ref{tab.211111.1}, except for the values of $r_{\rm exp}$ in the first column, because solutions are found only for $r_{\rm exp}>0.82$. The values of $r_{\rm exp}$ taken do not come from any experiment and are considered for theoretical reasons.}
\label{tabflatt2}}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}|c|ccccccc|}
\Xhline{1pt}
$r_{\text{exp}}$& $\widetilde{\Gamma}_{\pi\pi} (\text{MeV})$ &$g_2$& $E_{f} (\text{MeV})$ & $\Gamma_{1} (\text{MeV})$ &$\Gamma_{2} (\text{MeV})$ &$X_{\pi\pi}$ &$X_{K\bar{K}}$\\
\Xhline{1pt}
\multirow{1}*{0.86} &$(45.8, 2632)$ &$(0.13, 35.6)$ &$(-2358, 0.3)$ &$63.8\pm14.0$ &$7.8\pm1.7$ &$0.026\pm0.005$ &$0.329\pm0.140$\\
\Xhline{1pt}
\multirow{1}*{0.91} &$(41.9, 407.6)$ &$(0.08, 6.64)$ &$(-429.4, 1.3)$ &$61.0\pm13.4$ &$5.1\pm1.1$ &$0.025\pm0.005$ &$0.239\pm0.123$\\
\Xhline{1pt}
\multirow{1}*{0.96} &$66.1\pm13.0$ &$0.16\pm0.12$ &$-20.8\pm14.4$ &$58.2\pm12.8$ &$2.3\pm0.5$ &$0.023\pm0.005$ &$0.105\pm0.058$\\
\Xhline{1pt}
\end{tabular*}
\end{center}
\end{table}
Let us move on and consider the $f_0(980)$ pole position from Ref.~\cite{Guo:2012yt} applying first the method S, with the results given in Table~\ref{tab.211106.2}.
We already noticed when comparing Tables~\ref{tabff1} and \ref{tab.211106.1} that the $\Gamma_i$'s were larger for the former, even though the width of the pole in Eq.~\eqref{equ22} is smaller.
By comparing Tables~\ref{tabff3} and \ref{tab.211106.2}, with $r_{\rm exp}$ as input, we observe that the partial-decay widths $\Gamma_i$ have close values and that the differences translate into the central values of $X_2$, which are always larger in Table~\ref{tab.211106.2} than in Table~\ref{tabff3}. Thus, a somewhat more prominent role of the $K\bar{K}$ component arises for the $f_0(980)$ pole in Eq.~\eqref{211107.1}.
When applying the method F to the $f_0(980)$ pole from Ref.~\cite{Guo:2012yt}, Eq.~\eqref{211107.1}, there are no acceptable solutions for the input values of $r_{\rm exp}$ from Refs.~\cite{Aubert:2006nu,BES:2005iaq,Oller:1997ti}, with solutions found only for larger values of $r_{\rm exp}$.
Indeed, we see from Eq.~\eqref{211111.1} that the central value for $r_{\rm exp}$ calculated from the information given in Ref.~\cite{Guo:2012yt} is $0.91$, though the estimated error is too large to extract more stringent quantitative conclusions. Our results are given in Table~\ref{tabflatt2}, where $X_2$ varies by around a factor of 3 when changing $r_{\rm exp}$ by less than 10\%, from $r_{\rm exp}=0.96$ to $0.86$. For similar values of $\Gamma_{K\bar{K}}$ the present results and those in Table~\ref{tab.211107.1} agree well within errors, and give rise to rather small values for $X_2$.
Interestingly, we can check our procedure with the full characterization of the $f_0(980)$ pole given in Ref.~\cite{Guo:2012yt}, in which the residues, in addition to the pole position, are given. In this reference one has $|\gamma_1|=1.8^{+0.2}_{-0.3}$~GeV and $|\gamma_2/\gamma_1|=2.6^{+0.2}_{-0.3}$.
In fact, the input value $r_{\rm exp}=2-\frac{\Gamma_1}{\Gamma_R}=0.909$, cf. Eq.~\eqref{211111.1}, is equivalent to providing the central value of $|\gamma_1|$ as input, and solving the equation
\begin{align}
\label{211127.1}
|\gamma_1|^2&=\beta g_1=\beta \frac{\widetilde{\Gamma}_1 8\pi m_R^2}{p_1(m_R)}~,
\end{align}
instead of Eq.~\eqref{211110.2}.
The result is the same as that given in the third row of Table~\ref{tabflatt2}, corresponding to a value $|\gamma_2|=\sqrt{32 \pi m_K^2 \beta g_2}=2.77$~GeV (which straightforwardly translates into the central value for $X_2$ by using Eq.~\eqref{equ1}). We notice that this value is a factor of 1.7 smaller than the one reported in Ref.~\cite{Guo:2012yt}. Its square is therefore a factor of 2.84 smaller than the value $|\gamma_2|^2=(4.68~\text{GeV})^2$ actually obtained in Ref.~\cite{Guo:2012yt} by solving for the residues of the partial-wave amplitudes.
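The quoted factors can be verified by plain arithmetic on the numbers given in the text:

```python
# Plain-arithmetic check of the factors quoted in the text.
ratio = 4.68 / 2.77   # |gamma_2| of Ref. [Guo:2012yt] over the Flatte-based value found here
ratio_sq = ratio ** 2
# ratio is ~1.69 (the "factor of 1.7") and ratio_sq is ~2.85 (the quoted 2.84, up to rounding)
```
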
However, instead of $|\gamma_1|$ we can take $|\gamma_2|$ as input and solve the equation
\begin{align}
\label{211127.2}
|\gamma_2|^2&=32\pi m_K^2\beta g_2~,
\end{align}
instead of Eq.~\eqref{211127.1}.
Doing this exercise with the central value $|\gamma_2|=4.68$~GeV \cite{Guo:2012yt}, corresponding to a much larger $X_2=0.68$, we find that $|\gamma_1|=1.875$~GeV, which is indeed perfectly compatible with the value $|\gamma_1|=1.8^{+0.2}_{-0.3}$~GeV given in Ref.~\cite{Guo:2012yt}. The branching ratio $r_{\rm exp}$ obtained with this value of $|\gamma_1|$ is 0.81, instead of 0.91 for $|\gamma_1|=1.8$~GeV (third row in Table~\ref{tabflatt2}).\footnote{At first sight it could seem strange that the larger $|\gamma_1|=1.875$~GeV corresponds to the smaller $r_{\rm exp}=0.81$. This is because for $|\gamma_1|=1.875$~GeV the coupling $|\gamma_2|=4.68>2.77$~GeV and the decay width to $K\bar{K}$, $\Gamma_1-\Gamma_R$, has increased.
The fact that the width to $K\bar{K}$ has not increased by a relative factor of 2.84 as $|\gamma_2|^2$ does is a reflection of some divergence between the F and S methods for large $K\bar{K}$ couplings in the case of the pole in Eq.~\eqref{211107.1}.
This point is also clear from the absence of solutions for $r_{\rm exp}=0.52$, 0.68 and $0.75$ within the method F, although they are found for the method S in Table~\ref{tab.211106.2}. Some divergence in the results when taking $X$ as input can also be seen for larger values of $X\gtrsim 0.6$ (and hence of $|\gamma_2|$) by comparing the central values for the $\Gamma_i$ between Tables~\ref{tab.211106.1} and \ref{tab.211107.1}. }
After this check we re-consider the application of the method F to the $f_0(980)$ pole in Eq.~\eqref{equ22} and discuss the central values of $|\gamma_1|\,(|\gamma_2|)$ for the input values of $r_{\rm exp}$ in Table~\ref{tab.211111.1}. For the extreme values $r_{\rm exp}=0.52$, and $0.75$ we find $1.96\,(3.88)$~GeV, and $1.78\,(2.40)$~GeV, respectively.
With a small variation of 5\% in $r_{\rm exp}$, between the values $0.95 r_{\rm exp}$ and $1.05 r_{\rm exp}$, the coupling squared $|\gamma_2|^2$ changes by 16\% for $r_{\rm exp}=0.52$ and by 40\% for $r_{\rm exp}=0.75$. This implies that, though the dependence of $|\gamma_2|$ on the value of $r_{\rm exp}$ is not as critical as in the case of the $f_0(980)$ from Ref.~\cite{Guo:2012yt}, cf. Table~\ref{tabflatt2}, the variations are still important. Considering the experimental errors in $r_{\rm exp}$ from Refs.~\cite{Aubert:2006nu,BES:2005iaq}, of around 23\% for $r_{\rm exp}=0.52$ and 15\% for $r_{\rm exp}=0.75$, we have added in quadrature the resulting uncertainty of around 40\% in $X_2$ to the one stemming from the values of $M_R$ and $\Gamma_R$ for calculating the errorbars in Table~\ref{tab.211111.1}.
A similar relative uncertainty, which also follows by inspection of the dashed line in the left panel of Fig.~\ref{pic_X_r}, has been applied to $X_2$ in Table~\ref{tabff3} calculated with the method S. We have proceeded analogously for the errorbars of $X_2$ shown in Table~\ref{tab.211106.2}. Notice that within the method S the sensitivity to the input value of $r_{\rm exp}$ for the pole in Eq.~\eqref{211107.1} is a factor of 1.6 larger than for the pole in Eq.~\eqref{equ22}, as follows by comparing the slope of the solid versus the dashed lines in the left panel of Fig.~\ref{pic_X_r}.
We then find a rather unpleasant situation in which small changes in the input values, well within the errorbars provided by the analyses they come from, can give rise to a very different output value for $|\gamma_2|^2$, and hence for $X_2$, the partial compositeness coefficient that typically almost saturates the whole $X$.
A way that we have found to circumvent this limitation is to use the spectral density function $\omega(E)$, Eq.~\eqref{211013.1}, in order to calculate the compositeness $X$ as $1-W_R$, since we have checked that it does not depend so sensitively on the input value of $r_{\rm exp}$. In this regard, we calculate $W_R$ as a function of the extent of the interval $[-\Delta,\Delta]$ used in Eq.~\eqref{211013.1}, because its dependence on $\Delta$ is worth noticing, and the results for the $f_0(980)$ pole in Eq.~\eqref{211107.1} are given in Table~\ref{tabflatt5g}. In the first column we show the $r_{\rm exp}$ taken; the integration interval in Eq.~\eqref{211013.1} is given in the second column; the third column is for the resulting $W_R$; the fourth column provides $1-W_{R}$; and the last one gives $X$ as already calculated with the method based on the Flatt\'e parameterization in Table~\ref{tabflatt2}.
In Ref.~\cite{Baru:2003qq} the cutoff $\Delta$ was taken to be 50~MeV. Since the width of the $f_0(980)$ in Ref.~\cite{Guo:2012yt} is 58~MeV we take $\Delta=60$~MeV for this pole, but estimate an uncertainty for the result of $W_R$ by considering also $\Delta=90$~MeV, which is a 50\% larger than the nominal value.
This is illustrated in Fig.~\ref{picflatt}, where we plot $\omega(E)$ for the $f_0(980)$ and $a_0(980)$ resonances in the top and bottom panels, respectively. It is clearly seen that most of the resonance bump lies in the region $|E|<\Gamma_R$, and that for $|E|>1.5\Gamma_R$ it has already faded away. From Table~\ref{tabflatt5g} we obtain that for $\Delta=60$~MeV and $r_{\rm exp}=0.91$ (the nominal one for \cite{Guo:2012yt}) the resulting compositeness is $1-W_{f_0}=0.52$, and for $\Delta=1.5\Gamma_R\approx 90$~MeV it decreases to 0.43. We observe a variation of around 40\% for $1-W_{f_0}$ in Table~\ref{tabflatt5g} calculated with $\Delta=60$~MeV (much smaller than the 300\% in Table~\ref{tabflatt2}), decreasing from 0.68 for $r_{\rm exp}=0.86$ to 0.39 for $r_{\rm exp}=0.96$.
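The statement that most of the bump lies within $|E|<1.5\Gamma_R$ can be illustrated with a simple sketch, assuming a pure Lorentzian stand-in for $\omega(E)$; the actual spectral density of Eq.~\eqref{211013.1} is channel-dependent, so the numbers below are only indicative.

```python
# Minimal sketch (not the actual omega(E) of the text): fraction of a pure
# Lorentzian line shape captured when integrating over [-Delta, Delta].
import math

def lorentzian_fraction(delta, gamma_r, n=20000):
    """Midpoint-rule integral of (Gamma/2pi)/(E^2 + Gamma^2/4) over [-delta, delta]."""
    h = 2.0 * delta / n
    total = 0.0
    for i in range(n):
        e = -delta + (i + 0.5) * h
        total += (gamma_r / (2.0 * math.pi)) / (e * e + 0.25 * gamma_r ** 2) * h
    return total

gamma_r = 58.0  # MeV, the f0(980) width quoted for the pole of Eq. (211107.1)
# Delta = Gamma_R captures ~70% of the Lorentzian and Delta = 1.5*Gamma_R ~80%,
# in line with the bump having largely faded away for |E| > 1.5*Gamma_R.
frac_1 = lorentzian_fraction(gamma_r, gamma_r)
frac_15 = lorentzian_fraction(1.5 * gamma_r, gamma_r)
```

The closed form of this integral is $(2/\pi)\arctan(2\Delta/\Gamma_R)$, which the numerical sketch reproduces.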
\begin{table}[!htbp]
\begin{center}
\caption{ Resonance $f_0(980)$ with the pole position of Eq.~\eqref{211107.1} in the RS \Rmnum{2}: We show the dependence of $W_{f_0}$ on the integration interval $[-\Delta,\Delta]$ with $\Delta$ up to $2\Gamma_R$. The last column gives $X=X_1+X_2$ obtained in Table~\ref{tabflatt2}. The values of $r_{\rm exp}$ given do not come from any experiment and are considered for theoretical reasons. }
\label{tabflatt5g}
\begin{tabular}{|c|ccc|c|}
\Xhline{1pt}
~~$r_{\rm exp}$&~~~$[-\Delta, \Delta]$&~~~$W_{f_{0}}$ & $1-W_{f_0}$ & $X$ \\
\Xhline{1pt}
\multirow{4}*{0.86}~~~&~~~$[-30, 30 ]$~~~&~~~$0.19$~~~& 0.81 & \\
&~~~$[-60, 60 ]$~~~&~~~$0.32$~~~& 0.68 & \\
&~~~$[-90, 90 ]$~~~&~~~$0.40$~~~& 0.60 & \\
&~~~$[-120, 120 ]$~~~&~~~$0.45$~~~& 0.55 & $0.36\pm0.14$\\
\Xhline{1pt}\multirow{4}*{0.91}~~~&~~~$[-30, 30 ]$~~~&~~~$0.30$~~~& 0.70 & \\
&~~~$[-60, 60 ]$~~~&~~~$0.48$~~~& 0.52 & \\
&~~~$[-90, 90 ]$~~~&~~~$0.57$~~~& 0.43 & \\
&~~~$[-120, 120 ]$~~~&~~~$0.63$~~~& 0.37 & $0.27\pm0.12$\\
\Xhline{1pt}
\multirow{4}*{0.96}~~~&~~~$[-30, 30 ]$~~~&~~~$0.40$~~~& 0.60 & \\
&~~~$[-60, 60 ]$~~~&~~~$0.61$~~~& 0.39 & \\
&~~~$[-90, 90 ]$~~~&~~~$0.71$~~~& 0.29 & \\
&~~~$[-120, 120 ]$~~~&~~~$0.76$~~~& 0.24 & $0.13\pm0.06$\\
\Xhline{1pt}
\end{tabular}
\end{center}
\end{table}
\begin{center}
\begin{figure}
\centering
\caption{ The spectral density function $\omega(E)$ is shown for the $f_0(980)$ in the top panels and for the $a_0(980)$ in the bottom ones.
For the $f_0(980)$ the poles considered are in the RS \Rmnum{2}, with the pole in Eq.~\eqref{211107.1} corresponding to the top left panel and the pole in Eq.~\eqref{equ22} to the top right one. For the $a_0(980)$ we take the poles in Eqs.~\eqref{211009.1} and \eqref{eq:ratioa0}, which are in the RS \Rmnum{2} (bottom left panel) and RS \Rmnum{3} (bottom right one), respectively.
Every line in a plot corresponds to the indicated value of $r_{\rm exp}$ in the
legends, and for the $a_0(980)$ the poles lie in different RSs.
\label{picflatt}}
\begin{minipage}[t]{1\linewidth}
\centering
\subfigure[]{
\includegraphics[width=0.4\textwidth]{f0978.pdf}\;}
\hspace{1in}
\subfigure[]{
\includegraphics[width=0.4\textwidth]{f0996.pdf}}
\subfigure[]{
\includegraphics[width=0.4\textwidth]{a0RS2.pdf}\;}
\hspace{1in}
\subfigure[]{
\includegraphics[width=0.4\textwidth]{a0RS3.pdf}}
\end{minipage}
\end{figure}
\end{center}
\begin{table}[!htbp]
\begin{center}
\caption{{\small Resonance $f_0(980)$ with the pole position in the RS \Rmnum{2} from Ref.~\cite{GarciaMartin:2011jx}, Eq.~\eqref{equ22}: We show the dependence of $W_{f_0}$ on the integration interval $[-\Delta,\Delta]$ with $\Delta$ up to $2\Gamma_R$. For the experimental inputs of $r_{\rm exp}$ we give our final estimate for $1-W_{f_0}$ in column 5. The total compositeness $X=X_1+X_2$ from Table~\ref{tab.211111.1} is given in the last column. }
\label{tabflatt5}}
\begin{tabular}{|c|ccc|c|c|}
\Xhline{1pt}
~~$r_{\rm exp}$&~~~$[-\Delta, \Delta]$&~~~$W_{f_{0}}$ & $(1-W_{f_0})_\Delta$ & $1-W_{f_0}$& $X$ \\
\Xhline{1pt}
\multirow{4}*{0.52
\cite{Aubert:2006nu}}~~~&~~~$[-25, 25 ]$~~~&~~~$0.13$~~~& 0.87 & &\\
&~~~$[-50, 50 ]$~~~&~~~$0.21$~~~& $0.79$ & &\\
&~~~$[-75, 75 ]$~~~&~~~$0.27$~~~& 0.73 & &\\
&~~~$[-100, 100 ]$~~~&~~~$0.31$~~~& 0.69 & $0.76\pm 0.15$ &$0.51\pm0.22$\\
\Xhline{1pt}\multirow{4}*{0.68
\cite{Oller:1997ti}}~~~&~~~$[-25, 25 ]$~~~&~~~$0.25$~~~& 0.75 & &\\
&~~~$[-50, 50 ]$~~~&~~~$0.39$~~~& 0.61 & &\\
&~~~$[-75, 75 ]$~~~&~~~$0.47$~~~& 0.53 & & \\
&~~~$[-100, 100 ]$~~~&~~~$0.53$~~~& 0.47 & & $0.38\pm0.18$\\
\Xhline{1pt}
\multirow{4}*{0.75
\cite{BES:2005iaq}}~~~&~~~$[-25, 25 ]$~~~&~~~$0.30$~~~& 0.70 & & \\
&~~~$[-50, 50 ]$~~~&~~~$0.45$~~~& $0.55$ & &\\
&~~~$[-75, 75 ]$~~~&~~~$0.55$~~~& 0.45 & & \\
&~~~$[-100, 100 ]$~~~&~~~$0.61$~~~& 0.39 & $0.50\pm 0.15$ & $0.29\pm0.14$\\
\Xhline{1pt}
\end{tabular}
\end{center}
\end{table}
Next, we apply the method based on the spectral density function $\omega(E)$ to the pole from Ref.~\cite{GarciaMartin:2011jx}, Eq.~\eqref{equ22}, and the results are given in Table~\ref{tabflatt5}. The dependence with $\Delta$ in the compositeness $1-W_{f_0}$ is indicated with the subscript $\Delta$ in the fourth column of Table~\ref{tabflatt5}, $(1-W_{f_0})_\Delta$.
We take as the nominal value for $1-W_{f_0}$ the mean between the ones obtained with $\Delta=\Gamma_R=50$~MeV and $\Delta=1.5\Gamma_R$.
Considering the variation of the results between these two values of $\Delta$, and the errorbar of around 0.12 in $r_{\rm exp}$ in both experiments \cite{Aubert:2006nu,BES:2005iaq} (such that $r_{\rm exp}=0.68$ lies within one standard deviation of both $r_{\rm exp}=0.52$ and $0.75$), we see that an uncertainty of at least 0.15 should be considered for the reference value of $1-W_{f_0}$.
The final figure is given in the next-to-last column of Table~\ref{tabflatt5}.
We can appreciate that the central value obtained by integrating Eq.~\eqref{211013.1} is larger than the central value for $X$ in Table~\ref{tab.211111.1}, obtained by employing Eq.~\eqref{equ1}, which is also given in the last column.
Then, we would have $1-W_{f_0}=0.76\pm 0.15$ and $0.50\pm 0.15$ for $r_{\rm exp}=0.52$ \cite{Aubert:2006nu} and $0.75$ \cite{BES:2005iaq}, respectively, as also shown in Table~\ref{tabflatt5}. These figures indicate a dominant meson-meson component, mostly $K\bar{K}$ ($X_2\gg X_1$), in the nature of the $f_0(980)$ pole from Ref.~\cite{GarciaMartin:2011jx}, Eq.~\eqref{equ22}. It also follows that the results are compatible with those given for $X$ within errors.
However, for $r_{\rm exp}$ from Ref.~\cite{BES:2005iaq} the central value of $1-W_{f_0}$ in Table~\ref{tabflatt5} is only slightly above 0.5, and both values taken for $r_{\rm exp}$ \cite{Aubert:2006nu,BES:2005iaq} generate values of $1-W_{f_0}$ that could decrease substantially once errors are taken into account. Therefore, other components apart from $K\bar{K}$ are likely to play a noticeable role in the composition of the $f_0(980)$.
These extra components have also been unveiled in non-perturbative studies based on unitarized Chiral Perturbation Theory with/without explicit resonances \cite{Oller:1998zr,Pelaez:2006nj,Guo:2012ym,Guo:2012yt}.
\subsection{The $a_0(980)$ resonance}
\label{sec.211126.3}
Now, we follow similar steps and use the fact that the ratio $\Gamma(a_0(980)\to K\bar{K})/\Gamma(a_0(980)\to \pi\eta)$ is also given in Ref.~\cite{CrystalBarrel:2019zqh}, see Eqs.~\eqref{211009.1} and \eqref{eq:ratioa0}, with the values $0.138\pm 0.035$ (RS \Rmnum{2}) and $0.149\pm 0.039$ (RS \Rmnum{3}), respectively. The typical uncertainty for the inferred $r_{\rm exp}$ is around 3\%. We also consider the average value given by the PDG \cite{Zyla:2020zbs}, $\Gamma(a_0(980)\to K\bar{K})/\Gamma(a_0(980)\to \pi\eta)=0.177\pm 0.024$, which implies an uncertainty of 1.7\% in $r_{\rm exp}$.
The implications are organized in Table~\ref{tabff5}, where we apply method S with $r_{\rm exp}$ as input to the $a_0(980)$ poles in the RS \Rmnum{2} and \Rmnum{3}.
The $K\bar{K}$ component is larger than the $\pi\eta$ one,
but the total compositeness $X$ is small, being less than 0.25 and 0.35 for the RS \Rmnum{2} and \Rmnum{3} $a_0(980)$ poles, respectively.
\begin{table}[!htbp]
\begin{center}
\caption{ Method S applied to the resonance $a_0(980)$ with pole positions from Ref.~\cite{CrystalBarrel:2019zqh}, Eqs.~\eqref{211009.1} and \eqref{eq:ratioa0}: By reproducing the input values of $\Gamma_{a_0}$ and $r_{\rm exp}$ (column 2), we predict $|\gamma_i|$ (columns 3, 4), $\Gamma_i$ (columns 5, 6), and $X_i$ (columns 7, 8).}
\begin{tabular}{|cc|cccccc|}
\Xhline{1pt}
~~$r_{\rm exp}$&~~~$\text{RS}~~~$&~~~$|\gamma_{\pi\eta}|(\text{GeV})$~~~&$|\gamma_{K\bar{K}}|(\text{GeV})$~~~&$\Gamma_{1}(\text{MeV})$&~~~$\Gamma_{2}(\text{MeV})$&~~~$X_{\pi\eta}$&~~~$X_{K\bar{K}}$~~~\\
\Xhline{1pt}
\multirow{2}*{0.85 \cite{Zyla:2020zbs}}&~~~$\text{\Rmnum{2}}~~~$&~~~$2.90\pm0.05$~~~&$2.32\pm0.08$~~~&$111.8\pm4.2$&~~~$14.6\pm0.6$&~~~$0.100\pm0.004$&~~~$0.132\pm0.013$~~~\\
&~~~$\text{\Rmnum{3}}~~~$&~~~$2.86\pm0.05$~~~&$2.61\pm0.72$~~~&$108.0\pm3.7$&~~~$19.1\pm0.7$&~~~$0.098\pm0.004$&~~~$0.249\pm0.017$~~~\\
\Xhline{1pt}
\multirow{2}*{0.87 \cite{CrystalBarrel:2019zqh}}&~~~$\text{\Rmnum{2}}~~~$&~~~$2.88\pm0.05$~~~&$2.16\pm0.07$~~~&$109.8\pm4.2$&~~~$12.6\pm0.5$&~~~$0.098\pm0.004$&~$0.115 \pm 0.009$~~~\\
&~~~$\text{\Rmnum{3}}~~~$&~~~$2.89\pm0.05$~~~&$2.43\pm0.07$~~~&$110.5\pm3.8$&~~~$16.5\pm0.6$&~~~$0.100\pm0.004$&~~~~~$0.216\pm 0.030$ ~~~ \\
\Xhline{1pt}
\end{tabular}
\label{tabff5}
\end{center}
\end{table}
\begin{table}[!htbp]
\begin{center}
\caption{ Method F applied to the resonance $a_0(980)$ with pole positions from Ref.~\cite{CrystalBarrel:2019zqh}, Eqs.~\eqref{211009.1} and \eqref{eq:ratioa0}: The branching ratio $r_{\text{exp}}$ is taken as input (column 1), and the RS in which the pole lies is indicated in column 2. We then calculate the bare partial-decay width $\widetilde{\Gamma}_1$ (column 3), the bare coupling squared $g_2$ (column 4), $E_f$ (column 5), the renormalized $\Gamma_1$ (column 6) and $\Gamma_2$ (column 7), and the partial compositeness coefficients $X_1$ (column 8) and $X_2$ (column 9). }
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}|cc|ccccccc|}
\Xhline{1pt}
$r_{\text{exp}}$ &$\text{RS}$ &$\widetilde{\Gamma}_{\pi\eta}\,(\text{MeV})$ &$g_2$ &$E_{f}\,(\text{MeV})$ &$\Gamma_{1}\,(\text{MeV})$ &$\Gamma_{2}\,(\text{MeV})$ &$X_{\pi\eta}$ &$X_{K\bar{K}}$\\
\Xhline{1pt}
\multirow{2}*{0.85 \cite{Zyla:2020zbs}} &$\text{\Rmnum{2}}$ &$124.0\pm5.3$ &$0.21\pm0.03$ &$3.08\pm5.11$& $111.8\pm4.2$ &$14.6\pm0.6$ &$0.100\pm0.004$ &$0.116\pm 0.017$ \\
&$\text{\Rmnum{3}}$ &$98.7\pm3.4$ &$0.21\pm0.01$ &$23.1\pm2.8$ &$108.0\pm3.6$ &$19.1\pm0.6$ &$0.099\pm0.004$ &$0.205\pm 0.028$ \\
\Xhline{1pt}
\multirow{2}*{0.87 \cite{CrystalBarrel:2019zqh}} &$\text{\Rmnum{2}}$ &$119.9\pm4.9$ &$0.18\pm0.02$ &$4.65\pm4.82$ &$109.8\pm4.2$ &$12.6\pm0.5$ &$0.098\pm0.004$ & $0.100\pm 0.016$ \\
&$\text{\Rmnum{3}}$ &$102.0\pm3.5$ &$0.18\pm0.01$ &$21.6\pm2.9$ &$110.5\pm3.7$ &$16.5\pm0.6$ &$0.101\pm0.004$ &$0.178\pm 0.038$ \\
\Xhline{1pt}
\end{tabular*}
\label{tabflatt4}
\end{center}
\end{table}
We now proceed with the application of the method F with $r_{\rm exp}$ as input and the results obtained are presented in Table~\ref{tabflatt4}. We observe that the values of $X_2$ and $\Gamma_{K\bar{K}}$ are rather similar between Tables~\ref{tabflatt3} and \ref{tabflatt4}, and similarly if we compare the latter with Table~\ref{tabff5} where $r_{\rm exp}$ is taken as input too. Thus, we find compatibility between different methods for the $a_0(980)$ case.
It is also clear from Table~\ref{tabflatt4} that the value of $r_{\rm exp}=0.85$ obtained from the PDG average value of $\Gamma(a_0(980)\to K\bar{K})/\Gamma(a_0(980)\to \pi\eta)=0.177\pm 0.024$ \cite{Zyla:2020zbs} implies quite small values for $X_2\lesssim 0.3$.
In our opinion, an uncertainty of only $2-3\%$ in $r_{\rm exp}$ for the $a_0(980)$ is probably too optimistic. In this respect we notice that the analysis of Ref.~\cite{CrystalBarrel:2019zqh} for the $a_0(980)$ is based on taking poles in the RS II and III for this resonance, while recent sophisticated theoretical studies \cite{Guo:2016zep,Dudek:2016cru}, which also reproduce lattice QCD data, require a very different qualitative picture with a pole in the RS IV.
In this respect, let us check the sensitivity of the results based on Eq.~\eqref{equ1} for the calculation of $X$ and, to that end, take e.g. the $a_0(980)$ pole in the RS \Rmnum{2} and $r_{\rm exp}=0.85\pm 0.15$, which corresponds to the average value of the PDG with an ad-hoc uncertainty of around 20\%.
We indeed find a strong sensitivity: for the central value $r_{\rm exp}=0.85$ we obtain the couplings $|\gamma_1|=2.9$~GeV and $|\gamma_2|=2.2$~GeV, while for the lower end $r_{\rm exp}=0.85-0.15=0.70$ we find a new solution with $|\gamma_1|=3.1$~GeV and $|\gamma_2|=3.2$~GeV.
With respect to the central value, $|\gamma_1|$ varies by only 6\%, but $|\gamma_2|$ is now 44\% bigger (a factor of 2 for the square of the coupling).
The variation is of similar size when considering $r_{\rm exp}=0.87\pm 0.17$, with the central value from Ref.~\cite{CrystalBarrel:2019zqh} and an ad-hoc $20\%$ uncertainty.
Thus, it is advisable to also apply to the $a_0(980)$ case the method based on the spectral density function (which is less sensitive to small variations in the input value for $r_{\rm exp}$) and evaluate the compositeness $1-W_{a_0}$ as a function of $\Delta$. We give the results in Tables~\ref{tab.211111.2} and \ref{tabflatt6} for the $a_0(980)$ poles in the RS \Rmnum{2}, Eq.~\eqref{211009.1}, and RS \Rmnum{3}, Eq.~\eqref{eq:ratioa0}, respectively. The dependence on $\Delta$ is indicated by the subscript $\Delta$ in the fourth column, $(1-W_{a_0})_\Delta$.
Taking into account the variation of $1-W_{a_0}$ between $\Delta\approx \Gamma_{a_0}$ and $\Delta\approx 1.5 \Gamma_{a_0}$, we give our range of values calculated for $1-W_{a_0}$ in the next-to-last column of Tables~\ref{tab.211111.2} and \ref{tabflatt6}. The output is similar in both tables, with values of $1-W_{a_0}$ typically within the range $0.3-0.4$. When compared with $X$ from Table~\ref{tabflatt4}, given in the last column of Tables~\ref{tab.211111.2} and \ref{tabflatt6}, we see a quantitative agreement for the RS \Rmnum{3} pole, and a semiquantitative one for the RS \Rmnum{2} pole. The emerging picture is that $X$ is clearly less than 0.5, ranging between $0.2-0.4$ depending on the method of calculation. Therefore, other components beyond $\pi\eta$ and $K\bar{K}$ are also required \cite{Dai:2012kf,Sekihara:2014qxa}. However, if the resonance lay in the RS \Rmnum{4}, as preferred by recent analyses \cite{Guo:2016zep,Dudek:2016cru}, our approach would not apply and we could not extend such a conclusion to that case.
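As an illustration of how $W$ grows with the integration window, the construction can be mimicked numerically. The sketch below does not use the actual spectral density $\omega(E)$ of our analysis (which is obtained from the Flatt\'e parameterization and exhibits the $K\bar{K}$ cusp visible in Fig.~\ref{picflatt}); it simply integrates a unit-normalized Lorentzian of width comparable to $\Gamma_{a_0}$ over growing intervals $[-\Delta,\Delta]$, with all function names being our own.

```python
import math

def spectral_density(E, gamma):
    # Illustrative Lorentzian line shape of full width `gamma` (MeV),
    # normalized to unit area over the whole real axis; a stand-in for
    # the actual omega(E) of a near-threshold resonance.
    return (gamma / (2.0 * math.pi)) / (E * E + (gamma / 2.0) ** 2)

def resonance_probability_W(delta, gamma, n=20000):
    # Midpoint-rule integral of the spectral density over [-delta, delta].
    h = 2.0 * delta / n
    return sum(spectral_density(-delta + (i + 0.5) * h, gamma) * h
               for i in range(n))

gamma_a0 = 100.0  # MeV, of the order of the a0(980) width
for delta in (50.0, 100.0, 150.0, 200.0):
    W = resonance_probability_W(delta, gamma_a0)
    print(f"Delta = {delta:5.0f} MeV  W = {W:.2f}  1 - W = {1.0 - W:.2f}")
```

As $\Delta$ grows, $W$ saturates toward 1 and $1-W$ decreases, the same qualitative trend seen down the rows of the tables; the quantitative values differ because the Lorentzian ignores the threshold cusp.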
\begin{table}[H]
\begin{center}
\caption{Resonance $a_0(980)$ with the pole position in the RS \Rmnum{2} from Ref.~\cite{CrystalBarrel:2019zqh}, Eq.~\eqref{211009.1}. The dependence of $W_{a_0}$ on the integration interval $[-\Delta,\Delta]$ for the $a_{0}(980)$ is shown with $\Delta$ up to $2\Gamma_{a_0}$. In column 5 we provide our estimated interval for $1-W_{a_0}$, and in the last one $X=X_1+X_2$ from Table~\ref{tabflatt4} is given.
\label{tab.211111.2}}
\begin{tabular}{|c|ccc|c|c|}
\Xhline{1pt}
~~$r_{\rm exp}$&~~~$[-\Delta, \Delta]$&~~~$W_{a_{0}}$& $(1-W_{a_0})_{\Delta}$ & $1-W_{a_0}$ & $X$\\
\Xhline{1pt}
\multirow{4}*{~~~0.85 \cite{Zyla:2020zbs}}~~~&~~~$[-50, 50 ]$~~~&~~~~$0.38$~~~ & 0.62 & &\\
~~~&~~~$[-100, 100 ]$~~~&~~~~$0.57$~~~ & $0.43$ & &\\
~~~&~~~$[-150, 150 ]$~~~&~~~$0.67$~~~& 0.33 & &\\
~~~&~~~$[-200, 200 ]$~~~&~~~$0.73$~~~& 0.27 & $0.33-0.43$ &$0.216 \pm 0.017$\\
\Xhline{1pt}
\multirow{4}*{~~~0.87 \cite{CrystalBarrel:2019zqh}}~~~&~~~$[-50, 50 ]$~~~&~~~~$0.39$~~~ & 0.61 & &\\
~~~&~~~$[-100, 100 ]$~~~&~~~$0.59$~~~& $0.41$ & &\\
~~~&~~~$[-150, 150 ]$~~~&~~~$0.68$~~~& 0.32 & &\\
~~~&~~~$[-200, 200 ]$~~~&~~~$0.74$~~~& 0.26 & $0.32-0.41$ &$0.198 \pm 0.016$\\
\Xhline{1pt}
\end{tabular}
\end{center}
\end{table}
\begin{table}[H]
\begin{center}
\caption{ Resonance $a_0(980)$ with the pole position in the RS \Rmnum{3} from Ref.~\cite{CrystalBarrel:2019zqh}, Eq.~\eqref{eq:ratioa0}. The dependence of $W_{a_0}$ on the integration interval $[-\Delta,\Delta]$ for the $a_{0}(980)$ is shown with $\Delta$ up to $2\Gamma_{a_0}$. In column 5 we provide our estimated interval for $1-W_{a_0}$, and in the last one $X=X_1+X_2$ from Table~\ref{tabflatt4} is given.
\label{tabflatt6}}
\begin{tabular}{|c|ccc|c|c|}
\Xhline{1pt}
~~$r_{\rm exp}$&~~~$[-\Delta, \Delta]$&~~~$W_{a_{0}}$& $(1-W_{a_0})_\Delta$ & $1-W_{a_0}$ & $X$\\
\Xhline{1pt}
\multirow{5}*{~~~0.85 \cite{Zyla:2020zbs}}~~~&~~~$[-50, 50 ]$~~~&~~~~$0.39$~~~ & 0.61 & & \\
~~~&~~~$[-100, 100 ]$~~~&~~~~$0.59$~~~ & $0.41$ & & \\
~~~&~~~$[-150, 150 ]$~~~&~~~$0.69$~~~& 0.31 & &\\
~~~&~~~$[-200, 200 ]$~~~&~~~$0.75$~~~& 0.25 & &\\
~~~&~~~$[-250, 250 ]$~~~&~~~$0.79$~~~& 0.21 & $0.31-0.41$ &$0.303\pm0.030$\\
\Xhline{1pt}
\multirow{5}*{~~~0.87 \cite{CrystalBarrel:2019zqh}}~~~&~~~$[-50, 50 ]$~~~&~~~~$0.40$~~~ & 0.60 & &\\
~~~&~~~$[-100, 100 ]$~~~&~~~$0.60$~~~& $0.40$ & &\\
~~~&~~~$[-150, 150 ]$~~~&~~~$0.70$~~~& 0.30 & &\\
~~~&~~~$[-200, 200 ]$~~~&~~~$0.76$~~~& 0.24 & &\\
~~~&~~~$[-250, 250 ]$~~~&~~~$0.80$~~~& 0.20 & $0.30-0.40$ &$0.279\pm 0.037$\\
\Xhline{1pt}
\end{tabular}
\end{center}
\end{table}
We would also like to comment on the clearly visible cusp effect in most of the curves of $\omega(E)$ in Fig.~\ref{picflatt}. This change in the shape of $\omega(E)$ below and above the two-kaon threshold is due to the fact that if the resonance pole lies in the RS \Rmnum{2} (\Rmnum{3}) then there is no associated pole in the RS \Rmnum{3} (\Rmnum{2}) above (below) the $K\bar{K}$ threshold. It is precisely the RS \Rmnum{3} (\Rmnum{2}) that connects with the physical region there.
\section{Summary and conclusions}
\label{sec.211106.1}
This paper discusses the importance of the continuum channels $\pi\pi$-$K\bar{K}$ and $\pi\eta$-$K\bar{K}$ in the composition of the $f_0(980)$ and $a_0(980)$ resonances, which is quantified by the concept of the total compositeness $X$.
In our calculation we exploit the tight relationship between the compositeness $X$, the mass and the decay width of a resonance.
The threshold of the $K\bar{K}$ pair is very close to the mass of each resonance and this fact has important consequences for our results. We develop two methods: one is based on saturating the total width and compositeness; the other relies on the use of a Flatt\'e parameterization and, in some instances, of the spectral function of a near-threshold resonance.
We provide input values for the mass and width of each resonance by taking their pole positions from relevant analyses in the literature.
Regarding the third input needed in our analyses, we first take input values for $X$ in the compositeness relationship and
we predict the couplings, partial compositeness coefficients and partial-decay widths to the $\pi\pi$ ($\pi\eta$) and $K\bar{K}$ channels for the $f_0(980)$ ($a_0(980)$).
There is an interesting trend in the results such that the larger $X$, the smaller the branching decay ratio to the lighter channel, $r_{\rm exp}$.
This is due to the increase of the coupling to the heavier $K\bar{K}$ channel with increasing $X$, which compensates for the reduced phase space available for the decay of the resonances into this channel. It is also found that for the $f_0(980)$ the partial compositeness coefficient of $K\bar{K}$, $X_2$, is larger by orders of magnitude than the corresponding one for $\pi\pi$, $X_1$. For the $a_0(980)$ the compositeness for $K\bar{K}$ is also larger than the one associated with $\pi\eta$, but as $X$ decreases they tend to become similar in size.
Another possibility is to replace the third input $X$ by reported values in the literature for $r_{\rm exp}$.
However, if $X_2$ is calculated in terms of the coupling squared to $K\bar{K}$ and the derivative of the unitary-loop function in the corresponding Riemann sheet, we typically find a large sensitivity to the input value for $r_{\rm exp}$. The situation is improved when using the method based on integrating the spectral density function around the $K\bar{K}$ threshold along the energy region comprising the resonance signal, so that more stable results are obtained under small changes in $r_{\rm exp}$. It turns out that for the poles considered the meson-meson components are dominant for the $f_0(980)$, while for the $a_0(980)$ they are subdominant.
Throughout the manuscript we have emphasized the need to distinguish in a Flatt\'e parameterization between the bare couplings/widths, on the one hand, and the dressed/renormalized ones, on the other hand. We have also shown how to calculate the latter ones. In addition, we discuss the relationship between the partial-decay widths directly calculated in terms of the dressed couplings and the actually measured ones. In this regard, we show the changes needed for a pole in the second Riemann sheet lying near the heavier threshold, such that the total width is then $\Gamma_R=\Gamma_1- \Gamma_2$, instead of the standard $\Gamma_R=\Gamma_1+\Gamma_2$ for a pole in the third Riemann sheet.
Finally, we stress that the compositeness concept, as a quantitative examination of the inner structure of a resonance/molecule, is a relevant tool to promote a step forward in the understanding of the structure of a hadronic state.
\acknowledgments
We would like to thank Zhi-Hui Guo, J.~R. Pel\'aez and J.~Ruiz de Elvira for useful discussions. The author XWK is supported by the National Natural Science Foundation of China (NSFC) under Project No.~11805012. JAO acknowledges partial financial support by the MICINN AEI (Spain) Grant No. PID2019-106080GB-C22/AEI/10.13039/501100011033, and by the EU Horizon 2020 research and innovation programme, STRONG-2020 project, under grant agreement No.~824093.
\bibliographystyle{apsrev4-1}
\section{INTRODUCTION}
End-to-end approaches to visuomotor learning are appealing in their ability to discover which features of an observed environment are most relevant for a task,
and to be able to exploit large amounts of training data to discover both a policy and a co-dependent visual representation.
Yet, the key benefit of such approaches---that they learn from task experience---is also their Achilles' heel when it comes to many real-world settings,
where behavioral training data is not unlimited and correct perception of the many rare events that can be encountered is critical for robust performance.
Learning all visual parameters of a visuomotor policy from task reward (or demonstration cloning) places an undue burden on task-level supervision or reward.
In autonomous driving scenarios, for example, an agent should ideally be able to perceive objects and vehicles with a wide range of appearance, even those that are not well represented in a behavioral training set.
Indeed, for many visuomotor tasks, there exist related datasets with supervision for perception tasks, such as detection or segmentation, that do not provide supervision for behaviour learning.
Learning the entire range of vehicle appearance from steering supervision alone, while optimal in the limit of infinite training data, clearly misses the mark in many practical settings. We propose to leverage such datasets when learning driving policies.
Classic approaches to robotic perception have employed separate object detectors to provide a fixed state representation to a rule-based policy.
Multistage methods, such as those which first segment a scene~\cite{muller2018driving},
can avoid some aspects of the domain transfer problem,
but do not encode discrete objects and thus are limited to holistic reasoning. End-to-end learning with pixel-wise attention can localize specific objects and provide interpretability, but throws away the existence of instances.
We propose an object-centric perception approach to deep control problems, and focus our experimentation on autonomous driving.
Existing end-to-end models are holistic in nature; our approach augments policy learning with explicit representations that provide object-level attention.
In this work, we explore a taxonomy of representations that considers different levels of object-centric representation, with varying discreteness and sparsity.
We define a family of approaches to object-centric models, and provide a comparative evaluation of the benefit of incorporating object knowledge either at a pixel or box level, with either sparse or dense coverage, and with either pooled or concatenated features.
We evaluate our models in a challenging simulated driving environment with many cars and pedestrians, as well as on real dash-cam data, as shown in Figure~\ref{fig:challenge}.
We show that using a sparse and discrete object-centric representation with learned per-object attention outperforms previous methods in on-policy evaluations; such models additionally provide interpretability about which objects were determined to be most relevant to the policy.
\begin{figure}[t!]
\begin{center}
\begin{tabular}{c | c}
\includegraphics[width=0.45\linewidth]{obj0.jpg} &
\includegraphics[width=0.45\linewidth]{nexar0.jpg} \\
\includegraphics[width=0.45\linewidth]{obj2.jpg} &
\includegraphics[width=0.45\linewidth]{nexar4.jpg} \\
\end{tabular}
\end{center}
\caption{Our method uses discrete objects as part of the policy model for driving in traffic. The learned selector identifies the objects most relevant to the policy, which is often the nearest car. We evaluate on both simulated (left, GTAV) and real (right, BDDV) datasets.}
\label{fig:challenge}
\end{figure}
\begin{figure*}[ht]
\begin{center}
\includegraphics[width=0.8\linewidth]{fig2.pdf}
\end{center}
\vspace{-5mm}
\caption{Overview of our object-centric architecture. An image is first passed through a convolutional network, which outputs RoI-pooled features for each object along with globally pooled features for the whole image. An object-level attention layer then calculates a task-oriented importance score for each RoI. The linear policy layer takes both the global and object features and predicts the action for the next step.}
\label{fig:pipeline}
\end{figure*}
\begin{figure*}[ht]
\begin{center}
\includegraphics[width=0.8\linewidth]{fig3.pdf}
\end{center}
\vspace{-3mm}
\caption{An illustration of the representation taxonomy we describe in Section~\ref{sec:taxonomy}. (a) shows a global image representation that does not leverage objects. (b) is a continuous (pixel-level) attention that selects salient parts of the image. (c) is a dense and discrete object representation that selects all objects in the scene. (d) is a discrete but sparse object representation that only selects the objects important for the task, and (e) is a sparse representation that treats each object individually by concatenating instead of averaging the object features.}
\label{fig:taxonomy}
\end{figure*}
\section{RELATED WORK}
Approaches to robot skill learning face bias/variance trade-offs,
including in the definition of a policy model.
One extreme of this trade-off is to make no assumptions about the structure of the observations,
such as end-to-end behavior cloning from raw sensory data~\cite{bojarski2016end, bojarski2017explaining, xu2017end}.
At the opposite end, one can design a policy structure that is very specific to a particular task,
\textit{e}.\textit{g}. for driving by calculating margins between cars, encoding lane following, and tracking pedestrians~\cite{huval2015empirical}.
Such modular pipelines with rule-based systems dominate the autonomous driving industry~\cite{thrun2006stanley, urmson2008autonomous, ziegler2014making}.
The first attempt at training an end-to-end driving policy from raw inputs traces back to the 1980s with ALVINN~\cite{pomerleau1989alvinn}.
Muller \textit{et al}. revisited this idea to help off-road mobile robots with obstacle avoidance system~\cite{muller2006off}.
Recently, Bojarski \textit{et al}. demonstrate the appeal of foregoing structure by training a more advanced convolutional network to imitate demonstrated driving~\cite{bojarski2016end, bojarski2017explaining}.
Xu \textit{et al}. advocate learning a driving policy from an uncalibrated crowd-sourced video dataset~\cite{xu2017end} and show their model can predict the true actions taken by the drivers from RGB inputs.
Codevilla \textit{et al}. leverage the idea of conditional imitation learning on high-level command input in order to resolve the ambiguity in action space~\cite{codevilla2017end}.
These end-to-end models, which automatically discover and construct the mapping from sensory input to control output,
reduce the burden of hand-crafting rules and features.
However, these approaches have not yet been shown to work in complex environments, such as intersections with other drivers and pedestrians.
We address how to best represent images for
robotics tasks such as driving.
Muller \textit{et al}. train a policy model from the semantic segmentation of images, which increases generalization from synthetic to real-world~\cite{muller2018driving}.
Chen \textit{et al}. provide an additional intermediate stage for end-to-end learning,
which learns the policy on the top of some ConvNet-based measurements,
such as affordance of road/traffic state for driving~\cite{chen2015deepdriving}.
Sauer \textit{et al}. combine the advantages of conditional learning and affordance~\cite{sauer2018conditional}.
The policy module is built on a set of low-dimensional affordance measurements, with the given navigation commands.
We argue for an object-centric approach which allows objects to be handled explicitly by the model. Prior work has
encoded objects as bounding box positions~\cite{devin2017deep} for manipulation tasks, but does not use end-to-end training and discards the features of the objects, instead just concatenating their pixel positions. We expand upon this work and evaluate a taxonomy of ``object-centric" neural network models on the driving task.
\section{OBJECT-CENTRIC POLICIES}
We describe a generic architecture that takes in RGB images and outputs actions. Our model expresses a series of choices that provide different object properties to the model. Our goal is to identify which aspects are important for visuomotor tasks such as autonomous driving. Algorithm~\ref{alg:object} provides pseudo-code for implementing the different variants of our method.
\subsection{Generic Architecture}
The generic form of our model takes in an RGB image and outputs two sets of features: global image contextual features and an object-centric representation.
The global contextual features are produced by a convolutional network over the whole image, followed by a global average pooling operation.
The object-centric representation is constructed as described below to produce a fixed-length object-centric representation.
The global features are concatenated with the object representation, and passed to a fully connected policy network which outputs a discretized action.
For on-policy evaluation, a hard-coded PID controller converts the action to low-level throttle, steer, and brake commands.
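A minimal sketch of this forward pass in plain Python, with illustrative function names of our own (the actual model operates on convolutional and RoI-pooled feature maps rather than hand-built vectors):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def policy_forward(global_feat, object_feats, selector_w, policy_W, policy_b):
    # Score every object's RoI feature vector with the selector,
    # softmax-normalize the scores, and form the object-centric
    # representation as the attention-weighted sum of object features.
    scores = [dot(selector_w, f) for f in object_feats]
    weights = softmax(scores)
    dim = len(object_feats[0])
    obj_repr = [sum(w * f[d] for w, f in zip(weights, object_feats))
                for d in range(dim)]
    # Concatenate global and object features, apply the linear policy
    # layer, and return the index of the discrete action with max logit.
    x = list(global_feat) + obj_repr
    logits = [dot(row, x) + b for row, b in zip(policy_W, policy_b)]
    return logits.index(max(logits))
```

In the real system the selector, policy layer, and convolutional backbone are trained jointly on the behavioral cloning objective.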
\input{algorithm.tex}
\subsection{Objectness Taxonomy}
\label{sec:taxonomy}
What does it mean for an end-to-end model to be ``object-centric"?
In this section, we define a taxonomy of structures that leverage different aspects of ``objectness".
By defining this taxonomy and placing previous work within it, we evaluate which aspects bring the greatest gains in performance in various driving scenarios.
The aspects discussed are \emph{countability}, \emph{selection}, and \emph{aggregation}. Figure~\ref{fig:taxonomy} visualizes our representation taxonomy.
\subsubsection{Countability: Discrete vs Continuous}
An example of a continuous object-centric representation is a pixel-level attention map over an image, as used in~\cite{kim2017interpretable}.
In contrast, a discrete representation could be a bounding box or instance mask.
The potential benefit of keeping a discrete object structure is that a model may need to reason explicitly over instances (such as cars navigating an intersection) rather than reasoning over a bag-of-vehicles representation.
Our implementation of discrete objects applies a pre-trained FPN detector \cite{lin2017feature} to output bounding boxes for vehicles and pedestrians. We utilize an RoI-pooling layer \cite{girshick2015fast} to extract region features for each box. The boxes and their respective features are treated as a set of objects.
In the discrete setting, we define $O$ as the list of objects returned by the detector, and $f(o_i)$ as the RoI features of the $i$-th object. We define $G$ as the global features from the whole image.
\subsubsection{Selection: Sparse vs Dense}
Should the policy model reason over all objects at once (dense), or should it first select a fixed number (sparse) of salient objects and consider only those?
The former allows more flexibility, but \textit{e}.\textit{g}. may distract the policy with cars that are very far away or separated from the agent by a median.
To obtain a relevance score for each object, we train a task-specific selector jointly with the policy to minimize the behavioral cloning loss. The selector is a network that takes in the RoI features of each object concatenated with the global image features and outputs a scalar score indicating the relevance of the object. The scores $w$ are passed through a softmax to produce a weight between 0 and 1 for each object.
In the sparse model, only the top $k$ scoring objects are used in the policy.
\subsubsection{Aggregation: Sum vs Concatenate}
If using discrete objects, a decision needs to be taken about how to combine the objects into a single representation.
One possible approach is to weight and sum the features of the objects, while another approach is to concatenate the features.
The former is agnostic to the number of objects and is order invariant, while the latter may allow for more nuanced computation about multi-object decisions. Our implementation of the concatenation approach is to sort the objects by their selector weights and concatenate the features $\bar{w}_i * f_i$ in order from largest $w_i$ to smallest.
\begin{figure*}[ht]
\begin{center}
\includegraphics[width=\textwidth]{stat_content.jpg}
\includegraphics[width=\textwidth]{stat_legend.jpg}
\end{center}
\vspace{-3mm}
\caption{Driving performance. From left to right: driving distance between interventions, number of interventions per 100m, number of collisions per 100m. The top row shows results using a learned detection model, while the bottom row uses ground-truth bounding boxes. The object-centric models (green) overall perform better than the object-agnostic models (blue), with the sparse models being the best. The highway environment is easier to drive in than the urban environment. Comparing the heuristic selector with the learned selector used in the ``sparse object" model, it is clear that learning a selector provides better results. }
\label{fig:onpolicy}
\end{figure*}
\section{EXPERIMENTS}
We evaluate our object-centric models on both a simulated environment and a real-world dataset. Specifically, we use the Grand Theft Auto V simulation \cite{krahenbuhl2018free} and the Berkeley DeepDrive Video dataset \cite{xu2017end} for online and offline evaluation, respectively. All models are
trained on a behavioral cloning objective.
\subsection{Evaluation Setup}
\subsubsection{Online Driving Simulation}
For the simulation experiments, $1.6$ million training frames were collected by using the in-game
navigation system as the expert policy. Following a DAgger-like~\cite{ross2011reduction} augmented imitation learning pipeline, noise was added to the control command every 30 seconds to generate diverse behavior. The noisy control frames and the following $\sim 7$ frames were dropped during training to avoid replicating noisy behavior. The simulation was rendered at 12 frames per second. The training dataset was collected over 1000 random paths across 2km in the game. The in-game times ranged from 8:00 am to 7:00 pm with the default weather condition set to ``cloudy".
Each frame included control signals, such as speed, angle, throttle, steering, and brake, as well as ground-truth bounding boxes around vehicles and pedestrians. During our training and testing procedure we used a camera in front of the car which keeps a fixed $60^\circ$ horizontal field of view (FoV). The maximum speed of all vehicles was set to 20km/h.
When training a policy, the expert's continuous action was discretized into 9 actions: (\textit{left, straight, right}) $\times$ (\textit{fast, slow, stop}). At evaluation time, we used a PID controller (shared for all models) to translate the discrete actions into continuous control signals per frame.
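The $3\times3$ discretization can be sketched as below; the steering and speed thresholds here are illustrative placeholders of our own, not the values used in our pipeline:

```python
def discretize_action(steer, speed, steer_thresh=0.1, slow_speed=5.0):
    # Map a continuous (steer, speed) expert command onto one of the
    # 9 classes (left/straight/right) x (fast/slow/stop).
    # Thresholds are illustrative assumptions.
    if steer < -steer_thresh:
        direction = "left"
    elif steer > steer_thresh:
        direction = "right"
    else:
        direction = "straight"
    if speed <= 0.0:
        pace = "stop"
    elif speed < slow_speed:
        pace = "slow"
    else:
        pace = "fast"
    return direction, pace
```

At test time the inverse step is handled by the shared PID controller, which turns each discrete class back into smooth throttle, steer, and brake signals.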
For testing, we deployed the model in $8$ locations unseen during training: $2$ highway and $6$ urban intersections.
Figure~\ref{fig:selection} demonstrates some example scene layouts in our simulation environment.
For each location, we tested the model for $100$ minutes: the agent was run for 10 independent roll-outs lasting 10 minutes each.
If the vehicle crashed or got stuck during a rollout, the incident was recorded and the in-game AI took over for at least 15 seconds until the vehicle recovered.
An extreme accident which took more time to recover from would be penalized more in our metric, as the vehicle would travel less overall distance; frames during an intervention were not counted towards the total.
The models were evaluated with several metrics.
For each roll-out, we calculated the total distance travelled, the number of collisions, and the number of interventions by the in-game AI.
To compare across roll-outs, we computed the distance driven between AI interventions, the number of collisions and interventions per 100m traveled.
\begin{figure*}[ht]
\begin{center}
\includegraphics[width=0.325\linewidth]{intervention.jpg}
\includegraphics[width=0.325\linewidth]{dist.jpg}
\includegraphics[width=0.325\linewidth]{dist_highway.png}
\end{center}
\vspace{-3.5mm}
\caption{Analysis of intervention frequency. Same legend as Figure~\ref{fig:onpolicy}. On the left, the shaded region indicates the proportion of interventions caused by collisions. In the highway environment, almost all interventions are caused by collisions, but in the urban environment the policy gets stuck at intersections, as shown in the supplementary video.
On the right, histograms show how far each model drove between interventions and collisions. In the urban environment the object-centric approaches drove farthest before an intervention. In the highway environment, the pixel attention performs slightly better.}
\label{fig:urban}
\end{figure*}
\begin{figure*}[ht]
\begin{center}
\includegraphics[width=\linewidth]{fig6.pdf}
\end{center}
\vspace{-4mm}
\caption{Sample trajectories from the evaluation. Yellow dots indicate interventions while red dots indicate collisions (best viewed on screen). This example illustrates the reliability of the object-centric models over the baselines, with fewer collisions \& interventions and greater road coverage.}
\label{fig:GPS}
\end{figure*}
\subsubsection{Real-world Offline Dataset}
We used $2.2$ million training frames and $0.2$ million testing frames from a large-scale crowd-sourced dash-cam video dataset with diverse driving behaviors.
Each frame was accompanied by raw sensory data from GPS, IMU, gyroscope, magnetometer, as well as sensor-fused measurements like course and speed.
As in Xu et al.~\cite{xu2017end}, the model was trained to predict the expert's future linear and angular speeds for each frame at intervals of $1/3$ seconds during training. For evaluation, speed and angle were each discretized into $30$ bins, and the joint distribution of speed and angle was mapped into $30\times30=900$ bins.
Following the method of Xu et al., we evaluated the $900$-way classification model by the perplexity of the model on withheld test data.
Specifically, we used the value of the softmax loss function as the perplexity indicator.
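The discretization and the perplexity computation can be sketched as follows; the speed and angle ranges below are illustrative assumptions, not the exact bin edges used in the evaluation.

```python
import math

N_BINS = 30  # bins per dimension, giving a 30 x 30 = 900-way classification

def to_bin(value, lo, hi, n=N_BINS):
    """Equal-width binning of a continuous value over [lo, hi), clamped."""
    i = int((value - lo) / (hi - lo) * n)
    return min(max(i, 0), n - 1)

def joint_bin(speed, angle, speed_range=(0.0, 30.0),
              angle_range=(-math.pi, math.pi)):
    """Joint class index for a (speed, angle) pair; ranges are illustrative."""
    return to_bin(speed, *speed_range) * N_BINS + to_bin(angle, *angle_range)

def perplexity(log_probs):
    """Perplexity = exp(mean softmax loss) over the held-out frames."""
    return math.exp(-sum(log_probs) / len(log_probs))
```

A uniform prediction over the 900 bins yields a perplexity of 900, which serves as a sanity bound for any trained model.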
\input{table.tex}
\subsection{Implementation Details}
The convolutional backbone is a 34-layer DLA model~\cite{yu2018deep} pre-trained on ImageNet~\cite{russakovsky2015imagenet} and implemented in the open-source framework PyTorch~\cite{paszke2017automatic}. For all models, including the baseline and attention models, this network is trained end-to-end as part of the policy.
We use a Detectron model~\cite{Detectron2018} trained on MSCOCO~\cite{lin2014microsoft} to generate bounding boxes for moving objects, specifically vehicles and pedestrians.
We used the Adam optimizer~\cite{kingma2014adam} for $3$ epochs with initial learning rate $0.001$, weight decay $10^{-4}$, and batch size 128.
Unlike in~\cite{codevilla2017end, sauer2018conditional}, we do not use any data augmentation. All sparse models use $k=5$ to keep the top $5$ objects and discard the rest. The selector is a linear mapping from object features to object scores, optimized jointly with the features and the policy.
A final linear layer maps from the concatenation of the object and global features to the predicted action.
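The selector and the feature-aggregation variants reduce to a few lines of array arithmetic. The sketch below uses random stand-ins for the feature dimensions, selector weights, and boxes (not our trained parameters) to illustrate scoring, top-$k$ selection, weighted summation versus concatenation, and the box-size heuristic.

```python
import numpy as np

rng = np.random.default_rng(0)
D, N, K = 8, 12, 5                      # feature dim, objects in scene, top-k
feats = rng.normal(size=(N, D))         # per-object features (random stand-ins)
wh = rng.uniform(1, 20, size=(N, 2))    # hypothetical box widths/heights
w_sel = rng.normal(size=D)              # linear selector, jointly optimized

scores = feats @ w_sel                  # one scalar score per object
top = np.argsort(scores)[::-1][:K]      # indices of the K highest-scoring objects

dense_sum = (scores[:, None] * feats).sum(axis=0)          # weighted sum, all objects
sparse_sum = (scores[top, None] * feats[top]).sum(axis=0)  # weighted sum, top-K only
sparse_concat = feats[top].reshape(-1)                     # ordered list of top-K features

areas = wh.prod(axis=1)                 # heuristic selector: larger box = higher score
heuristic_top = np.argsort(areas)[::-1][:K]
```

In the concatenation variant, the fixed ordering by score gives the policy positional information about object importance that the summation variants discard.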
\subsection{Results}
We evaluate several baselines, prior methods, and ablations.
The \emph{baseline} method does not represent objects or use attention at inference time.
The \emph{pixel attention} method is the same as \emph{baseline} but with an additional pixel-level attention mechanism, learned end-to-end with the task.
Next, we evaluate several object-centric models drawn from our taxonomy. The results labeled \emph{dense object} use a discrete and dense object representation with summation of the objects weighted by a learned selector. \emph{Sparse object} is the same as \emph{dense object}, but only looks at the top 5 objects in the scene, as scored by the learned selector. While the preceding models used the selector to weight object features before summing them, \emph{sparse object concat} concatenates the features of the top 5 objects and passes the entire list to the fully connected policy.
We also evaluate our selector by comparing to a heuristic selector: the size of the object's bounding box. The results using the heuristic selector in a sparse object model are labeled \emph{heuristic selector}.
The results of the on-policy simulated driving are shown in Figure~\ref{fig:onpolicy}. We show several metrics: the number of collisions, the number of times the agent got stuck, and the distance driven between these. Each evaluation was repeated for two environments: urban (which has many intersections and cars/pedestrians) and highway (which is mostly driving straight). The object-centric methods consistently outperform the two object-agnostic methods in the urban evaluation, while the highway environment shows good performance for all attention models.
The comparable performance between the evaluation with ground truth boxes versus predicted boxes (from a detector trained on MSCOCO~\cite{lin2014microsoft}) indicates that our method is robust to noisy detections. Figure~\ref{fig:GPS} visualizes evaluation roll-outs along a map with collisions and interventions drawn in. These maps show how the object centric models drive for longer without crashing or getting stuck, and how they end up farther from their start point than the baseline and pixel attention models. This is supported by the histograms of distance between interventions in Figure~\ref{fig:urban} which shows how the sparse models especially drive farther between interventions.
\begin{figure*}[ht]
\begin{center}
\includegraphics[width=0.233\linewidth]{boxsucc0.jpg}
\includegraphics[width=0.233\linewidth]{boxsucc1.jpg}
\includegraphics[width=0.233\linewidth]{boxsucc6.jpg}
\includegraphics[width=0.233\linewidth]{boxsucc7.jpg}
\includegraphics[width=0.233\linewidth]{attsucc0.jpg}
\includegraphics[width=0.233\linewidth]{attsucc1.jpg}
\includegraphics[width=0.233\linewidth]{attsucc6.jpg}
\includegraphics[width=0.233\linewidth]{attsucc7.jpg}
\includegraphics[width=0.233\linewidth]{boxfail0.jpg}
\includegraphics[width=0.233\linewidth]{boxfail1.jpg}
\includegraphics[width=0.233\linewidth]{boxfail2.jpg}
\includegraphics[width=0.233\linewidth]{boxfail3.jpg}
\includegraphics[width=0.233\linewidth]{attfail0.jpg}
\includegraphics[width=0.233\linewidth]{attfail1.jpg}
\includegraphics[width=0.233\linewidth]{attfail2.jpg}
\includegraphics[width=0.233\linewidth]{attfail3.jpg}
\end{center}
\caption{Sample scenes from the Grand Theft Auto V simulation with our sparse model's learned object selector compared against a learned pixel-level attention. For rows 1 and 3, red indicates a high scoring object, and blue is low scoring (best viewed on screen). For rows 2 and 4,
the pixel attention is shown by the brightness of the pixels.
The actions output by each model are shown by the white squares in the corners: accelerator is the top square, and the bottom squares are turn left, brake, and turn right, respectively. A single action may both turn and accelerate or brake.
Rows 1 and 2 show both models performing well, while rows 3 and 4 show the pixel attention model ignoring pedestrians and deciding to accelerate towards them. The object-centric model is more conservative and attends strongly to the pedestrians, choosing to slow down instead of speeding up.}
\label{fig:selection}
\end{figure*}
\begin{figure*}[ht]
\begin{center}
\includegraphics[width=0.233\linewidth]{n2.jpg}
\includegraphics[width=0.233\linewidth]{n3.jpg}
\includegraphics[width=0.233\linewidth]{n4.jpg}
\includegraphics[width=0.233\linewidth]{n5.jpg}
\end{center}
\caption{Sample scenes from the Berkeley DeepDrive Video dataset with the sparse model's learned selector visualized. Red indicates a high scoring object, and blue is low scoring (best viewed on screen). Our method is robust to imperfect detections, such as overlapping bounding boxes, for both day and night scenes.}
\label{fig:selectionnexar}
\end{figure*}
To identify the benefits of using a learned selector over boxes, we compared the \emph{sparse object} model against a heuristic selector, which assigns importance to objects based on their size. The motivation for this heuristic is that larger objects are likely to be closer, and therefore more important for the policy. Figure~\ref{fig:onpolicy} shows that the model with a learned selector performs as well as or better than the heuristic for every metric. Although some other heuristic may work better, we conclude that learning the selector jointly with the policy is beneficial.
The final experiment in Table~\ref{table:realperplexity} is an off-policy evaluation on the real world dataset that measures the perplexity of the learned model with respect to test data. As we cannot evaluate the policies in the real world, the perplexity provides a proxy for how well the data can be modeled.
When trained on only a subset of the data (from 5\% to 50\%), the sparse object models perform best, with concatenation overtaking summation in the medium data regime.
The concatenation model performs as well as the baseline once all the data has been seen, indicating that the sparse model is advantageous for low data problems, and that the sparse concat model is ideal for medium to large data situations.
The object prior that our models leverage allows them to learn quickly from little data without being distracted by irrelevant pixels.
Figure~\ref{fig:selectionnexar} shows example scenes with our model's attention.
\section{CONCLUSION}
We defined a taxonomy over object-centric models and showed in an on-policy evaluation that sparse object models outperformed object-agnostic models according to our metrics of distance driven and frequency of collisions and interventions. Our results show that highway driving is significantly easier than navigating intersections; the necessity of navigating city environments showcases the advantages of representing objects. Discreteness and sparsity, along with a learned selection mechanism, seem to be the most important aspects of object-centric models.
For simplicity, this work only considered the presence of vehicles and pedestrians and did not evaluate the policy's ability to follow the rules of the road. Using generic object detection rather than class specific detection is an obvious direction of future work, and would hopefully lead to paying attention to streetlights, signage, and other objects relevant to driving. These types of objects are crucial for following the rules of the road, and we expect that object-centric policies which attend to these objects would provide even more gains. Promising avenues for future work also include leveraging the 3D nature of objects and their temporal coherence.
\newpage
\section{ACKNOWLEDGEMENTS}
This work was supported by Berkeley AI Research and Berkeley Deep Drive. Coline Devin is supported by an NSF Graduate Research Fellowship.
\addtolength{\textheight}{-12cm}
\bibliographystyle{IEEEtran}
\section{Introduction}
There are at least three equivalent approaches to (classical) convolutions of probability measures on groups and semi-groups, which we shall call here the harmonic analysis, the probabilistic, and the characteristic functions approach. The main goal of the present paper is to prove the equivalence of the probabilistic and the characteristic functions approach for the convolutions based on monotone and boolean independence introduced recently.
In harmonic analysis the convolution of two probability measures $\mu$ and $\nu$ on a group or semi-group $G$ is defined as the image measure $T^{-1}\mu\otimes\nu$ of their product $\mu\otimes\nu$ under the map $T:G\times G\to G$ defined by the (semi-) group operation, $T(x,y)=xy$. The probabilistic definition would be to take the law of the product $XY$ of two $G$-valued random variables $X$ and $Y$ as the convolution of $\mu$ and $\nu$, if $X$ and $Y$ are independent and distributed according to $\mu$ and $\nu$, respectively. The equivalence of the two approaches follows from the fact that two random variables are independent if and only if their joint law is the product of their marginals.
In the characteristic function approach one defines a bijection between probability measures and an appropriate class of functions, e.g.\ in the case of probability measures on the real line one usually chooses the Fourier transform, $\hat{\mu}(t)=\int_\mathbb{R} e^{itx}{\rm d}\mu(x)$. By Bochner's theorem, the Fourier transform defines a bijection between the probability measures on the real line and the class of uniformly continuous, positive definite functions on the real line, whose value at the origin is equal to one. Checking that this class is closed under pointwise products, one could define the (additive) convolution of the probability measures $\mu$ and $\nu$ on $\mathbb{R}$ as the unique probability measure $\lambda$ such that $\hat{\lambda}(t) = \hat{\mu}(t)\hat{\nu}(t)$ for all $t\in\mathbb{R}$. Usually this approach is not used to define the convolution but rather to compute it or to study its properties, prove limit theorems, etc.
These three approaches can also be used to define and study the free, monotone, and boolean convolutions, which are based on the corresponding notions of independence in quantum probability. For this purpose groups have to be replaced by dual groups \cite{voiculescu87,voiculescu90} and random variables have to be replaced by quantum random variables, i.e.\ appropriate classes of operators on Hilbert spaces. Furthermore, it is necessary to find adequate characteristic functions.
In the free case the equivalence of the probabilistic and the characteristic function approach was first proven in \cite{voiculescu86} for the additive free convolution on the real line and for probability measures with compact support. Later it was extended to multiplicative free convolutions \cite{bercovici+voiculescu92} and probability measures with non-compact support in \cite{maassen92,bercovici+voiculescu93}.
For the convolutions based on the monotone and boolean independence, this equivalence has been proven so far only for probability measures with compact support, cf.\ \cite{muraki00,bercovici04,bercovici05,franz05a} and \cite{speicher+woroudi93,franz04,bercovici04b}. For non-compactly supported probability measures these convolutions are currently defined only via their characteristic functions (which are certain functions of their Cauchy transforms), and not via the more natural harmonic analysis or probabilistic approach. In this paper, we will extend the equivalence between the probabilistic approach and the characteristic function approach to non-compactly supported probability measures.
This extension is necessary, e.g., for studying monotone and boolean counterparts of stable laws by probabilistic means. We also need it to extend the results of \cite{franz+muraki05} beyond the bounded case, and to construct and study unbounded quantum stochastic processes with monotonically or boolean independent increments. In the monotone case, these processes turn out to have an interesting relation to the theory of Loewner chains, cf.\ \cite{franz07a}.
The equivalence between the harmonic analysis approach and the probabilistic approach for the free convolutions was established in \cite{voiculescu87,voiculescu90}, where dual groups were introduced for this purpose. Due to the universal properties of the products underlying the free, monotone, and boolean independence, it follows in the same way for the monotone and boolean convolutions, cf. \cite{schuermann95b,benghorbal+schuermann02,franz03c,muraki03}.
To apply the probabilistic approach to non-compactly supported probability measures, it is necessary to extend the definition of monotone or boolean independence to unbounded operators. This is done in Definitions \ref{def-mon-indep} and \ref{def-bool-indep}. Two not necessarily bounded operators that admit a functional calculus for continuous functions are called monotonically or boolean independent, if the algebras consisting of bounded functions vanishing at the origin of these operators are monotonically or boolean independent. The restriction to functions vanishing at the origin is necessary, because for monotone and boolean independence one has to allow non-unital algebras to get non-trivial examples, see also \cite{benghorbal+schuermann02,muraki03}.
As a next step we show that the general case of two monotonically or boolean independent normal operators can be reduced to a universal model which allows explicit calculations, see Theorems \ref{thm-mon-model} and \ref{thm-bool-model}.
In Theorems \ref{thm-mon-add-conv} and \ref{thm-bool-add-conv} we then show that the sum of two monotonically or boolean independent self-adjoint operators is essentially self-adjoint, if the state vector is cyclic for the algebra generated by these two operators. This condition is not very restrictive. If the state vector is not cyclic, then one can always restrict to the subspace that is generated from it. Furthermore, we show that the distribution of the sum is equal to the additive monotone or boolean convolution of the distributions of the two operators. This justifies the name {\em additive} monotone or boolean convolution.
Next we treat the multiplicative convolutions of probability measures on the positive half-line. This case is more subtle, because there are many possibilities for constructing ``multiplicatively'' a positive operator out of two given positive operators. Here we consider only the two cases $(X,Y)\mapsto \sqrt{X}Y\sqrt{X}$ and $(X,Y)\mapsto \sqrt{Y}X\sqrt{Y}$. These operations are neither commutative nor associative. Nonetheless, in the free case, both lead to the same multiplicative free convolution of probability measures on the positive half-line, which is associative and commutative. This follows from the fact that the free product is trace-preserving.
But the monotone and the boolean product are not trace-preserving, and so it is not surprising that the situation becomes more complicated here.
In the monotone case, we show that the operation $(X,Y)\mapsto \sqrt{X}Y\sqrt{X}$ leads to a ``nice'' convolution product that agrees with the one defined in \cite{bercovici04}, see also \cite{franz05a} and Theorem \ref{thm-mon-mult}.
We also study the case $(X,Y)\mapsto \sqrt{Y}X\sqrt{Y}$, but it turns out that this leads to a convolution which is not associative and which does not seem to have a ``nice'' definition in terms of characteristic functions, see Subsection \ref{subsec-mon-mult-other}.
Then we consider these operations for boolean independent positive operators and give an alternative definition of the multiplicative boolean convolution for probability measures on the positive half-line. The definition proposed by Bercovici \cite{bercovici04b} has the disadvantage that it is not always defined. In Remark \ref{rem-new-bool-mult-conv}, we give a new probabilistically motivated definition of the multiplicative convolution of two probability measures on the positive half-line. But it turns out that this convolution is neither commutative nor associative.
Probability measures on the unit circle are of course always compactly supported and the associated operators are unitary and therefore bounded. In this case the equivalence between the probabilistic and the characteristic function approach was already established in \cite{bercovici04,franz05a,franz04}. For completeness we recall these results in Subsections \ref{subsec-mult-mon-T} and \ref{subsec-mult-bool-T}.
Finally, in Section \ref{sec-relations} we discuss some relations between free, monotone, and boolean convolutions.
\section{Preliminaries}
\subsection{Quantum probability}
In quantum probability the commutative algebra of random variables is replaced by a possibly non-commutative algebra $\mathcal{A}$ of operators on a Hilbert space $H$ and the role of the probability measure is taken by a state $\Phi$, i.e.\ a positive normalized functional on that algebra \cite{parthasarathy92}. In our paper this state will always be a vector state, i.e.\ of the form $\Phi(X)=\langle\Omega,X\Omega\rangle$ for some unit vector $\Omega\in H$. (Note that our inner products are linear in the second argument.)
If $X$ is a quantum random variable, i.e.\ an operator on a Hilbert space $H$, for which a functional calculus $C_b(\mathbb{C})\ni h\mapsto h(X)\in\mathcal{B}(H)$ for bounded continuous functions can be defined, then we call a probability measure $\mu$ on $\mathbb{C}$ the {\em distribution} of $X$ with respect to the vector state given by $\Omega\in H$, if
\[
\langle\Omega,h(X)\Omega\rangle = \int_\mathbb{C}h(x){\rm d}\mu(x)
\]
for all $h\in C_b(\mathbb{C})$. In this case we shall also write $\mu=\mathcal{L}(X,\Omega)$.
A densely defined, closed operator $X$ on a Hilbert space $H$ with domain ${\rm Dom}\,X$ is called {\em normal}, if the domains of $XX^*$ and $X^*X$ coincide and we have $XX^*=X^*X$ on this common domain.
By the spectral theorem, an operator $X$ on a separable Hilbert space $H$ is normal if and only if there exists a $\sigma$-finite measure space $(E,\mathcal{E},\nu)$ and an $\mathcal{E}$-measurable function $\phi$ s.t.\ $X$ is unitarily equivalent to the operator $M_\phi$ of multiplication by $\phi$ on $L^2(E,\mathcal{E},\nu)$, i.e., there exists a unitary operator $U:H\to L^2(E,\mathcal{E},\nu)$ s.t.\ $UX\xi = \phi U\xi$ for all $\xi\in{\rm Dom}\,X$, see, e.g., \cite{conway85,pedersen89}. It follows that normal operators are quantum random variables in the sense above, since a functional calculus for them can be defined by $h(X)=U^*M_{h\circ\phi}U$. The distribution of $X$ w.r.t.\ a unit vector $\Omega\in H$ is given by
\[
\mathcal{L}(X,\Omega) = \phi^{-1}\left(|U\Omega|^2\nu\right),
\]
since
\[
\langle \Omega, h(X)\Omega\rangle = \int_E h\circ\phi\, |U\Omega|^2{\rm d}\nu
\]
for all $h\in C_b(\mathbb{C})$.
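For a finite measure space the formulas above can be checked by direct computation. The following sketch (with an illustrative measure, function $\phi$, and state vector; $U$ is taken to be the identity) verifies that $\langle\Omega,h(X)\Omega\rangle$ equals the integral of $h$ against the pushforward measure.

```python
import numpy as np

# E = {0, 1, 2} with measure nu, X = multiplication by phi on L^2(E, nu),
# Omega the constant function 1 (a unit vector, since nu(E) = 1 here).
nu = np.array([0.2, 0.3, 0.5])
phi = np.array([-1.0, 2.0, 2.0])
Omega = np.ones(3)

def expect(h):
    """<Omega, h(X) Omega> = int_E h(phi) |Omega|^2 dnu."""
    return float(np.sum(np.conj(Omega) * h(phi) * Omega * nu))

# The distribution L(X, Omega) = phi^{-1}(|Omega|^2 nu) puts mass 0.2 at -1
# and mass 0.3 + 0.5 = 0.8 at 2, so integrals against it match expect().
mu = {-1.0: 0.2, 2.0: 0.8}
h = lambda x: x**2 + 1.0
```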
To deal with unitary equivalence of possibly unbounded normal operators, we will use the following lemma.
\begin{lemma}\label{lem-unitary-eq}
Let $X$ and $X'$ be possibly unbounded normal operators on Hilbert spaces $H$ and $H'$. Assume that there exists a unitary operator $U:H\to H'$ such that
\begin{equation}\label{eq-unitary-lem-cond}
Uf(X) = f(X')U
\end{equation}
for any bounded continuous function $f$ on $\mathbb{C}$.
Then $UX = X'U$.
\end{lemma}
To prove this lemma, one can first prove the analogous statement for self-adjoint operators, e.g., using resolvents, and then apply the properties of the decomposition $X=A+iB$ of normal operators as a linear combination of two commuting self-adjoint operators, cf.\ \cite[Proposition 5.1.10]{pedersen89}. It is actually sufficient to require Condition \eqref{eq-unitary-lem-cond} for a much smaller class of functions, e.g.\ compactly supported real-valued $C^\infty$-functions.
\subsection{Nevanlinna theory and Cauchy transforms}\label{nevanlinna}
Denote by $\mathbb{C}^+=\{z\in\mathbb{C};{\rm Im}\,z>0\}$ and $\mathbb{C}^-=\{z\in\mathbb{C};{\rm Im}\,z<0\}$ the upper and lower half plane. For $\mu$ a probability measure on $\mathbb{R}$ and $z\in\mathbb{C}^+$, we define its {\em Cauchy transform} $G_\mu$ by
\[
G_\mu(z) = \int_{\mathbb{R}} \frac{1}{z-x}{\rm d}\mu(x)
\]
and its {\em reciprocal Cauchy transform} $F_\mu$ by
\[
F_\mu(z) = \frac{1}{G_\mu(z)}.
\]
Denote by $\mathcal{F}$ the following class of holomorphic
self-maps,
\[
\mathcal{F}=\left\{F:\mathbb{C}^+\to\mathbb{C}^+; F\mbox{
holomorphic and }\inf_{z\in\mathbb{C}^+} \frac{{\rm Im}\, F(z)}{{\rm
Im}\,z} = 1\right\}
\]
The map $\mu\mapsto F_\mu$ defines a bijection between the class $\mathcal{M}_1(\mathbb{R})$ of probability measures on $\mathbb{R}$ and $\mathcal{F}$, as the following theorem shows.
\begin{theorem}\cite{maassen92}
Let $F: \mathbb{C}^+\to\mathbb{C}^+$ be holomorphic, then the
following are equivalent.
\begin{description}
\item[(i)] $\inf_{z\in\mathbb{C}^+} \frac{{\rm Im}\, F(z)}{{\rm
Im}\,z} = 1$;
\item[(ii)]
there exists a $\mu\in\mathcal{M}_1(\mathbb{R})$ such that
$F=F_\mu$.
\end{description}
Furthermore, $\mu$ is uniquely determined by $F$.
\end{theorem}
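For measures with finitely many atoms these transforms can be evaluated directly. The sketch below (with arbitrary illustrative atoms and weights) checks numerically that $G_\mu$ maps $\mathbb{C}^+$ to $\mathbb{C}^-$, that ${\rm Im}\,F_\mu(z)\ge{\rm Im}\,z$, and that $zG_\mu(z)\to 1$ along the imaginary axis.

```python
# Discrete probability measure mu = sum_j w_j delta_{x_j} on the real line.
atoms = [-1.0, 0.5, 3.0]
weights = [0.2, 0.5, 0.3]

def G(z):
    """Cauchy transform G_mu(z) = int 1/(z - x) dmu(x)."""
    return sum(w / (z - x) for x, w in zip(atoms, weights))

def F(z):
    """Reciprocal Cauchy transform F_mu = 1/G_mu."""
    return 1.0 / G(z)
```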
Similarly, for $\mu$ a probability measure on the unit circle
$\mathbb{T}=\{z\in\mathbb{C};|z|=1\}$ or on the positive half-line
$\mathbb{R}_+=\{x\in\mathbb{R};x\ge 0\}$, we define
\[
\psi_\mu(z) = \int \frac{xz}{1-xz}{\rm d}\mu(x)
\]
and
\[
K_\mu(z)=\frac{\psi_\mu(z)}{1+\psi_\mu(z)}
\]
for $z\in\mathbb{C}\backslash{\rm supp}\,\mu$.
The map $\mu\mapsto K_\mu$ defines bijections between the class $\mathcal{M}_1(\mathbb{T})$ of probability measures on $\mathbb{T}$ and the class
\[
\mathcal{S}=\{K:\mathbb{D}\to\mathbb{D};K\mbox{ holomorphic and
}K(0)=0\},
\]
where $\mathbb{D}=\{z\in\mathbb{C};|z|<1\}$, and between the class $\mathcal{M}_1(\mathbb{R}_+)$ of probability measures on $\mathbb{R}_+$ and the class
\[
\mathcal{P}=\left\{K:\mathbb{C}\backslash\mathbb{R}_+\to
\mathbb{C}\backslash\mathbb{R}_+; \begin{array}{c} K \mbox{
holomorphic},\lim_{t\nearrow 0} K(t)=0,
K(\overline{z})=\overline{K(z)}, \\ \pi\ge\arg K(z)\ge\arg z\mbox{
for all }z\in \mathbb{C}^+\end{array}\right\},
\]
cf.\ \cite{belinschi+bercovici05} and the references therein.
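For a finitely supported measure on $\mathbb{T}$, the transforms $\psi_\mu$ and $K_\mu$ can likewise be evaluated directly; since $K_\mu\in\mathcal{S}$, the Schwarz lemma gives $|K_\mu(z)|\le|z|$ on $\mathbb{D}$. The sketch below (with arbitrary illustrative atoms and weights) checks this numerically.

```python
import cmath

# mu supported on three points of the unit circle T.
atoms = [cmath.exp(1j * t) for t in (0.0, 2.0, 4.0)]
weights = [0.5, 0.3, 0.2]

def psi(z):
    """psi_mu(z) = int x z / (1 - x z) dmu(x)."""
    return sum(w * x * z / (1 - x * z) for x, w in zip(atoms, weights))

def K(z):
    """K_mu = psi_mu / (1 + psi_mu), a holomorphic self-map of D fixing 0."""
    return psi(z) / (1 + psi(z))
```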
In the following, if $X$ is an operator with distribution $\mu=\mathcal{L}(X,\Omega)$ w.r.t.\ $\Omega$, then we will write $G_X$, $F_X$, $\Psi_X$ or $K_X$ instead of $G_{\mathcal{L}(X,\Omega)}$, $F_{\mathcal{L}(X,\Omega)}$, $\psi_{\mathcal{L}(X,\Omega)}$, or $K_{\mathcal{L}(X,\Omega)}$ for the transforms of the distribution of $X$.
\subsection{Free convolutions}\label{free-conv}
By $\mathbb{A}_k$ we call denote the set of alternating $k$-tuples of 1's and 2's, i.e.
\[
\mathbb{A}_k= \big\{ (\varepsilon_1,\ldots,\varepsilon_k)\in \{1,2\}^k; \varepsilon_1\not=\varepsilon_2\not=\ldots\not=\varepsilon_k\big\}.
\]
\begin{definition}
\cite{voiculescu86}
Let $\mathcal{A}_1,\mathcal{A}_2\subseteq\mathcal{B}(H)$ be two $*$-algebras of bounded operators on a Hilbert space and assume $\mathbf{1}\in\mathcal{A}_i$, $i=1,2$. Let $\Omega$ be a unit vector in $H$ and denote by $\Phi$ the vector state associated to $\Omega$. We say that $\mathcal{A}_1$ and $\mathcal{A}_2$ are {\em free}, if we have
\[
\Phi(X_1\cdots X_k) =0
\]
for all $k\ge 1$, $\varepsilon \in\mathbb{A}_k$, $X_1\in\mathcal{A}_{\varepsilon_1},\ldots,X_k\in\mathcal{A}_{\varepsilon_k}$ such that
\[
\Phi(X_1)=\cdots=\Phi(X_k)=0.
\]
\end{definition}
Two normal operators $X$ and $Y$ are called {\em free}, if the algebras ${\rm alg}(X)=\{h(X);h\in C_b(\mathbb{C})\}$ and ${\rm alg}(Y)=\{h(Y);h\in C_b(\mathbb{C})\}$ they generate are free.
\begin{theorem}\label{thm-parallelogramm}
\cite{maassen92,chistyakov+goetze05,chistyakov+goetze06}
Let $\mu$ and $\nu$ be two probability measures on the real line, with reciprocal Cauchy transforms $F_\mu$ and $F_\nu$. Then there exist unique functions $Z_1,Z_2\in\mathcal{F}$ such that
\[
F_\mu\big(Z_1(z)\big) = F_\nu\big(Z_2(z)\big)= Z_1(z)+Z_2(z)-z
\]
for all $z\in\mathbb{C}^+$.
\end{theorem}
The function $F=F_\mu\circ Z_1 = F_\nu\circ Z_2$ also belongs to $\mathcal{F}$ and is therefore the reciprocal Cauchy transform of some probability measure $\lambda$. One defines the additive free convolution of $\mu$ and $\nu$ as this unique probability measure and writes $\mu\boxplus\nu=\lambda$. This is justified by the following theorem.
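The subordination equations can be rewritten as $Z_2=z+H_\mu(Z_1)$ and $Z_1=z+H_\nu(Z_2)$, where $H_\mu(z)=F_\mu(z)-z$, which suggests solving them by fixed-point iteration. The following numerical sketch (an illustration, not part of the proofs) recovers in this way the known fact that the free convolution of two symmetric Bernoulli laws $\frac{1}{2}(\delta_{-1}+\delta_1)$ is the arcsine law on $[-2,2]$, whose Cauchy transform is $G(z)=1/\sqrt{z^2-4}$.

```python
import cmath

def F_bern(z):
    """F_mu for mu = (delta_{-1} + delta_1)/2, since G_mu(z) = z/(z^2 - 1)."""
    return z - 1.0 / z

def H(z):
    """H_mu(z) = F_mu(z) - z."""
    return -1.0 / z

def F_conv(z, n_iter=300):
    """F of the free convolution via the iteration Z1 <- z + H(z + H(Z1))."""
    Z1 = z
    for _ in range(n_iter):
        Z1 = z + H(z + H(Z1))
    return F_bern(Z1)

z = 1.0 + 1.0j
G_numeric = 1.0 / F_conv(z)
G_arcsine = 1.0 / cmath.sqrt(z * z - 4.0)  # arcsine law on [-2, 2]
```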
\begin{theorem}
\cite{maassen92,bercovici+voiculescu93}
Let $X$ and $Y$ be two self-adjoint operators on some Hilbert space $H$ that are free w.r.t.\ some unit vector $\Omega\in H$. If $\Omega$ is cyclic, i.e.\ if
\[
\overline{{\rm alg}\{h(X),h(Y);h\in C_b(\mathbb{R})\}\Omega}=H,
\]
then $X+Y$ is essentially self-adjoint and the distribution w.r.t.\ $\Omega$ of its closure is equal to the additive free convolution of the distributions of $X$ and $Y$ w.r.t.\ $\Omega$, i.e.
\[
\mathcal{L}(X+Y,\Omega) = \mathcal{L}(X,\Omega)\boxplus\mathcal{L}(Y,\Omega).
\]
\end{theorem}
There exist analogous results for the multiplicative convolutions of probability measures on the unit circle and the positive half-line, cf.\ \cite{maassen92,bercovici+voiculescu93,chistyakov+goetze05,chistyakov+goetze06}.
\begin{theorem}
\item[(i)]
Let $\mu$ and $\nu$ be two probability measures on the unit circle with transforms $K_\mu$ and $K_\nu$ and whose first moments do not vanish, $\int_\mathbb{T}x{\rm d}\mu(x)\not=0$, $\int_\mathbb{T}x{\rm d}\nu(x)\not=0$. Then there exist unique functions $Z_1,Z_2\in\mathcal{S}$
such that
\[
K_\mu\big(Z_1(z)\big) = K_\nu\big(Z_2(z)\big)=\frac{Z_1(z)Z_2(z)}{z}
\]
for all $z\in\mathbb{D}\backslash\{0\}$. The multiplicative free convolution $\lambda=\mu\boxtimes\nu$ is defined as the unique probability measure $\lambda$ with transform $K_\lambda=K_\mu\circ Z_1 = K_\nu\circ Z_2$.
\item[(ii)]
Let $U$ and $V$ be two unitary operators on some Hilbert space $H$ that are free w.r.t.\ some unit vector $\Omega\in H$. Then the products $UV$ and $VU$ are also unitary and their distributions w.r.t.\ $\Omega$ are equal to the multiplicative free convolution of the distributions of $U$ and $V$ w.r.t.\ $\Omega$, i.e.\
\[
\mathcal{L}(UV,\Omega) = \mathcal{L}(VU,\Omega) = \mathcal{L}(U,\Omega)\boxtimes\mathcal{L}(V,\Omega).
\]
\end{theorem}
\begin{theorem}
\item[(i)]
Let $\mu$ and $\nu$ be two probability measures on the positive half-line such that $\mu\not=\delta_0$, $\nu\not=\delta_0$ and denote their transforms by $K_\mu$ and $K_\nu$. Then there exist unique functions $Z_1,Z_2\in\mathcal{P}$ such that
\[
K_\mu\big(Z_1(z)\big) = K_\nu\big(Z_2(z)\big)=\frac{Z_1(z)Z_2(z)}{z}
\]
for all $z\in\mathbb{C}\backslash\mathbb{R}_+$. The multiplicative free convolution $\lambda=\mu\boxtimes\nu$ is defined as the unique probability measure $\lambda$ with transform $K_\lambda=K_\mu\circ Z_1 = K_\nu\circ Z_2$.
\item[(ii)]
Let $X$ and $Y$ be two positive operators on some Hilbert space $H$ that are free w.r.t.\ some unit vector $\Omega\in H$. Assume furthermore that $\Omega$ is cyclic, i.e. that
\[
\overline{{\rm alg}\{h(X),h(Y);h\in C_b(\mathbb{R})\}\Omega}=H.
\]
Then the products $\sqrt{X}Y\sqrt{X}$ and $\sqrt{Y}X\sqrt{Y}$ are essentially self-adjoint and positive, and their distributions w.r.t.\ $\Omega$ are equal to the multiplicative free convolution of the distributions of $X$ and $Y$ w.r.t.\ $\Omega$, i.e.\
\[
\mathcal{L}(\sqrt{X}Y\sqrt{X},\Omega) = \mathcal{L}(\sqrt{Y}X\sqrt{Y},\Omega) = \mathcal{L}(X,\Omega)\boxtimes\mathcal{L}(Y,\Omega).
\]
\end{theorem}
\section{Monotone Convolutions}
\begin{definition}\cite{muraki00}
Let $\mathcal{A}_1,\mathcal{A}_2\subset\mathcal{B}(H)$ be two
$*$-algebras of bounded operators on a Hilbert space $H$, and let
$\Omega\in H$ be a unit vector. We say that $\mathcal{A}_1$ and
$\mathcal{A}_2$ are {\em monotonically independent} w.r.t.\
$\Omega$, if we have
\[
\langle \Omega, X_1X_2\cdots X_k\Omega\rangle = \left\langle\Omega,\prod
_{\kappa:\varepsilon_\kappa=1} X_\kappa\Omega\right\rangle\prod
_{\kappa:\varepsilon_\kappa=2} \langle \Omega, X_\kappa\Omega\rangle
\]
for all $k\in\mathbb{N}$, $\varepsilon\in\mathbb{A}_k$,
$X_1\in\mathcal{A}_{\varepsilon_1},\ldots,X_k\in\mathcal{A}_{\varepsilon_k}$.
\end{definition}
\begin{remark}\label{rem-mon-cyclic}
\item[(a)]
Note that this notion depends on the order, i.e.\ if $\mathcal{A}_1$ and $\mathcal{A}_2$ are monotonically independent, then this does {\em not} imply that $\mathcal{A}_2$ and $\mathcal{A}_1$ are monotonically independent. In fact, if $\mathcal{A}_1$ and $\mathcal{A}_2$ are monotonically independent and $\mathcal{A}_2$ and $\mathcal{A}_1$ are also monotonically independent, and $\Phi(\cdot)=\langle\Omega,\,\cdot\,\Omega\rangle$ does not vanish on one of the algebras, then the restrictions of $\Phi$ to $\mathcal{A}_1$ and $\mathcal{A}_2$ have to be homomorphisms. To prove this for the restriction to, e.g., $\mathcal{A}_1$, take an element $Y\in\mathcal{A}_2$ such that $\Phi(Y)\not=0$, then
\[
\Phi(X_1X_2) = \frac{\Phi(X_1YX_2)}{\Phi(Y)} = \Phi(X_1)\Phi(X_2)
\]
for all $X_1,X_2\in\mathcal{A}_1$.
\item[(b)]
The algebras are not required to be unital. If $\mathcal{A}_1$ is
unital, then the restriction of
$\Phi(\cdot)=\langle\Omega,\,\cdot\,\Omega\rangle$ to
$\mathcal{A}_2$ has to be a homomorphism, since monotone
independence implies
\[
\langle \Omega, XY\Omega\rangle = \langle \Omega,
X\mathbf{1}Y\Omega\rangle = \langle \Omega, X\Omega\rangle\langle
\Omega,Y\Omega\rangle
\]
for $X,Y\in\mathcal{A}_2$.
\item[(c)]
In the definition of monotone independence the condition
\[
XYZ= \langle\Omega,Y\Omega\rangle XZ
\]
for all $X,Z\in\mathcal{A}_1$, $Y\in\mathcal{A}_2$ is often also
imposed. If the state vector $\Omega$ is cyclic for the algebra
generated by $\mathcal{A}_1$ and $\mathcal{A}_2$, then this is
automatically satisfied. Let $X_1,X_3,\ldots,Z_1,Z_3,\ldots\in\mathcal{A}_1$ and $Y,X_2,X_4,\ldots,Z_2,Z_4,\ldots\in\mathcal{A}_2$, then
\begin{eqnarray*}
&& \langle X_1\cdots X_n\Omega, YZ_1\cdots Z_m\Omega\rangle = \langle \Omega,X_n^*\cdots X_1^* YZ_1\cdots Z_m\Omega\rangle \\
&=& \langle \Omega, Y\Omega\rangle \prod_{k \mbox{ even}}\langle \Omega, X^*_k\Omega\rangle\prod_{\ell \mbox{ even}}\langle \Omega, Z_\ell\Omega\rangle \langle X_1X_3\cdots\Omega,Z_1Z_3\cdots\Omega\rangle \\
&=&\langle \Omega, Y\Omega\rangle\langle X_1\cdots X_n\Omega, Z_1\cdots Z_m\Omega\rangle,
\end{eqnarray*}
for all $n,m\ge 1$, i.e., $X_1^*YZ_1$ and $\langle\Omega, Y\Omega\rangle X_1^*Z_1$ coincide on the subspace generated by $\mathcal{A}_1$ and $\mathcal{A}_2$ from $\Omega$.
\end{remark}
\begin{definition}\label{def-mon-indep}
Let $X$ and $Y$ be two normal operators on a Hilbert space $H$,
not necessarily bounded. We say that $X$ and $Y$ are {\em
monotonically independent} w.r.t.\ $\Omega$, if the $*$-algebras
${\rm alg}_0(X)=\{h(X);h\in C_b( \mathbb{C}),h(0)=0\}$ and ${\rm
alg}_0(Y)=\{h(Y);h\in C_b( \mathbb{C}),h(0)=0\}$ are monotonically
independent w.r.t.\ $\Omega$.
\end{definition}
Let us now introduce the model we shall use for calculations with monotonically independent operators.
\begin{proposition}\label{prop-mon-model}
Let $\mu,\nu$ be two probability measures on $\mathbb{C}$ and define normal operators $X$ and $Y$ on $L^2(\mathbb{C}\times\mathbb{C},\mu\otimes\nu)$ by
\begin{eqnarray*}
{\rm Dom}\, X &=& \left\{\psi\in L^2(\mathbb{C}\times\mathbb{C},\mu\otimes\nu); \int_{\mathbb{C}}\left|x\int_{\mathbb{C}}\psi(x,y){\rm d}\nu(y)\right|^2{\rm d}\mu(x)<\infty\right\}, \\
{\rm Dom}\, Y &=& \left\{\psi\in L^2(\mathbb{C}\times\mathbb{C},\mu\otimes\nu); \int_{\mathbb{C}\times\mathbb{C}}|y\psi(x,y)|^2{\rm d}\mu\otimes\nu(x,y)<\infty\right\},
\end{eqnarray*}
\begin{eqnarray*}
(X\psi)(x,y) &=& x\int_{\mathbb{C}}\psi(x,y'){\rm d}\nu(y'), \\
(Y\psi)(x,y) &=& y\psi(x,y).
\end{eqnarray*}
Then $\mathcal{L}(X,\mathbf{1}) = \mu$, $\mathcal{L}(Y,\mathbf{1}) = \nu$, and $X$ and $Y$ are monotonically independent w.r.t.\ the constant function $\mathbf{1}$.
\end{proposition}
\begin{proof}
Denote by $P_2$ the orthogonal projection onto the space of functions in $L^2(\mathbb{C}\times\mathbb{C},\mu\otimes\nu)$ which do not depend on the second variable, and by $M_x$ multiplication by the first variable, then $X=M_xP_2$. This operator is normal, we have
\[
h(X)\psi(x,y) = \big(h(x)-h(0)\big)\int_{\mathbb{C}}\psi(x,y'){\rm d}\nu(y') + h(0)\psi(x,y)
\]
and $\langle \mathbf{1},h(X)\mathbf{1}\rangle = \int_{\mathbb{C}} h(x){\rm d}\mu(x)$ for all $h\in C_b(\mathbb{C})$, i.e.\ $\mathcal{L}(X,\mathbf{1})=\mu$. The operator $Y$ is multiplication by the second variable, it is clearly normal. We have
\[
h(Y)\psi(x,y)= h(y)\psi(x,y)
\]
and $\langle \mathbf{1},h(Y)\mathbf{1}\rangle = \int_{\mathbb{C}} h(y){\rm d}\nu(y)$ for all $h\in C_b(\mathbb{C})$, i.e.\ $\mathcal{L}(Y,\mathbf{1})=\nu$.
Let $f_1,\ldots,f_n,g_1,\ldots,g_n\in C_b(\mathbb{C})$, $f_1(0)=\cdots=f_n(0)=0$. Then
\[
f_n(X)g_{n-1}(Y)\cdots g_1(Y)f_1(X)\mathbf{1}=\prod_{k=1}^{n-1}\int_{\mathbb{C}}g_k(y){\rm d}\nu(y)\, f_1\cdots f_n
\]
and
\begin{eqnarray*}
\langle\mathbf{1},f_n(X)g_{n-1}(Y)\cdots g_1(Y)f_1(X)\mathbf{1}\rangle &=& \prod_{k=1}^{n-1}\int_{\mathbb{C}}g_k(y){\rm d}\nu(y)\int_{\mathbb{C}} f_1(x)\cdots f_n(x){\rm d}\mu(x) \\
&=& \prod_{k=1}^{n-1}\langle\mathbf{1},g_k(Y)\mathbf{1}\rangle\langle\mathbf{1}, f_1(X)\cdots f_n(X)\mathbf{1}\rangle,
\end{eqnarray*}
i.e.\ the condition for monotone independence is satisfied in this case.
Similarly one checks the expectation of $g_n(Y)f_n(X)\cdots g_1(Y)f_1(X)$, $f_n(X)g_n(Y)\cdots f_1(X)g_1(Y)$, and $g_n(Y)f_{n-1}(X)\cdots f_1(X)g_1(Y)$.
\end{proof}
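For atomic measures the model of Proposition \ref{prop-mon-model} becomes finite dimensional, so the factorization used in the proof can be checked numerically. The following Python sketch (illustrative only, not part of the original text; the atoms, weights and test functions are arbitrary choices) builds $h(X)$ and $h(Y)$ as matrices from the formulas above and verifies one instance of $\langle\mathbf{1},f_1(X)g(Y)f_2(X)\mathbf{1}\rangle = \langle\mathbf{1},g(Y)\mathbf{1}\rangle\langle\mathbf{1},f_1(X)f_2(X)\mathbf{1}\rangle$ with $f_1(0)=f_2(0)=0$.

```python
import numpy as np

# Finite atomic sketch of the model X = M_x P_2, Y = M_y on L^2(mu (x) nu).
a, u = np.array([1.0, -2.0, 0.5]), np.array([0.2, 0.3, 0.5])  # atoms/weights of mu
b, v = np.array([3.0, -1.0]), np.array([0.7, 0.3])            # atoms/weights of nu
n, m = len(a), len(b)
w = np.kron(u, v)  # weights of mu (x) nu, flattened index (i, j) -> i*m + j

def opX(h):
    # h(X) psi = (h(x) - h(0)) * (integral of psi dnu) + h(0) * psi
    M = np.zeros((n * m, n * m))
    for i in range(n):
        for j in range(m):
            M[i * m + j, i * m + j] += h(0.0)
            for jp in range(m):
                M[i * m + j, i * m + jp] += (h(a[i]) - h(0.0)) * v[jp]
    return M

def opY(h):
    # h(Y) is multiplication by h(y)
    return np.diag([h(b[j]) for i in range(n) for j in range(m)])

one = np.ones(n * m)
f1, f2, g = (lambda t: t), (lambda t: t**2), (lambda t: 1.0 / (1.0 + t**2))
lhs = w @ (opX(f1) @ opY(g) @ opX(f2) @ one)
rhs = (w @ (opY(g) @ one)) * (w @ (opX(f1) @ opX(f2) @ one))
assert abs(lhs - rhs) < 1e-12
```

The check works because $f_1$ and $f_2$ vanish at $0$, which is exactly the condition defining ${\rm alg}_0(X)$.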
The following theorem shows that any pair of monotonically independent normal operators can be reduced to this model.
\begin{theorem}\label{thm-mon-model}
Let $X$ and $Y$ be two normal operators on a Hilbert space
$H$ that are monotonically independent with respect to $\Omega\in H$
and let $\mu=\mathcal{L}(X,\Omega)$, $\nu=\mathcal{L}(Y,\Omega)$.
Then there exists an isometry $W:L^2(\mathbb{C}\times\mathbb{C},\mu\otimes \nu)\to H$ such that
\begin{eqnarray}\label{eq-mon-model}
W^*h(X)W\psi(x,y) &=& \big(h(x)-h(0)\big)\int \psi(x,y'){\rm d}\nu(y')
+
h(0)\psi(x,y), \\
W^*h(Y)W\psi(x,y) &=& h(y)\psi(x,y) \nonumber
\end{eqnarray}
for $x,y\in\mathbb{C}$, $\psi\in L^2(\mathbb{C}\times\mathbb{C},\mu\otimes \nu)\cong L^2(\sigma_X,\mu)\otimes L^2(\sigma_Y,\nu)$ and $h\in C_b(\mathbb{C})$.
We have $WL^2(\mathbb{C}\times\mathbb{C},\mu\otimes\nu) = \overline{{\rm alg}\{h(X),h(Y);h\in C_b( \mathbb{C})\}\Omega}$.
If the vector $\Omega\in H$ is cyclic for the algebra ${\rm
alg}(X,Y)={\rm alg}\{h(X),h(Y);h\in C_b( \mathbb{C})\}$
generated by $X$ and $Y$, then $W$ is unitary.
\end{theorem}
\begin{proof}
Define $W$ on simple tensors of bounded continuous functions by
\[
W f\otimes g = g(Y)f(X)\Omega
\]
for $f,g\in C_b(\mathbb{C})$. It follows from the monotone
independence of $X$ and $Y$ that this defines an isometry, since
\begin{eqnarray*}
\langle W f_1\otimes g_1, W f_2\otimes g_2\rangle &=& \langle
\Omega, f_1(X)^* g_1(Y)^* g_2(Y)f_2(X)\Omega\rangle \\
&=& \langle \Omega, f_1(X)^*f_2(X)\Omega\rangle \langle \Omega,
g_1(Y)^*g_2(Y)\Omega\rangle \\
&=& \int \overline{f_1(t)}f_2(t){\rm d}\mu(t) \int
\overline{g_1(t)}g_2(t){\rm d}\nu(t).
\end{eqnarray*}
Since $C_b(\mathbb{C})\otimes C_b(\mathbb{C})$ is dense in
$L^2(\mathbb{C}\times\mathbb{C},\mu\otimes \nu)$, $W$ extends to a
unique isometry on $L^2(\mathbb{C}\times\mathbb{C},\mu\otimes \nu)$.
The relations
\begin{multline*}
\langle W f_1\otimes g_1, h(X) W f_2\otimes g_2\rangle = \langle
\Omega, f_1(X)^* g_1(Y)^* h(X) g_2(Y)f_2(X)\Omega\rangle \\
= \langle \Omega, f_1(X)^* \big(h(X)-h(0)\big)f_2(X)\Omega\rangle
\langle \Omega, g_1(Y)^*\Omega\rangle \langle \Omega,
g_2(Y)\Omega\rangle \\
+h(0) \langle \Omega, f_1(X)^* g_1(Y)^*g_2(Y)f_2(X)\Omega\rangle
\\
=\langle\Omega,g_2(Y)\Omega\rangle\left\langle W f_1\otimes g_1, W
\big(h-h(0)1\big)f_2\otimes
1\right\rangle + h(0)\langle W f_1\otimes g_1, Wf_2\otimes g_2\rangle \\
= \left\langle W f_1\otimes g_1, W\left( \int g_2(y){\rm
d}\nu(y)\big(h-h(0)1\big)f_2\otimes 1 + h(0) f_2\otimes g_2\right)
\right\rangle
\end{multline*}
and
\begin{eqnarray*}
\langle W f_1\otimes g_1, h(Y) W f_2\otimes g_2\rangle &=& \langle
\Omega, f_1(X)^* g_1(Y)^* h(Y) g_2(Y)f_2(X)\Omega\rangle \\
&=& \langle W f_1\otimes g_1, W f_2\otimes (h g_2)\rangle
\end{eqnarray*}
shows that we have the desired formulas for simple tensors of functions $f_1,f_2,g_1,g_2\in C_b(\mathbb{C})$. The general case follows by linearity and continuity. Remark \ref{rem-mon-cyclic}(c) implies
\begin{eqnarray*}
WL^2(\mathbb{C}\times\mathbb{C},\mu\otimes\nu) &=& \overline{{\rm span}\,\{g(Y)f(X)\Omega;f,g\in C_b(\mathbb{C})\}} \\
&=& \overline{{\rm alg}\{h(X),h(Y);h\in C_b( \mathbb{C})\}\Omega}.
\end{eqnarray*}
If $\Omega$ is cyclic, then $W$ is surjective and therefore unitary.
\end{proof}
\begin{remark}\label{rem-mon-reduction}
It follows that the joint law of two monotonically independent normal operators is uniquely determined by their marginal distributions, in the sense that the restriction of $\Phi(\cdot)=\langle\Omega,\cdot\,\Omega\rangle$ to ${\rm alg}(X,Y)={\rm alg}\{h(X),h(Y);h\in C_b( \mathbb{C})\}$ is uniquely determined by $\mathcal{L}(X,\Omega)$ and $\mathcal{L}(Y,\Omega)$. By Lemma \ref{lem-unitary-eq}, computations for unbounded functions of $X$ and $Y$, e.g., concerning the operators $X+Y$ for self-adjoint $X$ and $Y$, or $\sqrt{X}Y\sqrt{X}$ for positive $X$ and $Y$, also reduce to the model introduced in Proposition \ref{prop-mon-model}.
\end{remark}
\subsection{Additive monotone convolution on
$\mathcal{M}_1(\mathbb{R})$}
\begin{definition} \cite{muraki00}
Let $\mu$ and $\nu$ be two probability measures on $\mathbb{R}$ with
reciprocal Cauchy transforms $F_\mu$ and $F_\nu$. Then we define the
additive monotone convolution $\lambda=\mu\triangleright\nu$ of
$\mu$ and $\nu$ as the unique probability measure on $\mathbb{R}$
with reciprocal Cauchy transform $F_\lambda = F_\mu \circ F_\nu$.
\end{definition}
It follows from Subsection \ref{nevanlinna} that the additive
monotone convolution is well-defined. Let us first recall some basic
properties of the additive monotone convolution.
\begin{proposition}\cite{muraki00}\label{prop-mon-add-properties}
The additive monotone convolution is associative and $*$-weakly continuous in both arguments. It is affine in the first argument and convolution from the right by a Dirac measure corresponds to translation, i.e. $\mu\triangleright\delta_x = T_x^{-1}\mu$ for $x\in\mathbb{R}$, where $T_x:\mathbb{R}\to\mathbb{R}$ is defined by $T_x(t)=t+x$.
\end{proposition}
This convolution is not commutative, i.e.\ in general we have $\mu\triangleright\nu\not=\nu\triangleright\mu$.
Let $x\in\mathbb{R}$ and $0\le p \le 1$. Then one can compute, e.g.,
\[
\delta_x\triangleright\big(p\delta_{1} + (1-p)\delta_{-1}\big) = q\delta_{z_1} + (1-q)\delta_{z_2}
\]
where
\begin{eqnarray*}
z_1 &=& \frac{1}{2}\left(x +\sqrt{x^2+4(2p-1)x+4}\right) , \\
z_2 &=& \frac{1}{2}\left(x -\sqrt{x^2+4(2p-1)x+4}\right) , \\
q &=& \frac{x+4p-2+\sqrt{x^2+4(2p-1)x+4}}{2\sqrt{x^2+4(2p-1)x+4}}.
\end{eqnarray*}
This example shows that convolution from the left by a Dirac mass is in general not equal to a translation and that the additive monotone convolution is not affine in the second argument.
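The two-atom formula above can be checked numerically against the defining relation $F_{\mu\triangleright\nu}=F_\mu\circ F_\nu$. The following Python sketch (illustrative, not part of the original text; the values of $x$, $p$ and the test point $z$ are arbitrary) compares the Cauchy transform obtained by composing reciprocal Cauchy transforms with that of the claimed two-atom measure.

```python
import cmath

def G_atoms(z, atoms):
    # Cauchy transform of a finite atomic measure, atoms = [(location, weight), ...]
    return sum(w / (z - a) for a, w in atoms)

x, p = 1.5, 0.3
z = 2.0 + 1.0j

# Composition: F_lambda = F_{delta_x} o F_nu, with F_{delta_x}(w) = w - x
G_nu = G_atoms(z, [(1.0, p), (-1.0, 1.0 - p)])
F_nu = 1.0 / G_nu
G_lambda_composed = 1.0 / (F_nu - x)

# Closed-form atoms z1, z2 and weight q from the text
d = cmath.sqrt(x**2 + 4 * (2 * p - 1) * x + 4)
z1, z2 = (x + d) / 2, (x - d) / 2
q = (x + 4 * p - 2 + d) / (2 * d)
G_lambda_atoms = G_atoms(z, [(z1, q), (z2, 1 - q)])

assert abs(G_lambda_composed - G_lambda_atoms) < 1e-12
```

For these parameter values the discriminant is positive, so $z_1,z_2$ are real and $q\in[0,1]$ as required for a probability measure.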
Note that the continuity and the fact that the monotone convolution is affine in the first argument imply the following formula
\begin{equation}\label{eq-int-formula}
\mu\triangleright\nu = \int_\mathbb{R} \delta_x\triangleright \nu\, {\rm d}\mu(x)
\end{equation}
for all $\mu,\nu\in\mathcal{M}_1(\mathbb{R})$.
The following proposition is the key to treating the additive monotone convolution for general probability measures on $\mathbb{R}$.
\begin{proposition}\label{prop-mon-add-resolvent}
Let $\mu$ and $\nu$ be two probability measures on $\mathbb{R}$ and denote by $M_x$ and $M_y$ the self-adjoint operators on $L^2(\mathbb{R}\times\mathbb{R},\mu\otimes\nu)$ defined by multiplication with the coordinate functions. Denote by $P_2$ the orthogonal projection onto the subspace of functions which do not depend on the second coordinate, $L^2(\mathbb{R}\times\mathbb{R},\mu\otimes\nu)\ni\psi\mapsto \int_\mathbb{R}\psi(\cdot,y){\rm d}\nu(y)\in L^2(\mathbb{R}\times\mathbb{R},\mu\otimes\nu)$. Then $M_xP_2=P_2M_x$ and $M_y$ are self-adjoint and monotonically independent w.r.t.\ the constant function and the operator $z-M_xP_2-M_y$ has a bounded inverse for all $z\in\mathbb{C}\backslash\mathbb{R}$, given by
\begin{equation}\label{eq-mon-add-resolvent}
\left((z-M_xP_2-M_y)^{-1}\psi\right)(x,y) = \frac{\psi(x,y)}{z-y} + \frac{x \int_\mathbb{R} \frac{\psi(x,y')}{z-y'}{\rm d}\nu(y')}{(z-y)(1-xG_\nu(z))}.
\end{equation}
\end{proposition}
\begin{proof}
$M_xP_2$ and $M_y$ are monotonically independent by Proposition \ref{prop-mon-model}.
The first term on the right-hand side of Equation \eqref{eq-mon-add-resolvent} is obtained from $\psi$ by multiplication with a bounded function, the second by composing multiplications by bounded functions with the projection $P_2$. Equation \eqref{eq-mon-add-resolvent} therefore clearly defines a bounded operator. Checking that it is indeed the inverse of $z-M_xP_2-M_y$ is straightforward,
\begin{multline*}
(z-M_xP_2-M_y) \left(\frac{\psi(x,y)}{z-y} + \frac{x \int_\mathbb{R} \frac{\psi(x,y')}{z-y'}{\rm d}\nu(y')}{(z-y)\big(1-xG_\nu(z)\big)}\right) \\
= \psi(x,y) + \frac{x\int\frac{\psi(x,y')}{z-y'}{\rm d}\nu(y')}{1-xG_\nu(z)} - x\int_\mathbb{R} \frac{\psi(x,y')}{z-y'}{\rm d}\nu(y')-x\int_\mathbb{R}\frac{x \int_\mathbb{R} \frac{\psi(x,y'')}{z-y''}{\rm d}\nu(y'')}{(z-y')\big(1-xG_\nu(z)\big)}{\rm d}\nu(y') \\
= \psi(x,y) +\frac{\Big((z-y)-(z-y)\big(1-xG_\nu(z)\big) - xG_\nu(z)(z-y)\Big)x\int_\mathbb{R}\frac{\psi(x,y')}{z-y'}{\rm d}\nu(y')}{(z-y)\big(1-xG_\nu(z)\big)} \\
= \psi(x,y).
\end{multline*}
\end{proof}
\begin{theorem}\label{thm-mon-add-conv}
Let $X$ and $Y$ be two self-adjoint operators on a Hilbert space $H$ that are monotonically independent w.r.t.\ a unit vector $\Omega\in H$. Assume furthermore that $\Omega$ is cyclic, i.e.\ that
\[
\overline{{\rm alg}\{h(X),h(Y);h\in C_b(\mathbb{R})\}\Omega}=H.
\]
Then $X+Y$ is essentially self-adjoint and the distribution w.r.t.\ $\Omega$ of its closure is equal to the additive monotone convolution of the distributions of $X$ and $Y$ w.r.t.\ $\Omega$, i.e.
\[
\mathcal{L}(X+Y,\Omega) = \mathcal{L}(X,\Omega)\triangleright\mathcal{L}(Y,\Omega).
\]
\end{theorem}
\begin{proof}
Let $\mu=\mathcal{L}(X,\Omega)$, $\nu=\mathcal{L}(Y,\Omega)$.
By Theorem \ref{thm-mon-model} and Lemma \ref{lem-unitary-eq} it is sufficient to consider the case where $X$ and $Y$ are given by Proposition \ref{prop-mon-model}. Proposition \ref{prop-mon-add-resolvent} shows that $z-X-Y$ admits a bounded inverse and therefore that ${\rm Ran}\,(z-X-Y)$ is dense for $z\in\mathbb{C}\backslash\mathbb{R}$. By \cite[Theorem VIII.3]{reed+simon80} this is equivalent to $X+Y$ being essentially self-adjoint.
Using Equation \eqref{eq-mon-add-resolvent}, we can compute the Cauchy transform of the distribution of the closure of $X+Y$. Let $z\in\mathbb{C}^+$, then we have
\begin{multline*}
G_{X+Y}(z) = \langle \Omega, (z-X-Y)^{-1}\Omega\rangle = \left\langle \mathbf{1}, (z-M_xP_2-M_y)^{-1}\mathbf{1}\right\rangle \\
=\left\langle\mathbf{1},\frac{1}{z-y} + \frac{xG_\nu(z)}{(z-y)(1-xG_\nu(z))}\right\rangle = \int_{\mathbb{R}\times\mathbb{R}} \frac{1}{(z-y)(1-xG_\nu(z))}{\rm d}\mu\otimes\nu \\
= \int_\mathbb{R} \frac{G_\nu(z)}{1-xG_\nu(z)}{\rm d}\mu(x) = G_\mu\left(\frac{1}{G_\nu(z)}\right)=G_\mu\big(F_\nu(z)\big),
\end{multline*}
or
\[
F_{X+Y}(z) = \frac{1}{G_{X+Y}(z)} = \frac{1}{G_\mu\big(F_\nu(z)\big)} = F_\mu\big(F_\nu(z)\big)= F_{\mu\triangleright\nu}(z).
\]
\end{proof}
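For atomic $\mu$ and $\nu$ the model of Proposition \ref{prop-mon-model} is a finite-dimensional matrix model, so the identity $G_{X+Y}(z)=G_\mu\big(F_\nu(z)\big)$ just proved can be verified by solving a linear system. The following Python sketch (an illustration under the assumption of atomic measures; the test data are arbitrary) builds $X=M_xP_2$ and $Y=M_y$ as matrices and compares $\langle\mathbf{1},(z-X-Y)^{-1}\mathbf{1}\rangle$ with the composed transform.

```python
import numpy as np

# Finite atomic model: mu = sum_i u_i delta_{a_i}, nu = sum_j v_j delta_{b_j}.
a, u = np.array([0.0, 2.0, -1.0]), np.array([0.5, 0.3, 0.2])
b, v = np.array([1.0, -2.0]), np.array([0.6, 0.4])
n, m = len(a), len(b)
z = 1.5 + 1.0j

# Operators on L^2(mu (x) nu), flattened index (i, j) -> i*m + j
X = np.zeros((n * m, n * m), dtype=complex)  # X = M_x P_2
Y = np.zeros((n * m, n * m), dtype=complex)  # Y = M_y
for i in range(n):
    for j in range(m):
        Y[i * m + j, i * m + j] = b[j]
        for jp in range(m):
            X[i * m + j, i * m + jp] = a[i] * v[jp]

one = np.ones(n * m, dtype=complex)
psi = np.linalg.solve(z * np.eye(n * m) - X - Y, one)
weights = np.kron(u, v)                 # weights of mu (x) nu
G_sum = weights @ psi                   # <1, (z - X - Y)^{-1} 1>

G_nu = np.sum(v / (z - b))
G_comp = np.sum(u / (1.0 / G_nu - a))   # G_mu(F_nu(z))
assert abs(G_sum - G_comp) < 1e-10
```

Since $z$ has nonzero imaginary part, $z-X-Y$ is invertible and the linear solve is well posed.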
\subsection{Multiplicative monotone convolution on
$\mathcal{M}_1(\mathbb{R}_+)$}
\begin{definition}\cite{bercovici04}
Let $\mu$ and $\nu$ be two probability measures on the positive half-line $\mathbb{R}_+$ with transforms $K_\mu$ and $K_\nu$. Then the multiplicative monotone convolution of $\mu$ and $\nu$ is defined as the unique probability measure $\lambda=\mu\kern0.3em\rule{0.04em}{0.52em}\kern-.35em\gtrdot\nu$ on $\mathbb{R}_+$ with transform $K_\lambda=K_\mu\circ K_\nu$.
\end{definition}
It follows from Subsection \ref{nevanlinna} that the multiplicative monotone convolution on $\mathcal{M}_1(\mathbb{R}_+)$ is well-defined.
Let us first recall some basic properties of the multiplicative monotone convolution.
\begin{proposition}\label{prop-mon-mult1-properties}
The multiplicative monotone convolution on $\mathcal{M}_1(\mathbb{R}_+)$ is associative and $*$-weakly continuous in both arguments. It is affine in the first argument and convolution from the right by a Dirac measure corresponds to dilation, i.e. $\mu\kern0.3em\rule{0.04em}{0.52em}\kern-.35em\gtrdot\delta_\alpha = D_\alpha^{-1}\mu$ for $\alpha\in\mathbb{R}_+$, where $D_\alpha:\mathbb{R}_+\to\mathbb{R}_+$ is defined by $D_\alpha(t)=\alpha t$.
\end{proposition}
This convolution is not commutative, i.e.\ in general we have $\mu\kern0.3em\rule{0.04em}{0.52em}\kern-.35em\gtrdot\nu\not=\nu\kern0.3em\rule{0.04em}{0.52em}\kern-.35em\gtrdot\mu$. As in the additive case, it is not affine in the second argument, and convolution from the left by a Dirac mass is in general not equal to a dilation.
We want to extend \cite[Corollary 4.3]{franz05a} to unbounded positive operators, i.e.\ we want to show that if $X$ and $Y$ are two positive operators such that $X-\mathbf{1}$ and $Y$ are monotonically independent, then the distribution of $\sqrt{X}Y\sqrt{X}$ is equal to the multiplicative monotone convolution of the distributions of $X$ and $Y$. By Theorem \ref{thm-mon-model}, it is sufficient to do the calculations for the case where $X$ and $Y$ are constructed from multiplication with the coordinate functions and the projection $P_2$.
\begin{proposition}\label{prop-mon-mult1-resolvent}
Let $\mu$ and $\nu$ be two probability measures on $\mathbb{R}_+$, $\nu\not=\delta_0$, and let $M_y$ be the self-adjoint operator on $L^2(\mathbb{R}_+\times\mathbb{R}_+,\mu\otimes\nu)$ defined by multiplication with the coordinate function $(x,y)\mapsto y$. Define $S_x$ on $L^2(\mathbb{R}_+\times\mathbb{R}_+,\mu\otimes\nu)$ by
\begin{eqnarray}\label{eq-sx-definition}
{\rm Dom}\,S_x &=& \left\{\psi\in L^2(\mathbb{R}_+\times\mathbb{R}_+,\mu\otimes\nu); \int_{\mathbb{R}_+} x\psi(x,y){\rm d}\nu(y)\in L^2(\mathbb{R}_+,\mu)\right\} , \\
(S_x\psi)(x,y) &=& (x-1)\int_{\mathbb{R}_+} \psi(x,y){\rm d}\nu(y) + \psi(x,y) \nonumber
\end{eqnarray}
Then $S_x-\mathbf{1}$ and $M_y$ are monotonically independent w.r.t.\ the constant function and the operator $z-\sqrt{S_x}M_y\sqrt{S_x}$ has a bounded inverse for all $z\in\mathbb{C}\backslash\mathbb{R}$, given by
\begin{equation}\label{eq-mon-mult1-resolvent}
\left((z-\sqrt{S_x}M_y\sqrt{S_x})^{-1}\psi\right)(x,y) =\frac{\psi(x,y)+g(x)}{z-y} + h(x),
\end{equation}
where
\begin{eqnarray*}
g(x) &=& \frac{\sqrt{x}-x}{(1-x)zG_\nu(z)+x}\int_{\mathbb{R}_+} \psi(x,y){\rm d}\nu(y) \\
&& + \frac{z(x-1)}{(1-x)zG_\nu(z)+x}\int_{\mathbb{R}_+}\frac{\psi(x,y)}{z-y}{\rm d}\nu(y), \\
h(x) &=& \frac{(\sqrt{x}-1)^2G_\nu(z)}{(1-x)zG_\nu(z)+x}\int_{\mathbb{R}_+}\psi(x,y){\rm d}\nu(y) \\
&& +\frac{\sqrt{x}-x}{(1-x)zG_\nu(z)+x}\int_{\mathbb{R}_+}\frac{\psi(x,y)}{z-y}{\rm d}\nu(y).
\end{eqnarray*}
\end{proposition}
\begin{proof}
Fix $z\in\mathbb{C}^+$. Let $x>0$, then
\[
{\rm Im}\frac{z}{z-x} = - \frac{x{\rm Im}\,z}{({\rm Re}\,z-x)^2+({\rm Im}\,z)^2}< 0,
\]
and therefore
\[
{\rm Im}\,zG_\nu(z) = {\rm Im}\int_{\mathbb{R}_+}\frac{z}{z-x}{\rm d}\nu(x) <0.
\]
Similarly, we get ${\rm Im}\,zG_\nu(z)>0$ for $z\in\mathbb{C}^-$. It follows that the functions in front of the integrals in the definitions of $g$ and $h$ are bounded as functions of $x$, and therefore $g$ and $h$ are square-integrable. Since $\frac{1}{z-y}$ is bounded, too, we see that Equation \eqref{eq-mon-mult1-resolvent} defines a bounded operator.
Let us now check that it is the inverse of $z-\sqrt{S_x}M_y\sqrt{S_x}$.
Using the notation of the previous subsection, we can write $S_x$ also as $S_x=M_{x-1}P_2+\mathbf{1}=M_xP_2 + P_2^\perp$, where $P_2^\perp$ is the projection onto the orthogonal complement of the subspace of functions which do not depend on $y$. Its square root can be written as $\sqrt{S_x}= M_{\sqrt{x}}P_2 + P_2^\perp = M_{\sqrt{x}-1}P_2 +\mathbf{1}$; it acts as
\[
\left(\sqrt{S_x}\psi\right)(x,y) = \left(\sqrt{x}-1\right)\int_{\mathbb{R}_+} \psi(x,y){\rm d}\nu(y) + \psi(x,y)
\]
on a function $\psi\in{\rm Dom}\,\sqrt{S_x}\subseteq L^2(\mathbb{R}_+\times\mathbb{R}_+,\mu\otimes\nu)$.
Since $h$ does not depend on $y$, we have $\sqrt{S_x}h=\sqrt{x}h$. For $g$ we get
\begin{eqnarray*}
\left(\sqrt{S_x}\frac{g}{z-y}\right)(x,y) &=& (\sqrt{x}-1)\int_{\mathbb{R}_+}\frac{g(x)}{z-y'}{\rm d}\nu(y') + \frac{g(x)}{z-y} \\
&=& \left((\sqrt{x}-1)G_\nu(z)+\frac{1}{z-y}\right) g(x).
\end{eqnarray*}
Set $\varphi=\frac{\psi+g}{z-y}+h$. Applying $\sqrt{S_x}$ to $\varphi$, we get
\begin{eqnarray*}
\left(\sqrt{S_x}\varphi\right)(x,y) &=& \frac{\psi(x,y)}{z-y}+\frac{\sqrt{x}-x}{(z-y)\big((1-x)zG_\nu(z)+x\big)}\int_{\mathbb{R}_+}\psi(x,y){\rm d}\nu(y) \\
&& +\frac{z(x-1)}{(z-y)\big((1-x)zG_\nu(z)+x\big)}\int_{\mathbb{R}_+}\frac{\psi(x,y)}{z-y}{\rm d}\nu(y) \\
&=& \frac{\psi(x,y)+g(x)}{z-y}.
\end{eqnarray*}
{}From this we get
\begin{eqnarray*}
\left(\left(z-\sqrt{S_x}M_y\sqrt{S_x}\right)\varphi\right)(x,y) &=& \psi(x,y)
\end{eqnarray*}
after some tedious, but straightforward computation.
\end{proof}
\begin{remark}
If $\nu=\delta_0$, then $M_y=0$ on $L^2(\mathbb{R}_+\times\mathbb{R}_+,\mu\otimes\nu)$, and therefore $\sqrt{S_x}M_y\sqrt{S_x}=0$. This is of course a positive operator, and its distribution is $\delta_0$.
\end{remark}
\begin{theorem}\label{thm-mon-mult}
Let $X$ and $Y$ be two positive self-adjoint operators on a Hilbert space $H$ such that $X-\mathbf{1}$ and $Y$ are monotonically independent w.r.t.\ a unit vector $\Omega\in H$. Assume furthermore that $\Omega$ is cyclic, i.e.
\[
\overline{{\rm alg}\{h(X),h(Y);h\in C_b(\mathbb{R}_+)\}\Omega}=H.
\]
Then $\sqrt{X}Y\sqrt{X}$ is essentially self-adjoint and the distribution w.r.t.\ $\Omega$ of its closure is equal to the multiplicative monotone convolution of the distributions of $X$ and $Y$ w.r.t.\ $\Omega$, i.e.
\[
\mathcal{L}\left(\sqrt{X}Y\sqrt{X},\Omega\right)=\mathcal{L}(X,\Omega)\kern0.3em\rule{0.04em}{0.52em}\kern-.35em\gtrdot\mathcal{L}(Y,\Omega).
\]
\end{theorem}
\begin{proof}
Let $\mu=\mathcal{L}(X,\Omega)$, $\nu=\mathcal{L}(Y,\Omega)$.
By Theorem \ref{thm-mon-model} it is sufficient to consider the case $X=S_x$ and $Y=M_y$. In this case Proposition \ref{prop-mon-mult1-resolvent} shows that $z-\sqrt{X}Y\sqrt{X}$ has a bounded inverse for all $z\in\mathbb{C}\backslash\mathbb{R}$. This implies that ${\rm Ran}(z-\sqrt{X}Y\sqrt{X})$ is dense for all $z\in\mathbb{C}\backslash\mathbb{R}$ and that $\sqrt{X}Y\sqrt{X}$ is essentially self-adjoint, cf.\ \cite[Theorem VIII.3]{reed+simon80}.
Using Equation \eqref{eq-mon-mult1-resolvent}, we can compute the Cauchy transform of the distribution of the closure of $\sqrt{X}Y\sqrt{X}$. Let $z\in\mathbb{C}^+$, then we have
\begin{eqnarray*}
G_{\sqrt{X}Y\sqrt{X}}(z) &=& \left\langle\Omega,\left(z-\sqrt{X}Y\sqrt{X}\right)^{-1}\Omega\right\rangle = \left\langle \mathbf{1}, \left(z-\sqrt{S_x}M_y\sqrt{S_x}\right)^{-1}\mathbf{1}\right\rangle \\
&=& \left\langle \mathbf{1},\frac{1+g_1}{z-y}+h_1\right\rangle
\end{eqnarray*}
where
\begin{eqnarray*}
g_1(x) &=&\frac{\sqrt{x}-x+(x-1)zG_\nu(z)}{(1-x)zG_\nu(z)+x} = \frac{\sqrt{x}}{(1-x)zG_\nu(z)+x} -1, \\
h_1(x) &=&\frac{(1-\sqrt{x})G_\nu(z)}{(1-x)zG_\nu(z)+x}.
\end{eqnarray*}
Therefore
\begin{eqnarray}
G_{\sqrt{X}Y\sqrt{X}}(z) &=& \int_{\mathbb{R}_+\times\mathbb{R}_+} \left(\frac{1+g_1(x)}{z-y} + h_1(x)\right){\rm d}\mu\otimes\nu(x,y) \nonumber \\
&=& \int_{\mathbb{R}_+} \frac{G_\nu(z)}{(1-x)zG_\nu(z)+x}{\rm d}\mu(x) = \frac{G_\nu(z)}{zG_\nu(z)-1}G_\mu\left(\frac{zG_\nu(z)}{zG_\nu(z)-1}\right). \label{eq-xyx-yxy}
\end{eqnarray}
Using the relation
\[
G_\mu(z) = \frac{1}{z}\left(\psi_\mu\left(\frac{1}{z}\right)+1\right)
\]
to replace the Cauchy transforms by the $\psi$-transforms, this becomes
\[
\psi_{\sqrt{X}Y\sqrt{X}}\left(\frac{1}{z}\right) = \psi_\mu\left(\frac{\psi_\nu(1/z)}{\psi_\nu(1/z)+1}\right),
\]
or finally
\[
K_{\sqrt{X}Y\sqrt{X}}(z) = K_\mu\big(K_\nu(z)\big)=K_{\mu\kern0.17em\lower0.1ex\hbox{\rule{0.025em}{0.43em}}\kern-.105em\gtrdot\nu}(z).
\]
\end{proof}
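Equation \eqref{eq-xyx-yxy} can likewise be checked numerically for atomic measures on $\mathbb{R}_+$, where $\sqrt{S_x}=M_{\sqrt{x}-1}P_2+\mathbf{1}$ becomes an explicit matrix. The following Python sketch (illustrative only; atoms, weights and the test point are arbitrary) compares the resolvent expectation of $\sqrt{S_x}M_y\sqrt{S_x}$ with the closed-form expression.

```python
import numpy as np

# Finite atomic model on R_+: mu = sum_i u_i delta_{a_i}, nu = sum_j v_j delta_{b_j}.
a, u = np.array([0.5, 2.0, 3.0]), np.array([0.5, 0.3, 0.2])
b, v = np.array([1.0, 4.0]), np.array([0.6, 0.4])
n, m = len(a), len(b)
z = 1.5 + 1.0j

SqS = np.zeros((n * m, n * m), dtype=complex)  # sqrt(S_x) = M_{sqrt(x)-1} P_2 + 1
My = np.zeros((n * m, n * m), dtype=complex)   # multiplication by y
for i in range(n):
    for j in range(m):
        My[i * m + j, i * m + j] = b[j]
        SqS[i * m + j, i * m + j] += 1.0
        for jp in range(m):
            SqS[i * m + j, i * m + jp] += (np.sqrt(a[i]) - 1.0) * v[jp]

T = SqS @ My @ SqS
one = np.ones(n * m, dtype=complex)
psi = np.linalg.solve(z * np.eye(n * m) - T, one)
G_op = np.kron(u, v) @ psi                     # Cauchy transform of sqrt(X) Y sqrt(X)

G_nu = np.sum(v / (z - b))
w = z * G_nu / (z * G_nu - 1)
G_formula = G_nu / (z * G_nu - 1) * np.sum(u / (w - a))
assert abs(G_op - G_formula) < 1e-10
```

The atoms of $\mu$ are taken positive so that the square root $M_{\sqrt{x}-1}P_2+\mathbf{1}$ is well defined.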
\subsection{The ``other'' multiplicative convolution on $\mathcal{M}_1(\mathbb{R}_+)$}\label{subsec-mon-mult-other}
Let $X$ and $Y$ be two positive operators such that $X-\mathbf{1}$ and $Y$ are monotonically independent w.r.t.\ some unit vector. We have just shown that the distribution of $\sqrt{X}Y\sqrt{X}$ is given by the multiplicative monotone convolution of the distributions of $X$ and $Y$, as defined by \cite{bercovici04}. But in \cite{franz05a} it was already shown that in general this is {\em not} the case for $\sqrt{Y}X\sqrt{Y}$, which would be another obvious choice for constructing multiplicatively a positive operator out of $X$ and $Y$. It is possible to characterize the distribution of $\sqrt{Y}X\sqrt{Y}$ using the same methods as in the previous subsection. We will summarize the main results here, but omit the details of the calculations.
\begin{proposition}
Let $\mu$ and $\nu$ be two probability measures on $\mathbb{R}_+$, and $S_x$, $M_y$ as in Proposition \ref{prop-mon-mult1-resolvent}.
Then the operator $z-\sqrt{M_y}S_x\sqrt{M_y}$ has a bounded inverse for all $z\in\mathbb{C}\backslash\mathbb{R}$, given by
\begin{equation}
\left((z-\sqrt{M_y}S_x\sqrt{M_y})^{-1}\psi\right)(x,y) = \frac{\psi(x,y)+\sqrt{y}h(x)}{z-y},
\end{equation}
where
\[
h(x) = \frac{(x-1)\int_{\mathbb{R}_+} \frac{\sqrt{y}\psi(x,y)}{z-y}{\rm d}\nu(y)}{(1-x)zG_\nu(z)+x}.
\]
\end{proposition}
\begin{theorem}
Let $X$ and $Y$ be two positive self-adjoint operators on a Hilbert space $H$ such that $X-\mathbf{1}$ and $Y$ are monotonically independent w.r.t.\ a unit vector $\Omega\in H$. Assume furthermore that $\Omega$ is cyclic, i.e.
\[
\overline{{\rm alg}\{h(X),h(Y);h\in C_b(\mathbb{R}_+)\}\Omega}=H.
\]
Then $\sqrt{Y}X\sqrt{Y}$ is positive and essentially self-adjoint. Denote by $\mu=\mathcal{L}(X,\Omega)$ and $\nu=\mathcal{L}(Y,\Omega)$ the distributions of $X$ and $Y$ w.r.t.\ $\Omega$, let $W_\nu(z)=G_{\sqrt{y}\nu}(z) = \int_{\mathbb{R}_+}\frac{\sqrt{y}}{z-y}{\rm d}\nu(y)$ for $z\in\mathbb{C}\backslash\mathbb{R}$. Then the distribution $\lambda=\mathcal{L}(\sqrt{Y}X\sqrt{Y},\Omega)$ of its closure w.r.t.\ $\Omega$ is characterized by its Cauchy transform
\begin{equation}\label{eq-mon-mult-alt}
G_\lambda(z) =G_\nu(z) -\frac{\big(W_\nu(z)\big)^2}{zG_\nu(z)-1} + \frac{\big(W_\nu(z)\big)^2}{\big(zG_\nu(z)-1\big)^2}G_\mu\left(\frac{zG_\nu(z)}{zG_\nu(z)-1}\right).
\end{equation}
\end{theorem}
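As with the previous convolutions, Equation \eqref{eq-mon-mult-alt} can be verified numerically in the finite atomic model, where $\sqrt{M_y}$ is simply multiplication by $\sqrt{y}$. The following Python sketch (illustrative, not part of the original text; the measures and test point are arbitrary choices) compares the resolvent expectation of $\sqrt{M_y}S_x\sqrt{M_y}$ with the right-hand side of \eqref{eq-mon-mult-alt}.

```python
import numpy as np

# Finite atomic model on R_+: mu = sum_i u_i delta_{a_i}, nu = sum_j v_j delta_{b_j}.
a, u = np.array([0.5, 2.0, 3.0]), np.array([0.5, 0.3, 0.2])
b, v = np.array([1.0, 4.0]), np.array([0.6, 0.4])
n, m = len(a), len(b)
z = 1.5 + 1.0j

Sx = np.zeros((n * m, n * m), dtype=complex)    # S_x = M_{x-1} P_2 + 1
SqMy = np.zeros((n * m, n * m), dtype=complex)  # sqrt(M_y): multiplication by sqrt(y)
for i in range(n):
    for j in range(m):
        SqMy[i * m + j, i * m + j] = np.sqrt(b[j])
        Sx[i * m + j, i * m + j] += 1.0
        for jp in range(m):
            Sx[i * m + j, i * m + jp] += (a[i] - 1.0) * v[jp]

T = SqMy @ Sx @ SqMy
psi = np.linalg.solve(z * np.eye(n * m) - T, np.ones(n * m, dtype=complex))
G_op = np.kron(u, v) @ psi                      # Cauchy transform of sqrt(Y) X sqrt(Y)

G_nu = np.sum(v / (z - b))
W_nu = np.sum(v * np.sqrt(b) / (z - b))
c = z * G_nu - 1
G_mu_w = np.sum(u / (z * G_nu / c - a))
G_formula = G_nu - W_nu**2 / c + W_nu**2 / c**2 * G_mu_w
assert abs(G_op - G_formula) < 1e-10
```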
\begin{remark}\label{rem-mon-mult-alt}
We can use Equation \eqref{eq-mon-mult-alt} to define an alternative multiplicative monotone convolution. Let $\mu$ and $\nu$ be two probability measures on $\mathbb{R}_+$, then we define $\lambda=\mu\kern0.3em\rule{0.04em}{0.52em}\kern-.15em\tilde{\gtrdot}\kern0.2em\nu$ as the unique probability measure $\lambda$ on $\mathbb{R}_+$ whose Cauchy transform is given by Equation \eqref{eq-mon-mult-alt}.
If $\nu=\delta_y$ is a Dirac mass, then $W_{\delta_y}=\frac{\sqrt{y}}{z-y}$, $\big(W_{\delta_y}(z)\big)^2=\frac{y}{(z-y)^2}=G_{\delta_y}(z)\big(zG_{\delta_y}(z)-1\big)$, and Equation \eqref{eq-mon-mult-alt} reduces to Equation \eqref{eq-xyx-yxy}. Therefore
\[
\mu\kern0.3em\rule{0.04em}{0.52em}\kern-.15em\tilde{\gtrdot}\kern0.2em\delta_y = \mu\kern0.3em\rule{0.04em}{0.52em}\kern-.35em\gtrdot\delta_y= D^{-1}_y\mu
\]
for all $y\in\mathbb{R}_+$ and $\mu\in\mathcal{M}_1(\mathbb{R}_+)$.
But in general the two convolutions are different, as was already stated in \cite{franz05a}. Actually, the convolution $\kern0.3em\rule{0.04em}{0.52em}\kern-.15em\tilde{\gtrdot}\kern0.2em$ is not even associative, as the following examples show. Let $x,y>0$, $0<p<1$, and set $X=\left(\begin{array}{cc} x & 0 \\ 0 & 1\end{array}\right)$, $Y=y \left(\begin{array}{cc} p & \sqrt{p(1-p)} \\ \sqrt{p(1-p)} & 1-p\end{array}\right)$, $\Omega=\left(\begin{array}{c} 1 \\ 0 \end{array}\right)$, then $X-\mathbf{1}$ and $Y$ are monotonically independent and $\mathcal{L}(X,\Omega)=\delta_x$, $\mathcal{L}(Y,\Omega)=p\delta_0+(1-p)\delta_y$. Furthermore we get
\[
\delta_x\kern0.3em\rule{0.04em}{0.52em}\kern-.15em\tilde{\gtrdot}\kern0.2em\big(p\delta_0+(1-p)\delta_y) = \mathcal{L}\big(\sqrt{Y}X\sqrt{Y},\Omega\big) = p\delta_0 + (1-p)\delta_{y(xp+1-p)}.
\]
Therefore
\begin{eqnarray*}
(\delta_{x_1}\kern0.3em\rule{0.04em}{0.52em}\kern-.15em\tilde{\gtrdot}\kern0.2em\delta_{x_2})\kern0.3em\rule{0.04em}{0.52em}\kern-.15em\tilde{\gtrdot}\kern0.2em\big(p\delta_0+(1-p)\delta_y\big) &=& \delta_{x_1x_2}\kern0.3em\rule{0.04em}{0.52em}\kern-.15em\tilde{\gtrdot}\kern0.2em\big(p\delta_0+(1-p)\delta_y\big) \\
&=& \big(p\delta_0+(1-p)\delta_{y(x_1x_2p+1-p)}\big) \\
&\not=& \delta_{x_1}\kern0.3em\rule{0.04em}{0.52em}\kern-.15em\tilde{\gtrdot}\kern0.2em\left(\delta_{x_2}\kern0.3em\rule{0.04em}{0.52em}\kern-.15em\tilde{\gtrdot}\kern0.2em\big(p\delta_0+(1-p)\delta_{y}\big)\right) \\
&=& \delta_{x_1}\kern0.3em\rule{0.04em}{0.52em}\kern-.15em\tilde{\gtrdot}\kern0.2em\big(p\delta_0+(1-p)\delta_{y(x_2p+1-p)}\big) \\
&=&\big(p\delta_0+(1-p)\delta_{y(x_1p+1-p)(x_2p+1-p)}\big)
\end{eqnarray*}
in general.
\end{remark}
\subsection{Multiplicative monotone convolution on $\mathcal{M}_1(\mathbb{T})$}\label{subsec-mult-mon-T}
\begin{definition}\cite{bercovici04}
Let $\mu$ and $\nu$ be two probability measures on the unit circle $\mathbb{T}$ with transforms $K_\mu$ and $K_\nu$. Then the multiplicative monotone convolution of $\mu$ and $\nu$ is defined as the unique probability measure $\lambda=\mu\kern0.3em\rule{0.04em}{0.52em}\kern-.35em\gtrdot\nu$ on $\mathbb{T}$ with transform $K_\lambda=K_\mu\circ K_\nu$.
\end{definition}
It follows from Subsection \ref{nevanlinna} that the multiplicative monotone convolution on $\mathcal{M}_1(\mathbb{T})$ is well-defined.
Let us first recall some basic properties of the multiplicative monotone convolution.
\begin{proposition}
The multiplicative monotone convolution on $\mathcal{M}_1(\mathbb{T})$ is associative and $*$-weakly continuous in both arguments. It is affine in the first argument and convolution from the right by a Dirac measure corresponds to rotation, i.e. $\mu\kern0.3em\rule{0.04em}{0.52em}\kern-.35em\gtrdot\delta_{e^{i\vartheta}} = R_\vartheta^{-1}\mu$ for $\vartheta\in[0,2\pi[$, where $R_\vartheta:\mathbb{T}\to\mathbb{T}$ is defined by $R_\vartheta(t)=e^{i\vartheta} t$.
\end{proposition}
This convolution is not commutative, i.e.\ in general we have $\mu\kern0.3em\rule{0.04em}{0.52em}\kern-.35em\gtrdot\nu\not=\nu\kern0.3em\rule{0.04em}{0.52em}\kern-.35em\gtrdot\mu$. As in the additive case, it is not affine in the second argument, and convolution from the left by a Dirac mass is in general not equal to a rotation.
Probability measures on the unit circle arise as distributions of unitary operators and they are completely characterized by their moments. Therefore the following theorem is a straightforward consequence of \cite{bercovici04} (see also \cite[Theorem 4.1 and Corollary 4.2]{franz05a}).
\begin{theorem}
Let $U$ and $V$ be two unitary operators on a Hilbert space $H$, $\Omega\in H$ a unit vector and assume furthermore that $U-\mathbf{1}$ and $V$ are monotonically independent w.r.t.\ $\Omega$. Then the products $UV$ and $VU$ are also unitary and their distribution w.r.t. $\Omega$ is equal to the multiplicative monotone convolution of the distributions of $U$ and $V$, i.e.\
\begin{equation}\label{eq-mon-mult-conv}
\mathcal{L}(UV,\Omega) = \mathcal{L}(VU,\Omega) = \mathcal{L}(U,\Omega)\kern0.3em\rule{0.04em}{0.52em}\kern-.35em\gtrdot\mathcal{L}(V,\Omega).
\end{equation}
\end{theorem}
\begin{remark}
Note that the order of the convolution product on the right-hand-side of Equation \eqref{eq-mon-mult-conv} depends only on the order in which the operators $U-\mathbf{1}$ and $V$ are monotonically independent, but not on the order in which $U$ and $V$ are multiplied.
\end{remark}
\section{Boolean Convolutions}
\begin{definition}
Let $\mathcal{A}_1,\mathcal{A}_2\subset\mathcal{B}(H)$ be two
$*$-algebras of bounded operators on a Hilbert space $H$, and let
$\Omega\in H$ be a unit vector. We say that $\mathcal{A}_1$ and
$\mathcal{A}_2$ are boolean independent w.r.t.\ $\Omega$, if we have
\[
\langle \Omega, X_1X_2\cdots X_k\Omega\rangle = \prod _{\kappa=1}^k
\langle \Omega, X_\kappa\Omega\rangle
\]
for all $k\in\mathbb{N}$, $\varepsilon\in\mathbb{A}_k$,
$X_1\in\mathcal{A}_{\varepsilon_1},\ldots,X_k\in\mathcal{A}_{\varepsilon_k}$.
\end{definition}
\begin{remark}
The algebras are not required to be unital. If one of them is
unital, say $\mathcal{A}_1$, then the restriction of
$\Phi(\cdot)=\langle\Omega,\,\cdot\,\Omega\rangle$ to the other
algebra, say $\mathcal{A}_2$, has to be a homomorphism, since the
boolean independence implies
\[
\langle \Omega, XY\Omega\rangle = \langle \Omega,
X\mathbf{1}Y\Omega\rangle = \langle \Omega, X\Omega\rangle\langle
\Omega,Y\Omega\rangle
\]
for $X,Y\in\mathcal{A}_2$.
\end{remark}
\begin{definition}\label{def-bool-indep}
Let $X$ and $Y$ be two normal operators on a Hilbert space $H$,
not necessarily bounded. We say that $X$ and $Y$ are boolean
independent w.r.t.\ $\Omega$, if the $*$-algebras ${\rm alg}_0(X)=\{h(X);h\in C_b(
\mathbb{C}),h(0)=0\}$ and ${\rm alg}_0(Y)=\{h(Y);h\in C_b(
\mathbb{C}),h(0)=0\}$ are boolean independent w.r.t.\ $\Omega$.
\end{definition}
We will start by characterizing up to unitary transformations the general form of two boolean independent normal operators. Given a measure space $(M,\mathcal{M},\mu)$, we shall denote by $L^2(M,\mu)_0$ the orthogonal complement of the constant function, i.e.\
\[
L^2(M,\mu)_0 = \left\{\psi\in L^2(M,\mu); \int_M \psi {\rm d}\mu=0\right\}.
\]
\begin{proposition}\label{prop-bool-model}
Let $\mu,\nu$ be two probability measures on $\mathbb{C}$ and define normal operators $N_x$ and $N_y$ on $\mathbb{C}\oplus L^2(\mathbb{C},\mu)_0\oplus L^2(\mathbb{C},\nu)_0$ by
\begin{eqnarray*}
{\rm Dom}\, N_x &=& \left\{\left(\begin{array}{c} \alpha \\ \psi_1 \\ \psi_2 \end{array}\right)\in \mathbb{C}\oplus L^2(\mathbb{C},\mu)_0\oplus L^2(\mathbb{C},\nu)_0; \int_{\mathbb{C}} \left| x \big(\psi_1(x)+\alpha\big)\right|^2{\rm d}\mu(x) < \infty\right\}, \\
{\rm Dom}\, N_y &=& \left\{\left(\begin{array}{c} \alpha \\ \psi_1 \\ \psi_2 \end{array}\right)\in \mathbb{C}\oplus L^2(\mathbb{C},\mu)_0\oplus L^2(\mathbb{C},\nu)_0; \int_{\mathbb{C}} \left| y \big(\psi_2(y)+\alpha\big)\right|^2{\rm d}\nu(y) < \infty\right\},
\end{eqnarray*}
\begin{eqnarray*}
N_x\left(\begin{array}{c} \alpha \\ \psi_1 \\ \psi_2 \end{array}\right) &=& \left(\begin{array}{c} \int_{\mathbb{C}}x\big(\psi_1(x)+\alpha\big){\rm d}\mu(x) \\ x(\psi_1+\alpha)- \int_{\mathbb{C}}x\big(\psi_1(x)+\alpha\big){\rm d}\mu(x) \\ 0 \end{array}\right), \\[1mm]
N_y\left(\begin{array}{c} \alpha \\ \psi_1 \\ \psi_2 \end{array}\right)&=& \left(\begin{array}{c} \int_{\mathbb{C}}y\big(\psi_2(y)+\alpha\big){\rm d}\nu(y) \\ 0 \\ y(\psi_2+\alpha) - \int_{\mathbb{C}}y\big(\psi_2(y)+\alpha\big){\rm d}\nu(y) \end{array}\right).
\end{eqnarray*}
Then $N_x$ and $N_y$ are boolean independent w.r.t.\ the vector $\omega=\left(\begin{array}{c} 1 \\ 0 \\ 0 \end{array}\right)$ and we have $\mathcal{L}(N_x,\omega) = \mu$, $\mathcal{L}(N_y,\omega) = \nu$.
\end{proposition}
\begin{proof}
Under the identification $\mathbb{C}\oplus L^2(\mathbb{C},\mu)_0\oplus L^2(\mathbb{C},\nu)_0\cong L^2(\mathbb{C},\mu)\oplus L^2(\mathbb{C},\nu)_0$, where
\[
\left(\begin{array}{c} \alpha \\ \psi_1 \\ \psi_2 \end{array}\right) \cong \left(\begin{array}{c} \psi_1+\alpha \\ \psi_2 \end{array}\right),
\]
the operator $N_x$ becomes multiplication by the variable $x$ on $L^2(\mathbb{C},\mu)$. It is clearly normal and we have
\[
h(N_x)\left(\begin{array}{c} \alpha \\ \psi_1 \\ \psi_2
\end{array}\right) =
\left( \begin{array}{c} \int_{\mathbb{C}} h(x)\big(\alpha + \psi_1(x)\big){\rm d}\mu(x) \\
h\cdot(\alpha+\psi_1) -\int_{\mathbb{C}} h(x)\big(\alpha + \psi_1(x)\big){\rm d}\mu(x) \\
h(0)\psi_2 \end{array}\right)
\]
and $\langle \omega, h(N_x)\omega\rangle=\int_{\mathbb{C}} h{\rm d}\mu$ for all $h\in C_b(\mathbb{C})$, i.e.\ $\mathcal{L}(N_x,\omega)=\mu$. Similarly
\[
h(N_y)\left(\begin{array}{c} \alpha \\ \psi_1 \\ \psi_2
\end{array}\right) = \left( \begin{array}{c}\int_{\mathbb{C}} h(y)\big(\alpha + \psi_2(y)\big){\rm d}\nu(y) \\
h(0)\psi_1 \\
h\cdot(\alpha+\psi_2) -\int_{\mathbb{C}} h(y)\big(\alpha + \psi_2(y)\big){\rm d}\nu(y) \end{array}\right)
\]
for all $h\in C_b(\mathbb{C})$, and $\mathcal{L}(N_y,\omega)=\nu$.
Let $f_1,\ldots,f_n,g_1,\ldots,g_n\in C_b(\mathbb{C})$, with $f_1(0)=\cdots=f_n(0)=g_1(0)=\cdots=g_n(0)=0$. Then
\[
f_n(N_x)g_{n-1}(N_y)\cdots g_1(N_y)f_1(N_x)\omega = \left(\begin{array}{c} \prod_{k=1}^n \int_{\mathbb{C}} f_k{\rm d}\mu \prod_{\ell=1}^{n-1} \int_{\mathbb{C}} g_\ell{\rm d}\nu
\\
\prod_{k=1}^{n-1} \int_{\mathbb{C}} f_k{\rm d}\mu \prod_{\ell=1}^{n-1} \int_{\mathbb{C}} g_\ell{\rm d}\nu \left(f_n - \int_{\mathbb{C}}f_n{\rm d}\mu\right) \\
0\end{array}\right)
\]
and therefore
\begin{eqnarray*}
\langle \omega, f_n(N_x)g_{n-1}(N_y)\cdots g_1(N_y)f_1(N_x)\omega\rangle
&=& \prod_{k=1}^n \int_{\mathbb{C}} f_k\,{\rm d}\mu \,\prod_{\ell=1}^{n-1} \int_{\mathbb{C}} g_\ell\,{\rm d}\nu \\
&=& \prod_{k=1}^n \langle\omega,f_k(N_x)\omega\rangle \prod_{\ell=1}^{n-1} \langle\omega,g_\ell(N_y)\omega\rangle
\end{eqnarray*}
i.e.\ the condition for boolean independence is satisfied in this case.
Similarly one checks the expectation of $g_n(N_y)f_n(N_x)\cdots g_1(N_y)f_1(N_x)$, $f_n(N_x)g_n(N_y)\cdots f_1(N_x)g_1(N_y)$, and $g_n(N_y)f_{n-1}(N_x)\cdots f_1(N_x)g_1(N_y)$.
\end{proof}
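For atomic measures $\mu$ and $\nu$ the model just constructed is finite dimensional, so the defining factorization of boolean independence can be tested numerically. The following sketch (our own illustration, not part of the original text; the orthonormal coordinates on $L^2(\mu)\oplus L^2(\nu)_0$ and all variable names are ours) builds $N_x$ and $N_y$ as symmetric matrices and checks that a mixed moment factorizes into marginal means.

```python
import numpy as np

# Finite-dimensional sketch (ours) of the model for atomic measures:
# H = L^2(mu) (+) L^2(nu)_0, written in orthonormal coordinates.
xs, ps = np.array([-1.0, 2.0]), np.array([0.4, 0.6])           # atoms/weights of mu
ys, qs = np.array([0.0, 1.0, 3.0]), np.array([0.2, 0.5, 0.3])  # atoms/weights of nu
k, m = len(xs), len(ys)
c, d = np.sqrt(ps), np.sqrt(qs)      # coordinates of the constant function 1
# orthonormal basis of the orthogonal complement of d, i.e. of L^2(nu)_0
B = np.linalg.qr(np.column_stack([d, np.eye(m)[:, :m-1]]))[0][:, 1:]
n = k + m - 1

Nx = np.zeros((n, n))
Nx[:k, :k] = np.diag(xs)             # mult. by x on L^2(mu), 0 on L^2(nu)_0

Ny = np.zeros((n, n))                # mult. by y on the copy of L^2(nu)
Ny[:k, :k] = (qs @ ys) * np.outer(c, c)
Ny[:k, k:] = np.outer(c, (ys * d) @ B)
Ny[k:, :k] = np.outer(B.T @ (ys * d), c)
Ny[k:, k:] = B.T @ np.diag(ys) @ B

omega = np.zeros(n)
omega[:k] = c                        # the vacuum vector

def calc(M, f):                      # functional calculus via diagonalization
    w, V = np.linalg.eigh(M)
    return V @ np.diag(f(w)) @ V.T

f = lambda t: t**2                   # f(0) = 0
g = lambda t: t**3 - 2.0*t           # g(0) = 0
mixed = omega @ calc(Nx, f) @ calc(Ny, g) @ calc(Nx, f) @ omega
factored = (ps @ f(xs))**2 * (qs @ g(ys))
print(abs(mixed - factored))         # ~0: the mixed moment factorizes
```

Analogous checks work for any alternating product of functions vanishing at $0$.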
We shall now show that any pair of boolean independent normal operators can be reduced to this model.
\begin{theorem}\label{thm-bool-model}
Let $X$ and $Y$ be two normal operators on a Hilbert space
$H$ that are boolean independent w.r.t.\ $\Omega\in H$ and let
$\mu=\mathcal{L}(X,\Omega)$, $\nu=\mathcal{L}(Y,\Omega)$.
Then there exists an isometry $W:\mathbb{C}\oplus
L^2(\mathbb{C},\mu)_0\oplus L^2(\mathbb{C},\nu)_0 \to H$ such that
\begin{eqnarray}\label{eq-bool-model}
\\
W^*h(X)W\left(\begin{array}{c} \alpha \\ \psi_1 \\ \psi_2
\end{array}\right) &=&
\left( \begin{array}{c} \int_{\mathbb{C}} h(x)\big(\alpha + \psi_1(x)\big){\rm d}\mu(x) \\
h\cdot(\alpha+\psi_1) -\int_{\mathbb{C}} h(x)\big(\alpha + \psi_1(x)\big){\rm d}\mu(x) \\
h(0)\psi_2 \end{array}\right), \nonumber \\
W^*h(Y)W \left(\begin{array}{c} \alpha \\ \psi_1 \\ \psi_2
\end{array}\right) &=& \left( \begin{array}{c}\int_{\mathbb{C}} h(y)\big(\alpha + \psi_2(y)\big){\rm d}\nu(y) \\
h(0)\psi_1 \\
h\cdot(\alpha+\psi_2) -\int_{\mathbb{C}} h(y)\big(\alpha + \psi_2(y)\big){\rm d}\nu(y) \end{array}\right) \nonumber
\end{eqnarray}
for all $h\in C_b(\mathbb{C})$, $\alpha\in\mathbb{C}$, $\psi_1\in
L^2(\mathbb{C},\mu)_0$, $\psi_2\in L^2(\mathbb{C},\nu)_0$.
We have $W\big(\mathbb{C}\oplus L^2(\mathbb{C},\mu)_0\oplus L^2(\mathbb{C},\nu)_0\big)=\overline{{\rm alg}\{h(X),h(Y):h\in C_b( \mathbb{C})\} \Omega}$.
If the vector $\Omega\in H$ is cyclic for the algebra ${\rm alg}(X,Y)={\rm alg}\{h(X),h(Y):h\in C_b( \mathbb{C})\}$ generated by $X$ and $Y$, then $W$ is unitary.
\end{theorem}
\begin{proof}
For a probability measure $\mu$ on $\mathbb{C}$, let
\[
C_b(\mathbb{C})_{\mu,0} = \left\{ f\in C_b(\mathbb{C});
\int_\mathbb{C} f(z)\,{\rm d}\mu(z) =0\right\},
\]
then $C_b(\mathbb{C})_{\mu,0}$ is dense in $L^2(\mathbb{C},\mu)_0$.
Define $W:\mathbb{C}\oplus C_b(\mathbb{C})_{\mu,0}\oplus
C_b(\mathbb{C})_{\nu,0}\to H$ by
\[
W \left(\begin{array}{c}\alpha \\ f \\ g\end{array}\right) =
\big(\alpha+f(X)+g(Y)\big)\Omega.
\]
This is an isometry, since
\begin{eqnarray*}
\left\langle W \left(\begin{array}{c}\alpha_1 \\ f_1 \\
g_1\end{array}\right),
W \left(\begin{array}{c}\alpha_2 \\ f_2 \\
g_2\end{array}\right)\right\rangle &=&
\left\langle\big(\alpha_1+f_1(X)+g_1(Y)\big)\Omega,
\big(\alpha_2+f_2(X)+g_2(Y)\big)\Omega\right\rangle \\
&=&\overline{\alpha_1}\alpha_2 + \int_{\mathbb{C}}
\overline{f_1(x)}f_2(x){\rm d}\mu(x) + \int_{\mathbb{C}}
\overline{g_1(y)}g_2(y){\rm d}\nu(y),
\end{eqnarray*}
where the mixed terms all vanish because
$\langle\Omega,f_i(X)\Omega\rangle=\langle\Omega,g_i(Y)\Omega\rangle
= 0$ for $i=1,2$. Therefore $W$ extends in a unique way to an
isometry on $\mathbb{C}\oplus L^2(\mathbb{C},\mu)_0\oplus
L^2(\mathbb{C},\nu)_0$.
Let now $h\in C_b(\mathbb{C})$, then we get
\begin{multline*}
\left\langle W \left(\begin{array}{c}\alpha_1 \\ f_1 \\
g_1\end{array}\right),
h(X)W \left(\begin{array}{c}\alpha_2 \\ f_2 \\
g_2\end{array}\right)\right\rangle \\
=
\left\langle\big(\alpha_1+f_1(X)+g_1(Y)\big)\Omega,
(h(X)-h(0)\mathbf{1}\big) \big(\alpha_2+f_2(X)+g_2(Y)\big)\Omega\right\rangle \\
+ h(0)\left\langle W \left(\begin{array}{c}\alpha_1 \\ f_1 \\
g_1\end{array}\right),W \left(\begin{array}{c}\alpha_2 \\ f_2 \\
g_2\end{array}\right)\right\rangle\\
= \left\langle \big(\alpha_1+f_1(X)\big)\Omega, \big(h(X)-h(0)\mathbf{1}\big)\big(\alpha_2+f_2(X)\big)\Omega\right\rangle +h(0)\left\langle \left(\begin{array}{c}\alpha_1 \\ f_1 \\
g_1\end{array}\right),\left(\begin{array}{c}\alpha_2 \\ f_2 \\
g_2\end{array}\right)\right\rangle,
\end{multline*}
because the boolean independence and $\langle\Omega,g_i(Y)\Omega\rangle=0$ imply that all other terms vanish. But since $\langle\Omega,f_i(X)\Omega\rangle=0$, this is equal to
\begin{multline*}
\left\langle \big(\alpha_1+f_1(X)\big)\Omega, h(X)\big(\alpha_2+f_2(X)\big)\Omega\right\rangle \\
+h(0)\left(\left\langle \left(\begin{array}{c}\alpha_1 \\ f_1 \\
g_1\end{array}\right),\left(\begin{array}{c}\alpha_2 \\ f_2 \\
g_2\end{array}\right)\right\rangle - \overline{\alpha_1}\alpha_2-\langle f_1,f_2\rangle\right) \\
=\left\langle\left(\begin{array}{c} \alpha_1 \\ f_1 \\g_1 \end{array}\right) , \left(\begin{array}{c} \int h(x)\big(f_2(x)+\alpha_2\big){\rm d}\mu(x) \\ h\cdot(f_2+\alpha_2) - \int h(x)\big(f_2(x)+\alpha_2\big){\rm d}\mu(x) \\ h(0)g_2 \end{array}\right)\right\rangle.
\end{multline*}
This proves the first formula. The second formula follows by symmetry.
Let $f,g\in C_b(\mathbb{C})$, $f(0)=0$, and note that
\begin{eqnarray*}
&& \left|\left|f(X)g(Y)\Omega - \int_{\mathbb{C}}g{\rm d}\nu\,f(X)\Omega\right|\right|^2 \\
&=& \langle \Omega, g(Y)^*|f(X)|^2g(Y)\Omega\rangle - \int_{\mathbb{C}}\overline{g}{\rm d}\nu\,\langle\Omega, g(Y)^*|f(X)|^2 \Omega\rangle \\
&& - \int_{\mathbb{C}}g{\rm d}\nu\,\langle\Omega, |f(X)|^2g(Y) \Omega\rangle + \left|\int_{\mathbb{C}}g{\rm d}\nu\right|^2 \langle\Omega,|f(X)|^2\Omega\rangle \\
&=& 0,
\end{eqnarray*}
i.e.\ $f(X)g(Y)\Omega= \int_{\mathbb{C}}g{\rm d}\nu\,f(X)\Omega$. Similarly $f(Y)g(X)\Omega = \int_{\mathbb{C}}g{\rm d}\mu\, f(Y)\Omega$
and thus
\begin{eqnarray*}
\overline{{\rm alg}\{h(X),h(Y):h\in C_b( \mathbb{C})\} \Omega} &=& \overline{{\rm span}\,\{\Omega,f(X)\Omega,f(Y)\Omega; f\in C_b(\mathbb{C})\}} \\
&=& W\big(\mathbb{C}\oplus L^2(\mathbb{C},\mu)_0\oplus L^2(\mathbb{C},\nu)_0\big).
\end{eqnarray*}
If $\Omega$ is cyclic, then $W$ is surjective and therefore unitary.
\end{proof}
\begin{remark}\label{rem-bool-reduction}
As in the monotone case, cf.\ Remark \ref{rem-mon-reduction}, this theorem shows that the joint law of bounded functions of $X$ and $Y$ is uniquely determined by $\mathcal{L}(X,\Omega)$ and $\mathcal{L}(Y,\Omega)$. Furthermore, the characterisation and computation of the law of unbounded functions of $X$ and $Y$ like, e.g., $X+Y$ or $\sqrt{X}Y\sqrt{X}$, is also reduced to the model introduced in Proposition \ref{prop-bool-model}.
\end{remark}
\subsection{Additive boolean convolution on
$\mathcal{M}_1(\mathbb{R})$}
\begin{definition}\cite{speicher+woroudi93}
Let $\mu$ and $\nu$ be two probability measures on $\mathbb{R}$ with
reciprocal Cauchy transforms $F_\mu$ and $F_\nu$. Then we define the
additive boolean convolution $\lambda=\mu\uplus\nu$ of $\mu$ and
$\nu$ as the unique probability measure $\lambda$ on $\mathbb{R}$
with reciprocal Cauchy transform given by
\[
F_\lambda(z)= F_\mu(z)+F_\nu(z)-z
\]
for $z\in\mathbb{C}^+$.
\end{definition}
That the additive boolean convolution is well-defined follows from
Subsection \ref{nevanlinna}. It is commutative, associative, and $*$-weakly continuous, but not affine, cf.\ \cite{speicher+woroudi93}.
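As a quick numerical illustration (ours, not part of the text): for the symmetric Bernoulli law $\mu=\frac{1}{2}(\delta_{-1}+\delta_{1})$ one has $F_\mu(z)=z-1/z$, hence $F_{\mu\uplus\mu}(z)=z-2/z$, which is the reciprocal Cauchy transform of $\frac{1}{2}(\delta_{-\sqrt{2}}+\delta_{\sqrt{2}})$.

```python
import numpy as np

def G_atomic(z, atoms, weights):     # Cauchy transform of an atomic measure
    return np.sum(np.asarray(weights) / (z - np.asarray(atoms)))

z = 0.7 + 1.3j                       # a test point in the upper half plane
G_mu = G_atomic(z, [-1.0, 1.0], [0.5, 0.5])
F_mu = 1.0 / G_mu                    # reciprocal Cauchy transform, = z - 1/z
F_lam = F_mu + F_mu - z              # defining relation of the boolean sum
G_lam = 1.0 / F_lam

r = np.sqrt(2.0)
err = abs(G_lam - G_atomic(z, [-r, r], [0.5, 0.5]))
print(err)                           # ~0
```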
\begin{proposition}\label{prop-bool-add-resolvent}
Let $\mu$ and $\nu$ be two probability measures on $\mathbb{R}$ and define operators $N_x$ and $N_y$ as in Proposition \ref{prop-bool-model}. Then $N_x$ and $N_y$ are self-adjoint and boolean independent w.r.t.\ $\omega=\left(\begin{array}{c} 1 \\ 0 \\ 0\end{array}\right)$. Furthermore, the operator $z-N_x-N_y$ has a bounded inverse for all $z\in\mathbb{C}\backslash\mathbb{R}$, given by
\begin{equation}\label{eq-bool-add-resolvent}
(z-N_x-N_y)^{-1} \left(\begin{array}{c} \alpha \\ \psi_1 \\ \psi_2\end{array}\right) =\left(\begin{array}{c} \beta \\ \frac{\psi_1 + \beta x - c_x}{z-x} \\\frac{\psi_2 + \beta y - c_y}{z-y} \end{array}\right),
\end{equation}
where
\begin{equation}\label{eq-beta-value}
\beta=\frac{\alpha G_\mu(z)G_\nu(z) + G_\nu(z)\int_\mathbb{R}\frac{\psi_1(x)}{z-x}{\rm d}\mu(x) + G_\mu(z)\int_\mathbb{R} \frac{\psi_2(y)}{z-y}{\rm d}\nu(y)}{G_\mu(z)+G_\nu(z)-zG_\mu(z)G_\nu(z)},
\end{equation}
and $c_x,c_y\in\mathbb{C}$ have to be chosen such that
\begin{equation}\label{eq-cx-cy-constants}
\int_\mathbb{R}\frac{\psi_1(x) + \beta x - c_x}{z-x}{\rm d}\mu(x)=0=\int_\mathbb{R}\frac{\psi_2(y) + \beta y - c_y}{z-y}{\rm d}\nu(y).
\end{equation}
\end{proposition}
Note that Equation \eqref{eq-cx-cy-constants} yields the following formulas for the constants $c_x,c_y$,
\begin{eqnarray*}
c_x &=& \frac{\int\frac{\psi_1(x)}{z-x}{\rm d}\mu(x)+\beta\big(zG_\mu(z)-1\big)}{G_\mu(z)}, \\
c_y &=& \frac{\int\frac{\psi_2(y)}{z-y}{\rm d}\nu(y)+\beta\big(zG_\nu(z)-1\big)}{G_\nu(z)}.
\end{eqnarray*}
\begin{proof}
$N_x$ and $N_y$ are boolean independent by Proposition \ref{prop-bool-model}.
For $z\in\mathbb{C}^+$, we have ${\rm Im}\,F_\mu(z)\ge {\rm Im}\,z>0$, ${\rm Im}\,F_\nu(z)\ge {\rm Im}\,z>0$, and therefore
\[
{\rm Im}\frac{G_\mu(z)+G_\nu(z)-zG_\mu(z)G_\nu(z)}{G_\mu(z)G_\nu(z)} = {\rm Im}\,\big(F_\mu(z)+F_\nu(z)-z\big)>0.
\]
This shows that the denominator of the right-hand side of Equation \eqref{eq-beta-value} cannot vanish for $z\in\mathbb{C}^+$. Since $G_\mu(\overline{z}) = \overline{G_\mu(z)}$ and $G_\nu(\overline{z}) = \overline{G_\nu(z)}$, it cannot vanish for $z$ with ${\rm Im}\,z<0$, either. The functions $\frac{1}{z-x}$ and $\frac{x}{z-x}$ are bounded on $\mathbb{R}$ for $z\in\mathbb{C}\backslash\mathbb{R}$, therefore Equation \eqref{eq-bool-add-resolvent} defines a bounded operator.
Let
\[
\varphi_1=\frac{\psi_1 + \beta x - c_x}{z-x} \qquad\mbox{ and }\qquad \varphi_2=\frac{\psi_2 + \beta y - c_y}{z-y},
\]
then
\[
(z-N_x-N_y)\left(\begin{array}{c} \beta \\ \varphi_1 \\ \varphi_2 \end{array}\right) = \left( \begin{array}{c} z\beta - d_x-d_y \\ (z-x)\varphi_1 - \beta x + d_x \\ (z-y)\varphi_2 - \beta y + d_y\end{array}\right) =\left( \begin{array}{c} z\beta - d_x-d_y \\ \psi_1 - c_x + d_x \\ \psi_2 - c_y + d_y\end{array}\right)
\]
where
\[
d_x= \int x\big(\varphi_1(x)+\beta\big){\rm d}\mu(x), \qquad d_y= \int y \big(\varphi_2(y)+\beta\big){\rm d}\nu(y).
\]
Since $\psi_1\in L^2(\mathbb{R},\mu)_0$, $\psi_2\in L^2(\mathbb{R},\nu)_0$, integrating over the second and third components gives $c_x=d_x$ and $c_y=d_y$. Therefore
\[
(z-N_x-N_y)\left(\begin{array}{c} \beta \\ \varphi_1 \\ \varphi_2 \end{array}\right) = \left( \begin{array}{c} z\beta - c_x-c_y \\ \psi_1 \\ \psi_2 \end{array}\right).
\]
We have to show that the first component is equal to $\alpha$. We get
\begin{multline*}
z\beta - c_x-c_y = z\beta - \frac{\int\frac{\psi_1(x)}{z-x}{\rm d}\mu(x)+\beta\big(zG_\mu(z)-1\big)}{G_\mu(z)} - \frac{\int\frac{\psi_2(y)}{z-y}{\rm d}\nu(y)+\beta\big(zG_\nu(z)-1\big)}{G_\nu(z)} \\
= \beta\frac{G_\mu(z)+G_\nu(z)-zG_\mu(z)G_\nu(z)}{G_\mu(z)G_\nu(z)} - \frac{1}{G_\mu(z)}\int\frac{\psi_1(x)}{z-x}{\rm d}\mu(x) - \frac{1}{G_\nu(z)}\int\frac{\psi_2(y)}{z-y}{\rm d}\nu(y)
\end{multline*}
Substituting Equation \eqref{eq-beta-value} into this expression, we get the desired result $z\beta-c_x-c_y=\alpha$.
\end{proof}
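For atomic measures the model is finite dimensional and the resolvent formula, together with the constants $c_x$, $c_y$, can be tested numerically. The sketch below (ours; the matrix realization in orthonormal coordinates is our own construction) applies $z-N_x-N_y$ to the vector given by Equation \eqref{eq-bool-add-resolvent} and checks that the original vector $(\alpha,\psi_1,\psi_2)$ is recovered.

```python
import numpy as np

# Finite-dimensional realization (ours) of the model for atomic mu, nu.
xs, ps = np.array([-1.0, 2.0]), np.array([0.4, 0.6])
ys, qs = np.array([0.0, 1.0, 3.0]), np.array([0.2, 0.5, 0.3])
k, m = len(xs), len(ys)
c, d = np.sqrt(ps), np.sqrt(qs)
B = np.linalg.qr(np.column_stack([d, np.eye(m)[:, :m-1]]))[0][:, 1:]
n = k + m - 1

Nx = np.zeros((n, n)); Nx[:k, :k] = np.diag(xs)
Ny = np.zeros((n, n))
Ny[:k, :k] = (qs @ ys) * np.outer(c, c)
Ny[:k, k:] = np.outer(c, (ys * d) @ B)
Ny[k:, :k] = np.outer(B.T @ (ys * d), c)
Ny[k:, k:] = B.T @ np.diag(ys) @ B

z = 0.3 + 0.9j
alpha = 1.0 + 0.5j
psi1 = np.array([1.5, -1.0])         # mean zero w.r.t. mu
psi2 = np.array([1.0, 0.2, -1.0])    # mean zero w.r.t. nu

Gm, Gn = np.sum(ps/(z - xs)), np.sum(qs/(z - ys))
I1, I2 = np.sum(ps*psi1/(z - xs)), np.sum(qs*psi2/(z - ys))
beta = (alpha*Gm*Gn + Gn*I1 + Gm*I2) / (Gm + Gn - z*Gm*Gn)
cx = (I1 + beta*(z*Gm - 1)) / Gm
cy = (I2 + beta*(z*Gn - 1)) / Gn
phi1 = (psi1 + beta*xs - cx) / (z - xs)
phi2 = (psi2 + beta*ys - cy) / (z - ys)

def coords(a, f1, f2):               # (alpha, psi_1, psi_2) -> coordinates
    return np.concatenate([c*(f1 + a), B.T @ (d*f2)])

resid = (z*np.eye(n) - Nx - Ny) @ coords(beta, phi1, phi2) \
        - coords(alpha, psi1, psi2)
print(np.max(np.abs(resid)))         # ~0
```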
\begin{theorem}\label{thm-bool-add-conv}
Let $X$ and $Y$ be two self-adjoint operators on a Hilbert space $H$ that are
boolean independent w.r.t.\ a unit vector $\Omega\in H$ and assume that $\Omega$
is cyclic, i.e.\ that
\[
\overline{{\rm alg}\{h(X),h(Y);h\in C_b(\mathbb{R})\}\Omega}=H.
\]
Then $X+Y$ is essentially self-adjoint and the distribution w.r.t.\ $\Omega$ of the closure of $X+Y$ is equal to the boolean convolution of the distributions of $X$ and $Y$ w.r.t.\ $\Omega$, i.e.\
\[
\mathcal{L}(X+Y,\Omega) = \mathcal{L}(X,\Omega)\uplus\mathcal{L}(Y,\Omega).
\]
\end{theorem}
\begin{proof}
Let $\mu=\mathcal{L}(X,\Omega)$, $\nu=\mathcal{L}(Y,\Omega)$.
By Theorem \ref{thm-bool-model} and Lemma \ref{lem-unitary-eq} it is sufficient to consider the case where $X$ and $Y$ are defined as in Proposition \ref{prop-bool-model}. Then Proposition \ref{prop-bool-add-resolvent} shows that $z-X-Y$ admits a bounded inverse for all $z\in\mathbb{C}\backslash\mathbb{R}$ and therefore that ${\rm Ran}\,(z-X-Y)$ is dense. By \cite[Theorem VIII.3]{reed+simon80} this is equivalent to $X+Y$ being essentially self-adjoint.
Using Equation \eqref{eq-bool-add-resolvent}, we can compute the Cauchy transform of the distribution of the closure of $X+Y$. Let $z\in\mathbb{C}^+$, then
\begin{eqnarray*}
G_{X+Y}(z) &=& \langle\Omega, (z-X-Y)^{-1}\Omega\rangle = \left\langle \omega, (z-N_x-N_y)^{-1}\omega\right\rangle \\
&=& \left\langle\left(\begin{array}{c}1 \\ 0 \\ 0 \end{array}\right), \frac{G_\mu(z)G_\nu(z)}{G_\mu(z)+G_\nu(z)-zG_\mu(z)G_\nu(z)}\left(\begin{array}{c} 1 \\ \frac{x-\frac{zG_\mu(z)-1}{G_\mu(z)}}{z-x} \\ \frac{y-\frac{zG_\nu(z)-1}{G_\nu(z)}}{z-y} \end{array}\right)\right\rangle \\
&=& \frac{G_\mu(z)G_\nu(z)}{G_\mu(z)+G_\nu(z)-zG_\mu(z)G_\nu(z)}.
\end{eqnarray*}
Replacing all Cauchy transforms by their reciprocals, this becomes
\[
F_{X+Y}(z) = F_\mu(z)+F_\nu(z)-z=F_{\mu\uplus\nu}(z).
\]
\end{proof}
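For atomic $\mu$ and $\nu$ the theorem can be confirmed numerically: in a finite-dimensional matrix realization of the model (our own construction, in orthonormal coordinates on $L^2(\mu)\oplus L^2(\nu)_0$; all names are ours) the resolvent of $N_x+N_y$ at the vacuum vector reproduces $G_{\mu\uplus\nu}$.

```python
import numpy as np

xs, ps = np.array([-1.0, 2.0]), np.array([0.4, 0.6])           # mu
ys, qs = np.array([0.0, 1.0, 3.0]), np.array([0.2, 0.5, 0.3])  # nu
k, m = len(xs), len(ys)
c, d = np.sqrt(ps), np.sqrt(qs)
B = np.linalg.qr(np.column_stack([d, np.eye(m)[:, :m-1]]))[0][:, 1:]
n = k + m - 1

Nx = np.zeros((n, n)); Nx[:k, :k] = np.diag(xs)
Ny = np.zeros((n, n))
Ny[:k, :k] = (qs @ ys) * np.outer(c, c)
Ny[:k, k:] = np.outer(c, (ys * d) @ B)
Ny[k:, :k] = np.outer(B.T @ (ys * d), c)
Ny[k:, k:] = B.T @ np.diag(ys) @ B
omega = np.zeros(n); omega[:k] = c

z = 0.7 + 1.1j
G_sum = omega @ np.linalg.solve(z*np.eye(n) - Nx - Ny, omega)
Gm, Gn = np.sum(ps/(z - xs)), np.sum(qs/(z - ys))
err = abs(G_sum - Gm*Gn/(Gm + Gn - z*Gm*Gn))
print(err)                           # ~0
```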
\subsection{Multiplicative boolean convolution on
$\mathcal{M}_1(\mathbb{R}_+)$}
Let us first recall Bercovici's definition of the boolean convolution for probability measures on the positive half-line, cf.\ \cite{bercovici04b}.
\begin{definition}\cite{bercovici04b}
Let $\mu$ and $\nu$ be two probability measures on $\mathbb{R}_+$ with transforms $K_\mu$ and $K_\nu$. If the holomorphic function defined by
\begin{equation}\label{eq-def-mult1-bool}
K(z) = \frac{K_\mu(z)K_\nu(z)}{z}
\end{equation}
for $z\in\mathbb{C}\backslash\mathbb{R}_+$ belongs to the class $\mathcal{P}$ introduced in Subsection \ref{nevanlinna}, then the boolean convolution $\lambda=\mu\!\begin{array}{c} {\scriptstyle\times} \\[-12pt]\cup\end{array}\!\nu$ is defined as the unique probability measure $\lambda$ on $\mathbb{R}_+$ with transform $K_\lambda=K$.
\end{definition}
In general the function $K$ defined in Equation \eqref{eq-def-mult1-bool} need not belong to $\mathcal{P}$, and in that case the boolean convolution of $\mu$ and $\nu$ is not defined. Bercovici has shown that for any probability measure $\mu$ on $\mathbb{R}_+$ not concentrated in one point there exists an $n\in\mathbb{N}$ such that the $n$-fold convolution product $\mu^{\!\!{\scriptstyle\begin{array}{c} {\scriptscriptstyle\times} \\[-12pt]\cup\end{array}}\!\! n}$ of $\mu$ with itself is not defined, cf.\ \cite[Proposition 3.1]{bercovici04b}.
This is of course related to the problem that in general the product of two positive operators is not positive. One might hope that taking e.g.\ $\sqrt{X}Y\sqrt{X}$ could lead to a better definition of the multiplicative boolean convolution, since this operator will automatically be positive. We will see below that this approach leads to a convolution that is always defined, but that is not associative.
Let us first introduce the model which we will use for our calculations.
\begin{proposition}
Let $\mu$ and $\nu$ be probability measures on $\mathbb{R}_+$. Define operators $Q_x$ and $Q_y$ on $H=\mathbb{C}\oplus L^2(\mathbb{R}_+,\mu)_0\oplus L^2(\mathbb{R}_+,\nu)_0$ by
\begin{eqnarray*}
{\rm Dom}\,Q_x &=&\left\{\left(\begin{array}{c} \alpha \\ \psi_1 \\ \psi_2 \end{array}\right)\in H ; \int_{\mathbb{R}_+} x^2\big|\psi_1(x)+\alpha\big|^2{\rm d}\mu(x) <\infty\right\}, \\
{\rm Dom}\,Q_y &=&\left\{\left(\begin{array}{c} \alpha \\ \psi_1 \\ \psi_2 \end{array}\right)\in H ; \int_{\mathbb{R}_+} y^2\big|\psi_2(y)+\alpha\big|^2{\rm d}\nu(y) <\infty\right\}, \\
Q_x\left(\begin{array}{c} \alpha \\ \psi_1 \\ \psi_2 \end{array}\right) &=& \left(\begin{array}{c} \int_{\mathbb{R}_+}x\big(\psi_1(x)+\alpha\big){\rm d}\mu(x) \\ x(\psi_1+\alpha) - \int_{\mathbb{R}_+}x\big(\psi_1(x)+\alpha\big){\rm d}\mu(x) \\ \psi_2 \end{array}\right), \\
Q_y\left(\begin{array}{c} \alpha \\ \psi_1 \\ \psi_2 \end{array}\right) &=& \left(\begin{array}{c} \int_{\mathbb{R}_+}y\big(\psi_2(y)+\alpha\big){\rm d}\nu(y) \\ \psi_1 \\ y(\psi_2+\alpha) - \int_{\mathbb{R}_+}y\big(\psi_2(y)+\alpha\big){\rm d}\nu(y) \end{array}\right).
\end{eqnarray*}
Then $Q_x-\mathbf{1}$ and $Q_y-\mathbf{1}$ are boolean independent w.r.t.\ $\omega=\left(\begin{array}{c} 1 \\ 0 \\ 0 \end{array}\right)$, and $\mathcal{L}(Q_x,\omega)=\mu$, $\mathcal{L}(Q_y,\omega)=\nu$.
\end{proposition}
\begin{proof}
It follows from Proposition \ref{prop-bool-model} that $Q_x-\mathbf{1}$ and $Q_y-\mathbf{1}$ are boolean independent.
Note that the functional calculus for $Q_x$ is given by $C_b(\mathbb{R}_+)\ni f\mapsto f(Q_x)\in\mathcal{B}(H)$,
\[
f(Q_x)\left(\begin{array}{c} \alpha \\ \psi_1 \\ \psi_2 \end{array}\right) = \left(\begin{array}{c} \int_{\mathbb{R}_+} f(x)\big(\psi_1(x)+\alpha\big){\rm d}\mu(x) \\ f\cdot(\psi_1+\alpha) -\int_{\mathbb{R}_+} f(x)\big(\psi_1(x)+\alpha\big){\rm d}\mu(x) \\ f(1)\psi_2 \end{array}\right).
\]
Therefore
\[
\langle \omega , f(Q_x)\omega\rangle = \left\langle\left(\begin{array}{c} 1 \\ 0 \\ 0 \end{array}\right), \left(\begin{array}{c} \int_{\mathbb{R}_+}f(x){\rm d}\mu(x) \\ f-\int_{\mathbb{R}_+}f(x){\rm d}\mu(x) \\ 0 \end{array}\right)\right\rangle = \int_{\mathbb{R}_+}f(x){\rm d}\mu(x),
\]
i.e.\ $\mathcal{L}(Q_x,\omega)=\mu$. Similarly $\mathcal{L}(Q_y,\omega)=\nu$.
\end{proof}
\begin{proposition}
Let $X=Q_x$ and $Y=Q_y$ be the operators defined in the preceding proposition, and let $z\in\mathbb{C}\backslash\mathbb{R}$. Then $z-\sqrt{X}Y\sqrt{X}$ has a bounded inverse, given by
\begin{equation}\label{eq-bool-mult-xyx}
\left(z-\sqrt{X}Y\sqrt{X}\right)^{-1}\left(\begin{array}{c}\alpha \\ \psi_1 \\ \psi_2 \end{array}\right) = \left(\begin{array}{c} \beta \\ \varphi_1 \\ \varphi_2 \end{array}\right),
\end{equation}
where
\begin{eqnarray*}
\varphi_1(x) &=& \frac{\psi_1(x) + \beta x + (c_2-c_1)\sqrt{x}-c_3}{z-x}, \\
\varphi_2(y) &=& \frac{\psi_2(y) +c_1y-c_2}{z-y}, \\
\beta &=& \alpha G_\mu(z) + \int_{\mathbb{R}_+}\frac{\psi_1(x)}{z-x}{\rm d}\mu(x) + W_\mu(z)(c_2-c_1), \\
c_1 &=& \frac{\big(zG_\mu(z)-1\big)\int_{\mathbb{R}_+}\frac{\psi_2(y)}{z-y}{\rm d}\nu(y) + G_\nu(z)\left(\alpha W_\mu(z) + \int_{\mathbb{R}_+}\frac{\sqrt{x}\psi_1(x)}{z-x}{\rm d}\mu(x)\right)}{zG_\mu(z)G_\nu(z)-\big(zG_\mu(z)-1\big)\big(zG_\nu(z)-1\big)}, \\
c_2 &=& \frac{zG_\mu(z)\int_{\mathbb{R}_+}\frac{\psi_2(y)}{z-y}{\rm d}\nu(y) + \big(zG_\nu(z)-1\big)\left(\alpha W_\mu(z) + \int_{\mathbb{R}_+}\frac{\sqrt{x}\psi_1(x)}{z-x}{\rm d}\mu(x)\right)}{zG_\mu(z)G_\nu(z)-\big(zG_\mu(z)-1\big)\big(zG_\nu(z)-1\big)}, \\
c_3 &=& z\beta-\alpha,
\end{eqnarray*}
and $W_\mu$ denotes the Cauchy transform of $\sqrt{x}\mu$, i.e.\
\[
W_\mu(z) = G_{\sqrt{x}\mu}(z) = \int_{\mathbb{R}_+}\frac{\sqrt{x}}{z-x}{\rm d}\mu(x).
\]
\end{proposition}
\begin{proof}
This can be checked by applying $z-\sqrt{X}Y\sqrt{X}$ to the right-hand side of Equation \eqref{eq-bool-mult-xyx}. The computations are straightforward, but rather tedious.
\end{proof}
\begin{theorem}
Let $X$ and $Y$ be two positive operators on a Hilbert space $H$ such that $X-\mathbf{1}$ and $Y-\mathbf{1}$ are boolean independent w.r.t.\ a unit vector $\Omega\in H$. Suppose furthermore that $\Omega$ is cyclic, i.e.\ that
\[
\overline{{\rm alg}\{h(X),h(Y);h\in C_b(\mathbb{R}_+)\}\Omega}=H.
\]
Then $\sqrt{X}Y\sqrt{X}$ is essentially self-adjoint, its closure is positive, and the distribution $\lambda=\mathcal{L}(\sqrt{X}Y\sqrt{X},\Omega)$ of its closure w.r.t.\ $\Omega$ is given by
\begin{equation}\label{eq-def-bool-mult-conv1}
G_\lambda(z) = G_X(z) + \frac{\big(W_X(z)\big)^2\big((z-1)G_Y(z)-1\big)}{zG_X(z)G_Y(z)-\big(zG_X(z)-1\big)\big(zG_Y(z)-1\big)}
\end{equation}
for $z\in\mathbb{C}\backslash\mathbb{R}$, where
\[
W_X(z) = \left\langle \Omega, \frac{\sqrt{X}}{z-X}\Omega\right\rangle.
\]
\end{theorem}
\begin{proof}
As in the previous cases, the existence of a bounded inverse of $z-\sqrt{X}Y\sqrt{X}$ for $z\in\mathbb{C}\backslash\mathbb{R}$ implies that $\sqrt{X}Y\sqrt{X}$ is essentially self-adjoint. Furthermore it is clearly positive.
Using Equation \eqref{eq-bool-mult-xyx}, one can calculate the Cauchy transform of the distribution of $\sqrt{X}Y\sqrt{X}$.
\end{proof}
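Equation \eqref{eq-def-bool-mult-conv1} can be checked numerically for atomic measures on $\mathbb{R}_+$, where $Q_x$ and $Q_y$ become matrices. The sketch below (ours; the matrix realization in orthonormal coordinates is our own construction) compares the resolvent of $\sqrt{Q_x}Q_y\sqrt{Q_x}$ at the vacuum vector with the right-hand side of the formula.

```python
import numpy as np

xs, ps = np.array([1.0, 2.0]), np.array([0.4, 0.6])            # mu on R_+
ys, qs = np.array([0.0, 1.0, 3.0]), np.array([0.2, 0.5, 0.3])  # nu on R_+
k, m = len(xs), len(ys)
c, d = np.sqrt(ps), np.sqrt(qs)
B = np.linalg.qr(np.column_stack([d, np.eye(m)[:, :m-1]]))[0][:, 1:]
n = k + m - 1

Qx = np.zeros((n, n))
Qx[:k, :k] = np.diag(xs); Qx[k:, k:] = np.eye(m-1)
Qy = np.zeros((n, n))                # identity on L^2(mu)_0, mult. by y on L^2(nu)
Qy[:k, :k] = np.eye(k) + (qs @ ys - 1.0) * np.outer(c, c)
Qy[:k, k:] = np.outer(c, (ys * d) @ B)
Qy[k:, :k] = np.outer(B.T @ (ys * d), c)
Qy[k:, k:] = B.T @ np.diag(ys) @ B

sqX = np.zeros((n, n))
sqX[:k, :k] = np.diag(np.sqrt(xs)); sqX[k:, k:] = np.eye(m-1)
M = sqX @ Qy @ sqX
omega = np.zeros(n); omega[:k] = c

z = 0.5 + 1.2j
G_lam = omega @ np.linalg.solve(z*np.eye(n) - M, omega)
Gm, Gn = np.sum(ps/(z - xs)), np.sum(qs/(z - ys))
W = np.sum(ps*np.sqrt(xs)/(z - xs))  # the transform W_mu
rhs = Gm + W**2*((z - 1)*Gn - 1)/(z*Gm*Gn - (z*Gm - 1)*(z*Gn - 1))
err = abs(G_lam - rhs)
print(err)                           # ~0
```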
\begin{remark}\label{rem-new-bool-mult-conv}
One can now use Equation \eqref{eq-def-bool-mult-conv1} to define a ``quantum probabilistically motivated'' boolean convolution $\!\begin{array}{c} {\scriptstyle\times} \\[-12pt]\tilde{\cup}\end{array}\!$ for probability measures on $\mathbb{R}_+$. Let $\mu,\nu\in\mathcal{M}_1(\mathbb{R}_+)$, then $\lambda=\mu\!\begin{array}{c} {\scriptstyle\times} \\[-12pt]\tilde{\cup}\end{array}\!\nu$ is defined as the unique probability measure $\lambda$ on $\mathbb{R}_+$ such that
\[
G_\lambda(z) = G_\mu(z) + \frac{\big(W_\mu(z)\big)^2\big((z-1)G_\nu(z)-1\big)}{zG_\mu(z)G_\nu(z)-\big(zG_\mu(z)-1\big)\big(zG_\nu(z)-1\big)}
\]
for $z\in\mathbb{C}^+$.
This new convolution is defined for arbitrary probability measures $\mu$ and $\nu$ on $\mathbb{R}_+$, but it is neither associative nor commutative. For explicit calculations we use again the matrices $X$ and $Y$ introduced in Remark \ref{rem-mon-mult-alt}. The matrices $X-\mathbf{1}$ and $Y-\mathbf{1}$ are boolean independent w.r.t.\ $\Omega$, see also Subsection \ref{subsec-rel-dirac}. Therefore
\begin{eqnarray*}
\big(p \delta_0 + (1-p)\delta_y\big)\!\begin{array}{c} {\scriptstyle\times} \\[-12pt]\tilde{\cup}\end{array}\!\delta_x &=& \mathcal{L}\big(\sqrt{Y}X\sqrt{Y},\Omega\big) \\
&=& \delta_x\kern0.3em\rule{0.04em}{0.52em}\kern-.15em\tilde{\gtrdot}\kern0.2em\big(p\delta_0+(1-p)\delta_y\big) \\
&=& p\delta_0 + (1-p)\delta_{y(xp+1-p)}, \\
\delta_x\!\begin{array}{c} {\scriptstyle\times} \\[-12pt]\tilde{\cup}\end{array}\!\big(p \delta_0 + (1-p)\delta_y\big) &=& \mathcal{L}\big(\sqrt{X}Y\sqrt{X},\Omega\big) \\
&=& \delta_x\kern0.3em\rule{0.04em}{0.52em}\kern-.35em\gtrdot\big(p\delta_0+(1-p)\delta_y\big) \\
&=& \frac{1-p}{xp+1-p}\delta_0 + \frac{xp}{xp+1-p}\delta_{y(xp+1-p)}
\end{eqnarray*}
for $x,y>0$, $0<p<1$. It is now easy to find explicit examples such that $\mu\!\begin{array}{c} {\scriptstyle\times} \\[-12pt]\tilde{\cup}\end{array}\!\nu\not=\nu\!\begin{array}{c} {\scriptstyle\times} \\[-12pt]\tilde{\cup}\end{array}\!\mu$ and $\lambda\!\begin{array}{c} {\scriptstyle\times} \\[-12pt]\tilde{\cup}\end{array}\!(\mu\!\begin{array}{c} {\scriptstyle\times} \\[-12pt]\tilde{\cup}\end{array}\!\nu)\not=(\lambda\!\begin{array}{c} {\scriptstyle\times} \\[-12pt]\tilde{\cup}\end{array}\!\mu)\!\begin{array}{c} {\scriptstyle\times} \\[-12pt]\tilde{\cup}\end{array}\!\nu$.
\end{remark}
\subsection{Multiplicative boolean convolution on $\mathcal{M}_1(\mathbb{T})$}\label{subsec-mult-bool-T}
For completeness we recall the results of \cite{franz04} for the multiplicative boolean convolution on $\mathcal{M}_1(\mathbb{T})$.
\begin{definition}\cite{franz04}
Let $\mu$ and $\nu$ be two probability measures on the unit circle $\mathbb{T}$ with transforms $K_\mu$ and $K_\nu$. Then the multiplicative boolean convolution $\lambda=\mu\!\begin{array}{c} {\scriptstyle\times} \\[-12pt]\cup\end{array}\!\nu$ is defined as the unique probability measure on $\mathbb{T}$ with transform $K_\lambda$ given by
\[
K_\lambda(z) = \frac{K_\mu(z)K_\nu(z)}{z}
\]
for $z\in\mathbb{D}$.
\end{definition}
It is easy to deduce from Subsection \ref{nevanlinna} that the multiplicative boolean convolution on $\mathcal{M}_1(\mathbb{T})$ is well-defined. It is associative, commutative, $*$-weakly continuous in both arguments, but not affine.
\begin{theorem}\cite[Theorem 2.2]{franz04}
Let $U$ and $V$ be two unitary operators on a Hilbert space $H$, $\Omega\in H$ a unit vector, and assume furthermore that $U-\mathbf{1}$ and $V-\mathbf{1}$ are boolean independent w.r.t.\ $\Omega$. Then the products $UV$ and $VU$ are also unitary and their distribution w.r.t.\ $\Omega$ is equal to the multiplicative boolean convolution of the distributions of $U$ and $V$, i.e.\
\[
\mathcal{L}(UV,\Omega) = \mathcal{L}(VU,\Omega) = \mathcal{L}(U,\Omega)\!\begin{array}{c} {\scriptstyle\times} \\[-12pt]\cup\end{array}\!\mathcal{L}(V,\Omega).
\]
\end{theorem}
\section{Some Relations between Free, Monotone, and Boolean Convolutions}\label{sec-relations}
\subsection{Decomposing free convolution products into monotone or boolean convolution products}
The theorems by Maassen, Chistyakov and G\"otze that we cited in Subsection \ref{free-conv} have an interesting formulation purely in terms of measures. We will consider only the additive case here, but similar results exist also for the two multiplicative free convolutions.
\begin{theorem}\label{thm-free-mon-bool-conv}
Let $\mu$ and $\nu$ be two probability measures on the real line. Then there exist two unique probability measures $\zeta_1$ and $\zeta_2$ on the real line such that
\[
\mu\boxplus\nu=\mu\triangleright\zeta_1 = \nu\triangleright\zeta_2 = \zeta_1\uplus\zeta_2.
\]
\end{theorem}
\begin{proof}
Apply Theorem \ref{thm-parallelogramm} to $F_1=F_\mu$ and $F_2=F_\nu$, and take for $\zeta_1$ and $\zeta_2$ the probability measures on $\mathbb{R}$ with reciprocal Cauchy transforms $Z_1$ and $Z_2$, respectively.
\end{proof}
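A worked example (ours): for $\mu=\nu=\frac{1}{2}(\delta_{-1}+\delta_{1})$ the free convolution $\mu\boxplus\mu$ is the arcsine law on $[-2,2]$ with $F_{\mu\boxplus\mu}(z)=\sqrt{z^2-4}$, and by symmetry $\zeta_1=\zeta_2=:\zeta$ with $F_\zeta(z)=\frac{1}{2}\big(z+\sqrt{z^2-4}\big)$. Both decompositions can be verified on the level of reciprocal Cauchy transforms:

```python
import numpy as np

z = 0.9 + 1.1j                       # a point in the upper half plane
s = np.sqrt(z*z - 4)                 # F_{mu boxplus mu}(z), principal branch
Fz = (z + s) / 2                     # F_zeta(z)

F_mu = lambda w: w - 1/w             # F of the symmetric Bernoulli law
err_mon = abs(F_mu(Fz) - s)          # monotone: F_mu o F_zeta = F_{mu boxplus mu}
err_bool = abs(2*Fz - z - s)         # boolean:  F_zeta + F_zeta - z = same
print(err_mon, err_bool)             # ~0 ~0
```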
\begin{remark}
The existence of unique probability measures $\zeta_1,\zeta_2$ such that $\mu\boxplus\nu=\mu\triangleright\zeta_1$ and $\mu\boxplus\nu=\nu\triangleright\zeta_2$ follows also from analytic subordination. E.g., $\zeta_1$ is obtained from the Markov kernel in \cite[Theorem 3.1]{biane98} by setting $x=0$.
\end{remark}
Recently, Accardi, Lenczewski, and Sa{\l}apata have given a similar result for products of graphs. Given two graphs $\mathcal{G}_1$ and $\mathcal{G}_2$, they gave an explicit construction of two more graphs $\mathcal{B}_1$ and $\mathcal{B}_2$ such that the free product of $\mathcal{G}_1$ and $\mathcal{G}_2$, the star product of $\mathcal{B}_1$ and $\mathcal{B}_2$, and the comb products of $\mathcal{G}_1$ and $\mathcal{B}_2$, or $\mathcal{G}_2$ and $\mathcal{B}_1$ are all isomorphic. For details, see \cite{accardi+lenczewski+salapata06}.
Lenczewski \cite{lenczewski06} has also given a more explicit version of Theorem \ref{thm-free-mon-bool-conv}. Given two bounded free operators $X$ and $Y$, he decomposes their sum as $X+Y=X_0+Z$ such that $X_0$ has the same distribution as $X$ and $X_0$ and $Z$ are monotonically independent. Let $H_X$ denote the subspace generated by ${\rm alg}(X)$ from the vacuum vector, $H_X=\overline{{\rm alg}(X)\Omega}$ and $P_X$ the orthogonal projection onto $H_X$. Then $X_0$ is given by $X_0=XP_X$ and $Z$ by $Z=X(\mathbf{1}-P_X)+Y$.
Theorem \ref{thm-free-mon-bool-conv} has also an interesting consequence for independent increment processes.
\begin{corollary}
Let $T>0$ and $(\mu_{st})_{0\le s\le t \le T}$ be a free convolution hemi-group in $\mathcal{M}_1(\mathbb{R})$, i.e.\ a two-parameter family of probability measures on the real line such that
\[
\mu_{st}\boxplus\mu_{tu}=\mu_{su}
\]
for all $0\le s\le t\le u\le T$. Then there exists a unique monotone convolution hemi-group $(\zeta_{st})_{0\le s\le t\le T}$ such that $\mu_{0t}=\zeta_{0t}$ for all $0\le t\le T$.
\end{corollary}
\begin{proof}
Let $0\le s\le t\le T$ and define $\zeta_{st}$ as the unique probability measure such that
\[
\mu_{0t}=\mu_{0s}\boxplus\mu_{st} = \mu_{0s}\triangleright\zeta_{st}.
\]
Clearly, we have $\zeta_{0t}=\mu_{0t}$ for all $0\le t\le T$. To check that the $\zeta_{st}$ form a monotone convolution hemi-group, rewrite $\mu_{0u}$ in two ways,
\begin{eqnarray*}
\mu_{0u} &=& \mu_{0s}\boxplus \mu_{su} = \mu_{0s}\triangleright\zeta_{su} \\
&=&
\mu_{0t}\boxplus\mu_{tu} = \mu_{0t}\triangleright\zeta_{tu} = (\mu_{0s}\boxplus\mu_{st})\triangleright\zeta_{tu} \\ &=& (\mu_{0s}\triangleright\zeta_{st})\triangleright\zeta_{tu} = \mu_{0s} \triangleright(\zeta_{st}\triangleright\zeta_{tu})
\end{eqnarray*}
and therefore by uniqueness $\zeta_{st}\triangleright\zeta_{tu}=\zeta_{su}$.
\end{proof}
Since independent increment processes are uniquely determined by the hemi-group of their marginal distributions, this induces a map from free independent increment processes to monotone independent increment processes. Under this map the free additive L\'evy processes of the second kind introduced in \cite{biane98} correspond exactly to stationary monotone increment processes.
\subsection{Monotone and boolean convolutions involving Dirac measures}\label{subsec-rel-dirac}
Let $X$ be a normal operator on some Hilbert space $H$. The distribution of $X$ w.r.t.\ some unit vector $\Omega\in H$ is concentrated in one point if and only if the vector state acts as a homomorphism on the algebra generated by $X$, or equivalently, if $\Omega$ is an eigenvector of $X$. Let $X$ and $Y$ be two normal operators and assume $\mathcal{L}(X,\Omega)=\delta_x$ for some $x\in\mathbb{C}$. Then $X$ and $Y$ are monotonically independent if and only if they are boolean independent. Therefore we get the following relations for monotone and boolean convolutions,
\begin{eqnarray*}
\delta_x\triangleright\mu &=& \delta_x\uplus\mu \qquad\mbox{ for } x\in\mathbb{R}, \quad \mu\in\mathcal{M}_1(\mathbb{R}), \\
\delta_x\kern0.3em\rule{0.04em}{0.52em}\kern-.35em\gtrdot\mu &=& \delta_x\!\begin{array}{c} {\scriptstyle\times} \\[-12pt]\cup\end{array}\!\mu \qquad\mbox{ for } x\in\mathbb{T}, \quad \mu\in\mathcal{M}_1(\mathbb{T}), \\
\delta_x\kern0.3em\rule{0.04em}{0.52em}\kern-.35em\gtrdot\mu &=& \delta_x\!\begin{array}{c} {\scriptstyle\times} \\[-12pt]\tilde{\cup}\end{array}\!\mu \qquad\mbox{ for } x\in\mathbb{R}_+, \quad \mu\in\mathcal{M}_1(\mathbb{R}_+), \\
\delta_x\kern0.3em\rule{0.04em}{0.52em}\kern-.15em\tilde{\gtrdot}\kern0.2em\mu &=& \mu\!\begin{array}{c} {\scriptstyle\times} \\[-12pt]\tilde{\cup}\end{array}\!\delta_x \qquad\mbox{ for } x\in\mathbb{R}_+, \quad \mu\in\mathcal{M}_1(\mathbb{R}_+).
\end{eqnarray*}
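The first of these identities can also be checked directly on the level of reciprocal Cauchy transforms, using the standard facts that monotone convolution corresponds to composition, $F_{\mu\triangleright\nu}=F_\mu\circ F_\nu$, and that the self-energies $z-F_\mu(z)$ add under $\uplus$. Since $F_{\delta_x}(z)=z-x$,
\begin{eqnarray*}
F_{\delta_x\triangleright\mu}(z) &=& F_{\delta_x}\bigl(F_\mu(z)\bigr) = F_\mu(z)-x, \\
F_{\delta_x\uplus\mu}(z) &=& z-\bigl(z-F_{\delta_x}(z)\bigr)-\bigl(z-F_\mu(z)\bigr) = F_\mu(z)-x,
\end{eqnarray*}
so both convolutions indeed coincide when the first argument is a real atom $\delta_x$.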
{}From Equation \eqref{eq-int-formula} we now get, e.g.,
\[
\mu\triangleright\nu = \int_\mathbb{R} \delta_x\uplus\nu\,{\rm d}\mu(x)
\]
for $\mu,\nu\in\mathcal{M}_1(\mathbb{R})$, i.e.\ the monotone convolution can be considered as a linearization of the boolean convolution w.r.t.\ the first argument.
\section*{Acknowledgements}
This work was completed while I was visiting the Graduate School of Information Sciences of Tohoku University as Marie-Curie fellow. I would like to thank Professors Nobuaki Obata, Fumio Hiai, and the other members of the GSIS for their hospitality. I am also indebted to an anonymous referee for suggesting important corrections and improvements.
\section{Acknowledgment}
This work has been supported by the Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior National Business Center contract number D11-PC20066. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DOI/NBC, or the U.S. Government.
\section{Conclusion}
\label{sec:conclusion}
\noindent Driven by the conjecture that appearance, motion and occluding boundaries in monocular videos afford an estimate of scene depth, we have proposed a novel algorithm for depth estimation in videos. Our approach does not rely on structure-from-motion approaches that require cameras to translate and foreground movements to be minimal. Of course, a promising future work direction is to incorporate robust methods for structure-from-motion for camera and scene estimation within our framework. We have employed learned mappings from appearance and motion of spatio-temporal patches in videos to their corresponding depth estimates using ground truth data collected as co-registered videos and 3D point clouds. These learned mappings are incorporated into an MRF framework in which data terms are derived from these mappings while smoothness across spatio-temporal segments is formulated as the inter-segment constraint. Smoothness is allowed to be violated across occluding boundaries. Evidence for occluding boundaries is computed through edgelets and motion signatures within and across neighboring segments. Quantitative results on static image datasets such as Make3D demonstrate that we are at least as competitive as state-of-the-art approaches. Since there are no standard 3D video datasets, we have shown results on a newly captured dataset with co-registered videos and 3D point clouds.
\vspace{-0.5cm}
\section{Data Collection}
\label{sec:data}
\paragraph{Existing Datasets:}
We propose an algorithm to extract temporal depth information in outdoor videos using scene structure (geometric context and occlusion boundaries). For training and evaluating our algorithm, we require a video dataset with depth ground truth. Unfortunately, such a dataset is not available. The NYU RGB-D dataset provides depth ground truth for indoor scenes collected with a Kinect sensor \cite{Silberman:ECCV12}. Karsch \emph{et al}\bmvaOneDot \cite{karsch2012depth} developed a ground truth dataset for indoor videos using Kinect, but Kinect cannot produce ground truth for outdoor videos due to IR interference. The KITTI dataset provides outdoor videos with depth ground truth from a velodyne \cite{Geiger2013IJRR}. However, the velodyne is mounted on a car to maximize the depth accuracy along the road, so most of the trees, buildings and other scene structures do not have any depth information. Since our study is focused on extracting depth in videos combined with scene description (geometric context and occlusion boundaries), the missing ground truth depth of the whole scene makes the KITTI dataset unsuitable for our study. The only existing dataset with full scene ground truth depth is the Make3d dataset \cite{saxena2009make3d}, which is limited to images. Therefore, for depth estimation in videos, we have developed a video dataset with ground truth depth.
\paragraph{Video Depth Dataset for Scene Understanding:}
We collected a video depth dataset from a robot mounted with an RGB camera and a velodyne HDL-32E rotating 3D laser scanner. The HDL-32E velodyne has 32 laser/detector pairs and a $+10.67$ to $-30.67$ degree vertical field of view at a $10$Hz frame rate. This results in a total of $700K$ laser points per second over a range of $80$m. We estimate the velodyne to camera transformation by the method proposed by Unnikrishnan \emph{et al}\bmvaOneDot \cite{unnikrishnan2005fast}. Now we can project a 3D point $\mathbf{x}$ from velodyne coordinates to image coordinates $\mathbf{y}$ using
\begin{equation} \label{eq:projection}
\mathbf{y}= \mathbf{K} \mathbf{T_{velo}^{cam}} \mathbf{x}
\end{equation}
where $\mathbf{K}$ is the intrinsic matrix and $\mathbf{T_{velo}^{cam}}$ is the transformation matrix from velodyne coordinates to camera coordinates, given as
$$
\mathbf{T_{velo}^{cam}}=\begin{bmatrix}
\mathbf{R_{velo}^{cam}} & \mathbf{t_{velo}^{cam}} \\
0 & 1\\
\end{bmatrix}
$$
where $\mathbf{R_{velo}^{cam}}$ and $\mathbf{t_{velo}^{cam}}$ are the rotation matrix and the translation vector from velodyne to camera coordinates.
Using the transformation given in Equation \eqref{eq:projection}, we project velodyne laser points onto a video. However, since the velodyne laser points have lower resolution than the video, we leverage video segmentation to generate cleaned depth maps. We compute the average depth of all the laser points projected onto a spatio-temporal region, over a temporal window of 5 frames, to generate the ground truth depth.
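The projection of Equation \eqref{eq:projection} followed by the per-segment depth averaging can be sketched in a few lines of numpy. The calibration values $\mathbf{K}$, $\mathbf{R}$ and $\mathbf{t}$ below are made-up placeholders for illustration only; in practice they come from the calibration procedure of Unnikrishnan \emph{et al}\bmvaOneDot:

```python
import numpy as np

# Hypothetical calibration for illustration; real values come from calibration.
K = np.array([[700.0,   0.0, 320.0],
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])
T_velo_cam = np.eye(4)                 # R = I plus a small assumed offset t
T_velo_cam[:3, 3] = [0.0, 0.0, 0.1]

def project_points(X_velo):
    """Map Nx3 velodyne points to pixel coordinates and camera depths (y = K T x)."""
    X_h = np.hstack([X_velo, np.ones((len(X_velo), 1))])   # homogeneous coords
    X_cam = (T_velo_cam @ X_h.T).T[:, :3]                  # camera frame
    y = (K @ X_cam.T).T
    return y[:, :2] / y[:, 2:3], X_cam[:, 2]               # pixels, depths

def segment_mean_depth(seg_ids, depths, n_segments):
    """Average laser depth over each spatio-temporal segment id."""
    sums = np.bincount(seg_ids, weights=depths, minlength=n_segments)
    counts = np.bincount(seg_ids, minlength=n_segments)
    return np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)
```

Segments that receive no laser returns are marked NaN rather than zero, so that missing ground truth is not confused with zero depth.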
\section{Introduction and Approach}
\label{sec:approach}
\vspace{-0.3cm}
Methods exploiting visual and contextual cues for depth can be used to provide an additional source of depth information to the structure from motion or multi-view stereo based depth estimation systems.
In this paper, we focus on texture features, geometric context, and motion boundary based monocular cues, along with co-planarity, connectivity and spatio-temporal consistency constraints, to predict depth in videos. We assume that a scene can be decomposed into planes, each with its own planar parameters. We over-segment a video into spatio-temporal regions and compute depth cues from each region along with scene structure from geometric contexts. These depth cues are used to train and predict depth from features. However, such appearance to depth mappings are typically noisy and ambiguous. We incorporate the independent features-to-depth mapping of each spatio-temporal region within an MRF framework that encodes constraints from scene layout properties of co-planarity, connectivity and occlusions. To model the connectivity and co-planarity in a scene, we explicitly learn occlusion boundaries in videos. To further remove the inconsistencies from temporal depth prediction, we apply a sliding window to smooth the depth prediction. Our approach does not require camera translation or a largely rigid scene for depth estimation. Moreover, it provides a source of depth information that is largely complementary to triangulation based depth estimation methods~\cite{saxena2007depth}.
\noindent\textbf{The primary contributions of our method to extract depth from videos are:}
\begin{list}{$\bullet$}
{\setlength{\leftmargin}{2ex}\setlength{\labelsep}{.5ex}\setlength{\parsep}{.4ex plus .2ex minus .1ex}\setlength{\itemsep}{0ex plus .1ex}\setlength{\topsep}{0ex plus .1ex}}
\item Adoption of a learning and inference approach that explicitly models appearance to geometry mappings and piecewise scene smoothness;
\item Learning and estimating occlusion boundaries in videos and utilizing these to constrain smoothness across the scene;
\item No requirement of a translating camera or a wide baseline for depth estimation;
\item An algorithm for video depth estimation that is complementary to traditional structure from motion approaches, and that can incorporate these approaches to compute depth estimates for natural scenes;
\end{list}
\vspace{-0.3cm}
\section{Experiments and Results}
\label{sec:result}
\vspace{-0.3cm}
\noindent We perform extensive experiments on video depth data to evaluate our algorithm. We perform 5-fold cross-validation over 36 videos ($\sim$ 6400 frames). We compute the average log-error $|\log d - \log {\hat{d}}|$ and the average relative error $|\frac{ d - \hat{d}}{d}|$ to report the accuracy of our method. We achieve an average log-error of $0.153$ and an average relative error of $0.44$ (Table \ref{Table:result}). Figure \ref{fig:vid_depth_result} shows some example scenes from our dataset with ground truth and predicted depth.
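The two evaluation metrics can be computed directly; a minimal numpy sketch (assuming base-10 logarithms, as the $log10$ column of Table \ref{Table:result} suggests):

```python
import numpy as np

def depth_errors(d_true, d_pred):
    """Average log-error |log d - log d_hat| and relative error |(d - d_hat)/d|.
    Base-10 logarithms are assumed, matching the log10 column of the tables."""
    d_true = np.asarray(d_true, float)
    d_pred = np.asarray(d_pred, float)
    log_err = np.mean(np.abs(np.log10(d_true) - np.log10(d_pred)))
    rel_err = np.mean(np.abs((d_true - d_pred) / d_true))
    return log_err, rel_err
```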
Our approach for depth estimation can also be applied to images. We applied our algorithm over the publicly available Make3d depth image dataset \cite{saxena2009make3d}. Table \ref{table:make3d} gives the comparison of the single image variant of our approach with the state of the art, and we achieve competitive results. It should be noted that our algorithm depends on occlusion boundary detection and geometric context (for which motion based features are important) and is not optimized to extract depth from single images.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{vid_depth_examples_eccv_2.pdf}
\vspace{-0.3cm}
\caption{Examples of video scenes, ground truth, and depth predicted by our method. The legend shows the depth range from 0m (blue) to 80m (red). }
\label{fig:vid_depth_result}
\vspace{-0.3cm}
\end{figure}
\begin{table}
\begin{minipage}[t]{0.45\linewidth}
\centering
\small
\begin{tabular}{|c|c|c|}\hline
Features & $log10$ & rel-depth \\\hline
ALL & 0.153 & 0.44\\\hline
App.+Flow & 0.176 & 0.533\\\hline
Appearance & 0.175 & 0.512\\\hline
\end{tabular}
\caption{Performance of our algorithm on the video dataset; combining appearance, flow, and surface layout features gives the best accuracy.}
\label{Table:result}
\end{minipage}
\hspace{0.2cm}
\begin{minipage}[t]{0.5\linewidth}
\centering
\small
\begin{tabular}{|c|c|c|}\hline
Algorithm & $log10$ & rel-$log$ \\\hline
SCN \cite{saxena2005learning} & 0.198 & 0.530\\\hline
HEH \cite{HoiemIJCV2007} & 0.320 & 1.423\\\hline
Baseline \cite{saxena2009make3d} & 0.334 & 0.516\\
PP-MRF \cite{saxena2009make3d} & 0.187 & 0.370\\
Depth Transfer \cite{karsch2012depth} & 0.148 & 0.362 \\\hline
Semantic Labels \cite{liu2010single} & 0.148 & 0.379 \\\hline
\textbf{*Geom. Context} & & \\
\textbf{Occl. Bound.} & \textbf{0.159} & \textbf{0.386} \\\hline
\end{tabular}
\caption{Our approach can also be applied to images. We apply it to the Make3d depth image dataset \cite{saxena2009make3d}.}
\label{table:make3d}
\end{minipage}
\vspace{-0.5cm}
\end{table}
\subsection{Features}
\label{sec:feats}
\vspace{-0.2cm}
\noindent Based on the observation that appearance of surface patches in the natural world varies with depth in an imaged scene, we compute a number of appearance and motion based features for each spatio-temporal region obtained by the video segmentation. These features are used to learn a mapping from features to depth as described below.
\vspace{-0.3cm}
\paragraph{Appearance:}
To capture the appearance of a segment, we compute color, 2D location, and texture features for the segment. The color of a segment is represented by the mean RGB and HSV values within the segment. We apply the texture filters proposed by Leung \emph{et al}\bmvaOneDot \cite{LMTexture} and compute filter responses binned into $15$ bins. We assume that the camera is approximately parallel (horizontal) to the ground; therefore, the locations of vertical regions can provide useful information about depth. We compute the mean normalized $y$-coordinate and the vertical distance from the horizon as location features for a segment.
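A minimal sketch of the location and mean-color part of this feature vector (texture-filter responses and HSV means are omitted for brevity; the function name and argument layout are illustrative, not from the paper):

```python
import numpy as np

def appearance_features(rgb_pixels, ys, height, horizon_y):
    """Per-segment sketch: mean RGB colour, mean normalised y-coordinate,
    and vertical distance from the horizon line (all in normalised units)."""
    mean_rgb = np.asarray(rgb_pixels, float).reshape(-1, 3).mean(axis=0)
    mean_y = np.mean(ys) / height
    return mean_rgb, mean_y, mean_y - horizon_y / height
```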
\vspace{-0.3cm}
\paragraph{Motion:}
Motion provides useful information for depth in the scene, \emph{e.g}\bmvaOneDot, segments closer to the camera exhibit more motion than segments far from the camera. To include motion features in our framework, we compute an $8$-bin histogram of dense optical flow \cite{wedel2009improved}, and the mean flow within each segment. We also compute histograms of the flow derivatives along $x$ and $y$. To account for different spatial scales, we compute derivatives for different kernel sizes of Sobel filters ($3$, $5$ and $7$). In addition, we compute the optical flow histograms from reference frame $I_j$ to $I_{j-1}$, $I_{j-3}$, and $I_{j-5}$ to account for temporal smoothing.
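The $8$-bin flow histogram and the mean flow can be sketched as follows; the magnitude weighting is an assumption of this sketch, as the paper does not specify the binning in detail:

```python
import numpy as np

def flow_histogram(flow, n_bins=8):
    """8-bin orientation histogram of a dense flow field (HxWx2),
    magnitude-weighted (an assumption), plus the mean flow vector."""
    u, v = flow[..., 0].ravel(), flow[..., 1].ravel()
    mag = np.hypot(u, v)
    ang = np.mod(np.arctan2(v, u), 2 * np.pi)
    hist, _ = np.histogram(ang, bins=n_bins, range=(0, 2 * np.pi), weights=mag)
    if hist.sum() > 0:
        hist = hist / hist.sum()       # normalise to a distribution
    return hist, np.array([u.mean(), v.mean()])
```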
\vspace{-0.3cm}
\paragraph{Geometric Context:}
\label{sec:geometric}
Understanding the geometric context of a scene can provide useful information for depth estimation, \emph{e.g}\bmvaOneDot, the sky is farthest and the ground is horizontal. An object at a distance will have less motion than an object closer to the camera. Moreover, local appearance and motion features depend on the geometric class, \emph{e.g}\bmvaOneDot, the appearance of a tree may vary differently with distance than the appearance of a building; therefore, we include the geometric layout of each region in our algorithm. Specifically, we estimate the geometric context in a video scene by the method proposed by Raza \emph{et al}\bmvaOneDot \cite{Raza2013GCFV}, which extends \cite{HoiemGeometric2005} from images to videos. They decompose a video into geometric classes given as sky, ground, solid, porous, and moveable objects. Their geometric context from video algorithm predicts the confidence for each pixel belonging to a geometric class. We compute the mean confidence of all the pixels in a spatio-temporal region produced by the video segmentation and append it to the appearance and motion features.
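Pooling the per-pixel class confidences into per-segment means is a simple scatter-average; a numpy sketch (array shapes are assumptions of this sketch):

```python
import numpy as np

def segment_geometric_context(conf_map, seg_map, n_segments):
    """Mean per-class geometric-context confidence over each segment.
    conf_map: HxWxC per-pixel class confidences (e.g. sky, ground, solid,
    porous, movable); seg_map: HxW spatio-temporal segment ids."""
    flat = seg_map.ravel()
    counts = np.maximum(np.bincount(flat, minlength=n_segments), 1)
    n_classes = conf_map.shape[-1]
    return np.stack(
        [np.bincount(flat, weights=conf_map[..., c].ravel(),
                     minlength=n_segments) / counts
         for c in range(n_classes)], axis=1)
```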
\subsection{Features to Depth Mapping}
\label{sec:unary_depth}
We use random forest regression to estimate the depth for each 2D region in a frame. Random forests provide feature selection and estimate the out-of-bag feature importance \cite{breiman2001random}. Moreover, random forests have been shown to perform well on large datasets with large numbers of variables. We train random forests with $105$ trees, $11$ random features per node, and a maximum depth of $35$ nodes per tree. In training the random forests, we include all the temporal variations of a 2D segment within its spatio-temporal region provided by the video segmentation. We train and test the random forests on $\log$ depth rather than depth to model the depth of nearby regions more accurately.
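The effect of regressing $\log$ depth rather than raw depth is that errors become relative instead of absolute, so nearby regions are modelled more accurately. A toy numpy illustration (a constant least-squares predictor, not the random forest itself): fit to raw depth it is the arithmetic mean, dominated by far points, while fit to $\log$ depth it is the geometric mean.

```python
import numpy as np

# Toy illustration of why the regressor is trained on log depth:
depths = np.array([2.0, 80.0])              # a near and a far region (metres)

raw_fit = depths.mean()                     # arithmetic mean, 41.0 m
log_fit = 10 ** np.mean(np.log10(depths))   # geometric mean, ~12.6 m
```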
\section{Introduction}
\label{sec:intro}
\noindent Human vision is capable of inferring the 3D structure of a scene even when observed in monocular imagery. However, estimating 3D structure from videos of general scenes still remains a challenging problem in computer vision.
A large majority of prior work on 3D structure extraction from a monocular camera, like structure from motion and multi-view stereo relies on pose estimation and triangulation to estimate depth maps \cite{zhang2009consistent,newcombe2010live}. However, most of these geometric methods for depth estimation require camera translation, absence of large independently moving foreground objects, and rigid textured image regions that can be matched across views. These assumptions are not true in many consumer videos, which tend to have people moving around in the foreground, small translations and potentially non-rigid background (see Figure \ref{fig:intro}).
\begin{figure}[t!]
\centering
\includegraphics[width=0.4\textwidth]{depth_intro_1.pdf}
\caption{A video scene (left) and the depth map (right) }
\label{fig:intro}
\end{figure}
In this paper, instead of using triangulation based constraints, we pose the problem of depth estimation in terms of statistical learning and inference. Rather than explicitly triangulating points from the video, we propose to learn a mapping of monocular cues to depth. There are numerous monocular cues such as texture variations, occlusion boundaries, defocus, color/haze, surface layout, and size/shape of known objects, that contain useful and important cues for depth information.
Specifically, we focus on texture features, geometric context, and motion boundary based monocular cues, along with co-planarity, connectivity and spatio-temporal consistency constraints, to predict depth in videos. We assume that a scene can be decomposed into planes, each with its own planar parameters. We over-segment a video into spatio-temporal regions and compute depth cues from each region along with scene structure from geometric contexts. These depth cues are used to train and predict depth from features. However, such appearance to depth mappings are typically noisy and ambiguous. We incorporate the independent features-to-depth mapping of each spatio-temporal region within an MRF framework that encodes constraints from $3D$ scene properties of co-planarity, connectivity and occlusions. To model the connectivity and co-planarity in a scene, we explicitly learn occlusion boundaries in videos. To further remove the inconsistencies from temporal depth prediction, we apply a sliding window to smooth the depth prediction. Our approach does not require camera translation or a largely rigid scene for depth estimation. Moreover, it provides a source of depth information that is largely complementary to triangulation based depth estimation methods~\cite{saxena2007depth}.
\noindent\textbf{The primary technical contributions of our method to extract depth from videos are:}
\begin{list}{$\bullet$}
{\setlength{\leftmargin}{2ex}\setlength{\labelsep}{.5ex}\setlength{\parsep}{.4ex plus .2ex minus .1ex}\setlength{\itemsep}{0ex plus .1ex}\setlength{\topsep}{0ex plus .1ex}}
\item Adoption of a learning and inference approach that explicitly models appearance to geometry mappings and piecewise scene smoothness;
\item Learning and estimating occlusion boundaries in videos and utilizing these to constrain smoothness across the scene;
\item An algorithm for video depth estimation that is complementary to traditional structure from motion approaches, and that can incorporate these approaches to compute depth estimates for natural scenes of a much wider variety than SfM approaches alone can handle;
\item A new co-registered video and depth dataset with over $38$ outdoor videos ($\sim$ $6400$ frames) and pixel registered depth maps.
\end{list}
We also undertake a detailed analysis of contribution of monocular features in depth estimation and evaluation on images and videos.
\section{Introduction}
\label{sec:intro}
\noindent Human vision is capable of inferring the depth of a scene even when observed in monocular imagery. However, estimating depth from videos of general scenes still remains a challenging problem in computer vision.
A large majority of prior work on 3D structure extraction from a monocular camera, like structure from motion and multi-view stereo, relies on pose estimation and triangulation to estimate depth maps \cite{zhang2009consistent,newcombe2010live} (see Figure \ref{fig:intro}). In this paper, instead of using triangulation based constraints, we propose to learn a mapping of monocular cues to depth using statistical learning and inference techniques. There are numerous monocular cues such as texture variations, occlusion boundaries, defocus, color/haze, surface layout, and size/shape of known objects, that contain useful and important cues for depth information. These sources of depth information are complementary to triangulation. Therefore, methods exploiting such visual and contextual cues for depth can be used to provide an additional source of depth information to structure from motion or multi-view stereo based depth estimation systems.
\begin{figure}
\begin{minipage}[t]{0.47\linewidth}
\centering
\includegraphics[width=\textwidth]{depth_intro_2.pdf}
\caption{Video scene (left) and the ground truth depth from LiDAR (right).}
\label{fig:intro}
\end{minipage}
\hspace{0.5cm}
\begin{minipage}[t]{0.47\linewidth}
\centering
\includegraphics[width=0.9\linewidth]{vid_seg_1.pdf}
\caption{Video scenes and spatio-temporal segmentation using the method proposed by Grundmann \emph{et al}\bmvaOneDot \cite{MatthiasSegmentation}.}
\label{fig:vidSeg}
\end{minipage}
\end{figure}
In this paper, we focus on texture features, geometric context, and motion boundary based monocular cues, along with co-planarity, connectivity and spatio-temporal consistency constraints, to predict depth in videos. We assume that a scene can be decomposed into planes, each with its own planar parameters. We over-segment a video into spatio-temporal regions and compute depth cues from each region along with scene structure from geometric contexts. These depth cues are used to train and predict depth from features. However, such appearance to depth mappings are typically noisy and ambiguous. We incorporate the independent features-to-depth mapping of each spatio-temporal region within an MRF framework that encodes constraints from scene layout properties of co-planarity, connectivity and occlusions. To model the connectivity and co-planarity in a scene, we explicitly learn occlusion boundaries in videos. To further remove the inconsistencies from temporal depth prediction, we apply a sliding window to smooth the depth prediction. Our approach does not require camera translation or a largely rigid scene for depth estimation. Moreover, it provides a source of depth information that is largely complementary to triangulation based depth estimation methods~\cite{saxena2007depth}.
\noindent\textbf{The primary contributions of our method to extract depth from videos are:}
\begin{list}{$\bullet$}
{\setlength{\leftmargin}{2ex}\setlength{\labelsep}{.5ex}\setlength{\parsep}{.4ex plus .2ex minus .1ex}\setlength{\itemsep}{0ex plus .1ex}\setlength{\topsep}{0ex plus .1ex}}
\item Adoption of a learning and inference approach that explicitly models appearance to geometry mappings and piecewise scene smoothness;
\item Learning and estimating occlusion boundaries in videos and utilizing these to constrain smoothness across the scene;
\item No requirement of a translating camera or a wide baseline for depth estimation;
\item An algorithm for video depth estimation that is complementary to traditional structure from motion approaches, and that can incorporate these approaches to compute depth estimates for natural scenes;
\item We present a novel dataset providing ground truth depth for over 38 outdoor videos ($\sim$ 6400 frames). We also undertake a detailed analysis of contribution of monocular features in depth estimation and thorough evaluation on images and videos;
\end{list}
\subsubsection{Occlusion Boundaries}
\label{sec:occl}
Occlusion boundaries arise in a scene due to depth ordering of the objects and geometric classes. Within and across regions in the scene that are connected by a non-occluding boundary such as an orientation discontinuity or surface markings, depth varies smoothly, while occlusion boundaries also correspond to depth discontinuities. Therefore, we enforce connectivity and co-planarity in the scene conditioned on the probability of occlusion boundaries $y_{ij}$ between the regions $i$ and $j$. To estimate occlusion boundaries in a scene, we compute features for each edgelet in the video. Specifically, we compute the difference in color, geometric context, and motion features for the regions on both sides of an edgelet. In addition, we compute the flow-consistency features along each edgelet boundary \cite{humayun2011learning}. We use standard pairwise potential term over edgelet occlusion prediction. Our occlusion boundary MRF is given as
\begin{equation} \label{eq:occl}
P(e=\mbox{n-occl}|X)=\frac{1}{Z}\prod_{n=1}^{N}g_n(e_n|X_n)\prod_{m\in\mathrm{conn}(n)}f_{mn}(e_n,e_m)
\end{equation}
The term $g_n(.)$ captures the probability of an edge being a non-occlusion boundary given the edgelet features $X_n$. We compute the unary term by training a random forest with the ground truth from the video geometric context dataset \cite{Raza2013GCFV}. The boundaries between different objects and geometric classes, \emph{i.e}\bmvaOneDot, sky, ground, trees, buildings, and objects, provide occlusion boundaries, and other boundaries correspond to non-occlusion boundaries. The pairwise term $f_{mn}$ is $\sqrt{P(e_n)P(e_m)}$, where edgelet $m$ is connected to edgelet $n$. Next, to make our occlusion boundary prediction temporally consistent, we smooth the occlusion boundary probabilities $P(e=\mbox{n-occl}|X)$ over a temporal window of $30$ frames. Note that the edgelet $e_n$ in Equation \eqref{eq:occl} is the boundary between regions $i$ and $j$ in Equation \eqref{eq:depth}, and the occlusion probability of an edge $P(e_n=\mbox{n-occl}|X_n)$ is the same as $y_{ij}$.
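For a fixed labelling, the unnormalised score of this MRF is just a product of the unary probabilities $g_n$ and the pairwise terms $f_{mn}=\sqrt{P(e_n)P(e_m)}$; a small numpy sketch (the data layout is an assumption of this sketch):

```python
import numpy as np

def occlusion_score(unary, conn):
    """Unnormalised MRF score: product of the unary non-occlusion
    probabilities g_n over all edgelets, times the pairwise terms
    f_mn = sqrt(P(e_n) P(e_m)) over pairs (n, m) of connected edgelets."""
    score = float(np.prod(unary))
    for n, m in conn:
        score *= np.sqrt(unary[n] * unary[m])
    return score
```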
Now, we solve \Cref{eq:depth} using the L-BFGS algorithm \cite{liu1989limited} and compute the depth of each pixel $k$ in region $i$ using $d_{i,k}=1/r_{i,k}^T\alpha_i$. We compute depth for each 2D region in a frame using the MRF and then use a sliding window across frames to smooth out the inconsistencies in the predicted depth.
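The final per-pixel depth read-out $d_{i,k}=1/r_{i,k}^T\alpha_i$ is a one-liner; a sketch with a hypothetical ray and plane parameter vector:

```python
import numpy as np

def pixel_depths(rays, alpha):
    """Depth of each pixel k of region i from the plane parameters alpha_i:
    d_{i,k} = 1 / (r_{i,k}^T alpha_i), rays being Nx3 viewing-ray directions."""
    return 1.0 / (rays @ alpha)
```

For example, a ray looking straight down the optical axis against a fronto-parallel plane with $\alpha=(0,0,0.1)$ yields a depth of $10$ metres.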
\section{Depth Estimation in Videos}
\label{sec:overview}
\noindent We propose an algorithm to estimate the depth in a dynamic video scene.
Our conjecture is that appearance and depth correlations learned from training data can be combined with video parsing based on motion coherence of patches and occluding boundaries to infer depth of novel videos. If this approach succeeds, then depth can be computed from videos even when moving cameras do not translate
and there are independently moving foregrounds in the scene. We formulate the depth estimation problem as a problem of inference of locally planar structure within image
patches. The problem is formulated within an MRF framework in which learning provides the data term for the planar parameters of patches, and inter-patch smoothness is modelled as co-planarity that is allowed to be violated at occlusion boundaries. Motion estimation provides the occlusion boundaries. The learned parameters guide the MAP estimation of the unknown scene, which is solved by simultaneous estimation of all parameters.
We process a video in three stages. We first decompose a video into spatio-temporal regions using video segmentation and extract local appearance and motion features for each region (Sec. \ref{sec:vidseg}). In the second stage, we extract the geometric context from the video and apply a random forest based appearance to geometry mapping to predict depth for each spatio-temporal region (Sec. \ref{sec:feats}). Subsequently, we estimate the occlusion boundaries in the video scene (Sec. \ref{sec:occl}) and infer the plane parameters for each spatio-temporal region by enforcing connectivity and co-planarity over non-occluding regions (Sec. \ref{sec:depthMRFfull}). Figure ~\ref{fig:overview} shows the steps of our method. Now we explain each of these steps in detail.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{depth_flow_1.pdf}
\caption{Flow of depth estimation in videos. We first segment the videos into spatio-temporal super-voxels and then extract color, location, texture, motion, and geometric context based features. We obtain unary depth predictions with random forest regression, and refine the depth estimates by incorporating 3D scene properties in an MRF with occlusion boundaries.}
\label{fig:overview}
\vspace{-0.3cm}
\end{figure*}
\section{Related Work}
\label{sec:related}
Structure from motion has been widely applied for 3D scene reconstruction from video. Nister \emph{et al}\bmvaOneDot ~\cite{nister2005real} have developed and demonstrated
real-time reconstruction of point clouds and dense 3D maps from videos. Newcombe \emph{et al}\bmvaOneDot \cite{newcombe2010live} proposed a structure from motion (SfM) based method to construct a dense 3D depth map from a single camera video. We cannot do justice here to the vast literature on SfM. However, all SfM approaches need translational motion to
compute depth maps.
A key differentiator for the class of approaches represented by this paper is that instead of applying projective geometry techniques to the videos, we reconstruct depth maps based on learning the depth structure of general scenes for reconstruction. As a result, our technique can be applied to videos with no translational motion, substantial motion of foreground objects, and also situations where the camera can be still for short durations.
Techniques that are most relevant to our work are the semantic scene structure based scene reconstruction works, primarily applied to image analysis. Hoiem \emph{et al}\bmvaOneDot \cite{hoiem2007occl} proposed a depth ordering of regions in a single image using geometric context and occlusion depth order. Hane \emph{et al}\bmvaOneDot \cite{hane2013joint} proposed joint image segmentation and reconstruction by incorporating class-specific geometry priors from training data. Ladicky \emph{et al}\bmvaOneDot \cite{ladicky2012joint} performed joint semantic segmentation and stereo reconstruction in a CRF framework for street view analysis. Liu \emph{et al}\bmvaOneDot \cite{liu2009nonparametric} proposed non-parametric semantic labeling by pixel label transfer given a database of known pixel labels. Their method was extended to depth transfer in images and videos by Karsch \emph{et al}\bmvaOneDot \cite{karsch2012depth}. The depth transfer algorithm is highly dependent on the global similarity of the target image with candidate images. Moreover, this method does not scale (computationally) to large environments, as it requires imagery with associated depth for the environment and a test-time global image search over the database for each target image of the video. In contrast, our approach of learning depth from local features and holistic scene semantics is potentially more likely to scale to larger environments and real-time systems.
Saxena \emph{et al}\bmvaOneDot \cite{saxena2009make3d} proposed depth estimation from a single image using local image appearance features. Their depth prediction algorithm was improved by Liu \emph{et al}\bmvaOneDot \cite{liu2010single} by incorporating semantic class priors for semantically aware image reconstruction. Our work is inspired by these two approaches but differs in the following key respects: first, we develop a formulation for video reconstruction, and second, we use occlusion boundaries in the reconstruction. Moreover, we take advantage of temporal context in videos by using video segmentation to get spatio-temporal regions and by estimating temporal occlusion boundaries.
\section{Experiments and Results}
\label{sec:result}
\noindent We perform extensive experiments on video depth data to evaluate our algorithm. We perform 5-fold cross-validation over 36 videos ($\sim$ 6400 frames). We compute the average log-error $|\log d - \log {\hat{d}}|$ and the average relative error $|\frac{ d - \hat{d}}{d}|$ to report the accuracy of our method. We achieve an average log-error of $0.153$ and an average relative error of $0.44$ (Table \ref{Table:result}). Figure \ref{fig:vid_depth_result} shows some example scenes from our dataset with ground truth and predicted depth.
\begin{figure*}
\centering
\includegraphics[width=0.75\textwidth]{vid_depth_examples_eccv_2.pdf}
\caption{Examples of video scenes, ground truth, and depth predicted by our method. The legend shows the depth range from 0m (blue) to 80m (red). }
\label{fig:vid_depth_result}
\vspace{-0.5cm}
\end{figure*}
We also compute depth accuracy over each predicted geometric class. We achieve good performance for objects, ground, and sky. Geometric context provides high accuracy for ground and sky prediction, contributing to better depth prediction for these classes. We achieve comparatively lower accuracy for trees and buildings. Highly porous tree branches and glass buildings can have errors in ground truth depth due to velodyne limitations. Moreover, occlusion boundary prediction within porous regions can have lower accuracy. However, using geometric and motion features enables our algorithm to accurately predict temporal depth for moving objects; \emph{e.g}\bmvaOneDot, Figure \ref{fig:obj_depth} shows examples of temporal updates in object depth prediction.
\Cref{Table:result} shows the importance of features for depth estimation on the video depth dataset: geometric features provide useful information for depth estimation. Figure \ref{fig:feat_imp} shows average out-of-bag feature importance from random forest regression over 5-fold cross-validation, normalized by sub-feature dimension. It shows that location and geometric features provide the most useful information for the random forest. Motion features are also informative but have lower average feature importance because of their high feature dimension. Motion features are additionally useful for extracting geometric context in videos and for occlusion boundary detection.
Our approach for depth estimation can also be applied to images. We applied our algorithm to the publicly available Make3d depth image dataset \cite{saxena2009make3d}, which contains $400$ training images and $134$ test images. To apply our video-based algorithm to images, we first apply the image segmentation of Felzenszwalb \emph{et al}\bmvaOneDot \cite{superpixel}. Next, to estimate geometric context, we apply the publicly available code from Hoiem \emph{et al}\bmvaOneDot \cite{HoiemIJCV2007}. For occlusion boundary prediction, we train our random forest classifier with only the appearance-based features from the video geometric context dataset \cite{Raza2013GCFV}. Table \ref{table:make3d} compares the single-image variant of our approach with the state of the art; we achieve competitive results. It should be noted that our algorithm depends on occlusion boundary detection and geometric context (for which motion-based features are important \cite{stein2009occlusion,Raza2013GCFV}) and is not optimized to extract depth from single images.
\begin{table}
\begin{minipage}[t]{0.47\linewidth}
\centering
\small
\begin{tabular}{|c|c|c|}\hline
Features & $log10$ & rel-depth \\\hline
ALL & 0.153 & 0.44\\\hline
Appearance+Flow & 0.176 & 0.533\\\hline
Appearance & 0.175 & 0.512\\\hline
\end{tabular}
\vspace{0.1cm}
\caption{Performance of our algorithm on video dataset, combining appearance, flow, and surface layout features give best accuracy.}
\label{Table:result}
\end{minipage}
\hspace{0.5cm}
\begin{minipage}[t]{0.47\linewidth}
\centering
\small
\begin{tabular}{|c|c|c|}\hline
Algorithm & $log10$ & rel-depth \\\hline
SCN \cite{saxena2005learning} & 0.198 & 0.530\\\hline
HEH \cite{HoiemIJCV2007} & 0.320 & 1.423\\\hline
Baseline \cite{saxena2009make3d} & 0.334 & 0.516\\
PP-MRF \cite{saxena2009make3d} & 0.187 & 0.370\\
Depth Transfer \cite{karsch2012depth} & 0.148 & 0.362 \\\hline
Semantic Labels \cite{liu2010single} & 0.148 & 0.379 \\\hline
\textbf{*Geom. Context} & & \\
\textbf{Occl. Bound.} & \textbf{0.159} & \textbf{0.386} \\\hline
\end{tabular}
\vspace{0.1cm}
\caption{Our approach can also be applied to images. We apply it to Make3d depth image dataset \cite{saxena2009make3d}.}
\label{table:make3d}
\end{minipage}
\end{table}
\begin{figure}
\begin{minipage}[t]{0.47\linewidth}
\centering
\includegraphics[width=0.8\columnwidth]{depth_feat_imp_1.pdf}
\caption{Random forest's out-of-bag feature importance for depth regression.}
\label{fig:feat_imp}
\end{minipage}
\hspace{0.5cm}
\begin{minipage}[t]{0.47\linewidth}
\centering
\includegraphics[width=0.8\textwidth]{vid_depth_obj_1.pdf}
\caption{Our method can predict temporal object depths with high accuracy.}
\label{fig:obj_depth}
\end{minipage}
\end{figure}
\subsection{MRF Framework}
\label{sec:depthMRFfull}
The region-based depth predictions obtained from the random forest are typically noisy and do not exploit the piecewise regularity and smoothness inherent in most natural scenes. In particular, depth changes smoothly across segments except at occluding boundaries. We therefore combine segment-based depth estimation with cross-segment smoothing within an MRF formulation.
A scene in the video is composed of spatio-temporal regions. We model depths within a segment using planar patches and model cross-segment smoothness using 3D connectivity and co-planarity constraint. The cross-segment constraints are allowed to be violated based on evidence from an occluding boundary.
We assume a pinhole camera with no lens distortion. A 3D point in world coordinates is projected onto an image plane pixel $p$. The ray $r_p$ in the world passing through pixel $p$ is given as $r_p=R^{-1}K^{-1}\begin{bmatrix}u_p & v_p & 1\end{bmatrix}^T$, where $(u_p,v_p)$ are the image coordinates of pixel $p$. For simplicity, we assume that the camera is parallel to the ground with $\theta_{yz}=0$, so the rotation matrix is the identity. Note that any of the well-known methods for camera motion estimation from videos could be applied here to derive an instantaneous rotation and translation with respect to a reference frame, and this can be easily incorporated within our framework. However, since our focus in this work is to explore dynamic depth estimation, we simplify the camera pose problem for expediency. $K$ is the camera matrix, given as
$$
K=\begin{bmatrix}
f_u & 0 & u_o\\
0 & f_v & v_o\\
0 & 0 & 1\\
\end{bmatrix}
$$
where $f_u$ and $f_v$ are the camera focal lengths and $(u_o,v_o)$ is the image center.
We make the commonly used assumption that each spatio-temporal segment within a video can be described as a planar patch, so all the pixels $p$ in a segment $S$ can be parametrized by plane parameters $\alpha \in \mathbf{R}^3$. The orientation of the plane is given by $\hat{\alpha}=\frac{\alpha}{|\alpha|}$ and the depth of a pixel $p$ on the plane with parameters $\alpha$ is $d_p=1/(r_p^T\alpha)$.
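The ray and plane-depth relations above can be sketched directly in code. The intrinsics and plane parameters below are illustrative values of our choosing, not our calibration:

```python
import numpy as np

# Illustrative intrinsics: focal lengths f_u, f_v and image center (u_o, v_o).
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)  # camera assumed parallel to the ground (identity rotation)

def pixel_ray(u, v):
    """Ray r_p through pixel (u, v): r_p = R^{-1} K^{-1} [u, v, 1]^T."""
    return np.linalg.inv(R) @ np.linalg.inv(K) @ np.array([u, v, 1.0])

def plane_depth(u, v, alpha):
    """Depth of pixel (u, v) on the plane with parameters alpha: d = 1/(r^T alpha)."""
    return 1.0 / (pixel_ray(u, v) @ alpha)
```

Note that the recovered 3D point $X=d_p\,r_p$ satisfies $\alpha^T X=1$, the plane equation implied by the parametrization.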
The goal of inference in our approach is to estimate the $\alpha$'s for each segment in the video within an MRF framework that combines unary terms, computed from the appearance-to-depth mapping, with pairwise terms that enforce connectivity and co-planarity between planes conditioned on the occlusion boundary between them. The MRF formulation is given as:
\begin{equation}
\label{eq:depth}
P(\alpha|X,S,y)=\frac{1}{Z}\prod\limits_{i=1}^Ng_i(\alpha_i|X_i,S_i)\\
\prod\limits_{ij}f_{ij}(\alpha_i,\alpha_j,y_{ij})
\end{equation}
where, for segment $i$, $X_i$ is the vector of appearance and motion features, $S_i$ is the segment ID, and $y_{ij}$ is the probability that the boundary between segments $i$ and $j$ is non-occluding. The term $f_{ij}$ is the potential function that measures violations of smoothness between connected segments. Therefore, if the boundary between two spatio-temporal regions is an occlusion boundary then $y_{ij}=0$ and the pairwise term $f_{ij}$ imposes no constraint, \textit{i.e.,} the two regions are not forced to be connected or co-planar. But if the boundary between two spatio-temporal regions is non-occluding then $y_{ij}=1$ and connectivity and co-planarity are enforced. To handle errors in the occlusion boundary prediction, we include the confidence of a boundary being an occlusion boundary in our MRF framework (Sec. \ref{sec:occl}).
The term $g_i(.)$ in Eq. \ref{eq:depth} minimizes the total fractional depth error over all the pixels $k$ in a region $i$ in a frame. The fractional error is given as
\begin{equation}
\frac{\hat{d}_{k,i}-d_{k,i}}{d_{k,i}}=\hat{d}_{k,i} \cdot r_{k,i}^T\alpha_i-1
\end{equation}
where $d$ is the ground truth depth and $\hat{d}$ is the depth predicted by the random forest mapping (Sec. \ref{sec:unary_depth}). The data term that minimizes the total fractional error in a region $i$ with plane parameters $\alpha_i$ is given by:
\begin{equation}
g_i(\alpha_i|X_i,S_i)=\exp(-\sum\limits_{k=1}^K||(r_{i,k}^T\alpha_i)\hat{d}_{i,k}-1||^2)
\end{equation}
Connectivity constrains the pixels on the boundary to be connected by minimizing the fractional distance between the pixels on the boundary $B_{ij}$ of regions $i$ and $j$ in absence of the occlusion boundary. The connectivity term is given as:
\begin{equation}
f_{\mbox{conn}}(\alpha_i,\alpha_j,y_{ij})=\exp(-\frac{1}{|B_{ij}|}\sum\limits_{p\in B_{ij}}y_{ij}||(r_p^T\alpha_i-r_p^T\alpha_j)\sqrt{\hat{d_i}\hat{d_j}}||^2)
\end{equation}
where $B_{ij}$ are the pixels on the boundary of regions $i$ and $j$ in a frame. The co-planarity term minimizes the fractional distance between the centers of the region and is given by:
\begin{equation}
f_{\mbox{cop}}(\alpha_i,\alpha_j,y_{ij})=\exp(-y_{ij}||(r_q^T\alpha_i-r_q^T\alpha_j)\hat{d}_j||^2)
\end{equation}
where $r_q$ is the center pixel of region $j$.
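The three potentials can be summarized as the quadratic energies in their exponents (lower is better). Below is a minimal sketch with function names and toy inputs of our choosing:

```python
import numpy as np

def data_energy(rays, d_hat, alpha):
    """Fractional-depth data term: sum_k ((r_k^T alpha) * d_hat_k - 1)^2."""
    return np.sum((rays @ alpha * d_hat - 1.0) ** 2)

def connectivity_energy(boundary_rays, alpha_i, alpha_j, d_i, d_j, y_ij):
    """Cross-segment connectivity, gated by the non-occlusion probability y_ij."""
    diff = boundary_rays @ (alpha_i - alpha_j)
    return y_ij * np.mean((diff * np.sqrt(d_i * d_j)) ** 2)

def coplanarity_energy(r_q, alpha_i, alpha_j, d_j, y_ij):
    """Co-planarity evaluated at the center pixel r_q of segment j."""
    return y_ij * (r_q @ (alpha_i - alpha_j) * d_j) ** 2
```

As expected, the data energy vanishes when $\hat{d}=1/(r^T\alpha)$, and both pairwise energies vanish either when the planes coincide or when $y_{ij}=0$ (occlusion boundary).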
\subsection{Video Segmentation}
\label{sec:vidseg}
\noindent We assume that a video scene is made up of a small number of planes having uniform depth and orientation \cite{saxena2009make3d,liu2010single}. Therefore, instead of estimating the 3D position of each pixel, we can work on spatio-temporal regions in the video. To decompose a video into spatio-temporal regions, we use the graph-based video segmentation proposed by Grundmann \emph{et al}\bmvaOneDot \cite{MatthiasSegmentation,Corso2012Evaluation}. Their algorithm constructs a 3D graph over the video volume and groups together regions which are coherent in appearance and motion. It processes long video sequences by dividing the video into smaller clips, overlapping frames between successive clips to automatically achieve long-term coherence. Figure \ref{fig:vidSeg} shows examples of spatio-temporal regions produced by video segmentation \cite{MatthiasSegmentation}\footnote{Using www.videosegmentation.com.}.
\section{\label{sec:introduction}Introduction}
Topology has signaled a shift in modern condensed matter research since the emergence of the quantum Hall effect \cite{PhysRevLett.45.494}. Many systems with desirable physical behaviors are now known to have an underlying nontrivial topological classification \cite{10.2307.2397741,PhysRevLett.49.405,PhysRevLett.61.2015,PhysRevB.22.2099}. These systems are promising platforms to potentially revolutionize technology as we approach the limit of traditional semiconductor-based devices \cite{ieee-roadmap}. One idea is to design electronics using the robust conducting edge states of topological insulators \cite{electronics7100225,2021CmPhy.4.70G}, which could replace standard transistor-based switching components. Another idea is to leverage topologically protected spin-locked states as a basis for memory in spintronics \cite{2018NatPh.14.242S}. Finally, topological insulators and superconductors \cite{2012majorana} have been proposed as a basis for quantum computing \cite{osti_1811722}. With all of these possible applications at stake, it is clearly imperative that we maximize our ability to design and control topological phases of matter.
Consequently, \textit{Floquet engineering} \cite{Okaengineering}, where systems are governed by a Hamiltonian possessing dynamic periodicity, has emerged as a promising candidate for precise tuning of topology through the laser-matter interaction. Floquet engineering has already been used both to emulate the Thouless pump with quantized energy as opposed to charge \cite{PhysRevLett.120.150601,Pan:21}, and as a method to create novel topological phases from initially trivial phases \cite{rudner2013anomalous,2011TNt-qwell,2011TNt-majorana,PhysRevB.79.081406,PhysRevB.84.235108,PhysRevLett.110.016802-mod-f-ti}.
The recent growth in this field is owed to advances in experimental capabilities and theoretical understanding \cite{RevModPhys.53.287,1986PhR.141.320M,FAINSHTEIN1992111,CHU20041,doi:10.1142/5476} in the field of laser-driven quantum mechanics. One powerful theoretical tool for driven lattice systems \cite{PhysRevLett.110.200403} establishes a mapping of the dynamic D-dimensional system to a static $(D+1)$-dimensional system, in which the modes of the Fourier expansion play the role of lattice points in the new direction. An intriguing feature of the mapping is the emergence of a frequency-dependent field along the new frequency-space direction, arising uniquely from the time derivative in the Schr\"{o}dinger equation. This field dictates the method of solution to be employed. In the case of adiabatic driving, where the driving frequency is small, the frequency-dependent field is negligible or perturbative, and translational invariance along the frequency direction is assumed. For example, an adiabatically-driven $1$D Hamiltonian may be mapped to a $2$D static representation, where the Floquet-Bloch \cite{PhysRevLett.110.200403} formalism allows for standard calculations of the Chern number \cite{xiao2010,fukui_2005} to classify the topology. Alternatively, in the high-frequency regime, couplings between neighboring Fourier modes become perturbative, with the unperturbed Hamiltonian being the time-derivative operator \cite{Eckardt_2015}. In the intermediate-frequency region, because the energy scales of the Floquet field and the static Hamiltonian are comparable, approaches beyond perturbation theory must be employed.
In this paper, we study the topological properties of a system driven by two distinct frequencies in the adiabatic and intermediate frequency regimes. Floquet engineering is often employed for a single-frequency drive, with multi-frequency (MF) cases being studied more recently. Broadly speaking, there are two options within the MF formalism: frequencies with (i) commensurate \cite{PhysRevA.101.032116} and (ii) incommensurate \cite{Martin_2017} relationships. The formalism of incommensurate multi-frequency driving demands the introduction of a Fourier manifold for each additional drive \cite{Martin_2017}, which has yielded useful applications for $0$-dimensional qubit frequency mixers \cite{PhysRevX.12.021061}. However, this formalism and its computation can be cumbersome for two-dimensional and higher systems. Moreover, results in the Floquet formalism rely on truncation of the typically infinite-dimensional Fourier mode space, and two competing truncation methods for each space of a two-tone incommensurate drive may yield results applicable only in limited situations. Commensurate driving, on the other hand, has already been employed in a variety of situations with great experimental impact. Topology of $1$D lattice systems under commensurate driving has been studied before \cite{PhysRevB.102.235143} to examine the quality of localization in edge modes. Another example is commensurate-frequency driving being used to create quantum destructive interference in a Fermi-Hubbard model to suppress heating effects \cite{PhysRevX.11.011057}, a prevalent problem in all of Floquet engineering. Finally, two-tone drives have been used to engineer non-trivial band structures \cite{Sandholzer_2022,Minguzzi_2022}.
We employ the commensurate-frequency framework developed in \cite{PhysRevB.102.235143} to express the MF Floquet formalism using a single period, and apply this drive to the Su-Schrieffer-Heeger (SSH) model. Note that single-frequency driven SSH variants have already generated great interest in the field \cite{PhysRevA.92.023624,Borja_2022,dmytruk2022controlling,Agrawal_2022}. In the adiabatic driving scheme, we map the commensurate drives to the frequency space, resulting in the emergence of new couplings. Careful tuning of the frequencies allows for simulating nontrivial couplings that are difficult to realize in real space. To demonstrate this, we explore effects such as a next-nearest neighbor Floquet hopping, and long-range lattice hopping with two larger, close-by frequencies. The latter effect motivated our interest in this study, as a potential temporal analogue to the Moir\'e pattern observed in twisted bilayer graphene \cite{doi:10.1073/pnas.1108174108}. Finally, we demonstrate that the dual-frequency drive provides an experimentally appealing method for creating a topological phase transition: the different topological regions are reached simply by varying the amplitude mixture.
In simulating the model, we find that standard computational approaches to topology can be troublesome in the presence of the Floquet field originating in the intermediate frequency regime. We address this by employing a real space variant of the Chern number, called the local Chern marker \cite{2011bianco}. This method allows us to not only visualize the effect of open-boundaries, disorder, and electric field on the topology locally, but to also compute the system topology in the presence of a non-zero Floquet field, which is a feature completely unique to the Floquet engineering systems.
\maketitle
\section{\label{sec:MMFT}Model}
A static Su-Schrieffer-Heeger (SSH) model possesses a dimer-type lattice with atoms $A,B$ forming the members of each dimer, and it is topologically non-trivial provided that the inter-cell coupling is stronger than the intra-cell coupling. To demonstrate the consequences of multi-frequency driving, we consider a Floquet Su-Schrieffer-Heeger (FSSH) model with time-varying, two-frequency tunneling coefficients. The Hamiltonian is kept similar to previous works \cite{PhysRevB.95.205125,L__2019} to ensure that upon relaxing the two-drive condition to a single drive, we recover well-established results.
\begin{equation}\label{eq:FSSH}
\begin{split}
H(t)=\sum_{n}^N U_1(t) \hat{c}^\dagger_{n,B}(t)\hat{c}_{n,A}(t)+U_2(t) \hat{c}^\dagger_{n+1,A}(t)\hat{c}_{n,B}(t) \\
+V_A(t) \hat{c}^\dagger_{n+1,A}(t)\hat{c}_{n,A}(t) + V_B(t) \hat{c}^\dagger_{n+1,B}(t)\hat{c}_{n,B}(t) + h.c.
\end{split}
\end{equation}
In Eq. \ref{eq:FSSH}, $U_1(t),U_2(t)$ are the intra-cell and inter-cell tunneling strengths, respectively, which are periodic in $T$. Additionally, we consider the next-nearest neighbor coupling terms $V_A(t),V_B(t)$. The real-space coordinate can have a periodic boundary condition ($N+1=1$) or an open boundary. The tunneling coefficients,
\begin{equation*}
\begin{split}
U_1(t)=u(1+2(\cos{\Omega_1 t}+\alpha\cos{\Omega_2 t})) \\
U_2(t)=u(1-2(\cos{\Omega_1 t}+\alpha\cos{\Omega_2 t})) \\
V_A(t)=v(\cos{(\Omega_1 t+\theta)}+\alpha\cos{(\Omega_2 t+\theta)}) \\
V_B(t)=v(\cos{(\Omega_1 t-\theta)}+\alpha\cos{(\Omega_2 t-\theta)})
\end{split}
\end{equation*}
are dynamical with driving frequencies $\Omega_{1,2}$ and tunneling amplitudes $u,v$ for the nearest and next-nearest neighbor hopping, respectively. The $\Omega_2$ driving term possesses an ``offset'' amplitude $\alpha$, the consequences of which will be discussed in the Results section.
\maketitle
\subsection{Dual Frequency Driving}
\begin{figure}[t]
\centering
\includegraphics[scale=0.375]{fssh_2p.png}
\caption[Chern Marker Visualized]{$2$D static Floquet SSH representation. In gray: intra-cell hopping; purple: inter-cell hopping; green: nearest-neighbor Floquet (NNF) hopping; yellow: next-nearest neighbor Floquet (NNNF) coupling. The basic couplings are shown, but the Hamiltonian allows for any combination; e.g., an NNNF plus inter-cell off-diagonal element is present in $H$. Note that NNNF couplings occur only with the additional $n_2=2$ frequency drive.}
\label{fig:fssh-2p}
\end{figure}
Here we outline the treatment of Eq. \ref{eq:FSSH} in which the sources are periodic in $T_1$ and $T_2$ and are subject to the following condition:
\begin{equation}\label{ratio}
\frac{T_1}{T_2}=\frac{n_2}{n_1}
\end{equation}
for $\{n_1,n_2\} \in \mathbb{Z}^+$, meaning that we may always find \cite{PhysRevB.102.235143} a period $T$ such that
\begin{equation}
T=n_1T_1=n_2T_2
\end{equation}
which may be used to employ Floquet theory. Note that $\Omega_{1,2}=\frac{2\pi}{T_{1,2}}$. The Hamiltonian may be expressed in terms of ``components'' for each period,
\begin{equation}\label{eq:H-comp}
H(\textbf{r},t)=H_0+H^{T_1}(\textbf{r},t)+H^{T_2}(\textbf{r},t)
\end{equation}
where the components have the periodicities $H^{T_1}(\textbf{r},t+T_1)=H^{T_1}(\textbf{r},t)$ and $H^{T_2}(\textbf{r},t+T_2)=H^{T_2}(\textbf{r},t)$, so that in particular $H^{T_{1,2}}(\textbf{r},t+T)=H^{T_{1,2}}(\textbf{r},t)$. The $H_0$ term is the undriven Hamiltonian.
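The common period $T=n_1T_1=n_2T_2$ can be computed mechanically from the integer ratio. A small helper, assuming the relation $T_1/T_2=n_2/n_1$ of Eq. \ref{ratio} holds exactly (the function name is ours):

```python
from math import gcd

def common_period(T1, T2, n1, n2):
    """Smallest T with T = k1*T1 = k2*T2, given T1/T2 = n2/n1 exactly.

    Reducing (n1, n2) by their gcd gives the minimal integer multiples.
    """
    g = gcd(n1, n2)
    k1, k2 = n1 // g, n2 // g
    T = k1 * T1
    assert abs(T - k2 * T2) < 1e-12 * abs(T)  # consistency check on the input ratio
    return T
```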
It is important to note that the period of the system is $T$, and so Floquet theory is employed with respect to $T$, not to either $T_1$ or $T_2$. Due to this, the derivation of the Floquet Hamiltonian $\mathcal{K}$ is the same as in the single-frequency case. The single-frequency Floquet Hamiltonian has been derived many times, so we refer the reader to Refs. \cite{Okaengineering,https://doi.org/10.48550/arxiv.2003.08252}. The general procedure is as follows: from the time-dependent Schr\"{o}dinger equation, express the eigenstates as Floquet states composed of a non-periodic phase factor and a $T$-periodic function, then expand the states in a Fourier series. The dual-frequency case diverges from the single-frequency one when we take the expansion in terms of the components of Eq. \ref{eq:H-comp}, as written in Eq. \ref{mmode_k}.
\begin{equation}\label{mmode_k}
\mathcal{K}=\left( H^{T_1}(\textbf{r},t)+H^{T_2}(\textbf{r},t) \right)-i\frac{\partial}{\partial t}
\end{equation}
The Fourier expansion on the Floquet modes is then,
\begin{sloppypar}
\begin{equation*}
\begin{split}
\left( H^{T_1}(\textbf{r},t)+H^{T_2}(\textbf{r},t) \right) \sum_m e^{im\Omega t}\ket{\phi^m_\alpha}\\+\sum_m m\Omega e^{im\Omega t}\ket{\phi^m_\alpha}
=\epsilon_\alpha \sum_m e^{im\Omega t}\ket{\phi^m_\alpha}
\end{split}
\end{equation*}
\end{sloppypar}
meaning that the matrix elements of Eq. \ref{mmode_k}, given by the universal expression $\bra{\alpha,n}\dots\ket{\beta,m}=1/T\int_0^T dt \dots$, may be computed in the usual way. However, we must pay attention to each new hopping term emerging from each commensurate frequency;
\begin{equation*}
\begin{split}
\sum_{m}\Big( \frac{1}{T}\int_0^T \textrm{d}t\, H^{T_1}(\textbf{r},t)e^{i\Omega t(m-n)}+\\ \frac{1}{T}\int_0^T\textrm{d}t\, H^{T_2}(\textbf{r},t)e^{i\Omega t(m-n)} \Big)\ket{\phi^m_\alpha} =(\epsilon_\alpha -n\Omega)\ket{\phi^n_\alpha}
\end{split}
\end{equation*}
resulting in
\begin{equation}\label{eq:mmode_h}
\sum_{m} \left( H^{T_1}_{(m-n)}+H^{T_2}_{(m-n)}+m\Omega \delta_{m,n} \right) \ket{\phi^m_\alpha}=\epsilon_\alpha \ket{\phi^n_\alpha}
\end{equation}
where $H^{T_1 (T_2)}_{(m-n)}=\frac{1}{T}\int_0^T \textrm{d}t H^{T_1 (T_2)}(\textbf{r},t)e^{i\Omega t(m-n)}$. Note that the Fourier factor $e^{i\Omega t (m-n)}$ is written in terms of $\Omega$, the frequency of the full system, not in either $\Omega_1$ or $\Omega_2$. The result of mapping our Hamiltonian to the static $2$D version is shown in Fig. \ref{fig:fssh-2p}; the new couplings seen there emerge from the new frequencies added in the drive.
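The structure of Eq. \ref{eq:mmode_h} can be illustrated by building the truncated Floquet matrix for a single-band toy Hamiltonian $h_0+a_1\cos(n_1\Omega t)+a_2\cos(n_2\Omega t)$ (recall $\Omega_{1,2}=n_{1,2}\Omega$): each cosine contributes half its amplitude at Fourier offsets $\pm n_{1,2}$, and the Floquet field $m\Omega$ sits on the diagonal. The scalar amplitudes here are illustrative, not the FSSH couplings:

```python
import numpy as np

def floquet_matrix(h0, a1, n1, a2, n2, omega, m_max):
    """Truncated Floquet matrix of H(t) = h0 + a1*cos(n1*Omega*t) + a2*cos(n2*Omega*t).

    Fourier blocks: H_(m-n) = h0*delta_{mn} + (a1/2)*delta_{|m-n|,n1}
    + (a2/2)*delta_{|m-n|,n2}; the Floquet (Stark) field m*Omega is diagonal.
    """
    modes = np.arange(-m_max, m_max + 1)
    dim = len(modes)
    K = np.zeros((dim, dim))
    for i, m in enumerate(modes):
        for j, n in enumerate(modes):
            if m == n:
                K[i, j] = h0 + m * omega
            elif abs(m - n) == n1:
                K[i, j] = a1 / 2.0
            elif abs(m - n) == n2:
                K[i, j] = a2 / 2.0
    return K
```

For a lattice model each scalar entry becomes the corresponding real-space block, giving exactly the couplings sketched in Fig. \ref{fig:fssh-2p}.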
The coupling factor in the Floquet Hamiltonian in Eq. \ref{eq:mmode_h} compared to the single-frequency case reveals that multi-mode theory with commensurate drives allows for construction of new kinetic terms. One may expect a new coupling for each commensurate drive added. Careful construction of these new frequencies may yield exotic new physics, or open the route for Floquet systems to mimic the physics of some experimentally intractable static systems in condensed matter.
\maketitle
\section{\label{sec:Results}Results}
The Hamiltonian (Eq. \ref{eq:FSSH}) is constructed for atoms $A,B$ with $20$ dimers, resulting in $40$ real-space matrix elements. We set $u=1,v=0.2,\theta=0.5\pi,\alpha=2$, unless otherwise stated. We employ the multi-frequency Floquet theory (Eq. \ref{eq:mmode_h}) to map the time-dependent $1$D system to the static $2$D enlarged space (Fig. \ref{fig:fssh-2p}). As opposed to mapping the system via the Floquet-Bloch transformation, we leave the system in real-space matrix form. While this matrix is technically infinite, we can study a truncated space using the Chern marker to examine the topological order \cite{https://doi.org/10.48550/arxiv.2003.08252,Holthaus_2015}. We consider $200$ Floquet modes, $m$, resulting in an $8000\times8000$ matrix unless otherwise stated, which we construct and diagonalize in Fortran. All presented calculations of the Chern number are accurate up to a maximum error of $1\%$. Where stated, the Stark field is included by adding the $m\Omega$-dependent value along the Floquet diagonal $\delta_{nm}$. As for the coupling, the integers chosen in Eq. \ref{ratio} result in different delta functions after integration of Eq. \ref{eq:mmode_h} in the Floquet space due to the cosine drive; for example, $n_1=10\rightarrow \delta_{n,m+10}+\delta_{n,m-10}$. Although theoretically any integer ratio may be employed, here we consider the $\{n_1,n_2\}$ cases of $\{1,1\}$ (single-drive reference), $\{1,2\}$ (Floquet next-nearest neighbor), and $\{10,11\}$ (close-by beat frequency).
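The delta selection rule quoted above can be verified numerically: the Fourier coefficient $\frac{1}{T}\int_0^T \cos(n_1\Omega t)\,e^{i\Omega t k}\,\textrm{d}t$ equals $1/2$ only for $k=\pm n_1$. A quick check (the grid size is an arbitrary choice, with $\Omega=1$):

```python
import numpy as np

def fourier_coeff(n_drive, k, num=4096):
    """(1/T) * integral over one period of cos(n_drive*Omega*t) * exp(i*Omega*t*k).

    Uses Omega = 1, T = 2*pi; the equispaced mean is exact for pure harmonics.
    """
    t = np.linspace(0.0, 2.0 * np.pi, num, endpoint=False)
    return np.mean(np.cos(n_drive * t) * np.exp(1j * k * t))
```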
\subsection{Topology - Single Drive}
Our model reduces to the single-drive case upon setting $\alpha=0$, with $\Omega_1$ as the base frequency. In Fig. \ref{fig:simple chern marker}, we plot the Chern marker over the static $2$D representation of the sample in the presence of periodic boundaries. As expected \cite{2011bianco}, averaged over the entire sample the marker is $0$ due to its commutator definition. However, in the bulk of the sample, the average Chern marker yields $1$, in excellent agreement with the Fukui method \cite{fukui_2005}. Previous works \cite{L__2019} have discovered that the single-drive Hamiltonian is topologically non-trivial for non-zero $\theta$. However, these predictions enforce translationally invariant samples and rely on computational methods using $k$-space, meaning the effect of the Floquet field on the topological order is ignored. By employing the local Chern marker (LCM) \cite{2011bianco} we provide both real-space confirmation of single-drive topology, and simple determination of topology in the presence of the Floquet electric field.
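For readers wishing to reproduce a marker calculation, the sketch below applies the Bianco-Resta formula $\mathfrak{C}(\mathbf{r})=-4\pi\,\mathrm{Im}\,\bra{\mathbf{r}}P\hat{x}Q\hat{y}P\ket{\mathbf{r}}$ (lattice constant $1$; overall sign conventions vary) to a standard static two-band model, the Qi-Wu-Zhang model, rather than to the full driven system:

```python
import numpy as np

def chern_marker_qwz(L=14, u=-1.0):
    """Local Chern marker for the Qi-Wu-Zhang model on an L x L open lattice.

    Real-space form of H(k) = sin(kx) sx + sin(ky) sy + (u + cos kx + cos ky) sz.
    Returns an (L, L) array: the marker summed over the two orbitals per site.
    """
    sx = np.array([[0, 1], [1, 0]], complex)
    sy = np.array([[0, -1j], [1j, 0]], complex)
    sz = np.array([[1, 0], [0, -1]], complex)
    N = L * L
    idx = lambda x, y: x * L + y
    T = np.zeros((2 * N, 2 * N), complex)
    for x in range(L):
        for y in range(L):
            i = idx(x, y)
            if x + 1 < L:                      # hopping along x: (sz + i sx)/2
                j = idx(x + 1, y)
                T[2*j:2*j+2, 2*i:2*i+2] = (sz + 1j * sx) / 2
            if y + 1 < L:                      # hopping along y: (sz + i sy)/2
                j = idx(x, y + 1)
                T[2*j:2*j+2, 2*i:2*i+2] = (sz + 1j * sy) / 2
    H = T + T.conj().T + u * np.kron(np.eye(N), sz)
    evals, evecs = np.linalg.eigh(H)
    occ = evecs[:, evals < 0.0]
    P = occ @ occ.conj().T                     # projector onto negative-energy states
    Q = np.eye(2 * N) - P
    xs = np.repeat(np.arange(L, dtype=float), L)   # x of site i = x*L + y
    ys = np.tile(np.arange(L, dtype=float), L)
    X = np.diag(np.repeat(xs, 2))
    Y = np.diag(np.repeat(ys, 2))
    marker = -4.0 * np.pi * np.imag(np.diag(P @ X @ Q @ Y @ P))
    return marker.reshape(N, 2).sum(axis=1).reshape(L, L)
```

In the bulk the marker plateaus at the Chern number, while its sum over the whole finite sample vanishes identically, the same qualitative behavior seen in Fig. \ref{fig:simple chern marker}.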
\begin{figure}[t]
\centering
\includegraphics[scale=0.40]{chm_match2.PNG}
\caption[Chern Marker Visualized]{Two plots of the Chern marker. Left is the Chern marker over the whole system, in two dimensions, real space and Floquet space. The average Chern number is $0$, due to the trace identity \cite{2011bianco}. In the region away from the edge as shown in the right plot, we obtain a Chern marker of $1$, corresponding to the bulk topology. The marker is stable in the bulk, and non-physical near the edges.}
\label{fig:simple chern marker}
\end{figure}
An advantage of computing topology using the Chern marker is that we may easily reintroduce the Floquet field along the frequency direction for small values of $\Omega$. Consequently, the topology may be visualized along each direction in response to an increasing field value, or even disorder along the real-space direction. In Fig. \ref{fig:ch_vs_omega}, we plot the Chern marker along real and frequency space with increasing electric field $\propto \Omega$. We see the real-space Chern marker unaffected everywhere by increasing $\Omega$. Similarly, for finite but small $\Omega$ the frequency-space Chern marker remains unaffected. However, as $\Omega$ increases the LCM along the frequency space does not remain topologically invariant and the system no longer has a meaningful topology. At the very center of the sample, where $m=n\approx 0$, the expected topology is recovered, which is consistent with the adiabatic theorem for small fields $(m\Omega)$. Consequently, this calculation may be used to probe the maximum allowed values of $\Omega$ above which the topology becomes ill-defined.
\begin{figure}[t]
\centering
\includegraphics[scale=0.34]{cm_omega5.png}
\caption{Chern marker plotted in I. real space and II. Floquet space, for increasing $\Omega$, magnitude of the Stark field. Note that only $100$ Floquet modes were used. In I. the real space Chern marker is not strongly changed by increasing $\Omega$ - a sensible result as the Stark field only permeates the Floquet space. In II. the Stark field destroys the topological order in the sample as $\Omega$ increases, but notice that for small $m$ the Chern marker returns to the expected Chern number of $1$, confirming the adiabatic theorem for small $\Omega$.}
\label{fig:ch_vs_omega}
\end{figure}
\subsection{Multi-Frequency Drive: Case 1,2}\label{sec:1_2}
\subsubsection{Topological Phase Transition}
Here we examine the effects of the second drive in the case of frequency ratio $n_1,n_2=1,2$. This frequency ratio has been studied before for a variety of systems \cite{Sandholzer_2022,Minguzzi_2022,PhysRevX.11.011057}, but to our knowledge, not in the driven SSH model. The band structure and the Chern marker for the case of periodic boundaries in each direction (hence adiabatic driving) are computed and plotted in Fig. \ref{fig:NNN-floq}. The physical interpretation of this model is the presence of a next-nearest neighbor coupling along the frequency space. The system still possesses a gap for $n_1,n_2=1,2$, but only in the case that the amplitudes of the respective drives are different; here we fix $\alpha=2$. Due to the translational invariance, the Chern marker is relatively stable, with little variation over the sample bulk.
\begin{figure}[t]
\centering
\includegraphics[scale=0.325]{d_1_2_2.png}
\caption[Next-Nearest Floquet Hopping]{The band structure (left) and Chern marker interior (right) for the case $n_1=1,n_2=2$. This case simulates a second-neighbor hopping in the Floquet space, impossible to achieve without a second drive. The system is found to be insulating with a quasi-energy gap of $\sim 2.0$, meaning the topology can be computed. The right plot shows the interior of the Chern marker, with a bulk average value of $2$.}
\label{fig:NNN-floq}
\end{figure}
We find that the Chern number is $2$, indicating that an advantage of commensurate driving is the ability to engineer non-trivial phases with $C>1$. Surprisingly, we discover that the offset amplitude between the drives functions as a tuning parameter for a topological phase transition. This transition is plotted in Fig. \ref{fig:C-vs-alpha12}. For $\alpha<1$, $C=1$, and for $\alpha>1$, $C=2$. The transition occurs through the gap-closing condition at $\alpha=1$. Note that the gap is computed as the difference between the lowest conduction-band and highest valence-band quasi-energies. The gap closes at the Brillouin zone corners $\{k_x,k_f\}=\{-\pi,-\pi\}$, etc., although initially the smallest difference is elsewhere in the Brillouin zone.
\begin{figure}[t]
\centering
\includegraphics[scale=0.42]{C-gap-vs-alpha-1-2-final.png}
\caption[C vs Alpha]{The Chern number in red plotted against the $\Omega_2$ driving amplitude $\alpha$. Note that the amplitude on $\Omega_1$ is $1$. Below the transition point $1$, the Chern number is robustly $1$. Above the transition point the Chern number is robustly $2$. The quasi-energy gap is shown to close at the transition point in green. The color gradient is shown to signify that even changing parameters such as $\theta,u,v$ will yield the same results provided that the gap is not closed. }
\label{fig:C-vs-alpha12}
\end{figure}
This transition is not unique to the case of $n_1,n_2=1,2$. We expect this behavior for any choice of $\Omega_1,\Omega_2$ provided that the system remains gapped: the amplitude mixture controls the critical point, and the frequencies control the topology. This amplitude modulation of a two-frequency drive should be experimentally feasible, as it relies neither on fabrication of the lattice nor on a quantum well thickness \cite{Bernevig_2006}. The formalism presented here thus offers an easily tunable topological phase transition, based simply on the control of the driving lasers. However, it is also known \cite{Minguzzi_2022} that the relative phase of the two-frequency drive can change the symmetry of the system, which is being examined in ongoing calculations.
\subsubsection{Edge States}
The topology present in this drive case has an observable physical effect, manifested in the emergence of edge modes along the edges of the effective $2$D sample. In Fig. \ref{fig:n-1-2} we plot the zero-energy eigenstates upon opening both the Floquet and the real-space boundaries. The states are seen in Fig. \ref{fig:n-1-2} to occupy the real-space boundary, with amplitude diminishing to $0$ in the center of the sample. This plot is constructed by taking $\psi^{\dagger}\psi_{i,j}$, for $i,j$ the real- and frequency-space elements, respectively, for the eigenstates at $0$ quasi-energy.
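For comparison, the same kind of edge localization already appears in the static, undriven SSH chain once the inter-cell hopping dominates. A minimal sketch, with illustrative parameters of our choosing:

```python
import numpy as np

def ssh_zero_modes(n_cells=20, v=0.4, w=1.0):
    """Open SSH chain: intra-cell hopping v, inter-cell hopping w (topological for v < w).

    Returns the eigenvalues and the summed |psi|^2 of the two states
    closest to zero energy.
    """
    N = 2 * n_cells
    H = np.zeros((N, N))
    for n in range(n_cells):
        H[2*n, 2*n+1] = H[2*n+1, 2*n] = v              # A-B within cell n
        if n + 1 < n_cells:
            H[2*n+1, 2*n+2] = H[2*n+2, 2*n+1] = w      # B-A between cells
    evals, evecs = np.linalg.eigh(H)
    order = np.argsort(np.abs(evals))
    density = (np.abs(evecs[:, order[:2]]) ** 2).sum(axis=1)
    return evals, density
```

The two near-zero modes carry almost all of their weight on the first and last sites, decaying exponentially into the bulk, the $1$D analogue of the localization visible in Fig. \ref{fig:n-1-2}.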
\begin{figure}[t]
\centering
\includegraphics[scale=0.29]{n-1-2_4.png}
\caption[n-1-2]{Edge modes occurring along the real space edge for the case of $n_1,n_2=1,2$ driving frequency ratio. The contour is computed for $\psi^{\dagger}\psi_{i,j}$, for $i,j$ the real and frequency space, respectively. The states plotted are the $0$-quasi-energy states.}
\label{fig:n-1-2}
\end{figure}
The real-space treatment easily allows us to plot the zero-energy states for a variety of boundary conditions. For example, one may reinstate translational invariance along the SSH chain, break invariance of the Fourier manifold, and recompute the amplitude of Fig. \ref{fig:n-1-2}. In this case, we find that the edge states exist along the "Fourier edge" only. Unlike SSH chain edge states, which may be observable in current measurements, Fourier edges are only an artifact of the theoretical Floquet mapping and the subsequent truncation scheme, so we neglect this result in the current paper.
\subsection{Multi-Frequency Drive: Case 10,11}
\subsubsection{Exotic Topology}
Here we examine the effects of the second drive in the case of frequency ratio $n_1,n_2=10,11$. The band structure and the Chern marker for the case of periodic boundaries in each direction (hence adiabatic driving) are computed and plotted in Fig. \ref{fig:d-10-11}. The system possesses a gap for $n_1,n_2=10,11$, here computed at $\alpha=2$. The Chern marker is less stable over the sample bulk than in the $n_1,n_2=1,2$ case, owing to interference between the two close-by frequencies. We again tune $\alpha$ through the critical point, as plotted in Fig. \ref{fig:C-vs-alpha1011}. The system displays the same phase transition behavior as in the $n_1,n_2=1,2$ case. The Chern number is found to be $C=10$ for $\alpha<1$ and $C=11$ for $\alpha>1$.
\begin{figure}[t]
\centering
\includegraphics[scale=0.35]{d-10-11_unscaled.png}
\caption[Long Range Floquet Hopping]{The band and Chern marker interior for the case $n_1=10,n_2=11$. The band is shown on the left, and the system is found to be gapped in quasi-energy, meaning the topology can be computed. The right plot shows the interior of the Chern marker, with a bulk average value of $11$. Note that the $n_1,n_2=10,11$ case shows more interference in the LCM than the $1,2$ case.}
\label{fig:d-10-11}
\end{figure}
Our motivation in studying the multi-frequency driving was to model a beat-frequency Hamiltonian. This is based on the hope that the disorder induced by the beat frequency along the Floquet direction would be a temporal analog of twisted bilayer graphene, in which maximum disorder occurs for certain "magic" angles. The requirement, then, is that the two drives possess frequencies which are close in value, and the size of the matrix must accommodate the resulting long-range couplings. The case of $n_1,n_2=10,11$ is explored using our current Fortran code, but larger frequency ratios like $100,101$ demand a much larger matrix. Such a larger case would be ideal to consider for the beat-frequency analog.
\begin{figure}[t]
\centering
\includegraphics[scale=0.42]{C-gap-vs-alpha-10-11-final.png}
\caption[C vs Alpha]{The Chern number (red) plotted against the $\Omega_2$ driving amplitude $\alpha$; the amplitude of the $\Omega_1$ drive is fixed at $1$. Below the transition point $\alpha=1$ the Chern number is robustly $10$, and above it the Chern number is robustly $11$. The quasi-energy gap closes at the transition point $\alpha=1$. The color gradient signifies that changing parameters such as $\theta,u,v$ yields the same result provided the gap does not close.}
\label{fig:C-vs-alpha1011}
\end{figure}
\subsubsection{Edge States}
Since the system still possesses topological order, we can plot edge modes by opening the sample boundaries. We plot the $0$-energy states in the case of broken periodic boundaries in both directions. This result is shown in Fig. \ref{fig:n-10-11}. Again, as in the $n_1,n_2=1,2$ case, we find states existing along the real-space edge only.
\begin{figure}[t]
\centering
\includegraphics[scale=0.29]{n-10-11_2.png}
\caption[n-10-11]{Edge modes occurring along the real space edge for the case of $n_1,n_2=10,11$ driving frequency ratio. The contour is computed for $\psi^{\dagger}\psi_{i,j}$, for $i,j$ the real and frequency space, respectively. The states plotted are the $0$-quasi-energy states.}
\label{fig:n-10-11}
\end{figure}
As noted in the case of $n_1,n_2=1,2$, we may also break translational invariance along the Fourier space. In this case the amplitude is non-zero only for certain modes along the Fourier manifold, as opposed to the modes at the very "edge".
The presence of these mode-localized states could be an artifact of the truncation scheme, so their effect is best assessed by defining and computing a physical observable in the $1$D time-dependent model.
\section{\label{sec:Discussion}Discussion}
\textit{Multi-Tonal Driving - Integer Multiples} There are, broadly speaking, three contrasting multi-tonal driving cases for which the Floquet formalism may be constructed. The simplest is when the frequencies are not only commensurate, but related via an integer multiple; examples for $\Omega_2/\Omega_1=n_1/n_2$ are $n_1,n_2=2,4$ or $n_1,n_2=1,3$. In this case, one frequency may be determined in terms of the other. The relative phase of the drives plays a critical role, as explored in recent works \cite{PhysRevX.11.011057,Sandholzer_2022,Minguzzi_2022}. The formalism of Eq. \ref{ratio} is unnecessary for this driving protocol. In fact, constructing the formalism in terms of the base period may yield incorrect computation of observables. For example, treating $n_1,n_2=2,4$ with a base frequency $n=1$ will encode an extra integer lattice spacing, yielding spurious twisting in the Berry curvature from $\vec{k}\rightarrow \vec{k}+d\vec{k}$: the computed transition would be $C=2\rightarrow C=4$, when it is physically $C=1\rightarrow C=2$.
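To make the base-period bookkeeping above concrete, the following is a minimal sketch (the function name and return values are our own illustration, not code from this work): both tones $n_1\Omega$ and $n_2\Omega$ are integer multiples of $g\,\Omega$ with $g=\gcd(n_1,n_2)$, so the drive's fundamental frequency is $g\,\Omega$, and building the Floquet lattice on any smaller base frequency over-counts modes.

```python
from math import gcd, pi

def fundamental_drive(n1, n2, omega):
    """Both tones n1*omega and n2*omega are integer multiples of
    g*omega with g = gcd(n1, n2), so the drive's fundamental
    frequency is g*omega and its true period is 2*pi/(g*omega).
    Using a smaller base frequency than this over-counts Fourier
    modes (cf. the n1,n2 = 2,4 example in the text)."""
    g = gcd(n1, n2)
    base = g * omega
    return base, 2 * pi / base

# For n1,n2 = 2,4 the base frequency is 2*omega, not omega:
base, period = fundamental_drive(2, 4, 1.0)   # base == 2.0
```

For $n_1,n_2=10,11$ the tones share no common factor, so the fundamental frequency is simply $\Omega$ itself.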
\textit{Multi-Tonal Driving - Commensurate and Incommensurate} On the other hand, there is another sort of commensurate two-tone drive whose integers cannot be related via an integer multiple, such as $n_1,n_2=4,5$ or $n_1,n_2=10,11$. The Floquet theorem and topology of this case may be studied with the formalism of \cite{PhysRevB.102.235143}, and the topology may be computed using the frequency-space Chern marker presented here. This treatment follows from the fact that the time degree of freedom should have a one-to-one correspondence with the Fourier transform to the extended space. The Floquet lattice obtains new couplings computed from the off-diagonal matrix elements of Eq. \ref{mmode_k}. This case is distinguished from incommensurate frequency driving \cite{Martin_2017}, in which each frequency yields an additional Fourier manifold; to treat the $1$D SSH chain in that case would require a $3$D computation, along with two truncation schemes of the Fourier space. It could be advantageous to approximate certain incommensurate ratios with nearby commensurate ones and carry out the simpler calculations presented here.
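To illustrate how two commensurate tones populate two distinct off-diagonals of the extended-space operator, the following sketch builds a truncated Sambe-space quasi-energy matrix for a toy two-level system. The Hamiltonian, function names, and parameter values here are our own illustrative choices under the general single-manifold construction, not the SSH model or code of this work; only the notation $n_1,n_2,\alpha$ mirrors the text.

```python
import numpy as np

# Pauli matrices for a toy two-level system (an illustrative stand-in,
# not the SSH Hamiltonian of the main text)
SX = np.array([[0, 1], [1, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def quasi_energy_operator(n1=1, n2=2, alpha=1.0, omega=1.0, n_modes=8):
    """Truncated extended-space (Sambe) operator for the two-tone drive
    H(t) = SZ + cos(n1*omega*t)*SX + alpha*cos(n2*omega*t)*SX.
    Each cosine contributes V/2 to the blocks coupling Fourier modes
    m and m +/- n, so the two tones populate two distinct off-diagonals
    of the Floquet lattice, at block distances n1 and n2."""
    dim = 2 * (2 * n_modes + 1)
    K = np.zeros((dim, dim), dtype=complex)
    couplings = ((n1, 0.5 * SX), (n2, 0.5 * alpha * SX))
    for i, m in enumerate(range(-n_modes, n_modes + 1)):
        # diagonal block: static Hamiltonian shifted by m*omega
        K[2*i:2*i+2, 2*i:2*i+2] = SZ + m * omega * np.eye(2)
        for n, V in couplings:
            j = i + n
            if j <= 2 * n_modes:
                K[2*i:2*i+2, 2*j:2*j+2] += V
                K[2*j:2*j+2, 2*i:2*i+2] += V.conj().T
    return K

K = quasi_energy_operator()
quasi = np.linalg.eigvalsh(K)
# fold the spectrum into the first Floquet zone [-omega/2, omega/2)
folded = (quasi + 0.5) % 1.0 - 0.5
```

For $n_1,n_2=1,2$ the first tone links neighboring Fourier modes while the second links next-nearest ones; this next-nearest hopping is the "new coupling" in the Floquet lattice referred to above, and close-by ratios such as $10,11$ simply push these couplings further off the diagonal.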
\textit{Floquet Edge States} Upon opening the "Fourier boundary" we find the zero quasi-energy modes localized to the truncation edge in the case of $n_1,n_2=1,2$, and localized to certain frequencies in the case of $n_1,n_2=10,11$. Since the Fourier boundary does not physically exist, being simply an artifact of the theoretical mapping, these results are neglected as byproducts of the frequency truncation. The distinction between the two cases (one edge-localized and one mode-localized) could arise from the fact that the matrix size stays the same while, for higher frequencies, the kinetic terms populate even further off-diagonal elements. In other words, increasing the number of Floquet modes may cause the mode-localized states to localize to the edges. While the Floquet edge is nonphysical, it is possible to break periodic boundary conditions along the Fourier manifold with a small Floquet-Stark field. The consequences are best observed by computing observables such as the current in the original time-dependent representation.
\section{\label{sec:Conclusions}Conclusions}
In this paper, we have shown that commensurate multi-frequency driving may be modeled using the Fourier mapping of Floquet theory to gain practical levels of engineering control. The frequency ratio may be chosen to create nontrivial couplings, which allows the Floquet formalism to mimic nontrivial static systems. We have shown that a second drive with a commensurate frequency can be included as an extra hopping term in the Fourier manifold. This approach necessitates only one Fourier-manifold extension, meaning that commensurate driving can be studied easily in $1$- and $2$-dimensional systems. Additionally, only one truncation scheme is needed in the extended space, as opposed to two or more for incommensurate driving.
To explore the topological properties, we have employed the real-space Chern marker instead of the Berry curvature in the $k$-space representation, which allows us to study the adiabatic and intermediate frequency regimes within the same framework. As such, the model could incorporate disorder and fields, more closely approximating a real-world system. This treatment yields direct examination of the local fluctuations in topology resulting from interference, and of the edge states. Moreover, our work suggests a new method for controlling the topological phase of an SSH sample, and an appropriate choice of the frequency ratio allows for engineering of Chern numbers $C>1$. Consequently, the amplitude proportion of the two drives may be tuned to induce topologically distinct states, meaning that these systems can be engineered to sweep through a topological phase transition. Since these topological phases are induced via the amplitude, this model hosts an experimentally appealing transition that does not rely on a more complicated switching mechanism such as that in quantum wells. We have further demonstrated a computational technique to view edge states in the insulating phase, providing additional confirmation that these systems are topological insulators.
\section{Acknowledgement}
This work was supported by the Air Force Office of Scientific Research
Multi-Disciplinary Research Initiative (MURI) entitled, “Cross-disciplinary
Electronic-ionic Research Enabling Biologically Realistic Autonomous Learning
(CEREBRAL)” under Award No. FA9550-18-1-0024 administered by Dr. Ali Sayir.
S.W.O. and W.-C.L. are grateful for the support of the Summer Faculty Fellowship Program (SFFP) sponsored by the Air Force Research Laboratory (AFRL) while this paper was being finalized.
\section{Introduction}
In this article, we examine the nonexistence of classical stable solutions of the weighted elliptic system given by
\begin{equation}\label{eq:1.1}
\begin{cases}
-\Delta u=|x|^{\beta} v^{\theta},\\
-\Delta v=|x|^{\alpha} |u|^{p-1}u,
\end{cases}\quad \mbox{in}\;\ \Omega,
\end{equation}where $\Omega$ is a subset of $\mathbb{R}^N$, $N\ge 5$, $\alpha>-4$, $0 \le \beta \le \dfrac{N-4}{2}$ and $p \theta>1$.\vskip .06in
The idea of using the Morse index of a solution for a semilinear elliptic equation was first explored by Bahri and Lions \cite{Bahri}
to get further qualitative properties of the solution. Recently, along this line of research, Dancer
\cite{Dancer-TAMS,Dancer-IUMJ,Dancer-JRAM} introduced finite Morse index solutions
and made significant progress on elliptic equations. Let us note that a solution $u$ is stable if and only if its Morse index is equal to zero.
In 2007, Farina considered the Lane-Emden equation
\begin{equation}\label{eq:1.2}
- \Delta u =|u|^{p-1}u,
\end{equation}on bounded and unbounded domains of $\Omega \subset \mathbb{R}^N$, with $N \ge 2$ and $p >1$. Based on a delicate application of
the classical Moser's iteration, he gave the complete
classification of finite Morse index solutions (positive or sign-changing) in his seminal paper \cite{Farina}. Hereafter, many experts utilized the Moser's iterative method to discuss the stable and finite Morse index solutions of the harmonic and fourth-order
elliptic equation and obtained many excellent results. We refer to \cite{Dancer-1,Wang-Ye,Wei-Ye,Wei-Xu-Yang}
and the reference therein.\vskip .06in
However, the classical Moser's iterative technique does not completely
classify finite Morse index solutions of the biharmonic equation
\begin{equation*}
\Delta ^2 u=|u|^{p-1}u,\quad \mbox{in} \;\ \Omega \subset \mathbb{R}^N.
\end{equation*}
To solve the problem, D\'{a}vila et al. \cite{Davila} have recently
derived a monotonicity formula for solutions and given a complete classification of stable and finite Morse index solutions for the biharmonic equation by applying the Pohozaev identity and the monotonicity formula. We note that many outstanding papers \cite{Davila-JFA,Davila,Pacard-1,Pacard-2,Wang} utilize a monotonicity formula to study the partial regularity of stationary weak solutions, as well as stable and finite Morse index solutions, for second-order and fourth-order equations.\vskip .06in
On the other hand, some experts have been interested in the Lane-Emden system and obtained some excellent results \cite{Cowan-Nonlinearity,Esposito-PRSE,Fazly-ANS,Fazly-Ghoussoub}. In 2013, applying an iterative method and the pointwise estimate in \cite{Souplet}, Cowan proved the following result.\vskip .06in
\noindent {\bf Theorem A.} (\cite[Theorem 2]{Cowan-Nonlinearity}) {\it Suppose that $p>\theta=1$, $\alpha=\beta=0$ and
\begin{equation*}
N<2+\dfrac{4(p+1)}{p-1}\left (\sqrt{\dfrac{2p}{p+1}}+\sqrt{\dfrac{2p}{p+1}-\sqrt{\dfrac{2p}{p+1}}}\right ).
\end{equation*}Then there is no positive stable solution of (\ref{eq:1.1}).}\vskip .06in
Adopting the same method as Cowan \cite{Cowan-Nonlinearity}, Fazly obtained the following result.\vskip .06in
\noindent {\bf Theorem B.} (\cite[Theorem 2.4]{Fazly-ANS}) {\it Suppose that $(u,v)$ is a $C^2(\mathbb{R}^N)$ nonnegative entire semi-stable solution of
\begin{equation*}
\begin{cases}
-\Delta u=\rho (1+|x|^2)^{\frac{\alpha}{2}}v,\\
-\Delta v=\varrho (1+|x|^2)^{\frac{\alpha}{2}}u^p,
\end{cases}
\end{equation*}
with $\rho, \varrho >0$ in the dimension
\begin{equation*}
N<8+3\alpha+\dfrac{8+4\alpha}{p-1}.
\end{equation*}Then, $(u,v)$ is the trivial solution.}\vskip .06in
We observe that the dimension $N<8+3\alpha+\dfrac{8+4\alpha}{p-1}$ in \cite[Theorem 2.4]{Fazly-ANS}
is already larger than the {\it critical hyperbola}, i.e., $N=4+\alpha+\dfrac{8+4\alpha}{p-1}$.
Recently, Fazly and Ghoussoub
\cite[Theorem 4]{Fazly-Ghoussoub} have considered the nonexistence of positive stable solutions for the weighted
elliptic system (\ref{eq:1.1}), which the dimension satisfies
\begin{equation*}
N<2+2\left (\dfrac{p(\beta+2)+\alpha+2}{p \theta-1} \right ) \left (\sqrt{\dfrac{p\theta (\theta+1)}{p+1}}+\sqrt{\dfrac{p \theta (\theta+1)}{p+1}-\sqrt{\dfrac{p\theta (\theta+1)}{p+1}}}\right ).
\end{equation*}Clearly, if $\theta=1$ and $\alpha=\beta=0$ in (\ref{eq:1.1}), then their result is the same as Theorem A.\vskip .1in
Let us briefly recall the fact that Liouvile-type theorem of solutions for various Lane-Emden equations and systems is interesting and challenging for decades.\vskip .06in
First, Pohozaev identity shows that the Lane-Emden equation with the Dirichlet boundary condition has no positive
solution on a bounded star-shaped domain $\Omega \subset \mathbb{R}^N$, whenever $p \ge \dfrac{N+2}{N-2}$. On the other hand, Gidas and Spruck obtained the optimal Liouville-type theorems in the celebrated paper \cite{Gidas}, that is, the Lane-Emden equation (\ref{eq:1.2}) has no positive solution if and only if $1<p <\dfrac{N+2}{N-2}(=+\infty,$ if $N\le 2)$. In 1991, Bidaut-V\'{e}ron and V\'{e}ron \cite{Bidaut} obtained the asymptotic behavior of positive solution by utilizing the Bochner-Lichnerowicz-Weitzenb\"{o}ck formula in $\mathbb{R}^N$.\vskip .06in
In the case of the Lane-Emden system (\ref{eq:1.1}) with $\alpha=\beta=0$, Pucci and Serrin \cite{Pucci} proved that if $\dfrac{N}{p+1}+\dfrac{N}{\theta+1} \le N-2$ and $\Omega$ is a bounded star-shaped domain of $\mathbb{R}^N$, then there is no positive solution of (\ref{eq:1.1}) with Dirichlet boundary conditions. Note that the curve $\dfrac{N}{p+1}+\dfrac{N}{\theta+1}=N-2$ is the {\it critical Sobolev hyperbola}. As for the Lane-Emden equation, the following conjecture is interesting and challenging.\vskip .06in
\noindent {\bf Conjecture} ({\it Lane-Emden Conjecture}) {\it Suppose $(p,\theta)$ is under the critical Sobolev hyperbola, i.e.,
\begin{equation*}
\dfrac{N}{p+1}+\dfrac{N}{\theta+1}>N-2.
\end{equation*}Then there is no positive solution for the elliptic system (\ref{eq:1.1}) with $\alpha=\beta=0$.}\vskip .06in
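Let us note that in the symmetric case $p=\theta$ the critical Sobolev hyperbola reduces to the scalar critical exponent: the curve becomes
\begin{equation*}
\dfrac{2N}{p+1}=N-2,\quad \mbox{i.e.,}\;\ p=\dfrac{N+2}{N-2},
\end{equation*}recovering the threshold in the Gidas-Spruck theorem above.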
The case of radial solutions was solved by Mitidieri \cite{Mitidieri} in any dimension, and positive radial solutions on and above the critical Sobolev hyperbola were constructed in \cite{Mitidieri,Serrin-Zou-98}, which gives the optimal Liouville-type theorem for radial solutions. The {\it conjecture} (for non-radial solutions) seems difficult. In the dimension $N=3$, Serrin and Zou \cite{Serrin-Zou-96} proved the {\it conjecture} for polynomially bounded solutions, and this boundedness assumption was removed in \cite{Polacik}.
In 2009, Souplet \cite{Souplet} solved the {\it conjecture} in $N=4$ or a new
region for $N \ge 5$.
However, the weighted Lane-Emden system (\ref{eq:1.1}) is even less understood. For example,
the paper \cite{Phan} proved the {\it conjecture}
for the equation $-\Delta u=|x|^{\alpha} u^p$ in $N=3$; in 2012, Phan \cite{Phan-ADE}
solved the {\it conjecture} for the system (\ref{eq:1.1}) in two cases: {\it case 1}. $N=3$ and bounded solutions; {\it case 2}. $N=3$ or $4$ and $\alpha, \beta \le 0$.\vskip .1in
Here and in the following, we always assume that $N\ge 5$, $\alpha>-4$, $0\le \beta \le \dfrac{N-4}{2}$, $p>1$ and $\theta=1$.
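Let us note that, with $\theta=1$, the first equation of (\ref{eq:1.1}) gives $v=-|x|^{-\beta}\Delta u$, so the system collapses to the single weighted fourth-order equation
\begin{equation*}
\Delta \left (\dfrac{\Delta u}{|x|^{\beta}} \right )=|x|^{\alpha}|u|^{p-1}u \quad \mbox{in}\;\ \Omega,
\end{equation*}which is the Euler-Lagrange equation of the energy functional $\mathcal{E}$ introduced below.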
Motivated by the ideas in \cite{Davila,Du,Hu}, we will construct a monotonicity formula for solutions in the dimensions $4+\beta+\dfrac{8+2\alpha+2\beta}{p-1}<N<N_{\alpha,\beta}(p)$ (for $N_{\alpha,\beta}(p)$ see (\ref{eq:3.1}) below) and obtain various integral estimates, and then use these results to study Liouville-type theorems for stable solutions of the weighted elliptic system (\ref{eq:1.1}).
\vskip .1in
\begin{theorem}\label{eq:t1.1}
For any $4+\beta+\dfrac{8+2\alpha+2\beta}{p-1}<N<N_{\alpha,\beta}(p)$, assume that $u\in
W^{2,2}_{loc}(\mathbb{R}^N\backslash \{0\})$ is a homogeneous, stable solution of (\ref{eq:1.1}), $|x|^{\alpha}|u|^{p+1} \in L_{loc}^1(\mathbb{R}^N
\backslash \{0\})$ and $|x|^{-\beta}|\Delta u|^2 \in L_{loc}^1(\mathbb{R}^N\backslash \{0\})$. Then
$u \equiv 0$.
\end{theorem}
Applying Theorem \ref{eq:t1.1} and the properties of monotonicity formula (\ref{eq:2.14}), we get
\begin{theorem}\label{eq:t1.2}
If $u\in C^4(\mathbb{R}^N)$ is a stable solution of (\ref{eq:1.1}) in $\mathbb{R}^N$ and $5 \le N \le N_{\alpha,\beta}(p)$, then $u \equiv 0$.
\end{theorem}
\begin{remark}
\begin{itemize}
\item [\rm (1)] We apply the Pohozaev identity to construct a monotonicity formula. From the proof of Theorem \ref{eq:t2.1}, one can
observe that the Pohozaev identity is equivalent to a certain derivative form of the monotonicity formula.
\item [\rm (2)] Let us note that for the dimensions $4+\beta+\dfrac{8+2\alpha+2\beta}{p-1}<N<N_{\alpha,\beta}(p)$, we adopt a new
method combining the monotonicity formula with blowing down sequences to investigate the Liouville-type theorem. In addition,
a difficulty stems from the fact that the weights $|x|^{\alpha}$ and $|x|^{\beta}$ in (\ref{eq:1.1}) lead to a singularity. For this reason, we use a more delicate approach to derive
improved integral estimates.
\item[\rm (3)] From the computation of $N_{\alpha,\beta}(p)$ (in Section 3), we find the following relation:
\begin{equation*}
\begin{cases}
N_{0,0}(p)>2+\dfrac{4(p+1)}{p-1}\left (\sqrt{\dfrac{2p}{p+1}}+\sqrt{\dfrac{2p}{p+1}-\sqrt{\dfrac{2p}{p+1}}}\right ),& \mbox{if}\;\ \alpha =\beta =0,\\
N_{\alpha,\alpha}(p)>8+3\alpha+\dfrac{8+4\alpha}{p-1},& \mbox{if}\;\ \alpha=\beta.
\end{cases}
\end{equation*}
Therefore, in contrast with {\bf Theorem A} and {\bf Theorem B}, we obtain Liouville-type theorem in the higher dimension.
\end{itemize}
\end{remark}
Next, we list some definitions and notations. Let $\Omega$ be a subset of $\mathbb{R}^N$ and $f,g \in C^1\left (\mathbb{R}^{2}\times \Omega \right )$. Following Montenegro \cite{Montenegro}, we consider the general elliptic system
\begin{equation*}
(S_{f,g})\;
\begin{cases}
-\Delta u=f(u,v,x),\\
-\Delta v=g(u,v,x),
\end{cases}\quad x \in \Omega.
\end{equation*}A solution $(u,v)\in C^2(\Omega)\times C^2(\Omega)$ of $(S_{f,g})$ is called {\it stable}, if the eigenvalue problem
\begin{equation*}
(E_{f,g})
\begin{cases}
-\Delta \phi=f_u(u,v,x)\phi+f_v(u,v,x)\psi+\eta \phi,\\
-\Delta \psi =g_u(u,v,x)\phi+g_v(u,v,x)\psi+\eta \psi,
\end{cases}
\end{equation*} has a first positive eigenvalue $\eta>0$, with a corresponding positive smooth eigenfunction pair $(\phi,\psi)$.
A solution $(u,v)$ is said to be {\it semi-stable} if the first eigenvalue $\eta$ is nonnegative.\vskip .06in
Inspired by the above definition, we give the integration-type definition of stability.
\begin{definition}
We recall that a critical point $u \in C^4(\Omega)$ of the energy function
\begin{equation*}
\mathcal{E}(u)=\int_{\Omega} \left [\dfrac{1}{2}\dfrac{|\Delta u|^2}{|x|^{\beta}} -\dfrac{1}{p+1} |x|^{\alpha} |u|^{p+1} \right ]dx
\end{equation*}is said to be a stable solution of (\ref{eq:1.1}), if, for any $\zeta \in C_0^2 (\Omega)$, we have
\begin{equation*}
p\int_{\Omega} |x|^{\alpha}|u|^{p-1}\zeta^2 dx \le \int_{\Omega} \dfrac{|\Delta \zeta|^2}{|x|^{\beta}} dx.
\end{equation*}
\end{definition}
This definition is well posed and consistent with the notion above. Indeed, if $(u,v)$ is a semi-stable solution, then there exist $\eta \ge 0$ and a positive smooth eigenfunction pair $(\phi,\psi)$ such that
\begin{equation*}
\begin{cases}
-\Delta \phi = |x|^{\beta} \psi+\eta \phi,\\
-\Delta \psi = p|x|^{\alpha}|u|^{p-1}\phi+\eta \psi.
\end{cases}
\end{equation*}Multiply the second equation by $\dfrac{\zeta^2}{\phi}$ with $\zeta \in C_0^2(\Omega)$ to get
\begin{align}\label{eq:1.3}
& p\int_{\Omega} |x|^{\alpha}|u|^{p-1}\zeta^2 dx \le \int_{\Omega} -\Delta \psi \dfrac{\zeta^2}{\phi} dx
=\int_{\Omega}-\psi\Delta \left (\dfrac{\zeta^2}{\phi} \right ) dx \nonumber \\[0.1cm]
& = \int_{\Omega}\dfrac{1}{|x|^{\beta}}\left [1-\dfrac{\eta \phi}{|x|^{\beta}\psi+\eta \phi} \right ] \Delta \phi\Delta
\left (\dfrac{\zeta^2}{\phi}\right )dx.
\end{align}A simple calculation leads to
\begin{equation*}
\Delta \left (\frac{\zeta^2}{\phi}\right )=2\phi^{-1}|\nabla \zeta|^2+2\zeta\phi^{-1}\Delta \zeta-4\zeta\phi^{-2}\nabla \zeta\cdot \nabla \phi
+2\zeta^2\phi^{-3} |\nabla \phi|^2-\zeta^2\phi^{-2}\Delta \phi.
\end{equation*}Then we find
\begin{align*}
\Delta \phi \Delta \left (\dfrac{\zeta^2}{\phi} \right ) & -|\Delta \zeta|^2=2\zeta \phi^{-1}\Delta \zeta \Delta \phi
-\zeta^2\phi^{-2}|\Delta \phi|^2 -|\Delta \zeta|^2 \\
& +2 \phi^{-1}\Delta \phi [ |\nabla \zeta|^2-2\zeta\phi^{-1} \nabla \zeta \cdot \nabla \phi +\zeta^2 \phi^{-2} |\nabla \phi|^2 ] \\
= &- \left [(\zeta \phi^{-1}\Delta \phi -\Delta \zeta)^2 +2(\phi^{-1} |x|^{\beta}\psi+\eta) (\nabla \zeta- \zeta \phi^{-1} \nabla \phi)^2\right ] \\
\le & 0,
\end{align*}implies
\begin{equation*}
\int_{\Omega} \dfrac{\Delta \phi}{|x|^{\beta}} \Delta \left (\dfrac{\zeta^2}{\phi} \right ) dx
\le \int_{\Omega} \dfrac{|\Delta \zeta|^2}{|x|^{\beta}} dx.
\end{equation*}Therefore, combining the above inequality with (\ref{eq:1.3}), we obtain
\begin{equation*}
p\int_{\Omega} |x|^{\alpha} |u|^{p-1}\zeta^2 dx \le \int_{\Omega} \dfrac{|\Delta \zeta|^2}{|x|^{\beta}} dx.
\end{equation*}
\begin{remark}
Since $\phi$ is a smooth function, $\zeta \in C_0^2(\Omega)$ and $\beta \le \dfrac{N-4}{2}$, the integral
$\displaystyle \int_{\Omega} \dfrac{1}{|x|^{\beta}}dx $ is well defined.
\end{remark}\vskip .06in
\noindent {\bf Notations.} Throughout this paper, $B_r(x)$ denotes the open ball of radius $r$ centered at $x$.
If $x=0$, we simply denote $B_r(0)$ by $B_r$. $C$ denotes a generic
positive constant whose value may change from line to line.\vskip .06in
The rest of the paper is organized as follows. In Section 2, we derive various
integral estimates and construct a monotonicity formula. In Section 3, we prove the Liouville-type theorem for homogeneous, stable solutions in the dimensions $4+\beta+\dfrac{8+2\alpha+2\beta}{p-1}<N< N_{\alpha,\beta}(p)$. Finally, in Section 4, we study the qualitative properties of the monotonicity function
$\mathcal{M}$ and prove Theorem \ref{eq:t1.2}, relying on a {\it Pohozaev-type identity} and the {\it monotonicity formula} together with {\it blowing down} sequences.
\vskip .2in
\section{Some estimates and a monotonicity formula}
\vskip .1in
\begin{lemma}(\cite[Lemma 2.2]{Wei-Ye})\label{eq:l2.1}
For any $\zeta \in C^4(\mathbb{R}^N)$ and $\eta \in C^4(\mathbb{R}^N)$, the identity holds
\begin{equation*}
\Delta \zeta \Delta \left (\zeta\eta^2 \right )=[\Delta (\zeta \eta)]^2-4 (\nabla \zeta \cdot \nabla \eta)^2-\zeta^2 |\Delta \eta|^2
+2\zeta \Delta \zeta |\nabla \eta|^2-4\zeta \Delta \eta \nabla \zeta \cdot \nabla \eta.
\end{equation*}
\end{lemma}
\begin{lemma}\label{eq:l2.2}
For any $\zeta \in C^4(\mathbb{R}^N)$ and $\eta \in C_0^4 (\mathbb{R}^N)$, the following identities hold
\begin{align}\label{eq:2.1}
\int_{\mathbb{R}^N} \Delta \left (\dfrac{\Delta \zeta}{|x|^{\beta}} \right ) & \zeta \eta^2 dx = \int_{\mathbb{R}^N} \dfrac{[\Delta(\zeta \eta)]^2}
{|x|^{\beta}}+\int_{\mathbb{R}^N}\dfrac{1}{|x|^{\beta}}\Big [-4(\nabla \zeta \cdot \nabla \eta)^2+2\zeta\Delta \zeta |\nabla \eta|^2 \Big ] dx \nonumber\\[0.2cm]
& +\int_{\mathbb{R}^N} \dfrac{\zeta^2}{|x|^{\beta}}\Big [ 2\nabla (\Delta \eta) \cdot \nabla \eta+|\Delta \eta|^2
-2\beta|x|^{-2}\Delta \eta(x\cdot \nabla \eta) \Big ] dx,
\end{align}and
\begin{align}\label{eq:2.2}
2\int_{\mathbb{R}^N} & \dfrac{|\nabla \zeta|^2|\nabla \eta|^2}{|x|^{\beta}} dx=\int_{\mathbb{R}^N}\left [ \dfrac{2}{|x|^{\beta}}
\zeta (-\Delta \zeta)|\nabla \eta|^2+\dfrac{\zeta^2}{|x|^{\beta}}\Delta
\left (|\nabla \eta|^2\right ) \right ]dx \nonumber \\[0.2cm]
&+\int_{\mathbb{R}^N} \dfrac{\zeta^2}{|x|^{\beta+2}}\Big [\beta(\beta+2-N) |\nabla \eta|^2-2\beta \left (x \cdot \nabla \left (|\nabla \eta|^2 \right )\right ) \Big ]dx.
\end{align}
\end{lemma}
\begin{proof}By the divergence theorem and integration by parts, we get
\begin{align*}
-4\int_{\mathbb{R}^N} & \dfrac{1}{|x|^{\beta}} \zeta \Delta \eta \nabla \zeta \cdot \nabla \eta dx =-2
\int_{\mathbb{R}^N} \dfrac{1}{|x|^{\beta}} \Delta \eta \nabla \zeta^2 \cdot \nabla \eta dx\\[0.18cm]
& = 2\int_{\mathbb{R}^N} \dfrac{\zeta^2}{|x|^{\beta}} \Big [\nabla (\Delta \eta)\cdot \nabla \eta +|\Delta \eta|^2 -\beta |x|^{-2}
\Delta \eta (x \cdot \nabla \eta)\Big ] dx.
\end{align*}Combining with Lemma \ref{eq:l2.1}, it implies that the identity (\ref{eq:2.1}) holds.
On the other hand, it is easy to see that
\begin{equation*}
\dfrac{1}{2}\Delta (\zeta^2)=\zeta \Delta \zeta +|\nabla \zeta|^2,
\end{equation*}then we obtain
\begin{equation*}
\int_{\mathbb{R}^N}\dfrac{|\nabla \zeta|^2|\nabla \eta|^2}{|x|^{\beta}}dx= \int_{\mathbb{R}^N} \dfrac{\zeta (-\Delta \zeta)|\nabla \eta|^2}{|x|^{\beta}}dx+\dfrac{1}{2}\int_{\mathbb{R}^N}
\zeta^2 \Delta \left ( \dfrac{ |\nabla \eta|^2}{|x|^{\beta}} \right )dx.
\end{equation*}A direct computation yields
\begin{equation*}
\Delta \left ( \dfrac{ |\nabla \eta|^2}{|x|^{\beta}} \right ) =
\dfrac{1}{|x|^{\beta}} \Big [\beta (\beta+2-N)|x|^{-2}|\nabla \eta|^2-2\beta |x|^{-2}(x\cdot \nabla (|\nabla \eta|^2))+\Delta (|\nabla \eta|^2) \Big ].
\end{equation*}Substituting into the above identity, we get the identity (\ref{eq:2.2}).
\end{proof}
\begin{lemma}\label{eq:al2.3}
Let $u \in C^4(\mathbb{R}^N)$ be a stable solution of (\ref{eq:1.1}). Then we find
\begin{align}\label{eq:2.3}
& \int_{B_R(x)}\left (\dfrac{|\Delta u|^2}{|z|^{\beta}} +|z|^{\alpha}|u|^{p+1} \right )dz \nonumber \\[0.1cm]
& \le CR^{-2} \int_{B_{2R}(x)\backslash B_R(x)}\dfrac{|u\Delta u|}{|z|^{\beta}} dz +CR^{-4}\int_{B_{2R}(x)\backslash B_R(x)}\dfrac{u^2}{|z|^{\beta}} dz.
\end{align}Furthermore, for large enough $m$, we obtain that for any $\psi \in C_0^4(\mathbb{R}^N)$ with $0\le \psi \le 1$
\begin{align*}
\int_{\mathbb{R}^N}\left [\dfrac{|\Delta u|^2}{|x|^{\beta}}+|x|^{\alpha}|u|^{p+1}\right ] & \psi^{2m} dx
\le C \int_{\mathbb{R}^N}|x|^{-\frac{2\alpha+\beta p+\beta}{p-1}}\mathfrak{Q}(\psi^m)^{\frac{p+1}{p-1}}dx \\[0.15cm]
& +C \int_{\mathbb{R}^N}|x|^{-\frac{2\alpha+(\beta+2)(p+1)}{p-1}}\mathfrak{R}(\psi^m)^{\frac{p+1}{p-1}}dx,
\end{align*}and
\begin{equation}\label{eq:2.4}
\int_{B_R(x)}\left [\dfrac{|\Delta u|^2}{|z|^{\beta}}+|z|^{\alpha}|u|^{p+1}\right ]\psi^{2m} dz
\le CR^{N-4-\beta-\frac{8+2\alpha+2\beta}{p-1}}.
\end{equation}Here
\begin{eqnarray*}
& \mathfrak{Q}(\psi^m)=|\nabla \psi|^4+\psi^{2(2-m)}\Big [|\nabla (\Delta \psi^m)\cdot \nabla \psi^m |+|\Delta \psi^m|^2+
\left |\Delta |\nabla \psi^m|^2 \right | \Big ],& \\[0.1cm]
& \mathfrak{R}(\psi^m)=\psi^{2(2-m)}
\Big [\left |\Delta \psi^m \right |\left |x \cdot \nabla \psi^m \right |+
\left |\nabla \psi^m \right |^2+\left |x\cdot \nabla
(|\nabla \psi^m|^2)\right | \Big ].&
\end{eqnarray*}
\end{lemma}
\noindent {\it Proof.}
From the definition of a stable solution $u$, taking an arbitrary $\zeta \in C_0^4(\mathbb{R}^N)$,
we obtain
\begin{equation}\label{eq:2.5}
\int_{\mathbb{R}^N}|x|^{\alpha}|u|^{p-1}u\zeta dx=\int_{\mathbb{R}^N} \dfrac{\Delta u}{|x|^{\beta}}\Delta \zeta dx,
\end{equation}and
\begin{equation}\label{eq:2.6}
p\int_{\mathbb{R}^N}|x|^{\alpha}|u|^{p-1}\zeta^2 dx\le \int_{\mathbb{R}^N}\dfrac{|\Delta \zeta|^2}{|x|^{\beta}}dx.
\end{equation}Now, in (\ref{eq:2.5}), we choose $\zeta=u\psi^2$ with $\psi\in C_0^4({\mathbb{R}^N})$, and find
\begin{equation}\label{eq:2.7}
\int_{\mathbb{R}^N}|x|^{\alpha} |u|^{p+1} \psi^2 dx=\int_{\mathbb{R}^N}\dfrac{\Delta u}{|x|^{\beta}}
\Delta (u\psi^2)dx.
\end{equation}We insert the test function $\zeta=u\psi$ into (\ref{eq:2.6}) and get
\begin{equation*}
p \int_{\mathbb{R}^N}|x|^{\alpha}|u|^{p+1} \psi^2 dx\le \int_{\mathbb{R}^N}\dfrac{[\Delta (u\psi)]^2}{|x|^{\beta}} dx.
\end{equation*}Putting the above inequality and (\ref{eq:2.7}) back into (\ref{eq:2.1}) yields
\begin{align*}
(p-1)\int_{\mathbb{R}^N}& |x|^{\alpha}|u|^{p+1}\psi^2 dx \le
\int_{\mathbb{R}^N}\dfrac{1}{|x|^{\beta}} \Big [4(\nabla u\cdot \nabla \psi)^2-2u\Delta u|\nabla \psi|^2\Big ]dx\\[0.08cm]
& +\int_{\mathbb{R}^N}\dfrac{u^2}{|x|^{\beta}} \Big [2|\nabla (\Delta \psi)\cdot \nabla \psi|+|\Delta \psi|^2
+2\beta |x|^{-2}\Delta \psi (x\cdot \nabla \psi) \Big] dx.
\end{align*}Combining with the identity (\ref{eq:2.2}), we have
\begin{align}\label{eq:2.8}
\int_{\mathbb{R}^N}|x|^{\alpha} & |u|^{p+1} \psi^2 dx \le C \int_{\mathbb{R}^N} \dfrac{|u\Delta u|}{|x|^{\beta}} |\nabla \psi|^2 dx\nonumber \\[0.1cm]
& +C \int_{\mathbb{R}^N} \dfrac{u^2}{|x|^{\beta}} \Big [|\nabla (\Delta \psi)\cdot \nabla \psi|+|\Delta \psi|^2+\left |\Delta |\nabla \psi|^2 \right |\Big ]dx \nonumber \\[0.1cm]
&+ C\int_{\mathbb{R}^N} \dfrac{u^2}{|x|^{\beta+2}}\Big [|\Delta \psi||x \cdot \nabla \psi|+|\nabla \psi|^2+\left |x\cdot \nabla
(|\nabla \psi|^2)\right | \Big ]dx.
\end{align}Since $\Delta (u\psi)=\Delta u \psi+2\nabla u \cdot \nabla \psi+u\Delta \psi$, it follows from (\ref{eq:2.7}), (\ref{eq:2.8}) and Lemma \ref{eq:l2.2} that
\begin{align}\label{eq:2.9}
\int_{\mathbb{R}^N}\dfrac{|\Delta u|^2}{|x|^{\beta}} \psi^2dx \le & C \int_{\mathbb{R}^N}\dfrac{|u\Delta u|}{|x|^{\beta}} |\nabla \psi|^2 dx \nonumber +C \int_{\mathbb{R}^N} \dfrac{u^2}{|x|^{\beta}} \Big [|\nabla (\Delta \psi)\cdot \nabla \psi|+|\Delta \psi|^2+\left |\Delta |\nabla \psi|^2 \right |\Big ]dx \nonumber \\[0.1cm]
&+ C\int_{\mathbb{R}^N} \dfrac{u^2}{|x|^{\beta+2}}\Big [|\Delta \psi||x \cdot \nabla \psi|+|\nabla \psi|^2+\left |x\cdot \nabla
(|\nabla \psi|^2)\right | \Big ]dx.
\end{align}\vskip .05in
Replacing $\psi$ by $\psi^m$ in (\ref{eq:2.8}) and (\ref{eq:2.9}) with $m>2$ leads to
\begin{align*}
\int_{\mathbb{R}^N}&\left [\dfrac{|\Delta u|^2}{|x|^{\beta}} +|x|^{\alpha}|u|^{p+1}\right ]\psi^{2m} dx \le C\int_{\mathbb{R}^N}
\dfrac{|u\Delta u|}{|x|^{\beta}} \psi^{2(m-1)} |\nabla \psi|^2 dx\\[0.1cm]
& +C\int_{\mathbb{R}^N}\dfrac{u^2}{|x|^{\beta}} \Big [|\nabla (\Delta \psi^m)\cdot \nabla \psi^m|+|\Delta \psi^m|^2+\left |
\Delta |\nabla \psi^m|^2 \right | \Big ]dx \\[0.1cm]
& + C\int_{\mathbb{R}^N} \dfrac{u^2}{|x|^{\beta+2}}\Big [\left |\Delta \psi^m \right |\left |x \cdot \nabla \psi^m \right |+
\left |\nabla \psi^m \right |^2+\left |x\cdot \nabla
(|\nabla \psi^m|^2)\right | \Big ]dx.
\end{align*}Utilizing Young's inequality, we obtain
\begin{equation*}
\int_{\mathbb{R}^N} \dfrac{|u\Delta u|}{|x|^{\beta}}\psi^{2(m-1)}|\nabla \psi|^2 dx\le \dfrac{1}{2C} \int_{\mathbb{R}^N}
\dfrac{|\Delta u|^2}{|x|^{\beta}} \psi^{2m} dx +C\int_{\mathbb{R}^N} \dfrac{u^2}{|x|^{\beta}}\psi^{2(m-2)}|\nabla \psi|^4dx.
\end{equation*}Thus, we deduce
\begin{align*}
\int_{\mathbb{R}^N}\left [\dfrac{|\Delta u|^2}{|x|^{\beta}}+|x|^{\alpha}|u|^{p+1}\right ] & \psi^{2m} dx
\le C\int_{\mathbb{R}^N} \dfrac{u^2}{|x|^{\beta}} \psi^{2(m-2)} \mathfrak{Q}(\psi^m)dx \\[0.1cm]
& +C \int_{\mathbb{R}^N} \dfrac{u^2}{|x|^{\beta+2}} \psi^{2(m-2)} \mathfrak{R}(\psi^m) dx,
\end{align*}where $\mathfrak{Q}(\psi^m)=|\nabla \psi|^4+\psi^{2(2-m)}\Big [|\nabla (\Delta \psi^m)\cdot \nabla \psi^m |+|\Delta \psi^m|^2+
\left |\Delta |\nabla \psi^m|^2 \right | \Big ]$ and $\mathfrak{R}(\psi^m)=\psi^{2(2-m)}
\Big [\left |\Delta \psi^m \right |\left |x \cdot \nabla \psi^m \right |+
\left |\nabla \psi^m \right |^2+\left |x\cdot \nabla
(|\nabla \psi^m|^2)\right | \Big ]$. Taking $(m-2)(p+1)\ge 2m$,
we apply H\"{o}lder's inequality to both terms on the right-hand side of the above inequality and get
\begin{align*}
\int_{\mathbb{R}^N} \dfrac{u^2}{|x|^{\beta}} \psi^{2(m-2)} \mathfrak{Q}(\psi^m)dx =\int_{\mathbb{R}^N} |x|^{\frac{2\alpha}{p+1}}u^2 \psi^{2(m-2)}|x|^{-\frac{2\alpha}{p+1}-\beta}\mathfrak{Q}(\psi^m)dx \\[0.1cm]
\le \left (\int_{\mathbb{R}^N}|x|^{\alpha}|u|^{p+1}\psi^{2m}dx \right )^{\frac{2}{p+1}}
\left (\int_{\mathbb{R}^N}|x|^{-\frac{2\alpha+\beta p+\beta}{p-1}}\mathfrak{Q}(\psi^m)^{\frac{p+1}{p-1}}dx \right )^{\frac{p-1}{p+1}},
\end{align*}and
\begin{align*}
\int_{\mathbb{R}^N} \dfrac{u^2}{|x|^{\beta+2}} \psi^{2(m-2)} \mathfrak{R}(\psi^m)dx =\int_{\mathbb{R}^N} |x|^{\frac{2\alpha}{p+1}}u^2 \psi^{2(m-2)}|x|^{-\frac{2\alpha}{p+1}-\beta-2}\mathfrak{R}(\psi^m)dx \\[0.1cm]
\le \left (\int_{\mathbb{R}^N}|x|^{\alpha}|u|^{p+1}\psi^{2m}dx \right )^{\frac{2}{p+1}}
\left (\int_{\mathbb{R}^N}|x|^{-\frac{2\alpha+(\beta+2)(p+1)}{p-1}}\mathfrak{R}(\psi^m)^{\frac{p+1}{p-1}}dx \right )^{\frac{p-1}{p+1}}.
\end{align*}
Therefore, we find
\begin{align*}
\int_{\mathbb{R}^N}\left [\dfrac{|\Delta u|^2}{|x|^{\beta}}+|x|^{\alpha}|u|^{p+1}\right ] & \psi^{2m} dx
\le C\int_{\mathbb{R}^N}|x|^{-\frac{2\alpha+\beta p+\beta}{p-1}}\mathfrak{Q}(\psi^m)^{\frac{p+1}{p-1}}dx\\[0.1cm]
& +C\int_{\mathbb{R}^N}|x|^{-\frac{2\alpha+(\beta+2)(p+1)}{p-1}}\mathfrak{R}(\psi^m)^{\frac{p+1}{p-1}}dx.
\end{align*}
Let us choose a cut-off function $\psi \in C_0^4(B_{2R}(x))$ satisfying $0 \le \psi \le 1$, $\psi \equiv 1$ in $B_R(x)$, and $|\nabla^k \psi|
\le \dfrac{C}{R^k}$ for $k\le 3$. Substituting $\psi$ into (\ref{eq:2.8}), (\ref{eq:2.9}) and the above inequality, we have
\begin{align*}
& \int_{B_R(x)}\left (\dfrac{|\Delta u|^2}{|z|^{\beta}} +|z|^{\alpha}|u|^{p+1} \right )dz\\[0.1cm]
& \le CR^{-2} \int_{B_{2R}(x)\backslash B_R(x)}\dfrac{|u\Delta u|}{|z|^{\beta}} dz +CR^{-4}\int_{B_{2R}(x)\backslash B_R(x)}\dfrac{u^2}{|z|^{\beta}} dz,
\end{align*}and
\begin{equation*}
\int_{B_R(x)}\left [\dfrac{|\Delta u|^2}{|z|^{\beta}}+|z|^{\alpha}|u|^{p+1}\right ] dz
\le CR^{N-4-\beta-\frac{8+2\alpha+2\beta}{p-1}}. \eqno \square
\end{equation*}\vskip .08in
\begin{remark}\label{eq:r2.1}
If the domain $\mathbb{R}^N$ is replaced by a subset $\Omega$ (bounded or not) in Lemmas \ref{eq:l2.1}-\ref{eq:al2.3}, then the
conclusions remain true.
\end{remark}
\begin{lemma}{\em (Pohozaev identity)}
Let $u$ be a classical solution of (\ref{eq:1.1}). Then we have
\begin{align}\label{eq:2.10}
& \dfrac{N-4-\beta}{2}\int_{\Omega}\dfrac{|\Delta u|^2}{|x|^{\beta}} dx -\dfrac{N+\alpha}{p+1} \int_{\Omega} |x|^{\alpha}|u|^{p+1} dx \nonumber \\[0.08cm]
&=\dfrac{1}{2} \int_{\partial \Omega} \dfrac{|\Delta u|^2}{|x|^{\beta}} (x \cdot \nu) dS
-\dfrac{1}{p+1} \int_{\partial \Omega} |x|^{\alpha}|u|^{p+1}(x \cdot \nu) dS \nonumber \\[0.1cm]
&\;\;\ -\int_{\partial \Omega}\dfrac{\Delta u}{|x|^{\beta}} \nabla (x\cdot \nabla u)\cdot \nu dS+
\int_{\partial \Omega}\nabla \left (\dfrac{\Delta u}{|x|^{\beta}}\right )\cdot \nu (x \cdot \nabla u)dS,
\end{align}where $\nu$ denotes the outward unit normal vector field.
\end{lemma}
\begin{proof}
Multiplying (\ref{eq:1.1}) by $(x \cdot \nabla u)$, we obtain
\begin{equation*}
\Delta (|x|^{-\beta} \Delta u)(x \cdot \nabla u)=|x|^{\alpha} |u|^{p-1}u(x\cdot \nabla u),\quad \mbox{in}\;\; \Omega \backslash \{0\}.
\end{equation*}Hence, for every small $\varepsilon >0$, we have
\begin{equation}\label{eq:2.11}
\int_{\Omega \backslash B_{\varepsilon}}\Delta (|x|^{-\beta} \Delta u)(x \cdot \nabla u) dx
=\int_{\Omega \backslash B_{\varepsilon}} |x|^{\alpha} |u|^{p-1}u(x\cdot \nabla u) dx.
\end{equation}Applying the divergence theorem and integration by parts to the right-hand side and
the left-hand side of (\ref{eq:2.11}), respectively, we get
\begin{align}\label{eq:2.12}
& \dfrac{1}{p+1} \int_{\Omega \backslash B_{\varepsilon}} |x|^{\alpha} \left (x \cdot \nabla \left (|u|^{p+1}\right )\right ) dx =-\dfrac{N+\alpha}{p+1}\int_{\Omega \backslash B_{\varepsilon}} |x|^{\alpha}|u|^{p+1} dx \nonumber \\[0.2cm]
& +\dfrac{1}{p+1} \int_{\partial \Omega}|x|^{\alpha}|u|^{p+1}(x \cdot \nu)dS-\dfrac{1}{p+1} \int_{\partial B_{\varepsilon}}|x|^{\alpha}|u|^{p+1}(x \cdot \nu)dS,
\end{align}and
\begin{align*}
& \int_{\Omega \backslash B_{\varepsilon}}\Delta (|x|^{-\beta} \Delta u)(x \cdot \nabla u) dx=\sum\limits_{i,j=1}^N\int_{\Omega \backslash B_{\varepsilon}} (|x|^{-\beta}\Delta u)_{x_ix_i}
(x^ju_{x_j})dx \\[0.2cm]
&=\sum\limits_{i,j=1}^N \int_{\Omega \backslash B_{\varepsilon}}(|x|^{-\beta}\Delta u)(x^ju_{x_j})_{x_ix_i} dx-\int_{\partial \Omega} |x|^{-\beta}\Delta u \nabla (x\cdot \nabla u)\cdot \nu dS \\[0.2cm]
&\quad +\int_{\partial B_{\varepsilon}} |x|^{-\beta}\Delta u \nabla (x\cdot \nabla u)\cdot \nu dS +\int_{\partial \Omega} \nabla (|x|^{-\beta}\Delta u)\cdot \nu (x\cdot \nabla u)dS \\[0.2cm]
& \quad -\int_{\partial B_{\varepsilon}} \nabla (|x|^{-\beta}\Delta u)\cdot \nu (x\cdot \nabla u)dS.
\end{align*}Computing the first term on the right-hand side of the above equality further yields
\begin{align*}
& \sum\limits_{i,j=1}^N \int_{\Omega \backslash B_{\varepsilon}}(|x|^{-\beta}\Delta u)(x^ju_{x_j})_{x_ix_i} dx =\sum\limits_{i,j=1}^N
\int_{\Omega \backslash B_{\varepsilon}} |x|^{-\beta}\Delta u \Big [2\delta^{ij}u_{x_ix_j} +x^ju_{x_ix_ix_j}\Big ]dx\\[0.2cm]
&=-\dfrac{N-4-\beta}{2}\int_{\Omega \backslash B_{\varepsilon}}|x|^{-\beta}|\Delta u|^2 dx+\int_{\partial \Omega} \dfrac{|\Delta u|^2}{2} |x|^{-\beta} x\cdot \nu dS-\int_{\partial B_{\varepsilon}} \dfrac{|\Delta u|^2}{2} |x|^{-\beta} x\cdot \nu dS,
\end{align*}and putting this back into the above equality leads to
\begin{align}\label{eq:2.13}
& \int_{\Omega \backslash B_{\varepsilon}}\Delta (|x|^{-\beta} \Delta u)(x \cdot \nabla u) dx=-\dfrac{N-4-\beta}{2}\int_{\Omega \backslash B_{\varepsilon}}\dfrac{|\Delta u|^2}{|x|^{\beta}} dx+\int_{\partial \Omega} \dfrac{|\Delta u|^2}{2} |x|^{-\beta} x\cdot \nu dS \nonumber \\[0.2cm]
&\quad -\int_{\partial \Omega} \dfrac{\Delta u}{|x|^{\beta}} \nabla (x\cdot \nabla u)\cdot \nu dS
+\int_{\partial \Omega} \nabla \left (\dfrac{\Delta u}{|x|^{\beta}} \right )\cdot \nu (x\cdot \nabla u)dS-\int_{\partial B_{\varepsilon}} \dfrac{|\Delta u|^2}{2} |x|^{-\beta} x\cdot \nu dS \nonumber \\[0.2cm]
& \quad +\int_{\partial B_{\varepsilon}} \dfrac{\Delta u}{|x|^{\beta}} \nabla (x\cdot \nabla u)\cdot \nu dS
-\int_{\partial B_{\varepsilon}} \nabla \left (\dfrac{\Delta u}{|x|^{\beta}} \right )\cdot \nu (x\cdot \nabla u)dS.
\end{align}Since $u\in C^4(\Omega)$, $\alpha>-4$ and $0 \le \beta \le \dfrac{N-4}{2}$, the above integrals are well defined. Now we insert (\ref{eq:2.12}) and (\ref{eq:2.13}) into (\ref{eq:2.11}), let $\varepsilon \to 0$ and pass to the limit to obtain the identity (\ref{eq:2.10}).
\end{proof}
Inspired by the ideas of \cite{Davila,Du,Hu}, we will apply the {\it Pohozaev identity} to construct a monotonicity formula,
which is a crucial tool. More precisely, let $u\in W^{4,2}_{loc}(\Omega)$ with $|x|^{\alpha}|u|^{p+1} \in L_{loc}^1(\Omega)$,
fix $x\in \Omega$, let $0<r<R$ with $B_r(x) \subset B_R(x) \subset \Omega$, and define
\begin{align}\label{eq:2.14}
\mathcal{M}(r;x,u)=& r^{\delta} \int_{B_r(x)} \left (\dfrac{1}{2}\dfrac{|\Delta u|^2}{|z|^{\beta}}-\dfrac{1}{p+1}|z|^{\alpha}|u|^{p+1} \right )dz \nonumber \\[0.1cm]
& +\dfrac{(1+\beta)\lambda}{2}(N-2-\lambda) \left (r^{2\lambda+1-N}\int_{\partial B_r(x)} u^2 \right ) \nonumber \\[0.1cm]
& +\dfrac{\lambda}{2} (N-2-\lambda) \dfrac{d}{dr} \left (r^{2\lambda+2-N}
\int_{\partial B_r(x)} u^2 \right ) \nonumber \\[0.1cm]
&+\frac{r^3}{2}\dfrac{d}{dr} \left [r^{2\lambda+1-N}\int_{\partial B_r(x)} \left (\lambda r^{-1}u+\dfrac{\partial u}{\partial r}\right )^2 \right ] \nonumber \\[0.1cm]
& +\dfrac{1+\beta-\lambda}{2}r^{2\lambda +3-N} \int_{\partial B_r(x)} \left (|\nabla u|^2-\left |\dfrac{\partial u}{\partial r}\right |^2 \right ) \nonumber \\[0.1cm]
& +\dfrac{1}{2}\dfrac{d}{dr} \left [ r^{2\lambda+4-N} \int_{\partial B_r(x)} \left (|\nabla u|^2-\left |\dfrac{\partial u}{\partial r}\right |^2 \right ) \right ].
\end{align}Here and in the following, we always set $\delta:=\dfrac{8+2\alpha+2\beta}{p-1}+4+\beta-N$ and $\lambda:=\dfrac{4+\alpha+\beta}{p-1}$.\vskip .06in
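For later use, a direct computation from these definitions gives the two identities
\begin{equation*}
\delta=2\lambda-(N-4-\beta)=(p+1)\lambda-(N+\alpha),
\end{equation*}which will be used repeatedly below.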
\begin{theorem}\label{eq:t2.1}
Let $p \ge \dfrac{N+4+2\alpha+\beta}{N-4-\beta}$ and let $u\in W^{2,2}_{loc}(\Omega)$,
$|x|^{\alpha} |u|^{p+1} \in L_{loc}^1(\Omega)$ and $|x|^{-\beta}|\Delta u|^2 \in L_{loc}^1(\Omega)$
be a weak solution of (\ref{eq:1.1}). Then $\mathcal{M}(r;x,u)$ is nondecreasing in $r \in (0,R)$ and satisfies the inequality
\begin{equation*}
\dfrac{d}{dr}\mathcal{M}(r;0,u) \ge C(N,p, \alpha, \beta) r^{-N+2+2\lambda} \int_{\partial B_r}
\left (\lambda r^{-1}u+\dfrac{\partial u}{\partial r} \right )^2 dS,
\end{equation*}where $C(N,p,\alpha,\beta)=(N-2)(2+\beta)+2\lambda(N-4-\beta-\lambda)-\dfrac{\beta^2}{8}>0$.\vskip .06in
Furthermore, if $\mathcal{M}(r;0,u)\equiv \mbox{const}$, for all $r \in (0,R)$, then $u$ is
homogeneous in $B_R\backslash \{0\}$, i.e., $\forall \mu \in (0,1]$, $x \in B_R\backslash \{0\}$,
\begin{equation*}
u(\mu x)=\mu^{-\frac{4+\alpha+\beta}{p-1}}u(x).
\end{equation*}
\end{theorem}
\noindent {\it Proof.}
Define a function by
\begin{equation*}
\mathcal{F}(\kappa):=\kappa^{\delta} \int_{B_{\kappa}} \left (\dfrac{1}{2}\dfrac{|\Delta u|^2}{|x|^{\beta}}-\dfrac{1}{p+1}|x|^{\alpha}|u|^{p+1} \right )dx.
\end{equation*}Differentiating $\mathcal{F}(\kappa)$ in $\kappa$ gives
\begin{align}\label{eq:2.15}
\dfrac{d\mathcal{F}(\kappa)}{d\kappa} = &\delta \kappa^{\delta-1}\int_{B_{\kappa}} \left (\dfrac{1}{2}\dfrac{|\Delta u|^2}{|x|^{\beta}}-\dfrac{1}{p+1}|x|^{\alpha}|u|^{p+1} \right )dx \nonumber \\[0.1cm]
&+\kappa^{\delta }\int_{\partial B_{\kappa}} \left (\dfrac{1}{2}\dfrac{|\Delta u|^2}{|x|^{\beta}}-\dfrac{1}{p+1}|x|^{\alpha}|u|^{p+1} \right )dS.
\end{align}Multiplying the equation (\ref{eq:1.1}) by $u$ and
integrating by parts, we get
\begin{align*}
\int_{\Omega}|x|^{\alpha}|u|^{p+1} dx & =-\int_{\Omega} \nabla \left (\dfrac{\Delta u}{|x|^{\beta}} \right )\cdot \nabla u dx+\int_{\partial \Omega}\dfrac{\partial }{\partial \nu}
\left (\dfrac{\Delta u}{|x|^{\beta}} \right ) udS\\[0.1cm]
&=\int_{\Omega}\dfrac{|\Delta u|^2}{|x|^{\beta}} dx-\int_{\partial \Omega}\dfrac{\Delta u}{|x|^{\beta}}(\nabla u\cdot \nu )dS+\int_{\partial \Omega}\dfrac{\partial }{\partial \nu}
\left (\dfrac{\Delta u}{|x|^{\beta}} \right ) udS,
\end{align*}which implies
\begin{equation}\label{eq:2.16}
\int_{\Omega}\dfrac{|\Delta u|^2}{|x|^{\beta}} dx-\int_{\Omega}|x|^{\alpha}|u|^{p+1} dx=\int_{\partial \Omega}\dfrac{\Delta u}{|x|^{\beta}}(\nabla u\cdot \nu )dS-\int_{\partial \Omega}\dfrac{\partial }{\partial \nu}
\left (\dfrac{\Delta u}{|x|^{\beta}} \right ) udS.
\end{equation}In addition, it is easy to see that
\begin{align*}
&\delta \kappa^{\delta-1}\int_{B_{\kappa}} \left (\dfrac{1}{2}\dfrac{|\Delta u|^2}{|x|^{\beta}}-\dfrac{1}{p+1}|x|^{\alpha}|u|^{p+1} \right )dx\\[0.15cm]
&=\lambda \kappa^{\delta-1} \int_{B_{\kappa}} \left (\dfrac{|\Delta u|^2}{|x|^{\beta}} -|x|^{\alpha}|u|^{p+1}\right ) dx \;\;\ -\kappa^{\delta-1}\int_{B_{\kappa}}\left ( \dfrac{N-4-\beta}{2}\dfrac{|\Delta u|^2}{|x|^{\beta}} -\dfrac{N+\alpha}{p+1} |x|^{\alpha}|u|^{p+1} \right ) dx.
\end{align*}Therefore, combining this with (\ref{eq:2.10}), (\ref{eq:2.15}) and (\ref{eq:2.16}), we obtain
\begin{align}\label{eq:2.17}
& \dfrac{d\mathcal{F}(\kappa)}{d\kappa}= \lambda \kappa^{\delta -1} \left [\int_{\partial B_{\kappa}}\dfrac{\Delta u}{|x|^{\beta}}(\nabla u\cdot \nu )dS-\int_{\partial B_{\kappa}}\dfrac{\partial }{\partial \nu} \left (\dfrac{\Delta u}{|x|^{\beta}} \right ) udS \right ] \nonumber \\[0.1cm]
& \; \ + \kappa^{\delta -1}\int_{\partial B_{\kappa}} \dfrac{\Delta u}{|x|^{\beta}} \nabla (x\cdot \nabla u)\cdot \nu dS -\kappa^{\delta -1}
\int_{\partial B_{\kappa}}\nabla \left (\dfrac{\Delta u}{|x|^{\beta}} \right )\cdot \nu (x \cdot \nabla u)dS.
\end{align}\vskip .05in
Denote $u^{\kappa}(x):=\kappa^{\frac{4+\alpha+\beta}{p-1}}u(\kappa x)$. Now, computing the first term on the right-hand side of (\ref{eq:2.17}) leads to
\begin{align}\label{eq:2.18}
\lambda \kappa^{\delta-1}\int_{\partial B_{\kappa}} |x|^{-\beta} \Delta u (\nabla u \cdot \nu)dS &=\lambda \kappa^2 \int_{\partial B_{\kappa}}
\Delta \left (\kappa^{\frac{4+\alpha+\beta}{p-1}} u\right ) \left (\nabla \left (\kappa^{\frac{4+\alpha+\beta}{p-1}} u \right ) \cdot \nu \right ) dS \cdot \kappa^{1-N} \nonumber \\[0.1cm]
&= \dfrac{\lambda}{\kappa} \int_{\partial B_1} \Delta u^{\kappa} (\nabla u^{\kappa} \cdot \nu)d \sigma.
\end{align}Similarly, we calculate the second term on the right-hand side of (\ref{eq:2.17}) and get
\begin{align}\label{eq:2.19}
& \lambda \kappa^{\delta-1}\int_{\partial B_{\kappa}}\dfrac{\partial }{\partial \nu} \left (\dfrac{\Delta u}{|x|^{\beta}} \right ) udS =
\lambda \kappa^{\delta-1}\int_{\partial B_{\kappa}}|x|^{-\beta} \left [(\nabla (\Delta u)\cdot \nu)-\beta|x|^{-2}\Delta u(x\cdot \nu) \right ] udS \nonumber \\[0.2cm]
& =\lambda \kappa^2 \int_{\partial B_{\kappa}} \left [\left (\nabla \left (\Delta \kappa^{\frac{4+\alpha+\beta}{p-1}}u\right )\cdot \nu\right )-
\beta \kappa^{-2}\Delta \left ( \kappa^{\frac{4+\alpha+\beta}{p-1}}u\right )(x\cdot \nu) \right ] \kappa^{\frac{4+\alpha+\beta}{p-1}}udS \cdot \kappa^{1-N} \nonumber \\[0.2cm]
& =\dfrac{\lambda}{\kappa}\int_{\partial B_1} \Big [(\nabla (\Delta u^{\kappa}) \cdot \nu) -\beta \Delta u^{\kappa} \Big ] u^{\kappa} d \sigma.
\end{align}Similar to the above calculation, we find
\begin{align}\label{eq:2.20}
& \kappa^{\delta -1}\int_{\partial B_{\kappa}} |x|^{-\beta} \Delta u \nabla (x\cdot \nabla u)\cdot \nu dS =\dfrac{1}{\kappa}\int_{\partial B_1} \Delta u^{\kappa} \nabla \left (x\cdot \nabla u^{\kappa} \right ) \cdot \nu d \sigma,
\end{align}and
\begin{align}\label{eq:2.21}
& \kappa^{\delta -1}
\int_{\partial B_{\kappa}}\nabla (|x|^{-\beta} \Delta u)\cdot \nu (x \cdot \nabla u)dS =\dfrac{1}{\kappa}\int_{\partial B_1} \Big [\left (\nabla \left (\Delta u^{\kappa}\right ) \cdot \nu\right ) -\beta \Delta u^{\kappa}\Big ]\left (x \cdot \nabla u^{\kappa} \right )d \sigma.
\end{align}We use spherical coordinates $r=|x|$, $\theta=\dfrac{x}{|x|}\in \mathbb{S}^{N-1}$, and
write $u^{\kappa}(x)=u^{\kappa}(r,\theta)$; then we insert (\ref{eq:2.18})-(\ref{eq:2.21}) into (\ref{eq:2.17}) to obtain
\begin{align*}
\dfrac{d\mathcal{F}(\kappa)}{d\kappa}=& \dfrac{\lambda}{\kappa}\int_{\partial B_1} \Delta u^{\kappa} (\nabla u^{\kappa} \cdot \nu)d \sigma -\dfrac{\lambda}{\kappa}
\int_{\partial B_1} \Big [(\nabla (\Delta u^{\kappa}) \cdot \nu) -\beta \Delta u^{\kappa} \Big ] u^{\kappa} d \sigma \\[0.1cm]
&+\dfrac{1}{\kappa} \int_{\partial B_1} \Delta u^{\kappa} \nabla \left (x\cdot \nabla u^{\kappa} \right ) \cdot \nu d \sigma
-\dfrac{1}{\kappa}\int_{\partial B_1} \Big [\left (\nabla \left (\Delta u^{\kappa}\right ) \cdot \nu\right ) -\beta \Delta u^{\kappa}\Big ]\left (x \cdot \nabla u^{\kappa} \right )d \sigma\\[0.1cm]
= & \dfrac{1}{\kappa} \int_{\partial B_1} \lambda \left (\dfrac{\partial^2u^{\kappa}}{\partial r^2} +(N-1)\dfrac{\partial u^{\kappa}}{\partial r}
+\Delta_{\theta} u^{\kappa} \right )\dfrac{\partial u^{\kappa}}{\partial r} \\[0.1cm]
& -\lambda \left [\dfrac{\partial^3 u^{\kappa}}{\partial r^3}+(N-1-\beta)\dfrac{\partial^2 u^{\kappa}}{\partial r^2}
-(N-1)(1+\beta)\dfrac{\partial u^{\kappa}}{\partial r}-(2+\beta) \Delta_{\theta} u^{\kappa}\right ]u^{\kappa} \\[0.1cm]
& +\left [\dfrac{\partial^2 u^{\kappa}}{\partial r^2}+(N-1)\dfrac{\partial u^{\kappa}}{\partial r}+\Delta_{\theta} u^{\kappa} \right ]\left (\dfrac{\partial^2 u^{\kappa}}{\partial r^2}+\dfrac{\partial u^{\kappa}}{\partial r} \right )\\[0.1cm]
&- \left [\dfrac{\partial^3 u^{\kappa}}{\partial r^3}+(N-1-\beta)\dfrac{\partial^2 u^{\kappa}}{\partial r^2}
-(N-1)(1+\beta)\dfrac{\partial u^{\kappa}}{\partial r}-(2+\beta) \Delta_{\theta} u^{\kappa}\right ]\dfrac{\partial u^{\kappa}}{\partial r} \\[0.15cm]
=& \dfrac{1}{\kappa} \int_{\partial B_1}- \dfrac{\partial^3 u^{\kappa}}{\partial r^3}\dfrac{\partial u^{\kappa}}{\partial r}
-\lambda \dfrac{\partial^3 u^{\kappa}}{\partial r^3}u^{\kappa}+\left (\dfrac{\partial^2 u^{\kappa}}{\partial r^2}\right )^2
+(\lambda+1+\beta)\dfrac{\partial^2u^{\kappa}}{\partial r^2}\dfrac{\partial u^{\kappa}}{\partial r}\\[0.1cm]
& +(N-1)(\lambda+2+\beta)\left (\dfrac{\partial u^{\kappa}}{\partial r}\right )^2
-\lambda (N-1-\beta) \dfrac{\partial^2 u^{\kappa}}{\partial r^2}u^{\kappa}
+\lambda (N-1)(1+\beta) \dfrac{\partial u^{\kappa}}{\partial r}u^{\kappa}\\[0.1cm]
& +\dfrac{1}{\kappa}\int_{\partial B_1} \Delta_{\theta} u^{\kappa}\dfrac{\partial ^2 u^{\kappa}}{\partial r^2} +(\lambda+3+\beta)
\Delta_{\theta} u^{\kappa}\dfrac{\partial u^{\kappa}}{\partial r}+\lambda (2+\beta )\Delta_{\theta}u^{\kappa}u^{\kappa}\\[0.1cm]
:= &\; \mathfrak{T}_1+\mathfrak{T}_2,
\end{align*}where $\Delta_{\theta}$ represents the Laplace-Beltrami operator on $\partial B_1$ and
$\nabla_{\theta}$ is the tangential derivative on $\partial B_1$.\vskip 0.05in
Differentiating $u^{\kappa}$ in $\kappa$ yields
\begin{equation}\label{eq:2.22}
\dfrac{du^{\kappa}}{d\kappa}(x)=\dfrac{1}{\kappa} \left [\lambda u^{\kappa}(x)+r\dfrac{\partial u^{\kappa}}{\partial r}(x)\right ]
\Longrightarrow
r \dfrac{\partial u^{\kappa}}{\partial r} =\kappa \dfrac{du^{\kappa}}{d\kappa}-\lambda u^{\kappa}.
\end{equation}
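The first identity in (\ref{eq:2.22}) is a direct computation: since $u^{\kappa}(x)=\kappa^{\lambda}u(\kappa x)$, we have
\begin{equation*}
\dfrac{du^{\kappa}}{d\kappa}(x)=\lambda \kappa^{\lambda-1}u(\kappa x)+\kappa^{\lambda}\, x\cdot \nabla u(\kappa x)
=\dfrac{1}{\kappa}\left [\lambda u^{\kappa}(x)+r\dfrac{\partial u^{\kappa}}{\partial r}(x)\right ],
\end{equation*}where we used $r\dfrac{\partial u^{\kappa}}{\partial r}(x)=\kappa^{\lambda+1}\, x\cdot \nabla u(\kappa x)$.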
Differentiating the equation (\ref{eq:2.22}) in $\kappa$ and $r$ respectively yields
\begin{align*}
r \dfrac{\partial}{\partial r}\dfrac{du^{\kappa}}{d\kappa}=\kappa \dfrac{d^2u^{\kappa}}{d\kappa^2}+(1-\lambda)\dfrac{du^{\kappa}}{d\kappa}\;\; \mbox{and}\;\;
\kappa \dfrac{\partial }{\partial r}\dfrac{du^{\kappa}}{d\kappa}=& (1+\lambda)\dfrac{\partial u^{\kappa}}{\partial r}+r\dfrac{\partial^2 u^{\kappa}}{\partial r^2}.
\end{align*}Then, combining the above two equalities with (\ref{eq:2.22}), we obtain that, on $\partial B_1$,
\begin{equation}\label{eq:2.23}
\dfrac{\partial^2 u^{\kappa}}{\partial r^2}=r^2 \dfrac{\partial^2 u^{\kappa}}{\partial r^2} =
\kappa^2\dfrac{d^2u^{\kappa}}{d\kappa^2}-2\lambda\kappa \dfrac{du^{\kappa}}{d\kappa}+\lambda (1+\lambda ) u^{\kappa}.
\end{equation}Similarly, we find
\begin{align*}
r^2 \dfrac{\partial^3 u^{\kappa}}{\partial r^3}+2r\dfrac{\partial^2 u^{\kappa}}{\partial r^2} &=\kappa^2\dfrac{\partial}{\partial r}
\dfrac{d^2u^{\kappa}}{d\kappa^2}-2\lambda \kappa \dfrac{\partial}{\partial r}\dfrac{du^{\kappa}}{d\kappa}+\lambda(1+\lambda)\dfrac{\partial u^{\kappa}}{\partial r},\\[0.1cm]
r\dfrac{\partial }{\partial r}\dfrac{d^2 u^{\kappa}}{d\kappa^2}& =\kappa \dfrac{d^3u^{\kappa}}{d\kappa^3}+(2-\lambda)\dfrac{d^2u^{\kappa}}{d\kappa^2}.
\end{align*}Then, on $\partial B_1$, we have
\begin{align}\label{eq:2.24}
\dfrac{\partial^3 u^{\kappa}}{\partial r^3}
=& \kappa^3\dfrac{d^3u^{\kappa}}{d\kappa^3} -3\lambda \kappa^2\dfrac{d^2u^{\kappa}}{d\kappa^2} +3\lambda(1+\lambda)\kappa\dfrac{du^{\kappa}}{d\kappa} -\lambda(1+\lambda)(2+\lambda)u^{\kappa} .
\end{align}\vskip 0.05in
Substituting (\ref{eq:2.22}) and (\ref{eq:2.23}) into the expression of $\mathfrak{T}_2$ yields
\begin{align*}
\mathfrak{T}_2
= & \int_{\partial B_1} \kappa \Delta_{\theta}u^{\kappa} \dfrac{d^2 u^{\kappa}}{d \kappa^2}-(\lambda-3-\beta) \Delta_{\theta}u^{\kappa} \dfrac{d u^{\kappa}}{d \kappa} \\[0.1cm]
= & \int_{\partial B_1}-\kappa \nabla_{\theta}u^{\kappa}\nabla_{\theta}\dfrac{d^2u^{\kappa}}{d\kappa^2}+(\lambda-3-\beta)\nabla_{\theta}u^{\kappa}
\nabla_{\theta}\dfrac{du^{\kappa}}{d\kappa} \\[0.1cm]
=& -\dfrac{1}{2}\dfrac{d^2}{d\kappa^2}\left [\kappa \int_{\partial B_1} \left |\nabla_{\theta}u^{\kappa} \right |^2 \right ]
+\dfrac{\lambda-1-\beta}{2}\dfrac{d}{d\kappa} \int_{\partial B_1}\left |\nabla_{\theta}u^{\kappa} \right |^2
+\kappa \int_{\partial B_1}\left |\nabla_{\theta}\dfrac{du^{\kappa}}{d\kappa} \right |^2 \\[0.1cm]
\ge & -\dfrac{1}{2}\dfrac{d^2}{d\kappa^2}\left [\kappa \int_{\partial B_1} \left |\nabla_{\theta}u^{\kappa} \right |^2 \right ]
+\dfrac{\lambda -1-\beta}{2}\dfrac{d}{d\kappa} \int_{\partial B_1}\left |\nabla_{\theta}u^{\kappa} \right |^2.
\end{align*}Let us note the two equalities
\begin{align*}
-\kappa^3\dfrac{du^{\kappa}}{d\kappa}\dfrac{d^3u^{\kappa}}{d\kappa^3}= & \dfrac{d}{d\kappa}\left (-\dfrac{\kappa^3}{2}\dfrac{d}{d\kappa}\left (\dfrac{du^{\kappa}}{d\kappa}\right )^2 \right )+3\kappa^2\dfrac{du^{\kappa}}{d\kappa}\dfrac{d^2u^{\kappa}}{d\kappa^2}+\kappa^3\left (\dfrac{d^2u^{\kappa}}{d\kappa^2}\right )^2,\\[0.1cm]
\kappa u^{\kappa}\dfrac{d^2u^{\kappa}}{d\kappa^2}= & \dfrac{d^2}{d\kappa^2}\left (\dfrac{\kappa \left (u^{\kappa}\right )^2}{2} \right )
-2u^{\kappa}\dfrac{du^{\kappa}}{d\kappa}-\kappa \left (\dfrac{du^{\kappa}}{d\kappa} \right )^2.
\end{align*}Inserting (\ref{eq:2.22})-(\ref{eq:2.24}) into the expression of $\mathfrak{T}_1$, and combining with the above two equalities,
we get
\begin{align*}
\mathfrak{T}_1=& \int_{\partial B_1} -\kappa^3 \dfrac{d^3 u^{\kappa}}{d\kappa^3} \dfrac{du^{\kappa}}{d\kappa}+\kappa^3 \left (\dfrac{d^2u^{\kappa}}{d\kappa^2}\right )^2+(1+\beta)\kappa^2 \dfrac{d^2u^{\kappa}}{d\kappa^2}\dfrac{du^{\kappa}}{d\kappa} \\[0.1cm]
& +\Big [(N-1)(2+\lambda+\beta)-\lambda (5+\lambda+2\beta) \Big ]\kappa \left (\dfrac{du^{\kappa}}{d\kappa}\right )^2 \\[0.1cm]
& + \lambda(2+\lambda-N)\kappa u^{\kappa} \dfrac{d^2u^{\kappa}}{d\kappa^2}
+\lambda(3+\beta)(\lambda+2-N) u^{\kappa}\dfrac{du^{\kappa}}{d\kappa} \\[0.15cm]
=& \int_{\partial B_1}\dfrac{d}{d\kappa} \left (-\dfrac{\kappa^3}{2}\dfrac{d}{d\kappa} \left (\dfrac{du^{\kappa}}{d\kappa} \right )^2 \right )
+2\kappa^3 \left (\dfrac{d^2u^{\kappa}}{d\kappa^2} \right )^2+(4+\beta)\kappa^2 \dfrac{d^2u^{\kappa}}{d\kappa^2}\dfrac{du^{\kappa}}{d\kappa} \\[0.1cm]
& +\Big [(N-1)(2+\beta)+2\lambda(N-4-\beta-\lambda) \Big ]\kappa \left (\dfrac{du^{\kappa}}{d\kappa}\right )^2 \\[0.1cm]
& +\lambda (2+\lambda-N) \dfrac{d^2}{d\kappa^2} \left (\dfrac{\kappa (u^{\kappa})^2}{2}\right )
+\lambda(2+\lambda-N)(1+\beta)u^{\kappa}\dfrac{du^{\kappa}}{d\kappa}
\end{align*}
\begin{align*}
\ge & \int_{\partial B_1}\dfrac{d}{d\kappa} \left (-\dfrac{\kappa^3}{2}\dfrac{d}{d\kappa} \left (\dfrac{du^{\kappa}}{d\kappa} \right )^2 \right )
+\dfrac{\lambda (2+\lambda-N)}{2} \dfrac{d^2}{d\kappa^2} \left (\kappa (u^{\kappa})^2\right ) \\[0.1cm]
& +\dfrac{\lambda}{2}(2+\lambda-N)(1+\beta)\dfrac{d}{d\kappa}(u^{\kappa})^2.
\end{align*}Since $p\ge \dfrac{N+4+2\alpha+\beta}{N-4-\beta}$, the discarded terms of $\mathfrak{T}_1$ satisfy
\begin{align*}
2\kappa^3 \left (\dfrac{d^2u^{\kappa}}{d\kappa^2} \right )^2+ & (4+\beta)\kappa^2 \dfrac{d^2u^{\kappa}}{d\kappa^2}\dfrac{du^{\kappa}}{d\kappa}
+\Big [(N-1)(2+\beta)+2\lambda(N-4-\beta-\lambda) \Big ]\kappa \left (\dfrac{du^{\kappa}}{d\kappa}\right )^2 \\[0.15cm]
&=2\kappa\left [ \kappa \dfrac{d^2u^{\kappa}}{d\kappa^2} +\left (1+\dfrac{\beta}{4}\right ) \dfrac{du^{\kappa}}{d\kappa} \right ]^2
+C(N,p,\alpha,\beta) \kappa \left (\dfrac{du^{\kappa}}{d\kappa}\right )^2\ge 0,
\end{align*}where
\begin{align*}
C(N,p,\alpha,\beta)= & (N-1)(2+\beta)+2\lambda(N-4-\beta-\lambda) -2\left (1+\dfrac{\beta}{4}\right )^2\\
= & (N-2)(2+\beta)+2\lambda(N-4-\beta-\lambda)-\dfrac{\beta^2}{8}> 0.
\end{align*}
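Let us briefly justify the positivity of $C(N,p,\alpha,\beta)$, under the standing assumptions $\alpha>-4$ and $0\le \beta \le \dfrac{N-4}{2}$. Since $p\ge \dfrac{N+4+2\alpha+\beta}{N-4-\beta}$, we have
\begin{equation*}
0<\lambda \le \dfrac{N-4-\beta}{2},\quad \mbox{whence}\quad 2\lambda(N-4-\beta-\lambda)\ge \lambda (N-4-\beta)>0;
\end{equation*}moreover, $\dfrac{\beta^2}{8}\le \dfrac{\beta(N-4)}{16}<(N-2)(2+\beta)$. Hence $C(N,p,\alpha,\beta)>0$.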
Now we rescale and rewrite the $\kappa$-derivatives appearing in $\mathfrak{T}_1$ and $\mathfrak{T}_2$ as follows.
\begin{align*}
\int_{\partial B_1}\dfrac{d }{d \kappa}\left (u^{\kappa}\right )^2 & =\dfrac{d}{d \kappa} \left (\kappa^{2\lambda+1-N}\int_{\partial B_{\kappa}} u^2 \right ),\\[0.1cm]
\int_{\partial B_1} \dfrac{d^2 }{d \kappa^2} \left [\kappa \left (u^{\kappa}\right )^2 \right ] & =\dfrac{d^2}{d\kappa^2} \left (
\kappa^{2\lambda+2-N}\int_{\partial B_{\kappa}} u^2 \right ),\\[0.1cm]
\int_{\partial B_1} \dfrac{d}{d \kappa}\left [\kappa^3 \dfrac{d }{d \kappa}\left ( \dfrac{d u^{\kappa}}{d \kappa}\right )^2 \right ] &=
\dfrac{d}{d \kappa} \left [\kappa^3 \dfrac{d}{d\kappa} \left ( \kappa^{2\lambda+1-N}\int_{\partial B_{\kappa}}
\left (\lambda \kappa^{-1} u +\dfrac{\partial u}{\partial r} \right )^2\right ) \right ],\\[0.1cm]
\dfrac{d}{d \kappa} \left ( \int_{\partial B_1}\left | \nabla_{\theta}u^{\kappa} \right |^2 \right )
& =\dfrac{d}{d\kappa} \left [\kappa^{ 2\lambda+3-N}\int_{\partial B_{\kappa}}\left (|\nabla u|^2-\left |\dfrac{\partial u}{\partial r} \right |^2
\right ) \right ], \\[0.1cm]
\dfrac{d^2}{d \kappa^2} \left (\kappa \int_{\partial B_1}\left | \nabla_{\theta}u^{\kappa} \right |^2 \right )
& =\dfrac{d^2}{d\kappa^2} \left [\kappa^{ 2\lambda+4-N}\int_{\partial B_{\kappa}}\left (|\nabla u|^2-\left |\dfrac{\partial u}{\partial r} \right |^2
\right ) \right ].
\end{align*}Substituting these terms into $\dfrac{d\mathcal{F}(\kappa)}{d \kappa}$ yields
\begin{align*}
\dfrac{d \mathcal{F}(\kappa)}{d\kappa} \ge & \dfrac{\lambda(2+\lambda-N)(1+\beta)}{2}\dfrac{d}{d \kappa} \left (\kappa^{2\lambda+1-N}\int_{\partial B_{\kappa}} u^2 \right )\\[0.1cm]
&+ \dfrac{\lambda(2+\lambda-N)}{2}\dfrac{d^2}{d\kappa^2} \left (
\kappa^{2\lambda+2-N}\int_{\partial B_{\kappa}} u^2 \right )\\[0.1cm]
&-\dfrac{1}{2}\dfrac{d}{d \kappa} \left [\kappa^3 \dfrac{d}{d\kappa} \left ( \kappa^{2\lambda+1-N}\int_{\partial B_{\kappa}}
\left (\lambda \kappa^{-1} u +\dfrac{\partial u}{\partial r} \right )^2\right ) \right ]\\[0.1cm]
& +\dfrac{\lambda-1-\beta}{2} \dfrac{d}{d\kappa} \left [\kappa^{ 2\lambda+3-N}\int_{\partial B_{\kappa}}\left (|\nabla u|^2-\left |\dfrac{\partial u}{\partial r} \right |^2
\right ) \right ]\\[0.1cm]
& - \dfrac{1}{2}\dfrac{d^2}{d\kappa^2} \left [\kappa^{ 2\lambda+4-N}\int_{\partial B_{\kappa}}\left (|\nabla u|^2-\left |\dfrac{\partial u}{\partial r} \right |^2
\right ) \right ].
\end{align*}Consequently, $\mathcal{M}(r;x,u)$ is well defined and nondecreasing in $r \in (0,R)$.\vskip .06in
Next, suppose $\mathcal{M}(r;0,u)\equiv \mbox{const}$ for all $r\in (0,R)$. Then, for any $r_1,r_2 \in (0,R)$ with $r_1<r_2$, we have
\begin{align*}
0 & =\mathcal{M}(r_2;0,u)-\mathcal{M}(r_1;0,u) = \int_{r_1}^{r_2} \dfrac{d}{d \mu}\mathcal{M}(\mu;0,u)d \mu \\
& \ge C(N,p,\alpha,\beta) \int_{B_{r_2}\backslash B_{r_1}} |x|^{2+2\lambda -N}\left
(\lambda |x|^{-1}u+\dfrac{\partial u}{\partial r} \right )^2 dx.
\end{align*}Thus, we get
\begin{equation*}
\lambda r^{-1}u+\dfrac{\partial u}{\partial r}=0,\quad \mbox{a.e.}\quad \mbox{in}\; B_R\backslash \{0\}.
\end{equation*}Integrating in $r$ shows that
\begin{equation*}
u(\mu x)=\mu^{-\frac{4+\alpha+\beta}{p-1}}u(x),\;\ \forall \mu \in (0,1],\; x \in B_R\backslash \{0\}. \eqno \square
\end{equation*}
\begin{remark}\label{eq:r2.2}
From the proof of Theorem \ref{eq:t2.1}, we see that the linear combination of the Pohozaev identity and the identity (\ref{eq:2.16}),
after discarding some terms of $\mathfrak{T}_1$ and $\mathfrak{T}_2$, {\it is equivalent to} the derivative form of the monotonicity formula (\ref{eq:2.14}).
\end{remark}
\vskip .2in
\section{Proof of Theorem \ref{eq:t1.1}}
\vskip .1in
First, we give the expression of $N_{\alpha,\beta}(p)$. To this end, we define four functions by
\begin{eqnarray*}
& \mathfrak{f}(N):=p\dfrac{4+\alpha+\beta}{p-1}\left (\dfrac{4+\alpha+\beta p}{p-1}+2 \right ) \left (N-2-\dfrac{4+\alpha+\beta}{p-1} \right )
\left (N-4-\dfrac{4+\alpha+\beta p}{p-1} \right ), & \\[0.15cm]
& \mathfrak{g}(N):=p\left (\dfrac{4+\alpha+\beta p}{p-1}+2 \right ) \left (N-4- \dfrac{4+\alpha+\beta p}{p-1}\right )+p\dfrac{4+\alpha+\beta}{p-1}
\left (N-2-\dfrac{4+\alpha+\beta}{p-1} \right ), & \\[0.15cm]
& \mathfrak{F}(N): =\dfrac{(N+\beta)^2(N-4-\beta)^2}{16}, & \\[0.15cm]
& \mathfrak{G}(N): =\dfrac{(N+\beta)(N-4-\beta)}{2}.&
\end{eqnarray*}Differentiating the functions $\mathfrak{f}(N)$ and $\mathfrak{F}(N)$ in $N$, we obtain
\begin{eqnarray*}
& \mathfrak{f}'(N)=p\dfrac{4+\alpha+\beta}{p-1}\left (2+\dfrac{4+\alpha+\beta p}{p-1} \right )\left (2N-6-\beta-\dfrac{8+2\alpha+2\beta}{p-1} \right ),& \\[0.15cm]
& \mathfrak{F}'(N)=\dfrac{1}{4}(N+\beta)(N-2)(N-4-\beta).&
\end{eqnarray*}A simple computation yields
\begin{align*}
\mathfrak{f} & (4+\beta+2\lambda)-\mathfrak{F}(4+\beta+2\lambda)=(p-1) \lambda^2 (2+\beta+\lambda)^2 >0,\\[0.15cm]
\mathfrak{f} & (4+\beta+(4p+1)\lambda)-\mathfrak{F}(4+\beta+(4p+1)\lambda) \\[0.1cm]
& =4p^2\lambda^2(2+\beta+\lambda)(2+\beta+4p\lambda)
-\dfrac{(4p+1)^2\lambda^2}{16}\Big [(4+2\beta)+(4p+1)\lambda \Big ]^2\\[0.1cm]
& <0,\\[0.15cm]
\mathfrak{g} & (4+\beta+2\lambda)-\mathfrak{G}(4+\beta+2\lambda)=2(p-1)\lambda (2+\beta+\lambda) >0,\\[0.15cm]
\mathfrak{f} & ' (4+\beta+2\lambda)-\mathfrak{F}'(4+\beta+2\lambda) =(p-1)\lambda (2+\beta+\lambda)(2+\beta+2\lambda)
>0.
\end{align*}Therefore, by the intermediate value theorem, we take the least real root $N(p,\alpha,\beta)$, lying between $4+\beta+2\lambda$ and $4+\beta+(4p+1)\lambda$, of the following algebraic equation
\begin{align*}
& (p^4-4p^3+6p^2-4p+1)y^4-(8p^4-32p^3+48p^2-32p+8)y^3\\[0.1cm]
& -(p^2-2p+1)\Big [(32\alpha +104\beta +16 \alpha\beta +18\beta^2+112) p^2+(16\alpha \beta \\[0.1cm]
& +16 \alpha^2 -4\beta^2 +16\beta +96\alpha +160) p
+8\beta +2\beta^2 -16 \Big ]y^2\\[0.1cm]
& +\Big \{ \Big [\Big (48+44\beta +12 \beta^2 +8\alpha \beta +12\alpha +\alpha \beta^2+\beta^3 \Big ) p^2+\Big (64 +56 \alpha +28 \alpha\beta \\[0.1cm]
& +10\alpha^2 +40\beta +10\beta^2 +4\alpha\beta^2 +3\alpha^2\beta +\beta^3\Big ) p +28\alpha+16+14\alpha^2 +2\alpha^3 \\[0.1cm]
& +12\beta+12\alpha\beta +3\alpha^2\beta +2\beta^2+\alpha\beta^2 \Big ]16 (p^2-p)-(p-1)^4(32 \beta+8\beta^2)\Big \} y\\[0.1cm]
& +(p-1)^4\beta^2(\beta+4)^2-16\Big [(8+(2+\beta)\alpha+(6+\beta)\beta)p +(6+\alpha+\beta)\alpha+2\beta+8 \Big ] \\[0.1cm]
& \times \Big [(8+2\beta) p^2+(6\beta +6\alpha +\alpha\beta +\beta^2+8) p+2\alpha +\alpha^2+\alpha \beta \Big ] p\\[0.1cm]
& =0.
\end{align*}
Define
\begin{equation}\label{eq:3.1}
N_{\alpha,\beta}(p):=N(p,\alpha,\beta).
\end{equation}Then, for any $4+\beta+\dfrac{8+2\alpha+2\beta}{p-1}<N<N_{\alpha,\beta}(p)$, we find
\begin{equation}\label{eq:3.2}
\mathfrak{f}(N) >\dfrac{(N+\beta)^2(N-4-\beta)^2}{16}.
\end{equation}Furthermore, writing $\mathfrak{g}(N)=a+b$ with $a=p\left (\dfrac{4+\alpha+\beta p}{p-1}+2 \right )\left (N-4-\dfrac{4+\alpha+\beta p}{p-1} \right )$ and $b=p\lambda (N-2-\lambda)$, so that $ab=p\,\mathfrak{f}(N)$, and combining the above inequality with $a+b\ge 2\sqrt{ab}$ and $p>1$, we have
\begin{equation}\label{eq:3.3}
\mathfrak{g}(N) >\dfrac{(N+\beta)(N-4-\beta)}{2}.
\end{equation}On the other hand, we easily check that the inequality $\mathfrak{f}(N)-\mathfrak{F}(N)>0$ holds, if one of the following conditions holds:
\begin{itemize}
\item [\rm (i)] $\alpha=\beta$ and $4+\alpha+\dfrac{8+4\alpha}{p-1}<N< 8+3\alpha+\dfrac{8+4\alpha}{p-1}$; or
\item [\rm (ii)] $\alpha=\beta=0$ and
$4+\dfrac{8}{p-1}<N<2+\dfrac{4(p+1)}{p-1}\left (\sqrt{\dfrac{2p}{p+1}}+\sqrt{\dfrac{2p}{p+1}-\sqrt{\dfrac{2p}{p+1}}}\right )$.
\end{itemize}
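For completeness, the passage from (\ref{eq:3.2}) to (\ref{eq:3.3}) can be made explicit. Write $A=\lambda(N-2-\lambda)$ and $B=\left (\dfrac{4+\alpha+\beta p}{p-1}+2 \right )\left (N-4-\dfrac{4+\alpha+\beta p}{p-1} \right )$, and assume (as the proof of Theorem \ref{eq:t1.1} below suggests) that $\mathfrak{f}(N)=pAB$ and $\mathfrak{g}(N)=p(A+B)$ with $A,B\ge 0$ in the range of $N$ under consideration. Then $a+b\ge 2\sqrt{ab}$ gives
\begin{align*}
\mathfrak{g}(N)=p(A+B) \ge 2p\sqrt{AB}=2\sqrt{p}\,\sqrt{pAB}=2\sqrt{p}\,\sqrt{\mathfrak{f}(N)} \ge 2\sqrt{\mathfrak{f}(N)},
\end{align*}
since $p>1$; combined with (\ref{eq:3.2}), this yields $\mathfrak{g}(N)>2\sqrt{\mathfrak{F}(N)}=\dfrac{(N+\beta)(N-4-\beta)}{2}$, which is exactly (\ref{eq:3.3}).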
\vskip .15in
Let us recall that if we take
\begin{equation*}
\Gamma=\dfrac{4+\alpha+\beta}{p-1}\left (\dfrac{4+\alpha+\beta p}{p-1}+2 \right )\left (N-2-\dfrac{4+\alpha+\beta}{p-1} \right )
\left (N-4-\dfrac{4+\alpha+\beta p}{p-1} \right ),
\end{equation*}then
\begin{equation*}
u_{\Gamma}(r)=\Gamma^{\frac{1}{p-1}}r^{-\frac{4+\alpha+\beta}{p-1}}
\end{equation*}is a singular solution of (\ref{eq:1.1}) in $\mathbb{R}^N\backslash \{0\}$.
By the well-known weighted Hardy-Rellich inequality (\cite{Ghoussoub}) with the best constant
\begin{equation*}
\int_{\mathbb{R}^N} \dfrac{|\Delta \psi|^2}{|x|^{\beta}}dx \ge \frac{(N+\beta)^2(N-4-\beta)^2}{16}
\int_{\mathbb{R}^N} \dfrac{\psi^2}{|x|^{4+\beta}} dx,\quad \forall \psi \in H^2_{loc}(\mathbb{R}^N),
\end{equation*}we conclude that the singular solution $u_{\Gamma}$ is stable in $\mathbb{R}^N\backslash \{0\}$ if and only if
\begin{equation*}
\mathfrak{f}(N)=p\Gamma \le \frac{(N+\beta)^2(N-4-\beta)^2}{16}=\mathfrak{F}(N).
\end{equation*}Here $-1-\dfrac{\sqrt{1+(N-1)^2}}{2} \le \beta \le \dfrac{N-4}{2}$.\vskip .15in
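The equivalence just stated follows from a one-line computation. For the singular solution $u_{\Gamma}$,
\begin{equation*}
p\,|x|^{\alpha}\,u_{\Gamma}^{p-1}=p\,\Gamma\,|x|^{\alpha-(4+\alpha+\beta)}=p\,\Gamma\,|x|^{-4-\beta},
\end{equation*}so the stability inequality $p\displaystyle\int_{\mathbb{R}^N} |x|^{\alpha} u_{\Gamma}^{p-1}\psi^2 \,dx \le \displaystyle\int_{\mathbb{R}^N}\dfrac{|\Delta \psi|^2}{|x|^{\beta}}\,dx$ reduces to
\begin{equation*}
p\Gamma \int_{\mathbb{R}^N} \dfrac{\psi^2}{|x|^{4+\beta}}\,dx \le \int_{\mathbb{R}^N} \dfrac{|\Delta \psi|^2}{|x|^{\beta}}\,dx,
\end{equation*}which, by the sharpness of the weighted Hardy-Rellich constant, holds for all test functions $\psi$ if and only if $p\Gamma \le \dfrac{(N+\beta)^2(N-4-\beta)^2}{16}$.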
\noindent {\bf Proof of Theorem \ref{eq:t1.1}.}
Since $u \in W^{2,2}(B_2 \backslash B_1)$, $|x|^{\alpha}|u|^{p+1} \in L_{loc}^1(\mathbb{R}^N
\backslash \{0\})$ and $|x|^{-\beta}|\Delta u|^2 \in L_{loc}^1(\mathbb{R}^N\backslash \{0\})$, we can assume that there exists a $\Psi \in W^{2,2}(\mathbb{S}^{N-1}) \cap L^{p+1}(\mathbb{S}^{N-1})$, such that
in polar coordinates
\begin{equation*}
u(r,\theta)=r^{-\frac{4+\alpha+\beta}{p-1}}\Psi (\theta).
\end{equation*}Substituting into (\ref{eq:1.1}), we get
\begin{equation*}
\Delta_{\theta}^2 \Psi-\Upsilon\Delta_{\theta} \Psi+\Gamma \Psi=|\Psi|^{p-1}\Psi,
\end{equation*}where
\begin{eqnarray*}
& \Upsilon =\lambda
(N-2-\lambda)+ \left (\dfrac{4+\alpha+\beta p}{p-1}+2 \right )\left (N-4-\dfrac{4+\alpha+\beta p}{p-1}\right ),&\\[0.15cm]
& \Gamma =\lambda (N-2-\lambda)\left (\dfrac{4+\alpha+\beta p}{p-1}+2 \right )
\left (N-4-\dfrac{4+\alpha+\beta p}{p-1} \right ).&
\end{eqnarray*}Multiplying the above equation by $\Psi$ and integrating by parts yields
\begin{equation}\label{eq:3.4}
\int_{\mathbb{S}^{N-1}} |\Delta_{\theta} \Psi|^2+\Upsilon |\nabla_{\theta}\Psi|^2 +\Gamma \Psi^2=\int_{\mathbb{S}^{N-1}}|\Psi|^{p+1}.
\end{equation}
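Both the separation of variables above and the expansion of $\Delta \left (r^{-\frac{N-4-\beta}{2}} \Psi(\theta)\xi_{\varepsilon}(r) \right )$ below rest on the standard identity: for any $\gamma \in \mathbb{R}$ and $\Psi \in C^2(\mathbb{S}^{N-1})$,
\begin{equation*}
\Delta \big (r^{-\gamma}\Psi(\theta) \big )=r^{-\gamma-2} \big (\Delta_{\theta}\Psi+\gamma (\gamma+2-N)\Psi \big ).
\end{equation*}In particular, for $\gamma=\frac{N-4-\beta}{2}$ one has $\gamma(\gamma+2-N)=-\frac{(N+\beta)(N-4-\beta)}{4}$, which is precisely the coefficient appearing in the computation below.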
Since $u$ is a stable solution, we can take a test function $r^{-\frac{N-4-\beta}{2}}\Psi(\theta)\xi_{\varepsilon}(r)$ and obtain
\begin{equation}\label{eq:3.5}
p\int_{\mathbb{R}^N} |x|^{\alpha}|u|^{p-1}\left (r^{-\frac{N-4-\beta}{2}} \Psi(\theta)\xi_{\varepsilon}(r) \right )^2 dx
\le \int_{\mathbb{R}^N} \dfrac{\left |\Delta \left (r^{-\frac{N-4-\beta}{2}} \Psi(\theta)\xi_{\varepsilon}(r) \right )\right |^2}{|x|^{\beta}}dx.
\end{equation}Here, for any $\varepsilon >0$, we choose $\xi_{\varepsilon} \in C_0^2\left (\left (\frac{\varepsilon}{3},\frac{3}{\varepsilon}\right ) \right )$ such that
$\xi_{\varepsilon} \equiv 1$ in $\left (\varepsilon, \frac{1}{\varepsilon}\right )$ and
\begin{equation*}
r|\xi_{\varepsilon}'(r)|+r^2|\xi_{\varepsilon}''(r)| \le C,
\end{equation*}for all $r>0$. Then one can easily deduce that
\begin{equation*}
\int_0^{\infty}r^{-1} \xi_{\varepsilon}^2(r)dr \ge \int_{\varepsilon}^{\frac{1}{\varepsilon}}r^{-1}dr =2|\ln \varepsilon|,
\end{equation*}and
\begin{equation*}
\int_0^{\infty}\Big [r|\xi_{\varepsilon}'(r)|^2+r^3|\xi_{\varepsilon}''(r)|^2+|\xi_{\varepsilon}'(r)\xi_{\varepsilon}(r)|+r|\xi_{\varepsilon}(r)
\xi_{\varepsilon}''(r)| \Big ]dr \le C.
\end{equation*}Applying the coordinate transformation to the left hand side of (\ref{eq:3.5}), we get
\begin{align}\label{eq:3.6}
p\int_0^{+\infty} \int_{\mathbb{S}^{N-1}} r^{\alpha}|u|^{p-1}\left (r^{-\frac{N-4-\beta}{2}} \Psi(\theta)\xi_{\varepsilon}(r) \right )^2r^{N-1}dr d\theta \nonumber \\[0.1cm]
=p \left (\int_{\mathbb{S}^{N-1}} |\Psi|^{p+1} d \theta \right )\left (\int_0^{+\infty} r^{-1}\xi_{\varepsilon}^2(r) dr\right ).
\end{align}A direct calculation gives
\begin{align*}
\Delta \left (r^{-\frac{N-4-\beta}{2}} \Psi(\theta)\xi_{\varepsilon}(r) \right )= & -\dfrac{(N+\beta)(N-4-\beta)}{4}r^{-\frac{N-\beta}{2}}\xi_{\varepsilon}(r)\Psi(\theta)
+r^{-\frac{N-\beta}{2}}\xi_{\varepsilon}(r)\Delta_{\theta} \Psi \\[0.1cm]
& +(3+\beta)r^{-\frac{N-2-\beta}{2}}\xi'_{\varepsilon}(r)\Psi(\theta)+r^{-\frac{N-4-\beta}{2}}\xi''_{\varepsilon}(r)\Psi(\theta),
\end{align*}and inserting into the right hand side of (\ref{eq:3.5}) yields
\begin{align}\label{eq:3.7}
\int_{\mathbb{R}^N}& |x|^{-\beta} \left |\Delta \left (r^{-\frac{N-4-\beta}{2}} \Psi(\theta)\xi_{\varepsilon}(r) \right )\right |^2 dx \nonumber \\[0.13cm]
\le & \left [\int_{\mathbb{S}^{N-1}} \left (|\Delta_{\theta} \Psi|^2+\dfrac{(N+\beta)(N-4-\beta)}{2}|\nabla_{\theta} \Psi|^2+\dfrac{(N+\beta)^2(N-4-\beta)^2}{16}\Psi^2 \right ) d\theta \right ]\nonumber \\[0.13cm]
& \times \left (\int_0^{+\infty} r^{-1}\xi_{\varepsilon}^2(r) dr \right ) \nonumber \\[0.13cm]
& +O \left \{\int_0^{+\infty} \left [r|\xi_{\varepsilon}'(r)|^2+r^3|\xi_{\varepsilon}''(r)|^2+|\xi_{\varepsilon}'(r)|\xi_{\varepsilon}(r)+r\xi_{\varepsilon}(r)|\xi_{\varepsilon}''(r)| \right ] dr \right \}\nonumber \\[0.13cm]
& \times \int_{\mathbb{S}^{N-1}} \left [\Psi(\theta)^2+|\nabla_{\theta}\Psi(\theta)|^2 \right ] d\theta.
\end{align}Substituting (\ref{eq:3.6}) and (\ref{eq:3.7}) into (\ref{eq:3.5}) and letting $\varepsilon \to 0$,
we obtain
\begin{align*}
& p \int_{\mathbb{S}^{N-1}} |\Psi|^{p+1} d\theta \\[0.1cm]
&\le
\int_{\mathbb{S}^{N-1}} \left [|\Delta_{\theta} \Psi|^2+\dfrac{(N+\beta)(N-4-\beta)}{2}|\nabla_{\theta} \Psi|^2+\dfrac{(N+\beta)^2(N-4-\beta)^2}{16}\Psi^2 \right ] d\theta.
\end{align*}\vskip .05in
Now, combining the above inequality with (\ref{eq:3.4}), we have
\begin{align*}
\int_{\mathbb{S}^{N-1}} (p-1) & |\Delta_{\theta} \Psi|^2+\left (p\Upsilon-\dfrac{(N+\beta)(N-4-\beta)}{2} \right )|\nabla_{\theta} \Psi |^2 \\[0.1cm]
&+\left (p\Gamma-\dfrac{(N+\beta)^2(N-4-\beta)^2}{16}\right )\Psi^2 \le 0.
\end{align*}Since $4+\beta+\dfrac{8+2\alpha+2\beta}{p-1}<N<N_{\alpha,\beta}(p)$, it follows from the definition of $N_{\alpha,\beta}(p)$, (\ref{eq:3.2}) and (\ref{eq:3.3}) that
\begin{equation*}
\Psi(\theta)\equiv 0.
\end{equation*}Therefore, we get
\begin{equation*}
u \equiv 0. \eqno \square
\end{equation*}\vskip .2in
\section{Proof of Theorem \ref{eq:t1.2}}
\vskip .1in
\noindent {\bf Proof of Theorem \ref{eq:t1.2}.} We divide the proof into three cases.\vskip 0.1in
\noindent {\bf Case I.} $5 \le N < 4+\beta+\dfrac{8+2\alpha+2\beta}{p-1}$.\vskip 0.1in
Since $N<4+\beta+\dfrac{8+2\alpha+2\beta}{p-1}$, it follows from (\ref{eq:2.4}) that, as $R \to +\infty$,
\begin{equation*}
\int_{B_R(x)} \left [\dfrac{|\Delta u|^2}{|z|^{\beta}} +|z|^{\alpha} |u|^{p+1} \right ]dz \le C R^{N-4-\beta-\frac{8+2\alpha+2\beta}{p-1}} \to
0.
\end{equation*}Therefore, we get
\begin{equation*}
u \equiv 0.
\end{equation*}
\noindent {\bf Case II.} $N=4+\beta+\dfrac{8+2\alpha+2\beta}{p-1}$.\vskip 0.1in
From the inequality (\ref{eq:2.4}), we obtain
\begin{equation*}
\int_{\mathbb{R}^N} \left [\dfrac{|\Delta u|^2}{|z|^{\beta}} +|z|^{\alpha} |u|^{p+1} \right ]dz <+\infty,
\end{equation*}implies
\begin{equation*}
\lim\limits_{R\to +\infty} \int_{\mathfrak{D}} \left [\dfrac{|\Delta u|^2}{|z|^{\beta}}+|z|^{\alpha}
|u|^{p+1} \right ]dz=0,
\end{equation*}where $\mathfrak{D}:=B_{2R}(x) \backslash B_R(x)$. Applying (\ref{eq:2.3}) and H\"{o}lder's inequality yields
\begin{align*}
\int_{B_R(x)} & \left [\dfrac{|\Delta u|^2}{|z|^{\beta}}+|z|^{\alpha} |u|^{p+1} \right ] dz
\le CR^{-2} \int_{\mathfrak{D}}\dfrac{|u\Delta u|}{|z|^{\beta}} dz +CR^{-4}\int_{\mathfrak{D}}\dfrac{u^2}{|z|^{\beta}} dz\\[0.1cm]
& \le C\mathfrak{C}R^{-2}\left (\int_{\mathfrak{D}} |z|^{\alpha}|u|^{p+1} dz\right )^{\frac{1}{p+1}}
\left (\int_{\mathfrak{D}} |z|^{-\frac{2\alpha+\beta (p+1)}{p-1}} dz\right )^{\frac{p-1}{2(p+1)}} \\[0.1cm]
&\;\;\ +CR^{-4} \left (\int_{\mathfrak{D}}|z|^{\alpha}|u|^{p+1} dz\right )^{\frac{2}{p+1}}\left (
\int_{\mathfrak{D}} |z|^{-\frac{2\alpha+\beta (p+1)}{p-1}} dz \right )^{\frac{p-1}{p+1}} \\[0.1cm]
& \le C\mathfrak{C}R^{\left [N-4-\beta-\frac{8+2\alpha+2\beta}{p-1} \right ]\frac{p-1}{2(p+1)}} \left (\int_{\mathfrak{D}}
|z|^{\alpha}|u|^{p+1} dz\right )^{\frac{p-1}{2(p+1)}} \\[0.1cm]
& \;\;\ +CR^{\left [N-4-\beta-\frac{8+2\alpha+2\beta}{p-1} \right ]\frac{p-1}{p+1}}\left (
\int_{\mathfrak{D}} |z|^{\alpha}|u|^{p+1} dz\right )^{\frac{p-1}{p+1}},
\end{align*}where $\mathfrak{C}:=\left (\displaystyle \int_{\mathfrak{D}} \dfrac{|\Delta u|^2}{|z|^{\beta}} dz \right )^{\frac{1}{2}}$.
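The exponents in the last two lines can be checked directly: since $\mathfrak{D}=B_{2R}(x)\backslash B_R(x)$, we have $\displaystyle\int_{\mathfrak{D}} |z|^{-\frac{2\alpha+\beta (p+1)}{p-1}} dz \le C R^{\,N-\frac{2\alpha+\beta(p+1)}{p-1}}$, and
\begin{equation*}
-2+\dfrac{p-1}{2(p+1)}\left (N-\dfrac{2\alpha+\beta(p+1)}{p-1} \right )
=\dfrac{p-1}{2(p+1)}\left (N-4-\beta-\dfrac{8+2\alpha+2\beta}{p-1} \right ),
\end{equation*}as one verifies by clearing the denominator $2(p+1)$; the exponent of the second term is exactly twice this quantity.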
Since $N=4+\beta+\dfrac{8+2\alpha+2\beta}{p-1}$, the right hand side of the above inequality converges to $0$ as $R \to +\infty$. Therefore, we obtain
\begin{equation*}
u \equiv 0.
\end{equation*}
\noindent {\bf Case III.} $4+\beta+\dfrac{8+2\alpha+2\beta}{p-1}<N<N_{\alpha,\beta}(p)$.\vskip 0.1in
First, we will obtain some properties of the
function $\mathcal{M}$.
\begin{lemma}\label{eq:l4.1}
$\lim\limits_{r \to +\infty} \mathcal{M}(r;0,u) <+\infty$.
\end{lemma}
\begin{proof}
The proof mainly uses the estimate (\ref{eq:2.4}) and the monotonicity of the function $\mathcal{M}(r;0,u)$ in $r$.
Applying (\ref{eq:2.4}) to estimate the first term in the right hand side of (\ref{eq:2.14}) yields
\begin{align*}
& r^{\frac{8+2\alpha+2\beta}{p-1}+4+\beta-N} \int_{B_r} \left [ \dfrac{1}{2}
\dfrac{(\Delta u)^2}{|x|^{\beta}}-\dfrac{1}{p+1}|x|^{\alpha} |u|^{p+1} \right ]dx \\[0.1cm]
& \le C r^{\frac{8+2\alpha+2\beta}{p-1}+4+\beta-N} r^{N-4-\beta-\frac{8+2\alpha+2\beta}{p-1}} \\
& \le C.
\end{align*}We utilize H\"{o}lder's inequality to estimate the second term on the right hand side of (\ref{eq:2.14}):
\begin{align*}
& r^{\frac{8+2\alpha+2\beta}{p-1}+1-N} \int_{\partial B_r} u^2 \le \dfrac{1}{r} \int_r^{2r} \left (\mu^{\frac{8+2\alpha+2\beta}{p-1}+1-N} \int_{\partial B_{\mu}} u^2 dS \right )d \mu \\[0.1cm]
& \le \dfrac{1}{r} \left ( \int_{B_{2r}\backslash B_r} \left ( |x|^{\frac{8+2\alpha+2\beta}{p-1}+1-N-\frac{2\alpha}{p+1}} \right
)^{\frac{p+1}{p-1}} \right )^{\frac{p-1}{p+1}} \left ( \int_{B_{3r}} |x|^{\alpha} |u|^{p+1} \right )^{\frac{2}{p+1}} \\[0.1cm]
& \le C r^{\left [\frac{8+2\alpha+2\beta}{p-1}-N-\frac{2\alpha}{p+1}+N\frac{p-1}{p+1}\right ]} r^{\frac{2}{p+1}\left
[N-4-\beta-\frac{8+2\alpha+2\beta}{p-1} \right ]}\\[0.1cm]
& \le C.
\end{align*}Similarly, we find
\begin{align*}
\dfrac{d}{dr} \left (r^{2\lambda+2-N}
\int_{\partial B_r} u^2 \right ) & \le \dfrac{1}{r^2} \int^{2r}_r \int_{\iota}^{\iota+r} \dfrac{d}{d \mu} \left (\mu^{2\lambda+2-N}
\int_{\partial B_{\mu}} u^2 \right )d \mu d\iota \\[0.1cm]
& \le C.
\end{align*}By the interpolation inequality and H\"{o}lder's inequality, we get
\begin{align}\label{eq:4.1}
\int_{B_r} |\nabla u|^2 \le & Cr^2 \int_{B_r} |\Delta u|^2 +Cr^{-2} \int_{B_r} u^2 \nonumber \\[0.1cm]
\le & Cr^2 \int_{B_r} |x|^{\beta}\dfrac{|\Delta u|^2}{|x|^{\beta}} +Cr^{-2} \left (
\int_{B_r} |x|^{\alpha}|u|^{p+1} \right )^{\frac{2}{p+1}} \left (\int_{B_r} |x|^{-\frac{2\alpha}{p-1}} dx \right )^{\frac{p-1}{p+1}} \nonumber \\[0.1cm]
\le & Cr^{N-2-\frac{8+2\alpha+2\beta}{p-1}}.
\end{align}Then it follows that
\begin{align*}
r^{\frac{8+2\alpha+2\beta}{p-1}+3-N} \int_{\partial B_r}
|\nabla u|^2 dS \le \dfrac{1}{r} \int_r^{2r} \left (\mu^{\frac{8+2\alpha+2\beta}{p-1}+3-N} \int_{\partial B_{\mu}} |\nabla u|^2 dS \right )d \mu \le C.
\end{align*}Therefore, we get the boundedness of the fifth and sixth terms in the right hand side of (\ref{eq:2.14}). Utilizing
H\"{o}lder's inequality and (\ref{eq:4.1}), we find
\begin{align*}
\dfrac{1}{r^2} & \int_r^{2r} \int_{\iota}^{\iota+r} \dfrac{\mu^3}{2} \dfrac{d}{d \mu} \left [
\mu^{2\lambda +1-N} \int_{\partial B_{\mu}} \Big (\lambda \mu^{-1}u+\dfrac{\partial u}{\partial r}
\Big )^2 \right ] d \mu d\iota \\[0.13cm]
= & \dfrac{1}{2r^2}\int_r^{2r} \left \{(\iota+r)^{2\lambda +4-N} \int_{\partial B_{\iota+r}}
\Big [\lambda (\iota+r)^{-1}u+\dfrac{\partial u}{\partial r} \Big ]^2-
\iota^{2\lambda+4-N} \int_{\partial B_r}
\Big [\lambda \iota^{-1}u+\dfrac{\partial u}{\partial r} \Big ]^2\right \} \\[0.13cm]
& -\dfrac{3}{2r^2} \int_r^{2r} \int_{\iota}^{\iota+r} \mu^{2\lambda+3-N} \int_{\partial B_{\mu}}\left (\lambda
\mu^{-1} u+\dfrac{\partial u}{\partial r} \right )^2 \\[0.13cm]
\le & \dfrac{C}{r^2} \int_{B_{3r}\backslash B_r} |x|^{2\lambda +2-N} \left (u^2 +
|x|^2 \left (\dfrac{\partial u}{\partial r} \right )^2 \right )dx \\[0.1cm]
\le & C.
\end{align*}Consequently, we obtain the desired result.
\end{proof}
\begin{lemma}\label{eq:l4.2} For $\kappa>0$, define
the {\it blow-down} sequence
\begin{equation*}
u^{\kappa}(x):=\kappa^{\frac{4+\alpha+\beta}{p-1}}u(\kappa x),
\end{equation*}then $u^{\kappa}$ strongly converges to $u^{\infty}$ in $W_{loc}^{1,2}
(\mathbb{R}^N) \cap L_{loc}^{p+1}(\mathbb{R}^N)$. Furthermore, $u^{\infty}$ is a homogeneous stable solution of (\ref{eq:1.1}).
\end{lemma}
\begin{proof}
Since $u$ is a stable solution of (\ref{eq:1.1}), we find
\begin{align}\label{eq:4.2}
& p \int_{\mathbb{R}^N} |x|^{\alpha}|u^{\kappa}|^{p-1} \zeta^2(x)dx=p\int_{\mathbb{R}^N}|\kappa x|^{\alpha}\kappa^{4+\beta}|u(\kappa x)|^{p-1}\zeta^2(x)dx \nonumber \\[0.1cm]
& =p\kappa^{4+\beta-N}\int_{\mathbb{R}^N}|y|^{\alpha}|u(y)|^{p-1}\psi^2(y)dy\quad\; \mbox{taking}\;\ \psi(y):=\zeta(x),\; x=\dfrac{y}{\kappa} \nonumber \\[0.1cm]
& \le \kappa^{4+\beta-N}\int_{\mathbb{R}^N}\dfrac{|\Delta \psi(y)|^2}{|y|^{\beta}}dy \nonumber \\[0.1cm]
& =\int_{\mathbb{R}^N} \dfrac{|\Delta \zeta|^2}{|x|^{\beta}}dx.
\end{align}Thus, $u^{\kappa}$ is a stable solution of (\ref{eq:1.1}). Furthermore, it follows from (\ref{eq:2.4})
that
\begin{align*}
& \int_{B_r(x)} \left [|y|^{-\beta} \left (\Delta u^{\kappa}\right )^2+|y|^{\alpha} |u^{\kappa}|^{p+1} \right ] dy \nonumber \\[0.1cm]
& = \kappa^{4+\beta+\frac{8+2\alpha+2\beta}{p-1}-N} \int_{B_{\kappa r}(x)} \left [ |z|^{-\beta}|\Delta u(z)|^2+|z|^{\alpha} |u(z)|^{p+1} \right ] dz \nonumber \\[0.1cm]
& \le Cr^{N-4-\beta-\frac{8+2\alpha+2\beta}{p-1}},
\end{align*}and applying H\"{o}lder's inequality yields
\begin{align*}
\int_{B_r(x)} \left | u^{\kappa} \right |^2 dz & \le \left (\int_{B_r(x)} |z|^{\alpha} \left |u^{\kappa}\right |^{p+1} dz\right )^{\frac{2}{p+1}}
\left (\int_{B_r(x)} |z|^{-\frac{2\alpha }{p-1}} dz \right )^{\frac{p-1}{p+1}}\\[0.1cm]
& \le Cr^{N-2\lambda}.
\end{align*}Clearly, we also obtain
\begin{align*}
\int_{B_r(x)} |\Delta u^{\kappa}|^2 dz & =\int_{B_{\kappa r}(x)} \kappa^{2\lambda+4-N}|z|^{\beta}\dfrac{|\Delta u(z)|^2}{|z|^{\beta}}dz\\
& \le Cr^{N-4-2\lambda}.
\end{align*}\vskip .06in
By elliptic regularity theory,
$u^{\kappa}$ is uniformly bounded in $W^{2,2}_{loc}(\mathbb{R}^N)$.
Moreover, $u\in C^4(\mathbb{R}^N)$ implies $u^{\kappa} \in L^{p+1}_{loc}(\mathbb{R}^N)$. Then
we can suppose that $u^{\kappa} \rightharpoonup u^{\infty}$ weakly
in $W^{2,2}_{loc}(\mathbb{R}^N) \cap L^{p+1}_{loc}(\mathbb{R}^N)$ (if necessary, we can extract a subsequence).
Now, using the standard embeddings, we get
$u^{\kappa} \to u^{\infty}$ strongly in $W^{1,2}_{loc}(\mathbb{R}^N)$. Therefore, applying the interpolation inequality
between $L^q$ spaces with $q\in (1,p+1)$, we get that, for any ball $B_r$
\begin{equation}\label{eq:4.3}
\|u^{\kappa}-u^{\infty}\|_{L^q(B_r)}\le \|u^{\kappa}-u^{\infty}\|_{L^1(B_r)}^t \|u^{\kappa}-u^{\infty}\|_{L^{p+1}(B_r)}^{1-t} \to 0,\;\;\mbox{as}\;\; \kappa \to +\infty,
\end{equation}where $t \in (0,1)$ satisfying $\dfrac{1}{q}=t +\dfrac{1-t}{p+1}$. Next, combining with the definition of $u^{\kappa}$ and (\ref{eq:4.2}), we conclude that, for any $\zeta \in C_0^2 (\mathbb{R}^N)$
\begin{align*}
\int_{\mathbb{R}^N} \dfrac{\Delta u^{\infty}}{|x|^{\beta}}\Delta \zeta -|x|^{\alpha} |u^{\infty}|^{p-1}u^{\infty} \zeta = \lim\limits_{\kappa \to \infty}
\int_{\mathbb{R}^N} \dfrac{\Delta u^{\kappa}}{|x|^{\beta}}\Delta \zeta -|x|^{\alpha} |u^{\kappa}|^{p-1}u^{\kappa} \zeta, \\[0.15cm]
\int_{\mathbb{R}^N} \dfrac{\left (\Delta \zeta \right )^2}{|x|^{\beta}}-p |x|^{\alpha} |u^{\infty}|^{p-1} \zeta^2 = \lim\limits_{\kappa \to \infty} \int_{\mathbb{R}^N} \dfrac{(\Delta \zeta)^2}{|x|^{\beta}} -p |x|^{\alpha} |u^{\kappa}|^{p-1} \zeta^2 \ge 0,
\end{align*}that is, $u^{\infty} \in W_{loc}^{2,2}
(\mathbb{R}^N) \cap L_{loc}^{p+1}(\mathbb{R}^N)$ is a stable solution of (\ref{eq:1.1}) in $\mathbb{R}^N$. \vskip .1in
From the boundedness and monotonicity of $\mathcal{M}(r;0,u)$, it follows that for any $0<r_1<r_2<+\infty$,
\begin{equation*}
\lim\limits_{\kappa \to \infty} \Big [\mathcal{M}(\kappa r_2;0,u)-\mathcal{M}(\kappa r_1; 0,u)\Big ]=0.
\end{equation*}Again using the scaling invariance and Theorem \ref{eq:t2.1}, we get
\begin{align*}
0 & =\lim\limits_{\kappa \to \infty} \left [\mathcal{M}\left (r_2;0,u^{\kappa}\right )-\mathcal{M}\left (r_1;0,u^{\kappa}
\right )\right ] \\[0.1cm]
& = \lim\limits_{\kappa \to \infty} \int_{r_1}^{r_2} \dfrac{d }{d \mu}\mathcal{M}\left (\mu; 0,u^{\kappa} \right ) d \mu \\[0.1cm]
& \ge C(N,p,\alpha,\beta) \int_{B_{r_2} \backslash B_{r_1}} |x|^{2+2\lambda-N}\left (\lambda |x|^{-1}
u^{\infty}+\dfrac{\partial u^{\infty}}{\partial r}\right )^2dx.
\end{align*}Adopting the same calculation as in Theorem \ref{eq:t2.1}, we obtain
that $u^{\infty}$ is homogeneous.
\end{proof}
\begin{lemma}\label{eq:l4.3}
$\lim\limits_{r \to \infty} \mathcal{M} (r;0,u)=0$.
\end{lemma}
\noindent {\it Proof.} Since $u^{\infty}$ is a homogeneous, stable solution of (\ref{eq:1.1}),
it follows from Theorem \ref{eq:t1.1} that
\begin{equation*}
u^{\infty} \equiv 0.
\end{equation*}Combining (\ref{eq:4.3}) with the above equality, we find that
\begin{equation*}
\lim\limits_{\kappa \to +\infty} u^{\kappa} =0, \;\ \mbox{strongly}\; \mbox{in}\; \ L^2(B_6),
\end{equation*}
i.e.,
\begin{equation*}
\lim\limits_{\kappa \to +\infty} \int_{B_6} |u^{\kappa}|^2 =0.
\end{equation*}From the uniform boundedness of $\Delta u^{\kappa}$ in $L^2(B_6)$, we get
\begin{equation*}
\lim\limits_{\kappa \to \infty} \int_{B_6} \left |u^{\kappa}\Delta u^{\kappa}\right | \le \lim\limits_{\kappa \to \infty}
\left (\int_{B_6} \left |u^{\kappa}\right |^2 \right )^{\frac{1}{2}} \left ( \int_{B_6}\left |\Delta u^{\kappa}\right |^2 \right )^{\frac{1}{2}}=0.
\end{equation*}Therefore, it follows from (\ref{eq:2.3}) that
\begin{equation*}
\lim\limits_{\kappa \to +\infty} \int_{B_1} \left | \Delta u^{\kappa}\right |^2 +|x|^{\alpha} |u^{\kappa}|^{p+1} \le
C \lim\limits_{\kappa \to +\infty} \int_{B_6} \left |u^{\kappa}\right |^2+\left |u^{\kappa} \Delta u^{\kappa}\right | =0.
\end{equation*}A direct application of the interior $L^p$-estimates gives
\begin{equation*}
\lim\limits_{\kappa \to +\infty} \int_{B_2} \sum\limits_{j \le 2}
|\nabla^j u^{\kappa}|=0,
\end{equation*}which implies that, along a suitable subsequence $\kappa_i \to +\infty$,
\begin{equation*}
\int_1^2 \left ( \sum\limits_{i=1}^{\infty} \int_{\partial B_r} \sum\limits_{j\le 2}
|\nabla^j u^{\kappa_i}|^2\right ) dr \le \sum\limits_{i=1}^{\infty} \int_{B_{2r}\backslash B_r}
\sum\limits_{j \le 2} |\nabla^j u^{\kappa_i}|^2 \le 1.
\end{equation*}Hence, there exists a $\gamma \in (1,2)$ such that
\begin{equation*}
\lim\limits_{i \to \infty} \|u^{\kappa_i}\|_{W^{2,2}(\partial B_{\gamma})}=0.
\end{equation*}
Now, combining the above results with the scaling invariance of $\mathcal{M}(r;0,u)$, we obtain
\begin{equation*}
\lim\limits_{i \to \infty}\mathcal{M}(\kappa_i \gamma; 0,u)=\lim\limits_{i \to \infty}\mathcal{M}(\gamma;0,u^{\kappa_i})=0.
\end{equation*}Again since $\kappa_i \gamma \to +\infty$ and $\mathcal{M}(r;0,u)$ is non-decreasing in $r$, we get
\begin{equation*}
\lim\limits_{r \to \infty} \mathcal{M}(r;0,u)=0.\eqno \square
\end{equation*}
Since $u \in C^4(\mathbb{R}^N)$, we get $\lim\limits_{r \to 0} \mathcal{M}(r;0,u)=0$. Again using the monotonicity of $\mathcal{M}(r;0,u)$
and Lemma \ref{eq:l4.3}, we get
\begin{equation*}
\mathcal{M}(r;0,u)=0,\quad \mbox{for}\;\; \mbox{all}\;\; r>0.
\end{equation*}Therefore, combining with Theorem \ref{eq:t2.1}, we conclude that $u$ is homogeneous and by Theorem \ref{eq:t1.1}
\begin{equation*}
u\equiv 0. \eqno \square
\end{equation*}
\vskip .3in
\noindent {\bf Acknowledgements:} The author wishes to express his warmest thanks to the Chern Institute of Mathematics, Nankai University (where part of this work was done) for their warm hospitality.
\section{Introduction}
\noindent Survival analysis aims to analyze the expected duration of time until events happen. It tries to answer questions like: what proportion of a population will survive past a certain time (e.g., 5 years)? At what rate will they die or fail?
It is a very important clinical application, and many efforts have been made to search for biomarkers from omics data that are significantly related to patient death \cite{shedden2008gene,tibshirani1997lasso,bair2004semi,bair2006prediction,park2007l1}. Recent technological innovations are enabling scientists to capture big whole slide images (WSIs) at increasing speed and resolution for diagnosis. The learning model is required to correctly predict the survival risk of each patient from his/her tumor tissue whole slide images. The more precise the risk assessment for a cancer patient, the better the patient can be treated. Compared with genomics data, pathological images can present tumor growth and morphology in extremely detailed, gigapixel resolution, which is extremely useful for cancer study~\cite{warth2012novel, yuan2012quantitative}.
Diagnosis is extremely laborious and highly dependent on expertise, requiring pathologists to carefully examine the biopsies under the microscope~\cite{bejnordi2017diagnostic}. To reduce the risk of misdiagnosis, pathologists have to conduct a thorough inspection of the whole slide, which makes the diagnosis quite cumbersome. Automatic analysis of histology has become one of the most rapidly expanding fields in medical imaging. Computer-aided diagnosis in digital pathology can not only alleviate pathologists' workloads, but also help to reduce the chance of diagnostic mistakes. However, using WSIs for survival prediction is very challenging for several reasons: 1) pathological images in real cancer datasets might be at the terabyte ($10^{12}$ pixels) level, which makes most models computationally infeasible; 2) there are large variations of textures and biological structures arising from tumor heterogeneity, and since a solid tumor may have a mixture of tissue architectures and structures, multiple WSIs from different parts of the patient's tissue are collected for diagnosis; 3) labels are at the patient level while each patient might have multiple WSIs for diagnosis. Those terabyte-size WSIs from one patient share the survival label, which makes the problem more challenging.
\subsection{Related Work}
During recent years, many methods have been proposed for survival prediction using pathological slides. They can be divided into two categories: ROI-based and WSI-based methods.
\textbf{Region of Interest Analysis.}
Pathological images usually come with a very high resolution, which makes most existing models and algorithms computationally infeasible, even though the high resolution of image data greatly benefits survival analysis with more precise information.
Previously, due to the lack of computational power, most of the literature focused on region of interest (ROI) patches selected by pathologists from WSIs~\cite{gurcan2009histopathological}.
Instead of handling original WSIs, ROI-based methods extracted hand-crafted features from ROIs for prediction~\cite{yuan2012quantitative,Barker201660,Zhu2016Lung,yao2016imaging,Wang2014Novel,yu2016predicting, cheng2017identification,yao2015computer}. Wang et al.~\cite{Wang2014Novel} proposed a novel framework to first segment cells in annotated patches and then extract cellular morphological properties from those cells, resulting in 166 imaging features. Yu et al.~\cite{yu2016predicting} extracted 9,879 quantitative image features from annotated regions of interest, and their results suggest that automatically derived image features can predict the prognosis of lung cancer patients and thereby contribute to precision oncology. Beyond classical cell detection, Yao et al.~\cite{yao2016imaging} first used deep subtype cell detection to classify different cell subtypes and then extracted features from cellular subtype information. Cheng et al.~\cite{cheng2017identification} used a deep auto-encoder to cluster cell patches into different types and then extracted topological features to characterize cell type distributions from ROIs for prediction. These methods extracted hand-crafted features based on nuclei detection and segmentation, and those features were considered to represent prior knowledge of boundary, region or shape. However, hand-crafted features are limited in representation power and capability.
Recently, with the advance of deep neural networks, deep learning-based survival models have been proposed to seek more powerful deep representations~\cite{Katzman2016deepsurv, zhu2016deep, yao2017deep, mobadersany2018predicting}.
Katzman et al. first proposed a deep fully connected network (DeepSurv) to represent the nonlinear risk function~\cite{Katzman2016deepsurv}. They demonstrated that DeepSurv outperformed the standard linear Cox proportional hazards model. Another improvement is deep convolutional survival learning (DeepConvSurv), which is the first attempt to use pathological images in a deep survival model~\cite{zhu2016deep}. Later, Yao et al.~\cite{yao2017deep} integrated the genomic modality with DeepConvSurv for survival prediction using multi-modality data. However, DeepConvSurv is designed to use ROI patches pre-selected by pathologists from WSIs for convolution operations. A small set of image tiles might not completely and properly reflect the patients' tumor morphology. Also,
those methods perform average pooling to achieve patient-wise predictions from patch-based results. Such a combination cannot effectively aggregate predictions from the patch level and needs further attention.
Thus, it would be very helpful if we could facilitate knowledge discovery from big whole slide images.
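Concretely, deep survival models of this type replace the linear predictor $\beta^{T}x$ of the classical Cox model by a network output $\hat{h}_{\theta}(x)$ and are trained by minimizing the negative log Cox partial likelihood; a standard form (assuming no tied event times) is
\begin{equation*}
\mathcal{L}(\theta)=-\sum_{i:\,E_i=1}\left (\hat{h}_{\theta}(x_i)-\log \sum_{j \in \mathcal{R}(T_i)} e^{\hat{h}_{\theta}(x_j)} \right ),
\end{equation*}where $E_i$ is the event indicator of patient $i$, $T_i$ is the observed time, and $\mathcal{R}(T_i)=\{j: T_j \ge T_i\}$ is the risk set of patients still at risk at time $T_i$.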
\textbf{Whole-slide Image Analysis.}
With detailed and dense annotations on WSIs, a series of approaches in whole-slide image analysis have been proposed for a variety of applications including classification, detection and segmentation~\cite{wang2016deep, bejnordi2017diagnostic, kong2017cancer, li2018cancer, liu2017detecting}. Applying deep learning for supervised learning in computational pathology has achieved promising results. However, the applicability of these models in clinical practice remains in question because of the wide variance of clinical samples. Extensive and time-consuming manual annotation in clinical practice is impractical. Moreover, the success of those applications is built on integrating detailed patch contents and using labor-intensive annotations, which might not be applicable to survival prediction.
To properly address the shortcomings of current models, one possible direction is the weakly supervised setting. Recently, researchers have developed many weakly supervised algorithms for medical images, including weakly-supervised X-ray screening~\cite{wangyi2019weakly, yan2018weakly} and WSI classification~\cite{Hou2016PatchbasedCN, mercan2018multi, wang2018weakly, wang2019rmdl, wang2019weakly}.
WSI classification models are designed to find the most differentiated regions corresponding to different tumor types. A two-step approach is usually used: the first step trains a classifier at the tile level, and then the predicted scores of the tiles within a WSI are aggregated with various strategies. However, learning survival from histology and developing a prognosis model is considerably more difficult, as risk is often reflected by a range of histology patterns that correspond to varying degrees of disease progression. Tumor heterogeneity plays an important role in cancer study, which includes inter-tumor and intra-tumor heterogeneity~\cite{jamal2015translational}. Inter-tumor heterogeneity refers to the differences found between tumors in different patients. Intra-tumor heterogeneity refers to distinct tumor cell populations within the same tumor specimen. Most recent weakly-supervised WSI classification focused on localizing the most differentiated regions corresponding to tumor types across patients. Therefore, those methods are more likely to capture inter-tumor heterogeneity between tumors or tumor subtypes. Knowing how to label one person's tumor type may not be enough to study the degree of tumor progression. The pathophysiology of tumor progression and proliferation is complex,
and thus a new image-based prognosis model which can integrate information from heterogeneous tissue regions is a better approach.
Additionally, the existing weakly-supervised WSI classification task is at the slide level, while survival prediction is at the patient level (one patient might have multiple whole slide images). Deriving patient-level decisions from slide-level results is not the objective of those studies.
To achieve survival prediction from whole slide images without using annotations, Zhu et al.~\cite{zhu2017wsisa} proposed a patch-based two-stage framework to predict patients' survival outcomes. In the first stage, patches are extracted from the WSIs and clustered into different patterns, defined as ``phenotypes'', according to their visual appearances.
Then WSISA~\cite{zhu2017wsisa} adopted DeepConvSurv~\cite{zhu2016deep} to select important patch clusters and aggregated those clusters for the final prediction. Although this framework has the practical merit of considering important patch clusters, it is hard to incorporate into the state-of-the-art deep learning paradigm, as the whole approach consists of separate steps. In addition, it is not a scalable solution, because the first stage becomes significantly inefficient if more patches are sampled. One recent work~\cite{tang2019capsurv} proposed CapSurv by introducing the Capsule network~\cite{sabour2017dynamic}. However, CapSurv still has issues similar to WSISA, as its main framework follows the WSISA pipeline.
The relationship of tissue patterns on a WSI is of great importance for survival analysis. Li et al.~\cite{li2018graph} proposed a graph convolutional network (GCN) based method to model such relationships of patches in the WSI and then learn an effective representation for survival prediction. However, this method requires detailed graph structure knowledge to construct a complete graph representation for effective GCN training, which is not flexible and needs prior knowledge.
\subsection{Contributions}
Though many works can be found on WSI analysis for segmentation, classification and detection, there are limited works on weakly-supervised learning for survival prediction. Based on the literature review, a method that can adaptively learn patient-level representations with limited prior knowledge is needed. In this study, we propose a novel framework, referred to as Deep Attention Multiple-Instance Survival Learning (DeepAttnMISL), for whole slide images. In contrast to standard supervised learning, multiple instance learning (MIL) considers a set of bags, each containing multiple feature vectors referred to as instances. The available label is only assigned at the bag level, and the labels of individual instances in the bag are not known. In MIL, not all instances are necessarily relevant, and some of them in the bag might not be relevant to certain labels. By observation, if a slide is from a low-risk patient, most of its tiles might be benign or contain low-grade tumor. In contrast, if a slide is from a high-risk patient, at least one of its tiles must contain malignant tumor. This formalization of the WSI survival learning problem is an example of the general standard multiple instance assumption, and thus MIL is a good fit for such a problem.
Our preliminary work, which showed that deep multiple instance learning alone can achieve better prognosis performance, was published in MICCAI 2019~\cite{yao2019deep}. Compared to that work, we offer new contributions in the following aspects. We introduce an attention mechanism into deep multiple instance survival learning. The proposed DeepAttnMISL not only uses the siamese MI-FCN network to learn features from different phenotype clusters, but also largely improves performance with an attention-based MIL pooling layer that performs a trainable weighted aggregation.
More importantly, the proposed framework can effectively highlight prognosis-related clusters and has better interpretability and performance than our preliminary work~\cite{yao2019deep}. The contributions can be summarized as follows.
\begin{itemize}
\item Since phenotype clusters provide morphology-specific representations, the proposed DeepAttnMISL first extracts phenotype-level information from patch-level features through a siamese MIL-based network. The attention mechanism is then used to aggregate these phenotype features into patient-level information with a trainable weighted average whose weights can be fully parameterized by neural networks. Such attention-based aggregation is much more flexible than the fixed pooling operators in recent work~\cite{yao2019deep, zhu2017wsisa,tang2019capsurv}.
\item With the advantage of MIL and the attention mechanism, the proposed model has good interpretability for finding important patterns of patients. The identified important regions and patches are more likely to be associated with prognosis, and overall the proposed model achieves better patient-level predictions and improves prediction performance over our previous work~\cite{yao2019deep}.
\item To evaluate the performance of the proposed DeepAttnMISL model, two large WSI datasets on lung and colorectal cancer are used, and extensive experimental results verify its effectiveness.
\end{itemize}
Our method can efficiently exploit and utilize discriminative patterns in whole slide pathological images to perform accurate survival predictions for patients. Additionally, we present results representing a patient's treatment group to illustrate how the proposed model can be viewed as a treatment recommender system. Results validate that the proposed model can accurately model the risk functions of the population and thus guide treatment decisions to improve patient lifespan.
\begin{figure*}[htb]
\centering
\includegraphics[width=1\linewidth]{fig/overview.pdf}
\caption{An overview of the proposed DeepAttnMISL model.}
\label{fig: Framework}
\end{figure*}
\section{Methodology}
Consider a set of $N$ patients $\{X_i\}, i=1 \dots N$, where each patient has a follow-up label $(t_i, \delta_i)$ indicating overall survival. The observation time $t_i$ is either a survival time or a censored time for each patient.
$\delta_i$ is the indicator which is 1 for an uncensored instance (death occurs during the study) and 0 for a censored instance.
The survival model predicts a value of a target variable $O$ for a given patient. As discussed above, patient $X_i$ may have multiple WSIs, and our goal is to predict the corresponding target $o_i$ from those imaging data. Since we do not have pixel-level annotations but only patient-level information, this weakly-supervised learning problem can be solved by Multiple Instance Learning (MIL).
In the MIL setting, patient $X$ is a bag of instances, $X = \{x_1, ..., x_C\}$, and the number of instances $C$ may vary across bags. Furthermore, we assume that individual true labels exist for the instances within a bag, i.e., $y_1, ..., y_C$, but those values remain unknown during training. One important assumption is that there is neither ordering nor dependency among instances within a bag, and a MIL model must therefore be permutation-invariant.
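As an illustration, the permutation-invariance requirement can be sketched with a minimal numpy example (the pooling operator and names here are illustrative, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)

# A bag of C instance feature vectors (rows); bag size may differ per patient.
bag = rng.normal(size=(5, 8))

def mean_pool(instances):
    """A simple permutation-invariant MIL pooling operator."""
    return instances.mean(axis=0)

# Shuffling the instances leaves the bag-level representation unchanged,
# which is the invariance a valid MIL model must satisfy.
shuffled = bag[rng.permutation(len(bag))]
assert np.allclose(mean_pool(bag), mean_pool(shuffled))
```

The same invariance holds for max pooling and for the attention-based pooling used later, since all of them aggregate instances through symmetric functions.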
Instances within the bag can be defined as patches sampled from WSIs, and several studies~\cite{campanella2019clinical,Wulczyn2020} have developed MIL-based deep learning approaches for automated cancer diagnosis and prognosis. In our case, we introduce phenotype clusters as the instances of the bag instead of individual patches. Cancer histology contains rich phenotypic information that reflects underlying molecular processes and disease progression. The phenotype of a pathological slide is a combination of the tissue's various observable characteristics, which provides a convenient visual representation of disease aggressiveness. Recent studies have shown that phenotypic information can be useful for prognosis prediction~\cite{zhu2017wsisa,mobadersany2018predicting}. The purpose of the proposed framework is to predict patient outcomes from whole slide images. The study involves partitioning the original slides into a number of phenotype patterns, where each phenotype describes a type of histology pattern and includes a number of smaller patches or tiles.
\subsection{DeepAttnMISL}
Fig.\ref{fig: Framework} shows an overview of the proposed Deep Attention Multiple Instance Survival Learning (DeepAttnMISL). In Multiple Instance Learning, each data sample is a bag of instances, and in our problem a bag corresponds to one patient. Each patient $X_i$ may contain multiple whole slides, and it is not practical to use whole slides as instances due to their extremely large size. We choose phenotypes instead of raw sampled patches as instances within the bag because this considerably reduces the complexity of the problem, as the number of heterogeneous patches is very large. By using phenotype patterns constructed by clustering, we can build the model for different types of tissues to extract morphology-specific features. To learn patient-level information from phenotype clusters, we design a Multiple Instance Fully Convolutional Network (MI-FCN) running inside our deep learning architecture, with weights shared among the sub-networks as in a siamese architecture. To detect important phenotypes associated with patients' clinical outcomes, an attention-based MIL pooling layer is used to aggregate the phenotype-level representations. The output is the hazard risk, representing how the patient fares within the population for a certain type of disease.
\subsubsection{Sampling and Clustering}
In the first step, we extract patches from all WSIs belonging to the same patient and then cluster them into different phenotypes. To capture detailed information in the images, patches are extracted at 20X (0.5 microns per pixel) objective magnification and fixed to a size of $500 \times 500 \times 3$. In a whole slide image, usually about 50\% of the area is background, and it is easy to select regions that contain tissue rather than background or irregular regions according to pixel values. Even if we only extract tissue patches and ignore background regions, we can still obtain tens of thousands of patches per WSI, which results in a huge number of images for the whole dataset. Different from recent segmentation and detection tasks in whole slide image analysis, our task is a patient-level decision aggregated from patch-level results. As pointed out in~\cite{hou2015efficient}, training patch-based CNNs for weakly supervised learning is very time-costly (several weeks), so we propose to use features from pre-trained models instead of training CNNs to learn features from scratch. We use a model pre-trained on ImageNet (e.g. VGG)~\cite{simonyan2014very} to extract features for each image patch; these features have more representational power than smaller ($50 \times 50$) thumbnail images for representing phenotypes~\cite{zhu2017wsisa}. We then adopt K-means clustering to cluster patches based on their deep learning features. Notice that one patient might have multiple WSIs, and we perform clustering at the patient level instead of over the whole database. Fig.\ref{fig:clustering} shows an example of one patient who has three WSIs sampled from different locations of the biopsy tissue. The corresponding phenotype clusters are shown on the right, and each color marks one phenotype cluster. In this example, we chose to cluster into 10 phenotype patterns.
The results show the effectiveness of this strategy, as similar patches are grouped into the same cluster. This demonstrates that features from the pre-trained model are capable of identifying patterns in whole slide images, and we expect them to be distinctive and informative for the subsequent survival learning task.
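The per-patient clustering step can be sketched as follows, with a hand-rolled Lloyd's K-means standing in for a library implementation and random vectors standing in for the VGG patch features (all names and data here are illustrative):

```python
import numpy as np

def cluster_patient_patches(features, n_clusters=10, n_iter=20, seed=0):
    """Minimal K-means (Lloyd's algorithm) grouping one patient's patch
    features into phenotype clusters. `features` has shape (n_patches, d)."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), n_clusters, replace=False)]
    for _ in range(n_iter):
        # Assign each patch to its nearest center.
        dists = np.linalg.norm(features[:, None] - centers[None], axis=-1)
        labels = dists.argmin(axis=1)
        # Recompute centers; keep the old center if a cluster is empty.
        for k in range(n_clusters):
            if np.any(labels == k):
                centers[k] = features[labels == k].mean(axis=0)
    return labels

# Toy stand-in for deep features of one patient's patches: two well-separated
# synthetic "tissue types" of 30 patches each, 16-dimensional.
rng = np.random.default_rng(1)
feats = np.vstack([rng.normal(0, 0.1, (30, 16)), rng.normal(5, 0.1, (30, 16))])
labels = cluster_patient_patches(feats, n_clusters=2)
# Patches from the same synthetic tissue type end up in the same cluster.
assert len(set(labels[:30])) == 1 and len(set(labels[30:])) == 1
```

In the actual pipeline the input would be the $d$-dimensional pre-trained-network features of a single patient's patches, clustered independently per patient rather than over the whole database.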
\begin{figure}[!htb]
\centering
\includegraphics[width=0.7\linewidth]{fig/clustering.pdf}
\caption{Phenotype patterns visualized after clustering on three WSIs belonging to the same patient.}
\label{fig:clustering}
\end{figure}
By clustering patches from all WSIs of a patient into several distinct phenotype groups, we obtain phenotype groups with varying predictive power for the patient's clinical outcome. The proposed DeepAttnMISL takes phenotypes as multiple inputs and considers their connections when predicting survival outcomes.
\subsubsection{Siamese MI-FCN}
After clustering, the patient is a set of phenotype clusters, and we design a siamese Multiple Instance Fully Convolutional Network (MI-FCN) to learn features from those patterns, \textcolor{red}{similar to the work in \cite{yao2019deep}}. Most well-known pre-trained models were trained on a single-instance basis, with labels associated with each image, which is not the case in our problem. We embed multiple sub-networks running inside our deep learning architecture, with weights shared among them as in a siamese architecture. Each sub-network is a fully convolutional network (FCN) that can learn an informative representation for an individual phenotype of the patient.
The architecture of each Multiple Instance Fully Convolutional Network (MI-FCN) is shown in Fig.\ref{fig:MIL}.
The combination of multiple fully convolutional layers and non-linear activation functions has proven to be a powerful non-linear feature mapping for the multiple instance problem~\cite{yang2017miml}.
The reason for using a fully convolutional network (FCN) without any fully connected layers is that an FCN is more flexible and can handle any spatial resolution, which is needed for the considered problem since the number of patch samples in each phenotype varies.
For each phenotype, the input is a set of features from $m_i$ patches, which can be organized as a $1\times m_i\times d$ tensor ($d$ is the feature dimension, or channel count). The network consists of several layer-pairs of a $1\times 1$ conv layer and a ReLU layer (2 pairs are shown in Fig.\ref{fig:MIL}), and a global pooling layer (e.g. average pooling) is added at the end. For the $j$-th phenotype, the resulting representation is denoted as $\mathbf{r}_j$.
The network receives one phenotype (tensor) as input, focuses on local information, and generates a representation for that phenotype. Since the number of patches in each phenotype varies, the fully convolutional network is flexible enough to handle this scenario.
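Since a $1\times 1$ convolution over a $1\times m_i\times d$ tensor amounts to a shared linear map applied to every patch feature, one conv-ReLU pair followed by global average pooling can be sketched in a few lines of numpy (the weights below are random placeholders for the trained parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
d, hidden = 16, 8          # input feature dimension and number of 1x1 filters

# A 1x1 convolution over a (1, m_i, d) tensor is a shared linear map applied
# to each of the m_i patch features, so it works for any patch count.
W = rng.normal(size=(d, hidden)) * 0.1
b = np.zeros(hidden)

def mi_fcn(phenotype_feats):
    """One conv(1x1)+ReLU pair followed by global average pooling,
    mapping an (m_i, d) phenotype to a fixed-size representation r_j."""
    h = np.maximum(phenotype_feats @ W + b, 0.0)   # (m_i, hidden)
    return h.mean(axis=0)                          # (hidden,)

# Phenotypes with different patch counts yield same-sized representations.
r1 = mi_fcn(rng.normal(size=(12, d)))
r2 = mi_fcn(rng.normal(size=(45, d)))
assert r1.shape == r2.shape == (hidden,)
```

This is exactly why the FCN formulation handles a variable number of patches per phenotype: only the pooled output, not the input length, is fixed.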
\begin{figure}[!htb]
\centering
\includegraphics[width=0.6\linewidth]{fig/MI_FCN.pdf}
\caption{The network architecture in each MI-FCN. }
\label{fig:MIL}
\end{figure}
\subsubsection{Aggregation via Attention-based MIL pooling layer}
Local representations from the MI-FCN encode information of the corresponding phenotype clusters, and aggregating them into a patient-level representation is a necessary step. Let $R=\{\mathbf{r}_1, \mathbf{r}_2, ..., \mathbf{r}_C\}$ be one patient with $C$ phenotype local representations; the goal is to obtain the patient-level representation $\mathbf{z}$. The most straightforward choice is the maximum or mean operator, but the drawback is clear: these operators are pre-defined and non-trainable, and thus not flexible or adjustable to the specific task.
Previous work~\cite{zhu2017wsisa} used a weighted average of cluster features to obtain the patient feature, but this patient-level aggregation was performed in a separate stage, so the whole approach cannot be trained end-to-end from instance level to patient level. A better way to integrate phenotype-level information is to leverage an attention mechanism that considers the importance of each phenotype. In this paper, we propose to use attention-based MIL pooling~\cite{ilse2018attention} for aggregation, which is flexible and adaptive. With this pooling operator, the patient-level representation can be calculated as
\begin{equation}
\mathbf{z} = \sum_{k=1}^C a_k \mathbf{r}_k,
\end{equation}
where
\begin{equation}
a_k = \frac{\exp\{\mathbf{w}^\top \tanh(\mathbf{V}\mathbf{r}_k^\top) \}}{\sum_{j=1}^C \exp\{\mathbf{w}^\top \tanh(\mathbf{V}\mathbf{r}_j^\top)\}}.
\end{equation}
In the weight $a_k$ calculation, $\mathbf{w} \in \mathbb{R}^{L\times1}$ and $\mathbf{V} \in \mathbb{R}^{L \times M}$ are trainable parameters. The hyperbolic tangent $\tanh(\cdot)$ element-wise non-linearity is used to include both negative and positive values for proper gradient flow. The attention-based MIL pooling allows different weights to be assigned to the phenotype clusters within one patient, and hence the final patient-level representation can be highly informative for survival prediction. In other words, it should be able to locate key clusters and provide potential ROIs.
Different from the traditional attention mechanism in which all instances are sequentially dependent~\cite{lin2017structured, raffel2015feed}, multiple instance learning assumes all instances are independent. As the phenotypes in our problem are naturally independent of each other, the attention mechanism used in MIL pooling is beneficial for achieving good results.
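The attention pooling above can be transcribed directly in numpy, with randomly initialized stand-ins for the trainable $\mathbf{w}$ and $\mathbf{V}$ (a sketch, not the training code):

```python
import numpy as np

rng = np.random.default_rng(0)
M, L, C = 8, 4, 6                      # feature dim, attention dim, #phenotypes

V = rng.normal(size=(L, M)) * 0.1      # trainable V in the real model
w = rng.normal(size=(L,)) * 0.1        # trainable w in the real model
R = rng.normal(size=(C, M))            # phenotype representations r_1..r_C

# a_k = softmax_k( w^T tanh(V r_k^T) );  z = sum_k a_k r_k
scores = w @ np.tanh(V @ R.T)          # (C,) unnormalized attention scores
a = np.exp(scores - scores.max())      # subtract max for numerical stability
a /= a.sum()                           # softmax weights over phenotypes
z = a @ R                              # patient-level representation

assert np.isclose(a.sum(), 1.0) and z.shape == (M,)
```

Because the softmax normalizes over the bag, shuffling the rows of $R$ permutes the weights identically, so the pooled $\mathbf{z}$ stays permutation-invariant as MIL requires.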
\subsubsection{Loss Function}
After attention-based MIL pooling, we generate the patient-level aggregation from all local representations. For the $i$-th patient sample passing through the proposed model, the output hazard risk is denoted as $\mathbf{o}_i$. Table \ref{tab:Bag_arch} presents the architecture details of the proposed DeepAttnMISL. The input of our model is the set of a patient's phenotype features, organized as $[(1\times m_1 \times d), (1\times m_2 \times d),..., (1\times m_C \times d)]$, where $C$ is the number of phenotypes and $m_i$ is the number of patches in the $i$-th phenotype.
\begin{table}[!htb] \caption{The architecture of DeepAttnMISL.}
\begin{center}
\begin{tabular}{l|l|l }
\hline
{Layer} & {Input} & {Output size} \\ \hline \hline
MI-FCN $i$ & 1 $\times$ $m_i \times d$ & $64$ ($\mathbf{r}_i$)\\ \hline
Attention MIL pooling & $64 \times C$ & 64 \\ \hline
Fully-Con. & 64 & 32 \\ \hline
Fully-Con. & 32 & 1 ($\mathbf{o_i}$) \\ \hline
\end{tabular}
\end{center}
\label{tab:Bag_arch}
\end{table}
Denote the label of the $i$-th patient as $(t_i, \delta_i)$, where $t_i$ is the observed time.
We assume that censoring ($\delta = 0$, death not observed) is non-informative in that, given $\mathbf{x}_i$, the event and censoring times for the $i$-th patient are independent. Let $t_1<t_2<\dots <t_N$ denote the ordered event times. The risk set $R(t_i)$ is the set of all individuals still under study at time $t_i$; that is, a patient $j$ in the risk set has a survival time equal to or larger than $t_i$ ($t_j \ge t_i$). Conditioned on the existence of a unique event at a particular time $t_i$, the probability that the death event occurs in patient $i$ is
\begin{equation}
L_i = \frac{\exp(\mathbf{o}_i)}{\sum_{j \in R(t_i)}\exp(\mathbf{o}_j)}.
\end{equation}
Assuming the patients' events are statistically independent, the joint probability of all death events, conditioned on the existence of events at those times, is the partial likelihood:
\begin{equation}
L = \prod_{i:\delta_i=1}\frac{\exp(\mathbf{o}_i)}{\sum_{j \in R(t_i)}\exp(\mathbf{o}_j)}.
\end{equation}
The corresponding log partial likelihood is
\begin{align}
l = \log(L) & = \sum_{i:\delta_i=1}\Big(\mathbf{o}_i-\log\sum_{j \in R(t_i)}\exp(\mathbf{o}_j)\Big) = \sum_{i}\delta_i\Big(\mathbf{o}_i-\log\sum_{j \in R(t_i)}\exp(\mathbf{o}_j)\Big).
\end{align}
This function can be maximized over the network parameters to produce maximum partial likelihood estimates; equivalently, we can minimize the negative log partial likelihood.
We therefore use the negative log partial likelihood as the loss function in our model, as shown below:
\begin{align}\label{eq:survival_loss}
L(\mathbf{o}_i) =\sum_{i}\delta_i\Big(-\mathbf{o}_i + \log\sum_{j: t_j \ge t_i}\exp(\mathbf{o}_j)\Big).
\end{align}
In a simplified view, the loss function contributes to overall concordance by penalizing discordant pairs, i.e., cases where the predicted risk of a patient who survives longer exceeds that of a patient with shorter survival.
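The negative log partial likelihood above admits a direct numpy transcription; the sketch below (not the training code, data is toy) also illustrates that a concordant risk ordering yields a lower loss than a discordant one:

```python
import numpy as np

def neg_log_partial_likelihood(risks, times, events):
    """Sum over uncensored patients of  -o_i + log(sum_{j: t_j >= t_i} exp(o_j)),
    i.e. the negative log Cox partial likelihood."""
    loss = 0.0
    for i in range(len(risks)):
        if events[i] == 1:                       # only uncensored events contribute
            at_risk = risks[times >= times[i]]   # risk set R(t_i)
            loss += -risks[i] + np.log(np.exp(at_risk).sum())
    return loss

risks = np.array([2.0, 1.0, 0.5, -0.3])   # higher risk <-> shorter survival
times = np.array([1.0, 2.0, 3.0, 4.0])
events = np.array([1, 1, 0, 1])           # third patient is censored

good = neg_log_partial_likelihood(risks, times, events)        # concordant
bad = neg_log_partial_likelihood(risks[::-1], times, events)   # reversed risks
assert good < bad
```

In practice this quantity is computed over a mini-batch of patients and minimized with respect to the network parameters producing $\mathbf{o}_i$.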
Different from other deep models that use the same loss function~\cite{Katzman2016deepsurv,zhu2016deep, zhu2017wsisa}, the proposed model can better fit realistic whole slide imaging data of patients and learn complex interactions using a deep multiple instance representation that covers both holistic and local information.
Since a patient's risk is correlated with phenotypes from WSIs, the proposed framework can efficiently exploit phenotypes via deep multiple instance learning and the attention mechanism for clinical outcome prediction at the patient level.
\section{Experiments}
\subsection{Dataset Description}
To validate the performance of the proposed DeepAttnMISL, we used two very large datasets on lung and colorectal cancers with high-resolution WSIs.
They are the National Lung Screening Trial (NLST)~\cite{national2011national} and the Molecular and Cellular Oncology (MCO) study~\cite{2015mco,jonnagaddala2016integration}. NLST is a very large lung cancer dataset collected by the National Cancer Institute's Division of Cancer Prevention (DCP) and Division of Cancer Treatment and Diagnosis (DCTD). The MCO study is a collection of imaging, specimen, clinical and genetic data from over 1,500 Australian individuals who underwent curative resection for colorectal cancer from 1994 to 2010. Clinical and pathological data were collected for all cases, including follow-up data. The WSI collection in the MCO study consists of more than 1,500 WSIs representing at least one typical section from each tumour case, stained with hematoxylin and eosin and scanned using a 40x objective. We use different experimental comparison settings on the two datasets because we only have annotations locating tumor regions for NLST. Both datasets suit WSI-based models, which do not require ROI labelling, but more extensive experiments with ROI-based comparisons can only be made on the NLST dataset.
The numbers of WSIs and patients in each dataset are shown in Table \ref{tab:WSIs_number}.
State-of-the-art WSI models~\cite{zhu2017wsisa, tang2019capsurv} need to control the scale of the data, as they face significant computational issues with very large numbers of patches. They sampled hundreds of patches per WSI and collected around 20K-200K patches in total. One advantage of the proposed model is its computational efficiency: it uses MIL with attention to aggregate 1D deep features from pre-trained models instead of training patch-based CNNs, which is very time-costly~\cite{hou2015efficient}.
For the purpose of training baseline WSI survival models, we first extract a total of 130K and 275K patches for MCO and NLST, respectively. We then sample more patches on the MCO dataset, collecting 915K patches so that each WSI has more than 500 patches. The MCO study has more than 1,000 patients, which is much larger than the data used in recent work~\cite{zhu2017wsisa, tang2019capsurv}.
\begin{table}[!htb]
\begin{center}
\caption{The numbers of WSIs, patients, patches, and the average number of patches per WSI extracted in each dataset.} \label{tab:WSIs_number}
\begin{tabular}{c|c|c|c}
\hline
{Dataset} & {NLST} &{MCO\_130K} & MCO\_1M \\ \hline
\#patients & 387 & 1,146 & 1,146 \\ \hline
\#WSIs & 1,177 & 1,614 & 1,614\\ \hline
\#patches & 275,244 & 132,910 & 915,324\\ \hline
\#patches/WSI & 234 & 82 & 567\\ \hline
\end{tabular}
\end{center}
\end{table}
\subsection{Implementation details}
For training, we use Adam optimization with weight decay $5 \times 10^{-4}$. The learning rate is set to $10^{-4}$; training monitors the loss on the validation dataset and stops early if the validation loss increases substantially.
To evaluate performance in survival prediction, we take the concordance index (C-index) and area under the curve (AUC) as our evaluation metrics~\cite{heagerty2005survival}.
The C-index quantifies the quality of rankings and is calculated as follows:
\begin{equation}
c=\frac{1}{n}\sum_{i\in\{1...N|\delta_i=1\}}\sum_{t_j>t_i}I[f_i>f_j]
\end{equation}
where $n$ is the number of comparable pairs and $I[\cdot]$ is the indicator function; $t_i$ is the actual observed time and $f_i$ denotes the corresponding predicted risk. The value of the C-index ranges from 0 to 1; the larger the value, the better the model predicts.
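The C-index formula above admits a direct, if quadratic-time, transcription; a sketch with toy data (names are ours):

```python
import numpy as np

def c_index(risks, times, events):
    """Fraction of comparable pairs ordered concordantly: for an uncensored
    patient i and any j with t_j > t_i, the pair is concordant if f_i > f_j."""
    concordant, n = 0, 0
    for i in range(len(times)):
        if events[i] != 1:          # comparable pairs are anchored at events
            continue
        for j in range(len(times)):
            if times[j] > times[i]:
                n += 1
                concordant += int(risks[i] > risks[j])
    return concordant / n

times = np.array([1.0, 2.0, 3.0, 4.0])
events = np.array([1, 1, 1, 0])

# Risks decreasing with survival time -> perfectly concordant; reversed -> 0.
assert c_index(np.array([4.0, 3.0, 2.0, 1.0]), times, events) == 1.0
assert c_index(np.array([1.0, 2.0, 3.0, 4.0]), times, events) == 0.0
```

Library implementations (e.g. in survival-analysis packages) additionally handle tied times and tied risks, which this sketch omits.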
\subsection{MCO results}
\subsubsection{Settings and Parameters}
To see the effect of phenotype patterns, we tested different cluster numbers ranging from 6 to 12. We split the data into 80\% training and 20\% testing, with 10\% of the training data used as validation data for early stopping.
We would like to note that the number of phenotype clusters is the maximum number of clusters each patient sample can have. The proposed model is flexible enough to handle patients with fewer patterns (e.g. from smaller biopsy tissue); we implement this by setting the corresponding weight $a_k$ to zero if a cluster contains no patches. To evaluate different pooling strategies, we built two baselines by replacing the attention MIL pooling layer in DeepAttnMISL with the commonly used max and mean pooling layers, denoted "DeepMIL + Max/Mean" below.
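The zero-weight handling for empty clusters can be sketched as a masked softmax (a hypothetical implementation detail, not the paper's exact code):

```python
import numpy as np

def masked_attention(scores, nonempty):
    """Softmax over phenotype attention scores in which clusters with no
    patches are forced to weight zero before renormalization."""
    a = np.where(nonempty, np.exp(scores - scores.max()), 0.0)
    return a / a.sum()

scores = np.array([0.2, 1.3, -0.5, 0.7])
nonempty = np.array([True, True, False, True])  # third cluster has no patches
a = masked_attention(scores, nonempty)

assert a[2] == 0.0 and np.isclose(a.sum(), 1.0)
```

Because the remaining weights are renormalized, patients with fewer phenotype patterns still produce a valid convex combination of their cluster representations.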
Table \ref{tab:tune_phenotypes_MCO} presents the results of each model. We first notice that DeepAttnMISL achieves the best results in all cases, which demonstrates that attention MIL pooling is more flexible and performs better than fixed pooling operators. Second, when the number of phenotypes is set to large values, results get worse, showing that more clusters do not guarantee prediction benefits.
\begin{table}[!htb] \caption{ Performances with different number of phenotypes.}
\begin{center}
\begin{tabular}{ c|cccc}\hline
{\bfseries Model } & c=6 & c=8 & c=10 & c=12 \\ \hline
DeepAttnMISL & \textbf{0.652} & \textbf{0.648} & \textbf{0.624} & \textbf{0.607} \\
DeepMIL + Max & 0.594 & 0.606 & 0.606 & 0.540\\
DeepMIL + Mean & 0.604 & 0.578 & 0.604 & 0.601 \\ \hline
\end{tabular}
\end{center}
\label{tab:tune_phenotypes_MCO}
\end{table}
The basic MI-FCN network of our DeepAttnMISL consists of one convolutional layer, one ReLU layer, and one pooling layer. We study the effect of different numbers of convolution-ReLU layer-pairs and report results in Table \ref{tab:tune_layers_MCO_cur}. For the 1-layer setting, we used 64 filters in the convolutional layer. We used $\{2048, 64\}$ filters for the 2-layer setting and $\{2048, 1024, 64\}$ for the 3-layer setting, respectively. Based on the table, we choose one convolutional-ReLU layer pair with global average pooling in the MI-FCN network.
\begin{table}[!htb]\caption{ Results under different network configurations on testing data. The cluster number is set to 6.}
\begin{center}
\begin{tabular}{ c|ccc}\hline
{c=6 } & 1 layer & 2 layers & 3 layers \\ \hline
Global Average Pooling & \textbf{0.652} & 0.634 & 0.644 \\
Global Max Pooling & 0.615 & 0.640 & 0.623 \\ \hline
\end{tabular}
\end{center}
\label{tab:tune_layers_MCO_cur}
\end{table}
To validate the effectiveness of the siamese architecture, we remove the siamese network and apply the attention pooling layer directly on the input features. In this case, no phenotype clusters are considered; this scenario is the direct application of attention aggregation without phenotype clusters~\cite{ilse2018attention}. 5-fold cross-validation is performed with cluster number 6 on MCO-130K, and results can be found in Table \ref{tab:siamese}. The overall performance is not as good as that of DeepAttnMISL, which indicates the importance of the siamese network and validates the effectiveness of the phenotype clusters within it.
The final C-index across 5 folds is $0.542\pm 0.022$ for the model without the siamese network and $0.595 \pm 0.036$ for the model with it, respectively. These results suggest the usefulness of phenotype patterns and the siamese architecture.
\begin{table}[!htb]\caption{ Validation of Siamese on MCO-130K dataset.}
\begin{center}
\begin{tabular}{ c|ccccc}\hline
{c=6} & fold 1 & fold 2 & fold 3 & fold 4 & fold 5 \\ \hline
No Siamese & 0.564 & 0.538 & 0.515 & 0.527 & 0.564 \\
w Siamese & 0.652 & 0.579 & 0.609 & 0.564 & 0.573 \\ \hline
\end{tabular}
\end{center}
\label{tab:siamese}
\end{table}
To validate the effects of different components, we add more evaluations by changing the encoder/clustering part; results can be found in Table \ref{tab:InceptionV3}. The more advanced InceptionV3~\cite{szegedy2016rethinking} model is tested, and we also introduce spectral clustering as an alternative to K-means. All other settings and architectures are kept the same. Details of each fold can be seen in Table \ref{tab:InceptionV3}. For the model with InceptionV3 and K-means clustering, the C-index is $0.598 \pm 0.054$ under 5-fold cross-validation. When changing K-means to spectral clustering, the performance is $0.593 \pm 0.032$. Compared with the model using VGG-16 and K-means clustering ($0.595 \pm 0.036$), the performances of the different variants are quite similar. Therefore, we use VGG-16 and K-means clustering for the comparisons.
\begin{table}[!htb]\caption{ Results with different feature extractor and clustering on MCO-130K dataset.}
\begin{center}
\begin{tabular}{ c|ccccc}\hline
{c=6 } & fold 1 & fold 2 & fold 3 & fold 4 & fold 5 \\ \hline
InceptionV3+k & 0.670 & 0.540 & 0.630 & 0.603 & 0.549 \\
InceptionV3+sp & 0.630 & 0.565 & 0.611 & 0.554 & 0.607 \\ \hline
\end{tabular}
\end{center}
\label{tab:InceptionV3}
\end{table}
We also experiment with the more advanced gating mechanism~\cite{ilse2018attention,dauphin2017language} together with the $\tanh(\cdot)$ non-linearity in Eq. (2). Results on MCO-130K are reported in Table \ref{tab:gated}. We find that gated attention and the plain attention mechanism behave similarly under different phenotype cluster settings, with plain attention slightly better.
\begin{table}[!htb]\caption{ Results of different attention mechanisms on MCO-130K dataset.}
\begin{center}
\begin{tabular}{l|cc}
\hline
& Gated-Attention & Attention \\ \hline
c=6 & 0.596 (0.029) & 0.595 (0.036) \\ \hline
c=8 & 0.586 (0.043) & 0.599 (0.049) \\ \hline
c=10 & 0.561 (0.048) & 0.585 (0.036) \\ \hline
c=12 & 0.579 (0.031) & 0.591 (0.026) \\ \hline
\end{tabular}
\end{center}
\label{tab:gated}
\end{table}
\subsubsection{Comparisons}
WSISA~\cite{zhu2017wsisa} is a representative WSI-based survival learning method, but it only extracts features from WSIs and needs a separate survival model to obtain the final predictions. We choose three top survival models according to the settings in WSISA~\cite{zhu2017wsisa}: Lasso-Cox~\cite{tibshirani1997lasso}, En-Cox~\cite{yang2012cocktail} and MTLSA~\cite{li2016multi}. WSISA has computational issues when there are too many patches in the whole dataset, so a scale of 100K-200K patches is acceptable for its experiments.
We have another collection of around 1 million patches to study the effect of the patch scale, but we only run our model at this scale because training WSISA on it is not tractable.
\textcolor{red}{Our preliminary work DeepMISL~\cite{yao2019deep} has shown that using both global and local representations from Multiple Instance Learning can benefit survival prediction. However, the model still treats phenotype clusters equally and cannot recognize the clusters that contribute more to patients' survival.}
We perform 5-fold cross-validation and report the average C-index and AUC values in Tables \ref{tab:MCO_CI} and \ref{tab:MCO_AUC}, respectively. From both tables, one can see that the proposed method achieves better results than models using WSISA features in all cluster number settings on MCO-130K. The improvements can be related to the following differences.
First, clustering is performed patient-wise, while recent WSI-based approaches~\cite{zhu2017wsisa, tang2019capsurv} need to cluster all patches from all patients in the database. This is because WSISA~\cite{zhu2017wsisa} needs independent DeepConvSurv models to select important clusters and therefore has to divide the whole dataset into different types by clustering all patches.
\textcolor{red}{DeepMISL~\cite{yao2019deep} can combine both local and bag representations with MIL, but it is still unable to treat phenotype clusters differently, which limits its use on larger datasets.}
With the advantage of MIL and the attention mechanism, the proposed DeepAttnMISL can easily find the important instances (clusters) within a bag and is thus more likely to achieve better patient-level predictions, with no need to perform clustering on the whole dataset. The trainable and adaptive attention-based MIL pooling in DeepAttnMISL can adjust to the task and data, which helps compute a better patient representation. Increases of 1\%-3\% are observed when we use more patches from the MCO-1M data, which suggests that more patches can benefit predictions but do not offer significant improvements. This demonstrates the robustness of the proposed DeepAttnMISL, which does not rely on the number of sampled patches.
\begin{table}[!htb]\caption{ C-index values of the proposed model and WSISA with different settings.}
\begin{center}
\begin{tabular}{l|ccccc}
\hline
Method & Settings & c=6 & c=8 & c=10 & c=12 \\ \hline
\multirow{2}{*}{DeepAttnMISL} & 130K & 0.595 & 0.599 & 0.585 & 0.591 \\
& 1M & \textbf{0.606} & \textbf{0.600} & \textbf{0.603} & \textbf{0.599} \\ \hline
\multirow{2}{*}{DeepMISL} & 130K & 0.557 & 0.547 & 0.587 & 0.543 \\
& 1M & 0.569 & 0.575 & 0.573 & 0.567 \\ \hline
W-MTLSA & 130K & 0.558 & 0.567 & 0.524 & 0.547 \\
W-LassoCox & 130K & 0.552 & 0.546 & 0.503 & 0.523 \\
W-EnCox & 130K & 0.552 & 0.545 & 0.504 & 0.522 \\ \hline
\end{tabular}
\end{center}
\label{tab:MCO_CI}
\end{table}
\begin{table}[!htb]\caption{ AUC values of the proposed model and WSISA with different settings.}
\begin{center}
\begin{tabular}{l|lllll}
\hline
Method & Settings & c=6 & c=8 & c=10 & c=12 \\ \hline
\multirow{2}{*}{DeepAttnMISL} & 130K & 0.623 & \textbf{0.640} & \textbf{0.636} & 0.622 \\
& 1M & \textbf{0.644} & 0.638 & 0.633 & \textbf{0.637} \\ \hline
\multirow{2}{*}{DeepMISL} & 130K & 0.564 & 0.552 & 0.590 & 0.547 \\
& 1M & 0.570 & 0.587 & 0.579 & 0.576 \\ \hline
W-MTLSA & 130K & 0.560 & 0.560 & 0.531 & 0.555 \\
W-LassoCox & 130K & 0.531 & 0.541 & 0.495 & 0.495 \\
W-EnCox & 130K & 0.532 & 0.544 & 0.497 & 0.496 \\ \hline
\end{tabular}
\end{center}
\label{tab:MCO_AUC}
\end{table}
Fig.\ref{fig:MCO_ci_boxplot} and Fig.\ref{fig:MCO_auc_boxplot} present boxplots of C-index and AUC values for each model with different phenotype cluster numbers. We only show the caption in the top-left figure; the others share this description. We can see that the results of our method on MCO-1M and MCO-130K do not differ significantly, showing that the sampling strategy does not affect the final results of the proposed method in cross-validation settings.
One can observe that our models consistently perform better than WSISA models across different phenotype cluster numbers.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.8\linewidth]{fig/mco_ci.pdf}
\caption{Boxplots of C-index values with different numbers of phenotype patterns.}
\label{fig:MCO_ci_boxplot}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=0.8\linewidth]{fig/mco_auc.pdf}
\caption{Boxplots of AUC values with different numbers of phenotype patterns.}
\label{fig:MCO_auc_boxplot}
\end{figure}
Fig.\ref{fig:MCO_tune_c} visualizes the clustered phenotype patterns and the patches selected by DeepAttnMISL and WSISA on MCO-130K when the cluster number is set to 6. The first row shows results from DeepAttnMISL, while the second presents results from WSISA.
In MCO-130K, around 100 patches per WSI are sampled and it clearly can see that clustering based on VGG-16 features is capable of identifying patches from different layers of WSI and grouping similar patches into the same category. The most important advantage of DeepAttnMISL is its good interpretability and we create a heatmap by showing the corresponding attention weight of each phenotype cluster. We rescaled the attention weights using $a_k^{'} = (a_k - min(a))/(max(a)-min(a))$.
Red color indicates the highest attention weight while blue means the lowest values. From the obtained heatmap, we can see the proposed approach can identify higher risk regions properly because most of patches with high attention weights are from tumor regions. When we look at selected patches from WSISA, we can observe that many patches from non-tumor regions are also selected. That is because WSISA selects clusters based on patches from the whole database and thus it cannot guarantee reliable selection on the specific patient due to the heterogeneity across patients.
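The min-max rescaling of the attention weights used for the heatmaps can be sketched as follows (a minimal illustration; the function name is ours, not part of the released model):

```python
def rescale_attention(weights):
    """Min-max rescale: a_k' = (a_k - min(a)) / (max(a) - min(a)),
    so the largest weight maps to 1 (red) and the smallest to 0 (blue)."""
    lo, hi = min(weights), max(weights)
    if hi == lo:
        # all clusters equally weighted: nothing to contrast
        return [0.0 for _ in weights]
    return [(w - lo) / (hi - lo) for w in weights]
```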
\begin{figure}[!htb]
\centering
\includegraphics[width=0.8\linewidth]{fig/MCO_WSISA_100K.pdf}
\caption{Comparison of phenotype patterns distribution in the first column. The second column shows heatmap and selected patches from the proposed model and WSISA on MCO-130K, respectively.}
\label{fig:MCO_tune_c}
\end{figure}
Clearer visualizations can be found in Fig.\ref{fig:MCO_1M} on the MCO-1M set, where more patches (about 1,000) are sampled per WSI. The first column shows phenotype patterns from the proposed model with different cluster numbers. The second column shows the corresponding heatmaps. The attention mechanism in DeepAttnMISL makes it easy to interpret the provided decision in terms of instance-level labels. From the heatmaps, we can see that the results from $c=6$ and $c=8$ look better, as most patches from cancerous regions are given high attention weights.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.8\linewidth]{fig/MCO_1M.pdf}
\caption{Phenotype patterns clustering visualizations and the corresponding heatmaps from the proposed model on MCO-1M.}
\label{fig:MCO_1M}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=0.8\linewidth]{fig/MCO_vis_1.pdf}
\caption{(a) Original WSI, (b) Phenotype patterns distribution, (c) Heatmaps from our model, (d) Selected patches with highest attention weights. The bottom shows representative patches from each phenotype.}
\label{fig:MCO_vis1}
\end{figure}
To better visually validate the effect of the attention mechanism, we collect and examine the attention weights as well as their corresponding patch images on MCO-1M data in Fig.\ref{fig:MCO_vis1}. The bottom shows randomly selected patches from each phenotype, and the frame colors of the patches correspond to the pattern colors in Fig.\ref{fig:MCO_vis1}-(b). We use a threshold of 0.8 to show only patterns with higher attention weights in Fig.\ref{fig:MCO_vis1}-(d). Each color represents one phenotype pattern of the whole slide image, and we can see that the proposed model pays more attention to patches related to tumor regions. Relatively low attention weights are given to normal tissue regions. More surprisingly, the model also gives low attention to background regions, as they are noisy images that provide no information.
Fig.\ref{fig:MCO_vis2} shows another example. From the figure, we can see that most patches from tumor regions are found, and our model successfully assigns higher attention weights to this pattern. For patches with relatively less complex structures and textures, our model identifies them as less important regions by giving lower attention weights.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.7\linewidth]{fig/MCO_vis_2.pdf}
\caption{(a) Original WSI, (b) Phenotype patterns distribution, (c) Heatmaps from our model, (d) Selected patches with highest attention weights. The bottom shows representative patches from each phenotype.}
\label{fig:MCO_vis2}
\end{figure}
\subsection{Lung Cancer dataset results}
\subsubsection{Baseline models}
As we have annotations on the NLST dataset, we can conduct more extensive experiments with ROI-based survival models.
Following the recent framework~\cite{yu2016predicting}, we extracted 10 dense image patches from ROIs and calculated hand-crafted features using CellProfiler~\cite{carpenter2006cellprofiler}, a state-of-the-art tool for medical image feature extraction and quantitative analysis. A total of 1,795 quantitative features were obtained from each image tile. We then averaged those features across patches for each patient.
These types of image features include cell shape, size, texture of the cells and nuclei, as well as the distribution of pixel intensity in the cells and nuclei.
We can summarize the comparison methods into five categories as follows:
\begin{itemize}
\item \textbf{Cox models}: The Cox proportional hazards model is the most commonly used semi-parametric model in survival analysis. Two regularized Cox models $l_1$-norm (LASSO-Cox)~\cite{tibshirani1997lasso} and boosting cox model (Cox-boost)~\cite{Binder2008Allowing} are compared in experiments.
\item \textbf{Parametric censored regression models}: PCR models formulate the joint probability of the uncensored and censored instances as a product of death density functions and survival functions, respectively~\cite{lee2003statistical}. We choose the Weibull and logistic distributions to approximate the survival data.
\item \textbf{MTLSA}: Multi-Task Learning model for Survival Analysis (MTLSA)~\cite{li2016multi} reformulates the survival model into a multi-task learning problem.
\item \textbf{WSISA}: WSISA can learn effective features from WSIs~\cite{zhu2017wsisa}. We train LassoCox and MTLSA using WSISA learned features as they are top models based on their report. To investigate performance from pre-trained network, a ResNet34~\cite{he2016deep} model is used and then fine-tuned as the backbone network in WSISA.
\item \textbf{DeepMISL}: Deep Multiple Survival Learning combined both local and global representation to predict outcomes~\cite{yao2019deep}.
\end{itemize}
\subsubsection{Results}
We reported results for a few possible numbers of phenotypes, $\{6, 8, 10, 12\}$, on the testing dataset. From Table \ref{tab:tune_phenotypes}, we can see that models using fewer clusters are unable to achieve good results. The reason might be that patches of lung cancer patients are very heterogeneous, and it is relatively difficult to learn survival-related representations from fewer phenotypes.
Results suggest that 10 phenotypes achieve slightly better predictions, which is consistent with findings in WSISA~\cite{zhu2017wsisa}. Thus, we choose to cluster patches into 10 phenotypes in our model. Other parameters are kept the same as in the MCO experiments.
\begin{table}[!htb] \caption{Performance with different numbers of phenotypes.}
\begin{center}
\begin{tabular}{ c|cccc}\hline
{\bfseries No.} & 6 & 8 & 10 & 12 \\ \hline
CI & 0.673 & 0.769 & 0.775 & 0.742 \\ \hline
\end{tabular}
\end{center}
\label{tab:tune_phenotypes}
\end{table}
Table~\ref{tab:ci_NLST} shows C-index and AUC values for various survival regression methods under 5-fold cross validation. It shows the prediction power of the proposed method compared with different survival models. One can see that the proposed method achieves both the highest C-index and the highest AUC, representing the best prediction performance among all methods.
From the table, baseline models using hand-crafted features do not perform well for the following reasons: 1) the limited local information provided by the patches extracted from the ROI using hand-crafted features; 2) the ineffective aggregation of patch-based results to represent the heterogeneity of the tumor and the patient. Instead of using a small set of patches and human-designed features, the proposed method can effectively learn complex deep bag representations from phenotype patterns to predict patient survival outcomes.
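For reference, the C-index reported in these tables measures the fraction of comparable patient pairs whose predicted risks are ordered consistently with their survival times. A minimal sketch of Harrell's C-index follows (our own illustrative implementation, not the evaluation code used in the experiments):

```python
def concordance_index(times, events, risks):
    """Harrell's C-index: among comparable pairs (shorter time uncensored),
    count pairs where the shorter-lived patient has the higher predicted risk.
    times: observed times; events: 1 = death observed, 0 = censored;
    risks: predicted risk scores (higher = worse prognosis)."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        if not events[i]:
            continue  # pair is comparable only if the earlier event is observed
        for j in range(n):
            if times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    concordant += 0.5  # ties in risk count as half
    return concordant / comparable
```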
\begin{table*}[!htb] \caption{Performance comparison of the proposed method and other existing related methods using C-index and AUC values on the NLST dataset.}
\centering
\begin{tabular}{p{3cm}|p{7cm}|p{2.5cm}|p{2.5cm}}
\hline
Type & Method & C-index & AUC \\ \hline
\multirow{4}{*}{Deep Learning} & DeepAttnMISL & \textbf{0.6963} (0.0660) & \textbf{0.7143} (0.0541) \\
& DeepMISL~\cite{yao2019deep} & 0.6476 (0.0698) & 0.6693 (0.0866) \\
& Finetuned-WSISA-LassoCox \cite{zhu2017wsisa} & 0.6123 (0.0216) & 0.6427 (0.0575) \\
& Finetuned-WSISA-MTLSA \cite{zhu2017wsisa} & 0.6428 (0.0259) & 0.6963 (0.0668) \\
& WSISA-LassoCox \cite{zhu2017wsisa} & 0.5996 (0.0750) & 0.5957 (0.0674) \\
& WSISA-MTLSA \cite{zhu2017wsisa} & 0.6305 (0.0575) & 0.6479 (0.0936) \\ \hline
\multirow{2}{*}{Cox-based} & Lasso-Cox~\cite{tibshirani1997lasso} & 0.4842 (0.0508) & 0.4903 (0.1011) \\
& Cox-boost~\cite{Binder2008Allowing} & 0.5474 (0.0370) & 0.5271 (0.0386) \\ \hline
\multirow{2}{*}{Parametric models} & Logistic~\cite{kalbfleisch2011statistical} & 0.4998 (0.0881) & 0.5013 (0.1146) \\
& Weibull~\cite{kalbfleisch2011statistical} & 0.5577 (0.0395) & 0.5618 (0.0976) \\ \hline
Multi-task based & MTLSA~\cite{li2016multi} & 0.5053 (0.0509) & 0.5362 (0.0416) \\ \hline
Ranking based & BoostCI~\cite{mayr2014boosting} & 0.5595 (0.0610) & 0.5487 (0.0532) \\ \hline
\end{tabular}
\label{tab:ci_NLST}
\end{table*}
WSISA achieves better results than the baseline models, which shows the good representative ability of WSISA features.
However, WSISA needs a separate stage to train several DeepConvSurv models independently and discards some phenotypes in the final stage; its performance therefore depends on how well the important clusters are selected, and WSISA may fail to select the survival-related clusters needed for a good final survival prediction.
To investigate results from pre-trained models, we replaced the original 2D CNN of WSISA with a pre-trained ResNet34~\cite{he2016deep} and fine-tuned the whole model following the same process. The improved C-index demonstrates that fine-tuned models bring benefits, but our DeepAttnMISL is still better than fine-tuned WSISA by a large margin.
When introducing MIL into survival learning, DeepMISL and the proposed model improve on WSISA in terms of the C-index metric. Instead of selecting phenotypes, DeepMISL and the proposed model are designed to consider all possible patterns. Performance is further improved when we use the more flexible attention mechanism to learn informative and discriminative patterns. This architecture allows the proposed method to better learn the heterogeneous information encoded in WSIs, which makes it more practical and more interpretable than DeepMISL in real applications.
We pick one patient as an example to show visualization results. Fig.\ref{fig:nlst_rois} presents all WSIs of this patient and the corresponding tumor region annotations.
Fig.\ref{fig:nlst_vis2_ours}-\ref{fig:nlst_vis2_WSISA} show results from the proposed model and WSISA, respectively. In Fig.\ref{fig:nlst_vis2_ours}, the first row shows attention weight heatmaps and the second row shows phenotype pattern distributions on the original WSIs. The bottom presents randomly selected patches with higher attention weights (patches in red in the heatmaps). It is clear that most patches from tumor regions are highlighted with high attention while patches from normal tissues receive lower attention. Compared with the WSISA results shown in Fig.\ref{fig:nlst_vis2_WSISA}, we can see that WSISA misses many tumor patches and selects many normal patches as discriminative patterns. Patches from cancerous regions can be grouped into similar clusters, but not all of them will be selected by WSISA, as the selection is performed via DeepConvSurv on all patches of the database. The selected phenotypes are more likely discriminative for the whole database with all patients, and they are not well interpretable for a specific patient.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.85\linewidth]{fig/nlst_ROIs.pdf}
\caption{WSI Annotations of one example patient.}
\label{fig:nlst_rois}
\end{figure}
\begin{figure*}[!htb]
\centering
\includegraphics[width=0.85\linewidth]{fig/nlst_ours_1.pdf}
\caption{Phenotype pattern distribution and the corresponding heatmaps from the proposed model on three WSIs of the same patient. The bottom shows patches from phenotypes with high attention values.}
\label{fig:nlst_vis2_ours}
\end{figure*}
\begin{figure*}[!htb]
\centering
\includegraphics[width=0.8\linewidth]{fig/nlst_wsisa_1.pdf}
\caption{Phenotype pattern distribution and selected patterns from WSISA. Missing tumor patches can be observed from selected patterns by WSISA.}
\label{fig:nlst_vis2_WSISA}
\end{figure*}
\begin{figure*}[!htb]
\centering
\includegraphics[width=1\linewidth]{fig/kmplots_R2.pdf}
\caption{Kaplan-Meier survival curves of different models for one testing fold. High-risk (greater than median) groups are plotted as green lines, and low-risk (less than or equal to median) groups are plotted as red lines. The x axis shows the time in days and the y axis presents the probability of overall survival. The log-rank $p$ value is shown on each figure. "+" marks a censored patient.}
\label{fig: kmplot}
\end{figure*}
Given the trained survival models, we can use the estimated testing risk scores to classify patients into a low- or high-risk group for personalized treatment. The two groups are split by the median of the predicted risk scores. We evaluate whether those models can correctly classify death patients (uncensored data) into the two groups, since uncensored data is more informative. Patients with longer survival times should be classified into the low-risk group, and vice versa. If a model cannot correctly distinguish high- and low-risk death patients, the two average death times should be very close. We plot Kaplan-Meier survival curves for one testing fold in Fig.\ref{fig: kmplot}. From the figure, one can see that the proposed model groups testing death patients into two groups more successfully than the other methods on all datasets.
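The median split described above can be sketched as follows (a toy illustration with hypothetical risk scores; the function name is ours):

```python
import statistics

def median_risk_split(risks):
    """Split patients into a high-risk (> median) and a low-risk
    (<= median) group by predicted risk score.
    Returns two lists of patient indices."""
    med = statistics.median(risks)
    high = [i for i, r in enumerate(risks) if r > med]
    low = [i for i, r in enumerate(risks) if r <= med]
    return high, low
```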
The log-rank test is conducted to test the difference between the two curves. The proposed method achieves the most significant log-rank test outcome (p-value = $4.527 \times 10^{-3}$) while some of the others do not reach statistical significance. The Kaplan-Meier curves suggest that the proposed comprehensive prediction model can offer personalized risk scores that separate individuals into two groups whose population survival times differ significantly. It can be used as a recommendation system for offering personalized treatments by determining the relationship between a patient's whole slide pathological images and his or her risk of an event (death).
A log-rank test p-value below 0.05 is considered significant, and $p<0.1$ marginally significant. Over the five testing folds, our model achieves four significant results and one marginally significant result ($p=0.09$); DeepMISL achieves two significant results and one marginally significant result ($p=0.07$); Finetuned-WSISA-MTLSA achieves one significant result and three marginally significant results. The logistic model achieves only one significant result ($p=0.049$), shown in Fig.13.
BoostCI achieves a significant result on one fold ($p=0.0123$). No other baseline model achieves significant results on any testing fold.
Overall, deep learning models achieve more significant results than models with hand-crafted features. The proposed model achieves the largest number of significant results, which validates that the predictor from our model is an important prognostic factor that can be used for good patient risk stratification.
\subsection{Ensemble Models}
We investigated whether ensemble models could benefit the final results. During each fold, we train five models and then average the prediction scores on the corresponding testing fold. The maximum cluster number is set to 6 and 10, respectively. Table \ref{tab:ensemb} shows C-index values for single and ensemble models on the MCO-1M and NLST datasets.
The average C-index across the five folds is 0.606 for the MCO single model and 0.600 for the MCO ensemble. On the NLST dataset, the average C-index is 0.696 for the single model and 0.695 for the ensemble. From the table, it can be seen that ensemble models do not provide additional predictive power.
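The score-averaging ensemble tried here is simply a per-patient mean over the five trained models; a sketch (names are ours, shown for clarity only):

```python
def ensemble_risks(model_predictions):
    """Average per-patient risk scores over several trained models.
    model_predictions: one list of risk scores per model, all of equal
    length; returns one averaged risk score per patient."""
    n_patients = len(model_predictions[0])
    n_models = len(model_predictions)
    return [sum(p[i] for p in model_predictions) / n_models
            for i in range(n_patients)]
```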
\begin{table}[!htb]\caption{Results of single and ensemble models.}
\begin{center}
\begin{tabular}{ c|cccccc}\hline
{} & fold 1 & fold 2 & fold 3 & fold 4 & fold 5 \\ \hline
MCO single & 0.666 & 0.571 & 0.591 & 0.565 & 0.636 \\
MCO ensemble & 0.665 & 0.564 & 0.585 & 0.549 & 0.637 \\
NLST single & 0.750 & 0.775 & 0.613 & 0.663 & 0.680 \\
NLST ensemble & 0.766 & 0.797 & 0.555 & 0.656 & 0.699 \\
\hline
\end{tabular}
\end{center}
\label{tab:ensemb}
\end{table}
\section{Conclusion}
In this paper, we proposed a deep multiple instance model to directly learn survival patterns from gigapixel images without annotations, which makes it more easily applicable to large-scale cancer datasets. Compared to existing image-based survival models, the developed framework can handle varying numbers and sizes of whole slide images across patients. It can learn holistic information about the patient using bag representations and achieves much better performance than ROI patch based methods. Moreover, the flexible and interpretable attention-based MIL pooling can overcome the drawbacks of the fixed aggregation techniques in state-of-the-art survival learning models. We showed that our approach provides an interpretation of the clinical outcome prediction by presenting reasonable ROIs, which is very important in such a practical application. Additionally,
we illustrated that the proposed method can provide personalized treatment for patients and can be used by doctors to guide their treatment decisions for improving patient lifespan. With future research and development, the proposed approach has the potential to be applied to other tumor types.
\section*{Acknowledgements}
This work was partially supported by US National Science Foundation IIS-1718853, the CAREER grant IIS-1553687 and Cancer Prevention and Research Institute of Texas (CPRIT) award (RP190107).
The authors would like to thank the National Cancer Institute for access to NCI’s data collected by the National Lung
Screening Trial. The statements contained herein are solely of the authors and do not
represent or imply concurrence or endorsement by NCI.
\bibliographystyle{unsrt}
\section{Introduction}
\noindent Survival analysis aims to analyze the expected duration of time until events happen. It tries to answer questions such as: what proportion of a population will survive past a certain time (e.g., 5 years)? At what rate will they die or fail?
It is a very important clinical application, and many efforts have been made to search for biomarkers from omics data that are significantly related to patient death \cite{shedden2008gene,tibshirani1997lasso,bair2004semi,bair2006prediction,park2007l1}. Recent technological innovations are enabling scientists to capture big whole slide images (WSIs) at increasing speed and resolution for diagnosis. The learning model is required to correctly predict the survival risk of each patient from his/her tumor tissue whole slide images. The more precise the risk assessment for a cancer patient, the better the patient can be treated. Compared with genomics data, pathological images can present tumor growth and morphology in detailed, gigapixel resolution, which is extremely useful for cancer study~\cite{warth2012novel, yuan2012quantitative}.
The diagnosis is extremely laborious and highly dependent on expertise, requiring pathologists to carefully examine the biopsies under the microscope~\cite{bejnordi2017diagnostic}. To reduce the risk of misdiagnosis, pathologists have to conduct a thorough inspection of the whole slide, which makes the diagnosis quite cumbersome. Automatic analysis of histology has become one of the most rapidly expanding fields in medical imaging. Computer-aided diagnostics in digital pathology can not only alleviate pathologists' workloads, but also help to reduce the chance of diagnostic mistakes. However, using WSIs for survival prediction is very challenging for several reasons: 1) pathological images in real cancer datasets might be at the terabyte ($10^{12}$ pixels) level, which makes most models computationally infeasible; 2) textures and biological structures vary greatly due to tumor heterogeneity: as a solid tumor may have a mixture of tissue architectures and structures, multiple WSIs from different parts of the patient's tissue are collected for diagnosis; 3) labels are at the patient level while each patient might have multiple WSIs for diagnosis; those terabyte-size WSIs from one patient share the survival label, which makes the problem more challenging.
\subsection{Related Work}
During recent years, many methods have been proposed for survival prediction using pathological slides. They can be categorized into two categories: ROI-based and WSI-based methods.
\textbf{Region of Interest Analysis.}
Pathological images usually come at a very high resolution, which makes most existing models and algorithms computationally infeasible, even though the high resolution of the image data greatly benefits survival analysis with more precise information.
Previously, due to the lack of computational power, most of the literature focused on region-of-interest (ROI) patches selected by pathologists from WSIs~\cite{gurcan2009histopathological}.
Instead of handling original WSIs, ROI-based methods extracted hand-crafted features from ROIs for prediction~\cite{yuan2012quantitative,Barker201660,Zhu2016Lung,yao2016imaging,Wang2014Novel,yu2016predicting, cheng2017identification,yao2015computer}. Wang et al.~\cite{Wang2014Novel} proposed a novel framework to first segment cells in annotated patches and then compute cellular morphological properties from those cells, resulting in 166 imaging features. Yu et al.~\cite{yu2016predicting} extracted 9,879 quantitative image features from annotated regions of interest, and their results suggest that automatically derived image features can predict the prognosis of lung cancer patients and thereby contribute to precision oncology. Beyond classical cell detection, Yao et al.~\cite{yao2016imaging} first used deep subtype cell detection to classify different cell subtypes and then extracted features from cellular subtype information. Cheng et al.~\cite{cheng2017identification} used a deep auto-encoder to cluster cell patches into different types and then extracted topological features to characterize cell type distributions from ROIs for prediction. These methods extracted hand-crafted features based on nuclei detection and segmentation, and those features were considered to represent prior knowledge of boundary, region or shape. However, hand-crafted features are limited in representation power and capability.
Recently, with the advance of deep neural networks, deep learning-based survival models are proposed for seeking more powerful deep representation~\cite{Katzman2016deepsurv, zhu2016deep, yao2017deep, mobadersany2018predicting}.
Katzman et al.~first proposed a deep fully connected network (DeepSurv) to represent the nonlinear risk function~\cite{Katzman2016deepsurv}. They demonstrated that DeepSurv outperformed the standard linear Cox proportional hazards model. Another improvement is deep convolutional survival learning (DeepConvSurv), the first attempt to use pathological images in a deep survival model~\cite{zhu2016deep}. Later, Yao et al.~\cite{yao2017deep} integrated the genome modality with DeepConvSurv for survival prediction using multi-modality data. However, DeepConvSurv is designed to use ROI patches pre-selected by pathologists from WSIs for its convolution operations. A small set of image tiles might not completely and properly reflect the patient's tumor morphology. Also,
those methods perform average pooling to obtain patient-wise predictions from patch-based results. Such a combination cannot effectively aggregate patch-level predictions and needs further attention.
Thus, it would be very helpful to facilitate knowledge discovery from big whole slide images.
\textbf{Whole-slide Image Analysis.}
With detailed and dense annotations on WSIs, a series of approaches in whole-slide image analysis have been proposed for a variety of applications including classification, detection and segmentation~\cite{wang2016deep, bejnordi2017diagnostic, kong2017cancer, li2018cancer, liu2017detecting}. Applying deep learning to supervised learning in computational pathology has achieved promising results. However, the applicability of these models in clinical practice remains in question because of the wide variance of clinical samples. Extensive and time-consuming manual annotations are impractical in clinical practice. Moreover, the success of those applications is built on integrating detailed patch contents and using labor-intensive annotations, which might not be applicable to survival prediction.
To properly address the shortcomings of current models, one possible direction is the weakly supervised setting. Recently, researchers have developed many weakly supervised algorithms for medical images, including weakly-supervised X-ray screening~\cite{wangyi2019weakly, yan2018weakly} and WSI classification~\cite{Hou2016PatchbasedCN, mercan2018multi, wang2018weakly, wang2019rmdl, wang2019weakly}.
WSI classification models are designed to find the most differentiated regions corresponding to different tumor types. A two-step approach is usually used: the first step is a classifier at the tile level, and the predicted scores for each tile within a WSI are then aggregated with various strategies. However, learning survival from histology and developing a prognosis model is considerably more difficult, as risk is often reflected in a range of histology patterns that correspond to varying degrees of disease progression. Tumor heterogeneity plays an important role in cancer study and includes inter-tumor and intra-tumor heterogeneity~\cite{jamal2015translational}. Inter-tumor heterogeneity refers to the differences found between tumors in different patients. Intra-tumor heterogeneity refers to distinct tumor cell populations within the same tumor specimen. Most recent weakly-supervised WSI classification has focused on localizing the most differentiated regions corresponding to tumor types across patients. Therefore, such methods are more likely to capture inter-tumor heterogeneity between tumors or tumor subtypes. Knowing how to label one person's tumor type may not be enough to study the degree of tumor progression. The pathophysiology of tumor progression and proliferation is complex,
and thus a new image-based prognosis model that can integrate information from heterogeneous tissue regions is a better approach.
Additionally, the existing weakly-supervised WSI classification task is at the slide level, while survival prediction is a patient-level analysis (one patient might have multiple whole slide images). Deriving patient-level decisions from slide-level results is not the objective of those studies.
To achieve survival prediction from whole slide images without using annotations, Zhu et al.~\cite{zhu2017wsisa} proposed a patch-based two-stage framework to predict patients' survival outcomes. In the first stage, patches are extracted from the WSIs and clustered into different patterns, defined as ``phenotypes'', according to their visual appearance.
Then WSISA~\cite{zhu2017wsisa} adopted DeepConvSurv~\cite{zhu2016deep} to select important patch clusters and aggregated those clusters for the final prediction. Although this framework has the practical merit of considering important patch clusters, it is hard to incorporate into the state-of-the-art deep learning paradigm, as the whole approach consists of separate steps. In addition, it is not a scalable solution, because the first stage becomes significantly inefficient if more patches are sampled. One recent work~\cite{tang2019capsurv} proposed CapSurv by introducing the Capsule network~\cite{sabour2017dynamic}. However, CapSurv has similar issues to WSISA, as its main framework follows the WSISA pipeline.
The relationship among tissue patterns on a WSI is of great importance for survival analysis. Li et al.~\cite{li2018graph} proposed a graph convolutional network (GCN) based method to consider such relationships between patches in the WSI and then learn effective representations for survival prediction. However, this method requires detailed graph structure knowledge to construct a complete graph representation for effective GCN training, which is not flexible and needs prior knowledge.
\subsection{Contributions}
Though many works exist on WSI analysis for segmentation, classification and detection, there has been limited work on weakly-supervised learning for survival prediction. Based on the literature review, a method that can adaptively learn patient-level representations with limited prior knowledge is needed. In this study, we propose a novel framework, referred to as Deep Attention Multiple-Instance Survival Learning (DeepAttnMISL), for whole slide images. In contrast to standard supervised learning, multiple instance learning (MIL) considers a set of bags, each containing multiple feature vectors referred to as instances. The available label is assigned only at the bag level, and the labels of individual instances in the bag are not known. In MIL, not all instances are necessarily relevant, and some instances in the bag might not be relevant to certain labels. In observation, if the slide is from a low-risk patient, most of its tiles might be benign or contain low-grade tumor. In contrast, if the slide is from a high-risk patient, at least one of the tiles must contain malignant tumor. This formalization of the WSI survival learning problem is an example of the general standard multiple instance assumption, and thus MIL is a good fit for this problem.
Our preliminary work, which showed that deep multiple instance learning alone can help achieve better prognosis performance, was published at MICCAI 2019~\cite{yao2019deep}. Compared to the previous work, we offer new contributions in the following aspects. We introduce an attention mechanism into deep multiple instance survival learning. The proposed DeepAttnMISL not only uses the siamese MI-FCN network to learn features from different phenotype clusters, but also largely improves performance with an attention-based MIL pooling layer that performs a trainable weighted aggregation.
More importantly, the proposed framework can effectively highlight the prognosis-related clusters and has better interpretability as well as performance than our preliminary work~\cite{yao2019deep}. The contributions can be summarized as follows.
\begin{itemize}
\item Phenotype clusters provide morphology-specific representations: the proposed DeepAttnMISL first extracts phenotype-level information from patch-level features through a siamese MIL-based network. The attention mechanism is then used to aggregate these phenotype features into patient-level information with a trainable weighted average whose weights can be fully parameterized by neural networks. Such attention-based aggregation is much more flexible than the fixed pooling operators in recent work~\cite{yao2019deep, zhu2017wsisa,tang2019capsurv}.
\item With the advantages of MIL and the attention mechanism, the proposed model has good interpretability for finding important patterns of patients. The identified important regions and patches are more likely to be associated with prognosis, and overall the proposed model can achieve better patient-level predictions and improve prediction performance over our previous work~\cite{yao2019deep}.
\item To evaluate the performance of the proposed DeepAttnMISL model, two large WSI datasets on lung and colorectal cancer are used, and extensive experimental results verify its effectiveness.
\end{itemize}
Our method can efficiently exploit all discriminative patterns in whole slide pathological images to perform accurate patient survival predictions. Additionally, we present results representing a patient's treatment group to illustrate how the proposed model can be viewed as a treatment recommender system. Results validate that the proposed model can accurately model the risk functions of the population and thus guide treatment decisions for improving patient lifespan.
\begin{figure*}[htb]
\centering
\includegraphics[width=1\linewidth]{fig/overview.pdf}
\caption{An overview of the proposed DeepAttnMISL model.}
\label{fig: Framework}
\end{figure*}
\section{Methodology}
Consider a set of $N$ patients $\{X_i\}, i=1 \dots N$, where each patient has a follow-up label $(t_i, \delta_i)$ indicating overall survival. The observation time $t_i$ is either a survival time or a censored time for each patient.
$\delta_i$ is the indicator which is 1 for an uncensored instance (death occurs during the study) and 0 for a censored instance.
A survival model predicts a value of a target variable for a given patient. As discussed above, patient $X_i$ may have multiple WSIs, and our goal is to predict the corresponding target $o_i$ from those imaging data. Since we have no pixel-level annotations but only patient-level information, this weakly-supervised learning problem can be solved by Multiple Instance Learning (MIL).
In the MIL setting, patient $X$ is a bag of instances, $X = \{x_1, ..., x_C\}$, and the number of instances $C$ can vary across bags. Furthermore, we assume that true labels exist for the individual instances within a bag, i.e., $y_1, ..., y_C$, but those values remain unknown during training. One important assumption is that there is neither ordering nor dependency among instances within a bag, so a MIL model must be permutation-invariant.
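The permutation-invariance requirement can be illustrated with a minimal NumPy sketch on toy data; the mean is just one example of an order-independent pooling operator (the attention pooling introduced later is another):

```python
import numpy as np

def mean_pool_bag(instances):
    """Aggregate a bag of instance feature vectors with a
    permutation-invariant operator (here: the mean)."""
    return np.mean(instances, axis=0)

rng = np.random.default_rng(0)
bag = rng.normal(size=(5, 8))          # 5 instances, 8-dim features
shuffled = bag[rng.permutation(5)]     # same bag, different instance order

# The bag-level representation must not depend on instance order.
assert np.allclose(mean_pool_bag(bag), mean_pool_bag(shuffled))
```

By contrast, an order-sensitive aggregator (e.g. an RNN over the instances) would violate this assumption.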
Instances within the bag can be defined as patches sampled from WSIs, and several studies~\cite{campanella2019clinical,Wulczyn2020} developed MIL-based deep learning approaches for automated cancer diagnosis and prognosis. In our case, we use phenotype clusters, rather than individual patches, as the instances of the bag. Cancer histology contains rich phenotypic information that reflects underlying molecular processes and disease progression. The phenotype of a pathological slide is a combination of the tissue's various observable characteristics and provides a convenient visual representation of disease aggressiveness. Recent studies have shown that phenotypic information can be useful for prognosis prediction~\cite{zhu2017wsisa,mobadersany2018predicting}. The purpose of the proposed framework is to predict patient outcomes from whole slide images. The study involves partitioning the original slides into a number of phenotype patterns; each phenotype describes a type of histology pattern and includes a number of smaller patches or tiles.
\subsection{DeepAttnMISL}
Fig.\ref{fig: Framework} shows an overview of the proposed Deep Attention Multiple Instance Survival Learning (DeepAttnMISL). In multiple instance learning, each data sample is a bag of instances, and in our problem the bag corresponds to one patient. Each patient $X_i$ may contain multiple whole slides, and it is not practical to use whole slides as instances due to their extremely large size. We choose phenotypes instead of raw sampled patches as the instances within the bag, because this considerably reduces the complexity of the problem: the number of heterogeneous patches is very large. By using phenotype patterns constructed by clustering, we can build the model over different types of tissue to extract morphology-specific features. To learn patient-level information from phenotype clusters, we design a Multiple Instance Fully Convolutional Network (MI-FCN) running inside our deep learning architecture, with weights shared among the sub-networks as in a siamese architecture. To detect important phenotypes associated with patients' clinical outcomes, an attention-based MIL pooling layer is used to aggregate the phenotype-level representations. The output is the hazard risk, representing how the patient fares within the population for a certain type of disease.
\subsubsection{Sampling and Clustering}
In the first step, we extract patches from all WSIs belonging to the same patient and then cluster them into different phenotypes. To capture detailed information in the images, patches are extracted at the 20X (0.5 microns per pixel) objective magnification with a fixed size of $500 \times 500 \times 3$. In a whole slide image, usually about 50\% of the area is background, and it is easy to select regions containing tissue rather than background or irregular regions according to pixel values. Even if we extract only tissue patches and ignore background regions, we can still get tens of thousands of patches per WSI, resulting in a huge number of images over the whole dataset. Different from recent segmentation and detection tasks in whole slide image analysis, our task is a patient-level decision aggregated from patch-level results. As pointed out in~\cite{hou2015efficient}, training patch-based CNNs for weakly supervised learning is very time-consuming (several weeks), so we use features from pre-trained models instead of training CNNs from scratch. We use a model pre-trained on ImageNet (e.g. VGG)~\cite{simonyan2014very} to extract features for each image patch; these features have more representational power than small ($50 \times 50$) thumbnail images for representing phenotypes~\cite{zhu2017wsisa}. We then adopt K-means clustering to cluster patches based on their deep learning features. Notice that one patient might have multiple WSIs, and we perform clustering per patient instead of over the whole database. Fig.\ref{fig:clustering} shows an example patient with three WSIs sampled from different locations of the biopsy tissue. The corresponding phenotype clusterings are shown on the right, with each color denoting one phenotype cluster. In this example, we clustered into 10 phenotype patterns.
The results show the effectiveness of this strategy, as similar patches are grouped into the same cluster. This demonstrates that features from the pre-trained model are capable of identifying patterns in whole slide images, and we expect them to be distinctive and informative for the later survival learning task.
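The per-patient clustering step can be sketched as follows. This is a minimal NumPy stand-in (a plain Lloyd's K-means) for the actual pipeline, which clusters 4096-dimensional VGG-16 features; the feature dimension and the synthetic two-mode features here are toy assumptions for illustration:

```python
import numpy as np

def kmeans_labels(feats, k, iters=50, seed=0):
    """Minimal Lloyd's K-means: assign each patch feature vector to one
    of k phenotype clusters. Note the paper clusters per patient, not
    over the whole database."""
    rng = np.random.default_rng(seed)
    centers = feats[rng.choice(len(feats), size=k, replace=False)]
    for _ in range(iters):
        # squared distance of every feature to every center -> nearest center
        d = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = feats[labels == j].mean(0)
    return labels

# Toy stand-in for deep features of one patient's patches
# (two well-separated appearance modes; real dim would be e.g. 4096).
rng = np.random.default_rng(1)
patient_feats = np.vstack([rng.normal(c, 0.1, size=(30, 16)) for c in (0.0, 5.0)])
labels = kmeans_labels(patient_feats, k=2)
# Patches drawn from the same mode should land in the same cluster.
assert len(set(labels[:30])) == 1 and len(set(labels[30:])) == 1
```

In the real pipeline the input rows are pre-trained CNN features of the $500 \times 500$ patches, and $k$ is the number of phenotype patterns (e.g. 6-12 in the experiments).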
\begin{figure}[!htb]
\centering
\includegraphics[width=0.7\linewidth]{fig/clustering.pdf}
\caption{Phenotype patterns visualization after clustering on three WSIs belonging to the same patient.}
\label{fig:clustering}
\end{figure}
By clustering the patches from all WSIs of a patient into several distinct phenotype groups, we obtain phenotype groups with varying predictive power for the patient's clinical outcome. The proposed DeepAttnMISL takes phenotypes as multiple inputs and considers their connections when predicting survival outcomes.
\subsubsection{Siamese MI-FCN}
After clustering, the patient is a set of phenotype clusters, and we design a siamese Multiple Instance Fully Convolutional Network (MI-FCN) to learn features from those patterns, \textcolor{red}{similar to the work in \cite{yao2019deep}}. Most existing well-known pre-trained models were trained on a single-instance basis, with a label associated with each image, which is not the case for our problem. We embed multiple sub-networks running inside our deep learning architecture with weights shared among them, as in a siamese architecture. Each sub-network is a fully convolutional network (FCN) that can learn an informative representation for an individual phenotype of the patient.
The architecture of each Multiple Instance Fully Convolutional Network (MI-FCN) is shown in Fig.\ref{fig:MIL}.
The combination of multiple fully convolutional layers and non-linear activation functions has proven to be a powerful non-linear feature mapping for multiple instance problems~\cite{yang2017miml}.
The reason to use fully convolutional networks (FCN) without any fully connected layers is that an FCN is more flexible and can handle any spatial resolution, which is needed for the considered problem since the number of patch samples in each phenotype varies.
For each phenotype, the input is a set of features from $m_i$ patches, which can be organized as a $1\times m_i\times d$ tensor ($d$ is the feature dimension or channel). The network consists of several layer-pairs of a $1\times 1$ conv layer and a ReLU layer (we show 2 pairs in Fig.\ref{fig:MIL}), with a global pooling layer (e.g. average pooling) added at the end. For the $j$-th phenotype, the resulting representation is denoted as $\mathbf{r}_j$.
The network receives one phenotype (tensor) as input, focuses on local information, and generates a representation for that phenotype. Since the number of patches per phenotype varies, the fully convolutional network is flexible enough to handle this scenario.
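Because a $1\times 1$ convolution over a $1 \times m_i \times d$ tensor is simply a shared linear map applied to every patch, one conv+ReLU pair followed by global average pooling can be sketched in NumPy as below (toy dimensions and random weights standing in for the trained ones; the real $d$ is the pre-trained feature dimension):

```python
import numpy as np

rng = np.random.default_rng(0)

def mi_fcn(phenotype, W, b):
    """One 1x1-conv + ReLU pair followed by global average pooling.
    A 1x1 convolution over an (m, d) phenotype tensor is a shared
    linear map applied to every patch, so the layer handles any
    number of patches m -- the flexibility attributed to FCNs."""
    h = np.maximum(phenotype @ W + b, 0.0)   # (m, d) -> (m, 64), ReLU
    return h.mean(axis=0)                    # global average pool -> (64,)

d, out = 32, 64                              # toy d; real d would be e.g. 4096
W, b = rng.normal(size=(d, out)) * 0.01, np.zeros(out)

r_small = mi_fcn(rng.normal(size=(7, d)), W, b)    # phenotype with 7 patches
r_large = mi_fcn(rng.normal(size=(200, d)), W, b)  # phenotype with 200 patches
assert r_small.shape == r_large.shape == (out,)    # same-size representation
```

Both phenotypes, regardless of patch count, yield a fixed-size 64-dimensional representation $\mathbf{r}_j$, which is what the attention pooling layer consumes next.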
\begin{figure}[!htb]
\centering
\includegraphics[width=0.6\linewidth]{fig/MI_FCN.pdf}
\caption{The network architecture in each MI-FCN. }
\label{fig:MIL}
\end{figure}
\subsubsection{Aggregation via Attention-based MIL pooling layer}
Local representations from the MI-FCNs encode information about the corresponding phenotype clusters, and aggregating them into a patient-level representation is a necessary step. Let $R=\{\mathbf{r}_1, \mathbf{r}_2, ..., \mathbf{r}_C\}$ be one patient with $C$ phenotype local representations; the goal is to obtain the patient-level representation $\mathbf{z}$. A straightforward choice is the maximum or mean operator, but the drawback is clear: these operators are pre-defined and non-trainable, and thus may not be flexible and adjustable for the specific task.
Previous work~\cite{zhu2017wsisa} used a weighted average of cluster features to obtain the patient feature, but performed this patient-level aggregation in a separate stage, so the whole approach cannot be trained end-to-end from instance level to patient level. A better way to integrate phenotype-level information is to leverage an attention mechanism that considers the importance of each phenotype. In this paper,
we propose to use attention-based MIL pooling~\cite{ilse2018attention} for aggregation, which is flexible and adaptive. With this pooling operator, the patient-level representation is calculated as
\begin{equation}
\mathbf{z} = \sum_{k=1}^C a_k \mathbf{r}_k,
\end{equation}
where
\begin{equation}
a_k = \frac{\exp\{\mathbf{w}^\top \tanh(\mathbf{V}\mathbf{r}_k^\top) \}}{\sum_{j=1}^C \exp\{\mathbf{w}^\top \tanh(\mathbf{V}\mathbf{r}_j^\top)\}}.
\end{equation}
In the weight $a_k$ calculation, $\mathbf{w} \in \mathbb{R}^{L\times1}$ and $\mathbf{V} \in \mathbb{R}^{L \times M}$ are trainable parameters. The element-wise hyperbolic tangent $\tanh(\cdot)$ non-linearity is used because it covers both negative and positive values, which ensures proper gradient flow. Attention-based MIL pooling allows different weights to be assigned to the phenotype clusters within one patient, so the final patient-level representation can be highly informative for survival prediction. In other words, it should be able to locate key clusters and provide potential ROIs.
Different from traditional attention mechanisms in which instances are sequentially dependent~\cite{lin2017structured, raffel2015feed}, multiple instance learning assumes that all instances are independent. Since the phenotypes in our problem are naturally independent of each other, the attention mechanism used in MIL pooling is well suited to achieving good results.
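Equations (1)-(2) can be sketched directly in NumPy; the randomly initialized $\mathbf{V}$ and $\mathbf{w}$ below stand in for trained parameters:

```python
import numpy as np

def attention_mil_pool(R, V, w):
    """Attention-based MIL pooling: z = sum_k a_k r_k with
    a_k = softmax_k( w^T tanh(V r_k) ).  R is (C, M): C phenotype
    representations of dimension M; V is (L, M), w is (L,)."""
    scores = w @ np.tanh(V @ R.T)            # (C,) unnormalised attention
    a = np.exp(scores - scores.max())        # numerically stable softmax
    a = a / a.sum()
    return a @ R, a                          # patient-level z and weights a_k

rng = np.random.default_rng(0)
C, M, L = 6, 64, 32                          # 6 phenotypes, as in the experiments
R = rng.normal(size=(C, M))
V, w = rng.normal(size=(L, M)), rng.normal(size=(L,))
z, a = attention_mil_pool(R, V, w)
assert z.shape == (M,) and np.isclose(a.sum(), 1.0) and np.all(a >= 0)
```

The weights $a_k$ returned here are exactly what the later heatmap visualizations display (after min-max rescaling), which is the source of the model's interpretability.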
\subsubsection{Loss Function}
After the attention-based MIL pooling, we generate the patient-level aggregation from all local representations. For the $i$-th patient passing through the proposed model, the output hazard risk is denoted as $\mathbf{o}_i$. Table \ref{tab:Bag_arch} presents the architecture details of the proposed DeepAttnMISL. The input of our model is the set of a patient's phenotype features, organized as $[(1\times m_1 \times d), (1\times m_2 \times d),..., (1\times m_C \times d)]$, where $C$ is the number of phenotypes and $m_i$ is the number of patches in the $i$-th phenotype.
\begin{table}[!htb] \caption{The architecture of DeepAttnMISL.}
\begin{center}
\begin{tabular}{l|l|l }
\hline
{Layer} & {Input} & {Output size} \\ \hline \hline
MI-FCN $i$ & 1 $\times$ $m_i \times d$ & $64$ ($\mathbf{r}_i$)\\ \hline
Attention MIL pooling & $64 \times C$ & 64 \\ \hline
Fully-Con. & 64 & 32 \\ \hline
Fully-Con. & 32 & 1 ($\mathbf{o_i}$) \\ \hline
\end{tabular}
\end{center}
\label{tab:Bag_arch}
\end{table}
Denote the label of the $i$-th patient as $(t_i, \delta_i)$, where $t_i$ is the observed time.
We assume that censoring ($\delta = 0$, death not observed) is non-informative in that, given $\mathbf{x}_i$, the event and censoring times for the $i$-th patient are independent. Let $t_1<t_2<\dots <t_N$ denote the ordered event times. The risk set $R(t_i)$ is the set of all individuals still under study at time $t_i$; that is, patient $j$ belongs to the risk set if the survival time $t_j$ is equal to or larger than $t_i$ ($t_j \ge t_i$). Conditioned upon the existence of a unique event at a particular time $t_i$, the probability that the death event occurs in patient $i$ is
\begin{equation}
L_i = \frac{\exp(\mathbf{o}_i)}{\sum_{j \in R(t_i)}\exp(\mathbf{o}_j)}.
\end{equation}
Assuming the patients' events were statistically independent, the joint probability of all death events conditioned upon the existence of events at those times is the partial likelihood:
\begin{equation}
L = \prod_{i:\delta_i=1}\frac{\exp(\mathbf{o}_i)}{\sum_{j \in R(t_i)}\exp(\mathbf{o}_j)}.
\end{equation}
The corresponding log partial likelihood is
\begin{align}
l = \log(L) & = \sum_{i:\delta_i=1}\Big(\mathbf{o}_i-\log\sum_{j \in R(t_i)}\exp(\mathbf{o}_j)\Big) = \sum_{i}\delta_i\Big(\mathbf{o}_i-\log\sum_{j \in R(t_i)}\exp(\mathbf{o}_j)\Big).
\end{align}
This function can be maximized over the network parameters to produce maximum partial likelihood estimates, which is equivalent to minimizing the negative log partial likelihood.
We therefore use the negative log partial likelihood as the loss function of our model:
\begin{align}\label{eq:survival_loss}
L(\mathbf{o}_i) =\sum_{i}\delta_i\Big(-\mathbf{o}_i + \log{\sum_{j: t_j \ge t_i}\exp(\mathbf{o}_j)}\Big).
\end{align}
In a simplified view, the loss function contributes to overall concordance by penalizing discordance: the predicted risk of a patient who survives longer should not exceed that of a patient with a shorter survival time.
Different from other deep models that use the same loss function~\cite{Katzman2016deepsurv,zhu2016deep, zhu2017wsisa}, the proposed model can better fit realistic patient whole slide imaging data and learn complex interactions using a deep multiple instance representation that covers both holistic and local information.
Since a patient's risk is correlated with the phenotypes in the WSIs, the proposed framework can efficiently exploit phenotypes through deep multiple instance learning and the attention mechanism for patient-level clinical outcome prediction.
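The loss in Eq. (7) can be sketched numerically. This minimal NumPy version uses a three-patient toy cohort with assumed risks and times; in the actual model, $\mathbf{o}$ is produced by the network and the loss is minimized by backpropagation:

```python
import numpy as np

def neg_log_partial_likelihood(o, t, delta):
    """Negative log Cox partial likelihood: sum over uncensored
    patients i of  -o_i + log sum_{j: t_j >= t_i} exp(o_j)."""
    loss = 0.0
    for i in range(len(o)):
        if delta[i] == 1:
            risk_set = o[t >= t[i]]          # patients still under study at t_i
            loss += -o[i] + np.log(np.exp(risk_set).sum())
    return loss

o = np.array([2.0, 1.0, 0.5])                # predicted risks
t = np.array([1.0, 2.0, 3.0])                # observed times
delta = np.array([1, 1, 1])                  # all uncensored
good = neg_log_partial_likelihood(o, t, delta)
# Ranking risks against survival (longer survivors getting higher risk)
# incurs a larger loss.
bad = neg_log_partial_likelihood(o[::-1].copy(), t, delta)
assert good < bad
```

Note how censored patients ($\delta_i = 0$) contribute only through the risk sets of others, never through their own event term.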
\section{Experiments}
\subsection{Dataset Description}
To validate the performance of the proposed DeepAttnMISL, we used two very large datasets on lung and colorectal cancers with high-resolution WSIs.
They are the National Lung Screening Trial (NLST)~\cite{national2011national} and the Molecular and Cellular Oncology (MCO) study~\cite{2015mco,jonnagaddala2016integration}. NLST is a very large lung cancer dataset collected by the National Cancer Institute's Division of Cancer Prevention (DCP) and Division of Cancer Treatment and Diagnosis (DCTD). The MCO study is a collection of imaging, specimen, clinical and genetic data from over 1,500 Australian individuals who underwent curative resection for colorectal cancer from 1994 to 2010. Clinical and pathological data were collected for all cases, including follow-up data. The WSI collection in the MCO study consists of more than 1,500 WSIs representing at least one typical section from each tumour case, stained with hematoxylin and eosin, and scanned using a 40x objective. We use different experimental comparison settings on the two datasets because we only have annotations locating tumor regions in NLST. Both datasets are well suited for WSI-based models, which do not require ROI labelling, but the more extensive ROI-based comparisons can only be made on the NLST dataset.
The numbers of WSIs and patients in each dataset are shown in Table \ref{tab:WSIs_number}.
State-of-the-art WSI models~\cite{zhu2017wsisa, tang2019capsurv} need to control the scale of the data because they face significant computational issues with very large numbers of patches; they sampled hundreds of patches per WSI and collected around 20K-200K patches in total. One advantage of the proposed model is its computational efficiency: it uses MIL with attention to aggregate 1D deep features from pre-trained models instead of training patch-based CNNs, which is very time-consuming~\cite{hou2015efficient}.
For training the baseline WSI survival models, we first extract a total of 130K and 275K patches for MCO and NLST, respectively. We then sample more patches on the MCO dataset, collecting 915K patches so that each WSI has more than 500 patches. The MCO study has more than 1,000 patients, which is much larger than the data used in recent work~\cite{zhu2017wsisa, tang2019capsurv}.
\begin{table}[!htb]
\begin{center}
\caption{The numbers of WSIs, patients, patches, and the average number of patches per WSI extracted in each dataset.} \label{tab:WSIs_number}
\begin{tabular}{c|c|c|c}
\hline
{Dataset} & {NLST} &{MCO\_130K} & MCO\_1M \\ \hline
\#patients & 387 & 1,146 & 1,146 \\ \hline
\#WSIs & 1,177 & 1,614 & 1,614\\ \hline
\#patches & 275,244 & 132,910 & 915,324\\ \hline
\#patches/WSI & 234 & 82 & 567\\ \hline
\end{tabular}
\end{center}
\end{table}
\subsection{Implementation details}
For training, we use Adam optimization with weight decay $5 \times 10^{-4}$. The learning rate is set to $10^{-4}$; training monitors the loss on the validation dataset and stops early if the validation loss increases substantially.
To evaluate performance in survival prediction, we take the concordance index (C-index) and area under the curve (AUC) as our evaluation metrics~\cite{heagerty2005survival}.
The C-index quantifies ranking quality and is calculated as follows
\begin{equation}
c=\frac{1}{n}\sum_{i:\,\delta_i=1}\;\sum_{j:\,t_j>t_i}I[f_i>f_j]
\end{equation}
where $n$ is the number of comparable pairs, $I[\cdot]$ is the indicator function, $t_i$ is the actual observed time, and $f_i$ denotes the corresponding predicted risk. The value of the C-index ranges from 0 to 1; the larger the value, the better the model predicts.
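The C-index computation can be sketched as a direct NumPy implementation of the formula above, on a toy cohort:

```python
import numpy as np

def c_index(t, delta, f):
    """Fraction of comparable pairs ordered correctly: for each event
    time t_i (delta_i = 1) and each later time t_j > t_i, the predicted
    risks should satisfy f_i > f_j."""
    concordant, n = 0, 0
    for i in range(len(t)):
        if delta[i] != 1:
            continue                          # censored: not an event anchor
        for j in range(len(t)):
            if t[j] > t[i]:
                n += 1
                concordant += int(f[i] > f[j])
    return concordant / n

t = np.array([1.0, 2.0, 3.0])                 # observed times
delta = np.array([1, 1, 0])                   # last patient censored
assert c_index(t, delta, np.array([3.0, 2.0, 1.0])) == 1.0  # perfect ranking
assert c_index(t, delta, np.array([1.0, 2.0, 3.0])) == 0.0  # inverted ranking
```

A value of 0.5 corresponds to random ranking, which is why the reported scores around 0.6 represent a meaningful (if moderate) predictive signal.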
\subsection{MCO results}
\subsubsection{Settings and Parameters}
To see the effects of phenotype patterns, we tested cluster numbers ranging from 6 to 12. We split the data into 80\% training and 20\% testing; 10\% of the training data is used as validation data for early stopping.
We note that the number of phenotype clusters is the maximum number that a patient sample can have. The proposed model is flexible enough to handle patients with fewer patterns (e.g. from a smaller biopsy tissue); we implement this by setting the corresponding weight $a_k$ to zero if there are no patches in a cluster. To evaluate different pooling strategies, we built two baselines by replacing the attention MIL pooling layer in DeepAttnMISL with the commonly used max and mean pooling layers, indicated as "DeepMIL + Max/Mean" below.
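The zero-weight handling for absent clusters can be sketched as a masked softmax over the attention scores (NumPy, toy scores; the exact in-model implementation may differ):

```python
import numpy as np

def masked_attention(scores, has_patches):
    """Softmax over phenotype attention scores, with clusters that are
    absent for this patient (no patches) forced to weight a_k = 0."""
    a = np.where(has_patches, np.exp(scores - scores.max()), 0.0)
    return a / a.sum()

scores = np.array([0.2, 1.5, -0.3, 0.7, 0.0, 0.9])     # C = 6 clusters
present = np.array([True, True, False, True, True, False])
a = masked_attention(scores, present)
assert np.isclose(a.sum(), 1.0) and a[2] == 0.0 and a[5] == 0.0
```

The remaining weights still sum to one, so the patient-level representation is a proper weighted average over only the phenotypes the patient actually has.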
Table \ref{tab:tune_phenotypes_MCO} presents the results of each model. First, DeepAttnMISL achieves the best results in all cases, demonstrating that attention MIL pooling is more flexible and effective than fixed pooling operators. Second, when the number of phenotypes is set to larger values, results degrade, showing that more clusters do not guarantee prediction benefits.
\begin{table}[!htb] \caption{ Performances with different number of phenotypes.}
\begin{center}
\begin{tabular}{ c|cccc}\hline
{\bfseries Model } & c=6 & c=8 & c=10 & c=12 \\ \hline
DeepAttnMISL & \textbf{0.652} & \textbf{0.648} & \textbf{0.624} & \textbf{0.607} \\
DeepMIL + Max & 0.594 & 0.606 & 0.606 & 0.540\\
DeepMIL + Mean & 0.604 & 0.578 & 0.604 & 0.601 \\ \hline
\end{tabular}
\end{center}
\label{tab:tune_phenotypes_MCO}
\end{table}
The basic MI-FCN network of our DeepAttnMISL consists of one convolutional layer, one ReLU layer, and one pooling layer. We study the effect of the number of convolution-ReLU layer pairs and report results in Table \ref{tab:tune_layers_MCO_cur}. For the 1-layer setting, we used 64 filters in the convolutional layer; we used $\{2048, 64\}$ filters for the 2-layer setting and $\{2048, 1024, 64\}$ for the 3-layer setting, respectively. Based on the table, we choose one convolutional-ReLU layer pair with global average pooling in the MI-FCN network.
\begin{table}[!htb]\caption{ Results under different network configurations on testing data. The cluster number is set to 6.}
\begin{center}
\begin{tabular}{ c|ccc}\hline
{c=6 } & 1 layer & 2 layers & 3 layers \\ \hline
Global Average Pooling & \textbf{0.652} & 0.634 & 0.644 \\
Global Max Pooling & 0.615 & 0.640 & 0.623 \\ \hline
\end{tabular}
\end{center}
\label{tab:tune_layers_MCO_cur}
\end{table}
To validate the effectiveness of the siamese architecture, we remove the siamese network and apply the attention pooling layer directly to the input features, so no phenotype clusters are considered. This scenario is the direct application of attention aggregation without phenotype clusters~\cite{ilse2018attention}. 5-fold cross-validation is performed with cluster number 6 on MCO-130K; results can be found in Table \ref{tab:siamese}. The overall performance is not as good as DeepAttnMISL, which indicates the importance of the siamese network and validates the effectiveness of phenotype clusters within it.
The final C-index across the 5 folds is $0.542\pm 0.022$ for the model without the siamese network and $0.595 \pm 0.036$ for the model with it, respectively. These results suggest the usefulness of phenotype patterns and the siamese architecture.
\begin{table}[!htb]\caption{ Validation of Siamese on MCO-130K dataset.}
\begin{center}
\begin{tabular}{ c|ccccc}\hline
{c=6} & fold 1 & fold 2 & fold 3 & fold 4 & fold 5 \\ \hline
No Siamese & 0.564 & 0.538 & 0.515 & 0.527 & 0.564 \\
w Siamese & 0.652 & 0.579 & 0.609 & 0.564 & 0.573 \\ \hline
\end{tabular}
\end{center}
\label{tab:siamese}
\end{table}
To validate the effects of different components, we add further evaluations varying the encoder and clustering parts; results can be found in Table \ref{tab:InceptionV3}. The more advanced InceptionV3~\cite{szegedy2016rethinking} model is tested, and we also introduce spectral clustering as an alternative to K-means. All other settings and architectures are kept the same. Fold-level details are given in Table \ref{tab:InceptionV3}. For the model with InceptionV3 and K-means clustering, the C-index is $0.598 \pm 0.054$ under 5-fold cross-validation; changing K-means to spectral clustering yields $0.593 \pm 0.032$. Compared with the model using VGG-16 and K-means clustering ($0.595 \pm 0.036$), the model variants perform very similarly. We therefore use VGG-16 and K-means clustering for the comparisons.
\begin{table}[!htb]\caption{ Results with different feature extractor and clustering on MCO-130K dataset.}
\begin{center}
\begin{tabular}{ c|ccccc}\hline
{c=6 } & fold 1 & fold 2 & fold 3 & fold 4 & fold 5 \\ \hline
InceptionV3+k & 0.670 & 0.540 & 0.630 & 0.603 & 0.549 \\
InceptionV3+sp & 0.630 & 0.565 & 0.611 & 0.554 & 0.607 \\ \hline
\end{tabular}
\end{center}
\label{tab:InceptionV3}
\end{table}
We also experiment with the more advanced gating mechanism~\cite{ilse2018attention,dauphin2017language} together with the $\tanh(\cdot)$ non-linearity in Eq. (2). Results on MCO-130K are reported in Table \ref{tab:gated}. Gated attention and the plain attention mechanism behave similarly under the different phenotype cluster settings, with plain attention slightly better.
\begin{table}[!htb]\caption{ Results of different attention mechanisms on MCO-130K dataset.}
\begin{center}
\begin{tabular}{l|cc}
\hline
& Gated-Attention & Attention \\ \hline
c=6 & 0.596 (0.029) & 0.595 (0.036) \\ \hline
c=8 & 0.586 (0.043) & 0.599 (0.049) \\ \hline
c=10 & 0.561 (0.048) & 0.585 (0.036) \\ \hline
c=12 & 0.579 (0.031) & 0.591 (0.026) \\ \hline
\end{tabular}
\end{center}
\label{tab:gated}
\end{table}
\subsubsection{Comparisons}
WSISA~\cite{zhu2017wsisa} is a representative WSI-based survival learning method, but it only extracts features from WSIs and needs a separate survival model to obtain final predictions. We choose three top survival models according to the settings in WSISA~\cite{zhu2017wsisa}: Lasso-Cox~\cite{tibshirani1997lasso}, En-Cox~\cite{yang2012cocktail} and MTLSA~\cite{li2016multi}. WSISA faces computational issues when there are too many patches in the whole dataset, so a scale of 100K-200K patches is acceptable for its experiments.
We have another collection of around 1 million patches to study the effect of patch scale, but we only run our model at this scale because training WSISA on it is computationally prohibitive.
\textcolor{red}{Our preliminary work DeepMISL~\cite{yao2019deep} has shown that using both global and local representations from Multiple Instance Learning can benefit survival prediction. However, the model still treats phenotype clusters equally and cannot recognize the clusters that contribute more to patients' survival.}
We perform 5-fold cross-validation and report the average C-index and AUC values in Tables \ref{tab:MCO_CI} and \ref{tab:MCO_AUC}, respectively. From both tables, one can see that the proposed method achieves better results than models using WSISA features under all cluster number settings on MCO-130K. The improvements can be attributed to the following differences.
First, clustering is performed patient-wise, whereas recent WSI-based approaches~\cite{zhu2017wsisa, tang2019capsurv} cluster all patches from all patients in the database;
WSISA~\cite{zhu2017wsisa} needs independent DeepConvSurv models to select important clusters and therefore has to divide the whole dataset into different types by clustering all patches.
\textcolor{red}{DeepMISL~\cite{yao2019deep} can combine both local and bag representations with MIL, but it is still unable to treat phenotype clusters differently, which limits its use on larger datasets.}
With the advantages of MIL and the attention mechanism, the proposed DeepAttnMISL can easily find the important instances (clusters) within the bag and is therefore more likely to achieve better patient-level predictions, with no need to perform clustering on the whole dataset. The trainable and adaptive attention-based MIL pooling in DeepAttnMISL can adjust to the task and data, which helps compute a better patient representation. Increases of 1\%-3\% are observed when we use more patches from the MCO-1M data, which suggests that more patches can benefit predictions but do not offer significant improvements. This demonstrates the robustness of the proposed DeepAttnMISL, which does not rely on the number of sampled patches.
\begin{table}[!htb]\caption{ C-index values of the proposed model and WSISA with different settings.}
\begin{center}
\begin{tabular}{l|ccccc}
\hline
Method & Settings & c=6 & c=8 & c=10 & c=12 \\ \hline
\multirow{2}{*}{DeepAttnMISL} & 130K & 0.595 & 0.599 & 0.585 & 0.591 \\
& 1M & \textbf{0.606} & \textbf{0.600} & \textbf{0.603} & \textbf{0.599} \\ \hline
\multirow{2}{*}{DeepMISL} & 130K & 0.557 & 0.547 & 0.587 & 0.543 \\
& 1M & 0.569 & 0.575 & 0.573 & 0.567 \\ \hline
W-MTLSA & 130K & 0.558 & 0.567 & 0.524 & 0.547 \\
W-LassoCox & 130K & 0.552 & 0.546 & 0.503 & 0.523 \\
W-EnCox & 130K & 0.552 & 0.545 & 0.504 & 0.522 \\ \hline
\end{tabular}
\end{center}
\label{tab:MCO_CI}
\end{table}
\begin{table}[!htb]\caption{ AUC values of the proposed model and WSISA with different settings.}
\begin{center}
\begin{tabular}{l|lllll}
\hline
Method & Settings & c=6 & c=8 & c=10 & c=12 \\ \hline
\multirow{2}{*}{DeepAttnMISL} & 130K & 0.623 & \textbf{0.640} & \textbf{0.636} & 0.622 \\
& 1M & \textbf{0.644} & 0.638 & 0.633 & \textbf{0.637} \\ \hline
\multirow{2}{*}{DeepMISL} & 130K & 0.564 & 0.552 & 0.590 & 0.547 \\
& 1M & 0.570 & 0.587 & 0.579 & 0.576 \\ \hline
W-MTLSA & 130K & 0.560 & 0.560 & 0.531 & 0.555 \\
W-LassoCox & 130K & 0.531 & 0.541 & 0.495 & 0.495 \\
W-EnCox & 130K & 0.532 & 0.544 & 0.497 & 0.496 \\ \hline
\end{tabular}
\end{center}
\label{tab:MCO_AUC}
\end{table}
Fig.\ref{fig:MCO_ci_boxplot} and Fig.\ref{fig:MCO_auc_boxplot} present boxplots of the C-index and AUC values for each model with different phenotype cluster numbers. We only show the caption in the top-left figure; the others share this description. Results of our method on MCO-1M and MCO-130K show no significant differences, indicating that the sampling strategy does not affect the final results of the proposed method in cross-validation settings.
One can also observe that our models consistently perform better than the WSISA models across different phenotype cluster numbers.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.8\linewidth]{fig/mco_ci.pdf}
\caption{Boxplots of C-index values with different numbers of phenotype patterns.}
\label{fig:MCO_ci_boxplot}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=0.8\linewidth]{fig/mco_auc.pdf}
\caption{Boxplots of AUC values with different numbers of phenotype patterns.}
\label{fig:MCO_auc_boxplot}
\end{figure}
Fig.\ref{fig:MCO_tune_c} visualizes the clustered phenotype patterns and the patches selected by DeepAttnMISL and WSISA on MCO-130K with the cluster number set to 6. The first row shows results from DeepAttnMISL, while the second shows results from WSISA.
In MCO-130K, around 100 patches per WSI are sampled, and one can clearly see that clustering based on VGG-16 features is capable of identifying patches from different layers of the WSI and grouping similar patches into the same category. The most important advantage of DeepAttnMISL is its good interpretability: we create a heatmap by showing the corresponding attention weight of each phenotype cluster, rescaling the attention weights as $a_k^{'} = (a_k - \min(a))/(\max(a)-\min(a))$.
Red indicates the highest attention weight and blue the lowest. From the heatmap, we can see that the proposed approach properly identifies higher-risk regions, because most patches with high attention weights come from tumor regions. Looking at the patches selected by WSISA, we observe that many patches from non-tumor regions are also selected. This is because WSISA selects clusters based on patches from the whole database, and thus cannot guarantee a reliable selection for a specific patient due to heterogeneity across patients.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.8\linewidth]{fig/MCO_WSISA_100K.pdf}
\caption{Comparison of phenotype patterns distribution in the first column. The second column shows heatmap and selected patches from the proposed model and WSISA on MCO-130K, respectively.}
\label{fig:MCO_tune_c}
\end{figure}
Clearer visualizations can be found in Fig.\ref{fig:MCO_1M} for the MCO-1M set, where more patches (about 1,000) are sampled per WSI. The first column shows phenotype patterns from the proposed model with different cluster numbers; the second column shows the corresponding heatmaps. The attention mechanism in DeepAttnMISL makes the decision easy to interpret in terms of instance-level labels. From the heatmaps, results for $c=6$ and $c=8$ look better, as most patches from cancerous regions receive high attention weights.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.8\linewidth]{fig/MCO_1M.pdf}
\caption{Phenotype patterns clustering visualizations and the corresponding heatmaps from the proposed model on MCO-1M.}
\label{fig:MCO_1M}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=0.8\linewidth]{fig/MCO_vis_1.pdf}
\caption{(a) Original WSI, (b) Phenotype patterns distribution, (c) Heatmaps from our model, (d) Selected patches with highest attention weights. The bottom shows representative patches from each phenotype.}
\label{fig:MCO_vis1}
\end{figure}
To better visually validate the effect of the attention mechanism, we collect and examine the attention weights as well as their corresponding patch images on the MCO-1M data in Fig.\ref{fig:MCO_vis1}. The bottom shows randomly selected patches from each phenotype; the frame colors of the patches correspond to the pattern colors in Fig.\ref{fig:MCO_vis1}-(b). We use a threshold of 0.8 to show only patterns with higher attention weights in Fig.\ref{fig:MCO_vis1}-(d). Each color represents one phenotype pattern of the whole slide image, and we can see that the proposed model pays more attention to patches related to tumor regions. Relatively low attention weights are given to normal tissue regions. More surprisingly, the model also gives low attention to background regions, as they provide no information and are noisy images.
Fig.\ref{fig:MCO_vis2} shows another example. From the figure, we can see that most patches from tumor regions are found, and our model successfully assigns higher attention weights to this pattern. For patches with relatively less complex structures and textures, our model identifies them as less important regions by giving them lower attention weights.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.7\linewidth]{fig/MCO_vis_2.pdf}
\caption{(a) Original WSI, (b) Phenotype patterns distribution, (c) Heatmaps from our model, (d) Selected patches with highest attention weights. The bottom shows representative patches from each phenotype.}
\label{fig:MCO_vis2}
\end{figure}
\subsection{Lung Cancer dataset results}
\subsubsection{Baseline models}
As we have annotations for the NLST dataset, we can conduct more extensive experiments with ROI-based survival models.
Following the recent framework~\cite{yu2016predicting}, we extracted 10 dense image patches from the ROIs and calculated hand-crafted features using CellProfiler~\cite{carpenter2006cellprofiler}, a state-of-the-art tool for medical image feature extraction and quantitative analysis. A total of 1,795 quantitative features were obtained from each image tile. We then averaged these features across the different patches of each patient.
These types of image features include cell shape, size, texture of the cells and nuclei, as well as the distribution of pixel intensity in the cells and nuclei.
We can summarize the comparison methods into five categories as follows:
\begin{itemize}
\item \textbf{Cox models}: The Cox proportional hazards model is the most commonly used semi-parametric model in survival analysis. Two regularized Cox models, the $l_1$-norm penalized model (LASSO-Cox)~\cite{tibshirani1997lasso} and the boosted Cox model (Cox-boost)~\cite{Binder2008Allowing}, are compared in the experiments.
\item \textbf{Parametric censored regression models}: PCR models formulate the joint probability of the uncensored and censored instances as a product of death density functions and survival functions, respectively~\cite{lee2003statistical}. We choose the Weibull and logistic distributions to approximate the survival data.
\item \textbf{MTLSA}: Multi-Task Learning model for Survival Analysis (MTLSA)~\cite{li2016multi} reformulates the survival model into a multi-task learning problem.
\item \textbf{WSISA}: WSISA can learn effective features from WSIs~\cite{zhu2017wsisa}. We train LassoCox and MTLSA on the WSISA-learned features, as these are the top-performing models according to its report. To investigate the performance obtained from a pre-trained network, a ResNet34~\cite{he2016deep} model is used and then fine-tuned as the backbone network in WSISA.
\item \textbf{DeepMISL}: Deep Multiple Survival Learning (DeepMISL) combines both local and global representations to predict outcomes~\cite{yao2019deep}.
\end{itemize}
\subsubsection{Results}
We report results for several candidate numbers of phenotypes, namely $\{6, 8, 10, 12\}$, on the testing dataset. From Table \ref{tab:tune_phenotypes}, we can see that models using fewer clusters are unable to achieve good results. The reason might be that patches of lung cancer patients are very heterogeneous, and it is relatively difficult to learn survival-related representations from fewer phenotypes.
The results suggest that 10 phenotypes achieve slightly better predictions, which is consistent with the findings in WSISA~\cite{zhu2017wsisa}. Thus, we cluster patches into 10 phenotypes in our model. Other parameters are kept the same as in the MCO experiments.
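For concreteness, the phenotype assignment step (each patch goes to the nearest cluster centroid in feature space) can be sketched as below; in the actual pipeline the features come from the deep feature extractor and the centroids from k-means, both of which we leave abstract here:

```python
def assign_phenotypes(features, centroids):
    """Assign each patch feature vector to its nearest centroid
    (the assignment step of k-means phenotype clustering)."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(range(len(centroids)), key=lambda c: sq_dist(f, centroids[c]))
            for f in features]
```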
\begin{table}[!htb] \caption{Performance with different numbers of phenotypes.}
\begin{center}
\begin{tabular}{ c|cccc}\hline
{\bfseries No.} & 6 & 8 & 10 & 12 \\ \hline
CI & 0.673 & 0.769 & 0.775 & 0.742 \\ \hline
\end{tabular}
\end{center}
\label{tab:tune_phenotypes}
\end{table}
Table~\ref{tab:ci_NLST} shows C-index and AUC values of various survival regression methods under 5-fold cross validation, demonstrating the predictive power of the proposed method compared with different survival models. One can see that the proposed method achieves both the highest C-index and the highest AUC, representing the best prediction performance among all methods.
From the table, the baseline models using hand-crafted features do not perform well, for the following reasons: 1) the limited local information provided by the patches extracted from the ROI using hand-crafted features; 2) the ineffective aggregation scheme used to represent the heterogeneity of the tumor and the patient from patch-based results. Instead of using a small set of patches and human-designed features, the proposed method can effectively learn a complex deep bag representation from phenotype patterns to predict patient survival outcomes.
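The C-index reported throughout measures the fraction of comparable patient pairs that the risk scores order correctly; a minimal reference implementation (ours, for illustration) is:

```python
def concordance_index(times, events, risks):
    """Harrell's C-index: fraction of comparable pairs ordered
    correctly by risk (higher risk -> shorter survival).
    events[i] == 1 marks an observed death (uncensored)."""
    concordant, ties, comparable = 0, 0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # pair (i, j) is comparable if i died and j outlived i
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    ties += 1
    return (concordant + 0.5 * ties) / comparable
```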
\begin{table*}[!htb] \caption{Performance comparison of the proposed method and other existing related methods using C-index and AUC values on the NLST dataset.}
\centering
\begin{tabular}{p{3cm}|p{7cm}|p{2.5cm}|p{2.5cm}}
\hline
Type & Method & C-index & AUC \\ \hline
\multirow{4}{*}{Deep Learning} & DeepAttnMISL & \textbf{0.6963} (0.0660) & \textbf{0.7143} (0.0541) \\
& DeepMISL~\cite{yao2019deep} & 0.6476 (0.0698) & 0.6693 (0.0866) \\
& Finetuned-WSISA-LassoCox \cite{zhu2017wsisa} & 0.6123 (0.0216) & 0.6427 (0.0575) \\
& Finetuned-WSISA-MTLSA \cite{zhu2017wsisa} & 0.6428 (0.0259) & 0.6963 (0.0668) \\
& WSISA-LassoCox \cite{zhu2017wsisa} & 0.5996 (0.0750) & 0.5957 (0.0674) \\
& WSISA-MTLSA \cite{zhu2017wsisa} & 0.6305 (0.0575) & 0.6479 (0.0936) \\ \hline
\multirow{2}{*}{Cox-based} & Lasso-Cox~\cite{tibshirani1997lasso} & 0.4842 (0.0508) & 0.4903 (0.1011) \\
& Cox-boost~\cite{Binder2008Allowing} & 0.5474 (0.0370) & 0.5271 (0.0386) \\ \hline
\multirow{2}{*}{Parametric models} & Logistic~\cite{kalbfleisch2011statistical} & 0.4998 (0.0881) & 0.5013 (0.1146) \\
& Weibull~\cite{kalbfleisch2011statistical} & 0.5577 (0.0395) & 0.5618 (0.0976) \\ \hline
Multi-task based & MTLSA~\cite{li2016multi} & 0.5053 (0.0509) & 0.5362 (0.0416) \\ \hline
Ranking based & BoostCI~\cite{mayr2014boosting} & 0.5595 (0.0610) & 0.5487 (0.0532) \\ \hline
\end{tabular}
\label{tab:ci_NLST}
\end{table*}
WSISA achieves better results than the baseline models, which shows the good representative ability of WSISA features.
However, WSISA needs a separate stage to train several DeepConvSurv models independently and discards some phenotypes in the final stage; its performance therefore depends on how well the important clusters are selected, and WSISA may still fail to select the survival-related clusters needed for a good final survival prediction.
To investigate results from a pre-trained model, we replaced the original 2DCNN network of WSISA with a pre-trained ResNet34~\cite{he2016deep}. The whole model is then fine-tuned following the same process. The improved C-index demonstrates that fine-tuned models bring benefits, but our DeepAttnMISL still outperforms the fine-tuned WSISA by a large margin.
When introducing MIL into survival learning, DeepMISL and the proposed model improve on the predictions of WSISA in terms of the C-index metric. Instead of selecting phenotypes, DeepMISL and the proposed model are designed to consider all possible patterns. Performance is further improved when we use the more flexible attention mechanism to learn informative and discriminative patterns. This architecture enables the proposed method to better learn the heterogeneous information encoded in WSIs, which makes it more practical and gives it better interpretability than DeepMISL in real applications.
We pick one patient as an example to show visualization results. Fig.\ref{fig:nlst_rois} presents all WSIs of this patient and the corresponding tumor region annotations.
Fig.\ref{fig:nlst_vis2_ours}-\ref{fig:nlst_vis2_WSISA} show results from the proposed model and WSISA, respectively. In Fig.\ref{fig:nlst_vis2_ours}, the first row shows attention weight heatmaps and the second row shows phenotype pattern distributions on the original WSIs. The bottom presents randomly selected patches with higher attention weights (patches shown in red in the heatmaps). It is clear that most patches from tumor regions are highlighted with high attention, while patches from normal tissues receive lower attention. Compared with the results from WSISA shown in Fig.\ref{fig:nlst_vis2_WSISA}, we can see that WSISA misses many tumor patches and selects many normal patches as discriminative patterns. Patches from cancerous regions can be grouped into similar clusters, but not all of them will be selected by WSISA, as the selection is performed via DeepConvSurv on all patches of the database. The selected phenotypes are thus more likely discriminative for the whole database of patients, and they are not well suited to interpreting a specific patient.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.85\linewidth]{fig/nlst_ROIs.pdf}
\caption{WSI Annotations of one example patient.}
\label{fig:nlst_rois}
\end{figure}
\begin{figure*}[!htb]
\centering
\includegraphics[width=0.85\linewidth]{fig/nlst_ours_1.pdf}
\caption{Phenotype pattern distribution and the corresponding heatmaps from the proposed model on three WSIs of the same patient. The bottom shows patches from phenotypes with high attention values.}
\label{fig:nlst_vis2_ours}
\end{figure*}
\begin{figure*}[!htb]
\centering
\includegraphics[width=0.8\linewidth]{fig/nlst_wsisa_1.pdf}
\caption{Phenotype pattern distribution and selected patterns from WSISA. Missing tumor patches can be observed from selected patterns by WSISA.}
\label{fig:nlst_vis2_WSISA}
\end{figure*}
\begin{figure*}[!htb]
\centering
\includegraphics[width=1\linewidth]{fig/kmplots_R2.pdf}
\caption{Kaplan-Meier survival curves of different models for one testing fold. High risk (greater than median) groups are plotted as green lines, and low risk (less than or equal to median) groups are plotted as red lines. The x axis shows the time in days and the y axis presents the probability of overall survival. The log-rank $p$ value is shown in each figure. "+" marks censored patients.}
\label{fig: kmplot}
\end{figure*}
Given the trained survival models, we can use the estimated testing risk scores to classify patients into low- or high-risk groups for personalized treatment. The two groups are separated by the median of the predicted risk scores. We evaluate whether these models can correctly classify deceased patients (uncensored data) into the two groups, since uncensored data are more informative. Patients with longer survival times should be classified into the low-risk group, and vice versa. If a model cannot correctly distinguish high- and low-risk deceased patients, the two average death times will be very close. We plot Kaplan-Meier survival curves for one testing fold in Fig.\ref{fig: kmplot}. From the figure, one can see that the proposed model separates the deceased testing patients into two groups more successfully than the other methods on all datasets.
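The median split used here can be sketched as follows (our illustration; group membership by index):

```python
def stratify_by_median(risks):
    """Split patients into high-risk (score > median) and
    low-risk (score <= median) groups by predicted risk."""
    s = sorted(risks)
    n = len(s)
    median = s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])
    high = [i for i, r in enumerate(risks) if r > median]
    low = [i for i, r in enumerate(risks) if r <= median]
    return high, low
```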
The log-rank test is conducted to test the difference between the two curves. The proposed method achieves the most significant log-rank outcome (p-value = $4.527 \times 10^{-3}$), while some of the other methods do not reach statistical significance. The Kaplan-Meier curves suggest that the proposed prediction model offers personalized risk scores which can better separate individuals into two groups, with a significant difference in survival times between them. It can be used as a recommendation system for personalized treatment by determining the relationship between a patient's whole slide pathological images and his or her risk of an event (death).
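The Kaplan-Meier curves are computed with the product-limit estimator $S(t) = \prod_{t_i \leq t} (1 - d_i/n_i)$; a self-contained sketch (ours) that handles censoring is:

```python
def kaplan_meier(times, events):
    """Product-limit estimate of survival: at each distinct event time t_i,
    multiply by (1 - d_i / n_i), where d_i deaths occur among n_i at risk.
    events[i] == 1 marks an observed death; 0 marks censoring."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk, s, curve = len(times), 1.0, []
    i = 0
    while i < len(order):
        t = times[order[i]]
        deaths = removed = 0
        while i < len(order) and times[order[i]] == t:
            deaths += events[order[i]]
            removed += 1
            i += 1
        if deaths > 0:
            s *= 1.0 - deaths / at_risk
            curve.append((t, s))   # step down only at event times
        at_risk -= removed
    return curve
```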
A log-rank p-value less than 0.05 is considered significant, and $p<0.1$ marginally significant. Over the five testing folds, our model achieves four significant results and one marginally significant result ($p=0.09$); DeepMISL achieves two significant results and one marginally significant result ($p=0.07$); Finetuned-WSISA-MTLSA achieves one significant result and three marginally significant results. The Logistic model achieves only one significant result ($p=0.049$), shown in Fig.13.
BoostCI achieves a significant result on one fold ($p=0.0123$). None of the other baseline models achieves a significant result on any testing fold.
Overall, the deep learning models perform better, achieving more significant results than models with hand-crafted features. The proposed model achieves the largest number of significant results, which validates that the predictor from our model is an important prognostic factor that can be used for good patient risk stratification.
\subsection{Ensemble Models}
We investigated whether ensemble models could improve the final results. In each fold, we train five models and then average their prediction scores on the corresponding testing fold. The maximum cluster number is set to 6 and 10, respectively. Table \ref{tab:ensemb} shows C-index values of the single and ensemble models on the MCO-1M and NLST datasets.
The average C-index across the five folds is 0.606 for the MCO single model and 0.600 for the MCO ensemble. On the NLST dataset, the average C-index of the single model is 0.696 and that of the ensemble model is 0.695. From the table, it can be seen that ensemble models do not provide additional predictive power.
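The ensembling tried here is plain score averaging across the trained models (a sketch):

```python
def ensemble_risk(predictions):
    """Average per-patient risk scores over several trained models.
    predictions[m][i] is model m's risk score for patient i."""
    n_models, n_patients = len(predictions), len(predictions[0])
    return [sum(p[i] for p in predictions) / n_models
            for i in range(n_patients)]
```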
\begin{table}[!htb]\caption{ Results of single and ensemble models}
\begin{center}
\begin{tabular}{ c|cccccc}\hline
{} & fold 1 & fold 2 & fold 3 & fold 4 & fold 5 \\ \hline
MCO single & 0.666 & 0.571 & 0.591 & 0.565 & 0.636 \\
MCO ensemble & 0.665 & 0.564 & 0.585 & 0.549 & 0.637 \\
NLST single & 0.750 & 0.775 & 0.613 & 0.663 & 0.680 \\
NLST ensemble & 0.766 & 0.797 & 0.555 & 0.656 & 0.699 \\
\hline
\end{tabular}
\end{center}
\label{tab:ensemb}
\end{table}
\section{Conclusion}
In this paper, we proposed a deep multiple instance model that directly learns survival patterns from gigapixel images without annotations, which makes it readily applicable to large-scale cancer datasets. Compared to existing image-based survival models, the developed framework can handle varying numbers and sizes of whole slide images across patients. It can learn holistic information about the patient using bag representations and achieves much better performance than ROI patch-based methods. Moreover, the flexible and interpretable attention-based MIL pooling overcomes the drawbacks of the fixed aggregation techniques in state-of-the-art survival learning models. We showed that our approach provides an interpretation of the clinical outcome prediction by presenting reasonable ROIs, which is very important in such practical applications. Additionally, we illustrated that the proposed method can provide personalized treatment recommendations and can be used by doctors to guide treatment decisions aimed at improving patient lifespan. With future research and development, the proposed approach has the potential to be applied to other tumor types.
\section*{Acknowledgements}
This work was partially supported by US National Science Foundation IIS-1718853, the CAREER grant IIS-1553687 and Cancer Prevention and Research Institute of Texas (CPRIT) award (RP190107).
The authors would like to thank the National Cancer Institute for access to NCI's data collected by the National Lung Screening Trial. The statements contained herein are solely those of the authors and do not represent or imply concurrence or endorsement by NCI.
\bibliographystyle{unsrt}
\section{Introduction}
Since the time of Hilbert and Bernays \cite{HilBer68}, a notable research issue has been finding the proper way of interfacing logic with analysis. One of the first methods to represent real numbers in a logic in a built-in manner was attempted in the 1960s -- see, e.g., the book by Chang and Keisler \cite{ChaKei66}. Later, Ben Yaacov and others realized that the lack of fruitful lines of research out of that logic was due to an unfortunate choice of parameters -- specifically, the truth values could vary wildly along an arbitrary compact Hausdorff space (instead of just the interval $[0,1]$), while equality itself was tightly restricted to binary values. Their efforts led to what has been called ``continuous first-order logic'', a system in which many celebrated and relatively advanced results of 20th century model theory could be reasonably translated -- see \cite{BenBerHenUsv08} for an introduction. Another strand of developments came from Henson's positive-bounded logic, introduced in \cite{Hen76} and later shown to be largely equivalent to continuous first-order logic. However, due to its later exhaustive treatment by Henson and Iovino focusing on the model-theoretic ultraproduct construction \cite{HenIov02}, this logic was subject to an investigation from which it resulted that, in combination with the aforementioned ultraproducts, it could be used to prove uniformity results in nonlinear analysis and ergodic theory -- see the recent paper of Avigad and Iovino \cite{AviIov13}.
What interests us here is the other known method of obtaining such results, namely the research program of ``proof mining'' -- a project first suggested by G. Kreisel in the 1950s under the name of ``unwinding of proofs'' and then given maturity by U. Kohlenbach and his collaborators starting in the 1990s. Proof mining aims to analyse existing proofs in branches of ordinary mathematics in order to exhibit their hidden combinatorial and computational content and also to devise general ``metatheorems'' that explain when such concrete witnesses or bounds may be extracted from a known proof, conditional on the formalization of the given proof inside some higher-order system of arithmetic. These general results may also specify when some parameters do not partake in the final formula -- hence the uniformity result is implicit in the quantitative one. So far, the research has been largely focused on nonlinear analysis and, naturally, the question of the right way of formalizing metric or normed spaces has been raised. Fortunately, the higher-order nature of the systems with which proof mining works has provided the following fourth solution to the problem: spaces are encoded as separate primitive types out of which the type algebra is constructed and on which axioms like the Banach space ones can be added as simply as in the purely arithmetic situation. A comprehensive reference for the theory of proof mining and its results up to 2008 is \cite{Koh08}, while a recent survey is \cite{Koh16}.
We can now ask the question of whether these proof-theoretic methods are sufficiently powerful to provide us with all uniformity results given to us by the model-theoretic properties of positive-bounded formulas. (Proof theory already had the upper hand in the matter of being able to deal with weak forms of extensionality.) The answer, as presented in the 2016 paper of G\"unzel and Kohlenbach \cite{GunKoh16}, is in the affirmative. To give a rough sketch, the positive-bounded formulas are there translated into a special class of higher-order formulas denoted by $\mathcal{PBL}$, which are then turned into $\Delta$-formulas, a class of formulas which can be freely added as additional axioms, with no negative consequences to the bound extraction procedure, as per the classical metatheorems of proof mining. A new metatheorem is then obtained for the classes of spaces which could be axiomatized by positive-bounded formulas. In addition, the treatment of a ``uniform boundedness principle'' tries to clarify just what exactly is the role played by the ultraproduct construction. Examples are given of such classes of spaces, and the translations for each set of axioms into the higher-order language are given explicitly, together with their metatheorems. Notable among these are the $L^p$ and $BL^pL^q$ Banach lattices, which are usually defined by a construction, but for which axiomatic characterizations into positive-bounded logic have been found, for the last one by Henson and Raynaud \cite{HenRay07}.
The space of $p$-integrable functions on a measure space $(\Omega,\mathcal{F},\mu)$ -- denoted by $L^p(\Omega,\mathcal{F},\mu)$ or simply by $L^p(\mu)$ -- is the Banach space built on the set of all real-valued measurable functions $f$ on $\Omega$ having the property that
$$\int_\Omega |f|^p \mathrm{d}\mu < \infty,$$
a set which is then factored by the a.e.-equality relation (making the canonical seminorm into a norm). It turns out -- see \cite{Lac74,LinTza73} for detailed expositions -- that these spaces can be given an implicit characterization, one which somewhat resembles the axiomatization of $BL^pL^q$ lattices analysed by G\"unzel and Kohlenbach. Notably, and in contrast to that one, this characterization does not use the natural lattice structure at all. What we shall do is to show how it may be modified in order to build from it a logical system that (i) accurately represents the $L^p(\mu)$ spaces relative to their standard models (Theorem~\ref{sound}); (ii) allows for a bound extraction metatheorem (Theorem~\ref{S3thm}); and (iii) admits an internal proof that the standard modulus of uniform convexity is valid for this class of spaces (Theorem~\ref{uconv}).
The next section runs parallel to the exposition in \cite{Koh08} (updated by \cite{GunKoh16}) and familiarizes the reader with the basic notions regarding the logical system used and the formulation of metatheorems, adapted for the present situation. Section~\ref{sec:delta} introduces and proves the essential lemmas that we use to obtain our characterization, presents and justifies its translation into the higher-order language and gives the corresponding metatheorem. Finally, Section~\ref{sec:app} exhibits an application of our results, namely the way in which one can quantitatively derive the uniform convexity of $L^p$ spaces using our axiomatization, for the case $p \geq 2$.
\section{Logical preliminaries}
The most powerful foundational system that has so far been studied from the viewpoint of proof mining is the system $\mathcal{A}^\omega$ of weakly extensional classical analysis in all finite types. It is created by adjoining certain choice principles -- the quantifier-free axiom schema of choice and the axiom schema of dependent choice -- to a base system of higher-order arithmetic, namely weakly extensional classical (Peano) arithmetic in all finite types, a system that is a modification of G\"odel's System T and is equivalent in proof-theoretic power to the first-order theory {\bf PA}. The system $\mathcal{A}^\omega$, in its turn, is of a power comparable to the first-order, two-sorted theory usually denoted by $\text{\bf Z}_\text{\bf 2}$ or ``full second-order arithmetic''. A detailed presentation of it can be found in the monograph \cite{Koh08}. We note in passing that it represents real numbers by functions $\mathbb{N} \to \mathbb{N}$ in such a way that the usual binary relations $=_\mathbb{R}$ and $\leq_\mathbb{R}$ are actually expanded into purely universal formulas, and $<_\mathbb{R}$ into a purely existential one. However, situations like the one happening in the usual definition of a convergent sequence, where ``$<\varepsilon$'' can be readily substituted for ``$\leq\varepsilon$'', occur frequently, which gives us some leeway in minimizing the complexity of the formulas under discussion.
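To give the reader some orientation (the formulas below are only schematic; the precise rational coding and the normalization $f \mapsto \hat{f}$ are as in \cite{Koh08}), each $f : \mathbb{N} \to \mathbb{N}$ is first normalized to a function $\hat{f}$ coding a Cauchy sequence of rationals with modulus $2^{-n}$, and one puts
\begin{align*}
f =_\mathbb{R} g &:\equiv \forall n \, \left( |\hat{f}(n+1) -_\mathbb{Q} \hat{g}(n+1)|_\mathbb{Q} \leq_\mathbb{Q} 2^{-n} \right),\\
f <_\mathbb{R} g &:\equiv \exists n \, \left( \hat{g}(n+1) -_\mathbb{Q} \hat{f}(n+1) \geq_\mathbb{Q} 2^{-n} \right),
\end{align*}
which exhibits $=_\mathbb{R}$ (and similarly $\leq_\mathbb{R}$) as purely universal and $<_\mathbb{R}$ as purely existential.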
From the way it has been built up, it is immediate that this system admits a G\"odelian functional interpretation in its bar-recursive extension devised by Spector \cite{Spe62}. However, in order to be useful to actual applications, we must do slight modifications of it, as it has been done for the general logical metatheorems of proof mining, developed by Kohlenbach \cite{Koh05} and by Gerhardy and Kohlenbach \cite{GerKoh06,GerKoh08}. We follow in the sequel the exposition of \cite{Koh08}, in order to present the first such extension, $\mathcal{A}^\omega[X,\|\cdot\|]$, which allows us to speak about normed spaces.
The set of types for this system, $\text{{\bf T}}^X$, will be generated by two ``primitive'' types, the type $0$ of natural numbers and a new abstract type $X$, representing elements from our space, forming a free algebra with a single binary operation $\to$, representing function types. (We will write $\tau(\rho)$ for $\rho\to\tau$.) For such a type $\rho$, we define the type $\hat{\rho}$ by replacing all occurences of $X$ in $\rho$ by $0$.
\begin{definition}
Such a type is {\bf small} if it is of the form $\rho\underbrace{(0)\ldots(0)}_{n\text{ times}}$, where $\rho \in \{0,X\}$ and $n \geq 0$.
\end{definition}
\begin{definition}
Such a type is {\bf admissible} if it is of the form $\rho(\tau_n)\ldots(\tau_1)$, where $\rho \in \{0,X\}$, $n \geq 0$ and $\tau_1$, ..., $\tau_n$ are all small.
\end{definition}
Clearly, all small types are admissible.
Also, we add new constants for the various operations common to normed spaces, i.e. $0_X$ and $1_X$ of type $X$, $+_X$ of type $X(X)(X)$, $-_X$ of type $X(X)$, $\cdot_X$ of type $X(X)(1)$ (where $1=0(0)$ is the type of real numbers) and $\|\cdot\|_X$ of type $1(X)$. We allow infix notation and the ``syntactic sugar'' of writing $x -_X y$ for $x +_X (-_X y)$. Finally, we add the following axioms:
\begin{enumerate}
\item the equational, and hence purely universal, axioms for vector spaces;
\item $\forall x^X (\|x-_X x\|_X =_\mathbb{R} 0_\mathbb{R} )$;
\item $\forall x^X y^X (\|x -_X y\|_X =_\mathbb{R} \|y -_X x\|_X)$;
\item $\forall x^Xy^Xz^X (\|x -_X z\|_X \leq_\mathbb{R} \|x-_X y\|_X +_\mathbb{R} \|y-_Xz\|_X)$;
\item $\forall \alpha^1 x^X y^X (\|\alpha x-_X\alpha y\|_X =_\mathbb{R} |\alpha|_\mathbb{R} \cdot_\mathbb{R} \|x-_X y\|_X)$;
\item $\forall \alpha^1 \beta^1 x^X (\|\alpha x -_X \beta x\|_X =_\mathbb{R} |\alpha -_\mathbb{R} \beta |_\mathbb{R} \cdot_\mathbb{R} \|x\|_X)$;
\item $\forall x^X \forall y^X \forall u^X \forall v^X (\|(x+_X y) -_X (u+_X v)\|_X \leq_\mathbb{R} \|x -_X u\|_X +_\mathbb{R} \|y-_X v\|_X)$;
\item $\forall x^X y^X (\|(-_X x) -_X (-_X y)\|_X =_\mathbb{R} \|x-_X y\|_X)$;
\item $\forall x^X y^X (|\|x\|_X -_\mathbb{R} \|y\|_X|_\mathbb{R} \leq_\mathbb{R} \|x-_X y\|_X)$;
\item $\|1_X\|_X =_\mathbb{R} 1_\mathbb{R}$.
\end{enumerate}
Note that the equality relation $x^X =_X y^X$ which is necessarily used in the expression of the vector space axioms is syntactically defined as $\|x -_X y\|_X =_\mathbb{R} 0_\mathbb{R}$. We define the equality for higher types as in the system $\mathcal{A}^\omega$, as extensional equality reducible to $=_0$ and $=_X$.
An issue when adding new constant symbols is their extensionality -- roughly, as the base system admits only a quantifier-free rule of extensionality, it is not clear that for a new function symbol $f$ that is added to the system (e.g. $+_X$ or $-_X$ from the above) one can prove in the new system a statement of the form
$$\forall x_1\ldots\forall x_n\forall y_1\ldots\forall y_n \Big(\bigwedge_{i=1}^n x_i = y_i \to f(x_1,\ldots,x_n) = f(y_1,\ldots,y_n)\Big).$$
Some axioms above, like the eighth one, are written in this way purely to minimize the effort in writing such an extensionality proof; the rest of them yield it more readily in their classical forms. The result is that all new function symbols are provably extensional. The last axiom is added solely to ensure the non-triviality of the formalized space.
In order to formalize the fact that the space is Banach, i.e. its completeness, the following is done (see \cite[pp. 432-434]{Koh08}). We first note that the following operation on $X$-valued sequences is term-definable in the system:
\[
\widehat{x_n} :=
\begin{cases}
\hfill x_n, \hfill & \text{if, for all $k < n$, } [d_X(x_k, x_{k+1})](k+1) <_\mathbb{Q} 6 \cdot 2^{-k-1},\\
\hfill x_k, \hfill & \text{where $k < n$ is the least such that } [d_X(x_k, x_{k+1})](k+1) \geq_\mathbb{Q} 6 \cdot 2^{-k-1},\\
\end{cases}
\]
where we have explicitly used the encoding of reals as functions and where $d_X(x,y)$ abbreviates $\|x -_X y\|_X$. The operation above transforms any sequence into a Cauchy one with the prescribed rate $2^{-n+3}$. We now add a new constant $C$ of type $X(X(0))$, used to assign a limit to such sequences. This is enforced by the following additional axiom:
$$\forall x^{X(0)} \forall k^0 (d_X(C(x), \widehat{x_k}) \leq_\mathbb{R} 2^{-k+3}).$$
We have therefore obtained the system $\mathcal{A}^\omega[X,\|\cdot\|,\mathcal{C}]$, formalizing Banach spaces.
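For intuition only, the freezing construction $\widehat{x_n}$ can be mirrored numerically as below, with floats standing in for the coded rational approximations $[d_X(x_k,x_{k+1})](k+1)$ (so this is a sketch of the idea, not of the formal term):

```python
def hat(x, n, d):
    """Return x_n if all consecutive gaps d(x_k, x_{k+1}) stay below
    6 * 2**(-k-1) for k < n; otherwise freeze at the least violating k."""
    for k in range(n):
        if d(x[k], x[k + 1]) >= 6 * 2 ** (-k - 1):
            return x[k]
    return x[n]
```

Freezing at the least violating index forces the resulting sequence to be Cauchy with the fixed rate, whether or not the input sequence was.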
We say that a formula in our language is a $\forall$-formula (resp. an $\exists$-formula) iff it is formed by adjoining a list of universal (resp. existential) quantifiers over variables of admissible types to a quantifier-free formula.
Now, if $(X,\|\cdot\|)$ is a Banach space, we define a canonical associated set-theoretic model $\mathcal{S}^{\omega,X} = \{\mathcal{S}_\rho\}_{\rho \in \text{{\bf T}}^X}$ in all finite types by putting $\mathcal{S}_0 := \mathbb{N}$, $\mathcal{S}_X := X$ and $\mathcal{S}_{\tau(\rho)} := \mathcal{S}_{\tau}^{\mathcal{S}_\rho}$ (i.e. the set-theoretic Hom-set), assigning to any language constant its standard value, except for $1_X$, which can take any value of norm $1$ -- this is why we said ``a'' set-theoretic model. Also, we say that a sentence of our logical language is modeled by such a pair $(X,\|\cdot\|)$ iff it is satisfied, in the usual Tarskian sense, by all the possible models associated to it (i.e. regardless of the exact value of $1_X$; note that any such value has norm $1$, so the tenth axiom is satisfied in this sense).
There is another relevant model associated to this kind of logical system. In order to introduce it, we define, for each $\rho \in \text{{\bf T}}^X$, the majorization relation $\gtrsim_\rho \subseteq \mathcal{S}_{\widehat{\rho}} \times \mathcal{S}_\rho$, inductively, as follows:
\begin{align*}
x^* \gtrsim_0 x :\Leftrightarrow& x^* \geq x\\
x^* \gtrsim_X x :\Leftrightarrow& x^* \geq \|x\|\\
x^* \gtrsim_{\tau(\rho)} x :\Leftrightarrow& \forall y^*,y (y^* \gtrsim_\rho y \to x^*y^* \gtrsim_\tau xy)\\
&\land \forall y^*,y (y^* \gtrsim_{\widehat{\rho}} y \to x^*y^* \gtrsim_{\widehat{\tau}} x^*y).
\end{align*}
We can now define the model of {\bf hereditarily strongly majorizable functionals}, $\mathcal{M}^{\omega,X} = \{\mathcal{M}_\rho\}_{\rho \in \text{{\bf T}}^X}$, by:
\begin{align*}
\mathcal{M}_0 &:= \mathbb{N} \\
\mathcal{M}_X &:= X \\
\mathcal{M}_{\tau(\rho)} &:= \{x \in \mathcal{M}_\tau^{\mathcal{M}_\rho} \mid \text{there exists } x^* \in \mathcal{M}_{\widehat{\tau}}^{\mathcal{M}_{\widehat{\rho}}} \text{ such that } x^* \gtrsim_{\tau(\rho)} x \}
\end{align*}
One of the main uses of this majorizable model arises from the fact that, unlike the standard model, it is a model of bar recursion, which is needed in the current state of the art to interpret the principle of dependent choice. Therefore, the proof of the general logical metatheorems involves some constant juggling between the two models (see \cite[pp. 421-428]{Koh08}). As a consequence, the kind of sentences that one may freely add as axioms will be restricted here not only by their logical complexity, but also by the types involved. Here we see how the admissible types come into play: for such a type $\rho$, one has (see \cite[Lemma 5.7]{GunKoh16}) that $\mathcal{M}_\rho \subseteq \mathcal{S}_\rho$. This justifies the following definition.
\begin{definition}
We say that a formula in our system is a {\bf $\Delta$-sentence} if it is of the following form:
$$\forall\underline{a}^{\underline{\delta}} \exists \underline{b}^{\underline{\sigma}} \preceq_{\underline{\sigma}} \underline{r} \underline{a} \forall \underline{c}^{\underline{\gamma}} B_0(\underline{a},\underline{b},\underline{c}),$$
where underlined letters represent tuples of variables or types, $B_0$ is quantifier-free and devoid of any additional variables, $\underline{r}$ is a term tuple of the appropriate type, $\underline{\delta}$, $\underline{\sigma}$, $\underline{\gamma}$ are tuples of admissible types, and $\preceq$ is syntactic sugar for the following family of binary relations:
\begin{align*}
x \preceq_0 y &:\equiv x \leq y \\
x \preceq_X y &:\equiv \|x\| \leq \|y\| \\
x \preceq_{\tau(\rho)} y &:\equiv \forall z^\rho (x(z) \preceq_\tau y(z))
\end{align*}
\end{definition}
\begin{definition}
The {\bf Skolem normal form} of a $\Delta$-sentence written as above is:
$$\exists \underline{B}^{\underline{\sigma}(\underline{\delta})} \preceq_{\underline{\sigma}(\underline{\delta})} \underline{r} \forall \underline{a}^{\underline{\delta}} \forall \underline{c}^{\underline{\gamma}} B_0(\underline{a}, \underline{B}\underline{a}, \underline{c}) $$
\end{definition}
\begin{notation}
If $\Delta$ is a set of $\Delta$-sentences, we denote by $\widetilde{\Delta}$ the set of the Skolem normal forms of the sentences in the set $\Delta$.
\end{notation}
\begin{theorem}[{\cite[Lemma 5.11]{GunKoh16}}]
Let $(X,\|\|)$ be a Banach space, $\mathcal{S}^{\omega,X}$ and $\mathcal{M}^{\omega,X}$ be models associated with it as above. Let $\Delta$ be a set of $\Delta$-sentences. Suppose that $\mathcal{S}^{\omega,X} \models \Delta$. Then $\mathcal{M}^{\omega,X} \models \widetilde{\Delta}$.
\end{theorem}
The following result is the appropriate modification for our case of \cite[Theorem 5.13 and Corollary 5.14]{GunKoh16}.
\begin{theorem}[{Logical metatheorem for Banach spaces endowed with additional $\Delta$-axioms}]\label{S2thm}\ \\
Let $\rho \in \text{{\bf T}}^X$ be an admissible type. Let $B_\forall(x,u)$ be a $\forall$-formula with at most $x,u$ free and $C_\exists(x,v)$ an $\exists$-formula with at most $x,v$ free. Let $\Delta$ be a set of $\Delta$-sentences. Suppose that:
$$\mathcal{A}^\omega[X,\|\cdot\|,\mathcal{C}] + \Delta \vdash \forall x^\rho (\forall u^0 B_\forall(x,u) \to \exists v^0 C_\exists(x,v)).$$
Then one can extract a partial functional $\Phi : S_{\widehat{\rho}} \rightharpoonup \mathbb{N}$, whose restriction to the strongly majorizable functionals of $S_{\widehat{\rho}}$ is a bar-recursively computable functional of $\mathcal{M}^\omega$, such that for all Banach spaces $(X,\|\|)$ having the property that any associated set-theoretic model of it satisfies $\Delta$, we have that for all $x \in S_\rho$ and $x^* \in S_{\widehat{\rho}}$ such that $x^* \gtrsim_\rho x$, the following holds:
$$\forall u \leq \Phi(x^*) B_\forall(x,u) \to \exists v \leq \Phi(x^*) C_\exists(x,v).$$
In addition:
\begin{enumerate}[label=\arabic*.]
\item If $\widehat{\rho}$ is equal to $1$, then $\Phi$ is total.
\item All variables may occur as finite tuples satisfying the same restrictions.
\item If the proof in the system above proceeds without the use of the axiom of dependent choice, one can use solely the set-theoretical model $\mathcal{S}^{\omega, X}$, without any restriction to the majorizable functionals, and $\Phi$ is then a total computable functional which is primitive recursive in the higher-order sense of G\"odel. Also, the additional restriction imposed on $\rho$ is no longer necessary.
\end{enumerate}
\end{theorem}
\section{\texorpdfstring{The $\Delta$-axiomatization of $L^p(\mu)$ Banach spaces}{The axiomatization of Lp Banach spaces}}\label{sec:delta}
The goal of this section is to describe an extension of the theory in the previous section, one that can formalize the concept of an $L^p(\mu)$ Banach space. Since such spaces are usually defined explicitly, as equivalence classes of $p$-integrable real-valued functions on a measure space, it is clear that an implicit characterization is needed. Such a characterization in terms of the natural lattice structure of $L^p(\mu)$ spaces was used in \cite{GunKoh16} in order to provide a logical metatheorem for this class of spaces. For our use, however, it is more helpful to use the following characterization, for which we refer to \cite{LinTza73, Lac74}, and which uses only the Banach space structure. In the sequel, we shall denote by $\mathbb{R}^n_p$ the Euclidean vector space $\mathbb{R}^n$ endowed with the standard $p$-norm.
\begin{definition}
Let $X$ and $Y$ be linearly isomorphic Banach spaces. The {\bf Banach-Mazur distance} between $X$ and $Y$ is
$$d(X,Y) := \inf \{ \|L\| \|L^{-1}\| \mid L \text{ is a linear isomorphism between $X$ and $Y$}\}.$$
\end{definition}
\begin{definition}
Let $p$, $\lambda > 1$. We say that a Banach space $X$ is an {\bf $\mathcal{L}_{p,\lambda}$ space} if for each finite dimensional subspace $Y$ of $X$ there exists a finite dimensional subspace $Z$ of $X$ such that $Y \subseteq Z$ and $d(Z,\mathbb{R}^{\dim_\mathbb{R} Z}_p) \leq \lambda$.
\end{definition}
\begin{theorem}[{\cite{LinPel68, Tza69}}]\label{Banach-char}
Let $p > 1$. A Banach space $X$ is isometric to some $L^p(\mu)$ space iff for all $\varepsilon > 0$, $X$ is an $\mathcal{L}_{p,1+\varepsilon}$ space.
\end{theorem}
The first step in converting the above characterization into a logical axiomatization consists of the following quantitative bounding lemmas. In proving them, we shall use an argument adapted from \cite[Proposition 3.7]{HenRay07}.
\begin{lemma}\label{l1}
Let $X$ be the $L^p$ space on a measure space $(\Omega, \mathcal{F}, \mu)$. Then, for all $x_1$,..., $x_n$ in $X$ of norm less than $1$, and for all $N \in \mathbb{N}_{\geq 1}$, there is a subspace $C \subseteq X$ and $y_1$,..., $y_n$ in $C$ such that $C$ is of dimension at most $(2nN + 1)^n$, it is isometric to $\mathbb{R}^{\dim_\mathbb{R} C}_p$ and for all $i$, $\|x_i - y_i\| \leq \frac1N$.
\end{lemma}
\begin{proof}
For any $f : \Omega \to \mathbb{R}$, we denote by $|f| : \Omega \to \mathbb{R}$ the function defined, for all $\omega \in \Omega$, by $|f|(\omega) := |f(\omega)|$.
We fix from the beginning some representatives for $x_1$,..., $x_n$, denoting them by the same designators, and we note that all constructions below will be well-defined w.r.t. the a.e.-equality equivalence relation. We set $\varphi:=\sum_{j=1}^n |x_j|$ and, for each $i \in \{1,\ldots,n\}$ and $k \in \{0,\ldots,nN-1\}$:
$$A_{i,k}:=\{\omega \in \Omega \mid \frac{k}{nN}\varphi(\omega) < |x_i(\omega)| \leq \frac{k+1}{nN}\varphi(\omega)\},$$
$$A_{i,k,+} := \{\omega \in A_{i,k} \mid x_i(\omega) > 0\}, \quad A_{i,k,-} := \{\omega \in A_{i,k} \mid x_i(\omega) < 0\},$$
$$A_{i,\otimes} := \{\omega \in \Omega \mid x_i(\omega) = 0\}.$$
Clearly, for all $i$, we have that $\Omega = \bigcup_{k=0}^{nN-1} (A_{i,k,+} \cup A_{i,k,-}) \cup A_{i, \otimes}$ and this is a disjoint union in all of its components.
For each $i$, put $y_i := \sum_{k=0}^{nN-1} \frac{k}{nN} (\mathbbm{1}_{A_{i,k,+}}-\mathbbm{1}_{A_{i,k,-}}) \cdot \varphi$. Let $i \in \{1,\ldots,n\}$ and $\omega \in \Omega$ be such that $x_i(\omega) > 0$. Then, by the above, there is a unique $k_0$ such that $\omega \in A_{i,k_0,+}$ and there is no $k$ such that $\omega \in A_{i,k,-}$. Therefore, $y_i(\omega) = \frac{k_0}{nN} \cdot \varphi(\omega)$. As $\omega \in A_{i,k_0,+}$, $x_i (\omega) \leq \frac{k_0+1}{nN}\varphi(\omega)$, so $x_i (\omega) - y_i(\omega) \leq \frac{\varphi(\omega)}{nN}$. Since we also have that $x_i (\omega) > \frac{k_0}{nN}\varphi(\omega) = y_i(\omega)$ (so $x_i(\omega) - y_i(\omega)>0$), we get that $|x_i(\omega) - y_i(\omega)|\leq \frac{\varphi(\omega)}{nN}$. The cases $x_i(\omega) = 0$ and $x_i(\omega) < 0$ are treated analogously. We have therefore established that for all $i$, $|x_i - y_i| \leq \frac1{nN} \cdot \varphi$. From that we get that for all $i$,
$$\|x_i - y_i \| \leq \frac1{nN} \cdot \|\varphi\| \leq \frac1{nN} \sum_{j=1}^n\|x_j\| \leq \frac1N.$$
Returning to the disjoint union from before, we remark that, for different $i$'s, those sets might overlap. Therefore, for each $l : \{1,\ldots,n\} \to ((\{0,\ldots,nN-1\} \times \{+,-\}) \cup \{\otimes\})$, set:
$$B_l := \bigcap_{i=1}^n A_{i,l(i)}$$
so
$$\Omega = \bigcup_l B_l$$
is a disjoint union. For each such $l$, of which there are $(2nN + 1)^n$, set now:
$$z_l := \mathbbm{1}_{B_l} \cdot \varphi.$$
We have, then, for each $i$, that:
\begin{align*}
y_i &= \sum_{k=0}^{nN-1} \frac{k}{nN} (\mathbbm{1}_{A_{i,k,+}}-\mathbbm{1}_{A_{i,k,-}}) \cdot \varphi \\
&= \sum_{k=0}^{nN-1} \frac{k}{nN} \left(\sum_{l(i)=(k,+)}\mathbbm{1}_{B_l}-\sum_{l(i)=(k,-)}\mathbbm{1}_{B_l}\right) \cdot \varphi \\
&= \sum_{k=0}^{nN-1} \frac{k}{nN} \left(\sum_{l(i)=(k,+)}z_l - \sum_{l(i)=(k,-)}z_l\right),
\end{align*}
i.e. a linear combination of $z_l$'s.
Let $D$ be the set of all $l$'s such that $z_l \neq 0$. We take $C$ to be the space spanned by all the $z_l$'s with $l \in D$. It clearly contains, by the above, all the $y_i$'s, and its dimension is at most (in fact, as we shall see, exactly) the cardinality of $D$, which is in turn at most $(2nN + 1)^n$. It remains to show that it is isometric to $\mathbb{R}^D_p$. If $l \in D$, then:
$$0 \neq \|z_l\| = \left(\int_\Omega |z_l|^p \mathrm{d}\mu\right)^{\frac1p} = \left(\int_{B_l} |\varphi|^p \mathrm{d}\mu\right)^{\frac1p},$$
so we may put
$$\beta_l:=\frac1{\left(\int_{B_l} |\varphi|^p \mathrm{d}\mu\right)^{\frac1p}}.$$
We show now that the linear map $f:\mathbb{R}^D_p \to C$, defined on the standard basis vectors by $f(e_l):=\frac1{\beta_l}\cdot z_l$ is an isometry. Let $v \in \mathbb{R}^D_p$, so there exist $(\lambda_l)_{l \in D}$ such that $v = \sum_{l \in D} \lambda_l e_l$. Then we have that:
\begin{align*}
\|f(v)\| &= \left\|\sum_{l \in D} \frac{\lambda_l}{\beta_l}\cdot z_l \right\| \\
&= \left(\int_\Omega \left|\sum_{l \in D} \frac{\lambda_l}{\beta_l} \cdot \mathbbm{1}_{B_l} \cdot \varphi \right|^p \mathrm{d}\mu\right)^{\frac1p} \\
&= \left(\sum_{l \in D} \int_{B_l} \left|\frac{\lambda_l}{\beta_l}\right|^p \cdot \left|\varphi \right|^p \mathrm{d}\mu\right)^{\frac1p} &\text{(as the $B_l$'s are disjoint)} \\
&= \left(\sum_{l \in D} \left|\frac{\lambda_l}{\beta_l}\right|^p \int_{B_l} \left|\varphi \right|^p \mathrm{d}\mu\right)^{\frac1p} \\
&= \left(\sum_{l \in D} \left|\lambda_l\right|^p \right)^{\frac1p} \\
&= \left\| \sum_{l \in D} \lambda_l e_l \right\| \\
&= \|v\|,
\end{align*}
and we are done.
\end{proof}
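The discretization in the proof above can be sanity-checked numerically on a finite measure space (counting measure on finitely many atoms). The following Python sketch is merely our own illustration, not part of the formal development: it implements the binning defining the $y_i$'s and verifies the bound $\|x_i - y_i\| \leq \frac1N$.

```python
import numpy as np

def discretize(xs, N):
    """Approximate each x_i by a function constant on the sets B_l, via
    y_i = sum_k k/(nN) (1_{A_{i,k,+}} - 1_{A_{i,k,-}}) * phi."""
    n = len(xs)
    phi = sum(np.abs(x) for x in xs)  # phi = sum_j |x_j|
    ys = []
    for x in xs:
        y = np.zeros_like(x)
        m = phi > 0                   # where phi = 0, all the x_i vanish
        # the unique k with k/(nN)*phi < |x| <= (k+1)/(nN)*phi
        k = np.clip(np.ceil(np.abs(x[m]) * n * N / phi[m]) - 1, 0, n * N - 1)
        y[m] = np.sign(x[m]) * (k / (n * N)) * phi[m]
        ys.append(y)
    return ys

def pnorm(v, p):
    # p-norm w.r.t. counting measure
    return np.sum(np.abs(v) ** p) ** (1 / p)

rng = np.random.default_rng(0)
p, n, N = 3.0, 4, 10
xs = [rng.standard_normal(50) for _ in range(n)]
xs = [x / max(1.0, pnorm(x, p)) for x in xs]   # norms at most 1
ys = discretize(xs, N)
errs = [pnorm(x - y, p) for x, y in zip(xs, ys)]
print(max(errs))
```

The clipped ceiling computes exactly the index $k$ from the proof, and the maximal error stays below $\frac1N$ by the pointwise estimate $|x_i - y_i| \leq \frac1{nN}\varphi$.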
\begin{lemma}\label{l1-2}
The statement of Lemma~\ref{l1} is still valid if we require that all $y_i$'s are of norm less than $1$ and we allow for $C$ to be of dimension at most $(4nN + 1)^n$.
\end{lemma}
\begin{proof}
We apply Lemma~\ref{l1} for our $x_i$'s, but with $N$ replaced by $2N$. We therefore obtain a subspace $C \subseteq X$ and $y'_1$,..., $y'_n$ in $C$ such that $C$ is of dimension at most $(4nN + 1)^n$, it is isometric to $\mathbb{R}^{\dim_\mathbb{R} C}_p$ and for all $i$, $\|x_i - y'_i\| \leq \frac1{2N}$. For each $i$, if $\|y'_i\|\geq 1$, set $y_i := \frac{y'_i}{\|y'_i\|}$, else put $y_i := y'_i$. For the ``unmodified'' $y_i$'s, clearly $\|x_i - y_i\| \leq \frac1N$. The others are certainly still in $C$, so we must only show for them that $\|x_i - y_i\| \leq \frac1N$.
Set $\alpha_i := \frac1{\|y'_i\|}$. Since $\|y'_i\| \leq \|x_i\| + \|y'_i - x_i\| \leq 1 + \frac1{2N}$, we get that $\frac{1-\alpha_i}{\alpha_i} \leq \frac1{2N}$, so:
$$\|x_i - y_i\| = \|x_i - \alpha_iy'_i\| \leq \|x_i - y'_i\| + \|y'_i - \alpha_iy'_i\| \leq \frac1{2N} + (1-\alpha_i)\|y'_i\| = \frac1{2N} + \frac{1-\alpha_i }{\alpha_i} \leq \frac1N,$$
and we are done.
\end{proof}
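The renormalization step in the proof above (dividing by $\max\{\|y'_i\|,1\}$, which at most doubles the approximation error) also admits a direct numerical check. The snippet below, again our own illustration, samples perturbations of $p$-norm at most $\frac1{2N}$:

```python
import numpy as np

rng = np.random.default_rng(4)
p, N, dim = 2.5, 8, 20

def pnorm(v):
    return np.sum(np.abs(v) ** p) ** (1 / p)

worst = 0.0
for _ in range(10**4):
    x = rng.standard_normal(dim)
    x /= max(1.0, pnorm(x))                         # ||x|| <= 1
    e = rng.standard_normal(dim)
    e *= rng.uniform(0, 1) / (2 * N) / pnorm(e)     # ||e|| <= 1/(2N)
    yp = x + e                                      # so ||x - y'|| <= 1/(2N)
    y = yp / max(1.0, pnorm(yp))                    # pull y' back into the ball
    worst = max(worst, pnorm(x - y))
print(worst)
```

As the proof predicts, the worst observed error never exceeds $\frac1N$.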
\begin{lemma}\label{l2-new}
Let $X$ be a Banach space that satisfies the conclusion of Lemma~\ref{l1-2}. Then, for all $x_1$,..., $x_n$ in $X$ of norm exactly $1$, and for all $N \in \mathbb{N}_{\geq 1}$, there is a subspace $C \subseteq X$ and $y_1$,..., $y_n$ of norm exactly $1$ in $C$ such that $C$ is isometric to $\mathbb{R}^{\dim_\mathbb{R} C}_p$ and for all $i$, $\|x_i - y_i\| \leq \frac1N$.
\end{lemma}
\begin{proof}
Let $x_1$,..., $x_n$ be elements of $X$ of norm exactly $1$, and let $N \in \mathbb{N}_{\geq 1}$. We apply our hypothesis (i.e. the conclusion of Lemma~\ref{l1-2}) to these $x_i$'s, with $N$ replaced by $2N$. We therefore obtain a subspace $C \subseteq X$ and $y'_1$,..., $y'_n$ in $C$ of norm at most $1$ such that $C$ is of dimension at most $(8nN + 1)^n$ (note that we no longer care about this), it is isometric to $\mathbb{R}^{\dim_\mathbb{R} C}_p$ and for all $i$, $\|x_i - y'_i\| \leq \frac1{2N}$. For each $i$, we have that $1 = \|x_i\| \leq \|y'_i\| + \|x_i - y'_i\| \leq \|y'_i\| + \frac1{2N}$, from which we get that $\|y'_i\| \geq 1 - \frac1{2N} > 0$. We may therefore set $\alpha_i := \frac1{\|y'_i\|}$ and $y_i:=\alpha_i y'_i$. These vectors are of norm $1$ and still in $C$, so what remains to be shown is that for each $i$, $\|x_i - y_i\| \leq \frac1N$.
For each $i$, $\|y'_i\| \leq 1$, so $\alpha_i -1 \geq 0$. Then from the relation $\|y'_i\| \geq 1 - \frac1{2N}$ obtained above, we get that $\frac{\alpha_i -1}{\alpha_i} \leq \frac1{2N}$, so:
$$\|x_i - y_i\| = \|x_i - \alpha_iy'_i\| \leq \|x_i - y'_i\| + \|y'_i - \alpha_iy'_i\| \leq \frac1{2N} + (\alpha_i -1)\|y'_i\| = \frac1{2N} + \frac{\alpha_i -1}{\alpha_i} \leq \frac1N.$$
\end{proof}
Plugging the above result into the ``commutativity of approximation'' argument of \cite[p. 198]{LinTza73}, one obtains that an $L^p(\mu)$ space is actually an $\mathcal{L}_{p,1+\varepsilon}$ space for all $\varepsilon > 0$, thereby proving the ``only if'' direction of Theorem~\ref{Banach-char}. (Actually, the whole reason for the presence of Lemma~\ref{l2-new} here was to clarify why we can force the norm $1$ constraint on the $y_i$'s in that argument.) What is in fact relevant here is the extra information this detour gives us through Lemma~\ref{l1-2}, namely the equivalence of the two conditions of the theorem with a third one, expressed as follows:
\begin{center}
\begin{tabular}{p{12cm}}
\emph{for all $x_1$,..., $x_n$ in $X$ of norm less than $1$ and for all $N \in \mathbb{N}_{\geq 1}$, there is a subspace $C \subseteq X$ and $y_1$,..., $y_n$ in $C$ of norm less than $1$ such that $C$ is of dimension at most $(4nN + 1)^n$, it is isometric to $\mathbb{R}^{\dim_\mathbb{R} C}_p$ and for all $i$, $\|x_i - y_i\| \leq \frac1N$.}
\end{tabular}
\end{center}
The advantage of the condition above is that it is both intrinsic and quantitative, therefore amenable to a logical axiomatization.
\begin{table}[ht!]
\begin{center}
\begin{tabular}{ | c |}
\hline
\ \\
$\psi_m(\underline{z}) := \forall \underline{\lambda} \left( \left\| \sum_{i=1}^m \lambda_i z_i \right\| = \left( \sum_{i=1}^m |\lambda_i|^p \right) ^{\frac1p} \right)$\\[2mm]
$\psi'_{m,n}(\underline{y},\underline{z}) := \bigwedge_{k=1}^n \left( \exists \underline{\lambda} \left( y_k = \sum_{i=1}^m \lambda_i z_i \right) \right)$\\[2mm]
$\psi''_{n,N}(\underline{x},\underline{y}) := \bigwedge_{k=1}^n \left( \|x_k - y_k\| \leq \frac1{N+1} \land \|y_k\| \leq 1 \right) $\\[2mm]
$\varphi_{n,m,N}(\underline{x}) := \exists \underline{y} \exists \underline {z} \left( \psi_m(\underline{z}) \land \psi'_{m,n}(\underline{y},\underline{z}) \land \psi''_{n,N}(\underline{x},\underline{y}) \right) $\\[2mm]
$\phi_{n,N}(\underline{x}) := \bigvee_{0 \leq m \leq (4nN + 1)^n} \varphi_{n,m,N}(\underline{x})$\\[2mm]
$ A_{n,N} := \forall \underline{x} \left( \left(\bigwedge_{k=1}^n \|x_k\|\leq 1 \right) \to \phi_{n,N} (\underline{x}) \right)$\\[2mm]
\ \\
\hline
\end{tabular}
\caption{A first axiomatization.}\label{tabel-1}
\end{center}
\end{table}
Table~\ref{tabel-1} shows one such axiomatization (in a crude first-order-like language), i.e. the characterization of the space is expressed by the simultaneous validity of all the $A_{n,N}$ sentences. With that in mind, a close examination of the formulas shows that they are a straightforward translation of the condition from before.
\begin{table}[ht!]
\begin{center}
\begin{tabular}{ | c |}
\hline
\ \\
$\psi(m,z) := \forall \lambda^{1(0)(0)} \left( \left\| \sum_{i=1}^m |\lambda(i)|_\mathbb{R} \cdot_X z(i) \right\| =_\mathbb{R} \left( \sum_{i=1}^m |\lambda(i)|^p_\mathbb{R} \right)^{1/p} \right)$\\[2mm]
$\psi'(m,n,y,z,\lambda) := \forall k \preceq_0 (n-1) \left( y(k+1) =_X \sum_{i=1}^m \lambda(i) \cdot_X z(i) \right)$\\[2mm]
$\psi''(n,N,x,y) := \forall k \preceq_0 (n-1) \left( \left\| \widetilde{x(k+1)} - y(k+1) \right\| \leq_\mathbb{R} \frac1N \land \|y(k+1)\| \leq_\mathbb{R} 1 \right)$\\[2mm]
$\varphi(n,m,N,x,y,z,\lambda) := \psi(m,z) \land \psi'(m,n,y,z,\lambda) \land \psi''(n,N,x,y)$\\[2mm]
$B := \forall n^0, N^0 \geq 1 \forall x^{X(0)} \exists y, z \preceq_{X(0)(0)} 1_{X(0)(0)} \exists \lambda^{1(0)(0)(0)} \in [-2,2] \exists m \preceq_0 (4nN+1)^n$\\[2mm]
$\varphi(n,m,N,x,y,z,\lambda)$\\[2mm]
\ \\
\hline
\end{tabular}
\caption{The $\Delta$-axiomatization.}\label{tabel-2}
\end{center}
\end{table}
Table~\ref{tabel-2}, where we have used some of the notations from \cite[Definitions 7.9 and 7.10]{GunKoh16}, shows how one may translate the infinite family of axioms $A_{n,N}$ into the single axiom $B$ which is, like the one in \cite{GunKoh16}, representable as a $\Delta$-sentence. Let us see some details of the translation. Firstly, we remark that the operation $\widetilde{v} := \frac{v}{\max\{\|v\|,1\}}$ spares us from writing the antecedent of $A_{n,N}$. Then we see that by substituting $0$ for all the $\lambda_i$'s in $\psi_m(\underline{z})$, except for one which we set to $1$, we obtain that all the $z_i$'s are of norm one. We have also postulated that all the $y_k$'s are of norm at most $1$. Thus, if we have, as in $\psi'_{m,n}(\underline{y},\underline{z})$, that for a given $k$:
$$y_k = \sum_{i=1}^m \lambda_i z_i,$$
the formula $\psi_m(\underline{z})$ tells us further that:
$$1 \geq \|y_k\| \geq \left\|\sum_{i=1}^m \lambda_i z_i\right\| = \left( \sum_{i=1}^m |\lambda_i|^p \right) ^{\frac1p},$$
from which we get that each such $\lambda_i$ is in the interval $[-1,1]$. These results allow us to correspondingly bound the $y$, the $z$ and the $\lambda$ (which are now properly functionals) in the axiom $B$. Another such bounding comes from the $(4nN+1)^n$ established before (i.e. here it matters that the characterization is quantitative), which helped us eliminate the potentially infinite disjunction in Table~\ref{tabel-1} (where such constraints were not yet relevant) and the unbounded existential quantifier in Table~\ref{tabel-2} (which would have hindered us in presenting the axiom $B$ as a $\Delta$-sentence). As a curiosity, we note that choosing to present $B$ as a single axiom and not as an infinite schema like in Table~\ref{tabel-1}, i.e. taking advantage of the arithmetic already present in the framework, adds a bit of strength to the system, given the fact that we do not work here with any sort of $\omega$-rule.
We denote by $\mathcal{A}^\omega[X,\|\cdot\|,\mathcal{C},L^p]$ the extension of the system $\mathcal{A}^\omega[X,\|\cdot\|,\mathcal{C}]$ by the constant $c_p$ of type $1$, together with the axiom $1_\mathbb{R} \leq_\mathbb{R} c_p$ and the axiom $B$ from above. From the above discussion, the following soundness theorem holds.
\begin{theorem}[{cf. \cite[Propositions 3.5 and 7.12]{GunKoh16}}]\label{sound}
Let $X$ be a Banach space and $p \geq 1$. Denote by $\mathcal{S}^{\omega,X}$ its associated set-theoretic model and let the constant $c_p$ in our extended signature take as a value the canonical representation of the real number $p$. Then $\mathcal{S}^{\omega,X}$ is a model of $\mathcal{A}^\omega[X,\|\cdot\|,\mathcal{C},L^p]$ iff $X$ is isometric to some $L^p(\Omega, \mathcal{F}, \mu)$ space.
\end{theorem}
Analogously to the treatment done in \cite{GunKoh16} for the classes of Banach lattices, we may now state the corresponding metatheorem for the system devised above.
\begin{theorem}[{Logical metatheorem for $L^p(\mu)$ Banach spaces, cf. \cite[Theorems 5.13 and 7.13]{GunKoh16}}]\label{S3thm}
Let $\rho \in \text{{\bf T}}^X$ be an admissible type. Let $B_\forall(x,u)$ be a $\forall$-formula with at most $x,u$ free and $C_\exists(x,v)$ an $\exists$-formula with at most $x,v$ free. Let $\Delta$ be a set of $\Delta$-sentences. Suppose that:
$$\mathcal{A}^\omega[X,\|\cdot\|,\mathcal{C},L^p] + \Delta \vdash \forall x^\rho (\forall u^0 B_\forall(x,u) \to \exists v^0 C_\exists(x,v)).$$
Then one can extract a partial functional $\Phi : S_{\widehat{\rho}} \rightharpoonup \mathbb{N}$, whose restriction to the strongly majorizable functionals of $S_{\widehat{\rho}}$ is a bar-recursively computable functional of $\mathcal{M}^\omega$, such that for all $L^p(\mu)$ Banach spaces $(X,\|\|)$ having the property that any associated set-theoretic model of it satisfies $\Delta$, we have that for all $x \in S_\rho$ and $x^* \in S_{\widehat{\rho}}$ such that $x^* \gtrsim_\rho x$, the following holds:
$$\forall u \leq \Phi(x^*) B_\forall(x,u) \to \exists v \leq \Phi(x^*) C_\exists(x,v).$$
All the additional considerations from Theorem~\ref{S2thm} also apply here.
\end{theorem}
\begin{proof}
This theorem extends Theorem~\ref{S2thm}. The two additional axioms are $\Delta$-axioms, and the constant $c_p$ is majorized (as in \cite[Lemma 17.8]{Koh08}) by $M(b) := \lambda n.j(b2^{n+2},2^{n+1} -1)$, where $j$ is the Cantor pairing function and $b \in \mathbb{N}$ is such that $b \geq p$ (e.g., $b:= \lceil (c_p(0))_\mathbb{Q} \rceil +1$). We note that $\Phi$ depends on $p$ only via this upper bound $b$.
\end{proof}
\section{Application: the derivation of the modulus of uniform convexity}\label{sec:app}
The axiomatization that we have just obtained has, essentially, the form of a comparison principle with respect to the $p$-normed Euclidean spaces. This suggests that it may be particularly application-friendly. Let us see why this is the case. Suppose that we have an existing mathematical theorem regarding $L^p$ spaces. The particularization of the proof to the Euclidean case is likely to be easily derivable in our higher systems of arithmetic (with the possible addition of universal lemmas), since statements about integrals are reduced to statements about sums and powers of real numbers. The second step would be to translate the result along the $\varepsilon$-close approximation of our characterization, a translation involving a sequence of boundings which is likely to leave the original statement intact if it is well-behaved enough. The following application illustrates this general strategy.
Uniform convexity is a fundamental notion in the theory of Banach spaces. As per \cite[Section 6.4]{GunKoh16}, the property can be formalized as:
$$\forall k^0 \exists n^0 \forall x_1, x_2 \preceq_X 1_X \left(\left\|\frac12(x_1 + x_2)\right\| \leq 1-2^{-n} \to \|x_1-x_2\|<2^{-k}\right).$$
This formulation is suitable for bound extraction. We note that, in the above statement, as in the definition of the convergence of a sequence, a bound (for $n$, in this case) is also a witness. Also, with the logical issues now settled, for ease of understanding we shall work with $\varepsilon$-style characterizations. Therefore, following \cite[Section 2.1]{KohLeu12}, we define a {\bf modulus of uniform convexity} for a Banach space to be a function $\eta : (0,2] \to (0,\infty)$ such that for any $\varepsilon \in (0,2]$ and any $x_1$ and $x_2$ with $\|x_1\| \leq 1$, $\|x_2\| \leq 1$ and $\|x_1-x_2\|\geq \varepsilon$, we have that
$$\left\|\frac12(x_1 + x_2)\right\| \leq 1-\eta(\varepsilon).$$
We make the observation that what is usually called ``the'' modulus of uniform convexity of a space is the ``optimal'' such modulus, i.e. for each $\varepsilon > 0$ one takes as $\eta(\varepsilon)$ the greatest admissible value, namely the infimum of the expression $1-\left\|\frac12(x_1 + x_2)\right\|$ over all suitable $x_1$, $x_2$. The goal of this section is to derive a modulus of uniform convexity for $L^p(\mu)$ spaces using only the axiomatization established in the previous section. We will consider, for simplicity, $p \geq 2$, i.e. we add the additional admissible axiom $2 \leq_\mathbb{R} c_p$ to our system. We begin with some results of real analysis. The following lemma and corollary are standard in the literature.
\begin{lemma}
For all $x_1, x_2 \geq 0$, $x_1^p + x_2^p \leq (x_1^2 + x_2^2)^{p/2}$.
\end{lemma}
\begin{proof}
The case $x_2 = 0$ is clear. If $x_2 \neq 0$, dividing by $x_2^p$, we see that it suffices to prove that for all $t \geq 0$, $t^p + 1 \leq (t^2 + 1)^{p/2}$. Consider the function $f: [0,\infty) \to \mathbb{R}$, defined, for all $t$, by $f(t):= (t^2 + 1)^{p/2} - t^p - 1$. Since, for all $t > 0$, $f'(t) = \frac{p}2(t^2 + 1)^{(p/2) - 1} \cdot 2t - pt^{p-1} \geq pt^{p-2} \cdot t - pt^{p-1} = 0$ (using that $(t^2+1)^{(p/2)-1} \geq (t^2)^{(p/2)-1} = t^{p-2}$, as $p \geq 2$) and $f(0) = 0$, we obtain that for all $t$, $f(t) \geq 0$, and hence the conclusion.
\end{proof}
\begin{corollary}\label{coro}
For all $a, b \in \mathbb{R}$, $\left|\frac{a+b}2\right|^p + \left|\frac{a-b}2\right|^p \leq \frac12(|a|^p + |b|^p)$.
\end{corollary}
\begin{proof}
We substitute into the above lemma $x_1 := \left|\frac{a+b}2\right|$ and $x_2 := \left|\frac{a-b}2\right|$. Since $\left|\frac{a+b}2\right|^2 + \left|\frac{a-b}2\right|^2 = \frac12(a^2 + b^2)$, we obtain that:
\begin{align*}
\left|\frac{a+b}2\right|^p + \left|\frac{a-b}2\right|^p &\leq \left( \frac12(a^2 + b^2) \right) ^{p/2} \\
&\leq \frac12 ((a^2)^{p/2} + (b^2)^{p/2}) \\
&= \frac12(|a|^p + |b|^p),
\end{align*}
where the last inequality follows from the convexity of the function $t \mapsto t^{p/2}$ on $[0, \infty)$, for any $p \geq 2$.
\end{proof}
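Both of these standard inequalities are easy to sanity-check numerically. The following snippet (our own illustration, for a fixed real $p \geq 2$) tests them on random samples:

```python
import numpy as np

rng = np.random.default_rng(1)
p = 2.7  # any real p >= 2 works

# Lemma: x1^p + x2^p <= (x1^2 + x2^2)^(p/2) for x1, x2 >= 0
x1, x2 = rng.uniform(0, 10, 10**5), rng.uniform(0, 10, 10**5)
lemma_ok = bool(np.all(x1**p + x2**p <= (x1**2 + x2**2) ** (p / 2) + 1e-9))

# Corollary: |(a+b)/2|^p + |(a-b)/2|^p <= (|a|^p + |b|^p)/2
a, b = rng.uniform(-5, 5, 10**5), rng.uniform(-5, 5, 10**5)
lhs = np.abs((a + b) / 2) ** p + np.abs((a - b) / 2) ** p
rhs = (np.abs(a) ** p + np.abs(b) ** p) / 2
coro_ok = bool(np.all(lhs <= rhs + 1e-9))

print(lemma_ok, coro_ok)
```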
Set, now, for all $a, d \in (0,1)$, $\sigma(a,d):=a-(1-((1-a^p)^{1/p} + d)^p )^{1/p}$.
\begin{lemma}
For all $a,d \in (0,1)$, $\sigma(a,d) > 0$.
\end{lemma}
\begin{proof}
Since $d>0$, we have that $(1-a^p)^{1/p} < (1-a^p)^{1/p} + d$, so
$$1-a^p < ((1-a^p)^{1/p} + d)^p.$$
From that we successively obtain:
$$a^p > 1- ((1-a^p)^{1/p} + d)^p,$$
$$a > (1- ((1-a^p)^{1/p} + d)^p)^{1/p},$$
$$a - (1- ((1-a^p)^{1/p} + d)^p)^{1/p} > 0.$$
\end{proof}
\begin{lemma}\label{lsigma}
For all $a,d \in (0,1)$ and all $\delta \in (0, \sigma(a,d))$, we have that:
$$(1 - (a-\delta)^p)^{1/p} \leq (1-a^p)^{1/p} + d.$$
\end{lemma}
\begin{proof}
Clearly $\sigma(a,d) < a$, so $(a-\delta)^p$ is well-defined. Now, since
$$\delta \leq a-(1-((1-a^p)^{1/p} + d)^p )^{1/p},$$
we obtain, successively, that:
$$ a - \delta \geq (1-((1-a^p)^{1/p} + d)^p )^{1/p},$$
$$ (a- \delta)^p \geq 1-((1-a^p)^{1/p} + d)^p,$$
$$ 1 - (a- \delta)^p \leq ((1-a^p)^{1/p} + d)^p,$$
$$ (1 - (a- \delta)^p)^{1/p} \leq (1-a^p)^{1/p} + d.$$
\end{proof}
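The two lemmas above can likewise be checked numerically. The snippet below (our own illustration) restricts the sampling to the region where all the real powers involved are defined, i.e. where $(1-a^p)^{1/p} + d < 1$, and verifies both that $\sigma(a,d) > 0$ and that the inequality of Lemma~\ref{lsigma} holds:

```python
import numpy as np

def sigma(a, d, p):
    # sigma(a, d) = a - (1 - ((1 - a^p)^{1/p} + d)^p)^{1/p}
    return a - (1 - ((1 - a**p) ** (1 / p) + d) ** p) ** (1 / p)

rng = np.random.default_rng(2)
p = 3.0
ok = True
for _ in range(10**4):
    a, d = rng.uniform(0.05, 0.95, 2)
    if (1 - a**p) ** (1 / p) + d >= 1:   # stay where all powers are real
        continue
    s = sigma(a, d, p)
    ok &= s > 0
    delta = rng.uniform(0, s)
    ok &= (1 - (a - delta) ** p) ** (1 / p) <= (1 - a**p) ** (1 / p) + d + 1e-9
print(bool(ok))
```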
Note that the statements of Corollary~\ref{coro} and Lemma~\ref{lsigma} are universal and therefore it is admissible to add them as supplementary axioms -- denote them by $C_1$ and $C_2$. We are now in a position to state the main theorem of this section.
\begin{theorem}\label{uconv}
Provably in the system $\mathcal{A}^\omega[X,\|\cdot\|,\mathcal{C},L^p] + \{2 \leq_{\mathbb{R}} c_p; C_1; C_2\}$, the function $\eta : (0,2] \to (0,\infty)$, defined, for any $\varepsilon >0$, by $\eta(\varepsilon) := 1- (1-(\frac{\varepsilon}2)^p)^{1/p}$, is a modulus of uniform convexity.
\end{theorem}
\begin{proof}
Let $\varepsilon > 0$. Take $x_1, x_2 \in X$ with $\|x_1\|, \|x_2\| \leq 1$ and $\|x_1-x_2\| \geq \varepsilon$. Let $c \in (0,1)$. Set $\delta := \min\{ \frac{c}2, \frac{\sigma(\frac{\varepsilon}2,\frac{c}2)}2 \}$. Take $y_1, y_2$, $z_1,\ldots,z_m$ as in our axiomatization (e.g., from Table~\ref{tabel-1}) such that for all $k \in \{1,2\}$,
$$\|x_k - y_k\| \leq \delta,\quad \|y_k\|\leq 1.$$
Write now:
$$y_1 = \sum_{i=1}^m \lambda_i z_i,\quad y_2 = \sum_{i=1}^m \mu_i z_i.$$
We have that:
\begin{align*}
\left\|\frac{y_1+y_2}2\right\|^p + \left\|\frac{y_1-y_2}2\right\|^p &= \left\|\sum_{i=1}^m \frac{\lambda_i + \mu_i}2 z_i\right\|^p + \left\|\sum_{i=1}^m \frac{\lambda_i - \mu_i}2 z_i\right\|^p \\
&= \sum_{i=1}^m \left( \left|\frac{\lambda_i+\mu_i}2\right|^p + \left|\frac{\lambda_i-\mu_i}2\right|^p \right) \\
&\leq \frac12 \sum_{i=1}^m (|\lambda_i|^p + |\mu_i|^p) \\
&= \frac12 (\|y_1\|^p + \|y_2\|^p ) \\
&\leq 1.
\end{align*}
Now let $\rho > 0$ be such that $\|y_1-y_2\| \geq \rho$. Then we get that
$$\left\|\frac{y_1+y_2}2\right\| \leq \left(1-\left(\frac{\rho}2\right)^p\right)^{1/p}.$$
Incidentally, what we have shown above is the validity of $\eta$ as a modulus of uniform convexity for the $\mathbb{R}^m_p$ spaces (with $p \geq 2$).
Note that:
$$\varepsilon \leq \|x_1 - x_2\| \leq \|x_1 - y_1\| + \|y_1 - y_2\| + \|y_2 - x_2\| \leq \|y_1 - y_2\| + 2\delta$$
and hence we may take $\rho := \varepsilon - 2\delta > 0$ (since $\delta < \sigma(\frac{\varepsilon}2,\frac{c}2) < \frac{\varepsilon}2$). We have obtained that:
$$\left\|\frac{y_1+y_2}2 \right\| \leq \left(1-\left(\frac{\varepsilon}2 - \delta\right)^p\right)^{1/p}.$$
On the other hand,
$$\|x_1 + x_2\| \leq \|y_1+y_2\| + \|(x_1+x_2)-(y_1+y_2)\| \leq \|y_1+y_2\| + \|x_1 - y_1\| + \|x_2 - y_2\| \leq \|y_1 + y_2\| + 2\delta,$$
so
$$\left\|\frac{x_1 + x_2}2\right\| \leq \left\|\frac{y_1+y_2}2 \right\| + \delta \leq \left(1-\left(\frac{\varepsilon}2 - \delta\right)^p \right)^{1/p} + \delta.$$
Since $0<\delta < \sigma(\frac\varepsilon2,\frac c 2)$, we have that:
$$\left(1-\left(\frac{\varepsilon}2 - \delta\right)^p \right)^{1/p} \leq \left(1-\left(\frac{\varepsilon}2\right)^p \right)^{1/p} + \frac c 2.$$
Also, we know that $\delta \leq \frac c 2$, so we finally obtain that:
$$\left\|\frac{x_1 + x_2}2\right\| \leq \left(1-\left(\frac{\varepsilon}2\right)^p \right)^{1/p} + c.$$
Now, since $c \in (0,1)$ was arbitrarily chosen, we can apply the very definition of the relation $\leq_\mathbb{R}$ in our system (\cite[p. 80]{Koh08}) in order to get that:
$$\left\|\frac{x_1 + x_2}2\right\| \leq \left(1-\left(\frac{\varepsilon}2\right)^p \right)^{1/p},$$
showing, indeed, that $\eta$ is a modulus of uniform convexity.
\end{proof}
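As an informal complement to the proof, one can test numerically its finite-dimensional core, namely that $\eta$ is a modulus of uniform convexity for the spaces $\mathbb{R}^m_p$ with $p \geq 2$. The following sketch, entirely our own, samples random pairs in the unit ball and counts violations:

```python
import numpy as np

rng = np.random.default_rng(3)
p, m = 4.0, 6

def pnorm(v):
    return np.sum(np.abs(v) ** p) ** (1 / p)

def eta(eps):
    return 1 - (1 - (eps / 2) ** p) ** (1 / p)

violations = 0
for _ in range(10**4):
    x1 = rng.standard_normal(m); x1 /= max(1.0, pnorm(x1))
    x2 = rng.standard_normal(m); x2 /= max(1.0, pnorm(x2))
    eps = pnorm(x1 - x2)
    if eps <= 1e-9:
        continue
    # modulus property: ||x1 - x2|| >= eps forces ||(x1+x2)/2|| <= 1 - eta(eps)
    if pnorm((x1 + x2) / 2) > 1 - eta(min(eps, 2.0)) + 1e-9:
        violations += 1
print(violations)
```

The count stays at zero, in accordance with the inequality $\left\|\frac{y_1+y_2}2\right\|^p + \left\|\frac{y_1-y_2}2\right\|^p \leq 1$ established in the proof.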
\section{Acknowledgments}
The author is grateful to Ulrich Kohlenbach and Lauren\c tiu Leu\c stean for the helpful discussions and suggestions regarding the subject matter.
This work was supported by a grant of the Romanian National Authority for Scientific Research, CNCS - UEFISCDI, project number PN-II-ID-PCE-2011-3-0383.
\or $\Psi$%
\or $\Omega$%
\fi}
\AddEnumerateCounter{\greek}{\@greek}{24}
\AddEnumerateCounter{\Greek}{\@Greek}{12}
\makeatother
\def\upshape({\itshape \greek*}){\upshape({\itshape \greek*})}
\def\upshape({\itshape \Greek*}){\upshape({\itshape \Greek*})}
\let\polishlcross=\ifmmode\ell\else\polishlcross\fi
\def\ifmmode\ell\else\polishlcross\fi{\ifmmode\ell\else\polishlcross\fi}
\def\ \text{and}\ {\ \text{and}\ }
\def\quad\text{and}\quad{\quad\text{and}\quad}
\def\qquad\text{and}\qquad{\qquad\text{and}\qquad}
\def\paragraph#1{%
\noindent\textbf{#1.}\enspace}
\let\emptyset=\varnothing
\let\setminus=\smallsetminus
\let\backslash=\smallsetminus
\let\sm=\setminus
\makeatletter
\def\mathpalette\mov@rlay{\mathpalette\mov@rlay}
\def\mov@rlay#1#2{\leavevmode\vtop{ \baselineskip\z@skip \lineskiplimit-\maxdimen
\ialign{\hfil$\m@th#1##$\hfil\cr#2\crcr}}}
\newcommand{\charfusion}[3][\mathord]{
#1{\ifx#1\mathop\vphantom{#2}\fi
\mathpalette\mov@rlay{#2\cr#3}
}
\ifx#1\mathop\expandafter\displaylimits\fi}
\makeatother
\newcommand{\charfusion[\mathbin]{\cup}{\cdot}}{\charfusion[\mathbin]{\cup}{\cdot}}
\newcommand{\charfusion[\mathop]{\bigcup}{\cdot}}{\charfusion[\mathop]{\bigcup}{\cdot}}
\DeclareFontFamily{U} {MnSymbolC}{}
\DeclareSymbolFont{MnSyC} {U} {MnSymbolC}{m}{n}
\DeclareFontShape{U}{MnSymbolC}{m}{n}{
<-6> MnSymbolC5
<6-7> MnSymbolC6
<7-8> MnSymbolC7
<8-9> MnSymbolC8
<9-10> MnSymbolC9
<10-12> MnSymbolC10
<12-> MnSymbolC12}{}
\DeclareMathSymbol{\powerset}{\mathord}{MnSyC}{180}
\usepackage{tikz}
\usetikzlibrary{calc,decorations.pathmorphing}
\pgfdeclarelayer{background}
\pgfdeclarelayer{foreground}
\pgfdeclarelayer{front}
\pgfsetlayers{background,main,foreground,front}
\let\epsilon=\varepsilon
\let\eps=\epsilon
\let\rho=\varrho
\let\theta=\vartheta
\let\wh=\widehat
\def{\mathds E}{{\mathds E}}
\let\E={\mathds E}
\def{\mathds N}{{\mathds N}}
\def{\mathds G}{{\mathds G}}
\def{\mathds Z}{{\mathds Z}}
\def{\mathds P}{{\mathds P}}
\let\Prob={\mathds P}
\def{\mathds R}{{\mathds R}}
\def{\mathds C}{{\mathds C}}
\def{\mathds Q}{{\mathds Q}}
\newcommand{\mathcal{A}}{\mathcal{A}}
\newcommand{\mathcal{B}}{\mathcal{B}}
\newcommand{\mathcal{C}}{\mathcal{C}}
\newcommand{\mathcal{E}}{\mathcal{E}}
\newcommand{\mathcal{F}}{\mathcal{F}}
\newcommand{\mathcal{G}}{\mathcal{G}}
\newcommand{\mathcal{H}}{\mathcal{H}}
\newcommand{\mathcal{I}}{\mathcal{I}}
\newcommand{\mathcal{J}}{\mathcal{J}}
\newcommand{\mathcal{K}}{\mathcal{K}}
\newcommand{\mathcal{L}}{\mathcal{L}}
\newcommand{\mathcal{N}}{\mathcal{N}}
\newcommand{\mathcal{P}}{\mathcal{P}}
\newcommand{\mathcal{Q}}{\mathcal{Q}}
\newcommand{\mathcal{R}}{\mathcal{R}}
\newcommand{\mathcal{S}}{\mathcal{S}}
\newcommand{\mathcal{T}}{\mathcal{T}}
\newcommand{\mathcal{X}}{\mathcal{X}}
\newcommand{\mathscr{A}}{\mathscr{A}}
\newcommand{\mathscr{B}}{\mathscr{B}}
\newcommand{\mathscr{C}}{\mathscr{C}}
\newcommand{\mathscr{G}}{\mathscr{G}}
\newcommand{\mathscr{J}}{\mathscr{J}}
\newcommand{\mathscr{S}}{\mathscr{S}}
\newcommand{\mathscr{P}}{\mathscr{P}}
\newcommand{\mathscr{R}}{\mathscr{R}}
\newcommand{\mathscr{Z}}{\mathscr{Z}}
\def\hat p{\hat p}
\def\mathrm{e}{\mathrm{e}}
\def\longrightarrow{\longrightarrow}
\def\mathrm{RT}{\mathrm{RT}}
\def\mathrm{ex}{\mathrm{ex}}
\newcommand{\mathscr{F}}{\mathscr{F}}
\newcommand{\mathrm{Bin}}{\mathrm{Bin}}
\theoremstyle{plain}
\newtheorem{thm}{Theorem}[section]
\newtheorem{theorem}[thm]{Theorem}
\newtheorem{corollary}[thm]{Corollary}
\newtheorem{prop}[thm]{Proposition}
\newtheorem{claim}[thm]{Claim}
\newtheorem{fact}[thm]{Fact}
\newtheorem{cor}[thm]{Corollary}
\newtheorem{lemma}[thm]{Lemma}
\newtheorem{conclusion}[thm]{Conclusion}
\newtheorem{assertion}[thm]{Assertion}
\theoremstyle{definition}
\newtheorem{rem}[thm]{Remark}
\newtheorem{dfn}[thm]{Definition}
\newtheorem{exmp}[thm]{Example}
\newtheorem{conj}[thm]{Conjecture}
\newtheorem{prob}[thm]{Problem}
\newtheorem{quest}[thm]{Question}
\newtheorem{setup}[thm]{Setup}
\usepackage{accents}
\newcommand{\seq}[1]{\accentset{\rightharpoonup}{#1}}
\let\lra=\longrightarrow
\def\widetilde{w}{\widetilde{w}}
\let\phi=\varphi
\renewcommand*{\thefootnote}{\fnsymbol{footnote}}
\begin{document}
\title{Long twins in random words}
\author{Andrzej Dudek}
\address{Department of Mathematics, Western Michigan University, Kalamazoo, MI, USA}
\email{\tt andrzej.dudek@wmich.edu}
\thanks{The first author was supported in part by Simons Foundation Grant \#522400.}
\author{Jaros\ifmmode\ell\else\polishlcross\fi aw Grytczuk}
\address{Faculty of Mathematics and Information Science, Warsaw University of Technology, Warsaw, Poland}
\email{j.grytczuk@mini.pw.edu.pl}
\thanks{The second author was supported in part by Narodowe Centrum Nauki, grant 2020/37/B/ST1/03298.}
\author{Andrzej Ruci\'nski}
\address{Department of Discrete Mathematics, Adam Mickiewicz University, Pozna\'n, Poland}
\email{\tt rucinski@amu.edu.pl}
\thanks{The third author was supported in part by Narodowe Centrum Nauki, grant 2018/29/B/ST1/00426}
\begin{abstract}\emph{Twins} in a finite word are formed by a pair of identical subwords placed at disjoint sets of positions. We investigate the maximum length of twins in \emph{a random} word over a $k$-letter alphabet. The obtained lower bounds for small values of $k$ significantly improve the best estimates known in the deterministic case.
Bukh and Zhou in 2016 showed that every ternary word of length $n$ contains twins of length at least $0.34n$. Our main result, with a computer-assisted proof, states that in a random ternary word of length $n$, with high probability, one can find twins of length at least $0.41n$. In the general case of alphabets of size $k\geq 3$ we obtain analogous lower bounds of the form $\frac{1.64}{k+1}n$ which are better than the known deterministic bounds for $k\leq 354$. In addition, we present similar results for \emph{multiple} twins in random words. \end{abstract}
\maketitle
\setcounter{footnote}{1}
\section{Introduction}
Looking for twin objects in mathematical structures is a natural trend originating in some ancient geometric dissection puzzles. Amusing as they are, problems of this nature can lead to pretty surprising observations like, for instance, the Bolyai-Gerwien theorem, stating that any two polygons with the same area can be dissected into a finite number of pieces that can be matched into congruent pairs, or the Banach-Tarski paradox, asserting that a similar property holds for any pair of solids in the $3$-dimensional space (regardless of their size) (see, e.g.,~\cite{TomkowiczWagon}).
Analogous properties can be considered for discrete structures as well. For instance, Ulam (see, e.g.,~\cite{ChungEGUY})
proposed to measure the degree of similarity of two graphs in terms of their edge decompositions into pairwise isomorphic subgraphs. If the number of subgraphs in a decomposition is small, then there must be a large pair among them, which leads to the following general question: given a combinatorial structure (or a pair of structures), how large disjoint isomorphic substructures does it (do they) contain?
In this paper we study this question for random words. To put our work in a broader perspective, we briefly discuss a couple of similar problems and results for graphs and permutations (see~\cite{Axenovich} for further examples).
\subsection{Twins in graphs and permutations} The first problem we would like to mention is to find two large isomorphic subgraphs in a given graph. Let $f(m)$ denote the largest integer $k$ such that every graph with $m$ edges contains a pair of twins, that is, two edge-disjoint isomorphic subgraphs with $k$ edges each. The problem was stated independently by Jacobson and Sch\"onheim (see~\cite{ErdosPachPyber}). Currently, the best general result, due to Lee, Loh, and Sudakov~\cite{LeeLohSudakov}, states that $f(m)=\Theta((m\log m)^{2/3})$. In~\cite{AlonCaroKrasikov}, Alon, Caro, and Krasikov proved that every tree with $m$ edges contains a pair of
twins of (asymptotically optimal) size $m/2-o(m)$.
Similar questions were studied for permutations, with the notion of twins based on the relative order of the elements, that is, twins are formed by two subsequences which are order-isomorphic. The length of the longest twins guaranteed in every permutation of order $n$ has been proved to be $\Omega(n^{3/5})$~\cite{BukhR} and $O(n^{2/3})$~\cite{Gawron}. Moreover, for random permutations, with high probability the length of the longest twins is $\Theta(n^{2/3})$~\cite{BukhR, DGR-Electronic}.
There is also a weaker notion of twins in permutations where similarity is claimed whenever only \emph{neighboring} elements preserve the order. In this case one may expect longer twins and, indeed, it was proved in~\cite{DGR-Integers} that the length of the longest \emph{weak} twins guaranteed in every permutation falls between $n/12$ and $n-O(n^{1/3})$. It is conjectured in~\cite{DGR-Integers} that, in fact, it is $n/2-o(n)$.
\subsection{Twins in words: history and background}\label{Section Twins History}
Axenovich, Person, and Puzynina~\cite{APP} were the first to study twins in words. We say that a word $w=w_1w_2\cdots w_n\in A^n$, $|A|<\infty$, contains \emph{twins} of \emph{length $t$} if there exist \emph{disjoint} subsets $\{i_1<i_2<\cdots<i_t\}$ and $\{j_1<j_2<\cdots<j_t\}$ of indices such that $w_{i_1}w_{i_2}\cdots w_{i_t}=w_{j_1}w_{j_2}\cdots w_{j_t}$. Let $t(w)$ denote the maximum length of twins in $w$ and let $t(n,k)$ denote the minimum of $t(w)$ over all words $w$ of length $n$ over a $k$-element alphabet $A$.
They proved in~\cite{APP} that $t(n,2)=n/2-o(n)$. In other words, every finite binary word can be split into a pair of identical subwords, up to an asymptotically negligible remainder. To get this striking result they developed a regularity lemma for words -- an interesting tool with a great potential for further applications (see, e.g.,~\cite{HanKP-S}).
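To make the definition of $t(w)$ concrete, the following brute-force search (an illustrative Python sketch, not the Maple code from the Appendix) computes it for short words by assigning every position to the first twin, the second twin, or neither:

```python
from itertools import product

def longest_twins(w):
    """Compute t(w) for a short word w by exhaustive search: each position
    is assigned to twin A, twin B, or left unused (3^len(w) cases), and we
    keep the longest pair of equal subwords on disjoint position sets."""
    best = 0
    for assignment in product((0, 1, 2), repeat=len(w)):
        a = [x for x, g in zip(w, assignment) if g == 1]
        b = [x for x, g in zip(w, assignment) if g == 2]
        if a == b and len(a) > best:
            best = len(a)
    return best
```

For instance, it returns $2$ on $\mathtt{aabb}$ (twins $\mathtt{ab}$ and $\mathtt{ab}$) and $0$ on a word with pairwise distinct letters; the $3^{|w|}$ running time confines it to very short words.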
At present, it is not known whether the same is true for larger alphabets. By considering the sub-word formed by the two most frequent letters, the above-mentioned estimate $t(n,2)=n/2-o(n)$ from~\cite{APP} implies that $t(n,k)\geq n/k -o(n)$. For $k\geq3$, this was slightly improved by Bukh and Zhou~\cite{BukhZ} to
\begin{equation}\label{Bukh-Zhou t_k(n)1}
t(n,k)\geq 1.02\cdot\frac{n}{k} -o(n)
\end{equation} and to
\begin{equation}\label{Bukh-Zhou t_k(n)2}
t(n,k)\geq \left(\frac{k}{81}\right)^{1/3}\cdot \frac{n}{k}-(k/3)^{1/3},
\end{equation}
the latter being a (much) better estimate for larger $k$. In~\cite{APP} and~\cite{BukhZ} there are also some upper bounds on $t(n,k)$ valid for $k\ge4$. In particular, $t(n,4)\leq0.4932n$.
Twins in \emph{random} words were not much studied before, although the proofs of the upper bounds in~\cite{APP} and~\cite{BukhZ} were obtained via the probabilistic method. The only work addressing this issue directly is a recent paper by He, Huang, Nam, and Thaper \cite{HeHNT}, where it is proved that, with high probability, a random binary word of length $n$ contains twins of size $n/2-\omega\sqrt{n}$, for any function $\omega=\omega(n)$ tending to infinity with $n$.
Moreover, the authors of \cite{HeHNT} formulate a striking conjecture that almost every binary word with even numbers of ones and zeros is a shuffle square. If true, this would imply that, with high probability, a random binary word of an \emph{even} length $n$ contains twins of size at least $n/2-1$ (when the number of ones is odd, this is the best one can get).
Another interesting problem, whose connection to twins in random words was noticed and utilized in \cite{BukhZ}, was raised by Chv\'atal and Sankoff in 1975~\cite{ChvatalSankoff}: what is the expected length of the \emph{longest common subsequence} in a pair of random words of length $n$ over an alphabet with $k$ letters? Denoting this quantity by $f_k(n)$, they observed that, for every fixed $k\geq 2$, the sequence $f_k(n)/n$ is convergent and asked for determination of the limit $\gamma_k:=\lim_{n\to\infty}\frac{f_k(n)}{n}$. The problem appeared to be quite challenging and, despite quite an effort (see, e.g.,~\cite{BaezaNGS}), none of the constants $\gamma_k$ is known exactly at present. One spectacular result was obtained by Kiwi, Loebl, and Matou\v {s}ek~\cite{KiwiLM}, who proved that $\lim_{k\to\infty}\gamma_k\sqrt{k}=2$, confirming thereby a conjecture of Sankoff and Mainville (see~\cite{SankoffKruskal}) and some earlier speculations on a mysterious relation with another celebrated problem concerning the \emph{longest increasing subsequence} in a random permutation (see~\cite{KiwiLM}).
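The longest common subsequence itself is computable by the textbook dynamic program; the sketch below (generic, not tied to any of the cited papers) can be used to estimate $f_k(n)$, and hence $\gamma_k$, by averaging $\mathrm{lcs}(u,v)/n$ over independent random pairs of $k$-ary words:

```python
def lcs(u, v):
    """Length of the longest common subsequence of u and v, via the classic
    dynamic program with a rolling one-dimensional table."""
    dp = [0] * (len(v) + 1)
    for x in u:
        prev = 0  # value of the top-left (diagonal) neighbour
        for j, y in enumerate(v, 1):
            cur = dp[j]
            dp[j] = prev + 1 if x == y else max(dp[j], dp[j - 1])
            prev = cur
    return dp[-1]
```

The rolling table keeps the memory at $O(\min(|u|,|v|))$ while the running time stays $O(|u||v|)$, which is what makes Monte Carlo estimation of $\gamma_k$ practical for moderate $n$.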
\subsection{Our results}
In this paper we study twins in a \emph{random word} $W_k(n)$ obtained by drawing one with probability $k^{-n}$ out of all $k$-ary words of length $n$. Equivalently, one could toss a $k$-sided fair die, independently, $n$ times. Either way, this is an equiprobable space.
Our main result improves the lower bound \eqref{Bukh-Zhou t_k(n)1} for all but very few ternary ($k=3$) words of length $n$.
\begin{thm}\label{thm:0.4} With probability $1-e^{-\Omega(n/(\log n))}$,
$$t(W_3(n))\ge 0.411 n.$$
\end{thm}
The proof involves computer-assisted calculations. It seems plausible that substantially stronger computational devices could bring further improvements of the bound. For computer-skeptical readers we have a `cheap substitute', a bound of only $0.375n$ proved by classical methods (cf.~Example \ref{375} in Subsection \ref{apple}).
For larger alphabets we have the following result. We present it in a form which emphasizes the improvement over the deterministic bound \eqref{Bukh-Zhou t_k(n)1}. We say that an event $\mathcal E_n$ holds \emph{asymptotically almost surely (a.a.s.)} if $\Pr(\mathcal E_n)\to1$ as $n\to\infty$.
\begin{thm}\label{main1} For each $k\ge3$ and large $n$, a.a.s.
\[
t(W_k(n))\ge\frac{1.64k}{k+1}\cdot\frac{n}{k}.
\]
\end{thm}
\noindent This gives an improvement upon the (deterministic) estimate \eqref{Bukh-Zhou t_k(n)1} and, for all $k\leq 354$, upon~\eqref{Bukh-Zhou t_k(n)2} (see Table~\ref{table:r=2}).
\begin{table}
\begin{tabular}{ |>{\columncolor{codelightgray}}c||c | c | c | c | c | c | c | c |}
\hline
\rowcolor{codelightgray}
$k$ & 3 & 4 & 5 & 10 & 50 & 100 & 200 & 400\\ \hline\hline
$\left(\frac{k}{81}\right)^{1/3}$ & 0.333 & 0.367 & 0.395 & 0.498 & 0.851 & 1.073 & 1.352 & \textcolor{red}{1.703} \\ \hline
$\frac{1.64k}{k+1}$ & 1.230 & 1.312 & 1.367 & 1.491 & 1.608 & 1.624 & 1.632 & 1.636 \\ \hline
\end{tabular}
\caption{Comparing bound~\eqref{Bukh-Zhou t_k(n)2} of Bukh and Zhou~\cite{BukhZ} with Theorem~\ref{main1}.}
\label{table:r=2}
\end{table}
The proof uses a tool called the Boosting Lemma (Lemma \ref{k2k+1}). It is stated in terms of a special model of random words which allows for iterative enhancement of the twin length, while adding a new letter to the alphabet. This new model assumes that the numbers of occurrences of letters are fixed in such a way that the model is asymptotically equivalent to $W_k(n)$.
\subsection{Multiple twins}
We also consider a more general notion of \emph{multiple} twins. By \emph{$r$-twins} in a word $w$ we mean $r$ disjoint identical subwords of $w$. Let $t^{(r)}(w)$ be the maximum length of $r$-twins in $w$ and let $t^{(r)}(n,k)$ be the minimum of $t^{(r)}(w)$ over all words $w$ of length $n$ from a $k$-letter alphabet. By the results of~\cite{APP} and~\cite{BukhZ} we know, respectively, that $t^{(r)}(n,k)\sim n/r$ when $r\geq k$, and
\begin{equation}\label{BZr}
t^{(r)}(n,k)\geq C_r\cdot k^{1/\binom{2r-1}{r}}\cdot\frac{n}{k}-O(1),
\end{equation}
for $k>r\geq3$, where $C_r= \left(\frac{1}{2r-1}\right)^{1+1/\binom{2r-1}{r}}$ and the $O(1)$ term depends on $r$ and $k$ only.
Estimate \eqref{BZr} was only mentioned in the concluding remarks in \cite{BukhZ} and the explicit form of $C_r$ was not given. However, based on the proof of inequality \eqref{Bukh-Zhou t_k(n)2} therein, it is quite obvious how it should be computed.
For a random word $W_k(n)$ we get the following estimates which, again, for small $r$ and $k$ yield better lower bounds than \eqref{BZr} (see Table~\ref{table:r,k}). For $r,k\ge2$, let $\Pi_{r,k}=\prod_{j=r+1}^k\frac{j^r}{j^r-1}$.
\begin{thm}\label{main2} For every $k>r\ge3$, a.a.s.
\[
t^{(r)}(W_k(n))\ge\Pi_{r,k}\cdot\frac{n}{k}-o(n).
\]
\end{thm}
\begin{table}
\begin{tabular}{ |>{\columncolor{codelightgray}}c||c | c | c | c | c | c | c |}
\hline
\rowcolor{codelightgray}
$(r,k)$ & $(3,4)$ & $(3,10)$ & $(3,100)$ & $(3,1000)$ & $(3,10^{10})$ & $(4,10^{10})$ & $(4,10^{40})$ \\ \hline\hline
$C_r\cdot k^{1/\binom{2r-1}{r}}$ & 0.196 & 0.214 & 0.270 & 0.340 & \textcolor{red}{1.703} & 0.261 & \textcolor{red}{1.878}\\ \hline
$\Pi_{r,k}$ & 1.016 & 1.036 & 1.041 & 1.041 & 1.041 & 1.003 & 1.003\\ \hline
\end{tabular}
\caption{Comparing bound~\eqref{BZr} of Bukh and Zhou~\cite{BukhZ} with Theorem~\ref{main2}.}
\label{table:r,k}
\end{table}
\subsection{Organization} In the next section we prove Theorem \ref{thm:0.4}. Section \ref{genbou} begins with a short proof of the known bound $t^{(r)}(n,k)\sim n/r$, $r\ge k$, for random words. We do so for self-containment, as the proof in \cite{APP} (for \emph{all} words) is quite involved. The next subsection contains a standard proof of an asymptotic equivalence between $W_k(n)$ and another model of random words. Then comes the crucial Boosting Lemma, while the last subsection of Section \ref{genbou} brings applications of the Boosting Lemma, among them short proofs of Theorems \ref{main1} and \ref{main2}, the former utilizing also Theorem \ref{thm:0.4}.
The last section contains some remarks and open problems, while the Appendix presents the Maple code used for derivation of the data collected in Table~\ref{table:lambda}, as well as a proof of a technical estimate needed in Section \ref{boost}.
\section{Computer assisted bound}
In this section we prove Theorem \ref{thm:0.4}.
\begin{proof}[Proof of Theorem \ref{thm:0.4}]
Fix a positive integer $s$ and split the set of positions of a ternary random word $W_3(n)$ into $m:=n/s$ segments of length~$s$ (we assume $s|n$).
For $1\le j\le m$ and $1\le t \le \lfloor \frac{s}{2} \rfloor$, let $X^j_t$ be the indicator random variable such that $X^j_t=1$ if the $j$-th segment of a random word $W_3(n)$ contains twins of length $t$ but not $t+1$; otherwise $X^j_t=0$. Furthermore, define $X_t:=\sum_{j=1}^m X^j_t$.
First we calculate the expected value of $X_t$. Let $\lambda_t$ count the number of ternary words of length $s$ with twins of length $t$ but not $t+1$. Clearly, $\sum_{t=1}^{\lfloor s/2 \rfloor} \lambda_t = 3^s$. In general, finding (or even tightly approximating) $\lambda_t$ does not seem to be an easy problem. However, for small $s$ one can use a computer program to determine $\lambda_t$. In Table~\ref{table:lambda} we present the values of $\lambda_t$ for all $6\le s\le 14$ and $1\le t\le\lfloor s/2\rfloor$ (see Appendix~\ref{sec:maple} for more details). Since $X_t$ has the binomial distribution $\mathrm{Bin}(m,\lambda_t/3^s)$, its expectation is
\[
\E(X_t) = \frac{\lambda_t}{3^s} m = \frac{\lambda_t}{s3^s} n.
\]
Moreover, a standard application of the Chernoff inequality (see, e.g., ineq.~(2.9) in~\cite{JLR}) together with the union bound yields that, for all $1\le t\le \lfloor \frac{s}{2} \rfloor$, $X_t$ is highly concentrated around its mean, i.e.,
\[
X_t = \frac{\lambda_t}{s3^s}(1+o(1)) n
\]
with probability $1-e^{-\Omega(n/(\log n))}$.
To finish the proof of Theorem~\ref{thm:0.4} we construct twins in $W_3(n)$ as follows. For each $1\le j\le m$, let $A_j$ and $B_j$ form a pair of longest twins in the $j$-th segment. Observe that concatenating the $A_j$'s and the $B_j$'s over all $m$ segments yields, a.a.s., twins $A_1A_2\dots A_m$ and $B_1B_2\dots B_m$ of length
$$\sum_{t=1}^{\lfloor s/2 \rfloor} tX_t=\sum_{t=1}^{\lfloor s/2 \rfloor}\frac{t\lambda_t}{s3^s}(1+o(1))n=\rho_s(1+o(1))n,$$
where
\[
\rho_s:=\sum_{t=1}^{\lfloor s/2 \rfloor} \frac{t\lambda_t}{s3^s}.
\]
From the last column of Table~\ref{table:lambda} we compute that
$$\rho_{14}=\frac{4\cdot24\:\!894+5\cdot1\:\!312\:\!530+6\cdot 3\:\!196\:\!644+7\cdot248\:\!901}{14\cdot 3^{14}}=\frac{27\:\!584\:\!397}{66\:\!961\:\!566}>0.4119.$$
This completes the proof. \end{proof}
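The data in Table~\ref{table:lambda} can be reproduced by exhaustive enumeration; the following Python sketch (a stand-in for the Maple code in Appendix~\ref{sec:maple}, feasible only for small $s$) tallies $\lambda_t$ and evaluates $\rho_s$:

```python
from itertools import product

def longest_twins(w):
    """Exact t(w) by brute force: every position goes to twin A, twin B,
    or neither, and we keep the longest equal pair (small words only)."""
    best = 0
    for g in product((0, 1, 2), repeat=len(w)):
        a = [x for x, c in zip(w, g) if c == 1]
        b = [x for x, c in zip(w, g) if c == 2]
        if a == b and len(a) > best:
            best = len(a)
    return best

def lambda_counts(s, k=3):
    """lambda_t of the proof: how many k-ary words of length s have
    longest twins of length exactly t."""
    counts = {}
    for w in product(range(k), repeat=s):
        t = longest_twins(w)
        counts[t] = counts.get(t, 0) + 1
    return counts

def rho(counts, s, k=3):
    """The density rho_s = sum_t t*lambda_t / (s * k^s)."""
    return sum(t * c for t, c in counts.items()) / (s * k ** s)
```

For $s=6$ this yields $\lambda_1=42$, $\lambda_2=594$, $\lambda_3=93$, matching the first column of Table~\ref{table:lambda}; for $s=14$ the more efficient Maple implementation from the Appendix is needed.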
\begin{table}
\begin{tabular}{ |>{\columncolor{codelightgray}}c||c c c c c c c c c |}
\hline
\rowcolor{codelightgray}
$s$ & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14\\
\hline\hline
$\lambda_1$ & 42 & 6 & \multicolumn{3}{|c}{} & \multicolumn{4}{c|}{\multirow{2}{*}{\textsf{Zero}}} \\ \cline{4-6}
$\lambda_2$ & 594 & 1\:\!086 & 822 & 288 & \multicolumn{1}{c|}{42} & \multicolumn{4}{c|}{} \\ \cline{7-9}
$\lambda_3$ & 93 & 1\:\!095 & 5\:\!118 & 11\:\!010 & 10\:\!806 & 5\:\!292 & 1\:\!350 & 162 & \multicolumn{1}{|c|}{} \\ \cline{2-3} \cline{10-10}
$\lambda_4$ & \multicolumn{2}{c|}{} & 621 & 8\:\!385 & 43\:\!776 & 106\:\!032 & 123\:\!750 & 75\:\!810 & 24\:\!894\\ \cline{4-5}
$\lambda_5$ & \multicolumn{4}{c|}{\multirow{2}{*}{}} & 4\:\!425 & 65\:\!823 & 373\:\!638 & 992\:\!244 & 1\:\!312\:\!530\\ \cline{6-7}
$\lambda_6$ & \multicolumn{6}{c|}{\textsf{Zero}} & 32\:\!703 & 526\:\!107 & 3\:\!196\:\!644\\ \cline{8-9}
$\lambda_7$ & \multicolumn{8}{c|}{} & 248\:\!901 \\
\hline
\end{tabular}
\caption{The exact values of $\lambda_t$ for all $6\le s\le 14$ and $1\le t \le \lfloor s/2 \rfloor$.}
\label{table:lambda}
\end{table}
\section{General bounds}\label{genbou}
\subsection{More twins than letters}\label{ratleastk} It was shown in~\cite{APP} that for all $r\ge k$
$$t^{(r)}(n,k)\ge \frac nr-O\left(\left(\frac{\log\log n}{\log n}\right)^{1/4}\right).$$
For self-containment (and self-contentment) we insignificantly improve this lower bound for \emph{almost} all words. The new bound holds with probability so close to 1 that a passage to the alternative model employed in Subsection \ref{apple} is still possible.
To this end, let us recall the main observation in~\cite{APP}: if there is a partition of a $k$-word~$w$ of length $n$ into $m$ segments $w_1w_2\cdots w_m$ of size $N:=n/m$ (we assume $m|n$) and in each segment each letter occurs at least $\mu$ times, then one can construct $r$-twins in $w$ via a natural interlacing. Namely, Twin 1 consists of $\mu$ elements $a_1$ of $w_1$, $\mu$ elements $a_2$ of $w_2$,\dots, $\mu$ elements $a_k$ of $w_k$, followed by $\mu$ elements $a_{1}$ of $w_{r+1}$, $\mu$ elements $a_2$ of $w_{r+2}$, and so on, for as long as there are still at least $r-1$ segments ahead. Twin 2 follows the same pattern except that it begins with $\mu$ elements $a_1$ of $w_2$, that is, it is shifted by one segment with respect to Twin~1. Then Twin 3 begins with $\mu$ elements $a_1$ of $w_3$, etc.
This way all $r$ twins are disjoint and together they use $k\mu$ elements from each segment, except for the first $k-1$ segments, where the consumption gradually grows from $\mu$ to $(k-1)\mu$, and, in the worst case, when $m\equiv r-1\pmod{r}$, the last $r-1$ segments, where the consumption declines from $(k-1)\mu$ to $\mu$, leaving the last $r-k$ of them completely untouched. The exact worst case count yields that
together twins $T_1,\dots, T_r$ cover at least
\begin{equation}\label{bound}
\left[m-(k-1)-(r-1)\right]k\mu+2\binom k2\mu=(m-r+1)k\mu
\end{equation}
elements of $w$.
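For the record, the count in \eqref{bound} is elementary: the two exceptional groups of segments contribute $2(\mu+2\mu+\cdots+(k-1)\mu)=2\binom k2\mu$ elements, and
\begin{align*}
\left[m-(k-1)-(r-1)\right]k\mu+2\binom k2\mu
&=(m-k-r+2)k\mu+k(k-1)\mu\\
&=\bigl(m-k-r+2+k-1\bigr)k\mu=(m-r+1)k\mu.
\end{align*}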
The right-hand side of \eqref{bound} is, since $m\to\infty$ as $n\to\infty$, very close to $mk\mu$, which, in turn, will be close to $n$, provided
$\mu\sim N/k$. To get these relations, the authors of~\cite{APP} developed a regularity lemma for words. Here we are in a more comfortable situation, as we are dealing with random words. Therefore, a simple application of Chernoff's bound yields the following result.
\begin{lemma}\label{interwining}
For $r\ge k\geq2$, with probability at least $1-O(n^{-k+1/3})$,
$$t^{(r)}(W_k(n))=n/r-O\left( n^{2/3}\sqrt{\log n}\right).$$
\end{lemma}
\proof Split the set of positions of $W_k(n)$ into $m:=n^{1/3}$ consecutive segments of length $N:=n^{2/3}$. For $j=1,\dots,m$ and $i=1,\dots,k$, let $X:=X_i^j$ be the number of elements $a_i$ in the $j$-th segment. Then $\E X=N/k$ and, by Chernoff's bound from~\cite{JLR}, ineq. (2.6), we have
\[
{\mathds P}(X\le(1-\eps)\E X)\le n^{-k},
\]
where $\eps=kn^{-1/3}\sqrt{2\log n}$.
As we only have $km=O(n^{1/3})$ random variables $X_i^j$, with probability at least $1-O(n^{-k+1/3})$, they all satisfy the opposite bound, that is, for all $j=1,\dots,m$ and $i=1,\dots,k$,
$$X_i^j\ge \mu:=(1-\eps)n^{2/3}/k=n^{2/3}/k -O(n^{1/3}\sqrt{\log n}).$$
Upon applying \eqref{bound} with this $\mu$, we conclude that the number of elements uncovered by the $r$-twins is, indeed, at most
$n - (m-r+1)k\mu = O(n^{2/3}\sqrt{\log n})$.
\qed
\medskip
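The interlacing construction from the preceding proof can be sketched as follows (illustrative; for simplicity all twins use a common number of blocks $T$, which is slightly more wasteful than the exact count in \eqref{bound}):

```python
def interlaced_twins(segments, r, mu, k):
    """Build r pairwise disjoint, identical subwords of the concatenation
    of the given segments (lists over the alphabet 0..k-1, each containing
    every letter at least mu times), following the interlacing pattern:
    in block t, twin c takes mu copies of letter b from segment t*r + c + b.
    Requires r >= k. Returns r increasing lists of global positions."""
    m = len(segments)
    T = (m - k + 1) // r                 # common number of blocks per twin
    offsets = [0]
    for seg in segments:
        offsets.append(offsets[-1] + len(seg))
    # positions of each letter inside each segment, in left-to-right order
    occ = [{b: [i for i, x in enumerate(seg) if x == b] for b in range(k)}
           for seg in segments]
    twins = []
    for c in range(r):
        idx = []
        for t in range(T):
            for b in range(k):
                s = t * r + c + b        # the segment supplying letter b
                idx.extend(offsets[s] + p for p in occ[s][b][:mu])
        twins.append(idx)
    return twins
```

Each segment-letter pair is claimed by at most one twin, so the returned index sets are pairwise disjoint and increasing, and each induces the same word $(a_1^{\mu}a_2^{\mu}\cdots a_k^{\mu})^{T}$.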
Notice that for $r=k=2$, the estimate for $t(W_2(n))$ in the lemma is slightly weaker than the one from \cite{HeHNT} mentioned in the introduction. However, this fact does not affect our results, as all we really need in the proofs (see Example \ref{375} and the proof of Theorem \ref{main2}) is an estimate $t^{(r)}(W_k(n))=n/r-o(n)$. Besides, it is not clear how to generalize the result from \cite{HeHNT} to other values of $k$ and $r$.
\subsection{Equivalence of models of random words}\label{equiv}
Guided by analogy to random graphs, we consider two basic models of random words: the binomial and the fixed-letter-count model. Given positive integers $k$ and $n$, an alphabet $A=\{a_1,\dots,a_k\}$, and constants $0\le p_1,\dots,p_k\le 1$, where $p_1+\cdots+ p_k=1$, the \emph{binomial random word}
$W(n;p_1,\dots,p_k)$ is a sequence of independent random variables $(X_1,\dots,X_n)$, where for each $j=1,\dots,n$ and $i=1,\dots,k$ we have ${\mathds P}(X_j=a_i) = p_i$.
Here we are exclusively interested in the special, equiprobable instance $W_k(n):=W(n;1/k,\dots,1/k)$.
As a technical tool rather than an object of genuine interest we also define another model of a random word. Given integers $0\le M_1,\dots, M_k\le n$, where $M_1+\dots +M_k=n$, \emph{the fixed-letter-count random word} $W(n;M_1,\dots,M_k)$ is obtained by taking uniformly at random a permutation (with repetitions) of $n$ elements, among which, $i=1,\dots,k$, there are $M_i$ elements $a_i$. Thus, in the latter model, we restrict ourselves to words with prescribed numbers of each letter and every such word has the same probability $\binom n{M_1,\dots,M_k}^{-1}$ to be chosen.
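For concreteness, sampling from the fixed-letter-count model amounts to a uniformly random shuffle of a fixed multiset of letters (an illustrative sketch):

```python
import random

def fixed_count_word(counts):
    """Sample W(n; M_1,...,M_k): lay down counts[i] copies of letter i and
    apply a uniformly random permutation, giving each arrangement the same
    probability 1/multinomial(n; M_1,...,M_k)."""
    w = [i for i, m in enumerate(counts) for _ in range(m)]
    random.shuffle(w)
    return w
```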
The asymptotic equivalence between the two models goes smoothly one way (from fixed-letter-count to binomial), but to proceed the other way the models lack monotonicity, so we are forced to resort to an analog of Pittel's inequality (see, e.g., ineq.~(1.6) in~\cite{JLR}), which brings, however, some limitations.
Let $Q$ be a property (subset) of words of length $n$ over alphabet~$A$. Although more general statements can be easily proved, we restrict ourselves to the special case of the model $W_k(n)$ and also to limiting probabilities equal to 1 only. The proofs follow those for random graphs (cf. Section 1.4 in~\cite{JLR}, in particular, proofs of Prop. 1.12 and of Pittel's inequality (1.6) therein) and rely on the law of total probability and the fact that the space of $W_k(n)$ conditioned on $M_i$'s being the numbers of occurrences of the elements $a_i\in A$, $i=1,\dots,k$, coincides with that of
$W(n;M_1,\dots,M_k)$.
\begin{prop}\label{M2p}
If for all $M_1,\dots,M_k$ such that $M_1+\cdots+M_k=n$ and $M_i=n/k+O(\sqrt n)$, $i=1,\dots,k$, ${\mathds P}(W(n;M_1,\dots,M_k)\in Q)\to1$ as $n\to\infty$, then ${\mathds P}(W_k(n)\in Q)\to1$ as $n\to\infty$.
\end{prop}
\proof Let $C$ be a large constant and define (for each $n$)
$${\mathcal M}_C=\{(M_1,\dots,M_k): M_1+\cdots+M_k=n\quad\mbox{and}\quad|M_i-n/k|\le C\sqrt n\}.$$
Let $(M^*_1,\dots,M^*_k)$ minimize ${\mathds P}(W(n;M_1,\dots,M_k)\in Q)$ over ${\mathcal M}_C$.
Finally, let $X_i$ be the number of occurrences of letter $a_i$ in $W_k(n)$. Note that each $X_i$ has binomial distribution with expectation $n/k$ and variance less than $n/k$.
Then, by the law of total probability
$${\mathds P}(W_k(n)\in Q)\ge{\mathds P}(W(n;M^*_1,\dots,M^*_k)\in Q){\mathds P}((X_1,\dots,X_k)\in{\mathcal M}_C).$$
By assumption, ${\mathds P}(W(n;M^*_1,\dots,M^*_k)\in Q)\to1$ as $n\to\infty$. By Chebyshev's inequality applied together with the union bound,
$${\mathds P}((X_1,\dots,X_k)\not\in{\mathcal M}_C)\le k\frac{n/k}{C^2n}=C^{-2}.$$
Thus, $\liminf_{n\to\infty}{\mathds P}(W_k(n)\in Q)\ge1-C^{-2}$. As this is true for every $C$, letting $C\to\infty$ yields $\lim_{n\to\infty}{\mathds P}(W_k(n)\in Q)=1$. \qed
\begin{prop}\label{p2M}
If ${\mathds P}(W_k(n)\in Q)=1-o(n^{-k/2})$ as $n\to\infty$, then for all $M_i=n/k+\omega_i$, where $|\omega_i|\le\sqrt{(n\log n)/(3k^2)}$ and $\sum_i\omega_i=0$, ${\mathds P}(W(n;M_1,\dots,M_k)\in Q)\to1$ as $n\to\infty$.
\end{prop}
\proof Fix $M_1,\dots,M_k$ as in the statement of the proposition. By the law of total probability, we obtain
\begin{align*}
{\mathds P}(W_k(n)\not\in Q)&=\sum_{M'_1,\dots,M'_k}{\mathds P}(W(n;M'_1,\dots,M'_k)\not\in Q)\binom n{M_1',\dots,M'_k}\frac1{k^n}\\&\ge {\mathds P}(W(n;M_1,\dots,M_k)\not\in Q)\binom n{M_1,\dots,M_k}\frac1{k^n},
\end{align*}
from which we get
$${\mathds P}(W(n;M_1,\dots,M_k)\not\in Q)\le\frac{k^n}{\binom n{M_1,\dots, M_k}}{\mathds P}(W_k(n)\not\in Q).$$
It remains to estimate the ratio $\frac{k^n}{\binom n{M_1,\dots, M_k}}$. Using Stirling's formula several times, we get
\begin{align*}
\frac{k^n}{\binom n{M_1,\dots, M_k}}&=O( n^{(k-1)/2})\prod_{i=1}^k(1+k\omega_i/n)^{M_i}\\
&\le O( n^{(k-1)/2})\exp\left\{\sum_{i=1}^{k}k\omega_iM_i/n\right\}.
\end{align*}
Now since $M_i=n/k+\omega_i$ and $\sum_i\omega_i=0$, we get
\[
\sum_{i=1}^{k}k\omega_iM_i/n
= \sum_{i=1}^{k}k\omega_i\left(\frac{1}{k}+\frac{\omega_i}{n}\right)
= \sum_{i=1}^{k}\frac{k\omega_i^2}{n}
\le \frac{\log n}{3}
\]
and so ${k^n}/{\binom n{M_1,\dots, M_k}} = O( n^{k/2 - 1/6})$,
which yields that ${\mathds P}(W_k(n)\not\in Q)=o(1)$. \qed
\subsection{Boosting Lemma}\label{boost}
Fix $2\le r\le k$ and observe that if for all $n\ge n_0$ and some $\lambda>0$, we have $t^{(r)}(n,k)\ge\lambda n$, then also, by dropping the least frequent letter, $t^{(r)}(N,k+1)\ge\frac{\lambda k}{k+1}N$, provided $N\ge\tfrac{k+1}k n_0$.
In this section we show that for random words this trivial bound can be improved if one considers the fixed-letter-count model.
To get a similar result for the binomial model $W_k(n)$ one has to switch first to the fixed-letter-count model $W(n;M_1,\dots,M_{k})$ and then back to $W_{k+1}(N)$, the switches facilitated, respectively, by Propositions \ref{p2M} and \ref{M2p}.
The reason for switching is that the fixed-letter-count model can be broken into two phases allowing the enlargement of the twins.
Indeed, let $M_1,\dots,M_{k+1}$ be given such that $\sum_{i=1}^{k+1}M_i=N$. We generate the random word $W(N;M_1,\dots,M_{k+1})$ by first permuting all $n:=M_1+\cdots+M_k$ letters from $A\setminus\{a_{k+1}\}$ (Phase 1). This can be done in precisely $\binom{n}{M_1,\dots,M_k}$ ways.
Then we throw in the $M_{k+1}$ letters $a_{k+1}$ which can go anywhere between the previously distributed letters (Phase 2). This can be done, by the formula for the number of ways to allocate $M_{k+1}$ balls into $n+1$ bins, in $\binom{N}{M_{k+1}}$ ways. Note that the product of these two numbers is, indeed, $\binom N{M_1,\dots,M_{k+1}}$ and that the outcome of Phase 1 is precisely the random word $W(n;M_1,\dots,M_k)$.
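The counting identity behind the two phases can be checked directly (a small numeric sketch; the particular letter counts are arbitrary):

```python
from math import comb

def multinomial(counts):
    """Number of words with the prescribed letter counts."""
    n, result = sum(counts), 1
    for c in counts:
        result *= comb(n, c)
        n -= c
    return result

M = [4, 5, 6, 7]       # letter counts M_1, ..., M_{k+1} with k = 3
N = sum(M)             # total length after Phase 2

phase1 = multinomial(M[:-1])   # permutations of the letters a_1, ..., a_k
phase2 = comb(N, M[-1])        # placements of the M_{k+1} letters a_{k+1}
assert phase1 * phase2 == multinomial(M)
```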
\begin{lemma}[Boosting Lemma]\label{k2k+1}
For $2\le r\le k$ and $n$ sufficiently large, let a partition $n=M_1+\cdots+M_k$ into nonnegative integers be given. If $M_{k+1}=n/k+O(\sqrt n)$ and, for some $\lambda=\lambda(n)>0$, a.a.s.~$t^{(r)}(W(n;M_1,\dots,M_k))\ge\lambda n$, then a.a.s.
$$t^{(r)}(W(N;M_1,\dots,M_k,M_{k+1}))\ge\left(1+\frac1{(k+1)^r-1}\right)\frac{\lambda k}{k+1}N(1-o(1)),$$
where $N=n+M_{k+1}$.
\end{lemma}
\noindent Note that the factor of $\frac{\lambda k}{k+1}N$ comes for free already after Phase 1, so that the actual improvement sits in the parentheses. Also, notice that
$$\frac{n}{N}=\frac{k}{k+1}\left(1+O\left(\frac1{\sqrt n}\right)\right).$$
In the proof of Lemma \ref{k2k+1} we will need a technical estimate on a ratio of binomial coefficients, the proof of which is deferred to Appendix~\ref{sec:fact}.
\begin{fact}\label{fact:ratio}
Let $M = \Theta(N)$ and $\ell^2 = o(N)$. Then,
\[
\frac{\binom{N-\ell}{M-\ell}}{\binom{N}{M}} = \left( \frac{M}{N} \right)^\ell \left(1+O\left(\frac{\ell^2}{N}\right) \right).
\]
\end{fact}
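Fact~\ref{fact:ratio} is easy to probe numerically: for $M=\Theta(N)$ and $\ell^2=o(N)$ the relative error of the approximation is indeed of order $\ell^2/N$ (a sketch with arbitrarily chosen parameters):

```python
from math import comb

def relative_error(N, M, ell):
    """Relative error of the approximation
    binom(N-ell, M-ell) / binom(N, M)  ~  (M/N)**ell."""
    exact = comb(N - ell, M - ell) / comb(N, M)
    return abs(exact / (M / N) ** ell - 1)

# The error should be comparable to ell**2 / N = 0.01 here.
err = relative_error(N=10_000, M=2_500, ell=10)
```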
\begin{proof}[Proof of Lemma~\ref{k2k+1}]
We generate $W(N;M_1,\dots,M_{k+1})$ in two rounds as described prior to the statement of the lemma. Let $W'$ be the outcome of round one, that is, an instance of $W(n;M_1,\dots,M_k)$. Further, let $Q'$ be the event that $t^{(r)}(W(n;M_1,\dots,M_k))\ge\lambda n$ and $Q$ -- the ultimate event that
\[
t^{(r)}(W(N;M_1,\dots,M_k,M_{k+1}))\ge\left(1+\frac1{(k+1)^r-1}\right)\frac{\lambda k}{k+1}N(1-o(1)).\]
By the law of total probability,
\begin{equation}\label{PQ}
{\mathds P}(Q)=\sum_{W'}{\mathds P}(Q|W'){\mathds P}(W')=\sum_{W'\in Q'}{\mathds P}(Q|W'){\mathds P}(W')+o(1).
\end{equation}
We now focus on ${\mathds P}(Q|W')$ with $W'\in Q'$, that is, fixing an instance $W'$ of round one with $t^{(r)}(W')\ge\lambda n$, we are going to thoroughly investigate the outcome of the second round.
Fix $W'=w_1\cdots w_n$ and $r$-twins $T_1,\dots,T_r$ therein of length $|T_1|=\cdots=|T_r|=\lambda n$ (we may assume that $\lambda n$ is an integer). We treat the $n+1$ spaces before, between, and after the letters of $W'$ as bins and the $M_{k+1}$ letters $a_{k+1}$ as balls. Precisely, bin $b_0$ is in front of $w_1$, for $i=1,\dots,n-1$, bin $b_i$ lies between $w_i$ and $w_{i+1}$, and bin $b_n$ is to the right of $w_n$.
We group the bins lying immediately to the right of the elements of the $r$-twins $T_1,\dots,T_r$ into \emph{$r$-tuples of bins} denoted by $R_1,\dots, R_{\lambda n}$. Formally, if $T_j=w_{i_1}^{(j)}\cdots w_{i_{\lambda n}}^{(j)}$, $j=1,\dots,r$, then the $r$-tuple of bins $R_\ell$ consists of the bins $b_{i_\ell}^{(1)},\dots, b_{i_\ell}^{(r)}$, for $\ell=1,\dots,\lambda n$.
\begin{exmp} Let $k=r=3$, $n=27$, and
$$W'=w_1\cdots w_{27}= b\; \colorbox{cyan}{$a$}\; \colorbox{cyan}{$a$}\; c\; \colorbox{Lavender}{$a$}\; c\; b\; \colorbox{cyan}{$b$}\; \colorbox{Lavender}{$a$}\; \colorbox{Lavender}{$b$}\; \colorbox{cyan}{$c$}\; \colorbox{green}{$a$}\; \colorbox{Lavender}{$c$}\; a\; c\; \colorbox{green}{$a$}\; b\; b\; \colorbox{cyan}{$c$}\; \colorbox{green}{$b$}\; a\; \colorbox{green}{$c$}\; \colorbox{Lavender}{$c$}\; \colorbox{green}{$c$}\; b\; a\; b.$$
There are 3-twins of length 5 here, each forming the word $aabcc$, namely, $T_1=w_2w_3w_8w_{11}w_{19}$, $T_2=w_5w_9w_{10}w_{13}w_{23}$, and $T_3=w_{12}w_{16}w_{20}w_{22}w_{24}$. For instance, consider the first triple of bins, $R_1=\{b_2,b_5,b_{12}\}$. If the letter $d$ is inserted into each of these three bins, a longer word
$$b\; \colorbox{cyan}{$a$}\; \underline{d}\; \colorbox{cyan}{$a$}\; c\; \colorbox{Lavender}{$a$}\; \underline{d}\; c\; b\; \colorbox{cyan}{$b$}\; \colorbox{Lavender}{$a$}\; \colorbox{Lavender}{$b$}\; \colorbox{cyan}{$c$}\; \colorbox{green}{$a$}\; \underline{d}\; \colorbox{Lavender}{$c$}\; a\; c\; \colorbox{green}{$a$}\; b\; b\; \colorbox{cyan}{$c$}\; \colorbox{green}{$b$}\; a\; \colorbox{green}{$c$}\; \colorbox{Lavender}{$c$}\; \colorbox{green}{$c$}\; b\; a\; b,$$
is obtained, and the length of the twins, each forming now the word $adabcc$, increases by one.
\end{exmp}
Clearly, if each bin from an $r$-tuple receives $s$ balls (read:~$s$ letters $a_{k+1}$), then the length of the twins can be extended by $s$.
Since we would like to utilize several $r$-tuples of bins simultaneously, we categorize them with respect to the minimum number of balls.
For $1\le s \le \log n$, let $X_s$ be the number of $r$-tuples of bins in which every bin contains at least $s$ balls and some bin contains exactly $s$ balls. We call each such $r$-tuple of bins an \emph{$s$-provider}. For each $\ell=1,\dots,\lambda n$, let $I_\ell$ be the indicator random variable such that $I_\ell=1$ if $R_\ell$ is an $s$-provider and 0 otherwise.
Hence, $X_s=\sum_{\ell=1}^{\lambda n}I_\ell$.
The event $\{I_\ell=1\}$ is the set difference of two events: that each bin in $R_\ell$ has at least $s$ balls and that each bin in $R_\ell$ has at least $s+1$ balls. Thus, setting $M:=M_{k+1}$,
\[
{\mathds P}(I_\ell=1)=\frac{\binom{N-rs}{M-rs}}{\binom{N}{M}}-\frac{\binom{N-rs-r}{M-rs-r}}{\binom{N}{M}}
\]
and Fact~\ref{fact:ratio} yields
\begin{align*}
{\mathds P}(I_\ell=1) &= \left( \frac{M}{N} \right)^{rs} \left(1-\left( \frac{M}{N} \right)^{r} \right) \left(1+O\left(\frac{s^2}{N}\right) \right)\\
&= \left( \frac{1}{k+1} \right)^{rs} \left(1-\left( \frac{1}{k+1} \right)^{r} \right) \left(1+O\left(\frac1{\sqrt n}\right)\right).
\end{align*}
Thus, setting $\kappa=(k+1)^{-r}$, we have $\E X_s=\lambda n(1-\kappa)\kappa^s \left(1+O\left(1/{\sqrt n}\right)\right)$.
Our goal is to show a.a.s.~simultaneous concentration of each $X_s$, $s\le \log n$, near its expectation.
We are going to use Chebyshev's inequality (followed by the union bound over all $s$)
$${\mathds P}(|X_s-\E X_s|\ge\gamma\E X_s)\le\frac{Var X_s}{(\gamma\E X_s)^2}$$
with $\gamma = 1/\log n$.
To facilitate the future use of the union bound, we need to show that $Var X_s=o((\E X_s)^2/\log^3 n)$, which will imply that $\frac{Var X_s}{(\gamma\E X_s)^2} = o(1/\log n)$.
To this end we write
$$Var X_s=\E(X_s(X_s-1))+\E X_s-(\E X_s)^2.$$
Note that
$\{I_{\ell_1}=I_{\ell_2}=1\}=A\setminus(B_1\cup B_2)$, where $A$ is the event that all $2r$ bins in $R_{\ell_1}$ and $R_{\ell_2}$ each contain at least $s$ balls, while $B_i$, $i=1,2$, is the event that all bins in $R_{\ell_i}$ each contain at least $s$ balls and those in $R_{\ell_{3-i}}$ -- each at least $s+1$ balls. Thus
\begin{align*}
{\mathds P}(I_{\ell_1}=I_{\ell_2}=1)&={\mathds P}(A)-{\mathds P}(B_1)-{\mathds P}(B_2)+{\mathds P}(B_1\cap B_2)\\
&=\frac{\binom{N-2rs}{M-2rs}}{\binom{N}{M}}-2\frac{\binom{N-2rs-r}{M-2rs-r}}{\binom{N}{M}}+\frac{\binom{N-2rs-2r}{M-2rs-2r}}{\binom{N}{M}}
\end{align*}
which by Fact~\ref{fact:ratio} is equal to
\[
\left( \kappa^{2s}-2\kappa^{2s+1}+\kappa^{2s+2} \right) \left(1+O\left(\frac1{\sqrt n}\right)\right)
=\left((1-\kappa)\kappa^s\right)^2 \left(1+O\left(\frac1{\sqrt n}\right)\right).
\]
Thus,
\begin{align*}
Var X_s &= \lambda n(\lambda n-1) \left((1-\kappa)\kappa^s\right)^2 \left(1+O\left(\frac1{\sqrt n}\right)\right)\\
&\qquad + \lambda n(1-\kappa)\kappa^s \left(1+O\left(\frac1{\sqrt n}\right)\right)
- \left( \lambda n(1-\kappa)\kappa^s \left(1+O\left(\frac1{\sqrt n}\right)\right)\right)^2\\
&= O(n^{3/2}) = o((\E X_s)^2/\log^3 n),
\end{align*}
with a big margin.
Hence,
$$\sum_{s=1}^{\lfloor\log n\rfloor}{\mathds P}(|X_s-\E X_s|\ge\gamma\E X_s)=o(1),$$
and, in particular, a.a.s., for all $1\le s\le \log n$,
\begin{align*}
X_s &\ge \E X_s\left(1-\frac1{\log n}\right)\\
& = \lambda n(1-\kappa)\kappa^s \left(1+O\left(\frac1{\sqrt n}\right)\right)\left(1-\frac1{\log n}\right)
= \lambda n(1-\kappa)\kappa^s \left(1-O\left(\frac1{\log n}\right)\right).
\end{align*}
As each $X_s$ contributes $s$ towards an enlargement of the twins $T_1,\dots,T_r$, we need to calculate $\sum_{s=1}^{\lfloor\log n\rfloor} s X_s$.
Since for every positive integer $p$
\[
\sum_{s=1}^{p}s\kappa^s = \sum_{s=1}^{\infty}s\kappa^s - \sum_{s=p+1}^{\infty}s\kappa^s
= \frac{\kappa}{(1-\kappa)^2} - \frac{(1+(1-\kappa)p)\kappa^{p+1}}{(1-\kappa)^2},
\]
we get
\begin{align*}
\sum_{s=1}^{\lfloor\log n\rfloor} s X_s
&\ge \lambda n(1-\kappa) \left(1-O\left(\frac1{\log n}\right)\right) \sum_{s=1}^{\lfloor\log n\rfloor} s\kappa^s \\
&= \lambda n(1-\kappa) \left(1-O\left(\frac1{\log n}\right)\right) \left( \frac{\kappa}{(1-\kappa)^2} - \Theta(\kappa^{\log n }) \right)\\
&= \frac{\lambda\kappa}{1-\kappa} n (1-o(1)) = \frac{\lambda}{(k+1)^r-1} n (1-o(1)).
\end{align*}
Hence, still conditioning on $W'$, a.a.s., there are in $W(N;M_1,\dots,M_k,M_{k+1})$ twins of length at least
\[
\lambda n + \frac{\lambda}{(k+1)^r-1} n (1-o(1))
= \left(1+\frac{1}{(k+1)^r-1}\right) \lambda n (1-o(1)).
\]
As $n = \frac{k}{k+1}N(1+o(1))$, this means that ${\mathds P}(Q|W')=1-o(1)$ for every $W'\in Q'$, and, by~\eqref{PQ}, the lemma follows.
\end{proof}
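The geometric-series bookkeeping in the proof, i.e. the closed form for $\sum_{s=1}^{p}s\kappa^s$, can be verified numerically (a quick sketch; the chosen $\kappa$ and $p$ are arbitrary):

```python
def finite_sum(kappa, p):
    """Direct evaluation of sum_{s=1}^{p} s * kappa**s."""
    return sum(s * kappa ** s for s in range(1, p + 1))

def closed_form(kappa, p):
    """kappa/(1-kappa)^2 - (1 + (1-kappa)*p) * kappa^(p+1) / (1-kappa)^2."""
    return (kappa - (1 + (1 - kappa) * p) * kappa ** (p + 1)) / (1 - kappa) ** 2

kappa = (3 + 1) ** -2      # kappa = (k+1)^{-r} with k = 3, r = 2
assert abs(finite_sum(kappa, 25) - closed_form(kappa, 25)) < 1e-12
# As p grows, both tend to kappa / (1 - kappa)**2.
```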
\begin{rem}\label{simpler} By considering only one random variable $X$ counting $r$-tuples of bins with no bin empty, the proof becomes a bit simpler, but the result is slightly weaker: a.a.s.
$$t^{(r)}(W(N;M_1,\dots,M_k,M_{k+1}))\ge\left(1+\frac1{(k+1)^r}\right)\frac{\lambda k}{k+1}N.$$
We get rid of $o(1)$, but lose $-1$ in the denominator, so overall the bound is lower, though for large $k$ the difference is insignificant.
\end{rem}
\subsection{Applications}\label{apple} In this subsection we present applications of the Boosting Lemma (Lemma \ref{k2k+1}), most importantly, in the proofs of Theorems \ref{main1} and \ref{main2}. Each time the scenario is the same: we pick an existing lower bound on the length of twins in $W_{k'}(n)$ (e.g., the bound in Theorem~\ref{thm:0.4} above or any deterministic result from~\cite{APP} or~\cite{BukhZ}), translate it via Proposition~\ref{p2M} to the fixed-letter-count model and then apply (iteratively) the Boosting Lemma to get a bound, still in the fixed-letter-count model, for the targeted value of $k>k'$.
At the end we go back to the binomial model $W_k(n)$ via Proposition \ref{M2p}. For this we need to have our estimate valid for all $M_i$, $i=1,\dots,k$, satisfying the assumptions of Proposition~\ref{M2p}.
This means, however, that also the input bound has to be valid for a corresponding range of $M_i$, $i=1,\dots,k'$.
Let us now analyze what this really means. Fix $k$ and let $M_i=n/k+w_i$, $i=1,\dots,k$, where $|w_i|=O(\sqrt n)$ and $\sum_{i=1}^kw_i=0$. How do these assumptions change when we drop $M_k$? Let $n_{k-1}=M_1+\cdots+M_{k-1}$. Then, for each $i=1,\dots,k-1$,
$$n_{k-1}=\frac{k-1}kn+\sum_{i=1}^{k-1}w_i=(k-1)M_i-(k-1)w_i-w_k,$$
thus
$$M_i=\frac{n_{k-1}}{k-1}+w_i+\frac{w_k}{k-1}=\frac{n_{k-1}}{k-1}+w_i^{(k-1)},$$
where $w_i^{(k-1)}=w_i+\frac{w_k}{k-1}$. Note that $\sum_{i=1}^{k-1}w_i^{(k-1)}=0$ as it should. We may iterate this relation all the way down to $k'$ (where we want to begin the process of applying Lemma \ref{k2k+1}), obtaining for $j=k-1,\dots,k'$ and $i=1,\dots,k'$,
with $n_j=M_1+\cdots+M_{j}$,
\begin{equation}\label{MM}
M_i=\frac{n_j}{j}+w_i^{(j)},
\end{equation}
where
\begin{equation}\label{ww}
w_i^{(j)}=w_i+\frac1{j}\sum_{q=j+1}^kw_q.
\end{equation}
Observe that $w_i^{(k')}=O(\sqrt n)$, so we stay within the range required in Proposition \ref{p2M}. Below we illustrate how to apply the Boosting Lemma.
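The bookkeeping in \eqref{MM} and \eqref{ww} can also be verified mechanically: starting from deviations $w_1,\dots,w_k$ that sum to zero, repeatedly dropping the last letter must reproduce the closed form $w_i^{(j)}=w_i+\frac1j\sum_{q=j+1}^k w_q$ (a sketch with an arbitrary deviation vector):

```python
from fractions import Fraction

def drop_last(w):
    """One step of the iteration: spread the last deviation over the rest."""
    j = len(w) - 1
    return [wi + w[-1] / j for wi in w[:-1]]

w = [Fraction(v) for v in (3, -1, 4, -2, -4)]   # k = 5, deviations sum to 0
assert sum(w) == 0

current = list(w)
for j in range(len(w) - 1, 2, -1):              # iterate down to j = 3
    current = drop_last(current)
    # closed form (ww): w_i^{(j)} = w_i + (1/j) * sum_{q > j} w_q
    tail = sum(w[j:], Fraction(0))
    assert current == [w[i] + tail / j for i in range(j)]
    assert sum(current) == 0
```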
\begin{exmp}\label{375}
Let $r=2$ and $k=3$. We know, either by our Lemma \ref{interwining} or by the result in~\cite{APP}, that a.a.s.~a random binary word $W_2(n)$ contains twins of length $\left(\tfrac12-o(1)\right)n$. We would like to deduce from this, via Lemma \ref{k2k+1}, a lower bound on the length of twins in the random ternary word $W_3(n)$ (so, we take $k'=2$ here).
Fix $M_i=n/3+w_i$, $i=1,2,3$, where $|w_i|=O(\sqrt n)$ and $w_1+w_2+w_3=0$. Suppressing $M_3$, we get, with $n'=n-M_3$, an instance of $W(n';M_1,M_2)$ which satisfies the assumptions of Proposition \ref{p2M}, that is, for $i=1,2$, we have $M_i=n'/2+w_i^{(2)}$, where $w_i^{(2)}=w_i+\frac{w_3}{2}=O(\sqrt n)$. Thus, we may conclude that a.a.s.~$t^{(2)}(W(n';M_1,M_2))\ge\lambda n'$ with $\lambda=\tfrac12-o(1)$. In turn, by Lemma \ref{k2k+1}, a.a.s.
$$t^{(2)}(W(n;M_1,M_2,M_3))\ge\left(1+\frac18\right)\frac{2\lambda}3 n(1-o(1))\ge 0.375n(1-o(1)).$$
Since this is true for all choices of $M_i$ as above, by Proposition \ref{M2p}, we finally get that a.a.s.~$t^{(2)}(W_3(n))\ge0.375n(1-o(1))$. This is much less than the bound in Theorem~\ref{thm:0.4}, so the result has some value only for computer-skeptical readers. On the other hand, it is still better than the bound $t^{(2)}(W_3(n))\ge0.34n(1-o(1))$ in \eqref{Bukh-Zhou t_k(n)1}, though the latter holds for \emph{all} ternary words.
Continuing with this example, let us iterate applications of Lemma \ref{k2k+1} till, say, $k=10$.
Skipping details, we see that the obtained bound is a.a.s.
\begin{align*}
t^{(2)}(W_{10}(n))&\ge\frac98\cdot\frac{16}{15}\cdot\frac{25}{24}\cdot\frac{36}{35}\cdot\frac{49}{48}\cdot\frac{64}{63}\cdot\frac{81}{80}\cdot\frac{100}{99}\cdot\frac{n}{10}(1-o(1))\\
&=\frac{15}{11}\cdot\frac{n}{10}(1-o(1))=1.\overline{36}\cdot \frac{n}{10}(1-o(1)).
\end{align*}
It is perhaps interesting to compare the above bound with $1.337(n/10)$ -- one obtained by the simpler and weaker version of Lemma \ref{k2k+1} mentioned in Remark \ref{simpler}.
\end{exmp}
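The telescoping arithmetic at the end of the example can be confirmed exactly with rational arithmetic (a quick check of the product of the boosting factors $j^2/(j^2-1)$, $j=3,\dots,10$):

```python
from fractions import Fraction

factor = Fraction(1)
for j in range(3, 11):                 # boosting factors 9/8, 16/15, ..., 100/99
    factor *= Fraction(j * j, j * j - 1)

# Telescopes to (10/2) * (3/11) = 15/11 = 1.3636...
assert factor == Fraction(15, 11)
```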
\begin{proof}[Proof of Theorem \ref{main1}]
As a starting point we take our computer-assisted Theorem~\ref{thm:0.4}, so we set $k'=3$. Preparing for the final transition to the binomial model, fix any $M_i=n/k+w_i$, $i=1,\dots,k$, where $|w_i|=O(\sqrt n)$ and $\sum_{i=1}^kw_i=0$. By \eqref{MM} with $j=3$, we then have $M_i=n_3/3+w_i^{(3)}$, where $n_3=M_1+M_2+M_3$ and $w_i^{(3)}=O(\sqrt n)$, $i=1,2,3$, by \eqref{ww}.
By Theorem \ref{thm:0.4}, $t(W_3(n))\ge0.411n$ with probability at least $1-e^{-\Omega(n/(\log n))}=1-o(n^{-k/2})$. Thus, by Proposition \ref{p2M}, a.a.s.~$t(W(n_3;M_1,M_2,M_3))\ge0.411n_3$, and, in turn, by the Boosting Lemma (Lemma \ref{k2k+1}) with $\lambda=0.411$, a.a.s.~$t(W(n_4;M_1,M_2,M_3,M_4))\ge\frac{16}{15}\cdot 1.233\cdot \frac{n_4}{4}$. We iterate this transition (or use induction) until reaching the random word $W(n;M_1,\dots,M_k)$.
Observe that
\[
\prod_{j=4}^k\frac{j^2}{j^2-1}
= \prod_{j=4}^k\frac{j^2}{(j-1)(j+1)}
= \frac{4^2}{3\cdot 5} \cdot \frac{5^2}{4\cdot 6} \cdot \frac{6^2}{5\cdot 7} \cdots \frac{k^2}{(k-1)(k+1)} = \frac{4k}{3(k+1)}.
\]
Hence, we get, a.a.s.,
$$t^{(2)}(W(n;M_1,\dots,M_k))\ge \prod_{j=4}^k\frac{j^2}{j^2-1} \cdot (3\rho_{14}) \cdot\frac{n}{k} (1-o(1))
\ge \frac{1.64k}{k+1} \cdot\frac{n}{k}.$$
(We drop $0.004$ to ``kill'' the error term $1-o(1)$.)
As this is true for all choices of $M_i=n/k+w_i$, we are in position to apply Proposition~\ref{M2p} and conclude that a.a.s.
$t^{(2)}(W_k(n))\ge \frac{1.64k}{k+1} \cdot\frac{n}{k}.$
\end{proof}
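The general telescoping product used in the proof above, $\prod_{j=4}^k j^2/(j^2-1)=\frac{4k}{3(k+1)}$, admits the same exact verification (a small sketch):

```python
from fractions import Fraction

def boosting_product(k):
    """Exact value of prod_{j=4}^{k} j^2 / (j^2 - 1)."""
    p = Fraction(1)
    for j in range(4, k + 1):
        p *= Fraction(j * j, j * j - 1)
    return p

# Compare with the telescoping closed form 4k / (3(k+1)).
for k in range(4, 40):
    assert boosting_product(k) == Fraction(4 * k, 3 * (k + 1))
```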
\begin{proof}[Proof of Theorem \ref{main2}] This proof is very similar except that we now take $k'=r$ and launch off with the bound $t^{(r)}(W_{k'}(n))=n/r-o(n)$ from Lemma~\ref{interwining} (or its deterministic counterpart from~\cite{APP}). Other minor differences are that for $r\ge3$ there is no nice formula for the product $\Pi_{r,k}=\prod_{j=r+1}^k\frac{j^r}{j^r-1}$ and, unlike before, we cannot get rid of the error term $1-o(1)$. We omit the details.
\end{proof}
\section{Concluding remarks}\label{Section Concluding}
Let us conclude the paper with some open questions and suggestions for future studies. The first problem naturally concerns twins in ternary words.
\begin{prob} Does $t(n,3)=n/2-o(n)$?
\end{prob}
Even if the answer is negative, it can still be true that \emph{almost all} ternary words contain twins of such length, that is, a.a.s. $t(W_3(n))=n/2-o(n)$. It is known, as demonstrated in~\cite{BukhZ}, that a.a.s. $t(W_4(n))\leq0.4932n$, which makes a similar statement for random \emph{quaternary} words false. In general, it is not clear how close $t(W_k(n))$ and $t(n,k)$ are to each other.
\begin{prob}
Does $t(W_k(n))=t(n,k)+o(n)$ hold a.a.s.~for all $k\geq 2$?
\end{prob}
\noindent By the result of Axenovich, Person, and Puzynina~\cite{APP} we know that this is the case for $k=2$.
One may also consider more restricted notions of twins in words reflecting their placement in the word. For instance, it may happen that the twins together occupy a connected segment, forming a \emph{shuffle square}, as in the example below:
$$a\;b\;\colorbox{cyan}{c}\;\colorbox{cyan}{a}\;\colorbox{Lavender}{c}\;\colorbox{cyan}{b}\;\colorbox{Lavender}{a}\;\colorbox{Lavender}{b}\;c\;b\;a\;c.$$
As proved by Gu\'{e}gan and Ochem \cite{GueganOchem} (see also \cite{GrytczukKZ}, \cite{OchemRR}), there exist arbitrarily long words over a $7$-letter alphabet containing no shuffle squares. It is not known, however, if the size of the alphabet in this result is optimal and, more generally, what is the maximum length of a shuffle square guaranteed to be present in \emph{every} (long) $k$-letter word, $k=2,\dots,6$.
For random words, we formulate a question which is close in spirit to the Chv\'atal--Sankoff problem discussed in Section \ref{Section Twins History}.
\begin{prob}
What is the expected maximum length of a shuffle square in a random word $W_k(n)$?
\end{prob}
This question sounds particularly intriguing in the light of a recent conjecture by He, Huang, Nam, and Thaper \cite{HeHNT} (mentioned already in Section \ref{Section Twins History}), that almost every binary word (with even numbers of ones and zeros) is a shuffle square. For more on counting shuffle squares and their variants see, e.g., \cite{HenshallRS}. Similar problems can also be considered for \emph{shuffle cubes}, or more generally, for arbitrary \emph{shuffle $r$-powers}, in analogy to general $r$-twins.
An even more restricted version of twins is obtained when each twin in a shuffle square itself occupies a connected segment, as in the example below:
$$a\;b\;\colorbox{cyan}{c}\;\colorbox{cyan}{a}\;\colorbox{cyan}{b}\;\colorbox{Lavender}{c}\;\colorbox{Lavender}{a}\;\colorbox{Lavender}{b}\;c\;b\;a\;c.$$
This basic structure is well-known and widely studied in combinatorics on words under the name of a \emph{square} or a \emph{repetition} (see \cite{Lothaire1}, \cite{RampersadShallit}). By the famous result of Thue~\cite{Thue} we know that there exist ternary words of any length with no squares altogether. In the binary case, it is known that there exist arbitrarily long words avoiding squares of length greater than $2$, as proved by Fraenkel and Simpson~\cite{FraenkelSimpson}. However, not much is known about squares in random words.
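For small lengths such questions can be explored by brute force. A sketch of a routine computing the maximum total length of a square (a factor of the form $uu$) in a word:

```python
def longest_square(word):
    """Maximum total length 2|u| of a contiguous factor uu (0 if square-free)."""
    n = len(word)
    for half in range(n // 2, 0, -1):              # longest candidates first
        for i in range(n - 2 * half + 1):
            if word[i:i + half] == word[i + half:i + 2 * half]:
                return 2 * half
    return 0

assert longest_square("abcacb") == 0     # a short square-free ternary word
assert longest_square("abcabc") == 6     # the whole word is a square
assert longest_square("aab") == 2        # the square aa
```

Averaging `longest_square` over samples of $W_k(n)$ gives a cheap empirical handle on the problem below, at least for modest $n$.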
\begin{prob}
What is the expected maximum length of a square in a random word $W_k(n)$?
\end{prob}
Let us finally remark that many similar ideas are intensively studied in molecular biology in connection with DNA sequence analysis (see~\cite{Lothaire2}, \cite{Waterman}). For instance, in~\cite{ArratiaWaterman} Arratia and Waterman considered the maximum length $M_n$ of common blocks (subsequences consisting of consecutive terms) in two random binary words of length $n$. They proved that a.a.s. $M_n\sim 2\log_{1/p}n$, where $p\in(0,1)$ is the probability of picking one of the two letters in the alphabet. It is worth noticing that, according to the classic Erd\H{o}s--R\'enyi law~\cite{ErdosRenyi}, this value is twice as big as the longest run of heads during $n$ tosses of a coin.
\section*{Abstract}
Herein we develop a dynamical foundation for fractional Brownian
motion.
A clear relation is established between the asymptotic
behaviour of the correlation function and diffusion in a dynamical
system. Then, assuming that scaling is applicable, we establish a
connection between diffusion (either standard or anomalous) and the
dynamical indicator known as the Hurst coefficient. We argue on the
basis of numerical simulations that, although we have been able to
prove scaling only for ``Gaussian'' processes, our conclusions may
well apply to a wider class of systems. On the other hand, systems
exist for which scaling might not hold, and we speculate on the
possible consequences for the various relations derived in the paper
in such cases.
\newpage
\section{Introduction}
A stochastic process is in general characterized by two quantities, the
probability density describing the random nature of the fluctuations
and
the correlation function describing how a fluctuation at a given time
influences subsequent fluctuations. The statistics and the spectrum of
the
random variations are therefore independent quantities and both are
necessary to completely
describe a stochastic process. For example, a Gaussian distribution
may specify the statistics of a process, but the time dependence of the
variance, i.e. the width of the distribution, depends on the correlation
function or spectrum of the process.
Given that in many physical systems the Gaussian
distribution is a straightforward consequence of the Central Limit
Theorem,
investigators are often satisfied with measurements of the correlation
function to describe natural stochastic processes, particularly in
the observation of large-scale phenomena such as those in geophysics.
One such scientist who was most successful in this regard was Hurst,
who was particularly interested in problems of hydrology and the Nile
river~\cite{it1,it2}.
Hurst~\cite{it1,it2,it3} developed a method called rescaled range
analysis, in which the span of a random process is divided by its
variance, resulting in a new variable that depends on the time length of
the data
record in a particularly interesting way. Let us define the time average
of the
random process $\xi(t)$ over the interval of time $\tau$: introducing
$t$, the discrete integer-valued time at which the observations are
recorded, and $\tau$, the total integer-valued
time-interval considered, we have
\begin{equation}
\langle \xi \rangle_\tau = \frac{1}{\tau }\sum^\tau_{t =1} \xi (t)
\label{e1p1}
\end{equation}
Let us also define $X(t)$, the ``accumulated departure'' of $\xi(t)$ from
the mean
$\langle \xi \rangle_\tau$, i.e.
\begin{equation}
X (t,\tau ) \equiv \sum^t_{u = 1} \xi (u) - \langle \xi \rangle_\tau
\label{e1p2}
\end{equation}
so that the span of the process is defined by
\begin{equation}
S(\tau) = \max_{1 \le t \le \tau} X (t,\tau) - \min_{1 \le t \le \tau} X
(t,\tau)
\label{e1p3}
\end{equation}
Finally, let us also consider the
standard expression for the variance
\begin{equation}
V({\tau}) = \left[ \frac{1}{\tau}\sum^\tau_{t = 1} \left( \xi(t) -
\langle \xi \rangle_\tau \right)^2 \right]^\frac{1}{2}
\label{e1p4}
\end{equation}
The rescaled Hurst analysis consists in studying the properties of the
ratio
\begin{equation}
R (\tau) = S (\tau) / V (\tau)
\label{e1p5}
\end{equation}
Hurst found that for the time records of over 850 phenomena $R$
is well described by the following empirical relation
\begin{equation}
R (\tau )= \left( \tau / 2 \right)^{H_H}
\label{e1p6}
\end{equation}
Here we refer to $H_H$ as the Hurst exponent (Hurst used the
symbol $K$ for this exponent).
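The statistic $R(\tau)$ defined by (\ref{e1p1})--(\ref{e1p5}) is straightforward to implement; fitting $\log R$ against $\log\tau$ over many record lengths then estimates $H_H$ (a minimal sketch; the record length and the white-noise input are arbitrary choices, and for white noise the fitted exponent should come out near $1/2$):

```python
import random

def rescaled_range(xi):
    """R(tau) = S(tau) / V(tau) for one record xi(1), ..., xi(tau)."""
    tau = len(xi)
    mean = sum(xi) / tau
    # accumulated departures X(t, tau) of Eq. (2)
    departures, acc = [], 0.0
    for x in xi:
        acc += x - mean
        departures.append(acc)
    span = max(departures) - min(departures)             # S(tau), Eq. (3)
    v = (sum((x - mean) ** 2 for x in xi) / tau) ** 0.5  # V(tau), Eq. (4)
    return span / v

rng = random.Random(1)
record = [rng.gauss(0.0, 1.0) for _ in range(4096)]
R = rescaled_range(record)     # for white noise R grows roughly like tau**0.5
```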
Mandelbrot and co-workers~\cite{it4,it5,it6} modeled this heuristic
result
using the theory of fractional Brownian motion.
They showed that fractional Brownian motion would
provide an explicit statistical realization of (~\ref{e1p6}),
and that the theory would imply a reasonable
interpretation of the parameter $H_H$.
We emphasize that this important result supports the
interpretation of
natural phenomena in terms of fractal functions. However,
this
interpretation does not take into account the fact that fractal functions,
as
important as they may be, are still idealizations of natural phenomena.
These idealizations are not applicable to all time scales. In this
context we
mention the recent efforts to establish the physical foundation
of classical statistical mechanics using the concept of
chaos~\cite{it7,it8,it9}. These efforts rely
on there being a wide time scale separation between the microscopic
and the
macroscopic dynamical regimes.
The microscopic quantities of motion are valid on
a very short time scale, where the
conventional statistical arguments are inapplicable. Then, upon
increasing the time
scale considered, as a result of the action of chaos, the system of
interest
exhibits those statistical properties which are associated with
the
conventional ideas of a canonical distribution and Gaussian statistics.
The main purpose of the present paper is to provide a dynamical basis
for the
Hurst
rescaled range analysis. We show that the theory of Mandelbrot and
co-workers~\cite{it4,it5,it6} focuses on the asymptotic limit of the
dynamical processes considered herein.
There are a number of benefits arising from this change in
perspective. First of all, adding a ``dynamical dimension'' to the
Hurst analysis can profitably be used to illustrate quantitatively the
evolution of a deterministic system away from a regular toward a
totally chaotic process, the latter being, for many purposes,
virtually indistinguishable from a stochastic one.
Thus, the Hurst exponent may provide an independent means of
distinguishing
stochastic from chaotic motion~\cite{it10}.
Following this idea, we are tempted
to
speculate that the distinction between these two kinds of physical
processes
may be
merely a question of time scale.
Secondly, we shall see that such a dynamical analysis, i.e., putting the
fractal
geometry of Mandelbrot into a dynamical setting, has the beneficial
effect
of
rigorously establishing a connection between the Hurst coefficient and
the behaviour of the autocorrelation functions at long times. We
link the coefficient to the existence of positive or negative tails
for the autocorrelation function of the random variable $\xi$,
\begin{equation}
C(t) = \langle \xi (0) \xi (t) \rangle
\label{e1p7}
\end{equation}
In molecular dynamics, dynamical processes characterized by a
long-time
regime with an inverse power-law correlation function
\begin{equation}
C(t) = \pm k/t^\alpha \; \; \; \; \; \; \mbox{for } \;\; t \rightarrow \infty
\label{e1p8}
\end{equation}
are denoted as {\em slow-decay} processes. Since the pioneering work
of Adler and Wainwright~\cite{it11}, where (~\ref{e1p8}) was first
identified, such correlation functions have been the subject of an
intense debate~\cite{it12,it13}.
law
decay can have either a positive or a negative tail. We shall see that
the processes that Mandelbrot denotes as {\em persistent}, $H_H > 1/2$
in (~\ref{e1p6}), are connected by our theoretical analysis to a
positive tail, whereas those termed by him to be {\em antipersistent},
$H_H <
1/2$
in (~\ref{e1p6}), are connected by our theoretical analysis to
the existence of negative tails.
We support the analytic arguments presented in Sections 2 and 3
with computer calculations
done for a substantial number of different dynamical systems.
The numerical results presented in Section 4
support the suggested dynamical approach to
the ``geometrical'' theory of Mandelbrot, but they also suggest that the
asymptotic time regime itself might be explored with different
mathematical
arguments, valid also for non-Gaussian statistics.
\section{Dynamical theory}
Let us now focus our attention on the following equation of motion
\begin{equation}
\dot{x} = \xi
\label{e2p1}
\end{equation}
The Hurst coefficient (~\ref{e1p6}) was defined in terms of a
discrete time process so any dynamical representation such as
(~\ref{e2p1})
ought to be discrete as well. However, to connect the process with the
field of molecular dynamics, we adopt a continuous time
representation. The formal time integration of (~\ref{e2p1}) yields
\begin{equation}
x (t) = \int_0^t \xi (t') d t' + x(0)
\label{e2p2}
\end{equation}
Thus the variable $x(t)$ undergoes a kind of motion with erratic
fluctuations induced by $\xi(t)$. Later on we shall define more precisely
the nature of such
``disordered'' motion. For the time being we limit ourselves to the
conventional language of statistical mechanics. Thus, rather than
focusing on single
trajectories we shall study the mean values of quantities like $x^n (t)$.
We
make
the simplifying assumption that the erratic variable $\xi(t)$ fluctuates
around
zero. Thus $\langle x(t) \rangle = \langle x(0) \rangle$ where the
brackets denotes an average over an ensemble of realisations of
$\xi(t)$, as well as the distribution of initial state of $x(t)$.
We are therefore
obliged to study the mean value of $x^2 (t)$. By averaging $x^2 (t)$
over the fluctuations and initial conditions, we obtain
\begin{equation}
\langle x^2 (t) \rangle = \int_0^t dt' \int_0^t dt'' \langle \xi (t') \xi
(t'')
\rangle
+ 2 \int_0^t dt' \langle \xi (t') x(0) \rangle +
\langle x^2 (0) \rangle
\label{e2p3}
\end{equation}
We make the assumption that the second term on the r.h.s. of
(~\ref{e2p3})
vanishes, hypothesizing no correlation between the
initial
value of $x$ and $\xi$. This assumption certainly holds true when
there exists a large time
scale separation between the dynamics of the {\em fast variable} $\xi $
and that of the
{\em slow variable} $x$. However, this hypothesis must be used
with
caution in applications, which often refer to situations of slow
decay (~\ref{e1p8}), implying an exceptionally extended memory. We
then adjust the limits of the time integrals and obtain
\begin{equation}
\langle x ^2 (t) \rangle =
2\int_0^t dt' \int_0^{t'} dt'' \langle \xi (t') \xi (t'') \rangle +
\langle x^2 (0) \rangle
\label{e2p4}
\end{equation}
Under the assumption that the process $\xi (t)$ is stationary, i.e., its
moments
are independent of the time origin, so that
\begin{equation}
\langle \xi (t') \xi (t'') \rangle =\langle \xi (t'-t'') \xi (0) \rangle
\label{e2p5}
\end{equation}
we derive from (~\ref{e2p4}) the following integro-differential
equation of motion
\begin{equation}
\frac{d}{dt}\langle x^2 (t) \rangle = 2 \int_0^t \langle \xi (\tau) \xi (0)
\rangle d \tau
\label{e2p6}
\end{equation}
Clearly, the $\langle x^2 (t) \rangle$ appearing in the l.h.s.
of (~\ref{e2p6}) must be connected to the long-time diffusional regime
described by
\begin{equation}
\langle x^2 (t) \rangle = K t^{ 2 H_D}.
\label{e2p7}
\end{equation}
It is evident that the physical bounds on the possible values of $H_D$
are given by
\begin{equation}
0 < H_D < 1;
\label{e2p8}
\end{equation}
$H_D = 0$ defines the case of localization, which is the
lower limit of any diffusion process, and $H_D = 1$
obviously refers to the case of many uncorrelated deterministic
trajectories, with $x(t) - x(0)$ linearly proportional to time for each
of them. The bound $H_D < 1$ is a consequence of the fact that a
diffusional process cannot spread faster than a
collection of deterministic trajectories! Finally, the condition $H_D =
1/2$
is obtained for simple Brownian motion, where the variance increases
linearly with time.
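As a quick numerical illustration of the discussion above (a sketch we add, not part of the original analysis; function names and parameter values are ours), the following Python snippet estimates $H_D$ for a walk driven by uncorrelated noise by fitting the growth of $\langle x^2(t)\rangle$ between two times; the fitted exponent should sit near the Brownian value $H_D = 1/2$.

```python
import math
import random

def diffusion_exponent(n_paths=400, n_steps=1000, seed=1):
    """Fit 2*H_D from <x^2(t)> at two times for uncorrelated noise."""
    rng = random.Random(seed)
    t1, t2 = n_steps // 10, n_steps
    s1 = s2 = 0.0
    for _ in range(n_paths):
        x = 0.0
        for step in range(1, n_steps + 1):
            x += rng.gauss(0.0, 1.0)   # xi(t) with delta-like correlation
            if step == t1:
                s1 += x * x
        s2 += x * x
    # slope of log<x^2> versus log t equals 2*H_D
    return math.log(s2 / s1) / math.log(t2 / t1)

print(round(diffusion_exponent() / 2.0, 1))  # close to the Brownian H_D = 0.5
```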
On the other hand, using the definition of the correlation function
given in (\ref{e1p7}), (\ref{e2p6}) becomes
\begin{equation}
2 D(t) \equiv \frac{d}{dt}\langle x^2 (t) \rangle = 2 \int_0^t C(t')
dt'.
\label{e2p9}
\end{equation}
We can now show, using (\ref{e2p9}), that the deviation of $H_D$
from the conventional diffusion prediction $H_D = 0.5$ can be explained
if the correlation function $C(t)$ exhibits a slow decay. The joint use
of (\ref{e1p8}), (\ref{e2p7}) and (\ref{e2p9}) leads to the following
long-time prediction
\begin{equation}
\frac{d^2}{dt^2}\langle x^2 (t) \rangle = 2 H_D ( 2 H_D -1 ) K t^{2
H_D-2} \sim 2 C(t) = \pm \frac{2k}{t^\alpha}
\label{e2p10}
\end{equation}
having assumed that the long-time limit of the correlation function
$C(t)$ is dominated by the inverse power law of (\ref{e1p8}).
The positive (negative) sign
refers to the case of the solid (dashed) line in Fig. 1.
From (\ref{e2p10}) we determine that $H_D$ and $\alpha$
satisfy the relation
\begin{equation}
H_D = 1 - \alpha/2
\label{e2p11}
\end{equation}
obtained by matching the time dependences. It is also clear
from the coefficients in (\ref{e2p10}) that
$H_D > 1/2$ implies a
positive long-time correlation, whereas $H_D < 1/2$ implies a negative
long-time correlation.
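The relation $H_D = 1 - \alpha/2$ can also be checked numerically. The sketch below (ours; it assumes a pure power-law correlation $C(t) = t^{-\alpha}$ with $0 < \alpha < 1$ and illustrative grid parameters) double-integrates $C$ as in (\ref{e2p4}) and fits the log-log slope of the resulting $\langle x^2(t)\rangle$, which should recover $2H_D = 2 - \alpha$.

```python
import math

def msd_exponent(alpha=0.5, n=4000, t_max=100.0):
    """Double-integrate C(t) = t**(-alpha) and fit <x^2(t)> ~ t**(2*H_D)."""
    dt = t_max / n
    t = [dt * (i + 1) for i in range(n)]        # skip t = 0 (integrable singularity)
    c = [ti ** (-alpha) for ti in t]
    inner = [0.0] * n                           # int_0^{t'} C(tau) dtau
    acc = c[0] * dt
    inner[0] = acc
    for i in range(1, n):
        acc += 0.5 * (c[i] + c[i - 1]) * dt
        inner[i] = acc
    msd = [0.0] * n                             # 2 * int_0^t inner dt'
    acc = 2.0 * inner[0] * dt
    msd[0] = acc
    for i in range(1, n):
        acc += (inner[i] + inner[i - 1]) * dt   # trapezoid rule times 2
        msd[i] = acc
    slope = math.log(msd[-1] / msd[n // 2]) / math.log(t[-1] / t[n // 2])
    return slope / 2.0                          # H_D

print(round(msd_exponent(0.5), 2))  # theory: H_D = 1 - alpha/2 = 0.75
```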
Let us now summarize the result of this simple theoretical analysis,
with
an eye to Fig. 1:
\begin{eqnarray*}
\mbox{Case exemplified by the solid line:} \; & 1 > H_D > 1/2; \;\;\;\; 1
> \alpha > 0 \\
\mbox{Case exemplified by the dashed line:} \; & 0 < H_D < 1/2; \;\;\;\; 2
> \alpha > 1
\end{eqnarray*}
Thus, we see that the solid line correlation function of Fig. 1 leads
to a superdiffusive
behavior ranging from the standard diffusion $(H_D = 1/2)$ to the
ballistic
behavior $(H_D = 1)$. The dashed line correlation function of the same
figure leads to a
subdiffusive behavior ranging from the standard diffusion to no motion
at all.
It must be stressed that for superdiffusive correlation functions
(\ref{e2p9}) yields $D(\infty) = \infty$, whereas in the
case of subdiffusive correlation functions, $D(\infty)$ is finite or even
vanishing.
In this latter case we obtain what we define as {\em classical Anderson
localization}. At early times in the diffusional process the mean
square value of $x(t)$ increases. Then when the negative part of the
correlation function $C(t)$ becomes important, the rate of diffusion
decreases. When the negative tail completely compensates for the
positive
part of the relaxation process, the rate of diffusion virtually vanishes.
At this late time stage further diffusion is rigorously prevented and the
diffusing
particle becomes localized. Processes of this kind have recently been
discovered~\cite{it14} and the theory presented here affords a
remarkably
straightforward explanation of them. It is interesting that such
processes
should admit such a simple interpretation.
We point out that in the simple theory presented in this section, the
only significant assumptions made are the stationary property
of (\ref{e2p5})
and the absence of correlation between $x(0)$ and $\xi$. No
assumption
was made
on the nature of the statistics of the stochastic process $x$
except that it has a finite correlation function. In the next
section we shall rederive (\ref{e2p6}) under the more
restrictive assumption that the process $\xi$ and therefore $x$
is Gaussian. We also note that
the standard case $H_D = 0.5$ is compatible with almost any kind of
relaxation process. The only condition to fulfill is that the
correlation function $C(t)$ be square integrable over the time
interval. Thus, if we exclude the case $H_D = 0.5$, we must invoke
an inverse power-law decay to explain the behavior
given by (\ref{e2p7}). This is so because (\ref{e2p7})
implies the existence of ``stationary''
behavior which, in turn, implies the existence of
an inverse power-law decay. A
power-law decay is the only way of ``killing'' the very possibility of
defining a time scale, and this, in turn, is an essential condition to
explain the ``stationary'' nature of the diffusion regime
of (\ref{e1p7}) with $H_D \ne 0.5$.
What do we mean by ``stationary''? Upon increasing the time scale
considered,
the diffusion process is increasingly dependent on the tail of the
correlation function, until it becomes totally dominated by the
inverse power-law
tail. In this regime, since the power-law decay of
the correlation function implies no time scale
is dominant, the diffusion process becomes
stationary. The concept of a stationary diffusion process can be easily
expressed by referring to the Hurst coefficient rather than to
$H_D$. We shall see that $H_H$ is time dependent, and that it usually
reaches a
stationary value for $t\to \infty$. This definition of ``stationary''
diffusion behavior implies that the diffusion coefficient $H_D$ can be
identified with the asymptotic value of the Hurst coefficient $H_H$. We
shall see that this is frequently the case even if we shall only be able
to {\em rigorously} prove it for Gaussian statistics.
\section{A Fokker-Planck treatment}
The next step is to determine the connection of the Hurst coefficient
$H_H$
with the diffusion
coefficient $H_D$. To do this we need to derive suitable expressions for
the quantities appearing in (\ref{e1p5}), for any given dynamical
system. In part we follow the strategy of Mandelbrot:
the idea is that for a white Gaussian
process it is simple to carry out the theoretical analysis; hence it is
only necessary to find, for a given dynamical system, the corresponding
``Gaussian'' approximation in the appropriate ``reduced'' time scale.
The details of the approach are given in reference~\cite{it15}.
For the given dynamical system we want
to study, we replace the dynamical equations with the
following equation of motion for the probability density
\begin{equation}
\frac{\partial}{\partial t}\rho (x,\xi ,\Gamma ;t) = L \rho (x,\xi
,\Gamma ;t).
\label{e3p1}
\end{equation}
Here $\Gamma$ stands for the entire set of variables necessary to
describe
the time evolution of $\xi $. The ``Liouville-like'' operator $L$ is divided
into two parts as follows,
\begin{equation}
L = L_I + L_B
\label{e3p2}
\end{equation}
where the ``interaction'' part, determined by (\ref{e2p1}), is the
phase-space operator
\begin{equation}
L_I = - \xi \frac{\partial}{\partial x},
\label{e3p3}
\end{equation}
and $L_B$ defines the time evolution of the distribution of the variables
$\xi $ and $\Gamma$,
determined by the dynamics of the corresponding set of variables. It is
not necessary to define the explicit form
of the latter operator since it depends in detail on the specific
problem studied.
We now use the Zwanzig projection approach~\cite{it16}, which consists
in
integrating the total distribution $\rho$ over the degrees of freedom
that
are not of interest to us
\begin{equation}
\sigma (x,t) = \int d\xi\, d\Gamma\, \rho (x, \xi , \Gamma ; t)
\label{e3p4}
\end{equation}
We apply this projection approach following the perturbation
prescriptions of~\cite{it14}, assuming that $L_I$ in (\ref{e3p3})
is a ``weak'' perturbation. Using this basic
assumption, after some algebra described in detail in~\cite{it15},
we arrive at the result
\begin{equation}
\frac{\partial}{\partial t}\sigma (x,t) = \Xi (t)
\frac{\partial^2}{\partial x^2}\sigma (x,t),
\label{e3p5}
\end{equation}
where
\begin{equation}
\Xi (t) = \int_0^t C(\tau ) d \tau
\label{e3p6}
\end{equation}
If (\ref{e3p5}) were used to determine the time evolution of
$\langle x^2 (t) \rangle$, i.e., if we multiplied (\ref{e3p5}) by $x^2$
and integrated over $x$, it would reduce to (\ref{e2p9}).
In this sense (\ref{e2p9})
and (\ref{e3p5}) are equivalent. Unfortunately, the two equations are
not truly equivalent,
because (\ref{e2p9}) is obtained without making any
assumption regarding the statistics of $x$,
whereas (\ref{e3p5}) is really the result of a
second-order perturbation treatment, equivalent to assuming that the
statistical process $x$ is Gaussian.
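To make this equivalence concrete, here is a small finite-difference sketch (ours; the grid, time step, and choice $C(\tau) = e^{-\tau}$, hence $\Xi(t) = 1 - e^{-t}$, are illustrative assumptions) that evolves the diffusion equation with a time-dependent coefficient and checks that the second moment grows as $\langle x^2(t)\rangle = \langle x^2(0)\rangle + 2\int_0^t \Xi(t')\,dt'$, in agreement with (\ref{e2p9}).

```python
import math

def second_moment(t_end=2.0, dx=0.2, dt=0.01, half_width=12.0):
    """Evolve d(sigma)/dt = Xi(t) d2(sigma)/dx2 with Xi(t) = 1 - exp(-t),
    i.e. C(tau) = exp(-tau), starting from a unit-variance Gaussian."""
    n = int(2 * half_width / dx) + 1
    xs = [-half_width + i * dx for i in range(n)]
    sig = [math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi) for x in xs]
    t = 0.0
    while t < t_end - 1e-12:
        xi = 1.0 - math.exp(-t)
        lap = [0.0] * n
        for i in range(1, n - 1):
            lap[i] = (sig[i + 1] - 2.0 * sig[i] + sig[i - 1]) / (dx * dx)
        sig = [s + dt * xi * l for s, l in zip(sig, lap)]  # explicit Euler step
        t += dt
    norm = sum(sig) * dx
    return sum(x * x * s for x, s in zip(xs, sig)) * dx / norm

# theory: <x^2>(t) = <x^2>(0) + 2*(t - 1 + exp(-t))
```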
If now we rescale the time as follows
\begin{equation}
t^{*} = t^{2 H_D},
\label{e3p7}
\end{equation}
(\ref{e3p5}) can be written as
\begin{equation}
\frac{\partial }{\partial t^{*}}\sigma (x; t^{*}) = \Phi \left( t^{*}
\right)\frac{\partial ^2}{\partial x^2}\sigma (x; t^{*})
\label{e3p8}
\end{equation}
where, of course,
\begin{equation}
\label{e3p9}
\Phi \left( t^{*} \right)=\frac{1}{2 H_D}{t^{*}}^{\frac{1}{2H_D}-1}
\int_0^{t^{*}} \frac{dt'^{*}}{2 H_D}
{t'^{*}}^{\frac{1}{2H_D}-1} C \left( {t'^{*}}^{\frac{1}{2H_D}} \right)
\end{equation}
After some algebra it can be shown that
\begin{equation}
\label{e3p10}
\lim_{t^{*} \rightarrow \infty} \Phi \left( t^{*} \right) = K
\end{equation}
where $K$ is a finite constant. It is evident that in the
asymptotic time limit (\ref{e3p8}) becomes a standard (time-independent)
Fokker-Planck equation, and the statistical process
defined in terms of the scaled variable
\begin{equation}
\label{e3p11}
y \equiv \frac{x}{\sqrt{K t^{*}}},
\end{equation}
becomes a Gaussian process with the distribution
\begin{equation}
\label{e3p12}
p (y) = \frac{1}{\sqrt{2 \pi }}\exp \left( - \frac{y^2}{2} \right).
\end{equation}
This result means that the random function $x(t)$ is proportional to
${t^{*}}^{1/2}$ or,
in the original time scale, to $t^{H_D}$. Following
Mandelbrot~\cite{it4}, we are then
led to identify $H_D$ with $H_H$.
This is the central theoretical result of this paper. In the
asymptotic limit the dynamical process described by (\ref{e3p1})
becomes
the fractional Brownian motion of Mandelbrot. However, in the short
time
regime the process can be substantially different from such a stochastic
process. This is clearly illustrated by the numerical results of the
following section.
We note that $H_D$ is the coefficient appearing in
(\ref{e2p7}).
However, to be able to identify this coefficient with $H_H$ it is
necessary
that we go through the Fokker-Planck equation of (\ref{e3p5}), and
this implies making a second-order approximation, absent in the
derivation of (\ref{e2p7}).
In the next section we show, using numerical methods, that $H_H$
is very close in value to $H_D$ even in cases when (\ref{e3p5})
should not be valid. This suggests the possibility of establishing the
property $H_H = H_D$ without using the second-order approximation
necessary for the derivation of (\ref{e3p5}).
\section{Numerical simulations}
We now present numerical simulations of a number of different
dynamical systems, to which the Hurst analysis is then applied.
Comments on the
correspondence of $H_D$ and $H_H$ are discussed alongside the
numerical
data. For clarity, the different models are presented within separate
subsections.
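For reproducibility, a minimal implementation of the Hurst rescaled range analysis might look as follows (our sketch, not the code used for the figures; the window sizes and the least-squares fit are arbitrary choices, and the $R/S$ statistic is known to be biased slightly above $0.5$ for short white-noise records).

```python
import math
import random

def rescaled_range(xi):
    """R/S statistic for one window of the increment series xi."""
    n = len(xi)
    m = sum(xi) / n
    z = zmin = zmax = 0.0
    for v in xi:
        z += v - m                       # cumulative deviation from the mean
        zmin, zmax = min(zmin, z), max(zmax, z)
    s = math.sqrt(sum((v - m) ** 2 for v in xi) / n)
    return (zmax - zmin) / s if s > 0.0 else 0.0

def hurst(xi, taus=(16, 32, 64, 128, 256)):
    """H_H from the log-log slope of the window-averaged R/S versus tau."""
    pts = []
    for tau in taus:
        rs = [rescaled_range(xi[i:i + tau])
              for i in range(0, len(xi) - tau + 1, tau)]
        pts.append((math.log(tau), math.log(sum(rs) / len(rs))))
    n = len(pts)
    sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
    return (n * sxy - sx * sy) / (n * sxx - sx * sx)   # least-squares slope

rng = random.Random(7)
white = [rng.gauss(0.0, 1.0) for _ in range(20000)]
print(round(hurst(white), 1))  # near 0.5 for uncorrelated increments
```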
\subsection{Microscopic dynamics described by the Langevin equation}
The easiest way to create a stochastic process $\xi$ with an exponential
correlation function is through the Langevin equation
\begin{equation}
\label{e4p1}
\dot{\xi } = - \gamma \xi (t) + f(t)
\end{equation}
where $f(t)$ is Gaussian white noise. Clearly, the relevant time scale
here is given by $1/\gamma$, and we expect that for times larger than
$1/\gamma$, we should find $H_H=0.5$. This is confirmed (see Fig. 2)
by the
numerical simulations: the expected asymptotic regime is reached at
shorter times for the case of larger $\gamma $.
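A minimal Euler-Maruyama simulation of (\ref{e4p1}) might look as follows (our sketch; the parameter values are illustrative). It can be used to check the basic statistics of $\xi$, e.g., that the stationary variance approaches the fluctuation-dissipation value $D/\gamma$.

```python
import math
import random

def ou_variance(gamma=1.0, d=1.0, dt=0.01, n_steps=200000, seed=3):
    """Euler-Maruyama for d(xi) = -gamma*xi*dt + sqrt(2*D)*dW."""
    rng = random.Random(seed)
    amp = math.sqrt(2.0 * d * dt)
    xi, acc, count = 0.0, 0.0, 0
    burn = n_steps // 10                 # discard the initial transient
    for step in range(n_steps):
        xi += -gamma * xi * dt + amp * rng.gauss(0.0, 1.0)
        if step >= burn:
            acc += xi * xi
            count += 1
    return acc / count

print(round(ou_variance(), 1))  # should approach D/gamma = 1.0
```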
A similar model is given by
the multidimensional Langevin equation
\begin{eqnarray}
\label{e4p2}
\dot {\xi } &=& w (t) \nonumber \\
\dot {w} &=& -\Gamma w (t) - \Omega^2 \xi (t) + F(t)
\end{eqnarray}
In the case $\Gamma \gg \Omega$, (\ref{e4p2}) becomes
indistinguishable from (\ref{e4p1})
with $\gamma = \Omega^2/\Gamma$. Again, as soon as the time scale
considered is larger than the typical time scale of the system, we should
have $H_H=0.5$. However, here a problem arises. It is possible
to consider either the decay time of the correlation function
or the time over which the ``quasi-energy'' in (\ref{e4p2}) loses
correlation:
these two times are in general very different. By inspection of Fig.
3, it is clear that numerical simulations done for different $\Omega^2$
and the
same $\gamma $ show a similar asymptotic behavior in more or less
the same time
region (compare the solid line and the dot-dashed line in Fig. 3),
whereas, when a different $\gamma $ value is considered (compare the
solid and
dashed lines), the asymptotic regime sets in at different times.
This seems to imply that the relevant time scale is the time over which
the energy loses correlation.
Due to the linearity of (\ref{e4p2}), the Gaussian statistics of the
stochastic force is transmitted to $\xi $ and hence to $x$. Thus, the
Central
Limit Theorem is fulfilled and the asymptotic behavior must be
characterized by $H_H = H_D = 0.5$. The parallelism between $R$ and
$\langle x^2 (t) \rangle^{1/2}$ of
Fig. 4 shows that the transient behavior corresponds exactly to the time
it takes for the second moment to reach the stationary condition,
corresponding to standard diffusion. In the case of (\ref{e4p2}) this
transient time is a complex function of the parameters $\Gamma $ and
$\Omega $, which we
do not discuss here. But of course standard diffusion sets
in over time scales which are connected to the time taken by the
quasi-energy to decorrelate, hence justifying the behaviour
shown in Fig. 3.
\subsection{Bistable stochastic motion}
The next system we consider involves
the motion of a stochastic particle in
a bistable potential. The dynamics can be cast in the dimensionless form
\begin{equation}
\label{e4p3}
\dot {\xi } = \xi - \xi^3 + f(t)
\end{equation}
where $f(t)$ is a stochastic Gaussian white force [see (\ref{e4p1})] of
intensity $D$.
The Hurst rescaled range
analysis is applied to the variable $\xi$. In the limit of small
noise intensities $D$ the system (\ref{e4p3}) is characterized by two
different time scales: one is the time scale of the intra-well relaxation
($T_R$); the other is the time scale of the inter-well dynamics
($T_K$,
related to the Kramers rate for the system). For $\tau \gg T_K$
the system should
behave as a dichotomous random process, hence $H_H=0.5$. Also, if
applicable, in the range $T_R \ll \tau \ll T_K$ we expect $H_H=0.5$:
this is because, before
the Hurst analysis is able to single out the dichotomous random process
(the behaviour for $\tau \gg T_K$), the condition $T_R \ll \tau$ implies
that, for the
relevant $\tau$ values, the dynamics is similar to that of
(\ref{e4p1}). This
is confirmed by Fig. 5, where we show the result for $R (\tau)$ obtained
by digitally simulating (\ref{e4p3}). For the simulations we use
$D=0.1$, which
yields a $T_K$ of approximately 30, and a $T_R$, determined by
the coefficients appearing
in the force in (\ref{e4p3}), of order 1. Note that for
larger $\tau$ values $H_H$ approaches the correct value $0.5$, and that
for
$T_R < \tau < T_K$, $H_H$ is
smaller than it is, say, in the region $\tau \sim T_K$. Unfortunately,
the condition $T_R \ll \tau \ll T_K$ is
only weakly satisfied by our choice of the parameter $D$, and it is not
possible
to observe exactly $H_H=0.5$ in the range of intermediate $\tau$.
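A sketch of this kind of simulation might look like the following (ours, not the original code; the noise intensity, hysteresis threshold, and step size are illustrative). It integrates (\ref{e4p3}) with the Euler-Maruyama scheme and counts inter-well transitions, the events responsible for the dichotomous character of $\xi$ at large $\tau$.

```python
import math
import random

def well_hops(d=0.4, dt=0.01, n_steps=200000, seed=5):
    """Count transitions between the wells at xi = -1 and xi = +1 for
    d(xi) = (xi - xi**3) dt + sqrt(2*D) dW (Euler-Maruyama)."""
    rng = random.Random(seed)
    amp = math.sqrt(2.0 * d * dt)
    xi, side, hops = 1.0, 1, 0
    for _ in range(n_steps):
        xi += (xi - xi ** 3) * dt + amp * rng.gauss(0.0, 1.0)
        if side == 1 and xi < -0.8:      # crossed into the left well
            side, hops = -1, hops + 1
        elif side == -1 and xi > 0.8:    # crossed back to the right well
            side, hops = 1, hops + 1
    return hops

print(well_hops())
```

For the chosen (illustrative) noise intensity the barrier crossing is frequent; lowering $D$ separates the intra-well and inter-well time scales more sharply, at the cost of far fewer hops per run.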
\subsection{Noisy Lorenz model}
Another model to which we have applied the Hurst
rescaled range analysis is the Lorenz
model perturbed by Gaussian white noise~\cite{it17}. The
perturbed Lorenz model is
described by the stochastic differential equations
\begin{eqnarray}
\label{e4p4}
\dot\xi & = & \sigma \left(y - \xi\right) + f (t) \nonumber \\
\dot{y} & = & r \xi - y - \xi z \\
\dot{z} & = & - b z + \xi y \nonumber
\end{eqnarray}
where $f(t)$ is a Gaussian white process with intensity $D$. We set
$\sigma = 10$, $b=8/3$ and $r=126.5$, a set of parameters for which
the Lorenz model is known to be periodic, as long as $D=0$. The chosen
value of the control parameter $r=126.5$ is roughly in the middle of a
small
periodic island within an $r$ region for which the Lorenz model
displays
chaos. The addition of a small stochastic force (i.e., $D \ne 0$) ``kicks'' the
system out of the periodic region, and leads to the observation of
chaos~\cite{it17}.
The transition between periodic and chaotic motion as a function of the
noise
positive Lyapunov exponent,
is observed for $D > 10^{-5}-10^{-4}$.
Applying the Hurst rescaled range
analysis to the variable $\xi $ of the Lorenz system, we
expect that $H_H$ will vary from around zero (the value for periodic
motion) to
0.5 (diffusive dynamics) for increasing noise. Also, the $\tau$ values for
which the asymptotic behaviour is observed should become smaller as
$D$ is
increased. The result of numerical simulations of (\ref{e4p4}) is shown
in
Fig. 6. Note that in Fig. 6 we have plotted $R(\tau)/\tau^{1/2}$
versus $\tau$, to more clearly show
the asymptotic behaviour. For all curves, $R(\tau)$ increases for
small $\tau$, up to $\tau \sim 1$, which is the period of the noiseless
Lorenz
system
for the chosen parameters. In the case of very small noise intensities
(full and
dotted lines), we have that for larger $\tau$, $R(\tau)$ goes like
$\tau^0$, a
behaviour
typical of periodic motion. When we increase the noise intensity
(small-dashes line), after an initial decrease, $R(\tau)$ increases: it
eventually
leads to $H_H=0.5$ for still larger $\tau$ values, but on the $\tau$
range shown the
asymptotic behaviour is not yet established. For this $D$ value,
we remark, the Lorenz model is only very weakly chaotic. Finally,
for even larger noise intensities (large dashes line) the departure of
$R(\tau)$ from the noiseless curve takes place at yet smaller $\tau$
values, and
the
expected asymptotic behaviour ($H_H=0.5$, represented in the figure by a
horizontal line) for $R(\tau)$ is clearly identified.
\subsection{A two-dimensional potential}
We next study the deterministic motion of a particle in the
two-dimensional potential
\begin{equation}
\label{e4p5}
V(\xi ,y) = \cos \left(\xi +y \sqrt{3} \right) + \cos \left(\xi -y
\sqrt{3} \right) + \cos \left( 2 y / \sqrt{3} \right)
\end{equation}
which defines an infinite lattice of triangular symmetry (see~\cite{it18}
for more
details). The equations of motion we have integrated have the form
\begin{eqnarray}
\label{e4p6}
\dot\xi & = & v \nonumber \\
\dot{v} & = & -\frac{\partial V (\xi ,y)}{\partial \xi } \nonumber \\
\dot{y} & = & w \\
\dot{w} & = & -\frac{\partial V (\xi ,y)}{\partial y } \nonumber
\end{eqnarray}
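The equations of motion (\ref{e4p6}) can be integrated, for instance, with a velocity-Verlet scheme. The sketch below (ours; the initial conditions and step size are arbitrary) computes the forces from (\ref{e4p5}) and verifies energy conservation, a basic sanity check for any deterministic simulation of this kind.

```python
import math

S3 = math.sqrt(3.0)

def potential(x, y):
    return (math.cos(x + S3 * y) + math.cos(x - S3 * y)
            + math.cos(2.0 * y / S3))

def force(x, y):
    """Minus the gradient of the triangular-lattice potential."""
    a, b, c = x + S3 * y, x - S3 * y, 2.0 * y / S3
    fx = math.sin(a) + math.sin(b)
    fy = S3 * math.sin(a) - S3 * math.sin(b) + (2.0 / S3) * math.sin(c)
    return fx, fy

def integrate(x=0.1, y=0.2, v=0.3, w=0.4, dt=0.005, n_steps=5000):
    """Velocity-Verlet integration; returns (initial, final) total energy."""
    fx, fy = force(x, y)
    e0 = 0.5 * (v * v + w * w) + potential(x, y)
    for _ in range(n_steps):
        v += 0.5 * dt * fx
        w += 0.5 * dt * fy
        x += dt * v
        y += dt * w
        fx, fy = force(x, y)
        v += 0.5 * dt * fx
        w += 0.5 * dt * fy
    e1 = 0.5 * (v * v + w * w) + potential(x, y)
    return e0, e1
```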
It is known that when the energy of the system is
sufficiently small the
particle moves around the bottom of a triangular
cell, only occasionally wandering from cell to cell. As a consequence, the
self-diffusion coefficient shows a peculiar behaviour as the energy is
changed.
It is clear that in principle this motion could lead to anomalous
diffusion: in particular, as the energy is decreased the "periodic" motion
within each cell should become more and more dominant in the
dynamics of
the particle. Obviously, the "random" diffusion from cell to cell is still
in place, hence we only expect some weak departure from $H_H=0.5$ as
the
mechanical energy is decreased. By inspection (see Fig. 2 in~\cite{it18})
it is
clear that the autocorrelation function of the velocity $v$ has a visible
negative tail (which is not at all surprising, remembering the
importance
of the periodic motion within each cell). We then expect that as the
energy is decreased $H_H$ should take on values smaller than 0.5. This
is
confirmed by the Hurst rescaled range
analysis applied to the variable $ \xi $, and shown in Fig. 7.
As is clear from the discussion in the previous section, the fundamental
question is whether the coefficient $H_H$ should be
related to $H_D$ in situations of anomalous diffusion. We compare the
quantity $R(\tau)$ (Hurst analysis) and
$\langle v^2(\tau) \rangle^{1/2}$ (see Section 2) in Fig. 8: we used an
energy value equal to $-0.90$,
for which the dynamics is supposedly anomalous ($H_H = 0.43$). The
clear
parallelism between the two curves at large times establishes that the
diffusion is indeed anomalous as suggested by the Hurst analysis.
\subsection{Standard map}
We now present the results obtained in a discrete model, i.e. for the
standard map
\begin{eqnarray}
\label{e4p7}
x_{t+1} &=& x_t + \frac{K}{2 \pi} \sin \theta_t \nonumber \\
\theta_{t+1} &=& \theta_t + x_{t+1} \;\;\;\; (\mbox{mod} \;\;\; 2 \pi )
\end{eqnarray}
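A minimal iteration of (\ref{e4p7}) over an ensemble of random initial phases might look as follows (our sketch; the ensemble size, iteration count, and the choice $K = 10.053$ are illustrative). It probes the growth of $\langle (x_N - x_0)^2\rangle$ at two times, which should be close to linear in the standard-diffusion regime.

```python
import math
import random

def standard_map_msd(k=10.053, n_steps=1000, n_orbits=300, seed=11):
    """<(x_N - x_0)^2> of the standard map at an early and a late time,
    averaged over an ensemble of random initial phases theta_0."""
    rng = random.Random(seed)
    two_pi = 2.0 * math.pi
    t_early = n_steps // 10
    msd_early = msd_late = 0.0
    for _ in range(n_orbits):
        x, theta = 0.0, rng.uniform(0.0, two_pi)
        for t in range(1, n_steps + 1):
            x += (k / two_pi) * math.sin(theta)
            theta = (theta + x) % two_pi
            if t == t_early:
                msd_early += x * x
        msd_late += x * x
    return msd_early / n_orbits, msd_late / n_orbits

early, late = standard_map_msd()
# in the standard-diffusion regime the ratio late/early should track
# the ratio of the two times (here a factor of ten)
```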
The standard map is very convenient for testing our interpretation of the
Hurst rescaled range
analysis: it has been recently shown~\cite{it19} that for
appropriate
appropriate
parameter values within the chaotic regime anomalous diffusion should
arise.
According to~\cite{it19}, the anomalous diffusion is caused by chaotic
orbits
sticking to critical tori encircling accelerator mode islands.
For this reason the correlation function $C(t)$ should have the power-law
dependence of (\ref{e2p10}). We studied the map for the same
values
considered in~\cite{it19}, i.e., for $K=3.86$, 6.4717, 6.9115 and 10.053:
the
dynamical variable we considered for the analysis is the quantity
$\xi_t=x_{t+1}-x_t$. A standard diffusion behaviour is expected for
$K=3.86$ and
10.053, and an anomalous diffusion behaviour for the other two $K$
values.
That this is qualitatively the case is clearly shown in Fig. 9, where we
have plotted $R(\tau)$ normalized to $\tau^{1/2}$ versus
$\tau$ for the different $K$'s: as
expected, the curve is a horizontal line (standard diffusion) for
$K=3.86$
and 10.053. We would now like to understand whether for the
anomalous
diffusion case we have some correspondence between the Hurst
rescaled range analysis and
the numerical work of~\cite{it19}. In the case $K = 6.9115$,
in~\cite{it19}
it is reported that theory and numerical simulations lead to
\begin{equation}
\label{e4p8}
\alpha \sim \frac{2}{3}
\end{equation}
Let us insert (\ref{e4p8}) into (\ref{e2p11}). Adopting the
notation of~\cite{it19}, i.e., $\zeta \equiv 2 H_D$, we obtain
\begin{equation}
\label{e4p9}
\zeta \sim \frac{4}{3} = 1.3333\ldots
\end{equation}
According to~\cite{it19} this prediction fits very well the result of the
numerical calculation on diffusion.
We remark once more that for $K = 3.86$ and 10.053, values for which
the authors of~\cite{it19}
observe standard diffusion, we obtain, with a very high
degree of accuracy, $H_H=0.5$;
and where anomalous diffusion is predicted ($K=6.9115$ and 6.4717),
we obtain a value of $H_H$ significantly different from
$0.5$. Moreover, for $K = 6.9115$ we obtain $\zeta = 1.2330$,
to be compared to $\zeta = 1.3333$ from (\ref{e4p9}).
Table 1 summarizes the situation. Note that the fourth column,
$\zeta_{sim}$,
denotes the result of the numerical simulation~\cite{it5} (the only value
reported there corresponds to $K = 6.9115$) and that the last column,
$\zeta(H_H)$,
reports the values of $\zeta$ corresponding to the value of $H_H$
evaluated
numerically (third column).
\section{Conclusions}
It must be pointed out that the Mandelbrot analysis, leading to $H_H =
H_D$,
is essentially based on the Central Limit Theorem, assumed
to be valid even when anomalous diffusion occurs. The adoption of the
Fokker-Planck treatment of Section 3 leads to the following time
dependence of the $x$-distribution:
\begin{equation}
\label{e5p1}
\sigma (x;t) = \frac{1}{\left( 2 \pi K t^{2H} \right)^{1/2}}\exp \left( -
\frac{x^2}{2 K t^{2H}} \right)
\end{equation}
It is then evident that the moment $\langle x^n(t) \rangle$ rescales in
time as
$t^{nH}$. Since
the Hurst rescaled range
analysis refers to a quantity with the same dimension as $x$, it is
evident that it leads to
\begin{equation}
\label{e5p2}
H_H = H = H_D
\end{equation}
The functional form (\ref{e5p1}) suggests that in general,
after an initial transient, the probability distribution
$\sigma (x;t)$ should perhaps be described by the equation
\begin{equation}
\label{e5p3}
\sigma (x;t) = \frac{1}{t^\beta} F \left( \frac{x}{t^\beta} \right)
\end{equation}
If the rescaling of (\ref{e5p3}) applied, we would indeed have
that in general $H_H=H_D$.
Let us now briefly discuss some possible forms of $F$.
There are three possible conditions:
\begin{description}
\item[(i)] $\beta=1/2$, $F$ is a Gaussian function of its argument. This
is
the standard diffusion process.
\item[(ii)] $\beta \ne 1/2$, $F$ is a Gaussian function of its argument.
This is the
fractional Brownian motion process.
\item[(iii)] $\beta \ne 1/2$, $F$ is not a Gaussian function of its
argument.
Note that this occurs for a L\'evy stable process~\cite{it20}.
\end{description}
It must be pointed out, however, that from a physical point of view it is
hard to imagine a diffusion process with a deterministic origin agreeing
with (\ref{e5p1}), and thus falling under case (ii), even in the case
$H > 0.5$. The reason is that,
as established by the theoretical analysis of Section 2, this
anomalous behavior comes from an anomalously slow correlation
function,
namely the correlation function of (\ref{e1p8}) with $\alpha < 1$.
In this physical situation, there is no hope of realizing (\ref{e5p1})
as an effect of the Central
Limit Theorem: the original process must already be Gaussian! In other
words, if there existed Gaussian statistical processes leading to the slow
decay of $C(t)$, then the anomalous diffusion would be compatible with
the time
rescaling of (\ref{e5p1}). In our opinion, this is the physical nature of
the
fractional Brownian motion of Mandelbrot. It is the long-time
asymptotic
limit of a Gaussian process with an anomalously slow correlation
function.
We think that this situation might occur in statistical mechanics when
the source of the Brownian motion, the statistical process $x$,
refers to a physical condition characterized by a large number of
degrees
of freedom. However, in the last few years, there have been attempts to
build statistical mechanics on chaos, without the
joint action of a very large number of degrees of
freedom~\cite{it7,it8,it9}.
In this
physical situation $x$ is a non-Gaussian statistical process and the
Gaussian nature of diffusion stems from the action of the Central Limit
Theorem. If the process is not Gaussian, but it is fully chaotic, then the
correlation function is exponential or, more generally, characterized by
a well-defined time scale.
In the special case when chaos and ordered motion coexist, however,
the
dynamical behavior of the system becomes much more complex,
and a
correlation function with an inverse
power law might occur. This implies the
breakdown of the time-scale separation between diffusion and
microscopic
dynamics, and the consequent breakdown of the Central Limit Theorem
itself. In this physical condition (\ref{e5p1}) cannot apply.
Are there processes rescaling according to (iii), without implying the
Gaussian assumption? We think that if an anomalous diffusion exists,
then
it is quite probable that it belongs to the class (iii). We are convinced
that some of the processes examined in Section 4 belong to class (iii). If
the rescaling in (\ref{e5p3}) with $\beta \ne 1/2$ holds true, then
we conclude
immediately that $H_H = \beta$. However, this special condition raises
the
intriguing question of whether or not $H_H = H_D$ in this case. The
dynamical
realization of the diffusion process is expressed by (\ref{e2p1}).
Let us assume that $X$
is the maximum possible value of $\xi$. It is then evident that at
time $N$
the $x$ distribution must be contained between $x_m$ and $- x_m$,
with $x_m = NX$.
Now, let us imagine that there are theoretical reasons to expect that the
$x$ distribution is characterized by long tails with an
inverse power law $1/x^m$. It is
then evident that the rescaling of (\ref{e5p3}) cannot apply to the
whole space.
This might generate a discrepancy between $H_H$ and $H_D$. Let us
assume, for
simplicity, that $x_m = A t$. In such a case we get a rescaling of the
same
kind as (\ref{e5p3}) only for $|x/t| < A$. Thus the moments of the
distribution
rescale with a power law different from that leading to the time
rescaling
of (\ref{e5p3}).
We wonder if a possible discrepancy between the two coefficients might
be
derived using the data already available for the standard
map~\cite{it19}. According
to~\cite{it19}, the distribution rescales as in (\ref{e5p3}), with
$\beta = 3/5$ for
$K = 6.9115$. However, this distribution is truncated at the value $|y| =
|x/t^\beta| = 1$. If we assume that $H_H$ is determined by the rescaling
of (\ref{e5p3}), we obtain
\begin{equation}
\label{e5p4}
2 H_H = 2 \beta = 6/5 = 1.2
\end{equation}
thereby suggesting that the discrepancy between the numerical value
$2 H_H = 1.2330$, obtained in this paper, and the numerical value
$2 H_D = 1.3333$,
determined numerically by the authors of~\cite{it19}, might be due not
to the
inaccuracy of the direct calculation of $H_D$ in~\cite{it19} (notice that
the
calculation of $H_H$ is expected to be more accurate than that of $H_D$),
but
might rather depend on the breakdown of the condition $H_H = H_D$,
due
to the non-Gaussian character of the distribution $F$ of (\ref{e5p3}).
We shall address these questions in further investigations. For the time
being
we must limit ourselves to saying that the Hurst rescaled range
analysis seems to be an
efficient numerical technique for exploring how a dynamical system
approaches
its long-time asymptotic limit or, equivalently, for probing the
short-time
dynamics of that asymptotic idealization referred to by
Mandelbrot~\cite{it4,it5,it6}
as fractional Brownian motion.
\section*{Acknowledgments}
We warmly thank Professor Jaczek Kowalski for many illuminating
discussions
on the possible difference between $H_H$ and $H_D$. This work was
partially supported by the EC under contract n. SC1-CT91-0697 (TSTS).
One of us (B.J.W.) also thanks the Naval Air Warfare Center for partial
support of this research [contract No. N62269-92-C-0548] and the
Office of Naval Research [contract No. 73191].
\newpage
\section{The \barc Coherence Protocol}
\label{sec:protocol}
This section first overviews \barc's design.
It then presents a \emph{baseline} version of Neat\xspace, which self-invalidates all private cache lines at acquires, and commits all dirty data at releases.
Finally, we present the full version of Neat\xspace that improves data reuse across synchronization operations, improving performance and energy efficiency
with acceptable additional complexity.
\subsection{\barc Overview}
\label{subsec:protocol-overview}
\barc is a set of modifications to a multicore processor that lacks support for cache coherence and core-to-core communication.
\later{This base multicore has
no MESI protocol implementation, no coherence directory,
no coherence protocol and no support for core-to-core communication.}%
A core's cache associates with each cache line
a valid bit that is either valid (V) or invalid (I).
\later{
and a dirty bit that indicates updated line data.
\rui{Why do we assume dirty bits here? I'm concerned that readers may confuse it with the per-byte write bits.}
}%
Without loss of generality,
this section assumes that each core has a single-level private cache
and that all cores share a last-level cache (LLC).
It is straightforward to extend Neat\xspace's support to
multiple private cache levels (L1 and L2 caches), as in our performance and energy evaluation (Section~\ref{sec:eval-perf}).
Unlike common implementations of MESI or prior work that tracks ownership in the LLC~\cite{coherence-primer,sarc-coherence,denovo,denovond,denovo-gpu,hlrc},
Neat\xspace does \emph{not} require that the LLC be inclusive of the L1 caches.
\barc does not require that the processor's interconnect supports core-to-core messages as implementations of MESI or prior work such as SARC and DeNovo do~\cite{coherence-primer,sarc-coherence,denovo,denovond,denovo-gpu,hlrc}.
\barc does not explicitly maintain the \emph{single writer, multiple readers} (SWMR) invariant
maintained by MESI
and other protocols that use writer-initiated invalidation.
Instead, in \barc a core may read from out-of-date copies of lines,
and it may write to a line without immediately updating or invalidating other cores' valid copies of the line.
\barc consequently provides coherence only for {\em data-race-free} (DRF) programs,
exploiting the DRF assumption that languages such as C++ provide~\cite{memory-models-cacm-2010,c++-memory-model-2008}.
Our work assumes that the compiler distinguishes synchronization operations
from regular memory operations in the compiled code
so that Neat\xspace operates with respect to synchronization from the original program source.
\barc maintains the data-value invariant and ensures that, for DRF programs,
each read sees the value written by the last ordered (well-synchronized) write to the memory location.
To provide this guarantee, private caches self-invalidate
valid lines
and commit\xspace dirty data
at acquires and releases, respectively.
\later{
\begin{itemize}[leftmargin=*]
\item
Private caches \emph{self-invalidate}
valid lines at synchronization \emph{acquire} operations, e.g., lock acquire,
monitor wait, thread join, and a read of a C++ \code{atomic} or Java \code{volatile} variable.
Self-invalidation handles the fact that at an acquire operation, all lines are potentially out of date
and may need to be updated for a correctly synchronized program.
For example, consider the sequence of operations such as \code{$c_1$: wr x; $c_1$: rel; $c_2$: \textbf{acq}; $c_2$: rd x},
where $c_1$ and $c_2$ are cores performing read, write, acquire, and release operations.
The \code{\textbf{acq}} must invalidate $c_2$'s cached copy of \code{x} to ensure that the core reads the up-to-date value of \code{x} written by $c_1$.
\item
Private caches \emph{commit} dirty lines by writing back their written-to data at synchronization \emph{release} operations such as
lock release, monitor signal, thread fork, and a write to a C++ \code{atomic} or Java \code{volatile} variable.
Dirty lines need be written back at a release operation
so that other cores that perform well-synchronized reads of the line
can see up-to-date values.
For example, consider the sequence of operations
\code{$c_1$: wr x; $c_1$: \textbf{rel}; $c_2$: acq; $c_2$: rd x}.
The \code{\textbf{rel}} must make $c_1$'s update to \code{x} visible to $c_2$.
\end{itemize}
\smallskip
}%
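To make the acquire/release semantics concrete, the following toy model (our illustration; class and method names are ours, not Neat\xspace's hardware design) shows how committing dirty data at a release and self-invalidating at an acquire make well-synchronized writes visible:

```python
class LLC:
    """Shared last-level cache, modeled as a flat store."""
    def __init__(self):
        self.mem = {}                  # address -> last committed value

class Core:
    """A core with a private cache; no writer-initiated invalidation."""
    def __init__(self, llc):
        self.llc = llc
        self.cache = {}                # address -> [value, dirty]

    def read(self, addr):
        if addr not in self.cache:     # miss: fetch from the shared LLC
            self.cache[addr] = [self.llc.mem.get(addr, 0), False]
        return self.cache[addr][0]

    def write(self, addr, val):
        if addr not in self.cache:
            self.cache[addr] = [self.llc.mem.get(addr, 0), False]
        self.cache[addr][0] = val
        self.cache[addr][1] = True     # dirty data stays private until commit

    def release(self):                 # CM: commit all dirty data to the LLC
        for line in self.cache.values():
            pass
        for addr, line in self.cache.items():
            if line[1]:
                self.llc.mem[addr] = line[0]
                line[1] = False

    def acquire(self):                 # SI: write back dirty data, drop all lines
        self.release()
        self.cache.clear()
```

Running the DRF sequence `c1: wr x; c1: rel; c2: acq; c2: rd x` through this model shows that `c2` observes `c1`'s write only after the acquire, exactly as the protocol requires.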
Unlike prior work that buffers and delays write-throughs or registrations of dirty data until the buffer is full or a release is reached~\cite{vips-directoryless-noc-coherence,denovo,denovond,denovo-gpu,hlrc},
\barc does not rely on buffers and instead holds all dirty data in private caches until committing them at the next release.
A core's private cache keeps track of whether \emph{each byte} is dirty,
using one \emph{write bit} per byte. This feature is necessary for correctness:
non-dirty bytes may be out of date and should not in general be written back to the LLC.
This feature also optimizes write-backs\xspace by writing back only a line's dirty bytes. Note that
multiple (write) bits per cache line have been explored in prior work on precise
conflict detection~\cite{conflict-exceptions,arc} and sector caches~\cite{sector-cache-design}.
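The per-byte write-bit mechanism can be sketched as follows (an illustrative model with our own line size and helper name, not the hardware design). Merging only dirty bytes lets two cores commit disjoint updates to the same line, as under false sharing, without losing either update:

```python
LINE_SIZE = 8  # illustrative line size in bytes

def write_back(llc_line, cache_line, write_bits):
    """Merge only the dirty bytes of a private line into the LLC line.

    Clean bytes of the private line may be stale, so they must never
    overwrite the LLC's copy.
    """
    return bytes(cache_line[i] if write_bits[i] else llc_line[i]
                 for i in range(LINE_SIZE))
```

Two cores that dirtied disjoint bytes of the same line can write back in either order and the LLC ends up with both updates.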
A core normally executes in a \emph{normal execution (NE)} state.
When a core performs an acquire operation, it transitions to the
\emph{self-invalidation (SI)} state, during which a core invalidates all of the valid lines in its private cache.
If any valid line is dirty, then the private cache writes back its dirty bytes.
Similarly, when a core performs a release, it
enters the \emph{commit (CM)} state, during which a core writes back all dirty bytes in its dirty lines to the LLC.
Figure~\ref{fig:arch-cmp} shows the MESI\xspace and \barc architectures that are both implemented on the base multicore processor,
where MESI\xspace- and \barc-specific components are shaded grey.
Note that Figure~\ref{fig:arch-neat} includes support for mechanisms that avoid unnecessary self-invalidations and improve data reuse across synchronization operations---\emph{write signatures\xspace} and \emph{PI state}---introduced and explained in Section~\ref{subsec:opts}.
Compared with MESI\xspace, \barc eliminates the complex coherence directory and support for core-to-core communication,
and introduces relatively small structures (write signatures\xspace) in the shared LLC and extra metadata for private cache lines.
\later{
Sections~\ref{sec:protocol-verification} and \ref{sec:eval-perf} evaluate the complexity,
performance, and energy of \barc, compared with MESI\xspace. }%
\begin{figure}[t]
\centering
\subfloat[The MESI\xspace architecture with an inclusive directory.
Components added by MESI\xspace are shaded grey.]{
\includegraphics[height=0.2\textheight]{figs/architecture-mesi} \label{fig:arch-mesi}}
\medskip\\
\subfloat[The \barc architecture. Components added by \barc are shaded grey. \emph{Write sigs} = write signatures.]{
\makebox[0.7\linewidth][c]{\includegraphics[height=0.2\textheight]{figs/architecture-neat} \label{fig:arch-neat}}}
\caption{Illustrations of the MESI\xspace and \barc architectures.
\textnormal{Some components common to both designs, such as per-line valid bits, are omitted for simplicity.}}
\label{fig:arch-cmp}
\end{figure}
\subsection{Baseline \barc Protocol}
\label{subsec:unoptimized-protocol}
\input{table-protocol}
Figure~\ref{fig:unoptimized-protocol}
shows baseline Neat\xspace, which provides correct coherence for DRF programs and efficiently defers coalesced write-backs until synchronization operations.
Note that the baseline protocol is inefficient because private caches invalidate all lines at acquires.
The table includes separate states and transitions for
(a) private cache lines,
(b) a core's private cache as a whole, and
(c) the LLC as a whole.
Note that Neat\xspace avoids the need for complex transient protocol states required by MESI,
and Neat\xspace adds no per-line state to the LLC (e.g.\xspace, no directory; Figure~\ref{fig:arch-cmp}).
As in prior work (e.g.\xspace,~\cite{denovo}), we assume each L1 controller has a
\emph{request buffer}, the storage array for which could be implemented as an explicit hardware buffer or using existing
miss status handling registers (MSHRs). The request buffer tracks outstanding requests to the LLC.
The L1 controller adds a request buffer entry for each outstanding request to the LLC
(either a data request or a write-back message due to eviction of a dirty line),
and it removes the entry from the request buffer after receiving the corresponding response.
In \barc,
accesses to invalid (\emph{I}) and valid (\emph{V}) states in private caches are straightforward,
handled as misses and hits, respectively, except that a write sets corresponding write bits.
When a private cache evicts a \emph{clean} line
(i.e.\xspace, a valid line without any write bits set),
it does so \emph{silently} (i.e.\xspace, without communicating with the LLC).
When a private cache evicts a \emph{dirty} line (i.e.\xspace, a valid line with at least one write bit set),
the private cache sends a write-back message
to the LLC that includes the dirty bytes and corresponding write bits, and a count value \emph{\code{CNT}\xspace{}=1} (detailed below), which
directs the LLC to send back an acknowledgment\xspace message \emph{PutAck} immediately after receiving this one write-back message.
The private cache changes the line state to I immediately, and
the core may continue execution in general (as long as there is no dependency with the pending requests in the request buffer),
but the core must wait for acknowledgments of all outstanding write-backs
before executing operations \emph{after} the next release.
An access miss or write-back message does not lead to transient states for cache lines because there are no conflicting access requests
forwarded from other cores.
\Barc does not risk deadlock either because the LLC responds to an access miss or write-back directly without relying on any responses from other cores.
\later{Besides the above actions during a core's normal execution in the \emph{NE} state,}%
A core transitions to a commit\xspace (\emph{CM}) state at a release and
to a self-in\-va\-li\-da\-tion\xspace (\emph{SI}) state at an acquire.
While in the CM or SI state, a core does not fetch or execute instructions,
and instead waits for the L1 controller to perform operations iteratively over all dirty lines (if in CM state) or valid lines (if in SI state).
During the CM state, the L1 controller writes back all its dirty lines and avoids per-line acknowledgment of each write-back\xspace by
having the LLC send a single \emph{PutAllAck} message once it has received all write-backs\xspace.
To facilitate the LLC acknowledging such bulk write-backs\xspace, each write-back\xspace message has an integer field \emph{\code{CNT}\xspace} that indicates the number of write-backs\xspace
that the LLC should receive before sending back an acknowledgment.
The L1 controller counts the number of write-backs\xspace when committing dirty lines; after sending all write-backs\xspace,
the L1 sends an extra write-back\xspace message to the LLC with \code{CNT}\xspace set to the number of write-backs and no data.
The LLC knows to wait before acknowledging write-backs\xspace because the L1 controller sets \code{CNT}\xspace to 0 for write-backs\xspace sent during the CM state.
The LLC maintains per-core counters \emph{\code{wbReceived}\xspace} that count the write-backs\xspace received from each core with \code{CNT}\xspace{}=0.
\later{Upon receiving a write-back\xspace with its \code{CNT}\xspace equal to 0, the LLC only increases \code{wbReceived}\xspace for the core rather than sending any acknowledgment.}%
After receiving the extra write-back\xspace with a \code{CNT}\xspace greater than 0, the LLC compares the \code{CNT}\xspace and \code{wbReceived}\xspace of the same core;
if the two are equal, it sends back a PutAllAck message to acknowledge all of the core's write-backs\xspace and clears the \code{wbReceived}\xspace counter.
We support the more general case of an out-of-order network between cores and the LLC,
so a core's bulk write-backs\xspace may be reordered with its extra write-back\xspace, allowing the LLC to receive a \code{CNT}\xspace greater than the number of write-backs\xspace received from the core so far.
In such a case, the LLC keeps waiting for more write-backs\xspace to arrive from the core, incrementing \code{wbReceived}\xspace when appropriate.
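The LLC-side acknowledgment logic for one core can be sketched as follows (a simplified toy model with our own names; the real controller also carries data and tracks multiple cores). It tolerates reordering of the extra count-carrying message with the bulk write-backs\xspace:

```python
class LLCAckLogic:
    """LLC-side write-back acknowledgment logic for a single core (sketch)."""

    def __init__(self):
        self.wb_received = 0    # per-core counter wbReceived (CNT=0 arrivals)
        self.expected = None    # total announced by the extra data-less message

    def on_eviction_wb(self):
        # An individual dirty eviction carries CNT=1 and is acknowledged at once.
        return 'PutAck'

    def on_bulk_wb(self):
        # A write-back sent during the CM state carries CNT=0.
        self.wb_received += 1
        return self._check()

    def on_count(self, cnt):
        # The extra data-less message carries CNT = number of bulk write-backs;
        # on an out-of-order network it may arrive before some of them.
        self.expected = cnt
        return self._check()

    def _check(self):
        # Send PutAllAck only once all announced bulk write-backs have arrived.
        if self.expected is not None and self.wb_received == self.expected:
            self.wb_received, self.expected = 0, None
            return 'PutAllAck'
        return None
```

In the reordered case, `on_count` records the expected total and the LLC simply keeps counting until the totals match, mirroring the \code{wbReceived}\xspace comparison described above.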
To ensure that a core's dirty data are correctly written back and become visible at releases,
a core in the CM state does not transition back to the NE state until it receives acknowledgments (PutAllAck and any PutAcks) for all outstanding write-backs\xspace.
We assume a centralized LLC for the above discussion, but it is straightforward to apply \barc's bulk write-backs\xspace to a distributed LLC.
With a distributed LLC, during the CM state, the L1 controller counts the number of write-backs\xspace it sends to each LLC bank and sends an extra write-back\xspace message to each bank with the corresponding count number.
Each bank acknowledges receiving all the write-backs\xspace by sending back a PutAllAck message.
A core in the CM state does not transition back to the NE state until it receives PutAllAcks from all LLC banks.
To handle dropped packets on an unreliable network, if a core does not receive a response within a time limit, it should retry committing write-backs from the beginning.
During the SI state, the L1 controller invalidates all valid lines.
It writes back dirty data for any dirty lines that it invalidates,
in the same way as the CM state's write-backs\xspace, avoiding per-line acknowledgment\xspace of each write-back\xspace.
After invalidating all valid lines, the L1 controller transitions back to the NE state only after receiving a PutAllAck message.
This behavior ensures that each core sees its own latest writes even in the context of an out-of-order network.
Note that the L1 controller does not need to wait for outstanding \emph{PutAck} responses (for write-backs\xspace due to \emph{dirty evictions})
before transitioning from SI to NE
because the corresponding write-backs occupy request buffer entries while waiting for acknowledgements.
\later{
\code{CNT}\xspace's value applies generally to all write-back\xspace messages, including bulk write-backs\xspace performed at synchronization operations and individual write-backs\xspace due to dirty evictions.
On a dirty eviction, the L1 controller sets the write-back\xspace message's \code{CNT}\xspace to 1 to inform the LLC that a \emph{PutAck} message should be sent back immediately
when the LLC receives the write-back\xspace.}%
\subsection{Full \Barc Protocol}
\label{subsec:opts}
Conservative self-invalidation is expensive in terms of run-time performance
\later{ traffic to LLC,}%
and energy because it
invalidates all valid lines, including lines that are still up to date,
which hurts cache locality and results in avoidable cache misses.
We introduce two mechanisms that significantly help reduce the costs of self-invalidation and improve data reuse across acquires.
Figure~\ref{fig:optimized-protocol} shows the corresponding full version of the Neat\xspace protocol.
\subsubsection{Partially invalid state}
We observe that, at an acquire operation, a core can \emph{delay} self-invalidating a line
until its next read access to any \emph{clean} byte in the line.
In other words, a core can write to the line or read a byte that the core has already written,
since these values will be up to date for a DRF execution.
To make use of this observation, we introduce a \emph{partially invalid (PI)} state for private cache lines to indicate that a line may have out-of-date data in its \emph{clean} bytes.
During self-in\-va\-li\-da\-tion\xspace,
instead of invalidating each valid line (i.e.\xspace, changing its state to I),
the private cache \emph{partially invalidates} each valid line, by changing its state to PI.
Any subsequent write to a PI line is a hit, and
a read of a dirty byte of a PI line is also a hit.
A read of clean byte(s) of a PI line, however, is a miss;
the private cache fetches updated value(s) of the byte(s) from the LLC, merges them into the private cache line
(overwriting only clean bytes), and marks the line valid (i.e.\xspace, PI $\rightarrow$ V).
Note that with this mechanism, while an L1 controller is in the SI state
(i.e.\xspace, at an acquire)
it does \emph{not} clear any line's write bits, regardless of whether
the line is partially invalidated or left valid (a line may remain valid after self-invalidation only if using the write signature mechanism, described below).
In contrast, during the CM state (i.e.\xspace, at a release),
the L1 controller clears a line's write bits when writing back dirty bytes,
and does \emph{not} invalidate or partially invalidate any lines.
In a DRF program, it is correct for a core to read from its own bytes at least until an acquire operation.
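The PI-state access rules for a single line can be sketched as follows (a toy model; the names and the two-byte line in the usage below are ours). Note how a clean-byte read merges LLC data into the clean bytes only, preserving the core's own dirty bytes:

```python
V, PI = 'V', 'PI'

class PILine:
    """One private cache line with per-byte write bits (illustrative model)."""

    def __init__(self, data, write_bits):
        self.state = V
        self.data = bytearray(data)
        self.wb = list(write_bits)     # per-byte write bits

    def acquire(self):
        # Self-invalidation with PI: delay invalidation, keep write bits.
        self.state = PI

    def write(self, i, val):
        # A write to a V or PI line is always a hit.
        self.data[i] = val
        self.wb[i] = True

    def read(self, i, llc_line):
        if self.state == PI and not self.wb[i]:
            # Miss on a clean byte: merge LLC data into clean bytes only,
            # then mark the whole line valid (PI -> V).
            for j in range(len(self.data)):
                if not self.wb[j]:
                    self.data[j] = llc_line[j]
            self.state = V
        return self.data[i]
```

Reads of dirty bytes never touch the LLC, which is the data-reuse benefit of delaying invalidation.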
The PI state has some similarities to DeNovo's \emph{touched bit}~\cite{denovo}, but
there are some key differences.
The touched bit indicates that a word is exclusively read by the current core and
the core has up-to-date data for this word at the end of the current parallel phase.
The PI state indicates the current core may have stale data for those bytes that were not written by the core.
The touched bit is at word granularity while the PI state is at line granularity.
Finally, DeNovo's use of the touched bit relies on programmer annotations, while Neat's use of the PI state does not rely on annotations.
\subsubsection{Per-core write signatures}
Our second observation is that a core $c$ can \emph{skip} self-invalidating a line
if that line has not been updated in the LLC by another core since $c$'s last acquire operation.
To identify such lines, Neat\xspace maintains per-core \emph{write signatures}
at the LLC.
A core $c$'s write signature indicates which of $c$'s private lines were updated in the LLC \emph{by any other core}
since $c$'s last acquire operation.
Each write-back to an LLC line by core $d$ adds the line (address) to the write signatures for all cores \emph{other than} $d$.
A core fetches its write signature at the start of self-in\-va\-li\-da\-tion\xspace and only invalidates (or partially invalidates) those lines contained in the write signature.
The LLC clears a core's write signature once it services the core's fetch request for the write signature\xspace.
We note that a core fetches its write signature only after the core succeeds on an acquire, so under the DRF assumption, there is no race on write signatures.
The write signature mechanism is important mainly when acquire operations and thus self-in\-va\-li\-da\-tions\xspace are frequent.
When acquire operations are infrequent, self-in\-va\-li\-da\-tions\xspace are less frequent, and their costs tend to amortize
better over other execution costs.
We can thus optimize for frequent self-invalidations by making write signatures\xspace small,
which saves time and area by avoiding sending and storing large write signatures\xspace.
Our implementation uses Bloom filters~\cite{bloom-filter} for over-approximated write signatures
and sends compressed versions of sparsely populated write signatures (Section~\ref{sec:eval-perf}).
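A minimal write-signature sketch (illustrative size and hash functions, not our evaluated configuration) shows the key safety property: a Bloom filter may report false positives, which only cause extra, harmless invalidations, but it never reports a false negative, so no necessary invalidation is missed:

```python
SIG_BITS = 64  # illustrative signature size

def _hashes(addr):
    # Two cheap, independent hash functions over the line address.
    return (hash(('a', addr)) % SIG_BITS, hash(('b', addr)) % SIG_BITS)

class WriteSignature:
    """Per-core Bloom-filter write signature kept at the LLC (sketch)."""

    def __init__(self):
        self.bits = 0

    def add(self, addr):
        # Called when another core writes back this line to the LLC.
        for h in _hashes(addr):
            self.bits |= 1 << h

    def may_contain(self, addr):
        # True => the line must be (partially) invalidated at the acquire.
        return all(self.bits >> h & 1 for h in _hashes(addr))

    def clear(self):
        # The LLC clears the signature after servicing the core's fetch.
        self.bits = 0
```

A core's acquire fetches the signature, invalidates only lines for which `may_contain` is true, and the LLC then clears it.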
\section{Background: Self-Invalidation-Based Coherence Protocols}
\label{sec:background}
This section overviews state-of-the-art
cache coherence protocols that
self-invalidate readers' copies at acquire operations~\cite{sarc-coherence,vips-directoryless-noc-coherence, denovo, denovond, denovo-gpu, hlrc, quickrelease}.\footnote{Self-invalidation and deferred write-backs
originated as \emph{release consistency} mechanisms
for distributed shared memory
systems~\cite{lazy-release-consistency-dsm,comparison-entry-release-consistency,
midway-dsm, efficient-flexible-object-sharing, vmm-shrimp, accelerated-dsm}.}
\notes{
Depending on which cache lines are self-invalidated, some self-invalidations can be unnecessary and avoided.
}%
These protocols differ from each other mainly in their strategies for committing dirty lines to implement the data-value invariant:
\begin{itemize}[leftmargin=*]
\item SARC\xspace retains MESI\xspace's directory to keep track of cache lines' ownership,
and a new writer initiates the write-back and invalidation of the old writer's copy~\cite{sarc-coherence} (Section~\ref{subsec:background-sarc}).
\item GPU coherence uses write-through caching for all data~\cite{denovo-gpu,quickrelease}, while VIPS\xspace classifies private and shared data and uses write-back caching for private data and write-through caching for shared data~\cite{vips-directoryless-noc-coherence} (Section~\ref{subsec:background-vips}).
\later{
This paper's \barc protocol
(especially the baseline version as Section~\ref{subsec:unoptimized-protocol} describes) is, in spirit, like GPU coherence~\cite{denovo-gpu} (or VIPS\xspace~\cite{vips-directoryless-noc-coherence}
without classification of private and shared data).
However, \barc makes important improvements over GPU coherence,
which lead to \barc's significantly better performance and energy efficiency compared with prior self-invalidation-based work.
}%
\item DeNovo and other DeNovo-based protocols rely on registering the ownership of dirty data in the LLC, rather than writing back dirty data directly~\cite{denovo, denovond, denovo-gpu, hlrc} (Section~\ref{subsec:background-registration}).
\end{itemize}
\notes{As discussed in Section~\ref{sec:intro}, assuming a DRF program,
either write-throughs (GPU coherence, VIPS\xspace) or ownership registration (DeNovo and DeNovo-based protocols) can be buffered and delayed until buffer overflow or a release.}%
\subsection{Using MESI-Style Write-Backs}
\label{subsec:background-sarc}
\emph{SARC's} design retains part of MESI's directory to track ownership of dirty lines~\cite{sarc-coherence}.
A writer initiates the write-back and invalidation of the last writer's line.
SARC extends MESI by supporting \emph{tear-off} copies of lines for reads,
avoiding the need to maintain read-sharers in the directory.
A core's private cache self-invalidates a tear-off copy of a line at an acquire.
\Barc also avoids tracking sharers in the directory
and uses self-invalidation to ensure coherence.
Unlike SARC, \barc eliminates the directory and MESI protocol \emph{entirely},
and writes back all dirty bytes at releases to provide coherence.
\notes{
Empirically, the number of writebacks tends to be very low, avoiding a performance problem (Section~\ref{sec:eval}).
}%
\subsection{Using Delayed Write-Throughs}
\label{subsec:background-vips}
While CPUs use the complex MESI protocol or its variants,
GPUs prefer a simpler coherence protocol that the literature refers to as \emph{GPU coherence}~\cite{denovo-gpu,quickrelease}.
GPU coherence uses
self-invalidation at acquires and private write-through caches to keep the shared cache up to date.
To reduce the costs of write-through caching, the write-throughs can be buffered and delayed until the next release or buffer overflow.
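As an illustration of this buffering (our sketch, not GPU coherence's or any specific design's implementation), a small buffer can coalesce write-throughs per line and flush when it overflows or at a release:

```python
class WriteThroughBuffer:
    """Delayed write-through buffer (illustrative capacity and interface)."""

    def __init__(self, llc, capacity=4):
        self.llc = llc            # shared cache, modeled as a dict
        self.capacity = capacity
        self.pending = {}         # coalesced pending write-throughs per line

    def write(self, addr, val):
        self.pending[addr] = val  # repeated writes to a line coalesce
        if len(self.pending) > self.capacity:
            self.flush()          # buffer overflow forces a flush

    def flush(self):
        # Also invoked at every release so other cores see the updates.
        self.llc.update(self.pending)
        self.pending.clear()
```

The capacity limit is the cost this buffering is subject to: once the buffer fills, write-throughs reach the shared cache early even with no release in sight.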
This paper's baseline \barc design resembles GPU coherence in spirit, but the following important differences exist between the two designs:
\begin{itemize}[leftmargin=*]
\item \Barc defers write-backs for \emph{all} dirty data, while GPU coherence uses buffers to hold outstanding write-throughs, which is subject to the capacity limits of the buffers.
\item \Barc introduces lightweight mechanisms that reduce unnecessary self-in\-va\-li\-da\-tion\xspace costs.
\end{itemize}
\noindent The above differences help \barc achieve significantly better performance and energy efficiency
than prior work.
Similar to GPU coherence,
\emph{VIPS}
uses
self-invalidation at acquires and delays write-throughs until a timeout,
a miss status handling register (MSHR) eviction, or a release operation~\cite{vips-directoryless-noc-coherence}.
\Barc avoids write-through costs imposed by VIPS by deferring all write-backs until releases.
To reduce performance costs due to unnecessary self-in\-va\-li\-da\-tions\xspace,
VIPS optimizes self-in\-va\-li\-da\-tion\xspace by classifying pages as private or shared.
\later{Once a page is classified as some state, private or shared, each cached line of the page is marked as the same state as the page by setting a per-line state bit in the private caches.
Accesses to a private line do not trigger self-invalidation and do not write through.
When a private line becomes shared, its values are self-invalidated and written back.}%
For all shared pages, VIPS further distinguishes between read-only and read-write pages,
\later{and makes cache lines of read-only pages free from self-invalidation.}%
and self-invalidates only shared read-write pages.
While often beneficial, VIPS's classifications are sensitive to a program's data access patterns and have
limited impact on programs that mainly access data on shared read-write pages.
In contrast, our \Barc design introduces lightweight mechanisms to avoid unnecessary self-in\-va\-li\-da\-tions\xspace without relying on specific data access patterns.
\later{regardless of how frequently the program accesses shared (read-only versus read-write) versus private pages.}%
\notes{
VIPS simulation. Rather than implementing MSHRs to enable non-blocking caches only to complicate the simulation, we simulate delayed write-through by buffering write-throughs until timeout, buffer being full (ie, “MSHR” eviction), or region boundaries. The buffer should be like a fully associative cache with some eviction policy (eg, LRU), with each of its entries storing the line address and per-byte dirty bits of write-throughs for a line. Since the buffer serves as MSHRs particularly for write-troughs, a reasonable size of such a buffer should somewhat smaller than normal MSHR sizes (say 8 assuming a normal MSHR size is 10 to 16), in order to reserve MSHRs for other types of outstanding mem requests.
A write-though leaves the buffer in the following 3 cases and triggers corresponding operations:
\begin{itemize}
\item Timeout. Perform the actual write-through.
\item Buffer being full. Evict a write-through as per the eviction policy.
\item Encountering a region boundary. Stream all outstanding write-throughs in the buffer into the LLC with overhead charged due to streaming traffic and one-time LLC communication, as what we do with bulk write-backs for Neat.
\end{itemize}
Further, in the simulation, VIPS distinguishes between the two types of sync points --- lock releases and acquires --- like what Neat does to make a fair comparison (originally VIPS performs the same operations at whichever sync point).
}%
\subsection{Using Ownership Registration}
\label{subsec:background-registration}
DeNovo and DeNovo-based protocols rely on registering the ownership of dirty data~\cite{denovo,denovond,denovo-gpu,hlrc},
while \barc writes back dirty lines in bulk at releases.
\emph{DeNovo}
uses self-invalidation for out-of-date reads and \emph{registration} in the LLC to track a line's writers~\cite{denovo}.
Registration requires inclusion at the LLC and an extra level of indirection for writing back data, reducing effective LLC capacity and adding latency.
DeNovo further requires the program to be written in a deterministic, data-race-free language~\cite{dpj}.
\emph{DeNovoND} extends DeNovo to allow parts of a program to be nondeterministic~\cite{denovond},
but relies on the compiler to identify atomic accesses to
maintain coherence.
DeNovo and DeNovoND are thus not applicable to programs written in standard languages.
While much of the DeNovo line of work relies on programmer annotations
(or has other major differences from \barc, e.g.\xspace, DeNovoSync~\cite{denovosync}; see Section~\ref{Sec:related}),
DeNovo and its ideas have been applied to GPUs
without requiring programmer annotations~\cite{denovo-gpu}.
This protocol uses registrations on dirty data; the registrations can be buffered and committed upon buffer overflow or at releases.
The protocol uses self-invalidations on all non-registered data at acquires to improve reuse of dirty data.
In contrast to the above DeNovo line of work, a \barc core writes back all dirty data to the LLC in bulk at releases, avoiding shared ownership metadata entirely.
\Barc uses lightweight mechanisms to improve reuse of both dirty and clean data without requiring programmer annotations.
\emph{Heterogeneous lazy release consistency (hLRC)} builds on DeNovo, but exploits synchronization locality by registering only synchronization variables and lazily performing coherence actions
\later{(i.e.\xspace, write-backs of dirty data and self-invalidations of valid data)}%
only when a remote synchronization operation is detected~\cite{hlrc}.
\later{hLRC thus exploits synchronization locality by performing write-back and self-invalidation only when synchronization variable ownership changes cores.
Compared with DeNovo, hLRC avoids expensive coherence actions at local synchronization, but fails to reuse written data across global synchronization operations.}%
\Barc does not detect synchronization locality since it targets CPU coherence where synchronization is mostly global, but it would be straightforward to extend \barc to exploit synchronization locality by avoiding coherence actions at detected local synchronization operations.
\medskip
\noindent
In summary,
researchers have explored various self-in\-va\-li\-da\-tion\xspace approaches to provide simple, efficient cache coherence
that avoids the complex directory and transient states of MESI~\cite{vips-directoryless-noc-coherence,denovo,denovond,denovo-gpu,hlrc}.
\later{~\cite{sarc-coherence,vips-directoryless-noc-coherence,denovo,denovond,denovosync,
denovo-gpu,hlrc,spandex,quickrelease}.}%
Existing solutions for avoiding unnecessary self-in\-va\-li\-da\-tions\xspace either depend on program data access patterns or require programmer annotations.
Further, in order to implement the data-value invariant,
these approaches either retain part of the directory to maintain shared ownership metadata, or incur write-through costs.
\section{Conclusion}
\Barc is a new cache coherence design that avoids unnecessary self-invalidations
and performs bulk write-backs at synchronization operations.
Unlike other self-invalidation approaches,
\barc does not rely on programmer annotations or
specific access patterns for efficiency.
Our evaluation shows that \barc is simpler than MESI,
performs competitively with MESI especially under false sharing,
and provides better performance and energy efficiency than two state-of-the-art self-invalidation-based
approaches.
These results suggest that
\Barc provides efficient, complexity-effective coherence.
\section{Design}
\subsection{Base Design}
\paragraph*{Metadata.}
\begin{itemize}
\item Private line metadata -- One write bit per byte.
\end{itemize}
\paragraph*{Data Access.}
A core does nothing special at a memory read. \brandon{This is a huge feature, but one that other work maybe also has? A qualitatively reduced cost with (hopefully) a quantitatively huge benefit.}
For each memory write, if the write hits in the private cache, the core sets the write bits for those written-to bytes
of the line. Otherwise, the core fetches the line from the LLC.
Note that write-after-read (WAR) upgrades need not be handled.
\paragraph*{Eviction of a private line.}
When a line is evicted from the L1/L2 to LLC, its written-to bytes (but not its other bytes) need to be written back to the LLC.
Note that a core performs \emph{silent} evictions for read-only lines without any communication with the LLC.
\paragraph*{Synchronization.}
When a core reaches a synchronization operation, i.e.\xspace, a lock acquire or release, it performs the following actions.
\\
At acquire:
\begin{itemize}
\item \textbf{Self-invalidation} -- The core invalidates all of its private cache lines.
If the line has its write bits set, the core needs to commit the line first, writing back dirty values and clearing write bits for the line.
\end{itemize}
~\\
At release:
\begin{itemize}
\item \textbf{Commit} -- For each dirty line, i.e.\xspace, a line with write bits set,
the core writes back its written-to bytes to the LLC and clears its
write bits.
\end{itemize}
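The base design above (per-byte write bits, commit at release, self-invalidation at acquire) can be sketched as a small executable model. This is an illustrative simplification, not the actual hardware design; the class and method names are ours.

```python
# Minimal sketch (not the paper's implementation) of the base design:
# a private cache keeps one write bit per byte; a release commits only
# written-to bytes to the LLC; an acquire first commits dirty lines,
# then invalidates all private lines.

LINE_SIZE = 4  # bytes per line (kept small for illustration)

class LLC:
    def __init__(self):
        self.mem = {}  # line address -> bytearray

    def fetch(self, addr):
        return bytearray(self.mem.setdefault(addr, bytearray(LINE_SIZE)))

    def commit(self, addr, data, write_bits):
        line = self.mem.setdefault(addr, bytearray(LINE_SIZE))
        for i in range(LINE_SIZE):
            if write_bits[i]:          # write back only written-to bytes
                line[i] = data[i]

class PrivateCache:
    def __init__(self, llc):
        self.llc = llc
        self.lines = {}  # addr -> (data, write_bits)

    def read(self, addr, offset):
        if addr not in self.lines:     # nothing special on a hit
            self.lines[addr] = (self.llc.fetch(addr), [False] * LINE_SIZE)
        return self.lines[addr][0][offset]

    def write(self, addr, offset, value):
        if addr not in self.lines:
            self.lines[addr] = (self.llc.fetch(addr), [False] * LINE_SIZE)
        data, bits = self.lines[addr]
        data[offset] = value
        bits[offset] = True            # set the per-byte write bit

    def release(self):                 # commit: write back dirty bytes
        for addr, (data, bits) in self.lines.items():
            self.llc.commit(addr, data, bits)
            self.lines[addr] = (data, [False] * LINE_SIZE)

    def acquire(self):                 # self-invalidation (commit first)
        self.release()
        self.lines.clear()
```

Because commits write back only written-to bytes, two cores that falsely share a line merge their updates correctly without any per-line ownership tracking.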
\subsection{Opt: avoiding self-invalidation using write signatures}
\paragraph*{Metadata added.}
\begin{itemize}
\item Per-core metadata in the LLC -- A ???-bit Bloom filter used as a write signature. The write signature may be fairly large (say, 10 times larger than a signature that fits in a flit); if it is sparse, it can be communicated from the LLC to the core in a sparse format (e.g.\xspace, run-length encoding or a list of bit positions).
\end{itemize}
\brandon{We borrow from TM and Bulk and others, using signatures. We should look at Josep's papers carefully to be sure he's not using a similar Bulk summary trick to ensure DRF0. He has a million Bulk/Sig papers.}
\paragraph*{Eviction of a private line.}
The LLC updates other cores' write signatures with the evicted line.
\paragraph*{Synchronization.}
The following changes are made to the actions that a core performs at each synchronization operation.
\\
At acquire:
\begin{itemize}
\item \textbf{Self-invalidation} -- The core requests the LLC to send the updated write signature
(and to clear the write signature for the core \brandon{Let's think about the atomicity of this operation and the operations of the executing region.})
and invalidates a private cache line only if the write signature shows that ownership of the line may have changed during the acquire-free region (AFR) that is ending.
\end{itemize}
~\\
At release:
\begin{itemize}
\item \textbf{Commit} -- The LLC updates other cores' write signatures with all newly committed lines.
\end{itemize}
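The write-signature mechanism can be sketched as follows. This is an assumed, simplified model: the filter size and hash functions are placeholders, not the design's actual parameters.

```python
# Sketch of the write-signature optimization: the LLC keeps a per-core
# Bloom filter of lines committed (or evicted) by *other* cores; at an
# acquire, a core invalidates only lines that test positive in its
# signature, and the LLC then clears the signature for the new region.

SIG_BITS = 64  # placeholder; the real design would use a larger filter

def sig_hashes(addr):
    # two cheap hash functions over the line address (illustrative only)
    return [(addr * 2654435761) % SIG_BITS, (addr * 40503 + 7) % SIG_BITS]

class WriteSignature:
    def __init__(self):
        self.bits = [False] * SIG_BITS

    def insert(self, addr):
        for h in sig_hashes(addr):
            self.bits[h] = True

    def maybe_contains(self, addr):   # Bloom filter: no false negatives
        return all(self.bits[h] for h in sig_hashes(addr))

    def clear(self):
        self.bits = [False] * SIG_BITS

class LLCWithSignatures:
    def __init__(self, n_cores):
        self.sigs = [WriteSignature() for _ in range(n_cores)]

    def commit(self, core, addr):
        # update all *other* cores' signatures with the committed line
        for c, sig in enumerate(self.sigs):
            if c != core:
                sig.insert(addr)

    def acquire(self, core, cached_lines):
        sig = self.sigs[core]
        to_invalidate = {a for a in cached_lines if sig.maybe_contains(a)}
        sig.clear()                  # reset the signature for the new region
        return to_invalidate
```

A false positive in the filter only causes an unnecessary invalidation; a false negative is impossible, so correctness is preserved.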
\subsection{Opt: deferring write-backs of dirty lines}
\paragraph*{Metadata.}
We add a \emph{deferred-writeback} bit for each private line, indicating whether the line is a deferred line, i.e.\xspace, a
line whose dirty values have write-backs to the LLC deferred, and an \emph{ownerID} for each shared line in the LLC,
registering the owner core of each deferred line. We also reuse the write bits of each private line to indicate the bytes
at which the line holds the latest values, i.e.\xspace, the bytes written by the corresponding core.
\paragraph*{Synchronization.}
The following changes are made to the actions that a core performs at each synchronization operation.
\begin{itemize}
\item \textbf{Self-invalidation} -- The core needs to ``undefer'' a deferred line before self-invalidating it,
i.e.\xspace, the core writes back the line's dirty values (and clears its write bits and deferred-writeback bit) and the LLC clears the ownerID for the line.
\\
Note that before the core writes back any dirty values (either of a deferred line or a normal dirty line), the LLC needs to fetch the line from its owner.
\item \textbf{Commit} -- For each dirty line,
the core defers writing back its dirty values
until they are needed by another core or another core wants to write back or defer the line. The core keeps the write bits
for the line to indicate that the line has the latest version of values at corresponding bytes and also sets the
deferred-writeback bit of the line. The LLC registers the core as the owner core
of the line.
\\
If the line has already been deferred in a previous region of the
\emph{same} core, the core doesn't need to communicate with the LLC to defer it
again.
If the line has been deferred by a \emph{different} core, it needs to be fetched
from its old owner by the LLC before being deferred by the current core.
\end{itemize}
\paragraph*{Fetching a deferred line from its owner core.}
A deferred line is fetched to the LLC from its owner core when another core requests it at a read/write miss, or wants to write it back or defer it during an eviction, commit, or self-invalidation
(or when the owner core itself has to evict it). The LLC fetches
the line's dirty values before fulfilling the request, and the owner core clears the line's write bits and deferred-writeback bit.
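The ownership handoff above can be sketched as a toy model. This is an illustrative simplification (our naming, single value per line), not the paper's actual mechanism.

```python
# Sketch of deferred write-backs: at commit, a core registers itself as
# the line's owner in the LLC instead of writing the data; the LLC pulls
# the dirty value from the owner only when another core needs the line.

class Core:
    def __init__(self, cid, llc):
        self.cid, self.llc = cid, llc
        self.deferred = {}  # addr -> dirty value not yet in the LLC

    def write(self, addr, value):
        self.deferred[addr] = value

    def commit(self):
        for addr in self.deferred:
            self.llc.register_owner(addr, self)  # defer the write-back

    def undefer(self, addr):          # LLC pulls the dirty value
        return self.deferred.pop(addr)

class DeferLLC:
    def __init__(self):
        self.data = {}    # addr -> value
        self.owner = {}   # addr -> owning core (the ownerID)

    def register_owner(self, addr, core):
        if addr in self.owner and self.owner[addr] is not core:
            # a different core deferred it: fetch from the old owner first
            self.data[addr] = self.owner[addr].undefer(addr)
        self.owner[addr] = core

    def read(self, addr, requester):
        owner = self.owner.get(addr)
        if owner is not None and owner is not requester:
            self.data[addr] = owner.undefer(addr)   # fetch on demand
            del self.owner[addr]
        return self.data.get(addr, 0)
```

Note that a core that re-defers a line it already owns causes no LLC traffic, matching the same-core fast path described above.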
\subsection{Opt: avoiding self-invalidation with a special line state}
The design with deferred write-backs can additionally avoid self-invalidating a line until
the next read of the line's \emph{non-written-to} bytes.
\paragraph*{Metadata.}
We add an additional state \code{Partial-invalid} for each private line to
indicate that the line might be invalid due to stale values at non-written-to
bytes, i.e.\xspace, bytes with write bits unset.
\paragraph*{Data Access.}
If a core hits a line in \code{Partial-invalid} state and wants to \emph{read} any of its non-written-to bytes,
the core needs to fetch the updated values for all these bytes from the LLC and change the line's state from \code{Partial-invalid} to \code{Valid}.
\\
A core can write to any bytes or read \emph{written} bytes of a partially
invalid line directly.
\paragraph*{Synchronization.}
\begin{itemize}
\item \textbf{Self-invalidation} -- If a line (possibly a deferred line)
is contained in a core's write signature, the core changes the line's state to
\code{Partial-invalid}, rather than writing it back and self-invalidating it.
\end{itemize}
~\\
Note that a deferred line can be in the \code{Partial-invalid} state in a core even if the core is the owner,
since ownership only ensures that a core has updated values at \emph{written} bytes.
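The \code{Partial-invalid} handling can be sketched as follows; this is our simplified single-line model, with the refetch of non-written-to bytes made explicit.

```python
# Sketch of the Partial-invalid state: instead of self-invalidating a
# line named by the write signature, the core marks it Partial-invalid;
# written-to bytes stay readable, and a read of a non-written-to byte
# refetches the stale bytes from the LLC and revalidates the line.

LINE_SIZE = 4

class PILine:
    def __init__(self, data):
        self.data = bytearray(data)
        self.write_bits = [False] * LINE_SIZE
        self.state = "Valid"

class PICache:
    def __init__(self, llc_line):
        self.llc_line = llc_line      # stand-in for the LLC's copy
        self.line = PILine(llc_line)

    def write(self, offset, value):   # writes hit regardless of state
        self.line.data[offset] = value
        self.line.write_bits[offset] = True

    def self_invalidate(self):
        self.line.state = "Partial-invalid"   # keep line, mark stale bytes

    def read(self, offset):
        line = self.line
        if line.state == "Partial-invalid" and not line.write_bits[offset]:
            # fetch updated values for all non-written-to bytes
            for i in range(LINE_SIZE):
                if not line.write_bits[i]:
                    line.data[i] = self.llc_line[i]
            line.state = "Valid"
        return line.data[offset]
```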
\section{Evaluation of Correctness and Complexity}
\label{sec:protocol-verification}
We validated Neat\xspace's correctness and
quantitatively evaluated \barc's complexity, comparing it directly with the complexity of a MESI\xspace protocol implementation,
in the widely used \emph{Murphi} model checking tool~\cite{murphi}.
\later{
\rui{"why didn't you include DeNovo in your comparison (since the files theoretically would have been included in the same release as the MESI ones)?"\\
I don't remember why we didn't include DeNovo in the complexity evaluation, but we should be able to do that.
\mike{Is it because DeNovo requires annotations of which locations will be accessed in a region?
What would that mean for the comparison in Murphi?}
\rui{They only modeled one address and one region in Murphi, so they actually assumed no programmer annotation and just let Murphi explore various combinations of reads and writes in a region as long as DRF is guaranteed. I'll modify their specification to make a comparable version to our \barc/MESI\xspace specifications.}}}%
\subsection{Methodology}
We implemented MESI in Murphi based on the GEMS two-level MESI protocol~\cite{gems}.
\notes{
We obtained a Murphi specification for MESI\xspace used in prior work~\cite{denovo-verification}.\footnote{We
have not compared with the specification for DeNovo~\cite{denovo-verification} because (1) it supports
only one line and byte per line, and (2) DeNovo annotation's requirement makes it not directly comparable to \barc.}
\mike{Rewrote above. Please check.}
The provided MESI\xspace specification is based on the GEMS two-level MESI protocol~\cite{gems} and models a single line with a single byte. We extended
the model to support a cache with multiple lines and bytes.
}%
We implemented Neat\xspace in Murphi by directly specifying the protocol described in Section~\ref{sec:protocol}.
For both MESI\xspace and Neat\xspace, our specifications model two cores, a two-level cache hierarchy (i.e.\xspace, private L1s and a shared LLC),
an unordered network with unlimited capacity,
two data values, and up to two lines and two bytes per line.
In addition to modeling standard cache operations due to reads, writes, and evictions of lines,
the MESI\xspace and Neat\xspace specifications model acquire and release operations.
\later{
\mike{I think we need to say more about the specifications,
e.g.\xspace, what the operations are: reads, writes, acq, rel operations.
\brandon{Agreed. More detail would help. We should also say, wherever it fits, that we'll release our model.}
\mike{Added something about the first above, but it could be improved. Added the second part below.}}
}%
Verification using a Murphi model
serves two purposes: providing a demonstration that the protocols are correct, and allowing a comparison of the complexity of the protocols.
We will make our Neat\xspace model and modified MESI\xspace model publicly available so that others can reproduce and modify our evaluation
of correctness and complexity.
To check correctness, both specifications check a ``last-write'' assertion for each state explored by Murphi:
any read by a core of a byte in a cache line must return the value written by the most recent write to that byte by any core.
(The specifications also check protocol-specific assertions, e.g.\xspace, the MESI\xspace specification checks
the ``single writer'' invariant that there is at most one modifiable copy of each line at a time.)
We note that for \barc the last-write property holds only for
\emph{data-race-free} (DRF) executions; \barc has undefined behavior for data races.
MESI\xspace by contrast provides the last-write property for all executions---but
real systems that use MESI\xspace typically apply compiler and hardware optimizations
allowed by language and hardware memory models that yield undefined or weak semantics for executions with data races~\cite{memory-models-cacm-2010}.
It thus makes sense to verify \barc and MESI\xspace for DRF executions only.
To limit verification to DRF executions only, we extend the \barc and MESI\xspace specifications to detect a subset of data races
for which a byte's last write may be undefined, i.e.\xspace, accesses to a byte that are not ordered by synchronization with the prior write of that byte.
That is, any read or write to a byte by core $c_2$ that is preceded by a write to the same byte by a different core $c_1$
must be well synchronized, i.e.\xspace, there must exist a sequence of operations \code{$c_1$: wr x; $c_1$: rel; $c_2$: acq; $c_2$: rd/wr x},
where \code{rel} and \code{acq} are any release and acquire operations, respectively.
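The well-synchronized condition above can be sketched as a small checker. This is our simplification of the specifications' race detection: it tracks only the latest release and acquire timestamps per core, a conservative approximation that may cut off some DRF executions.

```python
# Sketch of the DRF filter: an access to a byte previously written by a
# different core c1 must be ordered by a "c1: rel ... c2: acq" pair
# after that write; otherwise the execution is treated as racy.

class DRFChecker:
    def __init__(self):
        self.time = 0
        self.last_write = {}      # byte addr -> (core, write timestamp)
        self.release_time = {}    # core -> timestamp of latest release
        self.acquire_time = {}    # core -> timestamp of latest acquire

    def _tick(self):
        self.time += 1
        return self.time

    def release(self, core):
        self.release_time[core] = self._tick()

    def acquire(self, core):
        self.acquire_time[core] = self._tick()

    def access(self, core, addr, is_write):
        t = self._tick()
        writer, wt = self.last_write.get(addr, (core, 0))
        ok = True
        if writer != core:
            rel = self.release_time.get(writer, 0)
            acq = self.acquire_time.get(core, 0)
            # require: write -> writer's release -> core's acquire -> access
            ok = wt < rel < acq
        if is_write:
            self.last_write[addr] = (core, t)
        return ok                 # False means the access is (possibly) racy
```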
\later{
(Note that the specifications allow read--write races, which do not lead to undefined behavior for Neat\xspace.)
\brandon{The parenthetical here is little bit of a ``huh??'' Do we introduce this fact earlier in the paper?
If not, we should. Once we do, we can do a back-reference here to make this easier to understand in passing here.}
}%
If the MESI\xspace or \barc specification detects a violation of this property in an execution,
it transitions the execution state to the \emph{initial} state (i.e.\xspace, the state before execution starts),
effectively cutting off exploration of racy states.
\input{table-state-numbers}
\subsection{Protocol Complexity Results}
\label{subsec:complexity-results}
Table~\ref{tab:states:drf-only} reports how many execution states Murphi explored
for the MESI\xspace specification and three variants of the \barc specification.
\emph{\code{Neat base}\xspace} is the baseline \barc;
\emph{\code{Neat pi-only}\xspace} includes the partially invalid (PI) state mechanism; and
\emph{\bcbf} is the full version of \barc and includes the PI state and write signature mechanisms.
For each configuration, the table reports states explored by Murphi
for the four combinations of 1--2 lines and 1--2 bytes per line.
As the results show, MESI\xspace's complexity (i.e., number of states) increases more quickly than Neat\xspace's complexity as the number of lines in the cache increases. This is because MESI\xspace requires per-line transient states to keep the lines coherent.
In contrast, Neat\xspace's complexity increases more quickly than MESI\xspace's as the number of bytes per line increases, because Neat\xspace maintains per-byte write bits for private cache lines.
However, Neat\xspace's relative complexity for an additional byte is lower
than MESI\xspace's relative complexity for an additional line,
since MESI\xspace's per-line state space is considerably larger than Neat\xspace's per-byte state space: MESI\xspace adds many transient states per line, while Neat\xspace adds a single write bit per byte.
\notes{\vignesh{Does the last sentence mean that we have one transient state per byte in Neat\xspace? Stating the precise number of
transient states in MESI\xspace and \barc may be helpful here.
\mike{In what way is it a \textbf{transient} state?
I guess it's true that one could consider there to be 2$^{64}$ possible states for a cache line's write bits,
but I don't think we want to present it that way.}
\vignesh{Actually, I am not sure what the number of transient states should be for Neat\xspace. My confusion
was more because the last sentence seemed to compare objects of different types -- transient states in MESI\xspace
with bits in Neat\xspace.}
\mike{I believe ``transient states'' are states needed because of protocol races, and so \Barc doesn't have transient states.}}}%
The verification state space is large for a Neat\xspace or MESI\xspace configuration with two lines and two bytes per line.
With the exception of \code{Neat base}\xspace, Murphi did not complete checking any of these configurations (indicated by \emph{Timeout} in the table),
even after exploring hundreds of millions of states over several days running on a
\later{
\brandon{which tier?
\mike{Couldn't figure this out easily. But in general we should switch to Standard Tier to save some \$.}}
}%
VM hosted by Google Compute Engine.
\later{
\vignesh{If it easy to find, we should report main memory available on the VM. COUP~\cite{coup-micro-2015}
did the same while reporting unfinished MURPHI runs
\mike{Rui doesn't know since it was a long time ago.}}
}%
\later{
Considering that a normal cache has significantly more lines than bytes within a line,
we expect Neat\xspace to have more significant advantage over MESI\xspace in real world than the simplified verifications.
\mike{How would this have a real-world advantage? I don't think there's a ``real-world verification'' that would try to model a real-size cache. It wouldn't scale.}
}%
The main takeaway from these results is that Neat\xspace is considerably less complex than MESI\xspace. MESI\xspace and the full Neat\xspace configurations
with just one line have comparable numbers of states, with Neat\xspace having somewhat fewer states.
For the two-line, one-byte configuration, \barc has \textbf{20\boldmath$\times$} fewer states than MESI\xspace,
due to Neat\xspace's small per-line state space.
\later{
For completeness, Table~\ref{tab:states:all} shows states explored by Murphi for the same configurations of MESI\xspace and \barc---but
modified to explore all execution states including those with data races.
\Barc would fail last-write assertions in this context,
so the configurations disable last-write tracking and assertion checking for both MESI\xspace and \barc.
These configurations also do not track state to detect last-write races.
Exploring execution states with races increases the state space,
while eliding last-write tracking and race detection decreases the state space
(Murphi inherently treats all specification states as part of the state space that it explores),
leading to a mix of increased and decreased states compared with Table~\ref{tab:states:drf-only}.
We provide these additional results to further characterize Neat\xspace's verification complexity.
Although the data show an increase in states for Neat\xspace,
the experiment explores states for which Neat\xspace has undefined behavior,
so the results are \emph{not} indicative of Neat\xspace having higher complexity than MESI\xspace.
\vignesh{I was wondering if all-executions results merit a table and a paragraph. Not sure what the significance of
reachable states is if we disable last-write tracking and assertion checking in both MESI\xspace and \barc. IOW, what property
are we verifying for racy executions? (I can see SWMR for MESI\xspace but not clear what we are checking for \barc).
\mike{Agreed. Removed it.}}
}%
\later{
\mike{The following seems redundant, particularly with the ``all executions'' paragraph removed.}
In summary, in addition to verifying that the \barc protocol is correct with respect to key invariants,
these results show that \barc is less complex than MESI\xspace because Neat\xspace's verification state space has fewer states than MESI\xspace's space.
}%
\section{Evaluation of Performance and Energy}
\label{sec:eval-perf}
This section evaluates the performance and energy usage of \barc,
compared with
a state-of-the-art MESI\xspace protocol implementation~\cite{illinois-mesi,coherence-primer} and
two self-invalidation-based protocols from the literature~\cite{sarc-coherence, vips-directoryless-noc-coherence}.
\subsection{Implementation and Methodology}
\label{sec:eval:perf:impl-meth}
Our experiments measure run-time performance and energy
consumption,
\later{and interconnect bandwidth consumption}%
using the RADISH simulator~\cite{radish} modified to
implement (1) \barc, (2) a directory-based MESI\xspace protocol~\cite{coherence-primer}, and two self-invalidation-based protocols:
(3) \emph{SARC}~\cite{sarc-coherence} and (4) \emph{VIPS}~\cite{vips-directoryless-noc-coherence}.
We will make our simulation and modeling infrastructure publicly available.
All simulator backends consume the same trace of instructions from a PIN-based front end~\cite{pin}.
Each core has a two-level private cache hierarchy, and the LLC is backed by off-chip main memory.
A core's L2 is inclusive of its L1.
The LLC is not inclusive of private caches for SARC\xspace, VIPS\xspace, and \barc (and need not be),
but the LLC is inclusive of the L2 for MESI\xspace to support an inclusive directory cache embedded in the LLC~\cite{coherence-primer}.
\input{table-sim-setup}
Our simulators model and measure execution cycles and on-chip traffic,
using parameters shown in Table~\ref{tab:arch-params}.
The simulators model single-issue, in-order cores in which non-memory instructions have an IPC of one,
and an interconnect network that uses 16-byte flits.
The \barc and VIPS\xspace simulators model the cycle cost of performing self-in\-va\-li\-da\-tion\xspace and commit\xspace (either bulk write-backs by \barc or delayed write-throughs by VIPS\xspace) at synchronization operations
based on the total size of messages sent and the bandwidth available between a core and the LLC.
We report the maximum cycles of any core as execution time.
To measure energy consumption, we use the \emph{McPAT} energy modeling tool~\cite{mcpat}, providing it with the output statistics from our simulator.
We report total energy for the \emph{cache and memory subsystem}, including the on-chip interconnect and LLC-to-memory communication.
We exclude energy for operations within the cores because our simulator does not
collect detailed core-level statistics, such as ALU and branch instruction counts, that McPAT needs to compute a core's energy usage.
These excluded operations, and thus the excluded energy, should be identical across all configurations.
We use McPAT to model energy of all MESI\xspace, \barc, SARC\xspace, and VIPS\xspace components
across the cache and memory subsystem, with one exception:
It is unclear how to model \barc's Bloom-filter-based write signatures\xspace (Section~\ref{subsec:opts}) in McPAT,
\later{When servicing a core's fetch request for the Bloom filter,
the LLC sends bit positions instead of a bit vector if it is more compact.}%
so we estimate the per-access energy of Bloom filters using values reported by prior work~\cite{l-cbf}.
Each core's write signature\xspace is a 1008-bit Bloom filter (which fits in eight 16-byte flits including a control message).
Specifically, we assume L-CBFs, each with 1008 1-bit counters, and derive their per-operation dynamic energy by assuming linear relationships
between the per-operation energy and entry count (as well as count width) for an L-CBF.
We compute total dynamic energy due to write signature\xspace operations by multiplying the per-operation energy by the numbers of write signature\xspace operations counted in the simulator.
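The signature-transfer cost stated above can be checked with quick arithmetic. The filter size and flit size come from the text; the control-message size and the sparse 10-bit-position encoding are our assumptions.

```python
# Arithmetic check: a 1008-bit write signature in 16-byte flits, plus an
# assumed small control message, and the cost of a sparse encoding that
# sends bit positions instead of the full bit vector.

import math

SIG_BITS = 1008
FLIT_BYTES = 16
CTRL_BYTES = 2                         # assumed control-message overhead

payload_bytes = SIG_BITS // 8          # 126 bytes of filter bits
flits = math.ceil((payload_bytes + CTRL_BYTES) / FLIT_BYTES)  # 8 flits

# Sparse format: each set bit's position needs ceil(log2(1008)) bits.
POS_BITS = math.ceil(math.log2(SIG_BITS))

def sparse_bytes(n_set_bits):
    """Bytes needed to send a list of bit positions."""
    return math.ceil(n_set_bits * POS_BITS / 8)
```

With these assumptions, the sparse format wins whenever fewer than roughly a hundred bits are set.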
\input{table-configs}
\subsubsection*{Evaluated configurations}
Table~\ref{tab:configs} shows the configurations we evaluate.
\emph{\bcunopt} is baseline \barc;
\emph{\bcpi} includes the partially invalid (PI) state mechanism; and
\emph{\ntbf} is the full version of \barc and includes the PI state and write signature mechanisms.
\emph{\code{Neat cla}\xspace} is \ntbf plus two page-level classification optimizations used by VIPS:
private vs.\ shared pages, and read-only vs.\ read-write pages~\cite{vips-directoryless-noc-coherence} (Section~\ref{subsec:background-vips}).
We evaluate a configuration called \emph{\code{SARC}\xspace} based on the design described by prior work~\cite{sarc-coherence}.
\later{Like the SARC paper's evaluation~\cite{sarc-coherence},
our SARC implementation treats all read-only lines as tear-off copies.
As a result, the directory does not include any sharer information.}%
We also implemented and evaluated an idealized (perfect) implementation of SARC's writer prediction~\cite{sarc-coherence},
but found that it had negligible performance impact, so we exclude writer prediction from the evaluation for simplicity.
\notes{
\mike{We've removed \textcolor{red}{\code{S/N hybrid}}\xspace from the results, so excluded paragraph about it below.
We could potentially still mention that we tried applying the write signature\xspace optimization to SARC\xspace,
but then it'd be kinda weird that we don't show results.}
To demonstrate the generality of Neat\xspace's write signature\xspace optimization,
we also evaluate a configuration\xspace that is a hybrid of SARC\xspace and \barc, called \emph{\textcolor{red}{\code{S/N hybrid}}\xspace}.
This configuration applies Neat\xspace's write signatures\xspace to \code{SARC}\xspace:
\textcolor{red}{\code{S/N hybrid}}\xspace uses write signatures\xspace to avoid self-invalidating privately cached lines unmodified in the LLC since the core's last self-invalidation.
While the results show that \textcolor{red}{\code{S/N hybrid}}\xspace often performs comparably with \bcbf,
\mike{Still true? In any case, could omit the first part of the sentence.
\rui{Not for servers. How about we commit the \textcolor{red}{\code{S/N hybrid}}\xspace results to emphasize the \ntbf and \ntpsrw results.}
\mike{Huh? Do you mean \textbf{omit} instead of commit?}}
\textcolor{red}{\code{S/N hybrid}}\xspace (and \code{SARC}\xspace) are arguably more complex than \bcbf,
since SARC builds on top of a MESI baseline (albeit a MESI baseline that does not track sharers)~\cite{sarc-coherence}.
}%
In contrast with \barc and SARC\xspace, VIPS\xspace uses write-through caches;
MSHRs buffer write-throughs until they run out, or a timeout or synchronization release is reached~\cite{vips-directoryless-noc-coherence}.
To approximate the MSHRs' behavior, our simulator implements a \emph{write-through buffer}
with an LRU eviction policy and an infinite timeout.
\later{Consequently, the VIPS\xspace simulator writes back to the LLC only at a release or write-through buffer eviction.}%
VIPS\xspace classifies private and shared memory pages and read-only and read-write pages to avoid unnecessary self-in\-va\-li\-da\-tion\xspace~\cite{vips-directoryless-noc-coherence} (Section~\ref{subsec:background-vips}).
To understand the effectiveness of VIPS\xspace's classifications compared to \barc's self-invalidation-reducing mechanisms,
we evaluate two VIPS\xspace configurations\xspace, \emph{\code{VIPS unopt}\xspace} and \emph{\code{VIPS cla}\xspace}.
\code{VIPS unopt}\xspace does \emph{not} classify private and shared pages or read-only and read-write pages,
but writes through and self-invalidates all lines, effectively treating all lines as shared and read-write.
\code{VIPS unopt}\xspace is like \bcunopt in how both protocols commit and self-invalidate cache lines at synchronization operations,
though \code{VIPS unopt}\xspace is subject to the limited capacity of the write-through buffer (10 entries) and
does not wait until the next release to commit \emph{all} dirty lines as the \barc configurations do.
We note that \code{VIPS unopt}\xspace represents so-called ``GPU coherence'' (Section~\ref{subsec:background-vips}).
\emph{\code{VIPS cla}\xspace} \later{, which represents optimized VIPS\xspace~\cite{vips-directoryless-noc-coherence},}%
adds the classifications to \code{VIPS unopt}\xspace,
writing through and self-invalidating only those lines that are marked as shared and read-write.
\code{VIPS cla}\xspace writes back lines marked as private to the LLC only when they are dirty and evicted from the L2.
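The write-through buffer our simulator uses to approximate VIPS\xspace's MSHR behavior can be sketched as follows; the class is our model (LRU eviction, infinite timeout), not VIPS\xspace's actual hardware.

```python
# Sketch of the VIPS write-through buffer model: write-throughs are
# buffered in a small LRU structure and flushed to the LLC either when
# the buffer overflows or in bulk at a synchronization release.

from collections import OrderedDict

class WriteThroughBuffer:
    def __init__(self, llc, capacity=10):
        self.llc = llc            # dict standing in for the LLC
        self.capacity = capacity
        self.buf = OrderedDict()  # addr -> value, in LRU order

    def write(self, addr, value):
        if addr in self.buf:
            self.buf.move_to_end(addr)     # refresh LRU position
        self.buf[addr] = value
        if len(self.buf) > self.capacity:  # overflow: write through LRU entry
            old_addr, old_val = self.buf.popitem(last=False)
            self.llc[old_addr] = old_val

    def release(self):            # commit all delayed write-throughs
        for addr, value in self.buf.items():
            self.llc[addr] = value
        self.buf.clear()
```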
\newcommand\pthreads{pthreads\xspace}
\subsubsection*{Workloads}
\notes{Different from Peacenik, Neat enables optimizations (-O3) for PARSEC programs and default configs for the server programs.}%
Our experiments run the PARSEC 3.0 benchmarks~\cite{parsec-pact-2008}, three real server programs, and the Phoenix benchmarks that contain false sharing~\cite{phoenix}.
For most of the PARSEC benchmarks, we use \code{simsmall} inputs;
we use \code{simmedium} for \bench{swaptions} since \code{simsmall} does not support \textgreater 16 threads.
We use 11 of 13 PARSEC programs; \bench{facesim} fails to finish executing with
the simulators,
and \bench{freqmine} uses OpenMP instead of \pthreads.
\later{
\brandon{Doesn't OpenMP run with pthreads underneath? freqmine maybe should work...}
}%
The experiments execute three real server programs:
\later{Apache HTTP Server 2.4.23 (\bench{httpd})~\cite{httpd}, Memcached 1.5.2 (\bench{memcached})~\cite{memcached}, and MySQL Server 5.7.16 (\bench{mysqld})~\cite{mysqld}.}%
A\-pa\-che HTTP Server 2.4.23 (\bench{httpd}), Memcached 1.5.2 (\bench{memcached}), and MySQL Server 5.7.16 (\bench{mysqld}).
We configure each program to create a single process
\notes{(already the default for \bench{mysqld} and \bench{memcached})}%
with 32 worker threads.
For \bench{httpd}, we launch 32 client processes
that repeatedly and concurrently perform simple HTTP requests to HTML pages randomly selected from a pool of 100 pages.
For \bench{memcached} and \bench{mysqld}, we use the benchmark tools \code{memtier\_bench} and \code{sysbench}, respectively, to generate workloads.
Each of the benchmark tools starts 32 client threads to send workloads that perform different mixes of set and get operations (\bench{memcached}) or addition/deletion/update and select operations (\bench{mysqld}).
We configure \code{memtier\_bench} to generate mixed Memcached requests with
the following ratios of gets to sets:
0:100, 10:90, 50:50, and 90:10.
For \bench{mysqld}, \bench{sysbench} generates transactions of mixed SQL queries using the benchmark tool's built-in \emph{read-only (ro)}, \emph{read-write (rw)}, and \emph{write-only (wo)} workloads.
In our experiments, client processes or threads executing natively send 32,768 HTTP requests (\bench{httpd}), 262,144 Memcached requests (\bench{memcached}),
or 8,192 SQL transactions (\bench{mysqld}),
distributed evenly over all 32 client processes or threads.
To evaluate \barc's benefits on false sharing,
our experiments run three Phoenix benchmarks: \bench{histogram}, \bench{linear\_\allowbreak regression}, and \bench{word\_count}.
We selected the benchmarks in which prior work detected false sharing~\cite{huron},
excluding benchmarks in which false sharing is mostly inside pthread functions
(according to Linux's \code{perf c2c} utility), whose code is ignored by our simulators.
As observed by prior work, \code{gcc} eliminates false sharing at certain optimization levels~\cite{predator}.
Our experiments compile each program with \code{gcc} 4.8.5 at the highest optimization level at which false sharing exists:
\bench{histogram} and \bench{linear\_regression} at \code{O1}, and
\bench{word\_count} at \code{O3}.
Our simulators only compute cycles for the ``region of interest'' (ROI).
For each PARSEC or Phoenix program, its ROI includes its whole parallel phase;
\bench{vips} lacks an ROI annotation so we treat its entire execution as the ROI.
For server programs, the ROI is all execution except the startup and shutdown phases.
\subsubsection*{Handling \pthreads functions and atomic instructions}
All simulators identify each \pthreads function call as a synchronization operation.
\later{At each synchronization acquire, all self-invalidation-based configurations\xspace (i.e.\xspace, all configurations except MESI\xspace)
perform self-in\-va\-li\-da\-tion\xspace.
Before self-invalidating a dirty line, \barc and VIPS\xspace commit the line to the LLC.
At each release, the configurations perform different operations to commit dirty data to the LLC as follows.
The \barc configurations\xspace performs bulk write-backs for dirty lines.
\notes{
The \textcolor{red}{\code{S/N hybrid}}\xspace configuration\xspace commits to the LLC the write information of a core's current release-free region (RFR) to update other cores' write signatures\xspace at each release.
}%
The VIPS\xspace configurations\xspace commit to the LLC all delayed write-throughs in the write-through buffer.}%
Self-invalidation-based configurations\xspace
perform relevant coherence actions at synchronization operations,
while MESI ignores all synchronization operations.
All simulators ignore all instructions executed \emph{inside} \pthreads functions.
The \barc, SARC\xspace, and VIPS\xspace simulators each treat non-\pthreads atomic instructions (i.e.\xspace,
instructions with the \code{LOCK} prefix) as lock operations but \emph{not} region boundaries,
and thus do not perform any coherence actions but instead execute a distributed queue-based locking protocol~\cite{denovond}.
MESI\xspace treats atomic instructions as regular memory accesses.
Our \barc (and SARC\xspace and VIPS\xspace) implementations do not deal with how to implement spin-waiting efficiently under self-invalidation;
we assume a \barc implementation would use an existing mechanism~\cite{callbacks,denovosync,denovond}.
\subsection{Results}
\input{figure-perf}
\input{figure-energy}
\later{\input{figure-edp}}%
\later{
\rui{"Why do your results show different performance for VIPS relative to MESI for the same benchmarks?"\\
Not sure how to respond to this concern. Admittedly our simulation model is not as accurate as GEM5 used by VIPS and SARC.
\mike{Hmm, this definitely seems like something to look into.}
\rui{A possible cause of the result discrepancy is that we use different architectural parameters than VIPS, such as private cache levels and sizes (32 KB L1s + 256 KB L2s vs. VIPS's 32 KB L1s).
The VIPS paper says:"VIPS becomes slightly slower than MESI for large private hierarchies (but less than 3\% for 256KB)"~\cite{vips-directoryless-noc-coherence}.}
\mike{Oh okay, that might be worth looking into.
The VIPS paper also says: ``On average over the 15 applications, VIPS-M is 4.8\% faster
than Directory because of faster writes (no write-misses), faster reads (no directory indirection), and
less traffic in the NoC."}}
}%
Figures~\ref{fig:perf-all} and \ref{fig:energy-all}
\later{ and \ref{fig:edp-all}}%
evaluate \barc's performance and energy consumption,
\later{and power efficiency,}%
respectively, compared with MESI\xspace, SARC\xspace, and VIPS\xspace, for all programs.
Each bar is normalized to MESI\xspace in all figures.
\later{Figure~\ref{fig:bw-usage} compares the average on-chip bandwidth consumption among the three protocols, without normalization.
Table~\ref{tab:bench-table} shows how often cores perform synchronization operations, and }%
Table~\ref{tab:stats} shows how many lines are self-invalidated and committed per synchronization operation.
Note that the table's commit counts include (i) all write-throughs made by the VIPS\xspace configurations at write buffer evictions and synchronization operations and (ii) all write-backs made by the \barc configurations at synchronization operations, but do not include any regular write-backs due to L2 evictions.
\subsubsection*{Performance}
Figure~\ref{fig:perf-all} compares execution cycles (maximum cycles of any core)
for all programs.
The data show that, for most of the PARSEC programs, the configurations\xspace have similar performance.
For \bench{canneal}, \bench{fluidanimate}, and \bench{streamcluster},
the SARC\xspace and VIPS\xspace configurations are 3--16\% slower than MESI\xspace.\footnote{%
The VIPS paper also evaluated \bench{canneal}, \bench{raytrace}, and \bench{x264}, but reported different results,
presumably due to different architectural parameters used for simulation~\cite{vips-directoryless-noc-coherence}.
The VIPS paper acknowledges that VIPS is slightly (less than 3\%) slower than MESI for 256KB private caches,
which agrees with our results.}
\later{\mike{Comment about the footnote: ``presumably due to different architectural parameters used for simulation''
sounds like a cop-out, but if we don't know any better than that, it's okay (at least we mentioned it).}}%
In contrast, the full \barc configuration achieves virtually the same performance as MESI\xspace.
On average, \ntbf and \ntpsrw are nearly at parity with \code{MESI}\xspace, with an average run-time difference of less than 1\%.
For the server programs, some self-invalidation-based configurations are as much as 4$\times$ slower than MESI\xspace.
\Barc performs significantly better than SARC\xspace and VIPS\xspace,
with \ntbf
executing 1.2$\times$ and 1.6$\times$ faster than \code{SARC}\xspace and \code{VIPS cla}\xspace, respectively.
A run-time breakdown (results not shown) shows that the significant slowdowns for self-invalidation-based configurations
are caused mainly by cache misses due to unnecessary self-invalidation.
Table~\ref{tab:stats} shows that \Barc's two mechanisms are collectively effective in reducing self-in\-va\-li\-da\-tions\xspace.
VIPS\xspace's classifications are effective, too,
further helping \ntpsrw achieve performance close to MESI\xspace's
(11\% slower on average).
For two of the Phoenix benchmarks,
\barc and VIPS\xspace improve performance significantly over MESI\xspace and SARC\xspace
by eliminating writer-initiated invalidations caused by false sharing.
\Barc and VIPS\xspace do not benefit \bench{word\_count} as much since the program does not contain as much false sharing as the other two.
\Barc's mechanisms and VIPS\xspace's classifications for avoiding unnecessary self-invalidations have negligible impact on performance here, since these programs all execute few synchronization operations (by using fork--join parallelism).
\subsubsection*{Energy}
Figure~\ref{fig:energy-all} compares the energy consumption of all configurations\xspace for all the programs, divided into \emph{static} and \emph{dynamic} components.
For configurations\xspace with the write signature\xspace mechanism, dynamic energy is further divided into energy from write signature\xspace operations
and from other sources
(see Section~\ref{sec:eval:perf:impl-meth}).
\Barc imposes a lower static \emph{power} cost than MESI\xspace and SARC\xspace because it eliminates the need to maintain a large, shared coherence directory.
\Barc's mechanisms that avoid unnecessary self-invalidations further help reduce energy\later{, especially for those programs with small regions,
since they help reduce \emph{execution time}}.
\notes{
($\textnormal{energy} = \textnormal{(static power + dynamic power)} \times \textnormal{time}$).
}%
VIPS\xspace imposes an even lower static power than \barc because it does not require per-byte metadata in the private caches as \barc does for the write bits.
On average, for the PARSEC benchmarks, \ntbf (and \ntpsrw) consume approximately 6\% and 7\% less energy than \code{MESI}\xspace and \code{SARC}\xspace, respectively, but 6\% more than \vipspsrw.
For the server programs, \ntbf consumes 44\% more energy than \code{MESI}\xspace,
but 18\% and 30\% less than \code{SARC}\xspace and \vipspsrw, respectively.
\ntpsrw consumes 9\% more energy than \code{MESI}\xspace, much less than the SARC\xspace and VIPS\xspace configurations.
For the Phoenix benchmarks, \ntbf (and \ntpsrw) consume approximately 32\%, 28\%, and 5\% less than MESI, \code{SARC}\xspace, and \vipspsrw, respectively.
\later{\subsubsection*{Energy Efficiency}%
To evaluate Neat\xspace's energy efficiency, we computed the \emph{energy--delay product} (EDP)~\cite{sarc-coherence} for each configuration\xspace.
Figure~\ref{fig:edp-all} shows the results.
For each PARSEC benchmark, VIPS\xspace and \barc configurations have similar EDP, and lower EDP than MESI\xspace
(or the same EDP as MESI\xspace for \bench{fluidanimate}).
\ntpsrw has 7\% lower EDP than MESI\xspace on average.
For server programs, \ntpsrw has 21\% higher EDP than MESI\xspace but 3$\times$ and 5$\times$ lower EDP than SARC\xspace and \vipspsrw, respectively.
For Phoenix benchmarks, \ntbf (and \ntpsrw) has approximately 47\%, 41\%, and 4\% lower EDP than MESI, \code{SARC}\xspace, and \vipspsrw, respectively.
}
\later{
\subsubsection*{Bandwidth usage}
Figure~\ref{fig:bw-usage} compares the average on-chip bandwidth consumption among MESI\xspace, SARC\xspace, and \barc on 32 cores.
Average on-chip bandwidth consumption is computed as traffic, in 16-byte flit granularity, per unit time.
We define on-chip traffic for \barc as all communication between cores and the LLC.
Traffic for MESI\xspace and SARC\xspace additionally includes core-to-core communication.
As shown in Table~\ref{tab:arch-params}, our simulators assume 100 GB/s as the on-chip network's bandwidth limit.
For \bench{canneal}, \barc without the write signature\xspace optimization incurs high average on-chip bandwidth usage (90 GB/s), nearly saturating the available on-chip bandwidth.
The fully optimized \ntbf configuration and \textcolor{red}{\code{S/N hybrid}}\xspace incur comparable on-chip traffic to MESI\xspace for most programs.
}%
\input{table-inv-commit-stats}
\later{\input{table-region-size}}%
\medskip
\noindent
In summary, the results in this section show that for less-complex programs such as the PARSEC benchmarks,
\barc has performance and energy competitive
\later{and on-chip bandwidth consumption}%
with MESI\xspace and outperforms SARC\xspace and VIPS\xspace.
\Barc has significant benefits over MESI\xspace and SARC\xspace specifically for programs with false sharing.
For complex server programs, \barc is slower and consumes more energy than MESI\xspace, but still has significant benefits over SARC\xspace and VIPS\xspace.
\notes{
\Barc's write signature\xspace optimization\xspace is applicable to SARC\xspace and improves SARC\xspace to be comparable to \barc.
Similarly,
}%
VIPS\xspace's classifications benefit \barc, improving it
to be competitive with MESI\xspace.
The evaluations in this section and Section~\ref{sec:protocol-verification} show that \barc is not only a novel system design,
but also reduces complexity compared with MESI\xspace and improves performance and energy consumption compared with SARC\xspace and VIPS\xspace (and with MESI\xspace for programs with false sharing).
\section{Various Ideas and Thoughts}
Based on what we've learned from developing ARC, can we invent cheap support for DRF0?
DeNovoND~\cite{denovond} and MESI are the main competitors here; also the SARC-like work~\cite{sarc-coherence,vips-directoryless-noc-coherence,denovosync}
and release consistency~\cite{release-consistency,treadmarks}.
Would our design be superior to DeNovoND (which is already cheaper than MESI)?
An ARC-like design targeting DRF0 wouldn't need region boundary operations
except for pre-commit and self-invalidation. It could use some optimizations:\brandon{This is true, but the most basic idea of pre-commit and self-invalidation is bound to be quite inefficient. Our key problem is to describe how the ARC optimizations are the {\em right} ones to make an efficient self-invalidation system}
\begin{itemize}
\item Self-invalidation optimization for dirty lines, i.e.\xspace, L1/L2 can avoid invalidation of a dirty line if the version indicates no other core wrote to the line.
\item \emph{Cond-invalid} state (requires versions)
\item Per-core write signature: avoid invalidating any line that isn't indicated by the signature
\item Defer write-backs of dirty lines (the LLC knows that the core is the owner).
\end{itemize}
Some of those require maintaining per-line versions, but that seems like a good idea (since it enables those optimizations).
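As a rough illustration of the per-core write signature idea above, the following Python sketch models a Bloom-filter-style signature; the structure, sizes, and hash choice here are hypothetical, not part of any design in this document. Because a Bloom filter has no false negatives, a line the signature does not report as written by another core can safely be kept valid at an acquire; false positives only cause extra (safe) invalidations.

```python
import hashlib

class WriteSignature:
    """Illustrative Bloom-filter-style write signature (hypothetical sketch;
    a hardware signature would use simple hash circuits over line addresses)."""

    def __init__(self, bits=256, hashes=3):
        self.bits = bits
        self.hashes = hashes
        self.bitvec = 0  # bit vector packed into a Python int

    def _positions(self, line_addr):
        # Derive `hashes` bit positions deterministically from the address.
        for i in range(self.hashes):
            h = hashlib.blake2b(f"{line_addr}:{i}".encode(), digest_size=4)
            yield int.from_bytes(h.digest(), "big") % self.bits

    def insert(self, line_addr):
        for pos in self._positions(line_addr):
            self.bitvec |= 1 << pos

    def may_contain(self, line_addr):
        # No false negatives; false positives are possible but safe.
        return all(self.bitvec >> pos & 1 for pos in self._positions(line_addr))

    def clear(self):
        self.bitvec = 0

# At an acquire, a core would self-invalidate only lines the signature
# reports as (possibly) written elsewhere, and keep the rest valid.
sig = WriteSignature()
sig.insert(0x1000)          # some other core wrote line 0x1000
assert sig.may_contain(0x1000)
```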
\brandon{We may want to propose a few different designs: maybe one that is more area efficient and more power efficient, but a little slower, and another that is less power and area efficient because it uses more metadata, but avoids more self-inval, so is faster? We need to fight the complexity-haters this time around.}
When a line is evicted from the L1/L2, its written-to bytes (but not its
other bytes) need to be written back to the LLC.
\brandon{What is the area/power cost of adding versions? What is the performance benefit/reduction in self-inval of adding version? QUantitative reasoning would help here.}
What happens to a version when it's evicted from the LLC? We're back to the same problem we had with ARC.
One possibility is to maintain the ``max version (of any line) evicted from the LLC'' $\mathit{globalEvictedMax}$,
i.e.\xspace, $\mathit{globalEvictedMax} \gets \max(\mathit{version},\mathit{globalEvictedMax})$ for each LLC-evicted line,
and when a line is brought back into the LLC, assign $\mathit{version} \gets 1 + \mathit{globalEvictedMax}$.
Versions will overflow more quickly in that case.
We can either have long ($>$32-bit) versions or describe efficient version resetting
(I think a global operation that sets all LLC versions to 1 and all L1/L2 versions to 0 might do it).
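The eviction rule above can be sketched as follows; the class and its reset policy are only an illustration of the $\mathit{globalEvictedMax}$ bookkeeping described in the text, with all data movement and private-cache state omitted.

```python
class LLCVersions:
    """Sketch of the LLC-eviction versioning rule: evictions raise
    globalEvictedMax, refills get version 1 + globalEvictedMax, and
    overflow triggers the global reset suggested in the text."""

    def __init__(self, max_version=2**32 - 1):
        self.versions = {}           # line address -> version held in the LLC
        self.global_evicted_max = 0
        self.max_version = max_version

    def evict(self, line):
        # globalEvictedMax <- max(version, globalEvictedMax)
        v = self.versions.pop(line)
        self.global_evicted_max = max(v, self.global_evicted_max)

    def refill(self, line):
        # A line brought back in gets a version newer than anything evicted.
        v = self.global_evicted_max + 1
        if v > self.max_version:
            v = self.reset()
        self.versions[line] = v
        return v

    def reset(self):
        # Global reset: all LLC versions to 1 (all L1/L2 versions would be
        # set to 0, i.e. always stale; private caches are not modeled here).
        for line in self.versions:
            self.versions[line] = 1
        self.global_evicted_max = 0
        return 1

llc = LLCVersions()
llc.versions[0xA] = 7
llc.evict(0xA)                  # globalEvictedMax becomes 7
assert llc.refill(0xA) == 8     # 1 + globalEvictedMax
```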
\section{Introduction}
\label{sec:intro}
\later{\swarnendu{It is not super important, but a new edition of the primer book is available.}}%
Today's general-purpose processors have multiple cores with private and shared caches.
To provide the abstraction of shared memory in this context,
processors implement \emph{cache coherence}, which is defined by two invariants---the \emph{single writer, multiple readers} (SWMR) invariant and the \emph{data-value} invariant~\cite{coherence-primer}.
The SWMR invariant requires that at any given time, a memory location can be written by only a single core or read by multiple cores.
The data-value invariant requires that a read see the \emph{up-to-date} value of the memory location it reads.
Most multicore processors use a variant of the MESI cache coherence protocol~\cite{coherence-primer,illinois-mesi}.
MESI enforces the two invariants using \emph{writer-initiated invalidation} and \emph{ownership tracking}:
whenever a core writes to a cache line,
the protocol invalidates shared copies of the line by tracking where in the system valid copies of the line reside;
the protocol also records the ID of the writer (i.e.\xspace, owner ID) so that subsequent reads can be directed to the writer to get up-to-date values of the line.
As a result, MESI and its variants are efficient because they perform coherence actions only when accesses to a line by multiple cores conflict.
While often efficient, MESI has some serious drawbacks.
Protocol races---which occur even in executions of data-race-free programs---necessitate
transient states that complicate implementation and verification.
Optimized MESI implementations rely on a shared directory that maintains the coherence state and the owner ID of each line, typically containing an entry for every line in the shared cache.
MESI messages between cores' private caches and the shared cache require acknowledgments to ensure the SWMR and the data-value invariants,
e.g.\xspace, for writer-initiated invalidations and dirty write-backs,
incurring latency.
MESI maintains coherence states at the granularity of cache lines and is consequently susceptible to false sharing~\cite{huron}.
Researchers have introduced new cache coherence designs that aim to be simpler than
MESI~\cite{sarc-coherence,vips-directoryless-noc-coherence,denovo,denovond,denovosync,
denovo-gpu}.
A key aspect of these designs is that they exploit the \emph{data-race-free} (DRF) assumption
of language-level memory consistency models~\cite{memory-models-cacm-2010}.
These simpler coherence protocols exploit DRF to
leverage the semantics
of synchronization to enforce coherence.
As a result, they do not require cores to
exchange eager invalidations that directly implement the SWMR invariant.
Using these techniques, a core instead {\em self-invalidates} its valid lines at
a synchronization acquire operation.
Further, by exploiting DRF, the coherence protocols do not require cores to write back dirty data or register the owner ID in the shared cache immediately upon each individual write.
A core can defer flushing dirty data or ownership registration to the shared cache until a synchronization release operation.
Assuming a DRF program, such self-invalidations and deferred flushes are sufficient to ensure coherence.
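A minimal sketch of this acquire/release discipline, assuming a data-race-free program and ignoring all timing, granularity, and ownership details:

```python
class Core:
    """Toy model of self-invalidation coherence for a DRF program:
    no writer-initiated invalidations; dirty data is flushed at release,
    and cached lines are self-invalidated at acquire. Illustrative only."""

    def __init__(self, shared):
        self.shared = shared        # the shared cache / LLC (a dict)
        self.cache = {}             # addr -> (value, dirty)

    def read(self, addr):
        if addr not in self.cache:
            self.cache[addr] = (self.shared[addr], False)
        return self.cache[addr][0]

    def write(self, addr, value):
        # No eager invalidation of other cores' copies (no SWMR enforcement).
        self.cache[addr] = (value, True)

    def release(self):
        # Deferred flush: propagate all dirty lines to the shared cache.
        for addr, (value, dirty) in self.cache.items():
            if dirty:
                self.shared[addr] = value
                self.cache[addr] = (value, False)

    def acquire(self):
        # Self-invalidate: drop cached lines so later reads fetch
        # up-to-date values from the shared cache.
        self.cache.clear()

shared = {0x40: 0}
producer, consumer = Core(shared), Core(shared)
consumer.read(0x40)         # consumer caches the old value
producer.write(0x40, 42)
producer.release()          # producer flushes at the release...
consumer.acquire()          # ...and the consumer self-invalidates at the acquire
assert consumer.read(0x40) == 42
```

In a DRF execution, every conflicting write is separated from a later read by such a release/acquire pair, which is why these two actions alone suffice for coherence in this model.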
\begin{table*}
\begin{tabular}{l|l|l|l}
\multirow{2}{*}{Coherence protocol} & Performance/energy & Performance/energy & \multirow{2}{*}{Complexity} \\
& cost (w/o false sharing) & cost under false sharing & \\\hline
MESI~\cite{illinois-mesi} and its variants& Low & High & High \\
Prior self-in\-va\-li\-da\-tion\xspace protocols~\cite{sarc-coherence,vips-directoryless-noc-coherence,denovo,denovond,denovosync,denovo-gpu}
& Medium & Low & Low \\
\Barc (this paper) & Low--medium & Low & Low
\end{tabular}
\vspace*{0.5em}
\caption{Qualitative comparison of coherence protocols.
Note that self-in\-va\-li\-da\-tion\xspace protocols that retain some MESI features~\cite{sarc-coherence,denovosync}
incur a combination of the advantages and disadvantages of both.}
\label{tab:qualitative-comparison}
\end{table*}
Self-in\-va\-li\-da\-tion\xspace has \emph{potential} advantages over MESI:
lower protocol complexity (by avoiding MESI's numerous transient states),
lower power (mainly by eliminating MESI's coherence directory states), and
lower per-operation latency (by eliminating protocol acknowledgments, as well as cache invalidations due to false sharing).
At the same time, self-invalidation can degrade performance by
invalidating up-to-date lines, causing unnecessary cache misses.
Some self-invalidation protocols try to improve performance by
relying on mechanisms to infer data access patterns or on programmers to write in new languages or use annotations about which cache lines need to be invalidated~\cite{vips-directoryless-noc-coherence,denovo,denovond,denovo-gpu}.
To implement the data-value invariant,
some self-invalidation-based approaches
retain some of MESI's directory and protocol complexity~\cite{sarc-coherence};
or defer ownership registrations and dirty write-throughs by \emph{buffering} them,
which incurs performance and energy costs when the buffers overflow~\cite{vips-directoryless-noc-coherence,denovo,denovond,denovo-gpu}.
Section~\ref{sec:background} details closely related prior approaches.
Table~\ref{tab:qualitative-comparison}'s first two rows compare MESI and self-in\-va\-li\-da\-tion\xspace protocols.
The table highlights the performance--com\-plex\-i\-ty tradeoff between MESI and self-in\-va\-li\-da\-tion\xspace,
and self-in\-va\-li\-da\-tion\xspace's performance advantage under false sharing.
This paper's goal is to get the advantages of performing self-invalidations and deferred flushes without most of the disadvantages and issues---achieving
(1) significantly lower complexity than MESI,
(2) performance and energy usage on par with MESI and significantly better than prior designs that use self-invalidations and deferred flushes,
and (3) out-of-the-box support for legacy programs.
To achieve this goal, we introduce
a novel, low-complexity approach to multicore cache
coherence called \emph{Neat\xspace}.
\Barc consists of two main design elements that contribute to its efficacy, one for each type of coherence action at synchronization operations.
First, \barc
uses novel lightweight mechanisms to reduce self-invalidation costs significantly by improving reuse of \emph{both} dirty and clean data across synchronization acquires,
which differs from prior work that relies on programmer annotations or inferred sharing patterns~\cite{vips-directoryless-noc-coherence,denovo}.
Second, in Neat\xspace, writes to privately cached lines are not propagated until synchronization releases,
which differs from prior work that either uses MESI-style mechanisms to maintain ownership and propagate data at individual memory accesses~\cite{sarc-coherence}
or uses buffers to defer flushes (i.e.\xspace, write-throughs or ownership registration requests) until buffers overflow or synchronization releases are reached~\cite{vips-directoryless-noc-coherence,denovo,denovond}.
\later{
\mike{Should the paper emphasize the second design element more? Maybe it's not such a big deal.
\rui{The second design element is two-sided: improving performance (due to better reuse of written data) by coalescing writebacks but causing bursty traffic at releases and having the overhead of per-byte write bits (similarly, DeNovo has fine-grained (per-word) registration bits).}}
}%
We perform two evaluations of \barc compared with state-of-the-art approaches.
First, we evaluate \barc's complexity
by implementing and verifying \barc and MESI\xspace in the Murphi model checking tool~\cite{murphi}.
\later{
\mike{Is comparing with (some version of) DeNovo a possibility? Didn't we get the initial model from the DeNovo authors?
\rui{Right we have the DeNovo model and should be able to use it for our evaluation. There are a couple of non-trivial revisions to the model though, to make it comparable with our \barc and MESI\xspace models. Such revisions mainly include adding support for multiple words per line and using the same MSHR configuration and data race detection algorithm as the \barc model. \\
But I was wondering if it'd be worth making the revisions and including DeNovo for only the complexity evaluation but not the performance evaluation. Wouldn't it make readers wonder why we compare with DeNovo only on complexity?}
\mike{Good point.}}
}%
Our evaluation shows that \barc is about
an order of magnitude less complex to verify than the MESI\xspace protocol.
Second, we implement a trace-based simulation of Neat\xspace,
compared with MESI and two self-invalidation-based coherence protocols called \emph{SARC} and \emph{VIPS\xspace}~\cite{sarc-coherence, vips-directoryless-noc-coherence},
and evaluate on the PARSEC benchmarks~\cite{parsec-pact-2008}, three real server programs,
and the Phoenix benchmarks~\cite{phoenix}.
Our evaluation shows that Neat\xspace has competitive performance and energy usage
with MESI, and outperforms MESI significantly under false sharing.
\barc also typically outperforms SARC and VIPS\xspace.
\barc reduces static power compared with MESI and SARC by eliminating the
coherence directory.
\later{
One of Neat\xspace's optimizations is also applicable to SARC, improving SARC to perform competitively with Neat\xspace;
however, SARC shares some of MESI's complexity including a directory and transient states.
}%
\Barc outperforms VIPS\xspace because \barc's mechanisms are more effective in avoiding unnecessary self-invalidation than VIPS\xspace's;
VIPS\xspace's optimizations are applicable to \barc, improving \barc's performance further.
\later{
\rui{Can we really claim that SARC\xspace ``extends'' MESI\xspace? I think it's just based on and simplifies MESI\xspace to some sense.
\mike{Revised. Does SARC have transient states?}
\rui{Yeah I think so because of the Fwd-GetM message for SARC's writer coherence.}}
}%
These comparative results---summarized qualitatively in Table~\ref{tab:qualitative-comparison}---suggest
that Neat\xspace is a compelling alternative to MESI and state-of-the-art self-invalidation approaches
in terms of complexity, performance, and energy.
\later{\vignesh{Adding quantitative performance results -- improvements over VIPS and proximity to MESI -- here (and at the end of abstract) may help in highlighting the contributions. It
also presents a good opportunity to focus more on server-workloads (which benefit more)}}%
\section*{Acknowledgments}
We thank Rakesh Komuravelli et al.\xspace\ for sharing the Murphi specification of MESI
from their work~\cite{denovo-verification}.
This material is based upon work supported by the National Science Foundation
under Grants CAREER-1253703, CCF-1421612, XPS-1629126, and XPS-1629196.
\bibliographystyle{abbrv}
\section{Related Work}
\label{Sec:related}
Section~\ref{sec:background} compared \barc
qualitatively with closely related work that uses self-invalidations~\cite{sarc-coherence,vips-directoryless-noc-coherence, denovo, denovond, denovo-gpu,hlrc,quickrelease}.
This section discusses other related work.
\subsubsection*{Other DeNovo-based work}
\emph{DeNovoSync} uses DeNovo's protocol that employs self-in\-va\-li\-da\-tion\xspace and registration,
but---like SARC, VIPS, and Neat\xspace---DeNovoSync applies to general shared-memory programs~\cite{denovosync}.
\later{Since in a setting without writer-initiated invalidations,
that is widely used in traditional coherence protocols (e.g.\xspace, MESI)}%
DeNovoSync's contribution focuses on supporting arbitrary synchronization operations in the context of self-in\-va\-li\-da\-tion\xspace.
\later{by registering synchronization reads for ownership}%
The evaluation limits self-invalidation costs by assuming programmer annotations.
In contrast,
\later{the baseline Neat\xspace conservatively self-invalidates all touched data at acquire operations and the optimized}%
Neat\xspace uses automatic mechanisms to reduce self-invalidation
and assumes no additional knowledge.
\emph{Spandex} provides a flexible interface that supports various coherence protocols, including MESI\xspace, GPU coherence, and DeNovo~\cite{spandex}.
It necessarily suffers from MESI's complexity overhead, including transient states, Inv and Ack messages, and support for core-to-core communication.
\later{Similar to DeNovo, the Spandex LLC registers written data to track ownership and thus requires inclusivity.}%
In contrast, \barc does not need to support MESI or other protocols.
\later{It does not have the complexity overhead of Spandex, including transient states in the Spandex LLC and support for core-to-core communication.
In fact, by designing \barc, we argue that CPUs can use simple GPU-style protocols and thus it is not necessary to have a complex coherence interface like Spandex that aims to support both complex MESI-style CPU protocols and simple GPU protocols.
Further, \barc does not require an inclusive LLC since it does not track ownership in the LLC.}%
\later{
\mike{What's \cite{dsi-isca-95} about?}
}%
\subsubsection*{Exploiting self-invalidation-based coherence}
\emph{ARC} uses self-invalidation and defers dirty write-backs until release operations~\cite{arc}.
While \barc assumes data race freedom (DRF),
ARC \emph{checks} DRF,
leading to a more complex system.
Although ARC optimizes self-invalidation and dirty write-back costs,
its optimizations differ from \barc's by leveraging mechanisms for performing conflict detection.
Jimborean et al.\xspace\ use compile-time analysis to detect \emph{extended DRF regions}
and thus reduce the frequency of self-invalidation~\cite{extended-drf-regions}.
Extended DRF regions would apply to Neat\xspace.
\later{
Kaxiras and Ros show how to apply self-invalidation mechanisms to simplify and optimize
virtually addressed caches and bus coherence~\cite{vips-virtual-cache-coherence,vips-snoopless-bus-coherence}.}%
\subsubsection*{Using write signatures to represent write sets}
Prior work has used write signatures to represent a core's own write set~\cite{denovond,tc-release-pp}. In contrast, in \barc a core's write signature represents \emph{writes by all other cores} (and the signatures are thus maintained at the LLC).
An exception is \emph{Racer}~\cite{racer-tso}, which maintains write signatures in a similar way to \barc, but for a distinct purpose: detecting read-after-write races in order to treat them as synchronization points.
\section{Introduction}
Methods for making groups from data sets are usually based on the idea of disjoint sets, as in classical crisp clustering. The best-known methods are hierarchical clustering and $k$-means \cite{Hartigan}, whose resulting clusters are sets with no intersection. However, this restriction may not be natural for some applications, where it may be more appropriate for some objects to belong to two or more clusters rather than only one.
Several methods for constructing overlapping clusters have been proposed in the literature \cite{Pyramids,Dunn, Hartigan}.
Since Zadeh introduced the concept of fuzzy sets \cite{Zadeh}, the principle of belonging to several clusters has been used in the sense of a degree of membership to such clusters.
In this direction, Bezdek \cite{Bezdek} introduced a fuzzy clustering method that became very popular since it solved the problem of representation of clusters with centroids and the assignment of objects to clusters, by the minimization of a well-stated numerical criterion.
Several methods for fuzzy clustering have been proposed in the literature; a survey of these methods can be found in \cite{Yang}.
In this paper we propose a new fuzzy clustering method based on the numerical principle of hyperbolic smoothing \cite{Xav}.
The Fuzzy $C$-Means method is presented in Section \ref{sec:FCM}
and our proposed Hyperbolic Smoothing Fuzzy Clustering method in Section \ref{sec:HSFC}.
Comparative results between these two methods are presented in Section \ref{sec:results}.
Finally, Section \ref{sec:conclusion} is devoted to the concluding remarks.
\section{Fuzzy Clustering}
\label{sec:FCM}
The best-known method for fuzzy clustering is Bezdek's original $C$-means method \cite{Bezdek}, and
it is based on the same principles as $k$-means or dynamical clusters \cite{Bock}, that is, iterations of two main steps: i) class representation by the optimization of a numerical criterion, and ii) assignment of each object to the closest class representative in order to construct clusters; these iterations are repeated until convergence to a local minimum of the overall quality criterion.
Let us introduce the notation that will be used and the numerical criterion for optimization.
Let $\textbf{X}$ be a $n \times p$ data matrix containing $p$ numerical observations over $n$ objects.
We look for a $K\times p$ matrix $\textbf{G}$ that represents the centroids of $K$ clusters of the $n$ objects and an $n\times K$ membership matrix $\textbf{U}$ with elements $\mu_{ik}\in[0,1]$, such that the following criterion is minimized:
\begin{eqnarray}
\begin{array}{ll}
\multicolumn{2}{c}{W(\textbf{X}, \textbf{U},\textbf{G})=\displaystyle \sum_{i=1}^{n}\,\displaystyle \sum_{k=1}^{K}\, (\mu_{ik})^{m}\; \Vert\textbf{x}_{i}-\textbf{g}_{k}\Vert^{2}}\\
\mbox{ subject to } & \sum_{k=1}^{K}\,\mu_{ik}=1, \mbox{ for all } i\in \{1,2,\ldots,n\}\\
& 0< \sum_{i=1}^{n}\,\mu_{ik}<n, \mbox{ for all }k\in \{1,2,\ldots,K\},
\end{array}
\label{funcional}
\end{eqnarray}
where $\textbf{x}_{i}$ is the $i$-th row of $\textbf{X}$ and $\textbf{g}_{k}$ is the $k$-th row of $\textbf{G}$, representing in $\mathbb{R}^{p}$ the centroid of the $k$-th cluster.
The parameter $m>1$ in (\ref{funcional}) controls the fuzziness of the clusters. According to the literature \cite{Yang}, it is usual to take $m=2$: as $m$ approaches $1$, the memberships $\mu_{ik}$ approach $0$ or $1$, tending to the usual crisp partitions such as in $k$-means, while larger values of $m$ yield increasingly fuzzy memberships. We also assume that the number of clusters, $K$, is fixed.
Minimization of (\ref{funcional}) is a nonlinear optimization problem with constraints, which can be solved using Lagrange multipliers as presented in \cite{Bezdek}. The solution for each row of the centroid matrix, given a membership matrix $\textbf{U}$, is:
\begin{equation}
\textbf{g}_{k}= { \sum_{i=1}^{n}\,(\mu_{ik})^{m}\textbf{x}_{i}} \left/ {\displaystyle \sum_{i=1}^{n}\,(\mu_{ik})^{m}}\right..
\label{centroids}
\end{equation}
The solution for the
membership matrix, given a centroid matrix $\textbf{G}$, is \cite{Bezdek}:
\begin{equation}
\mu_{ik}=\left[ \sum_{j=1}^{K}\,\left( \frac{||\textbf{x}_{i}-\textbf{g}_{k}||^{2}}{||\textbf{x}_{i}-\textbf{g}_{j}||^{2}}\right)^{1/(m-1)}\right]^{-1}.
\label{membership}
\end{equation}
The following pseudo-code shows the main steps of Bezdek's Fuzzy $C$-Means method \cite{Bezdek}.
\paragraph{Bezdek's Fuzzy $C$-Means (FCM) Algorithm}
\begin{enumerate}
\item Initialize the fuzzy membership matrix $\textbf{U}=[\mu_{ik}]_{n\times K}$.
\item Compute the centroids of the fuzzy clusters according to (\ref{centroids}).
\item Update the membership matrix $\textbf{U}$ according to (\ref{membership}).
\item If improvement in the criterion is less than a threshold, then stop; otherwise go to Step 2.
\end{enumerate}
The Fuzzy $C$-Means method starts from an initial partition that is improved in each iteration, with respect to (\ref{funcional}), by applying Steps 2 and 3 of the algorithm. This procedure may lead only to local optima of (\ref{funcional}), since the iterative improvement in (\ref{centroids}) and (\ref{membership}) is a local search strategy.
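For concreteness, the following plain-Python sketch implements Steps 1--4 with the update rules (\ref{centroids}) and (\ref{membership}); it is illustrative only (a practical implementation would use vectorized numerical libraries), and the tiny guard against zero distances is an implementation convenience, not part of the method.

```python
import random

def fcm(X, K, m=2.0, tol=1e-6, max_iter=100, seed=0):
    """Plain-Python sketch of Bezdek's fuzzy C-means (Steps 1-4 above).
    X: list of points (lists of floats). Returns (U, G)."""
    rng = random.Random(seed)
    n, p = len(X), len(X[0])
    # Step 1: random membership matrix with rows summing to 1.
    U = []
    for _ in range(n):
        row = [rng.random() for _ in range(K)]
        s = sum(row)
        U.append([u / s for u in row])
    prev = float("inf")
    for _ in range(max_iter):
        # Step 2: centroids g_k = sum_i u_ik^m x_i / sum_i u_ik^m
        G = []
        for k in range(K):
            w = [U[i][k] ** m for i in range(n)]
            tot = sum(w)
            G.append([sum(w[i] * X[i][j] for i in range(n)) / tot
                      for j in range(p)])
        # Squared distances, guarded against exact zeros.
        d2 = [[sum((X[i][j] - G[k][j]) ** 2 for j in range(p)) or 1e-12
               for k in range(K)] for i in range(n)]
        # Step 3: u_ik = 1 / sum_j (d_ik^2 / d_ij^2)^(1/(m-1))
        e = 1.0 / (m - 1.0)
        U = [[1.0 / sum((d2[i][k] / d2[i][j]) ** e for j in range(K))
              for k in range(K)] for i in range(n)]
        # Step 4: stop when the criterion W barely improves.
        W = sum(U[i][k] ** m * d2[i][k] for i in range(n) for k in range(K))
        if abs(prev - W) < tol:
            break
        prev = W
    return U, G

X = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]]
U, G = fcm(X, K=2)
assert all(abs(sum(row) - 1.0) < 1e-9 for row in U)  # rows of U sum to 1
```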
\section{Algorithm for Hyperbolic Smoothing Fuzzy Clustering}
\label{sec:HSFC}
For the clustering problem of the $n$ rows of the data matrix \textbf{X} into $K$ clusters, we can seek the minimum distance between every $\textbf{x}_{i}$ and its class center $\textbf{g}_{k}$:
\begin{equation*}
z_{i}^2=\displaystyle \min_{\textbf{g}_{k}\in \textbf{G}}\Vert\textbf{x}_{i}-\textbf{g}_{k}\Vert^2_{2} \label{zi}
\end{equation*}
where $\Vert\cdot\Vert_{2}$ is the Euclidean norm.
The minimization can be stated as a sum-of-squares:
\begin{equation*}
\displaystyle \min \sum_{i=1}^{n}\, \min_{\textbf{g}_{k}\in \textbf{G}} \Vert\textbf{x}_{i}-\textbf{g}_{k}\Vert_{2}^{2}= \min \sum_{i=1}^{n}\,z_{i}^{2}
\end{equation*}
leading to the following constrained problem:
$$
\min\displaystyle \sum_{i=1}^{n}\,z_{i}^{2} \mbox{ subject to } z_{i}=\displaystyle \min_{\textbf{g}_{k}\in \textbf{G}}\Vert\textbf{x}_{i}-\textbf{g}_{k}\Vert_{2}, \mbox{ with } i=1,\ldots,n.
$$
\pagebreak
\noindent
This is equivalent to the following minimization problem:
$$
\min\displaystyle \sum_{i=1}^{n}\,z_{i}^{2} \mbox{ subject to } z_{i}-\Vert\textbf{x}_{i}-\textbf{g}_{k}\Vert_{2}\leq 0 ,\mbox{ with } i=1,\ldots,n \mbox{ and } k=1,\ldots,K.
$$
Considering the function: $\varphi(y)=\max(0,y)$, we obtain the problem:
$$
\min\displaystyle \sum_{i=1}^{n}\,z_{i}^{2} \mbox{ subject to } \displaystyle \sum_{k=1}^{K}\, \varphi(z_{i}-\Vert\textbf{x}_{i}-\textbf{g}_{k}\Vert_{2})=0\mbox{ for }i=1,\ldots,n.
$$
This problem can be restated as:
$$
\min\displaystyle \sum_{i=1}^{n}\,z_{i}^{2} \mbox{ subject to } \displaystyle \sum_{k=1}^{K}\, \varphi(z_{i}-\Vert\textbf{x}_{i}-\textbf{g}_{k}\Vert_{2})>0,\mbox{ for }i=1,\ldots,n.
$$
Given a perturbation $\epsilon>0$, this leads to the problem:
$$
\min\displaystyle \sum_{i=1}^{n}\,z_{i}^{2} \mbox{ subject to }\displaystyle \sum_{k=1}^{K}\, \varphi(z_{i}-\Vert\textbf{x}_{i}-\textbf{g}_{k}\Vert_{2})\geq \epsilon\mbox{ for }i=1,\ldots,n.
$$
Note that the function $\varphi$ is not differentiable. We therefore apply a smoothing procedure to obtain a differentiable formulation that can be minimized by a numerical method. To this end, consider the function:
$
\psi(y,\tau)= \frac{y+\sqrt{y^{2}+\tau^{2}}}{2},
$
for all $y\in \mathbb{R}$, $\tau>0$, and the function:
$\theta(\textbf{x}_{i},\textbf{g}_{k},\gamma)=\sqrt{ \sum_{j=1}^{p}\,(x_{ij}-g_{kj})^{2}+\gamma^{2}}$,
for $\gamma>0$.
Hence, the minimization problem is transformed into:
$$
\min\displaystyle \sum_{i=1}^{n}\,z_{i}^{2} \mbox{ subject to } \displaystyle \sum_{k=1}^{K}\, \psi(z_{i}-\theta(\textbf{x}_{i},\textbf{g}_{k},\gamma),\tau)\geq \epsilon, \mbox{ for } i=1,\ldots,n.
$$
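The key property of the two smoothing functions is that $\psi(y,\tau)\to\max(0,y)$ from above as $\tau\to 0$, and $\theta(\textbf{x},\textbf{g},\gamma)$ approaches the Euclidean distance from above as $\gamma\to 0$, while both remain infinitely differentiable. A quick numerical sanity check (a sketch; the function and variable names are ours):

```python
import math

def psi(y, tau):
    # hyperbolic smoothing of max(0, y)
    return (y + math.sqrt(y * y + tau * tau)) / 2.0

def theta(x, g, gamma):
    # smoothed Euclidean distance between points x and g
    return math.sqrt(sum((xi - gi) ** 2 for xi, gi in zip(x, g)) + gamma * gamma)

# psi approaches max(0, y) from above as tau -> 0
for y in (-2.0, -0.5, 0.0, 0.5, 2.0):
    assert 0.0 <= psi(y, 1e-8) - max(0.0, y) < 1e-6

# theta approaches the Euclidean distance from above as gamma -> 0
assert 0.0 <= theta([1.0, 2.0], [4.0, 6.0], 1e-8) - 5.0 < 1e-6
```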
Finally, according to the Karush--Kuhn--Tucker conditions \cite{Kar, KT}, all the constraints are active and the final formulation of the problem is:
\begin{equation}
\begin{array}{ll}
& \min\displaystyle \sum_{i=1}^{n}\,z_{i}^{2}\\
\mbox{ subject to } & h_{i}(z_{i},\textbf{G})= \displaystyle\sum_{k=1}^{K}\, \psi(z_{i}-\theta(\textbf{x}_{i},\textbf{g}_{k},\gamma),\tau)-\epsilon=0, \mbox{ for } i=1,\ldots,n,\\
& \epsilon,\tau,\gamma>0.
\end{array}
\label{problem}
\end{equation}
Based on (\ref{problem}), the Hyperbolic Smoothing Clustering Method was proposed in \cite{Xav}; it is presented in the following algorithm.
\paragraph{Hyperbolic Smoothing Clustering Method (HSCM) Algorithm}
\begin{enumerate}
\item Initialize cluster membership matrix $\textbf{U}=[\mu_{ik}]_{n\times K}$
\item Choose initial values: $\textbf{G}^{0}, \gamma^{1}, \tau^{1}, \epsilon^{1}$
\item Choose values: $0<\rho_{1}<1$, $0<\rho_{2}<1$, $0<\rho_{3}<1$
\item Let $l=1$
\item Repeat steps 6 and 7 until a stop condition is reached:
\item Solve problem (P): $\min f(\textbf{G})=\displaystyle \sum_{i=1}^{n}\,z_{i}^{2}$ with $\gamma=\gamma^{l}$, $\tau=\tau^{l}$ and $\epsilon=\epsilon^{l}$, $\textbf{G}^{l-1}$ being the initial value and $\textbf{G}^{l}$ the obtained solution
\item Let $\gamma^{l+1}=\rho_{1}\gamma^{l}$,\; $\tau^{l+1}=\rho_{2}\tau^{l}$,\; $\epsilon^{l+1}=\rho_{3}\epsilon^{l}$ and $l=l+1$.
\end{enumerate}
The most relevant task in the hyperbolic smoothing clustering method is finding the zeroes of the function
$h_{i}(z_{i},\textbf{G})= \sum_{k=1}^{K}\, \psi(z_{i}-\theta(\textbf{x}_{i},\textbf{g}_{k},\gamma),\tau)-\epsilon=0$
for $i=1,\ldots,n$.
In this paper, we used the Newton--Raphson method for finding these zeroes \cite{Burden}, in particular the BFGS procedure \cite{Li}.
The Newton--Raphson method converged successfully, mainly thanks to a good choice of initial solutions.
In our implementation, these initial approximations were generated by computing the minimum distance between the $i$-th object and the $k$-th centroid for a given partition.
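Since $\psi$ is strictly increasing in its first argument, each $h_i$ is strictly increasing in $z_i$, so the Newton--Raphson iteration converges quickly when started at the smallest distance, as described above. A minimal sketch (our own Python rendition; the distances and parameter values are illustrative):

```python
import math

def psi(y, tau):
    # hyperbolic smoothing of max(0, y)
    return (y + math.sqrt(y * y + tau * tau)) / 2.0

def dpsi(y, tau):
    # derivative of psi with respect to y, always in (0, 1)
    return (1.0 + y / math.sqrt(y * y + tau * tau)) / 2.0

def solve_zi(dists, tau, eps, tol=1e-12, max_iter=100):
    """Newton-Raphson for h_i(z) = sum_k psi(z - d_k, tau) - eps = 0,
    started at the minimum distance, as suggested in the text."""
    z = min(dists)
    for _ in range(max_iter):
        h = sum(psi(z - d, tau) for d in dists) - eps
        if abs(h) < tol:
            break
        z -= h / sum(dpsi(z - d, tau) for d in dists)
    return z

# example: one observation, three centroid distances (illustrative numbers)
dists = [1.0, 2.5, 4.0]
z = solve_zi(dists, tau=1e-3, eps=1e-2)
assert abs(sum(psi(z - d, 1e-3) for d in dists) - 1e-2) < 1e-9
assert min(dists) < z < min(dists) + 2e-2   # root sits near the closest distance
```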
Once the zeroes $z_{i}$ of the functions $h_{i}$ are obtained, the hyperbolic smoothing is applied.
The final solution consists in solving a finite number of optimization subproblems corresponding to problem (P) in Step 6 of the HSCM algorithm.
Each of these subproblems was solved with the R routine \textit{optim} \cite{R}, a useful tool for nonlinear programming problems.
As far as we know, there is no closed-form solution for this step. In the future, we may consider writing our own program, but for this paper we use this R routine.
Since $\sum_{k=1}^{K}\, \psi(z_{i}-\theta(\textbf{x}_{i},\textbf{g}_{k},\gamma),\tau)=\epsilon$ at the solution, each entry $\mu_{ik}$ of the membership matrix is given by:
$\mu_{ik}=\frac{\psi(z_{i}-d_{ik},\tau)}{\epsilon},$
where $d_{ik}=\theta(\textbf{x}_{i},\textbf{g}_{k},\gamma)$, so that $\sum_{k=1}^{K}\mu_{ik}=1$.
It is worth noting that fuzziness is controlled by the parameter $\epsilon$.
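Because $z_i$ is defined by the constraint $\sum_k \psi(z_i - d_{ik}, \tau) = \epsilon$, the memberships sum to one over $k$ by construction. A short sketch (ours; bisection is used instead of Newton--Raphson for brevity, and the bracket assumes $\tau$ small relative to $\epsilon$):

```python
import math

def psi(y, tau):
    # hyperbolic smoothing of max(0, y)
    return (y + math.sqrt(y * y + tau * tau)) / 2.0

def memberships(dists, tau, eps):
    """Fuzzy memberships from the hyperbolic smoothing constraint:
    find z with sum_k psi(z - d_k, tau) = eps (bisection, for simplicity),
    then mu_k = psi(z - d_k, tau) / eps, which sums to one by construction.
    The bracket below is valid when K * tau / 2 < eps."""
    lo, hi = min(dists) - eps, min(dists) + eps
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if sum(psi(mid - d, tau) for d in dists) < eps:
            lo = mid
        else:
            hi = mid
    z = (lo + hi) / 2.0
    return [psi(z - d, tau) / eps for d in dists]

mu = memberships([1.0, 2.5, 4.0], tau=1e-3, eps=1e-2)
assert abs(sum(mu) - 1.0) < 1e-6
assert mu[0] == max(mu)  # the closest centroid gets the largest membership
```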
The following algorithm contains the main steps of the Hyperbolic Smoothing Fuzzy Clustering (HSFC) method.
\paragraph{Hyperbolic Smoothing Fuzzy Clustering (HSFC) Algorithm}
\begin{enumerate}
\item Set $\epsilon>0$
\item Choose initial values for: $\textbf{G}^{0}$ (centroids matrix), $\gamma^{1}$, $\tau^{1}$ and $N$ (maximum number of iterations)
\item Choose values: $0<\rho_{1}<1$,\; $0<\rho_{2}<1$
\item Set $l=1$
\item While $l\leq N$:
\item Solve problem (P): Minimize $f(\textbf{G})= \sum_{i=1}^{n}\,z_{i}^{2}$ with $\gamma=\gamma^{(l)}$ and $\tau=\tau^{(l)}$, with initial point $\textbf{G}^{(l-1)}$ and $\textbf{G}^{(l)}$ being the obtained solution
\item Set $\gamma^{(l+1)}=\rho_{1}\gamma^{(l)}$, $\tau^{(l+1)}=\rho_{2}\tau^{(l)}$, and $l=l+1$
\item Set $\mu_{ik}=\psi(z_{i}-\theta(\textbf{x}_{i},\textbf{g}_{k},\gamma),\tau)/\epsilon$ for $i=1,\ldots,n$ and $k=1,\ldots,K$.
\end{enumerate}
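The steps above can be sketched end to end as follows. This is our own minimal Python rendition, not the implementation used in the experiments: SciPy's \textit{minimize} stands in for the R routine \textit{optim}, bisection replaces Newton--Raphson, and all parameter values and the toy data are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def psi(y, tau):
    # hyperbolic smoothing of max(0, y); accepts NumPy arrays
    return (y + np.sqrt(y * y + tau * tau)) / 2.0

def theta(x, g, gamma):
    # smoothed Euclidean distance
    return np.sqrt(((x - g) ** 2).sum() + gamma * gamma)

def solve_z(dists, tau, eps):
    # root of h_i(z) = sum_k psi(z - d_k, tau) - eps by bisection;
    # the bracket is valid when K * tau / 2 < eps
    lo, hi = dists.min() - eps, dists.min() + eps
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if psi(mid - dists, tau).sum() < eps:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def f_of_G(gvec, X, K, gamma, tau, eps):
    # objective of subproblem (P): sum of squared smoothed distances
    G = gvec.reshape(K, -1)
    return sum(solve_z(np.array([theta(x, g, gamma) for g in G]), tau, eps) ** 2
               for x in X)

def hsfc(X, K, G0, gamma=1e-2, tau=1e-3, eps=1e-2, rho1=0.25, rho2=0.25, N=4):
    G = np.asarray(G0, dtype=float).ravel()
    for _ in range(N):                                   # Steps 5-7
        res = minimize(f_of_G, G, args=(X, K, gamma, tau, eps),
                       method="Nelder-Mead")
        G, gamma, tau = res.x, rho1 * gamma, rho2 * tau
    Gfin = G.reshape(K, -1)
    U = []                                               # Step 8: memberships
    for x in X:
        dists = np.array([theta(x, g, gamma) for g in Gfin])
        U.append(psi(solve_z(dists, tau, eps) - dists, tau) / eps)
    return Gfin, np.array(U)

# toy run: two 1-D blobs, centroids should move to the blob means
X = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]])
G, U = hsfc(X, K=2, G0=[[0.0], [4.0]])
```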
\section{Comparative Results}
\label{sec:results}
Performance of the HSFC method was studied on a data table well known in the literature, Fisher's iris data \cite{Fisher}, and on 16 simulated data tables built with a semi-Monte Carlo procedure \cite{PSOclus}.
For comparing FCM and HSFC, we used the implementation of FCM in R package \textit{fclust} \cite{Gio}.
This comparison was based on the within-class sum-of-squares:\linebreak
$W(P)= \sum_{k=1}^{K}\, \sum_{i=1}^{n}\,\mu_{ik}\|\textbf{x}_{i}-\textbf{g}_{k}\|^{2}$.
Both methods were applied 50 times and the best value of $W$ is reported.
For simplicity, we used the following parameters for HSFC:
$\rho_{1}=\rho_{2}=\rho_{3}=0.25$, $\epsilon=0.01$, and $\gamma=\tau=0.001$ as initial values.
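The comparison criterion $W(P)$ can be computed directly from the data, centroids, and membership matrix (a minimal sketch; the function name is ours):

```python
import numpy as np

def within_class_ss(X, G, U):
    """Within-class sum-of-squares W(P) = sum_k sum_i mu_ik * ||x_i - g_k||^2."""
    d2 = ((X[:, None, :] - G[None, :, :]) ** 2).sum(axis=2)   # n x K squared distances
    return float((U * d2).sum())

# toy check: with crisp memberships this is the classic K-means criterion
X = np.array([[0.0], [2.0], [10.0]])
G = np.array([[1.0], [10.0]])
U = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
assert within_class_ss(X, G, U) == 2.0   # (0-1)^2 + (2-1)^2 + 0
```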
Table \ref{tab:classic:tables} shows the results for Fisher's iris data, for which HSFC performs slightly better. It also contains the Adjusted Rand Index (ARI) \cite{ARI} between the HSFC solution and the best FCM result among 100 runs; the ARI compares the fuzzy membership matrices crisped into hard partitions.
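The ARI on the crisped partitions (hard labels obtained by assigning each object to its cluster of maximum membership) can be computed from the contingency table. The following sketch is our own implementation of the standard ARI formula:

```python
from math import comb
from collections import Counter

def adjusted_rand_index(labels_a, labels_b):
    """ARI between two hard partitions (e.g. fuzzy matrices crisped by argmax)."""
    n = len(labels_a)
    pairs = Counter(zip(labels_a, labels_b))                  # contingency counts
    sum_ij = sum(comb(c, 2) for c in pairs.values())
    sum_a = sum(comb(c, 2) for c in Counter(labels_a).values())
    sum_b = sum(comb(c, 2) for c in Counter(labels_b).values())
    expected = sum_a * sum_b / comb(n, 2)
    max_index = (sum_a + sum_b) / 2.0
    return (sum_ij - expected) / (max_index - expected)

# identical partitions up to relabeling give ARI = 1
assert adjusted_rand_index([0, 0, 1, 1], [1, 1, 0, 0]) == 1.0
```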
\begin{table}
\centering
\caption{Minimum sum-of-squares (SS) reported for the Fisher's iris data table with HSFC and FCM, $K$ being the number of clusters, and the ARI comparing both methods. Best method in bold.}
\label{tab:classic:tables}
\begin{tabular}{p{20mm}|p{5mm}p{20mm}p{20mm}p{8mm}}
\hline\noalign{\smallskip}
Table & $K$ & SS for HSFC & SS for FCM & ARI\\
\noalign{\smallskip}\hline\noalign{\smallskip}
& 2 & \textbf{152.348} & 152.3615 & \multicolumn{1}{c}{1}\\
{Fisher's iris} & 3 & \textbf{78.85567} & 78.86733 & 0.994\\
& 4 & 57.26934 & 57.26934 & 0.980\\
\noalign{\smallskip}\hline\noalign{\smallskip}
\end{tabular}
\end{table}
Simulated data tables were generated in a controlled experiment as in \cite{PSOclus}, with random numbers following a Gaussian distribution. The factors of the experiment were:
\vspace{-0.2cm}
\begin{itemize}
\item The number of objects (with 2 levels, $n=105$ and $n=525$).
\item The number of clusters (with levels $K=3$ and $K=7$).
\item Cardinality (card) of clusters, with levels i) all with the same number of objects (coded as card($=$)), and ii) one large cluster with 50\% of objects and the rest with the same number (coded as card($\not=$)).
\item Standard deviation of clusters, with levels i) all Gaussian random variables with standard deviation (SD) equal to one (coded as SD($=$)), and ii) one cluster with SD=3 and the rest with SD=1 (coded as SD($\not=$)).
\end{itemize}
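These factors can be sketched as a generator of simulated tables. This is our reconstruction of the design: the placement of cluster centers on a grid and the dimension $p$ are our assumptions, not taken from \cite{PSOclus}.

```python
import numpy as np

def simulate_table(n, K, equal_card, equal_sd, p=2, spacing=10.0, seed=0):
    """Gaussian clusters following the experimental factors: n objects,
    K clusters, equal/unequal cardinalities and standard deviations."""
    rng = np.random.default_rng(seed)
    if equal_card:
        sizes = [n // K] * K
    else:                       # one large cluster holds 50% of the objects
        big = n // 2
        sizes = [big] + [(n - big) // (K - 1)] * (K - 1)
    sds = [1.0] * K if equal_sd else [3.0] + [1.0] * (K - 1)
    rows, labels = [], []
    for k, (size, sd) in enumerate(zip(sizes, sds)):
        center = np.full(p, k * spacing)    # assumed grid placement of centers
        rows.append(rng.normal(center, sd, size=(size, p)))
        labels += [k] * size
    return np.vstack(rows), np.array(labels)

# e.g. the T3 configuration: n=105, K=3, card(=), SD(=)
X, y = simulate_table(n=105, K=3, equal_card=True, equal_sd=True)
assert X.shape == (105, 2) and len(y) == 105
```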
Table \ref{table:nombre} contains the codes for the simulated data tables.
\begin{table}
\centering
\caption{Codes and characteristics of simulated data tables; $n$: number of objects, $K$: number of clusters, card: cardinality, SD: standard deviation.}\label{table:nombre}
\begin{tabular}{lp{5cm}|lp{4.1cm}}
\hline\noalign{\smallskip}
{Table} & {Characteristics} & {Table} & {Characteristics} \\
\noalign{\smallskip}\hline\noalign{\smallskip}
T1 & $n=525$, $K=3$, card($=$), SD($=$) &
T9 & $n=525$, $K=3$, card($\not=$), SD($=$) \\
T2 & $n=525$, $K=7$, card($=$), SD($=$) &
T10 & $n=525$, $K=7$, card($\not=$), SD($=$) \\
T3 & $n=105$, $K=3$, card($=$), SD($=$) &
T11 & $n=105$, $K=3$, card($\not=$), SD($=$) \\
T4 & $n=105$, $K=7$, card($=$), SD($=$) &
T12 & $n=105$, $K=7$, card($\not=$), SD($=$) \\
T5 & $n=525$, $K=3$, card($=$), SD($\not=$) &
T13 & $n=525$, $K=3$, card($\not=$), SD($\not=$) \\
T6 & $n=525$, $K=7$, card($=$), SD($\not=$) &
T14 & $n=525$, $K=7$, card($\not=$), SD($\not=$)\\
T7 & $n=105$, $K=3$, card($=$), SD($\not=$) &
T15 & $n=105$, $K=3$, card($\not=$), SD($\not=$)\\
T8 & $n=105$, $K=7$, card($=$), SD($\not=$) &
T16 & $n=105$, $K=7$, card($\not=$), SD($\not=$) \\
\noalign{\smallskip}\hline\noalign{\smallskip}
\end{tabular}
\end{table}
Table \ref{table:results} contains the minimum values of the sum-of-squares obtained with our HSFC method and Bezdek's FCM; the best solution out of 100 random runs of FCM is presented, together with one run of HSFC. It also contains the ARI values comparing the HSFC solution with that best FCM solution.
It can be seen that, in general, the HSFC method tends to obtain better results than FCM, with only a few exceptions: HSFC obtains better results in 23 cases, FCM is better in 5 cases, and the results are the same in 17 cases. However, the ARI shows that the partitions obtained with both methods tend to be very similar.
\begin{table}
\centering
\caption{Minimum sum-of-squares (SS) reported for HSFC and FCM methods on the simulated data tables.
Best method in bold.}
\label{table:results}
{\small
\begin{tabular}{p{7mm}p{3mm}p{12mm}p{12mm}p{18mm}|p{7mm}p{3mm}p{12mm}p{12mm}p{12mm}}
\hline\noalign{\smallskip}
Table & $K$ & SS for & SS for & ARI &
Table & $K$ & SS for & SS for & ARI\\
& & HSFC & FCM & &
& & HSFC & FCM &\\
\noalign{\smallskip}\hline\noalign{\smallskip}
& 2 & \textbf{7073.402} & 7073.814 & 0.780 & & 2 & 12524.31 & 12524.31 & 0.900\\
T1 & 3 & 3146.119 & 3146.119 & 1 &
T9 & 3 & \textbf{9269.361} & 9269.611 & 1\\
& 4 & 2983.651 & 2983.651 & 1 & & 4 & 6298.47 & \textbf{6298.368} & 1 \\ \hline
& 2 & \textbf{16987.19} & 16987.71 & 0.764 & & 2 & \textbf{5466.893} & 5466.912 & 0.890\\
T2 & 3 & 11653.22 & 11653.22 & 1 &
T10 & 3 & 2977.58 & 2977.58 & 1\\
& 4 & \textbf{7776.855} & 7777.396 & 1 & & 4 & \textbf{2745.721} & 2746.671 & 1 \\ \hline
& 2 & \textbf{3923.051} & 3923.062 & 0.763 & & 2 & \textbf{2969.247} & 2969.32 & 0.860\\
T3 & 3 & 2917.13 & 2917.13 & 0.754 &
T11 & 3 & 1912.323 & 1912.323 & 1\\
& 4 & 2287.523 & \textbf{2256.298} & 0.993 & & 4 & 1401.394 & 1401.394 & 1\\ \hline
& 2 & \textbf{1720.365} & 1720.374 & 0.992 & & 2 & 1816.056 & 1816.056 & 1\\
T4 & 3 & 569.3112 & 569.3112 & 1 &
T12 & 3 & 525.7118 & 525.7118 & 1 \\
& 4 & 535.5491 & \textbf{535.3541} & 1 & & 4 & \textbf{477.0593} & 477.2696 & 1\\ \hline
& 2 & 15595.67 & 15595.67 & 0.910 & & 2 & \textbf{12804.03} & 12805.05 & 0.920 \\
T5 & 3 & \textbf{11724.93} & 11725.28 & 1 &
T13 & 3 & \textbf{8816.805} & 8817.702 & 1\\
& 4 & 8409.738 & 8409.738 & 0.984 & & 4 & \textbf{6293.774} & 6293.951 & 1\\ \hline
& 2 & 11877.96 & 11877.96 & 0.970 & & 2 & \textbf{16228.07} & 16228.98 & 0.920\\
T6 & 3 & \textbf{8299.779} & 8300.718 & 1 &
T14 & 3 & \textbf{7255.113} & 7255.423 & 1\\
& 4 & \textbf{7212.611} & 7213.725 & 1 & & 4 & 6427.313 & 6427.313 & 1\\ \hline
& 2 & \textbf{4336.261} & 4336.507 & 0.955 & & 2 & \textbf{2616.286} & 2616.943 & 1 \\
T7 & 3 & 3041.076 & 3041.076 & 1 &
T15 & 3 & \textbf{1978.017} & 1978.233 & 1\\
& 4 & \textbf{2395.683} & 2421.333 & 1 & & 4 & \textbf{1526.895} & 1526.953 & 1 \\ \hline
& 2 & 1767.43 & 1767.43 & 1 & & 2 & 2226.923 & \textbf{2226.212} & 0.962 \\
T8 & 3 & \textbf{1380.766} & 1381.019 & 1 &
T16 & 3 & \textbf{1232.074} & 1232.124 & 1 \\
& 4 & 1215.302 & \textbf{1211.235} & 1 & & 4 & \textbf{982.7074} & 982.9721 & 1\\
\noalign{\smallskip}\hline\noalign{\smallskip}
\end{tabular}
}
\end{table}
\section{Concluding Remarks}
\label{sec:conclusion}
In hyperbolic smoothing, the parameters $\tau$, $\gamma$ and $\epsilon$ tend to zero, so the constraints in the subproblems ensure that problem (P) tends to the original problem (\ref{funcional}).
The parameter $\epsilon$ controls the degree of fuzziness in the clustering: the larger it is, the fuzzier the solution; the smaller it is, the crisper the clustering.
For comparing the results and efficiency of the HSFC method, the zeroes of the functions $h_{i}$ can be obtained with any method for solving equations in one variable or with a predefined routine.
According to the results obtained so far and our implementation of hyperbolic smoothing for fuzzy clustering, we can conclude that, in general, the HSFC method performs slightly better than Bezdek's original FCM on small real and simulated data tables.
Further research is required to test the performance of the HSFC method on very large data sets, with measures of efficiency, solution quality and running time.
We also plan to study further comparisons between HSFC and FCM with different indices, and to write our own program for solving Step 6 of the HSFC algorithm, that is, the minimization of $f(\textbf{G})$, instead of using the \textit{optim} routine in R.
\subsection*{Acknowledgements}
D. Mas\'is acknowledges the School of Mathematics of the Costa Rica Institute of Technology for their support; this work is part of his M.Sc. dissertation at the University of Costa Rica.
E. Segura and J. Trejos acknowledge the Research Center for Pure and Applied Mathematics (CIMPA) of the University of Costa Rica for their support.
A.E. Xavier acknowledges the Federal University of Rio de Janeiro and the Federal University of Juiz de Fora for their support.
\section*{Acknowledgements}
This work was carried out whilst the first author was visiting the
Centre for Mathematics and its Applications at the ANU.
He wishes to thank the Australian Research Council for support
and the CMA for hospitality.
The third author was an ARC Research Associate for the duration of the collaboration.
\section{Introduction}
\paragraph*{}Mesoscale rings were the object of numerous experiments exploring quantum mechanical phenomena involving phase shifts and coherence of electronic wavefunctions.
Among others, persistent currents \cite{gunther1969flux,buttiker1983josephson}, one-dimensional weak localization \cite{shea2000electrical} and the Aharonov-Bohm effect \cite{keyser2002aharonov} were studied.
These rings have been fabricated using various materials: superconductors \cite{doll1961experimental}, metals \cite{washburn1986aharonov}, semiconductors \cite{granados2003ga}, two-dimensional free electron gas systems \cite{fuhrer2001energy}, carbon nanotubes \cite{shea2000electrical} and graphene \cite{potasz2011electronic}.
As an example application for these structures, Suarez \textit{et al.} used epitaxial quantum rings (QRs) as gain medium for lasers \cite{suarez2004laser}.
Until recently, these structures were not available in colloidal form.
Since 2008, freestanding quantum wells called nanoplatelets (NPLs), nanodisks or nanobelts have been accessible via colloidal synthesis, enabling a larger and cheaper source of quantum wells.
Many colloidal II-VI semiconductor nanoplatelets having a well-defined modulable thickness and atomically flat surface have been directly prepared by solution phase methods, \textit{e.g.} CdS \cite{li2012uniform, li2009cds, ithurria2011colloidal}, CdSe \cite{ithurria2008quasi, yu2009single, ouyang2008multiple, ithurria2011colloidal, li2011size}, \ce{CdS_xSe_{1-x}} \cite{fan2015colloidal}, CdTe \cite{wang2013non,wang2009single, ithurria2011colloidal,pedetti2013optimized} and ZnS \cite{buffard2015zns}.
Cation exchange methods also allowed the preparation of ZnSe and PbSe NPLs\cite{bouet2014synthesis}.
\paragraph*{} As the NPL growth originates from the direct reaction of precursor at the NPL periphery\cite{ithurria2011continuous}, it is possible to grow another semiconductor laterally around a NPL acting as a seed.
One such heterostructure, CdSe/CdS core/crown, has been synthesized by Prudnikau \textit{et al.} \cite{prudnikau2013cdse} and Tessier \textit{et al.} \cite{tessier2013efficient}.
Also, CdSe/CdTe core/crown have been prepared by Pedetti \textit{et al.} \cite{pedetti2014type} and Kelestemur \textit{et al.} \cite{kelestemur2015type}.
These former structures show good confinement because of rapid exciton transfer from the CdS crown to the CdSe core.
However, their emission is mostly dependent on the core dimensions and is not easily modulated \cite{tessier2013efficient}.
To extend the emission window of NPL based heterostructures, we propose CdSe colloidal QRs, thereafter cQRs, grown at the periphery of CdS NPLs.
As expected from the band alignment, the emission of these heterostructures originates from the excitonic recombination in the CdSe QR.
Such colloidal quantum rings have recently been reported in parallel by Delikanli \textit{et al.} \cite{delikanli2015continuously} but prepared by a different method and only with 4 monolayers (ML) thick CdS seed.
In comparison, the presented method is a user-friendly one-pot synthesis and was used to prepare both 3 and 4 ML heterostructures.
Colloidal QRs, like their core/crown analogs, could be used as light harvesters and charge separators for solar cells, active media for lasers \cite{guzelturk2014amplified} and light-emitting diodes \cite{vashchenko2014organic}, and even as luminescent probes for biomedical imaging.
Furthermore, the peculiar electronic structure of QRs is of importance for more fundamental physical studies \cite{orellana2003conductance, bayer2003optical, keyser2002aharonov, fuhrer2001energy} and light-matter interaction \cite{sen2007electron, warburton2002giant}.
\paragraph*{}Here, we present the synthesis and characterization of CdSe colloidal QRs grown around 3 and 4 monolayer thick CdS NPLs.
The samples are characterized by elemental analysis, X-ray diffraction, transmission electron microscopy, absorption, photoluminescence emission and photoluminescence excitation spectroscopy.
We describe the evolution of the CdSe dimensionality with the width of the CdSe QR, from 0D to 2D through an intermediate 1D state.
\section{Experimental Methods}
\paragraph*{Chemicals}Cadmium acetate dihydrate \\(\ce{Cd(Ac)2*2H2O}, \SI{98}{\%}), technical grade 1-octadecene (ODE, \SI{90}{\%}) and technical grade tri-$n$-butylphosphine (TBP, \SI{95}{\%}) were purchased from Alfa Aesar.
Technical grade oleic acid (OA, \SI{90}{\%}) and elemental selenium (\SI{99.5}{\%}) were purchased from Sigma Aldrich.
Chloroform and methanol were purchased from BDH and elemental sulfur (\SI{99}{\%}) from Laboratoire Mat.
All chemicals were used as purchased without further purification.
\paragraph*{Preparation of stock solutions} Sulfur stock solution was prepared by dissolving \SI{32}{mg} of elemental sulfur in \SI{20}{g} of ODE by sonication at room temperature.
\SI{0.1}{M} Se stock solution was prepared by heating, under inert atmosphere, \SI{318}{mg} of elemental selenium in \SI{40}{mL} of ODE at \SI{180}{\degC} overnight.
The resulting solution was yellow and was conserved under ambient conditions.
\paragraph*{\SI{4}{ML} CdS nanoplatelet synthesis} In a \SI{250}{mL} reaction flask, \SI{2}{mmol} of cadmium acetate, \SI{10}{g} of S stock solution, \SI{2}{mmol} of OA and \SI{30}{g} of ODE were degassed under vacuum at \SI{50}{\degC} for \SI{30}{min} and purged twice with nitrogen.
The mixture was then heated under nitrogen to \SI{260}{\degC} in about \SI{15}{min}.
The mixture reacted at this temperature, but not above, for \SI{1}{min} before being cooled down to ambient temperature.
At the beginning of the cooldown, \SI{3}{mL} of OA was added to improve the purification process and the stability of the purified nanoplatelets.
After the cooldown, the mixture was centrifuged at \SI{6000}{rcf} for \SI{6}{min} and the precipitated nanoplatelets were redispersed in \SI{20}{mL} of chloroform.
\paragraph*{\SI{3}{ML} CdS nanoplatelet synthesis}\SI{3}{ML} CdS nanoplatelets were prepared in a similar manner.
Instead of oleic acid, the same molar amount of myristic acid was used for the reaction mixture.
For this synthesis, the reaction time and temperature were \SI{20}{min} and \SI{180}{\degC}.
\paragraph*{CdSe ring synthesis around CdS nanoplatelets} In a typical synthesis for cQRs, \SI{0.1}{mmol} of cadmium acetate, \SI{15}{mg} of oleic acid, \SI{0.5}{mL} of a CdS NPL dispersion, \SI{0.5}{mL} of the Se stock solution and \SI{15}{mL} of ODE were added to a \SI{100}{mL} reaction flask, degassed at \SI{50}{\degC} and purged twice with nitrogen.
The mixture was heated to \SI{190}{\degC} and kept at this temperature, never above, for \SI{10}{min}.
Afterward, \SI{1}{mL} of oleic acid was injected and the mixture was cooled to room temperature. The mixture was centrifuged at \SI{11000}{rcf} for \SI{8}{min}.
The precipitated cQRs were redispersed in \SI{6}{mL} of chloroform. The dispersions are colloidally stable for months.
\paragraph*{}The ring width is controlled by the amount of Se added.
This amount can be lowered down to \SI{5}{\micro mol} to obtain a narrower ring.
Above \SI{50}{\micro mol}, the concentration of selenium was too high and secondary CdSe NPL and dot nucleation hindered ring growth.
To obtain samples with more than \SI{50}{\micro mol} of Se, cQRs with \SI{50}{\micro mol} of Se were first prepared as described above.
After the \SI{10}{min} reaction time, the remaining Se quantity, up to an additional \SI{50}{\micro mol}, was injected into the reaction flask and the temperature was kept at \SI{190}{\degC} for 10 more minutes. Multiple injections were used for samples above \SI{100}{\micro mol} of Se.
\paragraph*{Energy-dispersive X-ray spectroscopy (EDS)}EDS measurements were obtained with a JEOL JSM-840-A scanning electron microscope using a PGT Avalon EDS with a NORA detecting unit.
The samples were scanned with a \SI{15}{kV} electron beam.
\paragraph*{Powder X-ray Diffraction (XRD)}Diffractograms were acquired using a Siemens-Bruker X-ray diffractometer with a 2D Hi-Star XRD detector.
The radiation source was a Kristalloflex 760 with a nickel window emitting the Cu K$_{\alpha}$ line ($\lambda = \SI{1.5418}{\angstrom}$) with an accelerating voltage and current of respectively \SI{40}{kV} and \SI{40}{mA}.
Diffractograms were recorded from \ang{10} to \ang{60}.
Background signal was automatically subtracted by the diffraction pattern treatment software GADDS.
\paragraph*{Purification method for EDS and XRD} cQR dispersions were centrifuged at low force to eliminate the remaining insoluble compounds.
Afterward, the cQRs were precipitated in a solution of \SI{10}{\%} TBP in methanol and centrifuged.
The precipitate was dispersed with chloroform.
This precipitation cycle was repeated once more with the TBP solution and once again with pure methanol as precipitating solvent.
Finally, the cQRs were redispersed in a minimal amount of chloroform and drop-casted on a Si wafer for the EDS and on a glass cover slip for XRD analysis.
\paragraph*{Transmission electron microscope (TEM)}The purified cQR samples morphology was characterized with a JEOL 1230.
The samples dispersed in chloroform were drop-casted on a Ni TEM grid coated with Formvar and carbon film.
\paragraph*{Optical spectroscopy}All optical characterizations were carried out on cQR dispersions in chloroform.
Absorption spectra of the samples were recorded at room temperature with a Varian Cary 50 Conc UV-visible spectrophotometer from 300 to \SI{800}{nm}.
The steady state photoluminescence (PL) and photoluminescence excitation (PLE) spectra were acquired using a Jobin-Yvon Fluorolog equipped with a photomultiplier tube.
The samples were excited at \SI{350}{nm} for PL spectra and the detection window was set at the emission maximum for PLE spectra.
Excitation and emission monochromators slits were set at \SI{1}{nm}.
\paragraph*{Photoluminescence peak fitting} PL spectra were fitted to extract their respective photoluminescence maximum energy (PL$_\text{max}$) and full width at half maximum (FWHM).
Depending on the Se amount added during the cQR synthesis, the PL behaviour may correspond to either a Gaussian or a Lorentzian peak shape, which correspond to different broadening types, as discussed in the Spectroscopic Characterization section.
Thus, all PL spectra were fitted with both peak shapes on an interval corresponding to the excitonic peak.
Then, for each spectrum, the most representative peak shape was determined by comparing each fit's coefficient of determination ($R^2$) over the whole spectrum.
From this best fit, the PL$_\text{max}$ was directly obtained from the fit's center parameter and the FWHM was calculated from the fit's width parameter.
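This fitting procedure can be sketched as follows (our own Python illustration using SciPy's \textit{curve\_fit}, not the software used for the measurements; the peak parametrizations are the standard ones, with FWHM $=2\sqrt{2\ln 2}\,\sigma$ for a Gaussian and $2\gamma$ for a Lorentzian):

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, a, x0, sigma):
    return a * np.exp(-((x - x0) ** 2) / (2 * sigma ** 2))

def lorentzian(x, a, x0, gamma):
    return a * gamma ** 2 / ((x - x0) ** 2 + gamma ** 2)

def fit_pl_peak(x, y):
    """Fit both peak shapes, keep the one with the larger R^2,
    and return (shape name, peak center, FWHM)."""
    best = None
    for name, f, fwhm_of in (
        ("gaussian", gaussian, lambda w: 2 * np.sqrt(2 * np.log(2)) * w),
        ("lorentzian", lorentzian, lambda w: 2 * w),
    ):
        p0 = (y.max(), x[np.argmax(y)], (x.max() - x.min()) / 10)
        popt, _ = curve_fit(f, x, y, p0=p0)
        ss_res = ((y - f(x, *popt)) ** 2).sum()
        ss_tot = ((y - y.mean()) ** 2).sum()
        r2 = 1 - ss_res / ss_tot
        if best is None or r2 > best[0]:
            best = (r2, name, popt[1], fwhm_of(abs(popt[2])))
    return best[1:]

# synthetic Gaussian peak centered at 2.70 eV with FWHM 0.06 eV
x = np.linspace(2.5, 2.9, 400)
y = gaussian(x, 1.0, 2.70, 0.06 / (2 * np.sqrt(2 * np.log(2))))
shape, center, fwhm = fit_pl_peak(x, y)
assert shape == "gaussian"
assert abs(center - 2.70) < 1e-4 and abs(fwhm - 0.06) < 1e-4
```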
\section{Results and Discussion}
\subsection*{Synthesis and Expected Growth Mechanism}
\paragraph*{}The synthesis of these CdSe cQRs involves two steps.
First, CdS nanoplatelets of \SI{3}{ML} and \SI{4}{ML} are prepared following a one-pot method reported by M. Li \textit{et al.} \cite{li2009cds} and Z. Li \textit{et al.} \cite{li2012uniform}.
Compared to continuous injection methods, this approach is more easily implemented for various experimental conditions and facilitates synthesis scale up.
Second, the growth of a CdSe ring around the CdS NPL periphery is also inspired from a CdSe NPL synthesis \cite{li2011size}.
The one-pot method presented here has the advantage of being more user-friendly than continuous injection methods and yields reproducible results, as all results were replicated by another user.
As with CdSe NPL synthesis, the reaction conditions are critical for the growth of atomically flat CdSe.
The range of synthetic temperatures is below the solubilization temperature of the CdS and CdSe NPLs and higher than the monomer reaction temperature (\SI{140}{\degC})\cite{li2011size}.
At higher temperatures (\SI{240}{\degC}), the NPLs are not stable and the formation of irregular free CdSe nanocrystals is more favorable than lateral growth.\cite{li2011size}
The Se precursor concentration is kept low to promote heterogeneous nucleation and growth of CdSe around CdS NPLs over secondary homogeneous CdSe nucleation.
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.8\columnwidth]{fig/fig1.png}
\caption{Evolution of the cQR shape as function of the Se amount present in the synthesis.}
\end{figure}
The oleic acid concentration is also crucial to grow NPLs laterally instead of wires or dots \cite{srivastava2010surface}.
Indeed, the synthetic conditions with saturated \ce{Cd(Ac)2} combined with a suitable concentration of free oleic acid promote lateral growth of CdSe.
\paragraph*{} Both the reaction conditions and the observed CdSe ring growth suggest the reaction mechanism to be analogous to CdSe NPL growth as described by Ithurria \textit{et al.} \cite{ithurria2011continuous}: monomers react with the CdS or CdSe peripheral edges to grow even further.
The monomers can be either cadmium carboxylate salts and dissolved Se or small building blocks containing both Cd and Se as proposed by Yu \textit{et al.}\cite{yu2010thermodynamic,yu2012cdse} for CdSe magic-sized clusters.
The lateral extent of the cQRs is controlled by the amount of Se, because the growth proceeds by the continuous reaction of monomers and is diffusion controlled.
When there is not enough CdSe to cover the entire periphery of every CdS NPL, CdSe forms dot-like structures on the edges of the CdS NPLs, as illustrated in the top part of Fig. 1 and demonstrated in the Spectroscopic Characterization section.
Following epitaxial growth terminology, these dot-like structures are named islands.
Assuming the CdSe heterogeneous nucleation is evenly distributed around every NPL, at first the growth is likely to proceed locally without a wetting layer (Volmer-Weber growth mode).
With sufficient amount of Se precursor, each island can grow in the direction of the width or extend itself around the periphery of the CdS NPL to form a complete ring.
Also, this competition between the two growth directions is apparent spectroscopically, with a larger PL FWHM than for a uniform and flat CdSe layer.
Once the islands have connected with each other as represented in the bottom part of Fig. 1, the growth is limited to the lateral direction and the CdSe grows like typical NPLs.
\subsection*{Material Characterization}
\paragraph*{}Elemental and structural analyses are performed on the purified samples to corroborate that CdSe grows around the rim of the CdS NPLs.
Table 1 presents the energy-dispersive X-ray spectroscopy (EDS) results that confirm a higher Se fraction is found in wider cQRs grown from a larger initial quantity of Se precursors.
There is an excess of Cd even when considering both basal planes are metal-terminated: the Cd molar fraction should be \SI{57}{\%} for \SI{3}{ML} cQRs, whereas the measured amount stated in Table 1 is around \SI{70}{\%}.
\begin{table*}
\centering
\begin{threeparttable}
\caption{Photoluminescence maximum, elemental analysis and estimated ring size of selected \SI{3}{ML} cQRs}
\begin{tabular}{cccccc}
\hline
\multirow{2}{*}{Se added}& \multirow{2}{*}{PL$_\text{max}$} & \multicolumn{3}{c}{Elemental analysis} & \multirow{2}{*}{Estimated ring width\tnote{a}} \\
& & Cd & S & Se & \\ \hline
\SI{}{\micro mol} & eV (nm) & \% & \% & \% & nm \\ \hline
10 & 2.879 (430.6)& 72 & 27 & 1 & 0.3 \\
50 & 2.703 (458.7)& 71 & 20 & 9 & 2.9 \\
100 & 2.692 (460.6)& 68 & 14 & 18 & 7.9 \\ \hline
\end{tabular}
\begin{tablenotes}
\item [a] Considering \SI{30}{nm} wide NPLs with square geometry and a complete covering.
\end{tablenotes}
\end{threeparttable}
\end{table*}
\paragraph*{}This excess beyond the expected stoichiometry likely originates from Cd organic salts left in solution after the purification procedure.
As opposed to the excess Cd fraction, the Se fraction cannot come from Se impurities because the purification method with TBP efficiently removes the Se precursor.
Thus, the increasing Se fraction is directly linked to CdSe growth.
The ratio between the S and Se molar fractions was used to estimate the CdSe ring width, with the assumptions that the CdS NPLs are \SI{30}{nm} wide squares and that CdSe forms a uniform layer.
These estimates could not be confirmed by direct measurements because of the lack of contrast between CdSe and CdS in TEM micrographs.
Overall, the elemental analysis indicates that cQRs comprise both CdS and CdSe in agreement with the XRD pattern shown in Fig. S3\dag.
The XRD pattern matches the lines in the reference patterns of zinc blende CdS and zinc blende CdSe with no feature corresponding to a wurtzite phase.
The zinc blende crystalline phase is typical of Cd chalcogenides grown in NPL structures \cite{li2011size}.
The corresponding atomically flat morphology is confirmed by the TEM image of Fig. 2b where a single population of NPL-like structures is observed without any apparent secondary nucleation.
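The width estimate follows from simple area ratios: for a square core of side $L$ and a uniform ring of width $w$, the S amount is proportional to $L^2$ and the Se amount to $(L+2w)^2 - L^2$, so $w = L\left(\sqrt{1 + \mathrm{Se}/\mathrm{S}} - 1\right)/2$. A quick check against Table 1 (our reconstruction of the calculation, consistent with the reported values within rounding):

```python
from math import sqrt, isclose

def ring_width(L, s_frac, se_frac):
    """Uniform CdSe ring width (nm) around a square CdS platelet of side L (nm):
    S is proportional to L^2 and Se to (L + 2w)^2 - L^2."""
    return L * (sqrt(1.0 + se_frac / s_frac) - 1.0) / 2.0

# approximately reproduces the estimates of Table 1 for L = 30 nm
assert isclose(ring_width(30.0, 27, 1), 0.3, abs_tol=0.05)
assert isclose(ring_width(30.0, 20, 9), 2.9, abs_tol=0.2)
assert isclose(ring_width(30.0, 14, 18), 7.9, abs_tol=0.3)
```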
\begin{figure}[!htb]
\centering
\includegraphics[width=0.9\columnwidth]{fig/fig2.png}
\caption{(a) Photographs of \SI {3}{ML} and \SI {4}{ML} cQRs under UV excitation. Ring width increases from left to right. (b) Transmission electron microscopy image of \SI {4}{ML} cQRs with \SI{50}{\micro mol} of Se.}
\end{figure}
\paragraph*{}These results and previous reports on lateral heterostructures grown in similar reaction conditions \cite{prudnikau2013cdse, tessier2013efficient, delikanli2015continuously} point to a morphology where CdSe surrounds the CdS NPLs in the plane, which is also supported by the spectroscopic characterization (section below).
\subsection*{Spectroscopic Characterization}
\begin{figure}[!htb]
\centering
\includegraphics[width=\columnwidth]{fig/fig3.pdf}
\caption{Absorption, photoluminescence and photoluminescence excitation spectra of (a) \SI {3}{ML} CdS nanoplatelets, (b) \SI {3}{ML} cQRs with \SI{10}{\micro mol} of Se and (c) \SI {3}{ML} cQRs with \SI{200}{\micro mol} of Se. HH-E and LH-E transitions for \SI {3}{ML} CdS and CdSe NPLs\cite{tessier2013efficient} are identified with dotted lines.}
\end{figure}
\paragraph*{}Further investigations with optical spectroscopy confirm that an electronically conjugated planar heterostructure is indeed synthesized.
In Fig. 3a, the absorption and PL spectra excited at \SI{350}{nm} are presented for \SI{3}{ML} CdS NPLs used as seeds in the CdSe ring growth process.
The electron-heavy hole (HH-E) and electron-light hole (LH-E) transition energies for \SI{3}{ML} CdS NPLs\cite{tessier2013efficient}, respectively at \SI{3.246}{eV} (\SI{382.0}{nm}) and \SI{3.324}{eV} (\SI{373.0}{nm}), are indicated by dashed lines and correspond to the overlapped peaks in the absorption spectrum.
\paragraph*{}The spectral characteristics of the cQR samples in Fig. 3b,c are notably different from those of \SI{3}{ML} CdS NPLs with the emergence of absorption and emission at energies below the CdS NPL spectra.
These features are attributed to the CdSe added around the NPLs.
The CdS NPL emission is actually quenched even without selective excitation of the CdSe rings and with a minimal amount of CdSe.
As soon as the CdSe nucleates into islands on the NPL rim with \SI{10}{\micro mol} of Se precursor added, the emission peak shifts from \SI{3.220}{eV} (\SI{385.0}{nm}) to \SI{2.861}{eV} (\SI{433.4}{nm}) with a final FWHM of \SI{215}{meV} (\SI{32.6}{nm}) between Fig. 3a and Fig. 3b.
When the Se amount reaches \SI{200}{\micro mol} in Fig. 3c, the PL emission peak has redshifted down to the energy of the HH-E transition of CdSe NPLs at \SI{2.67}{eV} (\SI{464}{nm}), with a FWHM of \SI{57}{meV} (\SI{9.8}{nm}), also typical of atomically flat 2D CdSe NPLs.
The low energy emission tail originating from trap states in islands with a large surface-to-volume ratio has also disappeared.
An extra peak is then observed at \SI{2.43}{eV} (\SI{510}{nm}) and is attributed to thicker \SI{4}{ML} CdSe 2D nanostructures either grown around thicker CdS secondary nucleation or direct CdSe structure nucleation.
For the \SI{200}{\micro mol} sample, the absorption spectrum now features LH-E (\SI{2.863}{eV}, \SI{433.1}{nm}) and HH-E (\SI{2.695}{eV}, \SI{460.1}{nm}) transitions corresponding to isolated \SI{3}{ML} CdSe NPLs and the combined LH-E and HH-E transitions (\SI{3.29}{eV}, \SI{377}{nm}) of \SI{3}{ML} CdS NPLs as labeled in Fig. 3 \cite{tessier2013efficient}.
The photoluminescence excitation spectrum with a detection window centered on the PL maximum is also provided in each panel to show its close match to the absorption spectrum from the CdSe band edge up to \SI{3.5}{eV} (\SI{350}{nm}).
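The paired energy/wavelength values quoted above can be cross-checked with the standard conversion $\lambda = hc/E$, with $hc \approx \SI{1239.84}{eV.nm}$; a one-line sketch:

```python
def ev_to_nm(energy_ev):
    """Wavelength (nm) of a photon of given energy (eV),
    using hc ~ 1239.84 eV nm."""
    return 1239.84 / energy_ev
```

For instance, the CdSe NPL HH-E transition at \SI{2.67}{eV} indeed corresponds to approximately \SI{464}{nm}.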
\begin{figure*}[!htb]
\centering
\includegraphics[width=0.8\textwidth]{fig/fig4.pdf}
\caption{Absorption spectra of (a) \SI {3}{ML} and (b) \SI {4}{ML} cQR samples. The spectra are normalized to the CdS NPL absorption transition.}
\end{figure*}
\paragraph*{} While Fig. 3b,c give an overview of the spectral characterization at both extreme Se amounts for \SI{3}{ML} CdS/CdSe heterostructures, Figs. 4-6 show the details of those measurements for all Se doses and heterostructure thicknesses.
In Fig. 4, the CdSe absorption features gradually appear as more CdSe is grown around the CdS NPLs for both \SI{3}{ML} and \SI{4}{ML} CdS NPLs.
The CdSe is more apparent for \SI{3}{ML} samples as the CdS NPL seeds used are smaller than the \SI{4}{ML} ones.
With small Se amount, the CdSe absorption is very weak and only one absorption peak can be seen.
Starting at \SI{20}{\micro mol}, the HH-E and LH-E transitions are more visible, but they are much broader and blue-shifted compared with those expected for CdSe NPLs.
At \SI{200}{\micro mol}, the transitions have shifted back to the characteristic energies of CdSe NPL transitions.
PLE spectra showing equivalent behaviour are presented in Fig. S4\dag.
PL spectra for all Se amounts and thicknesses are shown in Fig.5, in which both the PL$_\text{max}$ and FWHM shifts are apparent.
To extract quantitative information about those shifts, the PL spectra were fitted as described in the Experimental Methods section. The best fit PL$_\text{max}$ and FWHM are shown in Fig. 6.
These quantities are correlated for both thicknesses: as the PL$_\text{max}$ shifts to lower energy, the FWHM decreases as well.
The best fit for lower Se amount PL spectra was systematically the Gaussian peak shape, while the Lorentzian peak shape was more suited to the higher Se dose PL spectra. For each thickness, the PL$_\text{max}$ and FWHM shifts are reduced after the change from Gaussian to Lorentzian peak shape.
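The Gaussian-versus-Lorentzian model selection used for the PL fits can be illustrated by a coarse grid search over unit-height profiles. This is an illustrative stand-in for the actual fitting procedure of the Experimental Methods section; the grid ranges and steps are arbitrary choices:

```python
import math

def gaussian(x, x0, w):      # w is the FWHM
    return math.exp(-4.0 * math.log(2.0) * (x - x0) ** 2 / w ** 2)

def lorentzian(x, x0, w):
    return 1.0 / (1.0 + 4.0 * (x - x0) ** 2 / w ** 2)

def best_peak_shape(xs, ys):
    """Coarse grid search over center x0 and FWHM w for unit-height
    Gaussian and Lorentzian profiles; returns the shape and parameters
    with the smallest sum of squared residuals."""
    best = None
    for name, f in (("gaussian", gaussian), ("lorentzian", lorentzian)):
        for i in range(41):
            x0 = xs[0] + (xs[-1] - xs[0]) * i / 40.0
            for j in range(49):
                w = 0.02 + 0.48 * j / 48.0
                sse = sum((y - f(x, x0, w)) ** 2 for x, y in zip(xs, ys))
                if best is None or sse < best[0]:
                    best = (sse, name, x0, w)
    return best[1], best[2], best[3]
```

The shape whose family attains the lower sum of squared residuals is retained, mirroring the Gaussian-to-Lorentzian crossover reported in Fig. 6.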
\paragraph*{} The equivalence between the absorption and PLE spectra is shown in Fig. 3 and can be inferred by comparing Fig. 4 and S2.
This is attributed to ultrafast non-radiative relaxation bringing the photogenerated charge carriers to the lowest excited and emissive states in CdSe near the band edge where the PLE signal is recorded \cite{tonti2004excitation}, resulting in the CdS PL quenching.
This is consistent with the work of Tessier \textit{et al.}\cite{tessier2013efficient} on CdSe/CdS core/crown structures.
This demonstrates the formation of CdS/CdSe heterostructures with charge carrier diffusion between both materials instead of independent nanocrystal populations.
\paragraph*{} Comparison of the absorption and PLE normalized spectra shows that CdSe PLE peaks are less intense than their respective absorption peaks.
This observation could be the result of either secondary nucleation of CdSe NPLs, which have a lower quantum yield than CdSe rings, or a smaller quantum yield of the CdSe rings when directly excited.
In both cases, the diminished CdSe quantum yield would result in weaker CdSe PLE compared to CdSe absorption, whereas the CdS peaks would remain unchanged.
Importantly, the comparison of the normalized absorption and PLE spectra indicates that no isolated CdS NPLs are present in the sample.
Indeed, the presence of isolated CdS NPLs would increase CdS absorption while CdSe absorption of the overall sample would stay the same, thus decreasing the CdSe absorption when normalized to that of CdS.
On the other hand, any ringless CdS NPLs would not contribute to PLE as the acquisition window only probes CdSe emission.
The presence of CdS NPLs would therefore lead to the normalized CdSe PLE peaks being more intense than the normalized CdSe absorption, which is the opposite of what is observed.
By contrast, the spectral signature of remaining CdS NPLs was observed for heterostructures grown by the continuous injection method.
This indicates that the non-injection synthesis employed in the present study has the advantage of producing better-defined heterostructures.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.9\columnwidth]{fig/fig5.pdf}
\caption{Photoluminescence spectra of (a) \SI {3}{ML} and (b) \SI {4}{ML} cQRs with increasing amount of Se added to the synthesis.}
\end{figure}
\paragraph*{} In addition to the proposed ring geometry, other heterostructure geometries could generate these results.
We will now demonstrate that the proposed geometry is the most likely candidate by refuting the two most probable geometry alternatives: CdS/CdSe core/shell and CdS/CdSe heterodimer.
Knowing that the CdS and CdSe are electronically connected, it is possible that a CdSe NPL has grown on top of the CdS NPL instead of being connected to its rim, as in the case of core/shell CdSe/CdS NPLs\cite{mahler2012core}.
If it was the case, a striking redshift and a wider bandwidth of the CdSe absorption and emission transitions would be observed as demonstrated by Mahler \textit{et al.} \cite{mahler2012core} for CdSe NPLs with a CdS shell.
However, the CdSe emission is precisely the same as that of shell-free CdSe NPLs with the same thickness as the CdS NPL seeds\cite{mahler2012core}.
This confirms that CdSe is laterally attached to the CdS NPLs.
Two planar geometry limit cases are possible: the proposed CdSe rings or heterodimers\cite{de2011size}.
The ring geometry is more probable as our cQR synthesis is closely analogous to the core/crown synthesis\cite{mahler2012core}, which implies a similar reactivity for all facets of the periphery.
Furthermore, the quantum ring geometry was demonstrated by line-EDX for continuous injection grown CdS/CdSe heterostructures by Delikanli \textit{et al.}\cite{ delikanli2015continuously}.
\paragraph*{}For larger Se amounts, the CdSe transitions have negligible inhomogeneous broadening resulting in a Lorentzian band shape and a small FWHM.
This is observed in Fig. 5 for both \SI{3}{ML} and \SI{4}{ML}.
The sharp absorption peaks of the LH-E and HH-E transitions are distinct and intense (Fig. 4).
For the higher Se amount, the CdSe have the electronic structure of atomically flat CdSe NPLs.
For small Se amounts, the CdSe islands around all CdS NPLs are randomly dispersed in size, resulting in a normal distribution of energy levels.
This inhomogeneous broadening translates into a broader Gaussian-shaped band, often observed for ensemble colloidal quantum dots (cQDs), as shown in Fig. 5.
The absorption spectra of these samples do not have the LH-E and HH-E transitions of a quantum well but instead the broad $1S_e-1S_{3/2}$ transition of cQD ensembles.
However, the very small volume of CdSe islands compared to the CdS NPLs on which they reside makes their absorption appear only as a weak tail in Fig. 4.
In conjunction with both thermal and inhomogeneous broadening, it is thus difficult to discern distinct transition peaks for CdSe islands.
In light of these observations, these samples consist of CdSe dots grown around CdS NPLs.
\subsection*{Overview of Synthesized Samples}
\paragraph*{}The series of lateral CdS/CdSe heterostructures prepared emit in the higher energy portion of the visible spectrum, yielding the bright purple, blue and green PL shown in Fig. 2a.
Whereas such colors are obtainable from a number of nanocrystal types such as CdSe cQDs, optimizing their nucleation-growth to obtain smaller sizes for short-wavelength emission is more demanding.
Indeed, a large concentration of precursors is required to initiate the nucleation process, in turn calling for short growth times to keep the cQDs small.
\paragraph*{} In the lateral heterostructure cQR case, CdSe growth onto the CdS NPL atomically thin edges is reminiscent of deposition and epitaxial growth processes on crystalline substrates.
In this heterogeneous growth regime, the lateral extent is limited by the amount of Se precursor and without restrictions on reaction time for CdSe growth.
The resulting heterostructures are thin colloidal sheets curled into tubes as shown in Fig. 2b, just like the homogeneous NPLs \cite{bouet2013two}.
While control of the flat NPL thickness at the atomic level causes discrete jumps of the emission energy, as illustrated from blue colors to green ones in Fig. 1a corresponding to 3 and 4 ML respectively, the continuous PL tuning is recovered through the lateral extent of CdSe grown.
Therefore, cQRs provide two degrees of freedom to control the quantum confined energy levels.
Indeed, the thickness can be set to obtain emission from CdSe near the desired color and its width controls the PL bandwidth while fine tuning the emission wavelength.
\subsection*{Wavefunction Dimensionality Evolution}
\paragraph*{}As presented in Fig. 6, the sample series behaviour changes drastically as the Se amount increases.
This is attributed to a change in the CdSe dimensionality, \textit{i. e.} the degrees of freedom in which the charge carriers can move in the CdSe.
These varying degrees of freedom originate from a varying description of the charge carrier envelope wavefunction.
If charge carriers are free in a direction (\textit{e. g.} $x$), they are approximated as delocalized in that direction and, as in a bulk crystal, $\Ket{\vect{k}_x}$ is an adequate description of the wavefunction in $x$.
Inversely, if the charge carriers are confined in $x$, the bound states of the particle in a box model $\Ket{n_x}$ are more suitable to describe the wavefunction in $x$.
\begin{figure}[!htb]
\centering
\includegraphics[width=\columnwidth]{fig/fig6.pdf}
\caption{Energy of the photoluminescence maximum (filled symbols) and FWHM (empty symbols) of (a) \SI {3}{ML} and (b) \SI {4}{ML} cQRs. The parameters are extracted from the best fit between a Gaussian and a Lorentzian peak shape. The Gaussian peak shape is preferable at lower Se amount, specifically up to \SI{20}{\micro mol} for \SI {3}{ML} and \SI{10}{\micro mol} for \SI {4}{ML} cQRs.}
\end{figure}
\paragraph*{} In the CdSe islands resulting from a low Se amount, the charge carriers have no degrees of freedom and their state can be described by $\Ket{n_x, n_y, n_z}$.
At high Se amount, the charge carriers are only confined in $z$.
Thus, they have two degrees of freedom and can be described by $\Ket{\vect{k}_x,\vect{k}_y,n_z}$.
Between these two extreme cases, the system goes through a 1D configuration with periodic boundary conditions with charge carriers diffusing around the ring.
It is this intermediate regime that is defined as the cQR.
The most appropriate representation of the envelope wavefunction in this case would be that of a cylindrical box with Bessel functions in the plane and thus $\Ket{n_r, l, n_z}$ orbitals.
This strictly unidimensional system with periodic boundary conditions can then provide a permanent lattice for the aforementioned fundamental mesoscale ring physics experiments such as the Aharonov-Bohm effect and persistent currents.
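The discreteness of the ring spectrum invoked here can be illustrated with the simplest model, a charge carrier on an ideal 1D ring, for which $E_l = \hbar^2 l^2/(2 m^* R^2)$. The effective mass and radius below are illustrative values, not parameters measured in this work:

```python
HBAR = 1.054571817e-34   # J s
M_E = 9.1093837015e-31   # kg
Q_E = 1.602176634e-19    # C

def ring_levels_meV(radius_nm, m_eff=0.13, l_max=3):
    """Energies E_l = hbar^2 l^2 / (2 m R^2) (in meV) of a charge
    carrier on an ideal one-dimensional ring; m_eff = 0.13 m_e is a
    typical CdSe electron effective mass (illustrative only)."""
    r = radius_nm * 1e-9
    pref = HBAR ** 2 / (2.0 * m_eff * M_E * r ** 2)   # joules
    return [pref * l * l / Q_E * 1e3 for l in range(l_max + 1)]
```

Each $l \neq 0$ level is doubly degenerate in $\pm l$; a magnetic flux through the ring lifts this degeneracy, which is the essence of the Aharonov-Bohm physics mentioned above.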
\section{Conclusion}
\paragraph*{} We demonstrated a user-friendly one-pot synthesis of both \SI{3}{} and \SI{4}{ML} CdS/CdSe core/crown heterostructures.
The synthesis method yields better-defined structures than the continuous injection method.
In these heterostructures, excitons generated in the CdS NPLs are efficiently transferred and confined to the CdSe outer ring.
Ring width is controlled by the amount of Se precursor added to the synthesis.
At low and high amounts of Se, respectively, QD-like CdSe and CdSe NPL-like photoluminescence spectra are observed.
This shows a change in the electronic dimensionality from 0D to 2D with an intermediate 1D dimensionality ring.
The emission maximum can be adjusted from \SI{2.42}{eV} to \SI{2.90}{eV} (\SI{512}{nm} to \SI{428}{nm}) using both the CdS NPL seed thickness and the CdSe ring width, giving two degrees of freedom to the tuning of the optical properties.
Hence, we part with the rigid electronic structure of standard NPLs and are now able to continuously tune the electronic and optical properties of a CdS/CdSe lateral heterostructure.
\paragraph*{} The non-injection synthetic method should be applicable to the growth of CdSe around other NPLs.
Also, the quantum ring geometry grants electronic structure fine-tuning to NPLs that, despite their advantages over cQDs, could only be tuned discretely. As such, this geometry could advantageously be applied to II-VI semiconductors other than CdSe. Furthermore, quantum rings could be synthesized with semiconductor alloys, which would lead to an even greater control over their band alignment.
While this article identified the 0D and 2D geometries in photoluminescence and absorption spectra, the 1D quantum ring state was only interpolated.
An extensive study of these spectra with fine variation of the Se amount around the observed shift from 0D to 2D behavior is needed to systematically synthesize true 1D heterostructures.
\begin{acknowledgement}
The research was supported by the Natural Sciences and Engineering Research Council (NSERC). The authors thank Patrick Larochelle for technical support, Denis Boudreau for sharing experimental equipment, and Dominic Larivière as well as Anna Ritcey for their input on the manuscript and interesting discussions.
\end{acknowledgement}
\paragraph*{Supplementary materials:}Five figures: TEM micrographs for \SI{3}{ML} sample, EDS analysis example, X-ray diffractogram, PLE spectra for each sample, and relation between PL$_\text{max}$ and FWHM.
\section*{Introduction}
The evolution after the end of nucleation (the main stage), with the consumption of the main quantity of the surplus metastable phase, requires a separate description.
This description can be based on the real form of the size
spectrum \cite{preprint} or
can be done by the asymptotic analysis
\cite{ls}, \cite{kukos}.
The free-molecular regime will be chosen. This is done because the formation of the exhausted zones around the droplets strongly influences the kinetics, and the description cannot be based on averaged characteristics. The only situation where the diffusion regime can be combined with averaged characteristics is when the embryos go from one cave to another and pass through many exhausted zones. But this situation can be reduced to the effective absence of the cave and, thus, the regime will be the free-molecular one. This is important because earlier there was the restriction that the size be less than the mean free path of a molecule divided by the coefficient of condensation. Now this restriction is not essential.
The first stages of the late evolution were investigated in
\cite{preprint}.
\section{Balance equation}
Define $n$ as the number of embryos with $\nu$ molecules.
The evolution in time $t$ is described by
equation
$$
\frac{\partial n}{\partial t} = W^+(\nu-1) n(\nu-1)
-W^-(\nu) n(\nu)
-W^+(\nu) n(\nu)
+W^-(\nu+1) n(\nu+1)
$$
Here $W^+$ is the direct kinetic coefficient, $W^-$ is the
inverse kinetic coefficient. In the Fokker-Planck
approximation there will be a diffusion term and a
regular-growth term.
In the pure-diffusion approximation the balance equation will be
$$
\frac{\partial n}{\partial t} = W^+(\nu)[ n(\nu-1)
- n(\nu)
- n(\nu)
+ n(\nu+1) ]
$$
and
$$
\frac{\partial n}{\partial t} = W^+(\nu)
\frac{\partial^2 n(\nu)}{\partial \nu^2}
$$
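This balance equation can be integrated numerically with an explicit finite-difference scheme; a minimal sketch, taking $W^+(\nu) = W^0 \nu^{2/3}$ as in the next section (the grid size, $W^0$ and the frozen-endpoint boundary treatment are illustrative choices):

```python
def evolve(n, w0=1.0, dnu=1.0, steps=400):
    """Explicit (FTCS) integration of dn/dt = W0 * nu**(2/3) * d2n/dnu2
    on the grid nu_i = (i+1)*dnu, endpoints held fixed; the time step
    obeys the stability bound at the largest nu, keeping n nonnegative."""
    size = len(n)
    d = [w0 * ((i + 1) * dnu) ** (2.0 / 3.0) for i in range(size)]
    dt = 0.4 * dnu ** 2 / (2.0 * max(d))
    n = [float(v) for v in n]
    for _ in range(steps):
        new = n[:]
        for i in range(1, size - 1):
            new[i] = n[i] + dt * d[i] * (n[i + 1] - 2.0 * n[i] + n[i - 1]) / dnu ** 2
        n = new
    return n
```

Starting from a narrow peak, the scheme exhibits the diffusive spreading in $\nu$ discussed below.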
\section{Some estimates }
According to \cite{preprint}, after the regular relaxation there
will be a diffusion erosion of the spectrum. For this
stage one can give some estimates.
\subsection{Diffusion}
One has to use $s$-scale. Here $s$ is the surface of the
embryo
$$
s = \nu^{2/3}
$$
This is because the kinetic coefficient of
absorption $W^+$ is proportional to $s$
$$
W^+ = W^0 s
$$
Then in diffusion
equation
will be
\begin{equation}\label{purdif}
\frac{\partial n}{\partial t} = W^0 \nu^{2/3}
\frac{\partial^2 n(\nu)}{\partial \nu^2}
\end{equation}
In the leading term
$$
\frac{\partial^2 n(\nu)}{\partial \nu^2}
\rightarrow
\nu^{-2/3}
\frac{\partial^2 n(\nu)}{\partial s^2}
$$
Then
$$
\frac{\partial n(\nu)}{\partial t} = W^0
\frac{\partial^2 n(\nu)}{\partial s^2}
$$
The same will be for $n(s)$.
The diffusion scale will be
$$
s_{diff} \sim t^{1/2}
$$
For the linear size of an embryo
$$
\rho = \nu^{1/3}
$$
we have
$$
\rho_{diff} \sim t^{1/4}
$$
\subsection{Regular growth}
The coordinate of the regular growth will be determined
from
$$
\frac{d \rho}{d t} = \frac{\zeta}{t_t}
$$
Here
$t_t$ is the characteristic time, $\zeta$ is the
supersaturation defined by
$$
\zeta = n_1/n_\infty - 1
$$
where $n_1$ is the molecules number density of vapor, $n_\infty$ is the
molecules number
density of the saturated vapor.
Then
$$
\rho_{reg} = \int \frac{\zeta}{t_t} dt
$$
The behavior of $\zeta $ is given by
$$
\zeta = \frac{2a}{3\rho_c}
$$
where index $c$ marks the critical embryo.
Having taken
$$
\rho_c \sim \rho_{diff}
$$
we get
$$
\zeta \sim \frac{2a}{3\rho_c}|_{\rho_c = \rho_{diff}}
$$
Then
$$
\zeta \sim \frac{2a}{3\rho_c}|_{\rho_c = t^{1/4}}
$$
and
$$
\zeta \sim t^{-1/4}
$$
Then
$$
\rho_{reg} \sim t^{3/4}
$$
We see that
$$
\rho_{diff} \ll \rho_{reg}
$$
and it is clear that the main quantity of substance will be
in the tail growing regularly.
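The estimate $\rho_{reg} \sim t^{3/4}$ can be checked by direct numerical integration of $d\rho/dt = \zeta/t_t$ with $\zeta \sim t^{-1/4}$; a small sketch with all constants set to unity:

```python
import math

def regular_growth_exponent(a=1.0, t_end=1000.0, n=100000):
    """Euler-integrate d(rho)/dt = a * t**(-1/4) from t = 1, with
    rho(1) chosen on the analytic curve rho = (4/3) * a * t**(3/4),
    and read off the power-law exponent between t_end/10 and t_end."""
    dt = (t_end - 1.0) / n
    t = 1.0
    rho = (4.0 / 3.0) * a
    rho_mid = None
    for _ in range(n):
        rho += a * t ** -0.25 * dt
        t += dt
        if rho_mid is None and t >= t_end / 10.0:
            rho_mid, t_mid = rho, t
    return (math.log(rho) - math.log(rho_mid)) / (math.log(t) - math.log(t_mid))
```

The extracted slope is $3/4$ to within the Euler discretization error, while $\rho_{diff} \sim t^{1/4}$ lags far behind.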
We assume that, during the period when the
tail is essential, the approximation $W^0 \approx const$
holds.
\section{The form of the tail}
We determine the form of the tail. Imagine that at big
sizes there is a dominating regular growth, at small sizes
of the critical order there is a dominating diffusion. The
boundary will be marked by the index $b$.
Certainly, at the
very small sizes there is a regular dissolution. But this
region is simply negligible and can be excluded by a
shift of the zero point.
Then for $\rho>\rho_b$
$$
\rho - \rho_b = \int_{t'}^t \frac{\zeta}{t_t} dt
$$
Then the distribution function $f$ is given by
$$
f(\rho, t ) = f_b(\rho_b, t')
$$
To get $f_b$ one can use the diffusion approximation
according to \cite{book2}, \cite{preprint}
$$
f_b = \frac{A_0}{\sqrt{4W^0t'}} \exp(-\frac{(s_b-s_c)^2}{4
W^0 t'})
$$
In the absolutely stationary approximation
one can put the normalizing constant
$$A_0 = 1$$
Due to the dissolution this constant will be different;
this question will be solved later.
Now we consider $$\frac{(s_b-s_c)^2}{4
W^0 t'}$$
As an estimate one can take
$$
\rho_b - \rho_c = 2 \rho_c
$$
Then
$$
\frac{(s_b-s_c)^2}{4
W^0 t'}
\sim
\frac{64 \rho_c^4}{4 W^0 t'}
$$
If we take $\rho_c \sim t^{1/4}$ (more precisely, it is
necessary to take $\rho_c \sim (W^0 t)^{1/4}$) then
$$
\frac{(s_b-s_c)^2}{4
W^0 t'}
\sim
\frac{64 }{4}
$$
Then
$$
f_b \sim
\frac{A_0}{\sqrt{4W^0t'}} \exp(-16)
$$
\section{More precise formula}
One can use another more precise equation
(see \cite{book2}, \cite{preprint})
$$
f_b =
\frac{A_0}{\sqrt{4W^0t}} \exp(-\frac{(s-s_{c0})^2}{4
W^0 t})
-
\frac{A_0}{\sqrt{4W^0t}} \exp(-\frac{(s+s_{c0})^2}{4
W^0 t})
$$
Here $s_{c0}$ marks the initial position of a critical
surface.
After the transformations we get
$$
f =
\frac{A_0}{\sqrt{4W^0t}}
\exp(-\frac{s^2}{4
W^0 t})
\exp(-\frac{s_{c0}^2}{4
W^0 t})
[\exp(\frac{2 s s_{c0}
}{4
W^0 t})
-
\exp(- \frac{2 s s_{c0}
}{4
W^0 t})
]
$$
One can see that
$$
s s_{c0} / 4 W^0 t \ll 1
$$
Indeed, since
$$
s^2 \sim 4 W^0 t
$$ and $s_{c0}$ is fixed
we come to
$$
s s_{c0} / 4 W^0 t \sim t^{-1/2} \ll 1
$$
Then
after the decompositions of the corresponding exponents one
can come to
$$
f =
\frac{A_0}{\sqrt{4W^0t}}
\exp(-\frac{s^2}{4
W^0 t})
\exp(-\frac{s_{c0}^2}{4
W^0 t})
\frac{ 4 s s_{c0}}{4 W^0 t}
$$
So,
$$
f \sim y \exp(-y^2)
$$
with
$$
y = \frac{s}{\sqrt{4 W^0 t}}
$$
The maximum of this distribution is attained at
$$
y_m = \sqrt{2}/2
$$
This value will be close to the critical value
$$
y_m \sim y_c
$$
For
$$
y_b = \alpha y_c
$$
with some parameter $\alpha$ we get
$$
f_b = \frac{A_0}{\sqrt{4 W^0 t}} \exp(-\beta)
$$
with constant $\beta$.
\section{Dissolution of the spectrum head}
The dissolution of the head determines the form of the
tail.
At first there is a pure dissolution of a Gaussian and
$$
\rho_c \approx const
$$
Then
$$
f_b \sim \frac{1}{\sqrt{4 W^0 t}} \exp(-(s_b - s_c)^2 / (4
W^0 t ))
$$
or
$$
f_b \sim \frac{1}{\sqrt{4 W^0 t}} \exp(- 9 \rho_{c0}^4 / (4
W^0 t ))
$$
For the behavior of $f_b$ we get
$$
f_b \sim \frac{1}{t^{1/2}} \exp( - \gamma / t)
$$
with a positive parameter $\gamma = const$.
The last function is positive, it has an evident asymptotic
behavior
$f_b \rightarrow t^{-1/2}$ at big $t$. At small $t$ it goes
to zero. Thus, it has a maximum.
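The location of this maximum follows from setting the derivative to zero: $df_b/dt = t^{-3/2} e^{-\gamma/t} (\gamma/t - 1/2)$, so the maximum sits at $t = 2\gamma$, consistent with the value $t/\gamma \approx 2$ used in the section on the short tail. A direct scan:

```python
import math

def argmax_fb(gamma, t_max, n=200000):
    """Brute-force location of the maximum of
    f_b(t) = t**(-1/2) * exp(-gamma/t); setting the derivative
    to zero gives t = 2*gamma."""
    best_t, best_f = None, -1.0
    for k in range(1, n + 1):
        t = t_max * k / n
        f = math.exp(-gamma / t) / math.sqrt(t)
        if f > best_f:
            best_t, best_f = t, f
    return best_t
```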
To see the formation of
the tail one can approximately write
$$
\rho - \rho_{c0} = \frac{\zeta_0}{t_t} (t-t')
$$
where $\zeta_0$ is some initial supersaturation.
Here we assume $\zeta_0$ to be a constant, which is a very
strong assumption. So, below, all results have the status of
estimates.
The maximal size $ \rho_{lim}$ is
$$
\rho_{lim} = \rho_{c0} + \zeta_0 t / t_t
$$
Then having put the starting moment at zero one can get
$$
t' = (\rho_{lim} - \rho) t_t/ \zeta_0
$$
The distribution at the tail looks like
$$
\tilde{f}_\rho = f(z) = (\frac{\zeta_0}{z t_t})^{1/2}
\exp(-\frac{\gamma \zeta_0 }{z t_t})
$$
as a function of
$$
z = \rho_{lim} - \rho
$$
Now we shall calculate the quantity of substance in the
tail $G_{tail}$. We get
$$
G_{tail}
=
\int_{3\rho_c}^{\rho_{lim}}
\rho^3
(\frac{\zeta_0}{t_t(\rho_{lim} - \rho)})^{1/2}
\exp(-\gamma \frac{\zeta_0}{t_t(\rho_{lim} - \rho)})
d \rho
$$
Having introduced
$$
y = \frac{\rho_{lim} - \rho}{\epsilon} \ \ \ \ \ \epsilon=
\gamma \zeta_0 / t_t
$$
we get
$$
G_{tail} =
\frac{\epsilon}{\gamma^{1/2}}
\int_0^{\frac{\rho_{lim} - 3 \rho_c}{\epsilon}}
(\rho_{lim} - \epsilon y)^3 \frac{1}{y^{1/2}} \exp(-1/y) dy
$$
One can see that $G_{tail}$ grows very fast. The
asymptotics
is
$$
G_{tail} \rightarrow \rho_{lim}^{5/2} \sim t^{5/2}
$$
But this behavior will take place only until
$$
s s_c \sim 4 W^0 t
$$
After this moment of time the boundary condition will be
another
$$
f|_{\rho_b} = A_0 \exp(-const)
$$
or at larger scales of time
$$
f|_{\rho_b} \sim t^{-1/2}
$$
\subsection*{Alternative approach}
At the further stage
$$
f|_{\rho_b} \sim \frac{1}{\sqrt{t}}
$$
So, we see that the variation is very small. The drift of $\rho$ is
rather small also
$$
\rho \sim \int \zeta dt \sim t^{3/4}
$$
because of
$$
s_c \sim t^{1/2} \ \ \ \ \rho_c \sim t^{1/4} \ \ \ \
\zeta \sim t^{-1/4}
$$
The subintegral function $g_{tail}$ in the
quantity of substance
$$ G_{tail} = \int g_{tail} d \rho
$$
in the tail grows like
$$
g_{tail} \sim t^{9/4}
$$
It grows fast.
\section{Impulse regime}
Since $G_{tail}$ grows very fast one gas to analyze this
behavior.
At first
\begin{equation}\label{p}
G_{tail} \ll G_{total}
\end{equation}
where $G_{total}$ is the total quantity of substance in
droplets.
The main role here is played by diffusion and the solution can be
described with the help of combinations of Gaussians. This
model will be called the diffusion-regular model. It is
investigated in \cite{preprint}.
Since
$g_{tail}$ grows fast it means that the monodisperse
approximation for $G_{tail}$ is quite suitable
$$
G_{tail} \sim \rho_{eff}^3 N_{eff}
$$
Here $N_{eff}$ is the effective number of droplets
in the tail and $\rho_{eff}$ is their coordinate.
Certainly, $N_{eff}$ is not a constant value, it grows.
At $t$ determined by
$$
G_{tail} \sim G_{total} - G_{tail}
$$
the tail begins to eat the head of the spectrum.
Here the tail is not yet an essentially decreasing one.
It occurs very fast: the spectrum eats at first the head
and then the rather flat beginning of the tail.
This process can be easily described by the regular
growth of the tail and by the regular dissolution of the
eaten part of the spectrum.
Then only the end of the tail will exist. This end can be
then treated as a monodisperse spectrum.
Later this part will be dissolved by diffusion and the process
repeats.
We shall call this process "impulse condensation".
This part is described in \cite{preprint}.
\section{Short tail}
The question arises whether the process of impulse
condensation repeats. At first sight it seems that this
process will repeat infinitely. But there is one objection.
The function $f(t) = \exp(-\gamma/t)/\sqrt{t}$ described above has
a maximum. This maximum takes place at $t/\gamma \equiv y = y_m \approx 2$ and
for $y$ essentially less than $y_m$ there is a rapid
decrease to zero.
Certainly, after every cycle the time $t$ has to be
shifted.
The crucial point is whether at $y_m$ the condition
(\ref{p}) is observed. An alternative case
$$
G_{tail} \sim G_{total} - G_{tail}
$$
at $y_m$ leads to another further evolution.
Here the tail is essentially decreasing.
This case will
be called "the case of the adjusted tail".
In this section one can study the evolution on the base of
the regular growth. This radically simplifies the
situation.
Here we shall give the main ideas of description in this
situation.
The vapor consumption by the head of the tail can eat only
this same head of the tail. There are no other parts to eat.
Then we come to the following balanced picture:
\begin{itemize}
\item
The task is to determine $\rho_b = 2\div3 \rho_c $,
$\rho_c = 2a/[3 \ln(\zeta+1)] \approx 2a/(3\zeta)$ where $a$ is
a renormalized surface tension.
\item
The size distribution for $\rho > \rho_b $ is known.
We have
$$ f(\rho, t) = f(x) $$ for $$x = z-\rho$$ $z\equiv
\rho_{lim}(t)$. Then
$$
G_{tail} = \int^{z-\rho_b}_0 (z-x)^3 f(x) dx
$$
\item
For $f(x)$ we have
$$
f(x) \sim \exp(-\frac{\tilde{\gamma}}{x}) / \sqrt{x}
$$
\item
The balance equation is
$$
G_{tail} + G_{head} + n_{\infty}(\zeta+1) = const
$$
where $G_{head} $ is the quantity of the substance in a
region $0.7 \rho_c < \rho < \rho_b$. In the region
$\rho < 0.7 \rho_c$ one can simply neglect the substance.
\item
In the region $0.7 \rho_c < \rho < \rho_b$ one can write
the pure\footnote{I.e. without the regular growth and with the
constant coefficient of diffusion.} diffusion equation
(\ref{purdif}) with boundary conditions
$$f(\rho,t) = 0$$
at $\rho=0.7 \rho_c$
$$f(\rho,t) = f_{tail}(x)$$
at $\rho = \rho_b$
\item
In the region $\rho < 0.7 \rho_c$ one can simply neglect the
evolution.
\end{itemize}
\section{Further simplification}
Already the last system of equations can be solved when we
know $f(x)$. The function $f(x)$ has to be known from the
initial conditions. These conditions are:
\begin{itemize}
\item
The direct result of the nucleation period \cite{kuni}.
\item
The result of the impulse regime \cite{preprint}.
\end{itemize}
To show the result in a most simple way one can use some
simplifications.
Further simplification is the following
\begin{itemize}
\item
To write the balance equation as
$$
G_{tail} = fixed
$$
\item
To use that
$$
G_{tail} \sim N_{tail} \rho_b^3
$$
where $N_{tail}$ is the effective (proportional)
number of the droplets in the tail.
\end{itemize}
The function $f_{tail}$ is one of the possible
realizations, which has a certain disadvantage. Really, for
$\rho > \rho_{lim}$ the distribution is zero. Meanwhile,
the diffusion in the region $\rho> \rho_b$ inevitably leads
to the big droplets. Assume that the behavior of the gaussian at big
arguments can be approximated as
$$
\tilde{f} \sim \exp ( - \gamma' \rho)
$$
with some parameter $\gamma'$.
The same asymptotic can be derived from $f_{tail}$ by the
steepest descent method.
So, the last approximation is very fruitful. It allows one to
calculate the integrals for $G_{tail}$ analytically.
One can make some new simplification. Since the spectrum is
rather sharp, it means that
$$
\frac{ d \rho_b}{dt} \sim \frac{ d \rho_c}{dt}
$$
which leads asymptotically to
$$
\zeta \sim t^{-1/2}
$$
$$
\rho \sim t^{1/2}
$$
Since the diffusion width is $s_d \sim t^{1/2}$, $\rho_d
\sim t^{1/4}$ we see that the diffusion does not radically
change the character of the process. However, since the
Gaussian does not have a finite support, the droplets of the
greatest sizes, which will dominate the further
consumption, appear due to diffusion. This is one more
point in favor of the approximation $\tilde{f}$.
The constant $\gamma$ is not a precise
constant but a slowly varying function. This
effect can be taken into account by standard methods.
An approximation $\tilde{f}$ is so simple that we have no
need to use the monodisperse approximation.
One can calculate $G_{tail}$ directly by integration
$$
G_{tail} \sim \int_{\rho_b}^{\infty} (z-x)^3 \exp(-\gamma' x) dx
$$
and the integral can be taken analytically.
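Indeed, repeated integration by parts gives, with $b = z - \rho_b$, $\int_{\rho_b}^{\infty} (z-x)^3 e^{-\gamma' x} dx = e^{-\gamma' \rho_b} (b^3/\gamma' - 3b^2/\gamma'^2 + 6b/\gamma'^3 - 6/\gamma'^4)$. A sketch checking this closed form against brute-force quadrature (the parameter values in the test are arbitrary):

```python
import math

def tail_integral(z, a, g):
    """Closed form of I = int_a^oo (z - x)**3 * exp(-g*x) dx,
    obtained by repeated integration by parts, with b = z - a."""
    b = z - a
    return math.exp(-g * a) * (b ** 3 / g - 3.0 * b ** 2 / g ** 2
                               + 6.0 * b / g ** 3 - 6.0 / g ** 4)

def tail_integral_num(z, a, g, n=200000, span=40.0):
    """Midpoint-rule check of the same integral, truncated at a + span/g."""
    h = (span / g) / n
    s = 0.0
    for k in range(n):
        x = a + (k + 0.5) * h
        s += (z - x) ** 3 * math.exp(-g * x)
    return s * h
```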
\section{Initial asymptotic solution}
Since we came to the balance equations typical for the
asymptotic solutions of Lifshic and Slyozov \cite{ls} it is
necessary to analyze this approach.
As a regular law of droplets growth one has to take the
precise expression
$$
\frac{d \rho}{dt}
=
\frac{6 \sigma}{9 t_t} (\frac{1}{\rho_c} - \frac{1}{\rho})
$$
where $\sigma$ is the renormalized surface tension.
For $$
u= \rho / \rho_c
$$
one gets
$$
\frac{du}{dt} =\frac{6 \sigma}{9 t_t
\rho_c } (\frac{1}{\rho_c} - \frac{1}{\rho}) -
\frac{u}{\rho_c} \frac{d\rho_c}{dt}
$$
In the reduced coordinates
$$
\frac{du}{d\tau} = (1-1/u) - \tilde{\gamma} u
$$
$$
\tilde{\gamma} = \frac{9 t_t}{6 \sigma} \rho_c \frac{d\rho_c}{dt}
$$
Then, to avoid a violation of the substance balance, one
has to have
$$
\frac{du}{dt} \leq 0
$$
for all $u$.
The same relation holds in terms of any time variable $\tau$
that increases with $t$, since
$$
\frac{du}{d\tau} = \frac{du}{dt}\frac{dt}{d\tau}
$$
Namely, $$\tau = 3 \ln(\rho_c/\rho_c(t=0))$$ was the choice of
Lifshitz and Slyozov (LS).
The last requirement
is very strong. In reality, one can imagine
situations where it is not valid. In the general case it is
necessary that this requirement holds only in an
integral sense, but in the LS theory it has to be valid at
every moment.
LS required that
$$
\max_{\{ u \}} \left(\frac{du}{d\tau}\right) \equiv r \rightarrow 0 .
$$
Meanwhile the
real condition should be the following:
\begin{itemize}
\item
If $r$ has a power-law asymptotics (an asymptotics as an argument in some power),
then this asymptotics
has to be
zero.
\end{itemize}
But the situation when $r$ has no asymptotics at all is also quite possible.
For example, the case of oscillations drops out of
consideration here.
The crucial supposition of the LS theory is the existence of
an asymptotics for $r$ (in the class of constants).
The recipe of LS
is to consider $r$ as a monotonic function, and then it
is reasonable to put $r=0$ asymptotically.
It is necessary to stress that the condition
$du/dt \leq 0$ for every $u$
leads to
$$
\frac{d\rho}{dt} - \frac{\rho}{\rho_c} \frac{d \rho_c}{dt}
\leq 0
$$ for every $\rho$,
or
$$
\frac{d \ln \rho}{dt} \leq \frac{d \ln \rho_c}{dt}
$$ for every $\rho$.
This is a very important relation.
From the last inequality it follows that no droplet can
escape: every droplet will inevitably be dissolved.
But at some $u$ there will be no dissolution. Let it be
$u_0$. The requirement $r=0$
leads to
$$
u_0 =2
$$
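This can be checked numerically; the sketch below (a plain grid search, no fitted constants) confirms that the reduced rate $(1-1/u)-\tilde\gamma u$ is non-positive for every $u$ exactly when $\tilde\gamma \ge 1/4$, with the single non-dissolving size at $u_0 = 2$:

```python
import numpy as np

# The reduced rate of growth du/dtau = (1 - 1/u) - gamma*u on a dense grid.
def rate(u, gamma):
    return (1.0 - 1.0 / u) - gamma * u

u = np.linspace(0.05, 20.0, 200001)

r_max = rate(u, 0.25).max()          # at the limiting gamma = 1/4 the maximum is 0 ...
u0 = u[np.argmax(rate(u, 0.25))]     # ... attained at u0 = 2
print(r_max, u0)
print(rate(u, 0.30).max())           # gamma > 1/4: negative, every size dissolves
```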
$$
\frac{
\frac{6 \sigma}{9 t_t}
}{\rho_c \frac{d\rho_c}{dt}} = 4 .
$$
The last equation can be easily integrated, which leads to
\begin{equation}
\label{4/9}
2 \rho_c^{2} = const + t \frac{6 \sigma}{9 t_t}
\end{equation}
but one has to note that if we require the validity of the
previous equation only in the averaged sense, we get
approximately the same integral law. So, one cannot regard
the law of $4/9$ (this is the characteristic coefficient in
the diffusion case; here the regime is the free-molecular one)
as an experimental justification of the LS
approach.
The law (\ref{4/9}) can be easily justified. Indeed, if
there were another power-law asymptotics, we would immediately
come to a violation of the balance condition.
One also has to stress that it is absolutely impossible to
differentiate (\ref{4/9}): the result would have nothing in
common with the real situation. This fact is typical for such
asymptotics.
\section{The form of the spectrum in LS theory}
It is clear that the evolution of the system is governed by
the spectrum of the droplet sizes. This function
determines the behavior of the supersaturation in the
system.
Within the LS approach the behavior of $\rho_c$
is already known.
Then it is possible to solve the inverse problem: to
reconstruct the spectrum on the basis of the behavior of the
supersaturation. Conceptually this direction of derivation is
wrong, but technically it is quite possible.
Under the regular law of growth one finds for the
distribution function
$$
\phi(\tau, u) = - \xi(\tau - \tau(u))/v_u
$$
where
$$
v_u = du/d\tau
$$
is already known
and
$$
\tau(u)
= \int_0^u du/v_u
$$
is also known, and
$\xi$ is some arbitrary function to be
determined.
The balance equation allows one to determine the form of $\xi$,
which solves the problem.
To solve it in a simple form, $\tau$ is chosen as
$$\tau = 3 \ln(\rho_c(t)/\rho_c(t=0)) .$$ Then the balance
equation in a closed system will be
$$
\exp(\tau) \int_0^{u_0} u^3 \phi(\tau, u) du = 1 ,
$$
which can take place only if
$$
\xi (x) = \exp(x) .
$$
The spectrum is zero for $u>u_0$ and continuously goes to
zero for $u \rightarrow u_0-0$. So, the essential part of
the spectrum will be dissolved at finite times $\tau$ and
$t$.
Here it is necessary to stress that a deviation of $r$ from
zero on some finite interval (which is quite possible) can
lead to an absolutely different form
of the spectrum. So, the form of the spectrum is determined
only with the help of the strong supposition of the LS theory
about the behavior of the critical size, i.e. about the
supersaturation.
\section{Correction for $\gamma$}
The limit value $r\equiv 0 $ is not acceptable
even in the LS theory. The next step leads to a correction of
$r$.
Ordinarily,
the correction is many times greater than the preceding
value of $r$, which allowed Osipov and Kukushkin \cite{kukos}
to speak about the non-uniform character of LS
decompositions.
Within the LS theory it is possible to write a
correction term and to fulfill the next step of the
calculations. Osipov and Kukushkin (OK) \cite{kukos} follow
another way. Certainly, since the result in the first
approximation is known, it is possible to rewrite the formulas
in such a way that $\gamma$ will be zero; then the first
approximation inevitably includes the correction term of the
LS approach. Since the conclusions of the OK theory are
important, it is necessary to consider it.
We omit all details, which can be found\footnote{The
initial total number of embryos is taken from \cite{kukos1} with an error,
because the asymptotic analysis of \cite{kukos1} is not valid; it is necessary to use
\cite{kuni}.} in
\cite{kukos}.
Having introduced
$$\tau = \frac{1}{4} \int_0^t \rho_c^{-2} dt
$$
and redefined $u=\rho/2\rho_c$ one can come to
$$
\frac{du}{d\tau} =
\frac{(u-1)^2}{u} + \gamma u
$$
Now the condition $r=0$ corresponds to $\gamma=0$ and
every correction for $\gamma$ will be giant in comparison with $\gamma_0 =0$.
Then the
asymptotics here has to include the correction term already
in the main order.
The approach with correction gives $$\gamma = 1/(4\tau^2)$$
The substitution
$$ v = (1-u)^{-1} - \ln|u-1|
$$
brings the rate of growth to
$$
\frac{dv}{dt} = 1 + \gamma (1+\psi)^2
$$
where $\psi$ satisfies $$\psi + \ln|\psi| = v$$
and asymptotically
$$
\frac{dv}{dt} = 1 + \gamma v^2
$$
The initial asymptotics in LS theory corresponds to $\gamma =
0$ and then asymptotically
$$
v = \tau+ const
$$
The initial asymptotics in OK theory corresponds to $\gamma = 1/(4
\tau^2)$. Then
asymptotically
$$v \rightarrow 2 \tau
$$
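One can verify directly that $v = 2\tau$ solves $dv/d\tau = 1 + v^2/(4\tau^2)$ exactly, since substitution gives $1 + (2\tau)^2/(4\tau^2) = 2 = d(2\tau)/d\tau$. A short numerical integration, sketched below with an assumed starting point $\tau_0 = 1$, confirms that the trajectory stays on this line:

```python
# RK4 integration of dv/dtau = 1 + v^2/(4 tau^2) (the OK case gamma = 1/(4 tau^2)),
# started on the line v = 2*tau; tau0 = 1 is an assumed starting point.
def rhs(tau, v):
    return 1.0 + v * v / (4.0 * tau * tau)

tau, v, h = 1.0, 2.0, 1e-3
while tau < 50.0:
    k1 = rhs(tau, v)
    k2 = rhs(tau + h / 2, v + h * k1 / 2)
    k3 = rhs(tau + h / 2, v + h * k2 / 2)
    k4 = rhs(tau + h, v + h * k3)
    v += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    tau += h
print(v, 2 * tau)   # v remains equal to 2*tau
```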
\section{Correction for the distributions}
Within the LS and
OK theories the distribution function can be found from
$$
f(u,\tau) = \xi(C(u,\gamma)) \frac{\partial C}{\partial u}
$$
where $C$ is the integral of the law of growth.
Concrete calculations give the following results:
\begin{itemize}
\item
In LS theory
$$v - \tau = const
$$ or
$$
\exp(v-\tau) = const
$$
i.e.
$$
C_{LS} = \frac{1}{1-u} \exp(-\frac{1}{1-u} - \tau)
$$
is the integral of evolution.
Then $\xi$ is
$$
\xi \sim C_{LS}^2 \ \ \text{at} \ \ C_{LS}>0, \ \ \ \ \
\xi = 0 \ \ \text{at} \ \ C_{LS}<0
$$
Then the spectrum is
$$
\phi_{LS} = \frac{2u}{(1-u)^4} \exp(-\frac{2u}{1-u})
$$
\item
In OK theory
$$v - 2 \tau = const
$$ or
$$
\exp(v- 2 \tau) = const
$$
i.e.
$$
C_{OK} = \frac{1}{1-u} \exp(-\frac{1}{1-u} - 2 \tau)
$$
is the integral of evolution.
Then $\xi$ is
$$
\xi \sim C_{OK} \ \ \text{at} \ \ C_{OK}>0, \ \ \ \ \
\xi = 0 \ \ \text{at} \ \ C_{OK}<0
$$
Then the spectrum is
$$
\phi_{OK} = \frac{u}{(1-u)^3} \exp(-\frac{u}{1-u})
$$
and it essentially differs from the LS spectrum.
\end{itemize}
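The difference is easy to exhibit numerically. A short calculation (setting the derivative of $\ln\phi$ to zero) puts the maxima at $u = 1/\sqrt{3}$ for the LS spectrum and $u = 1/\sqrt{2}$ for the OK spectrum; the sketch below confirms this on a grid:

```python
import numpy as np

# The LS and OK spectra on the common support 0 < u < 1.
u = np.linspace(1e-3, 1 - 1e-3, 200001)

phi_ls = 2 * u / (1 - u) ** 4 * np.exp(-2 * u / (1 - u))
phi_ok = u / (1 - u) ** 3 * np.exp(-u / (1 - u))

u_ls = u[np.argmax(phi_ls)]    # maximum of the LS spectrum, at 1/sqrt(3)
u_ok = u[np.argmax(phi_ok)]    # maximum of the OK spectrum, at 1/sqrt(2)
print(u_ls, u_ok)
print(phi_ls[-1], phi_ok[-1])  # both vanish as u -> 1: the exponential wins over the pole
```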
\section{Further corrections}
The essential difference between the spectra is the striking
feature of the comparison of the LS and OK theories. But it is rather
easy to see that such a feature is natural for all
further corrections.
Suppose that the integral of evolution at the previous
step is established. Then we can reformulate the rate of growth as
$$
\frac{du}{dt} = F(u) - \gamma u
$$
where $F$ is a known function.
Here we use for simplicity $t$ instead of $\tau$.
In the zero approximation (LS theory)
$$
F \sim - u^{-1}+1/2
$$
in the first approximation\footnote{With a renormalization} (OK theory)
$$
F=(u-1)^2/u
$$
etc.
Let $w$ be
the solution of the equation with $\gamma=0$, i.e.
$$
\int\frac{du}{F(u)} = t + const
$$
Then at the previous step
$$
\frac{dw}{dt} = 1
$$
At the current step
$$
\frac{dw}{dt} = 1+ \gamma u(w)
$$
So, $w$ is a straight analog of $v$.
There can be two situations:
\begin{enumerate}
\item
Asymptotically $w
\rightarrow t$.
\item
Asymptotically $w
\rightarrow l(t) \neq t$.
\end{enumerate}
In the first situation the spectrum remains absolutely the
same as before: there are absolutely no corrections.
In the second situation the correction is essential. Instead of
$w-t$ as the integral of evolution one has to use
$w-l(t)$. It means that instead of $w$ one has to use
$l^{-1}(w)$, where $l^{-1}$ is the inverse function.
The difference in the spectra is striking.
So, the alternative is to have either no corrections or
striking corrections. This is a consequence of the use of the
Lifshitz-Slyozov variables and a certain disadvantage of the LS and OK approaches.
\section{Application of asymptotics}
The ideology of the LS and OK asymptotic analysis is one and
the same:
\begin{itemize}
\item
A power-law asymptotics of some given function
of the supersaturation is
prescribed.
\item
The balance condition at the asymptotics leads to the
determination of the supersaturation as a function of
time.
\item
The form of the size spectrum is
reconstructed on the basis of the given supersaturation.
\end{itemize}
Meanwhile, under the regular growth of the embryos the
natural sequence of actions is the following:
\begin{itemize}
\item
The size spectrum is given by the initial conditions.
\item
The dissolution of the size spectrum together with the
balance equation determines the behavior of the
supersaturation.
\item
The tail of the spectrum at big sizes determines the
asymptotics of the supersaturation and of the coefficients in
the regular law of the droplets motion.
\end{itemize}
We see that from the last point of view the LS and OK
theories are inconsistent. Then why do they
correspond to the asymptotic behavior found in
experiment?
At first one has to mention that the accuracy of the
experimental measurements is not high: even the
striking difference
between LS and OK is not resolved by experiments.
Hence, we have to conclude that the
experiment gives only the approximate form of the size
spectrum, with a maximum and two different rather short wings.
Now we shall see that the form of the size spectrum
obtained in the LS and OK theories is rather typical, at least
approximately.
The tail of the Gaussian can be well approximated by
$$
\xi_{as} = \exp(- \Lambda \rho)
$$
with some constant $\Lambda$, which we suppose to
be a slow function of $\rho$, so that
$$
\Lambda(\rho+ \Lambda^{-1}) - \Lambda(\rho) \ll
\Lambda(\rho) .
$$
Then the exponential approximation will be valid at least
on the essential part of the size spectrum.
These transformations are absolutely adequate to the
standard approach of the steepest descent method.
The same approximation can be established in the $u$-scale. The
constant can simply be cancelled by renormalization. Then
$$\xi_{as} \sim \exp(-u) .$$
Now we shall approximate $\tau(u)$ for two values $u_1$
and $u_2$ in the essential part of the size spectrum. The
simplest approximation is the following:
$$
\tau(u_1) - \tau(u_2) = \frac{d \tau(u)}{du} (u_1 - u_2) ,
$$
which is a linear connection. Then
the exponential form of the spectrum over $\tau(u)$ is
conserved:
$$
\xi_{as} \sim \exp(-\tau(u)) .
$$
Since $\xi$ has to be a function of $\tau - \tau(u)$, we
come to
$$
\xi = \exp(\tau - \tau(u)) ,
$$
which lies at the basis of the LS and OK reconstruction of the
size spectrum.
To get the spectrum one has to divide $\xi$ by $v_u$. The
last value is determined by the regular law of growth.
Certainly, one cannot
guarantee that the parameter $\gamma$ corresponds to $r=0$.
When it really corresponds to $r=0$, there will be the LS
asymptotics.
When $\gamma$ corresponds to some small $r$ less than zero,
then there will be an extremum associated with the minimum
of $v_u$.
One can see that since $r$ is rather small in absolute
value, there is a maximum of $\phi$ near $u_0$. One has to
note that in experiment there is no evident way to
determine the critical size. Then there is no way to
determine the position of the size spectrum, but only its
form.
Since $\Lambda$ is not a true constant, one can speak about
a quasistationary solution with a smooth variation of the
parameters and of the form of the spectrum.
These considerations evoke the ideas associated with the
square approximation of the rate of growth $du/d\tau$ near
the maximum which were presented in \cite{book1}.
\section{Direct determination of $\gamma$}
Although the precise asymptotics cannot be directly used
to determine the evolution of the system, one can suggest
another approach.
Now we return to the description of the evolution.
The rate of growth can be written as
$$
\frac{du}{d\tau} =
\hat{F}(u, \gamma(\tau))
$$
The integration of this equation gives $\tilde{\tau}(u)$.
Symbolically this can be written as
$$
\tilde{\tau}(u) = \int \frac{du}{v_u}
$$
where
$$
v_u = \frac{du}{d\tau}
$$
The problem of integration of the rate of growth is very
important and analytically difficult, but
here we suppose that it has been solved.
The balance equation has to be written as
$$
G_{total} = const .
$$ It is possible to neglect the surplus substance in the mother
phase. Then the balance equation can be reduced to
$$
\exp(\tau) \int_0^{u_{lim}} u^3 \phi(u,\tau)
du = const
$$
where $\phi$ is the distribution function and $u_{lim}$ is the maximal size.
The regular law of growth leads to
$$
\phi(u, \tau) =
\frac{
\xi(\tau - \tilde{\tau}(u)) }{ -v_u}
$$
Then the balance equation will be
$$
\exp(\tau) \int_0^{u_{lim}} \frac{u^3\xi(\tau - \tilde{\tau}(u)) }{ -v_u
}
du = const
$$
But the last equation is not an equation for the function $\xi$,
as in the LS and OK theories, but an equation for $\tilde{\gamma}$.
The function $\xi$ is known from the initial conditions for
this equation.
This is the crucial difference between our theory and the LS and
OK theories.
The dependence $\tilde{\gamma}(\tau)$ enters $v_u$ and
$\tilde{\tau}(u)$. Moreover, $\tilde{\tau}(u)$ contains at
least the dependence on $\tilde{\gamma}$ at all preceding moments
of time. So, the last equation is very complex, and one has
to suggest some methods of approximate solution.
\section{Approximate solution}
When $\tilde{\gamma}$ is really smaller than the $\tilde{\gamma}_0$ corresponding
to $r=0$, it is possible to see that approximations for
$\tilde{\gamma}$ produce approximations for the evolution.
When the main influence is ensured by the supercritical
tail, the condition
$$
G_{tail} = const
$$
is rather productive.
One can approximately say that in the region $\rho <
\rho_b$ the rate of growth is very small.
Another possibility is to use for $du/d\tau$ an
approximation which can be integrated analytically.
Namely $du/d\tau$ has to be used, because in terms of $u$ the
boundaries of the near-critical region
and the supercritical region are fixed. For
$$
\frac{du}{d\tau} = (1-u^{-1}) - \tilde{\gamma} u
$$
we use the following constructions:
\begin{itemize}
\item
We give definitions
$$
F_1 = (1-u^{-1})
$$
$$
F_2 = r
$$
$$
F_3 = 1 - \tilde{\gamma} u
$$
\item
The first interval is $[u_2, \infty)$. The value of
$u_2$ is the root of the equation
$$
F_2=F_3
$$
On the first interval one can approximately write
$$
\frac{du}{d\tau} = F_3
$$
The last equation can be integrated analytically.
\item
The second interval is
$[u_1,u_2]$,
where
$u_1$ is the root of the equation
$$
F_2=F_1
$$
On the second interval one can approximately write
$$
\frac{du}{d\tau} = F_2
$$
The last equation can be integrated analytically.
\item
The third interval is
$[0,u_1]$.
On the third interval one can approximately write
$$
\frac{du}{d\tau} = F_1
$$
The last equation can be integrated analytically.
\end{itemize}
Summarizing one can approximately write
$$
\frac{du}{d\tau} = \bar{F}(u,\tilde{\gamma})
$$
with an analytical solution.
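A minimal sketch of this construction, with an assumed value $\tilde\gamma = 0.16 < 1/4$ so that a plateau region exists:

```python
import numpy as np

# Piecewise approximation bar{F} of the reduced rate
#   du/dtau = (1 - 1/u) - gamma*u,
# built from F1 = 1 - 1/u, F2 = r (the maximal rate) and F3 = 1 - gamma*u.
# gamma = 0.16 is an assumed illustrative value.
gamma = 0.16
r = 1.0 - 2.0 * np.sqrt(gamma)   # max_u of the exact rate, attained at u = 1/sqrt(gamma)

u1 = 1.0 / (1.0 - r)             # root of F2 = F1:  r = 1 - 1/u
u2 = (1.0 - r) / gamma           # root of F2 = F3:  r = 1 - gamma*u

def F_exact(u):
    return (1.0 - 1.0 / u) - gamma * u

def F_bar(u):
    if u <= u1:
        return 1.0 - 1.0 / u     # third interval [0, u1]: F1, closed-form integral
    if u <= u2:
        return r                 # second interval [u1, u2]: constant rate
    return 1.0 - gamma * u       # first interval [u2, inf): F3, closed-form integral

# bar{F} is continuous at u1 and u2 and tracks the exact rate
for x in (0.5, u1, 2.0, u2, 8.0):
    print(f"u={x:5.2f}  exact={F_exact(x):+.3f}  piecewise={F_bar(x):+.3f}")
```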
Then we know $\bar{\tau}(u)$ and can write a closed
equation for $\tilde{\gamma}$.
This approach will be called the ``regular-regular
model''.
In the general situation allowing LS asymptotics
$$
du/d\tau = F_1 - \tilde{\gamma} u
$$
and the procedure remains the same.
Another possibility is the \underline{quasistationary
approximation}.
Here $\tilde{\gamma}$ is supposed to be a local constant, and the law of
growth is integrated:
$$
\tau(u) = \int\frac{du}{F_1 - \tilde{\gamma} u}
$$
Then we know $\tau(u)$ and $\xi(\tau - \tau(u))$ as functions
of $u$ and $\tau$. Then the balance equation is a closed equation for $\tilde{\gamma}$.
Define as $\tau_{total}$ the time of dissolution from the
size $u_m$, where $du/d\tau$ attains its maximum, down to zero.
It is clear that the necessary condition for the
applicability of the quasistationary approximation is
$$
\frac{|\tau_{total}(\tilde{\gamma}(\tau)) -
\tau_{total}(\tilde{\gamma}(\tau+\tau_{total}(\tilde{\gamma}(\tau))))|
}{\tau_{total}(\tilde{\gamma}(\tau))} \ll 1
$$
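For illustration, $\tau_{total}$ can be evaluated by quadrature of $du$ over the dissolution speed; the value $\tilde\gamma = 0.3 > 1/4$ below is an assumed number chosen so that every size dissolves:

```python
import numpy as np

# Dissolution time tau_total from u_m (where du/dtau is maximal) down to u = 0
# for du/dtau = (1 - 1/u) - gamma*u, with an assumed gamma = 0.3 > 1/4.
gamma = 0.3
u_m = 1.0 / np.sqrt(gamma)              # position of the maximal rate

u = np.linspace(1e-9, u_m, 400001)
speed = gamma * u + 1.0 / u - 1.0       # = -(du/dtau), positive for gamma > 1/4
tau_total = float(np.sum((1.0 / speed[1:] + 1.0 / speed[:-1]) * np.diff(u) / 2))
print(tau_total)                        # finite: the whole nearcritical part dissolves
```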
The difference between the LS and OK theories indicates that
at the asymptotics $r=0$ the quasistationary approximation
can hardly be applied at the very end of the process. But
it can be fruitful at earlier periods.
\section{Combination of approaches}
Here the regular-regular model and the diffusion-regular
model of solution were presented. Now it is necessary to
decide which model has to be used.
The main object will be the support of the spectrum. We shall
define the essential support of the spectrum in the
following way:
\begin{itemize}
\item
For $G_{total}$ we define the subintegral function
$g$ by
$$
G_{total} = \int d \rho g_{\rho}
$$
or
$$
G_{total} = \int d u g_{u}
$$
\item
We define the maximum of $g$ and
$$\rho_{max} =
\arg\max_{\rho}\, g_{\rho}$$
$$u_{max} = \arg\max_{u}\, g_{u}$$
\item
Then we determine the region of support
by
$$
\int_0^{u_l} du g_u = \delta/2
$$
$$
\int_{u_r}^\infty du g_u = \delta/2
$$
or by
$$
\int_0^{\rho_l} d\rho g_\rho = \delta/2
$$
$$
\int_{\rho_r}^\infty d\rho g_\rho = \delta/2
$$
with some small parameter $\delta$.
This region can also be determined by the conditions
$$
g(u_l) = g(u_r) = \delta g(u_{max})
$$
\end{itemize}
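A sketch of this definition, taking purely for illustration the LS spectrum for $\phi$ and an assumed small parameter $\delta = 0.05$:

```python
import numpy as np

# Essential support [u_l, u_r] from the subintegral function g_u = u^3 * phi(u).
# phi is taken as the LS spectrum only as an example; delta = 0.05 is assumed.
u = np.linspace(1e-4, 1 - 1e-4, 200001)
phi = 2 * u / (1 - u) ** 4 * np.exp(-2 * u / (1 - u))
g = u ** 3 * phi

# trapezoidal cumulative integral, normalized so that G_total = 1
cum = np.cumsum((g[1:] + g[:-1]) * np.diff(u) / 2)
cum /= cum[-1]

delta = 0.05
u_l = u[1:][np.searchsorted(cum, delta / 2)]       # left delta/2 of the substance
u_r = u[1:][np.searchsorted(cum, 1 - delta / 2)]   # right delta/2 of the substance
print(u_l, u_r)   # the interval carrying 95% of the substance
```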
If the essential support belongs to the near-critical
region, then we have to use the diffusion-regular model.
In the opposite situation we have to use the regular-regular
model.
The impulse regime contains the sequential application of
the diffusion-regular model and the regular-regular
model. Here the evolution during the period described by
the regular-regular model occurs in a very simple manner:
the essentially supercritical droplets determine the
behavior of the supersaturation. So, there is no need to
use the complete formalism of the regular-regular model.
In the regime of the adjusted tail one has to use the
regular-regular model, which cannot be simplified.
To see the times when the regular-regular model is no
longer valid, it is possible to apply the diffusion operator
to the solution obtained in the regular-regular model. When
the difference between this solution and the result of the
application of the diffusion operator becomes essential, it means
that it is necessary to introduce the diffusion
corrections.
The diffusion corrections will be very
essential at the tail of the size spectrum, and namely this
tail will govern the evolution later. This tail can
hardly be described by the diffusion equation because there is
no statistics: only a few droplets of big sizes determine
the evolution.
Ordinarily, the process of nucleation results in the existence of
several big droplets with stochastic sizes, and namely their
competition in growth determines the latest stage of the
process. The sizes of these droplets are rather arbitrary,
so the kinetics of their interaction will also be unstable and
arbitrary.
This effect forms the matter of a special
publication.
Here it is necessary only to add that the
number of droplets strongly diminishes in time, and to see
the impulse regime and later the regime of the adjusted tail it
is necessary to have a nucleating system of huge size.
\section{Introduction}
Deep Neural Networks (DNNs) are powerful and have a wide range of applications in several fields such as image recognition~\cite{1-he2016deep}, object detection~\cite{2-ren2015faster}, visual segmentation~\cite{3-he2017mask}, text classification~\cite{LiuLCT2022}, etc.
However, DNNs generally require a lot of computational resources.
For example, the size of the well-known VGG-16 model built by Caffe is over 500MB and it consumes 16 GFLOPs, which makes it impractical to be deployed on low-end devices.
Hence, over the past few years, many methods have been proposed to reduce the computational complexity of DNNs, such as pruning~\cite{5-han2015deep}, low-rank decomposition~\cite{6-yu2017compressing}, knowledge distillation~\cite{7-xu2019positive}, and quantization~\cite{8-courbariaux2015binaryconnect,5-han2015deep,14-hubara2016binarized,15-rastegari2016xnor,9-wu2018training,19-DBLP:conf/cvpr/WuWGLCLHLY20,17-DBLP:conf/icml/ZhaoHDSZ19,12-zhou2016dorefa}.
Specifically, DNN quantization maps the network weights from high bits to low bits, significantly reducing memory usage, speeding up inference, and enabling the deployment of networks on mobile devices with dedicated chips~\cite{google-whitepaper}.
Although existing quantization approaches have achieved notable success, most of them rely on full-precision gradients to update the network weights~\cite{8-courbariaux2015binaryconnect,14-hubara2016binarized,15-rastegari2016xnor,12-zhou2016dorefa}, hindering their practical usage. In real-world applications, one may need to quantize a pre-trained full-precision DNN on different low-end devices for better adaptability, and the quantization procedure that is conducted on the device cannot involve any full-precision operations~\cite{9-wu2018training}.
On the other hand, as a powerful search framework, EAs do not use any gradient information \cite{LiuWTQY15,LIUTY2021}, which is naturally suitable for this scenario.
Therefore, in this work we advocate using EAs to search for the low-bits weights of quantized DNNs.
Specifically, we first formulate DNN quantization as a large-scale discrete optimization problem.
Since this problem involves a huge number of variables (network weights), e.g., ResNet-20 has 269722 parameters, we propose a novel EA based on cooperative coevolution to solve it.
Given a pre-trained full-precision DNN, our algorithm first quantizes it to obtain an initial solution and then leverages estimation of distribution algorithm (EDA) to optimize the low-bits weights.
To improve search efficiency, the algorithm repeatedly groups the network weights according to the confidence in their values and focuses on optimizing the ones with the least confidence.
Finally, we compare our algorithm with existing quantization approaches by applying them to train a 4-bit ResNet-20 on the Cifar-10 dataset, without involving any full-precision operations.
The results show that our algorithm performs better and the quantized DNN obtains the same test accuracy as its full-precision counterpart, i.e., quantization without loss of accuracy.
In summary, we make the following contributions in this paper:
\begin{enumerate}
\item We propose a novel EA based on cooperative coevolution to train quantized DNNs. To the best of our knowledge, this is the first work that applies EAs to search for the optimal low-bits weights of DNNs.
\item We conduct experiments to verify the effectiveness of the proposed algorithm. Notably, it can train a 4-bit ResNet-20 without accuracy degradation compared to the full-precision DNN, which indicates the great potential of EAs in DNN quantization.
\end{enumerate}
\section{Related Work}
This section presents a brief literature review on the field of DNN quantization and cooperative coevolution.
\subsection{DNN Quantization}
DNN quantization is a popular research area, and researchers have proposed many quantization approaches~\cite{google-whitepaper}, which can be classified into two categories: quantization-aware training (QAT) and post-training quantization (PTQ).
PTQ directly quantizes well-trained full-precision networks without re-training~\cite{19-DBLP:conf/cvpr/WuWGLCLHLY20}.
Two representative PTQ approaches are Outlier Channel Splitting (OCS)~\cite{17-DBLP:conf/icml/ZhaoHDSZ19} and Deep Compression~\cite{5-han2015deep}.
The former deals with outliers during quantization by duplicating channels containing outliers and halving the channel values. The latter introduces a three-stage pipeline: pruning, trained quantization, and Huffman coding, which work together to reduce the memory storage for DNNs.
Unlike PTQ, QAT quantizes and finetunes network parameters in the training process~\cite{242-Loss-Aware}, which usually obtains better performance, thus attracting much more research interest.
BinaryConnect~\cite{8-courbariaux2015binaryconnect} restricts the weights to two possible values, i.e., -1 or 1, but the activations are still full-precision.
BNN~\cite{14-hubara2016binarized} quantizes both weights and activations to -1 or 1.
XNOR-Net~\cite{15-rastegari2016xnor} proposes filter-wise scaling factors for weights and activations to minimize the quantization error. To further accelerate the training of DNNs, some work also attempts to quantize gradients. DoReFa-Net~\cite{12-zhou2016dorefa} uses quantized gradients in the backward propagation, but the weights and gradients are stored with full precision when updating the weights as the same as previous works.
To the best of our knowledge, WAGE~\cite{9-wu2018training} is currently the only work that updates the quantized weights with discrete gradients.
\subsection{Cooperative Coevolution}
Cooperative coevolution is a powerful framework that leverages the “divide-and-conquer” idea to solve large-scale optimization problems.
As first shown by Yang and Tang~\cite{28-DBLP:conf/ppsn/ChenWYT10,Tang2-DBLP:conf/ideal/LiuT13,13-DBLP:journals/tec/MaLZTLXZ19,13-yang2008large,Tang3-DBLP:conf/cec/YangTY08a}, the framework of cooperative coevolution consists of three parts: problem decomposition, subcomponent optimization, and subcomponents coadaptation.
Among them, problem decomposition is the key step~\cite{18-potter2000cooperative}. An effective decomposition can ease the optimization difficulty of a large-scale problem~\cite{20-DBLP:journals/tec/SunKH18}. In contrast, an improper decomposition may lead the algorithms to local optima~\cite{ma2022merged,13-DBLP:journals/tec/MaLZTLXZ19,21-DBLP:journals/tec/SonB04}.
There are three categories of problem decomposition approaches: static decomposition, random decomposition, and learning-based decomposition~\cite{ma2022merged}. Static decomposition approaches do not take the interactions between subcomponents into account and fixedly decompose the decision variables into subcomponents~\cite{22-DBLP:journals/tec/vandenBerghE04,23-DBLP:conf/cec/CaoWS0RJL15,12-DBLP:conf/ppsn/PotterJ94}. Conversely, selecting decision variables randomly for each subcomponent is the main idea of random decomposition approaches~\cite{26-DBLP:journals/tec/LiY12,25-DBLP:conf/cec/OmidvarLYY10,27-DBLP:journals/isci/TrunfioTW16,13-yang2008large}. One of the most famous random decomposition methods is EACC-G, proposed by Yang and Tang~\cite{13-yang2008large}. The main idea of this method is to put interdependent variables into the same subcomponent while keeping the dependencies between subcomponents as weak as possible. The learning-based approaches try to discover the interactions between variables~\cite{28-DBLP:conf/ppsn/ChenWYT10,8-DBLP:journals/toms/MeiOLY16,14-DBLP:journals/tec/OmidvarLMY14}.
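As a toy illustration of the random decomposition idea (the variable count and group count below are arbitrary, and this is only the grouping step, not the full EACC-G algorithm):

```python
import random

# Random grouping: n decision variables are re-partitioned into m groups at
# every cycle, so that interacting variables have a chance to land in one group.
def random_grouping(n, m, rng):
    idx = list(range(n))
    rng.shuffle(idx)
    size = n // m
    groups = [idx[i * size:(i + 1) * size] for i in range(m - 1)]
    groups.append(idx[(m - 1) * size:])   # last group takes the remainder
    return groups

rng = random.Random(0)
groups = random_grouping(1000, 10, rng)
print([len(g) for g in groups])   # 10 groups covering all 1000 variables
```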
\section{Method}
In this section, we first formulate DNN quantization as a large-scale discrete optimization problem and introduce our quantization functions. Then we detail the EDA applied to this problem. Finally, to further improve the algorithm performance, the cooperative coevolution framework is proposed.
\subsection{Problem Formulation}
In a DNN with $L$ layers, let $\boldsymbol{w_l}$ represent the full-precision weights and $\boldsymbol{\hat{w_l}}$ represent the $k$-bit quantized weights at layer $l$, both of which are $n_l$-dimensional vectors, i.e., there are $n_l$ parameters at layer $l$. We combine the quantized weights from all layers into $\boldsymbol{\hat{w}}=[\boldsymbol{\hat{w}_1},\boldsymbol{\hat{w}_2},\ldots,\boldsymbol{\hat{w}_L}]$. The parameters in $\boldsymbol{\hat{w}}_l$ can only take one of $2^k$ possible discrete values, i.e., $\boldsymbol{\hat{w}_l} \in \{t_{l1}, t_{l2}, \ldots, t_{l2^{k}}\}^{n_l}$. We formulate the DNN low-bit quantization problem as the following large-scale discrete optimization problem:
\begin{equation}
\max\limits_{\boldsymbol{\hat{w}}} f(\boldsymbol{\hat{w}})\quad s.t.\quad \boldsymbol{\hat{w}}_l\in \{t_{l1}, t_{l2}, \ldots, t_{l2^{k}}\}^{n_l}, \quad l = 1,2,\ldots,L,
\label{equ:optimization_problem}
\end{equation}
where $f(\boldsymbol{\hat{w}})$ represents the accuracy of the quantized DNN. Since a DNN usually has a huge number of parameters, e.g., ResNet-18 has around 11 million parameters, this is a large-scale discrete optimization problem.
\subsection{Quantization Functions}
To obtain a quantized DNN and construct the search space of our algorithm, we need to identify all the possible discrete values for each weight and activation. Moreover, the initial solution of our algorithm is obtained from a full-precision DNN. Based on the above two considerations, we design two linear quantization functions to map full-precision weights and activations to discrete ones separately.
For the weights, let the minimum and maximum values of the weights $\boldsymbol{w_l}$ at layer $l$ be $w_l^{min}$ and $w_l^{max}$. Then the full-precision weights $\boldsymbol{w_l}$ are discretized with a uniform distance $\delta_l$:
\begin{equation}
\delta_l(k)=\frac{w_l^{max}-w_l^{min}}{2^k-1},
\end{equation}
where $k$ is the number of bits. The quantization function for weights can be represented as:
\begin{equation}
Q(\boldsymbol{w_l}) = Clip\{round(\frac{\boldsymbol{w_l}}{\delta_l(k)}) \cdot \delta_l(k), w_l^{min}, w_l^{max}\},
\label{eq:weight_quantization}
\end{equation}
where the $Clip$ function is the saturation function that restricts values to $[w_l^{min}, w_l^{max}]$, and the $round$ function maps continuous values to their nearest integers.
For the remaining parameters, including activations and the parameters in batch normalization layers, we assume that the range of parameters is $[-1,1]$, as in WAGE~\cite{9-wu2018training}. The quantization function for activations $\boldsymbol{a_l}$ at layer $l$ can be represented as:
\begin{equation}
Q(\boldsymbol{a_l}) = round(\frac{\boldsymbol{a_l}}{\delta_l(k)}) \cdot \delta_l(k).
\label{eq:activation_quantization}
\end{equation}
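A minimal NumPy sketch of the weight quantization of Eq. (\ref{eq:weight_quantization}); the sample weights below are arbitrary illustrative values:

```python
import numpy as np

# Weight quantization of Eq. (3): uniform step delta_l over [w_min, w_max],
# round to the nearest grid point, then clip back into the range.
def quantize_weights(w, k):
    w_min, w_max = w.min(), w.max()
    delta = (w_max - w_min) / (2 ** k - 1)
    return np.clip(np.round(w / delta) * delta, w_min, w_max)

w = np.array([-0.31, -0.02, 0.17, 0.45])   # illustrative full-precision weights
w_hat = quantize_weights(w, k=4)
print(w_hat)   # every entry lies on the delta grid inside [w_min, w_max]
```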
\subsection{Estimation of Distribution Algorithm}
We propose to use the Estimation of Distribution Algorithm (EDA) to search for discrete weights.
The overall framework of EDA for training quantized DNNs is summarized in Algorithm \ref{alg::EDA}.
We encode the quantized DNN weights into a fixed-length 1-dimensional array as the representation of our solution, i.e., $\boldsymbol{\hat{w}}=[\hat{w}_1,\hat{w}_2, \ldots, \hat{w}_n]$, where $n$ represents the total number of parameters in the DNN. Then we construct a probabilistic model over it. For simplicity, we assume that the weights of the neural network are all independent of each other, as in PBIL~\cite{22-baluja1994population}. Specifically, each weight $\hat{w}_i$ has $2^k$ possible values. Each possible value corresponds to a probability $p_j$, where $j =1, 2, \ldots, 2^k$, $k$ is the bit length of the weights, and $\sum_{j=1}^{2^k}{p_j}=1$. After the initial quantized network is obtained (Line 1), i.e., $\boldsymbol{\hat{w}}=[\hat{w}_1=a_1,\hat{w}_2=a_2, \ldots, \hat{w}_n=a_n]$, we initialize the probabilistic model $P$ of the weights using the $\sigma$-greedy strategy (Line 2), which is shown in Eq. (\ref{eq:sigma-greedy}):
\begin{equation}
\left\{
\begin{array}{lc}
P(\hat{w}_i=a_i) = \sigma, & \\
P(\hat{w}_i= \text{one of the other possible values}) = \displaystyle{\frac{1-\sigma}{2^k-1}}.
\end{array}
\right.
\label{eq:sigma-greedy}
\end{equation}
That is, if $\hat{w}_i$ takes the value $a_i$, then $P(\hat{w}_i=a_i) = \sigma$. The probability of each other possible value is $(1-\sigma)/(2^k-1)$, where $0<\sigma<1$. For each generation, we sample weights from the probabilistic model to generate new individuals (Line 4), evaluate their fitness (Line 5), and rank them by fitness value in descending order (Line 6). To update the probabilistic model $P$, we calculate the probability of each possible value for $\hat{w}_i$ over the first $N_{best}$ new individuals and construct their probabilistic model $P_{best}$ (Line 9). Finally, we update $P$ as $(1-\alpha)P+\alpha P_{best}$ (Line 10), where $\alpha$ is the updating step.
\begin{algorithm}[t]
\caption{Estimation of Distribution Algorithm}
\label{alg::EDA}
\begin{algorithmic}[1]
\Require
the number of best individuals $N_{best}$ to update the probabilistic model;
updating step $\alpha$ ;
generation number $G$;
the size of population $S$
\Ensure
best individual $I_{best}$
\State Initialize best individual $I_{best}=[\hat{w}_1=a_1,\hat{w}_2=a_2, \ldots, \hat{w}_n=a_n]$
\State Initialize probabilistic model $P$ using $\sigma$-greedy strategy
\For{generation $i$ from 0 to $G$}
\State Generate $S$ new individuals according to $P$
\State Get the fitness values of the new individuals
\State Rank the new individuals by fitness values in descending order
\State Update the best individual $I_{best}$
\State Select the first $N_{best}$ best individuals
\State Construct the probabilistic model $P_{best}$ of the $N_{best}$ best individuals
\State Update the probabilistic model $P$: $P = (1-\alpha)P+\alpha P_{best}$
\EndFor
\end{algorithmic}
\end{algorithm}
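For concreteness, Algorithm \ref{alg::EDA} can be sketched for the binary case ($2^k = 2$, so each marginal reduces to a single probability $P(\hat{w}_i = 1)$). The OneMax fitness and the hyperparameter values below are illustrative only, not the paper's experimental setting:

```python
import numpy as np

rng = np.random.default_rng(0)

def eda(fitness, n, G=50, S=20, n_best=5, alpha=0.1, sigma=0.95, init=None):
    """PBIL-style EDA over binary strings (the 2-value case of Algorithm 1)."""
    # sigma-greedy initialization around an initial solution, if one is given
    if init is None:
        p = np.full(n, 0.5)                          # uninformative start
    else:
        p = np.where(init == 1, sigma, 1.0 - sigma)  # sigma-greedy (Line 2)
    best, best_fit = None, -np.inf
    for _ in range(G):
        pop = (rng.random((S, n)) < p).astype(int)   # sample S individuals (Line 4)
        fits = np.array([fitness(ind) for ind in pop])
        order = np.argsort(-fits)                    # rank descending (Line 6)
        if fits[order[0]] > best_fit:                # track the best (Line 7)
            best, best_fit = pop[order[0]].copy(), fits[order[0]]
        p_best = pop[order[:n_best]].mean(axis=0)    # elite model (Line 9)
        p = (1 - alpha) * p + alpha * p_best         # model update (Line 10)
    return best, best_fit

# toy problem: maximize the number of ones (OneMax)
best, fit = eda(lambda w: w.sum(), n=30)
```

On this toy problem the marginals drift toward 1, so the best individual approaches the all-ones string within a few dozen generations.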
\subsection{Cooperative Coevolution}
Since our optimization problem has a huge number of decision variables, to further improve the search efficiency, we propose a novel cooperative coevolution algorithm based on EDA inspired by Yang and Tang~\cite{13-yang2008large}, namely EDA+CC.
The most important part of the cooperative coevolution algorithm lies in the efficient grouping of variables. As the EDA searches, the probabilistic model $P$ gradually converges. However, different decision variables have different convergence rates. Fig.~\ref{fig::EDA} gives a simple example of the convergence of the probabilistic model when applying EDA to a $0/1$ optimization problem. Suppose the decision variables are encoded as $\boldsymbol{w}=[w_1,w_2,\ldots,w_n]$ and variables are independent of each other. Initially, $P(w_i=1) = 0.5$. As the evolution proceeds, $P(w_i=1)$ gradually converges to $1$. For $w_i$, if $P(w_i)$ converges quickly, it intuitively shows that EDA is confident about the value of $w_i$, which means $w_i$ should not be changed in the subsequent searching process; conversely, if $P(w_i)$ converges slowly, then $w_i$ should be further optimized.
\begin{figure}[t]
\centering
\includegraphics[scale=0.3]{EDA.pdf}
\caption{$P(w_i)$ converges gradually as EDA evolves}
\label{fig::EDA}
\end{figure}
\begin{algorithm}[t]
\caption{EDA+CC}
\label{alg::EDA+CC}
\begin{algorithmic}[1]
\Require
the number of best individuals $N_{best}$ to update the probabilistic model;
updating step $\alpha$ ;
generation number $G$;
the size of population $S$;
accuracy threshold $T$;
flag $F$;
total number of weights $n$;
random number $\beta$
\Ensure
best individual $I_{best}$
\State Initialize best individual $I_{best}=[\hat{w}_1=a_1,\hat{w}_2=a_2, \ldots, \hat{w}_n=a_n]$
\State Get the fitness values of $I_{best}$
\State Initialize probabilistic model $P$ using $\sigma$-greedy strategy
\State $F \leftarrow 0$
\While{fitness value of $I_{best}$ $\leq$ $T$ }
\If{F=1}
\State Sort the weights by the convergence speed of $P$ in descending order
\State Divide the first $\beta \cdot n$ weights into group A
\State Divide the remaining weights into group B
\State Reinitialize the probabilistic model over group B using $\sigma$-greedy strategy
\EndIf
\For{generation $i$ from 0 to $G$}
\State Generate $S$ new individuals according to $P$
\State Get the fitness values of the new individuals
\State Rank the new individuals by fitness values in descending order
\State Update the best individual $I_{best}$
\State Select the first $N_{best}$ best individuals
\State Construct the probabilistic model $P_{best}$ of the $N_{best}$ best individuals
\State Update the probabilistic model $P$: $P = (1-\alpha)P+\alpha P_{best}$
\State $F \leftarrow 1$
\EndFor
\EndWhile
\end{algorithmic}
\end{algorithm}
Based on this intuition, we group the decision variables according to the confidence in their values, i.e., the speed of convergence. Specifically, we rank the decision variables according to the convergence speed of the probabilistic model in descending order during the EDA run. We divide the first $\beta \cdot n$ variables (which converge fast) into one group and the remaining variables (which converge slowly) into another group, where $\beta \in [0,1]$ is a random number and $n$ is the total number of weights in the network. We fix the former; for the latter, we first perturb their probabilistic model with the $\sigma$-greedy strategy and then use EDA to optimize them. Fig.~\ref{fig::EDA+CC-framwork} shows the framework of EDA+CC. Details of EDA+CC are shown in Algorithm \ref{alg::EDA+CC}. Every $G$ generations we regroup the variables, perturb the probabilistic model, and run EDA again until the network accuracy reaches the threshold.
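The grouping step can be sketched as follows for the binary case. Since the text does not pin down how convergence speed is measured, the distance of each marginal $P(\hat{w}_i=1)$ from $0.5$ is used here as a plausible proxy (marginals near $0$ or $1$ have converged):

```python
import numpy as np

def group_by_convergence(p, beta):
    """Split variable indices into a fast-converging group A (frozen) and a
    slow-converging group B (re-optimized), as in the EDA+CC grouping step.

    `p` holds the marginals P(w_i = 1); |p - 0.5| serves as an assumed
    convergence score: the closer a marginal is to 0 or 1, the more
    confident EDA is about that variable.
    """
    conv = np.abs(p - 0.5)                 # assumed convergence score
    order = np.argsort(-conv)              # descending convergence speed
    n_fast = int(beta * len(p))
    group_a = order[:n_fast]               # fix these weights
    group_b = order[n_fast:]               # perturb and keep optimizing
    return group_a, group_b

def reinit_sigma_greedy(p, group_b, sigma=0.95):
    """Re-initialize the marginals of group B around their current best values."""
    p = p.copy()
    p[group_b] = np.where(p[group_b] >= 0.5, sigma, 1.0 - sigma)
    return p
```

Group A is left untouched, while group B is pulled back toward an exploratory distribution before EDA restarts.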
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{framework.pdf}
\caption{EDA+CC framework}
\label{fig::EDA+CC-framwork}
\end{figure}
\section{Experiments}
We use EDA+CC to train a 4-bit quantized ResNet-20 on the CIFAR-10 training set and test its performance on the test set.
Firstly, we compare our EDA+CC algorithm with WAGE~\cite{9-wu2018training} (See Section~\ref{section: 4.2}). WAGE is the only work that can update discrete weights with quantized gradients.
Secondly, we investigate the influence of different initial quantized DNNs by ablation study (See Section~\ref{section 4.3}).
Finally, besides EDA we also test the performance of Genetic Algorithm (GA)~\cite{20-mitchell1991royal} and Local Search Algorithm (LS)~\cite{21-hoos2004stochastic} (See Section~\ref{section 4.4}).
\subsection{Experiment Settings} \label{section: 4.1}
We implement EDA+CC based on TensorFlow 2.1.0 with Python 3.6.9 and run the experiments on an Nvidia RTX 2080ti. The settings of the algorithms are as follows. The number of generations $G$ is 500, the size of the population $S$ is 20, the number of best individuals $N_{best}$ for each generation is 20, the updating step $\alpha$ is 0.1, and the parameter $\sigma$ in the $\sigma$-greedy strategy is 0.95. To increase the randomness of the algorithm, we set $\beta$ as a random variable that follows the uniform distribution $U(0.4, 0.6)$.
In the following, we show how to construct an initial quantized network, in which the range of weights of each layer is known. First, we use Eqs. (\ref{eq:weight_quantization}) and (\ref{eq:activation_quantization}) to quantize the pre-trained full-precision ResNet-20. To increase the randomness of the initial quantized network we randomly select s\% of all its parameters and perturb them to adjacent values ($s=20,30,40$ separately). We denote the quantized network obtained by the above process as ResNet20Q-Switch-s\% and the pre-trained full-precision network as ResNet20-Float. Table \ref{table::accuracy-of-diff-inital-network} summarizes the accuracies of the different initial quantized DNNs.
Since ResNet-20 has 269722 parameters, to reduce the search space we restrict each weight to two possible values, i.e., the value before perturbation and the value next to it. Thus, the problem becomes a binary optimization problem and the search space size is $2^{269722}$. Note that this is still a huge search space: it is $10^{81022}$ times larger than the Go search space.
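This ratio can be sanity-checked with a one-line computation; the commonly cited estimate of roughly $10^{170}$ legal Go positions is an assumption here:

```python
import math

n_params = 269722                          # binary decision variables
log10_space = n_params * math.log10(2)     # log10 of the 2**269722 search space
log10_go = 170                             # ~10**170 legal Go positions (assumed figure)
print(round(log10_space))                  # ~81194
print(round(log10_space) - log10_go)       # ~81024, the same order as 10**81022
```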
\begin{table}[t]
\centering
\caption{Accuracies of different initial quantized networks obtained by perturbing different proportional parameters. We also list the accuracy of the pre-trained full-precision network, namely ResNet20-Float.}
\setlength{\tabcolsep}{5mm}
\begin{tabular}{ccc}
\toprule
Network
&\begin{tabular}[c]{@{}c@{}} Training Set\end{tabular}
&\begin{tabular}[c]{@{}c@{}} Test Set\end{tabular} \\
\midrule
ResNet20-Float& 98.65\%&91.00\%\\
ResNet20Q-Switch-20\%& 88.20\%&85.38\%\\
ResNet20Q-Switch-30\%&46.65\%&46.79\%\\
ResNet20Q-Switch-40\%&20.32\%&19.60\%\\
\bottomrule
\end{tabular}
\label{table::accuracy-of-diff-inital-network}
\end{table}
\subsection{Verifying the Effectiveness of EDA+CC}\label{section: 4.2}
To verify the effectiveness of EDA+CC, we first compare EDA+CC with WAGE~\cite{9-wu2018training}, a representative quantization method that quantizes gradients.
The code of WAGE is available at~\cite{wage-github}.
To further examine the performance of the cooperative coevolution algorithm, we also compare EDA+CC with EDA w/o CC. EDA w/o CC re-initializes the probabilistic model using the $\sigma$-greedy strategy without grouping the decision variables when EDA restarts. Both EDA+CC and EDA w/o CC use 150K fitness evaluations and each takes about 23.3 hours, which is an acceptable time cost.
Table \ref{table::Compare_EDA+CC} shows the accuracies of the quantized DNNs obtained by different approaches. We use ResNet20Q-Switch-30\% as our initial solution. The initial accuracy of ResNet20Q-Switch-30\% is 46.50\%. For EDA+CC, the training set accuracy only decreases by 0.15\% and the test set accuracy increases by 0.4\% compared to the full-precision network. In comparison, the accuracy of the network obtained by WAGE training is only about 43\%, which is much worse than EDA+CC. We speculate the reason for the poor performance of WAGE might be that WAGE is designed for quantized DNNs with 2-bit weights and 8-bit activations, while
our paper uses a more rigorous and hardware-friendly setting: 4-bit weights and 4-bit activations quantized DNNs. Comparing EDA+CC with EDA w/o CC, we can see the positive effect of cooperative coevolution. Applying cooperative coevolution increases the training accuracy from 98.05\% to 98.50\% and the testing accuracy from 89.40\% to 91.40\%. The effectiveness of cooperative coevolution is mainly shown in two aspects: improving the quality of the solution and accelerating the convergence. Fig.~\ref{fig::training_curve_for_EDA+CC} shows the training curves of EDA+CC and EDA w/o CC, i.e., the accuracy of the best individual in each generation. As Fig.~\ref{fig::training_curve_for_EDA+CC} shows, after using the $\sigma$-greedy strategy to re-initialize the probabilistic model $P$ and restarting EDA, EDA+CC can accelerate the convergence and help EDA find a better solution.
\begin{table}[t]
\caption{Comparison of EDA+CC with WAGE and EDA w/o CC.}
\setlength{\tabcolsep}{5mm}
\centering
\begin{tabular}{ccc}
\toprule
Algorithm& Training Set & Test Set \\ \midrule
EDA+CC& \textbf{98.50\%} & \textbf{91.40\%} \\
EDA w/o CC&98.05\%&89.40\% \\
WAGE&43.44\%&41.35\% \\ \bottomrule
\end{tabular}
\label{table::Compare_EDA+CC}
\end{table}
\begin{table}[t]
\caption{Results of different initial quantized networks.}
\setlength{\tabcolsep}{5mm}
\centering
\begin{tabular}{ccccc}
\toprule
Initial Quantized DNN& Training Set & Test Set &No. of FEs \\ \midrule
ResNet20Q-Switch-20 & 99.25\% & 91.50\%& 50K\\
ResNet20Q-Switch-30 & 98.50\% & 91.40\%& 150K\\
ResNet20Q-Switch-40& 90.09\% & 82.75\%& 150K \\ \bottomrule
\end{tabular}
\label{table::result-diff-initial-network}
\end{table}
\subsection{Ablation Study}\label{section 4.3}
We conduct more detailed studies on different initial quantized networks for EDA+CC. Table~\ref{table::result-diff-initial-network} shows the accuracies of the quantized networks obtained by EDA+CC with different initial networks. It can be seen that EDA+CC reaches 90.09\% accuracy after 150K fitness evaluations for ResNet20Q-Switch-40. We estimate that it would take about 500K fitness evaluations (FEs) for EDA+CC to reach around 98\% accuracy, because each restart of EDA with the $\sigma$-greedy strategy improves the accuracy by about 0.9\%. In summary, Table~\ref{table::result-diff-initial-network} illustrates that as the initial accuracy decreases, EDA+CC requires more FEs to train a quantized DNN without accuracy decay compared to the full-precision network.
\begin{figure}[t]
\centering
\includegraphics[scale=0.2]{CC-without-CC.pdf}
\caption{Training curves of EDA+CC and EDA w/o CC.}
\label{fig::training_curve_for_EDA+CC}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[scale=0.2]{CC-GA-LS.pdf}
\caption{Training curves of EDA, GA and LS.}
\label{fig::training_curve_for_3_algo}
\end{figure}
\subsection{Comparison of EDA, GA and LS} \label{section 4.4}
We compare three search algorithms, GA, LS, and EDA. We use ResNet20Q-Switch-50\% as the initial quantized network. Each algorithm uses 100K fitness evaluations. Fig.~\ref{fig::training_curve_for_3_algo} shows the training curves of the three algorithms. It can be seen that EDA performs significantly better than LS and GA, which indicates that the distribution estimation mechanism is more suitable than the crossover and mutation mechanisms for the problem considered in this study. The crossover and mutation mechanisms might break some good patterns in the individuals imperceptibly, while the distribution estimation mechanism optimizes the individuals in a global way. It is worth noting that, theoretically, in the binary space, ResNet20Q-Switch-50\% corresponds to random initialization, because half of the parameters are randomly perturbed. All three algorithms can obtain better accuracy than ResNet20Q-Switch-50\%, which illustrates the potential of search-based algorithms in training quantized DNNs.
\section{Conclusion and Future Work}
In this paper, we investigate search-based training approaches for quantized DNNs, focusing on exploring the application of cooperative coevolution to this problem. Unlike existing works, EDA+CC does not need gradient information. Considering the search space of this problem is extremely large (e.g., in our experiments it is $10^{81022}$ times larger than the Go search space), we propose to use cooperative coevolution to help solve this problem.
The results show that our method can obtain quantized networks without accuracy decay compared to floating-point networks in our experiment setting.
Overall, this work is a proof of concept that EAs can be applied to train quantized DNNs. There are many subsequent lines of research to pursue, e.g., the effects of other variable grouping mechanisms.
Moreover, the method of determining the ranges of discrete values should also be studied.
Finally, based on the cooperative coevolution framework, it is interesting to investigate on solving different sub-problems by different algorithms \cite{LiuT019,LiuTL020,LiuTY22,TangLYY21}, hopefully leading to better optimization performance.
\bibliographystyle{splncs04}
\section{Problem formulation}
\label{secform}
In a typical supervised learning problem, we wish to fit a function $f_{\vct{\theta}}$, parameterized by $\vct{\theta}\in\mathbb{R}^p$ to a training data set of $n$ input-output pairs $\{(\vct{x}_i,y_i)\}_{i=1}^n$ drawn i.i.d.~from some common law $\mathcal{P}$. The fitting problem often consists of finding a parameter $\widehat{\vct{\theta}}$ that minimizes the empirical risk
\begin{align}
\label{empobj}
\widehat{\vct{\theta}} \in \arg\min_{{\boldsymbol{\theta}} \in \mathbb{R}^p}\quad\frac{1}{n}\sum_{i=1}^n \ell\left(\vct{x}_i,y_i;\vct{\theta}\right):=\frac{1}{n}\sum_{i=1}^n \widetilde{\ell}\left(f_{\vct{\theta}}(\vct{x}_i),y_i\right),
\end{align}
over the space of all parameters $\vct{\theta}$. The loss $\widetilde{\ell}(f_{\vct{\theta}}(\vct{x}),y)$ measures discrepancy between the output (or label) $y$ and the prediction $f_{\vct{\theta}}(\vct{x})$. The goal is of course to learn models that perform well on the yet unseen test data that is also generated from the same distribution $\mathcal{P}$. In particular, the empirical risk above serves as a surrogate for the population risk (loss) $\operatorname{\mathbb{E}}_{(\vct{x},y)\sim\mathcal{P}}[\ell(\vct{x},y;\vct{\theta})]$.
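For the quadratic loss studied later in this paper, the minimization \eqref{empobj} over linear models $f_{\vct{\theta}}(\vct{x}) = \<\vct{x},\vct{\theta}\>$ is ordinary least squares; a minimal sketch (the problem sizes and noise level are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, sigma0 = 200, 20, 0.5                 # illustrative sizes, not from the paper
theta0 = rng.normal(size=p)                 # ground-truth parameter
X = rng.normal(size=(n, p))                 # features x_i ~ N(0, I_p)
y = X @ theta0 + sigma0 * rng.normal(size=n)

# empirical risk minimizer for the quadratic loss: least squares
theta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
```

In this well-conditioned regime ($n \gg p$) the estimation error $\|\widehat{\theta}-\theta_0\|_2$ is small, of order $\sigma_0\sqrt{p/n}$.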
In practice, many models trained by following this paradigm are often highly vulnerable to adversarial perturbations, with many well-documented examples in deep learning. This observation has given rise to a surge of interest both in finding such perturbations (a.k.a.\ adversarial attacks) and in learning models that are robust against such perturbations (a.k.a.\ adversarial training). A line of recent work \cite{tsipras2018robustness,madry2017towards} proposes training approaches that demonstrate promising empirical performance against adversarial perturbations. Motivated by applications in image processing, these papers consider an adversarial attack model where, for a predefined perturbation set $\mathcal{S}$, the adversary has the power of perturbing
each data point $\vct{x}$ by adding an element of $\mathcal{S}$. Then an estimator ${\widehat{\boldsymbol{\theta}}}^{\mathcal{S}}$ is constructed by solving a saddle point problem that takes into account such manipulative power for the adversary:
\begin{align}\label{eq:hth-cS}
{\widehat{\boldsymbol{\theta}}}^{\mathcal{S}} \in \arg\min_{{\boldsymbol{\theta}}\in \mathbb{R}^p}\text{ }\max_{\bdelta_i\in \mathcal{S}}\quad \frac{1}{n}\sum_{i=1}^n \ell\left(\vct{x}_i+\bdelta_i,y_i;\vct{\theta}\right)\,.
\end{align}
To evaluate the performance of such an estimator, in this paper we consider two metrics of particular interest, \emph{standard risk} and \emph{adversarial risk}.
\medskip
\noindent{\bf Standard risk.} This is the expected prediction loss of an estimator ${\widehat{\boldsymbol{\theta}}}$ on an uncorrupted test data point that is generated from the same distribution as the training data. Namely,
\begin{align}\label{SR}
{\sf SR}({\widehat{\boldsymbol{\theta}}}):= \frac{1}{p} \operatorname{\mathbb{E}}\left[\ell\left(\vct{x},y;{\widehat{\boldsymbol{\theta}}}\right)\right]\,\quad\text{where}\quad (\vct{x},y)\sim \mathcal{P}\,.
\end{align}
\medskip
\noindent{\bf Adversarial risk.} This is the expected prediction loss of an estimator ${\widehat{\boldsymbol{\theta}}}$ on an adversarially corrupted test data point according to the attack model \eqref{eq:hth-cS}. Namely,
\begin{align}\label{AR}
{\sf AR}({\widehat{\boldsymbol{\theta}}}):= \frac{1}{p} \operatorname{\mathbb{E}}\Big[\max_{\bdelta\in \mathcal{S}} \ell\left(\vct{x}+\bdelta,y;{\widehat{\boldsymbol{\theta}}}\right) \Big]\,\quad\text{where}\quad (\vct{x},y)\sim \mathcal{P}\,.
\end{align}
Stated differently, the adversarial risk measures how well the estimator ${\widehat{\boldsymbol{\theta}}}$ performs in predicting the true label when it is fed with an adversarially corrupted test data point. We note that the factor $1/p$ is the proper scaling so the risk has a finite limit under our asymptotic regime.
Focusing on linear regression, in this paper we aim to derive asymptotically exact characterizations of these two metrics and study the tradeoff achieved by the class of estimators ${\widehat{\boldsymbol{\theta}}}^{\mathcal{S}}$ of the form~\eqref{eq:hth-cS}. These characterizations will also enable us to study the effect of various quantities (e.g.~size and quality of the training data, model size, etc.) on the trade-off between statistical and adversarial risk. Specifically, we consider the linear regression model below.
\begin{definition}[Linear Regression Setting]\label{linregmod} We consider
standard Gaussian linear regression model with the training data consisting of $n$ i.i.d pairs $(\vct{x}_i,y_i)$, with $\vct{x}_i\sim \normal(0,\mtx{I}_{p})$ representing the features and $y_i\in \mathbb{R}$ the corresponding label given by \footnote{We note that our analysis in Section~\ref{sec:proofs} can be extended to general Gaussian linear regression where $\vct{x}_i\sim \normal(0,\bSigma)$. This however requires more involved derivations that are not included in this version.}
\begin{align}\label{eq:linear}
y_i = \<\vct{x}_i,{\boldsymbol{\theta}}_0\>+ w_i\, \quad\text{where}\quad w_i\sim \normal(0,\sigma_0^2)\,.
\end{align}
We also focus on training linear models of the form $f_{\vct{\theta}}(\vct{x})=\langle \vct{x},\vct{\theta}\rangle$ via a quadratic loss $\ell(\vct{x},y;{\boldsymbol{\theta}}) = \frac{1}{2}(y-\<\vct{x},{\boldsymbol{\theta}}\>)^2$ and consider perturbation sets of the form $\mathcal{S}:= \{\bdelta\in \mathbb{R}^p:\, \twonorm{\bdelta}\le \varepsilon\}$, where $\varepsilon$ is a measure of the adversary's power. To make the dependence on $\varepsilon$ explicit in our notation, we replace ${\widehat{\boldsymbol{\theta}}}^{\mathcal{S}}$ for this choice of $\mathcal{S}$ by ${\widehat{\boldsymbol{\theta}}}^\varepsilon$. In this case~\eqref{eq:hth-cS} takes the form
\begin{align}\label{eq:htheps}
{\widehat{\boldsymbol{\theta}}}^{\varepsilon} \in \arg\min_{{\boldsymbol{\theta}}\in \mathbb{R}^p}\, \max_{\twonorm{\bdelta_i}\le \varepsilon}\, \frac{1}{2n} \sum_{i=1}^n \left(y_i-\<\vct{x}_i+\bdelta_i,{\boldsymbol{\theta}}\>\right)^2\,.
\end{align}
\end{definition}
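For the quadratic loss and $\ell_2$ perturbation set of Definition~\ref{linregmod}, the inner maximization in \eqref{eq:htheps} admits a simple closed form (a standard computation, recorded here since it underlies the analysis):
\begin{align*}
\max_{\twonorm{\bdelta}\le \varepsilon} \left(y-\<\vct{x}+\bdelta,{\boldsymbol{\theta}}\>\right)^2 = \left(\left|y-\<\vct{x},{\boldsymbol{\theta}}\>\right| + \varepsilon\twonorm{{\boldsymbol{\theta}}}\right)^2,
\end{align*}
since the maximizing perturbation is $\bdelta = -\,\mathrm{sign}\left(y-\<\vct{x},{\boldsymbol{\theta}}\>\right)\varepsilon\,{\boldsymbol{\theta}}/\twonorm{{\boldsymbol{\theta}}}$. Consequently, \eqref{eq:htheps} is equivalent to the ordinary (non-saddle-point) minimization of $\frac{1}{2n}\sum_{i=1}^n \big(|y_i-\<\vct{x}_i,{\boldsymbol{\theta}}\>|+\varepsilon\twonorm{{\boldsymbol{\theta}}}\big)^2$ over ${\boldsymbol{\theta}}$.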
Next we formally introduce the asymptotic regime of interest in this paper.
\smallskip
\noindent{\bf Asymptotic regime.} For a given sample size $n$, we define an \emph{instance} of the standard Gaussian model by a tuple $({\boldsymbol{\theta}}_0,p,\sigma_0)$, with ${\boldsymbol{\theta}}_0\in \mathbb{R}^p$, $p\in \mathbb{N}$ and $\sigma_0\in \mathbb{R}_{\ge0}$. We consider sequence of instances of the Gaussian model indexed by the sample size $n$.
\begin{definition}\label{def:converging}
The sequence of instances $\{{\boldsymbol{\theta}}_0(n), p(n), \sigma_0(n)\}_{n\in \mathbb{N}}$ indexed by $n$ is called a converging sequence if:
\begin{itemize}
\item We have $\frac{n}{p}\to \delta \in (0,\infty)$ and $\frac{\sigma_0^2(n)}{p} \to \sigma^2$ as $n\to \infty$.
\item Empirical second moment of the signal converges, i.e., $\frac{1}{p} \sum_{i=1}^p \theta_{0,i}(n)^2 \to V^2 <\infty$, as $n\to \infty$.
\end{itemize}
\end{definition}
In summary, we have introduced the following notations and terms which will be used throughout the paper: the dimension $p$, number of training data points $n$, overparametrization parameter $\delta = n/p$, normalized noise power $\sigma^2$, normalized norm of the true model $V^2$, and the adversary's power $\varepsilon$.
\section{Introduction}
Recent advances in machine learning and deep learning in particular, have led to trained models with breakthrough performance in a variety of applications spanning visual object classification to speech recognition and natural language processing. Despite wide empirical success, these modern learning models are known to be highly vulnerable to small adversarial perturbations to their inputs \cite{biggio2013evasion,szegedy2014intriguing}. For instance, in the context of image classification even small perturbations of the image, which are imperceptible to a human, can lead to incorrect classification by these models. As these modern inferential techniques begin to be deployed in applications such as autonomous or recognition systems in which safety, reliability, and security are crucial, it is increasingly important to ensure trained models are robust against abrupt or adversarial perturbations to the input.
To mitigate the effect of adversarial perturbations, a wide variety of \emph{adversarial training} methods have been developed \cite{DBLP:journals/corr/GoodfellowSS14, kurakin2016adversarial, DBLP:conf/iclr/MadryMSTV18, DBLP:conf/iclr/RaghunathanSL18, DBLP:conf/icml/WongK18} which often involve augmenting the training loss so as to become more robust to input perturbations. While adversarial training methods have been rather successful at improving the accuracy of the trained model on adversarially perturbed inputs (\emph{robust accuracy}), often this benefit comes at the cost of decreasing accuracy on natural unperturbed inputs (\emph{standard accuracy}) \cite{DBLP:conf/iclr/MadryMSTV18}. Therefore, it is crucial to understand the tradeoff between robust and standard accuracy with adversarial training. Complicating matters further, recent empirical evidence suggests that a variety of other factors affect this tradeoff in somewhat surprising ways. For instance, experiments in \cite{tsipras2018robustness} demonstrate that while adversarial training typically has a negative effect on standard accuracy, it outperforms non-adversarial training methods when there are only a few training samples. Perhaps surprisingly, the recent paper by \cite{raghunathan2019adversarial} suggests that in some cases the tradeoff between standard and robust accuracy can be mitigated with additional unlabeled data. Towards demystifying these empirical phenomena, in this paper we aim to precisely characterize the role of adversarial training by focusing on the following key questions:
\begin{quote}
\textit{What is the fundamental tradeoff between robust and standard accuracies in both finite and infinite data limits? How can we algorithmically achieve this tradeoff and what is the role of adversarial training? What is the effect of the size/quality of the data on this tradeoff? How does the model size (e.g. overparametrization) change this tradeoff?}
\end{quote}
A few recent papers have begun to answer some of these questions in specific settings \cite{tsipras2018robustness,DBLP:conf/icml/ZhangYJXGJ19, raghunathan2019adversarial}. See Section \ref{related} for a detailed discussion. Despite this interesting recent progress, a comprehensive understanding of the role of adversarial training and how it precisely affects the aforementioned tradeoffs remains largely mysterious. In this paper we aim to provide a precise characterization of the role of adversarial training by focusing on the simple yet foundational problem of linear regression.
\medskip
\noindent{\textbf{Contributions.}}
We formally introduce the linear regression problem with adversarially perturbed inputs in Section~\ref{secform} and address the questions above in this setting.
\begin{itemize}[leftmargin=*]
\item We characterize the fundamental tradeoff between standard risk\footnote{Since we focus on a regression problem henceforth we focus on risk in lieu of accuracy.} (${\sf SR}$) and adversarial risk (${\sf AR}$) achievable by any algorithm regardless of the computational power and the size of the available training data (see Section \ref{sec31}). This is carried out by deriving the asymptotic expressions of standard and adversarial risks, and analysing the Pareto optimal points of a two dimensional region consisting of all the achievable $({\sf SR},{\sf AR})$ pairs. This analysis clearly demonstrates the existence of a non-trivial tradeoff between the two risks in linear regression as depicted in Figure~\ref{fig:curves}.
\item In Section \ref{sec32}, we turn our attention to modern adversarial training algorithms and provide a precise characterization of the standard and adversarial risks achieved by them. This is carried out in a high-dimensional regime where the size of the training data $n$ and the number of parameters $p$ grow proportional to each other with their ratio $n/p \to \delta$ for fixed $\delta \in(0,+\infty)$. A key ingredient of our analysis is a powerful extension of a classical Gaussian process inequality \cite{gordon1988milman} known as the Convex Gaussian Minimax Theorem developed in \cite{thrampoulidis2015regularized} and further extended in \cite{thrampoulidis2018precise,deng2019model}.
\item Our precise characterization of the standard and robust risks for adversarial training algorithms allows us to rigorously study a variety of phenomena. First, we study the tradeoffs between standard and adversarial risks for a contemporary adversarial training algorithm and show that as the limiting ratio $n/p\to\delta$ between the number of training data $n$ and number of parameters $p$ grows, the algorithmic tradeoff curve approaches the fundamental (Pareto-optimal) tradeoff curve. These findings are manifested empirically in Figure~\ref{fig:curves}. We also characterize the effect of the size of the training data and model overparametrization (see Section~\ref{sec33}). We argue analytically and empirically that in the overparametrized regime (i.e. when $\delta < 1$) adversarial training helps improve standard risk (compared to normal training). However, as the size of training data grows (i.e. $\delta$ becomes large) adversarial training effectively hurts standard risk. In short, adversarial training improves generalization in the overparametrized regime, but effectively hurts generalization in the sufficiently underparametrized regime. Finally, in Section \ref{sec34} we demonstrate and prove the emergence of a phenomenon in adversarial training which is similar to the so-called double-descent phenomenon. When traditional training is used, the double-descent phenomenon demonstrates that increasing the model complexity beyond a certain interpolation threshold always improves generalization. We show that the double-descent behavior continues to hold with adversarial training. However, for the linear regression model considered in this paper, the global minimum of the risk is achieved under the interpolation threshold \emph{whose value changes with $\varepsilon$}. Our theory also allows us to study how adversarial training affects the interpolation threshold.
\end{itemize}
\section{Main Results}\label{sec:main}
In this paper we wish to understand fundamental tradeoffs between standard and adversarial risks as well as what can be achieved by modern adversarial training approaches. In Section \ref{sec31} we characterize the fundamental tradeoff between standard and adversarial risk achievable by any algorithm regardless of the computational power and the size of the available training data. Then in Section \ref{sec32} we turn our attention to precisely characterizing the standard and adversarial accuracy tradeoffs achieved by modern adversarial training algorithms of the form \eqref{eq:hth-cS}. This is carried out in a high-dimensional regime where the size of the training data $n$ and the number of parameters $p$ grow proportional to each other with their ratio $n/p \to \delta$ for fixed $\delta \in(0,+\infty)$. Next, in Section \ref{sec33} we focus on studying the role that the size of the training data plays and how it affects standard accuracy. Finally, in Section \ref{sec34} we prove the emergence of a phenomenon in adversarial training similar to the so-called double-descent phenomenon observed without adversarial training.
\subsection{Fundamental tradeoffs between standard and adversarial risk}\label{sec:pareto}
\label{sec31}
Motivated by the conflict observed between standard and adversarial risk in modern adversarial training \cite{DBLP:conf/iclr/MadryMSTV18}, we first wish to understand the fundamental tradeoffs that can be achieved between the two objectives. That is, the optimal tradeoff that can be achieved between standard and adversarial risk objectives for \emph{any estimator} ${\widehat{\boldsymbol{\theta}}}$ even with access to infinite computational power and infinite training data. We discuss the tradeoffs achievable by specific algorithms with finite training data in the next section.
\medskip
\noindent\textbf{$({\sf SR}, {\sf AR})$ Region and its Pareto Optimal Curve:} As discussed previously in Section \ref{secform} for an estimator ${\widehat{\boldsymbol{\theta}}}$ we use ${\sf SR}({\widehat{\boldsymbol{\theta}}})$ and ${\sf AR}({\widehat{\boldsymbol{\theta}}})$ to denote the standard and adversarial risks achieved by ${\widehat{\boldsymbol{\theta}}}$. Thus, for any estimator ${\widehat{\boldsymbol{\theta}}}$ we obtain a point $({\sf SR}({\widehat{\boldsymbol{\theta}}}), {\sf AR}({\widehat{\boldsymbol{\theta}}}))$ in the 2-d plane. We refer to the set of all such points, for all ${\widehat{\boldsymbol{\theta}}} \in \mathbb{R}^p$, as the $({\sf SR}, {\sf AR})$ region. To obtain the optimal tradeoff between standard and adversarial risks we need to characterize the Pareto-optimal points of this region.\footnote{Given a region $\mathcal{C} \in \mathbb{R}^2$, a point $(x,y) \in \mathcal{C}$ is Pareto optimal if there exists no other point $(x', y') \in \mathcal{C}$ s.t. $x' \leq x$ and $y' \leq y$. }
In the linear regression setting of this paper the expressions of the standard risk \eqref{SR} and adversarial risk \eqref{AR} are convex functions of ${\boldsymbol{\theta}}$. Therefore, using standard results in multi-objective optimization, we can derive all the Pareto optimal points of the $({\sf SR}, {\sf AR})$ region by minimizing a weighted combination of these two risks for different weights $\lambda$:
\begin{align}\label{eq:pareto}
{\boldsymbol{\theta}}^\lambda =\arg\min_{{\boldsymbol{\theta}}} \,\,{\lambda} \, \overbrace{\operatorname{\mathbb{E}}\left\{(y-\<\vct{x},{\boldsymbol{\theta}}\>)^2\right\}}^{\text{standard risk}} \, + \,\overbrace{\operatorname{\mathbb{E}}\Big\{\max_{\twonorm{\bdelta}\le \eps_{{\rm test}}} (y-\<\vct{x}+\bdelta,{\boldsymbol{\theta}}\>)^2 \Big\}}^{\text{adversarial risk}} \,.
\end{align}
The Pareto-optimal curve is then given by $\{({\sf SR}({\boldsymbol{\theta}}^\lambda), {\sf AR}({\boldsymbol{\theta}}^\lambda)): \; \lambda\ge0\}$.
\medskip
\noindent\textbf{Analytical Expression of the Optimal Tradeoffs:} Before we proceed to calculate ${\boldsymbol{\theta}}^\lambda$, we derive the standard and adversarial risks (${\sf SR}({\widehat{\boldsymbol{\theta}}})$ and ${\sf AR}({\widehat{\boldsymbol{\theta}}})$) as functions of ${\boldsymbol{\theta}}_0$ and $\sigma_0^2$ in the Gaussian linear regression model. We defer the proof of this lemma to Section \ref{lem:SR-ARpf}.
\vspace{-.2cm}
\begin{lemma}\label{lem:SR-AR} Consider the linear regression setting of Definition \ref{linregmod}. For a given estimator ${\widehat{\boldsymbol{\theta}}}$ the standard risk \eqref{SR} is equal to
\vspace{-.1cm}
\begin{align*}
{\sf SR}({\widehat{\boldsymbol{\theta}}}):=\frac{1}{p}\operatorname{\mathbb{E}}\left[(y-\<\vct{x},{\widehat{\boldsymbol{\theta}}}\>)^2\right]= \frac{\sigma_0^2}{p} + \frac{1}{p} \twonorm{{\widehat{\boldsymbol{\theta}}}-{\boldsymbol{\theta}}_0}^2.
\end{align*}
\vspace{-.2cm}
Furthermore, the adversarial risk \eqref{AR} with a corruption level of $\eps_{{\rm test}}$ is equal to
\begin{align*}
{\sf AR}({\widehat{\boldsymbol{\theta}}}):=& \frac{1}{p}\operatorname{\mathbb{E}}\bigg[\max_{\twonorm{\bdelta}\le \eps_{{\rm test}}} (y-\<\vct{x}+\bdelta,{\widehat{\boldsymbol{\theta}}}\>)^2 \bigg]\\
=&\frac{1}{p} \left(\sigma_0^2+ \twonorm{{\widehat{\boldsymbol{\theta}}}-{\boldsymbol{\theta}}_0}^2+ \eps_{{\rm test}}^2\twonorm{{\widehat{\boldsymbol{\theta}}}}^2\right)+2\sqrt{\frac{2}{\pi}} \frac{\eps_{{\rm test}}}{\sqrt{p}} \twonorm{{\widehat{\boldsymbol{\theta}}}} \left(\frac{\sigma_0^2}{p}+ \frac{1}{p}\twonorm{{\widehat{\boldsymbol{\theta}}}-{\boldsymbol{\theta}}_0}^2\right)^{1/2}.
\end{align*}
\end{lemma}
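The two expressions in the lemma are easy to validate numerically: for an $\ell_2$-bounded perturbation the inner maximum evaluates in closed form to $(|y-\<\vct{x},{\widehat{\boldsymbol{\theta}}}\>|+\eps_{{\rm test}}\twonorm{{\widehat{\boldsymbol{\theta}}}})^2$ (cf.\ the reformulation \eqref{eq:htheps-2} in Appendix \ref{insight}), so both risks can be estimated by Monte Carlo. The following sketch (dimension, estimator, and sample size are purely illustrative) compares the empirical averages with the lemma's formulas.

```python
import numpy as np

rng = np.random.default_rng(0)
p, sigma0, eps_test, n_mc = 50, 1.0, 0.5, 200_000

theta0 = rng.standard_normal(p)
theta_hat = theta0 + 0.3 * rng.standard_normal(p)   # an arbitrary estimator

# Monte Carlo: for an l2-ball adversary the inner max equals
# (|y - <x, theta>| + eps_test * ||theta||)^2.
X = rng.standard_normal((n_mc, p))
y = X @ theta0 + sigma0 * rng.standard_normal(n_mc)
r = y - X @ theta_hat
SR_mc = np.mean(r**2) / p
AR_mc = np.mean((np.abs(r) + eps_test * np.linalg.norm(theta_hat))**2) / p

# Closed-form expressions from the lemma.
err2 = np.linalg.norm(theta_hat - theta0)**2
nrm = np.linalg.norm(theta_hat)
SR_cf = (sigma0**2 + err2) / p
AR_cf = (sigma0**2 + err2 + eps_test**2 * nrm**2) / p \
        + 2 * np.sqrt(2 / np.pi) * (eps_test / np.sqrt(p)) * nrm * np.sqrt(SR_cf)
print(SR_mc, SR_cf, AR_mc, AR_cf)
```

The two pairs agree up to Monte Carlo error.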
With a precise expression of the standard and adversarial risk in hand our next theorem characterizes the solution ${\boldsymbol{\theta}}^\lambda$ of the optimization problem~\eqref{eq:pareto} which in conjunction with Lemma \ref{lem:SR-AR} determines the Pareto-optimal tradeoff curve. We defer the proof of this result to Section \ref{pro:hth-Lampf}.
\begin{propo}\label{pro:hth-Lam}
Under the linear regression setting of Definition \ref{linregmod}, the solution ${\boldsymbol{\theta}}^\lambda$ of the
optimization problem~\eqref{eq:pareto} is given by
\vspace{-0.65cm}
\begin{align*}
{\boldsymbol{\theta}}^\lambda = (1+\gamma_0^{\lambda})^{-1} {\boldsymbol{\theta}}_0\,,
\end{align*}
\vspace{-0.25cm}
with $\gamma_0^{\lambda}$ the fixed point of the following two equations:
\begin{align*}
\gamma_0^{\lambda} = \frac{\eps_{{\rm test}}^2+\sqrt{\frac{2}{\pi}} \eps_{{\rm test}} A^\lambda}{1+\lambda + \sqrt{\frac{2}{\pi}} \frac{\eps_{{\rm test}}}{A^\lambda}}\,\quad\text{and}\quad
A^\lambda = \frac{1}{\twonorm{{\boldsymbol{\theta}}_0}} \left((1+\gamma_0^\lambda)^2\sigma_0^2+ (\gamma_0^\lambda)^2\twonorm{{\boldsymbol{\theta}}_0}^2\right)^{1/2}\,.
\end{align*}
\end{propo}
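The fixed point in the proposition is straightforward to compute numerically. In the sketch below we simply iterate the two equations; convergence of this naive iteration is an empirical observation for the illustrative parameters used here, not a proven property.

```python
import numpy as np

def pareto_point(theta0, sigma0, eps_test, lam, iters=200):
    """Iterate the two fixed-point equations of the proposition and
    return gamma_0^lambda together with theta^lambda = theta_0/(1+gamma)."""
    c = np.sqrt(2 / np.pi)
    t0 = np.linalg.norm(theta0)
    gamma = 0.0
    for _ in range(iters):
        A = np.sqrt((1 + gamma)**2 * sigma0**2 + gamma**2 * t0**2) / t0
        gamma = (eps_test**2 + c * eps_test * A) / (1 + lam + c * eps_test / A)
    return gamma, theta0 / (1 + gamma)

theta0 = np.ones(10) / np.sqrt(10)          # ||theta_0|| = 1 (illustrative)
gamma, theta_lam = pareto_point(theta0, sigma0=1.0, eps_test=0.5, lam=1.0)
```

Sweeping $\lambda$ and evaluating the resulting ${\boldsymbol{\theta}}^\lambda$ with Lemma~\ref{lem:SR-AR} traces out the Pareto-optimal curve of Figure~\ref{fig:curves}.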
In Figure~\ref{fig:curves} we plot the Pareto optimal curve in the $({\sf SR},{\sf AR})$ plane in black for an instance where $\eps_{{\rm test}}=0.5$ and the normalized norm of the true model and the noise power are both equal to one ($\sigma=V=1$). This curve serves as a fundamental limit on the performance of any algorithm even with access to infinite data and computational power. This figure also contains algorithmic tradeoffs which we discuss in further detail in the next section. In particular, in the next section we precisely characterize the SR-AR tradeoff achieved by a specific adversarial training algorithm.
\begin{figure}
\centering
\includegraphics[scale=1.2]{curves.pdf}
\caption{Pareto optimal curve along with algorithmic curves for several values of $\delta$. As $\delta$ grows the algorithmic tradeoff curves approach the fundamental Pareto optimal curve. The dots correspond to the empirical data obtained by solving for the optimal solution ${\widehat{\boldsymbol{\theta}}}^\varepsilon$ of \eqref{eq:htheps} using gradient descent and then computing $({\sf SR}({\widehat{\boldsymbol{\theta}}}^\varepsilon), {\sf AR}({\widehat{\boldsymbol{\theta}}}^\varepsilon))$ from Lemma~\ref{lem:SR-AR} with different values of $\varepsilon$. Here, $\sigma = 1$, $V=1$, $p = 1000$, and $\eps_{{\rm test}}=0.5$.}
\label{fig:curves}
\end{figure}
\subsection{Algorithmic tradeoffs between standard and adversarial risks}
\label{sec32}
Given the fundamental tradeoff of the previous section, a natural question arises: is it possible to achieve this tradeoff algorithmically, with only finite data and computational power? Specifically, what is the tradeoff achieved by common adversarial training algorithms? In this section we consider the class of estimators ${\widehat{\boldsymbol{\theta}}}^\varepsilon$ constructed through the saddle point problem~\eqref{eq:htheps} for various values of $\varepsilon$ at training time, i.e.~$\{{\widehat{\boldsymbol{\theta}}}^\varepsilon:\, \varepsilon\ge 0\}$. We wish to precisely derive the tradeoff curve between the standard and the adversarial risks achieved by this class of estimators. We refer to such a curve as an \emph{algorithmic} tradeoff curve, since it corresponds to the specific class of saddle point estimators, as opposed to the Pareto optimal tradeoff curve studied in Section~\ref{sec:pareto}, which serves as a lower bound for any estimator. To avoid any confusion about the tradeoffs discussed, we would like to emphasize that:
\begin{itemize}
\item[(i)] In the \emph{training} phase, we are \emph{varying} the adversarial power $\varepsilon$, and accordingly, obtain a range of estimators ${\widehat{\boldsymbol{\theta}}}^\varepsilon$ by solving \eqref{eq:htheps}.
\item[(ii)] At \emph{test} time, the adversarial power is fixed to a given value $\eps_{{\rm test}}$ and we measure the (expected) standard and adversarial risks of the trained estimators ${\widehat{\boldsymbol{\theta}}}^\varepsilon$ with respect to the true adversarial power $\eps_{{\rm test}}$. By varying $\varepsilon$ at training time, we expect to sweep a tradeoff between standard and adversarial risks, i.e.\ estimators ${\widehat{\boldsymbol{\theta}}}^\varepsilon$ with large $\varepsilon$ should have a smaller adversarial risk but higher standard risk, and estimators with smaller $\varepsilon$ should exhibit the opposite behavior.
\end{itemize}
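This train-and-evaluate protocol is simple to simulate. Using the closed-form reformulation \eqref{eq:htheps-2} of the inner maximization (see Appendix \ref{insight}), training reduces to subgradient descent on a convex objective. The sketch below (step size, iteration budget, and problem sizes are illustrative and untuned) sweeps the training-time $\varepsilon$ and evaluates both risks at a fixed $\eps_{{\rm test}}$ via the formulas of Lemma~\ref{lem:SR-AR}.

```python
import numpy as np

def adv_train(X, y, eps, steps=3000, lr=0.05):
    """Subgradient descent on the reformulated objective
    (1/2p) * sum_i (|y_i - <x_i, theta>| + eps * ||theta||)^2
    (fixed step size and iteration budget are illustrative)."""
    n, p = X.shape
    theta = np.linalg.lstsq(X, y, rcond=None)[0]   # warm start: least squares
    for _ in range(steps):
        r = y - X @ theta
        nt = np.linalg.norm(theta) + 1e-12
        w = np.abs(r) + eps * nt                    # per-sample adversarial residual
        grad = (-(np.sign(r) * w) @ X + eps * w.sum() * theta / nt) / p
        theta -= lr * grad
    return theta

rng = np.random.default_rng(1)
n, p, sigma0, eps_test = 200, 50, 1.0, 0.5
theta0 = rng.standard_normal(p)
X = rng.standard_normal((n, p))
y = X @ theta0 + sigma0 * rng.standard_normal(n)

for eps in [0.0, 0.25, 0.5, 1.0]:                   # training-time adversary power
    th = adv_train(X, y, eps)
    err2 = np.linalg.norm(th - theta0)**2
    SR = (sigma0**2 + err2) / p                     # standard risk (lemma)
    AR = (sigma0**2 + err2 + eps_test**2 * np.linalg.norm(th)**2) / p \
         + 2 * np.sqrt(2/np.pi) * (eps_test/np.sqrt(p)) * np.linalg.norm(th) * np.sqrt(SR)
    print(f"eps={eps:4.2f}  SR={SR:.3f}  AR={AR:.3f}")
```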
\vspace{.2cm}
\noindent\textbf{Analytical Expression of the Algorithmic Tradeoffs.} Our goal for the rest of this section is to analytically derive the algorithmic tradeoffs in terms of the ratio $n/p \to \delta \in (0, \infty)$, which represents the number of training data points per dimension. We focus on converging sequences of Gaussian model instances as described in Definition~\ref{def:converging}. Recall that, by virtue of Lemma~\ref{lem:SR-AR}, in order to derive the asymptotic standard and adversarial risk of ${\widehat{\boldsymbol{\theta}}}^\varepsilon$, it suffices to obtain an exact characterization of the asymptotic error $\lim_{n\to\infty}\frac{1}{p}\twonorm{{\widehat{\boldsymbol{\theta}}}^\varepsilon-{\boldsymbol{\theta}}_0}^2$ and the asymptotic estimator norm $\lim_{n\to\infty}\frac{1}{p}\twonorm{{\widehat{\boldsymbol{\theta}}}^\varepsilon}^2$. This is the subject of the next theorem, formally proven in Section \ref{Thm5pf}.
\begin{theorem}\label{thm:main}
Let $\{({\boldsymbol{\theta}}_0(n),p(n),\sigma_0(n))\}_{n\in \mathbb{N}}$ be a converging sequence of instances of the standard Gaussian design model. Consider the linear regression model~\eqref{eq:linear} and let ${\widehat{\boldsymbol{\theta}}}^\varepsilon$ be a solution of \eqref{eq:htheps}. If $\varepsilon,\delta>0$ or $\varepsilon=0$, $\delta>1$, then
\begin{itemize}[leftmargin=*]
\item[(a)] The following convex-concave minimax scalar optimization has a unique solution $(\alpha_*,\beta_*,\gamma_*,\tau_{h*}, \tau_{g*})$:
\begin{align}
\label{eq019}
\max_{0\le\beta\le K_\beta}\sup_{\gamma,\tau_h\ge0}\;\;\min_{0\le \alpha\le K_\alpha}\;\;\min_{\tau_g\ge 0}\;\; \quad D(\alpha,\beta, \gamma,\tau_h,\tau_g)\,,\quad\text{where}
\end{align}
\vspace{-0.5cm}
\begin{align}\label{eq019-2}
&D(\alpha,\beta, \gamma,\tau_h,\tau_g):=\frac{\delta\beta}{2(\tau_g+\beta)} \left(\alpha^2+\sigma^2\right)\nonumber\\
&+\delta \mathbb{1}_{\big\{\frac{\gamma(\tau_g+\beta)}{\delta\varepsilon\beta\sqrt{\alpha^2+\sigma^2}}>\sqrt{\frac{2}{\pi}}\big\}}\frac{\beta^2(\alpha^2+\sigma^2)}{2\tau_g(\tau_g+\beta)}\left({\rm erf}\left(\frac{\tau_*}{\sqrt{2}}\right)-\frac{\gamma(\tau_g+\beta)}{\delta\varepsilon\beta\sqrt{\alpha^2+\sigma^2}}\;\tau_*\right)\nonumber\\
&-\frac{\alpha}{2\tau_h}(\gamma^2 +\beta^2)+ \gamma\sqrt{\frac{\alpha^2\beta^2}{\tau_h^2}+V^2} -\frac{\alpha\tau_h}{2}+\frac{\beta\tau_g}{2}\,,
\end{align}
and $\tau_*$ is the unique solution to
\begin{align*}
\frac{\gamma(\tau_g+\beta)}{\delta\varepsilon\beta\sqrt{\alpha^2+\sigma^2}}-\frac{\beta}{\tau_g}\tau-\tau\cdot {\rm erf}\left(\frac{\tau}{\sqrt{2}}\right)-\sqrt{\frac{2}{\pi}} e^{-\frac{\tau^2}{2}}=0
\end{align*}
\item[(b)] It holds in probability that $\lim_{n\to\infty}\frac{1}{p} \twonorm{{\widehat{\boldsymbol{\theta}}}^\varepsilon-{\boldsymbol{\theta}}_0}^2 = \alpha_*^2$.
\item[(c)] It holds in probability that
\vspace{-0.6cm}
\begin{align}
\lim_{n\to\infty}\frac{1}{\sqrt{p}}\twonorm{{\widehat{\boldsymbol{\theta}}}^\varepsilon}= \frac{\beta_* \tau_*\sqrt{\alpha_*^2+\sigma^2}}{\varepsilon \tau_{g*}}\,.
\end{align}
\end{itemize}
\end{theorem}
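The only implicit quantity in part (a) is $\tau_*$. One can check that the left-hand side of its defining equation is strictly decreasing in $\tau$ (its derivative is $-\beta/\tau_g-{\rm erf}(\tau/\sqrt{2})$, since the remaining terms cancel), so a unique root exists whenever the constant term exceeds $\sqrt{2/\pi}$, and bisection finds it. A sketch with illustrative parameter values:

```python
import math

def tau_star(c, beta, tau_g, lo=0.0, hi=50.0, tol=1e-12):
    """Bisection for c - (beta/tau_g)*tau - tau*erf(tau/sqrt(2))
       - sqrt(2/pi)*exp(-tau^2/2) = 0.
    The left-hand side is strictly decreasing in tau, so the root is
    unique whenever c > sqrt(2/pi) (i.e. when g(0) > 0)."""
    def g(t):
        return (c - (beta / tau_g) * t - t * math.erf(t / math.sqrt(2))
                - math.sqrt(2 / math.pi) * math.exp(-t * t / 2))
    assert g(lo) > 0 > g(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

# c stands for gamma*(tau_g+beta)/(delta*eps*beta*sqrt(alpha^2+sigma^2));
# the numerical values below are illustrative only.
t = tau_star(c=1.5, beta=1.0, tau_g=1.0)
```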
We note that the loss \eqref{eq:htheps} and its optimal solution are rather complicated, high-dimensional functions of the feature/label pairs $\{(\vct{x}_i,y_i)\}_{i=1}^n$. Nevertheless, the theorem above provides a precise characterization of their properties via a five-dimensional convex-concave minimax optimization problem. This characterization in turn yields a precise understanding of the standard and adversarial accuracies. In particular, combining Theorem~\ref{thm:main} (parts (b)-(c)) with Lemma~\ref{lem:SR-AR}, we can obtain the asymptotic values of ${\sf SR}({\widehat{\boldsymbol{\theta}}}^\varepsilon)$ and ${\sf AR}({\widehat{\boldsymbol{\theta}}}^\varepsilon)$, and hence derive the algorithmic tradeoff curve achieved by the class $\{{\widehat{\boldsymbol{\theta}}}^\varepsilon:\, \varepsilon\ge 0\}$ as $\varepsilon$ varies (discussed in the next corollary, proven in Section \ref{cor6pf}).
\begin{corollary}\label{cor6}
Let $\{({\boldsymbol{\theta}}_0(n),p(n),\sigma_0(n))\}_{n\in \mathbb{N}}$ be a converging sequence of instances of the standard Gaussian design model. Consider the linear regression model~\eqref{eq:linear} and let ${\widehat{\boldsymbol{\theta}}}^\varepsilon$ be a solution of \eqref{eq:htheps}. Further assume that $\varepsilon,\delta>0$ or $\varepsilon=0$, $\delta>1$. Let $(\alpha_*,\beta_*,\gamma_*,\tau_{h*}, \tau_{g*})$ denote the optimal solution of the minimax optimization~\eqref{eq019}. Then, the following identities hold in probability:
\begin{align}
\lim_{n\to\infty} {\sf SR}({\widehat{\boldsymbol{\theta}}}^\varepsilon) &= \sigma^2 + \alpha_*^2\,,\\
\lim_{n\to\infty} {\sf AR}({\widehat{\boldsymbol{\theta}}}^\varepsilon) &= \left(\sigma^2+\alpha_*^2 + \eps_{{\rm test}}^2 (\alpha_*^2+\sigma^2) \left(\frac{\beta_*\tau_*}{\varepsilon \tau_{g*}}\right)^2\right)
+2\sqrt{\frac{2}{\pi}} \frac{\eps_{{\rm test}}\beta_*\tau_*}{\varepsilon \tau_{g*}} (\sigma^2+ \alpha_*^2)\,.
\end{align}
\end{corollary}
The corollary above provides a precise characterization of the standard and adversarial accuracy achieved by the adversarial training algorithm consisting of running gradient descent on the saddle point problem~\eqref{eq:htheps}. In Figure~\ref{fig:curves}, we plot the algorithmic tradeoff curve for several values of $\delta$, as well as the empirical values obtained by running gradient descent. As we observe, our theoretical predictions and the empirical values match rather closely even for moderately large parameter values ($p=1000$). Such a precise characterization allows us to rigorously study a variety of phenomena; we mention one below and discuss others in the coming sections. The plots in Figure~\ref{fig:curves} clearly show that as $\delta$ grows, the algorithmic tradeoff curve approaches the Pareto-optimal tradeoff curve. In other words, the optimal tradeoff between standard and adversarial risks can be achieved by the specific class of estimators ${\widehat{\boldsymbol{\theta}}}^\varepsilon$ constructed by the saddle point problem~\eqref{eq:htheps}. This observation is formally stated in the next theorem, with the proof deferred to Section \ref{delta_limitpf}.
\begin{theorem} \label{delta_limit}
Let $\{({\boldsymbol{\theta}}_0(n),p(n),\sigma_0(n))\}_{n\in \mathbb{N}}$ be a converging sequence of instances of the standard Gaussian design model. Consider the linear regression model~\eqref{eq:linear}, and let ${\widehat{\boldsymbol{\theta}}}^\varepsilon$ be a solution of \eqref{eq:htheps} and ${\boldsymbol{\theta}}^\lambda$ the solution of \eqref{eq:pareto}. Then for any $\lambda\ge 0$ there exists $\varepsilon = \varepsilon(\sigma, V, \eps_{{\rm test}}, \lambda)$, such that
\begin{eqnarray}
\lim_{\delta \to\infty}\lim_{n \to\infty} {\sf SR}({\widehat{\boldsymbol{\theta}}}^\varepsilon) = \lim_{p\to\infty} {\sf SR}({\boldsymbol{\theta}}^\lambda)\,,\quad\quad
\lim_{\delta \to\infty} \lim_{n \to\infty} {\sf AR}({\widehat{\boldsymbol{\theta}}}^\varepsilon) = \lim_{p\to\infty} {\sf AR}({\boldsymbol{\theta}}^\lambda)\,.
\end{eqnarray}
\end{theorem}
The theorem above formally proves that in the infinite data limit ($\delta\rightarrow +\infty$) one of the commonly used adversarial training algorithms achieves the optimal tradeoff between standard and robust accuracies.
\subsection{The role of the size of the training data and overparameterization}
\label{sec33}
\begin{figure}
\centering
\begin{minipage}{.485\textwidth}
\centering
\includegraphics[scale=1]{SR_vs_eps_deltaLess1.pdf}
\caption*{(a) Theoretical curves}
\end{minipage}\hspace{0.3cm}
\begin{minipage}{.485\textwidth}
\centering
\includegraphics[scale=1]{SR_vs_eps_deltaLess1_emp.pdf}
\caption*{(b) Empirical curves}
\end{minipage}
\caption{Standard risk (${\sf SR}({\widehat{\boldsymbol{\theta}}}^\varepsilon)$) versus $\varepsilon$ for several values of $\delta<1$. Left panel corresponds to the theoretical curve obtained by Theorem~\ref{thm:main} (with $\sigma =1$ and $V=1$), and the right panel corresponds to the empirical results (with $\sigma =1$ and $\theta_{0,i}\sim \normal(0,1)$). The empirical results are averaged over 100 different realizations of noise and features. As $\delta$ grows to one, we observe a faster decay in the standard risk with respect to the adversarial power $\varepsilon$.
\label{fig:SR_deltaLess1}}
\end{figure}
As discussed, our precise understanding of the optimal solution of adversarial training allows us to precisely characterize the effect of various phenomena. In particular, in this section we focus on the role of the size of the training data. We begin by considering the common scenario in modern learning where trained models often consist of more parameters than training data points. In Figure~\ref{fig:SR_deltaLess1}-(a) we plot the standard risk, using Theorem~\ref{thm:main} Part (b), versus $\varepsilon$ for different values of $\delta<1$. As we observe, for small to moderate values of $\varepsilon$ this curve is decreasing in $\varepsilon$, which implies that adversarial training helps improve standard accuracy. The standard risk falls more steeply as $\delta$ becomes closer to one. In Figure~\ref{fig:SR_deltaLarger1}-(a) we observe a similar trend for $\delta>1$. However, as $\delta$ grows larger than one, the positive effect of adversarial training on the standard risk falters and we see a slower decline. When $\delta= 10$, the curve is almost flat at $\varepsilon=0$ and then starts to increase with $\varepsilon$. In other words, for larger $\delta$ the negative effect of adversarial training on standard risk sets in at smaller values of $\varepsilon$. Our theoretical predictions are in line with recent empirical observations of a similar flavor in neural networks \cite{tsipras2018robustness}; our theoretical results thus formally prove the emergence of such behavior. We provide further insight into the emergence of this phenomenon, along with more rigorous theoretical guarantees, in Appendix \ref{insight}.
\begin{figure}
\centering
\begin{minipage}{.485\textwidth}
\centering
\includegraphics[scale=0.95]{SR_vs_eps_deltaLarger1.pdf}
\caption*{(a) Theoretical curves}
\end{minipage}\hspace{0.3cm}
\begin{minipage}{.485\textwidth}
\centering
\includegraphics[scale=0.95]{SR_vs_eps_deltaLarger1_emp.pdf}
\caption*{(b) Empirical curves}
\end{minipage}
\caption{Standard risk (${\sf SR}({\widehat{\boldsymbol{\theta}}}^\varepsilon)$) versus $\varepsilon$ for several values of $\delta>1$. Left panel corresponds to the theoretical curve obtained by Theorem~\ref{thm:main} (with $\sigma =1$ and $V=1$), and the right panel corresponds to the empirical results (with $\sigma =1$ and $\theta_{0,i}\sim \normal(0,1)$). The empirical results are averaged over 100 different realizations of noise and features. As $\delta$ grows, we observe a slower decay in the standard risk at small $\varepsilon$ due to adversarial training. For $\delta = 10$, the standard risk has a small initial slope with respect to $\varepsilon$ and then starts to increase rapidly. Put differently, with larger $\delta$, the negative effect of adversarial training on the standard risk starts at smaller $\varepsilon$.
\label{fig:SR_deltaLarger1}}
\end{figure}
\subsection{Double-descent in adversarial training}
\label{sec34}
When $\varepsilon=0$, the estimator ${\widehat{\boldsymbol{\theta}}}^\varepsilon$ given by~\eqref{eq:htheps} reduces to the least-squares estimator. It is known that the plot of the standard risk as a function of model complexity ($1/\delta = p/n$) exhibits a so-called `double-descent' behavior~\cite{belkin2018understand,belkin2018reconciling,hastie2019surprises}. Namely, (1) up to the interpolation threshold $\delta =1$ (beyond which the estimator achieves zero training error and the model interpolates the training data) the risk curve follows a U-shape: the risk first decreases as $p$ increases because the model becomes less biased, but then starts to increase because of the inflated variance of the estimator. (2) After the peak at the interpolation threshold, the risk decreases and essentially attains its global minimum at `infinite' model complexity (the extremely overparametrized regime).
The double-descent phenomenon is not limited to neural networks and has been empirically observed in a variety of models, including random features and random forest models. Recently, analytical derivations of this phenomenon have been developed for least-squares regression and the random features model~\cite{tsipras2018robustness,mei2019generalization}. For least-squares regression with Gaussian covariates, it is shown that the global minimum of the risk is achieved in the underparametrized setting $\delta>1$ (unless misspecified structures are assumed). Nonetheless, these works focus on training with unperturbed features.
In Figure~\ref{fig:DD} (a), we plot the standard risk (theoretical predictions from Theorem~\ref{thm:main}) versus $1/\delta = p/n$ for several values of the adversarial power $\varepsilon$. We also depict the empirical versions of these curves in Figure~\ref{fig:DD} (b). These plots demonstrate that the double-descent phenomenon continues to hold even with adversarial training. Interestingly, however, the interpolation threshold changes with $\varepsilon$. For small $\varepsilon$, we observe double-descent behavior with the interpolation threshold $\delta\approx 1$. However, as $\varepsilon$ increases, the location of the peak shifts to higher values of $1/\delta$.
\begin{figure}
\centering
\begin{minipage}{.485\textwidth}
\centering
\includegraphics[scale=1]{DD.pdf}
\caption*{(a) Theoretical curves}
\end{minipage}\hspace{0.3cm}
%
\begin{minipage}{.485\textwidth}
\centering
\includegraphics[scale=1]{DD_emp.pdf}
\caption*{(b) Empirical curves}
\end{minipage}
\caption{Standard risk versus model complexity $1/\delta = p/n$. Left panel corresponds to the theoretical curve obtained by Theorem~\ref{thm:main} (with $\sigma =1$ and $V=1$), and the right panel corresponds to the empirical results (with $\sigma =1$ and $\theta_{0,i}\sim \normal(0,1)$). Here, we recover the double-descent behavior where the interpolation threshold shifts with $\varepsilon$.}\vspace{-0.3cm}
\label{fig:DD}
\end{figure}
\section*{Acknowledgements}
A. Javanmard is partially supported by a Google Faculty Research Award and the NSF CAREER Award DMS-1844481. M. Soltanolkotabi is supported by the Packard Fellowship in Science
and Engineering, a Sloan Research Fellowship in Mathematics, an NSF-CAREER under award
$\#1846369$, the Air Force Office of Scientific Research Young Investigator Program (AFOSR-YIP)
under award $\#$FA$9550-18-1-0078$, Darpa Learning with Less Labels (LwLL) program, an NSF-CIF award $\#1813877$, and a Google faculty research award. This work was done in part while M.S. was visiting the Simons Institute for the Theory of Computing. The research of H. Hassani is supported by NSF HDR TRIPODS award 1934876, NSF award CPS-1837253, NSF award CIF-1910056, and NSF CAREER award CIF-1943064.
\bibliographystyle{amsalpha}
\section{Further insights and guarantees into the effect of the size of the training data}
\label{insight}
To provide further insight into the role of the size of the training data in adversarial training, we note that we have already shown in our proofs (see Section \ref{step1} and equation \eqref{concstep1}) that the inner maximization in the saddle point problem~\eqref{eq:htheps} has a closed-form solution, and the estimator ${\widehat{\boldsymbol{\theta}}}^\varepsilon$ can be equivalently defined by
\begin{align}\label{eq:htheps-2}
{\widehat{\boldsymbol{\theta}}}^{\varepsilon} \in \arg\min_{{\boldsymbol{\theta}}\in \mathbb{R}^p}\, \frac{1}{2p} \sum_{i=1}^n \left(|y_i-\<\vct{x}_i,{\boldsymbol{\theta}}\>|+ \varepsilon \twonorm{{\boldsymbol{\theta}}}\right)^2\,.
\end{align}
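A quick numerical check of this reformulation: the inner maximum over $\twonorm{\bdelta}\le\varepsilon$ is attained at $\bdelta_* = -\varepsilon\,{\rm sign}(r)\,{\boldsymbol{\theta}}/\twonorm{{\boldsymbol{\theta}}}$ with $r = y-\<\vct{x},{\boldsymbol{\theta}}\>$, and no feasible perturbation exceeds the closed-form value (the data below are random and illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
p, eps = 20, 0.7
theta = rng.standard_normal(p)
x, y = rng.standard_normal(p), 1.3

r = y - x @ theta
closed_form = (abs(r) + eps * np.linalg.norm(theta))**2

# The maximizing perturbation aligns with -sign(r) * theta.
delta_star = -eps * np.sign(r) * theta / np.linalg.norm(theta)
attained = (y - (x + delta_star) @ theta)**2

# Random feasible perturbations on the eps-sphere never do better.
deltas = rng.standard_normal((5000, p))
deltas *= eps / np.linalg.norm(deltas, axis=1, keepdims=True)
sampled = np.max((y - (x + deltas) @ theta)**2)
```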
Therefore, for linear regression, adversarial training by the saddle point optimization~\eqref{eq:hth-cS} amounts to a \emph{regularized estimator}. When $\delta<1$, we are in the overparametrized regime and regularization helps with standard accuracy. In particular, when $\delta\to 1$, the condition number of the covariate matrix diverges (a.k.a.\ the interpolation threshold~\cite{belkin2018understand,belkin2018reconciling,hastie2019surprises}) and the role of regularization becomes crucial, without which the standard risk would diverge. This is reflected in Figure~\ref{fig:SR_deltaLess1} in that the standard risk diverges at $\varepsilon= 0$ as $\delta \to 1$, and also the standard risk drops quickly with $\varepsilon$; see also Proposition~\ref{pro:slope} below.
Nonetheless, in the $\delta>1$ regime the effect of regularization starts to weaken. To see why, note that as $\delta$ grows, the ratio of the sample size $n$ to the dimension $p$ increases, and the reduction in the variance of the estimator due to regularization becomes comparable to the increase in the bias caused by this term. As a result, the overall positive effect of regularization on the standard risk lessens, and we see in Figure~\ref{fig:SR_deltaLarger1} that the magnitude of the negative slope at $\varepsilon=0$ decreases as $\delta$ increases. In addition, at large $\delta$, the standard risk quickly becomes increasing in $\varepsilon$. In other words, for larger $\delta$, the negative effect of adversarial training on standard risk starts to emerge at smaller values of $\varepsilon$. (For example, at $\delta=10$ this effect kicks in around $\varepsilon = 0.15$.)
Our next proposition describes the standard risk at small values of $\varepsilon$.
\begin{propo}\label{pro:slope}
Under the assumptions of Theorem~\ref{thm:main} and for $\delta\ge 1$ and $\varepsilon\le 1$, we have
\begin{align}
\lim_{n\to \infty} {\sf SR}({\widehat{\boldsymbol{\theta}}}^\varepsilon) = \frac{\delta \sigma^2}{\delta-1}
- 2\sqrt{\frac{2}{\pi}} \frac{\sigma^3\delta^{3/2}}{(\delta-1)^2} \cdot \frac{1}{\sqrt{\sigma^2+V^2(\delta -1)}}\, \varepsilon + O(\varepsilon^2)\,.
\end{align}
\end{propo}
As a result of Proposition~\ref{pro:slope}, for $\varepsilon$ small and $\delta\ge1$: (i) the standard risk falls with $\varepsilon$ in the vicinity of $\varepsilon=0$; (ii) the risk falls more slowly at larger $\delta$; (iii) as $\delta \to 1$, the slope diverges and the risk plummets rapidly. These observations corroborate the justification and insights provided above.
We finish this appendix with the proof of Proposition~\ref{pro:slope}.
\begin{proof}[Proof of Proposition~\ref{pro:slope}]
Define $\vct{x}= (\alpha,\beta, \tau_h, \tau_g,\gamma)$.
We can write the objective of the convex-concave minimax problem~\eqref{eq019} as
\[
D(\alpha,\beta, \tau_h, \tau_g,\gamma) = \bar{D}(\alpha,\beta, \tau_h, \tau_g,\gamma) +\mathbb{1}_{\left\{\frac{\gamma(\tau_g+\beta)}{\delta\varepsilon\beta\sqrt{\alpha^2+\sigma^2}}> \sqrt{\frac{2}{\pi}}\right\}} \widetilde{D}(\alpha,\beta, \tau_h, \tau_g,\gamma)\,,
\]
where $\bar{D}$ does not depend on $\varepsilon$.
It is easy to see that when $\varepsilon=0$, then $\gamma=0$: otherwise $\tau_*=\infty$ and $\widetilde{D} = -\infty$, which implies that the maximum of $D$ over $\gamma$ is achieved at $\gamma=0$.
Therefore at $\varepsilon=0$, we get
\[
D = \bar{D} = \frac{\delta\beta}{2(\tau_g+\beta)}(\alpha^2+\sigma^2) - \frac{\alpha}{2\tau_h} \beta^2 -\frac{\alpha \tau_h}{2} + \frac{\beta\tau_g}{2}\,.
\]
The stationary point is given by $(\tau_g+\beta)^2 = \delta (\alpha^2+\sigma^2)$, $\tau_h = \beta$ and $\delta\alpha = \tau_g+\beta$, $\tau_g = \alpha$ (derivative with respect to $\beta$). Putting things together we have
\begin{align}\label{eq:zero_eps}
\alpha^2 = \frac{\sigma^2}{\delta -1}, \quad \tau_g = \alpha = \frac{\sigma}{\sqrt{\delta -1}},\quad
\tau_h = \beta = \sigma \sqrt{\delta -1}\,, \quad \gamma = 0\,.
\end{align}
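These closed-form values can be verified directly: plugging them into $\bar{D}$ at $\gamma=0$ and differentiating numerically, all four partial derivatives vanish (the values of $\delta$ and $\sigma$ below are illustrative):

```python
import numpy as np

def D0(v, delta, sigma):
    """bar{D} at gamma = 0, as a function of (alpha, beta, tau_g, tau_h)."""
    alpha, beta, tau_g, tau_h = v
    return (delta * beta / (2 * (tau_g + beta)) * (alpha**2 + sigma**2)
            - alpha * beta**2 / (2 * tau_h) - alpha * tau_h / 2 + beta * tau_g / 2)

delta, sigma = 4.0, 1.0
alpha = sigma / np.sqrt(delta - 1)
beta = sigma * np.sqrt(delta - 1)
v = np.array([alpha, beta, alpha, beta])    # (alpha, beta, tau_g, tau_h)

# Central finite differences of D0 at the claimed stationary point.
h = 1e-6
grad = np.array([(D0(v + h * e, delta, sigma) - D0(v - h * e, delta, sigma)) / (2 * h)
                 for e in np.eye(4)])
print(grad)
```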
We next study the behavior of the convex-concave minimax problem~\eqref{eq019} at infinitesimal $\varepsilon$. Rewriting the expressions for $\bar{D}$ and $\widetilde{D}$, we have
\begin{align}
\bar{D} &= \frac{\delta\beta}{2(\tau_g+\beta)} \left(\alpha^2+\sigma^2\right) -\frac{\alpha}{2\tau_h}(\gamma^2 +\beta^2)+ \gamma\sqrt{\frac{\alpha^2\beta^2}{\tau_h^2}+ V^2} -\frac{\alpha\tau_h}{2}+\frac{\beta\tau_g}{2}\,,\nonumber\\
\widetilde{D}&= \frac{\delta \beta^2(\alpha^2+\sigma^2)}{2\tau_g(\tau_g+\beta)}\left(\text{erf}\left(\frac{\tau_*}{\sqrt{2}}\right)-\frac{\gamma(\tau_g+\beta)}{\delta\varepsilon\beta\sqrt{\alpha^2+\sigma^2}}\tau_*\right)\,.
\end{align}
Let $\gamma_0 := \sqrt{\frac{2}{\pi}}\frac{\delta\varepsilon\beta\sqrt{\alpha^2+\sigma^2}}{\tau_g+\beta}$.
If $\gamma\ge \gamma_0$, then $D$ is a quadratic function of $\gamma$ with the peak location at
$$\gamma_1:= \sqrt{\beta^2+\frac{\tau_h^2 }{\alpha^2}V^2} - \frac{\tau_h\beta\sqrt{\alpha^2+\sigma^2}}{2\alpha \varepsilon\tau_g} \tau_*\,.$$
If $\gamma<\gamma_0$, then $D = \bar{D}$ is quadratic in $\gamma$ with the peak location at
$$\gamma_2:= \sqrt{\beta^2+\frac{\tau_h^2}{\alpha^2} V^2}\,.$$
Therefore, to find the optimal $\gamma$ we need to consider three different cases, giving us
\begin{align}\label{eq:gammas}
\gamma_* = \begin{cases}
\gamma_1 & \text{ if }\gamma_0\le \gamma_1\le \gamma_2\,,\\
\gamma_0 &\text{ if }\gamma_1\le \gamma_0 \le \gamma_2\,,\\
\gamma_2 &\text{ if }\gamma_1\le \gamma_2\le \gamma_0\,.
\end{cases}
\end{align}
As $\varepsilon \to 0$, we have $\gamma_0\to 0$. However, using \eqref{eq:zero_eps} we get $\gamma_2\to
\sqrt{\sigma^2(\delta-1) + (\delta-1)^2 V^2} > 0$. By continuity, at infinitesimal $\varepsilon$ we get $\gamma_0 < \gamma_2$.
Hence, in \eqref{eq:gammas} only the first two cases may happen. Suppose that the first case occurs. Then, $0\le \gamma_0\le \gamma_1$ and by definition of $\gamma_1$ we obtain that $\tau_* = O(\varepsilon)$. Invoking the characterization equation of $\tau_*$ as per~\eqref{eq:tau*}, we get
\begin{align}\label{eq:mycase1}
\frac{\gamma_*(\tau_g +\beta)}{\delta \varepsilon \beta\sqrt{\alpha^2+\sigma^2}} = \sqrt{\frac{2}{\pi}} + O(\varepsilon)\,,\quad \tau_* = O(\varepsilon)\,.
\end{align}
If the second case in~\eqref{eq:gammas} happens, we have $\gamma_* = \gamma_0 =\sqrt{\frac{2}{\pi}}\frac{\delta\varepsilon\beta\sqrt{\alpha^2+\sigma^2}}{\tau_g+\beta}$ and $\tau_* = 0$. So this case is subsumed in~\eqref{eq:mycase1} and henceforth we can proceed with~\eqref{eq:mycase1}.
By Taylor expansion of the ${{\rm erf}}$ function we have
\[
\text{erf}\left(\frac{\tau_*}{\sqrt{2}}\right) = \sqrt{\frac{2}{\pi}} \tau_* + O(\tau_*^3)\,,
\]
which implies that $\widetilde{D} = O(\tau_*^3) = O(\varepsilon^3)$. Separating $O(\varepsilon^2)$ terms from the lower order terms we get
\begin{align}
&D(\alpha, \beta, \tau_g, \tau_h) = D_0(\alpha, \beta, \tau_g, \tau_h) + \varepsilon D_1(\alpha, \beta, \tau_g, \tau_h) + O(\varepsilon^2)\,,\\
&D_0(\alpha, \beta, \tau_g, \tau_h) = \frac{\delta\beta}{2(\tau_g+\beta)} \left(\alpha^2+\sigma^2\right) -\frac{\alpha}{2\tau_h} \beta^2-\frac{\alpha\tau_h}{2}+\frac{\beta\tau_g}{2}\,, \nonumber\\
&D_1(\alpha, \beta, \tau_g, \tau_h) = \sqrt{\frac{2}{\pi}} \frac{\delta\beta\sqrt{\alpha^2+\sigma^2}}{\tau_g+\beta}\sqrt{\frac{\alpha^2\beta^2}{\tau_h^2}+ V^2} \,.
\end{align}
Letting $\vct{x}= (\alpha, \beta, \tau_g, \tau_h)$, we then have
\begin{align}\label{eq:D-eps}
\nabla D(\vct{x}) = \nabla D_0(\vct{x}) + \varepsilon\nabla D_1(\vct{x}) + O(\varepsilon^2)\,.
\end{align}
To get the stationary points, we need to solve for $\nabla D(\vct{x}) = 0$. However, to find the solution up to $O(\varepsilon)$ term we can instead solve for $\nabla D_0(\vct{x})+ \varepsilon\nabla D_1(\vct{x}) = 0$. To see why, suppose that $\nabla D(\vct{x}_*) = 0$ and write
$\vct{x}_*= \vct{x}_0 +\varepsilon \vct{x}_1+ O(\varepsilon^2)$.
Hence,
\begin{align}
{\bf 0} = \nabla D(\vct{x}_*) &= \nabla D_0(\vct{x}_*) + \varepsilon \nabla D_1(\vct{x}_*) + O(\varepsilon^2) \nonumber\\
&= \nabla D_0(\vct{x}_0) + \varepsilon (\nabla^2 D_0(\vct{x}_0) \vct{x}_1 + \nabla D_1(\vct{x}_0)) + O(\varepsilon^2)\,.
\end{align}
This implies that $\vct{x}_0$ and $\vct{x}_1$ should satisfy
\begin{align}\label{eq:x0-x1}
\nabla D_0(\vct{x}_0)= 0 \quad \text{and} \quad \nabla^2 D_0(\vct{x}_0) \vct{x}_1 + \nabla D_1(\vct{x}_0) = 0\,.
\end{align}
Likewise, let $\tilde{\vct{x}}_*$ be the solution of $\nabla D_0(\vct{x})+ \varepsilon \nabla D_1(\vct{x}) = 0$ and write $\tilde{\vct{x}}_* = \tilde{\vct{x}}_0+\varepsilon \tilde{\vct{x}}_1+O(\varepsilon^2)$. Then following similar arguments, we get
\begin{align}\label{eq:tx0-tx1}
\nabla D_0(\tilde{\vct{x}}_0)= 0 \quad \text{and} \quad \nabla^2 D_0(\tilde{\vct{x}}_0) \tilde{\vct{x}}_1 + \nabla D_1(\tilde{\vct{x}}_0) = 0\,.
\end{align}
Comparing equations \eqref{eq:x0-x1} and \eqref{eq:tx0-tx1}, we see that $\vct{x}_0=\tilde{\vct{x}}_0$ and $\vct{x}_1 = \tilde{\vct{x}}_1$. Therefore, to find the stationary point $\vct{x}_*$ up to $O(\varepsilon)$ terms, we can neglect the $O(\varepsilon^2)$ term in~\eqref{eq:D-eps}.
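The argument above can be illustrated on a hypothetical scalar objective $D(x)=D_0(x)+\varepsilon D_1(x)+\varepsilon^2 D_2(x)$: the stationary point of the truncated objective agrees with the true one up to $O(\varepsilon^2)$. (The functions below are illustrative choices, not the objective of the proof.)

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical scalar example: D(x) = D0 + eps*D1 + eps^2*D2.
def dD0(x):
    return 2.0 * (x - 1.0)      # D0(x) = (x - 1)^2
def dD1(x):
    return 3.0 * x ** 2         # D1(x) = x^3
def dD2(x):
    return np.cos(x)            # an O(eps^2) contamination term

eps = 1e-3
x_full = brentq(lambda x: dD0(x) + eps * dD1(x) + eps ** 2 * dD2(x), 0.0, 2.0)
x_trunc = brentq(lambda x: dD0(x) + eps * dD1(x), 0.0, 2.0)

# x_* = 1 - (3/2)*eps + O(eps^2); the two roots differ only at second order.
assert abs(x_full - x_trunc) < 10 * eps ** 2
assert abs(x_trunc - (1.0 - 1.5 * eps)) < 10 * eps ** 2
```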
We proceed by computing the stationary points of $D_0(\vct{x}) + \varepsilon D_1(\vct{x})$.
Writing the KKT conditions with respect to $\alpha$, $\beta$, $\tau_g$, $\tau_h$, we have
\begin{align*}
&\frac{\sqrt{\frac{2}{\pi}} \alpha \beta \delta \varepsilon \sqrt{\frac{\alpha ^2 \beta ^2}{\tau_h^2}+V^2}}{\sqrt{\alpha ^2+\sigma ^2}(\beta+\tau_g)}+\frac{\sqrt{\frac{2}{\pi}} \alpha \beta ^3 \delta \varepsilon \sqrt{\alpha ^2+\sigma ^2}}{\tau_h^2(\beta+\tau_g) \sqrt{\frac{\alpha ^2 \beta ^2}{\tau_h^2}+V^2}}+\frac{\alpha \beta \delta}{\beta+\tau_g}-\frac{\beta ^2}{2 \tau_h}-\frac{\tau_h}{2}=0\,,\nonumber\\
&\frac{\sqrt{2} \alpha ^2 \beta ^2 \delta \varepsilon \sqrt{\alpha ^2+\sigma ^2}}{\sqrt{\pi} \tau_h^2(\beta+\tau_g) \sqrt{\frac{\alpha ^2 \beta ^2}{\tau_h^2}+V^2}}-\frac{\sqrt{\frac{2}{\pi}} \beta \delta \varepsilon \sqrt{\alpha ^2+\sigma ^2} \sqrt{\frac{\alpha ^2 \beta ^2}{\tau_h^2}+V^2}}{(\beta+\tau_g)^2}\nonumber\\
&\quad\quad\quad\quad \quad\quad +\frac{\sqrt{\frac{2}{\pi}} \delta \varepsilon \sqrt{\alpha ^2+\sigma ^2} \sqrt{\frac{\alpha ^2 \beta ^2}{\tau_h^2}+V^2}}{\beta+\tau_g}-\frac{\beta \delta \left(\alpha ^2+\sigma ^2\right)}{2(\beta+\tau_g)^2}+\frac{\delta \left(\alpha ^2+\sigma ^2\right)}{2(\beta+\tau_g)}-\frac{\alpha \beta}{\tau_h}+\frac{\tau_g}{2}=0\,,\\
&-\frac{\sqrt{\frac{2}{\pi}} \beta \delta \varepsilon \sqrt{\alpha ^2+\sigma ^2} \sqrt{\frac{\alpha ^2 \beta ^2}{\tau_h^2}+V^2}}{(\beta+\tau_g)^2}-\frac{\beta \delta \left(\alpha ^2+\sigma ^2\right)}{2(\beta+\tau_g)^2}+\frac{\beta}{2}=0\,,\\
&-\frac{\sqrt{\frac{2}{\pi}} \alpha ^2 \beta ^3 \delta \varepsilon \sqrt{\alpha ^2+\sigma ^2}}{\tau_h^3(\beta+\tau_g) \sqrt{\frac{\alpha ^2 \beta ^2}{\tau_h^2}+V^2}}+\frac{\alpha \beta ^2}{2 \tau_h^2}-\frac{\alpha}{2}=0\,.
\end{align*}
The second equation can be simplified using the other equations as
\begin{align*}
&\frac{\alpha\beta}{2\tau_h} -\frac{\alpha \tau_h}{2\beta} - \frac{\beta}{2} -\frac{\delta(\alpha^2+\sigma^2)}{2(\beta+\tau_g)} +\frac{\beta+\tau_g}{2}+\frac{\delta \left(\alpha ^2+\sigma ^2\right)}{2(\beta+\tau_g)}-\frac{\alpha \beta}{\tau_h}+\frac{\tau_g}{2}=0\,.\\
&\to -\frac{\alpha\beta}{2\tau_h}+\tau_g -\frac{\alpha \tau_h}{2\beta} =0\,.
\end{align*}
The first equation also simplifies to
\begin{align*}
&-\frac{\alpha \beta\delta}{2(\beta+\tau_g)} +\frac{\alpha(\beta+\tau_g)\beta}{2(\alpha^2+\sigma^2)}
+\frac{\beta^2}{2\tau_h} - \frac{\tau_h}{2} +\frac{\alpha \beta \delta}{\beta+\tau_g}-\frac{\beta ^2}{2 \tau_h}-\frac{\tau_h}{2}= 0\,,\\
&\to \frac{\alpha \beta\delta}{2(\beta+\tau_g)} +\frac{\alpha(\beta+\tau_g)\beta}{2(\alpha^2+\sigma^2)} -\tau_h = 0\,.
\end{align*}
Define $\eta = \beta/\tau_h> 1$ (since $\varepsilon>0$). The second equation gives $\tau_g = \frac{\alpha}{2}(\eta+1/\eta)$. While this is useful for finding the optimal $\tau_g$, it does not matter for our goal of finding $\alpha$, since $\tau_g$ appears everywhere in the form $\beta+\tau_g$. The first equation, however, gives
\begin{align}\label{eq:dum0}
\frac{\delta}{\beta+\tau_g} + \frac{\beta+\tau_g}{(\alpha^2+\sigma^2)} = \frac{2}{\alpha \eta}\,.
\end{align}
The third equation gives
\begin{align}\label{eq:dum1}
2\sqrt{\tfrac{2}{\pi}} \delta\varepsilon\sqrt{\alpha^2+\sigma^2} \sqrt{\alpha^2\eta^2+V^2} + \delta (\alpha^2+\sigma^2) = (\beta+\tau_g)^2\,.
\end{align}
The fourth equation gives
\begin{align}
\frac{2\sqrt{\frac{2}{\pi}} \alpha \eta ^3 \delta \varepsilon \sqrt{\alpha ^2+\sigma ^2}}{(\eta^2-1) \sqrt{{\alpha ^2 \eta^2}+V^2}} = \beta+\tau_g\,.
\end{align}
Continuing from~\eqref{eq:dum0} we get
\begin{align}
\sqrt{\frac{2}{\pi}} \varepsilon \sqrt{\alpha^2\eta^2+V^2} + \sqrt{\alpha^2+\sigma^2} = (\alpha^2+\sigma^2) \frac{2\sqrt{\frac{2}{\pi}} \eta ^2 \varepsilon}{(\eta^2-1) \sqrt{{\alpha ^2 \eta^2}+V^2}} \,.
\end{align}
Simplifying this equation,
\begin{align}
\sqrt{\frac{2}{\pi}} \varepsilon \sqrt{\alpha^2\eta^2+V^2}\, (\eta^2-1) + \sqrt{\alpha^2+\sigma^2}(\eta^2-1) = (\alpha^2+\sigma^2) \frac{2\sqrt{\frac{2}{\pi}} \eta ^2 \varepsilon}{ \sqrt{{\alpha ^2 \eta^2}+V^2}} \,.
\end{align}
We now proceed by taking derivatives of both equations implicitly with respect to $\varepsilon$ and evaluating them at
\begin{align*}
\tau_g^* = \alpha^* = \frac{\sigma}{\sqrt{\delta -1}},\quad
\tau_h^* = \beta^* = \sigma \sqrt{\delta -1}\,, \quad \gamma^* = 0, \quad\text{and}\quad \varepsilon=0\,.
\end{align*}
Note that the derivative of the first equation yields
\begin{align*}
\frac{{\rm d}}{{\rm d}\varepsilon}\left( \sqrt{\alpha^2+\sigma^2}(\eta^2-1) +\varepsilon\left(\sqrt{\tfrac{2}{\pi}} \sqrt{\alpha^2\eta^2+V^2}(\eta^2-1) - (\alpha^2+\sigma^2) \tfrac{2\sqrt{\tfrac{2}{\pi}} \eta ^2 }{ \sqrt{{\alpha ^2 \eta^2}+V^2}}\right) \right)=0\,.
\end{align*}
Thus
\begin{align*}
&\frac{{\rm d}}{{\rm d}\varepsilon}\left( \sqrt{\alpha^2+\sigma^2}(\eta^2-1) \right) +\left(\sqrt{\tfrac{2}{\pi}} \sqrt{\alpha^2\eta^2+V^2}(\eta^2-1) - (\alpha^2+\sigma^2) \tfrac{2\sqrt{\tfrac{2}{\pi}} \eta ^2 }{ \sqrt{{\alpha ^2 \eta^2}+V^2}}\right)+\\
&\varepsilon\frac{{\rm d}}{{\rm d}\varepsilon}\left(\sqrt{\tfrac{2}{\pi}} \sqrt{\alpha^2\eta^2+V^2}(\eta^2-1) - (\alpha^2+\sigma^2) \tfrac{2\sqrt{\tfrac{2}{\pi}} \eta ^2 }{\sqrt{{\alpha ^2 \eta^2}+V^2}}\right)=0\,.
\end{align*}
Setting $\varepsilon=0$ in the above yields
\begin{align*}
\frac{{\rm d}}{{\rm d}\varepsilon} \left(\sqrt{\alpha^2+\sigma^2}(\eta^2-1)\right) +\left(\sqrt{\tfrac{2}{\pi}} \sqrt{\alpha^2\eta^2+V^2} (\eta^2-1)- (\alpha^2+\sigma^2) \tfrac{2\sqrt{\tfrac{2}{\pi}} \eta ^2 }{ \sqrt{{\alpha ^2 \eta^2}+V^2}}\right)=0\,.
\end{align*}
Thus
\begin{align*}
\frac{\alpha^*}{\sqrt{(\alpha^*)^2+\sigma^2}}\frac{{\rm d} \alpha}{{\rm d}\varepsilon}(\eta_*^2-1)+2\eta_*\sqrt{\alpha_*^2+\sigma^2}\frac{{\rm d} \eta}{{\rm d} \varepsilon}=(\alpha_*^2+\sigma^2) \tfrac{2\sqrt{\tfrac{2}{\pi}} \eta_* ^2 }{ \sqrt{{\alpha_* ^2 \eta_*^2}+V^2}} -\sqrt{\tfrac{2}{\pi}} \sqrt{\alpha_*^2 \eta_*^2+V^2}(\eta_*^2-1)\,.
\end{align*}
Setting $\eta_*=1$ this simplifies to
\begin{align*}
\frac{{\rm d} \eta}{{\rm d} \varepsilon}=\sqrt{(\alpha^*)^2+\sigma^2} \tfrac{\sqrt{\tfrac{2}{\pi}} }{ \sqrt{{(\alpha^*) ^2 }+V^2}} = \sigma \sqrt{\frac{2\delta}{\pi}} \frac{1}{\sqrt{\sigma^2+V^2(\delta -1)}}\,.
\end{align*}
In addition, from~\eqref{eq:dum0}
\[
\left(-\frac{\delta}{(\beta_*+\tau_{g*})^2}+ \frac{1}{\alpha_*^2+\sigma^2}\right) \frac{{\rm d}}{{\rm d} \varepsilon}(\beta+\tau_g)
-\frac{\beta_*+\tau_{g*}}{(\alpha_*^2+\sigma^2)^2} 2\alpha_* \frac{{\rm d} \alpha}{{\rm d} \varepsilon}
=- \frac{2}{\alpha_* \eta_*^2} \frac{{\rm d} \eta}{{\rm d} \varepsilon}-\frac{2}{\alpha_*^2 \eta_*} \frac{{\rm d} \alpha}{{\rm d} \varepsilon}\,.
\]
Plugging in for $\beta_*, \tau_{g*}, \alpha_*$ the coefficient of $\frac{{\rm d}}{{\rm d} \varepsilon} (\beta+\tau_g)$ vanishes and we arrive at
\[
\frac{\frac{\sigma \delta}{\sqrt{\delta-1}}}{\left(\frac{\sigma^2\delta}{\delta-1}\right)^2} \frac{\sigma}{\sqrt{\delta-1}} \frac{{\rm d} \alpha}{{\rm d} \varepsilon} = \frac{\sqrt{\delta-1}}{\sigma}\frac{{\rm d} \eta}{{\rm d} \varepsilon}
+\frac{\delta-1}{\sigma^2} \frac{{\rm d}\alpha}{{\rm d}\varepsilon}\,.
\]
Rearranging the terms, we obtain
\[
\frac{{\rm d} \alpha}{{\rm d} \varepsilon} = -\frac{\sigma\delta}{(\delta-1)^{3/2}} \frac{{\rm d} \eta}{{\rm d} \varepsilon}=
- \sqrt{\frac{2}{\pi}}\sigma^2 \left(\frac{\delta}{\delta-1}\right)^{3/2} \frac{1}{\sqrt{\sigma^2+V^2(\delta -1)}}\,.
\]
Now, invoking the definition of statistical risk we have
\begin{align}
{{\sf SR}}({\widehat{\boldsymbol{\theta}}}^\varepsilon) &= {{\sf SR}}({\widehat{\boldsymbol{\theta}}}^0) + \frac{{\rm d}}{{\rm d} \varepsilon} {{\sf SR}}({\widehat{\boldsymbol{\theta}}}^\varepsilon)\Big|_{\varepsilon=0} \varepsilon + O(\varepsilon^2)\nonumber\\
&=\sigma^2+\alpha_*^2 + 2\alpha_* \frac{{\rm d} \alpha}{{\rm d} \varepsilon} \Big|_{\varepsilon=0} \varepsilon + O(\varepsilon^2)\nonumber\\
&=\frac{\sigma^2\delta}{\delta -1} - 2\sqrt{\frac{2}{\pi}}\frac{\sigma^3\delta^{3/2}}{(\delta-1)^2} \cdot\frac{\varepsilon}{\sqrt{\sigma^2+V^2(\delta -1)}} +O(\varepsilon^2)\,.
\end{align}
The proof is complete.
\end{proof}
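As a numerical sanity check, the closed-form derivatives derived in the proof can be evaluated for hypothetical parameter values (the choices of $\sigma$, $\delta$, $V$ below are illustrative only):

```python
import numpy as np

# Hypothetical parameter values (any sigma > 0, delta > 1, V >= 0 would do).
sigma, delta, V = 1.0, 2.5, 0.7

# Closed forms for d(eta)/d(eps) and d(alpha)/d(eps) at eps = 0, as derived above.
deta = sigma * np.sqrt(2 * delta / np.pi) / np.sqrt(sigma**2 + V**2 * (delta - 1))
dalpha = (-np.sqrt(2 / np.pi) * sigma**2 * (delta / (delta - 1))**1.5
          / np.sqrt(sigma**2 + V**2 * (delta - 1)))

# Consistency: d(alpha)/d(eps) = -sigma*delta/(delta-1)^{3/2} * d(eta)/d(eps).
assert abs(dalpha + sigma * delta / (delta - 1)**1.5 * deta) < 1e-10

# The first-order change of the statistical risk, 2*alpha_* * d(alpha)/d(eps),
# is negative: a small adversarial-training budget decreases the standard risk here.
alpha_star = sigma / np.sqrt(delta - 1)
assert 2 * alpha_star * dalpha < 0
```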
\section{Proofs that the minimization and maximization primal problems can be restricted to a compact set}
\label{setres}
In this section we demonstrate how the minimization and maximization problems can be restricted to compact sets. We start with the restriction on $\vct{z}$. To this aim, recall that one of the main goals of Theorem \ref{thm:main} is to characterize the distance of the optimal solution ${\widehat{\boldsymbol{\theta}}}^{\varepsilon}$ to $\vct{\theta}_0$, i.e.~$\frac{\twonorm{{\widehat{\boldsymbol{\theta}}}^{\varepsilon}-\vct{\theta}_0}}{\sqrt{p}}=\twonorm{ \widehat{\vct{z}}^{\varepsilon}}$, asymptotically, and in particular to show that $\twonorm{\vct{z}}\to \alpha_*$ as $n\to \infty$, in probability, for some $\alpha_*$ to be determined. Now define the set $\mathcal{S}_{{\vct{z}}} = \{{\vct{z}}|\;\; \twonorm{\vct{z}}\le K_\alpha\}$ with $K_\alpha = \alpha_*+\zeta$ for a constant $\zeta>0$ and consider the optimization problem
\begin{align}
\label{lin11}
\min_{{\vct{z}}\in\mathcal{S}_{\vct{z}},\vct{v}\in\mathbb{R}^n} \max_{\vct{u}\in\mathbb{R}^n}\; \frac{1}{\sqrt{p}}\left(\vct{u}^T\boldsymbol{X} \vct{z} - \vct{u}^T\vct{\omega} +\vct{u}^T \vct{v}\right)+\ell(\vct{v};\vct{z})
\end{align}
with $\vct{\omega}=\vct{w}/\sqrt{p}$. By the CGMT framework, this optimization problem is asymptotically equivalent to \eqref{lin}, in the sense that if the Euclidean norm of the optimal solution of the problem above converges to a value $\alpha_*$ in probability as $n\rightarrow +\infty$, then $\twonorm{\widehat{\vct{z}}}$ converges to the same value ($\|\widehat{\vct{z}}\|\to\alpha_*$) in probability. See \cite[Theorem A.1]{thrampoulidis2018precise} for a formal argument.
The optimization problem above is still not in a form where the CGMT can be applied, as there is no compactness restriction on $\vct{u}$. This is the subject of the next lemma.
\begin{lemma}
The optimal solution $\vct{u}^*$ of \eqref{lin11} satisfies $\twonorm{\vct{u}^*}\le K_\beta$ for a sufficiently large constant $K_\beta>0$ with probability at least $1-2e^{-cn}$.
\end{lemma}
\begin{proof}
Writing the KKT conditions for \eqref{lin11} we have
\begin{align*}
&\boldsymbol{X} \vct{z} - \frac{1}{\sqrt{p}} \vct{w}+\vct{v} = 0\\
&u_i = -\sqrt{p} [\nabla_{\vct{v}}\ell(\vct{v};\vct{z})]_i = -\frac{1}{\sqrt{p}} \left(v_i+\frac{\varepsilon}{\sqrt{p}}\cdot \sgn{v_i} \twonorm{\vct{\theta}_0+\sqrt{p}\vct{z}}\right)\,.
\end{align*}
From the first equation we have that $\vct{v}=\frac{\vct{w}}{\sqrt{p}}-\boldsymbol{X}\vct{z}$. Thus,
\begin{align*}
\twonorm{\vct{v}}\le& \frac{1}{\sqrt{p}}\twonorm{\vct{w}}+\twonorm{\boldsymbol{X}\vct{z}}\\
\le& \frac{1}{\sqrt{p}}\twonorm{\vct{w}}+\opnorm{\boldsymbol{X}}\twonorm{\vct{z}}\\
\overset{(a)}{\le}&C\sqrt{n}\sigma+C\left(\sqrt{p}+\sqrt{n}\right)\twonorm{\vct{z}}\\
\overset{(b)}{\le}&C\sqrt{n}\sigma+C\left(\sqrt{p}+\sqrt{n}\right)K_\alpha
\end{align*}
holds with probability at least $1-2e^{-cn}$. Here, (a) follows from well known bounds on the Euclidean norm of a Gaussian vector and the spectral norm of a Gaussian matrix and (b) follows from the fact that $\twonorm{\vct{z}}\le K_\alpha$. We thus have $\twonorm{\vct{v}}\le C_2\left(\sqrt{p}+\sqrt{n}\right)$, with high probability. Now using the second equation we have
\begin{align*}
\twonorm{\vct{u}}\le& \frac{\twonorm{\vct{v}}}{\sqrt{p}} + \frac{\varepsilon\sqrt{\delta}}{\sqrt{p}} \twonorm{\vct{\theta}_0+\sqrt{p}\vct{z}}\\
\le& C\sigma\sqrt{\delta}+C(1+\sqrt{\delta})K_\alpha+ \frac{\varepsilon\sqrt{\delta}}{\sqrt{p}} \twonorm{\vct{\theta}_0}+\varepsilon\sqrt{\delta}\twonorm{\vct{z}}\\
\le& C\sigma\sqrt{\delta}+C(1+\sqrt{\delta})K_\alpha+ \varepsilon\sqrt{\delta}\widetilde{C}+\varepsilon\sqrt{\delta}K_\alpha\\
\le& K_\beta\,,
\end{align*}
for some bounded constant $K_\beta$. In the penultimate step we used the fact that $\frac{\twonorm{\vct{\theta}_0}}{\sqrt{p}}$ is bounded and $\twonorm{\vct{z}}\le K_\alpha$.
\end{proof}
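The high-probability bounds invoked in step (a) can be illustrated numerically, assuming $\boldsymbol{X}$ has i.i.d.\ standard Gaussian entries and $\vct{w}\sim\mathcal{N}(\vct{0},\sigma^2\mtx{I}_n)$; the sizes and the margin $1.5$ below are illustrative, not the sharp constants.

```python
import numpy as np

# Hypothetical sizes; entries of X are i.i.d. N(0, 1), w ~ N(0, sigma^2 I_n).
rng = np.random.default_rng(4)
n, p, sigma = 400, 200, 1.5
X = rng.standard_normal((n, p))
w = sigma * rng.standard_normal(n)

# ||X||_op concentrates around sqrt(n) + sqrt(p), and ||w||_2 around sigma*sqrt(n);
# the factor 1.5 is a generous illustrative margin, not the sharp constant.
assert np.linalg.norm(X, 2) <= 1.5 * (np.sqrt(n) + np.sqrt(p))
assert np.linalg.norm(w) <= 1.5 * sigma * np.sqrt(n)
```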
\section{Proofs for scalarization of the Auxiliary Optimization (AO)}
\subsection{Proof of Lemma \ref{conjlemma}}
\label{conjlemmapf}
We restate the lemma for the convenience of the reader.
\begin{lemma}[Restatement of Lemma \ref{conjlemma}]\label{conjlemma2} The conjugate of
\begin{align*}
\ell(\vct{v};\vct{z}):=\frac{1}{2p}\left(\twonorm{\vct{v}}^2+2\frac{\varepsilon}{\sqrt{p}}\onenorm{\vct{v}}\twonorm{\vct{\theta}_0+\sqrt{p}\vct{z}}+\frac{\varepsilon^2}{p}\twonorm{\vct{\theta}_0+\sqrt{p}\vct{z}}^2\right)
\end{align*}
with respect to the variable $\vct{z}$ is given by
\begin{align*}
\widetilde{\ell}(\vct{v};\vct{q}):=\sup_{\vct{z}} \vct{q}^T\vct{z}-\ell(\vct{v};\vct{z})=-\frac{1}{\sqrt{p}}\vct{q}^T\vct{\theta}_0+\frac{1}{2\delta p^2}\left(\frac{p}{\varepsilon}\tn{\vct{q}}-\onenorm{\vct{v}}\right)_+^2 -\frac{1}{2p}\twonorm{\bv}^2.
\end{align*}
\end{lemma}
\begin{proof}
We begin by calculating the conjugate of a slightly simpler function
\begin{align*}
\bar{\ell}(\vct{v};\vct{\theta}):=\frac{1}{2p}\sum_{i=1}^n \left(|v_i| + \frac{\varepsilon}{\sqrt{p}}\tn{\vct{\theta}}\right)^2.
\end{align*}
We have
\begin{align*}
\bar{\ell}^*(\vct{v};\vct{q}) &= \sup_{\vct{\theta}} \vct{q}^T\vct{\theta} - \bar{\ell}(\vct{v};\vct{\theta})\\
&= \sup_{\vct{\theta}} \vct{q}^T\vct{\theta} - \frac{1}{2p}\sum_{i=1}^n \left(|v_i| + \frac{\varepsilon}{\sqrt{p}}\tn{\vct{\theta}}\right)^2\\
&=\sup_{\vct{\theta}} \sup_{\xi\ge0} \vct{q}^T\vct{\theta} - \frac{1}{2p} \left(\tn{\bv}^2 + \frac{2\varepsilon}{\sqrt{p}}\onenorm{\bv} \left(\frac{\tn{\vct{\theta}}^2}{2\xi} + \frac{\xi}{2}\right)+
\delta\varepsilon^2\tn{\vct{\theta}}^2\right)\\
&= \sup_{\xi\ge0} \sup_{\vct{\theta}} \vct{q}^T\vct{\theta} - \frac{1}{2p} \left(\tn{\bv}^2 + \frac{2\varepsilon}{\sqrt{p}}\onenorm{\bv} \left(\frac{\tn{\vct{\theta}}^2}{2\xi} + \frac{\xi}{2}\right)+
\delta\varepsilon^2\tn{\vct{\theta}}^2\right)
\end{align*}
Setting the derivative with respect to $\vct{\theta}$ to zero, we get
\[
\vct{q} - \frac{\varepsilon\onenorm{\bv}}{p^{3/2}\xi} \vct{\theta} - \frac{\delta\varepsilon^2}{p}\vct{\theta} = 0 \quad\Rightarrow\quad
\vct{\theta} = \left(\frac{\varepsilon\onenorm{\bv}}{p^{3/2}\xi}+ \frac{\delta\varepsilon^2}{p} \right)^{-1}\vct{q}
\]
Setting the derivative with respect to $\xi$ to zero we conclude that $\xi=\tn{\vct{\theta}}$. Plugging the latter into above we conclude that
\[
\vct{\theta} = \left(\frac{\varepsilon\onenorm{\bv}}{p^{3/2}\tn{\vct{\theta}}}+ \frac{\delta\varepsilon^2}{p} \right)^{-1}\vct{q}
\]
Taking the Euclidean norm from both sides we conclude that
\begin{align*}
\tn{\vct{\theta}} \left(\frac{\varepsilon\onenorm{\bv}}{p^{3/2}\tn{\vct{\theta}}}+ \frac{\delta\varepsilon^2}{p} \right)=\tn{\vct{q}}\quad\Rightarrow\quad \tn{\vct{\theta}}=\frac{\tn{\vct{q}}-\frac{\varepsilon\onenorm{\vct{v}}}{p^{\frac{3}{2}}}}{\frac{\delta\varepsilon^2}{p}}=\frac{p}{\delta\varepsilon^2}\tn{\vct{q}}-\frac{1}{\delta\varepsilon\sqrt{p}}\onenorm{\vct{v}}
\end{align*}
If $\frac{p}{\delta\varepsilon^2}\tn{\vct{q}}-\frac{1}{\delta\varepsilon\sqrt{p}}\onenorm{\vct{v}}<0$, then it is easy to verify that the objective is less than $-\frac{1}{2p}\twonorm{\bv}^2$ for every $\vct{\theta}\neq 0$, and hence the optimum is attained at $\vct{\theta} = 0$.
Thus
\begin{align*}
\vct{\theta}=\left(\frac{p}{\delta\varepsilon^2}\tn{\vct{q}}-\frac{1}{\delta\varepsilon\sqrt{p}}\onenorm{\vct{v}}\right)\frac{\vct{q}}{\tn{\vct{q}}}=\left(\frac{p}{\delta\varepsilon^2}-\frac{1}{\delta\varepsilon\sqrt{p}}\frac{\onenorm{\vct{v}}}{\tn{\vct{q}}}\right)\vct{q}
\end{align*}
Substituting for $\vct{\theta}$ we have
\begin{align*}
\bar{\ell}^*(\bv;\vct{q}) =& \left(\frac{p}{\delta\varepsilon^2}-\frac{1}{\delta\varepsilon\sqrt{p}}\frac{\onenorm{\vct{v}}}{\tn{\vct{q}}}\right) \tn{\vct{q}}^2\\
&- \frac{1}{2p} \left(\tn{\bv}^2 + \frac{2\varepsilon}{\sqrt{p}}\onenorm{\bv} \left(\frac{p}{\delta\varepsilon^2}\tn{\vct{q}}-\frac{1}{\delta\varepsilon\sqrt{p}}\onenorm{\vct{v}}\right)+
\delta\varepsilon^2\left(\frac{p}{\delta\varepsilon^2}\tn{\vct{q}}-\frac{1}{\delta\varepsilon\sqrt{p}}\onenorm{\vct{v}}\right)^2\right)\nonumber\\
=&\frac{p}{2\delta\varepsilon^2}\tn{\vct{q}}^2-\frac{1}{\delta\varepsilon\sqrt{p}}\onenorm{\vct{v}}\tn{\vct{q}}+\frac{1}{2\delta p^2}\onenorm{\vct{v}}^2-\frac{1}{2p}\twonorm{\vct{v}}^2\\
=&\frac{1}{2\delta p^2}\left(\frac{p^{\frac{3}{2}}}{\varepsilon}\tn{\vct{q}}-\onenorm{\vct{v}}\right)^2-\frac{1}{2p}\twonorm{\vct{v}}^2
\end{align*}
if $\onenorm{\vct{v}}\le \frac{p^{\frac{3}{2}}}{\varepsilon}\tn{\vct{q}}$. Otherwise,
\begin{align*}
\bar{\ell}^*(\bv;\vct{q}) = -\frac{1}{2p}\twonorm{\bv}^2\,.
\end{align*}
We can put the two cases together using the notation $z_+ = \max(z,0)$.
\begin{align*}
\bar{\ell}^*(\bv;\vct{q}) = \frac{1}{2\delta p^2}\left(\frac{p^{\frac{3}{2}}}{\varepsilon}\tn{\vct{q}}-\onenorm{\vct{v}}\right)_+^2 -\frac{1}{2p}\twonorm{\bv}^2\,.
\end{align*}
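As a sanity check, the closed form for $\bar{\ell}^*$ can be verified numerically: since the objective depends on $\vct{\theta}$ only through $r=\tn{\vct{\theta}}$ (the optimal $\vct{\theta}$ aligns with $\vct{q}$), the supremum reduces to a one-dimensional problem over $r\ge 0$, which we evaluate on a fine grid. All data below are hypothetical.

```python
import numpy as np

# Hypothetical problem data; delta = n/p as in the text.
rng = np.random.default_rng(0)
p, n, eps = 3, 6, 0.5
delta = n / p
v = rng.standard_normal(n)
q = rng.standard_normal(p)
nq, nv1, nv2 = np.linalg.norm(q), np.abs(v).sum(), np.sum(v**2)

# sup over theta reduces to a 1-d maximization over r = ||theta||_2 >= 0.
r = np.linspace(0.0, 200.0, 2_000_001)
objective = nq * r - (nv2 + 2 * (eps / np.sqrt(p)) * nv1 * r
                      + delta * eps**2 * r**2) / (2 * p)
grid_value = objective.max()

# Closed form derived above (both branches handled by the positive part).
closed_value = (np.maximum(p**1.5 / eps * nq - nv1, 0.0)**2 / (2 * delta * p**2)
                - nv2 / (2 * p))
assert abs(grid_value - closed_value) < 1e-4
```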
Now to calculate the conjugate of $\ell(\vct{v};\vct{z})$ note that
\begin{align*}
\ell(\vct{v};\vct{z})=\bar{\ell}\left(\vct{v};\vct{\theta}_0+\sqrt{p}\vct{z}\right)
\end{align*}
To continue, note that if $f(\vct{x})=g(\mtx{A}\vct{x}+\vct{x}_0)$ for an invertible matrix $\mtx{A}$, the conjugate is given by
\begin{align*}
f^*(\vct{y})=-\langle \mtx{A}^{-1}\vct{x}_0,\vct{y}\rangle+g^*\left(\mtx{A}^{-T}\vct{y}\right)
\end{align*}
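This affine-composition rule for conjugates can be sanity-checked numerically, e.g.\ with $g(\vct{u})=\frac{1}{2}\twonorm{\vct{u}}^2$ (so $g^*(\vct{y})=\frac{1}{2}\twonorm{\vct{y}}^2$) and a random well-conditioned matrix $\mtx{A}$; the data below are hypothetical.

```python
import numpy as np

# Hypothetical data; A is shifted to be well conditioned (invertibility is assumed).
rng = np.random.default_rng(1)
d = 4
A = rng.standard_normal((d, d)) + 3 * np.eye(d)
x0 = rng.standard_normal(d)
y = rng.standard_normal(d)

# Direct computation: sup_x <y, x> - 0.5*||Ax + x0||^2 is attained at A^T(Ax + x0) = y.
x_star = np.linalg.solve(A.T @ A, y - A.T @ x0)
direct = y @ x_star - 0.5 * np.linalg.norm(A @ x_star + x0)**2

# Conjugate rule: f*(y) = -<A^{-1} x0, y> + g*(A^{-T} y) with g*(y) = 0.5*||y||^2.
rule = -np.linalg.solve(A, x0) @ y + 0.5 * np.linalg.norm(np.linalg.solve(A.T, y))**2
assert abs(direct - rule) < 1e-8
```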
Thus, using the above with $\vct{x}_0=\vct{\theta}_0$ and $\mtx{A}=\sqrt{p}\,\mtx{I}$, we arrive at
\begin{align*}
\widetilde{\ell}(\vct{v};\vct{q})=&-\frac{1}{\sqrt{p}}\langle\vct{\theta}_0,\vct{q}\rangle+\bar{\ell}^*\left(\bv;\frac{1}{\sqrt{p}}\vct{q}\right) \nonumber\\
=&-\frac{1}{\sqrt{p}}\vct{q}^T\vct{\theta}_0+\frac{1}{2\delta p^2}\left(\frac{p}{\varepsilon}\tn{\vct{q}}-\onenorm{\vct{v}}\right)_+^2 -\frac{1}{2p}\twonorm{\bv}^2,
\end{align*}
concluding the proof.
\end{proof}
\subsection{Proof of Lemma \ref{cvxconcavelem}}
\label{cvxconcavelempf}
\begin{lemma}[Restatement of Lemma \ref{cvxconcavelem}]
\label{cvxconcavelem2}
The function
\begin{align*}
f(\gamma,\beta,\tau_h):=\frac{\alpha}{2\tau_h}\gamma^2+\frac{\alpha\beta^2}{2p\tau_h}\tn{\vct{h}}^2- \frac{\gamma}{\sqrt{p}} \twonorm{\frac{\alpha\beta}{\tau_h}\vct{h}-\vct{\theta}_0}
\end{align*}
is jointly convex in the parameters $(\gamma,\beta,\tau_h)$.
\end{lemma}
\begin{proof}
Note that
\begin{align*}
\gamma^2+\frac{\beta^2}{p}\tn{\vct{h}}^2- 2\frac{\gamma}{\sqrt{p}} \twonorm{\beta\vct{h}-\frac{\vct{\theta_0}}{\alpha}}=\gamma^2+\frac{\beta^2}{p}\tn{\vct{h}}^2- 2\frac{\gamma}{\sqrt{p}} \sqrt{\beta^2\twonorm{\vct{h}}^2+\frac{1}{\alpha^2}\tn{\vct{\theta}_0}^2-\frac{2}{\alpha}\beta\vct{h}^T\vct{\theta}_0}
\end{align*}
with the Hessian with respect to $(\gamma,\beta)$ equal to
\begin{align*}
\begin{bmatrix}
2 & -\frac{1}{\sqrt{p}}\frac{2\beta\twonorm{\vct{h}}^2-\frac{2}{\alpha}\vct{h}^T\vct{\theta}_0}{\sqrt{\beta^2\twonorm{\vct{h}}^2+\frac{1}{\alpha^2}\tn{\vct{\theta}_0}^2-\frac{2}{\alpha}\beta\vct{h}^T\vct{\theta}_0}}\\
-\frac{1}{\sqrt{p}}\frac{2\beta\twonorm{\vct{h}}^2-\frac{2}{\alpha}\vct{h}^T\vct{\theta}_0}{\sqrt{\beta^2\twonorm{\vct{h}}^2+\frac{1}{\alpha^2}\tn{\vct{\theta}_0}^2-\frac{2}{\alpha}\beta\vct{h}^T\vct{\theta}_0}}& 2\frac{\tn{\vct{h}}^2}{p}
\end{bmatrix}
\end{align*}
The determinant is equal to
\begin{align*}
&\frac{4}{p}\left(\tn{\vct{h}}^2-\frac{\left(\beta\twonorm{\vct{h}}^2-\frac{\vct{h}^T\vct{\theta}_0}{\alpha}\right)^2}{\beta^2\twonorm{\vct{h}}^2+\frac{1}{\alpha^2}\tn{\vct{\theta}_0}^2-\frac{2}{\alpha}\beta\vct{h}^T\vct{\theta}_0}\right)\\
&\quad\quad\quad\quad\quad\quad\quad=\frac{4}{p\alpha^2}\frac{1}{\beta^2\twonorm{\vct{h}}^2+\frac{1}{\alpha^2}\tn{\vct{\theta}_0}^2-\frac{2}{\alpha}\beta\vct{h}^T\vct{\theta}_0}\left(\tn{\vct{h}}^2\tn{\vct{\theta}_0}^2-\left(\vct{h}^T\vct{\theta}_0\right)^2\right)\\
&\quad\quad\quad\quad\quad\quad\quad\ge 0
\end{align*}
Thus
\begin{align*}
\frac{\alpha}{2}\gamma^2+\frac{\alpha}{2}\frac{\beta^2}{p}\tn{\vct{h}}^2- \frac{\gamma}{\sqrt{p}} \twonorm{\alpha\beta\vct{h}-\vct{\theta}_0}=\frac{\alpha}{2}\left(\gamma^2+\frac{\beta^2}{p}\tn{\vct{h}}^2- 2\frac{\gamma}{\sqrt{p}} \twonorm{\beta\vct{h}-\frac{\vct{\theta_0}}{\alpha}}\right)
\end{align*}
is jointly convex in $(\gamma,\beta)$. Therefore the perspective function
\begin{align*}
\tau_h\left(\frac{\alpha}{2}\left(\frac{\gamma}{\tau_h}\right)^2+\frac{\alpha}{2}\frac{\beta^2}{p\tau_h^2}\tn{\vct{h}}^2- \frac{\gamma}{\tau_h\sqrt{p}} \twonorm{\alpha\frac{\beta}{\tau_h}\vct{h}-\vct{\theta}_0}\right)=\frac{\alpha}{2\tau_h}\gamma^2+\frac{\alpha\beta^2}{2p\tau_h}\tn{\vct{h}}^2- \frac{\gamma}{\sqrt{p}} \twonorm{\frac{\alpha\beta}{\tau_h}\vct{h}-\vct{\theta_0}}
\end{align*}
is jointly convex in $(\gamma,\beta,\tau_h)$.
\end{proof}
\subsection{Proof of Lemma \ref{meenv}}
\label{meenvpf}
We begin by stating and proving the following lemma.
\begin{lemma}\label{lem:prox}
The value of the following problem (with $\lambda>1$)
\begin{align*}
\min_{\vct{v}\in\mathbb{R}^n}\quad \frac{\lambda}{2}\twonorm{\vct{x}-\vct{v}}^2-\frac{1}{2n}\left(\gamma-\onenorm{\vct{v}}\right)_{+}^2
\end{align*}
is given by
\begin{align*}
\min_{\tau \ge0}\quad \frac{\lambda}{2}\twonorm{\vct{x}-{\sf ST}(\vct{x};\tau)}^2-\frac{1}{2n}\left(\gamma-\onenorm{{\sf ST}(\vct{x};\tau)}\right)_{+}^2
\end{align*}
where ${\sf ST}(\vct{x};\tau)$ is the soft-thresholding function.
\end{lemma}
Notably, the lemma above reduces the first optimization, over the $n$-dimensional vector $\bv$, to an optimization over the scalar $\tau$.
\begin{proof}
We consider two cases:
\noindent\textbf{Case I: $\onenorm{\vct{x}}> \gamma$}\\
In this case the optimal value is achieved by $\bv = \vct{x}$, resulting in an objective value of zero. To see this, we proceed by contradiction and assume $\bv=\vct{x}$ is not an optimal solution. First note that under this assumption we must have $\onenorm{\bv} <\gamma$: otherwise the $(\cdot)_{+}$ term would be inactive, the objective value would be greater than or equal to zero, and $\bv = \vct{x}$ would achieve the optimal value, negating the contradictory assumption. We thus focus on the case $\onenorm{\bv} <\gamma$. To reach a contradiction in this case, note that for $\bv\neq\vct{x}$ we have
\begin{align*}
&\frac{\lambda}{2}\twonorm{\vct{x}-\vct{v}}^2-\frac{1}{2n}\left(\gamma-\onenorm{\vct{v}}\right)_{+}^2\\
&\ge \frac{\lambda}{2}\twonorm{\vct{x}-\vct{v}}^2-\frac{1}{2n}\left(\onenorm{\vct{x}}-\onenorm{\vct{v}}\right)^2\\
&\ge \frac{\lambda}{2}\twonorm{\vct{x}-\vct{v}}^2-\frac{1}{2n} \onenorm{\vct{x} - \bv}^2\\
&> \frac{1}{2}\twonorm{\vct{x}-\vct{v}}^2-\frac{1}{2n} \onenorm{\vct{x} - \bv}^2\\
&\ge 0,
\end{align*}
where in the penultimate step we used $\lambda>1$ together with $\bv\neq\vct{x}$, and in the last step we used $\onenorm{\vct{x}-\bv}\le \sqrt{n}\twonorm{\vct{x}-\bv}$. This contradicts the assumption that $\bv=\vct{x}$ is not optimal.
Since ${\sf ST}(\vct{x};0)=\vct{x}$ and we showed that it is the optimal $\bv$, the claim holds in this case. Namely, the minimizer is achieved at a point in $\{{\sf ST}(\vct{x};\tau): \; \tau\ge 0\}$.
\noindent\textbf{Case II: $\onenorm{\vct{x}}\le \gamma$}\\
Since $\onenorm{\bv}$ is invariant with respect to the sign of its entries, it is clear that at the solution $\bv$, we must have $\sign(\bv) = \sign(\vct{x})$. Moreover, without loss of generality we can assume $\onenorm{\bv} \le \gamma$ as otherwise similar to the previous case $\bv=\vct{x}$ would be a solution and the minimizer is achieved at a point in $\{{\sf ST}(\vct{x};\tau): \; \tau\ge 0\}$. At the optimal solution we must have\footnote{We note that since $\lambda>1$ the objective $\frac{\lambda}{2}\twonorm{\vct{x}-\vct{v}}^2-\frac{1}{2n}\left(\gamma-\onenorm{\vct{v}}\right)_{+}^2$ is convex and thus optimality is given by zero being a sub-gradient.}
\[
\vct{0}\in \lambda(\bv-\vct{x}) -\frac{1}{n} (\onenorm{\bv}-\gamma) \partial \onenorm{\bv}
\]
As we argued previously, at an optimal solution we must have $\sign(\bv) = \sign(\vct{x})$ and thus $\partial \onenorm{\bv}=\partial \onenorm{\vct{x}}$. Rearranging the terms gives
\[
\bv \in \vct{x} + \frac{1}{\lambda n} (\onenorm{\bv}-\gamma) \partial \onenorm{\bv}=\vct{x} - \frac{1}{\lambda n} (\gamma-\onenorm{\bv}) \partial \onenorm{\vct{x}}
\]
Thus $\bv = {\sf ST}(\vct{x};\tau)$ for $\tau = \frac{1}{\lambda n} (\gamma- \onenorm{\bv})\ge 0$ and the claim follows.
\end{proof}
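Case I of the lemma can be checked numerically: when $\onenorm{\vct{x}}>\gamma$, the objective is nonnegative everywhere and $\bv=\vct{x}={\sf ST}(\vct{x};0)$ attains the value zero. The data below are hypothetical.

```python
import numpy as np

# Soft thresholding and the objective of the lemma; data below are hypothetical.
def ST(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def objective(v, x, lam, gamma, n):
    return 0.5 * lam * np.sum((x - v)**2) - max(gamma - np.abs(v).sum(), 0.0)**2 / (2 * n)

rng = np.random.default_rng(2)
n, lam = 5, 1.5                      # lambda > 1 as required
x = rng.standard_normal(n)
gamma = 0.5 * np.abs(x).sum()        # Case I: ||x||_1 > gamma

# v = x = ST(x; 0) attains objective value 0, and the objective is >= 0 everywhere.
assert objective(ST(x, 0.0), x, lam, gamma, n) == 0.0
for _ in range(200):
    v = x + rng.standard_normal(n)
    assert objective(v, x, lam, gamma, n) >= 0.0
```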
With the lemma above in place, we turn our attention to completing the proof of Lemma \ref{meenv}. To this aim, note that since $f(\bv)$ is convex and $\twonorm{\vct{x}-\bv}^2$ is strictly convex, $\frac{1}{2\mu} \twonorm{\vct{x}-\bv}^2+f(\bv)$ is jointly strictly convex in $(\vct{x},\bv)$. Since partial minimization preserves convexity, $e_f(\vct{x};\mu)$ is strictly convex in $\vct{x}$ (also see \cite[Lemma C.5]{thrampoulidis2015precise}).
We write the Moreau envelope as
\begin{align*}
e_f(\vct{x};\mu) &= \min_{\bv} \frac{1}{2\mu} \twonorm{\vct{x}-\bv}^2+ \frac{1}{2} \twonorm{\bv}^2 -\frac{1}{2\delta p} \left(\frac{p}{\varepsilon}\gamma-\onenorm{\bv}\right)_+^2\\
&=\min_{\bv} \frac{1}{2} \left(\frac{1}{\mu}+1\right) \twonorm{\bv - \frac{\vct{x}}{\mu+1}}^2+ \frac{1}{2(\mu+1)} \twonorm{\vct{x}}^2 -\frac{1}{2\delta p} \left(\frac{p}{\varepsilon}\gamma-\onenorm{\bv}\right)_+^2
\end{align*}
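The completion-of-squares step above can be verified numerically for random data (a quick hypothetical check):

```python
import numpy as np

# Random hypothetical data.
rng = np.random.default_rng(3)
n, mu = 6, 0.7
x = rng.standard_normal(n)
v = rng.standard_normal(n)

# LHS: (1/(2 mu))||x - v||^2 + (1/2)||v||^2; RHS: its completed-square form.
lhs = np.sum((x - v)**2) / (2 * mu) + 0.5 * np.sum(v**2)
rhs = 0.5 * (1 / mu + 1) * np.sum((v - x / (mu + 1))**2) + np.sum(x**2) / (2 * (mu + 1))
assert abs(lhs - rhs) < 1e-10
```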
Using Lemma~\ref{lem:prox} with $\lambda = \frac{1+\mu}{\mu}>1$, we arrive at
\begin{align}
e_f(\vct{x};\mu) &= \frac{1}{2(\mu+1)}\twonorm{\vct{x}}^2 +
\min_{\tau\ge 0} \frac{1}{2\mu(\mu+1)}\twonorm{\vct{x}-{\sf ST}(\vct{x};\tau(\mu+1))}^2\nonumber\\
&\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad-\frac{1}{2n}\left(\frac{p}{\varepsilon}\gamma-\frac{1}{1+\mu}\onenorm{{\sf ST}(\vct{x};\tau(\mu+1))}\right)_+^2.
\end{align}
The result follows by a change of variable $\tau(\mu+1) \to \tau$.
\subsection{Proof of Lemma \ref{Glem}}
\label{Glempf}
We begin by restating the lemma for the convenience of the reader.
\begin{lemma}[Restatement of Lemma \ref{Glem}]\label{Glem2} Let $\vct{w}\in\mathbb{R}^n$ be a Gaussian random vector distributed as $\mathcal{N}(\vct{0},\omega^2\mtx{I}_n)$. Also define
\begin{align*}
G(\vct{w};\mu,\tau) := \frac{1}{2\mu(\mu+1)}\twonorm{\vct{w}-{\sf ST}(\vct{w};\tau)}^2-\frac{1}{2n}\left(\frac{p}{\varepsilon}\gamma-\frac{1}{1+\mu}\onenorm{{\sf ST}(\vct{w};\tau)}\right)_+^2.
\end{align*}
Then
\begin{align*}
\underset{n \rightarrow \infty}{\lim}\text{ }\frac{1}{n}G(\vct{w};\mu,\tau)=&\frac{\omega^2}{2\mu(\mu+1)}\left(\left(1-\sqrt{\frac{2}{\pi}}\frac{\tau}{\omega} e^{-\frac{\tau^2}{2\omega^2}}\right)+\left(\frac{\tau^2}{\omega^2}-1\right)\text{erfc}\left(\frac{1}{\sqrt{2}}\frac{\tau}{\omega}\right)\right)\\
&-\frac{\omega^2}{2(\mu+1)^2}\left(\frac{\gamma(\mu+1)}{\delta\varepsilon\omega}+\frac{\tau}{\omega}\cdot\text{erfc}\left(\frac{1}{\sqrt{2}}\frac{\tau}{\omega}\right)-\sqrt{\frac{2}{\pi}} e^{-\frac{\tau^2}{2\omega^2}}\right)_+^2.
\end{align*}
Furthermore,
\begin{align*}
\underset{\tau\ge 0}{\min}\text{ }\underset{n \rightarrow \infty}{\lim}\text{ }&\frac{1}{n}G(\vct{w};\mu,\tau)\\
&\quad=
\begin{cases}
0 \quad &\text{ if }\gamma(\mu+1)\le \sqrt{\frac{2}{\pi}}\delta \varepsilon\omega\\
\frac{\omega^2}{2\mu(\mu+1)}\left(1-{\rm erfc}\left(\frac{\tau^*\left(\frac{\gamma(\mu+1)}{\delta\varepsilon\omega},\mu\right)}{\sqrt{2}}\right)-\frac{\gamma(\mu+1)}{\delta\varepsilon\omega}\tau^*\left(\frac{\gamma(\mu+1)}{\delta\varepsilon\omega},\mu\right)\right)&\text{ if }\gamma(\mu+1)>\sqrt{\frac{2}{\pi}}\delta \varepsilon \omega
\end{cases}
\end{align*}
where $\tau^*(a,\mu)$ is the unique solution to
\begin{align}\label{eq:tau-star}
a-\frac{\mu+1}{\mu}\tau+\tau\cdot{\rm erfc}\left(\frac{\tau}{\sqrt{2}}\right)-\sqrt{\frac{2}{\pi}} e^{-\frac{\tau^2}{2}}=0
\end{align}
Alternatively, using the fact that ${\rm erf}=1-{\rm erfc}$, we can rewrite this in the form
\begin{align*}
\underset{\tau\ge 0}{\min}\text{ }\underset{n \rightarrow \infty}{\lim}\text{ }&\frac{1}{n}G(\vct{w};\mu,\tau)\\
&\quad\quad=
\begin{cases}
0 \quad &\text{ if }\gamma(\mu+1)\le \sqrt{\frac{2}{\pi}}\delta \varepsilon\omega\\
\frac{\omega^2}{2\mu(\mu+1)}\left({\rm erf} \left(\frac{\tau^*\left(\frac{\gamma(\mu+1)}{\delta\varepsilon\omega},\mu\right)}{\sqrt{2}}\right)-\frac{\gamma(\mu+1)}{\delta\varepsilon\omega}\tau^*\left(\frac{\gamma(\mu+1)}{\delta\varepsilon\omega},\mu\right)\right)&\text{ if }\gamma(\mu+1)>\sqrt{\frac{2}{\pi}}\delta \varepsilon \omega
\end{cases}
\end{align*}
where $\tau^*(a,\mu)$ is the unique solution to
\begin{align*}
a-\frac{1}{\mu}\tau-\tau\cdot{\rm erf}\left(\frac{\tau}{\sqrt{2}}\right)-\sqrt{\frac{2}{\pi}} e^{-\frac{\tau^2}{2}}=0
\end{align*}
\end{lemma}
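Before turning to the proof, one can check numerically that the two characterizations of $\tau^*(a,\mu)$ above coincide, and solve for $\tau^*$ with hypothetical parameters $a>\sqrt{2/\pi}$ and $\mu>0$:

```python
import numpy as np
from scipy.special import erf, erfc
from scipy.optimize import brentq

# Hypothetical parameters with a > sqrt(2/pi), mu > 0.
a, mu = 1.3, 0.8

def f_erfc(t):  # first characterization of tau^*
    return (a - (mu + 1) / mu * t + t * erfc(t / np.sqrt(2))
            - np.sqrt(2 / np.pi) * np.exp(-t**2 / 2))

def f_erf(t):   # second characterization, using erfc = 1 - erf
    return (a - t / mu - t * erf(t / np.sqrt(2))
            - np.sqrt(2 / np.pi) * np.exp(-t**2 / 2))

# The two forms agree identically ...
ts = np.linspace(0.0, 5.0, 101)
assert np.allclose([f_erfc(t) for t in ts], [f_erf(t) for t in ts])

# ... and the (unique) root is strictly positive.
tau_star = brentq(f_erfc, 0.0, 100.0)
assert tau_star > 0 and abs(f_erf(tau_star)) < 1e-8
```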
\begin{proof}
First note that by the law of large numbers we have
\begin{align*}
\underset{n \rightarrow \infty}{\lim}\text{ }\frac{1}{n}\twonorm{\vct{w}-{\sf ST}(\vct{w};\tau)}^2=&\operatorname{\mathbb{E}}_{g\sim\mathcal{N}(0,1)}\Big[\left(\omega g-{\sf ST}(\omega g;\tau)\right)^2\Big]\\
=&\omega^2\operatorname{\mathbb{E}}_{g\sim\mathcal{N}(0,1)}\Big[\left( g-{\sf ST}\left(g;\frac{\tau}{\omega}\right)\right)^2\Big]\\
=&\omega^2\left(\frac{2}{\sqrt{2\pi}}\int_{+\frac{\tau}{\omega}}^{+\infty} \frac{\tau^2}{\omega^2} e^{-\frac{x^2}{2}}dx+\frac{1}{\sqrt{2\pi}}\int_{-\frac{\tau}{\omega}}^{+\frac{\tau}{\omega}} x^2 e^{-\frac{x^2}{2}}dx\right)\\
=&\omega^2\left( \left(1-\sqrt{\frac{2}{\pi}}\frac{\tau}{\omega}e^{-\frac{\tau^2}{2\omega^2}}\right)+\left(\frac{\tau^2}{\omega^2}-1\right)\text{erfc}\left(\frac{1}{\sqrt{2}}\frac{\tau}{\omega}\right)\right)\\
=&\omega\left(\omega-\sqrt{\frac{2}{\pi}}\tau e^{-\frac{\tau^2}{2\omega^2}}\right)+\left(\tau^2-\omega^2\right)\text{erfc}\left(\frac{1}{\sqrt{2}}\frac{\tau}{\omega}\right)
\end{align*}
Next note that
\begin{align*}
\underset{n \rightarrow \infty}{\lim}\text{ }\frac{1}{n}\onenorm{{\sf ST}(\vct{w};\tau)}=&\operatorname{\mathbb{E}}_{g\sim\mathcal{N}(0,1)}\Big[\abs{{\sf ST}(\omega g;\tau)}\Big]\\
=&\omega\operatorname{\mathbb{E}}_{g\sim\mathcal{N}(0,1)}\Bigg[\abs{{\sf ST}\left( g;\frac{\tau}{\omega}\right)}\Bigg]\\
=&\frac{\omega}{\sqrt{2\pi}}\left(\int_{+\frac{\tau}{\omega}}^{+\infty}\left(x-\frac{\tau}{\omega}\right)e^{-\frac{x^2}{2}}dx-\int_{-\infty}^{-\frac{\tau}{\omega}}\left(x+\frac{\tau}{\omega}\right)e^{-\frac{x^2}{2}}dx \right)\\
=&\sqrt{\frac{2}{\pi}}\omega\left(\int_{+\frac{\tau}{\omega}}^{+\infty}\left(x-\frac{\tau}{\omega}\right)e^{-\frac{x^2}{2}}dx\right)\\
=&\sqrt{\frac{2}{\pi}}\omega\cdot e^{-\frac{\tau^2}{2\omega^2}}-\tau\cdot\text{erfc}\left(\frac{1}{\sqrt{2}}\frac{\tau}{\omega}\right)
\end{align*}
Therefore,
\begin{align*}
\underset{n\rightarrow \infty}{\lim}\quad\frac{1}{2n^2}\left(\frac{p}{\varepsilon}\gamma-\frac{1}{1+\mu}\onenorm{{\sf ST}(\vct{w};\tau)}\right)_+^2=&\underset{n\rightarrow \infty}{\lim}\quad\frac{1}{2}\left(\frac{\gamma}{\delta\varepsilon}-\frac{1}{1+\mu}\frac{\onenorm{{\sf ST}(\vct{w};\tau)}}{n}\right)_+^2\\
=&\frac{1}{2}\left(\frac{\gamma}{\delta\varepsilon}+\frac{\tau}{1+\mu}\text{erfc}\left(\frac{1}{\sqrt{2}}\frac{\tau}{\omega}\right)-\sqrt{\frac{2}{\pi}}\frac{\omega}{1+\mu} e^{-\frac{\tau^2}{2\omega^2}}\right)_+^2
\end{align*}
The proof of the first identity follows by combining the two summands.
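The two Gaussian averages computed above can be double-checked by numerical integration (with hypothetical values of $\omega$ and $\tau$):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erfc

# Hypothetical values of omega and tau.
omega, tau = 1.3, 0.6
phi = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)   # standard normal density
ST = lambda x, t: np.sign(x) * max(abs(x) - t, 0.0)      # soft thresholding

kink = tau / omega  # the integrands have kinks at x = +/- tau/omega
e_sq = quad(lambda x: (omega * x - ST(omega * x, tau))**2 * phi(x),
            -12, 12, points=[-kink, kink])[0]
closed_sq = omega**2 * ((1 - np.sqrt(2 / np.pi) * (tau / omega)
                         * np.exp(-tau**2 / (2 * omega**2)))
                        + ((tau / omega)**2 - 1) * erfc(tau / (np.sqrt(2) * omega)))
assert abs(e_sq - closed_sq) < 1e-7

e_l1 = quad(lambda x: abs(ST(omega * x, tau)) * phi(x), -12, 12, points=[-kink, kink])[0]
closed_l1 = (np.sqrt(2 / np.pi) * omega * np.exp(-tau**2 / (2 * omega**2))
             - tau * erfc(tau / (np.sqrt(2) * omega)))
assert abs(e_l1 - closed_l1) < 1e-7
```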
To prove the second identity, note that using the change of variable $\tau\rightarrow \tau\omega$,
\begin{align*}
&\underset{\tau\ge 0}{\min}\quad\underset{n \rightarrow \infty}{\lim}\text{ }\frac{1}{n}G(\vct{w};\mu,\tau\omega)\\
&\quad\quad\quad\quad\quad\quad\quad\quad=\frac{\omega^2}{2(\mu+1)^2}\cdot\underset{\tau\ge 0}{\min}\quad \frac{\mu+1}{\mu}\left(\left(1-\sqrt{\frac{2}{\pi}}\tau e^{-\frac{\tau^2}{2}}\right)+\left(\tau^2-1\right)\text{erfc}\left(\frac{\tau}{\sqrt{2}}\right)\right)\\
&\quad\quad\quad\quad\quad\quad\quad\quad\quad-\left(\frac{\gamma(\mu+1)}{\delta\varepsilon\omega}+\tau\cdot\text{erfc}\left(\frac{\tau}{\sqrt{2}}\right)-\sqrt{\frac{2}{\pi}} e^{-\frac{\tau^2}{2}}\right)_+^2
\end{align*}
To continue, note that when only the first term is active, the derivative with respect to $\tau$ is given by
\begin{align*}
2\frac{\mu+1}{\mu}\tau\text{erfc}\left(\frac{\tau}{\sqrt{2}}\right)\ge 0
\end{align*}
and when both terms are active the derivative is given by
\begin{align*}
&2\tau\frac{\mu+1}{\mu}\text{erfc}\left(\frac{\tau}{\sqrt{2}}\right)-2\text{erfc}\left(\frac{\tau}{\sqrt{2}}\right)\left(\frac{\gamma(\mu+1)}{\delta\varepsilon\omega}+\tau\cdot\text{erfc}\left(\frac{\tau}{\sqrt{2}}\right)-\sqrt{\frac{2}{\pi}} e^{-\frac{\tau^2}{2}}\right)\\
&\quad\quad=-2\text{erfc}\left(\frac{\tau}{\sqrt{2}}\right)\left((\mu+1)\left(\frac{\gamma}{\delta\varepsilon\omega}-\frac{\tau}{\mu}\right)+\tau\cdot\text{erfc}\left(\frac{\tau}{\sqrt{2}}\right)-\sqrt{\frac{2}{\pi}} e^{-\frac{\tau^2}{2}}\right)
\end{align*}
We note that the function $(\mu+1)\left(\frac{\gamma}{\delta\varepsilon\omega}-\frac{\tau}{\mu}\right)+\tau\cdot\text{erfc}\left(\frac{\tau}{\sqrt{2}}\right)-\sqrt{\frac{2}{\pi}} e^{-\frac{\tau^2}{2}}$ is always decreasing for $\tau\ge0$: its derivative with respect to $\tau$ equals $\text{erfc}\left(\frac{\tau}{\sqrt{2}}\right)-\frac{\mu+1}{\mu}\le-\frac{1}{\mu}<0$. Its value at $\tau=0$ is given by $\frac{\gamma(\mu+1)}{\delta \varepsilon\omega}-\sqrt{\frac{2}{\pi}}$. To continue further we consider two cases.
\noindent\underline{\textbf{ Case I: $\gamma(\mu+1)\le \sqrt{\frac{2}{\pi}} \delta\varepsilon\omega$:}}\\
In this case the above function is nonpositive at $\tau=0$, and hence nonpositive for all $\tau\ge 0$, so the derivative of the objective is nonnegative and the objective is nondecreasing on $[0,+\infty)$. The minimum is thus achieved at $\tau=0$ with the corresponding optimal value given by
\begin{align*}
-\frac{\omega^2}{2(\mu+1)^2}\left(\frac{\gamma(\mu+1)}{\delta\varepsilon\omega}-\sqrt{\frac{2}{\pi}}\right)_+^2=0
\end{align*}
\noindent\underline{\textbf{ Case II: $\gamma(\mu+1)> \sqrt{\frac{2}{\pi}}\delta\varepsilon\omega$:}}\\
In this case the objective is decreasing at first and then increasing. Therefore, the minimum is achieved at a point where
\begin{align*}
(\mu+1)\left(\frac{\gamma}{\delta\varepsilon\omega}-\frac{\tau}{\mu}\right)+\tau\cdot\text{erfc}\left(\frac{\tau}{\sqrt{2}}\right)-\sqrt{\frac{2}{\pi}} e^{-\frac{\tau^2}{2}}=0
\end{align*}
Note that at such a point we have
\begin{align*}
\frac{\gamma(\mu+1)}{\delta\varepsilon\omega}+\tau\cdot\text{erfc}\left(\frac{\tau}{\sqrt{2}}\right)-\sqrt{\frac{2}{\pi}} e^{-\frac{\tau^2}{2}}=\frac{\mu+1}{\mu}\tau
\end{align*}
and
\begin{align*}
\left(1-\sqrt{\frac{2}{\pi}}\tau e^{-\frac{\tau^2}{2}}\right)+\left(\tau^2-1\right)\text{erfc}\left(\frac{\tau}{\sqrt{2}}\right)=&\tau^2\cdot\text{erfc}\left(\frac{\tau}{\sqrt{2}}\right)-\tau\sqrt{\frac{2}{\pi}} e^{-\frac{\tau^2}{2}}+1-\text{erfc}\left(\frac{\tau}{\sqrt{2}}\right)\\
=&\frac{\mu+1}{\mu}\tau^2-\frac{\gamma(\mu+1)}{\delta\varepsilon\omega}\tau+1-\text{erfc}\left(\frac{\tau}{\sqrt{2}}\right)
\end{align*}
Thus
\begin{align*}
&\frac{\mu+1}{\mu}\left(\left(1-\sqrt{\frac{2}{\pi}}\tau e^{-\frac{\tau^2}{2}}\right)+\left(\tau^2-1\right)\text{erfc}\left(\frac{\tau}{\sqrt{2}}\right)\right)-\left(\frac{\gamma(\mu+1)}{\delta\varepsilon\omega}+\tau\cdot\text{erfc}\left(\frac{\tau}{\sqrt{2}}\right)-\sqrt{\frac{2}{\pi}} e^{-\frac{\tau^2}{2}}\right)_+^2\\
&\quad\quad\quad=\frac{(\mu+1)^2}{\mu^2}\tau^2-\frac{\gamma(\mu+1)^2}{\delta\varepsilon\omega\mu}\tau+\frac{\mu+1}{\mu}\left(1-\text{erfc}\left(\frac{\tau}{\sqrt{2}}\right)\right)-\frac{(\mu+1)^2}{\mu^2}\tau^2\\
&\quad\quad\quad=\frac{\mu+1}{\mu}\left(1-\text{erfc}\left(\frac{\tau}{\sqrt{2}}\right)-\frac{\gamma(\mu+1)}{\delta\varepsilon\omega}\tau\right)
\end{align*}
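The Case II simplification above can be verified numerically (assuming NumPy/SciPy; here $c$ stands for the ratio $\frac{\gamma}{\delta\varepsilon\omega}$, and the constants are arbitrary values satisfying the Case II condition $c(\mu+1)>\sqrt{2/\pi}$): we solve the stationarity equation for $\tau$ and compare the two sides of the final identity.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import erfc

# Hypothetical constants: c plays the role of gamma/(delta*epsilon*omega)
mu, c = 2.0, 0.9   # Case II: c*(mu+1) = 2.7 > sqrt(2/pi) ~ 0.798

def g(t):
    # the decreasing function whose root defines the minimizer in Case II
    return (mu + 1)*(c - t/mu) + t*erfc(t/np.sqrt(2)) - np.sqrt(2/np.pi)*np.exp(-t**2/2)

tau = brentq(g, 0.0, 50.0)   # stationary point
e = erfc(tau/np.sqrt(2))
# left-hand side: the original combination of the two summands
lhs = ((mu + 1)/mu*((1 - np.sqrt(2/np.pi)*tau*np.exp(-tau**2/2)) + (tau**2 - 1)*e)
       - max((mu + 1)*c + tau*e - np.sqrt(2/np.pi)*np.exp(-tau**2/2), 0.0)**2)
# right-hand side: the simplified expression derived above
rhs = (mu + 1)/mu*(1 - e - (mu + 1)*c*tau)
```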
\end{proof}
\section{Sketch and roadmap of the proof}
To be able to provide a precise characterization of the various tradeoffs we need to develop a detailed understanding of the adversarial training objective
\vspace{-0.3cm}
\begin{align*}
\underset{\vct{\theta}\in\mathbb{R}^p}{\min}\text{ }\mathcal{L}(\vct{\theta}):=\underset{\vct{\theta}\in\mathbb{R}^p}{\min}\text{ }\underset{\twonorm{\vct{\delta}_i}\le \varepsilon}{{\rm \max}}\;\; \frac{1}{2n} \sum_{i=1}^n \left(y_i - \<\vct{x}_i +\bdelta_i,{\boldsymbol{\theta}}\>\right)^2,
\end{align*}
and its optimal solution ${\widehat{\boldsymbol{\theta}}}^{\varepsilon} \in \arg\min_{{\boldsymbol{\theta}}\in \mathbb{R}^p }\mathcal{L}(\vct{\theta})$. To achieve this we carry out the following steps.
\medskip
\noindent\textbf{Step I: Simplification of the loss (Section \ref{step1}).} The loss is defined as the optimal value of an inner maximization problem and hence characterizing its properties directly is challenging. In the first step of our proof we show that one can in fact solve this maximization problem and derive an expression for the loss in closed form. Specifically, we show
\begin{align}
\label{simpobj}
\mathcal{L}(\vct{\theta})=\frac{1}{2n} \sum_{i=1}^n \left(|y_i-\<\vct{x}_i,{\boldsymbol{\theta}}\>|+ \varepsilon \twonorm{{\boldsymbol{\theta}}}\right)^2=\frac{1}{2n}\twonorm{\abs{\vct{y}-\mtx{X}\vct{\theta}}+\varepsilon\twonorm{\vct{\theta}}}^2.
\end{align}
The main intuition behind this derivation is that one can think of the min-max optimization problem above as a game between a learner and an adversary where the learner first chooses a parameter $\vct{\theta}$ and then the adversary changes each feature $\vct{x}_i$ given the label $y_i$ and the learner's choice of $\vct{\theta}$. We show that the best choice for the adversary to maximize the error is to pick $\vct{\delta}_i$ in the direction of $\vct{\theta}$ with a magnitude of $\varepsilon$ (maximum power of the adversary) and with the sign of the misfit on the $i$th training data point ($\sgn{\langle \vct{x}_i,\vct{\theta}\rangle-y_i}$). We formally prove this result by connecting it to the well-known trust region subproblem in optimization.
\medskip
\noindent\textbf{Step II: Reduction to an Auxiliary Optimization (AO) problem (Section \ref{step2}).}\\
The loss \eqref{simpobj}, while significantly simplified, is still rather complicated and it is completely unclear how to precisely characterize its behavior and the quality of its optimal solution. In particular, the dependence on the random data matrix $\mtx{X}$ is still rather complex, hindering statistical analysis even in an asymptotic setting. To bring the optimization problem into a form more amenable to precise asymptotic analysis we carry out a series of reformulations of the optimization problem. First, we rescale the loss. Next we consider a change of variable of the form $\vct{z}=\frac{1}{\sqrt{p}}(\vct{\theta}-\vct{\theta}_0)$ and introduce auxiliary variables through equality constraints. Finally, we use duality to cast the problem into a mini-max form. Combining these steps we arrive at the following equivalent Primal Optimization (PO) problem
\begin{align}
\label{equi1}
\min_{{\vct{z}}\in\mathbb{R}^p,\vct{v}\in\mathbb{R}^n} \max_{\vct{u}\in\mathbb{R}^n}\;\;\; \frac{1}{\sqrt{p}}\vct{u}^T\boldsymbol{X} \vct{z} - \frac{1}{\sqrt{p}}\vct{u}^T \vct{\omega} +\frac{1}{\sqrt{p}}\vct{u}^T \vct{v}+\ell(\vct{v};\vct{z})
\end{align}
where $\vct{\omega}=\frac{\vct{w}}{\sqrt{p}}\in\mathbb{R}^n$ is a Gaussian vector with i.i.d.~$\mathcal{N}(0,\sigma^2)$ entries and
\vspace{-0.4cm}
\begin{align*}
\ell(\vct{v};\vct{z}):= \frac{1}{2p}\left(\twonorm{\vct{v}}^2+2\frac{\varepsilon}{\sqrt{p}}\onenorm{\vct{v}}\twonorm{\vct{\theta}_0+\sqrt{p}\vct{z}}+\frac{\varepsilon^2}{p}\twonorm{\vct{\theta}_0+\sqrt{p}\vct{z}}^2\right).
\end{align*}
This equivalent form may be counter-intuitive as we started by simplifying a different mini-max optimization problem and we have now again introduced a new maximization! The main advantage of this new form is that it is in fact affine in the data matrix $\boldsymbol{X}$. This particular form allows us to use a powerful extension of a classical Gaussian process inequality due to Gordon \cite{gordon1988milman} known as the Convex Gaussian Minimax Theorem (CGMT) \cite{thrampoulidis2015regularized} which focuses on characterizing the asymptotic behavior of mini-max optimization problems that are affine in a Gaussian matrix $\boldsymbol{X}$. This result enables us to characterize the properties of \eqref{equi1} by studying the asymptotic behavior of the following, arguably simpler, \emph{Auxiliary Optimization (AO)} problem instead
\begin{align}
\label{equiAO}
\min_{\vct{z}\in\mathbb{R}^p, \vct{v}} \max_{\vct{u}\in\mathbb{R}^n}\; \frac{1}{\sqrt{p}}\left(\twonorm{\vct{z}} \vct{g}^T\vct{u} + \twonorm{\vct{u}} \vct{h}^T\vct{z}- \vct{u}^T \vct{\omega} +\vct{u}^T \vct{v}\right)+\ell(\vct{v};\vct{z}).
\end{align}
We emphasize that the exact relationship between the above AO problem \eqref{equiAO} and the PO problem \eqref{equi1} is more intricate and technical than this summary suggests. See Section \ref{step2} for details.
The CGMT framework has been recently used to derive a precise characterization of the generalization error of max-margin linear classifiers in the overparametrized regime with separable data~\cite{deng2019model,montanari2019generalization}. Also \cite{liang2020precise} uses the CGMT framework to analyze max-$\ell_1$-margin classifiers.
\smallskip
\noindent\textbf{Step III: Scalarization of the Auxiliary Optimization (AO) problem (Section \ref{step3}).}\\
In this step we further simplify the AO problem in \eqref{equiAO}. In particular we show that the asymptotic behavior of the AO can be characterized rather precisely via the following scalar optimization problem involving five variables:
\begin{align}
\label{eq0192}
\max_{0\le\beta\le K_\beta}\sup_{\gamma,\tau_h\ge0}\;\;\min_{0\le \alpha\le K_\alpha}\;\;\min_{\tau_g\ge 0}\;\; \quad D(\alpha,\beta, \gamma,\tau_h,\tau_g)\,\quad\text{where}
\end{align}
%
\begin{align}\label{eq019-22}
D(\alpha,\beta, \gamma,\tau_h,\tau_g):=&\frac{\delta\beta}{2(\tau_g+\beta)} \left(\alpha^2+\sigma^2\right)\nonumber\\
&+\delta \mathbb{1}_{\big\{\gamma(\tau_g+\beta)>\sqrt{\frac{2}{\pi}}\delta\varepsilon\beta\sqrt{\alpha^2+\sigma^2}\big\}}\frac{\beta^2(\alpha^2+\sigma^2)}{2\tau_g(\tau_g+\beta)}\left({\rm erf}\left(\frac{\tau_*}{\sqrt{2}}\right)-\frac{\gamma(\tau_g+\beta)}{\delta\varepsilon\beta\sqrt{\alpha^2+\sigma^2}}\tau_*\right)\nonumber\\
&-\frac{\alpha}{2\tau_h}(\gamma^2 +\beta^2)+ \gamma\sqrt{\frac{\alpha^2\beta^2}{\tau_h^2}+V^2} -\frac{\alpha\tau_h}{2}+\frac{\beta\tau_g}{2}
\end{align}
In particular a variety of conclusions can be derived based on the optimal solutions of the above optimization problem, as we discuss in the next step. We note that while the expressions may look complicated, we prove that this optimization problem is in fact convex in the minimization parameters $(\alpha,\tau_g)$ and concave in the maximization parameters $(\beta,\gamma,\tau_h)$, so that its optimal solutions can be computed quickly and accurately via a simple low-dimensional gradient scheme. We also note that the proof of this step is quite intricate, so we do not attempt an intuitive sketch of the arguments here and instead refer to Section \ref{step3} for details.
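While we cannot reproduce the full objective $D$ here, the nested low-dimensional strategy we have in mind can be sketched on a toy convex-concave objective (the function `D_toy` below is purely illustrative and is not the actual $D$ of \eqref{eq019-22}): for each value of the maximization variable we solve the inner minimization, and then maximize the resulting value.

```python
from scipy.optimize import minimize_scalar

# Toy saddle objective (NOT the actual D above): convex in alpha, concave in beta.
def D_toy(alpha, beta):
    return 0.5*alpha**2 - alpha*beta + beta - beta**2

def inner_min(beta):
    # minimize over the "min" variable for a fixed value of the "max" variable
    return minimize_scalar(lambda a: D_toy(a, beta), bounds=(0.0, 10.0),
                           method='bounded', options={'xatol': 1e-10}).fun

# maximize the inner minimum over the "max" variable
outer = minimize_scalar(lambda b: -inner_min(b), bounds=(0.0, 10.0),
                        method='bounded', options={'xatol': 1e-10})
beta_star = outer.x
saddle_value = -outer.fun
alpha_star = minimize_scalar(lambda a: D_toy(a, beta_star), bounds=(0.0, 10.0),
                             method='bounded', options={'xatol': 1e-10}).x
```

For this toy objective the saddle point is $\alpha_*=\beta_*=1/3$ with value $1/6$, which the nested scheme recovers.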
\medskip
\noindent\textbf{Step IV: Completing the proof of the theorems (Sections \ref{funpf} and \ref{algpf}).}\\
Finally, we utilize the above scalar form to derive all of the different theorems and results stated in Section \ref{sec:main}. This is done by relating the quantities of interest in each theorem to the optimal solutions of \eqref{eq0192}. For instance, we show that $\lim_{n\to\infty}\frac{1}{p} \twonorm{{\widehat{\boldsymbol{\theta}}}^\varepsilon-{\boldsymbol{\theta}}_0}^2 = \alpha_*^2$ with $\alpha_*$ the optimal solution over $\alpha$. These calculations/proofs are carried out in detail in Sections \ref{funpf} and \ref{algpf}. Since each argument is different we do not provide a summary here and refer to the corresponding sections.
\section{Proofs}
\label{sec:proofs}
\subsection{Notations}
We define the data matrix $\mtx{X}\in\mathbb{R}^{n\times p}$ with the rows consisting of the training data features $\vct{x}_1, \vct{x}_2, \ldots, \vct{x}_n$. For a convex function $f:\mathbb{R}^m\to \mathbb{R}$, we denote its Fenchel conjugate by $f^*(\vct{y}) = \sup_{\vct{x}} \vct{y}^T\vct{x} - f(\vct{x})$. We also define the Moreau envelope function of $f$ at $\vct{x}$ with parameter $\tau$ as
\[
e_f(\vct{x};\tau) \equiv \min_{\vct{v}} \frac{1}{2\tau} \twonorm{\vct{x}-\vct{v}}^2+f(\vct{v})\,.
\]
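As a concrete illustration of this definition (assuming SciPy is available; the helper names below are ours): for $f=|\cdot|$ the Moreau envelope is the well-known Huber function, which can be checked by direct minimization over $\vct{v}$.

```python
from scipy.optimize import minimize_scalar

def moreau_env_abs(x, tau):
    # e_f(x; tau) for f(v) = |v|, evaluated by direct minimization over v
    return minimize_scalar(lambda v: (x - v)**2/(2*tau) + abs(v),
                           bounds=(-20.0, 20.0), method='bounded',
                           options={'xatol': 1e-10}).fun

def huber(x, tau):
    # known closed form: the Moreau envelope of |.| is the Huber function
    return x**2/(2*tau) if abs(x) <= tau else abs(x) - tau/2
```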
\subsection{Simplification of the loss}
\label{step1}
As discussed earlier in this section we wish to derive a closed form for the loss
\begin{align}
\label{firstoptstep1}
\mathcal{L}(\vct{\theta}):=\underset{\twonorm{\vct{\delta}_i}\le \varepsilon}{{\rm max}}\;\; \frac{1}{2n} \sum_{i=1}^n \left(y_i - \<\vct{x}_i +\bdelta_i,{\boldsymbol{\theta}}\>\right)^2
\end{align}
and in particular show that
\begin{align}
\label{concstep1}
\mathcal{L}(\vct{\theta})=\frac{1}{2n} \sum_{i=1}^n \left(|y_i-\<\vct{x}_i,{\boldsymbol{\theta}}\>|+ \varepsilon \twonorm{{\boldsymbol{\theta}}}\right)^2=\frac{1}{2n}\twonorm{\abs{\vct{y}-\mtx{X}\vct{\theta}}+\varepsilon\twonorm{\vct{\theta}}}^2
\end{align}
To this aim first note that the maximization in \eqref{firstoptstep1} decouples over $i$ so that we can write
\begin{align*}
\mathcal{L}(\vct{\theta}):=\;\; \frac{1}{2n} \sum_{i=1}^n \underset{\twonorm{\vct{\delta}_i}\le \varepsilon}{{\rm max}}\left(y_i - \<\vct{x}_i +\bdelta_i,{\boldsymbol{\theta}}\>\right)^2
\end{align*}
To continue further define $\widetilde{y}_i:= y_i-\<\vct{x}_i,{\boldsymbol{\theta}}\>$. By expanding the square the optimization over $\bdelta_i$ can be rewritten in the form
\[
\underset{\twonorm{\vct{\delta}_i}\le \varepsilon}{\min}\;\; -\frac{1}{2}\widetilde{y}_i^2 + \widetilde{y}_i \<{\boldsymbol{\theta}},\bdelta_i\>- \frac{1}{2} \<{\boldsymbol{\theta}},\bdelta_i\>^2\,.
\]
Note that this is a trust-region subproblem and $\bdelta_i$ is a solution if and only if $\twonorm{\bdelta_i}\le \varepsilon$ and there exists $\lambda_i \ge0$ such that
\begin{enumerate}
\item $(-{\boldsymbol{\theta}}\bth^\sT + \lambda_i \I)\bdelta_i = -\tilde{y}_i{\boldsymbol{\theta}}$\,.
\item $-{\boldsymbol{\theta}}\bth^\sT + \lambda_i \I \succeq \zero$ (or equivalently $\lambda_i \ge \twonorm{{\boldsymbol{\theta}}}^2$)
\item $\lambda_i(\varepsilon-\twonorm{\bdelta_i}) = 0$.
\end{enumerate}
Since by (2) we have $\lambda_i\ge\twonorm{{\boldsymbol{\theta}}}^2>0$ (assuming ${\boldsymbol{\theta}}\neq\vct{0}$), condition (3) reduces to $\twonorm{\bdelta_i} = \varepsilon$. Also from (1), we have
\begin{align}
\bdelta_i &= -\widetilde{y}_i (-{\boldsymbol{\theta}}\bth^\sT + \lambda_i \I)^{-1}{\boldsymbol{\theta}}\nonumber\\
&= -\lambda_i^{-1}\widetilde{y}_i \left(\I + \frac{{\boldsymbol{\theta}}\bth^\sT}{\lambda_i - \twonorm{{\boldsymbol{\theta}}}^2}\right){\boldsymbol{\theta}}\nonumber\\
&= -\lambda_i^{-1}\widetilde{y}_i {\boldsymbol{\theta}}\frac{\lambda_i}{\lambda_i - \twonorm{{\boldsymbol{\theta}}}^2}\nonumber\\
&= -\widetilde{y}_i {\boldsymbol{\theta}}\frac{1}{\lambda_i - \twonorm{{\boldsymbol{\theta}}}^2}\,.\label{eq:delta-i}
\end{align}
Using the fact that $\twonorm{\bdelta_i} = \varepsilon$ in the latter identity we thus conclude that $\lambda_i = (1/\varepsilon) \twonorm{{\boldsymbol{\theta}}}\abs{\widetilde{y}_i} + \twonorm{{\boldsymbol{\theta}}}^2$. Substituting for $\lambda_i$ in~\eqref{eq:delta-i} we obtain
\[\bdelta_i = - \frac{\widetilde{y}_i}{\abs{\widetilde{y}_i}} \frac{{\boldsymbol{\theta}}\varepsilon}{\twonorm{{\boldsymbol{\theta}}} }=-\varepsilon\sgn{ y_i-\<\vct{x}_i,{\boldsymbol{\theta}}\>}\frac{\vct{\theta}}{\twonorm{\vct{\theta}}}\,.\]
Substituting the latter into \eqref{firstoptstep1} we arrive at \eqref{concstep1} to complete our simplification of the loss.
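A small numerical check of this derivation (assuming NumPy; all variable names below are ours): the closed-form perturbation attains the value $\left(|\widetilde{y}_i|+\varepsilon\twonorm{{\boldsymbol{\theta}}}\right)^2$, and no randomly sampled feasible perturbation exceeds it.

```python
import numpy as np

rng = np.random.default_rng(0)
p, eps = 5, 0.3
x = rng.standard_normal(p)
theta = rng.standard_normal(p)
y = 0.7

ytil = y - x @ theta
# optimal adversarial perturbation derived above
delta_star = -eps*np.sign(ytil)*theta/np.linalg.norm(theta)
attained = (y - (x + delta_star) @ theta)**2
closed = (abs(ytil) + eps*np.linalg.norm(theta))**2

# no feasible perturbation should achieve a larger squared error
worst_sampled = 0.0
for _ in range(2000):
    d = rng.standard_normal(p)
    d = eps*rng.random()*d/np.linalg.norm(d)   # random point in the eps-ball
    worst_sampled = max(worst_sampled, (y - (x + d) @ theta)**2)
```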
\subsection{Reduction to an auxiliary optimization problem via CGMT}
\label{step2}
We are interested in characterizing the properties of the optimal parameter ${\widehat{\boldsymbol{\theta}}}^{\varepsilon}$ and thus it shall be convenient to work with a scaled version of the loss \eqref{concstep1}. This scaling of course does not affect the optimal solution ${\widehat{\boldsymbol{\theta}}}^{\varepsilon}$. Thus, henceforth we focus on the following objective
\begin{align}\label{eq:hth-eps2}
{\widehat{\boldsymbol{\theta}}}^{\varepsilon} = \arg\min_{{\boldsymbol{\theta}}\in \mathbb{R}^p} \frac{1}{2p^2} \sum_{i=1}^n \left(\abs{y_i - \<\vct{x}_i,{\boldsymbol{\theta}}\>} + \varepsilon\twonorm{{\boldsymbol{\theta}}}\right)^2\,.
\end{align}
To continue further it is convenient to consider a change of variable of the form $\vct{z}=\frac{1}{\sqrt{p}}(\vct{\theta}-\vct{\theta}_0)$ and note that
\begin{align*}
y_i-\langle\vct{x}_i,\vct{\theta}\rangle=w_i+\langle\vct{x}_i,\vct{\theta}_0-\vct{\theta}\rangle=w_i-\sqrt{p}\langle\vct{x}_i,\vct{z}\rangle.
\end{align*}
Define $\ell(v;\vct{\theta}) := \frac{1}{2}\left(\abs{v}+\varepsilon\twonorm{\vct{\theta}}\right)^2$ and note that with this change of variable we have that $\widehat{\vct{z}}^{\varepsilon}=\frac{1}{\sqrt{p}}({\widehat{\boldsymbol{\theta}}}^{\varepsilon}-\vct{\theta}_0)$ is given by
\begin{align*}
\widehat{\vct{z}}^{\varepsilon} = \arg\min_{{\vct{z}}} \frac{1}{p^2}\sum_{i=1}^n \ell \left(w_i - \sqrt{p}\<\vct{x}_i,{\vct{z}}\>;\vct{\theta}_0+\sqrt{p}\vct{z}\right).
\end{align*}
Equivalently we can rewrite this optimization problem in the form
\begin{align}
\label{tmp11}
\min_{{\vct{z}}\in\mathbb{R}^p,\vct{v}\in\mathbb{R}^n} \frac{1}{p^2}\sum_{i=1}^n \ell\left(\sqrt{p}v_i;\vct{\theta}_0+\sqrt{p}\vct{z}\right)\quad\text{subject to}\quad\sqrt{p}\vct{v} = \vct{w}- \sqrt{p}\boldsymbol{X} {\vct{z}}.
\end{align}
We note that the scaling of $\vct{v}$ is arbitrary but serves the purpose of simplifying the exposition later on. The loss above is still rather complicated and it is unclear how to study and characterize the properties of its optimal solution in an asymptotic regime where the size of the training data and the number of parameters grow in proportion with each other. To study this loss in an asymptotic fashion we first cast it as a different mini-max optimization using duality. In particular by associating a dual variable $\frac{\vct{u}}{p}$ with the equality constraint, we obtain
\begin{align}
\label{lin}
&\min_{{\vct{z}}\in\mathbb{R}^p,\vct{v}\in\mathbb{R}^n} \max_{\vct{u}\in\mathbb{R}^n}\; \frac{1}{p}\Big\{\vct{u}^T(\sqrt{p}\boldsymbol{X}) \vct{z} - \vct{u}^T \vct{w} +\sqrt{p}\vct{u}^T \vct{v}\Big\}+\frac{1}{p^2}\sum_{i=1}^n \ell\left(\sqrt{p}v_i;\vct{\theta}_0+\sqrt{p}\vct{z}\right)\nonumber\\
&\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad= \min_{{\vct{z}}\in\mathbb{R}^p,\vct{v}\in\mathbb{R}^n} \max_{\vct{u}\in\mathbb{R}^n}\; \frac{1}{p}\Big\{\vct{u}^T(\sqrt{p}\boldsymbol{X}) \vct{z} - \vct{u}^T \vct{w} +\sqrt{p}\vct{u}^T \vct{v}\Big\}+\ell(\vct{v};\vct{z})
\end{align}
where
\begin{align*}
\ell(\vct{v};\vct{z}):=\frac{1}{p^2}\sum_{i=1}^n \ell\left(\sqrt{p}v_i;\vct{\theta}_0+\sqrt{p}\vct{z}\right)= \frac{1}{2p}\left(\twonorm{\vct{v}}^2+2\frac{\varepsilon}{\sqrt{p}}\onenorm{\vct{v}}\twonorm{\vct{\theta}_0+\sqrt{p}\vct{z}}+\frac{\varepsilon^2}{p}\twonorm{\vct{\theta}_0+\sqrt{p}\vct{z}}^2\right)
\end{align*}
At first this may be counter-intuitive as we started by simplifying a different mini-max optimization problem and now we are again introducing a new maximization! The main advantage of this new form is that \eqref{lin} is in fact affine in the data matrix $\boldsymbol{X}$. This particular form allows us to use a powerful extension of a classical Gaussian process inequality due to Gordon \cite{gordon1988milman} known as the Convex Gaussian Minimax Theorem (CGMT) \cite{thrampoulidis2015regularized} which focuses on characterizing the asymptotic behavior of mini-max optimization problems that are affine in a Gaussian matrix $\boldsymbol{X}$. Formally, the CGMT framework shows that a problem of the form
\begin{align}
\label{generalPO}
\min_{\vct{z}\in\mathcal{S}_{\vct{z}}}\text{ }\max_{\vct{u}\in\mathcal{S}_{\vct{u}}}\quad \vct{u}^T\mtx{X}\vct{z}+\psi(\vct{z},\vct{u})
\end{align}
with $\mtx{X}$ a matrix with $\mathcal{N}(0,1)$ entries can be replaced asymptotically with
\begin{align}
\label{generalAO}
\min_{\vct{z}\in\mathcal{S}_{\vct{z}}}\text{ }\max_{\vct{u}\in\mathcal{S}_{\vct{u}}}\quad\twonorm{\vct{z}}\vct{g}^T\vct{u}+\twonorm{\vct{u}}\vct{h}^T\vct{z}+\psi(\vct{z},\vct{u})
\end{align}
where $\vct{g}$ and $\vct{h}$ are independent Gaussian vectors with i.i.d.~$\mathcal{N}(0,1)$ entries and $\psi(\vct{z},\vct{u})$ is convex in $\vct{z}$ and concave in $\vct{u}$. In the above $\mathcal{S}_{\vct{z}}$ and $\mathcal{S}_{\vct{u}}$ are compact sets. We refer to \cite[Theorem 3]{thrampoulidis2015regularized} for precise statements. Following \cite{thrampoulidis2015regularized} we shall refer to problems of the form \eqref{generalPO} and \eqref{generalAO} as the Primal Problem (PO) and the Auxiliary Problem (AO).
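A classical special case illustrates the PO/AO correspondence: with $\psi=0$, $\mathcal{S}_{\vct{z}}$ the unit sphere and $\mathcal{S}_{\vct{u}}$ the unit ball, the PO value is the smallest singular value of the Gaussian matrix while the AO value is $\twonorm{\vct{g}}-\twonorm{\vct{h}}$; both concentrate around $\sqrt{n}-\sqrt{p}$. The following sketch (a numerical illustration, not a proof of the CGMT; all names are ours) makes this concrete.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, trials = 1000, 250, 5
po_vals, ao_vals = [], []
for _ in range(trials):
    X = rng.standard_normal((n, p))
    # PO value with psi = 0, z on the unit sphere, u in the unit ball:
    # min_z max_u u^T X z = sigma_min(X)
    po_vals.append(np.linalg.svd(X, compute_uv=False)[-1])
    g = rng.standard_normal(n)
    h = rng.standard_normal(p)
    # AO value for the same sets: min_z max_u (||z|| g^T u + ||u|| h^T z) = ||g|| - ||h||
    ao_vals.append(np.linalg.norm(g) - np.linalg.norm(h))
po, ao = float(np.mean(po_vals)), float(np.mean(ao_vals))
gordon = np.sqrt(n) - np.sqrt(p)   # both values concentrate around this quantity
```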
As evident from the above, applying the CGMT requires the minimization/maximization to be over compact sets. To avoid this technical issue one can introduce ``artificial" boundedness constraints that do not change the optimal solution. More specifically, we can add constraints of the form $\mathcal{S}_{{\vct{z}}} = \{{\vct{z}}|\;\; \twonorm{\vct{z}}\le K_\alpha\}$ and $\mathcal{S}_{\vct{u}} = \{\vct{u}:\,\twonorm{\vct{u}}\le K_\beta \}$ for sufficiently large constants $K_\alpha$ and $K_\beta$ without changing the optimal solution of \eqref{lin} in a precise asymptotic sense. See Appendix \ref{setres} for precise statements and proofs. This allows us to replace \eqref{lin} with
\begin{align}
\label{linmod}
\min_{{\vct{z}}\in\mathcal{S}_{\vct{z}},\vct{v}\in\mathbb{R}^n} \max_{\vct{u}\in\mathcal{S}_{\vct{u}}}\;\;\; \frac{1}{\sqrt{p}}\vct{u}^T\boldsymbol{X} \vct{z} - \frac{1}{\sqrt{p}}\vct{u}^T \vct{\omega} +\frac{1}{\sqrt{p}}\vct{u}^T \vct{v}+\ell(\vct{v};\vct{z})
\end{align}
where $\vct{\omega}=\frac{\vct{w}}{\sqrt{p}}\in\mathbb{R}^n$ is a Gaussian vector with i.i.d.~$\mathcal{N}(0,\sigma^2)$ entries.
With these compact constraints in place we can now apply the CGMT result. To this aim note that this optimization is in the desired form of a Primary Optimization (PO): it has a bilinear term $\vct{u}^T\boldsymbol{X} {\vct{z}}$ plus a function
\begin{align*}
\psi({\vct{z}},\vct{u}) =\min_{\vct{v}\in\mathbb{R}^n}\frac{1}{\sqrt{p}} \left(-\vct{u}^T\vct{\omega}+\vct{u}^T\vct{v}\right) + \ell(\vct{v};\vct{z})
\end{align*}
which is convex in $\vct{z}$\footnote{Note that prior to the minimization over $\vct{v}$ the problem is trivially jointly convex in $(\vct{z},\vct{v})$ and partial minimization preserves convexity.} and concave in $\vct{u}$. The corresponding Auxiliary Optimization (AO) thus takes the form
\begin{align}
\label{finalAO}
&\min_{\vct{z}\in\mathcal{S}_{{\vct{z}}}} \max_{\vct{u}\in\mathcal{S}_{\vct{u}}}\; \frac{1}{\sqrt{p}}\left(\twonorm{\vct{z}} \vct{g}^T\vct{u} + \twonorm{\vct{u}} \vct{h}^T\vct{z}\right)+\min_{\vct{v}\in\mathbb{R}^n}\frac{1}{\sqrt{p}}\left( - \vct{u}^T \vct{\omega} +\vct{u}^T \vct{v}\right)+\ell(\vct{v};\vct{z})\nonumber\\
&\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad=\min_{\vct{z}\in\mathcal{S}_{{\vct{z}}}, \vct{v}} \max_{\vct{u}\in\mathcal{S}_{\vct{u}}}\; \frac{1}{\sqrt{p}}\left(\twonorm{\vct{z}} \vct{g}^T\vct{u} + \twonorm{\vct{u}} \vct{h}^T\vct{z}- \vct{u}^T \vct{\omega} +\vct{u}^T \vct{v}\right)+\ell(\vct{v};\vct{z}).
\end{align}
This completes the derivation of the AO.
\subsection{Scalarization of the auxiliary optimization problem}
\label{step3}
In this section we continue our proof by significantly simplifying the AO problem. In particular we show that the behavior of the AO, and hence the PO, can be completely characterized by \eqref{eq0192}. This is arguably the most intricate part of our proofs.
We begin simplifying the AO by maximizing over $\vct{u}$. To this aim we decompose the optimization problem over $\mathcal{S}_{\vct{u}}$ in terms of its direction and radius. Specifically, $\vct{u}=\beta\widetilde{\vct{u}}$ with $\widetilde{\vct{u}}\in\mathbb{S}^{n-1}$ and $0\le \beta\le K_\beta$. Using this decomposition we have
\begin{align*}
&\max_{\vct{u}\in\mathcal{S}_{\vct{u}}}\; \frac{1}{\sqrt{p}}\left(\twonorm{\vct{z}} \vct{g}^T\vct{u} + \twonorm{\vct{u}} \vct{h}^T\vct{z}- \vct{u}^T \vct{\omega} +\vct{u}^T \vct{v}\right)\\
&\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad= \max_{0\le \beta\le K_\beta}\text{ }\max_{\widetilde{\vct{u}}\in\mathbb{S}^{n-1}}\; \frac{\beta}{\sqrt{p}}\left(\twonorm{\vct{z}} \vct{g}^T\widetilde{\vct{u}} + \vct{h}^T\vct{z}- \widetilde{\vct{u}}^T \vct{\omega} +\widetilde{\vct{u}}^T \vct{v}\right)\\
&\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad= \max_{0\le \beta\le K_\beta}\text{ }\max_{\widetilde{\vct{u}}\in\mathbb{S}^{n-1}}\; \frac{\beta}{\sqrt{p}}\widetilde{\vct{u}}^T\left(\twonorm{\vct{z}} \vct{g} - \vct{\omega} + \vct{v}\right)+\frac{\beta}{\sqrt{p}} \vct{h}^T\vct{z}\\
&\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad= \max_{0\le \beta\le K_\beta}\text{ } \frac{\beta}{\sqrt{p}}\twonorm{\twonorm{\vct{z}} \vct{g} - \vct{\omega} + \vct{v}}+\frac{\beta}{\sqrt{p}} \vct{h}^T\vct{z}.
\end{align*}
Plugging the latter into \eqref{finalAO} the AO reduces to
\begin{align*}
\min_{\vct{z}\in\mathcal{S}_{{\vct{z}}}, \vct{v}} \max_{0\le \beta\le K_\beta}\text{ } \frac{\beta}{\sqrt{p}}\twonorm{\twonorm{\vct{z}} \vct{g} - \vct{\omega} + \vct{v}}+\frac{\beta}{\sqrt{p}} \vct{h}^T\vct{z}+\ell(\vct{v};\vct{z}).
\end{align*}
We hope to eventually simplify the minimization over $\vct{v}$ and $\vct{z}$ as well. To make this minimization easier in our later calculations we proceed by writing $ \ell(\vct{v};\vct{z})$ in terms of its conjugate with respect to $\vct{z}$. That is,
\begin{align*}
\ell(\vct{v};\vct{z})=\sup_{\vct{q}} \vct{q}^T\vct{z}-\widetilde{\ell}(\vct{v};\vct{q})
\end{align*}
where $ \widetilde{\ell}(\vct{v};\vct{q})$ is the conjugate of $\ell$ with respect to $\vct{z}$. The logic behind this is that the AO will then simplify to
\begin{align}
\label{simpAO}
\min_{\vct{z}\in\mathcal{S}_{\vct{z}},\vct{v}}\max_{0\le\beta\le K_\beta, \vct{q}}\;\; \frac{\beta}{\sqrt{p}} \twonorm{\twonorm{\vct{z}}\vct{g}-\vct{\omega}+\vct{v}} +\frac{\beta}{\sqrt{p}} \vct{h}^T \vct{z} +\vct{q}^T\vct{z}-\widetilde{\ell}(\vct{v};\vct{q}).
\end{align}
To proceed it would be convenient to flip the order of minimum and maximum in the above. However, for this to be allowed the mini-max problem typically has to be convex/concave in the min/max parameters (e.g.~via the celebrated Sion's min-max Theorem \cite{sion1958general}). It is not clear that the above objective has this form, so flipping the order of the min and max is not immediately justified. However, since the original PO problem is convex/concave in the min/max parameters one can justify such a flip in the AO based on the PO. We note that this is justified for asymptotic calculations and refer to \cite[Appendix A.2.4]{thrampoulidis2015precise} for precise details on this derivation. Thus, we will instead consider the following problem as the (AO), which is asymptotically equivalent to \eqref{simpAO}
\[
\max_{0\le\beta\le K_\beta, \vct{q}} \min_{\vct{z}\in\mathcal{S}_{\vct{z}},\vct{v}}\;\; \frac{\beta}{\sqrt{p}} \twonorm{\twonorm{\vct{z}}\vct{g}-\vct{\omega}+\vct{v}} +\frac{\beta}{\sqrt{p}} \vct{h}^T \vct{z} +\vct{q}^T\vct{z}-\widetilde{\ell}(\vct{v};\vct{q}).
\]
To simplify further we now optimize over the direction and norm of $\vct{z}$ ($\twonorm{\vct{z}}=\alpha$) to arrive at
\begin{align}
\label{usethm4}
\max_{0\le\beta\le K_\beta, \vct{q}} \min_{0\le \alpha\le K_\alpha,\vct{v}}\;\; \frac{\beta}{\sqrt{p}} \twonorm{\alpha\vct{g}-\vct{\omega}+\vct{v}} -\alpha\twonorm{\frac{\beta}{\sqrt{p}} \vct{h} +\vct{q}}-\widetilde{\ell}(\vct{v};\vct{q}).
\end{align}
Next note that $-\widetilde{\ell}(\vct{v};\vct{q})$ is convex in $\vct{v}$. To see this first note that
\begin{align*}
\widetilde{\ell}(\vct{v};\vct{q})=\sup_{\vct{z}} \vct{q}^T\vct{z}-\ell(\vct{v};\vct{z}).
\end{align*}
Also, since $\ell$ is jointly convex in $(\vct{v},\vct{z})$, the function $-\ell(\vct{v};\vct{z})$ is jointly concave in $(\vct{v},\vct{z})$, and so is the linear term $\vct{q}^T\vct{z}$. Therefore, $\vct{q}^T\vct{z}-\ell(\vct{v};\vct{z})$ is jointly concave in $(\vct{v},\vct{z})$ and by the partial maximization rule we conclude that $\widetilde{\ell}(\vct{v};\vct{q})$ is concave in $\vct{v}$, which in turn implies that $-\widetilde{\ell}(\vct{v};\vct{q})$ is convex in $\vct{v}$. The other terms are also trivially jointly convex in $(\alpha, \vct{v})$ so that overall the objective is jointly convex in $(\alpha, \vct{v})$. The objective above is also trivially jointly concave in $(\beta, \vct{q})$. Thus, based on Sion's min-max Theorem \cite{sion1958general} we can change the order of the mins and maxes as we please. This allows us to reorder $\max_{\vct{q}}$ and $\min_{\alpha, \bv}$ to arrive at
\begin{align}
\max_{0\le\beta\le K_\beta} \min_{0\le \alpha\le K_\alpha,\vct{v}}\max_{\vct{q}} \;\; \frac{\beta}{\sqrt{p}} \twonorm{\alpha\vct{g}-\vct{\omega}+\vct{v}} -\alpha\twonorm{\frac{\beta}{\sqrt{p}} \vct{h} +\vct{q}}-\widetilde{\ell}(\vct{v};\vct{q})
\end{align}
To proceed, we first compute $\widetilde{\ell}(\vct{v};\vct{q})$ in the Lemma below with the proof deferred to Appendix \ref{conjlemmapf}.
\begin{lemma}\label{conjlemma} The conjugate of
\begin{align*}
\ell(\vct{v};\vct{z}):=\frac{1}{2p}\left(\twonorm{\vct{v}}^2+2\frac{\varepsilon}{\sqrt{p}}\onenorm{\vct{v}}\twonorm{\vct{\theta}_0+\sqrt{p}\vct{z}}+\frac{\varepsilon^2}{p}\twonorm{\vct{\theta}_0+\sqrt{p}\vct{z}}^2\right)
\end{align*}
with respect to the variable $\vct{z}$ is given by
\begin{align*}
\widetilde{\ell}(\vct{v};\vct{q}):=\sup_{\vct{z}} \vct{q}^T\vct{z}-\ell(\vct{v};\vct{z})=-\frac{1}{\sqrt{p}}\vct{q}^T\vct{\theta}_0+\frac{1}{2\delta p^2}\left(\frac{p}{\varepsilon}\tn{\vct{q}}-\onenorm{\vct{v}}\right)_+^2 -\frac{1}{2p}\twonorm{\bv}^2.
\end{align*}
\end{lemma}
Using this characterization of $\widetilde{\ell}(\bv;\vct{q})$ we arrive at the following representation of the AO problem
\begin{align}
\label{conjtemp}
\min_{0\le \alpha\le K_\alpha,\vct{v}}\max_{0\le\beta\le K_\beta}\max_{\vct{q}}\;\; &\frac{\beta}{\sqrt{p}} \twonorm{\alpha\vct{g}-\vct{\omega}+\vct{v}} -\alpha\twonorm{\frac{\beta}{\sqrt{p}} \vct{h} +\vct{q}}+ \frac{1}{\sqrt{p}}\vct{q}^T\vct{\theta}_0\nonumber\\
&\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad-\frac{1}{2\delta p^2}\left(\frac{p}{\varepsilon}\tn{\vct{q}}-\onenorm{\vct{v}}\right)_+^2 + \frac{1}{2p}\twonorm{\bv}^2
\end{align}
To simplify further we next focus on the maximization over $\vct{q}$ or equivalently the following minimization problem
\begin{align*}
&\min_{\vct{q}}\quad \alpha\twonorm{\frac{\beta}{\sqrt{p}} \vct{h} +\vct{q}} +\frac{1}{2\delta p^2}\left(\frac{p}{\varepsilon}\tn{\vct{q}}-\onenorm{\vct{v}}\right)_+^2 - \frac{1}{\sqrt{p}}\vct{q}^T\vct{\theta}_0\\
&\quad=\min_{\vct{q}}\text{ }\inf_{\tau_h\ge 0}\text{ } \frac{\alpha}{2\tau_h}\twonorm{\frac{\beta}{\sqrt{p}} \vct{h} +\vct{q}}^2 +\frac{\alpha\tau_h}{2}+
\frac{1}{2\delta p^2}\left(\frac{p}{\varepsilon}\tn{\vct{q}}-\onenorm{\vct{v}}\right)_+^2 - \frac{1}{\sqrt{p}}\vct{q}^T\vct{\theta}_0\\
&\quad=\min_{\vct{q}}\text{ }\inf_{\tau_h\ge 0}\text{ } \frac{\alpha}{2\tau_h}\tn{\vct{q}}^2+\frac{\alpha\beta^2}{2p\tau_h}\tn{\vct{h}}^2+\frac{\alpha\beta}{\tau_h\sqrt{p}}\vct{h}^T\vct{q} +\frac{\alpha\tau_h}{2}
+\frac{1}{2\delta p^2}\left(\frac{p}{\varepsilon}\tn{\vct{q}}-\onenorm{\vct{v}}\right)_+^2 - \frac{1}{\sqrt{p}}\vct{q}^T\vct{\theta}_0
\end{align*}
In the second step above we used the identity $\twonorm{\vct{x}}=\inf_{\tau_h>0}\frac{\twonorm{\vct{x}}^2}{2\tau_h}+\frac{\tau_h}{2}$ (attained at $\tau_h=\twonorm{\vct{x}}$). The final expression is a linear function of $\vct{q}$ plus terms depending only on $\twonorm{\vct{q}}$. So, fixing $\twonorm{\vct{q}} = \gamma\ge 0$, the optimal $\vct{q}$ is given by $\vct{q} =- \gamma \frac{\frac{\alpha\beta}{\tau_h}\vct{h}-\vct{\theta_0}}{\twonorm{\frac{\alpha\beta}{\tau_h}\vct{h}-\vct{\theta_0}}}$, which simplifies the above to
\begin{align*}
\inf_{\tau_h,\gamma\ge0}\quad \frac{\alpha}{2\tau_h}\gamma^2+\frac{\alpha\beta^2}{2p\tau_h}\tn{\vct{h}}^2
- \frac{\gamma}{\sqrt{p}} \twonorm{\frac{\alpha\beta}{\tau_h}\vct{h}-\vct{\theta_0}}
+\frac{\alpha\tau_h}{2}
+\frac{1}{2\delta p^2}\left(\frac{p}{\varepsilon}\gamma-\onenorm{\vct{v}}\right)_+^2
\end{align*}
Plugging the latter into \eqref{conjtemp} the AO reduces to
\begin{align}\label{conj2}
\min_{0\le \alpha\le K_\alpha,\vct{v}} \max_{0\le\beta\le K_\beta}\sup_{\gamma,\tau_h\ge0}\;\; &\frac{\beta}{\sqrt{p}} \twonorm{\alpha\vct{g}-\vct{\omega}+\vct{v}}+\frac{1}{2p}\twonorm{\vct{v}}^2\nonumber\\
&-\frac{\alpha}{2\tau_h}\gamma^2-\frac{\alpha\beta^2}{2p\tau_h}\tn{\vct{h}}^2+ \frac{\gamma}{\sqrt{p}} \twonorm{\frac{\alpha\beta}{\tau_h}\vct{h}-\vct{\theta_0}} -\frac{\alpha\tau_h}{2}
-\frac{1}{2\delta p^2}\left(\frac{p}{\varepsilon}\gamma-\onenorm{\vct{v}}\right)_+^2
\end{align}
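The derivation above relies on two elementary facts: the variational identity $\twonorm{\vct{x}}=\inf_{\tau_h>0}\frac{\twonorm{\vct{x}}^2}{2\tau_h}+\frac{\tau_h}{2}$ (attained at $\tau_h=\twonorm{\vct{x}}$) and the explicit minimizer of a linear functional over a sphere. Both can be verified numerically (assuming NumPy/SciPy; the variable names below are ours, with $\vct{c}$ playing the role of $\frac{\alpha\beta}{\tau_h}\vct{h}-\vct{\theta}_0$).

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
x = rng.standard_normal(7)
nx = np.linalg.norm(x)
# square-root trick: ||x|| = inf_{tau > 0} ||x||^2/(2 tau) + tau/2, attained at tau = ||x||
res = minimize_scalar(lambda t: nx**2/(2*t) + t/2, bounds=(1e-6, 100.0),
                      method='bounded', options={'xatol': 1e-10})

c = rng.standard_normal(7)      # plays the role of (alpha*beta/tau_h) h - theta_0
gamma = 1.7
q_star = -gamma*c/np.linalg.norm(c)
lin_min = c @ q_star            # minimum of c^T q over ||q|| = gamma is -gamma*||c||
```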
To continue we state a lemma with the proof deferred to Appendix \ref{cvxconcavelempf}
\begin{lemma}
\label{cvxconcavelem}
The function
\begin{align*}
f(\gamma,\beta,\tau_h):=\gamma^2+\frac{\beta^2}{p}\tn{\vct{h}}^2- 2\frac{\gamma}{\sqrt{p}} \twonorm{\beta\vct{h}-\frac{\vct{\theta_0}}{\alpha}}
\end{align*}
is jointly convex in the parameters $(\gamma,\beta,\tau_h)$.
\end{lemma}
Using this lemma, we conclude that the objective in \eqref{conj2} is jointly concave in $(\gamma,\beta,\tau_h)$. Also note that $\widetilde{\ell}$ is concave in $\vct{v}$, and hence $-\widetilde{\ell}$ is convex in $\vct{v}$. This implies that the objective in \eqref{conjtemp} is jointly convex in $(\alpha,\vct{v})$. Since maximization (with respect to the direction of $\vct{q}$) preserves convexity, the objective in \eqref{conj2} is also jointly convex in $(\alpha,\vct{v})$. Therefore, we can flip the order of min and max in \eqref{conj2} (again using Sion's min-max theorem) to arrive at
\begin{align}
\label{eq09}
\max_{0\le\beta\le K_\beta}\sup_{\gamma,\tau_h\ge0}\;\;\min_{0\le \alpha\le K_\alpha}\;\;\min_{\vct{v}}\;\; &\frac{\beta}{\sqrt{p}} \twonorm{\alpha\vct{g}-\vct{\omega}+\vct{v}}+\frac{1}{2p}\twonorm{\vct{v}}^2\nonumber\\
&-\frac{\alpha}{2\tau_h}\gamma^2-\frac{\alpha\beta^2}{2p\tau_h}\tn{\vct{h}}^2+ \frac{\gamma}{\sqrt{p}} \twonorm{\frac{\alpha\beta}{\tau_h}\vct{h}-\vct{\theta_0}} -\frac{\alpha\tau_h}{2}
-\frac{1}{2\delta p^2}\left(\frac{p}{\varepsilon}\gamma-\onenorm{\vct{v}}\right)_+^2
\end{align}
We now focus on the minimization over $\bv$. To this aim, note that
\begin{align}
\label{eq10}
\min_{\vct{v}}\;\; &\frac{\beta}{\sqrt{p}} \twonorm{\alpha\vct{g}-\vct{\omega}+\vct{v}}+\frac{1}{2p}\twonorm{\vct{v}}^2 -\frac{1}{2\delta p^2}\left(\frac{p}{\varepsilon}\gamma-\onenorm{\vct{v}}\right)_+^2\nonumber\\
=\min_{\tau_g\ge0,\vct{v}}\;\; &\frac{\beta}{2\tau_g p} \twonorm{\alpha\vct{g}-\vct{\omega}+\vct{v}}^2 +\frac{\beta\tau_g}{2} +\frac{1}{2p}\twonorm{\vct{v}}^2 -\frac{1}{2\delta p^2}\left(\frac{p}{\varepsilon}\gamma-\onenorm{\vct{v}}\right)_+^2
\end{align}
Recall the definition of the Moreau envelope function of a function $f$ at a point $\vct{x}$ with parameter $\mu$,
\[
e_f(\vct{x};\mu)\equiv \min_{\bv} \frac{1}{2\mu} \twonorm{\vct{x}-\bv}^2+ f(\bv)\,,
\]
and define
\begin{align}\label{eq:f}
f(\bv;\gamma)\equiv \frac{1}{2} \twonorm{\bv}^2 -\frac{1}{2\delta p} (\frac{p}{\varepsilon}\gamma-\onenorm{\bv})_+^2\,.
\end{align}
Note that $f(\bv;\gamma)$ is convex in $\bv$ (since $-\widetilde{\ell}(\bv;\vct{q})$ was convex in $\bv$). Thus, \eqref{eq10} can be rewritten in the more compact form
\begin{align}\label{eq:dum0}
\min_{\tau_g\ge0}\;\; &\frac{1}{p} e_f\left(\vct{\omega}-\alpha\vct{g};\frac{\tau_g}{\beta}\right) + \frac{\beta\tau_g}{2}
\end{align}
In our next lemma we compute $e_f$. We defer the proof to Appendix \ref{meenvpf}.
\begin{lemma}\label{meenv}
Consider the function $f$ given by~\eqref{eq:f}. Then,
\begin{align*}
e_f(\vct{x};\mu) &= \frac{1}{2(\mu+1)}\twonorm{\vct{x}}^2 +
\min_{\tau\ge 0} G(\vct{x};\mu,\tau)
\end{align*}
where
\begin{align*}
&G(\vct{x};\mu,\tau) = \frac{1}{2\mu(\mu+1)}\twonorm{\vct{x}-{\sf ST}(\vct{x};\tau)}^2-\frac{1}{2n}\left(\frac{p}{\varepsilon}\gamma-\frac{1}{1+\mu}\onenorm{{\sf ST}(\vct{x};\tau)}\right)_+^2.
\end{align*}
Furthermore, $e_f(\vct{x};\mu)$ is strictly convex in $\vct{x}$.
\end{lemma}
Plugging Lemma \ref{meenv} into \eqref{eq:dum0} (note that $f$ is even, so $e_f(\vct{\omega}-\alpha\vct{g};\mu)=e_f(\alpha\vct{g}-\vct{\omega};\mu)$), we have
\begin{align*}
&\frac{1}{p} e_f\left(\alpha\vct{g}-\vct{\omega};\frac{\tau_g}{\beta}\right)+ \frac{\beta\tau_g}{2}
=\frac{\beta}{2(\tau_g+\beta)} \frac{1}{p}\twonorm{\alpha\vct{g}-\vct{\omega}}^2 + \min_{\tau\ge0} \frac{1}{p} G\left(\alpha\vct{g}-\vct{\omega};\frac{\tau_g}{\beta},\tau\right)+\frac{\beta\tau_g}{2}
\end{align*}
Plugging this in \eqref{eq09} the AO problem reduces to
\begin{align}
\label{6some}
\max_{0\le\beta\le K_\beta}\sup_{\gamma,\tau_h\ge0}\;\;\min_{0\le \alpha\le K_\alpha}\;\;\min_{\tau_g\ge 0}\;\;\min_{\tau\ge 0}\;\; \quad&\frac{\beta}{2(\tau_g+\beta)} \frac{1}{p}\twonorm{\alpha\vct{g}-\vct{\omega}}^2 + \frac{1}{p} G\left(\alpha\vct{g}-\vct{\omega};\frac{\tau_g}{\beta},\tau\right)+\frac{\beta\tau_g}{2}\nonumber\\
&-\frac{\alpha}{2\tau_h}\gamma^2-\frac{\alpha\beta^2}{2p\tau_h}\tn{\vct{h}}^2+ \frac{\gamma}{\sqrt{p}} \twonorm{\frac{\alpha\beta}{\tau_h}\vct{h}-\vct{\theta_0}} -\frac{\alpha\tau_h}{2}
\end{align}
Since the problem \eqref{eq10} is jointly convex in $(\vct{v},\alpha,\tau_g)$, the problem \eqref{eq09} is jointly concave in $(\beta,\gamma,\tau_h)$, and partial minimization preserves convexity, we conclude that the objective is jointly convex in $(\alpha,\tau_g)$ and jointly concave in $(\beta,\gamma,\tau_h)$ (after the minimization over $\tau\ge 0$ has been carried out).
Note that in the asymptotic regime
\begin{align*}
\frac{\twonorm{\vct{h}}^2}{p} \rightarrow 1\quad\text{and}\quad \frac{\twonorm{\alpha\vct{g}-\vct{\omega}}^2 }{n}\rightarrow \left(\alpha^2+\sigma^2\right)
\end{align*}
Also, using concentration of Lipschitz functions of Gaussian vectors, we have
\begin{align*}
\frac{1}{\sqrt{p}} \twonorm{\frac{\alpha\beta}{\tau_h}\vct{h}-\vct{\theta_0}}\rightarrow \frac{1}{\sqrt{p}}\sqrt{\operatorname{\mathbb{E}}\bigg[\twonorm{\frac{\alpha\beta}{\tau_h}\vct{h}-\vct{\theta_0}}^2\bigg]}\rightarrow\sqrt{\frac{\alpha^2\beta^2}{\tau_h^2}+V^2}.
\end{align*}
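Both displayed limits can be checked by a quick simulation (a sketch with illustrative dimensions and constants, not from the text; $c$ stands in for $\alpha\beta/\tau_h$, and $\vct{\theta}_0$ is any fixed vector with $\twonorm{\vct{\theta}_0}/\sqrt{p}=V$):

```python
import numpy as np

rng = np.random.default_rng(0)
p = 200_000
c, V = 1.3, 0.8                      # c plays the role of alpha*beta/tau_h (illustrative)
theta0 = np.full(p, V)               # fixed vector with ||theta0|| / sqrt(p) = V
h = rng.standard_normal(p)

norm_h_sq = np.dot(h, h) / p                              # should concentrate around 1
empirical = np.linalg.norm(c * h - theta0) / np.sqrt(p)   # the Lipschitz functional
predicted = np.sqrt(c**2 + V**2)                          # its predicted limit
```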
Plugging all of the above in \eqref{6some} we arrive at
\begin{align}
\label{eq018}
\max_{0\le\beta\le K_\beta}\sup_{\gamma,\tau_h\ge0}\;\;\min_{0\le \alpha\le K_\alpha}\;\;\min_{\tau_g\ge 0}\;\;\min_{\tau\ge 0}\;\; \quad&\frac{\delta\beta}{2(\tau_g+\beta)} \left(\alpha^2+\sigma^2\right) + \frac{1}{p} G\left(\alpha\vct{g}-\vct{\omega};\frac{\tau_g}{\beta},\tau\right)+\frac{\beta\tau_g}{2}\nonumber\\
&-\frac{\alpha}{2\tau_h}\gamma^2-\frac{\alpha\beta^2}{2\tau_h}+ \gamma\sqrt{\frac{\alpha^2\beta^2}{\tau_h^2}+V^2} -\frac{\alpha\tau_h}{2}
\end{align}
To simplify further, we also need an asymptotic characterization of $\frac{1}{p} G\left(\alpha\vct{g}-\vct{\omega};\frac{\tau_g}{\beta},\tau\right)$. To this aim, we prove the following lemma, whose proof is deferred to Appendix \ref{Glempf}.
\begin{lemma}\label{Glem} Let $\vct{w}\in\mathbb{R}^n$ be a Gaussian random vector distributed as $\mathcal{N}(\vct{0},\omega^2\mtx{I}_n)$. Also define
\begin{align*}
G(\vct{w};\mu,\tau) := \frac{1}{2\mu(\mu+1)}\twonorm{\vct{w}-{\sf ST}(\vct{w};\tau)}^2-\frac{1}{2n}\left(\frac{p}{\varepsilon}\gamma-\frac{1}{1+\mu}\onenorm{{\sf ST}(\vct{w};\tau)}\right)_+^2.
\end{align*}
Then
\begin{align*}
\underset{n \rightarrow \infty}{\lim}\text{ }\frac{1}{n}G(\vct{w};\mu,\tau)=&\frac{\omega^2}{2\mu(\mu+1)}\left(\left(1-\sqrt{\frac{2}{\pi}}\frac{\tau}{\omega} e^{-\frac{\tau^2}{2\omega^2}}\right)+\left(\frac{\tau^2}{\omega^2}-1\right){\rm erfc}\left(\frac{1}{\sqrt{2}}\frac{\tau}{\omega}\right)\right)\\
&-\frac{\omega^2}{2(\mu+1)^2}\left(\frac{\gamma(\mu+1)}{\delta\varepsilon\omega}+\frac{\tau}{\omega}\cdot{\rm erfc}\left(\frac{1}{\sqrt{2}}\frac{\tau}{\omega}\right)-\sqrt{\frac{2}{\pi}} e^{-\frac{\tau^2}{2\omega^2}}\right)_+^2.
\end{align*}
Furthermore,
\begin{align*}
\underset{\tau\ge 0}{\min}\text{ }\underset{n \rightarrow \infty}{\lim}\text{ }&\frac{1}{n}G(\vct{w};\mu,\tau)\\
&\quad\quad=
\begin{cases}
0 \quad &\text{ if }\gamma(\mu+1)\le \sqrt{\frac{2}{\pi}}\delta \varepsilon\omega\\
\frac{\omega^2}{2\mu(\mu+1)}\left({\rm erf}\left(\frac{\tau^*\left(\frac{\gamma(\mu+1)}{\delta\varepsilon\omega},\mu\right)}{\sqrt{2}}\right)-\frac{\gamma(\mu+1)}{\delta\varepsilon\omega}\tau^*\left(\frac{\gamma(\mu+1)}{\delta\varepsilon\omega},\mu\right)\right)&\text{ if }\gamma(\mu+1)>\sqrt{\frac{2}{\pi}}\delta \varepsilon \omega
\end{cases}
\end{align*}
where $\tau^*(a,\mu)$ is the unique solution to
\begin{align*}
a-\frac{1}{\mu}\tau-\tau\cdot{\rm erf}\left(\frac{\tau}{\sqrt{2}}\right)-\sqrt{\frac{2}{\pi}} e^{-\frac{\tau^2}{2}}=0
\end{align*}
\end{lemma}
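The first term in the limit of Lemma~\ref{Glem} admits a quick Monte Carlo sanity check: since $\vct{w}-{\sf ST}(\vct{w};\tau)$ clips each coordinate of $\vct{w}$ at $\pm\tau$, the quantity $\frac{1}{n}\twonorm{\vct{w}-{\sf ST}(\vct{w};\tau)}^2$ converges to $\operatorname{\mathbb{E}}[\min(W^2,\tau^2)]$ for $W\sim\mathcal{N}(0,\omega^2)$, which should match the erfc expression in the lemma (a sketch with illustrative values):

```python
import numpy as np
from math import erfc, exp, pi, sqrt

rng = np.random.default_rng(1)
n, omega, tau = 1_000_000, 1.5, 0.9                   # illustrative values
w = omega * rng.standard_normal(n)
st = np.sign(w) * np.maximum(np.abs(w) - tau, 0.0)    # soft thresholding ST(w; tau)

empirical = np.mean((w - st) ** 2)                    # -> E[min(W^2, tau^2)]
t = tau / omega
predicted = omega**2 * ((1 - sqrt(2/pi) * t * exp(-t**2 / 2))
                        + (t**2 - 1) * erfc(t / sqrt(2)))
```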
Plugging the above lemma in \eqref{eq018} we arrive at
\begin{align}
\label{eq019}
\max_{0\le\beta\le K_\beta}\sup_{\gamma,\tau_h\ge0}\;\;\min_{0\le \alpha\le K_\alpha}\;\;\min_{\tau_g\ge 0}\;\; \quad D(\alpha,\beta, \gamma,\tau_h,\tau_g)\,,
\end{align}
where
\begin{align}\label{eq019-2}
D(\alpha,\beta, \gamma,\tau_h,\tau_g)=&\frac{\delta\beta}{2(\tau_g+\beta)} \left(\alpha^2+\sigma^2\right)\nonumber\\
&+\delta \mathbb{1}_{\big\{\gamma(\tau_g+\beta)>\sqrt{\frac{2}{\pi}}\delta\varepsilon\beta\sqrt{\alpha^2+\sigma^2}\big\}}\frac{\beta^2(\alpha^2+\sigma^2)}{2\tau_g(\tau_g+\beta)}\left({\rm erf}\left(\frac{\tau_*}{\sqrt{2}}\right)-\frac{\gamma(\tau_g+\beta)}{\delta\varepsilon\beta\sqrt{\alpha^2+\sigma^2}}\tau_*\right)\nonumber\\
&-\frac{\alpha}{2\tau_h}(\gamma^2 +\beta^2)+ \gamma\sqrt{\frac{\alpha^2\beta^2}{\tau_h^2}+V^2} -\frac{\alpha\tau_h}{2}+\frac{\beta\tau_g}{2}
\end{align}
and $\tau_*$ is the unique solution to
\begin{align}\label{eq:tau*}
\frac{\gamma(\tau_g+\beta)}{\delta\varepsilon\beta\sqrt{\alpha^2+\sigma^2}}-\frac{\beta}{\tau_g}\tau-\tau\cdot {\rm erf}\left(\frac{\tau}{\sqrt{2}}\right)-\sqrt{\frac{2}{\pi}} e^{-\frac{\tau^2}{2}}=0
\end{align}
This completes the scalarization of the AO.
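Numerically, $\tau_*$ is easy to compute: with $\mu=\tau_g/\beta$ the characteristic equation reads $a-\tau/\mu-\tau\,{\rm erf}(\tau/\sqrt{2})-\sqrt{2/\pi}\,e^{-\tau^2/2}=0$, whose left-hand side is strictly decreasing in $\tau$ (its derivative is $-1/\mu-{\rm erf}(\tau/\sqrt{2})$, as the two exponential contributions cancel), so bisection finds the unique root whenever the value at $\tau=0$ is positive. A sketch with illustrative inputs:

```python
from math import erf, exp, pi, sqrt

def tau_star(a, mu, hi=100.0, iters=200):
    """Unique root of a - t/mu - t*erf(t/sqrt(2)) - sqrt(2/pi)*exp(-t^2/2) = 0,
    assuming a > sqrt(2/pi) so the left-hand side is positive at t = 0."""
    F = lambda t: a - t/mu - t*erf(t/sqrt(2)) - sqrt(2/pi)*exp(-t**2/2)
    lo = 0.0
    for _ in range(iters):            # bisection: F is strictly decreasing
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if F(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

a, mu = 2.0, 1.5                      # illustrative values with a > sqrt(2/pi)
tau = tau_star(a, mu)
residual = a - tau/mu - tau*erf(tau/sqrt(2)) - sqrt(2/pi)*exp(-tau**2/2)
```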
\begin{remarks} {\bf (Convergence analysis).} In the above we showed the pointwise convergence of the objective function in~\eqref{6some} to the function $D$ given by~\eqref{eq019-2}. However, what is required in this framework is (local) uniform convergence, so that the minimax solution of the objective function in~\eqref{6some} also converges to the minimax solution of the AO problem \eqref{eq019-2}. This can be shown by following arguments similar to those in \cite[Lemma A.5]{thrampoulidis2018precise}, which is essentially based on a result known as the ``convexity lemma'' in the literature (see e.g.\ \cite[Lemma 7.75]{StatDecision}), by which pointwise convergence of convex functions implies uniform convergence on compact subsets.
\end{remarks}
\subsection{Uniqueness of the solution of the AO problem}
As we discussed after Equation~\eqref{eq:f}, the function $f(\bv;\gamma)$ is convex in $\bv$. Furthermore, we wrote \eqref{eq10} (the part of the objective that depends on $\bv$) in terms of the Moreau envelope $\frac{1}{p}e_f(\vct{\omega}-\alpha \vct{g}; \tfrac{\tau_g}{\beta})$, which, as $n\to \infty$, converges to the \emph{expected Moreau envelope}. Now, by the result of~\cite[Lemma 4.4]{thrampoulidis2018precise}, the expected Moreau envelope of a convex function is \emph{strictly} convex (without requiring any strong or strict convexity assumption
on the function itself). Therefore, the convexity-concavity property discussed after~\eqref{6some} is preserved after taking the limit, and the AO objective $D(\alpha,\beta, \gamma,\tau_h,\tau_g)$ is jointly strictly convex in $(\alpha, \tau_g)$ and jointly concave in $(\beta, \gamma, \tau_h)$.
We next note that $\sup_{\beta, \gamma, \tau_h} D(\alpha, \beta,\gamma,\tau_h,\tau_g)$ is strictly convex in $(\alpha,\tau_g)$. This follows from the fact that if $f(\vct{x},\boldsymbol{y})$ is strictly convex in $\vct{x}$, then $\sup_{\boldsymbol{y}} f(\vct{x},\boldsymbol{y})$ is also strictly convex in $\vct{x}$. We next use \cite[Lemma C.5]{thrampoulidis2018precise} to conclude that $\inf_{\tau_g}\sup_{\beta, \gamma, \tau_h} D(\alpha, \beta,\gamma,\tau_h,\tau_g)$ is strictly convex in $\alpha >0$. Therefore, its minimizer over $\alpha\ge0$ is unique, which completes the proof.
\subsection{Proofs for fundamental tradeoffs}
\label{funpf}
\subsubsection{Proof of Lemma \ref{lem:SR-AR}}
\label{lem:SR-ARpf}
We have
\begin{align}
{\sf SR}({\widehat{\boldsymbol{\theta}}}): = \frac{1}{p}\operatorname{\mathbb{E}}\Big[(y-\<\vct{x},{\widehat{\boldsymbol{\theta}}}\>)^2 \Big] = \frac{1}{p}\operatorname{\mathbb{E}}\Big[(w-\<\vct{x},{\widehat{\boldsymbol{\theta}}}-{\boldsymbol{\theta}}_0\>)^2 \Big] =
\frac{\sigma_0^2}{p} + \frac{1}{p} \twonorm{{\widehat{\boldsymbol{\theta}}}-{\boldsymbol{\theta}}_0}^2.
\end{align}
To characterize ${\sf AR}({\widehat{\boldsymbol{\theta}}})$, note that by following a similar argument as in Section~\ref{step1}, the solution of the problem
\[
\max_{\twonorm{\bdelta}\le \eps_{{\rm test}}} (y-\<\vct{x}+\bdelta,{\widehat{\boldsymbol{\theta}}}\>)^2
\]
is given by
\[\bdelta = -\eps_{{\rm test}} \sgn{ y-\<\vct{x},{\widehat{\boldsymbol{\theta}}}\>}\frac{{\widehat{\boldsymbol{\theta}}}}{\twonorm{{\widehat{\boldsymbol{\theta}}}}}\,.\]
Therefore the adversarial risk can be written as
\begin{align}
{\sf AR}({\widehat{\boldsymbol{\theta}}}) = \frac{1}{p} \operatorname{\mathbb{E}} \left[\left(|y-\<\vct{x},{\widehat{\boldsymbol{\theta}}}\>| +\eps_{{\rm test}} \twonorm{{\widehat{\boldsymbol{\theta}}}}\right)^2\right]
\end{align}
By substituting for $y = \<\vct{x},{\boldsymbol{\theta}}_0\>+ w$ and expanding the terms, we get
\begin{align}
&\operatorname{\mathbb{E}} \left[\left(|y-\<\vct{x},{\widehat{\boldsymbol{\theta}}}\>| +\eps_{{\rm test}} \twonorm{{\widehat{\boldsymbol{\theta}}}}\right)^2\right] \nonumber\\
& = \operatorname{\mathbb{E}}[\<\vct{x},{\boldsymbol{\theta}}_0- {\widehat{\boldsymbol{\theta}}}\>^2] + \operatorname{\mathbb{E}}[w^2] + \eps_{{\rm test}}^2\twonorm{{\widehat{\boldsymbol{\theta}}}}^2 + 2\eps_{{\rm test}}\twonorm{{\widehat{\boldsymbol{\theta}}}} \operatorname{\mathbb{E}}\left[|\<\vct{x},{\widehat{\boldsymbol{\theta}}}-{\boldsymbol{\theta}}_0\>+w| \right]\nonumber\\
&= \twonorm{{\widehat{\boldsymbol{\theta}}}-{\boldsymbol{\theta}}_0}^2+ \sigma_0^2 +\eps_{{\rm test}}^2\twonorm{{\widehat{\boldsymbol{\theta}}}}^2+ 2\sqrt{\frac{2}{\pi}} \eps_{{\rm test}}\twonorm{{\widehat{\boldsymbol{\theta}}}} \Big(\sigma_0^2+ \twonorm{{\boldsymbol{\theta}}_0-{\widehat{\boldsymbol{\theta}}}}^2 \Big)^{1/2} \,,
\end{align}
where in the first line we used the fact that ${\widehat{\boldsymbol{\theta}}}$ is independent of $\vct{x}$ and $w$ (the test data and the corresponding response) and in the second line we used that $\<\vct{x},{\widehat{\boldsymbol{\theta}}}-{\boldsymbol{\theta}}_0\>+w\sim \normal(0,\sigma_0^2+\twonorm{{\widehat{\boldsymbol{\theta}}}-{\boldsymbol{\theta}}_0}^2)$ since $\vct{x}\sim\normal(0,\mtx{I}_p)$. This completes the proof.
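The only distributional fact used in the last step is $\operatorname{\mathbb{E}}|Z|=\sqrt{2/\pi}\,s$ for $Z\sim\normal(0,s^2)$, which a quick Monte Carlo run confirms (a sketch with illustrative values for the noise level and the gap norm):

```python
import numpy as np
from math import pi, sqrt

rng = np.random.default_rng(2)
sigma0, gap = 0.5, 1.3                # noise sd and ||thetahat - theta0|| (illustrative)
s = sqrt(sigma0**2 + gap**2)          # sd of <x, thetahat - theta0> + w
z = s * rng.standard_normal(2_000_000)

empirical = np.mean(np.abs(z))        # E|Z| by Monte Carlo
predicted = sqrt(2 / pi) * s          # half-normal mean
```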
\subsubsection{Proof of Proposition \ref{pro:hth-Lam}}
\label{pro:hth-Lampf}
By definition,
\[
{\boldsymbol{\theta}}^\lambda = \arg\min_{{\boldsymbol{\theta}}} \;\; \lambda {\sf SR}({\boldsymbol{\theta}}) + {\sf AR}({\boldsymbol{\theta}})
\]
Substituting for ${\sf SR}({\boldsymbol{\theta}})$ and ${\sf AR}({\boldsymbol{\theta}})$ from Lemma~\ref{lem:SR-AR} and scaling the objective by a factor $p$, we get
\[
{\boldsymbol{\theta}}^\lambda = \arg\min_{{\boldsymbol{\theta}}} \;\; (1+\lambda) \left(\sigma_0^2+\twonorm{{\boldsymbol{\theta}}-{\boldsymbol{\theta}}_0}^2\right) +\eps_{{\rm test}}^2\twonorm{{\boldsymbol{\theta}}}^2+ 2\sqrt{\frac{2}{\pi}} \eps_{{\rm test}}\twonorm{{\boldsymbol{\theta}}} \left(\sigma_0^2+ \twonorm{{\boldsymbol{\theta}}_0-{\boldsymbol{\theta}}}^2 \right)^{1/2}
\]
Now by setting the derivative to zero we arrive at the following identity for ${\boldsymbol{\theta}}^\lambda$:
\begin{align}
(1+\lambda)({\boldsymbol{\theta}}^\lambda-{\boldsymbol{\theta}}_0) + \eps_{{\rm test}}^2 {\boldsymbol{\theta}}^\lambda +\sqrt{\frac{2}{\pi}}\eps_{{\rm test}} \left(
\frac{{\boldsymbol{\theta}}^\lambda}{\|{\boldsymbol{\theta}}^\lambda\|} \left(\sigma_0^2+ \twonorm{{\boldsymbol{\theta}}_0-{\boldsymbol{\theta}}^\lambda}^2 \right)^{1/2} + \frac{\|{\boldsymbol{\theta}}^\lambda\|}{\left(\sigma_0^2+ \twonorm{{\boldsymbol{\theta}}_0-{\boldsymbol{\theta}}^\lambda}^2 \right)^{1/2}} ({\boldsymbol{\theta}}^\lambda-{\boldsymbol{\theta}}_0)\right)=0\,.
\end{align}
Adopting the shorthand
$$A^\lambda := \frac{1}{\|{\boldsymbol{\theta}}^\lambda\|}\left(\sigma_0^2+ \twonorm{{\boldsymbol{\theta}}_0-{\boldsymbol{\theta}}^\lambda}^2 \right)^{1/2}$$
and rearranging terms we get
\[
\left[\left(1+\lambda+\sqrt{\frac{2}{\pi}} \frac{\eps_{{\rm test}}}{A^\lambda}\right)+ \left(\eps_{{\rm test}}^2 + \sqrt{\frac{2}{\pi}} \eps_{{\rm test}} A^\lambda\right) \I\right]{\boldsymbol{\theta}}^\lambda
= \left(1+\lambda+\sqrt{\frac{2}{\pi}}\frac{\eps_{{\rm test}}}{A^\lambda} \right) {\boldsymbol{\theta}}_0\,.
\]
The above equation can be written as
\[
{\boldsymbol{\theta}}^\lambda = (1+\gamma_0^\lambda)^{-1} {\boldsymbol{\theta}}_0\,,
\]
with
\[
\gamma_0^\lambda:= \frac{\eps_{{\rm test}}^2 + \sqrt{\frac{2}{\pi}} \eps_{{\rm test}} A^\lambda}{1+\lambda+\sqrt{\frac{2}{\pi}}\frac{\eps_{{\rm test}}}{A^\lambda}}\,,
\]
which is the desired claim.
The proof is complete by noting that
\[
A^\lambda := \frac{1}{\twonorm{{\boldsymbol{\theta}}^\lambda}}\left(\sigma_0^2+ \twonorm{{\boldsymbol{\theta}}_0-{\boldsymbol{\theta}}^\lambda}^2 \right)^{1/2}
= \frac{1}{\twonorm{{\boldsymbol{\theta}}_0}} \left((1+\gamma_0^\lambda)^2\sigma_0^2+ (\gamma_0^\lambda)^2\twonorm{{\boldsymbol{\theta}}_0}^2\right)^{1/2}.
\]
\subsection{Proofs for algorithmic tradeoffs}
\label{algpf}
\subsubsection{Proof of Theorem \ref{thm:main}}
\label{Thm5pf}
We have already proved part (a) in the previous sections. Part (b) also follows readily from \eqref{eq019-2}, as
\begin{align*}
\lim_{n\to\infty}\frac{1}{p} \twonorm{{\widehat{\boldsymbol{\theta}}}^\varepsilon-{\boldsymbol{\theta}}_0}^2= \lim_{n\to\infty} \twonorm{\widehat{z}^{\varepsilon}}^2=\alpha_*^2.
\end{align*}
We thus turn our attention to part (c) and discuss how to calculate $\frac{\twonorm{{\widehat{\boldsymbol{\theta}}}^\varepsilon}}{\sqrt{p}}$ asymptotically. As discussed earlier, using a change of variables of the form $\vct{\theta}=\vct{\theta}_0+\sqrt{p}\vct{z}$, the optimization problem can be written in the form
\[
\min_{{\vct{z}}\in\mathcal{S}_{{\vct{z}}},\vct{v}} \max_{\vct{u}\in\mathcal{S}_{\vct{u}}}\; \frac{1}{\sqrt{p}}\left(\vct{u}^T\boldsymbol{X} \vct{z} - \vct{u}^T \vct{\omega} +\vct{u}^T \vct{v}\right)+\ell(\vct{v};\vct{z})
\]
where
\begin{align*}
\ell(\vct{v};\vct{z}):= \frac{1}{2p}\left(\twonorm{\vct{v}}^2+2\frac{\varepsilon}{\sqrt{p}}\onenorm{\vct{v}}\twonorm{\vct{\theta}_0+\sqrt{p}\vct{z}}+\delta\varepsilon^2\twonorm{\vct{\theta}_0+\sqrt{p}\vct{z}}^2\right)
\end{align*}
As in the previous argument on calculating $\twonorm{{\widehat{\boldsymbol{\theta}}}^\varepsilon-\vct{\theta}_0}$ asymptotically via the AO, we proceed by writing $\ell(\vct{v};\vct{z})$ in terms of its conjugate with respect to $\vct{z}$. That is,
\begin{align*}
\ell(\vct{v};\vct{z})=\sup_{\vct{q}} \vct{q}^T\vct{z}-\widetilde{\ell}(\vct{v};\vct{q})
\end{align*}
As discussed in Section \ref{step2} the conjugate function takes the form
\[
\widetilde{\ell}(\bv;\vct{q}) = -\frac{1}{\sqrt{p}} \vct{q}^T {\boldsymbol{\theta}}_0 + \frac{1}{2\delta p^2} \left(\frac{p}{\varepsilon} \twonorm{\vct{q}} - \onenorm{\bv}\right)_+^2 -\frac{1}{2p} \twonorm{\bv}^2
\]
and the AO problem can therefore be written as (same as \eqref{usethm4})
\begin{align*}
\min_{0\le \alpha\le K_\alpha,\vct{v}}\max_{0\le\beta\le K_\beta}\max_{\vct{q}}\;\; \frac{\beta}{\sqrt{p}} \twonorm{\alpha\vct{g}-\vct{\omega}+\vct{v}} -\alpha\twonorm{\frac{\beta}{\sqrt{p}} \vct{h} +\vct{q}}-\widetilde{\ell}(\bv;\vct{q})
\end{align*}
Our key observation is that the same AO can be used to calculate $\frac{\twonorm{{\widehat{\boldsymbol{\theta}}}^\varepsilon}}{\sqrt{p}}$. To make this precise, we show how to write $\twonorm{{\widehat{\boldsymbol{\theta}}}^\varepsilon}$ in terms of functions of the $\vct{q}$ and $\bv$ that maximize the AO. To this aim, note that $\widehat{\vct{z}}=\frac{1}{\sqrt{p}}\left({\widehat{\boldsymbol{\theta}}}^\varepsilon-\vct{\theta}_0\right)$ obeys
\begin{align*}
\widehat{\vct{z}}&= \underset{\bz}{\arg\max}\text{ } \vct{q}^T \bz - \ell(\bv;\bz)\\
&= \underset{\bz}{\arg\max}\text{ } \vct{q}^T \bz -\frac{1}{2p} \sum_{i=1}^n \left(|v_i| + \frac{\varepsilon}{\sqrt{p}}\twonorm{{\boldsymbol{\theta}}_0 +\sqrt{p}\bz}\right)^2\\
&=\underset{\bz}{\arg\max}\text{ } \vct{q}^T \bz -\frac{1}{2p}\left(\twonorm{\bv}^2 + \frac{2\varepsilon}{\sqrt{p}}\onenorm{\bv} \twonorm{{\boldsymbol{\theta}}_0 +\sqrt{p}\bz}+ \delta \varepsilon^2 \twonorm{{\boldsymbol{\theta}}_0 +\sqrt{p}\bz}^2\right)
\end{align*}
Setting the derivative with respect to $\bz$ to zero, we arrive at
\begin{align}
\vct{q} -\frac{\varepsilon}{p^{3/2}} \onenorm{\bv} \frac{{\boldsymbol{\theta}}_0+\sqrt{p}\widehat{\bz}}{\twonorm{{\boldsymbol{\theta}}_0+\sqrt{p}\widehat{\bz}}} \sqrt{p} - \frac{\delta \varepsilon^2}{p} ({\boldsymbol{\theta}}_0+\sqrt{p}\widehat{\bz}) \sqrt{p} = 0
\end{align}
Therefore
\[
{\boldsymbol{\theta}}_0 +\sqrt{p}\widehat{\bz} = \left(\frac{\varepsilon\onenorm{\bv}}{p\twonorm{{\boldsymbol{\theta}}_0+\sqrt{p}\widehat{\bz}}} + \frac{\delta\varepsilon^2}{\sqrt{p}}\right)^{-1}\vct{q}.
\]
Thus taking Euclidean norm of both sides of the identity we have
\begin{align*}
&\twonorm{{\boldsymbol{\theta}}_0 +\sqrt{p}\widehat{\bz}}\left(\frac{\varepsilon\onenorm{\bv}}{p\twonorm{{\boldsymbol{\theta}}_0+\sqrt{p}\widehat{\bz}}} + \frac{\delta\varepsilon^2}{\sqrt{p}}\right) = \twonorm{\vct{q}}\\
\Rightarrow\quad&\twonorm{{\boldsymbol{\theta}}_0 +\sqrt{p}\widehat{\bz}} = \frac{\twonorm{\vct{q}} - \frac{\varepsilon\onenorm{\bv}}{p}}{\frac{\delta\varepsilon^2}{\sqrt{p}}} = \frac{\sqrt{p}}{\delta \varepsilon^2}\twonorm{\vct{q}} - \frac{1}{\delta\varepsilon\sqrt{p}}\onenorm{\bv}.
\end{align*}
The latter holds as long as $\frac{\sqrt{p}}{\delta \varepsilon^2}\twonorm{\vct{q}} \ge \frac{1}{\delta\varepsilon\sqrt{p}}\onenorm{\bv}$. When $\frac{\sqrt{p}}{\delta \varepsilon^2}\twonorm{\vct{q}} < \frac{1}{\delta\varepsilon\sqrt{p}}\onenorm{\bv}$, it is easy to verify that the objective value is smaller than or equal to $-\frac{\vct{q}^T{\boldsymbol{\theta}}_0}{\sqrt{p}} - \frac{1}{2p} \twonorm{\bv}^2$ and therefore $\widehat{\bz} = -{\boldsymbol{\theta}}_0/\sqrt{p}$, which in turn implies that $\twonorm{{\widehat{\boldsymbol{\theta}}}^\varepsilon}=\twonorm{{\boldsymbol{\theta}}_0+\sqrt{p}\widehat{\bz}} = 0$. We thus have
\begin{align}
\label{tmpkr}
\frac{1}{\sqrt{p}}\twonorm{{\widehat{\boldsymbol{\theta}}}^\varepsilon} = \frac{1}{\sqrt{p}} \twonorm{\sqrt{p}\widehat{\bz}+{\boldsymbol{\theta}}_0} = \frac{1}{\sqrt{p}}\left(\frac{\sqrt{p}}{\delta \varepsilon^2}\twonorm{\vct{q}} - \frac{1}{\delta\varepsilon\sqrt{p}}\onenorm{\bv}\right)_+
= \frac{1}{\delta\varepsilon p} \left(\frac{p}{\varepsilon}\twonorm{\vct{q}} - \onenorm{\bv}\right)_+.
\end{align}
So, to get the asymptotic value of $\frac{1}{\sqrt{p}}\twonorm{{\widehat{\boldsymbol{\theta}}}^\varepsilon}$, we can simply look at $\frac{1}{\delta\varepsilon p} \left(\frac{p}{\varepsilon}\gamma - \onenorm{\bv}\right)_+$ with $\vct{v}$ and $\gamma=\tn{\vct{q}}$ the optimal solutions of the AO. Note that, based on the argument in Lemma \ref{Glem}, for this optimal solution of $\vct{v}$ we have
\begin{align}
\label{tmpkr2}
\underset{n\rightarrow +\infty}{\lim}\frac{1}{n^2}\left(\frac{p}{\varepsilon}\gamma-\onenorm{\vct{v}}\right)_+^2=\frac{\omega^2}{(\mu+1)^2}\left(\frac{\gamma(\mu+1)}{\delta\varepsilon\omega}+\tau^*\cdot\text{erfc}\left(\frac{1}{\sqrt{2}}\tau^*\right)-\sqrt{\frac{2}{\pi}} e^{-\frac{(\tau^*)^2}{2}}\right)_+^2
\end{align}
with $\omega=\sqrt{\alpha^2+\sigma^2}$, $\mu=\frac{\tau_g}{\beta}$, $\tau^*:=\tau^*\left(\frac{\gamma(\mu+1)}{\delta\varepsilon\omega},\mu\right)$ and $\tau^*(a,\mu)$ is the unique solution to
\begin{align}\label{eq:tau-star}
a-\frac{\mu+1}{\mu}\tau+\tau\cdot\text{erfc}\left(\frac{\tau}{\sqrt{2}}\right)-\sqrt{\frac{2}{\pi}} e^{-\frac{\tau^2}{2}}=0
\end{align}
Therefore, squaring \eqref{tmpkr} and plugging in \eqref{tmpkr2} we conclude that
\begin{align*}
\lim_{p\to\infty} \frac{1}{p} \twonorm{{\widehat{\boldsymbol{\theta}}}^\varepsilon}^2&=\lim_{p\to\infty} \frac{1}{\delta^2\varepsilon^2 p^2} \left(\frac{p}{\varepsilon}\gamma - \onenorm{\bv}\right)_+^2\\
&= \frac{1}{\varepsilon^2}\cdot \lim_{n\to\infty} \frac{1}{n^2} \left(\frac{p}{\varepsilon}\gamma - \onenorm{\bv}\right)_+^2\\
&=\frac{\omega^2}{\varepsilon^2(\mu+1)^2}\left(\frac{\gamma(\mu+1)}{\delta\varepsilon\omega}+\tau^*\cdot\text{erfc}\left(\frac{1}{\sqrt{2}}\tau^*\right)-\sqrt{\frac{2}{\pi}} e^{-\frac{(\tau^*)^2}{2}}\right)_+^2\\
&=\frac{\omega^2}{\varepsilon^2(\mu+1)^2}\left(\frac{\gamma(\mu+1)}{\delta\varepsilon\omega}+\frac{\mu+1}{\mu}\tau^*-\frac{\gamma(\mu+1)}{\delta \varepsilon\omega}\right)_+^2\\
&=\frac{\omega^2}{\varepsilon^2(\mu+1)^2}\left(\frac{\mu+1}{\mu}\tau^*\right)_+^2\\
&=\frac{\omega^2\tau_*^2}{\varepsilon^2\mu^2}\\
&=\frac{(\alpha_*^2+\sigma^2)\tau_*^2}{\varepsilon^2\mu^2}.
\end{align*}
\subsubsection{Proof of Corollary \ref{cor6}}
\label{cor6pf}
The result follows readily from Lemma~\ref{lem:SR-AR} along with Theorem~\ref{thm:main} (Parts (b) and (c)).
\subsubsection{Proof of Theorem \ref{delta_limit}}
\label{delta_limitpf}
We start by analyzing $\lim_{p\to\infty}{\sf SR}({\boldsymbol{\theta}}^\lambda)$ and $\lim_{p\to\infty} {\sf AR}({\boldsymbol{\theta}}^\lambda)$. Using Lemma~\ref{lem:SR-AR}, we have
\begin{align}
\lim_{p\to\infty} {\sf SR}({\boldsymbol{\theta}}^\lambda)& = \sigma^2+ \lim_{p\to\infty} \frac{1}{p} \twonorm{{\boldsymbol{\theta}}^\lambda-{\boldsymbol{\theta}}_0}^2\nonumber\\
&= \sigma^2+ \lim_{p\to\infty} \frac{1}{p} \twonorm{{\boldsymbol{\theta}}_0}^2 \left(\frac{\gamma_0^\lambda}{1+\gamma_0^\lambda}\right)^2\nonumber\\
&= \sigma^2+ \left(\frac{\gamma_0^\lambda V}{1+\gamma_0^\lambda}\right)^2. \label{eq:SRL}
\end{align}
Likewise,
\begin{align}
\lim_{p\to\infty}{\sf AR}({\boldsymbol{\theta}}^\lambda) &= \sigma^2+ V^2 \left(\frac{\gamma_0^\lambda}{1+\gamma_0^\lambda}\right)^2 + \eps_{{\rm test}}^2 \frac{V^2}{(1+\gamma_0^\lambda)^2} + 2\sqrt{\frac{2}{\pi}} \frac{\eps_{{\rm test}} V}{1+\gamma_0^\lambda} \left(\sigma^2+ \left(\frac{\gamma_0^\lambda V}{1+\gamma_0^\lambda}\right)^2\right)^{1/2}, \label{eq:ARL}
\end{align}
with $\gamma_0^\lambda$ the fixed point of the following two equations:
\begin{align}\label{eq:my0}
\gamma_0^\lambda = \frac{\eps_{{\rm test}}^2+\sqrt{\frac{2}{\pi}} \eps_{{\rm test}} A^\lambda}{1+\lambda + \sqrt{\frac{2}{\pi}} \frac{\eps_{{\rm test}}}{A^\lambda}}\,,\quad
A^\lambda = \frac{1}{V} \left((1+\gamma_0^\lambda)^2\sigma^2+ (\gamma_0^\lambda)^2 V^2\right)^{1/2}\,.
\end{align}
We next analyze $\lim_{\delta\to\infty} \lim_{n\to\infty}{\sf SR}({\widehat{\boldsymbol{\theta}}}^\varepsilon)$ and $\lim_{\delta\to\infty} \lim_{n\to\infty}{\sf AR}({\widehat{\boldsymbol{\theta}}}^\varepsilon)$. By using Corollary~\ref{cor6}, we have
\begin{align}
\lim_{\delta\to\infty}\lim_{n\to\infty} {\sf SR}({\widehat{\boldsymbol{\theta}}}^\varepsilon) &= \lim_{\delta\to\infty} \left(\sigma^2 + \alpha_*^2\right)\,, \label{eq:SReps}\\
\lim_{\delta\to\infty} \lim_{n\to\infty} {\sf AR}({\widehat{\boldsymbol{\theta}}}^\varepsilon) &= \lim_{\delta\to\infty} \left\{\sigma^2+\alpha_*^2 + \eps_{{\rm test}}^2 (\alpha_*^2+\sigma^2) \left(\frac{\beta_*\tau_*}{\varepsilon \tau_{g*}}\right)^2
+2\sqrt{\frac{2}{\pi}} \frac{\eps_{{\rm test}}\beta_*\tau_*}{\varepsilon \tau_{g*}} (\sigma^2+ \alpha_*^2)\right\}\,.\label{eq:AReps}
\end{align}
Therefore, we need to study the solution of the convex-concave minimax optimization~\eqref{eq019-2} in the limit $\delta \to \infty$. It is straightforward to see that as $\delta\to \infty$, the indicator in~\eqref{eq019-2} is active and hence it reduces to
\begin{align}
D(\alpha,\beta, \gamma,\tau_h,\tau_g)=&\frac{\delta\beta}{2(\tau_g+\beta)} \left(\alpha^2+\sigma^2\right)
+\frac{\delta\beta^2(\alpha^2+\sigma^2)}{2\tau_g(\tau_g+\beta)}{\rm erf}\left(\frac{\tau_*}{\sqrt{2}}\right)\nonumber\\
&-\frac{\alpha}{2\tau_h} \gamma^2 +\gamma \left(\sqrt{\frac{\alpha^2\beta^2}{\tau_h^2}+ V^2} - \frac{\beta \tau_* \sqrt{\alpha^2+\sigma^2}}{\varepsilon\tau_g} \right) \nonumber\\
&-\frac{\alpha}{2\tau_h}\beta^2 -\frac{\alpha\tau_h}{2}+\frac{\beta\tau_g}{2}\,.\label{eq:newD}
\end{align}
Solving for $\gamma$, we obtain
\[
\gamma_* = \frac{\tau_h}{\alpha} \left(\sqrt{\frac{\alpha^2\beta^2}{\tau_h^2} + V^2} - \frac{\beta\tau_* \sqrt{\alpha^2+\sigma^2}}{\varepsilon\tau_g}\right)\,.
\]
Since $\gamma(\tau_g+\beta) > \sqrt{\frac{2}{\pi}} \delta \varepsilon \beta \sqrt{\alpha^2+\sigma^2}$, we have $\gamma\to \infty$ as $\delta \to \infty$, and by the above equation for $\gamma_*$, we obtain that $\tau_h\to \infty$. Therefore,
\begin{align}\label{eq:my-gamma}
\gamma_* \to \frac{\tau_h}{\alpha} \left(V - \frac{\beta\tau_* \sqrt{\alpha^2+\sigma^2}}{\varepsilon\tau_g}\right)\,.
\end{align}
In addition, $\tau_*\to 0$ as $\delta \to \infty$. Writing the Taylor expansion of the characteristic equation of $\tau_*$ as per~\eqref{eq:tau*}, we get
\begin{align}\label{eq:my-gamma2}
\frac{\gamma(\tau_g+\beta)}{\beta\delta\varepsilon\sqrt{\alpha^2+\sigma^2}} = \sqrt{\frac{2}{\pi}} + \frac{\beta\tau_*}{\tau_g} + O(\tau_*^2)\,.
\end{align}
We adopt the shorthands $\omega:=\sqrt{\alpha^2+\sigma^2}$ and $\mu := \frac{\tau_g}{\beta}$.
Combining \eqref{eq:my-gamma2} with \eqref{eq:my-gamma} yields
\[
\frac{\tau_h (\mu+1)}{\alpha \delta \varepsilon \omega} \left(V - \frac{\tau_* \omega}{\varepsilon\mu}\right) = \sqrt{\frac{2}{\pi}} + \frac{\tau_*}{\mu} + O(\tau_*^2)\,.
\]
Writing the objective $D$ given by~\eqref{eq:newD} in terms of $\omega$, $\mu$, $\tau_h$ and substituting for $\gamma_*$ we arrive at
\begin{align}
D=&\frac{\delta\omega^2}{2(\mu+1)}
+\frac{\delta\omega^2}{2\mu(\mu+1)}{\rm erf}\left(\frac{\tau_*}{\sqrt{2}}\right)\nonumber\\
&+\frac{\tau_h}{2\alpha} \left(V- \frac{\tau_* \omega}{\varepsilon\mu}\right)^2-\frac{\alpha}{2\tau_h}\beta^2-\frac{\alpha\tau_h}{2}+\frac{\beta^2\mu}{2}\,.
\end{align}
Since $\delta, \tau_h\to \infty$, keeping only the dominant terms results in
\begin{align}
D=&\frac{\delta\omega^2}{2(\mu+1)}
+\frac{\delta\omega^2}{2\mu(\mu+1)}{\rm erf}\left(\frac{\tau_*}{\sqrt{2}}\right)+\frac{\tau_h}{2\alpha} \left(V- \frac{\tau_* \omega}{\varepsilon\mu}\right)^2-\frac{\alpha\tau_h}{2}\,,
\end{align}
and by neglecting terms of order $\tau_*^2$ (using ${\rm erf}(\tau_*/\sqrt{2}) = \sqrt{2/\pi}\,\tau_* + O(\tau_*^3)$) we have
\begin{align}
D=&\frac{\delta\omega^2}{2(\mu+1)} \left(1+\sqrt{\frac{2}{\pi}} \frac{\tau_*}{\mu}\right)
+\frac{\tau_h}{2\alpha} \left(V- \frac{\tau_* \omega}{\varepsilon\mu}\right)^2-\frac{\alpha\tau_h}{2}\,.
\end{align}
Setting the derivative of $D$ with respect to $\tau_h$ to zero, we get
\begin{align}\label{eq:myalpha}
\alpha = V- \frac{\tau_*\omega}{\varepsilon \mu}\,.
\end{align}
We next set the derivative of $D$ with respect to $\alpha$ to zero, which implies
\[
\frac{\delta\alpha}{\mu+1} \left(1+\sqrt{\frac{2}{\pi}} \frac{\tau_*}{\mu}\right) -\frac{\tau_h}{2\alpha^2} \left(V- \frac{\tau_* \omega}{\varepsilon\mu}\right)^2 -\frac{\tau_h}{\alpha} \left(V- \frac{\tau_* \omega}{\varepsilon\mu}\right)\frac{\tau_*}{\varepsilon\mu} \frac{\alpha}{\omega} - \frac{\tau_h}{2} =0\,.
\]
Plugging in for $\alpha$ from~\eqref{eq:myalpha} we obtain
\begin{align}\label{eq:myalpha2}
\alpha = \varepsilon \omega \frac{\sqrt{\frac{2}{\pi}}+\frac{\tau_*}{\mu}}{1+\sqrt{\frac{2}{\pi}}\frac{\tau_*}{\mu}} \left(1+ \frac{\tau_*\alpha}{\varepsilon \mu\omega}\right)\,.
\end{align}
Defining $A^\varepsilon: = \frac{\varepsilon \mu}{\tau_* }$ and $\gamma_0^\varepsilon:= \frac{\varepsilon \mu V}{\tau_*\omega} - 1$, the above two equations~\eqref{eq:myalpha}, \eqref{eq:myalpha2} imply that
\begin{align}
\alpha &= V - \frac{\omega}{A^\varepsilon} =\frac{\gamma_0^\varepsilon V}{1+\gamma_0^\varepsilon}, \label{eq:my1}\\
\frac{\alpha\varepsilon}{\omega} & = \varepsilon^2 \frac{1+\sqrt{\frac{2}{\pi}}\frac{\mu}{\tau_*} }{\frac{\mu}{\tau_*}+\sqrt{\frac{2}{\pi}} } \left( 1+ \frac{\tau_*\alpha}{\varepsilon\mu\omega}\right) = \frac{\varepsilon^2+\sqrt{\frac{2}{\pi}}\varepsilon A^\varepsilon}{\frac{A^\varepsilon}{\varepsilon}+\sqrt{\frac{2}{\pi}} }\left( 1+ \frac{\tau_*\alpha}{\varepsilon\mu\omega}\right). \label{eq:my2}
\end{align}
From~\eqref{eq:my1} we obtain
\begin{align}\label{eq:my3}
\frac{1}{V}\left((1+\gamma_0^\varepsilon)^2\sigma^2 + (\gamma_0^\varepsilon)^2 V^2\right)^{1/2} = \frac{1+\gamma_0^\varepsilon}{V} \omega = A^\varepsilon.
\end{align}
In addition, from \eqref{eq:my1} and \eqref{eq:my2} we have
\begin{align}\label{eq:my4}
\gamma_0^\varepsilon = \frac{VA^\varepsilon}{\omega}-1 = \frac{A^\varepsilon \alpha}{ \omega}
=\frac{\varepsilon^2+\sqrt{\frac{2}{\pi}}\varepsilon A^\varepsilon}{1+\sqrt{\frac{2}{\pi}} \frac{\varepsilon}{A^\varepsilon}} \left(1+ \frac{\gamma_0^\varepsilon}{(A^\varepsilon)^2}\right).
\end{align}
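Note that \eqref{eq:my4} is linear in $\gamma_0^\varepsilon$, and solving it gives the closed form $\gamma_0^\varepsilon = (\varepsilon^2+\sqrt{2/\pi}\,\varepsilon A^\varepsilon)/(1-(\varepsilon/A^\varepsilon)^2)$: the $\sqrt{2/\pi}\,\varepsilon/A^\varepsilon$ terms cancel. The sketch below checks this algebra numerically for a few arbitrary values of $\varepsilon$ and $A^\varepsilon$.

```python
import math

# c = sqrt(2/pi), the constant appearing throughout the derivation
c = math.sqrt(2.0 / math.pi)

def gamma_from_my4(eps, A):
    # eq. (my4): gamma = K * (1 + gamma/A^2), K = (eps^2 + c*eps*A)/(1 + c*eps/A),
    # which is linear in gamma.
    K = (eps**2 + c*eps*A) / (1.0 + c*eps/A)
    return K / (1.0 - K/A**2)

def gamma_closed_form(eps, A):
    return (eps**2 + c*eps*A) / (1.0 - (eps/A)**2)

# The two expressions agree for arbitrary (eps, A) with eps < A.
for eps, A in [(0.1, 1.0), (0.3, 2.0), (0.05, 0.7)]:
    assert math.isclose(gamma_from_my4(eps, A), gamma_closed_form(eps, A),
                        rel_tol=1e-9)
```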
Combining equations~\eqref{eq:my3} and \eqref{eq:my4}, we conclude that $\gamma_0^\varepsilon$ solves the following system of fixed-point equations:
\begin{align}
\gamma_0^\varepsilon = \frac{\varepsilon^2+\sqrt{\frac{2}{\pi}} \varepsilon A^\varepsilon}{1 -(\frac{\varepsilon}{A^\varepsilon})^2} \,,\quad
A^\varepsilon = \frac{1}{V} \left((1+\gamma_0^\varepsilon)^2\sigma^2+ (\gamma_0^\varepsilon)^2 V^2\right)^{1/2}.
\end{align}
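This fixed-point system can be solved numerically by simple iteration; a minimal sketch with illustrative parameter values $\sigma = V = 1$ and $\varepsilon = 0.1$:

```python
import math

c = math.sqrt(2.0 / math.pi)

def solve_fixed_point(eps, sigma, V, n_iter=500):
    # Alternate the two fixed-point equations until convergence.
    gamma = 0.0
    for _ in range(n_iter):
        A = math.sqrt((1.0 + gamma)**2 * sigma**2 + gamma**2 * V**2) / V
        gamma = (eps**2 + c*eps*A) / (1.0 - (eps/A)**2)
    return gamma, A

gamma, A = solve_fixed_point(eps=0.1, sigma=1.0, V=1.0)

# At the fixed point, both equations hold simultaneously.
assert math.isclose(A, math.sqrt((1 + gamma)**2 + gamma**2), rel_tol=1e-9)
assert math.isclose(gamma, (0.1**2 + c*0.1*A) / (1 - (0.1/A)**2), rel_tol=1e-9)
```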
Now consider a fixed $\lambda \geq 0$ and let $\gamma_0^\lambda, A^\lambda$ be defined by~\eqref{eq:my0}. Comparing equations~\eqref{eq:SRL} and \eqref{eq:ARL} with \eqref{eq:SReps} and \eqref{eq:AReps}, we see that in order to prove the statement, it suffices to find a corresponding $\varepsilon \geq 0$ such that
$\gamma_0^\varepsilon = \gamma_0^\lambda$ (note that $\gamma_0^\varepsilon = \gamma_0^\lambda$ implies $A^\varepsilon = A^\lambda$ as well). Such a value of $\varepsilon$ is found from the following equation, obtained by equating $\gamma_0^\varepsilon = \gamma_0^\lambda$ and $A^\varepsilon = A^\lambda$:
$$ \frac{\eps_{{\rm test}}^2+\sqrt{\frac{2}{\pi}} \eps_{{\rm test}} A^\lambda}{1+\lambda + \sqrt{\frac{2}{\pi}} \frac{\eps_{{\rm test}}}{A^\lambda}} =
\frac{\varepsilon^2+\sqrt{\frac{2}{\pi}} \varepsilon A^\lambda}{1 -(\frac{\varepsilon}{A^\lambda})^2}
\,. $$
Rearranging terms, we arrive at
$$ \varepsilon^2 \left( 1+\lambda + \sqrt{\frac{2}{\pi}} \frac{\eps_{{\rm test}}}{A^\lambda} +\left(\frac{\eps_{{\rm test}}}{A^\lambda}\right)^2 + \sqrt{\frac{2}{\pi}} \frac{\eps_{{\rm test}}}{A^\lambda} \right) + \varepsilon \sqrt{\frac{2}{\pi}} \left( A^\lambda (1 + \lambda) +\sqrt{\frac{2}{\pi}} \eps_{{\rm test}} \right) - \left(\eps_{{\rm test}}^2 + \sqrt{\frac{2}{\pi}} \eps_{{\rm test}} A^\lambda \right) = 0\,.$$
The claim now follows by noting that the above equation is quadratic in $\varepsilon$ with a positive leading coefficient and a negative constant term, so the product of its roots is negative and it always admits exactly one positive solution; this gives the value of $\varepsilon$ in terms of $\lambda$.
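The quadratic can be solved explicitly. In the sketch below, the values of $\lambda$, $\eps_{{\rm test}}$ and $A^\lambda$ are arbitrary placeholders (not computed from~\eqref{eq:my0}), chosen only to illustrate that the positive root exists and indeed equates the two sides of the matching equation above.

```python
import math

c = math.sqrt(2.0 / math.pi)
# Placeholder values, for illustration only
lam, eps_test, A_lam = 0.5, 0.3, 1.2

# Coefficients of the quadratic a*eps^2 + b*eps + cc = 0 displayed above
aa = 1 + lam + c*eps_test/A_lam + (eps_test/A_lam)**2 + c*eps_test/A_lam
bb = c * (A_lam*(1 + lam) + c*eps_test)
cc = -(eps_test**2 + c*eps_test*A_lam)

# cc < 0 and aa > 0, so there is exactly one positive root.
eps = (-bb + math.sqrt(bb**2 - 4*aa*cc)) / (2*aa)
assert eps > 0

# The root equates the two sides of the matching equation.
lhs = (eps_test**2 + c*eps_test*A_lam) / (1 + lam + c*eps_test/A_lam)
rhs = (eps**2 + c*eps*A_lam) / (1 - (eps/A_lam)**2)
assert math.isclose(lhs, rhs, rel_tol=1e-9)
```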
\section{Further Related Work}
\label{related}
The trade-off between standard and adversarial accuracy has been studied recently in \cite{DBLP:conf/iclr/MadryMSTV18, DBLP:conf/nips/SchmidtSTTM18,tsipras2018robustness,raghunathan2019adversarial, DBLP:conf/icml/ZhangYJXGJ19, pydi2019adversarial}. A central question is whether the standard and robust objectives are fundamentally in conflict: is there a predictor that can achieve both optimal standard accuracy and optimal robust accuracy when the number of training samples is sufficiently large? In this regard, \cite{tsipras2018robustness,DBLP:conf/icml/ZhangYJXGJ19} construct learning problems where the optimal robust accuracy is fundamentally in conflict with the standard accuracy, i.e. no predictor can achieve both optimal standard and robust accuracy even in the infinite-data limit. However, there are clearly many natural learning problems in which a predictor with optimal standard and high robust accuracy exists (hence the two objectives are not in conflict). One such instance has been studied in \cite{raghunathan2019adversarial}, suggesting that the inconsistency between adversarial and standard accuracy may be due to an insufficient number of training samples. In contrast, in this paper we have shown that a fundamental trade-off exists between the two accuracies in linear regression even with limited~samples.
Another line of work considers the trade-off between standard and robust accuracy as the capacity of the learning model varies \cite{nakkiran2019adversarial,DBLP:conf/nips/GaoCLHWL19}. In particular, \cite{nakkiran2019adversarial} provides classification problems in which simple classifiers achieve high standard accuracy, but high robust accuracy is only attainable by more complex classifiers.
The notions of capacity and complexity in the presence of adversarially perturbed inputs (a.k.a. adversarially robust learnability) have also been studied in a series of interesting papers \cite{DBLP:conf/icml/BubeckLPR19, DBLP:conf/nips/CullinaBM18, DBLP:journals/corr/abs-1810-09519, DBLP:conf/icml/YinRB19, DBLP:conf/colt/MontasserHS19}. In particular, \cite{DBLP:conf/colt/MontasserHS19} show that any hypothesis class with finite VC dimension is adversarially-robust PAC learnable in the $\ell_\infty$ metric using modified (improper) learning rules. Finally, let us point out that under specific high-dimensional data distributions (e.g. isotropic Gaussian), any classifier becomes highly vulnerable to adversarial $\ell_2$ perturbations \cite{gilmer2018adversarial,DBLP:conf/iclr/ShafahiHSFG19}, and thus the adversarial error approaches $1$ as the dimension grows. This phenomenon does not occur in our regression setting, as the regression loss varies smoothly, in contrast to the classification error.